Continual Learning with Dynamic Sparse Training: Exploring Algorithms for Effective Model Updates

08/28/2023
by Murat Onur Yildirim, et al.

Continual learning (CL) refers to the ability of an intelligent system to sequentially acquire and retain knowledge from a stream of data with as little computational overhead as possible. To this end, regularization, replay, architecture, and parameter isolation approaches have been introduced in the literature. Parameter isolation with a sparse network allows distinct parts of the neural network to be allocated to different tasks, while still permitting parameters to be shared between tasks when they are similar. Dynamic Sparse Training (DST) is a prominent way to find these sparse networks and isolate them for each task. This paper is the first empirical study of the effect of different DST components under the CL paradigm, filling a critical research gap and shedding light on the optimal configuration of DST for CL, if one exists. We perform a comprehensive study of various DST components to find the best topology per task on the well-known CIFAR100 and miniImageNet benchmarks in a task-incremental CL setup, since our primary focus is to evaluate the performance of various DST criteria rather than the process of mask selection. We find that, at low sparsity levels, Erdos-Renyi Kernel (ERK) initialization utilizes the backbone more efficiently and makes it possible to learn increments of tasks effectively. At high sparsity levels, however, uniform initialization demonstrates more reliable and robust performance. The effect of the growth strategy depends on the chosen initialization strategy and the extent of sparsity. Finally, adaptivity within DST components is a promising direction toward better continual learners.
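A central comparison in the study is between layer-wise sparsity allocations, namely Erdos-Renyi Kernel (ERK) and uniform initialization. The sketch below is not taken from the paper; it uses hypothetical function names (erk_densities, uniform_densities) and follows the commonly used ERK formulation, in which a layer's density is proportional to the sum of its dimensions divided by their product, so narrow layers stay denser than wide ones, while uniform initialization simply gives every layer the same density.

import numpy as np

def erk_densities(layer_shapes, global_density):
    """Distribute a global weight budget across layers with the
    Erdos-Renyi-Kernel rule: a layer's density is proportional to
    sum(shape) / prod(shape), so small or thin layers stay denser than
    large, wide ones. Layers whose allocation would exceed 1.0 are made
    fully dense and the remaining budget is re-distributed."""
    n_params = np.array([np.prod(s) for s in layer_shapes], dtype=float)
    raw = np.array([np.sum(s) / np.prod(s) for s in layer_shapes])
    dense = np.zeros(len(layer_shapes), dtype=bool)
    while True:
        # Budget left after the layers already forced to be dense.
        budget = global_density * n_params.sum() - n_params[dense].sum()
        eps = budget / (raw[~dense] * n_params[~dense]).sum()
        dens = eps * raw
        overflow = (~dense) & (dens > 1.0)
        if not overflow.any():
            break
        dense |= overflow
    dens[dense] = 1.0
    return dens

def uniform_densities(layer_shapes, global_density):
    """Baseline: every layer keeps the same fraction of its weights."""
    return np.full(len(layer_shapes), global_density)

As an illustration, for a small convolutional backbone with shapes [(64, 3, 3, 3), (128, 64, 3, 3), (256, 128, 3, 3), (100, 256)] and a global density of 0.2, the ERK rule above keeps the first convolution and the classifier fully dense while sparsifying the wide middle layers, whereas the uniform allocation keeps 20% of the weights in every layer regardless of its shape.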
