Balancing Stability and Plasticity through Advanced Null Space in Continual Learning

07/25/2022
by Yajing Kong, et al.

Continual learning is a learning paradigm in which tasks are learned sequentially under resource constraints. Its key challenge is the stability-plasticity dilemma: it is difficult to simultaneously maintain the stability needed to prevent catastrophic forgetting of old tasks and the plasticity needed to learn new tasks well. In this paper, we propose a new continual learning approach, Advanced Null Space (AdNS), which balances stability and plasticity without storing any data from previous tasks. Specifically, to obtain better stability, AdNS uses low-rank approximation to construct a novel null space and projects the gradient onto it, preventing interference with past tasks. To control how the null space is generated, we introduce a non-uniform constraint strength that further reduces forgetting. Furthermore, we present a simple but effective method, intra-task distillation, to improve performance on the current task. Finally, we show theoretically that the null space plays a key role in both plasticity and stability. Experimental results show that the proposed method achieves better performance than state-of-the-art continual learning approaches.
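To make the stability mechanism concrete, below is a minimal sketch of null-space gradient projection: the covariance of input features from previous tasks is truncated via SVD (a low-rank approximation), and a layer's gradient is projected onto the orthogonal complement of the dominant feature directions before the optimizer step. This is an illustrative reconstruction under stated assumptions, not the exact AdNS algorithm; the function name project_to_null_space and the rank_ratio energy threshold are hypothetical stand-ins for the paper's non-uniform constraint strength.

import torch

def project_to_null_space(grad, feature_cov, rank_ratio=0.95):
    """Project a linear layer's gradient onto the approximate null space of
    the (uncentered) covariance of input features seen on previous tasks.

    Updates restricted to this subspace leave the layer's responses to old
    features approximately unchanged, which is the stability mechanism shared
    by null-space projection methods.

    grad:        gradient of the weight matrix, shape (out_dim, in_dim)
    feature_cov: past-task feature covariance, shape (in_dim, in_dim)
    rank_ratio:  fraction of spectral energy treated as "occupied" by old
                 tasks (illustrative knob, not the paper's exact criterion)
    """
    # Low-rank approximation: SVD of the symmetric covariance gives
    # directions ordered by decreasing spectral energy.
    U_full, S, _ = torch.linalg.svd(feature_cov)

    # Keep the leading directions that capture `rank_ratio` of the energy;
    # their orthogonal complement approximates the null space of old features.
    energy = torch.cumsum(S, dim=0) / S.sum()
    k = int(torch.searchsorted(energy, torch.tensor(rank_ratio))) + 1
    U = U_full[:, :k]  # basis of the "occupied" subspace, (in_dim, k)

    # Remove the gradient component lying in the occupied subspace, i.e.
    # g <- g (I - U U^T); what remains lives in the approximate null space.
    return grad - grad @ U @ U.T


# Usage: project per-layer gradients before the optimizer step.
if __name__ == "__main__":
    torch.manual_seed(0)
    feats = torch.randn(1000, 64)            # stand-in for old-task features
    cov = feats.T @ feats / feats.shape[0]   # uncentered feature covariance
    g = torch.randn(128, 64)                 # a layer's raw gradient
    g_proj = project_to_null_space(g, cov)
    # The projected gradient interferes much less with old-feature responses.
    print((g @ feats.T).norm(), (g_proj @ feats.T).norm())

In this sketch, raising rank_ratio constrains updates more tightly (more stability, less plasticity); this is the kind of trade-off that, per the abstract, the non-uniform constraint strength is introduced to control.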

Related research

Training Networks in Null Space of Feature Covariance for Continual Learning (03/12/2021)
In the setting of continual learning, a network is trained on a sequence...

Towards Better Plasticity-Stability Trade-off in Incremental Learning: A simple Linear Connector (10/15/2021)
Plasticity-stability dilemma is a main problem for incremental learning,...

Exploring Example Influence in Continual Learning (09/25/2022)
Continual Learning (CL) sequentially learns new tasks like human beings,...

SOLA: Continual Learning with Second-Order Loss Approximation (06/19/2020)
Neural networks have achieved remarkable success in many cognitive tasks...

Continual Learning with Scaled Gradient Projection (02/02/2023)
In neural networks, continual learning results in gradient interference ...

Self-Attention Meta-Learner for Continual Learning (01/28/2021)
Continual learning aims to provide intelligent agents capable of learnin...

Create and Find Flatness: Building Flat Training Spaces in Advance for Continual Learning (09/20/2023)
Catastrophic forgetting remains a critical challenge in the field of con...
