Center Loss Regularization for Continual Learning

10/21/2021
by Kaustubh Olpadkar, et al.

The ability to learn different tasks sequentially is essential to the development of artificial intelligence. Neural networks generally lack this capability, the major obstacle being catastrophic forgetting: as incrementally available information from non-stationary data distributions is continually acquired, it disrupts what the model has already learned. Our approach remembers old tasks by projecting the representations of new tasks close to those of old tasks while keeping the decision boundaries unchanged. We employ the center loss as a regularization penalty that constrains new tasks' features to share class centers with old tasks and keeps the features highly discriminative, which in turn minimizes forgetting of previously learned information. The method is easy to implement, incurs minimal computational and memory overhead, and allows the neural network to maintain high performance across many sequentially encountered tasks. We also demonstrate that using the center loss in conjunction with memory replay outperforms other replay-based strategies. Along with standard MNIST variants for continual learning, we apply our method to continual domain adaptation scenarios on the Digits and PACS datasets. Our approach is scalable, effective, and delivers competitive performance compared to state-of-the-art continual learning methods.
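To make the regularizer concrete, below is a minimal PyTorch sketch of the standard center loss, L_C = 1/2 * mean_i ||f(x_i) - c_{y_i}||^2, added to a cross-entropy objective. The toy encoder, feature dimension, batch, and the weight lam are illustrative assumptions, not values from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CenterLoss(nn.Module):
    # Center loss: L_C = 0.5 * mean_i || f(x_i) - c_{y_i} ||^2.
    # In the continual-learning setting described above, the centers
    # estimated on old tasks would be frozen so that new-task features
    # are pulled toward the old class centers.
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):
        diffs = features - self.centers[labels]   # (batch, feat_dim)
        return 0.5 * diffs.pow(2).sum(dim=1).mean()

# Illustrative joint objective (encoder, lam, and sizes are assumptions).
feat_dim, num_classes, lam = 128, 10, 0.01
encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, feat_dim), nn.ReLU())
classifier = nn.Linear(feat_dim, num_classes)
center_loss = CenterLoss(num_classes, feat_dim)

x = torch.randn(32, 1, 28, 28)                    # dummy MNIST-like batch
y = torch.randint(0, num_classes, (32,))

features = encoder(x)
logits = classifier(features)
loss = F.cross_entropy(logits, y) + lam * center_loss(features, y)
loss.backward()

When training shifts to a new task, the centers fitted on the old tasks would be frozen (e.g., center_loss.centers.requires_grad_(False)) so the penalty anchors new-task features to the old class centers while the decision boundaries stay in place.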

Related Research

06/23/2022 · Sample Condensation in Online Continual Learning
Online Continual learning is a challenging learning scenario where the m...

10/25/2021 · Mixture-of-Variational-Experts for Continual Learning
One significant shortcoming of machine learning is the poor ability of m...

11/28/2018 · Experience Replay for Continual Learning
Continual learning is the problem of learning new tasks or knowledge whi...

06/19/2020 · SOLA: Continual Learning with Second-Order Loss Approximation
Neural networks have achieved remarkable success in many cognitive tasks...

10/31/2019 · Continual Unsupervised Representation Learning
Continual learning aims to improve the ability of modern learning system...

02/07/2023 · Keeping Pace with Ever-Increasing Data: Towards Continual Learning of Code Intelligence Models
Previous research on code intelligence usually trains a deep learning mo...

09/27/2020 · Beneficial Perturbation Network for designing general adaptive artificial intelligence systems
The human brain is the gold standard of adaptive learning. It not only c...
