Mitigating Catastrophic Forgetting in Task-Incremental Continual Learning with Adaptive Classification Criterion

by Yun Luo et al.

Task-incremental continual learning refers to continually training a model on a sequence of tasks while overcoming the problem of catastrophic forgetting (CF). The issue arises because the representations learned for earlier tasks are overwritten when learning new tasks, and the decision boundary is disrupted. Previous studies mostly consider how to recover the representations of learned tasks; adapting the decision boundary to the new representations is seldom considered. In this paper we propose a Supervised Contrastive learning framework with adaptive classification criterion for Continual Learning (SCCL). In our method, a contrastive loss is used to directly learn representations for different tasks, and a limited number of data samples are saved to serve as the classification criterion. During inference, the saved data samples are fed into the current model to obtain updated representations, and a k-Nearest-Neighbour module is used for classification. In this way, the extensible model can solve the learned tasks with adaptive criteria derived from the saved samples. To mitigate CF, we further use an instance-wise relation distillation regularization term and a memory replay module to maintain the information of previous tasks. Experiments show that SCCL achieves state-of-the-art performance and has a stronger ability to overcome CF than the classification baselines.
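The adaptive-criterion idea above can be illustrated with a minimal sketch: the saved memory samples are re-encoded by the *current* model at inference time, so the classification criterion moves with the representation space, and a k-NN vote over those refreshed embeddings produces the prediction. The function names and the cosine-similarity choice below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def knn_classify(encoder, memory_x, memory_y, query_x, k=5):
    """Classify queries by k-NN over memory samples re-encoded by the
    current model, so the criterion adapts as representations drift.

    encoder  -- hypothetical function mapping a raw input to an embedding
    memory_x -- saved exemplars (the classification criterion)
    memory_y -- labels of the saved exemplars
    """
    # Refresh the stored criterion with the current model's representations.
    mem_emb = np.stack([encoder(x) for x in memory_x])
    q_emb = np.stack([encoder(x) for x in query_x])

    # Cosine similarity between each query and each memory embedding.
    mem_n = mem_emb / np.linalg.norm(mem_emb, axis=1, keepdims=True)
    q_n = q_emb / np.linalg.norm(q_emb, axis=1, keepdims=True)
    sims = q_n @ mem_n.T

    preds = []
    for row in sims:
        top = np.argsort(row)[-k:]                 # k nearest memory samples
        labels = [memory_y[i] for i in top]
        preds.append(max(set(labels), key=labels.count))  # majority vote
    return preds
```

Because the memory is re-encoded on every call, no decision boundary has to be retrained when the backbone changes: the criterion and the queries always live in the same (current) embedding space.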


Prototype-Sample Relation Distillation: Towards Replay-Free Continual Learning

In Continual learning (CL) balancing effective adaptation while combatin...

Co^2L: Contrastive Continual Learning

Recent breakthroughs in self-supervised learning show that such algorith...

Online Continual Learning with Contrastive Vision Transformer

Online continual learning (online CL) studies the problem of learning se...

ADER: Adaptively Distilled Exemplar Replay Towards Continual Learning for Session-based Recommendation

Session-based recommendation has received growing attention recently due...

Continual Variational Autoencoder Learning via Online Cooperative Memorization

Due to their inference, data representation and reconstruction propertie...

Generalized Few-Shot Continual Learning with Contrastive Mixture of Adapters

The goal of Few-Shot Continual Learning (FSCL) is to incrementally learn...

Is Continual Learning Truly Learning Representations Continually?

Continual learning (CL) aims to learn from sequentially arriving tasks w...
