Bypassing Logits Bias in Online Class-Incremental Learning with a Generative Framework

by Gehui Shen, et al.

Continual learning requires a model to maintain previously learned knowledge while learning continually from a non-i.i.d. data stream. Due to the single-pass training setting, online continual learning is very challenging, but it is closer to real-world scenarios where quick adaptation to new data is desirable. In this paper, we focus on the online class-incremental learning setting, in which new classes emerge over time. Almost all existing methods are replay-based with a softmax classifier. However, the inherent logits bias problem in the softmax classifier is a main cause of catastrophic forgetting, and existing solutions to it are not applicable in online settings. To bypass this problem, we abandon the softmax classifier and propose a novel generative framework based on the feature space. In our framework, a generative classifier that utilizes replay memory is used for inference, and the training objective is a pair-based metric learning loss that is proven theoretically to optimize the feature space in a generative way. To further improve the ability to learn new data, we propose a hybrid of generative and discriminative losses to train the model. Extensive experiments on several benchmarks, including newly introduced task-free datasets, show that our method outperforms a series of state-of-the-art replay-based methods with discriminative classifiers and reduces catastrophic forgetting consistently by a remarkable margin.
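To make the idea of a generative classifier over the feature space concrete, here is a minimal, hypothetical sketch: a nearest-class-mean classifier whose per-class prototypes are computed from exemplars held in a replay memory. This is not the paper's exact method (all function names and the toy data are illustrative assumptions); it only shows how inference can sidestep a softmax head's logits bias by comparing features against per-class statistics.

```python
import numpy as np

def class_means(memory_feats, memory_labels):
    """Compute one prototype (mean feature vector) per class from replay memory."""
    classes = np.unique(memory_labels)
    return {c: memory_feats[memory_labels == c].mean(axis=0) for c in classes}

def ncm_predict(query_feats, prototypes):
    """Assign each query to the class whose prototype is nearest in L2 distance."""
    classes = sorted(prototypes)
    protos = np.stack([prototypes[c] for c in classes])            # (C, d)
    d2 = ((query_feats[:, None, :] - protos[None]) ** 2).sum(-1)   # (N, C)
    return np.array(classes)[d2.argmin(axis=1)]

# Toy usage: two well-separated classes in a 2-D feature space.
rng = np.random.default_rng(0)
feats = np.concatenate([rng.normal(0.0, 0.1, (20, 2)),
                        rng.normal(3.0, 0.1, (20, 2))])
labels = np.array([0] * 20 + [1] * 20)
protos = class_means(feats, labels)
preds = ncm_predict(feats, protos)
print((preds == labels).mean())  # prints 1.0 on this toy data
```

Because prototypes are recomputed from whatever the memory currently holds, new classes need no new output units, which is one reason feature-space classifiers of this kind are attractive in class-incremental settings.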


Supervised Contrastive Replay: Revisiting the Nearest Class Mean Classifier in Online Class-Incremental Continual Learning

Online class-incremental continual learning (CL) studies the problem of ...

Prediction Error-based Classification for Class-Incremental Learning

Class-incremental learning (CIL) is a particularly challenging variant o...

Dealing with Cross-Task Class Discrimination in Online Continual Learning

Existing continual learning (CL) research regards catastrophic forgettin...

Offline-Online Class-incremental Continual Learning via Dual-prototype Self-augment and Refinement

This paper investigates a new, practical, but challenging problem named ...

New Insights on Relieving Task-Recency Bias for Online Class Incremental Learning

To imitate the ability of keeping learning of human, continual learning ...

Incremental Learning from Low-labelled Stream Data in Open-Set Video Face Recognition

Deep Learning approaches have brought solutions, with impressive perform...

Prototype-Sample Relation Distillation: Towards Replay-Free Continual Learning

In Continual learning (CL) balancing effective adaptation while combatin...
