Re-evaluating Continual Learning Scenarios: A Categorization and Case for Strong Baselines

by Yen-Chang Hsu, et al.

Continual learning has received a great deal of attention recently, with several approaches being proposed. However, evaluations involve a diverse set of scenarios, making meaningful comparison difficult. This work provides a systematic categorization of the scenarios and evaluates them within a consistent framework that includes both strong baselines and state-of-the-art methods. The results clarify the relative difficulty of the scenarios and show, surprisingly, that simple baselines (Adagrad, L2 regularization, and naive rehearsal strategies) can achieve performance comparable to current mainstream methods. We conclude with several suggestions for creating harder evaluation scenarios and for future research directions.
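The naive rehearsal baseline mentioned in the abstract can be sketched as a small replay buffer of past examples mixed into each new training batch. The sketch below is illustrative only; the buffer capacity, reservoir sampling scheme, and replay ratio are assumptions for exposition, not the paper's exact configuration.

```python
import random


class RehearsalBuffer:
    """Minimal sketch of naive rehearsal: store a bounded random sample
    of previously seen examples and replay some alongside each new batch."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        # Reservoir sampling: keeps a uniform random sample over the
        # whole stream while using only `capacity` slots of memory.
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def augment_batch(self, batch, k):
        # Mix up to k replayed old examples into the current batch,
        # so gradient updates also cover earlier tasks.
        replay = self.rng.sample(self.buffer, min(k, len(self.buffer)))
        return list(batch) + replay
```

In a continual-learning loop, each incoming batch would be passed through `augment_batch` before the optimizer step, and its examples added to the buffer afterwards.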



