Re-evaluating Continual Learning Scenarios: A Categorization and Case for Strong Baselines

10/30/2018
by Yen-Chang Hsu, et al.

Continual learning has received a great deal of attention recently, with several approaches being proposed. However, evaluations involve a diverse set of scenarios, making meaningful comparison difficult. This work provides a systematic categorization of those scenarios and evaluates them within a consistent framework that includes strong baselines and state-of-the-art methods. The results clarify the relative difficulty of the scenarios and show, surprisingly, that simple baselines (Adagrad, L2 regularization, and naive rehearsal strategies) can achieve performance comparable to current mainstream methods. We conclude with several suggestions for creating harder evaluation scenarios and for future research directions.
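To make the named baselines concrete, below is a minimal sketch (not the authors' implementation) of how such simple strategies are typically set up: Adagrad as the optimizer, an L2 penalty pulling weights toward the values learned after the previous task, and a small "naive rehearsal" buffer of stored examples mixed into each batch. The model architecture, lambda value, and buffer size are illustrative assumptions.

import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    """Small classifier used only for illustration."""
    def __init__(self, in_dim=784, hidden=256, out_dim=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))

    def forward(self, x):
        return self.net(x)

def train_task(model, loader, memory, old_params=None,
               l2_lambda=1.0, mem_per_batch=8, lr=1e-2, epochs=1):
    """Train on one task with the simple-baseline ingredients:
    Adagrad, optional L2 pull toward previous-task weights, and
    naive rehearsal from a shared example buffer (a plain list)."""
    opt = torch.optim.Adagrad(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            # Naive rehearsal: append a few stored examples from past tasks.
            if memory and mem_per_batch > 0:
                mx, my = zip(*random.sample(memory, min(mem_per_batch, len(memory))))
                x = torch.cat([x, torch.stack(mx)])
                y = torch.cat([y, torch.stack(my)])
            opt.zero_grad()
            loss = F.cross_entropy(model(x), y)
            # L2 regularization toward the parameters learned on earlier tasks.
            if old_params is not None:
                loss = loss + l2_lambda * sum(
                    ((p - q) ** 2).sum() for p, q in zip(model.parameters(), old_params))
            loss.backward()
            opt.step()
    # Store a few examples and a parameter snapshot for the next task.
    for x, y in loader:
        memory.extend(zip(x[:mem_per_batch], y[:mem_per_batch]))
        break
    return [p.detach().clone() for p in model.parameters()]

A driver loop would call train_task once per task, passing the returned parameter snapshot as old_params for the next call and reusing the same memory list across tasks; the paper's point is that even such minimal machinery is competitive with more elaborate continual learning methods.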


Related research

04/15/2019  Three scenarios for continual learning
05/24/2018  Towards Robust Evaluations of Continual Learning
12/13/2021  Ex-Model: Continual Learning from a Stream of Trained Models
02/02/2023  Real-Time Evaluation in Online Continual Learning: A New Paradigm
04/26/2023  Evaluation of Regularization-based Continual Learning Approaches: Application to HAR
