Sample Condensation in Online Continual Learning

06/23/2022 · by Mattia Sangermano, et al.

Online continual learning is a challenging scenario in which a model must learn from a non-stationary stream of data where each sample is seen only once. The main challenge is to learn incrementally while avoiding catastrophic forgetting, namely the problem of losing previously acquired knowledge while learning from new data. A popular solution in this scenario is to use a small memory to retain old data and rehearse it over time. Unfortunately, due to the limited memory size, the quality of the memory deteriorates over time. In this paper we propose OLCGM, a novel replay-based continual learning strategy that uses knowledge condensation techniques to continuously compress the memory and make better use of its limited size. The sample condensation step compresses old samples instead of removing them, as other replay strategies do. As a result, the experiments show that, whenever the memory budget is limited compared to the complexity of the data, OLCGM improves final accuracy compared to state-of-the-art replay strategies.
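To make the mechanism concrete, below is a minimal Python sketch of a rehearsal buffer that compresses rather than evicts when it fills up. This is not the authors' implementation: OLCGM condenses samples with a gradient-matching objective, whereas this sketch stands in a simple per-class linear merge, and all names here (CondensingReplayBuffer, add, sample) are hypothetical.

```python
import random
import torch

class CondensingReplayBuffer:
    """Toy rehearsal memory that condenses old samples instead of evicting.

    A minimal sketch of the idea behind OLCGM, not the paper's code: OLCGM
    optimizes the synthetic sample with a gradient-matching objective, while
    here two same-class samples are simply averaged.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []  # list of (x, y) pairs: x a tensor, y an int label

    def _condense(self):
        # Replace two stored samples of the same class with one synthetic
        # sample (their average), freeing a slot without discarding a class.
        by_class = {}
        for i, (_, y) in enumerate(self.buffer):
            by_class.setdefault(y, []).append(i)
        pairs = [idxs for idxs in by_class.values() if len(idxs) >= 2]
        if not pairs:  # no same-class pair available: fall back to eviction
            del self.buffer[random.randrange(len(self.buffer))]
            return
        i, j = random.sample(random.choice(pairs), 2)
        (xi, y), (xj, _) = self.buffer[i], self.buffer[j]
        merged = (0.5 * xi + 0.5 * xj, y)
        for k in sorted((i, j), reverse=True):  # delete higher index first
            del self.buffer[k]
        self.buffer.append(merged)

    def add(self, x, y):
        # On a full buffer, compress instead of dropping an old sample.
        if len(self.buffer) >= self.capacity:
            self._condense()
        self.buffer.append((x.clone(), int(y)))

    def sample(self, batch_size):
        # Draw a rehearsal mini-batch of stored (possibly synthetic) samples.
        batch = random.sample(self.buffer, min(batch_size, len(self.buffer)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.tensor(ys)
```

The design choice this illustrates is that a full buffer gives up resolution (two real samples become one synthetic sample) rather than coverage (dropping a sample outright), which is why the strategy pays off when the memory budget is small relative to the complexity of the data.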


Related research

03/19/2022 · Practical Recommendations for Replay-based Continual Learning Methods
Continual Learning requires the model to learn from a stream of dynamic,...

10/21/2021 · Center Loss Regularization for Continual Learning
The ability to learn different tasks sequentially is essential to the de...

08/28/2021 · Prototypes-Guided Memory Replay for Continual Learning
Continual learning (CL) refers to a machine learning paradigm that using...

03/23/2023 · Adiabatic replay for continual learning
Conventional replay-based approaches to continual learning (CL) require,...

12/02/2019 · Latent Replay for Real-Time Continual Learning
Training deep networks on light computational devices is nowadays very c...

12/22/2021 · Continual learning of longitudinal health records
Continual learning denotes machine learning methods which can adapt to n...

11/15/2022 · Exploring the Joint Use of Rehearsal and Knowledge Distillation in Continual Learning for Spoken Language Understanding
Continual learning refers to a dynamical framework in which a model or a...
