It's all About Consistency: A Study on Memory Composition for Replay-Based Methods in Continual Learning

by Julio Hurtado, et al.
University of Pisa
Pontificia Universidad Católica de Chile

Continual Learning methods strive to mitigate Catastrophic Forgetting (CF), where knowledge from previously learned tasks is lost when learning a new one. Among these algorithms, some maintain a subset of samples from previous tasks, referred to as a memory, during training. Such methods have shown outstanding performance while being conceptually simple and easy to implement. Yet, despite their popularity, little has been done to understand which elements should be included in the memory. Currently, the memory is often filled via random sampling, with no guiding principles to aid the retention of previous knowledge. In this work, we propose a criterion based on the learning consistency of a sample, called Consistency AWare Sampling (CAWS). This criterion prioritizes samples that are easier for deep networks to learn. We perform studies on three memory-based methods, AGEM, GDumb, and Experience Replay, on the MNIST, CIFAR-10, and CIFAR-100 datasets. We show that using the most consistent elements yields performance gains when constrained by a compute budget; without such a constraint, random sampling is a strong baseline. However, using CAWS with Experience Replay improves performance over the random baseline. Finally, we show that CAWS achieves results similar to those of a popular memory selection method while requiring significantly fewer computational resources.
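To illustrate the idea of consistency-aware memory selection, a minimal sketch follows. It assumes a simple notion of consistency: the fraction of training epochs in which a sample was classified correctly, with the most consistent (easiest-to-learn) samples filling the replay memory. The function names and the exact scoring rule are illustrative assumptions, not the paper's implementation.

```python
def consistency_scores(correct_history):
    """Assumed consistency measure: fraction of epochs each sample
    was classified correctly during training.

    correct_history: dict mapping sample_id -> list of 0/1 flags,
    one flag per epoch (1 = correctly classified at that epoch).
    """
    return {i: sum(h) / len(h) for i, h in correct_history.items()}

def caws_select(correct_history, memory_size):
    """Fill the replay memory with the most consistently-learned samples
    (hypothetical sketch of the CAWS criterion)."""
    scores = consistency_scores(correct_history)
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:memory_size]

# Example: sample 0 is learned immediately, sample 1 only at the end,
# sample 2 is learned intermittently.
history = {0: [1, 1, 1], 1: [0, 0, 1], 2: [1, 0, 1]}
memory = caws_select(history, memory_size=2)  # → [0, 2]
```

Because the per-epoch correctness flags are already produced as a by-product of training, a criterion of this form adds essentially no extra forward passes, which is consistent with the abstract's claim of low computational cost relative to other memory selection methods.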


