Continual Learning: Tackling Catastrophic Forgetting in Deep Neural Networks with Replay Processes

07/01/2020
by Timothée Lesort, et al.

Humans learn throughout their lives. They accumulate knowledge from a sequence of learning experiences and remember the essential concepts without forgetting what they learned previously. Artificial neural networks struggle to learn in this way. They often rely on rigorously preprocessed data to learn solutions to specific problems such as classification or regression, and in particular they forget past learning experiences when trained on new ones. Artificial neural networks are therefore often ill-suited to real-life settings, such as an autonomous robot that must learn online to adapt to new situations and overcome new problems without forgetting its past learning experiences. Continual learning (CL) is the branch of machine learning that addresses this type of problem. Continual algorithms are designed to accumulate and improve knowledge over a curriculum of learning experiences without forgetting. In this thesis, we propose to explore continual algorithms with replay processes. Replay processes encompass rehearsal methods and generative replay methods. Generative replay consists of regenerating past learning experiences with a generative model in order to remember them; rehearsal consists of saving a core-set of samples from past learning experiences to rehearse later. Replay processes make it possible to balance the current learning objective against past ones, enabling learning without forgetting in sequential-task settings. We show that they are very promising methods for continual learning. Notably, they enable the re-evaluation of past data in light of new knowledge and the comparison of data from different learning experiences. We demonstrate their ability to learn continually on unsupervised, supervised and reinforcement learning tasks.
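
As a rough illustration of the rehearsal idea described in the abstract (not the thesis code), the sketch below keeps a fixed-size core-set of past samples with reservoir sampling and mixes them into each new-task batch, so that past objectives keep being optimized alongside the current one. The class name, buffer size, and toy task streams are hypothetical choices made for this example.

```python
# Minimal rehearsal sketch for continual learning (illustrative only).
# A small core-set of past samples is kept via reservoir sampling and
# replayed together with current-task data.
import random


class RehearsalBuffer:
    """Fixed-size memory of (input, label) pairs from past learning experiences."""

    def __init__(self, capacity: int, seed: int = 0):
        self.capacity = capacity
        self.memory = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, sample):
        # Reservoir sampling: every sample ever seen has the same probability
        # of remaining in memory, so the core-set stays representative of all
        # past tasks without growing.
        self.n_seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(sample)
        else:
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.memory[j] = sample

    def sample(self, k: int):
        # Draw up to k stored samples to rehearse alongside the current batch.
        k = min(k, len(self.memory))
        return self.rng.sample(self.memory, k) if k else []


if __name__ == "__main__":
    buffer = RehearsalBuffer(capacity=50)

    # Two toy "learning experiences" (tasks) with disjoint labels.
    task_streams = [
        [((x,), 0) for x in range(200)],        # task 1: label 0
        [((x,), 1) for x in range(200, 400)],   # task 2: label 1
    ]

    for task_id, stream in enumerate(task_streams):
        for sample in stream:
            # A replayed mini-batch = current sample + rehearsed past samples;
            # a real model would take a gradient step on this mixed batch.
            batch = [sample] + buffer.sample(k=4)
            buffer.add(sample)
        labels = sorted({y for _, y in buffer.memory})
        print(f"after task {task_id + 1}: buffer holds labels {labels}")
```

Generative replay follows the same pattern, except that the stored core-set is replaced by samples drawn from a generative model trained on past experiences.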


