Regularization Shortcomings for Continual Learning

12/06/2019
by Timothée Lesort, et al.

In classical machine learning, the data streamed to an algorithm are assumed to be independent and identically distributed (iid). When the data distribution instead changes over time, the algorithm risks remembering only data from the current state of the distribution and forgetting everything else. Continual learning is a sub-field of machine learning that aims to design automatic learning processes for such non-iid problems. Its main challenges are two-fold: first, detecting concept drift in the distribution, and second, remembering what happened before a concept drift. In this article, we study a specific family of continual learning approaches: regularization methods. These methods add a carefully chosen regularization term that protects important parameters from being modified, so that past knowledge is not forgotten. We show that, in the context of multi-task learning for classification, this process does not learn to discriminate between classes from different tasks. We give a theoretical argument for this shortcoming and illustrate it with examples and experiments on the "MNIST Fellowship" dataset.
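The parameter-protection idea studied in the abstract can be sketched as an EWC-style quadratic penalty: each parameter is anchored to its value after the previous task, weighted by an importance estimate (e.g. the diagonal of the Fisher information). The function name, toy parameter values, and importance weights below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """Quadratic regularization term anchoring parameters `theta`
    to their post-previous-task values `theta_star`, weighted by
    per-parameter importance `fisher` (EWC-style sketch)."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

# Toy example: during training on a new task, parameters that were
# important for the old task (high importance weight) incur a larger
# penalty for drifting, so gradient descent keeps them near theta_star.
theta_star = np.array([1.0, -2.0, 0.5])   # parameters after task A
fisher     = np.array([10.0, 0.1, 1.0])   # importance estimates
theta      = np.array([1.2, -1.0, 0.5])   # current parameters on task B

print(ewc_penalty(theta, theta_star, fisher, lam=2.0))  # → 0.5
```

Note that this penalty only discourages forgetting within each task's parameters; as the article argues, it does nothing to make classes from different tasks discriminable from one another, which is precisely the shortcoming studied here.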

