Learning to Continually Learn

02/21/2020
by   Shawn Beaulieu, et al.

Continual lifelong learning requires an agent or model to learn many sequentially ordered tasks, building on previous knowledge without catastrophically forgetting it. Much work has gone towards preventing the default tendency of machine learning models to catastrophically forget, yet virtually all such work involves manually designed solutions to the problem. We instead advocate meta-learning a solution to catastrophic forgetting, allowing AI to learn to continually learn. Inspired by neuromodulatory processes in the brain, we propose A Neuromodulated Meta-Learning Algorithm (ANML). It differentiates through a sequential learning process to meta-learn an activation-gating function that enables context-dependent selective activation within a deep neural network. Specifically, a neuromodulatory (NM) neural network gates the forward pass of another (otherwise normal) neural network called the prediction learning network (PLN). The NM network thus also indirectly controls selective plasticity (i.e., the backward pass) of the PLN. ANML enables continual learning without catastrophic forgetting at scale: it produces state-of-the-art continual learning performance, sequentially learning as many as 600 classes (over 9,000 SGD updates).
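The gating mechanism described in the abstract can be sketched in a few lines: the NM network computes a per-unit gate from the input, which multiplies the PLN's hidden activations, so that only the gated-on units contribute to the output (and, under backpropagation, only they would receive meaningful gradient). This is a minimal illustrative sketch with assumed layer sizes and random weights, not the paper's actual architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical sizes chosen for illustration only.
IN, HID, OUT = 4, 8, 3

# Prediction learning network (PLN): a plain two-layer net.
W1 = rng.normal(size=(IN, HID))
W2 = rng.normal(size=(HID, OUT))

# Neuromodulatory (NM) network: maps the same input to a gate
# over the PLN's hidden units.
Wg = rng.normal(size=(IN, HID))

def forward(x):
    h = np.maximum(0.0, x @ W1)   # PLN hidden activations (ReLU)
    gate = sigmoid(x @ Wg)        # NM output: per-unit gate in (0, 1)
    h_gated = h * gate            # context-dependent selective activation
    return h_gated @ W2, gate

x = rng.normal(size=(IN,))
y, gate = forward(x)
```

Because the gate multiplies the hidden activations, it also scales the gradients flowing back through them, which is how the NM network indirectly controls where plasticity occurs in the PLN during learning.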


03/12/2020

Online Fast Adaptation and Knowledge Accumulation: a New Approach to Continual Learning

Learning from non-stationary data remains a great challenge for machine ...
07/27/2020

La-MAML: Look-ahead Meta Learning for Continual Learning

The continual learning problem involves training models with limited cap...
12/13/2020

Learn-Prune-Share for Lifelong Learning

In lifelong learning, we wish to maintain and update a model (e.g., a ne...
06/16/2021

SPeCiaL: Self-Supervised Pretraining for Continual Learning

This paper presents SPeCiaL: a method for unsupervised pretraining of re...
12/08/2020

Continual Adaptation of Visual Representations via Domain Randomization and Meta-learning

Most standard learning approaches lead to fragile models which are prone...
09/29/2018

Continuous Learning of Context-dependent Processing in Neural Networks

Deep artificial neural networks (DNNs) are powerful tools for recognitio...
03/06/2021

Learning to Continually Learn Rapidly from Few and Noisy Data

Neural networks suffer from catastrophic forgetting and are unable to se...
