A Study on Efficiency in Continual Learning Inspired by Human Learning

10/28/2020
by Philip J. Ball, et al.

Humans are efficient continual learners: from birth we acquire new skills with a finite number of cells and finite resources, and our learning is highly optimized in both capacity and time while not suffering from catastrophic forgetting. In this work we study the efficiency of continual learning systems, taking inspiration from human learning. In particular, inspired by the mechanisms of sleep, we evaluate popular pruning-based continual learning algorithms, using PackNet as a case study. First, we identify that weight freezing, which is used in continual learning without biological justification, can result in more than 2× as many weights being used for a given level of performance. Second, we note the similarity of human daytime and nighttime behaviors to the training and pruning phases of PackNet, respectively. We study a setting where the pruning phase is given a time budget, and identify connections between iterative pruning and the multiple sleep cycles humans go through each night. We show that, for a given task, there is an optimal trade-off between the number of pruning iterations and the number of training epochs.
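For readers unfamiliar with PackNet, the following is a minimal sketch of the magnitude-based pruning and weight-freezing mechanism the abstract refers to, written in NumPy. The layer shape, pruning fraction, and helper name `packnet_prune` are illustrative assumptions, not the paper's actual settings or code.

```python
import numpy as np

def packnet_prune(weights, free_mask, prune_frac=0.5):
    """PackNet-style magnitude pruning for one layer (illustrative sketch).

    weights:    2-D weight matrix of the layer after training the current task
    free_mask:  boolean mask of entries not yet frozen by earlier tasks
    prune_frac: fraction of the free weights to prune (assumed value)
    Returns a boolean mask of the weights kept, and later frozen, for this task.
    """
    free_vals = np.abs(weights[free_mask])
    if free_vals.size == 0:
        return np.zeros_like(free_mask)
    # Free weights below the magnitude quantile are pruned (released for
    # future tasks); the survivors are kept for this task and frozen.
    threshold = np.quantile(free_vals, prune_frac)
    return free_mask & (np.abs(weights) >= threshold)

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
free = np.ones_like(w, dtype=bool)

# Task 1: prune half of the free weights, freeze the survivors.
task1_mask = packnet_prune(w, free, prune_frac=0.5)
free &= ~task1_mask  # frozen weights are no longer trainable for task 2

print(task1_mask.sum(), free.sum())
```

In the full algorithm this prune step is followed by a short retraining phase on the surviving weights; the iterative-pruning variant studied in the paper repeats the prune/retrain cycle several times within a time budget, which is the step the authors liken to multiple sleep cycles.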


Related research

03/29/2023
How Efficient Are Today's Continual Learning Algorithms?
Supervised Continual learning involves updating a deep neural network (D...

09/09/2022
Continual learning benefits from multiple sleep mechanisms: NREM, REM, and Synaptic Downscaling
Learning new tasks and skills in succession without losing prior learnin...

08/14/2023
Ada-QPacknet – adaptive pruning with bit width reduction as an efficient continual learning method without forgetting
Continual Learning (CL) is a process in which there is still huge gap be...

06/06/2019
Uncertainty-guided Continual Learning with Bayesian Neural Networks
Continual learning aims to learn new tasks without forgetting previously...

03/11/2019
Continual Learning via Neural Pruning
We introduce Continual Learning via Neural Pruning (CLNP), a new method ...

12/18/2018
Continual Match Based Training in Pommerman: Technical Report
Continual learning is the ability of agents to improve their capacities ...

06/23/2023
Maintaining Plasticity in Deep Continual Learning
Modern deep-learning systems are specialized to problem settings in whic...
