Learning to modulate random weights can induce task-specific contexts for economical meta and continual learning

04/08/2022
by   Jinyung Hong, et al.

Neural networks are vulnerable to catastrophic forgetting when data distributions are non-stationary during continual online learning; learning a later task often leads to forgetting of an earlier task. One common approach is model-agnostic continual meta-learning, whereby both task-specific and meta parameters are trained. Here, we depart from this view and introduce a novel neural-network architecture inspired by neuromodulation in biological nervous systems. Neuromodulation is the biological mechanism that dynamically controls and fine-tunes synaptic dynamics to complement the behavioral context in real time, and it has received limited attention in machine learning. We introduce a single-hidden-layer network that learns only a relatively small context vector per task (task-specific parameters) that neuromodulates unchanging, randomized weights (meta parameters) that transform the input. We show that when task boundaries are available, this approach can eliminate catastrophic forgetting entirely while also drastically reducing the number of learnable parameters relative to other context-vector-based approaches. Furthermore, by combining this model with a simple meta-learning approach for inferring task identity, we demonstrate that the model can be generalized into a framework to perform continual learning without knowledge of task boundaries. Finally, we showcase the framework in a supervised continual online learning scenario and discuss the implications of the proposed formalism.
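To make the architecture described above concrete, the following is a minimal sketch of the core idea, not the authors' implementation: a frozen, randomly initialized hidden layer whose activations are gated elementwise by a small, learnable per-task context vector. The exact form of the modulation, the shared trainable readout, and all names and dimensions are assumptions for illustration only.

```python
import torch
import torch.nn as nn


class ContextModulatedNet(nn.Module):
    """Sketch of a single-hidden-layer network whose fixed random weights
    (meta parameters) are neuromodulated by a per-task context vector
    (task-specific parameters). Illustrative only; details are assumed."""

    def __init__(self, in_dim, hidden_dim, out_dim, num_tasks):
        super().__init__()
        # Meta parameters: a random projection that is never trained.
        self.random_proj = nn.Linear(in_dim, hidden_dim)
        for p in self.random_proj.parameters():
            p.requires_grad = False
        # Task-specific parameters: one small context vector per task.
        self.contexts = nn.Parameter(torch.ones(num_tasks, hidden_dim))
        # Shared readout (whether this is trained or also fixed is an
        # assumption of this sketch, not stated in the abstract).
        self.readout = nn.Linear(hidden_dim, out_dim)

    def forward(self, x, task_id):
        h = torch.relu(self.random_proj(x))   # fixed random features
        h = h * self.contexts[task_id]        # neuromodulation by context vector
        return self.readout(h)


# Usage sketch: per task, gradients flow only through the context vector
# (and the readout in this version), leaving the random weights untouched.
net = ContextModulatedNet(in_dim=784, hidden_dim=256, out_dim=10, num_tasks=5)
logits = net(torch.randn(32, 784), task_id=0)
```

Because the random projection is shared and frozen, switching tasks amounts to swapping in a different context vector, which is why training a new task cannot overwrite what was learned for an earlier one when task boundaries are known.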

