Continual Learning Using Bayesian Neural Networks

by   Honglin Li, et al.

Continual learning models allow systems to learn and adapt to new tasks and changes over time. However, in continual and sequential learning scenarios, in which models are trained on data drawn from different distributions, neural networks tend to forget previously learned knowledge, a phenomenon often referred to as catastrophic forgetting. Catastrophic forgetting is an inevitable problem for continual learning models in dynamic environments. To address this issue, we propose a method called Continual Bayesian Learning Networks (CBLN), which enables a network to allocate additional resources to adapt to new tasks without forgetting previously learned ones. Using a Bayesian Neural Network, CBLN maintains a mixture of Gaussian posterior distributions associated with different tasks. The proposed method optimises the number of resources needed to learn each task and avoids an exponential growth in the number of resources involved in learning multiple tasks. It does not require access to past training data and can automatically choose suitable weights to classify data points at test time based on an uncertainty criterion. We have evaluated our method on the MNIST and UCR time-series datasets. The results show that our method addresses the catastrophic forgetting problem at a promising rate compared to state-of-the-art models.
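The test-time mechanism described above, maintaining per-task Gaussian weight posteriors and selecting among them by predictive uncertainty, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the `GaussianHead` class, the mean-variance uncertainty measure, and all dimensions are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class GaussianHead:
    """One task-specific posterior: each weight is modelled as N(mu, sigma^2)."""
    def __init__(self, d_in, d_out):
        self.mu = rng.normal(0.0, 0.1, (d_in, d_out))
        self.log_sigma = np.full((d_in, d_out), -2.0)

    def sample_logits(self, x, n_samples=20):
        # Monte Carlo samples of the weights give a distribution over logits.
        sigma = np.exp(self.log_sigma)
        return np.stack([
            x @ (self.mu + sigma * rng.standard_normal(self.mu.shape))
            for _ in range(n_samples)
        ])  # shape: (n_samples, N, d_out)

def predictive_uncertainty(head, x):
    """Mean prediction plus a scalar uncertainty (variance across MC samples)."""
    probs = softmax(head.sample_logits(x))            # (S, N, C)
    return probs.mean(axis=0), probs.var(axis=0).mean()

# At test time, route the input to the task posterior that is least
# uncertain about it (a stand-in for CBLN's uncertainty criterion).
heads = {"task_A": GaussianHead(4, 3), "task_B": GaussianHead(4, 3)}
x = rng.standard_normal((5, 4))
chosen = min(heads, key=lambda t: predictive_uncertainty(heads[t], x)[1])
```

Note that past training data never appears here: task selection relies only on the stored posteriors, matching the paper's claim that CBLN does not need to revisit old data.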

