Functional Regularisation for Continual Learning using Gaussian Processes

by Michalis K. Titsias, et al.

We introduce a novel approach for supervised continual learning based on approximate Bayesian inference over function space rather than the parameters of a deep neural network. We use a Gaussian process obtained by treating the weights of the last layer of a neural network as random and Gaussian distributed. Functional regularisation for continual learning naturally arises by applying the variational sparse GP inference method in a sequential fashion as new tasks are encountered. At each step of the process, a summary is constructed for the current task that consists of (i) inducing inputs and (ii) a posterior distribution over the function values at these inputs. This summary then regularises learning of future tasks, through Kullback-Leibler regularisation terms that appear in the variational lower bound, and reduces the effects of catastrophic forgetting. We fully develop the theory of the method and we demonstrate its effectiveness in classification datasets, such as Split-MNIST, Permuted-MNIST and Omniglot.
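The mechanics described above — summarising a task by inducing inputs plus a Gaussian posterior over the function values there, then penalising later tasks with a KL term — can be illustrated in a minimal NumPy sketch. This is not the paper's implementation: the fixed feature map, the inducing inputs, and the weight posteriors below are all hypothetical placeholders for a trained network's hidden layers and its variational fit.

```python
import numpy as np

def features(x):
    # Hypothetical fixed feature map standing in for the trained hidden
    # layers of the network; phi(x) in R^3.
    return np.stack([x, x ** 2, np.ones_like(x)], axis=1)

def function_posterior(Z, m_w, S_w):
    # With a linear last layer f(x) = phi(x)^T w and w ~ N(m_w, S_w),
    # the function values u = f(Z) are Gaussian:
    #   u ~ N(Phi m_w, Phi S_w Phi^T),  where Phi = features(Z).
    Phi = features(Z)
    return Phi @ m_w, Phi @ S_w @ Phi.T

def kl_gaussians(mu_q, cov_q, mu_p, cov_p):
    # KL(q || p) between two multivariate Gaussians.
    d = mu_q.size
    inv_p = np.linalg.inv(cov_p)
    diff = mu_p - mu_q
    _, logdet_q = np.linalg.slogdet(cov_q)
    _, logdet_p = np.linalg.slogdet(cov_p)
    return 0.5 * (np.trace(inv_p @ cov_q) + diff @ inv_p @ diff
                  - d + logdet_p - logdet_q)

# After task 1: store the summary (inducing inputs Z and posterior over f(Z)).
Z = np.array([-1.0, 0.0, 1.0])        # inducing inputs selected from task 1
m_w1 = np.array([0.5, -0.2, 0.1])     # variational posterior over last-layer
S_w1 = 0.1 * np.eye(3)                # weights after training on task 1
summary_mu, summary_cov = function_posterior(Z, m_w1, S_w1)

# While training task 2: the KL between the current distribution of the
# function values at Z and the stored summary is the functional regulariser
# that appears in the variational lower bound.
m_w2 = np.array([0.6, -0.1, 0.0])     # current weights during task 2
S_w2 = 0.1 * np.eye(3)
cur_mu, cur_cov = function_posterior(Z, m_w2, S_w2)

reg = kl_gaussians(cur_mu, cur_cov, summary_mu, summary_cov)
print(f"functional regulariser (KL): {reg:.4f}")
```

The regulariser is zero only when the new posterior leaves the function values at the stored inducing inputs unchanged, which is exactly how remembering in function space (rather than weight space) reduces catastrophic forgetting.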


Continual Multi-task Gaussian Processes

We address the problem of continual learning in multi-task Gaussian proc...

Variational Auto-Regressive Gaussian Processes for Continual Learning

This paper proposes Variational Auto-Regressive Gaussian Process (VAR-GP...

Improving and Understanding Variational Continual Learning

In the continual learning setting, tasks are encountered sequentially. T...

Variational Density Propagation Continual Learning

Deep Neural Networks (DNNs) deployed to the real world are regularly sub...

Memory-Based Dual Gaussian Processes for Sequential Learning

Sequential learning with Gaussian processes (GPs) is challenging when ac...

Continual Learning with Extended Kronecker-factored Approximate Curvature

We propose a quadratic penalty method for continual learning of neural n...

Partitioned Variational Inference: A unified framework encompassing federated and continual learning

Variational inference (VI) has become the method of choice for fitting m...

Code Repositories


Functional Regularisation for Continual Learning with Gaussian Processes

