Is Multi-Task Learning an Upper Bound for Continual Learning?

10/26/2022
by Zihao Wu, et al.

Continual learning and multi-task learning are common machine learning approaches to learning from multiple tasks. Existing works in the literature often treat multi-task learning as a sensible performance upper bound for various continual learning algorithms. While this assumption is empirically verified on different continual learning benchmarks, it is not rigorously justified. Moreover, when learning from multiple tasks, a small subset of those tasks could act as adversarial tasks that reduce the overall performance of a multi-task learner. In contrast, a continual learning approach can avoid the performance drop caused by such adversarial tasks and preserve its performance on the remaining tasks, thereby outperforming the multi-task learner. This paper proposes a novel continual self-supervised learning setting, where each task corresponds to learning an invariant representation for a specific class of data augmentations. In this setting, we show that continual learning often beats multi-task learning on various benchmark datasets, including MNIST, CIFAR-10, and CIFAR-100.
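
The setting described in the abstract lends itself to a short illustration. Below is a minimal sketch, assuming a SimSiam-style invariance loss (negative cosine similarity between two augmented views, with a stop-gradient on the target branch) and three illustrative augmentation classes on MNIST; the architecture, augmentation choices, and loss are assumptions made for exposition, not the authors' exact protocol.

```python
# Hedged sketch of the continual self-supervised setting from the abstract:
# each "task" is one class of data augmentations, and the model is trained
# sequentially to produce representations invariant to that class.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# One augmentation class per task (illustrative choices, not the paper's).
task_augmentations = [
    transforms.RandomResizedCrop(28, scale=(0.6, 1.0)),  # task 1: random crops
    transforms.RandomRotation(25),                        # task 2: rotations
    transforms.GaussianBlur(kernel_size=3),               # task 3: blur
]

encoder = nn.Sequential(
    nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(), nn.Linear(256, 64)
)
predictor = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3
)

data = datasets.MNIST("./data", train=True, download=True,
                      transform=transforms.ToTensor())
loader = DataLoader(data, batch_size=256, shuffle=True)

def invariance_loss(x1, x2):
    """SimSiam-style loss: predict one view's embedding from the other,
    with a stop-gradient on the target branch."""
    z1, z2 = encoder(x1), encoder(x2)
    p1, p2 = predictor(z1), predictor(z2)
    return -(F.cosine_similarity(p1, z2.detach()).mean()
             + F.cosine_similarity(p2, z1.detach()).mean()) / 2

# Continual schedule: tasks are visited one after another, never jointly.
for task_id, augment in enumerate(task_augmentations):
    for x, _ in loader:                  # labels are ignored (self-supervised)
        x1, x2 = augment(x), augment(x)  # two views under this task's augmentation
        loss = invariance_loss(x1, x2)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"finished task {task_id}: {augment.__class__.__name__}")
```

The multi-task analogue would sample all augmentation classes jointly in every batch; the paper's claim is that a sequential schedule like the one above can often beat that joint training on benchmarks such as MNIST, CIFAR-10, and CIFAR-100.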

Related research

Boosting a Model Zoo for Multi-Task and Continual Learning (06/06/2021)
Leveraging data from multiple tasks, either all at once, or incrementall...

CLOPS: Continual Learning of Physiological Signals (04/20/2020)
Deep learning algorithms are known to experience destructive interferenc...

Behavior Self-Organization Supports Task Inference for Continual Robot Learning (07/09/2021)
Recent advances in robot learning have enabled robots to become increasi...

Continual Robot Learning using Self-Supervised Task Inference (09/10/2023)
Endowing robots with the human ability to learn a growing set of skills ...

On continual single index learning (02/25/2021)
In this paper, we generalize the problem of single index model to the co...

CoSCL: Cooperation of Small Continual Learners is Stronger than a Big One (07/13/2022)
Continual learning requires incremental compatibility with a sequence of...

Zero-shot Task Preference Addressing Enabled by Imprecise Bayesian Continual Learning (05/24/2023)
Like generic multi-task learning, continual learning has the nature of m...
