Continuous Learning in Single-Incremental-Task Scenarios

by Davide Maltoni, et al.
University of Bologna

It was recently shown that architectural, regularization, and rehearsal strategies can be used to train deep models sequentially on a number of disjoint tasks without forgetting previously acquired knowledge. However, these strategies are still unsatisfactory when the tasks are not disjoint but instead constitute a single incremental task (e.g., class-incremental learning). In this paper we point out the differences between multi-task and single-incremental-task scenarios and show that well-known approaches such as LWF, EWC and SI are not ideal for incremental-task scenarios. We then propose a new approach, denoted AR1, which specifically combines architectural and regularization strategies. AR1's overhead (in terms of memory and computation) is very small, making it suitable for online learning. When tested on CORe50 and iCIFAR-100, AR1 outperformed existing regularization strategies by a good margin.
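Regularization strategies such as EWC and SI share a common mechanism: they add a quadratic penalty that anchors each parameter to its value after previous tasks, weighted by an importance estimate (e.g., the diagonal Fisher information in EWC). The sketch below illustrates that penalty in plain NumPy; the function and variable names are illustrative, not taken from the paper's code.

```python
import numpy as np

def quadratic_anchor_penalty(params, anchor_params, importance, lam=1.0):
    """EWC/SI-style regularization term (illustrative sketch).

    params        : dict of current parameter arrays
    anchor_params : dict of parameter values after previous tasks
    importance    : dict of per-parameter importance weights
                    (e.g., diagonal Fisher information in EWC)
    lam           : strength of the penalty

    Returns (lam / 2) * sum_k importance[k] * (params[k] - anchor_params[k])^2
    """
    penalty = 0.0
    for name in params:
        diff = params[name] - anchor_params[name]
        penalty += np.sum(importance[name] * diff ** 2)
    return 0.5 * lam * penalty
```

During training on a new task, this term is added to the task loss, so that important parameters for earlier tasks are discouraged from drifting while unimportant ones remain free to adapt.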


