SHARP: Sparsity and Hidden Activation RePlay for Neuro-Inspired Continual Learning

by Mustafa Burak Gurbuz, et al.

Deep neural networks (DNNs) struggle to learn in dynamic environments because they assume fixed datasets or stationary data distributions. Continual learning (CL) aims to address this limitation and enable DNNs to accumulate knowledge incrementally, similar to human learning. Inspired by how the brain consolidates memories, a powerful strategy in CL is replay, which trains the DNN on a mixture of new classes and all previously seen classes. However, existing replay methods overlook two crucial aspects of biological replay: 1) the brain replays processed neural patterns rather than raw input, and 2) it prioritizes replaying recently learned information rather than revisiting all past experiences. To address these differences, we propose SHARP, an efficient neuro-inspired CL method that leverages sparse dynamic connectivity and activation replay. Unlike other activation replay methods, which assume that the layers not subjected to replay have been pretrained and frozen, SHARP can continually update all layers. SHARP is also unique in that it only needs to replay a few recently seen classes instead of all past classes. Our experiments on five datasets demonstrate that SHARP outperforms state-of-the-art replay methods in class incremental learning. Furthermore, we showcase SHARP's flexibility in a novel CL scenario where the boundaries between learning episodes are blurry. The SHARP code is available at <>.
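The two biological ingredients the abstract highlights can be sketched together: a buffer that stores hidden activations (not raw inputs) and that retains only the few most recently seen classes. The sketch below is a minimal, hypothetical illustration of that idea; the class names, window size, and sampling scheme are illustrative assumptions, not SHARP's actual implementation.

```python
import numpy as np
from collections import deque


class ActivationReplayBuffer:
    """Toy buffer for activation replay restricted to recent classes.

    Hypothetical sketch: stores hidden-layer activations per class and
    evicts classes that fall outside a fixed "recent" window, mimicking
    the abstract's two points (replay processed patterns, replay only
    recently learned classes). Not the actual SHARP data structure.
    """

    def __init__(self, recent_window=2, per_class=50):
        self.recent_window = recent_window  # how many recent classes to keep
        self.per_class = per_class          # activations stored per class
        self.store = {}                     # class id -> activation array
        self.order = deque()                # class ids in arrival order

    def add(self, cls, activations):
        """Store (a subset of) hidden activations for a newly seen class."""
        if cls not in self.store:
            self.order.append(cls)
            # Evict classes older than the recent window.
            while len(self.order) > self.recent_window:
                old = self.order.popleft()
                del self.store[old]
        self.store[cls] = np.asarray(activations)[: self.per_class]

    def sample(self, n, seed=0):
        """Draw n replayed activations from the recent classes only."""
        pooled = np.concatenate(list(self.store.values()))
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(pooled), size=n, replace=True)
        return pooled[idx]


# Usage: after learning class 2, only classes 1 and 2 remain replayable.
buf = ActivationReplayBuffer(recent_window=2, per_class=3)
buf.add(0, np.ones((5, 4)))       # activations from class 0
buf.add(1, 2 * np.ones((5, 4)))   # activations from class 1
buf.add(2, 3 * np.ones((5, 4)))   # class 2 arrives; class 0 is evicted
replayed = buf.sample(6)          # mini-batch of replayed activations
```

During training, such replayed activations would be fed directly into the upper layers alongside the hidden activations of new inputs, rather than reconstructing or storing raw images.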


