Hypothesis-driven Stream Learning with Augmented Memory

04/06/2021
by Mengmi Zhang, et al.

Stream learning refers to the ability to acquire and transfer knowledge across a continuous stream of data, without forgetting and without repeated passes over the data. A common way to avoid catastrophic forgetting is to intersperse new examples with replays of old examples, stored either as raw image pixels or reproduced by generative models. Here, we consider stream learning in image classification tasks and propose a novel hypothesis-driven Augmented Memory Network, which efficiently consolidates previous knowledge with a limited number of hypotheses in the augmented memory and replays relevant hypotheses to avoid catastrophic forgetting. The advantages of hypothesis-driven replay over image-pixel replay and generative replay are two-fold. First, hypothesis-based knowledge consolidation avoids redundant information in the image-pixel space and makes memory usage more efficient. Second, hypotheses in the augmented memory can be re-used for learning new tasks, improving generalization and transfer-learning ability. We evaluated our method on three stream learning object recognition datasets. Our method performs comparably to or better than state-of-the-art (SOTA) methods while offering more efficient memory usage. All source code and data are publicly available at https://github.com/kreimanlab/AugMem.
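The core idea of hypothesis-driven replay, as described above, is to store compact intermediate representations ("hypotheses") rather than raw pixels, and to interleave them with incoming data during training. The following is a minimal illustrative sketch of that idea, not the authors' implementation: the buffer capacity, feature dimensionality, and the reservoir-sampling eviction policy are all assumptions made for the example.

```python
import random
import numpy as np

class HypothesisMemory:
    """Fixed-size buffer of (feature, label) pairs, filled by reservoir sampling.

    Storing low-dimensional features instead of full images is what keeps the
    memory footprint small. This is an illustrative simplification of
    hypothesis-level replay, not the paper's exact scheme.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []   # list of (feature, label) pairs
        self.seen = 0      # total examples offered to the memory so far

    def add(self, feature, label):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append((feature, label))
        else:
            # Reservoir sampling: each seen example survives with
            # probability capacity / seen, uniformly over the stream.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = (feature, label)

    def sample(self, k):
        """Draw up to k stored hypotheses for replay."""
        return random.sample(self.buffer, min(k, len(self.buffer)))


# Toy usage: a stream of 4-d "hypotheses" from two sequential tasks.
random.seed(0)
rng = np.random.default_rng(0)
mem = HypothesisMemory(capacity=50)
for task in range(2):
    for _ in range(200):
        feat = rng.normal(size=4)  # stand-in for a mid-layer activation
        mem.add(feat, task)
    # A training step would mix a fresh batch with replayed hypotheses:
    replay = mem.sample(8)

print(len(mem.buffer), mem.seen)  # buffer stays at capacity; 400 examples seen
```

The design point the sketch makes concrete is the memory trade-off: a 4-dimensional feature costs orders of magnitude less storage than a raw image, so far more of the stream can be summarized in the same budget.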

Related research:

- Generative Feature Replay For Class-Incremental Learning (04/20/2020)
- Map-based Experience Replay: A Memory-Efficient Solution to Catastrophic Forgetting in Reinforcement Learning (05/03/2023)
- Lifelong GAN: Continual Learning for Conditional Image Generation (07/23/2019)
- Memory Efficient Experience Replay for Streaming Learning (09/16/2018)
- Condensed Prototype Replay for Class Incremental Learning (05/25/2023)
- Augmented Box Replay: Overcoming Foreground Shift for Incremental Object Detection (07/23/2023)
- Episodic Memory in Lifelong Language Learning (06/03/2019)
