HC-Net: Memory-based Incremental Dual-Network System for Continual Learning

09/07/2018
by Jangho Kim, et al.

Training a neural network for a classification task typically assumes that all training data are available from the start. In the real world, however, additional data accumulate gradually, and the model must be trained further without access to the old training data. This usually leads to catastrophic forgetting, which is inevitable under the traditional training methodology of neural networks. In this paper, we propose a memory-based continual learning method that can learn additional tasks while retaining the performance on previously learned tasks. Composed of two complementary networks, the Hippocampus-net (H-net) and the Cortex-net (C-net), our model estimates the task index of an input sample and uses the corresponding portion of itself. The C-net guarantees no degradation in the performance of previously learned tasks, and the H-net shows high confidence in identifying the task from which an input sample originates.
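The abstract does not give implementation details, but the dual-network idea it describes — one network estimating the task index, the other holding task-specific parameters that stay untouched for old tasks — can be sketched roughly as below. The class name, layer sizes, and per-task-head routing are all illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class HCNetSketch(nn.Module):
    """Hypothetical sketch of the H-net/C-net split: the H-net predicts
    which task an input came from, and the C-net keeps one head per task
    so already-trained heads can be left frozen (no forgetting)."""

    def __init__(self, in_dim, hidden_dim, num_tasks, classes_per_task):
        super().__init__()
        # H-net: estimates the task index of an input sample.
        self.h_net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_tasks),
        )
        # C-net: a shared trunk plus one classifier head per task.
        # Heads of previously learned tasks would never be updated.
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.heads = nn.ModuleList(
            nn.Linear(hidden_dim, classes_per_task) for _ in range(num_tasks)
        )

    def forward(self, x):
        task_logits = self.h_net(x)           # H-net: which task?
        task_idx = task_logits.argmax(dim=1)  # per-sample task estimate
        feats = self.trunk(x)
        # Route each sample through the head of its estimated task.
        out = torch.stack(
            [self.heads[t](f) for t, f in zip(task_idx, feats)]
        )
        return task_idx, out

model = HCNetSketch(in_dim=784, hidden_dim=128, num_tasks=3, classes_per_task=10)
task_idx, out = model(torch.randn(4, 784))
```

At inference time only the estimated task's head is consulted, which is one simple way to realize "utilizes a particular portion of itself with the estimated index"; the paper's actual mechanism may differ.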

