EXPANSE: A Deep Continual / Progressive Learning System for Deep Transfer Learning

05/19/2022
by Mohammadreza Iman, et al.

Deep transfer learning (DTL) techniques address two limitations of deep learning, the dependency on extensive training data and the cost of training, by reusing previously obtained knowledge. However, current DTL techniques suffer from either the catastrophic forgetting dilemma (losing previously obtained knowledge) when fine-tuning a pre-trained model, or an overly biased pre-trained model (one that is hard to adapt to the target data) when part of the pre-trained model is frozen. Progressive learning, a sub-category of DTL, reduces the effect of an overly biased model in the frozen-layers setting by appending a new layer to the end of a frozen pre-trained model. Although it has succeeded in many cases, it still cannot handle distant source and target data. We propose a new continual/progressive learning approach for deep transfer learning that tackles these limitations. To avoid both the catastrophic forgetting and overly-biased-model problems, we expand the pre-trained model by widening its pre-trained layers (adding new nodes to each layer) rather than only adding new layers; hence the method is named EXPANSE. Our experimental results confirm that this technique can handle distant source and target data while the final model remains valid on the source data, yielding a promising deep continual learning approach. Moreover, we offer a new way of training deep learning models inspired by the human education system, which we term two-step training: learning the basics first, then adding complexities and uncertainties. Our evaluation suggests that two-step training extracts more meaningful features and settles into a finer basin on the error surface, since it achieves better accuracy than regular training. EXPANSE (model expansion plus two-step training) is a systematic continual learning approach applicable to different problems and DL models.
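
To make the expansion idea concrete, here is a minimal sketch in PyTorch of widening a frozen pre-trained layer with new trainable nodes. The class name ExpandedLinear and the width choices are hypothetical illustrations, not the authors' implementation; the paper's expansion may differ in detail.

```python
# A minimal sketch of layer expansion, assuming PyTorch.
# ExpandedLinear is a hypothetical name, not the authors' code.
import torch
import torch.nn as nn

class ExpandedLinear(nn.Module):
    """Wraps a frozen pre-trained Linear layer and widens it with new
    trainable nodes, preserving source knowledge while the added
    capacity adapts to the target data."""

    def __init__(self, pretrained: nn.Linear, extra_out: int):
        super().__init__()
        self.frozen = pretrained
        for p in self.frozen.parameters():
            p.requires_grad = False  # keep the source knowledge intact
        # New trainable nodes receive the same input and append their
        # activations to the frozen layer's output.
        self.new_nodes = nn.Linear(pretrained.in_features, extra_out)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cat([self.frozen(x), self.new_nodes(x)], dim=-1)

# Example: widen a 128-unit pre-trained layer by 32 trainable units.
pretrained = nn.Linear(64, 128)        # stands in for a source-trained layer
layer = ExpandedLinear(pretrained, extra_out=32)
out = layer(torch.randn(8, 64))        # -> shape (8, 160)
```

Because the frozen weights never change, the expanded model's outputs on the original dimensions stay consistent with the source model, which is what allows the final model to remain valid on the source data.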
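The two-step training idea can likewise be sketched as a simple curriculum: fit a "basics" subset first, then continue on the full data. The staging criterion below (two separate loaders) is an assumption for illustration; the paper's criterion for what counts as basic versus complex may differ.

```python
# A minimal sketch of two-step training, assuming PyTorch.
# The basic/full split is hypothetical, for illustration only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def two_step_train(model, basic_loader, full_loader, epochs=5, lr=1e-3):
    """Step 1: learn the basics (clean, easy examples).
    Step 2: continue on the full data, adding complexity and uncertainty."""
    loss_fn = nn.CrossEntropyLoss()
    for loader in (basic_loader, full_loader):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
    return model

# Example with toy tensors standing in for the staged datasets.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
basic = DataLoader(TensorDataset(torch.randn(64, 16),
                                 torch.randint(0, 4, (64,))), batch_size=8)
full = DataLoader(TensorDataset(torch.randn(256, 16),
                                torch.randint(0, 4, (256,))), batch_size=8)
two_step_train(model, basic, full)
```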
