Associated Learning: Decomposing End-to-end Backpropagation based on Auto-encoders and Target Propagation

06/13/2019
by Yu-Wei Kao, et al.
National Central University

Backpropagation has been widely used in deep learning, but it is inefficient and sometimes unstable because of backward locking and the vanishing/exploding gradient problem, especially when the gradient flow is long. Additionally, updating all edge weights based on a single objective seems biologically implausible. In this paper, we introduce a novel, biologically motivated learning structure called Associated Learning, which modularizes the network into smaller components, each with a local objective. Because these objectives are mutually independent, Associated Learning can learn the parameters of different components independently and simultaneously. Surprisingly, training deep models with Associated Learning yields accuracies comparable to those of models trained with typical backpropagation, which fits the target variable directly. Moreover, probably because the gradient flow of each component is short, deep networks can still be trained with Associated Learning even when some of the activation functions are sigmoid, a situation that usually leads to the vanishing gradient problem under typical backpropagation. We also find that Associated Learning generates better metafeatures, which we demonstrate both quantitatively (via inter-class and intra-class distance comparisons in the hidden layers) and qualitatively (by visualizing the hidden layers using t-SNE).
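The core idea in the abstract, splitting a network into components that are each trained against a local objective so that no gradient crosses component boundaries, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example of local-loss training with detached activations; it is not the paper's actual auto-encoder/target-propagation design, and the names (LocalBlock, the MSE local loss, the dimensions) are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

# Hypothetical illustration: each "component" transforms its input and is
# trained against a local target; detaching the output prevents any gradient
# from flowing back across component boundaries.
class LocalBlock(nn.Module):
    def __init__(self, in_dim, hidden_dim, target_dim):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.Sigmoid())
        self.predict = nn.Linear(hidden_dim, target_dim)  # local bridge to the target

    def forward(self, x, local_target):
        h = self.encode(x)
        local_loss = nn.functional.mse_loss(self.predict(h), local_target)
        # Detach h so the next component's loss cannot backpropagate into this one.
        return h.detach(), local_loss

# Toy usage: three components trained simultaneously from independent local losses.
x = torch.randn(32, 16)
y = torch.randn(32, 4)   # stand-in target representation for the local objectives
blocks = [LocalBlock(16, 64, 4), LocalBlock(64, 64, 4), LocalBlock(64, 64, 4)]
opt = torch.optim.Adam([p for b in blocks for p in b.parameters()], lr=1e-3)

h, losses = x, []
for block in blocks:
    h, loss = block(h, y)
    losses.append(loss)

opt.zero_grad()
sum(losses).backward()   # each gradient stays inside its own component
opt.step()
```

Because every component has its own short gradient path, sigmoid activations inside a block do not create a long chain of small derivatives, which matches the abstract's observation about avoiding vanishing gradients.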


Related Research

12/04/2019 · Are skip connections necessary for biologically plausible learning rules?
  Backpropagation is the workhorse of deep learning, however, several othe...

08/05/2019 · The HSIC Bottleneck: Deep Learning without Back-Propagation
  We introduce the HSIC (Hilbert-Schmidt independence criterion) bottlenec...

08/06/2017 · Training of Deep Neural Networks based on Distance Measures using RMSProp
  The vanishing gradient problem was a major obstacle for the success of d...

06/26/2018 · Unsupervised Learning by Competing Hidden Units
  It is widely believed that the backpropagation algorithm is essential fo...

11/05/2019 · Guided Layer-wise Learning for Deep Models using Side Information
  Training of deep models for classification tasks is hindered by local mi...

12/23/2014 · Difference Target Propagation
  Back-propagation has been the workhorse of recent successes of deep lear...

Code Repositories

Associated_Learning

Decomposing End-to-end Backpropagation based on Auto-encoders and Target Propagation

