How Auto-Encoders Could Provide Credit Assignment in Deep Networks via Target Propagation

07/29/2014
by Yoshua Bengio, et al.

We propose to exploit reconstruction as a layer-local training signal for deep learning. Reconstructions can be propagated in a form of target propagation, playing a role similar to back-propagation but reducing the reliance on derivatives to perform credit assignment across many levels of possibly strong non-linearities (which is difficult for back-propagation). A regularized auto-encoder tends to produce a reconstruction that is a more likely version of its input, i.e., a small move in the direction of higher likelihood. By generalizing gradients, target propagation may also make it possible to train deep networks with discrete hidden units. If the auto-encoder takes as input both a representation of the input and the target (or of any side information), then its reconstruction of the input representation provides a target towards a representation that is more likely, conditioned on all the side information. A deep auto-encoder decoding path generalizes gradient propagation in a learned way that could thus handle not just infinitesimal changes but larger, discrete changes, hopefully allowing credit assignment through a long chain of non-linear operations. In addition to each layer being a good auto-encoder, the encoder also learns to please the upper layers by transforming the data into a space that is easier for them to model, flattening manifolds and disentangling factors. The motivations and theoretical justifications for this approach are laid out in this paper, along with conjectures that will have to be verified either mathematically or experimentally, including a hypothesis that such auto-encoder-mediated target propagation could play, in brains, the role of credit assignment through many non-linear, noisy and discrete transformations.
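The core mechanism can be illustrated with a minimal sketch: instead of back-propagating derivatives, each layer's learned decoder (the reconstruction path of a layer-local auto-encoder) maps the target of the layer above into a target for the layer below, and each layer is then trained locally to move toward its own target. Everything below — the toy dimensions, the random top-level target, and all variable names — is illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer network: each layer i has an encoder f_i and a decoder g_i,
# where g_i is trained (not shown) to approximately invert f_i, i.e. each
# (f_i, g_i) pair forms a layer-local auto-encoder.
sizes = [4, 6, 3]
W = [rng.normal(0, 0.5, (sizes[i + 1], sizes[i])) for i in range(2)]  # encoders
V = [rng.normal(0, 0.5, (sizes[i], sizes[i + 1])) for i in range(2)]  # decoders

def f(i, h):
    return np.tanh(W[i] @ h)  # encoder of layer i

def g(i, h):
    return np.tanh(V[i] @ h)  # decoder of layer i (approximate inverse)

x = rng.normal(size=sizes[0])
h1 = f(0, x)       # first-layer representation
h2 = f(1, h1)      # second-layer representation

# Suppose the top layer receives a target t2 (in practice, a small step of h2
# that lowers the task loss; here a random perturbation stands in for it).
t2 = h2 + 0.1 * rng.normal(size=sizes[2])

# Target propagation: the decoder maps the upper-layer target into a
# lower-layer target, with no derivatives of f required.
t1 = g(1, t2)

# Each layer is then trained *locally*, e.g. by gradient descent on the
# squared distance between its output and its target, using only that
# layer's own weights.
local_loss_1 = np.sum((f(0, x) - t1) ** 2)
local_loss_2 = np.sum((f(1, h1) - t2) ** 2)
```

Because each target is an actual activation vector rather than an infinitesimal gradient, the same scheme can in principle be applied when the hidden units are discrete, which is the generalization the abstract points to.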


