Difference Target Propagation

12/23/2014
by Dong-Hyun Lee et al.

Back-propagation has been the workhorse of recent successes of deep learning, but it relies on infinitesimal effects (partial derivatives) in order to perform credit assignment. This could become a serious issue as one considers deeper and more non-linear functions, for example in the extreme case where the relation between parameters and cost is actually discrete. Inspired by the biological implausibility of back-propagation, a few approaches have been proposed in the past that could play a similar credit-assignment role. In this spirit, we explore a novel approach to credit assignment in deep networks that we call target propagation. The main idea is to compute targets, rather than gradients, at each layer. Like gradients, they are propagated backwards. In a way that is related to, but different from, previously proposed proxies for back-propagation that rely on a backwards network with symmetric weights, target propagation relies on auto-encoders at each layer. Unlike back-propagation, it can be applied even when units exchange stochastic bits rather than real numbers. We show that a linear correction for the imperfection of the auto-encoders, called difference target propagation, is very effective in making target propagation actually work, leading to results comparable to back-propagation for deep networks with discrete and continuous units and denoising auto-encoders, and achieving state-of-the-art performance for stochastic networks.
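To make the mechanism concrete, here is a minimal NumPy sketch of a single difference-target-propagation step for a two-layer network. The layer sizes, tanh non-linearities, squared-error output loss, and the step sizes eta and lr are illustrative assumptions for this sketch, not the paper's exact experimental setup.

import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (illustrative): input -> hidden -> output.
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(0.0, 0.1, (n_hid, n_in));  b1 = np.zeros(n_hid)
W2 = rng.normal(0.0, 0.1, (n_out, n_hid)); b2 = np.zeros(n_out)
V2 = rng.normal(0.0, 0.1, (n_hid, n_out)); c2 = np.zeros(n_hid)

def f1(x): return np.tanh(W1 @ x + b1)    # first (hidden) layer
def f2(h): return np.tanh(W2 @ h + b2)    # second (top) layer
def g2(y): return np.tanh(V2 @ y + c2)    # approximate inverse of f2;
# in the paper, g2 is itself trained with a layer-local (denoising)
# auto-encoder reconstruction loss, omitted here for brevity.

x = rng.normal(size=n_in)                 # one toy input
t = rng.normal(size=n_out)                # its toy regression target

# Forward pass.
h1 = f1(x)
h2 = f2(h1)

# Top-layer target: a small step down the output loss L = 0.5*||h2 - t||^2.
eta = 0.5
h2_hat = h2 - eta * (h2 - t)

# Difference target propagation:
#   h1_hat = h1 + g2(h2_hat) - g2(h2)
# The subtracted g2(h2) term linearly corrects for g2 being only an
# approximate inverse: if h2_hat == h2, then h1_hat == h1 exactly.
h1_hat = h1 + g2(h2_hat) - g2(h2)

# Each layer minimizes its own local loss 0.5*||f_i(input) - target||^2
# with purely layer-local gradients; no global back-propagation chain.
lr = 0.01
d2 = (h2 - h2_hat) * (1.0 - h2**2)        # dLoss2/d(pre-activation of f2)
W2 -= lr * np.outer(d2, h1); b2 -= lr * d2
d1 = (h1 - h1_hat) * (1.0 - h1**2)        # dLoss1/d(pre-activation of f1)
W1 -= lr * np.outer(d1, x);  b1 -= lr * d1

The defining property of the difference correction is that the propagated target collapses back to the actual activation whenever the upper-layer target equals the upper-layer activation, so an imperfect inverse g2 introduces no spurious training signal at a fixed point.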

Related research

- How Auto-Encoders Could Provide Credit Assignment in Deep Networks via Target Propagation (07/29/2014): We propose to exploit reconstruction as a layer-local training signal f...
- Hindsight Network Credit Assignment: Efficient Credit Assignment in Networks of Discrete Stochastic Units (10/14/2021): Training neural networks with discrete stochastic variables presents a u...
- Deriving Differential Target Propagation from Iterating Approximate Inverses (07/29/2020): We show that a particular form of target propagation, i.e., relying on l...
- Early Inference in Energy-Based Models Approximates Back-Propagation (10/09/2015): We show that Langevin MCMC inference in an energy-based model with laten...
- Conducting Credit Assignment by Aligning Local Representations (03/05/2018): The use of back-propagation and its variants to train deep networks is o...
- Credit Assignment for Trained Neural Networks Based on Koopman Operator Theory (12/02/2022): Credit assignment problem of neural networks refers to evaluating the cr...
- Associated Learning: Decomposing End-to-end Backpropagation based on Auto-encoders and Target Propagation (06/13/2019): Backpropagation has been widely used in deep learning approaches, but it...
