The Forward-Forward Algorithm: Some Preliminary Investigations

12/27/2022
by Geoffrey Hinton, et al.

The aim of this paper is to introduce a new learning procedure for neural networks and to demonstrate that it works well enough on a few small problems to be worth further investigation. The Forward-Forward algorithm replaces the forward and backward passes of backpropagation by two forward passes, one with positive (i.e. real) data and the other with negative data which could be generated by the network itself. Each layer has its own objective function which is simply to have high goodness for positive data and low goodness for negative data. The sum of the squared activities in a layer can be used as the goodness but there are many other possibilities, including minus the sum of the squared activities. If the positive and negative passes could be separated in time, the negative passes could be done offline, which would make the learning much simpler in the positive pass and allow video to be pipelined through the network without ever storing activities or stopping to propagate derivatives.
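The per-layer objective described in the abstract can be sketched in a few lines. The following is an illustrative NumPy sketch, not the paper's reference code: the class name, layer sizes, learning rate, and threshold value are our own placeholders. It uses goodness = sum of squared ReLU activities and ascends the log-probability that goodness is above a threshold for positive data and below it for negative data, with each layer updated purely locally.

```python
import numpy as np

rng = np.random.default_rng(0)

class FFLayer:
    """One Forward-Forward layer (illustrative sketch; names/defaults are ours)."""

    def __init__(self, n_in, n_out, lr=0.03, threshold=2.0):
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_in, n_out))
        self.b = np.zeros(n_out)
        self.lr = lr
        self.threshold = threshold

    def forward(self, x):
        # ReLU activities; goodness is their sum of squares.
        return np.maximum(0.0, x @ self.W + self.b)

    def train_step(self, x_pos, x_neg):
        """One local update: raise goodness on positive data, lower it on negative."""
        for x, sign in ((x_pos, +1.0), (x_neg, -1.0)):
            h = self.forward(x)                      # (batch, n_out)
            goodness = (h ** 2).sum(axis=1)          # (batch,)
            # p = sigmoid(sign * (goodness - threshold)); gradient ascent on log p.
            z = np.clip(sign * (goodness - self.threshold), -30.0, 30.0)
            p = 1.0 / (1.0 + np.exp(-z))
            dgood = (sign * (1.0 - p))[:, None]      # d(log p)/d(goodness)
            dh = dgood * 2.0 * h                     # d(goodness)/dh = 2h (zero where ReLU is off)
            self.W += self.lr * (x.T @ dh) / len(x)
            self.b += self.lr * dh.mean(axis=0)
        g_pos = (self.forward(x_pos) ** 2).sum(axis=1).mean()
        g_neg = (self.forward(x_neg) ** 2).sum(axis=1).mean()
        return g_pos, g_neg

# Toy usage: two Gaussian clusters stand in for positive (real) and negative data.
layer = FFLayer(n_in=10, n_out=16)
x_pos = rng.normal(+1.0, 0.5, size=(200, 10))
x_neg = rng.normal(-1.0, 0.5, size=(200, 10))
for _ in range(200):
    g_pos, g_neg = layer.train_step(x_pos, x_neg)
# After training, mean goodness should be clearly higher for positive data.
```

Because no derivatives flow between layers, a deep network is trained by stacking such layers and feeding each one the (normalized) activities of the layer below, which is what allows the pipelined, store-nothing operation the abstract describes.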


Related research:

- Graph Neural Networks Go Forward-Forward (02/10/2023): We present the Graph Forward-Forward (GFF) algorithm, an extension of th...

- Emergent representations in networks trained with the Forward-Forward algorithm (05/26/2023): The Backpropagation algorithm, widely used to train neural networks, has...

- SymBa: Symmetric Backpropagation-Free Contrastive Learning with Forward-Forward Algorithm for Optimizing Convergence (03/15/2023): The paper proposes a new algorithm called SymBa that aims to achieve mor...

- Derivatives of partial eigendecomposition of a real symmetric matrix for degenerate cases (11/09/2020): This paper presents the forward and backward derivatives of partial eige...

- Forward Learning with Top-Down Feedback: Empirical and Analytical Characterization (02/10/2023): "Forward-only" algorithms, which train neural networks while avoiding a ...

- Neural Group Testing to Accelerate Deep Learning (11/21/2020): Recent advances in deep learning have made the use of large, deep neural...

- Avoiding overfitting of multilayer perceptrons by training derivatives (02/28/2018): Resistance to overfitting is observed for neural networks trained with e...
