A Study of Forward-Forward Algorithm for Self-Supervised Learning

09/21/2023
by Jonas Brenig, et al.

Self-supervised representation learning has seen remarkable progress in the last few years, with some of the recent methods being able to learn useful image representations without labels. These methods are trained using backpropagation, the de facto standard. Recently, Geoffrey Hinton proposed the forward-forward algorithm as an alternative training method. It utilizes two forward passes and a separate loss function for each layer to train the network without backpropagation. In this work, we study for the first time the performance of forward-forward vs. backpropagation for self-supervised representation learning and provide insights into the learned representation spaces. Our benchmark employs four standard datasets, namely MNIST, F-MNIST, SVHN and CIFAR-10, and three commonly used self-supervised representation learning techniques, namely rotation, flip and jigsaw. Our main finding is that while the forward-forward algorithm performs comparably to backpropagation during (self-)supervised training, the transfer performance lags significantly behind in all the studied settings. This may be caused by a combination of factors, including the per-layer loss functions and the way supervised training is realized in the forward-forward paradigm. Compared to backpropagation, the forward-forward algorithm focuses more on decision boundaries and discards information that is unnecessary for making decisions, which harms the representation learning goal. Further investigation and research are necessary to stabilize the forward-forward strategy for self-supervised learning and to make it work beyond the datasets and configurations demonstrated by Geoffrey Hinton.
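To make the layer-local training concrete, below is a minimal PyTorch sketch of a single forward-forward layer in the spirit of Hinton's description: two forward passes (one on positive and one on negative data) and a per-layer "goodness" loss, with no gradient flowing between layers. The class name FFLayer, the goodness threshold of 2.0, the layer sizes, and the Adam optimizer are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F


class FFLayer(nn.Module):
    """One fully connected layer trained with a local forward-forward loss."""

    def __init__(self, in_dim, out_dim, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.act = nn.ReLU()
        self.threshold = threshold  # goodness threshold (illustrative value)
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # Length-normalize the input so only the direction of the previous
        # layer's activity is passed on, as in Hinton's formulation.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return self.act(self.linear(x))

    def train_step(self, x_pos, x_neg):
        # Goodness = sum of squared activations. The local loss pushes
        # goodness above the threshold on the positive pass and below it
        # on the negative pass (the algorithm's two forward passes).
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)
        loss = F.softplus(torch.cat([
            self.threshold - g_pos,  # positives whose goodness is too low
            g_neg - self.threshold,  # negatives whose goodness is too high
        ])).mean()
        self.opt.zero_grad()
        loss.backward()  # gradients never leave this layer
        self.opt.step()
        # Detach outputs so the next layer trains on fixed inputs:
        # no backpropagation across layers, only local updates.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()


# Greedy layer-by-layer training on stand-in positive/negative batches.
x_pos = torch.randn(32, 784)  # e.g. real images
x_neg = torch.randn(32, 784)  # e.g. corrupted or hybrid images
layers = [FFLayer(784, 500), FFLayer(500, 500)]
h_pos, h_neg = x_pos, x_neg
for layer in layers:
    h_pos, h_neg = layer.train_step(h_pos, h_neg)

In this sketch each layer optimizes its own objective independently of the rest of the network, which mirrors the per-layer loss structure the study identifies as one possible cause of the transfer gap.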
