A Computational Framework of Cortical Microcircuits Approximates Sign-concordant Random Backpropagation

05/15/2022
by Yukun Yang, et al.

Several recent studies attempt to address the biological implausibility of the well-known backpropagation (BP) method. While promising methods such as feedback alignment, direct feedback alignment, and their variants like sign-concordant feedback alignment tackle BP's weight transport problem, their validity remains controversial owing to a set of other unsolved issues. In this work, we answer the question of whether it is possible to realize random backpropagation solely based on mechanisms observed in neuroscience. We propose a hypothetical framework consisting of a new microcircuit architecture and its supporting Hebbian learning rules. Comprising three types of cells and two types of synaptic connectivity, the proposed microcircuit architecture computes and propagates error signals through local feedback connections and supports the training of multi-layered spiking neural networks with a globally defined spiking error function. We employ the Hebbian rule operating in local compartments to update synaptic weights and achieve supervised learning in a biologically plausible manner. Finally, we interpret the proposed framework from an optimization point of view and show its equivalence to sign-concordant feedback alignment. The proposed framework is benchmarked on several datasets including MNIST and CIFAR10, demonstrating promising BP-comparable accuracy.
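To make the sign-concordant feedback alignment idea concrete, here is a minimal NumPy sketch (an illustration under assumed toy dimensions and learning rate, not the paper's spiking microcircuit implementation): the backward pass replaces the transposed forward weights W2.T with a feedback matrix B of fixed random magnitudes whose signs are copied from W2.T at each step, avoiding exact weight transport.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network x -> h -> y (all sizes hypothetical).
W1 = rng.normal(0, 0.5, (4, 8))
W2 = rng.normal(0, 0.5, (8, 2))
B_mag = np.abs(rng.normal(0, 0.5, (2, 8)))   # fixed random feedback magnitudes

x = rng.normal(0, 1.0, 4)                    # single toy input
t = np.array([1.0, -1.0])                    # toy target

errs = []
for _ in range(200):
    h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
    y = h @ W2
    e = y - t                                # output error
    errs.append(float(e @ e))
    # Sign-concordant feedback: random magnitudes, signs of W2.T
    B = np.sign(W2.T) * B_mag
    dh = (e @ B) * (h > 0)                   # error propagated through B, not W2.T
    W2 -= 0.02 * np.outer(h, e)
    W1 -= 0.02 * np.outer(x, dh)
```

Despite the random feedback magnitudes, the shared signs keep the propagated error roughly aligned with the true gradient, so the squared error shrinks over training, which is the behavior the framework above exploits.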

Related research:
- BioLeaF: A Bio-plausible Learning Framework for Training of Spiking Neural Networks (11/14/2021)
- Synaptic Dynamics Realize First-order Adaptive Learning and Weight Symmetry (12/01/2022)
- Learning with augmented target information: An alternative theory of Feedback Alignment (04/03/2023)
- Biologically-plausible learning algorithms can scale to large datasets (11/08/2018)
- Credit Assignment Through Broadcasting a Global Error Vector (06/08/2021)
- Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures (06/23/2020)
- Tourbillon: a Physically Plausible Neural Architecture (07/13/2021)
