Multi-Sample Online Learning for Spiking Neural Networks based on Generalized Expectation Maximization

02/05/2021
by Hyeryung Jang, et al.

Spiking Neural Networks (SNNs) offer a novel computational paradigm that captures some of the efficiency of biological brains by processing information through binary spiking activations. Probabilistic SNN models are typically trained to maximize the likelihood of the desired outputs by using unbiased estimates of the log-likelihood gradients. While prior work used single-sample estimators obtained from a single run of the network, this paper proposes to leverage multiple compartments that sample independent spiking signals while sharing synaptic weights. The key idea is to use these signals to obtain more accurate statistical estimates of the log-likelihood training criterion, as well as of its gradient. The approach is based on generalized expectation-maximization (GEM), which optimizes a tighter approximation of the log-likelihood using importance sampling. The derived online learning algorithm implements a three-factor rule with global per-compartment learning signals. Experimental results on a classification task on the neuromorphic MNIST-DVS data set demonstrate significant improvements in terms of log-likelihood, accuracy, and calibration when the number of compartments used for training and inference is increased.
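The multi-sample estimate at the heart of the GEM approach can be illustrated with a small numerical sketch. This is not the paper's implementation; the function names and the NumPy formulation are illustrative assumptions. Each of the K compartments contributes one sampled hidden spike trajectory, and the log-likelihood is estimated by importance sampling via a log-mean-exp over the per-compartment importance weights, which tightens the bound as K grows. The normalized weights then act as the global per-compartment learning signals in a three-factor rule.

```python
import numpy as np

def multi_sample_log_likelihood(log_joint, log_proposal):
    """Importance-weighted estimate of log p(x) from K compartment samples.

    log_joint[k]    -- log p(x, h_k) for the k-th sampled hidden spike trajectory
    log_proposal[k] -- log probability of h_k under the sampling distribution
    Returns log((1/K) * sum_k w_k), with w_k = exp(log_joint[k] - log_proposal[k]),
    computed stably via the log-sum-exp trick.
    """
    log_w = np.asarray(log_joint) - np.asarray(log_proposal)
    m = np.max(log_w)  # shift for numerical stability
    return m + np.log(np.mean(np.exp(log_w - m)))

def compartment_learning_signals(log_joint, log_proposal):
    """Normalized importance weights (a softmax over the K log-weights):
    one scalar learning signal per compartment, modulating the update of
    the shared synaptic weights."""
    log_w = np.asarray(log_joint) - np.asarray(log_proposal)
    w = np.exp(log_w - np.max(log_w))
    return w / w.sum()
```

With a single compartment (K = 1) the estimate reduces to the standard single-sample ELBO term; averaging the weights of several compartments before taking the logarithm is what yields the tighter approximation described in the abstract.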


