Supervised Learning in Temporally-Coded Spiking Neural Networks with Approximate Backpropagation

07/27/2020
by Andrew Stephan, et al.

In this work we propose a new supervised learning method for temporally-encoded multilayer spiking networks to perform classification. The method employs a reinforcement signal that mimics backpropagation but is far less computationally intensive; the weight update calculation at each layer requires only local data apart from this signal. We also employ a rule capable of producing specific output spike trains: by setting the target spike time for key high-value neurons equal to the actual spike time with a slight negative offset, the actual spike time is driven to be as early as possible. In simulated MNIST handwritten digit classification, two-layer networks trained with this rule matched the performance of a comparable backpropagation-based non-spiking network.
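The target-assignment idea can be pictured with a short sketch. This is only an illustration of the rule as described above, not the authors' implementation; the function name, the offset value, and the choice to leave the non-target neurons' spike times unchanged are all assumptions.

    import numpy as np

    def target_spike_times(actual_times, label, offset=1.0):
        # Illustrative target assignment: the correct-class ("high-value")
        # output neuron is asked to fire slightly earlier than it actually did,
        # so repeated weight updates push its spike as early as possible.
        #   actual_times : observed output-layer spike times, one per class
        #   label        : index of the correct class for this example
        #   offset       : small shift applied to the target (assumed value)
        targets = actual_times.copy()                  # by default, request no change
        targets[label] = actual_times[label] - offset  # correct neuron: fire earlier
        return targets

    # Example: three output neurons, class 1 is the correct label
    actual = np.array([12.0, 15.0, 9.0])
    targets = target_spike_times(actual, label=1)      # class-1 target becomes 14.0

Because the target is defined relative to the current actual spike time, the correct-class neuron never quite satisfies it, so each training step nudges its firing earlier.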
