Connection Pruning for Deep Spiking Neural Networks with On-Chip Learning

10/09/2020
by Thao N. N. Nguyen, et al.

Long training time hinders deep Spiking Neural Networks (SNNs) with online learning capability from being realized on embedded systems hardware. Our work proposes a novel connection pruning approach that can be applied during online Spike Timing Dependent Plasticity (STDP)-based learning to optimize both the learning time and the network connectivity of the SNN. Our connection pruning approach was evaluated on a deep SNN with Time To First Spike (TTFS) coding and achieved a 2.1x speed-up in online learning while reducing the network connectivity by 92.83%. Energy consumption in online learning was reduced by 64%. Furthermore, the connectivity reduction results in a 2.83x speed-up and 78.24% energy saving during inference. Meanwhile, the classification accuracy remains the same as our non-pruning baseline on the Caltech 101 dataset. In addition, we developed an event-driven hardware architecture on the Field Programmable Gate Array (FPGA) platform that efficiently incorporates our proposed connection pruning approach while incurring as little as 0.56% power overhead. Finally, we present a comparison between our work and existing works on connection pruning for SNNs to highlight the key features of each approach. To the best of our knowledge, our work is the first to propose a connection pruning algorithm that can be applied during online STDP-based learning for a deep SNN with TTFS coding.
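
The abstract's key idea is pruning connections during STDP-based learning rather than after training has finished. As a rough illustration of that idea only, the Python sketch below disables low-magnitude connections inside a simplified pair-based STDP update. The update rule, learning rates, pruning threshold, and random spike inputs are all hypothetical assumptions for demonstration; they are not the paper's actual algorithm, which operates on a deep TTFS-coded SNN with event-driven FPGA hardware support.

```python
import numpy as np

# Illustrative sketch only: a simplified pair-based STDP weight update
# with magnitude-based connection pruning applied during learning.
# All constants and the update rule below are assumptions, not the
# paper's method.

rng = np.random.default_rng(0)
n_pre, n_post = 64, 16
weights = rng.uniform(0.0, 1.0, size=(n_pre, n_post))
active = np.ones_like(weights, dtype=bool)   # False = pruned connection

A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression rates (assumed)
PRUNE_THRESHOLD = 0.05          # prune connections that decay below this (assumed)

def stdp_step(pre_spikes, post_spikes):
    """Apply one STDP update, then prune weak surviving connections."""
    global weights, active
    # Potentiate where pre and post both fire; depress where pre fires
    # without a matching post spike.
    dw = (A_PLUS * np.outer(pre_spikes, post_spikes)
          - A_MINUS * np.outer(pre_spikes, 1 - post_spikes))
    weights = np.clip(weights + dw * active, 0.0, 1.0)
    # Prune during learning: permanently disable weak connections so they
    # are skipped in later spike propagation and weight updates.
    active &= weights >= PRUNE_THRESHOLD

for _ in range(100):
    pre = (rng.random(n_pre) < 0.2).astype(float)    # random input spikes
    post = (rng.random(n_post) < 0.2).astype(float)  # stand-in for TTFS outputs
    stdp_step(pre, post)

print(f"Connectivity remaining: {active.mean():.1%}")
```

Because pruned connections are masked out of every subsequent update, later learning iterations touch progressively fewer synapses, which is the mechanism by which this style of in-training pruning can shorten learning time on hardware.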
