To Spike or Not to Spike? A Quantitative Comparison of SNN and CNN FPGA Implementations

06/22/2023
by Patrick Plagwitz, et al.

Convolutional Neural Networks (CNNs) are widely employed to solve various problems, e.g., image classification. Due to their compute- and data-intensive nature, CNN accelerators have been developed as ASICs or on FPGAs. The increasing complexity of applications has caused the resource costs and energy requirements of these accelerators to grow. Spiking Neural Networks (SNNs) are an emerging alternative to CNN implementations, promising higher resource and energy efficiency. The main research question addressed in this paper is whether SNN accelerators truly meet these expectations of reduced energy requirements compared to their CNN equivalents. For this purpose, we analyze multiple SNN hardware accelerators for FPGAs regarding performance and energy efficiency. We present a novel encoding scheme for spike event queues and a novel memory organization technique to further improve SNN energy efficiency. Both techniques have been integrated into a state-of-the-art SNN architecture and evaluated on the MNIST, SVHN, and CIFAR-10 datasets with corresponding network architectures on two differently sized modern FPGA platforms. For small-scale benchmarks such as MNIST, SNN designs provide little to no latency or energy efficiency advantage over corresponding CNN implementations. For more complex benchmarks such as SVHN and CIFAR-10, the trend reverses.
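The efficiency argument for SNNs rests on spike sparsity: a layer's activity over time can be stored and transmitted as a queue of discrete spike events rather than a dense activation matrix. The following is a minimal generic sketch of that idea (it is not the paper's specific encoding scheme, whose details are in the full text; all names and the 5% spike rate are illustrative assumptions):

```python
import random

# Generic illustration, not the paper's encoding: an SNN layer emits sparse
# binary spikes, so its activity over T timesteps can be represented as a
# queue of (timestep, neuron_id) events instead of a dense T x N matrix.

random.seed(0)
T, N = 8, 256                      # timesteps, neurons in the layer
dense = [[random.random() < 0.05 for _ in range(N)] for _ in range(T)]

# Encode: one queue entry per spike event.
events = [(t, n) for t in range(T) for n in range(N) if dense[t][n]]

# Decode back into a dense grid to confirm the encoding is lossless.
decoded = [[False] * N for _ in range(T)]
for t, n in events:
    decoded[t][n] = True

assert decoded == dense
print(f"{len(events)} events vs {T * N} dense cells")
```

At a low spike rate, the event queue holds far fewer entries than the dense grid has cells, which is the storage and bandwidth saving that dedicated queue encodings and memory organizations, like those evaluated in the paper, aim to exploit.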


