Energy Efficient Hardware Acceleration of Neural Networks with Power-of-Two Quantisation

09/30/2022
by   Dominika Przewlocka-Rus, et al.

Deep neural networks virtually dominate the domain of modern vision systems, providing high performance at the cost of increased computational complexity. Since such systems must often operate both in real time and with minimal energy consumption (e.g., wearable devices, autonomous vehicles, edge Internet of Things (IoT) devices, sensor networks), various network optimisation techniques are used, e.g., quantisation, pruning, or dedicated lightweight architectures. Because the weights in neural network layers follow a logarithmic distribution, Power-of-Two (PoT) quantisation, whose quantisation levels likewise follow a logarithmic distribution, provides high performance even at significantly reduced computational precision (4-bit weights and less). PoT quantisation also makes it possible to replace the Multiply-and-ACcumulate (MAC) units typical of neural networks (performing, e.g., convolution operations) with more energy-efficient Bitshift-and-ACcumulate (BAC) units. In this paper, we show that a hardware neural network accelerator with PoT weights implemented on the Zynq UltraScale+ MPSoC ZCU104 SoC FPGA can be at least 1.4x more energy efficient than the uniformly quantised version. To further reduce the actual power requirement by omitting part of the computation for zero weights, we also propose a new pruning method adapted to logarithmic quantisation.
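The core idea, replacing multiplications by bit shifts once weights are powers of two, can be sketched as follows. This is a minimal illustration, not the paper's exact quantisation scheme: the helper names (`pot_quantise`, `bac_dot`), the rounding-to-nearest-exponent rule, and the exponent clipping range are all assumptions made for the example.

```python
import numpy as np

def pot_quantise(w, bits=4):
    # Quantise each weight to the nearest signed power of two, w ~ sign * 2**exp.
    # (Illustrative sketch; the exponent range for a given bit width is an assumption.)
    sign = np.sign(w).astype(int)
    levels = 2 ** (bits - 1)
    exp = np.clip(np.round(np.log2(np.abs(w) + 1e-12)),
                  -levels + 1, 0).astype(int)
    return sign, exp

def bac_dot(x_int, sign, exp):
    # Bitshift-and-accumulate (BAC): multiplying an integer activation by
    # 2**exp becomes a left shift by (exp - exp.min()); a single final
    # rescaling restores the common scale, so no multiplier is needed
    # inside the accumulation loop.
    base = int(exp.min())
    acc = 0
    for xi, s, e in zip(x_int, sign, exp):
        acc += s * (int(xi) << (e - base))  # shift replaces multiply
    return acc * 2.0 ** base                # undo the common scaling once
```

For integer activations the shifted accumulation is exact, so `bac_dot(x, sign, exp)` matches `np.dot(x, sign * 2.0**exp)` while using only shifts and additions, which is what makes BAC units cheaper than MAC units in hardware.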


