An Experimental Study of Reduced-Voltage Operation in Modern FPGAs for Neural Network Acceleration

05/04/2020
by Behzad Salami, et al.

We empirically evaluate an undervolting technique, i.e., underscaling the circuit supply voltage below the nominal level, to improve the power-efficiency of Convolutional Neural Network (CNN) accelerators mapped to Field Programmable Gate Arrays (FPGAs). Undervolting below a safe voltage level can lead to timing faults due to excessive circuit latency increase. We evaluate the reliability-power trade-off for such accelerators. Specifically, we experimentally study the reduced-voltage operation of multiple components of real FPGAs, characterize the corresponding reliability behavior of CNN accelerators, propose techniques to minimize the drawbacks of reduced-voltage operation, and combine undervolting with architectural CNN optimization techniques, i.e., quantization and pruning. We also investigate the effect of environmental temperature on the reliability-power trade-off of such accelerators. We perform experiments on three identical samples of modern Xilinx ZCU102 FPGA platforms with five state-of-the-art image classification CNN benchmarks, which allows us to study the effects of our undervolting technique across both hardware and software variability. We achieve more than a 3X power-efficiency (GOPs/W) gain via undervolting. 2.6X of this gain is the result of eliminating the voltage guardband region, i.e., the safe voltage region below the nominal level that is set by the FPGA vendor to ensure correct functionality under worst-case environmental and circuit conditions. 43% of the power-efficiency gain is due to further undervolting below the guardband, which comes at the cost of accuracy loss in the CNN accelerator. We evaluate an effective frequency underscaling technique that prevents this accuracy loss, and find that it reduces the power-efficiency gain from 43% to 25%.
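To make the power-efficiency bookkeeping behind these figures concrete, the minimal sketch below computes GOPs/W as throughput divided by measured power at different supply-voltage operating points and reports the resulting gain factors. All voltage, throughput, and power values in the snippet are hypothetical placeholders chosen only to illustrate the calculation; they are not measurements from the paper.

```python
# Minimal sketch of the power-efficiency (GOPs/W) calculation described above.
# All numeric values below are hypothetical placeholders for illustration,
# not measurements from the paper.

def gops_per_watt(throughput_gops: float, power_w: float) -> float:
    """Power-efficiency metric: giga-operations per second per watt."""
    return throughput_gops / power_w

# Hypothetical operating points of a CNN accelerator on an FPGA.
nominal    = {"vccint_v": 0.850, "throughput_gops": 300.0, "power_w": 12.0}
guardband  = {"vccint_v": 0.710, "throughput_gops": 300.0, "power_w": 4.6}  # guardband removed, still fault-free
aggressive = {"vccint_v": 0.650, "throughput_gops": 300.0, "power_w": 3.8}  # below guardband, timing faults possible

eff_nominal    = gops_per_watt(nominal["throughput_gops"], nominal["power_w"])
eff_guardband  = gops_per_watt(guardband["throughput_gops"], guardband["power_w"])
eff_aggressive = gops_per_watt(aggressive["throughput_gops"], aggressive["power_w"])

print(f"Nominal:           {eff_nominal:6.1f} GOPs/W")
print(f"Guardband removed: {eff_guardband:6.1f} GOPs/W ({eff_guardband / eff_nominal:.1f}X gain)")
print(f"Below guardband:   {eff_aggressive:6.1f} GOPs/W ({eff_aggressive / eff_nominal:.1f}X gain)")
```

The printed gain factors mirror the kind of comparison made in the abstract (gain at the guardband boundary versus gain from further undervolting); the actual values depend on the board power measured at each voltage level.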


