AX-DBN: An Approximate Computing Framework for the Design of Low-Power Discriminative Deep Belief Networks

by Ian Colbert et al.

The power budget for embedded hardware implementations of Deep Learning algorithms can be extremely tight. To address implementation challenges in such domains, new design paradigms, like Approximate Computing, have drawn significant attention. Approximate Computing exploits the innate error resilience of Deep Learning algorithms, a property that makes them amenable to deployment on low-power computing platforms. This paper describes an Approximate Computing design methodology, AX-DBN, for an architecture belonging to the class of stochastic Deep Learning algorithms known as Deep Belief Networks (DBNs). Specifically, we consider procedures for efficiently implementing the Discriminative Deep Belief Network (DDBN), a stochastic neural network used for classification tasks. To optimize the DDBN for hardware implementation, we explore the use of: (a) limited-precision neurons and functional approximations of activation functions; (b) criticality analysis to identify the nodes in the network that can operate at reduced precision while the network maintains target accuracy levels; and (c) a greedy search methodology with incremental retraining to determine the optimal precision reduction for all neurons that maximizes power savings. Using the AX-DBN methodology proposed in this paper, we present experimental results across several network architectures that show significant power savings under a user-specified accuracy loss constraint with respect to ideal full-precision implementations.
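The greedy search in step (c) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the toy accuracy-loss model passed in as `evaluate`, and the stubbed `retrain` hook are all assumptions introduced here to show the control flow of visiting neurons in order of criticality and lowering each neuron's precision until the accuracy-loss budget is hit.

```python
def greedy_precision_search(criticality, full_bits=16, min_bits=2,
                            max_acc_loss=0.01, evaluate=None, retrain=None):
    """Greedy per-neuron precision reduction (illustrative sketch).

    criticality : per-neuron criticality score (lower = more tolerant
                  of approximation), as produced by a criticality analysis.
    evaluate    : callable mapping a candidate bit-width assignment to an
                  accuracy loss (hypothetical interface).
    retrain     : optional incremental-retraining hook called after each
                  accepted precision reduction (stubbed here).
    """
    bits = [full_bits] * len(criticality)
    # Visit neurons from least critical to most critical, so the neurons
    # most tolerant of approximation are reduced first.
    order = sorted(range(len(criticality)), key=lambda i: criticality[i])
    for i in order:
        while bits[i] > min_bits:
            trial = bits[:]
            trial[i] -= 1  # try one fewer bit for this neuron
            if evaluate(trial) <= max_acc_loss:
                bits = trial  # accept: still within the accuracy budget
                if retrain:
                    retrain(bits)  # incremental retraining step
            else:
                break  # budget exceeded; keep this neuron's current width
    return bits
```

As a usage example with a toy linear accuracy-loss model (purely illustrative, where each removed bit costs accuracy in proportion to the neuron's criticality):

```python
crit = [0.001, 0.002, 0.01]
toy_loss = lambda b: sum(c * (8 - x) for c, x in zip(crit, b)) * 0.1
bits = greedy_precision_search(crit, full_bits=8, min_bits=2,
                               max_acc_loss=0.01, evaluate=toy_loss)
# All returned widths lie in [min_bits, full_bits] and respect the budget.
```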




