Tartan: Accelerating Fully-Connected and Convolutional Layers in Deep Learning Networks by Exploiting Numerical Precision Variability

07/27/2017
by Alberto Delmas, et al.

Tartan (TRT), a hardware accelerator for inference with Deep Neural Networks (DNNs), is presented and evaluated on Convolutional Neural Networks. TRT exploits the variable per-layer precision requirements of DNNs to deliver execution time that is proportional to the precision p, in bits, used per layer for both convolutional and fully-connected layers. Prior art has demonstrated an accelerator with the same execution performance only for convolutional layers. Experiments on image classification CNNs show that, on average across all networks studied, TRT outperforms a state-of-the-art bit-parallel accelerator by 1.90x without any loss in accuracy while being 1.17x more energy efficient. TRT requires no network retraining and enables trading off accuracy for additional improvements in execution performance and energy efficiency. For example, if a 1% relative loss in accuracy is acceptable, TRT is on average 2.04x faster and 1.25x more energy efficient than a conventional bit-parallel accelerator. Also presented is a Tartan configuration that processes 2 bits at a time: it requires less area than the 1-bit configuration and improves efficiency to 1.24x over the bit-parallel baseline while being 73% faster for fully-connected layers.
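The claim that execution time scales with the per-layer precision p follows from bit-serial processing: a layer quantized to p bits needs only p passes over the data instead of a fixed 16-bit-wide operation. The snippet below is a minimal software sketch of that idea, not the TRT hardware or the paper's actual design; the function name and example values are illustrative assumptions.

```python
# Minimal sketch (assumed, not from the paper): a bit-serial inner product
# whose work grows linearly with the activation precision p, illustrating
# why execution time can be proportional to per-layer precision in bits.

def bit_serial_dot(activations, weights, p):
    """Compute sum(a * w) by feeding activations one bit at a time.

    Each of the p passes multiplies the weights by a single activation bit
    (a cheap AND/add in hardware) and shifts the running sum, so the cycle
    count scales with p rather than with a fixed 16-bit datapath width.
    """
    acc = 0
    for bit in range(p - 1, -1, -1):           # most-significant bit first
        partial = sum(((a >> bit) & 1) * w      # 1-bit x weight products
                      for a, w in zip(activations, weights))
        acc = (acc << 1) + partial              # shift-and-add accumulation
    return acc

# Example: 3-bit activations need only 3 passes instead of 16.
acts, wts = [5, 3, 7], [2, -1, 4]
assert bit_serial_dot(acts, wts, p=3) == sum(a * w for a, w in zip(acts, wts))
```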
