PANTHER: A Programmable Architecture for Neural Network Training Harnessing Energy-efficient ReRAM

by Aayush Ankit et al.

The wide adoption of deep neural networks has been accompanied by ever-increasing energy and performance demands due to the expensive nature of training them. Numerous special-purpose architectures have been proposed to accelerate training: both digital and hybrid digital-analog designs based on resistive RAM (ReRAM) crossbars. ReRAM-based accelerators have demonstrated the effectiveness of ReRAM crossbars at performing the matrix-vector multiplication operations that are prevalent in training. However, they still suffer from inefficiency due to the use of serial reads and writes for performing the weight-gradient and update step. A few works have demonstrated the possibility of performing outer products in crossbars, which can be used to realize the weight-gradient and update step without serial reads and writes. However, these works have been limited to low-precision operations, which are insufficient for typical training workloads. Moreover, they have been confined to a limited set of training algorithms for fully-connected layers only. To address these limitations, we propose a bit-slicing technique for enhancing the precision of ReRAM-based outer products, which differs substantially from the bit-slicing used for matrix-vector multiplication alone. We incorporate this technique into a crossbar architecture with three variants tailored to different training algorithms. To evaluate our design on different types of neural network layers (fully-connected, convolutional, etc.) and training algorithms, we develop PANTHER, an ISA-programmable training accelerator with compiler support. Our evaluation shows that PANTHER achieves up to 8.02×, 54.21×, and 103× energy reductions as well as 7.16×, 4.02×, and 16× execution-time reductions compared to digital accelerators, ReRAM-based accelerators, and GPUs, respectively.
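
As a rough illustration of the ideas summarized above (not PANTHER's actual design), the sketch below shows in Python/NumPy how the weight-gradient and update step of a fully-connected layer reduces to an outer product, which is the operation a ReRAM crossbar can apply in place without serial reads and writes, and how a higher-precision weight matrix can be bit-sliced across several low-precision crossbar slices. All names and parameters (BITS_PER_SLICE, NUM_SLICES, the 12-bit fixed-point scaling) are illustrative assumptions, not the paper's interface.

```python
import numpy as np

BITS_PER_SLICE = 2   # bits stored per ReRAM cell (assumed value)
NUM_SLICES = 8       # 8 slices x 2 bits = 16-bit fixed-point weights (assumed)

def outer_product_update(W, x, delta, lr):
    """SGD update W <- W - lr * outer(delta, x): the outer product is the
    operation a crossbar can realize in place, without serial reads/writes."""
    return W - lr * np.outer(delta, x)

def bit_slice(W_fixed):
    """Split a non-negative fixed-point matrix into NUM_SLICES slices of
    BITS_PER_SLICE bits each (least-significant slice first), mimicking a
    mapping of one logical weight matrix onto several low-precision crossbars."""
    slices, remaining = [], W_fixed.astype(np.int64)
    for _ in range(NUM_SLICES):
        slices.append(remaining & ((1 << BITS_PER_SLICE) - 1))
        remaining = remaining >> BITS_PER_SLICE
    return slices

def reassemble(slices):
    """Recombine the bit-slices into the full-precision fixed-point matrix."""
    total = np.zeros_like(slices[0])
    for i, s in enumerate(slices):
        total += s << (i * BITS_PER_SLICE)
    return total

# Tiny example: a 4-output, 3-input fully-connected layer.
rng = np.random.default_rng(0)
W = rng.uniform(0.0, 1.0, (4, 3))      # current weights
x = rng.uniform(0.0, 1.0, 3)           # forward activation (layer input)
delta = rng.uniform(-1.0, 1.0, 4)      # backpropagated error for this layer
W_new = outer_product_update(W, x, delta, lr=0.01)

# Fixed-point view of the weight magnitudes, sliced and reassembled losslessly.
W_fixed = np.abs(np.round(W_new * (1 << 12)).astype(np.int64))  # 12 fractional bits (assumed)
assert np.array_equal(reassemble(bit_slice(W_fixed)), W_fixed)
```

In a crossbar realization, each slice would reside in its own array of low-precision cells, and the outer-product update would be applied to every slice in parallel rather than by reading weights out and writing them back serially; the NumPy version above only mirrors the arithmetic, not the analog hardware.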

