SWIM: Selective Write-Verify for Computing-in-Memory Neural Accelerators

02/17/2022
by Zheyu Yan, et al.

Computing-in-Memory architectures based on emerging non-volatile memories have demonstrated great potential for deep neural network (DNN) acceleration thanks to their high energy efficiency. However, these emerging devices can suffer from significant variations during the mapping process (i.e., programming weights to the devices), which, if left unaddressed, can cause significant accuracy degradation. The non-ideality of weight mapping can be compensated by iterative programming with a write-verify scheme, i.e., reading the conductance and rewriting if necessary. In all existing works, this practice is applied to every single weight of a DNN as it is being mapped, which requires extensive programming time. In this work, we show that it is only necessary to select a small portion of the weights for write-verify to maintain DNN accuracy, thus achieving significant speedup. We further introduce SWIM, a second-derivative-based technique that requires only a single pass of forward and backward propagation to efficiently select the weights that need write-verify. Experimental results on various DNN architectures and datasets show that SWIM achieves up to 10x programming speedup compared with conventional full write-verify while attaining comparable accuracy.
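Since the abstract only sketches SWIM's selection criterion, the following is a minimal illustrative sketch in PyTorch of how a second-derivative-based sensitivity ranking could select weights for write-verify in a single forward and backward pass. The function name select_write_verify_masks, the ratio parameter, and the use of squared first derivatives as a diagonal-Hessian (Fisher) proxy are assumptions made here for illustration, not the paper's exact formulation.

    import torch

    def select_write_verify_masks(model, loss_fn, inputs, targets, ratio=0.1):
        # One forward and one backward pass, matching SWIM's stated cost.
        model.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        # Second-derivative proxy (assumption): squared first derivatives,
        # a common diagonal-Hessian / Fisher approximation.
        scores = torch.cat([p.grad.pow(2).flatten()
                            for p in model.parameters() if p.grad is not None])
        k = max(1, int(ratio * scores.numel()))
        threshold = torch.topk(scores, k).values.min()
        # Boolean mask per parameter tensor: True marks weights to write-verify.
        return {name: p.grad.pow(2) >= threshold
                for name, p in model.named_parameters() if p.grad is not None}

With ratio=0.1, only the most sensitive tenth of the weights would go through the iterative write-verify loop while the rest are programmed once, which is where an up-to-10x programming speedup could plausibly come from.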


