A Survey on Impact of Transient Faults on BNN Inference Accelerators

by Navid Khoshavi et al.

Over the past years, the philosophy for designing artificial intelligence algorithms has shifted significantly toward automatically extracting composable systems from massive data volumes. This paradigm shift has been expedited by the boom in big data, which enables us to easily access and analyze very large data sets. The most well-known class of big data analysis techniques is deep learning. These models require significant compute power and extremely frequent memory accesses, which necessitates novel approaches that reduce memory traffic and improve power efficiency, alongside domain-specific hardware accelerators designed to support current and future data sizes and model structures. Current trends in designing application-specific integrated circuits barely consider the essential requirement that complex neural network computation remain resilient in the presence of soft errors. A soft error may strike either memory storage or combinational logic in the hardware accelerator and alter the architectural behavior such that the precision of the results falls below the minimum allowable correctness. In this study, we demonstrate that the impact of soft errors on a customized deep learning algorithm called a Binarized Neural Network (BNN) can cause drastic image misclassification. Our experimental results show that the accuracy of the image classifier can drop by as much as 76.70% across the CIFAR-10 and MNIST datasets during fault injection in the worst-case scenarios.
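The fault model described above (a soft error flipping stored bits in a BNN accelerator's weight memory) can be sketched in a few lines. This is a minimal, hypothetical illustration, not the authors' actual fault-injection framework: weights are binarized to {-1, +1} with a sign function, and a single-bit upset in a binarized weight simply flips its sign.

```python
import numpy as np

def binarize(w):
    # Deterministic binarization: sign function mapping real-valued
    # weights to {-1, +1}, as in typical BNN training/inference.
    return np.where(w >= 0, 1, -1).astype(np.int8)

def inject_bit_flips(weights, n_flips, rng):
    # Emulate transient faults striking on-chip weight storage:
    # flip the sign of n_flips randomly chosen binarized weights
    # (a single-bit upset in a 1-bit weight inverts its value).
    faulty = weights.copy()
    flat = faulty.reshape(-1)
    idx = rng.choice(flat.size, size=n_flips, replace=False)
    flat[idx] *= -1
    return faulty

rng = np.random.default_rng(0)
w = binarize(rng.standard_normal((128, 128)))
w_faulty = inject_bit_flips(w, n_flips=100, rng=rng)
print(int((w != w_faulty).sum()))  # 100 weights corrupted
```

Running the corrupted weights through inference and comparing classification accuracy against the fault-free baseline is how the accuracy-drop numbers in studies like this one are typically measured.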




Related research:

- SoftSNN: Low-Cost Fault Tolerance for Spiking Neural Network Accelerators under Soft Errors
- R2F: A Remote Retraining Framework for AIoT Processors with Computing Errors
- Compiler Infrastructure for Specializing Domain-Specific Memory Templates
- Tetris: Re-architecting Convolutional Neural Network Computation for Machine Learning Accelerators
- Domain-Specific Computational Storage for Serverless Computing
- AI and ML Accelerator Survey and Trends
- CACTUS: a Comprehensive Abstraction and Classification Tool for Uncovering Structures
