Thales: Formulating and Estimating Architectural Vulnerability Factors for DNN Accelerators

by Abhishek Tyagi et al.

As Deep Neural Networks (DNNs) are increasingly deployed in safety-critical and privacy-sensitive applications such as autonomous driving and biometric authentication, it is critical to understand their fault tolerance. Prior work primarily focuses on metrics such as the Failures In Time (FIT) rate and the Silent Data Corruption (SDC) rate, which quantify how often a device fails. Instead, this paper focuses on quantifying DNN accuracy given that a transient error has occurred, which tells us how well a network behaves when a transient error does occur. We call this metric Resiliency Accuracy (RA). We show that the existing RA formulation is fundamentally inaccurate, because it incorrectly assumes that software variables (model weights/activations) have equal faulty probabilities under hardware transient faults. We present an algorithm that captures the faulty probabilities of DNN variables under transient faults and thus provides correct RA estimations, validated by hardware. To accelerate RA estimation, we reformulate the RA calculation as a Monte Carlo integration problem and solve it using importance sampling driven by DNN-specific heuristics. Using our lightweight RA estimation method, we show that transient faults lead to far greater accuracy degradation than today's DNN resiliency tools estimate. We further show how our RA estimation tool can help design more resilient DNNs by integrating it with a Network Architecture Search framework.
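To illustrate the idea behind the Monte Carlo reformulation, the following is a minimal, hypothetical sketch (not the paper's actual algorithm). It treats RA as the expectation of accuracy over fault sites, RA = Σᵢ pᵢ·accᵢ, where pᵢ is the non-uniform probability that variable i is the one corrupted by a transient fault and accᵢ is the accuracy with variable i faulty. Instead of exhaustively evaluating every fault site, it samples sites from a heuristic proposal distribution q and reweights by p/q. All names (`estimate_ra`, the toy `p`, `acc`, `q` arrays) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_ra(fault_probs, accuracy_under_fault, proposal, n_samples=1000):
    """Importance-sampling estimate of Resiliency Accuracy (RA).

    fault_probs          : p_i, true per-variable faulty probabilities (sums to 1)
    accuracy_under_fault : callable i -> model accuracy with variable i corrupted
                           (in practice, a full fault-injection run)
    proposal             : q_i, sampling distribution biased toward variables a
                           DNN-specific heuristic deems vulnerable
    """
    idx = rng.choice(len(fault_probs), size=n_samples, p=proposal)
    weights = fault_probs[idx] / proposal[idx]   # importance weights p/q
    accs = np.array([accuracy_under_fault(i) for i in idx])
    return float(np.mean(weights * accs))

# Toy example with 4 fault sites: faults in variable 0 are both the most
# likely (hardware-derived probability) and the most damaging to accuracy.
p = np.array([0.7, 0.1, 0.1, 0.1])        # true faulty probabilities
acc = np.array([0.20, 0.90, 0.95, 0.93])  # accuracy if variable i is faulty
q = np.array([0.55, 0.15, 0.15, 0.15])    # heuristic proposal distribution

ra_true = float(np.sum(p * acc))          # exact RA, for reference
ra_est = estimate_ra(p, lambda i: acc[i], q, n_samples=5000)
print(ra_true, ra_est)
```

Note how a uniform-fault assumption would instead report mean(acc) ≈ 0.745, far more optimistic than the hardware-aware RA of 0.418 here; this mirrors the paper's point that equal-probability formulations overestimate resilience.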




Ranger: Boosting Error Resilience of Deep Neural Networks through Range Restriction

With the emerging adoption of deep neural networks (DNNs) in the HPC dom...

ISimDL: Importance Sampling-Driven Acceleration of Fault Injection Simulations for Evaluating the Robustness of Deep Learning

Deep Learning (DL) systems have proliferated in many applications, requi...

Lightning: Striking the Secure Isolation on GPU Clouds with Transient Hardware Faults

GPU clouds have become a popular computing platform because of the cost ...

enpheeph: A Fault Injection Framework for Spiking and Compressed Deep Neural Networks

Research on Deep Neural Networks (DNNs) has focused on improving perform...

FAT: Training Neural Networks for Reliable Inference Under Hardware Faults

Deep neural networks (DNNs) are state-of-the-art algorithms for multiple...

Automated design of error-resilient and hardware-efficient deep neural networks

Applying deep neural networks (DNNs) in mobile and safety-critical syste...

DeepVigor: Vulnerability Value Ranges and Factors for DNNs' Reliability Assessment

Deep Neural Networks (DNNs) and their accelerators are being deployed ev...
