NeuroAttack: Undermining Spiking Neural Networks Security through Externally Triggered Bit-Flips

05/16/2020
by Valerio Venceslai, et al.
Université Polytechnique Hauts-de-France
Politecnico di Torino

Due to their proven efficiency, machine-learning systems are deployed in a wide range of complex real-life problems. In particular, Spiking Neural Networks (SNNs) have emerged as a promising solution to the accuracy, resource-utilization, and energy-efficiency challenges of machine-learning systems. While these systems are going mainstream, they have inherent security and reliability issues. In this paper, we propose NeuroAttack, a cross-layer attack that threatens the integrity of SNNs by exploiting low-level reliability issues through a high-level attack. Specifically, we trigger a stealthy, fault-injection-based hardware backdoor with carefully crafted adversarial input noise. Our results on Deep Neural Networks (DNNs) and SNNs reveal a serious integrity threat to state-of-the-art machine-learning techniques.
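To illustrate the low-level mechanism such bit-flip attacks exploit, the following minimal Python sketch shows how flipping a single bit of a 32-bit floating-point network weight can change its value by many orders of magnitude. This is a generic illustration of a bit-flip fault, not the authors' implementation; the function name `flip_bit` and the float32 weight representation are assumptions made for the example.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit of a 32-bit IEEE-754 float and return the result."""
    # Pack the float into its raw 32-bit integer representation,
    # flip the chosen bit with XOR, then unpack back to a float.
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    bits ^= 1 << bit
    (flipped,) = struct.unpack("<f", struct.pack("<I", bits))
    return flipped

# Flipping the most significant exponent bit (bit 30) of a small weight
# turns 0.5 into 2**127 (about 1.7e38), enough to derail a network's output.
w = 0.5
print(flip_bit(w, 30))
```

A single such flip in a sensitive weight can silently corrupt a model's predictions, which is why fault-injection backdoors of this kind are hard to detect through accuracy checks alone.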

