Adversarial Sparsity Attacks on Deep Neural Networks

06/14/2020
by Sarada Krithivasan, et al.

Adversarial attacks have exposed serious vulnerabilities in Deep Neural Networks (DNNs) through their ability to force misclassifications via human-imperceptible perturbations to DNN inputs. We explore a new direction in the field of adversarial attacks: attacks that degrade the computational efficiency of DNNs rather than their classification accuracy. Specifically, we propose and demonstrate sparsity attacks, which adversarially modify a DNN's inputs so as to reduce sparsity (i.e., the prevalence of zero values) in its internal activations. Because a wide range of hardware and software techniques for resource-constrained systems exploit sparsity to improve DNN efficiency, the proposed attack increases the execution time and energy consumption of sparsity-optimized DNN implementations, raising concerns over their deployment in latency- and energy-critical applications. We propose a systematic methodology to generate adversarial inputs for sparsity attacks by formulating an objective function that quantifies the network's activation sparsity and minimizing this function using iterative gradient-descent techniques. We launch both white-box and black-box versions of adversarial sparsity attacks on image-recognition DNNs and demonstrate that they decrease activation sparsity by up to 1.82x. We also evaluate the impact of the attack on a sparsity-optimized DNN accelerator, demonstrating latency degradations of up to 1.59x, and study the attack's performance on a sparsity-optimized general-purpose processor. Finally, we evaluate defense techniques such as activation thresholding and input quantization and demonstrate that the proposed attack withstands them, highlighting the need for further efforts in this new direction within the field of adversarial machine learning.
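The core recipe described in the abstract — a differentiable objective quantifying activation sparsity, minimized by iterative gradient steps on the input under a small perturbation budget — can be sketched in miniature. The snippet below is an illustrative toy, not the paper's actual formulation: it attacks a single ReLU layer `a = relu(W @ x + b)`, uses `sigmoid(z)` as an assumed smooth surrogate for the nonzero-indicator `1[z > 0]`, and all hyperparameter names (`epsilon`, `alpha`, `steps`) are placeholders.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def sparsity(a):
    """Fraction of zero-valued activations."""
    return float(np.mean(a == 0.0))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparsity_attack(W, b, x, epsilon=0.5, alpha=0.05, steps=100):
    """Toy sparsity attack on a one-layer ReLU network.

    Perturbs x within an L-infinity ball of radius epsilon so as to
    *reduce* the sparsity of relu(W @ x + b). The count of nonzero
    activations is non-differentiable, so sum(sigmoid(z)) serves as a
    smooth surrogate: pushing pre-activations z positive flips zero
    activations to nonzero. Illustrative sketch only.
    """
    x_adv = x.copy()
    for _ in range(steps):
        z = W @ x_adv + b
        s = sigmoid(z)                 # surrogate for 1[z > 0]
        g_x = W.T @ (s * (1.0 - s))    # gradient of sum(s) w.r.t. x
        x_adv = x_adv + alpha * np.sign(g_x)   # ascend activation density
        # project back into the imperceptibility (L-inf) ball around x
        x_adv = x + np.clip(x_adv - x, -epsilon, epsilon)
    return x_adv
```

A multi-layer, white-box version would sum the surrogate over every ReLU layer and obtain the input gradient by backpropagation through the full network; the black-box variant described in the abstract would have to estimate or transfer this gradient rather than compute it directly.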
