A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference

10/06/2020
by Sanghyun Hong, et al.

Recent increases in the computational demands of deep neural networks (DNNs), combined with the observation that most input samples require only simple models, have sparked interest in input-adaptive multi-exit architectures, such as MSDNets or Shallow-Deep Networks. These architectures enable faster inferences and could bring DNNs to low-power devices, e.g., in the Internet of Things (IoT). However, it is unknown whether the computational savings provided by this approach are robust against adversarial pressure. In particular, an adversary may aim to slow down adaptive DNNs by increasing their average inference time, a threat analogous to the denial-of-service attacks from the Internet. In this paper, we conduct a systematic evaluation of this threat by experimenting with three generic multi-exit DNNs (based on VGG16, MobileNet, and ResNet56) and a custom multi-exit architecture, on two popular image classification benchmarks (CIFAR-10 and Tiny ImageNet). To this end, we show that adversarial sample-crafting techniques can be modified to cause slowdown, and we propose a metric for comparing their impact on different architectures. We show that a slowdown attack reduces the efficacy of multi-exit DNNs by 90-100% in a typical IoT deployment. We also show that it is possible to craft universal, reusable perturbations and that the attack can be effective in realistic black-box scenarios, where the attacker has limited knowledge about the victim. Finally, we show that adversarial training provides limited protection against slowdowns. These results suggest that further research is needed for defending multi-exit architectures against this emerging threat.
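As a concrete illustration of the attack idea, the sketch below shows a PGD-style slowdown objective against a toy multi-exit classifier in PyTorch. This is not the paper's exact formulation: the MultiExitNet model, the push-toward-uniform loss, and the budget parameters (eps, alpha, steps) are illustrative assumptions. The intuition is that if every early exit's softmax is driven toward the uniform distribution, confidence-based exit criteria never trigger and the input is forced through the full network.

# A minimal sketch (assumptions noted above), not the paper's exact method:
# perturb the input so every early exit becomes maximally uncertain, which
# prevents confidence-based early exiting and forces the full forward pass.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiExitNet(nn.Module):
    """Toy multi-exit CNN: every exit produces its own logits."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.exit1 = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                   nn.Linear(16, num_classes))
        self.exit2 = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                   nn.Linear(32, num_classes))

    def forward(self, x):
        h1 = self.block1(x)
        h2 = self.block2(h1)
        return [self.exit1(h1), self.exit2(h2)]  # logits from every exit

def slowdown_pgd(model, x, eps=8 / 255, alpha=2 / 255, steps=10):
    """Push each exit's softmax toward uniform so no early exit fires."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        losses = []
        for logits in model(x_adv):
            log_probs = F.log_softmax(logits, dim=1)
            # Minimizing -mean(log p) equals minimizing KL(uniform || p) up to
            # a constant, pulling this exit's output toward the uniform
            # distribution (maximum uncertainty).
            losses.append(-log_probs.mean(dim=1).mean())
        grad = torch.autograd.grad(sum(losses), x_adv)[0]
        # Gradient descent on the summed loss, then project back into the
        # L-infinity ball of radius eps around the clean input.
        x_adv = x_adv.detach() - alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

if __name__ == "__main__":
    model = MultiExitNet().eval()
    x = torch.rand(4, 3, 32, 32)       # CIFAR-10-sized inputs
    x_adv = slowdown_pgd(model, x)
    print((x_adv - x).abs().max())     # perturbation stays within eps

Averaging the same objective over a batch of inputs would yield a single reusable perturbation, in the spirit of the universal attack variant mentioned in the abstract; the key difference from standard adversarial examples is that the objective targets the exit mechanism rather than the predicted label.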

