Pruning in the Face of Adversaries

by Florian Merkle, et al.

The vulnerability of deep neural networks to adversarial examples - inputs with small, imperceptible perturbations - has recently received considerable attention in the research community. At the same time, the number of parameters in state-of-the-art deep learning models has grown massively, with consequences for the memory and computational resources required to train and deploy such models. One approach to controlling the size of a neural network is to reduce its parameter count after training, so-called neural network pruning. The available research on how pruning affects adversarial robustness is fragmentary and often does not adhere to established principles of robustness evaluation. We close this gap by evaluating the robustness of pruned models against L0, L2, and L-infinity attacks across a wide range of attack strengths, several architectures, data sets, pruning methods, and compression rates. Our results confirm that neural network pruning and adversarial robustness are not mutually exclusive: sweet spots can be found that are favorable in terms of both model size and adversarial robustness. Furthermore, we extend our analysis to scenarios that incorporate additional assumptions about the adversary and show that, depending on the scenario, different strategies are optimal.
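The abstract does not specify which pruning methods are compared, but the core idea of reducing a trained network's parameter count can be illustrated with global magnitude pruning, one of the most common approaches: weights with the smallest absolute values are set to zero until a target compression rate is reached. The sketch below is a minimal, self-contained illustration using NumPy; the function name and the choice of magnitude as the pruning criterion are assumptions for demonstration, not the paper's method.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, compression_rate: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries of a weight tensor.

    compression_rate is the fraction of weights removed; e.g. 0.9 keeps
    only the largest 10% of weights by absolute value.
    """
    flat = np.abs(weights).ravel()
    k = int(len(flat) * compression_rate)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value serves as the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    # Keep only weights strictly above the threshold.
    mask = np.abs(weights) > threshold
    return weights * mask

# Toy example: prune 90% of a random 100x100 weight matrix.
rng = np.random.default_rng(0)
w = rng.normal(size=(100, 100))
pruned = magnitude_prune(w, 0.9)
sparsity = float(np.mean(pruned == 0.0))
```

In practice, pruning is usually followed by fine-tuning to recover accuracy, and the paper's evaluation concerns how such compressed models then behave under L0, L2, and L-infinity attacks.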




