On Pruning Adversarially Robust Neural Networks

by Vikash Sehwag et al.

In safety-critical but computationally resource-constrained applications, deep learning faces two key challenges: lack of robustness against adversarial attacks and large neural network size (often millions of parameters). While the research community has extensively explored robust training and network pruning independently, each addressing one of these challenges, we show that integrating existing pruning techniques with multiple types of robust training techniques, including verifiably robust training, leads to poor robust accuracy even though such techniques can preserve high regular accuracy. We further demonstrate that making pruning techniques aware of the robust learning objective can lead to a large improvement in performance. We realize this insight by formulating the pruning objective as an empirical risk minimization problem, which is then solved using SGD. We demonstrate the success of the proposed pruning technique across the CIFAR-10, SVHN, and ImageNet datasets with four different robust training techniques: iterative adversarial training, randomized smoothing, MixTrain, and CROWN-IBP. Specifically, at a 99% connection pruning ratio, we achieve gains of up to 3.2, 10.0, and 17.8 percentage points in robust accuracy under state-of-the-art adversarial attacks for the ImageNet, CIFAR-10, and SVHN datasets, respectively. Our code and compressed networks are publicly available at https://github.com/inspire-group/compactness-robustness.
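The abstract's central idea, casting pruning itself as an empirical risk minimization problem over a connection mask and solving it with SGD, can be sketched on a toy linear model. Everything below is an illustrative assumption, not the paper's implementation: the model, the top-k masking of learned importance scores, and the straight-through-style gradient are all simplified, and the plain squared-error loss stands in for the robust training objective the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "pretrained" linear model y = X @ w; the weights w stay frozen.
n, d = 200, 20
X = rng.normal(size=(n, d))
w = rng.normal(size=d)
y = X @ w

# Learn per-weight importance scores s by SGD on the empirical risk of
# the masked model; the binary mask keeps the k highest-scoring weights.
k = 5                        # keep 25% of the connections
s = np.abs(w).copy()         # initialize scores from weight magnitudes

def topk_mask(scores, k):
    """Binary mask selecting the k entries with the largest |score|."""
    m = np.zeros_like(scores)
    m[np.argsort(-np.abs(scores))[:k]] = 1.0
    return m

lr = 0.01
for step in range(500):
    m = topk_mask(s, k)
    err = X @ (w * m) - y            # empirical risk: squared error
    # Straight-through-style update: the gradient w.r.t. s is computed
    # as if the hard mask were the scores themselves.
    grad_s = w * (X.T @ err) / n
    s -= lr * grad_s

mask = topk_mask(s, k)               # final binarized pruning mask
pruned_w = w * mask                  # compressed (99%-style) network
```

In the paper's setting, the same score-optimization loop would run with a robust loss (e.g. on adversarial examples) so that the surviving connections are the ones that matter for robust accuracy, not just clean accuracy.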





Code Repositories


Code and checkpoints of compressed networks for the paper titled "On Pruning Adversarially Robust Neural Networks" (https://arxiv.org/abs/2002.10509).
