Dropping Pixels for Adversarial Robustness

05/01/2019
by Hossein Hosseini, et al.

Deep neural networks are vulnerable to adversarial examples. In this paper, we propose to train and test networks on randomly subsampled images with high drop rates. We show that this approach significantly improves robustness against adversarial examples under bounded L_0, L_2, and L_inf perturbations, while only slightly reducing standard accuracy. We argue that subsampling pixels can be viewed as providing a set of robust features for the input image, and thus improves robustness without requiring adversarial training.
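As a rough sketch of the preprocessing described here (the abstract does not specify implementation details, so the NumPy-based masking, the zero fill value for dropped pixels, and the function name below are assumptions), one way to randomly subsample pixels with a high drop rate is:

import numpy as np

def random_pixel_drop(image, drop_rate=0.9, fill_value=0.0, rng=None):
    # Randomly drop a fraction `drop_rate` of pixel locations, replacing
    # them with `fill_value`; the same mask is shared across channels so
    # whole pixels (not individual channel values) are removed.
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    keep_mask = rng.random((h, w)) >= drop_rate
    if image.ndim == 3:
        keep_mask = keep_mask[..., None]  # broadcast the mask over color channels
    return np.where(keep_mask, image, fill_value)

# The same subsampling would be applied at both training and test time,
# e.g. a 90% drop rate on a 32x32 RGB image.
img = np.random.rand(32, 32, 3).astype(np.float32)
subsampled = random_pixel_drop(img, drop_rate=0.9)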
