Single-step Adversarial Training with Dropout Scheduling

by Vivek B. S. et al.

Deep learning models have shown impressive performance across a spectrum of computer vision applications, including medical diagnosis and autonomous driving. A major concern with these models is their susceptibility to adversarial attacks. Recognizing the importance of this issue, more researchers are working towards developing robust models that are less affected by adversarial attacks. Adversarial training shows promising results in this direction. In the adversarial training regime, models are trained on mini-batches augmented with adversarial samples. To reduce computational complexity, fast and simple methods (e.g., single-step gradient ascent) are used for generating these adversarial samples. It has been shown that models trained using single-step adversarial training (in which adversarial samples are generated with a non-iterative method) are only pseudo-robust, and this pseudo-robustness is attributed to the gradient masking effect. However, existing works fail to explain when and why gradient masking occurs during single-step adversarial training. In this work, (i) we show that models trained using single-step adversarial training learn to prevent the generation of single-step adversaries, and that this is due to over-fitting of the model during the initial stages of training, and (ii) to mitigate this effect, we propose a single-step adversarial training method with dropout scheduling. Unlike models trained using existing single-step adversarial training methods, models trained using the proposed method are robust against both single-step and multi-step adversarial attacks, and their performance is on par with models trained using computationally expensive multi-step adversarial training methods, in both white-box and black-box settings.
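To make the idea concrete, here is a minimal sketch of single-step adversarial training combined with a decaying dropout rate, in the spirit of the abstract. It uses a toy logistic-regression model rather than the paper's deep networks, and the linear dropout-decay schedule, the function names, and all hyperparameters (`epochs`, `lr`, `eps`, `drop_start`) are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Single-step adversary: x + eps * sign(dL/dx) for logistic loss."""
    p = sigmoid(x @ w + b)
    # For L = -y log p - (1-y) log(1-p):  dL/dz = p - y,  dz/dx = w.
    grad_x = (p - y)[:, None] * w[None, :]
    return x + eps * np.sign(grad_x)

def train_single_step_adv(x, y, epochs=50, lr=0.1, eps=0.1,
                          drop_start=0.5, seed=0):
    """Single-step adversarial training with a (assumed) linearly
    decaying dropout rate applied to the inputs."""
    rng = np.random.default_rng(seed)
    w = np.zeros(x.shape[1])
    b = 0.0
    for epoch in range(epochs):
        # Dropout scheduling: start at drop_start, decay linearly to 0.
        rate = drop_start * (1.0 - epoch / max(epochs - 1, 1))
        # Augment the mini-batch with single-step adversarial samples.
        x_adv = fgsm(x, y, w, b, eps)
        batch = np.concatenate([x, x_adv])
        labels = np.concatenate([y, y])
        # Inverted dropout: zero features with prob. `rate`, rescale the rest.
        if rate > 0:
            mask = (rng.random(batch.shape) >= rate) / (1.0 - rate)
        else:
            mask = 1.0
        h = batch * mask
        # One gradient-descent step on the logistic loss.
        p = sigmoid(h @ w + b)
        grad = p - labels
        w -= lr * (h.T @ grad) / len(labels)
        b -= lr * grad.mean()
    return w, b
```

On two well-separated Gaussian clusters this trains a classifier whose clean accuracy is high while each update also sees FGSM-style adversaries; the scheduled dropout is what the paper argues prevents the early over-fitting that causes gradient masking.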


