Data Augmentation Can Improve Robustness

11/09/2021
by Sylvestre-Alvise Rebuffi, et al.

Adversarial training suffers from robust overfitting, a phenomenon where the robust test accuracy starts to decrease during training. In this paper, we focus on reducing robust overfitting by using common data augmentation schemes. We demonstrate that, contrary to previous findings, when combined with model weight averaging, data augmentation can significantly boost robust accuracy. Furthermore, we compare various augmentation techniques and observe that spatial composition techniques work best for adversarial training. Finally, we evaluate our approach on CIFAR-10 against ℓ_∞ and ℓ_2 norm-bounded perturbations of size ϵ = 8/255 and ϵ = 128/255, respectively. We show large absolute improvements of +2.93% in robust accuracy compared to previous state-of-the-art methods. In particular, against ℓ_∞ norm-bounded perturbations of size ϵ = 8/255, our model reaches 60.07% robust accuracy without using any external data. We also achieve a significant performance boost with this approach while using other architectures and datasets such as CIFAR-100, SVHN and TinyImageNet.
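The model weight averaging mentioned above maintains a second, averaged copy of the network whose parameters track an exponential moving average of the online weights after each optimizer step. A minimal sketch of that update, assuming plain Python dicts as stand-ins for model parameters (the helper name and decay value are illustrative, not taken from the paper):

```python
def update_ema(ema_params, online_params, decay=0.995):
    """Pull the averaged model's weights toward the current online weights.

    ema_params, online_params: dicts mapping parameter names to floats
    (stand-ins for tensors). Called once after every optimizer update.
    """
    for name, w in online_params.items():
        ema_params[name] = decay * ema_params[name] + (1.0 - decay) * w
    return ema_params

# Toy usage: two "parameters", three optimizer steps with constant weights.
ema = {"w": 0.0, "b": 0.0}
for step_weights in [{"w": 1.0, "b": 2.0}] * 3:
    ema = update_ema(ema, step_weights, decay=0.5)
print(ema)  # {'w': 0.875, 'b': 1.75}
```

At evaluation time the averaged weights, not the online ones, are used; the averaging smooths out the late-training oscillations associated with robust overfitting.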

Related research

- Fixing Data Augmentation to Improve Adversarial Robustness (03/02/2021): Adversarial training suffers from robust overfitting, a phenomenon where...
- Data Augmentation Alone Can Improve Adversarial Training (01/24/2023): Adversarial training suffers from the issue of robust overfitting, which...
- Rethinking Adversarial Training with A Simple Baseline (06/13/2023): We report competitive results on RobustBench for CIFAR and SVHN using a ...
- Adversarially Optimized Mixup for Robust Classification (03/22/2021): Mixup is a procedure for data augmentation that trains networks to make ...
- Improving Robustness using Generated Data (10/18/2021): Recent work argues that robust training requires substantially larger da...
- Generative Robust Classification (12/14/2022): Training adversarially robust discriminative (i.e., softmax) classifier ...
- Semantic Perturbations with Normalizing Flows for Improved Generalization (08/18/2021): Data augmentation is a widely adopted technique for avoiding overfitting...
