Revisiting Adversarial Training for ImageNet: Architectures, Training and Generalization across Threat Models

03/03/2023
by   Naman D. Singh, et al.

While adversarial training has been extensively studied for ResNet architectures and low-resolution datasets like CIFAR, much less is known for ImageNet. Given the recent debate about whether transformers are more robust than convnets, we revisit adversarial training on ImageNet, comparing ViTs and ConvNeXts. Extensive experiments show that minor changes in architecture, most notably replacing the PatchStem with a ConvStem, and in the training scheme have a significant impact on the achieved robustness. These changes not only increase robustness in the seen ℓ_∞-threat model, but even more so improve generalization to the unseen ℓ_1- and ℓ_2-threat models. Our modified ConvNeXt, ConvNeXt + ConvStem, yields the most robust models across different ranges of model parameters and FLOPs.
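The ℓ_∞-threat model referenced above constrains each adversarial perturbation coordinate-wise to an ε-ball around the input; adversarial training then trains on such perturbed examples, typically generated with projected gradient descent (PGD). Below is a minimal NumPy sketch of an ℓ_∞ PGD attack on a toy logistic-regression model — a hypothetical illustration of the threat model, not the paper's ViT/ConvNeXt training setup:

```python
import numpy as np

def pgd_linf(x, y, w, b, eps=0.03, alpha=0.01, steps=10):
    """l_inf PGD attack on a binary logistic-regression model.

    Toy illustration: x is a single input vector in [0, 1], y is a
    0/1 label, (w, b) are the model weights. eps bounds the maximum
    per-coordinate perturbation, alpha is the step size.
    """
    x_adv = x.copy()
    for _ in range(steps):
        # gradient of the binary cross-entropy loss w.r.t. the input
        z = x_adv @ w + b
        p = 1.0 / (1.0 + np.exp(-z))
        grad = (p - y) * w
        # ascend the loss along the gradient sign (l_inf steepest ascent)
        x_adv = x_adv + alpha * np.sign(grad)
        # project back into the eps-ball around x and the valid input range
        x_adv = np.clip(x_adv, x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv
```

In adversarial training, each minibatch would be replaced by such attacked inputs before the usual gradient update on the model parameters.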
