Adaptive Smoothness-weighted Adversarial Training for Multiple Perturbations with Its Stability Analysis

10/02/2022
by   Jiancong Xiao, et al.

Adversarial Training (AT) has been demonstrated to be one of the most effective defenses against adversarial examples. While most existing works focus on AT with a single type of perturbation (e.g., ℓ_∞ attacks), DNNs face threats from different types of adversarial examples. Adversarial training for multiple perturbations (ATMP) has therefore been proposed to generalize adversarial robustness across perturbation types (ℓ_1, ℓ_2, and ℓ_∞ norm-bounded perturbations). However, the resulting models exhibit a trade-off between different attacks, and there has been no theoretical analysis of ATMP, which limits its further development. In this paper, we first provide a smoothness analysis of ATMP and show that ℓ_1, ℓ_2, and ℓ_∞ adversaries contribute differently to the smoothness of the ATMP loss function. Based on this, we develop stability-based excess risk bounds and propose adaptive smoothness-weighted adversarial training for multiple perturbations. Theoretically, our algorithm yields better bounds. Empirically, our experiments on CIFAR10 and CIFAR100 achieve state-of-the-art performance against mixtures of multiple perturbation attacks.
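As a rough illustration of the idea described in the abstract, the PyTorch-style sketch below trains against ℓ_1, ℓ_2, and ℓ_∞ adversaries simultaneously and weights each perturbation type's loss by a per-attack coefficient. The `pgd_attack` and `atmp_step` helpers, the simplified ℓ_1/ℓ_2 steps and projections, and all weights and budgets are illustrative assumptions, not the authors' implementation; in the paper, the weights are derived from the smoothness analysis, which is not reproduced here.

```python
# Minimal sketch of smoothness-weighted adversarial training for multiple
# perturbations (assumes 4-D image batches, e.g. CIFAR). The attack routine,
# projections, and weighting scheme are simplified illustrations, not the
# paper's exact algorithm.
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, norm, eps, alpha, steps):
    """Simplified PGD inside an l1/l2/linf ball of radius eps."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        if norm == "linf":
            step = alpha * grad.sign()
        else:
            # l1 / l2: per-example normalized gradient step (a simplification;
            # practical l1 attacks typically use sparser update directions).
            p = 1 if norm == "l1" else 2
            g = grad.flatten(1).norm(p=p, dim=1).view(-1, 1, 1, 1)
            step = alpha * grad / (g + 1e-12)
        delta = (delta + step).detach()
        if norm == "linf":
            delta = delta.clamp(-eps, eps)  # exact linf projection
        else:
            # Rescale into the eps-ball (exact for l2, approximate for l1).
            p = 1 if norm == "l1" else 2
            n = delta.flatten(1).norm(p=p, dim=1).view(-1, 1, 1, 1)
            delta = delta * (eps / (n + 1e-12)).clamp(max=1.0)
        delta.requires_grad_(True)
    return (x + delta).detach()


def atmp_step(model, optimizer, x, y, weights, budgets):
    """One ATMP update: sum adversarial losses, weighted per perturbation type."""
    optimizer.zero_grad()
    total = x.new_zeros(())
    for norm, w in weights.items():
        eps, alpha, steps = budgets[norm]
        x_adv = pgd_attack(model, x, y, norm, eps, alpha, steps)
        total = total + w * F.cross_entropy(model(x_adv), y)
    total.backward()
    optimizer.step()
    return float(total)


# Example usage with hypothetical weights/budgets (not taken from the paper):
# weights = {"linf": 0.5, "l2": 0.3, "l1": 0.2}
# budgets = {"linf": (8/255, 2/255, 10), "l2": (0.5, 0.1, 10), "l1": (12.0, 2.0, 10)}
# loss = atmp_step(model, optimizer, x_batch, y_batch, weights, budgets)
```

In this sketch the weights are fixed and supplied by hand; the paper's adaptive scheme would instead set them from per-attack smoothness estimates so that attacks contributing more to the loss smoothness are weighted accordingly.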


research
04/30/2019

Adversarial Training and Robustness for Multiple Perturbations

Defenses against adversarial examples, such as adversarial training, are...
research
02/09/2022

Towards Compositional Adversarial Robustness: Generalizing Adversarial Training to Composite Semantic Perturbations

Model robustness against adversarial examples of single perturbation typ...
research
06/07/2022

Adaptive Regularization for Adversarial Training

Adversarial training, which is to enhance robustness against adversarial...
research
04/25/2023

Combining Adversaries with Anti-adversaries in Training

Adversarial training is an effective learning technique to improve the r...
research
10/01/2021

Calibrated Adversarial Training

Adversarial training is an approach of increasing the robustness of mode...
research
10/18/2022

Scaling Adversarial Training to Large Perturbation Bounds

The vulnerability of Deep Neural Networks to Adversarial Attacks has fue...
research
11/09/2019

Adaptive versus Standard Descent Methods and Robustness Against Adversarial Examples

Adversarial examples are a pervasive phenomenon of machine learning mode...
