Combining Adversaries with Anti-adversaries in Training

04/25/2023
by Xiaoling Zhou et al.

Adversarial training is an effective technique for improving the robustness of deep neural networks. In this study, the influence of adversarial training on deep learning models, in terms of fairness, robustness, and generalization, is theoretically investigated under a more general perturbation scope in which different samples can have different perturbation directions (adversarial or anti-adversarial) and varied perturbation bounds. Our theoretical analysis suggests that, compared with standard adversarial training, combining adversaries and anti-adversaries (samples with anti-adversarial perturbations) in training is more effective at achieving better fairness between classes and a better tradeoff between robustness and generalization in several typical learning scenarios (e.g., learning with noisy labels and imbalanced learning). On the basis of these theoretical findings, a more general learning objective is presented that combines adversaries and anti-adversaries with varied bounds for each training sample, and meta-learning is used to optimize the combination weights. Experiments on benchmark datasets under different learning scenarios verify the theoretical findings and the effectiveness of the proposed methodology.
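
The page itself includes no code, but the learning objective lends itself to a compact illustration. Below is a minimal PyTorch sketch, assuming single-step (FGSM-style) L-infinity perturbations; the function names, tensor shapes, and the fixed values in the usage comments are illustrative assumptions rather than the paper's implementation, and in the paper the per-sample weights and bounds are produced by a meta-learning procedure instead of being supplied by hand.

```python
import torch
import torch.nn.functional as F


def perturb(model, x, y, bound, anti=False):
    """One-step (FGSM-style) L-infinity perturbation of x.

    anti=False moves x along the adversarial direction (loss ascent);
    anti=True moves x along the anti-adversarial direction (loss descent).
    `bound` is a per-sample radius, e.g. shape (batch, 1, 1, 1) for images.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    step = -grad.sign() if anti else grad.sign()
    return (x.detach() + bound * step)


def combined_loss(model, x, y, w, bound_adv, bound_anti):
    """Per-sample weighted mix of adversarial and anti-adversarial losses.

    w has shape (batch,) with entries in [0, 1]; in the paper the weights
    (and bounds) are optimized per sample via meta-learning, while in this
    sketch they are simply passed in as given tensors.
    """
    x_adv = perturb(model, x, y, bound_adv, anti=False)
    x_anti = perturb(model, x, y, bound_anti, anti=True)
    loss_adv = F.cross_entropy(model(x_adv), y, reduction="none")
    loss_anti = F.cross_entropy(model(x_anti), y, reduction="none")
    return (w * loss_adv + (1.0 - w) * loss_anti).mean()


# Hypothetical usage with hand-picked weights and bounds:
# model = ...; x, y = next(iter(loader))
# w = torch.full((x.size(0),), 0.5)
# bound = torch.full((x.size(0), 1, 1, 1), 8 / 255)
# loss = combined_loss(model, x, y, w, bound, bound)
```

Intuitively, the anti-adversarial branch lowers the loss around samples that standard adversarial training would otherwise over-penalize (e.g., noisy-label or minority-class examples), which is how the paper argues the combination improves fairness between classes and the robustness-generalization tradeoff.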

Related research:

03/26/2021  Combating Adversaries with Anti-Adversaries
Deep neural networks are vulnerable to small input perturbations known a...

08/26/2022  Lower Difficulty and Better Robustness: A Bregman Divergence Perspective for Adversarial Training
In this paper, we investigate on improving the adversarial robustness ob...

10/02/2022  Adaptive Smoothness-weighted Adversarial Training for Multiple Perturbations with Its Stability Analysis
Adversarial Training (AT) has been demonstrated as one of the most effec...

10/18/2022  Scaling Adversarial Training to Large Perturbation Bounds
The vulnerability of Deep Neural Networks to Adversarial Attacks has fue...

10/20/2020  Towards Understanding the Dynamics of the First-Order Adversaries
An acknowledged weakness of neural networks is their vulnerability to ad...

07/29/2020  Stylized Adversarial Defense
Deep Convolution Neural Networks (CNNs) can easily be fooled by subtle, ...

02/20/2018  Out-distribution training confers robustness to deep neural networks
The easiness at which adversarial instances can be generated in deep neu...
