Adversarial Distributional Training for Robust Deep Learning

02/14/2020
by Zhijie Deng, et al.

Adversarial training (AT) is among the most effective techniques for improving model robustness, augmenting the training data with adversarial examples. However, adversarially trained models often generalize poorly, both to held-out test data and to attack algorithms unseen during training. In this paper, we introduce a novel adversarial distributional training (ADT) framework for learning robust models. Specifically, we formulate ADT as a minimax optimization problem, in which the inner maximization learns an adversarial distribution that characterizes the potential adversarial examples around a natural input, while the outer minimization trains a robust classifier by minimizing the expected loss over the worst-case adversarial distributions. We provide a theoretical analysis of how to solve the minimax problem, leading to a general algorithm for ADT, and we further propose three different approaches to parameterizing the adversarial distributions. Empirical results on various benchmarks validate the effectiveness of ADT compared with state-of-the-art AT methods.
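To make the minimax formulation concrete: the objective described above can be read as min over model parameters theta of E_{(x,y)} [ max over distribution parameters phi of E_{delta ~ p_phi} L(f_theta(x + delta), y) ]. Below is a minimal sketch of one ADT training step in PyTorch, assuming a diagonal-Gaussian parameterization of the per-example adversarial distribution trained with the reparameterization trick. This is one plausible instance of the parameterizations the abstract alludes to, not the paper's reference implementation; all names (adt_step), hyperparameters, and the omission of the paper's entropy regularizer are illustrative choices.

import torch
import torch.nn.functional as F

def adt_step(model, x, y, eps=8/255, inner_steps=7, inner_lr=0.1, samples=1):
    """One ADT step on a batch (x, y): inner maximization fits a Gaussian
    over perturbations (mean mu, log-std log_sigma) to maximize the expected
    loss; outer minimization returns the classifier loss on a sample drawn
    from the learned distribution."""
    mu = torch.zeros_like(x, requires_grad=True)
    log_sigma = torch.full_like(x, -3.0, requires_grad=True)
    opt_inner = torch.optim.Adam([mu, log_sigma], lr=inner_lr)

    for _ in range(inner_steps):
        opt_inner.zero_grad()
        loss = 0.0
        for _ in range(samples):
            noise = torch.randn_like(x)                            # reparameterization trick
            delta = torch.tanh(mu + noise * log_sigma.exp()) * eps # keeps ||delta||_inf < eps
            loss = loss - F.cross_entropy(model(x + delta), y)     # negate: ascend on the loss
        (loss / samples).backward()  # also populates model grads; caller zeroes them below
        opt_inner.step()

    # Outer minimization: expected loss under the learned distribution
    # (approximated here with a single sample for brevity).
    with torch.no_grad():
        noise = torch.randn_like(x)
        delta = torch.tanh(mu + noise * log_sigma.exp()) * eps
    return F.cross_entropy(model(x + delta), y)

A typical outer training loop would then be:

opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
for x, y in loader:
    loss = adt_step(model, x, y)
    opt.zero_grad()   # clears any gradients accumulated during the inner loop
    loss.backward()
    opt.step()

The tanh squashing is one simple way to keep sampled perturbations inside the L_inf ball of radius eps; the paper's actual constraint handling and distribution families may differ.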

Related research

12/15/2021 · On the Convergence and Robustness of Adversarial Training
Improving the robustness of deep neural networks (DNNs) to adversarial e...

07/21/2023 · Improving Viewpoint Robustness for Visual Recognition via Adversarial Training
Viewpoint invariance remains challenging for visual recognition in the 3...

04/25/2020 · Improved Adversarial Training via Learned Optimizer
Adversarial attack has recently become a tremendous threat to deep learn...

02/27/2022 · A Unified Wasserstein Distributional Robustness Framework for Adversarial Training
It is well-known that deep neural networks (DNNs) are susceptible to adv...

10/16/2020 · Learning Robust Algorithms for Online Allocation Problems Using Adversarial Training
We address the challenge of finding algorithms for online allocation (i....

10/30/2021 · Get Fooled for the Right Reason: Improving Adversarial Robustness through a Teacher-guided Curriculum Learning Approach
Current SOTA adversarially robust models are mostly based on adversarial...

10/07/2021 · Adversarial Unlearning of Backdoors via Implicit Hypergradient
We propose a minimax formulation for removing backdoors from a given poi...
