Fundamental Tradeoffs in Distributionally Adversarial Training

01/15/2021
by Mohammad Mehrabi, et al.

Adversarial training is among the most effective techniques for improving the robustness of models against adversarial perturbations. However, the full effect of this approach on models is not well understood. For example, while adversarial training can reduce the adversarial risk (prediction error against an adversary), it sometimes increases the standard risk (generalization error when there is no adversary). Moreover, such behavior is affected by various elements of the learning problem, including the size and quality of training data, the specific forms of adversarial perturbations in the input, model overparameterization, and the adversary's power, among others. In this paper, we focus on the distribution-perturbing adversary framework, wherein the adversary can change the test distribution within a neighborhood of the training data distribution. The neighborhood is defined via the Wasserstein distance between distributions, and the radius of the neighborhood measures the adversary's manipulative power. We study the tradeoff between standard risk and adversarial risk and derive the Pareto-optimal tradeoff, achievable over specific classes of models, in the infinite-data limit with the feature dimension kept fixed. We consider three learning settings: 1) regression with the class of linear models; 2) binary classification under the Gaussian mixture data model, with the class of linear classifiers; 3) regression with the class of random features models (which can be equivalently represented as two-layer neural networks with random first-layer weights). We show that a tradeoff between standard and adversarial risk is manifested in all three settings. We further characterize the Pareto-optimal tradeoff curves and discuss how a variety of factors, such as feature correlation, the adversary's power, or the width of the two-layer neural network, affect this tradeoff.
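The standard-vs-adversarial risk tradeoff described above can be sketched numerically in the simplest of the three settings, linear regression. The sketch below is illustrative only: it uses a pointwise l2-ball adversary (a special case of the paper's Wasserstein neighborhood, whose radius eps plays the role of the adversary's power), and `adv_train` is a generic gradient-descent trainer on the closed-form adversarial loss, not the paper's procedure.

```python
import numpy as np

# Synthetic linear data: y = x . theta_star + noise
rng = np.random.default_rng(0)
n, d = 2000, 5
theta_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ theta_star + 0.5 * rng.normal(size=n)

def standard_risk(theta):
    """Empirical squared-error risk with no adversary."""
    return np.mean((y - X @ theta) ** 2)

def adversarial_risk(theta, eps):
    """Worst-case risk when each input can be shifted within ||delta||_2 <= eps.
    For the squared loss this worst case has the closed form
    (|y - theta . x| + eps * ||theta||_2)^2."""
    return np.mean((np.abs(y - X @ theta) + eps * np.linalg.norm(theta)) ** 2)

def adv_train(eps, steps=500, lr=0.05):
    """Illustrative adversarial training: gradient descent on the
    (convex) closed-form adversarial loss above."""
    theta = np.zeros(d)
    for _ in range(steps):
        r = y - X @ theta
        nrm = np.linalg.norm(theta) + 1e-12
        margin = np.abs(r) + eps * nrm
        grad = np.mean(
            2 * margin[:, None] * (-np.sign(r)[:, None] * X + eps * theta / nrm),
            axis=0,
        )
        theta -= lr * grad
    return theta

eps = 0.5
theta_std = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary least squares
theta_adv = adv_train(eps)

# The tradeoff: adversarial training lowers adversarial risk at the cost of
# a higher standard risk relative to the standard (OLS) fit.
print("standard risk:    OLS", standard_risk(theta_std),
      " adv-trained", standard_risk(theta_adv))
print("adversarial risk: OLS", adversarial_risk(theta_std, eps),
      " adv-trained", adversarial_risk(theta_adv, eps))
```

Sweeping eps in `adv_train` while evaluating both risks traces out an empirical analogue of the tradeoff curve; the paper's contribution is characterizing the Pareto-optimal version of this curve exactly.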


