Adversarial Training Generalizes Data-dependent Spectral Norm Regularization

06/04/2019
by Kevin Roth et al.

We establish a theoretical link between adversarial training and operator norm regularization for deep neural networks. Specifically, we show that adversarial training is a data-dependent generalization of spectral norm regularization. This intriguing connection provides fundamental insights into the origin of adversarial vulnerability and hints at novel ways to robustify and defend against adversarial attacks. We provide extensive empirical evidence to support our theoretical results.
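The spectral norm referred to here is the largest singular value of a layer's weight matrix. As a point of reference (this sketch is not from the paper), spectral norm regularizers are typically implemented with power iteration, which estimates that singular value using only matrix-vector products:

```python
import numpy as np

def spectral_norm(W, n_iters=50, seed=0):
    """Estimate the largest singular value of W by power iteration."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=W.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iters):
        u = W @ v               # forward pass through the linear map
        u /= np.linalg.norm(u)
        v = W.T @ u             # transpose (adjoint) map
        v /= np.linalg.norm(v)
    return float(u @ W @ v)     # Rayleigh quotient -> sigma_max

W = np.array([[3.0, 0.0],
              [0.0, 1.0]])
print(spectral_norm(W))         # ≈ 3.0, i.e. np.linalg.norm(W, ord=2)
```

In the data-dependent view suggested by the abstract, the analogous quantity for a nonlinear network would be the operator norm of the Jacobian at each individual input, rather than a single global weight-matrix norm.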


Related research:

- Improved robustness to adversarial examples using Lipschitz regularization of the loss (10/01/2018)
- Sup-Norm Convergence of Deep Neural Network Estimator for Nonparametric Regression by Adversarial Training (07/08/2023)
- On Regularization and Robustness of Deep Neural Networks (09/30/2018)
- Exact Spectral Norm Regularization for Neural Networks (06/27/2022)
- Controlling the Complexity and Lipschitz Constant improves polynomial nets (02/10/2022)
- Average Margin Regularization for Classifiers (10/09/2018)
- Explicit Tradeoffs between Adversarial and Natural Distributional Robustness (09/15/2022)
