Adversarial examples and where to find them

04/22/2020
by Niklas Risse, et al.

Adversarial robustness of trained models has attracted considerable attention over recent years, within and beyond the scientific community. This is not only because of a straightforward desire to deploy reliable systems, but also because of how adversarial attacks challenge our beliefs about deep neural networks. Demanding more robust models seems to be the obvious solution; however, this requires a rigorous understanding of how one should judge adversarial robustness as a property of a given model. In this work, we analyze where adversarial examples occur, in which ways they are peculiar, and how they are processed by robust models. We use robustness curves to show that ℓ_∞ threat models are surprisingly effective in improving robustness for other ℓ_p norms; we introduce perturbation cost trajectories to provide a broad perspective on how robust and non-robust networks perceive adversarial perturbations as opposed to random perturbations; and we explicitly examine the scale of certain common data sets, showing that robustness thresholds must be adapted to the data set they pertain to. This allows us to provide concrete recommendations for anyone looking to train a robust model or to estimate how much robustness their application requires. The code for all our experiments is available at www.github.com/niklasrisse/adversarial-examples-and-where-to-find-them.
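To make the notion of a robustness curve concrete, here is a minimal, self-contained sketch: for a grid of ℓ_∞ budgets ε, it attacks each test point with single-step FGSM and records the fraction of points that remain correctly classified. The model, data, and attack below are illustrative placeholders, not the setup from the paper; the actual experiments are in the repository linked above.

```python
import torch
import torch.nn as nn

def fgsm(model, x, y, eps):
    # Single-step l_inf attack: move each input by eps in the
    # direction of the sign of the loss gradient.
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def robustness_curve(model, x, y, eps_grid):
    # For each budget eps, record the fraction of points still classified
    # correctly after the attack. Since FGSM is a weak attack, this is only
    # an upper bound on the true robust accuracy at that budget.
    curve = []
    for eps in eps_grid:
        x_adv = fgsm(model, x, y, eps)
        with torch.no_grad():
            acc = (model(x_adv).argmax(dim=1) == y).float().mean().item()
        curve.append((eps, acc))
    return curve

# Toy usage on random data; substitute a trained model and a real test set.
torch.manual_seed(0)
model = nn.Linear(784, 10)        # placeholder classifier
x = torch.rand(128, 784)          # inputs scaled to [0, 1]
y = torch.randint(0, 10, (128,))
for eps, acc in robustness_curve(model, x, y, [0.0, 0.01, 0.05, 0.1, 0.3]):
    print(f"eps={eps:.2f}  robust accuracy={acc:.3f}")
```

A stronger multi-step attack such as PGD would give a tighter estimate of robust accuracy at each budget; FGSM is used here only to keep the sketch short.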

Related research

09/04/2019 · Are Adversarial Robustness and Common Perturbation Robustness Independent Attributes?
Neural Networks have been shown to be sensitive to common perturbations ...

07/31/2019 · Adversarial Robustness Curves
The existence of adversarial examples has led to considerable uncertaint...

07/28/2020 · Reachable Sets of Classifiers and Regression Models: (Non-)Robustness Analysis and Robust Training
Neural networks achieve outstanding accuracy in classification and regre...

11/03/2020 · Recent Advances in Understanding Adversarial Robustness of Deep Neural Networks
Adversarial examples are inevitable on the road of pervasive application...

03/20/2020 · One Neuron to Fool Them All
Despite vast research in adversarial examples, the root causes of model ...

07/12/2022 · Adversarial Robustness Assessment of NeuroEvolution Approaches
NeuroEvolution automates the generation of Artificial Neural Networks th...

11/15/2020 · Towards Understanding the Regularization of Adversarial Robustness on Neural Networks
The problem of adversarial examples has shown that modern Neural Network...
