A Formalization of Robustness for Deep Neural Networks

03/24/2019
by Tommaso Dreossi et al.

Deep neural networks have been shown to lack robustness to small input perturbations. The process of generating perturbations that expose this lack of robustness is known as adversarial input generation, and it depends on the goals and capabilities of the adversary. In this paper, we propose a unifying formalization of the adversarial input generation process from a formal methods perspective. We provide a definition of robustness that is general enough to capture different formulations, and we demonstrate the expressiveness of our formalization by modeling and comparing a variety of adversarial attack techniques.
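
The abstract does not spell out a concrete attack, so as an illustration of what adversarial input generation means in practice, the sketch below implements the standard Fast Gradient Sign Method (FGSM, Goodfellow et al., 2015) in PyTorch. It is not the paper's own construction; the toy model, input shape, and epsilon value are illustrative assumptions. A common local-robustness condition that such attacks try to falsify is: f(x') = f(x) for every x' with ||x' - x||_inf <= epsilon.

# Minimal FGSM sketch: a standard adversarial input generation method,
# shown only to illustrate the process the paper formalizes.
# The toy model and epsilon are illustrative assumptions.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon):
    """Perturb x to increase the loss, keeping ||delta||_inf <= epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Toy usage: a random linear classifier on 1x28x28 "images".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)
y = torch.tensor([3])
x_adv = fgsm_attack(model, x, y, epsilon=0.1)
print((x_adv - x).abs().max())  # perturbation stays within epsilon

The returned point is a candidate counterexample to the local-robustness condition at x; a formalization of the kind the paper proposes aims to cover attacks of this shape alongside attacks with different adversary goals and capabilities.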


Related research

08/25/2020 · Are Deep Neural Networks "Robust"?
Separating outliers from inliers is the definition of robustness in comp...

12/01/2020 · Adversarial Robustness Across Representation Spaces
Adversarial robustness corresponds to the susceptibility of deep neural ...

09/08/2017 · Towards Proving the Adversarial Robustness of Deep Neural Networks
Autonomous vehicles are highly complex systems, required to function rel...

11/30/2022 · Efficient Adversarial Input Generation via Neural Net Patching
The adversarial input generation problem has become central in establish...

08/20/2020 · β-Variational Classifiers Under Attack
Deep Neural networks have gained lots of attention in recent years thank...

02/03/2021 · Towards Robust Neural Networks via Close-loop Control
Despite their success in massive engineering applications, deep neural n...

03/01/2021 · A Multiclass Boosting Framework for Achieving Fast and Provable Adversarial Robustness
Alongside the well-publicized accomplishments of deep neural networks th...
