
Property-driven Training: All You (N)Ever Wanted to Know About

04/03/2021
by Marco Casadio, et al.
Heriot-Watt University

Neural networks are known for their ability to detect general patterns in noisy data, which makes them a popular tool for perception components in complex AI systems. Paradoxically, they are also known to be vulnerable to adversarial attacks. In response, various methods such as adversarial training, data augmentation and Lipschitz robustness training have been proposed as means of improving their robustness. However, as this paper explores, these training methods each optimise for a different definition of robustness. We perform an in-depth comparison of these definitions, including their relationships, assumptions, interpretability and verifiability after training. We also examine constraint-driven training, a general approach designed to encode arbitrary constraints, and show that not all of these definitions are directly encodable. Finally, we perform experiments comparing the applicability and efficacy of the training methods at ensuring that the network obeys each definition. These results highlight that encoding even a piece of knowledge as simple as robustness in neural network training is fraught with difficult choices and pitfalls.
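To make the comparison concrete, one of the training methods the abstract names, adversarial training, can be sketched in a few lines. The following is a minimal FGSM-style illustration on a logistic-regression model in pure NumPy; it is not the paper's code, and the model, data and hyperparameters are illustrative assumptions:

```python
import numpy as np

# Sketch of FGSM-style adversarial training (Goodfellow et al.) on a
# logistic-regression model. Each step perturbs the inputs along the
# sign of the input gradient, then trains on the perturbed batch.
# All names and hyperparameters here are illustrative assumptions.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train(X, y, eps=0.1, lr=0.5, epochs=200, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        # Gradient of the logistic loss w.r.t. the *inputs*:
        # dL/dx = (sigmoid(w.x + b) - y) * w, per example.
        p = sigmoid(X @ w + b)
        grad_x = (p - y)[:, None] * w[None, :]
        # FGSM perturbation: move each input eps along its gradient sign.
        X_adv = X + eps * np.sign(grad_x)
        # One gradient-descent step on the perturbed batch.
        p_adv = sigmoid(X_adv @ w + b)
        w -= lr * X_adv.T @ (p_adv - y) / len(y)
        b -= lr * np.mean(p_adv - y)
    return w, b

if __name__ == "__main__":
    # Toy data, linearly separable by the first coordinate.
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([0., 0., 1., 1.])
    w, b = adversarial_train(X, y)
    preds = (sigmoid(X @ w + b) > 0.5).astype(float)
    print(preds)
```

Note that this procedure optimises empirical robustness inside an eps-ball around the training points only; as the paper argues, that is a different property from, say, a global Lipschitz bound, which is one source of the mismatches between definitions that the comparison examines.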

