A general framework for defining and optimizing robustness

by Alessandro Tibo, et al.
Aalborg University

Robustness of neural networks has recently attracted considerable interest. However, the many investigations in this area lack a precise, common foundation for robustness concepts. In this paper, we therefore propose a rigorous and flexible framework for defining different types of robustness, one that also helps to explain the interplay between adversarial robustness and generalization. The different robustness objectives directly lead to an adjustable family of loss functions. For two robustness concepts of particular interest, we show effective ways to minimize the corresponding loss functions: one loss is designed to strengthen robustness against adversarial off-manifold attacks, and the other to improve generalization under the given data distribution. Empirical results show that we can effectively train under these different robustness objectives, obtaining higher robustness scores and better generalization, respectively, than state-of-the-art data augmentation and regularization techniques.
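The abstract does not spell out the loss family itself, but it describes its general shape: a clean-data term combined with a weighted robustness term. As a minimal, hedged illustration of that shape (not the paper's actual objective), the sketch below uses logistic regression with an FGSM-style one-step approximation of the worst-case perturbation; all function names, the FGSM choice, and the weighting parameter `lam` are assumptions for illustration only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(w, x, y):
    # Binary cross-entropy for one example under the logistic model p = sigmoid(w . x).
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def input_grad(w, x, y):
    # Gradient of the loss w.r.t. the input x; for logistic regression it is (p - y) * w.
    p = sigmoid(w @ x)
    return (p - y) * w

def robust_loss(w, x, y, eps=0.1, lam=1.0):
    # Clean loss plus a weighted worst-case term, with the inner maximization over
    # an L-infinity ball of radius eps approximated by a single FGSM step.
    x_adv = x + eps * np.sign(input_grad(w, x, y))
    return bce_loss(w, x, y) + lam * bce_loss(w, x_adv, y)

# Example: the robust objective upper-bounds the clean loss (lam >= 0).
w = np.array([1.0, -2.0])
x = np.array([0.5, 0.3])
y = 1.0
print(bce_loss(w, x, y), robust_loss(w, x, y))
```

Varying `eps` and `lam` adjusts how strongly the objective penalizes sensitivity to perturbations, which is the sense in which a family of such losses is "adjustable".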


