Robustness and Generalization via Generative Adversarial Training

09/06/2021
by Omid Poursaeed, et al.

While deep neural networks have achieved remarkable success in various computer vision tasks, they often fail to generalize to new domains and to subtle variations of input images. Several defenses have been proposed to improve robustness against these variations. However, current defenses can only withstand the specific attack used in training, and the models remain vulnerable to other input variations. Moreover, these methods often degrade the model's performance on clean images and do not generalize to out-of-domain samples. In this paper we present Generative Adversarial Training, an approach that simultaneously improves the model's generalization to the test set and to out-of-domain samples, as well as its robustness to unseen adversarial attacks. Instead of altering a single pre-defined, low-level aspect of images, we generate a spectrum of low-level, mid-level and high-level changes using generative models with a disentangled latent space. Adversarial training with these examples enables the model to withstand a wide range of attacks, since it observes a variety of input alterations during training. We show that our approach not only improves the model's performance on clean images and out-of-domain samples but also makes it robust against unforeseen attacks, outperforming prior work. We validate the effectiveness of our method with results on various tasks, including classification, segmentation and object detection.
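The abstract describes the training loop only at a high level. As a concrete illustration, below is a minimal sketch of adversarial training driven by a generative model's latent space: rather than perturbing pixels directly, the attack searches for a small latent perturbation that maximizes the classifier's loss on the generated image. The generator G, classifier f, the L-infinity projection, and all hyperparameters are illustrative assumptions, not the paper's exact models or settings; z is assumed to be a latent code with a known label y (e.g., obtained by encoding a labeled real image).

```python
# Minimal sketch (not the paper's implementation): adversarial training
# where attacks are found in the latent space of a pretrained generator.
import torch
import torch.nn.functional as F

def latent_adversarial_example(G, f, z, y, eps=0.1, step_size=0.02, steps=5):
    """Search for a latent perturbation delta (||delta||_inf <= eps) that
    maximizes the classifier loss on the generated image G(z + delta)."""
    delta = torch.zeros_like(z, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(f(G(z + delta)), y)
        (grad,) = torch.autograd.grad(loss, delta)
        # Signed gradient ascent on the loss, projected back into the eps-ball.
        delta = (delta + step_size * grad.sign()).clamp(-eps, eps).detach()
        delta.requires_grad_(True)
    return G(z + delta).detach()

def training_step(G, f, optimizer, z, y):
    """One step of adversarial training of the classifier f on generated
    examples; G is kept frozen and only f's parameters are updated."""
    x_adv = latent_adversarial_example(G, f, z, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(f(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the perturbation lives in the generator's latent space, restricting delta to different latent components would produce the spectrum of changes the abstract describes: with a disentangled latent space, perturbing coarse components yields high-level semantic changes while perturbing fine components yields low-level ones.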


