Optimal Transport as a Defense Against Adversarial Attacks

02/05/2021
by Quentin Bouniot, et al.

Deep learning classifiers are now known to have flaws in the representations of their classes. Adversarial attacks can find a human-imperceptible perturbation of a given image that misleads a trained model. The most effective defenses against such attacks train on generated adversarial examples to learn their distribution. Previous work aimed to align original and adversarial image representations in the same way as domain adaptation, in order to improve robustness. Yet, these approaches only partially align the representations, using losses that do not faithfully reflect the geometry of the space and of the distributions. In addition, it is difficult to compare robustness across defended models accurately: until now, they have been evaluated at a single, fixed perturbation size, although defended models may react differently as this perturbation size varies. In this paper, we take the analogy with domain adaptation a step further by exploiting optimal transport theory. We propose to use a loss between distributions that faithfully reflects the ground distance. This leads to SAT (Sinkhorn Adversarial Training), a more robust defense against adversarial attacks. We then propose to quantify the robustness of a model more precisely over a wide range of perturbation sizes, using a different metric: the Area Under the Accuracy Curve (AUAC). We perform extensive experiments on both CIFAR-10 and CIFAR-100 and show that our defense is globally more robust than the state of the art.
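The abstract describes SAT only at a high level. As a rough illustration, the sketch below shows how an entropy-regularized Sinkhorn loss could be used to align clean and adversarial feature batches during adversarial training. The function names, the features/classifier split on the model, and the hyperparameters (eps, n_iters, lam) are assumptions made for illustration, not the authors' implementation.

import math
import torch
import torch.nn.functional as F

def sinkhorn_loss(x, y, eps=0.05, n_iters=100):
    # Entropy-regularized OT cost between two feature batches with uniform weights.
    n, m = x.size(0), y.size(0)
    cost = torch.cdist(x, y, p=2) ** 2              # squared-Euclidean ground cost, shape (n, m)
    log_mu = torch.full((n,), -math.log(n), device=x.device)
    log_nu = torch.full((m,), -math.log(m), device=x.device)
    f = torch.zeros(n, device=x.device)             # dual potentials
    g = torch.zeros(m, device=x.device)
    for _ in range(n_iters):                        # log-domain Sinkhorn updates
        f = -eps * torch.logsumexp((g[None, :] - cost) / eps + log_nu[None, :], dim=1)
        g = -eps * torch.logsumexp((f[:, None] - cost) / eps + log_mu[:, None], dim=0)
    log_plan = (f[:, None] + g[None, :] - cost) / eps + log_mu[:, None] + log_nu[None, :]
    return (log_plan.exp() * cost).sum()            # transport cost <P, C>

def sat_style_loss(model, x_clean, x_adv, labels, lam=1.0):
    # Illustrative combined objective: cross-entropy on adversarial examples
    # plus Sinkhorn alignment of clean and adversarial representations.
    # `model.features` and `model.classifier` are assumed interfaces.
    feat_clean = model.features(x_clean)
    feat_adv = model.features(x_adv)
    ce = F.cross_entropy(model.classifier(feat_adv), labels)
    return ce + lam * sinkhorn_loss(feat_clean, feat_adv)

Likewise, here is a small sketch of how an Area Under the Accuracy Curve could be computed from accuracies measured under attack at increasing perturbation budgets. The epsilon grid and accuracy values in the example are hypothetical.

import numpy as np

def auac(eps_values, accuracies):
    # Trapezoidal area under the accuracy-vs-perturbation curve,
    # normalized by the epsilon range so the score lies in [0, 1].
    eps = np.asarray(eps_values, dtype=float)
    acc = np.asarray(accuracies, dtype=float)
    return np.trapz(acc, eps) / (eps[-1] - eps[0])

# Hypothetical evaluation of a defended model on an L-infinity epsilon grid.
eps_grid = [0.0, 2 / 255, 4 / 255, 8 / 255, 16 / 255]
print(auac(eps_grid, [0.93, 0.80, 0.66, 0.45, 0.20]))

A larger AUAC means the model retains accuracy over a wider range of attack strengths, which is the comparison the abstract argues a single fixed perturbation size cannot capture.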

Related research

02/09/2021  Target Training Does Adversarial Training Without Adversarial Samples
04/05/2021  Adaptive Clustering of Robust Semantic Representations for Adversarial Image Purification
06/25/2023  On Evaluating the Adversarial Robustness of Semantic Segmentation Models
05/30/2019  Identifying Classes Susceptible to Adversarial Attacks
09/07/2023  Adversarially Robust Deep Learning with Optimal-Transport-Regularized Divergences
12/28/2021  DeepAdversaries: Examining the Robustness of Deep Learning Models for Galaxy Morphology Classification
12/25/2018  PPD: Permutation Phase Defense Against Adversarial Examples in Deep Learning
