ADef: an Iterative Algorithm to Construct Adversarial Deformations

by Rima Alaifari, et al.
ETH Zurich
Università di Genova

While deep neural networks have proven to be a powerful tool for many recognition and classification tasks, their stability properties are still not well understood. In the past, image classifiers have been shown to be vulnerable to so-called adversarial attacks, which are created by additively perturbing the correctly classified image. In this paper, we propose the ADef algorithm to construct a different kind of adversarial attack: instead of adding a perturbation, it iteratively applies small deformations to the image, each found through a gradient descent step. We demonstrate our results on MNIST with a convolutional neural network and on ImageNet with Inception-v3 and ResNet-101.
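The core idea above, deforming an image by a small vector field rather than adding noise to its pixels, can be illustrated with a short sketch. This is not the paper's exact ADef update; it is a minimal, hypothetical NumPy version that (a) warps an image by a displacement field `tau` using bilinear interpolation, and (b) derives one small deformation step from a given gradient of the classifier's loss with respect to the pixels, using the first-order relation between pixel changes and pixel motion:

```python
import numpy as np

def deform(image, tau):
    """Warp a grayscale image by a displacement field.

    image: (H, W) array; tau: (2, H, W) field of (dy, dx) displacements.
    The deformed image is x_tau(p) = x(p + tau(p)), sampled with
    bilinear interpolation and clamped at the boundary.
    """
    H, W = image.shape
    ys, xs = np.mgrid[0:H, 0:W]
    y = np.clip(ys + tau[0], 0, H - 1)
    x = np.clip(xs + tau[1], 0, W - 1)
    y0 = np.floor(y).astype(int)
    x0 = np.floor(x).astype(int)
    y1 = np.minimum(y0 + 1, H - 1)
    x1 = np.minimum(x0 + 1, W - 1)
    wy, wx = y - y0, x - x0
    top = image[y0, x0] * (1 - wx) + image[y0, x1] * wx
    bot = image[y1, x0] * (1 - wx) + image[y1, x1] * wx
    return top * (1 - wy) + bot * wy

def deformation_step(image, loss_grad, step=0.5):
    """One hypothetical deformation step (not the paper's exact rule).

    Since x(p + tau) ~ x(p) + grad x(p) . tau(p) to first order, choosing
    tau proportional to loss_grad * grad x moves pixels in the direction
    that increases the classification loss.
    """
    gy, gx = np.gradient(image)  # spatial image gradient
    tau = step * np.stack([loss_grad * gy, loss_grad * gx])
    return deform(image, tau), tau
```

In this sketch `loss_grad` stands for the gradient of the network's loss with respect to the input pixels, which in practice would come from backpropagation; iterating `deformation_step` until the label flips mirrors the iterative scheme the abstract describes.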

