Structure Matters: Towards Generating Transferable Adversarial Images

10/22/2019
by Dan Peng, et al.

Recent work on adversarial examples for image classification focuses on directly modifying pixels with minor perturbations. The small-perturbation requirement is imposed so that the generated adversarial examples appear natural and realistic to humans; however, it restricts the attack space, limiting attack strength and transferability, especially against systems protected by a defense mechanism. In this paper, we propose the novel concepts of structure patterns and structure-aware perturbations, which relax the small-perturbation constraint while still keeping images natural. The key idea of our approach is to allow perceptible deviation in adversarial examples while preserving the structure patterns that are central to a human classifier. Built upon these concepts, we propose a structure-preserving attack (SPA) for generating natural adversarial examples with extremely high transferability. Empirical results on the MNIST and CIFAR10 datasets show that SPA adversarial images easily bypass strong PGD-based adversarial training and remain effective against SPA-based adversarial training. Further, they transfer well to other target models with little or no loss in attack success rate, exhibiting competitive black-box attack performance. Our code is available at <https://github.com/spasrccode/SPA>.
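The abstract does not spell out SPA's algorithm, but the core idea, allowing larger perturbations while keeping the image's structure patterns intact, can be illustrated with a minimal NumPy sketch. Here a Sobel gradient magnitude serves as a stand-in "structure pattern" map, and a raw perturbation is attenuated wherever structure is strong, so that perceptible noise is pushed into textureless regions. The functions `sobel_edges` and `structure_aware_perturb`, and the `edge_weight` parameter, are hypothetical names for this illustration, not the paper's actual implementation.

```python
import numpy as np

def sobel_edges(img):
    """Sobel gradient magnitude as a stand-in 'structure pattern' map
    (hypothetical; the paper's structure patterns may differ)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def structure_aware_perturb(img, noise, edge_weight=1.0):
    """Scale a raw perturbation down where structure (edges) is strong,
    so large, perceptible noise lands mostly in flat regions."""
    edges = sobel_edges(img)
    mask = 1.0 / (1.0 + edge_weight * edges)  # ~0 on strong edges, ~1 in flat areas
    return np.clip(img + mask * noise, 0.0, 1.0)
```

On a synthetic image with a single step edge, the perturbation applied at edge pixels comes out smaller than in flat regions, which is the structure-preserving behavior the abstract describes at a conceptual level.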

