Improving Transferability of Adversarial Examples with Input Diversity

03/19/2018
by Cihang Xie, et al.

Though convolutional neural networks have achieved state-of-the-art performance on various vision tasks, they are extremely vulnerable to adversarial examples, which are obtained by adding human-imperceptible perturbations to the original images. Adversarial examples can thus be used as a useful tool to evaluate and select the most robust models in safety-critical applications. However, most of the existing adversarial attacks only achieve relatively low success rates under the challenging black-box setting, where the attackers have no knowledge of the model structure and parameters. To this end, we propose to improve the transferability of adversarial examples by creating diverse input patterns. Instead of only using the original images to generate adversarial examples, our method applies random transformations to the input images at each iteration. Extensive experiments on ImageNet show that the proposed attack method can generate adversarial examples that transfer much better to different networks than existing baselines. To further improve the transferability, we (1) integrate the recently proposed momentum method into the attack process; and (2) attack an ensemble of networks simultaneously. By evaluating our method against top defense submissions and official baselines from the NIPS 2017 adversarial competition, this enhanced attack reaches an average success rate of 73.0%, which outperforms the top-1 attack submission in the NIPS competition by a large margin of 6.6%. We hope that our attack strategy can serve as a benchmark for evaluating the robustness of networks to adversaries and the effectiveness of different defense methods in the future. The code is publicly available at https://github.com/cihangxie/DI-2-FGSM.
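The method itself is a small change to iterative FGSM: at each step, the input is passed through a random resize-and-pad transform with some probability before the gradient is computed (DI²-FGSM), and the momentum variant (M-DI²-FGSM) additionally accumulates these gradients across iterations. The following PyTorch sketch illustrates the idea; it is not the authors' released TensorFlow code, and the function names, the 0.5 transform probability, the resize-back-to-original-size step (the paper feeds the enlarged 330x330 input directly to Inception-style models), and the hyperparameter defaults are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def input_diversity(x, p=0.5, max_pad=31):
    """With probability p, randomly upscale the (square, NCHW) input,
    zero-pad it to a fixed larger size at a random offset, and resize it
    back to the original resolution. This mirrors the paper's random
    resizing + padding transform; the exact 299 -> 330 sizes there are
    Inception-specific."""
    if torch.rand(1).item() >= p:
        return x  # with probability 1 - p, keep the input unchanged
    h, w = x.shape[-2:]
    rnd = torch.randint(h, h + max_pad, (1,)).item()
    x = F.interpolate(x, size=(rnd, rnd), mode="nearest")
    pad = h + max_pad - rnd
    left = torch.randint(0, pad + 1, (1,)).item()
    top = torch.randint(0, pad + 1, (1,)).item()
    x = F.pad(x, (left, pad - left, top, pad - top))  # zero padding
    return F.interpolate(x, size=(h, w), mode="nearest")

def m_di2_fgsm(model, x, y, eps=16 / 255, steps=10, mu=1.0):
    """Momentum + diverse-input iterative FGSM (M-DI^2-FGSM) sketch.
    Assumes pixel values in [0, 1] and a model that returns logits;
    set mu=0 to recover plain DI^2-FGSM."""
    alpha = eps / steps              # per-iteration step size
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)          # accumulated momentum gradient
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Gradient is taken through the randomly transformed input.
        loss = F.cross_entropy(model(input_diversity(x_adv)), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Momentum accumulation over L1-normalized gradients.
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv.detach() + alpha * g.sign()
        # Project back into the L_inf eps-ball and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```

For the ensemble variant mentioned in the abstract, `model(...)` would instead average the logits of several white-box models before the loss is computed; the rest of the loop is unchanged.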

Related research

05/11/2021 · Improving Adversarial Transferability with Gradient Refining
Deep neural networks are vulnerable to adversarial examples, which are c...

07/12/2022 · Frequency Domain Model Augmentation for Adversarial Attack
For black-box attacks, the gap between the substitute model and the vict...

10/22/2020 · Defense-guided Transferable Adversarial Attacks
Though deep neural networks perform challenging tasks excellently, they ...

05/24/2023 · Introducing Competition to Boost the Transferability of Targeted Adversarial Examples through Clean Feature Mixup
Deep neural networks are widely known to be susceptible to adversarial e...

12/09/2018 · Learning Transferable Adversarial Examples via Ghost Networks
The recent development of adversarial attack has proven that ensemble-ba...

11/06/2017 · Mitigating adversarial effects through randomization
Convolutional neural networks have demonstrated their powerful ability o...

08/15/2023 · Backpropagation Path Search On Adversarial Transferability
Deep neural networks are vulnerable to adversarial examples, dictating t...
