Mitigating adversarial effects through randomization

11/06/2017
by Cihang Xie, et al.

Convolutional neural networks have demonstrated strong performance on a wide range of tasks in recent years. However, they are extremely vulnerable to adversarial examples: clean images with imperceptible perturbations added can easily cause convolutional neural networks to fail. In this paper, we propose to utilize randomization to mitigate adversarial effects. Specifically, we use two randomization operations: random resizing, which resizes the input image to a random size, and random padding, which pads zeros around the input image in a random manner. Extensive experiments demonstrate that the proposed randomization method is very effective at defending against both single-step and iterative attacks. Our method also enjoys the following advantages: 1) it requires no additional training or fine-tuning, 2) it adds very little computation, and 3) it is compatible with other adversarial defense methods. Combining the proposed randomization method with an adversarially trained model achieves a normalized score of 0.924 (ranked No. 2 among 110 defense teams) in the NIPS 2017 adversarial examples defense challenge, far better than adversarial training alone, which scores 0.773 (ranked No. 56). The code is publicly available at https://github.com/cihangxie/NIPS2017_adv_challenge_defense.
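To make the two randomization operations concrete, below is a minimal PyTorch sketch of such a randomization layer (the released code is in TensorFlow; this is an illustrative re-implementation, not the authors' code). The size constants follow the Inception-v3 setting reported for this defense: a 299x299 input is resized to a random side length r in [299, 331), then zero-padded to 331x331 at a random offset.

```python
import torch
import torch.nn.functional as F

def randomization_layer(x, target=331, low=299):
    """Random resizing + random zero padding, applied at inference time.

    x: input batch of shape (N, C, H, W), assumed H = W = `low`.
    Returns a (N, C, target, target) batch to feed to the classifier.
    """
    # Random resizing: pick a new side length r uniformly from [low, target).
    r = int(torch.randint(low, target, (1,)).item())
    x = F.interpolate(x, size=(r, r), mode="nearest")

    # Random padding: distribute the (target - r) zero rows/columns
    # randomly between left/right and top/bottom.
    pad = target - r
    left = int(torch.randint(0, pad + 1, (1,)).item())
    top = int(torch.randint(0, pad + 1, (1,)).item())
    # F.pad takes (left, right, top, bottom) for 4D NCHW tensors.
    return F.pad(x, (left, pad - left, top, pad - top), value=0.0)
```

Because the resize size and padding offsets are drawn fresh for every forward pass, an attacker cannot compute gradients through the exact transformation the model will apply at test time, which is what blunts both single-step and iterative attacks.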


Related research

02/22/2020 · Using Single-Step Adversarial Training to Defend Iterative Adversarial Examples
Adversarial examples have become one of the largest challenges that mach...

03/19/2018 · Improving Transferability of Adversarial Examples with Input Diversity
Though convolutional neural networks have achieved state-of-the-art perf...

05/10/2020 · Class-Aware Domain Adaptation for Improving Adversarial Robustness
Recent works have demonstrated convolutional neural networks are vulnera...

07/31/2023 · Benchmarking and Analyzing Robust Point Cloud Recognition: Bag of Tricks for Defending Adversarial Examples
Deep Neural Networks (DNNs) for 3D point cloud recognition are vulnerabl...

12/25/2018 · Deflecting 3D Adversarial Point Clouds Through Outlier-Guided Removal
Neural networks are vulnerable to adversarial examples, which poses a th...

08/31/2020 · Shape Defense
Humans rely heavily on shape information to recognize objects. Conversel...

04/24/2020 · RAIN: Robust and Accurate Classification Networks with Randomization and Enhancement
Along with the extensive applications of CNN models for classification, ...
