Towards Natural Robustness Against Adversarial Examples

12/04/2020
by   Haoyu Chu, et al.

Recent studies have shown that deep neural networks are vulnerable to adversarial examples, but most of the methods proposed to defend against adversarial examples do not solve the problem at a fundamental level. In this paper, we theoretically prove that neural networks with identity mappings admit an upper bound that constrains the error caused by adversarial noise. In actual computation, however, this kind of neural network no longer satisfies any such bound and is therefore susceptible to adversarial examples. Following a similar argument, we explain why adversarial examples can fool other deep neural networks with skip connections. Furthermore, we demonstrate that a new family of deep neural networks called Neural ODEs (Chen et al., 2018) satisfies a weaker upper bound, which prevents the change in the output from becoming too large. Thus, Neural ODEs have natural robustness against adversarial examples. We evaluate the performance of Neural ODEs against ResNet under three white-box adversarial attacks (FGSM, PGD, DI2-FGSM) and one black-box adversarial attack (Boundary Attack). Finally, we show that the natural robustness of Neural ODEs even exceeds the robustness of neural networks trained with adversarial training methods such as TRADES and YOPO.
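
To make the ResNet/Neural ODE contrast concrete, here is a minimal sketch of a Neural ODE classifier using the torchdiffeq package released with Chen et al. (2018). This is not the authors' exact architecture: the stem, channel width, integration interval, and classification head are illustrative assumptions. Where a residual block computes the discrete update h + f(h), the ODE block integrates dh/dt = f(t, h) with an adaptive solver.

```python
# Hedged sketch of a Neural ODE image classifier in the spirit of
# Chen et al. (2018). Layer sizes are assumptions, not the paper's setup.
import torch
import torch.nn as nn
from torchdiffeq import odeint  # pip install torchdiffeq


class ODEFunc(nn.Module):
    """Dynamics f(t, h): the continuous analogue of a residual block."""

    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, t, h):
        return self.net(h)


class ODEClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.stem = nn.Conv2d(3, 64, 3, padding=1)  # lift image to feature space
        self.odefunc = ODEFunc(64)
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        h0 = self.stem(x)
        t = torch.tensor([0.0, 1.0])             # integrate over t in [0, 1]
        h1 = odeint(self.odefunc, h0, t)[-1]     # solve dh/dt = f(t, h), take h(1)
        return self.head(h1.mean(dim=(2, 3)))    # global average pool + classify
```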
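
For reference, a minimal sketch of the single-step FGSM attack named in the white-box evaluation (Goodfellow et al.'s fast gradient sign method). The perturbation budget epsilon and the cross-entropy loss are assumptions; the abstract does not give the attack hyperparameters used in the experiments.

```python
# Hedged FGSM sketch: perturb x by epsilon in the direction of the
# sign of the loss gradient. Epsilon here is an illustrative choice.
import torch
import torch.nn.functional as F


def fgsm(model, x, y, epsilon=8 / 255):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in valid range
```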


