Towards Deep Learning Models Resistant to Large Perturbations

03/30/2020
by Amirreza Shaeiri, et al.

Adversarial robustness has proven to be an essential property of machine learning algorithms. A key and often overlooked aspect of this problem is making the adversarial noise magnitude as large as possible, which amplifies the benefits of model robustness. We show that the well-established algorithm known as "adversarial training" fails to train a deep neural network at large, but still reasonable, perturbation magnitudes. In this paper, we propose a simple yet effective initialization of the network weights that makes learning at higher noise levels possible. We evaluate this idea rigorously on the MNIST (ϵ up to ≈ 0.40) and CIFAR10 (ϵ up to ≈ 32/255) datasets under the ℓ_∞ attack model. Additionally, to establish the limits of ϵ within which learning is feasible, we study the optimal robust classifier assuming full access to the joint data and label distribution. We then provide theoretical results on the adversarial accuracy for a simple multi-dimensional Bernoulli distribution, which yield insights into the range of feasible perturbations for the MNIST dataset.
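The two ingredients the abstract describes, ℓ_∞-bounded adversarial training and initializing the weights for training at a large ϵ from weights already trained at a smaller ϵ, can be sketched on a toy problem. This is a minimal NumPy illustration on a linear logistic model, not the paper's actual method or code; the function names (`pgd_linf`, `adv_train`), the toy dataset, and all hyperparameters are assumptions made for the example.

```python
import numpy as np

def logistic_loss(w, b, X, y):
    # mean binary cross-entropy with labels y in {0, 1}
    z = X @ w + b
    return np.mean(np.logaddexp(0.0, z) - y * z)

def pgd_linf(w, b, X, y, eps, alpha, steps):
    """ℓ_∞-bounded PGD attack on a logistic model (loss-ascent sign steps)."""
    X_adv = X.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X_adv @ w + b)))
        grad_x = (p - y)[:, None] * w[None, :]    # dLoss/dx for each example
        X_adv = X_adv + alpha * np.sign(grad_x)   # ascend the loss
        X_adv = np.clip(X_adv, X - eps, X + eps)  # project back into the ε-ball
    return X_adv

def adv_train(X, y, eps, iters=300, lr=0.5, w0=None, b0=0.0):
    """Adversarial training; w0/b0 allow warm-starting from weights
    trained at a smaller ε, mirroring the initialization idea above."""
    w = np.zeros(X.shape[1]) if w0 is None else w0.copy()
    b = b0
    for _ in range(iters):
        X_adv = pgd_linf(w, b, X, y, eps, alpha=eps / 4, steps=8)
        p = 1.0 / (1.0 + np.exp(-(X_adv @ w + b)))
        w -= lr * X_adv.T @ (p - y) / len(y)      # gradient step on worst-case inputs
        b -= lr * np.mean(p - y)
    return w, b

# Toy data: class label carried by feature 0 (±1), feature 1 is pure noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200).astype(float)
X = np.column_stack([2 * y - 1 + 0.05 * rng.standard_normal(200),
                     rng.standard_normal(200)])

# Warm start: train at a small ε first, then continue at the large ε.
w_s, b_s = adv_train(X, y, eps=0.1)
w_l, b_l = adv_train(X, y, eps=0.4, w0=w_s, b0=b_s)
```

In this convex toy setting, training at ϵ = 0.4 from scratch would also succeed; the warm start only matters for deep networks, where the paper reports that plain adversarial training fails at large ϵ.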
