L1-norm double backpropagation adversarial defense

03/05/2019
by Ismaïla Seck, et al.

Adversarial examples are a challenging open problem for deep neural networks. We propose in this paper to add a penalization term that forces the decision function to be flat in some regions of the input space, so that it becomes, at least locally, less sensitive to attacks. Our proposition is theoretically motivated, and a first set of carefully conducted experiments shows that it behaves as expected when used alone and seems promising when coupled with adversarial training.

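The penalty sketched in the abstract amounts to penalizing the L1 norm of the gradient of the training loss with respect to the input, which requires differentiating through that gradient during training (hence "double backpropagation"). The PyTorch snippet below is a minimal illustration of that idea, not the authors' implementation; the function name, the penalty_weight hyperparameter, and the toy model in the usage example are all assumptions.

    import torch
    import torch.nn.functional as F

    def l1_double_backprop_loss(model, x, y, penalty_weight=0.1):
        # Standard classification loss, with gradients tracked w.r.t. the input.
        x = x.clone().requires_grad_(True)
        logits = model(x)
        ce = F.cross_entropy(logits, y)
        # First backward pass: gradient of the loss w.r.t. the input.
        # create_graph=True keeps this gradient differentiable so the penalty
        # term can itself be backpropagated (the "double backpropagation").
        (input_grad,) = torch.autograd.grad(ce, x, create_graph=True)
        # L1 norm of the input gradient, averaged over the batch; a small value
        # means the loss surface is locally flat around the training points.
        penalty = input_grad.abs().flatten(1).sum(dim=1).mean()
        return ce + penalty_weight * penalty

    # Illustrative training step (toy model and data, assumed for the example).
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
    loss = l1_double_backprop_loss(model, x, y)
    opt.zero_grad()
    loss.backward()  # second backward pass, through the gradient graph
    opt.step()

Using an L1 rather than L2 norm on the input gradient is what the title refers to; the penalty weight would typically be tuned to trade off clean accuracy against local flatness.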

research · 04/05/2018
Unifying Bilateral Filtering and Adversarial Training for Robust Neural Networks
Recent analysis of deep neural networks has revealed their vulnerability...

research · 05/23/2019
A Direct Approach to Robust Deep Learning Using Adversarial Networks
Deep neural networks have been shown to perform well in many classical m...

research · 12/04/2020
Advocating for Multiple Defense Strategies against Adversarial Examples
It has been empirically observed that defense mechanisms designed to pro...

research · 03/06/2019
GanDef: A GAN based Adversarial Training Defense for Neural Network Classifier
Machine learning models, especially neural network (NN) classifiers, are...

research · 01/09/2018
Less is More: Culling the Training Set to Improve Robustness of Deep Neural Networks
Deep neural networks are vulnerable to adversarial examples. Prior defen...

research · 05/13/2018
Curriculum Adversarial Training
Recently, deep learning has been applied to many security-sensitive appl...

research · 03/24/2023
How many dimensions are required to find an adversarial example?
Past work exploring adversarial vulnerability has focused on situations...
