
L1-norm double backpropagation adversarial defense

by Ismaïla Seck, et al.

Adversarial examples are a challenging open problem for deep neural networks. In this paper we propose adding a penalization term that forces the decision function to be flat in some regions of the input space, so that it becomes, at least locally, less sensitive to attacks. Our proposition is theoretically motivated, and a first set of carefully conducted experiments shows that it behaves as expected when used alone and is promising when coupled with adversarial training.
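The penalty described above can be sketched as a "double backpropagation" term: one backward pass computes the gradient of the loss with respect to the input, its L1 norm is added to the training objective, and a second backward pass trains the network. The sketch below is illustrative only and is not the authors' code; the function name, the weighting factor `lam`, and the toy model are assumptions.

```python
import torch
import torch.nn as nn

def l1_double_backprop_loss(model, x, y, lam=0.1):
    # Illustrative sketch of an L1-norm double-backpropagation objective
    # (hypothetical names; not the paper's reference implementation).
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = nn.functional.cross_entropy(logits, y)
    # First backward pass: gradient of the loss w.r.t. the input,
    # kept in the graph so it can itself be differentiated.
    (grad_x,) = torch.autograd.grad(ce, x, create_graph=True)
    # L1 penalty on the input gradient: small values mean the loss
    # surface is locally flat around x, i.e. less attack-sensitive.
    penalty = grad_x.abs().sum(dim=tuple(range(1, grad_x.dim()))).mean()
    return ce + lam * penalty

# Toy usage on random data.
model = nn.Sequential(nn.Linear(4, 3))
x = torch.randn(8, 4)
y = torch.randint(0, 3, (8,))
loss = l1_double_backprop_loss(model, x, y)
loss.backward()  # second backward pass, through the penalty as well
```

Because `create_graph=True` keeps the first backward pass in the computation graph, the final `loss.backward()` differentiates through the input gradient, which is what makes this a double-backpropagation scheme.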



