On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses

04/10/2018
by Anish Athalye, et al.

Neural networks are known to be vulnerable to adversarial examples. In this note, we evaluate the two white-box defenses that appeared at CVPR 2018 and find they are ineffective: when applying existing techniques, we can reduce the accuracy of the defended models to 0%.
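The "existing techniques" referred to here are standard white-box gradient attacks, of which projected gradient descent (PGD) is the canonical example. Below is a minimal illustrative sketch of an untargeted L-infinity PGD attack; it uses a toy linear softmax classifier with an analytic gradient, and all names and parameter values are assumptions for illustration, not the paper's code:

```python
import numpy as np

def pgd_attack(x, y, W, b, eps=0.3, alpha=0.05, steps=10):
    """Untargeted L-infinity PGD against a linear softmax classifier.

    x: input vector, y: true class index, W/b: classifier weights.
    eps: perturbation budget, alpha: step size, steps: iterations.
    """
    x_adv = x.copy()
    for _ in range(steps):
        # Forward pass: softmax probabilities of the toy classifier.
        logits = W @ x_adv + b
        p = np.exp(logits - logits.max())
        p /= p.sum()
        # Gradient of the cross-entropy loss w.r.t. the input.
        grad = W.T @ (p - np.eye(len(b))[y])
        # Ascend the loss, then project back into the eps-ball around x.
        x_adv = x_adv + alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

Against a real defended network the gradient would come from backpropagation (and, for obfuscated gradients, from an approximation such as BPDA), but the loop structure is the same: repeatedly step in the sign of the input gradient and project back into the allowed perturbation set.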

