Output Randomization: A Novel Defense for both White-box and Black-box Adversarial Models

07/08/2021
by Daniel Park, et al.

Adversarial examples pose a threat to deep neural network models in a variety of scenarios, from "white box" settings, where the adversary has complete knowledge of the model, to the opposite extreme of "black box" settings. In this paper, we explore the use of output randomization as a defense against attacks in both the black box and white box models and propose two defenses. In the first defense, we propose output randomization at test time to thwart finite difference attacks in black box settings. Since this type of attack relies on repeated queries to the model to estimate gradients, we investigate the use of randomization to prevent such adversaries from successfully creating adversarial examples. We empirically show that this defense can limit the success rate of a black box adversary using the Zeroth Order Optimization attack to 0%. The second defense, output randomization training, targets white box adversaries. Unlike prior approaches that use randomization, our defense does not require its use at test time, eliminating the Backward Pass Differentiable Approximation attack, which was shown to be effective against other randomization defenses. Additionally, this defense has low overhead and is easily implemented, allowing it to be used together with other defenses across various model architectures. We evaluate output randomization training against the Projected Gradient Descent attacker and show that the defense can reduce the PGD attack's success rate down to 12% when the model is trained with cross-entropy loss.
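The abstract describes the two defenses only at a high level: perturbing the model's output at query time so that a finite-difference attacker (such as ZOO) estimates gradients from noisy responses, and training with randomized outputs so that no randomness is needed at deployment. The sketch below illustrates one plausible reading of those ideas; the wrapper class, the Gaussian noise, and the noise scale `sigma` are illustrative assumptions and are not taken from the paper.

```python
import torch
import torch.nn.functional as F


class OutputRandomizationWrapper(torch.nn.Module):
    """Sketch of test-time output randomization: perturb the returned
    probabilities so that finite-difference gradient estimates made by
    a query-based black-box attacker become unreliable."""

    def __init__(self, model, sigma=0.1):
        super().__init__()
        self.model = model
        self.sigma = sigma  # illustrative noise scale, not from the paper

    @torch.no_grad()
    def forward(self, x):
        probs = F.softmax(self.model(x), dim=-1)
        noisy = probs + self.sigma * torch.randn_like(probs)
        # Renormalize so the defended output still behaves like a distribution.
        noisy = noisy.clamp(min=1e-12)
        return noisy / noisy.sum(dim=-1, keepdim=True)


def output_randomization_training_step(model, optimizer, x, y, sigma=0.1):
    """Sketch of output randomization *training*: inject noise into the
    logits during training so the deployed model needs no test-time
    randomness, sidestepping BPDA-style attacks on the randomization."""
    optimizer.zero_grad()
    logits = model(x)
    noisy_logits = logits + sigma * torch.randn_like(logits)
    loss = F.cross_entropy(noisy_logits, y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

This is a minimal sketch under the stated assumptions, not the authors' implementation; the paper should be consulted for the exact form of randomization and the training procedure.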


research · 05/23/2019
Thwarting finite difference adversarial attacks with output randomization
Adversarial examples pose a threat to deep neural network models in a va...

research · 03/19/2022
Adversarial Defense via Image Denoising with Chaotic Encryption
In the literature on adversarial examples, white box and black box attac...

research · 01/06/2021
Adversarial Robustness by Design through Analog Computing and Synthetic Gradients
We propose a new defense mechanism against adversarial attacks inspired ...

research · 03/28/2018
Defending against Adversarial Images using Basis Functions Transformations
We study the effectiveness of various approaches that defend against adv...

research · 01/12/2022
Adversarially Robust Classification by Conditional Generative Model Inversion
Most adversarial attack defense methods rely on obfuscating gradients. T...

research · 08/20/2019
Protecting Neural Networks with Hierarchical Random Switching: Towards Better Robustness-Accuracy Trade-off for Stochastic Defenses
Despite achieving remarkable success in various domains, recent studies ...

research · 09/05/2022
An Adaptive Black-box Defense against Trojan Attacks (TrojDef)
Trojan backdoor is a poisoning attack against Neural Network (NN) classi...
