Machine Learning as an Adversarial Service: Learning Black-Box Adversarial Examples

08/17/2017
by   Jamie Hayes, et al.

Neural networks are known to be vulnerable to adversarial examples: inputs that have been intentionally perturbed to remain visually similar to the source input but cause a misclassification. Until now, black-box attacks against neural networks have relied on the transferability of adversarial examples: a white-box attack generates adversarial examples on a substitute model, and these are then transferred to the black-box target model. In this paper, we introduce a direct attack against black-box neural networks that uses another, attacker neural network to learn to craft adversarial examples. We show that our attack is capable of crafting adversarial examples that are indistinguishable from the source input and are misclassified with overwhelming probability, reducing the accuracy of the black-box neural network from 99.4% to 0.77%. Our attack can adapt to and reduce the effectiveness of proposed defenses against adversarial examples, requires very little training data, and produces adversarial examples that transfer to different machine learning models such as Random Forest, SVM, and K-Nearest Neighbor. To demonstrate the practicality of our attack, we launch a live attack against a target black-box model hosted online by Amazon: the crafted adversarial examples substantially reduce its accuracy from 91.8%. Additionally, we show that attacks proposed in the literature have unique, identifiable distributions, and we use this information to train a classifier that is robust against such attacks.
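To give a concrete feel for direct black-box attacks of this kind, here is a minimal sketch using a deliberately simpler technique than the paper's learned attacker network: a zeroth-order (finite-difference) attack that only queries the target for confidence scores. The `blackbox_confidence` model, its weights, and all hyperparameters below are illustrative assumptions, not the authors' actual setup.

```python
import math

def blackbox_confidence(x):
    # Stand-in for the target black-box model: a fixed linear scorer
    # returning the sigmoid confidence that x belongs to the true class.
    # In the paper's setting this would be a remote model (e.g. one
    # hosted by Amazon) that answers queries only.
    w = [0.9, -0.4, 0.7, 0.2]
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-s))

def craft_adversarial(x, eps=0.5, step=0.1, iters=200, h=1e-4):
    """Zeroth-order attack: estimate the black-box gradient by finite
    differences and descend the true-class confidence, keeping the
    perturbation within an L-infinity ball of radius eps so the
    adversarial input stays close to the source input."""
    adv = list(x)
    for _ in range(iters):
        grad = []
        for i in range(len(adv)):
            xp = list(adv); xp[i] += h
            xm = list(adv); xm[i] -= h
            # Two queries per coordinate give a central-difference
            # estimate of the confidence gradient.
            grad.append((blackbox_confidence(xp) - blackbox_confidence(xm)) / (2 * h))
        for i in range(len(adv)):
            adv[i] -= step * grad[i]
            # Project back into the allowed perturbation ball around x.
            adv[i] = max(x[i] - eps, min(x[i] + eps, adv[i]))
    return adv
```

Running `craft_adversarial` on a point the toy model classifies confidently lowers its confidence while keeping every coordinate within `eps` of the original. The paper's approach differs in that a second neural network is trained to produce such perturbations directly, amortizing the per-input query cost that this coordinate-wise estimator pays on every example.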


Related research

Intermediate Level Adversarial Attack for Enhanced Transferability (11/20/2018)
Neural networks are vulnerable to adversarial examples, malicious inputs...

Certifiable Black-Box Attack: Ensuring Provably Successful Attack for Adversarial Examples (04/10/2023)
Black-box adversarial attacks have shown strong potential to subvert mac...

CAAD 2018: Powerful None-Access Black-Box Attack Based on Adversarial Transformation Network (11/03/2018)
In this paper, we propose an improvement of Adversarial Transformation N...

Simultaneous Adversarial Training - Learn from Others Mistakes (07/21/2018)
Adversarial examples are maliciously tweaked images that can easily fool...

Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples (05/24/2016)
Many machine learning models are vulnerable to adversarial examples: inp...

A note on hyperparameters in black-box adversarial examples (11/15/2018)
Since Biggio et al. (2013) and Szegedy et al. (2013) first drew attentio...

Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Neural Networks (09/30/2018)
Deep neural networks have been shown to be vulnerable to adversarial exa...
