Adversarial Scratches: Deployable Attacks to CNN Classifiers

04/20/2022
by Loris Giulivi, et al.

A growing body of work has shown that deep neural networks are susceptible to adversarial examples: small perturbations applied to a model's input that lead to incorrect predictions. Unfortunately, most of the literature focuses on visually imperceptible perturbations applied to digital images, which by design are often impossible to deploy on physical targets. We present Adversarial Scratches: a novel L0 black-box attack that takes the form of scratches in images and offers much greater deployability than other state-of-the-art attacks. Adversarial Scratches leverage Bézier curves to reduce the dimension of the search space and, optionally, to constrain the attack to a specific location. We test Adversarial Scratches in several scenarios, including a publicly available API and images of traffic signs. Results show that our attack often achieves a higher fooling rate than other deployable state-of-the-art methods, while requiring significantly fewer queries and modifying very few pixels.
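The abstract's central technical idea is that a scratch can be parameterized by a Bézier curve, so the black-box search operates over a handful of variables (a few 2-D control points plus a color) rather than over individual pixels. As a rough illustration only, here is a minimal NumPy sketch of rendering such a scratch, assuming a quadratic curve, a single RGB color, and rasterization by rounding sampled curve points; the function names and these choices are illustrative assumptions, not details confirmed by the paper.

    import numpy as np

    def quadratic_bezier(p0, p1, p2, n_points=200):
        # Sample n_points along the quadratic Bezier curve defined by
        # control points p0, p1, p2 (each a 2-D array of row/col coordinates).
        t = np.linspace(0.0, 1.0, n_points)[:, None]
        return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

    def apply_scratch(image, control_points, color):
        # Hypothetical helper: overwrite the pixels along the curve with `color`,
        # yielding a candidate L0 perturbation that touches very few pixels.
        attacked = image.copy()
        h, w = image.shape[:2]
        pts = quadratic_bezier(*control_points).round().astype(int)
        rows = np.clip(pts[:, 0], 0, h - 1)
        cols = np.clip(pts[:, 1], 0, w - 1)
        attacked[rows, cols] = color
        return attacked

    # A candidate scratch is fully described by 9 search variables:
    # three 2-D control points and one RGB color.
    image = np.zeros((224, 224, 3), dtype=np.uint8)
    controls = [np.array([30.0, 40.0]), np.array([100.0, 180.0]), np.array([200.0, 60.0])]
    adv = apply_scratch(image, controls, color=(255, 0, 0))

A black-box optimizer (for example, an evolutionary strategy) would then propose candidate parameter vectors, render each with a routine like apply_scratch, and query the target model to score how close the candidate comes to causing a misclassification.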

Related research

10/03/2019
Perturbations are not Enough: Generating Adversarial Examples with Spatial Distortions
Deep neural network image classifiers are reported to be susceptible to ...

06/04/2022
Saliency Attack: Towards Imperceptible Black-box Adversarial Attack
Deep neural networks are vulnerable to adversarial examples, even in the...

09/09/2018
Towards Query Efficient Black-box Attacks: An Input-free Perspective
Recent studies have highlighted that deep neural networks (DNNs) are vul...

06/01/2020
Adversarial Attacks on Classifiers for Eye-based User Modelling
An ever-growing body of work has demonstrated the rich information conte...

10/05/2021
Adversarial Attacks on Black Box Video Classifiers: Leveraging the Power of Geometric Transformations
When compared to the image classification models, black-box adversarial ...

04/19/2018
Attacking Convolutional Neural Network using Differential Evolution
The output of Convolutional Neural Networks (CNN) has been shown to be d...

06/25/2018
Exploring Adversarial Examples: Patterns of One-Pixel Attacks
Failure cases of black-box deep learning, e.g. adversarial examples, mig...
