Towards Visual Distortion in Black-Box Attacks

07/21/2020
by Nannan Li et al.

Constructing adversarial examples in a black-box threat model degrades the original images by introducing visual distortion. In this paper, we propose a novel black-box attack that directly minimizes the induced distortion by learning the noise distribution of the adversarial example, assuming only loss-oracle access to the black-box network. The quantified visual distortion, which measures the perceptual distance between the adversarial example and the original image, is introduced into our loss, while the gradient of the resulting non-differentiable loss function is approximated by sampling noise from the learned noise distribution. We validate the effectiveness of our attack on ImageNet. Compared to state-of-the-art black-box attacks, our attack induces much lower distortion and achieves a 100% success rate on ResNet50 and VGG16bn. The code is available at https://github.com/Alina-1997/visual-distortion-in-attack.
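The mechanism the abstract describes, learning a noise distribution and estimating the gradient of a non-differentiable loss by sampling from it, is essentially a score-function (REINFORCE) estimator. The following is a minimal PyTorch sketch of one estimation step under an assumed per-pixel Gaussian noise distribution; the names loss_oracle, perceptual_distance, lambda_d, and eps are illustrative assumptions, not the authors' implementation.

import torch

def estimate_gradient(x, mu, log_sigma, loss_oracle, perceptual_distance,
                      lambda_d=0.1, n_samples=32, eps=8/255):
    # One score-function (REINFORCE) step for a per-pixel Gaussian noise
    # distribution with parameters (mu, log_sigma). loss_oracle and
    # perceptual_distance are assumed callables returning scalars.
    sigma = log_sigma.exp()
    grad_mu = torch.zeros_like(mu)
    grad_log_sigma = torch.zeros_like(log_sigma)
    for _ in range(n_samples):
        # Sample noise from the current distribution; clipping to the
        # perturbation budget makes this an approximation of the Gaussian.
        delta = (mu + sigma * torch.randn_like(mu)).clamp(-eps, eps)
        x_adv = (x + delta).clamp(0, 1)
        # Black-box loss plus the quantified visual-distortion term;
        # only the scalar loss value is observable, not its gradient.
        loss = loss_oracle(x_adv) + lambda_d * perceptual_distance(x, x_adv)
        # Score function of the Gaussian: gradient of log p_theta(delta)
        # w.r.t. mu and log_sigma, weighted by the sampled loss.
        grad_mu += loss * (delta - mu) / sigma**2
        grad_log_sigma += loss * ((delta - mu)**2 / sigma**2 - 1.0)
    return grad_mu / n_samples, grad_log_sigma / n_samples

Because only the expectation over sampled noise is differentiated, the loss oracle and the perceptual-distance term can both remain black boxes; descending the estimated gradients on (mu, log_sigma) shifts the distribution toward noise that fools the network while keeping the measured distortion low.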


