Imperceptible Adversarial Attack via Invertible Neural Networks

11/28/2022
by   Zihan Chen, et al.

Adding perturbations using auxiliary gradient information and discarding existing details of the benign image are two common approaches for generating adversarial examples. Although visual imperceptibility is the desired property of adversarial examples, conventional adversarial attacks still generate traceable adversarial perturbations. In this paper, we introduce a novel Adversarial Attack via Invertible Neural Networks (AdvINN) method to produce robust and imperceptible adversarial examples. Specifically, AdvINN fully exploits the information-preservation property of Invertible Neural Networks and generates adversarial examples by simultaneously adding class-specific semantic information of the target class and dropping discriminant information of the original class. Extensive experiments on CIFAR-10, CIFAR-100, and ImageNet-1K demonstrate that the proposed AdvINN method produces adversarial images that are less perceptible than those of state-of-the-art methods, and that AdvINN yields more robust adversarial examples with higher confidence than other adversarial attacks.
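The information-preservation property the abstract refers to comes from the coupling structure of invertible neural networks: the forward transform can be undone exactly, so no image content is lost. A minimal sketch of an additive coupling layer (the names `f`, `coupling_forward`, and `coupling_inverse` are illustrative, not from the paper) shows this exact invertibility:

```python
import numpy as np

def f(x, W):
    # stand-in for a learned subnetwork; any function works here,
    # since invertibility comes from the coupling structure, not f
    return np.tanh(x @ W)

def coupling_forward(x1, x2, W):
    # additive coupling: the first half passes through unchanged,
    # the second half is shifted by a function of the first
    y1 = x1
    y2 = x2 + f(x1, W)
    return y1, y2

def coupling_inverse(y1, y2, W):
    # exact inverse: recompute the same shift from y1 and subtract it
    x1 = y1
    x2 = y2 - f(y1, W)
    return x1, x2

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
x1, x2 = rng.normal(size=(4,)), rng.normal(size=(4,))

y1, y2 = coupling_forward(x1, x2, W)
r1, r2 = coupling_inverse(y1, y2, W)

# the input is recovered exactly: no information is discarded
print(np.allclose(x1, r1) and np.allclose(x2, r2))  # True
```

Because the mapping is lossless in both directions, a method built on such layers can trade information between two branches (e.g., moving discriminant features of the original class out while injecting target-class features in) without the irreversible detail loss that plain perturbation or smoothing attacks incur.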


Related research

06/01/2022  On the reversibility of adversarial attacks
Adversarial attacks modify images with perturbations that change the pre...

03/19/2020  Breaking certified defenses: Semantic adversarial examples with spoofed robustness certificates
To deflect adversarial attacks, a range of "certified" classifiers have ...

02/19/2018  Robustness of Rotation-Equivariant Networks to Adversarial Perturbations
Deep neural networks have been shown to be vulnerable to adversarial exa...

03/16/2018  Semantic Adversarial Examples
Deep neural networks are known to be vulnerable to adversarial examples,...

05/30/2021  Generating Adversarial Examples with Graph Neural Networks
Recent years have witnessed the deployment of adversarial attacks to eva...

01/01/2023  ExploreADV: Towards exploratory attack for Neural Networks
Although deep learning has made remarkable progress in processing variou...

03/10/2022  Frequency-driven Imperceptible Adversarial Attack on Semantic Similarity
Current adversarial attack research reveals the vulnerability of learnin...
