Dual Attention Suppression Attack: Generate Adversarial Camouflage in Physical World

03/01/2021
by Jiakai Wang, et al.

Deep learning models are vulnerable to adversarial examples. Physical adversarial examples, a more threatening variant for deployed deep learning systems, have received extensive research attention in recent years. However, because existing works do not exploit intrinsic characteristics of the recognition process, such as model-agnostic attention patterns and human visual biases, they produce weak physical perturbations that fail to transfer across models and look visually suspicious. Motivated by the view that attention reflects the intrinsic characteristics of the recognition process, this paper proposes the Dual Attention Suppression (DAS) attack, which generates visually natural physical adversarial camouflage with strong transferability by suppressing both model and human attention. On the model side, we craft transferable camouflage by distracting the attention patterns shared across models away from the target region toward non-target regions. On the human side, since bottom-up visual attention is drawn to salient items (e.g., conspicuous distortions), we evade it by generating camouflage that blends into the scenario context. Extensive experiments on classification and detection tasks, in both the digital and physical world and on up-to-date models (e.g., Yolo-V5), show that our method outperforms state-of-the-art attacks.
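To make the two-part objective concrete, below is a minimal PyTorch sketch of the idea, not the authors' released implementation. A Grad-CAM-style map stands in for the model-shared attention, and an MSE term against a natural "seed" texture stands in for the human-attention (naturalness) constraint. The ResNet-50 backbone, the patch/mask setup, the function names das_step and gradcam_attention, and the loss weights alpha and beta are all illustrative assumptions.

```python
# Hypothetical sketch of a dual-attention camouflage step; simplified from
# the DAS idea (classifier-only, 2D patch instead of a rendered 3D texture).
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def gradcam_attention(images, target_class, layer):
    """Grad-CAM-style attention map for target_class (kept differentiable)."""
    feats = {}
    handle = layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    logits = model(images)
    handle.remove()
    score = logits[:, target_class].sum()
    # create_graph=True so the attention loss can backprop into the patch.
    grads = torch.autograd.grad(score, feats["a"], create_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)      # channel weights (GAP)
    cam = F.relu((weights * feats["a"]).sum(dim=1))     # (B, h, w)
    return cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)

def das_step(image, patch, mask, seed, target_class, lr=0.01, alpha=1.0, beta=0.1):
    """One update of the camouflage patch. mask is a binary (B, 3, H, W) tensor
    marking the target region; alpha/beta are hypothetical loss weights."""
    patch = patch.detach().requires_grad_(True)
    adv = image * (1 - mask) + patch * mask             # paste patch into scene
    cam = gradcam_attention(adv, target_class, model.layer4)
    m = F.interpolate(mask[:, :1], size=cam.shape[-2:], mode="nearest").squeeze(1)
    attn_loss = (cam * m).mean()                        # attention left on target
    natural_loss = F.mse_loss(patch * mask, seed * mask)  # stay near natural seed
    loss = alpha * attn_loss + beta * natural_loss
    loss.backward()
    with torch.no_grad():
        patch = (patch - lr * patch.grad.sign()).clamp(0, 1)
    return patch, float(loss)
```

Here alpha and beta trade off transferability against naturalness. In practice one would iterate das_step over many viewpoints or scene renderings; the paper's full formulation additionally disperses the attention map itself and targets detectors such as Yolo-V5, which this classifier-only sketch omits.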

Related research

10/18/2021 · Boosting the Transferability of Video Adversarial Examples via Temporal Translation
Although deep-learning based video recognition models have achieved rema...

11/27/2020 · Robust Attacks on Deep Learning Face Recognition in the Physical World
Deep neural networks (DNNs) have been increasingly used in face recognit...

10/27/2022 · Isometric 3D Adversarial Examples in the Physical World
3D deep learning models are shown to be as vulnerable to adversarial exa...

04/12/2019 · Big but Imperceptible Adversarial Perturbations via Semantic Manipulation
Machine learning, especially deep learning, is widely applied to a range...

08/14/2023 · ACTIVE: Towards Highly Transferable 3D Physical Camouflage for Universal and Robust Vehicle Evasion
Adversarial camouflage has garnered attention for its ability to attack ...

09/16/2021 · Harnessing Perceptual Adversarial Patches for Crowd Counting
Crowd counting, which is significantly important for estimating the numb...

05/19/2022 · Transferable Physical Attack against Object Detection with Separable Attention
Transferable adversarial attack is always in the spotlight since deep le...
