Object-Attentional Untargeted Adversarial Attack

by Chao Zhou, et al.

Deep neural networks are facing severe threats from adversarial attacks. Most existing black-box attacks fool the target model by generating either global perturbations or local patches. However, both global perturbations and local patches easily cause annoying visual artifacts in the adversarial example. Compared with the smooth regions of an image, the object region generally has more edges and a more complex texture, so small perturbations on it are more imperceptible. On the other hand, the object region is undoubtedly the decisive part of an image for classification tasks. Motivated by these two facts, we propose an object-attentional adversarial attack method for untargeted attack. Specifically, we first generate an object region by intersecting the object detection region from YOLOv4 with the salient object detection (SOD) region from HVPNet. Furthermore, we design an activation strategy to avoid the reaction caused by incomplete SOD. Then, we perform an adversarial attack only on the detected object region by leveraging the Simple Black-box Adversarial Attack (SimBA). To verify the proposed method, we create a dedicated dataset, named COCO-Reduced-ImageNet in this paper, by extracting from ImageNet-1K all the images containing objects defined by COCO. Experimental results on ImageNet-1K and COCO-Reduced-ImageNet show that, under various system settings, our method yields adversarial examples with better perceptual quality while reducing the query budget by up to 24.16% compared to state-of-the-art approaches including SimBA.
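The core idea above can be sketched in a few lines: run a SimBA-style search, but draw candidate pixels only from a binary mask standing in for the intersection of the YOLOv4 detection box and the HVPNet saliency map. The sketch below is a minimal illustration, not the authors' implementation; `toy_confidence` is a hypothetical stand-in for querying the black-box classifier, and the square mask stands in for a real detected object region.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_confidence(image, true_class=0):
    # Hypothetical stand-in for the target model's confidence in the
    # true class; a real attack would query a deployed classifier here.
    w = np.linspace(-1, 1, image.size).reshape(image.shape)
    return 1.0 / (1.0 + np.exp(-(image * w).sum()))

def masked_simba(image, mask, step=0.2, budget=200, true_class=0):
    """SimBA-style untargeted attack restricted to a binary object mask."""
    adv = image.copy()
    best = toy_confidence(adv, true_class)
    coords = np.argwhere(mask)  # candidate pixels: object region only
    queries = 0
    for _ in range(budget):
        y, x = coords[rng.integers(len(coords))]
        for sign in (+step, -step):
            cand = adv.copy()
            cand[y, x] = np.clip(cand[y, x] + sign, 0.0, 1.0)
            queries += 1
            p = toy_confidence(cand, true_class)
            if p < best:  # keep the step only if confidence drops
                adv, best = cand, p
                break
    return adv, best, queries

# Object mask: a central square standing in for the YOLOv4/SOD region.
img = rng.random((8, 8))
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True
adv, conf, q = masked_simba(img, mask)
print(conf <= toy_confidence(img))          # True: confidence never rises
print(np.allclose(adv[~mask], img[~mask]))  # True: background untouched
```

Because every accepted perturbation lies inside the mask, the background pixels are provably unchanged, which is what keeps the resulting adversarial example perceptually clean outside the textured object region.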


