GLOW: Global Layout Aware Attacks for Object Detection

02/27/2023
by   Jun Bao, et al.

Adversarial attacks aim to perturb images so that a predictor outputs incorrect results. Because structured attacks remain under-explored, imposing consistency checks on natural multi-object scenes is a promising and practical defense against conventional adversarial attacks; more capable attacks should therefore be able to fool defenses equipped with such consistency checks. To this end, we present GLOW, the first approach that copes with diverse attack requests by generating global layout-aware adversarial attacks in which both categorical and geometric layout constraints are explicitly enforced. Concretely, we focus on the object detection task: given a victim image, GLOW first localizes victim objects according to target labels and then generates multiple attack plans together with their context-consistency scores. On the one hand, GLOW can handle various types of requests, including single or multiple victim objects, with or without the victim objects specified. On the other hand, it assigns each attack plan a consistency score reflecting overall contextual consistency, accounting for both semantic category and global scene layout. In experiments, we design multiple types of attack requests and validate our ideas on the MS COCO validation set. Extensive results show about a 40% average relative improvement over state-of-the-art methods on the conventional single-object attack request; moreover, our method outperforms SOTAs by at least 30% on more generic attack requests; finally, it delivers superior performance under the challenging zero-query black-box setting, about 30% better than SOTAs. Our code, models and attack requests will be made available.
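To make the context-consistency idea concrete, the following is a minimal sketch of how attack plans (mappings from victim objects to target labels) could be ranked by a scene-level consistency score. The `CO_OCCURRENCE` table, the default score, and all function names are illustrative assumptions, not GLOW's actual formulation, which also incorporates geometric layout constraints.

```python
# Toy categorical co-occurrence scores. Assumption for illustration only:
# GLOW would derive such statistics from data, not hand-pick them.
CO_OCCURRENCE = {
    ("person", "dog"): 0.9,
    ("person", "surfboard"): 0.8,
    ("car", "traffic light"): 0.85,
}

def cooccurrence(a, b):
    """Symmetric lookup with a small default score for unseen pairs."""
    return CO_OCCURRENCE.get((a, b), CO_OCCURRENCE.get((b, a), 0.1))

def consistency_score(labels, plan):
    """Score a plan {victim_index: target_label} by how well the
    relabeled scene hangs together categorically. A stand-in for
    GLOW's combined categorical + geometric layout score."""
    new_labels = [plan.get(i, lab) for i, lab in enumerate(labels)]
    pairs = [(a, b) for i, a in enumerate(new_labels)
             for b in new_labels[i + 1:]]
    if not pairs:
        return 1.0
    return sum(cooccurrence(a, b) for a, b in pairs) / len(pairs)

def rank_plans(labels, plans):
    """Return candidate attack plans sorted by descending consistency."""
    return sorted(plans, key=lambda p: consistency_score(labels, p),
                  reverse=True)
```

For a scene detected as `["person", "dog"]`, relabeling the dog as a surfboard scores higher than relabeling it as a traffic light, so a context-checking defense would be less likely to flag the first plan.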


