Towards Generic and Controllable Attacks Against Object Detection

07/23/2023
by   Guopeng Li, et al.

Existing adversarial attacks against Object Detectors (ODs) suffer from two inherent limitations. First, ODs have complicated meta-structure designs, so most advanced attacks target specific detector-intrinsic structures; this prevents them from transferring to other detectors and motivates a generic attack against ODs. Second, most attacks on ODs generate Adversarial Examples (AEs) by carrying image-level attacks over from classification to detection, which incurs redundant computation and places perturbations in semantically meaningless areas (e.g., backgrounds), creating an urgent need for controllable attacks on ODs. To this end, we propose a generic white-box attack, LGP (local perturbations with adaptively global attacks), that blinds mainstream object detectors with controllable perturbations. For detector-agnostic attacking, LGP tracks high-quality proposals and optimizes three heterogeneous losses simultaneously, fooling the crucial components of ODs through a subset of their outputs without relying on specific structures. For controllability, we establish an object-wise constraint that adaptively exploits foreground-background separation to attach perturbations to foregrounds. Experimentally, LGP successfully attacked sixteen state-of-the-art object detectors on the MS-COCO and DOTA datasets with promising imperceptibility and transferability. Code is publicly available at https://github.com/liguopeng0923/LGP.git
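The abstract's two key ideas (an iterative white-box attack on detector outputs, and an object-wise constraint confining perturbations to foregrounds) can be illustrated with a toy sketch. This is not the authors' LGP implementation: the "detector" below is a linear objectness scorer, and names such as `fg_mask`, `masked_pgd`, `step`, and `eps` are illustrative assumptions, not names from the paper.

```python
import numpy as np

def score(W, x):
    """Surrogate objectness score for a toy linear 'detector' (assumption,
    standing in for a real detector's proposal scores)."""
    return float(W @ x)

def masked_pgd(W, x, fg_mask, step=0.1, eps=0.5, iters=20):
    """Sign-gradient descent on the score, perturbing foreground entries only.

    fg_mask zeroes the update on 'background' entries, mimicking the
    object-wise constraint that attaches perturbations to foregrounds.
    """
    x_adv = x.copy()
    for _ in range(iters):
        grad = W  # gradient of W @ x with respect to x is W itself
        x_adv = x_adv - step * np.sign(grad) * fg_mask  # background untouched
        # project back into an L_inf ball of radius eps around the clean input
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

rng = np.random.default_rng(0)
W = rng.normal(size=8)
x = rng.normal(size=8)
# first three entries play the role of "foreground pixels"
fg_mask = np.array([1, 1, 1, 0, 0, 0, 0, 0], dtype=float)

x_adv = masked_pgd(W, x, fg_mask)
print(np.allclose(x_adv[3:], x[3:]))   # background left unperturbed: True
print(score(W, x_adv) < score(W, x))   # objectness score lowered: True
```

A real attack on a detector would replace the linear scorer with the detector's classification, localization, and shape losses (the "three heterogeneous losses" of the abstract) and obtain gradients by backpropagation, but the masking-and-projection structure of the loop is the same.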


Related research:

- Sparse Adversarial Attack to Object Detection (12/26/2020): Adversarial examples have gained tons of attention in recent years. Many...
- Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors (10/31/2019): We present a systematic study of adversarial attacks on state-of-the-art...
- Attack-Agnostic Adversarial Detection (06/01/2022): The growing number of adversarial attacks in recent years gives attacker...
- Fooling Object Detectors: Adversarial Attacks by Half-Neighbor Masks (01/04/2021): Although there are a great number of adversarial attacks on deep learnin...
- UPC: Learning Universal Physical Camouflage Attacks on Object Detectors (09/10/2019): In this paper, we study physical adversarial attacks on object detectors...
- Making DeepFakes more spurious: evading deep face forgery detection via trace removal attack (03/22/2022): DeepFakes are raising significant social concerns. Although various Deep...
- Attack on Multi-Node Attention for Object Detection (08/16/2020): This paper focuses on high-transferable adversarial attacks on detection...
