Backdoor Attack against Object Detection with Clean Annotation

by   Yize Cheng, et al.

Deep neural networks (DNNs) have shown unprecedented success in object detection tasks. However, DNNs have also been found vulnerable to multiple kinds of attacks, including backdoor attacks. In such an attack, the attacker embeds a hidden backdoor into the DNN so that the model behaves normally on benign data samples but makes attacker-specified predictions whenever a predefined trigger appears. Although numerous backdoor attacks have been studied for image classification, backdoor attacks on object detection tasks have not been properly investigated and explored. Since object detection has been adopted as an important module in multiple security-sensitive applications such as autonomous driving, backdoor attacks on object detection could pose even more severe threats. Inspired by an inherent property of deep learning-based object detectors, we propose a simple yet effective backdoor attack method against object detection without modifying the ground truth annotations, specifically focusing on the object disappearance attack and the object generation attack. Extensive experiments and ablation studies demonstrate the effectiveness of our attack on two benchmark object detection datasets, PASCAL VOC07+12 and MSCOCO, on which we achieve an attack success rate of more than 92%.
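The core idea described in the abstract, poisoning training images with a trigger while leaving the ground truth annotations untouched, can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the trigger pattern, its size, and its placement are all assumptions made for the example.

```python
import numpy as np

def stamp_trigger(image, trigger, x, y):
    """Return a copy of `image` with `trigger` pasted at pixel (x, y).

    Only pixel values change; the image's bounding-box annotations are
    left untouched, which is what makes the poisoning "clean-annotation".
    """
    poisoned = image.copy()
    h, w = trigger.shape[:2]
    poisoned[y:y + h, x:x + w] = trigger
    return poisoned

# Illustrative assumptions: a blank 64x64 RGB image and an 8x8
# black-and-white checkerboard used as the trigger patch.
image = np.zeros((64, 64, 3), dtype=np.uint8)
checker = (np.indices((8, 8)).sum(axis=0) % 2 * 255).astype(np.uint8)
trigger = np.stack([checker] * 3, axis=-1)

# The annotation list is NOT modified when poisoning the image;
# the attack relies on the trigger pixels alone.
annotations = [{"bbox": [10, 10, 40, 40], "label": "car"}]
poisoned = stamp_trigger(image, trigger, x=0, y=0)
```

At training time, a small fraction of images would be stamped this way; because the labels stay clean, the poisoning is harder to spot by inspecting annotations alone.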




BadDet: Backdoor Attacks on Object Detection

Deep learning models have been deployed in numerous real-world applicati...

An Object Detection based Solver for Google's Image reCAPTCHA v2

Previous work showed that reCAPTCHA v2's image challenges could be solve...

Mitigating Backdoor Attack Via Prerequisite Transformation

In recent years, with the successful application of DNN in fields such a...

CCA: Exploring the Possibility of Contextual Camouflage Attack on Object Detection

Deep neural network based object detection has become the cornerstone of ...

Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World

Deep learning models have been shown to be vulnerable to recent backdoor...

Testing Deep Learning Models for Image Analysis Using Object-Relevant Metamorphic Relations

Deep learning models are widely used for image analysis. While they offe...

It's Raining Cats or Dogs? Adversarial Rain Attack on DNN Perception

Rain is a common phenomenon in nature and an essential factor for many d...
