RABA: A Robust Avatar Backdoor Attack on Deep Neural Network

04/02/2021
by   Ying He, et al.

With the development of deep neural networks (DNNs) and the growing demand for third-party DNN models, a gap has opened for backdoor attacks. A backdoor can be injected into a third-party model and remains highly stealthy on normal inputs, which is why it has been widely discussed. Backdoor attacks on deep neural networks have attracted considerable attention, and a large body of research now covers both attacks and defenses. In this paper, we propose a robust avatar backdoor attack integrated with adversarial attack techniques. Our attack evades popular, high-impact detection schemes that check whether a model contains a backdoor before deployment. This reveals that, although many effective backdoor defenses have been put forward, backdoor attacks on DNNs still demand attention. We evaluate our attack on three popular datasets against two high-impact detection schemes and show that it performs strongly in both aggressiveness and stealthiness.
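To make the threat model concrete, the sketch below shows the classic data-poisoning step that backdoor attacks of this family build on: a small trigger pattern is stamped into a fraction of the training images, which are then relabeled to the attacker's target class. This is a minimal, hypothetical illustration (BadNets-style poisoning); the function name, parameters, and the avatar/adversarial refinements described in the paper are not taken from it.

```python
import numpy as np

def poison_batch(images, labels, target_label=0, trigger_size=3,
                 trigger_value=1.0, poison_rate=0.1, seed=0):
    """Stamp a square trigger into a fraction of the images and relabel
    them to the attacker's target class.

    images: array of shape (N, H, W); labels: array of shape (N,).
    Returns poisoned copies plus the indices that were modified.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n = len(images)
    # Pick a random subset of samples to poison (at least one).
    idx = rng.choice(n, size=max(1, int(n * poison_rate)), replace=False)
    # Place the trigger in the bottom-right corner of each chosen image.
    images[idx, -trigger_size:, -trigger_size:] = trigger_value
    # Relabel the poisoned samples to the target class.
    labels[idx] = target_label
    return images, labels, idx

# Usage: poison 20% of a toy batch of 8x8 "images".
imgs = np.zeros((10, 8, 8))
lbls = np.arange(10) % 5 + 1  # all labels nonzero initially
p_imgs, p_lbls, idx = poison_batch(imgs, lbls, target_label=0, poison_rate=0.2)
```

A model trained on such a mixture behaves normally on clean inputs but predicts the target class whenever the trigger appears, which is the stealthiness property the detection schemes discussed in the paper try to expose.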


