SATBA: An Invisible Backdoor Attack Based On Spatial Attention

by   Huasong Zhou, et al.

As a new frontier of AI security, backdoor attacks have drawn growing research attention in recent years. It is well known that a backdoor can be injected into a DNN model by training it on a poisoned dataset composed of poisoned samples. The injected model outputs correct predictions on benign samples yet behaves abnormally on poisoned samples that contain the trigger pattern. Most existing triggers are visible and can easily be spotted by human visual inspection, and the trigger-injection process causes feature loss in both the natural sample and the trigger. To address these problems, and inspired by the spatial attention mechanism, we introduce a novel backdoor attack named SATBA that is invisible and minimizes trigger loss, improving both the attack success rate and model accuracy. It extracts data features and generates a trigger pattern correlated with the clean data through spatial attention, then poisons the clean image by using a U-type model to plant the trigger into the original data. We demonstrate the effectiveness of our attack against three popular image-classification DNNs on three standard datasets. In addition, we conduct extensive image-similarity experiments to show that the proposed attack provides the practical stealthiness that is critical for resisting backdoor defenses.
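The core idea, an attention map derived from the sample's own features that steers where the trigger is blended, can be illustrated with a minimal sketch. Note this is not the paper's actual pipeline (which trains a U-type network to embed the trigger); it only shows the attention-weighted blending concept, and the function names, the CBAM-style avg/max channel pooling, and the `alpha` blending strength are illustrative assumptions.

```python
import numpy as np

def spatial_attention(features):
    """Compute a (H, W) spatial attention map from (C, H, W) features.

    Illustrative CBAM-style pooling: average and max across the channel
    axis, summed and min-max normalized to [0, 1]. The paper derives its
    attention from the clean sample's features; the exact pooling here
    is an assumption for demonstration.
    """
    avg = features.mean(axis=0)
    mx = features.max(axis=0)
    att = avg + mx
    return (att - att.min()) / (att.max() - att.min() + 1e-8)

def poison_image(clean, trigger, att, alpha=0.05):
    """Blend a trigger into a clean image, weighted by spatial attention.

    Perturbation magnitude is bounded by `alpha`, so the poisoned sample
    stays visually close to the original; salient regions (high attention)
    carry more of the trigger signal. In SATBA proper, this embedding is
    performed by a learned U-type model rather than a fixed blend.
    """
    poisoned = clean + alpha * att[None, :, :] * trigger
    return np.clip(poisoned, 0.0, 1.0)
```

As a usage sketch, one would extract `features` for a clean image from an early DNN layer, build the map with `spatial_attention`, and call `poison_image` to produce the trigger-carrying sample; a small `alpha` keeps the maximum per-pixel change below the visibility threshold.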



