Dispersed Pixel Perturbation-based Imperceptible Backdoor Trigger for Image Classifier Models

08/19/2022
by   Yulong Wang, et al.

Typical deep neural network (DNN) backdoor attacks are based on triggers embedded in inputs. Existing imperceptible triggers are computationally expensive or achieve low attack success rates. In this paper, we propose a new backdoor trigger that is easy to generate, imperceptible, and highly effective. The new trigger is a uniformly randomly generated three-dimensional (3D) binary pattern that can be horizontally and/or vertically repeated and mirrored and superposed onto three-channel images for training a backdoored DNN model. Dispersed throughout an image, the new trigger produces weak perturbation to individual pixels, but collectively holds a strong recognizable pattern to train and activate the backdoor of the DNN. We also analytically reveal that the trigger becomes increasingly effective as image resolution improves. Experiments are conducted using the ResNet-18 and MLP models on the MNIST, CIFAR-10, and BTSR datasets. In terms of imperceptibility, the new trigger outperforms existing triggers, such as BadNets, Trojaned NN, and Hidden Backdoor, by over an order of magnitude. The new trigger achieves an almost 100% attack success rate.
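The trigger construction described above — a small random 3D binary pattern, mirrored horizontally and vertically, then tiled across the image and superposed at low amplitude — can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the 4x4x3 base-pattern size and a perturbation amplitude of one intensity level are hypothetical parameter choices.

```python
import numpy as np

def make_trigger(img_h, img_w, base=4, amplitude=1, seed=0):
    """Build a dispersed trigger: a uniformly random (base, base, 3)
    binary pattern, mirrored and tiled to cover an img_h x img_w image."""
    rng = np.random.default_rng(seed)
    block = rng.integers(0, 2, size=(base, base, 3))       # random 3D binary pattern
    row = np.concatenate([block, block[:, ::-1]], axis=1)  # horizontal mirror
    tile = np.concatenate([row, row[::-1]], axis=0)        # vertical mirror
    reps_h = -(-img_h // tile.shape[0])                    # ceiling division
    reps_w = -(-img_w // tile.shape[1])
    full = np.tile(tile, (reps_h, reps_w, 1))[:img_h, :img_w]
    return amplitude * full

def poison(image, trigger):
    """Superpose the trigger onto a uint8 three-channel image,
    clipping back to the valid intensity range."""
    return np.clip(image.astype(np.int16) + trigger, 0, 255).astype(np.uint8)

# Usage: stamp the trigger onto a 32x32 RGB image (e.g. CIFAR-10 sized).
img = np.zeros((32, 32, 3), dtype=np.uint8)
trig = make_trigger(32, 32)
poisoned = poison(img, trig)
# Each pixel is perturbed by at most `amplitude`, so the change is
# imperceptible locally while the repeated pattern spans the whole image.
```

Because the per-pixel change is bounded by `amplitude`, the trigger stays visually weak at any single location, while the mirrored tiling gives the network a globally consistent pattern to latch onto during training.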


