Efficient Backdoor Attacks for Deep Neural Networks in Real-world Scenarios

06/14/2023
by Hong Sun, et al.

Recent deep neural networks (DNNs) rely on vast amounts of training data, giving malicious attackers an opportunity to contaminate that data and carry out backdoor attacks, which significantly undermine the reliability of DNNs. Existing backdoor attack methods, however, rest on the unrealistic assumption that all training data comes from a single source and that attackers have full access to it. In this paper, we address this limitation by introducing a more realistic attack scenario in which victims collect data from multiple sources and attackers cannot access the complete training data. We refer to this scenario as data-constrained backdoor attacks. In such cases, previous attack methods suffer from severe efficiency degradation because benign and poisoning features become entangled during backdoor injection. To tackle this problem, we propose a novel approach that leverages the pre-trained Contrastive Language-Image Pre-Training (CLIP) model. We introduce three CLIP-based techniques along two distinct streams: Clean Feature Suppression, which suppresses the influence of clean features so that poisoning features stand out, and Poisoning Feature Augmentation, which amplifies the presence and impact of poisoning features to manipulate the model's behavior more effectively. To evaluate the effectiveness, harmlessness to benign accuracy, and stealthiness of our method, we conduct extensive experiments on 3 target models, 3 datasets, and over 15 different settings. The results demonstrate remarkable improvements, with some settings achieving over 100% improvement compared to existing attacks in data-constrained scenarios. Our research addresses the limitations of existing methods and provides a practical and effective solution for data-constrained backdoor attacks.
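The abstract only names the two streams, so as a rough illustration of the setting the Python sketch below shows a generic data-constrained, dirty-label poisoning pipeline in which the attacker modifies only the small subset of samples it actually controls, combined with one possible reading of clean feature suppression: optimizing a bounded perturbation that lowers the similarity between each poisoned image's CLIP embedding and the text prompt of its true class. The objective, the `suppress_clean_features` and `poison_subset` helpers, the 4x4 patch trigger, and the CIFAR-10 class names are illustrative assumptions, not the authors' exact method.

```python
# Illustrative sketch only -- a generic data-constrained backdoor poisoning pipeline,
# not the paper's algorithm. Requires: torch, and the OpenAI CLIP package.
import torch
import torch.nn.functional as F
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model = clip_model.float().eval()
for p in clip_model.parameters():
    p.requires_grad_(False)

# CLIP's image normalization constants.
CLIP_MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
CLIP_STD = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

CLASS_NAMES = ["airplane", "automobile", "bird", "cat", "deer",
               "dog", "frog", "horse", "ship", "truck"]  # e.g. CIFAR-10 (assumption)
TARGET_CLASS = 0                # label the backdoor should force at test time
TRIGGER = torch.ones(3, 4, 4)   # illustrative 4x4 white corner patch

with torch.no_grad():
    prompts = clip.tokenize([f"a photo of a {c}" for c in CLASS_NAMES]).to(device)
    text_feat = F.normalize(clip_model.encode_text(prompts).float(), dim=-1)


def suppress_clean_features(x, y, steps=30, lr=2.0 / 255, eps=8.0 / 255):
    """Find a bounded perturbation that lowers the cosine similarity between each
    image's CLIP embedding and the prompt of its ground-truth class, so that the
    trigger (poisoning feature) dominates what the victim model can learn."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        inp = F.interpolate((x + delta).clamp(0, 1), size=224, mode="bilinear")
        inp = (inp - CLIP_MEAN) / CLIP_STD
        img_feat = F.normalize(clip_model.encode_image(inp).float(), dim=-1)
        sim = (img_feat * text_feat[y]).sum(dim=-1).mean()  # similarity to own class
        sim.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()                  # descend on the similarity
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (x + delta.detach()).clamp(0, 1)


def poison_subset(images, labels, poison_rate=0.01):
    """Data-constrained setting: the attacker only touches the samples it controls."""
    images, labels = images.clone(), labels.clone()
    n_poison = max(1, int(poison_rate * len(images)))
    idx = torch.randperm(len(images))[:n_poison]
    x = suppress_clean_features(images[idx].to(device), labels[idx].to(device)).cpu()
    x[:, :, -4:, -4:] = TRIGGER        # stamp the trigger patch
    images[idx] = x
    labels[idx] = TARGET_CLASS         # dirty-label poisoning toward the target class
    return images, labels
```

In such a pipeline, the poisoned `(images, labels)` are simply mixed back into the victim's training split; ordinary training then implants the backdoor, and stamping the trigger patch at test time steers predictions toward `TARGET_CLASS` while clean inputs remain largely unaffected.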



