Excess Capacity and Backdoor Poisoning

09/02/2021
by Naren Sarayu Manoj, et al.

A backdoor data poisoning attack is an adversarial attack wherein the attacker injects several watermarked, mislabeled training examples into a training set. The watermark does not impact the test-time performance of the model on typical data; however, the model reliably errs on watermarked examples. To gain a better foundational understanding of backdoor data poisoning attacks, we present a formal theoretical framework within which one can discuss backdoor data poisoning attacks for classification problems. We then use this to analyze important statistical and computational issues surrounding these attacks. On the statistical front, we identify a parameter we call the memorization capacity that captures the intrinsic vulnerability of a learning problem to a backdoor attack. This allows us to argue about the robustness of several natural learning problems to backdoor attacks. Our results favoring the attacker involve presenting explicit constructions of backdoor attacks, and our robustness results show that some natural problem settings cannot yield successful backdoor attacks. From a computational standpoint, we show that under certain assumptions, adversarial training can detect the presence of backdoors in a training set. We then show that under similar assumptions, two closely related problems we call backdoor filtering and robust generalization are nearly equivalent. This implies that it is both asymptotically necessary and sufficient to design algorithms that can identify watermarked examples in the training set in order to obtain a learning algorithm that both generalizes well to unseen data and is robust to backdoors.
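
As a concrete, purely illustrative sketch of the attack model described above, the snippet below builds a poisoned training set by stamping a fixed pixel patch (the watermark) onto a small fraction of examples and relabeling them with an attacker-chosen target class. The patch shape, poison fraction, and target label are assumptions made for illustration; they are not the explicit constructions analyzed in the paper.

```python
# Hypothetical sketch of a backdoor (watermark) data poisoning attack on an
# image classification training set. `add_watermark`, the patch location, the
# poison fraction, and the target label are illustrative choices only.
import numpy as np

def add_watermark(x, patch_value=1.0, patch_size=3):
    """Stamp a small, fixed corner patch (the watermark/trigger) onto an image."""
    x = x.copy()
    x[:patch_size, :patch_size] = patch_value
    return x

def poison_training_set(X, y, target_label, poison_fraction=0.05, seed=0):
    """Inject watermarked, mislabeled copies of a few training examples.

    The clean examples are left untouched, so performance on typical test data
    is largely unaffected; a model that memorizes the trigger will reliably
    predict `target_label` on watermarked inputs.
    """
    rng = np.random.default_rng(seed)
    n_poison = int(poison_fraction * len(X))
    idx = rng.choice(len(X), size=n_poison, replace=False)
    X_poison = np.stack([add_watermark(X[i]) for i in idx])
    y_poison = np.full(n_poison, target_label)
    return np.concatenate([X, X_poison]), np.concatenate([y, y_poison])

# Usage example: 28x28 grayscale images with 10 classes (synthetic data).
X = np.random.rand(1000, 28, 28)
y = np.random.randint(0, 10, size=1000)
X_backdoored, y_backdoored = poison_training_set(X, y, target_label=7)
```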

Related research

03/29/2021 – Lagrangian Objective Function Leads to Improved Unforeseen Attack Generalization in Adversarial Training
09/30/2019 – Hidden Trigger Backdoor Attacks
10/12/2022 – Few-shot Backdoor Attacks via Neural Tangent Kernels
08/11/2020 – Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks
02/12/2019 – A new Backdoor Attack in CNNs by training set corruption without label poisoning
03/23/2021 – The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers?
04/18/2020 – Protecting Classifiers From Attacks. A Bayesian Approach
