On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping

02/26/2020
by Sanghyun Hong, et al.

Machine learning algorithms are vulnerable to data poisoning attacks. Prior taxonomies that focus on specific scenarios, e.g., indiscriminate or targeted, have enabled defenses for the corresponding subset of known attacks. Yet, this introduces an inevitable arms race between adversaries and defenders. In this work, we study the feasibility of an attack-agnostic defense relying on artifacts that are common to all poisoning attacks. Specifically, we focus on a common element between all attacks: they modify gradients computed to train the model. We identify two main artifacts of gradients computed in the presence of poison: (1) their ℓ_2 norms have significantly higher magnitudes than those of clean gradients, and (2) their orientation differs from clean gradients. Based on these observations, we propose the prerequisite for a generic poisoning defense: it must bound gradient magnitudes and minimize differences in orientation. We call this gradient shaping. As an exemplar tool to evaluate the feasibility of gradient shaping, we use differentially private stochastic gradient descent (DP-SGD), which clips and perturbs individual gradients during training to obtain privacy guarantees. We find that DP-SGD, even in configurations that do not result in meaningful privacy guarantees, increases the model's robustness to indiscriminate attacks. It also mitigates worst-case targeted attacks and increases the adversary's cost in multi-poison scenarios. The only attack we find DP-SGD to be ineffective against is a strong, yet unrealistic, indiscriminate attack. Our results suggest that, while we currently lack a generic poisoning defense, gradient shaping is a promising direction for future research.
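To make the "clip and perturb" mechanism concrete, below is a minimal sketch of the gradient-shaping step that DP-SGD applies to each minibatch. It assumes per-example gradients are already available as NumPy arrays; the function name shape_gradients and the parameter values are illustrative choices, not the paper's configuration, and the noise calibration follows the standard DP-SGD recipe rather than the authors' exact setup.

import numpy as np

def shape_gradients(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip each per-example gradient to an l2-norm bound, then average
    and perturb with Gaussian noise, as in DP-SGD."""
    if rng is None:
        rng = np.random.default_rng()
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Bound the gradient magnitude: poisoned examples tend to produce
        # gradients with unusually large l2 norms, so clipping limits their influence.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Gaussian noise blurs differences in orientation between individual
    # (possibly poisoned) gradients and the clean average.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped), size=mean_grad.shape)
    return mean_grad + noise

# Toy usage: eight per-example gradients of dimension 5.
grads = [np.random.default_rng(i).normal(size=5) for i in range(8)]
update = shape_gradients(grads, clip_norm=1.0, noise_multiplier=0.5)
print(update)

In this view, the clipping bound addresses the first artifact (inflated gradient magnitudes) and the added noise addresses the second (orientation differences), which is why DP-SGD serves as a convenient exemplar of gradient shaping even when its privacy parameters are too weak to give meaningful guarantees.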


