Manipulating SGD with Data Ordering Attacks

04/19/2021
by Ilia Shumailov, et al.

Machine learning is vulnerable to a wide variety of attacks. It is now well understood that by changing the underlying data distribution, an adversary can poison the model trained with it or introduce backdoors. In this paper we present a novel class of training-time attacks that require no changes to the underlying dataset or model architecture, but instead only change the order in which data are supplied to the model. In particular, an attacker can disrupt the integrity and availability of a model by simply reordering training batches, with no knowledge of either the model or the dataset. Indeed, the attacks presented here are not specific to the model or dataset, but rather target the stochastic nature of modern learning procedures. We extensively evaluate our attacks and find that the adversary can disrupt model training and even introduce backdoors. For integrity, we find that the attacker can either stop the model from learning, or poison it to learn behaviours specified by the attacker. For availability, we find that a single adversarially-ordered epoch can be enough to slow down model learning, or even to reset all of the learning progress. Such attacks have a long-term impact in that they decrease model performance hundreds of epochs after the attack took place. Reordering is a very powerful adversarial paradigm in that it removes the assumption that an adversary must inject adversarial data points or perturbations to perform training-time attacks. It reminds us that stochastic gradient descent relies on the assumption that data are sampled at random. If this randomness is compromised, then all bets are off.
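To make the batch-reordering idea concrete, here is a minimal sketch in PyTorch of one adversarially-ordered epoch. It is illustrative only, not the authors' exact procedure: the toy data, the linear `victim` and `surrogate` models, and the high-loss-first ordering are all assumptions made for the example. The point it shows is that the attacker never modifies the data, only the sequence in which batches reach SGD.

```python
# Hedged sketch of a data-ordering attack on SGD (not the paper's exact code).
# Idea: instead of the random shuffle SGD assumes, feed one epoch of batches
# ordered by a surrogate model's per-example loss, biasing the gradient sequence.
import torch
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Toy data and models; "surrogate" stands in for whatever scoring model the
# attacker uses to rank examples (an assumption for this illustration).
X, y = torch.randn(512, 20), torch.randint(0, 2, (512,))
victim = torch.nn.Linear(20, 2)
surrogate = torch.nn.Linear(20, 2)
opt = torch.optim.SGD(victim.parameters(), lr=0.1)
loss_fn = torch.nn.CrossEntropyLoss(reduction="none")

# Attacker step: rank every training example by surrogate loss.
with torch.no_grad():
    per_example_loss = loss_fn(surrogate(X), y)
order = torch.argsort(per_example_loss, descending=True)  # e.g. high-loss first

# Victim step: one epoch over adversarially ordered (not shuffled) batches.
ordered_loader = DataLoader(TensorDataset(X[order], y[order]),
                            batch_size=32, shuffle=False)
for xb, yb in ordered_loader:
    opt.zero_grad()
    loss_fn(victim(xb), yb).mean().backward()
    opt.step()
```

A benign training loop would differ only in passing `shuffle=True` to the DataLoader; the contrast highlights that the attack surface is the sampling order itself, not the data.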


