BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models

10/06/2020
by Ahmed Salem, et al.

The tremendous progress of autoencoders and generative adversarial networks (GANs) has led to their application to multiple critical tasks, such as fraud detection and sanitized data generation. This increasing adoption has fostered the study of security and privacy risks stemming from these models. However, previous works have mainly focused on membership inference attacks. In this work, we explore one of the most severe attacks against machine learning models, namely the backdoor attack, against both autoencoders and GANs. A backdoor attack is a training-time attack in which the adversary implants a hidden backdoor in the target model that can only be activated by a secret trigger. State-of-the-art backdoor attacks focus on classification-based tasks. We extend the applicability of backdoor attacks to autoencoders and GAN-based models. More concretely, we propose the first backdoor attack against autoencoders and GANs in which the adversary controls what the decoded or generated images are when the backdoor is activated. Our results show that the adversary can build a backdoored autoencoder that returns a target output for all backdoored inputs while behaving normally on clean inputs. Similarly, for GANs, our experiments show that the adversary can make the generator produce data from a different distribution when the backdoor is activated, while maintaining the generator's utility when it is not.
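The mechanism the abstract describes, implanting a backdoor during training so that trigger-stamped inputs decode to an attacker-chosen target while clean inputs are reconstructed normally, can be expressed as a joint loss over clean and backdoored batches. The following is a minimal illustrative sketch in PyTorch, not the authors' code: the toy architecture, the corner-patch trigger in stamp_trigger, and the all-black target_img are all assumptions made for illustration.

import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Toy autoencoder for 28x28 grayscale images (illustrative, not the paper's model)."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(64, 28 * 28), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x)).view(-1, 1, 28, 28)

def stamp_trigger(x):
    # Hypothetical secret trigger: a small white patch in the image corner.
    x = x.clone()
    x[:, :, :4, :4] = 1.0
    return x

model = Autoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()
target_img = torch.zeros(1, 1, 28, 28)  # attacker-chosen output for triggered inputs

for step in range(100):              # stand-in for a real data loader
    x = torch.rand(32, 1, 28, 28)    # clean batch (e.g., MNIST-like images)
    clean_loss = mse(model(x), x)    # preserve normal reconstruction utility
    bd_loss = mse(model(stamp_trigger(x)),                    # triggered inputs ...
                  target_img.expand(x.size(0), -1, -1, -1))   # ... decode to the target
    loss = clean_loss + bd_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

For the GAN variant the abstract describes, the analogous construction would tie the trigger to the generator's input (for instance, a reserved noise pattern) and train the generator to emit samples from the attacker's target distribution whenever that trigger is present, while generating from the original distribution otherwise.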


