LatentPoison - Adversarial Attacks On The Latent Space

by Antonia Creswell et al.
Imperial College London

Robustness and security of machine learning (ML) systems are intertwined: a non-robust ML system (classifier, regressor, etc.) can be subject to attacks using a wide variety of exploits. With the advent of scalable deep learning methodologies, much emphasis has been placed on the robustness of supervised, unsupervised and reinforcement learning algorithms. Here, we study the robustness of the latent space of a deep variational autoencoder (dVAE), an unsupervised generative framework, to show that it is indeed possible to perturb the latent space, flip the class prediction and keep the classification probability approximately equal before and after the attack. This means that an agent that looks at the outputs of the decoder would remain oblivious to the attack.
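To make the idea concrete, the sketch below illustrates one way such a latent-space attack could be searched for: perturb the latent code so that a downstream classifier's decision on the decoded image flips, while its top-class confidence stays close to the clean value. This is a rough illustration of the concept, not the authors' method; the functions `encoder`, `decoder` and `classifier`, the optimiser and all hyper-parameters are hypothetical placeholders, and the encoder is assumed to return a latent code directly.

```python
# Minimal sketch of a latent-space attack (illustrative only).
# Assumes a pretrained encoder/decoder pair and a downstream classifier.
import torch
import torch.nn.functional as F

def latent_attack(x, encoder, decoder, classifier,
                  target_class, steps=200, lr=0.05, conf_weight=1.0):
    """Find a latent perturbation that flips the classifier's decision on the
    decoded image while keeping its confidence roughly unchanged."""
    with torch.no_grad():
        z = encoder(x)                                   # clean latent code
        p_clean = F.softmax(classifier(decoder(z)), dim=1)
        conf_clean = p_clean.max(dim=1).values           # clean confidence

    delta = torch.zeros_like(z, requires_grad=True)      # latent perturbation
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        x_adv = decoder(z + delta)                       # decode perturbed code
        logits = classifier(x_adv)
        p_adv = F.softmax(logits, dim=1)

        # (1) push the decoded image towards the target class;
        # (2) keep the top-class confidence close to the clean confidence,
        #     so an agent watching the decoder outputs sees nothing unusual.
        flip_loss = F.cross_entropy(logits, target_class)
        conf_loss = (p_adv.max(dim=1).values - conf_clean).pow(2).mean()
        loss = flip_loss + conf_weight * conf_loss

        opt.zero_grad()
        loss.backward()
        opt.step()

    return (z + delta).detach()                          # poisoned latent code
```

In this sketch the trade-off between flipping the prediction and hiding the attack is controlled by the assumed `conf_weight` term; larger values keep the reported confidence closer to the clean one at the cost of a harder optimisation.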


