LatentPoison - Adversarial Attacks On The Latent Space

11/08/2017
by Antonia Creswell, et al.

Robustness and security of machine learning (ML) systems are intertwined, wherein a non-robust ML system (classifiers, regressors, etc.) can be subject to attacks using a wide variety of exploits. With the advent of scalable deep learning methodologies, a lot of emphasis has been put on the robustness of supervised, unsupervised and reinforcement learning algorithms. Here, we study the robustness of the latent space of a deep variational autoencoder (dVAE), an unsupervised generative framework, to show that it is indeed possible to perturb the latent space, flip the class predictions and keep the classification probability approximately equal before and after an attack. This means that an agent that looks at the outputs of a decoder would remain oblivious to an attack.
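As a rough, hypothetical sketch of the setting described in the abstract, the snippet below fits a single additive perturbation to a VAE's latent code so that the decoded image flips a downstream classifier's prediction while still being classified confidently. The Encoder, Decoder, classifier and latent_attack names, architectures and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a latent-space attack: an additive perturbation delta is
# fit so that decode(mu + delta) is assigned a chosen target class with high
# confidence. Names and architectures are illustrative, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 32

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU())
        self.mu = nn.Linear(256, LATENT_DIM)       # mean of q(z|x)
        self.logvar = nn.Linear(256, LATENT_DIM)   # log-variance of q(z|x)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, 28 * 28), nn.Sigmoid())

    def forward(self, z):
        return self.net(z).view(-1, 1, 28, 28)

encoder, decoder = Encoder(), Decoder()                           # assume a pretrained VAE
classifier = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # downstream "agent"

def latent_attack(x, target_class, steps=200, lr=0.05):
    """Fit an additive latent perturbation so the decoded images are
    classified as `target_class` with high confidence."""
    with torch.no_grad():
        mu, _ = encoder(x)                 # attack the latent code, not the pixels
    delta = torch.zeros_like(mu, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target = torch.full((x.size(0),), target_class, dtype=torch.long)
    for _ in range(steps):
        logits = classifier(decoder(mu + delta))
        loss = F.cross_entropy(logits, target)   # push predictions toward the target class
        opt.zero_grad()
        loss.backward()
        opt.step()
    return delta.detach()

x = torch.rand(4, 1, 28, 28)                     # stand-in batch of images
delta = latent_attack(x, target_class=3)
adv = decoder(encoder(x)[0] + delta)
print(classifier(adv).softmax(dim=-1).max(dim=-1))   # flipped, yet still confident
```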


Related research

12/09/2020 · Generating Out of Distribution Adversarial Attack using Latent Space Poisoning
Traditional adversarial attacks rely upon the perturbations generated by...

04/10/2023 · Generating Adversarial Attacks in the Latent Space
Adversarial attacks in the input (pixel) space typically incorporate noi...

11/23/2022 · Benchmarking Adversarially Robust Quantum Machine Learning at Scale
Machine learning (ML) methods such as artificial neural networks are rap...

11/04/2021 · Scanflow: A multi-graph framework for Machine Learning workflow management, supervision, and debugging
Machine Learning (ML) is more than just training models, the whole workf...

05/24/2023 · A Deep Generative Model for Interactive Data Annotation through Direct Manipulation in Latent Space
The impact of machine learning (ML) in many fields of application is con...

07/15/2022 · Outcome-Guided Counterfactuals for Reinforcement Learning Agents from a Jointly Trained Generative Latent Space
We present a novel generative method for producing unseen and plausible ...

02/28/2020 · Regularisation Can Mitigate Poisoning Attacks: A Novel Analysis Based on Multiobjective Bilevel Optimisation
Machine Learning (ML) algorithms are vulnerable to poisoning attacks, wh...
