Built-in Vulnerabilities to Imperceptible Adversarial Perturbations

06/19/2018
by Thomas Tanay, et al.

Designing models that are robust to small adversarial perturbations of their inputs has proven remarkably difficult. In this work we show that the reverse problem, making models more vulnerable, is surprisingly easy. After presenting some proofs of concept on MNIST, we introduce a generic tilting attack that injects vulnerabilities into the linear layers of pre-trained networks without affecting their performance on natural data. We illustrate this attack on a multilayer perceptron trained on SVHN and use it to design a stand-alone adversarial module which we call a steganogram decoder. Finally, we show on CIFAR-10 that a state-of-the-art network can be trained to misclassify images in the presence of imperceptible backdoor signals. These different results suggest that adversarial perturbations are not always informative of the true features used by a model.
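The core idea behind the tilting attack can be illustrated with a minimal sketch. Under the assumption that natural data occupies a low-dimensional subspace of the input space, a linear layer's weights can be "tilted" along a direction orthogonal to that subspace: outputs on natural data are unchanged, yet a tiny perturbation along the tilt direction now swings the output arbitrarily far. All variable names and dimensions below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Natural data confined to a low-dimensional subspace (first k coordinates).
d, k, n = 50, 10, 200
X = np.zeros((n, d))
X[:, :k] = rng.normal(size=(n, k))

# A "pre-trained" linear model whose weights also live in the data subspace.
w = np.zeros(d)
w[:k] = rng.normal(size=k)

# Tilt the weights along a unit direction u orthogonal to the data subspace.
u = np.zeros(d)
u[k:] = rng.normal(size=d - k)
u /= np.linalg.norm(u)
alpha = 100.0                      # large tilt -> high off-manifold sensitivity
w_tilted = w + alpha * u

# Scores on natural data are identical (u is orthogonal to every sample)...
assert np.allclose(X @ w, X @ w_tilted)

# ...but an imperceptible step of size eps along u shifts the tilted
# model's score by eps * alpha, enough to flip a classification.
x = X[0]
eps = 0.05
x_adv = x + eps * u
print(x @ w, x_adv @ w_tilted)
```

The key design point is that the tilt direction carries no signal present in natural inputs, so accuracy on clean data is exactly preserved while the injected vulnerability remains invisible to standard evaluation.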


