Dynamically Computing Adversarial Perturbations for Recurrent Neural Networks

09/07/2020
by Shankar A. Deka, et al.

Convolutional and recurrent neural networks are widely employed to achieve state-of-the-art performance on classification tasks. However, it has also been noted that these networks can be manipulated adversarially with relative ease, by carefully crafted additive perturbations to the input. Although several prior works have experimentally established methods for crafting such attacks and defending against them, it is also desirable to have theoretical guarantees on the existence of adversarial examples and on the network's robustness margin against them. We provide both in this paper. We focus specifically on recurrent architectures and draw inspiration from dynamical systems theory to cast the problem naturally as one of control, which allows us to dynamically compute adversarial perturbations at each timestep of the input sequence, in the manner of a feedback controller. Illustrative examples supplement the theoretical discussion.
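The paper's control-theoretic construction is not reproduced in this abstract, but the following minimal PyTorch sketch illustrates the general idea of per-timestep, feedback-style perturbation: at each step, compute the gradient of the classification loss with respect to the current input, take a bounded gradient-sign step, and propagate the perturbed input through the recurrence so that later perturbations react to the perturbed trajectory. All names, dimensions, and the FGSM-style update rule are illustrative assumptions, not the authors' exact method.

# Minimal sketch (assumed setup, not the paper's algorithm): greedily
# computing an additive perturbation at each timestep of an RNN input,
# loosely analogous to a feedback controller acting along the sequence.
import torch
import torch.nn as nn

torch.manual_seed(0)

input_dim, hidden_dim, num_classes, seq_len = 8, 16, 3, 12
rnn = nn.RNNCell(input_dim, hidden_dim)       # stand-in recurrent model
readout = nn.Linear(hidden_dim, num_classes)  # classifier head
loss_fn = nn.CrossEntropyLoss()

x_seq = torch.randn(seq_len, input_dim)       # clean input sequence
true_label = torch.tensor([0])
eps = 0.1                                     # per-timestep perturbation budget

h = torch.zeros(1, hidden_dim)
adv_seq = []
for t in range(seq_len):
    x_t = x_seq[t].unsqueeze(0).clone().requires_grad_(True)
    logits = readout(rnn(x_t, h))
    loss = loss_fn(logits, true_label)
    loss.backward()
    # FGSM-style step at this timestep: ascend the loss within an
    # l_inf ball of radius eps (one simple choice of "control input").
    x_adv = (x_t + eps * x_t.grad.sign()).detach()
    adv_seq.append(x_adv.squeeze(0))
    # Propagate the *perturbed* input through the recurrence, so the
    # perturbation at the next step reacts to the perturbed trajectory
    # (the feedback aspect).
    h = rnn(x_adv, h.detach()).detach()

x_adv_seq = torch.stack(adv_seq)
print(x_adv_seq.shape)  # torch.Size([12, 8])

The greedy per-step update above is only one possible instantiation; the paper instead derives the perturbation from a dynamical-systems viewpoint, which is what yields its theoretical existence and robustness-margin guarantees.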


Related research

05/30/2022 · Searching for the Essence of Adversarial Perturbations
10/08/2019 · SmoothFool: An Efficient Framework for Computing Smooth Adversarial Perturbations
06/21/2023 · Universal adversarial perturbations for multiple classification tasks with quantum classifiers
12/19/2016 · Simple Black-Box Adversarial Perturbations for Deep Networks
01/06/2020 · Deceiving Image-to-Image Translation Networks for Autonomous Driving with Adversarial Perturbations
04/03/2019 · Interpreting Adversarial Examples by Activation Promotion and Suppression
01/16/2020 · A Little Fog for a Large Turn
