Robustness Analysis of Classification Using Recurrent Neural Networks with Perturbed Sequential Input

03/10/2022
by Guangyi Liu, et al.

For a given stable recurrent neural network (RNN) trained to perform a classification task on sequential inputs, we quantify explicit robustness bounds as a function of the trainable weight matrices. The sequential inputs can be perturbed in various ways; e.g., streaming images can be deformed by robot motion or an imperfect camera lens. Using the notion of a Voronoi diagram and the Lipschitz properties of stable RNNs, we provide a thorough analysis and characterize the maximum allowable perturbations that still guarantee full accuracy of the classification task. We illustrate and validate our theoretical results on a map dataset with clouds as well as the MNIST dataset.
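The combination of a Voronoi diagram and a Lipschitz constant described above can be illustrated with a minimal sketch. The idea (under assumptions not spelled out in the abstract): if the map from the input sequence to the RNN's final hidden state is Lipschitz with constant `L`, and classification assigns the nearest class centroid (so the decision regions form a Voronoi diagram), then the prediction cannot change as long as the hidden state stays inside its Voronoi cell, which holds whenever the input perturbation is smaller than half the margin to the runner-up centroid divided by `L`. The function name, the centroid-based classifier, and the norm choice here are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def max_allowable_perturbation(hidden_state, class_centroids, lipschitz_const):
    """Estimate the largest input perturbation (in norm) that provably
    cannot change the predicted class.

    Assumptions (hypothetical, for illustration):
    - the input-sequence -> final-hidden-state map is Lipschitz with
      constant `lipschitz_const`;
    - the classifier predicts the nearest centroid in `class_centroids`,
      so decision regions are Voronoi cells of the centroids.
    """
    dists = np.linalg.norm(class_centroids - hidden_state, axis=1)
    order = np.argsort(dists)
    # Gap between the winning centroid and the runner-up: the hidden
    # state must move at least margin / 2 to cross the Voronoi boundary.
    margin = dists[order[1]] - dists[order[0]]
    # Dividing by the Lipschitz constant converts the hidden-state
    # bound into a bound on the input perturbation.
    return margin / (2.0 * lipschitz_const)
```

For example, with centroids at (0, 0) and (4, 0), a hidden state at (1, 0) has margin 3 - 1 = 2; with Lipschitz constant 2, any input perturbation of norm below 0.5 leaves the classification unchanged under these assumptions.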


