Improved memory in recurrent neural networks with sequential non-normal dynamics

05/31/2019
by   A. Emin Orhan, et al.

Training recurrent neural networks (RNNs) is hard because of degeneracies in the optimization landscape, commonly known as the vanishing/exploding gradients problem. Short of designing new RNN architectures, previously proposed remedies usually boil down to orthogonalization of the recurrent dynamics, either at initialization or throughout training. The basic motivation behind these methods is that orthogonal transformations are isometries of the Euclidean space: they preserve (Euclidean) norms and thereby keep gradients from vanishing or exploding. However, this idea ignores the crucial effects of non-linearity and noise. In the presence of a non-linearity, orthogonal transformations no longer preserve norms, suggesting that alternative transformations might be better suited to non-linear networks. Moreover, in the presence of noise, norm preservation itself ceases to be the ideal objective; a more sensible objective is to maximize the signal-to-noise ratio (SNR) of the propagated signal instead. Previous work has shown that, in the linear case, recurrent networks that maximize the SNR display strongly non-normal dynamics, and that orthogonal networks are highly suboptimal by this measure. Motivated by this finding, in this paper we investigate the potential of non-normal RNNs, i.e. RNNs with a non-normal recurrent connectivity matrix, in sequential processing tasks. Our experimental results show that non-normal RNNs significantly outperform their orthogonal counterparts on a diverse range of benchmarks. We also find evidence for increased non-normality and hidden chain-like feedforward structures in trained RNNs that were initialized with orthogonal recurrent connectivity matrices.
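To make the orthogonal-versus-non-normal distinction concrete, here is a minimal NumPy sketch (not taken from the paper) contrasting a random orthogonal recurrent matrix with a chain-like (shift) matrix, one canonical example of a non-normal connectivity structure like the feedforward chains mentioned in the abstract. Non-normality is measured here by the Frobenius norm of the commutator W Wᵀ − Wᵀ W, which is zero exactly when W is normal. The function names, matrix size, and the scale parameter alpha are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def orthogonal_init(n, seed=0):
    """Random orthogonal recurrent matrix (Q factor of a Gaussian matrix)."""
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

def chain_init(n, alpha=1.0):
    """Chain-like (shift) matrix: unit i feeds forward only to unit i+1.
    This matrix is nilpotent and strongly non-normal."""
    return alpha * np.eye(n, k=-1)

def departure_from_normality(w):
    """Frobenius norm of the commutator W W^T - W^T W.
    Zero iff W is normal (orthogonal matrices are normal)."""
    return np.linalg.norm(w @ w.T - w.T @ w)

n = 64
w_orth = orthogonal_init(n)
w_chain = chain_init(n)
print(departure_from_normality(w_orth))   # ~0 up to numerical error
print(departure_from_normality(w_chain))  # > 0: non-normal dynamics
```

In practice, a matrix of either kind would serve as the initial recurrent weight matrix of a vanilla RNN before training; the same commutator-norm measure can then be applied to the trained weights to check whether the dynamics have become more or less non-normal.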


