Path-Normalized Optimization of Recurrent Neural Networks with ReLU Activations

05/23/2016
by Behnam Neyshabur, et al.

We investigate the parameter-space geometry of recurrent neural networks (RNNs), and develop an adaptation of the path-SGD optimization method, attuned to this geometry, that can learn plain RNNs with ReLU activations. On several datasets that require capturing long-term dependency structure, we show that path-SGD can significantly improve the trainability of ReLU RNNs compared to RNNs trained with SGD, even with various recently suggested initialization schemes.
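To give a concrete sense of what path-normalized updates look like, here is a minimal sketch of one path-SGD step on a two-layer feedforward ReLU network, following the path-norm formulation of Neyshabur et al. (2015); the paper's contribution extends this idea to RNNs with shared recurrent weights, which is not shown here. The function name, toy squared-error loss, and tensor shapes are illustrative assumptions, not the authors' code.

```python
import numpy as np

def path_sgd_step(W1, W2, x, y_true, lr=0.1, eps=1e-8):
    # One path-SGD step for y = relu(x @ W1) @ W2 (a sketch, not the
    # paper's RNN algorithm). Shapes: x (n, d_in), W1 (d_in, d_h),
    # W2 (d_h, d_out), y_true (n, d_out).

    # Forward pass and gradient of a toy 0.5 * squared-error loss.
    h = np.maximum(x @ W1, 0.0)                 # ReLU hidden layer
    y = h @ W2
    err = y - y_true

    # Ordinary backprop gradients.
    gW2 = h.T @ err
    gW1 = x.T @ ((err @ W2.T) * (h > 0))

    # Path-norm scaling: for each edge e, gamma_e is the sum over all
    # input-output paths through e of the product of the *other* edges'
    # squared weights. For two layers this reduces to row/column sums.
    gamma1 = np.broadcast_to((W2 ** 2).sum(axis=1), W1.shape)    # per W1[i, j]
    gamma2 = (W1 ** 2).sum(axis=0)[:, None] * np.ones_like(W2)   # per W2[j, k]

    # Path-normalized update: rescale each gradient coordinate by 1/gamma.
    W1 = W1 - lr * gW1 / (gamma1 + eps)
    W2 = W2 - lr * gW2 / (gamma2 + eps)
    return W1, W2
```

The rescaling makes the update invariant to the node-wise rescalings under which a ReLU network's function is unchanged, which is the geometric property the abstract refers to.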
