RNN-based Online Learning: An Efficient First-Order Optimization Algorithm with a Convergence Guarantee

03/07/2020
by   N. Mert Vural, et al.

We investigate online nonlinear regression with continually running recurrent neural networks (RNNs), i.e., RNN-based online learning. For this setting, we introduce an efficient first-order training algorithm that is theoretically guaranteed to converge to the optimal network parameters. Our algorithm is truly online in that it makes no assumptions about the learning environment in order to guarantee convergence. Through numerical simulations, we verify our theoretical results and demonstrate significant performance improvements of our algorithm over state-of-the-art RNN training methods.
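The abstract does not spell out the update rule, but the setting it describes (a continually running RNN trained online with a first-order method for regression) can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's algorithm: it runs a small tanh RNN on a synthetic stream and applies a one-step-truncated stochastic gradient update after each sample. All dimensions, the learning rate, and the synthetic target are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_h, n_x = 8, 2  # hidden and input sizes (illustrative)

# Parameters: recurrent, input, and readout weights
W = rng.normal(scale=0.1, size=(n_h, n_h))
U = rng.normal(scale=0.1, size=(n_h, n_x))
w = np.zeros(n_h)
lr = 0.05

h = np.zeros(n_h)  # hidden state carried across the stream
for t in range(200):
    x = rng.normal(size=n_x)
    y = np.sin(x[0])  # synthetic regression target

    # Forward pass: one RNN step and a linear readout
    a = W @ h + U @ x
    h_new = np.tanh(a)
    y_hat = w @ h_new
    err = y_hat - y

    # One-step-truncated gradient of the squared loss 0.5 * err**2
    # (gradients are not propagated through the previous hidden state)
    dh = err * w * (1.0 - h_new**2)  # backprop through tanh
    W -= lr * np.outer(dh, h)
    U -= lr * np.outer(dh, x)
    w -= lr * err * h_new

    h = h_new
```

In a truly online setting like the one the paper targets, each sample is seen once and the hidden state is never reset; a practical algorithm must also control how gradients flow through the recurrence, which is exactly where convergence guarantees become nontrivial.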


Related research

- 10/22/2019, An Efficient EKF Based Algorithm For LSTM-Based Online Learning: We investigate online nonlinear regression with long short term memory (...
- 07/05/2019, A Unified Framework of Online Learning Algorithms for Training Recurrent Neural Networks: We present a framework for compactly summarizing many recent results in ...
- 03/13/2020, Identification of AC Networks via Online Learning: The increasing integration of intermittent renewable generation in power...
- 05/16/2020, Achieving Online Regression Performance of LSTMs with Simple RNNs: Recurrent Neural Networks (RNNs) are widely used for online regression d...
- 03/29/2015, Towards Shockingly Easy Structured Classification: A Search-based Probabilistic Online Learning Framework: There are two major approaches for structured classification. One is the...
- 10/12/2020, RNN Training along Locally Optimal Trajectories via Frank-Wolfe Algorithm: We propose a novel and efficient training method for RNNs by iteratively...
