Convergence Analyses of Online ADAM Algorithm in Convex Setting and Two-Layer ReLU Neural Network

05/22/2019
by Biyi Fang et al.

Online learning is an appealing learning paradigm of great practical interest due to the recent emergence of large-scale applications such as online advertising placement and online web ranking. Standard online learning assumes a finite number of samples, while in practice data is streamed indefinitely; in such a setting, gradient descent with a diminishing learning rate does not work. We first introduce regret with rolling window, a new performance metric for online streaming learning that measures the performance of an algorithm on every fixed number of contiguous samples. We then propose a family of algorithms based on gradient descent with a constant or adaptive learning rate and provide detailed analyses establishing regret-bound properties of these algorithms. In the convex setting, we show regret of the order of the square root of the window size for both the constant and dynamic learning-rate scenarios. Our proof also applies to the standard online setting, where we provide the first analysis achieving the same regret order (previous proofs contain flaws). We further study a two-layer neural network with ReLU activation and establish that, if the initial weights are close to a stationary point, the same square-root regret bound is attainable. We conduct computational experiments demonstrating the superior performance of the proposed algorithms.
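To make the setting concrete, here is a minimal sketch (not the authors' exact algorithm or analysis) of ADAM run with a constant learning rate on a stream of losses, together with a rolling-window regret measurement: the algorithm's cumulative loss on a window of contiguous samples minus the loss of the best fixed point for that window. The quadratic loss family, window placement, and hyperparameters are illustrative assumptions.

```python
import numpy as np

def online_adam(grad_stream, dim, alpha=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    """ADAM with a constant learning rate on a stream of gradient oracles.

    Returns the iterate played at each step (before the update), which is
    what rolling-window regret is measured on.
    """
    x = np.zeros(dim)
    m = np.zeros(dim)  # first-moment (mean) estimate of the gradient
    v = np.zeros(dim)  # second-moment (uncentered variance) estimate
    iterates = []
    for t, grad_fn in enumerate(grad_stream, start=1):
        iterates.append(x.copy())           # play x_t, then observe the loss
        g = grad_fn(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        m_hat = m / (1 - beta1 ** t)        # bias correction
        v_hat = v / (1 - beta2 ** t)
        x = x - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return iterates

# Illustrative stream: quadratic losses f_t(x) = 0.5 * ||x - c_t||^2.
rng = np.random.default_rng(0)
centers = rng.normal(size=(200, 2))
losses = [lambda x, c=c: 0.5 * np.sum((x - c) ** 2) for c in centers]
grads = [lambda x, c=c: x - c for c in centers]
iterates = online_adam(grads, dim=2)

# Regret with rolling window: loss of the algorithm on a window of
# contiguous samples minus the loss of the best fixed point for that
# window (here the window mean, since the losses are quadratic).
win = slice(100, 150)
alg_loss = sum(f(x) for f, x in zip(losses[win], iterates[win]))
best_fixed = centers[win].mean(axis=0)
comparator_loss = sum(f(best_fixed) for f in losses[win])
rolling_regret = alg_loss - comparator_loss
```

Because the comparator is recomputed per window rather than once for the whole (infinite) stream, the metric remains meaningful even when data arrives indefinitely and no single fixed point is good forever.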
