On the Convergence of Adam and Adagrad

03/05/2020
by Alexandre Défossez et al.

We provide a simple proof of the convergence of the optimization algorithms Adam and Adagrad under the assumptions of a smooth objective and an almost sure uniform bound on the ℓ_∞ norm of the gradients. This work builds on the techniques introduced by Ward et al. (2019) and extends them to the Adam optimizer. We show that, in expectation, the squared norm of the objective gradient averaged over the trajectory has an upper bound that is explicit in the constants of the problem, the parameters of the optimizer, and the total number of iterations N. This bound can be made arbitrarily small. In particular, Adam with a learning rate α = 1/√N and a momentum parameter on the squared gradients β_2 = 1 - 1/N achieves the same rate of convergence O(ln(N)/√N) as Adagrad. Thus, Adam can be used as a finite-horizon version of Adagrad, much like constant step size SGD can be used instead of its asymptotically converging decaying step size version.
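As a rough illustration of the finite-horizon scaling described in the abstract, the sketch below (assuming PyTorch, a placeholder model, and an iteration budget N chosen by the user; β_1 is not specified above and is left at a common default) sets Adam's hyperparameters to α = 1/√N and β_2 = 1 - 1/N.

```python
import math
import torch

# Placeholder model and iteration budget (assumptions for illustration only).
model = torch.nn.Linear(10, 1)
N = 100_000  # total number of iterations

# Finite-horizon scaling from the abstract: alpha = 1/sqrt(N), beta_2 = 1 - 1/N.
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1.0 / math.sqrt(N),        # alpha = 1/sqrt(N)
    betas=(0.9, 1.0 - 1.0 / N),   # beta_1 left at its common default of 0.9
)
```

With these settings, the bound discussed above suggests Adam matches Adagrad's O(ln(N)/√N) rate over the fixed budget of N iterations.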
