Convergence of Adam for Non-convex Objectives: Relaxed Hyperparameters and Non-ergodic Case

07/20/2023
by   Meixuan He, et al.

Adam is a widely used stochastic optimization algorithm in machine learning. However, its convergence is still not fully understood, especially in the non-convex setting. This paper explores hyperparameter settings under which vanilla Adam converges and tackles the challenges of non-ergodic convergence, the form of convergence most relevant in practice. The main contributions are as follows. First, we give precise definitions of ergodic and non-ergodic convergence, which cover nearly all forms of convergence for stochastic optimization algorithms, and we explain why non-ergodic convergence is the stronger notion. Second, we establish a weaker sufficient condition for the ergodic convergence of Adam, allowing a more relaxed choice of hyperparameters. On this basis, we derive an almost-sure ergodic convergence rate for Adam that is arbitrarily close to o(1/√K). More importantly, we prove, for the first time, that the last iterate of Adam converges to a stationary point for non-convex objectives. Finally, we obtain a non-ergodic convergence rate of O(1/K) for the function values under the Polyak-Łojasiewicz (PL) condition. These results provide a solid theoretical foundation for applying Adam to non-convex stochastic optimization problems.
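For context, the sketch below records a standard formulation of the vanilla Adam recursion, the ergodic versus non-ergodic (last-iterate) convergence notions, and the PL condition mentioned in the abstract. The notation (step sizes α_k, momentum parameters β_1, β_2, PL constant μ) is generic and is our assumption; the paper's own definitions and constants may differ in detail.

```latex
% Vanilla Adam recursion (standard form; the paper's exact notation may differ).
% g_k: stochastic gradient at step k; alpha_k: step size; beta_1, beta_2 in [0,1); epsilon > 0.
\begin{aligned}
m_k &= \beta_1 m_{k-1} + (1-\beta_1)\, g_k, \\
v_k &= \beta_2 v_{k-1} + (1-\beta_2)\, g_k^{2}, \\
x_{k+1} &= x_k - \alpha_k \,\frac{m_k}{\sqrt{v_k} + \epsilon}.
\end{aligned}

% Ergodic vs. non-ergodic convergence of the gradient norm (illustrative forms;
% the paper gives its own precise definitions).
\text{ergodic:}\quad \frac{1}{K}\sum_{k=1}^{K} \mathbb{E}\,\|\nabla f(x_k)\|^{2} \to 0,
\qquad
\text{non-ergodic (last iterate):}\quad \|\nabla f(x_K)\| \to 0 \ \text{a.s.}

% Polyak-Łojasiewicz (PL) condition with constant \mu > 0 and minimum value f^\ast:
\|\nabla f(x)\|^{2} \;\ge\; 2\mu\,\bigl(f(x) - f^{\ast}\bigr) \quad \text{for all } x .
```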

