Accelerated gradient methods for nonconvex optimization: Escape trajectories from strict saddle points and convergence to local minima

07/13/2023
by Rishabh Dixit, et al.

This paper considers the problem of understanding the behavior of a general class of accelerated gradient methods on smooth nonconvex functions. Motivated by recent works that have proposed effective algorithms, based on Polyak's heavy ball method and the Nesterov accelerated gradient method, for achieving convergence to a local minimum of nonconvex functions, this work proposes a broad class of Nesterov-type accelerated methods and puts forth a rigorous study of these methods, encompassing escape from saddle points and convergence to local minima, through both asymptotic and non-asymptotic analyses. In the asymptotic regime, this paper answers an open question of whether Nesterov's accelerated gradient method (NAG) with variable momentum parameter avoids strict saddle points almost surely. This work also develops two metrics of asymptotic rates of convergence and divergence, and evaluates these metrics for several popular accelerated methods, such as NAG and Nesterov's accelerated gradient with constant momentum (NCM), near strict saddle points. In the local regime, this work provides an analysis that leads to "linear" exit-time estimates from strict saddle neighborhoods for trajectories of these accelerated methods, as well as the necessary conditions for the existence of such trajectories. Finally, this work studies a sub-class of accelerated methods that can converge in convex neighborhoods of nonconvex functions to a local minimum at a near-optimal rate while offering saddle-escape behavior superior to that of NAG.
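To make the setting concrete, the sketch below simulates Nesterov-type accelerated gradient iterations with the classical variable momentum schedule mu_k = (t_k - 1)/t_{k+1} on a toy smooth nonconvex function that has a strict saddle at the origin. The test function, step size, and initialization are illustrative assumptions and are not taken from the paper; the example only shows the qualitative behavior the abstract describes, namely escape from a strict saddle neighborhood followed by convergence toward a local minimum.

```python
import numpy as np

def grad(z):
    # Gradient of f(x, y) = x^2/2 + y^4/4 - y^2/2, a toy nonconvex function
    # with a strict saddle at the origin and local minima at (0, +/-1).
    x, y = z
    return np.array([x, y**3 - y])

def nag_variable_momentum(z0, step=0.1, iters=500):
    """Nesterov-type accelerated gradient with the variable momentum
    schedule mu_k = (t_k - 1)/t_{k+1}; parameters are illustrative."""
    x_prev = np.asarray(z0, dtype=float)
    y = x_prev.copy()
    t = 1.0
    traj = [x_prev.copy()]
    for _ in range(iters):
        x_next = y - step * grad(y)                      # gradient step at the extrapolated point
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t**2))  # Nesterov's t-sequence
        y = x_next + ((t - 1.0) / t_next) * (x_next - x_prev)  # momentum extrapolation
        x_prev, t = x_next, t_next
        traj.append(x_prev.copy())
    return np.array(traj)

# Start slightly off the saddle's stable manifold (the x-axis); the iterates
# should leave the saddle neighborhood and approach the local minimum near (0, 1).
path = nag_variable_momentum([0.5, 1e-6])
print("final iterate:", path[-1])
```

Tracking the full trajectory (rather than only the final iterate) is what the paper's exit-time analysis formalizes: the number of iterations spent inside a strict saddle neighborhood before the iterates leave it.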

Related research

Proximal Gradient Algorithm with Momentum and Flexible Parameter Restart for Nonconvex Optimization (02/26/2020)
Various types of parameter restart schemes have been proposed for accele...

Boundary Conditions for Linear Exit Time Gradient Trajectories Around Saddle Points: Analysis and Algorithm (01/07/2021)
Gradient-related first-order methods have become the workhorse of large-...

Exit Time Analysis for Approximations of Gradient Descent Trajectories Around Saddle Points (06/01/2020)
This paper considers the problem of understanding the exit time for traj...

Generalized Momentum-Based Methods: A Hamiltonian Perspective (06/02/2019)
We take a Hamiltonian-based perspective to generalize Nesterov's acceler...

Accelerated Target Updates for Q-learning (05/07/2019)
This paper studies accelerations in Q-learning algorithms. We propose an...

Escaping strict saddle points of the Moreau envelope in nonsmooth optimization (06/17/2021)
Recent work has shown that stochastically perturbed gradient methods can...

Accelerated Block Coordinate Proximal Gradients with Applications in High Dimensional Statistics (10/15/2017)
Nonconvex optimization problems arise in different research fields and a...
