A Homogenization Approach for Gradient-Dominated Stochastic Optimization

08/21/2023
by   Jiyuan Tan, et al.

The gradient dominance property is a condition weaker than strong convexity, yet it suffices to ensure global convergence of first-order methods even in non-convex optimization. The property finds application in various machine learning domains, including matrix decomposition, linear neural networks, and policy-based reinforcement learning (RL). In this paper, we study the stochastic homogeneous second-order descent method (SHSODM) for gradient-dominated optimization with exponent α ∈ [1, 2], based on a recently proposed homogenization approach. Theoretically, we show that SHSODM achieves a sample complexity of O(ϵ^(-7/(2α)+1)) for α ∈ [1, 3/2) and Õ(ϵ^(-2/α)) for α ∈ [3/2, 2]. We further provide an SHSODM variant with a variance-reduction technique that enjoys an improved sample complexity of O(ϵ^(-(7-3α)/(2α))) for α ∈ [1, 3/2). Our results match the state-of-the-art sample complexity bounds for stochastic gradient-dominated optimization without cubic regularization. Since the homogenization approach relies only on solving extremal eigenvector problems rather than Newton-type systems, our methods enjoy cheaper iterations and greater robustness on ill-conditioned problems. Numerical experiments on several RL tasks demonstrate the efficiency of SHSODM compared with other off-the-shelf methods.
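The core mechanism the abstract refers to, replacing the Newton-type linear solve with an extremal eigenvector computation on an augmented matrix, can be sketched in a few lines. This is a hedged illustration, not the authors' exact SHSODM: the augmented-matrix form, the perturbation parameter `delta`, the fixed step size, and the toy quadratic objective are all illustrative assumptions. A strongly convex quadratic is used because strong convexity implies gradient dominance with α = 2:

```python
import numpy as np

def homogenized_direction(g, H, delta=0.1):
    """Descent direction from the leftmost eigenvector of the
    augmented matrix F = [[H, g], [g^T, -delta]] (illustrative form).

    If [v; t] is the leftmost eigenpair with eigenvalue theta, then
    (H - theta * I) (v / t) = -g, so d = v / t is a regularized
    Newton-like step obtained without solving a linear system.
    """
    n = g.shape[0]
    F = np.zeros((n + 1, n + 1))
    F[:n, :n] = H
    F[:n, n] = g
    F[n, :n] = g
    F[n, n] = -delta
    vals, vecs = np.linalg.eigh(F)      # eigenvalues in ascending order
    v, t = vecs[:n, 0], vecs[n, 0]      # leftmost eigenvector
    d = v / t if abs(t) > 1e-12 else v  # fall back to pure eigen-step
    if g @ d > 0:                       # resolve eigenvector sign ambiguity
        d = -d
    return d

# Toy gradient-dominated objective: a strongly convex quadratic.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x

x0 = np.array([2.0, 2.0])
x = x0.copy()
for _ in range(100):
    g = A @ x - b                       # exact gradient (deterministic toy case)
    d = homogenized_direction(g, A)     # Hessian of the quadratic is A
    x = x + 0.5 * d                     # fixed step size for illustration

x_star = np.linalg.solve(A, b)          # closed-form minimizer
print(f(x) - f(x_star), np.linalg.norm(x - x_star))
```

In this dense toy example `np.linalg.eigh` factors the whole matrix, but the practical appeal of the eigenvector formulation is that the leftmost eigenpair can be approximated by Lanczos-type iterations using only matrix-vector products, which is what makes each iteration cheap and comparatively insensitive to ill-conditioning.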
