High Probability Convergence of Stochastic Gradient Methods

02/28/2023
by Zijian Liu, et al.

In this work, we describe a generic approach to show convergence with high probability for both stochastic convex and non-convex optimization with sub-Gaussian noise. In previous works for convex optimization, either the convergence is only in expectation or the bound depends on the diameter of the domain. Instead, we show high probability convergence with bounds depending on the initial distance to the optimal solution. The algorithms use step sizes analogous to the standard settings and are universal to Lipschitz functions, smooth functions, and their linear combinations. The same approach applies to the non-convex case. We demonstrate an O((1 + σ^2 log(1/δ))/T + σ/√T) convergence rate when the number of iterations T is known and an O((1 + σ^2 log(T/δ))/√T) convergence rate when T is unknown for SGD, where 1-δ is the desired success probability. These bounds improve over existing bounds in the literature. Additionally, we demonstrate that our techniques can be used to obtain a high probability bound for AdaGrad-Norm (Ward et al., 2019) that removes the bounded gradients assumption from previous works. Furthermore, our technique for AdaGrad-Norm extends to the standard per-coordinate AdaGrad algorithm (Duchi et al., 2011), providing the first noise-adapted high probability convergence for AdaGrad.
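
To make the adaptive step-size rule concrete, below is a minimal NumPy sketch of the AdaGrad-Norm update (Ward et al., 2019) discussed in the abstract: a single scalar step size eta / b_t, where b_t^2 accumulates the squared norms of the stochastic gradients seen so far, so the step size shrinks automatically without knowing the noise level σ or the horizon T. The toy quadratic objective, the Gaussian noise model, and all constants are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def adagrad_norm(grad_fn, x0, eta=1.0, b0=1e-2, sigma=0.1, T=1000, seed=0):
    """AdaGrad-Norm: x_{t+1} = x_t - (eta / b_t) g_t, with
    b_t^2 = b_0^2 + sum of ||g_s||^2 over all past stochastic gradients.
    No knowledge of sigma or T is needed to set the step size."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    b2 = b0 ** 2
    for _ in range(T):
        # Stochastic gradient with (sub-)Gaussian noise; here simply Gaussian.
        g = grad_fn(x) + sigma * rng.standard_normal(x.shape)
        b2 += float(np.dot(g, g))
        x -= eta / np.sqrt(b2) * g
    return x

# Toy smooth objective f(x) = 0.5 * ||x||^2, whose gradient is x.
x_final = adagrad_norm(lambda x: x, x0=np.ones(10))
print(np.linalg.norm(x_final))  # distance to the optimum at 0 after T steps
```

This is only a sketch of the algorithm being analyzed; the paper's contribution is the high probability convergence guarantee for such updates, not a new implementation.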


Related research

10/03/2022
High Probability Convergence for Accelerated Stochastic Mirror Descent
In this work, we describe a generic approach to show convergence with hi...

02/14/2023
Breaking the Lower Bound with (Little) Structure: Acceleration in Non-Convex Stochastic Optimization with Heavy-Tailed Noise
We consider the stochastic optimization problem with smooth but not nece...

09/29/2022
On the Convergence of AdaGrad on ℝ^d: Beyond Convexity, Non-Asymptotic Rate and Acceleration
Existing analysis of AdaGrad and other adaptive methods for smooth conve...

09/29/2022
META-STORM: Generalized Fully-Adaptive Variance Reduced SGD for Unbounded Functions
We study the application of variance reduction (VR) techniques to genera...

04/06/2022
High Probability Bounds for a Class of Nonconvex Algorithms with AdaGrad Stepsize
In this paper, we propose a new, simplified high probability analysis of...

02/17/2023
SGD with AdaGrad Stepsizes: Full Adaptivity with High Probability to Unknown Parameters, Unbounded Gradients and Affine Variance
We study Stochastic Gradient Descent with AdaGrad stepsizes: a popular a...

02/13/2023
Beyond Uniform Smoothness: A Stopped Analysis of Adaptive SGD
This work considers the problem of finding a first-order stationary poin...
