The Convergence Rate of SGD's Final Iterate: Analysis on Dimension Dependence

by Daogao Liu et al.

Stochastic Gradient Descent (SGD) is among the simplest and most popular methods in optimization. The convergence rate of SGD has been extensively studied, and tight analyses have been established for the running-average scheme, but the sub-optimality of the final iterate is still not well understood. Shamir and Zhang (2013) gave the best known upper bounds for the final iterate of SGD minimizing non-smooth convex functions: O(log T/√T) for Lipschitz convex functions, and O(log T/T) under an additional strong convexity assumption. The best known lower bounds, however, are worse than these upper bounds by a factor of log T. Harvey et al. (2019) gave matching lower bounds, but their construction requires dimension d = T. Koren and Segal (2020) then asked how to characterize the final-iterate convergence of SGD in the constant-dimension setting. In this paper, we answer this question in the more general setting of any d ≤ T, proving Ω(log d/√T) and Ω(log d/T) lower bounds on the sub-optimality of the final iterate of SGD for minimizing non-smooth Lipschitz convex and strongly convex functions, respectively, with standard step size schedules. Our results provide the first general dimension-dependent lower bounds on the convergence of SGD's final iterate, partially resolving a COLT open question raised by Koren and Segal (2020). We also present further evidence that the correct rate in one dimension should be Θ(1/√T), including a proof of a tight O(1/√T) upper bound for one-dimensional special cases in settings more general than those of Koren and Segal (2020).
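The gap discussed above — final iterate versus running average under the standard step size η_t = 1/√t — can be observed empirically on a toy problem. The sketch below (an illustrative experiment on f(x) = |x| with noisy subgradients, not the paper's lower-bound construction; all names are placeholders) tracks both the final iterate and the running average of SGD:

```python
import random

def sgd_final_vs_average(T=10_000, x0=1.0, seed=0):
    """Toy SGD on the non-smooth convex function f(x) = |x|,
    using the standard step size eta_t = 1/sqrt(t) and a noisy
    subgradient oracle. Returns the suboptimality f(x_T) of the
    final iterate and f(x_avg) of the running average.
    (Illustrative sketch only; not the paper's construction.)"""
    rng = random.Random(seed)
    x = x0
    avg = 0.0
    for t in range(1, T + 1):
        # Subgradient of |x| plus zero-mean Gaussian noise.
        g = (1.0 if x > 0 else -1.0 if x < 0 else 0.0) + rng.gauss(0.0, 1.0)
        x -= g / t ** 0.5          # eta_t = 1 / sqrt(t)
        avg += (x - avg) / t       # incremental running average of iterates
    # The minimizer is x* = 0 with f(x*) = 0, so |.| is the suboptimality.
    return abs(x), abs(avg)

final, average = sgd_final_vs_average()
```

Averaging the iterates removes the extra log T factor in theory; with a fixed seed, the run above is deterministic, so the two suboptimality values can be compared directly across repetitions or horizons T.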
