High Probability Convergence for Accelerated Stochastic Mirror Descent

10/03/2022
by Alina Ene, et al.

In this work, we describe a generic approach for proving high-probability convergence in stochastic convex optimization. In previous works, convergence is shown either only in expectation or with bounds that depend on the diameter of the domain. Instead, we show high-probability convergence with bounds that depend on the initial distance to the optimal solution rather than on the domain diameter. The algorithms use step sizes analogous to the standard settings and are universal to Lipschitz functions, smooth functions, and their linear combinations.
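
As a point of reference for the setup described above, the sketch below is a minimal, unaccelerated stochastic mirror descent loop with the Euclidean mirror map (equivalent to projected SGD with no constraints) and step sizes proportional to 1/sqrt(t), run on a toy least-squares problem. The objective, noise model, and step-size constant are illustrative assumptions only; this is not the paper's accelerated algorithm or its high-probability analysis.

```python
# Minimal illustrative sketch: stochastic mirror descent with the Euclidean
# mirror map and eta_t ~ 1/sqrt(t) step sizes on a toy convex problem.
# The problem instance, noise level, and constants are assumptions for
# illustration, not the setting analyzed in the paper.
import numpy as np

rng = np.random.default_rng(0)

d = 20
A = rng.standard_normal((100, d))
b = rng.standard_normal(100)

def noisy_gradient(x, sigma=0.5):
    """Stochastic gradient of f(x) = 0.5 * mean((Ax - b)^2) plus Gaussian noise."""
    grad = A.T @ (A @ x - b) / len(b)
    return grad + sigma * rng.standard_normal(d)

def mirror_descent(x0, steps=2000, eta0=0.5):
    """Euclidean mirror descent: x_{t+1} = x_t - eta_t * g_t with eta_t = eta0 / sqrt(t)."""
    x = x0.copy()
    avg = np.zeros_like(x)
    for t in range(1, steps + 1):
        g = noisy_gradient(x)
        eta = eta0 / np.sqrt(t)
        x = x - eta * g          # with a non-Euclidean mirror map this is a Bregman prox step
        avg += (x - avg) / t     # running iterate average, the point the bounds are typically stated for
    return avg

x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
x_hat = mirror_descent(np.zeros(d))
f = lambda x: 0.5 * np.mean((A @ x - b) ** 2)
print(f"suboptimality f(x_hat) - f(x*) = {f(x_hat) - f(x_star):.4f}")
```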


Related research

02/28/2023  High Probability Convergence of Stochastic Gradient Methods
In this work, we describe a generic approach to show convergence with hi...

12/13/2018  Tight Analyses for Non-Smooth Stochastic Gradient Descent
Consider the problem of minimizing functions that are Lipschitz and stro...

02/14/2022  Stochastic linear optimization never overfits with quadratically-bounded losses on general data
This work shows that a diverse collection of linear optimization methods...

07/04/2016  Accelerated Stochastic Subgradient Methods under Local Error Bound Condition
In this paper, we propose two accelerated stochastic subgradient method...

08/16/2021  Stochastic optimization under time drift: iterate averaging, step decay, and high probability guarantees
We consider the problem of minimizing a convex function that is evolving...

09/29/2022  On the Convergence of AdaGrad on ℝ^d: Beyond Convexity, Non-Asymptotic Rate and Acceleration
Existing analysis of AdaGrad and other adaptive methods for smooth conve...

03/20/2023  High Probability Bounds for Stochastic Continuous Submodular Maximization
We consider maximization of stochastic monotone continuous submodular fu...
