Distributionally Time-Varying Online Stochastic Optimization under Polyak-Łojasiewicz Condition with Application in Conditional Value-at-Risk Statistical Learning

09/18/2023
by Yuen-Man Pun, et al.

In this work, we consider a sequence of stochastic optimization problems whose underlying distribution varies over time, studied through the lens of online optimization. Assuming that the loss function satisfies the Polyak-Łojasiewicz (PL) condition, we apply online stochastic gradient descent and establish a dynamic regret bound composed of the cumulative distribution drift and the cumulative gradient bias caused by stochasticity. The distribution metric we adopt is the Wasserstein distance, which remains well-defined without an absolute continuity assumption and under time-varying support sets. We also establish a regret bound for online stochastic proximal gradient descent when the objective function is regularized. Moreover, we show that this framework applies to the Conditional Value-at-Risk (CVaR) learning problem. In particular, we improve an existing proof that the CVaR problem satisfies the PL condition, which yields a regret bound for online stochastic gradient descent.
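To make the setting concrete, below is a minimal sketch, not the paper's algorithm or experiments, of online stochastic gradient descent on the Rockafellar–Uryasev formulation of the CVaR objective, F(theta, t) = t + E[max(l(theta; xi) - t, 0)] / (1 - alpha), under a hypothetical drifting data distribution. The squared loss, the rotation-based drift model, and all parameters (alpha, eta, T, batch) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.9           # CVaR level (assumed)
eta = 0.01            # step size (assumed)
T, batch = 200, 32    # horizon and per-round sample size (assumed)

theta, t = np.zeros(2), 0.0   # model parameters and CVaR auxiliary variable

def loss_and_grad(theta, xi):
    """Per-sample squared loss l(theta; xi) = (x^T theta - y)^2 and its
    gradient in theta. xi = (x, y); this loss is a stand-in model."""
    x, y = xi
    r = x @ theta - y
    return r ** 2, 2.0 * r * x

for k in range(T):
    # Hypothetical drifting distribution P_k: the regression target
    # slowly rotates, producing a bounded per-round distribution drift.
    w_k = np.array([np.cos(0.01 * k), np.sin(0.01 * k)])
    X = rng.normal(size=(batch, 2))
    y = X @ w_k + 0.1 * rng.normal(size=batch)

    # Stochastic (sub)gradient of the Rockafellar-Uryasev objective
    #   F(theta, t) = t + E[max(l(theta; xi) - t, 0)] / (1 - alpha)
    g_theta, g_t = np.zeros_like(theta), 0.0
    for xi in zip(X, y):
        l, gl = loss_and_grad(theta, xi)
        if l > t:  # subgradient of the hinge max(., 0) is active
            g_theta += gl / (batch * (1 - alpha))
            g_t += -1.0 / (batch * (1 - alpha))
    g_t += 1.0     # derivative of the leading t term

    theta -= eta * g_theta   # online SGD step on (theta, t)
    t -= eta * g_t
```

For the regularized setting analyzed in the paper, the plain parameter update would be replaced by a proximal step, theta = prox_{eta * r}(theta - eta * g_theta), which for an l1 regularizer reduces to soft-thresholding.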


