Unifying mirror descent and dual averaging

10/30/2019
by Anatoli Juditsky, et al.

We introduce and analyse a new family of algorithms that generalizes and unifies the mirror descent and dual averaging algorithms. The unified analysis relies on a generalized Bregman divergence that uses subgradients instead of gradients. Our approach is general enough to encompass classical settings in convex optimization, online learning, and variational inequalities such as saddle-point problems.
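To make the two algorithm families concrete, here is a minimal sketch of the textbook Euclidean versions of both updates (not the paper's unified algorithm). The objective, step sizes, and projection below are illustrative choices; in the Euclidean case mirror descent reduces to projected subgradient descent, while dual averaging accumulates all past subgradients before mapping back to the feasible set.

```python
# Minimal sketch of the two classical updates in the Euclidean setting
# (textbook versions, not the unified family introduced in the paper).
# Illustrative problem: minimize f(x) = (x - 2)^2 over [-1, 1];
# the constrained minimizer is the boundary point x* = 1.

def grad(x):
    # Gradient of f(x) = (x - 2)^2 (f is smooth here; in general a
    # subgradient would be used, as in the paper's analysis).
    return 2.0 * (x - 2.0)

def proj(x):
    # Euclidean projection onto the interval [-1, 1].
    return min(max(x, -1.0), 1.0)

def mirror_descent(x0, steps, lr):
    # Euclidean mirror descent = projected (sub)gradient descent:
    #   x_{t+1} = proj(x_t - lr * g_t)
    x = x0
    for _ in range(steps):
        x = proj(x - lr * grad(x))
    return x

def dual_averaging(x0, steps):
    # Dual averaging: accumulate past (sub)gradients in a dual
    # variable z, then map back to the feasible set:
    #   x_{t+1} = proj(x_0 - z_t / beta_t),  beta_t = sqrt(t + 1)
    x, z = x0, 0.0
    for t in range(steps):
        z += grad(x)
        x = proj(x0 - z / (t + 1) ** 0.5)
    return x

x_md = mirror_descent(0.0, steps=50, lr=0.1)
x_da = dual_averaging(0.0, steps=50)
```

On this problem both sequences reach the constrained minimizer x* = 1. With a non-Euclidean mirror map the two updates genuinely differ, which is the gap the unified family in the abstract is designed to close.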


