Revisiting Stochastic Extragradient

05/27/2019
by   Konstantin Mishchenko, et al.

We consider a new extension of the extragradient method that is motivated by approximating implicit updates. Since a recent work, chavdarova2019reducing, showed that the existing stochastic extragradient algorithm of juditsky2011solving (called mirror-prox) diverges on a simple bilinear problem, we prove guarantees for solving variational inequalities that are more general than those of juditsky2011solving. Furthermore, we illustrate numerically that the proposed variant converges faster than many other methods on the example of chavdarova2019reducing. We also discuss how extragradient can be applied to training Generative Adversarial Networks (GANs). Our experiments on GANs demonstrate that the introduced approach may make training faster in terms of data passes, although its higher per-iteration cost reduces this advantage. To further accelerate the method's convergence on problems such as bilinear minimax, we combine the extragradient step with negative momentum gidel2018negative and discuss the optimal momentum value.
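To make the extragradient idea concrete, here is a minimal, hedged sketch (not the paper's exact stochastic algorithm): deterministic extragradient on the bilinear saddle-point problem min_x max_y x^T A y, whose unique solution is x* = y* = 0. Plain simultaneous gradient descent/ascent diverges on such problems, while the extragradient "look-ahead" step converges. The matrix, starting points, step size, and iteration count below are all illustrative choices.

```python
import numpy as np

# Bilinear objective x^T A y with a fixed, well-conditioned matrix.
A = np.diag([1.0, 2.0])
x = np.array([1.0, -1.0])
y = np.array([0.5, 2.0])
eta = 0.2  # step size; needs eta * ||A|| < 1 for convergence here

for _ in range(1000):
    # Extrapolation step: a plain gradient step from the current point.
    x_half = x - eta * (A @ y)      # grad_x of x^T A y is A y
    y_half = y + eta * (A.T @ x)    # grad_y of x^T A y is A^T x
    # Update step: gradients evaluated at the extrapolated point.
    x, y = x - eta * (A @ y_half), y + eta * (A.T @ x_half)

print(np.linalg.norm(x), np.linalg.norm(y))  # both converge to 0
```

The key design point is that the second step reuses the starting iterate (x, y) but the gradient from the look-ahead point (x_half, y_half); using the look-ahead gradient cancels the rotational component of the bilinear vector field that makes simultaneous updates spiral outward.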

