Extragradient and Extrapolation Methods with Generalized Bregman Distances for Saddle Point Problems

01/25/2021
by   Hui Zhang, et al.

In this work, we introduce two algorithmic frameworks, named the Bregman extragradient method and the Bregman extrapolation method, for solving saddle point problems. The proposed frameworks not only include the well-known extragradient and optimistic gradient methods as special cases, but also generate new variants such as sparse extragradient and extrapolation methods. With the help of the recent concept of relative Lipschitzness and some Bregman distance related tools, we establish upper bounds in terms of Bregman distances for "regret" measures. Further, we use those bounds to deduce the O(1/k) convergence rate of the Bregman extragradient and Bregman extrapolation methods applied to smooth convex-concave saddle point problems. Our theory recovers the main discovery made in [Mokhtari et al. (2020), SIAM J. Optim., 30, pp. 3230-3251] for more general algorithmic frameworks with weaker assumptions via a conceptually different approach.
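To make the extragradient idea concrete, here is a minimal sketch of the classical (Euclidean) extragradient iteration, which the abstract identifies as a special case of the Bregman framework. The bilinear objective f(x, y) = x^T A y, the step size, and the iteration count are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def extragradient_bilinear(A, steps=2000, eta=0.3):
    """Classical (Euclidean) extragradient for min_x max_y f(x, y) = x^T A y.

    The monotone operator is F(x, y) = (A y, -A^T x); each iteration takes
    an extrapolation half-step and then updates the current point using the
    gradient evaluated at that half-step.
    """
    n, m = A.shape
    x, y = np.ones(n), np.ones(m)
    for _ in range(steps):
        # Extrapolation (half) step from the current iterate.
        x_half = x - eta * (A @ y)
        y_half = y + eta * (A.T @ x)
        # Update step: gradient taken at the half-point, applied to x, y.
        x = x - eta * (A @ y_half)
        y = y + eta * (A.T @ x_half)
    return x, y

# For invertible A, the unique saddle point of x^T A y is (0, 0),
# and the iterates contract toward it for small enough eta.
A = np.array([[2.0, 1.0], [0.0, 1.0]])
x, y = extragradient_bilinear(A)
```

Replacing the Euclidean squared-distance prox in each step with a general Bregman distance yields the Bregman extragradient scheme studied in the paper; plain gradient descent-ascent (no half-step) diverges on this same bilinear example, which is why the extrapolation step matters.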
