Variance-reduced accelerated methods for decentralized stochastic double-regularized nonconvex strongly-concave minimax problems

07/14/2023
by   Gabriel Mancino-Ball, et al.

In this paper, we consider the decentralized, stochastic nonconvex strongly-concave (NCSC) minimax problem with nonsmooth regularization terms on both the primal and dual variables, in which a network of m computing agents collaborates via peer-to-peer communications. We consider the settings where the coupling function is given in expectation or finite-sum form and the double regularizers are convex functions, applied separately to the primal and dual variables. Our algorithmic framework introduces a Lagrangian multiplier to eliminate the consensus constraint on the dual variable. Coupled with variance-reduction (VR) techniques, our proposed method, named VRLM, requires only a single round of neighbor communication per iteration and achieves an 𝒪(κ^3ε^-3) sample complexity in the general stochastic setting, with either a big-batch or small-batch VR option, where κ is the condition number of the problem and ε is the desired solution accuracy. With the big-batch VR option, we additionally achieve an 𝒪(κ^2ε^-2) communication complexity. In the special finite-sum setting, our method with big-batch VR achieves an 𝒪(n + √(n)κ^2ε^-2) sample complexity and an 𝒪(κ^2ε^-2) communication complexity, where n is the number of components in the finite sum. All complexity results match the best-known results achieved by a few existing methods for solving special cases of the problem we consider. To the best of our knowledge, this is the first work to provide convergence guarantees for NCSC minimax problems with general convex nonsmooth regularizers applied to both the primal and dual variables in the decentralized stochastic setting. Numerical experiments are conducted on two machine learning problems. Our code is available at https://github.com/RPI-OPT/VRLM.
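Concretely, the problem class described in the abstract can be written as follows. This is a sketch of the setup inferred from the description above; the paper's exact notation and assumptions may differ:

```latex
\min_{x \in \mathbb{R}^{d}} \; \max_{y \in \mathbb{R}^{p}} \;
\frac{1}{m} \sum_{i=1}^{m} f_i(x, y) \; + \; g(x) \; - \; h(y),
\qquad
f_i(x, y) = \mathbb{E}_{\xi_i}\!\left[ F_i(x, y; \xi_i) \right]
\;\; \text{or} \;\;
\frac{1}{n} \sum_{j=1}^{n} f_{i,j}(x, y),
```

where each local coupling function f_i is nonconvex in x and strongly concave in y, g and h are the convex (possibly nonsmooth) primal and dual regularizers, and agent i only has access to stochastic gradients of its own f_i.

Below is a minimal Python sketch of one decentralized variance-reduced proximal step for this problem class, using ℓ1 regularizers for illustration. It is not VRLM itself: the paper's actual updates, including the Lagrangian-multiplier treatment of the dual consensus constraint and the step-size choices, are specified there, and all names here (vr_estimator, eta_x, lam_g, and so on) are hypothetical placeholders.

```python
import numpy as np

def prox_l1(v, lam):
    # Prox of lam * ||.||_1; stands in for the general convex regularizers g, h.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def vr_estimator(g_new, g_old, d_prev, beta):
    # Recursive (STORM-style) variance-reduced estimator, a small-batch option:
    #   d_t = g(w_t; xi_t) + (1 - beta) * (d_{t-1} - g(w_{t-1}; xi_t))
    return g_new + (1.0 - beta) * (d_prev - g_old)

def decentralized_step(X, Y, Dx, Dy, W, eta_x, eta_y, lam_g, lam_h):
    # One iteration across m agents; row i of X, Y holds agent i's local
    # primal/dual variables, and Dx, Dy hold its VR gradient estimates.
    X_mix = W @ X                                       # single gossip round with mixing matrix W
    X_new = prox_l1(X_mix - eta_x * Dx, eta_x * lam_g)  # proximal descent on the primal
    Y_new = prox_l1(Y + eta_y * Dy, eta_y * lam_h)      # proximal ascent on the dual
    return X_new, Y_new
```

Note that in this sketch only the primal mixing W @ X consumes the one round of neighbor communication per iteration; the dual update is purely local, consistent with the abstract's statement that the consensus constraint on the dual variable is handled through a Lagrangian multiplier rather than through gossip.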

