Distributed saddle point problems for strongly concave-convex functions

02/11/2022 ∙ by Muhammad I. Qureshi, et al.
In this paper, we propose GT-GDA, a distributed optimization method to solve saddle point problems of the form: min_𝐱max_𝐲{F(𝐱,𝐲) := G(𝐱) + ⟨𝐲, P𝐱⟩ - H(𝐲)}, where the functions G(·) and H(·) and the coupling matrix P are distributed over a strongly connected network of nodes. GT-GDA is a first-order method that uses gradient tracking to eliminate the dissimilarity caused by heterogeneous data distribution among the nodes. In its most general form, GT-GDA performs a consensus over the local coupling matrices to reach the optimal (unique) saddle point, albeit at the expense of increased communication. To avoid this, we propose a more efficient variant, GT-GDA-Lite, that does not incur the additional communication, and analyze its convergence in various scenarios. We show that GT-GDA converges linearly to the unique saddle point when G(·) is smooth and convex, H(·) is smooth and strongly convex, and the global coupling matrix P has full column rank. We further characterize the regime in which GT-GDA exhibits network topology-independent convergence behavior. We then show that GT-GDA-Lite converges linearly to an error around the unique saddle point, and that this error vanishes when the coupling cost ⟨𝐲, P𝐱⟩ is common to all nodes, or when G(·) and H(·) are quadratic. Numerical experiments illustrate the convergence properties and importance of GT-GDA and GT-GDA-Lite for several applications.
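To make the template concrete, the snippet below is a minimal single-machine simulation of gradient-tracking gradient descent ascent on a network, in the spirit of GT-GDA-Lite: each node descends in 𝐱 and ascends in 𝐲 using a tracker of the network-average partial gradients. All data, step sizes, and variable names here are hypothetical illustrations, not the paper's exact recursion or tuning; local costs are taken quadratic, the regime in which the abstract states the lite variant reaches the exact saddle point.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dx, dy = 4, 3, 2  # nodes, dim of x, dim of y

# Hypothetical local quadratics: G_i(x) = 0.5 x'A_i x - b_i'x, H_i(y) = 0.5 y'B_i y,
# and local coupling matrices P_i (heterogeneous across nodes).
A = [np.diag(rng.uniform(1, 2, dx)) for _ in range(n)]
B = [np.diag(rng.uniform(1, 2, dy)) for _ in range(n)]
P = [0.5 * rng.standard_normal((dy, dx)) for _ in range(n)]
b = [rng.standard_normal(dx) for _ in range(n)]

# Doubly stochastic mixing matrix for a 4-node ring (lazy Metropolis weights).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i + 1) % n] = 0.25
    W[i, (i - 1) % n] = 0.25

def gx(i, x, y):  # local partial gradient w.r.t. x
    return A[i] @ x - b[i] + P[i].T @ y

def gy(i, x, y):  # local partial gradient w.r.t. y
    return P[i] @ x - B[i] @ y

alpha, beta, T = 0.05, 0.05, 3000
x = np.zeros((n, dx)); y = np.zeros((n, dy))
# Trackers initialized at the local gradients; mixing keeps their average
# equal to the average gradient across the network.
tx = np.array([gx(i, x[i], y[i]) for i in range(n)])
ty = np.array([gy(i, x[i], y[i]) for i in range(n)])

for _ in range(T):
    x_new = W @ x - alpha * tx  # consensus step + descent along tracked gradient
    y_new = W @ y + beta * ty   # consensus step + ascent along tracked gradient
    tx = W @ tx + np.array([gx(i, x_new[i], y_new[i]) - gx(i, x[i], y[i]) for i in range(n)])
    ty = W @ ty + np.array([gy(i, x_new[i], y_new[i]) - gy(i, x[i], y[i]) for i in range(n)])
    x, y = x_new, y_new

# Closed-form saddle point of the aggregate quadratic problem, for comparison:
# solve (A + P' B^{-1} P) x* = b, then y* = B^{-1} P x*.
Ag, Bg, Pg, bg = sum(A), sum(B), sum(P), sum(b)
x_star = np.linalg.solve(Ag + Pg.T @ np.linalg.inv(Bg) @ Pg, bg)
y_star = np.linalg.inv(Bg) @ Pg @ x_star
print("max deviation of node iterates from x*:", np.max(np.abs(x - x_star)))
```

The trackers `tx`, `ty` are what distinguish this from a naive decentralized GDA: without them, heterogeneous A_i, B_i, P_i would bias each node toward its own local saddle point rather than the global one.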

Related research:

- 02/15/2021 ∙ Decentralized Distributed Optimization for Saddle Point Problems: "We consider distributed convex-concave saddle point problems over arbitr..."
- 09/03/2018 ∙ A Dual Approach for Optimal Algorithms in Distributed Optimization over Networks: "We study the optimal convergence rates for distributed convex optimizati..."
- 08/13/2020 ∙ Push-SAGA: A decentralized stochastic algorithm with variance reduction over directed graphs: "In this paper, we propose Push-SAGA, a decentralized stochastic first-or..."
- 02/07/2022 ∙ Variance reduced stochastic optimization over directed graphs with row and column stochastic weights: "This paper proposes AB-SAGA, a first-order distributed stochastic optimi..."
- 09/21/2020 ∙ Zeroth-Order Algorithms for Smooth Saddle-Point Problems: "In recent years, the importance of saddle-point problems in machine lear..."
- 03/18/2019 ∙ Distributed stochastic optimization with gradient tracking over strongly-connected networks: "In this paper, we study distributed stochastic optimization to minimize ..."
- 11/15/2019 ∙ A System Theoretical Perspective to Gradient-Tracking Algorithms for Distributed Quadratic Optimization: "In this paper we consider a recently developed distributed optimization ..."
