Geometrically Convergent Distributed Optimization with Uncoordinated Step-Sizes

by Angelia Nedić, et al.
Boston University
University of Illinois at Urbana-Champaign

DIGing, a recent algorithmic family for distributed optimization, has been shown to converge geometrically over time-varying undirected and directed graphs. However, it requires an identical step-size for all agents. In this paper, we study the convergence rate of the Adapt-Then-Combine (ATC) variation of the DIGing algorithm under uncoordinated step-sizes. We show that the ATC variation of DIGing still converges geometrically even when the agents use different step-sizes. In addition, our analysis implies that the ATC structure can accelerate convergence compared with the distributed gradient descent (DGD) structure used in the original DIGing algorithm.
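To illustrate the setting the abstract describes, here is a minimal sketch of the ATC form of gradient-tracking updates with a different step-size per agent. The problem instance is hypothetical (four agents on a ring, each holding a scalar quadratic f_i(x) = 0.5·(x − b_i)², so the global minimizer is mean(b)); the mixing matrix, step-sizes, and iteration count are illustrative choices, not the paper's parameters.

```python
import numpy as np

n = 4
b = np.array([1.0, 2.0, 3.0, 4.0])
grad = lambda x: x - b          # stacked local gradients, one per agent

# Doubly stochastic mixing matrix for a 4-node ring (Metropolis-style weights).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

alpha = np.array([0.05, 0.10, 0.15, 0.20])   # uncoordinated: one step-size per agent

x = np.zeros(n)     # local estimates of the global minimizer
y = grad(x)         # gradient trackers, initialized at the local gradients

for _ in range(2000):
    g_old = grad(x)
    x = W @ (x - alpha * y)        # Adapt (local step), Then Combine (mixing)
    y = W @ (y + grad(x) - g_old)  # gradient tracking, also in ATC form

print(x)   # every agent's estimate approaches mean(b) = 2.5
```

Note that both the estimate update and the tracker update mix *after* the local correction is applied; in the DGD-style form, the mixing term W x and the local step −α y are instead added side by side.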


