On the Divergence of Decentralized Non-Convex Optimization

by Mingyi Hong et al.

We study a generic class of decentralized algorithms in which N agents jointly optimize the non-convex objective f(u) := (1/N) ∑_{i=1}^N f_i(u) while communicating only with their neighbors. This class of problems has become popular for modeling many signal processing and machine learning applications, and many efficient algorithms have been proposed. However, by constructing counter-examples, we show that when certain local Lipschitz conditions (LLC) on the local gradients ∇f_i are not satisfied, most of the existing decentralized algorithms diverge, even if the global Lipschitz condition (GLC) is satisfied, i.e., even if the sum function f has a Lipschitz gradient. This observation raises an important open question: how should decentralized algorithms be designed when the LLC, or even the GLC, is not satisfied? To address this question, we design a first-order method called the multi-stage gradient tracking algorithm (MAGENTA), which is capable of computing stationary solutions even when neither the LLC nor the GLC holds. In particular, we show that the proposed algorithm converges sublinearly to an ϵ-stationary solution, where the precise rate depends on various algorithmic and problem parameters. If the local functions f_i are Q-th order polynomials, the rate becomes O(1/ϵ^{Q-1}); this rate is tight for the special case Q = 2, in which each f_i satisfies the LLC. To our knowledge, this is the first work to study decentralized non-convex optimization with neither the LLC nor the GLC.
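To make the setting concrete, the following sketch implements classical decentralized gradient tracking (GT), the well-known building block that MAGENTA extends; it is not the authors' MAGENTA algorithm. The step size `alpha`, the ring topology, and the quadratic local functions are illustrative assumptions: each agent i keeps an iterate x[i] and a tracker y[i] of the network-average gradient, mixing both with its neighbors through a doubly stochastic matrix W.

```python
import numpy as np

def gradient_tracking(grads, x0, W, alpha=0.1, iters=500):
    """Classical decentralized gradient tracking (illustrative sketch).

    grads : list of local gradient functions, one per agent
    x0    : initial iterates, one scalar per agent
    W     : doubly stochastic mixing matrix matching the network topology
    """
    x = np.array(x0, dtype=float)
    # Initialize each agent's tracker with its own local gradient.
    y = np.array([g(xi) for g, xi in zip(grads, x)])
    for _ in range(iters):
        # Consensus step on the iterates, then a step along the tracked
        # average gradient direction.
        x_new = W @ x - alpha * y
        # Update the tracker: mix with neighbors and add the local
        # gradient change, so y keeps tracking the average gradient.
        y = W @ y + np.array(
            [g(xn) - g(xo) for g, xn, xo in zip(grads, x_new, x)]
        )
        x = x_new
    return x

# Example (assumed for illustration): N = 4 agents on a ring, with
# smooth local functions f_i(u) = 0.5 * (u - b_i)^2, so the minimizer
# of f = (1/N) sum_i f_i is the mean of the b_i, here 2.5.
b = np.array([1.0, 2.0, 3.0, 4.0])
grads = [lambda u, bi=bi: u - bi for bi in b]
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])  # doubly stochastic ring
x = gradient_tracking(grads, x0=np.zeros(4), W=W)
print(np.round(x, 3))  # all agents reach consensus near 2.5
```

On this quadratic example the LLC holds, so plain GT converges; the abstract's counter-examples concern precisely the regime (e.g., higher-order polynomial f_i) where such a fixed-step scheme can diverge.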



Related articles:

Distributed Non-Convex First-Order Optimization and Information Processing: Lower Complexity Bounds and Rate Optimal Algorithms

Improving the Sample and Communication Complexity for Decentralized Non-Convex Optimization: A Joint Gradient Estimation and Tracking Approach

Understanding A Class of Decentralized and Federated Optimization Algorithms: A Multi-Rate Feedback Control Perspective

Distributed stochastic gradient tracking algorithm with variance reduction for non-convex optimization

Decentralized Gradient Tracking with Local Steps

ALADIN-α – An open-source MATLAB toolbox for distributed non-convex optimization

Decomposition of non-convex optimization via bi-level distributed ALADIN
