A Non-Asymptotic Analysis of Network Independence for Distributed Stochastic Gradient Descent

06/06/2019
by Alex Olshevsky, et al.

This paper is concerned with minimizing the average of n cost functions over a network in which agents may communicate and exchange information with their peers. Specifically, we consider the setting where only noisy gradient information is available. To solve the problem, we study the standard distributed stochastic gradient descent (DSGD) method and perform a non-asymptotic convergence analysis. For strongly convex and smooth objective functions, we not only show that DSGD asymptotically achieves the optimal network-independent convergence rate of centralized stochastic gradient descent (SGD), but also explicitly characterize the non-asymptotic convergence rate as a function of characteristics of the objective functions and the network. Furthermore, we derive the time needed for DSGD to approach this asymptotic convergence rate, which behaves as K_T = O(n^{16/15}/(1-ρ_w)^{31/15}), where (1-ρ_w) denotes the spectral gap of the mixing matrix of communicating agents.
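For context, the DSGD iteration analyzed in the paper combines a consensus (mixing) step over the network with a local stochastic gradient step: each agent i updates x_i^{k+1} = Σ_j w_ij x_j^k − α_k g_i(x_i^k), where W = [w_ij] is the mixing matrix and g_i is a noisy gradient of agent i's local cost. The sketch below is a minimal illustration only; the quadratic local costs, the Gaussian gradient noise, the ring network, and the 1/k step size are all assumptions chosen for demonstration, not details taken from the paper.

```python
import numpy as np

# Minimize (1/n) * sum_i f_i(x) over a network of n agents, where only
# noisy gradients of the local costs f_i are available.
# Hypothetical quadratic costs f_i(x) = 0.5 * ||A_i x - b_i||^2 are used here.

rng = np.random.default_rng(0)
n, d = 5, 3                          # number of agents, problem dimension
A = rng.standard_normal((n, d, d))
b = rng.standard_normal((n, d))

def noisy_grad(i, x, sigma=0.1):
    """Stochastic gradient of agent i's local cost: exact gradient plus Gaussian noise."""
    return A[i].T @ (A[i] @ x - b[i]) + sigma * rng.standard_normal(d)

# Assumed doubly stochastic mixing matrix W: lazy weights on a ring graph.
# Any connected network with a doubly stochastic W would serve the same role.
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25
    W[i, i] = 0.5

x = np.zeros((n, d))                 # one local iterate per agent
for k in range(1, 5001):
    alpha_k = 1.0 / k                # diminishing step size
    grads = np.array([noisy_grad(i, x[i]) for i in range(n)])
    # DSGD update: mix with neighbors, then take a local stochastic gradient step.
    x = W @ x - alpha_k * grads

print("consensus error:", np.linalg.norm(x - x.mean(axis=0)))
```

Running the sketch, the agents' iterates draw together (small consensus error) while drifting toward a minimizer of the average cost, which is the qualitative behavior whose rate the paper quantifies.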
