Asynchronous decentralized accelerated stochastic gradient descent

09/24/2018
by   Guanghui Lan, et al.

In this work, we introduce an asynchronous decentralized accelerated stochastic gradient descent (SGD) type method for decentralized stochastic optimization, motivated by the observation that communication and synchronization are the major bottlenecks in such settings. We establish an O(1/ϵ) (resp., O(1/√ϵ)) communication complexity and an O(1/ϵ^2) (resp., O(1/ϵ)) sampling complexity for solving general convex (resp., strongly convex) problems.
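To make the setting concrete, below is a minimal sketch of one round of a generic synchronous decentralized SGD scheme: each agent gossip-averages its iterate with its neighbors through a mixing matrix and then takes a local stochastic gradient step. This is illustrative only and is not the asynchronous accelerated method of the paper; the mixing matrix W, the toy quadratic objectives, and the helper stochastic_grad are assumptions made for this example.

```python
import numpy as np

def decentralized_sgd_step(X, W, stochastic_grad, step_size):
    """One round of generic decentralized SGD (illustrative sketch).

    X: (n_agents, dim) array; row i is agent i's current iterate.
    W: (n_agents, n_agents) doubly stochastic mixing matrix aligned with the
       communication graph (W[i, j] > 0 only if i and j are neighbors).
    stochastic_grad: callable (agent_index, x) -> noisy local gradient at x.
    step_size: scalar learning rate.
    """
    mixed = W @ X                      # each agent averages its neighbors' iterates
    grads = np.stack([stochastic_grad(i, X[i]) for i in range(X.shape[0])])
    return mixed - step_size * grads   # local stochastic gradient step

# Toy usage (hypothetical setup): 4 agents on a ring, each holding a noisy
# quadratic objective E[(1/2) * ||x - b_i||^2]; the network should reach
# consensus near the average of the b_i.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, dim = 4, 3
    b = rng.normal(size=(n, dim))                      # per-agent optima
    W = np.array([[0.50, 0.25, 0.00, 0.25],
                  [0.25, 0.50, 0.25, 0.00],
                  [0.00, 0.25, 0.50, 0.25],
                  [0.25, 0.00, 0.25, 0.50]])           # ring mixing matrix
    grad = lambda i, x: (x - b[i]) + 0.01 * rng.normal(size=dim)
    X = np.zeros((n, dim))
    for _ in range(500):
        X = decentralized_sgd_step(X, W, grad, step_size=0.05)
    print("consensus error:", np.linalg.norm(X - X.mean(axis=0)))
    print("distance to global optimum:", np.linalg.norm(X.mean(axis=0) - b.mean(axis=0)))
```

In the synchronous sketch above every agent communicates in every round; the point of the paper's asynchronous accelerated scheme is precisely to relax this lockstep requirement while attaining the stated communication and sampling complexities.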


Related research:

- Conditional Accelerated Lazy Stochastic Gradient Descent (03/16/2017). In this work we introduce a conditional accelerated lazy stochastic grad...
- Accelerating Asynchronous Algorithms for Convex Optimization by Momentum Compensation (02/27/2018). Asynchronous algorithms have attracted much attention recently due to th...
- Accelerating Stochastic Gradient Descent (04/26/2017). There is widespread sentiment that it is not possible to effectively uti...
- Asynchronous Decentralized Learning over Unreliable Wireless Networks (02/02/2022). Decentralized learning enables edge users to collaboratively train model...
- A Primal-Dual Framework for Decentralized Stochastic Optimization (12/08/2020). We consider the decentralized convex optimization problem, where multipl...
- Network-Density-Controlled Decentralized Parallel Stochastic Gradient Descent in Wireless Systems (02/25/2020). This paper proposes a communication strategy for decentralized learning ...
- Learning Without a Global Clock: Asynchronous Learning in a Physics-Driven Learning Network (01/10/2022). In a neuron network, synapses update individually using local informatio...
