Gradient Sparsification for Asynchronous Distributed Training

10/24/2019
by Zijie Yan, et al.

Modern large-scale machine learning applications require stochastic optimization algorithms to be implemented on distributed computational architectures. A key bottleneck is the communication overhead of exchanging information, such as stochastic gradients, among different nodes. Gradient sparsification techniques have recently been proposed to reduce communication cost and thus alleviate this network overhead. However, most gradient sparsification techniques consider only synchronous parallelism and cannot be applied in asynchronous scenarios, such as asynchronous distributed training for federated learning on mobile devices. In this paper, we present a dual-way gradient sparsification approach (DGS) that is suitable for asynchronous distributed training. Workers download the model difference, rather than the full global model, from the server, and this model difference is also sparsified, so that communication is compressed in both directions between the server and workers. To preserve accuracy under dual-way sparsification, we design a sparsification-aware momentum (SAMomentum) that turns sparsification into an adaptive per-parameter batch size. We conduct experiments on a cluster of 32 workers; the results show that, with the same compression ratio but much lower communication cost, our approach achieves better scalability and generalization ability.
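The abstract describes the dual-way exchange only at a high level. The sketch below is a minimal illustration, not the authors' implementation: it combines standard top-k gradient sparsification with local error feedback on the worker, and has the server send back a sparsified model difference instead of the full global model. The function names (`top_k_sparsify`, `worker_step`, `server_step`), the 1% keep ratio, and the learning rate are assumptions made for the example, and the SAMomentum component is omitted.

```python
import numpy as np

def top_k_sparsify(vec, ratio=0.01):
    """Keep the `ratio` fraction of entries with the largest magnitude."""
    k = max(1, int(ratio * vec.size))
    idx = np.argpartition(np.abs(vec), -k)[-k:]
    return idx, vec[idx]

def densify(idx, vals, size):
    """Expand a sparse (indices, values) pair back into a dense vector."""
    out = np.zeros(size)
    out[idx] = vals
    return out

def worker_step(grad, residual, ratio=0.01):
    """One asynchronous worker step: sparsify the error-corrected gradient."""
    corrected = grad + residual                       # error feedback: re-add dropped mass
    idx, vals = top_k_sparsify(corrected, ratio)
    new_residual = corrected - densify(idx, vals, corrected.size)
    return (idx, vals), new_residual                  # sparse upload + local carry-over

def server_step(global_model, worker_snapshot, sparse_grad, lr=0.1, ratio=0.01):
    """Apply a sparse update, then return a sparsified model difference."""
    idx, vals = sparse_grad
    global_model[idx] -= lr * vals                    # sparse SGD update on the server
    diff = global_model - worker_snapshot             # change since the worker's last snapshot
    return top_k_sparsify(diff, ratio)                # sparse download instead of the full model

if __name__ == "__main__":
    dim = 10_000
    model = np.zeros(dim)
    snapshot = model.copy()                           # worker's last known server state
    residual = np.zeros(dim)
    grad = np.random.randn(dim)                       # stand-in for a stochastic gradient

    upload, residual = worker_step(grad, residual)
    down_idx, down_vals = server_step(model, snapshot, upload)
    snapshot[down_idx] += down_vals                   # worker applies the sparse difference
```

Under these assumptions, only the top-k index/value pairs travel in either direction, which is why the compression applies to both the upload and the download rather than to the gradient alone.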


Related research

10/26/2017 - Gradient Sparsification for Communication-Efficient Distributed Optimization
Modern large scale machine learning applications require stochastic opti...

10/06/2017 - Accumulated Gradient Normalization
This work addresses the instability in asynchronous data parallel optimi...

12/08/2021 - SASG: Sparsification with Adaptive Stochastic Gradients for Communication-efficient Distributed Learning
Stochastic optimization algorithms implemented on distributed computing ...

06/19/2020 - A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning
Modern large-scale machine learning applications require stochastic opti...

01/19/2022 - Flexible Parallel Learning in Edge Scenarios: Communication, Computational and Energy Cost
Traditionally, distributed machine learning takes the guise of (i) diffe...

02/21/2019 - Gradient Scheduling with Global Momentum for Non-IID Data Distributed Asynchronous Training
Distributed asynchronous offline training has received widespread attent...

06/08/2015 - DUAL-LOCO: Distributing Statistical Estimation Using Random Projections
We present DUAL-LOCO, a communication-efficient algorithm for distribute...
