A Mechanism for Distributed Deep Learning Communication Optimization

12/10/2020
by Yemao Xu, et al.

Intensive communication and synchronization of gradients and parameters are the well-known bottleneck of distributed deep learning training. Based on the observations that synchronous SGD (SSGD) obtains good convergence accuracy while asynchronous SGD (ASGD) delivers faster raw training speed, we propose Several Steps Delay SGD (DeSGD) to combine their merits, aiming to tackle the communication bottleneck through communication sparsification. In each periodic iteration, DeSGD performs global synchronous updates on the parameter servers and asynchronous local updates on the workers. This periodic and flexible synchronization lets DeSGD achieve both good convergence accuracy and fast training speed. To the best of our knowledge, we strike a new balance between synchronization quality and communication sparsification, and improve the trade-off between accuracy and training speed. Specifically, the core components of DeSGD are a proper warm-up stage, a steps-delay stage, and our novel algorithm of global gradient for local update (GGLU). GGLU is critical for local update operations to effectively compensate for the delayed local weights. Furthermore, we implement DeSGD on the MXNet framework and comprehensively evaluate its performance on the CIFAR-10 and ImageNet datasets. Experimental results show that DeSGD accelerates distributed training by up to 110% under different experimental configurations while maintaining convergence accuracy.
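The abstract describes the training pattern (several asynchronous local steps on each worker between periodic synchronous updates on the parameter servers) but not its implementation. Below is a minimal, self-contained sketch in plain NumPy, not the authors' MXNet code, of how such a delayed-synchronization loop could be organized. The function name desgd_sketch, the delay length tau, and the alpha blending of the last global gradient into local updates (a rough stand-in for GGLU's compensation of delayed local weights) are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def local_grad(w, rng):
    # Stand-in for a worker's stochastic gradient on one mini-batch
    # (here: gradient of 0.5*||w||^2 plus noise).
    return w + rng.normal(scale=0.1, size=w.shape)

def desgd_sketch(num_workers=4, tau=4, rounds=50, lr=0.1, alpha=0.5, seed=0):
    """Toy simulation of periodic local updates with global synchronization.

    Each worker runs `tau` delayed local SGD steps between synchronizations;
    `alpha` blends the last aggregated global gradient into each local step
    as an assumed stand-in for the paper's GGLU compensation.
    """
    rng = np.random.default_rng(seed)
    global_w = np.ones(10)                 # parameters held by the server
    global_g = np.zeros_like(global_w)     # last aggregated global gradient
    for _ in range(rounds):
        local_ws = []
        for _ in range(num_workers):
            w = global_w.copy()            # start from the latest global weights
            for _ in range(tau):           # several delayed local steps
                g = local_grad(w, rng)
                # Blend the local gradient with the (stale) global gradient.
                w -= lr * ((1 - alpha) * g + alpha * global_g)
            local_ws.append(w)
        new_w = np.mean(local_ws, axis=0)  # synchronous aggregation at the server
        global_g = (global_w - new_w) / (lr * tau)  # effective global gradient
        global_w = new_w
    return global_w

print(np.linalg.norm(desgd_sketch()))      # norm shrinks toward the optimum at 0
```

In this sketch, tau controls the communication sparsification: larger values mean fewer synchronizations per step (faster raw training) at the cost of staler global information, which is the accuracy-versus-speed trade-off the paper targets.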
