A Distributed Synchronous SGD Algorithm with Global Top-k Sparsification for Low Bandwidth Networks

01/14/2019
by Shaohuai Shi, et al.

Distributed synchronous stochastic gradient descent (S-SGD) with data parallelism requires very high communication bandwidth between computational workers (e.g., GPUs) to exchange gradients iteratively. Recently, Top-k sparsification techniques have been proposed to reduce the volume of data exchanged among workers and thus alleviate the network pressure. Top-k sparsification can zero out a significant portion of gradients without impacting model convergence. However, the sparse gradients must be transferred together with their indices, and these irregular indices make aggregating sparse gradients difficult. Current methods that use AllGather to accumulate the sparse gradients have a communication complexity of O(kP), where P is the number of workers, which is inefficient on low-bandwidth networks with a large number of workers. We observe that not all Top-k gradients from the P workers are needed for the model update, and we therefore propose a novel global Top-k (gTop-k) sparsification mechanism to address the difficulty of aggregating sparse gradients. Specifically, in each iteration we select the k gradients with the globally largest absolute values across the P workers, instead of accumulating all local Top-k gradients to update the model. The gradient aggregation method based on gTop-k sparsification, namely gTopKAllReduce, reduces the communication complexity from O(kP) to O(k log_2 P). Through extensive experiments on different DNNs, we verify that gTop-k S-SGD converges nearly as well as S-SGD. We evaluate the training efficiency of gTop-k on a cluster of 32 GPU machines interconnected with 1 Gbps Ethernet. The experimental results show that our method achieves 2.7-12× higher scaling efficiency than S-SGD with dense gradients, and a 1.1-1.7× improvement over the existing Top-k S-SGD.
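The following is a minimal, single-process sketch of the gTop-k idea, not the authors' gTopKAllReduce implementation: the helper names (local_topk, merge_topk, gtopk_allreduce) are hypothetical, the P workers are simulated with NumPy arrays, and the recursive pairwise merge stands in for the actual network exchange. It illustrates why each worker only ever sends O(k) (index, value) pairs per round and why log_2 P rounds suffice, giving the O(k log_2 P) communication cost stated above, in contrast to the O(kP) cost of AllGather-ing every worker's local Top-k set.

import numpy as np

def local_topk(grad, k):
    # Keep the k largest-magnitude entries of a dense gradient and
    # return them as (indices, values); everything else is treated as zero.
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

def merge_topk(idx_a, val_a, idx_b, val_b, k):
    # Merge two sparse (index, value) sets, summing values that share an
    # index, and keep only the k largest-magnitude results so that every
    # message stays O(k) in size.
    idx = np.concatenate([idx_a, idx_b])
    val = np.concatenate([val_a, val_b])
    uniq, inv = np.unique(idx, return_inverse=True)
    summed = np.zeros(uniq.shape[0], dtype=val.dtype)
    np.add.at(summed, inv, val)
    if summed.size > k:
        keep = np.argpartition(np.abs(summed), -k)[-k:]
    else:
        keep = np.arange(summed.size)
    return uniq[keep], summed[keep]

def gtopk_allreduce(local_grads, k):
    # Simulated gTop-k aggregation over P workers (P assumed a power of two).
    # Each of the log2(P) rounds exchanges only O(k) (index, value) pairs per
    # worker, instead of gathering all P local Top-k sets at every worker.
    P = len(local_grads)
    sparse = [local_topk(g, k) for g in local_grads]
    stride = 1
    while stride < P:
        for w in range(0, P, 2 * stride):
            peer = w + stride
            sparse[w] = merge_topk(*sparse[w], *sparse[peer], k)
        stride *= 2
    # sparse[0] now holds the k globally largest-magnitude (summed) gradients;
    # a real implementation would broadcast them back to all workers.
    return sparse[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grads = [rng.standard_normal(10000) for _ in range(4)]  # 4 simulated workers
    indices, values = gtopk_allreduce(grads, k=16)
    print(indices, values)

Because every pairwise merge truncates back to k entries, the message size never grows with P; the tree of merges is what turns the O(kP) AllGather cost into O(k log_2 P).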

Related research

08/29/2023 · ABS-SGD: A Delayed Synchronous Stochastic Gradient Descent Algorithm with Adaptive Batch Size for Heterogeneous GPU Clusters
As the size of models and datasets grows, it has become increasingly com...

05/31/2020 · DaSGD: Squeezing SGD Parallelization Performance in Distributed Training Using Delayed Averaging
The state-of-the-art deep learning algorithms rely on distributed traini...

09/06/2020 · HLSGD: Hierarchical Local SGD With Stale Gradients Featuring
While distributed training significantly speeds up the training process ...

06/16/2023 · Just One Byte (per gradient): A Note on Low-Bandwidth Decentralized Language Model Finetuning Using Shared Randomness
Language model training in distributed settings is limited by the commun...

09/10/2023 · Linear Speedup of Incremental Aggregated Gradient Methods on Streaming Data
This paper considers a type of incremental aggregated gradient (IAG) met...

04/03/2023 · SparDL: Distributed Deep Learning Training with Efficient Sparse Communication
Top-k sparsification has recently been widely used to reduce the communi...

07/07/2023 · DEFT: Exploiting Gradient Norm Difference between Model Layers for Scalable Gradient Sparsification
Gradient sparsification is a widely adopted solution for reducing the ex...
