Sparse Communication for Distributed Gradient Descent

04/17/2017
by Alham Fikri Aji, et al.

We make distributed stochastic gradient descent faster by exchanging sparse updates instead of dense updates. Gradient updates are positively skewed as most updates are near zero, so we map the 99% smallest updates (by absolute value) to zero then exchange sparse matrices. This method can be combined with quantization to further improve the compression. We explore different configurations and apply them to neural machine translation and MNIST image classification tasks. Most configurations work on MNIST, whereas different configurations reduce convergence rate on the more complex translation task. Our experiments show that we can achieve up to 49% speed up on MNIST and 22% on NMT without damaging the final accuracy or BLEU.
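To make the core idea concrete, below is a minimal NumPy sketch of the sparsification step described above: keep only the largest-magnitude entries of a gradient tensor and exchange them as an index/value pair. The function name `sparsify_gradient`, the `drop_ratio` parameter, and the residual bookkeeping are illustrative assumptions for this sketch, not code from the paper.

```python
import numpy as np

def sparsify_gradient(grad, drop_ratio=0.99):
    """Keep only the largest-magnitude entries of a gradient tensor.

    Returns the (indices, values) that would be exchanged as a sparse
    update, plus the dropped remainder, which a worker could accumulate
    locally before the next step. Hypothetical helper for illustration.
    """
    flat = grad.ravel()
    k = max(1, int(flat.size * (1.0 - drop_ratio)))   # number of entries to keep
    threshold = np.partition(np.abs(flat), -k)[-k]    # k-th largest |value|
    mask = np.abs(flat) >= threshold
    indices = np.nonzero(mask)[0]
    values = flat[indices]
    residual = flat.copy()
    residual[indices] = 0.0                           # dropped mass stays local
    return indices, values, residual.reshape(grad.shape)

# Example: a worker sparsifies its gradient before communicating it.
rng = np.random.default_rng(0)
g = rng.standard_normal((1000, 100)).astype(np.float32)
idx, vals, residual = sparsify_gradient(g, drop_ratio=0.99)
print(f"sent {idx.size} of {g.size} values "
      f"({100.0 * idx.size / g.size:.1f}% of the dense gradient)")
```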
