Achieving the fundamental convergence-communication tradeoff with Differentially Quantized Gradient Descent

02/06/2020
by Chung-Yi Lin et al.

The problem of reducing the communication cost of distributed training through gradient quantization is considered. For the class of smooth and strongly convex objective functions, we characterize the minimum achievable linear convergence rate for a given number of bits per problem dimension n. We propose Differentially Quantized Gradient Descent (DQ-GD), a quantization algorithm with error compensation, and prove that it achieves the fundamental tradeoff between communication rate and convergence rate as the problem dimension n goes to infinity. In contrast, the naive quantizer that compresses the current gradient directly fails to achieve that optimal tradeoff. Experimental results on both simulated and real-world least-squares problems confirm our theoretical analysis.
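The abstract contrasts an error-compensated quantized gradient method with a naive scheme that quantizes the current gradient directly. The sketch below illustrates that contrast on a toy least-squares problem; it uses a generic uniform scalar quantizer with error feedback, and the function names, quantizer, clipping range, and step-size choice are illustrative assumptions, not the exact DQ-GD construction from the paper.

```python
import numpy as np

def uniform_quantize(v, n_bits, v_max):
    """Uniform scalar quantizer on [-v_max, v_max] with n_bits per coordinate."""
    levels = 2 ** n_bits
    step = 2 * v_max / (levels - 1)
    clipped = np.clip(v, -v_max, v_max)
    return np.round((clipped + v_max) / step) * step - v_max

def quantized_gd(grad_fn, x0, lr, n_bits, v_max, iters, compensate=True):
    """Gradient descent with quantized gradients.

    If compensate=True, the previous quantization error is fed back into the
    vector to be quantized (error compensation); otherwise the current
    gradient is quantized directly (the "naive" scheme from the abstract).
    """
    x = x0.copy()
    err = np.zeros_like(x0)                      # accumulated quantization error
    for _ in range(iters):
        g = grad_fn(x)
        u = g + err if compensate else g         # error feedback before quantizing
        q = uniform_quantize(u, n_bits, v_max)   # the value that would be transmitted
        err = (u - q) if compensate else np.zeros_like(x0)
        x = x - lr * q                           # descent step uses the quantized vector
    return x

# Toy least-squares example: f(x) = 0.5 * ||A x - b||^2 (smooth and strongly convex for full-rank A)
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)
grad = lambda x: A.T @ (A @ x - b)
lr = 1.0 / np.linalg.norm(A, 2) ** 2             # 1/L, with L the largest eigenvalue of A^T A

x_comp = quantized_gd(grad, np.zeros(10), lr, n_bits=4, v_max=10.0, iters=500)
x_naive = quantized_gd(grad, np.zeros(10), lr, n_bits=4, v_max=10.0, iters=500, compensate=False)
x_star = np.linalg.lstsq(A, b, rcond=None)[0]
print("with error compensation:", np.linalg.norm(x_comp - x_star))
print("naive quantization:     ", np.linalg.norm(x_naive - x_star))
```

Running the script compares the final distance to the least-squares solution under the two schemes at the same bit budget; the error-compensated variant typically lands closer, in line with the qualitative claim of the abstract.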
