PowerGossip: Practical Low-Rank Communication Compression in Decentralized Deep Learning

08/04/2020
by Thijs Vogels, et al.

Lossy gradient compression has become a practical tool to overcome the communication bottleneck in centrally coordinated distributed training of machine learning models. However, algorithms for decentralized training with compressed communication over arbitrary connected networks have been more complicated, requiring additional memory and hyperparameters. We introduce a simple algorithm that directly compresses the model differences between neighboring workers using low-rank linear compressors. Inspired by the PowerSGD algorithm for centralized deep learning, this algorithm uses power iteration steps to maximize the information transferred per bit. We prove that our method requires no additional hyperparameters, converges faster than prior methods, and is asymptotically independent of both the network and the compression. Out of the box, these compressors perform on par with state-of-the-art tuned compression algorithms in a series of deep learning benchmarks.
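To illustrate the core idea, here is a minimal NumPy sketch of low-rank compression of a model difference with a single power-iteration step, in the spirit of PowerSGD. The function name power_iteration_compress, the shapes, and the rank are illustrative assumptions, not the paper's implementation; in PowerSGD-style schemes the right factor is warm-started across rounds, so one iteration per round suffices.

```python
import numpy as np

def power_iteration_compress(diff, q):
    """One power-iteration step: approximate `diff` (n x m) by rank-r factors.

    Only the small factors p (n x r) and q_new (m x r) would need to be
    communicated, instead of the full n x m difference.
    """
    p = diff @ q                 # project onto the current right factor
    p, _ = np.linalg.qr(p)       # orthonormalize the left factor
    q_new = diff.T @ p           # update the right factor
    return p, q_new

# Illustrative usage: compress the difference between two workers' copies
# of one (matrix-shaped) parameter. Shapes and rank are arbitrary choices.
rng = np.random.default_rng(0)
x_self = rng.standard_normal((256, 128))
x_neighbor = rng.standard_normal((256, 128))
rank = 2

q = rng.standard_normal((128, rank))   # in practice, warm-started from the previous round
p, q = power_iteration_compress(x_self - x_neighbor, q)
low_rank_diff = p @ q.T                # rank-2 approximation of the model difference
```

In a gossip scheme of this kind, each worker would move its parameters a step toward the reconstructed low-rank difference of each neighbor, so the bits sent per round scale with the rank rather than with the full parameter size.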


Related research

Decentralized Deep Learning with Arbitrary Communication Compression (07/22/2019)
PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization (05/31/2019)
A Highly Effective Low-Rank Compression of Deep Neural Networks with Modified Beam-Search and Modified Stable Rank (11/30/2021)
DeepSqueeze: Decentralization Meets Error-Compensated Compression (07/17/2019)
BEER: Fast O(1/T) Rate for Decentralized Nonconvex Optimization with Communication Compression (01/31/2022)
Decentralization Meets Quantization (03/17/2018)
A flexible, extensible software framework for model compression based on the LC algorithm (05/15/2020)
