Consensus Control for Decentralized Deep Learning

by Lingjing Kong, et al.

Decentralized training of deep learning models enables on-device learning over networks, as well as efficient scaling to large compute clusters. Experiments in earlier works reveal that, even in a data-center setup, decentralized training often suffers from model-quality degradation: the training and test performance of models trained in a decentralized fashion is in general worse than that of centrally trained models, and this performance drop is affected by parameters such as network size, communication topology, and data partitioning. We identify the changing consensus distance between devices as a key parameter that explains the gap between centralized and decentralized training. We show theoretically that when the consensus distance during training stays below a critical quantity, decentralized training converges as fast as its centralized counterpart. We empirically validate that the relation between generalization performance and consensus distance is consistent with this theoretical observation. These insights enable the principled design of better decentralized training schemes that mitigate the performance drop. To this end, we propose practical training guidelines for the data-center setup as an important first step.
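The consensus distance mentioned in the abstract is commonly defined as the root-mean-square deviation of the workers' parameter vectors from their average. A minimal sketch of this quantity, assuming that standard definition (the function name and array layout below are illustrative, not from the paper):

```python
import numpy as np

def consensus_distance(params: np.ndarray) -> float:
    """Root-mean-square deviation of worker parameters from their mean.

    params: array of shape (n_workers, dim), one flattened parameter
            vector per worker.
    Returns sqrt((1/n) * sum_i ||x_i - x_bar||^2), which is zero exactly
    when all workers agree.
    """
    mean = params.mean(axis=0)                     # x_bar, the average model
    sq_dev = np.sum((params - mean) ** 2, axis=1)  # ||x_i - x_bar||^2 per worker
    return float(np.sqrt(sq_dev.mean()))
```

Monitoring this scalar during training is one way to check, per the paper's observation, whether the network stays below the critical consensus level at which decentralized training matches centralized convergence.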


