Decentralized Training of Foundation Models in Heterogeneous Environments

06/02/2022 ∙ by Binhang Yuan, et al.
Training foundation models, such as GPT-3 and PaLM, can be extremely expensive, often involving tens of thousands of GPUs running continuously for months. These models are typically trained in specialized clusters featuring fast, homogeneous interconnects and using carefully designed software systems that support both data parallelism and model/pipeline parallelism. Such dedicated clusters can be costly and difficult to obtain. Can we instead leverage the much greater amount of decentralized, heterogeneous, and lower-bandwidth interconnected compute? Previous works examining the heterogeneous, decentralized setting focus on relatively small models that can be trained in a purely data parallel manner. State-of-the-art schemes for model parallel foundation model training, such as Megatron, only consider the homogeneous data center setting. In this paper, we present the first study of training large foundation models with model parallelism in a decentralized regime over a heterogeneous network. Our key technical contribution is a scheduling algorithm that allocates different computational "tasklets" in the training of foundation models to a group of decentralized GPU devices connected by a slow heterogeneous network. We provide a formal cost model and further propose an efficient evolutionary algorithm to find the optimal allocation strategy. We conduct extensive experiments that represent different scenarios for learning over geo-distributed devices simulated using real-world network measurements. In the most extreme case, across 8 different cities spanning 3 continents, our approach is 4.8X faster than prior state-of-the-art training systems (Megatron).
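To make the setup concrete, the sketch below illustrates the kind of allocation search the abstract describes: pipeline-stage "tasklets" are assigned to devices, a toy cost model charges inter-stage activation transfers against pairwise link bandwidths, and a simple evolutionary loop (selection plus random mutation) searches for a cheaper assignment. This is a minimal illustration under assumed names and numbers, not the paper's actual cost model, scheduler, or hyperparameters.

```python
# Minimal sketch of an evolutionary search for tasklet-to-device allocation.
# Illustrative toy only, NOT the paper's cost model or scheduler: the cost
# function, mutation operator, and all hyperparameters are assumptions.
import random

def pipeline_cost(assignment, bandwidth, activation_mb):
    """Toy cost: time to ship activations between consecutive pipeline stages.

    assignment[i] = device hosting stage i; bandwidth[a][b] = MB/s between devices.
    """
    cost = 0.0
    for i in range(len(assignment) - 1):
        a, b = assignment[i], assignment[i + 1]
        if a != b:  # no network cost when consecutive stages share a device
            cost += activation_mb / bandwidth[a][b]
    return cost

def mutate(assignment, num_devices):
    """Reassign one randomly chosen stage to a random device."""
    child = list(assignment)
    child[random.randrange(len(child))] = random.randrange(num_devices)
    return child

def evolve(num_stages, bandwidth, activation_mb=64.0,
           population=32, generations=200, seed=0):
    """Keep the cheapest half of the population, refill it with mutants."""
    random.seed(seed)
    num_devices = len(bandwidth)
    pop = [[random.randrange(num_devices) for _ in range(num_stages)]
           for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=lambda a: pipeline_cost(a, bandwidth, activation_mb))
        survivors = pop[: population // 2]
        pop = survivors + [mutate(random.choice(survivors), num_devices)
                           for _ in range(population - len(survivors))]
    best = min(pop, key=lambda a: pipeline_cost(a, bandwidth, activation_mb))
    return best, pipeline_cost(best, bandwidth, activation_mb)

if __name__ == "__main__":
    # 4 devices: pairs (0,1) and (2,3) share fast links; cross-pair links are slow.
    bw = [[0, 1000, 50, 50],
          [1000, 0, 50, 50],
          [50, 50, 0, 1000],
          [50, 50, 1000, 0]]
    best, cost = evolve(num_stages=8, bandwidth=bw)
    print("assignment:", best, "cost (s):", round(cost, 3))
```

The real system must also model data-parallel gradient synchronization and far richer communication patterns over heterogeneous links, but the allocate, evaluate, mutate loop above conveys the basic structure of such a search.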

Related research

05/28/2020 ∙ HetPipe: Enabling Large DNN Training on (Whimpy) Heterogeneous GPU Clusters through Integration of Pipelined Model Parallelism and Data Parallelism
Deep Neural Network (DNN) models have continuously been growing in size ...

06/10/2022 ∙ Merak: An Efficient Distributed DNN Training Framework with Automated 3D Parallelism for Giant Foundation Models
Foundation models are becoming the dominant deep learning technologies. ...

01/27/2023 ∙ SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient
Many deep learning applications benefit from using large models with bil...

05/25/2023 ∙ Automated Tensor Model Parallelism with Overlapped Communication for Efficient Foundation Model Training
Deep learning is experiencing a rise in foundation models that are expec...

02/09/2021 ∙ Consensus Control for Decentralized Deep Learning
Decentralized training of deep learning models enables on-device learnin...

11/16/2021 ∙ Task allocation for decentralized training in heterogeneous environment
The demand for large-scale deep learning is increasing, and distributed ...

08/25/2023 ∙ Heterogeneous Decentralized Machine Unlearning with Seed Model Distillation
As some recent information security legislation endowed users with uncon...
