HetSeq: Distributed GPU Training on Heterogeneous Infrastructure

09/25/2020
by Yifan Ding, et al.

Modern deep learning systems like PyTorch and TensorFlow are able to train enormous models with billions (or trillions) of parameters on distributed infrastructure. These systems require that the internal nodes have the same memory capacity and compute performance. Unfortunately, most organizations, especially universities, take a piecemeal approach to purchasing computer systems, resulting in heterogeneous infrastructure that cannot be used to train large models. The present work describes HetSeq, a software package adapted from the popular PyTorch package that provides the capability to train large neural network models on heterogeneous infrastructure. Experiments with transformer translation and the BERT language model show that HetSeq scales over heterogeneous systems. HetSeq can also be easily extended to other models, such as image classification. The package and supporting documentation are publicly available at https://github.com/yifding/hetseq.
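HetSeq builds on PyTorch's distributed primitives to coordinate training across nodes. The sketch below is a minimal illustration of the kind of multi-node, data-parallel setup involved, written against vanilla PyTorch; it is not HetSeq's API, and the toy linear model, environment variables, and hyperparameters are assumptions for illustration only.

    # Minimal sketch of multi-node data-parallel training in plain PyTorch.
    # Illustrates the general mechanism HetSeq builds on; NOT HetSeq's API.
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        # Each process is launched with RANK, WORLD_SIZE, MASTER_ADDR, and
        # MASTER_PORT set in the environment (e.g. by torchrun or a scheduler).
        dist.init_process_group(backend="nccl", init_method="env://")
        local_rank = int(os.environ.get("LOCAL_RANK", 0))
        torch.cuda.set_device(local_rank)

        # A toy model stands in for a transformer or BERT encoder.
        model = torch.nn.Linear(1024, 1024).cuda(local_rank)
        model = DDP(model, device_ids=[local_rank])
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

        for step in range(10):
            x = torch.randn(32, 1024, device=local_rank)
            loss = model(x).pow(2).mean()
            optimizer.zero_grad()
            loss.backward()   # gradients are all-reduced across all nodes
            optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

Run one process per GPU on every node (for example via torchrun with matching MASTER_ADDR/MASTER_PORT); heterogeneous clusters complicate this picture because nodes with different GPU counts and memory need per-node configuration, which is the gap HetSeq addresses.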
