Boosting Distributed Full-graph GNN Training with Asynchronous One-bit Communication

03/02/2023
by Meng Zhang, et al.

Training Graph Neural Networks (GNNs) on large graphs is challenging due to the conflict between their high memory demand and limited GPU memory. Distributed full-graph GNN training has recently been widely adopted to tackle this problem, but the substantial inter-GPU communication it requires can cause severe throughput degradation. Existing communication compression techniques mainly target traditional DNN training, whose bottleneck lies in synchronizing gradients and parameters. We find that they do not work well in distributed GNN training, where the barrier is instead the layer-wise communication of features during the forward pass and of feature gradients during the backward pass. To this end, we propose Sylvie, an efficient distributed GNN training framework that applies one-bit quantization to this layer-wise communication and further pipelines the curtailed communication with computation, substantially shrinking the overhead while maintaining model quality. In detail, Sylvie provides a lightweight Low-bit Module that quantizes the sent data and dequantizes the received data back to full-precision values in each layer. Additionally, we propose a Bounded Staleness Adaptor that controls the introduced staleness to achieve further performance gains. We conduct a theoretical convergence analysis and extensive experiments on various models and datasets to demonstrate that Sylvie boosts training throughput by up to 28.1x.
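
As a rough illustration of the kind of one-bit scheme the abstract describes, the sketch below shows a sign-based quantize/dequantize pair for the feature (or feature-gradient) tensors exchanged between GPU partitions. The abstract does not specify Sylvie's exact quantizer, so the function names, the per-tensor mean-magnitude scale, and the unpacked uint8 sign representation here are illustrative assumptions; a real system would pack eight signs per byte before communication and overlap the transfer with computation.

import torch

def one_bit_quantize(x: torch.Tensor):
    """Compress x to a sign tensor plus one full-precision scale (illustrative)."""
    scale = x.abs().mean()                  # single fp32 scale per tensor
    signs = (x >= 0).to(torch.uint8)        # 1 bit of information per element
    return signs, scale

def one_bit_dequantize(signs: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Recover an approximate full-precision tensor on the receiving GPU."""
    return (signs.to(torch.float32) * 2.0 - 1.0) * scale

# Toy usage: compress boundary-node features before the layer-wise exchange,
# then dequantize on the receiver before the next layer's aggregation.
feats = torch.randn(1024, 128)              # features of remote neighbors to send
signs, scale = one_bit_quantize(feats)
approx = one_bit_dequantize(signs, scale)
print((feats - approx).abs().mean())        # reconstruction error of the 1-bit scheme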

