AdaptGear: Accelerating GNN Training via Adaptive Subgraph-Level Kernels on GPUs

05/27/2023
by Yangjie Zhou, et al.

Graph neural networks (GNNs) are powerful tools for exploring and learning from graph structures and features, which makes high-performance GNN execution crucially important. Prior works accelerate GNNs by exploiting the sparsity (i.e., low density) of the input graph, using either a full-graph-level or a block-level sparsity format. We show that both fail to balance the benefit of sparsity against kernel execution efficiency. In this paper, we propose a novel system, referred to as AdaptGear, that optimizes GNN performance by leveraging kernels tailored to the density characteristics at the subgraph level. We further propose a method that dynamically chooses the optimal set of kernels for a given input graph. Our evaluation shows that AdaptGear achieves a significant performance improvement, up to 6.49× (1.87× on average), over state-of-the-art works on two mainstream NVIDIA GPUs across various datasets.
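To make the core idea concrete, here is a minimal sketch of density-adaptive kernel dispatch for the sparse-matrix aggregation step of GNN training. It is not the authors' implementation: the function names, the row-block partitioning, and the density threshold are assumptions for illustration only, and the real system selects among tuned GPU kernels rather than NumPy/SciPy calls.

```python
# Illustrative sketch only: pick a dense or sparse kernel per subgraph
# (here, per row block) based on its measured density. The threshold
# and block size are hypothetical, not taken from the AdaptGear paper.
import numpy as np
import scipy.sparse as sp

DENSE_THRESHOLD = 0.5  # assumed cutoff; AdaptGear chooses kernels per input graph


def spmm_adaptive(adj: sp.csr_matrix, features: np.ndarray,
                  block_rows: int = 1024) -> np.ndarray:
    """Aggregate node features, choosing a kernel per subgraph (row block)."""
    out = np.zeros((adj.shape[0], features.shape[1]), dtype=features.dtype)
    for start in range(0, adj.shape[0], block_rows):
        end = min(start + block_rows, adj.shape[0])
        block = adj[start:end]  # one subgraph: a horizontal slice of the adjacency
        density = block.nnz / (block.shape[0] * block.shape[1])
        if density > DENSE_THRESHOLD:
            # Dense kernel: materialize the block; regular compute maps
            # well to GPU hardware when few entries are zero.
            out[start:end] = block.toarray() @ features
        else:
            # Sparse kernel: skip zeros entirely; wins when the block is sparse.
            out[start:end] = block @ features
    return out
```

The point of the sketch is the dispatch structure: a single full-graph sparsity format would force one kernel on every region, while per-subgraph dispatch lets dense regions use efficient dense compute and sparse regions avoid wasted work on zeros.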
