Hypergraph Pre-training with Graph Neural Networks

by Boxin Du, et al.

Despite the prevalence of hypergraphs in a variety of high-impact applications, there are relatively few works on hypergraph representation learning, most of which primarily focus on hyperlink prediction, often restricted to the transductive learning setting. Among others, a major hurdle for effective hypergraph representation learning lies in the label scarcity of nodes and/or hyperedges. To address this issue, this paper presents an end-to-end, bi-level pre-training strategy with Graph Neural Networks for hypergraphs. The proposed framework, named HyperGene, bears three distinctive advantages. First, it is capable of ingesting labeling information when available, but more importantly, it is mainly designed in a self-supervised fashion, which significantly broadens its applicability. Second, at the heart of the proposed HyperGene are two carefully designed pretexts, one at the node level and the other at the hyperedge level, which enable us to encode both the local and the global context in a mutually complementary way. Third, the proposed framework can work in both transductive and inductive settings. When applying the two proposed pretexts in tandem, it can accelerate the adaptation of knowledge from the pre-trained model to downstream applications in the transductive setting, thanks to the bi-level nature of the proposed method. The extensive experimental results demonstrate that: (1) HyperGene achieves up to 5.69% improvement, and (2) it improves pre-training efficiency by up to 42.80%.
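To make the setting concrete, the sketch below shows one common way to feed a hypergraph to a standard GNN: represent it as a node-by-hyperedge incidence matrix, clique-expand it into an ordinary graph, and run a normalized graph-convolution step. This is a minimal illustration under those assumptions, not HyperGene's actual pretexts or architecture, and the toy hyperedge memberships are made up.

```python
import numpy as np

# Toy hypergraph: 5 nodes, 3 hyperedges, as an incidence matrix H where
# H[v, e] = 1 iff node v belongs to hyperedge e (memberships are illustrative).
H = np.array([
    [1, 0, 0],
    [1, 1, 0],
    [1, 1, 0],
    [0, 1, 1],
    [0, 0, 1],
], dtype=float)

# Clique expansion: connect every pair of nodes that co-occur in some
# hyperedge. H @ H.T counts shared hyperedges; zero the diagonal to
# drop self-loops.
A = H @ H.T
np.fill_diagonal(A, 0.0)

# One symmetrically normalized graph-convolution step on random features,
# i.e. the basic operation a GNN would apply to the expanded graph.
deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
A_norm = D_inv_sqrt @ A @ D_inv_sqrt

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 4))          # node features
W = rng.standard_normal((4, 2))          # layer weights
Z = np.maximum(A_norm @ X @ W, 0.0)      # ReLU(A_norm @ X @ W)
print(Z.shape)  # (5, 2)
```

Note that clique expansion loses higher-order information (a 3-node hyperedge becomes indistinguishable from a triangle of pairwise edges), which is part of why dedicated hypergraph methods exist.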



