GraphMAE2: A Decoding-Enhanced Masked Self-Supervised Graph Learner

04/10/2023
by   Zhenyu Hou, et al.

Graph self-supervised learning (SSL), including contrastive and generative approaches, offers great potential to address the fundamental challenge of label scarcity in real-world graph data. Among these techniques, masked graph autoencoders (e.g., GraphMAE), a type of generative method, have recently produced promising results. The idea is to reconstruct node features (or structures) that are randomly masked from the input using an autoencoder architecture. However, the performance of masked feature reconstruction naturally relies on the discriminability of the input features and is usually vulnerable to disturbances in those features. In this paper, we present GraphMAE2, a masked self-supervised learning framework designed to overcome this issue by imposing regularization on feature reconstruction for graph SSL. Specifically, we design two strategies, multi-view random re-mask decoding and latent representation prediction, to regularize the feature reconstruction: the former introduces randomness into reconstruction in the feature space, while the latter enforces reconstruction in the embedding space. Extensive experiments show that GraphMAE2 consistently achieves top results on various public datasets, including at least a 2.45% improvement on ogbn-Papers100M with 111M nodes and 1.6B edges.
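To make the two regularizers concrete, here is a minimal numpy sketch of the ideas behind them. It is an illustrative simplification, not the paper's implementation: `remask_views` re-masks a random subset of encoded node representations with a shared mask token, independently per view (standing in for multi-view random re-mask decoding), and `sce_loss` is the scaled cosine error that GraphMAE-style models use for reconstruction, which could equally score latent representation prediction against target embeddings. All function names and the 0-vector mask token are assumptions for illustration.

```python
import numpy as np

def remask_views(h, mask_token, remask_ratio=0.5, num_views=3, rng=None):
    """Multi-view random re-mask (illustrative sketch).

    Before decoding, replace a random subset of node embeddings with a
    shared mask token, drawing an independent subset for each view so
    the decoder sees several randomized versions of the same encoding.
    """
    if rng is None:
        rng = np.random.default_rng()
    n = h.shape[0]
    k = int(n * remask_ratio)
    views = []
    for _ in range(num_views):
        idx = rng.choice(n, size=k, replace=False)  # nodes re-masked in this view
        v = h.copy()
        v[idx] = mask_token                         # overwrite with shared token
        views.append((v, idx))
    return views

def sce_loss(x, y, gamma=2.0):
    """Scaled cosine error: mean of (1 - cos(x_i, y_i))^gamma over rows."""
    x = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
    y = y / (np.linalg.norm(y, axis=1, keepdims=True) + 1e-8)
    return float(np.mean((1.0 - np.sum(x * y, axis=1)) ** gamma))
```

In a full pipeline, each re-masked view would be passed through the decoder and scored against the original input features with `sce_loss`, and a separate latent-prediction head would be scored the same way against target-network embeddings; averaging over views is what injects randomness into the reconstruction objective.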


