GraphCL: Contrastive Self-Supervised Learning of Graph Representations

07/15/2020
by   Hakim Hafidi, et al.

We propose Graph Contrastive Learning (GraphCL), a general framework for learning node representations in a self-supervised manner. GraphCL learns node embeddings by maximizing the similarity between the representations of two randomly perturbed versions of the same node's local subgraph, where the perturbations act on both the intrinsic node features and the link structure. We use graph neural networks to produce two representations of the same node and leverage a contrastive learning loss to maximize agreement between them. In both transductive and inductive learning setups, we demonstrate that our approach significantly outperforms the state of the art in unsupervised learning on a number of node classification benchmarks.
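To make the training objective concrete, below is a minimal sketch of a contrastive loss applied to two views of node embeddings. It assumes a simple NT-Xent-style formulation; the temperature value, the stand-in embeddings, and the function name are illustrative placeholders, not the authors' implementation, and in practice the two views would come from a GNN applied to two randomly perturbed versions of each node's local subgraph.

```python
# Minimal sketch of a contrastive objective over two views of node embeddings.
# Illustrative only: the temperature and example data are placeholders and do
# not reproduce the authors' exact GraphCL implementation.
import torch
import torch.nn.functional as F

def contrastive_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent-style loss: each node's embedding in view 1 should be most
    similar to its own embedding in view 2 (and vice versa)."""
    z1 = F.normalize(z1, dim=1)          # (N, d) embeddings from view 1
    z2 = F.normalize(z2, dim=1)          # (N, d) embeddings from view 2
    logits = z1 @ z2.t() / temperature   # (N, N) scaled cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    # Symmetrize the loss over both view directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Example usage with random stand-in embeddings.
if __name__ == "__main__":
    z_view1 = torch.randn(128, 64)
    z_view2 = torch.randn(128, 64)
    print(contrastive_loss(z_view1, z_view2).item())
```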
