PairNorm: Tackling Oversmoothing in GNNs

09/26/2019
by Lingxiao Zhao et al.

The performance of graph neural networks (GNNs) is known to gradually decrease as the number of layers increases. This decay is partly attributed to oversmoothing, where repeated graph convolutions eventually make node embeddings indistinguishable. We take a closer look at two different interpretations, aiming to quantify oversmoothing. Our main contribution is PairNorm, a novel normalization layer based on a careful analysis of the graph convolution operator, which prevents all node embeddings from becoming too similar. What is more, PairNorm is fast, easy to implement without any change to network architecture or any additional parameters, and is broadly applicable to any GNN. Experiments on real-world graphs demonstrate that PairNorm makes deeper GCN, GAT, and SGC models more robust against oversmoothing, and significantly boosts performance for a new problem setting that benefits from deeper GNNs.
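The abstract describes a normalization layer that centers node embeddings and rescales them so they cannot all collapse to the same point. A minimal NumPy sketch of this idea is below; the function name `pairnorm`, the scale parameter `s`, and the epsilon guard are illustrative assumptions, not the paper's reference implementation:

```python
import numpy as np

def pairnorm(x, s=1.0, eps=1e-6):
    """Sketch of a PairNorm-style step: center the node embeddings,
    then rescale so the mean squared row norm equals roughly s**2,
    keeping total pairwise distance from shrinking across layers."""
    x_centered = x - x.mean(axis=0, keepdims=True)      # remove the shared mean
    mean_sq_norm = (x_centered ** 2).sum(axis=1).mean()  # average squared L2 norm per node
    return s * x_centered / np.sqrt(mean_sq_norm + eps)

# Usage: apply to the output of each graph convolution layer.
h = np.random.default_rng(0).normal(size=(5, 4))  # 5 nodes, 4-dim embeddings
h_norm = pairnorm(h)
```

Because the layer only centers and rescales, it adds no trainable parameters and can be dropped between the layers of any GNN, consistent with the abstract's claims.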


Related research

01/07/2023 — Reducing Over-smoothing in Graph Neural Networks Using Relational Embeddings
Graph Neural Networks (GNNs) have achieved a lot of success with graph-s...

06/12/2020 — Towards Deeper Graph Neural Networks with Differentiable Group Normalization
Graph neural networks (GNNs), which learn the representation of a node b...

09/12/2019 — GRESNET: Graph Residuals for Reviving Deep Graph Neural Nets from Suspended Animation
In this paper, we will investigate the causes of the GNNs' "suspended an...

07/22/2023 — Collaborative Graph Neural Networks for Attributed Network Embedding
Graph neural networks (GNNs) have shown prominent performance on attribu...

06/15/2022 — Feature Overcorrelation in Deep Graph Neural Networks: A New Perspective
Recent years have witnessed remarkable success achieved by graph neural ...

10/24/2022 — Binary Graph Convolutional Network with Capacity Exploration
The current success of Graph Neural Networks (GNNs) usually relies on lo...

10/08/2022 — SlenderGNN: Accurate, Robust, and Interpretable GNN, and the Reasons for its Success
Can we design a GNN that is accurate and interpretable at the same time?...
