Deconfounded Training for Graph Neural Networks

12/30/2021
by Yongduo Sui et al.

Learning powerful representations is one central theme of graph neural networks (GNNs). It requires refining the critical information from the input graph, rather than the trivial patterns, to enrich the representations. Toward this end, graph attention and pooling methods prevail. They mostly follow the paradigm of "learning to attend", which maximizes the mutual information between the attended subgraph and the ground-truth label. However, this training paradigm is prone to capturing spurious correlations between the trivial subgraph and the label. Such spurious correlations benefit in-distribution (ID) test evaluations but cause poor generalization on out-of-distribution (OOD) test data. In this work, we revisit GNN modeling from a causal perspective. Under our causal assumption, the trivial information serves as a confounder between the critical information and the label; it opens a backdoor path between them and makes them spuriously correlated. Hence, we present a new paradigm of deconfounded training (DTP) that better mitigates the confounding effect and latches onto the critical information, to enhance the representation and generalization ability. Specifically, we adopt attention modules to disentangle the critical subgraph and the trivial subgraph. We then make each critical subgraph fairly interact with diverse trivial subgraphs to achieve a stable prediction. This allows GNNs to capture a more reliable subgraph whose relation with the label is robust across different distributions. We conduct extensive experiments on synthetic and real-world datasets to demonstrate the effectiveness of our method.
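To make the paradigm concrete, the sketch below shows one way the deconfounded-training idea could be realized in PyTorch: an attention module softly splits each graph into a critical and a trivial part, and the classifier is trained to predict the same label when a graph's critical embedding is paired with trivial embeddings drawn from other graphs in the batch. This is only a minimal illustration under simplifying assumptions (dense adjacency tensors, a single mean-aggregation layer); the names SimpleGNN, DeconfoundedGNN, and deconfounded_loss are hypothetical and not taken from the paper.

```python
# Minimal sketch of the deconfounded-training idea, NOT the authors' implementation.
# Graphs are assumed to arrive as dense node-feature and adjacency tensors.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGNN(nn.Module):
    """One mean-aggregation message-passing layer followed by a nonlinearity."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        # x: [B, N, F] node features, adj: [B, N, N] row-normalized adjacency
        return torch.relu(self.lin(torch.bmm(adj, x)))


class DeconfoundedGNN(nn.Module):
    def __init__(self, in_dim, hid_dim, num_classes):
        super().__init__()
        self.encoder = SimpleGNN(in_dim, hid_dim)
        # Attention module: scores each node to disentangle critical vs. trivial parts
        self.node_mask_mlp = nn.Sequential(nn.Linear(hid_dim, hid_dim),
                                           nn.ReLU(),
                                           nn.Linear(hid_dim, 1))
        # Classifier sees a critical embedding paired with a trivial embedding
        self.classifier = nn.Linear(2 * hid_dim, num_classes)

    def forward(self, x, adj):
        h = self.encoder(x, adj)                  # [B, N, D]
        m = torch.sigmoid(self.node_mask_mlp(h))  # [B, N, 1] soft node mask
        crit = (m * h).mean(dim=1)                # critical-subgraph embedding [B, D]
        triv = ((1.0 - m) * h).mean(dim=1)        # trivial-subgraph embedding  [B, D]
        return crit, triv

    def deconfounded_loss(self, crit, triv, labels, num_pairings=4):
        """Backdoor-style intervention: pair each critical embedding with trivial
        embeddings from other graphs in the batch, and require the same label
        prediction under every pairing."""
        loss = 0.0
        for _ in range(num_pairings):
            perm = torch.randperm(triv.size(0), device=triv.device)
            logits = self.classifier(torch.cat([crit, triv[perm]], dim=-1))
            loss = loss + F.cross_entropy(logits, labels)
        return loss / num_pairings


# Toy usage with random data (8 graphs, 10 nodes each, 16-dim features, 2 classes)
if __name__ == "__main__":
    B, N, Fin, D, C = 8, 10, 16, 32, 2
    model = DeconfoundedGNN(Fin, D, C)
    x = torch.randn(B, N, Fin)
    adj = torch.softmax(torch.randn(B, N, N), dim=-1)  # stand-in normalized adjacency
    labels = torch.randint(0, C, (B,))
    crit, triv = model(x, adj)
    loss = model.deconfounded_loss(crit, triv, labels)
    loss.backward()
    print(float(loss))
```

Pairing each critical embedding with several randomly permuted trivial embeddings is one simple way to approximate the backdoor adjustment described above: the prediction is averaged over diverse confounder values rather than the single trivial subgraph the graph happened to co-occur with.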

Related research

11/20/2021 · Generalizing Graph Neural Networks on Out-Of-Distribution Graphs
Graph Neural Networks (GNNs) are proposed without considering the agnost...

06/18/2020 · Subgraph Neural Networks
Deep learning methods for graphs achieve remarkable performance on many ...

01/30/2022 · Discovering Invariant Rationales for Graph Neural Networks
Intrinsic interpretability of graph neural networks (GNNs) is to find a ...

01/21/2022 · Deconfounding to Explanation Evaluation in Graph Neural Networks
Explainability of graph neural networks (GNNs) aims to answer "Why the G...

12/18/2021 · Improving Subgraph Recognition with Variational Graph Information Bottleneck
Subgraph recognition aims at discovering a compressed substructure of a ...

05/22/2023 · Causal-Based Supervision of Attention in Graph Neural Network: A Better and Simpler Choice towards Powerful Attention
In recent years, attention mechanisms have demonstrated significant pote...

06/29/2021 · Predictive Modeling in the Presence of Nuisance-Induced Spurious Correlations
Deep predictive models often make use of spurious correlations between t...