A Meta-Learning Approach for Training Explainable Graph Neural Networks

by Indro Spinelli et al.

In this paper, we investigate the degree of explainability of graph neural networks (GNNs). Existing explainers work by finding global/local subgraphs to explain a prediction, but they are applied after a GNN has already been trained. Here, we propose a meta-learning framework for improving the level of explainability of a GNN directly at training time, by steering the optimization procedure towards what we call "interpretable minima". Our framework (called MATE, MetA-Train to Explain) jointly trains a model to solve the original task, e.g., node classification, and to provide easily processable outputs for downstream algorithms that explain the model's decisions in a human-friendly way. In particular, we meta-train the model's parameters to quickly minimize the error of an instance-level GNNExplainer trained on-the-fly on randomly sampled nodes. The final internal representation relies upon a set of features that can be "better" understood by an explanation algorithm, e.g., another instance of GNNExplainer. Our model-agnostic approach can improve the explanations produced for different GNN architectures and can use any instance-based explainer to drive this process. Experiments on synthetic and real-world datasets for node and graph classification show that we can produce models that are consistently easier to explain by different algorithms. Furthermore, this increase in explainability comes at no cost to the accuracy of the model.
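To make the bi-level idea in the abstract concrete, the following is a toy NumPy sketch of the MATE-style training loop: fit a GNNExplainer-style soft edge mask to the current model ("on-the-fly"), then update the model's weights to descend both the task loss and the fitted explainer's loss. This is an illustrative assumption, not the authors' implementation: the tiny graph, the one-layer mean-aggregation "GNN", the `fit_explainer` helper, and all hyperparameters are invented, and gradients are taken by finite differences for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented toy data: 4 nodes, symmetric adjacency, 3 features, 2 classes.
A = np.array([[0., 1., 1., 0.],
              [1., 0., 0., 1.],
              [1., 0., 0., 1.],
              [0., 1., 1., 0.]])
X = rng.normal(size=(4, 3))
y = np.array([0, 1, 0, 1])

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def task_loss(W, mask):
    """Cross-entropy of a one-layer mean-aggregation 'GNN' on a masked graph."""
    A_hat = A * mask + np.eye(4)                  # soft edge mask + self-loops
    A_hat = A_hat / A_hat.sum(axis=1, keepdims=True)
    P = softmax(A_hat @ X @ W)
    return -np.log(P[np.arange(4), y] + 1e-9).mean()

def num_grad(f, P, eps=1e-4):
    """Central finite-difference gradient -- fine for a problem this small."""
    G = np.zeros_like(P)
    for idx in np.ndindex(P.shape):
        d = np.zeros_like(P)
        d[idx] = eps
        G[idx] = (f(P + d) - f(P - d)) / (2 * eps)
    return G

def fit_explainer(W, steps=10, lr=1.0, sparsity=0.05):
    """Inner loop: fit a GNNExplainer-style soft edge mask to the current model."""
    M = np.zeros_like(A)                          # mask logits
    sig = lambda m: 1.0 / (1.0 + np.exp(-m))
    expl_loss = lambda m: task_loss(W, sig(m)) + sparsity * sig(m).mean()
    for _ in range(steps):
        M -= lr * num_grad(expl_loss, M)
    return expl_loss(M)

W = 0.1 * rng.normal(size=(3, 2))
full = np.ones_like(A)                            # unmasked graph for the task
loss_before = task_loss(W, full)

# Outer loop (MATE-style sketch): descend the task loss plus the loss the
# freshly fitted explainer achieves, steering W toward parameters that an
# explainer can fit well -- the "interpretable minima" of the abstract.
for _ in range(40):
    grad_task = num_grad(lambda w: task_loss(w, full), W)
    grad_expl = num_grad(lambda w: fit_explainer(w), W)
    W -= 0.3 * (grad_task + 0.1 * grad_expl)

loss_after = task_loss(W, full)
print(loss_before, loss_after)
```

The key design point mirrored here is that the explainer is re-fitted from scratch inside the outer objective, so the model's weights are judged by how quickly and how well an explainer can account for their predictions, not only by task accuracy.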



