GraphFramEx: Towards Systematic Evaluation of Explainability Methods for Graph Neural Networks

06/20/2022
by Kenza Amara, et al.

As one of the most popular machine learning models today, graph neural networks (GNNs) have attracted intense interest recently, and so has their explainability. Users increasingly want to understand GNN models and their predictions. Unfortunately, today's evaluation frameworks for GNN explainability often rely on synthetic datasets, leading to conclusions of limited scope due to the lack of complexity in the problem instances. As GNN models are deployed in more mission-critical applications, we are in dire need of a common evaluation protocol for GNN explainability methods. In this paper, we propose, to the best of our knowledge, the first systematic evaluation framework for GNN explainability, considering explainability along three different "user needs": explanation focus, mask nature, and mask transformation. We propose a unique metric that combines the fidelity measures and classifies explanations by whether they are sufficient or necessary. We scope ourselves to node classification tasks and compare the most representative techniques in the field of input-level explainability for GNNs. On the widely used synthetic benchmarks, surprisingly, shallow techniques such as Personalized PageRank achieve the best performance at minimal computation cost. But when the graph structure is more complex and nodes carry meaningful features, gradient-based methods, in particular Saliency, perform best according to our evaluation criteria. However, no method dominates the others on all evaluation dimensions, and there is always a trade-off. We further apply our evaluation protocol in a case study on eBay graphs to reflect a production environment.
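The combined metric mentioned above is built from the two standard fidelity measures: fid+ captures necessity (the accuracy drop when the explanation is removed from the graph) and fid- captures sufficiency (the accuracy drop when the model sees only the explanation). Below is a minimal Python sketch of one plausible combination rule, a weighted harmonic mean of fid+ and (1 - fid-); the function names, weights, and example numbers are illustrative assumptions, not the paper's exact definitions.

import numpy as np

# Hypothetical inputs: boolean arrays over N nodes indicating whether the
# model's prediction is correct on the full graph, on the graph with the
# explanation removed (for fid+), and on the explanation alone (for fid-).

def fidelity(correct_full: np.ndarray, correct_reduced: np.ndarray) -> float:
    # Average drop in prediction correctness after altering the input graph.
    return float(np.mean(correct_full.astype(float) - correct_reduced.astype(float)))

def characterization_score(fid_plus: float, fid_minus: float,
                           w_plus: float = 0.5, w_minus: float = 0.5) -> float:
    # Assumed combination rule: weighted harmonic mean of fid+ and (1 - fid-).
    # It is large only when the explanation is both necessary (high fid+)
    # and sufficient (low fid-).
    return (w_plus + w_minus) / (w_plus / fid_plus + w_minus / (1.0 - fid_minus))

# Toy example: an explanation that is necessary (removing it breaks the
# predictions) and sufficient (keeping only it preserves the predictions).
rng = np.random.default_rng(0)
correct_full = np.ones(100, dtype=bool)          # model correct on the full graph
correct_without_expl = rng.random(100) < 0.3     # accuracy collapses -> high fid+
correct_expl_only = rng.random(100) < 0.95       # accuracy preserved -> low fid-

fid_plus = fidelity(correct_full, correct_without_expl)
fid_minus = fidelity(correct_full, correct_expl_only)
print(f"fid+={fid_plus:.2f}, fid-={fid_minus:.2f}, "
      f"charact={characterization_score(fid_plus, fid_minus):.2f}")

Under this rule, a single scalar rewards explanations that score well on both fidelity axes at once, which matches the sufficient/necessary classification the abstract describes.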

Related research

11/01/2021
Edge-Level Explanations for Graph Neural Networks by Extending Explainability Methods for Convolutional Neural Networks
Graph Neural Networks (GNNs) are deep learning models that take graph da...

06/16/2021
SEEN: Sharpening Explanations for Graph Neural Networks using Explanations from Neighborhoods
Explaining the foundations for predictions obtained from graph neural ne...

06/28/2022
BAGEL: A Benchmark for Assessing Graph Neural Network Explanations
The problem of interpreting the decisions of machine learning is a well-...

12/31/2020
Explainability in Graph Neural Networks: A Taxonomic Survey
Deep learning methods are achieving ever-increasing performance on many ...

08/30/2022
EchoGNN: Explainable Ejection Fraction Estimation with Graph Neural Networks
Ejection fraction (EF) is a key indicator of cardiac function, allowing ...

11/03/2022
Exploring Explainability Methods for Graph Neural Networks
With the growing use of deep learning methods, particularly graph neural...

09/20/2021
A Meta-Learning Approach for Training Explainable Graph Neural Networks
In this paper, we investigate the degree of explainability of graph neur...
