BAGEL: A Benchmark for Assessing Graph Neural Network Explanations

by Mandeep Rathee, et al.

The problem of interpreting the decisions of machine learning models is well researched and important. We are interested in a specific class of machine learning models that operates on graph data, called graph neural networks. Evaluating interpretability approaches for graph neural networks (GNNs) is known to be challenging due to the lack of a commonly accepted benchmark. Given a GNN model, several interpretability approaches exist to explain its predictions, with diverse (sometimes conflicting) evaluation methodologies. In this paper, we propose BAGEL, a benchmark for evaluating explainability approaches for GNNs. In BAGEL, we first propose four diverse GNN explanation evaluation regimes: 1) faithfulness, 2) sparsity, 3) correctness, and 4) plausibility. We reconcile multiple evaluation metrics from the existing literature and cover diverse notions for a holistic evaluation. Our graph datasets range from citation networks and document graphs to graphs of molecules and proteins. We conduct an extensive empirical study of four GNN models and nine post-hoc explanation approaches on node and graph classification tasks. We open-source both the benchmark and reference implementations and make them available at
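BAGEL's exact metric definitions live in the paper and reference implementation; as a rough illustration only, two of the named regimes could be sketched as follows, assuming an explanation is a soft importance mask over graph elements (nodes or edges) and assuming a hypothetical `predict(graph, keep=...)` callable that returns the model's predicted probability when only the elements flagged in `keep` are retained:

```python
import numpy as np

def sparsity(mask, threshold=0.5):
    """Fraction of graph elements the explanation leaves OUT.

    `mask` holds importance scores in [0, 1]; a higher sparsity
    score means a more compact explanation.
    """
    mask = np.asarray(mask, dtype=float)
    return 1.0 - float((mask >= threshold).mean())

def fidelity(predict, graph, mask, threshold=0.5):
    """Fidelity-style faithfulness: drop in the model's predicted
    probability when the elements the explanation marks as
    important are removed from the graph.

    `predict` is a hypothetical stand-in for a GNN forward pass
    that accepts a boolean `keep` mask over graph elements.
    """
    important = np.asarray(mask, dtype=float) >= threshold
    full = predict(graph, keep=np.ones_like(important, dtype=bool))
    ablated = predict(graph, keep=~important)
    return full - ablated
```

A high sparsity score combined with a large fidelity drop suggests a small explanation that the model genuinely relies on; the paper's actual metrics refine and extend these notions.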


