Exploring Explainability Methods for Graph Neural Networks

by Harsh Patel, et al.
IIT Gandhinagar

With the growing use of deep learning methods, particularly graph neural networks (GNNs), which encode intricate relational information, for a variety of real-world tasks, explainability becomes a necessity in such settings. In this paper, we demonstrate the applicability of popular explainability approaches to Graph Attention Networks (GAT) on a graph-based super-pixel image classification task. We assess the qualitative and quantitative performance of these techniques on three different datasets and describe our findings. The results shed fresh light on the notion of explainability in GNNs, particularly for GATs.
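One family of explainability approaches for GATs reads importance scores directly off the learned attention coefficients: the weight a node assigns to each neighbour indicates how much that neighbour contributed to its representation. As a minimal illustration (a hypothetical toy graph with random weights, not the authors' implementation), the following NumPy sketch computes the attention matrix of a single GAT layer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 4 nodes with self-loops (stand-in for a super-pixel graph)
adj = np.array([[1, 1, 1, 0],
                [1, 1, 0, 1],
                [1, 0, 1, 1],
                [0, 1, 1, 1]])
feats = rng.normal(size=(4, 5))   # node features, e.g. super-pixel statistics

W = rng.normal(size=(5, 8))       # shared linear transform
a = rng.normal(size=(2 * 8,))     # attention vector

def gat_attention(adj, feats, W, a, slope=0.2):
    """Attention matrix of one GAT layer (Velickovic et al., 2018)."""
    h = feats @ W
    n = h.shape[0]
    # e[i, j] = LeakyReLU(a^T [h_i || h_j]) for every edge (i, j)
    e = np.full((n, n), -np.inf)
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                e[i, j] = np.concatenate([h[i], h[j]]) @ a
    e = np.where(e > 0, e, slope * e)        # LeakyReLU; -inf stays -inf
    # Row-wise softmax over neighbours only (-inf entries contribute 0)
    e = e - e.max(axis=1, keepdims=True)
    exp = np.exp(e)
    return exp / exp.sum(axis=1, keepdims=True)

att = gat_attention(adj, feats, W, a)
# Each row sums to 1; a large att[i, j] marks neighbour j as important
# for node i, which is the signal attention-based explainers visualize.
print(att.round(3))
```

In a trained model the same coefficients can be overlaid on the super-pixel segmentation to highlight which image regions drove the prediction; gradient-based attribution methods provide a complementary view.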




Related papers

- A Survey on Explainability of Graph Neural Networks
- Explainability in Graph Neural Networks: A Taxonomic Survey
- Quantifying Explainers of Graph Neural Networks in Computational Pathology
- EchoGNN: Explainable Ejection Fraction Estimation with Graph Neural Networks
- Explainability Techniques for Graph Convolutional Networks
- GraphFramEx: Towards Systematic Evaluation of Explainability Methods for Graph Neural Networks
- Explainability-aided Domain Generalization for Image Classification
