Counterfactual Graphs for Explainable Classification of Brain Networks

06/16/2021
by Carlo Abrate, et al.

Training graph classifiers able to distinguish between healthy and dysfunctional brains can help identify substructures associated with specific cognitive phenotypes. However, the mere predictive power of the graph classifier is of limited interest to neuroscientists, who have plenty of tools for diagnosing specific mental disorders. What matters is the interpretation of the model, as it can provide novel insights and new hypotheses. In this paper we propose counterfactual graphs as a way to produce local post-hoc explanations of any black-box graph classifier. Given a graph and a black-box, a counterfactual is a graph which, while having high structural similarity to the original graph, is classified by the black-box into a different class. We propose and empirically compare several strategies for counterfactual graph search. Our experiments against a white-box classifier with a known optimal counterfactual show that our methods, although heuristic, can produce counterfactuals very close to the optimal one. Finally, we show how to use counterfactual graphs to build global explanations that correctly capture the behaviour of different black-box classifiers and provide interesting insights for neuroscientists.
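To make the search idea concrete, below is a minimal Python sketch of one plausible heuristic for finding a counterfactual graph against a black-box classifier. It is an illustration under stated assumptions, not the specific strategies evaluated in the paper: the `toy_black_box` (a density-threshold classifier), the `donor` graph taken from the opposite class, and the two-stage flip-then-prune procedure are all introduced here only for the example.

```python
import numpy as np


def toy_black_box(adj: np.ndarray) -> int:
    """Stand-in classifier (an assumption, not the paper's model):
    label 1 if edge density exceeds 0.3, else 0."""
    n = adj.shape[0]
    return int(adj[np.triu_indices(n, k=1)].mean() > 0.3)


def counterfactual_search(adj: np.ndarray, donor: np.ndarray, predict) -> np.ndarray:
    """Two-stage heuristic: (1) flip edges of `adj` toward `donor`, a graph that
    the black-box places in a different class, until the prediction changes;
    (2) undo as many flips as possible while preserving the new class, so the
    counterfactual stays structurally close to `adj`."""
    original = predict(adj)
    assert predict(donor) != original, "donor must be classified differently"
    current = adj.copy()
    n = adj.shape[0]
    # Edge slots where the two graphs disagree, i.e. the candidate flips.
    diff = [(i, j) for i in range(n) for j in range(i + 1, n)
            if adj[i, j] != donor[i, j]]
    applied = []
    # Stage 1: forward flips until the black-box changes its mind.
    for i, j in diff:
        current[i, j] = current[j, i] = donor[i, j]
        applied.append((i, j))
        if predict(current) != original:
            break
    # Stage 2: backward pruning - revert flips that are not actually needed.
    for i, j in applied:
        current[i, j] = current[j, i] = adj[i, j]
        if predict(current) == original:  # class flipped back: keep the edit
            current[i, j] = current[j, i] = donor[i, j]
    return current


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 20

    def random_graph(p: float) -> np.ndarray:
        a = (rng.random((n, n)) < p).astype(int)
        a = np.triu(a, k=1)
        return a + a.T  # symmetric adjacency matrix, no self-loops

    g = random_graph(0.5)       # dense graph  -> class 1 under the toy black-box
    donor = random_graph(0.1)   # sparse graph -> class 0
    cf = counterfactual_search(g, donor, toy_black_box)
    edits = int(np.abs(cf - g).sum() // 2)
    print("original:", toy_black_box(g),
          "counterfactual:", toy_black_box(cf),
          "edge edits:", edits)
```

The sketch reflects the general recipe for counterfactual search when only query access to the black-box is available: the forward stage guarantees a class change, while the backward pruning stage keeps the edit distance (the number of flipped edges) as small as the heuristic allows.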

Related research

Text-to-Image Models for Counterfactual Explanations: a Black-Box Approach (09/14/2023)
This paper addresses the challenge of generating Counterfactual Explanat...

Counterfactual Explanation of Brain Activity Classifiers using Image-to-Image Transfer by Generative Adversarial Network (10/28/2021)
Deep neural networks (DNNs) can accurately decode task-related informati...

Repairing without Retraining: Avoiding Disparate Impact with Counterfactual Distributions (01/29/2019)
When the average performance of a prediction model varies significantly ...

CheXplaining in Style: Counterfactual Explanations for Chest X-rays using StyleGAN (07/15/2022)
Deep learning models used in medical image analysis are prone to raising...

Explainable AI with counterfactual paths (07/15/2023)
Explainable AI (XAI) is an increasingly important area of research in ma...

Adapting to Change: Robust Counterfactual Explanations in Dynamic Data Landscapes (08/04/2023)
We introduce a novel semi-supervised Graph Counterfactual Explainer (GCE...

A Survey on Graph Counterfactual Explanations: Definitions, Methods, Evaluation (10/21/2022)
In recent years, Graph Neural Networks have reported outstanding perform...
