Using Causal Analysis for Conceptual Deep Learning Explanation

by Sumedha Singla, et al.

Model explainability is essential for creating trustworthy machine learning models in healthcare. An ideal explanation resembles the decision-making process of a domain expert and is expressed using concepts or terminology meaningful to clinicians. To provide such an explanation, we first associate the hidden units of the classifier with clinically relevant concepts. We take advantage of the radiology reports accompanying chest X-ray images to define concepts, and we discover sparse associations between concepts and hidden units using sparse linear logistic regression. To ensure that the identified units truly influence the classifier's outcome, we adopt tools from the causal inference literature, specifically mediation analysis through counterfactual interventions. Finally, we construct a low-depth decision tree that translates the discovered concepts into a straightforward decision rule, expressed to the radiologist. We evaluate our approach on a large chest X-ray dataset, where our model produces a global explanation consistent with clinical knowledge.
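The concept-association step the abstract describes can be illustrated with a minimal sketch: given per-image hidden-unit activations and binary concept labels (e.g., whether a report mentions a finding), an L1-penalized logistic regression selects a sparse subset of units predictive of the concept. This is not the authors' code; the data below is synthetic, and the variable names (`activations`, `concept_labels`) are illustrative assumptions.

```python
# Hedged sketch of concept-to-hidden-unit association via sparse
# (L1-regularized) logistic regression. Synthetic data for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_images, n_units = 200, 50

# Hypothetical hidden-unit activations, one row per chest X-ray image.
activations = rng.normal(size=(n_images, n_units))

# Synthetic concept labels driven by a few "true" units, standing in for
# a report-derived concept such as "cardiomegaly mentioned".
true_units = [3, 17, 42]
concept_labels = (activations[:, true_units].sum(axis=1) > 0).astype(int)

# The L1 penalty pushes most unit coefficients to exactly zero,
# leaving a sparse set of units associated with the concept.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(activations, concept_labels)

associated_units = np.flatnonzero(clf.coef_[0])
print("Units associated with the concept:", associated_units)
```

In the paper's pipeline, units selected this way would then be screened with counterfactual mediation analysis before being summarized in the final decision tree.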


Unsupervised Causal Binary Concepts Discovery with VAE for Black-box Model Explanation

We aim to explain a black-box classifier with the form: `data X is class...

Explaining the Black-box Smoothly - A Counterfactual Approach

We propose a BlackBox Counterfactual Explainer that is explicitly develo...

Classification of radiology reports by modality and anatomy: A comparative study

Data labeling is currently a time-consuming task that often requires exp...

DISSECT: Disentangled Simultaneous Explanations via Concept Traversals

Explaining deep learning model inferences is a promising venue for scien...

Feature Concepts for Data Federative Innovations

A feature concept, the essence of the data-federative innovation process...

Automated Cardiothoracic Ratio Calculation and Cardiomegaly Detection using Deep Learning Approach

We propose an algorithm for calculating the cardiothoracic ratio (CTR) f...

Inducing Semantic Grouping of Latent Concepts for Explanations: An Ante-Hoc Approach

Self-explainable deep models are devised to represent the hidden concept...
