
Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties

03/16/2021
by Lisa Schut, et al.

Counterfactual explanations (CEs) are a practical tool for demonstrating why machine learning classifiers make particular decisions. For CEs to be useful, it is important that they are easy for users to interpret. Existing methods for generating interpretable CEs rely on auxiliary generative models, which may not be suitable for complex datasets, and incur engineering overhead. We introduce a simple and fast method for generating interpretable CEs in a white-box setting without an auxiliary model, by using the predictive uncertainty of the classifier. Our experiments show that our proposed algorithm generates more interpretable CEs, according to IM1 scores, than existing methods. Additionally, our approach allows us to estimate the uncertainty of a CE, which may be important in safety-critical applications, such as those in the medical domain.
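To make the idea concrete, here is a minimal sketch of the core mechanism the abstract describes: searching for a counterfactual by gradient descent on the input, using an ensemble's averaged loss for the target class so that the search is implicitly drawn toward inputs the ensemble agrees on confidently (low epistemic and aleatoric uncertainty). The linear "ensemble members", the `generate_ce` function, and all hyperparameters are illustrative stand-ins, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def generate_ce(x, ensemble, target=1, lr=0.5, steps=200):
    """Hypothetical sketch: push the input toward the target class by
    descending the cross-entropy averaged over an ensemble. Averaging
    over members implicitly penalises high-uncertainty regions, since
    the members only agree confidently near the training manifold."""
    x = x.astype(float).copy()
    for _ in range(steps):
        grad = np.zeros_like(x)
        for w, b in ensemble:
            p = sigmoid(x @ w + b)        # member's P(class 1 | x)
            grad += (p - target) * w      # d(cross-entropy)/dx for a linear member
        x -= lr * grad / len(ensemble)
    return x

# Toy ensemble of linear classifiers (stand-ins for trained networks)
rng = np.random.default_rng(0)
ensemble = [(np.array([1.0, 1.0]) + 0.1 * rng.normal(size=2), 0.0)
            for _ in range(5)]

x0 = np.array([-2.0, -1.0])               # initially classified as class 0
ce = generate_ce(x0, ensemble, target=1)
mean_p = np.mean([sigmoid(ce @ w + b) for w, b in ensemble])
```

After the loop, `ce` is an input the whole ensemble classifies as the target class with high mean probability; the spread of member predictions at `ce` can serve as the uncertainty estimate the abstract mentions.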



Code Repositories

explanations-by-minimizing-uncertainty: code for "Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties".