
Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties

by Lisa Schut et al.

Counterfactual explanations (CEs) are a practical tool for demonstrating why machine learning classifiers make particular decisions. For CEs to be useful, it is important that they are easy for users to interpret. Existing methods for generating interpretable CEs rely on auxiliary generative models, which may not be suitable for complex datasets, and incur engineering overhead. We introduce a simple and fast method for generating interpretable CEs in a white-box setting without an auxiliary model, by using the predictive uncertainty of the classifier. Our experiments show that our proposed algorithm generates more interpretable CEs, according to IM1 scores, than existing methods. Additionally, our approach allows us to estimate the uncertainty of a CE, which may be important in safety-critical applications, such as those in the medical domain.
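The core idea, searching for a counterfactual by gradient descent on the input so that every member of an ensemble confidently predicts the target class, can be sketched in a few lines. The sketch below is a toy illustration, not the paper's implementation: it uses a hypothetical `generate_counterfactual` helper and an ensemble of logistic-regression classifiers as a stand-in for the neural-network ensemble, and assumes that driving all members toward a confident target prediction implicitly reduces both epistemic (member disagreement) and aleatoric (per-member entropy) uncertainty.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def generate_counterfactual(x, ensemble, target=1, lr=0.5, steps=200):
    """Toy gradient-based counterfactual search over an ensemble.

    `ensemble` is a list of (w, b) logistic-regression parameters.
    The input is updated to minimise the cross-entropy to `target`
    averaged over the ensemble members.
    """
    x = x.astype(float).copy()
    for _ in range(steps):
        grad = np.zeros_like(x)
        for w, b in ensemble:
            p = sigmoid(w @ x + b)      # member's P(class 1 | x)
            grad += (p - target) * w    # d(cross-entropy)/dx for this member
        x -= lr * grad / len(ensemble)  # average gradient step
    return x
```

For example, starting from an input that all members classify as class 0, the loop moves it along the averaged gradient until the ensemble's mean probability of the target class is high; the paper's algorithm operates in the same white-box spirit but on neural-network classifiers with additional constraints for sparsity and interpretability.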





Code Repositories


Code for "Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties"
