Learning to Counter: Stochastic Feature-based Learning for Diverse Counterfactual Explanations

09/27/2022
by Vy Vo, et al.

Interpretable machine learning seeks to understand the reasoning process of complex black-box systems that have long been notorious for their lack of explainability. One growing approach to interpretation is counterfactual explanation, which goes beyond explaining why a system arrives at a certain decision to also suggest what a user can do to alter the outcome. A counterfactual example must counter the original prediction from the black-box classifier while satisfying various constraints for practical applications. These constraints trade off against one another, presenting significant challenges to existing works. To this end, we propose a stochastic learning-based framework that effectively balances the counterfactual trade-offs. The framework consists of a generation module and a feature selection module with complementary roles: the former aims to model the distribution of valid counterfactuals, whereas the latter serves to enforce additional constraints in a way that allows for differentiable training and amortized optimization. We demonstrate the effectiveness of our method in generating counterfactuals that are actionable, plausible, and more diverse than those of existing methods, and notably more efficient than counterparts of the same capacity.
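To make the two-module design concrete, the following is a minimal, hypothetical sketch in PyTorch of how a generation module and a stochastic, differentiable feature-selection module could be trained jointly against a frozen black-box classifier. The class names, the relaxed-Bernoulli mask, and the loss terms are illustrative assumptions for this sketch, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's released code) of the two-module idea:
# a generator proposes counterfactual feature values, and a stochastic
# feature-selection module samples a differentiable (relaxed-Bernoulli) mask
# deciding which features to change. All names and hyperparameters are illustrative.
import torch
import torch.nn as nn

class CounterfactualGenerator(nn.Module):
    """Generation module: maps an input x to proposed counterfactual feature values."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, x):
        return x + self.net(x)  # proposed counterfactual values for every feature

class StochasticFeatureSelector(nn.Module):
    """Feature-selection module: samples a relaxed binary mask over features."""
    def __init__(self, dim, temperature=0.5):
        super().__init__()
        self.logits_net = nn.Linear(dim, dim)
        self.temperature = torch.tensor(temperature)

    def forward(self, x):
        logits = self.logits_net(x)
        # A relaxed Bernoulli (binary Concrete) keeps the selection step differentiable.
        dist = torch.distributions.RelaxedBernoulli(self.temperature, logits=logits)
        return dist.rsample()

def counterfactual_loss(classifier, x, x_cf, mask, target, lam_sparse=0.1, lam_prox=0.1):
    """Validity (push prediction to the target class) + mask sparsity + proximity to x."""
    x_mixed = mask * x_cf + (1.0 - mask) * x           # change only the selected features
    validity = nn.functional.cross_entropy(classifier(x_mixed), target)
    sparsity = mask.abs().mean()                        # encourage few changed features
    proximity = (mask * (x_cf - x)).abs().mean()        # keep the changes small
    return validity + lam_sparse * sparsity + lam_prox * proximity

# Usage sketch: amortized training over a batch, assuming a frozen black-box
# `classifier` with a 2-class output and a desired target class of 1.
dim = 10
classifier = nn.Linear(dim, 2)             # stand-in for the black-box model
gen, sel = CounterfactualGenerator(dim), StochasticFeatureSelector(dim)
opt = torch.optim.Adam(list(gen.parameters()) + list(sel.parameters()), lr=1e-3)

x = torch.randn(32, dim)
target = torch.ones(32, dtype=torch.long)  # desired counterfactual class
for _ in range(5):
    opt.zero_grad()
    loss = counterfactual_loss(classifier, x, gen(x), sel(x), target)
    loss.backward()
    opt.step()
```

The relaxed-Bernoulli mask is one common way to keep a discrete feature-selection step differentiable, so both modules can be trained end to end and reused across inputs in an amortized fashion; sampling different masks at inference time then yields diverse counterfactuals.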


