Calculating and Visualizing Counterfactual Feature Importance Values

by Bjorge Meulemeester, et al.

Despite the success of complex machine learning algorithms, largely justified by their outstanding performance on prediction tasks, their inherently opaque nature still challenges their responsible application. Counterfactual explanations have emerged as one potential solution for explaining individual decision results. However, two major drawbacks directly impact their usability: (1) the isonomic view of feature changes, in which it is not possible to observe how much each modified feature influences the prediction, and (2) the lack of graphical resources for visualizing the counterfactual explanation. We introduce Counterfactual Feature (change) Importance (CFI) values as a solution: a way of assigning an importance value to each feature change in a given counterfactual explanation. To calculate these values, we propose two CFI methods. One is simple, fast, and greedy in nature. The other, coined CounterShapley, provides a way to calculate Shapley values over the factual-counterfactual pair. Using these importance values, we additionally introduce three chart types for visualizing counterfactual explanations: (a) the Greedy chart, which shows a greedy sequential path of prediction-score increases up to the predicted-class change; (b) the CounterShapley chart, which depicts each feature's CounterShapley value in a simple, one-dimensional chart; and (c) the Constellation chart, which shows all possible combinations of feature changes and their impact on the model's prediction score. For each of the proposed CFI methods and visualization schemes, we show how they provide more information about counterfactual explanations. Finally, we offer an open-source implementation compatible with any counterfactual explanation generator algorithm. Code repository at:
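The CounterShapley idea described above can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a hypothetical `predict` function mapping a feature dict to a prediction score, and it computes, for each changed feature, the classic Shapley weighting over all coalitions of the other changed features, where "playing" a feature means applying its counterfactual value to the factual instance:

```python
from itertools import combinations
from math import factorial

def countershapley_values(predict, factual, counterfactual):
    """Assign an importance value to each feature change in a
    counterfactual explanation via Shapley values.

    `predict`: callable mapping a feature dict to a prediction score
    (hypothetical interface, for illustration only).
    `factual`, `counterfactual`: dicts over the same feature names.
    Only features that actually change receive a value.
    """
    changed = [k for k in factual if factual[k] != counterfactual[k]]
    n = len(changed)
    values = {}
    for f in changed:
        others = [k for k in changed if k != f]
        phi = 0.0
        for r in range(n):
            for subset in combinations(others, r):
                # Apply this coalition of feature changes to the factual.
                base = dict(factual)
                for k in subset:
                    base[k] = counterfactual[k]
                with_f = dict(base)
                with_f[f] = counterfactual[f]
                # Standard Shapley coalition weight |S|!(n-|S|-1)!/n!
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                phi += weight * (predict(with_f) - predict(base))
        values[f] = phi
    return values
```

By the efficiency property of Shapley values, the per-feature importances sum to the total score change between the factual and counterfactual instances, which is what makes them suitable for a one-dimensional chart of each change's contribution. Note the exact computation is exponential in the number of changed features, which is typically small for sparse counterfactuals.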

