Causal Shapley Values: Exploiting Causal Knowledge to Explain Individual Predictions of Complex Models

11/03/2020
by Tom Heskes, et al.

Shapley values underlie one of the most popular model-agnostic methods within explainable artificial intelligence. These values are designed to attribute the difference between a model's prediction and an average baseline to the different features used as input to the model. Being based on solid game-theoretic principles, Shapley values uniquely satisfy several desirable properties, which is why they are increasingly used to explain the predictions of possibly complex and highly non-linear machine learning models. Shapley values are well calibrated to a user's intuition when features are independent, but may lead to undesirable, counterintuitive explanations when the independence assumption is violated. In this paper, we propose a novel framework for computing Shapley values that generalizes recent work that aims to circumvent the independence assumption. By employing Pearl's do-calculus, we show how these 'causal' Shapley values can be derived for general causal graphs without sacrificing any of their desirable properties. Moreover, causal Shapley values enable us to separate the contribution of direct and indirect effects. We provide a practical implementation for computing causal Shapley values based on causal chain graphs when only partial information is available and illustrate their utility on a real-world example.
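The abstract does not reproduce the underlying formula, but as a rough sketch of the idea it describes: the Shapley value of feature i distributes the gap between the model's prediction and the average baseline over coalitions S of features, and the causal variant replaces the usual value function with an interventional one defined through Pearl's do-operator. In LaTeX, with f the model, N the set of features, and x the instance to be explained (notation assumed here, not taken verbatim from the abstract):

\phi_i \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,\bigl[ v(S \cup \{i\}) - v(S) \bigr],
\qquad
v(S) \;=\; \mathbb{E}\bigl[ f(X) \mid \mathrm{do}(X_S = x_S) \bigr].

When the features are independent, this interventional value function reduces to the marginal one, \mathbb{E}\bigl[ f(x_S, X_{N \setminus S}) \bigr], which is consistent with the abstract's remark that counterintuitive explanations only arise once the independence assumption is violated.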


