iBreakDown: Uncertainty of Model Explanations for Non-additive Predictive Models

03/27/2019
by Alicja Gosiewska, et al.

Explainable Artificial Intelligence (XAI) has attracted a lot of attention recently. Explainability is presented as a remedy for the lack of trust in model predictions. Model-agnostic tools such as LIME, SHAP, or Break Down promise instance-level interpretability for any complex machine learning model. But how certain are these explanations? Can we rely on additive explanations for non-additive models? In this paper, we examine the behavior of model explainers in the presence of interactions. We define two sources of uncertainty: model-level uncertainty and explanation-level uncertainty. We show that including interactions reduces explanation-level uncertainty. We introduce a new method, iBreakDown, that generates non-additive explanations with local interactions.
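
To make the idea concrete, below is a minimal sketch (not the authors' implementation) of a Break Down-style additive attribution together with a simple pairwise non-additivity check in the spirit of iBreakDown. The names `predict`, `X_background`, and `x_star` are illustrative assumptions; contributions are estimated by averaging model predictions over a background sample while fixing features, one at a time, to the values of the explained instance.

```python
# Minimal sketch of Break Down-style attributions and a pairwise interaction
# check. This is an illustration under simplified assumptions, not the
# iBreakDown package itself; `predict`, `X_background`, `x_star` are hypothetical names.
import numpy as np


def expected_prediction(predict, X_background, x_star, fixed):
    """Average prediction when the features in `fixed` are set to x_star's values."""
    X = X_background.copy()
    for j in fixed:
        X[:, j] = x_star[j]
    return predict(X).mean()


def break_down(predict, X_background, x_star, order):
    """Additive contributions: gains from fixing features one by one in `order`."""
    contributions = {}
    fixed = []
    prev = expected_prediction(predict, X_background, x_star, fixed)
    for j in order:
        fixed.append(j)
        cur = expected_prediction(predict, X_background, x_star, fixed)
        contributions[j] = cur - prev
        prev = cur
    return contributions


def interaction_effect(predict, X_background, x_star, i, j):
    """Deviation from additivity for the pair (i, j); non-zero suggests an interaction."""
    e0 = expected_prediction(predict, X_background, x_star, [])
    ei = expected_prediction(predict, X_background, x_star, [i])
    ej = expected_prediction(predict, X_background, x_star, [j])
    eij = expected_prediction(predict, X_background, x_star, [i, j])
    return eij - ei - ej + e0


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 3))
    # Toy non-additive model with an explicit x0 * x1 interaction term.
    predict = lambda X: X[:, 0] * X[:, 1] + X[:, 2]
    x_star = np.array([1.0, 2.0, -1.0])
    print(break_down(predict, X, x_star, order=[0, 1, 2]))
    print(interaction_effect(predict, X, x_star, 0, 1))
```

For this toy model the pairwise check returns a clearly non-zero value for the (x0, x1) pair, while for a purely additive model it stays near zero up to sampling noise; explanation methods that report only per-feature additive contributions cannot express this part of the prediction.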
