How Much Should I Trust You? Modeling Uncertainty of Black Box Explanations

08/11/2020
by Dylan Slack, et al.

As local explanations of black box models are increasingly employed to establish model credibility in high-stakes settings, it is important to ensure that these explanations are accurate and reliable. However, local explanations generated by existing techniques are often prone to high variance. Further, these techniques are computationally inefficient, require significant hyper-parameter tuning, and provide little insight into the quality of the resulting explanations. Identifying the lack of uncertainty modeling as the root cause of these challenges, we propose a novel Bayesian framework that produces explanations going beyond point-wise estimates of feature importance. We instantiate this framework to generate Bayesian versions of LIME and KernelSHAP. In particular, we estimate credible intervals (CIs) that capture the uncertainty associated with each feature importance in a local explanation. These intervals are tight when we have high confidence in the feature importances of a local explanation. The CIs are also informative for deciding both how many perturbations to sample – sampling can proceed until the CIs are sufficiently narrow – and where to sample – sampling in regions of high predictive uncertainty leads to faster convergence. Experimental evaluation with multiple real-world datasets and user studies demonstrates the efficacy of our framework and the resulting explanations.
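The central idea, a posterior over local feature importances rather than a point estimate, can be sketched with a conjugate Bayesian linear surrogate fit to perturbations around an instance. This is an illustrative sketch, not the paper's implementation: the toy black-box model, the Gaussian perturbation scale, and the prior precision `alpha` below are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in black-box model (hypothetical): a logistic score.
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - 1.0 * X[:, 1])))

def bayesian_local_explanation(x, n_samples=500, scale=0.5, alpha=1.0):
    """Fit a Bayesian linear surrogate around instance x and return
    posterior means plus 95% credible intervals for each coefficient."""
    d = x.shape[0]
    # Sample perturbations around the instance and query the black box.
    X = x + rng.normal(scale=scale, size=(n_samples, d))
    y = black_box(X)
    # Design matrix: intercept plus local offsets from x.
    Phi = np.hstack([np.ones((n_samples, 1)), X - x])
    # Plug-in noise variance estimate from OLS residuals.
    w_ols, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    sigma2 = np.mean((y - Phi @ w_ols) ** 2) + 1e-8
    # Conjugate posterior under a N(0, alpha^-1 I) prior on the weights.
    S = np.linalg.inv(alpha * np.eye(d + 1) + Phi.T @ Phi / sigma2)
    mean = S @ Phi.T @ y / sigma2
    std = np.sqrt(np.diag(S))
    lo, hi = mean - 1.96 * std, mean + 1.96 * std
    # Drop the intercept; return per-feature importances and CIs.
    return mean[1:], lo[1:], hi[1:]

mean, lo, hi = bayesian_local_explanation(np.array([0.5, 0.5]))
```

In this framing, the paper's adaptive-sampling idea corresponds to drawing perturbations until the interval widths `hi - lo` fall below a chosen tolerance, at which point the explanation is reported with its uncertainty.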


Related research

- 10/05/2022 · Explanation Uncertainty with Decision Boundary Awareness
  Post-hoc explanation methods have become increasingly depended upon for ...
- 01/13/2023 · Local Model Explanations and Uncertainty Without Model Access
  We present a model-agnostic algorithm for generating post-hoc explanatio...
- 04/29/2023 · EBLIME: Enhanced Bayesian Local Interpretable Model-agnostic Explanations
  We propose EBLIME to explain black-box machine learning models and obtai...
- 08/23/2023 · Approximating Score-based Explanation Techniques Using Conformal Regression
  Score-based explainable machine-learning techniques are often used to un...
- 06/16/2020 · High Dimensional Model Explanations: an Axiomatic Approach
  Complex black-box machine learning models are regularly used in critical...
- 12/02/2019 · EMAP: Explanation by Minimal Adversarial Perturbation
  Modern instance-based model-agnostic explanation methods (LIME, SHAP, L2...
- 07/20/2021 · Uncertainty Estimation and Out-of-Distribution Detection for Counterfactual Explanations: Pitfalls and Solutions
  Whilst an abundance of techniques have recently been proposed to generat...
