Uncertainty Quantification of Surrogate Explanations: an Ordinal Consensus Approach

11/17/2021
by Jonas Schulz, et al.

Explainability of black-box machine learning models is crucial, in particular when they are deployed in critical applications such as medicine or autonomous cars. Existing approaches produce explanations for the predictions of models; however, how to assess the quality and reliability of such explanations remains an open question. In this paper we go a step further and provide the practitioner with tools to judge the trustworthiness of an explanation. To this end, we estimate the uncertainty of a given explanation by measuring the ordinal consensus amongst a set of diverse bootstrapped surrogate explainers. We encourage diversity through ensemble techniques, and propose and analyse metrics that aggregate the information contained within the set of explainers through a rating scheme. We empirically illustrate the properties of this approach through experiments on state-of-the-art Convolutional Neural Network ensembles. Furthermore, through tailored visualisations, we show specific examples of situations where uncertainty estimates offer concrete actionable insights to the user beyond those arising from standard surrogate explainers.
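The core idea of the abstract can be sketched as follows: fit many surrogate explainers on bootstrap resamples of local perturbations, convert each surrogate's feature importances into an ordinal ranking, and score how strongly the rankings agree. This is a minimal sketch, not the paper's implementation: it assumes LIME-style linear surrogates, a toy black-box function, and Kendall's coefficient of concordance (W) as the consensus metric; the paper's specific rating scheme and explainers may differ.

```python
import numpy as np

def kendalls_w(rank_matrix):
    """Kendall's coefficient of concordance for m raters ranking n items.

    rank_matrix: (m, n) array, each row a permutation of the ranks 1..n.
    Returns W in [0, 1]; 1 means perfect agreement between raters.
    """
    m, n = rank_matrix.shape
    totals = rank_matrix.sum(axis=0)            # column rank sums
    s = ((totals - totals.mean()) ** 2).sum()   # spread of the rank sums
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

rng = np.random.default_rng(0)

# Toy black box: a nonlinear function of 4 input features.
def black_box(X):
    return np.tanh(2.0 * X[:, 0] - X[:, 1]) + 0.1 * X[:, 2]

x0 = np.array([0.2, -0.1, 0.5, 0.0])  # instance to explain

# Local perturbations around x0, labelled by the black box.
Z = x0 + 0.3 * rng.normal(size=(500, 4))
y = black_box(Z)

# Bootstrapped linear surrogates: resample the perturbations, refit,
# and rank features by absolute coefficient magnitude (rank 1 = most important).
ranks = []
for _ in range(25):
    idx = rng.integers(0, len(Z), len(Z))
    A = np.column_stack([Z[idx], np.ones(len(idx))])   # design matrix + intercept
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    importance = np.abs(coef[:-1])
    ranks.append(np.argsort(np.argsort(-importance)) + 1)

W = kendalls_w(np.array(ranks))
print(f"Kendall's W across bootstrapped surrogates: {W:.3f}")
```

A W close to 1 indicates that the bootstrapped explainers agree on the feature ordering (low explanation uncertainty), while a low W signals that the explanation's feature ranking is unstable and should be treated with caution.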



Related research

- 05/04/2020: LIMEtree: Interactively Customisable Explanations Based on Local Surrogate Multi-output Regression Trees
- 06/10/2021: On the overlooked issue of defining explanation objectives for local-surrogate explainers
- 02/22/2021: Explainers in the Wild: Making Surrogate Explainers Robust to Distortions through Perception
- 10/05/2022: Explanation Uncertainty with Decision Boundary Awareness
- 10/30/2022: A view on model misspecification in uncertainty quantification
- 04/13/2021: δ-CLUE: Diverse Sets of Explanations for Uncertainty Estimates
- 08/02/2022: s-LIME: Reconciling Locality and Fidelity in Linear Explanations
