A Human-Grounded Evaluation of SHAP for Alert Processing

by Hilde J. P. Weerts, et al.

In recent years, many new explanation methods have been proposed to make machine learning predictions interpretable. However, the utility of these methods in practical applications has not been studied extensively. In this paper we present the results of a human-grounded evaluation of SHAP, an explanation method that has been well received in the XAI and related communities. In particular, we study whether this local, model-agnostic explanation method can help human domain experts assess the correctness of positive predictions, i.e. alerts generated by a classifier. We ran experiments with three groups of participants (159 in total), all of whom had basic knowledge of explainable machine learning. We performed a qualitative analysis of recorded reflections of participants processing alerts with and without SHAP information. The results suggest that SHAP explanations do influence the decision-making process, although the model's confidence score remains the leading source of evidence. We statistically tested whether task utility metrics differ between tasks for which an explanation was available and tasks for which it was not. Contrary to common intuition, we found no significant difference in alert processing performance when a SHAP explanation was available compared to when it was not.
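For context, SHAP attributes a model's individual prediction additively across input features using Shapley values from cooperative game theory. The following is a minimal, self-contained sketch of exact Shapley attributions over all feature coalitions; the alert-scoring model, feature values, and baseline below are hypothetical illustrations, not taken from the paper (in practice one would use the `shap` library rather than this brute-force computation):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for the prediction f(x).

    'Absent' features are replaced by a baseline value, a common
    convention for SHAP-style explanations. Exponential in the number
    of features, so only suitable for small illustrative examples.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Shapley kernel weight for a coalition of this size.
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Hypothetical alert-scoring model: a simple linear score over 3 features.
def alert_score(v):
    return 0.5 * v[0] + 2.0 * v[1] - 1.0 * v[2]

x = [4.0, 1.0, 2.0]          # the alert being explained
baseline = [0.0, 0.0, 0.0]   # reference input representing "feature absent"
phi = shapley_values(alert_score, x, baseline)
# Efficiency property: attributions sum to f(x) - f(baseline).
```

For a linear model this recovers the intuitive attribution `w_i * (x_i - baseline_i)` per feature; the value of SHAP is that the same game-theoretic definition applies to arbitrary black-box classifiers.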




