Use-Case-Grounded Simulations for Explanation Evaluation

06/05/2022
by Valerie Chen, et al.

A growing body of research runs human subject evaluations to study whether providing users with explanations of machine learning models can help them with practical real-world use cases. However, running user studies is challenging and costly, so each study typically evaluates only a limited number of settings; for example, studies often evaluate only a few arbitrarily selected explanation methods. To address these challenges and aid user study design, we introduce Use-Case-Grounded Simulated Evaluations (SimEvals). A SimEval trains an algorithmic agent that takes as input the information content (such as model explanations) that would be presented to each participant in a human subject study and predicts answers to the use case of interest. The algorithmic agent's test set accuracy provides a measure of how predictive the information content is for the downstream use case. We run a comprehensive evaluation on three real-world use cases (forward simulation, model debugging, and counterfactual reasoning) and demonstrate that SimEvals can effectively identify which explanation methods will help humans with each use case. These results provide evidence that SimEvals can be used to efficiently screen an important set of user study design decisions, e.g., selecting which explanations should be presented to the user, before running a potentially costly user study.
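The abstract's description of a SimEval (train an algorithmic agent on the information content a participant would see, then use its held-out accuracy to screen explanation methods) can be sketched in a few lines. The snippet below is a minimal illustration under assumed settings: a tabular task, a logistic-regression model to be explained, a simple linear-attribution "explanation", and forward simulation as the use case. The helper `information_content` and the specific models are illustrative choices, not the paper's implementation.

```python
# Minimal SimEvals-style sketch (assumptions: tabular data, a scikit-learn
# model to be explained, and a toy feature-attribution explanation; the
# helper names below are illustrative, not from the paper's code release).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 1. Train the model whose explanations we want to evaluate.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# 2. Build the agent's dataset: each row is the "information content" a study
#    participant would see (here: the input plus a simple attribution),
#    labeled with the use-case answer (here: forward simulation, i.e.
#    predicting the model's output on that input).
def information_content(X, model):
    # Illustrative explanation: per-feature contribution x_j * w_j for a
    # linear model; a real SimEval would plug in SHAP, LIME, etc.
    attributions = X * model.coef_[0]
    return np.hstack([X, attributions])

Z_tr, Z_te = information_content(X_tr, model), information_content(X_te, model)
use_case_tr = model.predict(X_tr)   # forward-simulation labels
use_case_te = model.predict(X_te)

# 3. Train an algorithmic agent on the information content and measure its
#    test accuracy: a proxy for how predictive that content is of the
#    downstream use case, used to screen explanations before a user study.
agent = GradientBoostingClassifier(random_state=0).fit(Z_tr, use_case_tr)
print("SimEval agent test accuracy:", agent.score(Z_te, use_case_te))
```

Comparing the agent's test accuracy across candidate explanation methods (e.g., swapping the toy attribution above for another method's output) yields the screening signal described in the abstract.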
