Sequential Explanations with Mental Model-Based Policies

07/17/2020
by   Arnold YS Yeung, et al.

The act of explaining across two parties is a feedback loop: one party indicates what needs to be explained, and the other provides an explanation relevant to that need. We apply a reinforcement learning framework that emulates this feedback loop by providing explanations based on the explainee's current mental model. We conduct novel online human experiments in which explanations generated by various explanation methods are selected and presented to participants, using policies that observe participants' mental models, in order to optimize an interpretability proxy. Our results suggest that mental model-based policies (anchored in our proposed state representation) may increase interpretability over multiple sequential explanations compared to a random-selection baseline. This work provides insight into how to select explanations that increase relevant information for users, and into conducting human-grounded experimentation to understand interpretability.
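The selection loop described above can be sketched as a simple tabular policy whose state is a discretized observation of the explainee's mental model and whose actions are candidate explanation methods. This is a minimal illustrative sketch, not the paper's implementation: the method names, state encoding, and Q-learning update are all assumptions made for the example.

```python
import random
from collections import defaultdict

# Hypothetical candidate explanation methods (illustrative, not from the paper).
EXPLANATION_METHODS = ["feature_importance", "counterfactual", "example_based"]


class ExplanationPolicy:
    """Sketch of a mental model-based policy: state = discretized mental-model
    observation, action = which explanation method to present next."""

    def __init__(self, epsilon=0.1, alpha=0.5, gamma=0.9):
        self.q = defaultdict(float)  # (state, action) -> estimated value
        self.epsilon, self.alpha, self.gamma = epsilon, alpha, gamma

    def select(self, state):
        # Epsilon-greedy choice over explanation methods.
        if random.random() < self.epsilon:
            return random.choice(EXPLANATION_METHODS)
        return max(EXPLANATION_METHODS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update; the reward stands in for an
        # interpretability proxy (e.g., the participant's prediction
        # accuracy after seeing the explanation).
        best_next = max(self.q[(next_state, a)] for a in EXPLANATION_METHODS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])


# One simulated explainer-explainee interaction step.
policy = ExplanationPolicy()
state = ("low_understanding",)  # assumed discretized mental-model observation
action = policy.select(state)
policy.update(state, action, reward=1.0, next_state=("medium_understanding",))
```

In a sequential experiment, `select` and `update` would run once per presented explanation, so the policy adapts as the participant's observed mental model changes.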

Related research

- 07/09/2018: Supervised Local Modeling for Interpretability
- 11/14/2022: (When) Are Contrastive Explanations of Reinforcement Learning Helpful?
- 06/02/2021: Towards an Explanation Space to Align Humans and Explainable-AI Teamwork
- 07/07/2019: A Human-Grounded Evaluation of SHAP for Alert Processing
- 05/21/2019: Look Who's Talking Now: Implications of AV's Explanations on Driver's Trust, AV Preference, Anxiety and Mental Workload
- 06/10/2021: Brittle AI, Causal Confusion, and Bad Mental Models: Challenges and Successes in the XAI Program
- 01/31/2022: Won't you see my neighbor?: User predictions, mental models, and similarity-based explanations of AI classifiers
