Diagnosing AI Explanation Methods with Folk Concepts of Behavior

01/27/2022
by Alon Jacovi, et al.

When explaining AI behavior to humans, how is the communicated information comprehended by the human explainee, and does it match what the explanation attempted to communicate? When can we say that an explanation is actually explaining something? We aim to provide an answer by leveraging theory-of-mind literature on the folk concepts that humans use to understand behavior. We establish a framework of social attribution by the human explainee, which describes the function of explanations: the concrete information that humans comprehend from them. Specifically, effective explanations should be coherent (communicating information that generalizes to other contrast cases), complete (communicating an explicit contrast case, objective causes, and subjective causes), and interactive (surfacing and resolving contradictions to the generalization property through iterations). We demonstrate that many XAI mechanisms can be mapped to folk concepts of behavior. This mapping allows us to uncover the failure modes that prevent current methods from explaining effectively, and to identify what is necessary to enable coherent explanations.

