In Search of Verifiability: Explanations Rarely Enable Complementary Performance in AI-Advised Decision Making

05/12/2023
by   Raymond Fok, et al.

The current literature on AI-advised decision making – involving explainable AI systems advising human decision makers – presents a series of inconclusive and confounding results. To synthesize these findings, we propose a simple theory that elucidates the frequent failure of AI explanations to engender appropriate reliance and complementary decision-making performance. We argue that explanations are useful only to the extent that they allow a human decision maker to verify the correctness of an AI's prediction, in contrast to other desiderata, e.g., interpretability or spelling out the AI's reasoning process. Prior studies find that, in many decision-making contexts, AI explanations do not facilitate such verification. Moreover, most contexts fundamentally do not allow verification, regardless of explanation method. We conclude with a discussion of potential approaches for more effective explainable AI-advised decision making and human-AI collaboration.
