What Else Do I Need to Know? The Effect of Background Information on Users' Reliance on AI Systems

by Navita Goyal, et al.

AI systems have shown impressive performance at answering questions by retrieving relevant context. However, with increasingly large models, it is impossible, and often undesirable, to constrain a model's knowledge or reasoning to only the retrieved context. This leads to a mismatch between the information these models access to derive an answer and the information available to the user for assessing the AI-predicted answer. In this work, we study how users interact with AI systems in the absence of sufficient information to assess AI predictions. Further, we ask whether adding the requisite background alleviates concerns about over-reliance on AI predictions. Our study reveals that users rely on AI predictions even without sufficient information to assess their correctness. Providing the relevant background, however, helps users catch AI errors, reducing over-reliance on incorrect AI predictions. On the flip side, background information also increases users' confidence in both their correct and their incorrect judgments. Contrary to common expectation, aiding users' perusal of the context and background through highlights does not alleviate the over-confidence stemming from the availability of more information. Our work highlights the gap between how NLP developers perceive informational needs in human-AI interaction and how humans actually interact with the information available to them.


