Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making

by Yunfeng Zhang, et al.

Today, AI is increasingly used to help human experts make decisions in high-stakes scenarios. In these scenarios, full automation is often undesirable, not only because of the significance of the outcome, but also because human experts can draw on domain knowledge complementary to the model's to ensure task success. We refer to these scenarios as AI-assisted decision making, in which the individual strengths of the human and the AI come together to optimize the joint decision outcome. A key to their success is to appropriately calibrate human trust in the AI on a case-by-case basis; knowing when to trust or distrust the AI allows the human expert to apply their knowledge where it matters, improving decision outcomes in cases where the model is likely to perform poorly. This research conducts a case study of AI-assisted decision making in which humans and the AI have comparable performance alone, and explores whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and the AI. Specifically, we study the effect of showing a confidence score and a local explanation for a particular prediction. Through two human experiments, we show that confidence scores can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making, which may also depend on whether the human can bring in enough unique knowledge to complement the AI's errors. We also highlight problems with using local explanations in AI-assisted decision-making scenarios and invite the research community to explore new approaches to explainability for calibrating human trust in AI.
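The per-prediction confidence score studied in the abstract is commonly taken to be the probability a classifier assigns to its top class. As a minimal, hedged sketch (the logits below are hypothetical, not from the paper's model), such a score can be computed from raw model outputs with a softmax:

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max logit before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def confidence(logits):
    # Confidence score for a single prediction: the probability
    # assigned to the most likely class.
    return max(softmax(logits))

# Hypothetical raw scores for one case; the value shown to the human
# decision maker would read e.g. "Model confidence: 79%".
score = confidence([2.0, 0.5, -1.0])
print(f"Model confidence: {score:.0%}")
```

Displaying this number per case, rather than an aggregate accuracy, is what allows trust to be calibrated case by case, as the experiments in the paper examine.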


