Knowing About Knowing: An Illusion of Human Competence Can Hinder Appropriate Reliance on AI Systems

by Gaole He et al.

The dazzling promises of AI systems to augment humans in various tasks hinge on whether humans can rely on them appropriately. Recent research has shown that appropriate reliance is the key to achieving complementary team performance in AI-assisted decision making. This paper addresses an under-explored question: whether the Dunning-Kruger Effect (DKE) can hinder people's appropriate reliance on AI systems. DKE is a metacognitive bias due to which less-competent individuals overestimate their own skill and performance. Through an empirical study (N = 249), we explored the impact of DKE on human reliance on an AI system, and whether such effects can be mitigated by a tutorial intervention that reveals the fallibility of AI advice, and by logic units-based explanations designed to improve user understanding of AI advice. We found that participants who overestimate their performance tend to exhibit under-reliance on AI systems, which hinders optimal team performance. Logic units-based explanations did not help users either calibrate their self-assessment or rely on the AI system appropriately. While the tutorial intervention was highly effective in helping users calibrate their self-assessment and in facilitating appropriate reliance among participants who overestimated themselves, we found that it can potentially hurt the appropriate reliance of participants who underestimated themselves. Our work has broad implications for the design of methods that tackle user cognitive biases while facilitating appropriate reliance on AI systems. Our findings advance the current understanding of the role of self-assessment in shaping trust and reliance in human-AI decision making, and lay out promising directions for future HCI research in this community.




