User Trust on an Explainable AI-based Medical Diagnosis Support System

by Yao Rong et al.

Recent research supports that system explainability improves user trust and willingness to use medical AI for diagnostic support. In this paper, we use chest disease diagnosis based on X-ray images as a case study to investigate user trust and reliance. Building on explainability, we propose a support system in which users (radiologists) can view causal explanations for the model's final decisions. After observing these causal explanations, users provided their opinions of the model predictions and could correct the explanations if they disagreed. We measured user trust as the agreement between the model's and the radiologist's diagnosis, together with the radiologists' feedback on the model explanations. Additionally, participants reported their trust in the system. We tested our model on the CXR-Eye dataset, where it achieved an overall accuracy of 74.1%. However, the experts in our user study agreed with the model in only 46.4% of the cases, indicating the need to improve trust. The self-reported trust score was 3.2 on a scale of 1.0 to 5.0, showing that users tended to trust the model but that this trust still needs to be strengthened.


