Outlining the Design Space of Explainable Intelligent Systems for Medical Diagnosis

02/16/2019
by Yao Xie, et al.

The adoption of intelligent systems creates opportunities as well as challenges for medical work. On the positive side, intelligent systems have the potential to process complex patient data and generate automated diagnosis recommendations for doctors. However, medical professionals often perceive such systems as black boxes and are therefore reluctant to rely on system-generated results when making decisions. In this paper, we contribute to the ongoing discussion of explainable artificial intelligence (XAI) by exploring the concept of explanation from a human-centered perspective. We hypothesize that medical professionals would perceive a system as explainable if it were designed to think and act like doctors. We report a preliminary interview study that collected six medical professionals' reflections on how they interact with data for diagnosis and treatment purposes. Our data reveal when and how doctors prioritize among various types of data as a central part of their diagnosis process. Based on these findings, we outline future directions for the design of XAI systems in the medical context.
