Evaluation of Human-Understandability of Global Model Explanations using Decision Tree

09/18/2023
by Adarsa Sivaprasad, et al.

In explainable artificial intelligence (XAI) research, the predominant focus has been on interpreting models for experts and practitioners. Model-agnostic and local explanation approaches are deemed interpretable and sufficient in many applications. However, in domains like healthcare, where end users are patients without AI or domain expertise, there is an urgent need for model explanations that are more comprehensible and that instil trust in the model's operations. We hypothesise that model explanations that are narrative, patient-specific and global (holistic of the model) would enable better understandability and support decision-making. We test this using a decision tree model to generate both local and global explanations for patients identified as having a high risk of coronary heart disease, and present these explanations to non-expert users. We find a strong individual preference for a specific type of explanation: the majority of participants prefer global explanations, while a smaller group prefers local explanations. A task-based evaluation of these participants' mental models provides valuable feedback for enhancing narrative global explanations, which in turn guides the design of health informatics systems that are both trustworthy and actionable.
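To make the local/global distinction concrete, the sketch below shows how both kinds of explanation can be extracted from a scikit-learn decision tree: the full rule set (global) versus the single root-to-leaf path for one individual (local). The feature names and data are synthetic stand-ins, not the paper's coronary-heart-disease dataset or its actual pipeline.

```python
# Hedged sketch: local vs. global explanations from a decision tree.
# Features and labels here are synthetic placeholders for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
feature_names = ["age", "cholesterol", "systolic_bp"]  # illustrative only
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy "high risk" label

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Global explanation: the complete rule set of the fitted model,
# holistic of its behaviour on every possible input.
global_rules = export_text(tree, feature_names=feature_names)
print(global_rules)

# Local explanation: only the decision path taken for one patient,
# i.e. the nodes visited from the root to the predicted leaf.
patient = X[:1]
node_ids = tree.decision_path(patient).indices
print("nodes visited for this patient:", list(node_ids))
```

A narrative global explanation, as the paper proposes, would verbalise the full rule set rather than only the one path that applies to the current patient.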


Related research

04/16/2021 - Faithful and Plausible Explanations of Medical Code Predictions
  Machine learning models that offer excellent predictive performance ofte...

09/13/2021 - Towards Better Model Understanding with Path-Sufficient Explanations
  Feature based local attribution methods are amongst the most prevalent i...

07/18/2023 - Identifying Explanation Needs of End-users: Applying and Extending the XAI Question Bank
  Explanations in XAI are typically developed by AI experts and focus on a...

01/21/2020 - Deceptive AI Explanations: Creation and Detection
  Artificial intelligence comes with great opportunities but also grea...

10/12/2022 - Feasible and Desirable Counterfactual Generation by Preserving Human Defined Constraints
  We present a human-in-the-loop approach to generate counterfactual (CF) ...

07/14/2023 - Visual Explanations with Attributions and Counterfactuals on Time Series Classification
  With the rising necessity of explainable artificial intelligence (XAI), ...
