On the Impact of Explanations on Understanding of Algorithmic Decision-Making
Ethical principles for algorithms are gaining importance as more and more stakeholders are affected by "high-risk" algorithmic decision-making (ADM) systems. Understanding how these systems work enables stakeholders to make informed decisions and to assess the systems' adherence to ethical values. Explanations are a promising way to create understanding, but current explainable artificial intelligence (XAI) research does not always consider theories on how understanding is formed and evaluated. In this work, we aim to contribute to a better understanding of understanding by conducting a qualitative task-based study with 30 participants, including "users" and "affected stakeholders". We use three explanation modalities (textual, dialogue, and interactive) to explain a "high-risk" ADM system to participants and analyse their responses both inductively and deductively, using the "six facets of understanding" framework by Wiggins and McTighe. Our findings indicate that the "six facets" are a fruitful approach to analysing participants' understanding, highlighting processes such as "empathising" and "self-reflecting" as important parts of understanding. We further introduce the "dialogue" modality as a valid alternative for increasing participant engagement in ADM explanations. Our analysis also suggests that individuality in understanding affects participants' perceptions of algorithmic fairness, confirming the link between understanding and ADM assessment that previous studies have outlined. We posit that drawing on theories of learning and understanding such as the "six facets", and leveraging explanation modalities, can guide XAI research to better tailor explanations to individuals' learning processes and consequently enable their assessment of the ethical values of ADM systems.