A Co-design Study for Multi-Stakeholder Job Recommender System Explanations

09/11/2023
by Roan Schellingerhout, et al.

Recent legislative proposals have significantly increased the demand for eXplainable Artificial Intelligence (XAI) in many businesses, especially in so-called 'high-risk' domains, such as recruitment. Within recruitment, AI has become commonplace, mainly in the form of job recommender systems (JRSs), which try to match candidates to vacancies, and vice versa. However, common XAI techniques often fall short in this domain due to the different levels and types of expertise of the individuals involved, making explanations difficult to generalize. To determine the explanation preferences of the different stakeholder types - candidates, recruiters, and companies - we created and validated a semi-structured interview guide. Using grounded theory, we structurally analyzed the results of these interviews and found that the stakeholder types indeed have strongly differing explanation preferences. Candidates indicated a preference for brief, textual explanations that allow them to quickly judge potential matches. Hiring managers, on the other hand, preferred visual, graph-based explanations that provide a more technical and comprehensive overview at a glance. Recruiters preferred more exhaustive textual explanations, as these provided them with more talking points for convincing both parties of the match. Based on these findings, we describe guidelines for designing an explanation interface that fulfills the requirements of all three stakeholder types. Furthermore, we provide the validated interview guide, which can assist future research in determining the explanation preferences of different stakeholder types.
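The reported preferences amount to a simple mapping from stakeholder type to explanation style. Purely as an illustration (not the authors' implementation), the minimal Python sketch below shows how such a mapping could decide which explanation format an interface renders; the names Stakeholder, ExplanationSpec, and select_explanation are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Stakeholder(Enum):
    CANDIDATE = auto()
    RECRUITER = auto()
    HIRING_MANAGER = auto()


@dataclass
class ExplanationSpec:
    modality: str   # e.g. "text" or "graph"
    detail: str     # e.g. "brief", "exhaustive", "comprehensive"
    rationale: str  # why this style suits the stakeholder (per the study's findings)


# Hypothetical lookup table encoding the preferences reported in the interviews.
PREFERENCES = {
    Stakeholder.CANDIDATE: ExplanationSpec(
        modality="text", detail="brief",
        rationale="Quickly judge whether a recommended vacancy is a potential match.",
    ),
    Stakeholder.RECRUITER: ExplanationSpec(
        modality="text", detail="exhaustive",
        rationale="Provide talking points to convince both candidate and company.",
    ),
    Stakeholder.HIRING_MANAGER: ExplanationSpec(
        modality="graph", detail="comprehensive",
        rationale="Give a technical overview of the match at a glance.",
    ),
}


def select_explanation(stakeholder: Stakeholder) -> ExplanationSpec:
    """Return the explanation style to render for a given stakeholder type."""
    return PREFERENCES[stakeholder]


if __name__ == "__main__":
    spec = select_explanation(Stakeholder.RECRUITER)
    print(f"Render a {spec.detail} {spec.modality} explanation: {spec.rationale}")
```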
