Explaining Preference-driven Schedules: the EXPRES Framework

by Alberto Pozanco et al.

Scheduling is the task of assigning a set of scarce resources distributed over time to a set of agents, who typically have preferences about the assignments they would like to get. Due to the constrained nature of these problems, satisfying all agents' preferences is often infeasible, which might leave some agents unhappy with the resulting schedule. Providing explanations has been shown to increase satisfaction and trust in solutions produced by AI tools. However, it is particularly challenging to explain solutions that are influenced by, and have an impact on, multiple agents. In this paper we introduce the EXPRES framework, which can explain why a given preference was unsatisfied in a given optimal schedule. The EXPRES framework consists of: (i) an explanation generator that, based on a Mixed-Integer Linear Programming model, finds the best set of reasons that can explain an unsatisfied preference; and (ii) an explanation parser, which translates the generated explanations into human-interpretable ones. Through simulations, we show that the explanation generator can efficiently scale to large instances. Finally, through a set of user studies within J.P. Morgan, we show that employees preferred the explanations generated by EXPRES over human-generated ones when considering workforce scheduling scenarios.
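To make the explanation-generation idea concrete, here is a minimal sketch of the core task: given an optimal schedule and an unsatisfied preference, find a smallest set of reasons (here, higher-priority competing assignments) that accounts for it. The paper uses a Mixed-Integer Linear Programming model for this; the toy below substitutes a brute-force subset search, and all names, the priority scheme, and the capacity rule are illustrative assumptions, not the paper's actual model.

```python
from itertools import combinations

# Toy instance (all names and data hypothetical): one slot with
# capacity 2, three agents prefer it, higher priority wins.
capacity = 2
priority = {"ana": 3, "bob": 2, "carol": 1}
schedule = {"slot1": ["ana", "bob"]}   # the optimal assignment
unsatisfied = ("carol", "slot1")       # carol's preference was not met

def explain(unsatisfied, schedule, priority, capacity):
    """Brute-force stand-in for the MILP-based generator: return the
    smallest set of occupying assignments that, together with the slot's
    capacity limit, explains why the preference could not be satisfied."""
    agent, slot = unsatisfied
    occupants = schedule[slot]
    # Candidate reasons: occupants with higher priority than `agent`.
    blockers = [o for o in occupants if priority[o] > priority[agent]]
    # Search subsets of increasing size; the first one that fills the
    # slot to capacity is a minimal explanation.
    for k in range(1, len(blockers) + 1):
        for subset in combinations(blockers, k):
            if len(subset) >= capacity:
                return list(subset)
    return blockers  # fall back: all higher-priority occupants

print(explain(unsatisfied, schedule, priority, capacity))
# → ['ana', 'bob']: slot1's two seats were taken by higher-priority agents
```

An explanation parser in the spirit of component (ii) would then render such a set as natural language, e.g. "slot1 was full: it was assigned to ana and bob, who had higher priority."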




Explaining Preferences with Shapley Values

While preference modelling is becoming one of the pillars of machine lea...

Towards Personalized Explanation of Robotic Planning via User Feedback

Prior studies have found that providing explanations about robots' decis...

AI for Explaining Decisions in Multi-Agent Environments

Explanation is necessary for humans to understand and accept decisions m...

Towards Solving the Multiple Extension Problem: Combining Defaults and Probabilities

The multiple extension problem arises frequently in diagnostic and defau...

A Knowledge Driven Approach to Adaptive Assistance Using Preference Reasoning and Explanation

There is a need for socially assistive robots (SARs) to provide transpar...

Automated Rationale Generation: A Technique for Explainable AI and its Effects on Human Perceptions

Automated rationale generation is an approach for real-time explanation ...

Explaining reputation assessments

Reputation is crucial to enabling human or software agents to select amo...
