Learning the Preferences of Uncertain Humans with Inverse Decision Theory

by Cassidy Laidlaw, et al.

Existing observational approaches for learning human preferences, such as inverse reinforcement learning, usually make strong assumptions about the observability of the human's environment. However, in reality, people make many important decisions under uncertainty. To better understand preference learning in these cases, we study the setting of inverse decision theory (IDT), a previously proposed framework where a human is observed making non-sequential binary decisions under uncertainty. In IDT, the human's preferences are conveyed through their loss function, which expresses a tradeoff between different types of mistakes. We give the first statistical analysis of IDT, providing conditions necessary to identify these preferences and characterizing the sample complexity – the number of decisions that must be observed to learn the tradeoff the human is making to a desired precision. Interestingly, we show that it is actually easier to identify preferences when the decision problem is more uncertain. Furthermore, uncertain decision problems allow us to relax the unrealistic assumption that the human is an optimal decision maker but still identify their exact preferences; we give sample complexities in this suboptimal case as well. Our analysis contradicts the intuition that partial observability should make preference learning more difficult. It also provides a first step towards understanding and improving preference learning methods for uncertain and suboptimal humans.
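The core IDT setting described above can be illustrated with a toy simulation. The sketch below assumes the standard decision-theoretic facts the abstract relies on: an optimal decision maker with false-positive cost c_fp and false-negative cost c_fn says "yes" exactly when the posterior probability exceeds c_fp / (c_fp + c_fn), so an observer who sees (posterior, decision) pairs can bracket that threshold and recover the cost tradeoff. All names and numbers here (c_fp, c_fn, the uniform posterior model) are illustrative choices, not details from the paper.

```python
import random

random.seed(0)

# Hidden human preferences: a false positive costs 3x a false negative.
# (Illustrative values; only their ratio matters for the decision rule.)
c_fp, c_fn = 3.0, 1.0
true_threshold = c_fp / (c_fp + c_fn)  # optimal rule: decide 1 iff P(y=1|x) > 0.75

def human_decision(posterior):
    """Optimal binary decision minimizing expected loss under (c_fp, c_fn)."""
    return 1 if posterior > true_threshold else 0

# The learner observes (posterior, decision) pairs and brackets the threshold:
# every "0" decision pushes the lower bound up, every "1" pulls the upper bound down.
lo, hi = 0.0, 1.0
for _ in range(1000):
    p = random.random()  # posterior induced by an uncertain observation
    if human_decision(p) == 1:
        hi = min(hi, p)
    else:
        lo = max(lo, p)

estimate = (lo + hi) / 2
print(f"true threshold = {true_threshold}, estimate = {estimate:.4f}")
```

Note how uncertainty helps here, matching the abstract's observation: because uncertain problems produce posteriors scattered across [0, 1], some observations land close to the threshold on both sides, and the bracket [lo, hi] shrinks quickly. If the problem were nearly fully observable, posteriors would cluster near 0 and 1 and the threshold (hence the preference tradeoff) would stay unidentified.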




Preference elicitation and inverse reinforcement learning

We state the problem of inverse reinforcement learning in terms of prefe...

Research: Analysis of Transport Model that Approximates Decision Taker's Preferences

Paper provides a method for solving the reverse Monge-Kantorovich transp...

Dealing with incomplete agents' preferences and an uncertain agenda in group decision making via sequential majority voting

We consider multi-agent systems where agents' preferences are aggregated...

Discrete, recurrent, and scalable patterns in human judgement underlie affective picture ratings

Operant keypress tasks, where each action has a consequence, have been a...

Including Uncertainty when Learning from Human Corrections

It is difficult for humans to efficiently teach robots how to correctly ...

Robustness and Adaptiveness Analysis of Future Fleets

Making decisions about the structure of a future military fleet is a cha...

Accounting for Human Learning when Inferring Human Preferences

Inverse reinforcement learning (IRL) is a common technique for inferring...
