Identifiability and generalizability from multiple experts in Inverse Reinforcement Learning

by Paul Rolland et al.

While Reinforcement Learning (RL) aims to train an agent from a reward function in a given environment, Inverse Reinforcement Learning (IRL) seeks to recover the reward function from observations of an expert's behavior. It is well known that, in general, many different reward functions can lead to the same optimal policy, and hence IRL is ill-defined. However, Cao et al. (2021) showed that, if we observe two or more experts with different discount factors or acting in different environments, the reward function can under certain conditions be identified up to a constant. This work starts by showing an equivalent identifiability statement from multiple experts in tabular MDPs based on a rank condition, which is easily verifiable and is also shown to be necessary. We then extend our result to several further scenarios: we characterize reward identifiability when the reward function can be represented as a linear combination of given features, making it more interpretable, and when we only have access to approximate transition matrices. Even when the reward is not identifiable, we provide conditions characterizing when data on multiple experts in a given environment allows us to generalize and train an optimal agent in a new environment. Our theoretical results on reward identifiability and generalizability are validated in various numerical experiments.
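To make the multiple-expert identifiability idea concrete, here is a minimal numerical sketch in the entropy-regularized (max-ent) tabular setting studied by Cao et al. (2021). There, each expert's policy pins down the reward only up to a vector in the range of M_i = E - γ_i P_i (E maps state values to state-action pairs), so rewards consistent with both experts differ by range(M_1) ∩ range(M_2); the constant vector always lies in both ranges, giving the rank test rank([M_1 M_2]) = 2n - 1 for identifiability up to a constant. This is an illustrative reconstruction under those assumptions, not necessarily the exact rank condition of the paper; all function names below are our own.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3  # number of states, number of actions

def transition_matrix(rng, n, m):
    # P[(s, a), s'] = P(s' | s, a); each row is a probability distribution.
    P = rng.random((n * m, n))
    return P / P.sum(axis=1, keepdims=True)

def reward_identifiable_up_to_constant(P1, g1, P2, g2, n, m):
    # E lifts a state-value vector V in R^n to the (s, a)-indexed vector V(s).
    E = np.kron(np.eye(n), np.ones((m, 1)))
    M1 = E - g1 * P1
    M2 = E - g2 * P2
    # Rewards consistent with both experts differ by range(M1) ∩ range(M2).
    # Each M_i has rank n, and the constant vector lies in both ranges
    # (M_i @ ones = (1 - g_i) * ones), so the intersection is exactly the
    # constants iff rank([M1 M2]) = n + n - 1.
    return np.linalg.matrix_rank(np.hstack([M1, M2])) == 2 * n - 1

P1, P2 = transition_matrix(rng, n, m), transition_matrix(rng, n, m)
# Two experts with different dynamics and discounts: generically identifiable.
print(reward_identifiable_up_to_constant(P1, 0.9, P2, 0.8, n, m))  # True
# A single expert observed once: range(M1) ∩ range(M1) has dimension n > 1.
print(reward_identifiable_up_to_constant(P1, 0.9, P1, 0.9, n, m))  # False
```

The second call illustrates the classical ill-posedness of single-expert IRL: with one expert the ambiguity set is the full n-dimensional range of M_1, while a second expert with a different discount factor or environment generically cuts it down to the constants.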




Related papers:

- Identifiability in inverse reinforcement learning
- Inverse Reinforcement Learning with Multiple Ranked Experts
- Calculus on MDPs: Potential Shaping as a Gradient
- InfoRL: Interpretable Reinforcement Learning using Information Maximization
- Towards Resolving Unidentifiability in Inverse Reinforcement Learning
- Inverse Reinforcement Learning for Text Summarization
- Joint Goal and Strategy Inference across Heterogeneous Demonstrators via Reward Network Distillation
