On the Correctness and Sample Complexity of Inverse Reinforcement Learning

06/02/2019
by Abi Komanduru, et al.

Inverse reinforcement learning (IRL) is the problem of finding a reward function that generates a given optimal policy for a given Markov decision process. This paper presents an algorithm-independent geometric analysis of the IRL problem with finite states and actions. Motivated by this analysis, an L1-regularized Support Vector Machine formulation of the IRL problem is then proposed, with the basic objective of inverse reinforcement learning in mind: to find a reward function that generates a specified optimal policy. The paper further analyzes the proposed formulation with n states and k actions, and shows a sample complexity of O(n^2 log(nk)) for recovering a reward function that generates a policy satisfying Bellman's optimality condition with respect to the true transition probabilities.
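The abstract does not spell out the optimization program itself, but a minimal sketch of an L1-regularized, SVM-style IRL formulation in the spirit described above might look as follows. The function name irl_l1_svm, the margin parameter, and the Ng-and-Russell-style optimality constraints (P_{a*} - P_a)(I - gamma P_{a*})^{-1} R >= margin are assumptions for illustration, not the paper's exact program.

import numpy as np
import cvxpy as cp

def irl_l1_svm(P, a_star, gamma=0.9, margin=1.0):
    """Sketch of an L1-regularized, SVM-style IRL program.

    P      : array of shape (k, n, n), transition matrices P[a][s, s']
             (these may be empirical estimates rather than the true dynamics)
    a_star : index of the action assumed optimal in every state
    Returns a reward vector over states, or None if the program is infeasible.
    """
    k, n, _ = P.shape
    R = cp.Variable(n)

    # Discounted occupancy term for the given optimal action:
    # (I - gamma * P_{a*})^{-1}, as in the standard LP/SVM-style IRL constraints.
    inv_term = np.linalg.inv(np.eye(n) - gamma * P[a_star])

    constraints = []
    for a in range(k):
        if a == a_star:
            continue
        # Bellman-optimality margin: in every state, the designated optimal
        # action must beat action a by at least `margin` under reward R.
        constraints.append((P[a_star] - P[a]) @ inv_term @ R >= margin)

    # L1 regularization: among all rewards satisfying the margin constraints,
    # prefer the sparsest (smallest L1 norm) one.
    prob = cp.Problem(cp.Minimize(cp.norm1(R)), constraints)
    prob.solve()
    if prob.status not in ("optimal", "optimal_inaccurate"):
        return None  # e.g. the specified policy is not strictly optimal for any reward
    return R.value

If the transition matrices are estimated from samples, the recovered reward is only guaranteed to induce a near-optimal policy once enough samples are available, which is the regime the paper's O(n^2 log(nk)) sample-complexity result addresses.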

