Online Observer-Based Inverse Reinforcement Learning

11/03/2020
by Ryan Self et al.

In this paper, a novel approach to output-feedback inverse reinforcement learning (IRL) is developed by casting the IRL problem, for linear systems with quadratic cost functions, as a state estimation problem. Two observer-based IRL techniques are developed, including a novel observer that reuses previous state estimates via history stacks. Theoretical guarantees of convergence and robustness are established under appropriate excitation conditions. Simulations demonstrate the performance of the developed observers and filters under both noisy and noise-free measurements.
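To make the linear-quadratic IRL setting concrete, the sketch below shows the simplest version of the inverse problem (not the paper's observer-based method, which works online from output feedback): given the feedback gain of an expert acting optimally under an unknown quadratic cost, recover the state-cost matrix Q by rearranging the algebraic Riccati equation. The assumptions here are illustrative only: full knowledge of the gain K, a known control cost R, and a square invertible input matrix B.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def recover_Q(A, B, R, K):
    """Recover the state-cost matrix Q from an observed LQR gain K.

    Assumes the expert plays u = -K x with K = R^{-1} B^T P, where P
    solves the continuous algebraic Riccati equation (CARE), and that
    B is square and invertible so P can be read off from K.
    """
    # K = R^{-1} B^T P  =>  P = B^{-T} R K; symmetrize against round-off.
    P = np.linalg.solve(B.T, R @ K)
    P = 0.5 * (P + P.T)
    # Rearranged CARE: Q = P B R^{-1} B^T P - A^T P - P A.
    return P @ B @ np.linalg.solve(R, B.T @ P) - A.T @ P - P @ A

# Forward problem: generate the "expert" gain from a known cost
# (toy system chosen for illustration, not taken from the paper).
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.eye(2)
Q_true = np.diag([2.0, 1.0])
R = np.eye(2)
P_true = solve_continuous_are(A, B, Q_true, R)
K_expert = np.linalg.solve(R, B.T @ P_true)

# Inverse problem: recover Q from the observed gain alone.
Q_hat = recover_Q(A, B, R, K_expert)
print(np.round(Q_hat, 6))
```

The observer-based formulation in the paper replaces this one-shot algebraic inversion with a recursive estimator driven by measured outputs, which is what allows noisy, online operation.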
