Receding Horizon Inverse Reinforcement Learning

06/09/2022
by Yiqing Xu, et al.

Inverse reinforcement learning (IRL) seeks to infer a cost function that explains the underlying goals and preferences of expert demonstrations. This paper presents receding horizon inverse reinforcement learning (RHIRL), a new IRL algorithm for high-dimensional, noisy, continuous systems with black-box dynamic models. RHIRL addresses two key challenges of IRL: scalability and robustness. To handle high-dimensional continuous systems, RHIRL matches the induced optimal trajectories with expert demonstrations locally in a receding horizon manner and 'stitches' together the local solutions to learn the cost; it thereby avoids the 'curse of dimensionality'. This contrasts sharply with earlier algorithms that match with expert demonstrations globally over the entire high-dimensional state space. To be robust against imperfect expert demonstrations and system control noise, RHIRL learns a state-dependent cost function 'disentangled' from system dynamics under mild conditions. Experiments on benchmark tasks show that RHIRL outperforms several leading IRL algorithms in most instances. We also prove that the cumulative error of RHIRL grows linearly with the task duration.
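The receding-horizon idea described above can be illustrated with a minimal sketch: slide a short window along the expert trajectory, roll out a locally cost-minimizing trajectory from the window's start state under the current cost estimate, and nudge the cost parameters to close the gap between the two segments. This is an illustrative toy, not the authors' RHIRL algorithm; the quadratic cost parameterization, the discrete candidate controls, and the feature-matching update are all assumptions made for brevity.

```python
import numpy as np

def receding_horizon_irl(dynamics, expert_traj, horizon=5, lr=0.1, epochs=50):
    """Toy receding-horizon IRL sketch (illustrative, not the paper's method).

    Learns diagonal weights w of an assumed quadratic state cost
    c(s) = sum_i w_i * s_i**2 by matching locally optimal rollouts to
    expert segments over a sliding window, then 'stitching' the local
    updates together across the trajectory.
    """
    dim = expert_traj.shape[1]
    w = np.ones(dim)
    for _ in range(epochs):
        for t in range(len(expert_traj) - horizon):
            # roll out a greedily cost-minimizing local trajectory under current w,
            # treating `dynamics` as a black box queried with candidate controls
            local = [expert_traj[t]]
            for _ in range(horizon):
                candidates = [dynamics(local[-1], u) for u in (-1.0, 0.0, 1.0)]
                costs = [w @ (s_next ** 2) for s_next in candidates]
                local.append(candidates[int(np.argmin(costs))])
            local = np.array(local)
            expert_seg = expert_traj[t:t + horizon + 1]
            # feature-matching update: raise weights on features where the
            # learner's rollout exceeds the expert's, lower them otherwise
            grad = (expert_seg ** 2).mean(axis=0) - (local ** 2).mean(axis=0)
            w = np.maximum(w - lr * grad, 1e-6)  # keep the cost positive-definite
    return w
```

Because each window only ever optimizes over `horizon` steps, the local problems stay low-dimensional regardless of the task's total duration, which is the intuition behind the scalability claim in the abstract.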

