Minimax Optimal Online Imitation Learning via Replay Estimation

by Gokul Swamy et al.

Online imitation learning is the problem of how best to mimic expert demonstrations, given access to the environment or an accurate simulator. Prior work has shown that in the infinite-sample regime, exact moment matching achieves value equivalence to the expert policy. However, in the finite-sample regime, even with no optimization error, empirical variance can lead to a performance gap that scales with H^2 / N for behavioral cloning and H / √N for online moment matching, where H is the horizon and N is the size of the expert dataset. We introduce the technique of replay estimation to reduce this empirical variance: by repeatedly executing cached expert actions in a stochastic simulator, we compute a smoother estimate of the expert's visitation distribution to match. In the presence of general function approximation, we prove a meta-theorem reducing the performance gap of our approach to the parameter estimation error for offline classification (i.e., learning the expert policy). In the tabular setting or with linear function approximation, our meta-theorem shows that the performance gap incurred by our approach achieves the optimal O(min(H^3/2 / N, H / √N)) dependency, under significantly weaker assumptions than prior work. We implement multiple instantiations of our approach on several continuous control tasks and find that we are able to significantly improve policy performance across a variety of dataset sizes.
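The core idea of replay estimation described above can be sketched in a few lines. The following is a minimal tabular illustration, not the authors' implementation: it re-executes cached expert action sequences in a stochastic simulator and averages the visited (timestep, state) pairs into a smoothed visitation estimate. The simulator interface (`env_reset`, `env_step`) and the toy chain environment are hypothetical stand-ins introduced for this sketch.

```python
import random
from collections import Counter

def replay_estimate(env_reset, env_step, expert_trajs, n_replays=10):
    """Smoothed estimate of the expert's (timestep, state) visitation
    distribution, built by re-executing cached expert actions in a
    stochastic simulator instead of using only the raw demonstrations."""
    counts, total = Counter(), 0
    for _ in range(n_replays):
        for actions in expert_trajs:  # each trajectory = cached expert action sequence
            s = env_reset()
            for t, a in enumerate(actions):
                counts[(t, s)] += 1
                total += 1
                s = env_step(s, a)  # stochastic transition resampled on every replay
    return {ts: c / total for ts, c in counts.items()}

# Hypothetical toy simulator: a 1D chain where the action moves the
# state by +/-1, flipped with 10% probability (stochastic dynamics).
def reset():
    return 0

def step(s, a):
    return s + (a if random.random() < 0.9 else -a)

random.seed(0)
trajs = [[+1, +1, -1], [+1, -1, +1]]  # cached expert action sequences
d_hat = replay_estimate(reset, step, trajs, n_replays=100)
print(round(sum(d_hat.values()), 6))  # the estimate is a probability distribution
```

Each replay resamples the stochastic transitions, so the averaged estimate has lower variance than the empirical distribution of the N raw trajectories alone; this is the variance-reduction effect the abstract refers to.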



Related papers:

- SoftDICE for Imitation Learning: Rethinking Off-policy Distribution Matching
- CEIL: Generalized Contextual Imitation Learning
- Provably Efficient Generative Adversarial Imitation Learning for Online and Offline Setting with Linear Function Approximation
- Understanding Adversarial Imitation Learning in Small Sample Regime: A Stage-coupled Analysis
- Proximal Point Imitation Learning
- Causal Imitation Learning under Temporally Correlated Noise
- Theoretical Analysis of Offline Imitation With Supplementary Dataset
