IQ-Learn: Inverse soft-Q Learning for Imitation

by Divyansh Garg et al., Stanford University

In many sequential decision-making problems (e.g., robotics control, game playing, sequential prediction), human or expert data is available containing useful information about the task. However, imitation learning (IL) from a small amount of expert data can be challenging in high-dimensional environments with complex dynamics. Behavioral cloning is widely used because of its simple implementation and stable convergence, but it does not exploit any information about the environment's dynamics. Many existing methods that do exploit dynamics information are difficult to train in practice, either because of an adversarial optimization process over reward and policy approximators or because of biased, high-variance gradient estimators. We introduce a method for dynamics-aware IL that avoids adversarial training by learning a single Q-function which implicitly represents both reward and policy. On standard benchmarks, the implicitly learned rewards show a high positive correlation with the ground-truth rewards, illustrating that our method can also be used for inverse reinforcement learning (IRL). Our method, Inverse soft-Q learning (IQ-Learn), obtains state-of-the-art results in offline and online imitation learning settings, surpassing existing methods in both the number of required environment interactions and scalability to high-dimensional spaces.
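The abstract's central idea, recovering both a policy and a reward from a single Q-function, follows from standard soft-Q identities. As a minimal illustrative sketch (tabular setting with toy random values, not the paper's actual function-approximation training loop), the policy is a softmax over Q and the implicit reward comes from the inverse soft Bellman operator:

```python
import numpy as np
from scipy.special import logsumexp, softmax

# Tabular sketch of the soft-Q identities behind IQ-Learn:
# one Q-function implicitly defines both a policy and a reward.
rng = np.random.default_rng(0)
n_states, n_actions, gamma = 4, 3, 0.99
Q = rng.normal(size=(n_states, n_actions))  # stand-in for a learned Q

# Soft value: V(s) = log sum_a exp(Q(s, a))
V = logsumexp(Q, axis=1)

# Implicit policy: pi(a|s) = exp(Q(s, a) - V(s)), i.e. a softmax over actions
pi = softmax(Q, axis=1)

# Implicit reward via the inverse soft Bellman operator:
# r(s, a) = Q(s, a) - gamma * E_{s'}[V(s')]
P = np.full((n_states, n_actions, n_states), 1.0 / n_states)  # toy dynamics
r = Q - gamma * np.einsum("sap,p->sa", P, V)
```

This is why no separate reward or discriminator network is needed: both quantities above are deterministic functions of Q.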




Related papers:

- Discriminator Soft Actor Critic without Extrinsic Rewards
- Adversarial Imitation via Variational Inverse Reinforcement Learning
- Imitation Learning by State-Only Distribution Matching
- SQIL: Imitation Learning via Regularized Behavioral Cloning
- Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos
- Object-Aware Regularization for Addressing Causal Confusion in Imitation Learning
- Robust Imitation via Mirror Descent Inverse Reinforcement Learning
