FIRL: Fast Imitation and Policy Reuse Learning

by Yiwen Chen et al.
National University of Singapore

Intelligent robot policies have been widely researched for challenging applications such as opening doors, washing dishes, and organizing tables. We refer to a "policy pool" containing skills that can be easily accessed and reused. Several lines of research leverage such a pool, including policy reuse, modular learning, assembly learning, transfer learning, and hierarchical reinforcement learning (HRL). However, most of these methods suffer from poor learning efficiency and require large training datasets. This work focuses on fast learning based on the policy pool: the agent should learn a new task in one or a few shots by avoiding learning from scratch. We also allow it to interact with and learn from humans, while keeping the training period within minutes. We propose FIRL, Fast (one-shot) Imitation and policy Reuse Learning. Instead of learning a new skill from scratch, FIRL performs one-shot imitation learning on the higher layer of a two-layer hierarchical mechanism. Our method reduces a complex task-learning problem to a simple regression problem that can be solved in a few offline iterations, so the agent gains a good command of a new task from a single demonstration. We demonstrate the method on the OpenDoors mini-grid environment, and the code is available online.
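To make the two-layer idea concrete, here is a minimal sketch, assuming a fixed pool of pre-trained low-level skills and a high-level selector fit by a few offline gradient steps on one demonstration. All names (`PolicyPool`, `fit_high_level`, `select_skill`) and the linear-softmax model are illustrative assumptions, not FIRL's actual implementation.

```python
import numpy as np

class PolicyPool:
    """Fixed library of reusable low-level skills (callables: state -> action)."""
    def __init__(self, skills):
        self.skills = skills

    def act(self, skill_id, state):
        return self.skills[skill_id](state)

def fit_high_level(demo_states, demo_skill_ids, n_skills, lr=0.1, iters=200):
    """Fit a linear softmax selector (state -> skill index) by offline
    gradient steps on a single demonstration; this is the 'simple
    regression problem' replacing learning from scratch."""
    n, d = demo_states.shape
    W = np.zeros((n_skills, d))
    for _ in range(iters):
        logits = demo_states @ W.T                       # (n, n_skills)
        logits -= logits.max(axis=1, keepdims=True)      # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        probs[np.arange(n), demo_skill_ids] -= 1.0       # softmax - one-hot
        W -= lr * (probs.T @ demo_states) / n            # gradient step
    return W

def select_skill(W, state):
    """High layer: pick which pooled skill to run in this state."""
    return int(np.argmax(W @ state))
```

Because only the small selector is trained while the pooled skills stay frozen, the few hundred offline iterations above complete in well under a second, which is the mechanism behind the minutes-scale training claim.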

