How to Spend Your Robot Time: Bridging Kickstarting and Offline Reinforcement Learning for Vision-based Robotic Manipulation

by Alex X. Lee et al.

Reinforcement learning (RL) has been shown to be effective at learning control from experience. However, RL typically requires a large amount of online interaction with the environment, which limits its applicability to real-world settings such as robotics, where interaction is expensive. In this work we investigate ways to minimize online interaction on a target task by reusing a suboptimal policy we may already have access to, for example from training on related prior tasks or in simulation. To this end, we develop two RL algorithms that speed up training by using not only the action distributions of teacher policies, but also data collected by such policies on the task at hand. We conduct a thorough experimental study of how to use suboptimal teachers on a challenging robotic manipulation benchmark: vision-based stacking with diverse objects. We compare our methods to offline, online, offline-to-online, and kickstarting RL algorithms, and find that training on data from both the teacher and the student enables the best performance for limited data budgets. We examine how to best allocate a limited data budget on the target task between the teacher and the student policy, reporting experiments with varying budgets, two teachers with different degrees of suboptimality, and five stacking tasks that require a diverse set of behaviors. Our analysis, both in simulation and in the real world, shows that our approach performs best across data budgets, while standard offline RL from teacher rollouts is surprisingly effective when given enough data.
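The two ingredients the abstract describes — distilling from a teacher's action distribution (as in kickstarting) and mixing teacher-collected data with student-collected data under a fixed budget — can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation; the function names, the categorical action distributions, and the fixed `distill_weight` and `teacher_fraction` parameters are assumptions for the sake of the example.

```python
import numpy as np


def kl_divergence(p, q, eps=1e-8):
    """KL(p || q) for batched categorical action distributions (rows sum to 1)."""
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)


def kickstarted_loss(rl_loss, teacher_probs, student_probs, distill_weight):
    """Total loss = RL objective + weighted distillation toward the teacher.

    The distillation term pulls the student's action distribution toward the
    teacher's on the same observations; in kickstarting-style setups the
    weight is typically annealed so the student can eventually surpass a
    suboptimal teacher.
    """
    distill = np.mean(kl_divergence(teacher_probs, student_probs))
    return rl_loss + distill_weight * distill


def mix_batches(teacher_data, student_data, teacher_fraction, batch_size, rng):
    """Sample a training batch from both teacher and student transitions.

    With a limited data budget on the target task, teacher_fraction controls
    how much of each batch comes from teacher rollouts versus the student's
    own experience.
    """
    n_teacher = int(round(teacher_fraction * batch_size))
    idx_t = rng.choice(len(teacher_data), size=n_teacher, replace=True)
    idx_s = rng.choice(len(student_data), size=batch_size - n_teacher, replace=True)
    return [teacher_data[i] for i in idx_t] + [student_data[i] for i in idx_s]
```

With `distill_weight = 0` this reduces to plain RL on the mixed buffer, and with `teacher_fraction = 1` it resembles offline RL from teacher rollouts, so both baselines compared in the paper sit at corners of this sketch's parameter space.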




