Making Efficient Use of Demonstrations to Solve Hard Exploration Problems

by Tom Le Paine et al.

This paper introduces R2D3, an agent that makes efficient use of demonstrations to solve hard exploration problems in partially observable environments with highly variable initial conditions. We also introduce a suite of eight tasks that combine these three properties, and show that R2D3 can solve several of the tasks where other state-of-the-art methods (both with and without demonstrations) fail to see even a single successful trajectory after tens of billions of steps of exploration.
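The core mechanism R2D3 uses to exploit demonstrations is to maintain two replay buffers, one holding expert demonstrations and one holding the agent's own experience, and to mix them in each training batch according to a demo ratio, which the paper treats as a key hyperparameter. A minimal sketch of that sampling scheme (function and variable names are illustrative, and the real agent samples recurrent sequences rather than single transitions):

```python
import random

def sample_batch(demo_buffer, agent_buffer, batch_size, demo_ratio=0.25):
    """Draw a training batch in which each element is taken from the
    demonstration buffer with probability demo_ratio, and otherwise
    from the agent's own replay buffer.

    This is a simplified illustration: the actual R2D3 learner samples
    fixed-length sequences with prioritization for its recurrent network.
    """
    batch = []
    for _ in range(batch_size):
        source = demo_buffer if random.random() < demo_ratio else agent_buffer
        batch.append(random.choice(source))
    return batch
```

With a small but nonzero demo ratio, most learning still comes from the agent's own exploration, while the occasional demonstration transition keeps pointing the value function toward rewarding behavior it has never discovered on its own.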



Guided Exploration with Proximal Policy Optimization using a Single Demonstration

Solving sparse reward tasks through exploration is one of the major chal...

BYOL-Explore: Exploration by Bootstrapped Prediction

We present BYOL-Explore, a conceptually simple yet general approach for ...

Learning Memory-Dependent Continuous Control from Demonstrations

Efficient exploration has presented a long-standing challenge in reinfor...

Improving Learning from Demonstrations by Learning from Experience

How to make imitation learning more general when demonstrations are rela...

Go-Explore: a New Approach for Hard-Exploration Problems

A grand challenge in reinforcement learning is intelligent exploration, ...

Align-RUDDER: Learning From Few Demonstrations by Reward Redistribution

Reinforcement Learning algorithms require a large number of samples to s...
