Making Efficient Use of Demonstrations to Solve Hard Exploration Problems

09/03/2019
by Tom Le Paine, et al.

This paper introduces R2D3, an agent that makes efficient use of demonstrations to solve hard exploration problems in partially observable environments with highly variable initial conditions. We also introduce a suite of eight tasks that combine these three properties, and show that R2D3 can solve several of the tasks where other state-of-the-art methods (both with and without demonstrations) fail to see even a single successful trajectory after tens of billions of steps of exploration.
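The abstract leaves the mechanism implicit, but the core idea of learning from demonstrations in an off-policy agent can be illustrated with a replay-sampling sketch. The snippet below is a minimal, hypothetical illustration, not the paper's actual implementation: the `ReplayBuffer` class, the `demo_ratio` hyperparameter, and `sample_batch` are assumed names, and a real system would use recurrent, prioritized replay over sequences rather than uniform sampling of single transitions.

```python
import random
from collections import deque

class ReplayBuffer:
    """Simple FIFO buffer of transitions (an illustrative stand-in
    for a prioritized, recurrent replay buffer)."""

    def __init__(self, capacity):
        self.storage = deque(maxlen=capacity)

    def add(self, transition):
        self.storage.append(transition)

    def sample(self):
        return random.choice(self.storage)

def sample_batch(demo_buffer, agent_buffer, batch_size, demo_ratio=0.25):
    """Draw each batch element from the demonstration buffer with
    probability `demo_ratio`, otherwise from the agent's own experience.
    `demo_ratio` is a hypothetical knob: higher values lean harder on
    the demonstrations, lower values favor the agent's own exploration."""
    batch = []
    for _ in range(batch_size):
        source = demo_buffer if random.random() < demo_ratio else agent_buffer
        batch.append(source.sample())
    return batch
```

Mixing the two experience streams at a fixed ratio lets a small set of demonstrations repeatedly steer the learner toward rewarding states, while the bulk of each batch still comes from the agent's own exploration.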

Related research

07/07/2020
Guided Exploration with Proximal Policy Optimization using a Single Demonstration
Solving sparse reward tasks through exploration is one of the major chal...

06/16/2022
BYOL-Explore: Exploration by Bootstrapped Prediction
We present BYOL-Explore, a conceptually simple yet general approach for ...

02/18/2021
Learning Memory-Dependent Continuous Control from Demonstrations
Efficient exploration has presented a long-standing challenge in reinfor...

11/16/2021
Improving Learning from Demonstrations by Learning from Experience
How to make imitation learning more general when demonstrations are rela...

01/30/2019
Go-Explore: a New Approach for Hard-Exploration Problems
A grand challenge in reinforcement learning is intelligent exploration, ...

12/07/2022
ICT4S2022 – Demonstrations and Posters Track Proceedings
Submissions accepted for The 8th International Conference on ICT for Sus...

09/29/2020
Align-RUDDER: Learning From Few Demonstrations by Reward Redistribution
Reinforcement Learning algorithms require a large number of samples to s...
