Pretraining Representations for Data-Efficient Reinforcement Learning

06/09/2021
by Max Schwarzer, et al.

Data efficiency is a key challenge for deep reinforcement learning. We address this problem by using unlabeled data to pretrain an encoder which is then finetuned on a small amount of task-specific data. To encourage learning representations which capture diverse aspects of the underlying MDP, we employ a combination of latent dynamics modelling and unsupervised goal-conditioned RL. When limited to 100k steps of interaction on Atari games (equivalent to two hours of human experience), our approach significantly surpasses prior work combining offline representation pretraining with task-specific finetuning, and compares favourably with other pretraining methods that require orders of magnitude more data. Our approach shows particular promise when combined with larger models as well as more diverse, task-aligned observational data – approaching human-level performance and data-efficiency on Atari in our best setting. We provide code associated with this work at https://github.com/mila-iqia/SGI.
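The abstract describes the recipe only at a high level. As a rough illustration of the pretrain-then-finetune idea, here is a minimal PyTorch sketch of the latent dynamics component alone (the unsupervised goal-conditioned RL objective is omitted). All module names, shapes, losses, and the `unlabeled_loader` are illustrative assumptions, not the authors' implementation; the real code is in the repository linked above.

```python
# Minimal sketch (not the authors' code): pretrain an encoder on unlabeled
# transitions with a latent dynamics loss, then reuse it for finetuning.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps a stack of Atari frames to a latent vector (Nature-DQN conv trunk)."""
    def __init__(self, in_channels=4, latent_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.fc = nn.Linear(64 * 7 * 7, latent_dim)  # assumes 84x84 inputs

    def forward(self, obs):
        return self.fc(self.conv(obs))

class LatentDynamics(nn.Module):
    """Predicts the next latent from the current latent and the action taken."""
    def __init__(self, latent_dim=256, num_actions=18):
        super().__init__()
        self.num_actions = num_actions
        self.net = nn.Sequential(
            nn.Linear(latent_dim + num_actions, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )

    def forward(self, z, action):
        a = F.one_hot(action, self.num_actions).float()
        return self.net(torch.cat([z, a], dim=-1))

def dynamics_loss(encoder, dynamics, obs, action, next_obs):
    """Cosine-similarity prediction loss in latent space (a simplification:
    a stop-gradient target rather than a momentum encoder)."""
    z = encoder(obs)
    with torch.no_grad():
        z_next = encoder(next_obs)
    z_pred = dynamics(z, action)
    return -F.cosine_similarity(z_pred, z_next, dim=-1).mean()

encoder, dynamics = Encoder(), LatentDynamics()
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(dynamics.parameters()), lr=1e-4
)
# Pretraining loop sketch; `unlabeled_loader` is a hypothetical iterator over
# (obs, action, next_obs) batches drawn from offline, reward-free interaction:
# for obs, action, next_obs in unlabeled_loader:
#     loss = dynamics_loss(encoder, dynamics, obs, action, next_obs)
#     opt.zero_grad(); loss.backward(); opt.step()
# Afterwards, attach a task head to `encoder` and finetune on ~100k task steps.
```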

Related research

06/09/2023 · On the Importance of Feature Decorrelation for Unsupervised Representation Learning in Reinforcement Learning
Recently, unsupervised representation learning (URL) has improved the sa...

10/06/2021 · Pretraining Reinforcement Learning: Sharpening the Axe Before Cutting the Tree
Pretraining is a common technique in deep learning for increasing perfor...

09/22/2022 · Pretraining the Vision Transformer using self-supervised methods for vision based Deep Reinforcement Learning
The Vision Transformer architecture has shown to be competitive in the c...

08/25/2022 · A Compact Pretraining Approach for Neural Language Models
Domain adaptation for large neural language models (NLMs) is coupled wit...

04/18/2023 · Behavior Retrieval: Few-Shot Imitation Learning by Querying Unlabeled Datasets
Enabling robots to learn novel visuomotor skills in a data-efficient man...

08/21/2023 · When Prompt-based Incremental Learning Does Not Meet Strong Pretraining
Incremental learning aims to overcome catastrophic forgetting when learn...

08/31/2021 · APS: Active Pretraining with Successor Features
We introduce a new unsupervised pretraining objective for reinforcement ...
