Privileged Information Dropout in Reinforcement Learning

Using privileged information during training can improve the sample efficiency and performance of machine learning systems. This paradigm has been applied to reinforcement learning (RL), primarily in the form of distillation or auxiliary tasks, and less commonly in the form of augmenting the inputs of agents. In this work, we investigate Privileged Information Dropout for achieving the latter, which can be applied equally to value-based and policy-based RL algorithms. Within a simple partially-observed environment, we demonstrate that it outperforms alternatives for leveraging privileged information, including distillation and auxiliary tasks, and can successfully utilise different types of privileged information. Finally, we analyse its effect on the learned representations.
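As a rough illustration of the idea of augmenting an agent's training with privileged inputs via dropout, the sketch below drops hidden units with keep probabilities predicted from a privileged signal. This is a minimal, hypothetical sketch, not the paper's exact formulation: the linear predictor `W`, the sigmoid link, and the Bernoulli mask are all illustrative assumptions; the key property is that privileged information shapes the noise during training but is not needed at test time.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pi_dropout(h, z, W, train=True, rng=rng):
    """Dropout over features h, with per-unit keep probabilities
    predicted from privileged information z (illustrative only)."""
    keep = sigmoid(z @ W)                        # shape: (hidden_dim,)
    if not train:
        return h                                 # privileged info unused at test time
    mask = rng.random(h.shape) < keep            # Bernoulli(keep) mask
    return h * mask / np.clip(keep, 1e-6, None)  # inverted-dropout rescaling

# Toy usage: 8 hidden units, 3-dimensional privileged signal.
h = np.ones(8)
z = np.array([0.5, -1.0, 2.0])
W = rng.normal(size=(3, 8)) * 0.1                # hypothetical learned predictor
out_train = pi_dropout(h, z, W, train=True)
out_eval = pi_dropout(h, z, W, train=False)
```

At evaluation time the layer is the identity, so the agent never needs the privileged signal at deployment, which is what lets this style of method be applied equally to value-based and policy-based algorithms.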


