Policy Gradients Incorporating the Future

by David Venuto et al.
McGill University

Reasoning about the future – understanding how decisions made in the present affect future outcomes – is one of the central challenges for reinforcement learning (RL), especially in highly stochastic or partially observable environments. While predicting the future directly is hard, in this work we introduce a method that allows an agent to "look into the future" without explicitly predicting it. Namely, we propose to allow an agent, while training on past experience, to observe what actually happened in the future at that time, while enforcing an information bottleneck to prevent the agent from relying too heavily on this privileged information. This gives our agent the opportunity to exploit rich and useful information about the future trajectory dynamics in addition to the present. Our method, Policy Gradients Incorporating the Future (PGIF), is easy to implement and versatile, being applicable to virtually any policy gradient algorithm. We apply our proposed method to a number of off-the-shelf RL algorithms and show that PGIF achieves higher reward faster in a variety of online and offline RL domains, as well as in sparse-reward and partially observable environments.
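The core idea in the abstract – condition the policy on a latent summary of the observed future, but penalize the information it carries so the agent cannot lean on it entirely – can be sketched with a variational information bottleneck. The following is a minimal toy illustration, not the paper's implementation: all names (`W_enc`, `W_pi`, the window size, the bottleneck weight `beta`) are assumptions, and the linear encoder/policy stand in for whatever networks the method actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_diag_gaussian(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ): the information
    # bottleneck penalty on the future-conditioned latent.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

# Hypothetical encoder: map a window of *future* observations (available
# in hindsight during training) to a stochastic latent z.
future = rng.normal(size=8)                 # stand-in for future trajectory
W_enc = rng.normal(size=(2, 8)) * 0.1       # toy linear encoder weights
mu, logvar = W_enc @ future, np.zeros(2)
z = mu + np.exp(0.5 * logvar) * rng.normal(size=2)  # reparameterized sample

# Policy conditioned on the current state s and the latent z.
s = rng.normal(size=4)
W_pi = rng.normal(size=(3, 6)) * 0.1        # toy policy weights, 3 actions
logits = W_pi @ np.concatenate([s, z])
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Policy-gradient surrogate loss with the bottleneck penalty:
# -G * log pi(a|s,z) + beta * KL, for a sampled action a and return G.
a, G, beta = 1, 1.0, 0.1
loss = -G * np.log(probs[a]) + beta * kl_diag_gaussian(mu, logvar)
```

At deployment the future is unavailable, so a scheme like this would sample `z` from the prior (or a present-only encoder); the KL term is what keeps that train/test gap small.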


