Learning and Querying Fast Generative Models for Reinforcement Learning

02/08/2018
by Lars Buesing, et al.

A key challenge in model-based reinforcement learning (RL) is to synthesize computationally efficient and accurate environment models. We show that carefully designed generative models that learn and operate on compact state representations, so-called state-space models, substantially reduce the computational cost of predicting the outcomes of action sequences. Extensive experiments establish that state-space models accurately capture the dynamics of Atari games from the Arcade Learning Environment directly from raw pixels. Because these models retain high accuracy while being much cheaper to query, their use in RL becomes practical: agents that query them for decision making outperform strong model-free baselines on the game MSPACMAN, demonstrating the potential of learned environment models for planning.
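
To make the querying idea concrete, the sketch below shows how a learned state-space model can stand in for the environment during planning: an action-conditional transition and a reward head operate entirely in a compact latent state, and the pixel decoder is never called during rollouts, which is where the computational savings over observation-space models come from. This is a minimal deterministic illustration in PyTorch, not the paper's architecture; the models in the paper are stochastic and trained on Atari frames, and the class names, dimensions, and the random-shooting planner here are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class StateSpaceModel(nn.Module):
    """Deterministic state-space model sketch: transitions and reward
    predictions operate on a compact latent state, so planning rollouts
    never have to render pixels."""

    def __init__(self, state_dim=128, action_dim=9, obs_channels=3):
        super().__init__()
        # Latent transition s_{t+1} = f(s_t, a_t), here a single GRU cell.
        self.transition = nn.GRUCell(action_dim, state_dim)
        # Reward head r_t = g(s_t).
        self.reward = nn.Linear(state_dim, 1)
        # Pixel decoder, only needed when an observation must be produced.
        self.decoder = nn.Sequential(
            nn.Linear(state_dim, 32 * 8 * 8),
            nn.ReLU(),
            nn.Unflatten(1, (32, 8, 8)),
            nn.ConvTranspose2d(32, obs_channels, kernel_size=4, stride=2, padding=1),
        )

    def rollout(self, state, actions):
        """Accumulate predicted reward for a sequence of one-hot actions
        of shape (horizon, batch, action_dim), entirely in latent space."""
        total_reward = torch.zeros(state.shape[0], 1, device=state.device)
        for a in actions:
            state = self.transition(a, state)   # latent step, no decoding
            total_reward = total_reward + self.reward(state)
        return total_reward, state


def plan_action(model, state, action_dim=9, num_candidates=64, horizon=10):
    """Random-shooting planner: sample candidate action sequences, score
    each by rolling the model forward in latent space, and return the
    first action of the highest-scoring sequence."""
    idx = torch.randint(action_dim, (horizon, num_candidates))
    candidates = F.one_hot(idx, action_dim).float()
    start = state.expand(num_candidates, -1).contiguous()
    returns, _ = model.rollout(start, candidates)
    best = returns.squeeze(1).argmax()
    return idx[0, best].item()


# Usage: infer the current latent state from recent frames with an encoder
# (not shown), then query the model instead of the real environment.
model = StateSpaceModel()
current_state = torch.zeros(1, 128)          # stand-in for an encoded state
action = plan_action(model, current_state)   # integer action index
```

Because every candidate sequence is evaluated as a batch of latent-only rollouts, the planner's cost is independent of the observation resolution; decoding back to pixels would only be needed for inspection or visualization.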


Related research

Shaping Belief States with Generative Environment Models for RL (06/21/2019)
When agents interact with a complex environment, they must form and main...

Value Prediction Network (07/11/2017)
This paper proposes a novel deep reinforcement learning (RL) architectur...

Combined Reinforcement Learning via Abstract Representations (09/12/2018)
In the quest for efficient and robust reinforcement learning methods, bo...

Blind Decision Making: Reinforcement Learning with Delayed Observations (11/16/2020)
Reinforcement learning typically assumes that the state update from the ...

Model Based Planning with Energy Based Models (09/15/2019)
Model-based planning holds great promise for improving both sample effic...

Decision Stacks: Flexible Reinforcement Learning via Modular Generative Models (06/09/2023)
Reinforcement learning presents an attractive paradigm to reason about s...
