Learning Bellman Complete Representations for Offline Policy Evaluation

07/12/2022
by Jonathan D. Chang, et al.

We study representation learning for Offline Reinforcement Learning (RL), focusing on the important task of Offline Policy Evaluation (OPE). Recent work shows that, in contrast to supervised learning, realizability of the Q-function is not sufficient for learning it. Two sufficient conditions for sample-efficient OPE are Bellman completeness and coverage. Prior work often assumes that representations satisfying these conditions are given, with results being mostly theoretical in nature. In this work, we propose BCRL, which directly learns from data an approximately linear Bellman complete representation with good coverage. With this learned representation, we perform OPE using Least Squares Policy Evaluation (LSPE) with linear functions over the learned features. We present an end-to-end theoretical analysis, showing that our two-stage algorithm enjoys polynomial sample complexity provided some representation in the rich class considered is linear Bellman complete. Empirically, we extensively evaluate our algorithm on challenging, image-based continuous control tasks from the DeepMind Control Suite. We show our representation enables better OPE than previous representation learning methods developed for off-policy RL (e.g., CURL, SPR). BCRL achieves OPE error competitive with the state-of-the-art method Fitted Q-Evaluation (FQE), and beats FQE when evaluating beyond the initial state distribution. Our ablations show that both the linear Bellman completeness and coverage components of our method are crucial.
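For concreteness, below is a minimal sketch of the second stage, Least Squares Policy Evaluation with linear functions over a fixed feature map, written with NumPy. The names (`lspe`, `phi_sa`, `phi_next`, `reg`) are illustrative assumptions rather than the paper's implementation; in BCRL the features would come from the learned, approximately linear Bellman complete representation.

```python
import numpy as np

def lspe(phi_sa, phi_next, rewards, gamma=0.99, reg=1e-5, n_iters=100):
    """Least Squares Policy Evaluation with a fixed feature map.

    phi_sa   : (n, d) features phi(s, a) of the logged transitions
    phi_next : (n, d) features phi(s', pi(s')) under the evaluation policy
    rewards  : (n,)   observed rewards
    Returns weights w so that Q^pi(s, a) is approximated by phi(s, a) @ w.
    """
    n, d = phi_sa.shape
    # Regularized least-squares operator, reused across iterations.
    A_inv = np.linalg.inv(phi_sa.T @ phi_sa + reg * np.eye(d))
    w = np.zeros(d)
    for _ in range(n_iters):
        # Regress onto the one-step Bellman backup target r + gamma * Q(s', pi(s')).
        targets = rewards + gamma * (phi_next @ w)
        w = A_inv @ (phi_sa.T @ targets)
    return w

# Usage sketch: the OPE estimate is the mean predicted value over the
# initial state distribution, e.g. (phi_init @ w).mean().
```

Under linear Bellman completeness, each regression target in this loop is itself (approximately) linear in the features, which keeps the iteration well behaved; the coverage condition governs how well conditioned the feature covariance matrix above is on the offline data.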

