Batch Value-function Approximation with Only Realizability

08/11/2020
by Tengyang Xie, et al.

We solve a long-standing problem in batch reinforcement learning (RL): learning Q^⋆ from an exploratory and polynomial-sized dataset, using a realizable and otherwise arbitrary function class. In fact, all existing algorithms demand function-approximation assumptions stronger than realizability, and the mounting negative evidence has led to a conjecture that sample-efficient learning is impossible in this setting (Chen and Jiang, 2019). Our algorithm, BVFT, breaks the hardness conjecture (albeit under a somewhat stronger notion of exploratory data) via a tournament procedure that reduces the learning problem to pairwise comparison, and solves the latter with the help of a state-action partition constructed from the compared functions. We also discuss how BVFT can be applied to model selection, among other extensions and open problems.
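To make the tournament idea concrete, the following is a minimal Python sketch of a BVFT-style selection procedure, not the paper's exact estimator. It assumes discrete states, a finite action set indexed 0..num_actions-1, candidate Q-functions given as callables q(s, a), and a fixed discretization resolution; the names bvft_select and _pairwise_loss, the loss form, and these interfaces are illustrative choices, not taken from the paper.

```python
import numpy as np

def bvft_select(dataset, candidates, num_actions, gamma=0.99, resolution=0.1):
    """Pick a Q-function from `candidates` via a BVFT-style tournament.

    dataset     : iterable of (s, a, r, s_next) transitions
    candidates  : list of callables q(s, a) -> float
    num_actions : size of the finite action set {0, ..., num_actions-1}
    """
    n = len(candidates)
    loss = np.zeros((n, n))
    for i, f in enumerate(candidates):
        for j, g in enumerate(candidates):
            # Pairwise comparison: error of f measured on the partition
            # constructed from the pair (f, g).
            loss[i, j] = _pairwise_loss(dataset, f, g, num_actions, gamma, resolution)
    # The winner is the candidate whose worst pairwise loss is smallest.
    return candidates[int(np.argmin(loss.max(axis=1)))]

def _pairwise_loss(dataset, f, g, num_actions, gamma, resolution):
    """Simplified projected Bellman error of f on the (f, g)-induced partition."""
    # Group transitions by discretizing the value pair (f(s,a), g(s,a)).
    groups = {}
    for (s, a, r, s_next) in dataset:
        key = (int(f(s, a) / resolution), int(g(s, a) / resolution))
        groups.setdefault(key, []).append((s, a, r, s_next))

    # Within each cell, compare the average of f(s,a) against the average
    # Bellman backup r + gamma * max_a' f(s_next, a').
    sq_err, total = 0.0, 0
    for cell in groups.values():
        pred = np.mean([f(s, a) for (s, a, _, _) in cell])
        backup = np.mean([r + gamma * max(f(s_next, a2) for a2 in range(num_actions))
                          for (_, _, r, s_next) in cell])
        sq_err += len(cell) * (pred - backup) ** 2
        total += len(cell)
    return np.sqrt(sq_err / total)
```

Under these assumptions, the call bvft_select(batch_data, [q1, q2, q3], num_actions=4) would return the candidate that survives the tournament, i.e., the one with the smallest worst-case pairwise loss; the same max-over-opponents score can also serve as a model-selection criterion over the candidates.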
