Krylov-Bellman boosting: Super-linear policy evaluation in general state spaces

10/20/2022
by Eric Xia et al.

We present and analyze the Krylov-Bellman Boosting (KBB) algorithm for policy evaluation in general state spaces. It alternates between fitting the Bellman residual using non-parametric regression (as in boosting), and estimating the value function via the least-squares temporal difference (LSTD) procedure applied with a feature set that grows adaptively over time. By exploiting the connection to Krylov methods, we equip this method with two attractive guarantees. First, we provide a general convergence bound that allows for separate estimation errors in residual fitting and LSTD computation. Consistent with our numerical experiments, this bound shows that convergence rates depend on the restricted spectral structure, and are typically super-linear. Second, by combining this meta-result with sample-size-dependent guarantees for residual fitting and LSTD computation, we obtain concrete statistical guarantees that depend on the sample size along with the complexity of the function class used to fit the residuals. We illustrate the behavior of the KBB algorithm for various types of policy evaluation problems, and typically find large reductions in sample complexity relative to the standard approach of fitted value iteration.
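To make the alternation concrete, here is a minimal Python sketch of the KBB loop for a tabular MDP with a known transition matrix and exact rewards. The function name, the choice of gradient-boosted trees as the residual fitter, and the exact-model LSTD step are illustrative assumptions for this sketch, not the authors' implementation, which targets general state spaces with sampled transitions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def kbb_policy_evaluation(P, r, gamma, states, n_rounds=10):
    """Illustrative KBB loop for a tabular MDP with a known model.

    P: (n, n) transition matrix under the evaluated policy.
    r: (n,) expected reward vector.
    states: (n, d) state embeddings seen by the residual fitter.
    """
    n = len(r)
    features = [np.ones(n)]   # start from the constant feature
    V = np.zeros(n)           # current value-function estimate
    for _ in range(n_rounds):
        # Step 1: Bellman residual of the current estimate, T(V) - V.
        residual = r + gamma * (P @ V) - V
        # Step 2 (boosting step): fit the residual with a non-parametric
        # regressor; gradient-boosted trees are one illustrative choice.
        fitter = GradientBoostingRegressor(n_estimators=50, max_depth=2)
        fitter.fit(states, residual)
        features.append(fitter.predict(states))  # grow the feature set
        # Step 3: re-estimate V by LSTD over the span of the current
        # features, solving Phi^T (Phi - gamma * P @ Phi) w = Phi^T r.
        Phi = np.column_stack(features)
        A = Phi.T @ (Phi - gamma * (P @ Phi))
        b = Phi.T @ r
        w = np.linalg.lstsq(A, b, rcond=None)[0]
        V = Phi @ w
    return V
```

In the sampled setting studied in the paper, the residual targets in step 2 and the LSTD system in step 3 would instead be formed from observed transitions, which is where the two separate estimation-error terms in the convergence bound enter. With exact residuals, the loop above reduces to a Krylov-subspace method: each round enlarges the feature span roughly along successive powers of gamma * P applied to r, which is the connection the abstract refers to.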


