Operator Augmentation for Model-based Policy Evaluation

10/25/2021
by Xun Tang, et al.

In model-based reinforcement learning, the transition matrix and reward vector are often estimated from random samples subject to noise. Even if the estimated model is an unbiased estimate of the true underlying model, the value function computed from the estimated model is biased. We introduce an operator augmentation method for reducing the error introduced by the estimated model. When the error is measured in the residual norm, we prove that the augmentation factor is always positive and upper bounded by 1 + O(1/n), where n is the number of samples used in learning each row of the transition matrix. We also propose a practical numerical algorithm for implementing the operator augmentation.
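The bias phenomenon the abstract describes can be seen directly in a small experiment. The sketch below (an illustration of the bias, not of the paper's augmentation method; the MDP and all parameter values are made up) estimates each row of the transition matrix from n samples, so the estimate P_hat is unbiased, yet the average of the value functions (I - gamma*P_hat)^{-1} r over many independent estimates differs from the true value function, because matrix inversion is nonlinear in P_hat.

```python
import numpy as np

# Illustrative example only: unbiased model estimates still yield a
# biased value function. The MDP below is hypothetical.
rng = np.random.default_rng(0)
S, gamma, n = 3, 0.9, 20          # states, discount, samples per row

P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.3, 0.5]])   # true transition matrix
r = np.array([1.0, 0.0, 2.0])     # true reward vector

# True value function solves (I - gamma * P) v = r
v_true = np.linalg.solve(np.eye(S) - gamma * P, r)

# Monte Carlo average of v_hat over many independently estimated models
v_hats = []
for _ in range(5000):
    # each row of P_hat is an empirical distribution from n i.i.d. draws,
    # so E[P_hat] = P (the model estimate is unbiased)
    P_hat = np.stack([rng.multinomial(n, P[s]) / n for s in range(S)])
    v_hats.append(np.linalg.solve(np.eye(S) - gamma * P_hat, r))
bias = np.mean(v_hats, axis=0) - v_true

print("true value:", v_true)
print("mean bias :", bias)  # nonzero despite E[P_hat] = P
```

The bias shrinks as n grows, which is consistent with the 1 + O(1/n) bound on the augmentation factor stated above.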
