Temporal Difference Learning as Gradient Splitting

10/27/2020
by Rui Liu, et al.

Temporal difference learning with linear function approximation is a popular method for obtaining a low-dimensional approximation of the value function of a policy in a Markov Decision Process. We give a new interpretation of this method in terms of a splitting of the gradient of an appropriately chosen function. As a consequence of this interpretation, convergence proofs for gradient descent can be applied almost verbatim to temporal difference learning. Beyond giving a new, fuller explanation of why temporal difference learning works, our interpretation also yields improved convergence times. We consider the setting with a 1/√T step-size, where previous comparable finite-time convergence bounds for temporal difference learning carried a multiplicative factor of 1/(1-γ), with γ being the discount factor. We show that a minor variation on TD learning which estimates the mean of the value function separately has a convergence time where 1/(1-γ) only multiplies an asymptotically negligible term.
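
For context, here is a minimal sketch of standard TD(0) with linear value-function approximation and a constant 1/√T step size, which is the setting the abstract describes. The environment, feature map, and helper names (`td0_linear`, `env_step`, `phi`) are hypothetical illustrations, not code from the paper, and the mean-estimating variant mentioned in the abstract is not shown.

```python
import numpy as np

def td0_linear(env_step, phi, theta0, gamma, T, alpha0=1.0):
    """Run T steps of TD(0) with linear approximation V(s) ≈ phi(s) @ theta.

    env_step(s) -> (reward, next_state) samples one transition of the Markov
    chain induced by the fixed policy being evaluated.
    """
    theta = theta0.astype(float).copy()
    alpha = alpha0 / np.sqrt(T)   # constant 1/sqrt(T)-style step size
    s = 0                         # arbitrary initial state for this sketch
    for _ in range(T):
        r, s_next = env_step(s)
        # TD error: r + gamma * V(s') - V(s)
        delta = r + gamma * phi(s_next) @ theta - phi(s) @ theta
        theta += alpha * delta * phi(s)
        s = s_next
    return theta

# Example on a hypothetical 5-state chain with random features.
rng = np.random.default_rng(0)
features = rng.normal(size=(5, 3))
P = np.full((5, 5), 0.2)          # uniform transition matrix, for illustration

def phi(s):
    return features[s]

def env_step(s):
    s_next = rng.choice(5, p=P[s])
    return (1.0 if s_next == 0 else 0.0), s_next

theta = td0_linear(env_step, phi, np.zeros(3), gamma=0.9, T=10_000)
print(theta)
```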
