Accelerating Value Iteration with Anchoring

05/26/2023
by Jongmin Lee et al.

Value Iteration (VI) is foundational to the theory and practice of modern reinforcement learning, and it is known to converge at an 𝒪(γ^k) rate, where γ is the discount factor. Surprisingly, however, the optimal rate for the VI setup was not known, and finding a general acceleration mechanism has been an open problem. In this paper, we present the first accelerated VI for both the Bellman consistency and optimality operators. Our method, called Anc-VI, is based on an anchoring mechanism (distinct from Nesterov's acceleration), and it reduces the Bellman error faster than standard VI. In particular, Anc-VI exhibits an 𝒪(1/k) rate for γ ≈ 1 or even γ = 1, while standard VI has rate 𝒪(1) for γ ≥ 1 − 1/k, where k is the iteration count. We also provide a complexity lower bound matching the upper bound up to a constant factor of 4, thereby establishing optimality of the accelerated rate of Anc-VI. Finally, we show that the anchoring mechanism provides the same benefit in the approximate VI and Gauss–Seidel VI setups as well.
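The anchoring update itself is simple to state: each iterate is a convex combination of the very first iterate (the "anchor") and the usual Bellman update. The sketch below illustrates this shape on a toy tabular MDP. It is only a minimal illustration, not the paper's method: the Bellman optimality operator is standard, but the anchor weights beta_k = 1/(k+1) are a generic Halpern-style schedule assumed here for simplicity, and the paper derives its own coefficient schedule to obtain the stated rates.

    import numpy as np

    def bellman_optimality(V, P, R, gamma):
        # Bellman optimality operator for a tabular MDP:
        # (T V)[s] = max_a ( R[a, s] + gamma * sum_s' P[a, s, s'] * V[s'] )
        # P has shape (A, S, S); R has shape (A, S); V has shape (S,).
        return np.max(R + gamma * (P @ V), axis=0)

    def anchored_vi(P, R, gamma, num_iters):
        # Anchored iteration: V_k = beta_k * V_0 + (1 - beta_k) * T(V_{k-1}),
        # i.e., every iterate is pulled back toward the fixed anchor V_0.
        num_states = P.shape[1]
        V0 = np.zeros(num_states)   # anchor point (an arbitrary choice here)
        V = V0.copy()
        for k in range(1, num_iters + 1):
            beta = 1.0 / (k + 1)    # assumed Halpern-style weight, not the paper's schedule
            V = beta * V0 + (1.0 - beta) * bellman_optimality(V, P, R, gamma)
        return V

    # Usage on a random 2-action, 4-state MDP:
    rng = np.random.default_rng(0)
    P = rng.random((2, 4, 4))
    P /= P.sum(axis=2, keepdims=True)  # make each P[a, s, :] a distribution
    R = rng.random((2, 4))
    print(anchored_vi(P, R, gamma=0.99, num_iters=500))

Setting beta = 0 throughout recovers standard VI, which makes the comparison between the two schemes a one-line change.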
