Gradient Q(σ, λ): A Unified Algorithm with Function Approximation for Reinforcement Learning

09/06/2019
by Long Yang, et al.

Full-sampling (e.g., Q-learning) and pure-expectation (e.g., Expected Sarsa) algorithms are efficient and frequently used techniques in reinforcement learning. Q(σ,λ) is the first approach that unifies them with eligibility traces through the sampling degree σ. However, it is limited to the tabular case: for large-scale learning, Q(σ,λ) is too expensive, requiring a huge volume of tables to accurately store value functions. To address this problem, we propose GQ(σ,λ), which extends tabular Q(σ,λ) with linear function approximation. We prove the convergence of GQ(σ,λ). Empirical results on several standard domains show that GQ(σ,λ), which combines full sampling with pure expectation, reaches better performance than either full-sampling or pure-expectation methods alone.
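To illustrate the idea behind the abstract, the sketch below shows one semi-gradient Q(σ,λ) update with linear function approximation, where the backup target blends a sampled (Sarsa-like) term and an expected (Expected-Sarsa-like) term weighted by σ. This is only a minimal sketch of the unified update, not the authors' full GQ(σ,λ) gradient-correction algorithm; the feature vectors, policy probabilities, and step-size parameters are hypothetical placeholders.

```python
import numpy as np

def q_sigma_lambda_step(theta, e, phi_sa, phi_next, pi_next, r, a_next,
                        sigma=0.5, gamma=0.99, lam=0.9, alpha=0.1):
    """One semi-gradient Q(sigma, lambda) update with linear features.

    theta    : weight vector, Q(s, a) ~= theta @ phi(s, a)
    e        : eligibility trace vector (same shape as theta)
    phi_sa   : feature vector of the current state-action pair
    phi_next : matrix of feature vectors for all actions in the next state
    pi_next  : target-policy probabilities over next actions
    r        : reward
    a_next   : sampled next action (used by the full-sampling term)
    sigma    : degree of sampling; 0 = pure expectation, 1 = full sampling
    """
    q_sa = theta @ phi_sa
    q_next = phi_next @ theta                      # Q(s', .) for all actions

    # Blend the sampled and expected backups via the sampling degree sigma.
    sampled = q_next[a_next]
    expected = pi_next @ q_next
    target = r + gamma * (sigma * sampled + (1.0 - sigma) * expected)

    delta = target - q_sa                          # TD error
    e = gamma * lam * e + phi_sa                   # accumulating trace
    theta = theta + alpha * delta * e
    return theta, e
```

Setting sigma=1 recovers a full-sampling (Sarsa-style) backup and sigma=0 a pure-expectation (Expected-Sarsa-style) backup, while intermediate values interpolate between the two, which is the trade-off the paper studies.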
