Gradient Q(σ, λ): A Unified Algorithm with Function Approximation for Reinforcement Learning
Full-sampling (e.g., Q-learning) and pure-expectation (e.g., Expected Sarsa) algorithms are efficient and widely used techniques in reinforcement learning. Q(σ,λ) is the first approach that unifies them with eligibility traces through the sampling degree σ. However, it is limited to the tabular case; for large-scale problems, Q(σ,λ) requires prohibitively large tables to store the value function accurately. To address this problem, we propose GQ(σ,λ), which extends tabular Q(σ,λ) with linear function approximation. We prove the convergence of GQ(σ,λ). Empirical results on several standard domains show that GQ(σ,λ), by combining full sampling with pure expectation, achieves better performance than pure full-sampling and pure-expectation methods.
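To make the σ-mixing concrete, the sketch below shows one semi-gradient TD update with linear features and an accumulating eligibility trace, where the backup target interpolates between a sampled (Sarsa-style) term and an expected (Expected-Sarsa-style) term via σ. This is only an illustrative assumption of how such an update could look; the function and parameter names (`feature_fn`, `policy_probs`, etc.) are hypothetical, and the paper's GQ(σ,λ) may additionally use gradient-TD corrections not shown here.

```python
import numpy as np

def q_sigma_lambda_step(w, e, s, a, r, s_next, a_next,
                        feature_fn, policy_probs, actions,
                        alpha=0.1, gamma=0.99, lam=0.9, sigma=0.5):
    """One illustrative TD update with linear function approximation.

    sigma = 1 recovers a full-sampling (Sarsa-style) backup;
    sigma = 0 recovers a pure-expectation (Expected-Sarsa-style) backup.
    All names here are hypothetical, not the authors' implementation.
    """
    x = feature_fn(s, a)                       # feature vector for (s, a)
    q_sa = w @ x                               # current estimate q(s, a)

    # Sampled backup term and expected backup term at the next state.
    q_sampled = w @ feature_fn(s_next, a_next)
    q_expected = sum(policy_probs(s_next, b) * (w @ feature_fn(s_next, b))
                     for b in actions)

    # Mix the two backups through the sampling degree sigma.
    target = r + gamma * (sigma * q_sampled + (1.0 - sigma) * q_expected)
    delta = target - q_sa

    # Accumulating eligibility trace and semi-gradient weight update.
    e = gamma * lam * e + x
    w = w + alpha * delta * e
    return w, e
```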