Sample Complexity and Overparameterization Bounds for Projection-Free Neural TD Learning

03/02/2021
by Semih Cayci, et al.

We study the dynamics of temporal-difference learning with neural network-based value function approximation over a general state space, namely, neural TD learning. Existing analyses of neural TD learning rely either on an infinite-width analysis or on constraining the network parameters to a (random) compact set; as a result, an extra projection step is required at each iteration. This paper establishes a new convergence analysis of neural TD learning without any projection. We show that projection-free neural TD learning equipped with a two-layer ReLU network of any width exceeding poly(ν,1/ϵ) converges to the true value function with error ϵ given poly(ν,1/ϵ) iterations or samples, where ν is an upper bound on the RKHS norm of the value function induced by the neural tangent kernel. Our sample complexity and overparameterization bounds are based on a drift analysis of the network parameters as a stopped random process in the lazy-training regime.
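For intuition, the update studied in the paper is the standard semi-gradient TD(0) rule applied directly to the raw parameters of an overparameterized two-layer ReLU network, with no projection onto a compact set. Below is a minimal sketch in NumPy under illustrative assumptions: the environment interface (reset/step), the width m, step size, and discount factor are placeholders rather than values or code from the paper.

```python
# Minimal sketch of projection-free TD(0) with a two-layer ReLU network.
# The environment interface and all hyperparameters are illustrative
# assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

d, m = 4, 512              # state dimension, network width (overparameterized)
gamma, alpha = 0.95, 1e-3  # discount factor, step size

# Two-layer ReLU network in the NTK / lazy-training parameterization:
# V(s) = (1/sqrt(m)) * sum_r a_r * relu(w_r . s), with the output weights
# a_r fixed at +/-1 and only the first-layer weights W trained.
W = rng.normal(size=(m, d))
a = rng.choice([-1.0, 1.0], size=m)

def value(s, W):
    """Network value estimate V(s)."""
    return (a @ np.maximum(W @ s, 0.0)) / np.sqrt(m)

def grad_value(s, W):
    """Gradient of V(s) w.r.t. W: row r is a_r * 1{w_r . s > 0} * s / sqrt(m)."""
    active = (W @ s > 0.0).astype(float)
    return (a * active)[:, None] * s[None, :] / np.sqrt(m)

def td0_step(s, r, s_next, W):
    """Semi-gradient TD(0) update on the raw parameters (no projection step)."""
    delta = r + gamma * value(s_next, W) - value(s, W)
    return W + alpha * delta * grad_value(s, W)

# Usage with a hypothetical environment exposing reset()/step():
# s = env.reset()
# for _ in range(num_steps):
#     s_next, r = env.step(policy(s))
#     W = td0_step(s, r, s_next, W)
#     s = s_next
```

The point of the sketch is only to make the "projection-free" aspect concrete: the parameters W evolve freely under the TD updates, and the paper's analysis tracks their drift as a stopped random process rather than forcing them back into a compact set.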
