Logarithmic Regret for Reinforcement Learning with Linear Function Approximation
Reinforcement learning (RL) with linear function approximation has received increasing attention recently. However, existing work has focused on obtaining √T-type regret bounds, where T is the number of steps. In this paper, we show that logarithmic regret is attainable under two recently proposed linear MDP assumptions, provided that there exists a positive sub-optimality gap for the optimal action-value function. Specifically, under the linear MDP assumption (Jin et al. 2019), the LSVI-UCB algorithm can achieve Õ(d^3H^5/gap_min·log(T)) regret; and under the linear mixture MDP assumption (Ayoub et al. 2020), the UCRL-VTR algorithm can achieve Õ(d^2H^5/gap_min·log^3(T)) regret, where d is the dimension of the feature mapping, H is the episode length, and gap_min is the minimal sub-optimality gap. To the best of our knowledge, these are the first logarithmic regret bounds for RL with linear function approximation.
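For readability, the two regret bounds from the abstract can be written in display math as follows; the notation (d, H, T, gap_min) is exactly that of the abstract, and no factors beyond those stated above are assumed.

% Regret bounds restated from the abstract.
% d = feature dimension, H = episode length, T = number of steps,
% gap_min = minimal sub-optimality gap.
\[
  \text{LSVI-UCB (linear MDP):}\qquad
  \mathrm{Regret}(T) = \widetilde{O}\!\left(\frac{d^{3}H^{5}}{\mathrm{gap}_{\min}}\,\log T\right),
\]
\[
  \text{UCRL-VTR (linear mixture MDP):}\qquad
  \mathrm{Regret}(T) = \widetilde{O}\!\left(\frac{d^{2}H^{5}}{\mathrm{gap}_{\min}}\,\log^{3} T\right).
\]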