Fast Rates for Maximum Entropy Exploration

03/14/2023
by   Daniil Tiapkin, et al.

We consider the reinforcement learning (RL) setting in which the agent has to act in an unknown environment driven by a Markov Decision Process (MDP) with sparse or even reward-free signals. In this situation, exploration becomes the main challenge. In this work, we study the maximum entropy exploration problem of two different types. The first type is visitation entropy maximization, previously considered by Hazan et al. (2019) in the discounted setting. For this type of exploration, we propose an algorithm based on a game-theoretic representation that has 𝒪(H^3 S^2 A / ε^2) sample complexity, thus improving the ε-dependence of Hazan et al. (2019), where S is the number of states, A is the number of actions, H is the episode length, and ε is the desired accuracy. The second type of entropy we study is the trajectory entropy. This objective function is closely related to entropy-regularized MDPs, and we propose a simple modification of the UCBVI algorithm that has a sample complexity of order 𝒪(1/ε), ignoring the dependence on S, A, and H. Interestingly, this is the first theoretical result in the RL literature establishing that the exploration problem for regularized MDPs can be statistically strictly easier (in terms of sample complexity) than for ordinary MDPs.
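To make the visitation-entropy objective concrete, here is a minimal sketch (not from the paper) of how one might compute the entropy of a policy's average state visitation distribution in a tabular finite-horizon MDP. The variable names, the fixed initial state, and the forward-recursion layout are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def visitation_entropy(P, pi, H):
    """Entropy of the average state visitation distribution of policy pi.

    P  : array of shape (S, A, S), transition kernel P(s' | s, a)
    pi : array of shape (H, S, A), policy pi_h(a | s)
    H  : episode length
    """
    S, A, _ = P.shape
    d = np.zeros(S)
    d[0] = 1.0                      # illustrative assumption: episodes start in state 0
    avg = np.zeros(S)
    for h in range(H):
        avg += d / H                # accumulate the average visitation distribution
        # forward recursion: d_{h+1}(s') = sum_{s,a} d_h(s) pi_h(a | s) P(s' | s, a)
        d = np.einsum('s,sa,sap->p', d, pi[h], P)
    return -np.sum(avg * np.log(avg + 1e-12))   # Shannon entropy, smoothed to avoid log(0)

# Example usage on a small random MDP with a uniform policy
rng = np.random.default_rng(0)
S, A, H = 5, 2, 10
P = rng.dirichlet(np.ones(S), size=(S, A))
pi = np.full((H, S, A), 1.0 / A)
print(visitation_entropy(P, pi, H))
```

An exploration algorithm of the kind studied here seeks a policy maximizing this quantity; the computation above assumes the transition kernel is known, whereas the sample-complexity results concern the unknown-kernel setting.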
