
Fast Rates for Maximum Entropy Exploration

by Daniil Tiapkin, et al.

We consider the reinforcement learning (RL) setting in which the agent has to act in an unknown environment driven by a Markov Decision Process (MDP) with sparse or even reward-free signals. In this situation, exploration becomes the main challenge. In this work, we study the maximum entropy exploration problem in two different forms. The first is visitation entropy maximization, previously considered by Hazan et al. (2019) in the discounted setting. For this type of exploration, we propose an algorithm based on a game-theoretic representation that has 𝒪(H^3 S^2 A / Ρ^2) sample complexity, thus improving the Ρ-dependence of Hazan et al. (2019), where S is the number of states, A the number of actions, H the episode length, and Ρ the desired accuracy. The second type of entropy we study is the trajectory entropy. This objective is closely related to entropy-regularized MDPs, and we propose a simple modification of the UCBVI algorithm with sample complexity of order 𝒪(1/Ρ), ignoring the dependence on S, A, and H. Notably, this is the first theoretical result in the RL literature establishing that the exploration problem for regularized MDPs can be statistically strictly easier (in terms of sample complexity) than for ordinary MDPs.
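To make the visitation entropy objective concrete, here is a hedged sketch (not the paper's algorithm): for a fixed policy in a small tabular MDP, it rolls the state distribution forward for H steps and evaluates the entropy of the average visitation distribution, which a maximum entropy exploration method would seek to maximize over policies. The MDP sizes, transition kernel, and uniform policy below are illustrative choices.

```python
import numpy as np

# Illustrative tabular MDP: S states, A actions, horizon H.
S, A, H = 3, 2, 4
rng = np.random.default_rng(0)

# P[s, a] is a distribution over next states; pi[h, s] over actions.
P = rng.dirichlet(np.ones(S), size=(S, A))
pi = np.full((H, S, A), 1.0 / A)  # uniform policy as a placeholder

# d[h] is the state distribution at step h, starting from state 0.
d = np.zeros((H, S))
d[0, 0] = 1.0
for h in range(H - 1):
    # next-state distribution: sum_s d[h, s] * sum_a pi[h, s, a] * P[s, a, s']
    d[h + 1] = np.einsum('s,sa,sax->x', d[h], pi[h], P)

# Average visitation distribution over the episode and its entropy.
d_bar = d.mean(axis=0)
entropy = -np.sum(d_bar * np.log(d_bar + 1e-12))
print(f"visitation entropy: {entropy:.3f}")  # bounded above by log(S)
```

The entropy of any distribution over S states lies in [0, log S], so the printed value can be sanity-checked against log(3) ≈ 1.099; maximizing it over policies favors spreading visitation mass evenly across states.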


