A Survey of Exploration Methods in Reinforcement Learning

by Susan Amin et al.
University of Amsterdam
Montréal Institute for Learning Algorithms (Mila)
McGill University

Exploration is an essential component of reinforcement learning algorithms: agents must learn to predict and control unknown and often stochastic environments. Reinforcement learning agents depend crucially on exploration to obtain informative data, since insufficient information can hinder effective learning. In this article, we provide a survey of modern exploration methods in (sequential) reinforcement learning, as well as a taxonomy of exploration methods.
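As a concrete point of reference for the kind of methods such a taxonomy covers, the classic epsilon-greedy rule is a minimal randomized exploration strategy: with a small probability the agent acts at random, otherwise it acts greedily with respect to its current value estimates. The sketch below is illustrative only; the function name and interface are not taken from the paper.

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """Epsilon-greedy action selection (illustrative sketch).

    With probability `epsilon`, return a uniformly random action index;
    otherwise return the index of the highest estimated action value.
    """
    if rng.random() < epsilon:
        # Explore: pick any action uniformly at random.
        return rng.randrange(len(q_values))
    # Exploit: pick the greedy action under the current estimates.
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

With `epsilon=0` this reduces to purely greedy (exploitative) behavior; raising `epsilon` trades estimated reward for more informative data.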




Related papers:

- Some Considerations on Learning to Explore via Meta-Reinforcement Learning
- Exploration and Incentives in Reinforcement Learning
- A Short Survey on Probabilistic Reinforcement Learning
- Derivative-Free Reinforcement Learning: A Review
- The Dreaming Variational Autoencoder for Reinforcement Learning Environments
- Reinforcement Learning with Human Advice: A Survey
- A Conceptual Framework for Externally-influenced Agents: An Assisted Reinforcement Learning Review
