Active Exploration via Experiment Design in Markov Chains

by   Mojmir Mutny, et al.
ETH Zurich

A key challenge in science and engineering is to design experiments that teach us about some unknown quantity of interest. Classical experimental design optimally allocates the experimental budget to maximize a notion of utility (e.g., reduction in uncertainty about the unknown quantity). We consider a richer setting, where the experiments are associated with states of a Markov chain, and we can only choose them by selecting a policy controlling the state transitions. This problem captures important applications, from exploration in reinforcement learning to spatial monitoring tasks. We propose markov-design, an algorithm that efficiently selects policies whose measurement allocation provably converges to the optimal one. The algorithm is sequential in nature, adapting its choice of policies (experiments) based on past measurements. In addition to our theoretical analysis, we showcase our framework on applications in ecological surveillance and pharmacology.
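To make the classical starting point concrete, the sketch below shows a standard D-optimal experimental design computed with a Frank-Wolfe iteration: an allocation over candidate experiments is chosen to maximize the log-determinant of the resulting information matrix. This is a minimal illustration of the "classical experimental design" baseline the abstract refers to, not the markov-design algorithm itself; in the paper's setting the allocation would additionally be constrained to those realizable as visitation distributions of policies in a Markov chain. All features and names here are hypothetical.

```python
import numpy as np

# Hypothetical feature vectors for 5 candidate experiments (states).
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [0.5, 2.0],
              [2.0, 0.5]])

def log_det_utility(w, X, reg=1e-6):
    """Utility of an allocation w: log det of its information matrix."""
    M = X.T @ (w[:, None] * X) + reg * np.eye(X.shape[1])
    return np.linalg.slogdet(M)[1]

def frank_wolfe_design(X, iters=200, reg=1e-6):
    """Frank-Wolfe on the simplex for the D-optimal design objective."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)  # start from the uniform allocation
    for t in range(iters):
        M_inv = np.linalg.inv(X.T @ (w[:, None] * X) + reg * np.eye(d))
        # Gradient of log det w.r.t. w_i is x_i^T M^{-1} x_i.
        grads = np.einsum('ij,jk,ik->i', X, M_inv, X)
        i = int(np.argmax(grads))   # best vertex of the simplex
        gamma = 1.0 / (t + 2)       # standard Frank-Wolfe step size
        w = (1.0 - gamma) * w
        w[i] += gamma
    return w

w_opt = frank_wolfe_design(X)
```

The resulting allocation `w_opt` concentrates the budget on the most informative experiments; the paper's contribution is to achieve such allocations when experiments cannot be picked freely but only visited through a controlled Markov chain.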

