Expert Selection in High-Dimensional Markov Decision Processes

10/26/2020
by Vicenc Rubies Royo, et al.

In this work we present a multi-armed bandit framework for online expert selection in Markov decision processes and demonstrate its use in high-dimensional settings. Our method takes a set of candidate expert policies and switches between them to rapidly identify the best performing expert using a variant of the classical upper confidence bound algorithm, thus ensuring low regret in the overall performance of the system. This is useful in applications where several expert policies may be available, and one needs to be selected at run-time for the underlying environment.
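The switching mechanism described above can be illustrated with a classical UCB1 index over candidate experts. The sketch below is an assumption-laden toy, not the paper's exact variant: each "expert" is modeled as a callable that runs one episode of its policy and returns the episode reward (here simulated as Bernoulli draws with hypothetical means 0.2, 0.5, and 0.8), and the selector picks the expert maximizing empirical mean plus an exploration bonus.

```python
import math
import random

def ucb_expert_selection(experts, n_rounds, seed=0):
    """UCB1-style online expert selection (a sketch of the general idea).

    experts: list of callables, each taking an RNG and returning one
             episode's reward for that candidate policy.
    Returns per-expert play counts, per-expert reward sums, and the
    sequence of chosen expert indices.
    """
    rng = random.Random(seed)
    k = len(experts)
    counts = [0] * k      # times each expert has been selected
    sums = [0.0] * k      # cumulative reward per expert
    history = []
    for t in range(1, n_rounds + 1):
        if t <= k:
            i = t - 1     # initialization: play each expert once
        else:
            # UCB1 index: empirical mean + sqrt(2 ln t / n_i) bonus
            i = max(
                range(k),
                key=lambda j: sums[j] / counts[j]
                + math.sqrt(2.0 * math.log(t) / counts[j]),
            )
        r = experts[i](rng)
        counts[i] += 1
        sums[i] += r
        history.append(i)
    return counts, sums, history

# Toy stand-in environment: each expert's episode reward is Bernoulli
# with a fixed (hypothetical) success probability.
def make_expert(p):
    return lambda rng: 1.0 if rng.random() < p else 0.0

experts = [make_expert(p) for p in (0.2, 0.5, 0.8)]
counts, sums, _ = ucb_expert_selection(experts, n_rounds=2000)
```

After a few thousand rounds the best expert (mean 0.8) dominates the play counts while the others receive only logarithmically many pulls, which is the low-regret behavior the abstract refers to.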


Related research

- Robust Asymmetric Learning in POMDPs (12/31/2020)
  Policies for partially observed Markov decision processes can be efficie...

- On Online Learning in Kernelized Markov Decision Processes (11/04/2019)
  We develop algorithms with low regret for learning episodic Markov decis...

- VOI-aware MCTS (07/24/2012)
  UCT, a state-of-the-art algorithm for Monte Carlo tree search (MCTS) in ...

- MCTS Based on Simple Regret (07/23/2012)
  UCT, a state-of-the-art algorithm for Monte Carlo tree search (MCTS) in ...

- Active Exploration in Markov Decision Processes (02/28/2019)
  We introduce the active exploration problem in Markov decision processes...

- Contextual Markov Decision Processes using Generalized Linear Models (03/14/2019)
  We consider the recently proposed reinforcement learning (RL) framework ...

- Machine Self-Confidence in Autonomous Systems via Meta-Analysis of Decision Processes (10/15/2018)
  Algorithmic assurances from advanced autonomous systems assist human use...
