Minimal Exploration in Structured Stochastic Bandits

11/01/2017
by Richard Combes, et al.

This paper introduces and addresses a wide class of stochastic bandit problems in which the function mapping each arm to its expected reward exhibits some known structural property. Most existing structures (e.g. linear, Lipschitz, unimodal, combinatorial, dueling, ...) are covered by our framework. We derive an asymptotic instance-specific regret lower bound for these problems, and develop OSSB, an algorithm whose regret matches this fundamental limit. OSSB is not based on the classical principle of "optimism in the face of uncertainty" or on Thompson sampling; rather, it aims at matching the minimal exploration rates of sub-optimal arms as characterized in the derivation of the regret lower bound. We illustrate the efficiency of OSSB using numerical experiments on the linear bandit problem and show that OSSB outperforms existing algorithms, including Thompson sampling.
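For readers unfamiliar with this type of result, instance-specific lower bounds for structured bandits are usually stated in the Graves–Lai form sketched below; the notation (true parameter θ, mean rewards μ(x, θ), KL divergences d(x, θ, λ), exploration rates η(x)) is assumed here for illustration and is not quoted from the abstract:

\liminf_{T \to \infty} \frac{R(T)}{\log T} \;\ge\; C(\theta), \qquad
C(\theta) \;=\; \min_{\eta \ge 0} \; \sum_{x} \eta(x)\,\big(\mu^{\star}(\theta) - \mu(x,\theta)\big)
\quad \text{s.t.} \quad \sum_{x} \eta(x)\, d(x,\theta,\lambda) \;\ge\; 1 \;\;\text{for all confusing } \lambda,

where a parameter λ is "confusing" if it is statistically indistinguishable from θ on the optimal arm yet makes a different arm optimal. Under this reading, the rates η(x) solving the optimization are the minimal exploration rates that OSSB, as described above, aims to track.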


