A Novel Confidence-Based Algorithm for Structured Bandits

by Andrea Tirinzoni et al.

We study finite-armed stochastic bandits where the rewards of each arm may be correlated with those of other arms. We introduce a novel phased algorithm that exploits the given structure to build confidence sets over the parameters of the true bandit problem and to rapidly discard all sub-optimal arms. In particular, unlike standard bandit algorithms without structure, we show that the number of times a sub-optimal arm is selected may actually be reduced thanks to the information collected by pulling other arms. Furthermore, we show that, in some structures, the regret of an anytime extension of our algorithm is uniformly bounded over time. For these constant-regret structures, we also derive a matching lower bound. Finally, we demonstrate numerically that our approach better exploits certain structures than existing methods.
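To illustrate the idea of sharing information across arms through a structural assumption, here is a minimal sketch of a phased elimination scheme for a *linearly* structured bandit, where each arm's mean reward is an inner product between a known feature vector and an unknown parameter. All feature vectors, constants, and the confidence width below are illustrative assumptions, not the paper's actual algorithm or analysis; the point is only that every pull refines a single shared estimate of the parameter, which in turn tightens the confidence intervals of *all* arms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear structure (illustrative values, not from the paper):
# the mean reward of arm i is X[i] @ theta_true.
X = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7], [0.5, -0.5]])
theta_true = np.array([0.6, 0.2])  # unknown to the learner
sigma = 0.1                        # reward noise level

def pull(i):
    """Sample a noisy reward for arm i."""
    return X[i] @ theta_true + rng.normal(0.0, sigma)

n_arms = len(X)
active = list(range(n_arms))
counts = np.zeros(n_arms)
sums = np.zeros(n_arms)

# Phased elimination: in each phase, pull every still-active arm a few
# times, fit theta by weighted least squares on the shared structure, and
# discard any arm whose optimistic estimate falls below another active
# arm's pessimistic estimate.
for phase in range(1, 8):
    for i in active:
        for _ in range(2 ** phase):
            sums[i] += pull(i)
            counts[i] += 1
    means = sums / counts  # every arm was pulled at least once in phase 1

    # Shared parameter estimate: information from *all* pulls, including
    # those of already-eliminated arms, contributes to theta_hat.
    A = X.T @ np.diag(counts) @ X
    theta_hat = np.linalg.solve(A, X.T @ (counts * means))
    mhat = X @ theta_hat

    # Per-arm confidence width x_i^T A^{-1} x_i, scaled by an illustrative
    # constant (a rigorous choice would carry a log factor).
    c = 3.0
    width = c * sigma * np.sqrt(np.einsum("ij,jk,ik->i", X, np.linalg.inv(A), X))

    best_lcb = max(mhat[j] - width[j] for j in active)
    active = [i for i in active if mhat[i] + width[i] >= best_lcb]

print("arms still active:", active)
```

Note how an arm can be eliminated without being pulled often itself: pulls of other arms shrink the shared estimate's uncertainty in the directions spanned by their features, which is the mechanism the abstract refers to when it says the number of sub-optimal pulls may be reduced by information collected elsewhere.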


