No Regrets for Learning the Prior in Bandits

07/13/2021
by Soumya Basu, et al.

We propose AdaTS, a Thompson sampling algorithm that adapts sequentially to bandit tasks that it interacts with. The key idea in AdaTS is to adapt to an unknown task prior distribution by maintaining a distribution over its parameters. When solving a bandit task, that uncertainty is marginalized out and properly accounted for. AdaTS is a fully-Bayesian algorithm that can be implemented efficiently in several classes of bandit problems. We derive upper bounds on its Bayes regret that quantify the loss due to not knowing the task prior, and show that it is small. Our theory is supported by experiments, where AdaTS outperforms prior algorithms and works well even in challenging real-world problems.
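The abstract does not spell out the algorithm, but the core idea can be illustrated concretely. Below is a minimal sketch, in a Gaussian-Gaussian setting, of hierarchical Thompson sampling across a sequence of bandit tasks: arm means are drawn from an unknown task prior N(mu_star, sigma0^2), the learner maintains a Gaussian hyper-posterior N(nu, q2) over the prior mean, and within each task it samples from a marginal posterior that folds that hyper-uncertainty in. All parameter values (K, horizon, sigma, sigma0) and the simplified per-arm hyper-posterior update are illustrative assumptions for this sketch, not the paper's exact AdaTS updates.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative model parameters (assumptions, not values from the paper) ---
K = 5            # arms per task
n_tasks = 200    # number of sequential bandit tasks
horizon = 100    # rounds per task
sigma = 1.0      # known reward-noise std
sigma0 = 0.5     # known std of arm means under the task prior
mu_star = 1.0    # unknown prior mean the learner must adapt to

# Hyper-prior over the unknown prior mean: mu_star ~ N(nu, q2)
nu, q2 = 0.0, 10.0

for task in range(n_tasks):
    # A fresh bandit task, sampled from the (unknown) task prior
    theta = rng.normal(mu_star, sigma0, size=K)

    # Key idea: the effective prior on each arm marginalizes out the uncertain
    # prior mean, giving N(nu, q2 + sigma0^2) rather than the unknown
    # N(mu_star, sigma0^2). (For simplicity, arms are treated independently
    # within a task, which is an approximation to the full model.)
    prior_v = q2 + sigma0**2
    m = np.full(K, nu)        # per-arm posterior means
    v = np.full(K, prior_v)   # per-arm posterior variances
    counts = np.zeros(K)
    sums = np.zeros(K)

    for t in range(horizon):
        # Thompson sampling: sample each arm's mean, pull the argmax
        arm = np.argmax(rng.normal(m, np.sqrt(v)))
        r = rng.normal(theta[arm], sigma)
        counts[arm] += 1
        sums[arm] += r
        # Conjugate Gaussian posterior update for the pulled arm
        v[arm] = 1.0 / (1.0 / prior_v + counts[arm] / sigma**2)
        m[arm] = v[arm] * (nu / prior_v + sums[arm] / sigma**2)

    # After the task, update the hyper-posterior over the prior mean.
    # Each pulled arm's empirical mean is a noisy observation of mu_star
    # with variance sigma0^2 + sigma^2 / n_i (exact in this Gaussian model,
    # since arms are conditionally independent given mu_star).
    for i in range(K):
        if counts[i] > 0:
            obs_var = sigma0**2 + sigma**2 / counts[i]
            q2_new = 1.0 / (1.0 / q2 + 1.0 / obs_var)
            nu = q2_new * (nu / q2 + (sums[i] / counts[i]) / obs_var)
            q2 = q2_new

print(f"estimated prior mean: {nu:.3f} (true: {mu_star}), hyper-posterior var: {q2:.4f}")
```

The inflated prior variance q2 + sigma0^2 is what separates this from standard Thompson sampling with a fixed, possibly misspecified prior: early tasks explore more to account for prior uncertainty, and as the hyper-posterior concentrates, behavior approaches Thompson sampling under the true task prior, consistent with the abstract's claim that the regret lost to not knowing the prior is small.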


Related research

02/25/2022 · Meta-Learning for Simple Regret Minimization
We develop a meta-learning framework for simple regret minimization in b...

11/12/2021 · Hierarchical Bayesian Bandits
Meta-, multi-task, and federated learning can be all viewed as solving s...

04/04/2019 · Empirical Bayes Regret Minimization
The prevalent approach to bandit algorithm design is to have a low-regre...

03/06/2023 · Thompson Sampling for Linear Bandit Problems with Normal-Gamma Priors
We consider Thompson sampling for linear bandit problems with finitely m...

10/27/2022 · Lifelong Bandit Optimization: No Prior and No Regret
In practical applications, machine learning algorithms are often repeate...

12/01/2021 · Efficient Online Bayesian Inference for Neural Bandits
In this paper we present a new algorithm for online (sequential) inferen...

01/12/2023 · Thompson Sampling with Diffusion Generative Prior
In this work, we initiate the idea of using denoising diffusion models t...
