Generalized Regret Analysis of Thompson Sampling using Fractional Posteriors

by Prateek Jaiswal, et al.

Thompson sampling (TS) is one of the earliest and most popular algorithms for solving stochastic multi-armed bandit problems. We consider a variant of TS, named α-TS, in which we use a fractional or α-posterior (α∈(0,1)) instead of the standard posterior distribution. To compute an α-posterior, the likelihood in the definition of the standard posterior is tempered with a factor α. For α-TS we obtain both instance-dependent 𝒪(∑_{k ≠ i^*} Δ_k(log(T)/(C(α)Δ_k^2) + 1/2)) and instance-independent 𝒪(√(KT log K)) frequentist regret bounds under very mild conditions on the prior and reward distributions, where Δ_k is the gap between the true mean rewards of the k-th arm and the best arm, and C(α) is a known constant. Both sub-Gaussian and exponential-family models satisfy our general conditions on the reward distribution. Our conditions on the prior distribution require only that its density be positive, continuous, and bounded. We also establish another instance-dependent regret upper bound that matches (up to constants) that of improved UCB [Auer and Ortner, 2010]. Our regret analysis carefully combines recent theoretical developments in non-asymptotic concentration analysis and Bernstein–von Mises-type results for the α-posterior distribution. Moreover, our analysis does not require additional structural properties such as closed-form posteriors or conjugate priors.
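To make the tempering idea concrete, here is a minimal sketch of α-TS in the simplest setting the abstract mentions: Gaussian rewards with known variance and a conjugate Gaussian prior, where tempering the likelihood by α simply scales the data precision by α, so the α-posterior remains Gaussian and can be sampled in closed form. The paper's analysis does not require conjugacy; this toy example and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def alpha_ts(means, sigma=1.0, alpha=0.5, T=2000, seed=0,
             prior_mean=0.0, prior_var=100.0):
    """Simulate alpha-Thompson sampling on a Gaussian K-armed bandit.

    Illustrative sketch: conjugate Gaussian prior, known reward
    variance sigma^2, likelihood tempered by alpha in (0, 1).
    """
    rng = np.random.default_rng(seed)
    K = len(means)
    n = np.zeros(K)                 # pull counts per arm
    s = np.zeros(K)                 # reward sums per arm
    pulls = np.zeros(K, dtype=int)
    for _ in range(T):
        # alpha-posterior of each arm's mean: raising the Gaussian
        # likelihood to the power alpha multiplies the data precision
        # (n / sigma^2) by alpha before combining with the prior.
        post_prec = 1.0 / prior_var + alpha * n / sigma**2
        post_mean = (prior_mean / prior_var + alpha * s / sigma**2) / post_prec
        # Sample one mean per arm from its alpha-posterior, play argmax.
        theta = rng.normal(post_mean, 1.0 / np.sqrt(post_prec))
        k = int(np.argmax(theta))
        r = rng.normal(means[k], sigma)
        n[k] += 1
        s[k] += r
        pulls[k] += 1
    return pulls

pulls = alpha_ts([0.0, 0.5, 1.0])
print(pulls)  # the best arm (index 2) should receive most pulls
```

Setting α = 1 recovers standard TS; smaller α flattens the posterior, trading slower concentration for the robustness properties of fractional posteriors.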


