KLUCB Approach to Copeland Bandits

by Nischal Agrawal, et al.

The multi-armed bandit (MAB) problem is a reinforcement learning framework in which an agent tries to maximise her reward by selecting actions based on absolute feedback for each action. The dueling bandits problem is a variant of the MAB problem in which the agent chooses a pair of actions and receives relative feedback on the chosen pair. Dueling bandits are well suited to settings where quantitative feedback for individual actions is unavailable but qualitative comparisons are natural, as in the case of human feedback. They have been successfully applied to online rank elicitation, information retrieval, search engine improvement and online clinical recommendation. We propose a new method, Sup-KLUCB, for the K-armed dueling bandit problem, specifically the Copeland bandit problem, by converting it into a standard MAB problem. Instead of running a MAB algorithm independently for each action in a pair, as in the Sparring and Self-Sparring algorithms, we combine each pair of actions and treat it as a single action. Previous UCB-style algorithms such as Relative Upper Confidence Bound (RUCB) apply only to Condorcet dueling bandits, whereas our algorithm applies to general Copeland dueling bandits, which include Condorcet dueling bandits as a special case. Empirically, our method outperforms the state-of-the-art Double Thompson Sampling (DTS) algorithm on Copeland dueling bandits.
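The core subroutine behind any KL-UCB-style method is computing, for each arm, the largest mean still compatible with the observed wins under a Kullback-Leibler confidence bound. Below is a minimal sketch of the standard Bernoulli KL-UCB index; the exploration bound log(t) and the function names are simplifications for illustration, not the exact formulation of Sup-KLUCB from the paper. In the dueling setting described above, each pair of actions (i, j) would play the role of one Bernoulli arm whose "reward" is the indicator that i beats j in the duel.

```python
import math

def bernoulli_kl(p, q, eps=1e-12):
    """KL divergence KL(Bernoulli(p) || Bernoulli(q))."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def klucb_index(p_hat, n, t, precision=1e-6):
    """Upper confidence index for an arm with empirical mean p_hat,
    pulled n times, at round t: the largest q in [p_hat, 1] such that
    n * KL(p_hat, q) <= log(t). Found by binary search, since
    KL(p_hat, .) is increasing on [p_hat, 1]."""
    bound = math.log(max(t, 1)) / n
    lo, hi = p_hat, 1.0
    while hi - lo > precision:
        mid = (lo + hi) / 2
        if bernoulli_kl(p_hat, mid) <= bound:
            lo = mid  # mid is still inside the confidence region
        else:
            hi = mid
    return lo

# Select the next "arm" (here: a hypothetical action pair) greedily
# by its KL-UCB index; wins/pulls would be updated from duel outcomes.
def select_arm(wins, pulls, t):
    indices = [klucb_index(w / max(n, 1), max(n, 1), t)
               for w, n in zip(wins, pulls)]
    return max(range(len(indices)), key=indices.__getitem__)
```

The index tightens toward the empirical mean as an arm accumulates pulls, which is what drives exploitation; arms pulled rarely keep a wide interval and remain candidates for exploration.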


