Algorithms for slate bandits with non-separable reward functions

04/21/2020
by Jason Rhuggenaath, et al.

In this paper, we study a slate bandit problem in which the function that determines the slate-level reward is non-separable: the optimal value of the function cannot be determined by learning the optimal action for each slot individually. We are mainly concerned with cases where the number of slates is large relative to the time horizon, so that treating each slate as a separate arm in a traditional multi-armed bandit would not be feasible. Our main contribution is the design of algorithms that still achieve sub-linear regret with respect to the time horizon, despite the large number of slates. Experimental results on simulated data and real-world data show that our proposed method outperforms popular benchmark bandit algorithms.
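To make the notion of non-separability concrete, the following is a minimal Python sketch with a hypothetical two-slot toy example. The reward function and all names below are illustrative assumptions, not taken from the paper; the point is only that when slot actions interact, optimizing each slot in isolation can miss the best slate.

import itertools

# Toy illustration (hypothetical, not from the paper): a slate has two slots,
# each offering 3 actions. The slate-level reward is NON-separable: it depends
# on the combination of actions, so the optimal slate cannot be found by
# learning the best action for each slot in isolation.

ACTIONS_PER_SLOT = [3, 3]

def slate_reward(slate):
    """Illustrative non-separable reward: actions interact across slots."""
    a, b = slate
    base = 0.2 * a + 0.2 * b                        # separable component
    interaction = 0.8 if (a, b) == (0, 2) else 0.0  # non-separable component
    return base + interaction

# Exhaustive search over all slates (feasible only in tiny examples; the paper
# targets settings where the number of slates is too large for this).
all_slates = list(itertools.product(*[range(n) for n in ACTIONS_PER_SLOT]))
best_slate = max(all_slates, key=slate_reward)

# Per-slot "greedy" baseline: pick the action with the best average reward
# for each slot separately, then combine the per-slot winners into a slate.
greedy_slate = []
for slot, n_actions in enumerate(ACTIONS_PER_SLOT):
    def marginal(action, slot=slot):
        matching = [s for s in all_slates if s[slot] == action]
        return sum(slate_reward(s) for s in matching) / len(matching)
    greedy_slate.append(max(range(n_actions), key=marginal))

print("optimal slate:  ", best_slate, slate_reward(best_slate))
print("per-slot greedy:", tuple(greedy_slate), slate_reward(tuple(greedy_slate)))

In this toy setting the per-slot greedy choice is the slate (2, 2) with reward 0.8, while the true optimum (0, 2) has reward 1.2, which is why non-separable rewards call for slate-level exploration rather than independent per-slot learning.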

