Stochastic One-Sided Full-Information Bandit

06/20/2019
by Haoyu Zhao, et al.

In this paper, we study the stochastic version of the one-sided full-information bandit problem, in which there are K arms [K] = {1, 2, ..., K}, and playing arm i gains a reward drawn from an unknown distribution for arm i while also revealing reward feedback for all arms j > i. The one-sided full-information bandit models online repeated second-price auctions, where the auctioneer selects a reserve price in each round and the bidders reveal their bids only when the bids exceed the reserve price. In this paper, we present an elimination-based algorithm to solve the problem. Our elimination-based algorithm achieves a distribution-independent regret upper bound of O(√(T·log(TK))) and a distribution-dependent bound of O((log T + K)·f(Δ)), where T is the time horizon, Δ is the vector of gaps between the mean rewards of the arms and the mean reward of the best arm, and f(Δ) is a formula depending on the gap vector that we specify in detail. Our algorithm has the best theoretical regret upper bound so far. We also validate our algorithm empirically against other possible alternatives.
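To make the feedback structure concrete, the following is a minimal simulation sketch of a generic elimination-style strategy under this one-sided feedback model. It is not the paper's exact algorithm: the Bernoulli reward model, the Hoeffding-style confidence radius, and the function name elimination_one_sided are all illustrative assumptions. The structural point it shows is that playing the lowest-indexed active arm yields feedback on every remaining candidate at once, which is what makes elimination natural here.

```python
import numpy as np

def elimination_one_sided(means, T, rng=None):
    """Illustrative elimination-style learner for the one-sided
    full-information bandit (a hypothetical sketch, not the paper's
    exact algorithm). Playing arm i yields its own reward plus
    reward feedback for all arms j > i, so pulling the lowest
    active arm observes every remaining candidate each round.

    `means` holds the unknown Bernoulli means; the simulator uses
    them to draw rewards, but the learner only sees samples.
    """
    rng = np.random.default_rng() if rng is None else rng
    means = np.asarray(means, dtype=float)
    K = len(means)
    active = list(range(K))              # surviving candidate arms
    counts = np.zeros(K)                 # observations per arm
    sums = np.zeros(K)                   # running reward sums
    regret = 0.0

    for _ in range(T):
        i = min(active)                  # play the lowest active arm
        regret += means.max() - means[i]
        obs = (rng.random(K) < means).astype(float)  # Bernoulli draws
        for j in active:                 # one-sided feedback covers all of these
            counts[j] += 1
            sums[j] += obs[j]
        # Hoeffding-style confidence radius (an assumed choice)
        rad = np.sqrt(np.log(2 * T * K) / (2 * counts[active]))
        mean_hat = sums[active] / counts[active]
        best_lcb = (mean_hat - rad).max()
        # drop arms whose upper bound falls below the best lower bound
        active = [a for a, u in zip(active, mean_hat + rad) if u >= best_lcb]
    return regret
```

For instance, elimination_one_sided([0.3, 0.5, 0.7], T=10000) returns the cumulative regret of one simulated run; the arm attaining the best lower confidence bound always survives, so the active set never empties.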
