Continuous Time Bandits With Sampling Costs

by Rahul Vaze, et al.

We consider a continuous-time multi-arm bandit problem (CTMAB), in which the learner can sample arms any number of times in a given interval and obtains a random reward from each sample; however, increasing the sampling frequency incurs an additive penalty/cost. Thus, there is a tradeoff between obtaining a large reward and incurring sampling cost, as a function of the sampling frequency. The goal is to design a learning algorithm that minimizes regret, defined as the difference between the payoff of the oracle policy and that of the learning algorithm. CTMAB is fundamentally different from the usual multi-arm bandit problem (MAB); e.g., even the single-arm case is non-trivial in CTMAB, since the optimal sampling frequency depends on the mean of the arm, which must be estimated. We first establish lower bounds on the regret achievable by any algorithm, and then propose algorithms that achieve these bounds up to logarithmic factors. For the single-arm case, we show that the lower bound on the regret is Ω((log T)²/μ), where μ is the mean of the arm and T is the time horizon. For the multi-arm case, we show that the lower bound on the regret is Ω((log T)² μ/Δ²), where μ now denotes the mean of the best arm and Δ is the difference between the means of the best and second-best arms. We then propose an algorithm that achieves this bound up to constant factors.
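To see why the optimal sampling frequency depends on the unknown mean, consider a minimal sketch (this is an illustrative cost model of my own choosing, not necessarily the paper's exact formulation): suppose sampling an arm of mean μ at frequency f earns expected reward at rate μ·f but pays a cost that grows with frequency, say c·f². The payoff rate is then maximized at f* = μ/(2c), which cannot be computed without knowing μ.

```python
def payoff_rate(mu: float, c: float, f: float) -> float:
    """Expected payoff per unit time at sampling frequency f,
    under the assumed model: reward rate mu*f minus cost c*f**2."""
    return mu * f - c * f ** 2

def optimal_frequency(mu: float, c: float) -> float:
    """Maximizer of payoff_rate: d/df (mu*f - c*f^2) = 0  =>  f = mu/(2c).
    Depends on mu, so the learner must estimate mu to sample optimally."""
    return mu / (2 * c)

mu, c = 0.8, 0.1
f_star = optimal_frequency(mu, c)   # 4.0 for these values
# Sampling too slowly (f=2) or too fast (f=6) both lose payoff:
assert payoff_rate(mu, c, f_star) >= payoff_rate(mu, c, 2.0)
assert payoff_rate(mu, c, f_star) >= payoff_rate(mu, c, 6.0)
```

This captures the single-arm tension the abstract describes: the regret of any learner stems from sampling at a frequency tuned to an estimate of μ rather than to μ itself.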


