The bias of the sample mean in multi-armed bandits can be positive or negative

05/27/2019
by Jaehyeok Shin, et al.

It is well known that in stochastic multi-armed bandits (MAB), the sample mean of an arm is typically not an unbiased estimator of its true mean. In this paper, we decouple three different sources of this selection bias: adaptive sampling of arms, adaptive stopping of the experiment, and adaptively choosing which arm to study. Through a new notion called "optimism" that captures certain natural monotonic behaviors of algorithms, we provide a clean and unified analysis of how optimistic rules affect the sign of the bias. The main takeaway message is that optimistic sampling induces a negative bias, whereas optimistic stopping and optimistic choosing both induce a positive bias. These results are derived in a general stochastic MAB setup that is entirely agnostic to the final aim of the experiment (regret minimization, best-arm identification, or anything else). We provide examples of optimistic rules of each type, demonstrate that simulations confirm our theoretical predictions, and pose some natural but hard open problems.
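
As a concrete illustration of the first claim (optimistic sampling inducing a negative bias), the following minimal sketch simulates a greedy sampling rule in a two-armed Gaussian bandit and estimates the bias of each arm's sample mean by Monte Carlo. The arm means, horizon, and greedy rule are illustrative assumptions for this sketch, not the authors' exact experimental setup.

```python
# Minimal sketch (assumed setup, not the paper's experiments): estimate the sign
# of the bias of each arm's sample mean under greedy ("optimistic") sampling.
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.0, 0.0])   # identical arms, so any bias is a pure selection effect
horizon, n_trials = 50, 20000

bias_accum = np.zeros(2)
for _ in range(n_trials):
    counts = np.zeros(2)
    sums = np.zeros(2)
    # pull each arm once, then sample greedily by current sample mean
    for arm in (0, 1):
        sums[arm] += rng.normal(true_means[arm], 1.0)
        counts[arm] += 1
    for _ in range(horizon - 2):
        arm = int(np.argmax(sums / counts))       # greedy = an "optimistic" sampling rule
        sums[arm] += rng.normal(true_means[arm], 1.0)
        counts[arm] += 1
    bias_accum += sums / counts - true_means      # error of each arm's final sample mean

print("estimated bias of each arm's sample mean:", bias_accum / n_trials)
# Both entries come out negative, consistent with "optimistic sampling induces
# a negative bias": an arm whose early samples look unluckily low gets frozen
# at that low estimate, while an unluckily high arm keeps being sampled and
# regresses back toward its true mean.
```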
