Learning to Optimize under Non-Stationarity

10/06/2018
by Wang Chi Cheung, et al.

We introduce algorithms that achieve state-of-the-art dynamic regret bounds for the non-stationary linear stochastic bandit setting, which captures natural applications such as dynamic pricing and ad allocation in a changing environment. We show how the difficulty posed by the (possibly adversarial) non-stationarity can be overcome by a novel marriage between stochastic and adversarial bandit learning algorithms. Let d, B_T, and T denote the problem dimension, the variation budget, and the total time horizon, respectively. Our main contributions are the tuned Sliding Window UCB (SW-UCB) algorithm, which attains the optimal O(d^{2/3}(B_T+1)^{1/3}T^{2/3}) dynamic regret, and the tuning-free Bandit-over-Bandit (BOB) framework, built on top of SW-UCB, which attains O(d^{2/3}(B_T+1)^{1/4}T^{3/4}) dynamic regret without requiring knowledge of B_T.
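To make the sliding-window idea concrete, here is a minimal Python sketch of a windowed UCB selection rule for linear bandits, in the spirit of SW-UCB as the abstract describes it. The window length w, regularizer lam, confidence width beta, and the function name sw_ucb_action are illustrative placeholders, not the paper's tuned choices or exact algorithm.

    import numpy as np

    def sw_ucb_action(history, arms, w=50, lam=1.0, beta=1.0):
        """Pick an arm from `arms` (n x d array of feature vectors) using
        only the last `w` observations in `history` = [(x, reward), ...]."""
        d = arms.shape[1]
        V = lam * np.eye(d)            # regularized design matrix
        b = np.zeros(d)
        for x, r in history[-w:]:      # sliding window: discard stale data
            V += np.outer(x, x)
            b += r * x
        V_inv = np.linalg.inv(V)
        theta_hat = V_inv @ b          # windowed ridge-regression estimate
        # Optimistic index: estimated reward plus exploration bonus x^T V^{-1} x
        ucb = arms @ theta_hat + beta * np.sqrt(
            np.einsum('ij,jk,ik->i', arms, V_inv, arms))
        return int(np.argmax(ucb))

The window is what lets the learner track a drifting parameter: old observations fall out of the estimate rather than biasing it. In the BOB framework sketched in the abstract, an adversarial bandit master would tune the window length w online instead of fixing it in advance.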
