Thresholded LASSO Bandit

10/22/2020
by Kaito Ariu, et al.

In this paper, we revisit sparse stochastic contextual linear bandits. In these problems, feature vectors may be of large dimension d, but the reward function depends only on a few, say s_0, of these features. We present the Thresholded LASSO bandit, an algorithm that (i) estimates the vector defining the reward function, as well as its sparse support, using the LASSO framework with thresholding, and (ii) selects an arm greedily according to this estimate projected on its support. The algorithm does not require prior knowledge of the sparsity index s_0. For this simple algorithm, we establish non-asymptotic regret upper bounds scaling as 𝒪(log d + √(T log T)) in general, and as 𝒪(log d + log T) under the so-called margin condition (a setting where arms are well separated). The regret of previous algorithms scales as 𝒪(√T log(d T)) and 𝒪(log T log d) in the two settings, respectively. Through numerical experiments, we confirm that our algorithm outperforms existing methods.
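The two steps described in the abstract can be illustrated with a minimal simulation. The sketch below is not the authors' implementation: the environment (Gaussian contexts, a hand-picked sparse theta, noise level 0.1), the regularization schedule `lam`, and the thresholding level are all illustrative assumptions. It uses scikit-learn's `Lasso` for step (i) and a greedy argmax over the support-projected estimate for step (ii).

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

d, s0, T, K = 20, 2, 300, 10      # dimension, sparsity, rounds, arms (assumed values)
theta = np.zeros(d)
theta[:s0] = [2.0, -1.5]          # true sparse parameter, unknown to the algorithm

X_hist, y_hist = [], []
est = np.zeros(d)                 # current LASSO estimate of theta
support = np.array([], dtype=int) # current estimated support

for t in range(T):
    arms = rng.standard_normal((K, d))           # K fresh context vectors this round
    # Step (ii): greedy arm w.r.t. the estimate projected on its support.
    proj = np.zeros(d)
    proj[support] = est[support]
    a = int(np.argmax(arms @ proj))
    x = arms[a]
    r = x @ theta + 0.1 * rng.standard_normal()  # noisy linear reward
    X_hist.append(x)
    y_hist.append(r)

    # Step (i): LASSO fit with a decaying regularization level, then thresholding.
    # The schedule lam ~ sqrt(log d / t) is a common choice; the constant is arbitrary.
    lam = 0.5 * np.sqrt(np.log(d) / (t + 1))
    lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
    lasso.fit(np.array(X_hist), np.array(y_hist))
    est = lasso.coef_
    support = np.flatnonzero(np.abs(est) > lam)  # keep coordinates above the threshold

print(sorted(support.tolist()))
```

Note that, as the abstract states, the algorithm never uses s_0: the support is discovered adaptively by shrinking the regularization level and re-thresholding each round.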
