Does Sparsity Help in Learning Misspecified Linear Bandits?

03/29/2023
by Jialin Dong, et al.

Recently, the study of misspecified linear bandits has generated intriguing implications for the hardness of learning in bandits and reinforcement learning (RL). In particular, Du et al. (2020) show that even if a learner is given linear features in ℝ^d that approximate the rewards in a bandit or RL with a uniform error of ε, finding an O(ε)-optimal action requires at least Ω(exp(d)) queries. Furthermore, Lattimore et al. (2020) show that a degraded O(ε√d)-optimal solution can be learned within poly(d/ε) queries. Yet it is unknown whether a structural assumption on the ground-truth parameter, such as sparsity, could break the ε√d barrier. In this paper, we address this question by showing that algorithms can obtain O(ε)-optimal actions by querying O(ε^{-s}d^s) actions, where s is the sparsity parameter, removing the exp(d)-dependence. We then establish information-theoretic lower bounds, i.e., Ω(exp(s)), to show that our upper bound on the sample complexity is nearly tight if one demands an error of O(s^δ ε) for 0 < δ < 1. For δ ≥ 1, we further show that poly(s/ε) queries are possible when the linear features are "good", and even in general settings. These results give a nearly complete picture of how sparsity can help in misspecified bandit learning and offer a deeper understanding of when linear features are "useful" for bandit and reinforcement learning with misspecification.
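For reference, the setting behind these bounds can be sketched as follows. This is a minimal summary of the standard misspecified sparse linear bandit formulation with assumed notation (θ*, φ, η, 𝒜); it restates the quantities named in the abstract and is not an excerpt from the paper.

% Misspecified s-sparse linear bandit (standard formulation; notation assumed):
%   a \in \mathcal{A} is an action with feature vector \phi(a) \in \mathbb{R}^d,
%   \varepsilon is the uniform misspecification level, s the sparsity of \theta^\ast.
r(a) = \langle \theta^\ast, \phi(a) \rangle + \eta(a),
\qquad \sup_{a \in \mathcal{A}} |\eta(a)| \le \varepsilon,
\qquad \|\theta^\ast\|_0 \le s.

% Query complexity for learning a near-optimal action, as stated in the abstract:
%   dense case:    \Omega(\exp(d)) queries needed for O(\varepsilon)-optimality (Du et al., 2020);
%   relaxed error: O(\varepsilon\sqrt{d})-optimality within \mathrm{poly}(d/\varepsilon) queries (Lattimore et al., 2020);
%   s-sparse case: O(\varepsilon^{-s} d^{s}) queries suffice for O(\varepsilon)-optimality,
%                  and \Omega(\exp(s)) queries are needed if the target error is O(s^{\delta}\varepsilon), 0 < \delta < 1.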
