Does Sparsity Help in Learning Misspecified Linear Bandits?

by Jialin Dong et al.

Recently, the study of misspecified linear bandits has yielded intriguing implications for the hardness of learning in bandits and reinforcement learning (RL). In particular, Du et al. (2020) show that even if a learner is given linear features in ℝ^d that approximate the rewards in a bandit or RL problem with a uniform error of ε, finding an O(ε)-optimal action requires at least Ω(exp(d)) queries. Furthermore, Lattimore et al. (2020) show that a degraded O(ε√d)-optimal solution can be learned within poly(d/ε) queries. Yet it is unknown whether a structural assumption on the ground-truth parameter, such as sparsity, could break the ε√d barrier. In this paper, we address this question by showing that algorithms can obtain O(ε)-optimal actions by querying O(ε^-s d^s) actions, where s is the sparsity parameter, removing the exp(d)-dependence. We then establish information-theoretic lower bounds of Ω(exp(s)) to show that our upper bound on sample complexity is nearly tight if one demands an error of O(s^δ ε) for 0 < δ < 1. For δ ≥ 1, we further show that poly(s/ε) queries are possible when the linear features are "good", and even in general settings. These results give a nearly complete picture of how sparsity can help in misspecified bandit learning, and a deeper understanding of when linear features are "useful" for bandit and reinforcement learning with misspecification.
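To make the setting concrete, here is a minimal toy instance of a misspecified sparse linear bandit, matching the abstract's definitions (an s-sparse parameter in ℝ^d and a reward that deviates from linear by at most ε uniformly). This is an illustrative sketch only, not the paper's algorithm; all names and constants are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance (illustrative, not the paper's construction).
d, s, eps = 20, 3, 0.05   # ambient dimension, sparsity, misspecification level
K = 50                    # number of candidate actions

# Ground-truth parameter: s-sparse in R^d.
theta = np.zeros(d)
support = rng.choice(d, size=s, replace=False)
theta[support] = rng.uniform(0.5, 1.0, size=s)

# Feature vector for each action, plus a bounded misspecification term
# eta(a) with |eta(a)| <= eps uniformly over actions.
A = rng.normal(size=(K, d)) / np.sqrt(d)
misspec = rng.uniform(-eps, eps, size=K)

def expected_reward(i):
    """Mean reward of action i: linear part plus bounded error."""
    return A[i] @ theta + misspec[i]

def pull(i):
    """Noisy bandit feedback from pulling action i."""
    return expected_reward(i) + rng.normal(scale=0.1)

# An O(eps)-optimal action is one whose mean reward is within O(eps)
# of the best action's mean reward.
best = max(range(K), key=expected_reward)
```

The paper's upper bound says that, under sparsity, an O(ε)-optimal action in such an instance can be found with O(ε^-s d^s) queries to `pull`, rather than the Ω(exp(d)) queries needed without the sparsity assumption.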




Related papers:

- Learning with Good Feature Representations in Bandits and in RL with a Generative Model
- Minimax Policies for Combinatorial Prediction Games
- Uniform-PAC Guarantees for Model-Based RL with Bounded Eluder Dimension
- Is Reinforcement Learning More Difficult Than Bandits? A Near-optimal Algorithm Escaping the Curse of Horizon
- An Exponential Lower Bound for Linearly-Realizable MDPs with Constant Suboptimality Gap
- On the Statistical Efficiency of Reward-Free Exploration in Non-Linear RL
- Noisy searching: simple, fast and correct
