Pessimism for Offline Linear Contextual Bandits using ℓ_p Confidence Sets
We present a family {π̂_p}_{p ≥ 1} of pessimistic learning rules for offline learning in linear contextual bandits, based on confidence sets with respect to different ℓ_p norms. Here π̂_2 corresponds to Bellman-consistent pessimism (BCP), while π̂_∞ is a novel generalization of the lower confidence bound (LCB) to the linear setting. We show that the π̂_∞ learning rule is, in a sense, adaptively optimal: it achieves the minimax performance (up to log factors) against all ℓ_q-constrained problems, and as such it strictly dominates every other predictor in the family, including π̂_2.
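To make the pessimism principle concrete, the following is a minimal sketch of the familiar ℓ_2-bonus pessimistic rule (in the spirit of π̂_2/BCP), not the paper's π̂_∞ rule: fit a ridge-regression estimate θ̂ from logged data, then pick the action maximizing the lower confidence bound φᵀθ̂ − β‖φ‖_{Λ⁻¹}. All data, dimensions, and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, n_actions = 3, 500, 4

# Hypothetical synthetic offline data: feature vectors phi(x_i, a_i) in R^d,
# rewards r_i = phi_i^T theta* + noise (theta* unknown to the learner).
theta_star = rng.normal(size=d)
Phi = rng.normal(size=(n, d))                     # logged features
R = Phi @ theta_star + 0.1 * rng.normal(size=n)   # logged rewards

# Ridge regression: Lambda = lam*I + sum_i phi_i phi_i^T, theta_hat = Lambda^{-1} Phi^T R.
lam = 1.0
Lambda = lam * np.eye(d) + Phi.T @ Phi
theta_hat = np.linalg.solve(Lambda, Phi.T @ R)

def pessimistic_action(features, beta=1.0):
    """Choose the action maximizing the l2-bonus lower confidence bound
    phi^T theta_hat - beta * sqrt(phi^T Lambda^{-1} phi)."""
    Lambda_inv = np.linalg.inv(Lambda)
    bonus = np.sqrt(np.einsum("ad,dk,ak->a", features, Lambda_inv, features))
    lcb = features @ theta_hat - beta * bonus
    return int(np.argmax(lcb))

# Candidate feature vectors phi(x, a) for each action at one context x.
candidates = rng.normal(size=(n_actions, d))
a = pessimistic_action(candidates)
```

The ℓ_∞ variant studied in the paper replaces the ℓ_2 (Mahalanobis) confidence set with an ℓ_∞-norm one, which changes the shape of the penalty; the selection-by-lower-confidence-bound structure stays the same.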