Bypassing the Monster: A Faster and Simpler Optimal Algorithm for Contextual Bandits under Realizability
We consider the general (stochastic) contextual bandit problem under the realizability assumption, i.e., the expected reward, as a function of contexts and actions, belongs to a general function class F. We design a fast and simple algorithm that achieves the statistically optimal regret with only O(log T) calls to an offline least-squares regression oracle across all T rounds (the number of oracle calls can be further reduced to O(log log T) if T is known in advance). Our algorithm provides the first universal and optimal reduction from contextual bandits to offline regression, solving an important open problem for the realizable setting of contextual bandits. Our algorithm is also the first provably optimal contextual bandit algorithm with a logarithmic number of oracle calls.
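The abstract does not spell out the mechanism, but reductions of this kind typically combine an epoch-based refitting schedule (which yields the logarithmic number of oracle calls) with a randomized action distribution that downweights actions by their predicted reward gap. The following is a minimal synthetic sketch under those assumptions, using a linear function class and an ordinary least-squares fit as a stand-in for the offline regression oracle; the epoch schedule, learning-rate choice, and feature model are all illustrative, not the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(0)
K, d, T = 4, 3, 4000              # actions, feature dimension, rounds
theta = rng.normal(size=d)        # true reward parameter (synthetic)
theta /= np.linalg.norm(theta)

X, Y = [], []                     # dataset fed to the offline regression oracle
theta_hat = np.zeros(d)           # current oracle output
oracle_calls = 0
next_refit = 1                    # doubling epoch schedule: refit at t = 1, 2, 4, 8, ...
gamma = 1.0

for t in range(1, T + 1):
    if t == next_refit:
        if X:
            # offline least-squares regression oracle (here: ordinary lstsq)
            theta_hat, *_ = np.linalg.lstsq(np.array(X), np.array(Y), rcond=None)
        oracle_calls += 1
        gamma = np.sqrt(K * t)    # epoch-dependent learning rate (illustrative choice)
        next_refit *= 2

    phi = rng.normal(size=(K, d)) / np.sqrt(d)  # context: one feature vector per action
    preds = phi @ theta_hat
    best = int(np.argmax(preds))
    # inverse-gap weighting: play an action with probability that shrinks
    # as its predicted regret gap to the greedy action grows
    p = 1.0 / (K + gamma * (preds[best] - preds))
    p[best] = 0.0
    p[best] = 1.0 - p.sum()       # remaining mass goes to the greedy action
    a = rng.choice(K, p=p)
    r = phi[a] @ theta + 0.1 * rng.normal()     # noisy realized reward
    X.append(phi[a])
    Y.append(r)

print("oracle calls:", oracle_calls)            # grows like log2(T), not T
```

Note that the regression oracle is invoked only at the start of each (doubling) epoch, so over T = 4000 rounds it runs 12 times rather than 4000; all per-round work is a prediction and a draw from the inverse-gap-weighted distribution.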