Taking a hint: How to leverage loss predictors in contextual bandits?

03/04/2020
by Chen-Yu Wei, et al.

We initiate the study of learning in contextual bandits with the help of loss predictors. The main question we address is whether one can improve over the minimax regret O(√T) for learning over T rounds when the total error of the predictor, E ≤ T, is relatively small. We provide a complete answer to this question, including upper and lower bounds for various settings: adversarial versus stochastic environments, known versus unknown E, and single versus multiple predictors. We show several surprising results, such as 1) the optimal regret is O(min{√T, √E · T^{1/4}}) when E is known, a sharp contrast to the standard and better bound O(√E) for non-contextual problems (such as multi-armed bandits); 2) the same bound cannot be achieved if E is unknown, but as a remedy, O(√E · T^{1/3}) is achievable; 3) with M predictors, a linear dependence on M is necessary, even though logarithmic dependence is possible for non-contextual problems. We also develop several novel algorithmic techniques to achieve matching upper bounds, including 1) a key action-remapping technique for optimal regret with known E; 2) an efficient implementation of Catoni's robust mean estimator via an ERM oracle, leading to an efficient algorithm in the stochastic setting with optimal regret; 3) an underestimator of E constructed by estimating a histogram with bins of exponentially increasing size, for the stochastic setting with unknown E; and 4) a self-referential scheme for learning with multiple predictors, all of which might be of independent interest.
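The abstract mentions Catoni's robust mean estimator as one of the building blocks; the paper's contribution is an oracle-efficient implementation via an ERM oracle, which is not reproduced here. As background only, the following is a minimal Python sketch of the basic (non-oracle) Catoni estimator: the estimate is the root μ of Σ_i ψ(α(x_i − μ)) = 0 with the logarithmic influence function ψ, found by bisection. The function names and the choice of α below are illustrative assumptions, not taken from the paper.

```python
import math

def _psi(x):
    # Catoni's influence function: non-decreasing, grows only logarithmically,
    # so a single outlier has bounded influence on the estimate.
    if x >= 0:
        return math.log(1.0 + x + 0.5 * x * x)
    return -math.log(1.0 - x + 0.5 * x * x)

def catoni_mean(samples, alpha, tol=1e-9, max_iter=200):
    """Return the root mu of sum_i psi(alpha * (x_i - mu)) = 0.

    The sum is strictly decreasing in mu, so bisection converges.
    alpha > 0 trades bias against robustness; a common textbook choice is
    roughly sqrt(2 * log(1/delta) / (n * variance)) for confidence delta.
    """
    if not samples:
        raise ValueError("need at least one sample")

    def score(mu):
        return sum(_psi(alpha * (x - mu)) for x in samples)

    # Bracket the root: score > 0 below the smallest sample, < 0 above the largest.
    lo, hi = min(samples) - 1.0, max(samples) + 1.0
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    data = [0.1, 0.2, 0.15, 0.18, 0.12, 50.0]  # one gross outlier
    print("empirical mean :", sum(data) / len(data))
    print("Catoni estimate:", catoni_mean(data, alpha=1.0))
```

The contrast with the empirical mean on the toy data above illustrates why a robust estimator is useful when averaging heavy-tailed or corrupted loss estimates; how the paper makes this oracle-efficient in the contextual bandit setting is described in the full text.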
