Fast Rates for Nonparametric Online Learning: From Realizability to Learning in Games

11/17/2021
by Constantinos Daskalakis, et al.

We study fast rates of convergence for nonparametric online regression, where regret is defined with respect to an arbitrary function class of bounded complexity. Our contributions are twofold:

- In the realizable setting of nonparametric online regression with the absolute loss, we propose a randomized proper learning algorithm that achieves a near-optimal mistake bound in terms of the sequential fat-shattering dimension of the hypothesis class. In the setting of online classification with a class of Littlestone dimension d, our bound reduces to d · polylog T. This answers the question of whether proper learners can achieve near-optimal mistake bounds; previously, even for online classification, the best known mistake bound was Õ(√(dT)). Moreover, in the real-valued (regression) setting, the optimal mistake bound was not known even for improper learners prior to this work.

- Using the above result, we exhibit an independent learning algorithm for general-sum binary games of Littlestone dimension d, under which each player achieves regret Õ(d^(3/4) · T^(1/4)). This generalizes analogous results of Syrgkanis et al. (2015), who showed that in finite games the optimal regret can be accelerated from O(√T) in the adversarial setting to O(T^(1/4)) in the game setting.

To establish these results, we introduce several new techniques, including: a hierarchical aggregation rule for achieving the optimal mistake bound for real-valued classes, a multi-scale extension of the proper online realizable learner of Hanneke et al. (2021), an approach showing that the output of such nonparametric learning algorithms is stable, and a proof that the minimax theorem holds in all online learnable games.
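For concreteness, the regret in question is the standard quantity (the notation below is introduced here, not in the abstract): if the learner predicts ŷ_1, …, ŷ_T on instances x_1, …, x_T with outcomes y_1, …, y_T, then under the absolute loss its regret against a class F is

  Regret_T(F) = Σ_{t=1}^T |ŷ_t − y_t| − inf_{f ∈ F} Σ_{t=1}^T |f(x_t) − y_t|.

In the realizable case some f ∈ F attains zero loss on the sequence, so the infimum vanishes and the regret coincides with the learner's cumulative loss, i.e. its mistake bound.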
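To make the aggregation idea concrete, here is a minimal sketch of generic exponential-weights aggregation over a finite set of base predictors, the basic primitive that the paper's hierarchical aggregation rule refines. This is not the paper's algorithm; the class name, parameters, and learning rate are illustrative assumptions.

import numpy as np

# Minimal sketch (not the paper's algorithm): exponential-weights
# aggregation of K base predictors under the absolute loss on [0, 1].

class ExpWeightsAggregator:
    def __init__(self, num_experts: int, horizon: int):
        self.weights = np.ones(num_experts)
        # Standard learning rate for T rounds with losses bounded in [0, 1].
        self.eta = np.sqrt(8 * np.log(num_experts) / horizon)

    def predict(self, expert_predictions: np.ndarray) -> float:
        # Weighted-average prediction; proper, since the absolute loss
        # is convex in the prediction.
        probs = self.weights / self.weights.sum()
        return float(probs @ expert_predictions)

    def update(self, expert_predictions: np.ndarray, outcome: float) -> None:
        # Multiplicatively downweight each expert by its absolute loss.
        losses = np.abs(expert_predictions - outcome)
        self.weights *= np.exp(-self.eta * losses)

Run over T rounds, this guarantees regret O(√(T log K)) against the best of the K experts; the paper's results are precisely about going beyond such √T rates in the realizable and game settings.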
