Isotuning With Applications To Scale-Free Online Learning

12/29/2021
by Laurent Orseau, et al.

We extend and combine several tools from the literature to design fast, adaptive, anytime, and scale-free online learning algorithms. Scale-free regret bounds must scale linearly with the maximum loss, both toward large losses and toward very small losses. Adaptive regret bounds demonstrate that an algorithm can take advantage of easy data and potentially have constant regret. We seek to develop fast algorithms that depend on as few parameters as possible; in particular, they should be anytime and thus not depend on the time horizon. Our first and main tool, isotuning, is a generalization of the idea of balancing the trade-off of the regret. We develop a set of tools to design and analyze such learning rates easily, and show that they adapt automatically to the rate of the regret (whether constant, O(log T), O(√T), etc.) within a factor 2 of the optimal learning rate in hindsight for the same observed quantities. The second tool is an online correction, which allows us to obtain centered bounds for many algorithms and prevents the regret bounds from being vacuous when the domain is overly large or only partially constrained. The last tool, null updates, prevents the algorithm from performing overly large updates, which could result in unbounded regret or even invalid updates. We develop a general theory using these tools and apply it to several standard algorithms. In particular, we (almost entirely) restore the adaptivity to small losses of FTRL for unbounded domains, design and prove scale-free adaptive guarantees for a variant of Mirror Descent (at least when the Bregman divergence is convex in its second argument), extend Adapt-ML-Prod to scale-free guarantees, and provide several other minor contributions regarding Prod, AdaHedge, BOA and Soft-Bayes.
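To give a concrete picture of the balancing idea that isotuning generalizes, here is a minimal AdaHedge-style sketch in Python. It is illustrative only, not the paper's algorithm: the learning rate is retuned each round from the accumulated mixability gap so that the two terms of the usual Hedge regret bound stay balanced, without knowing the time horizon in advance.

```python
import numpy as np


def adahedge(loss_rows):
    """Hedge over K experts with an AdaHedge-style self-tuned learning rate.

    Illustrative sketch of the balancing idea that isotuning generalizes:
    eta_t = ln(K) / Delta_{t-1}, where Delta accumulates the per-round
    mixability gaps, keeping the two sides of the regret trade-off equal.

    loss_rows: array of shape (T, K), loss of each expert at each round.
    Returns (regret against the best expert, final Delta).
    """
    loss_rows = np.asarray(loss_rows, dtype=float)
    T, K = loss_rows.shape
    L = np.zeros(K)          # cumulative loss of each expert
    Delta = 0.0              # accumulated mixability gap
    alg_loss = 0.0           # cumulative loss of the algorithm

    for t in range(T):
        if Delta > 0.0:
            eta = np.log(K) / Delta
            w = np.exp(-eta * (L - L.min()))   # stabilized exponential weights
            w /= w.sum()
        else:
            eta = np.inf                       # first round: follow the leader
            w = (L == L.min()).astype(float)
            w /= w.sum()

        ell = loss_rows[t]
        h = w @ ell                            # Hedge (expected) loss
        if np.isinf(eta):
            m = ell[w > 0].min()               # mix loss in the eta -> inf limit
        else:
            # mix loss, computed in a numerically stable way
            m = ell.min() - np.log(w @ np.exp(-eta * (ell - ell.min()))) / eta

        Delta += max(0.0, h - m)               # mixability gap is nonnegative
        alg_loss += h
        L += ell

    return alg_loss - L.min(), Delta


# Example: 3 experts with i.i.d. losses in [0, 1).
rng = np.random.default_rng(0)
print(adahedge(rng.random((1000, 3))))
```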

