Meta-strategy for Learning Tuning Parameters with Guarantees

02/04/2021
by Dimitri Meunier, et al.

Online gradient methods, such as the online gradient algorithm (OGA), often depend on tuning parameters that are difficult to set in practice. We consider an online meta-learning scenario and propose a meta-strategy to learn these parameters from past tasks. Our strategy is based on the minimization of a regret bound. It makes it possible to learn the initialization and the step size in OGA with guarantees. We provide a regret analysis of the strategy in the case of convex losses. It suggests that, when there are parameters θ_1,…,θ_T that respectively solve tasks 1,…,T well and that are close enough to each other, our strategy indeed improves on learning each task in isolation.
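For context, the classical regret bound for OGA with convex losses is Regret_T ≤ ||θ_1 − θ*||² / (2η) + (η/2) Σ_t ||g_t||², which is minimized by a step size proportional to ||θ_1 − θ*||; this is what makes warm-starting near past solutions and shrinking η when tasks are close mutually reinforcing. The abstract does not spell out the paper's actual meta-strategy, so the following Python sketch is only illustrative: oga runs online gradient steps and returns the average iterate, while meta_learn (a hypothetical name) warm-starts each task at the barycenter of past solutions and sets the step size from their spread. The barycenter rule and the spread-based step size are assumptions for illustration, not the authors' method.

```python
import numpy as np

def oga(grad_fn, theta0, eta, n_steps):
    """Online gradient algorithm: theta_{t+1} = theta_t - eta * g_t.
    Returns the average iterate, a standard choice under convex losses."""
    theta = theta0.copy()
    iterates = [theta.copy()]
    for t in range(n_steps):
        theta = theta - eta * grad_fn(theta, t)
        iterates.append(theta.copy())
    return np.mean(iterates, axis=0)

def meta_learn(tasks, dim, eta0=0.1, n_steps=100):
    """Hypothetical meta-strategy (names and update rules are illustrative):
    warm-start each task at the barycenter of past solutions and set the
    step size from their spread, a proxy for ||theta_init - theta*||."""
    theta_init, eta, solutions = np.zeros(dim), eta0, []
    for grad_fn in tasks:
        solutions.append(oga(grad_fn, theta_init, eta, n_steps))
        theta_init = np.mean(solutions, axis=0)  # barycenter warm start
        if len(solutions) > 1:  # shrink the step size when past tasks agree
            spread = np.mean([np.linalg.norm(s - theta_init) for s in solutions])
            eta = max(spread, 1e-3)
    return theta_init, eta

# Toy check: five quadratic tasks ||theta - c||^2 whose minima c are close,
# so the learned initialization should land near their common center 1.0.
rng = np.random.default_rng(0)
centers = 1.0 + 0.1 * rng.normal(size=(5, 3))
tasks = [lambda th, t, c=c: 2.0 * (th - c) for c in centers]
theta_init, eta = meta_learn(tasks, dim=3)
```

In this toy setting the per-task solutions cluster around 1.0, so the barycenter initialization is close to every task's optimum and the spread-based step size becomes small, matching the abstract's claim that closeness of θ_1,…,θ_T is what makes the meta-strategy beat learning each task in isolation.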

Related research

08/19/2021 · Learning-to-learn non-convex piecewise-Lipschitz functions
We analyze the meta-learning of the initialization and step-size of lear...

05/27/2022 · Meta-Learning Adversarial Bandits
We study online learning with bandit feedback across multiple tasks, wit...

06/30/2020 · Guarantees for Tuning the Step Size using a Learning-to-Learn Approach
Learning-to-learn (using optimization algorithms to learn a new optimize...

08/18/2022 · Meta-Learning Online Control for Linear Dynamical Systems
In this paper, we consider the problem of finding a meta-learning online...

09/29/2021 · Dynamic Regret Analysis for Online Meta-Learning
The online meta-learning framework has arisen as a powerful tool for the...

10/27/2016 · Regret Bounds for Lifelong Learning
We consider the problem of transfer learning in an online setting. Diffe...

08/21/2021 · Fairness-Aware Online Meta-learning
In contrast to offline working fashions, two research paradigms are devi...
