EvoGrad: Efficient Gradient-Based Meta-Learning and Hyperparameter Optimization

06/19/2021
by Ondrej Bohdal, et al.

Gradient-based meta-learning and hyperparameter optimization have seen significant progress recently, enabling practical end-to-end training of neural networks together with many hyperparameters. Nevertheless, existing approaches are relatively expensive, as they need to compute second-order derivatives and store a longer computational graph. This cost prevents scaling them to larger network architectures. We present EvoGrad, a new approach to meta-learning that draws upon evolutionary techniques to compute hypergradients more efficiently. EvoGrad estimates the hypergradient with respect to hyperparameters without computing second-order gradients or storing a longer computational graph, leading to significant improvements in efficiency. We evaluate EvoGrad on two substantial recent meta-learning applications, namely cross-domain few-shot learning with feature-wise transformations and noisy-label learning with MetaWeightNet. The results show that EvoGrad significantly improves efficiency and enables scaling meta-learning to bigger CNN architectures, such as from ResNet18 to ResNet34.
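To make the idea concrete, here is a minimal, hedged sketch of an evolution-based hypergradient estimate of the kind the abstract describes: perturb the model parameters with random noise, weight the perturbed copies by a softmax over their (negated) training losses, and differentiate the validation loss of the weighted combination with respect to the hyperparameters, using only first-order derivatives. This is a toy illustration on linear regression with per-example loss weights as hyperparameters, not the authors' reference implementation; all variable names and the specific weighting scheme here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: linear regression, with per-example training-loss weights
# (lam) playing the role of the hyperparameters to be meta-learned.
n, d, n_val, K = 8, 3, 6, 4            # K = number of evolutionary perturbations
X, y = rng.normal(size=(n, d)), rng.normal(size=n)
Xv, yv = rng.normal(size=(n_val, d)), rng.normal(size=n_val)
theta = rng.normal(size=d)             # current model parameters
lam = np.ones(n)                       # hyperparameters: per-example weights
sigma = 0.1                            # perturbation scale
eps = rng.normal(size=(K, d))          # fixed noise, shared below

def train_losses(lam):
    """Weighted training loss of each perturbed model theta_k = theta + sigma * eps_k."""
    thetas = theta + sigma * eps                    # (K, d)
    resid = thetas @ X.T - y                        # (K, n) residuals
    return thetas, resid, (resid**2 @ lam) / n      # losses: (K,)

def val_loss(lam):
    """Forward pass: perturb, weight copies by softmax(-loss), evaluate on validation."""
    thetas, _, losses = train_losses(lam)
    w = np.exp(-losses); w /= w.sum()               # softmax over negated losses
    theta_star = w @ thetas                         # weighted parameter combination
    return np.mean((Xv @ theta_star - yv) ** 2)

# Analytic first-order hypergradient d L_val / d lam: the noise eps does not
# depend on lam, so the chain rule below never needs second-order derivatives.
thetas, resid, losses = train_losses(lam)
w = np.exp(-losses); w /= w.sum()
theta_star = w @ thetas
dl_dlam = resid**2 / n                              # (K, n): d losses / d lam
dw_dl = -(np.diag(w) - np.outer(w, w))              # (K, K): softmax Jacobian wrt losses
dw_dlam = dw_dl @ dl_dlam                           # (K, n)
dtheta_dlam = thetas.T @ dw_dlam                    # (d, n): d theta* / d lam
g = 2.0 / n_val * Xv.T @ (Xv @ theta_star - yv)     # d L_val / d theta*
hypergrad = g @ dtheta_dlam                         # (n,): d L_val / d lam
```

The key efficiency point the sketch captures: because the perturbed copies are generated by random noise rather than by a differentiable inner-loop update, the hypergradient flows only through the softmax weights, so no second-order terms or extended computational graph are required.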

Related research:

How to train your MAML (10/22/2018)
The field of few-shot learning has recently seen substantial advancement...

A contrastive rule for meta-learning (04/04/2021)
Meta-learning algorithms leverage regularities that are present on a set...

Online Hyperparameter Meta-Learning with Hypergradient Distillation (10/06/2021)
Many gradient-based meta-learning methods assume a set of parameters tha...

Sign-MAML: Efficient Model-Agnostic Meta-Learning by SignSGD (09/15/2021)
We propose a new computationally-efficient first-order algorithm for Mod...

Stateless Neural Meta-Learning using Second-Order Gradients (04/21/2021)
Deep learning typically requires large data sets and much compute power ...

Multi-step Estimation for Gradient-based Meta-learning (06/08/2020)
Gradient-based meta-learning approaches have been successful in few-shot...

Meta-Learning to Improve Pre-Training (11/02/2021)
Pre-training (PT) followed by fine-tuning (FT) is an effective method fo...
