On the Subspace Structure of Gradient-Based Meta-Learning

07/08/2022
by Gustaf Tegnér, et al.

In this work we analyse the distribution of post-adaptation parameters in Gradient-Based Meta-Learning (GBML) methods. Previous work has observed that, for image classification, this adaptation takes place only in the last layers of the network. We propose the more general notion that parameters are updated over a low-dimensional subspace of the same dimensionality as the task space, and show that this holds for regression as well. Furthermore, the induced subspace structure provides a method for estimating the intrinsic dimension of the task space of common few-shot learning datasets.
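As a rough illustration of the idea (a sketch, not the authors' exact procedure), one could collect the post-adaptation parameters across many tasks, subtract the shared meta-learned initialization, and inspect the singular-value spectrum of the resulting update matrix: the number of directions needed to explain most of the variance gives an estimate of the adaptation subspace's dimension. The function name, the `adapt(task)` helper, and the variance threshold below are all hypothetical placeholders.

```python
import numpy as np

def estimate_adaptation_subspace_dim(theta_0, adapted_params, var_threshold=0.95):
    """Estimate the dimension of the subspace spanned by post-adaptation updates.

    theta_0        : (d,) flattened meta-learned initialization
    adapted_params : (n_tasks, d) flattened parameters after inner-loop adaptation
    var_threshold  : fraction of variance the retained directions must explain
    """
    # Updates relative to the shared initialization, one row per task.
    deltas = adapted_params - theta_0[None, :]

    # Singular values of the update matrix reveal how many directions
    # actually carry variance across tasks.
    _, s, _ = np.linalg.svd(deltas, full_matrices=False)
    explained = np.cumsum(s**2) / np.sum(s**2)

    # Smallest number of directions explaining the desired variance fraction.
    return int(np.searchsorted(explained, var_threshold) + 1)

# Hypothetical usage: `adapt(task)` runs the inner loop of a GBML method
# (e.g. MAML) and returns the flattened post-adaptation parameters.
# adapted = np.stack([adapt(task) for task in tasks])
# k = estimate_adaptation_subspace_dim(theta_0, adapted)
```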
