Trace norm regularization for multi-task learning with scarce data

02/14/2022
by Etienne Boursier, et al.

Multi-task learning leverages structural similarities between multiple tasks to learn despite having very few samples per task. Motivated by the recent success of neural networks applied to data-scarce tasks, we consider a linear low-dimensional shared representation model. Despite an extensive literature, existing theoretical results either guarantee weak estimation rates or require a large number of samples per task. This work provides the first estimation error bound for the trace norm regularized estimator when the number of samples per task is small. The advantages of trace norm regularization for learning data-scarce tasks extend to meta-learning and are confirmed empirically on synthetic datasets.
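The setting described above can be made concrete with a small sketch. In the linear shared-representation model, each task t has a weight vector w_t, and stacking them into a d × T matrix W, the shared low-dimensional structure means W is (approximately) low rank. The trace norm (nuclear norm) regularized estimator minimizes the sum of per-task squared losses plus λ times the sum of singular values of W. The code below is a minimal illustration, not the paper's implementation: it solves this objective by proximal gradient descent, where the proximal step for the nuclear norm is singular value soft-thresholding. All function names, step sizes, and data shapes are illustrative assumptions.

```python
import numpy as np

def svt(W, tau):
    """Singular value thresholding: prox of tau * ||.||_* (nuclear norm)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def trace_norm_mtl(Xs, ys, lam, lr=0.05, iters=300):
    """Multi-task linear regression with trace norm regularization.

    Xs: list of T design matrices, each of shape (n_t, d)
    ys: list of T target vectors, each of shape (n_t,)
    Minimizes sum_t (1/n_t) ||X_t w_t - y_t||^2 + lam * ||W||_*
    via proximal gradient descent on W = [w_1, ..., w_T].
    """
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))
    for _ in range(iters):
        G = np.zeros_like(W)
        for t in range(T):
            # Gradient of the per-task mean squared error in column t.
            G[:, t] = Xs[t].T @ (Xs[t] @ W[:, t] - ys[t]) / len(ys[t])
        # Gradient step on the smooth loss, then nuclear-norm prox.
        W = svt(W - lr * G, lr * lam)
    return W
```

With few samples per task (n_t < d), each task alone is underdetermined; the nuclear norm penalty couples the tasks so that the columns of the estimate share a common low-dimensional column space, which is exactly the regime the bound in the abstract addresses.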

