Uncertainty in Multitask Transfer Learning

06/20/2018
by Alexandre Lacoste, et al.

Using variational Bayes neural networks, we develop an algorithm capable of accumulating knowledge into a prior from multiple different tasks. The result is a rich and meaningful prior capable of few-shot learning on new tasks. The posterior can go beyond the mean-field approximation and yields well-calibrated uncertainty in the performed experiments. Analysis on toy tasks shows that the method can learn from significantly different tasks while finding similarities among them. Experiments on Mini-Imagenet yield a new state of the art of 74.5% accuracy on 5-shot learning. Finally, we provide experiments showing that other existing methods can fail to perform well on different benchmarks.
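As a rough illustration of the setup the abstract describes, the sketch below pairs a per-task mean-field Gaussian posterior with a learnable Gaussian prior that, in multitask training, would be shared across tasks and so accumulate knowledge. This is a minimal sketch, not the authors' implementation: the paper's posterior goes beyond mean field, and the names VariationalLinear and negative_elbo are illustrative assumptions.

import torch
import torch.nn as nn

class VariationalLinear(nn.Module):
    """Mean-field Gaussian linear layer with a learnable Gaussian prior.

    Illustrative only: the paper's posterior is richer than mean field;
    here q(w) and p(w) are both diagonal Gaussians for brevity.
    """

    def __init__(self, d_in, d_out):
        super().__init__()
        # Per-task posterior q(w) = N(mu, exp(log_sigma)^2)
        self.mu = nn.Parameter(0.1 * torch.randn(d_out, d_in))
        self.log_sigma = nn.Parameter(torch.full((d_out, d_in), -3.0))
        # Prior p(w); in multitask training these parameters would be
        # shared and updated jointly across tasks.
        self.prior_mu = nn.Parameter(torch.zeros(d_out, d_in))
        self.prior_log_sigma = nn.Parameter(torch.zeros(d_out, d_in))

    def forward(self, x):
        # Reparameterization trick: draw a differentiable sample w ~ q(w).
        eps = torch.randn_like(self.mu)
        w = self.mu + torch.exp(self.log_sigma) * eps
        return x @ w.t()

    def kl(self):
        # KL(q || p) between two diagonal Gaussians, summed over weights.
        var_q = torch.exp(2.0 * self.log_sigma)
        var_p = torch.exp(2.0 * self.prior_log_sigma)
        return 0.5 * torch.sum(
            var_q / var_p
            + (self.mu - self.prior_mu) ** 2 / var_p
            + 2.0 * (self.prior_log_sigma - self.log_sigma)
            - 1.0
        )

def negative_elbo(layer, x, y):
    # Gaussian likelihood with unit noise; one Monte Carlo sample of w.
    nll = 0.5 * torch.sum((layer(x) - y) ** 2)
    return nll + layer.kl()

# Toy usage: fit one task's posterior. With a single task, optimizing the
# prior jointly reduces to empirical Bayes; the multitask case would share
# prior parameters across many such tasks, each with its own posterior.
torch.manual_seed(0)
x = torch.randn(64, 3)
y = x @ torch.tensor([[1.0], [-2.0], [0.5]])
layer = VariationalLinear(3, 1)
opt = torch.optim.Adam(layer.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    negative_elbo(layer, x, y).backward()
    opt.step()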

Related research

07/12/2018 · Metalearning with Hebbian Fast Weights
We unify recent neural approaches to one-shot learning with older ideas ...

04/18/2018 · One-Shot Learning using Mixture of Variational Autoencoders: a Generalization Learning approach
Deep learning, even if it is very successful nowadays, traditionally nee...

06/01/2017 · Discriminative k-shot learning using probabilistic models
This paper introduces a probabilistic framework for k-shot image classif...

12/07/2021 · Learning Instance and Task-Aware Dynamic Kernels for Few Shot Learning
Learning and generalizing to novel concepts with few samples (Few-Shot L...

03/05/2020 · PAC-Bayesian Meta-learning with Implicit Prior
We introduce a new and rigorously-formulated PAC-Bayes few-shot meta-lea...

04/27/2020 · Empirical Bayes Transductive Meta-Learning with Synthetic Gradients
We propose a meta-learning approach that learns from multiple tasks in a...

11/11/2020 · Transferred Fusion Learning using Skipped Networks
Identification of an entity that is of interest is prominent in any inte...
