Uncertainty in Multitask Transfer Learning

by Alexandre Lacoste et al.
Element AI Inc.

Using variational Bayes neural networks, we develop an algorithm capable of accumulating knowledge into a prior from multiple different tasks. The result is a rich and meaningful prior capable of few-shot learning on new tasks. The posterior can go beyond the mean-field approximation and yields good uncertainty estimates in our experiments. Analysis on toy tasks shows that it can learn from significantly different tasks while finding similarities among them. Experiments on Mini-ImageNet yield a new state of the art with 74.5% accuracy on 5-shot learning. Finally, we provide experiments showing that other existing methods can fail to perform well on different benchmarks.
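The core idea of accumulating knowledge into a prior can be sketched as a multitask variational objective: each task gets its own approximate posterior over network weights, and all tasks are regularized toward a single shared prior that is itself learned. The sketch below assumes a mean-field diagonal-Gaussian approximation for simplicity (the paper's posterior goes beyond mean field); the function names and toy setup are illustrative, not taken from the paper.

```python
import numpy as np

def kl_diag_gauss(mu_q, log_var_q, mu_p, log_var_p):
    """KL(q || p) between diagonal Gaussian weight distributions."""
    var_q, var_p = np.exp(log_var_q), np.exp(log_var_p)
    return 0.5 * np.sum(
        log_var_p - log_var_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

def multitask_objective(task_nlls, task_posteriors, prior):
    """Sum of per-task negative ELBOs, all tied to one shared, learnable prior.

    task_nlls: expected negative log-likelihoods (one float per task).
    task_posteriors: list of (mu_q, log_var_q) pairs, one per task.
    prior: (mu_p, log_var_p) shared across tasks -- the accumulated prior.
    """
    mu_p, log_var_p = prior
    return sum(
        nll + kl_diag_gauss(mu_q, log_var_q, mu_p, log_var_p)
        for nll, (mu_q, log_var_q) in zip(task_nlls, task_posteriors)
    )

# Toy check: 2 tasks, 3-dimensional "weights", standard-normal initial prior.
rng = np.random.default_rng(0)
prior = (np.zeros(3), np.zeros(3))
posteriors = [(rng.normal(size=3), np.full(3, -1.0)) for _ in range(2)]
loss = multitask_objective([1.2, 0.8], posteriors, prior)
```

Minimizing this objective over both the per-task posteriors and the shared prior pulls the prior toward regions of weight space that work across tasks, which is what makes few-shot adaptation on a new task cheap: only one new posterior term needs to be fit against the already-informative prior.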


