Meta Learning as Bayes Risk Minimization

06/02/2020
by Shin-ichi Maeda, et al.

Meta-learning is a family of methods that use a set of interrelated tasks to learn a model that can quickly adapt to a new query task from a possibly small contextual dataset. In this study, we use a probabilistic framework to formalize what it means for two tasks to be related and reframe meta-learning as a problem of Bayes risk minimization (BRM). In our formulation, the BRM-optimal solution is the predictive distribution computed from the posterior distribution of the task-specific latent variable conditioned on the contextual dataset, which justifies the philosophy of the Neural Process. However, the posterior approximation used in the Neural Process does not change with the contextual dataset in the way the true posterior does. To address this problem, we present a novel Gaussian approximation of the posterior distribution that generalizes the posterior of the linear Gaussian model. Unlike that of the Neural Process, our posterior approximation converges to the maximum likelihood estimate at the same rate as the true posterior distribution. We also demonstrate the competitiveness of our approach on benchmark datasets.
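
As a reading aid, the sketch below spells out, in our own notation (which need not match the paper's), the Bayes risk objective under log loss, the posterior predictive that minimizes it, and the classical linear-Gaussian posterior that the proposed Gaussian approximation is said to generalize. Here z denotes the task-specific latent variable, D_c the contextual dataset, and phi a feature map; these symbols are our assumptions, not taken from the paper.

```latex
% Sketch of the Bayes-risk-minimization view (notation ours, not the paper's).
% A task is indexed by a latent variable z; D_c = {(x_i, y_i)}_{i=1}^n is the
% contextual dataset and (x_*, y_*) is a query point drawn from the same task.
\[
  \mathcal{R}(q) \;=\;
  \mathbb{E}_{p(z)}\,\mathbb{E}_{p(D_c,\, x_*, y_* \mid z)}
  \bigl[\, -\log q(y_* \mid x_*, D_c) \,\bigr].
\]
% Under log loss the Bayes-optimal predictor is the posterior predictive:
% the task-conditional model averaged over the posterior of z given D_c.
\[
  q^\star(y_* \mid x_*, D_c)
  \;=\; \int p(y_* \mid x_*, z)\, p(z \mid D_c)\, dz .
\]
% Reference case that the proposed Gaussian approximation generalizes:
% the linear Gaussian model y = \phi(x)^\top z + \varepsilon with
% \varepsilon \sim \mathcal{N}(0, \sigma^2) and prior z \sim \mathcal{N}(0, \Sigma_0),
% whose posterior is Gaussian, p(z \mid D_c) = \mathcal{N}(\mu_n, \Sigma_n), with
\[
  \Sigma_n^{-1} \;=\; \Sigma_0^{-1} + \frac{1}{\sigma^2}\sum_{i=1}^{n}\phi(x_i)\phi(x_i)^\top,
  \qquad
  \mu_n \;=\; \frac{1}{\sigma^2}\,\Sigma_n \sum_{i=1}^{n}\phi(x_i)\, y_i .
\]
% As n grows, \Sigma_n shrinks at rate O(1/n) and \mu_n approaches the
% maximum-likelihood estimate; this is the scaling with the contextual
% dataset that, per the abstract, an amortized Neural-Process-style
% posterior fails to reproduce.
```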
