Deep Prior

by Alexandre Lacoste, et al.

The recent literature on deep learning offers new tools to learn a rich probability distribution over high-dimensional data such as images or sounds. In this work we investigate the possibility of learning the prior distribution over neural network parameters using such tools. Our resulting variational Bayes algorithm generalizes well to new tasks, even when very few training examples are provided. Furthermore, this learned prior allows the model to extrapolate correctly far from a given task's training data on a meta-dataset of periodic signals.
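To make the variational Bayes framing concrete, the sketch below illustrates the KL term that couples each task's weight posterior to a shared prior, and why fitting the prior across a meta-dataset of tasks helps. This is a minimal NumPy illustration under assumed diagonal-Gaussian posteriors, not the paper's actual algorithm; the task posteriors, dimensionality, and the closed-form prior update are all hypothetical choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_diag_gaussians(mu_q, log_sig_q, mu_p, log_sig_p):
    """KL(q || p) between diagonal Gaussians, summed over dimensions."""
    var_q = np.exp(2.0 * log_sig_q)
    var_p = np.exp(2.0 * log_sig_p)
    return 0.5 * np.sum(
        (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
        + 2.0 * (log_sig_p - log_sig_q)
    )

# Hypothetical per-task posteriors q_t(w) over a 3-parameter model:
# each task's posterior mean sits near a common value, with a fixed
# posterior scale of 0.2.
task_means = [rng.normal(2.0, 0.3, size=3) for _ in range(5)]
log_sig_q = np.log(0.2) * np.ones(3)

def total_kl(mu_p, log_sig_p):
    """Sum of the per-task KL penalties that appear in each task's ELBO."""
    return sum(kl_diag_gaussians(m, log_sig_q, mu_p, log_sig_p)
               for m in task_means)

# A generic standard-normal prior vs. a prior "learned" from the tasks
# (here: prior mean set to the average of the posterior means, prior
# scale matched to the posterior scale).
generic = total_kl(np.zeros(3), np.zeros(3))
learned_mu = np.mean(task_means, axis=0)
learned = total_kl(learned_mu, log_sig_q)

print(f"generic prior KL: {generic:.2f}, learned prior KL: {learned:.2f}")
# The learned prior pays a much smaller KL penalty, leaving more of the
# ELBO budget for fitting each task's few training examples.
```

The point of the toy: the KL penalty shrinks when the prior concentrates where the tasks' posteriors actually live, which is exactly what lets a model trained on few examples stay close to plausible solutions instead of an uninformative default.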

