Posterior Collapse of a Linear Latent Variable Model

05/09/2022
by Zihao Wang et al.

This work identifies the existence and cause of a type of posterior collapse that frequently occurs in Bayesian deep learning practice. For a general linear latent variable model, which includes linear variational autoencoders as a special case, we precisely identify the cause of posterior collapse as the competition between the likelihood term and the regularization of the latent mean induced by the prior. Our result also suggests that posterior collapse may be a general problem when learning deeper architectures, and it deepens our understanding of Bayesian deep learning.
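The competition described above can be sketched in a one-dimensional toy model (an illustrative assumption, not the paper's exact setup): a linear Gaussian "VAE" with decoder p(x|z) = N(w*z, gamma^2), encoder q(z|x) = N(u*x, s^2), and prior p(z) = N(0, 1). Maximizing the per-example ELBO over the encoder parameters gives closed-form optima, and as the decoder weight w shrinks, the optimal encoder approaches the prior (u -> 0, s^2 -> 1), i.e. the posterior collapses:

```python
def optimal_encoder(w, gamma2=1.0):
    """ELBO-optimal encoder for a 1-D linear Gaussian latent variable model.

    Decoder: p(x|z) = N(w*z, gamma2); encoder: q(z|x) = N(u*x, s2);
    prior: p(z) = N(0, 1). Setting the ELBO's derivatives in u and s2
    to zero yields the closed-form optima below (toy derivation, for
    illustration only):
        u*  = w / (gamma2 + w^2)
        s2* = gamma2 / (gamma2 + w^2)
    """
    u = w / (gamma2 + w ** 2)
    s2 = gamma2 / (gamma2 + w ** 2)
    return u, s2


# As w -> 0, the KL regularization of the mean dominates the likelihood
# term, so q(z|x) -> p(z) = N(0, 1): posterior collapse.
for w in [1.0, 0.1, 0.01]:
    u, s2 = optimal_encoder(w)
    print(f"w={w:5.2f}  u*={u:.4f}  s2*={s2:.4f}")
```

The print loop makes the mechanism visible: for w = 1 the encoder still uses the data (u* = 0.5), while for w = 0.01 it is nearly indistinguishable from the prior.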

Related research

- Lagging Inference Networks and Posterior Collapse in Variational Autoencoders (01/16/2019): The variational autoencoder (VAE) is a popular combination of deep laten...
- Variational Composite Autoencoders (04/12/2018): Learning in the latent variable model is challenging in the presence of ...
- Latent Variable Modelling Using Variational Autoencoders: A survey (06/20/2022): A probability distribution allows practitioners to uncover hidden struct...
- Cycle-Consistent Adversarial Learning as Approximate Bayesian Inference (06/05/2018): We formalize the problem of learning interdomain correspondences in the ...
- Latent Bayesian melding for integrating individual and population models (10/30/2015): In many statistical problems, a more coarse-grained model may be suitabl...
- Sampling Good Latent Variables via CPP-VAEs: VAEs with Condition Posterior as Prior (12/18/2019): In practice, conditional variational autoencoders (CVAEs) perform condit...
- Variational (Gradient) Estimate of the Score Function in Energy-based Latent Variable Models (10/16/2020): The learning and evaluation of energy-based latent variable models (EBLV...
