Towards Deeper Understanding of Variational Autoencoding Models

by Shengjia Zhao, et al.

We propose a new family of optimization criteria for variational autoencoding models, generalizing the standard evidence lower bound. We provide conditions under which these criteria recover the data distribution and learn latent features, and formally show that common issues such as blurry samples and uninformative latent features arise when these conditions are not met. Based on these new insights, we propose a new sequential VAE model that can generate sharp samples on the LSUN image dataset using only pixel-wise reconstruction loss, and an optimization criterion that encourages unsupervised learning of informative latent features.
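As context for the criteria the abstract describes, the standard evidence lower bound (ELBO) that they generalize can be written as a reconstruction term minus a KL term. The sketch below is not the paper's proposed criterion; it is a minimal illustration of the standard ELBO, assuming a diagonal Gaussian posterior q(z|x) and a standard normal prior, for which the KL divergence has a closed form:

```python
import math

def kl_diag_gaussian(mu, logvar):
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) )
    for a diagonal Gaussian posterior against a standard normal prior."""
    return 0.5 * sum(
        m * m + math.exp(lv) - 1.0 - lv
        for m, lv in zip(mu, logvar)
    )

def elbo(log_px_given_z, mu, logvar):
    """Standard evidence lower bound: expected reconstruction
    log-likelihood minus the KL regularizer."""
    return log_px_given_z - kl_diag_gaussian(mu, logvar)
```

The family of criteria studied in the paper reweights or modifies these two terms; when the balance between them is off, the failure modes named above (blurry samples, uninformative latents) can appear.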


