Towards Deeper Understanding of Variational Autoencoding Models

02/28/2017
by Shengjia Zhao, et al.

We propose a new family of optimization criteria for variational auto-encoding models, generalizing the standard evidence lower bound. We provide conditions under which these criteria recover the data distribution and learn informative latent features, and formally show that common failure modes, such as blurry samples and uninformative latent features, arise when these conditions are not met. Based on these insights, we propose a new sequential VAE model that generates sharp samples on the LSUN image dataset even when trained with a pixel-wise reconstruction loss, and an optimization criterion that encourages unsupervised learning of informative latent features.
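For reference, the standard evidence lower bound (ELBO) that the proposed criteria generalize can be sketched as follows. This is a minimal illustration, not the paper's generalized objective: it assumes a diagonal-Gaussian posterior, a unit-Gaussian prior, and a Gaussian decoder whose expected log-likelihood reduces (up to constants) to the pixel-wise squared reconstruction error the abstract refers to.

```python
import numpy as np

def elbo(x, x_recon, mu, logvar):
    """Standard ELBO for a VAE with posterior q(z|x) = N(mu, diag(exp(logvar)))
    and prior p(z) = N(0, I):

        ELBO(x) = E_q[log p(x|z)] - KL(q(z|x) || p(z))

    The expected log-likelihood term is approximated here by a negative
    pixel-wise squared reconstruction error (Gaussian decoder assumption,
    constants dropped); the paper's family of criteria generalizes this bound.
    """
    # Pixel-wise reconstruction term (higher is better).
    recon = -np.sum((x - x_recon) ** 2)
    # Closed-form KL(N(mu, sigma^2) || N(0, I)), summed over latent dims.
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)
    return recon - kl
```

When the reconstruction is perfect and the posterior matches the prior, the bound is tight at zero; any posterior mismatch or reconstruction error lowers it.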


