Three Variations on Variational Autoencoders

12/06/2022
by R. I. Cukier, et al.

Variational autoencoders (VAEs) are a class of generative probabilistic latent-variable models designed for inference from observed data. We develop three variations on VAEs by introducing a second parameterized encoder/decoder pair and, for one variation, an additional fixed encoder. The parameters of the encoders and decoders are learned with a neural network, while the fixed encoder is obtained by probabilistic PCA. The variations are compared to the Evidence Lower Bound (ELBO) approximation to the original VAE. One variation leads to an Evidence Upper Bound (EUBO) that can be used in conjunction with the original ELBO to interrogate the convergence of the VAE.
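The ELBO referred to above is the standard variational lower bound on the log-evidence, log p(x) >= E_q[log p(x|z)] - KL(q(z|x) || p(z)). As a hedged illustration only (not the paper's method or code), the sketch below computes the two single-sample ELBO terms for a common special case: a Gaussian encoder q(z|x) = N(mu, diag(exp(log_var))) with a standard-normal prior and a unit-variance Gaussian decoder. All function names and the fixed decoder variance are assumptions made for this example.

```python
import numpy as np

def gaussian_elbo_terms(x, x_recon, mu, log_var):
    """Single-sample ELBO terms for a Gaussian-encoder VAE (illustrative).

    Reconstruction term: log-likelihood of x under a unit-variance Gaussian
    decoder centered at x_recon, up to an additive constant.
    KL term: closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ).
    """
    recon = -0.5 * np.sum((x - x_recon) ** 2)
    kl = -0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var))
    return recon, kl

def elbo(x, x_recon, mu, log_var):
    # ELBO = reconstruction - KL; it lower-bounds log p(x) and is
    # maximized during training.
    recon, kl = gaussian_elbo_terms(x, x_recon, mu, log_var)
    return recon - kl
```

An EUBO of the kind the paper proposes would upper-bound the same log p(x), so tracking the gap between the two bounds during training gives a convergence diagnostic.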

Related research

Variational Composite Autoencoders (04/12/2018)
Learning in the latent variable model is challenging in the presence of ...

Supervising the Decoder of Variational Autoencoders to Improve Scientific Utility (09/09/2021)
Probabilistic generative models are attractive for scientific modeling b...

The Evidence Lower Bound of Variational Autoencoders Converges to a Sum of Three Entropies (10/28/2020)
The central objective function of a variational autoencoder (VAE) is its...

Noise Contrastive Variational Autoencoders (07/23/2019)
We take steps towards understanding the "posterior collapse (PC)" diffic...

Variational Autoencoders: A Harmonic Perspective (05/31/2021)
In this work we study Variational Autoencoders (VAEs) from the perspecti...

Certifiably Robust Variational Autoencoders (02/15/2021)
We introduce an approach for training Variational Autoencoders (VAEs) th...

Diagnosing and Enhancing VAE Models (03/14/2019)
Although variational autoencoders (VAEs) represent a widely influential ...
