Tutorial: Deriving the Standard Variational Autoencoder (VAE) Loss Function

07/21/2019
by Stephen Odaibo, et al.
retina-ai.com

In Bayesian machine learning, the posterior distribution is typically computationally intractable, hence variational inference is often required. In this approach, an evidence lower bound on the log likelihood of the data is maximized during training. Variational Autoencoders (VAEs) are one important example where variational inference is utilized. In this tutorial, we derive the variational lower bound loss function of the standard variational autoencoder. We do so in the instance of a Gaussian latent prior and Gaussian approximate posterior, under which assumptions the Kullback-Leibler term in the variational lower bound has a closed-form solution. We derive essentially everything we use along the way, from Bayes' theorem to the Kullback-Leibler divergence.
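
For orientation, the quantities the tutorial derives can be summarized as follows (this summary follows Kingma and Welling, 2014, and is not taken verbatim from the paper). Writing q_\phi(z|x) for the approximate posterior (encoder), p_\theta(x|z) for the likelihood (decoder), and p(z) = \mathcal{N}(0, I) for the latent prior, the evidence lower bound (ELBO) on the log likelihood is

\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z|x)}\big[\log p_\theta(x|z)\big] \;-\; D_{KL}\big(q_\phi(z|x)\,\|\,p(z)\big),

and with a Gaussian approximate posterior q_\phi(z|x) = \mathcal{N}\big(\mu, \operatorname{diag}(\sigma^2)\big), the Kullback-Leibler term has the closed form

D_{KL}\big(\mathcal{N}(\mu, \operatorname{diag}(\sigma^2))\,\|\,\mathcal{N}(0, I)\big) \;=\; \tfrac{1}{2}\sum_{j=1}^{J}\big(\mu_j^2 + \sigma_j^2 - \log\sigma_j^2 - 1\big).

As a minimal sketch of that closed-form term (an illustrative assumption, not the authors' code; the function name gaussian_kl and the toy inputs are made up for the example):

import numpy as np

def gaussian_kl(mu, log_var):
    # KL( N(mu, diag(exp(log_var))) || N(0, I) ): the closed-form expression above,
    # summed over the J latent dimensions (last axis).
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

# Toy encoder outputs for a batch of two samples with a 3-dimensional latent space.
mu = np.array([[0.0, 0.5, -0.5], [1.0, 0.0, 0.2]])
log_var = np.array([[0.0, -1.0, 0.5], [0.1, 0.0, -0.2]])
print(gaussian_kl(mu, log_var))  # one non-negative KL value per sample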

Related research

07/11/2017
Least Square Variational Bayesian Autoencoder with Regularization
In recent years Variation Autoencoders have become one of the most popul...

03/26/2020
A lower bound for the ELBO of the Bernoulli Variational Autoencoder
We consider a variational autoencoder (VAE) for binary data. Our main in...

06/11/2019
Approximate Variational Inference Based on a Finite Sample of Gaussian Latent Variables
Variational methods are employed in situations where exact Bayesian infe...

09/01/2015
Importance Weighted Autoencoders
The variational autoencoder (VAE; Kingma, Welling (2014)) is a recently ...

06/27/2012
Variational Bayesian Inference with Stochastic Search
Mean-field variational inference is a method for approximate Bayesian po...

06/19/2022
Bounding Evidence and Estimating Log-Likelihood in VAE
Many crucial problems in deep learning and statistics are caused by a va...

09/02/2020
Quasi-symplectic Langevin Variational Autoencoder
Variational autoencoder (VAE) as one of the well investigated generative...