
From optimal transport to generative modeling: the VEGAN cookbook

by   Olivier Bousquet, et al.

We study unsupervised generative modeling in terms of the optimal transport (OT) problem between true (but unknown) data distribution P_X and the latent variable model distribution P_G. We show that the OT problem can be equivalently written in terms of probabilistic encoders, which are constrained to match the posterior and prior distributions over the latent space. When relaxed, this constrained optimization problem leads to a penalized optimal transport (POT) objective, which can be efficiently minimized using stochastic gradient descent by sampling from P_X and P_G. We show that POT for the 2-Wasserstein distance coincides with the objective heuristically employed in adversarial auto-encoders (AAE) (Makhzani et al., 2016), which provides the first theoretical justification for AAEs known to the authors. We also compare POT to other popular techniques like variational auto-encoders (VAE) (Kingma and Welling, 2014). Our theoretical results include (a) a better understanding of the commonly observed blurriness of images generated by VAEs, and (b) establishing duality between Wasserstein GAN (Arjovsky and Bottou, 2017) and POT for the 1-Wasserstein distance.
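The penalized optimal transport (POT) objective described above — a reconstruction cost plus a relaxed penalty forcing the encoded (aggregated posterior) distribution to match the latent prior — can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the linear encoder/decoder, the `lam` coefficient, and the use of an MMD penalty (in place of the adversarial penalty that AAEs employ) are all assumptions made for the sake of a self-contained example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear encoder/decoder standing in for deep networks.
d_x, d_z, n = 4, 2, 256
W_enc = rng.normal(size=(d_z, d_x)) * 0.5
W_dec = rng.normal(size=(d_x, d_z)) * 0.5

def encode(x):
    # Deterministic encoder Q(Z|X) for simplicity.
    return x @ W_enc.T

def decode(z):
    # Generator G mapping latent codes back to data space.
    return z @ W_dec.T

def mmd_penalty(z_q, z_p, sigma=1.0):
    """Squared RBF-kernel MMD between encoded samples and prior samples.
    Plays the role of the relaxed constraint matching the aggregated
    posterior to the prior P_Z over the latent space."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(z_q, z_q).mean() + k(z_p, z_p).mean() - 2 * k(z_q, z_p).mean()

def pot_objective(x, lam=10.0):
    """POT objective for the squared (2-Wasserstein) cost:
    reconstruction error plus a latent-mismatch penalty,
    estimable from samples and hence SGD-friendly."""
    z_q = encode(x)
    recon = ((x - decode(z_q)) ** 2).sum(axis=1).mean()
    z_p = rng.normal(size=z_q.shape)  # samples from the prior P_Z
    return recon + lam * mmd_penalty(z_q, z_p)

x = rng.normal(size=(n, d_x))
loss = pot_objective(x)
print(loss)
```

Because both terms are expectations over samples from P_X and the prior, the whole objective can be minimized with stochastic gradients, which is the practical appeal of the POT formulation.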
