From optimal transport to generative modeling: the VEGAN cookbook

05/22/2017
by Olivier Bousquet, et al.

We study unsupervised generative modeling in terms of the optimal transport (OT) problem between the true (but unknown) data distribution P_X and the latent variable model distribution P_G. We show that the OT problem can be equivalently written in terms of probabilistic encoders, which are constrained to match the posterior and prior distributions over the latent space. When relaxed, this constrained optimization problem leads to a penalized optimal transport (POT) objective, which can be efficiently minimized using stochastic gradient descent by sampling from P_X and P_G. We show that POT for the 2-Wasserstein distance coincides with the objective heuristically employed in adversarial auto-encoders (AAE) (Makhzani et al., 2016), which, to the best of our knowledge, provides the first theoretical justification for AAEs. We also compare POT to other popular techniques like variational auto-encoders (VAE) (Kingma and Welling, 2014). Our theoretical results include (a) a better understanding of the commonly observed blurriness of images generated by VAEs, and (b) establishing duality between Wasserstein GAN (Arjovsky and Bottou, 2017) and POT for the 1-Wasserstein distance.
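
To make the relaxed objective concrete, below is a minimal PyTorch sketch of one penalized optimal transport (POT) training step for the squared-error (2-Wasserstein) cost, i.e. minimizing E[c(X, G(Z))] + lambda * D(Q_Z, P_Z) over an auto-encoder. The toy architectures, the deterministic encoder, the MMD choice for the latent penalty D (an AAE instead trains an adversarial discriminator on the codes), and all hyperparameters are illustrative assumptions, not prescriptions from the paper.

```python
import torch
import torch.nn as nn

# Toy networks (hypothetical sizes: flattened 28x28 inputs, 8-dim codes).
encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 8))
decoder = nn.Sequential(nn.Linear(8, 128), nn.ReLU(), nn.Linear(128, 784))
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)

def mmd_penalty(z_q, z_p, sigma=1.0):
    """RBF-kernel MMD between encoded codes and prior samples.
    One possible choice for the divergence D(Q_Z, P_Z); the AAE of
    Makhzani et al. (2016) uses an adversarial discriminator instead."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(z_q, z_q).mean() + k(z_p, z_p).mean() - 2 * k(z_q, z_p).mean()

lam = 10.0                    # penalty weight lambda (free hyperparameter)
x = torch.rand(64, 784)       # stand-in for a minibatch sampled from P_X
z_q = encoder(x)              # codes: Q_Z, the data pushed into latent space
z_p = torch.randn_like(z_q)   # samples from the latent prior P_Z
x_rec = decoder(z_q)          # reconstructions G(Z)

# POT objective with squared-error cost: E[c(X, G(Z))] + lambda * D(Q_Z, P_Z).
loss = ((x - x_rec) ** 2).sum(dim=1).mean() + lam * mmd_penalty(z_q, z_p)

opt.zero_grad()
loss.backward()
opt.step()
```

Once the latent penalty has driven Q_Z toward P_Z, decoding prior samples z ~ P_Z yields model samples, which is exactly the AAE sampling procedure the paper justifies.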

Related research

- Wasserstein-Wasserstein Auto-Encoders (02/25/2019)
  To address the challenges in learning deep generative models (e.g., the b...
- Primal-Dual Wasserstein GAN (05/24/2018)
  We introduce Primal-Dual Wasserstein GAN, a new learning algorithm for b...
- Latent Space Optimal Transport for Generative Models (09/16/2018)
  Variational Auto-Encoders enforce their learned intermediate latent-spac...
- Sliced-Wasserstein Flows: Nonparametric Generative Modeling via Optimal Transport and Diffusions (06/21/2018)
  By building up on the recent theory that established the connection betw...
- Wasserstein GANs with Gradient Penalty Compute Congested Transport (09/01/2021)
  Wasserstein GANs with Gradient Penalty (WGAN-GP) are an extremely popula...
- Partial Wasserstein Covering (06/02/2021)
  We consider a general task called partial Wasserstein covering with the ...
- Probabilistic Contrastive Learning Recovers the Correct Aleatoric Uncertainty of Ambiguous Inputs (02/06/2023)
  Contrastively trained encoders have recently been proven to invert the d...
