Distribution Matching in Variational Inference
Matching the latent posterior to the prior, balancing powerful posteriors against computational cost, and the limited flexibility of data likelihoods are among the biggest challenges in advancing Variational Autoencoders. We show that these issues stem from the difficulty of minimizing divergences between marginal distributions, and we explore an alternative to the conditional distributions commonly used, inspired by Generative Adversarial Networks. The class probability estimation that GANs provide for marginal divergence minimization uncovers a family of VAE-GAN hybrids, which promise to address these central challenges in variational inference. We systematically explore the solutions available for distribution matching, but show that these hybrid methods do not fulfill that promise, and that the trade-off between generation and inference they introduce remains an open research question.
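As a rough illustration of the class probability estimation (density ratio) trick referenced above, the sketch below trains a small classifier to distinguish samples from two marginal distributions q and p; at the optimum, the classifier's logit equals log q(x) - log p(x), so a marginal KL divergence can be estimated purely from samples. The choice of distributions, network architecture, and hyperparameters here are illustrative assumptions, not details from the paper.

```python
# A minimal sketch of the density-ratio ("class probability estimation")
# trick, assuming two synthetic 1-D Gaussians; all names and
# hyperparameters are illustrative, not from the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Samples from the two marginals we want to compare: q (e.g. an aggregate
# posterior) and p (e.g. the prior), both Gaussian here for illustration.
q_samples = torch.randn(4096, 1) * 0.5 + 1.0   # q = N(1, 0.5^2)
p_samples = torch.randn(4096, 1)               # p = N(0, 1)

# A small classifier whose logit approaches log q(x) - log p(x) at optimum.
disc = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for _ in range(2000):
    opt.zero_grad()
    logits_q = disc(q_samples)   # samples from q, labeled 1
    logits_p = disc(p_samples)   # samples from p, labeled 0
    loss = bce(logits_q, torch.ones_like(logits_q)) + \
           bce(logits_p, torch.zeros_like(logits_p))
    loss.backward()
    opt.step()

# With an optimal classifier, E_q[logit(D(x))] = E_q[log q(x) - log p(x)],
# i.e. a sample-based estimate of the marginal KL(q || p).
with torch.no_grad():
    kl_estimate = disc(q_samples).mean()
print(f"estimated KL(q||p): {kl_estimate.item():.3f}")
# Closed form for these Gaussians: log(1/0.5) + (0.5^2 + 1)/2 - 0.5 ≈ 0.818
```

The substitution this sketch makes, replacing an analytic per-datapoint divergence with a sample-based classifier estimate of a marginal divergence, is the core mechanism that the VAE-GAN hybrids discussed in the paper rely on.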