Preventing Posterior Collapse with Levenshtein Variational Autoencoder

04/30/2020
by Serhii Havrylov, et al.

Variational autoencoders (VAEs) are a standard framework for inducing latent variable models and have proven effective both for learning text representations and for text generation. The key challenge in using VAEs is the posterior collapse problem: learning tends to converge to trivial solutions in which the generator ignores the latent variables. In our Levenshtein VAE, we propose to replace the evidence lower bound (ELBO) with a new objective that is simple to optimize and prevents posterior collapse. Intuitively, it corresponds to generating a sequence from the autoencoder and, at each time step, encouraging the model to predict the continuation that is optimal under the Levenshtein distance (LD) to the reference sentence. We motivate the method from a probabilistic perspective by showing that it is closely related to optimizing a bound on the intractable Kullback-Leibler divergence of an LD-based kernel density estimator from the model distribution. Under this objective, any generator that disregards the latent variables incurs large penalties, so posterior collapse cannot occur. We relate our approach to policy distillation <cit.> and dynamic oracles <cit.>. On the Yelp and SNLI benchmarks, we show that the Levenshtein VAE produces more informative latent representations than alternative approaches to preventing posterior collapse.
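
The per-step supervision described above can be made concrete with a Levenshtein dynamic oracle: given a sampled prefix, the optimal next tokens are those that preserve the minimal edit distance still achievable against the reference. The sketch below is a minimal plain-Python illustration of that idea, not the authors' released code (the helper name `optimal_next_tokens` is ours); it computes LD(prefix, reference[:j]) for every reference prefix with the standard edit-distance recurrence and returns the reference tokens at the positions where that distance is minimal.

```python
def optimal_next_tokens(prefix, reference):
    """Levenshtein dynamic oracle (a sketch, not the paper's code).

    Returns the set of next tokens that keep the minimal achievable
    edit distance between a continuation of `prefix` and `reference`.
    """
    n = len(reference)
    # dist[j] = edit distance between `prefix` and reference[:j]
    dist = list(range(n + 1))
    for i, p in enumerate(prefix, start=1):
        prev, dist[0] = dist[0], i  # prev holds dist for the previous row
        for j in range(1, n + 1):
            cur = dist[j]
            dist[j] = min(
                dist[j] + 1,                     # delete p
                dist[j - 1] + 1,                 # insert reference[j-1]
                prev + (p != reference[j - 1]),  # substitute / match
            )
            prev = cur
    best = min(dist)
    # Extending a cheapest alignment with reference[j] adds no new edits,
    # so every such token is an optimal continuation. (An end-of-sequence
    # token would also be optimal when dist[n] == best; omitted here.)
    return {reference[j] for j in range(n) if dist[j] == best}


# Example: after generating "ca" against reference "cat",
# the only optimal next token is "t".
print(optimal_next_tokens(list("ca"), list("cat")))  # {'t'}
```

In the Levenshtein VAE setting, such oracle tokens would serve as the per-step targets the decoder is trained to predict, so a decoder that ignores the latent code, and therefore cannot track the reference, pays an edit-distance penalty at every step.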
