Bag-of-Vectors Autoencoders for Unsupervised Conditional Text Generation

10/13/2021
by   Florian Mai, et al.

Text autoencoders are often used for unsupervised conditional text generation by applying mappings in the latent space to change attributes to the desired values. Recently, Mai et al. (2020) proposed Emb2Emb, a method to learn these mappings in the embedding space of an autoencoder. However, their method is restricted to autoencoders with a single-vector embedding, which limits how much information can be retained. We address this issue by extending their method to Bag-of-Vectors Autoencoders (BoV-AEs), which encode the text into a variable-size bag of vectors that grows with the size of the text, as in attention-based models. This allows encoding and reconstructing much longer texts than standard autoencoders. Analogous to conventional autoencoders, we propose regularization techniques that facilitate learning meaningful operations in the latent space. Finally, we adapt the training scheme to learn a mapping from an input bag to an output bag, which requires a novel loss function and neural architecture. Our experimental evaluations on unsupervised sentiment transfer and sentence summarization show that our method performs substantially better than a standard autoencoder.
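
The abstract does not spell out the paper's loss function or architecture, so the sketch below only illustrates the underlying problem: comparing a predicted variable-size bag of vectors against a target bag. It uses a symmetric Chamfer distance, a standard permutation-invariant set-matching objective; the function name chamfer_bag_loss, the shapes, and the toy data are hypothetical and not taken from the paper.

```python
import torch

def chamfer_bag_loss(pred_bag: torch.Tensor, target_bag: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between two bags of vectors.

    pred_bag:   (m, d) bag produced by the latent mapping
    target_bag: (n, d) bag encoding the desired output text
    The bags may differ in size (m != n).
    """
    # Pairwise Euclidean distances between every pair of vectors, shape (m, n).
    dists = torch.cdist(pred_bag, target_bag, p=2)
    # Match each predicted vector to its nearest target vector and vice versa;
    # the result is invariant to the ordering of vectors within each bag and
    # is defined for bags of different sizes.
    return dists.min(dim=1).values.mean() + dists.min(dim=0).values.mean()

# Toy usage: a 5-vector bag mapped toward an 8-vector target bag.
pred = torch.randn(5, 64, requires_grad=True)
target = torch.randn(8, 64)
loss = chamfer_bag_loss(pred, target)
loss.backward()  # gradients flow back to the predicted bag
```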


Related research

Plug and Play Autoencoders for Conditional Text Generation (10/06/2020)
Text autoencoders are commonly used for conditional generation tasks suc...

Latent Space Secrets of Denoising Text-Autoencoders (05/29/2019)
While neural language models have recently demonstrated impressive perfo...

Solving Forward and Inverse Problems Using Autoencoders (12/05/2019)
This work develops a model-aware autoencoder network as a new method fo...

Hub-VAE: Unsupervised Hub-based Regularization of Variational Autoencoders (11/18/2022)
Exemplar-based methods rely on informative data points or prototypes to ...

Squeezing bottlenecks: exploring the limits of autoencoder semantic representation capabilities (02/13/2014)
We present a comprehensive study on the use of autoencoders for modellin...

CLIP2GAN: Towards Bridging Text with the Latent Space of GANs (11/28/2022)
In this work, we are dedicated to text-guided image generation and propo...

A Mechanism for Producing Aligned Latent Spaces with Autoencoders (06/29/2021)
Aligned latent spaces, where meaningful semantic shifts in the input spa...
