Learning Implicit Text Generation via Feature Matching

05/07/2020
by Inkit Padhi, et al.

The generative feature matching network (GFMN) is an approach for training implicit generative models of images by performing moment matching on features extracted from pre-trained neural networks. In this paper, we present new GFMN formulations that are effective for sequential data. Our experimental results show the effectiveness of the proposed method, SeqGFMN, on three distinct generation tasks in English: unconditional text generation, class-conditional text generation, and unsupervised text style transfer. SeqGFMN is stable to train and outperforms various adversarial approaches for both text generation and text style transfer.
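
To make the core idea concrete, here is a minimal sketch of a GFMN-style feature matching loss in PyTorch: the generator is trained so that the first and second moments (mean and variance) of features produced by a frozen, pre-trained encoder match between real and generated batches. The names encoder, generator, real_batch, and fake_batch are hypothetical placeholders for illustration, not the paper's actual architecture or API.

```python
import torch
import torch.nn as nn

def feature_matching_loss(encoder, real_batch, fake_batch):
    """GFMN-style moment matching: push the mean and variance of the
    generator's features (under a frozen, pre-trained encoder) toward
    those of real data. No discriminator is trained."""
    with torch.no_grad():                 # real features carry no gradients
        real_feats = encoder(real_batch)
    fake_feats = encoder(fake_batch)      # gradients flow back to the generator

    mean_loss = (real_feats.mean(dim=0) - fake_feats.mean(dim=0)).pow(2).sum()
    var_loss = (real_feats.var(dim=0) - fake_feats.var(dim=0)).pow(2).sum()
    return mean_loss + var_loss

# Toy usage with stand-in modules (hypothetical, for illustration only).
encoder = nn.Linear(32, 64)               # stand-in for a pre-trained feature extractor
encoder.requires_grad_(False)             # the extractor stays frozen during training
generator = nn.Linear(8, 32)              # stand-in for the implicit generator
real_batch = torch.randn(16, 32)
fake_batch = generator(torch.randn(16, 8))
loss = feature_matching_loss(encoder, real_batch, fake_batch)
loss.backward()                           # only the generator receives gradients
```

Because the objective is a fixed moment-matching target rather than a min-max game against a trained discriminator, there is no adversarial instability, which is consistent with the abstract's claim that SeqGFMN is stable to train.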

Related research

09/17/2018  Adversarial Text Generation via Feature-Mover's Distance
Generative adversarial networks (GANs) have achieved significant success...

04/04/2019  Learning Implicit Generative Models by Matching Perceptual Features
Perceptual features (PFs) have been used with great success in tasks suc...

07/20/2022  GenText: Unsupervised Artistic Text Generation via Decoupled Font and Texture Manipulation
Automatic artistic text generation is an emerging topic which receives i...

03/23/2023  DreamBooth3D: Subject-Driven Text-to-3D Generation
We present DreamBooth3D, an approach to personalize text-to-3D generativ...

12/06/2022  Style Transfer and Classification in Hebrew News Items
Hebrew is a morphologically rich language, making its modeling harder than...

04/16/2022  Efficient Reinforcement Learning for Unsupervised Controlled Text Generation
Controlled text generation tasks such as unsupervised text style transfe...

08/30/2019  Implicit Deep Latent Variable Models for Text Generation
Deep latent variable models (LVM) such as variational auto-encoder (VAE)...
