Evade the Trap of Mediocrity: Promoting Diversity and Novelty in Text Generation via Concentrating Attention

11/14/2022
by Wenhao Li, et al.

Recently, powerful Transformer architectures have proven superior in generating high-quality sentences. Nevertheless, these models tend to produce dull high-frequency phrases, severely hurting the diversity and novelty of the generated text. In this work, we investigate the intrinsic mechanism behind this problem and find that sparser attention values in the Transformer improve diversity. To understand this phenomenon, we first conduct both empirical and theoretical analyses and attribute it to representation degeneration caused by the attentive mixture of hidden states during training. We term this process the Trap of Mediocrity. To escape from such a trap, we introduce a novel attention regularization loss that controls the sharpness of the attention distribution; it is transparent to the model architecture and can be implemented within 20 lines of Python code. We prove that this method can be mathematically regarded as learning a Bayesian approximation of posterior attention. Experiments show that our method improves the diversity and novelty of the generated text while maintaining comparable quality on a variety of conditional and unconditional generation tasks.
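The abstract describes the regularizer only as a loss that controls the sharpness of the attention distribution; its exact form is not given here. Below is a minimal sketch, assuming an entropy-style penalty on softmax attention weights. The function name attention_entropy_loss, the weight lambda_attn, and the usage comments are illustrative assumptions, not the authors' released code.

import torch

def attention_entropy_loss(attn_weights, eps=1e-9):
    # attn_weights: tensor of shape (..., query_len, key_len) whose last
    # dimension is a probability distribution (each row sums to 1).
    # Minimizing the mean entropy pushes each distribution toward a
    # sharper (lower-entropy, sparser) shape.
    entropy = -(attn_weights * torch.log(attn_weights + eps)).sum(dim=-1)
    return entropy.mean()

# Hypothetical usage: add the penalty to the task loss.
# Here we assume the model can return its per-layer attention maps.
#   logits, attn_maps = model(input_ids, output_attentions=True)
#   loss = cross_entropy(logits, labels) \
#          + lambda_attn * sum(attention_entropy_loss(a) for a in attn_maps)

Because the penalty only reads the attention maps, it can be bolted onto any Transformer variant without changing the architecture, which matches the abstract's claim that the method is transparent to model structures.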


Related research

08/27/2021 · Lingxi: A Diversity-aware Chinese Modern Poetry Generation System
Poetry generation has been a difficult task in natural language processi...

11/18/2021 · How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN
Current language models can generate high-quality text. Are they simply ...

09/28/2018 · SALSA-TEXT : self attentive latent space based adversarial text generation
Inspired by the success of self attention mechanism and Transformer arch...

04/04/2022 · Diverse Text Generation via Variational Encoder-Decoder Models with Gaussian Process Priors
Generating high quality texts with high diversity is important for many ...

06/17/2023 · KEST: Kernel Distance Based Efficient Self-Training for Improving Controllable Text Generation
Self-training (ST) has come to fruition in language understanding tasks ...

10/22/2022 · Recurrence Boosts Diversity! Revisiting Recurrent Latent Variable in Transformer-Based Variational AutoEncoder for Diverse Text Generation
Variational Auto-Encoder (VAE) has been widely adopted in text generatio...

04/29/2020 · Generating Safe Diversity in NLG via Imitation Learning
Deep-learning models for language generation tasks tend to produce repet...