Improving Language Generation with Sentence Coherence Objective

09/07/2020
by Ruixiao Sun, et al.

Conditional story generation and contextual text continuation have become increasingly popular topics in the NLP community. Existing models are often prone to output paragraphs of text that gradually diverge from the given prompt. Although the generated text may have reasonable perplexity and diversity, it can easily be identified by a human reader as gibberish. The goal of our project is to improve the coherence and consistency across sentences in a language-generation model. We aim to solve this issue by first training a sentence-pair coherence classifier on top of the GPT-2 pretrained model, and then co-training the GPT-2 language model with this new coherence objective using a method analogous to the REINFORCE algorithm. The fine-tuned language model is able to generate lengthy paragraphs conditioned on a given topic without diverging too much. The simplicity of this method allows it to be applied to a variety of underlying language-model architectures, since it only modifies the final layer of the pre-trained model.
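As an illustrative sketch only (not the authors' released code), the REINFORCE-style co-training described above might look roughly like the following, assuming HuggingFace transformers, a GPT-2 generator, and a GPT-2-based sentence-pair coherence classifier whose positive-class probability serves as the reward; the constant baseline of 0.5 and all hyperparameters are assumptions made for illustration.

```python
# Minimal sketch of REINFORCE-style co-training with a coherence reward.
# Assumes HuggingFace transformers; the classifier, reward shaping, baseline,
# and hyperparameters below are illustrative, not the authors' settings.
import torch
from transformers import (GPT2LMHeadModel, GPT2Tokenizer,
                          GPT2ForSequenceClassification)

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

lm = GPT2LMHeadModel.from_pretrained("gpt2").to(device)   # generator being fine-tuned
clf = GPT2ForSequenceClassification.from_pretrained(      # stand-in for the pre-trained
    "gpt2", num_labels=2).to(device)                      # sentence-pair coherence classifier
clf.config.pad_token_id = tokenizer.pad_token_id
optimizer = torch.optim.Adam(lm.parameters(), lr=1e-5)

def reinforce_step(prompt: str) -> float:
    """Sample a continuation, score its coherence against the prompt,
    and apply the policy-gradient update loss = -(reward - b) * log p(sample)."""
    enc = tokenizer(prompt, return_tensors="pt").to(device)
    prompt_len = enc.input_ids.shape[1]

    sample = lm.generate(**enc, do_sample=True, max_new_tokens=40,
                         pad_token_id=tokenizer.eos_token_id)
    continuation = sample[:, prompt_len:]

    # Reward: classifier's probability that (prompt, continuation) is coherent.
    with torch.no_grad():
        reward = torch.softmax(clf(sample).logits, dim=-1)[0, 1]

    # Log-probability of the sampled continuation under the current LM.
    logits = lm(sample).logits[:, prompt_len - 1:-1, :]
    logp = torch.log_softmax(logits, dim=-1)
    token_logp = logp.gather(-1, continuation.unsqueeze(-1)).squeeze(-1).sum()

    loss = -(reward - 0.5) * token_logp   # 0.5 is a crude constant baseline
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(reward)
```

A practical version would batch the rollouts and use a learned or moving-average baseline rather than the constant 0.5, but the structure mirrors the description above: the frozen coherence classifier supplies the reward, and only the generator's parameters are updated.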


