Text Generation by Learning from Off-Policy Demonstrations

09/16/2020, by Richard Yuanzhe Pang, et al.

Current approaches to text generation largely rely on autoregressive models and maximum likelihood estimation. This paradigm leads to (i) diverse but low-quality samples due to a mismatch between the learning objective and the evaluation metric (likelihood vs. quality) and (ii) exposure bias due to mismatched history distributions (gold vs. model-generated). To alleviate these problems, we frame text generation as a reinforcement learning (RL) problem with expert demonstrations (i.e., the training data), where the goal is to maximize quality given model-generated histories. Prior RL approaches to generation often face optimization issues due to the large action space and sparse reward. We propose GOLD (generation by off-policy learning from demonstrations): an algorithm that learns from off-policy demonstrations by importance weighting and does not suffer from degenerate solutions. We find that GOLD outperforms the baselines according to both automatic and human evaluation on summarization, question generation, and machine translation, including attaining state-of-the-art results on CNN/DailyMail summarization. Further, we show that models trained with GOLD are less sensitive to the decoding algorithm, and that generation quality degrades less as sequence length increases.
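
The core idea, reweighting the standard MLE loss so that each demonstration token contributes in proportion to an importance weight, can be illustrated with a short sketch. The following is a hypothetical PyTorch rendering, not the authors' released code: it approximates the per-token importance weight by the model's own detached probability of the demonstration token, floored by a constant (`weight_floor`, a name introduced here) so that rare tokens are not dropped from the objective, and it omits any reward scaling.

```python
import torch
import torch.nn.functional as F

def gold_style_loss(logits, targets, pad_id=0, weight_floor=0.1):
    """Importance-weighted NLL over demonstration tokens (illustrative sketch).

    Each demonstration token's negative log-likelihood is reweighted by the
    model's own (detached) probability of that token, so sequences the model
    already finds likely dominate the gradient; `weight_floor` lower-bounds
    the weight. Reward terms from the full objective are omitted here.

    logits:  (batch, seq_len, vocab) unnormalized scores from the model
    targets: (batch, seq_len) demonstration token ids
    """
    log_probs = F.log_softmax(logits, dim=-1)
    # log p_theta(a_t | history) for each demonstration token
    token_log_probs = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)

    # Per-token importance weight: detached model probability, floored.
    with torch.no_grad():
        weights = token_log_probs.exp().clamp(min=weight_floor)

    mask = (targets != pad_id).float()
    return -(weights * token_log_probs * mask).sum() / mask.sum()
```

Setting all weights to 1 recovers ordinary MLE; the detached, floored weights are what tilt training toward high-probability demonstrations, which is the reweighting the abstract attributes to off-policy learning with importance weights.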


Related research:

06/14/2021 · Text Generation with Efficient (Soft) Q-Learning
Maximum likelihood estimation (MLE) is the predominant algorithm for tra...

05/29/2022 · CoNT: Contrastive Neural Text Generation
Recently, contrastive learning attracts increasing interests in neural t...

04/30/2018 · Towards Diverse Text Generation with Inverse Reinforcement Learning
Text generation is a crucial task in NLP. Recently, several adversarial ...

09/08/2021 · Smelting Gold and Silver for Improved Multilingual AMR-to-Text Generation
Recent work on multilingual AMR-to-text generation has exclusively focus...

08/26/2021 · Alleviating Exposure Bias via Contrastive Learning for Abstractive Text Summarization
Encoder-decoder models have achieved remarkable success in abstractive t...

11/24/2018 · Connecting the Dots Between MLE and RL for Sequence Generation
Sequence generation models such as recurrent networks can be trained wit...

02/28/2019 · Evaluating Rewards for Question Generation Models
Recent approaches to question generation have used modifications to a Se...
