Dynamic Scheduled Sampling with Imitation Loss for Neural Text Generation

01/31/2023
by Xiang Lin, et al.

State-of-the-art neural text generation models are typically trained to maximize the likelihood of each token in the ground-truth sequence conditioned on the previous target tokens. During inference, however, the model must predict each token conditioned on the tokens it has generated itself. This train-test discrepancy is known as exposure bias. Scheduled sampling is a curriculum learning strategy that gradually exposes the model to its own predictions during training to mitigate this bias. Most proposed approaches design the schedule as a function of the training step, which generally requires careful tuning for each training setup. In this work, we introduce Dynamic Scheduled Sampling with Imitation Loss (DySI), which bases the schedule solely on training-time accuracy and strengthens the curriculum with an imitation loss that encourages the decoder's behavior to be indistinguishable from that of a teacher-forced decoder. DySI is universally applicable across training setups with minimal tuning. Extensive experiments and analysis show that DySI not only achieves notable improvements on standard machine translation benchmarks, but also significantly improves the robustness of other text generation models.
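To make the idea concrete, here is a minimal PyTorch sketch of the two ingredients the abstract describes: a sampling schedule driven by training-time accuracy rather than the step count, and an imitation term that pushes the scheduled-sampling decoder toward its teacher-forced counterpart. Everything here is an illustrative assumption rather than the authors' implementation: `TinyDecoder` and `dysi_style_step` are hypothetical names, the direct accuracy-to-probability mapping is one possible scheduler shape, and the KL form of the imitation loss is one plausible way to make the two behaviors match.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDecoder(nn.Module):
    """Toy autoregressive decoder, included only to make the sketch runnable."""
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRUCell(dim, dim)
        self.out = nn.Linear(dim, vocab_size)

    def step(self, token, hidden):
        hidden = self.rnn(self.embed(token), hidden)   # hidden=None -> zeros
        return self.out(hidden), hidden

def dysi_style_step(dec, targets, train_acc, beta=0.5):
    # Probability of feeding the model its own prediction. Tying it directly
    # to training-time accuracy is an assumed scheduler shape, not the
    # paper's exact formula.
    p = max(0.0, min(1.0, train_acc))
    batch, seq_len = targets.shape
    h_tf = h_ss = None
    inp_tf = inp_ss = targets[:, 0]                    # assume BOS at index 0
    nll = imitation = 0.0
    for t in range(1, seq_len):
        logits_tf, h_tf = dec.step(inp_tf, h_tf)       # teacher-forced pass
        logits_ss, h_ss = dec.step(inp_ss, h_ss)       # scheduled-sampling pass
        nll = nll + F.cross_entropy(logits_ss, targets[:, t])
        # Imitation term: pull the scheduled-sampling output distribution
        # toward the (detached) teacher-forced one. The KL form is an
        # illustrative choice; a real setup might also keep a teacher-forced
        # NLL term.
        imitation = imitation + F.kl_div(
            F.log_softmax(logits_ss, dim=-1),
            F.softmax(logits_tf, dim=-1).detach(),
            reduction="batchmean",
        )
        # Per-example coin flip: model prediction vs. ground truth.
        use_model = torch.rand(batch, device=targets.device) < p
        inp_ss = torch.where(use_model, logits_ss.argmax(dim=-1), targets[:, t])
        inp_tf = targets[:, t]                         # always ground truth
    return (nll + beta * imitation) / (seq_len - 1)
```

A quick usage example on random data:

```python
dec = TinyDecoder(vocab_size=100)
targets = torch.randint(0, 100, (8, 12))   # (batch, sequence length)
loss = dysi_style_step(dec, targets, train_acc=0.3)
loss.backward()
```

Note the design consequence of an accuracy-based schedule: early in training, when accuracy is low, the decoder is mostly teacher-forced; as accuracy rises, it is increasingly exposed to its own predictions, without any step-count hyperparameters to tune.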

Related research

10/12/2020 · Improving Text Generation with Student-Forcing Optimal Transport
Neural language models are often trained with maximum likelihood estimat...

12/14/2020 · Contrastive Learning with Adversarial Perturbations for Conditional Text Generation
Recently, sequence-to-sequence (seq2seq) models with the Transformer arc...

10/07/2020 · TeaForN: Teacher-Forcing with N-grams
Sequence generation models trained with teacher-forcing suffer from issu...

01/23/2018 · MaskGAN: Better Text Generation via Filling in the ______
Neural text generation models are often autoregressive language models o...

09/16/2018 · Curriculum-Based Neighborhood Sampling For Sequence Prediction
The task of multi-step ahead prediction in language models is challengin...

02/06/2021 · Does the Order of Training Samples Matter? Improving Neural Data-to-Text Generation with Curriculum Learning
Recent advancements in data-to-text generation largely take on the form ...

01/22/2021 · k-Neighbor Based Curriculum Sampling for Sequence Prediction
Multi-step ahead prediction in language models is challenging due to the...
