Non-autoregressive Transformer by Position Learning

11/25/2019
by Yu Bao, et al.

Non-autoregressive models are promising on various text generation tasks. Previous work has rarely considered modeling the positions of generated words explicitly; however, position modeling is an essential problem in non-autoregressive text generation. In this study, we propose PNAT, which incorporates positions as a latent variable into the text generation process. Experimental results show that PNAT achieves top results on machine translation and paraphrase generation tasks, outperforming several strong baselines.
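
The core idea, treating the positions of output tokens as a latent variable and injecting the predicted order into a non-autoregressive decoder, can be illustrated with a minimal sketch. The PyTorch module below is only an illustrative approximation under assumed shapes and names (PositionPredictor, TinyNATDecoder, the copied-source decoder input, and the greedy argmax position assignment are all assumptions), not the authors' PNAT implementation.

```python
# Minimal sketch of position modeling as a latent variable in a
# non-autoregressive decoder. Illustrative only; module names, shapes,
# and the greedy position assignment are assumptions, not PNAT itself.
import torch
import torch.nn as nn


class PositionPredictor(nn.Module):
    """Scores, for every decoder slot, a distribution over target positions."""

    def __init__(self, d_model: int, max_len: int):
        super().__init__()
        self.proj = nn.Linear(d_model, max_len)

    def forward(self, slot_states: torch.Tensor) -> torch.Tensor:
        # slot_states: (batch, tgt_len, d_model) -> (batch, tgt_len, max_len)
        return self.proj(slot_states)


class TinyNATDecoder(nn.Module):
    """Non-autoregressive decoder that injects predicted positions into the
    slot representations before emitting all tokens in parallel."""

    def __init__(self, vocab_size: int, d_model: int = 64, max_len: int = 32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos_embed = nn.Embedding(max_len, d_model)
        self.position_predictor = PositionPredictor(d_model, max_len)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, num_layers=2)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, copied_src: torch.Tensor):
        # copied_src: (batch, tgt_len) token ids copied from the source,
        # a common heuristic for constructing the NAT decoder input.
        h = self.embed(copied_src)
        # Latent positions: greedy argmax stands in for the search or
        # sampling over the position variable used at inference time.
        pos_logits = self.position_predictor(h)
        positions = pos_logits.argmax(dim=-1)        # (batch, tgt_len)
        h = h + self.pos_embed(positions)            # inject predicted order
        h = self.decoder(h)
        return self.out(h), pos_logits               # token + position logits


if __name__ == "__main__":
    model = TinyNATDecoder(vocab_size=100)
    dummy = torch.randint(0, 100, (2, 10))
    token_logits, pos_logits = model(dummy)
    print(token_logits.shape, pos_logits.shape)      # (2, 10, 100) (2, 10, 32)
```

In the paper the latent positions are learned and searched with a dedicated inference procedure rather than a single greedy argmax; the sketch above only shows where such a position variable enters the generation process.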

Related research

03/21/2021 · Non-Autoregressive Translation by Learning Target Categorical Codes
08/26/2022 · Nearest Neighbor Non-autoregressive Text Generation
04/19/2023 · Controlling keywords and their positions in text generation
05/06/2023 · An Adversarial Non-Autoregressive Model for Text Generation with Incomplete Information
11/24/2021 · Octree Transformer: Autoregressive 3D Shape Generation on Hierarchically Structured Sequences
02/16/2021 · Non-Autoregressive Text Generation with Pre-trained Language Models
08/30/2019 · Autoregressive Text Generation Beyond Feedback Loops
