Target-Side Augmentation for Document-Level Machine Translation

by Guangsheng Bao, et al.

Document-level machine translation faces the challenge of data sparsity: inputs are long, yet training data are limited, which increases the risk of learning spurious patterns. To address this challenge, we propose a target-side augmentation method, introducing a data augmentation (DA) model that generates many potential translations for each source document. By learning from this wider range of translations, an MT model can fit a smoothed distribution, thereby reducing the risk of data sparsity. We demonstrate that the DA model, which estimates the posterior distribution over translations, substantially improves MT performance, outperforming the previous best system by 2.30 s-BLEU on News and achieving new state-of-the-art results on the News and Europarl benchmarks. Our code is available at <>.
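The core idea can be sketched in a few lines: for each source document, keep the reference translation and add several sampled alternatives produced by the DA model, then train the MT model on the enlarged set. This is only an illustrative sketch, not the paper's implementation; `augment_targets`, `sample_fn`, and `toy_sample` are placeholder names standing in for the DA model and its sampling interface.

```python
import random

def augment_targets(pairs, sample_fn, n_samples=4, seed=0):
    """Target-side augmentation: for each (source, reference) pair,
    keep the reference and append n_samples sampled translations
    produced by the DA model (abstracted here as sample_fn)."""
    rng = random.Random(seed)
    augmented = []
    for src, ref in pairs:
        augmented.append((src, ref))  # keep the original reference
        for _ in range(n_samples):
            augmented.append((src, sample_fn(src, ref, rng)))
    return augmented

def toy_sample(src, ref, rng):
    """Toy stand-in for the DA model: randomly drops a few words
    from the reference, merely to produce varied target strings."""
    words = ref.split()
    kept = [w for w in words if rng.random() > 0.1] or words
    return " ".join(kept)
```

In the actual method the sampler would be a trained DA model estimating the posterior over translations given the source and reference; the MT model is then trained on the augmented pairs as usual.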



