JASS: Japanese-specific Sequence to Sequence Pre-training for Neural Machine Translation

05/07/2020
by   Zhuoyuan Mao, et al.

Neural machine translation (NMT) needs large parallel corpora to reach state-of-the-art translation quality. Low-resource NMT is typically addressed by transfer learning, which leverages large monolingual or parallel corpora for pre-training. Monolingual pre-training approaches such as MASS (MAsked Sequence to Sequence) are extremely effective in boosting NMT quality for languages with small parallel corpora. However, they do not account for linguistic information obtained with syntactic analyzers, which is known to be invaluable for several Natural Language Processing (NLP) tasks. To this end, we propose JASS, Japanese-specific Sequence to Sequence, as a novel pre-training alternative to MASS for NMT involving Japanese as the source or target language. JASS is joint BMASS (Bunsetsu MASS) and BRSS (Bunsetsu Reordering Sequence to Sequence) pre-training, both of which focus on Japanese linguistic units called bunsetsus. In our experiments on ASPEC Japanese–English and News Commentary Japanese–Russian translation, we show that JASS gives results that are competitive with, if not better than, those given by MASS. Furthermore, we show for the first time that joint MASS and JASS pre-training yields results that significantly surpass either method alone, indicating their complementary nature. We will release our code, pre-trained models, and bunsetsu-annotated data as resources for researchers to use in their own NLP tasks.
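
As a rough illustration of the bunsetsu-level objectives described in the abstract, the sketch below builds BMASS-style (masked-span reconstruction) and BRSS-style (reordering restoration) source/target pairs from a sentence that is already split into bunsetsus. This is not the paper's exact procedure: real pipelines obtain bunsetsu boundaries from a Japanese analyzer (e.g. CaboCha or KNP), the paper's reordering is linguistically motivated rather than a random shuffle, and the function names, masking ratio, and [MASK] token below are hypothetical choices for illustration only.

```python
import random
from typing import List, Tuple

# Illustrative sketch only. We assume the sentence is pre-segmented into
# bunsetsu chunks; in practice a bunsetsu analyzer would produce them.
MASK = "[MASK]"


def make_bmass_pair(bunsetsus: List[str], mask_ratio: float = 0.5) -> Tuple[str, str]:
    """MASS-style masking at bunsetsu granularity (BMASS-like).

    Source: the sentence with a contiguous run of bunsetsus replaced by [MASK].
    Target: the masked bunsetsus, which the decoder must reconstruct.
    """
    n = len(bunsetsus)
    span_len = max(1, int(n * mask_ratio))
    start = random.randint(0, n - span_len)
    source = bunsetsus[:start] + [MASK] * span_len + bunsetsus[start + span_len:]
    target = bunsetsus[start:start + span_len]
    return " ".join(source), " ".join(target)


def make_brss_pair(bunsetsus: List[str]) -> Tuple[str, str]:
    """BRSS-like pair: reorder the bunsetsus (here a simple random permutation)
    and train the model to restore the original bunsetsu order."""
    shuffled = bunsetsus[:]
    random.shuffle(shuffled)
    return " ".join(shuffled), " ".join(bunsetsus)


if __name__ == "__main__":
    # Toy example: a Japanese sentence pre-segmented into four bunsetsus.
    chunks = ["彼は", "昨日", "新しい本を", "買った"]
    print(make_bmass_pair(chunks))
    print(make_brss_pair(chunks))
```

The point of operating on bunsetsus rather than subword tokens is that masked and reordered spans then align with syntactically coherent units, which is the linguistic signal the abstract argues MASS alone does not exploit.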

research
01/23/2020

Pre-training via Leveraging Assisting Languages and Data Selection for Neural Machine Translation

Sequence-to-sequence (S2S) pre-training using large monolingual data is ...
research
01/20/2022

Linguistically-driven Multi-task Pre-training for Low-resource Neural Machine Translation

In the present study, we propose novel sequence-to-sequence pre-training...
research
12/19/2022

Synthetic Pre-Training Tasks for Neural Machine Translation

Pre-training is an effective technique for ensuring robust performance o...
research
11/13/2020

Language Models not just for Pre-training: Fast Online Neural Noisy Channel Modeling

Pre-training models on vast quantities of unlabeled data has emerged as ...
research
11/07/2019

Microsoft Research Asia's Systems for WMT19

We, Microsoft Research Asia, made submissions to 11 language directions in...
research
04/03/2023

PEACH: Pre-Training Sequence-to-Sequence Multilingual Models for Translation with Semi-Supervised Pseudo-Parallel Document Generation

Multilingual pre-training significantly improves many multilingual NLP t...
research
07/02/2019

A Neural Grammatical Error Correction System Built On Better Pre-training and Sequential Transfer Learning

Grammatical error correction can be viewed as a low-resource sequence-to...
