Sometimes We Want Translationese

04/15/2021
by Prasanna Parthasarathi, et al.

Rapid progress in Neural Machine Translation (NMT) systems over the last few years has been driven primarily by the goal of improving translation quality and, as a secondary focus, by improving robustness to input perturbations (e.g., spelling and grammatical mistakes). While performance and robustness are important objectives, by over-focusing on them we risk overlooking other important properties. In this paper, we draw attention to the fact that for some applications, faithfulness to the original (input) text is important to preserve, even if it means introducing unusual language patterns in the (output) translation. We propose a simple, novel way to quantify whether an NMT system exhibits robustness or faithfulness, focusing on the case of word-order perturbations. We explore a suite of functions that perturb the word order of source sentences without deleting or injecting tokens, and measure the effects on the target side in terms of both robustness and faithfulness. Across several experimental conditions, we observe a strong tendency towards robustness rather than faithfulness. These results allow us to better understand the trade-off between faithfulness and robustness in NMT, and open up the possibility of developing systems in which users have more autonomy and control in selecting the property best suited to their use case.
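To make the idea of a word-order perturbation concrete, here is a minimal sketch of one plausible perturbation function: random adjacent-token swaps. The function name `perturb_word_order` and the swap-based strategy are illustrative assumptions, not the paper's actual suite of functions; the key property shown is the one the abstract states, that the token multiset is preserved (nothing deleted or injected), only positions change.

```python
import random

def perturb_word_order(sentence: str, num_swaps: int = 2, seed: int = 0) -> str:
    """Perturb word order by swapping randomly chosen adjacent tokens.

    Illustrative sketch: keeps the multiset of tokens intact --
    no token is deleted or injected, only positions change.
    """
    rng = random.Random(seed)
    tokens = sentence.split()
    for _ in range(num_swaps):
        if len(tokens) < 2:
            break
        i = rng.randrange(len(tokens) - 1)  # pick a left neighbor
        tokens[i], tokens[i + 1] = tokens[i + 1], tokens[i]
    return " ".join(tokens)

src = "the quick brown fox jumps over the lazy dog"
out = perturb_word_order(src)
# Same tokens, possibly different order:
assert sorted(out.split()) == sorted(src.split())
```

A robust system would translate `out` the same way it translates `src`; a faithful system would instead let the scrambled word order show through in the translation.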

