Large language models effectively leverage document-level context for literary translation, but critical errors persist

04/06/2023
by Marzena Karpinska, et al.

Large language models (LLMs) are competitive with the state of the art on a wide range of sentence-level translation datasets. However, their ability to translate paragraphs and documents remains unexplored because evaluation in these settings is costly and difficult. We show through a rigorous human evaluation that asking the GPT-3.5 (text-davinci-003) LLM to translate an entire literary paragraph (e.g., from a novel) at once results in higher-quality translations than standard sentence-by-sentence translation across 18 linguistically-diverse language pairs (e.g., translating into and out of Japanese, Polish, and English). Our evaluation, which took approximately 350 hours of effort for annotation and analysis, is conducted by hiring translators fluent in both the source and target language and asking them to provide both span-level error annotations and preference judgments of which system's translations are better. We observe that discourse-level LLM translators commit fewer mistranslations, grammar errors, and stylistic inconsistencies than sentence-level approaches. With that said, critical errors still abound, including occasional content omissions, and a human translator's intervention remains necessary to ensure that the author's voice remains intact. We publicly release our dataset and error annotations to spur future research on evaluation of document-level literary translation.
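The core experimental manipulation described in the abstract is prompting granularity: the paragraph-level condition feeds the model an entire source paragraph in one request, while the sentence-level baseline translates each sentence independently. The sketch below illustrates that contrast using the legacy OpenAI Completions API and text-davinci-003 (the model named above); the prompt wording, decoding parameters, and helper function names are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch contrasting paragraph-level vs. sentence-by-sentence prompting.
# Assumes the legacy OpenAI Completions API (openai<1.0) with an API key set in
# the OPENAI_API_KEY environment variable; all prompts and parameters here are
# illustrative, not the paper's exact configuration.
import openai

MODEL = "text-davinci-003"

def translate(text: str, src: str, tgt: str) -> str:
    """Request a single translation of `text` from src language to tgt language."""
    prompt = (
        f"Translate the following {src} text into {tgt}:\n\n"
        f"{text}\n\nTranslation:"
    )
    response = openai.Completion.create(
        model=MODEL,
        prompt=prompt,
        max_tokens=1024,
        temperature=0.3,
    )
    return response["choices"][0]["text"].strip()

def translate_paragraph(paragraph: str, src: str, tgt: str) -> str:
    # Paragraph-level condition: the whole paragraph is translated in one call,
    # so the model can use cross-sentence context (pronoun antecedents,
    # register, stylistic cues).
    return translate(paragraph, src, tgt)

def translate_sentence_by_sentence(sentences: list[str], src: str, tgt: str) -> str:
    # Sentence-level baseline: each sentence is translated in isolation and the
    # outputs are concatenated, discarding discourse-level context.
    return " ".join(translate(s, src, tgt) for s in sentences)
```

Access to cross-sentence context in the paragraph-level condition is the mechanism the human evaluation credits for the reduction in mistranslations, grammar errors, and stylistic inconsistencies relative to the sentence-level baseline.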


