Unsupervised Domain Clusters in Pretrained Language Models

04/05/2020
by Roee Aharoni, et al.

The notion of "in-domain data" in NLP is often oversimplified and vague, as textual data varies in many nuanced linguistic aspects such as topic, style, or level of formality. In addition, domain labels are often unavailable, making it challenging to build domain-specific systems. We show that massive pre-trained language models implicitly learn sentence representations that cluster by domain without supervision, suggesting a simple data-driven definition of domains in textual data. We harness this property and propose domain data selection methods based on such models, which require only a small set of in-domain monolingual data. We evaluate our data selection methods for neural machine translation across five diverse domains, where they outperform an established approach as measured both by BLEU and by the precision and recall of sentence selection with respect to an oracle.
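To make the core idea concrete, here is a minimal sketch of the kind of pipeline the abstract describes: embed sentences with a pretrained language model and cluster the embeddings without any domain labels. The checkpoint (bert-base-uncased), the mean pooling, and the Gaussian-mixture settings below are illustrative assumptions, not the paper's exact configuration.

import numpy as np
import torch
from sklearn.mixture import GaussianMixture
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(sentences, batch_size=32):
    """Mean-pool the last hidden layer over non-padding tokens."""
    vecs = []
    for i in range(0, len(sentences), batch_size):
        batch = tokenizer(sentences[i:i + batch_size], padding=True,
                          truncation=True, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**batch).last_hidden_state   # (batch, tokens, dim)
        mask = batch["attention_mask"].unsqueeze(-1)    # zero out padding
        vecs.append(((hidden * mask).sum(1) / mask.sum(1)).numpy())
    return np.concatenate(vecs)

# Toy mixed-domain corpus; with real corpora the clusters tend to align
# with domains such as medical, legal, or IT text.
sentences = [
    "The patient was administered 50 mg of the drug twice daily.",
    "A follow-up MRI showed no sign of the lesion.",
    "The licensee shall pay royalties within thirty days of receipt.",
    "This agreement is governed by the laws of the State of New York.",
    "Press and hold the power button to restart the device.",
    "Update the driver if the adapter is not detected.",
]
embeddings = embed(sentences)

# One Gaussian per presumed domain; the output is cluster ids, not names.
gmm = GaussianMixture(n_components=3, random_state=0).fit(embeddings)
print(gmm.predict(embeddings))

In the paper's setting, a small in-domain seed set can then be used to pick the clusters, or to rank candidate sentences by similarity to the seed set, when selecting NMT training data.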

Related research

09/16/2021 · Translation Transformers Rediscover Inherent Data Domains
Many works proposed methods to improve the performance of Neural Machine...

01/22/2020 · Unsupervised Domain Adaptation for Neural Machine Translation with Iterative Back Translation
State-of-the-art neural machine translation (NMT) systems are data-hungr...

09/11/2021 · Multilingual Translation via Grafting Pre-trained Language Models
Can pre-trained BERT for one language and GPT for another be glued toget...

09/09/2021 · Generalised Unsupervised Domain Adaptation of Neural Machine Translation with Cross-Lingual Data Selection
This paper considers the unsupervised domain adaptation problem for neur...

08/27/2019 · Unsupervised Domain Adaptation for Neural Machine Translation with Domain-Aware Feature Embeddings
The recent success of neural machine translation models relies on the av...

10/02/2014 · Not All Neural Embeddings are Born Equal
Neural language models learn word representations that capture rich ling...

12/16/2021 · Efficient Hierarchical Domain Adaptation for Pretrained Language Models
Generative language models are trained on diverse, general domain corpor...
