Unsupervised Domain Adaptation of Contextualized Embeddings for Sequence Labeling

04/04/2019
by Xiaochuang Han, et al.

Contextualized word embeddings such as ELMo and BERT provide a foundation for strong performance across a wide range of natural language processing tasks by pretraining on large corpora of unlabeled text. However, the applicability of this approach is unknown when the target domain varies substantially from the pretraining corpus. We are specifically interested in the scenario in which labeled data is available in only a canonical source domain such as news text, and the target domain is distinct from both the labeled and pretraining texts. To address this scenario, we propose domain-adaptive fine-tuning, in which the contextualized embeddings are adapted by masked language modeling on text from the target domain. We test this approach on sequence labeling in two challenging domains: Early Modern English and Twitter. Both domains differ substantially from existing pretraining corpora, and domain-adaptive fine-tuning yields substantial improvements over strong BERT baselines, with particularly impressive results on out-of-vocabulary words. We conclude that domain-adaptive fine-tuning offers a simple and effective approach for the unsupervised adaptation of sequence labeling to difficult new domains.
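The adaptation step described above, continued masked language modeling on unlabeled target-domain text, can be sketched with the Hugging Face transformers and datasets libraries. This is a minimal illustration, not the authors' exact setup: the checkpoint, the file name "target_domain.txt", and all hyperparameters below are assumptions.

```python
from datasets import load_dataset
from transformers import (
    BertForMaskedLM,
    BertTokenizerFast,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Start from an off-the-shelf pretrained checkpoint.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
model = BertForMaskedLM.from_pretrained("bert-base-cased")

# Unlabeled target-domain text, one example per line;
# "target_domain.txt" is a hypothetical file of e.g. tweets.
raw = load_dataset("text", data_files={"train": "target_domain.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

train_set = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

# Standard MLM objective: randomly mask 15% of tokens and train the
# model to recover them from context.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="bert-domain-adapted",
        num_train_epochs=3,              # assumed; tune on held-out perplexity
        per_device_train_batch_size=16,  # assumed
        learning_rate=5e-5,              # assumed
    ),
    data_collator=collator,
    train_dataset=train_set,
)
trainer.train()
trainer.save_model("bert-domain-adapted")
```

The adapted weights can then be loaded into a token-classification model (e.g. BertForTokenClassification.from_pretrained("bert-domain-adapted")) and fine-tuned on the labeled source-domain data only, matching the unsupervised setting described in the abstract.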

