Expanding Pretrained Models to Thousands More Languages via Lexicon-based Adaptation

03/17/2022
by Xinyi Wang, et al.

The performance of multilingual pretrained models is highly dependent on the availability of monolingual or parallel text in a target language. Thus, the majority of the world's languages cannot benefit from recent progress in NLP, as they have little or no textual data. To expand the possibilities of using NLP technology in these under-represented languages, we systematically study strategies that relax the reliance on conventional language resources through the use of bilingual lexicons, an alternative resource with much better language coverage. We analyze different strategies to synthesize textual or labeled data using lexicons, and how this data can be combined with monolingual or parallel text when available. For 19 under-represented languages across 3 tasks, our methods lead to consistent improvements of up to 5 and 15 points with and without extra monolingual text, respectively. Overall, our study highlights how NLP methods can be adapted to thousands more languages that are under-served by current technology.
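The core idea of synthesizing data from a bilingual lexicon can be illustrated with simple word-for-word substitution: each source-language word that appears in the lexicon is replaced by its target-language translation, and unknown words are left unchanged. The sketch below is a minimal illustration under that assumption; the lexicon file format, function names, and toy example are hypothetical and not taken from the paper.

```python
# Minimal sketch of lexicon-based data synthesis: translate source-language
# text word by word into a target language using a bilingual lexicon.
# The tab-separated lexicon format and helper names are illustrative only.

from typing import Dict, List


def load_lexicon(path: str) -> Dict[str, str]:
    """Load a bilingual lexicon with one 'source_word<TAB>target_word' pair per line."""
    lexicon: Dict[str, str] = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split("\t")
            if len(parts) >= 2:
                lexicon[parts[0].lower()] = parts[1]
    return lexicon


def synthesize_text(sentence: str, lexicon: Dict[str, str]) -> str:
    """Replace each token with its lexicon translation; keep unknown tokens as-is."""
    tokens: List[str] = sentence.split()
    return " ".join(lexicon.get(tok.lower(), tok) for tok in tokens)


if __name__ == "__main__":
    toy_lexicon = {"the": "la", "cat": "gato", "sleeps": "duerme"}
    print(synthesize_text("The cat sleeps", toy_lexicon))  # -> "la gato duerme"
```

Such synthesized pseudo-text can then be used for continued pretraining or task-specific training, and combined with any monolingual or parallel text that happens to be available for the target language.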


