Massive vs. Curated Word Embeddings for Low-Resourced Languages. The Case of Yorùbá and Twi

12/05/2019
by Jesujoba O. Alabi, et al.

The success of several architectures in learning semantic representations from unannotated text, together with the availability of such text in online multilingual resources such as Wikipedia, has facilitated the massive and automatic creation of resources for multiple languages. The evaluation of such resources is usually done for high-resourced languages, where one has a smorgasbord of tasks and test sets to evaluate on. For low-resourced languages, evaluation is more difficult and usually neglected, with the hope that the impressive capability of deep learning architectures to learn (multilingual) representations in the high-resourced setting holds in the low-resourced setting too. In this paper we focus on two African languages, Yorùbá and Twi, and compare the word embeddings obtained in this way with word embeddings obtained from curated corpora and language-dependent processing. We analyse the noise in the publicly available corpora, collect high-quality and noisy data for the two languages, and quantify the improvements that depend not only on the amount of data but also on its quality. We also use different architectures that learn word representations from both surface forms and characters to further exploit all the available information, which proved to be important for these languages. For the evaluation, we manually translate the wordsim-353 word-pair dataset from English into Yorùbá and Twi. As an outcome of this work, we provide corpora, embeddings and test suites for both languages.
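The pipeline described in the abstract (training embeddings that combine surface forms with character n-grams, then scoring them against a translated wordsim-353 file) can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' code: it assumes gensim's FastText implementation, hypothetical file names (yoruba_curated.txt, wordsim353_yoruba.tsv) and illustrative hyperparameters.

```python
# Minimal sketch: train subword-aware word embeddings on a curated corpus and
# evaluate them on a wordsim-353-style file of human-scored word pairs.
# File names and hyperparameters are assumptions for illustration only.
from gensim.models import FastText
from scipy.stats import spearmanr

# 1. Train embeddings that use both surface forms and character n-grams.
sentences = [line.split() for line in open("yoruba_curated.txt", encoding="utf-8")]
model = FastText(
    sentences,
    vector_size=300,    # embedding dimensionality
    window=5,
    min_count=3,
    min_n=3, max_n=6,   # character n-gram range (subword information)
    epochs=10,
)

# 2. Evaluate on a file with lines of the form "word1<TAB>word2<TAB>human_score".
gold, predicted = [], []
with open("wordsim353_yoruba.tsv", encoding="utf-8") as f:
    for line in f:
        w1, w2, score = line.rstrip("\n").split("\t")
        gold.append(float(score))
        predicted.append(model.wv.similarity(w1, w2))  # cosine similarity

rho, _ = spearmanr(gold, predicted)
print(f"Spearman correlation with human judgements: {rho:.3f}")
```

The Spearman correlation between model similarities and human judgements is the standard score reported for word-similarity test sets such as wordsim-353; the character n-gram range is the knob that lets the model exploit subword information, which the paper reports as important for Yorùbá and Twi.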


