Corpora Compared: The Case of the Swedish Gigaword & Wikipedia Corpora

11/06/2020
by Tosin P. Adewumi, et al.

In this work, we show that the difference in performance between embeddings trained on differently sourced data for a given language can be due to factors other than data size. Natural language processing (NLP) tasks usually perform better with embeddings from bigger corpora; however, the breadth of the domains covered and the amount of noise also play important roles. We evaluate embeddings based on two Swedish corpora, the Gigaword and Wikipedia corpora, on analogy (intrinsic) tests and find that the embeddings from the Wikipedia corpus generally outperform those from the Gigaword corpus, even though the latter is the bigger corpus. Downstream tests will be required for a definitive evaluation.
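As a rough illustration of the kind of intrinsic analogy evaluation described above, the following is a minimal sketch using gensim (not the authors' code); the corpus file names, training hyperparameters, and the Swedish analogy file name are assumptions for illustration only.

```python
# Minimal sketch: train word2vec embeddings on two corpora and compare
# them on an analogy test set. File names and hyperparameters are
# hypothetical, not those used in the paper.
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

for corpus in ("sv_gigaword.txt", "sv_wikipedia.txt"):  # hypothetical paths
    model = Word2Vec(
        sentences=LineSentence(corpus),  # one pre-tokenised sentence per line
        vector_size=300,
        window=5,
        min_count=5,
        workers=4,
    )
    # Score on a Swedish analogy set in the standard "questions-words"
    # format (hypothetical file name); returns overall accuracy plus
    # per-section results.
    score, _sections = model.wv.evaluate_word_analogies("sv_analogies.txt")
    print(f"{corpus}: analogy accuracy = {score:.3f}")
```

Holding the training setup fixed across corpora, as in the sketch, is what lets differences in analogy accuracy be attributed to the data source rather than the hyperparameters.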
