fugashi, a Tool for Tokenizing Japanese in Python

10/14/2020
by Paul McCann

Recent years have seen an increase in the number of large-scale multilingual NLP projects. However, even in such projects, languages with special processing requirements are often excluded. One such language is Japanese. Japanese is written without spaces, so tokenization is non-trivial, and while high-quality open source tokenizers exist, they can be hard to use and lack English documentation. This paper introduces fugashi, a MeCab wrapper for Python, and gives an introduction to tokenizing Japanese.
