Tackling the Low-resource Challenge for Canonical Segmentation

10/06/2020
by   Manuel Mager, et al.

Canonical morphological segmentation consists of dividing words into their standardized morphemes. Here, we are interested in approaches for the task when training data is limited. We compare model performance in a simulated low-resource setting for the high-resource languages German, English, and Indonesian to experiments on new datasets for the truly low-resource languages Popoluca and Tepehua. We explore two new models for the task, borrowing from the closely related area of morphological generation: an LSTM pointer-generator and a sequence-to-sequence model with hard monotonic attention trained with imitation learning. We find that, in the low-resource setting, the novel approaches outperform existing ones on all languages by up to 11.4% accuracy. However, while accuracy in emulated low-resource scenarios is over 50% for all languages, for the truly low-resource languages Popoluca and Tepehua, our best model only obtains 37.4% accuracy, showing that canonical segmentation is still a challenging task for low-resource languages.
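To make the task format concrete, here is a minimal sketch of canonical segmentation as a string-to-string problem. The word/morpheme pairs and the "|" separator are illustrative assumptions, not taken from the paper's datasets; note how the canonical form restores standardized morphemes (e.g. "funni" back to "funny") rather than merely splitting the surface string.

```python
# Hypothetical examples of canonical segmentation:
# surface word -> list of canonical (standardized) morphemes.
examples = {
    "achievability": ["achieve", "able", "ity"],  # surface "abil" restored to "able"
    "funniest": ["funny", "est"],                 # surface "funni" restored to "funny"
}

def format_target(morphemes, sep="|"):
    """Join canonical morphemes into the target string a
    sequence-to-sequence model would be trained to predict."""
    return sep.join(morphemes)

for word, morphs in examples.items():
    print(f"{word} -> {format_target(morphs)}")
# funniest -> funny|est  (the target differs from any substring split of the input)
```

Because the output characters need not appear verbatim in the input, the task is framed as transduction rather than simple boundary prediction, which is why seq2seq-style models (pointer-generators, hard monotonic attention) are natural candidates.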
