Superbizarre Is Not Superb: Improving BERT's Interpretations of Complex Words with Derivational Morphology

01/02/2021
by Valentin Hofmann, et al.

How does the input segmentation of pretrained language models (PLMs) affect their generalization capabilities? We present the first study investigating this question, taking BERT as the example PLM and focusing on the semantic representations of derivationally complex words. We show that PLMs can be interpreted as serial dual-route models, i.e., the meanings of complex words are either stored or else need to be computed from the subwords, which implies that maximally meaningful input tokens should allow for the best generalization on new words. This hypothesis is confirmed by a series of semantic probing tasks on which derivational segmentation consistently outperforms BERT's WordPiece segmentation by a large margin. Our results suggest that the generalization capabilities of PLMs could be further improved if a morphologically informed vocabulary of input tokens were used.
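
To make the contrast concrete, here is a minimal sketch comparing BERT's WordPiece segmentation of a derivationally complex word with a derivational segmentation into prefix and stem. It assumes the Hugging Face transformers library and the bert-base-uncased vocabulary, neither of which is prescribed by the paper, and the derivational split is hand-specified for illustration rather than produced by the paper's pipeline. The exact WordPiece output depends on the vocabulary; the point is that the split is driven by subword frequency rather than morphology.

```python
from transformers import BertTokenizer

# WordPiece tokenizer shipped with BERT (bert-base-uncased as an example).
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# A derivationally complex word: prefix "super-" + stem "bizarre".
word = "superbizarre"

# WordPiece splits greedily by longest vocabulary match, so the result
# need not respect morpheme boundaries (it may begin with "superb").
wordpiece_tokens = tokenizer.tokenize(word)
print("WordPiece:   ", wordpiece_tokens)

# A derivational segmentation keeps the morphemes intact
# (hand-specified here purely for illustration).
derivational_tokens = ["super", "bizarre"]
print("Derivational:", derivational_tokens)
```

Under the paper's hypothesis, the morphologically faithful tokens are the more meaningful input units and should therefore support better semantic generalization on unseen complex words.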
