Is POS Tagging Necessary or Even Helpful for Neural Dependency Parsing?
In the pre-deep-learning era, part-of-speech (POS) tags were considered indispensable ingredients for feature engineering in dependency parsing, owing to their important role in alleviating the data sparseness of purely lexical features, and quite a few works focused on joint tagging and parsing models to avoid error propagation. In contrast, recent studies suggest that POS tagging has become much less important, or even useless, for neural parsing, especially when character-based word representations such as CharLSTM are used. Yet this interesting issue still lacks a full and systematic investigation, both empirically and linguistically. To answer it, we design four typical multi-task learning frameworks (i.e., Share-Loose, Share-Tight, Stack-Discrete, and Stack-Hidden) for joint tagging and parsing, based on the state-of-the-art biaffine parser. Considering that annotating POS tags is much cheaper than annotating parse trees, we also investigate the utilization of large-scale heterogeneous POS-tag data. We conduct experiments on both English and Chinese datasets, and the results clearly show that POS tagging (both homogeneous and heterogeneous) can still significantly improve parsing performance under the Stack-Hidden joint framework. We also conduct detailed analyses and gain further insights from a linguistic perspective.
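The abstract does not spell out the architectures, but a minimal sketch can illustrate the general idea behind the best-performing Stack-Hidden variant: the tagger's hidden states (rather than its discrete tag predictions) are stacked under the parser encoder, and both losses are trained jointly, with arcs scored by a biaffine function in the style of Dozat and Manning (2017). All module names, layer sizes, and wiring below are illustrative assumptions, not the authors' exact model.

```python
import torch
import torch.nn as nn

class Biaffine(nn.Module):
    """Biaffine arc scorer: s(dep_i, head_j) = [h_i; 1]^T W h_j."""
    def __init__(self, dim):
        super().__init__()
        # The extra row folds a per-head bias into the bilinear product.
        self.weight = nn.Parameter(torch.randn(dim + 1, dim) * 0.01)

    def forward(self, head, dep):
        # head, dep: (batch, seq_len, dim)
        ones = dep.new_ones(*dep.shape[:-1], 1)
        dep = torch.cat([dep, ones], dim=-1)           # (B, T, dim + 1)
        # scores[b, i, j]: score of token j as the head of token i
        return dep @ self.weight @ head.transpose(-1, -2)

class StackHiddenJointModel(nn.Module):
    """Hypothetical Stack-Hidden joint model: the POS tagger's BiLSTM hidden
    states are concatenated to the word embeddings and fed into the parser
    encoder, so POS information reaches parsing as dense vectors while the
    tagging and parsing losses are optimized jointly."""
    def __init__(self, vocab_size, n_tags, emb_dim=100, hid=400):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.tag_lstm = nn.LSTM(emb_dim, hid // 2, batch_first=True,
                                bidirectional=True)
        self.tag_out = nn.Linear(hid, n_tags)          # POS tagging loss head
        self.parse_lstm = nn.LSTM(emb_dim + hid, hid // 2, batch_first=True,
                                  bidirectional=True)
        self.mlp_head = nn.Sequential(nn.Linear(hid, hid), nn.ReLU())
        self.mlp_dep = nn.Sequential(nn.Linear(hid, hid), nn.ReLU())
        self.arc_scorer = Biaffine(hid)

    def forward(self, words):
        x = self.embed(words)                          # (B, T, emb_dim)
        tag_h, _ = self.tag_lstm(x)                    # (B, T, hid)
        tag_logits = self.tag_out(tag_h)
        # Stack-Hidden: reuse the tagger's hidden states, not its tag outputs.
        parse_h, _ = self.parse_lstm(torch.cat([x, tag_h], dim=-1))
        arc_scores = self.arc_scorer(self.mlp_head(parse_h),
                                     self.mlp_dep(parse_h))
        return tag_logits, arc_scores                  # two jointly trained heads
```

In this sketch a Share-style variant would instead feed both task heads from one shared encoder, while a Stack-Discrete variant would embed the tagger's predicted tags and stack those embeddings; the abstract's finding is that passing hidden states, as above, is the configuration that still yields significant parsing gains.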