Improved Transition-Based Parsing by Modeling Characters instead of Words with LSTMs

08/04/2015
by Miguel Ballesteros, et al.

We present extensions to a continuous-state dependency parsing method that make it applicable to morphologically rich languages. Starting with a high-performance transition-based parser that uses long short-term memory (LSTM) recurrent neural networks to learn representations of the parser state, we replace lookup-based word representations with representations constructed from the orthographic forms of the words, also using LSTMs. This allows statistical sharing across word forms that are similar on the surface. Experiments on morphologically rich languages show that the parsing model benefits from incorporating these character-based encodings of words.
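The central idea, composing a word vector from its characters with a bidirectional LSTM rather than looking it up in an embedding table, can be sketched in a few lines of PyTorch. This is an illustrative reconstruction under assumed dimensions, not the authors' implementation (which builds on a stack LSTM transition-based parser); the class name, layer sizes, and toy vocabulary below are all hypothetical.

```python
# A minimal sketch of character-based word composition (assumed sizes,
# not the paper's code): one LSTM direction reads the characters
# left-to-right, the other right-to-left, and the concatenated final
# hidden states serve as the word's representation.
import torch
import torch.nn as nn

class CharWordEncoder(nn.Module):
    def __init__(self, n_chars, char_dim=32, word_dim=64):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.lstm = nn.LSTM(char_dim, word_dim // 2,
                            batch_first=True, bidirectional=True)

    def forward(self, char_ids):
        # char_ids: (1, word_length) indices into the character vocabulary
        chars = self.char_emb(char_ids)
        _, (h, _) = self.lstm(chars)             # h: (2, 1, word_dim // 2)
        return torch.cat([h[0], h[1]], dim=-1)   # (1, word_dim)

# Toy usage: words sharing surface material (e.g. a common suffix) get
# nearby vectors, which is what enables statistical sharing across forms.
vocab = {ch: i for i, ch in enumerate("abcdefghijklmnopqrstuvwxyz")}
enc = CharWordEncoder(n_chars=len(vocab))
word = torch.tensor([[vocab[c] for c in "parsing"]])
print(enc(word).shape)  # torch.Size([1, 64])
```

Because the representation is a function of the spelling rather than a per-word parameter, morphological variants and out-of-vocabulary words receive informed vectors instead of a single unknown-word embedding.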
