Not All Neural Embeddings are Born Equal

10/02/2014
by Felix Hill, et al.

Neural language models learn word representations that capture rich linguistic and conceptual information. Here we investigate the embeddings learned by neural machine translation models. We show that translation-based embeddings outperform those learned by cutting-edge monolingual models at single-language tasks requiring knowledge of conceptual similarity and/or syntactic role. The findings suggest that, while monolingual models learn information about how concepts are related, neural-translation models better capture their true ontological status.
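A comparison like the one described in the abstract is typically carried out by scoring word pairs from a conceptual-similarity benchmark (e.g. SimLex-999) with the cosine similarity of their vectors and correlating those scores with human ratings. The sketch below illustrates that evaluation procedure only; it is not the authors' code, and the embeddings and benchmark triples are placeholder values.

```python
# Minimal sketch of a word-similarity evaluation (placeholder data, not the paper's setup):
# score each pair with cosine similarity, then compute Spearman's rho against human ratings.
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def evaluate(embeddings, pairs_with_gold):
    """Spearman correlation between model similarities and human ratings."""
    model_scores, gold_scores = [], []
    for w1, w2, gold in pairs_with_gold:
        if w1 in embeddings and w2 in embeddings:
            model_scores.append(cosine(embeddings[w1], embeddings[w2]))
            gold_scores.append(gold)
    rho, _ = spearmanr(model_scores, gold_scores)
    return rho

# Placeholder vectors standing in for monolingual vs. translation-based embeddings.
rng = np.random.default_rng(0)
vocab = ["coast", "shore", "car", "automobile"]
monolingual_emb = {w: rng.normal(size=50) for w in vocab}
translation_emb = {w: rng.normal(size=50) for w in vocab}

# (word1, word2, human similarity rating) triples, SimLex-style.
benchmark = [("coast", "shore", 9.0), ("car", "automobile", 9.5), ("coast", "car", 1.0)]

print("monolingual rho:", evaluate(monolingual_emb, benchmark))
print("translation rho:", evaluate(translation_emb, benchmark))
```

With real embeddings, a higher correlation for the translation-based vectors on such a benchmark is the kind of evidence the paper reports in favour of translation models.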
