Interpretable Entity Representations through Large-Scale Typing

04/30/2020
by Yasumasa Onoe, et al.

In standard methodology for natural language processing, entities in text are typically embedded in dense vector spaces with pre-trained models. Such approaches are strong building blocks for entity-related tasks, but the embeddings they produce require extensive additional processing in neural models, and these entity embeddings are fundamentally difficult to interpret. In this paper, we present an approach to creating interpretable entity representations that are human readable and achieve high performance on entity-related tasks out of the box. Our representations are vectors whose values correspond to posterior probabilities over fine-grained entity types, indicating the confidence of a typing model's decision that the entity belongs to the corresponding type. We obtain these representations using a fine-grained entity typing model, trained either on supervised ultra-fine entity typing data (Choi et al. 2018) or distantly-supervised examples from Wikipedia. On entity probing tasks involving recognizing entity identity, our embeddings achieve competitive performance with ELMo and BERT without using any extra parameters. We also show that it is possible to reduce the size of our type set in a learning-based way for particular domains. Finally, we show that these embeddings can be post-hoc modified through simple rules to incorporate domain knowledge and improve performance.
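To make the construction concrete, the sketch below (not the authors' released code) shows how such an interpretable representation could be assembled: a typing model scores a contextual mention encoding against a fine-grained type inventory, and each per-type sigmoid probability becomes one human-readable dimension of the entity vector. The `ToyEntityTyper` class, the tiny `TYPE_VOCAB` list, and the random mention encoding are illustrative placeholders; the paper's type inventories (ultra-fine types or Wikipedia-derived categories) are far larger.

```python
import torch
import torch.nn as nn

# Illustrative fine-grained type inventory. The paper's type sets are much
# larger: ultra-fine entity types (Choi et al. 2018) or Wikipedia categories.
TYPE_VOCAB = ["person", "politician", "organization", "location", "city"]

class ToyEntityTyper(nn.Module):
    """Maps a contextual mention encoding to independent per-type probabilities."""
    def __init__(self, hidden_dim: int, num_types: int):
        super().__init__()
        self.classifier = nn.Linear(hidden_dim, num_types)

    def forward(self, mention_encoding: torch.Tensor) -> torch.Tensor:
        # Sigmoid rather than softmax: fine-grained types are not mutually
        # exclusive, so each output dimension is the posterior probability
        # that the entity belongs to that type.
        return torch.sigmoid(self.classifier(mention_encoding))

typer = ToyEntityTyper(hidden_dim=768, num_types=len(TYPE_VOCAB))

# Stand-in for a contextual mention encoding (e.g., from a pre-trained encoder).
mention_encoding = torch.randn(1, 768)

# The entity representation: one probability per human-readable type.
entity_vector = typer(mention_encoding).squeeze(0)
for type_name, prob in zip(TYPE_VOCAB, entity_vector.tolist()):
    print(f"{type_name:>14s}: {prob:.3f}")

# Out of the box, two entities can be compared directly on these vectors,
# e.g. with a dot product or cosine similarity, with no further training.
```

Because every dimension is named by a type, post-hoc edits of the kind the paper describes (e.g., zeroing or boosting particular types based on domain knowledge) amount to simple rule-based modifications of individual vector entries.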
