Identification, Interpretability, and Bayesian Word Embeddings

04/02/2019
by   Adam M. Lauretig, et al.

Social scientists have recently turned to analyzing text with tools from natural language processing, such as word embeddings, to measure concepts like ideology, bias, and affinity. However, word embeddings are difficult to use in the regression framework familiar to social scientists: embeddings are neither identified nor directly interpretable. I offer two advances on standard embedding models to remedy these problems. First, I develop Bayesian Word Embeddings with Automatic Relevance Determination priors, relaxing the assumption that all embedding dimensions have equal weight. Second, I apply work on identifying latent variable models to anchor the dimensions of the resulting embeddings, identifying them and making them interpretable and usable in a regression. I then apply this model and anchoring approach to two cases: the shift in internationalist rhetoric in American presidents' inaugural addresses, and the relationship between bellicosity in American foreign policy decision-makers' deliberations and hostile action by the United States. I find that inaugural addresses became less internationalist after 1945, which goes against the conventional wisdom, and that an increase in bellicosity is associated with an increase in hostile actions by the United States, showing that elite deliberations are not cheap talk and helping confirm the validity of the model.
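To give a sense of what an Automatic Relevance Determination prior does, here is a minimal sketch, not the paper's actual model: each embedding dimension d gets its own precision alpha_d, so a Gaussian prior N(0, 1/alpha_d) can shrink uninformative dimensions toward zero rather than weighting all dimensions equally. The data, variable names, and the simple EM-style precision update below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
V, D = 50, 8                       # vocabulary size, embedding dimension
W = rng.normal(size=(V, D))        # word embeddings (random stand-in for fitted values)
W[:, 4:] *= 0.05                   # pretend the last four dimensions carry little signal

# EM-style ARD precision update: alpha_d = V / sum_v w_{vd}^2
# (small constant added for numerical stability).
# A large alpha_d shrinks dimension d toward zero, effectively pruning it.
alpha = V / (np.sum(W ** 2, axis=0) + 1e-8)

# Dimensions with low precision are the "relevant" ones the model keeps.
relevant = alpha < 10.0
print(relevant)
```

Under this toy setup, the first four dimensions come out relevant and the down-scaled ones are pruned, illustrating how the prior relaxes the equal-weight assumption on embedding dimensions.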
