
Factors Influencing the Surprising Instability of Word Embeddings

04/25/2018
by Laura Wendlandt et al., University of Michigan

Despite the recent popularity of word embedding methods, there is only a small body of work exploring the limitations of these representations. In this paper, we consider one aspect of embedding spaces, namely their stability. We show that even relatively high-frequency words (100-200 occurrences) are often unstable. We provide empirical evidence for how various factors contribute to the stability of word embeddings, and we analyze the effects of stability on downstream tasks.
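The stability the abstract refers to is measured as the percentage overlap between a word's nearest neighbors in two embedding spaces, for example spaces trained on the same corpus with different random seeds. Below is a minimal sketch of that measurement, assuming two numpy matrices whose rows are aligned to the same vocabulary; the function names and the choice of k=10 are illustrative, not taken from the paper's released code.

import numpy as np

def nearest_neighbors(emb, idx, k=10):
    # Indices of the k nearest neighbors of word `idx` by cosine
    # similarity; the word itself is excluded from its own neighbors.
    normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = normed @ normed[idx]
    sims[idx] = -np.inf
    return set(np.argsort(-sims)[:k])

def stability(emb_a, emb_b, idx, k=10):
    # Percent overlap between the word's k nearest neighbors in two
    # embedding spaces: 100 means perfectly stable, 0 fully unstable.
    overlap = nearest_neighbors(emb_a, idx, k) & nearest_neighbors(emb_b, idx, k)
    return 100.0 * len(overlap) / k

Under this definition, a word is considered stable when its neighbor overlap stays high across pairs of spaces (e.g., across random seeds, hyperparameters, or embedding algorithms), which is the quantity the paper's factor analysis is built on.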

