Contrastive Loss is All You Need to Recover Analogies as Parallel Lines

06/14/2023
by Narutatsu Ri et al.

While static word embedding models are known to represent linguistic analogies as parallel lines in high-dimensional space, the underlying mechanism by which such geometric structures arise remains obscure. We find that an elementary contrastive-style method applied to distributional information performs competitively with popular word embedding models on analogy recovery tasks, while achieving dramatic speedups in training time. Further, we demonstrate that a contrastive loss is sufficient to create these parallel structures in word embeddings, and we establish a precise relationship between the co-occurrence statistics and the geometric structure of the resulting word embeddings.
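As a rough illustration of the kind of objective the abstract describes (a minimal sketch, not the authors' exact formulation), the snippet below trains word and context embeddings with an InfoNCE-style contrastive loss over word-context pairs. The vocabulary size, embedding dimension, temperature, and random pair sampling are all illustrative assumptions; in practice the pairs would be drawn from corpus co-occurrence statistics.

    # Minimal sketch of a contrastive objective over word-context pairs.
    # Hyperparameters and sampling here are illustrative assumptions, not
    # the paper's exact setup.
    import torch
    import torch.nn.functional as F

    vocab_size, dim = 10_000, 100
    word_emb = torch.nn.Embedding(vocab_size, dim)  # target-word vectors
    ctx_emb = torch.nn.Embedding(vocab_size, dim)   # context-word vectors

    def contrastive_loss(word_ids, ctx_ids, temperature=0.1):
        # Each (word, context) pair in the batch is a positive; the other
        # contexts in the batch serve as in-batch negatives (InfoNCE).
        w = F.normalize(word_emb(word_ids), dim=-1)  # (B, dim)
        c = F.normalize(ctx_emb(ctx_ids), dim=-1)    # (B, dim)
        logits = w @ c.T / temperature               # (B, B) similarities
        labels = torch.arange(len(word_ids))         # positives on diagonal
        return F.cross_entropy(logits, labels)

    # One training step on randomly sampled pairs (stand-in for pairs
    # sampled proportionally to corpus co-occurrence counts).
    words = torch.randint(0, vocab_size, (256,))
    ctxs = torch.randint(0, vocab_size, (256,))
    loss = contrastive_loss(words, ctxs)
    loss.backward()

Under an objective of this shape, pairs of words that co-occur with similar context distributions are pulled toward similar offsets, which is the kind of mechanism the paper connects to analogies appearing as parallel lines.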
