Semantic Holism and Word Representations in Artificial Neural Networks

03/11/2020
by Tomáš Musil, et al.

Artificial neural networks are a state-of-the-art solution for many problems in natural language processing. What can we learn about language and meaning from the way artificial neural networks represent them? Word representations obtained from the Skip-gram variant of the word2vec model exhibit interesting semantic properties. This is usually explained by referring to the general distributional hypothesis, which states that the meaning of a word is given by the contexts in which it occurs. We propose a more specific approach based on Frege's holistic and functional approach to meaning. Taking Tugendhat's formal reinterpretation of Frege's work as a starting point, we demonstrate that it is analogous to the process of training the Skip-gram model and offers a possible explanation of its semantic properties.
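For readers unfamiliar with the model the abstract discusses, the following minimal sketch (not part of the paper) illustrates Skip-gram training and the kind of semantic regularity referred to above. The toy corpus and hyperparameters are illustrative assumptions; the well-known analogy king - man + woman ≈ queen only emerges reliably when the model is trained on a realistically large corpus.

    # Minimal Skip-gram sketch using gensim; corpus and settings are
    # illustrative assumptions, not the paper's experimental setup.
    from gensim.models import Word2Vec

    corpus = [
        ["the", "king", "rules", "the", "kingdom"],
        ["the", "queen", "rules", "the", "kingdom"],
        ["a", "man", "walks", "in", "the", "city"],
        ["a", "woman", "walks", "in", "the", "city"],
    ]

    # sg=1 selects the Skip-gram objective: predict the context words
    # appearing within a window around each centre word.
    model = Word2Vec(
        sentences=corpus,
        vector_size=50,
        window=2,
        min_count=1,
        sg=1,
        epochs=50,
    )

    # On a large corpus, vector arithmetic recovers semantic relations,
    # e.g. the vector closest to king - man + woman tends to be queen.
    print(model.wv.most_similar(positive=["king", "woman"],
                                negative=["man"], topn=1))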

Related research

06/18/2018
SubGram: Extending Skip-gram Word Representation with Substrings
Skip-gram (word2vec) is a recent method for creating vector representati...

02/28/2020
Learning to See: You Are What You See
The authors present a visual instrument developed as part of the creatio...

03/18/2020
An Analysis on the Learning Rules of the Skip-Gram Model
To improve the generalization of the representations for natural languag...

01/12/2015
Combining Language and Vision with a Multimodal Skip-gram Model
We extend the SKIP-GRAM model of Mikolov et al. (2013a) by taking visual...

04/30/2022
To Know by the Company Words Keep and What Else Lies in the Vicinity
The development of state-of-the-art (SOTA) Natural Language Processing (...

11/12/2015
Multimodal Skip-gram Using Convolutional Pseudowords
This work studies the representational mapping across multimodal data su...
