Do Trajectories Encode Verb Meaning?

06/23/2022
by Dylan Ebert, et al.

Distributional models learn representations of words from text, but are criticized for their lack of grounding, or the linking of text to the non-linguistic world. Grounded language models have had success in learning to connect concrete categories like nouns and adjectives to the world via images and videos, but can struggle to isolate the meaning of the verbs themselves from the context in which they typically occur. In this paper, we investigate the extent to which trajectories (i.e. the position and rotation of objects over time) naturally encode verb semantics. We build a procedurally generated agent-object-interaction dataset, obtain human annotations for the verbs that occur in this data, and compare several methods for representation learning given the trajectories. We find that trajectories correlate as-is with some verbs (e.g., fall), and that additional abstraction via self-supervised pretraining can further capture nuanced differences in verb meaning (e.g., roll vs. slide).
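To make the setup concrete, below is a minimal sketch in Python/NumPy of the kind of trajectory data the abstract describes (per-timestep 3D position plus rotation) and one simple self-supervised objective that could stand in for the pretraining step. All names, shapes, the random data, and the linear next-step objective are illustrative assumptions, not the authors' dataset or implementation.

```python
import numpy as np

# Hypothetical trajectory as the abstract describes it: T timesteps of
# 3D position plus a unit-quaternion rotation (7 features per step).
# Shapes and random data here are illustrative assumptions only.
T = 100
rng = np.random.default_rng(0)
positions = rng.normal(size=(T, 3))                            # x, y, z over time
rotations = rng.normal(size=(T, 4))
rotations /= np.linalg.norm(rotations, axis=1, keepdims=True)  # normalize to unit quaternions
trajectory = np.concatenate([positions, rotations], axis=1)    # shape (T, 7)

# Stand-in for the self-supervised pretraining the abstract mentions:
# a linear next-step prediction objective fit by least squares. The
# learned transition map (and its residuals) is one simple way to
# abstract raw trajectories into features.
X, Y = trajectory[:-1], trajectory[1:]
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(f"next-step prediction MSE: {np.mean((X @ W - Y) ** 2):.4f}")
```

In practice, features learned this way (or by a deeper sequence model trained with a similar reconstruction or prediction objective) could then be probed against the human verb annotations, which is the spirit of the comparison the paper reports.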


Related research

Comparing Trajectory and Vision Modalities for Verb Representation (03/08/2023)
Three-dimensional trajectories, or the 3D position and rotation of objec...

Pretraining on Interactions for Learning Grounded Affordance Representations (07/05/2022)
Lexical semantics and cognitive science point to affordances (i.e. the a...

Are distributional representations ready for the real world? Evaluating word vectors for grounded perceptual meaning (05/31/2017)
Distributional word representation methods exploit word co-occurrences t...

Learning a natural-language to LTL executable semantic parser for grounded robotics (08/07/2020)
Children acquire their native language with apparent ease by observing h...

The Vector Grounding Problem (04/04/2023)
The remarkable performance of large language models (LLMs) on complex li...

Toward a Thermodynamics of Meaning (09/24/2020)
As language models such as GPT-3 become increasingly successful at gener...

CERES: Pretraining of Graph-Conditioned Transformer for Semi-Structured Session Data (04/08/2022)
User sessions empower many search and recommendation tasks on a daily ba...
