ShapeGlot: Learning Language for Shape Differentiation

05/08/2019
by Panos Achlioptas, et al.

In this work we explore how fine-grained differences between the shapes of common objects are expressed in language, grounded on images and 3D models of the objects. We first build a large-scale, carefully controlled dataset of human utterances that each refers to a 2D rendering of a 3D CAD model so as to distinguish it from a set of shape-wise similar alternatives. Using this dataset, we develop neural language understanding (listening) and production (speaking) models that vary in their grounding (pure 3D forms via point clouds vs. rendered 2D images), the degree of pragmatic reasoning captured (e.g. speakers that reason about a listener or not), and the neural architecture (e.g. with or without attention). We find models that perform well with both synthetic and human partners, and with held-out utterances and objects. We also find that these models are amenable to zero-shot transfer learning to novel object classes (e.g. transfer from training on chairs to testing on lamps), as well as to real-world images drawn from furniture catalogs. Lesion studies indicate that the neural listeners depend heavily on part-related words and associate these words correctly with visual parts of objects (without any explicit network training on object parts), and that transfer to novel classes is most successful when known part-words are available. This work illustrates a practical approach to language grounding, and provides a case study in the relationship between object shape and linguistic structure when it comes to object differentiation.
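
To make the listener side of this reference-game setup concrete, below is a minimal PyTorch sketch of a neural listener that scores which of several candidate shapes a referring utterance describes. It is an illustrative assumption, not the paper's exact architecture: the class name NeuralListener, the LSTM utterance encoder, the feature dimensions, and the use of pre-computed per-shape feature vectors (e.g. from a point-cloud or image encoder) are all hypothetical choices made for this sketch.

# Minimal sketch of a reference-game "listener" (illustrative, not the
# authors' exact model): an LSTM encodes the utterance, each candidate
# shape is a pre-computed feature vector, and an MLP scores each
# (utterance, candidate) pair; the highest-scoring candidate is the guess.

import torch
import torch.nn as nn


class NeuralListener(nn.Module):
    def __init__(self, vocab_size, word_dim=100, hidden_dim=256, shape_feat_dim=1024):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, word_dim, padding_idx=0)
        self.lstm = nn.LSTM(word_dim, hidden_dim, batch_first=True)
        self.shape_proj = nn.Linear(shape_feat_dim, hidden_dim)
        self.scorer = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, tokens, shape_feats):
        # tokens: (batch, seq_len) word ids of the referring utterance
        # shape_feats: (batch, n_candidates, shape_feat_dim) per-object features
        _, (h, _) = self.lstm(self.embed(tokens))
        utt = h[-1]                                   # (batch, hidden_dim)
        shapes = self.shape_proj(shape_feats)         # (batch, n_cand, hidden_dim)
        utt = utt.unsqueeze(1).expand_as(shapes)
        scores = self.scorer(torch.cat([utt, shapes], dim=-1)).squeeze(-1)
        return scores                                 # (batch, n_candidates)


if __name__ == "__main__":
    listener = NeuralListener(vocab_size=5000)
    tokens = torch.randint(1, 5000, (4, 12))          # 4 utterances, 12 tokens each
    shape_feats = torch.randn(4, 3, 1024)             # 3 candidate shapes per trial
    logits = listener(tokens, shape_feats)
    target = torch.zeros(4, dtype=torch.long)         # index of the referred object
    loss = nn.CrossEntropyLoss()(logits, target)
    print(logits.shape, loss.item())

A pragmatic speaker of the kind the abstract mentions could then be built on top of such a listener, e.g. by generating candidate utterances and reranking them by how confidently the listener picks out the intended object.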
