Learning Visually Grounded Sentence Representations

07/19/2017
by   Douwe Kiela, et al.

We introduce a variety of models for grounding sentence representations, trained on a supervised image captioning corpus to predict the image features for a given caption. We train a grounded sentence encoder that achieves good performance on COCO caption and image retrieval, and subsequently show that this encoder transfers successfully to various NLP tasks, with improved performance over text-only models. Lastly, we analyze the contribution of grounding and show that word embeddings learned by this system outperform non-grounded ones.

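To make the setup in the abstract concrete, the sketch below shows one way a grounded sentence encoder of this kind could be implemented. It is an illustrative example under stated assumptions (PyTorch, a bidirectional LSTM with max pooling, precomputed 2048-dimensional CNN image features, a mean-squared-error objective), not the authors' released code; names such as GroundedSentenceEncoder and grounding_step are hypothetical.

```python
# Illustrative sketch (not the authors' code): a sentence encoder trained to
# predict the image features paired with a caption, assuming a PyTorch
# bidirectional LSTM over word embeddings and precomputed image features
# (e.g. 2048-d CNN vectors for COCO images).
import torch
import torch.nn as nn


class GroundedSentenceEncoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=1024, image_dim=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        # Projection from the sentence representation into image-feature space;
        # it only supplies the grounding signal during training.
        self.to_image = nn.Linear(2 * hidden_dim, image_dim)

    def encode(self, token_ids):
        # token_ids: (batch, seq_len) integer tensor of word indices.
        states, _ = self.lstm(self.embed(token_ids))
        # Max-pool over time to get a fixed-size sentence representation.
        sentence_repr, _ = states.max(dim=1)
        return sentence_repr

    def forward(self, token_ids):
        return self.to_image(self.encode(token_ids))


def grounding_step(model, optimizer, token_ids, image_feats):
    # One hypothetical training step on a batch of (caption, image feature) pairs.
    optimizer.zero_grad()
    predicted = model(token_ids)
    loss = nn.functional.mse_loss(predicted, image_feats)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a sketch like this, the transferable representation for downstream NLP tasks would be the pooled LSTM state returned by encode, while the image-feature projection is discarded after training.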

Related research

12/02/2017  Improving Visually Grounded Sentence Representations with Self-Attention
Sentence representation models trained only on language could potentiall...

09/29/2021  Visually Grounded Concept Composition
We investigate ways to compose complex concepts in texts from primitive ...

04/01/2020  More Grounded Image Captioning by Distilling Image-Text Matching Model
Visual attention not only improves the performance of image captioners, ...

12/07/2021  Grounded Language-Image Pre-training
This paper presents a grounded language-image pre-training (GLIP) model ...

05/12/2021  Connecting What to Say With Where to Look by Modeling Human Attention Traces
We introduce a unified framework to jointly model images, text, and huma...

10/19/2020  Image Captioning with Visual Object Representations Grounded in the Textual Modality
We present our work in progress exploring the possibilities of a shared ...

05/11/2017  Imagination improves Multimodal Translation
We decompose multimodal translation into two sub-tasks: learning to tran...
