Dense and Aligned Captions (DAC) Promote Compositional Reasoning in VL Models

05/31/2023
by Sivan Doveh et al.

Vision and Language (VL) models offer an effective method for aligning the representation spaces of images and text, enabling numerous applications such as cross-modal retrieval, visual question answering, and captioning. However, the aligned image-text spaces learned by all popular VL models still suffer from the so-called 'object bias': their representations behave as 'bags of nouns', mostly ignoring or downplaying the attributes, relations, and states of the objects described in texts or appearing in images. Although several notable attempts to fix these 'compositional reasoning' issues have appeared in the recent literature, the problem is still far from solved. In this paper, we uncover two factors limiting VL models' compositional reasoning performance. Both are properties of the paired VL dataset used for fine-tuning and pre-training the VL model: (i) the caption quality, or in other words the 'image-alignment', of the texts; and (ii) the 'density' of the captions, in the sense of mentioning all the details appearing in the image. We propose a fine-tuning approach that automatically addresses both factors, leveraging a standard VL dataset (CC3M). Applied to CLIP, it yields a significant increase in compositional reasoning performance: up to ∼27% over the base model, up to ∼20% over the strongest baseline, and 6.7% on average.
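To make the two factors concrete, the sketch below (not the authors' released code) shows one way to act on them using the Hugging Face transformers CLIP API: a CLIP-score filter stands in for the caption-quality ('image-alignment') criterion, and a standard contrastive fine-tuning step consumes the resulting dense, well-aligned (image, caption) pairs. The model name, the use of CLIP scores for filtering, and the helper functions are assumptions for illustration; the abstract does not specify the actual procedure.

```python
# Minimal sketch (assumptions, not the paper's pipeline) of the two dataset
# factors named in the abstract: (i) caption quality / image-alignment,
# approximated here by filtering candidate captions with CLIP similarity,
# and (ii) caption density, i.e., training on captions that mention many
# image details. Requires: torch, transformers, Pillow.
import torch
import torch.nn.functional as F
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")


def best_aligned_caption(image, candidate_captions):
    """Quality factor: keep the candidate caption best aligned with the image,
    scored with CLIP's own image-text similarity (an assumed stand-in)."""
    inputs = processor(text=candidate_captions, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        scores = model(**inputs).logits_per_image.squeeze(0)  # (num_captions,)
    return candidate_captions[scores.argmax().item()]


def contrastive_step(images, dense_captions, optimizer):
    """Density factor: one symmetric CLIP contrastive fine-tuning step on
    (image, dense caption) pairs; matching pairs sit on the diagonal."""
    inputs = processor(text=dense_captions, images=images,
                       return_tensors="pt", padding=True)
    out = model(**inputs)
    labels = torch.arange(len(dense_captions))
    loss = (F.cross_entropy(out.logits_per_image, labels)
            + F.cross_entropy(out.logits_per_text, labels)) / 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Usage (toy): filter each image's candidates, then fine-tune, e.g. with
# optimizer = torch.optim.AdamW(model.parameters(), lr=1e-6).
```

The design point the abstract suggests is that both steps operate on the training data rather than the model: better-aligned and denser captions change what the standard contrastive loss rewards, without altering CLIP's architecture or objective.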

