Self-Supervised VQA: Answering Visual Questions using Images and Captions

12/04/2020
by Pratyay Banerjee, et al.

Methodologies for training VQA models assume the availability of datasets with human-annotated Image-Question-Answer (I-Q-A) triplets. This has led to heavy reliance on, and overfitting to, these datasets, and a lack of generalization to new types of questions and scenes. Moreover, such datasets exhibit annotator subjectivity, biases, and errors, along with linguistic priors, all of which percolate into VQA models trained on them. We study whether models can be trained without any human-annotated Q-A pairs, using only images and their associated text captions, which are descriptive and less subjective. We present a method to train models with Q-A pairs procedurally generated from captions using techniques such as templates and annotation frameworks like QA-SRL. Since most VQA models rely on dense and costly object annotations extracted from object detectors, we propose spatial-pyramid image patches as a simple but effective alternative to object bounding boxes, and demonstrate that our method requires fewer human annotations. We benchmark on VQA-v2, GQA, and VQA-CP, which contains a softer version of label shift. Our methods surpass prior supervised methods on VQA-CP and are competitive with methods without object features in the fully supervised setting.
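To make the caption-to-Q-A idea concrete, here is a minimal Python sketch of template-based question generation. The two regex templates below are illustrative assumptions, not the paper's actual template set, and the QA-SRL-based generation mentioned in the abstract is not covered here.

```python
# A minimal sketch of template-based Q-A generation from captions.
import re

# Each entry: (caption pattern, question template, index of the answer group).
# These templates are illustrative assumptions for demonstration only.
TEMPLATES = [
    (r"^(?:an|a|the)?\s*(\w+) (?:is |are )?(\w+ing) (on|in|at) (?:an|a|the) (\w+)",
     "What is the {0} {1} {2}?", 3),
    (r"^(?:an|a|the)?\s*(\w+) (?:is |are )?(\w+ing)",
     "What is the {0} doing?", 1),
]

def generate_qa(caption):
    """Yield procedurally generated (question, answer) pairs for a caption."""
    caption = caption.lower().strip().rstrip(".")
    for pattern, q_template, answer_group in TEMPLATES:
        match = re.match(pattern, caption)
        if match:
            groups = match.groups()
            yield q_template.format(*groups), groups[answer_group]

for q, a in generate_qa("A dog is sleeping on a couch."):
    print(q, "->", a)
# What is the dog sleeping on? -> couch
# What is the dog doing? -> sleeping
```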

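Similarly, a rough sketch of the spatial-pyramid patch idea: instead of detector-provided bounding boxes, the image is split into a fixed pyramid of grid patches. The (1, 2, 4) grid levels below are an assumption for illustration; the paper's exact patch configuration may differ.

```python
# Illustrative sketch of spatial-pyramid patch extraction, a detector-free
# alternative to object bounding boxes. Pyramid levels are assumed.
import numpy as np

def spatial_pyramid_patches(image, levels=(1, 2, 4)):
    """Split an HxWxC image into a pyramid of non-overlapping patches.

    Level n contributes an n x n grid, so levels (1, 2, 4)
    yield 1 + 4 + 16 = 21 patches per image.
    """
    h, w = image.shape[:2]
    patches = []
    for n in levels:
        ph, pw = h // n, w // n
        for i in range(n):
            for j in range(n):
                patches.append(image[i * ph:(i + 1) * ph,
                                     j * pw:(j + 1) * pw])
    return patches

image = np.zeros((224, 224, 3), dtype=np.uint8)
print(len(spatial_pyramid_patches(image)))  # 21
```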

Related research

- All You May Need for VQA are Image Captions (05/04/2022)
  Visual Question Answering (VQA) has benefited from increasingly sophisti...

- Don't Just Assume; Look and Answer: Overcoming Priors for Visual Question Answering (12/01/2017)
  A number of studies have found that today's Visual Question Answering (V...

- Making the V in Text-VQA Matter (08/01/2023)
  Text-based VQA aims at answering questions by reading the text present i...

- MUTANT: A Training Paradigm for Out-of-Distribution Generalization in Visual Question Answering (09/18/2020)
  While progress has been made on the visual question answering leaderboar...

- Overcoming Language Priors with Self-supervised Learning for Visual Question Answering (12/17/2020)
  Most Visual Question Answering (VQA) models suffer from the language pri...

- Multiple Instance Captioning: Learning Representations from Histopathology Textbooks and Articles (03/08/2021)
  We present ARCH, a computational pathology (CP) multiple instance captio...

- CREPE: Can Vision-Language Foundation Models Reason Compositionally? (12/13/2022)
  A fundamental characteristic common to both human vision and natural lan...
