Coarse-to-Fine Reasoning for Visual Question Answering

10/06/2021
by Binh X. Nguyen, et al.

Bridging the semantic gap between image and question is an important step toward improving the accuracy of the Visual Question Answering (VQA) task. However, most existing VQA methods focus on attention mechanisms or visual relations to reason about the answer, leaving features at different semantic levels underutilized. In this paper, we present a new reasoning framework to fill the gap between visual features and semantic clues in the VQA task. Our method first extracts features and predicates from the image and question. We then propose a reasoning framework that jointly learns these features and predicates in a coarse-to-fine manner. Extensive experimental results on three large-scale VQA datasets show that our proposed approach achieves superior accuracy compared with other state-of-the-art methods. Furthermore, our reasoning framework provides an explainable way to understand the decisions of the deep neural network when predicting the answer.
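The abstract's coarse-to-fine idea can be illustrated with a minimal sketch: a coarse stage fuses a global image feature with the question feature, while a fine stage uses question-guided attention over region-level features, and the two are combined to score answers. All function names, dimensions, and the classifier below are hypothetical illustrations of the general pattern, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def coarse_to_fine_answer(global_img, region_feats, question, n_answers=10):
    """Hypothetical two-stage fusion: a coarse global image-question
    fusion plus a fine, question-guided attention over region features."""
    # Coarse stage: fuse the global image feature with the question feature.
    coarse = np.tanh(global_img * question)            # shape (d,)
    # Fine stage: attention weights over regions, guided by the question.
    scores = region_feats @ question                   # shape (n_regions,)
    attn = softmax(scores)
    fine = attn @ region_feats                         # attended feature, (d,)
    # Combine both levels and project to an answer distribution.
    joint = np.concatenate([coarse, fine])             # shape (2d,)
    W = rng.standard_normal((n_answers, joint.size)) * 0.01  # toy classifier
    return softmax(W @ joint), attn

d, n_regions = 16, 5
probs, attn = coarse_to_fine_answer(
    rng.standard_normal(d),
    rng.standard_normal((n_regions, d)),
    rng.standard_normal(d))
print(probs.shape, attn.shape)  # answer distribution and region attention
```

The attention weights `attn` over regions are what makes such a framework inspectable: one can see which image regions contributed most to the predicted answer.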

