Explicit Reasoning over End-to-End Neural Architectures for Visual Question Answering

03/23/2018
by Somak Aditya, et al.

Many vision and language tasks require commonsense reasoning beyond data-driven image and natural language processing. Here we adopt Visual Question Answering (VQA) as an example task, where a system is expected to answer a natural-language question about an image. Current state-of-the-art systems attempt to solve the task with deep neural architectures and achieve promising performance. However, the resulting systems are generally opaque, and they struggle to understand questions that require extra knowledge. In this paper, we present an explicit reasoning layer on top of a set of penultimate neural network based systems. The reasoning layer enables answering questions that require additional knowledge, and at the same time provides an interpretable interface to end users. Specifically, the reasoning layer adopts a Probabilistic Soft Logic (PSL) based engine to reason over a basket of inputs: visual relations, the semantic parse of the question, and background ontological knowledge from word2vec and ConceptNet. Experimental analysis of the answers and the key evidential predicates generated on the VQA dataset validates our approach.
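The abstract's reasoning layer rests on Probabilistic Soft Logic, in which predicates take soft truth values in [0, 1] and logical connectives are relaxed with the Lukasiewicz t-norms. The following toy sketch (not the authors' implementation; the predicate names and scores are hypothetical) illustrates how a visual relation and an ontological similarity score could be combined under such rules:

```python
# Toy PSL-style sketch: soft truth values in [0, 1] combined with
# Lukasiewicz relaxations of AND / OR / NOT, as used in Probabilistic
# Soft Logic. All groundings below are hypothetical illustrations.

def l_and(a, b):
    """Lukasiewicz conjunction: max(0, a + b - 1)."""
    return max(0.0, a + b - 1.0)

def l_or(a, b):
    """Lukasiewicz disjunction: min(1, a + b)."""
    return min(1.0, a + b)

def l_not(a):
    """Lukasiewicz negation: 1 - a."""
    return 1.0 - a

def rule_satisfaction(body, head):
    """Degree to which the rule (body -> head) is satisfied,
    i.e. l_or(not body, head)."""
    return l_or(l_not(body), head)

# Hypothetical soft evidence for one grounding:
has_relation = 0.9     # vision module: holds(person, umbrella)
word_similarity = 0.8  # word2vec/ConceptNet: similar(umbrella, rain)

# Rule: holds(person, umbrella) AND similar(umbrella, rain) -> answer(rain)
body = l_and(has_relation, word_similarity)
answer_truth = 0.65    # current soft truth value of the candidate answer

print(round(body, 2))                                   # 0.7
print(round(rule_satisfaction(body, answer_truth), 2))  # 0.95
```

In full PSL, inference would jointly adjust all soft truth values to minimize the total distance to rule satisfaction; this sketch only evaluates one grounded rule.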
