Automatic Generation of Contrast Sets from Scene Graphs: Probing the Compositional Consistency of GQA

by Yonatan Bitton, et al.

Recent works have shown that supervised models often exploit data artifacts to achieve good test scores while their performance severely degrades on samples outside their training distribution. Contrast sets (Gardner et al., 2020) quantify this phenomenon by perturbing test samples in a minimal way such that the output label is modified. While most contrast sets were created manually, requiring intensive annotation effort, we present a novel method which leverages rich semantic input representations to automatically generate contrast sets for the visual question answering task. Our method computes the answer of perturbed questions, thus vastly reducing annotation cost and enabling thorough evaluation of models' performance on various semantic aspects (e.g., spatial or relational reasoning). We demonstrate the effectiveness of our approach on the GQA dataset and its semantic scene graph image representation. We find that, despite GQA's compositionality and carefully balanced label distribution, two high-performing models drop 13-17% in accuracy compared to the original test set. Finally, we show that our automatic perturbation can be applied to the training set to mitigate the degradation in performance, opening the door to more robust models.
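The core idea, answering a minimally perturbed question directly from the scene graph rather than by human annotation, can be illustrated with a small sketch. This is not the authors' code; the toy scene graph, object names, and question template below are all invented for illustration, assuming a simple attribute-query question type.

```python
# Illustrative sketch (not the paper's implementation): given a scene graph,
# answer a simple attribute question, perturb the queried attribute minimally,
# and recompute the answer automatically to form a contrast pair.

# Toy scene graph: object name -> attribute dict (all values invented).
scene_graph = {
    "apple": {"color": "red", "location": "table"},
    "mug": {"color": "blue", "location": "shelf"},
}

def answer_color_question(graph, obj, color):
    """Answer 'Is the <obj> <color>?' directly from the scene graph."""
    return "yes" if graph.get(obj, {}).get("color") == color else "no"

def make_contrast_pair(graph, obj, color, alt_color):
    """Return the original (question, answer) pair plus a minimally
    perturbed variant whose answer is computed, not annotated."""
    original = (f"Is the {obj} {color}?",
                answer_color_question(graph, obj, color))
    perturbed = (f"Is the {obj} {alt_color}?",
                 answer_color_question(graph, obj, alt_color))
    return original, perturbed

orig, pert = make_contrast_pair(scene_graph, "apple", "red", "blue")
print(orig)   # ('Is the apple red?', 'yes')
print(pert)   # ('Is the apple blue?', 'no')
```

Because the answer to the perturbed question is read off the scene graph, no extra human labeling is needed; the same pattern extends to spatial or relational perturbations by querying other graph fields.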
