Do Explanations make VQA Models more Predictable to a Human?

10/29/2018
by Arjun Chandrasekaran, et al.

A rich line of research attempts to make deep neural networks more transparent by generating human-interpretable 'explanations' of their decision process, especially for interactive tasks like Visual Question Answering (VQA). In this work, we analyze whether existing explanations indeed make a VQA model (its responses as well as its failures) more predictable to a human. Surprisingly, we find that they do not. On the other hand, we find that human-in-the-loop approaches that treat the model as a black box do.
