Explain by Evidence: An Explainable Memory-based Neural Network for Question Answering

by Quan Tran, et al.

Interpretability and explainability of deep neural networks are challenging due to their scale, complexity, and the contested notions on which the explaining process rests. Previous work, in particular, has focused on representing internal components of neural networks through human-friendly visuals and concepts. In real life, on the other hand, when making a decision, humans tend to rely on similar situations and/or associations from the past. Arguably, then, a promising approach to making a model transparent is to design it so that it explicitly connects the current sample with previously seen ones, and bases its decision on those samples. Grounded in that principle, we propose in this paper an explainable, evidence-based memory network architecture, which learns to summarize the dataset and extract supporting evidence to make its decision. Our model achieves state-of-the-art performance on two popular question answering datasets (i.e. TrecQA and WikiQA). Via further analysis, we show that this model can reliably trace the errors it makes in the validation step back to the training instances that might have caused them. We believe this error-tracing capability provides significant benefit for improving dataset quality in many applications.
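The idea of basing a decision on retrieved training samples, and exposing those samples as evidence, can be illustrated with a minimal sketch. This is not the authors' architecture: it replaces the learned memory summarization with a plain nearest-neighbor lookup over stored embeddings, and the class `EvidenceMemory` and all of its methods are hypothetical names introduced here for illustration.

```python
import numpy as np

def normalize(v):
    """Unit-normalize a vector so dot products become cosine similarities."""
    v = np.asarray(v, dtype=float)
    return v / (np.linalg.norm(v) + 1e-8)

class EvidenceMemory:
    """Toy evidence-based classifier: store training embeddings with labels,
    then answer a query by majority vote over its most similar stored
    instances, returning their indices so each decision is traceable."""

    def __init__(self):
        self.keys = []    # embeddings of seen training samples
        self.labels = []  # their labels

    def add(self, embedding, label):
        self.keys.append(normalize(embedding))
        self.labels.append(label)

    def predict(self, embedding, k=3):
        q = normalize(embedding)
        sims = np.array([q @ key for key in self.keys])
        top = np.argsort(-sims)[:k]          # indices of the k nearest samples
        votes = [self.labels[i] for i in top]
        label = max(set(votes), key=votes.count)
        # The returned indices are the "supporting evidence": a wrong
        # prediction can be traced back to the training samples that drove it.
        return label, top.tolist()
```

In this toy setting, error tracing is immediate: when a validation prediction is wrong, the returned indices point directly at the (possibly mislabeled) training instances that supplied the vote.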


Explainable Fact-checking through Question Answering

Misleading or false information has been creating chaos in some places a...

REM-Net: Recursive Erasure Memory Network for Commonsense Evidence Refinement

When answering a question, people often draw upon their rich world knowl...

Learning to Compose Neural Networks for Question Answering

We describe a question answering model that applies to both images and s...

A Memory Model for Question Answering from Streaming Data Supported by Rehearsal and Anticipation of Coreference Information

Existing question answering methods often assume that the input content ...

DTCA: Decision Tree-based Co-Attention Networks for Explainable Claim Verification

Recently, many methods discover effective evidence from reliable sources...

Remember Intentions: Retrospective-Memory-based Trajectory Prediction

To realize trajectory prediction, most previous methods adopt the parame...

A Road-map Towards Explainable Question Answering: A Solution for Information Pollution

The increasing rate of information pollution on the Web requires novel s...
