Document Modeling with Graph Attention Networks for Multi-grained Machine Reading Comprehension

05/12/2020
by Bo Zheng, et al.

Natural Questions is a challenging new machine reading comprehension benchmark with two-grained answers: a long answer (typically a paragraph) and a short answer (one or more entities inside the long answer). Despite the effectiveness of existing methods on this benchmark, they treat the two sub-tasks separately during training and ignore their dependencies. To address this issue, we present a novel multi-grained machine reading comprehension framework that models documents according to their hierarchical structure, at four levels of granularity: document, paragraph, sentence, and token. We use graph attention networks to obtain representations at each level so that all levels can be learned simultaneously. The long and short answers are extracted from the paragraph-level and token-level representations, respectively. In this way, we model the dependencies between the two-grained answers, allowing them to provide evidence for each other. We jointly train the two sub-tasks, and our experiments show that our approach significantly outperforms previous systems on both the long- and short-answer criteria.
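The abstract describes running graph attention over a hierarchical document graph with document, paragraph, sentence, and token nodes. As an illustration only, and not the authors' implementation, here is a minimal NumPy sketch of a single graph-attention layer applied to a toy two-level hierarchy; the tiny graph, all tensor shapes, and the random weights are invented for the example:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gat_layer(H, adj, W, a, slope=0.2):
    """One graph-attention layer: each node attends over its
    neighbors (given by adj) and aggregates their projected features."""
    Z = H @ W                                   # (N, d_out) projection
    out = np.zeros_like(Z)
    for i in range(len(Z)):
        nbrs = np.flatnonzero(adj[i])           # neighbor indices (incl. self)
        # LeakyReLU-scored attention logits over the neighborhood
        logits = np.array([np.concatenate([Z[i], Z[j]]) @ a for j in nbrs])
        logits = np.where(logits > 0, logits, slope * logits)
        att = softmax(logits)                    # coefficients sum to 1
        out[i] = att @ Z[nbrs]                   # attention-weighted aggregation
    return out

# Toy hierarchy: node 0 = document, nodes 1-2 = paragraphs, nodes 3-6 = tokens.
# Edges link each node to its parent and children; self-loops are added too.
edges = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6)]
N, d = 7, 8
adj = np.eye(N)
for u, v in edges:
    adj[u, v] = adj[v, u] = 1

rng = np.random.default_rng(0)
H = rng.normal(size=(N, d))                     # initial node features
W = rng.normal(size=(d, d))                     # projection weights
a = rng.normal(size=(2 * d,))                   # attention vector
H_out = gat_layer(H, adj, W, a)
print(H_out.shape)                              # (7, 8)
```

After such a layer, a paragraph node's representation mixes in information from its tokens and the document node, which is the kind of cross-granularity evidence sharing the abstract argues for; the paper's actual model presumably adds multi-head attention and deeper stacks.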


Related research

- 09/25/2020 · No Answer is Better Than Wrong Answer: A Reflection Model for Document Level Machine Reading Comprehension
- 06/15/2017 · S-Net: From Answer Extraction to Answer Generation for Machine Reading Comprehension
- 12/20/2020 · Adaptive Bi-directional Attention: Exploring Multi-Granularity Representations for Machine Reading Comprehension
- 04/15/2020 · Exploring Probabilistic Soft Logic as a framework for integrating top-down and bottom-up processing of language in a task context
- 11/01/2019 · Select, Answer and Explain: Interpretable Multi-hop Reading Comprehension over Multiple Documents
- 09/14/2020 · Composing Answer from Multi-spans for Reading Comprehension
- 05/07/2021 · VAULT: VAriable Unified Long Text Representation for Machine Reading Comprehension
