Summarize-then-Answer: Generating Concise Explanations for Multi-hop Reading Comprehension

09/14/2021
by Naoya Inoue, et al.

How can we generate concise explanations for multi-hop Reading Comprehension (RC)? The current strategy of identifying supporting sentences can be seen as an extractive, question-focused summarization of the input text. However, these extractive explanations are not necessarily concise, i.e., not minimally sufficient for answering a question. Instead, we advocate an abstractive approach: generate a question-focused, abstractive summary of the input paragraphs and then feed it to an RC system. Given a limited amount of human-annotated abstractive explanations, we train the abstractive explainer in a semi-supervised manner: we start from a supervised model and then refine it through trial and error, maximizing a conciseness-promoting reward function. Our experiments demonstrate that, with limited supervision (only 2k instances), the proposed abstractive explainer generates more compact explanations than an extractive explainer while maintaining sufficiency.
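To make the pipeline concrete, below is a minimal sketch of a summarize-then-answer loop and a toy conciseness-promoting reward, assuming off-the-shelf Hugging Face summarization and question-answering models as stand-ins for the paper's abstractive explainer and RC system. The model names, the question-conditioning scheme, and the reward weights are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch of a summarize-then-answer pipeline. Model choices and the
# reward function below are assumptions for illustration, not the paper's setup.
from transformers import pipeline

# Abstractive explainer: produces a question-focused summary of the paragraphs.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
# Downstream RC system: answers the question from the summary alone.
reader = pipeline("question-answering", model="deepset/roberta-base-squad2")


def summarize_then_answer(question: str, paragraphs: str, max_len: int = 64):
    # Prepend the question so the summary is question-focused (an assumed
    # conditioning scheme; the paper's exact input format may differ).
    summary = summarizer(
        f"{question} {paragraphs}",
        max_length=max_len,
        min_length=8,
        do_sample=False,
    )[0]["summary_text"]
    answer = reader(question=question, context=summary)
    return summary, answer["answer"]


def conciseness_reward(summary: str, predicted: str, gold: str,
                       alpha: float = 0.5) -> float:
    # Toy reward in the spirit of the conciseness-promoting objective:
    # reward sufficiency (the summary still lets the reader answer correctly)
    # minus a length penalty. The exact reward in the paper may differ.
    sufficiency = 1.0 if predicted.strip().lower() == gold.strip().lower() else 0.0
    length_penalty = alpha * len(summary.split()) / 100.0
    return sufficiency - length_penalty
```

In this sketch, the reward is what a trial-and-error (reinforcement-style) fine-tuning stage would maximize after the initial supervised training of the explainer.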

Related research:

11/07/2022 · Complex Reading Comprehension Through Question Decomposition
Multi-hop reading comprehension requires not only the ability to reason ...

10/01/2019 · Identifying Supporting Facts for Multi-hop Question Answering with Document Graph Networks
Recent advances in reading comprehension have resulted in models that su...

05/02/2020 · Teaching Machine Comprehension with Compositional Explanations
Advances in extractive machine reading comprehension (MRC) rely heavily ...

01/08/2019 · Multi-style Generative Reading Comprehension
This study focuses on the task of multi-passage reading comprehension (R...

12/08/2022 · A Comprehensive Survey on Multi-hop Machine Reading Comprehension Approaches
Machine reading comprehension (MRC) is a long-standing topic in natural ...

04/09/2021 · Evaluating Explanations for Reading Comprehension with Realistic Counterfactuals
Token-level attributions have been extensively studied to explain model ...
