Improving Multi-Document Summarization through Referenced Flexible Extraction with Credit-Awareness

by Yun-Zhu Song, et al.

A notable challenge in Multi-Document Summarization (MDS) is the extreme length of the input. In this paper, we present an extract-then-abstract Transformer framework to overcome the problem. Specifically, we leverage pre-trained language models to construct a hierarchical extractor for salient sentence selection across documents and an abstractor for rewriting the selected contents as summaries. However, learning such a framework is challenging since the optimal contents for the abstractor are generally unknown. Previous works typically create a pseudo extraction oracle to enable supervised learning for both the extractor and the abstractor. Nevertheless, we argue that the performance of such methods may be limited by insufficient information for prediction and by inconsistent objectives between training and testing. To this end, we propose a loss weighting mechanism that makes the model aware of the unequal importance of sentences not in the pseudo extraction oracle, and we leverage the fine-tuned abstractor to generate summary references as auxiliary signals for learning the extractor. Moreover, we propose a reinforcement learning method that can be efficiently applied to the extractor to harmonize the optimization between training and testing. Experimental results show that our framework substantially outperforms strong baselines with comparable model sizes and achieves the best results on the Multi-News, Multi-XScience, and WikiCatSum corpora.
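The loss weighting idea above can be illustrated with a minimal sketch. This is not the paper's implementation: the function names (`credit_weights`, `weighted_extraction_loss`), the `alpha` scaling factor, and the use of a per-sentence overlap score as the credit signal are all illustrative assumptions. The point is only that sentences outside the pseudo extraction oracle receive unequal, graded weights rather than a uniform negative label.

```python
import math

def credit_weights(sent_scores, oracle_idx, alpha=0.5):
    """Hypothetical credit-aware weighting: sentences in the pseudo
    extraction oracle keep full weight, while the remaining sentences
    receive partial credit proportional to their overlap with the
    reference summary (sent_scores, e.g. a ROUGE value), instead of
    being treated as uniformly unimportant."""
    return [1.0 if i in oracle_idx else alpha * s
            for i, s in enumerate(sent_scores)]

def weighted_extraction_loss(probs, labels, weights):
    """Binary cross-entropy over sentence-selection probabilities,
    scaled per sentence by the credit-aware weights."""
    losses = [-w * (y * math.log(p) + (1 - y) * math.log(1 - p))
              for p, y, w in zip(probs, labels, weights)]
    return sum(losses) / len(losses)
```

For example, with three candidate sentences where only sentence 0 is in the oracle, `credit_weights([0.6, 0.1, 0.4], {0})` yields `[1.0, 0.05, 0.2]`: the non-oracle sentence with higher reference overlap contributes more to the extractor's loss.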



Related papers:

- A Condense-then-Select Strategy for Text Summarization
- Transductive Learning for Abstractive News Summarization
- Reinforcing Semantic-Symmetry for Document Summarization
- Leveraging Graph to Improve Abstractive Multi-Document Summarization
- Combination of abstractive and extractive approaches for summarization of long scientific texts
- UniREx: A Unified Learning Framework for Language Model Rationale Extraction
- Exploring Explainable Selection to Control Abstractive Generation
