Learning by Semantic Similarity Makes Abstractive Summarization Better

02/18/2020
by   Wonjin Yoon, et al.

One obstacle in abstractive summarization is the existence of multiple potentially correct predictions for a single source document. Widely used objective functions for supervised learning, such as cross-entropy loss, cannot handle such alternative answers effectively; instead, the alternatives act as training noise. In this paper, we propose a Semantic Similarity strategy that accounts for the semantic meaning of generated summaries during training. Our training objective maximizes a semantic similarity score, computed by an additional layer that estimates the similarity between the generated summary and the reference summary. By leveraging pre-trained language models, our model achieves a new state-of-the-art ROUGE-L score of 41.5 on the CNN/DM dataset. To complement automatic evaluation, we also conducted a human evaluation, in which our summaries received higher scores than both the baseline and the reference summaries.
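To make the idea concrete, the objective described above can be sketched as a standard cross-entropy term plus a penalty on the dissimilarity between embeddings of the generated and reference summaries. This is an illustrative sketch, not the paper's exact formulation: the function names, the `lam` weight, and the use of plain cosine similarity over precomputed embeddings are all assumptions made for the example.

```python
import numpy as np

def cross_entropy(probs, target_ids):
    # Token-level negative log-likelihood of the reference tokens.
    # probs[t] is the model's distribution over the vocabulary at step t.
    return -np.mean([np.log(probs[t, i]) for t, i in enumerate(target_ids)])

def semantic_similarity(gen_emb, ref_emb):
    # Cosine similarity between summary-level embeddings
    # (e.g. from a pre-trained language model; hypothetical here).
    return float(np.dot(gen_emb, ref_emb) /
                 (np.linalg.norm(gen_emb) * np.linalg.norm(ref_emb)))

def combined_loss(probs, target_ids, gen_emb, ref_emb, lam=0.5):
    # Maximizing the similarity score is expressed as minimizing
    # (1 - similarity), weighted by a hypothetical coefficient lam.
    return cross_entropy(probs, target_ids) + \
        lam * (1.0 - semantic_similarity(gen_emb, ref_emb))
```

When the generated summary's embedding matches the reference exactly, the similarity term contributes nothing and the loss reduces to plain cross-entropy; dissimilar summaries are penalized even when their token-level likelihood is high, which is the intended effect of the strategy.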
