Better Rewards Yield Better Summaries: Learning to Summarise Without References

09/03/2019
by Florian Böhm, et al.

Reinforcement Learning (RL) based document summarisation systems yield state-of-the-art performance in terms of ROUGE scores, because they directly use ROUGE as the reward during training. However, summaries with high ROUGE scores often receive low human ratings. To find a better reward function that can guide RL to generate human-appealing summaries, we learn a reward function from human ratings on 2,500 summaries. Our reward function takes only the document and the system summary as input. Hence, once trained, it can be used to train RL-based summarisation systems without any reference summaries. We show that our learned rewards correlate significantly better with human ratings than previous approaches. Human evaluation experiments show that, compared with state-of-the-art supervised-learning systems and ROUGE-as-reward RL summarisation systems, RL systems trained with our learned rewards generate summaries with higher human ratings. The learned reward function and our source code are available at https://github.com/yg211/summary-reward-no-reference.
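The training setup the abstract describes can be sketched in a few lines. The code below is a minimal, runnable toy, not the authors' implementation: `learned_reward` is a hypothetical stand-in (simple word-overlap scoring) for the trained reward model, and the REINFORCE-flavoured loop trains a toy extractive policy. The one property carried over from the paper is the interface: the reward sees only the document and the candidate summary, never a reference.

```python
import math
import random

def learned_reward(document: str, summary: str) -> float:
    """Hypothetical stand-in for the trained reward model: it scores a
    candidate summary from the document and the summary alone -- no
    reference summary is involved, matching the paper's interface."""
    doc_words = set(document.lower().split())
    sum_words = summary.lower().split()
    if not sum_words:
        return 0.0
    # Toy scoring rule: fraction of summary words found in the document.
    return sum(w in doc_words for w in sum_words) / len(sum_words)

def train_rl_summariser(document, sentences, steps=200, lr=0.5, seed=0):
    """REINFORCE-flavoured toy trainer: each candidate sentence carries
    an inclusion score; sampled summaries are scored by learned_reward,
    and an exponential-moving-average baseline reduces variance."""
    rng = random.Random(seed)
    scores = {s: 0.0 for s in sentences}
    baseline = 0.0
    for _ in range(steps):
        # Sample an extractive summary: include each sentence with
        # probability sigmoid(score).
        chosen = [s for s in sentences
                  if rng.random() < 1.0 / (1.0 + math.exp(-scores[s]))]
        reward = learned_reward(document, " ".join(chosen))
        advantage = reward - baseline
        baseline = 0.9 * baseline + 0.1 * reward
        for s in sentences:
            # Push up sentences that appeared in above-baseline
            # summaries, push down the rest.
            scores[s] += lr * (advantage if s in chosen else -advantage)
    # Greedy decode: keep the sentences the learned policy favours.
    return [s for s in sentences if scores[s] > 0.0]
```

Because the reward needs no reference, the same loop runs on any unlabelled document collection once the reward model is trained; in the paper this role is played by a model fitted to 2,500 human-rated summaries rather than word overlap.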


Related research

07/30/2019: Reward Learning for Efficient Reinforcement Learning in Extractive Document Summarisation
Document summarisation can be formulated as a sequential decision-making...

02/20/2023: Fantastic Rewards and How to Tame Them: A Case Study on Reward Learning for Task-oriented Dialogue Systems
When learning task-oriented dialogue (ToD) agents, reinforcement learnin...

11/04/2021: B-Pref: Benchmarking Preference-Based Reinforcement Learning
Reinforcement learning (RL) requires access to a reward function that in...

06/10/2019: Neural Keyphrase Generation via Reinforcement Learning with Adaptive Rewards
Generating keyphrases that summarize the main points of a document is a ...

09/04/2018: The Effect of Context on Metaphor Paraphrase Aptness Judgments
We conduct two experiments to study the effect of context on metaphor pa...

12/19/2022: Optimizing Prompts for Text-to-Image Generation
Well-designed prompts can guide text-to-image models to generate amazing...

12/19/2022: Inverse Reinforcement Learning for Text Summarization
Current state-of-the-art summarization models are trained with either ma...
