Rethinking Automatic Evaluation in Sentence Simplification

04/15/2021
by Thomas Scialom, et al.

Automatic evaluation remains an open research question in Natural Language Generation. In the context of Sentence Simplification it is particularly challenging: the task inherently requires replacing complex words with simpler ones that share the same meaning, which limits the effectiveness of n-gram based metrics such as BLEU. Alongside recent advances in NLG, new metrics have been proposed, such as BERTScore for Machine Translation. In summarization, the QuestEval metric compares two texts by asking and answering questions about them. In this paper, we first propose a simple modification of QuestEval that allows it to handle Sentence Simplification. We then extensively evaluate the correlations with human judgement for several metrics, including the recent BERTScore and QuestEval, and show that the latter obtains state-of-the-art correlations, outperforming standard metrics like BLEU and SARI. More importantly, we also show that a large part of these correlations is actually spurious for all the metrics. To investigate this phenomenon further, we release a new corpus of evaluated simplifications, this time written by humans rather than generated by systems. This allows us to remove the spurious correlations, draw very different conclusions from the original ones, and reach a better understanding of these metrics. In particular, we raise concerns about the very low correlations of most traditional metrics. Our results show that the only metric that significantly measures Meaning Preservation is our adaptation of QuestEval.
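
For readers unfamiliar with this kind of meta-evaluation, the sketch below shows how a correlation between an automatic metric and human ratings is typically computed. It uses sentence-level BLEU via the sacrebleu library and Pearson correlation via scipy as stand-ins; the variable names and toy data are purely illustrative and are not the corpus or the evaluation code released with the paper.

```python
# Minimal sketch of a metric/human-judgement correlation analysis,
# not the paper's actual evaluation code.
# Requires: pip install sacrebleu scipy
from sacrebleu import sentence_bleu
from scipy.stats import pearsonr

# Toy, illustrative data: system simplifications, their references,
# and human ratings on one dimension (e.g. Meaning Preservation, 1-5 scale).
simplifications = [
    "The cat sat on the mat.",
    "He finished the work fast.",
    "The results were surprising.",
]
references = [
    ["The cat was sitting on the mat."],
    ["He completed the task quickly."],
    ["The findings were unexpected."],
]
human_scores = [4.0, 3.5, 4.5]

# Sentence-level BLEU for each simplification against its reference(s).
bleu_scores = [
    sentence_bleu(hyp, refs).score
    for hyp, refs in zip(simplifications, references)
]

# Pearson correlation between the metric and the human judgements.
r, p_value = pearsonr(bleu_scores, human_scores)
print(f"Pearson r = {r:.3f} (p = {p_value:.3f})")
```

The same procedure applies to BERTScore, SARI, or QuestEval; only the metric computation changes. The paper's point is that such correlations can be inflated by spurious factors when all the evaluated texts are system outputs, which is why the human-written corpus matters.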

Related research

01/30/2019
Reference-less Quality Estimation of Text Simplification Systems
The evaluation of text simplification (TS) systems remains an open chall...

05/30/2023
BLEU Meets COMET: Combining Lexical and Neural Metrics Towards Robust Machine Translation Evaluation
Although neural-based machine translation evaluation metrics, such as CO...

07/22/2021
To Ship or Not to Ship: An Extensive Evaluation of Automatic Metrics for Machine Translation
Automatic metrics are commonly used as the exclusive tool for declaring ...

04/14/2020
A Human Evaluation of AMR-to-English Generation Systems
Most current state-of-the-art systems for generating English text from A...

08/10/2015
Improve the Evaluation of Fluency Using Entropy for Machine Translation Evaluation Metrics
The widely-used automatic evaluation metrics cannot adequately reflect t...

08/20/2020
Towards a Decomposable Metric for Explainable Evaluation of Text Generation from AMR
Systems that generate sentences from (abstract) meaning representations ...

12/02/2021
InfoLM: A New Metric to Evaluate Summarization & Data2Text Generation
Assessing the quality of natural language generation systems through hum...
