The Glass Ceiling of Automatic Evaluation in Natural Language Generation

08/31/2022
by   Pierre Colombo, et al.

Automatic evaluation metrics capable of replacing human judgments are critical to enabling fast development of new methods, and numerous research efforts have focused on crafting such metrics. In this work, we take a step back and analyze recent progress by comparing the full body of existing automatic metrics against human metrics. Because metrics are used according to how they rank systems, we compare metrics in the space of system rankings. Our extensive statistical analysis reveals surprising findings: automatic metrics, old and new, are much more similar to each other than to humans. Automatic metrics are not complementary and rank systems similarly. Strikingly, human metrics predict each other much better than the combination of all automatic metrics predicts any single human metric. This is surprising because human metrics are often designed to be independent and to capture different aspects of quality, e.g., content fidelity or readability. We provide a discussion of these findings and recommendations for future work in the field of evaluation.
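The core analysis described above can be sketched in a few lines: induce a ranking of systems from each metric's scores, then measure pairwise rank correlation (here Kendall's tau). All metric names and scores below are illustrative toy data, not results from the paper; the toy values are chosen so two automatic metrics agree perfectly while both correlate weakly with human judgments, which is the pattern the paper reports.

```python
# Hedged sketch: comparing evaluation metrics in the space of system
# rankings. Scores below are hypothetical, not the paper's data.
from scipy.stats import kendalltau

# Hypothetical per-system scores under three metrics (higher = better),
# for five NLG systems.
scores = {
    "BLEU":      [0.31, 0.28, 0.35, 0.22, 0.30],
    "BERTScore": [0.85, 0.83, 0.88, 0.79, 0.84],
    "human":     [3.1, 3.9, 3.4, 2.8, 4.2],
}

def ranking(values):
    """Rank of each system induced by a metric (0 = best)."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = r
    return ranks

# Pairwise Kendall tau between the rankings each metric induces: high
# tau between the two automatic metrics, low tau against human scores.
names = list(scores)
for a in range(len(names)):
    for b in range(a + 1, len(names)):
        tau, _ = kendalltau(ranking(scores[names[a]]),
                            ranking(scores[names[b]]))
        print(f"{names[a]} vs {names[b]}: tau = {tau:.2f}")
```

With these toy scores, the two automatic metrics rank the systems identically (tau = 1.0) while each correlates only weakly with the human ranking, mirroring the paper's finding that automatic metrics cluster together and away from humans.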

research 03/18/2023
Revisiting Automatic Question Summarization Evaluation in the Biomedical Domain
Automatic evaluation metrics have been facilitating the rapid developmen...

research 05/29/2018
Human vs Automatic Metrics: on the Importance of Correlation Design
This paper discusses two existing approaches to the correlation analysis...

research 10/18/2021
BEAMetrics: A Benchmark for Language Generation Evaluation Evaluation
Natural language processing (NLP) systems are increasingly trained to ge...

research 07/06/2021
Comparing PCG metrics with Human Evaluation in Minecraft Settlement Generation
There are a range of metrics that can be applied to the artifacts produc...

research 09/19/2023
What is the Best Automated Metric for Text to Motion Generation?
There is growing interest in generating skeleton-based human motions fro...

research 04/14/2020
A Human Evaluation of AMR-to-English Generation Systems
Most current state-of-the art systems for generating English text from A...

research 07/31/2019
On conducting better validation studies of automatic metrics in natural language generation evaluation
Natural language generation (NLG) has received increasing attention, whi...
