Reranking for Natural Language Generation from Logical Forms: A Study based on Large Language Models

09/21/2023
by Levon Haroutunian, et al.

Large language models (LLMs) have demonstrated impressive capabilities in natural language generation. However, their output quality can be inconsistent, posing challenges for generating natural language from logical forms (LFs). This task requires the generated outputs to convey the exact semantics of the LFs, neither omitting any of their content nor introducing hallucinations. In this work, we tackle this issue by proposing a novel generate-and-rerank approach. Our approach involves initially generating a set of candidate outputs by prompting an LLM and subsequently reranking them using a task-specific reranker model. In addition, we curate a manually collected dataset to evaluate the alignment between different ranking metrics and human judgements. The chosen ranking metrics are used to improve the training and evaluation of the reranker model. Through extensive experiments on three diverse datasets, we demonstrate that the candidates selected by our reranker outperform those selected by baseline methods in terms of semantic consistency and fluency, as measured by three comprehensive metrics. Our findings provide strong evidence for the effectiveness of our approach in improving the quality of generated outputs.
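As a rough illustration of the generate-and-rerank idea described in the abstract, the sketch below shows one way such a pipeline could be wired together. This is not the authors' implementation: `generate_candidates`, `score`, and `num_candidates` are hypothetical placeholders standing in for an LLM prompting routine and a task-specific reranker that rates semantic consistency between an LF and a candidate text.

```python
from typing import Callable, List


def rerank_lf_verbalizations(
    logical_form: str,
    generate_candidates: Callable[[str, int], List[str]],
    score: Callable[[str, str], float],
    num_candidates: int = 10,
) -> str:
    """Generate candidate verbalizations of a logical form with an LLM,
    then return the candidate the reranker scores highest.

    `generate_candidates` and `score` are assumed callables, not part of
    the paper's released code.
    """
    candidates = generate_candidates(logical_form, num_candidates)
    if not candidates:
        raise ValueError("the generator returned no candidates")
    # Rerank: a higher score means better semantic consistency (and fluency)
    # between the logical form and the candidate text.
    return max(candidates, key=lambda text: score(logical_form, text))


# Example usage with toy stand-ins (the LF syntax here is purely illustrative):
# best = rerank_lf_verbalizations(
#     "count(flights(from=SEA, to=BOS))",
#     generate_candidates=my_llm_sampler,
#     score=my_reranker_score,
# )
```

The key design point this sketch captures is the separation of concerns: the LLM is used only to produce a diverse candidate pool, while output selection is delegated to a smaller model trained specifically to judge LF-text consistency.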


