Semantic Consistency for Assuring Reliability of Large Language Models

08/17/2023
by Harsh Raj, et al.

Large Language Models (LLMs) exhibit remarkable fluency and competence across a range of natural language tasks. However, recent research has highlighted their sensitivity to variations in input prompts. To deploy LLMs safely and reliably, their outputs must be consistent when prompted with expressions that carry the same meaning or intent. While some existing work has explored how state-of-the-art LLMs address this issue, evaluations have been confined to assessing the lexical equality of single- or multi-word answers, overlooking the consistency of generated text sequences. For a more comprehensive understanding of the consistency of LLMs in open-ended text generation, we introduce a general measure of semantic consistency and formulate multiple versions of this metric to evaluate the performance of various LLMs. Our proposal demonstrates significantly higher consistency and stronger correlation with human evaluations of output consistency than traditional metrics based on lexical consistency. Finally, we propose a novel prompting strategy, called Ask-to-Choose (A2C), to enhance semantic consistency. When evaluated on closed-book question answering based on answer variations from the TruthfulQA benchmark, A2C increases accuracy metrics for pretrained and finetuned LLMs by up to 47%, and consistency metrics for instruction-tuned models by up to 7-fold.
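One common way to operationalize a semantic-consistency measure of this kind is as the mean pairwise similarity of a model's generations under paraphrased prompts, computed in an embedding space rather than over surface tokens. The sketch below illustrates that idea only; the `embed` callable is a hypothetical stand-in for a sentence encoder, and this is not the paper's exact formulation (the abstract notes the authors define multiple versions of the metric).

```python
from itertools import combinations
import math


def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0


def semantic_consistency(outputs, embed):
    """Mean pairwise cosine similarity of embedded generations.

    outputs: model generations obtained from paraphrases of one prompt.
    embed:   callable mapping a string to a fixed-length vector
             (hypothetical here; in practice a sentence encoder).
    Returns 1.0 for zero or one output (nothing to disagree with).
    """
    vecs = [embed(o) for o in outputs]
    pairs = list(combinations(vecs, 2))
    if not pairs:
        return 1.0
    return sum(cosine(u, v) for u, v in pairs) / len(pairs)
```

A lexical-consistency baseline of the kind the abstract criticizes would instead compare the strings directly (e.g. exact match), and would score semantically equivalent rephrasings as inconsistent; the embedding-based version above does not.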

Related research:

11/10/2022 | Measuring Reliability of Large Language Models through Semantic Consistency
While large pretrained language models (PLMs) demonstrate incredible flu...

09/09/2023 | FaNS: a Facet-based Narrative Similarity Metric
Similar Narrative Retrieval is a crucial task since narratives are essen...

08/06/2023 | Towards Multiple References Era – Addressing Data Leakage and Limited Reference Diversity in NLG Evaluation
N-gram matching-based evaluation metrics, such as BLEU and chrF, are wid...

05/31/2021 | A Semantic-based Method for Unsupervised Commonsense Question Answering
Unsupervised commonsense question answering is appealing since it does n...

02/19/2023 | Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation
We introduce a method to measure uncertainty in large language models. F...

05/19/2023 | Examining the Inter-Consistency of Large Language Models: An In-depth Analysis via Debate
Large Language Models (LLMs) have demonstrated human-like intelligence a...

07/06/2023 | Style Over Substance: Evaluation Biases for Large Language Models
As large language models (LLMs) continue to advance, accurately and comp...
