Information Filter upon Diversity-Improved Decoding for Diversity-Faithfulness Tradeoff in NLG

10/25/2022
by Han Meng, et al.

Some Natural Language Generation (NLG) tasks require both faithfulness and diversity, and the decoding strategy strongly influences the quality of the generated text. Strategies such as beam search and greedy search produce text with low diversity and high repetition; guided decoding, the usual route to diversity, may in turn generate unfaithful expressions. To this end, this paper presents Information Filter upon Diversity-Improved Decoding (IFDID) to achieve a tradeoff between diversity and faithfulness. IFDID is a two-stage decoding strategy built on the proposed Enhance-Filter framework: it first increases the probabilities of typical tokens being selected and then filters the candidates by their information amount. To verify its effectiveness, we compare our method with other baselines on the CommonGEN, RocStories and AdGen benchmarks, which cover English and Chinese datasets. Both our numerical experiments and human evaluation confirm the effectiveness of the proposed approach: it achieves a ROUGE score 1.24 points higher, reflecting faithfulness, together with a higher diversity score of 62.5, demonstrating that IFDID is a novel state-of-the-art (SOTA) decoding strategy for the tradeoff between diversity and faithfulness.
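The two-stage Enhance-Filter idea can be illustrated with a minimal sketch of a single decoding step. This is not the authors' implementation: the function name, the typicality-based boost, and the surprisal cutoff are assumptions chosen to mirror the abstract's description (up-weight "typical" tokens for diversity, then drop candidates carrying too much information to protect faithfulness).

```python
import numpy as np

def enhance_filter_step(logits, boost=1.5, max_surprisal=6.0, rng=None):
    """Illustrative two-stage token selection (hypothetical parameters):
    Stage 1 enhances probabilities of typical tokens; Stage 2 filters
    candidates by their information amount (surprisal)."""
    if rng is None:
        rng = np.random.default_rng()
    # Softmax over the model's logits.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Information amount (surprisal) of each candidate token, in nats.
    surprisal = -np.log(probs + 1e-12)
    entropy = float((probs * surprisal).sum())
    # Stage 1 (Enhance): up-weight tokens whose surprisal is close to the
    # distribution's entropy, i.e. "typical" tokens, to improve diversity.
    typicality = np.exp(-np.abs(surprisal - entropy))
    enhanced = probs * (1.0 + boost * typicality)
    # Stage 2 (Filter): remove tokens carrying too much information,
    # which tend to yield unfaithful continuations.
    enhanced[surprisal > max_surprisal] = 0.0
    if enhanced.sum() == 0.0:
        return int(np.argmax(probs))  # fallback if everything was filtered
    enhanced /= enhanced.sum()
    return int(rng.choice(len(logits), p=enhanced))
```

In a full generator this step would be applied at every position of the autoregressive loop; the `boost` and `max_surprisal` values would need tuning per task, which is exactly the diversity-faithfulness knob the paper studies.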

Related research

- On Decoding Strategies for Neural Text Generators (03/29/2022): When generating text from probabilistic models, the chosen decoding stra...
- Trading Off Diversity and Quality in Natural Language Generation (04/22/2020): For open-ended language generation tasks such as storytelling and dialog...
- Improved Beam Search for Hallucination Mitigation in Abstractive Summarization (12/06/2022): Advancement in large pretrained language models has significantly improv...
- Best-k Search Algorithm for Neural Text Generation (11/22/2022): Modern natural language generation paradigms require a good decoding str...
- Analysis of diversity-accuracy tradeoff in image captioning (02/27/2020): We investigate the effect of different model architectures, training obj...
- Decomposition Enhances Reasoning via Self-Evaluation Guided Decoding (05/01/2023): We endow Large Language Models (LLMs) with fine-grained self-evaluation ...
- Analyzing the Abstractiveness-Factuality Tradeoff With Nonlinear Abstractiveness Constraints (08/05/2021): We analyze the tradeoff between factuality and abstractiveness of summar...
