
Rationale-Augmented Ensembles in Language Models

by   Xuezhi Wang, et al.

Recent research has shown that rationales, or step-by-step chains of thought, can be used to improve performance in multi-step reasoning tasks. We reconsider rationale-augmented prompting for few-shot in-context learning, where (input -> output) prompts are expanded to (input, rationale -> output) prompts. For rationale-augmented prompting we demonstrate how existing approaches, which rely on manual prompt engineering, are subject to sub-optimal rationales that may harm performance. To mitigate this brittleness, we propose a unified framework of rationale-augmented ensembles, where we identify rationale sampling in the output space as the key component to robustly improve performance. This framework is general and can easily be extended to common natural language processing tasks, even those that do not traditionally leverage intermediate steps, such as question answering, word sense disambiguation, and sentiment analysis. We demonstrate that rationale-augmented ensembles achieve more accurate results than existing prompting approaches, including standard prompting without rationales and rationale-based chain-of-thought prompting, while simultaneously improving the interpretability of model predictions through the associated rationales.
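The core mechanism the abstract describes, sampling multiple rationales in the output space and aggregating the final answers, can be sketched as follows. This is a minimal illustration, not the authors' implementation: `generate` is a hypothetical stand-in for a sampled (non-greedy) LLM call that returns a (rationale, answer) pair for the given few-shot prompt.

```python
from collections import Counter

def rationale_ensemble(prompt, generate, n_samples=5):
    """Sample several rationale-augmented completions and majority-vote the answer.

    `generate(prompt)` is assumed to return a (rationale, answer) tuple,
    with sampling temperature > 0 so repeated calls yield diverse rationales.
    """
    rationales, answers = [], []
    for _ in range(n_samples):
        rationale, answer = generate(prompt)
        rationales.append(rationale)
        answers.append(answer)
    # Aggregate over the sampled outputs: the most frequent final answer wins,
    # and the rationales that led to it serve as interpretable support.
    best_answer, _ = Counter(answers).most_common(1)[0]
    support = [r for r, a in zip(rationales, answers) if a == best_answer]
    return best_answer, support
```

A toy usage: if three sampled completions yield answers "8", "7", "8", the ensemble returns "8" together with the two rationales that produced it.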




Related papers:

- Unlocking Temporal Question Answering for Large Language Models Using Code Execution
- Boosted Prompt Ensembles for Large Language Models
- Reasoning Circuits: Few-shot Multihop Question Generation with Structured Rationales
- Prompt Space Optimizing Few-shot Reasoning Success with Large Language Models
- Recursion of Thought: A Divide-and-Conquer Approach to Multi-Context Reasoning with Language Models
- Verify-and-Edit: A Knowledge-Enhanced Chain-of-Thought Framework
- Human-in-the-Loop through Chain-of-Thought