Prompt Space Optimizing Few-shot Reasoning Success with Large Language Models

06/06/2023
by Fobo Shi et al.

Prompt engineering is an essential technique for enhancing the abilities of large language models (LLMs) by providing explicit and specific instructions. It enables LLMs to excel in various tasks, such as arithmetic reasoning, question answering, summarization, relation extraction, machine translation, and sentiment analysis. Researchers have been actively exploring different prompt engineering strategies, such as Chain of Thought (CoT), Zero-CoT, and in-context learning. However, current approaches still lack a solid theoretical foundation for determining optimal prompts. To address this issue in prompt engineering, we propose a new and effective approach called Prompt Space. Our methodology uses text embeddings to obtain basis vectors via matrix decomposition, and then constructs a space for representing all prompts. Prompt Space significantly outperforms state-of-the-art prompt paradigms on ten public reasoning benchmarks. Notably, without the help of the CoT method or the prompt "Let's think step by step", Prompt Space shows superior performance over the few-shot method. Overall, our approach provides a robust and fundamental theoretical framework for selecting simple and effective prompts. This advancement marks a significant step towards improving prompt engineering for a wide variety of applications in LLMs.
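The basis-vector idea from the abstract can be sketched in a few lines: embed candidate questions, decompose the embedding matrix with SVD to obtain orthonormal basis vectors, and pick one exemplar per basis direction. This is a minimal illustration under stated assumptions; the function name, the toy embeddings, and the exact selection rule are assumptions for illustration, not the paper's published implementation.

```python
import numpy as np

def select_exemplars(embeddings, k):
    """Pick k exemplar indices whose embeddings best align with the
    top-k singular directions of the embedding matrix (a sketch of
    the basis-vector idea; the selection rule is an assumption)."""
    # Center the matrix so the decomposition captures variation, not the mean.
    X = embeddings - embeddings.mean(axis=0)
    # SVD: the rows of Vt are orthonormal basis vectors of the embedding space.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    chosen = []
    for basis in Vt[:k]:
        # Choose the question whose embedding is most aligned with this basis,
        # skipping questions that were already selected.
        scores = np.abs(X @ basis)
        for idx in np.argsort(scores)[::-1]:
            if idx not in chosen:
                chosen.append(int(idx))
                break
    return chosen

# Toy demo: six fake 4-d "question embeddings", select two exemplars.
rng = np.random.default_rng(0)
emb = rng.normal(size=(6, 4))
print(select_exemplars(emb, 2))
```

In a real pipeline, `embeddings` would come from a sentence encoder applied to the candidate questions, and the selected questions (with their reasoning chains) would serve as the few-shot demonstrations in the prompt.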

