What Makes Good In-Context Examples for GPT-3?

01/17/2021
by   Jiachang Liu, et al.

GPT-3 has attracted much attention due to its superior performance across a wide range of NLP tasks, especially its powerful and versatile in-context few-shot learning ability. Despite this success, we find that the empirical results of GPT-3 depend heavily on the choice of in-context examples. In this work, we investigate whether there are more effective strategies for judiciously selecting in-context examples (relative to random sampling) that better leverage GPT-3's few-shot capabilities. Inspired by the recent success of leveraging a retrieval module to augment large-scale neural network models, we propose to retrieve examples that are semantically similar to a test sample to formulate its corresponding prompt. Intuitively, the in-context examples selected with such a strategy may serve as more informative inputs to unleash GPT-3's extensive knowledge. We evaluate the proposed approach on several natural language understanding and generation benchmarks, where the retrieval-based prompt selection approach consistently outperforms the random baseline. Moreover, we observe that sentence encoders fine-tuned on task-related datasets yield even more helpful retrieval results. Notably, significant gains are observed on tasks such as table-to-text generation (41.9% on the ToTTo dataset) and open-domain question answering (45.5% on the NQ dataset). We hope our investigation could help understand the behaviors of GPT-3 and large-scale pre-trained LMs in general and enhance their few-shot capabilities.
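The retrieval-based selection the abstract describes can be sketched as a nearest-neighbor search in a sentence-embedding space: encode the test input and all candidate training examples, pick the k most similar candidates, and concatenate them into the prompt. The sketch below is a minimal illustration, not the paper's implementation; the embedding vectors are assumed to come from some sentence encoder (the paper fine-tunes one on task-related data), and the prompt template is a hypothetical placeholder.

```python
import numpy as np

def select_in_context_examples(test_emb, train_embs, train_examples, k=3):
    """Return the k training examples most similar to the test sample.

    test_emb: 1-D embedding of the test input (from some sentence encoder).
    train_embs: 2-D array, one embedding per candidate training example.
    train_examples: list of (input, output) pairs aligned with train_embs.
    """
    # Cosine similarity between the test embedding and every candidate.
    sims = train_embs @ test_emb / (
        np.linalg.norm(train_embs, axis=1) * np.linalg.norm(test_emb) + 1e-9
    )
    # Keep the k most similar, ordered so the closest example sits last
    # (i.e., nearest to the test input in the final prompt).
    order = np.argsort(sims)[-k:]
    return [train_examples[i] for i in order]

def build_prompt(examples, test_input):
    """Format retrieved examples plus the test input as a few-shot prompt
    (a generic "Input/Output" template, assumed here for illustration)."""
    demos = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{demos}\nInput: {test_input}\nOutput:"
```

In practice the embeddings would come from a pre-trained or task-fine-tuned sentence encoder, and the prompt would be sent to GPT-3; the random baseline corresponds to replacing the similarity ranking with a uniform sample of k training examples.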


