A Survey for In-context Learning

12/31/2022
by   Qingxiu Dong, et al.
Peking University
The Regents of the University of California

With the increasing capabilities of large language models (LLMs), in-context learning (ICL) has become a new paradigm for natural language processing (NLP), in which LLMs make predictions based only on contexts augmented with a few training examples. Exploring ICL to evaluate and extrapolate the abilities of LLMs has become a new trend. In this paper, we survey and summarize the progress, challenges, and open problems of ICL. We first present a formal definition of ICL and clarify its relation to related lines of study. We then organize and discuss advanced ICL techniques, including training strategies and prompting strategies. Finally, we outline the challenges of ICL and suggest potential directions for further research. We hope our work encourages more research on uncovering how ICL works and on improving ICL.
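The core mechanic described above, conditioning an LLM on a few labeled demonstrations and a query, with no parameter updates, can be sketched as a simple prompt-construction routine. This is an illustrative example only; the template, the sentiment-classification task, and the function name are assumptions, not details taken from the survey.

```python
# Illustrative sketch of in-context learning (ICL) prompt construction.
# The template and demonstration examples here are assumptions for
# illustration; the survey does not prescribe a specific format.

def build_icl_prompt(demonstrations, query, template="Input: {x}\nOutput: {y}"):
    """Concatenate a few labeled demonstrations with a new query.

    The LLM is expected to infer the task from the demonstrations and
    complete the missing output for the query -- no gradient updates
    or fine-tuning are involved.
    """
    parts = [template.format(x=x, y=y) for x, y in demonstrations]
    # Leave the output slot of the query blank for the model to fill in.
    parts.append(template.format(x=query, y="").rstrip())
    return "\n\n".join(parts)

demos = [
    ("The movie was fantastic!", "positive"),
    ("I wasted two hours of my life.", "negative"),
]
prompt = build_icl_prompt(demos, "A delightful surprise from start to finish.")
print(prompt)
```

The resulting string would be passed to an LLM as-is; the model's continuation after the final `Output:` serves as the prediction for the query.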


