Evaluating statistical language models as pragmatic reasoners

05/01/2023
by Benjamin Lipkin et al.

The relationship between communicated language and intended meaning is often probabilistic and sensitive to context. Numerous strategies attempt to estimate such a mapping, often leveraging recursive Bayesian models of communication. In parallel, large language models (LLMs) have been increasingly applied to semantic parsing applications, tasked with inferring logical representations from natural language. While existing LLM explorations have been largely restricted to literal language use, in this work, we evaluate the capacity of LLMs to infer the meanings of pragmatic utterances. Specifically, we explore the case of threshold estimation on the gradable adjective “strong”, contextually conditioned on a strength prior, then extended to composition with qualification, negation, polarity inversion, and class comparison. We find that LLMs can derive context-grounded, human-like distributions over the interpretations of several complex pragmatic utterances, yet struggle when composing with negation. These results inform the inferential capacity of statistical language models and their use in pragmatic and semantic parsing applications. All corresponding code is made publicly available (https://github.com/benlipkin/probsem/tree/CogSci2023).
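To make the setup concrete, the following is a minimal sketch of the kind of recursive Bayesian model of communication the abstract references, in the style of a Lassiter & Goodman threshold semantics for the gradable adjective "strong". It is not the authors' implementation from the linked repository; the degree and threshold grids, the contextual strength prior, the speaker's alternative set (utter "strong" vs. stay silent), and the rationality parameter alpha are all illustrative assumptions.

```python
# Illustrative RSA-style threshold model for "strong" (not the paper's code).
# A pragmatic listener jointly infers a strength degree and a threshold theta,
# conditioned on an assumed contextual prior over strength.

import numpy as np

degrees = np.linspace(0, 1, 50)             # candidate strength values
thresholds = np.linspace(0, 1, 50)          # candidate thresholds for "strong"
strength_prior = np.exp(-((degrees - 0.4) ** 2) / 0.05)  # assumed contextual prior
strength_prior /= strength_prior.sum()
alpha = 4.0                                 # speaker rationality (assumed)

def literal_listener(theta):
    """P(degree | 'strong', theta): prior renormalized over degrees above theta."""
    truth = (degrees > theta).astype(float)
    post = truth * strength_prior
    return post / post.sum() if post.sum() > 0 else np.zeros_like(post)

def speaker_prob(theta):
    """P('strong' | degree, theta) for a soft-max speaker choosing between
    uttering 'strong' and staying silent (silence conveys only the prior)."""
    L0 = literal_listener(theta)
    with np.errstate(divide="ignore"):
        utility_strong = np.where(L0 > 0, np.log(L0), -np.inf)
        utility_silent = np.log(strength_prior)
    exp_strong = np.exp(alpha * utility_strong)
    exp_silent = np.exp(alpha * utility_silent)
    return exp_strong / (exp_strong + exp_silent)

# Pragmatic listener: joint posterior over (degree, theta) given "strong",
# with a uniform prior over thresholds.
joint = np.array([speaker_prob(t) * strength_prior for t in thresholds])
joint /= joint.sum()
posterior_degree = joint.sum(axis=0)        # marginal interpretation of "strong"

print("Expected strength given 'strong':", (degrees * posterior_degree).sum())
```

The marginal posterior over degrees plays the role of a context-grounded interpretation of "strong" under the assumed prior; the evaluation described above compares LLM-derived distributions of this kind against human judgments.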


