Language Models as Agent Models

by Jacob Andreas

Language models (LMs) are trained on collections of documents, written by individual human agents to achieve specific goals in an outside world. During training, LMs have access only to the text of these documents, with no direct evidence of the internal states of the agents that produced them – a fact often used to argue that LMs are incapable of modeling goal-directed aspects of human language production and comprehension. Can LMs trained on text learn anything at all about the relationship between language and use? I argue that LMs are models of intentional communication in a specific, narrow sense. When performing next-word prediction given a textual context, an LM can infer and represent properties of an agent likely to have produced that context. These representations can in turn influence subsequent LM generation in the same way that agents' communicative intentions influence their language. I survey findings from the recent literature showing that – even in today's non-robust and error-prone models – LMs infer and use representations of fine-grained communicative intentions and more abstract beliefs and goals. Despite the limited nature of their training data, they can thus serve as building blocks for systems that communicate and act intentionally.


Related papers:

- Augmenting Autotelic Agents with Large Language Models
- Measuring and Manipulating Knowledge Representations in Language Models
- Passive learning of active causal strategies in agents and language models
- Automatic Exploration of Textual Environments with Language-Conditioned Autotelic Agents
- Meaning without reference in large language models
- Of Models and Tin Men – a behavioural economics study of principal-agent problems in AI alignment using large-language models
- Toward Stance-based Personas for Opinionated Dialogues
