The Curse of Recursion: Training on Generated Data Makes Models Forget

05/27/2023
by Ilia Shumailov, et al.

Stable Diffusion revolutionised image creation from descriptive text. GPT-2, GPT-3(.5) and GPT-4 demonstrated astonishing performance across a variety of language tasks. ChatGPT introduced such language models to the general public. It is now clear that large language models (LLMs) are here to stay, and will bring about drastic change in the whole ecosystem of online text and images. In this paper we consider what the future might hold. What will happen to GPT-n once LLMs contribute much of the language found online? We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as Model Collapse and show that it can occur in Variational Autoencoders, Gaussian Mixture Models and LLMs. We build theoretical intuition behind the phenomenon and portray its ubiquity amongst all learned generative models. We demonstrate that it has to be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, data collected from genuine human interactions with systems will become increasingly valuable in the presence of LLM-generated content in data crawled from the Internet.
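The core dynamic behind Model Collapse can be illustrated with the simplest possible generative model: a single Gaussian refit, generation after generation, to samples drawn from the previous generation's fit. The sketch below is an illustrative toy (not the paper's experimental setup); estimation error compounds across generations and the fitted variance drifts toward zero, which is how the distribution's tails vanish.

```python
import numpy as np

rng = np.random.default_rng(0)

def collapse_demo(n_samples=20, n_generations=1000):
    """Toy Model Collapse: fit a Gaussian to samples drawn from the
    previous generation's fitted Gaussian, then repeat. Finite-sample
    estimation noise compounds, so the fitted standard deviation
    drifts toward zero and the tails of the distribution disappear."""
    mu, sigma = 0.0, 1.0  # generation 0: the "real data" distribution
    history = [sigma]
    for _ in range(n_generations):
        # sample from the current model, then refit it to its own output
        data = rng.normal(mu, sigma, size=n_samples)
        mu, sigma = data.mean(), data.std(ddof=1)
        history.append(sigma)
    return history

history = collapse_demo()
print(f"std after {0:>4} generations: {history[0]:.3f}")
print(f"std after {len(history) - 1:>4} generations: {history[-1]:.3f}")
```

With small per-generation sample sizes the shrinkage is rapid; larger samples slow but do not prevent the drift, since the log-variance performs a random walk with negative expected increment.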


Related research

01/10/2023 · Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations
Generative language models have improved drastically, and can now produc...

05/15/2023 · DarkBERT: A Language Model for the Dark Side of the Internet
Recent research has suggested that there are clear differences in the la...

08/22/2023 · Hey That's Mine! Imperceptible Watermarks are Preserved in Diffusion Generated Outputs
Generative models have seen an explosion in popularity with the release ...

07/20/2023 · The Extractive-Abstractive Axis: Measuring Content "Borrowing" in Generative Language Models
Generative language models produce highly abstractive outputs by design,...

04/25/2023 · Stable and low-precision training for large-scale vision-language models
We introduce new methods for 1) accelerating and 2) stabilizing training...

10/19/2022 · Language Does More Than Describe: On The Lack Of Figurative Speech in Text-To-Image Models
The impressive capacity shown by recent text-to-image diffusion models t...

05/17/2023 · Large-Scale Text Analysis Using Generative Language Models: A Case Study in Discovering Public Value Expressions in AI Patents
Labeling data is essential for training text classifiers but is often di...
