Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations

01/10/2023
by Josh A. Goldstein, et al.

Generative language models have improved drastically, and can now produce realistic text outputs that are difficult to distinguish from human-written content. For malicious actors, these language models bring the promise of automating the creation of convincing and misleading text for use in influence operations. This report assesses how language models might change influence operations in the future, and what steps can be taken to mitigate this threat. We lay out possible changes to the actors, behaviors, and content of online influence operations, and provide a framework for stages of the language model-to-influence operations pipeline that mitigations could target (model construction, model access, content dissemination, and belief formation). While no reasonable mitigation can be expected to fully prevent the threat of AI-enabled influence operations, a combination of multiple mitigations may make an important difference.


