Where's the Liability in Harmful AI Speech?

08/09/2023
by   Peter Henderson, et al.
0

Generative AI, in particular text-based "foundation models" (large models trained on a huge variety of information including the internet), can generate speech that could be problematic under a wide range of liability regimes. Machine learning practitioners regularly "red team" models to identify and mitigate such problematic speech: from "hallucinations" falsely accusing people of serious misconduct to recipes for constructing an atomic bomb. A key question is whether these red-teamed behaviors actually present any liability risk for model creators and deployers under U.S. law, incentivizing investments in safety mechanisms. We examine three liability regimes, tying them to common examples of red-teamed model behaviors: defamation, speech integral to criminal conduct, and wrongful death. We find that any Section 230 immunity analysis or downstream liability analysis is intimately wrapped up in the technical details of algorithm design. And there are many roadblocks to truly finding models (and their associated parties) liable for generated speech. We argue that AI should not be categorically immune from liability in these scenarios and that as courts grapple with the already fine-grained complexities of platform algorithms, the technical details of generative AI loom above with thornier questions. Courts and policymakers should think carefully about what technical design incentives they create as they evaluate these issues.

READ FULL TEXT
research
09/12/2023

Prompting4Debugging: Red-Teaming Text-to-Image Diffusion Models by Finding Problematic Prompts

Text-to-image diffusion models, e.g. Stable Diffusion (SD), lately have ...
research
08/23/2022

Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned

We describe our early efforts to red team language models in order to si...
research
08/28/2023

The Promise and Peril of Artificial Intelligence – Violet Teaming Offers a Balanced Path Forward

Artificial intelligence (AI) promises immense benefits across sectors, y...
research
06/05/2023

AHA!: Facilitating AI Impact Assessment by Generating Examples of Harms

While demands for change and accountability for harmful AI consequences ...
research
09/15/2023

Talkin' 'Bout AI Generation: Copyright and the Generative-AI Supply Chain

"Does generative AI infringe copyright?" is an urgent question. It is al...
research
08/16/2023

Freedom of Speech and AI Output

Is the output of generative AI entitled to First Amendment protection? W...
research
05/30/2023

Seeing Seeds Beyond Weeds: Green Teaming Generative AI for Beneficial Uses

Large generative AI models (GMs) like GPT and DALL-E are trained to gene...

Please sign up or login with your details

Forgot password? Click here to reset