Regulating ChatGPT and other Large Generative AI Models

02/05/2023
by Philipp Hacker et al.

Large generative AI models (LGAIMs), such as ChatGPT or Stable Diffusion, are rapidly transforming the way we communicate, illustrate, and create. However, AI regulation, in the EU and beyond, has primarily focused on conventional AI models, not LGAIMs. This paper situates these new generative models in the current debate on trustworthy AI regulation and asks how the law can be tailored to their capabilities. After laying technical foundations, the legal part of the paper proceeds in four steps, covering (1) direct regulation, (2) data protection, (3) content moderation, and (4) policy proposals. It suggests a novel terminology to capture the AI value chain in LGAIM settings by differentiating between LGAIM developers, deployers, professional and non-professional users, and recipients of LGAIM output. We tailor regulatory duties to these different actors along the value chain and suggest four strategies to ensure that LGAIMs are trustworthy and deployed for the benefit of society at large. Rules in the AI Act and other direct regulation must match the specificities of pre-trained models. In particular, regulation should focus on concrete high-risk applications, rather than on the pre-trained model itself, and should include obligations regarding (i) transparency and (ii) risk management. Non-discrimination provisions (iii) may, however, apply to LGAIM developers. Lastly, (iv) the core of the DSA's content moderation rules, including notice and action mechanisms and trusted flaggers, should be expanded to cover LGAIMs. In all areas, regulators and lawmakers need to act fast to keep pace with the dynamics of ChatGPT et al.
