The European AI Liability Directives – Critique of a Half-Hearted Approach and Lessons for the Future

by Philipp Hacker et al.

The optimal liability framework for AI systems remains an unsolved problem across the globe. In a much-anticipated move, the European Commission advanced two proposals outlining the European approach to AI liability in September 2022: a novel AI Liability Directive and a revision of the Product Liability Directive. Together they constitute the final cornerstone of AI regulation in the EU. Crucially, the liability proposals and the EU AI Act are inherently intertwined: the latter does not contain any individual rights for affected persons, while the former lack specific, substantive rules on AI development and deployment. Taken together, these acts may well trigger a Brussels effect in AI regulation, with significant consequences for the US and other countries. This paper makes three novel contributions. First, it examines the Commission proposals in detail and shows that, while they take steps in the right direction, they ultimately represent a half-hearted approach: if enacted as foreseen, AI liability in the EU will rest primarily on disclosure-of-evidence mechanisms and a set of narrowly defined presumptions concerning fault, defectiveness, and causality. Hence, second, the article suggests amendments, which are collected in an Annex at the end of the paper. Third, based on an analysis of the key risks AI poses, the final part of the paper maps out a road for the future of AI liability and regulation, in the EU and beyond. This includes: a comprehensive framework for AI liability; provisions to support innovation; an extension to non-discrimination/algorithmic fairness, as well as explainable AI; and sustainability. The paper proposes to jump-start sustainable AI regulation via sustainability impact assessments in the AI Act and sustainable design defects in the liability regime. In this way, the law may help spur not only fair AI and explainable AI (XAI), but potentially also sustainable AI (SAI).




