Artificial intelligence liability directive

EESC opinion on the Artificial intelligence liability Directive.


The purpose of the AI Liability Directive is to lay down uniform rules on access to information and the alleviation of the burden of proof in relation to damage caused by AI systems, establishing broader protection for victims (whether individuals or businesses) and fostering the AI sector by increasing guarantees. It will harmonise certain rules for claims outside the scope of the Product Liability Directive, in cases where damage is caused by wrongful behaviour.
The Directive simplifies the legal process for victims who must prove that someone's fault led to damage, by introducing two main features. First, where a relevant fault has been established and a causal link to the AI system's performance seems reasonably likely, the so-called 'presumption of causality' addresses the difficulties victims face in having to explain in detail how harm was caused by a specific fault or omission, which can be particularly hard when trying to understand and navigate complex AI systems. Second, victims will have more tools to seek legal redress, through a right of access to evidence from companies and suppliers in cases involving high-risk AI.

Practical information