The European Commission (EC) recently made two announcements: one concerning the proposed AI Liability Directive (AILD) and another concerning the Product Liability Directive (PLD), both intended to complement the AI Act announced earlier this year. The EC considers the Directives and the Act to be two sides of the same coin: the AI Act lays down the compliance requirements for AI developers to follow, while the AILD and PLD provide consumers (claimants) with a way to claim damages.
This article explores the content of the AILD and PLD, including consumers’ rights and the burden of proof. It outlines how the new rules may affect manufacturers, developers and implementers of AI systems, and what organisations can do now to build readiness before the rules come into force.
Importance of the proposed Directives
The EC has highlighted three major reasons for proposing the Directives:
- The need to modernise liability rules for circular-economy business models and for products in the digital age. The EC found liability to be ranked among the top three barriers to the use of AI by European companies, with nearly 43% of companies considering AI-related liability an external obstacle to integrating AI tools into their business strategies.
The EC also noted that existing liability regimes across the EU require claimants to establish a manufacturer’s or developer’s fault and connect it to the resulting damage. This is particularly difficult for AI, given the complex, opaque nature of AI systems (the “black box” effect), claimants’ lack of technical know-how and the costs of litigation.
- Creating a more level playing field between EU and non-EU manufacturers and developers.
- Putting consumers on an equal footing with manufacturers and developers.
Through the Directives, the EC therefore aims to modernise the liability regime for AI while encouraging and reinforcing public trust. The EC also aims to harmonise the currently fragmented approach across the EU by creating a balance of power between consumers and AI manufacturers.
Commonalities between the two Directives
Given the extent of the commonalities between the two Directives, the EC considers that they form a package, with different types of liability claims falling within their respective scopes. They complement each other to form an overall effective civil liability system that promotes trust in AI by ensuring claimants are appropriately compensated.
- Alignment on definitions:
- Claimants are persons or consumers who have suffered harm or loss caused by an AI system or product, or by an AI software developer.
- Defendants are the company, entity, organisation or institution that owns, manages or develops the AI or, in some cases, manufactures the AI product.
- Both Directives recognise that, in certain cases, claimants may find it difficult to produce evidence. The AILD and PLD therefore both empower national courts to mandate the disclosure of relevant evidence in cases involving a high-risk AI system suspected of causing damage to the claimant.
- Both the AILD and the PLD provide a “rebuttable presumption of causality” in favour of the claimant. Once the presumption is established, the court will presume a causal link between the AI developer’s fault and the damage caused. To establish that the presumption applies, the claimant must show, first, that the developer failed to disclose a key fact, resulting in non-compliance with a duty of care arising under EU or Member State law; second, that this fault substantially affected the output of the AI system; and third, that the output caused the damage. The rebuttable presumption applies only if the claimant establishes all of these elements.
- When mandating the disclosure of evidence from a product manufacturer, distributor, importer or any other third party who has placed the high-risk AI system onto the EU market, or from the user of the high-risk AI system, the court must ensure that the principles of proportionality and necessity are satisfied. In doing so, courts must consider:
- The legitimate interests of the parties involved;
- Whether the evidence involves disclosure of trade secrets or confidential information relating to national security.
Additionally, the burden of proof arising from the “rebuttable presumption” (detailed above) can shift back to the claimant if, for instance, the AI developer takes reasonable steps to rebut the presumption by disclosing evidence. The burden will then rest with the claimant to establish his or her claim.
Differences between the two Directives
The major difference between the Directives is reflected in their names: the AILD lays down rules on non-contractual civil liability for AI, while the PLD addresses product liability. This difference is most prominent in Article 5(1) of the PLD, which makes product manufacturers strictly liable for damage caused to a claimant. It is complemented by Article 1 of the PLD, which provides that only “natural persons” (i.e. humans) can claim compensation for damage caused by defective products. In contrast, the EC specifically states that the AILD (Article 2(7)) will be broader in scope, allowing both natural persons and legal persons (i.e. companies and business entities) to bring a claim.
The PLD and AILD also differ on the kinds of damage a claimant can claim for. Article 4(6) of the PLD defines damage as material losses suffered by people, which may arise from death, physical injury or injury to a person’s psychological health; the definition also covers damage to property and the loss or corruption of data. This differs from the AILD, which also covers damage resulting from discrimination or a breach of fundamental rights.
Conclusions
This legislative package, and especially the AILD, provides a framework that enables organisations and individuals to pursue compensation for discrimination and breaches of fundamental rights, with the benefit of a presumption of causality falling on those responsible for the (high-risk) AI system. Developers, manufacturers and users of such AI systems therefore need to consider and evaluate the potential risks to fundamental rights that their systems may create before any claim is brought, so that they are in a position to rebut that presumption of causality.
The European Consumer Organisation (BEUC) has expressed some disappointment that the rules may only appear to give individuals an advantage, and a number of companies have sent an open letter to the EC expressing concerns about the “chilling effect” that the new Directives may have on innovation and research. Nevertheless, the two proposed Directives, coupled with the AI Act, will form a comprehensive set of laws on AI systems in the EU. These legal developments could accelerate a shift towards explainable and ethical AI as a “business as usual” activity.
Trilateral’s Ethics Innovation team has specialists with extensive expertise and experience in implementing explainable AI tools to meet legislative and social requirements. Our Explainable AI Service and our Ethics Assessment Tools can support organisations in identifying how algorithmic functions may cause harm or create benefit, and which values are at play in their development and use. Please contact us for more information about how to build readiness for emerging AI compliance requirements.