On Dec. 8, 2024, the European Union (EU) updated its Product Liability Directive (PLD) to, inter alia, specifically address AI software. The directive must next be “transposed,” or written into national law, by each of the 27 member states.
The PLD specifically designates software developers as product manufacturers and expands the scope of “product” to include software, although there are some intricacies related to open-source software. This contrasts with products liability doctrine in the U.S., where courts are typically loath to classify software as a product. The PLD covers damages including death, personal injury, medically recognized psychological damage, damage to property, and destroyed or corrupted data.
Damage to property used exclusively for professional purposes is not compensated under the PLD, and the exclusion for data is broader still: “the destruction or corruption of data that are used for professional purposes, even if not exclusively so, should not be compensated for under this Directive.”
The PLD requires only that the injured party demonstrate that the product, which can include AI software, was defective, and that such defect caused the harm. A defect is presumed when: (i) the defendant fails to disclose relevant evidence, (ii) the product does not comply with mandatory product safety obligations, or (iii) the damage was caused by an obvious malfunction during reasonably foreseeable use. The causal link between the defect and the harm is presumed when: (a) the product is defective, and (b) the harm is of a kind typically consistent with the defect.
A particularly important aspect of the PLD, especially with regard to AI software, is that a court may presume the presence of a defect and/or the causal link to the harm when both of the following conditions are satisfied: (1) the injured party faces excessive difficulty in establishing proof due to technical or scientific complexity, and (2) the injured party demonstrates that it is likely that the product is defective and/or that the causal link is present.
The field of AI explainability/interpretability has made significant strides over the last few years, but as AI models grow in size and complexity, it can be difficult to articulate precisely what is happening inside such models, and hence to prove a defect or its causal link to harm. The PLD appears to empower courts to sidestep this challenge in certain circumstances, although it remains to be seen what will constitute “excessive difficulty,” as the recitals note that the “assessment of excessive difficulties should also be made by national courts on a case-by-case basis.”
The EU is also considering an AI-specific liability directive, the AI Liability Directive (AILD), which was introduced in September 2022. The European Parliamentary Research Service recently published a complementary impact assessment examining the AILD in light of the EU AI Act and the PLD, and changes to the text of the AILD are therefore expected. It will be important to track the progress of the AILD, along with eventual enforcement actions under the PLD once the member-state transposition deadline of December 9, 2026 passes.