Dr. Raphael Nagel (LL.M.), Founding Partner, Tactical Management
From the book · MASCHINENRECHT

Product Liability for AI Software: What the Revised EU Directive Means for Manufacturers, Deployers, and Boards

Product liability for AI software is now codified in the revised EU Product Liability Directive of 2024, which expressly treats software and AI systems as products. Manufacturers face strict liability, substantial updates can trigger new liability exposure, and courts may presume defectiveness in technically complex systems where claimants establish a prima facie case of harm.

Product liability for AI software is the strict, fault-independent liability regime under which developers, providers, and substantial modifiers of AI systems must compensate harm caused by defective products placed on the European market. The revised EU Product Liability Directive of 2024 expressly classifies software, including machine learning models and integrated AI systems, as products. It introduces evidentiary presumptions for technically complex cases, treats substantial updates as new placings on the market, and obliges defendants to disclose relevant technical documentation. Dr. Raphael Nagel (LL.M.), author of MASCHINENRECHT (Machine Law), calls this regime the structural backbone of the European answer to algorithmic harm.

What does the revised Product Liability Directive change for AI software?

The revised EU Product Liability Directive, adopted in 2024, ends the ambiguity that had surrounded intangible products for decades. It expressly brings software, firmware, machine learning models, and integrated AI systems within the product concept, subjects their manufacturers to strict liability, and introduces evidentiary tools tailored to technical opacity.

The original 1985 Directive was built for physical industrial goods. Its structure presupposed a discrete item with a single manufacturer, a distributor, and a defect that could be inspected after the fact. The 2024 revision, as analysed in MASCHINENRECHT by Dr. Raphael Nagel (LL.M.), answers a world in which the product is a learning system, the defect is a statistical drift, and the causal chain runs across training data providers, foundation model developers, integrators, and deployers. Each of these actors now sits inside a recalibrated liability architecture.

Three shifts matter most. First, software and AI are now enumerated categories of product, not merely analogized by national case law. Second, substantial post-market modifications reopen liability exposure. Third, national courts may presume causation and defectiveness where the product is technically complex and the claimant has made a plausible case. For European boards, this is not an incremental reform; it is the foundation on which every AI risk register must be rebuilt.

Why is software now unambiguously a product under EU liability law?

Software is unambiguously a product because the revised Directive names it as such, together with digital manufacturing files, related services essential to software function, and AI systems specifically. In a single provision, the EU legislature resolved what nearly four decades of divergent national case law could not settle definitively.

The practical consequence is that providers of SaaS-delivered AI models, foundation models placed on the market commercially, and embedded machine learning components in medical devices governed by the Medical Device Regulation or in vehicles covered by the German Straßenverkehrsgesetz all fall within the regime. The Amazon recruiting tool withdrawn in 2018, documented in MASCHINENRECHT, would today be analysed as a defective product rather than an internal tool, because its structural gender bias would constitute a defect against the reasonable safety expectations of affected applicants. The COMPAS recidivism scoring system examined by ProPublica in 2016 would face analogous scrutiny if deployed in a European court system.

This classification reshapes the incentive structure. A provider shipping a foundation model under a commercial licence must now analyse its outputs the way a pharmaceutical firm analyses a molecule. Training data composition, known failure modes, adversarial robustness, and documentation quality are no longer engineering preferences. They are elements of compliance with the Directive's assessment of defectiveness, read against the legitimate safety expectations of the affected public and of regulated deployers such as banks, hospitals, and public authorities.

When does an update to an AI system trigger new liability?

An update triggers new liability when it substantially modifies the AI system’s behavior, risk profile, or intended purpose. The revised Directive treats such modifications as a new placing on the market, restarting the liability period and obliging the modifier to reassess conformity against the standards in force at the moment of modification.

This lifecycle doctrine captures, as MASCHINENRECHT puts it, the recognition that the product is no longer simply there: it becomes. A bank that fine-tunes a credit scoring model on six months of fresh data has not performed a neutral maintenance action. If the retrained model exhibits new discrimination patterns or degraded error rates, the bank, as substantial modifier, inherits manufacturer obligations under the Directive. The same logic applies to hospitals deploying updated radiology models under the MDR, and to manufacturers pushing over-the-air updates to autonomous driving stacks operating at SAE Level 3 and Level 4.

Dr. Raphael Nagel (LL.M.), Founding Partner of Tactical Management, underlines the strategic consequence: version control, documented change rationales, and pre-deployment evaluation of each material release are now core legal infrastructure, not engineering hygiene. Organisations without disciplined version governance will struggle to identify which version of a system caused a given harm, and courts will not accept that inability as a defence.
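As a minimal sketch of what such version governance can look like in practice, assuming a Python-based tooling stack: the ReleaseRecord structure below, its field names, and the hashing scheme are illustrative assumptions, not requirements drawn from the Directive or from MASCHINENRECHT.

```python
# Illustrative sketch only: the structure, field names, and hashing
# scheme are hypothetical, not mandated by the Directive or the AI Act.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class ReleaseRecord:
    """One release of one model: what changed, why, on which data,
    with which pre-deployment evaluation results, signed off by whom."""
    model_name: str
    version: str
    change_rationale: str        # documented reason for this release
    training_data_ref: str       # provenance pointer, e.g. a dataset snapshot
    evaluation_results: dict     # pre-deployment metrics per test suite
    approved_by: str             # accountable human or committee sign-off
    released_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def fingerprint(self) -> str:
        """Tamper-evident digest of the record, suitable for archiving
        alongside logs and test protocols as later evidence."""
        canonical = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

record = ReleaseRecord(
    model_name="credit-scoring",
    version="2.4.0",
    change_rationale="Fine-tuned on H1 repayment data; recalibrated cutoffs.",
    training_data_ref="dataset-snapshot-2025-06-30",
    evaluation_results={"auc": 0.81, "demographic_parity_gap": 0.02},
    approved_by="model-risk-committee",
)
print(record.version, record.fingerprint()[:12])
```

The particular schema matters less than the discipline it encodes: every material release carries a documented rationale, evaluation evidence, and an accountable sign-off that can be produced years later.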

How does the Directive shift the burden of proof in AI cases?

The Directive allows national courts to presume defectiveness or causation where the claimant has shown plausible evidence of harm and where the technical complexity of the AI system makes full proof excessively difficult. Defendants must then rebut the presumption by demonstrating the absence of a defect or of a causal link, or face an adverse inference.

This is one of the most consequential changes in European civil liability since the introduction of strict product liability in 1985. A claimant facing a black box credit decision, an erroneous medical triage, or a denied social benefit no longer has to reverse engineer an opaque model to recover damages. The Robodebt scandal in Australia between 2016 and 2019, cited in MASCHINENRECHT, illustrates exactly the asymmetry the new rule targets: hundreds of thousands of wrongful automated assessments, yet no individual claimant could reconstruct the system’s logic alone. Under the revised European framework, that asymmetry is resolved at the evidentiary stage rather than left to chance.

The Directive also obliges defendants to disclose relevant technical documentation on reasoned request. Refusal or inability to produce such documentation strengthens the presumption against the defendant. The practical lesson: logs, training data provenance, test protocols, and incident reports are now exculpatory evidence. Their absence is itself incriminating. The Dutch Toeslagenaffaire between 2013 and 2021, which brought down a Dutch cabinet, demonstrated how catastrophic undocumented automated decision-making becomes once courts and parliamentary inquiries examine it retrospectively.

What strategic steps must manufacturers and deployers take now?

Manufacturers and deployers must treat product liability for AI software as a board-level governance question, not a procurement annex. This means establishing a full AI inventory, classifying each system against AI Act risk tiers, implementing post-market monitoring, and preserving litigation-grade documentation across the entire model lifecycle.
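A rough, hedged illustration of the inventory-and-tiering step, in Python: the RiskTier labels, the keyword heuristic, and the field names are simplifying assumptions, and a real classification requires case-by-case legal analysis against the AI Act's risk categories.

```python
# Simplified, illustrative sketch of an AI inventory with provisional
# AI Act risk tiering. The heuristic below is a first-pass triage only,
# not a legal classification.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"       # e.g. transparency duties for interactive AI
    MINIMAL = "minimal"

# Hypothetical keywords resembling high-risk use cases (credit,
# employment, medical, public benefits); real tiering needs legal review.
HIGH_RISK_HINTS = ("credit scoring", "recruiting", "medical triage",
                   "benefits eligibility")

@dataclass
class InventoryEntry:
    system_name: str
    use_case: str
    owner: str
    monitored: bool  # is post-market monitoring in place?

def provisional_tier(entry: InventoryEntry) -> RiskTier:
    """First-pass triage; legal review must confirm every tier."""
    use_case = entry.use_case.lower()
    if any(hint in use_case for hint in HIGH_RISK_HINTS):
        return RiskTier.HIGH
    if "chatbot" in use_case:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

inventory = [
    InventoryEntry("cs-model", "credit scoring for retail loans",
                   "risk-analytics", monitored=True),
    InventoryEntry("faq-bot", "customer FAQ chatbot", "support",
                   monitored=False),
]
for entry in inventory:
    print(entry.system_name, provisional_tier(entry).value, entry.monitored)
```

Even a crude register of this kind answers the first question any court or regulator will ask: which systems exist, who owns them, and whether anyone is watching them after deployment.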

The Directive operates in tandem with the EU AI Act, in force since August 2024, with the core high-risk obligations applicable from August 2026. A breach of AI Act duties on data quality, logging, or human oversight is not merely a regulatory infraction. Under German law, Section 823(2) BGB treats protective statutes as independent grounds for tort liability, and AI Act obligations qualify as such. A failure to perform a conformity assessment where required is, in Dr. Raphael Nagel's analysis in MASCHINENRECHT, a strong indicator of breach in any subsequent civil liability proceeding.

Tactical Management’s work with European boards confirms a recurring pattern: the companies that will prevail are those that treat the Directive as market architecture rather than compliance cost. Their governance stack documents model versions, training data lineage, and incident response. Their vendor contracts define indemnification, audit rights, and information duties. Their insurance placements, increasingly shaped by reinsurers such as Munich Re and Swiss Re, price governance maturity directly. The rest will absorb the cost of their own opacity through higher premiums, denied coverage, and adverse judgments.

The revised Product Liability Directive is not a technical adjustment to an old statute. It is, as Dr. Raphael Nagel (LL.M.) argues throughout MASCHINENRECHT (Machine Law), the legal backbone of the coming decade of European AI deployment. It codifies the principle that power delegated to machines cannot produce harm without a named, liable human or institution on the other side of the equation. Manufacturers who treat software as untouchable intellectual property and deployers who treat AI as a neutral tool will discover that the Directive has already decided otherwise. The question is no longer whether software can be defective; it is whether the defendant can explain, document, and defend the decisions its system has made.

Tactical Management advises boards, general counsel, and institutional investors on precisely this shift: from reactive compliance toward liability-resilient architecture. The next phase of European AI competition will not be won by the fastest models. It will be won by the organisations that can stand behind them in court, before regulators, and in front of their insurers, with their documentation, version history, and governance intact.

Frequently asked questions

Is open source AI software covered by the revised EU Product Liability Directive?

Open source AI software placed on the market outside commercial activity is generally excluded from the revised Directive. Once the same software is offered commercially, embedded in a paid product, or monetised through support or related services, it enters the regime. Providers of foundation models distributed under open source licences but commercialised via hosted APIs or enterprise agreements should assume that the liability framework applies to the commercial channel. Dr. Raphael Nagel (LL.M.) advises that the legal status of a model depends on the concrete placing on the market, not on its licence label in isolation.

Does the Directive apply to AI systems trained outside the EU?

Yes. The revised Directive applies to any product placed on the EU market, irrespective of where it was developed or trained. A foundation model trained in the United States or in China and offered to European deployers falls within the regime. The provider may need to designate an authorised representative in the EU. Importers and distributors can be held liable in parallel where the original manufacturer is not reachable, a mechanism that mirrors classical product liability architecture and that MASCHINENRECHT identifies as a key enforcement tool against non-European AI providers.

What counts as a substantial modification triggering new liability?

A substantial modification is any post-market change that materially alters the AI system's safety-relevant behaviour, intended purpose, or risk profile. Fine-tuning on new data, architectural changes, the introduction of new output modalities, and material updates to the training set typically qualify. Routine bug fixes that do not affect safety characteristics generally do not. The threshold is fact-specific, and courts will look at whether a reasonable deployer or user would have expected the change to alter risk. Manufacturers should document each release with an explicit substantiality assessment and preserve it for litigation.
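A hedged sketch of such a substantiality assessment: the function, its thresholds, and the record layout below are hypothetical, and passing this check triggers legal review rather than substituting for the fact-specific assessment the Directive requires.

```python
# Illustrative heuristic only: the legal test for "substantial
# modification" is fact-specific; the thresholds here are hypothetical.
def looks_substantial(old: dict, new: dict,
                      metric_tolerance: float = 0.05) -> bool:
    """Flag a candidate release for full legal review if the intended
    purpose or output modalities changed, or if any safety-relevant
    metric moved beyond a pre-agreed tolerance."""
    if new.get("intended_purpose") != old.get("intended_purpose"):
        return True
    if new.get("output_modalities") != old.get("output_modalities"):
        return True
    new_metrics = new.get("safety_metrics", {})
    for metric, old_value in old.get("safety_metrics", {}).items():
        new_value = new_metrics.get(metric)
        # A dropped metric is treated as a red flag, not as neutral.
        if new_value is None or abs(new_value - old_value) > metric_tolerance:
            return True
    return False

previous = {"intended_purpose": "retail credit scoring",
            "output_modalities": ["score"],
            "safety_metrics": {"auc": 0.80, "fpr_gap": 0.03}}
candidate = {"intended_purpose": "retail credit scoring",
             "output_modalities": ["score"],
             "safety_metrics": {"auc": 0.79, "fpr_gap": 0.09}}

# True: the fpr_gap drift exceeds tolerance, so this release would be
# escalated for a documented legal substantiality assessment.
print(looks_substantial(previous, candidate))
```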

How does product liability for AI software interact with the EU AI Act?

The two instruments operate as complementary layers. The AI Act defines ex ante obligations on risk management, data governance, logging, transparency, and human oversight, particularly for high-risk systems. The revised Product Liability Directive addresses ex post compensation when harm occurs. Violations of AI Act duties are powerful indicia of defectiveness and breach, and in jurisdictions such as Germany they can ground parallel tort liability under Section 823(2) BGB. Dr. Raphael Nagel (LL.M.) characterises AI Act compliance in MASCHINENRECHT as the central evidentiary proof of due care in any subsequent product liability proceeding.

Claritáte in iudicio · Firmitáte in executione

For weekly analysis on capital, leadership and geopolitics: follow Dr. Raphael Nagel (LL.M.) on LinkedIn →

Author: Dr. Raphael Nagel (LL.M.)