Dr. Raphael Nagel (LL.M.), Founding Partner, Tactical Management
From the book · MASCHINENRECHT

AI Critical Infrastructure Liability: Who Pays When Algorithms Control Europe’s Grids, Telecoms and Transport

AI critical infrastructure liability governs who pays when algorithmic systems controlling energy grids, telecoms, transport, water or financial market infrastructure fail. Under the EU AI Act, the revised Product Liability Directive of 2024 and DORA in force since January 2025, operators of high-risk KRITIS AI carry the primary burden: validation, logging, human oversight and, increasingly, de facto strict liability coupled with compulsory insurance.

AI critical infrastructure liability is the legal regime allocating responsibility for damages caused by artificial intelligence systems deployed in sectors whose failure would severely disrupt public safety or supply: energy, telecommunications, transport, water, healthcare infrastructure and financial market infrastructure. It combines the EU AI Act’s high-risk obligations, the revised Product Liability Directive of 2024, the DORA Regulation effective January 2025 and national KRITIS statutes. Unlike ordinary product liability, it treats AI as a systemic risk amplifier rather than a neutral tool, imposes ex-ante governance duties on operators, and, as argued in MASCHINENRECHT by Dr. Raphael Nagel (LL.M.), pushes the regulatory frontier toward strict liability and compulsory insurance for operators whose failures cascade across society.

Why is AI critical infrastructure liability a distinct legal category?

AI critical infrastructure liability is distinct because failures in energy grids, telecommunications backbones, transport control and financial market infrastructure produce cascading, cross-sectoral harm that ordinary product liability cannot absorb. The EU legislator acknowledged this by classifying such systems as high-risk under the AI Act and attaching a dense ex-ante governance regime to them.

The KRITIS concept, as developed in Chapter 13 of MASCHINENRECHT, designates installations whose disruption would cause significant damage to public safety or supply. When AI is embedded in KRITIS, it operates as both component and amplifier: a faulty load-forecasting model in a smart grid can trigger regional blackouts; a manipulated network-management algorithm at a telecommunications operator can sever emergency-call infrastructure. The liability question is no longer whether one user was harmed but whether an entire population segment has lost a public service.

The chemical-plant scenario analysed by Dr. Raphael Nagel (LL.M.) in Chapter 14 illustrates the asymmetry. A safety-monitoring AI fails to flag a dangerous pressure combination; the shift supervisor, conditioned by automation bias, does not intervene in time; an explosion follows. Manufacturer, integrator and operator all contributed. Traditional tort law searches for a single culprit. Critical infrastructure reality demands a distributed reconstruction of fault across a chain of actors, which is why the discipline warrants its own category.

Energy is the clearest case. Smart-grid AI integrates renewables and improves efficiency, but it also creates new attack surfaces and new failure modes. A miscalibrated load forecast can destabilise frequency; a successful manipulation can black out a region. No classical liability frame was built for that combination of public dependency and probabilistic logic.

Who is the primary liable party when KRITIS AI fails?

The operator, termed deployer under the EU AI Act, is the primary liable party because the operator controls the context in which the model becomes dangerous: data feeds, human oversight design, escalation rules and failover architecture. Manufacturer and integrator liability sit behind the operator and are activated via joint-and-several regress under national tort law.

German law channels this through § 823 BGB and the doctrine of Verkehrspflichten (safety duties owed to those exposed to a source of danger): whoever creates or controls a source of danger must take measures to protect third parties. KRITIS AI is, by definition, a source of danger. A grid operator that deploys an AI control system without adversarial testing, without post-market monitoring, or without documented human-override capability breaches its Verkehrspflichten, as MASCHINENRECHT develops in depth.

Public-sector KRITIS operators face an additional layer. In Germany, § 35a VwVfG permits fully automated administrative acts only within narrow statutory limits, leaving meaningful human review as the backstop for affected parties. The Dutch toeslagenaffaire (childcare benefits scandal), in which an algorithmic risk system falsely classified tens of thousands of families as fraudulent between 2013 and 2021, shows what happens when that backstop is nominal rather than real. A cabinet resigned; the state paid hundreds of millions in remediation.

Contractual allocation offers limited refuge. As Dr. Raphael Nagel (LL.M.) emphasises in Chapter 8, a supplier contract may support regress but cannot defeat primary liability toward third-party victims. The operator remains the institutional centre of gravity because the operator chose the context, the integrations and the escalation paths.

Why do strict liability and compulsory insurance fit critical infrastructure AI?

Strict liability fits because proof of fault is systematically impossible when harm cascades across operators, integrators and model providers. Compulsory insurance follows because the scale of damage exceeds individual balance sheets. MASCHINENRECHT frames this as the modern analogue of the 20th-century Kfz-Haftpflicht solution (compulsory motor liability insurance) that tamed automotive risk through Gefährdungshaftung (strict keeper liability) plus mandatory cover.

The historical parallel is direct. Once the motor vehicle became ubiquitous, legislators accepted that victims could not be expected to prove individual driver fault in every case and that vehicle fleets exceeded private wealth. The answer was strict liability of the keeper plus compulsory third-party insurance. Critical infrastructure AI, argues Tactical Management Founding Partner Dr. Raphael Nagel (LL.M.), has reached a comparable saturation point in energy, telecoms and transport.

Reinsurers are already pricing this reality. Munich Re and Swiss Re, explicitly referenced in bonus Chapter 17 of MASCHINENRECHT, are developing AI-risk underwriting models. The operational consequence is clear: uninsurable systems will not scale. What cannot be priced will not be deployed in regulated markets, and what can be priced only at extreme premium will lose competitiveness to AI-Act-native competitors.

The private regulatory effect is significant. Insurance becomes an additional governance layer alongside the AI Act and DORA. Underwriters set de facto standards through audit, documentation, model-explanation and performance-reporting requirements. An operator who fails those requirements does not merely pay more; they are excluded from the market for KRITIS AI cover.

How do the AI Act, DORA and the revised PLD interact for KRITIS operators?

The three instruments form a layered architecture. The AI Act imposes ex-ante duties on high-risk systems, DORA adds ICT resilience and adversarial testing for financial market infrastructure from January 2025, and the revised Product Liability Directive of 2024 provides ex-post compensation with rebuttable presumptions of defect and causation for technically complex products, including AI software.

The AI Act, adopted in 2024 with staggered application, requires risk management, high-quality datasets, technical documentation, logging, human oversight, robustness and cybersecurity for all KRITIS-related high-risk systems. Sanctions for prohibited practices reach EUR 35 million or 7% of worldwide annual turnover, whichever is higher; for breaches of high-risk obligations, penalties reach EUR 15 million or 3%, again whichever is higher. These figures, combined with reputational and market-access consequences, transform compliance from overhead into survival infrastructure.
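How the two ceilings interact is simple arithmetic: the applicable maximum is the higher of the fixed amount and the turnover-based percentage. A minimal sketch, with an illustrative turnover figure only:

```python
def ai_act_max_fine(worldwide_turnover_eur: float, breach: str) -> float:
    """Upper bound of an AI Act fine: the higher of the fixed cap and
    the turnover-based cap (Art. 99 AI Act). Illustrative sketch only."""
    caps = {
        "prohibited_practice": (35_000_000, 0.07),   # EUR 35m or 7%
        "high_risk_obligation": (15_000_000, 0.03),  # EUR 15m or 3%
    }
    fixed_cap, turnover_share = caps[breach]
    return max(fixed_cap, turnover_share * worldwide_turnover_eur)

# A hypothetical grid operator with EUR 2 bn worldwide annual turnover:
print(ai_act_max_fine(2_000_000_000, "high_risk_obligation"))  # 60000000.0
```

For any operator whose turnover exceeds EUR 500 million, the percentage cap dominates the fixed cap for both breach classes, which is why the exposure scales with enterprise size rather than with the price of the AI system.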

DORA, fully applicable from January 2025, obliges financial entities and their ICT third parties to conduct resilience testing, including adversarial testing of AI systems, and to manage third-party risk through enforceable contractual clauses and audit rights. Where a KRITIS operator is also a financial entity, AI Act and DORA duties stack rather than substitute. Both must be documented, both can be audited, both generate independent liability exposure.
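What an automated adversarial check can look like in practice is easy to sketch. This is not a DORA-compliant test programme (threat-led penetration testing under DORA is far broader); it only shows the shape of a perturbation-based resilience probe, and the toy model, noise level, trial count and tolerance band are all illustrative assumptions:

```python
import random

def adversarial_smoke_test(model, inputs: list[float],
                           noise: float = 0.05, trials: int = 100,
                           max_deviation: float = 0.10) -> bool:
    """Perturb inputs with bounded noise and check that the model's
    output stays within a tolerance band around the baseline."""
    baseline = model(inputs)
    for _ in range(trials):
        perturbed = [x * (1 + random.uniform(-noise, noise)) for x in inputs]
        if abs(model(perturbed) - baseline) > max_deviation * abs(baseline):
            return False  # unstable under small input perturbations
    return True

# Toy stand-in for a load-forecasting model: a simple average of readings.
forecast = lambda readings: sum(readings) / len(readings)
print(adversarial_smoke_test(forecast, [410.0, 395.0, 402.0]))  # True
```

The point for liability purposes is the artefact, not the algorithm: a documented, repeatable test of this kind is exactly what an auditor or court will ask the operator to produce.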

The revised PLD closes the third side of the triangle. By expressly treating software and AI as products, by permitting courts to presume defectiveness and causation in technically complex cases, and by treating substantial updates as new placings on the market, the 2024 directive gives victims of KRITIS AI failures a credible litigation path that the 1985 regime could not provide.

What governance architecture must KRITIS AI operators build?

KRITIS AI operators must build an integrated governance architecture covering inventory, risk classification, human oversight, post-market monitoring, incident response, third-party management and documentation. The AI Act requires this for high-risk systems; the market rewards it through lower insurance premiums, higher credit ratings and preferred access to public procurement.
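One concrete starting point is the inventory itself: a register that forces every deployed system to name its risk class, oversight owner, monitoring plan and escalation runbook. A minimal sketch; the field names and the example entry are hypothetical, chosen to mirror the governance pillars listed above:

```python
from dataclasses import dataclass, field

@dataclass
class KritisAiSystemRecord:
    """One entry in a KRITIS AI inventory (illustrative schema)."""
    system_id: str
    purpose: str                  # e.g. "regional load forecasting"
    risk_class: str               # e.g. "high-risk (AI Act Annex III)"
    human_oversight_owner: str    # named role with actual override power
    monitoring_plan: str          # reference to post-market monitoring doc
    incident_runbook: str         # escalation and failover procedure
    third_party_providers: list[str] = field(default_factory=list)
    documentation_refs: list[str] = field(default_factory=list)

registry = [
    KritisAiSystemRecord(
        system_id="LOAD-FC-01",
        purpose="regional load forecasting",
        risk_class="high-risk (AI Act Annex III, critical infrastructure)",
        human_oversight_owner="shift supervisor, control room North",
        monitoring_plan="PMM-2025-03",
        incident_runbook="IR-GRID-7",
        third_party_providers=["model vendor A"],
        documentation_refs=["tech-doc/LOAD-FC-01"],
    ),
]
```

A register of this shape doubles as the index an underwriter, rating analyst or supervisory authority will request first.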

Real human oversight, as developed in Chapter 3 of MASCHINENRECHT, requires five conditions: sufficient time to review, access to relevant information including system logic, the competence to evaluate it, institutional backing for deviation and actual override power. Absence of any one of them turns oversight into decoration, and decoration, in litigation, shifts blame from the architect to the operator who signed.
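Because the five conditions are conjunctive, they translate naturally into a checklist that fails closed. A minimal sketch; the condition names are paraphrases of the book's criteria, not statutory terms:

```python
OVERSIGHT_CONDITIONS = [
    "sufficient_review_time",
    "access_to_system_logic",
    "competence_to_evaluate",
    "institutional_backing_for_deviation",
    "actual_override_power",
]

def oversight_is_real(assessment: dict[str, bool]) -> bool:
    """Oversight counts only if ALL five conditions hold; one missing
    condition reduces it to decoration in the book's terms."""
    return all(assessment.get(c, False) for c in OVERSIGHT_CONDITIONS)

# Example: override power exists on paper, but the operator has no time
# to review before the system acts.
print(oversight_is_real({
    "sufficient_review_time": False,
    "access_to_system_logic": True,
    "competence_to_evaluate": True,
    "institutional_backing_for_deviation": True,
    "actual_override_power": True,
}))  # False
```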

Post-market monitoring is not optional. The AI Act requires providers of high-risk systems to implement it, and courts will increasingly draw adverse inferences where drift, bias deterioration or recurrent incidents went unobserved. Documentation is the twin discipline: where no log exists, courts are likely to apply the revised PLD's presumption of defect for technically complex products against the operator rather than requiring proof from the claimant.
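In operational terms, the duty couples a monitoring metric with a logged record of every check, because the log entry itself is the compliance artefact. A minimal drift-monitoring sketch; the error metric, tolerance factor and figures are invented for illustration:

```python
import logging
from statistics import mean

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pmm")  # post-market monitoring log

def check_drift(baseline_errors: list[float], recent_errors: list[float],
                tolerance: float = 1.5) -> bool:
    """Flag drift when the recent average error exceeds the baseline
    average by more than `tolerance` times."""
    baseline, recent = mean(baseline_errors), mean(recent_errors)
    drifted = recent > tolerance * baseline
    # Absence of records like this one is what triggers adverse
    # inferences in litigation.
    log.info("drift_check baseline=%.3f recent=%.3f drifted=%s",
             baseline, recent, drifted)
    return drifted

check_drift([0.8, 1.1, 0.9], [1.9, 2.2, 2.0])  # logs and returns True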

Investor due diligence increasingly mirrors regulator expectations. Private-equity funds, rating agencies such as Moody's and S&P, and institutional investors now probe AI-governance maturity as part of ESG and credit analysis. Tactical Management advises boards to treat AI critical infrastructure liability as a capital-markets topic, not a back-office one: the cost of governance is recovered through cheaper capital and superior insurance terms.

AI critical infrastructure liability is not a compliance topic at the edge of the enterprise. It is the constitutional layer of the next economic era. Whoever cannot demonstrate, ex ante, that the cascade risks of their AI systems are mapped, logged, insured and auditable will not operate in Europe's regulated infrastructure markets. This is the structural claim that runs through MASCHINENRECHT by Dr. Raphael Nagel (LL.M.), Founding Partner of Tactical Management: liability is no longer a defensive discipline; it is market architecture.

For boards, supervisory councils and institutional investors, the strategic consequence is concrete. Pricing a KRITIS AI asset without pricing its liability profile is pricing fiction. The operators who survive the next regulatory cycle will be those who treated AI critical infrastructure liability as an investment in market access, not as a regulatory cost to minimise. Those who postpone will discover that the combination of AI Act sanctions, DORA resilience duties, the revised Product Liability Directive's presumptions of defect and emerging compulsory insurance regimes has closed the door while they were still reviewing the budget.

The age of attribution has begun. In critical infrastructure, it begins first, and it begins now.

Frequently asked

Is AI used in critical infrastructure automatically classified as high-risk under the EU AI Act?

Yes, in most cases. The AI Act's Annex III lists AI systems used in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating and electricity among high-risk uses. That triggers the full high-risk regime: risk management, data governance, logging, technical documentation, transparency toward deployers, human oversight, accuracy, robustness and cybersecurity. A conformity assessment must also be completed before the system is put into service; this is primarily the provider's duty, but an operator that substantially modifies the system or deploys it under its own name assumes it. Deploying a KRITIS AI without this regime is itself a standalone breach and a strong indicator of negligence in any subsequent liability proceeding.

Can a KRITIS operator contract away AI liability through supplier agreements?

No, not toward affected third parties. Supplier and service-level agreements can shift risk internally between operator, integrator and manufacturer and can facilitate regress, but they cannot extinguish primary liability toward victims. Under German law, § 307 BGB and the case law on Verkehrspflichten prevent blanket disclaimers, especially where bodily injury or gross negligence is in play. In the B2B context there is greater flexibility, yet clauses that shift the entire risk of a high-risk KRITIS AI onto the counterparty can be struck down as unreasonable. MASCHINENRECHT treats contractual allocation as a regress tool, never as a shield against the public.

How does DORA interact with AI Act obligations for financial infrastructure?

The two instruments stack. DORA, applicable from January 2025, requires financial entities to test digital operational resilience, including adversarial testing of AI systems, and to manage ICT third-party risk with enforceable audit rights. The AI Act separately imposes risk management, logging and human oversight on high-risk systems. Where a bank, CCP, exchange or critical ICT provider deploys AI in credit scoring, trading, fraud detection or market surveillance, both regimes apply simultaneously and generate independent liability exposure. Compliance programmes that treat them in isolation systematically underestimate the actual duty landscape.

Does EU law require compulsory insurance for critical infrastructure AI?

Not yet at EU level, but the trajectory is clear. The AI Act does not currently mandate insurance, and the AI Liability Directive proposal stopped short of compulsory cover. However, several Member States and academic voices advocate compulsory AI liability insurance for high-risk KRITIS use, modelled on the Kfz-Haftpflicht solution. In parallel, reinsurers such as Munich Re and Swiss Re are already conditioning cover on governance maturity, effectively producing private compulsory insurance through underwriting. Dr. Raphael Nagel (LL.M.) argues in MASCHINENRECHT that statutory compulsion is only a matter of time for cascading-risk sectors.

What penalties apply if a critical infrastructure operator deploys AI without conformity assessment?

Financial penalties under the AI Act reach up to EUR 15 million or 3% of worldwide annual turnover, whichever is higher, for breaches of high-risk obligations, and up to EUR 35 million or 7% for prohibited practices. Beyond fines, authorities can order market withdrawal, recalls and corrective action. In civil proceedings, the absence of a required conformity assessment functions as strong evidence of breach of duty, activating the revised Product Liability Directive's presumption of defect for technically complex products. The total cost, including reputational damage and loss of public procurement eligibility, regularly exceeds the headline fine.

Claritáte in iudicio · Firmitáte in executione

For weekly analysis on capital, leadership and geopolitics: follow Dr. Raphael Nagel (LL.M.) on LinkedIn →

Author: Dr. Raphael Nagel (LL.M.)