Dr. Raphael Nagel (LL.M.), Founding Partner, Tactical Management
From the work · MASCHINENRECHT

Operator Liability for AI Systems: Why the Deployer Becomes the Central Risk Carrier in Europe

Operator liability places the deployer, the entity that embeds an AI system into real processes, at the centre of European AI risk allocation. Under the EU AI Act, deployers carry context validation, human oversight and monitoring duties that convert abstract model risk into concrete liability in healthcare, finance and public administration.

Operator liability for AI systems is the body of legal duties assigned to the deployer: the natural or legal person who operates an AI system in a real context, distinct from the provider, the integrator and the user. Under Article 26 of the EU AI Act, the deployer must use a high risk AI system in line with its instructions, ensure meaningful human oversight, monitor its behaviour, retain logs and inform affected persons. In the analysis of Dr. Raphael Nagel (LL.M.) in MASCHINENRECHT, the deployer is not technically the most culpable actor, but the institutionally most tangible one, because the deployer selects the context in which a statistical model becomes a consequential decision.

Why the deployer is the real centre of AI liability

The deployer sits where abstract model risk becomes concrete harm. Manufacturers ship capabilities; deployers choose where those capabilities act on people, capital and infrastructure. Dr. Raphael Nagel (LL.M.) in MASCHINENRECHT calls the deployer the context architect and therefore the first institutional address for AI liability claims in Europe.

European product liability keeps the manufacturer responsible for defective design, yet claimants sue the most tangible counterparty first. In AI cases that is the deployer. Under German joint and several liability the deployer pays and then seeks recourse upstream, a process that requires contractual infrastructure most firms still lack. Deployers therefore absorb first line risk whether or not the defect originated in their own technical domain.

The allocation mirrors the Halterhaftung, the keeper's strict liability, of the German Strassenverkehrsgesetz, under which the vehicle keeper, not the manufacturer, carries core risk. The analogy is powerful: AI, like a vehicle, concentrates danger that only the operator controls in context. Tactical Management builds its deployer advisory practice around this insight, treating governance as the operational backbone of any credible AI risk posture for regulated boards and investors.

Context as the decisive variable

The same model can be safe in a validated radiology unit and catastrophic in a triage queue it was never trained for. Context selection is a deployer act. When a hospital imports a cardiac triage model trained predominantly on male patients and a woman with atypical symptoms is deprioritised, the deployer, not the vendor, chose that clinical population.

What Article 26 of the EU AI Act demands from deployers

Article 26 of the EU AI Act imposes explicit duties on deployers of high risk AI: use the system according to its instructions, assign human oversight to competent natural persons, monitor operation, retain automatically generated logs, and inform affected workers and data subjects. These are hard duties running directly to the deployer, not aspirational principles.

The detail is decisive. Oversight must be exercised by a natural person with competence, training and authority. A compliance officer bolted onto an algorithmic fraud engine without time, information or institutional backing does not satisfy Article 26, whatever the title on the organigram. MASCHINENRECHT labels this the illusion of control: formal presence without substantive power, a liability trap rather than a defence.

Enforcement underscores the gravity. The AI Act authorises fines of up to 35 million euros or 7 percent of global annual turnover, whichever is higher, for prohibited practices, with lower tiers at 15 million or 3 percent and 7.5 million or 1 percent. Deployer breaches sit inside the 3 percent tier, a figure that translates straight into board level attention and investor reporting cycles.
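To make the order of magnitude tangible, a minimal arithmetic sketch, assuming the whichever-is-higher rule that Article 99 applies to undertakings; the tier labels are illustrative, not statutory terms:

```python
# Sketch of the fine cap arithmetic described above, assuming the
# "whichever is higher" rule of Article 99 for undertakings.
# Tier labels are illustrative, not statutory terms; amounts in euros.

TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # Art. 5 violations
    "operator_obligations": (15_000_000, 0.03),   # incl. deployer duties under Art. 26
    "incorrect_information": (7_500_000, 0.01),   # misleading authorities or notified bodies
}

def fine_cap(tier: str, global_turnover: float) -> float:
    """Maximum fine for a tier: the fixed sum or the turnover share, whichever is higher."""
    fixed, share = TIERS[tier]
    return max(fixed, share * global_turnover)

# A deployer with 20 billion euros of global turnover sits in the 3 percent tier:
print(f"{fine_cap('operator_obligations', 20e9):,.0f} EUR")  # 600,000,000 EUR
```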

Logging and traceability

Article 12 requires automatic event logging across the lifecycle of a high risk system, and deployers must retain those logs. Without them the deployer cannot reconstruct what the system did, which is fatal under the revised Product Liability Directive of 2024, where courts may presume defect and causation when complexity or missing documentation blocks the claimant’s proof.
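What such a retained log might look like in practice: a minimal sketch of an append-only, per-decision record. The schema and field names are illustrative assumptions; Article 12 mandates automatic event logging but prescribes no particular format.

```python
# Illustrative per-decision log record a deployer might retain. Field names
# are assumptions for illustration; Article 12 does not prescribe this schema.
import json
import hashlib
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionLogRecord:
    system_id: str              # which high risk system acted
    model_version: str          # exact version, needed for reconstruction
    timestamp: str              # when the decision was rendered
    input_reference: str        # pointer to the input data, not the data itself
    output: str                 # the decision or score produced
    threshold_in_force: float   # calibration active at decision time
    human_reviewer: str | None  # who exercised oversight, or None if autonomous

def append_record(path: str, record: DecisionLogRecord) -> str:
    """Append a record as one JSON line; return its hash for tamper evidence."""
    line = json.dumps(asdict(record), sort_keys=True)
    digest = hashlib.sha256(line.encode()).hexdigest()
    with open(path, "a") as f:
        f.write(line + "\n")
    return digest

record = DecisionLogRecord(
    system_id="credit-scoring-eu",
    model_version="2.4.1",
    timestamp=datetime.now(timezone.utc).isoformat(),
    input_reference="application/8912",
    output="declined",
    threshold_in_force=0.72,
    human_reviewer=None,
)
print(append_record("decision_log.jsonl", record))
```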

Context validation and post market monitoring as deployer duties

Context validation is the deployer duty most often neglected and most often decisive in court. A model validated on one population is not automatically valid on another. A deployer that imports a US trained credit model into a European retail bank without revalidation breaches its duty of care before the first decision is rendered against an applicant.

The revised Product Liability Directive sharpens this duty through evidentiary presumptions triggered when complexity or documentation gaps block proof of defect. A deployer who cannot produce validation records on its own population cannot rebut those presumptions. Courts do not need to find the defect; they infer it from the absence of validation evidence, a radical reallocation of the Beweislast, the burden of proof, in favour of claimants.
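A hedged sketch of what producible validation evidence can amount to: subgroup performance measured on the deployer's own population, with gaps flagged for the record. The metric and the 0.05 gap threshold are illustrative assumptions, not regulatory values.

```python
# Minimal sketch of context revalidation: scoring an imported model against
# the deployer's own population, broken down by subgroup.
from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of (subgroup, predicted_label, true_label) tuples."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        correct[group] += int(predicted == actual)
    return {g: correct[g] / total[g] for g in total}

def validation_report(records, max_gap=0.05):
    """Flag subgroups whose accuracy trails the best subgroup by more than max_gap."""
    acc = subgroup_accuracy(records)
    best = max(acc.values())
    return {g: {"accuracy": a, "flagged": best - a > max_gap} for g, a in acc.items()}

# Example: a US trained credit model rechecked on a European retail portfolio.
sample = [
    ("self_employed", 1, 1), ("self_employed", 0, 1), ("self_employed", 0, 1),
    ("salaried", 1, 1), ("salaried", 0, 0), ("salaried", 1, 1),
]
print(validation_report(sample))
```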

MASCHINENRECHT describes deployment as the translation of probability into action. The deployer calibrates thresholds, decides when the system escalates and when it acts autonomously, and sets the human review budget. Each calibration is a liability relevant act. Amazon discontinued its automated recruiting tool in 2018 because deployment had ratified historical male bias as a statistical norm, a textbook context validation failure.
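A minimal sketch of those calibration choices, assuming a simple two-threshold design; the threshold values and the treatment of the review budget as the escalation share are illustrative, not drawn from the book.

```python
# Sketch of deployer calibration: where the system acts autonomously, where
# it escalates to a human, and what the review band costs. Assumed values.

def route(score: float, auto_approve: float = 0.85, auto_decline: float = 0.30) -> str:
    """Route a model score: act autonomously at the extremes, escalate in between."""
    if score >= auto_approve:
        return "approve"        # autonomous act, threshold set by the deployer
    if score <= auto_decline:
        return "decline"        # autonomous act, threshold set by the deployer
    return "human_review"       # escalation band, consumes the review budget

def review_load(scores, **thresholds):
    """Share of decisions escalated to humans: the budget the deployer funds."""
    routed = [route(s, **thresholds) for s in scores]
    return routed.count("human_review") / len(routed)

scores = [0.92, 0.74, 0.51, 0.12, 0.88, 0.40, 0.66]
print(review_load(scores))  # widening the band raises cost; narrowing it raises risk
```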

Drift and post market monitoring

Models drift as inputs shift and edge cases accumulate. Deployers of high risk AI must monitor operation under Article 26(5) of the AI Act and feed relevant data back into the provider's post market monitoring system under Article 72. A deployer that misses statistically visible drift will be judged against the sorgfaeltiger Betreiber, the diligent operator, who would have monitored monthly against accuracy, bias and outcome metrics tied to the specific population.
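One way such monthly monitoring can be implemented is a distribution drift check. The sketch below uses the population stability index, a common industry metric; the 0.2 alert level is a conventional rule of thumb, not a regulatory threshold.

```python
# Hedged sketch of a monthly drift check using the population stability index (PSI).
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """PSI between two binned score distributions (bin shares each summing to 1)."""
    eps = 1e-6  # guard against empty bins
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

# Score distribution at validation time vs the current month, in five bins.
baseline = [0.10, 0.25, 0.30, 0.25, 0.10]
this_month = [0.05, 0.15, 0.25, 0.30, 0.25]

value = psi(baseline, this_month)
if value > 0.2:  # conventional alert level, assumed here
    print(f"PSI {value:.3f}: material drift, trigger revalidation and notify the provider")
```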

Sector cases: healthcare, finance, public administration

Operator liability looks different in each sector, but the logic is constant: whoever embeds the model in real decisions carries the operational weight of its errors. MASCHINENRECHT works through three sectors where deployer duties have already crystallised into enforceable standards backed by documented case material.

In healthcare, hospitals deploying AI diagnostic systems carry duties under both the AI Act and the EU Medical Device Regulation. A radiology department that puts a diagnostic AI into production without clinical validation against its own patient outcomes breaches its organisational duty. German joint and several liability then allocates the loss across manufacturer, hospital and treating physician according to causation.

In finance, algorithmic credit scoring is classified as high risk under Annex III of the AI Act and is additionally supervised under ECB expectations and DORA, applicable since January 2025. Banks must explain, validate and monitor their models. A credit institution unable to justify a rejection breaches the transparency requirements of Article 13 of the AI Act and its DORA obligations on ICT and AI third party risk simultaneously.

The Dutch Toeslagenaffaire as warning

Between 2013 and 2021 the Dutch tax authority used a risk profiling algorithm that wrongly flagged tens of thousands of families, many with migrant backgrounds, as fraudulent. The cabinet resigned in January 2021. The case shows how deployer negligence translates into rule of law failure. Article 22 GDPR and, in Germany, § 35a VwVfG now set strict limits on fully automated administrative acts and anchor a citizen's right to human review.

The age of attribution

The shift to operator liability is not a regulatory accident. It is the European answer to an architectural reality: AI becomes dangerous in the context a deployer chooses, with the thresholds a deployer calibrates, under the oversight a deployer funds. MASCHINENRECHT by Dr. Raphael Nagel (LL.M.) develops this thesis across manufacturer, integrator, deployer and user, but it is the deployer who absorbs first line litigation and first line regulatory attention.

Boards that still treat AI governance as a compliance overhead are underwriting a risk they cannot see on the balance sheet. Boards that treat deployer governance as liability infrastructure convert it into faster regulatory clearance, lower insurance premia, stronger access to institutional capital, and measurably better positions in recourse disputes with upstream suppliers.

The next decade will not reward the most aggressive AI adopters. It will reward the most attributable ones, the deployers who can show what the system did, why they allowed it and how they monitored its drift. Tactical Management advises boards and investors across Europe on exactly this question, building deployer governance as an asset class rather than a cost centre.

The legal scaffolding is now in place: the AI Act, the revised Product Liability Directive, sectoral regimes such as DORA and the MDR, and the familiar instruments of § 823 BGB and Article 22 GDPR. What remains missing in most organisations is the institutional choice to take the deployer role seriously. Dr. Raphael Nagel (LL.M.) argues that the age of attribution has begun, and that the deployer is its central figure. The firms that internalise that conclusion now will set the terms under which competitors, insurers and regulators assess AI risk for the rest of the decade.

Frequently asked questions

Who qualifies as an AI deployer under the EU AI Act?

The EU AI Act defines a deployer as any natural or legal person, public authority, agency or other body using an AI system under its authority, except for personal non professional activities. The deployer operates the system in a specific context, distinct from the provider that places the system on the market and the distributor in the supply chain. Hospitals using diagnostic AI, banks running credit scoring and public agencies using risk profiling are all deployers and carry the Article 26 obligations.

What is the difference between a provider and a deployer under the AI Act?

The provider develops or has developed an AI system and places it on the market under its own name. The deployer uses that system under its own authority in a real context. Providers carry duties around design, data governance, documentation and conformity assessment. Deployers carry duties around appropriate use, human oversight, monitoring and information of affected persons. The same entity can be both, for example a bank that builds its own credit model and also operates it against its customer base.

Can a deployer contractually shift AI liability to the manufacturer?

Only partially. Contracts allocate risk between the parties but cannot extinguish primary liability toward injured third parties. Under § 307 BGB in Germany and equivalent consumer protection rules across the EU, blanket exclusions are invalid. The AI Act assigns deployer obligations directly, not through contract. Sophisticated deployers negotiate audit rights, incident reporting, post market monitoring data sharing and indemnities, but those govern recourse after the deployer has paid the claimant, not the claimant's primary claim.

What documentation must a deployer keep for a high risk AI system?

Under Article 26 and Article 12 of the AI Act, deployers must retain automatically generated logs for an appropriate period, keep instructions for use, document human oversight arrangements, and record incidents and corrective actions. Under the revised Product Liability Directive of 2024, failure to produce relevant documentation can trigger a presumption of defect and causation. Practical best practice adds versioned model records, validation reports on the deployer’s own population, threshold calibration notes and monthly monitoring reviews.
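As a sketch of how a deployer might operationalise that list, the following illustrative checklist maps each retention item to a review cadence; item names and cadences are assumptions, not statutory requirements.

```python
# Illustrative retention checklist per model version; names and review
# cadences are assumptions, not statutory requirements.
RETENTION_CHECKLIST = {
    "automatic_logs":                   {"source": "AI Act Art. 12/26", "review": "continuous"},
    "instructions_for_use":             {"source": "AI Act Art. 26",    "review": "per version"},
    "oversight_arrangements":           {"source": "AI Act Art. 26",    "review": "per version"},
    "incident_records":                 {"source": "AI Act Art. 26",    "review": "per incident"},
    "validation_report_own_population": {"source": "practice",          "review": "per version"},
    "threshold_calibration_notes":      {"source": "practice",          "review": "per change"},
    "monitoring_reviews":               {"source": "practice",          "review": "monthly"},
}

def missing_items(held: set[str]) -> list[str]:
    """Return checklist items the deployer cannot currently produce."""
    return sorted(set(RETENTION_CHECKLIST) - held)

print(missing_items({"automatic_logs", "instructions_for_use"}))
```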

How does operator liability interact with DORA in financial services?

DORA, applicable since January 2025, imposes digital operational resilience duties on EU financial entities, covering their ICT and AI third party providers. For deployers of algorithmic credit scoring or algorithmic trading, DORA duties run alongside AI Act Article 26 obligations. Banks must manage provider concentration, run threat led penetration tests, document ICT risk and report major incidents. A breach of DORA supports a parallel finding of AI Act deployer negligence and feeds directly into civil liability under § 823(2) BGB as the violation of a protective statute, a Schutzgesetz.

Claritáte in iudicio · Firmitáte in executione

For weekly analysis on capital, leadership and geopolitics: follow Dr. Raphael Nagel (LL.M.) on LinkedIn →

Author: Dr. Raphael Nagel (LL.M.)