Dr. Raphael Nagel (LL.M.)
From the book · MASCHINENRECHT

AI Medical Devices Liability Under MDR and the EU AI Act: How Manufacturer, Hospital and Physician Share the Risk

AI medical devices liability under MDR combines two regimes: the Medical Device Regulation for clinical safety and the EU AI Act for high-risk AI. Liability splits across manufacturer, hospital as deployer, and treating physician. Dr. Raphael Nagel (LL.M.) argues in MASCHINENRECHT that this three-layer structure redefines European healthcare accountability.

AI medical devices liability under MDR is the legal framework governing responsibility for harm caused by AI systems that qualify as medical devices under Regulation (EU) 2017/745 and are simultaneously regulated as high-risk AI under the EU AI Act. It allocates accountability across three actors: the manufacturer, bound by the revised Product Liability Directive of 2024; the healthcare provider, classified as deployer under Article 26 of the AI Act; and the treating physician, bound by professional standards of care. In MASCHINENRECHT, Dr. Raphael Nagel (LL.M.) frames this architecture as a deliberately layered system in which no actor can externalize risk through contractual disclaimers or CE marking alone.

What does AI medical devices liability under MDR cover?

AI medical devices liability under MDR covers any AI system qualifying as a medical device under Regulation (EU) 2017/745 that causes patient harm. It allocates liability across three distinct actors: the manufacturer under the revised EU Product Liability Directive, the hospital as deployer under the EU AI Act, and the treating physician under professional duty of care.

The Medical Device Regulation, applicable across the EU since May 2021, classifies software as a medical device when its intended purpose includes diagnosis, prevention, monitoring, prediction, or treatment of disease. Under Rule 11 of Annex VIII, most diagnostic or triage AI falls into Class IIa or higher, which mandates conformity assessment by a notified body. CE marking is therefore not cosmetic but a substantive safety representation enforceable by national market surveillance authorities such as BfArM in Germany or AEMPS in Spain.

On top of MDR, the EU AI Act classifies medical AI as high-risk under Annex III where it performs diagnosis, triage, or clinical decision support. Dr. Raphael Nagel (LL.M.) argues in MASCHINENRECHT that this produces a Doppelregulierung, a dual regulation: two parallel regimes with overlapping yet non-identical requirements on risk management, data governance, logging, human oversight, and post-market monitoring. Neither regime absorbs the other; both apply in full.

This duality reshapes litigation. A patient injured by an AI diagnosis can sue the manufacturer under the revised Product Liability Directive, which since 2024 expressly includes AI software within the product definition. The claimant may simultaneously pursue the hospital for organizational negligence and the physician for breach of the professional standard. The "tool fiction", as MASCHINENRECHT puts it, has collapsed in healthcare.

How do MDR and the EU AI Act interact for diagnostic software?

MDR and the EU AI Act impose parallel, cumulative obligations on AI diagnostic software. MDR demands clinical evaluation and notified-body certification; the AI Act demands risk management, data quality, transparency, and human oversight. Neither regime displaces the other, and compliance with one does not create any presumption of compliance with the other.

Clinical evaluation under MDR Article 61 requires manufacturers to demonstrate safety and performance through clinical data, not merely technical benchmarks. For an AI system, this means validating performance across representative patient populations, stratified by age, sex, and comorbidity. The MASCHINENRECHT cardiology case shows the cost of failure: a triage system trained predominantly on male patients misclassified atypical female heart-attack symptoms, leading to a delayed infarct diagnosis and avoidable harm.
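
To make the stratification requirement concrete, here is a minimal Python sketch of the kind of subgroup check a manufacturer's clinical evaluation could run. The record fields ("sex", "mi_confirmed", "ai_high_priority") and the acceptable gap are illustrative assumptions, not values prescribed by MDR.

```python
# A minimal sketch of the subgroup validation MDR Article 61 implies for AI
# devices. Field names and the acceptable gap are illustrative assumptions.

from collections import defaultdict

def stratified_sensitivity(records, group_key):
    """Sensitivity (true-positive rate) per subgroup, e.g. per sex or age band."""
    tp = defaultdict(int)  # confirmed infarct that the AI flagged high priority
    fn = defaultdict(int)  # confirmed infarct that the AI missed
    for r in records:
        if r["mi_confirmed"]:
            if r["ai_high_priority"]:
                tp[r[group_key]] += 1
            else:
                fn[r[group_key]] += 1
    return {g: tp[g] / (tp[g] + fn[g]) for g in set(tp) | set(fn)}

def subgroup_gap(sensitivity_by_group, max_gap=0.05):
    """Flag a liability-relevant performance gap between subgroups."""
    values = list(sensitivity_by_group.values())
    gap = max(values) - min(values)
    return gap > max_gap, gap

# The MASCHINENRECHT cardiology pattern would surface here as a large
# sensitivity gap between male and female patients in the validation cohort.
cohort = [
    {"sex": "m", "mi_confirmed": True, "ai_high_priority": True},
    {"sex": "f", "mi_confirmed": True, "ai_high_priority": False},
    # ... full validation cohort, stratified by age, sex, and comorbidity
]
flagged, gap = subgroup_gap(stratified_sensitivity(cohort, "sex"))
```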

The AI Act adds data governance obligations under Article 10, which require training, validation, and testing data to be relevant, representative, and free of discriminatory error. A Class IIb AI device that passes MDR notified-body review but fails Article 10 remains non-compliant with EU law, exposing its manufacturer to administrative fines of up to 15 million euros or 3 percent of worldwide annual turnover, whichever is higher.
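
What an Article 10-style representativeness check could look like in practice is sketched below. The tolerance and field names are assumptions for illustration; the AI Act prescribes the duty, not the method.

```python
# Hedged illustration of an Article 10-style representativeness check:
# compare the demographic mix of the training data against the intended
# patient population. Tolerance and field names are assumptions.

def demographic_shares(records, key):
    """Share of each subgroup (e.g. sex, age band) in a dataset."""
    counts = {}
    for r in records:
        counts[r[key]] = counts.get(r[key], 0) + 1
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()} if total else {}

def representativeness_report(training_records, population_shares, key, tolerance=0.10):
    """Return subgroups whose training share deviates from the intended
    population share by more than `tolerance` (absolute)."""
    train_shares = demographic_shares(training_records, key)
    gaps = {}
    for group, target in population_shares.items():
        actual = train_shares.get(group, 0.0)
        if abs(actual - target) > tolerance:
            gaps[group] = {"training_share": actual, "population_share": target}
    return gaps

# A training set that is 85 percent male against a roughly balanced intended
# population would be flagged here before notified-body review, not after:
report = representativeness_report(
    [{"sex": "m"}] * 85 + [{"sex": "f"}] * 15,
    {"m": 0.5, "f": 0.5},
    key="sex",
)
```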

For hospital executive boards (Vorstand) and medical directors, the operational consequence is unambiguous: CE marking alone does not discharge the AI Act's deployer duties under Article 26, which include using the system according to instructions, assigning competent human oversight, and monitoring operation. Tactical Management advises medtech clients and hospital groups that both regimes must be mapped jointly from day one, not sequentially.

Where does liability split between manufacturer, hospital and physician?

Liability splits across three layers that operate as joint and several debtors toward the injured patient. The manufacturer bears product liability for defect, the hospital bears organizational liability for deployment context, and the physician bears personal liability for professional negligence. Each layer answers a distinct question about the harm and none can be eliminated by contract.

The manufacturer is responsible for design choices: training-data composition, model architecture, robustness thresholds, uncertainty communication, and post-market surveillance. In MASCHINENRECHT, Dr. Raphael Nagel (LL.M.) identifies the choice between explainable and opaque models as itself a liability-relevant decision. A deep-learning system that cannot explain why it downgraded a patient’s triage priority weakens the manufacturer’s defense when harm materializes and a court applies the presumption rules of the revised Product Liability Directive.

The hospital, as deployer under Article 26 of the AI Act, answers for the context: clinical validation on the local patient population, staff training, escalation protocols, audit mechanisms, and incident documentation. A hospital that transfers a radiology AI into production without local validation has failed its organizational duty of care, even if the device is CE-marked and the manufacturer itself is fully AI-Act-compliant.

The physician’s liability is residual but real. Automation bias, analyzed extensively in MASCHINENRECHT, explains why clinicians adopt AI recommendations in roughly 85 percent of cases even when their own clinical intuition diverges. A physician who follows a manifestly implausible recommendation without independent review breaches the professional standard. German courts will apportion internal shares under § 840 and § 254 BGB, and comparable continental doctrines apply across the single market.

What do the MASCHINENRECHT case studies reveal about medical AI failures?

The MASCHINENRECHT case studies reveal that medical AI failures are almost never attributable to a single actor. They emerge from training-data bias at the manufacturer, insufficient validation at the hospital, and uncritical adoption by the physician. Only an analysis that treats all three as joint and several debtors captures the harm accurately and distributes the loss proportionally.

The cardiology triage case in MASCHINENRECHT illustrates the mechanism. An AI triage system classified a middle-aged woman presenting with chest pain and dyspnea as medium priority. She waited two hours and suffered an avoidable myocardial infarction. Forensic analysis showed the model was trained on predominantly male cardiology data, where infarct presentations differ substantially from the atypical female symptom pattern, a gender gap documented in cardiology literature for decades.

The radiology case extends the argument. A European hospital deployed a pulmonary diagnosis AI that classified a suspicious finding as benign. The treating physician accepted the recommendation; the lesion was later confirmed as early-stage lung carcinoma. The manufacturer defense, the hospital validation defense, and the physician’s review defense each failed in turn, because none of the three layers had performed its assigned due diligence.

Dr. Raphael Nagel (LL.M.) uses these cases to make a structural point: medical AI harm is architectural, not individual. European judges will increasingly distribute responsibility proportionally across the chain, rather than pinning the entire loss on whichever defendant is most visible or most solvent. The winning litigation strategy, for claimant or defendant, is now to reconstruct the entire architecture of the decision.

What governance must medtech manufacturers and hospitals implement?

Medtech manufacturers and hospitals must implement an integrated governance architecture covering MDR technical documentation, AI Act risk management, GDPR Article 22 explanation rights, clinical validation on the actual patient population, physician training, and incident reporting. Fragmented compliance is today the single largest source of avoidable liability exposure in European healthcare.

Manufacturers must structure a unified technical file that satisfies both MDR Annex II and AI Act Article 11 documentation requirements. This includes training-data provenance, bias-testing results, post-market performance metrics, and version logs. Substantial modifications trigger a new conformity assessment under MDR, and under the revised Product Liability Directive of 2024 potentially a fresh liability phase for the updated system version.
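
As an illustration of what one machine-readable entry in such a unified technical file might contain, consider the following sketch. The schema and field names are hypothetical; neither MDR Annex II nor AI Act Article 11 prescribes a data format.

```python
# An assumed structure for one technical-file entry covering the overlap of
# MDR Annex II and AI Act Article 11. Schema and field names are hypothetical.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelVersionRecord:
    version: str                   # e.g. "2.3.1"; semantic versioning assumed
    released: date
    training_data_provenance: str  # source registries, consent basis, date range
    bias_test_results: dict        # subgroup metrics from pre-release validation
    substantial_modification: bool # True triggers a new MDR conformity assessment
    post_market_metrics: dict = field(default_factory=dict)  # updated in service

technical_file = [
    ModelVersionRecord(
        version="2.3.1",
        released=date(2025, 3, 1),
        training_data_provenance="Cardiology registry A, 2018-2023, consented",
        bias_test_results={"sensitivity_f": 0.81, "sensitivity_m": 0.93},
        substantial_modification=True,
    ),
]
```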

Hospitals must operationalize Article 26 duties: appoint a responsible deployer role at executive-board or medical-directorate level, document local clinical validation, train physicians on each system's known limits, and establish incident escalation to both the manufacturer and the national competent authority. Article 22 GDPR obliges hospitals to provide meaningful explanations to patients subjected to automated clinical decisions, which in turn requires system-level explainability designed in by the manufacturer.

Dr. Raphael Nagel (LL.M.) and Tactical Management advise that the governance gap most frequently exploited in litigation is the absence of documented clinical validation at the hospital level. A prospective validation memo covering the institution’s actual patient demographics often separates a defensible deployment from a losing case, and costs a fraction of a single adverse outcome.

AI medical devices liability under MDR and the EU AI Act is no longer a compliance checklist. It is the operating constitution of European digital healthcare. Hospitals, medtech manufacturers, and clinicians who treat MDR certification and AI Act conformity as separate paperwork exercises will find themselves unable to present a single coherent defense when a patient is harmed. Those who integrate both regimes into one governance architecture will reduce clinical risk and, critically, remain insurable at market terms as reinsurers tighten underwriting.

Dr. Raphael Nagel (LL.M.), Founding Partner of Tactical Management and author of MASCHINENRECHT, advises medtech boards, hospital supervisory bodies, and institutional investors across Europe on exactly this integration. The book's central thesis applies with particular force in healthcare: liability in the AI economy is not a brake on innovation but the selection mechanism that determines which systems reach patients at all.

Expect the decisive case law to emerge in the 2026 to 2028 window, as the first serious MDR-plus-AI-Act claims reach German, Dutch, and French courts. The decisions that medtech founders and hospital executive boards make this year will define their defensibility then.

Frequently asked questions

Does the Medical Device Regulation apply to all hospital AI systems?

No. MDR applies only to AI systems with a medical intended purpose under Article 2(1) of Regulation (EU) 2017/745. Administrative scheduling or resource-planning AI falls outside MDR, while diagnostic, triage, or therapy-planning AI almost always falls inside, typically at Class IIa or higher under Rule 11 of Annex VIII. Hospitals must classify each system individually before deployment. Dr. Raphael Nagel (LL.M.) notes in MASCHINENRECHT that the borderline cases, such as clinical decision support that only suggests rather than decides, are precisely where most unresolved legal risk accumulates.

Who holds primary liability when an AI triage system misclassifies a patient?

Primary liability is joint and several across three actors. The patient may sue the manufacturer under the revised Product Liability Directive of 2024, the hospital for organizational negligence under Article 26 of the AI Act and national tort law, and the treating physician for professional negligence. The hospital is typically the most solvent and easiest target; it then pursues recourse against the manufacturer and, where appropriate, the physician. German law applies § 840 BGB joint tortfeasor rules to allocate internal shares according to fault, and other Member States apply comparable doctrines.

Does a CE-marked AI device shield the manufacturer from product liability?

No. CE marking evidences conformity at the moment the device was placed on the market but does not create a presumption of non-defectiveness under the revised Product Liability Directive. The directive instead eases the claimant's burden of proof, allowing courts to presume defect in technically complex products when the claimant shows plausible indications of malfunction. Manufacturers who rely on CE marking as a litigation defense without maintaining current post-market surveillance documentation, adverse-event registers, and bias-monitoring results will lose the presumption advantage that robust governance would have preserved.

Must a physician override an AI recommendation to avoid negligence?

Physicians are not required to override every recommendation, but they must maintain independent clinical judgment. Automation bias, extensively analyzed in MASCHINENRECHT, shows clinicians adopt AI outputs in roughly 85 percent of cases. Where the AI output is manifestly implausible given the clinical presentation, uncritical acceptance constitutes a breach of the professional standard. Courts will ask whether a reasonably prudent specialist in the same field, working under comparable conditions, would have questioned the recommendation. Training, time budgets, and institutional culture are all relevant to that fact-specific standard.

What post-market monitoring does the AI Act require for medical AI?

The AI Act requires providers of high-risk systems to establish a post-market monitoring system that actively collects and reviews data on system performance throughout the lifecycle. For medical AI, this runs parallel to MDR post-market surveillance and vigilance obligations under Articles 83 to 92 of Regulation (EU) 2017/745. Substantial performance drift, serious incidents, and field safety corrective actions must be reported to competent authorities. Hospitals as deployers must support these flows with operational data, incident notifications, and cooperation during vigilance investigations.
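
A minimal sketch of the kind of performance-drift check this monitoring duty implies is shown below. The window size, metric, and reporting threshold are illustrative assumptions, not figures from the AI Act or MDR.

```python
# Sketch of a post-market drift check of the kind the monitoring duty implies.
# Window size, metric, and threshold are illustrative assumptions.

def rolling_sensitivity(flagged_outcomes, window=200):
    """Sensitivity over the most recent confirmed-positive cases.
    `flagged_outcomes` is one boolean per confirmed case: True if the AI
    flagged it correctly at the time."""
    recent = flagged_outcomes[-window:]
    return sum(recent) / len(recent) if recent else None

def drift_alert(baseline_sensitivity, flagged_outcomes, window=200, max_drop=0.05):
    """True when live performance falls materially below the validated
    baseline, i.e. when a vigilance report should be considered."""
    current = rolling_sensitivity(flagged_outcomes, window)
    return current is not None and (baseline_sensitivity - current) > max_drop

# Deployer side: the hospital feeds confirmed outcomes back to the provider,
# who aggregates across sites and reports serious drift to the authority.
```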

How does GDPR Article 22 apply to AI clinical decisions?

Article 22 GDPR gives patients the right not to be subject to solely automated decisions with legal or similarly significant effects, and, read together with the transparency rights of Articles 13 to 15, entitles them to meaningful information about the logic involved. For AI triage, diagnosis, or treatment recommendations, this translates into an explicit duty to provide substantive explanations, not generic disclaimers. Hospitals that deploy unexplainable AI without a human reviewer in the loop risk both Article 22 violations and supervisory fines of up to 20 million euros or 4 percent of worldwide annual turnover, whichever is higher, under Article 83 GDPR.
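
To illustrate the difference between a generic disclaimer and a substantive explanation, the following hypothetical sketch renders a case-specific explanation from per-feature attribution scores. The factor names and weights are invented for illustration.

```python
# A hypothetical sketch of what a case-specific explanation could look like
# at system level: the top factors behind a triage score, rendered in plain
# language. Factor names and attribution weights are invented.

def patient_explanation(triage_level, attributions, top_n=3):
    """Build a substantive, case-specific explanation from per-feature
    attribution scores supplied by the manufacturer's system."""
    top = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"The system assigned triage level '{triage_level}' mainly because of:"]
    for feature, weight in top[:top_n]:
        direction = "raised" if weight > 0 else "lowered"
        lines.append(f"- {feature} ({direction} the priority)")
    lines.append("A physician reviewed this recommendation before it took effect.")
    return "\n".join(lines)

print(patient_explanation(
    "medium",
    {"reported chest pain": 0.5, "ECG ST-segment change": -0.4, "age": 0.2},
))
```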

Claritáte in iudicio · Firmitáte in executione

For weekly analysis on capital, leadership and geopolitics: follow Dr. Raphael Nagel (LL.M.) on LinkedIn →

Author: Dr. Raphael Nagel (LL.M.)