Dr. Raphael Nagel (LL.M.)
From the book · ALGORITHMUS

AI Liability for Directors and Officers: Personal Exposure Under the EU AI Act, NIS2, and the AI Liability Directive

AI Liability for Directors and Officers is the personal legal exposure boards face when artificial intelligence systems cause regulatory, financial, or reputational harm. Under the EU AI Act, NIS2, and the proposed AI Liability Directive, fines reach seven percent of global turnover and directors are personally accountable. Dr. Raphael Nagel (LL.M.) documents why delegation to IT does not transfer responsibility.

AI Liability for Directors and Officers is the expanding framework of personal legal accountability that board members, managing directors, and senior executives carry for the development, deployment, and oversight of artificial intelligence systems within their organizations. It combines obligations from three instruments: the EU AI Act, which imposes corporate fines of up to seven percent of global annual turnover; the NIS2 Directive, which since October 2024 has explicitly established the personal liability of management bodies for cybersecurity and AI-governance implementation; and the proposed EU AI Liability Directive, which reverses the burden of proof in favor of claimants. Directors cannot delegate this responsibility to IT departments, vendors, or external auditors without retaining personal exposure.

Why Director Liability for AI Became Personal in 2024

AI Liability for Directors and Officers became personal in 2024 because three legal instruments converged almost simultaneously: the EU AI Act with its seven-percent-of-turnover fines, the NIS2 Directive establishing personal management-body accountability from October 2024, and the proposed AI Liability Directive reversing the burden of proof for AI-related harm.

Prior to 2024, algorithmic decision-making sat in a regulatory grey zone where IT departments implemented, vendors supplied, and supervisory boards rarely scrutinized. That architecture collapsed when the European Parliament adopted the AI Act on 13 March 2024 with 523 votes to 46, the most decisive legislative signal on technology regulation in the Union’s history. NIS2 entered national transposition in October 2024 with an explicit clause: management bodies bear personal responsibility for the implementation of cybersecurity and risk-management measures. As Dr. Raphael Nagel (LL.M.) documents in ALGORITHMUS: Who Controls AI, Controls the Future, this is a regulatory caesura that permanently reshapes boardroom agenda priorities.

The practical consequence is illustrated in the book through concrete incidents. When a credit decision engine rejects an applicant on proxy-discriminatory grounds, when a predictive-maintenance algorithm misclassifies an anomaly and causes industrial failure, when a deepfake-enabled CEO fraud transfers 220,000 euros from a British energy firm to a fraudulent account, the boards overseeing these deployments can no longer rely on ignorance as a defense. The standard of due diligence has migrated from passive supervision to active, documented oversight, and the evidentiary expectations are now codified in black-letter law rather than left to judicial discretion.

The three instruments that changed the rules

The EU AI Act classifies AI into four risk tiers and subjects high-risk systems (credit scoring, employment selection, critical infrastructure, biometric identification, and administration of justice) to documentation, audit, and human-oversight requirements under Articles 9 through 15. NIS2 extends critical-infrastructure coverage to wastewater, postal services, space, and digital infrastructure, with corporate sanctions of up to ten million euros or two percent of global annual turnover, whichever is higher. The AI Liability Directive introduces the evidentiary reversal that Dr. Raphael Nagel (LL.M.) highlights as the most consequential shift for D&O risk management in the coming decade.

What the EU AI Act and NIS2 Actually Require from Boards

Boards must maintain a documented AI inventory, classify every system under the AI Act’s risk categories, implement risk-management processes for high-risk deployments, and ensure meaningful human oversight of algorithmic decisions that affect individuals. NIS2 adds cybersecurity measures, incident-reporting duties within twenty-four to seventy-two hours, and supply-chain security controls.

The AI Act’s Annex III enumerates eight high-risk domains: critical infrastructure; education and vocational training; employment and workforce management; essential public and private services including credit scoring and insurance pricing; law enforcement; migration and border control; administration of justice and democratic processes; and biometric identification. Any company operating in these domains, and most mid-cap European industrials touch at least three, must satisfy risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy, robustness, and cybersecurity obligations. The compliance cost is real. The liability cost of non-compliance is substantially larger.

Tactical Management’s advisory work with portfolio boards surfaces a consistent pattern. Directors discover that systems they assumed were vendor-supplied commodities (HR screening tools, credit engines, fraud detectors, biometric access systems) fall squarely within the AI Act’s high-risk class. At that point, conformity assessment, CE marking, registration in the EU database, and ongoing post-market monitoring become explicit board-level obligations. Companies that acted in 2023 and early 2024 are CE-ready for the phased deadlines between June and August 2026. Those that waited confront a compressed timeline with limited remediation capacity and rising underwriter scrutiny.

The AI inventory as the first board duty

No director can oversee what no director knows exists. The first deliverable documented in ALGORITHMUS is a complete AI inventory: every system in operation, its vendor, its AI Act risk classification, the data it processes, its owner, and its last audit date. Without this inventory, any NIS2 or AI Act supervisory inspection becomes a reconstruction exercise under regulatory pressure, which is precisely the condition in which personal liability of directors crystallizes and D&O defenses weaken.
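
The inventory described above maps naturally onto a machine-readable record. The following Python sketch is purely illustrative, not a schema from ALGORITHMUS or the AI Act; the field names, risk labels, and the ninety-day review window are assumptions chosen to show what a minimally auditable entry could look like.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class AIActRiskClass(Enum):
    # The four risk tiers of the EU AI Act.
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # Annex III domains
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # no specific obligations

@dataclass
class AIInventoryEntry:
    """One row of the board-level AI inventory described above.
    Field names are illustrative assumptions, not regulatory terms."""
    system_name: str
    vendor: str                     # or "in-house"
    risk_class: AIActRiskClass
    data_processed: list[str]       # e.g. ["applicant CVs", "credit history"]
    business_owner: str             # accountable executive, not the IT contact
    last_audit: date | None         # None flags a system never audited

    def is_overdue(self, today: date, max_age_days: int = 90) -> bool:
        """Quarterly review cadence: True if the entry needs re-audit."""
        return self.last_audit is None or (today - self.last_audit).days > max_age_days
```

A register of such entries, exported with the quarterly board pack, is the artifact a supervisory inspection typically requests first.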

The Burden-of-Proof Reversal in the AI Liability Directive

The EU AI Liability Directive reverses the traditional evidentiary burden by introducing a causality presumption: once a claimant demonstrates that an AI system had a safety defect and that the defect plausibly caused the harm, the deploying company must prove that no breach of duty of care occurred. This reallocation fundamentally changes director exposure and defensive strategy.

In conventional product-liability litigation, the injured party must prove defect, causation, and damage. For opaque AI systems, proving causation is technically impossible without insider access to training data, model weights, and decision logs. The AI Liability Directive solves this asymmetry by shifting the evidentiary load to the party in control of the evidence, which is the deploying company. For directors, this means that internal documentation, governance records, bias-testing reports, and incident logs become the primary line of defense. Where those records are thin, absent, or inconsistent, the legal presumption stands and judgment follows.

The doctrinal lesson Dr. Raphael Nagel (LL.M.) derives in ALGORITHMUS is that delegated responsibility is not responsibility. Boards that outsourced AI oversight to IT will discover that no court accepts the defense that “the computer said so” or that the vendor provided assurances. The standard required is documented, informed, reasoned oversight evidenced through board minutes, risk registers, audit reports, incident responses, and procurement due diligence that explicitly addresses AI-specific risks.

Why documentation becomes the core defense

Under the reversal, the company that produces a coherent governance trail (AI inventory, risk assessments, bias-testing results, human-oversight logs, incident register, and a board-approved AI policy) rebuts the presumption of fault. The company that cannot produce that trail effectively concedes liability at the pleading stage. Tactical Management advises portfolio directors that the annual cost of maintaining this evidentiary infrastructure is a small fraction of the cost of failing to produce it when a regulator, litigant, or insurer demands it on short notice.
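
The mechanics of that defense posture can be made concrete. The sketch below is a hypothetical completeness check, assuming invented artifact labels rather than terms from the directive; it simply flags which elements of the governance trail a company could not produce for a given system.

```python
# Hypothetical completeness check for the governance trail named above.
# The artifact labels are illustrative assumptions, not directive terminology.
REQUIRED_TRAIL = (
    "ai_inventory_entry",
    "risk_assessment",
    "bias_testing_report",
    "human_oversight_log",
    "incident_register",
    "board_approved_ai_policy",
)

def missing_artifacts(evidence: dict[str, bool]) -> list[str]:
    """Return the governance artifacts a company could not produce.

    `evidence` maps each artifact label to whether a dated, versioned
    document exists. Every gap is an opening the causality presumption
    of the AI Liability Directive would exploit.
    """
    return [item for item in REQUIRED_TRAIL if not evidence.get(item, False)]

# Example: a credit engine with no bias testing and no oversight logs.
gaps = missing_artifacts({
    "ai_inventory_entry": True,
    "risk_assessment": True,
    "incident_register": True,
    "board_approved_ai_policy": True,
})
print(gaps)  # ['bias_testing_report', 'human_oversight_log']
```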

Governance Structures That Protect Directors Personally

Effective personal protection against AI liability requires four structural elements: a complete AI inventory updated at least quarterly, a cross-functional AI governance committee with documented decision authority, explicit human-in-the-loop policies for all high-risk systems, and board-level AI reporting that mirrors financial reporting in rigor and cadence.

The 2023 PwC Fortune 500 survey found fewer than twenty percent of boards included a member with explicit AI expertise. That gap is legally relevant. Courts and regulators assess whether the board possessed the competence to exercise informed oversight, and a board without AI literacy cannot credibly invoke the business-judgment rule as a shield. The remedy is not necessarily a dedicated AI director, but ensuring that at least one member of the audit or risk committee can interrogate AI disclosures with substantive rigor and challenge vendor claims without reflexive deference.

Specific instruments detailed in ALGORITHMUS include the Chief AI Officer role, adopted by IBM, Moderna, and several Fortune 500 firms; the cross-functional AI task force suited to mid-caps without dedicated CAIO budgets; and the quarterly AI governance report to the supervisory board covering inventory, incidents, compliance status, material risks, and strategic initiatives. These are not bureaucratic overhead. They are the evidentiary infrastructure that converts good-faith oversight into legal defense when a regulator inspects, a litigant sues, or an insurer audits.
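
To make the reporting cadence tangible, here is a minimal sketch of a quarterly report structure. The five sections follow the list above; the field names, types, and the escalation rule are illustrative assumptions, not a template from the book.

```python
from dataclasses import dataclass, field

@dataclass
class QuarterlyAIGovernanceReport:
    """The five report sections described above; field names are illustrative."""
    quarter: str                              # e.g. "2025-Q3"
    inventory_summary: dict[str, int]         # systems per AI Act risk class
    incidents: list[str]                      # incidents and near-misses this quarter
    compliance_status: dict[str, str]         # obligation -> "met" / "gap" / "remediation"
    material_risks: list[str]                 # risks escalated to board attention
    strategic_initiatives: list[str] = field(default_factory=list)

    def requires_escalation(self) -> bool:
        """A simple board trigger: any incident or any open compliance gap."""
        return bool(self.incidents) or "gap" in self.compliance_status.values()
```

Filed each quarter and minuted at board level, a report of this shape is exactly the kind of evidentiary record the burden-of-proof reversal rewards.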

The OpenAI November 2023 lesson for directors

The OpenAI board’s removal of Sam Altman in November 2023, followed by his reinstatement five days later and the replacement of three of four board members, is a case study in what happens when AI governance collides with commercial incentives in the absence of documented process. Dr. Raphael Nagel (LL.M.) draws a clear lesson in ALGORITHMUS: governance that exists on paper but lacks procedural rigor collapses under pressure, and the personal reputational and legal consequences for the directors involved are significant, public, and lasting.

The transformation of AI Liability for Directors and Officers from a compliance footnote into a central boardroom duty is the defining European governance shift of the decade. The convergence of the EU AI Act with its seven-percent-of-turnover fines, the NIS2 Directive with its explicit personal-liability clauses, and the AI Liability Directive with its burden-of-proof reversal leaves no board insulated by ignorance or by delegation. The algorithms now deployed in credit decisions, hiring pipelines, predictive maintenance, customer service, and critical infrastructure are, in legal substance, board-level systems. Dr. Raphael Nagel (LL.M.), in ALGORITHMUS: Who Controls AI, Controls the Future, synthesizes the legal, strategic, and operational consequences into a framework that Tactical Management deploys in its advisory work with portfolio companies and family-held industrials across Europe. The forward-looking claim is sober and specific: the next wave of European regulatory enforcement will target boards, not technologies. Directors who invest today in a rigorous AI inventory, a functioning governance committee, documentation discipline, and substantive board-level AI literacy will have a defense when examined. Those who delegate and wait will discover that the reversed burden of proof, once it falls on them, is almost impossible to reclaim.

Frequently asked questions

Can directors delegate AI oversight to the IT department?

No. The EU AI Act, NIS2, and the AI Liability Directive establish AI oversight as a management-body responsibility that cannot be transferred to operational departments or external vendors. Directors remain personally liable for the adequacy of the oversight structure they authorize. The IT department implements; the board supervises. Delegation of execution is lawful; delegation of accountability is not. Dr. Raphael Nagel (LL.M.) emphasizes in ALGORITHMUS that the distinction between delegated task and delegated responsibility is the single most frequent misunderstanding at supervisory-board level, and the one most likely to produce personal exposure when a regulator or court examines the governance trail.

What fines can individual directors face under NIS2?

NIS2 imposes corporate sanctions of up to ten million euros or two percent of global annual turnover for essential entities. In addition, national transpositions across EU member states establish personal liability mechanisms for management bodies, including temporary prohibition from exercising managerial functions, personal administrative fines, and civil claims from shareholders for breach of fiduciary duty. Precise personal financial exposure varies by member state, but the structural principle is uniform: management is on the hook individually, not merely through the company’s balance sheet, and D&O insurance terms are being recalibrated accordingly.

How does the AI Liability Directive change D&O defense strategy?

The directive reverses the burden of proof. Once a claimant shows a plausible safety defect caused harm, the deploying company must prove that no breach of duty of care occurred. This makes documented governance the primary defense: AI inventory, risk assessments, bias testing, human-oversight logs, and incident registers. Directors who cannot produce a coherent evidentiary trail face a legal presumption they must overcome. D&O insurance policies are being updated to reflect this shift, and underwriters increasingly require evidence of board-level AI governance before renewing coverage at favorable premiums.

Which AI systems trigger the highest personal liability exposure?

The EU AI Act’s Annex III enumerates eight high-risk categories: critical infrastructure; education; employment and HR; essential public and private services including credit scoring and insurance; law enforcement; migration and asylum; administration of justice; and biometric identification. Any system deployed in these domains triggers full compliance obligations and corresponding personal liability. Tactical Management’s experience with European mid-cap boards shows that the most commonly overlooked category is HR screening software, which nearly all companies operate and few classify correctly under the Act.

Does D&O insurance cover AI-related liability?

Standard D&O policies written before 2023 often contain ambiguous language on AI-specific exposures. Renewals in 2024 and 2025 increasingly include explicit AI exclusions, sub-limits, or coverage conditions requiring documented governance frameworks. Directors should review current policies with counsel, specifically testing coverage for AI Act fines, NIS2 personal sanctions, and AI Liability Directive claims. Insurers now price AI governance maturity into premiums: companies with documented frameworks and board-level AI literacy receive materially better terms than those without, making governance a measurable financial variable.

Claritáte in iudicio · Firmitáte in executione

For weekly analysis on capital, leadership and geopolitics: follow Dr. Raphael Nagel (LL.M.) on LinkedIn →

Author: Dr. Raphael Nagel (LL.M.)