Dr. Raphael Nagel (LL.M.), Founding Partner, Tactical Management
From the book · ALGORITHMUS

Agentic AI Governance in the Enterprise: Authority, Liability, and the End of Output-Only Review

Agentic AI Governance in the Enterprise is the set of control structures that define what autonomous AI agents may decide, which actions require human authorisation before execution, how every agent step is logged for audit, and who holds personal liability when an agent commits the organisation to an irreversible outcome.

Agentic AI Governance in the Enterprise is the legal, operational, and technical framework that governs AI systems acting proactively on the organisation’s behalf, not merely generating outputs for human review. Unlike classical AI oversight, which examines recommendations before a human accepts them, agentic governance must authorise actions before they occur. It defines the scope of autonomous authority, the classes of decisions reserved for human approval, the logging standards that make every agent step reconstructible, and the escalation paths when an agent leaves its competence zone. As Dr. Raphael Nagel (LL.M.) argues in ALGORITHMUS, this is no longer an IT question: it is a boardroom liability question.

Why Agentic AI Is Not Another IT Deployment

Agentic AI differs categorically from assistive AI because the system executes actions, not proposals. Controls designed for classifiers and copilots fail for agents that purchase, pay, write, and commit the enterprise before a human reviews the result. The governance regime must move from output inspection to action authorisation.

AutoGPT demonstrated this shift publicly in 2023, collecting more than 150,000 GitHub stars within weeks as developers connected GPT-4 to browsers, shells, and APIs to let the model pursue goals autonomously. Within a year, Salesforce Einstein Agents, ServiceNow AI Agents, Microsoft Copilot Agents, and Google Agentspace had embedded comparable capabilities inside mainstream enterprise software, turning autonomous execution from research curiosity into default platform feature.

Klarna published the operational consequence in early 2024: its agent absorbed the workload of roughly 700 full-time customer service agents within its first months of deployment. This is not a productivity anecdote. It is evidence that agentic deployment scales through an organisation faster than traditional IT governance committees convene, and that a defective agent can replicate the same defect across hundreds of thousands of interactions before anyone notices.

The structural point, developed across “ALGORITHMUS: Who Controls AI, Controls the Future”, is that agentic AI collapses the latency between decision and consequence. Dr. Raphael Nagel (LL.M.) frames this as a shift in the operating reality of corporate control: the organisation’s exposure is now determined by what its agents are permitted to do unsupervised, not by what its policies say they should do.

The Liability Shift From Output Review to Action Authorisation

When an AI agent executes an irreversible action (a payment released, a purchase order dispatched, a customer communication sent, a contract clause accepted), liability no longer attaches to the human who approved an output. It attaches to whoever defined the agent’s authority scope, its escalation thresholds, and its fallback rules. The legal question shifts from review to design.

German corporate law gives this shift a hard edge. Under §93 AktG, members of the management board owe the company the care of a prudent and conscientious manager; delegating consequential decisions to an unaudited autonomous system is not a defence. The NIS2 Directive, in force through national transposition since October 2024, adds personal director liability for cybersecurity governance covering AI systems in essential and important sectors, with fines reaching ten million euros or two percent of global annual turnover, whichever is higher.

The risk is not theoretical. In 2019 the chief executive of a UK energy subsidiary transferred 220,000 euros to a fraudulent account after a deepfake voice cloned his German parent-company chief during a phone call. Agentic systems raise the same risk by an order of magnitude: an agent with outbound payment authority can be socially engineered at machine speed, across every channel simultaneously, without a human ever hearing the fraudulent voice.

The practical consequence for boards is that the D&O policy, the AI procurement contract, and the internal authority matrix must be redrafted in light of autonomous execution. A scope that reads ‘the system may draft customer communications’ is not a scope that reads ‘the system may send customer communications.’ The difference is liability, and the drafting of that difference is a board responsibility, not a vendor decision.

Escalation Rules, Logging Duties, and Human-in-the-Loop Architecture

The operational core of agentic AI governance rests on three non-negotiable pillars: explicit authority limits separating reversible from irreversible actions, tamper-evident logging of every agent decision, and mandatory Human-in-the-Loop controls for any commitment the organisation cannot undo at reasonable cost.

The authority-limit principle is straightforward in theory and routinely violated in practice. A reversible action (flagging a ticket, drafting a reply, scheduling a follow-up) can be automated within monetary or procedural ceilings. An irreversible action (releasing a payment, modifying a contract, communicating externally in the organisation’s name) requires positive human authorisation before execution. The EU AI Act reinforces this for high-risk systems by requiring effective human oversight designed to prevent or minimise risks to health, safety, and fundamental rights.
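
In code, the principle reduces to a gate evaluated before execution. The following Python sketch is purely illustrative: the action names, the reversibility classification, and the 500-euro ceiling are assumptions, not values drawn from any regulation or platform.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Reversibility(Enum):
    REVERSIBLE = auto()
    IRREVERSIBLE = auto()

@dataclass(frozen=True)
class AgentAction:
    name: str
    reversibility: Reversibility
    amount_eur: float = 0.0

# Illustrative ceiling: reversible actions below this value may run autonomously.
AUTONOMOUS_CEILING_EUR = 500.0

def requires_human_authorisation(action: AgentAction) -> bool:
    """Return True if the action must wait for explicit human approval."""
    if action.reversibility is Reversibility.IRREVERSIBLE:
        return True  # payments, contract changes, external communications
    return action.amount_eur > AUTONOMOUS_CEILING_EUR

# Drafting a reply is reversible and costless; releasing a payment is not.
assert not requires_human_authorisation(AgentAction("draft_reply", Reversibility.REVERSIBLE))
assert requires_human_authorisation(AgentAction("release_payment", Reversibility.IRREVERSIBLE, 1200.0))
```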

Logging is the second pillar and is already legally mandatory rather than optional. The AI Act obliges providers and deployers of high-risk systems to maintain automatic logs of operation throughout the lifecycle, sufficient to trace the system’s functioning. For agentic systems, this means every prompt, every tool invocation, every external action, and every outcome must be captured in a form that an auditor, a regulator, or a prosecutor can reconstruct months after the fact.
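
One common way to make such records tamper-evident is hash chaining, where each log entry commits to its predecessor. The sketch below is a minimal illustration (the event types and field names are assumptions; a production system would add cryptographic signing and external anchoring). Any retroactive edit to an earlier entry changes its hash and breaks every subsequent link, which is exactly the property an auditor needs.

```python
import hashlib
import json
import time

class AgentAuditLog:
    """Append-only, hash-chained log of agent activity (illustrative)."""

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, event_type: str, payload: dict) -> None:
        """Append one entry: prompt, tool_call, action, or outcome."""
        entry = {
            "ts": time.time(),
            "type": event_type,
            "payload": payload,
            "prev_hash": self._last_hash,
        }
        # The hash covers the entry body, including the previous hash.
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; returns False if any entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```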

The third pillar, Human-in-the-Loop, is routinely confused with Human-on-the-Loop, and the distinction matters. Human-in-the-Loop means the action does not execute without explicit human approval. Human-on-the-Loop means the human can intervene but is not required to. For consequential autonomous agents, the default must be Human-in-the-Loop, with Human-on-the-Loop reserved for high-frequency, low-stakes, reversible flows where statistical sampling of agent behaviour is sufficient oversight.
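
The difference is mechanical, and a sketch makes it unambiguous. In the hypothetical dispatcher below (all function names are illustrative), Human-in-the-Loop gates execution on approval, while Human-on-the-Loop executes first and records for monitoring:

```python
from enum import Enum, auto
from typing import Callable

class OversightMode(Enum):
    HUMAN_IN_THE_LOOP = auto()   # action blocks until a human approves
    HUMAN_ON_THE_LOOP = auto()   # action executes; humans monitor and can halt

def dispatch(action: Callable[[], None],
             mode: OversightMode,
             request_approval: Callable[[], bool],
             log_for_monitoring: Callable[[], None]) -> None:
    """Illustrative dispatcher: the governance difference is where the
    human sits relative to execution, before it or after it."""
    if mode is OversightMode.HUMAN_IN_THE_LOOP:
        if not request_approval():
            return  # denied or timed out: the action never executes
        action()
    else:
        action()                 # executes immediately
        log_for_monitoring()     # reviewed by sampling, not gated
```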

Building Board-Level Oversight Before Regulators Build It For You

Boards that treat agentic AI as an IT procurement question will discover, too late, that they personally hold the liability. NIS2 makes directors personally accountable for cybersecurity governance, and the EU AI Act imposes comparable exposure for governance failures in high-risk AI deployments. The passive board is the liable board.

A workable oversight agenda has four standing items. First, an inventory of every agentic system in use, classified by AI Act risk category and by reversibility of the actions each agent can trigger. Second, documented authority scopes for each agent, approved by the board committee responsible for risk, not by the business unit that deployed it. Third, quarterly reporting on agent incidents, near-misses, and logged escalations, analogous to near-miss reporting in aviation or industrial safety. Fourth, a tested contingency plan for taking a misbehaving agent offline without disabling legitimate business flows that depend on it.
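
A minimal sketch of what the first, second, and fourth items might look like as a machine-checkable inventory follows; the schema, field names, and risk-class labels are illustrative assumptions layered over the AI Act’s published categories.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """One row of a board-level agent inventory (illustrative schema)."""
    name: str                      # e.g. "invoice-triage-agent"
    vendor_platform: str           # e.g. "Salesforce Einstein"
    ai_act_risk_class: str         # "minimal" | "limited" | "high" | "prohibited"
    irreversible_actions: list[str] = field(default_factory=list)
    authority_scope_approved_by: str = ""   # risk committee, not business unit
    kill_switch_tested: bool = False

def board_review_flags(inventory: list[AgentRecord]) -> list[str]:
    """Surface the gaps the standing agenda items are meant to catch."""
    flags = []
    for a in inventory:
        if a.irreversible_actions and not a.authority_scope_approved_by:
            flags.append(f"{a.name}: irreversible actions without approved authority scope")
        if not a.kill_switch_tested:
            flags.append(f"{a.name}: no tested procedure to take the agent offline")
    return flags
```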

Dr. Raphael Nagel (LL.M.), Founding Partner of Tactical Management, argues that this governance discipline is also a valuation lever. Portfolio companies with defensible agentic AI governance command premium multiples because acquirers discount unquantified liability tails. Companies with agents deployed without authority matrices, without logging, and without board oversight carry contingent exposure that surfaces during technical due diligence and depresses the exit price.

The practical starting point is not regulatory compliance; it is an honest inventory. Most organisations underestimate how many agentic capabilities are already live in their Microsoft, Salesforce, ServiceNow, and Google environments, because vendors now enable them by default in platform updates. Governance that is not backed by a current inventory is governance on paper, and governance on paper does not hold when the regulator arrives.

The shift from assistive AI to agentic AI is the most consequential change in the operating reality of the enterprise since the internet moved from read-only to transactional. Every corporate function that used to review AI outputs will, within the next thirty-six months, supervise AI agents that act. The governance question is not whether to permit this (the market has already decided) but under what authority, logging, and Human-in-the-Loop structures the organisation exposes itself to action at machine speed.

Agentic AI Governance in the Enterprise is therefore the discipline that decides, in advance, what the organisation’s agents may commit it to, what they must escalate, and how every autonomous decision is reconstructed after the fact. It is a legal task, an operational task, and a board task simultaneously, and treating it as any one of these in isolation produces gaps that the other two cannot close. The analysis developed across “ALGORITHMUS: Who Controls AI, Controls the Future” makes the broader point explicit. Dr. Raphael Nagel (LL.M.) and the advisory practice at Tactical Management work with boards and investors on precisely this intersection of autonomous execution, personal liability, and enforceable governance. Organisations that design that intersection deliberately in 2025 will look very different, by 2028, from those that let vendors design it for them.

Frequently asked

What counts as an agentic AI system versus an assistive one?

An assistive AI system produces outputs that a human reviews and then accepts or rejects, such as a drafted email or a suggested code change. An agentic AI system executes actions that directly affect the organisation or third parties, including sending messages, triggering payments, modifying records, or initiating workflows. The governance consequence is fundamental: assistive systems fail at the output stage, where a human can still intervene; agentic systems fail at the action stage, where consequences have already materialised and often cannot be reversed without cost.

Who is personally liable when an autonomous AI agent causes damage?

In the EU, liability routes through several instruments simultaneously. Under general corporate duty of care, for example §93 AktG in Germany, board members may be personally liable for inadequate governance of consequential systems. NIS2 imposes direct personal accountability on management bodies for cybersecurity measures covering AI in essential and important sectors. The AI Act adds provider and deployer obligations for high-risk systems. An agent deployed without documented authority scope, logging, and oversight creates personal exposure for directors, not merely corporate exposure.

Does the EU AI Act require logging for agentic systems?

Yes, for high-risk systems it does. The AI Act obliges providers and deployers of high-risk AI systems to maintain automatic logs of operation throughout the lifecycle, sufficient to enable post-market monitoring and traceability. For agentic deployments, this translates concretely into tamper-evident records of every prompt, tool invocation, external action, and outcome. Logging must be designed into the architecture, not bolted on afterwards, because reconstruction of an incident requires contemporaneous records that cannot be produced retrospectively from memory or inference.

What is the minimum governance standard for agentic AI in a mid-sized enterprise?

At minimum: an inventory of every agentic system in use with risk classification; documented authority scopes approved at board or risk-committee level; mandatory Human-in-the-Loop authorisation for irreversible actions; tamper-evident logging of every agent decision; quarterly incident and near-miss reporting to the board; and a tested procedure to disable a misbehaving agent without halting legitimate workflows. This is the floor, not the ceiling, for organisations that want to deploy agentic AI without carrying unquantified personal and corporate liability tails.

How does agentic AI affect valuation in a private equity transaction?

Agentic AI cuts both ways in valuation. Well-governed agentic systems, with documented authority, logging, and Human-in-the-Loop controls, are increasingly treated as defensible operating leverage and command premium multiples. Agentic capabilities deployed without authority matrices or oversight create contingent liability that surfaces during due diligence and depresses exit multiples. Tactical Management’s advisory work with investors confirms that buyers now explicitly probe AI governance in technical due diligence, not only commercial diligence, and price the absence of controls into the deal.

Claritáte in iudicio · Firmitáte in executione

For weekly analysis on capital, leadership and geopolitics: follow Dr. Raphael Nagel (LL.M.) on LinkedIn →

Author: Dr. Raphael Nagel (LL.M.)