Dr. Raphael Nagel (LL.M.), Founding Partner, Tactical Management
From the book · HALTUNG

AI and Leadership Accountability: Where Algorithms End and Executive Judgment Must Begin

AI and leadership accountability describes the non-delegable responsibility executives hold for decisions their firm makes, even when algorithms generate them. Dr. Raphael Nagel (LL.M.) argues in HALTUNG that AI augments information processing and pattern recognition, but judgment in ethical grey zones and liability for consequences remain irreducibly human.

AI and Leadership Accountability is the legal, fiduciary, and moral obligation of executive leadership to answer for decisions produced, shaped, or accelerated by artificial intelligence systems inside the firm. It is not satisfied by deploying a model, approving a vendor, or citing algorithmic output as justification. Under § 93 AktG in Germany, Article 22 GDPR, and the EU AI Act that entered into force in August 2024, boards remain the responsible actor. Dr. Raphael Nagel (LL.M.) frames this in HALTUNG as the stable core of leadership: tools change, but the person who carries the consequences does not. Accountability travels with identifiable humans, never with code.

Why AI cannot hold leadership accountability

AI cannot hold leadership accountability because responsibility presupposes a person capable of carrying consequences. Algorithms generate outputs; they do not sign board resolutions, face shareholder suits, or stand before regulators. Dr. Raphael Nagel (LL.M.) is explicit in HALTUNG: responsibility cannot be delegated to algorithms, committees, or processes.

The legal architecture confirms the analytical one. § 93 AktG obliges the Vorstand of a German stock corporation to exercise the care of a prudent businessperson; the duty is personal and non-transferable. Section 174 of the UK Companies Act 2006 imposes the same standard on directors. The EU AI Act, Regulation 2024/1689, places compliance obligations on the deployer, the provider, and the importer, but never on the system itself. When an AI decision goes wrong, the question of who decides collapses back to a human name, exactly as the prologue of HALTUNG predicts.

This is not a philosophical nuance. BaFin’s supervisory communications on machine learning in risk models, the ECB guide on internal models, and the conformity assessments now required under the EU AI Act all assume a human signatory. Boards that treat AI output as a neutral fact of nature, rather than as a decision they have ratified, misread the liability map. The tool changes. The accountable person does not.

What AI augments, and what it cannot replace

AI augments information processing, pattern recognition, and forecasting; it does not replace judgment, value-weighting, or consequence-bearing. Dr. Raphael Nagel (LL.M.) splits leadership tasks in HALTUNG into two categories: the first becomes dramatically more efficient through AI systems; the second does not change at all.

The first category is large and visible: credit underwriting, fraud detection, claims triage, M&A document review, ESG data aggregation, competitive intelligence, customer segmentation. McKinsey's State of AI survey published in early 2024 reported that 65 percent of organisations regularly use generative AI in at least one function, nearly double the share ten months earlier. ChatGPT reached an estimated 100 million monthly users within two months of its November 2022 launch, making it the fastest-growing consumer application recorded to that point. Deloitte's 2024 State of Generative AI in the Enterprise found that most large organisations were increasing investment despite unresolved governance questions. These numbers describe tool adoption, not accountability transfer.

The second category is smaller but decisive. A board deciding whether to close a plant, disclose a cyber incident, settle litigation, or remove a CEO works in conditions of irreducible ambiguity, competing stakeholder claims, and moral weight. No model trained on historical data solves a novel ethical conflict; at best, it retrieves precedent. The Volkswagen diesel affair in 2015 and the Wirecard collapse in 2020 did not fail at the data layer. They failed at the judgment layer. HALTUNG locates the comparative advantage of human leadership precisely there.

Ethical grey zones the algorithm cannot navigate

Ethical grey zones are decisions where interests, time horizons, and legitimate claims collide, and no rule produces a unique answer. HALTUNG argues these zones are the natural habitat of leadership, and they are the precise terrain where AI fails: models optimise within a specified objective, but grey zones are disputes about the objective itself.

Concrete cases make the point. In 2019, the New York Department of Financial Services opened an investigation into Goldman Sachs over alleged gender bias in Apple Card credit limits; the algorithm was technically compliant yet produced outcomes that no executive wanted to defend publicly. In 2021, Zillow shut down its iBuying unit after its pricing algorithm drove an 881 million dollar inventory write-down and roughly 2,000 layoffs. In 2024, the British Columbia Civil Resolution Tribunal held Air Canada liable for a refund promise made by its chatbot in Moffatt v Air Canada. In each case, the firm signed the outcome.

Dr. Raphael Nagel (LL.M.) describes this as the test of Haltung, the German word for bearing or stance that gives the book its title. The test is not whether the model was state of the art. The test is whether the executive, asked under oath or before a parliamentary committee, can articulate why the system was allowed to decide, and which boundary it was forbidden to cross. Tactical Management works with portfolio boards on precisely this boundary discipline: not a ban on AI, but a clear delineation of what it is authorised to decide and what remains reserved to named human judgment.

An accountability framework for AI-augmented boards

An accountability framework for AI-augmented boards translates the four-layer decision architecture in HALTUNG into governance practice: situation clarification, value anchoring, option space, and explicit commitment. Each layer assigns a human name before the algorithm runs, not after. This is the structural answer to AI and leadership accountability.

Situation clarification forces the board to know what the model actually does, which data it was trained on, and which distributions it has never seen. The Basel Committee’s principles on risk data aggregation, BCBS 239, already require this discipline for risk systems; Article 13 of the EU AI Act extends it to any high-risk AI. Value anchoring names the lines the firm will not cross, for instance deploying facial recognition on employees, or automating dismissal decisions subject to § 102 BetrVG co-determination. These boundaries are set before deployment, not during a crisis.

Option space and commitment close the loop. The board defines which decisions the AI may propose, which it may execute, and which it may only inform. BaFin's supervisory requirements for IT in financial institutions, BAIT, require documented human oversight; Articles 20 and 21 of the NIS-2 Directive add management accountability and risk-management obligations for cybersecurity, including AI-driven detection. Silicon Valley Bank's collapse in March 2023 illustrated that automated risk dashboards do not substitute for a board that reads, questions, and acts. When the board commits, it signs with an identifiable name, and the audit trail records why. Dr. Raphael Nagel (LL.M.) insists in HALTUNG that commitment without a name is not leadership; it is paperwork. Tactical Management applies this discipline across its portfolio.
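To make the boundary discipline concrete, the sketch below shows one way a board secretariat might encode the four layers as a structured decision record before the algorithm runs. It is a minimal illustration under stated assumptions: every class, field, and enum name here is hypothetical, not a schema taken from HALTUNG, the EU AI Act, or any supervisory text.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class AIAuthority(Enum):
    """What the system is authorised to do for a given decision class."""
    INFORM = "inform"    # output is advisory context only
    PROPOSE = "propose"  # output is a recommendation awaiting human sign-off
    EXECUTE = "execute"  # output acts directly, inside preset boundaries


@dataclass
class DecisionRecord:
    """One record per material AI-assisted decision, mirroring the four layers."""
    # Layer 1: situation clarification -- what the model does and where it is blind
    model_name: str
    data_provenance: str            # training data sources and known gaps
    # Layer 2: value anchoring -- lines fixed before deployment, not during a crisis
    forbidden_boundaries: list[str]
    # Layer 3: option space -- what the system may inform, propose, or execute
    authority: AIAuthority
    options_considered: list[str]
    # Layer 4: explicit commitment -- a named human, never a team or a system
    accountable_owner: str
    rationale: str                  # why the output was approved or overridden
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```

The one deliberate design choice worth noting: accountable_owner holds a single human name. A record that accepted a committee, a function, or a system identifier in that field would fail the commitment layer by construction.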

AI and leadership accountability is not a transitional issue that clearer regulation will resolve. It is the permanent shape of executive responsibility in an algorithmic economy. The tools will keep improving, the models will keep widening their domain, and the temptation to treat AI output as neutral fact will keep growing. None of that alters the legal, fiduciary, and moral architecture. The person who signs is the person who answers.

HALTUNG frames this with a sharpness the compliance literature rarely achieves: what remains human in leadership is not what is left over once machines take the rest. What remains human is the core, the irreducible function of bearing consequences for decisions that cannot be reduced to optimisation. Dr. Raphael Nagel (LL.M.), Founding Partner of Tactical Management, has spent years advising boards, investors, and supervisory bodies on precisely this boundary, and the book codifies that practice.

The forward-looking claim is direct. The firms that will outperform in the next decade are not those with the most sophisticated models. They are those whose boards know, in advance and in writing, which decisions belong to the algorithm, which to the executive, and which to the human in the room when the clock runs out.

Frequently asked

Who is liable when an AI system makes a wrong business decision?

Under German, UK, and EU corporate law, liability rests with the board or executive who deployed the system, not with the algorithm or the vendor. § 93 AktG requires the German Vorstand to exercise the care of a prudent businessperson; the duty is personal. The EU AI Act, Regulation 2024/1689, assigns compliance obligations to providers and deployers, never to the system. Dr. Raphael Nagel (LL.M.) argues in HALTUNG that the question of who decides always collapses to a human name. The algorithm can be a cause; it can never be the responsible actor in front of a regulator or a shareholder suit.

Does the EU AI Act change how boards must govern AI systems?

Yes. The EU AI Act, Regulation 2024/1689, entered into force in August 2024 and introduces a risk-based regime. High-risk systems in credit scoring, HR selection, critical infrastructure, and law enforcement require conformity assessments, technical documentation, human oversight under Article 14, and registration in an EU database. Providers carry the heaviest obligations, but deployers, the organisations that use a system under their own authority, carry their own duties, including oversight and monitoring. Boards that treat AI governance as an IT matter misread the statute. The compliance function reports; the board decides. HALTUNG frames this as the operational expression of accountability in an algorithmic economy.

Which leadership tasks will AI never replace?

Dr. Raphael Nagel (LL.M.) identifies in HALTUNG a specific category of tasks immune to AI substitution: judgment in ethical grey zones, responsibility-bearing for consequences, and leadership of people in genuine uncertainty. A model trained on historical data cannot resolve a novel conflict between legitimate stakeholder claims; it can only retrieve precedent. It cannot sign a disclosure, face a parliamentary hearing, or tell 400 employees they are being made redundant. These are not inefficiencies in the market waiting for automation. They are the irreducible core of leadership, and they become more valuable, not less, as other tasks are automated.

How should a board document AI-driven decisions for audit purposes?

A defensible audit trail records, for each material decision, what the system proposed, which data it used, what the human reviewer considered, and why the final decision was approved or overridden. BCBS 239, the Bundesbank’s BAIT requirements, and Article 13 of the EU AI Act converge on this discipline. Tactical Management advises portfolio boards to apply the four-layer framework from HALTUNG: document the situation, the values engaged, the options considered, and the named commitment. Without named commitment, an audit finds paperwork, not accountability, and the board’s § 93 AktG defence weakens accordingly.
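Purely as an illustration of the four elements listed above, a single entry in such a trail might look as follows; every field name and value is a hypothetical example, not a prescribed format:

```python
# Hypothetical audit entry; field names and values are illustrative only.
audit_entry = {
    "decision_id": "2025-CR-0142",           # internal reference, invented here
    "system_proposal": "decline credit-line increase, model score 0.31",
    "data_inputs": ["payment_history_24m", "utilisation_ratio"],
    "reviewer_considerations": (
        "Score driven by one late payment during a documented billing "
        "dispute; pattern inconsistent with genuine default risk."
    ),
    "final_decision": "overridden",          # approved | overridden
    "override_rationale": "Flag judged a data artefact after human review.",
    "accountable_owner": "<name of the responsible executive>",
}
```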

Can a CEO invoke AI output as a defence in a shareholder suit?

Not reliably. Courts and regulators treat AI output as an input to a decision, not as a decision itself. The business judgment rule under § 93 Abs. 1 Satz 2 AktG protects directors who act on an adequately informed basis in good faith; it does not absolve a board that rubber-stamped algorithmic output without independent scrutiny. The Air Canada chatbot case decided by the British Columbia Civil Resolution Tribunal in 2024, Moffatt v Air Canada, confirmed that a firm cannot disclaim responsibility for promises made by its AI agent. Invoking the model as justification weakens, rather than strengthens, a defence.

Claritáte in iudicio · Firmitáte in executione

For weekly analysis on capital, leadership and geopolitics: follow Dr. Raphael Nagel (LL.M.) on LinkedIn →

Author: Dr. Raphael Nagel (LL.M.).