Dr. Raphael Nagel (LL.M.), authority on AI liability in financial services under DORA
Dr. Raphael Nagel (LL.M.), Founding Partner, Tactical Management
From the work · MASCHINENRECHT

AI Liability in Financial Services Under DORA: How European Banks Must Govern Algorithmic Decisions

AI liability in financial services under DORA is the integrated duty of banks, investment firms and insurers to govern algorithmic systems end to end. DORA, applicable since 17 January 2025, interlocks with the EU AI Act and the revised Product Liability Directive: credit scoring and trading models become operator risks carrying a reversed burden of proof, mandatory logging, adversarial testing and third party oversight.

AI liability in financial services under DORA is the European liability regime that combines the Digital Operational Resilience Act (Regulation (EU) 2022/2554, applicable since 17 January 2025) with the EU AI Act and the revised Product Liability Directive to allocate responsibility when algorithmic systems fail inside regulated financial institutions. It captures credit scoring models, algorithmic trading engines, anti-money-laundering classifiers and robo-advisory platforms. Under this regime the deployer bank carries primary liability for explainability, model validation, ICT third party risk and adversarial resilience, while model providers and integrators share architectural responsibility. Dr. Raphael Nagel (LL.M.) analyses in MASCHINENRECHT how this framework converts governance from a compliance cost into balance sheet architecture.

How do DORA and the EU AI Act interact for banks?

DORA and the EU AI Act form a single operational liability stack for European financial institutions. DORA, applicable since 17 January 2025, governs ICT resilience, third party oversight and incident reporting. The AI Act layers high risk obligations on credit scoring and biometric onboarding, converting algorithmic governance into a supervised function.

The architecture is deliberate. DORA demands that banks document every material ICT dependency, including model providers, API vendors and cloud hosts. The AI Act adds conformity assessment, logging, human oversight and post market monitoring for every system classified as high risk under Annex III. BaFin, the ECB Single Supervisory Mechanism and ESMA now treat these two regimes as one inspection grid: a credit scoring model is simultaneously an ICT asset under DORA and a high risk AI system under Regulation (EU) 2024/1689.

Dr. Raphael Nagel (LL.M.) argues in MASCHINENRECHT that this convergence is not a compliance accident but a strategic architecture. The supervisor now reads model governance, incident logs and adversarial test results as elements of the same risk file. Banks that treat DORA and the AI Act as two separate projects duplicate effort and, more dangerously, produce inconsistent documentation that weakens their defence in litigation.

The revised Product Liability Directive completes the triangle. Software, including AI components, is now explicitly a product. When a credit model causes economic damage, the claimant can combine AI Act breach as evidence of defect, DORA breach as evidence of operational failure and the Directive’s presumption of causality. This is the integrated exposure every European bank now carries on its balance sheet.

Algorithmic trading liability after the 2010 Flash Crash

Algorithmic trading creates liability that classical law struggles to allocate. The Flash Crash of 6 May 2010, when the Dow Jones lost almost 1000 points within minutes before recovering, showed that autonomous interaction among trading engines can produce systemic damage with no identifiable culprit. DORA and MiFID II now demand preventive governance at the architectural level.

The sum of individually rational decisions can be collectively irrational, as Dr. Raphael Nagel (LL.M.) writes in MASCHINENRECHT. High frequency systems operate on timescales incompatible with human intervention. The Deployer, under AI Act terminology, bears the duty to validate models against extreme scenarios, monitor for drift and trigger a kill switch when markets behave abnormally. Failure to do so is itself a breach of MiFID II organisational requirements and of DORA's adversarial resilience testing obligations.
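What drift monitoring can look like in practice: a minimal sketch, assuming score-level monitoring of a model's output distribution via the population stability index (PSI). The 0.25 halt threshold and the `halt_trading` callback are illustrative assumptions, not values or interfaces prescribed by DORA or MiFID II.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, observed: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a reference score distribution and live scores."""
    # Quantile bin edges from the reference period; np.unique guards
    # against duplicate edges on discrete distributions.
    edges = np.unique(np.quantile(expected, np.linspace(0, 1, bins + 1)))
    edges[0], edges[-1] = -np.inf, np.inf
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    o_frac = np.histogram(observed, edges)[0] / len(observed)
    e_frac = np.clip(e_frac, 1e-6, None)  # avoid log(0)
    o_frac = np.clip(o_frac, 1e-6, None)
    return float(np.sum((o_frac - e_frac) * np.log(o_frac / e_frac)))

PSI_HALT_THRESHOLD = 0.25  # illustrative; a common rule-of-thumb cut-off

def check_drift_and_halt(reference_scores, live_scores, halt_trading) -> float:
    """Escalate to the kill switch when live model output drifts materially."""
    psi = population_stability_index(np.asarray(reference_scores),
                                     np.asarray(live_scores))
    if psi > PSI_HALT_THRESHOLD:
        halt_trading(reason=f"model output drift: PSI={psi:.3f}")
    return psi
```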

The Knight Capital incident of 1 August 2012, in which a faulty trading algorithm lost roughly 440 million US dollars in 45 minutes, sits in the same doctrinal family. Supervisors in Germany expect investment firms to document pre deployment testing, stress scenarios and fallback procedures. The Bundesanstalt für Finanzdienstleistungsaufsicht can demand model level evidence of governance under the Wertpapierhandelsgesetz.

Under the AI Act, the most serious infringements, those involving prohibited practices, attract fines of up to 35 million euros or 7 per cent of worldwide annual turnover, whichever is higher. For a mid sized investment bank this is not a tail risk but a capital planning input. Boards that still delegate algorithmic trading governance to the IT function alone misread the new regime entirely.

Credit scoring AI, proxy discrimination and the right to explanation

Credit scoring AI is high risk under Annex III of the AI Act and simultaneously subject to Article 22 GDPR. When a model rejects an application through proxy variables that correlate with protected characteristics, the bank faces liability under the AGG, the GDPR and, from August 2026, full AI Act conformity duties as Deployer.

MASCHINENRECHT reconstructs the case of an entrepreneur rejected for a business loan because an algorithmic system treated a period of low account activity, caused by parental leave, as a risk indicator. That is indirect gender discrimination under Directive 2004/113/EC and the AGG. The bank cannot hide behind the vendor; it is the Deployer and therefore the primary addressee of Article 22 obligations, including the right to a meaningful explanation.

Proxy discrimination operates through postcode, educational background, marital status and family formation patterns. A model trained on historical data reproduces historical bias as statistical normality. The European Banking Authority and the ECB expect institutions to stress test credit models against protected attribute proxies and to produce borrower level explanations that are substantive, not generic placeholders.
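To make that supervisory expectation concrete, here is a minimal sketch of a proxy stress test, assuming a pandas DataFrame of past decisions. The column names (`approved`, `gender`, `postcode_score`) and the four-fifths-style ratio are illustrative assumptions, not an EBA-mandated methodology.

```python
import numpy as np
import pandas as pd

def proxy_stress_report(df: pd.DataFrame, decision_col: str,
                        protected_col: str, proxy_cols: list[str]) -> dict:
    """Screen a decision history for two signals of proxy discrimination:
    approval-rate gaps across protected groups, and how strongly each
    candidate proxy variable correlates with the protected attribute."""
    rates = df.groupby(protected_col)[decision_col].mean()
    protected_codes = pd.factorize(df[protected_col])[0]
    return {
        "approval_rates": rates.to_dict(),
        # four-fifths-style ratio: lowest group rate over highest group rate
        "disparate_impact_ratio": float(rates.min() / rates.max()),
        "proxy_correlations": {
            col: float(np.corrcoef(protected_codes,
                                   df[col].to_numpy(dtype=float))[0, 1])
            for col in proxy_cols
        },
    }

# Hypothetical usage on a loan book with the assumed column names:
# report = proxy_stress_report(loans, "approved", "gender", ["postcode_score"])
```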

The right to a meaningful explanation is the point at which Article 22 GDPR and AI Act Article 13 transparency duties converge. An explanation that merely states that a risk profile led to rejection fails both instruments. Institutions that cannot produce variable level reasoning have already lost the evidentiary battle before any claimant files suit.
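A borrower-level explanation has to name the variables that actually drove the outcome. The sketch below assumes a simple linear scoring model, where per-variable contributions can be read off directly; for non-linear models the same output would come from attribution methods such as SHAP. All names, values and the sign convention are illustrative.

```python
def variable_level_reasons(weights: dict[str, float],
                           applicant: dict[str, float],
                           baseline: dict[str, float],
                           top_n: int = 3) -> list[tuple[str, float]]:
    """For a linear score s = sum_i w_i * x_i, the contribution of each
    variable relative to a baseline profile is w_i * (x_i - baseline_i).
    Returns the variables that pushed the score hardest toward rejection,
    assuming a higher score means a more creditworthy applicant."""
    contributions = {
        name: weights[name] * (applicant[name] - baseline[name])
        for name in weights
    }
    return sorted(contributions.items(), key=lambda kv: kv[1])[:top_n]

# Hypothetical example: low account activity dominates the rejection.
reasons = variable_level_reasons(
    weights={"account_activity": 2.0, "income": 1.5, "debt_ratio": -1.0},
    applicant={"account_activity": 0.1, "income": 0.8, "debt_ratio": 0.4},
    baseline={"account_activity": 0.7, "income": 0.7, "debt_ratio": 0.3},
)
# -> [("account_activity", -1.2), ...]: a substantive, variable-level reason
```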

ICT third party risk: foundation models, APIs and vendor governance

DORA Chapter V imposes strict ICT third party risk management on every regulated financial entity. When a bank embeds a foundation model via API, the model provider becomes an ICT third party service provider subject to audit rights and contractual resilience requirements, and can be designated a critical provider under the ESA Oversight Framework; these duties map directly onto AI Act GPAI provider obligations.

The API economy creates a new liability vacuum. A bank consuming large language model endpoints typically has limited visibility into model internals. DORA responds by demanding that critical ICT providers be monitored continuously, that exit strategies exist, and that contractual clauses include audit rights, subcontracting restrictions and incident cooperation duties. The European Supervisory Authorities now run an Oversight Framework for critical ICT third party providers that places major cloud and AI vendors under direct European supervision.
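What a DORA-aligned register entry might capture for a model or API dependency, expressed as a minimal data-structure sketch. The field names are illustrative assumptions and do not reproduce the official ESA register-of-information template; the `contract_gaps` check mirrors the contractual clauses listed above.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ICTThirdPartyEntry:
    """One row of a DORA-aligned register for a model/API dependency."""
    provider: str                     # e.g. a foundation model vendor
    service: str                      # e.g. "LLM inference API"
    supports_critical_function: bool  # drives oversight intensity
    audit_rights: bool                # contractual audit/inspection clause
    subcontractors_disclosed: bool
    incident_cooperation_clause: bool
    exit_strategy_documented: bool
    last_resilience_test: date | None = None
    open_findings: list[str] = field(default_factory=list)

    def contract_gaps(self) -> list[str]:
        """Flag missing Chapter V-style contractual safeguards."""
        checks = {
            "audit rights": self.audit_rights,
            "subcontracting transparency": self.subcontractors_disclosed,
            "incident cooperation": self.incident_cooperation_clause,
            "exit strategy": self.exit_strategy_documented,
        }
        return [name for name, ok in checks.items() if not ok]
```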

Dr. Raphael Nagel (LL.M.), Founding Partner of Tactical Management, observes in MASCHINENRECHT that the weakest link in most financial AI stacks is not the core model but the integration layer: the point at which probabilistic outputs become operational decisions. Threshold calibration, cascade effects and context transfer are integrator level risks that DORA forces institutions to own explicitly, not to outsource by contract.

Reinsurers such as Munich Re and Swiss Re are already pricing these risks into cyber and tech errors and omissions policies. Institutions that cannot evidence vendor due diligence, model validation and adversarial resilience testing pay higher premiums or lose coverage entirely. Insurance has become a private regulator of AI governance quality.

Board exposure: capital, insurance and strategic liability resilience

Board exposure under the DORA and AI Act regime is no longer an IT matter. Supervisors treat algorithmic governance as a determinant of capital adequacy, insurance premium and market access. Rating agencies such as Moody’s and S&P are beginning to factor AI governance into credit ratings, and ESG investors price it into valuation multiples.

MASCHINENRECHT reframes liability as market architecture. Banks that industrialise documentation, logging and model validation convert regulatory burden into balance sheet resilience. Those that do not face a predictable cascade: regulatory findings, capital add ons under Pillar 2, higher premiums, weaker ratings and reputational damage. The German Stock Corporation Act, through § 93 AktG, obliges board members to exercise the diligence of a prudent manager, and demonstrable neglect of AI governance now falls squarely within that duty.

The strategic playbook is concrete. First, a complete inventory of algorithmic systems mapped to AI Act risk classes. Second, a DORA aligned ICT third party register covering every model and API dependency. Third, an incident response protocol tested against both the AI Act reporting obligations for serious incidents and the DORA major incident classification. Fourth, explicit board reporting lines with named accountabilities and documented escalation thresholds.
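A minimal sketch of the first two playbook items as one data structure: an inventory row that ties each algorithmic system to an AI Act risk class, its DORA third party dependencies and a named owner. The schema and the example values are assumptions for illustration, not a supervisory template.

```python
from dataclasses import dataclass
from enum import Enum

class AIActRiskClass(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"        # Annex III, e.g. credit scoring
    LIMITED_RISK = "limited_risk"  # transparency duties only
    MINIMAL_RISK = "minimal_risk"

@dataclass
class AISystemRecord:
    """Inventory row linking one algorithmic system to its AI Act class,
    its ICT dependencies and a named accountable owner for board reporting."""
    name: str
    business_function: str       # e.g. "retail credit underwriting"
    risk_class: AIActRiskClass
    ict_dependencies: list[str]  # keys into the Chapter V register
    accountable_owner: str       # named individual, not a department
    last_validation: str         # ISO date of last model validation

inventory = [
    AISystemRecord("retail-credit-scoring-v4", "retail credit underwriting",
                   AIActRiskClass.HIGH_RISK,
                   ["cloud-host-A", "bureau-data-API"],
                   "Head of Model Risk", "2025-11-30"),
]
```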

Dr. Raphael Nagel (LL.M.) positions this not as defensive compliance but as offensive market positioning. Institutions that master the AI liability stack early will price risk more accurately, attract institutional capital on better terms and survive the first generation of enforcement actions that will shape European jurisprudence through 2028 and beyond.

The DORA and AI Act regime has ended the era in which European financial institutions could treat algorithmic decision making as a back office matter. Credit scoring, trading execution, fraud detection and client onboarding are now supervised functions whose failure produces combined liability under Regulation (EU) 2024/1689, Regulation (EU) 2022/2554 and the revised Product Liability Directive. The claimant no longer has to reconstruct the black box; the institution has to explain it, on pain of presumed defect and presumed causality.

MASCHINENRECHT, authored by Dr. Raphael Nagel (LL.M.), Founding Partner of Tactical Management, maps this terrain with the precision it requires. The book shows why the operator, not the vendor, sits at the centre of the liability architecture, why the integrator is the most underestimated actor in the chain, and why proxy discrimination is the legal fault line through which most credit AI cases will travel.

The next five years will produce the first generation of European case law that binds these regimes together. Institutions that build liability resilience now will shape that jurisprudence; institutions that wait will inherit rules written against them. For European banks, insurers and investment firms, AI liability in financial services under DORA is not a compliance item. It is the new infrastructure of market access.

Frequently asked questions

Who is liable when a credit scoring AI discriminates against a borrower?

Under the integrated DORA and AI Act regime, primary liability sits with the deployer bank, not the model vendor. The bank is the Deployer under the AI Act, the operator under DORA and the data controller under GDPR Article 22. Liability flows through the AGG for discrimination, the GDPR for failure to provide a meaningful explanation and the revised Product Liability Directive for defective software. The bank can seek recourse against the model provider, but only if its contracts preserve audit rights, documentation access and defect cooperation duties. Without such contractual architecture, the institution absorbs the full exposure alone.

Does DORA apply to non EU banks offering services in Europe?

Yes, DORA applies to any financial entity providing regulated services in the European Union, regardless of where its headquarters sit. It also reaches non EU ICT third party service providers through the Oversight Framework for critical providers, meaning major US and Asian cloud and AI vendors fall within European supervision when they serve EU financial institutions. This extraterritorial reach mirrors the logic of GDPR enforcement against non European technology firms and is a deliberate extension of the Brussels Effect into operational resilience regulation.

What penalties apply for AI Act violations in banking?

The AI Act imposes fines of up to 35 million euros or 7 per cent of worldwide annual turnover for prohibited practices, up to 15 million euros or 3 per cent for high risk obligation breaches, and up to 7.5 million euros or 1 per cent for incorrect information to supervisors, in each case whichever amount is higher. DORA adds its own sanction ladder through national implementing legislation. Combined with capital add ons under Pillar 2 and reputational damage that affects funding costs, these penalties are material even for systemically important institutions and feed directly into board level risk appetite decisions.
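The "whichever is higher" mechanics make the percentage cap the binding one for any institution of scale, as a quick computation shows. The tier labels are illustrative names; the SME rule (whichever is lower) is deliberately omitted from this sketch.

```python
def ai_act_fine_cap(turnover_eur: float, tier: str) -> float:
    """Maximum administrative fine per tier: the fixed amount or the
    turnover percentage, whichever is higher (SME rule omitted)."""
    tiers = {
        "prohibited_practices":  (35_000_000, 0.07),
        "high_risk_obligations": (15_000_000, 0.03),
        "incorrect_information":  (7_500_000, 0.01),
    }
    fixed, pct = tiers[tier]
    return max(fixed, pct * turnover_eur)

# A bank with EUR 2bn worldwide turnover: 7% = EUR 140m > the EUR 35m floor.
# ai_act_fine_cap(2_000_000_000, "prohibited_practices") -> 140_000_000.0
```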

How does the reversed burden of proof work in AI litigation?

The revised Product Liability Directive allows courts to presume defect and causality when a claimant shows that the product is technically complex and the circumstances plausibly indicate a defect. For AI systems inside banks, this means the institution must produce logs, model documentation, training data records and validation evidence to rebut the presumption. An institution unable to explain its own system has already conceded the evidentiary ground. Dr. Raphael Nagel (LL.M.) frames this in MASCHINENRECHT as the moment the black box stops protecting and starts convicting.
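A minimal sketch of the kind of per-decision logging that puts an institution in a position to rebut the presumption. The record layout and the content digest are illustrative assumptions; a production system would pair this with immutable (WORM) storage and formal retention policies.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_model_decision(log_file, model_id: str, model_version: str,
                       inputs: dict, output: dict, explanation: dict) -> str:
    """Append one record per automated decision: what the model saw, what it
    decided and the variable-level reasons, plus a SHA-256 digest of the
    record's content so later tampering is detectable."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "explanation": explanation,  # e.g. variable-level reasons
    }
    record["sha256"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    log_file.write(json.dumps(record) + "\n")
    return record["sha256"]
```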

What does meaningful human oversight require for algorithmic trading?

Meaningful oversight under AI Act Article 14 requires that a natural person can understand, monitor and interrupt the system. For high frequency trading this is technically impossible at the per decision level, so oversight must operate at the architectural level through parameter limits, kill switches, circuit breakers and the adversarial resilience testing mandated by DORA. Organisational oversight replaces individual review when timescales make the latter fictional. Institutions that maintain the fiction of human in the loop without architectural controls fail both the AI Act and DORA simultaneously.
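What architectural-level oversight can look like in code: a minimal sketch of a pre-trade guard combining hard parameter limits with a one-way kill switch. The limit names, values and reset convention are illustrative assumptions, not MiFID II or DORA specifications.

```python
class TradingKillSwitch:
    """Architectural oversight sketch: hard limits checked on every order,
    plus a one-way halt flag a human supervisor can trip."""

    def __init__(self, max_order_notional: float, max_daily_loss: float):
        self.max_order_notional = max_order_notional
        self.max_daily_loss = max_daily_loss
        self.daily_pnl = 0.0
        self.halted = False

    def record_pnl(self, realised: float) -> None:
        self.daily_pnl += realised

    def halt(self, reason: str) -> None:
        """One-way: once tripped, only a documented human decision resets."""
        self.halted = True
        print(f"TRADING HALTED: {reason}")

    def approve_order(self, notional: float) -> bool:
        if self.halted:
            return False
        if notional > self.max_order_notional:
            self.halt(f"order notional {notional} exceeds limit")
            return False
        if self.daily_pnl < -self.max_daily_loss:
            self.halt(f"daily loss limit breached: {self.daily_pnl}")
            return False
        return True
```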

Claritáte in iudicio · Firmitáte in executione

For weekly analysis on capital, leadership and geopolitics: follow Dr. Raphael Nagel (LL.M.) on LinkedIn →

Author: Dr. Raphael Nagel (LL.M.)