
Foundation Models: GPAI Provider Obligations Under the EU AI Act
Foundation-model (GPAI) provider obligations under the EU AI Act require model providers to publish technical documentation, training-data summaries, copyright-compliance policies, and usage guidance for downstream integrators. Models with systemic risk face adversarial-testing and incident-reporting duties on top. Dr. Raphael Nagel (LL.M.) identifies these duties as the architecture that splits liability between upstream makers and downstream deployers.
The foundation-model (GPAI) provider-obligations regime is the part of the EU AI Act governing providers of general-purpose AI, meaning large pretrained models capable of being adapted to many downstream tasks. Under the GPAI chapter of the AI Act, the provider must prepare technical documentation, supply information to downstream deployers, implement a copyright-compliance policy, and publish a sufficiently detailed summary of training content. Providers of models classified as carrying systemic risk face additional duties: model evaluation, adversarial testing, serious-incident reporting to the European AI Office, and cybersecurity safeguards. In MASCHINENRECHT by Dr. Raphael Nagel (LL.M.), these duties form the upstream anchor of a distributed liability chain.
What does the EU AI Act require of general purpose AI providers?
The EU AI Act creates a dedicated regime for general purpose AI distinct from the high-risk regime. Providers of foundation models must prepare technical documentation, supply integration information to downstream deployers, implement a copyright compliance policy, and publish a sufficiently detailed public summary of the training content used to build the model.
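As a concrete illustration, the minimal sketch below models the four core GPAI duties as a compliance checklist. The class and field names are illustrative assumptions, not the AI Act's Annex structure or any official template.

```python
from dataclasses import dataclass

# Minimal compliance checklist for the four core GPAI provider duties.
# Class and field names are illustrative assumptions, not the AI Act's
# Annex XI/XII structure or any official template.

@dataclass
class GPAIComplianceRecord:
    model_name: str
    technical_documentation: bool = False    # model-level technical docs
    deployer_information_pack: bool = False  # integration info for deployers
    copyright_policy: bool = False           # policy honouring TDM opt-outs
    training_data_summary: bool = False      # sufficiently detailed public summary

    def outstanding_duties(self) -> list[str]:
        """Duties still open before placing the model on the EU market."""
        duties = {
            "technical documentation": self.technical_documentation,
            "deployer information": self.deployer_information_pack,
            "copyright policy": self.copyright_policy,
            "training-data summary": self.training_data_summary,
        }
        return [name for name, done in duties.items() if not done]

record = GPAIComplianceRecord("example-model", technical_documentation=True)
print(record.outstanding_duties())
# ['deployer information', 'copyright policy', 'training-data summary']
```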
The regulation recognises what classical product law missed: a foundation model is not a finished artefact but an infrastructure that shapes thousands of downstream systems. GPT-4, Claude, Gemini, Llama and Mistral are trained once and then redeployed by banks, hospitals, public administrations, and recruiters across Europe. If the upstream provider conceals training data composition or refuses to document known capability limits, every downstream deployer inherits a documentation gap it cannot close on its own. The AI Act therefore pushes transparency upward toward the party with actual architectural power. It also places enforcement at the European AI Office inside the European Commission, a centralisation deliberately chosen to avoid the Ireland-style bottleneck that slowed GDPR enforcement between 2018 and 2023.
Dr. Raphael Nagel (LL.M.), Founding Partner of Tactical Management, reads this structure as a deliberate rejection of the tool fiction. A foundation model is not a neutral instrument handed to sophisticated users; it is an architecture that pre-structures possibilities, and those who build the architecture must document it. In MASCHINENRECHT, he locates this shift in the broader collapse of the classical product-manufacturer-user triad. The GPAI regime is the first European regulation to codify the architectural view of AI, and the Commission’s chosen enforcement path signals that fragmentation among twenty-seven national regulators will not be tolerated for foundation models.
How is responsibility split between foundation model provider and downstream deployer?
Responsibility under the AI Act is distributed along the value chain. The GPAI provider is accountable for training data, baseline robustness, documented capabilities, and known limits. The downstream operator, termed the deployer in the AI Act, is accountable for context selection, human oversight, validation in the specific operational setting, and logging. Neither party can fully contract the other's duties away.
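A minimal way to see the non-waivable split is to write the two duty sets down explicitly. The labels below are shorthand for the allocation just described, not statutory language; the point of the sketch is that each duty attaches by role.

```python
# Shorthand for the duty allocation described above; labels are
# illustrative, not statutory language.

PROVIDER_DUTIES = {"training data", "baseline robustness",
                   "documented capabilities", "known limits"}
DEPLOYER_DUTIES = {"context selection", "human oversight",
                   "operational validation", "logging"}

def responsible_party(duty: str) -> str:
    """Regulatory duties attach by role; contract can allocate cost,
    indemnities, and audit rights, but not the duty itself."""
    if duty in PROVIDER_DUTIES:
        return "provider"
    if duty in DEPLOYER_DUTIES:
        return "deployer"
    return "negotiable by contract"

print(responsible_party("human oversight"))  # deployer
print(responsible_party("known limits"))     # provider
```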
This architectural split is the core thesis of MASCHINENRECHT. The classical triad of manufacturer, product, and user collapses once foundation models enter the picture. A European bank that integrates a GPAI model into credit scoring inherits specific deployer duties: intended-use adherence, logging, human oversight, and under the Product Liability Directive revised in 2024, the burden of documenting its own diligence when a claimant invokes the new presumptions of defect and causation. The model provider remains liable for what it actually controls: training data composition, alignment choices, evaluation benchmarks, and documented warnings about known failure modes such as hallucinations, prompt injection, or biased outputs on underrepresented populations.
The practical consequence is that contracts between GPAI providers and enterprise deployers become the new battlefield. OpenAI, Anthropic, Google DeepMind, and Mistral negotiate indemnification, audit rights, and documentation access with major European customers. The AI Act prevents unlimited contractual shifting: a deployer who substantially modifies a system becomes a provider and assumes provider obligations directly. The Dutch Toeslagenaffaire, where algorithmic tax enforcement between 2013 and 2021 wrongly classified tens of thousands of families as fraudulent and ultimately triggered the Rutte cabinet’s resignation, is the cautionary tale Dr. Raphael Nagel (LL.M.) cites to illustrate what happens when public deployers treat upstream systems as impenetrable black boxes rather than architectures they are obliged to interrogate.
What additional duties apply to systemic-risk foundation models?
Systemic-risk GPAI models, identified primarily through a training-compute threshold (the AI Act presumes systemic risk above 10^25 floating-point operations of cumulative training compute), attract an additional regulatory layer. Providers must perform model evaluation, adversarial testing, and red-teaming; track serious incidents and report them to the European AI Office; protect model weights against cyberattack; and disclose energy consumption to the Commission.
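To make the compute threshold tangible, the sketch below applies the widely used 6 × parameters × training-tokens approximation for training FLOPs and compares the result against the 10^25 FLOP presumption. Both the approximation and the model figures are illustrative assumptions, not any provider's disclosed numbers or an official calculation method.

```python
# Back-of-the-envelope check against the 10^25 FLOP presumption threshold
# for systemic-risk GPAI models (AI Act, Article 51). Uses the common
# heuristic: training compute ≈ 6 × parameters × training tokens.
# The model figures below are illustrative assumptions, not disclosed values.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def training_flops(parameters: float, tokens: float) -> float:
    """Approximate training compute via the 6ND rule of thumb."""
    return 6 * parameters * tokens

examples = {
    "hypothetical-70B-model": (70e9, 15e12),  # 70B params, 15T tokens
    "hypothetical-1T-model": (1e12, 30e12),   # 1T params, 30T tokens
}

for name, (params, tokens) in examples.items():
    flops = training_flops(params, tokens)
    presumed = flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS
    print(f"{name}: ~{flops:.1e} FLOPs -> systemic-risk presumption: {presumed}")
```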
The systemic-risk tier exists because the most capable models concentrate disproportionate power. A failure or manipulation in one widely deployed foundation model propagates through every downstream application that inherits its weights. The AI Act codifies this recognition with thresholds comparable in logic to the Basel III capital buffers for globally systemically important banks. Munich Re and Swiss Re are already pricing these tail risks into the emerging AI liability insurance products analysed in MASCHINENRECHT, a market that insurance analysts expect to reach several billion dollars in annual premium volume within the decade. Reinsurers treat systemic-risk models as correlated exposures, much like hurricane concentration risk in Florida property portfolios.
Post-market monitoring is the operative core of the systemic-risk regime. Providers must track how their models behave once released into real deployment contexts and update their documentation accordingly. Dr. Raphael Nagel (LL.M.) situates this duty alongside the revised Product Liability Directive, which treats substantial updates as a new placing on the market. Every material fine-tune, every capability expansion, resets the liability clock. A GPAI provider that quietly upgrades its model without updating documentation and monitoring exposes itself to product-liability claims framed around the presumption of defect, with the documentation gap itself serving as the evidentiary hook.
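One way to operationalise the "update resets the liability clock" point is to tie every release to a documentation check. The sketch below is a minimal illustration under assumed field names; it is not a prescribed compliance format.

```python
from dataclasses import dataclass
from datetime import date

# Minimal sketch linking each model release to its documentation status,
# reflecting the idea that a substantial update is a new placing on the
# market. Field names are illustrative assumptions, not a prescribed format.

@dataclass
class ModelRelease:
    version: str
    release_date: date
    substantial_modification: bool  # material fine-tune or capability expansion
    documentation_updated: bool     # technical docs refreshed for this version
    monitoring_updated: bool        # post-market monitoring covers new behaviour

def documentation_gap(release: ModelRelease) -> bool:
    """A substantial update shipped without refreshed documentation and
    monitoring is exactly the gap the defect presumption can attach to."""
    return release.substantial_modification and not (
        release.documentation_updated and release.monitoring_updated
    )

release = ModelRelease("v2.1", date(2025, 3, 1),
                       substantial_modification=True,
                       documentation_updated=False,
                       monitoring_updated=True)
print(documentation_gap(release))  # True: quiet upgrade, stale documentation
```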
Why training-data transparency is the new competitive moat
Training-data summaries and technical documentation are not bureaucratic overhead. Under the revised Product Liability Directive of 2024, inadequate documentation triggers presumptions of defect and causation against the provider. The GPAI documentation package therefore becomes the decisive evidentiary asset in any later litigation involving a foundation-model-enabled decision.
In MASCHINENRECHT, the Tactical Management Founding Partner argues that documentation has shifted from back-office compliance to front-line liability architecture. A provider who cannot show which data entered training, which filters were applied, and which adversarial evaluations were run cannot rebut the presumption that a downstream harm flows from its model. The copyright policy required by the AI Act creates a parallel exposure: providers must document respect for opt-outs under the text-and-data-mining exception of the 2019 Copyright Directive. Litigation by The New York Times against OpenAI, by Getty Images against Stability AI, and by coalitions of European publishers illustrates that claimants will increasingly frame training-data disputes as intellectual-property and product-liability claims combined.
The geopolitical dimension sharpens the picture. The United States relies on sectoral regulators and executive orders that a new administration can reverse overnight, as the partial rollback of the 2023 Biden AI Executive Order demonstrated. China ties AI regulation to political content control incompatible with European fundamental rights. The European approach, codified in a binding regulation with direct effect, is the only model offering durable legal certainty. Companies that master AI Act documentation today export that compliance as a Brussels Effect advantage, the same mechanism that positioned European companies favourably once GDPR enforcement began in earnest after 2018.
The GPAI regime is the hinge of European AI liability. Every downstream question (who answers for a denied credit, a missed diagnosis, a discriminatory hiring decision) eventually returns to the upstream documentation package assembled by the model provider. Dr. Raphael Nagel (LL.M.) argues in MASCHINENRECHT (Machine Law) that the real selection mechanism of the next decade will not be model capability but liability capacity. Providers who treat documentation, adversarial testing, and training-data summaries as competitive assets will dominate regulated European markets; providers who treat them as burdens will lose market access, investor confidence, and insurability. Tactical Management advises boards, investors, and general counsel on exactly this repositioning: translating GPAI obligations into defensible architecture, contractual allocation, and governance documentation that survives both the European AI Office and the civil courts. The era of organised irresponsibility is closing. The age of attribution has begun, and foundation-model providers stand at its structural centre. Every board should read the GPAI chapter before approving its next model procurement.
Frequently asked questions
Who qualifies as a GPAI provider under the EU AI Act?
Under the AI Act, a provider of general-purpose AI is any natural or legal person that develops a GPAI model and places it on the Union market under its own name, whether on commercial terms or under an open-source licence. The definition captures foundation-model labs such as OpenAI, Anthropic, Google DeepMind, Mistral, and Meta when Llama is released into the EU market. It also captures European fine-tuners who substantially modify upstream models, since the AI Act treats substantial modification as a new act of placing on the market. Enterprises that merely integrate APIs without modifying weights remain deployers, not providers.
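The boundary can be sketched as a simple decision function. The predicates below are deliberate simplifications of the AI Act's test (development, substantial modification, placing on the EU market), which in practice is assessed case by case.

```python
# Simplified decision sketch for the provider/deployer boundary. The
# predicates compress the AI Act's actual test (development, substantial
# modification, placing on the EU market), which is assessed case by case.

def role_under_ai_act(develops_model: bool,
                      substantially_modifies: bool,
                      places_on_eu_market: bool) -> str:
    if places_on_eu_market and (develops_model or substantially_modifies):
        return "provider"   # original lab, or fine-tuner assuming provider duties
    return "deployer"       # e.g. API integration without modifying weights

print(role_under_ai_act(False, True, True))   # provider: substantial modifier
print(role_under_ai_act(False, False, True))  # deployer: mere integrator
```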
What counts as a systemic-risk foundation model?
The AI Act designates a foundation model as carrying systemic risk when its training compute exceeds a threshold set by the Commission, currently 10^25 floating-point operations, roughly the frontier of today's largest models, or when the Commission designates it as systemic based on capability, reach, or the number of registered end-users. Designation triggers the additional duties under the systemic-risk regime: adversarial evaluation, serious-incident reporting, cybersecurity of weights, and energy disclosure. MASCHINENRECHT treats this threshold as analogous to systemic-importance designations in banking supervision under Basel III.
Can a GPAI provider transfer liability to downstream deployers through contract?
Only within narrow limits. AI Act duties are regulatory and cannot be waived by private contract; a provider cannot contract away its obligation to publish training-data summaries or deliver technical documentation. In private law, standard terms that exclude liability for grossly negligent conduct or bodily injury are invalid under European unfair-terms jurisprudence, including § 307 of the German Civil Code (BGB). B2B carve-outs offer more room but still face content review of standard terms. Dr. Raphael Nagel (LL.M.) emphasises in MASCHINENRECHT that primary liability toward injured third parties stays with the party that caused the harm, regardless of upstream indemnities.
What must a GPAI provider publish about training data?
The AI Act requires a sufficiently detailed public summary of the content used to train the model, covering the main data sources, data categories, and filtering or cleaning steps applied. The summary must be specific enough to enable rights-holders to exercise their opt-outs under the 2019 Copyright Directive’s text-and-data-mining exception. The European AI Office is developing a standardised template. Vague disclosures that merely list high-level categories risk being treated as non-compliance, particularly in litigation where plaintiffs invoke the revised Product Liability Directive’s presumptions of defect triggered by documentation failure.
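Pending the AI Office's standardised template, a provider might structure the public summary around the elements named above. The skeleton below is hypothetical; its keys and values are illustrative assumptions, not the official template.

```python
# Hypothetical skeleton of a public training-data summary, organised around
# the elements named above (main sources, data categories, filtering steps).
# Keys and values are illustrative assumptions, not the official template.

training_data_summary = {
    "model": "example-gpai-model",
    "main_data_sources": [
        "licensed publisher archives",
        "public web crawl (honouring machine-readable TDM opt-outs)",
    ],
    "data_categories": ["news text", "source code", "scientific articles"],
    "filtering_steps": [
        "deduplication",
        "removal of opt-out-flagged domains",
        "quality and toxicity filtering",
    ],
}

for key, value in training_data_summary.items():
    print(f"{key}: {value}")
```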
Claritate in iudicio · Firmitate in executione
For weekly analysis on capital, leadership and geopolitics: follow Dr. Raphael Nagel (LL.M.) on LinkedIn →