1. What the AI Act actually regulates
Regulation (EU) 2024/1689, the AI Act, is a horizontal product-safety law that applies to AI systems placed on the EU market or whose output is used in the EU. It is not a sector regulation, and it is not a data-protection law (the GDPR remains separate and applies in parallel). It is a risk-tiered obligations framework with four categories: prohibited, high-risk, limited-risk transparency, and minimal-risk.
Prohibited AI (Article 5). Eight specific practices banned outright, including social scoring by public authorities, real-time remote biometric identification in publicly accessible spaces by law enforcement (with narrow exceptions), and AI that exploits the vulnerabilities of specific groups. The prohibitions have applied since 2 February 2025.
High-risk AI (Article 6 + Annex III). The core of the regulation. AI systems that are either (a) used as safety components of regulated products covered by Annex I sectoral legislation, or (b) listed in Annex III as standalone high-risk systems. Annex III covers eight domains; the ones that matter most for typical PE portfolios are AI in employment and worker management; AI governing access to essential private and public services (credit scoring, insurance, public benefits); and AI in education, biometric categorisation, and workplace monitoring.
Limited-risk transparency (Article 50). Chatbots must disclose AI nature; AI-generated images/video must be labelled. Operationally relevant for portfolio companies running customer-facing chat or generative-AI marketing.
Minimal-risk. Everything else — recommendation systems, spam filters, AI in video games. No mandatory obligations under the AI Act (GDPR and other laws still apply).
The classification of any single AI system is rarely obvious at a high level. The Commission's AI Office is publishing implementing acts and guidance through 2025–2026. The provisional read for portfolio compliance: assume any HR-related AI is high-risk; assume any credit-decisioning AI is high-risk; assume biometric or workplace-monitoring AI is high-risk; classify everything else under Article 6 and Annex III on a system-by-system basis with legal review.
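A minimal sketch of that triage heuristic, assuming hypothetical flag names on each inventoried system; the flags and labels are illustrative, not terms from the Act, and the final call still needs legal review:

```python
# Hypothetical triage sketch for a preliminary classification pass.
# Flags and labels are illustrative, not terms from the Act; final
# classification needs legal review against Art. 5, Art. 6, and Annex III.

def preliminary_tier(system: dict) -> str:
    """Return a provisional risk tier for one inventoried AI system."""
    if system.get("prohibited_practice"):        # an Article 5 use case
        return "prohibited"
    if (system.get("hr_related")                 # employment / worker management
            or system.get("credit_decisioning")  # essential-services access
            or system.get("biometric")
            or system.get("workplace_monitoring")):
        return "high-risk (presumed; confirm against Annex III)"
    if system.get("customer_facing_chat") or system.get("generates_content"):
        return "limited-risk (Article 50 transparency)"
    return "minimal-risk (confirm system-by-system)"

print(preliminary_tier({"hr_related": True}))
# -> high-risk (presumed; confirm against Annex III)
```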
2. Obligations by role: provider, deployer, importer, distributor
The AI Act assigns obligations to four roles. Most operating companies in a typical PE portfolio will be deployers of AI systems built by third-party providers. A few will be providers themselves (when they build internal AI for sale or for use beyond their group). A small number will be importers (acquiring AI systems from non-EU suppliers and placing them on the EU market).
Providers (Article 16 + Chapter III Section 2). Must ensure the high-risk AI system meets the substantive requirements: risk management system, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy and robustness. Must register the system in the EU database, undergo conformity assessment (third-party assessment for certain biometric systems; self-assessment under internal control for most other Annex III categories), affix the CE marking, and run post-market monitoring continuously.
Deployers (Article 26). The role most PE portfolio companies fall into. Must use the AI system in accordance with the provider's instructions; assign human oversight to qualified personnel; monitor operation; inform workers and their representatives before deploying high-risk AI in an employment context; conduct fundamental rights impact assessments for certain high-risk systems (Article 27); and cooperate with market surveillance authorities.
Importers (Article 23). When the high-risk AI system comes from outside the EU, the importer must verify that the provider has carried out the conformity assessment, that the technical documentation is complete, and that the CE marking has been affixed. The importer's name and contact details must appear on the system, its packaging, or its accompanying documentation.
Distributors (Article 24). Must verify the CE marking and accompanying documentation. A lower bar than the importer's, but real obligations.
The operator's read: the high-risk obligations sit primarily on providers, but the deployer obligations on portfolio operating companies are not trivial and are far less well understood than the provider side. The fundamental rights impact assessment in particular has been overlooked by management teams in 2026 that assumed AI Act compliance was "the vendor's problem."
3. Fines, supervision, and enforcement architecture
Maximum fines under Article 99 (each cap applies whichever is higher):
€35M or 7% of global annual turnover (Art. 99(3)): breach of the prohibited-AI rules in Article 5.
€15M or 3% of global annual turnover (Art. 99(4)): breach of the high-risk AI obligations or the transparency obligations under Article 50.
€7.5M or 1% of global annual turnover (Art. 99(5)): supplying incorrect or misleading information to authorities.
The caps are higher than the GDPR's (€20M or 4%). For a portfolio company with €500M revenue, the worst-case AI Act exposure is €35M on a prohibited-practice breach and €15M on a high-risk-obligation breach: material either way.
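The arithmetic is simple: each tier's cap is the higher of the fixed amount and the turnover percentage. A minimal sketch, with the figures from Article 99 and illustrative tier labels:

```python
# Worst-case fine per Article 99: the higher of the fixed amount and the
# percentage of global annual turnover. Figures from Art. 99(3)-(5);
# tier keys are illustrative shorthand, not terms from the Act.

TIERS = {
    "prohibited_ai":   (35_000_000, 0.07),  # Art. 99(3)
    "high_risk":       (15_000_000, 0.03),  # Art. 99(4), incl. Art. 50 transparency
    "misleading_info": (7_500_000,  0.01),  # Art. 99(5)
}

def max_fine(breach: str, global_turnover_eur: float) -> float:
    fixed, pct = TIERS[breach]
    return max(fixed, pct * global_turnover_eur)

# The worked example from the text: a company with EUR 500M turnover.
print(f"{max_fine('prohibited_ai', 500e6):,.0f}")  # 35,000,000
print(f"{max_fine('high_risk', 500e6):,.0f}")      # 15,000,000
```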
Supervision. Each Member State designates national competent authorities. Germany has split the role between the BNetzA (general AI supervision) and BaFin (financial-sector AI), with the Bundeskartellamt holding parallel jurisdiction on AI-driven market abuse. The European AI Office at the Commission supervises general-purpose AI models with systemic risk. The European AI Board coordinates Member State practice.
Enforcement timing. From 2 August 2026, national authorities hold full powers: investigations, document requests, on-site inspections, fines, and orders to bring systems into compliance. Member States with strong supervisory traditions (Germany, France, the Netherlands) will likely act first. The first enforcement actions are expected in H2 2026 against the easiest cases (providers who failed to register systems in the EU database), working outward into deployer obligations through 2027.
4. The portfolio operator's compliance framework
A four-step framework that has held up across portfolio companies:
Step 1: AI inventory. Every portfolio company performs a systematic inventory of AI systems in use, including embedded AI in third-party SaaS (HR platforms, CRM, ERP modules with predictive features). The inventory captures: system name, vendor, function, the nature of the training data, the individuals or groups affected, and a preliminary classification (prohibited / high-risk / limited / minimal). Most portfolio companies discover 5x to 10x more AI in their stack than they expected.
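One possible shape for an inventory record, mirroring the capture list above; the field names are illustrative, not a prescribed schema:

```python
# Illustrative inventory record for Step 1; field names mirror the capture
# list in the text and are not a prescribed or official schema.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    function: str                    # what the system does in the business
    training_data: str               # nature / provenance of the training data
    affected_groups: list[str] = field(default_factory=list)
    preliminary_tier: str = "unclassified"  # prohibited / high-risk / limited / minimal

inventory = [
    AISystemRecord(
        name="CV screening module",
        vendor="HR-SaaS vendor",
        function="ranks inbound job applications",
        training_data="vendor-held, provenance unknown",
        affected_groups=["job applicants"],
        preliminary_tier="high-risk",
    ),
]
```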
Step 2: Classification review. Legal counsel (preferably with AI Act specialism) reviews each system against Articles 5, 6, and Annex III. Output is a categorised register with confidence levels for each classification. For high-risk systems, the register notes the role of the company (provider / deployer / importer / distributor) and the consequent obligations.
Step 3: Gap analysis and remediation. For each obligation, map the current state against the AI Act requirement. The most common gaps in 2026: missing technical documentation from third-party providers, no human-oversight assignment, no fundamental rights impact assessment for HR or credit AI, no incident-logging procedures, no monitoring of AI-system drift. Remediation runs as a 90- to 180-day program per portfolio company, with a vendor-management workstream for cases where the provider needs to be held to spec.
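The gap analysis itself reduces to a set difference between required obligations and what the company has implemented. A minimal sketch, using shorthand labels for the common 2026 gaps just listed:

```python
# Minimal gap-analysis sketch for Step 3: required deployer obligations
# (shorthand labels for the common 2026 gaps in the text) compared against
# what the company has actually implemented.

REQUIRED = {
    "technical documentation from provider",
    "human-oversight assignment",
    "fundamental rights impact assessment",
    "incident-logging procedure",
    "drift monitoring",
}

def gap_analysis(implemented: set[str]) -> set[str]:
    """Return the obligations still open for remediation."""
    return REQUIRED - implemented

open_items = gap_analysis({"human-oversight assignment", "incident-logging procedure"})
for item in sorted(open_items):
    print("remediate:", item)
```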
Step 4: Continuous compliance. Once compliant, the company runs annual reviews of its AI inventory, post-market monitoring of deployed systems, and incident reporting to authorities (within 15 days for serious incidents under Article 73), and bakes AI Act obligations into contracts at new-vendor onboarding. The board gets a quarterly AI compliance report, with material breaches escalated immediately.
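A small sketch of the Article 73 reporting clock, assuming the 15-day outer bound from the text; tighter deadlines apply to certain incident types, so treat this as the outside limit rather than the rule for every case:

```python
# Article 73 reporting clock: serious incidents must be reported no later
# than 15 days after awareness. Tighter windows apply to some incident
# types, so 15 days is the outer bound, not a universal rule.
from datetime import date, timedelta

def reporting_deadline(awareness: date, window_days: int = 15) -> date:
    return awareness + timedelta(days=window_days)

print(reporting_deadline(date(2026, 9, 1)))  # -> 2026-09-16
```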
Cost. For a mid-cap portfolio company with 10–20 AI systems in scope, the initial compliance program runs €150,000–€500,000 in legal and technical work, with €50,000–€150,000 annual run-rate compliance cost thereafter. Larger or AI-native portfolio companies multiply both.
The risk to the fund is reputational as much as financial. A high-profile AI Act breach by a portfolio company in 2026–2027 will draw board-level scrutiny across the wider portfolio and across the LP base. The deal-by-deal compliance cost is small relative to that exposure.
5. AI Act considerations in due diligence
Pre-acquisition diligence in 2026 should include an AI Act compliance workstream alongside financial, commercial, legal, and operational diligence. The workstream addresses:
(a) AI system inventory of the target. The same exercise as above, with the handicap that vendor cooperation is only partially obtainable pre-signing.
(b) Classification of identified systems. With particular focus on Annex III categories: HR/recruiting AI, credit-scoring AI in financial-sector targets, biometric or workplace-monitoring AI.
(c) Provider relationships. Whether the target's third-party AI vendors have done the work to make the target's deployer compliance possible. Vendors that cannot supply technical documentation, post-market monitoring data, or transparent training-data summaries are a material liability.
(d) Litigation and authority interaction. Any AI-related complaints, supervisory inquiries, or breaches in the prior 24 months.
(e) Forward economics. Where the target is dependent on AI systems that will require expensive remediation (or replacement), the cost shows up in working-capital adjustments or in price.
Output: an AI Act risk memo as a standalone work product alongside the cyber, GDPR, and ESG memos. Where the target is itself an AI provider (increasingly common in the technology and data-services portfolio segments), the memo expands into a substantive technical-compliance review, with conformity-assessment readiness as a closing condition.
6. Where the regulation evolves through 2027
Three trajectories the operator follows in 2026:
First, harmonised standards. CEN-CENELEC JTC 21 is producing the European harmonised standards that operationalise the AI Act's substantive requirements, e.g. EN 17847 on risk management for AI, technical-documentation standards, and conformity-assessment procedures. Compliance with these standards gives a presumption of conformity with the AI Act. The standards are expected from H1 2026 onwards.
Second, sectoral guidance. The Commission and Member State authorities will publish sector-specific implementing acts and guidance through 2026–2027. The financial-services sector is the most advanced: BaFin has a draft AI supervisory practice document, and the EBA has issued AI-specific guidance for credit institutions. Healthcare and education AI guidance will follow.
Third, the AI Liability Directive (proposed). A separate but linked instrument that adjusts non-contractual liability rules for AI-caused harm (presumed causation in certain cases, disclosure obligations on providers in litigation). Expected to enter parliamentary procedure 2026, applicable mid-2027 if adopted on schedule.
What this means for portfolio compliance: the substantive bar will rise through 2026–2027 as standards crystallise and authorities issue interpretive guidance. Investments in compliance made in 2026 are durable (the framework is not going away), but the specific requirements will get sharper. The operator's discipline is to do the work continuously rather than wait for the perfect requirement and then sprint.