Dr. Raphael Nagel (LL.M.) on Build, Buy or Control for Enterprise AI — Tactical Management
From the work · ALGORITHMUS

Build, Buy or Control for Enterprise AI: A Decision Framework for Boards

Build, Buy or Control for Enterprise AI is the strategic choice between developing proprietary models, licensing standardized services, or fine-tuning foundation models under selective ownership. Dr. Raphael Nagel (LL.M.) argues the decision turns on data sensitivity, competitive differentiation, and internal capability, not on technology hype.

Build, Buy or Control for Enterprise AI is the governance framework through which a company decides, for each workload, whether to develop a proprietary AI system, license an external service, or apply a hybrid control model such as fine-tuning a foundation model on internal data. The framework, articulated by Dr. Raphael Nagel (LL.M.) in ALGORITHMUS, Who Controls AI, Controls the Future, rests on three diagnostic variables: strategic differentiation of the use case, sensitivity of the underlying data, and the organization’s existing AI capability. A coherent answer rarely applies uniformly across the enterprise.

Why the Build, Buy or Control decision defines competitive position

Build, Buy or Control for Enterprise AI decides whether a company owns the algorithms that shape its economics or rents them from a platform provider. Dr. Raphael Nagel (LL.M.) treats this as a sovereignty question, not a procurement question. The answer structures margins, defensibility, and regulatory exposure for a decade.

ALGORITHMUS, Who Controls AI, Controls the Future documents the scale at which this plays out. OpenAI’s valuation climbed from one billion dollars in 2019 to over ninety billion dollars by late 2023, while Microsoft committed a cumulative investment exceeding thirteen billion dollars. When capital concentrates at that velocity, every enterprise becomes either a shaper or a tenant of the resulting infrastructure. Tenants pay what owners set.

The error most boards make is treating AI sourcing as an IT-department question. It is a board-level question because it determines which parts of future value creation the company captures and which it surrenders. JPMorgan Chase, with more than 1,500 AI engineers, has drawn the line explicitly: no material competitive advantage shall rest on technology a rival can buy on equal terms. That is not a technology stance; it is a strategic stance.

A useful diagnostic, developed in client work at Tactical Management, asks three questions per workload: does the use case differentiate us in the market, how sensitive are the underlying data, and do we possess the internal capability today? The answers rarely converge on a single sourcing mode for the entire company.

When Build is the right answer

Build is correct when the AI system itself is the competitive moat. Dr. Raphael Nagel (LL.M.) argues in ALGORITHMUS that firms whose margins depend on algorithmic decisions (credit scoring, fraud detection, algorithmic trading, pricing optimization) cannot outsource the algorithm without outsourcing the margin. JPMorgan’s IndexGPT and Renaissance Technologies’ Medallion Fund, averaging roughly 66 percent annual returns before fees from 1988 to 2018, illustrate the logic.

The Build path carries real cost. Training a frontier-class model requires hundreds of millions of dollars, a few thousand qualified researchers globally, and infrastructure that only hyperscalers can replicate. For mid-market firms, full Build is rarely feasible at the foundation-model layer. It is feasible and often necessary at the application layer where proprietary domain data exists. Siemens Xcelerator, built on decades of machine-operation data from hundreds of thousands of installed assets, is the canonical European example.

The second Build trigger is data sensitivity beyond contractual remedies. Hospitals, defense contractors, and certain Bundeswehr-adjacent suppliers cannot legally expose patient records, classified material, or export-controlled technical data to external inference services, regardless of contractual safeguards. Here Build is not a strategic preference but a compliance prerequisite under the EU General Data Protection Regulation and sector-specific rules.

The third trigger is regulatory durability. A proprietary system that the company can audit, document, and modify is easier to align with evolving EU AI Act requirements than a closed third-party service where the provider controls training data, model updates, and explanation interfaces.

When Buy beats Build

Buy is the right answer for commoditized productivity workloads where the marginal cost of building exceeds the marginal value of differentiation. Microsoft Copilot at approximately thirty dollars per user per month is the paradigmatic case. For a 1,000-user firm, that is 360,000 dollars annually against a documented productivity lift that the 2023 MIT study measured at 55 percent for coding tasks.

The calculation is disciplined. If the average German knowledge worker costs sixty thousand euros annually in salary, a conservatively estimated 20 percent productivity gain is worth twelve thousand euros per seat. A 360-euro annual Copilot license therefore returns more than thirty times its cost, a ratio unavailable from most other productivity investments. Klarna’s public disclosure in 2024 that its AI assistant replaced the work of seven hundred customer-service full-time equivalents illustrates the same economics at operational scale.
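The back-of-envelope arithmetic above can be sketched in a few lines. The figures are the article’s illustrative assumptions (average salary, uplift estimate, license price), not benchmarks:

```python
# Back-of-envelope ROI for a Buy decision, using the article's illustrative figures.
annual_salary_eur = 60_000   # average German knowledge worker (assumption from the text)
productivity_gain = 0.20     # conservatively estimated uplift
license_cost_eur = 360       # annual per-seat license cost

# Value created per seat and the return multiple on the license spend.
value_per_seat = annual_salary_eur * productivity_gain   # 12,000 EUR
roi_multiple = value_per_seat / license_cost_eur         # roughly 33x

print(f"Value per seat: {value_per_seat:,.0f} EUR")
print(f"Return multiple: {roi_multiple:.1f}x license cost")
```

The point of making the arithmetic explicit is that each input can be stress-tested: even halving the productivity assumption leaves the multiple comfortably above ten.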

Buy is also correct when the provider’s ecosystem advantage is insurmountable. GitHub Copilot is trained on over 372 million repositories; replicating that training corpus is neither legal nor economical for an individual enterprise. The rational response is to accept the service and concentrate internal capability on higher-value work.

The non-negotiable discipline with Buy is exit architecture. Dr. Raphael Nagel (LL.M.) recommends that every significant AI procurement specify abstraction layers, data-portability clauses, and contingency benchmarks evaluating equivalent providers. Without these, Buy decays into lock-in, and the tenant relationship becomes extractive when renewal arrives.

Why Control is often the sharpest answer

Control, which means fine-tuning foundation models on proprietary data inside a governed environment, is the option most mid-market firms underuse. ALGORITHMUS positions it as the pragmatic middle path: the company retains strategic authority over model behavior without bearing the full cost of frontier training. Fine-tuning a model such as LLaMA 3, Mistral, or Falcon typically requires a few weeks and tens of thousands of euros.

The economic case is stronger than it appears. A logistics company that fine-tunes an open model on twenty years of shipping correspondence, supplier communications, and internal process documentation obtains a system fluent in its operational language, integrated with its workflows, and free of external API dependency for sensitive content. No generic API can match that contextual depth at comparable cost.

Control is particularly valuable under the EU AI Act for high-risk systems in recruiting, credit scoring, and essential services. The regulation demands documentation of training data, bias-mitigation measures, and human oversight; fines reach seven percent of global annual turnover. A controlled, on-premise or sovereign-cloud deployment simplifies evidence collection enormously compared with a closed vendor stack where the provider’s cooperation is contractual rather than architectural.

Control also answers the US CLOUD Act exposure. Data held on AWS, Azure, or Google Cloud remains potentially accessible to US authorities regardless of physical server location in Europe. For banks, insurers, hospitals, and defense-adjacent industrials, that residual risk is material. The strategic Tactical Management posture is Buy for unregulated workloads, Control for regulated ones, Build only where the algorithm itself is the product.

The decision matrix boards should adopt

A workable Build, Buy or Control matrix answers three questions per workload. First, is this capability strategically differentiating? A yes points toward Build or Control; a no points toward Buy. Second, are the data so sensitive that an external provider must not see them, whether for GDPR, CLOUD Act, or competitive reasons? A yes eliminates standard Buy. Third, do we possess, or can we realistically acquire, the internal capability within eighteen months? A no recalibrates Build toward Control or Buy.

The answer is almost always hybrid. A German industrial Mittelstand firm will rationally buy Microsoft Copilot for office productivity, control a fine-tuned open model for customer service on regulated data, and build a predictive-maintenance system on proprietary sensor data that no generic industrial model can replicate. The coherence lies not in uniform sourcing but in explicit, documented decision logic applied workload by workload.
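The triage described above can be sketched as a small decision function. The mapping below is one reading of the three questions, not a formal specification from the book; edge cases remain a board judgment call:

```python
def sourcing_mode(differentiating: bool, data_sensitive: bool,
                  capable_within_18m: bool) -> str:
    """Triage one workload against the three board-level questions.

    One possible encoding of the framework's logic; in practice,
    borderline workloads warrant explicit, documented deliberation.
    """
    if data_sensitive:
        # External providers must not see the data: standard Buy is eliminated.
        return "Build" if differentiating and capable_within_18m else "Control"
    if differentiating:
        # Differentiating but not sensitive: own the behavior if capability exists.
        return "Build" if capable_within_18m else "Control"
    # Commodity workload: license it and concentrate capability elsewhere.
    return "Buy"

# The Mittelstand example from the text, workload by workload:
print(sourcing_mode(False, False, True))   # office productivity -> Buy
print(sourcing_mode(True, True, False))    # customer service on regulated data -> Control
print(sourcing_mode(True, True, True))     # predictive maintenance on proprietary data -> Build
```

Encoding the logic this plainly is the point of the matrix: the output per workload is less important than forcing the three inputs to be answered honestly and recorded.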

Vendor due diligence for AI procurement must go beyond classical software review. Dr. Raphael Nagel (LL.M.) identifies four dimensions: model transparency regarding training data and bias testing; data architecture clarifying where inputs reside and whether they enter re-training; exit complexity measuring migration effort to alternatives; and provider financial stability, which matters because several foundation-model firms burn capital at rates that make continuity non-trivial.

Governance authority must sit at board level. NIS2 makes executive directors personally liable for cybersecurity implementation, with fines up to ten million euros or two percent of global turnover, a threshold that converts Build, Buy or Control from a procurement question into a director-duty question under § 93 AktG and comparable provisions across the European Union.

Build, Buy or Control for Enterprise AI is the single decision that will most durably shape competitive position, regulatory exposure, and enterprise value over the coming decade. The analytical work is not glamorous. It requires honest assessment of which workloads actually differentiate the company, which data cannot safely leave controlled environments, and which internal capabilities realistically exist.

Dr. Raphael Nagel (LL.M.) develops the full framework in ALGORITHMUS, Who Controls AI, Controls the Future, with detailed treatment of vendor due diligence, exit architecture, and the governance structures that keep sourcing decisions aligned with strategic intent. The forward-looking claim is straightforward: firms that document their Build, Buy or Control logic workload by workload, and revisit it annually as the foundation-model market consolidates, will compound a structural advantage over firms that let procurement defaults accumulate.

At Tactical Management, we work with boards and investment committees precisely at this intersection of technology, law, and strategic capital allocation. The decision window is narrower than most executives believe. The cost of deferral is measurable: every month of unexamined dependency raises switching costs and narrows future optionality. Algorithmic sovereignty, like any sovereignty, is earned through deliberate choice, not inherited through inertia.

Frequently asked

When should a mid-market company build its own AI rather than buy a service?

A mid-market firm should build proprietary AI only when the algorithm itself is a competitive moat tied to unique data or domain expertise that generic providers cannot replicate. Dr. Raphael Nagel (LL.M.) identifies three Build triggers in ALGORITHMUS: strategic differentiation at the application layer, data sensitivity beyond contractual remedies, and regulatory durability requiring auditability. For productivity workloads commoditized across the market, Build typically destroys rather than creates value. The honest test is whether the company will still invest in the system after three years of operational experience.

How does fine-tuning fit into the Build, Buy or Control framework?

Fine-tuning is the operational form of Control. A company takes an open foundation model such as LLaMA 3 or Mistral and trains it on proprietary data, obtaining domain-specific behavior without the hundreds of millions required for full Build. Typical engagements last a few weeks and cost tens of thousands of euros. Fine-tuning particularly suits regulated industries where GDPR, the US CLOUD Act, or sector rules make external inference services unacceptable, while full proprietary development would be disproportionate to the use case.

What are the main risks of a pure Buy strategy for enterprise AI?

Pure Buy exposes the company to three risks. First, vendor lock-in: Gartner estimates twelve to eighteen months of project work to reverse full cloud migration, giving providers durable pricing power. Second, governance gaps: OpenAI changed prices and terms repeatedly between 2022 and 2024 and demonstrated internal instability during the November 2023 leadership crisis. Third, sensitive data exposure under the US CLOUD Act when American hyperscalers handle European workloads. The remedy is abstraction layers, contractual portability, and selective Control for sensitive workloads.

How does the EU AI Act affect sourcing decisions?

The EU AI Act classifies AI systems by risk and imposes documentation, transparency, bias-testing, and human-oversight obligations on high-risk categories including recruiting, credit, critical infrastructure, and essential services. Fines reach seven percent of global annual turnover. Controlled or Built systems, where the company owns training data and model artifacts, generally simplify compliance evidence. Closed Buy arrangements shift contractual dependency onto the provider’s cooperation, which is workable only where the vendor offers full transparency on training data, evaluation metrics, and update cycles.

Who should own the Build, Buy or Control decision inside the company?

The decision belongs at board level, not in the IT department. NIS2 makes executive directors personally liable for cybersecurity implementation with fines up to ten million euros or two percent of global turnover, and § 93 AktG in Germany imposes comparable diligence duties. Dr. Raphael Nagel (LL.M.) and Tactical Management recommend a cross-functional AI governance body chaired by a board member, with decision authority over sourcing mode per workload and mandatory review of material vendor dependencies.

Claritáte in iudicio · Firmitáte in executione

For weekly analysis on capital, leadership and geopolitics: follow Dr. Raphael Nagel (LL.M.) on LinkedIn →


Author: Dr. Raphael Nagel (LL.M.)