Dr. Raphael Nagel (LL.M.), Founding Partner, Tactical Management · authority on AI and the Future of Knowledge Work
From the work · ALGORITHMUS

AI and the Future of Knowledge Work: Why Lawyers, Analysts and Accountants Are More Exposed Than Plumbers

AI and the future of knowledge work inverts the historic automation hierarchy: mathematicians, tax preparers, lawyers and financial analysts face the highest task-level exposure, while plumbers, roofers and surgeons remain structurally protected. Decision-makers must now distinguish substitution from augmentation at the task level, not the job level, and rebuild compensation, hiring and training around that distinction.

AI and the Future of Knowledge Work is the structural transformation, driven by generative AI and large language models, of cognitive professional labor (legal analysis, accounting, consulting, programming, financial research) in which standardized, text-based and routine-cognitive tasks are substituted or accelerated by algorithmic systems, while judgment-intensive, relational and physically embedded tasks remain human. Unlike prior automation waves that displaced manual and clerical workers, this transformation disproportionately exposes highly educated professionals, inverting the traditional assumption that formal qualification protects against technological displacement. It is best analyzed at the task level, not the job level.

Why Are Knowledge Workers More Exposed Than Manual Workers?

The answer reverses two centuries of assumption: generative AI operates on text, code, structured reasoning and pattern recognition, precisely the substrate of professional work. Manual trades require physical presence in unstructured environments, which remains beyond current AI capability. Formal qualification no longer guarantees protection.

The University of Pennsylvania and OpenAI study of March 2023 produced a ranking that unsettles established hierarchies. Mathematicians: 100% of tasks significantly affected. Tax preparers: 100%. Financial quantitative analysts: 99%. Writers: 97%. Accountants: 96%. Programmers: 95%. Lawyers: more than 90%. At the opposite end: carpenters, cooks, roofers, mechanics. The twentieth-century automation wave hit assembly-line labor. The early twenty-first-century digitalization wave hit clerical routine. The AI wave hits cognitive expert labor. That is historically unprecedented.

The mechanism is structural, not cyclical. Large language models ingest and reproduce the textual output that defines a lawyer’s memo, an accountant’s reconciliation, an analyst’s model. A pipefitter in an occupied basement cannot be replicated by a transformer architecture. Dr. Raphael Nagel (LL.M.) addresses this inversion directly in ALGORITHMUS: Who Controls AI, Controls the Future, arguing that the political economy of this transformation differs fundamentally from earlier waves because the displaced now include the very class that historically drafted technology policy.

Substitution, Augmentation, or Both at Once?

Both, simultaneously, and in the same profession. The productive distinction is not job-level but task-level. A lawyer whose junior-associate tasks are automated may simultaneously handle more matters with AI-assisted analysis. The net effect on employment is indirect and politically difficult to communicate because the redundancies are visible while the productivity gains are abstract.

Empirical evidence already exists. A 2023 controlled study of GitHub Copilot by GitHub and MIT researchers recorded 55% faster task completion for developers working on an identical task. Amazon has made AI coding tools internally mandatory in parts of its engineering organization. In February 2024 Klarna reported that its AI customer-service assistant had absorbed the workload of 700 full-time agents while satisfaction scores remained stable and the supported language set expanded past thirty. Luminance, Harvey AI and Kira Systems compress hundred-hour document review exercises into minutes. McKinsey’s internal Lilli platform reportedly improved research and synthesis productivity by roughly 40% across more than 30,000 consultants.

The consequence for Tactical Management clients and comparable mid-market firms is that the old headcount-to-revenue ratio no longer diagnoses the business. Salesforce expanded its adjusted operating margin from roughly 21% in 2022 to above 30% by 2024, with AI integration among the stated drivers. Intuit openly communicates AI-driven margin expansion in investor materials. The strategic question is not whether to deploy AI but which tasks within each role move into substitution, which into augmentation, and which remain strictly human: a triage exercise that most boards have not yet conducted.
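The triage exercise can be made concrete in a few lines. The sketch below is illustrative only, not a methodology from the book: the task names and boolean attributes are hypothetical, and the three protective traits mirror those the text identifies (unique context, legal accountability, relational trust).

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    unique_context: bool       # knowledge absent from any training corpus
    legal_accountability: bool # responsibility the law assigns to a person
    relational_trust: bool     # trust built over years of shared history

def triage(task: Task) -> str:
    """Classify a task into the three-bucket portfolio the text describes."""
    protections = sum([task.unique_context,
                       task.legal_accountability,
                       task.relational_trust])
    if protections == 0:
        return "substitution"    # routine, text-based, fully exposed
    if protections == 3:
        return "strictly human"  # AI may assist, but cannot replace
    return "augmentation"        # human keeps judgment, AI accelerates

# Hypothetical mid-market portfolio
portfolio = [
    Task("contract boilerplate review", False, False, False),
    Task("succession planning for a family client", True, True, True),
    Task("drafting the quarterly analyst model", False, True, False),
]
for t in portfolio:
    print(f"{t.name}: {triage(t)}")
```

Run across a real role inventory, this kind of classification makes the substitution/augmentation split auditable at board level rather than anecdotal.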

What Protects a Role from Full Substitution?

Three characteristics: unique context that is not captured in any training corpus, genuine responsibility that a legal system must assign to a person, and relational trust that accumulates over years of shared history. Where all three are present, AI augments; it does not replace.

Consider the tax adviser who has served a Mittelstand family for twenty years. She knows the founder’s risk tolerance, the succession tensions, the silent understanding that the nephew is being groomed for the operational role while the daughter will receive the holding company. None of this appears in any document that could train a model. A radiologist interpreting an ambiguous scan in a patient she has followed for a decade makes a judgment no general model replicates. This is the irreplaceable layer: specificity, accumulated context, and accountability that cannot be assigned to a statistical process.

Legal instruments reinforce this floor. Under § 93 AktG, German management board members owe personal duties of care that cannot be delegated to an algorithmic system. Article 22 of the GDPR grants individuals the right not to be subject to purely automated decisions with legal or similarly significant effects. The EU AI Act, adopted by the European Parliament in March 2024 with 523 votes to 46, classifies recruitment, credit scoring and essential-service allocation as high-risk, mandating meaningful human oversight. The law encodes what strategy already knows: the final mile of professional judgment must remain human, and that mile is where the remaining margin lives.

How Should Companies Manage the Transition?

Through a deliberate three-layer reskilling architecture: AI literacy for every knowledge worker, AI application competence for domain experts, and AI development capability for a smaller specialist core. Ad hoc adoption without this scaffolding produces resentment, shadow usage and compliance exposure.

Large employers have moved. Amazon pledged to retrain over 100,000 employees in AI-adjacent skills by 2025. AT&T invested roughly one billion dollars in its multi-year reskilling program. Accenture committed to training more than 250,000 staff. For the Mittelstand without those budgets, the effective pattern is narrower: identify the five to ten most AI-exposed roles, partner with a specialized provider for targeted upskilling, and appoint internal AI champions who multiply competence inside their own departments. Reskilling succeeds when it is role-specific; it fails when it is generic.

The distributional question sits underneath all of this. Historically, automation gains have accrued disproportionately to capital owners. NVIDIA’s market capitalization crossed three trillion dollars in 2024; Microsoft, Google and Anthropic investors have captured extraordinary appreciation, while real wages in many AI-affected sectors have stagnated. Germany’s projected loss of roughly seven million workers to retirement by 2035 offers a demographic offset that few other economies enjoy: AI-driven productivity gains can absorb workforce shrinkage rather than displace active workers. That reframing is central to the argument Dr. Raphael Nagel (LL.M.) develops in ALGORITHMUS: Who Controls AI, Controls the Future, where the future of knowledge work is treated as a governance question, not a technology question.

The transformation of professional work by generative AI is not a prediction; it is already measurable in productivity studies, earnings reports and workforce disclosures from 2023 and 2024. What remains open is distribution: whether the gains concentrate in capital owners and a narrow AI-fluent elite, or whether they are translated into broader productivity and demographic resilience. That outcome will be decided in boardrooms, in labor policy, and in the specific reskilling choices that companies make over the next twenty-four months, not by the technology itself.

Dr. Raphael Nagel (LL.M.), Founding Partner of Tactical Management, develops this argument in full in ALGORITHMUS: Who Controls AI, Controls the Future, connecting the task-level analysis of professional exposure to the capital, governance and sovereignty questions that determine which economies emerge strengthened. The forward-looking claim is direct: within this decade, the defining corporate competence will not be the ability to deploy AI (that will be universal) but the ability to design the human layer that sits above it. Companies that treat knowledge work as a portfolio of substitutable, augmentable and strictly human tasks will outperform those that treat AI as an IT procurement decision. The question is not whether to act, but who decides the terms of the transition.

Frequently asked questions

Which professions are most exposed to generative AI?

According to the 2023 University of Pennsylvania and OpenAI study, mathematicians, tax preparers, financial quantitative analysts, writers, accountants, programmers and lawyers show the highest task-level exposure, all above 90%. The common factor is reliance on structured text, pattern-based reasoning and standardized outputs. Manual trades such as carpentry, cooking, roofing and vehicle repair sit at the opposite end, because they require physical presence in unstructured environments that current AI architectures cannot navigate. Formal qualification, historically a shield against automation, has become a risk factor in this wave.

Does AI eliminate knowledge worker jobs or change them?

Mostly it changes them, though certain roles contract. The dominant pattern in 2024 is augmentation: a single professional produces more output with AI assistance, which reduces demand for junior headcount without eliminating the senior role. Klarna’s 2024 disclosure that 700 customer-service FTEs were absorbed by AI is a substitution case; GitHub Copilot’s 55% productivity gain for developers is an augmentation case. Most organizations will see both simultaneously across different functions, which is why task-level analysis matters more than job-level forecasting.

How should executives plan reskilling investments?

On three layers. First, universal AI literacy for every knowledge worker, comparable to basic spreadsheet fluency a generation ago. Second, role-specific AI application training for domain experts, focused on the exact workflows that AI now reshapes. Third, a smaller core of AI development capability for engineers and data scientists who configure, fine-tune and govern the systems. Generic courses fail. Role-specific programs, short cycles, and internal AI champions who demonstrate concrete productivity gains in their own work are what drive adoption.

What legal obligations apply when AI affects employment decisions?

The EU AI Act classifies AI systems used in recruitment, promotion and employment management as high-risk, imposing documentation, bias-testing, transparency and human-oversight duties. Article 22 of the GDPR restricts purely automated decisions with significant effects on individuals. Under § 93 AktG, German management board members cannot delegate their duty of care to an algorithm. Non-compliance can trigger fines of up to 7% of global annual turnover under the AI Act. Boards that deploy AI in HR without a documented governance structure are accepting both regulatory and personal liability exposure.

Is Europe particularly vulnerable or particularly well-placed?

Both. Europe is structurally behind in foundation models and risk capital: European AI startups received roughly six billion euros in 2023, versus over fifty billion dollars in the United States. At the same time, Europe’s industrial domain expertise in machinery, chemistry, automotive and specialty manufacturing, combined with demographic shrinkage, creates a rare window in which AI productivity gains are socially welcome rather than threatening. Dr. Raphael Nagel (LL.M.) argues in ALGORITHMUS that this is precisely the configuration in which disciplined capital allocation through firms such as Tactical Management can compound advantage.

Claritáte in iudicio · Firmitáte in executione

For weekly analysis on capital, leadership and geopolitics: follow Dr. Raphael Nagel (LL.M.) on LinkedIn →


Author: Dr. Raphael Nagel (LL.M.).