Part 1 of 8 · Rethinking the Target Operating Model
Operating Model Architecture · May 15, 2026 · 11 min read

Target Operating Model: what it is, what it produces, and what has changed

What a target operating model is, what it actually produces, why most redesigns failed for a decade, and what works now. A practitioner's view.

The strategy-execution gap is the central management problem of the past two decades. Strategy moves in months and quarters. The organization that has to deliver it moves in years. McKinsey's research shows that even high-performing companies leave roughly 30 percent of their strategy's potential value on the table because their operating model has not kept up. The Project Management Institute's 2025 study found the planning-to-execution disconnect is now the single most-cited barrier to corporate reinvention, named by 35 percent of executives. The discipline whose job is to close that gap is the target operating model.

For most of the last decade, it failed at the job. McKinsey's 2014 survey of more than 1,300 executives found that only 21 percent of operating model redesigns met most of their objectives. Nearly four out of five failed. The 2025 refresh, covering 2,000 executives across 16 sectors, tells a different story: 63 percent now meet most objectives, and 24 percent are highly successful — a tier that did not exist at scale in the earlier data. The success rate has tripled.

This is the hub for a series on what changed. It walks through what a target operating model is, what it actually produces, why most of its 2010s versions failed, what mainstream practice has absorbed since, what is still missing, and what we recommend. The deeper articles in the series go into each layer in detail.

What need does a target operating model address?

Strategy decisions are made at the top of the organization. They are executed by people, processes, and systems that sit five or six layers below. The bigger the gap between those two altitudes, the more strategic intent leaks out before it reaches the work. Larry Bossidy and Ram Charan argued more than two decades ago, in Execution: The Discipline of Getting Things Done, that this gap is the central management problem of large organizations. The data since has only sharpened the point.

A target operating model is the discipline that closes the gap. It is not the strategy itself. It is not the org chart. It is the deliberate, integrated design that translates strategic intent into how the organization actually runs — the people, the work, the systems, and the locations that together produce the strategy's outcomes. When it works, executives at the top see their decisions show up in the work without having to chase them. When it fails, what they see instead is an org that keeps doing what it was doing before, no matter what was announced at the offsite.

What is an operating model, and what is a target operating model?

Every organization has an operating model. It is how the work actually gets done — the people doing it, the structure they sit in, the systems they use, the locations the work happens in. Most operating models were not designed. They accumulated. Decisions were made one at a time, structures grew, technology was bolted on, and after enough years the operating model is whatever the organization ended up with.

A target operating model is the deliberate version. It is the design of how an organization should work to deliver its strategy — the future-state artifact that names the design choices explicitly and treats them as a coherent system rather than as accumulated history. The discipline of designing one is what this series is about. The reason it exists is that the current operating model, the one the organization is running, is almost always failing to keep up with the strategy the organization committed to. A target operating model closes the gap on purpose, rather than waiting for the next round of accumulation to maybe do it.

An operating model — current or target — is a design across four dimensions of how an organization works to deliver its strategy:

  • WHAT work gets done — the capabilities, processes, products, and services that produce the strategy's outcomes.
  • WHO does it — the organizational structure, roles, accountabilities, and decision rights.
  • HOW it is executed — the governance, technology, ways of working, performance management, and data that keep the work running.
  • WHERE it happens — the geographic footprint, sourcing decisions, and locations of work.

This work sits inside the broader discipline of operating model architecture, which integrates the six architecture types every enterprise uses. The four dimensions are stable across firms, across industries, and across decades. What changes is which elements within each dimension a particular practice elevates to peer status — and which it buries as sub-questions.

What does a target operating model actually produce?

When you commission a TOM engagement, you receive a set of concrete deliverables. Every major firm publishes a framework that organizes them differently. The differences matter less than the practitioner world sometimes pretends, but they are not nothing.

MBB. McKinsey's framework lists 12 elements grouped into strategy, capabilities, ways of working, and people. Bain leads with 7 to 15 design principles, then organizes the model across five elements: structure, accountabilities, governance, ways of working, capabilities. BCG names 12 elements split across front, middle, and back office, with the 2018 Agile Operating Model framework adding capacity-based funding and team-level autonomy.

Big 4. KPMG's Powered Enterprise TOM names six layers: Process, People, Service Delivery Model, Technology, Performance Insights, Governance — with Service Delivery Model carrying a WHERE element at peer status. Deloitte's published TOM names Organisation, People, Processes, Technology, Products, Channels, Vision & Strategy, and Governance & Reporting; the post-2020 NextGen rephrasing adds Data and Service Delivery as peers. PwC does not publish one canonical framework — Strategy&'s Capabilities-Driven Operating Model sits alongside Fit for Growth, OrgDNA, and the COO 4Vs as overlapping frames. EY's Future-Fit Operating Model refuses the classic people-process-technology-governance taxonomy entirely, organizing instead around five capability areas: Dynamic Ecosystems, Digital DNA, Talent Flexibility, Innovation Platform, Enduring Purpose.

Three things to notice. First, all seven firms ultimately design across WHAT/WHO/HOW/WHERE. The dimensions are stable. The emphasis is not. Second, only KPMG and Deloitte's NextGen version elevate WHERE to a first-class element; the others treat location and sourcing as sub-questions inside other elements. Third, underneath the firm-specific language, the deliverables themselves are recognizable: capability maps and value streams, organization structures, RACI matrices and decision rights, governance schedules, ways-of-working playbooks, role profiles and target operating costs, technology choices, location and sourcing decisions, KPI sets, and an implementation roadmap. If you commission a TOM, you receive most of those.

The full firm-by-firm grid and analysis is in the deliverables article in this series.

Why has TOM failed, and why is it getting better?

For most of the 2010s, target operating model engagements followed a recognizable script. A consulting team arrived. Capability maps and process taxonomies were drawn. Industry benchmarks compared. After eight to fourteen weeks, an 80-to-200-slide deck landed on the sponsor's desk. A roadmap, sometimes. A stylized one-page diagram, almost always. Then very little happened.

Four out of five engagements failed. The reasons sit in a few structural patterns. The deliverable replaced the discipline — operating model design got treated as a project with an end date, not a capability the organization sustained. The tempo was wrong; frameworks built around annual planning cycles could not keep up with markets moving in weeks. The streams stayed fragmented, with decision rights, product strategy, and tech execution living in separate practice areas. Cost reduction crowded out everything else, and the discipline was being commodified into downloadable toolkits.

Two structural assumptions underneath the work had broken without the major frameworks noticing. The first was discretionary translation: that a high-level design could be handed to line leadership with reasonable latitude to translate principles into specific actions. That worked when line managers had time. Then spans of control widened, cross-functional handoffs became the norm, and the cognitive load outgrew the available bandwidth. The second was strategy cadence: a 12-to-18-month design cycle no longer lands before the strategy has moved. M&A in months, competitor moves in weeks.

Mainstream practice has now responded to both. Modern engagements produce deeper designs that reach into the operational layer rather than stopping at the executive sketch. They run continuously, on a defined cadence, with adaptive team structures replacing static org charts. Process mining gives an evidence-based view of how work actually flows. Modular tools let practitioners capture parts of the model in software rather than slides. Leadership expectations have shifted: a deck is no longer the end of an engagement.

These responses account for most of the improvement from 21 to 63 percent. The discipline did not invent something new. It got more disciplined about what it had always been trying to do. The longer treatment of how the comeback happened is in the failure-and-comeback article.

What is still missing

Underneath the discretionary-translation issue, the traditional premise carried a deeper assumption. Work is executed by people. The design language of accountabilities, roles, supervision, and spans of control only works when humans are the executors. That assumption has held for the entire history of the discipline. It is breaking now, and mainstream practice has not yet built a response.

Software agents now deploy in days against tasks that used to take months to staff with humans. Gartner projects 40 percent of enterprise applications will include task-specific AI agents by the end of 2026, up from less than 5 percent twelve months earlier. Deloitte's State of AI in the Enterprise 2026 puts a number on the gap: 84 percent of companies have not redesigned jobs to fit AI, only 14 percent have agentic solutions ready to deploy, and only 11 percent are running them in production. The rest are layering agent capacity onto jobs that were designed for the work the agents were brought in to replace.

Rule-based agents have been visible for years. What is new is the breadth of agent deployment and the move into non-deterministic work — judgment-bearing tasks that used to require a human in the chair. That changes what the operating model has to do. Workforce composition is now a design variable in its own right. The boundary between human work and agent work has to be drawn explicitly, task by task. The model has to govern when a human must be in the loop and when an agent can act on its own. It has to live in the systems that run the work, not in a deck.

This is the single most important shift for any executive currently sponsoring a TOM engagement to understand. AI does not just amplify the case for TOM. It changes what failing TOM looks like. What an agentic operating model actually requires is the subject of the agentic operating model article in this series.

What a modern TOM should deliver

The major frameworks describe an operating model designed for an organization where humans do the work, governance bodies make the decisions, and locations are where the people sit. A modern TOM produces the same deliverable categories every published firm framework names — strategy translation, capability map, products and channels, organizational structure, decision rights, processes, technology, talent and culture, location and sourcing, performance management, governance, implementation roadmap. What changes is what those deliverables have to contain, because some of the work is no longer done by people, the workforce is now a multi-sourced mix of employment classes and execution models, and the model has to live in the systems that run the work. EY's Future-Fit Operating Model is structurally closest to a modern view — it organizes around capability areas rather than people-process-technology layers, which is the right move — but its five elements do not carry the hybrid execution layer the model now has to produce. The components below are what the MBB and Big 4 frameworks systematically under-specify.

WHAT — strategy translation, capabilities, products, and the work

The traditional deliverable starts with the strategy translation: what the operating model is being built to deliver, with the working assumption about how much of the executable work is human, deterministic agent, and judgment-bearing agent over the design horizon. Then the capability map, the process taxonomy, and the product and channel portfolio. A modern version goes further on each layer. Capabilities are classified by execution model: human-only, agent-only, hybrid. The process taxonomy is decomposed to the task level and names where each class of executor picks up the work. Products and channels are tagged for which customer touchpoints are human, agent, or hybrid — the difference between an underwriter approving a loan, an agent approving inside policy, and a hybrid case where the agent gates the human. Value streams carry embedded confidence thresholds for the agent-handled steps.
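The task-level classification described above can be captured as structured data rather than a slide. The sketch below is illustrative only — the task names, value stream, thresholds, and routing rule are hypothetical, not any firm's published artifact.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ExecutionModel(Enum):
    HUMAN_ONLY = "human_only"
    AGENT_ONLY = "agent_only"
    HYBRID = "hybrid"

@dataclass(frozen=True)
class Task:
    name: str
    execution: ExecutionModel
    # For agent-handled steps: below this confidence the work
    # escalates to a human. Values here are illustrative.
    confidence_threshold: Optional[float] = None

# Hypothetical slice of a loan-approval value stream, decomposed to
# the task level with the human/agent boundary drawn explicitly.
loan_approval = [
    Task("collect_applicant_documents", ExecutionModel.AGENT_ONLY, 0.95),
    Task("verify_income", ExecutionModel.HYBRID, 0.90),
    Task("approve_within_policy", ExecutionModel.AGENT_ONLY, 0.98),
    Task("approve_policy_exception", ExecutionModel.HUMAN_ONLY),
]

def needs_human(task: Task, model_confidence: float) -> bool:
    """A task routes to a human when it is human-only, or when the
    agent's confidence falls below the task's embedded threshold."""
    if task.execution is ExecutionModel.HUMAN_ONLY:
        return True
    return model_confidence < (task.confidence_threshold or 1.0)
```

The point is not the code itself but the form: the boundary is explicit, task by task, and machine-readable, so the model can live in the systems that run the work.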

WHO — roles, accountability, and the multi-sourced workforce

Org structure and human role definitions are still in the deliverable. What is added: role definitions for agents — what each agent class does, what it cannot do, what triggers escalation. The decision rights matrix covers humans and agents, specifying for each material decision who or what decides and who or what is accountable. PwC's Capabilities-Driven Operating Model is the only major framework that elevates Decision Rights to a peer element; in a hybrid TOM, that elevation becomes structural rather than optional. A human-in-the-loop protocol specifies when human review is mandatory, when optional, and when it can be bypassed. Spans of control are reframed — how many agents one human supervises, and what supervision actually consists of.
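One way to make such a matrix concrete and machine-readable, in a minimal sketch. The decisions, role names, and agent class below are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Review(Enum):
    MANDATORY = "mandatory"
    OPTIONAL = "optional"
    BYPASSED = "bypassed"

@dataclass(frozen=True)
class DecisionRight:
    decision: str
    decides: str       # a human role or an agent class
    accountable: str   # accountability stays with a human role
    review: Review     # the human-in-the-loop protocol, per decision

# Illustrative rows of a hybrid decision-rights matrix.
matrix = [
    DecisionRight("approve_invoice_under_10k", "ap_agent_v2", "AP Manager", Review.BYPASSED),
    DecisionRight("approve_invoice_over_10k", "ap_agent_v2", "AP Manager", Review.MANDATORY),
    DecisionRight("change_payment_terms", "AP Manager", "CFO", Review.OPTIONAL),
]

def reviewer_required(decision: str) -> bool:
    """True when the named decision cannot complete without human review."""
    row = next(d for d in matrix if d.decision == decision)
    return row.review is Review.MANDATORY
```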

The workforce side of the deliverable is now multi-sourced as a deliberate design choice, not as a side effect of cost decisions made elsewhere. Full-time employees, contractors, captive offshore staff, BPO partners, professional services and consulting firms, fractional and part-time workers, gig and platform labor, deterministic agents, judgment-bearing agents — every one of these is a sourcing class with its own cost curve, governance model, supervisory pattern, and exit profile. The TOM has to name which classes carry which work, where the handoffs sit, and what governs the boundary. EY's Dynamic Ecosystems captures part of this; the modern version makes the full sourcing taxonomy explicit. Talent flexibility — how the skill mix, the contract mix, and the human/agent mix flex as the strategy moves and the model adjusts — is a named output, not a side effect of restructuring.
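The sourcing taxonomy itself can be named as data, so the TOM can state which class carries which work. A minimal sketch; the attribute values are illustrative placeholders, not benchmarks.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SourcingClass:
    name: str
    cost_model: str     # e.g. salary, day rate, per-transaction, per-run compute
    governance: str     # who sets the rules for this class
    supervision: str    # how its output is checked
    exit_profile: str   # how quickly its capacity can be released

# Illustrative subset of the sourcing classes named in the text.
taxonomy = [
    SourcingClass("fte", "salary", "HR policy", "line manager", "months"),
    SourcingClass("bpo", "per-transaction", "vendor contract", "SLA review", "contract term"),
    SourcingClass("deterministic_agent", "per-run compute", "platform governance", "automated QA", "days"),
    SourcingClass("judgment_agent", "per-token compute", "AI governance board", "human-in-the-loop", "days"),
]
```

Each class carrying its own cost curve, governance model, supervisory pattern, and exit profile is what makes the mix a design choice rather than an accident.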

HOW — execution, governance, performance, and culture

Process design, technology choices, governance schedules, performance management — all still in scope. What is added: an orchestration layer that names the handoff protocols across humans, deterministic agents, and judgment-bearing agents, and lives in the systems that run the work, not in a deck. Performance is split into human metrics, agent metrics, and the joint outcome the two produce together — KPMG's Performance Insights framing is closer to right than McKinsey's performance management, because the deliverable is a feedback loop that keeps the model alive, not a dashboard. Governance includes agent oversight, model drift review, and audit-trail integrity, not just the committee calendar. The funding model is part of the deliverable: capacity-based funding for hybrid teams, with explicit allocation between human capacity and agent capacity — BCG's Agile Operating Model introduced capacity-based funding for human teams; the modern version extends it to mixed capacity. Culture and behaviors land here too — the cultural shifts the model assumes about delegating to agents, trusting non-human outputs, handling agent exceptions. PwC names this through OrgDNA; KPMG and Deloitte assume it. A modern TOM names it as a deliverable.
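A minimal sketch of what "lives in the systems that run the work" can mean for the orchestration layer: a routing rule over confidence thresholds, where every routing decision lands in an audit trail that governance can later review for drift and exceptions. Names and thresholds are hypothetical.

```python
import time
from dataclasses import dataclass

@dataclass
class AgentResult:
    task_id: str
    output: str
    confidence: float

# Append-only audit trail; in practice this would be durable storage.
audit_log: list[dict] = []

def handoff(result: AgentResult, threshold: float) -> str:
    """Auto-complete above the confidence threshold, escalate to the
    human queue below it, and record every routing decision."""
    route = "auto_complete" if result.confidence >= threshold else "human_review"
    audit_log.append({
        "task_id": result.task_id,
        "confidence": result.confidence,
        "threshold": threshold,
        "route": route,
        "ts": time.time(),
    })
    return route
```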

WHERE — footprint, skill hubs, sourcing, and data

Geographic footprint and sourcing decisions for the human workforce are still in the deliverable. KPMG's Service Delivery Model and Deloitte's NextGen Sourcing element are the only published frameworks that elevate WHERE to a peer dimension; a modern TOM extends that move.

The geographic dimension has itself evolved through two decades and is still evolving. The 1990s and 2000s version optimized for wage arbitrage — work moves to the lowest-cost location qualified to do it. Captive offshore centers and single-function shared services centers were the early forms. They consolidated into multi-functional global business services structures through the 2010s. The current generation runs across more flavors than the field's vocabulary admits: captive GBS, hybrid captive-and-BPO, GBS networks, hub-and-spoke skill centers, follow-the-sun delivery networks, GBS-as-innovation-engine models with embedded digital, analytics, and AI capacity. The deciding factor is no longer wage cost. It is skill density — where the specialized capability concentration sits — combined with data residency, regulatory geography, time-zone coverage, and the compute economics of where the AI workload runs. A modern TOM's footprint map shows skill hubs first and arbitrage second, and it pairs the human geography with the compute geography rather than treating them as separate questions.

The compute footprint for agents is part of the model — where they run, what data they can see, what latency the work requires. A data residency map names what the agent accesses, where the data sits, who else has access. Sourcing decisions include "human versus agent" alongside "captive versus vendor" alongside "FTE versus contractor versus BPO versus consulting versus gig." The hybrid handoff topology shows where humans pick up from agents and vice versa, both geographically and organizationally. The extended ecosystem — partners, BPOs, captive centers, professional services firms, agent-as-a-service vendors — is part of the WHERE map, not assumed into other elements.
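Holding the data residency map and the compute footprint in one structure makes the conflict check fall out directly. A sketch under stated assumptions; all agents, datasets, and regions are hypothetical.

```python
# Illustrative data-residency map: what each agent accesses, where
# that data sits, where the agent's compute runs, and who else has access.
residency_map = {
    "claims_triage_agent": {
        "datasets": ["eu_claims_db"],
        "data_region": "eu-west",
        "compute_region": "eu-west",
        "also_accessible_to": ["EU Claims Ops", "Internal Audit"],
    },
    "pricing_agent": {
        "datasets": ["global_pricing_feed"],
        "data_region": "us-east",
        "compute_region": "eu-west",  # mismatch: triggers a residency review
        "also_accessible_to": ["Pricing Desk"],
    },
}

def residency_conflicts() -> list[str]:
    """Agents whose compute runs outside the region where their data sits."""
    return [agent for agent, entry in residency_map.items()
            if entry["compute_region"] != entry["data_region"]]
```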

Four attributes that hold the deliverables together

A target operating model that produces the components above is recognizable by four attributes. They are qualities of the deliverables, not separate work products.

Deep. The design reaches the task level, with the human/agent boundary drawn explicitly on every process. Stopping at the org-chart layer puts the hardest decision back on line managers, and line managers no longer have the time.

Continuous. The model is reviewed and adjusted on a defined cadence, not signed off and parked. The workforce composition is shifting under the model whether the leadership team looks at it or not.

Connected. The strategic intent, the operational rules, and the systems that run the work all point to the same source. APIs carry the handoffs. Agents act inside the boundaries the model defines, not their own. The model is the spine; execution is distributed across it.

Owned. A named role holds the operating model as a permanent capability — with budget, authority, and a seat where decisions get made. The owner decides where humans and agents share work, when a human must be in the loop, and when an agent can escalate on its own. None of the existing role names — CTO, COO, Chief Transformation Officer, Head of GBS — was built for this; whichever title carries it, the ownership becomes structurally essential once some of the work is done by something that is not a person.

The full design treatment — principles, tiers of ownership, examples — is in the four-attributes article.

Three questions for sponsors

If you are commissioning a target operating model engagement, three questions in the first conversation will tell you whether you are buying a deck or building a discipline.

  1. Who owns the operating model after the consultants leave — with a named role, a budget, and a governance position?
  2. How does the model adjust when the assumptions change — with a documented cadence and a triggering mechanism, or only when somebody escalates?
  3. What is the working assumption about how much of the executable work will be done by AI agents in 36 months — with a number and a rationale, even if the number turns out to be wrong?

The right test for whether your operating model is working is not whether it was approved at the steering committee. It is whether your VPs make decisions inside it without being reminded that it exists. If they do, what you have is a target operating model. If they do not, what you have is a target operating model document.

Diego Navia

Managing Director, digitiXe · 30+ years in business transformation

Want to discuss this topic?

These insights come from real engagement experience. If something resonates with your situation, let's talk.

Schedule a Conversation