Part 2 of 8 · Rethinking the Target Operating Model
Operating Model Architecture · May 16, 2026 · 8 min read

Why four out of five Target Operating Models failed for a decade — and what reshaped the discipline

For a decade, four out of five target operating model engagements failed. McKinsey's 2025 data shows the success rate tripled. Here is what changed.

The reason the target operating model is having a comeback is not a consulting rebrand. It is that the time it takes to translate strategic decisions into running operations — historically measured in years — is no longer acceptable in markets where competitive moves arrive in weeks and quarters. The operating model is the field's only serious attempt at compressing that translation.

Through most of the 2010s, that attempt was failing. Of the operating model redesigns McKinsey surveyed in 2014, only 51 percent were even completed. Of those, 21 percent met most of their objectives. Four out of five failed. McKinsey's 2025 survey, which put the same questions to 2,000 executives across 16 sectors, shows 79 percent completed and 63 percent meeting most of their objectives. The success rate has tripled.

This article walks through why traditional target operating models failed for most of the 2010s, the two structural assumptions that broke without the major frameworks responding, and what mainstream practice has now absorbed that accounts for the comeback. It does not get into AI — that is the coming inflection, not the past one, and it is the subject of a later article in the series.

The discipline and its purpose

A target operating model is the answer to one question: how does this organization actually work to deliver its strategy? Not the strategy itself. Not the org chart. The translation between the two.

Larry Bossidy and Ram Charan made the case more than two decades ago, in Execution: The Discipline of Getting Things Done, arguing that the heart of strategy sits in execution and that the gap between the two is the central management problem of large organizations. The data since has only sharpened the point. McKinsey shows that even high-performing companies leave 30 percent of their strategy's potential value on the table because their operating model has not kept up with their strategic intent. The Project Management Institute's 2025 research found that the planning-to-execution disconnect is now the single most-cited barrier to corporate reinvention, named by 35 percent of executives.

An operating model is a deliberate design across four dimensions: WHAT work gets done, WHO does it, HOW it is executed, and WHERE it happens. The broader discipline this sits inside is operating model architecture, which integrates the six architecture types every enterprise uses. What MBB and Big 4 firms actually deliver when you commission one is in the next article in the series.

How traditional TOM was done

For most of the 2010s, target operating model engagements followed a recognizable script. A consulting team arrived. Capability maps and process taxonomies were drawn. Industry benchmarks were compared. After eight to fourteen weeks, an 80-to-200-slide deck landed on the sponsor's desk. A roadmap, sometimes. A stylized one-page diagram, almost always. Then very little happened.

Half of the redesigns never finished. Of the half that did, only one in five worked. The reasons sit in a few structural failure patterns.

Why it failed

The deliverable replaced the discipline. Operating model design got treated as a project with an end date — analyze, design, implement, declare victory. The capability the organization needed to sustain went home with the consultants.

The tempo was wrong. Frameworks built around quarterly business reviews and annual planning cycles did not match a market where competitor moves arrived in weeks.

The streams stayed fragmented. Decision rights, product-market logic, and tech-enabled execution lived in separate practice areas with their own slides. The integration the operating model was supposed to provide rarely happened.

Cost reduction crowded out everything else. GBS and outsourcing engagements got rolled into "operating model" work because the math was easy. Workforce orchestration and cross-functional integration produce harder numbers. The hard work got crowded out by the easy savings.

The discipline was being commodified. Templates proliferated. "Operating model toolkits" became a marketplace category. Once a discipline becomes a downloadable template, the judgment that made it useful gets lost. The work still produced real value in pockets — particularly in regulated industries where formal documentation of decision rights had compliance value of its own — but the gap between the slides and the running organization grew steadily through the decade.

Two structural assumptions that broke

Underneath the script, the traditional approach rested on two assumptions. Both held for decades. Both broke in the 2010s and 2020s without the major frameworks fully responding.

The first: that line managers could translate principles into action at their own discretion. A high-level design could be handed down with reasonable latitude, and line managers would fill in the operational layer. Bain's stated philosophy — that "principles liberate people to do the right thing" — is exactly this. It worked when three conditions held: execution was performed by humans who could be trained on principles, the gap between design and operation was narrow enough that translation was feasible, and line leadership had time to do that translation. All three have weakened. Spans of control have widened. Cross-functional handoffs are now the norm rather than the exception. The cognitive load of running an operation has grown faster than the time available to think about its design.

The second: that strategy was set every three to five years, slowly enough for operational transformation to keep up. Translation took a year. Execution took several more. A target operating model designed in 2003 against a strategy adopted in 2002 to be implemented by 2007 was an entirely reasonable arrangement. Today the same arrangement is a recipe for irrelevance. Strategic moves now land in quarters. M&A in months. Competitor product launches in weeks. Cloud and product organizations broke the static design assumption — if your platforms evolve monthly, an operating model built for the platforms you had two years ago is documenting a company that no longer exists. Agile and decentralized network structures broke the static structure assumption — once work flows through hybrid teams that assemble and dissolve around problems, the operating model has to describe coordination, not chain of command.

Underneath both broken assumptions is a deeper reason mainstream practice took so long to respond. The technology that an operating model has to design around has changed more in the last fifteen years than in the prior fifty. ERP centralized data. Workflow platforms standardized execution. Cloud broke geographic limits. Mobile and APIs broke channel limits. Agentic AI is now breaking the assumption that work is executed by people. Each wave reshaped the very thing the operating model was built to govern. The people designing the models — most of them trained on 2000s org-design playbooks — have been chasing a target that keeps moving. The discipline was not failing because the answers were unknown; it was failing because the question kept changing faster than mainstream practice could answer it.

What mainstream practice has absorbed

The discipline got better at what it had always been trying to do. Modern engagements respond to both broken assumptions in three observable ways.

Designs reach deeper into the operational layer. Rather than stopping at the executive sketch, modern operating model designs decompose to the task level. Process catalogs reach activities and tasks. Role definitions are written at the work level. Exception protocols are specified up front. Automation rules are owned by the operating model rather than left to IT to figure out. This is not more documentation — it is structure that lives in the systems that run the work, not in slides. Tools like BizBlocz, which capture business processes as a structured taxonomy of activities and subprocesses, are emerging to make this depth maintainable.
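To make "decompose to the task level" concrete, here is a minimal sketch of a task-level process taxonomy captured as structured data rather than slides. The `Process`, `Activity`, and `Task` classes and the Order-to-Cash fragment are hypothetical illustrations of the general idea, not BizBlocz's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    owner_role: str           # role accountable at the work level
    automated: bool = False   # automation rule owned by the model, not left to IT

@dataclass
class Activity:
    name: str
    tasks: list = field(default_factory=list)

@dataclass
class Process:
    name: str
    activities: list = field(default_factory=list)

    def unowned_tasks(self):
        """Tasks with no named owner: the gap a task-level design should surface."""
        return [t.name for a in self.activities for t in a.tasks if not t.owner_role]

# Hypothetical Order-to-Cash fragment, decomposed to the task level
order_to_cash = Process("Order-to-Cash", [
    Activity("Invoicing", [
        Task("Generate invoice", owner_role="Billing Analyst", automated=True),
        Task("Resolve billing exception", owner_role="Billing Team Lead"),
        Task("Approve credit note", owner_role=""),  # ownership gap, flagged below
    ]),
])

print(order_to_cash.unowned_tasks())  # → ['Approve credit note']
```

Once the model lives in a structure like this, checks such as `unowned_tasks` can run on every revision, which is what makes task-level depth maintainable rather than a one-off documentation exercise.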

The model runs continuously. It is reviewed and adjusted on a defined cadence — quarterly, after each strategic move, when a real exception fires. It is not signed off once and parked. Process mining gives an evidence-based view of how work actually flows. Modular tools let practitioners capture parts of the model in software rather than slides. The discipline is now a capability the organization sustains, not a project it completes.

Leadership expectations have shifted. A deck is no longer the end of an engagement. Sponsors expect a named owner, a budget, a governance position, and a way of adjusting the model when assumptions change. The Chief Transformation Officer and Operating Model Office roles have appeared across the Fortune 500 over the last decade specifically to hold this responsibility. The practitioners actually leading the work are increasingly people who came up through the technology waves — ERP, workflow, cloud, automation, AI — rather than pure org-design specialists.

These three responses account for most of the improvement from 21 to 63 percent. The discipline did not invent something new. It got more disciplined about what it had always been trying to do. The four attributes of a target operating model that actually works — the design framework that captures these responses — are covered in the next companion article.

What is still missing

Underneath the discretionary-translation issue, the traditional approach carried a deeper assumption that the major frameworks have not yet responded to: that work is executed by people. The design language of accountabilities, roles, supervision, and spans of control only works when humans are the executors. That assumption has held for the entire history of the discipline.

It is breaking now. Software agents now deploy in days against tasks that used to take months to staff with humans. Gartner projects 40 percent of enterprise applications will include task-specific AI agents by the end of 2026, up from less than 5 percent twelve months earlier. Deloitte's State of AI in the Enterprise 2026 puts a number on the gap: 84 percent of companies have not redesigned jobs to fit AI; only 14 percent have agentic solutions ready to deploy; only 11 percent are running them in production. The rest are layering agent capacity onto jobs designed for the work the agents were brought in to replace.

This is the most important shift any executive currently sponsoring a target operating model engagement should understand. AI does not just amplify the case for TOM. It changes what failing TOM looks like. What an agentic operating model actually requires is in a later article in this series.

Three questions to assess your current operating model

If you are sponsoring or running an operating model that was designed before 2020 — or commissioning a refresh — three questions will tell you whether it has caught up to where the discipline now is.

  1. Does the design reach below the executive layer? A manager who joined the organization last month should be able to read the operating model and act on it without inventing the operational layer themselves. If the answer is no, the model is too shallow for the conditions it has to operate in.
  2. What is the cadence and trigger for adjusting the model? Annual reviews are not a cadence — they are an artifact of the planning calendar. Modern operating models adjust on a defined cadence (quarterly, after each strategic move, when a real exception fires) and have a named trigger mechanism. If adjustments only happen when somebody escalates, the model is no longer keeping up with the strategy.
  3. Who owns the model with budget and authority right now? Modern operating models are held by named roles — Chief Transformation Officers, Operating Model Offices, function-level owners with seats where decisions get made. If the answer is "the consultants who designed it" or "we have not assigned that yet," the discipline is missing the third response that produced most of the recent improvement.

What the comeback story actually says

The improvement from 21 to 63 percent is real. The reason for it is not mysterious. Mainstream practice responded to two assumptions that broke, with deeper designs and continuous cadence, and modern engagements look meaningfully different from the 2010s script. The discipline got more disciplined.

But the response is incomplete. A third assumption — that work is executed by people — is breaking faster than mainstream practice has caught up. The gap that opens between what AI can execute and what the operating model is designed to govern is the next failure mode, and it is yet another technology wave practitioners will have to absorb. The discipline came back. The question now is whether it can stay caught up.

For a practitioner's view of what a modern target operating model produces across all four dimensions, the series hub is the place to start.

Diego Navia

Managing Director, digitiXe · 30+ years in business transformation
