Part 3 of 8 · Rethinking the Target Operating Model
Operating Model Architecture · May 15, 2026 · 10 min read

What MBB and Big 4 actually deliver when you commission a Target Operating Model

What you receive when you commission a target operating model is not the same across firms. Here is the firm-by-firm reality across MBB and the Big 4.

When you commission a target operating model engagement from a major firm, you receive a recognizable set of deliverables. Capability maps. Organization structures. Decision rights. Governance schedules. Ways-of-working playbooks. Technology choices. Location and sourcing decisions. KPIs. An implementation roadmap. Most engagements produce most of those, regardless of which firm signed the proposal.

Underneath the common deliverables, the frameworks the firms actually publish are not the same. McKinsey leads with 12 elements in four groups. Bain leads with design principles before structure. BCG decomposes around front, middle, and back office. KPMG publishes six layers anchored on a service-delivery foundation. Deloitte's published framework has shifted twice in a decade. PwC does not publish a single canonical framework at all. EY explicitly rejects the people-process-technology-governance taxonomy that the other six are built on.

The disagreements matter. They are not about whether to design across the four dimensions — every firm designs across WHAT, WHO, HOW, and WHERE — but about which elements within each dimension get elevated to peer status, which get buried as sub-questions, and which get ignored. If you are choosing between firms or trying to read what a competing proposal will actually deliver, the differences tell you a lot. This article walks through them.

The common ground: four dimensions, recognizable artifacts

Strip away the firm-specific language and a traditional TOM engagement produces a recognizable set of artifacts. If you commission one, expect to receive most of these regardless of who you hire.

  • A definition of work and capabilities — captured as a capability map, value streams, and a service catalog.
  • An organizational structure — boxes, lines, reporting relationships, business units, shared services, centers of expertise.
  • Accountabilities and decision rights — RACI matrices, authority schedules, ownership of P&Ls, the role of the corporate center.
  • Governance — committees, review cadences, escalation paths.
  • Ways of working — meeting rhythms, planning cycles, performance management, cross-functional protocols.
  • Talent and capabilities — required skills, role profiles, training architecture, leverage ratios.
  • Technology platforms — major system choices and integration approach.
  • Locations and footprint — geographies, time zones, sourcing decisions, captive versus vendor mix.
  • Performance metrics — KPIs, dashboards, accountability for results.
  • An implementation roadmap — phased plan, milestones, dependencies, quick wins, benefits case.

These are the deliverables. Every firm produces most of them. What changes is the published framework — the structural taxonomy each firm uses to organize the work and the deliverables. The frameworks differ in three ways that matter for buyers: which elements they treat as peer-level versus subordinate, which ones they emphasize in proposals and pricing, and how their consultants actually structure the engagement around them.

The full grid

The seven major firms mapped against the four dimensions, using each firm's published element vocabulary:

| Dimension | McKinsey | Bain | BCG | Deloitte (NextGen) | PwC (Strategy&) | EY (Future-Fit) | KPMG (Powered) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| WHAT — scope, capabilities, value | Strategy; Capabilities | Capabilities (in 5 elements) | Capability map; F/M/B split | Vision & Strategy; Products; Channels | Way to Play; Differentiating Capabilities System | Innovation Platform; Enduring Purpose | (implicit, outside the 6 layers) |
| WHO — structure, roles, accountability | People; Org structure | Structure; Accountabilities | Structure by F/M/B | Organisation; People | Org Structure; Roles & Accountabilities; Decision Rights | Talent Flexibility | People |
| HOW — execution, governance, performance | Ways of working | Governance; Ways of working | Ways of working | Processes; Technology; Governance & Reporting; Data | Key Processes; Technology; Performance Management; Culture | Digital DNA | Process; Technology; Performance Insights; Governance |
| WHERE — geography, sourcing, footprint | (not named) | (not named) | F/M/B office overlay | Service Delivery / Sourcing | (weak) | Dynamic Ecosystems | Service Delivery Model |

Source: each firm's published methodology pages. Parenthetical notes indicate where a dimension is treated as a sub-question rather than a peer element.

The patterns in this grid are the article's central observation. They matter for any buyer trying to read what a vendor will actually deliver.

MBB

McKinsey. The published framework lists 12 elements organized into four groups: strategy, capabilities, ways of working, and people. The elements inside those groups are roughly the ones a senior consultant would name: structure, accountabilities, processes, performance management, technology, metrics, talent, culture and mindset. The framework is anchored on capabilities — McKinsey treats capabilities as the bridge between strategic intent and structural design. Performance management is named as a peer element rather than as a sub-question of ways of working. Locations and sourcing are not named as a peer element at all; in McKinsey's structure they live inside the broader operational layer.

Bain. Bain's published framework leads with 7 to 15 design principles that the leadership team agrees on before structural decisions get made. The model itself then organizes across five elements: structure, accountabilities, governance, ways of working, capabilities. The framework's signature move is the principles-first approach, captured in Bain's stated philosophy that "principles liberate people to do the right thing." Like McKinsey, Bain does not name a WHERE element as a peer; geography and sourcing live inside the broader structural element. Like McKinsey, Bain treats governance as a first-class element.

BCG. Twelve elements, with a pronounced front-middle-back office decomposition that runs across them. The split is BCG's signature: each office category gets its own structural design, its own technology choices, and its own ways of working, with the operating model integrating the three. Layered on top, BCG's 2018 Agile Operating Model Framework added capacity-based funding (funding flows to teams, not projects), flatter spans of control, and team-level autonomy as design choices. BCG's front/middle/back office split is the closest any MBB firm comes to a named geographic decomposition, but it is organizational, not locational. WHERE proper is not elevated as a peer dimension.

Big 4

KPMG — Powered Enterprise TOM. Six named layers: Process, People, Service Delivery Model, Technology, Performance Insights, Governance. KPMG is the only Big 4 firm to elevate a WHERE element — Service Delivery Model — to first-class peer status. The published positioning is explicit: KPMG calls out that traditional TOMs cover process-people-technology and miss where the work gets done, how it gets measured, and how it gets governed. Performance Insights (the data and insight capability) is named separately from process and technology — closer to the modern view that performance is a feedback loop, not a dashboard. Each layer is populated by predefined assets — workflows, reports, dashboards, training — from the Powered Enterprise library; the commercial differentiation is the asset library, not the taxonomy.

Deloitte. The published TOM names eight to ten elements depending on which Deloitte source you read: Vision & Strategy, Organisation, People, Processes, Technology, Products, Channels, and Governance & Reporting. The most-cited reference is a 2014 "Target Operating Model at a glance" document from Deloitte Luxembourg, which most secondary sources still reproduce. Deloitte's 2020s NextGen Operating Model Transformation reframing adds Data and Service Delivery as peer elements. With those additions, Deloitte joins KPMG as the only published Big 4 framework that elevates WHERE to a first-class element. Deloitte also overlays a Future of Work lens (work, workforce, workplace) onto the model in newer thought leadership — an overlay rather than a replacement of the core element list.

PwC. PwC does not publish a single canonical TOM framework. Across pwc.com and strategyand.pwc.com there are at least four overlapping frames in active circulation: Strategy&'s Capabilities-Driven Operating Model (closest to flagship), Fit for Growth (organized as three questions — what do we do? where do we do it? how, and how well?), OrgDNA (a diagnostic across structure, talent, process, culture), and the COO 4Vs model (Vision, Value, Velocity, Viability — positioning rather than an element list).

The Capabilities-Driven version is the closest thing to a canonical PwC TOM and is the one to read if you are evaluating a Strategy& proposal. It names nine elements: Way to Play, Differentiating Capabilities System, Organization Structure, Roles & Accountabilities, Decision Rights, Key Processes, Technology, Performance Management, Culture / Behaviors. The defining PwC fingerprint is capabilities, not structure first — the published line is that clarity about the "what" lets you define the "how." PwC is also the only major firm that elevates Decision Rights to a peer element. Notably, WHERE is weakly represented in the Strategy& list; sourcing lives inside other elements.

EY. EY explicitly rejects the people-process-technology-governance taxonomy that the other six firms are built on. The published Future-Fit Operating Model, developed with MIT SMR Connections against a 370-leader executive survey, organizes around five capability areas rather than structural layers: Dynamic Ecosystems (partner networks, alliances, external value-chain orchestration), Digital DNA (embedded digital, data, and tech capabilities), Talent Flexibility (workforce model that flexes skills, roles, and contracts), Innovation Platform (continuous innovation mechanism), Enduring Purpose (the "why" anchoring strategy and behavior).

EY's framing is deliberately a critique of the classic structural taxonomy — the firm calls the conventional approach "siloed" and "anchored in the present." The five elements map onto WHAT/WHO/HOW/WHERE only by force; every element is meant to span dimensions rather than sit inside one. EY has also been pushing an Agentic AI Operating Model narrative in 2025–2026 thought leadership, but that work sits on top of (not inside) the five-element future-fit model.

Three things this comparison tells you

The dimensions are stable. The emphasis is not. All seven firms ultimately design across WHAT/WHO/HOW/WHERE. None of them treats those four as optional, and none of them adds a fifth. What changes is which elements within each dimension a firm treats as peer-level. Capabilities are McKinsey's anchor. Decision rights are PwC's. Service delivery is KPMG's. Capability areas spanning dimensions are EY's. The four dimensions are the common floor; the elevated elements are the firm fingerprint.

Only KPMG and Deloitte's NextGen version name a WHERE element explicitly. The other five firms treat location and sourcing as sub-questions inside other elements rather than as a peer dimension in its own right. This shows up in proposals: an engagement that does not name a WHERE deliverable on the SOW typically does not produce one, even when location is a strategic decision the operating model needs to govern. The pattern is structural — WHERE has been under-specified across most of the field for two decades, and the firms that elevate it (KPMG since Powered, Deloitte since NextGen) are the ones whose engagements consistently surface geography as a design question.

PwC publishes four overlapping frames, not one canonical framework. Buyers evaluating a Strategy& proposal often assume a single canonical PwC TOM exists. It does not. The Strategy& Capabilities-Driven Operating Model is the closest, but Fit for Growth, OrgDNA, and the COO 4Vs circulate alongside it and are pitched as the lead frame depending on the engagement type. This is not a flaw — multiple frames let PwC fit the engagement to the buyer's question — but a buyer should ask which frame the proposal is anchored on, and read the elements named there.

EY's rejection of the classic taxonomy is the most structurally interesting position of the seven. The capability-area frame is closer to a modern view of operating model design — work organized around what the organization needs to be able to do, not around process and structural layers. But EY's five elements do not carry the hybrid execution layer that modern target operating models now have to produce. The frame is structurally ahead of its content.

What all seven systematically under-specify

The common floor is the four dimensions. The common omissions are revealing. Three things appear in modern operating models that none of the seven frameworks elevate to peer status:

The first is the hybrid execution layer — the explicit drawing of the human/agent boundary inside the model, task by task. Every framework above was built when work was assumed to be executed by people. None of the seven names agent role definitions, human-in-the-loop protocols, or escalation rules as deliverables. EY's emerging Agentic AI Operating Model work and Deloitte's Future of Work overlay are closest, but neither is integrated into the core element list yet.

The second is the multi-sourced workforce as a deliberate design surface. The published frameworks treat sourcing decisions as a downstream consequence of organizational design. In reality, a modern workforce is a deliberate mix — full-time employees, contractors, captive offshore, BPO partners, professional services and consulting firms, fractional and part-time workers, gig platforms, deterministic agents, judgment-bearing agents. EY's Dynamic Ecosystems captures part of this. PwC's Roles & Accountabilities captures part. No framework treats the full sourcing taxonomy as a peer deliverable.

The third is the orchestration layer — the handoff protocols across humans, deterministic agents, and judgment-bearing agents that live in the systems running the work rather than in slides. The published frameworks describe technology choices and process design as separate elements. None names orchestration as the deliverable that ties them together for hybrid execution.

These three omissions are the surface area where the modern target operating model has to do more than the published frameworks ask for. What a modern TOM should produce across all four dimensions, including the hybrid layer, sits in the hub of this series.

What to ask in a procurement conversation

If you are commissioning a target operating model engagement and trying to read what each firm will actually deliver, four questions in the first conversation will tell you most of what you need to know.

  1. Which named elements appear on the SOW? Ask for the firm's own list, in the firm's own vocabulary. Compare against the grid above. The deliverables you receive are the ones named on the SOW.
  2. Is WHERE a peer deliverable, or is it inside another element? If location and sourcing are not on the named list, the engagement will likely treat them as downstream decisions.
  3. How does the deliverable describe the human/agent boundary? Ask for a sample artifact. If the firm cannot show one, the engagement is designing for a workforce that no longer exists.
  4. What is the orchestration layer of the deliverable? A deck is not an orchestration layer. The deliverable should describe how the work runs in the systems, not just how it is designed on paper.

The pattern across MBB and Big 4 frameworks is that the published structure tells you what each firm is good at and what it routinely under-specifies. The buyer's job is to read the structure, name the gaps, and require the gaps as deliverables before the engagement starts.

Diego Navia

Managing Director, digitiXe · 30+ years in business transformation

Want to discuss this topic?

These insights come from real engagement experience. If something resonates with your situation, let's talk.

Schedule a Conversation