8 min read · Hass Dhia

McKinsey's Agentic Organization Report and the Marginal ROI Crisis Point to the Same Root Cause

agentic-ai · organizational-transformation · marginal-roi · marketing-strategy · decision-intelligence

McKinsey published a report this week that deserves more attention than it will probably get. The finding, buried under the consulting prose, is this: most companies are running AI everywhere, but almost none are becoming what McKinsey calls an "agentic organization." They're running the tools. They're not changing the work.

What's interesting isn't the gap itself. It's that the exact same diagnostic pattern appears in two other unrelated pieces published this week - one on marketing ROI measurement, one on advertising effectiveness. All three are circling the same underlying problem from different directions: organizations are optimizing the wrong variable.

AI Is Everywhere. The Work Hasn't Changed.

McKinsey's analysis documents a paradox that anyone paying attention has felt. AI adoption metrics are strong: most knowledge workers have access to AI tools, usage is growing, and executives report AI in their strategy decks. But the organizations themselves haven't changed. Workflows are the same. Approval chains are the same. The unit of work is still the same human-authored deliverable handed off through the same organizational hierarchy.

The McKinsey framing is that an "agentic organization" isn't one that has AI tools - it's one that has redesigned its processes for AI-native execution. Agents don't just complete tasks faster; they execute multi-step workflows autonomously, removing entire layers of coordination overhead. An organization becomes agentic when the default unit of work shifts from "a human does this" to "the system handles this, humans govern the exceptions."

Most companies are nowhere near that. They've added AI to their existing workflows like a productivity add-on. They're getting faster emails. They haven't restructured who decides what, at what granularity, with what accountability. The tools are deployed. The organization isn't transformed.

This is a structural observation, not a motivational one. The limiting factor isn't ambition - it's that redesigning workflows, leadership structures, and accountability models is genuinely hard, and buying a subscription to a foundation model is not.

This is the kind of structural pattern STI's research tracks systematically: the gap between tool deployment and decision architecture transformation, and why organizations consistently confuse the first for the second.

Marginal ROI and the Channel Saturation Trap

MarketingWeek's analysis on marginal ROI makes a similar point from the measurement side.

The dominant mental model in performance marketing is still "channel ROI" - how much return does this channel generate relative to spend? By that measure, paid search and social look great in most attribution models, and budgets flow accordingly. But marginal ROI asks a different question: how much additional return do you get from the next pound (or dollar) spent in that channel?

In a channel with diminishing returns - which most saturated digital channels are - marginal ROI diverges sharply from average ROI after a certain spend threshold. The average looks good because it's propped up by all the efficient early spend blended into the total. The marginal is terrible because you've already captured the easy conversions and you're now paying more for less.
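The divergence is easy to see with numbers. Here's a minimal sketch assuming a square-root response curve - the curve shape and the constant `k` are illustrative assumptions, not a model of any real channel:

```python
import math

# Illustrative assumption: channel revenue follows a diminishing-returns
# square-root curve, revenue = k * sqrt(spend). Both the curve and k are
# made up for this sketch, not drawn from any real attribution data.
k = 100.0

def revenue(spend: float) -> float:
    return k * math.sqrt(spend)

for spend in (1_000, 10_000, 100_000):
    # Average ROI: total return divided by total spend (the dashboard metric).
    avg_roi = revenue(spend) / spend
    # Marginal ROI: return from the next dollar, approximated by a 1-dollar step.
    marginal_roi = revenue(spend + 1) - revenue(spend)
    print(f"spend={spend:>7,}  avg ROI={avg_roi:.3f}  marginal ROI={marginal_roi:.3f}")
```

Under this curve the average is always roughly twice the marginal, at every spend level: the dashboard metric systematically overstates what the next dollar will actually return, and the gap never shows up unless you measure the marginal directly.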

Why the Metrics Don't Flag It

This matters because inflation in lower-funnel channels is real and persistent. CPMs are up. CPC is up. The math that justified digital-heavy budgets three years ago doesn't hold at today's prices in markets where competition for intent-rich keywords is intense. Optimizing toward "ROI by channel" in this environment means maximizing an average metric while the marginal is destroying value.

The fix sounds simple - measure marginal impact, not average impact - but it requires willingness to question budget decisions that look fine in the existing reporting framework. Most organizations won't do that. The current model shows green, and proposing to reframe it means absorbing the political cost of proving the green is misleading. So the misallocation persists.

If you're evaluating your media mix against these criteria, our analysis tools can help surface where average and marginal ROI are diverging before the performance cliff becomes visible in quarterly results.

Advertising Problems That Aren't Advertising Problems

The Branding Strategy Insider piece is the sharpest of the three. The argument is simple: when advertising fails to deliver, the default diagnosis is a creative or media problem. Wrong brief, wrong targeting, wrong channel, wrong creative execution. But most advertising underperformance is actually a product problem, a pricing problem, or a distribution problem.

If the product doesn't have a compelling value proposition, no amount of creative excellence will manufacture meaningful demand. If the price point is wrong relative to perceived value, lower-funnel intent will exist but convert at ugly rates. If distribution is patchy, even high-intent demand will leak. None of these are fixable by changing the ad. But changing the ad is far easier than fixing the product, the price, or the distribution - so organizations default to the thing they can control.

This is the misdiagnosis problem in its cleanest form. The symptoms present in the advertising channel. The root cause is upstream. Organizations treat the symptom because the root cause requires cross-functional organizational change that advertising departments don't have authority to initiate.

The result is a feedback loop: ad performance is weak, creative is refreshed, performance is still weak, the agency brief changes, performance is still weak. The industry has generated an entire discourse around "creativity crisis" - declining quality of advertising creative as an explanation for performance trends - when the actual explanation for many brands is structural, not creative.

We've covered this pattern before in the context of how the agentic advertising industry is building process infrastructure before proving value - the same impulse to optimize what's controllable rather than interrogate whether the underlying premise is correct.

The Common Thread: Optimizing the Wrong Variable

These three analyses are describing the same failure mode with different vocabularies.

McKinsey: organizations optimize for AI tool deployment when the variable that actually determines outcome is organizational redesign.

MarketingWeek: advertisers optimize for average channel ROI when the variable that actually determines budget efficiency at scale is marginal ROI.

Branding Strategy Insider: brands optimize for advertising execution when the variable that actually determines conversion is product-market fit, price, and distribution.

In each case, the wrong variable is measurable and controllable while the right variable is harder to measure and politically costly to act on. Organizations systematically gravitate toward the tractable problem even when solving it won't address the underlying outcome.

The Incentive Structure Behind the Mistake

This isn't an irrationality story. It's an incentive story. Individuals and teams optimize for metrics they're evaluated on and levers they control. Procurement metrics don't capture organizational redesign. Marketing attribution models don't surface marginal ROI divergence. Advertising KPIs don't flag product-market fit gaps. The systems aren't measuring the thing that matters, so the people inside the systems can't be expected to optimize for it.

The behavioral economics research on this is clear. When decision-making environments create misalignment between measurable proxies and actual outcomes, proxies win reliably. Roger Dooley's work on the paradox of choice offers a useful frame: we optimize obsessively within bounded choice sets while ignoring whether the choice set itself is correctly constructed. Organizations choose harder within the wrong frame before they ask whether the frame is right.

This pattern - where the real decision intelligence gap is in framing rather than execution - is what STI's decision tools are built to address. The analytics matter less than the framework they sit inside.

What the Agentic Organization Actually Requires

McKinsey's report implies something it doesn't fully say: the agentic organization isn't an AI story at its core. It's a workflow governance story.

To become genuinely agentic, organizations need to make a series of uncomfortable architectural decisions. Which decisions can be delegated to autonomous systems without human review? What are the exception conditions that trigger human oversight? What does accountability look like when the output is generated by an agent? Who owns the agent's instructions the way a manager owns a direct report's work?
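One way to make those questions concrete is to treat the answers as an explicit, inspectable policy rather than ad hoc judgment. The sketch below is illustrative only - every name and threshold is a hypothetical assumption about what "exception conditions" might look like:

```python
from dataclasses import dataclass

# Illustrative sketch: encoding "the system handles this, humans govern the
# exceptions" as an explicit routing policy. All names and thresholds here
# are hypothetical assumptions, not a real governance framework.

@dataclass
class AgentAction:
    task: str
    confidence: float    # agent's self-reported confidence, 0 to 1
    spend_impact: float  # budget the action would commit, in dollars

# Exception conditions that trigger human oversight (assumed thresholds).
CONFIDENCE_FLOOR = 0.90
SPEND_CEILING = 5_000.0

def route(action: AgentAction) -> str:
    """Return 'autonomous' or 'human_review' for a proposed agent action."""
    if action.confidence < CONFIDENCE_FLOOR or action.spend_impact > SPEND_CEILING:
        return "human_review"
    return "autonomous"

print(route(AgentAction("rebalance media budget", 0.95, 2_000.0)))    # autonomous
print(route(AgentAction("pause flagship campaign", 0.97, 50_000.0)))  # human_review
```

The design point is not the thresholds themselves but that they exist as reviewable artifacts: someone owns `CONFIDENCE_FLOOR` and `SPEND_CEILING` the way a manager owns a direct report's work, and changing them is a governance decision, not a tooling one.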

These questions don't have off-the-shelf answers. They require organizations to examine their own decision-making architecture with unusual rigor - something that most are structurally reluctant to do.

The Parallel in Marketing and Brand

The path to marginal ROI measurement isn't a better analytics tool. It's an organizational willingness to accept that existing budget allocations, optimized against average ROI metrics with institutional momentum behind them, may be badly wrong at the margin. That's a harder conversation to start than deploying an AI tool.

We've written before about how brand positioning decisions in the agentic buying layer create exactly this kind of upstream architecture problem - where the decisions that actually determine outcomes are structural, made early, and difficult to revisit once performance metrics start flashing red.

The organizations that will close the agentic gap, the marginal ROI gap, and the advertising effectiveness gap share one characteristic: they're willing to make the harder measurement and accountability changes first, so the tools actually have a chance to work. That sequencing - diagnose the right variable first, then deploy the tool - is what consistently separates organizations that transform from organizations that adopt.

If you're working through what that looks like for your organization, start here.
