Hass Dhia · 8 min read

McKinsey's Agentic AI Infrastructure Warning Is Also a Marketing Failure Diagnosis

agentic-ai · brand-strategy · behavioral-economics · data-infrastructure · decision-intelligence

Two reports landed this week that most readers filed in completely different mental folders. One is about technology infrastructure. One is about marketing performance. They are, structurally, the same document.

McKinsey's "Reimagining tech infrastructure for and with agentic AI" makes one central claim: scaling AI agents requires transforming unstructured data into governed, reusable assets that systems can interpret and trust. The report cites data limitations as the primary blocker for enterprise AI agent deployment. Not model quality. Not compute cost. Not organizational resistance. The substrate beneath the agents is not ready, and no amount of agent sophistication compensates for it.

Branding Strategy Insider published "Marketing Rarely Fails On Its Own" the same week, with a structurally identical argument: "Brand is not built primarily in communications. It forms through the accumulation of decisions across an organization, over time." When campaigns underperform, the instinct is to adjust creative and messaging. The actual failure is almost always beneath the surface layer, in organizational decisions and purpose-alignment gaps that marketing was never equipped to fix.

Same diagnosis. Different domain vocabulary.

The reason this pattern keeps recurring across unrelated fields is that it is not a domain-specific failure. It is a feature of how organizations allocate attention and resources. The visible layer attracts scrutiny because it produces visible outputs. The substrate layer produces invisible conditions. Organizations consistently invest in what they can measure and skip what they can only understand.

The Infrastructure That Determines Everything

McKinsey's framing sounds technical, but the core insight is organizational. AI agents can only be as reliable as the data environment they operate in. When enterprises deploy agentic AI on top of inconsistent, poorly labeled, and non-standardized data, they produce confident-sounding outputs that do not map reliably to operational reality. The model is not failing. The contract between the data and the model was never established.

The report frames this as a "shared foundation" problem. Enterprises need common definitions, data lineage tracking, and governed access layers before agents can work reliably across functions. Without that substrate, every agent deployment is asking a sophisticated reasoning system to navigate a building with unlabeled rooms, doors that sometimes lead somewhere different than yesterday, and no master floor plan.
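To make the "shared foundation" concrete, here is a minimal Python sketch of what a governed data asset might enforce: a common schema definition and a lineage trail recording where every record came from. This is illustrative, not McKinsey's framework; the GovernedAsset class, its fields, and the service names are invented for the example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical "governed asset": the data carries its own definition,
# lineage, and validation rules, so an agent can check the contract
# before acting on the contents.
@dataclass
class GovernedAsset:
    name: str
    schema: dict[str, type]                            # common definitions: field -> expected type
    lineage: list[str] = field(default_factory=list)   # upstream sources, in arrival order
    records: list[dict] = field(default_factory=list)

    def ingest(self, record: dict, source: str) -> None:
        """Admit a record only if it satisfies the shared schema."""
        for column, expected in self.schema.items():
            if column not in record:
                raise ValueError(f"{self.name}: missing field '{column}' from {source}")
            if not isinstance(record[column], expected):
                raise TypeError(
                    f"{self.name}: field '{column}' from {source} is "
                    f"{type(record[column]).__name__}, expected {expected.__name__}"
                )
        self.records.append(record)
        self.lineage.append(f"{source} @ {datetime.now(timezone.utc).isoformat()}")

orders = GovernedAsset("orders", schema={"order_id": str, "amount": float})
orders.ingest({"order_id": "A-100", "amount": 42.5}, source="checkout-service")
# orders.ingest({"order_id": 100, "amount": "42.5"}, source="legacy-export")  # would raise
```

The point of the sketch is the refusal: a record that violates the shared definition never enters the asset. That refusal is the contract every downstream agent depends on.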

Why This Gets Deployed Backwards

The backwards deployment pattern is predictable once you see it. Agents are visible and demonstrable. A working agent demo moves through an organization quickly because it is legible to leadership. You can show it, benchmark it, put it in a slide. Data governance is invisible and slow. The payoff from a well-governed data layer does not appear in the same review cycle as the agent deployment. So organizations deploy the agent and leave the substrate ungoverned, expecting the surface to compensate for the foundation.

The Walmart and Macy's agentic commerce divergence from earlier this year illustrated this precisely. Walmart's OpenAI pilot failed because it placed the agent on top of data Walmart did not control, could not optimize, and could not govern. Macy's 4.75x revenue result from "Ask Macy's" came from running their agent against their own proprietary customer graph. Same technology tier. Opposite infrastructure posture. Opposite outcome. McKinsey's 80% data-limitation finding explains both results simultaneously.

Brand Strategy Has Known This for Decades

The Branding Strategy Insider analysis by Anne Bahr Thompson articulates the brand version of the substrate problem with unusual precision: "Marketing becomes the messenger for tensions it didn't create."

Thompson's framework describes how brand performance actually compounds, through the accumulation of decisions across functions rather than through communications. The daily choices that build or erode brand alignment are not campaign choices. They are operational choices: what trade-offs get approved under pressure, whether quality standards hold when margins are tight, whether a firm's stated purpose functions as a decision filter or as a tagline. These choices accumulate below the surface layer. Marketing picks up the signal they create.

When a campaign underperforms, the diagnosis almost always focuses on the campaign. Refine the positioning. Adjust the creative. Increase the media weight. Sometimes these moves help. More often, the campaign is surfacing misalignment that was already present in the organizational substrate before any brief was written. You can optimize the surface indefinitely without touching the actual lever.

The Decision Layer as Infrastructure

What makes Thompson's framing valuable is that it treats organizational decisions as infrastructure rather than as culture. Culture is a notoriously fuzzy intervention target. Infrastructure is not. You can audit decisions the same way you audit data pipelines: what choices were made, at what level, with what trade-offs, and whether they are consistent with stated purpose.
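A hedged sketch of what that audit could look like, assuming an organization logs decisions the way a data team logs pipeline runs. The Decision fields and the alignment metric are hypothetical, chosen only to show that the audit becomes mechanical once decisions are recorded.

```python
from dataclasses import dataclass

# Hypothetical decision record: field names are illustrative, not a standard.
@dataclass(frozen=True)
class Decision:
    description: str
    level: str             # e.g. "exec", "team", "individual"
    trade_off: str         # what was sacrificed to make this choice
    purpose_aligned: bool  # does it pass the stated-purpose filter?

def audit(decisions: list[Decision]) -> float:
    """Return the share of recorded decisions consistent with stated purpose."""
    if not decisions:
        return 1.0
    return sum(d.purpose_aligned for d in decisions) / len(decisions)

log = [
    Decision("Held QA gate despite deadline", "team", "ship date slipped a week", True),
    Decision("Discounted below floor to close Q3", "exec", "margin and price integrity", False),
]
print(f"Purpose alignment: {audit(log):.0%}")  # -> Purpose alignment: 50%
```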

The case for brand trust as an operational property rather than an aspirational one follows the same logic. Netflix House and MGM's solar-powered operations succeed not because their brand messaging is coherent, but because their operations have become the brand. The surface layer and the substrate layer are aligned. That alignment is not a communications achievement. It is a decisions achievement.

The Behavioral Science Confirmation

A BehavioralEconomics.com review of 80 influence studies spanning 1982 to 2024 adds a third data point. The finding: Cialdini-style influence strategies work best when matched to the individual's personality profile. Mismatched strategies do not just underperform. They actively backfire. Someone low in agreeableness does not respond to social proof the way someone high in agreeableness does. Apply the wrong influence architecture and you have not merely wasted the persuasion attempt. You have damaged the relationship.
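The matching logic is simple to express, which is part of the finding's force. Below is an illustrative Python sketch of the match-or-backfire structure; the trait threshold and effect sizes are invented for the example and are not the review's estimates.

```python
# Illustrative sketch of the matching logic, not the study's model.
# Trait names follow the Big Five; thresholds and effects are invented
# purely to show the structure of the match/backfire claim.
def influence_effect(strategy: str, agreeableness: float) -> float:
    """Positive = persuasion, negative = backfire. agreeableness in [0, 1]."""
    if strategy == "social_proof":
        # Works on high-agreeableness profiles, backfires on low ones.
        return agreeableness - 0.5
    if strategy == "scarcity":
        # Assumed here, for illustration, to be roughly trait-neutral.
        return 0.1
    raise ValueError(f"unknown strategy: {strategy}")

for trait in (0.9, 0.2):
    print(f"agreeableness={trait}: social_proof -> {influence_effect('social_proof', trait):+.1f}")
# agreeableness=0.9 -> +0.4 (persuades); agreeableness=0.2 -> -0.3 (backfires)
```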

The parallel to brand and infrastructure is direct. In each case, a sophisticated technique is applied without examining whether the substrate conditions are met: whether the data is governed, whether organizational decisions align with purpose, whether the audience's disposition matches the influence architecture. The technique is real. The substrate is skipped. The backfire follows.

The Pattern Across Three Domains

Across McKinsey's AI report, Branding Strategy Insider's brand analysis, and the behavioral economics literature, the failure pattern is structurally identical:

  1. An organization invests in a visible output layer: AI agent, marketing campaign, influence tactic
  2. The substrate that determines whether the output works (data governance, organizational decision alignment, personality matching) remains unexamined
  3. The output underperforms
  4. The output layer is diagnosed as the failure, and the substrate is never audited

This cross-domain convergence is the point none of the source articles makes individually. Across these domains, investment allocation follows the same budget logic: visible layers are legible to leadership and show up in quarterly reviews; substrate layers do not demo well and produce results that lag the investment cycle. McKinsey's data-limitation finding, Branding Strategy Insider's organizational misalignment argument, and the behavioral economics backfire literature are all measuring the same underlying phenomenon through different disciplinary lenses.

What Devonshires' First CMO Will Reveal

The MarketingWeek story about property law firm Devonshires appointing its first Chief Marketing Officer is a case in progress. The firm hired a CMO to take all "the threads" of the business and sew them into a coherent growth strategy, framing the hire around distinctive viewpoint, growth orientation, and integrated communications.

Professional services firms are notoriously late adopters of brand discipline. Most law firms have spent decades treating business development as a relationship function, marketing as event sponsorships and directory listings, and brand as logo standards. Bringing in a first CMO represents a genuine investment in brand muscle.

The risk for Devonshires is structural. If the CMO's mandate stops at communications, the hire will underdeliver. The threads that actually build a firm's brand, in Thompson's terms, are not communications threads. They are decision threads: how partners price engagements, what trade-offs they make under client pressure, whether quality standards hold when they are inconvenient, how the firm treats associates when no client is watching. If those threads pull in contradictory directions, no CMO with a communications-focused mandate will sew them together.

Law firm first CMO hires are structurally different from consumer brand CMO appointments in one specific way. In consumer goods, brand infrastructure exists: segmentation research, brand architecture frameworks, established feedback loops between marketing investment and sales outcomes. The CMO is deploying into a prepared substrate. At Devonshires, the CMO may be walking into a firm where the organizational decision substrate has never been audited through a brand lens. The surface-layer tool is ready. The substrate layer has not been examined.

This is not a prediction that the hire will fail. It is a prediction about where the difficulty will emerge, and it follows directly from the same pattern McKinsey and Branding Strategy Insider are both identifying.

The Budget Problem That Sustains the Pattern

There is a structural reason the substrate-first approach remains rare. Substrate investments are hard to pitch and slow to demonstrate. A well-governed data architecture does not have a demo. An organization whose daily decisions align with its stated purpose cannot put a causal line to revenue on a slide. The returns are real, but they are not legible in the format that budget allocations require.

This is a coordination problem rather than a knowledge problem. Most senior leaders understand intellectually that data governance matters, that organizational behavior drives brand outcomes, and that influence works differently on different personality profiles. The knowledge is available. The problem is that incentive structures for demonstrating results on quarterly timelines systematically defund substrate investments in favor of surface investments that show faster, more legible signals.

McKinsey's seven-principle data architecture framework is genuinely useful, but its uptake will follow the usual pattern: cited in strategy decks, under-resourced in implementation, and diagnosed as a failure of the framework rather than a failure of substrate investment when the agents underperform.

The same outcome plays out when a law firm's first CMO produces campaigns that win creative awards but do not generate measurable growth, when an influence strategy that worked on one segment backfires on another, and when a brand campaign is redesigned for the fourth consecutive quarter without any examination of the organizational decisions that created the gap it was supposed to close.

The substrate determines the outcome. The surface layer gets the budget. Three separate fields documented it again this week.

What substrate investment actually looks like is worth naming, because it is less glamorous than it sounds. For agentic AI, it means spending six months on data classification, schema standardization, and lineage documentation before deploying a single production agent. For brand strategy, it means auditing the decisions made across every function over the past year before writing a positioning brief. For influence, it means segmenting your audience by personality profile and testing strategy-to-profile fit before scaling a campaign. None of these activities produce a demo. All of them are prerequisites for the demo actually working.
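One way to operationalize that sequencing, sketched here as a hypothetical pre-deployment gate: the substrate checks run first, and the surface-layer deployment stays blocked until they pass. The check names and thresholds below are assumptions for illustration, not a standard.

```python
# Hypothetical readiness gate: substrate checks run before any
# surface-layer deployment is allowed. Names and thresholds are invented.
SUBSTRATE_CHECKS = {
    "data_classified":      lambda env: env.get("classified_pct", 0.0) >= 0.95,
    "schemas_standardized": lambda env: env.get("schema_drift", 1.0) <= 0.02,
    "lineage_documented":   lambda env: env.get("lineage_coverage", 0.0) >= 0.90,
}

def ready_to_deploy(env: dict) -> bool:
    """Report every failed substrate check; allow deployment only if none fail."""
    failures = [name for name, check in SUBSTRATE_CHECKS.items() if not check(env)]
    for name in failures:
        print(f"BLOCKED: substrate check failed: {name}")
    return not failures

# A demo-ready agent does not pass this gate on an ungoverned substrate.
env = {"classified_pct": 0.62, "schema_drift": 0.11, "lineage_coverage": 0.40}
if ready_to_deploy(env):
    print("Deploy agent.")
```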

The organizations that figure this out are not the ones with the most sophisticated surface-layer tools. They are the ones disciplined enough to invest in the layer that does not show up in a quarterly review until the absence of failure finally becomes visible.

If your organization is mapping the gap between strategic infrastructure and actual decision architecture, STI's research on decision intelligence tracks this pattern systematically across industries.
