Hass Dhia · 8 min read

OpenAI Testing an Ads Manager Is a McKinsey Agentic Architecture Problem, Not an Ad Tech Story

Tags: agentic-ai · enterprise-strategy · openai · mckinsey · advertising · architecture

OpenAI began testing an Ads Manager last week with a small group of partners. The trade press covered it as an ad tech story - a new revenue stream for a company burning billions on compute. That framing is technically accurate and almost entirely beside the point.

If you are a brand leader or enterprise strategist, the OpenAI ad test is not a monetization story. It is an early data point in a structural reconfiguration that McKinsey's two most important analyses this week describe from completely different angles - and together they outline a problem most organizations are not set up to see.

The Attention Infrastructure Is Being Rebuilt Under Your Feet

For roughly fifteen years, brand marketing has operated on a stable attention duopoly: Google captures purchase intent, Meta captures social engagement. Brands have optimized their agencies, their technology stacks, and their attribution models around this reality. It worked because the infrastructure was stable.

What changes when the AI layer - the conversational interface that increasingly mediates consumer decisions before they ever reach a search engine - decides to monetize attention directly?

OpenAI's Ads Manager test is the first concrete signal that the layer above the browser is building infrastructure to capture brand revenue directly. When a consumer asks ChatGPT "which car should I buy" or "what's the best insurance for someone in my situation," that query is currently answered without a brand paying for placement. The Ads Manager test suggests that changes.

This is not a Google-versus-OpenAI competitive story, though it will become that. It is a signal about where decision leverage is migrating. Brand discovery - the moment a consumer narrows consideration to a specific set of options - is shifting from search results and social feeds to AI-mediated conversations. The advertising infrastructure that captures that moment is being built now.

The brands that treat this as a "watch and see" situation are operating on the assumption that their current attribution models will still make sense in 18 months. That assumption is untested, and the evidence suggests it is wrong.

What McKinsey's Agentic Architecture Report Actually Warns

Published in the same week, McKinsey's analysis of enterprise architecture for the agentic era frames a specific structural choice for technology leaders: incremental modernization versus full-scale transformation.

The report's core argument is that agentic AI does not fit cleanly into existing enterprise IT frameworks. Traditional architecture was built around human-initiated workflows - a person triggers a process, a system responds. Agentic workflows are different. They are persistent, context-aware, and capable of initiating across systems without a human in the loop at each step.

This creates a structural incompatibility. You can bolt agentic capabilities onto legacy architecture - most enterprises are doing exactly this - but you get isolated AI wins rather than a coherent operating system. McKinsey describes the gap between these paths as: "agentic features on legacy foundations" versus "agentic workflows as the new operating layer."

That distinction matters enormously. The features path produces measurable AI ROI in specific use cases. The operating layer path produces compounding advantage - a system that accumulates cross-functional context over time and becomes increasingly difficult to replicate.

This is the kind of cross-domain signal STI's research tracks systematically - where a capability arrives faster than the infrastructure designed to absorb it, and the gap between early movers and late adopters widens before most organizations realize the race has started.

Why Incremental Is the Wrong Default

The incremental path is appealing because it is legible to existing governance. You can run a pilot. You can show ROI. You can manage risk in a way your board understands. The transformation path requires accepting that your current architecture is fundamentally wrong, not merely incomplete.

The history of enterprise technology suggests most organizations choose incremental until the pain becomes acute. The risk with agentic AI is that the pain arrives suddenly rather than gradually. When a competitor's agentic system has accumulated 18 months of cross-functional context - about customers, inventory, pricing, and competitive signals - you cannot catch up by bolting agents onto your existing stack.

This pattern has appeared before in advertising contexts: the gap between building agentic features and building agentic systems is wider than it looks from inside the organization building the features. The features show up in pilots. The systems show up in market share.

The COO Problem Nobody Is Discussing

McKinsey's separate analysis of COO excellence lands differently when read alongside the architecture piece.

The report's central observation is that the COO role has migrated from pure execution to strategic partnership. Today's COOs sit at the intersection of strategy, execution, and CEO collaboration in ways that were not true a decade ago. The role has expanded to include driving transformation, not just running operations.

Here is the problem this creates: enterprise architecture transformation requires operational commitment, not just IT commitment. The incremental-versus-full-scale choice McKinsey describes is not made in the CTO's office. It is made implicitly, through how the COO allocates organizational capacity, structures cross-functional work, and sets the tempo for change.

If the COO is still running operations on a traditional execution model - clear ownership, sequential process, risk-minimized change - then even a technically sophisticated agentic architecture initiative will stall at the operational boundary. The technology can support agentic workflows. The organization has not changed how it manages work.

This is not a criticism of operations leaders. It is a structural observation about where transformation actually gets stuck. Architecture decisions and operations tempo have to move together. When they do not, you get the situation most enterprises are in now: technically capable pilots that do not scale into the operating model.

Three Separate Stories, One Structural Problem

Taken together, the OpenAI ads test, the McKinsey architecture analysis, and the COO evolution piece describe what is actually one problem dressed up as three separate departmental concerns.

  • Marketing sees an emerging threat to its media buying infrastructure
  • IT sees an architectural transformation requirement
  • Operations sees a leadership evolution challenge

If these are managed as three separate tracks - which is the default in most organizations - each function optimizes locally and creates misalignment at the intersections. Marketing builds an OpenAI advertising strategy without knowing that IT is still on the incremental architecture path. IT transforms architecture without knowing that COO capacity planning is running on a traditional execution model. The COO evolves the leadership model without knowing what demands the new architecture will place on organizational tempo.

We have tracked this convergence problem across different industry contexts - the gap between "we have agentic features" and "we are an agentic organization" is where competitive advantage gets decided, and it almost always runs through the intersection of technology, operations, and market positioning rather than any single function.

What a Cross-Functional Scenario Looks Like

A cross-functional scenario is not a task force or a steering committee. It is a shared model of how the agentic transition plays out across functions - what signals indicate progress, what early indicators suggest misalignment, and what trade-offs the organization is making explicitly rather than by accident.

For a brand leader reading these three signals simultaneously, the useful question is not "what does our AI strategy look like?" It is: when our AI-informed customer experiences start to diverge from our operating model's capacity to support them, what breaks first? Marketing will blame the technology. Technology will blame operations. Operations will blame resource constraints. Nobody will have a model that describes the actual failure mode in advance.

The organizations that navigate this well are not the ones with the best AI technology. They are the ones that built a cross-functional scenario before they needed one - when the signals were still legible rather than urgent.

If you are mapping these dependencies for your organization, our analysis tools can help structure the problem before you are past the point where incremental adjustments still work.

The Non-Obvious Conclusion

The OpenAI Ads Manager test will not be the last signal that the attention infrastructure is shifting. But it is the clearest evidence yet that the shift is no longer hypothetical. What makes this week's signals worth paying attention to is not any single data point - it is the convergence. The AI provider layer is moving into advertising. The architecture layer is demanding transformation. The operational leadership layer is evolving the COO role. Three independent analyses pointing at the same structural reckoning.

Most enterprise leaders will read the OpenAI ads story in their industry newsletter and move on. They will read the McKinsey architecture piece in a quarterly briefing and put it in the "important but not urgent" category. They will note the COO evolution analysis and forward it to HR.

The organizations that connect these as a single signal - and build decision infrastructure around that connection rather than departmental responses to each piece - are the ones that will look prescient in 24 months.

The incremental path is comfortable. It will not be for much longer.


To pressure-test your organization's assumptions about the agentic transition, schedule a conversation with our research team. We track cross-domain signals specifically to help brand leaders get ahead of them before they become obvious.
