The ITV Streaming Paradox Explains Why McKinsey's Agentic AI Trust Warning Hits Brands Hardest
ITV's 2025 full-year results contain a number that should be unsettling for anyone running a brand: digital advertising revenue grew 12%, but total advertising revenue fell 5%. ITVX, ITV's streaming platform, is working. Audiences are shifting. The new model is growing. And the company is still losing ground on advertising overall because the old model is collapsing faster than the new one can compensate.
This is what a trust transition looks like in the revenue line before anyone has named it a trust problem.
When Growth Can't Outrun Structural Decline
The streaming pivot was supposed to solve the equation. If audiences move from linear TV to ITVX, and ITVX can serve addressable, data-enriched ads, the math should eventually work. The problem is the word "eventually." Linear TV audiences are not declining at a steady rate; the decline is accelerating. Twelve percent ITVX growth doesn't cover the losses on traditional broadcast because the timing dynamics are asymmetric: the old model loses revenue faster than the new one scales.
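The asymmetry is easy to see with a back-of-the-envelope calculation. The sketch below uses a hypothetical revenue mix (digital at 30% of advertising revenue is an illustrative assumption, not ITV's disclosed figure) to back out how fast linear revenue must be falling for 12% digital growth to still produce a 5% total decline:

```python
def implied_linear_change(digital_share, digital_growth, total_change):
    """Given digital's share of revenue and its growth rate, back out
    the change the remaining (linear) revenue must have had to produce
    the observed total change. All values are fractions, e.g. 0.12 = 12%."""
    linear_share = 1.0 - digital_share
    # total_after = digital_share*(1+digital_growth) + linear_share*(1+linear_change)
    total_after = 1.0 + total_change
    digital_after = digital_share * (1.0 + digital_growth)
    return (total_after - digital_after) / linear_share - 1.0

# Hypothetical mix: digital at 30% of ad revenue, growing 12%,
# with total ad revenue down 5%.
print(f"{implied_linear_change(0.30, 0.12, -0.05):.1%}")  # prints "-12.3%"
```

Under those assumed shares, linear revenue would need to be falling roughly 12% a year, and the smaller the digital share, the less a given digital growth rate can offset. That is the structural shape of the problem: the declining base is still the larger base.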
McKinsey published a piece this week on "trust in the age of agents" that describes an identical structural dynamic in a different domain. As AI systems become more autonomous - booking travel, purchasing goods, making financial decisions on behalf of users - the trust infrastructure required to operate in that environment fundamentally changes. The risks aren't primarily technical. They're relational. Leaders need to "reckon with new risks" that emerge specifically because the AI is acting, not assisting.
The two stories describe the same underlying transition. A model built for human attention and human decision-making is being replaced by one that operates on different signals. And the organizations that treated the old model as permanent are now discovering the new one requires foundations they don't have.
What ITVX's Revenue Line Is Actually Measuring
The 12%-growth-but-5%-total-decline figure from MarketingWeek's coverage of ITV's annual results isn't just an advertising market story. It's a signal about timing. Organizations that are genuinely building toward the new model - and ITV is - can still show net losses if the structural shift accelerates faster than their transition does.
Brand leaders watching this number should ask: what is the analog in my business to "total advertising revenue," and is my 12% growth in the new model keeping pace with what I'm losing in the old one?
What the BBDO Problem-Proximity Framework Got Right Sixty Years Ago
Branding Strategy Insider examined BBDO's Four Point Process this week - a framework built around something called the Problem Detection Study. The core idea is straightforward: advertising agencies that maintained the closest proximity to actual customer problems consistently produced more effective work. Not assumed problems, not demographic-aggregate problems, but specific, articulated problems identified through direct research.
The framework sounds obvious until you examine how brand strategy actually operates. Most brands invest heavily in audience segmentation, media planning, reach, and frequency. Problem proximity requires a different investment: ongoing, direct research into what specific customers are attempting to do and where they are failing. Most brands commission research, build personas, and derive customer problems from demographic inference rather than direct observation. The BBDO approach required the reverse.
Why This Framework Is Suddenly Relevant Again
An AI agent making a purchase recommendation - whether for a consumer or on behalf of an enterprise buyer - is not influenced by a brand's media spend or historical share of voice. It processes available evidence: reviews, specifications, pricing data, reliability records, verified outcomes from comparable users. The signals that drove brand preference in a human-decision environment (familiarity, emotional resonance, aspirational imagery) do not translate cleanly into agentic selection criteria.
The brands that survive this transition won't be the ones with the most sophisticated AI governance frameworks. They'll be the ones with documented, verifiable proof that they solve real customer problems. Problem proximity isn't just good brand strategy. In an agentic purchasing environment, it's the only kind of brand equity that survives when an AI agent is the buyer.
This is the kind of pattern STI's research tracks systematically: how the signals that made brands trustworthy are changing as the infrastructure for trust-based decisions shifts from human attention to algorithmic evaluation.
Why New Frameworks Keep Appearing Instead of Solutions
BehavioralEconomics.com published a sharp piece this week on what the authors call the "Pioneer Effect" - the tendency of behavioral scientists to create new frameworks and name new effects even when existing ones would suffice. The result is a fragmented literature with overlapping constructs, duplicated research, and terminology inflation that makes the field harder to apply in practice.
The pattern appears in brand strategy too. Each AI wave produces a new category of corporate response: responsible AI commitments, ethical AI pledges, human-in-the-loop frameworks, AI governance charters. These accumulate alongside each other rather than consolidating into applied practice. The organizations producing the most thorough responsible AI documentation are not necessarily the ones building the most trustworthy AI products.
McKinsey's trust framework, for all its analytical rigor, risks the same trap. It is comprehensive and useful. It is also likely to spawn a wave of "trust strategies" that mimic the form of problem-detection work without involving any actual problem detection.
The Consistent Behavior That Still Underperforms
Kiplinger's analysis of the gender savings gap documents something structurally important: women are, on average, more consistent savers than men. They demonstrate the correct behavior by standard metrics. Yet long-term balances systematically lag because of structural factors - career interruptions, wage gaps, longer lifespans requiring more capital drawdown - that consistent saving behavior alone cannot overcome.
This maps directly onto the brand trust problem in agentic environments. Doing the right things - publishing transparent AI policies, maintaining human oversight, conducting regular audits - won't produce trustworthy outcomes if the structural dynamics work against you. A brand that spent a decade optimizing for reach and memorability has the wrong trust infrastructure for an environment where AI agents are making recommendations based on evidence. Consistent behavior in the wrong structural context still underperforms.
The Trust Gap That Predates the AI
McKinsey frames the agentic AI trust challenge as a new problem requiring new solutions. The framing is partially correct. The technical infrastructure for governing autonomous AI decisions is genuinely new. Assurance frameworks, audit trails for agentic action, and accountability structures for AI-initiated transactions are all real gaps that need to be built.
But the deeper trust gap - the one that determines whether a brand survives an agentic commerce transition - was created long before AI agents existed. It was created every time a brand chose reach over relevance, every time market research reported demographics instead of problems, every time a communications strategy optimized for memorability over credibility.
We've written about this from different angles: the gap between AI technical capability and consumer psychological readiness, and how brand signal collapse accelerates as AI systems gain access to deeper data infrastructure. What McKinsey's new analysis adds is the governance layer - the organizational machinery required to make trustworthy autonomous decisions at scale.
What Sequencing Gets Wrong for Most Organizations
Edward Jones' approach to agentic AI is instructive here. Their chief brand officer drew the automation boundary based on what the brand requires, not based on what the AI can technically do. AI handles internal workflow and content drafting; humans refine before anything reaches clients. The sequencing is brand requirements first, AI deployment second.
Most organizations are running this in reverse. They assess what the technology can do and then try to determine how far to go with it. The problem with that ordering is that it treats the trust question as a technology question. The technology question ("what can the AI do?") is tractable and improvable over time. The brand question ("what does this organization actually stand for in verifiable terms?") is not improvable on a short timeline. It requires years of accumulated evidence.
The Window Is Not Indefinitely Open
ITV's 12% digital growth and 5% total advertising revenue decline is worth returning to one more time. It shows that successful transitions - genuine investment in the new model, real audience migration, legitimate platform growth - can still produce net losses when the old model collapses faster than the new one scales.
Brands navigating the shift to agentic commerce face the same timing risk. Building problem proximity takes years. Documenting actual customer outcomes, earning verified third-party validation, building the kind of evidence infrastructure that influences AI agent recommendations - none of this is achievable on the timeline of a governance initiative or a responsible AI pledge.
McKinsey's trust framework describes what trustworthy agentic AI requires at the organizational level. It does not - and cannot - provide the underlying substance. The proximity to customer problems that makes trust claims verifiable was never something that could be retrofitted through policy. It either exists or it doesn't, accumulated over years of operational choices.
The ITV numbers suggest time is the critical variable. The old model is declining faster than most brand strategies account for, and the new one selects for different signals than the ones that worked before.
If you're evaluating where your brand sits on this transition, our analysis tools can help surface what the current signals say about your exposure.