8 min read · Hass Dhia

Axe Is Winning the Creator Era. McKinsey Says the Next Era Belongs to AI Agents.

agentic-ai · brand-strategy · behavioral-science · enterprise · mckinsey

Axe - the Unilever brand that spent 20 years being the punchline of "too much cologne" jokes - is having a genuinely interesting brand moment. According to Adweek, the brand has rebuilt cultural relevance through creator-led storytelling, product-rooted humor, and a social-first strategy aimed at Gen Z men. It's textbook brand thinking executed well: meet the audience where they are, earn attention rather than buy it, make the product feel culturally inevitable rather than commercially desperate.

The problem is that McKinsey published something this week that should make every brand strategist pause before holding up Axe as the model to replicate.

McKinsey's new analysis on securing the agentic enterprise is framed as a cybersecurity report. But what it actually documents is where buying decisions are going. Autonomous AI agents are entering enterprise workflows at scale - handling procurement research, vendor evaluation, and purchasing recommendations that previously required human judgment. The security framing is a proxy for something larger: the intermediary between your brand and the customer is changing, and the new intermediary doesn't watch TikTok.

What Axe Actually Mastered

Understanding why Axe's creator strategy works right now requires being precise about what it's actually doing. This isn't just a distribution play. It's a bet on identity-adjacency - attaching brand associations to creators whose audiences already have an established relationship with them built on trust.

This works because human purchasing is deeply social. Cialdini's principles of influence - social proof, authority, liking - operate through perception of trusted others. Creator content hijacks that mechanism elegantly. The viewer isn't processing an advertisement; they're processing a recommendation from someone whose judgment they've implicitly vouched for by choosing to follow them.

BehavioralEconomics.com's meta-analysis of 80 studies spanning 1982 to 2024 on influence effectiveness reveals something important: influence strategies are most effective when matched to the personality and cognitive frame of the target. Mismatched influence doesn't just fail - it actively backfires. Axe, through deliberate creator selection, is running a form of psychographic matching at scale. The brand finds creators whose audiences already carry the personality profile that responds to their message. It's sophisticated even when it looks casual.

This is the kind of structural pattern STI's research tracks systematically - the gap between what brands believe they're optimizing for and the mechanism actually driving their results.

The Agentic Layer That Changes the Equation

Harvard Business Review's analysis frames agentic AI as an organizational readiness question. The question being asked: how do enterprises restructure workflows when AI agents can autonomously complete tasks that previously required human judgment and decision-making?

The consumer-facing version of that question gets less attention but carries more urgency for brand strategists. Amazon's Rufus, Google's shopping agents, and a proliferating set of AI-powered purchasing assistants are already changing how consumers navigate purchase decisions in high-consideration categories. We've covered how Amazon Rufus is restructuring brand visibility in ways that bypass traditional brand architecture entirely. But that was early innings.

McKinsey's security report maps the next phase: enterprise agentic systems making autonomous procurement decisions. This isn't speculative - it's the logical extension of AI assistants integrated into enterprise resource planning, or AI copilots handling procurement research across vendor databases. When an AI agent handles vendor evaluation, the signals it uses to make that evaluation are fundamentally different from the signals a human uses.

A human evaluator uses social proof, authority signals, and identity resonance. They respond to who vouches for a brand, how familiar it feels, and whether it matches their professional identity. An AI agent uses structured data, review corpus analysis, pricing history, compliance records, and claim verifiability. Axe's creator-led strategy produces exactly none of the signals the second type of decision-maker processes.
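The contrast can be made concrete with a toy scoring sketch. Everything here is hypothetical - the field names, the weights, the vendor profiles - and no real procurement system works this simply, but it shows the structural point: a model built on machine-checkable signals has no input where social engagement could even register.

```python
# Toy illustration of an agentic vendor evaluator.
# All field names and weights are invented for this sketch; they are not
# drawn from any real procurement platform.

def score_vendor(vendor: dict) -> float:
    """Weighted score computed from structured, verifiable signals only."""
    weights = {
        "verified_claims_ratio": 0.30,   # share of product claims with citations
        "compliance_certs": 0.25,        # certifications present (0.0 or 1.0)
        "review_resolution_rate": 0.25,  # complaints with documented resolution
        "pricing_transparency": 0.20,    # public, structured pricing data
    }
    # Note what is absent: follower counts, engagement rates, creator reach.
    return sum(weights[k] * vendor.get(k, 0.0) for k in weights)

# A brand optimized for human social cognition vs. one optimized for
# machine evaluation (hypothetical profiles).
creator_led = {"verified_claims_ratio": 0.1, "compliance_certs": 0.0,
               "review_resolution_rate": 0.4, "pricing_transparency": 0.2}
data_backed = {"verified_claims_ratio": 0.9, "compliance_certs": 1.0,
               "review_resolution_rate": 0.8, "pricing_transparency": 1.0}

print(score_vendor(creator_led))
print(score_vendor(data_backed))
```

The creator-led profile scores low not because the agent penalizes social content, but because none of its strengths appear in the feature set at all.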

The Influence Mismatch Problem

Here's where the behavioral science finding gets unexpectedly relevant to brand strategy.

The BehavioralEconomics.com review found that mismatched influence strategies don't simply fail to persuade - they generate active resistance. They make the target more skeptical than they would have been without any influence attempt at all. This has historically been a human-to-human problem: applying urgency tactics to a deliberate personality type, or social proof to someone who prizes independent thinking.

The same logic applies to the human-versus-agent distinction.

If your entire brand influence architecture is optimized for human psychology, and you deploy it in a context where the effective evaluator is an AI procurement agent, you're not just missing the target. You may be actively degrading your brand's position in the evaluation.

What an AI Procurement Agent Actually Sees

Consider what an AI procurement agent encounters when it finds a brand that has invested heavily in creator-led social content: high social engagement metrics, variable and subjective review sentiment, limited structured product claims, and a content corpus built entirely around emotional resonance rather than verifiable outcomes.

That's not a trust signal for an algorithmic evaluator. In many evaluation frameworks, that profile is actually a flag - a signal that the brand is investing in perception management rather than substantiated claims.

We've written about the trust gap that's emerging between AI agent recommendations and traditional brand signals. The gap isn't just about which brands get recommended. It's about whether brands understand which evaluation game is being played in their category.

If you're making 3-5 year brand investment decisions and haven't mapped your signal architecture against both human and agentic evaluation criteria, our analysis tools can surface what the pitch decks won't show you.

What Brands Operating in Both Worlds Should Do

None of this means Axe is wrong. For consumer goods in 2026, where the dominant purchasing path still runs through human social cognition, creator-led storytelling remains highly effective. Axe is solving the right problem for its current context.

The question is about investment horizon and category structure.

For brands where enterprise or organizational procurement already touches the purchase path - B2B software, professional services, any category with institutional buyers - the agentic evaluation layer is not a 2028 concern. It's operational right now. The HBR piece frames "getting ready for agentic AI" as a forward-looking readiness question, but for many brand contexts, the accurate frame is already "how do we perform in an environment where AI is making or significantly shaping purchasing decisions today?"

The practical moves aren't complicated, but they require deliberate effort.

Build structured claim infrastructure. AI agents weight claims with verifiable backing and citations far more heavily than unsupported ones: case studies with specific measured outcomes, third-party audit trails, compliance certifications, pricing transparency. None of this makes compelling marketing content for human readers, but it's precisely what the emerging evaluation layer processes most favorably.
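One common way to make claims machine-readable is schema.org-style JSON-LD embedded in product pages. A rough sketch, with the caveat that while Product and Offer are real schema.org types, the claims field, the product name, and the evidence URL below are invented for illustration, not an established vocabulary:

```python
# Sketch: publishing a product's claims as structured, citable JSON-LD.
# "Product" and "Offer" are real schema.org types; the "claims" entries
# are a hypothetical extension shown to illustrate the idea of pairing
# each marketing claim with a verifiable evidence pointer.
import json

claim_record = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Deodorant Pro",  # hypothetical product
    "offers": {"@type": "Offer", "price": "7.99", "priceCurrency": "USD"},
    "claims": [
        {
            "statement": "48-hour odor protection in controlled trials",
            "evidence": "https://example.com/audits/2025-trial.pdf",  # placeholder URL
            "measured_outcome": "48h",
        }
    ],
}

# Serialize for embedding in a page's <script type="application/ld+json"> block.
print(json.dumps(claim_record, indent=2))
```

The design point is the pairing: every claim carries its own evidence pointer, so an agent can verify rather than merely parse sentiment.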

Map your review corpus deliberately. AI procurement tools weight review corpora from G2, Capterra, Trustpilot, and similar platforms heavily. The pattern of review sentiment - not the average score but the topics mentioned, complaint patterns, and resolution rates - gets parsed at a level of granularity human readers don't replicate. Brands that recognize this build systematic review response strategies rather than chasing aggregate score improvement.
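What topic-level parsing looks like can be sketched in a few lines, assuming a simplified review format. The topic keywords, field names, and sample reviews below are invented for illustration; real review-analysis pipelines use far richer NLP, but the shape of the output - complaint topics and resolution rate rather than an average star score - is the point.

```python
# Hedged sketch: parse a review corpus for complaint topics and
# vendor resolution rate instead of an aggregate rating.
from collections import Counter

# Hypothetical complaint vocabulary for a consumer-goods brand.
COMPLAINT_TOPICS = {"shipping", "billing", "support", "scent", "durability"}

def parse_reviews(reviews):
    """Return (complaint topic counts, resolution rate) from raw reviews.

    Each review is a dict like:
      {"text": str, "rating": int, "vendor_replied": bool}
    """
    complaints = Counter()
    negative = resolved = 0
    for r in reviews:
        if r["rating"] <= 2:  # treat 1-2 star reviews as complaints
            negative += 1
            words = set(r["text"].lower().split())
            complaints.update(words & COMPLAINT_TOPICS)
            if r.get("vendor_replied"):
                resolved += 1
    rate = resolved / negative if negative else 1.0
    return complaints, rate

reviews = [
    {"text": "billing error twice", "rating": 1, "vendor_replied": True},
    {"text": "great scent", "rating": 5, "vendor_replied": False},
    {"text": "slow shipping and poor support", "rating": 2, "vendor_replied": False},
]
topics, rate = parse_reviews(reviews)
print(topics.most_common(), rate)
```

Two brands with identical 4.2-star averages can look completely different through this lens, which is why systematic review response beats chasing the aggregate score.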

Build agent-readable content alongside human-targeted content. Agentic advertising is already a nascent discipline, and the brands that develop structured, machine-readable content infrastructure alongside their creator-targeted content will carry a meaningful advantage as procurement workflows continue to automate. This isn't about abandoning the creator model. It's about not letting it be the only model.

The Signal McKinsey's Security Report Sends to Brand Teams

McKinsey's framing of this as a cybersecurity opportunity is worth pausing on. The security risks they document - prompt injection attacks on AI agents, data exfiltration through autonomous workflows, manipulation of agentic decision-making - are the dark mirror of what brand strategists should be thinking about from the other direction.

If an AI procurement agent can be compromised through manipulated inputs, it can also be guided through well-structured, credibly-sourced content that's deliberately built for machine evaluation. The same underlying architecture that creates vulnerability for security teams creates opportunity for brands that understand how to operate within it.

That's not a recommendation to game AI systems. It's an observation that the brands that understand how agentic AI actually processes signals will have a structural advantage over brands that continue to optimize exclusively for human perception in a world that is visibly moving past that assumption.

Axe's brand leadership in the creator era is legitimate and earned. The social-first model is genuinely effective for its audience and moment. But the next iteration of brand investment strategy needs to account for evaluation environments that don't experience creator content, don't apply social proof the way human buyers do, and assess vendors through criteria that look nothing like a scroll through a content feed.

The brands that get this right won't be the ones issuing general warnings about AI disruption. They'll be the ones building two parallel signal systems simultaneously - one calibrated for human buyers, one for the AI agents increasingly influencing, shortlisting, or replacing them in certain categories.

Understanding which system matters more for your specific buying context is where the real strategy work starts. That analysis is what STI's research practice is built to deliver.
