By Hass Dhia · 8 min read

The Decision-Maker Substitution Problem: Why AmEx's Agentic Commerce Bet Turns Emotional Advertising Into a Stranded Asset

Tags: agentic commerce · emotional advertising · brand strategy · neuromarketing · American Express

There's a Hidden Assumption in Every Emotional Ad Campaign

The neuromarketing research on emotional advertising is about as settled as marketing science gets. Roger Dooley's summary of the evidence is worth reading in full, but the headline is consistent: ads that engage us emotionally outperform those built on rational argument, often by significant margins. This holds across categories, across markets, and across decades of replication. Executives who believe they're immune to emotional appeals are, the research reliably shows, simply wrong about themselves.

But here's the thing about that finding: it rests on an assumption so obvious that nobody mentions it. Somewhere in the purchase funnel, a human brain is making a decision. The emotional processing circuitry of that brain - the amygdala response, the somatic markers, the affective priming - is what advertisers have been engineering to for seventy-plus years. Remove the human brain from the decision, and the entire edifice collapses.

That's not a hypothetical anymore. American Express and Dentsu are both, right now, building infrastructure to do exactly that.

What AmEx and Dentsu Are Actually Building

Read the American Express announcement carefully, and it's easy to mistake it for a payments story. It isn't. AmEx is releasing a developer kit explicitly designed to make AI-driven commerce mainstream - the stated goal is enabling AI agents to complete purchases on behalf of cardholders. The agent authenticates, evaluates options, selects, and transacts. The human sets preferences upfront and reviews outcomes afterward, if they review them at all.

Dentsu's platform revamp is the supply-side equivalent. Dentsu is rebuilding its AI operating system to connect internal and external agents - specifically so that the advertiser's infrastructure can interface with the buyer's infrastructure, agent to agent. The explicit ambition is to "shoulder out the competition" by becoming the connective tissue of this emerging architecture.

Put these two together and you have the basic outline of an agentic commerce stack: on one side, an AI agent with purchasing authority and a cardholder's preferences; on the other side, a brand's agent with inventory, pricing logic, and optimization objectives. They negotiate. The human, in many transactions, is not present.

This is not science fiction. The infrastructure is being built now, by two of the largest players in their respective industries. The question isn't whether this will happen - it's how fast, and what it breaks on its way to scale.

The Stranded Asset Nobody Is Pricing

Here's a useful analogy. Of Dollars and Data recently examined whether AI has undermined the growth assumptions underlying Coast FIRE - the retirement strategy where you accumulate enough invested assets to let compound growth carry you to retirement without further contributions. The math of Coast FIRE is elegant: save aggressively early, reach a threshold, then "coast." The problem is that the threshold calculation depends on expected equity returns. If AI-driven productivity shifts compress the equity risk premium - if the future returns that made the math work turn out to be lower than historical models assumed - then a perfectly executed Coast FIRE portfolio is built on a foundation that's shifted beneath it. The money isn't gone. The model that said it was sufficient might be wrong.
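The threshold arithmetic can be made concrete. A minimal sketch, with purely illustrative numbers (the target, horizon, and return assumptions below are hypothetical, not drawn from the Of Dollars and Data analysis):

```python
def coast_fire_threshold(target: float, annual_return: float, years: int) -> float:
    """Amount you need invested today so that compound growth alone
    reaches `target` in `years`, with no further contributions."""
    return target / (1 + annual_return) ** years

# Illustrative: a $1.5M target, 30 years out, under two return assumptions.
at_7_pct = coast_fire_threshold(1_500_000, 0.07, 30)  # historical-style assumption
at_5_pct = coast_fire_threshold(1_500_000, 0.05, 30)  # compressed-returns assumption

print(round(at_7_pct), round(at_5_pct))
```

A two-point drop in assumed returns raises the required threshold by well over half. The portfolio hasn't shrunk; the model that said it was sufficient has.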

CMOs have the same problem, and it's worth naming precisely: emotional brand equity is a stranded asset in an agentic purchase funnel.

The value of emotional advertising isn't just the spot or the campaign - it's the accumulated neural equity those campaigns build. Brand familiarity, affective associations, trust signals encoded through repeated exposure. This equity is real, it shows up in price elasticity and loyalty metrics, and it took years and serious budget to build. But all of it was built for a specific type of decision-maker: a human one, with an emotional processing system that advertising has been engineered to activate.

An AI agent completing a purchase on behalf of a cardholder doesn't have a limbic system. It doesn't process the warmth of a brand story. It doesn't experience the reassurance of familiarity. It evaluates parameters: price, specifications, availability, loyalty point yield, historical reliability scores, perhaps a brand safety filter set by the cardholder. The "feeling" of the brand - which is the actual mechanism by which emotional advertising creates value - is simply absent from the decision architecture.

This is what makes the AmEx developer kit genuinely significant, and genuinely dangerous to brands that haven't thought it through. It's not a new payments interface. It's a mechanism for systematically removing the emotional processing layer from a category of purchase decisions.

The Scaling Problem That Makes This Worse

The HBR discussion on scaling technology for social good surfaces a pattern worth noting in a different context: the gap between a technology's early-stage performance and its behavior at scale is almost always larger than expected. Interventions that work beautifully in pilots fail at scale for structural reasons - coordination costs, context loss, incentive misalignment - that weren't visible when the system was small.

Agentic commerce will have the same scaling dynamics, but inverted. In the early stages, humans will stay engaged: they'll review agent recommendations, override them, maintain emotional brand preferences that shape how they configure the agent's parameters. At this stage, emotional advertising still reaches the decision-maker - just one step upstream, at the preference-setting stage rather than the point of purchase.

But as agent reliability improves and cognitive trust deepens, the human disengages. This is the pattern with every successful automation: we supervise it carefully until we trust it, then we stop supervising. At scale, the agent's decisions become autonomous in practice if not in architecture. The cardholder stops reviewing outcomes after the first three months. The emotional advertising still running on their social feed is no longer connected to any purchase decision they're making.

We've written before about how agentic AI is already reshaping the buying layer and why that shift isn't gradual - it's structural. The same logic applies here, but with a sharper edge: it's not just that the buying layer is changing, it's that the psychological mechanism that advertising has been optimized for is exiting that layer entirely.

What Actually Survives

None of this means emotional advertising stops working. For every purchase where a human is still present - considered decisions, high-stakes categories, experiences - the neuroscience holds. The brain hasn't changed. What's changed is the percentage of transactions where the brain is in the loop.

The more precise question is: what brand attributes survive the decision-maker substitution? And the answer is uncomfortably specific.

AI agents evaluate things that can be parameterized. Price. Category fit. Reliability signals - return rates, review consistency, product specification accuracy. Loyalty yield. Network-level signals like "this brand integrates cleanly with the agent ecosystem." These are the variables that will determine brand selection in agentic commerce, and most of them have nothing to do with emotional resonance.
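To make the point concrete, here is a toy sketch of what parameterized evaluation looks like. Nothing here is AmEx's actual API or any real agent's logic; every field name and weight is an assumption for illustration. What matters is what the scoring function contains - and what it doesn't:

```python
from dataclasses import dataclass

@dataclass
class BrandSignals:
    # All fields hypothetical: machine-readable signals an agent could parse.
    price: float               # offered price for the matching item
    return_rate: float         # fraction of orders returned (lower is better)
    review_consistency: float  # 0..1 agreement between reviews and spec claims
    loyalty_yield: float       # loyalty value earned per dollar spent
    ecosystem_certified: bool  # integrates cleanly with the agent ecosystem

def score(b: BrandSignals, budget: float) -> float:
    """Toy weighted scoring. Note what's absent: no term for emotional
    resonance, brand story, or familiarity - only parameterizable signals."""
    if b.price > budget or not b.ecosystem_certified:
        return 0.0  # hard constraints filter brands out before any scoring
    value = (budget - b.price) / budget              # price headroom
    reliability = (1 - b.return_rate) * b.review_consistency
    return 0.4 * value + 0.5 * reliability + 0.1 * b.loyalty_yield

incumbent = BrandSignals(price=90, return_rate=0.12, review_consistency=0.7,
                         loyalty_yield=0.02, ecosystem_certified=True)
challenger = BrandSignals(price=80, return_rate=0.03, review_consistency=0.9,
                          loyalty_yield=0.01, ecosystem_certified=True)
best = max([incumbent, challenger], key=lambda b: score(b, budget=100))
```

In this sketch the incumbent's decades of emotional equity contribute exactly nothing: the challenger wins on return rate and review consistency alone.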

This doesn't mean brand equity is worthless in an agentic world. It means that the type of brand equity that survives is operational, not emotional. Consistency. Trustworthiness expressed as low return rates and accurate descriptions. Pricing clarity. The things that make a brand a reliable execution partner for an agent optimizing on behalf of a human.

Amazon Rufus surfaced a version of this problem from the discovery side: when an AI intermediates product search, brands that have built emotional equity in advertising but weak operational signals in product data lose the recommendation. The same logic extends to agentic purchase completion. A brand whose emotional equity doesn't translate into machine-readable trustworthiness signals doesn't survive the substitution.

The irony is that the executives who've always been skeptical of emotional advertising - the ones who think they're above emotion - are accidentally right about one thing: in an agentic commerce environment, their instinct to demand operational proof over emotional resonance aligns better with how agent-mediated purchase decisions actually work. Not because humans aren't emotional (they are), but because the agent isn't.

The CMO's Actual Decision

Here's where the strategic question sharpens. If you're running brand strategy in a category where agentic commerce is plausible within five years - travel, financial products, recurring consumer goods, B2B procurement, insurance - then you have two kinds of brand investment to track: emotional equity (built for human decision-makers) and operational equity (built for agent evaluation).

Most organizations are optimizing heavily on the first and barely measuring the second. That's a Coast FIRE problem. The assumption underneath the investment is that future returns will compound as expected - but the mechanism that generates those returns (a human emotional response at the moment of decision) is being structurally altered by infrastructure that two of the world's largest companies are actively building right now.

The practical implication isn't to stop emotional advertising. It's to stop treating brand strategy as purely communication and start treating it as operational infrastructure. What does your brand look like to an agent? What signals does it emit that an AI evaluator can parse? How reliable, consistent, and parameterizable is your brand's behavioral track record - not your brand story, but your brand data?
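What "brand data, not brand story" might look like in practice: a hypothetical machine-readable record an agent could parse. Every field name below is illustrative - this is not schema.org, not AmEx's developer kit, not any real feed format:

```python
import json

# Hypothetical brand record: the operational signals an evaluating agent
# could actually consume. Field names are assumptions for illustration.
brand_record = {
    "brand": "ExampleCo",
    "spec_accuracy_rate": 0.97,    # descriptions matching delivered product
    "return_rate_90d": 0.04,
    "on_time_fulfillment": 0.99,
    "pricing": {"list": 120.0, "discount_policy": "published"},
    "agent_api": {"available": True, "version": "1.0"},
}

# Emitting it as JSON - the kind of artifact an agent can parse,
# in contrast to a brand story, which it cannot.
feed = json.dumps(brand_record, indent=2)
```

The exercise is less about the format than the inventory: if you cannot populate a record like this with credible numbers, your brand is illegible to the buyer's agent, whatever its emotional equity with the buyer.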

The category benchmark trap we've written about previously - where brands assume premium pricing holds because it always has - is part of the same vulnerability. Premium pricing survives if the agent is configured to weight quality signals. It disappears if the agent is optimizing on price or if the quality signals that justified the premium aren't machine-readable.

The Non-Obvious Conclusion

Emotional advertising works. The research is real, the neuroscience is solid, and for every purchase where a human is making an emotionally engaged decision, it will keep working. Nobody is questioning whether humans have limbic systems.

The question is simpler and more uncomfortable: what percentage of your purchase funnel will still have a human in the decision seat five years from now?

That number isn't fixed. It depends on your category, your customer's technical sophistication, and how quickly agentic commerce infrastructure like AmEx's developer kit achieves real adoption. But it's almost certainly declining. And it's the number that determines what your emotional advertising investment is actually worth.

The brands that get this right won't abandon emotional advertising. They'll treat it as one half of a two-part equation - the part that builds preference in the human configuration layer, where people set their agent parameters. And they'll build the other half: the operational brand signals that survive when the agent takes over from there.

The stranded asset risk is real. The Coast FIRE analogy is apt. You can have a perfectly executed strategy built on assumptions that have shifted beneath it. The question isn't whether the shift is happening. It's whether you've updated the model.

If you're building the data foundation to even answer that question, our research is a useful place to start.
