Hass Dhia · 8 min read

LLMs Give Brand Strategists 'Trendslop' While Shopify Puts Them Inside Your Checkout

AI strategy · brand marketing · agentic commerce · LLMs · decision intelligence

Harvard researchers recently ran an experiment that should make every brand strategist uncomfortable. They asked multiple LLMs to generate strategic recommendations across a range of business scenarios. What they got back was something they labeled "trendslop" -- advice that sounds like strategy but is essentially sophisticated trend-following. Confident, well-organized, comprehensive. And completely derivative.

The same week that study dropped, Digiday reported that Shopify is moving forward with embedding merchant products directly inside ChatGPT conversations. OpenAI retreated from its Instant Checkout ambition -- too much friction, brands pushed back -- but the core architecture survives: your products get discovered inside an AI conversation, the purchase completes on your own storefront.

These two stories belong together. They're describing opposite ends of the same question about what AI is actually good for.

What "Trendslop" Actually Is

The HBR research isn't saying LLMs are bad at language. It's saying they're bad at generating genuine strategic insight -- which, it turns out, requires something fundamentally different from predicting what words should follow other words.

LLMs are trained on what humans have already written. That training corpus is weighted toward whatever was published, which is weighted toward whatever was considered worth publishing, which is weighted toward what was already successful. The result is a model that has absorbed an enormous amount of strategic discourse and can reproduce it fluently -- but reproduction is not the same as insight.

When an LLM recommends that a brand "focus on authentic storytelling" or "leverage data-driven personalization," it's not reasoning from your specific situation to a recommendation. It's pattern-matching your situation to the largest cluster of similar situations in its training data and returning the center of mass. The advice is technically responsive to your question. It's also the same advice fifty other brands with vaguely similar situations would receive.
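The "center of mass" behavior can be made concrete with a toy sketch. This is not how any real LLM works internally; it is a deliberately crude illustration, with an invented corpus, of what pattern-matching to the densest cluster of similar situations produces: whichever advice was most common among the closest matches, regardless of your specifics.

```python
# Toy illustration (NOT a real LLM): pick advice by matching a situation
# to its nearest past cases, then returning the most common advice among
# them -- the "center of mass" of the training corpus.
from collections import Counter

# Hypothetical corpus of (situation keywords, published advice):
corpus = [
    ({"dtc", "apparel", "growth"}, "focus on authentic storytelling"),
    ({"dtc", "apparel", "churn"}, "focus on authentic storytelling"),
    ({"retail", "apparel", "growth"}, "leverage data-driven personalization"),
    ({"dtc", "beauty", "growth"}, "focus on authentic storytelling"),
]

def consensus_advice(situation: set[str]) -> str:
    # Score every past case by keyword overlap with this situation...
    scored = [(len(situation & kw), advice) for kw, advice in corpus]
    best = max(score for score, _ in scored)
    # ...then return the most frequent advice among the closest matches.
    # Nothing here reasons from the situation itself to a recommendation.
    nearest = [advice for score, advice in scored if score == best]
    return Counter(nearest).most_common(1)[0][0]

print(consensus_advice({"dtc", "apparel", "expansion"}))
# -> "focus on authentic storytelling", the same answer any brand with
#    vaguely similar keywords would receive.
```

The point of the sketch: the function is technically responsive to its input, yet the output depends only on which cluster of past cases the input lands nearest to.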

This connects to something Scott Kirby, United's CEO, observed about AI behavior that we explored earlier this year: AI is designed to tell you what you want to hear. Kirby wasn't making a philosophical point. He was describing a systematic bias. LLMs optimize for responses that seem appropriate given the context. Strategic advice that sounds familiar and coherent is more likely to seem appropriate than advice that challenges your assumptions. The result is an advisory system that regresses toward consensus -- and consensus is, by definition, not where competitive advantage lives.

Why This Pattern Is Hard to Detect

The frustrating part is that trendslop is often indistinguishable from good advice unless you already know what good advice looks like for your specific situation. The writing is fluent, the frameworks are real, the reasoning is coherent. What's missing is the part where someone with actual knowledge of your competitive context and market dynamics tells you something that contradicts what you thought you knew.

Real strategic insight tends to feel uncomfortable. It surfaces tradeoffs you'd prefer to ignore, identifies assumptions that need to be tested, points at competitors you hadn't taken seriously. Trendslop rarely does any of that, because trendslop is optimized for the appearance of insight, not the substance.

Meanwhile, AI Is Taking Over Your Checkout

None of this stops AI from being genuinely transformative at a different layer of the business.

Shopify's arrangement with OpenAI is revealing in the details. The original vision -- Instant Checkout, where users could complete purchases without leaving the ChatGPT interface -- died because merchants resisted losing the checkout relationship. The customer data, the post-purchase experience, the ability to recover abandoned carts: these matter too much to surrender to an intermediary, even a capable one.

What survived is an architecture that preserves those relationships while adding a new discovery surface. Products appear inside ChatGPT conversations. A user researching "best running shoes for plantar fasciitis" encounters merchant inventory, synthesized with AI's ability to match their specific stated need to product specifications. The purchase still happens on the merchant's storefront -- but the discovery happened inside an AI conversation.
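That discovery step can be sketched in a few lines. Everything below is an assumption for illustration: the product catalog, the feature names, and the idea of reducing a stated need to attributes are invented; the actual Shopify–ChatGPT integration is not public in this detail. What the sketch captures is the structural point: matching happens on the AI surface, but the purchase link still points at the merchant's own storefront.

```python
# Hypothetical sketch of the discovery step: a stated need, reduced to
# attributes, is matched against merchant product specs. Product names,
# features, and URLs are invented for illustration.

products = [
    {"name": "CloudStep Pro",
     "features": {"arch_support", "cushioned_heel"},
     "url": "https://store.example/cloudstep"},
    {"name": "TrailLite",
     "features": {"lightweight", "grip_sole"},
     "url": "https://store.example/traillite"},
]

def match_products(need_features: set[str]) -> list[dict]:
    # Rank products by how many of the stated needs their specs cover;
    # drop products that cover none of them.
    ranked = sorted(products,
                    key=lambda p: len(p["features"] & need_features),
                    reverse=True)
    return [p for p in ranked if p["features"] & need_features]

# "best running shoes for plantar fasciitis" -> arch support, cushioning
hits = match_products({"arch_support", "cushioned_heel"})
print(hits[0]["name"], "->", hits[0]["url"])
# -> CloudStep Pro -> https://store.example/cloudstep
```

Note what the merchant keeps: the URL, and therefore the checkout, the customer data, and the post-purchase relationship.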

This is a meaningful shift in how products get found. Not because it replaces search (it doesn't, yet), but because it inserts AI into a moment of genuine consideration, not just keyword matching. A user describing a problem in natural language is expressing intent with more specificity than a search query. That specificity has value.

The caution that retailers expressed at NRF 2026 -- reluctance to let AI make purchasing decisions on behalf of customers -- turns out to have been productively applied pressure. The merchants who pushed back on Instant Checkout were protecting something real: the relationship that begins at checkout and extends through the entire post-purchase experience. OpenAI's retreat isn't a failure. It's evidence that the market found the right boundary.

The Storefront as Moat

There's a pattern emerging in how AI integrates with commerce that mirrors what happened with social media. Brands initially resisted building presence on Facebook and Instagram, then overcorrected into dependence, then spent years trying to rebuild direct customer relationships they'd let atrophy.

The Shopify-ChatGPT arrangement suggests the market may have learned something. Discovery can happen anywhere. Relationship happens on your own infrastructure. This is the kind of structural distinction that looks obvious in hindsight and requires active resistance to maintain in the present, when a new platform is offering reach and the path of least resistance is full integration.

What M&S Got Right That AI Can't Replicate

While the AI strategy debate plays out in the abstract, Marks & Spencer's recent turnaround of its summer fashion division offers a concrete example of what intelligence actually driving decisions looks like.

M&S's summer woes were well documented: the business was making inventory decisions based on disconnected data sources that told different parts of the organization different things. Marketing saw one picture of customer demand. Buying teams saw another. Merchandising saw a third. The result was a business that couldn't get out of its own way, even when individual teams were making locally rational decisions.

The fix wasn't an AI advisor. It was what MarketingWeek describes as "one version of the truth" -- a deliberate effort to unify insight sources so that everyone making decisions was working from the same data. That sounds obvious. In practice, it requires organizational will, governance, and genuine commitment to letting data override instinct when they conflict.
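A minimal sketch of what "one version of the truth" means operationally, with invented field names and figures: per-team views of the same item are reconciled into a single shared record, and disagreement is surfaced as a problem to resolve rather than left for each team to paper over with its own number.

```python
# Minimal sketch, assuming invented figures: reconcile per-team views of
# the same SKU into one shared record, flagging conflicts explicitly.

views = {
    "marketing":     {"sku": "DRESS-01", "weekly_demand": 1200},
    "buying":        {"sku": "DRESS-01", "weekly_demand": 900},
    "merchandising": {"sku": "DRESS-01", "weekly_demand": 1150},
}

def unify(views: dict) -> dict:
    demands = {team: v["weekly_demand"] for team, v in views.items()}
    lo, hi = min(demands.values()), max(demands.values())
    return {
        "sku": next(iter(views.values()))["sku"],
        "demand_estimates": demands,
        # A spread above a threshold is a governance problem to fix,
        # not a gap each team quietly fills with its own estimate.
        "needs_reconciliation": (hi - lo) / hi > 0.10,
    }

record = unify(views)
print(record["needs_reconciliation"])  # True: a 25% spread across teams
```

The hard part, as the M&S example suggests, is not the merge logic. It's the organizational commitment to act on the flagged conflicts instead of reverting to each team's private numbers.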

The insight that turned M&S's summer performance around wasn't sophisticated. It was accurate and shared. The teams that needed to act on it trusted it because they'd participated in building it.

This is the kind of intelligence that actually changes decisions. Not a language model summarizing trends, but a systematic process of building shared, accurate understanding of what's actually happening with customers. This is the kind of pattern STI's research tracks systematically -- the operational structures that separate brands making good decisions from brands making confident-sounding ones.

The Distinction That Determines Whether AI Helps You

The thread connecting HBR's trendslop finding, Shopify's agentic commerce architecture, and M&S's insight-driven turnaround is a single distinction brands need to get clear on: AI as execution layer versus AI as strategic advisor.

At the execution layer, AI is genuinely powerful. Discovery and matching (Shopify + ChatGPT), personalization at scale, routing and logistics, synthesizing customer behavior into usable signals -- these are problems where AI's ability to process volume and pattern-match across large datasets creates real value. The work is optimization, and optimization is something AI does well.

At the strategic layer, AI fails in a specific way. It can tell you what most companies in vaguely similar situations have done. It cannot tell you what your company should do given the specific dynamics of your competitive situation, your customer relationships, and your actual capabilities. That requires the kind of proprietary knowledge that doesn't exist in training data -- which is most of what makes strategy valuable.

The Advisor Trap

The danger isn't that companies will use AI for execution and believe it's a strategic tool. The danger is subtler: that the fluency and confidence of AI-generated strategic output will make it difficult to distinguish from the real thing, especially in organizations that have lost the habit of testing strategic recommendations against first-principles reasoning.

A recommendation that "leverages authentic storytelling to build emotional connection with your core customer segment" is not obviously wrong. It may even be roughly right. But "roughly right" delivered fluently is often worse than "specifically right" delivered awkwardly, because fluency prevents the friction that leads to deeper examination.

The bypass economy dynamics that are reshaping how customers find and purchase products will reward brands that have genuine insight into why customers choose them -- not brands that have impressive-sounding frameworks produced by language models that have never encountered their actual competitive situation.

The Counterintuitive Implication

Here's the part that doesn't get said enough: as AI becomes more deeply embedded in commerce infrastructure, the premium on genuine human strategic intelligence increases.

If AI handles discovery (as Shopify + ChatGPT enables), personalization, and execution optimization, the remaining source of competitive differentiation is the strategic judgment that determines what to optimize toward. Brand positioning. Portfolio decisions. The specific customer needs worth serving and the specific ones worth ignoring. These decisions compound. They're also exactly what LLMs are worst at helping with.

The businesses that will look smart in five years won't be the ones that replaced strategic thinking with AI. They'll be the ones that used AI to execute -- relentlessly, efficiently, at scale -- the strategy that humans built from genuine insight about their markets. The M&S model isn't a relic. It's the competitive architecture.

If you're evaluating where your intelligence infrastructure actually sits on the execution-to-strategy spectrum, our analysis tools can help surface what the tool demos won't show you.

Want more insights like this?

Follow along for weekly analysis on brand strategy, market dynamics, and the patterns that separate signal from noise.


Or explore partnership opportunities with STI.
