HBR's 'Brain Fry' Research and the Availability Heuristic Problem It Didn't Name
Researchers studying AI adoption in the workplace found that heavy AI users reported a specific kind of exhaustion that didn't match typical cognitive fatigue. It wasn't the tiredness of hard thinking. It was the tiredness of constant filtering - of reading AI output, evaluating it, catching errors, and looping back for revisions. Harvard Business Review called it "brain fry."
The term is catchy, and the phenomenon is real. But calling it fatigue misses what's actually happening to decision quality during that process. The mental load of working with AI tools isn't just tiring - it's systematically altering which information feels salient and which disappears from view entirely.
That's not a productivity problem. That's a judgment problem.
The Availability Heuristic Gets a New Attack Surface
The availability heuristic is one of the most studied biases in behavioral economics: people estimate the likelihood of events based on how easily examples come to mind. Things that are vivid, recent, or frequently encountered feel more probable - even when base rates say otherwise.
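For readers who want the mechanism made concrete, here is a toy sketch - the numbers are invented for illustration, not drawn from any study cited here - of how recall-weighted estimation drifts away from a base rate:

```python
# Toy illustration of the availability heuristic: an estimate driven by how
# easily examples come to mind rather than by how often the event actually occurs.
# All numbers are invented for illustration.

base_rate = 0.0001  # actual frequency of the event (e.g., a rare accident)

# "Recalled" examples, each with a vividness weight: dramatic, recent coverage
# is far easier to retrieve than the thousands of uneventful cases.
recalled = [
    {"event": True, "vividness": 5.0},   # the documentary you just watched
    {"event": True, "vividness": 3.0},   # a widely shared news story
] + [{"event": False, "vividness": 1.0} for _ in range(25)]  # barely memorable non-events

# Availability-style estimate: frequency among what comes to mind,
# weighted by how easily each instance is retrieved.
weighted_hits = sum(x["vividness"] for x in recalled if x["event"])
weighted_total = sum(x["vividness"] for x in recalled)
availability_estimate = weighted_hits / weighted_total

print(f"Base rate:             {base_rate:.4%}")               # 0.0100%
print(f"Availability estimate: {availability_estimate:.4%}")   # ~24%, orders of magnitude too high
```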
For decades, managing this bias meant recognizing which information was being over-weighted. If you'd just watched a plane crash documentary, you'd need to consciously discount your inflated fear of flying.
AI tools have created something considerably harder to manage.
When you query an AI assistant, you're not just getting information - you're getting a filtered version of information, shaped by training data, retrieval mechanisms, and reinforcement learning from human feedback. The model surfaces what it was rewarded for surfacing. Which means the "easily recalled information" in your availability heuristic is no longer even your own memory. It's the model's curated output.
BehavioralEconomics.com's recent essay identifies a related evolution they call "UnAvailability Bias" - the tendency to treat absent information as proof of nonexistence. In an era when search engines and AI tools have trained us to expect comprehensive results, the absence of something in the output feels like evidence that the thing doesn't exist, rather than evidence that the tool didn't surface it.
This distinction matters enormously for anyone using AI to inform strategy.
The Gap Between "Not Found" and "Doesn't Exist"
Consider what happens when a brand team uses an AI tool to audit competitive positioning. The tool surfaces competitors A, B, and C. The team builds their strategy around differentiating from those three. But the tool missed competitor D - maybe because D launched recently, or because the model's training data skewed toward English-language sources, or because D's strongest signals are in channel data the model doesn't access.
Old availability heuristic problem: team members would over-weight competitors they'd seen mentioned recently.
New availability heuristic problem: the team treats the AI's output as a complete map. Competitor D's absence from the report becomes invisible. They don't even know to look for what they didn't see.
This is the kind of pattern STI's research tracks systematically - the gap between what AI tools confidently surface and what actually exists in a market or decision space.
Why Brain Fry Makes This Worse
The HBR researchers found that AI fatigue correlates with heavy iterative use - the back-and-forth of prompting, reading, catching errors, prompting again. This is exactly the workflow that degrades meta-cognitive vigilance.
When you're tired, you stop asking "what am I not seeing?" You start accepting outputs more readily because each evaluation cycle is expensive. The cognitive budget that should be spent interrogating completeness gets consumed by just keeping up with the volume of generated content.
This creates a compounding dynamic. The more you rely on AI to manage cognitive load, the less capacity you have to catch the systematic gaps in what AI surfaces. And the more your availability heuristic gets calibrated to what AI consistently shows you, the more natural its blind spots start to feel.
Yoobi's chief marketing officer Sarah Leinberger, speaking with Adweek, framed something useful about operating under uncertainty: moving forward with incomplete information isn't a failure of process - it's a core competency. The question is whether you know you're doing it.
That last question is the operative part. Organizations that use AI tools without building in explicit "what did we not see" checkpoints aren't moving forward with incomplete information courageously. They're moving forward with incomplete information while believing they have complete information.
The Retirement vs. College Fund Scenario
Financial decision-making offers a concrete illustration. Kiplinger's coverage of single-income family financial planning found significant regional variation in viability - what works in Tulsa doesn't work in San Jose, even at similar household income levels.
A couple with $1.8 million in savings at 54 debating whether to fund a grandchild's college education or prioritize retirement isn't just solving a math problem. They're navigating availability-driven intuitions about risk, obligation, and identity - and increasingly, they're bringing AI tools into those conversations.
If the AI they consult consistently surfaces retirement optimization content (because that's what gets clicked, what gets trained on, what gets reinforced), the college-funding option will feel less viable than the underlying numbers justify. Not because of bad advice - because of structural information asymmetry that neither party recognizes.
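To see how that asymmetry can emerge from the mechanics of surfacing alone, consider a deliberately simplified sketch. The relevance and engagement numbers below are invented, and real ranking systems are far more complex than a single weighted blend:

```python
# Deliberately simplified sketch of structural information asymmetry in a
# ranking step. Relevance and engagement scores are invented for illustration.

documents = [
    {"topic": "retirement optimization",    "relevance": 0.80, "engagement": 0.90},
    {"topic": "retirement optimization",    "relevance": 0.75, "engagement": 0.85},
    {"topic": "college funding trade-offs", "relevance": 0.80, "engagement": 0.30},
    {"topic": "college funding trade-offs", "relevance": 0.78, "engagement": 0.25},
]

def ranking_score(doc, engagement_weight=0.6):
    """Blend relevance with historical engagement - the kind of objective that
    emerges when a system is tuned on what users have clicked in the past."""
    return (1 - engagement_weight) * doc["relevance"] + engagement_weight * doc["engagement"]

for doc in sorted(documents, key=ranking_score, reverse=True)[:2]:
    print(doc["topic"], round(ranking_score(doc), 3))

# Both retirement documents outrank both college-funding documents, even though
# the relevance scores are nearly identical. Nothing here is bad advice - the
# asymmetry is in what gets surfaced at all.
```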
The same dynamic plays out in corporate strategy contexts at much larger scale. A brand evaluating partnership opportunities, a retailer assessing market expansion, a financial services firm modeling consumer segments - all of them are increasingly running these evaluations through AI systems trained on historical data, optimized for certain types of outputs, and unable to flag their own gaps.
What Calibrated AI Use Actually Looks Like
The answer isn't to stop using AI tools, and it isn't to demand they become omniscient. The answer is to treat AI outputs the way experienced analysts treat any data source: with explicit accounting for coverage limits.
That doesn't mean rebuilding manual research pipelines. It means building the kind of meta-cognitive layer that good analysts have always maintained - the habit of asking not just "what does the data say" but "what kind of data is this, what would it miss, and what would have to be true for the picture to be significantly different?"
A few practices that change the dynamic:
Run the counterfactual query. After any AI-assisted analysis, deliberately ask: "What would have to be true for this output to be significantly incomplete?" Then query for those conditions directly. If your competitive audit came back with three players, ask the tool to describe scenarios where a fourth player exists that it might not have surfaced. (A rough sketch of this check in code follows these practices.)
Track what the AI consistently doesn't surface. Over time, patterns in absence are as informative as patterns in presence. If your AI tools never surface small regional players, or never flag regulatory constraints from non-US markets, or consistently underweight distribution channel dynamics - that's a systematic gap, not a one-time miss.
Separate generation from evaluation. The brain fry problem is worst when people use AI for both generating information and evaluating it. The cognitive load compounds because you're relying on the same system to produce content and tell you if the content is good. This is also what makes the availability heuristic problem self-reinforcing - you're calibrating your judgment against the AI's output using the AI's output as reference. Build in evaluation steps that draw on sources outside the AI's output stream.
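None of this needs heavy tooling. Below is one rough sketch of how the first two practices might look in code. query_model is a hypothetical stand-in for whatever model client you actually use, and every prompt, function name, and file format here is an assumption, not a prescribed implementation:

```python
# Sketch of the first two practices: the counterfactual completeness check and
# tracking what never gets surfaced. `query_model` is a hypothetical stand-in
# for whatever model client you actually use.
import json
from collections import Counter

def query_model(prompt: str) -> str:
    """Placeholder for your actual model call (replace with your own client)."""
    raise NotImplementedError

def counterfactual_check(original_prompt: str, output: str) -> str:
    """After an AI-assisted analysis, ask directly about the conditions under
    which the output would be significantly incomplete."""
    followup = (
        f"Earlier analysis request:\n{original_prompt}\n\n"
        f"Output produced:\n{output}\n\n"
        "Describe the most plausible scenarios in which this output is "
        "significantly incomplete - for example, a competitor, constraint, or "
        "data source it would not have surfaced, and why."
    )
    return query_model(followup)

def record_surfaced(log_path: str, run_id: str, entities: list[str]) -> None:
    """Append what this run surfaced, so absences can be audited over time."""
    with open(log_path, "a") as f:
        f.write(json.dumps({"run": run_id, "surfaced": entities}) + "\n")

def never_surfaced(log_path: str, watchlist: list[str]) -> list[str]:
    """Which entries from an independently maintained watchlist have never
    appeared in any logged run? Patterns in absence are the signal here."""
    seen = Counter()
    with open(log_path) as f:
        for line in f:
            seen.update(json.loads(line)["surfaced"])
    return [e for e in watchlist if seen[e] == 0]
```

The point isn't these specific functions. It's that the completeness check and the absence log live outside the model's own output stream, so the evaluation step draws on something the generator didn't produce.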
If you're evaluating partnerships, market entry, or competitive positioning against these criteria, our analysis tools can help surface what the AI-generated summaries won't.
The Harder Question
The brain fry framing locates the problem in individual cognitive capacity. That's accurate but limited. The harder question is structural: how do organizations build decision infrastructure that accounts for the systematic distortions introduced by AI-mediated information access?
This isn't new. Organizations have always had to manage information filters - the bias of consultants who pattern-match to prior engagements, the selection bias of industry reports, the recency bias of market research conducted after a trend has already peaked. AI tools introduce a new filter with its own characteristic distortions.
As we've explored in looking at how AI tools shape confidence in ways users don't recognize, the most expensive errors aren't the ones that look obviously wrong. They're the ones that feel certain.
And as the anecdote from United's CEO about ChatGPT illustrated, the tool didn't lie - it confirmed whatever direction the questioning assumed. That's not a hallucination problem. That's an availability heuristic problem with an AI-shaped attack surface.
The organizations that build around this - that treat systematic information gaps as an infrastructure question rather than a user training question - will make meaningfully better decisions than those that focus only on managing fatigue. Brain fry is real. But it's pointing at the wrong level of the problem.
The more interesting version of this research would track not just cognitive load, but decision quality over time in AI-heavy organizations versus those with explicit calibration practices built in. If you're thinking about how that kind of analysis applies to your own decision infrastructure, the conversation starts here.