8 min read · Hass Dhia

McKinsey's $7 Trillion Data Center Opportunity Is Really About Industrial Power, Not Hyperscalers

data-center · industrial-strategy · energy-costs · b2b-strategy · AI-infrastructure

The $7 trillion number is doing a lot of rhetorical work right now.

Every analysis of the AI infrastructure build-out leads with it -- hyperscalers committed, data centers announced, GPU demand spiking. The figure is so large it reads as inevitability. Something already decided, playing out at a scale that makes individual positioning decisions feel like noise.

But McKinsey's recent analysis of the data center build-out buries the most actionable finding in its second paragraph: "Incumbents can't meet demand for power and thermal equipment, creating room for new codesign entrants."

That is the sentence most strategists skip. They shouldn't.

The Binding Constraint, Not the Headline Number

The gold rush analogy gets applied lazily to every technology cycle, but it holds for a specific reason: in infrastructure booms, returns don't primarily flow to the headline players. They flow to whoever controls the binding constraint.

In 1849 California, that was picks, shovels, and denim -- not gold claims. In the internet infrastructure build-out of the late 1990s, it was routing hardware and optical fiber -- Cisco, Corning, and JDS Uniphase captured extraordinary value while dot-coms burned through it. In the shale oil boom, it was industrial sand -- fracking proppant became one of the most profitable segments in the supply chain while many E&P operators never found lasting margins.

In this data center build-out, McKinsey's analysis points to the same dynamic. The binding constraint is power delivery and thermal management. The companies that have traditionally supplied this equipment -- cooling systems, power distribution units, thermal regulation infrastructure -- were designed for traditional data center footprints. They cannot scale fast enough to meet what $7 trillion in capital deployment is about to demand.

The practical translation: a hyperscaler can secure the land, sign the financing, and order the GPUs, then wait 12 to 18 months for the power infrastructure to catch up. That gap is where codesign entrants have an opening. Companies willing to design around hyperscaler specifications from the start -- not retrofitting legacy industrial designs -- can move into a vacuum that incumbent thermal and power suppliers cannot immediately fill.

The incumbents in this space are well-known industrial names: Vertiv in power management, Eaton in power distribution, Schneider Electric in thermal and power systems. These are not small companies. But their product development cycles and manufacturing footprints were calibrated for a world where data centers consumed megawatts, not gigawatts. The speed of the AI build-out has outrun their capacity to respond.

This is the kind of constraint-layer analysis that disappears when strategy teams focus on the headline number rather than the supply chain behind it. It's the pattern STI's research tracks systematically -- not who is winning the visible race, but where the actual bottlenecks and value pools are forming beneath the surface.

Energy Has Moved From Operations to Strategy

The same week McKinsey published its data center analysis, Harvard Business Review published a piece on why executives need to treat energy costs as a board-level strategic variable, not an operating line item.

The convergence is not accidental.

For most of the industrial era, energy was a predictable input: you bought power at market rates, budgeted a steady percentage of COGS, and optimized when prices spiked. The strategic levers were limited -- a hedge instrument here, some efficiency capex there. It rarely rose to the C-suite agenda unless you were a utility, an aluminum smelter, or a commodity producer with thin margins.

Data centers running modern GPU infrastructure break that model entirely. A single NVIDIA H100 GPU draws roughly 700 watts under full load. A standard 8-GPU server draws 5.6 kilowatts from the GPUs alone, before counting CPUs, memory, and networking. A hyperscaler-class cluster for a major training run can draw as much power as a small city. That is not incrementally more demanding than traditional servers. It is a categorically different relationship with power infrastructure.
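A rough back-of-envelope makes the scale jump concrete. The 700-watt figure is from above; the GPUs-per-server count and the cluster size are illustrative assumptions, not vendor specifications:

```python
# Back-of-envelope power math for the figures above. The 700 W per H100 is
# from the post; server count and cluster size are illustrative assumptions.

GPU_WATTS = 700          # approximate H100 draw under full load
GPUS_PER_SERVER = 8      # typical 8-GPU server (assumption)

server_kw = GPU_WATTS * GPUS_PER_SERVER / 1000
print(f"GPU draw per 8-GPU server: {server_kw:.1f} kW")   # 5.6 kW, GPUs only

# Scale up to a hypothetical training cluster of 25,000 GPUs (assumed size).
CLUSTER_GPUS = 25_000
cluster_mw = CLUSTER_GPUS * GPU_WATTS / 1_000_000
print(f"GPU draw for a {CLUSTER_GPUS:,}-GPU cluster: {cluster_mw:.1f} MW")
# ~17.5 MW before CPUs, networking, and cooling overhead -- already in the
# range of a small city's load once that overhead is added.
```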

HBR's argument is that leaders treating energy as a fixed cost in their AI buildout are systematically mispricing their competitive exposure. If your cost to run inference is 40% higher than a competitor who locked in long-term power purchase agreements with renewable generators two years ago, that's not a rounding error in the P&L. It's a structural disadvantage embedded in every call your production systems make, every customer request served, every model output generated.
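A minimal sketch of that sensitivity, using illustrative numbers for per-query energy, traffic volume, and power prices -- none of these figures come from HBR or McKinsey; only the 40% price gap mirrors the example above:

```python
# Illustrative sensitivity of inference energy cost to electricity price.
# Every number below is an assumption for the sketch, not a measurement.

WH_PER_QUERY = 3.0            # assumed energy per inference request, watt-hours
QUERIES_PER_DAY = 50_000_000  # assumed production traffic

def daily_energy_cost(price_per_kwh: float) -> float:
    """Energy bill per day at a given power price, for the assumed traffic."""
    kwh_per_day = WH_PER_QUERY * QUERIES_PER_DAY / 1000
    return kwh_per_day * price_per_kwh

locked_in = daily_energy_cost(0.05)   # long-term PPA rate (assumed)
spot      = daily_energy_cost(0.07)   # market rate 40% higher (assumed)

print(f"PPA buyer:  ${locked_in:,.0f}/day")
print(f"Spot buyer: ${spot:,.0f}/day")
print(f"Gap: ${spot - locked_in:,.0f}/day, "
      f"~${(spot - locked_in) * 365 / 1e6:.1f}M/year")
```

The absolute numbers matter less than the structure: the gap scales with every unit of traffic growth, which is the sense in which it is embedded in every request served.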

The hyperscalers understood this earlier than the coverage suggests. Microsoft, Google, and Amazon have been signing large-scale power purchase agreements at rates that will look prescient within five years. That is capital allocation happening quietly, outside the GPU headline cycle, that will determine cost structures for the next decade.

For companies that are not hyperscalers but still depend on AI-driven compute as a competitive input -- and that list is growing fast -- the energy equation is now a strategic question, not an operational one. If you're evaluating your organization's exposure to these infrastructure dynamics, our analysis tools can help surface where structural cost gaps are forming in your competitive landscape before they show up in quarterly results.

What the Shoptalk Anxiety Is Actually About

At Shoptalk this week, Adweek reported a persistent undercurrent of anxiety running through commerce strategy conversations. Home Depot, Stratacache, Google, and Meta were all grappling with similar questions: as AI reshapes discovery, evaluation, and purchase behavior, what remains durable about brand and distribution advantage?

The CAC anxiety threading through those conversations has a specific structure. It's partly about whether AI assistants will disintermediate the purchase funnel -- a real concern. But it's also about whether the cost economics of digital marketing can survive the compute costs of the AI systems that now power it.

Ad auction systems, recommendation engines, and personalization layers are GPU-hungry. The companies running them -- Google, Meta, Amazon -- are sitting directly inside the energy cost problem McKinsey identified. They're not just hyperscalers selling compute to external customers. They're running AI infrastructure to operate their own advertising products. Every performance marketing dollar a brand spends on those platforms is, in part, a wager on those platforms' energy cost structures holding.

If Google's or Meta's energy costs rise materially because the thermal and power infrastructure can't scale as fast as GPU demand, that cost eventually migrates into the CPM. It's a second-order effect, but it runs in only one direction: higher.

This connects to a pattern we've covered before: when the infrastructure layer becomes more strategically important than the interface on top, organizations that understand the underlying constraints make better long-term bets than those optimizing only the visible layer.

Why Very Large Numbers Defeat Ordinary Skepticism

There's a behavioral science dimension worth examining here.

Roger Dooley's neuromarketing research documents a well-replicated finding: the human brain has a surprisingly accurate built-in detector for prices that don't fit. When a luxury watch is priced at $19.99 or a pack of gum costs $20, the brain flags the mismatch automatically -- not through deliberate analysis, but through pattern recognition built from repeated exposure to category norms.

That mechanism breaks down at very large numbers. The brain's calibration was built for scales we encounter regularly. Trillion-dollar figures register as abstract rather than concrete. The gut-check that works reliably at $99 versus $999 doesn't engage at $7 trillion versus $700 billion.

This has a direct implication for strategy around the data center build-out. Executives and investors presented with a "$7 trillion" headline are not running the same skepticism circuits they'd apply to a $70 million capital request. The number is too large to feel real. Processing shifts to pattern matching -- who else is investing, what's the consensus narrative, is this the next cloud cycle -- rather than first-principles constraint analysis.

That's exactly when the McKinsey observation about power and thermal equipment capacity matters most. The codesign opportunity isn't visible at headline scale. It requires the kind of constraint analysis that only becomes available when you stop anchoring on the total figure and start asking what physical systems have to be in place for that capital to actually deploy.

Where the Real Strategic Exposure Sits

The non-obvious conclusion from synthesizing these signals is that energy is about to create a new layer of strategic stratification across industries -- one that has nothing to do with which AI model a company is using or how good its product team is.

Three categories are forming:

Upstream producers are companies that generate, store, or deliver power to data centers -- utilities, nuclear operators, battery manufacturers, transmission infrastructure owners. These are the clearest long-term beneficiaries of the build-out, but they require patient capital and long time horizons to realize that value.

Codesign entrants are the industrial companies McKinsey is pointing toward -- power distribution, thermal management, advanced cooling systems designed from the ground up around hyperscaler specifications. The window here may be shorter than it appears. Once incumbent suppliers like Vertiv, Eaton, and Schneider retool, the advantage narrows. Companies that move in the next 12 to 24 months have a structural head start that may be difficult to close later.

Downstream exposed are companies competing in markets where AI-enabled competitors have a structural energy cost advantage. Retailers are a clear example. A brand competing against Amazon's recommendation engine is not just competing against better software. It's competing against a compute infrastructure whose cost structure was locked in through years of energy procurement decisions that most retailers were not making.

This is the frame that disappears in the $7 trillion coverage cycle. Every strategist is asking whether they should be in data centers. The more useful question is: where does your organization sit in the energy cost equation? Upstream, at the codesign layer, or competing downstream against organizations that solved the energy problem years ago?

The companies that get this right won't necessarily be the ones with the best models or the most capable teams. They'll be the ones who treated power infrastructure as a strategic input while the rest of the industry was still treating it as a utility bill.

If you're mapping where your organization fits in these dynamics, our research covers the structural patterns underlying this shift -- not just who is building what, but where the constraint layers are and when they become competitively decisive.

Want more insights like this?

Follow along for weekly analysis on brand strategy, market dynamics, and the patterns that separate signal from noise.


Or explore partnership opportunities with STI.
