Executive summary. The bottleneck for AI is shifting from chips to electricity. In 2026, scaling compute is increasingly bounded by firm deliverable power, grid interconnection timelines, and heat removal. Three demand waves (AI data centres, industrial reshoring, and economy-wide electrification) are converging on the same constraint: grid capacity and the physical equipment that expands it. Scarcity rents flow upstream to grid equipment makers, transmission builders, firm generators, and thermal infrastructure. Meanwhile, power-exposed industrials and low-moat compute consumers face margin compression. You cannot compute what you cannot power.
For most of the 2010s, the global economy lived inside a comforting story: software scale meant falling marginal costs. “The cloud” sounded weightless, capital was cheap, and markets rewarded anything that looked asset-light. Meanwhile, the physical layer (data centres, transformers, substations, transmission lines, cooling plants) was treated as background infrastructure. Abundant. Boring. Someone else’s problem.
That story is breaking. Not because software stopped improving, but because the constraint moved. In 2026, the marginal cost of compute is increasingly bounded by physical reality: power availability, grid timelines, and thermals.
Three demand waves are colliding. First is the generative AI buildout: training clusters and inference fleets that push utilization and rack density to extremes. Second is industrial reshoring, which relocates energy-intensive production into regions already struggling to expand grids. Third is electrification: vehicles, heating, and industrial processes shifting from molecules to electrons. Different narratives, one shared dependency: more electricity, delivered reliably, in the right places, fast.
- $333.44/MW-day: PJM 2027/2028 Base Residual Auction, clearing at the price cap (Dec 2025); the annualized arithmetic is sketched just after this list
- $329.17/MW-day: PJM 2026/2027 auction, also clearing at the cap (Jul 2025)
- 128 weeks: average lead time for high-voltage power transformers in recent surveys
- 144 weeks: average lead time for generator step-up transformers (GSUs)
- 415 TWh to ~945 TWh: IEA base case for global data centre electricity use, 2024 to 2030
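To make the capacity prices concrete, here is a minimal back-of-the-envelope sketch in Python. The 500 MW campus size is a hypothetical assumption, and real PJM capacity obligations are based on peak-load contribution rather than nameplate IT load, so treat the result as order-of-magnitude only.

```python
# Back-of-the-envelope: what a $333.44/MW-day capacity price implies for a
# hypothetical data centre campus. Illustrative only; actual obligations are
# set by peak-load contribution, not nameplate load.

CAPACITY_PRICE_USD_PER_MW_DAY = 333.44   # PJM 2027/2028 BRA clearing price
CAMPUS_LOAD_MW = 500                     # assumed campus load (hypothetical)

annual_cost_per_mw = CAPACITY_PRICE_USD_PER_MW_DAY * 365
campus_annual_capacity_cost = annual_cost_per_mw * CAMPUS_LOAD_MW

print(f"Capacity cost per MW-year: ${annual_cost_per_mw:,.0f}")
print(f"Capacity cost for a {CAMPUS_LOAD_MW} MW campus: "
      f"${campus_annual_capacity_cost / 1e6:,.1f}M per year")
# ≈ $121,706 per MW-year, or roughly $61M/year for a 500 MW campus,
# before a single kWh of energy is purchased.
```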
The category error is thinking the energy transition is frictionless. Data centres don’t behave like flexible consumer loads. They are engineered for extremely high availability and tight service guarantees. That makes firm, dispatchable power uniquely valuable: electricity that shows up at 2 a.m. on a windless night, in a heat wave, when the grid is stressed.
Intermittent renewables can and will contribute meaningfully. But turning intermittent generation into “always-on” supply typically requires some combination of storage, overbuild, new transmission, and curtailment, each layer adding cost and friction. For many AI deployments, the all-in cost of “firming” power is the difference between attractive unit economics and marginal ones.
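The firming arithmetic can be made explicit with a small sketch. The function and every numeric input below are illustrative assumptions, not market quotes; the point is how overbuild, curtailment losses, storage, and new transmission each layer onto the headline LCOE.

```python
# Illustrative sketch of how "firming" layers stack on top of a cheap
# intermittent LCOE. All inputs are hypothetical placeholders; the shape of
# the arithmetic is the point, not the exact values.

def firmed_cost_per_mwh(base_lcoe, overbuild_factor, storage_adder,
                        transmission_adder, curtailment_loss):
    """All-in cost per delivered MWh of firmed supply.

    base_lcoe          : $/MWh of the intermittent resource as generated
    overbuild_factor   : MW built per MW of firm load served (>= 1.0)
    storage_adder      : $/MWh levelized cost of storage cycling
    transmission_adder : $/MWh for new wires / interconnection
    curtailment_loss   : fraction of generated energy curtailed (0-1)
    """
    generation_cost = base_lcoe * overbuild_factor / (1 - curtailment_loss)
    return generation_cost + storage_adder + transmission_adder

# Hypothetical inputs: cheap solar firmed into an always-on supply profile.
unfirmed = 35.0   # $/MWh, as generated
firmed = firmed_cost_per_mwh(base_lcoe=35.0, overbuild_factor=1.8,
                             storage_adder=45.0, transmission_adder=12.0,
                             curtailment_loss=0.15)

print(f"Unfirmed LCOE: ${unfirmed:.0f}/MWh")
print(f"Firmed, delivered cost: ${firmed:.0f}/MWh")
# With these placeholder inputs the firmed cost lands at roughly 3-4x the
# headline LCOE, which is the gap the essay calls the all-in cost of firming.
```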
Then comes the unglamorous bottleneck: you can’t software-update a substation. Grid expansion is slow because it is physical, regulated, and politically contested. Even when generation exists, it can be trapped behind transmission limits or local congestion. The result is a brutal mismatch: demand can scale in quarters; supply scales in years.
So hyperscalers are adapting. The next capex wave isn’t just GPUs and custom silicon. It’s energy procurement, siting, and control: long-dated PPAs with firm generators, co-location strategies near existing plants, and “behind-the-meter” structures that reduce dependence on public grid timelines. Power stops being a utility input and starts looking like premium real estate.
The second-order effects are where the real repricing happens.
First: unhedged electricity consumers get squeezed. Energy-intensive industries (aluminum, chemicals, certain manufacturing) depend on stable power costs. In constrained nodes, the arrival of huge, creditworthy buyers willing to pay up for firmness changes the clearing price for everyone. Firms without hedges, captive generation, or favorable long-term contracts can see margin structures break.
Second: a compute class divide emerges inside tech. Hyperscalers can secure power and pass costs through. Many Tier-2 providers and low-moat SaaS companies cannot. If compute is a rising share of COGS and end users resist price hikes, the outcome is plain: margin compression followed by multiple compression. The “AI wrapper” layer may discover it is effectively short electricity.
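A stylized sketch of that squeeze, with all figures assumed purely for illustration rather than drawn from any company's accounts:

```python
# Illustrative margin arithmetic for a hypothetical low-moat SaaS/AI product
# whose compute bill rises while pricing stays flat. All figures are
# assumptions for illustration, not company data.

revenue = 100.0     # index revenue to 100
other_cogs = 15.0   # non-compute cost of goods sold

for compute_share in (0.20, 0.30, 0.40):
    compute_cogs = revenue * compute_share
    gross_margin = (revenue - other_cogs - compute_cogs) / revenue
    print(f"Compute at {compute_share:.0%} of revenue -> "
          f"gross margin {gross_margin:.0%}")

# 65% -> 55% -> 45%: if the market then pays a lower multiple for the
# lower-margin business, the equity is hit twice (margin compression
# followed by multiple compression).
```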
Third: cooling becomes the silent limiter. This is the thermodynamics most people miss. AI-driven rack densities are approaching ~100 kW in advanced deployments, forcing a transition away from traditional air cooling toward liquid-based thermal systems. JLL expects average rack density in new facilities to triple to ~45 kW and forecasts ~80% liquid cooling adoption in new builds by 2026.
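The thermodynamics can be made concrete with standard heat-transfer arithmetic (Q = ṁ·cp·ΔT). The rack power and temperature rises below are illustrative assumptions; the contrast between the required air and water flow rates is the point.

```python
# Why ~100 kW racks push toward liquid cooling: the coolant flow needed to
# carry the heat away at a reasonable temperature rise. Rack power and
# temperature rises are assumptions for illustration.

RACK_POWER_W = 100_000  # ~100 kW rack, per the densities cited above

# Air: cp ≈ 1005 J/(kg·K), density ≈ 1.2 kg/m³, assume a 15 K temperature rise
air_mass_flow = RACK_POWER_W / (1005 * 15)           # kg/s
air_volume_flow_cfm = air_mass_flow / 1.2 * 2118.88  # m³/s -> CFM

# Water: cp ≈ 4186 J/(kg·K), density ≈ 1000 kg/m³, assume a 10 K rise
water_mass_flow = RACK_POWER_W / (4186 * 10)         # kg/s ≈ litres/s
water_flow_gpm = water_mass_flow * 15.85             # L/s -> US gallons/min

print(f"Air needed:   ~{air_volume_flow_cfm:,.0f} CFM per rack")
print(f"Water needed: ~{water_flow_gpm:.0f} GPM per rack")
# ≈ 11,700 CFM of air versus ≈ 38 GPM of water for the same 100 kW:
# moving that much air through a single rack is impractical, which is why
# high-density deployments shift to liquid cooling.
```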
All of this points to an investable logic: scarcity rents flow upstream to whoever controls firm supply and the bottlenecks that expand it.
The “Hard Power” long book is not generic “utilities and copper.” It is the physical choke points. Start with grid equipment. Add the engineering and construction firms that build transmission, substations, and interconnection capacity, plus the thermal-management suppliers that enable higher-density compute.
The short book sits on the wrong side of the equation: power-hungry commodity producers without hedges or captive generation in constrained grids; over-levered developers whose returns are fragile to capital costs, curtailment, and queue delays; and commoditized compute consumers with high cloud costs as a share of revenue and weak pricing power.
The demand floor isn’t purely AI hype. If AI spend moderates, electrification and reshoring still keep incremental load growth high in the same constrained nodes. Model innovation remains a key lever, and efficiency gains will help. But the binding constraint is deliverable power and the infrastructure around it. Permitting reform, transmission buildouts, and new nuclear could loosen the bottleneck, but not on software timelines. Over the next decade, outcomes are set by who can secure firm electrons and build the physical stack to use them: wires, transformers, substations, and cooling.
The coming decade’s defining constraint is brutally simple: you can’t compute what you can’t power.
This publication is provided solely for informational, educational, and general commentary purposes. It does not constitute, and should not be construed as, financial, investment, legal, accounting, engineering, or other professional advice. Nothing herein is a recommendation, solicitation, or offer to buy or sell any security, commodity, derivative, or financial instrument, or to engage in any investment strategy. Past performance is not indicative of future results. Any forward-looking statements are inherently uncertain and may differ materially from actual outcomes.
All views, opinions, analyses, and conclusions expressed herein are solely those of the author in their personal capacity and do not reflect the official policy, position, strategy, views, or opinions of the author’s employer (or any of its subsidiaries, affiliates, customers, suppliers, or partners). The author is not acting on behalf of, and is not authorized to speak for, any employer or related entity.
This publication is based exclusively on publicly available information and the author’s independent interpretation. No material non-public information (MNPI) has been used, disclosed, relied upon, or inferred in preparing this publication. Nothing herein should be interpreted as commentary on any current or future product plans, business strategies, financial performance, or confidential matters of the author’s employer or any other entity.
Readers are responsible for conducting their own independent research and for seeking advice from qualified professionals before making any decision. The author disclaims any liability for actions taken based on this publication.