Executive summary. The First Law of Thermodynamics dictates that energy can change form but cannot be destroyed. Every watt of power consumed by a GPU is converted into heat that must be rejected. As Parts I and II established, electrons are now the bottleneck; Part III completes the equation by showing that every electron ultimately becomes heat, and that heat rejection is brutally bounded by water and the periodic table. The generative AI supercycle has pushed rack densities past the physical limits of air cooling, forcing a non-negotiable pivot to liquid systems. This transition solves the thermal bottleneck but creates two hard physical constraints: massive water demand in geographically stressed regions, and an explosive reliance on base metals. AI-ready campuses demand a step-function increase in copper intensity over traditional facilities to build out complex liquid-cooling manifolds, cold plates, and high-amperage busbars. Scarcity rents are now flowing downstream from power generators to water infrastructure owners, advanced thermal tech manufacturers, and Tier-1 base metal miners. The short book holds legacy air-cooled facilities, drought-zone buildouts, and developers operating under the delusion of infinite materials.

For the past two years, the infrastructure narrative has been singular: secure the power. We established that the grid is maxed out (Part I) and that hyperscaler capital is pivoting toward nuclear fission to guarantee firm generation (Part II). But generating the electron is only half the physics equation. Once the electron does its work inside the silicon, it becomes heat.

The category error of the current AI cycle is treating heat rejection as a frictionless, infinitely scalable background process. It is not. The shift from low-density CPUs to multi-kilowatt AI accelerators has fundamentally broken traditional data center thermodynamics. You cannot cool a 100 kW rack with moving air. Air is a thermal insulator; liquid is a thermal conductor.
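A back-of-envelope heat-balance sketch makes the asymmetry concrete. Using the standard relation Q = ṁ·c_p·ΔT with illustrative assumptions (a 15 K allowable coolant temperature rise, textbook properties for air and water), the volumetric flow required to carry 100 kW away by air is thousands of times larger than by water:

```python
# Back-of-envelope: coolant flow needed to remove 100 kW of rack heat.
# Q = m_dot * c_p * delta_T  =>  m_dot = Q / (c_p * delta_T)
# All values are illustrative assumptions, not a design specification.

Q = 100_000.0        # rack heat load, watts
DELTA_T = 15.0       # allowable coolant temperature rise, kelvin

CP_AIR, RHO_AIR = 1005.0, 1.2          # J/(kg·K), kg/m^3 (standard air)
CP_WATER, RHO_WATER = 4186.0, 1000.0   # J/(kg·K), kg/m^3

def volume_flow_m3_per_s(q_watts, cp, rho, dt):
    """Volumetric flow required to carry q_watts at a temperature rise dt."""
    mass_flow = q_watts / (cp * dt)   # kg/s
    return mass_flow / rho            # m^3/s

air_flow = volume_flow_m3_per_s(Q, CP_AIR, RHO_AIR, DELTA_T)
water_flow = volume_flow_m3_per_s(Q, CP_WATER, RHO_WATER, DELTA_T)

print(f"Air:   {air_flow:.2f} m^3/s (~{air_flow * 2118.88:,.0f} CFM)")
print(f"Water: {water_flow * 1000:.2f} L/s")
print(f"Volume ratio (air/water): {air_flow / water_flow:,.0f}x")
```

Roughly 5.5 cubic meters of air per second versus about 1.6 liters of water per second for the same rack: moving that much air through a single cabinet is mechanically impractical, which is the physical core of the liquid-cooling pivot.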

Here is the ultimate duration mismatch of the AI supercycle: 18 months to build a data center, 5 years to build an SMR, 10 to 15 years to permit and open a Tier-1 copper mine, and, increasingly, years to secure ironclad water rights in stressed basins. Software moves in quarters. Physics does not.

The pivot to direct-to-chip liquid cooling and two-phase immersion is mandatory. But this thermodynamic shift has triggered a vicious collision with the periodic table and local hydrology. Water Usage Effectiveness (WUE) is quietly replacing Power Usage Effectiveness (PUE) as the most scrutinized metric in the data center stack.
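For readers new to the metrics: PUE is total facility energy divided by IT equipment energy (dimensionless, ideal is 1.0), while WUE is site water consumed divided by IT equipment energy, in liters per kWh. A minimal sketch with hypothetical facility numbers:

```python
# Illustrative PUE/WUE arithmetic. The facility figures below are
# hypothetical, chosen only to show the unit conventions.
# PUE = total facility energy / IT equipment energy   (>= 1.0)
# WUE = site water consumed (liters) / IT equipment energy (kWh)

it_energy_kwh = 100_000_000.0      # annual IT energy, kWh (hypothetical)
total_energy_kwh = 120_000_000.0   # annual facility energy incl. cooling
water_liters = 180_000_000.0       # annual site water consumption, liters

pue = total_energy_kwh / it_energy_kwh
wue = water_liters / it_energy_kwh

print(f"PUE: {pue:.2f}")
print(f"WUE: {wue:.2f} L/kWh")
```

The point of the metric shift: a facility can post an excellent PUE precisely because it leans on evaporative cooling, which drives its WUE up. The two metrics trade off against each other.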

Key numbers
  • ~45 kW average rack density in new facilities, tripling prior baselines, per the JLL 2026 Outlook
  • ~80% liquid cooling adoption expected in new builds by 2026, per the JLL 2026 Outlook
  • Zero-water cooling design target for Microsoft's next-generation data centers, reflecting an industry push toward closed-loop architectures
  • 68 billion gallons projected U.S. data center water consumption by 2028, roughly 4x the ~17 billion gallons of 2023
  • 129% projected rise in global AI water demand by 2050, exceeding 54 km³ annually
  • Step-function increase in copper intensity required for AI-ready campuses versus traditional builds due to advanced liquid cooling and higher power loads
  • ~500,000 tonnes potential annual copper demand added by data centers alone by 2030
  • 10 to 15 years average lead time to permit and open a Tier-1 copper mine

1. The Water Bottleneck: The New Interconnection Queue. Water rights are fast becoming as critical, and as scarce, as grid interconnection rights. The industry is actively pushing back against this vulnerability with closed-loop and low-evaporation designs. Microsoft recently announced next-generation architectures that consume zero water for cooling, and broader fleet upgrades have sharply reduced their water intensity. Furthermore, newer liquid systems are designed to heavily curtail evaporative losses compared to legacy cooling assumptions. Across the broader installed base and many near-term builds, however, evaporative cooling remains the dominant method for rejecting massive industrial heat loads. A traditional gigawatt-scale AI campus can evaporate up to 5 million gallons of water a day, roughly the daily water use of a mid-sized town.
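The 5-million-gallon figure can be sanity-checked from first principles. Assuming (illustratively) that all site power ends up as heat, that water's latent heat of vaporization near cooling-tower temperatures is ~2.4 MJ/kg, and that roughly half the heat is rejected evaporatively:

```python
# Sanity check on "up to ~5 million gallons/day" for a gigawatt campus.
# Assumptions (illustrative): all site power becomes heat; latent heat of
# vaporization ~2.4 MJ/kg near tower temperatures; ~50% of heat rejection
# is evaporative (the fraction varies widely by design and climate).

HEAT_LOAD_W = 1e9            # 1 GW campus
LATENT_HEAT = 2.4e6          # J/kg, water at ~30-40 C
EVAP_FRACTION = 0.5          # assumed evaporative share of heat rejection
SECONDS_PER_DAY = 86_400
LITERS_PER_GALLON = 3.785

kg_per_day = HEAT_LOAD_W * SECONDS_PER_DAY * EVAP_FRACTION / LATENT_HEAT
gallons_per_day = kg_per_day / LITERS_PER_GALLON   # 1 kg of water ~ 1 liter

print(f"Evaporated: ~{gallons_per_day / 1e6:.1f} million gallons/day")
```

Under these assumptions the result lands near 5 million gallons a day; a fully evaporative design would roughly double it. The claim is consistent with basic thermodynamics, not marketing hyperbole.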

This demand is converging on regions that are already structurally water-stressed. Municipalities that eagerly welcomed data centers for their tax revenues are now mobilizing against "AI droughts," most visibly in Phoenix, Northern Virginia, and Dublin. You can build an SMR behind the meter to bypass the local electric utility, but you cannot bypass local hydrology. Nuclear reactors themselves require massive heat sinks, compounding the water intensity of co-located, islanded compute campuses. If a facility cannot secure long-term, legally ironclad water rights, or afford the severe energy penalty of dry cooling, it becomes a stranded asset.

2. The Materials Supercycle: Bounded by the Crust. You cannot build a liquid-cooled data center out of software. It requires physical hardware engineered for extreme thermal conductivity. That means copper.

The shift to direct-to-chip cooling introduces thousands of miles of complex plumbing into the data center: cold plates, coolant distribution units (CDUs), massive heat exchangers, quick-disconnect valves, and specialized manifolds. Simultaneously, elevated rack power densities demand radically thicker copper busbars and cabling to safely deliver high amperages without melting. The result is that AI-ready campuses demand significantly more copper per megawatt than traditional cloud infrastructure.
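The busbar point follows directly from Ohm's-law arithmetic: at a fixed distribution voltage, current scales linearly with rack power, and so does the copper cross-section needed at a fixed design current density. A hedged sketch (the 400 V three-phase feed, 0.95 power factor, and 2 A/mm² design density are illustrative assumptions, not a code-compliant design):

```python
import math

# Why denser racks mean radically more copper per feed: at fixed voltage,
# current -- and hence conductor cross-section at a fixed design current
# density -- scales linearly with power. All values are assumptions.

VOLTAGE = 400.0          # line-to-line volts, three-phase (assumed)
POWER_FACTOR = 0.95      # assumed
CURRENT_DENSITY = 2.0    # A/mm^2, a conservative busbar value (assumed)

def copper_cross_section_mm2(rack_kw):
    """Copper cross-section needed to feed one rack at the assumed density."""
    current = rack_kw * 1000.0 / (math.sqrt(3) * VOLTAGE * POWER_FACTOR)
    return current / CURRENT_DENSITY

for rack_kw in (10, 45, 100):
    area = copper_cross_section_mm2(rack_kw)
    print(f"{rack_kw:>3} kW rack -> ~{area:.0f} mm^2 of copper per feed")
```

Moving from a legacy 10 kW rack to a 100 kW AI rack multiplies the copper in every feeder, busbar, and whip by roughly 10x, before counting the cold plates, manifolds, and heat exchangers layered on top.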

Add this micro-level materials intensity to the macro-level grid upgrades and SMR deployments discussed in Parts I and II. S&P Global's analysis supports the thesis of a major copper demand step-up driven by data centers and the broader grid buildout, warning of a severe structural deficit by 2040. The AI industry is now competing directly with the broader energy transition for the exact same molecules.

3. Second-Order Repricing and Adaptation. First, we will see a geographic arbitrage of compute based on cooling. Because AI training models are highly latency-tolerant, hyperscalers will increasingly site mega-campuses in "stranded energy" havens where firm power and freezing temperatures are abundant (Nordics, Quebec, Iceland). In these cold climates, "free cooling" (air-side or water-side economizers) drastically reduces evaporation needs, offering an escape from the water-stressed U.S. Southwest.

Second, the materials crunch will act as an inflation tax on the broader industrial economy. When hyperscalers with functionally infinite balance sheets corner the market on transformers, electrical-grade steel, and high-purity copper, they permanently change the clearing price. Renewable developers whose project economics rely on cheap raw materials will see their margins evaporate.

The investable logic. The scarcity rents of the AI buildout are migrating to the physical rejection stack.

The long book sits with the enablers of thermal survival: water-rights holders in politically stable, resource-abundant jurisdictions; desalination and closed-loop water treatment operators; and the advanced thermal ecosystem (manufacturers of CDUs, immersion fluids, and specialized cooling alloys). Critically, it includes Tier-1 copper and base metal miners in allied geographies, the ultimate physical choke point where Parts I through III converge.

The short book holds the victims of thermodynamics: legacy air-cooled data center REITs facing terminal obsolescence; facilities sitting in drought-prone regions without long-term water security; commoditized compute consumers exposed to skyrocketing cooling OPEX; and any developer still betting on infinite water, infinite copper, or infinite air-cooling headroom.

The defining constraint of the 2020s remains ironclad. You cannot compute what you cannot power (Part I). You cannot power what you cannot firm (Part II). And you cannot cool without mastering water rights and base metals. The age of weightless software has ended. The Hard Power era has begun.

Disclaimer:
This publication is provided solely for informational, educational, and general commentary purposes. It does not constitute, and should not be construed as, financial, investment, legal, accounting, engineering, or other professional advice. Nothing herein is a recommendation, solicitation, or offer to buy or sell any security, commodity, derivative, or financial instrument, or to engage in any investment strategy. Past performance is not indicative of future results. Any forward-looking statements are inherently uncertain and may differ materially from actual outcomes.
All views, opinions, analyses, and conclusions expressed herein are solely those of the author in their personal capacity and do not reflect the official policy, position, strategy, views, or opinions of the author’s employer (or any of its subsidiaries, affiliates, customers, suppliers, or partners). The author is not acting on behalf of, and is not authorized to speak for, any employer or related entity.
This publication is based exclusively on publicly available information and the author’s independent interpretation. No material non-public information (MNPI) has been used, disclosed, relied upon, or inferred in preparing this publication. Nothing herein should be interpreted as commentary on any current or future product plans, business strategies, financial performance, or confidential matters of the author’s employer or any other entity.
Readers are responsible for conducting their own independent research and for seeking advice from qualified professionals before making any decision. The author disclaims any liability for actions taken based on this publication.