AI Infrastructure Bottlenecks: Full-Stack Constraint Map (Silicon → Power → Cooling → Labor)
Date: 2026-02-16
Prepared for: AI infrastructure bottleneck research brief
Executive Summary
The AI buildout is no longer constrained by a single chokepoint (“not enough GPUs”). It is constrained by a stack of coupled bottlenecks that now spans compute silicon, memory, networking, packaging, electric power, cooling, site development, and specialized labor.
Three conclusions stand out:
- The bottleneck has migrated from chips alone to systems-level infrastructure. GPU/accelerator supply improved versus 2023–2024, but advanced packaging, HBM mix, power delivery, and cooling architecture now determine deployment velocity. NVIDIA itself is scaling aggressively with strong Blackwell demand and record data-center revenue, but end-market deployment still depends on the broader stack (https://nvidianews.nvidia.com/news/nvidia-announces-financial-results-for-fourth-quarter-and-fiscal-2025).
- Power and grid constraints are the longest-duration bottleneck (and the most underpriced). The IEA estimates data-center electricity use at ~415 TWh in 2024, rising to ~945 TWh by 2030 in its base case (more than 2x), while transmission build timelines and key component lead times remain slow (https://iea.blob.core.windows.net/assets/de9dea13-b07d-42c5-a398-d1b3ae17d866/EnergyandAI.pdf). DOE planning work similarly indicates U.S. transmission expansion needs are structural, not cyclical (2.1x–2.6x the 2020 system size by 2050), with huge interconnection queues (https://www.energy.gov/sites/default/files/2024-10/NationalTransmissionPlanningStudy-ExecutiveSummary.pdf).
- Market pricing likely overweights “AI chip winners” and underweights “deployment enablers.” The market broadly recognizes leaders in accelerators and HBM. It is less consistent in pricing long-cycle beneficiaries in transmission equipment, generation optionality, thermal systems, and power-constrained data-center land.
Framework: How to Read Bottlenecks
For each category, this report addresses:
- Constraint: What physically/economically limits expansion?
- Severity / timeline: How long until material easing?
- Beneficiaries: Which companies benefit from scarcity or relief spend?
- Market view: Priced-in vs under-appreciated.
1) GPUs / Accelerators (NVDA, AMD, Custom ASICs)
Constraint
The accelerator bottleneck is no longer just wafer starts; it is now platform-level integration:
- accelerator die availability,
- HBM attachment,
- advanced packaging,
- high-bandwidth network fabrics,
- rack-level power/cooling compatibility.
NVIDIA’s financial releases show demand remains intense (“Demand for Blackwell is amazing”) alongside record data-center revenue ($35.6B in Q4 FY2025) (https://nvidianews.nvidia.com/news/nvidia-announces-financial-results-for-fourth-quarter-and-fiscal-2025). AMD also reported record data-center segment revenue, driven by EPYC plus Instinct ramp (https://ir.amd.com/news-events/press-releases/detail/1276/amd-reports-fourth-quarter-and-full-year-2025-financial-results).
Severity and Timeline
- Near term (0–18 months): constraints remain for top-tier training clusters (especially where power + cooling + network are pre-integrated).
- Medium term (18–36 months): easing likely via custom ASIC scaling and broader accelerator mix.
Custom silicon is a practical pressure valve. AWS explicitly positions Trainium as lower-cost, high-scale silicon for AI training and inference, with Trainium2 delivering up to 4x the performance of Trainium1 and 30–40% better price performance versus certain GPU instances (https://aws.amazon.com/ai/machine-learning/trainium/).
Google likewise emphasizes AI hardware efficiency improvements and TPU progress in its 2025 environmental report (https://www.gstatic.com/gumdrop/sustainability/google-2025-environmental-report.pdf).
Beneficiaries
- Direct: NVIDIA (NVDA), AMD (AMD).
- ASIC ecosystem: Broadcom (AVGO), hyperscaler internal silicon ecosystems (AWS/Google/Meta).
- Second-order: networking and packaging suppliers required to make accelerators deployable.
Priced In vs Under-Appreciated
- Likely priced in: NVDA headline demand strength.
- Partially priced in: AMD Instinct optionality.
- Under-appreciated: the extent to which future growth depends on non-silicon enablers (power, cooling, packaging, transmission).
2) Memory Bottleneck: HBM + DRAM Mix (SK hynix, Micron, Samsung)
Constraint
HBM is the most strategic memory bottleneck because AI training/inference economics increasingly depend on memory bandwidth and packaging co-design. TrendForce notes that HBM manufacturing can require ~3x the wafer input of DDR5 for equivalent bit output, crowding out general DRAM capacity (https://www.trendforce.com/news/2024/06/13/news-hbm-supply-shortage-prompts-microns-expansion-expected-schedule-in-japan-and-taiwan-revealed/).
TrendForce also described risk of second-half DRAM tightness as producers prioritize HBM profitability and capacity (https://www.trendforce.com/news/2024/05/21/news-hbm-boom-may-lead-to-dram-shortages-in-the-second-half-of-the-year/).
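To make the crowd-out mechanics concrete, the sketch below applies the TrendForce ~3x wafer-intensity ratio to a hypothetical fab. The fab size (100k wafer starts/month) and the 30% HBM mix shift are illustrative round numbers, not reported figures.

```python
HBM_WAFER_RATIO = 3.0  # TrendForce estimate: ~3x wafer input per bit vs DDR5

def bit_output_index(total_wafers: float, hbm_share: float) -> float:
    """Total bit output (DDR5-equivalent index) after shifting a share
    of wafer starts to HBM. One wafer of DDR5 output = 1 bit unit; one
    wafer of HBM output = 1/3 bit unit at the cited intensity ratio."""
    hbm_wafers = total_wafers * hbm_share
    ddr5_wafers = total_wafers - hbm_wafers
    return ddr5_wafers + hbm_wafers / HBM_WAFER_RATIO

# Hypothetical fab: 100k wafer starts/month, 30% reallocated to HBM.
base = bit_output_index(100_000, 0.0)
with_hbm = bit_output_index(100_000, 0.30)
print(f"bit output vs all-DDR5 baseline: {with_hbm / base:.0%}")  # 80%
```

In other words, under these assumed numbers a 30% wafer shift removes a fifth of total bit supply even though HBM bits still grow, which is the mechanism behind the second-half DRAM tightness TrendForce describes.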
Severity and Timeline
- 2025–2027: still constrained at frontier nodes/qualified stacks.
- Easing path: new fab + TSV + packaging expansion in Taiwan/Japan/U.S. and qualification progress from multiple vendors.
Micron’s HBM ramp urgency and capacity expansion plans were explicitly tied to high demand and forward bookings in TrendForce reporting (https://www.trendforce.com/news/2024/06/13/news-hbm-supply-shortage-prompts-microns-expansion-expected-schedule-in-japan-and-taiwan-revealed/).
SK hynix continues to frame HBM as a structural “memory supercycle” driver, with cited expectations of continued leadership into HBM4-era platforms (https://news.skhynix.com/2026-market-outlook-focus-on-the-hbm-led-memory-supercycle/).
Beneficiaries
- Primary: SK hynix, Micron, Samsung (if qualification/yield ramps sustain).
- Secondary: advanced packaging ecosystem (TSMC/OSATs), tool vendors tied to TSV/stacking/test.
Priced In vs Under-Appreciated
- Likely priced in: “HBM leaders win” consensus.
- Under-appreciated: collateral impact on conventional DRAM supply and enterprise IT memory budgets; execution risk from qualification cadence.
3) Interconnect / Networking Bottlenecks (Broadcom, Arista, Infinera)
Constraint
As clusters scale, compute becomes gated by networking fabric performance (east-west throughput, congestion control, optical reach, failure domains). This is not optional overhead—it is core to realized AI utilization.
Broadcom and Arista data both reinforce this:
- Broadcom reported FY2024 AI revenue of $12.2B (+220% YoY), citing AI XPUs + Ethernet portfolio strength (https://investors.broadcom.com/news-releases/news-release-details/broadcom-inc-announces-fourth-quarter-and-fiscal-year-2024).
- Arista’s FY2025 commentary highlighted AI networking momentum and expansion goals, with full-year revenue at $9B (https://www.arista.com/en/company/news/press-release/23416-pr-20260212).
TrendForce reporting on Broadcom Tomahawk 6 underlines scale: 102.4 Tbps class switching and design choices aimed at very large GPU fabrics (https://www.trendforce.com/news/2025/06/04/news-broadcoms-latest-networking-chip-for-ai-reportedly-built-on-tsmcs-3nm-full-shipments-expected-in-july/).
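The cited 102.4 Tbps class matters because switch radix compounds at the fabric level. The sketch below is standard leaf-spine sizing arithmetic applied to that aggregate capacity; the port speeds and the non-blocking two-tier assumption are illustrative, not vendor-reported cluster figures.

```python
SWITCH_CAPACITY_GBPS = 102_400  # 102.4 Tbps class (TrendForce on Tomahawk 6)

def port_count(port_speed_gbps: int) -> int:
    """Ports available at a given speed from the aggregate switch capacity."""
    return SWITCH_CAPACITY_GBPS // port_speed_gbps

def two_tier_endpoints(radix: int) -> int:
    """Non-blocking two-tier leaf-spine: each leaf dedicates radix/2 ports
    to endpoints and radix/2 to spines, supporting radix^2 / 2 endpoints."""
    return radix * radix // 2

for speed in (800, 1_600):
    r = port_count(speed)
    print(f"{speed}G: {r} ports -> {two_tier_endpoints(r):,} endpoints")
# 800G: 128 ports -> 8,192 endpoints
# 1600G: 64 ports -> 2,048 endpoints
```

The quadratic relationship between radix and endpoint count is why each switch-capacity generation flattens fabrics (fewer tiers, fewer optics, fewer failure domains) rather than just adding bandwidth.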
Severity and Timeline
- Near term: persistent bottlenecks in ultra-large training pods and backbone optical interconnect.
- Medium term (2–4 years): greater relief as 800G+ and optical upgrades propagate.
Beneficiaries
- Primary: Broadcom (merchant silicon), Arista (AI Ethernet fabrics).
- Optical angle: Infinera/Nokia ecosystem exposure to DCI and long-haul optical buildouts (strategically relevant though increasingly consolidated).
Priced In vs Under-Appreciated
- Broadcom: partially priced, but AI networking duration may still be underestimated.
- Arista: market recognizes AI tailwind; persistence of scale-up networking demand still likely under-modeled.
- Under-appreciated overall: networking as a first-order limiter of effective GPU productivity.
4) Advanced Packaging Bottleneck (TSMC CoWoS, Intel Foveros)
Constraint
AI chips are package-constrained, not just die-constrained. CoWoS/SoIC/2.5D/3D capacity determines whether compute + HBM can be assembled into deployable product.
TrendForce reported TSMC advanced packaging capacity (CoWoS/SoIC) as effectively booked by major AI customers through next-year horizons, with rapid expansion underway (https://www.trendforce.com/news/2024/05/06/news-tsmcs-advanced-packaging-capacity-fully-booked-by-nvidia-and-amd-through-next-year/).
Additional TrendForce reporting indicates potential CoWoS monthly capacity expansion toward ~75k wafers in 2025 and continued growth into 2026, with partner support from ASE/Amkor (https://www.trendforce.com/news/2025/01/02/news-tsmc-set-to-expand-cowos-capacity-to-record-75000-wafers-in-2025-doubling-2024-output/).
Intel’s foundry packaging strategy (EMIB/Foveros) shows competing roadmap depth, with stated long-range goals for extreme package-level integration (https://www.intel.com/content/www/us/en/foundry/packaging.html).
Severity and Timeline
- 2025–2026: still tight at high-end AI package configurations.
- 2027 onward: material easing possible if new capacity ramps on schedule and yields stabilize.
Beneficiaries
- Core: TSMC; OSAT partners (ASE, Amkor).
- Alternative stack: Intel Foundry packaging (EMIB/Foveros) if customer adoption broadens.
Priced In vs Under-Appreciated
- TSMC packaging scarcity: widely recognized.
- Under-appreciated: risk that packaging—not leading-edge wafer supply—remains the true deployment limiter for frontier AI through mid-decade.
5) Power Generation + Grid Infrastructure (Vistra, Constellation, NRG, Eaton)
Constraint
This is the highest-conviction structural bottleneck.
The IEA estimates data-center electricity use at ~415 TWh in 2024 and ~945 TWh by 2030 in its base case (https://iea.blob.core.windows.net/assets/de9dea13-b07d-42c5-a398-d1b3ae17d866/EnergyandAI.pdf). At the same time:
- transmission build timelines in advanced economies are often 4–8 years,
- wait times for critical components (transformers/cables) have doubled in recent years (same source).
DOE’s National Transmission Planning Study indicates the U.S. grid may need to expand to 2.1x–2.6x 2020 transmission size by 2050, with interregional expansion 1.9x–3.5x. It also references very large interconnection queues (1,480 GW solar/wind + 1,030 GW storage seeking interconnection) (https://www.energy.gov/sites/default/files/2024-10/NationalTransmissionPlanningStudy-ExecutiveSummary.pdf).
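The growth rate implied by the IEA base case can be read directly off the cited figures; the calculation below is pure arithmetic on ~415 TWh (2024) and ~945 TWh (2030), with no additional assumptions.

```python
# Implied growth behind the IEA base case for data-center electricity use.
start_twh, end_twh = 415.0, 945.0  # 2024 and 2030 base case (IEA)
years = 2030 - 2024

cagr = (end_twh / start_twh) ** (1 / years) - 1
avg_annual_add = (end_twh - start_twh) / years

print(f"implied CAGR: {cagr:.1%}")                         # ~14.7%/year
print(f"average annual increment: {avg_annual_add:.0f} TWh")  # ~88 TWh/year
```

An ~88 TWh average annual increment against 4–8 year transmission build times is the core of the duration argument: demand compounds on an annual cadence while supply responds on a multi-year one.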
Severity and Timeline
- Long duration (5–10+ years).
- This is not a one-cycle capex bulge; it is a multi-year buildout constrained by permitting, equipment, labor, and financing.
Beneficiaries
- Generation incumbents with scalable optionality:
  - Vistra: signed 20-year PPAs with Meta for >2,600 MW of nuclear-linked supply, including uprates (https://investor.vistracorp.com/2026-01-09-Vistra-and-Meta-Announce-Agreements-to-Support-Nuclear-Plants-in-PJM-and-Add-New-Nuclear-Generation-to-the-Grid).
  - NRG: completed an acquisition adding ~13 GW of generation and explicitly framed a “power demand supercycle,” including data-center demand (https://investors.nrg.com/news-releases/news-release-details/nrg-energy-completes-acquisition-13-gw-power-generation-and-ci).
  - Constellation: management messaging increasingly links nuclear reliability to “powering the data economy” (https://investors.constellationenergy.com/static-files/e146dfb8-93fe-48fe-9e20-b9a9e3eec465).
- Electrical infrastructure providers: switchgear, breakers, the transformer-adjacent value chain, and power-quality systems (Eaton and peers) should benefit if component lead times stay tight (IEA source above).
Priced In vs Under-Appreciated
- Partially priced: independent power producers with explicit AI-linked contracts.
- Under-appreciated: how durable and non-linear grid bottlenecks can be, especially when data-center load competes with electrification and industrial reshoring.
6) Cooling Bottleneck (Vertiv, Schneider Electric, Liquid Cooling Ecosystem)
Constraint
Higher rack densities push thermal systems beyond legacy air-cooling envelopes. The constraint now includes:
- physical heat rejection capacity,
- liquid cooling retrofits,
- water availability/water intensity trade-offs,
- controls/automation integration.
IEA’s analysis indicates data-center water consumption around 560 billion liters/year currently, rising to ~1,200 billion liters/year by 2030 in the base case (https://iea.blob.core.windows.net/assets/de9dea13-b07d-42c5-a398-d1b3ae17d866/EnergyandAI.pdf).
Google reports fleet-level data-center PUE improvement to 1.09 and substantial freshwater replenishment efforts, while acknowledging rising AI-era infrastructure demands (https://www.gstatic.com/gumdrop/sustainability/google-2025-environmental-report.pdf).
Microsoft highlights direct-to-chip cooling innovations that can save >125 million liters of water per facility per year in certain designs (https://www.microsoft.com/en-us/corporate-responsibility/sustainability/report/).
Uptime Institute notes increasing automation and liquid-cooling integration, including Schneider’s thermal expansion via Motivair (https://journal.uptimeinstitute.com/ai-and-cooling-toward-more-automation/).
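The water figures cited above can be cross-checked with simple arithmetic. The sketch below compares the IEA base-case increment against Microsoft's per-facility savings figure; the "facility-equivalents" framing is an illustrative scale check, not a forecast of actual deployments.

```python
# Scale check on the cooling-water numbers cited above.
current_l = 560e9              # liters/year, current (IEA)
by_2030_l = 1_200e9            # liters/year, 2030 base case (IEA)
per_facility_saving_l = 125e6  # liters/year saved, direct-to-chip designs (Microsoft)

increment = by_2030_l - current_l
print(f"growth multiple by 2030: {by_2030_l / current_l:.2f}x")  # 2.14x
print(f"facility-equivalents of direct-to-chip savings needed "
      f"to offset the increment: {increment / per_facility_saving_l:,.0f}")  # 5,120
```

The point of the comparison: even aggressive per-facility cooling efficiency gains are small against the projected aggregate increment, which is why siting, hydrology, and permitting carry so much weight in this section.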
Severity and Timeline
- Near-to-medium term (2–5 years): intense retrofit/newbuild race.
- Local severity varies by climate, water policy, and utility structure.
Beneficiaries
- Public names: Vertiv (thermal + power integration), Schneider Electric (thermal + controls), plus broader liquid cooling ecosystems.
- Adjacencies: water treatment/reuse, facility digital twins, controls software.
Priced In vs Under-Appreciated
- Partially priced: headline AI thermal demand.
- Under-appreciated: water and siting externalities; control software and integrated thermal-power design as key differentiators.
7) Data Center REITs / Builders (Equinix, Digital Realty, QTS)
Constraint
In many metros, the scarce resource is no longer shell space; it is deliverable powered capacity with realistic energization timelines.
Equinix reported delivery of 90+ MW xScale capacity in 2025, major expansion activity, and ~1 GW added powered land-under-control (https://investor.equinix.com/news-events/press-releases/detail/1096/equinix-provides-robust-2026-outlook-driven-by-strong).
Digital Realty reported robust leasing and a sizable backlog: $817M annualized GAAP base-rent backlog (DLR share) and bookings expected to generate $400M annualized rent at 100% share (https://investor.digitalrealty.com/news-releases/news-release-details/digital-realty-reports-fourth-quarter-2025-results).
Severity and Timeline
- Persistent through decade in top demand hubs unless power/transmission constraints ease materially.
Beneficiaries
- Public: Equinix (EQIX), Digital Realty (DLR).
- Private: QTS and other hyperscale-oriented developers with power-rich land banks.
Priced In vs Under-Appreciated
- Mixed: market rewards growth but may not fully price optionality of pre-secured power/land in constrained grids.
8) Raw Materials Constraints (Copper, Rare Earths, Water)
Constraint
Copper and grid materials
IEA’s critical minerals outlook shows clean-energy-driven copper demand growth and warns that announced projects cover only ~70% of copper needs in APS-type trajectories by 2035 (https://iea.blob.core.windows.net/assets/ee01701d-1d5c-4ba8-9df6-abeeac9de99a/GlobalCriticalMineralsOutlook2024.pdf).
Rare earth / graphite concentration
IEA flags high concentration risk (e.g., very high shares of battery-grade graphite and refined rare-earth supply tied to China in 2030 scenarios) (same source). USGS also shows U.S. rare-earth compounds/metals import sourcing heavily concentrated in China (70% over 2020–2023), with high net import reliance in compounds/metals (https://pubs.usgs.gov/periodicals/mcs2025/mcs2025.pdf).
Water
IEA projects data-center water use rising materially by 2030; Google/Microsoft disclosures show both progress and high absolute dependence on advanced cooling and replenishment systems (IEA + Google + Microsoft links above).
Severity and Timeline
- Copper/grid materials: 3–10 years (mine + refining + infrastructure cycles).
- Rare earth concentration: persistent geopolitical/processing risk absent diversification.
- Water: already active in stressed basins; likely tighter with AI cluster density growth.
Beneficiaries
- Copper miners/refiners, grid hardware suppliers, water infrastructure and reuse vendors, and regions with favorable hydrology + permitting.
Priced In vs Under-Appreciated
- Partly priced: broad “critical minerals” theme.
- Under-appreciated: copper and transformer ecosystem bottlenecks specifically tied to AI-load-driven transmission expansion.
9) Talent / Labor Constraints
Constraint
AI infrastructure buildout needs specialized talent across:
- power engineering,
- high-voltage interconnection,
- thermal/mechanical design,
- data-center operations,
- controls/software for automation.
Uptime’s survey work has consistently shown staffing as a top operational pain point; staffing/organization ranked as a leading requirement, and staffing-related execution/process issues are major outage-risk contributors (https://journal.uptimeinstitute.com/data-center-staffing-an-ongoing-struggle/).
IEA also notes acute shortages in technical energy-sector skills and highlights that energy employers still lag in AI/digital skill integration, based on survey evidence (https://iea.blob.core.windows.net/assets/de9dea13-b07d-42c5-a398-d1b3ae17d866/EnergyandAI.pdf).
Severity and Timeline
- Multi-year (3–7 years) due to training pipelines and field experience requirements.
Beneficiaries
- Automation software, managed operations providers, and firms with apprenticeship/training scale.
Priced In vs Under-Appreciated
- Under-appreciated in most valuation models; labor is often treated as solvable via capex alone, which is unrealistic in complex infrastructure rollouts.
10) Regulatory / Permitting Bottlenecks
Constraint
Permitting and interconnection are fundamental schedule constraints for both generation and transmission.
IEA: transmission build times commonly 4–8 years in advanced economies, with critical component lead times rising (https://iea.blob.core.windows.net/assets/de9dea13-b07d-42c5-a398-d1b3ae17d866/EnergyandAI.pdf).
DOE: long-range transmission expansion needs are very large, and current queue depth implies significant process/coordination strain (https://www.energy.gov/sites/default/files/2024-10/NationalTransmissionPlanningStudy-ExecutiveSummary.pdf).
Severity and Timeline
- Long-cycle (5–10+ years); reform can help but cannot fully compress physical + legal development timelines.
Beneficiaries
- Incumbent generation operators with already-interconnected assets.
- Developers with deep permitting expertise and local stakeholder relationships.
- Grid equipment firms benefiting from prolonged replacement/upgrade cycles.
Priced In vs Under-Appreciated
- Under-appreciated: permitting friction is often modeled as “delay,” but in constrained markets it can create persistent scarcity rents.
Company Analysis: Investable Names
Semis and AI Compute
- NVIDIA (NVDA): still the demand leader; Blackwell ramp + software moat, but sensitivity to the packaging/network/power stack remains (https://nvidianews.nvidia.com/news/nvidia-announces-financial-results-for-fourth-quarter-and-fiscal-2025). View: High quality, but bottleneck alpha is increasingly outside core GPU silicon.
- AMD (AMD): credible second source in accelerators + CPUs; strong data-center growth and Instinct ramp (https://ir.amd.com/news-events/press-releases/detail/1276/amd-reports-fourth-quarter-and-full-year-2025-financial-results). View: Beneficiary of supply diversification, especially where customers de-risk single-vendor exposure.
- Broadcom (AVGO): key beneficiary of custom AI silicon + Ethernet fabric scale (https://investors.broadcom.com/news-releases/news-release-details/broadcom-inc-announces-fourth-quarter-and-fiscal-year-2024; https://www.trendforce.com/news/2025/06/04/news-broadcoms-latest-networking-chip-for-ai-reportedly-built-on-tsmcs-3nm-full-shipments-expected-in-july/). View: Durable exposure to the networking + ASIC layer; less crowded than the pure GPU narrative.
- Arista (ANET): monetizes AI Ethernet and cloud-scale switching demand (https://www.arista.com/en/company/news/press-release/23416-pr-20260212). View: Structural beneficiary if Ethernet-based AI fabrics keep taking share.
Memory and Packaging
- SK hynix / Micron / Samsung: HBM remains a strategic choke point; wafers, yield, and qualification matter as much as bit growth (TrendForce HBM pieces + SK hynix newsroom links above). View: SK hynix leadership is recognized; Samsung/Micron upside is tied to qualification and execution.
- TSMC / OSATs / Intel Foundry packaging: packaging scarcity remains central in the near term; Intel offers an alternative architecture path with EMIB/Foveros (https://www.trendforce.com/news/2024/05/06/news-tsmcs-advanced-packaging-capacity-fully-booked-by-nvidia-and-amd-through-next-year/; https://www.intel.com/content/www/us/en/foundry/packaging.html).
Power, Grid, Thermal
- Vistra (VST), Constellation (CEG), NRG (NRG): dispatchable/firm generation + contract structures now directly linked to AI-load growth (Vistra/NRG/Constellation sources above). View: Potential multi-year scarcity rent from power-constrained data-center geographies.
- Eaton (ETN) and power equipment peers: exposed to prolonged grid modernization and electrical balance-of-plant demand, especially given the transmission and component lead-time stress signaled by the IEA/DOE.
- Vertiv (VRT), Schneider Electric (SU.PA / SBGSY): direct beneficiaries of liquid-cooling and high-density thermal retrofits (Uptime + Microsoft + Google + Vertiv pages).
Data Center Real Estate / Capacity Platforms
- Equinix (EQIX), Digital Realty (DLR): backlog, bookings, and powered-land control are increasingly strategic in constrained markets (Equinix/DLR releases). View: Quality of power access and time-to-energize increasingly drives relative valuation.
- QTS (private): not a public equity, but a notable beneficiary of hyperscale AI demand where power/land are pre-positioned.
Most Under-Appreciated Bottlenecks (Ranked)
1) Grid interconnection + transmission timelines
Why under-appreciated: markets extrapolate chip shipment growth faster than they model grid completion realities.
Evidence: 4–8 year transmission timelines, doubled transformer/cable waits, and large queue backlogs (IEA + DOE).
2) Electrical equipment lead times (transformers/switchgear ecosystem)
Why under-appreciated: viewed as commodity industrial spend, but timing and qualification can govern project CODs.
Evidence: IEA lead-time commentary plus accelerating load growth.
3) Water-constrained cooling in specific geographies
Why under-appreciated: PUE is tracked more than absolute water risk and local hydrology/permitting.
Evidence: IEA water trajectory; Google/Microsoft cooling disclosures.
4) Packaging-memory coupling (not just “HBM shortage”)
Why under-appreciated: HBM narrative is known, but less attention to capacity crowd-out effects across DRAM and package assembly.
5) Skilled labor for commissioning and operations
Why under-appreciated: capex plans assume labor appears on schedule; survey evidence suggests persistent shortage and execution risk.
Bottom Line for Investors
The AI infrastructure opportunity remains enormous—but the binding constraints are increasingly outside the GPU die.
- If your model assumes compute demand converts linearly into deployed AI capacity, it likely overstates near-term slope.
- If your model includes packaging, memory mix, networking fabric, power delivery, cooling, and permitting, it better reflects real deployment cadence.
Practical portfolio implication
A balanced AI infrastructure basket should include:
- Compute leaders (NVDA/AMD),
- Network + custom silicon enablers (AVGO/ANET),
- Power + generation optionality (VST/CEG/NRG),
- Thermal + electrical infrastructure (VRT/Schneider/Eaton peers),
- Power-constrained data-center platform owners (EQIX/DLR).
In this phase of the cycle, the biggest mispricing risk is treating AI as a pure semiconductor story rather than a full-stack industrial systems buildout.
Scenario Watchlist (What Would Change This View)
To make this framework actionable, the following datapoints matter most over the next 12–24 months:
- Packaging lead-time compression: If CoWoS/SoIC lead times fall faster than expected (and not just announced capacity), the chip bottleneck could reassert as the dominant limiter. If not, deployment remains system-constrained.
- HBM qualification breadth: Watch whether second- and third-source suppliers consistently pass qualification for frontier accelerator programs. Faster multi-vendor qualification reduces pricing-power concentration and lowers deployment risk.
- AI fabric utilization metrics: If cluster-level utilization improves materially without proportional network capex growth, networking bottleneck risk is easing. If utilization is still constrained by fabric congestion/fault domains, network names may have a longer runway than consensus.
- Interconnection and energization cycle times: The single highest signal for long-duration AI infra growth is whether interconnection cycle times shrink in major U.S. and European hubs. If they do not, scarcity rents for powered sites and incumbent generation should persist.
- Cooling architecture mix shift: Track the percentage of new AI capacity designed around liquid cooling from day one (vs. retrofits). A high retrofit share usually implies higher cost, longer delivery, and greater execution risk.
- Water policy tightening in key metros: AI growth assumptions for water-stressed regions can break quickly if permitting regimes tighten. This is still weakly modeled in most top-down demand forecasts.
- Labor productivity in commissioning and operations: If operators fail to improve staffing productivity (automation, tooling, training), capex alone will not translate into on-time capacity delivery.
In short: if power + cooling + permitting data do not improve, AI infrastructure growth may stay robust in spend terms but remain lumpy in deployed-capacity terms.
Sources (inline cited above)
- NVIDIA Q4 FY2025: https://nvidianews.nvidia.com/news/nvidia-announces-financial-results-for-fourth-quarter-and-fiscal-2025
- NVIDIA Q3 FY2025: https://nvidianews.nvidia.com/news/nvidia-announces-financial-results-for-third-quarter-fiscal-2025
- AMD Q4/FY2025 results: https://ir.amd.com/news-events/press-releases/detail/1276/amd-reports-fourth-quarter-and-full-year-2025-financial-results
- AWS Trainium: https://aws.amazon.com/ai/machine-learning/trainium/
- Google 2025 Environmental Report (PDF): https://www.gstatic.com/gumdrop/sustainability/google-2025-environmental-report.pdf
- Microsoft Sustainability Report page: https://www.microsoft.com/en-us/corporate-responsibility/sustainability/report/
- Broadcom FY2024 results: https://investors.broadcom.com/news-releases/news-release-details/broadcom-inc-announces-fourth-quarter-and-fiscal-year-2024
- Arista Q4/FY2025 results: https://www.arista.com/en/company/news/press-release/23416-pr-20260212
- Intel packaging (EMIB/Foveros): https://www.intel.com/content/www/us/en/foundry/packaging.html
- TrendForce CoWoS booked (2024): https://www.trendforce.com/news/2024/05/06/news-tsmcs-advanced-packaging-capacity-fully-booked-by-nvidia-and-amd-through-next-year/
- TrendForce CoWoS expansion (2025): https://www.trendforce.com/news/2025/01/02/news-tsmc-set-to-expand-cowos-capacity-to-record-75000-wafers-in-2025-doubling-2024-output/
- TrendForce HBM shortage / Micron expansion: https://www.trendforce.com/news/2024/06/13/news-hbm-supply-shortage-prompts-microns-expansion-expected-schedule-in-japan-and-taiwan-revealed/
- TrendForce HBM boom and DRAM tightness: https://www.trendforce.com/news/2024/05/21/news-hbm-boom-may-lead-to-dram-shortages-in-the-second-half-of-the-year/
- SK hynix HBM-led supercycle outlook: https://news.skhynix.com/2026-market-outlook-focus-on-the-hbm-led-memory-supercycle/
- TrendForce Broadcom networking chip: https://www.trendforce.com/news/2025/06/04/news-broadcoms-latest-networking-chip-for-ai-reportedly-built-on-tsmcs-3nm-full-shipments-expected-in-july/
- IEA Energy and AI (PDF): https://iea.blob.core.windows.net/assets/de9dea13-b07d-42c5-a398-d1b3ae17d866/EnergyandAI.pdf
- DOE National Transmission Planning Study Executive Summary (PDF): https://www.energy.gov/sites/default/files/2024-10/NationalTransmissionPlanningStudy-ExecutiveSummary.pdf
- IEA Global Critical Minerals Outlook 2024 (PDF): https://iea.blob.core.windows.net/assets/ee01701d-1d5c-4ba8-9df6-abeeac9de99a/GlobalCriticalMineralsOutlook2024.pdf
- USGS Mineral Commodity Summaries 2025 (PDF): https://pubs.usgs.gov/periodicals/mcs2025/mcs2025.pdf
- Uptime staffing: https://journal.uptimeinstitute.com/data-center-staffing-an-ongoing-struggle/
- Uptime AI and cooling: https://journal.uptimeinstitute.com/ai-and-cooling-toward-more-automation/
- Equinix press release: https://investor.equinix.com/news-events/press-releases/detail/1096/equinix-provides-robust-2026-outlook-driven-by-strong
- Digital Realty Q4 2025 results: https://investor.digitalrealty.com/news-releases/news-release-details/digital-realty-reports-fourth-quarter-2025-results
- Vistra + Meta nuclear PPAs: https://investor.vistracorp.com/2026-01-09-Vistra-and-Meta-Announce-Agreements-to-Support-Nuclear-Plants-in-PJM-and-Add-New-Nuclear-Generation-to-the-Grid
- NRG + LS Power acquisition: https://investors.nrg.com/news-releases/news-release-details/nrg-energy-completes-acquisition-13-gw-power-generation-and-ci
- Constellation Q3 2025 release (PDF): https://investors.constellationenergy.com/static-files/e146dfb8-93fe-48fe-9e20-b9a9e3eec465