The most significant technological build-out since the internet is happening right now—and most investors are missing it.
The Death of Moore’s Law, The Birth of Something Bigger
For five decades, Moore’s Law defined technological progress: transistor counts doubled roughly every two years while the cost per transistor fell. This predictable cadence shaped how we built chips, planned investments, and understood the pace of innovation.
That era is over.
AI compute demand is now growing at more than twice the rate of Moore’s Law, creating what industry insiders call “Hyper Moore’s Law.” This isn’t just an incremental shift—it represents a complete phase change in how we build and deploy computing infrastructure.
The Numbers Are Staggering
Meeting projected AI demand will require roughly $500 billion per year of data center investment through 2030. This isn’t speculation; it’s already happening. Major tech companies plan to spend over $320 billion on AI infrastructure in 2025 alone (Meta accounts for up to $72 billion of that) and have collectively committed roughly $600 billion to U.S. infrastructure through 2028.
For context:
- AI training compute has grown 300,000x since 2012
- The semiconductor industry is projected to reach $1 trillion by 2030
- Hyperscaler capital expenditure is expected to hit $315 billion in 2025
This is the largest infrastructure build-out in history, dwarfing even the internet boom of the 1990s.
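As a sanity check on that 300,000x figure, here is a quick back-of-envelope calculation (a sketch, assuming the roughly six-year 2012-2018 window commonly attached to that statistic) of the implied doubling time versus Moore’s Law’s two-year cadence:

```python
import math

# ~300,000x growth in AI training compute over roughly six years (2012-2018)
growth = 300_000
years = 6

doublings = math.log2(growth)                 # ~18.2 doublings
months_per_doubling = years * 12 / doublings  # ~4 months per doubling

print(f"{doublings:.1f} doublings, one every {months_per_doubling:.1f} months")
# Moore's Law doubles every ~24 months, so this pace is roughly 6x faster
```

On that window the implied doubling time is about four months, consistent with the “more than twice the rate of Moore’s Law” claim above.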
Why This Time Is Different
Past technology cycles followed a familiar pattern: hype, bubble, crash, consolidation, then slow adoption. AI infrastructure is breaking this mold for three critical reasons:
1. Demand is Real and Immediate
Unlike previous bubbles driven by speculation (dotcom) or financial engineering (crypto), AI infrastructure spending is driven by actual, measurable demand. Companies aren’t building data centers hoping someone will use them—they’re racing to keep up with applications that already exist.
Every major enterprise is deploying AI models. Every consumer product is adding AI features. Every government is prioritizing AI capabilities. The infrastructure must be built first, before applications can scale.
2. The Physics of Compute Have Changed
Traditional Moore’s Law relied on making transistors smaller. We’ve hit physical limits: the smallest transistor features are now only dozens of atoms across, and atoms don’t shrink. The new path forward requires:
- Advanced lithography: ASML’s EUV machines are the only way to make cutting-edge chips
- Specialized architectures: Custom accelerators, not general-purpose CPUs
- Novel packaging: Chiplets, 3D stacking, and advanced interconnects
- New memory hierarchies: High-bandwidth memory (HBM) is as critical as the processors themselves
This creates structural bottlenecks that can’t be easily resolved. When ASML is the only company that can make the machines that make the chips, and they can only produce so many per year, you have guaranteed supply constraints for years to come.
3. Winner-Takes-Most Dynamics
AI infrastructure exhibits extreme returns to scale. The companies with the most compute can:
- Train better models
- Attract better talent
- Generate more data to improve models further
- Deploy at lower marginal cost
This creates a compounding advantage that forces everyone to keep spending. No hyperscaler can afford to fall behind. No semiconductor company can afford to skip a generation. The competitive dynamics ensure sustained, massive investment.
The Three Phases of Infrastructure Build-Out
Investment opportunities follow a predictable sequence in infrastructure cycles:
Phase 1: Foundation Layer (2023-2026) ← We Are Here
The picks-and-shovels phase. Companies building the tools to build the infrastructure see explosive growth. This includes:
- Semiconductor equipment makers (ASML, Applied Materials, Lam Research)
- Foundries with advanced process nodes (TSMC, Samsung)
- Memory manufacturers (Micron, SK Hynix)
- Networking infrastructure (Broadcom, Arista)
Investment thesis: These companies have structural advantages—moats, limited competition, multi-year order books. They benefit regardless of which AI company “wins” at the application layer.
Real-world example: ASML’s EUV monopoly means every cutting-edge chip depends on their machines. They have years of backlog and pricing power most companies can only dream of.
Phase 2: Platform Layer (2025-2028)
As infrastructure scales, platform companies that aggregate and orchestrate resources capture value. This includes:
- Hyperscalers (Amazon AWS, Microsoft Azure, Google Cloud, Meta)
- Data infrastructure (Snowflake, Databricks, MongoDB)
- AI orchestration platforms (Palantir, Scale AI)
Investment thesis: These companies sit between raw infrastructure and applications, capturing margin while providing essential services. The winners will have strong network effects and sticky customer bases.
Consider Snowflake: as companies generate more data and run more AI workloads, they naturally consume more Snowflake resources. The platform becomes more valuable as it’s used more.
Phase 3: Application Layer (2027-2032)
Eventually, the infrastructure matures enough that value shifts to companies building consumer and enterprise applications. This is where the internet analogy holds: the biggest application-layer winners, like Google and Facebook, reached dominance years after the late-1990s fiber and data center build-out.
Investment thesis: Too early to call winners, but watch for applications that:
- Solve real problems with measurable ROI
- Have sustainable competitive advantages beyond “we use AI”
- Can profitably acquire customers at scale
We’re not here yet. Most “AI applications” today are features, not products.
Investment Implications: Where to Position Now
The current opportunity lies in Phase 1 and early Phase 2. Here’s the hierarchy:
Tier 1 (Highest Conviction): Structural monopolies and duopolies
- ASML (lithography)
- TSMC (advanced foundry)
- Broadcom (custom accelerators and networking)
- Nvidia (GPU compute)
Tier 2 (Strong Conviction): Critical enablers with defensible positions
- Memory manufacturers (Micron)
- Data infrastructure (Snowflake, MongoDB)
- Hyperscalers with AI focus (Meta, Google)
Tier 3 (Selective): Emerging leaders in specific niches
- AI software platforms (Palantir)
- Specialized infrastructure (CrowdStrike for AI security)
- Next-generation components (high-bandwidth memory, optical interconnects)
Tier 4 (Speculative): Long-duration bets on paradigm shifts
- Quantum computing (10-15 year horizon)
- Alternative architectures (neuromorphic, photonic)
- Energy solutions for AI data centers
The Timing Question: Are We Too Late?
A common concern: “Haven’t these stocks already run up?”
Consider the context:
- We’re 3-4 years into a 10-15 year build-out cycle
- Current infrastructure can’t support projected AI compute needs
- Supply constraints (EUV machines, advanced packaging, HBM) guarantee pricing power
- The physics of computing have fundamentally changed—this isn’t a temporary bubble
Compare to historical parallels:
- Internet infrastructure (1995-2000): Cisco rose roughly 40x in the five years before peaking in March 2000. The gains went to investors who bought early in the build-out; buyers at the peak spent decades underwater
- Smartphone infrastructure (2007-2012): ARM peaked in 2014, 7 years into the cycle
- Cloud infrastructure (2010-2020): Amazon AWS revenue is still growing 30%+ annually, 13 years after launch
The winners in infrastructure cycles compound for decades, not years. We’re early.
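The AWS figure above illustrates what sustained compounding means in practice. The numbers below are round illustrative values, not actual AWS financials:

```python
# Illustrative: what 30% annual growth sustained for 13 years compounds to
rate, years = 0.30, 13
multiple = (1 + rate) ** years
print(f"~{multiple:.0f}x")  # a business growing 30%/yr is ~30x larger after 13 years
```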
Key Risks to Monitor
No investment thesis is complete without acknowledging what could go wrong:
AI plateau: What if current architectures hit capability limits before achieving AGI? Infrastructure demand would moderate but not collapse—narrow AI applications are already valuable.
Geopolitical disruption: Taiwan (TSMC), Netherlands (ASML), and South Korea (Samsung, SK Hynix) are single points of failure. Any conflict or export controls could reshape the industry overnight.
Technological leapfrog: A breakthrough in quantum computing, neuromorphic chips, or photonic computing could obsolete current infrastructure. Monitor but don’t over-index on low-probability, high-impact events.
Margin compression: As infrastructure scales, prices inevitably fall. The question is whether volume growth outpaces price declines. History suggests yes for the leaders, no for the laggards.
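Whether volume growth outpaces price declines is simple arithmetic. The rates below are hypothetical, chosen only to show the mechanics:

```python
# Revenue change = (1 + volume growth) * (1 + price change) - 1
def revenue_growth(volume_growth: float, price_change: float) -> float:
    return (1 + volume_growth) * (1 + price_change) - 1

# A "leader": unit volume +40%/yr against -20%/yr price declines
print(f"leader:  {revenue_growth(0.40, -0.20):+.0%}")   # revenue still grows +12%
# A "laggard": volume +15%/yr can't offset the same price pressure
print(f"laggard: {revenue_growth(0.15, -0.20):+.0%}")   # revenue shrinks -8%
```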
Capital efficiency improvements: If models can be trained with 10x less compute, demand projections would need revision. Watch for algorithmic breakthroughs in efficiency.
Validation Checkpoints
Revisit this thesis quarterly against these metrics:
- Hyperscaler CapEx: Is spending holding at $300B+ annually? Any cuts signal demand weakness.
- ASML order book: Forward orders should extend 12+ months. Cancellations are a red flag.
- HBM pricing: Sustained high prices confirm supply constraints. Price collapses suggest oversupply.
- AI application revenue: Are software companies monetizing AI features? This validates the infrastructure investment.
- Energy availability: Data center construction limited by power? This shifts investment thesis toward utilities and energy.
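One way to operationalize this quarterly review is a simple threshold checklist. The metric names and floor values below are hypothetical placeholders drawn from the list above; real readings would come from your own data sources:

```python
# Hypothetical quarterly checkpoint: each metric has a minimum "healthy" floor.
CHECKPOINTS = {
    "hyperscaler_capex_usd_b": 300,  # annual run-rate should hold at $300B+
    "asml_order_book_months": 12,    # forward orders should cover 12+ months
    "hbm_price_index": 100,          # sustained prices confirm tight supply
}

def review(readings: dict) -> list:
    """Return the metrics that fell below their floor this quarter."""
    return [name for name, floor in CHECKPOINTS.items()
            if name in readings and readings[name] < floor]

# Example quarter: capex holding up, but the order book has shortened
print(review({"hyperscaler_capex_usd_b": 320, "asml_order_book_months": 9}))
# -> ['asml_order_book_months']
```

Any flagged metric is a signal to revisit the thesis, not an automatic sell trigger.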
Conclusion: The Decade-Long Opportunity
The AI infrastructure supercycle is the defining investment opportunity of the 2020s. Like the internet before it, the infrastructure must be built before the applications can flourish. Unlike the dot-com era, there is little doubt today that the infrastructure is essential: AI is no longer speculative technology but an operational necessity.
The winners will be companies with:
- Structural competitive advantages (monopolies, duopolies, high switching costs)
- Multi-year revenue visibility (long order books, sticky customers)
- Pricing power (supply constraints, mission-critical products)
- Management teams focused on execution over hype
We’re in the third inning of a nine-inning game. The foundation layer is being built right now. The platform layer is consolidating. The application layer is still forming.
This is the opportunity.
Next in this series: “The Semiconductor Value Chain: Where AI Money Really Flows” – A deep dive into ASML’s monopoly, TSMC’s dominance, and why memory might be the real bottleneck.
