Beyond Nvidia, Micron: Who Profits From $650 Billion AI Spending?
Led by a $200 billion spending plan from Amazon, the four biggest internet companies are on track to spend $650 billion on capex this year. That’s about 60% more than last year, one of the largest single-year spending jumps the tech sector has ever seen.

The scale is striking. Amazon’s $200 billion budget is 52% higher than last year and larger than what Google and Microsoft spent combined in 2025. Google is also sharply increasing spending, aiming for as much as $185 billion, as it pushes its AI tools deeper into search, cloud services, and everyday software.

So what is forcing these companies to spend this much, this fast, and who ends up winning? Building competitive models requires massive, upfront investment in physical infrastructure. As a result, the bulk of this spending flows not to software, but to the companies supplying compute, power, cooling, and connectivity.
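As a quick sanity check on the figures above (a back-of-the-envelope sketch; the prior-year totals are inferred from the stated growth rates, not taken from company filings):

```python
# Back-of-the-envelope check of the capex figures cited above.
# Assumption: only the stated 2025 totals and growth rates are used;
# the prior-year values are implied by them, not sourced from filings.

this_year_total = 650e9          # combined Big Four capex, as stated
growth = 0.60                    # "about 60% more than last year"
implied_last_year = this_year_total / (1 + growth)
print(f"Implied prior-year total: ${implied_last_year / 1e9:.0f}B")  # ~$406B

amazon_budget = 200e9            # Amazon's 2026 plan, as stated
amazon_growth = 0.52             # "52% higher than last year"
implied_amazon_prior = amazon_budget / (1 + amazon_growth)
print(f"Implied Amazon prior-year capex: ${implied_amazon_prior / 1e9:.0f}B")  # ~$132B
```

The implied prior-year figures (roughly $406 billion combined, $132 billion for Amazon) are consistent with the growth rates quoted in the article.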

If you seek upside with less volatility than a single stock or sector, consider the High Quality Portfolio (HQ). HQ has outperformed its benchmark, a combination of the S&P 500, Russell 2000, and S&P mid-cap indices, with returns exceeding 105% since inception.
Where Is The Cash Likely To Go?
Gigawatt Data Centers
Traditional data centers were akin to digital libraries, built to store and retrieve information. Modern AI data centers focus not just on storage but also on compute, and they are far larger. Older hyperscale facilities typically consumed 10 to 50 megawatts of power. New AI campuses operate at the gigawatt scale, 20–100× larger, using as much electricity as roughly 750,000 homes. Training frontier models like Gemini or GPT-5 requires hundreds of thousands of chips communicating in real time, which means they cannot be geographically dispersed; instead, they sit on a single campus linked by miles of specialized fiber. These chips generate extreme heat, far beyond what air cooling can handle, forcing massive investment in liquid-cooling systems that circulate chilled fluid directly over the processors. The spending flows to products tied to GPUs and accelerators, high-speed networking and optical fiber, power generation and grid equipment, liquid-cooling systems, and data center construction.
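The "20–100×" multiple above follows directly from the stated power figures (a quick sketch; it assumes "gigawatt scale" means roughly 1 GW, i.e. 1,000 MW, per campus):

```python
# Sanity check of the "20-100x larger" claim above.
# Assumption: "gigawatt scale" is taken as roughly 1 GW (1,000 MW) per campus.

old_facility_mw = (10, 50)       # typical older hyperscale range, as stated
new_campus_mw = 1_000            # 1 GW, assumed comparison point

hi = new_campus_mw / old_facility_mw[0]   # vs the smallest old facility
lo = new_campus_mw / old_facility_mw[1]   # vs the largest old facility
print(f"New campuses are {lo:.0f}-{hi:.0f}x larger")  # prints "New campuses are 20-100x larger"
```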
Custom AI Chips: Taking the Reins From Nvidia
Big Tech has relied largely on Nvidia for AI chips. Now these companies are spending billions to design their own custom silicon, like Amazon’s Trainium3 and Google’s TPU v6. Running GPUs from the likes of Nvidia and AMD at scale is costly; Amazon says Trainium3 can cut training costs by up to 50%. GPUs are general-purpose tools, while custom ASICs are purpose-built for AI, delivering far better performance per watt and lower power costs. As workloads shift from training to always-on inference, where cost per query matters most, custom silicon becomes strategically essential. Owning the chips also removes reliance on Nvidia’s long allocation cycles and multi-year waitlists.
The Energy Grid: Big Tech as a Power Utility
Large models require enormous and continuous amounts of electricity. Without sufficient power, AI systems cannot operate. AI data centers need baseload power that is available at all times. While solar and wind are useful, their intermittency makes them insufficient on their own for large-scale AI workloads. This is why Microsoft signed an agreement to restart the Three Mile Island facility and why Amazon and Google are investing in small modular reactors (SMRs). These companies are also financing new substations, high-voltage transmission lines, and, in some cases, natural gas plants to secure dedicated and uninterrupted power for large AI data centers.
Who Wins From These Big Spends?
While GPU leaders like Nvidia and memory suppliers such as Micron are clear winners from rising AI infrastructure spending, a broader and potentially more durable opportunity sits deeper in the AI infrastructure stack.
- The Design Partners: Marvell (NASDAQ:MRVL) and Broadcom (NASDAQ:AVGO) are the primary beneficiaries of the shift to custom chips, supplying the networking silicon, interconnects, and co-design expertise that hyperscalers lack in-house. They are the architects helping Amazon and Google build their core compute infrastructure. Marvell surged 10% in Friday’s trading, while Broadcom was up 7%.
- Cooling Systems: Because AI chips generate extreme heat, companies like Vertiv (NYSE:VRT) are seeing surging demand for liquid cooling, power management, and thermal control systems. Cooling is becoming a core constraint, not a secondary cost, in AI data center design. Vertiv surged 10% on Friday.
- Energy Providers: Nuclear and utility giants like Vistra (NYSE:VST) and Constellation Energy (NYSE:CEG) have become the ultimate gatekeepers. They own the power that Big Tech needs, allowing them to sign massive, lucrative long-term contracts. Vistra stock was up almost 5% on Friday.
- Networking Experts: Arista Networks (NYSE:ANET) and Lumen (NYSE:LUMN) benefit from the need for ultra-low-latency, high-bandwidth connections inside and between data center campuses. Network spending scales almost linearly with AI cluster size and model complexity. Lumen stock gained 30% in Friday’s trading.
The Trefis High Quality (HQ) Portfolio, a collection of 30 stocks, has a track record of comfortably outperforming its benchmark, which combines the S&P 500, S&P mid-cap, and Russell 2000 indices. Why is that? As a group, HQ Portfolio stocks have delivered better returns with less risk than the benchmark; less of a roller-coaster ride, as evident in HQ Portfolio performance metrics.