Key Highlights
- Amazon Web Services to receive 1 million GPUs from Nvidia by the end of 2027.
- Deliveries begin in 2025 and run through 2027.
- Agreement encompasses networking equipment, Groq inference processors, and upcoming Blackwell and Rubin architectures.
- Amazon will deploy seven distinct Nvidia processors for AI inference operations.
- NVDA and AMZN shares both climbed in extended trading hours after the disclosure.
The Amazon Web Services partnership represents one of Nvidia’s most substantial individual customer chip commitments ever disclosed. The agreement becomes increasingly significant as additional details emerge.
According to Nvidia VP Ian Buck’s statement to Reuters, the million-GPU delivery schedule begins in 2025 and runs through the end of 2027. That timeframe aligns with CEO Jensen Huang’s forecast of a $1 trillion addressable market for Nvidia’s Blackwell and Rubin processor lines over the same period.
The arrangement extends far beyond simple GPU quantities. Amazon Web Services is acquiring Nvidia’s complete hardware ecosystem, including Spectrum-X and ConnectX networking infrastructure. This represents a notable development given that AWS has traditionally relied on proprietary networking technology. Integrating Nvidia’s networking solutions into its infrastructure signals a strategic pivot.
Amazon Web Services Embraces Comprehensive Nvidia Inference Solutions
AI inference — the operational phase where artificial intelligence models produce outputs and execute tasks — forms the foundation of this partnership’s technical strategy. Amazon Web Services intends to utilize seven different Nvidia chips for managing inference operations.
Buck stated directly: “Inference is hard. It’s wickedly hard. To be the best at inference, it is not a one chip pony. We actually use all seven chips.”
The Groq processors, unveiled by Nvidia earlier this week after securing a $17 billion licensing arrangement with an AI chip startup, form part of that inference ecosystem. These chips operate in conjunction with six additional Nvidia processors to provide what the company characterizes as industry-leading inference capabilities.
Amazon Web Services will also integrate Nvidia’s Blackwell processors and is expected to adopt the forthcoming Rubin platform upon its release. Neither Nvidia nor Amazon has disclosed the financial terms of the partnership.
Both stocks posted modest gains in Thursday’s after-hours session following the announcement, after NVDA closed roughly 1% lower and AMZN about 0.5% lower in regular trading.
Amazon Maintains Internal Chip Development Alongside Nvidia Partnership
Amazon continues developing proprietary AI processors, including its Trainium2 chip. Even so, the cloud giant still relies on Nvidia for its most demanding computational workloads. The two strategies appear complementary rather than competing.
This agreement underscores the heavy, ongoing capital spending on AI infrastructure among leading cloud providers. AWS isn’t abandoning its custom hardware; rather, it’s supplementing those systems with Nvidia technology for specific high-performance workloads.
The Nvidia-AWS partnership was initially revealed this week without detailed timelines. Buck’s Thursday comments to Reuters delivered the most comprehensive information to date: deliveries beginning in 2025, extending through late 2027, and encompassing a diverse range of Nvidia offerings across computing, networking, and inference technologies.