Amazon Web Services (AWS) and OpenAI have announced a landmark multi-year strategic partnership valued at $38 billion, under which OpenAI will immediately begin running and scaling its core artificial intelligence workloads on AWS infrastructure. The collaboration marks one of the most significant cloud infrastructure commitments in the AI sector to date, with continued growth expected over the next seven years.
As part of the agreement, OpenAI will gain access to hundreds of thousands of NVIDIA GPUs hosted on Amazon EC2 UltraServers, with the ability to expand to tens of millions of CPUs for scaling agentic workloads. This infrastructure will be optimized for OpenAI’s most advanced generative and agentic AI workloads, from powering ChatGPT to training future foundation models. The partnership allows OpenAI to leverage AWS’s established expertise in running large-scale AI clusters (some exceeding half a million chips) while benefiting from AWS’s security, performance, and cost advantages.
The companies emphasized that all initial compute capacity will be deployed before the end of 2026, with additional expansions expected through 2027 and beyond. The infrastructure will integrate tightly interconnected GPU clusters, including NVIDIA GB200 and GB300 chips, to enable low-latency, high-performance AI operations. The system’s flexible architecture is designed to adapt to OpenAI’s evolving workload demands as frontier models continue to grow in complexity and computational intensity.
This partnership builds on a growing relationship between the two companies. Earlier this year, OpenAI’s open-weight foundation models became available on Amazon Bedrock, giving millions of AWS customers access to OpenAI’s technology for use cases including software development, data analytics, scientific research, and agentic automation. Companies such as Bystreet, Comscore, Peloton, Thomson Reuters, Triomics, and Verana Health are already using OpenAI’s models through AWS for enterprise and scientific applications.
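For developers, that access looks like an ordinary Bedrock runtime call. The sketch below, written in Python with boto3, shows roughly how a customer might invoke an OpenAI open-weight model through Bedrock's Converse API; the model identifier, region, and prompt are illustrative assumptions rather than values confirmed in this announcement.

```python
# Minimal sketch: calling an OpenAI open-weight model hosted on Amazon Bedrock
# via the boto3 Converse API. The modelId and region below are assumptions for
# illustration; check the Bedrock console for the IDs available in your account.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")

response = bedrock.converse(
    modelId="openai.gpt-oss-120b-1:0",  # assumed/illustrative model identifier
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize this week's sales data in three bullet points."}],
        }
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

# The Converse API returns the assistant message as a list of content blocks.
print(response["output"]["message"]["content"][0]["text"])
```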
KEY QUOTES:
“Scaling frontier AI requires massive, reliable compute. Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”
Sam Altman, Co-Founder and CEO, OpenAI
“As OpenAI continues to push the boundaries of what’s possible, AWS’s best-in-class infrastructure will serve as a backbone for their AI ambitions. The breadth and immediate availability of optimized compute demonstrates why AWS is uniquely positioned to support OpenAI’s vast AI workloads.”
Matt Garman, CEO, Amazon Web Services (AWS)