Summary: Amazon and OpenAI have reached a $38 billion partnership under which OpenAI gains access to massive AWS compute, including hundreds of thousands of GPUs and tens of millions of CPUs. This is more than a large cloud-services contract: it marks the arrival of an era of unprecedented AI compute consumption, the elevation of cloud providers into "strategic infrastructure builders," and a new ecosystem in which model makers and infrastructure providers are deeply bound together. AWS's CEO says only chip clusters at the million scale can support the demands of next-generation AI models.
Partnership will enable OpenAI to run its advanced AI workloads on AWS’s world-class infrastructure starting immediately.

Amazon Web Services (AWS) and OpenAI announced a multi-year, strategic partnership that provides AWS’s world-class infrastructure to run and scale OpenAI’s core artificial intelligence (AI) workloads starting immediately. Under this new $38 billion agreement, which will have continued growth over the next seven years, OpenAI is accessing AWS compute comprising hundreds of thousands of state-of-the-art NVIDIA GPUs, with the ability to expand to tens of millions of CPUs to rapidly scale agentic workloads. AWS has unusual experience running large-scale AI infrastructure securely, reliably, and at scale, with clusters topping 500K chips. AWS’s leadership in cloud infrastructure combined with OpenAI’s pioneering advancements in generative AI will help millions of users continue to get value from ChatGPT.
The rapid advancement of AI technology has created unprecedented demand for computing power. As frontier model providers seek to push their models to new heights of intelligence, they are increasingly turning to AWS due to the performance, scale, and security they can achieve. OpenAI will immediately start utilizing AWS compute as part of this partnership, with all capacity targeted to be deployed before the end of 2026, and the ability to expand further into 2027 and beyond.
The infrastructure deployment that AWS is building for OpenAI features a sophisticated architectural design optimized for maximum AI processing efficiency and performance. Clustering the NVIDIA GPUs—both GB200s and GB300s—via Amazon EC2 UltraServers on the same network enables low-latency performance across interconnected systems, allowing OpenAI to efficiently run workloads with optimal performance. The clusters are designed to support various workloads, from serving inference for ChatGPT to training next generation models, with the flexibility to adapt to OpenAI’s evolving needs.

“Scaling frontier AI requires massive, reliable compute,” said OpenAI co-founder and CEO Sam Altman. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”
“As OpenAI continues to push the boundaries of what’s possible, AWS’s best-in-class infrastructure will serve as a backbone for their AI ambitions,” said Matt Garman, CEO of AWS. “The breadth and immediate availability of optimized compute demonstrates why AWS is uniquely positioned to support OpenAI’s vast AI workloads.”

This news continues the companies’ work together to provide cutting-edge AI technology to benefit organizations worldwide. Earlier this year, OpenAI’s open-weight foundation models became available on Amazon Bedrock, bringing these additional model options to millions of customers on AWS. OpenAI has quickly become one of the most popular publicly available model providers on Amazon Bedrock, with thousands of customers—including Bystreet, Comscore, Peloton, Thomson Reuters, Triomics, and Verana Health—working with their models for agentic workflows, coding, scientific analysis, mathematical problem-solving, and more.

November 3 — Amazon and OpenAI have announced a multi-year partnership agreement worth approximately $38 billion.
Under the agreement, OpenAI will obtain large-scale compute resources through AWS: Amazon EC2 UltraServers, hundreds of thousands of NVIDIA GPUs, and the ability to scale to tens of millions of CPUs. AWS says the planned deployment will be completed by the end of 2026 and can continue to expand well beyond that.
After the announcement, Amazon shares rose as much as 5.68% in pre-market trading, while NVIDIA's stock extended its gains to as much as 3%.
On the surface, this is a large cloud-computing services contract; at a deeper level, it reflects three key trends:
First, an era of compute consumption unprecedented in AI history is arriving; second, the role of cloud providers is rapidly shifting from "resource supplier" to "strategic infrastructure builder"; third, model makers and infrastructure providers are binding tightly together, forming a new ecosystem entry point.
Since ChatGPT's breakout success, the scale of model training and inference has grown exponentially, outstripping what traditional cloud resources can supply.
OpenAI noted in its statement that the new partnership gives it the ability to use AWS's hyperscale compute immediately.
AWS CEO Matt Garman said: "When next-generation models demand ultra-low latency, extremely high throughput, and massive parallelism, only networks of clusters with hundreds of thousands to millions of chips can support them." That is no exaggeration: across large-model training, agentic workflows, and inference serving, OpenAI's requirement has been defined as "hundreds of thousands of GPUs plus tens of millions of CPUs."
Compared with the on-demand rental of cloud resources in the past, this deal is "pre-committed, pre-built, pre-positioned," indicating that both sides expect AI model scale, user engagement, and commercialization paths to rise markedly over the next several years.
Industry analysts see cloud providers positioning themselves as "the power grid of the AI world": whoever controls compute, data flow, and network entry points is more likely to dominate how value is divided across the industry chain in the years ahead.
Through AWS, OpenAI can accelerate moving its models from deployment into production service, while AWS leverages OpenAI's capabilities to consolidate its leading position in the AI cloud market. Each serves as an entry point for the other, creating a "model + underlying infrastructure" synergy. (Reprinted from: AI普瑞斯)