Project Ceiba
Constructing one of the world's fastest AI supercomputers in the cloud
Project Ceiba, a groundbreaking collaboration between AWS and NVIDIA, aims to push the boundaries of artificial intelligence (AI) by constructing one of the world's fastest AI supercomputers in the cloud. Hosted exclusively on AWS, this cutting-edge supercomputer will power NVIDIA's research and development efforts in AI.
Drive cutting-edge innovation
NVIDIA research and development teams will harness the immense power of Project Ceiba to drive advancements in a wide range of cutting-edge fields, including large language models (LLMs), graphics (images, videos, and 3D generation), simulation, digital biology, robotics, autonomous vehicles, climate prediction with NVIDIA Earth-2, and more. This groundbreaking initiative will propel NVIDIA’s work to advance generative AI, shaping the future of artificial intelligence and its applications across diverse domains.
Scalable AI infrastructure
Project Ceiba is available via the NVIDIA DGX Cloud architecture. DGX Cloud is an end-to-end AI platform for developers, offering scalable capacity built on the latest NVIDIA architecture and co-engineered at every layer with AWS. DGX Cloud is available on AWS with GB200, and during NVIDIA GTC DC 2025, NVIDIA also announced the addition of GB300 NVL72. Project Ceiba is built upon AWS's purpose-built AI infrastructure, engineered to deliver the immense scale, enhanced security, and unparalleled performance necessary for a supercomputer of this magnitude.
1,600 Gbps of data per superchip, enabling lightning-fast data transfer and processing
20,736 NVIDIA Blackwell GPUs, the first-of-its-kind supercomputer
Features
Project Ceiba's configuration now includes GB300s as well as 20,736 NVIDIA GB200 Grace Blackwell Superchips. This first-of-its-kind supercomputer is built using NVIDIA's latest GB200 NVL72, a liquid-cooled, rack-scale system featuring fifth-generation NVLink that scales to 20,736 Blackwell GPUs connected to 10,368 NVIDIA Grace CPUs. The supercomputer is capable of processing a massive 414 exaflops of AI, around 375 times more powerful than Frontier, currently the world's fastest supercomputer. If the entire world's current supercomputing capacity were combined, it wouldn't reach 1% of the computing power represented by 414 exaflops. To put this into perspective, it is equivalent to having over 6 billion of the world's most advanced laptop computers working in tandem. To put this further into perspective, if every human on Earth performed one calculation per second, it would take them over 1,660 years to match what Project Ceiba can achieve in just one second.
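As a quick sanity check, these comparisons can be reproduced with a few lines of back-of-envelope Python; the Frontier performance (~1.1 exaflops) and world population (~7.9 billion) used below are assumptions, not figures stated on this page.

```python
# Back-of-envelope check of the comparisons above.
# Assumptions (not from this page): Frontier at ~1.1 exaflops,
# world population of ~7.9 billion people.
ceiba_flops = 414e18          # 414 exaflops of AI compute
frontier_flops = 1.1e18       # ~1.1 exaflops (assumed)
world_population = 7.9e9      # ~7.9 billion people (assumed)

print(f"Ceiba vs. Frontier: ~{ceiba_flops / frontier_flops:.0f}x")

# One calculation per person per second -> seconds needed to match
# one second of Ceiba, converted to years.
seconds_needed = ceiba_flops / world_population
years_needed = seconds_needed / (365.25 * 24 * 3600)
print(f"Everyone computing at 1 op/s: ~{years_needed:,.0f} years")
```

Under these assumptions the script prints roughly 376x and about 1,661 years, consistent with the figures above.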
Project Ceiba is the first system to leverage the massive scale-out capabilities enabled by fourth-generation AWS Elastic Fabric Adapter (EFA) networking, which provides an unprecedented 1,600 Gbps of low-latency, high-bandwidth networking throughput per superchip for lightning-fast data transfer and processing.
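To make that figure concrete, the sketch below converts 1,600 Gbps into bytes per second and estimates how long a single superchip's EFA links would take to move a large model checkpoint; the 1 TB checkpoint size is a hypothetical workload chosen for illustration, not a figure from this page.

```python
# What 1,600 Gbps per superchip means in practice (illustrative only).
efa_gbps_per_superchip = 1_600
bytes_per_second = efa_gbps_per_superchip * 1e9 / 8   # = 200 GB/s per superchip

checkpoint_bytes = 1e12  # hypothetical 1 TB of model weights (assumed)
seconds = checkpoint_bytes / bytes_per_second

print(f"Per-superchip EFA throughput: {bytes_per_second / 1e9:.0f} GB/s")
print(f"Time to move a 1 TB checkpoint over one superchip's links: {seconds:.0f} s")
```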
Project Ceiba will incorporate industry-leading security features designed to protect even the most sensitive AI data. NVIDIA's Blackwell GPU architecture, which provides secure communication between GPUs and is integrated with AWS Nitro System and EFA technologies, will enable end-to-end encryption of data for generative AI workloads. This joint solution allows sensitive AI data to be decrypted and loaded into the GPUs while maintaining complete isolation from the infrastructure operators, all while verifying the authenticity of the applications used to process the data. Using the Nitro System, customers can cryptographically validate their applications to AWS Key Management Service (KMS) and decrypt data only when the necessary checks pass, ensuring end-to-end encryption for their data as it flows through generative AI workloads. Read this blog and visit the secure AI webpage to learn more.
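The exact Blackwell and Nitro integration for Project Ceiba is not detailed on this page, but the attestation-gated decrypt pattern it describes exists today with the Nitro System and KMS. The sketch below uses that flow as an analogue: `get_attestation_document()` is a hypothetical helper standing in for the Nitro Security Module call that produces a signed attestation document, and the ciphertext value is a placeholder.

```python
# Minimal sketch of an attestation-gated KMS decrypt, assuming the
# Nitro Enclaves + KMS flow as an analogue for the pattern described above.
import boto3

def get_attestation_document() -> bytes:
    """Hypothetical helper: inside a Nitro-attested environment this would
    request a signed attestation document (measurements of the running
    application) from the Nitro Security Module."""
    raise NotImplementedError

kms = boto3.client("kms", region_name="us-east-1")

# Ciphertext previously produced with a KMS key whose policy requires
# specific attestation measurements (placeholder value).
encrypted_data_key = b"<ciphertext from kms:Encrypt or kms:GenerateDataKey>"

# When Recipient is supplied, KMS checks the attestation document against the
# key policy's conditions and, only if the checks pass, returns the data key
# re-encrypted to the attested environment's public key.
response = kms.decrypt(
    CiphertextBlob=encrypted_data_key,
    Recipient={
        "KeyEncryptionAlgorithm": "RSAES_OAEP_SHA_256",
        "AttestationDocument": get_attestation_document(),
    },
)
wrapped_key = response["CiphertextForRecipient"]  # never exposed as plaintext to the host
```

Because KMS returns the key material encrypted to the attested environment rather than in plaintext, the infrastructure operator never sees the decrypted data, which is the isolation property the paragraph above describes.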