Why Amazon EC2 P6-B200 instances?
Amazon EC2 P6-B200 instances, accelerated by NVIDIA Blackwell GPUs, offer up to 2x the performance of P5en instances for AI training and inference. They enable faster training of next-generation AI models and improve performance for real-time inference in production workloads. P6-B200 instances are an ideal option for medium-to-large-scale training and inference applications that use reasoning models and agentic AI.
Product Details
| Instance size | Available in EC2 UltraServers | Blackwell GPUs | GPU memory (GB) | vCPUs | Memory (TiB) | Instance storage (TB) | Network bandwidth (Gbps) | EBS bandwidth (Gbps) |
|---|---|---|---|---|---|---|---|---|
| p6-b200.48xlarge | No | 8 | 1,440 HBM3e | 192 | 2 | 8 x 3.84 | 8 x 400 | 100 |
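As a quick orientation, the sketch below shows one way to request a p6-b200.48xlarge instance with boto3. It is not an official AWS example: the region, AMI ID, key pair, and subnet are placeholder assumptions you would replace with your own values, and P6-B200 availability varies by Region.

```python
import boto3

# Assumed region; P6-B200 is only available in select Regions.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder: e.g. a Deep Learning AMI ID
    InstanceType="p6-b200.48xlarge",      # instance type from the table above
    KeyName="my-key-pair",                # placeholder key pair name
    SubnetId="subnet-0123456789abcdef0",  # placeholder subnet in a supported AZ
    MinCount=1,
    MaxCount=1,
)

print("Launched:", response["Instances"][0]["InstanceId"])
```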
Getting started with HPC use cases
P6-B200 instances are ideal for running engineering simulations, computational finance, seismic analysis, molecular modeling, genomics, and other GPU-based HPC workloads. HPC applications often require high network performance, fast storage, large amounts of memory, high compute capability, or all of the above. P6-B200 instances support Elastic Fabric Adapter (EFA), which enables HPC applications that use the Message Passing Interface (MPI) to scale to thousands of GPUs. AWS Batch and AWS ParallelCluster help HPC developers quickly build and scale distributed HPC applications; a minimal MPI sketch follows.
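To make the MPI scaling point concrete, here is a minimal mpi4py sketch of a collective operation. Nothing in it is P6-specific: the MPI library picks up EFA when the instances and libfabric are configured for it, and in a real workload each rank would typically drive one GPU. The launch command is an assumption about your scheduler (mpirun directly, or via AWS Batch or ParallelCluster).

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's rank within the job
size = comm.Get_size()   # total number of ranks across all instances

# Each rank contributes its rank number; allreduce sums the values on every rank.
total = comm.allreduce(rank, op=MPI.SUM)

if rank == 0:
    print(f"{size} ranks participated, sum of ranks = {total}")
```

Launched with something like `mpirun -n 1536 python allreduce_demo.py` across a cluster of P6-B200 instances, the same script scales without code changes; the interconnect details stay in the MPI/EFA layer.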