

Amazon SageMaker HyperPod features

Scale and accelerate generative AI model development across thousands of AI accelerators

Checkpointless training

Checkpointless training on Amazon SageMaker HyperPod enables automatic recovery from infrastructure faults in minutes, without manual intervention. It eliminates the need for checkpoint-based, job-level restarts, which require pausing the entire cluster, fixing the issue, and recovering from a saved checkpoint. Instead, checkpointless training maintains forward progress through failures: SageMaker HyperPod automatically swaps out faulty components and recovers training using peer-to-peer transfer of model and optimizer states from healthy AI accelerators. It enables over 95% training goodput on clusters with thousands of AI accelerators. With checkpointless training, you can save millions on compute costs, scale training to thousands of AI accelerators, and bring your models to production faster.


Elastic training

Elastic training on Amazon SageMaker HyperPod automatically scales training jobs based on the availability of compute resources, saving hours of engineering time each week previously spent reconfiguring training jobs. Demand for AI accelerators fluctuates constantly as inference workloads scale with traffic patterns, completed experiments release resources, and new training jobs shift workload priorities. SageMaker HyperPod dynamically expands running training jobs to absorb idle AI accelerators, maximizing infrastructure utilization. When higher-priority workloads such as inference or evaluation need resources, training scales down and continues with fewer resources rather than halting entirely, yielding the required capacity according to priorities established through task governance policies. Elastic training helps you accelerate AI model development while reducing cost overruns from underutilized compute.


Task governance

Amazon SageMaker HyperPod provides full visibility and control over compute resource allocation across generative AI model development tasks, such as training and inference. SageMaker HyperPod automatically manages task queues, ensuring the most critical tasks are prioritized while compute resources are used efficiently to reduce model development costs. In a few short steps, administrators can define priorities for different tasks and set limits on how many compute resources each team or project can use. Data scientists and developers then create tasks (for example, a training run, fine-tuning a particular model, or making predictions with a trained model) that SageMaker HyperPod runs automatically, adhering to the compute resource limits and priorities the administrator set. When a high-priority task must be completed immediately but all compute resources are in use, SageMaker HyperPod automatically frees up compute resources from lower-priority tasks. It also uses idle compute resources to accelerate waiting tasks. SageMaker HyperPod provides a dashboard where administrators can monitor and audit tasks that are running or waiting for compute resources.
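For illustration, here is a minimal sketch of how a developer might submit a prioritized training task on a HyperPod EKS cluster. It assumes a Kueue-style scheduler of the kind task governance builds on; the queue name, priority class, namespace, and container image are placeholders that an administrator would have defined.

```python
# Illustrative only: queue and priority names are placeholders defined by an
# administrator; HyperPod task governance builds on Kueue-style scheduling.
from kubernetes import client, config

config.load_kube_config()  # kubeconfig for the HyperPod EKS cluster

job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(
        name="llama-finetune",
        labels={
            "kueue.x-k8s.io/queue-name": "team-a-queue",       # team's queue
            "kueue.x-k8s.io/priority-class": "high-priority",  # admin-defined
        },
    ),
    spec=client.V1JobSpec(
        suspend=True,  # the scheduler admits the job once quota is available
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="trainer",
                        image="my-training-image:latest",  # placeholder image
                        resources=client.V1ResourceRequirements(
                            limits={"nvidia.com/gpu": "8"}
                        ),
                    )
                ],
            )
        ),
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="team-a", body=job)
```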

Flexible training plans

To meet your training timelines and budgets, SageMaker HyperPod helps you create the most cost-efficient training plans, drawing on compute resources from multiple blocks of compute capacity. Once you approve a training plan, SageMaker HyperPod automatically provisions the infrastructure and runs the training jobs on these compute resources without requiring any manual intervention, saving you weeks of effort otherwise spent aligning jobs with compute availability.
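As a hedged sketch, the boto3 calls below show how a training plan might be searched for and purchased programmatically. The instance type, counts, dates, and plan name are illustrative; consult the SearchTrainingPlanOfferings and CreateTrainingPlan API references for the exact fields your SDK version supports.

```python
# Illustrative sketch of reserving capacity with SageMaker training plans.
import boto3
from datetime import datetime

sm = boto3.client("sagemaker")

# Find offerings that can satisfy the desired capacity and time window.
offerings = sm.search_training_plan_offerings(
    InstanceType="ml.p5.48xlarge",         # illustrative accelerator instance
    InstanceCount=16,
    TargetResources=["hyperpod-cluster"],  # use the plan for HyperPod
    StartTimeAfter=datetime(2025, 7, 1),   # placeholder time window
    EndTimeBefore=datetime(2025, 8, 1),
)

# Purchase the first offering; HyperPod later provisions against the plan.
plan = sm.create_training_plan(
    TrainingPlanName="llm-pretrain-july",  # placeholder name
    TrainingPlanOfferingId=offerings["TrainingPlanOfferings"][0][
        "TrainingPlanOfferingId"
    ],
)
print(plan["TrainingPlanArn"])
```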

Amazon SageMaker HyperPod Spot Instances

Spot Instances on SageMaker HyperPod let you access compute capacity at significantly reduced cost. Spot Instances are ideal for fault-tolerant workloads such as batch inference jobs. Prices vary by Region and instance type, typically up to 90% below SageMaker HyperPod On-Demand pricing. Spot Instance prices are set by Amazon EC2 and adjust gradually based on long-term trends in supply and demand for Spot capacity. You pay the Spot price in effect for the period your instances are running, with no upfront commitment required. To learn more about estimated Spot Instance prices and availability, visit the EC2 Spot Instances pricing page. Note that only instance types supported on HyperPod are available for Spot usage on HyperPod.
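Since Spot prices are set by Amazon EC2, you can check recent price history with the EC2 API before sizing a Spot-based workload. A small boto3 sketch with an illustrative instance type (EC2 Spot price history uses EC2 instance type names, without the ml. prefix):

```python
# Query recent EC2 Spot prices with boto3; the instance type is illustrative.
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2")

resp = ec2.describe_spot_price_history(
    InstanceTypes=["g5.12xlarge"],       # EC2 name for the ml.g5.12xlarge type
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.now(timezone.utc) - timedelta(days=1),
)

# Print a few recent price points per Availability Zone.
for entry in resp["SpotPriceHistory"][:5]:
    print(entry["AvailabilityZone"], entry["SpotPrice"], entry["Timestamp"])
```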

Optimized recipes to customize models

With SageMaker HyperPod recipes, data scientists and developers of all skill levels benefit from state-of-the-art performance and can quickly start training and fine-tuning publicly available foundation models, including Llama, Mixtral, Mistral, and DeepSeek models. In addition, you can customize Amazon Nova models, including Nova Micro, Nova Lite, and Nova Pro, using a suite of techniques including Supervised Fine-Tuning (SFT), Knowledge Distillation, Direct Preference Optimization (DPO), Proximal Policy Optimization (PPO), and Continued Pre-Training, with support for both parameter-efficient and full-model training across SFT, Distillation, and DPO. Each recipe includes a training stack that has been tested by AWS, saving you weeks of tedious work testing different model configurations. You can switch between GPU-based and AWS Trainium-based instances with a one-line recipe change, enable automated model checkpointing for improved training resiliency, and run workloads in production on SageMaker HyperPod.
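As an illustration, the sketch below launches a recipe-based fine-tuning job through the SageMaker Python SDK, assuming its training-recipe support; the recipe path, IAM role, override values, and S3 location are placeholders.

```python
# Illustrative recipe-based fine-tuning launch; all values are placeholders.
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    training_recipe="fine-tuning/llama/hf_llama3_8b_seq8k_gpu_fine_tuning",
    recipe_overrides={
        "trainer": {"max_steps": 500},   # illustrative override
        "model": {"train_batch_size": 2},
    },
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
    instance_type="ml.p5.48xlarge",
    instance_count=4,
)

# Start training against a placeholder S3 dataset location.
estimator.fit(inputs={"train": "s3://my-bucket/train/"})
```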

Amazon Nova Forge is a first-of-its-kind program that offers organizations the easiest and most cost-effective way to build their own frontier models using Nova. Access and train from intermediate checkpoints of Nova models, mix Amazon-curated datasets with proprietary data during training, and use SageMaker HyperPod recipes to train your own models. With Nova Forge, you can use your own business data to unlock use-case-specific intelligence and price-performance improvements for your tasks.


High-performing distributed training

SageMaker HyperPod accelerates distributed training by automatically splitting your models and training datasets across AWS accelerators. It helps you optimize your training job for the AWS network infrastructure and cluster topology, and streamlines model checkpointing by optimizing how often checkpoints are saved, ensuring minimum overhead during training.
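For context, this is what the underlying split looks like at the framework level: a minimal, generic PyTorch sketch in which each accelerator holds a model replica and a shard of the dataset. It is illustrative only, not a HyperPod-specific API; the model, data, and loss are toy placeholders.

```python
# Generic data-parallel training: each rank gets a model replica (DDP) and a
# dataset shard (DistributedSampler). Launch with torchrun on the cluster.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = DDP(torch.nn.Linear(1024, 1024).cuda())   # toy model replica per rank
dataset = TensorDataset(torch.randn(4096, 1024))  # toy dataset
sampler = DistributedSampler(dataset)             # shards data across ranks
loader = DataLoader(dataset, batch_size=32, sampler=sampler)

optimizer = torch.optim.AdamW(model.parameters())
for (batch,) in loader:
    loss = model(batch.cuda()).pow(2).mean()      # placeholder loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

dist.destroy_process_group()
```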

Advanced observability and experimentation tools

SageMaker HyperPod observability provides a unified dashboard preconfigured in Amazon Managed Grafana, with monitoring data automatically published to an Amazon Managed Service for Prometheus workspace. You can see real-time performance metrics, resource utilization, and cluster health in a single view, allowing teams to quickly spot bottlenecks, prevent costly delays, and optimize compute resources. SageMaker HyperPod also integrates with Amazon CloudWatch Container Insights for deeper insight into cluster performance, health, and usage. Managed TensorBoard in SageMaker helps you save development time by visualizing model architecture to identify and remediate convergence issues. Managed MLflow in SageMaker helps you efficiently manage experiments at scale.

Screenshot of a GPU cluster dashboard displaying metrics and performance data for HyperPod, including GPU temperature, power usage, memory usage, NVLink bandwidth, and cluster alerts.
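On the experimentation side, logging to a SageMaker managed MLflow tracking server takes only standard MLflow calls. A brief sketch, in which the tracking server ARN, experiment name, and metric values are placeholders (the sagemaker-mlflow plugin lets the server's ARN serve as the tracking URI):

```python
# Log experiment metrics to a SageMaker managed MLflow tracking server.
# Requires the mlflow and sagemaker-mlflow packages; values are placeholders.
import mlflow

mlflow.set_tracking_uri(
    "arn:aws:sagemaker:us-east-1:123456789012:mlflow-tracking-server/my-server"
)
mlflow.set_experiment("llama-finetune")

with mlflow.start_run():
    mlflow.log_params({"learning_rate": 3e-4, "batch_size": 32})
    for step, loss in enumerate([2.1, 1.7, 1.4]):  # placeholder loss curve
        mlflow.log_metric("train_loss", loss, step=step)
```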

Workload scheduling and orchestration

SageMaker HyperPod workload scheduling is highly customizable: you can orchestrate workloads using Slurm or Amazon Elastic Kubernetes Service (Amazon EKS), and select and install any frameworks or tools you need. All clusters are provisioned with the instance type and count you choose, and they are retained for your use across workloads. With Amazon EKS support in SageMaker HyperPod, you can manage and operate clusters with a consistent Kubernetes-based administrator experience, and efficiently run and scale workloads from training to fine-tuning to inference. You can also share compute capacity and switch between Slurm and Amazon EKS for different types of workloads.

Automatic cluster health check and repair

SageMaker HyperPod regularly runs an array of health checks for accelerator and network integrity to detect faulty hardware. If any instances become defective during a model development workload, SageMaker HyperPod automatically detects and addresses the infrastructure issue.
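You can also inspect node health yourself. A hedged boto3 sketch using the ListClusterNodes API; the cluster name is a placeholder, and field names follow the SageMaker API reference at the time of writing:

```python
# List HyperPod cluster nodes and print each instance's health status.
import boto3

sm = boto3.client("sagemaker")
resp = sm.list_cluster_nodes(ClusterName="my-cluster")  # placeholder name

for node in resp["ClusterNodeSummaries"]:
    status = node["InstanceStatus"]["Status"]  # e.g. Running, SystemUpdating
    print(f'{node["InstanceGroupName"]}/{node["InstanceId"]}: {status}')
```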

Accelerate open-weights model deployments from SageMaker JumpStart

SageMaker HyperPod streamlines the deployment of open-weights FMs from SageMaker JumpStart, as well as fine-tuned models from Amazon S3 and Amazon FSx. SageMaker HyperPod automatically provisions the required infrastructure and configures endpoints, eliminating manual provisioning. With SageMaker HyperPod task governance, endpoint traffic is continuously monitored and compute resources are dynamically adjusted, while comprehensive performance metrics are simultaneously published to the observability dashboard for real-time monitoring and optimization.

Screenshot of the deployment settings for deploying a model endpoint using SageMaker HyperPod in SageMaker Studio, used for large-scale inference with pre-provisioned compute. The interface shows fields for deployment name, HyperPod cluster selection, instance type, namespace, auto-scaling options, and the model being deployed.

Managed tiered checkpointing

SageMaker HyperPod managed tiered checkpointing uses CPU memory to store frequent checkpoints for rapid recovery, while periodically persisting data to Amazon Simple Storage Service (Amazon S3) for long-term durability. This hybrid approach minimizes lost training progress and significantly reduces the time to resume training after a failure. Customers can configure checkpoint frequency and retention policies across both the in-memory and persistent storage tiers. By keeping frequent checkpoints in memory, customers can recover quickly while minimizing storage costs. Because the feature integrates with PyTorch's Distributed Checkpoint (DCP), customers can implement checkpointing with only a few lines of code while gaining the performance benefits of in-memory storage.
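To give a sense of scale, here is a minimal sketch of plain PyTorch Distributed Checkpoint usage, the interface tiered checkpointing integrates with. It saves to a filesystem path for illustration; the tiered in-memory and S3 storage plug in behind the same API, and the path and toy model are placeholders.

```python
# Minimal PyTorch Distributed Checkpoint (DCP) usage; launch with torchrun.
import torch
import torch.distributed as dist
import torch.distributed.checkpoint as dcp

dist.init_process_group("nccl")
model = torch.nn.Linear(1024, 1024).cuda()       # toy model
optimizer = torch.optim.AdamW(model.parameters())

state = {"model": model.state_dict(), "optim": optimizer.state_dict()}

# Save a checkpoint. With managed tiered checkpointing, frequent copies land
# in CPU memory and periodic copies persist to Amazon S3.
dcp.save(state, checkpoint_id="/fsx/checkpoints/step_1000")  # placeholder path

# On recovery, load back into the same state structure in place.
dcp.load(state, checkpoint_id="/fsx/checkpoints/step_1000")

dist.destroy_process_group()
```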


Maximize resource utilization with GPU partitioning

SageMaker HyperPod enables administrators to partition GPU resources into smaller, isolated compute units to maximize GPU utilization. You can run diverse generative AI tasks on a single GPU instead of dedicating full GPUs to tasks that need only a fraction of the resources. With real-time performance metrics and resource utilization monitoring across GPU partitions, you gain visibility into how tasks are using compute resources. This optimized allocation and simplified setup accelerate generative AI development, improve GPU utilization, and deliver efficient GPU resource usage across tasks at scale.
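As a hypothetical illustration, scheduling a small task onto a partition can look like requesting a fractional-GPU resource on the cluster. The sketch below assumes MIG-style partitions exposed as Kubernetes extended resources; the resource name, image, and namespace are placeholders whose exact values depend on how an administrator configured the partitions.

```python
# Request one MIG-style GPU slice for a small task instead of a whole GPU.
# Resource name, image, and namespace are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # kubeconfig for the HyperPod EKS cluster

pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="small-inference-task"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="worker",
                image="my-inference-image:latest",
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/mig-1g.10gb": "1"}  # one GPU partition
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```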
