Amazon SageMaker HyperPod now supports running IDEs and Notebooks to accelerate AI development
Amazon SageMaker HyperPod now supports IDEs and Notebooks, enabling AI developers to run JupyterLab or Code Editor, or to connect their local IDEs, and run interactive AI workloads directly on HyperPod clusters.
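As a rough illustration of what connecting to a cluster involves under the hood, the sketch below uses boto3 to look up an existing HyperPod cluster and the EKS control plane it is orchestrated by, which is where IDE and notebook workloads are scheduled. This is a minimal sketch, not the managed workflow itself: the cluster name is a placeholder, and the response fields referenced (Orchestrator, InstanceGroups) should be verified against the DescribeCluster API reference.

```python
import boto3

# Assumes AWS credentials and region are already configured.
sagemaker = boto3.client("sagemaker")

# "my-hyperpod-cluster" is a placeholder for an existing HyperPod cluster name.
cluster = sagemaker.describe_cluster(ClusterName="my-hyperpod-cluster")

print("Status:", cluster.get("ClusterStatus"))

# EKS-orchestrated HyperPod clusters report the underlying EKS cluster ARN,
# which IDE and notebook workloads run against.
eks_arn = cluster.get("Orchestrator", {}).get("Eks", {}).get("ClusterArn")
print("EKS cluster:", eks_arn)

# GPU capacity comes from the cluster's instance groups.
for group in cluster.get("InstanceGroups", []):
    print(group.get("InstanceGroupName"),
          group.get("InstanceType"),
          group.get("CurrentCount"))
```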
The release allows AI developers to run IDEs and notebooks on the same persistent HyperPod EKS clusters used for training and inference. Developers can leverage HyperPod's scalable GPU capacity with familiar tools such as the HyperPod CLI, while sharing data between IDEs and training jobs through mounted file systems such as Amazon FSx and Amazon EFS. The solution supports running multiple IDEs on the same GPU instance, and even on a single GPU, by leveraging Multi-Instance GPU (MIG) support on HyperPod; the sketch that follows shows the underlying Kubernetes mechanics of both points.
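The example below uses the Kubernetes Python client to schedule a JupyterLab pod onto the HyperPod EKS cluster, requesting a single MIG slice and mounting a shared file system through a PersistentVolumeClaim. The MIG resource name (nvidia.com/mig-1g.10gb), the PVC name (fsx-shared-claim), the container image, and the namespace are all assumptions that depend on how the cluster's device plugin and storage are configured; in practice the managed IDE experience and the HyperPod CLI handle this provisioning for you.

```python
from kubernetes import client, config

# Assumes kubectl is already pointed at the HyperPod EKS cluster.
config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="jupyterlab-mig", namespace="default"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="jupyterlab",
                image="jupyter/minimal-notebook:latest",  # placeholder image
                command=["start-notebook.sh"],
                resources=client.V1ResourceRequirements(
                    # Request one MIG slice instead of a whole GPU; the exact
                    # resource name depends on the MIG profile configured on
                    # the cluster's GPU instances.
                    limits={"nvidia.com/mig-1g.10gb": "1"},
                ),
                volume_mounts=[
                    client.V1VolumeMount(
                        name="shared-fsx",
                        mount_path="/home/jovyan/shared",  # shared with training jobs
                    )
                ],
            )
        ],
        volumes=[
            client.V1Volume(
                name="shared-fsx",
                # Hypothetical PVC backed by FSx (or EFS) mounted on the cluster.
                persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                    claim_name="fsx-shared-claim"
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

Because each IDE pod only claims a MIG slice, several developers can share one physical GPU while still reading and writing the same mounted file system as their training jobs.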
Administrators can maximize the return on their CPU and GPU investments through unified governance across IDE, training, and inference workloads using HyperPod Task Governance. HyperPod Observability provides comprehensive usage metrics, including CPU, GPU, and memory consumption, enabling administrators to optimize cluster utilization and manage costs effectively.
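For administrators who want to consume such usage metrics programmatically rather than through dashboards, a minimal boto3 sketch against CloudWatch might look like the following. The namespace, metric name, and dimension here are placeholders, not confirmed HyperPod names; the actual values depend on how observability is configured for the cluster and can be discovered with list_metrics() or taken from the HyperPod observability documentation.

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

# Placeholder namespace, metric, and dimension names (assumptions).
response = cloudwatch.get_metric_statistics(
    Namespace="ExampleNamespace/HyperPod",
    MetricName="GPUUtilization",
    Dimensions=[{"Name": "ClusterName", "Value": "my-hyperpod-cluster"}],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=6),
    EndTime=datetime.now(timezone.utc),
    Period=300,                # 5-minute buckets
    Statistics=["Average"],
)

# Print average GPU utilization over the last six hours, oldest first.
for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1))
```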
This feature is available in all AWS Regions where Amazon SageMaker HyperPod is currently available, excluding China and GovCloud (US) regions. To learn more, visit our documentation.