Using Amazon EKS, Miro accelerated time to market for the new, innovative features in its Intelligent Canvas, a significant product advancement launched in 2024. Miro now performs upgrades, deploys applications, and manages changes in minutes instead of weeks. The team utilizes out-of-the-box management features within Amazon EKS—such as automated cluster provisioning and upgrades—to quickly and easily spin up clusters and deploy to new regions, managing cluster lifecycles using infrastructure as code. Additionally, Miro takes advantage of built-in monitoring and metrics in Amazon EKS. “All the features that Amazon EKS provides, including the managed control plane, made it superior compared with managing our own Kubernetes clusters,” says Medvetchii.
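The article does not name the specific infrastructure-as-code tooling Miro uses, but the lifecycle operations it describes map onto the Amazon EKS API. The following minimal Python sketch, using boto3, shows how a cluster might be provisioned and its control plane later upgraded; the cluster name, role ARN, subnet IDs, and Kubernetes versions are placeholders, not Miro's actual configuration.

```python
# Minimal sketch: provisioning an Amazon EKS cluster and initiating a control
# plane upgrade with boto3. The name, role ARN, subnet IDs, and versions are
# placeholders for illustration only.
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Create a cluster; Amazon EKS provisions and manages the control plane.
eks.create_cluster(
    name="demo-cluster",
    version="1.29",
    roleArn="arn:aws:iam::123456789012:role/eks-cluster-role",
    resourcesVpcConfig={
        "subnetIds": ["subnet-aaa111", "subnet-bbb222", "subnet-ccc333"],
    },
)

# Wait until the control plane is active before deploying workloads.
eks.get_waiter("cluster_active").wait(name="demo-cluster")

# Later, upgrade the control plane in place with a single API call.
eks.update_cluster_version(name="demo-cluster", version="1.30")
```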
By using Amazon EKS to manage its clusters, Miro decreased its operational overhead significantly. Amazon EKS simplifies the management of the Kubernetes control plane by ensuring it is always operational, up-to-date, and automatically scaled to meet Miro’s needs. “Using the managed control plane that Amazon EKS offers, our small team can focus on scaling, building new features, and providing developers with self-service,” Medvetchii adds.
Miro has created self-service options for developers to deploy infrastructure using Amazon EKS. To set up standardized deployments, the team implemented tools such as Karpenter, an open-source Kubernetes cluster autoscaler, and Kyverno, a policy engine designed for Kubernetes. Using these tools, the infrastructure team presets aspects like compliance, Domain Name System (DNS) registration, secrets management, and policies, so developers simply select the right instance and resource types for their workloads. “Now developers can create new microservices on the fly without having to request help or approval from a centralized team,” says Fior Kuntzer. The governance that Miro put in place not only facilitated self-service but also standardized microservice architecture and resource deployment strategies, simplifying troubleshooting and the onboarding of new team members.
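As an illustration of how such preset guardrails can be expressed, the sketch below applies a Kyverno ClusterPolicy using the official Kubernetes Python client. The policy, which requires a "team" label on every Deployment, is a hypothetical example of this kind of rule rather than one of Miro's actual policies.

```python
# Minimal sketch: applying a Kyverno ClusterPolicy with the Kubernetes Python
# client. The policy name, label key, and message are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster

# Example guardrail: every Deployment must declare a "team" label so that
# ownership is clear without a central approval step.
policy = {
    "apiVersion": "kyverno.io/v1",
    "kind": "ClusterPolicy",
    "metadata": {"name": "require-team-label"},
    "spec": {
        "validationFailureAction": "Enforce",
        "rules": [
            {
                "name": "check-team-label",
                "match": {"any": [{"resources": {"kinds": ["Deployment"]}}]},
                "validate": {
                    "message": "A 'team' label is required on every Deployment.",
                    "pattern": {"metadata": {"labels": {"team": "?*"}}},
                },
            }
        ],
    },
}

# ClusterPolicy is cluster-scoped, so use the cluster-level custom objects API.
client.CustomObjectsApi().create_cluster_custom_object(
    group="kyverno.io", version="v1", plural="clusterpolicies", body=policy
)
```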
To build a scalable and highly available Kubernetes cluster, Miro uses Amazon EKS with a managed node group to host Karpenter across three Availability Zones. Karpenter simplifies Kubernetes infrastructure management by launching right-sized nodes based on workload requirements. To optimize performance and efficiency, Miro employs open-source components like KEDA, a Kubernetes-based event-driven autoscaler that scales pods to match workload demand. The compute infrastructure integrates with several AWS services, including AWS Secrets Manager, Amazon Elastic Container Registry (Amazon ECR), and Elastic Load Balancing (ELB). For secure operations, Miro uses IAM roles for service accounts to manage credentials for workloads, allowing them to make authorized API calls to AWS services. To achieve high compute flexibility, Miro uses Karpenter NodePools and dynamically provisions Amazon EC2 Spot Instances, AWS Graviton instances, or x86 instances based on specific workload requirements.
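A Karpenter NodePool of the kind described above might look like the following sketch, which registers a pool that allows Spot or On-Demand capacity on either Graviton (arm64) or x86 (amd64) instances. The pool name, CPU limit, and the preexisting EC2NodeClass named "default" are assumptions for illustration, and the Karpenter v1beta1 API is assumed; Miro's actual NodePool definitions are not shown in the article.

```python
# Minimal sketch: a Karpenter NodePool that lets the scheduler choose among
# Spot, On-Demand, Graviton (arm64), and x86 (amd64) capacity. Assumes the
# Karpenter v1beta1 API and an existing EC2NodeClass named "default".
from kubernetes import client, config

config.load_kube_config()

node_pool = {
    "apiVersion": "karpenter.sh/v1beta1",
    "kind": "NodePool",
    "metadata": {"name": "general-purpose"},
    "spec": {
        "template": {
            "spec": {
                "nodeClassRef": {"name": "default"},
                "requirements": [
                    {
                        "key": "karpenter.sh/capacity-type",
                        "operator": "In",
                        "values": ["spot", "on-demand"],
                    },
                    {
                        "key": "kubernetes.io/arch",
                        "operator": "In",
                        "values": ["arm64", "amd64"],
                    },
                ],
            }
        },
        # Cap the total compute this pool may launch.
        "limits": {"cpu": "1000"},
    },
}

# NodePool is cluster-scoped, so use the cluster-level custom objects API.
client.CustomObjectsApi().create_cluster_custom_object(
    group="karpenter.sh", version="v1beta1", plural="nodepools", body=node_pool
)
```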
Miro also achieves greater scalability and cost optimization using Amazon EKS. The company uses automatic scaling tools and features, including Karpenter and KEDA, to absorb traffic spikes, scaling from a few hundred nodes overnight to thousands during the day. “The ability to scale up and down according to usage helps us reduce costs and be more efficient in how we use our compute,” says Medvetchii. By transitioning to Kubernetes on Amazon EKS from its previous, self-managed container infrastructure, Miro reduced costs by 80 percent. The company further reduced costs by 70 percent for nonproduction workloads and 50 percent for production workloads by using Karpenter to automatically launch the appropriate Amazon EC2 instances, including Amazon EC2 Spot Instances, which offer discounts of up to 90 percent compared with Amazon EC2 On-Demand pricing.
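To illustrate the event-driven side of this scaling, the sketch below defines a KEDA ScaledObject for a hypothetical "board-service" Deployment, letting it grow from 2 to 200 replicas based on CPU utilization while Karpenter launches or removes nodes to match the pod count. All names, replica counts, and thresholds are placeholders rather than Miro's real settings.

```python
# Minimal sketch: a KEDA ScaledObject that scales a hypothetical
# "board-service" Deployment between 2 and 200 replicas on CPU utilization.
# Karpenter then adds or removes nodes as the pod count changes.
from kubernetes import client, config

config.load_kube_config()

scaled_object = {
    "apiVersion": "keda.sh/v1alpha1",
    "kind": "ScaledObject",
    "metadata": {"name": "board-service", "namespace": "default"},
    "spec": {
        "scaleTargetRef": {"name": "board-service"},
        "minReplicaCount": 2,
        "maxReplicaCount": 200,
        "triggers": [
            {
                "type": "cpu",
                "metricType": "Utilization",
                "metadata": {"value": "60"},  # target 60% average CPU
            }
        ],
    },
}

# ScaledObject is namespaced, so use the namespaced custom objects API.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="keda.sh",
    version="v1alpha1",
    namespace="default",
    plural="scaledobjects",
    body=scaled_object,
)
```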