
Migrate to Amazon EKS: Data plane cost modeling with Karpenter and KWOK

When migrating Kubernetes clusters to Amazon Elastic Kubernetes Service (Amazon EKS), organizations typically follow three phases: assessment, mobilize, and migrate and modernize. The assessment phase involves evaluating technical feasibility for Amazon EKS workloads, analyzing current Kubernetes environments, identifying compatibility issues, estimating costs, and determining timelines with business impact considerations. During the mobilize phase, organizations create detailed migration plans, establish EKS environments with proper networking and security, train teams, and develop testing procedures. The final migrate and modernize phase involves transferring applications and data, validating functionality, implementing cloud-centered features, optimizing resources and costs, and enhancing observability to fully use AWS capabilities.

One of the most significant challenges organizations face during the process is cost estimation, which happens in the assessment phase.

Karpenter is an open source Kubernetes node autoscaler that efficiently provisions just-in-time compute resources to match workload demands. Unlike traditional autoscalers, Karpenter directly integrates with cloud providers to make intelligent, real-time decisions about instance types, availability zones, and capacity options. It evaluates pod requirements and constraints to select optimal instances, considering factors such as CPU, memory, price, and availability.

Karpenter can consolidate workloads for cost efficiency and rapidly scale from zero to handle sudden demand spikes. It supports both spot and on-demand instances, and automatically terminates nodes when they’re no longer needed, optimizing cluster resource utilization and reducing cloud costs.

Karpenter uses the concept of Providers to interact with different infrastructure platforms for provisioning and managing compute resources. KWOK (Kubernetes WithOut Kubelet) is a toolkit that simulates data plane nodes without allocating actual infrastructure, and can be used as a provider to create lightweight testing environments that enable developers to validate provisioning decisions, try various (virtual) instance types, and debug scaling behaviors.
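
For example, simulated nodes can typically be told apart from real ones by the fake-node annotation that KWOK documents (a minimal sketch, assuming the provider sets the kwok.x-k8s.io/node annotation; jq required):
# List nodes that carry KWOK's fake-node annotation.
kubectl get nodes -o json | jq -r '.items[]
  | select(.metadata.annotations["kwok.x-k8s.io/node"] == "fake")
  | .metadata.name'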

In an innovative approach described in this post, you can use KWOK to help simulate migrations to Amazon EKS. By using KWOK as a provider, you can observe how Karpenter would allocate resources for your workloads, without launching actual Amazon Elastic Compute Cloud (Amazon EC2) instances. This method replicates current resource requirements and scheduling patterns in a virtual environment, providing clear visibility into the types and quantities of nodes that would be needed in EKS. The simulation helps organizations build more accurate cost estimates and develop focused migration plans, while reducing infrastructure expenses during the assessment phase.

In this blog post, we demonstrate how to mimic a Kubernetes migration to Amazon EKS using Karpenter and KWOK. By creating a test environment, backing it up, restoring it in a new EKS cluster, and analyzing Karpenter’s node provisioning decisions, we show how to estimate compute costs before progressing to the mobilize and migrate and modernize phases.

Solution overview

When migrating workloads to Amazon EKS, organizations can choose from several replication strategies, each offering distinct advantages depending on your application architecture, downtime tolerance, and operational requirements.

For this demonstration, we focus on the backup and restore methodology. Our scenario consists of a source cluster posing as an existing production environment, and a destination cluster for workload migration. While real-world migrations typically target a cluster hosted either on-premises, or on another cloud provider, we use EKS for both source and destination clusters for practicality. We implement backup and restore using Velero, an open source tool designed for Kubernetes cluster backup, disaster recovery, and migration.

The process includes backing up resources from the source cluster to Amazon Simple Storage Service (Amazon S3), creating a destination cluster with Karpenter and KWOK enabled, and restoring workloads from the backup. This approach reveals Karpenter’s node provisioning decisions in response to the restored workloads, providing valuable insight into the instance types, quantities, and configurations required for production EKS migrations.

Solution walkthrough

Our solution walkthrough guides you through setting up the simulation environment, migrating a sample workload, and analyzing the results to estimate the cost of your EKS compute resources.

Prerequisites

To use Karpenter with KWOK, you need to build a custom Karpenter image. This image must be built on an amd64 architecture to help ensure compatibility with the m5.xlarge instances in our destination cluster’s bootstrap node group. This node group contains core cluster components, including the Karpenter controller. Once deployed, Karpenter dynamically provisions and manages separate instances for your workloads, based on scheduling demands and node requirements. While m5 instances are used here for demonstration, in a real-world scenario you should choose instance types that best fit your specific use case. Building on the same architecture helps ensure that the Karpenter image functions correctly in our simulated environment.
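
Before building, you can quickly confirm that the build host is amd64 (a minimal check; uname reports x86_64 on amd64 hosts):
# Confirm the build architecture matches the amd64 target.
if [ "$(uname -m)" = "x86_64" ]; then
  echo "amd64 build host: OK"
else
  echo "Warning: non-amd64 host; the resulting image may not run on m5 instances"
fi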

  1. To simplify setup, we created a few scripts that set up an EC2 instance with the proper permissions and the required dependencies:
export AWS_ACCOUNT_ID={your account id}
export AWS_DEFAULT_REGION={your region}

export INSTANCE_CREATE_SCRIPT_URL=https://raw.githubusercontent.com/aws-samples/sample-eks-cost-estimation-karpenter-kwok/refs/heads/main/create_build_instance.sh
source <(curl -s $INSTANCE_CREATE_SCRIPT_URL)
  Note: In a production environment, control traffic to your EC2 instance using security groups and apply least-privilege permissions.
  2. SSH into your instance:
ssh -i $KEY_PAIR_NAME.pem ec2-user@${INSTANCE_DNS}
  3. Install dependencies:
export AWS_ACCOUNT_ID={your account id}
export AWS_DEFAULT_REGION={your region}

SETUP_SCRIPT_URL=https://raw.githubusercontent.com/aws-samples/sample-eks-cost-estimation-karpenter-kwok/refs/heads/main/setup.sh
source <(curl -s $SETUP_SCRIPT_URL)

Step 1: Create the source EKS cluster

Run the following commands to create the source cluster:

export SOURCE_CLUSTER_NAME=karpenter-kwok-source-1

cat << EOF | eksctl create cluster -f -
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ${SOURCE_CLUSTER_NAME}
  region: ${AWS_DEFAULT_REGION}
nodeGroups:
  - name: ng-1
    instanceType: m5.xlarge
    desiredCapacity: 2
EOF
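
Optionally, verify that the cluster is up and both nodes are Ready before proceeding:
eksctl get cluster --name $SOURCE_CLUSTER_NAME --region $AWS_DEFAULT_REGION
kubectl get nodes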

Step 2: Deploy an example workload

You now need to deploy a representative workload. For this example, you can use the Guestbook application from the tutorials section of the Kubernetes documentation: a basic multi-tier web application consisting of a single-instance Redis database to store guestbook entries and multiple web frontend instances.

A summary of commands is provided here for your convenience:


kubectl apply -f https://k8s.io/examples/application/guestbook/redis-leader-deployment.yaml &&
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-leader-service.yaml &&
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-follower-deployment.yaml &&
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-follower-service.yaml &&
kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml &&
kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml
  1. Check the deployed pods:
kubectl get pods

Expected output:

NAME                              READY   STATUS    RESTARTS   AGE
frontend-7c457c988c-7x96t         1/1     Running   0          45s
frontend-7c457c988c-dw8dg         1/1     Running   0          45s
frontend-7c457c988c-xhd2k         1/1     Running   0          45s
redis-follower-854f6dcbd5-2r9n2   1/1     Running   0          48s
redis-follower-854f6dcbd5-vw6c7   1/1     Running   0          48s
redis-leader-5f4cc9b47d-v42lm     1/1     Running   0          51s
  2. Scale the frontend deployment and watch the pod count increase:
kubectl scale deployment frontend --replicas=5 &&
kubectl get pods

Expected output:

NAME                              READY   STATUS    RESTARTS   AGE
frontend-7c457c988c-7x96t         1/1     Running   0          119s
frontend-7c457c988c-dw8dg         1/1     Running   0          119s
frontend-7c457c988c-f49jz         1/1     Running   0          7s
frontend-7c457c988c-rljn2         1/1     Running   0          7s
frontend-7c457c988c-xhd2k         1/1     Running   0          119s
redis-follower-854f6dcbd5-2r9n2   1/1     Running   0          2m2s
redis-follower-854f6dcbd5-vw6c7   1/1     Running   0          2m2s
redis-leader-5f4cc9b47d-v42lm     1/1     Running   0          2m5s

Step 3: Extract cluster configuration with Velero

Velero is an open source tool for backing up and restoring Kubernetes cluster resources and persistent volumes. For our estimation approach, we use Velero to capture the complete configuration of the source cluster, export all deployments, statefulsets, daemonsets, and other workload definitions, and preserve resource requests and limits that will inform our estimation. This will give us a snapshot of our cluster’s resource requirements.
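
Because resource requests are the main signal Karpenter uses when selecting nodes, it can be helpful to review them before taking the backup. Here is a minimal sketch (assumes jq is installed, and reports only each pod’s first container):
# List the CPU and memory requests that will be captured in the backup.
kubectl get pods -A -o json | jq -r '.items[]
  | [.metadata.namespace, .metadata.name,
     (.spec.containers[0].resources.requests.cpu // "none"),
     (.spec.containers[0].resources.requests.memory // "none")]
  | @tsv'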

Velero consists of a command line interface (CLI) for initiating backups and restores and a server component that runs as a Kubernetes controller to handle backup operations. When you run a backup command, the CLI creates a Backup object in Kubernetes, then the server’s BackupController validates it, queries the API server for specified resources, and uploads the collected data to object storage.
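
Because the CLI simply creates a Backup object, backups can also be scoped with flags. For example, to capture a single namespace only (the guestbook namespace here is hypothetical):
# Back up only the resources in one namespace.
velero backup create guestbook-only --include-namespaces guestbook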

  1. First, create an Amazon Simple Storage Service (Amazon S3) bucket to act as your object storage (S3 bucket names must be globally unique, so replace the example name with one of your own):
export VELERO_BUCKET=karpenter-kwok-velero-bucket

aws s3api create-bucket \
--bucket $VELERO_BUCKET \
--region $AWS_DEFAULT_REGION \
--create-bucket-configuration LocationConstraint=$AWS_DEFAULT_REGION
  2. Create the necessary Velero user and associated permissions to access the bucket (in a production environment, restrict privileges according to the principle of least privilege):
aws iam create-user --user-name velero &&
aws iam put-user-policy \
  --user-name velero \
  --policy-name velero \
  --policy-document "{
    \"Version\": \"2012-10-17\",
    \"Statement\": [
        {
            \"Effect\": \"Allow\",
            \"Action\": [
                \"ec2:DescribeVolumes\",
                \"ec2:DescribeSnapshots\",
                \"ec2:CreateTags\",
                \"ec2:CreateVolume\",
                \"ec2:CreateSnapshot\",
                \"ec2:DeleteSnapshot\"
            ],
            \"Resource\": \"*\"
        },
        {
            \"Effect\": \"Allow\",
            \"Action\": [
                \"s3:GetObject\",
                \"s3:DeleteObject\",
                \"s3:PutObject\",
                \"s3:PutObjectTagging\",
                \"s3:AbortMultipartUpload\",
                \"s3:ListMultipartUploadParts\"
            ],
            \"Resource\": [
                \"arn:aws:s3:::${VELERO_BUCKET}/*\"
            ]
        },
        {
            \"Effect\": \"Allow\",
            \"Action\": [
                \"s3:ListBucket\"
            ],
            \"Resource\": [
                \"arn:aws:s3:::${VELERO_BUCKET}\"
            ]
        }
    ]
}"
  3. Then create a file to host the credentials:
VELERO_ACCESS_KEY_OUTPUT=$(aws iam create-access-key --user-name velero) &&
VELERO_ACCESS_KEY_ID=$(echo $VELERO_ACCESS_KEY_OUTPUT | jq -r '.AccessKey.AccessKeyId') &&
VELERO_SECRET_ACCESS_KEY=$(echo $VELERO_ACCESS_KEY_OUTPUT | jq -r '.AccessKey.SecretAccessKey') &&
cat > ./credentials-velero << EOF
[default]
aws_access_key_id=${VELERO_ACCESS_KEY_ID}
aws_secret_access_key=${VELERO_SECRET_ACCESS_KEY}
EOF
  4. Install the Velero CLI:
curl -L -o velero-v1.16.1-linux-amd64.tar.gz https://github.com/vmware-tanzu/velero/releases/download/v1.16.1/velero-v1.16.1-linux-amd64.tar.gz &&
tar -xvf velero-v1.16.1-linux-amd64.tar.gz &&
sudo mv velero-v1.16.1-linux-amd64/velero /usr/local/bin/
  5. Install the Velero server in the source cluster:
velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.10.0 \
    --bucket $VELERO_BUCKET \
    --backup-location-config region=$AWS_DEFAULT_REGION \
    --snapshot-location-config region=$AWS_DEFAULT_REGION \
    --secret-file ./credentials-velero
  6. Check the Velero deployment logs:
kubectl logs deployment/velero -n velero
  7. Confirm the backup location’s PHASE is Available:
velero backup-location get

NAME      PROVIDER   BUCKET/PREFIX                  PHASE       LAST VALIDATED                   ACCESS MODE   DEFAULT
default   aws        karpenter-kwok-velero-bucket   Available   2025-06-08 11:12:18 +0200 CEST   ReadWrite     true
  8. Trigger the backup:
export VELERO_BACKUP_NAME=velero-backup
velero backup create $VELERO_BACKUP_NAME
  9. Check the backup status:
kubectl get backup $VELERO_BACKUP_NAME -n velero -o jsonpath='{.status.phase}'

Expected output: Completed.
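
You can also confirm that the backup artifacts landed in the bucket (Velero stores them under a backups/ prefix):
aws s3 ls s3://$VELERO_BUCKET/backups/$VELERO_BACKUP_NAME/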

Step 4: Create the destination EKS cluster

Run the following commands to create the destination cluster:

export DESTINATION_CLUSTER_NAME=karpenter-kwok-destination-1

cat << EOF | eksctl create cluster -f -
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ${DESTINATION_CLUSTER_NAME}
  region: ${AWS_DEFAULT_REGION}
nodeGroups:
  - name: ng-1
    instanceType: m5.xlarge
    desiredCapacity: 2
EOF

For security considerations when setting up your cluster in a production environment, see the Security in Amazon EKS guide.

Step 5: Deploy Karpenter with the KWOK provider

In this setup, you install both Karpenter and KWOK on your EKS cluster and configure Karpenter to use the KWOK provider rather than the standard Amazon EC2 provider. This integration enables Karpenter to respond to pod scheduling demands by creating virtual nodes, eliminating the need for actual EC2 instance provisioning. The approach combines these technologies to create a lightweight, cost-effective simulation environment.

  1. Create an image repository on Amazon Elastic Container Registry (Amazon ECR), where the Karpenter-KWOK image will be pushed:
export KWOK_DOCKER_REPO=kwok
export KWOK_DOCKER_REPO_FULL="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${KWOK_DOCKER_REPO}"
aws ecr create-repository \
    --repository-name "${KWOK_DOCKER_REPO}" \
    --region "${AWS_DEFAULT_REGION}" &&
aws ecr get-login-password --region "${AWS_DEFAULT_REGION}" | docker login --username AWS --password-stdin "${KWOK_DOCKER_REPO_FULL}"
  2. KWOK is configured to provide a set of simulated instance types that mirror real EC2 instances. These instance types are defined in the Karpenter repo, in the kwok/cloudprovider/instance_types.json file, which contains a structured array of instance specifications:
[
    {
        "name": "m5a.2xlarge",
        "offerings": [
            {
                "Price": 0.2086,
                "Available": true,
                "Requirements": [
                    {
                        "key": "karpenter.sh/capacity-type",
                        "operator": "In",
                        "values": [
                            "spot"
                        ]
                    },
                    {
                        "key": "topology.kubernetes.io/zone",
                        "operator": "In",
                        "values": [
                            "eu-central-1a"
                        ]
                    }
                ]
            },
            //additional offerings e.g. on-demand, other AZs
        ],
        "architecture": "amd64",
        "operatingSystems": [
            "linux",
            "windows"
        ],
        "resources": {
            "cpu": "8",
            "memory": "32.0Gi",
            "ephemeral-storage": "20Gi",
            "pods": "40"
        }
    },
    //additional instance types
]

The file contains multiple instance types, each with potentially multiple offerings that vary by pricing, availability zone, and capacity type (spot or on-demand). This configuration allows KWOK to simulate the instance diversity that Karpenter would interact with in a real environment.
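
To get a feel for the simulated catalog, you can inspect the file with jq (this sketch assumes the setup script cloned the Karpenter repo into ./karpenter):
# List the first few simulated instance types with their first offering price.
jq -r '.[] | "\(.name)\t\(.offerings[0].Price)"' \
  karpenter/kwok/cloudprovider/instance_types.json | head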

In this example, you will use instance families m4 and m5 to restore your backup, so you need to update the instance_types.json file with up-to-date information. For your convenience, we provide the get_instance_details.py script, which builds a new JSON file with up-to-date details (note that it’s your responsibility to verify the accuracy of the information, particularly instance pricing, to help ensure reliable compute cost estimations).

  3. Run the following commands to update instance_types.json:
cd karpenter
export INSTANCE_TYPES_FILE_PATH=./kwok/cloudprovider/instance_types.json
export GET_INSTANCE_DETAILS_SCRIPT_URL=https://raw.githubusercontent.com/aws-samples/sample-eks-cost-estimation-karpenter-kwok/refs/heads/main/get_instance_details.py
curl -s $GET_INSTANCE_DETAILS_SCRIPT_URL | python3 - $AWS_DEFAULT_REGION m4 m5 --output $INSTANCE_TYPES_FILE_PATH
  4. Install the build toolchain:
make toolchain
  5. Install the KWOK controller in the destination cluster:
make install-kwok
  6. Build and deploy a new version of Karpenter with KWOK provider support:
make apply
  7. To tell Karpenter to use the selected instances, create a new NodePool and a KWOKNodeClass (a custom resource that represents KWOK virtual nodes, similar to a regular Karpenter NodeClass):
cat <<EOF | envsubst | kubectl apply -f -
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: "karpenter.k8s.aws/instance-family"
          operator: In
          values: ["m5","m4"]
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: kubernetes.io/os
          operator: In
          values: ["linux"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
      nodeClassRef:
        name: default
        kind: KWOKNodeClass
        group: karpenter.kwok.sh
      expireAfter: 720h # 30 * 24h = 720h
  limits:
    cpu: 1000
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 10s
---
apiVersion: karpenter.kwok.sh/v1alpha1
kind: KWOKNodeClass
metadata:
  name: default
EOF

In this example NodePool, we use m4 and m5 instance families for demonstration purposes. In a real-life scenario, you would select instance types that best match your workload requirements, considering factors such as performance needs, cost constraints, and specific compute characteristics. See the Amazon EKS documentation for Karpenter setup best practices and specifically NodePool configuration.
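
After applying the manifests, you can confirm the resources exist and watch the NodeClaims Karpenter issues once workloads arrive (kwoknodeclasses is the assumed short resource name for the KWOK custom resource):
# Confirm the NodePool and KWOKNodeClass were created.
kubectl get nodepools,kwoknodeclasses
# Watch Karpenter's provisioning decisions appear as NodeClaims.
kubectl get nodeclaims -w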

Step 6: Restore the backup

  1. You can now install Velero in the destination cluster:
velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.10.0 \
    --bucket $VELERO_BUCKET \
    --backup-location-config region=$AWS_DEFAULT_REGION \
    --snapshot-location-config region=$AWS_DEFAULT_REGION \
    --secret-file ../credentials-velero
  2. Check that you can access the backup from the source cluster:
velero backup describe $VELERO_BACKUP_NAME

Expected output:

Name:         velero-backup
Namespace:    velero
Labels:       velero.io/storage-location=default
Annotations:  velero.io/resource-timeout=10m0s
              velero.io/source-cluster-k8s-gitversion=v1.32.5-eks-5d4a308
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=32

Phase:  Completed


Namespaces:
  Included:  *
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto

Label selector:  <none>

Or label selector:  <none>

Storage Location:  default

Velero-Native Snapshot PVs:  auto
Snapshot Move Data:          false
Data Mover:                  velero

TTL:  720h0m0s

CSISnapshotTimeout:    10m0s
ItemOperationTimeout:  4h0m0s

Hooks:  <none>

Backup Format Version:  1.1.0

Started:    2025-06-08 11:13:16 +0200 CEST
Completed:  2025-06-08 11:13:18 +0200 CEST

Expiration:  2025-07-08 11:13:16 +0200 CEST

Total items to be backed up:  423
Items backed up:              423

Backup Volumes:
  Velero-Native Snapshots: <none included>

  CSI Snapshots: <none included>

  Pod Volume Backups: <none included>

HooksAttempted:  0
HooksFailed:     0
  3. You should now taint all existing real nodes, so that the restored workload is deployed on KWOK-provided nodes only:
kubectl taint nodes --all CriticalAddonsOnly:NoSchedule
  4. You’re ready to restore the backup:
velero restore create --from-backup $VELERO_BACKUP_NAME

Expected output:

Restore request "velero-backup-20250609165746" submitted successfully.
Run `velero restore describe velero-backup-20250609165746` or `velero restore logs velero-backup-20250609165746` for more details.
  5. Check the deployed pods:
kubectl get pods -A

Expected output:

NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
default       frontend-7c457c988c-j7296            1/1     Running   0          2m11s
default       frontend-7c457c988c-khbjs            1/1     Running   0          2m10s
default       frontend-7c457c988c-ptg74            1/1     Running   0          2m10s
default       frontend-7c457c988c-wjj45            1/1     Running   0          2m10s
default       frontend-7c457c988c-z4s7m            1/1     Running   0          2m10s
default       redis-follower-854f6dcbd5-f59bq      1/1     Running   0          2m10s
default       redis-follower-854f6dcbd5-zmh88      1/1     Running   0          2m10s
default       redis-leader-5795f95d8c-xv6sk        1/1     Running   0          2m7s
kube-system   aws-node-cw2t9                       2/2     Running   0          2m7s
kube-system   aws-node-l827x                       2/2     Running   0          4d6h
kube-system   aws-node-m76zk                       2/2     Running   0          4d6h
kube-system   coredns-6d78c58c9f-9s45j             1/1     Running   0          4d6h
kube-system   coredns-6d78c58c9f-zs6w2             1/1     Running   0          4d6h
kube-system   karpenter-68846dbcf6-68ptn           1/1     Running   0          18m
kube-system   kube-proxy-9p9m7                     1/1     Running   0          4d6h
kube-system   kube-proxy-cvcsf                     1/1     Running   0          2m7s
kube-system   kube-proxy-vj2w9                     1/1     Running   0          4d6h
kube-system   kwok-controller-a-66785dddbd-jmnhr   1/1     Running   0          4d7h
kube-system   metrics-server-6c8c76d545-667bv      1/1     Running   0          4d6h
kube-system   metrics-server-6c8c76d545-rx8v4      1/1     Running   0          4d6h
velero        velero-78f664b85-2dpdb               1/1     Running   0          28h
  6. Inspect the nodes:
kubectl get nodes -o custom-columns=NAME:.metadata.name,READY:"status.conditions[?(@.type=='Ready')].status",OS-IMAGE:.status.nodeInfo.osImage,INSTANCE-TYPE:.metadata.labels.'node\.kubernetes\.io/instance-type'

Expected output:

NAME                                                READY   OS-IMAGE         INSTANCE-TYPE
bold-haibt-162796501                                True                     m5.xlarge
ip-192-168-62-23.ap-northeast-1.compute.internal    True    Amazon Linux 2   m5d.xlarge
ip-192-168-90-145.ap-northeast-1.compute.internal   True    Amazon Linux 2   m5d.xlarge

You can see that Karpenter, together with KWOK, created a virtual node (named bold-haibt-162796501 in the preceding example) with instance type m5.xlarge to support our demo application.

Note that pods are not actually running on the virtual nodes—KWOK interacts with the Kubernetes API server by creating fake node objects that appear as real nodes to the control plane. These simulated nodes report their status, capacity, and other attributes just like real nodes would, but they don’t exist as physical or virtual machines. For scheduling, KWOK allows the standard Kubernetes scheduler to function normally. When you create a pod, the scheduler assigns it to one of the fake nodes based on regular scheduling constraints (resource requests, node selectors, taints and tolerations, and so on). After being scheduled, KWOK automatically updates the pod’s status to Running without running any containers.
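
To see this in action, list the pods assigned to the virtual node (substitute the node name from your own output):
# Show pods "running" on the simulated node; no containers actually execute.
kubectl get pods -A -o wide --field-selector spec.nodeName=bold-haibt-162796501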

You can explore resource utilization within the virtual (and real) nodes with eks-node-viewer (preinstalled on your build machine).

Run:

eks-node-viewer

Expected output: an interactive terminal view listing each node with its instance type, hourly price, and the share of allocatable resources requested by its pods.

By analyzing KWOK nodes and their associated costs, you can now estimate the compute resource requirements for your workload running on Amazon EKS.
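
As a starting point, the following sketch sums the simulated nodes’ on-demand prices from instance_types.json into a rough monthly figure. It is a minimal sketch with several assumptions: it runs from the karpenter repo directory, the NodePool is named default, the first offering of each instance type is representative, and a month is 730 hours:
# Rough monthly cost estimate for the KWOK-provisioned virtual nodes.
total=0
for itype in $(kubectl get nodes -l karpenter.sh/nodepool=default \
  -o jsonpath='{.items[*].metadata.labels.node\.kubernetes\.io/instance-type}'); do
  price=$(jq -r --arg t "$itype" \
    '.[] | select(.name == $t) | .offerings[0].Price' \
    kwok/cloudprovider/instance_types.json)
  echo "$itype: \$$price/hour"
  total=$(echo "$total + $price" | bc)
done
echo "Estimated total: \$$total/hour (~\$$(echo "$total * 730" | bc)/month)"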

While this simulation provides valuable insights, it’s important to understand its limitations:

  • It doesn’t account for network latency or I/O performance, which can impact real-world application behavior.
  • The simulation might not fully replicate complex interdependencies between services in your current environment.
  • It doesn’t factor in costs for managed services or data transfer that might be part of your overall Amazon EKS implementation.
  • The approach assumes that your current resource allocation is optimal, which might not always be the case.

Clean up

Clean up the resources created within the build instance:

CLEANUP_SCRIPT_URL=https://raw.githubusercontent.com/aws-samples/sample-eks-cost-estimation-karpenter-kwok/refs/heads/main/cleanup.sh
source <(curl -s $CLEANUP_SCRIPT_URL)

Exit the SSH session:

exit

Terminate the build instance:

INSTANCE_DELETE_SCRIPT_URL=https://raw.githubusercontent.com/aws-samples/sample-eks-cost-estimation-karpenter-kwok/refs/heads/main/delete_build_instance.sh
source <(curl -s $INSTANCE_DELETE_SCRIPT_URL)

Conclusion

By combining Karpenter and KWOK, organizations can get an indication of their AWS resource requirements before committing to a full migration. This approach reduces the risk of over- or under-provisioning and provides concrete data for budgeting and capacity planning.

This method represents just one example of how modern cloud-centered tools can be combined in innovative ways to solve complex migration challenges. As you plan your journey to Amazon EKS, consider incorporating this estimation technique into your assessment phase for more predictable outcomes.


About the authors

Riccardo Freschi is a Senior Solutions Architect at AWS who specializes in Modernization. He helps partners and customers transform their IT landscapes by designing and implementing modern cloud-native architectures on AWS. His focus areas include container-based applications on Kubernetes, cloud-native development, and establishing modernization strategies that drive business value.