Managing Team Workloads in a shared Amazon EKS cluster using Loft vCluster and Argo CD for better cost optimization and operational efficiency
This blog was authored by Adam Issaoui, Cloud Support Engineer – Containers, Asif Khan, Senior Solutions Architect, and Sébastien Allamand, Sr. Solution Architect Specialist – Containers.
Introduction
Amazon Elastic Kubernetes Service (Amazon EKS) has emerged as a fundamental platform for modern container orchestration. It enables organizations to effectively optimize their application deployment and management processes by providing a fully-managed, certified Kubernetes conformant service that streamlines the building, securing, operating, and maintenance of Kubernetes clusters on Amazon Web Services (AWS).
EKS clusters are often shared by multiple teams within an organization, allowing them to efficiently use available resources. Furthermore, Amazon EKS is used to deliver applications to end-users, necessitating strong segmentation and isolation of resources among various teams. Securely sharing the Amazon EKS control plane and worker node resources allows teams to enhance productivity and achieve cost efficiencies. This post demonstrates how to use vCluster for separating workloads on a shared EKS cluster.
Achieving Kubernetes multi-tenancy: strategies for isolation and efficiency
Kubernetes multi-tenancy allows multiple tenants to share a cluster's resources. However, Kubernetes lacks built-in multi-tenancy, so administrators must implement isolation strategies using quotas and limits. Two main patterns exist:
- Hard isolation: Dedicated clusters per tenant/workload, which streamlines isolation but leads to poorer resource usage and increased management overhead.
- Soft isolation: Using Kubernetes constructs like namespaces to share a cluster while maintaining logical separation. This improves resource usage but necessitates more setup through RBAC, network policies, and other configurations.
Kubernetes provides three main multi-tenancy strategies:
- Dedicated clusters: Streamlines isolation but leads to poorer resource usage and increased management overhead.
- Namespaces: Improves resource usage but necessitates more complex setup using RBAC, network policies, and other configurations. The Hierarchical Namespace Controller (HNC) addresses some namespace management challenges, but does not solve all multi-tenancy issues, particularly those related to cluster-wide resources.
- Shared control plane with virtual clusters: Balances efficiency and isolation through solutions such as Loft vCluster or Kamaji.
What is vCluster?
vCluster on Amazon EKS offers a range of benefits to users, allowing organizations to significantly lower their infrastructure expenses while streamlining cluster management. The solution provides control plane isolation, enabling more efficient development and testing processes, and enhances security in continuous integration and continuous deployment (CI/CD) workflows. Furthermore, vCluster’s lightweight, isolated virtual clusters make training environments more cost-effective. For those looking to expand their Kubernetes capabilities, vCluster can be seamlessly integrated with Crossplane. This combination allows users to create and test Custom Resource Definitions (CRDs) in isolated environments.
Virtual clusters are functional Kubernetes clusters nested within a host cluster, enhancing resource sharing and flexibility.

Figure 1: EKS cluster and vClusters communication for pod scheduling
The vCluster control plane pod contains:
- API Server: Gateway for Kubernetes API requests, supporting various distributions.
- Controller Manager: Tracks and manages Kubernetes resources.
- Data Store: Stores resource definitions and state, with options such as SQLite, etcd, or a managed database such as Amazon Relational Database Service (Amazon RDS) for MySQL or PostgreSQL.
- Syncer: Synchronizes resources between virtual and host clusters.
The Syncer maintains bidirectional synchronization, scheduling virtual pods on host nodes and propagating changes. This allows vCluster resources to remain isolated while low-level pod resources are synchronized, enabling the virtual cluster to function within the host cluster.

Figure 2: Architecture overview
Solution overview
The EKS cluster in the following solution hosts multiple virtual clusters using vCluster, with each virtual cluster running a different Kubernetes version. Argo CD deploys application components to the virtual clusters.
Kyverno automatically adds new virtual clusters to the Argo CD managed list, streamlining infrastructure expansion. This setup centrally and automatically manages application deployments across virtual clusters. vCluster creates isolated Kubernetes environments, while Argo CD provides a GitOps approach to deployment management across the virtual clusters.
Walkthrough
The following steps outline the process described in this post:
- Create EKS cluster.
- Install Amazon Elastic Block Store (Amazon EBS) CSI driver add-on.
- Install Argo CD and Kyverno.
- Create vClusters for Teams A and B with different Kubernetes versions.
- Set up Kyverno cluster policy to add vClusters to Argo CD.
- Deploy apps to Team B vCluster using Argo CD.
- Verify workload isolation between Teams A and B.
Prerequisites
The following prerequisites are necessary to complete this solution:
- An AWS account
- AWS Command Line Interface (AWS CLI) > 2.15.30
- Helm > v3.10.0
- kubectl
- eksctl
- vCluster CLI (0.19.6); installation instructions are available here.
Step-by-step guidance
Step 0: Set up environment variables
Open a terminal, update the following variables to match your environment, and run them:
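A minimal sketch of the variables used by the later commands; the names and values here are assumptions, so adjust them to your environment:

```bash
# Hypothetical environment variables referenced by later commands; adjust as needed.
export AWS_REGION=us-west-2          # AWS Region for the EKS cluster (assumption)
export CLUSTER_NAME=vcluster-demo    # EKS cluster name (assumption)
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
```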
Step 1: Set up your EKS cluster
Before proceeding, you need an active EKS cluster. You can create one using the AWS Console, AWS CLI, or eksctl command-line tool. For a basic setup, you might use the following command:
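The original eksctl configuration isn't reproduced here; a minimal sketch, assuming a small managed node group and the variables from Step 0:

```bash
# Create a basic EKS cluster with a managed node group (sketch; tune version, size, and instance type to your needs).
eksctl create cluster \
  --name "$CLUSTER_NAME" \
  --region "$AWS_REGION" \
  --version 1.32 \
  --nodegroup-name default \
  --node-type m5.large \
  --nodes 3
```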
Step 2: Set up VPC CNI plugin
To install the CNI add-on, run the following command:
This command downloads and runs a script from the aws-samples GitHub repository to install the VPC CNI add-on on your EKS cluster with network policy enabled.
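The script itself isn't reproduced in this post. As a rough equivalent sketch, the VPC CNI managed add-on can be installed with network policy support enabled through the AWS CLI:

```bash
# Install (or adopt) the VPC CNI managed add-on with network policy support enabled (sketch).
aws eks create-addon \
  --cluster-name "$CLUSTER_NAME" \
  --region "$AWS_REGION" \
  --addon-name vpc-cni \
  --configuration-values '{"enableNetworkPolicy": "true"}' \
  --resolve-conflicts OVERWRITE
```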
Step 3: Set up Amazon EKS Pod Identity and the Amazon EBS CSI Driver managed add-on
The Amazon EBS CSI Driver EKS managed add-on version must be later than v1.26.0 to support EKS Pod Identity. Refer to this link for more details on other considerations when using Amazon EKS.
Installation steps
- Install the EBS CSI Driver with Pod Identity:
Run the following command in your terminal:
This script sets up the EBS CSI Driver with Pod Identity on your EKS cluster; an equivalent setup is sketched after these steps.
- Create the default storage class:
After the EBS CSI Driver is installed, apply the default storage class configuration:
This creates a default storage class for your EKS cluster using the EBS CSI Driver (a gp3 StorageClass manifest is sketched in Step 5).
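The helper script referenced above isn't reproduced here. As a rough sketch of an equivalent setup using eksctl Pod Identity support (add-on names and the IAM policy are the standard ones; the exact steps may differ from the original script):

```bash
# 1) Install the Pod Identity agent add-on.
eksctl create addon --cluster "$CLUSTER_NAME" --region "$AWS_REGION" --name eks-pod-identity-agent

# 2) Associate an IAM role (created from the managed EBS CSI policy) with the EBS CSI controller service account.
eksctl create podidentityassociation \
  --cluster "$CLUSTER_NAME" \
  --region "$AWS_REGION" \
  --namespace kube-system \
  --service-account-name ebs-csi-controller-sa \
  --permission-policy-arns arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy

# 3) Install the EBS CSI driver as an EKS managed add-on (must be later than v1.26.0 for Pod Identity).
eksctl create addon --cluster "$CLUSTER_NAME" --region "$AWS_REGION" --name aws-ebs-csi-driver
```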
Step 4: Install Argo CD and Kyverno
Argo CD is used to manage Kubernetes resources and applications within your cluster in a declarative way, while Kyverno serves as the policy engine to define cluster policies for automation purposes. Both can be installed using Helm with the following commands:
Step 4.a: Installing Argo CD
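For example, using the community Argo CD Helm chart (the release and namespace names are assumptions used throughout this post):

```bash
# Install Argo CD into the argocd namespace.
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
helm upgrade --install argocd argo/argo-cd \
  --namespace argocd \
  --create-namespace
```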
Step 4.b: Installing Kyverno
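And for Kyverno, using the official chart:

```bash
# Install Kyverno into the kyverno namespace.
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
helm upgrade --install kyverno kyverno/kyverno \
  --namespace kyverno \
  --create-namespace
```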
Step 5: Create team-a and team-b vClusters
We’ve set up an EKS 1.32 cluster with two vClusters:
- Team A: Kubernetes v1.30
- Team B: Kubernetes v1.31
This allows teams to use different Kubernetes versions in isolated environments.
vCluster isolation mode is enabled, providing:
- Pod Security Standards enforcement
- Resource quotas and limit ranges to restrict resource consumption
- Network policies for access restriction
Argo CD creates vClusters using the vCluster Helm chart, enabling a GitOps approach.
For vCluster’s persistent volumes, you need a gp3 StorageClass. Apply it with this command:
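A minimal sketch of the gp3 StorageClass, marked as the cluster default (if an equivalent class was already created in Step 3, applying it again is harmless):

```bash
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
EOF
```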
After setting up the StorageClass, the next step is to create a vCluster project in Argo CD to organize applications and define deployment, destination, and object type restrictions.
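A sketch of such an AppProject; the project name (vcluster), allowed source repository, and destinations are assumptions based on the description above:

```bash
kubectl apply -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: vcluster
  namespace: argocd
spec:
  description: Project that owns the vCluster control plane applications
  # Restrict where vCluster charts can come from and where they can be deployed (assumptions).
  sourceRepos:
    - https://charts.loft.sh
  destinations:
    - namespace: 'vcteam-*'
      server: https://kubernetes.default.svc
  clusterResourceWhitelist:
    - group: '*'
      kind: '*'
EOF
```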
To create the Team A vCluster:
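The original Application manifest isn't reproduced here. A sketch for Team A, assuming the vCluster is installed into a namespace named vcteam-a, the 0.19.x vcluster Helm chart from https://charts.loft.sh, and a k3s image for Kubernetes v1.30 (the exact chart version and image tag are assumptions):

```bash
kubectl apply -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: vcteam-a
  namespace: argocd
spec:
  project: vcluster
  source:
    repoURL: https://charts.loft.sh
    chart: vcluster
    targetRevision: 0.19.6
    helm:
      values: |
        vcluster:
          # Kubernetes v1.30 for Team A (image tag is an assumption).
          image: rancher/k3s:v1.30.4-k3s1
        isolation:
          enabled: true          # Pod Security Standards, quotas, limit ranges, network policies
        service:
          type: LoadBalancer     # expose the vCluster API externally
        syncer:
          extraArgs:
            - --tls-san=vcteam-a.vcteam-a.svc
  destination:
    server: https://kubernetes.default.svc
    namespace: vcteam-a
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
EOF
```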
To create the Team B vCluster:
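The Team B Application mirrors the Team A sketch above: change the Application name and destination namespace to vcteam-b, point the k3s image at a v1.31 tag, and update the --tls-san value to vcteam-b.vcteam-b.svc.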
vClusters are exposed using Classic Load Balancer services for external access. Alternative access methods include Ingress or a Service of type LoadBalancer with ingress controllers; more details can be found in the documentation.
To list the created vClusters:
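Using the vCluster CLI:

```bash
vcluster list
```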

Figure 3: vCluster status
Furthermore, you can check the status of the vCluster application in the Argo CD UI. The vClusters are associated with the vcluster Argo CD project.
To access the Argo CD server from outside the cluster, use the kubectl port-forward command. Refer to this link if you would like to expose it using ingress resources:
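A typical invocation looks like this:

```bash
kubectl port-forward svc/argocd-server -n argocd 8080:443
```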
This exposes the argocd-server cluster-IP service on local port 8080, allowing access to the Argo CD UI at https://localhost:8080.
To log in to the Argo CD web UI, use the default admin user:
1. Retrieve the initial admin password from the argocd-initial-admin-secret secret (using the command shown after this list):
2. Use the retrieved password to authenticate with the admin username in the Argo CD web UI.
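The password can be read with a command along these lines:

```bash
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d
```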

Figure 4: Argo CD vCluster applications
Step 6: Configure ClusterPolicy for automated Argo CD cluster configuration
Our goal is to automate adding new virtual Kubernetes clusters to Argo CD management, improving deployment efficiency and scalability. Argo CD stores cluster details in Kubernetes Secrets labeled with argocd.argoproj.io/secret-type: cluster. vCluster manages cluster credentials by storing them in Secrets within dedicated namespaces. The Secret name is derived by prefixing the cluster name with vc-. For example, the Secret name for the vcteam-a cluster is vc-vcteam-a.
We use Kyverno ClusterPolicies for automation. Kyverno can generate additional Kubernetes objects when other objects are created or updated. The generateExisting attribute controls whether the policy applies to existing resources; when set to true, Kyverno generates Argo CD secrets for existing vClusters. Argo CD can access each vCluster internally through the Kubernetes Service that shares the vCluster's name.
To create the Kyverno ClusterPolicy, you can use the following command:
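The exact policy from the original post isn't reproduced here. A sketch of a generate-style ClusterPolicy that turns each vc-* vCluster Secret into an Argo CD cluster Secret; the secret key names and the in-cluster service DNS pattern are assumptions based on the description above:

```bash
kubectl apply -f - <<'EOF'
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-vcluster-to-argocd
spec:
  generateExisting: true   # also generate secrets for vClusters that already exist
  rules:
    - name: generate-argocd-cluster-secret
      match:
        any:
          - resources:
              kinds:
                - Secret
              names:
                - "vc-vcteam-*"
      context:
        - name: vclusterName
          variable:
            jmesPath: replace_all(request.object.metadata.name, 'vc-', '')
      generate:
        apiVersion: v1
        kind: Secret
        name: "{{ vclusterName }}"
        namespace: argocd
        synchronize: true
        data:
          metadata:
            labels:
              argocd.argoproj.io/secret-type: cluster
          type: Opaque
          stringData:
            name: "{{ vclusterName }}"
            # Argo CD reaches the vCluster through the in-cluster Service that shares its name.
            server: "https://{{ vclusterName }}.{{ request.object.metadata.namespace }}.svc:443"
            config: |
              {
                "tlsClientConfig": {
                  "caData": "{{ request.object.data."certificate-authority" }}",
                  "certData": "{{ request.object.data."client-certificate" }}",
                  "keyData": "{{ request.object.data."client-key" }}"
                }
              }
EOF
```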
Starting with Kyverno v1.13, you need to grant specific permissions for the configured policies. To accomplish this, apply the following RBAC:
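A sketch of an aggregated ClusterRole that lets the Kyverno background controller manage Secrets; the role name is an assumption, and the labels follow Kyverno's convention for extending controller permissions:

```bash
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kyverno:manage-argocd-secrets
  labels:
    # These labels aggregate the role into the Kyverno background controller's permissions.
    app.kubernetes.io/component: background-controller
    app.kubernetes.io/instance: kyverno
    app.kubernetes.io/part-of: kyverno
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "update", "delete", "get", "list", "watch"]
EOF
```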
When you create the Kyverno ClusterPolicy, you should see output similar to the following:

Figure 5: Kyverno ClusterPolicy
If you examine the secrets in the argocd namespace using the command kubectl get secrets -n argocd, you discover that Kyverno has automatically created two secrets named vcteam-a and vcteam-b.
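The walkthrough also deploys an application to the Team B vCluster through Argo CD (Step 6 of the walkthrough); the cert-manager checks later in this post rely on it. The original manifest isn't reproduced here; a sketch, assuming the vcteam-b cluster secret generated above and the upstream cert-manager Helm chart (the chart version is an assumption):

```bash
kubectl apply -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cert-manager-vcteam-b
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://charts.jetstack.io
    chart: cert-manager
    targetRevision: v1.14.4
    helm:
      values: |
        installCRDs: true
  destination:
    # Target the Team B vCluster registered by the Kyverno policy above.
    name: vcteam-b
    namespace: cert-manager
  syncPolicy:
    automated: {}
    syncOptions:
      - CreateNamespace=true
EOF
```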
Team A internal access
Result: ACCESS WORKS
Team A to Team B access
Result: ACCESS WORKS
Team B to Team A access
Result: ACCESS WORKS
Team B internal access
Result: ACCESS WORKS
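The commands behind these four checks aren't reproduced above. A hypothetical sketch of one such check, where the namespace, pod selection, and port are assumptions, tests whether a pod inside the Team A vCluster can reach a pod that the Team B vCluster synced to the host:

```bash
# Hypothetical cross-vCluster reachability check (names and index are placeholders).
# 1) Find the host-cluster IP of a pod synced by the Team B vCluster
#    (choose a pod that actually serves HTTP; index 0 is just a placeholder).
TEAM_B_POD_IP=$(kubectl get pods -n vcteam-b -o jsonpath='{.items[0].status.podIP}')

# 2) From inside the Team A vCluster, try to reach that IP directly.
vcluster connect vcteam-a -- kubectl run curl-test --rm -it --restart=Never \
  --image=curlimages/curl -- curl -s -m 5 "http://${TEAM_B_POD_IP}"
```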
You’ve observed inter-vCluster pod communication, which is typically undesired. To enhance isolation, implement network segmentation.
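For example, a deny-by-default ingress policy in each vCluster's host namespace that only allows traffic from within that namespace could look like this (a minimal sketch; adjust to your actual traffic patterns):

```bash
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
  namespace: vcteam-a          # repeat for vcteam-b
spec:
  podSelector: {}              # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}      # only pods from this same namespace
EOF
```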
Checking the resources shows that the pods in the cert-manager namespace are running and healthy:
kubectl get po -n cert-manager

Figure 15: Cert-manager pods
vClusters allow users to use their CRDs, namespaces, and cluster roles without affecting the host cluster or other vClusters.
In vCluster vcteam-b, cert-manager.io CRDs exist:
vcluster connect vcteam-b -- kubectl get crd | grep cert-manager.io

Figure 16: Cert-manager custom resource definitions (CRD)
In vCluster vcteam-a, no cert-manager CRDs are found:
vcluster connect vcteam-a -- kubectl get crd

Figure 17: No Cert-manager custom resource definitions
Compute isolation
By default, vClusters share the host cluster's nodes. To enhance isolation, use the --enforce-node-selector flag on the vCluster syncer to schedule workloads on specific nodes based on labels. For managed node groups, create multiple groups with specific selectors for each vCluster.
Karpenter integration options with vCluster:
- Designated NodePool for vCluster workload pods:
  - Uses a NodePool taint with the vCluster name
  - Adds a matching toleration through the syncer
  - Makes sure that pods synced by vCluster are scheduled on the designated NodePool's nodes
- Target NodePool based on vCluster pod labels:
  - Uses the Exists operator in the Karpenter NodePool spec requirements
  - Allows fine-grained control over which NodePool a vCluster's pods are scheduled on
These approaches optimize resource usage and cost savings by provisioning nodes based on each vCluster’s workload requirements.
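A sketch of the first option, assuming Karpenter's v1beta1 NodePool API, an existing EC2NodeClass named default, and the vCluster syncer's --enforce-toleration flag; the taint key and names are assumptions:

```bash
# Karpenter NodePool dedicated to the Team A vCluster, tainted so only tolerating pods land on it.
kubectl apply -f - <<'EOF'
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: vcteam-a
spec:
  template:
    spec:
      taints:
        - key: vcluster.loft.sh/vcluster   # taint key is an assumption
          value: vcteam-a
          effect: NoSchedule
      nodeClassRef:
        apiVersion: karpenter.k8s.aws/v1beta1
        kind: EC2NodeClass
        name: default                      # existing EC2NodeClass (assumption)
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
EOF

# In the vCluster Helm values, add a matching toleration so every synced pod can run on these nodes:
#   syncer:
#     extraArgs:
#       - --enforce-toleration=vcluster.loft.sh/vcluster=vcteam-a:NoSchedule
```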
Congratulations on successfully combining vCluster for isolation and Argo CD for deployment management in a multi-tenant EKS cluster.
Cleaning up
First, delete vClusters, which cleans up namespaces, resources, and load balancers:
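A sketch, assuming the names and namespaces used earlier in this post:

```bash
# If the vClusters were created through Argo CD Applications (as sketched above),
# remove the Applications first so Argo CD does not recreate them.
kubectl delete application vcteam-a vcteam-b -n argocd

# Delete the vClusters and their namespaces.
vcluster delete vcteam-a --namespace vcteam-a --delete-namespace
vcluster delete vcteam-b --namespace vcteam-b --delete-namespace
```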
Then, delete the EKS cluster:
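For example:

```bash
eksctl delete cluster --name "$CLUSTER_NAME" --region "$AWS_REGION"
```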
Conclusion
Kubernetes multi-tenancy, combined with Amazon EKS, Loft's vCluster, and Argo CD, delivers a comprehensive solution for hosting multiple isolated applications within a shared Amazon EKS environment. This integrated approach enables application-level isolation, cost savings through streamlined infrastructure management, and automated deployments, all while harnessing the scalability and reliability of the Amazon EKS platform.
Consolidating multiple virtual clusters on a single EKS cluster allows organizations to optimize infrastructure expenditure and streamline management, without compromising on application isolation or deployment agility. The GitOps workflow facilitated by Argo CD further enhances the reliability and reproducibility of application deployments across these virtual clusters.