


# Learn how to deploy workloads and add-ons to Amazon EKS

Your workloads are deployed in containers, which are deployed in Pods in Kubernetes. A Pod includes one or more containers. Typically, one or more Pods that provide the same service are deployed in a Kubernetes service. Once you’ve deployed multiple Pods that provide the same service, you can:
+  [View information about the workloads](view-kubernetes-resources.md) running on each of your clusters using the AWS Management Console.
+ Vertically scale Pods up or down with the Kubernetes [Vertical Pod Autoscaler](vertical-pod-autoscaler.md).
+ Horizontally scale the number of Pods up or down to meet demand with the Kubernetes [Horizontal Pod Autoscaler](horizontal-pod-autoscaler.md).
+ Create an external (for internet-accessible Pods) or an internal (for private Pods) [network load balancer](network-load-balancing.md) to balance network traffic across Pods. The load balancer routes traffic at Layer 4 of the OSI model.
+ Create an [Application Load Balancer](alb-ingress.md) to balance application traffic across Pods. The application load balancer routes traffic at Layer 7 of the OSI model.
+ If you’re new to Kubernetes, see [Deploy a sample application](sample-deployment.md) to get started.
+ You can [restrict IP addresses that can be assigned to a service](restrict-service-external-ip.md) with `externalIPs`.

# Deploy a sample application on Linux

In this topic, you deploy a sample application to your cluster on Linux nodes.

## Prerequisites

+ An existing Kubernetes cluster with at least one node. If you don’t have an existing Amazon EKS cluster, you can deploy one using one of the guides in [Get started with Amazon EKS](getting-started.md).
+  `kubectl` installed on your computer. For more information, see [Set up `kubectl` and `eksctl`](install-kubectl.md).
+  `kubectl` configured to communicate with your cluster. For more information, see [Connect kubectl to an EKS cluster by creating a kubeconfig file](create-kubeconfig.md).
+ If you plan to deploy your sample workload to Fargate, then you must have an existing [Fargate profile](fargate-profile.md) that includes the same namespace created in this tutorial, which is `eks-sample-app`, unless you change the name. If you created a cluster with one of the guides in [Get started with Amazon EKS](getting-started.md), then you’ll have to create a new profile, or add the namespace to your existing profile, because the profile created in the getting started guides doesn’t specify the namespace used in this tutorial. Your VPC must also have at least one private subnet.

Though many variables are changeable in the following steps, we recommend only changing variable values where specified. Once you have a better understanding of Kubernetes Pods, deployments, and services, you can experiment with changing other values.

## Create a namespace


A namespace allows you to group resources in Kubernetes. For more information, see [Namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) in the Kubernetes documentation. If you plan to deploy your sample application to [AWS Fargate](fargate.md), make sure that the value for `namespace` in your [Fargate profile](fargate-profile.md) is `eks-sample-app`.

```
kubectl create namespace eks-sample-app
```
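
If you prefer declarative configuration, the same namespace can also be created by applying a manifest (a minimal sketch; the file name is your choice):

```
apiVersion: v1
kind: Namespace
metadata:
  name: eks-sample-app
```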

## Create a Kubernetes deployment


This sample deployment pulls a container image from a public repository and deploys three replicas (individual Pods) of it to your cluster. To learn more, see [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) in the Kubernetes documentation.

1. Save the following contents to a file named `eks-sample-deployment.yaml`. The containers in the sample application don’t use network storage, but you might have applications that need to. For more information, see [Use application data storage for your cluster](storage.md).
   + The `amd64` and `arm64` values under the `kubernetes.io/arch` key mean that the application can be deployed to either hardware architecture (if you have both in your cluster). This is possible because this image is a multi-architecture image, but not all images are. You can determine which hardware architectures an image supports by viewing the [image details](https://gallery.ecr.aws/nginx/nginx) in the repository that you’re pulling it from. When deploying an image that doesn’t support an architecture type, or an architecture that you don’t want the image deployed to, remove that value from the manifest. For more information, see [Well-Known Labels, Annotations and Taints](https://kubernetes.io/docs/reference/labels-annotations-taints/) in the Kubernetes documentation.
   + The `kubernetes.io/os: linux` `nodeSelector` means that if you had Linux and Windows nodes (for example) in your cluster, the image would only be deployed to Linux nodes. For more information, see [Well-Known Labels, Annotations and Taints](https://kubernetes.io/docs/reference/labels-annotations-taints/) in the Kubernetes documentation.

     ```
     apiVersion: apps/v1
     kind: Deployment
     metadata:
       name: eks-sample-linux-deployment
       namespace: eks-sample-app
       labels:
         app: eks-sample-linux-app
     spec:
       replicas: 3
       selector:
         matchLabels:
           app: eks-sample-linux-app
       template:
         metadata:
           labels:
             app: eks-sample-linux-app
         spec:
           affinity:
             nodeAffinity:
               requiredDuringSchedulingIgnoredDuringExecution:
                 nodeSelectorTerms:
                 - matchExpressions:
                   - key: kubernetes.io/arch
                     operator: In
                     values:
                     - amd64
                     - arm64
           containers:
           - name: nginx
             image: public.ecr.aws/nginx/nginx:1.23
             ports:
             - name: http
               containerPort: 80
             imagePullPolicy: IfNotPresent
           nodeSelector:
             kubernetes.io/os: linux
     ```

1. Apply the deployment manifest to your cluster.

   ```
   kubectl apply -f eks-sample-deployment.yaml
   ```

## Create a service


A service allows you to access all replicas through a single IP address or name. For more information, see [Service](https://kubernetes.io/docs/concepts/services-networking/service/) in the Kubernetes documentation. Though not implemented in the sample application, if you have applications that need to interact with other AWS services, we recommend that you create Kubernetes service accounts for your Pods and associate them with AWS IAM roles. By specifying service accounts, your Pods have only the minimum permissions that you specify for them to interact with other services. For more information, see [IAM roles for service accounts](iam-roles-for-service-accounts.md).

1. Save the following contents to a file named `eks-sample-service.yaml`. Kubernetes assigns the service its own IP address that is accessible only from within the cluster. To access the service from outside of your cluster, deploy the [AWS Load Balancer Controller](aws-load-balancer-controller.md) to load balance [application](alb-ingress.md) or [network](network-load-balancing.md) traffic to the service.

   ```
   apiVersion: v1
   kind: Service
   metadata:
     name: eks-sample-linux-service
     namespace: eks-sample-app
     labels:
       app: eks-sample-linux-app
   spec:
     selector:
       app: eks-sample-linux-app
     ports:
       - protocol: TCP
         port: 80
         targetPort: 80
   ```

1. Apply the service manifest to your cluster.

   ```
   kubectl apply -f eks-sample-service.yaml
   ```

## Review resources created


1. View all resources that exist in the `eks-sample-app` namespace.

   ```
   kubectl get all -n eks-sample-app
   ```

   An example output is as follows.

   ```
   NAME                                               READY   STATUS    RESTARTS   AGE
   pod/eks-sample-linux-deployment-65b7669776-m6qxz   1/1     Running   0          27m
   pod/eks-sample-linux-deployment-65b7669776-mmxvd   1/1     Running   0          27m
   pod/eks-sample-linux-deployment-65b7669776-qzn22   1/1     Running   0          27m
   
   NAME                               TYPE         CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
   service/eks-sample-linux-service   ClusterIP    10.100.74.8     <none>        80/TCP    32m
   
   NAME                                          READY   UP-TO-DATE   AVAILABLE   AGE
   deployment.apps/eks-sample-linux-deployment   3/3     3            3           27m
   
   NAME                                                     DESIRED   CURRENT   READY   AGE
   replicaset.apps/eks-sample-linux-deployment-65b7669776   3         3         3       27m
   ```

   In the output, you see the service and deployment that were specified in the sample manifests deployed in previous steps. You also see three Pods, because `replicas: 3` was specified in the sample manifest. For more information about Pods, see [Pods](https://kubernetes.io/docs/concepts/workloads/pods/) in the Kubernetes documentation. Kubernetes automatically creates the `replicaset` resource, even though it isn’t specified in the sample manifests. For more information about `ReplicaSets`, see [ReplicaSet](https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/) in the Kubernetes documentation.
**Note**  
Kubernetes maintains the number of replicas that are specified in the manifest. If this were a production deployment and you wanted Kubernetes to horizontally scale the number of replicas or vertically scale the compute resources for the Pods, use the [Scale pod deployments with Horizontal Pod Autoscaler](horizontal-pod-autoscaler.md) and the [Adjust pod resources with Vertical Pod Autoscaler](vertical-pod-autoscaler.md) to do so.
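
For illustration only (not part of this tutorial), a Horizontal Pod Autoscaler targeting the sample deployment might look like the following sketch. It assumes that the Metrics Server is installed, and it only has an effect if the containers declare CPU requests, which the sample manifest doesn’t:

```
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: eks-sample-linux-hpa
  namespace: eks-sample-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: eks-sample-linux-deployment
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```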

1. View the details of the deployed service.

   ```
   kubectl -n eks-sample-app describe service eks-sample-linux-service
   ```

   An example output is as follows.

   ```
   Name:              eks-sample-linux-service
   Namespace:         eks-sample-app
   Labels:            app=eks-sample-linux-app
   Annotations:       <none>
   Selector:          app=eks-sample-linux-app
   Type:              ClusterIP
   IP Families:       <none>
   IP:                10.100.74.8
   IPs:               10.100.74.8
   Port:              <unset>  80/TCP
   TargetPort:        80/TCP
   Endpoints:         192.168.24.212:80,192.168.50.185:80,192.168.63.93:80
   Session Affinity:  None
   Events:            <none>
   ```

   In the previous output, the value for `IP:` is a unique IP address that can be reached from any node or Pod within the cluster, but it can’t be reached from outside of the cluster. The values for `Endpoints` are IP addresses assigned from within your VPC to the Pods that are part of the service.

1. View the details of one of the Pods listed in the output when you [viewed the namespace](#sample-app-view-namespace) in a previous step. Replace *65b7669776-m6qxz* with the value returned for one of your Pods.

   ```
   kubectl -n eks-sample-app describe pod eks-sample-linux-deployment-65b7669776-m6qxz
   ```

   An abbreviated example output is as follows.

   ```
   Name:         eks-sample-linux-deployment-65b7669776-m6qxz
   Namespace:    eks-sample-app
   Priority:     0
   Node:         ip-192-168-45-132.us-west-2.compute.internal/192.168.45.132
   [...]
   IP:           192.168.63.93
   IPs:
     IP:           192.168.63.93
   Controlled By:  ReplicaSet/eks-sample-linux-deployment-65b7669776
   [...]
   Conditions:
     Type              Status
     Initialized       True
     Ready             True
     ContainersReady   True
     PodScheduled      True
   [...]
   Events:
     Type    Reason     Age    From                                                 Message
     ----    ------     ----   ----                                                 -------
     Normal  Scheduled  3m20s  default-scheduler                                    Successfully assigned eks-sample-app/eks-sample-linux-deployment-65b7669776-m6qxz to ip-192-168-45-132.us-west-2.compute.internal
   [...]
   ```

   In the previous output, the value for `IP:` is a unique IP that’s assigned to the Pod from the CIDR block assigned to the subnet that the node is in. If you prefer to assign Pods IP addresses from different CIDR blocks, you can change the default behavior. For more information, see [Deploy Pods in alternate subnets with custom networking](cni-custom-network.md). You can also see that the Kubernetes scheduler scheduled the Pod on the `Node` with the IP address *192.168.45.132*.
**Tip**  
Rather than using the command line, you can view many details about Pods, services, deployments, and other Kubernetes resources in the AWS Management Console. For more information, see [View Kubernetes resources in the AWS Management Console](view-kubernetes-resources.md).

## Run a shell on a Pod


1. Run a shell on the Pod that you described in the previous step, replacing *65b7669776-m6qxz* with the ID of one of your Pods.

   ```
   kubectl exec -it eks-sample-linux-deployment-65b7669776-m6qxz -n eks-sample-app -- /bin/bash
   ```

1. From the Pod shell, view the output from the web server that was installed with your deployment in a previous step. You only need to specify the service name, which CoreDNS resolves to the service’s IP address. CoreDNS is deployed to Amazon EKS clusters by default.

   ```
   curl eks-sample-linux-service
   ```

   An example output is as follows.

   ```
   <!DOCTYPE html>
   <html>
   <head>
   <title>Welcome to nginx!</title>
   [...]
   ```

1. From the Pod shell, view the DNS server for the Pod.

   ```
   cat /etc/resolv.conf
   ```

   An example output is as follows.

   ```
   nameserver 10.100.0.10
   search eks-sample-app.svc.cluster.local svc.cluster.local cluster.local us-west-2.compute.internal
   options ndots:5
   ```

   In the previous output, `10.100.0.10` is automatically assigned as the `nameserver` for all Pods deployed to the cluster.
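
   The `search` suffixes in this file are what let the earlier `curl` command use the short name `eks-sample-linux-service`. The resolver appends each suffix in turn until a lookup succeeds, ending at the service’s fully qualified name. The following lines (plain shell string handling, no cluster required) show the name that is ultimately queried:

   ```
   svc=eks-sample-linux-service
   ns=eks-sample-app
   # The resolver expands the short name using the first search suffix:
   echo "${svc}.${ns}.svc.cluster.local"
   ```

   From a Pod in a different namespace, you would use this fully qualified form, because the namespace-local `search` suffix wouldn’t match.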

1. Disconnect from the Pod by typing `exit`.

1. Once you’re finished with the sample application, you can remove the sample namespace, service, and deployment with the following command.

   ```
   kubectl delete namespace eks-sample-app
   ```

## Next Steps


After you deploy the sample application, you might want to try some of the following exercises:
+  [Route application and HTTP traffic with Application Load Balancers](alb-ingress.md) 
+  [Route TCP and UDP traffic with Network Load Balancers](network-load-balancing.md) 

# Deploy a sample application on Windows

In this topic, you deploy a sample application to your cluster on Windows nodes.

## Prerequisites

+ An existing Kubernetes cluster with at least one node. If you don’t have an existing Amazon EKS cluster, you can deploy one using one of the guides in [Get started with Amazon EKS](getting-started.md). You must have [Windows support](windows-support.md) enabled for your cluster and at least one Amazon EC2 Windows node.
+  `kubectl` installed on your computer. For more information, see [Set up `kubectl` and `eksctl`](install-kubectl.md).
+  `kubectl` configured to communicate with your cluster. For more information, see [Connect kubectl to an EKS cluster by creating a kubeconfig file](create-kubeconfig.md).
+ If you plan to deploy your sample workload to Fargate, then you must have an existing [Fargate profile](fargate-profile.md) that includes the same namespace created in this tutorial, which is `eks-sample-app`, unless you change the name. If you created a cluster with one of the guides in [Get started with Amazon EKS](getting-started.md), then you’ll have to create a new profile, or add the namespace to your existing profile, because the profile created in the getting started guides doesn’t specify the namespace used in this tutorial. Your VPC must also have at least one private subnet.

Though many variables are changeable in the following steps, we recommend only changing variable values where specified. Once you have a better understanding of Kubernetes Pods, deployments, and services, you can experiment with changing other values.

## Create a namespace


A namespace allows you to group resources in Kubernetes. For more information, see [Namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) in the Kubernetes documentation. If you plan to deploy your sample application to [AWS Fargate](fargate.md), make sure that the value for `namespace` in your [Fargate profile](fargate-profile.md) is `eks-sample-app`.

```
kubectl create namespace eks-sample-app
```

## Create a Kubernetes deployment


This sample deployment pulls a container image from a public repository and deploys three replicas (individual Pods) of it to your cluster. To learn more, see [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) in the Kubernetes documentation.

1. Save the following contents to a file named `eks-sample-deployment.yaml`. The containers in the sample application don’t use network storage, but you might have applications that need to. For more information, see [Use application data storage for your cluster](storage.md).
   + The `kubernetes.io/os: windows` `nodeSelector` means that if you had Windows and Linux nodes (for example) in your cluster, the image would only be deployed to Windows nodes. For more information, see [Well-Known Labels, Annotations and Taints](https://kubernetes.io/docs/reference/labels-annotations-taints/) in the Kubernetes documentation.

     ```
     apiVersion: apps/v1
     kind: Deployment
     metadata:
       name: eks-sample-windows-deployment
       namespace: eks-sample-app
       labels:
         app: eks-sample-windows-app
     spec:
       replicas: 3
       selector:
         matchLabels:
           app: eks-sample-windows-app
       template:
         metadata:
           labels:
             app: eks-sample-windows-app
         spec:
           affinity:
             nodeAffinity:
               requiredDuringSchedulingIgnoredDuringExecution:
                 nodeSelectorTerms:
                 - matchExpressions:
                   - key: kubernetes.io/arch
                     operator: In
                     values:
                     - amd64
           containers:
           - name: windows-server-iis
             image: mcr.microsoft.com/windows/servercore:ltsc2019
             ports:
             - name: http
               containerPort: 80
             imagePullPolicy: IfNotPresent
             command:
             - powershell.exe
             - -command
             - "Add-WindowsFeature Web-Server; Invoke-WebRequest -UseBasicParsing -Uri 'https://dotnetbinaries.blob.core.windows.net/servicemonitor/2.0.1.6/ServiceMonitor.exe' -OutFile 'C:\\ServiceMonitor.exe'; echo '<html><body><br/><br/><marquee><H1>Hello EKS!!!<H1><marquee></body><html>' > C:\\inetpub\\wwwroot\\default.html; C:\\ServiceMonitor.exe 'w3svc'; "
           nodeSelector:
             kubernetes.io/os: windows
     ```

1. Apply the deployment manifest to your cluster.

   ```
   kubectl apply -f eks-sample-deployment.yaml
   ```

## Create a service


A service allows you to access all replicas through a single IP address or name. For more information, see [Service](https://kubernetes.io/docs/concepts/services-networking/service/) in the Kubernetes documentation. Though not implemented in the sample application, if you have applications that need to interact with other AWS services, we recommend that you create Kubernetes service accounts for your Pods and associate them with AWS IAM roles. By specifying service accounts, your Pods have only the minimum permissions that you specify for them to interact with other services. For more information, see [IAM roles for service accounts](iam-roles-for-service-accounts.md).

1. Save the following contents to a file named `eks-sample-service.yaml`. Kubernetes assigns the service its own IP address that is accessible only from within the cluster. To access the service from outside of your cluster, deploy the [AWS Load Balancer Controller](aws-load-balancer-controller.md) to load balance [application](alb-ingress.md) or [network](network-load-balancing.md) traffic to the service.

   ```
   apiVersion: v1
   kind: Service
   metadata:
     name: eks-sample-windows-service
     namespace: eks-sample-app
     labels:
       app: eks-sample-windows-app
   spec:
     selector:
       app: eks-sample-windows-app
     ports:
       - protocol: TCP
         port: 80
         targetPort: 80
   ```

1. Apply the service manifest to your cluster.

   ```
   kubectl apply -f eks-sample-service.yaml
   ```

## Review resources created


1. View all resources that exist in the `eks-sample-app` namespace.

   ```
   kubectl get all -n eks-sample-app
   ```

   An example output is as follows.

   ```
   NAME                                               READY   STATUS    RESTARTS   AGE
   pod/eks-sample-windows-deployment-65b7669776-m6qxz   1/1     Running   0          27m
   pod/eks-sample-windows-deployment-65b7669776-mmxvd   1/1     Running   0          27m
   pod/eks-sample-windows-deployment-65b7669776-qzn22   1/1     Running   0          27m
   
   NAME                               TYPE         CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
   service/eks-sample-windows-service   ClusterIP    10.100.74.8     <none>        80/TCP    32m
   
   NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
   deployment.apps/eks-sample-windows-deployment 3/3     3            3           27m
   
   NAME                                                      DESIRED   CURRENT   READY   AGE
   replicaset.apps/eks-sample-windows-deployment-776d8f8fd8    3         3         3       27m
   ```

   In the output, you see the service and deployment that were specified in the sample manifests deployed in previous steps. You also see three Pods, because `replicas: 3` was specified in the sample manifest. For more information about Pods, see [Pods](https://kubernetes.io/docs/concepts/workloads/pods/) in the Kubernetes documentation. Kubernetes automatically creates the `replicaset` resource, even though it isn’t specified in the sample manifests. For more information about `ReplicaSets`, see [ReplicaSet](https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/) in the Kubernetes documentation.
**Note**  
Kubernetes maintains the number of replicas that are specified in the manifest. If this were a production deployment and you wanted Kubernetes to horizontally scale the number of replicas or vertically scale the compute resources for the Pods, use the [Scale pod deployments with Horizontal Pod Autoscaler](horizontal-pod-autoscaler.md) and the [Adjust pod resources with Vertical Pod Autoscaler](vertical-pod-autoscaler.md) to do so.

1. View the details of the deployed service.

   ```
   kubectl -n eks-sample-app describe service eks-sample-windows-service
   ```

   An example output is as follows.

   ```
   Name:              eks-sample-windows-service
   Namespace:         eks-sample-app
   Labels:            app=eks-sample-windows-app
   Annotations:       <none>
   Selector:          app=eks-sample-windows-app
   Type:              ClusterIP
   IP Families:       <none>
   IP:                10.100.74.8
   IPs:               10.100.74.8
   Port:              <unset>  80/TCP
   TargetPort:        80/TCP
   Endpoints:         192.168.24.212:80,192.168.50.185:80,192.168.63.93:80
   Session Affinity:  None
   Events:            <none>
   ```

   In the previous output, the value for `IP:` is a unique IP address that can be reached from any node or Pod within the cluster, but it can’t be reached from outside of the cluster. The values for `Endpoints` are IP addresses assigned from within your VPC to the Pods that are part of the service.

1. View the details of one of the Pods listed in the output when you [viewed the namespace](sample-deployment.md#sample-app-view-namespace) in a previous step. Replace *65b7669776-m6qxz* with the value returned for one of your Pods.

   ```
   kubectl -n eks-sample-app describe pod eks-sample-windows-deployment-65b7669776-m6qxz
   ```

   An abbreviated example output is as follows.

   ```
   Name:         eks-sample-windows-deployment-65b7669776-m6qxz
   Namespace:    eks-sample-app
   Priority:     0
   Node:         ip-192-168-45-132.us-west-2.compute.internal/192.168.45.132
   [...]
   IP:           192.168.63.93
   IPs:
     IP:           192.168.63.93
   Controlled By:  ReplicaSet/eks-sample-windows-deployment-65b7669776
   [...]
   Conditions:
     Type              Status
     Initialized       True
     Ready             True
     ContainersReady   True
     PodScheduled      True
   [...]
   Events:
     Type    Reason     Age    From                                                 Message
     ----    ------     ----   ----                                                 -------
     Normal  Scheduled  3m20s  default-scheduler                                    Successfully assigned eks-sample-app/eks-sample-windows-deployment-65b7669776-m6qxz to ip-192-168-45-132.us-west-2.compute.internal
   [...]
   ```

   In the previous output, the value for `IP:` is a unique IP that’s assigned to the Pod from the CIDR block assigned to the subnet that the node is in. If you prefer to assign Pods IP addresses from different CIDR blocks, you can change the default behavior. For more information, see [Deploy Pods in alternate subnets with custom networking](cni-custom-network.md). You can also see that the Kubernetes scheduler scheduled the Pod on the `Node` with the IP address *192.168.45.132*.
**Tip**  
Rather than using the command line, you can view many details about Pods, services, deployments, and other Kubernetes resources in the AWS Management Console. For more information, see [View Kubernetes resources in the AWS Management Console](view-kubernetes-resources.md).

## Run a shell on a Pod


1. Run a shell on the Pod that you described in the previous step, replacing *65b7669776-m6qxz* with the ID of one of your Pods.

   ```
   kubectl exec -it eks-sample-windows-deployment-65b7669776-m6qxz -n eks-sample-app -- powershell.exe
   ```

1. From the Pod shell, view the output from the web server that was installed with your deployment in a previous step. You only need to specify the service name, which CoreDNS resolves to the service’s IP address. CoreDNS is deployed to Amazon EKS clusters by default.

   ```
   Invoke-WebRequest -uri eks-sample-windows-service/default.html -UseBasicParsing
   ```

   An example output is as follows.

   ```
   StatusCode        : 200
   StatusDescription : OK
   Content           : < h t m l > < b o d y > < b r / > < b r / > < m a r q u e e > < H 1 > H e l l o
                         E K S ! ! ! < H 1 > < m a r q u e e > < / b o d y > < h t m l >
   ```

1. From the Pod shell, view the DNS server for the Pod.

   ```
   Get-NetIPConfiguration
   ```

   An abbreviated example output is as follows.

   ```
   InterfaceAlias       : vEthernet
   [...]
   IPv4Address          : 192.168.63.14
   [...]
   DNSServer            : 10.100.0.10
   ```

   In the previous output, `10.100.0.10` is automatically assigned as the DNS server for all Pods deployed to the cluster.

1. Disconnect from the Pod by typing `exit`.

1. Once you’re finished with the sample application, you can remove the sample namespace, service, and deployment with the following command.

   ```
   kubectl delete namespace eks-sample-app
   ```

## Next Steps


After you deploy the sample application, you might want to try some of the following exercises:
+  [Route application and HTTP traffic with Application Load Balancers](alb-ingress.md) 
+  [Route TCP and UDP traffic with Network Load Balancers](network-load-balancing.md) 

# Adjust pod resources with Vertical Pod Autoscaler

The Kubernetes [Vertical Pod Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler) automatically adjusts the CPU and memory reservations for your Pods to help "right size" your applications. This adjustment can improve cluster resource utilization and free up CPU and memory for other Pods. This topic helps you to deploy the Vertical Pod Autoscaler to your cluster and verify that it is working. Before you begin, make sure that you meet the following prerequisites:
+ You have an existing Amazon EKS cluster. If you don’t, see [Get started with Amazon EKS](getting-started.md).
+ You have the Kubernetes Metrics Server installed. For more information, see [View resource usage with the Kubernetes Metrics Server](metrics-server.md).
+ You are using a `kubectl` client that is [configured to communicate with your Amazon EKS cluster](getting-started-console.md#eks-configure-kubectl).
+ OpenSSL `1.1.1` or later installed on your device.

## Deploy the Vertical Pod Autoscaler


In this section, you deploy the Vertical Pod Autoscaler to your cluster.

1. Open a terminal window and navigate to a directory where you would like to download the Vertical Pod Autoscaler source code.

1. Clone the [kubernetes/autoscaler](https://github.com/kubernetes/autoscaler) GitHub repository.

   ```
   git clone https://github.com/kubernetes/autoscaler.git
   ```

1. Change to the `vertical-pod-autoscaler` directory.

   ```
   cd autoscaler/vertical-pod-autoscaler/
   ```

1. (Optional) If you have already deployed another version of the Vertical Pod Autoscaler, remove it with the following command.

   ```
   ./hack/vpa-down.sh
   ```

1. If your nodes don’t have internet access to the `registry.k8s.io` container registry, then you must pull the following images and push them to your own private repository. For more information, see [Copy a container image from one repository to another repository](copy-image-to-repository.md).

   ```
   registry.k8s.io/autoscaling/vpa-admission-controller:0.10.0
   registry.k8s.io/autoscaling/vpa-recommender:0.10.0
   registry.k8s.io/autoscaling/vpa-updater:0.10.0
   ```

   If you’re pushing the images to a private Amazon ECR repository, then replace `registry.k8s.io` in the manifests with your registry. Replace *111122223333* with your account ID. Replace *region-code* with the AWS Region that your cluster is in. The following commands assume that you named your repository the same as the repository name in the manifest. If you named your repository something different, then you’ll need to change it too.

   ```
   sed -i.bak -e 's/registry.k8s.io/111122223333.dkr.ecr.region-code.amazonaws.com/' ./deploy/admission-controller-deployment.yaml
   sed -i.bak -e 's/registry.k8s.io/111122223333.dkr.ecr.region-code.amazonaws.com/' ./deploy/recommender-deployment.yaml
   sed -i.bak -e 's/registry.k8s.io/111122223333.dkr.ecr.region-code.amazonaws.com/' ./deploy/updater-deployment.yaml
   ```
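
   If you want to preview the substitution before editing the real manifests, you can try the same `sed` expression on a scratch file first. This sketch uses a throwaway file and the same placeholder account ID and Region as the commands above:

   ```
   # Create a scratch file with a registry.k8s.io image reference.
   printf 'image: registry.k8s.io/autoscaling/vpa-recommender:0.10.0\n' > /tmp/vpa-image-test.yaml
   # Rewrite the registry host in place (a .bak backup of the original is kept).
   sed -i.bak -e 's/registry.k8s.io/111122223333.dkr.ecr.region-code.amazonaws.com/' /tmp/vpa-image-test.yaml
   cat /tmp/vpa-image-test.yaml
   ```

   The file should now reference `111122223333.dkr.ecr.region-code.amazonaws.com/autoscaling/vpa-recommender:0.10.0`, which is the same change the commands above make to the deployment manifests.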

1. Deploy the Vertical Pod Autoscaler to your cluster with the following command.

   ```
   ./hack/vpa-up.sh
   ```

1. Verify that the Vertical Pod Autoscaler Pods have been created successfully.

   ```
   kubectl get pods -n kube-system
   ```

   An example output is as follows.

   ```
   NAME                                        READY   STATUS    RESTARTS   AGE
   [...]
   metrics-server-8459fc497-kfj8w              1/1     Running   0          83m
   vpa-admission-controller-68c748777d-ppspd   1/1     Running   0          7s
   vpa-recommender-6fc8c67d85-gljpl            1/1     Running   0          8s
   vpa-updater-786b96955c-bgp9d                1/1     Running   0          8s
   ```

## Test your Vertical Pod Autoscaler installation


In this section, you deploy a sample application to verify that the Vertical Pod Autoscaler is working.

1. Deploy the `hamster.yaml` Vertical Pod Autoscaler example with the following command.

   ```
   kubectl apply -f examples/hamster.yaml
   ```

1. Get the Pods from the `hamster` example application.

   ```
   kubectl get pods -l app=hamster
   ```

   An example output is as follows.

   ```
   hamster-c7d89d6db-rglf5   1/1     Running   0          48s
   hamster-c7d89d6db-znvz5   1/1     Running   0          48s
   ```

1. Describe one of the Pods to view its `cpu` and `memory` reservation. Replace *c7d89d6db-rglf5* with the value from one of the Pod names returned in your output from the previous step.

   ```
   kubectl describe pod hamster-c7d89d6db-rglf5
   ```

   An example output is as follows.

   ```
   [...]
   Containers:
     hamster:
       Container ID:  docker://e76c2413fc720ac395c33b64588c82094fc8e5d590e373d5f818f3978f577e24
       Image:         registry.k8s.io/ubuntu-slim:0.1
       Image ID:      docker-pullable://registry.k8s.io/ubuntu-slim@sha256:b6f8c3885f5880a4f1a7cf717c07242eb4858fdd5a84b5ffe35b1cf680ea17b1
       Port:          <none>
       Host Port:     <none>
       Command:
         /bin/sh
       Args:
         -c
         while true; do timeout 0.5s yes >/dev/null; sleep 0.5s; done
       State:          Running
         Started:      Fri, 27 Sep 2019 10:35:16 -0700
       Ready:          True
       Restart Count:  0
       Requests:
         cpu:        100m
         memory:     50Mi
   [...]
   ```

   You can see that the original Pod reserves 100 millicpu of CPU and 50 mebibytes of memory. For this example application, 100 millicpu is less than the Pod needs to run, so it is CPU-constrained. It also reserves much less memory than it needs. The Vertical Pod Autoscaler `vpa-recommender` deployment analyzes the hamster Pods to see if the CPU and memory requirements are appropriate. If adjustments are needed, the `vpa-updater` relaunches the Pods with updated values.

1. Wait for the `vpa-updater` to launch a new hamster Pod. This should take a minute or two. You can monitor the Pods with the following command.
**Note**  
If you are not sure that a new Pod has launched, compare the Pod names with your previous list. When the new Pod launches, you will see a new Pod name.

   ```
   kubectl get --watch pods -l app=hamster
   ```

1. When a new hamster Pod has started, describe it and view the updated CPU and memory reservations.

   ```
   kubectl describe pod hamster-c7d89d6db-jxgfv
   ```

   An example output is as follows.

   ```
   [...]
   Containers:
     hamster:
       Container ID:  docker://2c3e7b6fb7ce0d8c86444334df654af6fb3fc88aad4c5d710eac3b1e7c58f7db
       Image:         registry.k8s.io/ubuntu-slim:0.1
       Image ID:      docker-pullable://registry.k8s.io/ubuntu-slim@sha256:b6f8c3885f5880a4f1a7cf717c07242eb4858fdd5a84b5ffe35b1cf680ea17b1
       Port:          <none>
       Host Port:     <none>
       Command:
         /bin/sh
       Args:
         -c
         while true; do timeout 0.5s yes >/dev/null; sleep 0.5s; done
       State:          Running
         Started:      Fri, 27 Sep 2019 10:37:08 -0700
       Ready:          True
       Restart Count:  0
       Requests:
         cpu:        587m
         memory:     262144k
   [...]
   ```

   In the previous output, you can see that the `cpu` reservation increased to 587 millicpu, which is over five times the original value. The `memory` increased to 262,144 kilobytes, which is around 250 mebibytes, or five times the original value. This Pod was under-resourced, and the Vertical Pod Autoscaler corrected the reservation with much more appropriate values.
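
   The unit conversion is easy to check: `262144k` means 262,144 kilobytes (1 kB = 1,000 bytes), and a mebibyte is 1,048,576 bytes, so the new memory request works out to exactly 250 MiB:

   ```shell
   # 262,144 kB expressed in bytes, divided by bytes per MiB; prints 250
   echo $(( 262144 * 1000 / (1024 * 1024) ))
   ```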

1. Describe the `hamster-vpa` resource to view the new recommendation.

   ```
   kubectl describe vpa/hamster-vpa
   ```

   An example output is as follows.

   ```
   Name:         hamster-vpa
   Namespace:    default
   Labels:       <none>
   Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                   {"apiVersion":"autoscaling.k8s.io/v1beta2","kind":"VerticalPodAutoscaler","metadata":{"annotations":{},"name":"hamster-vpa","namespace":"d...
   API Version:  autoscaling.k8s.io/v1beta2
   Kind:         VerticalPodAutoscaler
   Metadata:
     Creation Timestamp:  2019-09-27T18:22:51Z
     Generation:          23
     Resource Version:    14411
     Self Link:           /apis/autoscaling.k8s.io/v1beta2/namespaces/default/verticalpodautoscalers/hamster-vpa
     UID:                 d0d85fb9-e153-11e9-ae53-0205785d75b0
   Spec:
     Target Ref:
       API Version:  apps/v1
       Kind:         Deployment
       Name:         hamster
   Status:
     Conditions:
       Last Transition Time:  2019-09-27T18:23:28Z
       Status:                True
       Type:                  RecommendationProvided
     Recommendation:
       Container Recommendations:
         Container Name:  hamster
         Lower Bound:
           Cpu:     550m
           Memory:  262144k
         Target:
           Cpu:     587m
           Memory:  262144k
         Uncapped Target:
           Cpu:     587m
           Memory:  262144k
         Upper Bound:
           Cpu:     21147m
           Memory:  387863636
   Events:          <none>
   ```

1. When you finish experimenting with the example application, you can delete it with the following command.

   ```
   kubectl delete -f examples/hamster.yaml
   ```

# Scale pod deployments with Horizontal Pod Autoscaler
Horizontal Pod Autoscaler

The Kubernetes [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) automatically scales the number of Pods in a deployment, replication controller, or replica set based on that resource’s CPU utilization. This can help your applications scale out to meet increased demand or scale in when resources are not needed, thus freeing up your nodes for other applications. When you set a target CPU utilization percentage, the Horizontal Pod Autoscaler scales your application in or out to try to meet that target.

The Horizontal Pod Autoscaler is a standard API resource in Kubernetes that simply requires that a metrics source (such as the Kubernetes Metrics Server) be installed on your Amazon EKS cluster to work. You do not need to deploy or install the Horizontal Pod Autoscaler on your cluster to begin scaling your applications. For more information, see [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) in the Kubernetes documentation.

Use this topic to prepare the Horizontal Pod Autoscaler for your Amazon EKS cluster and to verify that it is working with a sample application.

**Note**  
This topic is based on the [Horizontal Pod autoscaler walkthrough](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) in the Kubernetes documentation.
Before starting this walkthrough, you must meet the following prerequisites:
+ You have an existing Amazon EKS cluster. If you don’t, see [Get started with Amazon EKS](getting-started.md).
+ You have the Kubernetes Metrics Server installed. For more information, see [View resource usage with the Kubernetes Metrics Server](metrics-server.md).
+ You are using a `kubectl` client that is [configured to communicate with your Amazon EKS cluster](getting-started-console.md#eks-configure-kubectl).

## Run a Horizontal Pod Autoscaler test application


In this section, you deploy a sample application to verify that the Horizontal Pod Autoscaler is working.

**Note**  
This example is based on the [Horizontal Pod autoscaler walkthrough](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) in the Kubernetes documentation.

1. Deploy a simple Apache web server application with the following command.

   ```
   kubectl apply -f https://k8s.io/examples/application/php-apache.yaml
   ```

   This Apache web server Pod is given a 500 millicpu CPU limit and serves on port 80.

1. Create a Horizontal Pod Autoscaler resource for the `php-apache` deployment.

   ```
   kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
   ```

   This command creates an autoscaler that targets 50 percent CPU utilization for the deployment, with a minimum of one Pod and a maximum of ten Pods. When the average CPU load is lower than 50 percent, the autoscaler tries to reduce the number of Pods in the deployment, to a minimum of one. When the load is greater than 50 percent, the autoscaler tries to increase the number of Pods in the deployment, up to a maximum of ten. For more information, see [How does a HorizontalPodAutoscaler work?](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#how-does-a-horizontalpodautoscaler-work) in the Kubernetes documentation.
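
   The scaling decision follows the formula `desiredReplicas = ceil(currentReplicas * currentUtilization / targetUtilization)` described in the Kubernetes documentation. A small shell sketch of that arithmetic (the `desired` helper is illustrative only, not part of `kubectl`):

   ```shell
   # ceil(currentReplicas * currentUtilization / targetUtilization)
   desired() {
     awk -v r="$1" -v c="$2" -v t="$3" \
       'BEGIN { d = r * c / t; printf "%d\n", (d > int(d) ? int(d) + 1 : d) }'
   }
   desired 1 250 50   # one replica at 250% CPU against a 50% target: prints 5
   desired 3 80 50    # three replicas at 80% against 50%: prints 5 (ceil of 4.8)
   ```

   This is why the example output later in this topic shows the deployment jumping to five replicas once utilization reaches `250%`.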

1. Describe the autoscaler with the following command to view its details.

   ```
   kubectl get hpa
   ```

   An example output is as follows.

   ```
   NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
   php-apache   Deployment/php-apache   0%/50%    1         10        1          51s
   ```

   As you can see, the current CPU load is `0%`, because there’s no load on the server yet. The Pod count is already at its lowest boundary (one), so it cannot scale in.

1.  Create a load for the web server by running a container.

   ```
   kubectl run -i \
       --tty load-generator \
       --rm --image=busybox \
       --restart=Never \
       -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"
   ```

1. To watch the deployment scale out, periodically run the following command in a separate terminal from the terminal that you ran the previous step in.

   ```
   kubectl get hpa php-apache
   ```

   An example output is as follows.

   ```
   NAME         REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
   php-apache   Deployment/php-apache   250%/50%   1         10        5          4m44s
   ```

   It may take over a minute for the replica count to increase. As long as the actual CPU percentage is higher than the target percentage, the replica count increases, up to 10. In this case, it’s `250%`, so the number of `REPLICAS` continues to increase.
**Note**  
It may take a few minutes before you see the replica count reach its maximum. If, for example, only 6 replicas are necessary for the CPU load to remain at or under 50%, then the deployment won’t scale beyond 6 replicas.

1. Stop the load. In the terminal window where you’re generating the load, stop it by pressing `Ctrl+C`. You can watch the replicas scale back to 1 by running the following command again in the terminal where you’re watching the scaling.

   ```
   kubectl get hpa
   ```

   An example output is as follows.

   ```
   NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
   php-apache   Deployment/php-apache   0%/50%    1         10        1          25m
   ```
**Note**  
The default timeframe for scaling back down is five minutes, so it will take some time before you see the replica count reach 1 again, even when the current CPU percentage is 0 percent. The timeframe is modifiable. For more information, see [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) in the Kubernetes documentation.

1. When you are done experimenting with your sample application, delete the `php-apache` resources.

   ```
   kubectl delete deployment.apps/php-apache service/php-apache horizontalpodautoscaler.autoscaling/php-apache
   ```

# Route TCP and UDP traffic with Network Load Balancers
Network load balancing

**Note**  
 **New:** Amazon EKS Auto Mode automates routine tasks for load balancing. For more information, see:  
 [Deploy a Sample Load Balancer Workload to EKS Auto Mode](auto-elb-example.md) 
 [Use Service Annotations to configure Network Load Balancers](auto-configure-nlb.md) 

Network traffic is load balanced at `L4` of the OSI model. To load balance application traffic at `L7`, you deploy a Kubernetes `ingress`, which provisions an AWS Application Load Balancer. For more information, see [Route application and HTTP traffic with Application Load Balancers](alb-ingress.md). To learn more about the differences between the two types of load balancing, see [Elastic Load Balancing features](https://aws.amazon.com/elasticloadbalancing/features/) on the AWS website.

When you create a Kubernetes `Service` of type `LoadBalancer`, the AWS cloud provider load balancer controller creates AWS [Classic Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/introduction.html) by default, but can also create AWS [Network Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html). Going forward, this controller receives only critical bug fixes. For more information about using the AWS cloud provider load balancer controller, see [AWS cloud provider load balancer controller](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer) in the Kubernetes documentation. Its use is not covered in this topic.

We recommend that you use version `2.7.2` or later of the [AWS Load Balancer Controller](aws-load-balancer-controller.md) instead of the AWS cloud provider load balancer controller. The AWS Load Balancer Controller creates AWS Network Load Balancers, but doesn’t create AWS Classic Load Balancers. The remainder of this topic is about using the AWS Load Balancer Controller.

An AWS Network Load Balancer can load balance network traffic to Pods deployed to Amazon EC2 IP and instance [targets](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#target-type), to AWS Fargate IP targets, or to Amazon EKS Hybrid Nodes as IP targets. For more information, see [AWS Load Balancer Controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/targetgroupbinding/targetgroupbinding/#targettype) on GitHub.

## Prerequisites


Before you can load balance network traffic using the AWS Load Balancer Controller, you must meet the following requirements.
+ Have an existing cluster. If you don’t have an existing cluster, see [Get started with Amazon EKS](getting-started.md). If you need to update the version of an existing cluster, see [Update existing cluster to new Kubernetes version](update-cluster.md).
+ Have the AWS Load Balancer Controller deployed on your cluster. For more information, see [Route internet traffic with AWS Load Balancer Controller](aws-load-balancer-controller.md). We recommend version `2.7.2` or later.
+ At least one subnet. If multiple tagged subnets are found in an Availability Zone, the controller chooses the first subnet whose subnet ID comes first lexicographically. The subnet must have at least eight available IP addresses.
+ If you’re using the AWS Load Balancer Controller version `2.1.1` or earlier, subnets must be tagged as follows. If using version `2.1.2` or later, this tag is optional. You might want to tag a subnet if you have multiple clusters running in the same VPC, or multiple AWS services sharing subnets in a VPC, and want more control over where load balancers are provisioned for each cluster. If you explicitly specify subnet IDs as an annotation on a service object, then Kubernetes and the AWS Load Balancer Controller use those subnets directly to create the load balancer. Subnet tagging isn’t required if you choose to use this method for provisioning load balancers and you can skip the following private and public subnet tagging requirements. Replace *my-cluster* with your cluster name.
  +  **Key** – `kubernetes.io/cluster/<my-cluster>` 
  +  **Value** – `shared` or `owned` 
+ Your public and private subnets must meet the following requirements, unless you explicitly specify subnet IDs as an annotation on a service or ingress object. If you provision load balancers by explicitly specifying subnet IDs as an annotation on a service or ingress object, then Kubernetes and the AWS Load Balancer Controller use those subnets directly to create the load balancer and the following tags aren’t required.
  +  **Private subnets** – Must be tagged in the following format. This is so that Kubernetes and the AWS Load Balancer Controller know that the subnets can be used for internal load balancers. If you use `eksctl` or an Amazon EKS AWS CloudFormation template to create your VPC after March 26, 2020, then the subnets are tagged appropriately when they’re created. For more information about the Amazon EKS AWS CloudFormation VPC templates, see [Create an Amazon VPC for your Amazon EKS cluster](creating-a-vpc.md).
    +  **Key** – `kubernetes.io/role/internal-elb` 
    +  **Value** – `1` 
  +  **Public subnets** – Must be tagged in the following format. This is so that Kubernetes knows to use only those subnets for external load balancers instead of choosing a public subnet in each Availability Zone (based on the lexicographical order of the subnet IDs). If you use `eksctl` or an Amazon EKS AWS CloudFormation template to create your VPC after March 26, 2020, then the subnets are tagged appropriately when they’re created. For more information about the Amazon EKS AWS CloudFormation VPC templates, see [Create an Amazon VPC for your Amazon EKS cluster](creating-a-vpc.md).
    +  **Key** – `kubernetes.io/role/elb` 
    +  **Value** – `1` 

  If the subnet role tags aren’t explicitly added, the Kubernetes service controller examines the route table of your cluster VPC subnets to determine if the subnet is private or public. We recommend that you don’t rely on this behavior, and instead explicitly add the private or public role tags. The AWS Load Balancer Controller doesn’t examine route tables, and requires the private and public tags to be present for successful auto discovery.
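
If your subnets are missing the role tags, you can add them with the AWS CLI. The loop below is a sketch with placeholder subnet IDs; it only prints the `aws ec2 create-tags` commands so that you can review them (and swap in your real subnet IDs) before piping the output to `sh`. For public subnets, use the `kubernetes.io/role/elb` key instead.

```shell
# Print (without executing) the tag commands for a set of private subnets.
for subnet in subnet-0example1 subnet-0example2; do
  echo aws ec2 create-tags --resources "$subnet" \
    --tags Key=kubernetes.io/role/internal-elb,Value=1
done
```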

## Considerations

+ The configuration of your load balancer is controlled by annotations that are added to the manifest for your service. Service annotations are different when using the AWS Load Balancer Controller than they are when using the AWS cloud provider load balancer controller. Make sure to review the [annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/) for the AWS Load Balancer Controller before deploying services.
+ When using the [Amazon VPC CNI plugin for Kubernetes](managing-vpc-cni.md), the AWS Load Balancer Controller can load balance to Amazon EC2 IP or instance targets and Fargate IP targets. When using [Alternate compatible CNI plugins](alternate-cni-plugins.md), the controller can only load balance to instance targets, unless you are load balancing to Amazon EKS Hybrid Nodes. For hybrid nodes, the controller can load balance IP targets. For more information about Network Load Balancer target types, see [Target type](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#target-type) in the *User Guide for Network Load Balancers*.
+ If you want to add tags to the load balancer when or after it’s created, add the following annotation in your service specification. For more information, see [AWS Resource Tags](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/#aws-resource-tags) in the AWS Load Balancer Controller documentation.

  ```
  service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags
  ```
+ You can assign [Elastic IP addresses](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html) to the Network Load Balancer by adding the following annotation. Replace the example values with the `Allocation IDs` of your Elastic IP addresses. The number of `Allocation IDs` must match the number of subnets that are used for the load balancer. For more information, see the [AWS Load Balancer Controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/#eip-allocations) documentation.

  ```
  service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-xxxxxxxxxxxxxxxxx,eipalloc-yyyyyyyyyyyyyyyyy
  ```
+ Amazon EKS adds one inbound rule to the node’s security group for client traffic and one rule for each load balancer subnet in the VPC for health checks for each Network Load Balancer that you create. Deployment of a service of type `LoadBalancer` can fail if Amazon EKS attempts to create rules that exceed the quota for the maximum number of rules allowed for a security group. For more information, see [Security groups](https://docs.aws.amazon.com/vpc/latest/userguide/amazon-vpc-limits.html#vpc-limits-security-groups) in the *Amazon VPC User Guide*. Consider the following options to minimize the chances of exceeding the maximum number of rules for a security group:
  + Request an increase in your rules per security group quota. For more information, see [Requesting a quota increase](https://docs.aws.amazon.com/servicequotas/latest/userguide/request-quota-increase.html) in the *Service Quotas User Guide*.
  + Use IP targets, rather than instance targets. With IP targets, you can share rules for the same target ports. You can manually specify load balancer subnets with an annotation. For more information, see [Annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/) on GitHub.
  + Use an ingress, instead of a service of type `LoadBalancer`, to send traffic to your service. The AWS Application Load Balancer requires fewer rules than Network Load Balancers. You can share an ALB across multiple ingresses. For more information, see [Route application and HTTP traffic with Application Load Balancers](alb-ingress.md). You can’t share a Network Load Balancer across multiple services.
  + Deploy your clusters to multiple accounts.
+ If your Pods run on Windows in an Amazon EKS cluster, a single service with a load balancer can support up to 1024 back-end Pods. Each Pod has its own unique IP address.
+ We recommend only creating new Network Load Balancers with the AWS Load Balancer Controller. Attempting to replace existing Network Load Balancers created with the AWS cloud provider load balancer controller can result in multiple Network Load Balancers that might cause application downtime.
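
The security group arithmetic above can be sketched in a few lines. The numbers are assumptions for illustration: a default quota of 60 inbound rules per security group (confirm the current value for your account), and each Network Load Balancer consuming one client-traffic rule plus one health-check rule per load balancer subnet:

```shell
quota=60          # default inbound rules per security group; confirm for your account
subnets=3         # load balancer subnets, typically one per Availability Zone
rules_per_nlb=$(( 1 + subnets ))   # 1 client rule + 1 health check rule per subnet
echo $(( quota / rules_per_nlb ))  # prints 15: NLB-backed services before hitting the quota
```

With three subnets per load balancer, roughly 15 services of type `LoadBalancer` would exhaust the default quota, which is why the options above matter for rule-heavy clusters.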

## Create a network load balancer


You can create a network load balancer with IP or instance targets.

### Create network load balancer — IP Targets

+ You can use IP targets with Pods deployed to Amazon EC2 nodes, Fargate, or Amazon EKS Hybrid Nodes. Your Kubernetes service must be created as type `LoadBalancer`. For more information, see [Type LoadBalancer](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer) in the Kubernetes documentation.

  To create a load balancer that uses IP targets, add the following annotations to a service manifest and deploy your service. The `external` value for `aws-load-balancer-type` is what causes the AWS Load Balancer Controller, rather than the AWS cloud provider load balancer controller, to create the Network Load Balancer. You can view a [sample service manifest](#network-load-balancing-service-sample-manifest) with the annotations.

  ```
  service.beta.kubernetes.io/aws-load-balancer-type: "external"
  service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
  ```
**Note**  
If you’re load balancing to `IPv6` Pods, add the following annotation. You can only load balance over `IPv6` to IP targets, not instance targets. Without this annotation, load balancing is over `IPv4`.

  ```
  service.beta.kubernetes.io/aws-load-balancer-ip-address-type: dualstack
  ```

  Network Load Balancers are created with the `internal` `aws-load-balancer-scheme` by default. You can launch Network Load Balancers in any subnet in your cluster’s VPC, including subnets that weren’t specified when you created your cluster.

  Kubernetes examines the route table for your subnets to identify whether they are public or private. Public subnets have a route directly to the internet using an internet gateway, but private subnets do not.

  If you want to create a Network Load Balancer in a public subnet to load balance to Amazon EC2 nodes (Fargate can only be private), specify `internet-facing` with the following annotation:

  ```
  service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
  ```
**Note**  
The `service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"` annotation is still supported for backwards compatibility. However, for new load balancers, we recommend using the annotations described previously instead of `service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"`.
**Important**  
Do not edit the annotations after creating your service. If you need to modify them, delete the service object and create it again with the desired values for these annotations.

### Create network load balancer — Instance Targets

+ The AWS cloud provider load balancer controller creates Network Load Balancers with instance targets only. Version `2.2.0` and later of the AWS Load Balancer Controller also creates Network Load Balancers with instance targets. We recommend using it, rather than the AWS cloud provider load balancer controller, to create new Network Load Balancers. You can use Network Load Balancer instance targets with Pods deployed to Amazon EC2 nodes, but not to Fargate. To load balance network traffic across Pods deployed to Fargate, you must use IP targets.

  To deploy a Network Load Balancer to a private subnet, your service specification must have the following annotations. You can view a [sample service manifest](#network-load-balancing-service-sample-manifest) with the annotations. The `external` value for `aws-load-balancer-type` is what causes the AWS Load Balancer Controller, rather than the AWS cloud provider load balancer controller, to create the Network Load Balancer.

  ```
  service.beta.kubernetes.io/aws-load-balancer-type: "external"
  service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "instance"
  ```

  Network Load Balancers are created with the `internal` `aws-load-balancer-scheme` by default. For internal Network Load Balancers, your Amazon EKS cluster must be configured to use at least one private subnet in your VPC. Kubernetes examines the route table for your subnets to identify whether they are public or private. Public subnets have a route directly to the internet using an internet gateway, but private subnets do not.

  If you want to create a Network Load Balancer in a public subnet to load balance to Amazon EC2 nodes, specify `internet-facing` with the following annotation:

  ```
  service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
  ```
**Important**  
Do not edit the annotations after creating your service. If you need to modify them, delete the service object and create it again with the desired values for these annotations.

## (Optional) Deploy a sample application

Before deploying the sample application, make sure that you have:
+ At least one public or private subnet in your cluster VPC.
+ Have the AWS Load Balancer Controller deployed on your cluster. For more information, see [Route internet traffic with AWS Load Balancer Controller](aws-load-balancer-controller.md). We recommend version `2.7.2` or later.

  1. If you’re deploying to Fargate, make sure you have an available private subnet in your VPC and create a Fargate profile. If you’re not deploying to Fargate, skip this step. You can create the profile by running the following command or in the [AWS Management Console](fargate-profile.md#create-fargate-profile) using the same values for `name` and `namespace` that are in the command. Replace the example values with your own.

     ```
     eksctl create fargateprofile \
         --cluster my-cluster \
         --region region-code \
         --name nlb-sample-app \
         --namespace nlb-sample-app
     ```

  1. Deploy a sample application.

     1. Create a namespace for the application.

        ```
        kubectl create namespace nlb-sample-app
        ```

     1. Save the following contents to a file named `sample-deployment.yaml` on your computer.

        ```
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: nlb-sample-app
          namespace: nlb-sample-app
        spec:
          replicas: 3
          selector:
            matchLabels:
              app: nginx
          template:
            metadata:
              labels:
                app: nginx
            spec:
              containers:
                - name: nginx
                  image: public.ecr.aws/nginx/nginx:1.23
                  ports:
                    - name: tcp
                      containerPort: 80
        ```

     1. Apply the manifest to the cluster.

        ```
        kubectl apply -f sample-deployment.yaml
        ```

  1. Create a service with an internet-facing Network Load Balancer that load balances to IP targets.

     1. Save the following contents to a file named `sample-service.yaml` on your computer. If you’re deploying to Fargate nodes, remove the `service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing` line.

        ```
        apiVersion: v1
        kind: Service
        metadata:
          name: nlb-sample-service
          namespace: nlb-sample-app
          annotations:
            service.beta.kubernetes.io/aws-load-balancer-type: external
            service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
            service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
        spec:
          ports:
            - port: 80
              targetPort: 80
              protocol: TCP
          type: LoadBalancer
          selector:
            app: nginx
        ```

     1. Apply the manifest to the cluster.

        ```
        kubectl apply -f sample-service.yaml
        ```

  1.  Verify that the service was deployed.

     ```
     kubectl get svc nlb-sample-service -n nlb-sample-app
     ```

     An example output is as follows.

     ```
      NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP                                                                        PORT(S)        AGE
      nlb-sample-service   LoadBalancer   10.100.240.137   k8s-nlbsampl-nlbsampl-xxxxxxxxxx-xxxxxxxxxxxxxxxx.elb.region-code.amazonaws.com    80:32400/TCP   16h
     ```
**Note**  
The values for *10.100.240.137* and *xxxxxxxxxx*-*xxxxxxxxxxxxxxxx* will be different from the example output (they are unique to your load balancer), and *region-code* may be different for you, depending on which AWS Region your cluster is in.

  1. Open the [Amazon EC2 AWS Management Console](https://console.aws.amazon.com/ec2). Select **Target Groups** (under **Load Balancing**) in the left navigation pane. In the **Name** column, select the target group’s name where the value in the **Load balancer** column matches a portion of the name in the `EXTERNAL-IP` column of the output in the previous step. For example, you’d select the target group named `k8s-nlbsampl-nlbsampl-xxxxxxxxxx` if your output were the same as the previous output. The **Target type** is `IP` because that was specified in the sample service manifest.

  1. Select the **Target group** and then select the **Targets** tab. Under **Registered targets**, you should see three IP addresses of the three replicas deployed in a previous step. Wait until the status of all targets is **healthy** before continuing. It might take several minutes before all targets are `healthy`. The targets might be in an `unhealthy` state before changing to a `healthy` state.

  1. Send traffic to the service, replacing *xxxxxxxxxx-xxxxxxxxxxxxxxxx* and *region-code* with the values returned in the output for a [previous step](#nlb-sample-app-verify-deployment) for `EXTERNAL-IP`. If you deployed to a private subnet, then you'll need to view the page from a device within your VPC, such as a bastion host. For more information, see [Linux Bastion Hosts on AWS](https://aws.amazon.com/quickstart/architecture/linux-bastion/).

     ```
     curl k8s-nlbsampl-nlbsampl-xxxxxxxxxx-xxxxxxxxxxxxxxxx.elb.region-code.amazonaws.com
     ```

     An example output is as follows.

     ```
     <!DOCTYPE html>
     <html>
     <head>
     <title>Welcome to nginx!</title>
     [...]
     ```

  1. When you’re finished with the sample deployment, service, and namespace, remove them.

     ```
     kubectl delete namespace nlb-sample-app
     ```

# Route application and HTTP traffic with Application Load Balancers
Application load balancing

**Note**  
 **New:** Amazon EKS Auto Mode automates routine tasks for load balancing. For more information, see:  
 [Deploy a Sample Load Balancer Workload to EKS Auto Mode](auto-elb-example.md) 
 [Create an IngressClass to configure an Application Load Balancer](auto-configure-alb.md) 

When you create a Kubernetes `ingress`, an AWS Application Load Balancer (ALB) is provisioned that load balances application traffic. To learn more, see [What is an Application Load Balancer?](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html) in the *Application Load Balancers User Guide* and [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) in the Kubernetes documentation. ALBs can be used with Pods that are deployed to nodes or to AWS Fargate. You can deploy an ALB to public or private subnets.

Application traffic is balanced at `L7` of the OSI model. To load balance network traffic at `L4`, you deploy a Kubernetes `service` of the `LoadBalancer` type. This type provisions an AWS Network Load Balancer. For more information, see [Route TCP and UDP traffic with Network Load Balancers](network-load-balancing.md). To learn more about the differences between the two types of load balancing, see [Elastic Load Balancing features](https://aws.amazon.com/elasticloadbalancing/features/) on the AWS website.

## Prerequisites


Before you can load balance application traffic to an application, you must meet the following requirements.
+ Have an existing cluster. If you don’t have an existing cluster, see [Get started with Amazon EKS](getting-started.md). If you need to update the version of an existing cluster, see [Update existing cluster to new Kubernetes version](update-cluster.md).
+ Have the AWS Load Balancer Controller deployed on your cluster. For more information, see [Route internet traffic with AWS Load Balancer Controller](aws-load-balancer-controller.md). We recommend version `2.7.2` or later.
+ At least two subnets in different Availability Zones. The AWS Load Balancer Controller chooses one subnet from each Availability Zone. When multiple tagged subnets are found in an Availability Zone, the controller chooses the subnet whose subnet ID comes first lexicographically. Each subnet must have at least eight available IP addresses.

  If you're using multiple security groups attached to your worker nodes, exactly one security group must be tagged as follows. Replace *my-cluster* with your cluster name.
  +  **Key** – `kubernetes.io/cluster/<my-cluster>` 
  +  **Value** – `shared` or `owned` 
+ If you're using the AWS Load Balancer Controller version `2.1.1` or earlier, subnets must be tagged in the format that follows. If you're using version `2.1.2` or later, tagging is optional. However, we recommend tagging a subnet if any of the following is true: you have multiple clusters running in the same VPC, you have multiple AWS services that share subnets in a VPC, or you want more control over where load balancers are provisioned for each cluster. Replace *my-cluster* with your cluster name.
  +  **Key** – `kubernetes.io/cluster/<my-cluster>` 
  +  **Value** – `shared` or `owned` 
+ Your public and private subnets must meet the following requirements, unless you explicitly specify subnet IDs as an annotation on a service or ingress object. In that case, Kubernetes and the AWS Load Balancer Controller use those subnets directly to create the load balancer, and the following tags aren't required.
  +  **Private subnets** – Must be tagged in the following format. This is so that Kubernetes and the AWS load balancer controller know that the subnets can be used for internal load balancers. If you use `eksctl` or an Amazon EKS AWS CloudFormation template to create your VPC after March 26, 2020, the subnets are tagged appropriately when created. For more information about the Amazon EKS AWS CloudFormation VPC templates, see [Create an Amazon VPC for your Amazon EKS cluster](creating-a-vpc.md).
    +  **Key** – `kubernetes.io/role/internal-elb` 
    +  **Value** – `1` 
  +  **Public subnets** – Must be tagged in the following format. This is so that Kubernetes knows to use only the subnets that were specified for external load balancers. This way, Kubernetes doesn’t choose a public subnet in each Availability Zone (lexicographically based on their subnet ID). If you use `eksctl` or an Amazon EKS AWS CloudFormation template to create your VPC after March 26, 2020, the subnets are tagged appropriately when created. For more information about the Amazon EKS AWS CloudFormation VPC templates, see [Create an Amazon VPC for your Amazon EKS cluster](creating-a-vpc.md).
    +  **Key** – `kubernetes.io/role/elb` 
    +  **Value** – `1` 

  If the subnet role tags aren't explicitly added, the Kubernetes service controller examines the route tables of your cluster VPC subnets to determine whether each subnet is private or public. We recommend that you don't rely on this behavior; instead, explicitly add the private or public role tags. The AWS Load Balancer Controller doesn't examine route tables, and it requires the private and public role tags to be present for successful auto discovery.
+ The [AWS Load Balancer Controller](https://github.com/kubernetes-sigs/aws-load-balancer-controller) creates ALBs and the necessary supporting AWS resources whenever a Kubernetes ingress resource is created on the cluster with the `kubernetes.io/ingress.class: alb` annotation. The ingress resource configures the ALB to route HTTP or HTTPS traffic to different Pods within the cluster. To ensure that your ingress objects use the AWS Load Balancer Controller, add the following annotation to your Kubernetes ingress specification. For more information, see [Ingress specification](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/spec/) on GitHub.

  ```
  annotations:
      kubernetes.io/ingress.class: alb
  ```
**Note**  
If you’re load balancing to `IPv6` Pods, add the following annotation to your ingress spec. You can only load balance over `IPv6` to IP targets, not instance targets. Without this annotation, load balancing is over `IPv4`.

  ```
  alb.ingress.kubernetes.io/ip-address-type: dualstack
  ```
+ The AWS Load Balancer Controller supports the following traffic modes:
  +  **Instance** – Registers nodes within your cluster as targets for the ALB. Traffic reaching the ALB is routed to `NodePort` for your service and then proxied to your Pods. This is the default traffic mode. You can also explicitly specify it with the `alb.ingress.kubernetes.io/target-type: instance` annotation.
**Note**  
Your Kubernetes service must specify the `NodePort` or `LoadBalancer` type to use this traffic mode.
  +  **IP** – Registers Pods as targets for the ALB. Traffic reaching the ALB is directly routed to Pods for your service. You must specify the `alb.ingress.kubernetes.io/target-type: ip` annotation to use this traffic mode. The IP target type is required when target Pods are running on Fargate or Amazon EKS Hybrid Nodes.
+ To tag ALBs created by the controller, add the following annotation to the controller: `alb.ingress.kubernetes.io/tags`. For a list of all available annotations supported by the AWS Load Balancer Controller, see [Ingress annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/) on GitHub.
+ Upgrading or downgrading the ALB controller version can introduce breaking changes for features that rely on it. For more information about the breaking changes that are introduced in each release, see the [ALB controller release notes](https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases) on GitHub.

## Reuse ALBs with Ingress Groups


You can share an application load balancer across multiple service resources using `IngressGroups`.

To join an ingress to a group, add the following annotation to a Kubernetes ingress resource specification.

```
alb.ingress.kubernetes.io/group.name: my-group
```

The group name must:
+ Be 63 or fewer characters in length.
+ Consist of lower case letters, numbers, `-`, and `.` 
+ Start and end with a letter or number.
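As a quick sanity check before applying an ingress, the rules above can be expressed as a small shell test. The name `my-group.v2` and the script are illustrative only, not part of the controller:

```
# Sketch: validate a candidate ingress group name against the rules above
name="my-group.v2"
# 63 characters or fewer; lowercase letters, numbers, '-', and '.';
# must start and end with a letter or number
if [ "${#name}" -le 63 ] && printf '%s' "$name" | grep -Eq '^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$'; then
  echo "valid"
else
  echo "invalid"
fi
```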

The controller automatically merges ingress rules for all ingresses in the same ingress group and supports them with a single ALB. Most annotations that are defined on an ingress only apply to the paths defined by that ingress. By default, ingress resources don't belong to any ingress group.

**Warning**  
 **Potential security risk**   
Specify an ingress group for an ingress only when all the Kubernetes users that have RBAC permission to create or modify ingress resources are within the same trust boundary. If you add the annotation with a group name, other Kubernetes users might create or modify their ingresses to belong to the same ingress group. Doing so can cause undesirable behavior, such as overwriting existing rules with higher priority rules.

You can add an order number to your ingress resource.

```
alb.ingress.kubernetes.io/group.order: '10'
```

The number can be 1-1000. The lowest number for all ingresses in the same ingress group is evaluated first. All ingresses without this annotation are evaluated with a value of zero. Duplicate rules with a higher number can overwrite rules with a lower number. By default, the rule order between ingresses within the same ingress group is determined lexicographically based on namespace and name.

**Important**  
Ensure that each ingress in the same ingress group has a unique priority number. You can’t have duplicate order numbers across ingresses.
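The evaluation order described above behaves like a numeric sort on the order value, with unannotated ingresses treated as zero and ties broken lexicographically by namespace/name. A minimal sketch using `sort`, with hypothetical ingress names:

```
# Sketch: rules are evaluated lowest group.order first (missing annotation = 0);
# equal values fall back to lexicographic namespace/name order
printf '%s\n' \
  '10 game-2048/ingress-2048' \
  '0 default/catch-all' \
  '5 shop/ingress-shop' \
  | sort -k1,1n -k2,2
```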

## (Optional) Deploy a sample application

+ At least one public or private subnet in your cluster VPC.
+ Have the AWS Load Balancer Controller deployed on your cluster. For more information, see [Route internet traffic with AWS Load Balancer Controller](aws-load-balancer-controller.md). We recommend version `2.7.2` or later.

You can run the sample application on a cluster that has Amazon EC2 nodes, Fargate Pods, or both.

1. If you’re not deploying to Fargate, skip this step. If you’re deploying to Fargate, create a Fargate profile. You can create the profile by running the following command or in the [AWS Management Console](fargate-profile.md#create-fargate-profile) using the same values for `name` and `namespace` that are in the command. Replace the example values with your own.

   ```
   eksctl create fargateprofile \
       --cluster my-cluster \
       --region region-code \
       --name alb-sample-app \
       --namespace game-2048
   ```

1. Deploy the game [2048](https://play2048.co/) as a sample application to verify that the AWS Load Balancer Controller creates an AWS ALB as a result of the ingress object. Complete the steps for the type of subnet you’re deploying to.

   1. If you’re deploying to Pods in a cluster that you created with the `IPv6` family, skip to the next step.
      +  **Public**:

      ```
      kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.14.1/docs/examples/2048/2048_full.yaml
      ```
      +  **Private**:

        1. Download the manifest.

           ```
           curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.14.1/docs/examples/2048/2048_full.yaml
           ```

        1. Edit the file and find the line that says `alb.ingress.kubernetes.io/scheme: internet-facing`.

        1. Change *internet-facing* to `internal` and save the file.

        1. Apply the manifest to your cluster.

           ```
           kubectl apply -f 2048_full.yaml
           ```

   1. If you’re deploying to Pods in a cluster that you created with the [IPv6 family](cni-ipv6.md), complete the following steps.

      1. Download the manifest.

         ```
         curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.14.1/docs/examples/2048/2048_full.yaml
         ```

      1. Open the file in an editor and add the following line to the annotations in the ingress spec.

         ```
         alb.ingress.kubernetes.io/ip-address-type: dualstack
         ```

      1. If you're load balancing to internal Pods, rather than internet-facing Pods, change the line that says `alb.ingress.kubernetes.io/scheme: internet-facing` to `alb.ingress.kubernetes.io/scheme: internal`.

      1. Save the file.

      1. Apply the manifest to your cluster.

         ```
         kubectl apply -f 2048_full.yaml
         ```

1. After a few minutes, verify that the ingress resource was created with the following command.

   ```
   kubectl get ingress/ingress-2048 -n game-2048
   ```

   An example output is as follows.

   ```
   NAME           CLASS    HOSTS   ADDRESS                                                                   PORTS   AGE
   ingress-2048   <none>   *       k8s-game2048-ingress2-xxxxxxxxxx-yyyyyyyyyy.region-code.elb.amazonaws.com   80      2m32s
   ```
**Note**  
If you created the load balancer in a private subnet, the value under `ADDRESS` in the previous output is prefaced with `internal-`.

If your ingress wasn’t successfully created after several minutes, run the following command to view the AWS Load Balancer Controller logs. These logs might contain error messages that you can use to diagnose issues with your deployment.

```
kubectl logs -f -n kube-system -l app.kubernetes.io/instance=aws-load-balancer-controller
```

1. If you deployed to a public subnet, open a browser and navigate to the `ADDRESS` URL from the previous command output to see the sample application. If you don’t see anything, refresh your browser and try again. If you deployed to a private subnet, then you’ll need to view the page from a device within your VPC, such as a bastion host. For more information, see [Linux Bastion Hosts on AWS](https://aws.amazon.com/quickstart/architecture/linux-bastion/).  
![\[2048 sample application\]](http://docs.aws.amazon.com/eks/latest/userguide/images/2048.png)

1. When you finish experimenting with your sample application, delete it by running one of the following commands.
   + If you applied the manifest, rather than applying a copy that you downloaded, use the following command.

     ```
     kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.14.1/docs/examples/2048/2048_full.yaml
     ```
   + If you downloaded and edited the manifest, use the following command.

     ```
     kubectl delete -f 2048_full.yaml
     ```

# Restrict external IP addresses that can be assigned to services
Restrict service external IPs

Kubernetes services can be reached from inside of a cluster through:
+ A cluster IP address that is assigned automatically by Kubernetes
+ Any IP address that you specify for the `externalIPs` property in a service spec. External IP addresses are not managed by Kubernetes and are the responsibility of the cluster administrator. External IP addresses specified with `externalIPs` are different than the external IP address assigned to a service of type `LoadBalancer` by a cloud provider.

To learn more about Kubernetes services, see [Service](https://kubernetes.io/docs/concepts/services-networking/service/) in the Kubernetes documentation. You can restrict the IP addresses that can be specified for `externalIPs` in a service spec.
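For illustration, a minimal service spec that sets `externalIPs` might look like the following. The names and the `192.168.1.1` address are hypothetical; the `externalIPs` field is the point of the example.

```
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      protocol: TCP
  externalIPs:
    - 192.168.1.1
```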

1. Deploy `cert-manager` to manage webhook certificates. For more information, see the [cert-manager](https://cert-manager.io/docs/) documentation.

   ```
   kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.4/cert-manager.yaml
   ```

1. Verify that the `cert-manager` Pods are running.

   ```
   kubectl get pods -n cert-manager
   ```

   An example output is as follows.

   ```
   NAME                                       READY   STATUS    RESTARTS   AGE
   cert-manager-58c8844bb8-nlx7q              1/1     Running   0          15s
   cert-manager-cainjector-745768f6ff-696h5   1/1     Running   0          15s
   cert-manager-webhook-67cc76975b-4v4nk      1/1     Running   0          14s
   ```

1. Review your existing services to ensure that none of them have external IP addresses assigned to them that aren’t contained within the CIDR block you want to limit addresses to.

   ```
   kubectl get services -A
   ```

   An example output is as follows.

   ```
   NAMESPACE                      NAME                                    TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)         AGE
   cert-manager                   cert-manager                            ClusterIP      10.100.102.137   <none>          9402/TCP        20m
   cert-manager                   cert-manager-webhook                    ClusterIP      10.100.6.136     <none>          443/TCP         20m
   default                        kubernetes                              ClusterIP      10.100.0.1       <none>          443/TCP         2d1h
   externalip-validation-system   externalip-validation-webhook-service   ClusterIP      10.100.234.179   <none>          443/TCP         16s
   kube-system                    kube-dns                                ClusterIP      10.100.0.10      <none>          53/UDP,53/TCP   2d1h
   my-namespace                   my-service                              ClusterIP      10.100.128.10    192.168.1.1     80/TCP          149m
   ```

   If any of the values are IP addresses that aren't within the block you want to restrict access to, you'll need to change the addresses to be within the block and redeploy the services. For example, the `my-service` service in the previous output has an external IP address assigned to it that isn't within the CIDR block example in the Specify CIDR blocks step.

1. Download the external IP webhook manifest. You can also view the [source code for the webhook](https://github.com/kubernetes-sigs/externalip-webhook) on GitHub.

   ```
   curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/docs/externalip-webhook.yaml
   ```

1. Specify CIDR blocks. Open the downloaded file in your editor and remove the `#` at the start of the following lines.

   ```
   #args:
   #- --allowed-external-ip-cidrs=10.0.0.0/8
   ```

   Replace `10.0.0.0/8` with your own CIDR block. You can specify as many blocks as you like. If specifying multiple blocks, add a comma between blocks.
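   For example, after uncommenting, an edit that allows two hypothetical blocks would look like the following.

   ```
   args:
   - --allowed-external-ip-cidrs=10.0.0.0/8,172.16.0.0/12
   ```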

1. If your cluster is not in the `us-west-2` AWS Region, then replace `us-west-2`, `602401143452`, and `amazonaws.com` in the file by running the following commands. Before running the commands, replace *region-code* and *111122223333* with the values for your AWS Region from the list in [View Amazon container image registries for Amazon EKS add-ons](add-ons-images.md).

   ```
   sed -i.bak -e 's|602401143452|111122223333|' externalip-webhook.yaml
   sed -i.bak -e 's|us-west-2|region-code|' externalip-webhook.yaml
   sed -i.bak -e 's|amazonaws.com||' externalip-webhook.yaml
   ```

1. Apply the manifest to your cluster.

   ```
   kubectl apply -f externalip-webhook.yaml
   ```

   An attempt to deploy a service to your cluster with an IP address specified for `externalIPs` that is not contained in the blocks that you specified in the Specify CIDR blocks step will fail.

# Copy a container image from one repository to another repository
Copy an image to a repository

This topic describes how to pull a container image from a repository that your nodes don’t have access to and push the image to a repository that your nodes have access to. You can push the image to Amazon ECR or an alternative repository that your nodes have access to.
+ The Docker engine installed and configured on your computer. For instructions, see [Install Docker Engine](https://docs.docker.com/engine/install/) in the Docker documentation.
+ Version `2.12.3` or later or version `1.27.160` or later of the AWS Command Line Interface (AWS CLI) installed and configured on your device or AWS CloudShell. To check your current version, use `aws --version | cut -d / -f2 | cut -d ' ' -f1`. Package managers such as `yum`, `apt-get`, or Homebrew for macOS are often several versions behind the latest version of the AWS CLI. To install the latest version, see [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) and [Quick configuration with aws configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config) in the *AWS Command Line Interface User Guide*. The AWS CLI version that is installed in AWS CloudShell might also be several versions behind the latest version. To update it, see [Installing AWS CLI to your home directory](https://docs.aws.amazon.com/cloudshell/latest/userguide/vm-specs.html#install-cli-software) in the *AWS CloudShell User Guide*.
+ An interface VPC endpoint for Amazon ECR if you want your nodes to pull container images from or push container images to a private Amazon ECR repository over Amazon’s network. For more information, see [Create the VPC endpoints for Amazon ECR](https://docs.aws.amazon.com/AmazonECR/latest/userguide/vpc-endpoints.html#ecr-setting-up-vpc-create) in the Amazon Elastic Container Registry User Guide.

Complete the following steps to pull a container image from a repository and push it to your own repository. In the following examples that are provided in this topic, the image for the [Amazon VPC CNI plugin for Kubernetes metrics helper](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/cmd/cni-metrics-helper/README.md) is pulled. When you follow these steps, make sure to replace the example values with your own values.

1. If you don't already have an Amazon ECR repository or another repository, then create one that your nodes have access to. The following command creates an Amazon ECR private repository. An Amazon ECR private repository name must start with a letter. It can only contain lowercase letters, numbers, hyphens (-), underscores (_), and forward slashes (/). For more information, see [Creating a private repository](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html) in the Amazon Elastic Container Registry User Guide.

   You can replace *cni-metrics-helper* with whatever you choose. As a best practice, create a separate repository for each image. We recommend this because image tags must be unique within a repository. Replace *region-code* with an [AWS Region supported by Amazon ECR](https://docs.aws.amazon.com/general/latest/gr/ecr.html).

   ```
   aws ecr create-repository --region region-code --repository-name cni-metrics-helper
   ```

1. Determine the registry, repository, and tag (optional) of the image that your nodes need to pull. This information is in the `registry/repository[:tag]` format.

   Many of the Amazon EKS topics about installing images require that you apply a manifest file or install the image using a Helm chart. However, before you apply a manifest file or install a Helm chart, first view the contents of the manifest or chart’s `values.yaml` file. That way, you can determine the registry, repository, and tag to pull.

   For example, you can find the following line in the [manifest file](https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/v1.12.6/config/master/cni-metrics-helper.yaml) for the [Amazon VPC CNI plugin for Kubernetes metrics helper](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/cmd/cni-metrics-helper/README.md). The registry is `602401143452.dkr.ecr.us-west-2.amazonaws.com`, which is an Amazon ECR private registry. The repository is `cni-metrics-helper`.

   ```
   image: "602401143452.dkr.ecr.us-west-2.amazonaws.com/cni-metrics-helper:v1.12.6"
   ```
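   As a sketch, standard shell parameter expansion can split such a reference into its parts. This assumes a tag is present, as in the line above:

   ```
   # Sketch: split a registry/repository:tag reference into its components
   image="602401143452.dkr.ecr.us-west-2.amazonaws.com/cni-metrics-helper:v1.12.6"
   registry="${image%%/*}"   # everything before the first '/'
   rest="${image#*/}"        # repository:tag
   repository="${rest%%:*}"
   tag="${rest##*:}"
   echo "$registry $repository $tag"
   ```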

   You may see the following variations for an image location:
   + Only `repository-name:tag`. In this case, `docker.io` is usually the registry, but not specified since Kubernetes prepends it to a repository name by default if no registry is specified.
   +  `repository-name/repository-namespace/repository:tag`. A repository namespace is optional, but is sometimes specified by the repository owner for categorizing images. For example, all [Amazon EC2 images in the Amazon ECR Public Gallery](https://gallery.ecr.aws/aws-ec2/) use the `aws-ec2` namespace.

     Before installing an image with Helm, view the Helm `values.yaml` file to determine the image location. For example, the [values.yaml](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/charts/cni-metrics-helper/values.yaml#L5-L9) file for the [Amazon VPC CNI plugin for Kubernetes metrics helper](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/cmd/cni-metrics-helper/README.md) includes the following lines.

     ```
     image:
       region: us-west-2
       tag: v1.12.6
       account: "602401143452"
       domain: "amazonaws.com"
     ```

1. Pull the container image specified in the manifest file.

   1. If you’re pulling from a public registry, such as the [Amazon ECR Public Gallery](https://gallery.ecr.aws/), you can skip to the next sub-step, because authentication isn’t required. In this example, you authenticate to an Amazon ECR private registry that contains the repository for the CNI metrics helper image. Amazon EKS maintains the image in each registry listed in [View Amazon container image registries for Amazon EKS add-ons](add-ons-images.md). You can authenticate to any of the registries by replacing *602401143452* and *region-code* with the information for a different registry. A separate registry exists for each [AWS Region that Amazon EKS is supported in](https://docs.aws.amazon.com/general/latest/gr/eks.html#eks_region).

      ```
      aws ecr get-login-password --region region-code | docker login --username AWS --password-stdin 602401143452.dkr.ecr.region-code.amazonaws.com
      ```

   1. Pull the image. In this example, you pull from the registry that you authenticated to in the previous sub-step. Replace *602401143452* and *region-code* with the information that you provided in the previous sub-step.

      ```
      docker pull 602401143452.dkr.ecr.region-code.amazonaws.com/cni-metrics-helper:v1.12.6
      ```

1. Tag the image that you pulled with your registry, repository, and tag. The following example assumes that you pulled the image from the manifest file and are going to push it to the Amazon ECR private repository that you created in the first step. Replace *111122223333* with your account ID. Replace *region-code* with the AWS Region that you created your Amazon ECR private repository in.

   ```
   docker tag cni-metrics-helper:v1.12.6 111122223333.dkr.ecr.region-code.amazonaws.com/cni-metrics-helper:v1.12.6
   ```

1. Authenticate to your registry. In this example, you authenticate to the Amazon ECR private registry that you created in the first step. For more information, see [Registry authentication](https://docs.aws.amazon.com/AmazonECR/latest/userguide/Registries.html#registry_auth) in the Amazon Elastic Container Registry User Guide.

   ```
   aws ecr get-login-password --region region-code | docker login --username AWS --password-stdin 111122223333.dkr.ecr.region-code.amazonaws.com
   ```

1. Push the image to your repository. In this example, you push the image to the Amazon ECR private repository that you created in the first step. For more information, see [Pushing a Docker image](https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html) in the Amazon Elastic Container Registry User Guide.

   ```
   docker push 111122223333.dkr.ecr.region-code.amazonaws.com/cni-metrics-helper:v1.12.6
   ```

1. Update the manifest file that you used to determine the image in a previous step with the `registry/repository:tag` for the image that you pushed. If you’re installing with a Helm chart, there’s often an option to specify the `registry/repository:tag`. When installing the chart, specify the `registry/repository:tag` for the image that you pushed to your repository.

# View Amazon container image registries for Amazon EKS add-ons
View Amazon image registries

When you deploy [AWS Amazon EKS add-ons](workloads-add-ons-available-eks.md) to your cluster, your nodes pull the required container images from the registry specified in the installation mechanism for the add-on, such as an installation manifest or a Helm `values.yaml` file. The images are pulled from an Amazon EKS Amazon ECR private repository. Amazon EKS replicates the images to a repository in each Amazon EKS supported AWS Region. Your nodes can pull the container image over the internet from any of the following registries. Alternatively, your nodes can pull the image over Amazon’s network if you created an [interface VPC endpoint for Amazon ECR (AWS PrivateLink)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/vpc-endpoints.html) in your VPC. The registries require authentication with an AWS IAM account. Your nodes authenticate using the [Amazon EKS node IAM role](create-node-role.md), which has the permissions in the [AmazonEC2ContainerRegistryReadOnly](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonEC2ContainerRegistryReadOnly.html) managed IAM policy associated to it.


|  AWS Region | Registry | 
| --- | --- | 
|  af-south-1  |  877085696533.dkr.ecr.af-south-1.amazonaws.com  | 
|  ap-east-1  |  800184023465.dkr.ecr.ap-east-1.amazonaws.com  | 
|  ap-east-2  |  533267051163.dkr.ecr.ap-east-2.amazonaws.com  | 
|  ap-southeast-3  |  296578399912.dkr.ecr.ap-southeast-3.amazonaws.com  | 
|  ap-south-2  |  900889452093.dkr.ecr.ap-south-2.amazonaws.com  | 
|  ap-southeast-4  |  491585149902.dkr.ecr.ap-southeast-4.amazonaws.com  | 
|  ap-southeast-5  |  151610086707.dkr.ecr.ap-southeast-5.amazonaws.com  | 
|  ap-southeast-6  |  333609536671.dkr.ecr.ap-southeast-6.amazonaws.com  | 
|  ap-south-1  |  602401143452.dkr.ecr.ap-south-1.amazonaws.com  | 
|  ap-northeast-3  |  602401143452.dkr.ecr.ap-northeast-3.amazonaws.com  | 
|  ap-northeast-2  |  602401143452.dkr.ecr.ap-northeast-2.amazonaws.com  | 
|  ap-southeast-1  |  602401143452.dkr.ecr.ap-southeast-1.amazonaws.com  | 
|  ap-southeast-2  |  602401143452.dkr.ecr.ap-southeast-2.amazonaws.com  | 
|  ap-southeast-7  |  121268973566.dkr.ecr.ap-southeast-7.amazonaws.com  | 
|  ap-northeast-1  |  602401143452.dkr.ecr.ap-northeast-1.amazonaws.com  | 
|  cn-north-1  |  918309763551---dkr---ecr---cn-north-1.amazonaws.com.rproxy.govskope.us.cn  | 
|  cn-northwest-1  |  961992271922---dkr---ecr---cn-northwest-1.amazonaws.com.rproxy.govskope.us.cn  | 
|  eu-central-1  |  602401143452.dkr.ecr.eu-central-1.amazonaws.com  | 
|  eu-west-1  |  602401143452.dkr.ecr.eu-west-1.amazonaws.com  | 
|  eu-west-2  |  602401143452.dkr.ecr.eu-west-2.amazonaws.com  | 
|  eu-south-1  |  590381155156.dkr.ecr.eu-south-1.amazonaws.com  | 
|  eu-west-3  |  602401143452.dkr.ecr.eu-west-3.amazonaws.com  | 
|  eu-south-2  |  455263428931.dkr.ecr.eu-south-2.amazonaws.com  | 
|  eu-north-1  |  602401143452.dkr.ecr.eu-north-1.amazonaws.com  | 
|  eu-central-2  |  900612956339.dkr.ecr.eu-central-2.amazonaws.com  | 
|  il-central-1  |  066635153087.dkr.ecr.il-central-1.amazonaws.com  | 
|  mx-central-1  |  730335286997.dkr.ecr.mx-central-1.amazonaws.com  | 
|  me-south-1  |  558608220178.dkr.ecr.me-south-1.amazonaws.com  | 
|  me-central-1  |  759879836304.dkr.ecr.me-central-1.amazonaws.com  | 
|  us-east-1  |  602401143452.dkr.ecr.us-east-1.amazonaws.com  | 
|  us-east-2  |  602401143452.dkr.ecr.us-east-2.amazonaws.com  | 
|  us-west-1  |  602401143452.dkr.ecr.us-west-1.amazonaws.com  | 
|  us-west-2  |  602401143452.dkr.ecr.us-west-2.amazonaws.com  | 
|  ca-central-1  |  602401143452.dkr.ecr.ca-central-1.amazonaws.com  | 
|  ca-west-1  |  761377655185.dkr.ecr.ca-west-1.amazonaws.com  | 
|  sa-east-1  |  602401143452.dkr.ecr.sa-east-1.amazonaws.com  | 
|  us-gov-east-1  |  151742754352.dkr.ecr.us-gov-east-1.amazonaws.com  | 
|  us-gov-west-1  |  013241004608.dkr.ecr.us-gov-west-1.amazonaws.com  | 
|  eusc-de-east-1  |  877088126301.dkr.ecr.eusc-de-east-1.amazonaws.eu  | 

# Amazon EKS add-ons
Amazon EKS add-ons

An add-on is software that provides supporting operational capabilities to Kubernetes applications, but is not specific to the application. This includes software like observability agents or Kubernetes drivers that allow the cluster to interact with underlying AWS resources for networking, compute, and storage. Add-on software is typically built and maintained by the Kubernetes community, cloud providers like AWS, or third-party vendors. Amazon EKS automatically installs self-managed add-ons such as the Amazon VPC CNI plugin for Kubernetes, `kube-proxy`, and CoreDNS for every cluster. Note that the VPC CNI add-on isn’t compatible with Amazon EKS Hybrid Nodes and doesn’t deploy to hybrid nodes. You can change the default configuration of the add-ons and update them when desired.

Amazon EKS add-ons provide installation and management of a curated set of add-ons for Amazon EKS clusters. All Amazon EKS add-ons include the latest security patches, bug fixes, and are validated by AWS to work with Amazon EKS. Amazon EKS add-ons allow you to consistently ensure that your Amazon EKS clusters are secure and stable and reduce the amount of work that you need to do in order to install, configure, and update add-ons. If a self-managed add-on, such as `kube-proxy` is already running on your cluster and is available as an Amazon EKS add-on, then you can install the `kube-proxy` Amazon EKS add-on to start benefiting from the capabilities of Amazon EKS add-ons.

You can update specific Amazon EKS managed configuration fields for Amazon EKS add-ons through the Amazon EKS API. You can also modify configuration fields not managed by Amazon EKS directly within the Kubernetes cluster once the add-on starts. This includes defining specific configuration fields for an add-on where applicable. These changes are not overridden by Amazon EKS once they are made. This is made possible using the Kubernetes server-side apply feature. For more information, see [Determine fields you can customize for Amazon EKS add-ons](kubernetes-field-management.md).

You can use Amazon EKS add-ons with any Amazon EKS node type. For more information, see [Manage compute resources by using nodes](eks-compute.md).

You can add, update, or delete Amazon EKS add-ons using the Amazon EKS API, AWS Management Console, AWS CLI, and `eksctl`. You can also create Amazon EKS add-ons using [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-eks-addon.html).

## Considerations


Consider the following when you use Amazon EKS add-ons:
+ To configure add-ons for the cluster your [IAM principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html#iam-term-principal) must have IAM permissions to work with add-ons. For more information, see the actions with `Addon` in their name in [Actions defined by Amazon Elastic Kubernetes Service](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonelastickubernetesservice.html#amazonelastickubernetesservice-actions-as-permissions).
+ Amazon EKS add-ons run on the nodes that you provision or configure for your cluster. Node types include Amazon EC2 instances, Fargate, and hybrid nodes.
+ You can modify fields that aren’t managed by Amazon EKS to customize the installation of an Amazon EKS add-on. For more information, see [Determine fields you can customize for Amazon EKS add-ons](kubernetes-field-management.md).
+ If you create a cluster with the AWS Management Console, the Amazon EKS `kube-proxy`, Amazon VPC CNI plugin for Kubernetes, and CoreDNS Amazon EKS add-ons are automatically added to your cluster. If you use `eksctl` to create your cluster with a `config` file, `eksctl` can also create the cluster with Amazon EKS add-ons. If you create your cluster using `eksctl` without a `config` file or with any other tool, the self-managed `kube-proxy`, Amazon VPC CNI plugin for Kubernetes, and CoreDNS add-ons are installed, rather than the Amazon EKS add-ons. You can either manage them yourself or add the Amazon EKS add-ons manually after cluster creation. Regardless of the method that you use to create your cluster, the VPC CNI add-on doesn’t install on hybrid nodes.
+ The `eks:addon-cluster-admin` `ClusterRoleBinding` binds the `cluster-admin` `ClusterRole` to the `eks:addon-manager` Kubernetes identity. The role has the necessary permissions for the `eks:addon-manager` identity to create Kubernetes namespaces and install add-ons into namespaces. If the `eks:addon-cluster-admin` `ClusterRoleBinding` is removed, the Amazon EKS cluster continues to function, but Amazon EKS is no longer able to manage any add-ons.
+ A subset of EKS add-ons from AWS have been validated for compatibility with Amazon EKS Hybrid Nodes. For more information, see the compatibility table on [AWS add-ons](workloads-add-ons-available-eks.md).
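
As a sketch of the IAM permissions mentioned above, an identity-based policy that allows working with add-ons might look like the following. The action names come from the EKS service authorization reference, but this list is illustrative rather than exhaustive, and you should scope `Resource` down to specific clusters for production use:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ManageEksAddons",
      "Effect": "Allow",
      "Action": [
        "eks:CreateAddon",
        "eks:DescribeAddon",
        "eks:DescribeAddonConfiguration",
        "eks:DescribeAddonVersions",
        "eks:ListAddons",
        "eks:UpdateAddon",
        "eks:DeleteAddon"
      ],
      "Resource": "*"
    }
  ]
}
```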

## Custom namespace for add-ons


For community and AWS add-ons, you can optionally specify a custom namespace during add-on creation. Once you install an add-on in a specific namespace, you must remove and re-create the add-on to change its namespace.

If you don’t specify a namespace, the add-on is installed into its predefined namespace.

Use custom namespaces for better organization and isolation of add-on objects within your EKS cluster. This flexibility helps you align add-ons with your operational needs and existing namespace strategy.

You can set a custom namespace when creating an add-on. For more information, see [Create an Amazon EKS add-on](creating-an-add-on.md).

### Get predefined namespace for add-on


The predefined namespace for an add-on is the namespace it will be installed into if you don’t specify one.

To get the predefined namespace for an add-on, use the following command:

```
aws eks describe-addon-versions --addon-name <addon-name> --query "addons[].defaultNamespace"
```

Example output:

```
[
    "kube-system"
]
```

## Considerations for Amazon EKS Auto Mode


Amazon EKS Auto Mode includes capabilities that deliver essential cluster functionality, including:
+ Pod networking
+ Service networking
+ Cluster DNS
+ Autoscaling
+ Block storage
+ Load balancer controller
+ Pod Identity agent
+ Node monitoring agent

With Auto Mode compute, many commonly used EKS add-ons become redundant, such as:
+ Amazon VPC CNI
+ kube-proxy
+ CoreDNS
+ Amazon EBS CSI Driver
+ EKS Pod Identity Agent

However, if your cluster combines Auto Mode with other compute options, such as self-managed EC2 instances, managed node groups, or AWS Fargate, these add-ons remain necessary. AWS has enhanced EKS add-ons with anti-affinity rules that automatically ensure add-on Pods are scheduled only on supported compute types. You can also use the EKS `DescribeAddonVersions` API to verify the supported `computeTypes` for each add-on and its specific versions. Additionally, with EKS Auto Mode, the capabilities listed above run on AWS-owned infrastructure, so you won’t see the corresponding controllers in your account unless you use EKS Auto Mode with other compute types, in which case you see the controllers that you installed on your cluster.

If you are planning to enable EKS Auto Mode on an existing cluster, you may need to upgrade the version of certain add-ons. For more information, see [Required add-on versions](auto-enable-existing.md#auto-addons-required) for EKS Auto Mode.

## Support


 AWS publishes multiple types of add-ons with different levels of support.
+  **AWS Add-ons:** These add-ons are built and fully supported by AWS.
  + Use an AWS add-on to work with other AWS services, such as Amazon EFS.
  + For more information, see [AWS add-ons](workloads-add-ons-available-eks.md).
+  **AWS Marketplace Add-ons:** These add-ons are scanned by AWS and supported by an independent AWS partner.
  + Use a marketplace add-on to add valuable and sophisticated features to your cluster, such as monitoring with Splunk.
  + For more information, see [AWS Marketplace add-ons](workloads-add-ons-available-vendors.md).
+  **Community Add-ons**: These add-ons are scanned by AWS but supported by the open source community.
  + Use a community add-on to reduce the complexity of installing common open source software, such as Kubernetes Metrics Server.
  + Community add-ons are packaged from source by AWS. AWS only validates community add-ons for version compatibility.
  + For more information, see [Community add-ons](community-addons.md).

The following table details the scope of support for each add-on type:


| Category | Feature |  AWS add-ons |  AWS Marketplace add-ons | Community add-ons | 
| --- | --- | --- | --- | --- | 
|  Development  |  Built by AWS   |  Yes  |  No  |  Yes  | 
|  Development  |  Validated by AWS   |  Yes  |  No  |  Yes*  | 
|  Development  |  Validated by AWS Partner  |  No  |  Yes  |  No  | 
|  Maintenance  |  Scanned by AWS   |  Yes  |  Yes  |  Yes  | 
|  Maintenance  |  Patched by AWS   |  Yes  |  No  |  Yes  | 
|  Maintenance  |  Patched by AWS Partner  |  No  |  Yes  |  No  | 
|  Distribution  |  Published by AWS   |  Yes  |  No  |  Yes  | 
|  Distribution  |  Published by AWS Partner  |  No  |  Yes  |  No  | 
|  Support  |  Basic Install Support by AWS   |  Yes  |  Yes  |  Yes  | 
|  Support  |  Full AWS Support  |  Yes  |  No  |  No  | 
|  Support  |  Full AWS Partner Support  |  No  |  Yes  |  No  | 

 `*`: Validation for community add-ons only includes Kubernetes version compatibility. For example, if you install a community add-on on a cluster, AWS checks if it is compatible with the Kubernetes version of your cluster.

 AWS Marketplace add-ons can download additional software dependencies from external sources outside of AWS. These external dependencies are not scanned or validated by AWS. Consider your security requirements when deploying AWS Marketplace add-ons that fetch external dependencies.

# AWS add-ons
AWS add-ons

The following Amazon EKS add-ons are available to create on your cluster. You can view the most current list of available add-ons using `eksctl`, the AWS Management Console, or the AWS CLI. To see all available add-ons or to install an add-on, see [Create an Amazon EKS add-on](creating-an-add-on.md). If an add-on requires IAM permissions, then you must have an IAM OpenID Connect (OIDC) provider for your cluster. To determine whether you have one, or to create one, see [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md). You can update or delete an add-on after you’ve installed it. For more information, see [Update an Amazon EKS add-on](updating-an-add-on.md) or [Remove an Amazon EKS add-on from a cluster](removing-an-add-on.md). For more information about considerations specific to running EKS add-ons with Amazon EKS Hybrid Nodes, see [Configure add-ons for hybrid nodes](hybrid-nodes-add-ons.md).

You can use any of the following Amazon EKS add-ons.


| Description | Learn more | Compatible compute types | 
| --- | --- | --- | 
|  Provide native VPC networking for your cluster  |   [Amazon VPC CNI plugin for Kubernetes](#add-ons-vpc-cni)   |  EC2  | 
|  A flexible, extensible DNS server that can serve as the Kubernetes cluster DNS  |   [CoreDNS](#add-ons-coredns)   |  EC2, Fargate, EKS Auto Mode, EKS Hybrid Nodes  | 
|  Maintain network rules on each Amazon EC2 node  |   [`Kube-proxy`](#add-ons-kube-proxy)   |  EC2, EKS Hybrid Nodes  | 
|  Provide Amazon EBS storage for your cluster  |   [Amazon EBS CSI driver](#add-ons-aws-ebs-csi-driver)   |  EC2  | 
|  Provide Amazon EFS storage for your cluster  |   [Amazon EFS CSI driver](#add-ons-aws-efs-csi-driver)   |  EC2, EKS Auto Mode  | 
|  Provide Amazon S3 Files storage for your cluster  |   [Amazon EFS CSI driver](#add-ons-aws-efs-csi-driver)   |  EC2, EKS Auto Mode  | 
|  Provide Amazon FSx for Lustre storage for your cluster  |   [Amazon FSx CSI driver](#add-ons-aws-fsx-csi-driver)   |  EC2, EKS Auto Mode  | 
|  Provide Amazon S3 storage for your cluster  |   [Mountpoint for Amazon S3 CSI Driver](#mountpoint-for-s3-add-on)   |  EC2, EKS Auto Mode  | 
|  Detect additional node health issues  |   [Node monitoring agent](#add-ons-eks-node-monitoring-agent)   |  EC2, EKS Hybrid Nodes  | 
|  Enable the use of snapshot functionality in compatible CSI drivers, such as the Amazon EBS CSI driver  |   [CSI snapshot controller](#addons-csi-snapshot-controller)   |  EC2, Fargate, EKS Auto Mode, EKS Hybrid Nodes  | 
|  SageMaker HyperPod task governance optimizes compute resource allocation and usage across teams in Amazon EKS clusters, addressing inefficiencies in task prioritization and resource sharing.  |   [Amazon SageMaker HyperPod task governance](#addons-hyperpod)   |  EC2, EKS Auto Mode  | 
|  The Amazon SageMaker HyperPod Observability add-on provides comprehensive monitoring and observability capabilities for HyperPod clusters.  |   [Amazon SageMaker HyperPod Observability Add-on](#addons-hyperpod-observability)   |  EC2, EKS Auto Mode  | 
|  Amazon SageMaker HyperPod training operator enables efficient distributed training on Amazon EKS clusters with advanced scheduling and resource management capabilities.  |   [Amazon SageMaker HyperPod training operator](#addons-hyperpod-training-operator)   |  EC2, EKS Auto Mode  | 
|  Amazon SageMaker HyperPod inference operator enables deployment and management of high-performance AI inference workloads with optimized resource utilization and cost efficiency.  |   [Amazon SageMaker HyperPod inference operator](#addons-hyperpod-inference-operator)   |  EC2, EKS Auto Mode  | 
|  A Kubernetes agent that collects and reports network flow data to Amazon CloudWatch, enabling comprehensive monitoring of TCP connections across cluster nodes.  |   [AWS Network Flow Monitor Agent](#addons-network-flow)   |  EC2, EKS Auto Mode  | 
|  Secure, production-ready, AWS supported distribution of the OpenTelemetry project  |   [AWS Distro for OpenTelemetry](#add-ons-adot)   |  EC2, Fargate, EKS Auto Mode, EKS Hybrid Nodes  | 
|  Security monitoring service that analyzes and processes foundational data sources including AWS CloudTrail management events and Amazon VPC flow logs. Amazon GuardDuty also processes features, such as Kubernetes audit logs and runtime monitoring  |   [Amazon GuardDuty agent](#add-ons-guard-duty)   |  EC2, EKS Auto Mode  | 
|  Monitoring and observability service provided by AWS. This add-on installs the CloudWatch Agent and enables both CloudWatch Application Signals and CloudWatch Container Insights with enhanced observability for Amazon EKS  |   [Amazon CloudWatch Observability agent](#amazon-cloudwatch-observability)   |  EC2, EKS Auto Mode, EKS Hybrid Nodes  | 
|  Ability to manage credentials for your applications, similar to the way that EC2 instance profiles provide credentials to EC2 instances  |   [EKS Pod Identity Agent](#add-ons-pod-id)   |  EC2, EKS Hybrid Nodes  | 
|  Enable cert-manager to issue X.509 certificates from AWS Private CA. Requires cert-manager.  |   [AWS Private CA Connector for Kubernetes](#add-ons-aws-privateca-connector)   |  EC2, Fargate, EKS Auto Mode, EKS Hybrid Nodes  | 
|  Generate Prometheus metrics about SR-IOV network device performance  |   [SR-IOV Network Metrics Exporter](#add-ons-sriov-network-metrics-exporter)   |  EC2  | 
|  Retrieve secrets from AWS Secrets Manager and parameters from AWS Systems Manager Parameter Store and mount them as files in Kubernetes pods.  |   [AWS Secrets Store CSI Driver provider](#add-ons-aws-secrets-store-csi-driver-provider)   |  EC2, EKS Auto Mode, EKS Hybrid Nodes  | 
|  With Spaces, you can create and manage JupyterLab and Code Editor applications to run interactive ML workloads.  |   [Amazon SageMaker Spaces](#add-ons-amazon-sagemaker-spaces)   |  HyperPod  | 

## Amazon VPC CNI plugin for Kubernetes


The Amazon VPC CNI plugin for Kubernetes Amazon EKS add-on is a Kubernetes container network interface (CNI) plugin that provides native VPC networking for your cluster. The self-managed or managed type of this add-on is installed on each Amazon EC2 node, by default. For more information, see [Kubernetes container network interface (CNI) plugin](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/).

**Note**  
You do not need to install this add-on on Amazon EKS Auto Mode clusters. For more information, see [Considerations for Amazon EKS Auto Mode](eks-add-ons.md#addon-consider-auto).

The Amazon EKS add-on name is `vpc-cni`.

### Required IAM permissions


This add-on uses the IAM roles for service accounts capability of Amazon EKS. For more information, see [IAM roles for service accounts](iam-roles-for-service-accounts.md).

If your cluster uses the `IPv4` family, the permissions in the [AmazonEKS_CNI_Policy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonEKS_CNI_Policy.html) are required. If your cluster uses the `IPv6` family, you must [create an IAM policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) with the permissions in [IPv6 mode](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/docs/iam-policy.md#ipv6-mode). You can create an IAM role, attach one of the policies to it, and annotate the Kubernetes service account used by the add-on with the following command.

Replace *my-cluster* with the name of your cluster and *AmazonEKSVPCCNIRole* with the name for your role. If your cluster uses the `IPv6` family, then replace *AmazonEKS_CNI_Policy* with the name of the policy that you created. This command requires that you have [eksctl](https://eksctl.io) installed on your device. If you need to use a different tool to create the role, attach the policy to it, and annotate the Kubernetes service account, see [Assign IAM roles to Kubernetes service accounts](associate-service-account-role.md).

```
eksctl create iamserviceaccount --name aws-node --namespace kube-system --cluster my-cluster --role-name AmazonEKSVPCCNIRole \
    --role-only --attach-policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy --approve
```

### Update information


You can only update one minor version at a time. For example, if your current version is `1.28.x-eksbuild.y` and you want to update to `1.30.x-eksbuild.y`, then you must update your current version to `1.29.x-eksbuild.y` and then update it again to `1.30.x-eksbuild.y`. For more information about updating the add-on, see [Update the Amazon VPC CNI (Amazon EKS add-on)](vpc-add-on-update.md).
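
As a sketch of this rule, stepping from `1.28` to `1.30` means applying each intermediate minor version in order. The version numbers below are illustrative placeholders, not a statement of which add-on versions exist:

```
# Enumerate the one-minor-version-at-a-time update steps (illustrative numbers).
current=28
target=30
m=$((current + 1))
while [ "$m" -le "$target" ]; do
  echo "update add-on to 1.${m}.x-eksbuild.y"
  m=$((m + 1))
done
```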

## CoreDNS


The CoreDNS Amazon EKS add-on is a flexible, extensible DNS server that can serve as the Kubernetes cluster DNS. The self-managed or managed type of this add-on was installed, by default, when you created your cluster. When you launch an Amazon EKS cluster with at least one node, two replicas of the CoreDNS image are deployed by default, regardless of the number of nodes deployed in your cluster. The CoreDNS Pods provide name resolution for all Pods in the cluster. You can deploy the CoreDNS Pods to Fargate nodes if your cluster includes a Fargate profile with a namespace that matches the namespace for the CoreDNS deployment. For more information, see [Define which Pods use AWS Fargate when launched](fargate-profile.md).

**Note**  
You do not need to install this add-on on Amazon EKS Auto Mode clusters. For more information, see [Considerations for Amazon EKS Auto Mode](eks-add-ons.md#addon-consider-auto).

The Amazon EKS add-on name is `coredns`.

### Required IAM permissions


This add-on doesn’t require any permissions.

### Additional information


To learn more about CoreDNS, see [Using CoreDNS for Service Discovery](https://kubernetes.io/docs/tasks/administer-cluster/coredns/) and [Customizing DNS Service](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/) in the Kubernetes documentation.

## `Kube-proxy`


The `Kube-proxy` Amazon EKS add-on maintains network rules on each Amazon EC2 node. It enables network communication to your Pods. The self-managed or managed type of this add-on is installed on each Amazon EC2 node in your cluster, by default.

**Note**  
You do not need to install this add-on on Amazon EKS Auto Mode clusters. For more information, see [Considerations for Amazon EKS Auto Mode](eks-add-ons.md#addon-consider-auto).

The Amazon EKS add-on name is `kube-proxy`.

### Required IAM permissions


This add-on doesn’t require any permissions.

### Update information


Before updating your current version, consider the following requirements:
+  `Kube-proxy` on an Amazon EKS cluster has the same [compatibility and skew policy as Kubernetes](https://kubernetes.io/releases/version-skew-policy/#kube-proxy).
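
A minimal sketch of that skew rule, assuming the upstream policy that `kube-proxy` must not be newer than `kube-apiserver` and may be at most three minor versions older:

```
# Hypothetical helper: check the kube-proxy / kube-apiserver minor-version skew.
skew_ok() {
  proxy_minor=$1
  apiserver_minor=$2
  # kube-proxy must not be newer, and at most 3 minor versions older.
  [ "$proxy_minor" -le "$apiserver_minor" ] && \
    [ $((apiserver_minor - proxy_minor)) -le 3 ]
}

skew_ok 29 31 && echo "a 1.29 kube-proxy is compatible with a 1.31 API server"
```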

### Additional information


To learn more about `kube-proxy`, see [kube-proxy](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/) in the Kubernetes documentation.

## Amazon EBS CSI driver


The Amazon EBS CSI driver Amazon EKS add-on is a Kubernetes Container Storage Interface (CSI) plugin that provides Amazon EBS storage for your cluster.

**Note**  
You do not need to install this add-on on Amazon EKS Auto Mode clusters. Auto Mode includes a block storage capability. For more information, see [Deploy a sample stateful workload to EKS Auto Mode](sample-storage-workload.md).

The Amazon EKS add-on name is `aws-ebs-csi-driver`.

### Required IAM permissions


This add-on utilizes the IAM roles for service accounts capability of Amazon EKS. For more information, see [IAM roles for service accounts](iam-roles-for-service-accounts.md). The permissions in the [AmazonEBSCSIDriverPolicy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonEBSCSIDriverPolicy.html) AWS managed policy are required. You can create an IAM role and attach the managed policy to it with the following command. Replace *my-cluster* with the name of your cluster and *AmazonEKS_EBS_CSI_DriverRole* with the name for your role. This command requires that you have [eksctl](https://eksctl.io) installed on your device. If you need to use a different tool or you need to use a custom [KMS key](https://aws.amazon.com/kms/) for encryption, see [Step 1: Create an IAM role](ebs-csi.md#csi-iam-role).

```
eksctl create iamserviceaccount \
    --name ebs-csi-controller-sa \
    --namespace kube-system \
    --cluster my-cluster \
    --role-name AmazonEKS_EBS_CSI_DriverRole \
    --role-only \
    --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
    --approve
```

### Additional information


To learn more about the add-on, see [Use Kubernetes volume storage with Amazon EBS](ebs-csi.md).

## Amazon EFS CSI driver


The Amazon EFS CSI driver Amazon EKS add-on is a Kubernetes Container Storage Interface (CSI) plugin that provides Amazon EFS and Amazon S3 Files storage for your cluster.

The Amazon EKS add-on name is `aws-efs-csi-driver`.

### Required IAM permissions


This add-on utilizes the IAM roles for service accounts capability of Amazon EKS. For more information, see [IAM roles for service accounts](iam-roles-for-service-accounts.md).

The specific AWS managed policy you need depends on which file system type you want to use:
+  **For Amazon EFS file systems only**: Attach the [AmazonEFSCSIDriverPolicy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonEFSCSIDriverPolicy.html) managed policy.
+  **For Amazon S3 file systems only**: Attach the `AmazonS3FilesCSIDriverPolicy` managed policy.
+  **For both Amazon EFS and Amazon S3 file systems**: Attach both the `AmazonEFSCSIDriverPolicy` and `AmazonS3FilesCSIDriverPolicy` managed policies.

You can create an IAM role and attach the managed policy to it with the following commands. Replace *my-cluster* with the name of your cluster and *AmazonEKS_EFS_CSI_DriverRole* with the name for your role. The following example attaches the `AmazonEFSCSIDriverPolicy` for Amazon EFS file systems. If you’re using an Amazon S3 file system, replace the policy ARN with `arn:aws:iam::aws:policy/service-role/AmazonS3FilesCSIDriverPolicy`. If you’re using both file system types, add an additional `--attach-policy-arn` flag with the second policy ARN. These commands require that you have [eksctl](https://eksctl.io) installed on your device. If you need to use a different tool, see [Step 1: Create an IAM role](efs-csi.md#efs-create-iam-resources) for Amazon EFS or [Step 1: Create IAM roles](s3files-csi.md#s3files-create-iam-resources) for Amazon S3 Files.

```
export cluster_name=my-cluster
export role_name=AmazonEKS_EFS_CSI_DriverRole
eksctl create iamserviceaccount \
    --name efs-csi-controller-sa \
    --namespace kube-system \
    --cluster $cluster_name \
    --role-name $role_name \
    --role-only \
    --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEFSCSIDriverPolicy \
    --approve
TRUST_POLICY=$(aws iam get-role --output json --role-name $role_name --query 'Role.AssumeRolePolicyDocument' | \
    sed -e 's/efs-csi-controller-sa/efs-csi-*/' -e 's/StringEquals/StringLike/')
aws iam update-assume-role-policy --role-name $role_name --policy-document "$TRUST_POLICY"
```

**Note**  
The above example only configures `efs-csi-controller-sa`. If you are using Amazon S3 file systems, you also need to configure `efs-csi-node-sa`. See [Step 1: Create IAM roles](s3files-csi.md#s3files-create-iam-resources) for the complete S3 Files IAM setup.

### Additional information


To learn more about the add-on, see [Use elastic file system storage with Amazon EFS](efs-csi.md).

## Amazon FSx CSI driver


The Amazon FSx CSI driver Amazon EKS add-on is a Kubernetes Container Storage Interface (CSI) plugin that provides Amazon FSx for Lustre storage for your cluster.

The Amazon EKS add-on name is `aws-fsx-csi-driver`.

**Note**  
Pre-existing Amazon FSx CSI driver installations in the cluster can cause add-on installation failures. When you attempt to install the Amazon EKS add-on version while a non-EKS FSx CSI Driver exists, the installation will fail due to resource conflicts. Use the `OVERWRITE` flag during installation to resolve this issue:  

  ```
  aws eks create-addon --addon-name aws-fsx-csi-driver --cluster-name my-cluster --resolve-conflicts OVERWRITE
  ```
The Amazon FSx CSI Driver EKS add-on requires the EKS Pod Identity agent for authentication. Without this component, the add-on will fail with the error `Amazon EKS Pod Identity agent is not installed in the cluster`, preventing volume operations. Install the Pod Identity agent before or after deploying the FSx CSI Driver add-on. For more information, see [Set up the Amazon EKS Pod Identity Agent](pod-id-agent-setup.md).

### Required IAM permissions


This add-on utilizes the IAM roles for service accounts capability of Amazon EKS. For more information, see [IAM roles for service accounts](iam-roles-for-service-accounts.md). The permissions in the [AmazonFSxFullAccess](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonFSxFullAccess.html) AWS managed policy are required. You can create an IAM role and attach the managed policy to it with the following command. Replace *my-cluster* with the name of your cluster and *AmazonEKS_FSx_CSI_DriverRole* with the name for your role. This command requires that you have [eksctl](https://eksctl.io) installed on your device.

```
eksctl create iamserviceaccount \
    --name fsx-csi-controller-sa \
    --namespace kube-system \
    --cluster my-cluster \
    --role-name AmazonEKS_FSx_CSI_DriverRole \
    --role-only \
    --attach-policy-arn arn:aws:iam::aws:policy/AmazonFSxFullAccess \
    --approve
```

### Additional information


To learn more about the add-on, see [Use high-performance app storage with Amazon FSx for Lustre](fsx-csi.md).

## Mountpoint for Amazon S3 CSI Driver


The Mountpoint for Amazon S3 CSI Driver Amazon EKS add-on is a Kubernetes Container Storage Interface (CSI) plugin that provides Amazon S3 storage for your cluster.

The Amazon EKS add-on name is `aws-mountpoint-s3-csi-driver`.

### Required IAM permissions


This add-on uses the IAM roles for service accounts capability of Amazon EKS. For more information, see [IAM roles for service accounts](iam-roles-for-service-accounts.md).

The IAM role that is created will require a policy that gives access to S3. Follow the [Mountpoint IAM permissions recommendations](https://github.com/awslabs/mountpoint-s3/blob/main/doc/CONFIGURATION.md#iam-permissions) when creating the policy. Alternatively, you may use the AWS managed policy [AmazonS3FullAccess](https://console.aws.amazon.com/iam/home?#/policies/arn:aws:iam::aws:policy/AmazonS3FullAccess$jsonEditor), but this managed policy grants more permissions than are needed for Mountpoint.

You can create an IAM role and attach your policy to it with the following commands. Replace *my-cluster* with the name of your cluster, *region-code* with the correct AWS Region code, *AmazonEKS_S3_CSI_DriverRole* with the name for your role, and *AmazonEKS_S3_CSI_DriverRole_ARN* with the ARN of the IAM policy that you created. These commands require that you have [eksctl](https://eksctl.io) installed on your device. For instructions on using the IAM console or AWS CLI, see [Step 2: Create an IAM role](s3-csi-create.md#s3-create-iam-role).

```
CLUSTER_NAME=my-cluster
REGION=region-code
ROLE_NAME=AmazonEKS_S3_CSI_DriverRole
POLICY_ARN=AmazonEKS_S3_CSI_DriverRole_ARN
eksctl create iamserviceaccount \
    --name s3-csi-driver-sa \
    --namespace kube-system \
    --cluster $CLUSTER_NAME \
    --attach-policy-arn $POLICY_ARN \
    --approve \
    --role-name $ROLE_NAME \
    --region $REGION \
    --role-only
```
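
Once the role exists, you can install the add-on and pass the role's ARN. A sketch, assuming the cluster, Region, and role names from the commands above:

```shell
CLUSTER_NAME=my-cluster
REGION=region-code
ROLE_NAME=AmazonEKS_S3_CSI_DriverRole

# Look up the ARN of the role created by eksctl.
ROLE_ARN=$(aws iam get-role --role-name "$ROLE_NAME" \
    --query 'Role.Arn' --output text)

# Install the add-on with the role associated to its service account.
aws eks create-addon \
    --cluster-name "$CLUSTER_NAME" \
    --region "$REGION" \
    --addon-name aws-mountpoint-s3-csi-driver \
    --service-account-role-arn "$ROLE_ARN"
```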

### Additional information


To learn more about the add-on, see [Access Amazon S3 objects with Mountpoint for Amazon S3 CSI driver](s3-csi.md).

## CSI snapshot controller


The Container Storage Interface (CSI) snapshot controller enables the use of snapshot functionality in compatible CSI drivers, such as the Amazon EBS CSI driver.

The Amazon EKS add-on name is `snapshot-controller`.

### Required IAM permissions


This add-on doesn’t require any permissions.

### Additional information


To learn more about the add-on, see [Enable snapshot functionality for CSI volumes](csi-snapshot-controller.md).

## Amazon SageMaker HyperPod task governance


SageMaker HyperPod task governance is a robust management system designed to streamline resource allocation and ensure efficient utilization of compute resources across teams and projects for your Amazon EKS clusters. This provides administrators with the capability to set:
+ Priority levels for various tasks
+ Compute allocation for each team
+ How each team lends and borrows idle compute
+ Whether a team can preempt its own tasks

HyperPod task governance also provides Amazon EKS cluster observability, offering real-time visibility into cluster capacity. This includes compute availability and usage, team allocation and utilization, and task run and wait time information, setting you up for informed decision-making and proactive resource management.

The Amazon EKS add-on name is `amazon-sagemaker-hyperpod-taskgovernance`.

### Required IAM permissions


This add-on doesn’t require any permissions.

### Additional information


To learn more about the add-on, see [SageMaker HyperPod task governance](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-hyperpod-eks-operate-console-ui-governance.html).

## Amazon SageMaker HyperPod Observability Add-on


The Amazon SageMaker HyperPod Observability Add-on provides comprehensive monitoring and observability capabilities for HyperPod clusters. This add-on automatically deploys and manages essential monitoring components including node exporter, DCGM exporter, kube-state-metrics, and EFA exporter. It collects and forwards metrics to a customer-designated Amazon Managed Prometheus (AMP) instance and exposes an OTLP endpoint for custom metrics and event ingestion from customer training jobs.

The add-on integrates with the broader HyperPod ecosystem by scraping metrics from various components including HyperPod Task Governance add-on, HyperPod Training Operator, Kubeflow, and KEDA. All collected metrics are centralized in Amazon Managed Prometheus, enabling customers to achieve a unified observability view through Amazon Managed Grafana dashboards. This provides end-to-end visibility into cluster health, resource utilization, and training job performance across the entire HyperPod environment.

The Amazon EKS add-on name is `amazon-sagemaker-hyperpod-observability`.

### Required IAM permissions


This add-on uses the IAM roles for service accounts capability of Amazon EKS. For more information, see [IAM roles for service accounts](iam-roles-for-service-accounts.md). The following managed policies are required:
+  `AmazonPrometheusRemoteWriteAccess` - for remote writing metrics from the cluster to AMP
+  `CloudWatchAgentServerPolicy` - for remote writing the logs from the cluster to CloudWatch

### Additional information


To learn more about the add-on and its capabilities, see [SageMaker HyperPod Observability](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-hyperpod-eks-cluster-observability-cluster.html).

## Amazon SageMaker HyperPod training operator


The Amazon SageMaker HyperPod training operator helps you accelerate generative AI model development by efficiently managing distributed training across large GPU clusters. It introduces intelligent fault recovery, hang job detection, and process-level management capabilities that minimize training disruptions and reduce costs. Unlike traditional training infrastructure that requires complete job restarts when failures occur, this operator implements surgical process recovery to keep your training jobs running smoothly.

The operator also works with HyperPod’s health monitoring and observability functions, providing real-time visibility into training execution and automatic monitoring of critical metrics like loss spikes and throughput degradation. You can define recovery policies through simple YAML configurations without code changes, allowing you to quickly respond to and recover from unrecoverable training states. These monitoring and recovery capabilities work together to maintain optimal training performance while minimizing operational overhead.

The Amazon EKS add-on name is `amazon-sagemaker-hyperpod-training-operator`.

For more information, see [Using the HyperPod training operator](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-eks-operator.html) in the *Amazon SageMaker Developer Guide*.

### Required IAM permissions


This add-on requires IAM permissions and uses EKS Pod Identity.

 AWS suggests the `AmazonSageMakerHyperPodTrainingOperatorAccess` [managed policy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonSageMakerHyperPodTrainingOperatorAccess.html).

For more information, see [Installing the training operator](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-eks-operator-install.html#sagemaker-eks-operator-install-operator) in the *Amazon SageMaker Developer Guide*.

### Additional information


To learn more about the add-on, see [SageMaker HyperPod training operator](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-eks-operator.html).

## Amazon SageMaker HyperPod inference operator


Amazon SageMaker HyperPod offers an end-to-end experience supporting the full lifecycle of AI development from interactive experimentation and training to inference and post training workflows. It now provides a comprehensive inference platform that combines the flexibility of Kubernetes with the operational excellence of a managed experience. Deploy, scale, and optimize your GenAI models with enterprise-grade reliability using the same HyperPod compute throughout the entire model lifecycle.

Amazon SageMaker HyperPod offers flexible deployment interfaces that allow you to deploy models through multiple methods including kubectl, Python SDK, Amazon SageMaker Studio UI, or HyperPod CLI. The capability provides advanced autoscaling capabilities with dynamic resource allocation that automatically adjusts based on demand. Additionally, it includes comprehensive observability and monitoring features that track critical metrics such as time-to-first-token, latency, and GPU utilization to help you optimize performance.

The Amazon EKS add-on name is `amazon-sagemaker-hyperpod-inference`.

### Installation methods


You can install this add-on using one of the following methods:
+  **SageMaker Console (Recommended)**: Provides a streamlined installation experience with guided configuration.
+  **EKS Add-ons Console or CLI**: Requires manual installation of dependency add-ons before installing the inference operator. See the prerequisites section below for required dependencies.

### Prerequisites


Before installing the inference operator add-on via the EKS Add-ons Console or CLI, ensure the following dependencies are installed.

Required EKS add-ons:
+ Amazon S3 Mountpoint CSI Driver (minimum version: v1.14.1-eksbuild.1)
+ Metrics Server (minimum version: v0.7.2-eksbuild.4)
+ Amazon FSx CSI Driver (minimum version: v1.6.0-eksbuild.1)
+ Cert Manager (minimum version: v1.18.2-eksbuild.2)

For detailed installation instructions for each dependency, see [Installing the inference operator](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-hyperpod-model-deployment-setup.html).
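
Before installing, you can check which of the required dependencies are already present on the cluster. A minimal sketch, assuming a placeholder cluster name; the add-on names for the FSx CSI driver and Cert Manager dependencies are assumptions you should verify with `aws eks describe-addon-versions`.

```shell
CLUSTER_NAME=my-cluster

# Print the installed version of each dependency, or flag it as missing.
for ADDON in aws-mountpoint-s3-csi-driver metrics-server \
             aws-fsx-csi-driver cert-manager; do
    aws eks describe-addon \
        --cluster-name "$CLUSTER_NAME" \
        --addon-name "$ADDON" \
        --query 'addon.addonVersion' --output text 2>/dev/null \
        || echo "$ADDON: not installed"
done
```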

### Required IAM permissions


This add-on requires IAM permissions and uses OIDC/IRSA.

The following managed policies are recommended as they provide the minimum scoped permissions:
+  `AmazonSageMakerHyperPodInferenceAccess` - provides admin privileges required for setting up the inference operator
+  `AmazonSageMakerHyperPodGatedModelAccess` - provides SageMaker HyperPod access to gated models in SageMaker Jumpstart (e.g., Meta Llama, GPT-Neo)

For more information, see [Installing the inference operator](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-hyperpod-model-deployment-setup.html).

### Additional information


To learn more about the Amazon SageMaker HyperPod inference operator, see [SageMaker HyperPod inference operator](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-hyperpod-model-deployment.html).

For troubleshooting information, see [Troubleshooting SageMaker HyperPod model deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-hyperpod-model-deployment-ts.html).

## AWS Network Flow Monitor Agent


The Amazon CloudWatch Network Flow Monitor Agent is a Kubernetes application that collects TCP connection statistics from all nodes in a cluster and publishes network flow reports to Amazon CloudWatch Network Flow Monitor Ingestion APIs.

The Amazon EKS add-on name is `aws-network-flow-monitoring-agent`.

### Required IAM permissions


This add-on requires IAM permissions.

You need to attach the `CloudWatchNetworkFlowMonitorAgentPublishPolicy` managed policy to the add-on.

For more information on the required IAM setup, see [IAM Policy](https://github.com/aws/network-flow-monitor-agent?tab=readme-ov-file#iam-policy) on the Amazon CloudWatch Network Flow Monitor Agent GitHub repo.

For more information about the managed policy, see [CloudWatchNetworkFlowMonitorAgentPublishPolicy](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/security-iam-awsmanpol-network-flow-monitor.html#security-iam-awsmanpol-CloudWatchNetworkFlowMonitorAgentPublishPolicy) in the Amazon CloudWatch User Guide.

### Additional information


To learn more about the add-on, see the [Amazon CloudWatch Network Flow Monitor Agent GitHub repo](https://github.com/aws/network-flow-monitor-agent?tab=readme-ov-file).

## Node monitoring agent


The node monitoring agent Amazon EKS add-on can detect additional node health issues. The optional node auto repair feature can also use these extra health signals to automatically replace nodes as needed.

**Note**  
You do not need to install this add-on on Amazon EKS Auto Mode clusters. For more information, see [Considerations for Amazon EKS Auto Mode](eks-add-ons.md#addon-consider-auto).

The Amazon EKS add-on name is `eks-node-monitoring-agent`.

### Required IAM permissions


This add-on doesn’t require additional permissions.

### Additional information


For more information, see [Detect node health issues and enable automatic node repair](node-health.md).

## AWS Distro for OpenTelemetry


The AWS Distro for OpenTelemetry Amazon EKS add-on is a secure, production-ready, AWS supported distribution of the OpenTelemetry project. For more information, see [AWS Distro for OpenTelemetry](https://aws-otel.github.io/) on GitHub.

The Amazon EKS add-on name is `adot`.

### Required IAM permissions


This add-on only requires IAM permissions if you’re using one of the preconfigured custom resources that can be opted into through advanced configuration.

### Additional information


For more information, see [Getting Started with AWS Distro for OpenTelemetry using EKS Add-Ons](https://aws-otel.github.io/docs/getting-started/adot-eks-add-on) in the AWS Distro for OpenTelemetry documentation.

ADOT requires that the `cert-manager` add-on is deployed on the cluster as a prerequisite. Otherwise, this add-on won’t work if deployed directly using the [`cluster_addons`](https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest) property of the Terraform Amazon EKS module. For more requirements, see [Requirements for Getting Started with AWS Distro for OpenTelemetry using EKS Add-Ons](https://aws-otel.github.io/docs/getting-started/adot-eks-add-on/requirements) in the AWS Distro for OpenTelemetry documentation.

## Amazon GuardDuty agent


The Amazon GuardDuty agent Amazon EKS add-on collects [runtime events](https://docs.aws.amazon.com/guardduty/latest/ug/runtime-monitoring-collected-events.html) (file access, process execution, network connections) from your EKS cluster nodes for analysis by GuardDuty Runtime Monitoring. GuardDuty itself (not the agent) is the security monitoring service that analyzes and processes [foundational data sources](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_data-sources.html) including AWS CloudTrail management events and Amazon VPC flow logs, as well as [features](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty-features-activation-model.html), such as Kubernetes audit logs and runtime monitoring.

The Amazon EKS add-on name is `aws-guardduty-agent`.

### Required IAM permissions


This add-on doesn’t require any permissions.

### Additional information


For more information, see [Runtime Monitoring for Amazon EKS clusters in Amazon GuardDuty](https://docs.aws.amazon.com/guardduty/latest/ug/how-runtime-monitoring-works-eks.html).
+ To detect potential security threats in your Amazon EKS clusters, enable Amazon GuardDuty runtime monitoring and deploy the GuardDuty security agent to your Amazon EKS clusters.

## Amazon CloudWatch Observability agent


The Amazon CloudWatch Observability agent Amazon EKS add-on integrates your cluster with Amazon CloudWatch, the monitoring and observability service provided by AWS. This add-on installs the CloudWatch agent and enables both CloudWatch Application Signals and CloudWatch Container Insights with enhanced observability for Amazon EKS. For more information, see [Amazon CloudWatch Agent](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html).

The Amazon EKS add-on name is `amazon-cloudwatch-observability`.

### Required IAM permissions


This add-on uses the IAM roles for service accounts capability of Amazon EKS. For more information, see [IAM roles for service accounts](iam-roles-for-service-accounts.md). The permissions in the [AWSXrayWriteOnlyAccess](https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/AWSXrayWriteOnlyAccess) and [CloudWatchAgentServerPolicy](https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy) AWS managed policies are required. You can create an IAM role, attach the managed policies to it, and annotate the Kubernetes service account used by the add-on with the following command. Replace *my-cluster* with the name of your cluster and *AmazonEKS_Observability_Role* with the name for your role. This command requires that you have [eksctl](https://eksctl.io) installed on your device. If you need to use a different tool to create the role, attach the policy to it, and annotate the Kubernetes service account, see [Assign IAM roles to Kubernetes service accounts](associate-service-account-role.md).

```
eksctl create iamserviceaccount \
    --name cloudwatch-agent \
    --namespace amazon-cloudwatch \
    --cluster my-cluster \
    --role-name AmazonEKS_Observability_Role \
    --role-only \
    --attach-policy-arn arn:aws:iam::aws:policy/AWSXrayWriteOnlyAccess \
    --attach-policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy \
    --approve
```
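
With the role in place, you can install the add-on and pass the role's ARN so its service account uses the permissions above. A sketch, assuming the cluster and role names from the `eksctl` command:

```shell
# Look up the ARN of the role created above (role name assumed).
ROLE_ARN=$(aws iam get-role \
    --role-name AmazonEKS_Observability_Role \
    --query 'Role.Arn' --output text)

# Install the add-on with the role associated to its service account.
aws eks create-addon \
    --cluster-name my-cluster \
    --addon-name amazon-cloudwatch-observability \
    --service-account-role-arn "$ROLE_ARN"
```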

### Additional information


For more information, see [Install the CloudWatch agent](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Observability-EKS-addon.html).

## AWS Private CA Connector for Kubernetes


The AWS Private CA Connector for Kubernetes is an add-on for cert-manager that enables users to obtain Certificates from AWS Private Certificate Authority (AWS Private CA).
+ The Amazon EKS add-on name is `aws-privateca-connector-for-kubernetes`.
+ The add-on namespace is `aws-privateca-issuer`.

This add-on requires `cert-manager`. `cert-manager` is available on Amazon EKS as a community add-on. For more information about this add-on, see [Cert Manager](community-addons.md#addon-cert-manager). For more information about installing add-ons, see [Create an Amazon EKS add-on](creating-an-add-on.md).

### Required IAM permissions


This add-on requires IAM permissions.

Use EKS Pod Identities to attach the `AWSPrivateCAConnectorForKubernetesPolicy` IAM Policy to the `aws-privateca-issuer` Kubernetes Service Account. For more information, see [Use Pod Identities to assign an IAM role to an Amazon EKS add-on](update-addon-role.md).

For information about the required permissions, see [AWSPrivateCAConnectorForKubernetesPolicy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSPrivateCAConnectorForKubernetesPolicy.html) in the AWS Managed Policy Reference.
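
The association described above can be sketched with the AWS CLI. The cluster name, account ID, and role name are placeholders; the role must already exist with a trust policy for `pods.eks.amazonaws.com` and the `AWSPrivateCAConnectorForKubernetesPolicy` policy attached.

```shell
# Associate the role with the add-on's service account via EKS Pod Identity.
aws eks create-pod-identity-association \
    --cluster-name my-cluster \
    --namespace aws-privateca-issuer \
    --service-account aws-privateca-issuer \
    --role-arn arn:aws:iam::111122223333:role/my-privateca-connector-role
```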

### Additional information


For more information, see the [AWS Private CA Issuer for Kubernetes GitHub repository](https://github.com/cert-manager/aws-privateca-issuer).

For more information about configuring the add-on, see [values.yaml](https://github.com/cert-manager/aws-privateca-issuer/blob/main/charts/aws-pca-issuer/values.yaml) in the `aws-privateca-issuer` GitHub repo. Confirm the version of values.yaml matches the version of the add-on installed on your cluster.

This add-on tolerates the `CriticalAddonsOnly` taint used by the `system` NodePool of EKS Auto Mode. For more information, see [Run critical add-ons on dedicated instances](critical-workload.md).

## EKS Pod Identity Agent


The Amazon EKS Pod Identity Agent Amazon EKS add-on provides the ability to manage credentials for your applications, similar to the way that EC2 instance profiles provide credentials to EC2 instances.

**Note**  
You do not need to install this add-on on Amazon EKS Auto Mode clusters. Amazon EKS Auto Mode integrates with EKS Pod Identity. For more information, see [Considerations for Amazon EKS Auto Mode](eks-add-ons.md#addon-consider-auto).

The Amazon EKS add-on name is `eks-pod-identity-agent`.

### Required IAM permissions


The Pod Identity Agent add-on itself does not require an IAM role. It uses permissions from the [Amazon EKS node IAM role](create-node-role.md) to function, but does not need a dedicated IAM role for the add-on.

### Update information


You can only update one minor version at a time. For example, if your current version is `1.28.x-eksbuild.y` and you want to update to `1.30.x-eksbuild.y`, then you must update your current version to `1.29.x-eksbuild.y` and then update it again to `1.30.x-eksbuild.y`. For more information about updating the add-on, see [Update an Amazon EKS add-on](updating-an-add-on.md).
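
The one-version-at-a-time rule can be scripted by stepping through each intermediate version in order. A sketch; the version strings are illustrative placeholders, so substitute real versions listed by `aws eks describe-addon-versions --addon-name eks-pod-identity-agent`.

```shell
CLUSTER_NAME=my-cluster

# Apply each intermediate version, waiting for the update to finish
# before moving to the next one.
for VERSION in 1.29.0-eksbuild.1 1.30.0-eksbuild.1; do
    aws eks update-addon \
        --cluster-name "$CLUSTER_NAME" \
        --addon-name eks-pod-identity-agent \
        --addon-version "$VERSION"
    aws eks wait addon-active \
        --cluster-name "$CLUSTER_NAME" \
        --addon-name eks-pod-identity-agent
done
```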

## SR-IOV Network Metrics Exporter


The SR-IOV Network Metrics Exporter Amazon EKS add-on collects and exposes metrics about SR-IOV network devices in Prometheus format. It enables monitoring of SR-IOV network performance on EKS bare metal nodes. The exporter runs as a DaemonSet on nodes with SR-IOV-capable network interfaces and exports metrics that can be scraped by Prometheus.

**Note**  
This add-on requires nodes with SR-IOV-capable network interfaces.


| Property | Value | 
| --- | --- | 
|  Add-on name  |   `sriov-network-metrics-exporter`   | 
|  Namespace  |   `monitoring`   | 
|  Documentation  |   [SR-IOV Network Metrics Exporter GitHub repo](https://github.com/k8snetworkplumbingwg/sriov-network-metrics-exporter)   | 
|  Service account name  |  None  | 
|  Managed IAM policy  |  None  | 
|  Custom IAM permissions  |  None  | 

## AWS Secrets Store CSI Driver provider


The AWS provider for the Secrets Store CSI Driver is an add-on that enables retrieving secrets from AWS Secrets Manager and parameters from AWS Systems Manager Parameter Store and mounting them as files in Kubernetes pods.

### Required IAM permissions


The add-on does not require IAM permissions. However, application pods will require IAM permissions to fetch secrets from AWS Secrets Manager and parameters from AWS Systems Manager Parameter Store. After installing the add-on, access must be configured via IAM Roles for Service Accounts (IRSA) or EKS Pod Identity. To use IRSA, please refer to the Secrets Manager [IRSA setup documentation](https://docs.aws.amazon.com/secretsmanager/latest/userguide/integrating_ascp_irsa.html). To use EKS Pod Identity, please refer to the Secrets Manager [Pod Identity setup documentation](https://docs.aws.amazon.com/secretsmanager/latest/userguide/ascp-pod-identity-integration.html).

 AWS suggests the `AWSSecretsManagerClientReadOnlyAccess` [managed policy](https://docs.aws.amazon.com/secretsmanager/latest/userguide/reference_available-policies.html#security-iam-awsmanpol-AWSSecretsManagerClientReadOnlyAccess).

For more information about the required permissions, see `AWSSecretsManagerClientReadOnlyAccess` in the AWS Managed Policy Reference.

### Additional information


For more information, please see the secrets-store-csi-driver-provider-aws [GitHub repository](https://github.com/aws/secrets-store-csi-driver-provider-aws).

To learn more about the add-on, please refer to the [AWS Secrets Manager documentation for the add-on](https://docs.aws.amazon.com/secretsmanager/latest/userguide/ascp-eks-installation.html).

## Amazon SageMaker Spaces


The Amazon SageMaker Spaces add-on provides the ability to run IDEs and notebooks on EKS or HyperPod-EKS clusters. Administrators can use the EKS console to install the add-on on their cluster and define default space configurations, such as images, compute resources, local storage for notebook settings (additional storage to be attached to their spaces), file systems, and initialization scripts.

AI developers can use kubectl to create, update, and delete spaces. They have the flexibility to use the default configurations provided by admins or customize settings. AI developers can access their spaces on EKS or HyperPod-EKS using their local VS Code IDEs, or through a web browser that hosts their JupyterLab or Code Editor IDE on a custom DNS domain configured by their admins. They can also use Kubernetes port forwarding to access spaces in their web browsers.

The Amazon EKS add-on name is `amazon-sagemaker-spaces`.

### Required IAM permissions


This add-on requires IAM permissions. For more information about the required IAM setup, see [IAM Permissions Setup](https://docs.aws.amazon.com/sagemaker/latest/dg/permission-setup.html) in the *Amazon SageMaker Developer Guide*.

### Additional information


To learn more about the add-on and its capabilities, see [SageMaker AI Notebooks on HyperPod](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-hyperpod-eks-cluster-ide.html) in the *Amazon SageMaker Developer Guide*.

# Community add-ons


You can use AWS APIs to install community add-ons, such as the Kubernetes Metrics Server. You may choose to install community add-ons as Amazon EKS Add-ons to reduce the complexity of maintaining the software on multiple clusters.

For example, you can use the AWS API, CLI, or Management Console to install community add-ons. You can install a community add-on during cluster creation.

You manage community add-ons just like existing Amazon EKS Add-ons. Community add-ons are different from existing add-ons in that they have a unique scope of support.

**Note**  
Using community add-ons is at your discretion. As part of the [shared responsibility model](security.md) between you and AWS, you are expected to understand what you are installing into your cluster for these third party plugins. You are also responsible for the community add-ons meeting your cluster security needs. For more information, see [Support for software deployed to EKS](related-projects.md#oss-scope).

Community add-ons are not built by AWS. AWS only validates community add-ons for version compatibility. For example, if you install a community add-on on a cluster, AWS checks if it is compatible with the Kubernetes version of your cluster.

Importantly, AWS does not provide full support for community add-ons. AWS supports only lifecycle operations done using AWS APIs, such as installing add-ons or deleting add-ons.

If you require support for a community add-on, utilize the existing project resources. For example, you may create a GitHub issue on the repo for the project.

## Determine add-on type


You can use the AWS CLI to determine the type of an Amazon EKS Add-on.

Use the following CLI command to retrieve information about an add-on. You can replace `metrics-server` with the name of any add-on.

```
aws eks describe-addon-versions --addon-name metrics-server
```

Review the CLI output for the `owner` field.

```
{
    "addons": [
        {
            "addonName": "metrics-server",
            "type": "observability",
            "owner": "community",
            "addonVersions": [
```

If the value of `owner` is `community`, then the add-on is a community add-on. AWS only provides support for installing, updating, and removing the add-on. If you have questions about the functionality and operation of the add-on itself, use community resources like GitHub issues.
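
To extract just the `owner` field rather than scanning the full JSON output, you can use a JMESPath `--query` expression, as sketched below. For `metrics-server`, this prints `community`, per the sample output above.

```shell
# Print only the owner field of the first matching add-on.
aws eks describe-addon-versions \
    --addon-name metrics-server \
    --query 'addons[0].owner' \
    --output text
```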

## Install or update community add-on


You install or update community add-ons in the same way as other Amazon EKS Add-ons.
+  [Create an Amazon EKS add-on](creating-an-add-on.md) 
+  [Update an Amazon EKS add-on](updating-an-add-on.md) 
+  [Remove an Amazon EKS add-on from a cluster](removing-an-add-on.md) 

## Available community add-ons


The following community add-ons are available from Amazon EKS.
+  [Kubernetes Metrics Server](#kubernetes-metrics-server) 
+  [kube-state-metrics](#kube-state-metrics) 
+  [Prometheus Node Exporter](#prometheus-node-exporter) 
+  [Cert Manager](#addon-cert-manager) 
+  [External DNS](#external-dns) 
+  [Fluent Bit](#fluent-bit) 

### Kubernetes Metrics Server


The Kubernetes Metrics Server is a scalable and efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines. It collects resource metrics from kubelets and exposes them in the Kubernetes API server through the Metrics API for use by the Horizontal Pod Autoscaler and Vertical Pod Autoscaler.


| Property | Value | 
| --- | --- | 
|  Add-on name  |   `metrics-server`   | 
|  Namespace  |   `kube-system`   | 
|  Documentation  |   [GitHub Readme](https://github.com/kubernetes-sigs/metrics-server)   | 
|  Service account name  |  None  | 
|  Managed IAM policy  |  None  | 
|  Custom IAM permissions  |  None  | 

### kube-state-metrics


Add-on agent to generate and expose cluster-level metrics.

The state of Kubernetes objects in the Kubernetes API can be exposed as metrics. An add-on agent called kube-state-metrics can connect to the Kubernetes API server and expose an HTTP endpoint with metrics generated from the state of individual objects in the cluster. It exposes various information about the state of objects, such as labels and annotations, startup and termination times, and the status or phase the object is currently in.


| Property | Value | 
| --- | --- | 
|  Add-on name  |   `kube-state-metrics`   | 
|  Namespace  |   `kube-state-metrics`   | 
|  Documentation  |   [Metrics for Kubernetes Object States](https://kubernetes.io/docs/concepts/cluster-administration/kube-state-metrics/) in Kubernetes Docs  | 
|  Service account name  |  None  | 
|  Managed IAM policy  |  None  | 
|  Custom IAM permissions  |  None  | 

### Prometheus Node Exporter


Prometheus exporter for hardware and OS metrics exposed by \*NIX kernels, written in Go with pluggable metric collectors. The Prometheus Node Exporter exposes a wide variety of hardware- and kernel-related metrics.


| Property | Value | 
| --- | --- | 
|  Add-on name  |   `prometheus-node-exporter`   | 
|  Namespace  |   `prometheus-node-exporter`   | 
|  Documentation  |   [Monitoring Linux host metrics with the Node Exporter](https://prometheus.io/docs/guides/node-exporter/#monitoring-linux-host-metrics-with-the-node-exporter) in Prometheus Docs  | 
|  Service account name  |  None  | 
|  Managed IAM policy  |  None  | 
|  Custom IAM permissions  |  None  | 

### Cert Manager


Cert Manager can be used to manage the creation and renewal of certificates.


| Property | Value | 
| --- | --- | 
|  Add-on name  |   `cert-manager`   | 
|  Namespace  |   `cert-manager`   | 
|  Documentation  |   [Cert Manager Docs](https://cert-manager.io/docs/)   | 
|  Service account name  |  None  | 
|  Managed IAM policy  |  None  | 
|  Custom IAM permissions  |  None  | 

### External DNS


The External DNS EKS add-on can be used to manage Route53 DNS records through Kubernetes resources.

External DNS permissions can be reduced to `route53:ChangeResourceRecordSets`, `route53:ListHostedZones`, and `route53:ListResourceRecordSets` on the hosted zones you wish to manage.


| Property | Value | 
| --- | --- | 
|  Add-on name  |   `external-dns`   | 
|  Namespace  |   `external-dns`   | 
|  Documentation  |   [GitHub Readme](https://github.com/kubernetes-sigs/external-dns)   | 
|  Service account name  |   `external-dns`   | 
|  Managed IAM policy  |   `arn:aws:iam::aws:policy/AmazonRoute53FullAccess`   | 
|  Custom IAM permissions  |  None  | 
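
The reduced permissions described above can be expressed as a customer managed policy instead of `AmazonRoute53FullAccess`. A sketch; the hosted zone ID and policy name are placeholders you should replace with your own.

```shell
# Write a reduced Route53 policy scoped to one hosted zone (placeholder ID).
cat > external-dns-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "route53:ChangeResourceRecordSets",
      "Resource": "arn:aws:route53:::hostedzone/Z0123456789ABCDEF"
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:ListResourceRecordSets"
      ],
      "Resource": "*"
    }
  ]
}
EOF

# Create the customer managed policy from the document above.
aws iam create-policy \
    --policy-name ExternalDNSRoute53Policy \
    --policy-document file://external-dns-policy.json
```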

### Fluent Bit


Fluent Bit is a lightweight and high-performance log processor and forwarder. It allows you to collect data/logs from different sources, unify them, and send them to multiple destinations including Amazon CloudWatch Logs, Amazon S3, and Amazon Data Firehose. Fluent Bit is designed with performance and resource efficiency in mind, making it ideal for Kubernetes environments.

This add-on does not require IAM permissions in the default configuration. However, you may need to grant this add-on IAM permissions if you configure an AWS output location. For more information, see [Use Pod Identities to assign an IAM role to an Amazon EKS add-on](update-addon-role.md).


| Property | Value | 
| --- | --- | 
|  Add-on name  |   `fluent-bit`   | 
|  Namespace  |   `fluent-bit`   | 
|  Documentation  |   [Fluent Bit Documentation](https://docs.fluentbit.io/manual/)   | 
|  Service account name  |   `fluent-bit`   | 
|  Managed IAM policy  |  None  | 
|  Custom IAM permissions  |  None  | 
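
If you configure an AWS output destination, one way to grant the `fluent-bit` service account from the table above the needed permissions is an EKS Pod Identity association, as sketched below. The cluster name, account ID, and role name are placeholders; the role must trust `pods.eks.amazonaws.com` and carry a policy for your output destination.

```shell
# Associate a role with the Fluent Bit service account via EKS Pod Identity.
aws eks create-pod-identity-association \
    --cluster-name my-cluster \
    --namespace fluent-bit \
    --service-account fluent-bit \
    --role-arn arn:aws:iam::111122223333:role/my-fluent-bit-logs-role
```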

## View Attributions


You can download the open source attributions and license information for community add-ons.

1. Determine the name and version of the add-on you want to download attributions for.

1. Update the following command with the name and version:

   ```
   curl -O https://amazon-eks-docs.s3.amazonaws.com/attributions/<add-on-name>/<add-on-version>/attributions.zip
   ```

   For example:

   ```
   curl -O https://amazon-eks-docs.s3.amazonaws.com/attributions/kube-state-metrics/v2.14.0-eksbuild.1/attributions.zip
   ```

1. Use the command to download the file.

Use this zip file to view information about the license attributions.

# AWS Marketplace add-ons
Marketplace add-ons

In addition to the previous list of Amazon EKS add-ons, you can also add a wide selection of operational software Amazon EKS add-ons from independent software vendors. Choose an add-on to learn more about it and its installation requirements.

[![AWS Videos](http://img.youtube.com/vi/IIPj119mspc/0.jpg)](http://www.youtube.com/watch?v=IIPj119mspc)


## Accuknox


The add-on name is `accuknox_kubearmor` and the namespace is `kubearmor`. Accuknox publishes the add-on.

For information about the add-on, see [Getting Started with KubeArmor](https://docs.kubearmor.io/kubearmor/quick-links/deployment_guide) in the KubeArmor documentation.

### Service account name


A service account isn’t used with this add-on.

### AWS managed IAM policy


A managed policy isn’t used with this add-on.

### Custom IAM permissions


Custom permissions aren’t used with this add-on.

## Akuity


The add-on name is `akuity_agent` and the namespace is `akuity`. Akuity publishes the add-on.

For information about the add-on, see [Installing the Akuity Agent on Amazon EKS with the Akuity EKS add-on](https://docs.akuity.io/tutorials/eks-addon-agent-install/) in the Akuity Platform documentation.

### Service account name


A service account isn’t used with this add-on.

### AWS managed IAM policy


A managed policy isn’t used with this add-on.

### Custom IAM permissions


Custom permissions aren’t used with this add-on.

## Calyptia


The add-on name is `calyptia_fluent-bit` and the namespace is `calyptia-fluentbit`. Calyptia publishes the add-on.

For information about the add-on, see [Getting Started with Calyptia Core Agent](https://docs.akuity.io/tutorials/eks-addon-agent-install/) on the Calyptia documentation website.

### Service account name


The service account name is `calyptia-fluentbit`.

### AWS managed IAM policy


This add-on uses the `AWSMarketplaceMeteringRegisterUsage` managed policy. For more information, see [AWSMarketplaceMeteringRegisterUsage](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSMarketplaceMeteringRegisterUsage.html) in the AWS Managed Policy Reference Guide.

### Command to create required IAM role


The following command requires that you have an IAM OpenID Connect (OIDC) provider for your cluster. To determine whether you have one, or to create one, see [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md). Replace *my-cluster* with the name of your cluster and *my-calyptia-role* with the name for your role. This command requires that you have [eksctl](https://eksctl.io) installed on your device. If you need to use a different tool to create the role and annotate the Kubernetes service account, see [Assign IAM roles to Kubernetes service accounts](associate-service-account-role.md).

```
eksctl create iamserviceaccount --name calyptia-fluentbit --namespace calyptia-fluentbit --cluster my-cluster --role-name my-calyptia-role \
    --role-only --attach-policy-arn arn:aws:iam::aws:policy/AWSMarketplaceMeteringRegisterUsage --approve
```

## Cisco Observability Collector


The add-on name is `cisco_cisco-cloud-observability-collectors` and the namespace is `appdynamics`. Cisco publishes the add-on.

For information about the add-on, see [Use the Cisco Cloud Observability AWS Marketplace Add-Ons](https://docs.appdynamics.com/observability/cisco-cloud-observability/en/kubernetes-and-app-service-monitoring/install-kubernetes-and-app-service-monitoring-with-amazon-elastic-kubernetes-service/use-the-cisco-cloud-observability-aws-marketplace-add-ons) in the Cisco AppDynamics documentation.

### Service account name


A service account isn’t used with this add-on.

### AWS managed IAM policy


A managed policy isn’t used with this add-on.

### Custom IAM permissions


Custom permissions aren’t used with this add-on.

## Cisco Observability Operator


The add-on name is `cisco_cisco-cloud-observability-operators` and the namespace is `appdynamics`. Cisco publishes the add-on.

For information about the add-on, see [Use the Cisco Cloud Observability AWS Marketplace Add-Ons](https://docs.appdynamics.com/observability/cisco-cloud-observability/en/kubernetes-and-app-service-monitoring/install-kubernetes-and-app-service-monitoring-with-amazon-elastic-kubernetes-service/use-the-cisco-cloud-observability-aws-marketplace-add-ons) in the Cisco AppDynamics documentation.

### Service account name


A service account isn’t used with this add-on.

### AWS managed IAM policy


A managed policy isn’t used with this add-on.

### Custom IAM permissions


Custom permissions aren’t used with this add-on.

## CLOUDSOFT


The add-on name is `cloudsoft_cloudsoft-amp` and the namespace is `cloudsoft-amp`. CLOUDSOFT publishes the add-on.

For information about the add-on, see [Amazon EKS ADDON](https://docs.cloudsoft.io/operations/configuration/aws-eks-addon.html) in the CLOUDSOFT documentation.

### Service account name


A service account isn’t used with this add-on.

### AWS managed IAM policy


A managed policy isn’t used with this add-on.

### Custom IAM permissions


Custom permissions aren’t used with this add-on.

## Cribl


The add-on name is `cribl_cribledge` and the namespace is `cribledge`. Cribl publishes the add-on.

For information about the add-on, see [Installing the Cribl Amazon EKS Add-on for Edge](https://docs.cribl.io/edge/usecase-edge-aws-eks/) in the Cribl documentation.

### Service account name


A service account isn’t used with this add-on.

### AWS managed IAM policy


A managed policy isn’t used with this add-on.

### Custom IAM permissions


Custom permissions aren’t used with this add-on.

## Dynatrace


The add-on name is `dynatrace_dynatrace-operator` and the namespace is `dynatrace`. Dynatrace publishes the add-on.

For information about the add-on, see [Kubernetes monitoring](https://www.dynatrace.com/technologies/kubernetes-monitoring/) in the Dynatrace documentation.

### Service account name


A service account isn’t used with this add-on.

### AWS managed IAM policy


A managed policy isn’t used with this add-on.

### Custom IAM permissions


Custom permissions aren’t used with this add-on.

## Datree


The add-on name is `datree_engine-pro` and the namespace is `datree`. Datree publishes the add-on.

For information about the add-on, see [Amazon EKS-integration](https://hub.datree.io/integrations/eks-integration) in the Datree documentation.

### Service account name


The service account name is `datree-webhook-server-awsmp`.

### AWS managed IAM policy


The managed policy is `AWSLicenseManagerConsumptionPolicy`. For more information, see [AWSLicenseManagerConsumptionPolicy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSLicenseManagerConsumptionPolicy.html) in the AWS Managed Policy Reference Guide.

### Command to create required IAM role


The following command requires that you have an IAM OpenID Connect (OIDC) provider for your cluster. To determine whether you have one, or to create one, see [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md). Replace *my-cluster* with the name of your cluster and *my-datree-role* with the name for your role. This command requires that you have [eksctl](https://eksctl.io) installed on your device. If you need to use a different tool to create the role and annotate the Kubernetes service account, see [Assign IAM roles to Kubernetes service accounts](associate-service-account-role.md).

```
eksctl create iamserviceaccount --name datree-webhook-server-awsmp --namespace datree --cluster my-cluster --role-name my-datree-role \
    --role-only --attach-policy-arn arn:aws:iam::aws:policy/service-role/AWSLicenseManagerConsumptionPolicy --approve
```

### Custom permissions


Custom permissions aren’t used with this add-on.

## Datadog


The add-on name is `datadog_operator` and the namespace is `datadog-agent`. Datadog publishes the add-on.

For information about the add-on, see [Installing the Datadog Agent on Amazon EKS with the Datadog Operator Add-on](https://docs.datadoghq.com/containers/guide/operator-eks-addon/?tab=console) in the Datadog documentation.

### Service account name


A service account isn’t used with this add-on.

### AWS managed IAM policy


A managed policy isn’t used with this add-on.

### Custom IAM permissions


Custom permissions aren’t used with this add-on.

## Groundcover


The add-on name is `groundcover_agent` and the namespace is `groundcover`. groundcover publishes the add-on.

For information about the add-on, see [Installing the groundcover Amazon EKS Add-on](https://docs.groundcover.com/docs/~/changes/VhDDAl1gy1VIO3RIcgxD/configuration/customization-guide/customize-deployment/eks-add-on) in the groundcover documentation.

### Service account name


A service account isn’t used with this add-on.

### AWS managed IAM policy


A managed policy isn’t used with this add-on.

### Custom IAM permissions


Custom permissions aren’t used with this add-on.

## IBM Instana


The add-on name is `instana-agent` and the namespace is `instana-agent`. IBM publishes the add-on.

For information about the add-on, see [Implement observability for Amazon EKS workloads using the Instana Amazon EKS add-on](https://aws.amazon.com/blogs/ibm-redhat/implement-observability-for-amazon-eks-workloads-using-the-instana-amazon-eks-add-on/) and [Monitor and optimize Amazon EKS costs with IBM Instana and Kubecost](https://aws.amazon.com/blogs/ibm-redhat/monitor-and-optimize-amazon-eks-costs-with-ibm-instana-and-kubecost/) in the AWS Blog.

Instana Observability (Instana) offers an Amazon EKS Add-on that deploys Instana agents to Amazon EKS clusters. Customers can use this add-on to collect and analyze real-time performance data to gain insights into their containerized applications. The Instana Amazon EKS add-on provides visibility across your Kubernetes environments. Once deployed, the Instana agent automatically discovers components within your Amazon EKS clusters including nodes, namespaces, deployments, services, and pods.

### Service account name


A service account isn’t used with this add-on.

### AWS managed IAM policy


A managed policy isn’t used with this add-on.

### Custom IAM permissions


Custom permissions aren’t used with this add-on.

## Grafana Labs


The add-on name is `grafana-labs_kubernetes-monitoring` and the namespace is `monitoring`. Grafana Labs publishes the add-on.

For information about the add-on, see [Configure Kubernetes Monitoring as an Add-on with Amazon EKS](https://grafana.com/docs/grafana-cloud/monitor-infrastructure/kubernetes-monitoring/configuration/config-aws-eks/) in the Grafana Labs documentation.

### Service account name


A service account isn’t used with this add-on.

### AWS managed IAM policy


A managed policy isn’t used with this add-on.

### Custom IAM permissions


Custom permissions aren’t used with this add-on.

## Guance

+  **Publisher** – GUANCE
+  **Name** – `guance_datakit` 
+  **Namespace** – `datakit` 
+  **Service account name** – A service account isn’t used with this add-on.
+  **AWS managed IAM policy** – A managed policy isn’t used with this add-on.
+  **Custom IAM permissions** – Custom permissions aren’t used with this add-on.
+  **Setup and usage instructions** – See [Using Amazon EKS add-on](https://docs.guance.com/en/datakit/datakit-eks-deploy/#add-on-install) in the Guance documentation.

## HA Proxy


The add-on name is `haproxy-technologies_kubernetes-ingress-ee` and the namespace is `haproxy-controller`. HA Proxy publishes the add-on.

For information about the add-on, see [Amazon EKS-integration](https://hub.datree.io/integrations/eks-integration) in the Datree documentation.

### Service account name


The service account name is customer defined.

### AWS managed IAM policy


The managed policy is `AWSLicenseManagerConsumptionPolicy`. For more information, see [AWSLicenseManagerConsumptionPolicy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSLicenseManagerConsumptionPolicy.html) in the AWS Managed Policy Reference Guide.

### Command to create required IAM role


The following command requires that you have an IAM OpenID Connect (OIDC) provider for your cluster. To determine whether you have one, or to create one, see [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md). Replace *my-cluster* with the name of your cluster and *my-haproxy-role* with the name for your role. This command requires that you have [eksctl](https://eksctl.io) installed on your device. If you need to use a different tool to create the role and annotate the Kubernetes service account, see [Assign IAM roles to Kubernetes service accounts](associate-service-account-role.md).

```
eksctl create iamserviceaccount --name service-account-name --namespace haproxy-controller --cluster my-cluster --role-name my-haproxy-role \
    --role-only --attach-policy-arn arn:aws:iam::aws:policy/service-role/AWSLicenseManagerConsumptionPolicy --approve
```

### Custom permissions


Custom permissions aren’t used with this add-on.

## Kpow


The add-on name is `factorhouse_kpow` and the namespace is `factorhouse`. Factorhouse publishes the add-on.

For information about the add-on, see [AWS Marketplace LM](https://docs.kpow.io/installation/aws-marketplace-lm/) in the Kpow documentation.

### Service account name


The service account name is `kpow`.

### AWS managed IAM policy


The managed policy is `AWSLicenseManagerConsumptionPolicy`. For more information, see [AWSLicenseManagerConsumptionPolicy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSLicenseManagerConsumptionPolicy.html) in the AWS Managed Policy Reference Guide.

### Command to create required IAM role


The following command requires that you have an IAM OpenID Connect (OIDC) provider for your cluster. To determine whether you have one, or to create one, see [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md). Replace *my-cluster* with the name of your cluster and *my-kpow-role* with the name for your role. This command requires that you have [eksctl](https://eksctl.io) installed on your device. If you need to use a different tool to create the role and annotate the Kubernetes service account, see [Assign IAM roles to Kubernetes service accounts](associate-service-account-role.md).

```
eksctl create iamserviceaccount --name kpow --namespace factorhouse --cluster my-cluster --role-name my-kpow-role \
    --role-only --attach-policy-arn arn:aws:iam::aws:policy/service-role/AWSLicenseManagerConsumptionPolicy --approve
```

### Custom permissions


Custom permissions aren’t used with this add-on.

## Kubecost


The add-on name is `kubecost_kubecost` and the namespace is `kubecost`. Kubecost publishes the add-on.

For information about the add-on, see [AWS Cloud Billing Integration](https://docs.kubecost.com/install-and-configure/install/cloud-integration/aws-cloud-integrations) in the Kubecost documentation.

You must have the Amazon EBS CSI driver installed on your cluster; otherwise, you receive an error. For more information, see [Store Kubernetes volumes with Amazon EBS](ebs-csi.md).

### Service account name


A service account isn’t used with this add-on.

### AWS managed IAM policy


A managed policy isn’t used with this add-on.

### Custom IAM permissions


Custom permissions aren’t used with this add-on.

## Kasten


The add-on name is `kasten_k10` and the namespace is `kasten-io`. Kasten by Veeam publishes the add-on.

For information about the add-on, see [Installing K10 on AWS using Amazon EKS Add-on](https://docs.kasten.io/latest/install/aws-eks-addon/aws-eks-addon.html) in the Kasten documentation.

You must have the Amazon EBS CSI driver installed on your cluster with a default `StorageClass`.

### Service account name


The service account name is `k10-k10`.

### AWS managed IAM policy


The managed policy is `AWSLicenseManagerConsumptionPolicy`. For more information, see [AWSLicenseManagerConsumptionPolicy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSLicenseManagerConsumptionPolicy.html) in the AWS Managed Policy Reference Guide.

### Command to create required IAM role


The following command requires that you have an IAM OpenID Connect (OIDC) provider for your cluster. To determine whether you have one, or to create one, see [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md). Replace *my-cluster* with the name of your cluster and *my-kasten-role* with the name for your role. This command requires that you have [eksctl](https://eksctl.io) installed on your device. If you need to use a different tool to create the role and annotate the Kubernetes service account, see [Assign IAM roles to Kubernetes service accounts](associate-service-account-role.md).

```
eksctl create iamserviceaccount --name k10-k10 --namespace kasten-io --cluster my-cluster --role-name my-kasten-role \
    --role-only --attach-policy-arn arn:aws:iam::aws:policy/service-role/AWSLicenseManagerConsumptionPolicy --approve
```

### Custom permissions


Custom permissions aren’t used with this add-on.

## Kong


The add-on name is `kong_konnect-ri` and the namespace is `kong`. Kong publishes the add-on.

For information about the add-on, see [Installing the Kong Gateway EKS Add-on](https://kong.github.io/aws-marketplace-addon-kong-gateway/) in the Kong documentation.

You must have the Amazon EBS CSI driver installed on your cluster; otherwise, you receive an error. For more information, see [Store Kubernetes volumes with Amazon EBS](ebs-csi.md).

### Service account name


A service account isn’t used with this add-on.

### AWS managed IAM policy


A managed policy isn’t used with this add-on.

### Custom IAM permissions


Custom permissions aren’t used with this add-on.

## LeakSignal


The add-on name is `leaksignal_leakagent` and the namespace is `leakagent`. LeakSignal publishes the add-on.

For information about the add-on, see [Install the LeakAgent add-on](https://www.leaksignal.com/docs/LeakAgent/Deployment/AWS%20EKS%20Addon/) in the LeakSignal documentation.

You must have the Amazon EBS CSI driver installed on your cluster; otherwise, you receive an error. For more information, see [Store Kubernetes volumes with Amazon EBS](ebs-csi.md).

### Service account name


A service account isn’t used with this add-on.

### AWS managed IAM policy


A managed policy isn’t used with this add-on.

### Custom IAM permissions


Custom permissions aren’t used with this add-on.

## NetApp


The add-on name is `netapp_trident-operator` and the namespace is `trident`. NetApp publishes the add-on.

For information about the add-on, see [Configure the Trident EKS add-on](https://docs.netapp.com/us-en/trident/trident-use/trident-aws-addon.html) in the NetApp documentation.

### Service account name


A service account isn’t used with this add-on.

### AWS managed IAM policy


A managed policy isn’t used with this add-on.

### Custom IAM permissions


Custom permissions aren’t used with this add-on.

## New Relic


The add-on name is `new-relic_kubernetes-operator` and the namespace is `newrelic`. New Relic publishes the add-on.

For information about the add-on, see [Installing the New Relic Add-on for EKS](https://docs.newrelic.com/docs/infrastructure/amazon-integrations/connect/eks-add-on) in the New Relic documentation.

### Service account name


A service account isn’t used with this add-on.

### AWS managed IAM policy


A managed policy isn’t used with this add-on.

### Custom IAM permissions


Custom permissions aren’t used with this add-on.

## Rafay


The add-on name is `rafay-systems_rafay-operator` and the namespace is `rafay-system`. Rafay publishes the add-on.

For information about the add-on, see [Installing the Rafay Amazon EKS Add-on](https://docs.rafay.co/clusters/import/eksaddon/) in the Rafay documentation.

### Service account name


A service account isn’t used with this add-on.

### AWS managed IAM policy


A managed policy isn’t used with this add-on.

### Custom IAM permissions


Custom permissions aren’t used with this add-on.

## Rad Security

+  **Publisher** – RAD SECURITY
+  **Name** – `rad-security_rad-security` 
+  **Namespace** – `ksoc` 
+  **Service account name** – A service account isn’t used with this add-on.
+  **AWS managed IAM policy** – A managed policy isn’t used with this add-on.
+  **Custom IAM permissions** – Custom permissions aren’t used with this add-on.
+  **Setup and usage instructions** – See [Installing Rad Through The AWS Marketplace](https://docs.rad.security/docs/installing-ksoc-in-the-aws-marketplace) in the Rad Security documentation.

## SolarWinds

+  **Publisher** – SOLARWINDS
+  **Name** – `solarwinds_swo-k8s-collector-addon` 
+  **Namespace** – `solarwinds` 
+  **Service account name** – A service account isn’t used with this add-on.
+  **AWS managed IAM policy** – A managed policy isn’t used with this add-on.
+  **Custom IAM permissions** – Custom permissions aren’t used with this add-on.
+  **Setup and usage instructions** – See [Monitor an Amazon EKS cluster](https://documentation.solarwinds.com/en/success_center/observability/content/configure/configure-kubernetes.htm#MonitorAmazonEKS) in the SolarWinds documentation.

## Solo


The add-on name is `solo-io_istio-distro` and the namespace is `istio-system`. Solo publishes the add-on.

For information about the add-on, see [Installing Istio](https://docs.solo.io/gloo-mesh-enterprise/main/setup/install/eks_addon/) in the Solo.io documentation.

### Service account name


A service account isn’t used with this add-on.

### AWS managed IAM policy


A managed policy isn’t used with this add-on.

### Custom IAM permissions


Custom permissions aren’t used with this add-on.

## Snyk

+  **Publisher** – SNYK
+  **Name** – `snyk_runtime-sensor` 
+  **Namespace** – `snyk_runtime-sensor` 
+  **Service account name** – A service account isn’t used with this add-on.
+  **AWS managed IAM policy** – A managed policy isn’t used with this add-on.
+  **Custom IAM permissions** – Custom permissions aren’t used with this add-on.
+  **Setup and usage instructions** – See [Snyk runtime sensor](https://docs.snyk.io/integrate-with-snyk/snyk-runtime-sensor) in the Snyk user docs.

## Stormforge


The add-on name is `stormforge_optimize-Live` and the namespace is `stormforge-system`. StormForge publishes the add-on.

For information about the add-on, see [Installing the StormForge Agent](https://docs.stormforge.io/optimize-live/getting-started/install-v2/) in the StormForge documentation.

### Service account name


A service account isn’t used with this add-on.

### AWS managed IAM policy


A managed policy isn’t used with this add-on.

### Custom IAM permissions


Custom permissions aren’t used with this add-on.

## SUSE

+  **Publisher** – SUSE
+  **Name** – `suse_observability-agent` 
+  **Namespace** – `suse-observability` 
+  **Service account name** – A service account isn’t used with this add-on.
+  **AWS managed IAM policy** – A managed policy isn’t used with this add-on.
+  **Custom IAM permissions** – Custom permissions aren’t used with this add-on.
+  **Setup and usage instructions** – See [Quick Start](https://docs.stackstate.com/get-started/k8s-quick-start-guide#amazon-eks) in the SUSE documentation.

## Splunk


The add-on name is `splunk_splunk-otel-collector-chart` and the namespace is `splunk-monitoring`. Splunk publishes the add-on.

For information about the add-on, see [Install the Splunk add-on for Amazon EKS](https://help.splunk.com/en/splunk-observability-cloud/manage-data/splunk-distribution-of-the-opentelemetry-collector/get-started-with-the-splunk-distribution-of-the-opentelemetry-collector/collector-for-kubernetes/kubernetes-eks-add-on) in the Splunk documentation.

### Service account name


A service account isn’t used with this add-on.

### AWS managed IAM policy


A managed policy isn’t used with this add-on.

### Custom IAM permissions


Custom permissions aren’t used with this add-on.

## Teleport


The add-on name is `teleport_teleport` and the namespace is `teleport`. Teleport publishes the add-on.

For information about the add-on, see [How Teleport Works](https://goteleport.com/how-it-works/) in the Teleport documentation.

### Service account name


A service account isn’t used with this add-on.

### AWS managed IAM policy


A managed policy isn’t used with this add-on.

### Custom IAM permissions


Custom permissions aren’t used with this add-on.

## Tetrate


The add-on name is `tetrate-io_istio-distro` and the namespace is `istio-system`. Tetrate Io publishes the add-on.

For information about the add-on, see the [Tetrate Istio Distro](https://tetratelabs.io/) website.

### Service account name


A service account isn’t used with this add-on.

### AWS managed IAM policy


A managed policy isn’t used with this add-on.

### Custom IAM permissions


Custom permissions aren’t used with this add-on.

## Upbound Universal Crossplane


The add-on name is `upbound_universal-crossplane` and the namespace is `upbound-system`. Upbound publishes the add-on.

For information about the add-on, see [Upbound Universal Crossplane (UXP)](https://docs.upbound.io/uxp/) in the Upbound documentation.

### Service account name


A service account isn’t used with this add-on.

### AWS managed IAM policy


A managed policy isn’t used with this add-on.

### Custom IAM permissions


Custom permissions aren’t used with this add-on.

## Upwind


The add-on name is `upwind` and the namespace is `upwind`. Upwind publishes the add-on.

For information about the add-on, see [Upwind documentation](https://docs.upwind.io/install-sensor/kubernetes/install?installation-method=amazon-eks-addon).

### Service account name


A service account isn’t used with this add-on.

### AWS managed IAM policy


A managed policy isn’t used with this add-on.

### Custom IAM permissions


Custom permissions aren’t used with this add-on.

# Create an Amazon EKS add-on
Create an add-on

Amazon EKS add-ons are add-on software for Amazon EKS clusters. All Amazon EKS add-ons:
+ Include the latest security patches and bug fixes.
+ Are validated by AWS to work with Amazon EKS.
+ Reduce the amount of work required to manage the add-on software.

You can create an Amazon EKS add-on using `eksctl`, the AWS Management Console, or the AWS CLI. If the add-on requires an IAM role, see the entry for that add-on in [Amazon EKS add-ons](eks-add-ons.md) for details about creating the role.

## Prerequisites


Complete the following before you create an add-on:
+ The cluster must exist before you create an add-on for it. For more information, see [Create an Amazon EKS cluster](create-cluster.md).
+ Check if your add-on requires an IAM role. For more information, see the entry for that add-on in [Amazon EKS add-ons](eks-add-ons.md).
+ Verify that the Amazon EKS add-on version is compatible with your cluster. For more information, see [Verify Amazon EKS add-on version compatibility with a cluster](addon-compat.md).
+ Verify that version 0.190.0 or later of the `eksctl` command line tool is installed on your computer or in AWS CloudShell. For more information, see [Installation](https://eksctl.io/installation/) on the `eksctl` website.

## Procedure


Follow the steps for the tool that you want to use: `eksctl`, the AWS Management Console, or the AWS CLI. If the add-on requires an IAM role, see the entry for that add-on in [Available Amazon EKS add-ons from AWS](workloads-add-ons-available-eks.md) for details about creating the role.

## Create add-on (eksctl)


1. View the names of add-ons available for a cluster version. Replace *1.35* with the version of your cluster.

   ```
   eksctl utils describe-addon-versions --kubernetes-version 1.35 | grep AddonName
   ```

   An example output is as follows.

   ```
   "AddonName": "aws-ebs-csi-driver",
                           "AddonName": "coredns",
                           "AddonName": "kube-proxy",
                           "AddonName": "vpc-cni",
                           "AddonName": "adot",
                           "AddonName": "dynatrace_dynatrace-operator",
                           "AddonName": "upbound_universal-crossplane",
                           "AddonName": "teleport_teleport",
                           "AddonName": "factorhouse_kpow",
                           [...]
   ```
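
   The `grep` output above keeps the surrounding JSON. If you want just the add-on names, you can extract them instead. The following is a minimal sketch against a canned two-entry sample; pipe the real `eksctl` output through the same filter:

   ```shell
   # Extract bare add-on names from describe-addon-versions output.
   SAMPLE='"AddonName": "aws-ebs-csi-driver", "AddonName": "coredns",'
   NAMES=$(printf '%s\n' "$SAMPLE" | grep -o '"AddonName": "[^"]*"' | cut -d'"' -f4)
   echo "$NAMES"
   ```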

1. View the versions available for the add-on that you would like to create. Replace *1.35* with the version of your cluster. Replace *name-of-addon* with the name of the add-on you want to view the versions for. The name must be one of the names returned in the previous step.

   ```
   eksctl utils describe-addon-versions --kubernetes-version 1.35 --name name-of-addon | grep AddonVersion
   ```

   The following output is an example of what is returned for the add-on named `vpc-cni`. You can see that the add-on has several available versions.

   ```
   "AddonVersions": [
       "AddonVersion": "v1.12.0-eksbuild.1",
       "AddonVersion": "v1.11.4-eksbuild.1",
       "AddonVersion": "v1.10.4-eksbuild.1",
       "AddonVersion": "v1.9.3-eksbuild.1",
   ```
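
   To pick the newest version without reading the list by eye, a version-aware sort works on these strings. The following is a minimal sketch using the sample `vpc-cni` versions above; in practice, feed in the versions returned by `eksctl`:

   ```shell
   # Select the highest add-on version from a list with a version-aware sort.
   LATEST=$(printf '%s\n' \
       "v1.9.3-eksbuild.1" \
       "v1.10.4-eksbuild.1" \
       "v1.11.4-eksbuild.1" \
       "v1.12.0-eksbuild.1" | sort -V | tail -n 1)
   echo "${LATEST}"
   ```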

   1. Determine whether the add-on you want to create is an Amazon EKS or AWS Marketplace add-on. The AWS Marketplace has third party add-ons that require you to complete additional steps to create the add-on.

      ```
      eksctl utils describe-addon-versions --kubernetes-version 1.35 --name name-of-addon | grep ProductUrl
      ```

      If no output is returned, then the add-on is an Amazon EKS add-on. If output is returned, then the add-on is an AWS Marketplace add-on. The following output is for an add-on named `teleport_teleport`.

      ```
      "ProductUrl": "https://aws.amazon.com/marketplace/pp?sku=3bda70bb-566f-4976-806c-f96faef18b26"
      ```

      You can learn more about the add-on in the AWS Marketplace with the returned URL. If the add-on requires a subscription, you can subscribe to the add-on through the AWS Marketplace. If you’re going to create an add-on from the AWS Marketplace, then the [IAM principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html#iam-term-principal) that you’re using to create the add-on must have permission to create the [AWSServiceRoleForAWSLicenseManagerRole](https://docs.aws.amazon.com/license-manager/latest/userguide/license-manager-role-core.html) service-linked role. For more information about assigning permissions to an IAM entity, see [Adding and removing IAM identity permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html) in the IAM User Guide.
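
      This check can also be scripted: treat the add-on as an AWS Marketplace add-on when the output contains `ProductUrl`. The following is a minimal sketch against a canned sample line; substitute the real `eksctl` output:

      ```shell
      # Classify an add-on by whether its description includes a ProductUrl.
      DESCRIBE_OUTPUT='"ProductUrl": "https://aws.amazon.com/marketplace/pp?sku=3bda70bb-566f-4976-806c-f96faef18b26"'
      if printf '%s\n' "$DESCRIBE_OUTPUT" | grep -q '"ProductUrl"'; then
          KIND="aws-marketplace"
      else
          KIND="amazon-eks"
      fi
      echo "${KIND}"
      ```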

1. Create an Amazon EKS add-on. Copy the command and replace the *user-data* as follows:
   + Replace *my-cluster* with the name of your cluster.
   + Replace *name-of-addon* with the name of the add-on that you want to create.
   + If you want a version of the add-on that’s earlier than the latest version, then replace *latest* with the version number returned in the output of a previous step that you want to use.
   + If the add-on uses a service account role, replace *111122223333* with your account ID and replace *role-name* with the name of the role. For instructions on creating a role for your service account, see the documentation for the add-on that you’re creating. For a list of add-ons, see [AWS add-ons](workloads-add-ons-available-eks.md). Specifying a service account role requires that you have an IAM OpenID Connect (OIDC) provider for your cluster. To determine whether you have one for your cluster, or to create one, see [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md).

     If the add-on doesn’t use a service account role, delete `--service-account-role-arn arn:aws:iam::111122223333:role/role-name`.
   + This example command overwrites the configuration of any existing self-managed version of the add-on, if there is one. If you don’t want to overwrite the configuration of an existing self-managed add-on, remove the *--force* option. If you remove the option, and the Amazon EKS add-on needs to overwrite the configuration of an existing self-managed add-on, then creation of the Amazon EKS add-on fails with an error message to help you resolve the conflict. Before specifying this option, make sure that the Amazon EKS add-on doesn’t manage settings that you need to manage, because those settings are overwritten with this option.

     ```
     eksctl create addon --cluster my-cluster --name name-of-addon --version latest \
         --service-account-role-arn arn:aws:iam::111122223333:role/role-name --force
     ```

     To see a list of all available options for the command, run the following command.

     ```
     eksctl create addon --help
     ```

     For more information about available options, see [Addons](https://eksctl.io/usage/addons/) in the `eksctl` documentation.
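
     After creation starts, you can confirm that the add-on reached the `ACTIVE` status before moving on. The following sketch is hypothetical: `addon_ready` is a helper name invented for this example, and *my-cluster* and *name-of-addon* are placeholders, but the `aws eks` commands shown in the comments are real.

     ```shell
     # Hypothetical sketch: confirm the add-on reached ACTIVE after creation.
     # addon_ready is a helper invented for this example; the aws commands in
     # the comments are real, but my-cluster and name-of-addon are placeholders.
     addon_ready() {
         # Succeeds only when the reported status is ACTIVE.
         [ "$1" = "ACTIVE" ]
     }
     # On a real cluster, fetch the status with:
     #   status=$(aws eks describe-addon --cluster-name my-cluster \
     #       --addon-name name-of-addon --query addon.status --output text)
     # or block until the add-on is ready with:
     #   aws eks wait addon-active --cluster-name my-cluster --addon-name name-of-addon
     status="ACTIVE"   # stand-in for the command output above
     addon_ready "$status" && echo "add-on is ready"
     ```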

## Create add-on (AWS Console)


1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. In the left navigation pane, choose **Clusters**.

1. Choose the name of the cluster that you want to create the add-on for.

1. Choose the **Add-ons** tab.

1. Choose **Get more add-ons**.

1. On the **Select add-ons** page, choose the add-ons that you want to add to your cluster. You can add as many **Amazon EKS add-ons** and **AWS Marketplace add-ons** as you require.

   For **AWS Marketplace** add-ons, the [IAM principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html#iam-term-principal) that you’re using to create the add-on must have permission to read entitlements for the add-on from AWS License Manager. AWS License Manager requires the [AWSServiceRoleForAWSLicenseManagerRole](https://docs.aws.amazon.com/license-manager/latest/userguide/license-manager-role-core.html) service-linked role (SLR), which allows AWS resources to manage licenses on your behalf. The SLR is a one-time requirement per account; you don’t need to create a separate SLR for each add-on or each cluster. For more information about assigning permissions to an [IAM principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html#iam-term-principal), see [Adding and removing IAM identity permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html) in the IAM User Guide.

   If the **AWS Marketplace add-ons** that you want to install aren’t listed, you can choose the page numbering to view additional page results or search in the search box. In the **Filtering options**, you can also filter by **category**, **vendor**, or **pricing model**, and then choose the add-ons from the search results. Once you’ve selected the add-ons that you want to install, choose **Next**.

1. On the **Configure selected add-ons settings** page, do the following:

   1. Choose **View subscription options** to open the **Subscription options** form. Review the **Pricing details** and **Legal** sections, then choose the **Subscribe** button to continue.

   1. For **Version**, choose the version that you want to install. We recommend the version marked **latest**, unless the individual add-on that you’re creating recommends a different version. To determine whether an add-on has a recommended version, see the documentation for the add-on that you’re creating. For a list of add-ons, see [AWS add-ons](workloads-add-ons-available-eks.md).

   1. You have two options for configuring roles for add-ons: EKS Pod Identities IAM role and IAM roles for service accounts (IRSA). Follow the appropriate step below for your preferred option. If all of the add-ons that you selected have **Requires subscription** under **Status**, choose **Next**. You can’t [configure those add-ons](updating-an-add-on.md) further until you’ve subscribed to them after your cluster is created. For the add-ons that don’t have **Requires subscription** under **Status**, do the following:

      1. For **Pod Identity IAM role for service account**, you can either use an existing EKS Pod Identity IAM role or create one using the **Create Recommended Role** button. This field will only provide options with the appropriate trust policy. If there’s no role to select, then you don’t have an existing role with a matching trust policy. To configure an EKS Pod Identity IAM role for service accounts of the selected add-on, choose **Create recommended role**. The role creation wizard opens in a separate window. The wizard will automatically populate the role information as follows. For each add-on where you want to create the EKS Pod Identity IAM role, complete the steps in the IAM wizard as follows.
         + On the **Select trusted entity** step, the AWS service option for **EKS** and the use case for **EKS - Pod Identity** are preselected, and the appropriate trust policy will be automatically populated for the add-on. For example, the role will be created with the appropriate trust policy containing the pods.eks.amazonaws.com IAM Principal as detailed in [Benefits of EKS Pod Identities](pod-identities.md#pod-id-benefits). Choose **Next**.
         + On the **Add permissions** step, the appropriate managed policy for the role policy is preselected for the add-on. For example, for the Amazon VPC CNI add-on, the role will be created with the managed policy `AmazonEKS_CNI_Policy` as detailed in [Amazon VPC CNI plugin for Kubernetes](workloads-add-ons-available-eks.md#add-ons-vpc-cni). Choose **Next**.
         + On the **Name, review, and create** step, in **Role name**, the default role name is automatically populated for the add-on. For example, for the **Amazon VPC CNI** add-on, the role will be created with the name **AmazonEKSPodIdentityAmazonVPCCNIRole**. In **Description**, the default description is automatically populated with the appropriate description for the add-on. For example, for the Amazon VPC CNI add-on, the role will be created with the description **Allows pods running in Amazon EKS cluster** to access AWS resources. In **Trust policy**, view the populated trust policy for the add-on. Choose **Create role**.

           NOTE: Retaining the default role name enables EKS to pre-select the role for add-ons in new clusters or when adding add-ons to existing clusters. You can still override this name and the role will be available for the add-on across your clusters, but the role will need to be manually selected from the drop down.

      1. For add-ons that do not have **Requires subscription** under **Status** and where you want to configure roles using IRSA, see the documentation for the add-on that you’re creating to create an IAM policy and attach it to a role. For a list of add-ons, see [AWS add-ons](workloads-add-ons-available-eks.md). Selecting an IAM role requires that you have an IAM OpenID Connect (OIDC) provider for your cluster. To determine whether you have one for your cluster, or to create one, see [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md).

      1. Choose **Optional configuration settings**.

      1. If the add-on requires configuration, enter it in the **Configuration values** box. To determine whether the add-on requires configuration information, see the documentation for the add-on that you’re creating. For a list of add-ons, see [AWS add-ons](workloads-add-ons-available-eks.md).

      1. Choose one of the available options for **Conflict resolution method**. If you choose **Override** for the **Conflict resolution method**, one or more of the settings for the existing add-on can be overwritten with the Amazon EKS add-on settings. If you don’t enable this option and there’s a conflict with your existing settings, the operation fails. You can use the resulting error message to troubleshoot the conflict. Before choosing this option, make sure that the Amazon EKS add-on doesn’t manage settings that you need to self-manage.

      1. If you want to install the add-on into a specific namespace, enter it in the **Namespace** field. For AWS and community add-ons, you can define a custom Kubernetes namespace to install the add-on into. For more information, see [Custom namespace for add-ons](eks-add-ons.md#custom-namespace).

      1. Choose **Next**.

1. On the **Review and add** page, choose **Create**. After the add-on installation is complete, you see your installed add-ons.

1. If any of the add-ons that you installed require a subscription, complete the following steps:

   1. Choose the **Subscribe** button in the lower right corner for the add-on. You’re taken to the page for the add-on in the AWS Marketplace. Read the information about the add-on such as its **Product Overview** and **Pricing Information**.

   1. Select the **Continue to Subscribe** button on the top right of the add-on page.

   1. Read through the **Terms and Conditions**. If you agree to them, choose **Accept Terms**. It may take several minutes to process the subscription. While the subscription is processing, the **Return to Amazon EKS Console** button is grayed out.

   1. Once the subscription has finished processing, the **Return to Amazon EKS Console** button is no longer grayed out. Choose the button to go back to the Amazon EKS console **Add-ons** tab for your cluster.

   1. For the add-on that you subscribed to, choose **Remove and reinstall** and then choose **Reinstall add-on**. Installation of the add-on can take several minutes. When installation is complete, you can configure the add-on.

## Create add-on (AWS CLI)


1. You need version `2.12.3` or later or version `1.27.160` or later of the AWS Command Line Interface (AWS CLI) installed and configured on your device or AWS CloudShell. To check your current version, use `aws --version | cut -d / -f2 | cut -d ' ' -f1`. Package managers such as `yum`, `apt-get`, or Homebrew for macOS are often several versions behind the latest version of the AWS CLI. To install the latest version, see [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) and [Quick configuration with aws configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config) in the *AWS Command Line Interface User Guide*. The AWS CLI version that is installed in AWS CloudShell might also be several versions behind the latest version. To update it, see [Installing AWS CLI to your home directory](https://docs.aws.amazon.com/cloudshell/latest/userguide/vm-specs.html#install-cli-software) in the *AWS CloudShell User Guide*.
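
   If you script your setup, you can automate this check. The following sketch is hypothetical: `version_ge` is a helper invented here, it relies on `sort -V` (GNU coreutils) for version ordering, and it only checks the `2.12.3` minimum for AWS CLI v2, not the `1.27.160` minimum for v1.

   ```shell
   # Hypothetical sketch: script the AWS CLI version check against the v2 minimum.
   # version_ge is a helper invented here; it relies on sort -V (GNU coreutils)
   # for version ordering, and this sketch only checks the 2.12.3 minimum for
   # AWS CLI v2, not the 1.27.160 minimum for v1.
   version_ge() {
       [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
   }
   # On your machine, obtain the installed version with:
   #   installed=$(aws --version | cut -d / -f2 | cut -d ' ' -f1)
   installed="2.15.0"   # stand-in for the command output above
   if version_ge "$installed" "2.12.3"; then
       echo "AWS CLI is new enough"
   fi
   ```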

1. Determine which add-ons are available. You can see all available add-ons, their type, and their publisher. You can also see the URL for add-ons that are available through the AWS Marketplace. Replace *1.35* with the version of your cluster.

   ```
   aws eks describe-addon-versions --kubernetes-version 1.35 \
       --query 'addons[].{MarketplaceProductUrl: marketplaceInformation.productUrl, Name: addonName, Owner: owner, Publisher: publisher, Type: type}' --output table
   ```

   An example output is as follows.

   ```
   ---------------------------------------------------------------------------------------------------------------------------------------------------------
   |                                                                 DescribeAddonVersions                                                                 |
   +---------------------------------------------------------------+-------------------------------+------------------+--------------+---------------------+
   |                     MarketplaceProductUrl                     |             Name              |      Owner       |  Publisher   |        Type         |
   +---------------------------------------------------------------+-------------------------------+------------------+--------------+---------------------+
   |  None                                                         |  aws-ebs-csi-driver           |  aws             |  eks         |  storage            |
   |  None                                                         |  coredns                      |  aws             |  eks         |  networking         |
   |  None                                                         |  kube-proxy                   |  aws             |  eks         |  networking         |
   |  None                                                         |  vpc-cni                      |  aws             |  eks         |  networking         |
   |  None                                                         |  adot                         |  aws             |  eks         |  observability      |
   | https://aws.amazon.com/marketplace/pp/prodview-brb73nceicv7u |  dynatrace_dynatrace-operator |  aws-marketplace |  dynatrace   |  monitoring         |
   | https://aws.amazon.com/marketplace/pp/prodview-uhc2iwi5xysoc |  upbound_universal-crossplane |  aws-marketplace |  upbound     |  infra-management   |
   | https://aws.amazon.com/marketplace/pp/prodview-hd2ydsrgqy4li |  teleport_teleport            |  aws-marketplace |  teleport    |  policy-management  |
   | https://aws.amazon.com/marketplace/pp/prodview-vgghgqdsplhvc |  factorhouse_kpow             |  aws-marketplace |  factorhouse |  monitoring         |
   |  [...]                                                        |  [...]                        |  [...]           |  [...]       |  [...]              |
   +---------------------------------------------------------------+-------------------------------+------------------+--------------+---------------------+
   ```

   Your output might be different. In this example output, there are three different add-ons available of type `networking` and five add-ons with a publisher of type `eks`. The add-ons with `aws-marketplace` in the `Owner` column may require a subscription before you can install them. You can visit the URL to learn more about the add-on and to subscribe to it.
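
   If you want to feed this step into a script, text output is easier to filter than the table. The sketch below is hypothetical: the sample data stands in for the real CLI output, which you would generate with the commented command (`1.35` is a placeholder for your cluster version).

   ```shell
   # Hypothetical sketch: list only the add-ons that come from AWS Marketplace,
   # since those may require a subscription. The sample data below stands in
   # for the real CLI output, which you would generate with:
   #   aws eks describe-addon-versions --kubernetes-version 1.35 \
   #       --query 'addons[].[addonName, owner]' --output text
   sample=$(printf 'aws-ebs-csi-driver\taws\nvpc-cni\taws\nteleport_teleport\taws-marketplace\n')
   marketplace_addons=$(printf '%s\n' "$sample" | awk -F'\t' '$2 == "aws-marketplace" {print $1}')
   echo "$marketplace_addons"
   ```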

1. You can see which versions are available for each add-on. Replace *1.35* with the version of your cluster and replace *vpc-cni* with the name of an add-on returned in the previous step.

   ```
   aws eks describe-addon-versions --kubernetes-version 1.35 --addon-name vpc-cni \
       --query 'addons[].addonVersions[].{Version: addonVersion, Defaultversion: compatibilities[0].defaultVersion}' --output table
   ```

   An example output is as follows.

   ```
   ------------------------------------------
   |          DescribeAddonVersions         |
   +-----------------+----------------------+
   | Defaultversion  |       Version        |
   +-----------------+----------------------+
   |  False          |  v1.12.0-eksbuild.1  |
   |  True           |  v1.11.4-eksbuild.1  |
   |  False          |  v1.10.4-eksbuild.1  |
   |  False          |  v1.9.3-eksbuild.1   |
   +-----------------+----------------------+
   ```

   The version with `True` in the `Defaultversion` column is the version that the add-on is created with, by default.
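
   You can also pick out the default version programmatically instead of reading the table by eye. This sketch is hypothetical: the sample data mirrors the `Defaultversion`/`Version` table above, and the commented command (with placeholder version `1.35`) is how you would produce the real input.

   ```shell
   # Hypothetical sketch: select the default add-on version for scripting.
   # The sample data mirrors the Defaultversion/Version table; on a real
   # cluster you would generate it with:
   #   aws eks describe-addon-versions --kubernetes-version 1.35 --addon-name vpc-cni \
   #       --query 'addons[].addonVersions[].[compatibilities[0].defaultVersion, addonVersion]' \
   #       --output text
   sample=$(printf 'False\tv1.12.0-eksbuild.1\nTrue\tv1.11.4-eksbuild.1\nFalse\tv1.10.4-eksbuild.1\n')
   default_version=$(printf '%s\n' "$sample" | awk -F'\t' '$1 == "True" {print $2; exit}')
   echo "$default_version"
   ```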

1. (Optional) Find the configuration options for your chosen add-on by running the following command:

   ```
   aws eks describe-addon-configuration --addon-name vpc-cni --addon-version v1.12.0-eksbuild.1
   ```

   ```
   {
       "addonName": "vpc-cni",
       "addonVersion": "v1.12.0-eksbuild.1",
       "configurationSchema": "{\"$ref\":\"#/definitions/VpcCni\",\"$schema\":\"http://json-schema.org/draft-06/schema#\",\"definitions\":{\"Cri\":{\"additionalProperties\":false,\"properties\":{\"hostPath\":{\"$ref\":\"#/definitions/HostPath\"}},\"title\":\"Cri\",\"type\":\"object\"},\"Env\":{\"additionalProperties\":false,\"properties\":{\"ADDITIONAL_ENI_TAGS\":{\"type\":\"string\"},\"AWS_VPC_CNI_NODE_PORT_SUPPORT\":{\"format\":\"boolean\",\"type\":\"string\"},\"AWS_VPC_ENI_MTU\":{\"format\":\"integer\",\"type\":\"string\"},\"AWS_VPC_K8S_CNI_CONFIGURE_RPFILTER\":{\"format\":\"boolean\",\"type\":\"string\"},\"AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG\":{\"format\":\"boolean\",\"type\":\"string\"},\"AWS_VPC_K8S_CNI_EXTERNALSNAT\":{\"format\":\"boolean\",\"type\":\"string\"},\"AWS_VPC_K8S_CNI_LOGLEVEL\":{\"type\":\"string\"},\"AWS_VPC_K8S_CNI_LOG_FILE\":{\"type\":\"string\"},\"AWS_VPC_K8S_CNI_RANDOMIZESNAT\":{\"type\":\"string\"},\"AWS_VPC_K8S_CNI_VETHPREFIX\":{\"type\":\"string\"},\"AWS_VPC_K8S_PLUGIN_LOG_FILE\":{\"type\":\"string\"},\"AWS_VPC_K8S_PLUGIN_LOG_LEVEL\":{\"type\":\"string\"},\"DISABLE_INTROSPECTION\":{\"format\":\"boolean\",\"type\":\"string\"},\"DISABLE_METRICS\":{\"format\":\"boolean\",\"type\":\"string\"},\"DISABLE_NETWORK_RESOURCE_PROVISIONING\":{\"format\":\"boolean\",\"type\":\"string\"},\"ENABLE_POD_ENI\":{\"format\":\"boolean\",\"type\":\"string\"},\"ENABLE_PREFIX_DELEGATION\":{\"format\":\"boolean\",\"type\":\"string\"},\"WARM_ENI_TARGET\":{\"format\":\"integer\",\"type\":\"string\"},\"WARM_PREFIX_TARGET\":{\"format\":\"integer\",\"type\":\"string\"}},\"title\":\"Env\",\"type\":\"object\"},\"HostPath\":{\"additionalProperties\":false,\"properties\":{\"path\":{\"type\":\"string\"}},\"title\":\"HostPath\",\"type\":\"object\"},\"Limits\":{\"additionalProperties\":false,\"properties\":{\"cpu\":{\"type\":\"string\"},\"memory\":{\"type\":\"string\"}},\"title\":\"Limits\",\"type\":\"object\"},\"Resources\":{\"additionalProperties\":false,\"properties\":{\"li
mits\":{\"$ref\":\"#/definitions/Limits\"},\"requests\":{\"$ref\":\"#/definitions/Limits\"}},\"title\":\"Resources\",\"type\":\"object\"},\"VpcCni\":{\"additionalProperties\":false,\"properties\":{\"cri\":{\"$ref\":\"#/definitions/Cri\"},\"env\":{\"$ref\":\"#/definitions/Env\"},\"resources\":{\"$ref\":\"#/definitions/Resources\"}},\"title\":\"VpcCni\",\"type\":\"object\"}}}"
   }
   ```

   The output is a standard JSON schema.

   Here is an example of valid configuration values, in JSON format, that works with the schema above.

   ```
   {
     "resources": {
       "limits": {
         "cpu": "100m"
       }
     }
   }
   ```

   Here is an example of valid configuration values, in YAML format, that works with the schema above.

   ```
     resources:
       limits:
         cpu: 100m
   ```
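
   If you plan to pass the configuration values as a file, you can write them out first. A minimal sketch, assuming the file name `example.yaml` that the later `create-addon` example passes with `file://`:

   ```shell
   # Hypothetical sketch: save the YAML configuration values to a file so they
   # can be passed to create-addon as 'file://example.yaml' in a later step.
   printf 'resources:\n  limits:\n    cpu: 100m\n' > example.yaml
   cat example.yaml
   ```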

1. Determine if the add-on requires IAM permissions. If so, you need to (1) determine if you want to use EKS Pod Identities or IAM Roles for Service Accounts (IRSA), (2) determine the ARN of the IAM role to use with the add-on, and (3) determine the name of the Kubernetes service account used by the add-on. For more information, see [Retrieve IAM information about an Amazon EKS add-on](retreive-iam-info.md).
   + Amazon EKS suggests using EKS Pod Identities if the add-on supports it. This requires that the [Pod Identity Agent is installed on your cluster](pod-identities.md). For more information about using Pod Identities with add-ons, see [IAM roles for Amazon EKS add-ons](add-ons-iam.md).
   + If the add-on or your cluster isn’t set up for EKS Pod Identities, use IRSA. [Confirm that IRSA is set up on your cluster](iam-roles-for-service-accounts.md).
   + [Review the Amazon EKS add-ons documentation to determine whether the add-on requires IAM permissions and the name of the associated Kubernetes service account](eks-add-ons.md).

1. Create an Amazon EKS add-on. Copy the command that follows to your device. Make the following modifications to the command as needed and then run the modified command:
   + Replace *my-cluster* with the name of your cluster.
   + Replace *vpc-cni* with an add-on name returned in the output of the previous step that you want to create.
   + Replace *version-number* with the version returned in the output of the previous step that you want to use.
   + If you want to install the add-on into a custom Kubernetes namespace, add the `--namespace-config 'namespace=<my-namespace>'` option. This option is only available for AWS and community add-ons. For more information, see [Custom namespace for add-ons](eks-add-ons.md#custom-namespace).
   + If the add-on doesn’t require IAM permissions, delete *<service-account-configuration>*.
   + Do one of the following:
     + If the add-on (1) requires IAM permissions, and (2) your cluster uses EKS Pod Identities, replace *<service-account-configuration>* with the following pod identity association. Replace *<service-account-name>* with the service account name used by the add-on. Replace *<role-arn>* with the ARN of an IAM role. The role must have the trust policy required by EKS Pod Identities.

       ```
       --pod-identity-associations 'serviceAccount=<service-account-name>,roleArn=<role-arn>'
       ```
     + If the add-on (1) requires IAM permissions, and (2) your cluster uses IRSA, replace *<service-account-configuration>* with the following IRSA configuration. Replace *111122223333* with your account ID and *role-name* with the name of an existing IAM role that you’ve created. For instructions on creating the role, see the documentation for the add-on that you’re creating. For a list of add-ons, see [AWS add-ons](workloads-add-ons-available-eks.md). Specifying a service account role requires that you have an IAM OpenID Connect (OIDC) provider for your cluster. To determine whether you have one for your cluster, or to create one, see [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md).

       ```
        --service-account-role-arn arn:aws:iam::111122223333:role/role-name
       ```
   + The example commands that follow include the `--configuration-values` option, which overwrites the configuration of any existing self-managed version of the add-on, if there is one. Replace the example value with your desired configuration values, such as a string or a file input. If you don’t want to provide configuration values, delete the `--configuration-values` option. If you don’t want the AWS CLI to overwrite the configuration of an existing self-managed add-on, remove the *--resolve-conflicts OVERWRITE* option. If you remove the option, and the Amazon EKS add-on needs to overwrite the configuration of an existing self-managed add-on, then creation of the Amazon EKS add-on fails with an error message to help you resolve the conflict. Before specifying this option, make sure that the Amazon EKS add-on doesn’t manage settings that you need to manage, because those settings are overwritten with this option.

     ```
     aws eks create-addon --cluster-name my-cluster --addon-name vpc-cni --addon-version version-number \
          <service-account-configuration> --configuration-values '{"resources":{"limits":{"cpu":"100m"}}}' --resolve-conflicts OVERWRITE
     ```

     ```
     aws eks create-addon --cluster-name my-cluster --addon-name vpc-cni --addon-version version-number \
         <service-account-configuration> --configuration-values 'file://example.yaml' --resolve-conflicts OVERWRITE
     ```

      For a full list of available options, see [create-addon](https://docs.aws.amazon.com/cli/latest/reference/eks/create-addon.html) in the Amazon EKS Command Line Reference. If the add-on that you created has `aws-marketplace` listed in the `Owner` column of a previous step, then creation may fail, and you may receive an error message similar to the following error.

     ```
     {
         "addon": {
             "addonName": "addon-name",
             "clusterName": "my-cluster",
             "status": "CREATE_FAILED",
             "addonVersion": "version",
             "health": {
                 "issues": [
                     {
                         "code": "AddonSubscriptionNeeded",
                         "message": "You are currently not subscribed to this add-on. To subscribe, visit the AWS Marketplace console, agree to the seller EULA, select the pricing type if required, then re-install the add-on"
                     }
                 ]
             }
         }
     }
     ```

     If you receive an error similar to the error in the previous output, visit the URL in the output of a previous step to subscribe to the add-on. Once subscribed, run the `create-addon` command again.
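
      In a script, you can detect this case so it tells the operator to subscribe instead of failing silently. This sketch is hypothetical: the `response` string is a trimmed stand-in for the `create-addon` output shown above.

      ```shell
      # Hypothetical sketch: detect the subscription error in the create-addon
      # output. The response string is a trimmed stand-in for the JSON shown above.
      response='{"health":{"issues":[{"code":"AddonSubscriptionNeeded"}]}}'
      if printf '%s' "$response" | grep -q '"code": *"AddonSubscriptionNeeded"'; then
          echo "Subscribe in AWS Marketplace, then rerun create-addon."
      fi
      ```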

# Update an Amazon EKS add-on
Update an add-on

Amazon EKS doesn’t automatically update an add-on when new versions are released or after you update your cluster to a new Kubernetes minor version. To update an add-on for an existing cluster, you must initiate the update. After you initiate the update, Amazon EKS updates the add-on for you. Before updating an add-on, review the current documentation for the add-on. For a list of available add-ons, see [AWS add-ons](workloads-add-ons-available-eks.md). If the add-on requires an IAM role, see the details for the specific add-on in [Available Amazon EKS add-ons from AWS](workloads-add-ons-available-eks.md) for details about creating the role.

## Prerequisites


Complete the following before you update an add-on:
+ Check if your add-on requires an IAM role. For more information, see [Amazon EKS add-ons](eks-add-ons.md).
+ Verify that the Amazon EKS add-on version is compatible with your cluster. For more information, see [Verify Amazon EKS add-on version compatibility with a cluster](addon-compat.md).

## Procedure


You can update an Amazon EKS add-on using `eksctl`, the AWS Management Console, or the AWS CLI.

## Update add-on (eksctl)


1. Determine the current add-ons and add-on versions installed on your cluster. Replace *my-cluster* with the name of your cluster.

   ```
   eksctl get addon --cluster my-cluster
   ```

   An example output is as follows.

   ```
   NAME        VERSION              STATUS  ISSUES  IAMROLE  UPDATE AVAILABLE
   coredns     v1.8.7-eksbuild.2    ACTIVE  0
   kube-proxy  v1.23.7-eksbuild.1   ACTIVE  0                v1.23.8-eksbuild.2
   vpc-cni     v1.10.4-eksbuild.1   ACTIVE  0                v1.12.0-eksbuild.1,v1.11.4-eksbuild.1,v1.11.3-eksbuild.1,v1.11.2-eksbuild.1,v1.11.0-eksbuild.1
   ```

   Your output might look different, depending on which add-ons and versions that you have on your cluster. You can see that in the previous example output, two existing add-ons on the cluster have newer versions available in the `UPDATE AVAILABLE` column.
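
   If you automate update checks, you can pull the names with updates available out of this table. The sketch below is a rough heuristic and hypothetical: it counts fields, so a populated `IAMROLE` column would also add a field; the sample lines stand in for real `eksctl get addon --cluster my-cluster` output.

   ```shell
   # Hypothetical sketch: list add-ons with an update available by counting
   # fields in the eksctl table. Rough heuristic: a populated IAMROLE column
   # would also add a field, so treat the result as a starting point. The
   # sample lines stand in for real `eksctl get addon --cluster my-cluster` output.
   sample=$(printf 'coredns v1.8.7-eksbuild.2 ACTIVE 0\nkube-proxy v1.23.7-eksbuild.1 ACTIVE 0 v1.23.8-eksbuild.2\n')
   updatable=$(printf '%s\n' "$sample" | awk 'NF >= 5 {print $1}')
   echo "$updatable"
   ```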

1. Update the add-on.

   1. Copy the command that follows to your device. Make the following modifications to the command as needed:
      + Replace *my-cluster* with the name of your cluster.
      + Replace *region-code* with the AWS Region that your cluster is in.
      + Replace *vpc-cni* with the name of an add-on returned in the output of the previous step that you want to update.
      + If you want to update to a version earlier than the latest available version, then replace *latest* with the version number returned in the output of the previous step that you want to use. Some add-ons have recommended versions. For more information, see the documentation for the add-on that you’re updating. For a list of add-ons, see [AWS add-ons](workloads-add-ons-available-eks.md).**\$1** If the add-on uses a Kubernetes service account and IAM role, replace *111122223333* with your account ID and *role-name* with the name of an existing IAM role that you’ve created. For instructions on creating the role, see the documentation for the add-on that you’re creating. For a list of add-ons, see [AWS add-ons](workloads-add-ons-available-eks.md). Specifying a service account role requires that you have an IAM OpenID Connect (OIDC) provider for your cluster. To determine whether you have one for your cluster, or to create one, see [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md).

        If the add-on doesn’t use a Kubernetes service account and IAM role, delete the `serviceAccountRoleARN: arn:aws:iam::111122223333:role/role-name` line.
      + The *preserve* option preserves existing values for the add-on. If you have set custom values for add-on settings, and you don’t use this option, Amazon EKS overwrites your values with its default values. If you use this option, then we recommend that you test any field and value changes on a non-production cluster before updating the add-on on your production cluster. If you change this value to `overwrite`, all settings are changed to Amazon EKS default values. If you’ve set custom values for any settings, they might be overwritten with Amazon EKS default values. If you change this value to `none`, Amazon EKS doesn’t change the value of any settings, but the update might fail. If the update fails, you receive an error message to help you resolve the conflict.

        ```
        cat >update-addon.yaml <<EOF
        apiVersion: eksctl.io/v1alpha5
        kind: ClusterConfig
        metadata:
          name: my-cluster
          region: region-code
        
        addons:
        - name: vpc-cni
          version: latest
          serviceAccountRoleARN: arn:aws:iam::111122223333:role/role-name
          resolveConflicts: preserve
        EOF
        ```

   1. Run the modified command to create the `update-addon.yaml` file.

   1. Apply the config file to your cluster.

      ```
      eksctl update addon -f update-addon.yaml
      ```

   For more information about updating add-ons, see [Updating addons](https://eksctl.io/usage/addons/#updating-addons) in the `eksctl` documentation.

## Update add-on (AWS Console)


1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. In the left navigation pane, choose **Clusters**.

1. Choose the name of the cluster that you want to update the add-on for.

1. Choose the **Add-ons** tab.

1. Choose the add-on that you want to update.

1. Choose **Edit**.

1. On the **Configure *name of addon*** page, do the following:

   1. Choose the **Version** that you’d like to use. The add-on might have a recommended version. For more information, see the documentation for the add-on that you’re updating. For a list of add-ons, see [AWS add-ons](workloads-add-ons-available-eks.md).

   1. You have two options for configuring roles for add-ons: EKS Pod Identities IAM role and IAM roles for service accounts (IRSA). Follow the appropriate step below for your preferred option. If all of the add-ons that you selected have **Requires subscription** under **Status**, choose **Next**. For the add-ons that don’t have **Requires subscription** under **Status**, do the following:

      1. For **Pod Identity IAM role for service account**, you can either use an existing EKS Pod Identity IAM role or create one using the **Create Recommended Role** button. This field will only provide options with the appropriate trust policy. If there’s no role to select, then you don’t have an existing role with a matching trust policy. To configure an EKS Pod Identity IAM role for service accounts of the selected add-on, choose **Create recommended role**. The role creation wizard opens in a separate window. The wizard will automatically populate the role information as follows. For each add-on where you want to create the EKS Pod Identity IAM role, complete the steps in the IAM wizard as follows.
         + On the **Select trusted entity** step, the AWS service option for **EKS** and the use case for **EKS - Pod Identity** are preselected, and the appropriate trust policy will be automatically populated for the add-on. For example, the role will be created with the appropriate trust policy containing the pods.eks.amazonaws.com IAM Principal as detailed in [Benefits of EKS Pod Identities](pod-identities.md#pod-id-benefits). Choose **Next**.
         + On the **Add permissions** step, the appropriate managed policy for the role policy is preselected for the add-on. For example, for the Amazon VPC CNI add-on, the role will be created with the managed policy `AmazonEKS_CNI_Policy` as detailed in [Amazon VPC CNI plugin for Kubernetes](workloads-add-ons-available-eks.md#add-ons-vpc-cni). Choose **Next**.
         + On the **Name, review, and create** step, in **Role name**, the default role name is automatically populated for the add-on. For example, for the **Amazon VPC CNI** add-on, the role will be created with the name **AmazonEKSPodIdentityAmazonVPCCNIRole**. In **Description**, the default description is automatically populated with the appropriate description for the add-on. For example, for the Amazon VPC CNI add-on, the role will be created with the description **Allows pods running in Amazon EKS cluster** to access AWS resources. In **Trust policy**, view the populated trust policy for the add-on. Choose **Create role**.
**Note**  
Retaining the default role name enables EKS to pre-select the role for add-ons in new clusters or when adding add-ons to existing clusters. You can still override this name and the role will be available for the add-on across your clusters, but the role will need to be manually selected from the drop-down.

      1. For add-ons that do not have **Requires subscription** under **Status** and where you want to configure roles using IRSA, see the documentation for the add-on that you’re creating to create an IAM policy and attach it to a role. For a list of add-ons, see [AWS add-ons](workloads-add-ons-available-eks.md). Selecting an IAM role requires that you have an IAM OpenID Connect (OIDC) provider for your cluster. To determine whether you have one for your cluster, or to create one, see [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md).

   1. Expand the **Optional configuration settings**.

   1. In **Configuration values**, enter any add-on specific configuration information. For more information, see the documentation for the add-on that you’re updating. For a list of add-ons, see [AWS add-ons](workloads-add-ons-available-eks.md). For **Conflict resolution method**, select one of the options. If you have set custom values for add-on settings, we recommend the **Preserve** option so that Amazon EKS doesn’t overwrite your values with its default values. Even with this option, we recommend that you test any field and value changes on a non-production cluster before updating the add-on on your production cluster. If you choose **Overwrite**, all settings are changed to Amazon EKS default values, and any custom values that you’ve set might be overwritten. If you choose **None**, Amazon EKS doesn’t change the value of any settings, but the update might fail. If the update fails, you receive an error message to help you resolve the conflict.

1. Choose **Save changes**.

## Update add-on (AWS CLI)


1. You need version `2.12.3` or later or version `1.27.160` or later of the AWS Command Line Interface (AWS CLI) installed and configured on your device or AWS CloudShell. To check your current version, use `aws --version | cut -d / -f2 | cut -d ' ' -f1`. Package managers such as `yum`, `apt-get`, or Homebrew for macOS are often several versions behind the latest version of the AWS CLI. To install the latest version, see [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) and [Quick configuration with aws configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config) in the *AWS Command Line Interface User Guide*. The AWS CLI version that is installed in AWS CloudShell might also be several versions behind the latest version. To update it, see [Installing AWS CLI to your home directory](https://docs.aws.amazon.com/cloudshell/latest/userguide/vm-specs.html#install-cli-software) in the *AWS CloudShell User Guide*.

1. See a list of installed add-ons. Replace *my-cluster* with the name of your cluster.

   ```
   aws eks list-addons --cluster-name my-cluster
   ```

   An example output is as follows.

   ```
   {
       "addons": [
           "coredns",
           "kube-proxy",
           "vpc-cni"
       ]
   }
   ```

1. View the current version of the add-on that you want to update. Replace *my-cluster* with your cluster name and *vpc-cni* with the name of the add-on that you want to update.

   ```
   aws eks describe-addon --cluster-name my-cluster --addon-name vpc-cni --query "addon.addonVersion" --output text
   ```

   An example output is as follows.

   ```
   v1.10.4-eksbuild.1
   ```

1. Determine which versions of the add-on are available for your cluster’s version. Replace *1.35* with your cluster’s version and *vpc-cni* with the name of the add-on that you want to update.

   ```
   aws eks describe-addon-versions --kubernetes-version 1.35 --addon-name vpc-cni \
       --query 'addons[].addonVersions[].{Version: addonVersion, Defaultversion: compatibilities[0].defaultVersion}' --output table
   ```

   An example output is as follows.

   ```
   ------------------------------------------
   |          DescribeAddonVersions         |
   +-----------------+----------------------+
   | Defaultversion  |       Version        |
   +-----------------+----------------------+
   |  False          |  v1.12.0-eksbuild.1  |
   |  True           |  v1.11.4-eksbuild.1  |
   |  False          |  v1.10.4-eksbuild.1  |
   |  False          |  v1.9.3-eksbuild.1   |
   +-----------------+----------------------+
   ```

   The version with `True` in the `Defaultversion` column is the version that the add-on is created with by default.
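
   To print only the default version from the table above, you can filter with a JMESPath query. The following is a sketch; `default_addon_version` is a hypothetical helper name, and it assumes the AWS CLI is configured for your account.

   ```shell
   # Print the default version of an add-on for a given Kubernetes version.
   # Sketch only; the JMESPath filter keeps versions whose first compatibility
   # entry has defaultVersion set to true.
   default_addon_version() {
     # $1 = Kubernetes version (such as 1.35), $2 = add-on name (such as vpc-cni)
     aws eks describe-addon-versions --kubernetes-version "$1" --addon-name "$2" \
         --query 'addons[0].addonVersions[?compatibilities[0].defaultVersion] | [0].addonVersion' \
         --output text
   }
   ```

   For example, `default_addon_version 1.35 vpc-cni` would print the default version for your cluster’s Kubernetes version.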

1. Update your add-on. Copy the command that follows to your device. Make the following modifications to the command, as needed, and then run the modified command. For more information about this command, see [update-addon](https://docs.aws.amazon.com/cli/latest/reference/eks/update-addon.html) in the Amazon EKS Command Line Reference.
   + Replace *my-cluster* with the name of your cluster.
   + Replace *vpc-cni* with the name of the add-on that you want to update that was returned in the output of a previous step.
   + Replace *version-number* with the version returned in the output of the previous step that you want to update to. Some add-ons have recommended versions. For more information, see the documentation for the add-on that you’re updating. For a list of add-ons, see [AWS add-ons](workloads-add-ons-available-eks.md). If the add-on uses a Kubernetes service account and IAM role, replace *111122223333* with your account ID and *role-name* with the name of an existing IAM role that you’ve created. For instructions on creating the role, see the documentation for the add-on that you’re creating. For a list of add-ons, see [AWS add-ons](workloads-add-ons-available-eks.md). Specifying a service account role requires that you have an IAM OpenID Connect (OIDC) provider for your cluster. To determine whether you have one for your cluster, or to create one, see [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md).

     If the add-on doesn’t use a Kubernetes service account and IAM role, delete the `--service-account-role-arn arn:aws:iam::111122223333:role/role-name` option.
   + The `--resolve-conflicts PRESERVE` option preserves existing values for the add-on. If you have set custom values for add-on settings, and you don’t use this option, Amazon EKS overwrites your values with its default values. If you use this option, then we recommend that you test any field and value changes on a non-production cluster before updating the add-on on your production cluster. If you change this value to `OVERWRITE`, all settings are changed to Amazon EKS default values. If you’ve set custom values for any settings, they might be overwritten with Amazon EKS default values. If you change this value to `NONE`, Amazon EKS doesn’t change the value of any settings, but the update might fail. If the update fails, you receive an error message to help you resolve the conflict.
   + If you want to remove all custom configuration, perform the update using the `--configuration-values '{}'` option. This sets all custom configuration back to the default values. If you don’t want to change your custom configuration, don’t provide the `--configuration-values` flag. If you want to adjust a custom configuration, replace `'{}'` with the new parameters.

     ```
     aws eks update-addon --cluster-name my-cluster --addon-name vpc-cni --addon-version version-number \
         --service-account-role-arn arn:aws:iam::111122223333:role/role-name --configuration-values '{}' --resolve-conflicts PRESERVE
     ```

1. Check the status of the update. Replace *my-cluster* with the name of your cluster and *vpc-cni* with the name of the add-on you’re updating.

   ```
   aws eks describe-addon --cluster-name my-cluster --addon-name vpc-cni
   ```

   An example output is as follows.

   ```
   {
       "addon": {
           "addonName": "vpc-cni",
           "clusterName": "my-cluster",
            "status": "UPDATING"
       }
   }
   ```

   The update is complete when the status is `ACTIVE`.
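
   The status check can be scripted as a simple poll. The following is a sketch; `wait_for_addon_active` is a hypothetical helper name, and it assumes the AWS CLI is configured for your cluster.

   ```shell
   # Poll describe-addon until the add-on reports ACTIVE. Sketch only.
   wait_for_addon_active() {
     # $1 = cluster name, $2 = add-on name
     status=""
     while [ "$status" != "ACTIVE" ]; do
       status=$(aws eks describe-addon --cluster-name "$1" --addon-name "$2" \
           --query "addon.status" --output text)
       [ "$status" = "ACTIVE" ] || sleep 10
     done
     echo "add-on $2 is ACTIVE"
   }
   ```

   For example, `wait_for_addon_active my-cluster vpc-cni` returns once the update completes.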

# Verify Amazon EKS add-on version compatibility with a cluster
Verify compatibility

Before you create an Amazon EKS add-on you need to verify that the Amazon EKS add-on version is compatible with your cluster.

Use the [describe-addon-versions API](https://docs.aws.amazon.com/eks/latest/APIReference/API_DescribeAddonVersions.html) to list the available versions of EKS add-ons, and which Kubernetes versions each add-on version supports.

1. Verify the AWS CLI is installed and working with `aws sts get-caller-identity`. If this command doesn’t work, learn how to [Get started with the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html).

1. Determine the name of the add-on you want to retrieve version compatibility information for, such as `amazon-cloudwatch-observability`.

1. Determine the Kubernetes version of your cluster, such as `1.35`.

1. Use the AWS CLI to retrieve the add-on versions that are compatible with the Kubernetes version of your cluster.

   ```
   aws eks describe-addon-versions --addon-name amazon-cloudwatch-observability --kubernetes-version 1.35
   ```

   An example output is as follows.

   ```
   {
       "addons": [
           {
               "addonName": "amazon-cloudwatch-observability",
               "type": "observability",
               "addonVersions": [
                   {
                       "addonVersion": "vX.X.X-eksbuild.X",
                       "architecture": [
                           "amd64",
                           "arm64"
                       ],
                       "computeTypes": [
                           "ec2",
                           "auto",
                           "hybrid"
                       ],
                       "compatibilities": [
                           {
                               "clusterVersion": "1.35",
                               "platformVersions": [
                                   "*"
                               ],
                               "defaultVersion": true
                           }
                        ]
                   }
               ]
           }
       ]
   }
   ```

   This output shows that add-on version `vX.X.X-eksbuild.X` is compatible with Kubernetes cluster version `1.35`.
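
   If you have already captured a `describe-addon-versions` response, you can do a quick local check. The following is a sketch; `addon_supports_version` is a hypothetical helper name, and it relies on the pretty-printed JSON layout shown above.

   ```shell
   # Return success if the captured JSON lists the given cluster version
   # among the add-on's compatibilities. Sketch only; relies on the
   # pretty-printed "clusterVersion" layout shown in the example output.
   addon_supports_version() {
     # $1 = describe-addon-versions JSON, $2 = cluster version (such as 1.35)
     printf '%s' "$1" | grep -q "\"clusterVersion\": \"$2\""
   }
   ```

   For example, `addon_supports_version "$json" 1.35` succeeds if `$json` holds the output above.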

## Add-on compatibility with compute types


The `computeTypes` field in the `describe-addon-versions` output indicates an add-on’s compatibility with EKS Auto Mode Managed Nodes or Hybrid Nodes. Add-ons marked `auto` work with EKS Auto Mode’s cloud-based, AWS-managed infrastructure, while those marked `hybrid` can run on on-premises nodes connected to the EKS cloud control plane.

For more information, see [Considerations for Amazon EKS Auto Mode](eks-add-ons.md#addon-consider-auto).

# Remove an Amazon EKS add-on from a cluster
Remove an add-on

You can remove an Amazon EKS add-on from your cluster using `eksctl`, the AWS Management Console, or the AWS CLI.

When you remove an Amazon EKS add-on from a cluster:
+ There is no downtime for the functionality that the add-on provides.
+ If you are using IAM Roles for Service Accounts (IRSA) and the add-on has an IAM role associated with it, the IAM role isn’t removed.
+ If you are using Pod Identities, any Pod Identity Associations owned by the add-on are removed. If you specify the `--preserve` option to the AWS CLI, the associations are preserved.
+ Amazon EKS stops managing settings for the add-on.
+ The console stops notifying you when new versions are available.
+ You can’t update the add-on using any AWS tools or APIs.
+ You can choose to leave the add-on software on your cluster so that you can self-manage it, or you can remove the add-on software from your cluster. You should only remove the add-on software from your cluster if there are no resources on your cluster that depend on the functionality that the add-on provides.

## Prerequisites


Complete the following before you remove an add-on:
+ An existing Amazon EKS cluster. To deploy one, see [Get started with Amazon EKS](getting-started.md).
+ Check if your add-on requires an IAM role. For more information, see the IAM roles for Amazon EKS add-ons topic in this guide.
+ Version `0.215.0` or later of the `eksctl` command line tool installed on your device or AWS CloudShell. To install or update `eksctl`, see [Installation](https://eksctl.io/installation) in the `eksctl` documentation.

## Procedure


You have two options when removing an Amazon EKS add-on.
+  **Preserve add-on software on your cluster** – This option removes Amazon EKS management of any settings. It also removes the ability for Amazon EKS to notify you of updates and automatically update the Amazon EKS add-on after you initiate an update. However, it preserves the add-on software on your cluster. This option makes the add-on a self-managed installation, rather than an Amazon EKS add-on. With this option, there’s no downtime for the add-on.
+  **Remove add-on software entirely from your cluster** – We recommend that you remove the Amazon EKS add-on from your cluster only if there are no resources on your cluster that are dependent on it.

You can remove an Amazon EKS add-on using `eksctl`, the AWS Management Console, or the AWS CLI.

### Remove add-on (eksctl)


1. Determine the current add-ons installed on your cluster. Replace *my-cluster* with the name of your cluster.

   ```
   eksctl get addon --cluster my-cluster
   ```

   An example output is as follows.

   ```
   NAME        VERSION              STATUS  ISSUES  IAMROLE  UPDATE AVAILABLE
   coredns     v1.8.7-eksbuild.2    ACTIVE  0
   kube-proxy  v1.23.7-eksbuild.1   ACTIVE  0
   vpc-cni     v1.10.4-eksbuild.1   ACTIVE  0
   [...]
   ```

   Your output might look different, depending on which add-ons and versions you have on your cluster.

1. Remove the add-on. Replace *my-cluster* with the name of your cluster and *name-of-addon* with the name of the add-on returned in the output of the previous step that you want to remove. If you remove the *--preserve* option, in addition to Amazon EKS no longer managing the add-on, the add-on software is also deleted from your cluster.

   ```
   eksctl delete addon --cluster my-cluster --name name-of-addon --preserve
   ```

   For more information about removing add-ons, see [Deleting addons](https://eksctl.io/usage/addons/#deleting-addons) in the `eksctl` documentation.

### Remove add-on (AWS Console)


1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. In the left navigation pane, choose **Clusters**.

1. Choose the name of the cluster that you want to remove the Amazon EKS add-on for.

1. Choose the **Add-ons** tab.

1. Choose the add-on that you want to remove.

1. Choose **Remove**.

1. In the **Remove: *name of addon*** confirmation dialog box, do the following:

   1. If you want Amazon EKS to stop managing settings for the add-on but retain the add-on software on your cluster, select **Preserve on cluster**. This way, you can manage all of the settings of the add-on on your own.

   1. Enter the add-on name.

   1. Choose **Remove**.

### Remove add-on (AWS CLI)


1. You need version `2.12.3` or later or version `1.27.160` or later of the AWS Command Line Interface (AWS CLI) installed and configured on your device or AWS CloudShell. To install the latest version, see [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) and [Quick configuration with aws configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config) in the *AWS Command Line Interface User Guide*.

1. See a list of installed add-ons. Replace *my-cluster* with the name of your cluster.

   ```
   aws eks list-addons --cluster-name my-cluster
   ```

   An example output is as follows.

   ```
   {
       "addons": [
           "coredns",
           "kube-proxy",
           "vpc-cni",
           "name-of-addon"
       ]
   }
   ```

1. Remove the installed add-on. Replace *my-cluster* with the name of your cluster and *name-of-addon* with the name of the add-on that you want to remove. Removing *--preserve* deletes the add-on software from your cluster.

   ```
   aws eks delete-addon --cluster-name my-cluster --addon-name name-of-addon --preserve
   ```

   The abbreviated example output is as follows.

   ```
   {
       "addon": {
            "addonName": "name-of-addon",
            "clusterName": "my-cluster",
            "status": "DELETING"
       }
   }
   ```

1. Check the status of the removal. Replace *my-cluster* with the name of your cluster and *name-of-addon* with the name of the add-on that you’re removing.

   ```
   aws eks describe-addon --cluster-name my-cluster --addon-name name-of-addon
   ```

   After the add-on is removed, the example output is as follows.

   ```
   An error occurred (ResourceNotFoundException) when calling the DescribeAddon operation: No addon: name-of-addon found in cluster: my-cluster
   ```
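
   You can script this check with a small helper that succeeds once `describe-addon` no longer finds the add-on. The following is a sketch; `addon_removed` is a hypothetical helper name.

   ```shell
   # Return success once the add-on is no longer found in the cluster.
   # Sketch only; describe-addon exits nonzero with ResourceNotFoundException
   # after the removal completes.
   addon_removed() {
     # $1 = cluster name, $2 = add-on name
     ! aws eks describe-addon --cluster-name "$1" --addon-name "$2" >/dev/null 2>&1
   }
   ```

   For example, `until addon_removed my-cluster name-of-addon; do sleep 10; done` waits for the removal to finish.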

# IAM roles for Amazon EKS add-ons
IAM roles

Certain Amazon EKS add-ons need IAM roles and permissions to call AWS APIs. For example, the Amazon VPC CNI add-on calls certain AWS APIs to configure networking resources in your account. These add-ons need to be granted permission using IAM. More specifically, the service account of the pod running the add-on needs to be associated with an IAM role with a specific IAM policy.

The recommended way to grant AWS permissions to cluster workloads is to use the Amazon EKS Pod Identities feature. You can use a **Pod Identity Association** to map the service account of an add-on to an IAM role. If a pod uses a service account that has an association, Amazon EKS sets environment variables in the containers of the pod. The environment variables configure the AWS SDKs, including the AWS CLI, to use the EKS Pod Identity credentials. For more information, see [Learn how EKS Pod Identity grants pods access to AWS services](pod-identities.md).

Amazon EKS add-ons can help manage the life cycle of pod identity associations corresponding to the add-on. For example, you can create or update an Amazon EKS add-on and the necessary pod identity association in a single API call. Amazon EKS also provides an API for retrieving suggested IAM policies.

1. Confirm that the [Amazon EKS pod identity agent](pod-id-agent-setup.md) is set up on your cluster.

1. Determine if the add-on you want to install requires IAM permissions using the `describe-addon-versions` AWS CLI operation. If the `requiresIamPermissions` flag is `true`, then you should use the `describe-addon-configuration` operation to determine the permissions needed by the add-on. The response includes a list of suggested managed IAM policies.

1. Retrieve the name of the Kubernetes Service Account and the IAM policy using the `describe-addon-configuration` CLI operation. Evaluate the scope of the suggested policy against your security requirements.

1. Create an IAM role using the suggested permissions policy, and the trust policy required by Pod Identity. For more information, see [Create a Pod Identity association (AWS Console)](pod-id-association.md#pod-id-association-create).

1. Create or update an Amazon EKS add-on using the CLI. Specify at least one pod identity association. A pod identity association is the name of a Kubernetes service account, and the ARN of the IAM role.
   + Pod identity associations created using the add-on APIs are owned by the respective add-on. If you delete the add-on, the pod identity association is also deleted. You can prevent this cascading delete by using the `preserve` option when deleting an add-on using the AWS CLI or API. You can also directly update or delete the pod identity association if necessary. Add-ons can’t assume ownership of existing pod identity associations. You must delete the existing association and re-create it using an add-on create or update operation.
   + Amazon EKS recommends using pod identity associations to manage IAM permissions for add-ons. The previous method, IAM roles for service accounts (IRSA), is still supported. You can specify both an IRSA `serviceAccountRoleArn` and a pod identity association for an add-on. If the EKS pod identity agent is installed on the cluster, the `serviceAccountRoleArn` will be ignored, and EKS will use the provided pod identity association. If Pod Identity is not enabled, the `serviceAccountRoleArn` will be used.
   + If you update the pod identity associations for an existing add-on, Amazon EKS initiates a rolling restart of the add-on pods.

# Retrieve IAM information about an Amazon EKS add-on
Retrieve IAM information

Before you create an add-on, use the AWS CLI to determine:
+ If the add-on requires IAM permissions
+ The suggested IAM policy to use

## Procedure


1. Determine the name of the add-on you want to install, and the Kubernetes version of your cluster. For more information about add-ons, see [Amazon EKS add-ons](eks-add-ons.md).

1. Use the AWS CLI to determine if the add-on requires IAM permissions.

   ```
   aws eks describe-addon-versions \
   --addon-name <addon-name> \
   --kubernetes-version <kubernetes-version>
   ```

   For example:

   ```
   aws eks describe-addon-versions \
   --addon-name aws-ebs-csi-driver \
   --kubernetes-version 1.30
   ```

   Review the following sample output. Note that `requiresIamPermissions` is `true`, and note the default add-on version. You need to specify the add-on version when retrieving the recommended IAM policy.

   ```
   {
       "addons": [
           {
               "addonName": "aws-ebs-csi-driver",
               "type": "storage",
               "addonVersions": [
                   {
                       "addonVersion": "v1.31.0-eksbuild.1",
                       "architecture": [
                           "amd64",
                           "arm64"
                       ],
                       "compatibilities": [
                           {
                               "clusterVersion": "1.30",
                               "platformVersions": [
                                   "*"
                               ],
                               "defaultVersion": true
                           }
                       ],
                       "requiresConfiguration": false,
                       "requiresIamPermissions": true
                   },
   [...]
   ```
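
   You can extract just the flag with a `--query` expression. The following is a sketch; `requires_iam_permissions` is a hypothetical helper name, and the ordering of `addonVersions` in the response is not guaranteed.

   ```shell
   # Print the requiresIamPermissions flag for the first listed version of an
   # add-on. Sketch only.
   requires_iam_permissions() {
     # $1 = add-on name, $2 = Kubernetes version
     aws eks describe-addon-versions --addon-name "$1" --kubernetes-version "$2" \
         --query 'addons[0].addonVersions[0].requiresIamPermissions' --output text
   }
   ```

   For example, `requires_iam_permissions aws-ebs-csi-driver 1.30` prints `True` for the output above.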

1. If the add-on requires IAM permissions, use the AWS CLI to retrieve a recommended IAM policy.

   ```
   aws eks describe-addon-configuration \
   --query podIdentityConfiguration \
   --addon-name <addon-name> \
   --addon-version <addon-version>
   ```

   For example:

   ```
   aws eks describe-addon-configuration \
   --query podIdentityConfiguration \
   --addon-name aws-ebs-csi-driver \
   --addon-version v1.31.0-eksbuild.1
   ```

   Review the following output. Note the `recommendedManagedPolicies`.

   ```
   [
       {
           "serviceAccount": "ebs-csi-controller-sa",
           "recommendedManagedPolicies": [
               "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy"
           ]
       }
   ]
   ```

1. Create an IAM role and attach the recommended Managed Policy. Alternatively, review the managed policy and scope down the permissions as appropriate. For more information see [Create a Pod Identity association (AWS Console)](pod-id-association.md#pod-id-association-create).
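
The role creation in the previous step can be sketched with the AWS CLI. The trust policy below uses the `pods.eks.amazonaws.com` service principal required by EKS Pod Identity; the file name, the example role name, and the `create_pod_identity_role` helper are hypothetical.

```shell
# Write the EKS Pod Identity trust policy, then create a role and attach the
# recommended managed policy. Sketch only.
cat > pod-identity-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "pods.eks.amazonaws.com" },
      "Action": ["sts:AssumeRole", "sts:TagSession"]
    }
  ]
}
EOF

create_pod_identity_role() {
  # $1 = role name, $2 = recommended managed policy ARN
  aws iam create-role --role-name "$1" \
      --assume-role-policy-document file://pod-identity-trust-policy.json
  aws iam attach-role-policy --role-name "$1" --policy-arn "$2"
}
```

For example, `create_pod_identity_role AmazonEKSPodIdentityEBSCSIRole arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy` would create a role for the Amazon EBS CSI driver add-on.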

## Pod Identity Support Reference


The following table indicates if certain Amazon EKS add-ons support EKS Pod Identity.


| Add-on name | Pod identity support | Minimum version required | 
| --- | --- | --- | 
|   [Amazon EBS CSI Driver](workloads-add-ons-available-eks.md#add-ons-aws-ebs-csi-driver)   |  Yes  |  v1.26.0-eksbuild.1  | 
|   [Amazon VPC CNI](workloads-add-ons-available-eks.md#add-ons-vpc-cni)   |  Yes  |  v1.15.5-eksbuild.1  | 
|   [Amazon EFS CSI Driver](workloads-add-ons-available-eks.md#add-ons-aws-efs-csi-driver)   |  Yes  |  v2.0.5-eksbuild.1  | 
|   [AWS Distro for OpenTelemetry](workloads-add-ons-available-eks.md#add-ons-adot)   |  Yes  |  v0.94.1-eksbuild.1  | 
|   [Mountpoint for Amazon S3 CSI Driver](workloads-add-ons-available-eks.md#mountpoint-for-s3-add-on)   |  No  |  N/A  | 
|   [Amazon CloudWatch Observability agent](workloads-add-ons-available-eks.md#amazon-cloudwatch-observability)   |  Yes  |  v3.1.0-eksbuild.1  | 

This table was last updated on October 28, 2024.

# Use Pod Identities to assign an IAM role to an Amazon EKS add-on
Use Pod Identities

Certain Amazon EKS add-ons need IAM roles and permissions. Before you add or update an Amazon EKS add-on to use a Pod Identity association, verify the role and policy to use. For more information, see [Retrieve IAM information about an Amazon EKS add-on](retreive-iam-info.md).

1. Determine:
   +  `cluster-name` – The name of the cluster to install the add-on onto.
   +  `addon-name` – The name of the add-on to install.
   +  `service-account-name` – The name of the Kubernetes Service Account used by the add-on.
   +  `iam-role-arn` – The ARN of an IAM role with sufficient permissions for the add-on. The role must have the required trust policy for EKS Pod Identity. For more information see [Create a Pod Identity association (AWS Console)](pod-id-association.md#pod-id-association-create).

1. Update the add-on using the AWS CLI. You can also specify Pod Identity associations when creating an add-on, using the same `--pod-identity-associations` syntax. Note that when you specify pod identity associations while updating an add-on, all previous pod identity associations are overwritten.

   ```
   aws eks update-addon --cluster-name <cluster-name> \
   --addon-name <addon-name> \
   --pod-identity-associations 'serviceAccount=<service-account-name>,roleArn=<role-arn>'
   ```

   For example:

   ```
   aws eks update-addon --cluster-name mycluster \
   --addon-name aws-ebs-csi-driver \
   --pod-identity-associations 'serviceAccount=ebs-csi-controller-sa,roleArn=arn:aws:iam::123456789012:role/StorageDriver'
   ```

1. Validate the Pod Identity association was created:

   ```
   aws eks list-pod-identity-associations --cluster-name <cluster-name>
   ```

   If successful, you should see output similar to the following. Note the `ownerArn` of the EKS add-on.

   ```
   {
       "associations": [
           {
               "clusterName": "mycluster",
               "namespace": "kube-system",
               "serviceAccount": "ebs-csi-controller-sa",
               "associationArn": "arn:aws:eks:us-west-2:123456789012:podidentityassociation/mycluster/a-4wvljrezsukshq1bv",
               "associationId": "a-4wvljrezsukshq1bv",
               "ownerArn": "arn:aws:eks:us-west-2:123456789012:addon/mycluster/aws-ebs-csi-driver/9cc7ce8c-2e15-b0a7-f311-426691cd8546"
           }
       ]
   }
   ```

# Remove Pod Identity associations from an Amazon EKS add-on
Remove Pod Identity

Remove the Pod Identity associations from an Amazon EKS add-on.

1. Determine:
   +  `cluster-name` - The name of the EKS cluster to install the add-on onto.
   +  `addon-name` - The name of the Amazon EKS add-on to install.

1. Update the addon to specify an empty array of pod identity associations.

   ```
   aws eks update-addon --cluster-name <cluster-name> \
   --addon-name <addon-name> \
   --pod-identity-associations "[]"
   ```

# Troubleshoot Pod Identities for EKS add-ons
Troubleshoot Identities

If your add-ons encounter errors while attempting AWS API, SDK, or CLI operations, confirm the following:
+ The Pod Identity Agent is installed in your cluster.
  + For information about how to install the Pod Identity Agent, see [Set up the Amazon EKS Pod Identity Agent](pod-id-agent-setup.md).
+ The add-on has a valid Pod Identity association.
  + Use the AWS CLI to retrieve the associations for the service account name used by the add-on.

    ```
    aws eks list-pod-identity-associations --cluster-name <cluster-name>
    ```
+ The IAM role has the required trust policy for Pod Identities.
  + Use the AWS CLI to retrieve the trust policy for an add-on.

    ```
    aws iam get-role --role-name <role-name> --query Role.AssumeRolePolicyDocument
    ```
+ The IAM role has the necessary permissions for the add-on.
  + Use AWS CloudTrail to review `AccessDenied` or `UnauthorizedOperation` events.
+ The service account name in the pod identity association matches the service account name used by the add-on.
  + For information about the available add-ons, see [AWS add-ons](workloads-add-ons-available-eks.md).
+ Check the configuration of the `MutatingWebhookConfiguration` named `pod-identity-webhook`.
  + The `admissionReviewVersions` of the webhook needs to be `v1beta1`; it doesn’t work with `v1`.
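
  A quick way to inspect this is to print the configured versions. The following is a sketch; `webhook_review_versions` is a hypothetical helper name, and it assumes `kubectl` is configured for your cluster.

  ```shell
  # Print the admissionReviewVersions configured on the Pod Identity webhook.
  # Sketch only; the output should include v1beta1.
  webhook_review_versions() {
    kubectl get mutatingwebhookconfiguration pod-identity-webhook \
        -o jsonpath='{.webhooks[*].admissionReviewVersions}'
  }
  ```

  Call `webhook_review_versions` and confirm that the output includes `v1beta1`.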

# Determine fields you can customize for Amazon EKS add-ons
Fields you can customize

Amazon EKS add-ons are installed to your cluster using standard, best practice configurations. For more information about adding an Amazon EKS add-on to your cluster, see [Amazon EKS add-ons](eks-add-ons.md).

You may want to customize the configuration of an Amazon EKS add-on to enable advanced features. Amazon EKS uses the Kubernetes server-side apply feature to enable management of an add-on by Amazon EKS without overwriting your configuration for settings that aren’t managed by Amazon EKS. For more information, see [Server-Side Apply](https://kubernetes.io/docs/reference/using-api/server-side-apply/) in the Kubernetes documentation. To achieve this, Amazon EKS manages a minimum set of fields for every add-on that it installs. You can modify all fields that aren’t managed by Amazon EKS, or another Kubernetes control plane process such as `kube-controller-manager`, without issue.

**Important**  
Modifying a field managed by Amazon EKS prevents Amazon EKS from managing the add-on and may result in your changes being overwritten when an add-on is updated.

## Field management syntax


When you view details for a Kubernetes object, both managed and unmanaged fields are returned in the output. Managed fields can be either of the following types:
+  **Fully managed** – All keys for the field are managed by Amazon EKS. Modifications to any value causes a conflict.
+  **Partially managed** – Some keys for the field are managed by Amazon EKS. Only modifications to the keys explicitly managed by Amazon EKS cause a conflict.

Both types of fields are tagged with `manager: eks`.

Each key is either a `.` representing the field itself, which always maps to an empty set, or a string that represents a sub-field or item. The output for field management consists of the following types of declarations:
+  `f:name `, where *name* is the name of a field in a list.
+  `k:keys `, where *keys* is a map of a list item’s fields.
+  `v:value `, where *value* is the exact JSON formatted value of a list item.
+  `i:index `, where *index* is the position of an item in the list.

The following portions of output for the CoreDNS add-on illustrate the previous declarations:
+  **Fully managed fields** – If a managed field has an `f:` (field) specified, but no `k:` (key), then the entire field is managed. Modifications to any values in this field cause a conflict.

  In the following output, you can see that the container named `coredns` is managed by `eks`. The `args`, `image`, and `imagePullPolicy` sub-fields are also managed by `eks`. Modifications to any values in these fields cause a conflict.

  ```
  [...]
  f:containers:
    k:{"name":"coredns"}:
    .: {}
    f:args: {}
    f:image: {}
    f:imagePullPolicy: {}
  [...]
  manager: eks
  [...]
  ```
+  **Partially managed fields** – If a managed key has a value specified, the declared keys are managed for that field. Modifying the specified keys causes a conflict.

  In the following output, you can see that `eks` manages the `config-volume` and `tmp` volumes set with the `name` key.

  ```
  [...]
  f:volumes:
    k:{"name":"config-volume"}:
      .: {}
      f:configMap:
        f:items: {}
        f:name: {}
      f:name: {}
    k:{"name":"tmp"}:
      .: {}
      f:name: {}
  [...]
  manager: eks
  [...]
  ```
+  **Adding keys to partially managed fields** – If only a specific key value is managed, you can safely add additional keys, such as arguments, to a field without causing a conflict. Before you add keys, make sure that the field itself isn’t managed. Adding or modifying any value that is managed causes a conflict.

  In the following output, you can see that both the `name` key and `name` field are managed. Adding or modifying any container name causes a conflict with this managed key.

  ```
  [...]
  f:containers:
    k:{"name":"coredns"}:
  [...]
      f:name: {}
  [...]
  manager: eks
  [...]
  ```
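The rules above can be sketched in a short script. The following is a minimal illustration (not an AWS tool) that walks a hand-written `fieldsV1` map mirroring the fully managed `coredns` container output shown earlier, and lists every managed field path. Modifying a value at any listed path would cause a conflict.

```python
# Hand-written sample mirroring the fully managed coredns container output above.
fields_v1 = {
    "f:containers": {
        'k:{"name":"coredns"}': {
            ".": {},
            "f:args": {},
            "f:image": {},
            "f:imagePullPolicy": {},
        }
    }
}

def managed_paths(node, prefix=""):
    """Yield a dotted path for every f: (field) key in a fieldsV1 map."""
    for key, child in node.items():
        if key.startswith("f:"):
            path = prefix + key[2:]
            yield path
            yield from managed_paths(child, path + ".")
        elif key.startswith("k:"):
            # A list item selected by its keys; descend without extending the path.
            yield from managed_paths(child, prefix)

print(sorted(managed_paths(fields_v1)))
# ['containers', 'containers.args', 'containers.image', 'containers.imagePullPolicy']
```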

## Procedure


You can use `kubectl` to see which fields are managed by Amazon EKS for any Amazon EKS add-on.

You can modify all fields that aren’t managed by Amazon EKS, or another Kubernetes control plane process such as `kube-controller-manager`, without issue.

1. Determine which add-on you want to examine. To see all of the Deployments and DaemonSets deployed to your cluster, see [View Kubernetes resources in the AWS Management Console](view-kubernetes-resources.md).

1. View the managed fields for an add-on by running the following command:

   ```
   kubectl get type/add-on-name -n add-on-namespace -o yaml
   ```

   For example, you can see the managed fields for the CoreDNS add-on with the following command.

   ```
   kubectl get deployment/coredns -n kube-system -o yaml
   ```

   Field management is listed in the following section in the returned output.

   ```
   [...]
   managedFields:
     - apiVersion: apps/v1
       fieldsType: FieldsV1
       fieldsV1:
   [...]
   ```
**Note**  
If you don’t see `managedFields` in the output, add `--show-managed-fields` to the command and run it again. The version of `kubectl` that you’re using determines whether managed fields are returned by default.
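If you process this output in a script, you can select only the entries owned by `eks`. The following is a minimal sketch that assumes you have already retrieved the object’s `metadata.managedFields` list (for example, from the JSON form of the previous command); the sample entries here are abbreviated by hand:

```python
# Abbreviated, hand-written sample of a metadata.managedFields list.
managed_fields = [
    {"manager": "eks", "operation": "Apply", "apiVersion": "apps/v1"},
    {"manager": "kube-controller-manager", "operation": "Update", "apiVersion": "apps/v1"},
]

# Keep only the entries whose fields are managed by Amazon EKS.
eks_entries = [entry for entry in managed_fields if entry["manager"] == "eks"]
print([entry["operation"] for entry in eks_entries])
# ['Apply']
```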

## Next steps


Customize the fields that aren’t managed by Amazon EKS for your add-on.

# Validate container image signatures during deployment
Verify container images

If you use [AWS Signer](https://docs.aws.amazon.com/signer/latest/developerguide/Welcome.html) and want to verify signed container images at the time of deployment, you can use one of the following solutions:
+  [Gatekeeper and Ratify](https://ratify.dev/docs/1.0/quickstarts/ratify-on-aws) – Use Gatekeeper as the admission controller and Ratify configured with an AWS Signer plugin as a webhook for validating signatures.
+  [Kyverno](https://github.com/nirmata/kyverno-notation-aws) – A Kubernetes policy engine configured with an AWS Signer plugin for validating signatures.

**Note**  
Before verifying container image signatures, configure the [Notation](https://github.com/notaryproject/notation#readme) trust store and trust policy, as required by your selected admission controller.
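For example, a Notation trust policy that verifies signatures produced by an AWS Signer signing profile might look like the following sketch. The policy name, trust store name, Region, account ID, and signing profile ARN are placeholders that you must replace with your own values.

```json
{
  "version": "1.0",
  "trustPolicies": [
    {
      "name": "aws-signer-tp",
      "registryScopes": ["*"],
      "signatureVerification": {
        "level": "strict"
      },
      "trustStores": ["signingAuthority:aws-signer-ts"],
      "trustedIdentities": [
        "arn:aws:signer:us-west-2:111122223333:/signing-profiles/MyProfile"
      ]
    }
  ]
}
```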