


# Run on-premises workloads on hybrid nodes

In an EKS cluster with hybrid nodes enabled, you can run on-premises and edge applications on your own infrastructure with the same Amazon EKS clusters, features, and tools that you use in AWS Cloud.

The following sections contain step-by-step instructions for using hybrid nodes.

**Topics**
+ [Connect hybrid nodes](hybrid-nodes-join.md)
+ [Connect hybrid nodes with Bottlerocket](hybrid-nodes-bottlerocket.md)
+ [Upgrade hybrid nodes](hybrid-nodes-upgrade.md)
+ [Patch hybrid nodes](hybrid-nodes-security.md)
+ [Delete hybrid nodes](hybrid-nodes-remove.md)

# Connect hybrid nodes

**Note**  
The following steps apply to hybrid nodes running compatible operating systems except Bottlerocket. For steps to connect a hybrid node that runs Bottlerocket, see [Connect hybrid nodes with Bottlerocket](hybrid-nodes-bottlerocket.md).

This topic describes how to connect hybrid nodes to an Amazon EKS cluster. After your hybrid nodes join the cluster, they will appear with status `Not Ready` in the Amazon EKS console and in Kubernetes-compatible tooling such as `kubectl`. After completing the steps on this page, proceed to [Configure CNI for hybrid nodes](hybrid-nodes-cni.md) to make your hybrid nodes ready to run applications.

## Prerequisites


Before connecting hybrid nodes to your Amazon EKS cluster, make sure you have completed the prerequisite steps.
+ You have network connectivity from your on-premises environment to the AWS Region hosting your Amazon EKS cluster. See [Prepare networking for hybrid nodes](hybrid-nodes-networking.md) for more information.
+ You have a compatible operating system for hybrid nodes installed on your on-premises hosts. See [Prepare operating system for hybrid nodes](hybrid-nodes-os.md) for more information.
+ You have created your Hybrid Nodes IAM role and set up your on-premises credential provider (AWS Systems Manager hybrid activations or AWS IAM Roles Anywhere). See [Prepare credentials for hybrid nodes](hybrid-nodes-creds.md) for more information.
+ You have created your hybrid nodes-enabled Amazon EKS cluster. See [Create an Amazon EKS cluster with hybrid nodes](hybrid-nodes-cluster-create.md) for more information.
+ You have associated your Hybrid Nodes IAM role with Kubernetes Role-Based Access Control (RBAC) permissions. See [Prepare cluster access for hybrid nodes](hybrid-nodes-cluster-prep.md) for more information.

## Step 1: Install the hybrid nodes CLI (`nodeadm`) on each on-premises host


If you are including the Amazon EKS Hybrid Nodes CLI (`nodeadm`) in your pre-built operating system images, you can skip this step. For more information on the hybrid nodes version of `nodeadm`, see [Hybrid nodes `nodeadm` reference](hybrid-nodes-nodeadm.md).

The hybrid nodes version of `nodeadm` is hosted in Amazon S3 and fronted by Amazon CloudFront. To install it, run the following command on each on-premises host.

 **For x86\_64 hosts:** 

```
curl -OL 'https://hybrid-assets.eks.amazonaws.com/releases/latest/bin/linux/amd64/nodeadm'
```

 **For ARM hosts:** 

```
curl -OL 'https://hybrid-assets.eks.amazonaws.com/releases/latest/bin/linux/arm64/nodeadm'
```

Add executable file permission to the downloaded binary on each host.

```
chmod +x nodeadm
```
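If you script the download, you can pick the artifact by CPU architecture; a minimal sketch (the `NODEADM_ARCH` and `NODEADM_URL` names are illustrative, not part of `nodeadm`):

```shell
# Map this host's CPU architecture to the release path component.
case "$(uname -m)" in
  x86_64)  NODEADM_ARCH=amd64 ;;
  aarch64) NODEADM_ARCH=arm64 ;;
  *) echo "unsupported architecture: $(uname -m)" >&2 ;;
esac

# Assemble the download URL shown earlier on this page.
NODEADM_URL="https://hybrid-assets.eks.amazonaws.com/releases/latest/bin/linux/${NODEADM_ARCH}/nodeadm"
echo "$NODEADM_URL"
```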

## Step 2: Install the hybrid nodes dependencies with `nodeadm`


If you are installing the hybrid nodes dependencies in pre-built operating system images, you can skip this step. The `nodeadm install` command installs all dependencies required for hybrid nodes, including containerd, kubelet, kubectl, and the AWS SSM or AWS IAM Roles Anywhere components. See [Hybrid nodes `nodeadm` reference](hybrid-nodes-nodeadm.md) for more information on the components and file locations installed by `nodeadm install`. See [Prepare networking for hybrid nodes](hybrid-nodes-networking.md) for more information on the domains that must be allowed in your on-premises firewall for the `nodeadm install` process.

Run the following command to install the hybrid nodes dependencies on your on-premises host.

**Important**  
The hybrid nodes CLI (`nodeadm`) must be run with a user that has sudo/root access on your host.
+ Replace `K8S_VERSION` with the Kubernetes minor version of your Amazon EKS cluster, for example `1.31`. See [Amazon EKS supported versions](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html) for a list of the supported Kubernetes versions.
+ Replace `CREDS_PROVIDER` with the on-premises credential provider you are using. Valid values are `ssm` for AWS SSM and `iam-ra` for AWS IAM Roles Anywhere.

```
nodeadm install K8S_VERSION --credential-provider CREDS_PROVIDER
```

## Step 3: Connect hybrid nodes to your cluster


Before connecting your hybrid nodes to your cluster, make sure you have allowed the required access in your on-premises firewall and in the security group for your cluster so that the Amazon EKS control plane and your hybrid nodes can communicate. Most issues at this step are related to the firewall configuration, security group configuration, or Hybrid Nodes IAM role configuration.

**Important**  
The hybrid nodes CLI (`nodeadm`) must be run with a user that has sudo/root access on your host.

1. Create a `nodeConfig.yaml` file on each host with the values for your deployment. For a full description of the available configuration settings, see [Hybrid nodes `nodeadm` reference](hybrid-nodes-nodeadm.md). If your Hybrid Nodes IAM role does not have permission for the `eks:DescribeCluster` action, you must pass your Kubernetes API endpoint, cluster CA bundle, and Kubernetes service IPv4 CIDR in the cluster section of your `nodeConfig.yaml`.

   1. Use the `nodeConfig.yaml` example below if you are using AWS SSM hybrid activations for your on-premises credentials provider.

      1. Replace `CLUSTER_NAME` with the name of your cluster.

      1. Replace `AWS_REGION` with the AWS Region hosting your cluster. For example, `us-west-2`.

      1. Replace `ACTIVATION_CODE` with the activation code you received when creating your AWS SSM hybrid activation. See [Prepare credentials for hybrid nodes](hybrid-nodes-creds.md) for more information.

      1. Replace `ACTIVATION_ID` with the activation ID you received when creating your AWS SSM hybrid activation. You can retrieve this information from the AWS Systems Manager console or from the AWS CLI `aws ssm describe-activations` command.

         ```
         apiVersion: node.eks.aws/v1alpha1
         kind: NodeConfig
         spec:
           cluster:
             name: CLUSTER_NAME
             region: AWS_REGION
           hybrid:
             ssm:
               activationCode: ACTIVATION_CODE
               activationId: ACTIVATION_ID
         ```

   1. Use the `nodeConfig.yaml` example below if you are using AWS IAM Roles Anywhere for your on-premises credentials provider.

      1. Replace `CLUSTER_NAME` with the name of your cluster.

      1. Replace `AWS_REGION` with the AWS Region hosting your cluster. For example, `us-west-2`.

      1. Replace `NODE_NAME` with the name of your node. The node name must match the CN of the certificate on the host if you configured the trust policy of your Hybrid Nodes IAM role with the `"sts:RoleSessionName": "${aws:PrincipalTag/x509Subject/CN}"` resource condition. The `nodeName` you use must not be longer than 64 characters.

      1. Replace `TRUST_ANCHOR_ARN` with the ARN of the trust anchor you configured in the steps for Prepare credentials for hybrid nodes.

      1. Replace `PROFILE_ARN` with the ARN of the profile you configured in the steps for [Prepare credentials for hybrid nodes](hybrid-nodes-creds.md).

      1. Replace `ROLE_ARN` with the ARN of your Hybrid Nodes IAM role.

      1. Replace `CERTIFICATE_PATH` with the path on disk to your node certificate. If you don’t specify it, the default is `/etc/iam/pki/server.pem`.

      1. Replace `KEY_PATH` with the path on disk to your certificate private key. If you don’t specify it, the default is `/etc/iam/pki/server.key`.

         ```
         apiVersion: node.eks.aws/v1alpha1
         kind: NodeConfig
         spec:
           cluster:
             name: CLUSTER_NAME
             region: AWS_REGION
           hybrid:
             iamRolesAnywhere:
               nodeName: NODE_NAME
               trustAnchorArn: TRUST_ANCHOR_ARN
               profileArn: PROFILE_ARN
               roleArn: ROLE_ARN
               certificatePath: CERTIFICATE_PATH
               privateKeyPath: KEY_PATH
         ```

1. Run the `nodeadm init` command with your `nodeConfig.yaml` to connect your hybrid nodes to your Amazon EKS cluster.

   ```
   nodeadm init -c file://nodeConfig.yaml
   ```

If the above command completes successfully, your hybrid node has joined your Amazon EKS cluster. You can verify this in the Amazon EKS console by navigating to the Compute tab for your cluster ([ensure IAM principal has permissions to view](view-kubernetes-resources.md#view-kubernetes-resources-permissions)) or with `kubectl get nodes`.
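Taken together, the SSM variant of the steps above can be scripted; a minimal sketch with hypothetical placeholder values (the cluster name, Region, and activation pair below are examples, not real values):

```shell
# Hypothetical example values; substitute your own cluster and activation.
CLUSTER_NAME=my-cluster
AWS_REGION=us-west-2
ACTIVATION_CODE=EXAMPLE-CODE
ACTIVATION_ID=EXAMPLE-ID

# Render the SSM-variant nodeConfig.yaml shown above.
cat > nodeConfig.yaml <<EOF
apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
  cluster:
    name: ${CLUSTER_NAME}
    region: ${AWS_REGION}
  hybrid:
    ssm:
      activationCode: ${ACTIVATION_CODE}
      activationId: ${ACTIVATION_ID}
EOF

# Then, as a user with root/sudo access on the host:
#   nodeadm init -c file://nodeConfig.yaml
```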

**Important**  
Your nodes will have status `Not Ready`, which is expected and is due to the lack of a CNI running on your hybrid nodes. If your nodes did not join the cluster, see [Troubleshooting hybrid nodes](hybrid-nodes-troubleshooting.md).

## Step 4: Configure a CNI for hybrid nodes


To make your hybrid nodes ready to run applications, continue with the steps on [Configure CNI for hybrid nodes](hybrid-nodes-cni.md).

# Connect hybrid nodes with Bottlerocket

This topic describes how to connect hybrid nodes running Bottlerocket to an Amazon EKS cluster. [Bottlerocket](https://aws.amazon.com/bottlerocket/) is an open source Linux distribution that is sponsored and supported by AWS. Bottlerocket is purpose-built for hosting container workloads. With Bottlerocket, you can improve the availability of containerized deployments and reduce operational costs by automating updates to your container infrastructure. Bottlerocket includes only the essential software to run containers, which improves resource usage, reduces security threats, and lowers management overhead.

Only VMware variants of Bottlerocket version v1.37.0 and above are supported with EKS Hybrid Nodes. VMware variants of Bottlerocket are available for Kubernetes versions v1.28 and above. The OS images for these variants include the kubelet, containerd, aws-iam-authenticator and other software prerequisites for EKS Hybrid Nodes. You can configure these components using a Bottlerocket [settings](https://github.com/bottlerocket-os/bottlerocket#settings) file that includes base64 encoded user-data for the Bottlerocket bootstrap and admin containers. Configuring these settings enables Bottlerocket to use your hybrid nodes credentials provider to authenticate hybrid nodes to your cluster. After your hybrid nodes join the cluster, they will appear with status `Not Ready` in the Amazon EKS console and in Kubernetes-compatible tooling such as `kubectl`. After completing the steps on this page, proceed to [Configure CNI for hybrid nodes](hybrid-nodes-cni.md) to make your hybrid nodes ready to run applications.

## Prerequisites


Before connecting hybrid nodes to your Amazon EKS cluster, make sure you have completed the prerequisite steps.
+ You have network connectivity from your on-premises environment to the AWS Region hosting your Amazon EKS cluster. See [Prepare networking for hybrid nodes](hybrid-nodes-networking.md) for more information.
+ You have created your Hybrid Nodes IAM role and set up your on-premises credential provider (AWS Systems Manager hybrid activations or AWS IAM Roles Anywhere). See [Prepare credentials for hybrid nodes](hybrid-nodes-creds.md) for more information.
+ You have created your hybrid nodes-enabled Amazon EKS cluster. See [Create an Amazon EKS cluster with hybrid nodes](hybrid-nodes-cluster-create.md) for more information.
+ You have associated your Hybrid Nodes IAM role with Kubernetes Role-Based Access Control (RBAC) permissions. See [Prepare cluster access for hybrid nodes](hybrid-nodes-cluster-prep.md) for more information.

## Step 1: Create the Bottlerocket settings TOML file


To configure Bottlerocket for hybrid nodes, you need to create a `settings.toml` file with the necessary configuration. The contents of the TOML file will differ based on the credential provider you are using (SSM or IAM Roles Anywhere). This file will be passed as user data when provisioning the Bottlerocket instance.

**Note**  
The TOML files provided below represent only the minimum required settings for initializing a Bottlerocket VMware machine as a node on an EKS cluster. Bottlerocket provides a wide range of settings to address several different use cases. For configuration options beyond hybrid node initialization, refer to the [Bottlerocket documentation](https://bottlerocket.dev/en) for the comprehensive list of documented settings for the Bottlerocket version you are using (for example, [here](https://bottlerocket.dev/en/os/1.51.x/api/settings-index) are all the settings available for Bottlerocket 1.51.x).

### SSM


If you are using AWS Systems Manager as your credential provider, create a `settings.toml` file with the following content:

```
[settings.kubernetes]
cluster-name = "<cluster-name>"
api-server = "<api-server-endpoint>"
cluster-certificate = "<cluster-certificate-authority>"
hostname-override = "<hostname>"
provider-id = "eks-hybrid:///<region>/<cluster-name>/<hostname>"
authentication-mode = "aws"
cloud-provider = ""
server-tls-bootstrap = true

[settings.network]
hostname = "<hostname>"

[settings.aws]
region = "<region>"

[settings.kubernetes.credential-providers.ecr-credential-provider]
enabled = true
cache-duration = "12h"
image-patterns = [
    "*.dkr.ecr.*.amazonaws.com",
    "*.dkr.ecr.*.amazonaws.com.rproxy.govskope.us.cn",
    "*.dkr.ecr.*.amazonaws.eu",
    "*.dkr.ecr-fips.*.amazonaws.com",
    "*.dkr.ecr-fips.*.amazonaws.eu",
    "public.ecr.aws"
]

[settings.kubernetes.node-labels]
"eks.amazonaws.com/compute-type" = "hybrid"
"eks.amazonaws.com/hybrid-credential-provider" = "ssm"

[settings.host-containers.admin]
enabled = true
user-data = "<base64-encoded-admin-container-userdata>"

[settings.bootstrap-containers.eks-hybrid-setup]
mode = "always"
user-data = "<base64-encoded-bootstrap-container-userdata>"

[settings.host-containers.control]
enabled = true
```

Replace the placeholders with the following values:
+  `<cluster-name>`: The name of your Amazon EKS cluster.
+  `<api-server-endpoint>`: The API server endpoint of your cluster.
+  `<cluster-certificate-authority>`: The base64-encoded CA bundle of your cluster.
+  `<region>`: The AWS Region hosting your cluster, for example "us-east-1".
+  `<hostname>`: The hostname of the Bottlerocket instance, which will also be configured as the node name. This can be any unique value of your choice, but must follow the [Kubernetes Object naming conventions](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names). In addition, the hostname you use cannot be longer than 64 characters. NOTE: When using SSM provider, this hostname and node name will be replaced by the managed instance ID (for example, `mi-*` ID) after the instance has been registered with SSM.
+  `<base64-encoded-admin-container-userdata>`: The base64-encoded contents of the Bottlerocket admin container configuration. Enabling the admin container allows you to connect to your Bottlerocket instance with SSH for system exploration and debugging. While this is not a required setting, we recommend enabling it for ease of troubleshooting. Refer to the [Bottlerocket admin container documentation](https://github.com/bottlerocket-os/bottlerocket-admin-container#authenticating-with-the-admin-container) for more information on authenticating with the admin container. The admin container takes SSH user and key input in JSON format, for example,

```
{
  "user": "<ssh-user>",
  "ssh": {
    "authorized-keys": [
      "<ssh-authorized-key>"
    ]
  }
}
```
+  `<base64-encoded-bootstrap-container-userdata>`: The base64-encoded contents of the Bottlerocket bootstrap container configuration. Refer to the [Bottlerocket bootstrap container documentation](https://github.com/bottlerocket-os/bottlerocket-bootstrap-container) for more information on its configuration. The bootstrap container is responsible for registering the instance as an AWS SSM Managed Instance and joining it as a Kubernetes node on your Amazon EKS Cluster. The user data passed into the bootstrap container takes the form of a command invocation which accepts as input the SSM hybrid activation code and ID you previously created:

```
eks-hybrid-ssm-setup --activation-id=<activation-id> --activation-code=<activation-code> --region=<region>
```
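If you assemble `settings.toml` with a script, the two base64 values above can be produced as follows; a sketch with hypothetical SSH user, key, and activation values (assumes GNU `base64 -w0`):

```shell
# Encode the admin-container JSON (hypothetical user and key).
cat > admin-userdata.json <<'EOF'
{
  "user": "bottlerocket",
  "ssh": {
    "authorized-keys": ["ssh-ed25519 AAAAC3Example bottlerocket@example"]
  }
}
EOF
base64 -w0 admin-userdata.json > admin-userdata.b64

# Encode the bootstrap-container command (placeholder activation values).
printf '%s' 'eks-hybrid-ssm-setup --activation-id=EXAMPLE-ID --activation-code=EXAMPLE-CODE --region=us-west-2' \
  | base64 -w0 > bootstrap-userdata.b64
```

Paste the contents of `admin-userdata.b64` and `bootstrap-userdata.b64` into the corresponding `user-data` fields of `settings.toml`.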

### IAM Roles Anywhere


If you are using AWS IAM Roles Anywhere as your credential provider, create a `settings.toml` file with the following content:

```
[settings.kubernetes]
cluster-name = "<cluster-name>"
api-server = "<api-server-endpoint>"
cluster-certificate = "<cluster-certificate-authority>"
hostname-override = "<hostname>"
provider-id = "eks-hybrid:///<region>/<cluster-name>/<hostname>"
authentication-mode = "aws"
cloud-provider = ""
server-tls-bootstrap = true

[settings.network]
hostname = "<hostname>"

[settings.aws]
region = "<region>"
config = "<base64-encoded-aws-config-file>"

[settings.kubernetes.credential-providers.ecr-credential-provider]
enabled = true
cache-duration = "12h"
image-patterns = [
    "*.dkr.ecr.*.amazonaws.com",
    "*.dkr.ecr.*.amazonaws.com.rproxy.govskope.us.cn",
    "*.dkr.ecr.*.amazonaws.eu",
    "*.dkr.ecr-fips.*.amazonaws.com",
    "*.dkr.ecr-fips.*.amazonaws.eu",
    "public.ecr.aws"
]

[settings.kubernetes.node-labels]
"eks.amazonaws.com/compute-type" = "hybrid"
"eks.amazonaws.com/hybrid-credential-provider" = "iam-ra"

[settings.host-containers.admin]
enabled = true
user-data = "<base64-encoded-admin-container-userdata>"

[settings.bootstrap-containers.eks-hybrid-setup]
mode = "always"
user-data = "<base64-encoded-bootstrap-container-userdata>"
```

Replace the placeholders with the following values:
+  `<cluster-name>`: The name of your Amazon EKS cluster.
+  `<api-server-endpoint>`: The API server endpoint of your cluster.
+  `<cluster-certificate-authority>`: The base64-encoded CA bundle of your cluster.
+  `<region>`: The AWS Region hosting your cluster, for example "us-east-1".
+  `<hostname>`: The hostname of the Bottlerocket instance, which will also be configured as the node name. This can be any unique value of your choice, but must follow the [Kubernetes Object naming conventions](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names). In addition, the hostname you use cannot be longer than 64 characters. NOTE: When using IAM-RA provider, the node name must match the CN of the certificate on the host if you configured the trust policy of your Hybrid Nodes IAM role with the `"sts:RoleSessionName": "${aws:PrincipalTag/x509Subject/CN}"` resource condition.
+  `<base64-encoded-aws-config-file>`: The base64-encoded contents of your AWS config file. The contents of the file should be as follows:

```
[default]
credential_process = aws_signing_helper credential-process --certificate /root/.aws/node.crt --private-key /root/.aws/node.key --profile-arn <profile-arn> --role-arn <role-arn> --trust-anchor-arn <trust-anchor-arn> --role-session-name <role-session-name>
```
+  `<base64-encoded-admin-container-userdata>`: The base64-encoded contents of the Bottlerocket admin container configuration. Enabling the admin container allows you to connect to your Bottlerocket instance with SSH for system exploration and debugging. While this is not a required setting, we recommend enabling it for ease of troubleshooting. Refer to the [Bottlerocket admin container documentation](https://github.com/bottlerocket-os/bottlerocket-admin-container#authenticating-with-the-admin-container) for more information on authenticating with the admin container. The admin container takes SSH user and key input in JSON format, for example,

```
{
  "user": "<ssh-user>",
  "ssh": {
    "authorized-keys": [
      "<ssh-authorized-key>"
    ]
  }
}
```
+  `<base64-encoded-bootstrap-container-userdata>`: The base64-encoded contents of the Bottlerocket bootstrap container configuration. Refer to the [Bottlerocket bootstrap container documentation](https://github.com/bottlerocket-os/bottlerocket-bootstrap-container) for more information on its configuration. The bootstrap container is responsible for creating the IAM Roles Anywhere host certificate and certificate private key files on the instance. These will then be consumed by the `aws_signing_helper` to obtain temporary credentials for authenticating with your Amazon EKS cluster. The user data passed into the bootstrap container takes the form of a command invocation which accepts as input the contents of the certificate and private key you previously created:

```
eks-hybrid-iam-ra-setup --certificate=<certificate> --key=<private-key>
```
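As with the SSM variant, the `settings.aws` `config` value can be generated and encoded with a script; a sketch using placeholder ARNs and session name (assumes GNU `base64 -w0`):

```shell
# Build the AWS config consumed by aws_signing_helper (placeholder ARNs
# and session name; substitute your own values).
cat > aws-config <<'EOF'
[default]
credential_process = aws_signing_helper credential-process --certificate /root/.aws/node.crt --private-key /root/.aws/node.key --profile-arn PROFILE_ARN --role-arn ROLE_ARN --trust-anchor-arn TRUST_ANCHOR_ARN --role-session-name my-node-01
EOF

# Encode it for the settings.aws config field.
base64 -w0 aws-config > aws-config.b64
```

Paste the contents of `aws-config.b64` into the `config` field under `[settings.aws]`.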

## Step 2: Provision the Bottlerocket vSphere VM with user data


Once you have constructed the TOML file, pass it as user data during vSphere VM creation. The user data must be configured before the VM is powered on for the first time: supply it when creating the VM, or, if you create the VM ahead of time, keep it in the `poweredOff` state until you have configured its user data. For example, if using the `govc` CLI:

### Creating VM for the first time


```
govc vm.create \
  -on=true \
  -c=2 \
  -m=4096 \
  -net.adapter=<network-adapter> \
  -net=<network-name> \
  -e guestinfo.userdata.encoding="base64" \
  -e guestinfo.userdata="$(base64 -w0 settings.toml)" \
  -template=<template-name> \
  <vm-name>
```

### Updating user data for an existing VM


```
govc vm.create \
    -on=false \
    -c=2 \
    -m=4096 \
    -net.adapter=<network-adapter> \
    -net=<network-name> \
    -template=<template-name> \
    <vm-name>

govc vm.change \
    -vm <vm-name> \
    -e guestinfo.userdata="$(base64 -w0 settings.toml)" \
    -e guestinfo.userdata.encoding="base64"

govc vm.power -on <vm-name>
```

In the above sections, the `-e guestinfo.userdata.encoding="base64"` option specifies that the user data is base64-encoded. The `-e guestinfo.userdata` option passes the base64-encoded contents of the `settings.toml` file as user data to the Bottlerocket instance. Replace the placeholders with your specific values, such as the Bottlerocket OVA template and networking details.

## Step 3: Verify the hybrid node connection


After the Bottlerocket instance starts, it will attempt to join your Amazon EKS cluster. You can verify the connection in the Amazon EKS console by navigating to the Compute tab for your cluster or by running the following command:

```
kubectl get nodes
```

**Important**  
Your nodes will have status `Not Ready`, which is expected and is due to the lack of a CNI running on your hybrid nodes. If your nodes did not join the cluster, see [Troubleshooting hybrid nodes](hybrid-nodes-troubleshooting.md).

## Step 4: Configure a CNI for hybrid nodes


To make your hybrid nodes ready to run applications, continue with the steps on [Configure CNI for hybrid nodes](hybrid-nodes-cni.md).

# Upgrade hybrid nodes for your cluster

The guidance for upgrading hybrid nodes is similar to that for self-managed Amazon EKS nodes that run in Amazon EC2. We recommend that you create new hybrid nodes on your target Kubernetes version, gracefully migrate your existing applications to the hybrid nodes on the new Kubernetes version, and remove the hybrid nodes on the old Kubernetes version from your cluster. Be sure to review the [Amazon EKS Best Practices](https://docs.aws.amazon.com/eks/latest/best-practices/cluster-upgrades.html) for upgrades before initiating an upgrade. Amazon EKS Hybrid Nodes have the same [Kubernetes version support](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html) as Amazon EKS clusters with cloud nodes, including standard and extended support.

Amazon EKS Hybrid Nodes follow the same [version skew policy](https://kubernetes.io/releases/version-skew-policy/#supported-version-skew) for nodes as upstream Kubernetes. Amazon EKS Hybrid Nodes cannot be on a newer version than the Amazon EKS control plane, and hybrid nodes may be up to three Kubernetes minor versions older than the Amazon EKS control plane minor version. For example, a control plane on Kubernetes `1.31` supports hybrid nodes on versions `1.28` through `1.31`.

If you do not have spare capacity to create new hybrid nodes on your target Kubernetes version for a cutover migration upgrade strategy, you can alternatively use the Amazon EKS Hybrid Nodes CLI (`nodeadm`) to upgrade the Kubernetes version of your hybrid nodes in-place.

**Important**  
If you are upgrading your hybrid nodes in-place with `nodeadm`, the node experiences downtime while the older Kubernetes components are shut down and the new Kubernetes version components are installed and started.

## Prerequisites


Before upgrading, make sure you have completed the following prerequisites.
+ The target Kubernetes version for your hybrid nodes upgrade must be equal to or less than the Amazon EKS control plane version.
+ If you are following a cutover migration upgrade strategy, the new hybrid nodes you are installing on your target Kubernetes version must meet the [Prerequisite setup for hybrid nodes](hybrid-nodes-prereqs.md) requirements. This includes having IP addresses within the Remote Node Network CIDR you passed during Amazon EKS cluster creation.
+ For both cutover migration and in-place upgrades, the hybrid nodes must have access to the [required domains](hybrid-nodes-networking.md#hybrid-nodes-networking-on-prem) to pull the new versions of the hybrid nodes dependencies.
+ You must have kubectl installed on your local machine or instance you are using to interact with your Amazon EKS Kubernetes API endpoint.
+ The version of your CNI must support the Kubernetes version you are upgrading to. If it does not, upgrade your CNI version before upgrading your hybrid nodes. See [Configure CNI for hybrid nodes](hybrid-nodes-cni.md) for more information.

## Cutover migration (blue-green) upgrades


 *Cutover migration upgrades* refer to the process of creating new hybrid nodes on new hosts with your target Kubernetes version, gracefully migrating your existing applications to the new hybrid nodes on your target Kubernetes version, and removing the hybrid nodes on the old Kubernetes version from your cluster. This strategy is also called a blue-green migration.

1. Connect your new hosts as hybrid nodes following the [Connect hybrid nodes](hybrid-nodes-join.md) steps. When running the `nodeadm install` command, use your target Kubernetes version.

1. Enable communication between the new hybrid nodes on the target Kubernetes version and your hybrid nodes on the old Kubernetes version. This configuration allows pods to communicate with each other while you are migrating your workload to the hybrid nodes on the target Kubernetes version.

1. Confirm your hybrid nodes on your target Kubernetes version successfully joined your cluster and have the `Ready` status.

1. Use the following command to mark each of the nodes that you want to remove as unschedulable. This is so that new pods aren’t scheduled or rescheduled on the nodes that you are replacing. For more information, see [kubectl cordon](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_cordon/) in the Kubernetes documentation. Replace `NODE_NAME` with the name of the hybrid nodes on the old Kubernetes version.

   ```
   kubectl cordon NODE_NAME
   ```

   You can identify and cordon all of the nodes of a particular Kubernetes version (in this case, `1.28`) with the following code snippet.

   ```
   K8S_VERSION=1.28
   for node in $(kubectl get nodes -o json | jq --arg K8S_VERSION "$K8S_VERSION" -r '.items[] | select(.status.nodeInfo.kubeletVersion | match("\($K8S_VERSION)")).metadata.name')
   do
       echo "Cordoning $node"
       kubectl cordon $node
   done
   ```

1. If your current deployment is running fewer than two CoreDNS replicas on your hybrid nodes, scale out the deployment to at least two replicas. We recommend that you run at least two CoreDNS replicas on hybrid nodes for resiliency during normal operations.

   ```
   kubectl scale deployments/coredns --replicas=2 -n kube-system
   ```

1. Drain each of the hybrid nodes on the old Kubernetes version that you want to remove from your cluster with the following command. For more information on draining nodes, see [Safely Drain a Node](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) in the Kubernetes documentation. Replace `NODE_NAME` with the name of the hybrid nodes on the old Kubernetes version.

   ```
   kubectl drain NODE_NAME --ignore-daemonsets --delete-emptydir-data
   ```

   You can identify and drain all of the nodes of a particular Kubernetes version (in this case, `1.28`) with the following code snippet.

   ```
   K8S_VERSION=1.28
   for node in $(kubectl get nodes -o json | jq --arg K8S_VERSION "$K8S_VERSION" -r '.items[] | select(.status.nodeInfo.kubeletVersion | match("\($K8S_VERSION)")).metadata.name')
   do
       echo "Draining $node"
       kubectl drain $node --ignore-daemonsets --delete-emptydir-data
   done
   ```

1. You can use `nodeadm` to stop and remove the hybrid nodes artifacts from the host. You must run `nodeadm` with a user that has root/sudo privileges. By default, `nodeadm uninstall` will not proceed if there are pods remaining on the node. For more information, see [Hybrid nodes `nodeadm` reference](hybrid-nodes-nodeadm.md).

   ```
   nodeadm uninstall
   ```

1. With the hybrid nodes artifacts stopped and uninstalled, remove the node resource from your cluster. Replace `NODE_NAME` with the name of the node.

   ```
   kubectl delete node NODE_NAME
   ```

   You can identify and delete all of the nodes of a particular Kubernetes version (in this case, `1.28`) with the following code snippet.

   ```
   K8S_VERSION=1.28
   for node in $(kubectl get nodes -o json | jq --arg K8S_VERSION "$K8S_VERSION" -r '.items[] | select(.status.nodeInfo.kubeletVersion | match("\($K8S_VERSION)")).metadata.name')
   do
       echo "Deleting $node"
       kubectl delete node $node
   done
   ```

1. Depending on your choice of CNI, there may be artifacts remaining on your hybrid nodes after running the above steps. See [Configure CNI for hybrid nodes](hybrid-nodes-cni.md) for more information.
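The per-version loops above can be combined into a single pass; a sketch (requires `kubectl` and `jq`; the `replace_old_nodes` helper name is illustrative, not part of any EKS tooling):

```shell
# Select every node still on the old Kubernetes version (example: 1.28)
# and cordon, drain, and delete it in one pass.
K8S_VERSION=1.28
OLD_NODES_FILTER='.items[] | select(.status.nodeInfo.kubeletVersion | match("\($V)")).metadata.name'

replace_old_nodes() {
  for node in $(kubectl get nodes -o json | jq --arg V "$K8S_VERSION" -r "$OLD_NODES_FILTER"); do
    echo "Replacing $node"
    kubectl cordon "$node"
    kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
    # Run `nodeadm uninstall` on the host itself before this delete.
    kubectl delete node "$node"
  done
}
```

Call `replace_old_nodes` only after confirming the replacement nodes on the target version are `Ready`, and run `nodeadm uninstall` on each host before the final `kubectl delete node`.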

## In-place upgrades


The in-place upgrade process refers to using `nodeadm upgrade` to upgrade the Kubernetes version of hybrid nodes without using new physical or virtual hosts and a cutover migration strategy. The `nodeadm upgrade` process shuts down the existing older Kubernetes components running on the hybrid node, uninstalls them, installs the new target Kubernetes components, and starts them. We strongly recommend upgrading one node at a time to minimize impact to applications running on the hybrid nodes. The duration of this process depends on your network bandwidth and latency.

1. Use the following command to mark the node you are upgrading as unschedulable, so that new pods aren’t scheduled or rescheduled on it during the upgrade. For more information, see [kubectl cordon](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_cordon/) in the Kubernetes documentation. Replace `NODE_NAME` with the name of the hybrid node you are upgrading.

   ```
   kubectl cordon NODE_NAME
   ```

1. Drain the node you are upgrading with the following command. For more information on draining nodes, see [Safely Drain a Node](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) in the Kubernetes documentation. Replace `NODE_NAME` with the name of the hybrid node you are upgrading.

   ```
   kubectl drain NODE_NAME --ignore-daemonsets --delete-emptydir-data
   ```

1. Run `nodeadm upgrade` on the hybrid node you are upgrading. You must run `nodeadm` with a user that has root/sudo privileges. The name of the node is preserved through the upgrade for both the AWS SSM and AWS IAM Roles Anywhere credential providers. You cannot change credential providers during the upgrade process. See [Hybrid nodes `nodeadm` reference](hybrid-nodes-nodeadm.md) for configuration values for `nodeConfig.yaml`. Replace `K8S_VERSION` with the target Kubernetes version you are upgrading to.

   ```
   nodeadm upgrade K8S_VERSION -c file://nodeConfig.yaml
   ```
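
   The `nodeConfig.yaml` passed with `-c` is the same configuration file you used when installing the node. As a reference, a minimal sketch for a node that uses AWS SSM hybrid activations might look like the following; the cluster name, Region, and activation values are placeholders, not real values:

   ```
   apiVersion: node.eks.aws/v1alpha1
   kind: NodeConfig
   spec:
     cluster:
       name: my-cluster        # placeholder cluster name
       region: us-west-2       # placeholder AWS Region
     hybrid:
       ssm:
         activationCode: EXAMPLE-CODE    # placeholder SSM activation code
         activationId: EXAMPLE-ID        # placeholder SSM activation ID
   ```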

1. To allow pods to be scheduled on the node after you have upgraded, type the following. Replace `NODE_NAME` with the name of the node.

   ```
   kubectl uncordon NODE_NAME
   ```

1. Watch the status of your hybrid nodes and wait for your nodes to shut down and restart on the new Kubernetes version with the Ready status.

   ```
   kubectl get nodes -o wide -w
   ```

# Patch security updates for hybrid nodes
Patch hybrid nodes

This topic describes the procedure to perform in-place patching of security updates for specific packages and dependencies running on your hybrid nodes. As a best practice, we recommend that you regularly update your hybrid nodes to receive CVE fixes and security patches.

For steps to upgrade the Kubernetes version, see [Upgrade hybrid nodes for your cluster](hybrid-nodes-upgrade.md).

One example of software that might need security patching is `containerd`.

## `containerd`


`containerd` is the standard Kubernetes container runtime and a core dependency for EKS Hybrid Nodes. It manages the container lifecycle, including pulling images and running containers. On a hybrid node, you can install `containerd` through the [nodeadm CLI](https://docs.aws.amazon.com/eks/latest/userguide/hybrid-nodes-nodeadm.html) or manually. Depending on the operating system of your node, `nodeadm` installs `containerd` from the OS-distributed package or the Docker package.

When a CVE in `containerd` is published, you have the following options to upgrade to the patched version of `containerd` on your hybrid nodes.

## Step 1: Check if the patch is published to package managers


You can check whether the `containerd` CVE patch has been published to each respective OS package manager by referring to the corresponding security bulletins:
+ [Amazon Linux 2023](https://alas.aws.amazon.com/alas2023.html)
+  [RHEL](https://access.redhat.com/security/security-updates/security-advisories) 
+  [Ubuntu 20.04](https://ubuntu.com/security/notices?order=newest&release=focal) 
+  [Ubuntu 22.04](https://ubuntu.com/security/notices?order=newest&release=jammy) 
+  [Ubuntu 24.04](https://ubuntu.com/security/notices?order=newest&release=noble) 

If you use the Docker repo as the source of `containerd`, you can check the [Docker security announcements](https://docs.docker.com/security/security-announcements/) to identify the availability of the patched version in the Docker repo.

## Step 2: Choose the method to install the patch


There are three methods to install security patches in-place on your nodes. The method you can use depends on whether the patch is available in the operating system’s package manager:

1. Install patches that are published to package managers with `nodeadm upgrade`. See [Step 2 a](#hybrid-nodes-security-nodeadm).

1. Install patches with the package managers directly. See [Step 2 b](#hybrid-nodes-security-package).

1. Install custom patches that aren’t published in package managers. Note that there are special considerations for custom `containerd` patches. See [Step 2 c](#hybrid-nodes-security-manual).
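
Whichever method you choose, it helps to first compare the `containerd` version you are running against the patched version from the security bulletin. The following sketch uses GNU `sort -V` for version ordering; the version strings are hypothetical examples, not taken from a real advisory:

```
INSTALLED="1.7.12-1"   # hypothetical version currently installed on the node
PATCHED="1.7.27-1"     # hypothetical patched version from the security bulletin

# sort -V orders version strings; if the installed version sorts first
# and differs from the patched version, an upgrade is available.
if [ "$(printf '%s\n' "$INSTALLED" "$PATCHED" | sort -V | head -n1)" = "$INSTALLED" ] \
   && [ "$INSTALLED" != "$PATCHED" ]; then
  echo "patch needed"
else
  echo "already up to date"
fi
```

On Debian and Ubuntu hosts, `dpkg --compare-versions "$INSTALLED" lt "$PATCHED"` performs the same comparison with full awareness of Debian package version semantics.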

## Step 2 a: Patching with `nodeadm upgrade`


After you confirm that the `containerd` CVE patch has been published to the OS or Docker repositories (APT or RPM), you can use the `nodeadm upgrade` command to upgrade to the latest version of `containerd`. Because this isn’t a Kubernetes version upgrade, you must pass your current Kubernetes version to the `nodeadm upgrade` command.

```
nodeadm upgrade K8S_VERSION --config-source file:///root/nodeConfig.yaml
```

## Step 2 b: Patching with operating system package managers


Alternatively, you can use the respective operating system package manager to upgrade the `containerd` package as follows.

 **Amazon Linux 2023** 

```
sudo yum update -y
sudo yum install -y containerd
```

 **RHEL** 

```
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/rhel/docker-ce.repo
sudo yum update -y
sudo yum install -y containerd.io
```

 **Ubuntu** 

```
sudo mkdir -p /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update -y
sudo apt install -y --only-upgrade containerd.io
```

## Step 2 c: `containerd` CVE patch not published in package managers


If the patched `containerd` version isn’t available in the package manager and is only distributed by other means, such as GitHub releases, you can install `containerd` from the official GitHub releases.

1. If the machine has already joined the cluster as a hybrid node, you must first run the `nodeadm uninstall` command.

1. Install the official `containerd` binaries by following the [official installation steps](https://github.com/containerd/containerd/blob/main/docs/getting-started.md#option-1-from-the-official-binaries) on GitHub.

1. Run the `nodeadm install` command with the `--containerd-source` argument set to `none`, which skips `containerd` installation through `nodeadm`. You can use the value `none` for the `containerd` source on any operating system that the node is running.

   ```
   nodeadm install K8S_VERSION --credential-provider CREDS_PROVIDER --containerd-source none
   ```

# Remove hybrid nodes
Delete hybrid nodes

This topic describes how to delete hybrid nodes from your Amazon EKS cluster. You must delete your hybrid nodes with your choice of Kubernetes-compatible tooling, such as [kubectl](https://kubernetes.io/docs/reference/kubectl/). Charges for hybrid nodes stop when the node object is removed from the Amazon EKS cluster. For more information on hybrid nodes pricing, see [Amazon EKS Pricing](https://aws.amazon.com/eks/pricing/).

**Important**  
Removing nodes is disruptive to workloads running on the node. Before deleting hybrid nodes, we recommend that you first drain the node to move pods to another active node. For more information on draining nodes, see [Safely Drain a Node](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) in the Kubernetes documentation.

Run the kubectl steps below from your local machine or from an instance that you use to interact with your Amazon EKS cluster’s Kubernetes API endpoint. If you are using a specific `kubeconfig` file, use the `--kubeconfig` flag.

## Step 1: List your nodes


```
kubectl get nodes
```

## Step 2: Drain your node


See [kubectl drain](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_drain/) in the Kubernetes documentation for more information on the `kubectl drain` command.

```
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
```

## Step 3: Stop and uninstall hybrid nodes artifacts


You can use the Amazon EKS Hybrid Nodes CLI (`nodeadm`) to stop and remove the hybrid nodes artifacts from the host. You must run `nodeadm` with a user that has root/sudo privileges. By default, `nodeadm uninstall` will not proceed if there are pods remaining on the node. If you are using AWS Systems Manager (SSM) as your credentials provider, the `nodeadm uninstall` command deregisters the host as an AWS SSM managed instance. For more information, see [Hybrid nodes `nodeadm` reference](hybrid-nodes-nodeadm.md).

```
nodeadm uninstall
```

## Step 4: Delete your node from the cluster


With the hybrid nodes artifacts stopped and uninstalled, remove the node resource from your cluster.

```
kubectl delete node <node-name>
```

## Step 5: Check for remaining artifacts


Depending on your choice of CNI, there may be artifacts remaining on your hybrid nodes after running the above steps. See [Configure CNI for hybrid nodes](hybrid-nodes-cni.md) for more information.