


# EKS Capabilities
<a name="capabilities"></a>

**Tip**  
Get started: [Create ACK capability](create-ack-capability.md) | [Create Argo CD capability](create-argocd-capability.md) | [Create kro capability](create-kro-capability.md)

Amazon EKS Capabilities is a layered set of fully managed cluster features that help accelerate developer velocity and offload the complexity of building and scaling with Kubernetes. EKS Capabilities are Kubernetes-native features for declarative continuous deployment, AWS resource management, and Kubernetes resource authoring and orchestration, all fully managed by AWS. With EKS Capabilities, you can focus more on building and scaling your workloads, offloading the operational burden of these foundational platform services to AWS. These capabilities run within EKS rather than in your clusters, eliminating the need to install, maintain, and scale critical platform components on your worker nodes.

To get started, you can create one or more EKS Capabilities on a new or existing EKS cluster. To do this, you can use the AWS CLI, the AWS Management Console, EKS APIs, eksctl, or your preferred infrastructure-as-code tools. While EKS Capabilities are designed to work together, they are independent cloud resources that you can pick and choose based on your use case and requirements.

EKS Capabilities are supported on all Kubernetes versions that Amazon EKS supports.

**Note**  
EKS Capabilities are available in all AWS commercial Regions where Amazon EKS is available. For a list of supported Regions, see [Amazon EKS endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/eks.html) in the AWS General Reference.

## Available Capabilities
<a name="_available_capabilities"></a>

### AWS Controllers for Kubernetes (ACK)
<a name="shared_aws_controllers_for_kubernetes_ack"></a>

ACK enables the management of AWS resources using Kubernetes APIs, allowing you to create and manage S3 buckets, RDS databases, IAM roles, and other AWS resources using Kubernetes custom resources. ACK continuously reconciles your desired state with the actual state in AWS, correcting any drift over time in order to keep your system healthy and your resources configured as specified. You can manage AWS resources alongside your Kubernetes workloads using the same tools and workflows, with support for more than 50 AWS services including S3, RDS, DynamoDB, and Lambda. ACK supports cross-account and cross-region resource management, enabling complex multi-account, multi-cluster system management architectures. ACK supports read-only resources and read-only adoption, facilitating migration from other infrastructure as code tools into your Kubernetes-based systems.

 [Learn more about ACK →](ack.md) 

### Argo CD
<a name="_argo_cd"></a>

Argo CD implements GitOps-based continuous deployment for your applications, using Git repositories as the source of truth for your workloads and system state. Argo CD automatically syncs application resources to your clusters from your Git repositories, detecting and remediating drift to ensure your deployed applications match your desired state. You can deploy and manage applications across multiple clusters from a single Argo CD instance, with automated deployment from Git repositories whenever changes are committed. Using Argo CD and ACK together provides a foundational GitOps system, simplifying workload dependency management as well as supporting whole-system designs including cluster and infrastructure management at scale. Argo CD integrates with AWS Identity Center for authentication and authorization, and provides a hosted Argo UI for visualizing application health and deployment status.

 [Learn more about Argo CD →](argocd.md) 

### kro (Kube Resource Orchestrator)
<a name="_kro_kube_resource_orchestrator"></a>

kro enables you to create custom Kubernetes APIs that compose multiple resources into higher-level abstractions, allowing platform teams to define reusable patterns, or cloud building blocks, for common resource combinations. With kro, you can compose both Kubernetes and AWS resources together into unified abstractions, using simple syntax to enable dynamic configurations and conditional logic. kro enables platform teams to provide self-service capabilities with appropriate guardrails, allowing developers to provision complex infrastructure using simple, purpose-built APIs while maintaining organizational standards and best practices. kro resources are standard Kubernetes resources, specified in Kubernetes manifests that can be stored in Git or pushed to OCI-compatible registries like Amazon ECR for broad organizational distribution.

 [Learn more about kro →](kro.md) 

## Benefits of EKS Capabilities
<a name="_benefits_of_eks_capabilities"></a>

EKS Capabilities are fully managed by AWS, eliminating the need for installation, maintenance, and scaling of foundational cluster services. AWS handles security patching, updates, and operational management, freeing your teams to focus on building with AWS rather than on cluster operations. Unlike traditional Kubernetes add-ons that consume cluster resources, capabilities run in EKS rather than on your worker nodes. This frees up cluster capacity and resources for your workloads while minimizing the operational burden of managing in-cluster controllers and other platform components.

With EKS Capabilities, you can manage deployments, AWS resources, custom Kubernetes resources, and compositions using native Kubernetes APIs and tools like `kubectl`. All capabilities operate in the context of your clusters, automatically detecting and correcting configuration drift in both application and cloud infrastructure resources. You can deploy and manage resources across multiple clusters, AWS accounts, and regions from a single control point, simplifying operations in complex, distributed environments.

EKS Capabilities are designed for GitOps workflows, providing declarative, version-controlled infrastructure and application management. Changes flow from Git through the system, providing audit trails, rollback capabilities, and collaboration workflows that integrate with your existing development practices. This Kubernetes-native approach means you don’t need to use multiple tools or manage infrastructure-as-code systems external to your clusters, and there is a single source of truth to refer to. Your desired state, defined in version-controlled Kubernetes declarative configuration, is continuously enforced across your environment.

## Pricing
<a name="_pricing"></a>

With EKS Capabilities, there are no upfront commitments or minimum fees. You are charged for each capability resource for each hour it is active on your Amazon EKS cluster. Specific Kubernetes resources managed by EKS Capabilities are also billed at an hourly rate.

For current pricing information, see the [Amazon EKS pricing page](https://aws.amazon.com/eks/pricing/).

**Tip**  
You can use AWS Cost Explorer and Cost and Usage Reports to track capability costs separately from other EKS charges. You can tag your capabilities with cluster name, capability type, and other details for cost allocation purposes.

## How EKS Capabilities Work
<a name="_how_eks_capabilities_work"></a>

Each capability is an AWS resource that you create on your EKS cluster. Once created, the capability runs in EKS and is fully managed by AWS.

**Note**  
You can create one capability resource of each type (Argo CD, ACK, and kro) for a given cluster. You cannot create multiple capability resources of the same type on the same cluster.

You interact with capabilities in your cluster using standard Kubernetes APIs and tools:
+ Use `kubectl` to apply Kubernetes custom resources
+ Use Git repositories as the source of truth for GitOps workflows

Some capabilities support additional tools. For example:
+ Use the Argo CD CLI to configure and manage repositories and clusters in your Argo CD capability
+ Use the Argo CD UI to visualize and manage applications managed by your Argo CD capability

Capabilities are designed to work together but are independent and fully opt-in. You can enable one, two, or all three capabilities based on your needs, and update your configuration as your requirements evolve.

All EKS Compute types are supported for use with EKS Capabilities. For more information, see [Manage compute resources by using nodes](eks-compute.md).

For security configuration and details on IAM roles, see [Security considerations for EKS Capabilities](capabilities-security.md). For multi-cluster architecture patterns, see [EKS Capabilities considerations](capabilities-considerations.md).

## Common Use Cases
<a name="_common_use_cases"></a>

 **GitOps for Applications and Infrastructure** 

Use Argo CD to deploy applications and operational components and ACK to manage cluster configurations and provision infrastructure, both from Git repositories. Your entire stack—applications, databases, storage, and networking—is defined as code and automatically deployed.

Example: A development team pushes changes to Git. Argo CD deploys the updated application, and ACK provisions a new RDS database with the correct configuration. All changes are auditable, reversible, and consistent across environments.

 **Platform Engineering with Self-Service** 

Use kro to create custom APIs that compose ACK and Kubernetes resources. Platform teams define approved patterns with guardrails. Application teams use simple, high-level APIs to provision complete stacks.

Example: A platform team creates a "WebApplication" API that provisions a Deployment, Service, Ingress, and S3 bucket. Developers use this API without needing to understand the underlying complexity or AWS permissions.

 **Multi-Cluster Application Management** 

Use Argo CD to deploy applications across multiple EKS clusters in different regions or accounts. Manage all deployments from a single Argo CD instance with consistent policies and workflows.

Example: Deploy the same application to development, staging, and production clusters across multiple regions. Argo CD ensures each environment stays in sync with its corresponding Git branch.

 **Multi-Cluster Management** 

Use ACK to define and provision EKS clusters, kro to customize cluster configurations with organizational standards, and Argo CD to manage cluster lifecycle and configuration. This provides end-to-end cluster management from creation through ongoing operations.

Example: Define EKS clusters with ACK, and use kro to encode organizational standards for networking, security policies, add-ons, and other configuration. Use Argo CD to create and continuously manage clusters, configuration, and Kubernetes version updates across your fleet, with consistent standards and automated lifecycle management.

 **Migrations and Modernization** 

Simplify migration to EKS with native cloud resource provisioning and GitOps workflows. Use ACK to adopt existing AWS resources without recreating them, and Argo CD to operationalize workload deployments from Git.

Example: A team migrating from EC2 to EKS adopts their existing RDS databases and S3 buckets using ACK, then uses Argo CD to deploy containerized applications from Git. The migration path is clear, and operations are standardized from day one.

 **Account and Regional Bootstrapping** 

Automate infrastructure rollout across accounts and regions using Argo CD and ACK together. Define your infrastructure as code in Git, and let capabilities handle the deployment and management.

Example: A platform team maintains Git repositories defining standard account configurations—VPCs, IAM roles, RDS instances, and monitoring stacks. Argo CD deploys these configurations to new accounts and regions automatically, ensuring consistency and reducing manual setup time from days to minutes.

# Working with capability resources
<a name="working-with-capabilities"></a>

This topic describes common operations for managing capability resources across all capability types.

## EKS capability resources
<a name="_eks_capability_resources"></a>

EKS capabilities are AWS resources that enable managed functionality on your Amazon EKS cluster. Capabilities run in EKS, eliminating the need to install and maintain controllers and other operational components on your worker nodes. Capabilities are created for a specific EKS cluster, and remain affiliated with that cluster for their entire lifecycle.

Each capability resource has:
+ A unique name within your cluster
+ A capability type (ACK, ARGOCD, or KRO)
+ An Amazon Resource Name (ARN), specifying both name and type
+ A capability IAM role
+ A status that indicates its current state
+ Configuration, both generic and specific to the capability type
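
The ARN embeds the cluster name, capability type, and capability name. As an illustration, the field layout below is inferred from the example output later in this topic (it is not a published specification) and can be split with standard shell tools:

```shell
# Split a capability ARN into its parts. Field layout inferred from example
# output: arn:aws:eks:<region>:<account>:capability/<cluster>/<type>/<name>/<id>
arn="arn:aws:eks:us-west-2:111122223333:capability/my-cluster/ack/my-ack/abc123"

resource="${arn##*:}"    # capability/my-cluster/ack/my-ack/abc123
IFS='/' read -r _ cluster type name id <<< "$resource"

echo "cluster=$cluster type=$type name=$name"
```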

## Understanding capability status
<a name="_understanding_capability_status"></a>

Capability resources have a status that indicates their current state. You can view capability status and health in the EKS console or using the AWS CLI.

 **Console**:

1. Open the Amazon EKS console at https://console.aws.amazon.com/eks/home#/clusters.

1. Select your cluster name.

1. Choose the **Capabilities** tab to view status for all capabilities.

1. For detailed health information, choose the **Observability** tab, then **Monitor cluster**, then the **Capabilities** tab.

 **AWS CLI**:

```
aws eks describe-capability \
  --region region-code \
  --cluster-name my-cluster \
  --capability-name my-capability-name
```

### Capability statuses
<a name="_capability_statuses"></a>

 **CREATING**: Capability is being set up. You can navigate away from the console—the capability will continue creating in the background.

 **ACTIVE**: Capability is running and ready to use. If resources aren’t working as expected, check resource status and IAM permissions. See [Troubleshooting EKS Capabilities](capabilities-troubleshooting.md) for guidance.

 **UPDATING**: Configuration changes are being applied. Wait for the status to return to `ACTIVE`.

 **DELETING**: Capability is being removed from the cluster.

 **CREATE_FAILED**: Setup encountered an error. Common causes include:
+ IAM role trust policy incorrect or missing
+ IAM role doesn’t exist or isn’t accessible
+ Cluster access issues
+ Invalid configuration parameters

Check the capability health section for specific error details.

 **UPDATE_FAILED**: Configuration update failed. Check the capability health section for details and verify IAM permissions.

**Tip**  
For detailed troubleshooting guidance, see:  
+ [Troubleshooting EKS Capabilities](capabilities-troubleshooting.md) – General capability troubleshooting
+ [Troubleshoot issues with ACK capabilities](ack-troubleshooting.md) – ACK-specific issues
+ [Troubleshoot issues with Argo CD capabilities](argocd-troubleshooting.md) – Argo CD-specific issues
+ [Troubleshoot issues with kro capabilities](kro-troubleshooting.md) – kro-specific issues

## Create capabilities
<a name="_create_capabilities"></a>

To create a capability on your cluster, see the following topics:
+  [Create an ACK capability](create-ack-capability.md) – Create an ACK capability to manage AWS resources using Kubernetes APIs
+  [Create an Argo CD capability](create-argocd-capability.md) – Create an Argo CD capability for GitOps continuous delivery
+  [Create a kro capability](create-kro-capability.md) – Create a kro capability for resource composition and orchestration

## List capabilities
<a name="_list_capabilities"></a>

You can list all capability resources on a cluster.

### Console
<a name="_console"></a>

1. Open the Amazon EKS console at https://console.aws.amazon.com/eks/home#/clusters.

1. Select your cluster name to open the cluster detail page.

1. Choose the **Capabilities** tab.

1. View capability resources under **Managed capabilities**.

### AWS CLI
<a name="shared_aws_cli"></a>

Use the `list-capabilities` command to view all capabilities on your cluster. Replace *region-code* with the AWS Region that your cluster is in and replace *my-cluster* with the name of your cluster.

```
aws eks list-capabilities \
  --region region-code \
  --cluster-name my-cluster
```

```
{
    "capabilities": [
        {
            "capabilityName": "my-ack",
            "arn": "arn:aws:eks:us-west-2:111122223333:capability/my-cluster/ack/my-ack/abc123",
            "type": "ACK",
            "status": "ACTIVE",
            "createdAt": "2025-11-02T10:30:00.000000-07:00",
            "modifiedAt": "2025-11-02T10:32:15.000000-07:00"
        },
        {
            "capabilityName": "my-kro",
            "arn": "arn:aws:eks:us-west-2:111122223333:capability/my-cluster/kro/my-kro/abc123",
            "type": "KRO",
            "status": "ACTIVE",
            "version": "v0.6.3",
            "createdAt": "2025-11-02T10:30:00.000000-07:00",
            "modifiedAt": "2025-11-02T10:32:15.000000-07:00"
        },
        {
            "capabilityName": "my-argocd",
            "arn": "arn:aws:eks:us-west-2:111122223333:capability/my-cluster/argocd/my-argocd/abc123",
            "type": "ARGOCD",
            "status": "ACTIVE",
            "version": "3.1.8-eks-1",
            "createdAt": "2025-11-21T08:22:28.486000-05:00",
            "modifiedAt": "2025-11-21T08:22:28.486000-05:00"
        }
    ]
}
```
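
When scripting, you can filter this output with a tool such as `jq`. In the sketch below, a heredoc stands in for the `list-capabilities` output shown above; in practice you would pipe the command's output into `jq` directly:

```shell
# Print "name type status" for each capability; the heredoc stands in for
# `aws eks list-capabilities ... --output json`.
jq -r '.capabilities[] | "\(.capabilityName) \(.type) \(.status)"' <<'EOF'
{
  "capabilities": [
    {"capabilityName": "my-ack", "type": "ACK", "status": "ACTIVE"},
    {"capabilityName": "my-argocd", "type": "ARGOCD", "status": "ACTIVE"}
  ]
}
EOF
```

Against a live cluster, the same filter applies to `aws eks list-capabilities --region region-code --cluster-name my-cluster --output json`.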

## Describe a capability
<a name="_describe_a_capability"></a>

Get detailed information about a specific capability, including its configuration and status.

### Console
<a name="_console_2"></a>

1. Open the Amazon EKS console at https://console.aws.amazon.com/eks/home#/clusters.

1. Select your cluster name to open the cluster detail page.

1. Choose the **Capabilities** tab.

1. Choose the capability you want to view from **Managed capabilities**.

1. View the capability details, including status, configuration, and creation time.

### AWS CLI
<a name="shared_aws_cli"></a>

Use the `describe-capability` command to view detailed information. Replace *region-code* with the AWS Region that your cluster is in, replace *my-cluster* with the name of your cluster, and replace *capability-name* with the name of the capability that you want to describe.

```
aws eks describe-capability \
  --region region-code \
  --cluster-name my-cluster \
  --capability-name capability-name
```

 **Example output:** 

```
{
  "capability": {
    "capabilityName": "my-ack",
    "capabilityArn": "arn:aws:eks:us-west-2:111122223333:capability/my-cluster/ack/my-ack/abc123",
    "clusterName": "my-cluster",
    "type": "ACK",
    "roleArn": "arn:aws:iam::111122223333:role/AmazonEKSCapabilityACKRole",
    "status": "ACTIVE",
    "configuration": {},
    "tags": {},
    "health": {
      "issues": []
    },
    "createdAt": "2025-11-19T17:11:30.242000-05:00",
    "modifiedAt": "2025-11-19T17:11:30.242000-05:00",
    "deletePropagationPolicy": "RETAIN"
  }
}
```
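
In automation, a common pattern is to poll `describe-capability` until the status is `ACTIVE` and the health issue list is empty. The sketch below checks those two fields with `jq`; a heredoc stands in for the command output shown above:

```shell
# Extract the status and the number of health issues; the heredoc stands in
# for `aws eks describe-capability ... --output json`.
read -r status issues <<< "$(jq -r '"\(.capability.status) \(.capability.health.issues | length)"' <<'EOF'
{"capability": {"capabilityName": "my-ack", "status": "ACTIVE", "health": {"issues": []}}}
EOF
)"

if [ "$status" = "ACTIVE" ] && [ "$issues" -eq 0 ]; then
  echo "capability is healthy"
fi
```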

## Update the configuration of a capability
<a name="_update_the_configuration_of_a_capability"></a>

You can update certain aspects of a capability’s configuration after creation. The specific configuration options vary by capability type.

**Note**  
EKS capability resources are fully managed, including patching and version updates. Updating a capability changes its resource configuration only; it does not update the versions of the managed capability components, which AWS handles automatically.

### AWS CLI
<a name="shared_aws_cli"></a>

Use the `update-capability` command to modify a capability:

```
aws eks update-capability \
  --region region-code \
  --cluster-name my-cluster \
  --capability-name capability-name \
  --role-arn arn:aws:iam::111122223333:role/NewCapabilityRole
```

**Note**  
Not all capability properties can be updated after creation. Refer to the capability-specific documentation for details on what can be modified.

## Delete a capability
<a name="_delete_a_capability"></a>

When you no longer need a capability on your cluster, you can delete the capability resource.

**Important**  
 **Delete cluster resources before deleting the capability.**   
Deleting a capability resource does not automatically delete resources created through that capability:  
+ All Kubernetes Custom Resource Definitions (CRDs) remain installed in your cluster
+ ACK resources remain in your cluster, and corresponding AWS resources remain in your account
+ Argo CD `Application` resources and their Kubernetes resources remain in your cluster
+ kro `ResourceGraphDefinition` resources and instances remain in your cluster

Delete these resources before deleting the capability to avoid orphaned resources. You can optionally retain the AWS resources associated with ACK Kubernetes resources. See [ACK considerations](ack-considerations.md).

### Console
<a name="_console_3"></a>

1. Open the Amazon EKS console at https://console.aws.amazon.com/eks/home#/clusters.

1. Select your cluster name to open the cluster detail page.

1. Choose the **Capabilities** tab.

1. Select the capability you want to delete from the list of **Managed capabilities**.

1. Choose **Delete capability**.

1. In the confirmation dialog, type the name of the capability to confirm deletion.

1. Choose **Delete**.

### AWS CLI
<a name="shared_aws_cli"></a>

Use the `delete-capability` command to delete a capability resource:

Replace *region-code* with the AWS Region that your cluster is in, replace *my-cluster* with the name of your cluster, and replace *capability-name* with the capability name to delete.

```
aws eks delete-capability \
  --region region-code \
  --cluster-name my-cluster \
  --capability-name capability-name
```

## Next steps
<a name="_next_steps"></a>
+  [Capability Kubernetes resources](capability-kubernetes-resources.md) – Learn about the Kubernetes resources provided by each capability type
+  [ACK concepts](ack-concepts.md) – Understand ACK concepts and resource lifecycle
+  [Working with Argo CD](working-with-argocd.md) – Working with Argo CD capabilities for GitOps workflows
+  [kro concepts](kro-concepts.md) – Understand kro concepts and resource composition

# Capability Kubernetes resources
<a name="capability-kubernetes-resources"></a>

After you enable a capability on your cluster, you will most often interact with it by creating and managing Kubernetes custom resources in your cluster. Each capability provides its own set of custom resource definitions (CRDs) that extend the Kubernetes API with capability-specific functionality.

## Argo CD resources
<a name="_argo_cd_resources"></a>

When you enable the Argo CD capability, you can create and manage the following Kubernetes resources:

 **Application**   
Defines a deployment from a Git repository to a target cluster. `Application` resources specify the source repository, target namespace, and sync policy. You can create up to 1000 `Application` resources per Argo CD capability instance.

 **ApplicationSet**   
Generates multiple `Application` resources from templates, enabling multi-cluster and multi-environment deployments. `ApplicationSet` resources use generators to create `Application` resources dynamically based on cluster lists, Git directories, or other sources.

 **AppProject**   
Provides logical grouping and access control for `Application` resources. `AppProject` resources define which repositories, clusters, and namespaces `Application` resources can use, enabling multi-tenancy and security boundaries.

Example `Application` resource:

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/org/repo
    targetRevision: main
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: production
```
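
An `ApplicationSet` can stamp out many such `Application` resources from a template. The sketch below uses the upstream Argo CD list generator; the cluster names and server URLs are placeholders, so verify the supported fields against the Argo CD documentation for your capability:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-app-set
  namespace: argocd
spec:
  generators:
    - list:
        elements:
          - cluster: staging
            url: https://staging-cluster.example.com       # placeholder API server
          - cluster: production
            url: https://production-cluster.example.com    # placeholder API server
  template:
    metadata:
      name: 'my-app-{{cluster}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/org/repo
        targetRevision: main
        path: manifests
      destination:
        server: '{{url}}'
        namespace: production
```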

For more information about Argo CD resources and concepts, see [Argo CD concepts](argocd-concepts.md).

## kro resources
<a name="_kro_resources"></a>

When you enable the kro capability, you can create and manage the following Kubernetes resources:

 **ResourceGraphDefinition (RGD)**   
Defines a custom API that composes multiple Kubernetes and AWS resources into a higher-level abstraction. Platform teams create `ResourceGraphDefinition` resources to provide reusable patterns with guardrails.

 **Custom resource instances**   
After creating a `ResourceGraphDefinition` resource, you can create instances of the custom API defined by the `ResourceGraphDefinition`. kro automatically creates and manages the resources specified in the `ResourceGraphDefinition`.

Example `ResourceGraphDefinition` resource:

```
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: web-application
spec:
  schema:
    apiVersion: v1alpha1
    kind: WebApplication
    spec:
      name: string
      replicas: integer
  resources:
    - id: deployment
      template:
        apiVersion: apps/v1
        kind: Deployment
        # ... deployment spec
    - id: service
      template:
        apiVersion: v1
        kind: Service
        # ... service spec
```

Example `WebApplication` instance:

```
apiVersion: v1alpha1
kind: WebApplication
metadata:
  name: my-web-app
  namespace: default
spec:
  name: my-web-app
  replicas: 3
```

When you apply this instance, kro automatically creates the `Deployment` and `Service` resources defined in the `ResourceGraphDefinition`.

For more information about kro resources and concepts, see [kro concepts](kro-concepts.md).

## ACK resources
<a name="_ack_resources"></a>

When you enable the ACK capability, you can create and manage AWS resources using Kubernetes custom resources. ACK provides over 200 CRDs for more than 50 AWS services, allowing you to define AWS resources alongside your Kubernetes workloads, and manage dedicated AWS infrastructure resources with Kubernetes.

Examples of ACK resources:

 **S3 Bucket**   
 `Bucket` resources create and manage Amazon S3 buckets with versioning, encryption, and lifecycle policies.

 **RDS DBInstance**   
 `DBInstance` resources provision and manage Amazon RDS database instances with automated backups and maintenance windows.

 **DynamoDB Table**   
 `Table` resources create and manage DynamoDB tables with provisioned or on-demand capacity.

 **IAM Role**   
 `Role` resources define IAM roles with trust policies and permission policies for AWS service access.

 **Lambda Function**   
 `Function` resources create and manage Lambda functions with code, runtime, and execution role configuration.

Example specification of a `Bucket` resource:

```
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: my-app-bucket
spec:
  name: my-unique-bucket-name-12345
  versioning:
    status: Enabled
  encryption:
    rules:
      - applyServerSideEncryptionByDefault:
          sseAlgorithm: AES256
```

For more information about ACK resources and concepts, see [ACK concepts](ack-concepts.md).

## Resource limits
<a name="_resource_limits"></a>

EKS Capabilities have the following resource limits:

 **Argo CD usage limits**:
+ Maximum 1000 `Application` resources per Argo CD capability instance
+ Maximum 100 remote clusters configured per Argo CD capability instance

 **Resource configuration limits**:
+ Maximum 150 Kubernetes resources per `Application` resource in Argo CD
+ Maximum 64 Kubernetes resources per `ResourceGraphDefinition` in kro

**Note**  
These limits apply to the number of resources managed by each capability instance. If you need higher limits, you can deploy capabilities across multiple clusters.

## Next steps
<a name="_next_steps"></a>

For capability-specific tasks and advanced configuration, see the following topics:
+  [ACK concepts](ack-concepts.md) – Understand ACK concepts and resource lifecycle
+  [Working with Argo CD](working-with-argocd.md) – Working with Argo CD capabilities for GitOps workflows
+  [kro concepts](kro-concepts.md) – Understand kro concepts and resource composition

# EKS Capabilities considerations
<a name="capabilities-considerations"></a>

This topic covers important considerations for using EKS Capabilities, including access control design, choosing between EKS Capabilities and self-managed solutions, architectural patterns for multi-cluster deployments, and operational best practices.

## Capability IAM roles and Kubernetes RBAC
<a name="_capability_iam_roles_and_kubernetes_rbac"></a>

Each EKS capability resource has a configured capability IAM role. The capability role grants the AWS service permissions that allow the capability to act on your behalf. For example, to use the ACK capability to manage Amazon S3 buckets, you grant the capability administrative permissions for S3 buckets, enabling it to create and manage them.

Once the capability is configured, S3 resources in AWS can be created and managed with Kubernetes custom resources in your cluster. Kubernetes RBAC is the in-cluster access control mechanism for determining which users and groups can create and manage those custom resources. For example, grant specific Kubernetes RBAC users and groups permissions to create and manage `Bucket` resources in namespaces you choose.
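
For example, a hypothetical RBAC policy that lets members of a `storage-admins` group (a placeholder name) manage ACK `Bucket` resources in a single namespace might look like this; the `s3.services.k8s.aws` API group matches the `Bucket` example elsewhere in this guide:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ack-bucket-editor
  namespace: team-a           # placeholder namespace
rules:
  - apiGroups: ["s3.services.k8s.aws"]
    resources: ["buckets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ack-bucket-editor-binding
  namespace: team-a
subjects:
  - kind: Group
    name: storage-admins      # placeholder group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: ack-bucket-editor
  apiGroup: rbac.authorization.k8s.io
```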

In this way, IAM and Kubernetes RBAC are two halves of the end-to-end access control system that governs permissions related to EKS Capabilities and resources. It’s important to design the right combination of IAM permissions and RBAC access policies for your use case.

For additional information on capability IAM roles and Kubernetes permissions, see [Security considerations for EKS Capabilities](capabilities-security.md).

## Multi-cluster architecture patterns
<a name="_multi_cluster_architecture_patterns"></a>

When deploying capabilities across multiple clusters, consider these common architectural patterns:

 **Hub and Spoke with centralized management** 

Run all three capabilities in a centrally managed cluster to orchestrate workloads and manage cloud infrastructure across multiple workload clusters.
+ Argo CD on the management cluster deploys applications to workload clusters in different regions or accounts
+ ACK on the management cluster provisions AWS resources (RDS, S3, IAM) for all clusters
+ kro on the management cluster creates portable platform abstractions that work across all clusters

This pattern centralizes workload and cloud infrastructure management, and can simplify operations for organizations managing many clusters.

 **Decentralized GitOps** 

Workloads and cloud infrastructure are managed by capabilities on the same cluster where workloads are running.
+ Argo CD manages application resources on the local cluster.
+ ACK resources are used for cluster and workload needs.
+ kro platform abstractions are installed and orchestrate local resources.

This pattern decentralizes operations, with teams managing their own dedicated platform services in one or more clusters.

 **Hub and Spoke with Hybrid ACK Deployment** 

Combine centralized and decentralized models, with centralized application deployments and resource management based on scope and ownership.
+ Hub cluster:
  + Argo CD manages GitOps deployments to the local cluster and all remote workload clusters
  + ACK is used on the management cluster for admin-scoped resources (production databases, IAM roles, VPCs)
  + kro is used on the management cluster for reusable platform abstractions
+ Spoke clusters:
  + Workloads are managed via Argo CD on the centralized hub cluster
  + ACK is used locally for workload-scoped resources (S3 buckets, ElastiCache instances, SQS queues)
  + kro is used locally for resource compositions and building block patterns

This pattern separates concerns—platform teams manage critical infrastructure centrally on management clusters, optionally including workload clusters, while application teams specify and manage cloud resources alongside workloads.

 **Choosing a Pattern** 

Consider these factors when selecting an architecture:
+  **Organizational structure**: Centralized platform teams favor hub patterns; decentralized teams may prefer per-cluster capabilities
+  **Resource scope**: Admin-scoped resources (databases, IAM) often benefit from central management; workload resources (buckets, queues) can be managed locally
+  **Self-service**: Centralized platform teams can author and distribute prescriptive custom resources to enable safe self-service of cloud resources for common workload needs
+  **Cluster fleet management**: Centralized management clusters provide a customer-owned control plane for EKS cluster fleet management, along with other admin-scoped resources
+  **Compliance requirements**: Some organizations require centralized control for audit and governance
+  **Operational complexity**: Fewer capability instances simplify operations but may create bottlenecks

**Note**  
You can start with one pattern and evolve to another as your platform matures. Capabilities are independent—you can deploy them differently across clusters based on your needs.

## Comparing EKS Capabilities to self-managed solutions
<a name="_comparing_eks_capabilities_to_self_managed_solutions"></a>

EKS Capabilities provide fully managed experiences for popular Kubernetes tools and controllers that run in EKS. This differs from self-managed solutions, which you install and operate in your cluster.

### Key Differences
<a name="_key_differences"></a>

 **Deployment and management** 

 AWS fully manages EKS Capabilities with no installation, configuration, or maintenance of component software required. AWS installs and manages all required Kubernetes Custom Resource Definitions (CRDs) in the cluster automatically.

With self-managed solutions, you install and configure cluster software using Helm charts, kubectl, or other operators. You have full control over the software lifecycle and runtime configuration of your self-managed solutions, providing customizations at any layer of the solution.

 **Operations and maintenance** 

 AWS manages patching and other software lifecycle operations for EKS Capabilities, with automatic updates and security patches. EKS Capabilities are integrated with AWS features for streamlined configuration, provide built-in high availability and fault tolerance, and eliminate in-cluster troubleshooting of controller workloads.

Self-managed solutions require you to monitor component health and logs, apply security patches and version updates, configure high availability with multiple replicas and pod disruption budgets, troubleshoot and remediate controller workload issues, and manage releases and versions. You have full control over your deployments, but this often requires bespoke solutions for private cluster access and other integrations which must align with organizational standards and security compliance requirements.

 **Resource consumption** 

EKS Capabilities run in EKS rather than on your clusters, freeing up node and cluster resources. Capabilities do not use cluster workload resources, do not consume CPU or memory on your worker nodes, scale automatically, and have minimal impact on cluster capacity planning.

Self-managed solutions run controllers and other components on your worker nodes, directly consuming worker node resources, cluster IPs, and other cluster resources. Managing cluster services requires capacity planning for their workloads, and requires planning and configuration of resource requests and limits to manage scaling and high availability requirements.

 **Feature support** 

As fully managed service features, EKS Capabilities are by their nature opinionated compared to self-managed solutions. While capabilities support most features and use cases, feature coverage can differ from self-managed solutions.

With self-managed solutions, you fully control the configuration, optional features, and other aspects of functionality for your software. You may choose to run your own custom images, customize all aspects of configuration, and fully control your self-managed solution functionality.

 **Cost Considerations** 

Each EKS capability resource has a related hourly cost, which differs based upon the capability type. Cluster resources managed by the capability also have an associated hourly cost with their own pricing. For more information, see [Amazon EKS pricing](https://aws.amazon.com/eks/pricing/).

Self-managed solutions have no direct costs related to AWS charges, but you pay for cluster compute resources used by controllers and related workloads. Beyond node and cluster resource consumption, the full cost of ownership with self-managed solutions includes operational overhead and expense of maintenance, troubleshooting, and support.

### Choosing between EKS Capabilities and self-managed solutions
<a name="_choosing_between_eks_capabilities_and_self_managed_solutions"></a>

 **EKS Capabilities**: Consider this choice when you want to reduce operational overhead and focus on differentiated value in your software and systems, rather than cluster platform operations for foundational requirements. Use EKS Capabilities when you want to minimize the operational burden of security patches and software lifecycle management, free up node and cluster resources for application workloads, simplify configuration and security management, and benefit from AWS support coverage. EKS Capabilities are ideal for most production use cases and are the recommended approach for new deployments.

 **Self-managed solutions**: Consider this choice when you require specific Kubernetes resource API versions, custom controller builds, have existing automation and tooling built around self-managed deployments, or need deep customization of controller runtime configurations. Self-managed solutions provide flexibility for specialized use cases, and you have complete control over your deployment and runtime configuration.

**Note**  
EKS Capabilities can coexist in your cluster with self-managed solutions, and you can migrate between them step-wise.

### Capability-Specific Comparisons
<a name="_capability_specific_comparisons"></a>

For detailed comparisons, including capability-specific features, upstream differences, and migration paths, see:
+  [Comparing EKS Capability for ACK to self-managed ACK](ack-comparison.md) 
+  [Comparing EKS Capability for Argo CD to self-managed Argo CD](argocd-comparison.md) 
+  [Comparing EKS Capability for kro to self-managed kro](kro-comparison.md) 

# Deploy AWS resources from Kubernetes with AWS Controllers for Kubernetes (ACK)
<a name="ack"></a>

 AWS Controllers for Kubernetes (ACK) lets you define and manage AWS service resources directly from Kubernetes. With ACK, you can manage workload resources and cloud infrastructure using Kubernetes custom resources, right alongside your application workloads, using familiar Kubernetes APIs and tools.

With EKS Capabilities, ACK is fully managed by AWS, eliminating the need to install, maintain, and scale ACK controllers on your clusters.

## How ACK Works
<a name="_how_ack_works"></a>

ACK translates Kubernetes custom resource specifications into AWS API calls. When you create, update, or delete a Kubernetes custom resource representing an AWS service resource, ACK makes the required AWS API calls to create, update, or delete the AWS resource.

Each AWS resource supported by ACK has its own custom resource definition (CRD) that defines the Kubernetes API schema for specifying its configuration. For example, ACK provides CRDs for S3 including buckets, bucket policies, and other S3 resources.

ACK continuously reconciles the state of your AWS resources with the desired state defined in your Kubernetes custom resources. If a resource drifts from its desired state, ACK detects this and takes corrective action to bring it back into alignment. Changes to Kubernetes resources are immediately reflected in AWS resource state, while passive drift detection and remediation of upstream AWS resource changes can take as long as 10 hours (the resync period), but will typically occur much sooner.

 **Example S3 Bucket resource manifest** 

```
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: my-ack-bucket
spec:
  name: my-unique-bucket-name
```

When you apply this custom resource to your cluster, ACK creates an Amazon S3 bucket in your account if it does not yet exist. Subsequent changes to this resource, for example specifying a non-default storage tier or adding a policy, will be applied to the S3 resource in AWS. When this resource is deleted from the cluster, the S3 bucket in AWS is deleted by default.
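
For instance, here is a sketch of updating the manifest above to add bucket tags. The `tagging` field is an assumption based on the upstream ACK S3 controller schema; verify field names against the CRDs installed in your cluster.

```
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: my-ack-bucket
spec:
  name: my-unique-bucket-name
  # Hedged example: tagging follows the upstream ACK S3 Bucket schema
  tagging:
    tagSet:
      - key: environment
        value: production
```

Applying this updated manifest causes ACK to call the S3 API to apply the tags to the existing bucket.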

## Benefits of ACK
<a name="_benefits_of_ack"></a>

ACK provides Kubernetes-native AWS resource management, allowing you to manage AWS resources using the same Kubernetes APIs and tools you use for your applications. This unified approach simplifies your infrastructure management workflow by eliminating the need to switch between different tools or learn separate infrastructure-as-code systems. You define your AWS resources declaratively in Kubernetes manifests, enabling GitOps workflows and infrastructure as code practices that integrate seamlessly with your existing development processes.

ACK continuously reconciles the desired state of your AWS resources with their actual state, correcting drift and ensuring consistency across your infrastructure. This continuous reconciliation means that imperative out-of-band changes to AWS resources are automatically reverted to match your declared configuration, maintaining the integrity of your infrastructure as code. You can configure ACK to manage resources across multiple AWS accounts and regions, enabling complex multi-account architectures with no additional tooling.

For organizations migrating from other infrastructure management tools, ACK supports resource adoption, allowing you to bring existing AWS resources under ACK management without recreating them. ACK also provides read-only resources for AWS resource observation without modification access, and annotations to optionally retain AWS resources even when the Kubernetes resource is deleted from the cluster.
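
Upstream ACK models resource adoption with an `AdoptedResource` custom resource. The following is a hedged sketch of adopting an existing bucket; the names are illustrative, and the exact adoption mechanism can vary by controller version.

```
apiVersion: services.k8s.aws/v1alpha1
kind: AdoptedResource
metadata:
  name: adopt-existing-bucket
spec:
  aws:
    # Identifier of the existing AWS resource to adopt
    nameOrID: my-existing-bucket
  kubernetes:
    group: s3.services.k8s.aws
    kind: Bucket
    metadata:
      name: my-existing-bucket
      namespace: default
```

Once reconciled, ACK creates the corresponding `Bucket` resource in the cluster and manages the existing AWS bucket without recreating it.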

To learn more and get started with the EKS Capability for ACK, see [ACK concepts](ack-concepts.md) and [ACK considerations for EKS](ack-considerations.md).

## Supported AWS Services
<a name="supported_shared_aws_services"></a>

ACK supports a wide range of AWS services, including but not limited to:
+ Amazon EC2
+ Amazon S3
+ Amazon RDS
+ Amazon DynamoDB
+ Amazon ElastiCache
+ Amazon EKS
+ Amazon SQS
+ Amazon SNS
+ AWS Lambda
+ AWS IAM

All AWS services listed as Generally Available upstream are supported by the EKS Capability for ACK. Refer to the [full list of AWS services supported](https://aws-controllers-k8s.github.io/community/docs/community/services/) for details.

## Integration with Other EKS Managed Capabilities
<a name="_integration_with_other_eks_managed_capabilities"></a>

ACK integrates with other EKS Managed Capabilities.
+  **Argo CD**: Use Argo CD to manage the deployment of ACK resources across multiple clusters, enabling GitOps workflows for your AWS infrastructure.
  + ACK extends the benefits of GitOps when paired with Argo CD, but ACK does not require integration with Git.
+  **kro (Kube Resource Orchestrator)**: Use kro to compose complex resources from ACK resources, creating higher-level abstractions that simplify resource management.
  + You can create composite custom resources with kro that define both Kubernetes resources and AWS resources. Team members can use these custom resources to quickly deploy complex applications.
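
As a hedged sketch of such a composition, assuming the upstream kro `ResourceGraphDefinition` API (the kind name, schema fields, and expressions here are illustrative):

```
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: webapp-with-bucket
spec:
  schema:
    apiVersion: v1alpha1
    kind: WebAppWithBucket
    spec:
      name: string
  resources:
    # An ACK-managed S3 bucket, named from the schema input
    - id: bucket
      template:
        apiVersion: s3.services.k8s.aws/v1alpha1
        kind: Bucket
        metadata:
          name: ${schema.spec.name}-data
        spec:
          name: ${schema.spec.name}-data
    # A ConfigMap exposing the bucket ARN to workloads
    - id: config
      template:
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: ${schema.spec.name}-config
        data:
          bucketArn: ${bucket.status.ackResourceMetadata.arn}
```

Team members then create `WebAppWithBucket` resources, and kro expands each one into the underlying ACK and Kubernetes resources.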

## Getting Started with ACK
<a name="_getting_started_with_ack"></a>

To get started with the EKS Capability for ACK:

1. Create and configure an IAM Capability Role with the necessary permissions for ACK to manage AWS resources on your behalf.

1.  [Create an ACK capability resource](create-ack-capability.md) on your EKS cluster through the AWS Console, AWS CLI, or your preferred infrastructure as code tool.

1. Apply Kubernetes custom resources to your cluster to start managing your AWS resources in Kubernetes.

# Create an ACK capability
<a name="create-ack-capability"></a>

This chapter explains how to create an ACK capability on your Amazon EKS cluster.

## Prerequisites
<a name="_prerequisites"></a>

Before creating an ACK capability, ensure you have:
+ An Amazon EKS cluster
+ An IAM Capability Role with permissions for ACK to manage AWS resources
+ Sufficient IAM permissions to create capability resources on EKS clusters
+ The appropriate CLI tool installed and configured, or access to the EKS Console

For instructions on creating the IAM Capability Role, see [Amazon EKS capability IAM role](capability-role.md).

**Important**  
ACK is an infrastructure management capability that grants the ability to create, modify, and delete AWS resources. This is an admin-scoped capability that should be carefully controlled. Anyone with permission to create Kubernetes resources in your cluster can effectively create AWS resources through ACK, subject to the IAM Capability Role permissions. The IAM Capability Role you provide determines which AWS resources ACK can create and manage. For guidance on creating an appropriate role with least-privilege permissions, see [Amazon EKS capability IAM role](capability-role.md) and [Security considerations for EKS Capabilities](capabilities-security.md).

## Choose your tool
<a name="_choose_your_tool"></a>

You can create an ACK capability using the AWS Management Console, AWS CLI, or eksctl:
+  [Create an ACK capability using the Console](ack-create-console.md) - Use the Console for a guided experience
+  [Create an ACK capability using the AWS CLI](ack-create-cli.md) - Use the AWS CLI for scripting and automation
+  [Create an ACK capability using eksctl](ack-create-eksctl.md) - Use eksctl for a Kubernetes-native experience

## What happens when you create an ACK capability
<a name="_what_happens_when_you_create_an_ack_capability"></a>

When you create an ACK capability:

1. EKS creates the ACK capability service and configures it to monitor and manage resources in your cluster

1. Custom Resource Definitions (CRDs) are installed in your cluster

1. An access entry is automatically created for your IAM Capability Role with capability-specific access entry policies that grant baseline Kubernetes permissions (see [Security considerations for EKS Capabilities](capabilities-security.md))

1. The capability assumes the IAM Capability Role you provide

1. ACK begins watching for its custom resources in your cluster

1. The capability status changes from `CREATING` to `ACTIVE` 

Once active, you can create ACK custom resources in your cluster to manage AWS resources.

**Note**  
The automatically created access entry includes the `AmazonEKSACKPolicy` which grants ACK permissions to manage AWS resources. Some ACK resources that reference Kubernetes secrets (such as RDS databases with passwords) require additional access entry policies. To learn more about access entries and how to configure additional permissions, see [Security considerations for EKS Capabilities](capabilities-security.md).

## Next steps
<a name="_next_steps"></a>

After creating the ACK capability:
+  [ACK concepts](ack-concepts.md) - Understand ACK concepts, including reconciliation, field exports, and resource adoption patterns
+  [Configure ACK permissions](ack-permissions.md) - Configure IAM permissions and multi-account patterns

# Create an ACK capability using the Console
<a name="ack-create-console"></a>

This topic describes how to create an AWS Controllers for Kubernetes (ACK) capability using the AWS Management Console.

## Create the ACK capability
<a name="_create_the_ack_capability"></a>

1. Open the Amazon EKS console at https://console.aws.amazon.com/eks/home#/clusters.

1. Select your cluster name to open the cluster detail page.

1. Choose the **Capabilities** tab.

1. In the left navigation, choose **AWS Controllers for Kubernetes (ACK)**.

1. Choose **Create AWS Controllers for Kubernetes capability**.

1. For **IAM Capability Role**:
   + If you already have an IAM Capability Role, select it from the dropdown
   + If you need to create a role, choose **Create admin role** 

     This opens the IAM console in a new tab with a pre-populated trust policy and the `AdministratorAccess` managed policy. You can unselect this policy and add other permissions if you prefer.

     After creating the role, return to the EKS console and the role will be automatically selected.
**Important**  
The suggested `AdministratorAccess` policy grants broad permissions and is intended to streamline getting started. For production use, replace this with a custom policy that grants only the permissions needed for the specific AWS services you plan to manage with ACK. For guidance on creating least-privilege policies, see [Configure ACK permissions](ack-permissions.md) and [Security considerations for EKS Capabilities](capabilities-security.md).

1. Choose **Create**.

The capability creation process begins.

## Verify the capability is active
<a name="_verify_the_capability_is_active"></a>

1. On the **Capabilities** tab, view the ACK capability status.

1. Wait for the status to change from `CREATING` to `ACTIVE`.

1. Once active, the capability is ready to use.

For information about capability statuses and troubleshooting, see [Working with capability resources](working-with-capabilities.md).

## Verify custom resources are available
<a name="_verify_custom_resources_are_available"></a>

After the capability is active, verify that ACK custom resources are available in your cluster.

 **Using the console** 

1. Navigate to your cluster in the Amazon EKS console

1. Choose the **Resources** tab

1. Choose **Extensions** 

1. Choose **CustomResourceDefinitions** 

You should see a number of CRDs listed for AWS resources.

 **Using kubectl** 

```
kubectl api-resources | grep services.k8s.aws
```

You should see a number of APIs listed for AWS resources.

**Note**  
The capability for AWS Controllers for Kubernetes will install a number of CRDs for a variety of AWS resources.

## Next steps
<a name="_next_steps"></a>
+  [ACK concepts](ack-concepts.md) - Understand ACK concepts and get started
+  [Configure ACK permissions](ack-permissions.md) - Configure IAM permissions for other AWS services
+  [Working with capability resources](working-with-capabilities.md) - Manage your ACK capability resource

# Create an ACK capability using the AWS CLI
<a name="ack-create-cli"></a>

This topic describes how to create an AWS Controllers for Kubernetes (ACK) capability using the AWS CLI.

## Prerequisites
<a name="_prerequisites"></a>
+  **AWS CLI** – Version `2.12.3` or later. To check your version, run `aws --version`. For more information, see [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the AWS Command Line Interface User Guide.
+  **`kubectl`** – A command line tool for working with Kubernetes clusters. For more information, see [Set up `kubectl` and `eksctl`](install-kubectl.md).

## Step 1: Create an IAM Capability Role
<a name="_step_1_create_an_iam_capability_role"></a>

Create a trust policy file:

```
cat > ack-trust-policy.json << 'EOF'
{
  "Version": "2012-10-17",		 	 	 
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "capabilities.eks.amazonaws.com"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:TagSession"
      ]
    }
  ]
}
EOF
```

Create the IAM role:

```
aws iam create-role \
  --role-name ACKCapabilityRole \
  --assume-role-policy-document file://ack-trust-policy.json
```

Attach the `AdministratorAccess` managed policy to the role:

```
aws iam attach-role-policy \
  --role-name ACKCapabilityRole \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
```

**Important**  
The suggested `AdministratorAccess` policy grants broad permissions and is intended to streamline getting started. For production use, replace this with a custom policy that grants only the permissions needed for the specific AWS services you plan to manage with ACK. For guidance on creating least-privilege policies, see [Configure ACK permissions](ack-permissions.md) and [Security considerations for EKS Capabilities](capabilities-security.md).

## Step 2: Create the ACK capability
<a name="_step_2_create_the_ack_capability"></a>

Create the ACK capability resource on your cluster. Replace *region-code* with the AWS Region that your cluster is in and replace *my-cluster* with the name of your cluster.

```
aws eks create-capability \
  --region region-code \
  --cluster-name my-cluster \
  --capability-name my-ack \
  --type ACK \
  --role-arn arn:aws:iam::$(aws sts get-caller-identity --query Account --output text):role/ACKCapabilityRole \
  --delete-propagation-policy RETAIN
```

The command returns immediately, but the capability takes some time to become active as EKS creates the required capability infrastructure and components. EKS will install the Kubernetes Custom Resource Definitions related to this capability in your cluster as it is being created.

**Note**  
If you receive an error that the cluster doesn’t exist or you don’t have permissions, verify:  
+ The cluster name is correct
+ Your AWS CLI is configured for the correct region
+ You have the required IAM permissions

## Step 3: Verify the capability is active
<a name="_step_3_verify_the_capability_is_active"></a>

Wait for the capability to become active. Replace *region-code* with the AWS Region that your cluster is in and replace *my-cluster* with the name of your cluster.

```
aws eks describe-capability \
  --region region-code \
  --cluster-name my-cluster \
  --capability-name my-ack \
  --query 'capability.status' \
  --output text
```

The capability is ready when the status shows `ACTIVE`. Don’t continue to the next step until the status is `ACTIVE`.
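
If you script this check, a minimal polling sketch could look like the following; `wait_for_capability` is an illustrative helper function, not an AWS CLI command.

```shell
# Hedged sketch: poll the capability status until it reaches ACTIVE.
wait_for_capability() {
  local region="$1" cluster="$2" capability="$3" status=""
  while true; do
    status=$(aws eks describe-capability \
      --region "$region" \
      --cluster-name "$cluster" \
      --capability-name "$capability" \
      --query 'capability.status' \
      --output text)
    echo "status: $status"
    [ "$status" = "ACTIVE" ] && break
    sleep 30  # creation can take several minutes
  done
}

# Example: wait_for_capability region-code my-cluster my-ack
```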

You can also view the full capability details:

```
aws eks describe-capability \
  --region region-code \
  --cluster-name my-cluster \
  --capability-name my-ack
```

## Step 4: Verify custom resources are available
<a name="_step_4_verify_custom_resources_are_available"></a>

After the capability is active, verify that ACK custom resources are available in your cluster:

```
kubectl api-resources | grep services.k8s.aws
```

You should see a number of APIs listed for AWS resources.

**Note**  
The capability for AWS Controllers for Kubernetes will install a number of CRDs for a variety of AWS resources.

## Next steps
<a name="_next_steps"></a>
+  [ACK concepts](ack-concepts.md) - Understand ACK concepts and get started
+  [Configure ACK permissions](ack-permissions.md) - Configure IAM permissions for other AWS services
+  [Working with capability resources](working-with-capabilities.md) - Manage your ACK capability resource

# Create an ACK capability using eksctl
<a name="ack-create-eksctl"></a>

This topic describes how to create an AWS Controllers for Kubernetes (ACK) capability using eksctl.

**Note**  
The following steps require eksctl version `0.220.0` or later. To check your version, run `eksctl version`.

## Step 1: Create an IAM Capability Role
<a name="_step_1_create_an_iam_capability_role"></a>

Create a trust policy file:

```
cat > ack-trust-policy.json << 'EOF'
{
  "Version": "2012-10-17",		 	 	 
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "capabilities.eks.amazonaws.com"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:TagSession"
      ]
    }
  ]
}
EOF
```

Create the IAM role:

```
aws iam create-role \
  --role-name ACKCapabilityRole \
  --assume-role-policy-document file://ack-trust-policy.json
```

Attach the `AdministratorAccess` managed policy to the role:

```
aws iam attach-role-policy \
  --role-name ACKCapabilityRole \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
```

**Important**  
The suggested `AdministratorAccess` policy grants broad permissions and is intended to streamline getting started. For production use, replace this with a custom policy that grants only the permissions needed for the specific AWS services you plan to manage with ACK. For guidance on creating least-privilege policies, see [Configure ACK permissions](ack-permissions.md) and [Security considerations for EKS Capabilities](capabilities-security.md).
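
The note and attach command that follow reference a customer-managed `ACKS3Policy` that this topic does not define. As an illustrative sketch (the action list is an assumption; scope it to your needs), a policy document for S3 bucket management might look like:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket",
        "s3:DeleteBucket",
        "s3:ListBucket",
        "s3:GetBucketPolicy",
        "s3:PutBucketPolicy",
        "s3:PutBucketTagging",
        "s3:ListAllMyBuckets"
      ],
      "Resource": "*"
    }
  ]
}
```

Save this as `ack-s3-policy.json` and create the policy with `aws iam create-policy --policy-name ACKS3Policy --policy-document file://ack-s3-policy.json`.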

**Important**  
This policy grants permissions for S3 bucket management with `"Resource": "*"`, which allows operations on all S3 buckets.  
For production use:
+ Restrict the `Resource` field to specific bucket ARNs or name patterns
+ Use IAM condition keys to limit access by resource tags
+ Grant only the minimum permissions needed for your use case

For other AWS services, see [Configure ACK permissions](ack-permissions.md).

Attach the policy to the role:

```
aws iam attach-role-policy \
  --role-name ACKCapabilityRole \
  --policy-arn arn:aws:iam::$(aws sts get-caller-identity --query Account --output text):policy/ACKS3Policy
```

## Step 2: Create the ACK capability
<a name="_step_2_create_the_ack_capability"></a>

Create the ACK capability using eksctl. Replace *region-code* with the AWS Region that your cluster is in and replace *my-cluster* with the name of your cluster.

```
eksctl create capability \
  --cluster my-cluster \
  --region region-code \
  --name ack \
  --type ACK \
  --role-arn arn:aws:iam::$(aws sts get-caller-identity --query Account --output text):role/ACKCapabilityRole \
  --ack-service-controllers s3
```

**Note**  
The `--ack-service-controllers` flag is optional. If omitted, ACK enables all available controllers. For better performance and security, consider enabling only the controllers you need. You can specify multiple controllers: `--ack-service-controllers s3,rds,dynamodb`.

The command returns immediately, but the capability takes some time to become active.

## Step 3: Verify the capability is active
<a name="_step_3_verify_the_capability_is_active"></a>

Check the capability status:

```
eksctl get capability \
  --cluster my-cluster \
  --region region-code \
  --name ack
```

The capability is ready when the status shows `ACTIVE`.

## Step 4: Verify custom resources are available
<a name="_step_4_verify_custom_resources_are_available"></a>

After the capability is active, verify that ACK custom resources are available in your cluster:

```
kubectl api-resources | grep services.k8s.aws
```

You should see a number of APIs listed for AWS resources.

**Note**  
The capability for AWS Controllers for Kubernetes will install a number of CRDs for a variety of AWS resources.

## Next steps
<a name="_next_steps"></a>
+  [ACK concepts](ack-concepts.md) - Understand ACK concepts and get started
+  [Configure ACK permissions](ack-permissions.md) - Configure IAM permissions for other AWS services
+  [Working with capability resources](working-with-capabilities.md) - Manage your ACK capability resource

# ACK concepts
<a name="ack-concepts"></a>

ACK manages AWS resources through Kubernetes APIs by continuously reconciling the desired state in your manifests with the actual state in AWS. When you create or update a Kubernetes custom resource, ACK makes the necessary AWS API calls to create or modify the corresponding AWS resource, then monitors it for drift and updates the Kubernetes status to reflect the current state. This approach lets you manage infrastructure using familiar Kubernetes tools and workflows while maintaining consistency between your cluster and AWS.

This topic explains the fundamental concepts behind how ACK manages AWS resources through Kubernetes APIs.

## Getting started with ACK
<a name="_getting_started_with_ack"></a>

After creating the ACK capability (see [Create an ACK capability](create-ack-capability.md)), you can start managing AWS resources using Kubernetes manifests in your cluster.

As an example, create this S3 bucket manifest in `bucket.yaml`, choosing your own unique bucket name.

```
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: my-test-bucket
  namespace: default
spec:
  name: my-unique-bucket-name-12345
```

Apply the manifest:

```
kubectl apply -f bucket.yaml
```

Check the status:

```
kubectl get bucket my-test-bucket
kubectl describe bucket my-test-bucket
```

Verify the bucket was created in AWS:

```
aws s3 ls | grep my-unique-bucket-name-12345
```

Delete the Kubernetes resource:

```
kubectl delete bucket my-test-bucket
```

Verify the bucket was deleted from AWS:

```
aws s3 ls | grep my-unique-bucket-name-12345
```

The bucket should no longer appear in the list, demonstrating that ACK manages the full lifecycle of AWS resources.

For more information on getting started with ACK, see [Getting Started with ACK](https://aws-controllers-k8s.github.io/community/docs/user-docs/getting-started/).

## Resource lifecycle and reconciliation
<a name="_resource_lifecycle_and_reconciliation"></a>

ACK uses a continuous reconciliation loop to ensure your AWS resources match the desired state defined in your Kubernetes manifests.

 **How reconciliation works**:

1. You create or update a Kubernetes custom resource (for example, an S3 Bucket)

1. ACK detects the change and compares the desired state with the actual state in AWS 

1. If they differ, ACK makes AWS API calls to reconcile the difference

1. ACK updates the resource status in Kubernetes to reflect the current state

1. The loop repeats continuously, with a periodic resync every 10 hours by default

Reconciliation is triggered when you create a new Kubernetes resource, update an existing resource’s `spec`, or when ACK detects drift in AWS from manual changes made outside ACK. Additionally, ACK performs periodic reconciliation with a resync period of 10 hours. Changes to Kubernetes resources trigger immediate reconciliation, while passive drift detection of upstream AWS resource changes occurs during the periodic resync.

When working through the getting started example above, ACK performs these steps:

1. Checks if bucket exists in AWS 

1. If not, calls `s3:CreateBucket` 

1. Updates Kubernetes status with bucket ARN and state

1. Continues monitoring for drift

To learn more about how ACK works, see [ACK Reconciliation](https://aws-controllers-k8s.github.io/community/docs/user-docs/reconciliation/).

## Status conditions
<a name="_status_conditions"></a>

ACK resources use status conditions to communicate their state. Understanding these conditions helps you troubleshoot issues and understand resource health.
+  **Ready**: Indicates the resource is ready to be consumed (standardized Kubernetes condition).
+  **ACK.ResourceSynced**: Indicates the resource spec matches the AWS resource state.
+  **ACK.Terminal**: Indicates an unrecoverable error has occurred.
+  **ACK.Adopted**: Indicates the resource was adopted from an existing AWS resource rather than created new.
+  **ACK.Recoverable**: Indicates a recoverable error that may resolve without updating the spec.
+  **ACK.Advisory**: Provides advisory information about the resource.
+  **ACK.LateInitialized**: Indicates whether late initialization of fields is complete.
+  **ACK.ReferencesResolved**: Indicates whether all `AWSResourceReference` fields have been resolved.
+  **ACK.IAMRoleSelected**: Indicates whether an IAMRoleSelector has been selected to manage this resource.

Check resource status:

```
# Check if resource is ready
kubectl get bucket my-bucket -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

# Check for terminal errors
kubectl get bucket my-bucket -o jsonpath='{.status.conditions[?(@.type=="ACK.Terminal")]}'
```

Example status:

```
status:
  conditions:
  - type: Ready
    status: "True"
    lastTransitionTime: "2024-01-15T10:30:00Z"
  - type: ACK.ResourceSynced
    status: "True"
    lastTransitionTime: "2024-01-15T10:30:00Z"
  - type: ACK.Terminal
    status: "False"
  ackResourceMetadata:
    arn: arn:aws:s3:::my-unique-bucket-name
    ownerAccountID: "111122223333"
    region: us-west-2
```

To learn more about ACK status and conditions, see [ACK Conditions](https://aws-controllers-k8s.github.io/community/docs/user-docs/conditions/).

## Deletion policies
<a name="_deletion_policies"></a>

ACK’s deletion policy controls what happens to AWS resources when you delete the Kubernetes resource.

 **Delete (default)** 

The AWS resource is deleted when you delete the Kubernetes resource. This is the default behavior.

```
# No annotation needed - this is the default
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: temp-bucket
spec:
  name: temporary-bucket
```

Deleting this resource deletes the S3 bucket in AWS.

 **Retain** 

The AWS resource is kept when you delete the Kubernetes resource:

```
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: important-bucket
  annotations:
    services.k8s.aws/deletion-policy: "retain"
spec:
  name: production-data-bucket
```

Deleting this resource removes it from Kubernetes but leaves the S3 bucket in AWS.

The `retain` policy is useful for:
+ Production databases that should outlive the Kubernetes resource
+ Shared resources used by multiple applications
+ Resources with important data that shouldn’t be accidentally deleted
+ Temporary ACK management, where you adopt a resource, configure it, then release it back to manual management

To learn more about ACK deletion policy, see [ACK Deletion Policy](https://aws-controllers-k8s.github.io/community/docs/user-docs/deletion-policy/).

## Resource adoption
<a name="_resource_adoption"></a>

Adoption allows you to bring existing AWS resources under ACK management without recreating them.

When to use adoption:
+ Migrating existing infrastructure to ACK management
+ Recovering orphaned AWS resources in case of accidental resource deletion in Kubernetes
+ Importing resources created by other tools (CloudFormation, Terraform)

How adoption works:

```
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: existing-bucket
  annotations:
    services.k8s.aws/adoption-policy: "adopt-or-create"
spec:
  name: my-existing-bucket-name
```

When you create this resource:

1. ACK checks if a bucket with that name exists in AWS 

1. If found, ACK adopts it (no API calls to create)

1. ACK reads the current configuration from AWS 

1. ACK updates the Kubernetes status to reflect the actual state

1. Future updates reconcile the resource normally

Once adopted, resources are managed like any other ACK resource, and deleting the Kubernetes resource will delete the AWS resource unless you use the `retain` deletion policy.

When adopting resources, the AWS resource must already exist and ACK needs read permissions to discover it. The `adopt-or-create` policy adopts the resource if it exists, or creates it if it doesn’t. This is useful when you want a declarative workflow that works whether the resource exists or not.
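
If you want adoption to fail rather than fall back to creating the resource, upstream ACK also supports an `adopt`-only policy, optionally paired with a `services.k8s.aws/adoption-fields` annotation that identifies the existing resource. The following is a sketch based on the upstream annotations; the field values are illustrative:

```
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: existing-bucket
  annotations:
    services.k8s.aws/adoption-policy: "adopt"
    services.k8s.aws/adoption-fields: |
      {"name": "my-existing-bucket-name"}
```

With this policy, reconciliation only succeeds if the named bucket already exists in AWS.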

To learn more about ACK resource adoption, see [ACK Resource Adoption](https://aws-controllers-k8s.github.io/community/docs/user-docs/adopted-resource/).

## Cross-account and cross-region resources
<a name="_cross_account_and_cross_region_resources"></a>

ACK can manage resources in different AWS accounts and regions from a single cluster.

 **Cross-region resource annotations** 

You can specify the region of an AWS resource using an annotation:

```
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: eu-bucket
  annotations:
    services.k8s.aws/region: eu-west-1
spec:
  name: my-eu-bucket
```

 **Namespace annotations** 

You can also set a default region for all AWS resources created in a given namespace:

```
apiVersion: v1
kind: Namespace
metadata:
  name: production
  annotations:
    services.k8s.aws/default-region: us-west-2
```

Resources created in this namespace use this region unless overridden with a resource-level annotation.

 **Cross-account** 

Use IAM Role Selectors to map specific IAM roles to namespaces:

```
apiVersion: services.k8s.aws/v1alpha1
kind: IAMRoleSelector
metadata:
  name: target-account-config
spec:
  arn: arn:aws:iam::444455556666:role/ACKTargetAccountRole
  namespaceSelector:
    names:
      - production
```

Resources created in the mapped namespace automatically use the specified role.

To learn more about IAM Role Selectors, see [ACK Cross-Account Resource Management](https://aws-controllers-k8s.github.io/docs/guides/cross-account). For cross-account configuration details, see [Configure ACK permissions](ack-permissions.md).

## Error handling and retry behavior
<a name="_error_handling_and_retry_behavior"></a>

ACK automatically handles transient errors and retries failed operations.

Retry strategy:
+ Transient errors (rate limiting, temporary service issues, insufficient permissions) trigger automatic retries
+ Exponential backoff prevents overwhelming AWS APIs
+ Maximum retry attempts vary by error type
+ Permanent errors (invalid parameters, resource name conflicts) don’t retry

Check resource status for error details using `kubectl describe`:

```
kubectl describe bucket my-bucket
```

Look for status conditions with error messages, events showing recent reconciliation attempts, and the `message` field in status conditions explaining failures. Common errors include insufficient IAM permissions, resource name conflicts in AWS, invalid configuration values in the `spec`, and exceeded AWS service quotas.

For troubleshooting common errors, see [Troubleshoot issues with ACK capabilities](ack-troubleshooting.md).

## Resource composition with kro
<a name="_resource_composition_with_kro"></a>

For composing and connecting multiple ACK resources together, use the EKS Capability for kro (Kube Resource Orchestrator). kro provides a declarative way to define groups of resources, passing configuration between resources to manage complex infrastructure patterns simply.

For detailed examples of creating custom resource compositions with ACK resources, see [kro concepts](kro-concepts.md).

## Next steps
<a name="_next_steps"></a>
+  [ACK considerations for EKS](ack-considerations.md) - EKS-specific patterns and integration strategies

# Configure ACK permissions
<a name="ack-permissions"></a>

ACK requires IAM permissions to create and manage AWS resources on your behalf. This topic explains how IAM works with ACK and provides guidance on configuring permissions for different use cases.

## How IAM works with ACK
<a name="_how_iam_works_with_ack"></a>

ACK uses IAM roles to authenticate with AWS and perform actions on your resources. There are two ways to provide permissions to ACK:

 **Capability Role**: The IAM role you provide when creating the ACK capability. This role is used by default for all ACK operations.

 **IAM Role Selectors**: Additional IAM roles that can be mapped to specific namespaces or resources. These roles override the Capability Role for resources in their scope.

When ACK needs to create or manage a resource, it determines which IAM role to use:

1. Check if an IAMRoleSelector matches the resource’s namespace

1. If a match is found, assume that IAM role

1. Otherwise, use the Capability Role

This approach enables flexible permission management from simple single-role setups to complex multi-account, multi-team configurations.

## Getting started: Simple permission setup
<a name="_getting_started_simple_permission_setup"></a>

For development, testing, or simple use cases, you can add all necessary service permissions directly to the Capability Role.

This approach works well when:
+ You’re getting started with ACK
+ All resources are in the same AWS account
+ A single team manages all ACK resources
+ You trust all ACK users to have the same permissions
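
For example, to let ACK manage S3 buckets and SQS queues from a single Capability Role, you might attach a policy like the following. The wildcard actions are a simplification for getting started; scope them down for anything beyond experimentation:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*",
        "sqs:*"
      ],
      "Resource": "*"
    }
  ]
}
```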

## Production best practice: IAM Role Selectors
<a name="_production_best_practice_iam_role_selectors"></a>

For production environments, use IAM Role Selectors to implement least-privilege access and namespace-level isolation.

When using IAM Role Selectors, the Capability Role only needs `sts:AssumeRole` and `sts:TagSession` permissions to assume the service-specific roles. You don’t need to add any AWS service permissions (like S3 or RDS) to the Capability Role itself—those permissions are granted to the individual IAM roles that the Capability Role assumes.

 **Choosing between permission models**:

Use **direct permissions** (adding service permissions to the Capability Role) when:
+ You’re getting started and want the simplest setup
+ All resources are in the same account as your cluster
+ You have administrative, cluster-wide permission requirements
+ All teams can share the same permissions

Use **IAM Role Selectors** when:
+ Managing resources across multiple AWS accounts
+ Different teams or namespaces need different permissions
+ You need fine-grained access control per namespace
+ You want to follow least-privilege security practices

You can start with direct permissions and migrate to IAM Role Selectors later as your requirements grow.

 **Why use IAM Role Selectors in production:** 
+  **Least privilege**: Each namespace gets only the permissions it needs
+  **Team isolation**: Team A cannot accidentally use Team B’s permissions
+  **Easier auditing**: Clear mapping of which namespace uses which role
+  **Cross-account support**: Required for managing resources in multiple accounts
+  **Separation of concerns**: Different services or environments use different roles

### Basic IAM Role Selector setup
<a name="_basic_iam_role_selector_setup"></a>

 **Step 1: Create a service-specific IAM role** 

Create an IAM role with permissions for specific AWS services:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": "*"
    }
  ]
}
```

Configure the trust policy to allow the Capability Role to assume it:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/ACKCapabilityRole"
      },
      "Action": ["sts:AssumeRole", "sts:TagSession"]
    }
  ]
}
```

 **Step 2: Grant AssumeRole permission to Capability Role** 

Add permission to the Capability Role to assume the service-specific role:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["sts:AssumeRole", "sts:TagSession"],
      "Resource": "arn:aws:iam::111122223333:role/ACK-S3-Role"
    }
  ]
}
```

 **Step 3: Create IAMRoleSelector** 

Map the IAM role to a namespace:

```
apiVersion: services.k8s.aws/v1alpha1
kind: IAMRoleSelector
metadata:
  name: s3-namespace-config
spec:
  arn: arn:aws:iam::111122223333:role/ACK-S3-Role
  namespaceSelector:
    names:
      - s3-resources
```

 **Step 4: Create resources in the mapped namespace** 

Resources in the `s3-resources` namespace automatically use the specified role:

```
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: my-bucket
  namespace: s3-resources
spec:
  name: my-production-bucket
```

## Multi-account management
<a name="_multi_account_management"></a>

Use IAM Role Selectors to manage resources across multiple AWS accounts.

 **Step 1: Create cross-account IAM role** 

In the target account (444455556666), create a role that trusts the source account’s Capability Role:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/ACKCapabilityRole"
      },
      "Action": ["sts:AssumeRole", "sts:TagSession"]
    }
  ]
}
```

Attach service-specific permissions to this role.

 **Step 2: Grant AssumeRole permission** 

In the source account (111122223333), allow the Capability Role to assume the target account role:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["sts:AssumeRole", "sts:TagSession"],
      "Resource": "arn:aws:iam::444455556666:role/ACKTargetAccountRole"
    }
  ]
}
```

 **Step 3: Create IAMRoleSelector** 

Map the cross-account role to a namespace:

```
apiVersion: services.k8s.aws/v1alpha1
kind: IAMRoleSelector
metadata:
  name: production-account-config
spec:
  arn: arn:aws:iam::444455556666:role/ACKTargetAccountRole
  namespaceSelector:
    names:
      - production
```

 **Step 4: Create resources** 

Resources in the `production` namespace are created in the target account:

```
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: my-bucket
  namespace: production
spec:
  name: my-cross-account-bucket
```

## Session tags
<a name="_session_tags"></a>

The EKS ACK capability automatically sets session tags on all AWS API requests. These tags enable fine-grained access control and auditing by identifying the source of each request.

### Available session tags
<a name="_available_session_tags"></a>

The following session tags are included with every AWS API call made by ACK:


| Tag Key | Description | 
| --- | --- | 
|   `eks:eks-capability-arn`   |  The ARN of the EKS capability making the request  | 
|   `eks:kubernetes-namespace`   |  The Kubernetes namespace of the resource being managed  | 
|   `eks:kubernetes-api-group`   |  The Kubernetes API group of the resource (for example, `s3.services.k8s.aws`)  | 

### Using session tags for access control
<a name="_using_session_tags_for_access_control"></a>

You can use these session tags in IAM policy conditions to restrict which resources ACK can manage. This provides an additional layer of security beyond namespace-based IAM Role Selectors.

 **Example: Restrict by namespace** 

Allow ACK to create S3 buckets only when the request originates from the `production` namespace:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:CreateBucket",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalTag/eks:kubernetes-namespace": "production"
        }
      }
    }
  ]
}
```

 **Example: Restrict by capability** 

Allow actions only from a specific ACK capability:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalTag/eks:eks-capability-arn": "arn:aws:eks:us-west-2:111122223333:capability/my-cluster/ack/my-ack"
        }
      }
    }
  ]
}
```

**Note**  
Self-managed ACK does not set these session tags by default. They are specific to the managed capability and enable more granular access control.

## Advanced IAM Role Selector patterns
<a name="_advanced_iam_role_selector_patterns"></a>

For advanced configuration including label selectors, resource-specific role mapping, and additional examples, see [ACK IRSA Documentation](https://aws-controllers-k8s.github.io/community/docs/user-docs/irsa/).

## Next steps
<a name="_next_steps"></a>
+  [ACK concepts](ack-concepts.md) - Understand ACK concepts, resource lifecycle, resource adoption, and deletion policies
+  [Security considerations for EKS Capabilities](capabilities-security.md) - Understand security best practices for capabilities

# ACK considerations for EKS
<a name="ack-considerations"></a>

This topic covers important considerations for using the EKS Capability for ACK, including IAM configuration, multi-account patterns, and integration with other EKS capabilities.

## IAM configuration patterns
<a name="_iam_configuration_patterns"></a>

The ACK capability uses an IAM Capability Role to authenticate with AWS. Choose the right IAM pattern based on your requirements.

### Simple: Single Capability Role
<a name="_simple_single_capability_role"></a>

For development, testing, or simple use cases, grant all necessary permissions directly to the Capability Role.

 **When to use**:
+ Getting started with ACK
+ Single-account deployments
+ All resources managed by one team
+ Development and testing environments

 **Example**: Add S3 and RDS permissions to your Capability Role with resource tagging conditions:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:RequestedRegion": ["us-west-2", "us-east-1"]
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": ["rds:*"],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:RequestedRegion": ["us-west-2", "us-east-1"],
          "aws:RequestTag/ManagedBy": "ACK"
        }
      }
    }
  ]
}
```

This example limits S3 and RDS operations to specific regions and requires RDS resources to have a `ManagedBy: ACK` tag.

### Production: IAM Role Selectors
<a name="_production_iam_role_selectors"></a>

For production environments, use IAM Role Selectors to implement least-privilege access and namespace-level isolation.

 **When to use**:
+ Production environments
+ Multi-team clusters
+ Multi-account resource management
+ Least-privilege security requirements
+ Different services need different permissions

 **Benefits**:
+ Each namespace gets only the permissions it needs
+ Team isolation - Team A cannot use Team B’s permissions
+ Easier auditing and compliance
+ Required for cross-account resource management

For detailed IAM Role Selector configuration, see [Configure ACK permissions](ack-permissions.md).

## Integration with other EKS capabilities
<a name="_integration_with_other_eks_capabilities"></a>

### GitOps with Argo CD
<a name="_gitops_with_argo_cd"></a>

Use the EKS Capability for Argo CD to deploy ACK resources from Git repositories, enabling GitOps workflows for infrastructure management.

 **Considerations**:
+ Store ACK resources alongside application manifests for end-to-end GitOps
+ Organize by environment, service, or resource type based on your team structure
+ Use Argo CD’s automated sync for continuous reconciliation
+ Enable pruning to automatically remove deleted resources
+ Consider hub-and-spoke patterns for multi-cluster infrastructure management

GitOps provides audit trails, rollback capabilities, and declarative infrastructure management. For more on Argo CD, see [Working with Argo CD](working-with-argocd.md).
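
As a sketch, an Argo CD `Application` that continuously syncs ACK manifests from Git might look like the following. The repository URL, path, and namespaces are placeholders for your own layout:

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: ack-infrastructure
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/infrastructure.git
    targetRevision: main
    path: ack-resources/production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

With `prune` and `selfHeal` enabled, removing an ACK manifest from Git deletes the corresponding Kubernetes resource, which in turn deletes the AWS resource unless a `retain` deletion policy is set.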

### Resource composition with kro
<a name="_resource_composition_with_kro"></a>

Use the EKS Capability for kro (Kube Resource Orchestrator) to compose multiple ACK resources into higher-level abstractions and custom APIs.

 **When to use kro with ACK**:
+ Create reusable patterns for common infrastructure stacks (database + backup + monitoring)
+ Build self-service platforms with simplified APIs for application teams
+ Manage resource dependencies and pass values between resources (S3 bucket ARN to Lambda function)
+ Standardize infrastructure configurations across teams
+ Reduce complexity by hiding implementation details behind custom resources

 **Example patterns**:
+ Application stack: S3 bucket + SQS queue + notification configuration
+ Database setup: RDS instance + parameter group + security group + secrets
+ Networking: VPC + subnets + route tables + security groups

kro handles dependency ordering, status propagation, and lifecycle management for composed resources. For more on kro, see [kro concepts](kro-concepts.md).
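
As an illustration, a kro `ResourceGraphDefinition` that composes an S3 bucket and an SQS queue behind a single custom API might look like the following. The schema, field names, and `${...}` expressions follow the upstream kro v1alpha1 API; treat this as a sketch rather than a tested definition:

```
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: app-storage
spec:
  schema:
    apiVersion: v1alpha1
    kind: AppStorage
    spec:
      name: string
  resources:
    - id: bucket
      template:
        apiVersion: s3.services.k8s.aws/v1alpha1
        kind: Bucket
        metadata:
          name: ${schema.spec.name}
        spec:
          name: ${schema.spec.name}
    - id: queue
      template:
        apiVersion: sqs.services.k8s.aws/v1alpha1
        kind: Queue
        metadata:
          name: ${schema.spec.name}-queue
        spec:
          queueName: ${schema.spec.name}-queue
```

Application teams then create a single `AppStorage` resource, and kro creates and reconciles the underlying ACK resources.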

## Organizing your resources
<a name="_organizing_your_resources"></a>

Organize ACK resources using Kubernetes namespaces and AWS resource tags for better management, access control, and cost tracking.

### Namespace organization
<a name="_namespace_organization"></a>

Use Kubernetes namespaces to logically separate ACK resources by environment (production, staging, development), team (platform, data, ml), or application.

 **Benefits**:
+ Namespace-scoped RBAC for access control
+ Set default regions per namespace using annotations
+ Easier resource management and cleanup
+ Logical separation aligned with organizational structure
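
For example, a namespace that groups one team's staging resources, carries labels for RBAC and selectors, and sets a default region might look like the following (names and labels are illustrative):

```
apiVersion: v1
kind: Namespace
metadata:
  name: data-team-staging
  labels:
    team: data
    environment: staging
  annotations:
    services.k8s.aws/default-region: us-east-1
```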

### Resource tagging
<a name="_resource_tagging"></a>

The EKS ACK capability automatically applies default tags to all AWS resources it creates. These tags differ from self-managed ACK and provide enhanced traceability.

 **Default tags applied by the capability**:


| Tag Key | Description | 
| --- | --- | 
|   `eks:controller-version`   |  The version of the ACK controller  | 
|   `eks:kubernetes-namespace`   |  The Kubernetes namespace of the ACK resource  | 
|   `eks:kubernetes-resource-name`   |  The name of the Kubernetes resource  | 
|   `eks:kubernetes-api-group`   |  The Kubernetes API group (for example, `s3.services.k8s.aws`)  | 
|   `eks:eks-capability-arn`   |  The ARN of the EKS ACK capability  | 

**Note**  
Self-managed ACK uses different default tags: `services.k8s.aws/controller-version` and `services.k8s.aws/namespace`. The capability’s tags use the `eks:` prefix for consistency with other EKS features.

 **Additional recommended tags**:

Add custom tags for cost allocation, ownership tracking, and organizational purposes:
+ Environment (Production, Staging, Development)
+ Team or department ownership
+ Cost center for billing allocation
+ Application or service name
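
Many ACK resource kinds expose service-native tags in their spec. As a sketch, the S3 `Bucket` resource accepts bucket tags through `spec.tagging` (field names follow the upstream `s3.services.k8s.aws` API; verify against the API reference for your controller version):

```
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: analytics-bucket
spec:
  name: analytics-data-bucket
  tagging:
    tagSet:
      - key: Environment
        value: Production
      - key: Team
        value: data-platform
      - key: CostCenter
        value: "1234"
```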

## Migration from other Infrastructure-as-code tools
<a name="_migration_from_other_infrastructure_as_code_tools"></a>

Many organizations are finding value in standardizing on Kubernetes beyond their workload orchestration. Migrating infrastructure and AWS resource management to ACK allows you to standardize infrastructure management using Kubernetes APIs alongside your application workloads.

 **Benefits of standardizing on Kubernetes for infrastructure**:
+  **Single source of truth**: Manage both applications and infrastructure in Kubernetes, enabling an end-to-end GitOps practice
+  **Unified tooling**: Teams use Kubernetes resources and tooling rather than learning multiple tools and frameworks
+  **Consistent reconciliation**: ACK continuously reconciles AWS resources just as Kubernetes does for workloads, detecting and correcting drift that imperative tools leave unmanaged
+  **Native compositions**: With kro and ACK together, reference AWS resources directly in application and resource manifests, passing connection strings and ARNs between resources
+  **Simplified operations**: One control plane for deployments, rollbacks, and observability across your entire system

ACK supports adopting existing AWS resources without recreating them, enabling zero-downtime migration from CloudFormation, Terraform, or resources external to the cluster.

 **Adopt an existing resource**:

```
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: existing-bucket
  annotations:
    services.k8s.aws/adoption-policy: "adopt-or-create"
spec:
  name: my-existing-bucket-name
```

Once adopted, the resource is managed by ACK and can be updated through Kubernetes manifests. You can migrate incrementally, adopting resources as needed while maintaining existing IaC tools for other resources.

ACK also supports read-only resources. For resources managed by other teams or tools that you want to reference but not modify, combine adoption with the `retain` deletion policy and grant only read IAM permissions. This allows applications to discover shared infrastructure (VPCs, IAM roles, KMS keys) through Kubernetes APIs without risking modifications.

For more on resource adoption, see [ACK concepts](ack-concepts.md).

## Deletion policies
<a name="_deletion_policies"></a>

Deletion policies control what happens to AWS resources when you delete the corresponding Kubernetes resource. Choose the right policy based on the resource lifecycle and your operational requirements.

### Delete (default)
<a name="_delete_default"></a>

The AWS resource is deleted when you delete the Kubernetes resource. This maintains consistency between your cluster and AWS, ensuring resources don’t accumulate.

 **When to use delete**:
+ Development and testing environments where cleanup is important
+ Ephemeral resources tied to application lifecycle (test databases, temporary buckets)
+ Resources that should not outlive the application (SQS queues, ElastiCache clusters)
+ Cost optimization - automatically clean up unused resources
+ Environments managed with GitOps where resource removal from Git should delete the infrastructure

The default delete policy aligns with Kubernetes' declarative model: what’s in the cluster matches what exists in AWS.

### Retain
<a name="_retain"></a>

The AWS resource is kept when you delete the Kubernetes resource. This protects critical data and allows resources to outlive their Kubernetes representation.

 **When to use retain**:
+ Production databases with critical data that must survive cluster changes
+ Long-term storage buckets with compliance or audit requirements
+ Shared resources used by multiple applications or teams
+ Resources being migrated to different management tools
+ Disaster recovery scenarios where you want to preserve infrastructure
+ Resources with complex dependencies that require careful decommissioning

```
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBInstance
metadata:
  name: production-db
  annotations:
    services.k8s.aws/deletion-policy: "retain"
spec:
  dbInstanceIdentifier: prod-db
  # ... configuration
```

**Important**  
Retained resources continue to incur AWS costs and must be manually deleted from AWS when no longer needed. Use resource tagging to track retained resources for cleanup.

For more on deletion policies, see [ACK concepts](ack-concepts.md).

## Upstream documentation
<a name="_upstream_documentation"></a>

For detailed information on using ACK:
+  [ACK usage guide](https://aws-controllers-k8s.github.io/community/docs/user-docs/usage/) - Creating and managing resources
+  [ACK API reference](https://aws-controllers-k8s.github.io/community/reference/) - Complete API documentation for all services
+  [ACK documentation](https://aws-controllers-k8s.github.io/community/docs/) - Comprehensive user documentation

## Next steps
<a name="_next_steps"></a>
+  [Configure ACK permissions](ack-permissions.md) - Configure IAM permissions and multi-account patterns
+  [ACK concepts](ack-concepts.md) - Understand ACK concepts and resource lifecycle
+  [Troubleshoot issues with ACK capabilities](ack-troubleshooting.md) - Troubleshoot ACK issues
+  [Working with Argo CD](working-with-argocd.md) - Deploy ACK resources with GitOps
+  [kro concepts](kro-concepts.md) - Compose ACK resources into higher-level abstractions

# Troubleshoot issues with ACK capabilities
<a name="ack-troubleshooting"></a>

This topic provides troubleshooting guidance for the EKS Capability for ACK, including capability health checks, resource status verification, and IAM permission issues.

**Note**  
EKS Capabilities are fully managed and run outside your cluster. You don’t have access to controller logs or controller namespaces. Troubleshooting focuses on capability health, resource status, and IAM configuration.

## Capability is ACTIVE but resources aren’t being created
<a name="_capability_is_active_but_resources_arent_being_created"></a>

If your ACK capability shows `ACTIVE` status but resources aren’t being created in AWS, check the capability health, resource status, and IAM permissions.

 **Check capability health**:

You can view capability health and status issues in the EKS console or using the AWS CLI.

 **Console**:

1. Open the Amazon EKS console at https://console.aws.amazon.com/eks/home#/clusters.

1. Select your cluster name.

1. Choose the **Observability** tab.

1. Choose **Monitor cluster**.

1. Choose the **Capabilities** tab to view health and status for all capabilities.

 **AWS CLI**:

```
# View capability status and health
aws eks describe-capability \
  --region region-code \
  --cluster-name my-cluster \
  --capability-name my-ack

# Look for issues in the health section
```

 **Common causes**:
+  **IAM permissions missing**: The Capability Role lacks permissions for the AWS service
+  **Wrong namespace**: Resources created in namespace without proper IAMRoleSelector
+  **Invalid resource spec**: Check resource status conditions for validation errors
+  **API throttling**: AWS API rate limits being hit
+  **Admission webhooks**: Admission webhooks blocking the controller from patching resource status

 **Check resource status**:

```
# Describe the resource to see conditions and events
kubectl describe bucket my-bucket -n default

# Look for status conditions
kubectl get bucket my-bucket -n default -o jsonpath='{.status.conditions}'

# View resource events
kubectl get events --field-selector involvedObject.name=my-bucket -n default
```

 **Verify IAM permissions**:

```
# View the Capability Role's policies
aws iam list-attached-role-policies --role-name my-ack-capability-role
aws iam list-role-policies --role-name my-ack-capability-role

# Get specific policy details
aws iam get-role-policy --role-name my-ack-capability-role --policy-name policy-name
```

## Resources created in AWS but not showing in Kubernetes
<a name="resources_created_in_shared_aws_but_not_showing_in_kubernetes"></a>

ACK only tracks resources it creates through Kubernetes manifests. To manage existing AWS resources with ACK, use the adoption feature.

```
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: existing-bucket
  annotations:
    services.k8s.aws/adoption-policy: "adopt-or-create"
spec:
  name: my-existing-bucket-name
```

For more on resource adoption, see [ACK concepts](ack-concepts.md).

## Cross-account resources not being created
<a name="_cross_account_resources_not_being_created"></a>

If resources aren’t being created in a target AWS account when using IAM Role Selectors, verify the trust relationship and IAMRoleSelector configuration.

 **Verify trust relationship**:

```
# Check the trust policy in the target account role
aws iam get-role --role-name cross-account-ack-role --query 'Role.AssumeRolePolicyDocument'
```

The trust policy must allow the source account’s Capability Role to assume it.

 **Confirm IAMRoleSelector configuration**:

```
# List IAMRoleSelectors (cluster-scoped)
kubectl get iamroleselector

# Describe specific selector
kubectl describe iamroleselector my-selector
```

 **Verify namespace alignment**:

IAMRoleSelectors are cluster-scoped resources but target specific namespaces. Ensure your ACK resources are in a namespace that matches the IAMRoleSelector’s namespace selector:

```
# Check resource namespace
kubectl get bucket my-cross-account-bucket -n production

# List all IAMRoleSelectors (cluster-scoped)
kubectl get iamroleselector

# Check which namespace the selector targets
kubectl get iamroleselector my-selector -o jsonpath='{.spec.namespaceSelector}'
```

 **Check IAMRoleSelected condition**:

Verify that the IAMRoleSelector was successfully matched to your resource by checking the `ACK.IAMRoleSelected` condition:

```
# Check if IAMRoleSelector was matched
kubectl get bucket my-cross-account-bucket -n production -o jsonpath='{.status.conditions[?(@.type=="ACK.IAMRoleSelected")]}'
```

If the condition is `False` or missing, the IAMRoleSelector’s namespace selector doesn’t match the resource’s namespace. Verify the selector’s `namespaceSelector` matches your resource’s namespace labels.

 **Check Capability Role permissions**:

The Capability Role needs `sts:AssumeRole` and `sts:TagSession` permissions for the target account role:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["sts:AssumeRole", "sts:TagSession"],
      "Resource": "arn:aws:iam::444455556666:role/cross-account-ack-role"
    }
  ]
}
```

For detailed cross-account configuration, see [Configure ACK permissions](ack-permissions.md).

## Next steps
<a name="_next_steps"></a>
+  [ACK considerations for EKS](ack-considerations.md) - ACK considerations and best practices
+  [Configure ACK permissions](ack-permissions.md) - Configure IAM permissions and multi-account patterns
+  [ACK concepts](ack-concepts.md) - Understand ACK concepts and resource lifecycle
+  [Troubleshooting EKS Capabilities](capabilities-troubleshooting.md) - General capability troubleshooting guidance

# Comparing EKS Capability for ACK to self-managed ACK
<a name="ack-comparison"></a>

The EKS Capability for ACK provides the same functionality as self-managed ACK controllers, but with significant operational advantages. For a general comparison of EKS Capabilities vs self-managed solutions, see [EKS Capabilities considerations](capabilities-considerations.md). This topic focuses on ACK-specific differences.

## Differences from upstream ACK
<a name="_differences_from_upstream_ack"></a>

The EKS Capability for ACK is based on upstream ACK controllers but differs in IAM integration.

 **IAM Capability Role**: The capability uses a dedicated IAM role with a trust policy that allows the `capabilities.eks.amazonaws.com` service principal, not IRSA (IAM Roles for Service Accounts). You can attach IAM policies directly to the Capability Role with no need to create or annotate Kubernetes service accounts or configure OIDC providers. A best practice for production use cases is to configure service permissions using `IAMRoleSelector`. See [Configure ACK permissions](ack-permissions.md) for more details.

 **Session tags**: The managed capability automatically sets session tags on all AWS API requests, enabling fine-grained access control and auditing. Tags include `eks:eks-capability-arn`, `eks:kubernetes-namespace`, and `eks:kubernetes-api-group`. This differs from self-managed ACK, which does not set these tags by default. See [Configure ACK permissions](ack-permissions.md) for details on using session tags in IAM policies.
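For example, an IAM policy evaluated for the capability's role session can scope permissions by namespace through the `aws:PrincipalTag` condition key. This is a sketch; the `production` namespace value is a placeholder:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalTag/eks:kubernetes-namespace": "production"
        }
      }
    }
  ]
}
```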

 **Resource tags**: The capability applies different default tags to AWS resources than self-managed ACK. The capability uses `eks:` prefixed tags (such as `eks:kubernetes-namespace`, `eks:eks-capability-arn`) instead of the `services.k8s.aws/` tags used by self-managed ACK. See [ACK considerations for EKS](ack-considerations.md) for the complete list of default resource tags.

 **Resource compatibility**: ACK custom resources work identically to upstream ACK with no changes to your ACK resource YAML files. The capability uses the same Kubernetes APIs and CRDs, so tools like `kubectl` work the same way. All GA controllers and resources from upstream ACK are supported.

For complete ACK documentation and service-specific guides, see the [ACK documentation](https://aws-controllers-k8s.github.io/community/).

## Migration path
<a name="_migration_path"></a>

You can migrate from self-managed ACK to the managed capability with zero downtime:

1. Update your self-managed ACK controller to use `kube-system` for leader election leases, for example:

   ```
   helm upgrade --install ack-s3-controller \
     oci://public.ecr.aws/aws-controllers-k8s/s3-chart \
     --namespace ack-system \
     --set leaderElection.namespace=kube-system
   ```

   This moves the controller’s lease to `kube-system`, allowing the managed capability to coordinate with it.

1. Create the ACK capability on your cluster (see [Create an ACK capability](create-ack-capability.md))

1. The managed capability recognizes existing ACK-managed AWS resources and takes over reconciliation

1. Gradually scale down or remove self-managed controller deployments:

   ```
   helm uninstall ack-s3-controller --namespace ack-system
   ```

This approach allows both controllers to coexist safely during migration. The managed capability automatically adopts resources previously managed by self-managed controllers, ensuring continuous reconciliation without conflicts.

## Next steps
<a name="_next_steps"></a>
+  [Create an ACK capability](create-ack-capability.md) - Create an ACK capability resource
+  [ACK concepts](ack-concepts.md) - Understand ACK concepts and resource lifecycle
+  [Configure ACK permissions](ack-permissions.md) - Configure IAM and permissions

# Continuous Deployment with Argo CD
<a name="argocd"></a>

Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes. With Argo CD, you can automate the deployment and lifecycle management of your applications across multiple clusters and environments. Argo CD supports multiple source types including Git repositories, Helm registries (HTTP and OCI), and OCI images—providing flexibility for organizations with different security and compliance requirements.

With EKS Capabilities, Argo CD is fully managed by AWS, eliminating the need to install, maintain, and scale Argo CD controllers and their dependencies on your clusters.

## How Argo CD Works
<a name="_how_argo_cd_works"></a>

Argo CD follows the GitOps pattern, where your application source (Git repository, Helm registry, or OCI image) is the source of truth for defining the desired application state. When you create an Argo CD `Application` resource, you specify the source containing your application manifests and the target Kubernetes cluster and namespace. Argo CD continuously monitors both the source and the live state in the cluster, automatically synchronizing any changes to ensure the cluster state matches the desired state.

**Note**  
With the EKS Capability for Argo CD, the Argo CD software runs in the AWS control plane, not on your worker nodes. This means your worker nodes don’t need direct access to Git repositories or Helm registries—the capability handles source access from the AWS account.

Argo CD provides three primary resource types:
+  **Application**: Defines a deployment from a source (Git repository, Helm registry, or OCI image) to a target cluster
+  **ApplicationSet**: Generates multiple Applications from templates for multi-cluster deployments
+  **AppProject**: Provides logical grouping and access control for Applications

 **Example: Creating an Argo CD Application** 

The following example shows how to create an Argo CD `Application` resource:

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD
    path: guestbook
  destination:
    name: in-cluster
    namespace: guestbook
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

**Note**  
Use `destination.name` with the cluster name you used when registering the cluster (like `in-cluster` for the local cluster). The `destination.server` field also works with EKS cluster ARNs, but using cluster names is recommended for better readability.
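An `ApplicationSet` follows the same pattern but generates one Application per element of a generator. The following sketch uses a list generator; the cluster names `staging` and `production` are placeholders that must match names you used when registering those clusters:

```
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
  namespace: argocd
spec:
  generators:
    - list:
        elements:
          - cluster: staging
          - cluster: production
  template:
    metadata:
      name: 'guestbook-{{cluster}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/argoproj/argocd-example-apps.git
        targetRevision: HEAD
        path: guestbook
      destination:
        name: '{{cluster}}'
        namespace: guestbook
```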

## Benefits of Argo CD
<a name="_benefits_of_argo_cd"></a>

Argo CD implements a GitOps workflow where you define your application configurations in Git repositories and Argo CD automatically syncs your applications to match the desired state. This Git-centric approach provides a complete audit trail of all changes, enables easy rollbacks, and integrates naturally with your existing code review and approval processes. Argo CD automatically detects and reconciles drift between the desired state in Git and the actual state in your clusters, ensuring your deployments remain consistent with your declared configuration.

With Argo CD, you can deploy and manage applications across multiple clusters from a single Argo CD instance, simplifying operations in multi-cluster and multi-region environments. The Argo CD UI provides visualization and monitoring capabilities, allowing you to view the deployment status, health, and history of your applications. The UI integrates with AWS Identity Center (formerly AWS SSO) for seamless authentication and authorization, enabling you to control access using your existing identity management infrastructure.

As part of EKS Managed Capabilities, Argo CD is fully managed by AWS, eliminating the need to install, configure, and maintain Argo CD infrastructure. AWS handles scaling, patching, and operational management, allowing your teams to focus on application delivery rather than tool maintenance.

## Integration with AWS Identity Center
<a name="integration_with_shared_aws_identity_center"></a>

EKS Managed Capabilities provides direct integration between Argo CD and AWS Identity Center, enabling seamless authentication and authorization for your users. When you enable the Argo CD capability, you can configure AWS Identity Center integration to map Identity Center groups and users to Argo CD RBAC roles, allowing you to control who can access and manage applications in Argo CD.

## Integration with Other EKS Managed Capabilities
<a name="_integration_with_other_eks_managed_capabilities"></a>

Argo CD integrates with other EKS Managed Capabilities.
+  **AWS Controllers for Kubernetes (ACK)**: Use Argo CD to manage the deployment of ACK resources across multiple clusters, enabling GitOps workflows for your AWS infrastructure.
+  **kro (Kube Resource Orchestrator)**: Use Argo CD to deploy kro compositions across multiple clusters, enabling consistent resource composition across your Kubernetes estate.

## Getting Started with Argo CD
<a name="_getting_started_with_argo_cd"></a>

To get started with the EKS Capability for Argo CD:

1. Create and configure an IAM Capability Role with the necessary permissions for Argo CD to access your sources and manage applications.

1.  [Create an Argo CD capability resource](create-argocd-capability.md) on your EKS cluster through the AWS Console, AWS CLI, or your preferred infrastructure as code tool.

1. Configure repository access and register clusters for application deployment.

1. Create Application resources to deploy your applications from your declarative sources.

# Create an Argo CD capability
<a name="create-argocd-capability"></a>

This topic explains how to create an Argo CD capability on your Amazon EKS cluster.

## Prerequisites
<a name="_prerequisites"></a>

Before creating an Argo CD capability, ensure you have:
+ An existing Amazon EKS cluster running a supported Kubernetes version (all versions in standard and extended support are supported)
+  **AWS Identity Center configured** - Required for Argo CD authentication (local users are not supported)
+ An IAM Capability Role with permissions for Argo CD
+ Sufficient IAM permissions to create capability resources on EKS clusters
+  `kubectl` configured to communicate with your cluster
+ (Optional) The Argo CD CLI installed for easier cluster and repository management
+ (For CLI/eksctl) The appropriate CLI tool installed and configured

For instructions on creating the IAM Capability Role, see [Amazon EKS capability IAM role](capability-role.md). For Identity Center setup, see [Getting started with AWS Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/getting-started.html).

**Important**  
The IAM Capability Role you provide determines which AWS resources Argo CD can access. This includes Git repository access via CodeConnections and secrets in Secrets Manager. For guidance on creating an appropriate role with least-privilege permissions, see [Amazon EKS capability IAM role](capability-role.md) and [Security considerations for EKS Capabilities](capabilities-security.md).

## Choose your tool
<a name="_choose_your_tool"></a>

You can create an Argo CD capability using the AWS Management Console, AWS CLI, or eksctl:
+  [Create an Argo CD capability using the Console](argocd-create-console.md) - Use the Console for a guided experience
+  [Create an Argo CD capability using the AWS CLI](argocd-create-cli.md) - Use the AWS CLI for scripting and automation
+  [Create an Argo CD capability using eksctl](argocd-create-eksctl.md) - Use eksctl for a Kubernetes-native experience

## What happens when you create an Argo CD capability
<a name="_what_happens_when_you_create_an_argo_cd_capability"></a>

When you create an Argo CD capability:

1. EKS creates the Argo CD capability service in the AWS control plane

1. Custom Resource Definitions (CRDs) are installed in your cluster

1. An access entry is automatically created for your IAM Capability Role with capability-specific access entry policies that grant baseline Kubernetes permissions (see [Security considerations for EKS Capabilities](capabilities-security.md))

1. Argo CD begins watching for its custom resources (Applications, ApplicationSets, AppProjects)

1. The capability status changes from `CREATING` to `ACTIVE` 

1. The Argo CD UI becomes accessible through its URL

Once active, you can create Argo CD Applications in your cluster to deploy from your declarative sources.

**Note**  
The automatically created access entry does not grant permissions to deploy applications to clusters. To deploy applications, you must configure additional Kubernetes RBAC permissions for each target cluster. See [Register target clusters](argocd-register-clusters.md) for details on registering clusters and configuring access.

## Next steps
<a name="_next_steps"></a>

After creating the Argo CD capability:
+  [Argo CD concepts](argocd-concepts.md) - Learn about GitOps principles, sync policies, and multi-cluster patterns
+  [Working with Argo CD](working-with-argocd.md) - Configure repository access, register target clusters, and create Applications
+  [Argo CD considerations](argocd-considerations.md) - Explore multi-cluster architecture patterns and advanced configuration

# Create an Argo CD capability using the Console
<a name="argocd-create-console"></a>

This topic describes how to create an Argo CD capability using the AWS Management Console.

## Prerequisites
<a name="_prerequisites"></a>
+  **AWS Identity Center configured** – Argo CD requires AWS Identity Center for authentication. Local users are not supported. If you don’t have AWS Identity Center set up, see [Getting started with AWS Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/getting-started.html) to create an Identity Center instance, and [Add users](https://docs.aws.amazon.com/singlesignon/latest/userguide/addusers.html) and [Add groups](https://docs.aws.amazon.com/singlesignon/latest/userguide/addgroups.html) to create users and groups for Argo CD access.

## Create the Argo CD capability
<a name="_create_the_argo_cd_capability"></a>

1. Open the Amazon EKS console at https://console.aws.amazon.com/eks/home#/clusters.

1. Select your cluster name to open the cluster detail page.

1. Choose the **Capabilities** tab.

1. In the left navigation, choose **Argo CD**.

1. Choose **Create Argo CD capability**.

1. For **IAM Capability Role**:
   + If you already have an IAM Capability Role, select it from the dropdown
   + If you need to create a role, choose **Create Argo CD role** 

     This opens the IAM console in a new tab with a pre-populated trust policy and full read access to Secrets Manager. No other permissions are added by default, but you can add them if needed. If you plan to use CodeCommit repositories or other AWS services, add the appropriate permissions before creating the role.

     After creating the role, return to the EKS console and the role will be automatically selected.
**Note**  
If you plan to use the optional integrations with AWS Secrets Manager or AWS CodeConnections, you’ll need to add permissions to the role. For IAM policy examples and configuration guidance, see [Manage application secrets with AWS Secrets Manager](integration-secrets-manager.md) and [Connect to Git repositories with AWS CodeConnections](integration-codeconnections.md).

1. Configure AWS Identity Center integration:

   1. Select **Enable AWS Identity Center integration**.

   1. Choose your Identity Center instance from the dropdown.

   1. Configure role mappings for RBAC by assigning users or groups to Argo CD roles (ADMIN, EDITOR, or VIEWER)

1. Choose **Create**.

The capability creation process begins.

## Verify the capability is active
<a name="_verify_the_capability_is_active"></a>

1. On the **Capabilities** tab, view the Argo CD capability status.

1. Wait for the status to change from `CREATING` to `ACTIVE`.

1. Once active, the capability is ready to use.

For information about capability statuses and troubleshooting, see [Working with capability resources](working-with-capabilities.md).

## Access the Argo CD UI
<a name="_access_the_argo_cd_ui"></a>

After the capability is active, you can access the Argo CD UI:

1. On the Argo CD capability page, choose **Open Argo CD UI**.

1. The Argo CD UI opens in a new browser tab.

1. You can now create Applications and manage deployments through the UI.

## Next steps
<a name="_next_steps"></a>
+  [Working with Argo CD](working-with-argocd.md) - Configure repositories, register clusters, and create Applications
+  [Argo CD considerations](argocd-considerations.md) - Multi-cluster architecture and advanced configuration
+  [Working with capability resources](working-with-capabilities.md) - Manage your Argo CD capability resource

# Create an Argo CD capability using the AWS CLI
<a name="argocd-create-cli"></a>

This topic describes how to create an Argo CD capability using the AWS CLI.

## Prerequisites
<a name="_prerequisites"></a>
+  **AWS CLI** – Version `2.12.3` or later. To check your version, run `aws --version`. For more information, see [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the AWS Command Line Interface User Guide.
+  **`kubectl`** – A command line tool for working with Kubernetes clusters. For more information, see [Set up `kubectl` and `eksctl`](install-kubectl.md).
+  **AWS Identity Center configured** – Argo CD requires AWS Identity Center for authentication. Local users are not supported. If you don’t have AWS Identity Center set up, see [Getting started with AWS Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/getting-started.html) to create an Identity Center instance, and [Add users](https://docs.aws.amazon.com/singlesignon/latest/userguide/addusers.html) and [Add groups](https://docs.aws.amazon.com/singlesignon/latest/userguide/addgroups.html) to create users and groups for Argo CD access.

## Step 1: Create an IAM Capability Role
<a name="_step_1_create_an_iam_capability_role"></a>

Create a trust policy file:

```
cat > argocd-trust-policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "capabilities.eks.amazonaws.com"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:TagSession"
      ]
    }
  ]
}
EOF
```

Create the IAM role:

```
aws iam create-role \
  --role-name ArgoCDCapabilityRole \
  --assume-role-policy-document file://argocd-trust-policy.json
```

**Note**  
If you plan to use the optional integrations with AWS Secrets Manager or AWS CodeConnections, you’ll need to add permissions to the role. For IAM policy examples and configuration guidance, see [Manage application secrets with AWS Secrets Manager](integration-secrets-manager.md) and [Connect to Git repositories with AWS CodeConnections](integration-codeconnections.md).

## Step 2: Create the Argo CD capability
<a name="_step_2_create_the_argo_cd_capability"></a>

Create the Argo CD capability resource on your cluster.

First, set environment variables for your Identity Center configuration:

```
# Get your Identity Center instance ARN (replace region if your IDC instance is in a different region)
export IDC_INSTANCE_ARN=$(aws sso-admin list-instances --region region --query 'Instances[0].InstanceArn' --output text)

# Get a user ID for RBAC mapping (replace with your username and region if needed)
export IDC_USER_ID=$(aws identitystore list-users \
  --region region \
  --identity-store-id $(aws sso-admin list-instances --region region --query 'Instances[0].IdentityStoreId' --output text) \
  --query 'Users[?UserName==`your-username`].UserId' --output text)

echo "IDC_INSTANCE_ARN=$IDC_INSTANCE_ARN"
echo "IDC_USER_ID=$IDC_USER_ID"
```

Create the capability with Identity Center integration. Replace *region-code* with the AWS Region where your cluster is located, *my-cluster* with your cluster name, and *idc-region-code* with the AWS Region where your IAM Identity Center instance is configured:

```
aws eks create-capability \
  --region region-code \
  --cluster-name my-cluster \
  --capability-name my-argocd \
  --type ARGOCD \
  --role-arn arn:aws:iam::$(aws sts get-caller-identity --query Account --output text):role/ArgoCDCapabilityRole \
  --delete-propagation-policy RETAIN \
  --configuration '{
    "argoCd": {
      "awsIdc": {
        "idcInstanceArn": "'$IDC_INSTANCE_ARN'",
        "idcRegion": "idc-region-code"
      },
      "rbacRoleMappings": [{
        "role": "ADMIN",
        "identities": [{
          "id": "'$IDC_USER_ID'",
          "type": "SSO_USER"
        }]
      }]
    }
  }'
```

The command returns immediately, but the capability takes some time to become active as EKS creates the required capability infrastructure and components. EKS will install the Kubernetes Custom Resource Definitions related to this capability in your cluster as it is being created.

**Note**  
If you receive an error that the cluster doesn’t exist or you don’t have permissions, verify:
+ The cluster name is correct
+ Your AWS CLI is configured for the correct region
+ You have the required IAM permissions

## Step 3: Verify the capability is active
<a name="_step_3_verify_the_capability_is_active"></a>

Wait for the capability to become active. Replace *region-code* with the AWS Region where your cluster is located and *my-cluster* with your cluster name.

```
aws eks describe-capability \
  --region region-code \
  --cluster-name my-cluster \
  --capability-name my-argocd \
  --query 'capability.status' \
  --output text
```

The capability is ready when the status shows `ACTIVE`. Don’t continue to the next step until the status is `ACTIVE`.

You can also view the full capability details:

```
aws eks describe-capability \
  --region region-code \
  --cluster-name my-cluster \
  --capability-name my-argocd
```

## Step 4: Verify custom resources are available
<a name="_step_4_verify_custom_resources_are_available"></a>

After the capability is active, verify that Argo CD custom resources are available in your cluster:

```
kubectl api-resources | grep argoproj.io
```

You should see `Application` and `ApplicationSet` resource types listed.

## Next steps
<a name="_next_steps"></a>
+  [Working with Argo CD](working-with-argocd.md) - Configure repositories, register clusters, and create Applications
+  [Argo CD considerations](argocd-considerations.md) - Multi-cluster architecture and advanced configuration
+  [Working with capability resources](working-with-capabilities.md) - Manage your Argo CD capability resource

# Create an Argo CD capability using eksctl
<a name="argocd-create-eksctl"></a>

This topic describes how to create an Argo CD capability using eksctl.

**Note**  
The following steps require eksctl version `0.220.0` or later. To check your version, run `eksctl version`.

## Step 1: Create an IAM Capability Role
<a name="_step_1_create_an_iam_capability_role"></a>

Create a trust policy file:

```
cat > argocd-trust-policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "capabilities.eks.amazonaws.com"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:TagSession"
      ]
    }
  ]
}
EOF
```

Create the IAM role:

```
aws iam create-role \
  --role-name ArgoCDCapabilityRole \
  --assume-role-policy-document file://argocd-trust-policy.json
```

**Note**  
For this basic setup, no additional IAM policies are needed. If you plan to use Secrets Manager for repository credentials or CodeConnections, you’ll need to add permissions to the role. For IAM policy examples and configuration guidance, see [Manage application secrets with AWS Secrets Manager](integration-secrets-manager.md) and [Connect to Git repositories with AWS CodeConnections](integration-codeconnections.md).

## Step 2: Get your AWS Identity Center configuration
<a name="step_2_get_your_shared_aws_identity_center_configuration"></a>

Get your Identity Center instance ARN and user ID for RBAC configuration:

```
# Get your Identity Center instance ARN
aws sso-admin list-instances --query 'Instances[0].InstanceArn' --output text

# Get a user ID for admin access (replace 'your-username' with your Identity Center username)
aws identitystore list-users \
  --identity-store-id $(aws sso-admin list-instances --query 'Instances[0].IdentityStoreId' --output text) \
  --query 'Users[?UserName==`your-username`].UserId' --output text
```

Note these values - you’ll need them in the next step.

## Step 3: Create an eksctl configuration file
<a name="_step_3_create_an_eksctl_configuration_file"></a>

Create a file named `argocd-capability.yaml` with the following content. Replace the placeholder values with your cluster name, cluster Region, IAM role ARN, Identity Center instance ARN, Identity Center Region, and user ID:

```
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-cluster
  region: cluster-region-code

capabilities:
  - name: my-argocd
    type: ARGOCD
    roleArn: arn:aws:iam::111122223333:role/ArgoCDCapabilityRole
    deletePropagationPolicy: RETAIN
    configuration:
      argocd:
        awsIdc:
          idcInstanceArn: arn:aws:sso:::instance/ssoins-123abc
          idcRegion: idc-region-code
        rbacRoleMappings:
          - role: ADMIN
            identities:
              - id: 38414300-1041-708a-01af-5422d6091e34
                type: SSO_USER
```

**Note**  
You can add multiple users or groups to the RBAC mappings. For groups, use `type: SSO_GROUP` and provide the group ID. Available roles are `ADMIN`, `EDITOR`, and `VIEWER`.
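For example, the `rbacRoleMappings` section could map a user to `ADMIN` and a group to `VIEWER` (both IDs below are placeholders):

```
        rbacRoleMappings:
          - role: ADMIN
            identities:
              - id: 38414300-1041-708a-01af-5422d6091e34
                type: SSO_USER
          - role: VIEWER
            identities:
              - id: 48524411-2152-819b-12bf-6533e7102f45
                type: SSO_GROUP
```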

## Step 4: Create the Argo CD capability
<a name="_step_4_create_the_argo_cd_capability"></a>

Apply the configuration file:

```
eksctl create capability -f argocd-capability.yaml
```

The command returns immediately, but the capability takes some time to become active.

## Step 5: Verify the capability is active
<a name="_step_5_verify_the_capability_is_active"></a>

Check the capability status. Replace *region-code* with the AWS Region that your cluster is in and replace *my-cluster* with the name of your cluster.

```
eksctl get capability \
  --region region-code \
  --cluster my-cluster \
  --name my-argocd
```

The capability is ready when the status shows `ACTIVE`.

## Step 6: Verify custom resources are available
<a name="_step_6_verify_custom_resources_are_available"></a>

After the capability is active, verify that Argo CD custom resources are available in your cluster:

```
kubectl api-resources | grep argoproj.io
```

You should see `Application` and `ApplicationSet` resource types listed.

## Next steps
<a name="_next_steps"></a>
+  [Working with Argo CD](working-with-argocd.md) - Learn how to create and manage Argo CD Applications
+  [Argo CD considerations](argocd-considerations.md) - Configure SSO and multi-cluster access
+  [Working with capability resources](working-with-capabilities.md) - Manage your Argo CD capability resource

# Argo CD concepts
<a name="argocd-concepts"></a>

Argo CD implements GitOps by treating Git as the single source of truth for your application deployments. This topic walks through a practical example, then explains the core concepts you need to understand when working with the EKS Capability for Argo CD.

## Getting started with Argo CD
<a name="_getting_started_with_argo_cd"></a>

After creating the Argo CD capability (see [Create an Argo CD capability](create-argocd-capability.md)), you can start deploying applications. This example walks through registering a cluster and creating an Application.

### Step 1: Set up
<a name="_step_1_set_up"></a>

 **Register your cluster** (required)

Register the cluster where you want to deploy applications. For this example, we’ll register the same cluster where Argo CD is running (you can use the name `in-cluster` for compatibility with most Argo CD examples):

```
# Get your cluster ARN
CLUSTER_ARN=$(aws eks describe-cluster \
  --name my-cluster \
  --query 'cluster.arn' \
  --output text)

# Register the cluster using Argo CD CLI
argocd cluster add $CLUSTER_ARN \
  --aws-cluster-name $CLUSTER_ARN \
  --name in-cluster \
  --project default
```

**Note**  
For information about configuring the Argo CD CLI to work with the Argo CD capability in EKS, see [Using the Argo CD CLI with the managed capability](argocd-comparison.md#argocd-cli-configuration).

Alternatively, register the cluster using a Kubernetes Secret (see [Register target clusters](argocd-register-clusters.md) for details).

 **Configure repository access** (optional)

This example uses a public GitHub repository, so no repository configuration is required. For private repositories, configure access using AWS Secrets Manager, CodeConnections, or Kubernetes Secrets (see [Configure repository access](argocd-configure-repositories.md) for details).

For AWS services (ECR for Helm charts, CodeConnections, and CodeCommit), you can reference them directly in Application resources without creating a Repository. The Capability Role must have the required IAM permissions. See [Configure repository access](argocd-configure-repositories.md) for details.

### Step 2: Create an Application
<a name="_step_2_create_an_application"></a>

Create this Application manifest in `my-app.yaml`:

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD
    path: guestbook
  destination:
    name: in-cluster
    namespace: guestbook
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
```

Apply the Application:

```
kubectl apply -f my-app.yaml
```

After applying this Application, Argo CD:

1. Syncs the application from Git to your cluster (initial deployment)

1. Monitors the Git repository for changes

1. Automatically syncs subsequent changes to your cluster

1. Detects and corrects any drift from the desired state

1. Provides health status and sync history in the UI

View the application status:

```
kubectl get application guestbook -n argocd
```

You can also view the application using the Argo CD CLI or the Argo CD UI (accessible from the EKS console under your cluster’s Capabilities tab).

**Note**  
When using the Argo CD CLI with the managed capability, specify applications with the namespace prefix: `argocd app get argocd/guestbook`.

**Note**  
Use the cluster name in `destination.name` (the name you used when registering the cluster). The managed capability does not support the local in-cluster default (`kubernetes.default.svc`).

## Core concepts
<a name="_core_concepts"></a>

### GitOps principles and source types
<a name="_gitops_principles_and_source_types"></a>

Argo CD implements GitOps, where your application source is the single source of truth for deployments:
+  **Declarative** - Desired state is declared using YAML manifests, Helm charts, or Kustomize overlays
+  **Versioned** - Every change is tracked with complete audit trail
+  **Automated** - Argo CD continuously monitors sources and automatically syncs changes
+  **Self-healing** - Detects and corrects drift between desired and actual cluster state

 **Supported source types**:
+  **Git repositories** - GitHub, GitLab, Bitbucket, CodeCommit (HTTPS, SSH, or CodeConnections)
+  **Helm registries** - HTTP registries (like `https://aws.github.io/eks-charts`) and OCI registries (like `public.ecr.aws`)
+  **OCI images** - Container images containing manifests or Helm charts (like `oci://registry-1.docker.io/user/my-app`)

This flexibility allows organizations to choose sources that meet their security and compliance requirements. For example, organizations that restrict Git access from clusters can use ECR for Helm charts or OCI images.
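As a sketch, an Application source that pulls a Helm chart from an HTTP Helm registry might look like the following (chart name and version are illustrative):

```
source:
  repoURL: https://aws.github.io/eks-charts
  chart: aws-load-balancer-controller
  targetRevision: 1.8.1  # chart version, not a Git revision
```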

For more information, see [Application Sources](https://argo-cd.readthedocs.io/en/stable/user-guide/application-sources/) in the Argo CD documentation.

### Sync and reconciliation
<a name="_sync_and_reconciliation"></a>

Argo CD continuously monitors your sources and clusters to detect and correct differences:

1. Polls sources for changes (default: every 6 minutes)

1. Compares desired state with cluster state

1. Marks applications as `Synced` or `OutOfSync` 

1. Syncs changes automatically (if configured) or waits for manual approval

1. Monitors resource health after sync

 **Sync waves** control resource creation order using annotations:

```
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "0"  # Default if not specified
```

Resources are applied in wave order (lower numbers first, including negative numbers like `-1`). Wave `0` is the default if not specified. This allows you to create dependencies like namespaces (wave `-1`) before deployments (wave `0`) before services (wave `1`).
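For example, a namespace manifest annotated to apply before the workloads it contains might look like this sketch:

```
apiVersion: v1
kind: Namespace
metadata:
  name: guestbook
  annotations:
    argocd.argoproj.io/sync-wave: "-1"  # applied before resources in wave 0
```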

 **Self-healing** automatically reverts manual changes:

```
spec:
  syncPolicy:
    automated:
      selfHeal: true
```

**Note**  
The managed capability uses annotation-based resource tracking (not label-based) for better compatibility with Kubernetes conventions and other tools.

For detailed information about sync phases, hooks, and advanced patterns, see the [Argo CD sync documentation](https://argo-cd.readthedocs.io/en/stable/user-guide/sync-waves/).

### Application health
<a name="_application_health"></a>

Argo CD monitors the health of all resources in your application:

 **Health statuses**:
+  **Healthy** - All resources running as expected
+  **Progressing** - Resources being created or updated
+  **Degraded** - Some resources not healthy (pods crashing, jobs failing)
+  **Suspended** - Application intentionally paused
+  **Missing** - Resources defined in Git not present in cluster

Argo CD has built-in health checks for common Kubernetes resources (Deployments, StatefulSets, Jobs, etc.) and supports custom health checks for CRDs.

Application health is determined by all its resources - if any resource is `Degraded`, the application is `Degraded`.

For more information, see [Resource Health](https://argo-cd.readthedocs.io/en/stable/operator-manual/health/) in the Argo CD documentation.

### Multi-cluster patterns
<a name="_multi_cluster_patterns"></a>

Argo CD supports two main deployment patterns:

 **Hub-and-spoke** - Run Argo CD on a dedicated management cluster that deploys to multiple workload clusters:
+ Centralized control and visibility
+ Consistent policies across all clusters
+ One Argo CD instance to manage
+ Clear separation between control plane and workloads

 **Per-cluster** - Run Argo CD on each cluster, managing only that cluster’s applications:
+ Cluster separation (one failure doesn’t affect others)
+ Simpler networking (no cross-cluster communication)
+ Easier initial setup (no cluster registration)

Choose hub-and-spoke for platform teams managing many clusters, or per-cluster for independent teams or when clusters must be fully isolated.

For detailed multi-cluster configuration, see [Argo CD considerations](argocd-considerations.md).

### Projects
<a name="_projects"></a>

Projects provide logical grouping and access control for Applications:
+  **Source restrictions** - Limit which Git repositories can be used
+  **Destination restrictions** - Limit which clusters and namespaces can be targeted
+  **Resource restrictions** - Limit which Kubernetes resource types can be deployed
+  **RBAC integration** - Map projects to AWS Identity Center user and group IDs

Applications belong to a single project. If not specified, they use the `default` project, which has no restrictions by default. For production use, edit the `default` project to restrict access and create new projects with appropriate restrictions.
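For example, a minimal project that limits a team to a single repository and namespace might look like the following sketch (names, repository URL, and cluster ARN are illustrative):

```
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: payments
  namespace: argocd
spec:
  sourceNamespaces:
  - argocd
  sourceRepos:
  - https://github.com/myorg/payments-apps
  destinations:
  - namespace: payments
    server: arn:aws:eks:us-west-2:111122223333:cluster/production
```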

For project configuration and RBAC patterns, see [Configure Argo CD permissions](argocd-permissions.md).

### Sync options
<a name="_sync_options"></a>

Fine-tune sync behavior with common options:
+  `CreateNamespace=true` - Automatically create destination namespace
+  `ServerSideApply=true` - Use server-side apply for better conflict resolution
+  `SkipDryRunOnMissingResource=true` - Skip dry run when CRDs don’t exist yet (useful for kro instances)

```
spec:
  syncPolicy:
    syncOptions:
    - CreateNamespace=true
    - ServerSideApply=true
    - SkipDryRunOnMissingResource=true
```

For a complete list of sync options, see the [Argo CD sync options documentation](https://argo-cd.readthedocs.io/en/stable/user-guide/sync-options/).

## Next steps
<a name="_next_steps"></a>
+  [Configure repository access](argocd-configure-repositories.md) - Configure Git repository access
+  [Register target clusters](argocd-register-clusters.md) - Register target clusters for deployment
+  [Create Applications](argocd-create-application.md) - Create your first Application
+  [Argo CD considerations](argocd-considerations.md) - EKS-specific patterns, Identity Center integration, and multi-cluster configuration
+  [Argo CD Documentation](https://argo-cd.readthedocs.io/en/stable/) - Comprehensive Argo CD documentation including sync hooks, health checks, and advanced patterns

# Configure Argo CD permissions
<a name="argocd-permissions"></a>

The Argo CD managed capability integrates with AWS Identity Center for authentication and uses built-in RBAC roles for authorization. This topic explains how to configure permissions for users and teams.

## How permissions work with Argo CD
<a name="_how_permissions_work_with_argo_cd"></a>

The Argo CD capability uses AWS Identity Center for authentication and provides three built-in RBAC roles for authorization.

When a user accesses Argo CD:

1. They authenticate using AWS Identity Center (which can federate to your corporate identity provider)

1.  AWS Identity Center provides user and group information to Argo CD

1. Argo CD maps users and groups to RBAC roles based on your configuration

1. Users see only the applications and resources they have permission to access

## Built-in RBAC roles
<a name="_built_in_rbac_roles"></a>

The Argo CD capability provides three built-in roles that you map to AWS Identity Center users and groups. These are **globally scoped roles** that control access to Argo CD resources like projects, clusters, and repositories.

**Important**  
Global roles control access to Argo CD itself, not to project-scoped resources like Applications. EDITOR and VIEWER users cannot see or manage Applications by default—they need project roles to access project-scoped resources. See [Project roles and project-scoped access](#project-roles) for details on granting access to Applications and other project-scoped resources.

 **ADMIN** 

Full access to all Argo CD resources and settings:
+ Create, update, and delete Applications and ApplicationSets in any project
+ Manage Argo CD configuration
+ Register and manage deployment target clusters
+ Configure repository access
+ Create and manage projects
+ View all application status and history
+ List and access all clusters and repositories

 **EDITOR** 

Can update projects and configure project roles, but cannot change global Argo CD settings:
+ Update existing projects (cannot create or delete projects)
+ Configure project roles and permissions
+ View GPG keys and certificates
+ Cannot change global Argo CD configuration
+ Cannot manage clusters or repositories directly
+ Cannot see or manage Applications without project roles

 **VIEWER** 

Read-only access to Argo CD resources:
+ View project configurations
+ List all projects (including projects the user is not assigned to)
+ View GPG keys and certificates
+ Cannot list clusters or repositories
+ Cannot make any changes
+ Cannot see or manage Applications without project roles

**Note**  
To grant EDITOR or VIEWER users access to Applications, an ADMIN or EDITOR must create project roles that map Identity Center groups to specific permissions within a project.

## Project roles and project-scoped access
<a name="project-roles"></a>

Global roles (ADMIN, EDITOR, VIEWER) control access to Argo CD itself. Project roles control access to resources and capabilities within a specific project, including:
+  **Resources**: Applications, ApplicationSets, repository credentials, cluster credentials
+  **Capabilities**: Log access, exec access to application pods

 **Understanding the two-level permission model**:
+  **Global scope**: Built-in roles determine what users can do with projects, clusters, repositories, and Argo CD settings
+  **Project scope**: Project roles determine what users can do with resources and capabilities within a specific project

This means:
+ ADMIN users can access all project resources and capabilities without additional configuration
+ EDITOR and VIEWER users must be granted project roles to access project resources and capabilities
+ EDITOR users can create project roles to grant themselves and others access within projects they can update

 **Example workflow**:

1. An ADMIN maps an Identity Center group to the EDITOR role globally

1. An ADMIN creates a project for a team

1. The EDITOR configures project roles within that project to grant team members access to project-scoped resources

1. Team members (who may have VIEWER global role) can now see and manage Applications in that project based on their project role permissions

For details on configuring project roles, see [Project-based access control](#_project_based_access_control).

## Configure role mappings
<a name="_configure_role_mappings"></a>

Map AWS Identity Center users and groups to Argo CD roles when creating or updating the capability.

 **Example role mapping**:

```
{
  "rbacRoleMapping": {		 	 	 
    "ADMIN": ["AdminGroup", "alice@example.com"],
    "EDITOR": ["DeveloperGroup", "DevOpsTeam"],
    "VIEWER": ["ReadOnlyGroup", "bob@example.com"]
  }
}
```

**Note**  
Role names are case-sensitive and must be uppercase (ADMIN, EDITOR, VIEWER).

**Important**  
EKS Capabilities integration with AWS Identity Center supports up to 1,000 identities per Argo CD capability. An identity can be a user or a group.

 **Update role mappings**:

```
aws eks update-capability \
  --region us-east-1 \
  --cluster-name cluster \
  --capability-name capname \
  --role-arn "arn:aws:iam::111122223333:role/EKSCapabilityRole" \
  --configuration '{
    "argoCd": {
      "rbacRoleMappings": {
        "addOrUpdateRoleMappings": [
          {
            "role": "ADMIN",
            "identities": [
              { "id": "686103e0-f051-7068-b225-e6392b959d9e", "type": "SSO_USER" }
            ]
          }
        ]
      }
    }
  }'
```

## Admin account usage
<a name="_admin_account_usage"></a>

The admin account is designed for initial setup and administrative tasks like registering clusters and configuring repositories.

 **When admin account is appropriate**:
+ Initial capability setup and configuration
+ Solo development or quick demonstrations
+ Administrative tasks (cluster registration, repository configuration, project creation)

 **Best practices for admin account**:
+ Don’t commit account tokens to version control
+ Rotate tokens immediately if exposed
+ Limit account token usage to setup and administrative tasks
+ Set short expiration times (maximum 12 hours)
+ Only 5 account tokens can be created at any given time

 **When to use project-based access instead**:
+ Shared development environments with multiple users
+ Any environment that resembles production
+ When you need audit trails of who performed actions
+ When you need to enforce resource restrictions or access boundaries

For production environments and multi-user scenarios, use project-based access control with dedicated RBAC roles mapped to AWS Identity Center groups.

## Project-based access control
<a name="_project_based_access_control"></a>

Use Argo CD Projects (AppProject) to provide fine-grained access control and resource isolation for teams.

**Important**  
Before assigning users or groups to project-specific roles, you must first map them to a global Argo CD role (ADMIN, EDITOR, or VIEWER) in the capability configuration. Users cannot access Argo CD without a global role mapping, even if they’re assigned to project roles.  
Consider mapping users to the VIEWER role globally, then grant additional permissions through project-specific roles. This provides baseline access while allowing fine-grained control at the project level.

Projects provide:
+  **Source restrictions**: Limit which Git repositories can be used
+  **Destination restrictions**: Limit which clusters and namespaces can be targeted
+  **Resource restrictions**: Limit which Kubernetes resource types can be deployed
+  **RBAC integration**: Map projects to AWS Identity Center groups or Argo CD roles

 **Example project for team isolation**:

```
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-a
  namespace: argocd
spec:
  description: Team A applications

  # Required: Specify which namespaces this project watches for Applications
  sourceNamespaces:
  - argocd

  # Source restrictions
  sourceRepos:
  - https://github.com/myorg/team-a-apps

  # Destination restrictions
  destinations:
  - namespace: team-a-*
    server: arn:aws:eks:us-west-2:111122223333:cluster/production

  # Resource restrictions
  clusterResourceWhitelist:
  - group: ''
    kind: Namespace
  namespaceResourceWhitelist:
  - group: 'apps'
    kind: Deployment
  - group: ''
    kind: Service
  - group: ''
    kind: ConfigMap
```

### Source namespaces
<a name="_source_namespaces"></a>

When using the EKS Argo CD capability, the `spec.sourceNamespaces` field is required in AppProject definitions. This field specifies which namespace can contain Applications or ApplicationSets that reference this project.

**Important**  
The EKS Argo CD capability only supports a single namespace for Applications and ApplicationSets—the namespace you specified when creating the capability (typically `argocd`). This differs from open source Argo CD which supports multiple namespaces.

 **AppProject configuration** 

All AppProjects must include the capability’s configured namespace in `sourceNamespaces`:

```
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-a-project
  namespace: argocd
spec:
  description: Applications for Team A

  # Required: Specify the capability's configured namespace (configuration.argoCd.namespace)
  sourceNamespaces:
    - argocd  # Must match your capability's namespace configuration

  # Source repositories this project can deploy from
  sourceRepos:
    - 'https://github.com/my-org/team-a-*'

  # Destination restrictions
  destinations:
    - namespace: 'team-a-*'
      server: arn:aws:eks:us-west-2:111122223333:cluster/my-cluster
```

**Note**  
If you omit the capability’s namespace from `sourceNamespaces`, Applications or ApplicationSets in that namespace cannot reference this project, resulting in deployment failures.

 **Assign users to projects**:

Project roles grant EDITOR and VIEWER users access to project resources (Applications, ApplicationSets, repository and cluster credentials) and capabilities (logs, exec). Without project roles, these users cannot access these resources even if they have global role access.

ADMIN users have access to all Applications without needing project roles.

 **Example: Granting Application access to team members** 

```
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-a
  namespace: argocd
spec:
  # ... project configuration ...

  sourceNamespaces:
  - argocd

  # Project roles grant Application-level access
  roles:
  - name: developer
    description: Team A developers - can manage Applications
    policies:
    - p, proj:team-a:developer, applications, *, team-a/*, allow
    - p, proj:team-a:developer, clusters, get, *, allow  # See cluster names in UI
    groups:
    - 686103e0-f051-7068-b225-e6392b959d9e  # Identity Center group ID

  - name: viewer
    description: Team A viewers - read-only Application access
    policies:
    - p, proj:team-a:viewer, applications, get, team-a/*, allow
    - p, proj:team-a:viewer, clusters, get, *, allow  # See cluster names in UI
    groups:
    - 786203e0-f051-7068-b225-e6392b959d9f  # Identity Center group ID
```

**Note**  
Include `clusters, get, *, allow` in project roles to allow users to see cluster names in the UI. Without this permission, the destination cluster displays as "unknown".

 **Understanding project role policies**:

The policy format is: `p, proj:<project>:<role>, <resource>, <action>, <object>, <allow/deny>` 

 **Resource policies**:
+  `applications, *, team-a/*, allow` - Full access to all Applications in the team-a project
+  `applications, get, team-a/*, allow` - Read-only access to Applications
+  `applications, sync, team-a/*, allow` - Can sync Applications but not create/delete
+  `applications, delete, team-a/*, allow` - Can delete Applications (use with caution)
+  `applicationsets, *, team-a/*, allow` - Full access to ApplicationSets
+  `repositories, *, *, allow` - Access to repository credentials
+  `clusters, *, *, allow` - Access to cluster credentials

 **Capability policies**:
+  `logs, *, team-a/*, allow` - Access to application logs
+  `exec, *, team-a/*, allow` - Exec access to application pods

**Note**  
EDITOR users can create project roles to grant themselves and others permissions within projects they can update. This allows team leads to control access to project-scoped resources for their team without requiring ADMIN intervention.

**Note**  
Use Identity Center group IDs (not group names) in the `groups` field. You can also use Identity Center user IDs for individual user access. Find these IDs in the AWS Identity Center console or using the AWS CLI.

## Common permission patterns
<a name="_common_permission_patterns"></a>

 **Pattern 1: Admin team with full access** 

```
{
  "rbacRoleMapping": {		 	 	 
    "ADMIN": ["PlatformTeam", "SRETeam"]
  }
}
```

ADMIN users can see and manage all project-scoped resources without additional configuration.

 **Pattern 2: Team leads manage projects, developers access via project roles** 

```
{
  "rbacRoleMapping": {		 	 	 
    "ADMIN": ["PlatformTeam"],
    "EDITOR": ["TeamLeads"],
    "VIEWER": ["AllDevelopers"]
  }
}
```

1. ADMIN creates projects for each team

1. Team leads (EDITOR) configure project roles to grant their developers access to project resources (Applications, ApplicationSets, credentials) and capabilities (logs, exec)

1. Developers (VIEWER) can only access resources and capabilities allowed by their project roles

 **Pattern 3: Team-based access with project roles** 

1. ADMIN creates projects and maps team leads to EDITOR role globally

1. Team leads (EDITOR) assign team members to project roles within their projects

1. Team members only need VIEWER global role—project roles provide access to project resources and capabilities

```
{
  "rbacRoleMapping": {		 	 	 
    "ADMIN": ["PlatformTeam"],
    "EDITOR": ["TeamLeads"],
    "VIEWER": ["AllDevelopers"]
  }
}
```

## Best practices
<a name="_best_practices"></a>

 **Use groups instead of individual users**: Map AWS Identity Center groups to Argo CD roles rather than individual users for easier management.

 **Start with least privilege**: Begin with VIEWER access and grant EDITOR or ADMIN as needed.

 **Use projects for team isolation**: Create separate AppProjects for different teams or environments to enforce boundaries.

 **Leverage Identity Center federation**: Configure AWS Identity Center to federate with your corporate identity provider for centralized user management.

 **Regular access reviews**: Periodically review role mappings and project assignments to ensure appropriate access levels.

 **Limit cluster access**: Remember that Argo CD RBAC controls access to Argo CD resources and operations, but is independent of Kubernetes RBAC. Users with Argo CD access can deploy applications to any cluster that Argo CD can reach. Limit which clusters Argo CD can access and use project destination restrictions to control where applications can be deployed.

## AWS service permissions
<a name="shared_aws_service_permissions"></a>

To use AWS services directly in Application resources (without creating Repository resources), attach the required IAM permissions to the Capability Role.

 **ECR for Helm charts**:

```
{
  "Version": "2012-10-17",		 	 	 
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}
```

 **CodeCommit repositories**:

```
{
  "Version": "2012-10-17",		 	 	 
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "codecommit:GitPull"
      ],
      "Resource": "arn:aws:codecommit:region:account-id:repository-name"
    }
  ]
}
```

 **CodeConnections (GitHub, GitLab, Bitbucket)**:

```
{
  "Version": "2012-10-17",		 	 	 
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "codeconnections:UseConnection"
      ],
      "Resource": "arn:aws:codeconnections:region:account-id:connection/connection-id"
    }
  ]
}
```

See [Configure repository access](argocd-configure-repositories.md) for details on using these integrations.

## Next steps
<a name="_next_steps"></a>
+  [Working with Argo CD](working-with-argocd.md) - Learn how to create applications and manage deployments
+  [Argo CD concepts](argocd-concepts.md) - Understand Argo CD concepts including Projects
+  [Security considerations for EKS Capabilities](capabilities-security.md) - Review security best practices for capabilities

# Working with Argo CD
<a name="working-with-argocd"></a>

With Argo CD, you define applications in Git repositories and Argo CD automatically syncs them to your Kubernetes clusters. This enables declarative, version-controlled application deployment with automated drift detection.

## Prerequisites
<a name="_prerequisites"></a>

Before working with Argo CD, you need:
+ An EKS cluster with the Argo CD capability created (see [Create an Argo CD capability](create-argocd-capability.md))
+ A Git repository containing Kubernetes manifests
+  `kubectl` configured to communicate with your cluster

## Common tasks
<a name="_common_tasks"></a>

The following topics guide you through common Argo CD tasks:

 ** [Configure repository access](argocd-configure-repositories.md) ** - Configure Argo CD to access your Git repositories using AWS Secrets Manager, AWS CodeConnections, or Kubernetes Secrets.

 ** [Register target clusters](argocd-register-clusters.md) ** - Register target clusters where Argo CD will deploy applications.

 ** [Working with Argo CD Projects](argocd-projects.md) ** - Organize applications and enforce security boundaries using Projects for multi-tenant environments.

 ** [Create Applications](argocd-create-application.md) ** - Create Applications that deploy from Git repositories with automated or manual sync policies.

 ** [Use ApplicationSets](argocd-applicationsets.md) ** - Use ApplicationSets to deploy applications across multiple environments or clusters using templates and generators.

## Access the Argo CD UI
<a name="_access_the_argo_cd_ui"></a>

Access the Argo CD UI through the EKS console:

1. Open the Amazon EKS console

1. Select your cluster

1. Choose the **Capabilities** tab

1. Choose **Argo CD** 

1. Choose **Open Argo CD UI** 

The UI provides visual application topology, sync status and history, resource health and events, manual sync controls, and application management.

## Upstream documentation
<a name="_upstream_documentation"></a>

For detailed information about Argo CD features:
+  [Argo CD Documentation](https://argo-cd.readthedocs.io/) - Complete user guide
+  [Application Spec](https://argo-cd.readthedocs.io/en/stable/user-guide/application-specification/) - Full Application API reference
+  [ApplicationSet Guide](https://argo-cd.readthedocs.io/en/stable/user-guide/application-set/) - ApplicationSet patterns and examples
+  [Argo CD GitHub](https://github.com/argoproj/argo-cd) - Source code and examples

# Configure repository access
<a name="argocd-configure-repositories"></a>

Before deploying applications, configure Argo CD to access your Git repositories and Helm chart registries. Argo CD supports multiple authentication methods for GitHub, GitLab, Bitbucket, AWS CodeCommit, and AWS ECR.

**Note**  
For direct AWS service integrations (ECR Helm charts, CodeCommit repositories, and CodeConnections), you can reference them directly in Application resources without creating Repository configurations. The Capability Role must have the required IAM permissions. See [Configure Argo CD permissions](argocd-permissions.md) for details.

## Prerequisites
<a name="_prerequisites"></a>
+ An EKS cluster with the Argo CD capability created
+ Git repositories containing Kubernetes manifests
+  `kubectl` configured to communicate with your cluster

**Note**  
 AWS CodeConnections can connect to Git servers located in AWS Cloud or on-premises. For more information, see [AWS CodeConnections](https://docs.aws.amazon.com/codeconnections/latest/userguide/welcome.html).

## Authentication methods
<a name="_authentication_methods"></a>


| Method | Use Case | IAM Permissions Required | 
| --- | --- | --- | 
|   **Direct integration with AWS services**   | 
|  CodeCommit  |  Direct integration with AWS CodeCommit Git repositories. No Repository configuration needed.  |   `codecommit:GitPull`   | 
|  CodeConnections  |  Connect to GitHub, GitLab, or Bitbucket with managed authentication. Requires connection setup.  |   `codeconnections:UseConnection`   | 
|  ECR OCI Artifacts  |  Direct integration with AWS ECR for OCI Helm charts and manifest images. No Repository configuration needed.  |   `arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPullOnly`   | 
|   **Repository configuration with credentials**   | 
|   AWS Secrets Manager (Username/Token)  |  Store personal access tokens or passwords. Enables credential rotation without Kubernetes access.  |   `arn:aws:iam::aws:policy/AWSSecretsManagerClientReadOnlyAccess`   | 
|   AWS Secrets Manager (SSH Key)  |  Use SSH key authentication. Enables credential rotation without Kubernetes access.  |   `arn:aws:iam::aws:policy/AWSSecretsManagerClientReadOnlyAccess`   | 
|   AWS Secrets Manager (GitHub App)  |  GitHub App authentication with private key. Enables credential rotation without Kubernetes access.  |   `arn:aws:iam::aws:policy/AWSSecretsManagerClientReadOnlyAccess`   | 
|  Kubernetes Secret  |  Standard Argo CD method using in-cluster secrets  |  None (permissions handled by EKS Access Entry with Kubernetes RBAC)  | 

## Direct access to AWS services
<a name="direct_access_to_shared_aws_services"></a>

For AWS services, you can reference them directly in Application resources without creating Repository configurations. The Capability Role must have the required IAM permissions.

### CodeCommit repositories
<a name="_codecommit_repositories"></a>

Reference CodeCommit repositories directly in Applications:

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  source:
    repoURL: https://git-codecommit.region.amazonaws.com/v1/repos/repository-name
    targetRevision: main
    path: kubernetes/manifests
```

Required Capability Role permissions:

```
{
  "Version": "2012-10-17",		 	 	 
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "codecommit:GitPull",
      "Resource": "arn:aws:codecommit:region:account-id:repository-name"
    }
  ]
}
```

### CodeConnections
<a name="_codeconnections"></a>

Reference GitHub, GitLab, or Bitbucket repositories through CodeConnections. The repository URL format is derived from the CodeConnections connection ARN.

The following example Application shows this URL format:

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  source:
    repoURL: https://codeconnections.region.amazonaws.com/git-http/account-id/region/connection-id/owner/repository.git
    targetRevision: main
    path: kubernetes/manifests
```

Required Capability Role permissions:

```
{
  "Version": "2012-10-17",		 	 	 
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "codeconnections:UseConnection",
      "Resource": "arn:aws:codeconnections:region:account-id:connection/connection-id"
    }
  ]
}
```

### ECR Helm charts
<a name="_ecr_helm_charts"></a>

ECR stores Helm charts as OCI artifacts. Argo CD supports two ways to reference them:

 **Helm format** (recommended for Helm charts):

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app-helm
  namespace: argocd
spec:
  source:
    repoURL: account-id.dkr.ecr.region.amazonaws.com/repository-name
    targetRevision: chart-version
    chart: chart-name
    helm:
      valueFiles:
        - values.yaml
```

Note: Do not include the `oci://` prefix when using Helm format. Use the `chart` field to specify the chart name.

 **OCI format** (for OCI artifacts with Kubernetes manifests):

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app-oci
  namespace: argocd
spec:
  source:
    repoURL: oci://account-id.dkr.ecr.region.amazonaws.com/repository-name
    targetRevision: artifact-version
    path: path-to-manifests
```

Note: Include the `oci://` prefix when using OCI format. Use the `path` field instead of `chart`.

Required Capability Role permissions - attach the managed policy:

```
arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPullOnly
```

This policy includes the necessary ECR permissions: `ecr:GetAuthorizationToken`, `ecr:BatchGetImage`, and `ecr:GetDownloadUrlForLayer`.

## Using AWS Secrets Manager
<a name="using_shared_aws_secrets_manager"></a>

Store repository credentials in Secrets Manager and reference them in Argo CD Repository configurations. Using Secrets Manager enables automated credential rotation without requiring Kubernetes RBAC access—credentials can be rotated using IAM permissions to Secrets Manager, and Argo CD automatically reads the updated values.

**Note**  
For credential reuse across multiple repositories (for example, all repositories under a GitHub organization), use repository credential templates with `argocd.argoproj.io/secret-type: repo-creds`. This provides better UX than creating individual repository secrets. For more information, see [Repository Credentials](https://argo-cd.readthedocs.io/en/stable/operator-manual/argocd-repo-creds-yaml/) in the Argo CD documentation.
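As a sketch, a credential template that applies to every repository under an organization follows the upstream `repo-creds` format. The organization URL and secret ARN below are placeholders, and this example assumes the capability’s `secretArn` field works for credential templates the same way it does for repository secrets:

```
apiVersion: v1
kind: Secret
metadata:
  name: myorg-repo-creds
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repo-creds
stringData:
  type: git
  # URL prefix match: applies to all repositories under this organization
  url: https://github.com/myorg
  secretArn: arn:aws:secretsmanager:us-west-2:111122223333:secret:argocd/myorg-creds-AbCdEf
```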

### Username and token authentication
<a name="_username_and_token_authentication"></a>

For HTTPS repositories with personal access tokens or passwords:

 **Create the secret in Secrets Manager**:

```
aws secretsmanager create-secret \
  --name argocd/my-repo \
  --description "GitHub credentials for Argo CD" \
  --secret-string '{"username":"your-username","token":"your-personal-access-token"}'
```

 **Optional TLS client certificate fields** (for private Git servers):

```
aws secretsmanager create-secret \
  --name argocd/my-private-repo \
  --secret-string '{
    "username":"your-username",
    "token":"your-token",
    "tlsClientCertData":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCi4uLgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0t",
    "tlsClientCertKey":"LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCi4uLgotLS0tLUVORCBQUklWQVRFIEtFWS0tLS0t"
  }'
```

**Note**  
The `tlsClientCertData` and `tlsClientCertKey` values must be base64 encoded.
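If you are unsure how to produce these values, the following sketch encodes placeholder PEM files with the `base64` utility (the file names and contents are illustrative; substitute your real certificate and key):

```
# Placeholder PEM files for illustration only
printf -- '-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n' > client.crt
printf -- '-----BEGIN PRIVATE KEY-----\nMIIE...\n-----END PRIVATE KEY-----\n' > client.key

# Encode without line wrapping (GNU coreutils; on macOS use `base64 -b 0`)
tls_cert_data=$(base64 -w0 < client.crt)
tls_cert_key=$(base64 -w0 < client.key)

# The encoded value decodes back to the original PEM content
echo "$tls_cert_data" | base64 -d
```

Paste the resulting values into the `tlsClientCertData` and `tlsClientCertKey` fields of the secret string.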

 **Create a Repository Secret referencing Secrets Manager**:

```
apiVersion: v1
kind: Secret
metadata:
  name: my-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/your-org/your-repo
  secretArn: arn:aws:secretsmanager:us-west-2:111122223333:secret:argocd/my-repo-AbCdEf
  project: default
```

### SSH key authentication
<a name="_ssh_key_authentication"></a>

For SSH-based Git access, store the private key as plaintext (not JSON):

 **Create the secret with SSH private key**:

```
aws secretsmanager create-secret \
  --name argocd/my-repo-ssh \
  --description "SSH key for Argo CD" \
  --secret-string "-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn
...
-----END OPENSSH PRIVATE KEY-----"
```

 **Create a Repository Secret for SSH**:

```
apiVersion: v1
kind: Secret
metadata:
  name: my-repo-ssh
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: git@github.com:your-org/your-repo.git
  secretArn: arn:aws:secretsmanager:us-west-2:111122223333:secret:argocd/my-repo-ssh-AbCdEf
  project: default
```

### GitHub App authentication
<a name="_github_app_authentication"></a>

For GitHub App authentication with a private key:

 **Create the secret with GitHub App credentials**:

```
aws secretsmanager create-secret \
  --name argocd/github-app \
  --description "GitHub App credentials for Argo CD" \
  --secret-string '{
    "githubAppPrivateKeySecret":"LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQouLi4KLS0tLS1FTkQgUlNBIFBSSVZBVEUgS0VZLS0tLS0=",
    "githubAppID":"123456",
    "githubAppInstallationID":"12345678"
  }'
```

**Note**  
The `githubAppPrivateKeySecret` value must be base64 encoded.

 **Optional field for GitHub Enterprise**:

```
aws secretsmanager create-secret \
  --name argocd/github-enterprise-app \
  --secret-string '{
    "githubAppPrivateKeySecret":"LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQouLi4KLS0tLS1FTkQgUlNBIFBSSVZBVEUgS0VZLS0tLS0=",
    "githubAppID":"123456",
    "githubAppInstallationID":"12345678",
    "githubAppEnterpriseBaseUrl":"https://github.example.com/api/v3"
  }'
```

 **Create a Repository Secret for GitHub App**:

```
apiVersion: v1
kind: Secret
metadata:
  name: my-repo-github-app
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/your-org/your-repo
  secretArn: arn:aws:secretsmanager:us-west-2:111122223333:secret:argocd/github-app-AbCdEf
  project: default
```

### Repository credential templates
<a name="_repository_credential_templates"></a>

For credential reuse across multiple repositories (for example, all repositories under a GitHub organization or user), use repository credential templates with `argocd.argoproj.io/secret-type: repo-creds`. This is simpler than creating an individual repository secret for each repository.

 **Create a repository credential template**:

```
apiVersion: v1
kind: Secret
metadata:
  name: github-org-creds
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repo-creds
stringData:
  type: git
  url: https://github.com/your-org
  secretArn: arn:aws:secretsmanager:us-west-2:111122223333:secret:argocd/github-org-AbCdEf
```

This credential template applies to all repositories matching the URL prefix `https://github.com/your-org`. You can then reference any repository under this organization in Applications without creating additional secrets.
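For example, with the credential template above in place, an Application can reference any repository under the organization directly (the repository URL, revision, and path are illustrative):

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: another-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/another-repo
    targetRevision: main
    path: manifests
  destination:
    name: in-cluster
    namespace: default
```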

For more information, see [Repository Credentials](https://argo-cd.readthedocs.io/en/stable/operator-manual/argocd-repo-creds-yaml/) in the Argo CD documentation.

**Important**  
Ensure your IAM Capability Role has the managed policy `arn:aws:iam::aws:policy/AWSSecretsManagerClientReadOnlyAccess` attached, or equivalent permissions including `secretsmanager:GetSecretValue` and KMS decrypt permissions. See [Argo CD considerations](argocd-considerations.md) for IAM policy configuration.

## Using AWS CodeConnections
<a name="using_shared_aws_codeconnections"></a>

For CodeConnections integration, see [Connect to Git repositories with AWS CodeConnections](integration-codeconnections.md).

CodeConnections provides managed authentication for GitHub, GitLab, and Bitbucket without storing credentials.

## Using Kubernetes Secrets
<a name="_using_kubernetes_secrets"></a>

Store credentials directly in Kubernetes using the standard Argo CD method.

 **For HTTPS with personal access token**:

```
apiVersion: v1
kind: Secret
metadata:
  name: my-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/your-org/your-repo
  username: your-username
  password: your-personal-access-token
```

 **For SSH**:

```
apiVersion: v1
kind: Secret
metadata:
  name: my-repo-ssh
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: git@github.com:your-org/your-repo.git
  sshPrivateKey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ... your private key ...
    -----END OPENSSH PRIVATE KEY-----
```

## CodeCommit repositories
<a name="_codecommit_repositories_2"></a>

For AWS CodeCommit, grant your IAM Capability Role CodeCommit permissions (`codecommit:GitPull`).
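A minimal inline IAM policy granting pull access to a single repository might look like the following sketch (the repository ARN is illustrative):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "codecommit:GitPull",
      "Resource": "arn:aws:codecommit:us-west-2:111122223333:my-repo"
    }
  ]
}
```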

Configure the repository:

```
apiVersion: v1
kind: Secret
metadata:
  name: codecommit-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://git-codecommit.us-west-2.amazonaws.com/v1/repos/my-repo
  project: default
```

For detailed IAM policy configuration, see [Argo CD considerations](argocd-considerations.md).

## Verify repository connection
<a name="_verify_repository_connection"></a>

Check connection status through the Argo CD UI under Settings → Repositories. The UI shows connection status and any authentication errors.

Repository Secrets do not include status information.
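You can list the configured Repository Secrets with `kubectl` (this shows the configuration only, not the connection status):

```
kubectl get secrets -n argocd -l argocd.argoproj.io/secret-type=repository
```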

## Additional resources
<a name="_additional_resources"></a>
+  [Register target clusters](argocd-register-clusters.md) - Register target clusters for deployments
+  [Create Applications](argocd-create-application.md) - Create your first Application
+  [Argo CD considerations](argocd-considerations.md) - IAM permissions and security configuration
+  [Private Repositories](https://argo-cd.readthedocs.io/en/stable/user-guide/private-repositories/) - Upstream repository configuration reference

# Register target clusters
<a name="argocd-register-clusters"></a>

Register clusters to enable Argo CD to deploy applications to them. You can register the same cluster where Argo CD is running (local cluster) or remote clusters in different accounts or Regions. A registered cluster remains in an Unknown connection state until you create an Application that targets it. To create an Argo CD application after your cluster is registered, see [Create Applications](argocd-create-application.md).

## Prerequisites
<a name="_prerequisites"></a>
+ An EKS cluster with the Argo CD capability created
+  `kubectl` configured to communicate with your cluster
+ For remote clusters: appropriate IAM permissions and access entries

## Register the local cluster
<a name="_register_the_local_cluster"></a>

To deploy applications to the same cluster where Argo CD is running, register it as a deployment target.

**Important**  
The Argo CD capability does not automatically register the local cluster. You must explicitly register it to deploy applications to the same cluster. You can use the cluster name `in-cluster` for compatibility with most Argo CD examples online.

**Note**  
An EKS Access Entry is automatically created for the local cluster with the Argo CD Capability Role, but no Kubernetes RBAC permissions are granted by default. This follows the principle of least privilege—you must explicitly configure the permissions Argo CD needs based on your use case. For example, if you only use this cluster as an Argo CD hub to manage remote clusters, it doesn’t need any local deployment permissions. See the Access Entry RBAC requirements section below for configuration options.

 **Using the Argo CD CLI**:

```
argocd cluster add <cluster-context-name> \
  --aws-cluster-name arn:aws:eks:us-west-2:111122223333:cluster/my-cluster \
  --name local-cluster
```

 **Using a Kubernetes Secret**:

```
apiVersion: v1
kind: Secret
metadata:
  name: local-cluster
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
stringData:
  name: local-cluster
  server: arn:aws:eks:us-west-2:111122223333:cluster/my-cluster
  project: default
```

Apply the configuration:

```
kubectl apply -f local-cluster.yaml
```

**Note**  
Use the EKS cluster ARN in the `server` field, not the Kubernetes API server URL. The managed capability requires ARNs to identify clusters. The default `kubernetes.default.svc` is not supported.

## Register remote clusters
<a name="_register_remote_clusters"></a>

To deploy to remote clusters:

 **Step 1: Create the access entry on the remote cluster** 

Replace *region-code* with the AWS Region that your remote cluster is in, replace *remote-cluster* with the name of your remote cluster, and replace the ARN with your Argo CD capability role ARN.

```
aws eks create-access-entry \
  --region region-code \
  --cluster-name remote-cluster \
  --principal-arn arn:aws:iam::111122223333:role/ArgoCDCapabilityRole \
  --type STANDARD
```

 **Step 2: Associate an access policy with Kubernetes RBAC permissions** 

The Access Entry requires Kubernetes RBAC permissions for Argo CD to deploy applications. For getting started quickly, you can use the `AmazonEKSClusterAdminPolicy`:

```
aws eks associate-access-policy \
  --region region-code \
  --cluster-name remote-cluster \
  --principal-arn arn:aws:iam::111122223333:role/ArgoCDCapabilityRole \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster
```

**Important**  
The `AmazonEKSClusterAdminPolicy` provides full cluster-admin access (equivalent to `system:masters`). This is convenient for getting started but should not be used in production. For production environments, use more restrictive permissions by associating the Access Entry with custom Kubernetes groups and creating appropriate Role or ClusterRole bindings. See the production setup section below for least privilege configuration.

 **Step 3: Register the cluster in Argo CD** 

 **Using the Argo CD CLI**:

```
argocd cluster add <cluster-context-name> \
  --aws-cluster-name arn:aws:eks:us-west-2:111122223333:cluster/remote-cluster \
  --name remote-cluster
```

 **Using a Kubernetes Secret**:

```
apiVersion: v1
kind: Secret
metadata:
  name: remote-cluster
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
stringData:
  name: remote-cluster
  server: arn:aws:eks:us-west-2:111122223333:cluster/remote-cluster
  project: default
```

Apply the configuration:

```
kubectl apply -f remote-cluster.yaml
```

## Cross-account clusters
<a name="_cross_account_clusters"></a>

To deploy to clusters in different AWS accounts:

1. In the target account, create an Access Entry on the target EKS cluster using the Argo CD IAM Capability Role ARN from the source account as the principal

1. Associate an access policy with appropriate Kubernetes RBAC permissions

1. Register the cluster in Argo CD using its EKS cluster ARN

No additional IAM role creation or trust policy configuration is required—EKS Access Entries handle cross-account access.

The cluster ARN format includes the region, so cross-region deployments use the same process as same-region deployments.
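As a sketch, steps 1 and 2 run in the target account, for example with a named AWS CLI profile, using the quick-start access policy from the remote cluster procedure (the profile, Region, and cluster names are illustrative):

```
aws eks create-access-entry \
  --profile target-account \
  --region eu-west-1 \
  --cluster-name prod-eu-cluster \
  --principal-arn arn:aws:iam::111122223333:role/ArgoCDCapabilityRole \
  --type STANDARD

aws eks associate-access-policy \
  --profile target-account \
  --region eu-west-1 \
  --cluster-name prod-eu-cluster \
  --principal-arn arn:aws:iam::111122223333:role/ArgoCDCapabilityRole \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster
```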

## Verify cluster registration
<a name="_verify_cluster_registration"></a>

View registered clusters:

```
kubectl get secrets -n argocd -l argocd.argoproj.io/secret-type=cluster
```

Or check cluster status in the Argo CD UI under Settings → Clusters.

## Private clusters
<a name="_private_clusters"></a>

The Argo CD capability provides transparent access to fully private EKS clusters without requiring VPC peering or specialized networking configuration. AWS manages connectivity between the Argo CD capability and private remote clusters automatically.

Register the private cluster using its ARN; no additional networking setup is required.

## Access Entry RBAC requirements
<a name="_access_entry_rbac_requirements"></a>

When you create an Argo CD capability, an EKS Access Entry is automatically created for the Capability Role, but no Kubernetes RBAC permissions are granted by default. This intentional design follows the principle of least privilege—different use cases require different permissions.

For example:
+ If you use the cluster only as an Argo CD hub to manage remote clusters, it doesn’t need local deployment permissions
+ If you deploy applications locally, it needs read access cluster-wide and write access to specific namespaces
+ If you need to create CRDs, it requires additional cluster-admin permissions

You must explicitly configure the permissions Argo CD needs based on your requirements.

### Minimum permissions for Argo CD
<a name="_minimum_permissions_for_argo_cd"></a>

Argo CD needs two types of permissions to function without errors:

 **Read permissions (cluster-wide)**: Argo CD must be able to read all resource types and Custom Resource Definitions (CRDs) across the cluster for:
+ Resource discovery and health checks
+ Detecting drift between desired and actual state
+ Validating resources before deployment

 **Write permissions (namespace-specific)**: Argo CD needs create, update, and delete permissions for resources defined in Applications:
+ Deploy application workloads (Deployments, Services, ConfigMaps, etc.)
+ Apply Custom Resources (CRDs specific to your applications)
+ Manage application lifecycle

### Quick setup
<a name="_quick_setup"></a>

For getting started quickly, testing, or development environments, use `AmazonEKSClusterAdminPolicy`:

```
aws eks associate-access-policy \
  --region region-code \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/ArgoCDCapabilityRole \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster
```

**Important**  
The `AmazonEKSClusterAdminPolicy` provides full cluster-admin access (equivalent to `system:masters`), including the ability to create CRDs, modify cluster-wide resources, and deploy to any namespace. This is convenient for development and POCs but should not be used in production. For production, use the least privilege setup below.

### Production setup with least privilege
<a name="_production_setup_with_least_privilege"></a>

For production environments, create custom Kubernetes RBAC that grants:
+ Cluster-wide read access to all resources (for discovery and health checks)
+ Namespace-specific write access (for deployments)

 **Step 1: Associate the Access Entry with a namespace-scoped access policy** 

```
aws eks associate-access-policy \
  --region region-code \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/ArgoCDCapabilityRole \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy \
  --access-scope type=namespace,namespaces=app-namespace
```

 **Step 2: Create ClusterRole for read access** 

```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: argocd-read-all
rules:
# Read access to all resources for discovery and health checks
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]
```

 **Step 3: Create Role for write access to application namespaces** 

```
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argocd-deploy
  namespace: app-namespace
rules:
# Full access to deploy application resources
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
```

 **Step 4: Bind roles to the Kubernetes group** 

```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: argocd-read-all
subjects:
- kind: Group
  name: eks-access-entry:arn:aws:iam::111122223333:role/ArgoCDCapabilityRole
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: argocd-read-all
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argocd-deploy
  namespace: app-namespace
subjects:
- kind: Group
  name: eks-access-entry:arn:aws:iam::111122223333:role/ArgoCDCapabilityRole
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: argocd-deploy
  apiGroup: rbac.authorization.k8s.io
```

**Note**  
The group name format for Access Entries is `eks-access-entry:` followed by the principal ARN. Repeat the RoleBinding for each namespace where Argo CD should deploy applications.

**Important**  
Argo CD must be able to read all resource types across the cluster for health checks and discovery, even if it only deploys to specific namespaces. Without cluster-wide read access, Argo CD will show errors when checking application health.
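After the bindings are applied, you can sanity-check the effective permissions with `kubectl` impersonation (the ARN and impersonated user name are illustrative; running these commands requires impersonation privileges):

```
# Cluster-wide read should be allowed
kubectl auth can-i list deployments --all-namespaces \
  --as argocd-check \
  --as-group eks-access-entry:arn:aws:iam::111122223333:role/ArgoCDCapabilityRole

# Writes should be allowed only in bound namespaces
kubectl auth can-i create deployments -n app-namespace \
  --as argocd-check \
  --as-group eks-access-entry:arn:aws:iam::111122223333:role/ArgoCDCapabilityRole
```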

## Restrict cluster access with Projects
<a name="_restrict_cluster_access_with_projects"></a>

Use Projects to control which clusters and namespaces Applications can deploy to by configuring the allowed target clusters and namespaces in `spec.destinations`:

```
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: production
  namespace: argocd
spec:
  destinations:
  - server: arn:aws:eks:us-west-2:111122223333:cluster/prod-cluster
    namespace: '*'
  - server: arn:aws:eks:eu-west-1:111122223333:cluster/prod-eu-cluster
    namespace: '*'
  sourceRepos:
  - 'https://github.com/example/production-apps'
```

For details, see [Working with Argo CD Projects](argocd-projects.md).

## Additional resources
<a name="_additional_resources"></a>
+  [Working with Argo CD Projects](argocd-projects.md) - Organize applications and enforce security boundaries
+  [Create Applications](argocd-create-application.md) - Deploy your first application
+  [Use ApplicationSets](argocd-applicationsets.md) - Deploy to multiple clusters with ApplicationSets
+  [Argo CD considerations](argocd-considerations.md) - Multi-cluster patterns and cross-account setup
+  [Declarative Cluster Setup](https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#clusters) - Upstream cluster configuration reference

# Working with Argo CD Projects
<a name="argocd-projects"></a>

Argo CD Projects (AppProject) provide logical grouping and access control for Applications. Projects define which Git repositories, target clusters, and namespaces Applications can use, enabling multi-tenancy and security boundaries in shared Argo CD instances.

## When to use Projects
<a name="_when_to_use_projects"></a>

Use Projects to:
+ Separate applications by team, environment, or business unit
+ Restrict which repositories teams can deploy from
+ Limit which clusters and namespaces teams can deploy to
+ Enforce resource quotas and allowed resource types
+ Provide self-service application deployment with guardrails

## Default Project
<a name="_default_project"></a>

Every Argo CD capability includes a `default` project that allows access to all repositories, clusters, and namespaces. While useful for initial testing, create dedicated projects with explicit restrictions for production use.

For details on the default project configuration and how to restrict it, see [The Default Project](https://argo-cd.readthedocs.io/en/stable/user-guide/projects/#the-default-project) in the Argo CD documentation.

## Create a Project
<a name="_create_a_project"></a>

Create a Project by applying an `AppProject` resource to your cluster.

 **Example: Team-specific Project** 

```
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-a
  namespace: argocd
spec:
  description: Applications for Team A

  # Source repositories this project can deploy from
  sourceRepos:
    - 'https://github.com/my-org/team-a-*'
    - 'https://github.com/my-org/shared-libs'

  # Destination clusters and namespaces
  destinations:
    - name: dev-cluster
      namespace: team-a-dev
    - name: prod-cluster
      namespace: team-a-prod

  # Allowed resource types
  clusterResourceWhitelist:
    - group: ''
      kind: Namespace

  namespaceResourceWhitelist:
    - group: 'apps'
      kind: Deployment
    - group: ''
      kind: Service
    - group: ''
      kind: ConfigMap
```

Apply the Project:

```
kubectl apply -f team-a-project.yaml
```
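Verify that the Project was created:

```
kubectl get appproject team-a -n argocd
```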

## Project configuration
<a name="_project_configuration"></a>

### Source repositories
<a name="_source_repositories"></a>

Control which Git repositories Applications in this project can use:

```
spec:
  sourceRepos:
    - 'https://github.com/my-org/app-*'  # Wildcard pattern
    - 'https://github.com/my-org/infra'  # Specific repo
```

You can use wildcards and negation patterns (`!` prefix) to allow or deny specific repositories. For details, see [Managing Projects](https://argo-cd.readthedocs.io/en/stable/user-guide/projects/#managing-projects) in the Argo CD documentation.
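For example, a negation entry can carve an exception out of a broader allow pattern (repository names are illustrative; see the upstream documentation for the exact matching semantics):

```
spec:
  sourceRepos:
    - '!https://github.com/my-org/app-secrets'  # deny this repository
    - 'https://github.com/my-org/app-*'         # allow other matching repositories
```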

### Destination restrictions
<a name="_destination_restrictions"></a>

Limit where Applications can deploy:

```
spec:
  destinations:
    - name: prod-cluster  # Specific cluster by name
      namespace: production
    - name: '*'  # Any cluster
      namespace: team-a-*  # Namespace pattern
```

**Important**  
Use specific cluster names and namespace patterns rather than wildcards for production Projects. This prevents accidental deployments to unauthorized clusters or namespaces.

You can use wildcards and negation patterns to control destinations. For details, see [Managing Projects](https://argo-cd.readthedocs.io/en/stable/user-guide/projects/#managing-projects) in the Argo CD documentation.

### Resource restrictions
<a name="_resource_restrictions"></a>

Control which Kubernetes resource types can be deployed:

 **Cluster-scoped resources**:

```
spec:
  clusterResourceWhitelist:
    - group: ''
      kind: Namespace
    - group: 'rbac.authorization.k8s.io'
      kind: Role
```

 **Namespace-scoped resources**:

```
spec:
  namespaceResourceWhitelist:
    - group: 'apps'
      kind: Deployment
    - group: ''
      kind: Service
    - group: ''
      kind: ConfigMap
    - group: 's3.services.k8s.aws'
      kind: Bucket
```

Use blacklists to deny specific resources:

```
spec:
  namespaceResourceBlacklist:
    - group: ''
      kind: Secret  # Prevent direct Secret creation
```

## Assign Applications to Projects
<a name="_assign_applications_to_projects"></a>

When creating an Application, specify the project in the `spec.project` field:

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: team-a  # Assign to team-a project
  source:
    repoURL: https://github.com/my-org/my-app
    path: manifests
  destination:
    name: prod-cluster
    namespace: team-a-prod
```

Applications without a specified project use the `default` project.

## Project roles and RBAC
<a name="_project_roles_and_rbac"></a>

Projects can define custom roles for fine-grained access control. Map project roles to AWS Identity Center users and groups in your capability configuration to control who can sync, update, or delete applications.

 **Example: Project with developer and admin roles** 

```
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-a
  namespace: argocd
spec:
  sourceRepos:
    - '*'
  destinations:
    - name: '*'
      namespace: 'team-a-*'

  roles:
    - name: developer
      description: Developers can sync applications
      policies:
        - p, proj:team-a:developer, applications, sync, team-a/*, allow
        - p, proj:team-a:developer, applications, get, team-a/*, allow
      groups:
        - team-a-developers

    - name: admin
      description: Admins have full access
      policies:
        - p, proj:team-a:admin, applications, *, team-a/*, allow
      groups:
        - team-a-admins
```

For details on project roles, JWT tokens for CI/CD pipelines, and RBAC configuration, see [Project Roles](https://argo-cd.readthedocs.io/en/stable/user-guide/projects/#project-roles) in the Argo CD documentation.

## Common patterns
<a name="_common_patterns"></a>

### Environment-based Projects
<a name="_environment_based_projects"></a>

Create separate projects for each environment:

```
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: production
  namespace: argocd
spec:
  sourceRepos:
    - 'https://github.com/my-org/*'
  destinations:
    - name: prod-cluster
      namespace: '*'
  # Strict resource controls for production
  clusterResourceWhitelist: []
  namespaceResourceWhitelist:
    - group: 'apps'
      kind: Deployment
    - group: ''
      kind: Service
```

### Team-based Projects
<a name="_team_based_projects"></a>

Isolate teams with dedicated projects:

```
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: platform-team
  namespace: argocd
spec:
  sourceRepos:
    - 'https://github.com/my-org/platform-*'
  destinations:
    - name: '*'
      namespace: 'platform-*'
  # Platform team can manage cluster resources
  clusterResourceWhitelist:
    - group: '*'
      kind: '*'
```

### Multi-cluster Projects
<a name="_multi_cluster_projects"></a>

Deploy to multiple clusters with consistent policies:

```
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: global-app
  namespace: argocd
spec:
  sourceRepos:
    - 'https://github.com/my-org/global-app'
  destinations:
    - name: us-west-cluster
      namespace: app
    - name: eu-west-cluster
      namespace: app
    - name: ap-south-cluster
      namespace: app
```

## Best practices
<a name="_best_practices"></a>

 **Start with restrictive Projects**: Begin with narrow permissions and expand as needed rather than starting with broad access.

 **Use namespace patterns**: Leverage wildcards in namespace restrictions (like `team-a-*`) to allow flexibility while maintaining boundaries.

 **Separate production Projects**: Use dedicated Projects for production with stricter controls and manual sync policies.

 **Document Project purposes**: Use the `description` field to explain what each Project is for and who should use it.

 **Review Project permissions regularly**: Audit Projects periodically to ensure restrictions still align with team needs and security requirements.

## Additional resources
<a name="_additional_resources"></a>
+  [Configure Argo CD permissions](argocd-permissions.md) - Configure RBAC and Identity Center integration
+  [Create Applications](argocd-create-application.md) - Create Applications within Projects
+  [Use ApplicationSets](argocd-applicationsets.md) - Use ApplicationSets with Projects for multi-cluster deployments
+  [Argo CD Projects Documentation](https://argo-cd.readthedocs.io/en/stable/user-guide/projects/) - Complete upstream reference

# Create Applications
<a name="argocd-create-application"></a>

Applications represent deployments in target clusters. Each Application defines a source (Git repository) and destination (cluster and namespace). When an Application is applied, Argo CD creates the resources defined by the manifests in the Git repository in the destination namespace. Applications often specify workload deployments, but they can manage any Kubernetes resources available in the destination cluster.

## Prerequisites
<a name="_prerequisites"></a>
+ An EKS cluster with the Argo CD capability created
+ Repository access configured (see [Configure repository access](argocd-configure-repositories.md))
+ Target cluster registered (see [Register target clusters](argocd-register-clusters.md))
+  `kubectl` configured to communicate with your cluster

## Create a basic Application
<a name="_create_a_basic_application"></a>

Define an Application that deploys from a Git repository:

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps
    targetRevision: HEAD
    path: guestbook
  destination:
    name: in-cluster
    namespace: default
```

**Note**  
Use `destination.name` with the cluster name you used when registering the cluster (like `in-cluster` for the local cluster). The `destination.server` field also works with EKS cluster ARNs, but using cluster names is recommended for better readability.

Apply the Application:

```
kubectl apply -f application.yaml
```

View the Application status:

```
kubectl get application guestbook -n argocd
```

## Source configuration
<a name="_source_configuration"></a>

 **Git repository**:

```
spec:
  source:
    repoURL: https://github.com/example/my-app
    targetRevision: main
    path: kubernetes/manifests
```

 **Specific Git tag or commit**:

```
spec:
  source:
    targetRevision: v1.2.0  # or commit SHA
```

 **Helm chart**:

```
spec:
  source:
    repoURL: https://github.com/example/helm-charts
    targetRevision: main
    path: charts/my-app
    helm:
      valueFiles:
      - values.yaml
      parameters:
      - name: image.tag
        value: v1.2.0
```

 **Helm chart with values from external Git repository** (multi-source pattern):

```
spec:
  sources:
  - repoURL: https://github.com/example/helm-charts
    targetRevision: main
    path: charts/my-app
    helm:
      valueFiles:
      - $values/environments/production/values.yaml
  - repoURL: https://github.com/example/config-repo
    targetRevision: main
    ref: values
```

For more information, see [Helm Value Files from External Git Repository](https://argo-cd.readthedocs.io/en/stable/user-guide/multiple_sources/#helm-value-files-from-external-git-repository) in the Argo CD documentation.

 **Helm chart from ECR**:

```
spec:
  source:
    repoURL: oci://account-id.dkr.ecr.region.amazonaws.com/repository-name
    targetRevision: chart-version
    chart: chart-name
```

If the Capability Role has the required ECR permissions, the repository is used directly and no Repository configuration is required. See [Configure repository access](argocd-configure-repositories.md) for details.

 **Git repository from CodeCommit**:

```
spec:
  source:
    repoURL: https://git-codecommit.region.amazonaws.com/v1/repos/repository-name
    targetRevision: main
    path: kubernetes/manifests
```

If the Capability Role has the required CodeCommit permissions, the repository is used directly and no Repository configuration is required. See [Configure repository access](argocd-configure-repositories.md) for details.

 **Git repository from CodeConnections**:

```
spec:
  source:
    repoURL: https://codeconnections.region.amazonaws.com/git-http/account-id/region/connection-id/owner/repository.git
    targetRevision: main
    path: kubernetes/manifests
```

The repository URL format is derived from the CodeConnections connection ARN. If the Capability Role has the required CodeConnections permissions and a connection is configured, the repository is used directly and no Repository configuration is required. See [Configure repository access](argocd-configure-repositories.md) for details.

 **Kustomize**:

```
spec:
  source:
    repoURL: https://github.com/example/kustomize-app
    targetRevision: main
    path: overlays/production
    kustomize:
      namePrefix: prod-
```

## Sync policies
<a name="_sync_policies"></a>

Control how Argo CD syncs applications.

 **Manual sync (default)**:

Applications require manual approval to sync:

```
spec:
  syncPolicy: {}  # No automated sync
```

Manually trigger sync:

```
kubectl patch application guestbook -n argocd \
  --type merge \
  --patch '{"operation": {"initiatedBy": {"username": "admin"}, "sync": {}}}'
```

 **Automatic sync**:

Applications automatically sync when Git changes are detected:

```
spec:
  syncPolicy:
    automated: {}
```

 **Self-healing**:

Automatically revert manual changes to the cluster:

```
spec:
  syncPolicy:
    automated:
      selfHeal: true
```

When enabled, Argo CD reverts any manual changes made directly to the cluster, ensuring Git remains the source of truth.

 **Pruning**:

Automatically delete resources removed from Git:

```
spec:
  syncPolicy:
    automated:
      prune: true
```

**Warning**  
Pruning will delete resources from your cluster. Use with caution in production environments.

 **Combined automated sync**:

```
spec:
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
```

 **Retry configuration**:

Configure retry behavior for failed syncs:

```
spec:
  syncPolicy:
    retry:
      limit: 5  # Number of failed sync attempts; unlimited if less than 0
      backoff:
        duration: 5s  # Amount to back off (default unit: seconds, also supports "2m", "1h")
        factor: 2  # Factor to multiply the base duration after each failed retry
        maxDuration: 3m  # Maximum amount of time allowed for the backoff strategy
```

This is particularly useful for resources that depend on CRDs being created first, or when working with kro instances where the CRD may not be immediately available.
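
With the values above, the wait before each retry grows geometrically until `maxDuration` caps it. A small Python sketch of the resulting schedule (an approximation; Argo CD also parses duration strings such as `2m` and `1h`):

```python
def backoff_schedule(duration=5, factor=2, max_duration=180, limit=5):
    """Delay in seconds before each retry attempt, per the spec above."""
    delays, d = [], duration
    for _ in range(limit):
        delays.append(min(d, max_duration))
        d *= factor  # multiply the base duration after each failed retry
    return delays

print(backoff_schedule())  # [5, 10, 20, 40, 80]
```

With `limit: 5` the final retry waits 80 seconds; a longer limit would plateau at the 180-second `maxDuration` cap.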

## Sync options
<a name="_sync_options"></a>

Additional sync configuration:

 **Create namespace if it doesn’t exist**:

```
spec:
  syncPolicy:
    syncOptions:
    - CreateNamespace=true
```

 **Skip dry run for missing resources**:

Useful when applying resources that depend on CRDs that don’t exist yet (like kro instances):

```
spec:
  syncPolicy:
    syncOptions:
    - SkipDryRunOnMissingResource=true
```

This sync option can also be applied to individual resources by adding the `argocd.argoproj.io/sync-options` annotation to the resource itself.
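
The per-resource form is an annotation on the manifest (fragment only):

```
metadata:
  annotations:
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
```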

 **Validate resources before applying**:

```
spec:
  syncPolicy:
    syncOptions:
    - Validate=true
```

 **Apply out of sync only**:

```
spec:
  syncPolicy:
    syncOptions:
    - ApplyOutOfSyncOnly=true
```

## Advanced sync features
<a name="_advanced_sync_features"></a>

Argo CD supports advanced sync features for complex deployments:
+  **Sync waves** - Control resource creation order with `argocd.argoproj.io/sync-wave` annotations
+  **Sync hooks** - Run jobs before or after sync with `argocd.argoproj.io/hook` annotations (PreSync, PostSync, SyncFail)
+  **Resource health assessment** - Custom health checks for application-specific resources
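
For example, ordering and hooks are expressed as annotations on the resources being synced. A sketch (manifest fragments only; wave values and hook choices are illustrative):

```
# Synced in wave -1, before resources in the default wave 0
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "-1"
---
# Runs after the main sync completes; deleted once it succeeds
metadata:
  annotations:
    argocd.argoproj.io/hook: PostSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
```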

For details, see [Sync Waves](https://argo-cd.readthedocs.io/en/stable/user-guide/sync-waves/) and [Resource Hooks](https://argo-cd.readthedocs.io/en/stable/user-guide/resource_hooks/) in the Argo CD documentation.

## Ignore differences
<a name="_ignore_differences"></a>

Prevent Argo CD from syncing specific fields that are managed by other controllers (like HPA managing replicas):

```
spec:
  ignoreDifferences:
  - group: apps
    kind: Deployment
    jsonPointers:
    - /spec/replicas
```

For details on ignore patterns and field exclusions, see [Diffing Customization](https://argo-cd.readthedocs.io/en/stable/user-guide/diffing/) in the Argo CD documentation.

## Multi-environment deployment
<a name="_multi_environment_deployment"></a>

Deploy the same application to multiple environments:

 **Development**:

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app
    targetRevision: develop
    path: overlays/development
  destination:
    name: dev-cluster
    namespace: my-app
```

 **Production**:

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app
    targetRevision: main
    path: overlays/production
  destination:
    name: prod-cluster
    namespace: my-app
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

## Monitor and manage Applications
<a name="_monitor_and_manage_applications"></a>

 **View Application status**:

```
kubectl get application my-app -n argocd
```

 **Access the Argo CD UI**:

Open the Argo CD UI through the EKS console to view application topology, sync status, resource health, and deployment history. See [Working with Argo CD](working-with-argocd.md) for UI access instructions.

 **Rollback Applications**:

Rollback to a previous revision using the Argo CD UI, the Argo CD CLI, or by updating the `targetRevision` in the Application spec to a previous Git commit or tag.

Using the Argo CD CLI:

```
argocd app rollback argocd/my-app <revision-id>
```

**Note**  
When using the Argo CD CLI with the managed capability, specify applications with the namespace prefix: `namespace/appname`.

For more information, see [argocd app rollback](https://argo-cd.readthedocs.io/en/stable/user-guide/commands/argocd_app_rollback/) in the Argo CD documentation.

## Additional resources
<a name="_additional_resources"></a>
+  [Working with Argo CD Projects](argocd-projects.md) - Organize applications with Projects for multi-tenant environments
+  [Use ApplicationSets](argocd-applicationsets.md) - Deploy to multiple clusters with templates
+  [Application Specification](https://argo-cd.readthedocs.io/en/stable/user-guide/application-specification/) - Complete Application API reference
+  [Sync Options](https://argo-cd.readthedocs.io/en/stable/user-guide/sync-options/) - Advanced sync configuration

# Use ApplicationSets
<a name="argocd-applicationsets"></a>

ApplicationSets generate multiple Applications from templates, enabling you to deploy the same application across multiple clusters, environments, or namespaces with a single resource definition.

## Prerequisites
<a name="_prerequisites"></a>
+ An EKS cluster with the Argo CD capability created
+ Repository access configured (see [Configure repository access](argocd-configure-repositories.md))
+  `kubectl` configured to communicate with your cluster

**Note**  
Multiple target clusters are not required for ApplicationSets. You can use generators other than the cluster generator (like list, git, or matrix generators) to deploy applications without remote clusters.

## How ApplicationSets work
<a name="_how_applicationsets_work"></a>

ApplicationSets use generators to produce parameters, then apply those parameters to an Application template. Each set of generated parameters creates one Application.
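
Conceptually, the controller substitutes each generated parameter set into the `{{...}}` placeholders in the template, rendering one Application per set. A minimal Python sketch of that substitution (illustrative only; real ApplicationSet templating also supports Go-template mode and nested fields):

```python
import re

def render(template: str, params: dict) -> str:
    """Substitute {{key}} placeholders with generator-produced values."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: params[m.group(1)], template)

# A list generator emitting three parameter sets yields three Applications.
elements = [{"environment": "dev"}, {"environment": "staging"}, {"environment": "prod"}]
names = [render("guestbook-{{environment}}", e) for e in elements]
print(names)  # ['guestbook-dev', 'guestbook-staging', 'guestbook-prod']
```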

Common generators for EKS deployments:
+  **List generator** - Explicitly define clusters and parameters for each environment
+  **Cluster generator** - Automatically deploy to all registered clusters
+  **Git generator** - Generate Applications from repository structure
+  **Matrix generator** - Combine generators for multi-dimensional deployments
+  **Merge generator** - Merge parameters from multiple generators

For complete generator reference, see [ApplicationSet Documentation](https://argo-cd.readthedocs.io/en/stable/user-guide/application-set/).

## List generator
<a name="_list_generator"></a>

Deploy to multiple clusters with explicit configuration:

```
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook-all-clusters
  namespace: argocd
spec:
  generators:
  - list:
      elements:
      - environment: dev
        replicas: "2"
      - environment: staging
        replicas: "3"
      - environment: prod
        replicas: "5"
  template:
    metadata:
      name: 'guestbook-{{environment}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/guestbook
        targetRevision: HEAD
        path: 'overlays/{{environment}}'
      destination:
        name: '{{environment}}-cluster'
        namespace: guestbook
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
```

**Note**  
Use `destination.name` with cluster names for better readability. The `destination.server` field also works with EKS cluster ARNs if needed.

This creates three Applications: `guestbook-dev`, `guestbook-staging`, and `guestbook-prod`.

## Cluster generator
<a name="_cluster_generator"></a>

Deploy to all registered clusters automatically:

```
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cluster-addons
  namespace: argocd
spec:
  generators:
  - clusters: {}
  template:
    metadata:
      name: '{{name}}-addons'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/cluster-addons
        targetRevision: HEAD
        path: addons
      destination:
        server: '{{server}}'
        namespace: kube-system
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
```

This automatically creates an Application for each registered cluster.

 **Filter clusters**:

Use `matchLabels` to include specific clusters, or `matchExpressions` to exclude clusters:

```
spec:
  generators:
  - clusters:
      selector:
        matchLabels:
          environment: production
        matchExpressions:
        - key: skip-appset
          operator: DoesNotExist
```

## Git generators
<a name="_git_generators"></a>

Git generators create Applications based on repository structure:
+  **Directory generator** - Deploy each directory as a separate Application (useful for microservices)
+  **File generator** - Generate Applications from parameter files (useful for multi-tenant deployments)

 **Example: Microservices deployment** 

```
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: microservices
  namespace: argocd
spec:
  generators:
  - git:
      repoURL: https://github.com/example/microservices
      revision: HEAD
      directories:
      - path: services/*
  template:
    metadata:
      name: '{{path.basename}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/microservices
        targetRevision: HEAD
        path: '{{path}}'
      destination:
        name: my-cluster
        namespace: '{{path.basename}}'
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
        - CreateNamespace=true
```

For details on Git generators and file-based configuration, see [Git Generator](https://argo-cd.readthedocs.io/en/stable/operator-manual/applicationset/Generators-Git/) in the Argo CD documentation.

## Matrix generator
<a name="_matrix_generator"></a>

Combine multiple generators to deploy across multiple dimensions (environments × clusters):

```
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: multi-env-multi-cluster
  namespace: argocd
spec:
  generators:
  - matrix:
      generators:
      - list:
          elements:
          - environment: dev
          - environment: staging
          - environment: prod
      - clusters:
          selector:
            matchLabels:
              region: us-west-2
  template:
    metadata:
      name: 'app-{{environment}}-{{name}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/app
        targetRevision: HEAD
        path: 'overlays/{{environment}}'
      destination:
        name: '{{name}}'
        namespace: 'app-{{environment}}'
```

For details on combining generators, see [Matrix Generator](https://argo-cd.readthedocs.io/en/stable/operator-manual/applicationset/Generators-Matrix/) in the Argo CD documentation.
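
The matrix generator effectively takes the cartesian product of its child generators' parameter sets, merging each pair. A sketch of the combination (cluster names are illustrative):

```python
from itertools import product

# Outputs of the two child generators: one list of parameter sets each.
envs = [{"environment": e} for e in ("dev", "staging", "prod")]
clusters = [{"name": "cluster-a"}, {"name": "cluster-b"}]

# Cartesian product: one merged parameter set (one Application) per pair.
apps = [{**e, **c} for e, c in product(envs, clusters)]
print(len(apps))  # 6
```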

## Multi-region deployment
<a name="_multi_region_deployment"></a>

Deploy to clusters across multiple regions:

```
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: global-app
  namespace: argocd
spec:
  generators:
  - list:
      elements:
      - clusterName: prod-us-west
        region: us-west-2
      - clusterName: prod-us-east
        region: us-east-1
      - clusterName: prod-eu-west
        region: eu-west-1
  template:
    metadata:
      name: 'app-{{region}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/app
        targetRevision: HEAD
        path: kubernetes
        helm:
          parameters:
          - name: region
            value: '{{region}}'
      destination:
        name: '{{clusterName}}'
        namespace: app
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
```

## Manage ApplicationSets
<a name="_manage_applicationsets"></a>

 **View ApplicationSets and generated Applications**:

```
kubectl get applicationsets -n argocd
kubectl get applications -n argocd -l argocd.argoproj.io/application-set-name=<applicationset-name>
```

 **Update an ApplicationSet**:

Modify the ApplicationSet spec and reapply. Argo CD automatically updates all generated Applications:

```
kubectl apply -f applicationset.yaml
```

 **Delete an ApplicationSet**:

```
kubectl delete applicationset <name> -n argocd
```

**Warning**  
Deleting an ApplicationSet deletes all generated Applications. If those Applications have `prune: true`, their resources will also be deleted from target clusters.  
To preserve deployed resources when deleting an ApplicationSet, set `.syncPolicy.preserveResourcesOnDeletion` to `true` in the ApplicationSet spec. For more information, see [Application Pruning & Resource Deletion](https://argo-cd.readthedocs.io/en/stable/operator-manual/applicationset/Application-Deletion/) in the Argo CD documentation.
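
A minimal ApplicationSet fragment with that protection enabled:

```
spec:
  syncPolicy:
    # Deleting the ApplicationSet removes the generated Applications
    # but leaves their deployed resources in place on target clusters
    preserveResourcesOnDeletion: true
```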

**Important**  
Argo CD’s ApplicationSets feature has security considerations you should be aware of before using ApplicationSets. For more information, see [ApplicationSet Security](https://argo-cd.readthedocs.io/en/stable/operator-manual/applicationset/Security/) in the Argo CD documentation.

## Additional resources
<a name="_additional_resources"></a>
+  [Working with Argo CD Projects](argocd-projects.md) - Organize ApplicationSets with Projects
+  [Create Applications](argocd-create-application.md) - Understand Application configuration
+  [ApplicationSet Documentation](https://argo-cd.readthedocs.io/en/stable/user-guide/application-set/) - Complete generator reference and patterns
+  [Generator Reference](https://argo-cd.readthedocs.io/en/stable/operator-manual/applicationset/Generators/) - Detailed generator specifications

# Argo CD considerations
<a name="argocd-considerations"></a>

This topic covers important considerations for using the EKS Capability for Argo CD, including planning, permissions, authentication, and multi-cluster deployment patterns.

## Planning
<a name="_planning"></a>

Before deploying Argo CD, consider the following:

 **Repository strategy**: Determine where your application manifests will be stored (CodeCommit, GitHub, GitLab, Bitbucket). Plan your repository structure and branching strategy for different environments.

 **RBAC strategy**: Plan which teams or users should have admin, editor, or viewer access. Map these to AWS Identity Center groups or Argo CD roles.

 **Multi-cluster architecture**: Determine if you’ll manage multiple clusters from a single Argo CD instance. Consider using a dedicated management cluster for Argo CD.

 **Application organization**: Plan how you’ll structure Applications and ApplicationSets. Consider using projects to organize applications by team or environment.

 **Sync policies**: Decide whether applications should sync automatically or require manual approval. Automated sync is common for development, manual for production.

## Permissions
<a name="_permissions"></a>

For detailed information about IAM Capability Roles, trust policies, and security best practices, see [Amazon EKS capability IAM role](capability-role.md) and [Security considerations for EKS Capabilities](capabilities-security.md).

### IAM Capability Role overview
<a name="_iam_capability_role_overview"></a>

When you create an Argo CD capability resource, you provide an IAM Capability Role. Unlike ACK, Argo CD primarily manages Kubernetes resources, not AWS resources directly. However, the IAM Capability Role is required for:
+ Accessing private Git repositories in CodeCommit
+ Integrating with AWS Identity Center for authentication
+ Accessing secrets in AWS Secrets Manager (if configured)
+ Cross-cluster deployments to other EKS clusters

### CodeCommit integration
<a name="_codecommit_integration"></a>

If you’re using CodeCommit repositories, attach a policy with read permissions:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "codecommit:GitPull"
      ],
      "Resource": "*"
    }
  ]
}
```

**Important**  
For production use, restrict the `Resource` field to specific repository ARNs instead of using `"*"`.  
Example:  

```
"Resource": "arn:aws:codecommit:us-west-2:111122223333:my-app-repo"
```
This limits the Argo CD capability’s access to only the repositories it needs to manage.

### Secrets Manager integration
<a name="_secrets_manager_integration"></a>

If you’re storing repository credentials in Secrets Manager, attach the managed policy for read access:

```
arn:aws:iam::aws:policy/AWSSecretsManagerClientReadOnlyAccess
```

This policy includes the necessary permissions: `secretsmanager:GetSecretValue`, `secretsmanager:DescribeSecret`, and KMS decrypt permissions.
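
If you prefer a customer-managed policy scoped to specific secrets instead of the broad managed policy, a sketch (the secret ARN pattern is illustrative; add KMS decrypt permissions if your secrets use a customer-managed key):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret"
      ],
      "Resource": "arn:aws:secretsmanager:us-west-2:111122223333:secret:argocd/*"
    }
  ]
}
```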

### Basic setup
<a name="_basic_setup"></a>

For basic Argo CD functionality with public Git repositories, no additional IAM policies are required beyond the trust policy.

## Authentication
<a name="_authentication"></a>

### AWS Identity Center integration
<a name="shared_aws_identity_center_integration"></a>

The Argo CD managed capability integrates directly with AWS Identity Center (formerly AWS SSO), enabling you to use your existing identity provider for authentication.

When you configure AWS Identity Center integration:

1. Users access the Argo CD UI through the EKS console

1. They authenticate using AWS Identity Center (which can federate to your corporate identity provider)

1.  AWS Identity Center provides user and group information to Argo CD

1. Argo CD maps users and groups to RBAC roles based on your configuration

1. Users see only the applications and resources they have permission to access

### Simplifying access with Identity Center permission sets
<a name="_simplifying_access_with_identity_center_permission_sets"></a>

 AWS Identity Center provides two distinct authentication paths when working with Argo CD:

 **Argo CD API authentication**: Identity Center provides SSO authentication to the Argo CD UI and API. This is configured through the Argo CD capability’s RBAC role mappings.

 **EKS cluster access**: The Argo CD capability uses the customer-provided IAM role to authenticate with EKS clusters through access entries. These access entries can be manually configured to add or remove permissions.

You can use [Identity Center permission sets](https://docs.aws.amazon.com/singlesignon/latest/userguide/howtocreatepermissionset.html) to simplify identity management by allowing a single identity to access both Argo CD and EKS clusters. This reduces overhead by requiring you to manage only one identity across both systems, rather than maintaining separate credentials for Argo CD access and cluster access.

### RBAC role mappings
<a name="_rbac_role_mappings"></a>

Argo CD has built-in roles that you can map to AWS Identity Center users and groups:

 **ADMIN**: Full access to all applications and settings. Can create, update, and delete applications. Can manage Argo CD configuration.

 **EDITOR**: Can create and modify applications. Cannot change Argo CD settings or delete applications.

 **VIEWER**: Read-only access to applications. Can view application status and history. Cannot make changes.

**Note**  
Role names are case-sensitive and must be uppercase (ADMIN, EDITOR, VIEWER).

**Important**  
EKS Capabilities integration with AWS Identity Center supports up to 1,000 identities per Argo CD capability. An identity can be a user or a group.

## Multi-cluster deployments
<a name="_multi_cluster_deployments"></a>

The Argo CD managed capability supports multi-cluster deployments, enabling you to manage applications across development, staging, and production clusters from a single Argo CD instance.

### How multi-cluster works
<a name="_how_multi_cluster_works"></a>

When you register additional clusters with Argo CD:

1. You create cluster secrets that reference target EKS clusters by ARN

1. You create Applications or ApplicationSets that target different clusters

1. Argo CD connects to each cluster to deploy and watch resources

1. You view and manage all clusters from a single Argo CD UI

### Prerequisites for multi-cluster
<a name="_prerequisites_for_multi_cluster"></a>

Before registering additional clusters:
+ Create an Access Entry on the target cluster for the Argo CD capability role
+ Ensure network connectivity between the Argo CD capability and target clusters
+ Verify IAM permissions to access the target clusters

### Register a cluster
<a name="_register_a_cluster"></a>

Register clusters using Kubernetes Secrets in the `argocd` namespace.

Get the target cluster ARN. Replace *region-code* with the AWS Region that your target cluster is in and replace *target-cluster* with the name of your target cluster.

```
aws eks describe-cluster \
  --region region-code \
  --name target-cluster \
  --query 'cluster.arn' \
  --output text
```

Create a cluster secret using the cluster ARN:

```
apiVersion: v1
kind: Secret
metadata:
  name: target-cluster
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: target-cluster
  server: arn:aws:eks:us-west-2:111122223333:cluster/target-cluster
  project: default
```

**Important**  
Use the EKS cluster ARN in the `server` field, not the Kubernetes API server URL. The managed capability requires ARNs to identify target clusters.

Apply the secret:

```
kubectl apply -f cluster-secret.yaml
```

### Configure Access Entry on target cluster
<a name="_configure_access_entry_on_target_cluster"></a>

The target cluster must have an Access Entry that grants the Argo CD capability role permission to deploy applications. Replace *region-code* with the AWS Region that your target cluster is in, replace *target-cluster* with the name of your target cluster, and replace the ARN with your Argo CD capability role ARN.

```
aws eks create-access-entry \
  --region region-code \
  --cluster-name target-cluster \
  --principal-arn arn:aws:iam::111122223333:role/ArgoCDCapabilityRole \
  --type STANDARD \
  --kubernetes-groups system:masters
```

**Note**  
For production use, consider using more restrictive Kubernetes groups instead of `system:masters`.

### Private cluster access
<a name="_private_cluster_access"></a>

The Argo CD managed capability can deploy to fully private EKS clusters without requiring VPC peering or specialized networking configuration. AWS manages connectivity between the Argo CD capability and private remote clusters automatically. Ensure your repository access controls and Argo CD RBAC policies are properly configured.

### Cross-account deployments
<a name="_cross_account_deployments"></a>

For cross-account deployments, add the Argo CD IAM Capability Role from the source account to the target cluster’s EKS Access Entry:

1. In the target account, create an Access Entry on the target EKS cluster

1. Use the Argo CD IAM Capability Role ARN from the source account as the principal

1. Configure appropriate Kubernetes RBAC permissions for the Access Entry

1. Register the target cluster in Argo CD using its EKS cluster ARN

No additional IAM role creation or trust policy configuration is required—EKS Access Entries handle cross-account access.

## Best practices
<a name="_best_practices"></a>

 **Use declarative sources as the source of truth**: Store all your application manifests in declarative sources (Git repositories, Helm registries, or OCI images), enabling version control, audit trails, and collaboration.

 **Implement proper RBAC**: Use AWS Identity Center integration to control who can access and manage applications in Argo CD. Argo CD supports fine-grained access control to resources within Applications (Deployments, Pods, ConfigMaps, Secrets).

 **Use ApplicationSets for multi-environment deployments**: Use ApplicationSets to deploy applications across multiple clusters or namespaces with different configurations.

## Lifecycle management
<a name="_lifecycle_management"></a>

### Application sync policies
<a name="_application_sync_policies"></a>

Control how Argo CD syncs applications:

 **Manual sync**: Applications require manual approval to sync changes. Recommended for **production** environments.

 **Automatic sync**: Applications automatically sync when Git changes are detected. Common for development and staging environments.

 **Self-healing**: Automatically revert manual changes made to the cluster. Ensures cluster state matches Git.

 **Pruning**: Automatically delete resources removed from Git. Use with caution as this can delete resources.

### Application health
<a name="_application_health"></a>

Argo CD continuously monitors application health:
+  **Healthy**: All resources are running as expected
+  **Progressing**: Resources are being created or updated
+  **Degraded**: Some resources are not healthy
+  **Suspended**: Application is paused
+  **Missing**: Resources are missing from the cluster

### Sync windows
<a name="_sync_windows"></a>

Configure sync windows to control when applications can be synced:
+ Allow syncs only during maintenance windows
+ Block syncs during business hours
+ Schedule automatic syncs for specific times
+ Use sync windows in situations where you need to make changes and stop any syncs (break-glass scenarios)

## Webhook configuration for faster sync
<a name="_webhook_configuration_for_faster_sync"></a>

By default, Argo CD polls Git repositories every 6 minutes to detect changes. For more responsive deployments, configure Git webhooks to trigger immediate syncs when changes are pushed.

Webhooks provide several benefits:
+ Immediate sync response when code is pushed (seconds vs minutes)
+ Reduced polling overhead and improved system performance
+ More efficient use of API rate limits
+ Better user experience with faster feedback

### Webhook endpoint
<a name="_webhook_endpoint"></a>

The webhook URL follows the pattern `${serverUrl}/api/webhook`, where `serverUrl` is your Argo CD server URL.

For example, if your Argo CD server URL is `https://abc123.eks-capabilities.us-west-2.amazonaws.com`, the webhook URL is:

```
https://abc123.eks-capabilities.us-west-2.amazonaws.com/api/webhook
```

### Configure webhooks by Git provider
<a name="_configure_webhooks_by_git_provider"></a>

 **GitHub**: In your repository settings, add a webhook with the Argo CD webhook URL. Set the content type to `application/json` and select "Just the push event".

 **GitLab**: In your project settings, add a webhook with the Argo CD webhook URL. Enable "Push events" and optionally "Tag push events".

 **Bitbucket**: In your repository settings, add a webhook with the Argo CD webhook URL. Select "Repository push" as the trigger.

 **CodeCommit**: Create an Amazon EventBridge rule that triggers on CodeCommit repository state changes and sends notifications to the Argo CD webhook endpoint.

For detailed webhook configuration instructions, see [Argo CD Webhook Configuration](https://argo-cd.readthedocs.io/en/stable/operator-manual/webhook/).

**Note**  
Webhooks complement polling—they don’t replace it. Argo CD continues to poll repositories as a fallback mechanism in case webhook notifications are missed.

## Next steps
<a name="_next_steps"></a>
+  [Working with Argo CD](working-with-argocd.md) - Learn how to create and manage Argo CD Applications
+  [Troubleshoot issues with Argo CD capabilities](argocd-troubleshooting.md) - Troubleshoot Argo CD issues
+  [Working with capability resources](working-with-capabilities.md) - Manage your Argo CD capability resource

# Troubleshoot issues with Argo CD capabilities
<a name="argocd-troubleshooting"></a>

This topic provides troubleshooting guidance for the EKS Capability for Argo CD, including capability health checks, application sync issues, repository authentication, and multi-cluster deployments.

**Note**  
EKS Capabilities are fully managed and run outside your cluster. You don’t have access to Argo CD server logs or the `argocd` namespace. Troubleshooting focuses on capability health, application status, and configuration.

## Capability is ACTIVE but applications aren’t syncing
<a name="_capability_is_active_but_applications_arent_syncing"></a>

If your Argo CD capability shows `ACTIVE` status but applications aren’t syncing, check the capability health and application status.

 **Check capability health**:

You can view capability health and status issues in the EKS console or using the AWS CLI.

 **Console**:

1. Open the Amazon EKS console at https://console.aws.amazon.com/eks/home#/clusters.

1. Select your cluster name.

1. Choose the **Observability** tab.

1. Choose **Monitor cluster**.

1. Choose the **Capabilities** tab to view health and status for all capabilities.

 **AWS CLI**:

```
# View capability status and health
aws eks describe-capability \
  --region region-code \
  --cluster-name my-cluster \
  --capability-name my-argocd

# Look for issues in the health section
```

 **Common causes**:
+  **Repository not configured**: Git repository not added to Argo CD
+  **Authentication failed**: SSH key, token, or CodeCommit credentials invalid
+  **Application not created**: No Application resources exist in the cluster
+  **Sync policy**: Manual sync required (auto-sync not enabled)
+  **IAM permissions**: Missing permissions for CodeCommit or Secrets Manager

 **Check application status**:

```
# List applications
kubectl get application -n argocd

# View sync status
kubectl get application my-app -n argocd -o jsonpath='{.status.sync.status}'

# View application health
kubectl get application my-app -n argocd -o jsonpath='{.status.health}'
```

 **Check application conditions**:

```
# Describe application to see detailed status
kubectl describe application my-app -n argocd

# View application conditions (sync errors and warnings)
kubectl get application my-app -n argocd -o jsonpath='{.status.conditions}'
```

## Applications stuck in "Progressing" state
<a name="_applications_stuck_in_progressing_state"></a>

If an application shows `Progressing` but never reaches `Healthy`, check the application’s resource status and events.

 **Check resource health**:

```
# View application resources
kubectl get application my-app -n argocd -o jsonpath='{.status.resources}'

# Check for unhealthy resources
kubectl describe application my-app -n argocd | grep -A 10 "Health Status"
```

 **Common causes**:
+  **Deployment not ready**: Pods failing to start or readiness probes failing
+  **Resource dependencies**: Resources waiting for other resources to be ready
+  **Image pull errors**: Container images not accessible
+  **Insufficient resources**: Cluster lacks CPU or memory for pods

 **Verify target cluster configuration** (for multi-cluster setups):

```
# List registered clusters
kubectl get secret -n argocd -l argocd.argoproj.io/secret-type=cluster

# View cluster secret details
kubectl get secret cluster-secret-name -n argocd -o yaml
```

## Repository authentication failures
<a name="_repository_authentication_failures"></a>

If Argo CD cannot access your Git repositories, verify the authentication configuration.

 **For CodeCommit repositories**:

Verify the IAM Capability Role has CodeCommit permissions:

```
# View IAM policies
aws iam list-attached-role-policies --role-name my-argocd-capability-role
aws iam list-role-policies --role-name my-argocd-capability-role

# Get specific policy details
aws iam get-role-policy --role-name my-argocd-capability-role --policy-name policy-name
```

The role needs `codecommit:GitPull` permission for the repositories.
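
The following is a minimal sketch of an inline policy that grants this permission. The repository ARN is a placeholder; scope the `Resource` element to your own repositories:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "codecommit:GitPull",
      "Resource": "arn:aws:codecommit:region-code:111122223333:my-repo"
    }
  ]
}
```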

 **For private Git repositories**:

Verify repository credentials are correctly configured:

```
# Check repository secret exists
kubectl get secret -n argocd repo-secret-name -o yaml
```

Ensure the secret contains the correct authentication credentials (SSH key, token, or username/password).

 **For repositories using Secrets Manager**:

```
# Verify IAM Capability Role has Secrets Manager permissions
aws iam list-attached-role-policies --role-name my-argocd-capability-role

# Test secret retrieval
aws secretsmanager get-secret-value --secret-id arn:aws:secretsmanager:region-code:111122223333:secret:my-secret
```

## Multi-cluster deployment issues
<a name="_multi_cluster_deployment_issues"></a>

If applications aren’t deploying to remote clusters, verify the cluster registration and access configuration.

 **Check cluster registration**:

```
# List registered clusters
kubectl get secret -n argocd -l argocd.argoproj.io/secret-type=cluster

# Verify cluster secret format
kubectl get secret CLUSTER_SECRET_NAME -n argocd -o yaml
```

Ensure the `server` field contains the EKS cluster ARN, not the Kubernetes API URL.
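
For reference, the following is a sketch of a cluster secret that uses the EKS cluster ARN as the `server` value. The secret name, cluster name, and account ID are placeholders:

```
apiVersion: v1
kind: Secret
metadata:
  name: target-cluster-secret
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
stringData:
  name: target-cluster
  server: arn:aws:eks:region-code:111122223333:cluster/target-cluster
```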

 **Verify target cluster Access Entry**:

On the target cluster, check that the Argo CD Capability Role has an Access Entry:

```
# List access entries (run on target cluster or use AWS CLI)
aws eks list-access-entries --cluster-name target-cluster

# Describe specific access entry
aws eks describe-access-entry \
  --cluster-name target-cluster \
  --principal-arn arn:aws:iam::111122223333:role/my-argocd-capability-role
```

 **Check IAM permissions for cross-account**:

For cross-account deployments, verify the Argo CD Capability Role has an Access Entry on the target cluster. The managed capability uses EKS Access Entries for cross-account access, not IAM role assumption.

For more on multi-cluster configuration, see [Register target clusters](argocd-register-clusters.md).

## Next steps
<a name="_next_steps"></a>
+  [Argo CD considerations](argocd-considerations.md) - Argo CD considerations and best practices
+  [Working with Argo CD](working-with-argocd.md) - Create and manage Argo CD Applications
+  [Register target clusters](argocd-register-clusters.md) - Configure multi-cluster deployments
+  [Troubleshooting EKS Capabilities](capabilities-troubleshooting.md) - General capability troubleshooting guidance

# Comparing EKS Capability for Argo CD to self-managed Argo CD
<a name="argocd-comparison"></a>

The EKS Capability for Argo CD provides a fully managed Argo CD experience that runs in EKS. For a general comparison of EKS Capabilities vs self-managed solutions, see [EKS Capabilities considerations](capabilities-considerations.md). This topic focuses on Argo CD-specific differences, including authentication, multi-cluster management, and upstream feature support.

## Differences from upstream Argo CD
<a name="_differences_from_upstream_argo_cd"></a>

The EKS Capability for Argo CD is based on upstream Argo CD but differs in how it’s accessed, configured, and integrated with AWS services.

 **RBAC and authentication**: The capability comes with three RBAC roles (admin, editor, viewer) and uses AWS Identity Center for authentication instead of Argo CD’s built-in authentication. Configure role mappings through the capability’s `rbacRoleMapping` parameter to map Identity Center groups to Argo CD roles, not through Argo CD’s `argocd-rbac-cm` ConfigMap. The Argo CD UI is hosted with its own direct URL (find it in the EKS console under your cluster’s Capabilities tab), and API access uses AWS authentication and authorization through IAM.

 **Cluster configuration**: The capability does not automatically configure local cluster or hub-and-spoke topologies. You configure your deployment target clusters and EKS access entries. The capability supports only Amazon EKS clusters as deployment targets using EKS cluster ARNs (not Kubernetes API server URLs). The capability does not automatically add the local cluster (`kubernetes.default.svc`) as a deployment target—to deploy to the same cluster where the capability is created, explicitly register that cluster using its ARN.

 **Simplified remote cluster access**: The capability simplifies multi-cluster deployments by using EKS Access Entries to grant Argo CD access to remote clusters, eliminating the need to configure IAM Roles for Service Accounts (IRSA) or set up cross-account IAM role assumptions. The capability also provides transparent access to fully private EKS clusters without requiring VPC peering or specialized networking configuration—AWS manages connectivity between the Argo CD capability and private remote clusters automatically.

 **Direct AWS service integration**: The capability provides direct integration with AWS services through the Capability Role’s IAM permissions. You can reference CodeCommit repositories, ECR Helm charts, and CodeConnections directly in Application resources without creating Repository configurations. This simplifies authentication and eliminates the need to manage separate credentials for AWS services. See [Configure repository access](argocd-configure-repositories.md) for details.

 **Namespace support**: The capability requires you to specify a single namespace where Argo CD Application, ApplicationSet, and AppProject custom resources must be created.

**Note**  
This namespace restriction only applies to Argo CD’s own custom resources (Application, ApplicationSet, AppProject). Your application workloads can be deployed to any namespace in any target cluster. For example, if you create the capability with namespace `argocd`, all Application CRs must be created in the `argocd` namespace, but those Applications can deploy workloads to `default`, `production`, `staging`, or any other namespace.
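
To illustrate, the following is a sketch of an Application created in the `argocd` namespace that deploys workloads to the `production` namespace. The repository URL, path, and names are hypothetical:

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/my-org/my-repo.git
    targetRevision: main
    path: manifests
  destination:
    server: arn:aws:eks:region-code:111122223333:cluster/my-cluster
    namespace: production
```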

**Note**  
The managed capability has specific requirements for CLI usage and AppProject configuration:  
When using the Argo CD CLI, specify applications with the namespace prefix: `argocd app sync namespace/appname` 
AppProject resources must specify `.spec.sourceNamespaces` to define which namespaces the project can watch for Applications (typically set to the namespace you specified when creating the capability)
Resource tracking annotations use the format: `namespace_appname:group/kind:namespace/name` 
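
For example, an AppProject that watches the `argocd` namespace for Applications might look like the following sketch. The destinations and source repositories are intentionally broad here for illustration; tighten them for production:

```
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: my-project
  namespace: argocd
spec:
  sourceNamespaces:
    - argocd
  sourceRepos:
    - "*"
  destinations:
    - server: "*"
      namespace: "*"
```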

 **Unsupported features**: The following features are not available in the managed capability:
+ Config Management Plugins (CMPs) for custom manifest generation
+ Custom Lua scripts for resource health assessment (built-in health checks for standard resources are supported)
+ The Notifications controller
+ Custom SSO providers (only AWS Identity Center is supported, including third-party federated identity through AWS Identity Center)
+ UI extensions and custom banners
+ Direct access to `argocd-cm`, `argocd-params`, and other configuration ConfigMaps
+ Modifying the sync timeout (fixed at 120 seconds)

 **Compatibility**: Applications and ApplicationSets work identically to upstream Argo CD with no changes to your manifests. The capability uses the same Kubernetes APIs and CRDs, so tools like `kubectl` work the same way. The capability fully supports Applications and ApplicationSets, GitOps workflows with automatic sync, multi-cluster deployments, sync policies (automated, prune, self-heal), sync waves and hooks, health assessment for standard Kubernetes resources, rollback capabilities, Git repository sources (HTTPS and SSH), Helm, Kustomize, and plain YAML manifests, GitHub app credentials, projects for multi-tenancy, and resource exclusions and inclusions.

## Using the Argo CD CLI with the managed capability
<a name="argocd-cli-configuration"></a>

The Argo CD CLI works the same as upstream Argo CD for most operations, but authentication and cluster registration differ.

### Prerequisites
<a name="_prerequisites"></a>

Install the Argo CD CLI following the [upstream installation instructions](https://argo-cd.readthedocs.io/en/stable/cli_installation/).

### Configuration
<a name="_configuration"></a>

Configure the CLI using environment variables:

1. Get the Argo CD server URL from the EKS console (under your cluster’s **Capabilities** tab) or by using the AWS CLI. The `ARGOCD_SERVER` value must not include the `https://` prefix, so the following command strips it:

   ```
   export ARGOCD_SERVER=$(aws eks describe-capability \
     --cluster-name my-cluster \
     --capability-name my-argocd \
     --query 'capability.configuration.argoCd.serverUrl' \
     --output text \
     --region region-code | sed 's|^https://||')
   ```

1. Generate an account token from the Argo CD UI (**Settings** → **Accounts** → **admin** → **Generate New Token**), then set it as an environment variable:

   ```
   export ARGOCD_AUTH_TOKEN="your-token-here"
   ```

**Important**  
This configuration uses the admin account token for initial setup and development workflows. For production use cases, use project-scoped roles and tokens to follow the principle of least privilege. For more information about configuring project roles and RBAC, see [Configure Argo CD permissions](argocd-permissions.md).

1. Set the required gRPC option:

   ```
   export ARGOCD_OPTS="--grpc-web"
   ```

With these environment variables set, you can use the Argo CD CLI without the `argocd login` command.

### Key differences
<a name="_key_differences"></a>

The managed capability has the following CLI limitations:
+  `argocd admin` commands are not supported (they require direct pod access)
+  `argocd login` is not supported (use account or project tokens instead)
+  `argocd cluster add` requires the `--aws-cluster-name` flag with the EKS cluster ARN

### Example: Register a cluster
<a name="_example_register_a_cluster"></a>

Register an EKS cluster for application deployment:

```
# Get the cluster ARN
CLUSTER_ARN=$(aws eks describe-cluster \
  --name my-cluster \
  --query 'cluster.arn' \
  --output text)

# Register the cluster
argocd cluster add $CLUSTER_ARN \
  --aws-cluster-name $CLUSTER_ARN \
  --name in-cluster \
  --project default
```

For complete Argo CD CLI documentation, see the [Argo CD CLI reference](https://argo-cd.readthedocs.io/en/stable/user-guide/commands/argocd/).

## Migration Path
<a name="_migration_path"></a>

You can migrate from self-managed Argo CD to the managed capability:

1. Review your current Argo CD configuration for unsupported features (Notifications controller, CMPs, custom health checks, UI extensions)

1. Scale your self-managed Argo CD controllers to zero replicas to prevent conflicts

1. Create an Argo CD capability resource on your cluster

1. Export your existing Applications, ApplicationSets, and AppProjects

1. Migrate repository credentials, cluster secrets, and repository credential templates (repocreds)

1. If using GPG keys, TLS certificates, or SSH known hosts, migrate these configurations as well

1. Update `destination.server` fields to use cluster names or EKS cluster ARNs

1. Apply the exported resources to the managed Argo CD instance

1. Verify applications are syncing correctly

1. Decommission your self-managed Argo CD installation

The managed capability uses the same Argo CD APIs and resource definitions, so your existing manifests work with minimal modification.
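
For step 7 above, a simple text substitution can rewrite a legacy in-cluster `destination.server` value in exported manifests. This is a sketch; the file name and cluster ARN are placeholders, and the first line only creates a sample file for illustration if none exists:

```
# Create a sample exported manifest if one doesn't already exist (illustration only)
[ -f argocd-export.yaml ] || echo 'server: https://kubernetes.default.svc' > argocd-export.yaml

# Rewrite the upstream in-cluster server URL to the EKS cluster ARN,
# keeping a backup of the original as argocd-export.yaml.bak
sed -i.bak \
  's|server: https://kubernetes.default.svc|server: arn:aws:eks:region-code:111122223333:cluster/my-cluster|' \
  argocd-export.yaml
```

Review the rewritten manifests before applying them, since exported files may also contain status fields and annotations that should be removed.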

## Next steps
<a name="_next_steps"></a>
+  [Create an Argo CD capability](create-argocd-capability.md) - Create an Argo CD capability resource
+  [Working with Argo CD](working-with-argocd.md) - Deploy your first application
+  [Argo CD considerations](argocd-considerations.md) - Configure AWS Identity Center integration

# Resource Composition with kro (Kube Resource Orchestrator)
<a name="kro"></a>

 **kro (Kube Resource Orchestrator)** is an open-source, Kubernetes-native project that allows you to define custom Kubernetes APIs using simple and straightforward configuration. With kro, you can easily configure new custom APIs that create a group of Kubernetes objects and the logical operations between them.

With EKS Capabilities, kro is fully managed by AWS, eliminating the need to install, maintain, and scale kro controllers on your clusters.

## How kro Works
<a name="_how_kro_works"></a>

kro introduces a Custom Resource Definition (CRD) called `ResourceGraphDefinition` (RGD) that enables simple and streamlined creation of custom Kubernetes APIs. When you create a `ResourceGraphDefinition`, kro uses native Kubernetes extensions to create and manage new APIs in your cluster. From this single resource specification, kro will create and register a new CRD for you based on your specification and will adapt to manage your newly defined custom resources.

RGDs can include multiple resources, and kro will determine interdependencies and resource ordering, so you don’t have to. You can use simple syntax to inject configuration from one resource to another, greatly simplifying compositions and removing the need for "glue" operators in your cluster. With kro, your custom resources can include native Kubernetes resources as well as any Custom Resource Definitions (CRDs) installed in the cluster.

kro supports a single primary resource type:
+  **ResourceGraphDefinition (RGD)**: Defines a Kubernetes custom resource, encapsulating one or more underlying native or custom Kubernetes resources

In addition to this resource, kro will create and manage the lifecycle of your custom resources created with it, as well as all of their constituent resources.

kro integrates seamlessly with AWS Controllers for Kubernetes (ACK), allowing you to compose workload resources with AWS resources to create higher-level abstractions. This enables you to create your own cloud building blocks, simplifying resource management and enabling reusable patterns with default and immutable configuration settings based on your organizational standards.

## Benefits of kro
<a name="_benefits_of_kro"></a>

kro enables platform teams to create custom Kubernetes APIs that compose multiple resources into higher-level abstractions. This simplifies resource management by allowing developers to deploy complex applications using simple, standardized, and versioned custom resources. You define reusable patterns for common resource combinations, enabling consistent resource creation across your organization.

kro uses [Common Expression Language (CEL) in Kubernetes](https://kubernetes.io/docs/reference/using-api/cel/) for passing values between resources and incorporating conditional logic, providing flexibility in resource composition. You can compose both Kubernetes resources and AWS resources managed by ACK into unified custom APIs, enabling complete application and infrastructure definitions.

kro supports declarative configuration through Kubernetes manifests, enabling GitOps workflows and infrastructure as code practices that integrate seamlessly with your existing development processes. As part of EKS Managed Capabilities, kro is fully managed by AWS, eliminating the need to install, configure, and maintain kro controllers on your clusters.

 **Example: Creating a ResourceGraphDefinition** 

The following example shows a simple `ResourceGraphDefinition` that creates a web application with a Deployment and Service:

```
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: web-application
spec:
  schema:
    apiVersion: v1alpha1
    kind: WebApplication
    spec:
      name: string
      replicas: integer | default=3
  resources:
    - id: deployment
      template:
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: ${schema.spec.name}
        spec:
          replicas: ${schema.spec.replicas}
    - id: service
      template:
        apiVersion: v1
        kind: Service
        metadata:
          name: ${schema.spec.name}
```

When users create instances of the `WebApplication` custom resource, kro automatically creates the corresponding Deployment and Service resources, managing their lifecycle along with the custom resource.
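
Based on that definition, an instance of the custom resource might look like the following sketch (the name is a placeholder):

```
apiVersion: kro.run/v1alpha1
kind: WebApplication
metadata:
  name: my-web-app
spec:
  name: my-web-app
  replicas: 2
```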

## Integration with Other EKS Managed Capabilities
<a name="_integration_with_other_eks_managed_capabilities"></a>

kro integrates with other EKS Managed Capabilities.
+  ** AWS Controllers for Kubernetes (ACK)**: Use kro to compose ACK resources into higher-level abstractions, simplifying AWS resource management.
+  **Argo CD**: Use Argo CD to manage the deployment of kro custom resources across multiple clusters, enabling GitOps workflows for your platform building blocks and application stacks.

## Getting Started with kro
<a name="_getting_started_with_kro"></a>

To get started with the EKS Capability for kro:

1.  [Create a kro capability resource](create-kro-capability.md) on your EKS cluster through the AWS Console, AWS CLI, or your preferred infrastructure as code tool.

1. Create ResourceGraphDefinitions (RGDs) that define your custom APIs and resource compositions.

1. Apply instances of your custom resources to provision and manage the underlying Kubernetes and AWS resources.

# Create a kro capability
<a name="create-kro-capability"></a>

This topic explains how to create a kro capability on your Amazon EKS cluster.

## Prerequisites
<a name="_prerequisites"></a>

Before creating a kro capability, ensure you have:
+ An existing Amazon EKS cluster running a supported Kubernetes version (all versions in standard and extended support are supported)
+ Sufficient IAM permissions to create capability resources on EKS clusters
+ (For CLI/eksctl) The appropriate CLI tool installed and configured

**Note**  
Unlike ACK and Argo CD, kro does not require additional IAM permissions beyond the trust policy. kro operates entirely within your cluster and does not make AWS API calls. However, you still need to provide an IAM Capability Role with the appropriate trust policy. For information about configuring Kubernetes RBAC permissions for kro, see [Configure kro permissions](kro-permissions.md).

## Choose your tool
<a name="_choose_your_tool"></a>

You can create a kro capability using the AWS Management Console, AWS CLI, or eksctl:
+  [Create a kro capability using the Console](kro-create-console.md) - Use the Console for a guided experience
+  [Create a kro capability using the AWS CLI](kro-create-cli.md) - Use the AWS CLI for scripting and automation
+  [Create a kro capability using eksctl](kro-create-eksctl.md) - Use eksctl for a Kubernetes-native experience

## What happens when you create a kro capability
<a name="_what_happens_when_you_create_a_kro_capability"></a>

When you create a kro capability:

1. EKS creates the kro capability service and configures it to monitor and manage resources in your cluster

1. Custom Resource Definitions (CRDs) are installed in your cluster

1. An access entry is automatically created for your IAM Capability Role with the `AmazonEKSKROPolicy`, which grants permissions to manage ResourceGraphDefinitions and their instances (see [Security considerations for EKS Capabilities](capabilities-security.md))

1. The capability assumes the IAM Capability Role you provide (used only for the trust relationship)

1. kro begins watching for `ResourceGraphDefinition` resources and their instances

1. The capability status changes from `CREATING` to `ACTIVE` 

Once active, you can create ResourceGraphDefinitions to define custom APIs and create instances of those APIs.

**Note**  
The automatically created access entry includes the `AmazonEKSKROPolicy` which grants kro permissions to manage ResourceGraphDefinitions and their instances. To allow kro to create the underlying Kubernetes resources defined in your ResourceGraphDefinitions (such as Deployments, Services, or ACK resources), you must configure additional access entry policies. To learn more about access entries and how to configure additional permissions, see [Configure kro permissions](kro-permissions.md) and [Security considerations for EKS Capabilities](capabilities-security.md).

## Next steps
<a name="_next_steps"></a>

After creating the kro capability:
+  [kro concepts](kro-concepts.md) - Understand kro concepts, including SimpleSchema, CEL expressions, and resource composition patterns

# Create a kro capability using the Console
<a name="kro-create-console"></a>

This topic describes how to create a kro (Kube Resource Orchestrator) capability using the AWS Management Console.

## Create the kro capability
<a name="_create_the_kro_capability"></a>

1. Open the Amazon EKS console at https://console.aws.amazon.com/eks/home#/clusters.

1. Select your cluster name to open the cluster detail page.

1. Choose the **Capabilities** tab.

1. In the left navigation, choose **kro (Kube Resource Orchestrator)**.

1. Choose **Create kro capability**.

1. For **IAM Capability Role**:
   + If you already have an IAM Capability Role, select it from the dropdown
   + If you need to create a role, choose **Create kro role** 

     This opens the IAM console in a new tab with a pre-populated trust policy. The role requires no additional IAM permissions because kro operates entirely within your cluster.

     After creating the role, return to the EKS console and the role will be automatically selected.
**Note**  
Unlike ACK and Argo CD, kro does not require additional IAM permissions beyond the trust policy. kro operates entirely within your cluster and does not make AWS API calls.

1. Choose **Create**.

The capability creation process begins.

## Verify the capability is active
<a name="_verify_the_capability_is_active"></a>

1. On the **Capabilities** tab, view the kro capability status.

1. Wait for the status to change from `CREATING` to `ACTIVE`.

1. Once active, the capability is ready to use.

For information about capability statuses and troubleshooting, see [Working with capability resources](working-with-capabilities.md).

## Grant permissions to manage Kubernetes resources
<a name="_grant_permissions_to_manage_kubernetes_resources"></a>

When you create a kro capability, an EKS Access Entry is automatically created with the `AmazonEKSKROPolicy`, which allows kro to manage ResourceGraphDefinitions and their instances. However, no permissions are granted by default to create the underlying Kubernetes resources (like Deployments, Services, ConfigMaps, etc.) defined in your ResourceGraphDefinitions.

This intentional design follows the principle of least privilege—different ResourceGraphDefinitions require different permissions. You must explicitly configure the permissions kro needs based on the resources your ResourceGraphDefinitions will manage.

For getting started quickly, testing, or development environments, use `AmazonEKSClusterAdminPolicy`:

1. In the EKS console, navigate to your cluster’s **Access** tab.

1. Under **Access entries**, find the entry for your kro capability role (it will have the role ARN you created earlier).

1. Choose the access entry to open its details.

1. In the **Access policies** section, choose **Associate access policy**.

1. Select `AmazonEKSClusterAdminPolicy` from the policy list.

1. For **Access scope**, select **Cluster**.

1. Choose **Associate**.

**Important**  
The `AmazonEKSClusterAdminPolicy` grants broad permissions to create and manage all Kubernetes resources, including the ability to create any resource type across all namespaces. This is convenient for development and POCs but should not be used in production. For production, create custom RBAC policies that grant only the permissions needed for the specific resources your ResourceGraphDefinitions will manage. For guidance on configuring least-privilege permissions, see [Configure kro permissions](kro-permissions.md) and [Security considerations for EKS Capabilities](capabilities-security.md).

## Verify custom resources are available
<a name="_verify_custom_resources_are_available"></a>

After the capability is active, verify that kro custom resources are available in your cluster.

 **Using the console** 

1. Navigate to your cluster in the Amazon EKS console

1. Choose the **Resources** tab

1. Choose **Extensions** 

1. Choose **CustomResourceDefinitions** 

You should see the `ResourceGraphDefinition` resource type listed.

 **Using kubectl** 

```
kubectl api-resources | grep kro.run
```

You should see the `ResourceGraphDefinition` resource type listed.

## Next steps
<a name="_next_steps"></a>
+  [kro concepts](kro-concepts.md) - Understand kro concepts, including SimpleSchema, CEL expressions, and composition patterns
+  [Working with capability resources](working-with-capabilities.md) - Manage your kro capability resource

# Create a kro capability using the AWS CLI
<a name="kro-create-cli"></a>

This topic describes how to create a kro (Kube Resource Orchestrator) capability using the AWS CLI.

## Prerequisites
<a name="_prerequisites"></a>
+  ** AWS CLI** – Version `2.12.3` or later. To check your version, run `aws --version`. For more information, see [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the AWS Command Line Interface User Guide.
+  ** `kubectl` ** – A command line tool for working with Kubernetes clusters. For more information, see [Set up `kubectl` and `eksctl`](install-kubectl.md).

## Step 1: Create an IAM Capability Role
<a name="_step_1_create_an_iam_capability_role"></a>

Create a trust policy file:

```
cat > kro-trust-policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "capabilities.eks.amazonaws.com"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:TagSession"
      ]
    }
  ]
}
EOF
```

Create the IAM role:

```
aws iam create-role \
  --role-name KROCapabilityRole \
  --assume-role-policy-document file://kro-trust-policy.json
```

**Note**  
Unlike ACK and Argo CD, kro does not require additional IAM permissions. kro operates entirely within your cluster and does not make AWS API calls. The role is only needed to establish the trust relationship with the EKS capabilities service.

## Step 2: Create the kro capability
<a name="_step_2_create_the_kro_capability"></a>

Create the kro capability resource on your cluster. Replace *region-code* with the AWS Region where your cluster is located (such as `us-west-2`) and *my-cluster* with your cluster name.

```
aws eks create-capability \
  --region region-code \
  --cluster-name my-cluster \
  --capability-name my-kro \
  --type KRO \
  --role-arn arn:aws:iam::$(aws sts get-caller-identity --query Account --output text):role/KROCapabilityRole \
  --delete-propagation-policy RETAIN
```

The command returns immediately, but the capability takes some time to become active as EKS creates the required capability infrastructure and components. EKS will install the Kubernetes Custom Resource Definitions related to this capability in your cluster as it is being created.

**Note**  
If you receive an error that the cluster doesn’t exist or you don’t have permissions, verify:  
The cluster name is correct
Your AWS CLI is configured for the correct region
You have the required IAM permissions

## Step 3: Verify the capability is active
<a name="_step_3_verify_the_capability_is_active"></a>

Wait for the capability to become active. Replace *region-code* with the AWS Region that your cluster is in and replace *my-cluster* with the name of your cluster.

```
aws eks describe-capability \
  --region region-code \
  --cluster-name my-cluster \
  --capability-name my-kro \
  --query 'capability.status' \
  --output text
```

The capability is ready when the status shows `ACTIVE`.

You can also view the full capability details:

```
aws eks describe-capability \
  --region region-code \
  --cluster-name my-cluster \
  --capability-name my-kro
```

## Step 4: Grant permissions to manage Kubernetes resources
<a name="_step_4_grant_permissions_to_manage_kubernetes_resources"></a>

When you create a kro capability, an EKS Access Entry is automatically created with the `AmazonEKSKROPolicy`, which allows kro to manage ResourceGraphDefinitions and their instances. However, no permissions are granted by default to create the underlying Kubernetes resources (like Deployments, Services, ConfigMaps, etc.) defined in your ResourceGraphDefinitions.

This intentional design follows the principle of least privilege—different ResourceGraphDefinitions require different permissions. For example:
+ A ResourceGraphDefinition that creates only ConfigMaps and Secrets needs different permissions than one that creates Deployments and Services
+ A ResourceGraphDefinition that creates ACK resources needs permissions for those specific custom resources
+ Some ResourceGraphDefinitions might only read existing resources without creating new ones

You must explicitly configure the permissions kro needs based on the resources your ResourceGraphDefinitions will manage.

### Quick setup
<a name="_quick_setup"></a>

For getting started quickly, testing, or development environments, use `AmazonEKSClusterAdminPolicy`:

Get the capability role ARN:

```
CAPABILITY_ROLE_ARN=$(aws eks describe-capability \
  --region region-code \
  --cluster-name my-cluster \
  --capability-name my-kro \
  --query 'capability.roleArn' \
  --output text)
```

Associate the cluster admin policy:

```
aws eks associate-access-policy \
  --region region-code \
  --cluster-name my-cluster \
  --principal-arn $CAPABILITY_ROLE_ARN \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster
```

**Important**  
The `AmazonEKSClusterAdminPolicy` grants broad permissions to create and manage all Kubernetes resources, including the ability to create any resource type across all namespaces. This is convenient for development and POCs but should not be used in production. For production, create custom RBAC policies that grant only the permissions needed for the specific resources your ResourceGraphDefinitions will manage. For guidance on configuring least-privilege permissions, see [Configure kro permissions](kro-permissions.md) and [Security considerations for EKS Capabilities](capabilities-security.md).

## Step 5: Verify custom resources are available
<a name="_step_5_verify_custom_resources_are_available"></a>

After the capability is active, verify that kro custom resources are available in your cluster:

```
kubectl api-resources | grep kro.run
```

You should see the `ResourceGraphDefinition` resource type listed.

## Next steps
<a name="_next_steps"></a>
+  [kro concepts](kro-concepts.md) - Understand kro concepts, including SimpleSchema, CEL expressions, and composition patterns
+  [Working with capability resources](working-with-capabilities.md) - Manage your kro capability resource

# Create a kro capability using eksctl
<a name="kro-create-eksctl"></a>

This topic describes how to create a kro (Kube Resource Orchestrator) capability using eksctl.

**Note**  
The following steps require eksctl version `0.220.0` or later. To check your version, run `eksctl version`.

## Step 1: Create an IAM Capability Role
<a name="_step_1_create_an_iam_capability_role"></a>

Create a trust policy file:

```
cat > kro-trust-policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "capabilities.eks.amazonaws.com"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:TagSession"
      ]
    }
  ]
}
EOF
```

Create the IAM role:

```
aws iam create-role \
  --role-name KROCapabilityRole \
  --assume-role-policy-document file://kro-trust-policy.json
```

**Note**  
Unlike ACK and Argo CD, kro does not require additional IAM permissions beyond the trust policy. kro operates entirely within your cluster and does not make AWS API calls.

## Step 2: Create the kro capability
<a name="_step_2_create_the_kro_capability"></a>

Create the kro capability using eksctl. Replace *region-code* with the AWS Region that your cluster is in, replace *my-cluster* with the name of your cluster, and replace *111122223333* with your AWS account ID.

```
eksctl create capability \
  --region region-code \
  --cluster my-cluster \
  --name my-kro \
  --type KRO \
  --role-arn arn:aws:iam::111122223333:role/KROCapabilityRole
```

The command returns immediately, but the capability takes some time to become active.

## Step 3: Verify the capability is active
<a name="_step_3_verify_the_capability_is_active"></a>

Check the capability status. Replace *region-code* with the AWS Region that your cluster is in and replace *my-cluster* with the name of your cluster.

```
eksctl get capability \
  --region region-code \
  --cluster my-cluster \
  --name my-kro
```

The capability is ready when the status shows `ACTIVE`.

## Step 4: Grant permissions to manage Kubernetes resources
<a name="_step_4_grant_permissions_to_manage_kubernetes_resources"></a>

By default, kro can only create and manage ResourceGraphDefinitions and their instances. To allow kro to create and manage the underlying Kubernetes resources defined in your ResourceGraphDefinitions, associate the `AmazonEKSClusterAdminPolicy` access policy with the capability’s access entry.

Get the capability role ARN:

```
CAPABILITY_ROLE_ARN=$(aws eks describe-capability \
  --region region-code \
  --cluster-name my-cluster \
  --capability-name my-kro \
  --query 'capability.roleArn' \
  --output text)
```

Associate the cluster admin policy:

```
aws eks associate-access-policy \
  --region region-code \
  --cluster my-cluster \
  --principal-arn $CAPABILITY_ROLE_ARN \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster
```

**Important**  
The `AmazonEKSClusterAdminPolicy` grants broad permissions to create and manage all Kubernetes resources and is intended to streamline getting started. For production use, create more restrictive RBAC policies that grant only the permissions needed for the specific resources your ResourceGraphDefinitions will manage. For guidance on configuring least-privilege permissions, see [Configure kro permissions](kro-permissions.md) and [Security considerations for EKS Capabilities](capabilities-security.md).

## Step 5: Verify custom resources are available
<a name="_step_5_verify_custom_resources_are_available"></a>

After the capability is active, verify that kro custom resources are available in your cluster:

```
kubectl api-resources | grep kro.run
```

You should see the `ResourceGraphDefinition` resource type listed.

## Next steps
<a name="_next_steps"></a>
+  [kro concepts](kro-concepts.md) - Understand kro concepts, resource composition, SimpleSchema, and CEL expressions
+  [Working with capability resources](working-with-capabilities.md) - Manage your kro capability resource

# kro concepts
<a name="kro-concepts"></a>

kro enables platform teams to create custom Kubernetes APIs that compose multiple resources into higher-level abstractions. This topic walks through a practical example, then explains the core concepts you need to understand when working with the EKS Capability for kro.

## Getting started with kro
<a name="_getting_started_with_kro"></a>

After creating the kro capability (see [Create a kro capability](create-kro-capability.md)), you can start creating custom APIs using ResourceGraphDefinitions in your cluster.

Here’s a complete example that creates a simple web application abstraction:

```
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: webapplication
spec:
  schema:
    apiVersion: v1alpha1
    kind: WebApplication
    group: kro.run
    spec:
      name: string | required=true
      image: string | default="nginx:latest"
      replicas: integer | default=3
  resources:
  - id: deployment
    template:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: ${schema.spec.name}
      spec:
        replicas: ${schema.spec.replicas}
        selector:
          matchLabels:
            app: ${schema.spec.name}
        template:
          metadata:
            labels:
              app: ${schema.spec.name}
          spec:
            containers:
            - name: app
              image: ${schema.spec.image}
              ports:
              - containerPort: 80
  - id: service
    template:
      apiVersion: v1
      kind: Service
      metadata:
        name: ${schema.spec.name}
      spec:
        selector:
          app: ${schema.spec.name}
        ports:
        - protocol: TCP
          port: 80
          targetPort: 80
```

After applying this ResourceGraphDefinition, application teams can create web applications using your simplified API:

```
apiVersion: kro.run/v1alpha1
kind: WebApplication
metadata:
  name: my-app
spec:
  name: my-app
  replicas: 5
```

kro automatically creates the Deployment and Service with the appropriate configuration. Since `image` isn’t specified, it uses the default value `nginx:latest` from the schema.

## Core concepts
<a name="_core_concepts"></a>

**Important**  
kro validates ResourceGraphDefinitions at creation time, not at runtime. When you create an RGD, kro validates CEL syntax, type-checks expressions against actual Kubernetes schemas, verifies field existence, and detects circular dependencies. This means errors are caught immediately when you create the RGD, before any instances are deployed.

### ResourceGraphDefinition
<a name="_resourcegraphdefinition"></a>

A ResourceGraphDefinition (RGD) defines a custom Kubernetes API by specifying:
+  **Schema** - The API structure using SimpleSchema format (field names, types, defaults, validation)
+  **Resources** - Templates for the underlying Kubernetes or AWS resources to create
+  **Dependencies** - How resources relate to each other (automatically detected from field references)

When you apply an RGD, kro registers a new Custom Resource Definition (CRD) in your cluster. Application teams can then create instances of your custom API, and kro handles creating and managing all the underlying resources.

For more information, see [ResourceGraphDefinition Overview](https://kro.run/docs/concepts/rgd/overview/) in the kro documentation.

### SimpleSchema format
<a name="_simpleschema_format"></a>

SimpleSchema provides a simplified way to define API schemas without requiring OpenAPI knowledge:

```
schema:
  apiVersion: v1alpha1
  kind: Database
  spec:
    name: string | required=true description="Database name"
    size: string | default="small" enum=small,medium,large
    replicas: integer | default=1 minimum=1 maximum=5
```

SimpleSchema supports `string`, `integer`, `boolean`, and `number` types with constraints like `required`, `default`, `minimum`/`maximum`, `enum`, and `pattern`.
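For example, an instance of the `Database` API above only needs to supply the required `name` field (assuming the schema's group is `kro.run`, as in the earlier example):

```
apiVersion: kro.run/v1alpha1
kind: Database
metadata:
  name: orders-db
spec:
  name: orders
  # size and replicas are omitted, so kro applies the
  # schema defaults: size=small, replicas=1
```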

For more information, see [SimpleSchema](https://kro.run/docs/concepts/rgd/schema/) in the kro documentation.

### CEL expressions
<a name="_cel_expressions"></a>

kro uses Common Expression Language (CEL) to reference values dynamically and add conditional logic. CEL expressions are wrapped in `${` and `}` and can be used in two ways:

 **Standalone expressions** - The entire field value is a single expression:

```
spec:
  replicas: ${schema.spec.replicaCount}  # Expression returns integer
  labels: ${schema.spec.labelMap}        # Expression returns object
```

The expression result replaces the entire field value and must match the field’s expected type.

 **String templates** - One or more expressions embedded in a string:

```
metadata:
  name: "${schema.spec.prefix}-${schema.spec.name}"  # Multiple expressions
  annotation: "Created by ${schema.spec.owner}"      # Single expression in string
```

All expressions in string templates must return strings. Use `string()` to convert other types: `"replicas-${string(schema.spec.count)}"`.

 **Field references** - Access instance spec values using `schema.spec`:

```
template:
  metadata:
    name: ${schema.spec.name}-deployment
    namespace: ${schema.metadata.namespace}  # Can also reference metadata
  spec:
    replicas: ${schema.spec.replicas}
```

 **Optional field access** - Use `?` for fields that might not exist:

```
# For ConfigMaps or Secrets with unknown structure
value: ${configmap.data.?DATABASE_URL}

# For optional status fields
ready: ${deployment.status.?readyReplicas > 0}
```

If the field doesn’t exist, the expression returns `null` instead of failing.

 **Conditional resources** - Include resources only when conditions are met:

```
resources:
- id: ingress
  includeWhen:
    - ${schema.spec.enableIngress == true}
  template:
    # ... ingress configuration
```

The `includeWhen` field accepts a list of boolean expressions. All conditions must be true for the resource to be created. Currently, `includeWhen` can only reference `schema.spec` fields.

 **Transformations** - Transform values using ternary operators and functions:

```
template:
  spec:
    resources:
      requests:
        memory: ${schema.spec.size == "small" ? "512Mi" : "2Gi"}

    # String concatenation
    image: ${schema.spec.registry + "/" + schema.spec.imageName}

    # Type conversion
    port: ${string(schema.spec.portNumber)}
```

 **Cross-resource references** - Reference values from other resources:

```
resources:
- id: bucket
  template:
    apiVersion: s3.services.k8s.aws/v1alpha1
    kind: Bucket
    spec:
      name: ${schema.spec.name}-data

- id: configmap
  template:
    apiVersion: v1
    kind: ConfigMap
    data:
      BUCKET_NAME: ${bucket.spec.name}
      BUCKET_ARN: ${bucket.status.ackResourceMetadata.arn}
```

When you reference another resource in a CEL expression, it automatically creates a dependency. kro ensures the referenced resource is created first.

For more information, see [CEL Expressions](https://kro.run/docs/concepts/rgd/cel-expressions/) in the kro documentation.

### Resource dependencies
<a name="_resource_dependencies"></a>

kro automatically infers dependencies from CEL expressions—you don’t specify the order, you describe relationships. When one resource references another using a CEL expression, kro creates a dependency and determines the correct creation order.

```
resources:
- id: bucket
  template:
    apiVersion: s3.services.k8s.aws/v1alpha1
    kind: Bucket
    spec:
      name: ${schema.spec.name}-data

- id: notification
  template:
    apiVersion: s3.services.k8s.aws/v1alpha1
    kind: BucketNotification
    spec:
      bucket: ${bucket.spec.name}  # Creates dependency: notification depends on bucket
```

The expression `${bucket.spec.name}` creates a dependency. kro builds a directed acyclic graph (DAG) of all resources and their dependencies, then computes a topological order for creation.

 **Creation order**: Resources are created in topological order (dependencies first).

 **Parallel creation**: Resources with no dependencies are created simultaneously.

 **Deletion order**: Resources are deleted in reverse topological order (dependents first).

 **Circular dependencies**: Not allowed—kro rejects ResourceGraphDefinitions with circular dependencies during validation.
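For illustration, the following contrived pair of ConfigMaps reference each other's values, which creates a cycle; an RGD containing resources like these is rejected during validation:

```
resources:
- id: configA
  template:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: config-a
    data:
      value: ${configB.data.value}  # configA depends on configB
- id: configB
  template:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: config-b
    data:
      value: ${configA.data.value}  # configB depends on configA: circular
```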

You can view the computed creation order:

```
kubectl get resourcegraphdefinition my-rgd -o jsonpath='{.status.topologicalOrder}'
```

For more information, see [Graph inference](https://kro.run/docs/concepts/rgd/dependencies-ordering/) in the kro documentation.

## Composing with ACK
<a name="_composing_with_ack"></a>

kro works seamlessly with the EKS Capability for ACK to compose AWS resources with Kubernetes resources:

```
resources:
# Create AWS S3 bucket with ACK
- id: bucket
  template:
    apiVersion: s3.services.k8s.aws/v1alpha1
    kind: Bucket
    spec:
      name: ${schema.spec.name}-files

# Inject bucket details into Kubernetes ConfigMap
- id: config
  template:
    apiVersion: v1
    kind: ConfigMap
    data:
      BUCKET_NAME: ${bucket.spec.name}
      BUCKET_ARN: ${bucket.status.ackResourceMetadata.arn}

# Use ConfigMap in application deployment
- id: deployment
  template:
    apiVersion: apps/v1
    kind: Deployment
    spec:
      template:
        spec:
          containers:
          - name: app
            envFrom:
            - configMapRef:
                name: ${config.metadata.name}
```

This pattern lets you create AWS resources, extract their details (ARNs, URLs, endpoints), and inject them into your application configuration—all managed as a single unit.

For more composition patterns and advanced examples, see [kro considerations for EKS](kro-considerations.md).

## Next steps
<a name="_next_steps"></a>
+  [kro considerations for EKS](kro-considerations.md) - Learn about EKS-specific patterns, RBAC, and integration with ACK and Argo CD
+  [kro Documentation](https://kro.run/docs/overview) - Comprehensive kro documentation including advanced CEL expressions, validation patterns, and troubleshooting

# Configure kro permissions
<a name="kro-permissions"></a>

Unlike ACK and Argo CD, kro does not require IAM permissions. kro operates entirely within your Kubernetes cluster and does not make AWS API calls. Control access to kro resources using standard Kubernetes RBAC.

## How permissions work with kro
<a name="_how_permissions_work_with_kro"></a>

kro uses two types of Kubernetes resources with different scopes:

 **ResourceGraphDefinitions**: Cluster-scoped resources that define custom APIs. Typically managed by platform teams who design and maintain organizational standards.

 **Instances**: Namespace-scoped custom resources created from ResourceGraphDefinitions. Can be created by application teams with appropriate RBAC permissions.

By default, the kro capability has permissions to manage ResourceGraphDefinitions and their instances through the `AmazonEKSKROPolicy` access entry policy. However, kro requires additional permissions to create and manage the underlying Kubernetes resources defined in your ResourceGraphDefinitions (such as Deployments, Services, or ACK resources). You must grant these permissions through access entry policies or Kubernetes RBAC. For details on granting these permissions, see [kro arbitrary resource permissions](capabilities-security.md#kro-resource-permissions).

## Platform team permissions
<a name="_platform_team_permissions"></a>

Platform teams need permissions to create and manage ResourceGraphDefinitions.

 **Example ClusterRole for platform teams**:

```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kro-platform-admin
rules:
- apiGroups: ["kro.run"]
  resources: ["resourcegraphdefinitions"]
  verbs: ["*"]
```

 **Bind to platform team members**:

```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: platform-team-kro-admin
subjects:
- kind: Group
  name: platform-team
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: kro-platform-admin
  apiGroup: rbac.authorization.k8s.io
```

## Application team permissions
<a name="_application_team_permissions"></a>

Application teams need permissions to create instances of custom resources in their namespaces.

 **Example Role for application teams**:

```
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kro-app-developer
  namespace: my-app
rules:
- apiGroups: ["kro.run"]
  resources: ["webapps", "databases"]
  verbs: ["create", "get", "list", "update", "delete", "patch"]
```

 **Bind to application team members**:

```
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-team-kro-developer
  namespace: my-app
subjects:
- kind: Group
  name: app-team
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: kro-app-developer
  apiGroup: rbac.authorization.k8s.io
```

**Note**  
The API group in the Role (`kro.run` in this example) must match the `apiVersion` defined in your ResourceGraphDefinition’s schema.
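For example, if a ResourceGraphDefinition declares a hypothetical `WebApp` kind in the `kro.run` group, the Role must grant verbs on the pluralized kind in that same group:

```
# In the ResourceGraphDefinition schema:
schema:
  apiVersion: v1alpha1
  kind: WebApp
  group: kro.run

# In the Role, apiGroups and the pluralized resource name must match:
rules:
- apiGroups: ["kro.run"]
  resources: ["webapps"]
  verbs: ["create", "get", "list"]
```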

## Read-only access
<a name="_read_only_access"></a>

Grant read-only access to view ResourceGraphDefinitions and instances without modification permissions.

 **Read-only ClusterRole**:

```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kro-viewer
rules:
- apiGroups: ["kro.run"]
  resources: ["resourcegraphdefinitions"]
  verbs: ["get", "list", "watch"]
```

 **Read-only Role for instances**:

```
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kro-instance-viewer
  namespace: my-app
rules:
- apiGroups: ["kro.run"]
  resources: ["webapps", "databases"]
  verbs: ["get", "list", "watch"]
```

## Multi-namespace access
<a name="_multi_namespace_access"></a>

Grant application teams access to multiple namespaces using ClusterRoles with RoleBindings.

 **ClusterRole for multi-namespace access**:

```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kro-multi-namespace-developer
rules:
- apiGroups: ["kro.run"]
  resources: ["webapps"]
  verbs: ["create", "get", "list", "update", "delete"]
```

 **Bind to specific namespaces**:

```
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-team-dev-access
  namespace: development
subjects:
- kind: Group
  name: app-team
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: kro-multi-namespace-developer
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-team-staging-access
  namespace: staging
subjects:
- kind: Group
  name: app-team
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: kro-multi-namespace-developer
  apiGroup: rbac.authorization.k8s.io
```

## Best practices
<a name="_best_practices"></a>

 **Principle of least privilege**: Grant only the minimum permissions needed for each team’s responsibilities.

 **Use groups instead of individual users**: Bind roles to groups rather than individual users for easier management.

 **Separate platform and application concerns**: Platform teams manage ResourceGraphDefinitions, application teams manage instances.

 **Namespace isolation**: Use namespaces to isolate different teams or environments, with RBAC controlling access to each namespace.

 **Read-only access for auditing**: Provide read-only access to security and compliance teams for auditing purposes.

## Next steps
<a name="_next_steps"></a>
+  [kro concepts](kro-concepts.md) - Understand kro concepts, resource composition, SimpleSchema, and CEL expressions
+  [Security considerations for EKS Capabilities](capabilities-security.md) - Review security best practices for capabilities

# kro considerations for EKS
<a name="kro-considerations"></a>

This topic covers important considerations for using the EKS Capability for kro, including when to use resource composition, RBAC patterns, and integration with other EKS capabilities.

## When to use kro
<a name="_when_to_use_kro"></a>

kro is designed for creating reusable infrastructure patterns and custom APIs that simplify complex resource management.

 **Use kro when you need to**:
+ Create self-service platforms with simplified APIs for application teams
+ Standardize infrastructure patterns across teams (database + backup + monitoring)
+ Manage resource dependencies and pass values between resources
+ Build custom abstractions that hide implementation complexity
+ Compose multiple ACK resources into higher-level building blocks
+ Enable GitOps workflows for complex infrastructure stacks

 **Don’t use kro when**:
+ Managing simple, standalone resources (use ACK or Kubernetes resources directly)
+ You need dynamic runtime logic (kro is declarative, not imperative)
+ Resources don’t have dependencies or shared configuration

kro excels at creating "paved paths" - opinionated, reusable patterns that make it easy for teams to deploy complex infrastructure correctly.

## RBAC patterns
<a name="_rbac_patterns"></a>

kro enables separation of concerns between platform teams who create ResourceGraphDefinitions and application teams who create instances.

### Platform team responsibilities
<a name="_platform_team_responsibilities"></a>

Platform teams create and maintain ResourceGraphDefinitions (RGDs) that define custom APIs.

 **Permissions needed**:
+ Create, update, delete ResourceGraphDefinitions
+ Manage underlying resource types (Deployments, Services, ACK resources)
+ Access to all namespaces where RGDs will be used

 **Example ClusterRole for platform team**:

```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: platform-kro-admin
rules:
- apiGroups: ["kro.run"]
  resources: ["resourcegraphdefinitions"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]
```

For detailed RBAC configuration, see [Configure kro permissions](kro-permissions.md).

### Application team responsibilities
<a name="_application_team_responsibilities"></a>

Application teams create instances of custom resources defined by RGDs without needing to understand the underlying complexity.

 **Permissions needed**:
+ Create, update, delete instances of custom resources
+ Read access to their namespace
+ No access to underlying resources or RGDs

 **Benefits**:
+ Teams use simple, high-level APIs
+ Platform teams control implementation details
+ Reduced risk of misconfiguration
+ Faster onboarding for new team members

## Integration with other EKS capabilities
<a name="_integration_with_other_eks_capabilities"></a>

### Composing ACK resources
<a name="_composing_ack_resources"></a>

kro is particularly powerful when combined with ACK to create infrastructure patterns.

 **Common patterns**:
+  **Application with storage**: S3 bucket + SQS queue + notification configuration
+  **Database stack**: RDS instance + parameter group + security group + Secrets Manager secret
+  **Networking**: VPC + subnets + route tables + security groups + NAT gateways
+  **Compute with storage**: EC2 instance + EBS volumes + IAM instance profile

kro handles dependency ordering, passes values between resources (like ARNs and connection strings), and manages the full lifecycle as a single unit.

For examples of composing ACK resources, see [ACK concepts](ack-concepts.md).

### GitOps with Argo CD
<a name="_gitops_with_argo_cd"></a>

Use the EKS Capability for Argo CD to deploy both RGDs and instances from Git repositories.

 **Repository organization**:
+  **Platform repo**: Contains ResourceGraphDefinitions managed by platform team
+  **Application repos**: Contain instances of custom resources managed by app teams
+  **Shared repo**: Contains both RGDs and instances for smaller organizations

 **Considerations**:
+ Deploy RGDs before instances (Argo CD sync waves can help)
+ Use separate Argo CD Projects for platform and application teams
+ Platform team controls RGD repository access
+ Application teams have read-only access to RGD definitions
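As a sketch, Argo CD sync waves can order RGDs ahead of the instances that depend on them; a resource in a lower-numbered wave syncs first. The `argocd.argoproj.io/sync-wave` annotation is standard Argo CD, and the manifest name below is illustrative:

```
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: webapplication
  annotations:
    # Sync in an earlier wave so the generated CRD exists
    # before any instances are applied
    argocd.argoproj.io/sync-wave: "-1"
spec:
  # ...
```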

For more on Argo CD, see [Working with Argo CD](working-with-argocd.md).

## Organizing ResourceGraphDefinitions
<a name="_organizing_resourcegraphdefinitions"></a>

Organize RGDs by purpose, complexity, and ownership.

 **By purpose**:
+  **Infrastructure**: Database stacks, networking, storage
+  **Application**: Web apps, APIs, batch jobs
+  **Platform**: Shared services, monitoring, logging

 **By complexity**:
+  **Simple**: 2-3 resources with minimal dependencies
+  **Moderate**: 5-10 resources with some dependencies
+  **Complex**: 10+ resources with complex dependencies

 **Naming conventions**:
+ Use descriptive names: `webapp-with-database`, `s3-notification-queue`
+ Include version in name for breaking changes: `webapp-v2`
+ Use consistent prefixes for related RGDs: `platform-`, `app-`

 **Namespace strategy**:
+ RGDs are cluster-scoped (not namespaced)
+ Instances are namespaced
+ Use namespace selectors in RGDs to control where instances can be created

## Versioning and updates
<a name="_versioning_and_updates"></a>

Plan for RGD evolution and instance migration.

 **RGD updates**:
+  **Non-breaking changes**: Update RGD in place (add optional fields, new resources with includeWhen)
+  **Breaking changes**: Create new RGD with different name (webapp-v2)
+  **Deprecation**: Mark old RGDs with annotations, communicate migration timeline
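For example, a deprecation convention might look like the following; the annotation keys are illustrative (kro does not define standard deprecation annotations), so choose keys that fit your organization:

```
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: webapp
  annotations:
    # Hypothetical organizational convention, not a kro feature
    platform.example.com/deprecated: "true"
    platform.example.com/replacement: "webapp-v2"
```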

 **Instance migration**:
+ Create new instances with updated RGD
+ Validate new instances work correctly
+ Delete old instances
+ kro handles underlying resource updates automatically

 **Best practices**:
+ Test RGD changes in non-production environments first
+ Use semantic versioning in RGD names for major changes
+ Document breaking changes and migration paths
+ Provide migration examples for application teams

## Validation and testing
<a name="_validation_and_testing"></a>

Validate RGDs before deploying to production.

 **Validation strategies**:
+  **Schema validation**: kro validates RGD structure automatically
+  **Dry-run instances**: Create test instances in development namespaces
+  **Integration tests**: Verify composed resources work together
+  **Policy enforcement**: Use admission controllers to enforce organizational standards

 **Common issues to test**:
+ Resource dependencies and ordering
+ Value passing between resources (CEL expressions)
+ Conditional resource inclusion (includeWhen)
+ Status propagation from underlying resources
+ RBAC permissions for instance creation

## Upstream documentation
<a name="_upstream_documentation"></a>

For detailed information on using kro:
+  [Getting Started with kro](https://kro.run/docs/guides/getting-started) - Creating ResourceGraphDefinitions
+  [CEL Expressions](https://kro.run/docs/concepts/cel) - Writing CEL expressions
+  [kro Guides](https://kro.run/docs/guides/) - Advanced composition patterns
+  [Troubleshooting](https://kro.run/docs/troubleshooting) - Troubleshooting and debugging

## Next steps
<a name="_next_steps"></a>
+  [Configure kro permissions](kro-permissions.md) - Configure RBAC for platform and application teams
+  [kro concepts](kro-concepts.md) - Understand kro concepts and resource lifecycle
+  [Troubleshoot issues with kro capabilities](kro-troubleshooting.md) - Troubleshoot kro issues
+  [ACK concepts](ack-concepts.md) - Learn about ACK resources for composition
+  [Working with Argo CD](working-with-argocd.md) - Deploy RGDs and instances with GitOps

# Troubleshoot issues with kro capabilities
<a name="kro-troubleshooting"></a>

This topic provides troubleshooting guidance for the EKS Capability for kro, including capability health checks, RBAC permissions, CEL expression errors, and resource composition issues.

**Note**  
EKS Capabilities are fully managed and run outside your cluster. You don’t have access to controller logs or the `kro-system` namespace. Troubleshooting focuses on capability health, RBAC configuration, and resource status.

## Capability is ACTIVE but ResourceGraphDefinitions aren’t working
<a name="_capability_is_active_but_resourcegraphdefinitions_arent_working"></a>

If your kro capability shows `ACTIVE` status but ResourceGraphDefinitions aren’t creating underlying resources, check the capability health, RBAC permissions, and resource status.

 **Check capability health**:

You can view capability health and status issues in the EKS console or using the AWS CLI.

 **Console**:

1. Open the Amazon EKS console at https://console.aws.amazon.com/eks/home#/clusters.

1. Select your cluster name.

1. Choose the **Observability** tab.

1. Choose **Monitor cluster**.

1. Choose the **Capabilities** tab to view health and status for all capabilities.

 **AWS CLI**:

```
# View capability status and health
aws eks describe-capability \
  --region region-code \
  --cluster-name my-cluster \
  --capability-name my-kro

# Look for issues in the health section
```

 **Common causes**:
+  **RBAC permissions missing**: kro lacks permissions to create underlying Kubernetes resources
+  **Invalid CEL expressions**: Syntax errors in ResourceGraphDefinition
+  **Resource dependencies**: Dependent resources not ready
+  **Schema validation**: Instance doesn’t match RGD schema requirements

 **Verify RBAC permissions**:

```
# Check if capability has cluster admin policy
kubectl get accessentry -A | grep kro
```

If the capability doesn’t have the required permissions, associate the `AmazonEKSClusterAdminPolicy` with the kro capability’s access entry, or create more restrictive RBAC policies for production use. See [Configure kro permissions](kro-permissions.md) for details.

 **Check ResourceGraphDefinition status**:

```
# List all RGDs
kubectl get resourcegraphdefinition

# Describe specific RGD
kubectl describe resourcegraphdefinition my-rgd

# Check for validation errors
kubectl get resourcegraphdefinition my-rgd -o jsonpath='{.status.conditions}'
```

ResourceGraphDefinitions have three key status conditions:
+  `ResourceGraphAccepted` - Whether the RGD passed validation (CEL syntax, type checking, field existence)
+  `KindReady` - Whether the CRD for your custom API was generated and registered
+  `ControllerReady` - Whether kro is actively watching for instances of your custom API

If `ResourceGraphAccepted` is `False`, check the condition message for validation errors like unknown fields, type mismatches, or circular dependencies.

## Instances created but underlying resources not appearing
<a name="_instances_created_but_underlying_resources_not_appearing"></a>

If custom resource instances exist but the underlying Kubernetes resources (Deployments, Services, ConfigMaps) aren’t being created, verify kro has permissions and check for composition errors.

 **Check instance status**:

```
# Describe the instance (replace with your custom resource kind and name)
kubectl describe custom-kind my-instance

# View instance events
kubectl get events --field-selector involvedObject.name=my-instance

# Check instance status conditions
kubectl get custom-kind my-instance -o jsonpath='{.status.conditions}'

# Check instance state
kubectl get custom-kind my-instance -o jsonpath='{.status.state}'
```

Instances have a `state` field showing high-level status:
+  `ACTIVE` - Instance is successfully running
+  `IN_PROGRESS` - Instance is being processed or reconciled
+  `FAILED` - Instance failed to reconcile
+  `DELETING` - Instance is being deleted
+  `ERROR` - An error occurred during processing

Instances also have four status conditions:
+  `InstanceManaged` - Finalizers and labels are properly set
+  `GraphResolved` - Runtime graph created and resources resolved
+  `ResourcesReady` - All resources created and ready
+  `Ready` - Overall instance health (only becomes `True` when all sub-conditions are `True`)

Focus on the `Ready` condition to determine instance health. If `Ready` is `False`, check the sub-conditions to identify which phase failed.
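The same triage can be scripted. A minimal sketch that filters a `status.conditions` array for conditions that are not `True`; the sample payload below is illustrative, not real cluster output, and in practice you would feed in the jsonpath query result from above:

```
# Illustrative status.conditions payload (in practice, capture the output of
# the jsonpath query shown above)
conditions='[{"type":"InstanceManaged","status":"True"},{"type":"GraphResolved","status":"True"},{"type":"ResourcesReady","status":"False"},{"type":"Ready","status":"False"}]'

# Split the JSON objects onto separate lines, then keep only the condition
# types whose status is False
failing=$(echo "$conditions" | tr '}' '\n' | grep '"status":"False"' | grep -o '"type":"[^"]*"')
echo "$failing"
```

Here the output would name `ResourcesReady` and `Ready`, telling you the failure is in resource creation rather than graph resolution.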

 **Verify RBAC permissions**:

The kro capability needs permissions to create the underlying Kubernetes resources defined in your ResourceGraphDefinitions.

```
# List access entries and look for the kro capability's role, then check
# which access policies are associated with it
aws eks list-access-entries --cluster-name my-cluster --region region-code
```

If permissions are missing, associate the `AmazonEKSClusterAdminPolicy` with the kro capability’s access entry, or create more restrictive RBAC policies for production use. See [Configure kro permissions](kro-permissions.md) for details.

## CEL expression errors
<a name="_cel_expression_errors"></a>

CEL expression errors are caught at ResourceGraphDefinition creation time, not when instances are created. kro validates all CEL syntax, type-checks expressions against Kubernetes schemas, and verifies field existence when you create the RGD.

 **Common CEL validation errors**:
+  **Undefined field reference**: Referencing a field that doesn’t exist in the schema or resource
+  **Type mismatch**: Expression returns wrong type (e.g., string where integer expected)
+  **Invalid syntax**: Missing brackets, quotes, or operators in CEL expression
+  **Unknown resource type**: Referencing a CRD that doesn’t exist in the cluster

 **Check RGD validation status**:

```
# Check if RGD was accepted
kubectl get resourcegraphdefinition my-rgd -o jsonpath='{.status.conditions[?(@.type=="ResourceGraphAccepted")]}'

# View detailed validation errors
kubectl describe resourcegraphdefinition my-rgd
```

If `ResourceGraphAccepted` is `False`, the condition message contains the validation error.

 **Example valid CEL expressions**:

```
# Reference schema field
${schema.spec.appName}

# Conditional expression
${schema.spec.replicas > 1}

# String template (expressions must return strings)
name: "${schema.spec.appName}-service"

# Standalone expression (can be any type)
replicas: ${schema.spec.replicaCount}

# Resource reference
${deployment.status.availableReplicas}

# Optional field access (returns null if field doesn't exist)
${configmap.data.?DATABASE_URL}
```

## Resource dependencies not resolving
<a name="_resource_dependencies_not_resolving"></a>

kro automatically infers dependencies from CEL expressions and creates resources in the correct order. If resources aren’t being created as expected, check the dependency order and resource readiness.

 **View computed creation order**:

```
# See the order kro will create resources
kubectl get resourcegraphdefinition my-rgd -o jsonpath='{.status.topologicalOrder}'
```

This shows the computed order based on CEL expression references between resources.
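Dependencies come from expression references between resources. In this illustrative fragment, the `service` resource references the `deployment` resource's metadata, so kro places `deployment` before `service` in the topological order:

```
resources:
  - id: deployment
    template:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: ${schema.spec.appName}
  - id: service
    template:
      apiVersion: v1
      kind: Service
      metadata:
        # Referencing the deployment creates a dependency edge:
        # deployment is created before service
        name: ${deployment.metadata.name}-svc
```

A circular reference between two resources (each referencing the other) fails RGD validation rather than producing a partial creation order.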

 **Check resource readiness**:

```
# View instance status to see which resources are ready
kubectl get custom-kind my-instance -o jsonpath='{.status}'

# Check specific resource status
kubectl get deployment my-deployment -o jsonpath='{.status.conditions}'
```

 **Verify readyWhen conditions (if used)**:

The `readyWhen` field is optional. If not specified, resources are considered ready immediately after creation. If you’ve defined `readyWhen` conditions, verify they correctly check for resource readiness:

```
resources:
  - id: deployment
    readyWhen:
      - ${deployment.status.availableReplicas == deployment.spec.replicas}
```

 **Check resource events**:

```
# View events for the underlying resources
kubectl get events -n namespace --sort-by='.lastTimestamp'
```

## Schema validation failures
<a name="_schema_validation_failures"></a>

If instances fail to create due to schema validation errors, verify the instance matches the RGD schema requirements.

 **Check validation errors**:

```
# Attempt to create instance and view error
kubectl apply -f instance.yaml

# View existing instance validation status
kubectl describe custom-kind my-instance | grep -A 5 "Validation"
```

 **Common validation issues**:
+  **Required fields missing**: Instance doesn’t provide all required schema fields
+  **Type mismatch**: Providing string where integer is expected
+  **Invalid enum value**: Using value not in allowed list
+  **Pattern mismatch**: String doesn’t match regex pattern

 **Review RGD schema**:

```
# View the schema definition
kubectl get resourcegraphdefinition my-rgd -o jsonpath='{.spec.schema}'
```

Ensure your instance provides all required fields with correct types.
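As an illustrative pairing (the kind, field names, and values here are hypothetical), a schema fragment and an instance that satisfies it:

```
# RGD schema fragment (simple-schema syntax)
spec:
  schema:
    apiVersion: v1alpha1
    kind: WebApp
    spec:
      appName: string | required=true
      replicas: integer | default=1
---
# A matching instance must supply every required field with the correct type
apiVersion: kro.run/v1alpha1
kind: WebApp
metadata:
  name: my-instance
spec:
  appName: my-app      # required string
  replicas: 3          # optional integer; defaults to 1 if omitted
```

Omitting `appName`, or setting `replicas: "3"` (a string), would be rejected by the generated CRD's validation.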

## Next steps
<a name="_next_steps"></a>
+  [kro considerations for EKS](kro-considerations.md) - kro considerations and best practices
+  [Configure kro permissions](kro-permissions.md) - Configure RBAC for platform and application teams
+  [kro concepts](kro-concepts.md) - Understand kro concepts and resource lifecycle
+  [Troubleshooting EKS Capabilities](capabilities-troubleshooting.md) - General capability troubleshooting guidance

# Comparing EKS Capability for kro to self-managed kro
<a name="kro-comparison"></a>

The EKS Capability for kro provides the same functionality as self-managed kro, but with significant operational advantages. For a general comparison of EKS Capabilities vs self-managed solutions, see [EKS Capabilities considerations](capabilities-considerations.md).

The EKS Capability for kro uses the same upstream kro controllers and is fully compatible with upstream kro. ResourceGraphDefinitions, CEL expressions, and resource composition work identically. For complete kro documentation and examples, see the [kro documentation](https://kro.run/docs/overview).

## Migration path
<a name="_migration_path"></a>

You can migrate from self-managed kro to the managed capability with zero downtime.

**Important**  
Before migrating, ensure your self-managed kro controller is running the same version as the EKS Capability for kro. Check the capability version in the EKS console or using `aws eks describe-capability`, then upgrade your self-managed installation to match. This prevents compatibility issues during the migration.

1. Update your self-managed kro controller to use `kube-system` for leader election leases:

   ```
   helm upgrade --install kro \
     oci://ghcr.io/awslabs/kro/kro-chart \
     --namespace kro \
     --set leaderElection.namespace=kube-system
   ```

   This moves the controller’s lease to `kube-system`, allowing the managed capability to coordinate with it.

1. Create the kro capability on your cluster (see [Create a kro capability](create-kro-capability.md))

1. The managed capability recognizes existing ResourceGraphDefinitions and instances, taking over reconciliation

1. Gradually scale down or remove self-managed kro deployments:

   ```
   helm uninstall kro --namespace kro
   ```

This approach allows both controllers to coexist safely during migration. The managed capability automatically adopts ResourceGraphDefinitions and instances previously managed by self-managed kro, ensuring continuous reconciliation without conflicts.

## Next steps
<a name="_next_steps"></a>
+  [Create a kro capability](create-kro-capability.md) - Create a kro capability resource
+  [kro concepts](kro-concepts.md) - Understand kro concepts and resource composition

# Troubleshooting EKS Capabilities
<a name="capabilities-troubleshooting"></a>

This topic provides general troubleshooting guidance for EKS Capabilities, including capability health checks, common issues, and links to capability-specific troubleshooting.

**Note**  
EKS Capabilities are fully managed and run outside your cluster. You don’t have access to controller logs or controller namespaces. Troubleshooting focuses on capability health, resource status, and configuration.

## General troubleshooting approach
<a name="_general_troubleshooting_approach"></a>

When troubleshooting EKS Capabilities, follow this general approach:

1.  **Check capability health**: Use `aws eks describe-capability` to view the capability status and health issues

1.  **Verify resource status**: Check the Kubernetes resources (CRDs) you created for status conditions and events

1.  **Review IAM permissions**: Ensure the Capability Role has the necessary permissions

1.  **Check configuration**: Verify capability-specific configuration is correct

## Check capability health
<a name="_check_capability_health"></a>

All EKS Capabilities provide health information through the EKS console and the `describe-capability` API.

 **Console**:

1. Open the Amazon EKS console at https://console.aws.amazon.com/eks/home#/clusters.

1. Select your cluster name.

1. Choose the **Observability** tab.

1. Choose **Monitor cluster**.

1. Choose the **Capabilities** tab to view health and status for all capabilities.

The Capabilities tab shows:
+ Capability name and type
+ Current status
+ Health issues, with description

 **AWS CLI**:

```
aws eks describe-capability \
  --region region-code \
  --cluster-name my-cluster \
  --capability-name my-capability-name
```

The response includes:
+  **status**: Current capability state (`CREATING`, `ACTIVE`, `UPDATING`, `DELETING`, `CREATE_FAILED`, `UPDATE_FAILED`)
+  **health**: Health information including any issues detected by the capability

## Common capability statuses
<a name="_common_capability_statuses"></a>

 **CREATING**: Capability is being set up.

 **ACTIVE**: Capability is running and ready to use. If resources aren’t working as expected, check resource status and IAM permissions.

 **UPDATING**: Configuration changes are being applied. Wait for the status to return to `ACTIVE`.

 **CREATE\_FAILED** or **UPDATE\_FAILED**: Setup or update encountered an error. Check the health section for details. Common causes:
+ IAM role trust policy incorrect or missing
+ IAM role doesn’t exist or isn’t accessible
+ Cluster access issues
+ Invalid configuration parameters

## Verify Kubernetes resource status
<a name="_verify_kubernetes_resource_status"></a>

EKS Capabilities create and manage Kubernetes Custom Resource Definitions (CRDs) in your cluster. When troubleshooting, check the status of the resources you created:

```
# List resources of a specific type
kubectl get resource-kind -A

# Describe a specific resource to see conditions and events
kubectl describe resource-kind resource-name -n namespace

# View resource status conditions
kubectl get resource-kind resource-name -n namespace -o jsonpath='{.status.conditions}'

# View events related to the resource
kubectl get events --field-selector involvedObject.name=resource-name -n namespace
```

Resource status conditions provide information about:
+ Whether the resource is ready
+ Any errors encountered
+ Current reconciliation state

## Review IAM permissions and cluster access
<a name="_review_iam_permissions_and_cluster_access"></a>

Many capability issues stem from IAM permission problems or missing cluster access configuration. Verify both the Capability Role permissions and cluster access entries.

### Check IAM role permissions
<a name="_check_iam_role_permissions"></a>

Verify the Capability Role has the necessary permissions:

```
# List attached managed policies
aws iam list-attached-role-policies --role-name my-capability-role

# List inline policies
aws iam list-role-policies --role-name my-capability-role

# Get specific policy details
aws iam get-role-policy --role-name my-capability-role --policy-name policy-name

# View the role's trust policy
aws iam get-role --role-name my-capability-role --query 'Role.AssumeRolePolicyDocument'
```

The trust policy must allow the `capabilities.eks.amazonaws.com` service principal:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "capabilities.eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```
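As a defense-in-depth measure, trust policies for AWS service principals can often be scoped with condition keys to guard against the confused-deputy problem. Whether `capabilities.eks.amazonaws.com` passes these keys is an assumption here; verify against the current EKS documentation before relying on it:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "capabilities.eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "aws:SourceAccount": "111122223333"
        }
      }
    }
  ]
}
```

If the capability fails to assume the role after adding conditions, remove them to confirm whether the condition keys are the cause.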

### Check EKS Access Entries and Access Policies
<a name="_check_eks_access_entries_and_access_policies"></a>

All capabilities require proper EKS Access Entries and Access Policies on the cluster where they operate.

 **Verify Access Entry exists**:

```
aws eks list-access-entries \
  --cluster-name my-cluster \
  --region region-code
```

Look for the Capability Role ARN in the list. If missing, the capability cannot access the cluster.

 **Check Access Policies attached to the entry**:

```
aws eks list-associated-access-policies \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/my-capability-role \
  --region region-code
```

All capabilities require appropriate Access Policies:
+  **ACK**: Needs permissions to create and manage Kubernetes resources
+  **kro**: Needs permissions to create and manage Kubernetes resources
+  **Argo CD**: Needs permissions to create and manage Applications, and requires Access Entries on remote target clusters for multi-cluster deployments

 **For Argo CD multi-cluster deployments**:

If deploying to remote clusters, verify the Capability Role has an Access Entry on each target cluster:

```
# Check Access Entry on target cluster
aws eks describe-access-entry \
  --cluster-name target-cluster \
  --principal-arn arn:aws:iam::111122223333:role/argocd-capability-role \
  --region region-code
```

If the Access Entry is missing on a target cluster, Argo CD cannot deploy applications to it. See [Register target clusters](argocd-register-clusters.md) for configuration details.

## Capability-specific troubleshooting
<a name="_capability_specific_troubleshooting"></a>

For detailed troubleshooting guidance specific to each capability type:
+  [Troubleshoot issues with ACK capabilities](ack-troubleshooting.md) - Troubleshoot ACK resource creation, IAM permissions, and cross-account access
+  [Troubleshoot issues with Argo CD capabilities](argocd-troubleshooting.md) - Troubleshoot application sync, repository authentication, and multi-cluster deployments
+  [Troubleshoot issues with kro capabilities](kro-troubleshooting.md) - Troubleshoot ResourceGraphDefinitions, CEL expressions, and RBAC permissions

## Common issues across all capabilities
<a name="_common_issues_across_all_capabilities"></a>

### Capability stuck in CREATING state
<a name="_capability_stuck_in_creating_state"></a>

If a capability remains in `CREATING` state for longer than expected:

1. Check the capability health for specific issues in the console (**Observability** > **Monitor cluster** > **Capabilities** tab) or using the AWS CLI:

   ```
   aws eks describe-capability \
     --region region-code \
     --cluster-name my-cluster \
     --capability-name my-capability-name \
     --query 'capability.health'
   ```

1. Verify the IAM role exists and has the correct trust policy

1. Ensure your cluster is accessible and healthy

1. Check for any cluster-level issues that might prevent capability setup

### Resources not being created or updated
<a name="_resources_not_being_created_or_updated"></a>

If the capability is `ACTIVE` but resources aren’t being created or updated:

1. Check the resource status for error conditions

1. Verify IAM permissions for the specific AWS services (ACK) or repositories (Argo CD)

1. Check RBAC permissions for creating underlying resources (kro)

1. Review resource specifications for validation errors

### Capability health shows issues
<a name="_capability_health_shows_issues"></a>

If `describe-capability` shows health issues:

1. Read the issue descriptions carefully; they often indicate the specific problem

1. Address the root cause (IAM permissions, configuration errors, etc.)

1. The capability will automatically recover once the issue is resolved

## Next steps
<a name="_next_steps"></a>
+  [Working with capability resources](working-with-capabilities.md) - Manage capability resources
+  [Troubleshoot issues with ACK capabilities](ack-troubleshooting.md) - ACK-specific troubleshooting
+  [Troubleshoot issues with Argo CD capabilities](argocd-troubleshooting.md) - Argo CD-specific troubleshooting
+  [Troubleshoot issues with kro capabilities](kro-troubleshooting.md) - kro-specific troubleshooting
+  [Security considerations for EKS Capabilities](capabilities-security.md) - Security best practices for capabilities