

# Configuring capabilities for AWS DevOps Agent


AWS DevOps Agent capabilities extend your agent's functionality by connecting it to your existing tools and infrastructure. Configure these capabilities to enable comprehensive incident investigation, automated response workflows, and seamless integration with your DevOps ecosystem.

The following capabilities help you maximize your DevOps Agent's effectiveness:
+ **AWS EKS Access Setup** - Enable introspection of Kubernetes clusters, pod logs, and cluster events for both public and private EKS environments
+ **Azure Integration** - Connect Azure subscriptions and Azure DevOps organizations to investigate Azure resources and correlate Azure DevOps deployments with incidents
+ **CI/CD Pipeline Integration** - Connect GitHub and GitLab pipelines to correlate deployments with incidents and track code changes during investigations
+ **MCP Server Connections** - Extend investigation capabilities by connecting external observability tools and custom monitoring systems through Model Context Protocol
+ **Multi-Account AWS Access** - Configure secondary AWS accounts to investigate resources across your entire organization during incident response
+ **Telemetry Source Integration** - Connect monitoring platforms like Datadog, Dynatrace, Grafana, New Relic, and Splunk for comprehensive observability data access
+ **Ticketing and Chat Integration** - Connect ServiceNow, PagerDuty, and Slack to automate incident response workflows and enable team collaboration
+ **Webhook Configuration** - Allow external systems to automatically trigger DevOps Agent investigations through HTTP requests
+ **Amazon EventBridge Integration** - Incorporate AWS DevOps Agent into event-driven applications by routing investigation and mitigation lifecycle events to Amazon EventBridge targets

You can configure each capability independently based on your team's specific needs and existing tool stack. Start with the integrations most critical to your incident response workflow, then expand to additional capabilities as needed.

# Migrating from public preview to general availability


If you used AWS DevOps Agent during public preview, you must update your IAM roles before the GA release. This guide walks through updating the monitoring roles and operator roles in your accounts.

## What's changing


1. [On-demand chat histories during preview are no longer accessible](#on-demand-chat-history-from-public-preview)

1. [New managed policies replace policies available during preview](#new-managed-policies)

1. [Agent Spaces may have an outdated IAM Identity Center application access scope](#reconnect-iam-identity-center-if-applicable)

## On-demand chat history from public preview


The GA release introduces additional security measures to harden access controls for chat histories. As a result of these changes, on-demand chat histories from the public preview period (before March 30, 2026) are no longer accessible. Investigation journals and findings created during public preview are not affected. **This change applies only to on-demand chat conversations.**

## New managed policies


For GA, AWS provides new managed policies that replace the preview-era policies:


| Role type | Remove | Add | 
| --- | --- | --- | 
| Monitoring | AIOpsAssistantPolicy managed policy | AIDevOpsAgentAccessPolicy managed policy | 
| Operator (IAM and IDC) | Inline policy | AIDevOpsOperatorAppAccessPolicy managed policy | 

In addition, operator roles require updated trust policies, and IDC operator roles require a new inline policy.

### Prerequisites

+ Access to the AWS accounts where your DevOps Agent roles are configured (primary and all secondary accounts)
+ IAM permissions to modify roles, policies, and trust relationships
+ Your Agent Space ID, AWS account ID, and Region (visible in the DevOps Agent console)

### Step 1: Update monitoring roles


Update the monitoring role in your primary account and in each secondary account. These are the Primary/Secondary source roles configured under the **Capabilities** tab in your agent space (example primary/secondary role: `DevOpsAgentRole-AgentSpace-3xj2396z`).

1. In the DevOps Agent console, go to your Agent Space and choose the **Capabilities** tab.

1. Find the monitoring role for your Primary/Secondary Sources (for example, `DevOpsAgentRole-AgentSpace-3xj2396z`) and choose **Edit**.

1. Under **Permissions policies**, remove the `AIOpsAssistantPolicy` AWS managed policy.

1. Choose **Add permissions**, **Attach policies**, and attach the `AIDevOpsAgentAccessPolicy` managed policy.

1. Edit the inline policy and replace its contents with the following, substituting your account ID:

```
{
    "Version": "2012-10-17",		 	 	 		 	 	 
    "Statement": [
        {
            "Sid": "AllowCreateServiceLinkedRoles",
            "Effect": "Allow",
            "Action": [
                "iam:CreateServiceLinkedRole"
            ],
            "Resource": [
                "arn:aws:iam::<account-id>:role/aws-service-role/resource-explorer-2.amazonaws.com/AWSServiceRoleForResourceExplorer"
            ]
        }
    ]
}
```

1. The trust policy for the monitoring role does not require changes. Verify it matches the following:

```
{
    "Version": "2012-10-17",		 	 	 		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "aidevops.amazonaws.com"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": "<account-id>"
                },
                "ArnLike": {
                    "aws:SourceArn": "arn:aws:aidevops:<region>:<account-id>:agentspace/*"
                }
            }
        }
    ]
}
```
Repeat steps 2–6 for the monitoring role in each secondary account.
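Before pasting the inline policy from step 5, you can substitute and validate it locally. The following sketch is illustrative: the account ID is an example value and the `render` helper is not part of the service.

```python
import json

# Inline policy from step 5, with its <account-id> placeholder.
TEMPLATE = """
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCreateServiceLinkedRoles",
            "Effect": "Allow",
            "Action": ["iam:CreateServiceLinkedRole"],
            "Resource": [
                "arn:aws:iam::<account-id>:role/aws-service-role/resource-explorer-2.amazonaws.com/AWSServiceRoleForResourceExplorer"
            ]
        }
    ]
}
"""

def render(template: str, values: dict) -> str:
    """Substitute <placeholder> tokens and confirm the result parses as JSON."""
    for key, value in values.items():
        template = template.replace(f"<{key}>", value)
    assert "<" not in template, "unreplaced placeholder remains"
    json.loads(template)  # raises ValueError if the document is malformed
    return template

policy = render(TEMPLATE, {"account-id": "111122223333"})
print("<account-id>" in policy)  # False
```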

### Step 2: Update the operator role (IAM)


1. In the DevOps Agent console, choose the **Access** tab and find the operator role.

1. In the IAM console, remove the existing inline policy from the operator role.

1. Choose **Add permissions**, **Attach policies**, and attach the `AIDevOpsOperatorAppAccessPolicy` managed policy.

1. Choose the **Trust relationships** tab and choose **Edit trust policy**. Replace the trust policy with the following, substituting your account ID, Region, and Agent Space ID:

```
{
    "Version": "2012-10-17",		 	 	 		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "aidevops.amazonaws.com"
            },
            "Action": ["sts:AssumeRole", "sts:TagSession"],
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": "<account-id>"
                },
                "ArnEquals": {
                    "aws:SourceArn": "arn:aws:aidevops:<region>:<account-id>:agentspace/<agentspace-id>"
                }
            }
        }
    ]
}
```
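The Troubleshooting section below calls out a trust policy without `sts:TagSession` as a common migration miss. A small local check like the following can catch it before you save; the `has_required_actions` helper and the account, Region, and Agent Space values are illustrative.

```python
import json

# Operator trust policy as pasted in step 4 (placeholders already replaced).
policy = json.loads("""
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "aidevops.amazonaws.com"},
            "Action": ["sts:AssumeRole", "sts:TagSession"],
            "Condition": {
                "StringEquals": {"aws:SourceAccount": "111122223333"},
                "ArnEquals": {
                    "aws:SourceArn": "arn:aws:aidevops:us-east-1:111122223333:agentspace/abc123"
                }
            }
        }
    ]
}
""")

def has_required_actions(policy: dict) -> bool:
    """True if some statement allows both sts:AssumeRole and sts:TagSession."""
    for stmt in policy["Statement"]:
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if {"sts:AssumeRole", "sts:TagSession"} <= set(actions):
            return True
    return False

print(has_required_actions(policy))  # True
```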

### Step 3: Update operator roles (IDC)


If you use IAM Identity Center with DevOps Agent, update each IDC operator role.

1. In the IAM console, go to **Roles** and search for `WebappIDC` to find your DevOps Agent IDC roles (for example, `DevOpsAgentRole-WebappIDC-<id>`).

1. For each IDC role:

a. Remove the existing inline policy.

b. Choose **Add permissions**, **Attach policies**, and attach the `AIDevOpsOperatorAppAccessPolicy` managed policy.

c. Choose the **Trust relationships** tab and choose **Edit trust policy**. Replace the trust policy with the following, substituting your account ID, Region, and Agent Space ID:

```
{
    "Version": "2012-10-17",		 	 	 		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "aidevops.amazonaws.com"
            },
            "Action": ["sts:AssumeRole", "sts:TagSession"],
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": "<account-id>"
                },
                "ArnEquals": {
                    "aws:SourceArn": "arn:aws:aidevops:<region>:<account-id>:agentspace/<agentspace-id>"
                }
            }
        },
        {
            "Sid": "TrustedIdentityPropagation",
            "Effect": "Allow",
            "Principal": {
                "Service": "aidevops.amazonaws.com"
            },
            "Action": "sts:SetContext",
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": "<account-id>"
                },
                "ArnEquals": {
                    "aws:SourceArn": "arn:aws:aidevops:<region>:<account-id>:agentspace/<agentspace-id>"
                },
                "ForAllValues:ArnEquals": {
                    "sts:RequestContextProviders": [
                        "arn:aws:iam::aws:contextProvider/IdentityCenter"
                    ]
                },
                "Null": {
                    "sts:RequestContextProviders": "false"
                }
            }
        }
    ]
}
```

d. Create a new inline policy with the following permissions, substituting your account ID:

```
{
    "Version": "2012-10-17",		 	 	 		 	 	 
    "Statement": [
        {
            "Sid": "AllowDevOpsAgentSSOAccess",
            "Effect": "Allow",
            "Action": [
                "sso:ListInstances",
                "sso:DescribeInstance"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AllowDevOpsAgentIDCUserAccess",
            "Effect": "Allow",
            "Action": "identitystore:DescribeUser",
            "Resource": [
                "arn:aws:identitystore::<account-id>:identitystore/*",
                "arn:aws:identitystore:::user/*"
            ]
        }
    ]
}
```
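The IDC trust policy must grant three STS actions across its two statements. A local sanity check such as the following flags any that are missing; the `check_idc_trust_policy` helper is illustrative, not part of the service.

```python
def check_idc_trust_policy(policy: dict) -> list:
    """Return a list of problems found in an IDC operator trust policy."""
    problems = []
    all_actions = set()
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        all_actions.update([actions] if isinstance(actions, str) else actions)
    for required in ("sts:AssumeRole", "sts:TagSession", "sts:SetContext"):
        if required not in all_actions:
            problems.append(f"missing action {required}")
    return problems

# A preview-era trust policy lacks the TrustedIdentityPropagation statement,
# so sts:SetContext is flagged as missing.
incomplete = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "aidevops.amazonaws.com"},
            "Action": ["sts:AssumeRole", "sts:TagSession"],
        }
    ],
}
print(check_idc_trust_policy(incomplete))  # ['missing action sts:SetContext']
```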

## Reconnect IAM Identity Center (if applicable)


Agent Spaces created during public preview may have an IAM Identity Center application configured with an outdated access scope. For GA, the correct scope is **`aidevops:read_write`**. If your IAM Identity Center application has the previous scope (**`awsaidevops:read_write`**), you must disconnect and reconnect IAM Identity Center.

### How to check your IAM Identity Center application scope


Run the following AWS CLI command to check the scope on your IAM Identity Center application. You can find the application ARN in the IAM Identity Center console under **Applications**.

```
aws sso-admin list-application-access-scopes \
  --application-arn arn:aws:sso::<account-id>:application/<instance-id>/<application-id>
```

The output should show the correct scope **`aidevops:read_write`**:

```
{
    "Scopes": [
        {
            "Scope": "aidevops:read_write"
        }
    ]
}
```

If the scope shows **`awsaidevops:read_write`**, it is outdated. Follow the steps below to update it.
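If you need to check many applications, the CLI output can be evaluated programmatically. This sketch assumes the JSON shape shown above; the `needs_reconnect` helper is illustrative.

```python
import json

def needs_reconnect(cli_output: str) -> bool:
    """True if the application still carries the outdated preview scope."""
    scopes = {entry["Scope"] for entry in json.loads(cli_output)["Scopes"]}
    return "awsaidevops:read_write" in scopes or "aidevops:read_write" not in scopes

preview_output = '{"Scopes": [{"Scope": "awsaidevops:read_write"}]}'
ga_output = '{"Scopes": [{"Scope": "aidevops:read_write"}]}'
print(needs_reconnect(preview_output), needs_reconnect(ga_output))  # True False
```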

### How to reconnect IAM Identity Center


The access scope on an AWS managed IAM Identity Center application cannot be updated directly. You must disconnect and reconnect:

1. In the AWS DevOps Agent console, go to your Agent Space and choose the **Access** tab.

1. Choose **Disconnect** next to the IAM Identity Center configuration.

1. Confirm the disconnection.

1. Choose **Connect** to set up IAM Identity Center again. The service creates a new IAM Identity Center application with the correct scope.

1. Reassign users and groups to the new application in the IAM Identity Center console.

**Important**  
Disconnecting removes individual user chat and artifact history associated with IAM Identity Center user accounts. Users will need to log in again after reconnection.

## Verification


After completing all steps:

1. Return to the DevOps Agent console and verify that no permission errors appear on the Agent Space **Access** tab.

1. Test the operator web app to confirm it loads and functions correctly.

1. If you use IDC, verify that users can authenticate and access the operator experience.

## Troubleshooting


**Permission denied errors after migration**
+ Verify that `AIOpsAssistantPolicy` was removed and `AIDevOpsAgentAccessPolicy` is attached to monitoring roles.
+ Verify that old inline policies were removed and `AIDevOpsOperatorAppAccessPolicy` is attached to operator roles.
+ Check that operator trust policies include `sts:TagSession`.
+ Confirm you replaced all placeholder values (`<account-id>`, `<region>`, `<agentspace-id>`) with actual values.
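The last check above (replacing all placeholder values) can be scripted. The following sketch lists any `<...>` tokens that survived substitution; the helper name and regex are illustrative.

```python
import re

PLACEHOLDER = re.compile(r"<[a-z-]+>")

def leftover_placeholders(policy_text: str) -> list:
    """Return any <placeholder> tokens that were never substituted."""
    return sorted(set(PLACEHOLDER.findall(policy_text)))

snippet = '"aws:SourceArn": "arn:aws:aidevops:<region>:<account-id>:agentspace/<agentspace-id>"'
print(leftover_placeholders(snippet))
# ['<account-id>', '<agentspace-id>', '<region>']
```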

**Secondary accounts not working**
+ Each secondary account's monitoring role must be updated independently. Log into each account and repeat Step 1.

**IDC authentication failures**
+ Verify the IDC trust policy includes both the `sts:AssumeRole`/`sts:TagSession` statement and the `TrustedIdentityPropagation` statement.
+ Confirm the inline policy with `sso:ListInstances`, `sso:DescribeInstance`, and `identitystore:DescribeUser` was created.

**On-demand chat history missing after migration**
+ On-demand chat histories from the public preview period are not accessible after the GA release. This is expected behavior due to enhanced security measures introduced in GA. Investigation journals and findings from public preview are not affected.

# AWS EKS access setup


You can enable AWS DevOps Agent to investigate issues in your Amazon EKS clusters by running read-only `kubectl` commands against both public and private clusters. You can connect any number of EKS clusters to the same Agent Space.

Once connected, the agent can help diagnose operational issues in your clusters — describing resources, retrieving pod logs, inspecting cluster events, checking node health, and more. The agent cannot create, modify, or delete any resources in your cluster.

## Prerequisites


Before setting up EKS access, ensure that your EKS cluster's authentication mode includes the EKS API. You can check this on the **Access** tab in the [Amazon EKS console](https://console.aws.amazon.com/eks). If the mode doesn't include the EKS API, select a mode that does before proceeding.

## Setup


Complete these steps in the [Amazon EKS console](https://console.aws.amazon.com/eks) for each cluster you want to create an access entry for. You can find your IAM role ARN in your Agent Space (see [Creating an Agent Space](getting-started-with-aws-devops-agent-creating-an-agent-space.md)) under **Capabilities > Cloud > Primary Source > Edit**.

1. Go to the **Access** tab. If the authentication mode already includes the EKS API, you can add access entries. Otherwise, select a mode that includes the EKS API.

1. From the Access tab, create a new IAM access entry. Copy your primary cloud source IAM role ARN and enter it as the IAM principal for the access entry. Click **Next**.

1. Select the AWS managed **AmazonAIOpsAssistantPolicy** access policy, and select **Cluster** for the access scope. (Alternatively, to limit the agent to specific namespaces, select the desired **Kubernetes Namespaces**.) Click **Add Policy**, and then click **Next**.

1. Review the changes, confirm that the correct access entry policy and IAM role were chosen, and then click **Create** to create your access entry.

To verify that the EKS access was configured correctly, navigate to the Operator App and start a new investigation, asking the agent a question about your cluster, such as "list all pods in the default namespace" or "show me recent events in my cluster".

## Troubleshooting


If the agent can't reach your cluster, verify that the access entry is using the correct IAM role ARN shown in the setup dialog and that the **AmazonAIOpsAssistantPolicy** access policy is attached.

# Connecting Azure


Azure integration enables AWS DevOps Agent to investigate resources in your Azure environment and correlate Azure DevOps pipeline deployments with operational incidents. By connecting Azure, the agent gains visibility into your Azure infrastructure and can perform root cause analysis across both AWS and Azure resources.

Azure integration consists of two independent capabilities:
+ **Azure Resources** – Enables the agent to discover and investigate Azure cloud resources such as virtual machines, Azure Kubernetes Service (AKS) clusters, databases, and networking components. The agent uses Azure Resource Graph to query your resources during incident investigations.
+ **Azure DevOps** – Enables the agent to access Azure DevOps repositories and pipeline execution history. The agent can correlate code changes and deployments with incidents to help identify potential root causes.

Each capability is registered at the AWS account level and can then be associated with individual Agent Spaces.

## Registration methods


AWS DevOps Agent supports two methods for connecting to Azure:
+ **Admin Consent** – A streamlined consent-based flow where you authorize the AWS DevOps Agent Entra application in your Azure tenant. In the console, this appears as the **Admin Consent** option. This method requires signing in with an account that has permission to perform admin consent in Microsoft Entra ID.
+ **App Registration** – A self-managed approach where you create your own Entra application with federated identity credentials using Outbound Identity Federation. In the console, this appears as the **App Registration** option. This method is suitable when you need more control over the application configuration or when admin consent permissions are not available.

Both methods provide the same capabilities. You can use one or both methods within the same AWS account.

## Known limitations

+ **Admin Consent: one AWS account per Azure tenant** – Each Azure tenant can only have its AWS DevOps Agent Entra App associated with one AWS account at a time. To associate the same tenant with a different AWS account, you must deregister the existing registration first.
+ **App Registration: unique application per registration** – Each App Registration must use a different application (client ID). You cannot register multiple configurations with the same client ID.
+ **Azure DevOps: source code access** – The Azure DevOps integration provides access to pipeline execution history regardless of where the source code is hosted. However, to access the actual source code, the repository must be connected separately through a supported source provider (for example, [Connecting GitHub](connecting-to-cicd-pipelines-connecting-github.md)). Source code hosted in Bitbucket is not directly accessible through the Azure DevOps integration.

## Topics

+ [Connecting Azure Resources](connecting-azure-connecting-azure-resources.md)
+ [Connecting Azure DevOps](connecting-azure-connecting-azure-devops.md)

# Connecting Azure Resources


Azure Resources integration enables AWS DevOps Agent to discover and investigate resources in your Azure subscriptions during incident investigations. The agent uses Azure Resource Graph for resource discovery and can access metrics, logs, and configuration data across your Azure environment.

This integration follows a two-step process: register Azure at the AWS account level, then associate specific Azure subscriptions with individual Agent Spaces.

## Prerequisites


Before connecting Azure Resources, ensure you have:
+ Access to the AWS DevOps Agent console
+ An Azure account with access to the target subscription
+ For Admin Consent method: an account with permission to perform admin consent in Microsoft Entra ID
+ For App Registration method: an Entra application with permissions to configure federated identity credentials, and [Outbound Identity Federation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_enable-federation.html) enabled in your AWS account

**Note:** You can also start registration from within an Agent Space. Navigate to **Secondary sources**, click **Add**, and select **Azure**. If Azure Cloud is not yet registered, the console guides you through registration first.

## Registering Azure Resources via Admin Consent


The Admin Consent method uses a consent-based flow with the AWS DevOps Agent managed application.

### Step 1: Start the registration


1. Sign in to the AWS Management Console and navigate to the AWS DevOps Agent console

1. Go to the **Capability Providers** page

1. Locate the **Azure Cloud** section and click **Register**

1. Select the **Admin Consent** registration method

### Step 2: Complete Admin Consent


1. Review the permissions being requested

1. Click to proceed — you are redirected to the Microsoft Entra admin consent page

1. Sign in with a user principal account that has permission to perform admin consent

1. Review and grant consent for the AWS DevOps Agent application

### Step 3: Complete user authorization


1. After admin consent, you are prompted for user authorization to verify your identity as a member of the authorized tenant

1. Sign in with an account belonging to the same Azure tenant

1. After authorization, you are redirected back to the AWS DevOps Agent console with a success status

### Step 4: Assign roles


See [Assigning Azure roles](#assigning-azure-roles) below. Search for **AWS DevOps Agent** when selecting members.

## Registering Azure Resources via App Registration


The App Registration method uses your own Entra application with federated identity credentials.

### Step 1: Start the registration


1. In the AWS DevOps Agent console, go to the **Capability Providers** page

1. Locate the **Azure Cloud** section and click **Register**

1. Select the **App Registration** method

### Step 2: Create and configure your Entra application


Follow the instructions displayed in the console to:

1. Enable Outbound Identity Federation in your AWS account (in the IAM console, go to **Account settings** → **Outbound Identity Federation**)

1. Create an Entra application in your Microsoft Entra ID, or use an existing one

1. Configure federated identity credentials on the application

### Step 3: Provide registration details


Fill in the registration form with:
+ **Tenant ID** – Your Azure tenant identifier
+ **Tenant Name** – A display name for the tenant
+ **Client ID** – The application (client) ID of the Entra application you created
+ **Audience** – The audience identifier for the federated credential

### Step 4: Create the IAM role


An IAM role is created automatically when you submit the registration through the console. This role permits AWS DevOps Agent to assume it and invoke `sts:GetWebIdentityToken`.

### Step 5: Assign roles


See [Assigning Azure roles](#assigning-azure-roles) below. Search for the Entra application you created when selecting members.

### Step 6: Complete the registration


1. Confirm the configuration in the AWS DevOps Agent console

1. Click **Submit** to complete the registration

## Assigning Azure roles


After registration, grant the application read access to your Azure subscription. This step is the same for both the Admin Consent and App Registration methods.

1. In the Azure Portal, navigate to your target subscription

1. Go to **Access Control (IAM)**

1. Click **Add** > **Add role assignment**

1. Select the **Reader** role and click **Next**

1. Click **Select members**, search for the application (either **AWS DevOps Agent** for Admin Consent, or your own Entra application for App Registration)

1. Select the application and click **Review + assign**

1. (Optional) To enable the agent to access Azure Kubernetes Service (AKS) clusters, complete the following AKS access setup.

**Security Requirement:** The service principal must be assigned only the **Reader** role (and optionally the AKS read-only roles listed below). The Reader role serves as a security boundary that restricts the agent to read-only operations and limits the impact of indirect prompt injection attacks. Assigning roles with write or action permissions significantly increases the blast radius of prompt injection and may result in compromise of Azure resources. AWS DevOps Agent performs only read operations. The agent does not modify, create, or delete Azure resources.
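If you want to audit the service principal's assignments programmatically, the following sketch flags any role outside the read-only set this guide permits. It assumes the `roleDefinitionName` field shape returned by `az role assignment list`; the `excessive_roles` helper is illustrative.

```python
# Roles this guide permits for the DevOps Agent service principal.
ALLOWED_ROLES = {
    "Reader",
    "Azure Kubernetes Service Cluster User Role",
    "Azure Kubernetes Service RBAC Reader",
}

def excessive_roles(assignments: list) -> list:
    """Return any assigned roles outside the read-only allow list."""
    return sorted({a["roleDefinitionName"] for a in assignments} - ALLOWED_ROLES)

assignments = [
    {"roleDefinitionName": "Reader"},
    {"roleDefinitionName": "Contributor"},  # write access: too broad
]
print(excessive_roles(assignments))  # ['Contributor']
```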

### AKS access setup (optional)


#### Step 1: Azure Resource Manager (ARM) level access


Assign **Azure Kubernetes Service Cluster User Role** to the application.

In the Azure Portal, go to **Subscriptions** → select subscription → **Access Control (IAM)** → **Add role assignment** → select **Azure Kubernetes Service Cluster User Role** → assign to the application (either **AWS DevOps Agent** for Admin Consent, or your own Entra application for App Registration).

This covers all AKS clusters in the subscription. To scope to specific clusters, assign at the resource group or individual cluster level instead.

#### Step 2: Kubernetes API access


Choose one option based on your cluster's authentication configuration:

**Option A: Azure Role-Based Access Control (RBAC) for Kubernetes (recommended)**

1. Enable Azure RBAC on the cluster if not already enabled: Azure Portal → AKS cluster → **Settings** → **Security configuration** → **Authentication and authorization** → select **Azure RBAC**

1. Assign read-only role: Azure Portal → **Subscriptions** → select subscription → **Access Control (IAM)** → **Add role assignment** → select **Azure Kubernetes Service RBAC Reader** → assign to the application

This covers all AKS clusters in the subscription.

**Option B: Azure Active Directory (Azure AD) with Kubernetes RBAC**

Use this if your cluster already uses the default Azure AD authentication configuration and you prefer not to enable Azure RBAC. This requires per-cluster `kubectl` setup.

1. Save the following manifest as `devops-agent-reader.yaml`:

```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: devops-agent-reader
rules:
  - apiGroups: [""]
    resources: ["namespaces", "pods", "pods/log", "services", "events", "nodes"]
    verbs: ["get", "list"]
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets", "statefulsets", "daemonsets"]
    verbs: ["get", "list"]
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: devops-agent-reader-binding
subjects:
  - kind: User
    name: "<SERVICE_PRINCIPAL_OBJECT_ID>"
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: devops-agent-reader
  apiGroup: rbac.authorization.k8s.io
```

1. Replace `<SERVICE_PRINCIPAL_OBJECT_ID>` with your service principal's Object ID. To find it: Azure Portal → Entra ID → Enterprise Applications → search for the application name (either **AWS DevOps Agent** for Admin Consent, or your own Entra application for App Registration).

1. Apply to each cluster:

```
az aks get-credentials --resource-group <rg> --name <cluster-name>
kubectl apply -f devops-agent-reader.yaml
```

**Note:** Clusters using local accounts only (without Azure AD) are not supported. We recommend enabling Azure AD integration on your cluster to use this feature.

### Least-privileged custom role (optional)


For tighter access control, you can create a custom Azure role scoped to only the resource providers AWS DevOps Agent uses, instead of the broad Reader role:

```
{
  "Name": "AWS DevOps Agent - Azure Reader",
  "Description": "Least-privilege read-only access for AWS DevOps Agent incident investigations.",
  "Actions": [
    "Microsoft.AlertsManagement/*/read",
    "Microsoft.Compute/*/read",
    "Microsoft.ContainerRegistry/*/read",
    "Microsoft.ContainerService/*/read",
    "Microsoft.ContainerService/managedClusters/commandResults/read",
    "Microsoft.DocumentDB/*/read",
    "Microsoft.Insights/*/read",
    "Microsoft.KeyVault/vaults/read",
    "Microsoft.ManagedIdentity/*/read",
    "Microsoft.Monitor/*/read",
    "Microsoft.Network/*/read",
    "Microsoft.OperationalInsights/*/read",
    "Microsoft.ResourceGraph/resources/read",
    "Microsoft.ResourceHealth/*/read",
    "Microsoft.Resources/*/read",
    "Microsoft.Sql/*/read",
    "Microsoft.Storage/*/read",
    "Microsoft.Web/*/read"
  ],
  "NotActions": [],
  "DataActions": [],
  "NotDataActions": [],
  "AssignableScopes": [
    "/subscriptions/{your-subscription-id}"
  ]
}
```
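Before creating the custom role, you can verify that the definition preserves the read-only invariant (every control-plane action ends in `/read` and no data actions are granted). The `non_read_actions` helper and the truncated action list in this sketch are illustrative.

```python
def non_read_actions(role_definition: dict) -> list:
    """Return any control-plane or data actions that are not read-only."""
    bad = [a for a in role_definition["Actions"] if not a.endswith("/read")]
    bad += role_definition.get("DataActions", [])
    return bad

# Abbreviated version of the custom role above.
role = {
    "Actions": [
        "Microsoft.Compute/*/read",
        "Microsoft.ResourceGraph/resources/read",
    ],
    "DataActions": [],
}
print(non_read_actions(role))  # []
```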

## Associating a subscription with an Agent Space


After registering Azure at the account level, associate specific subscriptions with your Agent Spaces:

1. In the AWS DevOps Agent console, select your Agent Space

1. Go to the **Capabilities** tab

1. In the **Secondary sources** section, click **Add**

1. Select **Azure**

1. Provide the **Subscription ID** for the Azure subscription you want to associate

1. Click **Add** to complete the association

You can associate multiple subscriptions with the same Agent Space to give the agent visibility across your Azure environment.

## Managing Azure Resources connections

+ **Viewing connected subscriptions** – In the **Capabilities** tab, the **Secondary sources** section lists all connected Azure subscriptions.
+ **Removing a subscription** – To disconnect a subscription from an Agent Space, select it in the **Secondary sources** list and click **Remove**. This does not affect the account-level registration.
+ **Removing the registration** – To remove the Azure Cloud registration entirely, go to the **Capability Providers** page and delete the registration. All Agent Space associations must be removed first.

# Connecting Azure DevOps


Azure DevOps integration enables AWS DevOps Agent to access repositories and pipeline execution history in your Azure DevOps organization. The agent can correlate code changes and deployments with operational incidents to help identify potential root causes.

**Note:** Azure DevOps pipelines can use source code from Azure Repos, GitHub, or Bitbucket. The Azure DevOps integration provides access to pipeline execution history regardless of the source provider. However, to access the actual source code during investigations, the repository must be connected separately through a supported integration such as [Connecting GitHub](connecting-to-cicd-pipelines-connecting-github.md). Source code in Bitbucket is not directly accessible through this integration.

This integration follows a two-step process: register Azure DevOps at the AWS account level, then associate specific projects with individual Agent Spaces.

## Prerequisites


Before connecting Azure DevOps, ensure you have:
+ Access to the AWS DevOps Agent console
+ An Azure DevOps organization with at least one project containing a repository and pipeline history
+ Permissions to add users to your Azure DevOps organization
+ For Admin Consent method: an account with permission to perform admin consent in Microsoft Entra ID
+ For App Registration method: an Entra application with permissions to configure federated identity credentials, and [Outbound Identity Federation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_enable-federation.html) enabled in your AWS account

**Note:** You can also start registration from within an Agent Space. Navigate to the **Pipelines** section, click **Add**, and select **Azure DevOps**. If Azure DevOps is not yet registered, the console guides you through registration first.

## Registering Azure DevOps via Admin Consent


The Admin Consent method uses a consent-based flow with the AWS DevOps Agent managed application.

### Step 1: Start the registration


1. Sign in to the AWS Management Console and navigate to the AWS DevOps Agent console

1. Go to the **Capability Providers** page

1. Locate the **Azure DevOps** section and click **Register**

1. Enter your **Azure DevOps organization name** when prompted

### Step 2: Complete Admin Consent


1. Click to proceed. You are redirected to the Microsoft Entra admin consent page

1. Sign in with a user principal account that has permission to perform admin consent

1. Review and grant consent for the AWS DevOps Agent application

### Step 3: Complete user authorization


1. After admin consent, you are prompted for user authorization to verify your identity as a member of the authorized tenant

1. Sign in with an account belonging to the same Azure tenant

1. After authorization, you are redirected back to the AWS DevOps Agent console with a success status

### Step 4: Grant access in Azure DevOps


See [Granting access in Azure DevOps](#granting-access-in-azure-devops) below. Search for **AWS DevOps Agent** when adding users.

## Registering Azure DevOps via App Registration


App Registration is shared between Azure Resources and Azure DevOps. If you have already completed App Registration for Azure Resources, you can skip to [Granting access in Azure DevOps](#granting-access-in-azure-devops).

### Step 1: Start the App Registration


1. In the AWS DevOps Agent console, go to the **Capability Providers** page

1. Locate the **Azure Cloud** section and click **Register**

1. Select the **App Registration** method

### Step 2: Create and configure your Entra application


Follow the instructions displayed in the console to:

1. Enable Outbound Identity Federation in your AWS account (in the IAM console, go to **Account settings** → **Outbound Identity Federation**)

1. Create an Entra application in your Microsoft Entra ID, or use an existing one

1. Configure federated identity credentials on the application

### Step 3: Provide registration details


Fill in the registration form with:
+ **Tenant ID** – Your Azure tenant identifier
+ **Tenant Name** – A display name for the tenant
+ **Client ID** – The application (client) ID of the Entra application
+ **Audience** – The audience identifier for the federated credential

### Step 4: Create the IAM role


An IAM role is created automatically when you submit the registration through the console. The role allows AWS DevOps Agent to assume it and invoke `sts:GetWebIdentityToken`.
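For illustration, the permissions on the console-created role might look roughly like the following statement. This is a hypothetical sketch; the console generates the actual policy for you, and it may differ:

```python
import json

# Hypothetical sketch of a permissions statement allowing the role to
# request web identity tokens for outbound federation. Illustrative only;
# the console creates the real policy.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:GetWebIdentityToken",
            "Resource": "*",
        }
    ],
}
print(json.dumps(policy, indent=2))
```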

### Step 5: Complete the registration


1. Confirm the configuration in the AWS DevOps Agent console

1. Click **Submit** to complete the registration

### Step 6: Grant access in Azure DevOps


See [Granting access in Azure DevOps](#granting-access-in-azure-devops) below. Search for the Entra application you created during App Registration when adding users.

## Granting access in Azure DevOps


After registration, grant the application access to your Azure DevOps organization. This step is the same for both the Admin Consent and App Registration methods.

1. In Azure DevOps, go to **Organization Settings** > **Users** > **Add Users**

1. Search for the application (either **AWS DevOps Agent** for Admin Consent, or your own Entra application for App Registration)

1. Set the access level to **Basic**

1. Under **Add to projects**, select the projects you want the agent to access

1. Under **Azure DevOps Groups**, select **Project Readers**

1. Click **Add** to complete

**Security Requirement:** Assign only the **Project Readers** group. Read-only access serves as a security boundary that restricts the agent to read-only operations and limits the impact of indirect prompt injection attacks. Assigning groups with write or action permissions significantly increases the blast radius of prompt injection and may result in compromise of Azure DevOps resources.

## Associating a project with an Agent Space


After registering Azure DevOps at the account level, associate specific projects with your Agent Spaces:

1. In the AWS DevOps Agent console, select your Agent Space

1. Go to the **Capabilities** tab

1. In the **Pipelines** section, click **Add**

1. Select **Azure DevOps** from the list of available providers

1. Select the project from the dropdown of available projects

1. Click **Add** to complete the association

## Managing Azure DevOps connections

+ **Viewing connected projects** – In the **Capabilities** tab, the **Pipelines** section lists all connected Azure DevOps projects.
+ **Removing a project** – To disconnect a project from an Agent Space, select it in the **Pipelines** section and click **Remove**.
+ **Removing the registration** – To remove the Azure DevOps registration entirely, go to the **Capability Providers** page and delete the registration. All Agent Space associations must be removed first.

# Connecting to CI/CD pipelines


CI/CD pipeline integration enables AWS DevOps Agent to monitor deployments and correlate code changes with operational incidents during investigations. By connecting your CI/CD providers, the agent can track deployment events and associate them with AWS resources to help identify potential root causes during incident response.

AWS DevOps Agent supports integration with popular CI/CD platforms through a two-step process:

1. **Account-level registration** – Register your CI/CD provider once at the AWS account level

1. **Agent Space connection** – Connect specific projects or repositories to individual Agent Spaces based on your organizational needs

This approach allows you to share CI/CD provider registrations across multiple Agent Spaces while maintaining granular control over which projects are monitored by each space.

## Supported CI/CD providers


AWS DevOps Agent supports the following CI/CD platforms:
+ **GitHub** – Connect repositories from GitHub.com using the AWS DevOps Agent GitHub app.
+ **GitLab** – Connect projects from GitLab.com, managed GitLab instances, or publicly accessible self-hosted GitLab deployments.

**Topics**
+ [Connecting GitHub](connecting-to-cicd-pipelines-connecting-github.md)
+ [Connecting GitLab](connecting-to-cicd-pipelines-connecting-gitlab.md)

# Connecting GitHub


GitHub integration enables AWS DevOps Agent to access code repositories and receive deployment events during incident investigations. This integration follows a two-step process: account-level registration of GitHub, followed by connecting specific repositories to individual Agent Spaces.

AWS DevOps Agent supports both GitHub.com (SaaS) and GitHub Enterprise Server (self-hosted) instances.

## Prerequisites


Before connecting GitHub, ensure you have:
+ Access to the AWS DevOps Agent admin console
+ A GitHub user account or organization with admin permissions
+ Authorization to install GitHub apps in your account or organization

For GitHub Enterprise Server, you also need:
+ A GitHub Enterprise Server instance (version 3.x or later) accessible over HTTPS
+ The HTTPS URL of your GitHub Enterprise Server instance (for example, `https://github.example.com`)
+ (Optional) A private connection, if your GitHub Enterprise Server instance is not publicly accessible

## Registering GitHub (account-level)


GitHub is registered at the AWS account level and shared among all Agent Spaces in that account. You only need to register GitHub once per AWS account.

### Step 1: Navigate to pipeline providers


1. Sign in to the AWS Management Console

1. Navigate to the AWS DevOps Agent console

1. Go to the **Capabilities** tab

1. In the **Pipeline** section, click **Add**

1. Select **GitHub** from the list of available providers

If GitHub hasn't been registered yet, you'll be prompted to register it first.

### Step 2: Choose connection type


On the "Register GitHub Account / Organization" screen, select whether you're connecting as a user or organization:
+ **User** – Your personal GitHub account with a username and profile
+ **Organization** – A shared GitHub account where multiple people can collaborate across many projects at once

If you are connecting to a GitHub Enterprise Server instance, check the **Use GitHub Enterprise Server** checkbox and enter the HTTPS URL of your instance (for example, `https://github.example.com`).

If your GitHub Enterprise Server instance is not publicly accessible, you can optionally configure a private connection to allow AWS DevOps Agent to securely reach your instance. For more information, see [Connecting to privately hosted tools](configuring-capabilities-for-aws-devops-agent-connecting-to-privately-hosted-tools.md).

**Note**  
Do not include `/api/v3` or any trailing path in the URL; enter only the base URL.
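As an illustration of the base-URL requirement, a hypothetical helper like the following (not part of any AWS tooling) shows which URLs qualify:

```python
from urllib.parse import urlparse

def is_valid_ghes_base_url(url: str) -> bool:
    """Check that a GitHub Enterprise Server URL is an HTTPS base URL
    with no trailing path such as /api/v3."""
    parsed = urlparse(url)
    return (
        parsed.scheme == "https"
        and bool(parsed.netloc)
        and parsed.path in ("", "/")
    )

print(is_valid_ghes_base_url("https://github.example.com"))         # True
print(is_valid_ghes_base_url("https://github.example.com/api/v3"))  # False
print(is_valid_ghes_base_url("http://github.example.com"))          # False
```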

### Step 3: Set up the GitHub App


Click **Submit** to begin the app setup process. The next steps differ depending on whether you are connecting to GitHub.com or GitHub Enterprise Server.

#### For GitHub.com


1. You'll be redirected to GitHub to install the AWS DevOps Agent GitHub app.

1. Select which account or organization to install the app in.

1. The app allows AWS DevOps Agent to receive events from connected repositories, including deployment events.

#### For GitHub Enterprise Server


GitHub Enterprise Server uses a GitHub App Manifest flow, which automatically sets up a new GitHub App on your instance. This involves two redirects to your GitHub Enterprise Server instance.

1. Your browser will be redirected to your GitHub Enterprise Server instance's "Create GitHub App" page.

1. The app name is pre-filled; change it if needed, then click **Create GitHub App**.

1. You'll be redirected back to AWS DevOps Agent, which exchanges the manifest code for app credentials.

### Step 4: Select repositories and complete installation


1. You'll see the **Install & Authorize** page for the GitHub App.

1. Select which repositories to allow the app to access:
   + **All repositories** – Grant access to all current and future repositories
   + **Only select repositories** – Choose specific repositories from your account or organization

1. Click **Install & Authorize**.

1. You'll be redirected back to the AWS DevOps Agent console, where GitHub will appear as registered at the account level.

## Connecting repositories to an Agent Space


After registering GitHub at the account level, you can connect specific repositories to individual Agent Spaces:

1. In the AWS DevOps Agent console, select your Agent Space

1. Go to the **Capabilities** tab

1. In the **Pipeline** section, click **Add**

1. Select **GitHub** from the list of available providers

1. Select the subset of repositories relevant to this Agent Space

1. Click **Add** to complete the connection

You can connect different sets of repositories to different Agent Spaces based on your organizational needs.

## Understanding the GitHub app


The AWS DevOps Agent GitHub app:
+ Requests read-only access to your repositories
+ Receives deployment events and other repository events
+ Allows AWS DevOps Agent to correlate code changes with operational incidents
+ Can be uninstalled at any time through your GitHub settings

For GitHub Enterprise Server, the GitHub App is automatically created on your instance during registration. You can manage the app's repository access or uninstall it through **Settings > Applications > Installed GitHub Apps**. To delete the app definition entirely, go to **Settings > Developer settings > GitHub Apps**.

## Managing GitHub connections

+ **Updating repository access** – To change which repositories the GitHub app can access, go to your GitHub account or organization settings (or your GitHub Enterprise Server instance settings), navigate to installed GitHub apps, and modify the AWS DevOps Agent app configuration.
+ **Viewing connected repositories** – In the AWS DevOps Agent console, select your Agent Space and go to the Capabilities tab to view connected repositories in the Pipeline section.
+ **Removing GitHub connection** – To disconnect GitHub from an Agent Space, select the connection in the Pipeline section and click **Remove**. To uninstall the GitHub app completely, uninstall it from your GitHub account or organization settings. For GitHub Enterprise Server, because the GitHub App is created directly on your instance during registration, you can optionally clean up the app entirely by performing both of the following:
  + **Uninstall the app** – Go to **Settings > Applications > Installed GitHub Apps**, click **Configure** on the app, then uninstall it.
  + **Delete the app** – Go to **Settings > Developer settings > GitHub Apps**, select the app, go to the **Advanced** tab, and choose **Delete GitHub App**. **Warning:** Deleting the GitHub App is permanent and cannot be undone. If you delete it, you will need to re-register GitHub Enterprise Server from the beginning in the AWS DevOps Agent console to create a new app.

# Connecting GitLab


GitLab integration enables AWS DevOps Agent to monitor deployments from GitLab Pipelines to inform causal investigations during incident response. This integration follows a two-step process: account-level registration of GitLab, followed by connecting specific projects to individual Agent Spaces.

## Registering GitLab (account-level)


GitLab is registered at the AWS account level and shared among all Agent Spaces in that account. Individual Agent Spaces can then choose which specific projects apply to their Agent Space.

### Step 1: Navigate to pipeline providers


1. Sign in to the AWS Management Console

1. Navigate to the AWS DevOps Agent console

1. Go to the **Capability Providers** page (accessible from the side navigation)

1. Find **GitLab** in the **Available** providers section under **Pipeline** and click **Register**

### Step 2: Configure GitLab connection


On the GitLab registration page, configure the following:

**Connection type** – Select whether you're connecting as a person or a group:
+ **Personal** (default) – Your individual GitLab user account with a username and profile
+ **Group** – A GitLab group, which lets you manage one or more related projects at the same time

**GitLab instance type** – Choose which type of GitLab instance you're connecting to:
+ **GitLab.com** (default) – The public GitLab service
+ **Publicly accessible self-hosted GitLab** – Check the **Use GitLab self hosted endpoint** box and provide the URL to your GitLab instance

**Note**  
Currently, only publicly accessible GitLab instances are supported.

**Access token** – Provide a GitLab personal access token:

1. In a separate browser tab, log in to your GitLab account

1. Navigate to your user settings and select **Access Tokens**

1. Create a new personal access token with the following permissions:
   + `read_repository` – Required to access repository content
   + `read_virtual_registry` – Required to access virtual registry information
   + `read_registry` – Required to access registry information
   + `api` – Required for read and write API access
   + `self_rotate` – Required for rotating tokens. AWS DevOps Agent does not yet support token rotation, but adding this scope now avoids creating a new token when support is added later.

1. Set the token expiration to a maximum of 365 days from the current date

1. Copy the generated token

1. Return to the AWS DevOps Agent console

1. Paste the token into the "Access Token" field
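Before pasting the token, you may want to confirm it was created with every scope listed above. The following hypothetical helper sketches such a check:

```python
# Required GitLab personal access token scopes, per the list above.
REQUIRED_SCOPES = {
    "read_repository",
    "read_virtual_registry",
    "read_registry",
    "api",
    "self_rotate",
}

def missing_scopes(token_scopes):
    """Return the required GitLab scopes absent from a token's scope list."""
    return sorted(REQUIRED_SCOPES - set(token_scopes))

print(missing_scopes(["api", "read_repository"]))
# ['read_registry', 'read_virtual_registry', 'self_rotate']
```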

### Step 3: Complete registration


**(Optional) Tags** – Add AWS tags to the GitLab registration for organizational purposes.

Click **Next** to review your configuration, then click **Submit** to complete the GitLab registration process. The system will validate your access token and establish the connection.

## Connecting projects to an Agent Space


After registering GitLab at the account level, you can connect specific projects to individual Agent Spaces:

1. In the AWS DevOps Agent console, select your Agent Space

1. Go to the **Capabilities** tab

1. In the **Pipeline** section, click **Add**

1. Select **GitLab** from the list of available providers

1. Select the GitLab projects relevant to your Agent Space

1. Click **Save**

AWS DevOps Agent will monitor these projects for deployments from GitLab Pipelines to inform causal investigations.

## Managing GitLab connections

+ **Updating access token** – If your access token expires or needs to be updated, you can update it in the AWS DevOps Agent console by modifying the GitLab registration at the account level.
+ **Viewing connected projects** – In the AWS DevOps Agent console, select your Agent Space and go to the Capabilities tab to view connected projects in the Pipeline section.
+ **Removing GitLab connection** – To disconnect GitLab projects from an Agent Space, select the connection in the Pipeline section and click **Remove**. To remove the GitLab registration completely, remove it from all Agent Spaces first, then delete the registration at the account level.

# Connecting MCP Servers


Model Context Protocol (MCP) servers extend AWS DevOps Agent's investigation capabilities by providing access to data from your external observability tools, custom monitoring systems, and operational data sources. This guide explains how to connect an MCP server to AWS DevOps Agent.

## Requirements


Before connecting an MCP server, ensure your server meets these requirements:
+ **Streamable HTTP transport protocol** – Only MCP servers that implement the Streamable HTTP transport protocol are supported.
+ **Authentication support** – Your MCP server must support OAuth 2.0 authentication flows or API key/token-based authentication.

## Security considerations


When connecting MCP servers to AWS DevOps Agent, consider these security aspects:
+ **Tool allowlisting** – Allowlist only the specific tools your Agent Space needs, rather than exposing all tools from your MCP server. See [Configuring MCP tools in an Agent Space](#configuring-capabilities-for-aws-devops-agent-connecting-mcp-servers) for how to allowlist tools per Agent Space. Note that the maximum length of any MCP tool name is 64 characters.
+ **Prompt injection risks** – Custom MCP servers can introduce additional risk of prompt injection attacks. See [Prompt injection protection: AWS DevOps Agent Security](aws-devops-agent-security.md) for more information.
+ **Read-only tools and access** – Only allowlist read-only MCP tools and ensure that authentication credentials grant read-only access.

See [AWS DevOps Agent Security](aws-devops-agent-security.md) for more information on prompt injection and the shared responsibility model.
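The security considerations above mention a 64-character limit on MCP tools. Assuming the limit applies to the length of each tool name, a hypothetical pre-check of a candidate allowlist might look like this:

```python
MAX_TOOL_NAME_LENGTH = 64  # per the limit noted above (assumed to count characters)

def oversized_tool_names(tool_names):
    """Return tool names that exceed the 64-character limit."""
    return [name for name in tool_names if len(name) > MAX_TOOL_NAME_LENGTH]

# "query_service_metrics" is a made-up tool name for illustration.
tools = ["query_service_metrics", "x" * 70]
print(oversized_tool_names(tools))  # the 70-character name is flagged
```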

**Note**  
If your MCP server is on a private network, see [Connecting to privately hosted tools](configuring-capabilities-for-aws-devops-agent-connecting-to-privately-hosted-tools.md).

## Registering an MCP server (account-level)


MCP servers are registered at the AWS account level and shared among all Agent Spaces in that account. Individual Agent Spaces can then choose which specific tools they need from each MCP server.

### Step 1: MCP server details


1. Sign in to the AWS Management Console

1. Navigate to the AWS DevOps Agent console

1. Go to the **Capability Providers** page (accessible from the side navigation)

1. Find **MCP Server** in the **Available** providers section and click **Register**

1. On the **MCP server details** page, enter the following information:
   + **Name** – Enter a descriptive name for your MCP server
   + **Endpoint URL** – Enter the full HTTPS URL of your MCP server endpoint
   + **Description** (optional) – Add a description to help identify the server's purpose
   + **Enable Dynamic Client Registration** – Select this checkbox if you want to allow AWS DevOps Agent to automatically register with your MCP server's authorization server

1. Click **Next**

**Note**  
The MCP server endpoint URL will be displayed in AWS CloudTrail logs in your account.

### Step 2: Authorization flow


Select the authentication method for your MCP server:

**OAuth Client Credentials** – If your MCP server uses OAuth Client Credentials flow:

1. Select **OAuth Client Credentials**

1. Click **Next**

**OAuth 3LO (Three-Legged OAuth)** – If your MCP server uses OAuth 3LO for authentication:

1. Select **OAuth 3LO**

1. Click **Next**

**API Key** – If your MCP server uses API key authentication:

1. Select **API Key**

1. Click **Next**

### Step 3: Authorization configuration


Configure additional authorization parameters based on the selected authentication method:

**For OAuth Client Credentials:**

1. **Client ID** – Enter the client ID of the OAuth client

1. **Client Secret** – Enter the client secret of the OAuth client

1. **Exchange URL** – Enter the OAuth token exchange endpoint URL

1. **Exchange Parameters** – Enter OAuth token exchange parameters for authenticating with the service

1. **Add Scope** – Add OAuth scopes for authentication

1. Click **Next**

**For OAuth 3LO:**

1. **Client ID** – Enter the client ID of the OAuth client

1. **Client Secret** – Enter the client secret of the OAuth client if it’s required by your OAuth client

1. **Exchange URL** – Enter the OAuth token exchange endpoint URL

1. **Authorization URL** – Enter the OAuth authorization endpoint URL

1. **Code Challenge Support** – Select this checkbox if your OAuth client supports code challenge

1. **Add Scope** – Add OAuth scopes for authentication

1. Click **Next**
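For background on the code challenge option above: OAuth clients that support it typically implement PKCE (RFC 7636), in which the client derives an S256 code challenge from a random code verifier. The derivation looks like the following; this sketch is for understanding only, and you do not perform it yourself during registration:

```python
import base64
import hashlib
import secrets

# Generate a high-entropy, URL-safe code verifier.
code_verifier = secrets.token_urlsafe(32)

# Derive the S256 code challenge: BASE64URL(SHA-256(verifier)), unpadded.
digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

print(len(code_verifier))   # 43
print(len(code_challenge))  # 43
```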

**For API Key:**

1. Enter an API key name

1. Enter the name of the header that will contain the API key in the request

1. Enter your API key value

1. Click **Next**
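For background, API key authentication of this kind means each request to the MCP server carries the key in the header you named. A minimal sketch, with a placeholder header name and key value:

```python
API_KEY_HEADER = "X-Api-Key"   # the header name you configured (placeholder)
API_KEY_VALUE = "example-key"  # your API key value (placeholder)

def build_headers(extra=None):
    """Build request headers that carry the API key for an MCP request."""
    headers = {"Content-Type": "application/json", API_KEY_HEADER: API_KEY_VALUE}
    if extra:
        headers.update(extra)
    return headers

print(build_headers())
```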

### Step 4: Review and submit


1. Review all the MCP server configuration details

1. Click **Submit** to complete the registration

1. AWS DevOps Agent will validate the connection to your MCP server

1. Upon successful validation, your MCP server will be registered at the account level

## Configuring MCP tools in an Agent Space


After registering an MCP server at the account level, you can configure which tools from that server are available to specific Agent Spaces:

1. In the AWS DevOps Agent console, select your Agent Space

1. Go to the **Capabilities** tab

1. In the **MCP Servers** section, click **Add**

1. Select the registered MCP server you want to connect to this Agent Space

1. Configure which tools from this MCP server should be available to the Agent Space:
   + **Allow all tools** – Makes all tools from the MCP server available
   + **Select specific tools** – Allows you to choose which tools to allowlist

1. Click **Add** to connect the MCP server to your Agent Space

AWS DevOps Agent will now be able to use the allowlisted tools from your MCP server during investigations in this Agent Space.

## Managing MCP server connections


**Updating authentication credentials** – If your authentication credentials need to be updated, you will need to re-register your MCP server. Navigate to the **Capability Providers** page in the AWS DevOps Agent console, locate your MCP server, remove any active associations, and click **Deregister**. Next, **register** your MCP server with the new authentication credentials and re-create any necessary associations with your Agent Space.

**Viewing connected MCP servers** – To see all MCP servers connected to your Agent Space, select your Agent Space, go to the **Capabilities** tab, and check the **MCP Servers** section. You can also update selected tools here.

**Removing MCP server connections** – To disconnect an MCP server from an Agent Space, select the server in the **MCP Servers** section and click **Remove**. To completely delete an MCP server registration, remove it from all Agent Spaces first, then delete the account-level registration.

## Related topics

+ Security in AWS DevOps Agent
+ Setting up an Agent Space
+ Prompt Injection Protection

# Connecting multiple AWS Accounts


Secondary AWS accounts allow AWS DevOps Agent to investigate resources across multiple AWS accounts in your organization. When your applications span multiple accounts, adding secondary accounts ensures the agent has visibility into all relevant resources during incident investigations. Greater access to the accounts and resources composing an application ensures greater investigation accuracy.

## Prerequisites


Before adding a secondary AWS account, ensure you have:
+ Access to the AWS DevOps Agent console in the primary account
+ Administrative access to the secondary AWS account
+ IAM permissions to create roles in the secondary account

## Adding a secondary AWS account


In addition to the steps below, you can use the [AWS DevOps Agent CLI onboarding guide](getting-started-with-aws-devops-agent-cli-onboarding-guide.md) to programmatically add secondary accounts.

### Step 1: Start the secondary account configuration


1. Sign in to the AWS Management Console and navigate to the AWS DevOps Agent console

1. Select your Agent Space

1. Go to the **Capabilities** tab

1. In the **Cloud** section, locate the **Secondary sources** subsection

1. Click **Add**

### Step 2: Specify the role name


1. In the **Name your role** field, enter a name for the role you'll create in the secondary account

1. Note this name—you'll use it again when creating the role in the secondary account

1. Copy the trust policy and inline policy shown in the console and save them; you'll paste them in Steps 3 and 6

### Step 3: Create the role in the secondary account


1. Open a new browser tab and sign in to the IAM console in the secondary AWS account

1. Navigate to **IAM** > **Roles** > **Create role**

1. Select **Custom trust policy**

1. Paste the trust policy you copied from Step 2

1. Click **Next**

### Step 4: Attach the AWS managed policy


1. In the **Permissions policies** section, search for **AIOpsAssistantPolicy**

1. Select the checkbox next to the **AIOpsAssistantPolicy** managed policy

1. Click **Next**

### Step 5: Name and create the role


1. In the **Role name** field, enter the same role name you provided in Step 2

1. (Optional) Add a description to help identify the role's purpose

1. Review the trust policy and attached permissions

1. Click **Create role**

### Step 6: Attach the inline policy


1. In the IAM console, locate and select the role you just created

1. Go to the **Permissions** tab

1. Click **Add permissions** > **Create inline policy**

1. Switch to the **JSON** tab

1. Paste the inline policy you saved in Step 2 into the JSON editor

1. Click **Next**

1. Provide a name for the inline policy (for example, "DevOpsAgentInlinePolicy")

1. Click **Create policy**

### Step 7: Complete the configuration


1. Return to the AWS DevOps Agent console in the primary account

1. Click **Next** to complete the secondary account configuration

1. Verify the connection status shows as **Active**

## Understanding the required policies


AWS DevOps Agent requires three policy components to access resources in a secondary account:
+ **Trust policy** – Allows AWS DevOps Agent in the primary account to assume the role in the secondary account. This establishes the trust relationship between accounts.
+ **AIOpsAssistantPolicy (AWS managed policy)** – Provides the core read-only permissions AWS DevOps Agent needs to investigate resources in the secondary account. This policy is maintained by AWS and updated as new capabilities are added.
+ **Inline policy** – Provides additional permissions specific to your Agent Space configuration. This policy is generated based on your Agent Space settings and may include permissions for specific integrations or features.

In the primary account, the AWS DevOps Agent IAM Role must be able to assume the role created in the secondary account.
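For background, a cross-account trust policy generally takes the following shape. This is a hypothetical sketch with a placeholder account ID and role name; always use the exact trust policy the console provides:

```python
import json

# Illustrative only: the console supplies the actual trust policy.
# "111122223333" and "DevOpsAgentPrimaryRole" are placeholder values.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/DevOpsAgentPrimaryRole"
            },
            "Action": "sts:AssumeRole",
        }
    ],
}
print(json.dumps(trust_policy, indent=2))
```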

## Managing secondary accounts

+ **Viewing connected accounts** – In the **Capabilities** tab, the **Secondary sources** subsection lists all connected secondary accounts with their connection status.
+ **Updating the IAM role** – If you need to modify permissions, update the inline policy attached to the role in the secondary account. Changes take effect immediately.
+ **Removing a secondary account** – To disconnect a secondary account, select it in the **Secondary sources** list and click **Remove**. This does not delete the IAM role in the secondary account.

# Connecting telemetry sources


AWS DevOps Agent provides three ways to connect to your telemetry sources.

## Built-in, 2-way integration


Currently, AWS DevOps Agent supports Dynatrace users with a built-in, 2-way integration enabling the following:
+ **Topology resource mapping** - AWS DevOps Agent will augment your DevOps Agent Space Topology with entities and relationships available to it via an AWS DevOps Agent-hosted Dynatrace MCP server.
+ **Automated Investigation triggering** - Dynatrace Workflows can be configured to trigger incident resolution Investigations from Dynatrace Problems.
+ **Telemetry introspection** - AWS DevOps Agent can introspect Dynatrace telemetry as it investigates an issue via the AWS DevOps Agent-hosted Dynatrace MCP server.
+ **Status updates** - AWS DevOps Agent will publish key investigation findings, root cause analyses, and generated mitigation plans to the Dynatrace user interface.

To learn about 2-way integrations, see
+ [Connecting Dynatrace](connecting-telemetry-sources-connecting-dynatrace.md)

## Built-in, 1-way integration


Currently, AWS DevOps Agent supports AWS CloudWatch, Datadog, Grafana, New Relic, and Splunk users with built-in, 1-way integrations.

**Security best practice:** When configuring credentials for built-in 1-way integrations, we recommend scoping API keys and tokens to read-only access. AWS DevOps Agent uses these credentials for telemetry introspection only and does not require write access to your telemetry provider.

The AWS CloudWatch built-in, 1-way integration requires no additional setup and enables the following:
+ **Topology resource mapping** - AWS DevOps Agent will augment your DevOps Agent Space Topology with entities and relationships available to it via your configured primary and secondary AWS cloud accounts.
+ **Telemetry introspection** - AWS DevOps Agent can introspect AWS CloudWatch telemetry as it investigates an issue via the IAM role(s) provided during primary and secondary AWS cloud account configuration.

The Datadog, Grafana, New Relic, and Splunk built-in, 1-way integrations require setup and enable the following:
+ **Automated Investigation triggering** - Datadog, Grafana, New Relic, and Splunk events can be configured to trigger AWS DevOps Agent incident resolution Investigations via AWS DevOps Agent webhooks.
+ **Telemetry introspection** - AWS DevOps Agent can introspect Datadog, Grafana, New Relic, and Splunk telemetry as it investigates an issue via each provider's remote MCP server.

To learn about 1-way integrations, see the following:
+ [Connecting Datadog](connecting-telemetry-sources-connecting-datadog.md)
+ [Connecting Grafana](connecting-telemetry-sources-connecting-grafana.md)
+ [Connecting New Relic](connecting-telemetry-sources-connecting-new-relic.md)
+ [Connecting Splunk](connecting-telemetry-sources-connecting-splunk.md)

## Bring-your-own telemetry sources


For any other telemetry source, including Prometheus metrics, you can leverage AWS DevOps Agent’s support for both webhook and MCP server integration.

To learn about bring-your-own integrations, see the following:
+ [Invoking DevOps Agent through Webhook](configuring-capabilities-for-aws-devops-agent-invoking-devops-agent-through-webhook.md)
+ [Connecting MCP Servers](configuring-capabilities-for-aws-devops-agent-connecting-mcp-servers.md)

# Connecting Dynatrace


## Built-in, 2-way integration


Currently, AWS DevOps Agent supports Dynatrace users with a built-in, 2-way integration enabling the following:
+ **Topology resource mapping** - AWS DevOps Agent will augment your DevOps Agent Space Topology with entities and relationships available to it from your Dynatrace environment.
+ **Automated Investigation triggering** - Dynatrace Workflows can be configured to trigger incident resolution Investigations from Dynatrace Problems.
+ **Telemetry introspection** - AWS DevOps Agent can introspect Dynatrace telemetry as it investigates an issue via the AWS DevOps Agent-hosted Dynatrace MCP server.
+ **Status updates** - AWS DevOps Agent will publish key investigation findings, root cause analyses, and generated mitigation plans to the Dynatrace user interface.

## Onboarding


### Onboarding Process


Onboarding your Dynatrace observability system involves three stages:

1. **Connect** - Establish a connection to Dynatrace by configuring account access credentials for all the environments you may need

1. **Enable** - Activate Dynatrace in specific Agent spaces with specific Dynatrace environments

1. **Configure your Dynatrace environment** - Download the workflows and dashboard and import them into Dynatrace, noting the webhook details used to trigger investigations in designated Agent spaces

### Step 1: Connect


Establish connection to your Dynatrace environment

#### Configuration


1. Go to the **Capability Providers** page (accessible from the side navigation)

1. Find **Dynatrace** in the **Available** providers section under **Telemetry** and click **Register**

1. **Create an OAuth client in Dynatrace with the required permissions:**

   1. See [Dynatrace documentation](https://docs.dynatrace.com/docs/shortlink/aws-devops-agent)

   1. When ready, press **Next**

   1. You can connect multiple Dynatrace environments and later scope to specific ones for each DevOps Agent Space you may have.

1. Enter your Dynatrace details from the OAuth client setup:
   + **Client Name**
   + **Client ID**
   + **Client Secret**
   + **Account URN**

1. Click Next

1. Review and add

### Step 2: Enable


Activate Dynatrace in a specific Agent space and configure appropriate scoping

#### Configuration


1. From the agent spaces page, select an agent space and press view details

1. Select the Capabilities tab

1. Locate the Telemetry section and press Add

1. Dynatrace appears with 'Registered' status. Click Add to add it to your agent space

1. Dynatrace Environment ID - Provide the Dynatrace environment ID you would like to associate with this DevOps agent space.

1. Enter one or more Dynatrace Entity IDs - these help DevOps Agent discover your most important resources, such as services or applications. **If you are unsure, you can press Remove.**

1. Review and press Save

1. Copy the Webhook URL and Webhook Secret. See [Dynatrace documentation](https://docs.dynatrace.com/docs/shortlink/aws-devops-agent) to add these credentials to Dynatrace.

### Step 3: Configure your Dynatrace environment


To complete your Dynatrace setup, you will need to perform certain steps in your Dynatrace environment. Follow the instructions in the [Dynatrace documentation](https://docs.dynatrace.com/docs/shortlink/aws-devops-agent).

#### Supported Event Schemas


AWS DevOps Agent supports two types of events from Dynatrace using webhooks. The supported event schemas are documented below:

##### Incident Event


Incident events are used to trigger an investigation. The event schema is:

```
{
    "event.id": string;
    "event.status": "ACTIVE" | "CLOSED";
    "event.status_transition": string;
    "event.description": string;
    "event.name": string;
    "event.category": "AVAILABILITY" | "ERROR" | "SLOWDOWN" | "RESOURCE_CONTENTION" | "CUSTOM_ALERT" | "MONITORING_UNAVAILABLE" | "INFO";
    "event.start"?: string;
    "affected_entity_ids"?: string[];
}
```

##### Mitigation Event


Mitigation events are used to trigger generation of a mitigation report with next steps for the investigation. The event schema is:

```
{
    "task_id": string;
    "task_version": number;
    "event.type": "mitigation_request";
}
```
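Before sending events to the webhook, you may want to check them against these schemas. The following Python sketch is purely illustrative — the field names and allowed values are taken from the incident event schema above, but the helper itself is hypothetical and not part of AWS DevOps Agent or Dynatrace:

```python
# Allowed values copied from the incident event schema above.
ALLOWED_STATUS = {"ACTIVE", "CLOSED"}
ALLOWED_CATEGORY = {
    "AVAILABILITY", "ERROR", "SLOWDOWN", "RESOURCE_CONTENTION",
    "CUSTOM_ALERT", "MONITORING_UNAVAILABLE", "INFO",
}
REQUIRED_FIELDS = [
    "event.id", "event.status", "event.status_transition",
    "event.description", "event.name", "event.category",
]

def validate_incident_event(payload: dict) -> list:
    """Return a list of problems; an empty list means the payload looks valid."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in payload]
    if payload.get("event.status") not in ALLOWED_STATUS:
        problems.append("event.status must be ACTIVE or CLOSED")
    if payload.get("event.category") not in ALLOWED_CATEGORY:
        problems.append("event.category has an unsupported value")
    return problems

event = {
    "event.id": "example-123",
    "event.status": "ACTIVE",
    "event.status_transition": "CREATED",
    "event.description": "High error rate on checkout service",
    "event.name": "Checkout errors",
    "event.category": "ERROR",
}
print(validate_incident_event(event))  # prints []
```

A payload that fails this kind of check may be ignored by DevOps Agent, so validating before wiring up the Dynatrace workflow can save debugging time.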

## Removal


The telemetry source is connected at two levels: the agent space level and the account level. To completely remove it, you must first remove it from all agent spaces where it is used; then it can be deregistered.

### Step 1: Remove from agent space


1. From the agent spaces page, select an agent space and press view details

1. Select the Capabilities tab

1. Scroll down to the Telemetry section

1. Select Dynatrace

1. Press remove

### Step 2: Deregister from account


1. Go to the **Capability Providers** page (accessible from the side navigation)

1. Scroll to the **Currently registered** section.

1. Check that the agent space count is zero (if not, repeat Step 1 above in your other agent spaces)

1. Press Deregister next to Dynatrace

# Connecting Datadog


## Built-in, 1-way integration


Currently, AWS DevOps Agent supports Datadog users with a built-in, 1-way integration, enabling the following:
+ **Automated Investigation triggering** - Datadog events can be configured to trigger AWS DevOps Agent incident resolution Investigations via AWS DevOps Agent webhooks.
+ **Telemetry introspection** - AWS DevOps Agent can introspect Datadog telemetry as it investigates an issue via Datadog's remote MCP server.

## Onboarding


### Step 1: Connect


Establish connection to your Datadog remote MCP endpoint with account access credentials

#### Configuration


1. Go to the **Capability Providers** page (accessible from the side navigation)

1. Find **Datadog** in the **Available** providers section under **Telemetry** and click **Register**

1. Enter your Datadog MCP server details:
   + **Server Name** - Unique identifier (e.g., my-datadog-server)
   + **Endpoint URL** - Your Datadog MCP server endpoint. The endpoint URL varies depending on your Datadog site. See the Datadog site endpoint table below.
   + **Description** - Optional server description

1. Click Next

1. Review and submit

#### Datadog site endpoints


The MCP endpoint URL varies depending on your Datadog site. To identify your site, check the URL in your browser when logged into Datadog, or see [Access the Datadog site](https://docs.datadoghq.com/getting_started/site/#access-the-datadog-site).


| Datadog Site | Site Domain | MCP Endpoint URL | 
| --- | --- | --- | 
| US1 (default) | datadoghq.com | https://mcp.datadoghq.com/api/unstable/mcp-server/mcp | 
| US3 | us3.datadoghq.com | https://mcp.us3.datadoghq.com/api/unstable/mcp-server/mcp | 
| US5 | us5.datadoghq.com | https://mcp.us5.datadoghq.com/api/unstable/mcp-server/mcp | 
| EU1 | datadoghq.eu | https://mcp.datadoghq.eu/api/unstable/mcp-server/mcp | 
| AP1 | ap1.datadoghq.com | https://mcp.ap1.datadoghq.com/api/unstable/mcp-server/mcp | 
| AP2 | ap2.datadoghq.com | https://mcp.ap2.datadoghq.com/api/unstable/mcp-server/mcp | 
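As the table shows, every endpoint follows the same pattern derived from the site domain. A small helper function — an illustrative convenience, not an official Datadog or AWS utility — can build it:

```python
def datadog_mcp_endpoint(site_domain: str) -> str:
    """Build the MCP endpoint URL from a Datadog site domain (see table above)."""
    return f"https://mcp.{site_domain}/api/unstable/mcp-server/mcp"

print(datadog_mcp_endpoint("datadoghq.com"))
# prints https://mcp.datadoghq.com/api/unstable/mcp-server/mcp
print(datadog_mcp_endpoint("us5.datadoghq.com"))
# prints https://mcp.us5.datadoghq.com/api/unstable/mcp-server/mcp
```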

#### Authorization


Complete OAuth authorization as follows:
+ Authorize as your user on the Datadog OAuth page
+ If you are not logged in, click Allow, log in, then authorize

Once configured, Datadog becomes available across all Agent spaces.

### Step 2: Enable


Activate Datadog in a specific Agent space and configure appropriate scoping

#### Configuration


1. From the agent spaces page, select an agent space and press view details (if you have not yet created an agent space see [Creating an Agent Space](getting-started-with-aws-devops-agent-creating-an-agent-space.md))

1. Select the Capabilities tab

1. Scroll down to the Telemetry section

1. Press Add

1. Select Datadog

1. Click Next

1. Review and press Save

1. Copy the Webhook URL and API Key

### Step 3: Configure webhooks


Using the Webhook URL and API Key you can configure Datadog to send events to trigger an investigation, for example from an alarm.

To ensure that events sent can be used by the DevOps Agent, make sure that the data transmitted to the webhook matches the data schema specified below. Events that do not match this schema may be ignored by DevOps Agent.

Set the method and the headers

```
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer <Token>",
    },
```

Send the body as a JSON string.

```
{
    eventType: 'incident';
    incidentId: string;
    action: 'created' | 'updated' | 'closed' | 'resolved';
    priority: "CRITICAL" | "HIGH" | "MEDIUM" | "LOW" | "MINIMAL";
    title: string;
    description?: string;
    timestamp?: string;
    service?: string;
    // The original event generated by service is attached here.
    data?: object;
}
```
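Putting the method, headers, and body together, a webhook call can be sketched with only the Python standard library. The URL and token below are placeholders for the Webhook URL and API Key you copied in Step 2, and the payload values are illustrative:

```python
import json
import urllib.request

# Placeholders: substitute the Webhook URL and API Key copied in Step 2.
WEBHOOK_URL = "https://example.com/your-devops-agent-webhook"
API_KEY = "<Token>"

payload = {
    "eventType": "incident",
    "incidentId": "dd-incident-001",
    "action": "created",
    "priority": "HIGH",
    "title": "High latency on payments API",
    "description": "p99 latency above 2s for 10 minutes",
    "service": "payments",
    # The original event generated by the service can be attached here.
    "data": {"monitor_id": 12345},
}

request = urllib.request.Request(
    WEBHOOK_URL,
    data=json.dumps(payload).encode("utf-8"),  # body sent as a JSON string
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    method="POST",
)
# urllib.request.urlopen(request) would send the event; uncomment once the
# placeholders are replaced with real values.
```

Events that do not match the schema may be ignored by DevOps Agent, so keep the field names and allowed values exactly as shown above.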

To send webhooks with Datadog, see [https://docs.datadoghq.com/integrations/webhooks/](https://docs.datadoghq.com/integrations/webhooks/) (note: select no authorization and use the custom header option instead).

Learn more: [Datadog Remote MCP Server](https://www.datadoghq.com/blog/datadog-remote-mcp-server/)

## Removal


The telemetry source is connected at two levels: the agent space level and the account level. To completely remove it, you must first remove it from all agent spaces where it is used; then it can be deregistered.

### Step 1: Remove from agent space


1. From the agent spaces page, select an agent space and press view details

1. Select the Capabilities tab

1. Scroll down to the Telemetry section

1. Select Datadog

1. Press remove

### Step 2: Deregister from account


1. Go to the **Capability Providers** page (accessible from the side navigation)

1. Scroll to the **Currently registered** section.

1. Check that the agent space count is zero (if not, repeat Step 1 above in your other agent spaces)

1. Press Deregister next to Datadog

# Connecting Grafana


Grafana integration enables AWS DevOps Agent to query metrics, dashboards, and alerting data from your Grafana instance during incident investigations. This integration follows a two-step process: account-level registration of Grafana, followed by connecting it to individual Agent Spaces.

To improve security, the Grafana integration only enables read-only tools. Write tools are disabled and cannot be enabled. This means the agent can query and read data from your Grafana instance but cannot create, modify, or delete any Grafana resources such as dashboards, alerts, or annotations. For more information, see [Security in AWS DevOps Agent](https://docs.aws.amazon.com/devopsagent/latest/userguide/aws-devops-agent-security.html).

## Grafana requirements


Before connecting Grafana, ensure you have:
+ Grafana version 9.0 or later. Some features, particularly datasource-related operations, may not work correctly with earlier versions due to missing API endpoints.
+ A Grafana instance accessible over HTTPS. Both public and private network endpoints are supported. With private network connectivity, your Grafana instance can be hosted inside a VPC with no public internet access. For details, see [Connecting to privately hosted tools](configuring-capabilities-for-aws-devops-agent-connecting-to-privately-hosted-tools.md).
+ A Grafana service account with an access token that has appropriate read permissions

## Registering Grafana (account-level)


Grafana is registered at the AWS account level and shared among all Agent Spaces in that account.

### Step 1: Configure Grafana


1. Sign in to the AWS Management Console

1. Navigate to the AWS DevOps Agent console

1. Go to the **Capability Providers** page (accessible from the side navigation)

1. Find **Grafana** in the **Available** providers section under **Telemetry** and click **Register**

1. On the **Configure Grafana** page, enter the following information:
   + **Service Name** (required) – Enter a descriptive name for your Grafana server using alphanumeric characters, hyphens, and underscores only. For example, `my-grafana-server`.
   + **Grafana URL** (required) – Enter the full HTTPS URL of your Grafana instance. For example, `https://myinstance.grafana.net`.
   + **Service Account Access Token** (required) – Enter a Grafana service account access token. Tokens typically start with `glsa_`. To create a service account token, navigate to your Grafana instance, go to **Administration > Service accounts**, create a service account with Viewer role, and generate a token.
   + **Description** (optional) – Add a description to help identify the server's purpose. For example, `Production Grafana server for monitoring`.

1. (Optional) Add AWS tags to the registration for organizational purposes.

1. Click **Next**

### Step 2: Review and submit Grafana registration


1. Review all the Grafana configuration details

1. Click **Submit** to complete the registration

1. Upon successful registration, Grafana appears in the **Currently registered** section of the Capability Providers page

## Adding Grafana to an Agent Space


After registering Grafana at the account level, you can connect it to individual Agent Spaces:

1. In the AWS DevOps Agent console, select your Agent Space

1. Go to the **Capabilities** tab

1. In the **Telemetry** section, click **Add**

1. Select **Grafana** from the list of available providers

1. Click **Save**

## Configuring Grafana alert webhooks


You can configure Grafana to automatically trigger AWS DevOps Agent investigations when alerts fire by sending webhooks through Grafana contact points. For details on webhook authentication methods and credential management, see [Invoking DevOps Agent through Webhook](configuring-capabilities-for-aws-devops-agent-invoking-devops-agent-through-webhook.md).

### Step 1: Create a custom notification template


In your Grafana instance, navigate to **Alerting > Contact points > Notification templates** and create a new template with the following content:

```
{{ define "devops-agent-payload" }}
{
  "eventType": "incident",
  "incidentId": "{{ (index .Alerts 0).Labels.alertname }}-{{ (index .Alerts 0).Fingerprint }}",
  "action": "{{ if eq .Status "resolved" }}resolved{{ else }}created{{ end }}",
  "priority": "{{ if eq .Status "resolved" }}MEDIUM{{ else }}HIGH{{ end }}",
  "title": "{{ (index .Alerts 0).Labels.alertname }}",
  "description": "{{ (index .Alerts 0).Annotations.summary }}",
  "service": "{{ if (index .Alerts 0).Labels.job }}{{ (index .Alerts 0).Labels.job }}{{ else }}grafana{{ end }}",
  "timestamp": "{{ (index .Alerts 0).StartsAt }}",
  "data": {
    "metadata": {
      {{ range $k, $v := (index .Alerts 0).Labels }}
      "{{ $k }}": "{{ $v }}",
      {{ end }}
      "_source": "grafana"
    }
  }
}
{{ end }}
```

This template formats Grafana alerts into the webhook payload structure expected by AWS DevOps Agent. It maps alert labels, annotations, and status into the appropriate fields, and includes all alert labels as metadata.

**Note:** This template processes only the first alert in a group. Grafana groups multiple firing alerts into a single notification by default. To ensure each alert is sent individually, configure your notification policies to group by `alertname`. Additionally, this template does not escape special JSON characters in label values or annotations. Ensure that alert labels and the `summary` annotation do not contain characters such as double quotes or newlines, which would produce invalid JSON.
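To see why unescaped label values break the payload, the following Python sketch reproduces the failure mode described in the note, and shows the escaping that avoids it. It is illustrative only — Grafana notification templates cannot call Python — but it demonstrates what "safe" label content means in practice:

```python
import json

label_value = 'disk "sda" full\nfree < 5%'

# Naive substitution, as the notification template does, yields invalid JSON:
naive = '{"description": "%s"}' % label_value
try:
    json.loads(naive)
except json.JSONDecodeError:
    print("invalid JSON")  # the raw quote and newline break parsing

# json.dumps performs the escaping the template lacks:
safe = '{"description": %s}' % json.dumps(label_value)
print(json.loads(safe)["description"] == label_value)  # prints True
```

In Grafana itself the equivalent safeguard is keeping quotes and newlines out of the alert labels and `summary` annotation, as the note above advises.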

### Step 2: Create a webhook contact point


1. In Grafana, navigate to **Alerting > Contact points** and click **Add contact point**

1. Select **Webhook** as the integration type

1. Set the **URL** to your AWS DevOps Agent webhook endpoint

1. Under **Optional Webhook settings**, configure the authentication headers based on your webhook type. See [Webhook authentication methods](configuring-capabilities-for-aws-devops-agent-invoking-devops-agent-through-webhook.md) for details.

1. Set the **Message** field to use your custom template: `{{ template "devops-agent-payload" . }}`

1. Click **Save contact point**

### Step 3: Assign the contact point to a notification policy


1. Navigate to **Alerting > Notification policies**

1. Edit an existing policy or create a new one

1. Set the contact point to the webhook contact point you created

1. Click **Save policy**

When a matching alert fires, Grafana will send the formatted payload to AWS DevOps Agent, which will start an investigation automatically.

## Limitations

+ **ClickHouse data source tools** – ClickHouse data source tools are not currently supported.
+ **Proactive incident prevention** – [Proactive incident prevention](working-with-devops-agent-proactive-incident-prevention.md) does not currently use Grafana tools. Support is planned for a future release.

### Amazon Managed Grafana considerations


If you are using [Amazon Managed Grafana](https://aws.amazon.com/grafana/) (AMG), be aware of the following limitations:
+ **Webhook contact points are not supported** – AMG does not currently support webhook contact points in its alerting configuration. You cannot use AMG to send alert webhooks directly to AWS DevOps Agent. For details, see [Alerting contact points in Amazon Managed Grafana](https://docs.aws.amazon.com/grafana/latest/userguide/v9-alerting-explore-contacts.html).
+ **Service account token expiration** – AMG service account tokens have a maximum expiration of 30 days. You will need to rotate tokens and update your Grafana registration in AWS DevOps Agent before they expire. See [Managing Grafana connections](#managing-grafana-connections) for how to update credentials. For details on AMG token limits, see [Service accounts in Amazon Managed Grafana](https://docs.aws.amazon.com/grafana/latest/userguide/service-accounts.html).

## Managing Grafana connections

+ **Updating credentials** – If your service account token expires or needs to be updated, deregister Grafana from the Capability Providers page and re-register with the new token.
+ **Viewing connected instances** – In the AWS DevOps Agent console, select your Agent Space and go to the Capabilities tab to view connected telemetry sources.
+ **Removing Grafana** – To disconnect Grafana from an Agent Space, select it in the Telemetry section and click **Remove**. To completely remove the registration, remove it from all Agent Spaces first, then deregister from the Capability Providers page.

# Connecting New Relic


## Built-in, 1-way integration


Currently, AWS DevOps Agent supports New Relic users with a built-in, 1-way integration, enabling the following:
+ **Automated Investigation triggering** - New Relic events can be configured to trigger AWS DevOps Agent incident resolution Investigations via AWS DevOps Agent webhooks.
+ **Telemetry introspection** - AWS DevOps Agent can introspect New Relic telemetry as it investigates an issue via New Relic's remote MCP server.

## Onboarding


### Step 1: Connect


Establish connection to your New Relic remote MCP endpoint with account access credentials

Use a Full Platform User (not Basic/Core) in New Relic to enable New Relic MCP tools.

#### Configuration


1. Go to the **Capability Providers** page (accessible from the side navigation)

1. Find **New Relic** in the **Available** providers section under **Telemetry** and click **Register**

1. Follow the instructions to obtain your New Relic API Key

1. Enter your New Relic MCP server API Key details:
   + **Account ID:** Enter your New Relic account ID obtained above
   + **API Key:** Enter the API Key obtained above
   + **Region:** Select US or EU based on where your New Relic account is hosted.

1. Click Add

### Step 2: Enable


Activate New Relic in a specific Agent space and configure appropriate scoping

#### Configuration


1. From the agent spaces page, select an agent space and press view details (if you have not yet created an agent space see [Creating an Agent Space](getting-started-with-aws-devops-agent-creating-an-agent-space.md))

1. Select the Capabilities tab

1. Scroll down to the Telemetry section

1. Press Add

1. Select New Relic

1. Click Next

1. Review and press Save

1. Copy the Webhook URL and API Key

### Step 3: Configure webhooks


Using the Webhook URL and API Key you can configure New Relic to send events to trigger an investigation, for example from an alarm. For more details on setting up webhooks, see [Change tracking webhooks](https://docs.newrelic.com/docs/change-tracking/change-tracking-webhooks/).

To ensure that events sent can be used by the DevOps Agent, make sure that the data transmitted to the webhook matches the data schema specified below. Events that do not match this schema may be ignored by DevOps Agent.

Set the method and the headers

```
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer <Token>",
    },
```

Send the body as a JSON string.

```
{
    eventType: 'incident';
    incidentId: string;
    action: 'created' | 'updated' | 'closed' | 'resolved';
    priority: "CRITICAL" | "HIGH" | "MEDIUM" | "LOW" | "MINIMAL";
    title: string;
    description?: string;
    timestamp?: string;
    service?: string;
    // The original event generated by service is attached here.
    data?: object;
}
```
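Only the first five fields in the schema are required. A minimal valid payload can therefore be as small as the following sketch, in which the incident ID and title are illustrative values:

```python
import json

# Minimal payload: only the non-optional fields from the schema above.
minimal = {
    "eventType": "incident",
    "incidentId": "nr-issue-42",  # illustrative ID
    "action": "created",
    "priority": "CRITICAL",
    "title": "Error rate spike on login service",
}
body = json.dumps(minimal)  # send this string as the request body
print(body)
```

The optional fields (`description`, `timestamp`, `service`, `data`) add context for the investigation, so include them when your New Relic alert can supply the values.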

To send webhooks with New Relic, see [https://newrelic.com/instant-observability/webhook-notifications](https://newrelic.com/instant-observability/webhook-notifications). You can either select Bearer token for the authorization type, or select no authorization and add `Authorization: Bearer <Token>` as a custom header instead.

Learn more: [https://docs.newrelic.com/docs/agentic-ai/mcp/overview/](https://docs.newrelic.com/docs/agentic-ai/mcp/overview/)

## Removal


The telemetry source is connected at two levels: the agent space level and the account level. To completely remove it, you must first remove it from all agent spaces where it is used; then it can be deregistered.

### Step 1: Remove from agent space


1. From the agent spaces page, select an agent space and press view details

1. Select the Capabilities tab

1. Scroll down to the Telemetry section

1. Select New Relic

1. Press remove

### Step 2: Deregister from account


1. Go to the **Capability Providers** page (accessible from the side navigation)

1. Scroll to the **Currently registered** section.

1. Check that the agent space count is zero (if not, repeat Step 1 above in your other agent spaces)

1. Press Deregister next to New Relic

# Connecting Splunk


## Built-in, 1-way integration


Currently, AWS DevOps Agent supports Splunk users with a built-in, 1-way integration, enabling the following:
+ **Automated Investigation triggering** - Splunk events can be configured to trigger AWS DevOps Agent incident resolution Investigations via AWS DevOps Agent webhooks.
+ **Telemetry introspection** - AWS DevOps Agent can introspect Splunk telemetry as it investigates an issue via Splunk's remote MCP server.

## Prerequisites


### Getting a Splunk API token


You will need an MCP URL and token to connect Splunk.

### Splunk Administrator steps


Your Splunk Administrator needs to perform the following steps:
+ Enable [REST API access](https://docs.splunk.com/Documentation/SplunkCloud/latest/RESTTUT/RESTandCloud)
+ [Enable token authentication](https://help.splunk.com/en/splunk-cloud-platform/administer/manage-users-and-security/9.2.2406/authenticate-into-the-splunk-platform-with-tokens/enable-or-disable-token-authentication) on the deployment
+ Create a new role `mcp_user`; the new role does not need to have any capabilities
+ Assign the role `mcp_user` to any users on the deployment who are authorized to use the MCP server
+ If the authorized users do not have permission to create tokens themselves, create the tokens for them with the audience set to `mcp` and an appropriate expiration

### Splunk User steps


A Splunk user needs to perform the following steps:
+ Get an appropriate token from the Splunk Administrator, or create one if they have the permission. The audience for the token must be `mcp`.

## Onboarding


### Step 1: Connect


Establish connection to your Splunk remote MCP endpoint with account access credentials

#### Configuration


1. Go to the **Capability Providers** page (accessible from the side navigation)

1. Find **Splunk** in the **Available** providers section under **Telemetry** and click **Register**

1. Enter your Splunk MCP server details:
   + **Server Name** - Unique identifier (e.g., my-splunk-server)
   + **Endpoint URL** - Your Splunk MCP server endpoint: `https://<YOUR_SPLUNK_DEPLOYMENT_NAME>.api.scs.splunk.com/<YOUR_SPLUNK_DEPLOYMENT_NAME>/mcp/v1/`
   + **Description** - Optional server description
   + **Token Name** - The name of the bearer token for authentication (e.g., `my-splunk-token`)
   + **Token Value** - The bearer token value for authentication

### Step 2: Enable


Activate Splunk in a specific Agent space and configure appropriate scoping

#### Configuration


1. From the agent spaces page, select an agent space and press view details (if you have not yet created an agent space see [Creating an Agent Space](getting-started-with-aws-devops-agent-creating-an-agent-space.md))

1. Select the Capabilities tab

1. Scroll down to the Telemetry section

1. Press Add

1. Select Splunk

1. Click Next

1. Review and press Save

1. Copy the Webhook URL and API Key

### Step 3: Configure webhooks


Using the Webhook URL and API Key you can configure Splunk to send events to trigger an investigation, for example from an alarm.

To ensure that events sent can be used by the DevOps Agent, make sure that the data transmitted to the webhook matches the data schema specified below. Events that do not match this schema may be ignored by DevOps Agent.

Set the method and the headers

```
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer <Token>",
    },
```

Send the body as a JSON string.

```
{
    eventType: 'incident';
    incidentId: string;
    action: 'created' | 'updated' | 'closed' | 'resolved';
    priority: "CRITICAL" | "HIGH" | "MEDIUM" | "LOW" | "MINIMAL";
    title: string;
    description?: string;
    timestamp?: string;
    service?: string;
    // The original event generated by service is attached here.
    data?: object;
}
```

To send webhooks with Splunk, see [https://help.splunk.com/en/splunk-enterprise/alert-and-respond/alerting-manual/9.4/configure-alert-actions/use-a-webhook-alert-action](https://help.splunk.com/en/splunk-enterprise/alert-and-respond/alerting-manual/9.4/configure-alert-actions/use-a-webhook-alert-action) (note: select no authorization and use the custom header option instead).

### Learn more

+ Splunk's MCP Server Documentation: [https://help.splunk.com/en/splunk-cloud-platform/mcp-server-for-splunk-platform/about-mcp-server-for-splunk-platform](https://help.splunk.com/en/splunk-cloud-platform/mcp-server-for-splunk-platform/about-mcp-server-for-splunk-platform)
+ Access requirements and limitations for the Splunk Cloud Platform REST API: [https://docs.splunk.com/Documentation/SplunkCloud/latest/RESTTUT/RESTandCloud](https://docs.splunk.com/Documentation/SplunkCloud/latest/RESTTUT/RESTandCloud)
+ Manage authentication tokens in Splunk Cloud Platform: [https://help.splunk.com/en/splunk-cloud-platform/administer/manage-users-and-security/9.3.2411/authenticate-into-the-splunk-platform-with-tokens/manage-or-delete-authentication-tokens](https://help.splunk.com/en/splunk-cloud-platform/administer/manage-users-and-security/9.3.2411/authenticate-into-the-splunk-platform-with-tokens/manage-or-delete-authentication-tokens)
+ Create and manage roles with Splunk Web: [https://docs.splunk.com/Documentation/SplunkCloud/latest/Security/Addandeditroles](https://docs.splunk.com/Documentation/SplunkCloud/latest/Security/Addandeditroles)

## Removal


The telemetry source is connected at two levels: the agent space level and the account level. To completely remove it, you must first remove it from all agent spaces where it is used; then it can be deregistered.

### Step 1: Remove from agent space


1. From the agent spaces page, select an agent space and press view details

1. Select the Capabilities tab

1. Scroll down to the Telemetry section

1. Select Splunk

1. Press remove

### Step 2: Deregister from account


1. Go to the **Capability Providers** page (accessible from the side navigation)

1. Scroll to the **Currently registered** section. 

1. Check that the agent space count is zero (if not, repeat Step 1 above in your other agent spaces)

1. Press Deregister next to Splunk

# Connecting to ticketing and chat


AWS DevOps Agent is designed to act as a member of your team by participating in your team’s existing communication channels. You can connect DevOps Agent to your ticketing and alarming systems, like ServiceNow and PagerDuty, to automatically launch investigations from incident tickets, accelerating incident response within your existing workflows to reduce mean time to recovery (MTTR). You can also connect your DevOps Agent to your team collaboration systems like Slack to receive activity summaries from your DevOps Agent in a chat channel.

To learn about connecting ticketing and chat integrations, see the following:
+ [Connecting PagerDuty](connecting-to-ticketing-and-chat-connecting-pagerduty.md)
+ [Connecting ServiceNow](connecting-to-ticketing-and-chat-connecting-servicenow.md)
+ [Connecting Slack](connecting-to-ticketing-and-chat-connecting-slack.md)

# Connecting PagerDuty


PagerDuty integration enables AWS DevOps Agent to access and update incident data, on-call schedules, and service information from your PagerDuty account during incident investigations and automated response. This integration uses OAuth 2.0 for secure authentication.

**Important**  
AWS DevOps Agent only supports the newer PagerDuty OAuth 2.0 (Scoped OAuth). Legacy PagerDuty OAuth with a redirect URI is not supported.

## PagerDuty requirements


Before connecting PagerDuty, ensure you have:
+ A PagerDuty account with your OAuth client ID and client secret
+ Your PagerDuty account subdomain (for example, if your PagerDuty URL is `https://your-company.pagerduty.com`, the subdomain is `your-company`)

## Registering PagerDuty


PagerDuty is registered at the AWS account level and shared among all Agent Spaces in that account.

### Step 1: Configure access in PagerDuty


1. Sign in to the AWS Management Console

1. Navigate to the AWS DevOps Agent console

1. Go to the **Capability Providers** page (accessible from the side navigation)

1. Find **PagerDuty** in the **Available** providers section under **Communication** and click **Register**

1. Follow the guided setup on the **Configure access in PagerDuty** page:

**Check your service region and subdomain:**
+ **Account scope** – Select your PagerDuty region (**US** or **EU**) and enter your PagerDuty subdomain. For example, if your PagerDuty URL is `https://your-company.pagerduty.com`, enter `your-company`.

**Create a new app in PagerDuty:**
+ In a separate browser tab, log in to PagerDuty and navigate to **Integrations > App Registration**
+ Create a new app using **OAuth 2.0 Scoped OAuth**
+ Under **Permissions**, grant the following minimum required scopes: `incidents.read`, `incidents.write`, and `services.read`
+ Enable **Events Integration** to allow bi-directional communication between AWS DevOps Agent and PagerDuty

**Configure OAuth credentials:**
+ **Permission scope** – The minimum required scopes are: `incidents.read`, `incidents.write`, `services.read`
+ **Client name** – Enter a descriptive name for your OAuth client
+ **Client ID** – Enter the OAuth client ID from your PagerDuty app registration
+ **Client secret** – Enter the OAuth client secret from your PagerDuty app registration

### Step 2: Review and submit PagerDuty registration


1. Review all the PagerDuty configuration details

1. Click **Submit** to complete the registration

1. Upon successful registration, PagerDuty appears in the **Currently registered** section of the Capability Providers page

## Adding PagerDuty to an Agent Space


After registering PagerDuty at the account level, you can connect it to individual Agent Spaces:

1. In the AWS DevOps Agent console, select your Agent Space

1. Go to the **Capabilities** tab

1. In the **Communications** section, click **Add**

1. Select **PagerDuty** from the list of available providers

1. Click **Save**

## Managing PagerDuty connections

+ **Updating credentials** – If your OAuth credentials need to be updated, deregister PagerDuty from the Capability Providers page and re-register with the new credentials.
+ **Viewing connections** – In the AWS DevOps Agent console, select your Agent Space and go to the Capabilities tab to view connected communication integrations.
+ **Removing PagerDuty** – To disconnect PagerDuty from an Agent Space, select it in the Communications section and click **Remove**. To completely remove the registration, remove it from all Agent Spaces first, then deregister from the Capability Providers page.

## Webhook support


AWS DevOps Agent supports only PagerDuty V3 webhooks. Earlier webhook versions are not supported.

For more information about PagerDuty V3 webhook subscriptions, see [Webhooks Overview](https://developer.pagerduty.com/docs/webhooks-overview#webhook-subscriptions) in the PagerDuty developer documentation.

# Connecting ServiceNow


This tutorial walks you through connecting a ServiceNow instance to AWS DevOps Agent to enable it to automatically initiate incident response investigations when a ticket is created and post its key findings into the originating ticket. It also contains examples for how to configure your ServiceNow instance to send only specific tickets to a DevOps Agent Space and how to orchestrate ticket routing across multiple DevOps Agent Spaces.

## Initial Setup


The first step is to create an OAuth application client in ServiceNow that AWS DevOps Agent can use to access your ServiceNow instance.

### Create a ServiceNow OAuth application client


1. Enable your instance’s client credential system property

   1. Enter `sys_properties.list` in the filter search box and press Enter (the option does not appear in the suggestion list, but pressing Enter opens it)

   1. Choose New

   1. Set the name to `glide.oauth.inbound.client.credential.grant_type.enabled`, the type to “true | false”, and the value to true

![\[alt text not found\]](http://docs.aws.amazon.com/devopsagent/latest/userguide/images/09ed6d5ff911.png)


1. Navigate to System OAuth > Application Registry from the filter search box

1. Choose “New” > “New Inbound Integration Experience” > “New Integration” > “OAuth - Client Credentials Grant”

1. Pick a name, set the OAuth application user to “Problem Administrator”, and click “Save”

![\[alt text not found\]](http://docs.aws.amazon.com/devopsagent/latest/userguide/images/aeff4c127f7c.png)


### Connect your ServiceNow OAuth client to AWS DevOps Agent


1. You can start this process from either of two places: from the **Capability Providers** page, find **ServiceNow** under **Communication** and click **Register**; or, from any DevOps Agent Space you have created, navigate to Capabilities → Communications → Add → ServiceNow and click Register.

1. Next, authorize DevOps Agent to access your ServiceNow instance using the OAuth application client you just created.

![\[alt text not found\]](http://docs.aws.amazon.com/devopsagent/latest/userguide/images/3db5a9aafc5f.png)

+ Follow the remaining steps, and save the resulting webhook information

**Important**  
This information is displayed only once; you will not be able to view it again.

![\[alt text not found\]](http://docs.aws.amazon.com/devopsagent/latest/userguide/images/80d0a319f87e.png)


### Configure your ServiceNow Business Rule


Once you have established connectivity, you’ll need to configure a business rule in ServiceNow to send tickets to your DevOps Agent Space(s).

1. Navigate to Activity Subscriptions → Administration → Business Rules, and click New.

1. Set the “Table” field to “Incident [incident]”, check the “Advanced” box, and set the rule to run after Insert, Update, and Delete.

![\[alt text not found\]](http://docs.aws.amazon.com/devopsagent/latest/userguide/images/6f2a7370e2c0.png)


1. Navigate to the “Advanced” tab and add the following webhook script, inserting your webhook secret and URL where indicated, and click Submit.

```
(function executeRule(current, previous /*null when async*/ ) {

    var WEBHOOK_CONFIG = {
        webhookSecret: GlideStringUtil.base64Encode('<<< INSERT WEBHOOK SECRET HERE >>>'),
        webhookUrl: '<<< INSERT WEBHOOK URL HERE >>>'
    };

    function generateHMACSignature(payloadString, secret) {
        try {
            var mac = new GlideCertificateEncryption();
            var signature = mac.generateMac(secret, "HmacSHA256", payloadString);
            return signature;
        } catch (e) {
            gs.error('HMAC generation failed: ' + e);
            return null;
        }
    }

    function callWebhook(payload, config) {
        try {
            var timestamp = new Date().toISOString();
            var payloadString = JSON.stringify(payload);
            var payloadWithTimestamp = `${timestamp}:${payloadString}`;

            var signature = generateHMACSignature(payloadWithTimestamp, config.webhookSecret);

            if (!signature) {
                gs.error('Failed to generate signature');
                return false;
            }

            gs.info('Generated signature: ' + signature);

            var request = new sn_ws.RESTMessageV2();
            request.setEndpoint(config.webhookUrl);
            request.setHttpMethod('POST');

            request.setRequestHeader('Content-Type', 'application/json');
            request.setRequestHeader('x-amzn-event-signature', signature);
            request.setRequestHeader('x-amzn-event-timestamp', timestamp);

            request.setRequestBody(payloadString);

            var response = request.execute();
            var httpStatus = response.getStatusCode();
            var responseBody = response.getBody();

            if (httpStatus >= 200 && httpStatus < 300) {
                gs.info('Webhook sent successfully. Status: ' + httpStatus);
                return true;
            } else {
                gs.error('Webhook failed. Status: ' + httpStatus + ', Response: ' + responseBody);
                return false;
            }

        } catch (ex) {
            gs.error('Error sending webhook: ' + ex.getMessage());
            return false;
        }
    }

    function createReference(field) {
        if (!field || field.nil()) {
            return null;
        }

        return {
            link: field.getLink(true),
            value: field.toString()
        };
    }

    function getStringValue(field) {
        if (!field || field.nil()) {
            return null;
        }
        return field.toString();
    }

    function getIntValue(field) {
        if (!field || field.nil()) {
            return null;
        }
        var val = parseInt(field.toString());
        return isNaN(val) ? null : val;
    }

    var eventType = (current.operation() == 'insert') ? "create" : "update";

    var incidentEvent = {
        eventType: eventType.toString(),
        sysId: current.sys_id.toString(),
        priority: getStringValue(current.priority),
        impact: getStringValue(current.impact),
        active: getStringValue(current.active),
        urgency: getStringValue(current.urgency),
        description: getStringValue(current.description),
        shortDescription: getStringValue(current.short_description),
        parent: getStringValue(current.parent),
        incidentState: getStringValue(current.incident_state),
        severity: getStringValue(current.severity),
        problem: createReference(current.problem),
        additionalContext: {}
    };

    incidentEvent.additionalContext = {
        number: current.number.toString(),
        opened_at: getStringValue(current.opened_at),
        opened_by: current.opened_by.nil() ? null : current.opened_by.getDisplayValue(),
        assigned_to: current.assigned_to.nil() ? null : current.assigned_to.getDisplayValue(),
        category: getStringValue(current.category),
        subcategory: getStringValue(current.subcategory),
        knowledge: getStringValue(current.knowledge),
        made_sla: getStringValue(current.made_sla),
        major_incident: getStringValue(current.major_incident)
    };

    for (var key in incidentEvent.additionalContext) {
        if (incidentEvent.additionalContext[key] === null) {
            delete incidentEvent.additionalContext[key];
        }
    }

    gs.info(JSON.stringify(incidentEvent, null, 2)); // Pretty print for logging only

    if (WEBHOOK_CONFIG.webhookUrl && WEBHOOK_CONFIG.webhookSecret) {
        callWebhook(incidentEvent, WEBHOOK_CONFIG);
    } else {
        gs.info('Webhook not configured.');
    }

})(current, previous);
```

If you registered your ServiceNow connection from the **Capability Providers** page, you now need to navigate to the DevOps Agent Space in which you want to investigate ServiceNow incident tickets, select Capabilities → Communications, and add the ServiceNow instance you registered. Everything is now set up: every incident whose caller is set to “Problem Administrator” (mirroring the permissions you gave the AWS DevOps Agent OAuth client) will trigger an incident response investigation in the configured DevOps Agent Space. You can test this by creating a new incident in ServiceNow and setting the Caller field of the incident to “Problem Administrator.”

![\[alt text not found\]](http://docs.aws.amazon.com/devopsagent/latest/userguide/images/4c7d24a85f88.png)


### ServiceNow ticket updates


During every triggered incident response investigation, your DevOps Agent posts updates on its key findings, root cause analyses, and mitigation plans into the originating ticket. The findings are posted as comments on the incident. Currently, only agent records of type `finding`, `cause`, `investigation_summary`, and `mitigation_summary` are posted, along with investigation status updates (for example, `AWS DevOps Agent started/finished its investigation`).

## Ticket routing and orchestration examples


### Scenario: Filtering which incidents are sent to a DevOps Agent Space


This is a simple scenario, but it requires some configuration in ServiceNow to create a field that tracks the incident source. For the purpose of this example, create a new Source (`u_source`) field using the ServiceNow form builder. This field records the incident source so you can route requests from a particular source to a DevOps Agent Space. Routing is accomplished by creating a ServiceNow Business Rule and, in the When to run tab, setting the “When” triggers and “Filter Conditions.” In this example, the filter conditions are set as follows:

![\[alt text not found\]](http://docs.aws.amazon.com/devopsagent/latest/userguide/images/fac7a186beee.png)


### Scenario: Routing incidents across multiple DevOps Agent Spaces


This example shows how to trigger an Investigation in DevOps Agent Space A when the urgency is `1`, the category is `Software`, or the service is `AWS`, and to trigger an Investigation in DevOps Agent Space B when the service is `AWS` and the source is `Dynatrace`.

This scenario can be accomplished in two ways: the webhook script itself can be updated to include this business logic, or you can use ServiceNow Business Rules. This example uses Business Rules, for transparency and to simplify debugging. Routing is accomplished by creating two ServiceNow Business Rules.
+ Create a Business Rule in ServiceNow for DevOps Agent Space A and create a condition using the condition builder to only send the events based on our specified condition.

![\[alt text not found\]](http://docs.aws.amazon.com/devopsagent/latest/userguide/images/bca2f3928bf0.png)

+ Next, create another Business Rule in ServiceNow for DevOps Agent Space B that triggers only when the service is AWS and the source is Dynatrace.

![\[alt text not found\]](http://docs.aws.amazon.com/devopsagent/latest/userguide/images/bc29e4db1a76.png)


Now, when you create a new Incident that matches the condition specified, it will either trigger an investigation on DevOps Agent Space A or DevOps Agent Space B, providing you with fine grained control over incident routing.

# Connecting Slack


You can configure AWS DevOps Agent to update a Slack channel you select with incident response investigation key findings, root cause analyses, and generated mitigation plans.

## Before you begin


Slack needs to be registered with DevOps Agent before it can be added to an Agent Space. To integrate AWS DevOps Agent with Slack, you must meet these requirements:
+ Have access to a Slack workspace with the ability to install and authorize third-party applications
+ Have identified the Slack channels where you want AWS DevOps Agent to send notifications

## Register Slack integration with AWS DevOps Agent


![\[alt text not found\]](http://docs.aws.amazon.com/devopsagent/latest/userguide/images/4034f56fad96.png)


1. From the **Capability Providers** page in the AWS DevOps Agent console, find **Slack** in the **Available** providers section under **Communication** and click **Register**.

1. You are redirected to Slack to authorize the AWS DevOps Agent application for your workspace.

1. On the Slack authorization page, choose a workspace from the dropdown. Install directly to individual workspaces, not at the organization level, and do not select an Enterprise Grid. Repeat the installation for each workspace your organization needs.

1. Review the requested scopes and click **Allow** to authorize the integration.

1. After authorization, you return to the AWS DevOps Agent console.

## Associate Slack with your DevOps Agent Space(s)


After registering Slack at the account level, you can associate it with your DevOps Agent Space(s):

1. From the **Capabilities** tab within your configured AgentSpace, navigate to **Communications** > **Slack**.

1. Select **Add Slack**

1. Enter the Channel ID of the Slack channel where you want notifications sent

1. Choose **Create** to complete the Slack configuration.

**Note**  
The agent’s bot user must be added to private channels before it can post messages.

**Important**  
Uninstalling the Slack app may prevent it from being reinstalled. Avoid uninstalling the Slack app.

# Invoking DevOps Agent through Webhook


Webhooks allow external systems to automatically trigger AWS DevOps Agent investigations. This enables integration with ticketing systems, monitoring tools, and other platforms that can send HTTP requests when incidents occur.

## Prerequisites


Before configuring webhook access, ensure you have:
+ An Agent Space configured in AWS DevOps Agent
+ Access to the AWS DevOps Agent console
+ The external system that will send webhook requests

## Webhook types


AWS DevOps Agent supports the following types of webhooks:
+ **Integration-specific webhooks** – Automatically generated when you configure third-party integrations like Dynatrace, Splunk, Datadog, New Relic, ServiceNow, or Slack. These webhooks are associated with the specific integration and use authentication methods determined by the integration type
+ **Generic webhooks** – Can be manually created for triggering investigations from any source not covered by a specific integration. Generic webhooks currently use **HMAC** authentication (bearer token not currently available).
+ **Grafana alert webhooks** – Grafana can send alert notifications directly to AWS DevOps Agent through webhook contact points. For setup instructions including a custom notification template, see [Connecting Grafana](connecting-telemetry-sources-connecting-grafana.md).

## Webhook authentication methods


The authentication method for your webhook depends on which integration it's associated with:

**HMAC authentication** – Used by:
+ Dynatrace integration webhooks
+ Generic webhooks (not linked to a specific third-party integration)

**Bearer token authentication** – Used by:
+ Splunk integration webhooks
+ Datadog integration webhooks
+ New Relic integration webhooks
+ ServiceNow integration webhooks
+ Slack integration webhooks

## Configuring webhook access


### Step 1: Navigate to the webhook configuration


1. Sign in to the AWS Management Console and navigate to the AWS DevOps Agent console

1. Select your Agent Space

1. Go to the **Capabilities** tab

1. In the **Webhook** section, click **Configure**

### Step 2: Generate webhook credentials


**For integration-specific webhooks:**

Webhooks are automatically generated when you complete the configuration of a third-party integration. The webhook endpoint URL and credentials are provided at the end of the integration setup process.

**For generic webhooks:**

1. Click **Generate webhook**

1. The system will generate an HMAC key pair

1. Securely store the generated key and secret—you won't be able to retrieve them again

1. Copy the webhook endpoint URL provided

### Step 3: Configure your external system


Use the webhook endpoint URL and credentials to configure your external system to send requests to AWS DevOps Agent. The specific configuration steps depend on your external system.

## Managing webhook credentials


**Removing credentials** – To delete webhook credentials, go to the webhook configuration section and click **Remove**. After removing credentials, the webhook endpoint will no longer accept requests until you generate new credentials.

**Regenerating credentials** – To generate new credentials, remove the existing credentials first, then generate a new key pair or token.

## Using the webhook


### Webhook request format


To trigger an investigation, your external system should send an HTTP POST request to the webhook endpoint URL.

**For Version 1 (HMAC authentication):**

Headers:
+ `Content-Type: application/json`
+ `x-amzn-event-signature: <HMAC signature>`
+ `x-amzn-event-timestamp: <ISO 8601 UTC timestamp, for example 2025-11-23T18:00:00.000Z>`

The HMAC signature is generated by concatenating the timestamp and the request body as `<timestamp>:<body>`, signing that string with your secret key using HMAC-SHA256, and base64-encoding the result.

**For Version 2 (Bearer token authentication):**

Headers:
+ `Content-Type: application/json`
+ `Authorization: Bearer <your-token>`

**Request body:**

The request body should include information about the incident:

```
{
  "title": "Incident title",
  "severity": "high",
  "affectedResources": ["resource-id-1", "resource-id-2"],
  "timestamp": "2025-11-23T18:00:00Z",
  "description": "Detailed incident description",
  "data": {
    "metadata": {
        "region": "us-east-1",
        "environment": "production"
    }
  }
}
```

### Example code


**Version 1 (HMAC authentication) - JavaScript:**

```
const crypto = require('crypto');

// Webhook configuration
const webhookUrl = 'https://your-webhook-endpoint.amazonaws.com/invoke';
const webhookSecret = 'your-webhook-secret-key';

// Incident data
const incidentData = {  
    eventType: 'incident',
    incidentId: 'incident-123',
    action: 'created',
    priority: "HIGH",
    title: 'High CPU usage on production server',
    description: 'High CPU usage on production server host ABC in AWS account 1234 region us-east-1',
    timestamp: new Date().toISOString(),
    service: 'MyTestService',
    data: {
      metadata: {
        region: 'us-east-1',
        environment: 'production'
      }
    }
};

// Convert data to JSON string
const payload = JSON.stringify(incidentData);
const timestamp = new Date().toISOString();
const hmac = crypto.createHmac("sha256", webhookSecret);
hmac.update(`${timestamp}:${payload}`, "utf8");
const signature = hmac.digest("base64");

// Send the request
fetch(webhookUrl, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'x-amzn-event-timestamp': timestamp,
    'x-amzn-event-signature': signature
  },
  body: payload
})
.then(res => {
  console.log(`Status Code: ${res.status}`);
  return res.text();
})
.then(data => {
  console.log('Response:', data);
})
.catch(error => {
  console.error('Error:', error);
});
```

**Version 1 (HMAC authentication) - cURL:**

```
#!/bin/bash

# Configuration
WEBHOOK_URL="https://event-ai.us-east-1.api.aws/webhook/generic/YOUR_WEBHOOK_ID"
SECRET="YOUR_WEBHOOK_SECRET"

# Create payload
TIMESTAMP=$(date -u +%Y-%m-%dT%H:%M:%S.000Z)
INCIDENT_ID="test-alert-$(date +%s)"

PAYLOAD=$(cat <<EOF
{
"eventType": "incident",
"incidentId": "$INCIDENT_ID",
"action": "created",
"priority": "HIGH",
"title": "Test Alert",
"description": "Test alert description",
"service": "TestService",
"timestamp": "$TIMESTAMP"
}
EOF
)

# Generate HMAC signature
SIGNATURE=$(echo -n "${TIMESTAMP}:${PAYLOAD}" | openssl dgst -sha256 -hmac "$SECRET" -binary | base64)

# Send webhook
curl -X POST "$WEBHOOK_URL" \
-H "Content-Type: application/json" \
-H "x-amzn-event-timestamp: $TIMESTAMP" \
-H "x-amzn-event-signature: $SIGNATURE" \
-d "$PAYLOAD"
```

**Version 2 (Bearer token authentication) - JavaScript:**

```
function sendEventToWebhook(webhookUrl, secret) {
  const timestamp = new Date().toISOString();
  
  const payload = {
    eventType: 'incident',
    incidentId: 'incident-123',
    action: 'created',
    priority: "HIGH",
    title: 'Test Alert',
    description: 'Test description',
    timestamp: timestamp,
    service: 'TestService',
    data: {}
  };

  fetch(webhookUrl, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-amzn-event-timestamp": timestamp,
      "Authorization": `Bearer ${secret}`,  // Fixed: template literal
    },
    body: JSON.stringify(payload),
  });
}
```

**Version 2 (Bearer token authentication) - cURL:**

```
#!/bin/bash

# Configuration
WEBHOOK_URL="https://event-ai.us-east-1.api.aws/webhook/generic/YOUR_WEBHOOK_ID"
SECRET="YOUR_WEBHOOK_SECRET"

# Create payload
TIMESTAMP=$(date -u +%Y-%m-%dT%H:%M:%S.000Z)
INCIDENT_ID="test-alert-$(date +%s)"

PAYLOAD=$(cat <<EOF
{
"eventType": "incident",
"incidentId": "$INCIDENT_ID",
"action": "created",
"priority": "HIGH",
"title": "Test Alert",
"description": "Test alert description",
"service": "TestService",
"timestamp": "$TIMESTAMP"
}
EOF
)

# Send webhook
curl -X POST "$WEBHOOK_URL" \
-H "Content-Type: application/json" \
-H "x-amzn-event-timestamp: $TIMESTAMP" \
-H "Authorization: Bearer $SECRET" \
-d "$PAYLOAD"
```

## Troubleshooting webhooks


### If you do not receive a 200


A 200 response with a message like `webhook received` indicates that authentication passed and the message has been queued for the system to verify and process. If you receive a 4xx response instead, there is most likely a problem with the authentication or the headers. Try sending the request manually using the cURL examples above to help debug authentication.

### If you receive a 200 but an investigation does not start


The most likely cause is a malformed payload.

1. Check that both the timestamp and the incident ID are updated and unique. Duplicate messages are deduplicated.

1. Check that the message is valid JSON

1. Check that the payload matches the request body format described above
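The checks above can be rolled into a small pre-send validation in your sending system. This is an illustrative sketch only, not part of the AWS DevOps Agent API; the field names follow the example payloads in this guide, and the duplicate tracking (`seenIncidentIds`) is a stand-in for whatever your system already records.

```javascript
// Illustrative pre-send check for a generic webhook payload.
// Field names follow the examples in this guide; adjust for your system.
function validateWebhookPayload(payloadString, seenIncidentIds) {
  let payload;
  try {
    payload = JSON.parse(payloadString);            // must be valid JSON
  } catch (e) {
    return { ok: false, reason: 'payload is not valid JSON' };
  }
  if (!payload.incidentId || !payload.timestamp) {  // required fields present
    return { ok: false, reason: 'missing incidentId or timestamp' };
  }
  if (seenIncidentIds.has(payload.incidentId)) {    // duplicates are deduplicated
    return { ok: false, reason: 'duplicate incidentId' };
  }
  seenIncidentIds.add(payload.incidentId);
  return { ok: true };
}
```

Running this before each send surfaces the same problems the troubleshooting steps describe, without waiting for a silent drop on the receiving side.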

### If you receive a 200 and investigation is immediately cancelled


Most likely you have hit your investigation limit for the month. Talk to your AWS contact to request a rate limit increase if appropriate.

## Related topics

+ [Creating an Agent Space](getting-started-with-aws-devops-agent-creating-an-agent-space.md)
+ [What is a DevOps Agent Web App?](about-aws-devops-agent-what-is-a-devops-agent-web-app.md)
+ [DevOps Agent IAM permissions](aws-devops-agent-security-devops-agent-iam-permissions.md)

# Integrating AWS DevOps Agent with Amazon EventBridge


You can integrate AWS DevOps Agent with your event-driven applications by using events that occur during investigation and mitigation lifecycles. AWS DevOps Agent sends events to Amazon EventBridge when the state of an investigation or mitigation changes. You can then create EventBridge rules that take action based on these events.

For example, you can create rules that perform the following actions:
+ Invoke an AWS Lambda function to process investigation results when an investigation completes.
+ Send an Amazon SNS notification when an investigation fails or times out.
+ Update a ticketing system when a new investigation is created.
+ Start an AWS Step Functions workflow when a mitigation action completes.

## How EventBridge routes AWS DevOps Agent events


AWS DevOps Agent sends events to the EventBridge default event bus. EventBridge then evaluates the events against the rules that you create. When an event matches a rule's event pattern, EventBridge sends the event to the specified targets.

The following diagram shows how EventBridge routes AWS DevOps Agent events.

![\[alt text not found\]](http://docs.aws.amazon.com/devopsagent/latest/userguide/images/eventbridge-integration-how-it-works.png)


1. AWS DevOps Agent sends an event to the EventBridge default event bus when an investigation or mitigation lifecycle state changes.

1. EventBridge evaluates the event against the rules that you created.

1. If the event matches a rule's event pattern, EventBridge sends the event to the targets specified in the rule.

## AWS DevOps Agent events


AWS DevOps Agent sends the following events to EventBridge. All events use the source `aws.aidevops`.

### Supported investigation events



| detail-type | Description | 
| --- | --- | 
| Investigation Created | An investigation was created in the agent space. | 
| Investigation Priority Updated | The priority of an investigation was changed. | 
| Investigation In Progress | An investigation started active analysis. | 
| Investigation Completed | An investigation finished successfully with findings. | 
| Investigation Failed | An investigation encountered an error and could not complete. | 
| Investigation Timed Out | An investigation exceeded the maximum allowed duration. | 
| Investigation Cancelled | An investigation was canceled before completion. | 
| Investigation Pending Triage | An investigation is awaiting triage before active analysis begins. | 
| Investigation Linked | An investigation was linked to a related incident or ticket. | 

### Supported mitigation events



| detail-type | Description | 
| --- | --- | 
| Mitigation In Progress | A mitigation action started. | 
| Mitigation Completed | A mitigation action finished successfully. | 
| Mitigation Failed | A mitigation action encountered an error and could not complete. | 
| Mitigation Timed Out | A mitigation action exceeded the maximum allowed duration. | 
| Mitigation Cancelled | A mitigation action was canceled before completion. | 

For detailed field descriptions and example events, see [AWS DevOps Agent events detail reference](integrating-devops-agent-into-event-driven-applications-using-amazon-eventbridge-devops-agent-events-detail-reference.md).

## Creating event patterns that match AWS DevOps Agent events


EventBridge rules use event patterns to select events and route them to targets. An event pattern matches the structure of the events that it handles. You create event patterns to filter AWS DevOps Agent events based on the event fields.

The following examples show event patterns for common use cases.

**Match all AWS DevOps Agent events**

The following event pattern matches all events from AWS DevOps Agent.

```
{
  "source": ["aws.aidevops"]
}
```

**Match only investigation events**

The following event pattern uses a prefix match to select only investigation lifecycle events.

```
{
  "source": ["aws.aidevops"],
  "detail-type": [{"prefix": "Investigation"}]
}
```

**Match only completion and failure events**

The following event pattern matches events for completed or failed investigations and mitigations.

```
{
  "source": ["aws.aidevops"],
  "detail-type": [
    "Investigation Completed",
    "Investigation Failed",
    "Mitigation Completed",
    "Mitigation Failed"
  ]
}
```

**Match events for a specific agent space**

The following event pattern matches events from a specific agent space.

```
{
  "source": ["aws.aidevops"],
  "detail": {
    "metadata": {
      "agent_space_id": ["your-agent-space-id"]
    }
  }
}
```

For more information about event patterns, see [Amazon EventBridge event patterns](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-patterns.html) in the *Amazon EventBridge User Guide*.
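To see how these patterns behave, the following sketch models the small subset of EventBridge matching used above: a top-level field matches when the event's value equals one of the listed strings or starts with a `{"prefix": ...}` value. This is illustrative only, it is not the full EventBridge matching algorithm (for example, it ignores nested `detail` patterns like the agent space example).

```javascript
// Minimal model of EventBridge matching for the flat patterns shown above.
// Not the full EventBridge semantics; top-level string fields only.
function fieldMatches(value, matchers) {
  return matchers.some(m =>
    typeof m === 'object' && m.prefix !== undefined
      ? typeof value === 'string' && value.startsWith(m.prefix)
      : value === m
  );
}

function matchesPattern(event, pattern) {
  return Object.entries(pattern).every(([key, matchers]) =>
    fieldMatches(event[key], matchers)
  );
}

const investigationOnly = {
  source: ['aws.aidevops'],
  'detail-type': [{ prefix: 'Investigation' }]
};

// true: the prefix matcher selects any investigation lifecycle event
matchesPattern(
  { source: 'aws.aidevops', 'detail-type': 'Investigation Completed' },
  investigationOnly
);
```

With this model you can check that, for example, a `Mitigation Completed` event fails the investigation-only pattern because its `detail-type` does not start with `Investigation`.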

## Amazon EventBridge permissions


AWS DevOps Agent doesn't require additional permissions to deliver events to EventBridge. The events are sent to the default event bus automatically.

Depending on the targets that you configure for your EventBridge rules, you might need to add specific permissions. For more information about the permissions required for targets, see [Using resource-based policies for Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-use-resource-based.html) in the *Amazon EventBridge User Guide*.

## Additional EventBridge resources


For more information about EventBridge concepts and configuration, see the following topics in the *Amazon EventBridge User Guide*:
+ [EventBridge event buses](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-bus.html)
+ [EventBridge events](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-events.html)
+ [EventBridge event patterns](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-patterns.html)
+ [EventBridge rules](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-rules.html)
+ [EventBridge targets](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-targets.html)

# AWS DevOps Agent events detail reference


Events from AWS services have common metadata fields, including `source`, `detail-type`, `account`, `region`, and `time`. These events also contain a `detail` field with data specific to the service. For AWS DevOps Agent events, the `source` is always `aws.aidevops` and the `detail-type` identifies the specific event.

## Investigation events


The following `detail-type` values identify investigation events:
+ `Investigation Created`
+ `Investigation Priority Updated`
+ `Investigation In Progress`
+ `Investigation Completed`
+ `Investigation Failed`
+ `Investigation Timed Out`
+ `Investigation Cancelled`
+ `Investigation Pending Triage`
+ `Investigation Linked`

The `source` and `detail-type` fields are included below because they contain specific values for AWS DevOps Agent events. For definitions of the other metadata fields that are included in all events, see [Event structure](https://docs.aws.amazon.com/eventbridge/latest/ref/overiew-event-structure.html) in the *Amazon EventBridge Events Reference*.

The following is the JSON structure for investigation events.

```
{
  . . .,
  "detail-type" : "string",
  "source" : "aws.aidevops",
  . . .,
  "detail" : {
    "version" : "string",
    "metadata" : {
      "agent_space_id" : "string",
      "task_id" : "string",
      "execution_id" : "string"
    },
    "data" : {
      "task_type" : "string",
      "priority" : "string",
      "status" : "string",
      "created_at" : "string",
      "updated_at" : "string",
      "summary_record_id" : "string"
    }
  }
}
```

**`detail-type`** Identifies the type of event. For investigation events, this is one of the event names listed previously.

**`source`** Identifies the service that generated the event. For AWS DevOps Agent events, this value is `aws.aidevops`.

**`detail`** A JSON object that contains event-specific data. The `detail` object includes the following fields:
+ `version` (string) – The schema version of the event detail. Currently `1.0.0`.
+ `metadata.agent_space_id` (string) – The unique identifier of the agent space where the event originated.
+ `metadata.task_id` (string) – The unique identifier of the task.
+ `metadata.execution_id` (string) – The unique identifier of the execution run. Present when an execution has been assigned to the investigation.
+ `data.task_type` (string) – The type of task. Value: `INVESTIGATION`.
+ `data.priority` (string) – The priority level. Values: `CRITICAL`, `HIGH`, `MEDIUM`, `LOW`, `MINIMAL`.
+ `data.status` (string) – The current status. Values: `PENDING_START`, `IN_PROGRESS`, `COMPLETED`, `FAILED`, `TIMED_OUT`, `CANCELLED`, `PENDING_TRIAGE`, `LINKED`.
+ `data.created_at` (string) – ISO 8601 timestamp when the task was created.
+ `data.updated_at` (string) – ISO 8601 timestamp when the task was last updated.
+ `data.summary_record_id` (string) – The identifier of the summary record containing investigation findings. Included when a summary is generated for the completed investigation. You can retrieve the summary content through the AWS DevOps Agent API by using this identifier to look up the journal record with a record type of `investigation_summary_md`.
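
An EventBridge rule can select these events by matching on the `source` and `detail-type` fields. For example, the following event pattern (standard EventBridge pattern syntax) matches completed and failed investigations; adjust the `detail-type` list to the events that you want to route:

```
{
  "source": ["aws.aidevops"],
  "detail-type": ["Investigation Completed", "Investigation Failed"]
}
```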

**Example: Investigation Completed event**

```
{
  "version": "0",
  "id": "12345678-1234-1234-1234-123456789015",
  "detail-type": "Investigation Completed",
  "source": "aws.aidevops",
  "account": "123456789012",
  "time": "2026-03-12T18:10:00Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:aidevops:us-east-1:123456789012:agentspace/8f6187a7-0388-4926-8217-3a0fe32f757c"
  ],
  "detail": {
    "version": "1.0.0",
    "metadata": {
      "agent_space_id": "8f6187a7-0388-4926-8217-3a0fe32f757c",
      "task_id": "a1b2c3d4-5678-90ab-cdef-example11111",
      "execution_id": "b2c3d4e5-6789-01ab-cdef-example22222"
    },
    "data": {
      "task_type": "INVESTIGATION",
      "priority": "CRITICAL",
      "status": "COMPLETED",
      "created_at": "2026-03-12T18:00:00Z",
      "updated_at": "2026-03-12T18:10:00Z",
      "summary_record_id": "d4e5f6g7-6789-01ab-cdef-example44444"
    }
  }
}
```

**Example: Investigation Failed event**

```
{
  "version": "0",
  "id": "12345678-1234-1234-1234-123456789016",
  "detail-type": "Investigation Failed",
  "source": "aws.aidevops",
  "account": "123456789012",
  "time": "2026-03-12T18:10:00Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:aidevops:us-east-1:123456789012:agentspace/8f6187a7-0388-4926-8217-3a0fe32f757c"
  ],
  "detail": {
    "version": "1.0.0",
    "metadata": {
      "agent_space_id": "8f6187a7-0388-4926-8217-3a0fe32f757c",
      "task_id": "a1b2c3d4-5678-90ab-cdef-example11111",
      "execution_id": "b2c3d4e5-6789-01ab-cdef-example22222"
    },
    "data": {
      "task_type": "INVESTIGATION",
      "priority": "CRITICAL",
      "status": "FAILED",
      "created_at": "2026-03-12T18:00:00Z",
      "updated_at": "2026-03-12T18:10:00Z"
    }
  }
}
```

## Mitigation events


The following `detail-type` values identify mitigation events:
+ `Mitigation In Progress`
+ `Mitigation Completed`
+ `Mitigation Failed`
+ `Mitigation Timed Out`
+ `Mitigation Cancelled`

The `source` and `detail-type` fields are included below because they contain specific values for AWS DevOps Agent events. For definitions of the other metadata fields that are included in all events, see [Event structure](https://docs.aws.amazon.com/eventbridge/latest/ref/overiew-event-structure.html) in the *Amazon EventBridge Events Reference*.

The following is the JSON structure for mitigation events.

```
{
  . . .,
  "detail-type" : "string",
  "source" : "aws.aidevops",
  . . .,
  "detail" : {
    "version" : "string",
    "metadata" : {
      "agent_space_id" : "string",
      "task_id" : "string",
      "execution_id" : "string"
    },
    "data" : {
      "task_type" : "string",
      "priority" : "string",
      "status" : "string",
      "created_at" : "string",
      "updated_at" : "string",
      "summary_record_id" : "string"
    }
  }
}
```

**`detail-type`** Identifies the type of event. For mitigation events, this is one of the event names listed previously.

**`source`** Identifies the service that generated the event. For AWS DevOps Agent events, this value is `aws.aidevops`.

**`detail`** A JSON object that contains event-specific data. The `detail` object includes the following fields:
+ `version` (string) – The schema version of the event detail. Currently `1.0.0`.
+ `metadata.agent_space_id` (string) – The unique identifier of the agent space where the event originated.
+ `metadata.task_id` (string) – The unique identifier of the task.
+ `metadata.execution_id` (string) – The unique identifier of the execution run. Present when an execution has been assigned to the mitigation.
+ `data.task_type` (string) – The type of task. Value: `INVESTIGATION`.
+ `data.priority` (string) – The priority level. Values: `CRITICAL`, `HIGH`, `MEDIUM`, `LOW`, `MINIMAL`.
+ `data.status` (string) – The current status. Values: `IN_PROGRESS`, `COMPLETED`, `FAILED`, `TIMED_OUT`, `CANCELLED`.
+ `data.created_at` (string) – ISO 8601 timestamp when the task was created.
+ `data.updated_at` (string) – ISO 8601 timestamp when the task was last updated.
+ `data.summary_record_id` (string) – The identifier of the summary record containing mitigation findings. Included when a summary is generated for the completed mitigation. You can retrieve the summary content through the AWS DevOps Agent API by using this identifier to look up the journal record with a record type of `mitigation_summary_md`.
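
A target that consumes these events can branch on the `detail` fields. The following Python handler is a hypothetical sketch (for example, the body of an AWS Lambda function); it returns the summary record identifier for a completed task and `None` otherwise:

```python
def handle_agent_event(event):
    """Route an AWS DevOps Agent lifecycle event (illustrative handler).

    Returns the summary record ID for completed tasks, or None when the
    event is not from the agent or the task did not complete.
    """
    if event.get("source") != "aws.aidevops":
        return None
    data = event.get("detail", {}).get("data", {})
    if data.get("status") == "COMPLETED":
        return data.get("summary_record_id")
    return None
```

You can then use the returned identifier to look up the corresponding journal record through the AWS DevOps Agent API, as described above.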

**Example: Mitigation Completed event**

```
{
  "version": "0",
  "id": "12345678-1234-1234-1234-12345678901c",
  "detail-type": "Mitigation Completed",
  "source": "aws.aidevops",
  "account": "123456789012",
  "time": "2026-03-12T18:20:00Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:aidevops:us-east-1:123456789012:agentspace/8f6187a7-0388-4926-8217-3a0fe32f757c"
  ],
  "detail": {
    "version": "1.0.0",
    "metadata": {
      "agent_space_id": "8f6187a7-0388-4926-8217-3a0fe32f757c",
      "task_id": "a1b2c3d4-5678-90ab-cdef-example11111",
      "execution_id": "c3d4e5f6-7890-12ab-cdef-example33333"
    },
    "data": {
      "task_type": "INVESTIGATION",
      "priority": "CRITICAL",
      "status": "COMPLETED",
      "created_at": "2026-03-12T18:00:00Z",
      "updated_at": "2026-03-12T18:20:00Z",
      "summary_record_id": "e5f6g7h8-7890-12ab-cdef-example55555"
    }
  }
}
```

**Example: Mitigation Failed event**

```
{
  "version": "0",
  "id": "12345678-1234-1234-1234-12345678901d",
  "detail-type": "Mitigation Failed",
  "source": "aws.aidevops",
  "account": "123456789012",
  "time": "2026-03-12T18:20:00Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:aidevops:us-east-1:123456789012:agentspace/8f6187a7-0388-4926-8217-3a0fe32f757c"
  ],
  "detail": {
    "version": "1.0.0",
    "metadata": {
      "agent_space_id": "8f6187a7-0388-4926-8217-3a0fe32f757c",
      "task_id": "a1b2c3d4-5678-90ab-cdef-example11111",
      "execution_id": "c3d4e5f6-7890-12ab-cdef-example33333"
    },
    "data": {
      "task_type": "INVESTIGATION",
      "priority": "CRITICAL",
      "status": "FAILED",
      "created_at": "2026-03-12T18:00:00Z",
      "updated_at": "2026-03-12T18:20:00Z"
    }
  }
}
```

# Vended Logs and Metrics


You can monitor your agent spaces and service operations by using vended Amazon CloudWatch metrics and logs. This topic describes the CloudWatch metrics that the AWS DevOps Agent automatically publishes to your account and the vended logs that you can configure for delivery to your preferred destinations.

## Vended CloudWatch metrics


AWS DevOps Agent automatically publishes metrics to Amazon CloudWatch in your account. These metrics are available without any configuration. You can use them to monitor usage, track operational activity, and create alarms.

### Service-Linked Role


To publish Amazon CloudWatch metrics in your account for this service, AWS DevOps Agent automatically creates the [service-linked role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create-service-linked-role.html) **AWSServiceRoleForAIDevOps** for you. If the IAM role that invokes the API doesn't have the appropriate permissions, resource creation fails with an `InvalidParameterException`.

**Important**  
Customers who created their agent space before March 13, 2026, must manually create the **AWSServiceRoleForAIDevOps** service-linked role to have CloudWatch metrics for AWS DevOps Agent published in their account.

### Manually Create Service-Linked Role (For existing customers)


Do one of the following:
+ In the IAM console, create the **AWSServiceRoleForAIDevOps** role under the **AWS DevOps Agent** service.
+ From the AWS CLI, run the following command:

```
aws iam create-service-linked-role --aws-service-name aidevops.amazonaws.com
```

### Namespace


All metrics are published under the `AWS/AIDevOps` namespace.

### Dimensions


All metrics include the following dimension.


| Dimension | Description | 
| --- | --- | 
| AgentSpaceUUID | The unique identifier of the agent space. To aggregate metrics across all agent spaces in your account, use CloudWatch math expressions or omit the dimension filter. | 

### Metrics reference



| Metric name | Description | Unit | Publishing frequency | Useful statistics | 
| --- | --- | --- | --- | --- | 
| ConsumedChatRequests | The number of chat requests that an agent space consumed. To get the total count for your account, use the SUM statistic across all AgentSpaceUUID dimensions. | Count | Every 5 minutes | Sum, Average | 
| ConsumedInvestigationTime | The time spent running investigations in an agent space. | Seconds | Every 5 minutes | Sum, Average, Maximum | 
| ConsumedEvaluationTime | The time spent running evaluations in an agent space. | Seconds | Every 5 minutes | Sum, Average, Maximum | 
| TopologyCompletionCount | The number of topology processing completions. AWS DevOps Agent emits this metric when a topology finishes processing, whether from initial creation during onboarding, a manual update, or a scheduled daily refresh. | Count | Event-driven (emitted on each completion) | Sum, SampleCount | 

### Viewing metrics in the CloudWatch console


1. Open the [CloudWatch console](https://console.aws.amazon.com/cloudwatch/).

1. In the navigation pane, choose **Metrics**, and then choose **All metrics**.

1. Choose the **AWS/AIDevOps** namespace.

1. Choose **By AgentSpace** to view metrics for your agent spaces.

**Note**  
You can create CloudWatch alarms on these metrics to receive notifications when usage exceeds a threshold. For example, create an alarm on `ConsumedChatRequests` to monitor chat request consumption.
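
For example, the following AWS CLI command creates such an alarm; the agent space ID and threshold are placeholders to adapt to your account:

```
aws cloudwatch put-metric-alarm \
    --alarm-name aidevops-chat-request-usage \
    --namespace AWS/AIDevOps \
    --metric-name ConsumedChatRequests \
    --dimensions Name=AgentSpaceUUID,Value=8f6187a7-0388-4926-8217-3a0fe32f757c \
    --statistic Sum \
    --period 300 \
    --evaluation-periods 1 \
    --threshold 1000 \
    --comparison-operator GreaterThanThreshold
```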

## Prerequisites


Before you configure log delivery, make sure that you have the following:
+ An active AWS account with access to the AWS DevOps Agent console
+ An IAM principal with permissions for CloudWatch Logs delivery APIs
+ (Optional) An Amazon S3 bucket or Amazon Data Firehose delivery stream, if you plan to use those as log destinations

## Vended logs


AWS DevOps Agent supports vended logs that provide visibility into events that your agent spaces and service registrations process. Vended logs use the Amazon CloudWatch Logs infrastructure to deliver logs to your preferred destination.

To use vended logs, you must configure a delivery destination. The following destinations are supported:
+ **Amazon CloudWatch Logs** – A log group in your account
+ **Amazon S3** – An S3 bucket in your account
+ **Amazon Data Firehose** – A Firehose delivery stream in your account

### Supported log types


A single log type is supported: `APPLICATION_LOGS`. This log type covers all operational events that the service emits.

### Log event types


The following table summarizes the events that AWS DevOps Agent logs.


| Event | Description | Log level | 
| --- | --- | --- | 
| Agent inbound event received | An agent is triggered by an integrated source and receives an inbound event (for example, a PagerDuty incident event). | INFO | 
| Agent inbound event dropped | An inbound event was dropped before the agent processed it. The log includes the reason (for example, malformed data). | TBD | 
| Agent outbound communication failure | An outbound communication to a third-party integration failed. The log includes the task ID and destination identifier (for example, an authentication error). | TBD | 
| Topology creation queued | A topology creation job was queued for processing. | INFO | 
| Topology creation started | A topology creation job began processing. | INFO | 
| Topology creation finished | A topology creation job completed processing. This event applies to initial creation, updates, and daily refreshes. | INFO | 
| Resource discovery failed | Resource discovery during topology creation encountered a failure. | ERROR | 
| Service registration failed | Service registration encountered an unrecoverable failure. | ERROR | 
| Webhook validation failed | A webhook received by the DevOps agent didn't match the expected schema. | ERROR | 
| Association validation status updated | The validation status of an agent space association (typically a primary/secondary account) changed from valid to invalid, or vice versa (for example, because of a malformed role that the service can't assume). | ERROR/INFO | 

### Permissions


AWS DevOps Agent uses [CloudWatch vended logs (V2 permissions)](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-vended-logs-permissions-V2.html) to deliver logs. To set up log delivery, the IAM role that configures the delivery must have the following permissions:
+ `aidevops:AllowVendedLogDeliveryForResource` – Required to allow log delivery for the agent space resource.
+ Permissions for the CloudWatch Logs delivery APIs (`logs:PutDeliverySource`, `logs:PutDeliveryDestination`, `logs:CreateDelivery`, and related operations).
+ Permissions specific to your chosen delivery destination.
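
The following is an abbreviated policy sketch for illustration only; use the complete, destination-specific policies from the linked topics, and scope `Resource` to your own ARNs:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "aidevops:AllowVendedLogDeliveryForResource",
        "logs:PutDeliverySource",
        "logs:PutDeliveryDestination",
        "logs:CreateDelivery"
      ],
      "Resource": "*"
    }
  ]
}
```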

For the full IAM policy that is required for each destination type, see the following topics in the *Amazon CloudWatch Logs User Guide*:
+ [Logs sent to CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-infrastructure-V2-CloudWatchLogs.html)
+ [Logs sent to Amazon S3](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-infrastructure-V2-S3.html)
+ [Logs sent to Firehose](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-infrastructure-V2-Firehose.html)

### Configure log delivery (console)


AWS DevOps Agent provides two locations in the AWS Management Console to configure log delivery:
+ **Service Registration settings page** – Configure log delivery for service-level events. These logs use the service ARN (`arn:aws:aidevops:<region>:<account-id>:service/<account-id>`) as the resource.
+ **Agent Space page** – Configure log delivery for events that are specific to an individual agent space. These logs use the agent space ARN (`arn:aws:aidevops:<region>:<account-id>:agentspace/<agent-space-id>`) as the resource.

#### To configure log delivery for a service registration


1. Open the AWS DevOps Agent console in the AWS Management Console.

1. In the navigation pane, choose **Settings**.

1. In the **Capability Providers** **>** **Logs** tab, choose **Configure**.

1. For **Destination type**, choose one of the following:
   + **CloudWatch Logs** – Select or create a log group.
   + **Amazon S3** – Enter the S3 bucket ARN.
   + **Amazon Data Firehose** – Select or create a Firehose delivery stream.

1. For **Additional settings** – *optional*, you can specify the following options:

   1. For **Field selection**, select the log field names that you want to deliver to your destination.

   1. (Amazon S3 only) For **Partitioning**, specify the path to partition your log file data.

   1. (Amazon S3 only) For **Hive-compatible file format**, you can select the checkbox to use Hive-compatible S3 paths. This helps simplify loading new data into your Hive-compatible tools.

   1. For **Output format**, specify your preferred format.

   1. For **Field delimiter**, specify how to separate log fields.

1. Choose **Save**.

1. Verify that the delivery status shows **Active**.

#### To configure log delivery for an agent space


1. Open the AWS DevOps Agent console in the AWS Management Console.

1. Choose the agent space that you want to configure.

1. In the **Configuration** tab, choose **Configure**.

1. For **Destination type**, choose one of the following:
   + **CloudWatch Logs** – Select or create a log group.
   + **Amazon S3** – Enter the S3 bucket ARN.
   + **Amazon Data Firehose** – Select or create a Firehose delivery stream.

1. For **Additional settings** – *optional*, you can specify the following options:

   1. For **Field selection**, select the log field names that you want to deliver to your destination.

   1. (Amazon S3 only) For **Partitioning**, specify the path to partition your log file data.

   1. (Amazon S3 only) For **Hive-compatible file format**, you can select the checkbox to use Hive-compatible S3 paths. This helps simplify loading new data into your Hive-compatible tools.

   1. For **Output format**, specify your preferred format.

   1. For **Field delimiter**, specify how to separate log fields.

1. Choose **Save**.

1. Verify that the delivery status shows **Active**.

### Configure log delivery (CloudWatch API)


You can also use the CloudWatch Logs API to configure log delivery programmatically. A working log delivery consists of three elements:
+ A **DeliverySource** – Represents the AWS DevOps Agent space resource that generates the logs.
+ A **DeliveryDestination** – Represents the destination where logs are written.
+ A **Delivery** – Connects a delivery source to a delivery destination.

#### Step 1: Create a delivery source


Use the [PutDeliverySource](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliverySource.html) operation to create a delivery source. Pass the ARN of your AWS DevOps Agent space resource and specify `APPLICATION_LOGS` as the log type.

The following example creates a delivery source for an agent space:

```
{
    "name": "my-agent-space-delivery-source",
    "resourceArn": "arn:aws:aidevops:us-east-1:123456789012:agentspace/my-agent-space-id",
    "logType": "APPLICATION_LOGS"
}
```

The following example creates a delivery source for the service:

```
{
    "name": "my-service-delivery-source",
    "resourceArn": "arn:aws:aidevops:us-east-1:123456789012:service",
    "logType": "APPLICATION_LOGS"
}
```

#### Step 2: Create a delivery destination


Use the [PutDeliveryDestination](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestination.html) operation to configure where logs are stored. You can choose Amazon CloudWatch Logs, Amazon S3, or Amazon Data Firehose.

The following example creates a CloudWatch Logs destination:

```
{
    "name": "my-cwl-destination",
    "deliveryDestinationConfiguration": {
        "destinationResourceArn": "arn:aws:logs:us-east-1:123456789012:log-group:/aws/aidevops/my-agent-space"
    },
    "outputFormat": "json"
}
```

The following example creates an Amazon S3 destination:

```
{
    "name": "my-s3-destination",
    "deliveryDestinationConfiguration": {
        "destinationResourceArn": "arn:aws:s3:::my-aidevops-logs-bucket"
    },
    "outputFormat": "json"
}
```

The following example creates an Amazon Data Firehose destination:

```
{
    "name": "my-firehose-destination",
    "deliveryDestinationConfiguration": {
        "destinationResourceArn": "arn:aws:firehose:us-east-1:123456789012:deliverystream/my-aidevops-log-stream"
    },
    "outputFormat": "json"
}
```

**Note**  
If you deliver logs cross-account, you must use [PutDeliveryDestinationPolicy](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestinationPolicy.html) in the destination account to authorize the delivery.


#### Step 3: Create a delivery


Use the [CreateDelivery](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_CreateDelivery.html) operation to link the delivery source to the delivery destination.

```
{
    "deliverySourceName": "my-agent-space-delivery-source",
    "deliveryDestinationArn": "arn:aws:logs:us-east-1:123456789012:delivery-destination:my-cwl-destination"
}
```
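
The three steps map directly onto AWS CLI commands. The following sequence uses the example names from the preceding steps and assumes a CloudWatch Logs destination:

```
aws logs put-delivery-source \
    --name my-agent-space-delivery-source \
    --resource-arn arn:aws:aidevops:us-east-1:123456789012:agentspace/my-agent-space-id \
    --log-type APPLICATION_LOGS

aws logs put-delivery-destination \
    --name my-cwl-destination \
    --delivery-destination-configuration destinationResourceArn=arn:aws:logs:us-east-1:123456789012:log-group:/aws/aidevops/my-agent-space

aws logs create-delivery \
    --delivery-source-name my-agent-space-delivery-source \
    --delivery-destination-arn arn:aws:logs:us-east-1:123456789012:delivery-destination:my-cwl-destination
```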

#### AWS CloudFormation


You can also configure log delivery by using AWS CloudFormation with the following resources:
+ [AWS::Logs::DeliverySource](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-deliverysource.html)
+ [AWS::Logs::DeliveryDestination](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-deliverydestination.html)
+ [AWS::Logs::Delivery](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-delivery.html)

Set `ResourceArn` to the AWS DevOps Agent agent space or service ARN, and set `LogType` to `APPLICATION_LOGS`.
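
The following template fragment is a minimal sketch that reuses the example names from the API steps; see the linked resource references for the full set of supported properties:

```
Resources:
  AgentSpaceDeliverySource:
    Type: AWS::Logs::DeliverySource
    Properties:
      Name: my-agent-space-delivery-source
      ResourceArn: arn:aws:aidevops:us-east-1:123456789012:agentspace/my-agent-space-id
      LogType: APPLICATION_LOGS

  LogsDeliveryDestination:
    Type: AWS::Logs::DeliveryDestination
    Properties:
      Name: my-cwl-destination
      DestinationResourceArn: arn:aws:logs:us-east-1:123456789012:log-group:/aws/aidevops/my-agent-space

  LogsDelivery:
    Type: AWS::Logs::Delivery
    DependsOn:
      - AgentSpaceDeliverySource
      - LogsDeliveryDestination
    Properties:
      DeliverySourceName: my-agent-space-delivery-source
      DeliveryDestinationArn: !GetAtt LogsDeliveryDestination.Arn
```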

### Log schema reference


AWS DevOps Agent uses a shared log schema across all event types. Not every log event uses every field.

The following table describes the fields in the log schema.


| Field | Type | Description | 
| --- | --- | --- | 
| event_timestamp | Long | Unix timestamp of when the event occurred. | 
| resource_arn | String | ARN of the resource that generated the event. | 
| optional_account_id | String | AWS account ID associated with the log. | 
| optional_level | String | Log level: INFO, WARN, ERROR. | 
| optional_agent_space_id | String | Identifier of the agent space. | 
| optional_association_id | String | Association identifier for the log. | 
| optional_status | String | Status of the topology operation. | 
| optional_webhook_id | String | Webhook identifier. | 
| optional_mcp_endpoint_url | String | MCP server endpoint URL. | 
| optional_service_type | String | Type of the service: DYNATRACE, DATADOG, GITHUB, SLACK, SERVICENOW. | 
| optional_service_endpoint_url | String | Endpoint URL for third-party integrations. | 
| optional_service_id | String | Identifier of the source. | 
| request_id | String | Request identifier for correlating with AWS CloudTrail or support tickets. | 
| optional_operation | String | Name of the operation that was performed. | 
| optional_task_type | String | Agent backlog task type: INVESTIGATION or EVALUATION. | 
| optional_task_id | String | Agent backlog task identifier. | 
| optional_reference | String | Reference from an agent task (for example, a Jira ticket). | 
| optional_error_type | String | Error type. | 
| optional_error_message | String | Error description when an operation fails. | 
| optional_details | String (JSON) | Service-specific event payload that contains operation parameters and results. | 
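
Because every event shares this schema, downstream processing can stay generic. The following Python sketch extracts the main correlation fields from one JSON log record; the field names follow the table above, and any sample record you feed it is illustrative rather than actual service output:

```python
import json

def summarize_log_event(line):
    """Pull the key correlation fields from one vended log record.

    Optional fields that are absent in a given record come back as None,
    except the log level, which defaults to INFO.
    """
    record = json.loads(line)
    return {
        "request_id": record.get("request_id"),
        "level": record.get("optional_level", "INFO"),
        "operation": record.get("optional_operation"),
        "error": record.get("optional_error_message"),
    }
```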

### Manage and disable log delivery


You can modify or remove log delivery at any time from the AWS DevOps Agent console in the AWS Management Console or by using the CloudWatch Logs API.

#### Manage log delivery (console)


1. Open the AWS DevOps Agent console in the AWS Management Console.

1. Navigate to the **Settings** page (for service-level logs) or the specific **Agent Space** page (for Agent Space-level logs).

1. In the **Configuration** tab (for Agent Space-level logs) or **Capability Providers** **>** **Logs** tab (for service-level logs), choose the delivery to modify.

1. Update the configuration as needed and choose **Save**.

**Note:** You can't change the destination type of an existing delivery. To change the destination type, delete the current delivery and create a new one.

#### Disable log delivery (console)


1. Open the AWS DevOps Agent console in the AWS Management Console.

1. Navigate to the **Settings** page (for service-level logs) or the specific **Agent Space** page (for Agent Space-level logs).

1. In the **Configuration** tab (for Agent Space-level logs) or **Capability Providers** **>** **Logs** tab (for service-level logs), select the delivery to remove.

1. Choose **Delete** and confirm.

#### Disable log delivery (API)


To remove a log delivery by using the API, delete the resources in the following order:

1. Delete the delivery by using [DeleteDelivery](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_DeleteDelivery.html).

1. Delete the delivery source by using [DeleteDeliverySource](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_DeleteDeliverySource.html).

1. (Optional) If the delivery destination is no longer needed, delete it by using [DeleteDeliveryDestination](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_DeleteDeliveryDestination.html).

**Important**  
You are responsible for removing log delivery resources after you delete the resource that generates the logs (for example, after you delete an agent space). If you don't remove these resources, orphaned delivery configurations might remain.

## Pricing


AWS DevOps Agent doesn't charge for enabling vended logs. However, you can incur charges for delivery, ingestion, storage, or access, depending on the log delivery destination that you select. For pricing details, see **Vended Logs** on the **Logs** tab at [Amazon CloudWatch Pricing](https://aws.amazon.com/cloudwatch/pricing/).

For destination-specific pricing, see the following:
+ [Amazon CloudWatch Logs Pricing](https://aws.amazon.com/cloudwatch/pricing/)
+ [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/)
+ [Amazon Data Firehose Pricing](https://aws.amazon.com/kinesis/data-firehose/pricing/)

# Connecting to privately hosted tools


## Private connections overview


AWS DevOps Agent can be extended with custom Model Context Protocol (MCP) tools and other integrations that give the agent access to internal systems such as private package registries, self-hosted observability platforms, internal documentation APIs, and source control instances (see: [Configuring capabilities for AWS DevOps Agent](configuring-capabilities-for-aws-devops-agent.md)). These services often run inside an [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide) with restricted or no public internet access, which means AWS DevOps Agent can't reach them by default.

Private connections for AWS DevOps Agent let you securely connect your Agent Space to services running in your VPC without exposing them to the public internet. Private connections work with any integration that needs to reach a private endpoint, including MCP servers, self-hosted Grafana or Splunk instances, and source control systems like GitHub Enterprise Server and GitLab Self-Managed.

**Note**  
If your privately hosted tools make outbound requests to the AWS DevOps Agent from within your VPC, this traffic can also be secured by using a VPC endpoint so it stays within the AWS network. For example, this can be used with tools that trigger the DevOps Agent via webhook events (see: [Invoking DevOps Agent through Webhook](configuring-capabilities-for-aws-devops-agent-invoking-devops-agent-through-webhook.md)). For more information, see [VPC Endpoints (AWS PrivateLink)](aws-devops-agent-security-vpc-endpoints-aws-privatelink.md).

### How private connections work


A private connection creates a secure network path between AWS DevOps Agent and a target resource in your VPC. Under the hood, AWS DevOps Agent uses Amazon [VPC Lattice](https://docs.aws.amazon.com/vpc-lattice/latest/ug/) to establish this secure private connectivity path. VPC Lattice is an application networking service that lets you connect, secure, and monitor communication between applications across VPCs, accounts, and compute types, without managing the underlying network infrastructure.

When you create a private connection, the following occurs:
+ You provide the VPC, subnets, and (optionally) security groups that have network connectivity to your target service.
+ AWS DevOps Agent creates a service-managed [resource gateway](https://docs.aws.amazon.com/vpc/latest/privatelink/resource-gateway.html) and provisions its elastic network interfaces (ENIs) in the subnets you specified.
+ The agent uses the resource gateway to route traffic to your target service's IP address or DNS name over the private network path.

The resource gateway is fully managed by AWS DevOps Agent and appears as a read-only resource in your account (named `aidevops-{your-private-connection-name}`). You don't need to configure or maintain it. The only resources created in your VPC are ENIs in the subnets you specify. These ENIs serve as the entry point for private traffic and are managed entirely by the service. They don't accept inbound connections from the internet, and you retain full control over their traffic through your own security groups.

### Security


Private connections are designed with multiple layers of security:
+ **No public internet exposure** – All traffic between AWS DevOps Agent and your target service stays on the AWS network. Your service never needs a public IP address or internet gateway.
+ **Service-controlled resource gateway** – The service-managed resource gateway is read-only in your account. It can only be used by AWS DevOps Agent, and no other service or principal can route traffic through it. You can verify this in [AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/) logs, which record all VPC Lattice API calls.
+ **Your security groups, your rules** – You control inbound and outbound traffic to the ENIs through security groups that you own and manage. If you don't specify security groups, AWS DevOps Agent creates a default security group scoped to the ports you define.
+ **Service-linked roles with least privilege** – AWS DevOps Agent uses a [service-linked role](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html) to create only the necessary VPC Lattice and Amazon EC2 resources. This role is scoped to resources tagged with `AWSAIDevOpsManaged` and cannot access any other resources in your account.

**Note**  
The service-managed resource gateway is created through a service-linked role. If your organization has [service control policies (SCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) that restrict VPC Lattice API actions, make sure that your SCPs permit the actions that the service-linked role requires.

### Architecture


The following diagram shows the network path for a private connection.

![Diagram of the network path for a private connection](https://docs.aws.amazon.com/devopsagent/latest/userguide/images/7cd6182e6b8d.png)


In this architecture:
+ AWS DevOps Agent initiates a request to your target service.
+ Amazon VPC Lattice routes the request through the service-managed resource gateway in your VPC. For advanced setups using your own VPC Lattice resources, see [Advanced setup using existing VPC Lattice resources](#advanced-setup-using-existing-vpc-lattice-resources).
+ An ENI in your VPC receives the traffic and forwards it to your target service's IP address or DNS name.
+ Your security groups govern which traffic is allowed through the ENIs.
+ From your target service's perspective, the request originates from private IP addresses of ENIs within your VPC.

## Create a private connection


You can create a private connection using the AWS Management Console or the AWS CLI.

**Note**  
The following Availability Zones aren't supported by VPC Lattice: `use1-az3`, `usw1-az2`, `apne1-az3`, `apne2-az2`, `euc1-az2`, `euw1-az4`, `cac1-az3`, `ilc1-az2`.

### Prerequisites


Before creating a private connection, verify that you have the following:
+ **An active Agent Space** – You need an existing Agent Space in your account. If you don't have one, see [Getting started with AWS DevOps Agent](getting-started-with-aws-devops-agent.md).
+ **A privately reachable target service** – Your MCP server, observability platform, or other service must be reachable at a known private IP address or DNS name from the VPC where the resource gateway is deployed. The service can run in the same VPC, a peered VPC, or on-premises, as long as it's routable from the resource gateway subnets. The service must serve HTTPS traffic with a minimum TLS version of 1.2 on a port that you specify when creating the connection.
+ **Subnets in your VPC** – Identify 1–20 subnets where the ENIs will be created. We recommend selecting subnets in multiple Availability Zones for high availability. These subnets must have network connectivity to your target service. VPC Lattice uses at most one subnet per Availability Zone.
+ **(Optional) Security groups** – If you want to control traffic with specific rules, prepare up to five security group IDs to attach to the ENIs. If you omit security groups, AWS DevOps Agent creates a default security group.

Private connections are account-level resources. After you create a private connection, you can reuse it across multiple integrations and Agent Spaces that need to reach the same host.
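To check candidate subnets against these prerequisites, you can query their Availability Zone IDs and free IP address counts with the Amazon EC2 API. A sketch, wrapped in a hypothetical helper; the subnet IDs you pass are placeholders:

```shell
# Sketch: show each candidate subnet's Availability Zone ID (to compare
# against the unsupported zones) and its available IP address count.
check_subnets() {
    aws ec2 describe-subnets --subnet-ids "$@" \
        --query 'Subnets[*].[SubnetId,AvailabilityZoneId,AvailableIpAddressCount]' \
        --output table
}
```

For example, `check_subnets subnet-0123456789abcdef0 subnet-0123456789abcdef1` prints a table you can compare against the unsupported Availability Zone list in this topic.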

### Create a private connection using the console


1. Open the AWS DevOps Agent console.

1. In the navigation pane, choose **Capability providers**, then choose **Private connections**.

1. Choose **Create a new connection**.

1. For **Name**, enter a descriptive name for the connection, such as `my-mcp-tool-connection`.

1. For **VPC**, select the VPC where the resource gateway ENIs will be deployed.

1. For **Subnets**, select one or more subnets (up to 20). We recommend choosing subnets in at least two Availability Zones.

1. For **IP address type**, select the type of IP address of your target service (`IPv4`, `IPv6`, or `DualStack`).

1. (Optional) For **Number of IPv4 addresses**, if you selected `IPv4` or `DualStack` for the IP address type, you can enter the number of IPv4 addresses per ENI for your resource gateway. The default is 16 IPv4 addresses per ENI.

1. (Optional) For **Security groups**, select existing security groups (up to 5) to restrict what traffic is allowed to reach your target service. If you don't select any, a default security group is created.

1. (Optional) For **Port ranges**, specify the TCP ports your target application listens on (for example, `443` or `8080-8090`). You can specify up to 11 port ranges.

1. For **Host address**, enter the IP address or DNS name of your target service (for example, `mcp.internal.example.com` or `10.0.1.50`). The service must be reachable from the selected VPC. If you choose a DNS name, it must be resolvable from the selected VPC.

1. (Optional) For **Certificate public key**, if the host address you specified uses TLS certificates issued by a private certificate authority, enter the PEM-encoded public key of the certificate. This allows AWS DevOps Agent to trust the TLS connection to your target service.

1. Choose **Create connection**.

The connection status changes to **Create in progress**. This process can take up to 10 minutes. When the status changes to **Active**, the network path is ready.

If the status changes to **Create failed**, verify the following:
+ The subnets you specified have available IP addresses.
+ Your account has not reached VPC Lattice service quotas.
+ No restrictive IAM policies are preventing the service-linked role from creating resources.

**Note**  
You can also perform these steps by selecting **Create a new private connection** while registering a capability provider. For more information, see [Use a private connection with a capability provider](#use-a-private-connection-with-a-capability-provider).
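For the **Certificate public key** step, you can extract the PEM-encoded public key from your certificate with `openssl`. The snippet below generates a throwaway self-signed certificate purely for illustration; in practice, run only the final command against the certificate issued by your private CA.

```shell
# Illustration only: create a throwaway self-signed certificate.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=mcp.internal.example.com" \
    -keyout /tmp/service-key.pem -out /tmp/service-cert.pem

# Extract the PEM-encoded public key; paste the output into the
# "Certificate public key" field when creating the connection.
openssl x509 -in /tmp/service-cert.pem -pubkey -noout
```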

### Create a private connection using the AWS CLI


Run the following command to create a private connection. Replace the placeholder values with your own.

```
aws devops-agent create-private-connection \
    --name my-mcp-tool-connection \
    --mode '{
        "serviceManaged": {
            "hostAddress": "mcp.internal.example.com",
            "vpcId": "vpc-0123456789abcdef0",
            "subnetIds": [
                "subnet-0123456789abcdef0",
                "subnet-0123456789abcdef1"
            ],
            "securityGroupIds": [
                "sg-0123456789abcdef0"
            ],
            "portRanges": ["443"]
        }
    }'
```

The response includes the connection name and a status of `CREATE_IN_PROGRESS`:

```
{
    "name": "my-mcp-tool-connection",
    "status": "CREATE_IN_PROGRESS",
    "resourceGatewayId": "rgw-0123456789abcdef0",
    "hostAddress": "mcp.internal.example.com",
    "vpcId": "vpc-0123456789abcdef0"
}
```

To check the connection status, use the `describe-private-connection` command:

```
aws devops-agent describe-private-connection \
    --name my-mcp-tool-connection
```

When the status is `ACTIVE`, your private connection is ready to use.
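If you script the creation, you can poll `describe-private-connection` until the connection is ready. A minimal sketch follows; the `ACTIVE` and `CREATE_IN_PROGRESS` values appear in the responses above, while `CREATE_FAILED` is an assumption based on the console's **Create failed** status.

```shell
# Poll until the private connection leaves CREATE_IN_PROGRESS
# (creation can take up to 10 minutes; 60 tries x 10 s ~= 10 min).
wait_for_connection() {
    local name="$1"
    for _ in $(seq 1 60); do
        status=$(aws devops-agent describe-private-connection \
            --name "$name" --query 'status' --output text)
        echo "status: $status"
        [ "$status" = "ACTIVE" ] && return 0
        [ "$status" = "CREATE_FAILED" ] && return 1   # assumed failure value
        sleep 10
    done
    return 1
}
```

For example, `wait_for_connection my-mcp-tool-connection` returns 0 once the connection is usable.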

## Use a private connection with a capability provider


To use a private connection, link it to a capability provider during registration. The following capability types support private connections: `GitHub`, `GitLab`, `MCP Server`, and `Grafana`. You can perform this step using the AWS Management Console or the AWS CLI.

**Note**  
When registering a capability provider, AWS DevOps Agent validates that the endpoint is reachable and responding. Ensure your target service is running and accepting connections before completing registration.
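You can confirm reachability ahead of time from a host inside the VPC (for example, a bastion or CI runner in one of the selected subnets). A sketch using `curl`; the endpoint URL you pass is a placeholder:

```shell
# Sketch: confirm the endpoint accepts TLS 1.2+ connections and responds,
# printing the HTTP status code (fails if the TLS handshake can't complete).
check_endpoint() {
    curl --silent --show-error --output /dev/null \
        --write-out '%{http_code}\n' \
        --tlsv1.2 --max-time 10 "$1"
}
```

For example, `check_endpoint https://mcp.internal.example.com` should print a status code such as `200` when the service is up.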

### Use a private connection with a capability provider using the console


In the AWS DevOps Agent console, you can link a private connection to a capability during registration by selecting the **Connect to endpoint using a private connection** option.

![Console option to connect to an endpoint using a private connection](https://docs.aws.amazon.com/devopsagent/latest/userguide/images/a2a7ffb70ffe.png)


1. Open the AWS DevOps Agent console and navigate to your Agent Space.

1. In the **Capability Providers** section, choose **Registration**.

1. Select **Register** for the capability type you want to use with the private connection.

1. On the registration details view, enter the Endpoint URL you want to connect to using the private connection (for example, `https://mcp.internal.example.com`).

1. Select **Connect to endpoint using a private connection**.

1. Either select an existing private connection that corresponds to the Endpoint URL you want to connect to, or select **Create a new private connection** to create one.

1. Complete the registration process for the capability provider.

### Use a private connection with a capability provider using the AWS CLI


You can register capabilities with a private connection by including the `private-connection-name` argument. Below is an example of registering an MCP Server with API Key authorization using the `my-mcp-tool-connection` private connection. Replace the placeholder values with your own.

```
aws devops-agent register-service \
    --service mcpserver \
    --private-connection-name my-mcp-tool-connection \
    --service-details '{
        "mcpserver": {
            "name": "my-mcp-tool",
            "endpoint": "https://mcp.internal.example.com",
            "authorizationConfig": {
                "apiKey": {
                    "apiKeyName": "api-key",
                    "apiKeyValue": "secret-value",
                    "apiKeyHeader": "x-api-key"
                }
            }
        }
    }' \
    --region us-east-1
```

## Verify a private connection


After the private connection reaches the **Active** state and is linked to a capability provider, verify that AWS DevOps Agent can reach your target service:

1. Open the AWS DevOps Agent console and navigate to your Agent Space.

1. Start a new chat session.

1. Invoke a command that uses the integration backed by your private connection. For example, if your MCP tool provides access to an internal knowledge base, ask the agent a question that requires that knowledge base.

1. Confirm that the agent returns results from the private service.

If the connection fails, check the following:
+ **VPC Lattice limits** – Verify that you have not reached the resource gateway quota or other [VPC Lattice quotas](https://docs.aws.amazon.com/vpc-lattice/latest/ug/quotas.html).
+ **Security group rules** – Verify that the security groups attached to the ENIs allow outbound traffic on the port your service listens on. Also verify that your service's security group allows inbound traffic on the target port. Traffic arrives from VPC Lattice data plane IPs within your VPC CIDR range. You can use security group referencing (allowing the ENI security group as a source) or allow inbound from the VPC CIDR.
+ **Subnet connectivity** – Verify that the subnets you selected can route traffic to your service. If the service runs in a different subnet, confirm that the route tables allow traffic between them.
+ **Service availability** – Confirm that your service is running and accepting connections on the expected port.
+ **Unsupported Availability Zone** – Verify that your subnets are in supported Availability Zones. Run `aws ec2 describe-subnets --subnet-ids <your-subnet-ids> --query 'Subnets[*].[SubnetId,AvailabilityZoneId]'` and check against the unsupported Availability Zones listed earlier in this topic.
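The security group referencing described above can be applied with the Amazon EC2 API. The following sketch, wrapped in a hypothetical helper, allows inbound traffic on the service's security group from the ENI security group; the group IDs you pass are placeholders.

```shell
# Sketch: allow the target service's security group to accept traffic from
# the ENI security group on the service port (security group referencing).
allow_from_eni_sg() {
    local service_sg="$1" eni_sg="$2" port="${3:-443}"
    aws ec2 authorize-security-group-ingress \
        --group-id "$service_sg" \
        --protocol tcp --port "$port" \
        --source-group "$eni_sg"
}
```

For example, `allow_from_eni_sg sg-0123456789abcdef0 sg-0123456789abcdef1 443` permits HTTPS from the ENIs without hard-coding IP ranges, so the rule keeps working if the ENI addresses change.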

## Delete a private connection


You can delete unused private connections using the AWS Management Console or the AWS CLI.

### Delete a private connection using the console


1. Open the AWS DevOps Agent console.

1. In the navigation pane, choose **Capability providers**, then choose **Private connections**.

1. Choose the **Actions** menu for the private connection you want to delete, and then choose **Remove**.

The private connection is displayed with a status of **Removing connection** while AWS DevOps Agent removes the managed resource gateway and ENIs from your VPC. After deletion completes, the connection no longer appears in your list of private connections.

### Delete a private connection using the AWS CLI


```
aws devops-agent delete-private-connection \
    --name my-mcp-tool-connection
```

The response returns a status of `DELETE_IN_PROGRESS`. AWS DevOps Agent removes the managed resource gateway and ENIs from your VPC. After deletion completes, the connection no longer appears in your list of private connections.

## Advanced setup using existing VPC Lattice resources


If your organization already uses Amazon VPC Lattice and manages your own resource configurations, you can create a private connection in self-managed mode. Instead of having AWS DevOps Agent create a resource gateway for you, you provide the Amazon Resource Name (ARN) of an existing resource configuration that points to your target service.

This approach is useful when you:
+ Want full control over the resource gateway and resource configuration lifecycle.
+ Need to share resource configurations across multiple AWS accounts or services.
+ Require VPC Lattice access logs for detailed traffic monitoring.
+ Run a hub-and-spoke network architecture.

To create a self-managed private connection with the AWS CLI:

```
aws devops-agent create-private-connection \
    --name my-advanced-connection \
    --mode '{
        "selfManaged": {
            "resourceConfigurationId": "arn:aws:vpc-lattice:us-east-1:123456789012:resourceconfiguration/rcfg-0123456789abcdef0"
        }
    }'
```

For more details on setting up VPC Lattice resource gateways and resource configurations, see the [Amazon VPC Lattice User Guide](https://docs.aws.amazon.com/vpc-lattice/latest/ug/).

## Related topics

+ [VPC Endpoints (AWS PrivateLink)](aws-devops-agent-security-vpc-endpoints-aws-privatelink.md)
+ [Connecting MCP Servers](configuring-capabilities-for-aws-devops-agent-connecting-mcp-servers.md)
+ [Configuring capabilities for AWS DevOps Agent](configuring-capabilities-for-aws-devops-agent.md)
+ [AWS DevOps Agent Security](aws-devops-agent-security.md)
+ [DevOps Agent IAM permissions](aws-devops-agent-security-devops-agent-iam-permissions.md)