

# Application Signals
<a name="CloudWatch-Application-Monitoring-Sections"></a>

CloudWatch Application Signals helps you monitor and improve application performance on AWS. It automatically collects data from your applications running on services like Amazon EC2, Amazon ECS, and Lambda. You can use CloudWatch Application Signals for the following:
+ Monitor application health in real time
+ Track performance against business goals
+ View relationships between services and dependencies
+ Quickly identify and resolve performance issues
+ Enable Application Signals to automatically collect metrics and traces from your applications, and display key metrics such as call volume, availability, latency, faults, and errors. Quickly see and triage current operational health, and whether your applications are meeting their longer-term performance goals, without writing custom code or creating dashboards.
+ Create and monitor [service-level objectives (SLOs)](CloudWatch-ServiceLevelObjectives.md) with Application Signals. Easily create and track the status of SLOs related to CloudWatch metrics, including the new standard application metrics that Application Signals collects. See and track the [service level indicator (SLI)](CloudWatch-ServiceLevelObjectives.md#CloudWatch-ServiceLevelObjectives-concepts) status of your application services within a services list and topology map, and create alarms to track your SLOs.
+ See a map of your application topology that Application Signals automatically discovers, which gives you a visual representation of your applications, dependencies, and their connectivity.
+ Application Signals works with [CloudWatch RUM](CloudWatch-RUM.md), [CloudWatch Synthetics canaries](CloudWatch_Synthetics_Canaries.md), [AWS Service Catalog AppRegistry](https://docs.aws.amazon.com/servicecatalog/latest/arguide/intro-app-registry.html), and Amazon EC2 Auto Scaling to display your client pages, Synthetics canaries, and application names within dashboards and maps.

**Supported languages and architectures**

Application Signals supports Java, Python, Node.js, and .NET applications through the AWS Distro for OpenTelemetry (ADOT) SDKs, and PHP, Ruby, and Go applications through OpenTelemetry zero-code instrumentation.

Application Signals is supported and tested on Amazon EKS, Amazon ECS, and Amazon EC2. On Amazon EKS clusters, it automatically discovers the names of your services and clusters. On other architectures, you must supply the names of services and environments when you enable those services for Application Signals.

The instructions for enabling Application Signals on Amazon EC2 should work on any architecture that supports the CloudWatch agent and AWS Distro for OpenTelemetry. However, they have been tested only on Amazon ECS and Amazon EC2.

**Supported Regions**

Application Signals is supported in every commercial Region except for Canada West (Calgary).

**Topics**
+ [Features](#application-signals-features)
+ [Permissions required for Application Signals](Application_Signals_Permissions.md)
+ [Supported systems](CloudWatch-Application-Signals-supportmatrix.md)
+ [Supported instrumentation setups](Getting-Started-App-Signals.md)
+ [Enable Application Signals in your account](CloudWatch-Application-Signals-Enable.md)
+ [(Optional) Try out Application Signals with a sample app](CloudWatch-Application-Signals-Enable-EKS-sample.md)
+ [Enable your applications on Amazon EKS clusters](CloudWatch-Application-Signals-Enable-EKS.md)
+ [Enable your applications on Amazon EC2](CloudWatch-Application-Signals-Enable-EC2Main.md)
+ [Enable your applications on Amazon ECS](CloudWatch-Application-Signals-Enable-ECSMain.md)
+ [Enable your applications on Kubernetes](CloudWatch-Application-Signals-Enable-KubernetesMain.md)
+ [Enable your applications on Lambda](CloudWatch-Application-Signals-Enable-LambdaMain.md)
+ [Troubleshooting your Application Signals installation](CloudWatch-Application-Signals-Enable-Troubleshoot.md)
+ [(Optional) Configuring Application Signals](CloudWatch-Application-Signals-Configure.md)
+ [Monitor the operational health of your applications with Application Signals](Services.md)
+ [Metrics collected by Application Signals](AppSignals-MetricsCollected.md)
+ [Custom metrics with Application Signals](AppSignals-CustomMetrics.md)

## Features
<a name="application-signals-features"></a>
+ **Use Application Signals for daily application monitoring** – Use Application Signals within the CloudWatch console, as part of daily application monitoring:

  1. If you have created service level objectives (SLOs) for your services, start with the [Service Level Objectives (SLO)](CloudWatch-ServiceLevelObjectives.md#CloudWatch-ServiceLevelObjectives-Triage) page. This gives you an immediate view of the health of your most critical services, operations, and dependencies. Choose the service, operation, or dependency name for an SLO to open the [Service detail](ServiceDetail.md) page and see detailed service information as you troubleshoot issues. 

  1. Open the [Services](Services-page.md) page to see a summary of all your services, and quickly see services with the highest fault rate or latency. If you have created SLOs, look at the Services table to see which services have unhealthy service level indicators (SLIs). If a particular service is in an unhealthy state, select the service to open the [Service detail](ServiceDetail.md) page and see service operations, dependencies, Synthetics canaries, and client requests. Select a point in a graph to see correlated traces so that you can troubleshoot and identify the root cause of operational issues. 

  1. If new services have been deployed or dependencies have changed, open the [Application Map](ServiceMap.md) to inspect your application topology. See a map of your applications that shows the relationship between clients, Synthetics canaries, services, and dependencies. Quickly see SLI health, view key metrics such as call volume, fault rate, and latency, and drill down to see more detailed information in the [Service detail](ServiceDetail.md) page. 

  Using Application Signals incurs charges. For information about CloudWatch pricing, see [Amazon CloudWatch Pricing](http://aws.amazon.com/cloudwatch/pricing).
**Note**  
It is not necessary to enable Application Signals to use CloudWatch Synthetics or CloudWatch RUM. However, Synthetics and CloudWatch RUM work with Application Signals to provide additional benefits when you use these features together.
+ **Application Signals cross-account** – With Application Signals cross-account observability, you can monitor and troubleshoot your applications that span multiple AWS accounts within a single Region.

  You can use Amazon CloudWatch Observability Access Manager to set up one or more of your AWS accounts as a monitoring account. You’ll provide the monitoring account with the ability to view data in your source account by creating a sink in your monitoring account. You use the sink to create a link from your source account to your monitoring account. For more information, see [CloudWatch cross-account observability](CloudWatch-Unified-Cross-Account.md).

  For proper functionality of Application Signals cross-account observability, ensure that the following telemetry types are shared through the CloudWatch Observability Access Manager.
  + Application Signals services and service level objectives (SLOs)
  + Metrics in Amazon CloudWatch
  + Log groups in Amazon CloudWatch Logs
  + Traces in [AWS X-Ray](https://docs.aws.amazon.com/xray/latest/devguide/aws-xray.html)
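  As a hedged sketch, the cross-account setup described above can be expressed with the AWS CLI `oam` commands. The sink name, label template, and account ID below are hypothetical, and the sink ARN in `create-link` comes from the `create-sink` output:

  ```shell
  # In the monitoring account: create a sink to receive telemetry.
  aws oam create-sink --name monitoring-sink

  # In each source account: link to the sink, sharing the telemetry
  # types that Application Signals needs.
  aws oam create-link \
      --label-template '$AccountName' \
      --resource-types \
          "AWS::ApplicationSignals::Service" \
          "AWS::ApplicationSignals::ServiceLevelObjective" \
          "AWS::CloudWatch::Metric" \
          "AWS::Logs::LogGroup" \
          "AWS::XRay::Trace" \
      --sink-identifier arn:aws:oam:us-east-1:111122223333:sink/EXAMPLE
  ```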
+ **Dynamic service grouping and filtering** – Group and filter services with Application Signals' dynamic grouping capabilities. Automatically aggregate metrics and SLIs of services within groups, allowing you to start from a group view and dive deep into specific problematic areas. Application Signals provides two default groupings: "Environment" grouping that organizes by service environment, and "Related services" grouping that groups services based on their dependencies. For example, in Related services grouping, if Service A calls Service B, which calls Service C, they're grouped under Service A. Beyond default groupings, create custom groups by selecting services that align with your organizational needs, such as Business unit or Team.

  Create custom groupings using AWS tags or OpenTelemetry attributes that align with your team structure, business domains, or operational requirements. Custom groupings enable you to organize services according to your specific monitoring and troubleshooting workflows. For more information, see [Configuring custom groups](ServiceMap.md#Application-Map-Configure-Custom-Groups).   
![\[CloudWatch application map with grouping by related services.\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/application-map.png)  
![\[CloudWatch services list page with filtering.\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/services-page.png)
+ **Change Events** – Track change events across your application with Application Signals' automatic processing of CloudTrail events. Monitor configuration and deployment events for services and their dependencies, providing immediate context for operational analysis and troubleshooting. Change event detection is enabled alongside service discovery enablement through the CloudWatch Console or StartDiscovery API. For Amazon EKS services, deployment detection requires that the Amazon EKS services are instrumented with the Application Signals instrumentation SDK.

   Change events are supported for the following resources: 
  + Autoscaling Group
  + EKS Cluster
  + EKS Workload (only deployments)
  + ECS Cluster and Service
  + ELB Load balancer and Target Group
  + Lambda Function
  + BedrockAgentCore Runtime and RuntimeEndpoint  
![\[CloudWatch application map with deployment filtering and change events in group drawer.\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/application-map-with-drawer.png)  
![\[CloudWatch application overview with change events table.\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/application-overview.png)
+ **Automated audit findings** – Discover critical insights through Application Signals' automated audit findings. The service analyzes your applications to report significant observations and potential problems, simplifying root cause analysis. These automated findings consolidate relevant traces, eliminating the need to navigate through multiple clicks. The audit system helps teams quickly identify issues and their underlying causes, enabling faster problem resolution.

  Application Signals employs advanced analytics to detect patterns, highlight resource inefficiencies, and suggest optimization opportunities. Findings are prioritized based on severity and potential business impact, enabling teams to focus on the most critical issues first. Get actionable recommendations for improving service reliability and performance without manual analysis.  
![\[CloudWatch service overview with audit findings.\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/service-overview.png)

# Permissions required for Application Signals
<a name="Application_Signals_Permissions"></a>

This section explains the permissions necessary for you to enable, manage, and operate Application Signals.

## Permissions to enable and manage Application Signals
<a name="Application_Signals_Permissions_Enabling"></a>

To manage Application Signals, you must be signed on with the required permissions. To view the contents of the **CloudWatchApplicationSignalsFullAccess** policy, see [CloudWatchApplicationSignalsFullAccess](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/CloudWatchApplicationSignalsFullAccess.html). 
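For example, you could attach the managed policy to an IAM role with the AWS CLI. This is a sketch; the role name here is hypothetical:

```shell
aws iam attach-role-policy \
    --role-name ApplicationSignalsAdmin \
    --policy-arn arn:aws:iam::aws:policy/CloudWatchApplicationSignalsFullAccess
```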



To enable Application Signals on Amazon EC2 or custom architectures, see [Enable Application Signals on Amazon EC2](CloudWatch-Application-Signals-Enable-EC2Main.md). To enable and manage Application Signals on Amazon EKS using the [Amazon CloudWatch Observability EKS add-on](install-CloudWatch-Observability-EKS-addon.md), you need the following permissions.

**Important**  
These permissions include `iam:PassRole` with `Resource "*"` and `eks:CreateAddon` with `Resource "*"`. These are powerful permissions and you should use caution in granting them.

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CloudWatchApplicationSignalsEksAddonManagementPermissions",
            "Effect": "Allow",
            "Action": [
                "eks:AccessKubernetesApi",
                "eks:CreateAddon",
                "eks:DescribeAddon",
                "eks:DescribeAddonConfiguration",
                "eks:DescribeAddonVersions",
                "eks:DescribeCluster",
                "eks:DescribeUpdate",
                "eks:ListAddons",
                "eks:ListClusters",
                "eks:ListUpdates",
                "iam:ListRoles",
                "iam:PassRole"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "iam:PassedToService": [
                        "eks.amazonaws.com",
                        "application-signals.cloudwatch.amazonaws.com"
                    ]
                }
            }
        },
        {
            "Sid": "CloudWatchApplicationSignalsEksCloudWatchObservabilityAddonManagementPermissions",
            "Effect": "Allow",
            "Action": [
                "eks:DeleteAddon",
                "eks:UpdateAddon"
            ],
            "Resource": "arn:aws:eks:*:*:addon/*/amazon-cloudwatch-observability/*"
        }
    ]
}
```

------

The Application Signals dashboard shows the AWS Service Catalog AppRegistry applications that your SLOs are associated with. To see these applications in the SLO pages, you must have the following permissions:

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CloudWatchApplicationSignalsTaggingReadPermissions",
            "Effect": "Allow",
            "Action": "tag:GetResources",
            "Resource": "*"
        }
    ]
}
```

------

## Operating Application Signals
<a name="Application_Signals_Permissions_Operate"></a>

Service operators who use Application Signals to monitor services and SLOs must be signed in to an account with read-only permissions. To view the contents of the **CloudWatchApplicationSignalsReadOnlyAccess** policy, see [CloudWatchApplicationSignalsReadOnlyAccess](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/CloudWatchApplicationSignalsReadOnlyAccess.html).

To see which AWS Service Catalog AppRegistry applications your SLOs are associated with in the Application Signals dashboard, you must have the following permissions:

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CloudWatchApplicationSignalsTaggingReadPermissions",
            "Effect": "Allow",
            "Action": "tag:GetResources",
            "Resource": "*"
        }
    ]
}
```

------

To check whether Application Signals on Amazon EKS is enabled through the [Amazon CloudWatch Observability EKS add-on](install-CloudWatch-Observability-EKS-addon.md), you need the following permissions:

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CloudWatchApplicationSignalsResourceExplorerReadPermissions",
            "Effect": "Allow",
            "Action": [
                "resource-explorer-2:ListIndexes",
                "resource-explorer-2:Search"
            ],
            "Resource": "*"
        },
        {
            "Sid": "CloudWatchApplicationSignalsResourceExplorerSLRPermissions",
            "Effect": "Allow",
            "Action": [
                "iam:CreateServiceLinkedRole"
            ],
            "Resource": "arn:aws:iam::*:role/aws-service-role/resource-explorer-2.amazonaws.com/AWSServiceRoleForResourceExplorer",
            "Condition": {
                "StringEquals": {
                    "iam:AWSServiceName": [
                        "resource-explorer-2.amazonaws.com"
                    ]
                }
            }
        },
        {
            "Sid": "CloudWatchApplicationSignalsResourceExplorerCreateIndexPermissions",
            "Effect": "Allow",
            "Action": [
                "resource-explorer-2:CreateIndex"
            ],
            "Resource": "arn:aws:resource-explorer-2:*:*:index/*"
        }
    ]
}
```

------

# Supported systems
<a name="CloudWatch-Application-Signals-supportmatrix"></a>

Application Signals is supported and tested on Amazon EKS, native Kubernetes, Amazon ECS, and Amazon EC2. The instructions for enabling Application Signals on Amazon EC2 should work on any platform that supports the CloudWatch agent and AWS Distro for OpenTelemetry.

**Topics**
+ [Java compatibility](#CloudWatch-Application-Signals-supportmatrix-java)
+ [.NET compatibility](#CloudWatch-Application-Signals-supportmatrix-dotnet)
+ [PHP compatibility](#php-compatibility)
+ [Ruby compatibility](#ruby-compatibility)
+ [Python compatibility](#CloudWatch-Application-Signals-supportmatrix-python)
+ [Node.js compatibility](#CloudWatch-Application-Signals-supportmatrix-node)
+ [GoLang compatibility](#golang-compatibility)
+ [Runtime version support matrix](#rumtime-version-matix)
+ [Known issues](#AppSignals-Issues)

## Java compatibility
<a name="CloudWatch-Application-Signals-supportmatrix-java"></a>

Application Signals supports Java applications, and supports the same Java libraries and frameworks as the AWS Distro for OpenTelemetry does. For more information, see [ Supported libraries, frameworks, application servers, and JVMs](https://github.com/open-telemetry/opentelemetry-java-instrumentation/blob/main/docs/supported-libraries.md).

## .NET compatibility
<a name="CloudWatch-Application-Signals-supportmatrix-dotnet"></a>

Application Signals supports the same .NET libraries and frameworks as the AWS Distro for OpenTelemetry does. For more information, see [ Supported instrumentations](https://github.com/open-telemetry/opentelemetry-dotnet-instrumentation/blob/main/docs/internal/instrumentation-libraries.md).

Application Signals supports .NET applications running on x86-64 or ARM64 CPUs, on Linux x64, Linux ARM64, and Microsoft Windows Server 2022 x64.

**Note**  
The AWS Distro for OpenTelemetry (ADOT) SDK for .NET does not support AWS SDK for .NET V4. Use AWS SDK for .NET V3 for full Application Signals support.

## PHP compatibility
<a name="php-compatibility"></a>

Application Signals supports PHP applications with OpenTelemetry zero-code instrumentation. There is no AWS Distro for OpenTelemetry (ADOT) SDK available for this purpose; instead, use the standard OpenTelemetry instrumentation SDK with [Transaction Search](AmazonCloudWatch/latest/monitoring/CloudWatch-Transaction-Search.html) enabled. To start using zero-code instrumentation in PHP, follow the steps in the OpenTelemetry PHP docs, [PHP zero-code instrumentation](https://opentelemetry.io/docs/zero-code/php/). Automatic instrumentation is available for a number of commonly used PHP libraries. For more information, see the [OpenTelemetry registry](https://packagist.org/search/?query=open-telemetry%3Dinstrumentation).
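As a hedged sketch, zero-code PHP instrumentation is driven by environment variables once the OpenTelemetry PHP extension is installed (for example, via `pecl install opentelemetry`); the service name and endpoint below are hypothetical:

```shell
# Assumes the OpenTelemetry PHP extension is already installed.
export OTEL_PHP_AUTOLOAD_ENABLED=true
export OTEL_SERVICE_NAME=my-php-service                    # hypothetical name
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318   # assumed local collector
```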

## Ruby compatibility
<a name="ruby-compatibility"></a>

Application Signals supports Ruby applications with OpenTelemetry zero-code instrumentation. There is no AWS Distro for OpenTelemetry (ADOT) SDK available for this purpose; instead, use the standard OpenTelemetry instrumentation SDK with [Transaction Search](AmazonCloudWatch/latest/monitoring/CloudWatch-Transaction-Search.html) enabled. To start using zero-code instrumentation in Ruby, follow the steps in the OpenTelemetry Ruby docs, [Ruby zero-code instrumentation](https://opentelemetry.io/docs/languages/ruby/getting-started/#instrumentation). For a list of released instrumentation libraries, see [Registry](https://opentelemetry.io/ecosystem/registry/?language=ruby&component=instrumentation).

## Python compatibility
<a name="CloudWatch-Application-Signals-supportmatrix-python"></a>

Application Signals supports the same libraries and frameworks as the AWS Distro for OpenTelemetry does. For more information, see **Supported packages** at [ opentelemetry-python-contrib](https://github.com/open-telemetry/opentelemetry-python-contrib/blob/main/instrumentation/README.md).

Before you enable Application Signals for your Python applications, be aware of the following considerations.
+ In some containerized applications, a missing `PYTHONPATH` environment variable can sometimes cause the application to fail to start. To resolve this, ensure that you set the `PYTHONPATH` environment variable to the location of your application’s working directory. This is due to a known issue with OpenTelemetry auto-instrumentation. For more information about this issue, see [ Python autoinstrumentation setting of PYTHONPATH is not compliant](https://github.com/open-telemetry/opentelemetry-operator/issues/2302).
+ For Django applications, there are additional required configurations, which are outlined in the [ OpenTelemetry Python documentation](https://opentelemetry-python.readthedocs.io/en/latest/examples/django/README.html).
  + Use the `--noreload` flag to prevent automatic reloading.
  + Set the `DJANGO_SETTINGS_MODULE` environment variable to the location of your Django application’s `settings.py` file. This ensures that OpenTelemetry can correctly access and integrate with your Django settings. 
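The considerations above can be sketched as environment settings. The paths and module names below are hypothetical:

```shell
# Hypothetical application layout, for illustration only.
export PYTHONPATH=/app                          # the application's working directory
export DJANGO_SETTINGS_MODULE=mysite.settings   # location of settings.py

# For Django, start the server with --noreload to prevent automatic reloading:
# python manage.py runserver --noreload
```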

## Node.js compatibility
<a name="CloudWatch-Application-Signals-supportmatrix-node"></a>

Application Signals supports the same Node.js libraries and frameworks as the AWS Distro for OpenTelemetry does. For more information, see [ Supported instrumentations](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main).

### Known limitations about Node.js with ESM
<a name="ESM-limitations"></a>

The AWS Distro for OpenTelemetry Node.js supports two module systems: ECMAScript Modules (ESM) and CommonJS (CJS). To enable Application Signals, we recommend that you use the CJS module format because OpenTelemetry JavaScript’s support of ESM is experimental and a work in progress. For more details, see [ ECMAScript Modules vs. CommonJS](https://github.com/open-telemetry/opentelemetry-js/blob/eb3ca4fb07ee31c62093f5fcec56575573c902ce/doc/esm-support.md) on GitHub.

To determine if your application is using CJS and not ESM, ensure that your application does not fulfill the conditions to enable ESM. For more information about these conditions, see [ Enabling](https://nodejs.org/api/esm.html#enabling) in the Node.js documentation.
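As a quick, hedged check, one of the enabling conditions is a `"type": "module"` entry in the app's `package.json` (other conditions, such as the `.mjs` file extension, also apply):

```shell
# Sketch: report "ESM" if package.json sets "type": "module", else "CJS".
check_module_format() {
  if grep -q '"type"[[:space:]]*:[[:space:]]*"module"' "$1/package.json" 2>/dev/null; then
    echo "ESM"
  else
    echo "CJS"
  fi
}
```

For example, `check_module_format /path/to/app`.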

The AWS Distro for OpenTelemetry Node.js provides limited support for ESM based on OpenTelemetry JavaScript’s experimental support for ESM. This means the following:
+ The Node.js version must be 18.19.0 or later.
+ The Node.js application that you want to instrument must include `@aws/aws-distro-opentelemetry-node-autoinstrumentation` and `@opentelemetry/instrumentation` as dependencies. 
+ The Node.js application that you want to instrument must start with the following node option: 

  ```
  NODE_OPTIONS=' --import @aws/aws-distro-opentelemetry-node-autoinstrumentation/register --experimental-loader=@opentelemetry/instrumentation/hook.mjs'
  ```

To enable Application Signals with the Node.js ESM module format, follow the setup instructions for your platform:
+ **Amazon EKS** – [Setting up a Node.js application with the ESM module format](CloudWatch-Application-Signals-Enable-EKS.md#EKS-NodeJs-ESM)
+ **Amazon ECS with sidecar strategy** – [Setting up a Node.js application with the ESM module format](CloudWatch-Application-Signals-ECS-Sidecar.md#ECS-NodeJs-ESM)
+ **Amazon ECS with daemon strategy** – [Setting up a Node.js application with the ESM module format](CloudWatch-Application-Signals-ECS-Daemon.md#ECSDaemon-NodeJs-ESM)
+ **Amazon ECS with AWS CDK**
+ **Amazon EC2** – [Setting up a Node.js application with the ESM module format](CloudWatch-Application-Signals-Enable-EC2Main.md#EC2-NodeJs-ESM)
+ **Kubernetes** – [Setting up a Node.js application with the ESM module format](CloudWatch-Application-Signals-Enable-KubernetesMain.md#Kubernetes-NodeJs-ESM)

## GoLang compatibility
<a name="golang-compatibility"></a>

Application Signals supports GoLang applications with OpenTelemetry zero-code instrumentation. There is no AWS Distro for OpenTelemetry (ADOT) SDK available for this purpose; instead, use the standard OpenTelemetry instrumentation SDK with [Transaction Search](AmazonCloudWatch/latest/monitoring/CloudWatch-Transaction-Search.html) enabled. To start using zero-code instrumentation in GoLang, follow the steps in [Getting Started with OpenTelemetry Go Automatic Instrumentation](https://github.com/open-telemetry/opentelemetry-go-instrumentation/blob/main/docs/getting-started.md).
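As a hedged sketch, the zero-code agent from the getting-started guide is pointed at your compiled binary through environment variables. The binary path and service name below are hypothetical:

```shell
# Hypothetical target binary and service name.
export OTEL_GO_AUTO_TARGET_EXE=/usr/local/bin/myservice
export OTEL_SERVICE_NAME=my-go-service
# Then run the auto-instrumentation agent (built per the guide) with
# sufficient privileges alongside the service:
# sudo ./otel-go-instrumentation
```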

### Implementation considerations for GoLang instrumentation
<a name="implementation-considerations-golang"></a>

This section explains how to implement explicit context propagation in GoLang applications and set up Application Signals. Properly implementing GoLang instrumentation helps you track and analyze your application's performance effectively.

#### Instrumenting the AWS SDK
<a name="instrumenting-aws-sdk"></a>

The Golang auto-instrumentation library does not support AWS SDK instrumentation out of the box. You must use the `otelaws` library instrumentation along with the auto-instrumentation agent:

1. Install the required dependency:

   ```
   go get go.opentelemetry.io/contrib/instrumentation/github.com/aws/aws-sdk-go-v2/otelaws
   ```

1. Add the following line to the application:

   ```
   otelaws.AppendMiddlewares(&cfg.APIOptions)
   ```

1. Create subsequent AWS clients with the previous `aws.Config` object:

   ```
   s3Client := s3.NewFromConfig(cfg)
   ```

The following example generates spans for AWS calls and integrates with auto-instrumentation.

```
package main

import (
    "context"

    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
    "go.opentelemetry.io/contrib/instrumentation/github.com/aws/aws-sdk-go-v2/otelaws"
)

func handleRequest(ctx context.Context) error {
    cfg, err := config.LoadDefaultConfig(ctx)
    if err != nil {
        return err
    }
    
    // Add OpenTelemetry instrumentation middleware to the AWS config
    otelaws.AppendMiddlewares(&cfg.APIOptions)
    
    // Create S3 client with the instrumented config
    s3Client := s3.NewFromConfig(cfg)
    
    // Now any operations with this client will be traced
    // with the context from the upstream call
    _, err = s3Client.ListBuckets(ctx, &s3.ListBucketsInput{})
    return err
}
```

For information on configuring the auto-instrumentation executable, see [Configuration methods](https://github.com/open-telemetry/opentelemetry-go-instrumentation/blob/main/docs/configuration.md).

#### Instrumenting HTTP calls
<a name="instrumenting-http-calls"></a>

HTTP calls can split traces when context isn't passed between requests. HTTP clients must use `NewRequestWithContext()` instead of `NewRequest()` to ensure that the downstream service uses the same context. When both services have instrumentation agents, the spans connect with the same trace ID to provide end-to-end visibility.

```
func makeDownstreamCall(ctx context.Context, url string) ([]byte, error) {
    client := &http.Client{}
    
    // Create request with context from the upstream call
    req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
    if err != nil {
        return nil, err
    }
    
    // Execute the request
    resp, err := client.Do(req)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    
    // Read and return the response body (requires the "io" package)
    return io.ReadAll(resp.Body)
}
```

#### Instrumenting SQL calls
<a name="instrumenting-sql-calls"></a>

SQL spans may become disconnected from their parent span, causing client calls to be inferred as server spans. This occurs when SQL calls do not receive the context from their upstream handlers. Standard SQL calls like `Query` and `Exec` use `context.Background()` by default, not the context of the upstream caller. Replace standard SQL calls with their context-aware equivalents:
+ Use `QueryContext` instead of `Query`
+ Use `ExecContext` instead of `Exec`

These methods pass the upstream request context to the DB calls, maintaining proper trace continuity.

```
func queryDatabase(ctx context.Context, db *sql.DB, userID string) (*sql.Rows, error) {
    // This breaks the trace context:
    // rows, err := db.Query("SELECT name FROM users WHERE id = $1", userID)
    
    // This passes the context from the upstream call for trace continuity
    rows, err := db.QueryContext(ctx, "SELECT name FROM users WHERE id = $1", userID)
    
    return rows, err
}
```

**Note**  
The `db.system` attribute is not currently supported for SQL calls. This limitation affects CloudWatch's ability to accurately identify database clients. As a result, dependencies will display **UnknownRemoteService** instead of the name of the DB client making the query.

#### Resource detectors
<a name="resource-detectors"></a>

Go auto-instrumentation does not currently support configuring resource detectors at runtime. The OpenTelemetry community is working on a feature to configure resource detectors using environment variables. Look for this feature in a future update. In the meantime, you can use the CloudWatch Agent with auto-instrumentation to automatically generate host resource attributes.

## Runtime version support matrix
<a name="rumtime-version-matix"></a>




| Language | Runtime version | 
| --- | --- | 
|  Java  |  JVM versions 8, 11, 17, 21, and 23  | 
|  Python  |  Python versions 3.9 and higher  | 
|  .NET  |  Release 1.6.0 and below supports .NET 6 and 8, and .NET Framework 4.6.2 and higher. Release 1.7.0 and higher supports .NET 8 and 9, and .NET Framework 4.6.2 and higher  | 
|  Node.js  |  Node.js versions 14, 16, 18, 20, and 22  | 
|  PHP  |  PHP versions 8.0 and higher  | 
|  Ruby  |  CRuby >= 3.1, JRuby >= 9.3.2.0, or TruffleRuby >= 22.1  | 
| GoLang | Golang versions 1.18 and higher | 

## Known issues
<a name="AppSignals-Issues"></a>

The runtime metrics collection in the Java SDK release v1.32.5 is known to not work with applications using JBoss Wildfly. This issue extends to the Amazon CloudWatch Observability EKS add-on, affecting versions `2.3.0-eksbuild.1` through `2.6.0-eksbuild.1`. The issue is fixed in Java SDK release `v1.32.6` and the Amazon CloudWatch Observability EKS add-on version `v3.0.0-eksbuild.1`.

If you are impacted, either upgrade the Java SDK version or disable your runtime metrics collection by adding the environment variable `OTEL_AWS_APPLICATION_SIGNALS_RUNTIME_ENABLED=false` to your application. 
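For example, on a plain host the workaround can be applied by exporting the variable before starting the application. The start command below is hypothetical:

```shell
# Disable runtime metrics collection as a workaround for the JBoss Wildfly issue.
export OTEL_AWS_APPLICATION_SIGNALS_RUNTIME_ENABLED=false
# java -jar my-app.jar   # hypothetical application start command
```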

# Supported instrumentation setups
<a name="Getting-Started-App-Signals"></a>

 You can enable CloudWatch Application Signals with different instrumentation setups. This topic describes each of the setup methods and recommendations based on the method you choose. 

## Use AWS Distro for OpenTelemetry with the CloudWatch Agent
<a name="w2aac28c17c31b5"></a>

 The most integrated application performance monitoring (APM) experience in CloudWatch is delivered through the AWS Distro for OpenTelemetry (ADOT) SDKs, used with the CloudWatch agent to collect application metrics and traces. This option works best if you want to get started with APM in CloudWatch quickly and also leverage out-of-the-box integrations with features such as Container Insights and CloudWatch Logs. For more information, see [Enable Application Signals on Amazon EKS Clusters](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Signals-Enable-EKS.html) and [Enable Application Signals on Amazon EC2, Amazon ECS, or Kubernetes](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Signals-Enable-Other.html). 

## Use the OpenTelemetry SDK and Collector
<a name="w2aac28c17c31b7"></a>

 This setup works for the following use cases: 

1.  You instrumented your application with OpenTelemetry SDKs, or plan to, and you currently use the OpenTelemetry Collector. 

1.  You're using languages, such as Erlang and Rust, that aren't supported by AWS Distro for OpenTelemetry (ADOT). 

 For more information, see [OpenTelemetry with CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-OpenTelemetry-Sections.html). 

## Use the AWS X-Ray SDK and daemon
<a name="w2aac28c17c31c11"></a>

 This option is best if you instrumented your application using the X-Ray SDKs and haven't migrated to the ADOT SDKs or OpenTelemetry SDKs. 

 For more information, see [Transaction Search](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Transaction-Search.html). 

## Feature comparison
<a name="w2aac28c17c31c13"></a>


| Feature | ADOT SDK + CloudWatch Agent | OpenTelemetry SDK + OpenTelemetry Collector | X-Ray SDKs | 
| --- | --- | --- | --- | 
| AWS Support | Yes | Only for data sent to AWS | Yes | 
| Nonstandard language support | No | Yes | No | 
| Container Insights integration | Yes | No | No | 
| Out of the box logging with CloudWatch Logs | Yes | No | No | 
| Out of the box runtime metrics | Yes | No | No | 
| Always gets metrics on 100% of traffic | Yes | Only at 100% sampling rate | Only at 100% sampling rate | 

# Enable Application Signals in your account
<a name="CloudWatch-Application-Signals-Enable"></a>

If you haven't enabled Application Signals in this account yet, you must grant Application Signals the permissions it needs to discover your services. You need to do this only once for your account.

**To enable CloudWatch Application Signals, do the following.**

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the navigation pane, choose **Services**.

1. Choose **Start discovering your Services**.

1. Select the check box and choose **Start discovering Services**.

   Completing this step for the first time in your account creates the **AWSServiceRoleForCloudWatchApplicationSignals** service-linked role. This role grants Application Signals the following permissions:
   + `xray:GetServiceGraph`
   + `logs:StartQuery`
   + `logs:GetQueryResults`
   + `cloudwatch:GetMetricData`
   + `cloudwatch:ListMetrics`
   + `tag:GetResources`

   For more information about this role, see [Service-linked role permissions for CloudWatch Application Signals](using-service-linked-roles.md#service-linked-role-signals).

1. Choose **Enable Application Signals**.
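If you prefer to script this step, recent versions of the AWS CLI expose a `start-discovery` command in the `application-signals` namespace that creates the same service-linked role; verify it is available in your CLI version with `aws application-signals help` before relying on it.

```shell
# Creates the AWSServiceRoleForCloudWatchApplicationSignals service-linked
# role, equivalent to choosing "Start discovering Services" in the console.
aws application-signals start-discovery
```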

# (Optional) Try out Application Signals with a sample app
<a name="CloudWatch-Application-Signals-Enable-EKS-sample"></a>

To try out CloudWatch Application Signals on a sample app before you instrument your own applications with it, follow the instructions in this section. These instructions use scripts to help you create an Amazon EKS cluster, install a sample application, and instrument the sample application to work with Application Signals.

The sample application is a Spring "Pet Clinic" application that is composed of four microservices. These services run on Amazon EKS on Amazon EC2 and leverage Application Signals enablement scripts to enable the cluster with the Java, Python, or .NET auto-instrumentation agent.

**Requirements**
+ Application Signals monitors only Java, Python, or .NET applications.
+ You must have the AWS CLI installed on the instance. We recommend AWS CLI version 2, but version 1 should also work. For more information about installing the AWS CLI, see [ Install or update the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
+ The scripts in this section are intended to be run in Linux and macOS environments. For Windows instances, we recommend that you use an AWS Cloud9 environment to run these scripts. For more information about AWS Cloud9, see [ What is AWS Cloud9?](https://docs.aws.amazon.com/cloud9/latest/user-guide/welcome.html).
+ Install a supported version of `kubectl`. You must use a version of `kubectl` within one minor version difference of your Amazon EKS cluster control plane. For example, a 1.26 `kubectl` client works with Kubernetes 1.25, 1.26, and 1.27 clusters. If you already have an Amazon EKS cluster, you might need to configure AWS credentials for `kubectl`. For more information, see [ Creating or updating a `kubeconfig` file for an Amazon EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html).
+ Install `eksctl`. `eksctl` uses the AWS CLI to interact with AWS, which means it uses the same AWS credentials as the AWS CLI. For more information, see [ Installing or updating `eksctl`](https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html).
+ Install `jq`. `jq` is required to run the Application Signals enablement scripts. For more information, see [ Download jq](https://jqlang.github.io/jq/download/). 
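A quick way to confirm that the prerequisites are installed is to check each tool's version:

```shell
# Each command prints a version string if the tool is installed and on PATH.
aws --version
kubectl version --client
eksctl version
jq --version
```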

## Step 1: Download the scripts
<a name="CloudWatch-Application-Signals-Enable-EKS-sample-scripts"></a>

To download the scripts to set up CloudWatch Application Signals with a sample app, you can download and uncompress the zipped GitHub project file to a local drive, or you can clone the GitHub project.

To clone the project, open a terminal window and enter the following Git command in a given working directory.

```
git clone https://github.com/aws-observability/application-signals-demo.git
```
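If you prefer downloading an archive instead of cloning, GitHub serves a zip of the default branch (the URL below follows GitHub's standard archive layout and assumes the default branch is `main`):

```shell
# Download and unpack the project without git.
curl -L -o application-signals-demo.zip \
  https://github.com/aws-observability/application-signals-demo/archive/refs/heads/main.zip
unzip application-signals-demo.zip
```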

## Step 2: Build and deploy the sample application
<a name="CloudWatch-Application-Signals-Enable-EKS-sample-build"></a>

To build and push the sample application images, [follow these instructions](https://github.com/aws-observability/application-signals-demo?tab=readme-ov-file#build-the-sample-application-images-and-push-to-ecr).

### Step 3: Deploy and enable Application Signals and the sample application
<a name="CloudWatch-Application-Signals-Enable-EKS-sample-deploy"></a>

Be sure that you have completed the requirements listed in [(Optional) Try out Application Signals with a sample app](#CloudWatch-Application-Signals-Enable-EKS-sample) before you complete the following steps.

**To deploy and enable Application Signals and the sample application**

1. Enter the following command. Replace *new-cluster-name* with the name that you want to use for the new cluster. Replace *region-name* with the name of the AWS Region, such as `us-west-1`.

   This command sets up the sample app running in a new Amazon EKS cluster with Application Signals enabled. 

   ```
   # this script sets up a new cluster, enables Application Signals, and deploys the
   # sample application
   cd application-signals-demo/scripts/eks/appsignals/one-step && ./setup.sh new-cluster-name region-name
   ```

   The setup script takes about 30 minutes to run, and does the following:
   + Creates a new Amazon EKS cluster in the specified Region.
   + Creates the necessary IAM permissions for Application Signals (`arn:aws:iam::aws:policy/AWSXrayWriteOnlyAccess` and `arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy`).
   + Enables Application Signals by installing the CloudWatch agent and auto-instrumenting the sample application for CloudWatch metrics and X-Ray traces.
   + Deploys the PetClinic Spring sample application in the same Amazon EKS cluster.
   + Creates five CloudWatch Synthetics canaries, named `pc-add-vist`, `pc-create-owners`, `pc-visit-pet`, `pc-visit-vet`, and `pc-clinic-traffic`. These canaries run at a one-minute frequency to generate synthetic traffic for the sample app and demonstrate how Synthetics canaries appear in Application Signals.
   + Creates four service level objectives (SLOs) for the PetClinic application with the following names:
     + **Availability for Searching an Owner**
     + **Latency for Searching an Owner**
     + **Availability for Registering an Owner**
     + **Latency for Registering an Owner**
   + Creates the required IAM role with a custom trust policy granting Application Signals the following permissions:
     + `cloudwatch:PutMetricData`
     + `cloudwatch:GetMetricData`
     + `xray:GetServiceGraph`
     + `logs:StartQuery`
     + `logs:GetQueryResults`

1. (Optional) If you want to review the source code for the PetClinic sample application, you can find it under the root folder.

   ```
   - application-signals-demo
     - spring-petclinic-admin-server
     - spring-petclinic-api-gateway
     - spring-petclinic-config-server
     - spring-petclinic-customers-service
     - spring-petclinic-discovery-server
     - spring-petclinic-vets-service
     - spring-petclinic-visits-service
   ```

1. To view the deployed PetClinic sample application, run the following command to find the URL:

   ```
   kubectl get ingress
   ```
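To print just the ingress hostname rather than the full table, a jsonpath query can be used (the ingress name varies by deployment, so this example selects the first entry):

```shell
# Prints the load balancer hostname of the first ingress in the namespace.
kubectl get ingress -o \
  jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}'
```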

### Step 4: Monitor the sample application
<a name="CloudWatch-Application-Signals-Enable-EKS-sample-monitor"></a>

After completing the steps in the previous section to create the Amazon EKS cluster and deploy the sample application, you can use Application Signals to monitor the application.

**Note**  
For the Application Signals console to start populating, some traffic must reach the sample application. Part of the previous steps created CloudWatch Synthetics canaries that generate traffic to the sample application. 

#### Service health monitoring
<a name="CloudWatch-Application-Signals-Enable-EKS-sample-monitor-service"></a>

After it is enabled, CloudWatch Application Signals automatically discovers and populates a list of services without requiring any additional setup. 

**To view the list of discovered services and monitor their health**

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the navigation pane, choose **Application Signals**, **Services**.

1. To view a service, its operations, and its dependencies, choose the name of one of the services in the list.

   This unified, application-centric view helps provide a full perspective of how users are interacting with your service. This can help you triage issues if performance anomalies occur. For complete details about the **Services** view, see [Monitor the operational health of your applications with Application Signals](Services.md).

1. Choose the **Service Operations** tab to see the standard application metrics for that service's operations. The operations include, for example, the API operations that the service handles.

   Then, to view the graphs for a single operation of that service, choose that operation name.

1. Choose the **Dependencies** tab to see the dependencies that your application has, along with the critical application metrics for each dependency. Dependencies include AWS services and third-party services that your application calls.

1. To view correlated traces from the service details page, choose a data point in one of the three graphs above the table. This populates a new pane with filtered traces from the time period. These traces are sorted and filtered based on the graph that you chose. For example, if you chose the **Latency** graph, the traces are sorted by service response time.

1. In the CloudWatch console navigation pane, choose **SLOs**. You see the SLOs that the script created for the sample application. For more information about SLOs, see [Service level objectives (SLOs)](CloudWatch-ServiceLevelObjectives.md).

### (Optional) Step 5: Cleanup
<a name="CloudWatch-Application-Signals-Enable-EKS-sample-cleanup"></a>

When you're finished testing Application Signals, you can use a script provided by Amazon to clean up and delete the artifacts created in your account for the sample application. To perform the cleanup, enter the following command. Replace *new-cluster-name* with the name of the cluster that you created for the sample app, and replace *region-name* with the name of the AWS Region, such as `us-west-1`.

```
cd application-signals-demo/scripts/eks/appsignals/one-step && ./cleanup.sh new-cluster-name region-name
```

# Enable your applications on Amazon EKS clusters
<a name="CloudWatch-Application-Signals-Enable-EKS"></a>

CloudWatch Application Signals is supported for Java, Python, Node.js, and .NET applications. To enable Application Signals for your applications on an existing Amazon EKS cluster, you can use the AWS Management Console, AWS CDK, or CloudWatch Observability add-on Auto monitor advanced configuration.

**Topics**
+ [Enable Application Signals on an Amazon EKS cluster using the console](#CloudWatch-Application-Signals-Enable-EKS-Console)
+ [Enable Application Signals on an Amazon EKS cluster using the CloudWatch Observability add-on advanced configuration](#CloudWatch-Application-Signals-Enable-EKS-Addon)
+ [Enable Application Signals on Amazon EKS using AWS CDK](#CloudWatch-Application-Signals-EKS-CDK)
+ [Enable Application Signals on Amazon EKS using Model Context Protocol (MCP)](#CloudWatch-Application-Signals-EKS-MCP)

## Enable Application Signals on an Amazon EKS cluster using the console
<a name="CloudWatch-Application-Signals-Enable-EKS-Console"></a>

To enable CloudWatch Application Signals on your applications on an existing Amazon EKS cluster, use the instructions in this section.

**Important**  
If you are already using OpenTelemetry with an application that you intend to enable for Application Signals, see [Supported systems](CloudWatch-Application-Signals-supportmatrix.md) before you enable Application Signals.

**To enable Application Signals for your applications on an existing Amazon EKS cluster**
**Note**  
If you haven't already enabled Application Signals, follow the instructions in [Enable Application Signals in your account](CloudWatch-Application-Signals-Enable.md) and then follow the procedure below.

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. Choose **Application Signals**.

1. For **Specify platform**, choose **EKS**.

1. For **Select an EKS cluster**, select the cluster where you want to enable Application Signals.

1. If this cluster does not already have the Amazon CloudWatch Observability EKS add-on enabled, you are prompted to enable it. If this is the case, do the following:

   1. Choose **Add CloudWatch Observability EKS add-on**. The Amazon EKS console appears. 

   1. Select the check box for **Amazon CloudWatch Observability** and choose **Next**.

      The CloudWatch Observability EKS add-on enables both Application Signals and CloudWatch Container Insights with enhanced observability for Amazon EKS. For more information about Container Insights, see [Container Insights](ContainerInsights.md).

   1. Select the most recent version of the add-on to install.

   1. Select an IAM role to use for the add-on. If you choose **Inherit from node**, attach the correct permissions to the IAM role used by your worker nodes. Replace *my-worker-node-role* with the IAM role used by your Kubernetes worker nodes.

      ```
      aws iam attach-role-policy \
      --role-name my-worker-node-role \
      --policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy

      aws iam attach-role-policy \
      --role-name my-worker-node-role \
      --policy-arn arn:aws:iam::aws:policy/AWSXRayWriteOnlyAccess
      ```

   1. If you want to create a service role to use the add-on, see [Install the CloudWatch agent with the Amazon CloudWatch Observability EKS add-on or the Helm chart](install-CloudWatch-Observability-EKS-addon.md).

   1. Choose **Next**, confirm the information on the screen, and choose **Create**.

   1. In the next screen, choose **Enable CloudWatch Application Signals** to return to the CloudWatch console and finish the process.

1. There are two options for enabling your applications for Application Signals. For consistency, we recommend that you choose one option per cluster.
   + The **Console** option is simpler. Using this method causes your pods to restart immediately.
   + The **Annotate Manifest File** method gives you more control of when your pods restart, and can also help you manage your monitoring in a more decentralized way if you don’t want to centralize it.
**Note**  
If you are enabling Application Signals for a Node.js application with ESM, skip to [Setting up a Node.js application with the ESM module format](#EKS-NodeJs-ESM) instead.

------
#### [ Console ]

   The **Console** option uses the advanced configuration of the Amazon CloudWatch Observability EKS add-on to set up Application Signals for your services. For more information about the add-on, see [(Optional) Additional configuration](install-CloudWatch-Observability-EKS-addon.md#install-CloudWatch-Observability-EKS-addon-configuration).

   If you don’t see a list of workloads and namespaces, ensure you have the right permissions to view them for this cluster. For more information, see [ Required permissions](https://docs.aws.amazon.com/eks/latest/userguide/view-kubernetes-resources.html#view-kubernetes-resources-permissions).

   You can either monitor all service workloads by selecting the **Auto monitor** check box or selectively choose specific workloads and namespaces to monitor.

   To monitor all service workloads with Auto monitor:

   1. Select the **Auto monitor** check box to automatically select all service workloads in the cluster.

   1. Choose **Auto restart** to restart all workload pods and enable Application Signals immediately, with the AWS Distro for OpenTelemetry (ADOT) auto-instrumentation SDKs injected into your pods.

   1. Choose **Done**. When **Auto restart** is selected, the CloudWatch Observability EKS add-on enables Application Signals immediately. Otherwise, Application Signals is enabled during the next deployment of each workload.

   You can monitor single workloads or entire namespaces.

   To monitor a single workload:

   1. Select the check box by the workload that you want to monitor.

   1. Use the **Select language(s)** dropdown list to choose the languages that you want to enable Application Signals for, and then choose the check mark icon (✓) to save the selection.

      For Python applications, ensure your application follows the required prerequisites before continuing. For more information, see [Python application doesn't start after Application Signals is enabled](CloudWatch-Application-Signals-Enable-Troubleshoot.md#Application-Signals-troubleshoot-starting-Python).

   1. Choose **Done**. The Amazon CloudWatch Observability EKS add-on immediately injects the AWS Distro for OpenTelemetry (ADOT) auto-instrumentation SDKs into your pods and triggers pod restarts to enable collection of application metrics and traces.

   To monitor an entire namespace:

   1. Select the check box by the namespace that you want to monitor.

   1. Use the **Select language(s)** dropdown list to choose the languages that you want to enable Application Signals for, and then choose the check mark icon (✓) to save the selection. This applies to all workloads in this namespace, whether they are currently deployed or will be deployed in the future.

      For Python applications, ensure your application follows the required prerequisites before continuing. For more information, see [Python application doesn't start after Application Signals is enabled](CloudWatch-Application-Signals-Enable-Troubleshoot.md#Application-Signals-troubleshoot-starting-Python).

   1. Choose **Done**. The Amazon CloudWatch Observability EKS add-on immediately injects the AWS Distro for OpenTelemetry (ADOT) auto-instrumentation SDKs into your pods and triggers pod restarts to enable collection of application metrics and traces.

   To enable Application Signals in another Amazon EKS cluster, choose **Enable Application Signals** from the **Services** screen.

------
#### [ Annotate manifest file ]

   In the CloudWatch console, the **Monitor Services** section explains that you must add an annotation to a manifest YAML in the cluster. Adding this annotation auto-instruments the application to send metrics, traces, and logs to Application Signals.

   You have two options for the annotation:
   + **Annotate Workload** auto-instruments a single workload in the cluster.
   + **Annotate Namespace** auto-instruments all workloads deployed in the selected namespace.

   Choose one of those options, and follow the appropriate steps:
   + To annotate a single workload:

     1. Choose **Annotate Workload**.

     1. Paste one of the following lines into the `PodTemplate` section of the workload manifest file.
        + **For Java workloads:** `annotations: instrumentation.opentelemetry.io/inject-java: "true"`
        + **For Python workloads:** `annotations: instrumentation.opentelemetry.io/inject-python: "true"`

          For Python applications, there are additional required configurations. For more information, see [Python application doesn't start after Application Signals is enabled](CloudWatch-Application-Signals-Enable-Troubleshoot.md#Application-Signals-troubleshoot-starting-Python).
        + **For .NET workloads** `annotations: instrumentation.opentelemetry.io/inject-dotnet: "true"`
**Note**  
To enable Application Signals for a .NET workload on Alpine Linux (`linux-musl-x64`) based images, add the following annotation.  

          ```
          instrumentation.opentelemetry.io/otel-dotnet-auto-runtime: "linux-musl-x64"
          ```
        + **For Node.js workloads:** `annotations: instrumentation.opentelemetry.io/inject-nodejs: "true"`

     1. In your terminal, enter `kubectl apply -f your_deployment_yaml` to apply the change.
   + To annotate all workloads in a namespace:

     1. Choose **Annotate Namespace**.

     1. Paste one of the following lines into the metadata section of the namespace manifest file. If the namespace includes Java, Python, and .NET workloads, paste all of the following lines into the namespace manifest file.
        + **If there are Java workloads in the namespace:** `annotations: instrumentation.opentelemetry.io/inject-java: "true"`
        + **If there are Python workloads in the namespace:** `annotations: instrumentation.opentelemetry.io/inject-python: "true"`

          For Python applications, there are additional required configurations. For more information, see [Python application doesn't start after Application Signals is enabled](CloudWatch-Application-Signals-Enable-Troubleshoot.md#Application-Signals-troubleshoot-starting-Python).
        + **If there are .NET workloads in the namespace:** `annotations: instrumentation.opentelemetry.io/inject-dotnet: "true"`
        + **If there are Node.js workloads in the namespace:** `annotations: instrumentation.opentelemetry.io/inject-nodejs: "true"`

     1. In your terminal, enter `kubectl apply -f your_namespace_yaml` to apply the change.

     1. In your terminal, enter a command to restart all pods in the namespace. An example command to restart deployment workloads is `kubectl rollout restart deployment -n namespace_name`.
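   As an alternative to editing manifest files by hand, the same annotation can be applied with a one-off `kubectl patch`. The example below is a sketch for a hypothetical Java deployment named `my-app`; substitute your own workload name and language annotation. Changing the pod template this way also restarts the pods.

   ```shell
   # Adds the Java auto-instrumentation annotation to the pod template of a
   # hypothetical deployment "my-app"; the template change restarts its pods.
   kubectl patch deployment my-app --type merge -p \
     '{"spec":{"template":{"metadata":{"annotations":{"instrumentation.opentelemetry.io/inject-java":"true"}}}}}'
   ```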

------

1. Choose **View Services** when done. This takes you to the Application Signals Services view, where you can see the data that Application Signals is collecting. It might take a few minutes for data to appear.

   To enable Application Signals in another Amazon EKS cluster, choose **Enable Application Signals** from the **Services** screen.

   For more information about the **Services** view, see [Monitor the operational health of your applications with Application Signals](Services.md).

**Note**  
If you're using a WSGI server for your Python application, see [No Application Signals data for Python application that uses a WSGI server](CloudWatch-Application-Signals-Enable-Troubleshoot.md#Application-Signals-troubleshoot-Python-WSGI) for information to make Application Signals work.  
We've also identified other considerations that you should keep in mind when enabling Python applications for Application Signals. For more information, see [Python application doesn't start after Application Signals is enabled](CloudWatch-Application-Signals-Enable-Troubleshoot.md#Application-Signals-troubleshoot-starting-Python).

### Setting up a Node.js application with the ESM module format
<a name="EKS-NodeJs-ESM"></a>

We provide limited support for Node.js applications with the ESM module format. For details, see [Known limitations about Node.js with ESM](CloudWatch-Application-Signals-supportmatrix.md#ESM-limitations).

For the ESM module format, enabling Application Signals through the console or by annotating the manifest file doesn’t work. Skip step 8 of the previous procedure, and do the following instead.

**To enable Application Signals for a Node.js application with ESM**

1. Install the relevant dependencies to your Node.js application for autoinstrumentation:

   ```
   npm install @aws/aws-distro-opentelemetry-node-autoinstrumentation
   npm install @opentelemetry/instrumentation@0.54.0
   ```

1. Add the following environment variables to the Dockerfile for your application and build the image.

   ```
   ...
   ENV OTEL_AWS_APPLICATION_SIGNALS_ENABLED=true
   ENV OTEL_TRACES_SAMPLER_ARG='endpoint=http://cloudwatch-agent.amazon-cloudwatch:2000'
   ENV OTEL_TRACES_SAMPLER='xray'
   ENV OTEL_EXPORTER_OTLP_PROTOCOL='http/protobuf'
   ENV OTEL_EXPORTER_OTLP_TRACES_ENDPOINT='http://cloudwatch-agent.amazon-cloudwatch:4316/v1/traces'
   ENV OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT='http://cloudwatch-agent.amazon-cloudwatch:4316/v1/metrics'
   ENV OTEL_METRICS_EXPORTER='none'
   ENV OTEL_LOGS_EXPORTER='none'
   ENV NODE_OPTIONS='--import @aws/aws-distro-opentelemetry-node-autoinstrumentation/register --experimental-loader=@opentelemetry/instrumentation/hook.mjs'
   ENV OTEL_SERVICE_NAME='YOUR_SERVICE_NAME' #replace with a proper service name
   ENV OTEL_PROPAGATORS='tracecontext,baggage,b3,xray'
   ...
   
   # command to start the application
   # for example
   # CMD ["node", "index.mjs"]
   ```

1. Add the environment variables `OTEL_RESOURCE_ATTRIBUTES_POD_NAME`, `OTEL_RESOURCE_ATTRIBUTES_NODE_NAME`, `OTEL_RESOURCE_ATTRIBUTES_DEPLOYMENT_NAME`, `POD_NAMESPACE`, and `OTEL_RESOURCE_ATTRIBUTES` to the deployment YAML file for the application. For example:

   ```
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: nodejs-app
     labels:
       app: nodejs-app
   spec:
     replicas: 2
     selector:
       matchLabels:
         app: nodejs-app
     template:
       metadata:
         labels:
           app: nodejs-app
         # annotations:
          # make sure this annotation doesn't exist
         #   instrumentation.opentelemetry.io/inject-nodejs: 'true'
       spec:
         containers:
         - name: nodejs-app
        image: your-nodejs-application-image # replace with a proper image URI
           imagePullPolicy: Always
           ports:
           - containerPort: 8000
           env:
             - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME
               valueFrom:
                 fieldRef:
                   fieldPath: metadata.name
             - name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME
               valueFrom:
                 fieldRef:
                   fieldPath: spec.nodeName
             - name: OTEL_RESOURCE_ATTRIBUTES_DEPLOYMENT_NAME
               valueFrom:
                 fieldRef:
                   fieldPath: metadata.labels['app'] # Assuming 'app' label is set to the deployment name
             - name: POD_NAMESPACE
               valueFrom:
                 fieldRef:
                   fieldPath: metadata.namespace
             - name: OTEL_RESOURCE_ATTRIBUTES
               value: "k8s.deployment.name=$(OTEL_RESOURCE_ATTRIBUTES_DEPLOYMENT_NAME),k8s.namespace.name=$(POD_NAMESPACE),k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME)"
   ```

1. Deploy the Node.js application to the cluster.

Once you have enabled your applications on your Amazon EKS clusters, you can monitor your application health. For more information, see [Monitor the operational health of your applications with Application Signals](Services.md).

## Enable Application Signals on an Amazon EKS cluster using the CloudWatch Observability add-on advanced configuration
<a name="CloudWatch-Application-Signals-Enable-EKS-Addon"></a>

By default, OpenTelemetry (OTEL) based Application Performance Monitoring (APM) is enabled through Application Signals when installing either the CloudWatch Observability EKS add-on (V5.0.0 or greater) or the Helm chart. You can further customize specific settings using the advanced configuration for the Amazon EKS add-on or by overriding values with the Helm chart.

**Note**  
If you use any OpenTelemetry (OTEL) based APM solution, enabling Application Signals affects your existing observability setup. Review your current implementation before proceeding. To maintain your existing APM setup after upgrading to V5.0.0 or later, see [Opt out of Application Signals](install-CloudWatch-Observability-EKS-addon.md#Opting-out-App-Signals).

The CloudWatch Observability add-on also provides fine-grained control to include or exclude specific services as needed through the new advanced configuration. For more information, see [Enabling APM through Application Signals for your Amazon EKS cluster](install-CloudWatch-Observability-EKS-addon.md#Container-Insights-setup-EKS-appsignalsconfiguration).
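Before overriding any values, you can retrieve the add-on's configuration schema from Amazon EKS to see which settings it accepts. The add-on version below is a placeholder; list the versions available in your Region with `aws eks describe-addon-versions --addon-name amazon-cloudwatch-observability`.

```shell
# Prints the JSON schema of configurable values for the add-on.
# Replace the version with one returned by describe-addon-versions.
aws eks describe-addon-configuration \
  --addon-name amazon-cloudwatch-observability \
  --addon-version v5.0.0-eksbuild.1 \
  --query configurationSchema --output text
```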

## Enable Application Signals on Amazon EKS using AWS CDK
<a name="CloudWatch-Application-Signals-EKS-CDK"></a>

 If you haven't enabled Application Signals in this account yet, you must grant Application Signals the permissions it needs to discover your services. See [Enable Application Signals in your account](CloudWatch-Application-Signals-Enable.md).

1. Enable Application Signals for your applications.

   ```
   import { aws_applicationsignals as applicationsignals } from 'aws-cdk-lib';
   
   const cfnDiscovery = new applicationsignals.CfnDiscovery(this,
     'ApplicationSignalsServiceRole', { }
   );
   ```

   The Discovery CloudFormation resource grants Application Signals the following permissions:
   + `xray:GetServiceGraph`
   + `logs:StartQuery`
   + `logs:GetQueryResults`
   + `cloudwatch:GetMetricData`
   + `cloudwatch:ListMetrics`
   + `tag:GetResources`

   For more information about this role, see [Service-linked role permissions for CloudWatch Application Signals](using-service-linked-roles.md#service-linked-role-signals).

1. Install the `amazon-cloudwatch-observability` add-on.

   1. Create an IAM role with the `CloudWatchAgentServerPolicy` managed policy, trusted by the OIDC provider associated with the cluster.

     ```
     import { ManagedPolicy, OpenIdConnectPrincipal, Role } from 'aws-cdk-lib/aws-iam';

     const cloudwatchRole = new Role(this, 'CloudWatchAgentAddOnRole', {
         assumedBy: new OpenIdConnectPrincipal(cluster.openIdConnectProvider),
         managedPolicies: [ManagedPolicy.fromAwsManagedPolicyName('CloudWatchAgentServerPolicy')],
     });
     ```

1. Install the add-on with the IAM role created above.

   ```
    import { CfnAddon } from 'aws-cdk-lib/aws-eks';

    new CfnAddon(this, 'CloudWatchAddon', {
        addonName: 'amazon-cloudwatch-observability',
        clusterName: cluster.clusterName,
        serviceAccountRoleArn: cloudwatchRole.roleArn
    });
   ```

1. Add one of the following into the `PodTemplate` section of your workload manifest file.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Signals-Enable-EKS.html)

   ```
   const deployment = {
     apiVersion: "apps/v1",
     kind: "Deployment",
     metadata: { name: "sample-app" },
     spec: {
       replicas: 3,
       selector: {
         matchLabels: {
           "app": "sample-app"
         }
       },
       template: {
         metadata: {
           labels: {
             "app": "sample-app"
           },
           annotations: {
             "instrumentation.opentelemetry.io/inject-$LANG": "true"
           }
         },
         spec: {...},
       },
     },
   };
   
   cluster.addManifest('sample-app', deployment)
   ```

## Enable Application Signals on Amazon EKS using Model Context Protocol (MCP)
<a name="CloudWatch-Application-Signals-EKS-MCP"></a>

You can use the CloudWatch Application Signals Model Context Protocol (MCP) server to enable Application Signals on your Amazon EKS clusters through conversational AI interactions. This provides a natural language interface for setting up Application Signals monitoring.

The MCP server automates the enablement process by understanding your requirements and generating the appropriate configuration. Instead of manually following console steps or writing CDK code, you can simply describe what you want to enable.

### Prerequisites
<a name="CloudWatch-Application-Signals-EKS-MCP-Prerequisites"></a>

Before using the MCP server to enable Application Signals, ensure you have:
+ A development environment that supports MCP (such as Kiro, Claude Desktop, VS Code with MCP extensions, or other MCP-compatible tools)
+ The CloudWatch Application Signals MCP server configured in your IDE. For detailed setup instructions, see [CloudWatch Application Signals MCP Server documentation](https://awslabs.github.io/mcp/servers/cloudwatch-applicationsignals-mcp-server).

### Using the MCP server
<a name="CloudWatch-Application-Signals-EKS-MCP-Usage"></a>

Once you have configured the CloudWatch Application Signals MCP server in your IDE, you can request enablement guidance using natural language prompts. While the coding assistant can infer context from your project structure, providing specific details in your prompts helps ensure more accurate and relevant guidance. Include information such as your application language, Amazon EKS cluster name, and absolute paths to your infrastructure and application code.

**Best practice prompts (specific and complete):**

```
"Enable Application Signals for my Python service running on EKS.
My app code is in /home/user/flask-api and IaC is in /home/user/flask-api/terraform"

"I want to add observability to my Node.js application on EKS cluster 'production-cluster'.
The application code is at /Users/dev/checkout-service and
the Kubernetes manifests are at /Users/dev/checkout-service/k8s"

"Help me instrument my Java Spring Boot application on EKS with Application Signals.
Application directory: /opt/apps/payment-api
CDK infrastructure: /opt/apps/payment-api/cdk"
```

**Less effective prompts:**

```
"Enable monitoring for my app"
→ Missing: platform, language, paths

"Enable Application Signals. My code is in ./src and IaC is in ./infrastructure"
→ Problem: Relative paths instead of absolute paths

"Enable Application Signals for my EKS service at /home/user/myapp"
→ Missing: programming language
```

**Quick template:**

```
"Enable Application Signals for my [LANGUAGE] service on EKS.
App code: [ABSOLUTE_PATH_TO_APP]
IaC code: [ABSOLUTE_PATH_TO_IAC]"
```

### Benefits of using the MCP server
<a name="CloudWatch-Application-Signals-EKS-MCP-Benefits"></a>

Using the CloudWatch Application Signals MCP server offers several advantages:
+ **Natural language interface:** Describe what you want to enable without memorizing commands or configuration syntax
+ **Context-aware guidance:** The MCP server understands your specific environment and provides tailored recommendations
+ **Reduced errors:** Automated configuration generation minimizes manual typing errors
+ **Faster setup:** Get from intention to implementation more quickly
+ **Learning tool:** See the generated configurations and understand how Application Signals works

### Additional resources
<a name="CloudWatch-Application-Signals-EKS-MCP-MoreInfo"></a>

For more information about configuring and using the CloudWatch Application Signals MCP server, see the [MCP server documentation](https://awslabs.github.io/mcp/servers/cloudwatch-applicationsignals-mcp-server).

# Enable your applications on Amazon EC2
<a name="CloudWatch-Application-Signals-Enable-EC2Main"></a>

Enable CloudWatch Application Signals on Amazon EC2 by using the custom setup steps described in this section.

For applications running on Amazon EC2, you install and configure the CloudWatch agent and AWS Distro for OpenTelemetry yourself. On these architectures enabled with a custom Application Signals setup, Application Signals doesn't autodiscover the names of your services or the hosts or clusters they run on. You must specify these names during the custom setup, and the names that you specify are what is displayed on Application Signals dashboards.

The instructions in this section are for Java, Python, and .NET applications. The steps have been tested on Amazon EC2 instances, but are also expected to work on other architectures that support AWS Distro for OpenTelemetry.

**Requirements**
+ To get support for Application Signals, you must use the most recent version of both the CloudWatch agent and the AWS Distro for OpenTelemetry agent.
+ You must have the AWS CLI installed on the instance. We recommend AWS CLI version 2, but version 1 should also work. For more information about installing the AWS CLI, see [ Install or update the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

**Important**  
If you are already using OpenTelemetry with an application that you intend to enable for Application Signals, see [Supported systems](CloudWatch-Application-Signals-supportmatrix.md) before you enable Application Signals.

## Step 1: Enable Application Signals in your account
<a name="CloudWatch-Application-Signals-EC2-Grant"></a>

You must first enable Application Signals in your account. If you haven't, see [Enable Application Signals in your account](CloudWatch-Application-Signals-Enable.md).

## Step 2: Download and start the CloudWatch agent
<a name="CloudWatch-Application-Signals-Enable-Other-agent"></a>

**To install the CloudWatch agent as part of enabling Application Signals on an Amazon EC2 instance or on-premises host**

1. Download the latest version of the CloudWatch agent to the instance. If the instance already has the CloudWatch agent installed, you might need to update it. Only versions of the agent released on November 30, 2023 or later support CloudWatch Application Signals.
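
   For example, on Amazon Linux you might download and install the agent package as follows. The URL pattern follows the CloudWatch agent download documentation; verify the OS and architecture segments for your platform.

   ```
   # Download the latest CloudWatch agent package for Amazon Linux (x86_64)
   # and install it. Adjust the URL for your OS and architecture.
   wget https://amazoncloudwatch-agent.s3.amazonaws.com/amazon_linux/amd64/latest/amazon-cloudwatch-agent.rpm
   sudo rpm -U ./amazon-cloudwatch-agent.rpm
   ```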

1. Before you start the CloudWatch agent, configure it to enable Application Signals. The following example is a CloudWatch agent configuration that enables Application Signals for both metrics and traces on an EC2 host.

   We recommend that you place this file at `/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json` on Linux systems.

   ```
   {
     "traces": {
       "traces_collected": {
         "application_signals": {}
       }
     },
     "logs": {
       "metrics_collected": {
         "application_signals": {}
       }
     }
   }
   ```

1. Attach the **CloudWatchAgentServerPolicy** IAM policy to the IAM role of your Amazon EC2 instance. For permissions for on-premises hosts, see [Permissions for on-premises servers](#Enable-OnPremise-Permissions).

   1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

   1. Choose **Roles** and find the role used by your Amazon EC2 instance. Then choose the name of that role.

   1. In the **Permissions** tab, choose **Add permissions**, **Attach policies**.

   1. Find **CloudWatchAgentServerPolicy**. Use the search box if needed. Then select the check box for that policy and choose **Add permissions**.
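
   Alternatively, you can attach the managed policy with the AWS CLI. The role name *MyEC2Role* below is a placeholder for the name of your instance's role.

   ```
   # Attach the CloudWatchAgentServerPolicy managed policy to the instance role.
   aws iam attach-role-policy \
     --role-name MyEC2Role \
     --policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy
   ```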

1. Start the CloudWatch agent by entering the following commands. Set `CONFIG_FILE_PATH` to the path of the CloudWatch agent configuration file, such as `./amazon-cloudwatch-agent.json`. You must include the `file:` prefix as shown.

   ```
   export CONFIG_FILE_PATH=./amazon-cloudwatch-agent.json
   ```

   ```
   sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
   -a fetch-config \
   -m ec2 -s -c file:$CONFIG_FILE_PATH
   ```
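
   After the agent starts, you can confirm that it is running by using the control script's status command.

   ```
   # Query the CloudWatch agent's current status.
   sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -m ec2 -a status
   ```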

### Permissions for on-premises servers
<a name="Enable-OnPremise-Permissions"></a>

For an on-premises host, you must provide AWS credentials so that the host is authorized to send data to AWS.

**To set up permissions for an on-premises host**

1. Create the IAM user to be used to provide permissions to your on-premises host:

   1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

   1. Choose **Users**, **Create user**.

   1. In **User details**, for **User name**, enter a name for the new IAM user. This is the sign-in name for AWS that will be used to authenticate your host. Then choose **Next**.

   1. On the **Set permissions** page, under **Permissions options**, select **Attach policies directly**.

   1. From the **Permissions policies** list, select the **CloudWatchAgentServerPolicy** policy to add to your user. Then choose **Next**.

   1. On the **Review and create** page, ensure that you are satisfied with the user name and that the **CloudWatchAgentServerPolicy** policy is in the **Permissions summary**.

   1. Choose **Create user**.

1. Create and retrieve your AWS access key and secret key:

   1. In the navigation pane in the IAM console, choose **Users** and then select the user name of the user that you created in the previous step.

   1. On the user's page, choose the **Security credentials** tab. Then, in the **Access keys** section, choose **Create access key**.

   1. For **Create access key Step 1**, choose **Command Line Interface (CLI)**.

   1. For **Create access key Step 2**, optionally enter a tag and then choose **Next**.

   1. For **Create access key Step 3**, select **Download .csv file** to save a .csv file with your IAM user's access key and secret access key. You need this information for the next steps.

   1. Choose **Done**.

1. Configure your AWS credentials in your on-premises host by entering the following command. Replace *ACCESS\_KEY\_ID* and *SECRET\_ACCESS\_ID* with the newly generated access key and secret access key from the .csv file that you downloaded in the previous step.

   ```
   $ aws configure
   AWS Access Key ID [None]: ACCESS_KEY_ID
   AWS Secret Access Key [None]: SECRET_ACCESS_ID
   Default region name [None]: MY_REGION
   Default output format [None]: json
   ```

## Step 3: Instrument your application and start it
<a name="CloudWatch-Application-Signals-Enable-Other-instrument"></a>

The next step is to instrument your application for CloudWatch Application Signals.

------
#### [ Java ]

**To instrument your Java applications as part of enabling Application Signals on an Amazon EC2 instance or on-premises host**

1. Download the latest version of the AWS Distro for OpenTelemetry Java auto-instrumentation agent. You can download the latest version by using [ this link](https://github.com/aws-observability/aws-otel-java-instrumentation/releases/latest/download/aws-opentelemetry-agent.jar). You can view information about all released versions at [ aws-otel-java-instrumentation Releases](https://github.com/aws-observability/aws-otel-java-instrumentation/releases). 

1. To optimize your Application Signals benefits, use environment variables to provide additional information before you start your application. This information will be displayed in Application Signals dashboards.

   1. For the `OTEL_RESOURCE_ATTRIBUTES` variable, specify the following information as key-value pairs:
     + (Optional) `service.name` sets the name of the service. This will be displayed as the service name for your application in Application Signals dashboards. If you don't provide a value for this key, the default of `UnknownService` is used.
     + (Optional) `deployment.environment` sets the environment that the application runs in. This will be displayed as the **Hosted In** environment of your application in Application Signals dashboards. If you don't specify this, one of the following defaults is used:
       + If this is an instance that is part of an Auto Scaling group, it is set to `ec2:name-of-Auto-Scaling-group`
       + If this is an Amazon EC2 instance that is not part of an Auto Scaling group, it is set to `ec2:default` 
       + If this is an on-premises host, it is set to `generic:default` 

        This attribute key is used only by Application Signals, and is converted into X-Ray trace annotations and CloudWatch metric dimensions.
     + For the `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` variable, specify the base endpoint URL where traces are to be exported to. The CloudWatch agent exposes 4316 as its OTLP port. On Amazon EC2, because applications communicate with the local CloudWatch agent, you should set this value to `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4316/v1/traces`
     + For the `OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT` variable, specify the base endpoint URL where metrics are to be exported to. The CloudWatch agent exposes 4316 as its OTLP port. On Amazon EC2, because applications communicate with the local CloudWatch agent, you should set this value to `OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT=http://localhost:4316/v1/metrics`
     + For the `JAVA_TOOL_OPTIONS` variable, specify the path where the AWS Distro for OpenTelemetry Java auto-instrumentation agent is stored.

       ```
       export JAVA_TOOL_OPTIONS=" -javaagent:$AWS_ADOT_JAVA_INSTRUMENTATION_PATH"
       ```

       For example:

       ```
       export AWS_ADOT_JAVA_INSTRUMENTATION_PATH=./aws-opentelemetry-agent.jar
       ```
     + For the `OTEL_METRICS_EXPORTER` variable, we recommend that you set the value to `none`. This disables other metrics exporters so that only the Application Signals exporter is used.
     + Set `OTEL_AWS_APPLICATION_SIGNALS_ENABLED` to `true`. This generates Application Signals metrics from traces.

1. Start your application with the environment variables listed in the previous step. The following is an example of a starting script.
**Note**  
The following configuration supports only versions 1.32.2 and later of the AWS Distro for OpenTelemetry auto-instrumentation agent for Java.

   ```
   JAVA_TOOL_OPTIONS=" -javaagent:$AWS_ADOT_JAVA_INSTRUMENTATION_PATH" \
   OTEL_METRICS_EXPORTER=none \
   OTEL_LOGS_EXPORTER=none \
   OTEL_AWS_APPLICATION_SIGNALS_ENABLED=true \
   OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT=http://localhost:4316/v1/metrics \
   OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf \
   OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4316/v1/traces \
   OTEL_RESOURCE_ATTRIBUTES="service.name=$YOUR_SVC_NAME" \
   java -jar $MY_JAVA_APP.jar
   ```

1. (Optional) To enable log correlation, in `OTEL_RESOURCE_ATTRIBUTES`, set the additional attribute `aws.log.group.names` to the log groups of your application. By doing so, the traces and metrics from your application can be correlated with the relevant log entries from those log groups. For this attribute, replace *$YOUR\_APPLICATION\_LOG\_GROUP* with the log group names for your application. If you have multiple log groups, you can use an ampersand (`&`) to separate them, as in this example: `aws.log.group.names=log-group-1&log-group-2`. To enable metric to log correlation, setting this attribute is enough. For more information, see [Enable metric to log correlation](Application-Signals-MetricLogCorrelation.md). To enable trace to log correlation, you must also change the logging configuration in your application. For more information, see [Enable trace to log correlation](Application-Signals-TraceLogCorrelation.md). 

   The following is an example of a starting script that helps enable log correlation. 

   ```
   JAVA_TOOL_OPTIONS=" -javaagent:$AWS_ADOT_JAVA_INSTRUMENTATION_PATH" \
   OTEL_METRICS_EXPORTER=none \
   OTEL_LOGS_EXPORTER=none \
   OTEL_AWS_APPLICATION_SIGNALS_ENABLED=true \
   OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT=http://localhost:4316/v1/metrics \
   OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf \
   OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4316/v1/traces \
   OTEL_RESOURCE_ATTRIBUTES="aws.log.group.names=$YOUR_APPLICATION_LOG_GROUP,service.name=$YOUR_SVC_NAME" \
   java -jar $MY_JAVA_APP.jar
   ```

------
#### [ Python ]

**Note**  
If you're using a WSGI server for your Python application, in addition to the following steps in this section, see [No Application Signals data for Python application that uses a WSGI server](CloudWatch-Application-Signals-Enable-Troubleshoot.md#Application-Signals-troubleshoot-Python-WSGI) for information to make Application Signals work.

**To instrument your Python applications as part of enabling Application Signals on an Amazon EC2 instance**

1. Download the latest version of the AWS Distro for OpenTelemetry Python auto-instrumentation agent. Install it by running the following command.

   ```
   pip install aws-opentelemetry-distro
   ```

   You can view information about all released versions at [AWS Distro for OpenTelemetry Python instrumentation](https://github.com/aws-observability/aws-otel-python-instrumentation/releases). 

1. To optimize your Application Signals benefits, use environment variables to provide additional information before you start your application. This information will be displayed in Application Signals dashboards.

   1. For the `OTEL_RESOURCE_ATTRIBUTES` variable, specify the following information as key-value pairs:
      + `service.name` sets the name of the service. This will be displayed as the service name for your application in Application Signals dashboards. If you don't provide a value for this key, the default of `UnknownService` is used.
      + `deployment.environment` sets the environment that the application runs in. This will be displayed as the **Hosted In** environment of your application in Application Signals dashboards. If you don't specify this, one of the following defaults is used:
        + If this is an instance that is part of an Auto Scaling group, it is set to `ec2:name-of-Auto-Scaling-group`. 
        + If this is an Amazon EC2 instance that is not part of an Auto Scaling group, it is set to `ec2:default` 
        + If this is an on-premises host, it is set to `generic:default` 

         This attribute key is used only by Application Signals, and is converted into X-Ray trace annotations and CloudWatch metric dimensions.

   1. For the `OTEL_EXPORTER_OTLP_PROTOCOL` variable, specify `http/protobuf` to export telemetry data over HTTP to the CloudWatch agent endpoints listed in the following steps.

   1. For the `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` variable, specify the base endpoint URL where traces are to be exported to. The CloudWatch agent exposes 4316 as its OTLP port over HTTP. On Amazon EC2, because applications communicate with the local CloudWatch agent, you should set this value to `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4316/v1/traces`

   1. For the `OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT` variable, specify the base endpoint URL where metrics are to be exported to. The CloudWatch agent exposes 4316 as its OTLP port over HTTP. On Amazon EC2, because applications communicate with the local CloudWatch agent, you should set this value to `OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT=http://localhost:4316/v1/metrics`

   1. For the `OTEL_METRICS_EXPORTER` variable, we recommend that you set the value to `none`. This disables other metrics exporters so that only the Application Signals exporter is used.

   1. Set the `OTEL_AWS_APPLICATION_SIGNALS_ENABLED` variable to `true` to have your container start sending X-Ray traces and CloudWatch metrics to Application Signals.

1. Start your application with the environment variables discussed in the previous step. The following is an example of a starting script.
   + Replace `$SVC_NAME` with your application name. This will be displayed as the name of the application in Application Signals dashboards.
   + Replace `$MY_PYTHON_APP` with the location and name of your application.

   ```
   OTEL_METRICS_EXPORTER=none \
   OTEL_LOGS_EXPORTER=none \
   OTEL_AWS_APPLICATION_SIGNALS_ENABLED=true \
   OTEL_PYTHON_DISTRO=aws_distro \
   OTEL_PYTHON_CONFIGURATOR=aws_configurator \
   OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf \
   OTEL_TRACES_SAMPLER=xray \
   OTEL_TRACES_SAMPLER_ARG="endpoint=http://localhost:2000" \
   OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT=http://localhost:4316/v1/metrics \
   OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4316/v1/traces \
   OTEL_RESOURCE_ATTRIBUTES="service.name=$SVC_NAME" \
   opentelemetry-instrument python $MY_PYTHON_APP.py
   ```

   Before you enable Application Signals for your Python applications, be aware of the following considerations.
   + In some containerized applications, a missing `PYTHONPATH` environment variable can sometimes cause the application to fail to start. To resolve this, ensure that you set the `PYTHONPATH` environment variable to the location of your application’s working directory. This is due to a known issue with OpenTelemetry auto-instrumentation. For more information about this issue, see [ Python autoinstrumentation setting of PYTHONPATH is not compliant](https://github.com/open-telemetry/opentelemetry-operator/issues/2302).
   + For Django applications, there are additional required configurations, which are outlined in the [ OpenTelemetry Python documentation](https://opentelemetry-python.readthedocs.io/en/latest/examples/django/README.html).
     + Use the `--noreload` flag to prevent automatic reloading.
     + Set the `DJANGO_SETTINGS_MODULE` environment variable to the location of your Django application’s `settings.py` file. This ensures that OpenTelemetry can correctly access and integrate with your Django settings. 
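
     Putting these together, a hypothetical start command for a Django project whose settings module is `mysite.settings` (a placeholder name) might look like the following.

     ```
     # DJANGO_SETTINGS_MODULE and --noreload are required for OpenTelemetry
     # auto-instrumentation to work with Django; "mysite" is a placeholder.
     DJANGO_SETTINGS_MODULE=mysite.settings \
     OTEL_RESOURCE_ATTRIBUTES="service.name=$SVC_NAME" \
     opentelemetry-instrument python manage.py runserver --noreload
     ```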

1. (Optional) To enable log correlation, in `OTEL_RESOURCE_ATTRIBUTES`, set the additional attribute `aws.log.group.names` to the log groups of your application. By doing so, the traces and metrics from your application can be correlated with the relevant log entries from those log groups. For this attribute, replace *$YOUR\_APPLICATION\_LOG\_GROUP* with the log group names for your application. If you have multiple log groups, you can use an ampersand (`&`) to separate them, as in this example: `aws.log.group.names=log-group-1&log-group-2`. To enable metric to log correlation, setting this attribute is enough. For more information, see [Enable metric to log correlation](Application-Signals-MetricLogCorrelation.md). To enable trace to log correlation, you must also change the logging configuration in your application. For more information, see [Enable trace to log correlation](Application-Signals-TraceLogCorrelation.md). 

   The following is an example of a starting script that helps enable log correlation. 

   ```
   OTEL_METRICS_EXPORTER=none \
   OTEL_LOGS_EXPORTER=none \
   OTEL_AWS_APPLICATION_SIGNALS_ENABLED=true \
   OTEL_PYTHON_DISTRO=aws_distro \
   OTEL_PYTHON_CONFIGURATOR=aws_configurator \
   OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf \
   OTEL_TRACES_SAMPLER=xray \
   OTEL_TRACES_SAMPLER_ARG="endpoint=http://localhost:2000" \
   OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT=http://localhost:4316/v1/metrics \
   OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4316/v1/traces \
   OTEL_RESOURCE_ATTRIBUTES="aws.log.group.names=$YOUR_APPLICATION_LOG_GROUP,service.name=$SVC_NAME" \
   opentelemetry-instrument python $MY_PYTHON_APP.py
   ```

------
#### [ .NET ]

**To instrument your .NET applications as part of enabling Application Signals on an Amazon EC2 instance or on-premises host**

1. Download the latest version of the AWS Distro for OpenTelemetry .NET auto-instrumentation package. You can download the latest version at [ aws-otel-dotnet-instrumentation Releases](https://github.com/aws-observability/aws-otel-dotnet-instrumentation/releases). 

1. To enable Application Signals, set the following environment variables to provide additional information before you start your application. These variables are necessary to set up the startup hook for .NET instrumentation, before you start your .NET application. Replace `dotnet-service-name` in the `OTEL_RESOURCE_ATTRIBUTES` environment variable with the service name of your choice.
   + The following is an example for Linux.

     ```
     export INSTALL_DIR=OpenTelemetryDistribution
     export CORECLR_ENABLE_PROFILING=1
     export CORECLR_PROFILER={918728DD-259F-4A6A-AC2B-B85E1B658318}
     export CORECLR_PROFILER_PATH=${INSTALL_DIR}/linux-x64/OpenTelemetry.AutoInstrumentation.Native.so
     export DOTNET_ADDITIONAL_DEPS=${INSTALL_DIR}/AdditionalDeps
     export DOTNET_SHARED_STORE=${INSTALL_DIR}/store
     export DOTNET_STARTUP_HOOKS=${INSTALL_DIR}/net/OpenTelemetry.AutoInstrumentation.StartupHook.dll
     export OTEL_DOTNET_AUTO_HOME=${INSTALL_DIR}
     
     export OTEL_DOTNET_AUTO_PLUGINS="AWS.Distro.OpenTelemetry.AutoInstrumentation.Plugin, AWS.Distro.OpenTelemetry.AutoInstrumentation"
     
     export OTEL_RESOURCE_ATTRIBUTES=service.name=dotnet-service-name
     export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
     export OTEL_EXPORTER_OTLP_ENDPOINT=http://127.0.0.1:4316
     export OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT=http://127.0.0.1:4316/v1/metrics
     export OTEL_METRICS_EXPORTER=none
     export OTEL_AWS_APPLICATION_SIGNALS_ENABLED=true
     export OTEL_TRACES_SAMPLER=xray
     export OTEL_TRACES_SAMPLER_ARG=http://127.0.0.1:2000
     ```
   + The following is an example for Windows Server.

     ```
     $env:INSTALL_DIR = "OpenTelemetryDistribution"
     $env:CORECLR_ENABLE_PROFILING = 1
     $env:CORECLR_PROFILER = "{918728DD-259F-4A6A-AC2B-B85E1B658318}"
     $env:CORECLR_PROFILER_PATH = Join-Path $env:INSTALL_DIR "win-x64/OpenTelemetry.AutoInstrumentation.Native.dll"
     $env:DOTNET_ADDITIONAL_DEPS = Join-Path $env:INSTALL_DIR "AdditionalDeps"
     $env:DOTNET_SHARED_STORE = Join-Path $env:INSTALL_DIR "store"
     $env:DOTNET_STARTUP_HOOKS = Join-Path $env:INSTALL_DIR "net/OpenTelemetry.AutoInstrumentation.StartupHook.dll"
     $env:OTEL_DOTNET_AUTO_HOME = $env:INSTALL_DIR
     
     $env:OTEL_DOTNET_AUTO_PLUGINS = "AWS.Distro.OpenTelemetry.AutoInstrumentation.Plugin, AWS.Distro.OpenTelemetry.AutoInstrumentation"
     
     $env:OTEL_RESOURCE_ATTRIBUTES = "service.name=dotnet-service-name"
     $env:OTEL_EXPORTER_OTLP_PROTOCOL = "http/protobuf"
     $env:OTEL_EXPORTER_OTLP_ENDPOINT = "http://127.0.0.1:4316"
     $env:OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT = "http://127.0.0.1:4316/v1/metrics"
     $env:OTEL_METRICS_EXPORTER = "none"
     $env:OTEL_AWS_APPLICATION_SIGNALS_ENABLED = "true"
     $env:OTEL_TRACES_SAMPLER = "xray"
     $env:OTEL_TRACES_SAMPLER_ARG = "http://127.0.0.1:2000"
     ```

1. Start your application with the environment variables listed in the previous step.

   (Optional) Alternatively, you can use the provided installation scripts to help install and set up the AWS Distro for OpenTelemetry .NET auto-instrumentation package.

   For Linux, download and install the Bash installation script from the GitHub releases page:

   ```
   # Download and Install
   curl -L -O https://github.com/aws-observability/aws-otel-dotnet-instrumentation/releases/latest/download/aws-otel-dotnet-install.sh
   chmod +x ./aws-otel-dotnet-install.sh
   ./aws-otel-dotnet-install.sh
   
   # Instrument
   . $HOME/.otel-dotnet-auto/instrument.sh
   export OTEL_RESOURCE_ATTRIBUTES=service.name=dotnet-service-name
   ```

   For Windows Server, download and install the PowerShell installation script from the GitHub releases page:

   ```
   # Download and Install
   $module_url = "https://github.com/aws-observability/aws-otel-dotnet-instrumentation/releases/latest/download/AWS.Otel.DotNet.Auto.psm1"
   $download_path = Join-Path $env:temp "AWS.Otel.DotNet.Auto.psm1"
   Invoke-WebRequest -Uri $module_url -OutFile $download_path
   Import-Module $download_path
   Install-OpenTelemetryCore
   
   # Instrument
   Import-Module $download_path
   Register-OpenTelemetryForCurrentSession -OTelServiceName "dotnet-service-name"
   Register-OpenTelemetryForIIS
   ```

   You can find the NuGet package of the AWS Distro for OpenTelemetry .NET auto-instrumentation package in the [ official NuGet repository](https://www.nuget.org/packages/AWS.Distro.OpenTelemetry.AutoInstrumentation). Be sure to check the [ README file](https://github.com/aws-observability/aws-otel-dotnet-instrumentation/blob/main/src/AWS.Distro.OpenTelemetry.AutoInstrumentation/nuget-readme.md) for instructions.

------
#### [ Node.js ]

**Note**  
If you are enabling Application Signals for a Node.js application with ESM, see [Setting up a Node.js application with the ESM module format](#EC2-NodeJs-ESM) before you start these steps.

**To instrument your Node.js applications as part of enabling Application Signals on an Amazon EC2 instance**

1. Download the latest version of the AWS Distro for OpenTelemetry JavaScript auto-instrumentation agent for Node.js. Install it by running the following command.

   ```
   npm install @aws/aws-distro-opentelemetry-node-autoinstrumentation
   ```

   You can view information about all released versions at [AWS Distro for OpenTelemetry JavaScript instrumentation](https://github.com/aws-observability/aws-otel-js-instrumentation/releases). 

1. To optimize your Application Signals benefits, use environment variables to provide additional information before you start your application. This information will be displayed in Application Signals dashboards.

   1. For the `OTEL_RESOURCE_ATTRIBUTES` variable, specify the following information as key-value pairs:
      + `service.name` sets the name of the service. This will be displayed as the service name for your application in Application Signals dashboards. If you don't provide a value for this key, the default of `UnknownService` is used.
      + `deployment.environment` sets the environment that the application runs in. This will be displayed as the **Hosted In** environment of your application in Application Signals dashboards. If you don't specify this, one of the following defaults is used:
        + If this is an instance that is part of an Auto Scaling group, it is set to `ec2:name-of-Auto-Scaling-group`. 
        + If this is an Amazon EC2 instance that is not part of an Auto Scaling group, it is set to `ec2:default` 
        + If this is an on-premises host, it is set to `generic:default` 

         This attribute key is used only by Application Signals, and is converted into X-Ray trace annotations and CloudWatch metric dimensions.

   1. For the `OTEL_EXPORTER_OTLP_PROTOCOL` variable, specify `http/protobuf` to export telemetry data over HTTP to the CloudWatch agent endpoints listed in the following steps.

   1. For the `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` variable, specify the base endpoint URL where traces are to be exported to. The CloudWatch agent exposes 4316 as its OTLP port over HTTP. On Amazon EC2, because applications communicate with the local CloudWatch agent, you should set this value to `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4316/v1/traces`

   1. For the `OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT` variable, specify the base endpoint URL where metrics are to be exported to. The CloudWatch agent exposes 4316 as its OTLP port over HTTP. On Amazon EC2, because applications communicate with the local CloudWatch agent, you should set this value to `OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT=http://localhost:4316/v1/metrics`

   1. For the `OTEL_METRICS_EXPORTER` variable, we recommend that you set the value to `none`. This disables other metrics exporters so that only the Application Signals exporter is used.

   1. Set the `OTEL_AWS_APPLICATION_SIGNALS_ENABLED` variable to `true` to have your application start sending X-Ray traces and CloudWatch metrics to Application Signals.

1. Start your application with the environment variables discussed in the previous step. The following is an example of a starting script.
   + Replace `$SVC_NAME` with your application name. This will be displayed as the name of the application in Application Signals dashboards.

   ```
   OTEL_METRICS_EXPORTER=none \
   OTEL_LOGS_EXPORTER=none \
   OTEL_AWS_APPLICATION_SIGNALS_ENABLED=true \
   OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf \
   OTEL_TRACES_SAMPLER=xray \
   OTEL_TRACES_SAMPLER_ARG="endpoint=http://localhost:2000" \
   OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT=http://localhost:4316/v1/metrics \
   OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4316/v1/traces \
   OTEL_RESOURCE_ATTRIBUTES="service.name=$SVC_NAME" \
   node --require '@aws/aws-distro-opentelemetry-node-autoinstrumentation/register' your-application.js
   ```

1. (Optional) To enable log correlation, in `OTEL_RESOURCE_ATTRIBUTES`, set an additional `aws.log.group.names` attribute for the log groups of your application. By doing so, the traces and metrics from your application can be correlated with the relevant log entries from these log groups. For this attribute, replace *$YOUR\_APPLICATION\_LOG\_GROUP* with the log group names for your application. If you have multiple log groups, you can use an ampersand (`&`) to separate them as in this example: `aws.log.group.names=log-group-1&log-group-2`. To enable metric to log correlation, setting this attribute is enough. For more information, see [Enable metric to log correlation](Application-Signals-MetricLogCorrelation.md). To enable trace to log correlation, you'll also need to change the logging configuration in your application. For more information, see [Enable trace to log correlation](Application-Signals-TraceLogCorrelation.md).
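   For example, the attribute string for two log groups can be composed like this; the log group and service names below are placeholders, not values from your account:

   ```
   # Placeholder names; substitute your application's log groups and service name.
   LOG_GROUPS="log-group-1&log-group-2"
   SVC_NAME="my-service"
   export OTEL_RESOURCE_ATTRIBUTES="aws.log.group.names=${LOG_GROUPS},service.name=${SVC_NAME}"
   echo "$OTEL_RESOURCE_ATTRIBUTES"
   # prints aws.log.group.names=log-group-1&log-group-2,service.name=my-service
   ```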

   The following is an example of a starting script that helps enable log correlation. 

   ```
   export OTEL_METRICS_EXPORTER=none
   export OTEL_LOGS_EXPORTER=none
   export OTEL_AWS_APPLICATION_SIGNALS_ENABLED=true
   export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
   export OTEL_TRACES_SAMPLER=xray
   export OTEL_TRACES_SAMPLER_ARG="endpoint=http://localhost:2000"
   export OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT=http://localhost:4316/v1/metrics
   export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4316/v1/traces
   export OTEL_RESOURCE_ATTRIBUTES="aws.log.group.names=$YOUR_APPLICATION_LOG_GROUP,service.name=$SVC_NAME"
   node --require '@aws/aws-distro-opentelemetry-node-autoinstrumentation/register' your-application.js
   ```

<a name="EC2-NodeJs-ESM"></a>

**Setting up a Node.js application with the ESM module format**

We provide limited support for Node.js applications with the ESM module format. For details, see [Known limitations about Node.js with ESM](CloudWatch-Application-Signals-supportmatrix.md#ESM-limitations).

To enable Application Signals for a Node.js application with ESM, you need to modify the steps in the previous procedure.

First, install `@opentelemetry/instrumentation` for your Node.js application:

```
npm install @opentelemetry/instrumentation@0.54.0
```

Then, in steps 3 and 4 in the previous procedure, change the node options from:

```
--require '@aws/aws-distro-opentelemetry-node-autoinstrumentation/register'
```

to the following:

```
--import @aws/aws-distro-opentelemetry-node-autoinstrumentation/register --experimental-loader=@opentelemetry/instrumentation/hook.mjs
```
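As a sketch, a start script can select between the two option sets based on the module format of your application; the `MODULE_FORMAT` variable here is a hypothetical switch for illustration, not part of ADOT:

```
# Hypothetical switch: set to "esm" for ESM applications, "cjs" otherwise.
MODULE_FORMAT="esm"
if [ "$MODULE_FORMAT" = "esm" ]; then
  NODE_OPTS="--import @aws/aws-distro-opentelemetry-node-autoinstrumentation/register --experimental-loader=@opentelemetry/instrumentation/hook.mjs"
else
  NODE_OPTS="--require @aws/aws-distro-opentelemetry-node-autoinstrumentation/register"
fi
echo "$NODE_OPTS"
# The application would then be started as: node $NODE_OPTS your-application.js
```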

------

## Enable Application Signals on Amazon EC2 using Model Context Protocol (MCP)
<a name="CloudWatch-Application-Signals-EC2-MCP"></a>

You can use the CloudWatch Application Signals Model Context Protocol (MCP) server to enable Application Signals on your Amazon EC2 instances through conversational AI interactions. This provides a natural language interface for setting up Application Signals monitoring.

The MCP server automates the enablement process by understanding your requirements and generating the appropriate configuration. Instead of manually following setup steps, you can simply describe what you want to enable.

### Prerequisites
<a name="CloudWatch-Application-Signals-EC2-MCP-Prerequisites"></a>

Before using the MCP server to enable Application Signals, ensure you have:
+ A development environment that supports MCP (such as Kiro, Claude Desktop, VS Code with MCP extensions, or other MCP-compatible tools)
+ The CloudWatch Application Signals MCP server configured in your IDE. For detailed setup instructions, see [CloudWatch Application Signals MCP Server documentation](https://awslabs.github.io/mcp/servers/cloudwatch-applicationsignals-mcp-server).

### Using the MCP server
<a name="CloudWatch-Application-Signals-EC2-MCP-Usage"></a>

Once you have configured the CloudWatch Application Signals MCP server in your IDE, you can request enablement guidance using natural language prompts. While the coding assistant can infer context from your project structure, providing specific details in your prompts helps ensure more accurate and relevant guidance. Include information such as your application language, instance details, and absolute paths to your infrastructure and application code.

**Best practice prompts (specific and complete):**

```
"Enable Application Signals for my Python service running on EC2.
My app code is in /home/ec2-user/flask-api and IaC is in /home/ec2-user/flask-api/terraform"

"I want to add observability to my Java application on EC2.
The application code is at /opt/apps/checkout-service and
the infrastructure code is at /opt/apps/checkout-service/cloudformation"

"Help me instrument my Node.js application on EC2 with Application Signals.
Application directory: /home/ubuntu/payment-api
Terraform code: /home/ubuntu/payment-api/terraform"
```

**Less effective prompts:**

```
"Enable monitoring for my app"
→ Missing: platform, language, paths

"Enable Application Signals. My code is in ./src and IaC is in ./infrastructure"
→ Problem: Relative paths instead of absolute paths

"Enable Application Signals for my EC2 service at /home/user/myapp"
→ Missing: programming language
```

**Quick template:**

```
"Enable Application Signals for my [LANGUAGE] service on EC2.
App code: [ABSOLUTE_PATH_TO_APP]
IaC code: [ABSOLUTE_PATH_TO_IAC]"
```

### Benefits of using the MCP server
<a name="CloudWatch-Application-Signals-EC2-MCP-Benefits"></a>

Using the CloudWatch Application Signals MCP server offers several advantages:
+ **Natural language interface:** Describe what you want to enable without memorizing commands or configuration syntax
+ **Context-aware guidance:** The MCP server understands your specific environment and provides tailored recommendations
+ **Reduced errors:** Automated configuration generation minimizes manual typing errors
+ **Faster setup:** Get from intention to implementation more quickly
+ **Learning tool:** See the generated configurations and understand how Application Signals works

### Additional resources
<a name="CloudWatch-Application-Signals-EC2-MCP-MoreInfo"></a>

For more information about configuring and using the CloudWatch Application Signals MCP server, see the [MCP server documentation](https://awslabs.github.io/mcp/servers/cloudwatch-applicationsignals-mcp-server).

## (Optional) Monitor your application health
<a name="CloudWatch-Application-Signals-Monitor-EC2"></a>

After you enable Application Signals for your applications on Amazon EC2, you can monitor your application health. For more information, see [Monitor the operational health of your applications with Application Signals](Services.md).

# Enable your applications on Amazon ECS
<a name="CloudWatch-Application-Signals-Enable-ECSMain"></a>

Enable CloudWatch Application Signals on Amazon ECS by using the custom setup steps described in this section.

For applications running on Amazon ECS, you install and configure the CloudWatch agent and AWS Distro for OpenTelemetry yourself. With a custom Application Signals setup, Application Signals doesn't autodiscover the names of your services or the hosts or clusters they run on. You must specify these names during the custom setup, and the names that you specify are what is displayed on Application Signals dashboards.

## Use a custom setup to enable Application Signals on Amazon ECS
<a name="CloudWatch-Application-Signals-Enable-ECS"></a>

Use these custom setup instructions to onboard your applications on Amazon ECS to CloudWatch Application Signals. You install and configure the CloudWatch agent and AWS Distro for OpenTelemetry yourself.

There are two methods for deploying Application Signals on Amazon ECS. Choose the one that is best for your environment.
+ [Deploy using the sidecar strategy](CloudWatch-Application-Signals-ECS-Sidecar.md) – You add a CloudWatch agent sidecar container to each task definition in the cluster.

  Advantages:
  + Supports both the EC2 and Fargate launch types.
  + You can always use `localhost` as the IP address when you set up environment variables.

  Disadvantages:
  + You must set up the CloudWatch agent sidecar container for each service task that runs in the cluster.
  + Only the `awsvpc` network mode is supported.
+ [Deploy using the daemon strategy](CloudWatch-Application-Signals-ECS-Daemon.md) – You add a CloudWatch agent task only once in the cluster, and the [Amazon ECS daemon scheduling strategy](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html#service_scheduler_daemon) deploys it as needed. This ensures that each instance continuously receives traces and metrics, providing centralized visibility without the need for the agent to run as a sidecar with each application task definition.

  Advantages:
  + You need to set up the daemon service for the CloudWatch agent only once in the cluster.

  Disadvantages:
  + Doesn't support the Fargate launch type.
  + If you use the `awsvpc` or `bridge` network mode, you have to manually specify each container instance's private IP address in the environment variables.

With either method, on Amazon ECS clusters Application Signals doesn't autodiscover the names of your services. You must specify your service names during the custom setup, and the names that you specify are what is displayed on Application Signals dashboards.

# Deploy using the sidecar strategy
<a name="CloudWatch-Application-Signals-ECS-Sidecar"></a>

## Step 1: Enable Application Signals in your account
<a name="CloudWatch-Application-Signals-ECS-Grant"></a>

You must first enable Application Signals in your account. If you haven't, see [Enable Application Signals in your account](CloudWatch-Application-Signals-Enable.md).

## Step 2: Create IAM roles
<a name="CloudWatch-Application-Signals-Enable-ECS-IAM"></a>

You must create an IAM role. If you have already created this role, you might need to add permissions to it.
+ **ECS task role** – Containers use this role to run. The permissions should be whatever your applications need, plus **CloudWatchAgentServerPolicy**.

For more information about creating IAM roles, see [Creating IAM Roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create.html).

## Step 3: Prepare CloudWatch agent configuration
<a name="CloudWatch-Application-Signals-Enable-ECS-PrepareAgent"></a>

First, prepare the agent configuration with Application Signals enabled. To do this, create a local file named `/tmp/ecs-cwagent.json`. 

```
{
  "traces": {
    "traces_collected": {
      "application_signals": {}
    }
  },
  "logs": {
    "metrics_collected": {
      "application_signals": {}
    }
  }
}
```
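Before uploading, you can optionally confirm that the file is well-formed JSON. The following is a minimal sketch that assumes `python3` is available on the host:

```
# Write the Application Signals agent configuration, then validate that it parses.
cat > /tmp/ecs-cwagent.json <<'EOF'
{
  "traces": { "traces_collected": { "application_signals": {} } },
  "logs": { "metrics_collected": { "application_signals": {} } }
}
EOF
python3 -m json.tool /tmp/ecs-cwagent.json > /dev/null && echo "valid JSON"
```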

Then upload this configuration to the SSM Parameter Store. To do this, enter the following command. In the command, replace *$REGION* with your actual Region name.

```
aws ssm put-parameter \
--name "ecs-cwagent" \
--type "String" \
--value "`cat /tmp/ecs-cwagent.json`" \
--region "$REGION"
```

## Step 4: Instrument your application with the CloudWatch agent
<a name="CloudWatch-Application-Signals-Enable-ECS-Instrument"></a>

The next step is to instrument your application for CloudWatch Application Signals.

------
#### [ Java ]

**To instrument your application on Amazon ECS with the CloudWatch agent**

1. First, specify a bind mount volume. It is used to share files across containers in later steps of this procedure.

   ```
   "volumes": [
     {
       "name": "opentelemetry-auto-instrumentation"
     }
   ]
   ```

1. Add a CloudWatch agent sidecar definition. To do this, append a new container called `ecs-cwagent` to your application's task definition. Replace *$REGION* with your actual Region name. Replace *$IMAGE* with the path to the latest CloudWatch container image on Amazon Elastic Container Registry. For more information, see [cloudwatch-agent](https://gallery.ecr.aws/cloudwatch-agent/cloudwatch-agent) on Amazon ECR.

   If you want to enable the CloudWatch agent with a daemon strategy instead, see the instructions at [Deploy using the daemon strategy](CloudWatch-Application-Signals-ECS-Daemon.md).

   ```
   {
     "name": "ecs-cwagent",
     "image": "$IMAGE",
     "essential": true,
     "secrets": [
       {
         "name": "CW_CONFIG_CONTENT",
         "valueFrom": "ecs-cwagent"
       }
     ],
     "logConfiguration": {
       "logDriver": "awslogs",
       "options": {
         "awslogs-create-group": "true",
         "awslogs-group": "/ecs/ecs-cwagent",
         "awslogs-region": "$REGION",
         "awslogs-stream-prefix": "ecs"
       }
     }
   }
   ```

1. Append a new container `init` to your application's task definition. Replace *$IMAGE* with the latest image from the [AWS Distro for OpenTelemetry Amazon ECR image repository](https://gallery.ecr.aws/aws-observability/adot-autoinstrumentation-java).

   ```
   {
     "name": "init",
     "image": "$IMAGE",
     "essential": false,
     "command": [
       "cp",
       "/javaagent.jar",
       "/otel-auto-instrumentation/javaagent.jar"
     ],
     "mountPoints": [
       {
         "sourceVolume": "opentelemetry-auto-instrumentation",
         "containerPath": "/otel-auto-instrumentation",
         "readOnly": false
       }
     ]
   }
   ```

1. Add a dependency on the `init` container to make sure that this container finishes before your application container starts.

   ```
   "dependsOn": [
     {
       "containerName": "init",
       "condition": "SUCCESS"
     }
   ]
   ```

1. Add the following environment variables to your application container. You must be using version 1.32.2 or later of the AWS Distro for OpenTelemetry [auto-instrumentation agent for Java](https://opentelemetry.io/docs/zero-code/java/agent/).    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Signals-ECS-Sidecar.html)

1. Mount the volume `opentelemetry-auto-instrumentation` that you defined in step 1 of this procedure. If you don't need to enable log correlation with metrics and traces, use the following example for a Java application. If you want to enable log correlation, see the next step instead.

   ```
   {
     "name": "my-app",
      ...
     "environment": [
       {
         "name": "OTEL_RESOURCE_ATTRIBUTES",
         "value": "service.name=$SVC_NAME"
       },
       {
         "name": "OTEL_LOGS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_METRICS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_PROTOCOL",
         "value": "http/protobuf"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED",
         "value": "true"
       },
       {
         "name": "JAVA_TOOL_OPTIONS",
         "value": " -javaagent:/otel-auto-instrumentation/javaagent.jar"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT",
         "value": "http://localhost:4316/v1/metrics"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT",
         "value": "http://localhost:4316/v1/traces"
       },
       {
         "name": "OTEL_TRACES_SAMPLER",
         "value": "xray"
       },
       {
         "name": "OTEL_PROPAGATORS",
         "value": "tracecontext,baggage,b3,xray"
       }
     ],
     "dependsOn": [
       {
         "containerName": "init",
         "condition": "SUCCESS"
       }
     ],
     "mountPoints": [
       {
         "sourceVolume": "opentelemetry-auto-instrumentation",
         "containerPath": "/otel-auto-instrumentation",
         "readOnly": false
       }
     ]
   }
   ```
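Before registering the updated task definition, a quick local check can confirm that the application container sets the Application Signals switch. This sketch assumes `python3` is available and uses a hypothetical, abbreviated task definition file named `taskdef.json`:

```
# Hypothetical, abbreviated task definition for illustration.
cat > taskdef.json <<'EOF'
{
  "containerDefinitions": [
    {
      "name": "my-app",
      "environment": [
        { "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED", "value": "true" },
        { "name": "OTEL_EXPORTER_OTLP_PROTOCOL", "value": "http/protobuf" }
      ]
    }
  ]
}
EOF
python3 - <<'EOF'
import json

task_def = json.load(open("taskdef.json"))
env = {e["name"]: e["value"]
       for c in task_def["containerDefinitions"]
       for e in c.get("environment", [])}
# Fail fast if the Application Signals switch is missing or off.
assert env.get("OTEL_AWS_APPLICATION_SIGNALS_ENABLED") == "true"
print("ok")
EOF
```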

------
#### [ Python ]

Before you enable Application Signals for your Python applications, be aware of the following considerations.
+ In some containerized applications, a missing `PYTHONPATH` environment variable can cause the application to fail to start. To resolve this, ensure that you set the `PYTHONPATH` environment variable to the location of your application’s working directory. This is due to a known issue with OpenTelemetry auto-instrumentation. For more information about this issue, see [Python autoinstrumentation setting of PYTHONPATH is not compliant](https://github.com/open-telemetry/opentelemetry-operator/issues/2302).
+ For Django applications, there are additional required configurations, which are outlined in the [OpenTelemetry Python documentation](https://opentelemetry-python.readthedocs.io/en/latest/examples/django/README.html).
  + Use the `--noreload` flag to prevent automatic reloading.
  + Set the `DJANGO_SETTINGS_MODULE` environment variable to the location of your Django application’s `settings.py` file. This ensures that OpenTelemetry can correctly access and integrate with your Django settings. 
+ If you're using a WSGI server for your Python application, in addition to the following steps in this section, see [No Application Signals data for Python application that uses a WSGI server](CloudWatch-Application-Signals-Enable-Troubleshoot.md#Application-Signals-troubleshoot-Python-WSGI) for information to make Application Signals work.
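As a sketch, the `PYTHONPATH` value used in the environment variables later in this procedure is composed of the auto-instrumentation mount path and your application's working directory; `/app` below is a placeholder for `$APP_PATH`:

```
APP_PATH="/app"  # placeholder: your application's working directory
OTEL_PY="/otel-auto-instrumentation-python"
export PYTHONPATH="${OTEL_PY}/opentelemetry/instrumentation/auto_instrumentation:${APP_PATH}:${OTEL_PY}"
echo "$PYTHONPATH"
```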

**To instrument your Python application on Amazon ECS with the CloudWatch agent**

1. First, specify a bind mount volume. It is used to share files across containers in later steps of this procedure.

   ```
   "volumes": [
     {
       "name": "opentelemetry-auto-instrumentation-python"
     }
   ]
   ```

1. Add a CloudWatch agent sidecar definition. To do this, append a new container called `ecs-cwagent` to your application's task definition. Replace *$REGION* with your actual Region name. Replace *$IMAGE* with the path to the latest CloudWatch container image on Amazon Elastic Container Registry. For more information, see [cloudwatch-agent](https://gallery.ecr.aws/cloudwatch-agent/cloudwatch-agent) on Amazon ECR.

   If you want to enable the CloudWatch agent with a daemon strategy instead, see the instructions at [Deploy using the daemon strategy](CloudWatch-Application-Signals-ECS-Daemon.md).

   ```
   {
     "name": "ecs-cwagent",
     "image": "$IMAGE",
     "essential": true,
     "secrets": [
       {
         "name": "CW_CONFIG_CONTENT",
         "valueFrom": "ecs-cwagent"
       }
     ],
     "logConfiguration": {
       "logDriver": "awslogs",
       "options": {
         "awslogs-create-group": "true",
         "awslogs-group": "/ecs/ecs-cwagent",
         "awslogs-region": "$REGION",
         "awslogs-stream-prefix": "ecs"
       }
     }
   }
   ```

1. Append a new container `init` to your application's task definition. Replace *$IMAGE* with the latest image from the [AWS Distro for OpenTelemetry Amazon ECR image repository](https://gallery.ecr.aws/aws-observability/adot-autoinstrumentation-python).

   ```
   {
       "name": "init",
       "image": "$IMAGE",
       "essential": false,
       "command": [
           "cp",
           "-a",
           "/autoinstrumentation/.",
           "/otel-auto-instrumentation-python"
       ],
       "mountPoints": [
           {
               "sourceVolume": "opentelemetry-auto-instrumentation-python",
               "containerPath": "/otel-auto-instrumentation-python",
               "readOnly": false
           }
       ]
   }
   ```

1. Add a dependency on the `init` container to make sure that this container finishes before your application container starts.

   ```
   "dependsOn": [
     {
       "containerName": "init",
       "condition": "SUCCESS"
     }
   ]
   ```

1. Add the following environment variables to your application container.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Signals-ECS-Sidecar.html)

1. Mount the volume `opentelemetry-auto-instrumentation-python` that you defined in step 1 of this procedure. If you don't need to enable log correlation with metrics and traces, use the following example for a Python application. If you want to enable log correlation, see the next step instead. 

   ```
   {
     "name": "my-app",
     ...
     "environment": [
       {
         "name": "PYTHONPATH",
         "value": "/otel-auto-instrumentation-python/opentelemetry/instrumentation/auto_instrumentation:$APP_PATH:/otel-auto-instrumentation-python"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_PROTOCOL",
         "value": "http/protobuf"
       },
       {
         "name": "OTEL_TRACES_SAMPLER",
         "value": "xray"
       },
       {
         "name": "OTEL_TRACES_SAMPLER_ARG",
         "value": "endpoint=http://localhost:2000"
       },
       {
         "name": "OTEL_LOGS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_PYTHON_DISTRO",
         "value": "aws_distro"
       },
       {
         "name": "OTEL_PYTHON_CONFIGURATOR",
         "value": "aws_configurator"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT",
         "value": "http://localhost:4316/v1/traces"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT",
         "value": "http://localhost:4316/v1/metrics"
       },
       {
         "name": "OTEL_METRICS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED",
         "value": "true"
       },
       {
         "name": "OTEL_RESOURCE_ATTRIBUTES",
         "value": "service.name=$SVC_NAME"
       },
       {
         "name": "DJANGO_SETTINGS_MODULE",
         "value": "$PATH_TO_SETTINGS.settings"
       }
     ],
     "mountPoints": [
       {
         "sourceVolume": "opentelemetry-auto-instrumentation-python",
         "containerPath": "/otel-auto-instrumentation-python",
         "readOnly": false
       }
     ]
   }
   ```

1. (Optional) To enable log correlation, do the following before you mount the volume. In `OTEL_RESOURCE_ATTRIBUTES`, set an additional `aws.log.group.names` attribute for the log groups of your application. By doing so, the traces and metrics from your application can be correlated with the relevant log entries from these log groups. For this attribute, replace *$YOUR\_APPLICATION\_LOG\_GROUP* with the log group names for your application. If you have multiple log groups, you can use an ampersand (`&`) to separate them as in this example: `aws.log.group.names=log-group-1&log-group-2`. To enable metric to log correlation, setting this attribute is enough. For more information, see [Enable metric to log correlation](Application-Signals-MetricLogCorrelation.md). To enable trace to log correlation, you'll also need to change the logging configuration in your application. For more information, see [Enable trace to log correlation](Application-Signals-TraceLogCorrelation.md).

   The following is an example. To enable log correlation, use this example when you mount the volume `opentelemetry-auto-instrumentation-python` that you defined in step 1 of this procedure.

   ```
   {
     "name": "my-app",
     ...
     "environment": [
       {
         "name": "PYTHONPATH",
         "value": "/otel-auto-instrumentation-python/opentelemetry/instrumentation/auto_instrumentation:$APP_PATH:/otel-auto-instrumentation-python"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_PROTOCOL",
         "value": "http/protobuf"
       },
       {
         "name": "OTEL_TRACES_SAMPLER",
         "value": "xray"
       },
       {
         "name": "OTEL_TRACES_SAMPLER_ARG",
         "value": "endpoint=http://localhost:2000"
       },
       {
         "name": "OTEL_LOGS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_PYTHON_DISTRO",
         "value": "aws_distro"
       },
       {
         "name": "OTEL_PYTHON_CONFIGURATOR",
         "value": "aws_configurator"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT",
         "value": "http://localhost:4316/v1/traces"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT",
         "value": "http://localhost:4316/v1/metrics"
       },
       {
         "name": "OTEL_METRICS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED",
         "value": "true"
       },
       {
         "name": "OTEL_RESOURCE_ATTRIBUTES",
         "value": "aws.log.group.names=$YOUR_APPLICATION_LOG_GROUP,service.name=$SVC_NAME"
       },
       {
         "name": "DJANGO_SETTINGS_MODULE",
         "value": "$PATH_TO_SETTINGS.settings"
       }
     ],
     "mountPoints": [
       {
         "sourceVolume": "opentelemetry-auto-instrumentation-python",
         "containerPath": "/otel-auto-instrumentation-python",
         "readOnly": false
       }
     ]
   }
   ```

------
#### [ .NET ]

**To instrument your application on Amazon ECS with the CloudWatch agent**

1. First, specify a bind mount volume. It is used to share files across containers in later steps of this procedure.

   ```
   "volumes": [
     {
       "name": "opentelemetry-auto-instrumentation"
     }
   ]
   ```

1. Add a CloudWatch agent sidecar definition. To do this, append a new container called `ecs-cwagent` to your application's task definition. Replace *$REGION* with your actual Region name. Replace *$IMAGE* with the path to the latest CloudWatch container image on Amazon Elastic Container Registry. For more information, see [cloudwatch-agent](https://gallery.ecr.aws/cloudwatch-agent/cloudwatch-agent) on Amazon ECR.

   If you want to enable the CloudWatch agent with a daemon strategy instead, see the instructions at [Deploy using the daemon strategy](CloudWatch-Application-Signals-ECS-Daemon.md).

   ```
   {
     "name": "ecs-cwagent",
     "image": "$IMAGE",
     "essential": true,
     "secrets": [
       {
         "name": "CW_CONFIG_CONTENT",
         "valueFrom": "ecs-cwagent"
       }
     ],
     "logConfiguration": {
       "logDriver": "awslogs",
       "options": {
         "awslogs-create-group": "true",
         "awslogs-group": "/ecs/ecs-cwagent",
         "awslogs-region": "$REGION",
         "awslogs-stream-prefix": "ecs"
       }
     }
   }
   ```

1. Append a new container `init` to your application's task definition. Replace *$IMAGE* with the latest image from the [AWS Distro for OpenTelemetry Amazon ECR image repository](https://gallery.ecr.aws/aws-observability/adot-autoinstrumentation-dotnet).

   For a Linux container instance, use the following.

   ```
   {
     "name": "init",
     "image": "$IMAGE",
     "essential": false,
     "command": [
         "cp",
         "-a",
         "autoinstrumentation/.",
         "/otel-auto-instrumentation"
     ],
     "mountPoints": [
         {
             "sourceVolume": "opentelemetry-auto-instrumentation",
             "containerPath": "/otel-auto-instrumentation",
             "readOnly": false
         }
     ]
   }
   ```

   For a Windows Server container instance, use the following.

   ```
   {
     "name": "init",
     "image": "$IMAGE",
     "essential": false,
     "command": [
         "CMD",
         "/c",
         "xcopy",
         "/e",
         "C:\\autoinstrumentation\\*",
         "C:\\otel-auto-instrumentation",
         "&&",
         "icacls",
         "C:\\otel-auto-instrumentation",
         "/grant",
         "*S-1-1-0:R",
         "/T"
     ],
     "mountPoints": [
         {
             "sourceVolume": "opentelemetry-auto-instrumentation",
             "containerPath": "C:\\otel-auto-instrumentation",
             "readOnly": false
         }
     ]
   }
   ```

1. Add a dependency on the `init` container to make sure that this container finishes before your application container starts.

   ```
   "dependsOn": [
       {
           "containerName": "init",
           "condition": "SUCCESS"
       }
   ]
   ```

1. Add the following environment variables to your application container. You must be using version 1.1.0 or later of the AWS Distro for OpenTelemetry [auto-instrumentation agent for .NET](https://opentelemetry.io/docs/zero-code/net/).    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Signals-ECS-Sidecar.html)

1. Mount the volume `opentelemetry-auto-instrumentation` that you defined in step 1 of this procedure. For Linux, use the following.

   ```
   {
       "name": "my-app",
      ...
       "environment": [
           {
              "name": "OTEL_RESOURCE_ATTRIBUTES",
              "value": "service.name=$SVC_NAME"
          },
           {
               "name": "CORECLR_ENABLE_PROFILING",
               "value": "1"
           },
           {
               "name": "CORECLR_PROFILER",
               "value": "{918728DD-259F-4A6A-AC2B-B85E1B658318}"
           },
           {
               "name": "CORECLR_PROFILER_PATH",
               "value": "/otel-auto-instrumentation/linux-x64/OpenTelemetry.AutoInstrumentation.Native.so"
           },
           {
               "name": "DOTNET_ADDITIONAL_DEPS",
               "value": "/otel-auto-instrumentation/AdditionalDeps"
           },
           {
               "name": "DOTNET_SHARED_STORE",
               "value": "/otel-auto-instrumentation/store"
           },
           {
               "name": "DOTNET_STARTUP_HOOKS",
               "value": "/otel-auto-instrumentation/net/OpenTelemetry.AutoInstrumentation.StartupHook.dll"
           },
           {
               "name": "OTEL_DOTNET_AUTO_HOME",
               "value": "/otel-auto-instrumentation"
           },
           {
               "name": "OTEL_DOTNET_AUTO_PLUGINS",
               "value": "AWS.Distro.OpenTelemetry.AutoInstrumentation.Plugin, AWS.Distro.OpenTelemetry.AutoInstrumentation"
           },
           {
               "name": "OTEL_LOGS_EXPORTER",
               "value": "none"
           },
           {
               "name": "OTEL_METRICS_EXPORTER",
               "value": "none"
           },
           {
               "name": "OTEL_EXPORTER_OTLP_PROTOCOL",
               "value": "http/protobuf"
           },
           {
               "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED",
               "value": "true"
           },
           {
               "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT",
               "value": "http://localhost:4316/v1/metrics"
           },
           {
               "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT",
               "value": "http://localhost:4316/v1/traces"
           },
           {
               "name": "OTEL_EXPORTER_OTLP_ENDPOINT",
               "value": "http://localhost:4316"
           },
           {
              "name": "OTEL_TRACES_SAMPLER",
              "value": "xray"
          },
          {
              "name": "OTEL_TRACES_SAMPLER_ARG",
              "value": "endpoint=http://localhost:2000"
          },
           {
               "name": "OTEL_PROPAGATORS",
               "value": "tracecontext,baggage,b3,xray"
           }
       ],
       "dependsOn": [
       {
         "containerName": "init",
         "condition": "SUCCESS"
       }
     ],
       "mountPoints": [
           {
               "sourceVolume": "opentelemetry-auto-instrumentation",
               "containerPath": "/otel-auto-instrumentation",
               "readOnly": false
           }
       ]
   }
   ```

   For Windows Server, use the following.

   ```
   {
       "name": "my-app",
      ...
       "environment": [
           {
               "name": "CORECLR_ENABLE_PROFILING",
               "value": "1"
           },
           {
               "name": "CORECLR_PROFILER",
               "value": "{918728DD-259F-4A6A-AC2B-B85E1B658318}"
           },
           {
               "name": "CORECLR_PROFILER_PATH",
               "value": "C:\\otel-auto-instrumentation\\win-x64\\OpenTelemetry.AutoInstrumentation.Native.dll"
           },
           {
               "name": "DOTNET_ADDITIONAL_DEPS",
               "value": "C:\\otel-auto-instrumentation\\AdditionalDeps"
           },
           {
               "name": "DOTNET_SHARED_STORE",
               "value": "C:\\otel-auto-instrumentation\\store"
           },
           {
               "name": "DOTNET_STARTUP_HOOKS",
               "value": "C:\\otel-auto-instrumentation\\net\\OpenTelemetry.AutoInstrumentation.StartupHook.dll"
           },
           {
               "name": "OTEL_DOTNET_AUTO_HOME",
               "value": "C:\\otel-auto-instrumentation"
           },
           {
               "name": "OTEL_DOTNET_AUTO_PLUGINS",
               "value": "AWS.Distro.OpenTelemetry.AutoInstrumentation.Plugin, AWS.Distro.OpenTelemetry.AutoInstrumentation"
           },
           {
               "name": "OTEL_RESOURCE_ATTRIBUTES",
               "value": "aws.log.group.names=$YOUR_APPLICATION_LOG_GROUP,service.name=dotnet-service-name"
           },
           {
               "name": "OTEL_LOGS_EXPORTER",
               "value": "none"
           },
           {
               "name": "OTEL_METRICS_EXPORTER",
               "value": "none"
           },
           {
               "name": "OTEL_EXPORTER_OTLP_PROTOCOL",
               "value": "http/protobuf"
           },
           {
               "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED",
               "value": "true"
           },
           {
               "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT",
               "value": "http://localhost:4316/v1/metrics"
           },
           {
               "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT",
               "value": "http://localhost:4316/v1/traces"
           },
           {
               "name": "OTEL_EXPORTER_OTLP_ENDPOINT",
               "value": "http://localhost:4316"
           },
           {
              "name": "OTEL_TRACES_SAMPLER",
              "value": "xray"
          },
          {
              "name": "OTEL_TRACES_SAMPLER_ARG",
              "value": "endpoint=http://localhost:2000"
          },
           {
               "name": "OTEL_PROPAGATORS",
               "value": "tracecontext,baggage,b3,xray"
           }
       ],
       "mountPoints": [
           {
               "sourceVolume": "opentelemetry-auto-instrumentation",
               "containerPath": "C:\\otel-auto-instrumentation",
               "readOnly": false
           }
       ],
       "dependsOn": [
           {
               "containerName": "init",
               "condition": "SUCCESS"
           }
      ]
   }
   ```
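Long `environment` arrays like the ones above are easy to get wrong by hand. In particular, defining the same variable name twice (for example, two `OTEL_RESOURCE_ATTRIBUTES` entries) leaves only one value in effect. The following is an illustrative local sanity check, not part of the procedure; it assumes your container definition is available as JSON.

```python
import json

def duplicate_env_names(container_def):
    """Return environment variable names that appear more than once."""
    names = [e["name"] for e in container_def.get("environment", [])]
    return sorted({n for n in names if names.count(n) > 1})

# Hypothetical container definition with a deliberate duplicate:
container = json.loads("""
{
  "name": "my-app",
  "environment": [
    {"name": "OTEL_RESOURCE_ATTRIBUTES", "value": "service.name=a"},
    {"name": "OTEL_RESOURCE_ATTRIBUTES", "value": "service.name=b"},
    {"name": "OTEL_LOGS_EXPORTER", "value": "none"}
  ]
}
""")
print(duplicate_env_names(container))
```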

------
#### [ Node.js ]

**Note**  
If you are enabling Application Signals for a Node.js application with ESM, see [Setting up a Node.js application with the ESM module format](#ECS-NodeJs-ESM) before you start these steps.

**To instrument your application on Amazon ECS with the CloudWatch agent**

1. First, specify a bind mount. This volume is used to share files across containers in the later steps of this procedure.

   ```
   "volumes": [
     {
       "name": "opentelemetry-auto-instrumentation-node"
     }
   ]
   ```

1. Add a CloudWatch agent sidecar definition. To do this, append a new container called `ecs-cwagent` to your application's task definition. Replace *\$REGION* with your actual Region name. Replace *\$IMAGE* with the path to the latest CloudWatch container image on Amazon Elastic Container Registry. For more information, see [cloudwatch-agent](https://gallery.ecr.aws/cloudwatch-agent/cloudwatch-agent) on Amazon ECR.

   If you want to enable the CloudWatch agent with a daemon strategy instead, see the instructions at [Deploy using the daemon strategy](CloudWatch-Application-Signals-ECS-Daemon.md).

   ```
   {
     "name": "ecs-cwagent",
     "image": "$IMAGE",
     "essential": true,
     "secrets": [
       {
         "name": "CW_CONFIG_CONTENT",
         "valueFrom": "ecs-cwagent"
       }
     ],
     "logConfiguration": {
       "logDriver": "awslogs",
       "options": {
         "awslogs-create-group": "true",
         "awslogs-group": "/ecs/ecs-cwagent",
         "awslogs-region": "$REGION",
         "awslogs-stream-prefix": "ecs"
       }
     }
   }
   ```

1. Append a new container `init` to your application's task definition. Replace *\$IMAGE* with the latest image from the [AWS Distro for OpenTelemetry Amazon ECR image repository](https://gallery.ecr.aws/aws-observability/adot-autoinstrumentation-node).

   ```
   {
     "name": "init",
     "image": "$IMAGE",
     "essential": false,
     "command": [
       "cp",
       "-a",
       "/autoinstrumentation/.",
       "/otel-auto-instrumentation-node"
     ],
     "mountPoints": [
       {
         "sourceVolume": "opentelemetry-auto-instrumentation-node",
         "containerPath": "/otel-auto-instrumentation-node",
         "readOnly": false
       }
      ]
    }
   ```

1. Add a dependency on the `init` container to make sure that this container finishes before your application container starts.

   ```
   "dependsOn": [
     {
       "containerName": "init",
       "condition": "SUCCESS"
     }
   ]
   ```

1. Add the following environment variables to your application container.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Signals-ECS-Sidecar.html)

1. Mount the volume `opentelemetry-auto-instrumentation-node` that you defined in step 1 of this procedure. If you don't need to enable log correlation with metrics and traces, use the following example for a Node.js application. If you want to enable log correlation, see the next step instead.

   In your application container, also add a dependency on the `init` container to make sure that it finishes before your application container starts.

   ```
   {
     "name": "my-app",
      ...
     "environment": [
       {
         "name": "OTEL_RESOURCE_ATTRIBUTES",
         "value": "service.name=$SVC_NAME"
       },
       {
         "name": "OTEL_LOGS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_METRICS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_PROTOCOL",
         "value": "http/protobuf"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED",
         "value": "true"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT",
         "value": "http://localhost:4316/v1/metrics"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT",
         "value": "http://localhost:4316/v1/traces"
       },
       {
         "name": "OTEL_TRACES_SAMPLER",
         "value": "xray"
       },
       {
         "name": "OTEL_TRACES_SAMPLER_ARG",
         "value": "endpoint=http://localhost:2000"
       },
       {
         "name": "NODE_OPTIONS",
         "value": "--require /otel-auto-instrumentation-node/autoinstrumentation.js"
       }
      ],
     "mountPoints": [
       {
         "sourceVolume": "opentelemetry-auto-instrumentation-node",
         "containerPath": "/otel-auto-instrumentation-node",
         "readOnly": false
       }
     ],
     "dependsOn": [
       {
         "containerName": "init",
         "condition": "SUCCESS"
       }
     ]
   }
   ```

1. (Optional) To enable log correlation, do the following before you mount the volume. In `OTEL_RESOURCE_ATTRIBUTES`, set an additional attribute, `aws.log.group.names`, to the log groups of your application. By doing so, the traces and metrics from your application can be correlated with the relevant log entries from these log groups. For this attribute, replace *\$YOUR\_APPLICATION\_LOG\_GROUP* with the log group names for your application. If you have multiple log groups, use an ampersand (`&`) to separate them, as in this example: `aws.log.group.names=log-group-1&log-group-2`. To enable metric to log correlation, setting this environment variable is enough. For more information, see [Enable metric to log correlation](Application-Signals-MetricLogCorrelation.md). To enable trace to log correlation, you must also change the logging configuration in your application. For more information, see [Enable trace to log correlation](Application-Signals-TraceLogCorrelation.md).

   The following is an example. Use this example to enable log correlation when you mount the volume `opentelemetry-auto-instrumentation-node` that you defined in step 1 of this procedure.

   ```
   {
     "name": "my-app",
      ...
     "environment": [
       {
         "name": "OTEL_RESOURCE_ATTRIBUTES",
         "value": "aws.log.group.names=$YOUR_APPLICATION_LOG_GROUP,service.name=$SVC_NAME"
       },
       {
         "name": "OTEL_LOGS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_METRICS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_PROTOCOL",
         "value": "http/protobuf"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED",
         "value": "true"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT",
         "value": "http://localhost:4316/v1/metrics"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT",
         "value": "http://localhost:4316/v1/traces"
       },
       {
         "name": "OTEL_TRACES_SAMPLER",
         "value": "xray"
       },
       {
         "name": "OTEL_TRACES_SAMPLER_ARG",
         "value": "endpoint=http://localhost:2000"
       },
       {
         "name": "NODE_OPTIONS",
         "value": "--require /otel-auto-instrumentation-node/autoinstrumentation.js"
       }
      ],
     "mountPoints": [
       {
         "sourceVolume": "opentelemetry-auto-instrumentation-node",
         "containerPath": "/otel-auto-instrumentation-node",
         "readOnly": false
       }
     ],
     "dependsOn": [
       {
         "containerName": "init",
         "condition": "SUCCESS"
       }
     ]
   }
   ```
<a name="ECS-NodeJs-ESM"></a>
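The `aws.log.group.names` convention above joins multiple log group names with an ampersand. As an illustration only (the helper function is not part of the procedure), the full `OTEL_RESOURCE_ATTRIBUTES` value can be assembled like this:

```python
def otel_resource_attributes(log_groups, service_name):
    # Multiple log groups are separated with '&' per the convention above.
    value = "aws.log.group.names=" + "&".join(log_groups)
    return f"{value},service.name={service_name}"

print(otel_resource_attributes(["log-group-1", "log-group-2"], "my-service"))
```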

**Setting up a Node.js application with the ESM module format**

We provide limited support for Node.js applications with the ESM module format. For details, see [Known limitations about Node.js with ESM](CloudWatch-Application-Signals-supportmatrix.md#ESM-limitations).

With the ESM module format, you can't use the `init` container to inject the Node.js instrumentation SDK. To enable Application Signals for Node.js with ESM, skip steps 1 and 3 of the previous procedure, and do the following instead.

**To enable Application Signals for a Node.js application with ESM**

1. Install the relevant dependencies in your Node.js application for auto-instrumentation:

   ```
   npm install @aws/aws-distro-opentelemetry-node-autoinstrumentation
   npm install @opentelemetry/instrumentation@0.54.0
   ```

1. In steps 5 and 6 of the previous procedure, remove the mounting of the volume `opentelemetry-auto-instrumentation-node`:

   ```
    "mountPoints": [
       {
           "sourceVolume": "opentelemetry-auto-instrumentation-node",
           "containerPath": "/otel-auto-instrumentation-node",
           "readOnly": false
       }
    ]
   ```

   Replace the `NODE_OPTIONS` environment variable with the following.

   ```
   {
       "name": "NODE_OPTIONS",
       "value": "--import @aws/aws-distro-opentelemetry-node-autoinstrumentation/register --experimental-loader=@opentelemetry/instrumentation/hook.mjs"
   }
   ```

------

## Step 5: Deploy your application
<a name="CloudWatch-Application-Signals-Enable-ECS-Deploy"></a>

Create a new revision of your task definition and deploy it to your application cluster. You should see three containers in the newly created task:
+ `init` – A required container for initializing Application Signals.
+ `ecs-cwagent` – A container running the CloudWatch agent.
+ `my-app` – The example application container used in this documentation. In your actual workloads, this container might not exist or might be replaced by your own service containers.

## (Optional) Step 6: Monitor your application health
<a name="CloudWatch-Application-Signals-Monitor-sidecar"></a>

Once you have enabled your applications on Amazon ECS, you can monitor your application health. For more information, see [Monitor the operational health of your applications with Application Signals](Services.md).

# Deploy using the daemon strategy
<a name="CloudWatch-Application-Signals-ECS-Daemon"></a>

## Step 1: Enable Application Signals in your account
<a name="Application-Signals-ECS-Grant-Daemon"></a>

You must first enable Application Signals in your account. If you haven't, see [Enable Application Signals in your account](CloudWatch-Application-Signals-Enable.md).

## Step 2: Create IAM roles
<a name="Application-Signals-Enable-ECS-IAM-Daemon"></a>

You must create an IAM role. If you have already created this role, you might need to add permissions to it.
+ **ECS task role—** Containers use this role to run. The permissions should be whatever your applications need, plus **CloudWatchAgentServerPolicy**. 

For more information about creating IAM roles, see [Creating IAM Roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create.html).

## Step 3: Prepare CloudWatch agent configuration
<a name="Application-Signals-Enable-ECS-PrepareAgent-Daemon"></a>

First, prepare the agent configuration with Application Signals enabled. To do this, create a local file named `/tmp/ecs-cwagent.json`. 

```
{
  "traces": {
    "traces_collected": {
      "application_signals": {}
    }
  },
  "logs": {
    "metrics_collected": {
      "application_signals": {}
    }
  }
}
```
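Before uploading, you can validate the configuration locally. The following sketch (not part of the procedure) embeds the JSON above and confirms that Application Signals is enabled in both the `traces` and `logs` sections:

```python
import json

# The agent configuration from this step, embedded for a quick local check.
agent_config = """
{
  "traces": { "traces_collected": { "application_signals": {} } },
  "logs": { "metrics_collected": { "application_signals": {} } }
}
"""

config = json.loads(agent_config)  # raises json.JSONDecodeError if malformed
assert "application_signals" in config["traces"]["traces_collected"]
assert "application_signals" in config["logs"]["metrics_collected"]
print("agent configuration OK")
```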

Then upload this configuration to the SSM Parameter Store. To do this, enter the following command. In the command, replace *\$REGION* with your actual Region name.

```
aws ssm put-parameter \
--name "ecs-cwagent" \
--type "String" \
--value "`cat /tmp/ecs-cwagent.json`" \
--region "$REGION"
```

## Step 4: Deploy the CloudWatch agent daemon service
<a name="Application-Signals-Enable-ECS-Sidecar-Daemon"></a>

Create the following task definition and deploy it to your application cluster. Replace *\$REGION* with your actual Region name. Replace *\$TASK\_ROLE\_ARN* and *\$EXECUTION\_ROLE\_ARN* with the ARNs of the IAM roles that you prepared in [Step 2: Create IAM roles](#Application-Signals-Enable-ECS-IAM-Daemon). Replace *\$IMAGE* with the path to the latest CloudWatch container image on Amazon Elastic Container Registry. For more information, see [cloudwatch-agent](https://gallery.ecr.aws/cloudwatch-agent/cloudwatch-agent) on Amazon ECR.

**Note**  
The daemon service exposes two ports on the host: 4316 serves as the endpoint for receiving metrics and traces, and 2000 serves as the CloudWatch trace sampler endpoint. This setup allows the agent to collect and transmit telemetry data from all application tasks running on the host. To avoid conflicts, make sure that these ports are not used by other services on the host.
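To check for conflicts on a container instance, you can probe the two ports locally. This is an illustrative sketch, not part of the procedure:

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """Return True if nothing accepts TCP connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) != 0

# 4316 receives metrics and traces; 2000 is the trace sampler endpoint.
for port in (4316, 2000):
    status = "free" if port_is_free(port) else "in use"
    print(f"port {port}: {status}")
```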

```
{
  "family": "ecs-cwagent-daemon",
  "taskRoleArn": "$TASK_ROLE_ARN",
  "executionRoleArn": "$EXECUTION_ROLE_ARN",
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "name": "ecs-cwagent",
      "image": "$IMAGE",
      "essential": true,
      "portMappings": [
        {
          "containerPort": 4316,
          "hostPort": 4316
        },
        {
          "containerPort": 2000,
          "hostPort": 2000
        }
      ],
      "secrets": [
        {
          "name": "CW_CONFIG_CONTENT",
          "valueFrom": "ecs-cwagent"
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-create-group": "true",
          "awslogs-group": "/ecs/ecs-cwagent",
          "awslogs-region": "$REGION",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ],
  "requiresCompatibilities": [
    "EC2"
  ],
  "cpu": "128",
  "memory": "64"
}
```

## Step 5: Instrument your application
<a name="Application-Signals-Enable-ECS-Instrument-Daemon"></a>

The next step is to instrument your application for Application Signals.

------
#### [ Java ]

**To instrument your application on Amazon ECS with the CloudWatch agent**

1. First, specify a bind mount. This volume is used to share files across containers in the later steps of this procedure.

   ```
   "volumes": [
     {
       "name": "opentelemetry-auto-instrumentation"
     }
   ]
   ```

1. Append a new container `init` to your application's task definition. Replace *\$IMAGE* with the latest image from the [AWS Distro for OpenTelemetry Amazon ECR image repository](https://gallery.ecr.aws/aws-observability/adot-autoinstrumentation-java).

   ```
   {
     "name": "init",
     "image": "$IMAGE",
     "essential": false,
     "command": [
       "cp",
       "/javaagent.jar",
       "/otel-auto-instrumentation/javaagent.jar"
     ],
     "mountPoints": [
       {
         "sourceVolume": "opentelemetry-auto-instrumentation",
         "containerPath": "/otel-auto-instrumentation",
         "readOnly": false
       }
     ]
   }
   ```

1. Add a dependency on the `init` container to make sure that this container finishes before your application container starts.

   ```
   "dependsOn": [
     {
       "containerName": "init",
       "condition": "SUCCESS"
     }
   ]
   ```

1. Add the following environment variables to your application container. You must be using version 1.32.2 or later of the AWS Distro for OpenTelemetry [auto-instrumentation agent for Java](https://opentelemetry.io/docs/zero-code/java/agent/).    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Signals-ECS-Daemon.html)

1. Mount the volume `opentelemetry-auto-instrumentation` that you defined in step 1 of this procedure. If you don't need to enable log correlation with metrics and traces, use the following example for a Java application. If you want to enable log correlation, see the next step instead.

   ```
   {
     "name": "my-app",
      ...
     "environment": [
       {
         "name": "OTEL_RESOURCE_ATTRIBUTES",
         "value": "service.name=$SVC_NAME"
       },
       {
         "name": "OTEL_LOGS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_METRICS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_PROTOCOL",
         "value": "http/protobuf"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED",
         "value": "true"
       },
       {
         "name": "JAVA_TOOL_OPTIONS",
         "value": " -javaagent:/otel-auto-instrumentation/javaagent.jar"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT",
         "value": "http://CW_CONTAINER_IP:4316/v1/metrics"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT",
         "value": "http://CW_CONTAINER_IP:4316/v1/traces"
       },
       {
         "name": "OTEL_TRACES_SAMPLER",
         "value": "xray"
       },
       {
         "name": "OTEL_PROPAGATORS",
         "value": "tracecontext,baggage,b3,xray"
       }
     ],
       "dependsOn": [
       {
         "containerName": "init",
         "condition": "SUCCESS"
       }
     ],
     "mountPoints": [
       {
         "sourceVolume": "opentelemetry-auto-instrumentation",
         "containerPath": "/otel-auto-instrumentation",
         "readOnly": false
       }
     ]
   }
   ```
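In daemon mode, the exporter endpoints point at the CloudWatch agent's host address rather than `localhost`. As an illustration (the helper function is hypothetical, not part of the procedure), the endpoint values can be derived from a single agent address:

```python
def daemon_exporter_endpoints(agent_ip):
    """Build the exporter endpoint values used above from the agent address
    (shown as CW_CONTAINER_IP in the task definition)."""
    base = f"http://{agent_ip}:4316"
    return {
        "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT": f"{base}/v1/metrics",
        "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT": f"{base}/v1/traces",
    }

print(daemon_exporter_endpoints("10.0.0.42"))
```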

------
#### [ Python ]

Before you enable Application Signals for your Python applications, be aware of the following considerations.
+ In some containerized applications, a missing `PYTHONPATH` environment variable can cause the application to fail to start. To resolve this, set the `PYTHONPATH` environment variable to the location of your application's working directory. This is due to a known issue with OpenTelemetry auto-instrumentation. For more information about this issue, see [Python autoinstrumentation setting of PYTHONPATH is not compliant](https://github.com/open-telemetry/opentelemetry-operator/issues/2302).
+ For Django applications, there are additional required configurations, which are outlined in the [OpenTelemetry Python documentation](https://opentelemetry-python.readthedocs.io/en/latest/examples/django/README.html).
  + Use the `--noreload` flag to prevent automatic reloading.
  + Set the `DJANGO_SETTINGS_MODULE` environment variable to the location of your Django application’s `settings.py` file. This ensures that OpenTelemetry can correctly access and integrate with your Django settings. 
+ If you're using a WSGI server for your Python application, in addition to the following steps in this section, see [No Application Signals data for Python application that uses a WSGI server](CloudWatch-Application-Signals-Enable-Troubleshoot.md#Application-Signals-troubleshoot-Python-WSGI) for information to make Application Signals work.

**To instrument your Python application on Amazon ECS with the CloudWatch agent**

1. First, specify a bind mount. This volume is used to share files across containers in the later steps of this procedure.

   ```
   "volumes": [
     {
       "name": "opentelemetry-auto-instrumentation-python"
     }
   ]
   ```

1. Append a new container `init` to your application's task definition. Replace *\$IMAGE* with the latest image from the [AWS Distro for OpenTelemetry Amazon ECR image repository](https://gallery.ecr.aws/aws-observability/adot-autoinstrumentation-python).

   ```
   {
       "name": "init",
       "image": "$IMAGE",
       "essential": false,
       "command": [
           "cp",
           "-a",
           "/autoinstrumentation/.",
           "/otel-auto-instrumentation-python"
       ],
       "mountPoints": [
           {
               "sourceVolume": "opentelemetry-auto-instrumentation-python",
               "containerPath": "/otel-auto-instrumentation-python",
               "readOnly": false
           }
       ]
   }
   ```

1. Add a dependency on the `init` container to make sure that this container finishes before your application container starts.

   ```
   "dependsOn": [
     {
       "containerName": "init",
       "condition": "SUCCESS"
     }
   ]
   ```

1. Add the following environment variables to your application container.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Signals-ECS-Daemon.html)

1. Mount the volume `opentelemetry-auto-instrumentation-python` that you defined in step 1 of this procedure. If you don't need to enable log correlation with metrics and traces, use the following example for a Python application. If you want to enable log correlation, see the next step instead. 

   ```
   {
     "name": "my-app",
     ...
     "environment": [
       {
         "name": "PYTHONPATH",
         "value": "/otel-auto-instrumentation-python/opentelemetry/instrumentation/auto_instrumentation:$APP_PATH:/otel-auto-instrumentation-python"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_PROTOCOL",
         "value": "http/protobuf"
       },
       {
         "name": "OTEL_TRACES_SAMPLER",
         "value": "xray"
       },
       {
         "name": "OTEL_TRACES_SAMPLER_ARG",
         "value": "endpoint=http://CW_CONTAINER_IP:2000"
       },
       {
         "name": "OTEL_LOGS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_PYTHON_DISTRO",
         "value": "aws_distro"
       },
       {
         "name": "OTEL_PYTHON_CONFIGURATOR",
         "value": "aws_configurator"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT",
         "value": "http://CW_CONTAINER_IP:4316/v1/traces"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT",
         "value": "http://CW_CONTAINER_IP:4316/v1/metrics"
       },
       {
         "name": "OTEL_METRICS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED",
         "value": "true"
       },
       {
         "name": "OTEL_RESOURCE_ATTRIBUTES",
         "value": "service.name=$SVC_NAME"
       },
       {
         "name": "DJANGO_SETTINGS_MODULE",
         "value": "$PATH_TO_SETTINGS.settings"
       }
     ],
     "dependsOn": [
       {
         "containerName": "init",
         "condition": "SUCCESS"
       }
     ],
     "mountPoints": [
       {
         "sourceVolume": "opentelemetry-auto-instrumentation-python",
         "containerPath": "/otel-auto-instrumentation-python",
         "readOnly": false
       }
     ]
   }
   ```

1. (Optional) To enable log correlation, do the following before you mount the volume. In `OTEL_RESOURCE_ATTRIBUTES`, set an additional attribute, `aws.log.group.names`, to the log groups of your application. By doing so, the traces and metrics from your application can be correlated with the relevant log entries from these log groups. For this attribute, replace *\$YOUR\_APPLICATION\_LOG\_GROUP* with the log group names for your application. If you have multiple log groups, use an ampersand (`&`) to separate them, as in this example: `aws.log.group.names=log-group-1&log-group-2`. To enable metric to log correlation, setting this environment variable is enough. For more information, see [Enable metric to log correlation](Application-Signals-MetricLogCorrelation.md). To enable trace to log correlation, you must also change the logging configuration in your application. For more information, see [Enable trace to log correlation](Application-Signals-TraceLogCorrelation.md).

   The following is an example. To enable log correlation, use this example when you mount the volume `opentelemetry-auto-instrumentation-python` that you defined in step 1 of this procedure.

   ```
   {
     "name": "my-app",
     ...
     "environment": [
       {
         "name": "PYTHONPATH",
         "value": "/otel-auto-instrumentation-python/opentelemetry/instrumentation/auto_instrumentation:$APP_PATH:/otel-auto-instrumentation-python"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_PROTOCOL",
         "value": "http/protobuf"
       },
       {
         "name": "OTEL_TRACES_SAMPLER",
         "value": "xray"
       },
       {
         "name": "OTEL_TRACES_SAMPLER_ARG",
         "value": "endpoint=http://CW_CONTAINER_IP:2000"
       },
       {
         "name": "OTEL_LOGS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_PYTHON_DISTRO",
         "value": "aws_distro"
       },
       {
         "name": "OTEL_PYTHON_CONFIGURATOR",
         "value": "aws_configurator"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT",
         "value": "http://CW_CONTAINER_IP:4316/v1/traces"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT",
         "value": "http://CW_CONTAINER_IP:4316/v1/metrics"
       },
       {
         "name": "OTEL_METRICS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED",
         "value": "true"
       },
       {
         "name": "OTEL_RESOURCE_ATTRIBUTES",
         "value": "aws.log.group.names=$YOUR_APPLICATION_LOG_GROUP,service.name=$SVC_NAME"
       },
       {
         "name": "DJANGO_SETTINGS_MODULE",
         "value": "$PATH_TO_SETTINGS.settings"
       }
     ],
     "dependsOn": [
       {
         "containerName": "init",
         "condition": "SUCCESS"
       }
     ],
     "mountPoints": [
       {
         "sourceVolume": "opentelemetry-auto-instrumentation-python",
         "containerPath": "/otel-auto-instrumentation-python",
         "readOnly": false
       }
     ]
   }
   ```
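The `PYTHONPATH` value in the examples above places the auto-instrumentation bootstrap directory before your application path and the SDK directory after it. As an illustration only (`$APP_PATH` stands for your application's working directory, as in the task definition), a small helper shows the construction:

```python
def build_pythonpath(app_path):
    # Bootstrap directory first, then the application path, then the SDK
    # directory, matching the PYTHONPATH value in the task definition above.
    auto = "/otel-auto-instrumentation-python"
    return (f"{auto}/opentelemetry/instrumentation/auto_instrumentation"
            f":{app_path}:{auto}")

print(build_pythonpath("/app"))
```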

------
#### [ .NET ]

**To instrument your application on Amazon ECS with the CloudWatch agent**

1. First, specify a bind mount. This volume is used to share files across containers in the later steps of this procedure.

   ```
   "volumes": [
     {
       "name": "opentelemetry-auto-instrumentation"
     }
   ]
   ```

1. Append a new container `init` to your application's task definition. Replace *\$IMAGE* with the latest image from the [AWS Distro for OpenTelemetry Amazon ECR image repository](https://gallery.ecr.aws/aws-observability/adot-autoinstrumentation-dotnet).

   For a Linux container instance, use the following.

   ```
   {
     "name": "init",
     "image": "$IMAGE",
     "essential": false,
     "command": [
         "cp",
         "-a",
          "/autoinstrumentation/.",
         "/otel-auto-instrumentation"
     ],
     "mountPoints": [
         {
             "sourceVolume": "opentelemetry-auto-instrumentation",
             "containerPath": "/otel-auto-instrumentation",
             "readOnly": false
         }
     ]
   }
   ```

   For a Windows Server container instance, use the following.

   ```
   {
     "name": "init",
     "image": "$IMAGE",
     "essential": false,
     "command": [
         "CMD",
         "/c",
         "xcopy",
         "/e",
         "C:\\autoinstrumentation\\*",
         "C:\\otel-auto-instrumentation",
         "&&",
         "icacls",
         "C:\\otel-auto-instrumentation",
         "/grant",
         "*S-1-1-0:R",
         "/T"
     ],
     "mountPoints": [
         {
             "sourceVolume": "opentelemetry-auto-instrumentation",
             "containerPath": "C:\\otel-auto-instrumentation",
             "readOnly": false
         }
     ]
   }
   ```

1. Add a dependency on the `init` container to make sure that this container finishes before your application container starts.

   ```
   "dependsOn": [
       {
           "containerName": "init",
           "condition": "SUCCESS"
       }
   ]
   ```

1. Add the following environment variables to your application container. You must be using version 1.1.0 or later of the AWS Distro for OpenTelemetry [auto-instrumentation agent for .NET](https://opentelemetry.io/docs/zero-code/net/).    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Signals-ECS-Daemon.html)

1. Mount the volume `opentelemetry-auto-instrumentation` that you defined in step 1 of this procedure. For Linux, use the following.

   ```
   {
       "name": "my-app",
      ...
    "environment": [
        {
            "name": "OTEL_RESOURCE_ATTRIBUTES",
            "value": "aws.log.group.names=$YOUR_APPLICATION_LOG_GROUP,service.name=$SVC_NAME"
        },
        {
            "name": "CORECLR_ENABLE_PROFILING",
            "value": "1"
        },
        {
            "name": "CORECLR_PROFILER",
            "value": "{918728DD-259F-4A6A-AC2B-B85E1B658318}"
        },
        {
            "name": "CORECLR_PROFILER_PATH",
            "value": "/otel-auto-instrumentation/linux-x64/OpenTelemetry.AutoInstrumentation.Native.so"
        },
        {
            "name": "DOTNET_ADDITIONAL_DEPS",
            "value": "/otel-auto-instrumentation/AdditionalDeps"
        },
        {
            "name": "DOTNET_SHARED_STORE",
            "value": "/otel-auto-instrumentation/store"
        },
        {
            "name": "DOTNET_STARTUP_HOOKS",
            "value": "/otel-auto-instrumentation/net/OpenTelemetry.AutoInstrumentation.StartupHook.dll"
        },
        {
            "name": "OTEL_DOTNET_AUTO_HOME",
            "value": "/otel-auto-instrumentation"
        },
        {
            "name": "OTEL_DOTNET_AUTO_PLUGINS",
            "value": "AWS.Distro.OpenTelemetry.AutoInstrumentation.Plugin, AWS.Distro.OpenTelemetry.AutoInstrumentation"
        },
           {
               "name": "OTEL_LOGS_EXPORTER",
               "value": "none"
           },
           {
               "name": "OTEL_METRICS_EXPORTER",
               "value": "none"
           },
           {
               "name": "OTEL_EXPORTER_OTLP_PROTOCOL",
               "value": "http/protobuf"
           },
           {
               "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED",
               "value": "true"
           },
           {
               "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT",
            "value": "http://CW_CONTAINER_IP:4316/v1/metrics"
           },
           {
               "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT",
               "value": "http://CW_CONTAINER_IP:4316/v1/traces"
           },
           {
               "name": "OTEL_EXPORTER_OTLP_ENDPOINT",
               "value": "http://CW_CONTAINER_IP:4316"
           },
           {
              "name": "OTEL_TRACES_SAMPLER",
              "value": "xray"
          },
          {
              "name": "OTEL_TRACES_SAMPLER_ARG",
              "value": "endpoint=http://CW_CONTAINER_IP:2000"
          },
           {
               "name": "OTEL_PROPAGATORS",
               "value": "tracecontext,baggage,b3,xray"
           }
       ],
       "mountPoints": [
           {
               "sourceVolume": "opentelemetry-auto-instrumentation",
               "containerPath": "/otel-auto-instrumentation",
               "readOnly": false
           }
       ],
       "dependsOn": [
           {
               "containerName": "init",
               "condition": "SUCCESS"
           }
      ]
   }
   ```

   For Windows Server, use the following.

   ```
   {
       "name": "my-app",
      ...
    "environment": [
        {
            "name": "OTEL_RESOURCE_ATTRIBUTES",
            "value": "aws.log.group.names=$YOUR_APPLICATION_LOG_GROUP,service.name=$SVC_NAME"
        },
        {
            "name": "CORECLR_ENABLE_PROFILING",
            "value": "1"
        },
        {
            "name": "CORECLR_PROFILER",
            "value": "{918728DD-259F-4A6A-AC2B-B85E1B658318}"
        },
        {
            "name": "CORECLR_PROFILER_PATH",
            "value": "C:\\otel-auto-instrumentation\\win-x64\\OpenTelemetry.AutoInstrumentation.Native.dll"
        },
        {
            "name": "DOTNET_ADDITIONAL_DEPS",
            "value": "C:\\otel-auto-instrumentation\\AdditionalDeps"
        },
        {
            "name": "DOTNET_SHARED_STORE",
            "value": "C:\\otel-auto-instrumentation\\store"
        },
        {
            "name": "DOTNET_STARTUP_HOOKS",
            "value": "C:\\otel-auto-instrumentation\\net\\OpenTelemetry.AutoInstrumentation.StartupHook.dll"
        },
        {
            "name": "OTEL_DOTNET_AUTO_HOME",
            "value": "C:\\otel-auto-instrumentation"
        },
        {
            "name": "OTEL_DOTNET_AUTO_PLUGINS",
            "value": "AWS.Distro.OpenTelemetry.AutoInstrumentation.Plugin, AWS.Distro.OpenTelemetry.AutoInstrumentation"
        },
           {
               "name": "OTEL_LOGS_EXPORTER",
               "value": "none"
           },
           {
               "name": "OTEL_METRICS_EXPORTER",
               "value": "none"
           },
           {
               "name": "OTEL_EXPORTER_OTLP_PROTOCOL",
               "value": "http/protobuf"
           },
           {
               "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED",
               "value": "true"
           },
           {
               "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT",
               "value": "http://CW_CONTAINER_IP:4316/v1/metrics"
           },
           {
               "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT",
               "value": "http://CW_CONTAINER_IP:4316/v1/traces"
           },
           {
               "name": "OTEL_EXPORTER_OTLP_ENDPOINT",
               "value": "http://CW_CONTAINER_IP:4316"
           },
           {
              "name": "OTEL_TRACES_SAMPLER",
              "value": "xray"
          },
          {
              "name": "OTEL_TRACES_SAMPLER_ARG",
              "value": "endpoint=http://CW_CONTAINER_IP:2000"
          },
           {
               "name": "OTEL_PROPAGATORS",
               "value": "tracecontext,baggage,b3,xray"
           }
       ],
       "mountPoints": [
           {
               "sourceVolume": "opentelemetry-auto-instrumentation",
               "containerPath": "C:\\otel-auto-instrumentation",
               "readOnly": false
           }
       ],
       "dependsOn": [
           {
               "containerName": "init",
               "condition": "SUCCESS"
           }
      ]
   }
   ```
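   In both examples, *CW_CONTAINER_IP* stands for the address where the CloudWatch agent daemon is listening. As a rough sketch of how the endpoint values above relate to that address (the `agentEndpoints` function and the sample IP are illustrative only, not part of any AWS API):

   ```typescript
   // Illustrative sketch: derive the daemon-mode exporter endpoint values
   // from the CloudWatch agent's address (CW_CONTAINER_IP above).
   function agentEndpoints(hostIp: string): Record<string, string> {
     return {
       // Application Signals metrics (OTLP, port 4316)
       OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT: `http://${hostIp}:4316/v1/metrics`,
       // Traces (OTLP, port 4316)
       OTEL_EXPORTER_OTLP_TRACES_ENDPOINT: `http://${hostIp}:4316/v1/traces`,
       // Base OTLP endpoint
       OTEL_EXPORTER_OTLP_ENDPOINT: `http://${hostIp}:4316`,
       // X-Ray sampler (port 2000)
       OTEL_TRACES_SAMPLER_ARG: `endpoint=http://${hostIp}:2000`,
     };
   }

   console.log(agentEndpoints("10.0.1.23").OTEL_EXPORTER_OTLP_TRACES_ENDPOINT);
   ```

   The ports are fixed by the agent's Application Signals configuration; only the host address varies between sidecar, daemon, and replica deployments.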

------
#### [ Node.js ]

**Note**  
If you are enabling Application Signals for a Node.js application with ESM, see [Setting up a Node.js application with the ESM module format](#ECSDaemon-NodeJs-ESM) before you start these steps.

**To instrument your application on Amazon ECS with the CloudWatch agent**

1. Specify a bind mount. This volume will be used to share files across containers in later steps of this procedure.

   ```
   "volumes": [
     {
       "name": "opentelemetry-auto-instrumentation-node"
     }
   ]
   ```

1. Add a new `init` container to your application's task definition. Replace *$IMAGE* with the latest image from the [AWS Distro for OpenTelemetry Amazon ECR image repository](https://gallery.ecr.aws/aws-observability/adot-autoinstrumentation-node).

   ```
   {
     "name": "init",
     "image": "$IMAGE",
     "essential": false,
     "command": [
       "cp",
       "-a",
       "/autoinstrumentation/.",
       "/otel-auto-instrumentation-node"
     ],
     "mountPoints": [
       {
         "sourceVolume": "opentelemetry-auto-instrumentation-node",
         "containerPath": "/otel-auto-instrumentation-node",
         "readOnly": false
       }
      ]
   }
   ```

1. Add a dependency on the `init` container to make sure that this container finishes before your application container starts.

   ```
   "dependsOn": [
     {
       "containerName": "init",
       "condition": "SUCCESS"
     }
   ]
   ```

1. Add the following environment variables to your application container.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Signals-ECS-Daemon.html)

1. Mount the volume `opentelemetry-auto-instrumentation-node` that you defined in step 1 of this procedure, and add a dependency on the `init` container so that it finishes before your application container starts. If you don't need to enable log correlation with metrics and traces, use the following example for a Node.js application. If you want to enable log correlation, see the next step instead.

   ```
   {
     "name": "my-app",
      ...
     "environment": [
       {
         "name": "OTEL_RESOURCE_ATTRIBUTES",
         "value": "service.name=$SVC_NAME"
       },
       {
         "name": "OTEL_LOGS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_METRICS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_PROTOCOL",
         "value": "http/protobuf"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED",
         "value": "true"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT",
         "value": "http://CW_CONTAINER_IP:4316/v1/metrics"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT",
         "value": "http://CW_CONTAINER_IP:4316/v1/traces"
       },
       {
         "name": "OTEL_TRACES_SAMPLER",
         "value": "xray"
       },
       {
         "name": "OTEL_TRACES_SAMPLER_ARG",
         "value": "endpoint=http://CW_CONTAINER_IP:2000"
       },
       {
         "name": "NODE_OPTIONS",
         "value": "--require /otel-auto-instrumentation-node/autoinstrumentation.js"
       }
       ],
     "mountPoints": [
       {
         "sourceVolume": "opentelemetry-auto-instrumentation-node",
         "containerPath": "/otel-auto-instrumentation-node",
         "readOnly": false
       }
     ],
     "dependsOn": [
       {
         "containerName": "init",
         "condition": "SUCCESS"
       }
     ]
   }
   ```

1. (Optional) To enable log correlation, do the following before you mount the volume. In `OTEL_RESOURCE_ATTRIBUTES`, set an additional attribute `aws.log.group.names` to the log groups of your application. By doing so, the traces and metrics from your application can be correlated with the relevant log entries from those log groups. For this attribute, replace *$YOUR\_APPLICATION\_LOG\_GROUP* with the log group names for your application. If you have multiple log groups, use an ampersand (`&`) to separate them, as in this example: `aws.log.group.names=log-group-1&log-group-2`. To enable metric to log correlation, setting this environment variable is sufficient. For more information, see [Enable metric to log correlation](Application-Signals-MetricLogCorrelation.md). To enable trace to log correlation, you must also change the logging configuration in your application. For more information, see [Enable trace to log correlation](Application-Signals-TraceLogCorrelation.md).

   The following is an example. Use this example to enable log correlation when you mount the volume `opentelemetry-auto-instrumentation-node` that you defined in step 1 of this procedure.

   ```
   {
     "name": "my-app",
      ...
     "environment": [
       {
         "name": "OTEL_RESOURCE_ATTRIBUTES",
         "value": "aws.log.group.names=$YOUR_APPLICATION_LOG_GROUP,service.name=$SVC_NAME"
       },
       {
         "name": "OTEL_LOGS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_METRICS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_PROTOCOL",
         "value": "http/protobuf"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED",
         "value": "true"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT",
         "value": "http://CW_CONTAINER_IP:4316/v1/metrics"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT",
         "value": "http://CW_CONTAINER_IP:4316/v1/traces"
       },
       {
         "name": "OTEL_TRACES_SAMPLER",
         "value": "xray"
       },
       {
         "name": "OTEL_TRACES_SAMPLER_ARG",
         "value": "endpoint=http://CW_CONTAINER_IP:2000"
       },
       {
         "name": "NODE_OPTIONS",
         "value": "--require /otel-auto-instrumentation-node/autoinstrumentation.js"
       }
       ],
     "mountPoints": [
       {
         "sourceVolume": "opentelemetry-auto-instrumentation-node",
         "containerPath": "/otel-auto-instrumentation-node",
         "readOnly": false
       }
     ],
     "dependsOn": [
       {
         "containerName": "init",
         "condition": "SUCCESS"
       }
     ]
   }
   ```

<a name="ECSDaemon-NodeJs-ESM"></a>
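The `&`-separated log group list and the `service.name` attribute described above combine into a single `OTEL_RESOURCE_ATTRIBUTES` string. A minimal sketch of that composition (the `buildResourceAttributes` helper and the sample names are illustrative, not an AWS or OpenTelemetry API):

```typescript
// Illustrative helper: build the OTEL_RESOURCE_ATTRIBUTES value from
// one or more log group names plus a service name.
function buildResourceAttributes(logGroups: string[], serviceName: string): string {
  // Multiple log groups are joined with "&"; attributes are separated by ","
  return `aws.log.group.names=${logGroups.join("&")},service.name=${serviceName}`;
}

console.log(buildResourceAttributes(["log-group-1", "log-group-2"], "my-node-service"));
```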

**Setting up a Node.js application with the ESM module format**

We provide limited support for Node.js applications with the ESM module format. For details, see [Known limitations about Node.js with ESM](CloudWatch-Application-Signals-supportmatrix.md#ESM-limitations).

For the ESM module format, using the `init` container to inject the Node.js instrumentation SDK doesn't apply. To enable Application Signals for Node.js with ESM, skip steps 1 and 2 of the previous procedure, and do the following instead.

**To enable Application Signals for a Node.js application with ESM**

1. Install the relevant dependencies to your Node.js application for autoinstrumentation:

   ```
   npm install @aws/aws-distro-opentelemetry-node-autoinstrumentation
   npm install @opentelemetry/instrumentation@0.54.0
   ```

1. In steps 4 and 5 in the previous procedure, remove the mounting of the volume `opentelemetry-auto-instrumentation-node`:

   ```
   "mountPoints": [
       {
           "sourceVolume": "opentelemetry-auto-instrumentation-node",
           "containerPath": "/otel-auto-instrumentation-node",
           "readOnly": false
       }
    ]
   ```

   Then replace the `NODE_OPTIONS` environment variable with the following.

   ```
   {
       "name": "NODE_OPTIONS",
       "value": "--import @aws/aws-distro-opentelemetry-node-autoinstrumentation/register --experimental-loader=@opentelemetry/instrumentation/hook.mjs"
   }
   ```

------

## Step 6: Deploy your application
<a name="Application-Signals-Enable-ECS-Deploy-Daemon"></a>

Create a new revision of your task definition and deploy it to your application cluster. You should see two containers in the newly created task:
+ `init` – A required container for initializing Application Signals.
+ `my-app` – This is the example application container in our documentation. In your actual workloads, this specific container might not exist or might be replaced with your own service containers.

## (Optional) Step 7: Monitor your application health
<a name="CloudWatch-Application-Signals-Monitor-daemon"></a>

Once you have enabled your applications on Amazon ECS, you can monitor your application health. For more information, see [Monitor the operational health of your applications with Application Signals](Services.md).

# Enable Application Signals on Amazon ECS using AWS CDK
<a name="CloudWatch-Application-Signals-EKS-CDK"></a>

To enable Application Signals on Amazon ECS using AWS CDK, do the following.

1. Enable Application Signals for your applications – If you haven't enabled Application Signals in this account yet, you must grant Application Signals the permissions it needs to discover your services.

   ```
   import { aws_applicationsignals as applicationsignals } from 'aws-cdk-lib';
   
   const cfnDiscovery = new applicationsignals.CfnDiscovery(this,
     'ApplicationSignalsServiceRole', { }
   );
   ```

   The Discovery CloudFormation resource grants Application Signals the following permissions:
   + `xray:GetServiceGraph`
   + `logs:StartQuery`
   + `logs:GetQueryResults`
   + `cloudwatch:GetMetricData`
   + `cloudwatch:ListMetrics`
   + `tag:GetResources`

   For more information about this role, see [Service-linked role permissions for CloudWatch Application Signals](using-service-linked-roles.md#service-linked-role-signals).

1. Instrument your application with the [AWS::ApplicationSignals Construct Library](https://www.npmjs.com/package/@aws-cdk/aws-applicationsignals-alpha) in the AWS CDK. The code snippets in this document are provided in *TypeScript*. For other language-specific alternatives, see [Supported programming languages for the AWS CDK](https://docs.aws.amazon.com/cdk/v2/guide/languages.html). 
   + **Enable Application Signals on Amazon ECS with sidecar mode**

     1. Configure `instrumentation` to instrument the application with the AWS Distro for OpenTelemetry (ADOT) SDK Agent. The following is an example of instrumenting a Java application. See [InstrumentationVersion](https://docs.aws.amazon.com/cdk/api/v2/docs/@aws-cdk_aws-applicationsignals-alpha.InstrumentationVersion.html) for all supported language versions.

     1. Specify `cloudWatchAgentSidecar` to configure the CloudWatch Agent as a sidecar container.

        ```
        import { Construct } from 'constructs';
        import * as appsignals from '@aws-cdk/aws-applicationsignals-alpha';
        import * as cdk from 'aws-cdk-lib';
        import * as ec2 from 'aws-cdk-lib/aws-ec2';
        import * as ecs from 'aws-cdk-lib/aws-ecs';
        
        class MyStack extends cdk.Stack {
          public constructor(scope?: Construct, id?: string, props: cdk.StackProps = {}) {
             super(scope, id, props);
            const vpc = new ec2.Vpc(this, 'TestVpc', {});
            const cluster = new ecs.Cluster(this, 'TestCluster', { vpc });
        
            const fargateTaskDefinition = new ecs.FargateTaskDefinition(this, 'SampleAppTaskDefinition', {
              cpu: 2048,
              memoryLimitMiB: 4096,
            });
        
            fargateTaskDefinition.addContainer('app', {
              image: ecs.ContainerImage.fromRegistry('test/sample-app'),
            });
        
            new appsignals.ApplicationSignalsIntegration(this, 'ApplicationSignalsIntegration', {
              taskDefinition: fargateTaskDefinition,
              instrumentation: {
                sdkVersion: appsignals.JavaInstrumentationVersion.V2_10_0,
              },
              serviceName: 'sample-app',
              cloudWatchAgentSidecar: {
                containerName: 'ecs-cwagent',
                enableLogging: true,
                cpu: 256,
                memoryLimitMiB: 512,
              }
            });
        
            new ecs.FargateService(this, 'MySampleApp', {
              cluster: cluster,
              taskDefinition: fargateTaskDefinition,
              desiredCount: 1,
            });
          }
        }
        ```
   + **Enable Application Signals on Amazon ECS with daemon mode**
**Note**  
The daemon deployment strategy is not supported on Amazon ECS Fargate and is only supported on Amazon ECS on Amazon EC2.

     1. Run CloudWatch Agent as a daemon service with `HOST` network mode.

     1. Configure `instrumentation` to instrument the application with the ADOT Python Agent.

        ```
        import { Construct } from 'constructs';
        import * as appsignals from '@aws-cdk/aws-applicationsignals-alpha';
        import * as cdk from 'aws-cdk-lib';
        import * as ec2 from 'aws-cdk-lib/aws-ec2';
        import * as ecs from 'aws-cdk-lib/aws-ecs';
        
        class MyStack extends cdk.Stack {
          public constructor(scope?: Construct, id?: string, props: cdk.StackProps = {}) {
            super(scope, id, props);
        
            const vpc = new ec2.Vpc(this, 'TestVpc', {});
            const cluster = new ecs.Cluster(this, 'TestCluster', { vpc });
        
            // Define Task Definition for CloudWatch agent (Daemon)
            const cwAgentTaskDefinition = new ecs.Ec2TaskDefinition(this, 'CloudWatchAgentTaskDefinition', {
              networkMode: ecs.NetworkMode.HOST,
            });
        
            new appsignals.CloudWatchAgentIntegration(this, 'CloudWatchAgentIntegration', {
              taskDefinition: cwAgentTaskDefinition,
              containerName: 'ecs-cwagent',
              enableLogging: false,
              cpu: 128,
              memoryLimitMiB: 64,
              portMappings: [
                {
                  containerPort: 4316,
                  hostPort: 4316,
                },
                {
                  containerPort: 2000,
                  hostPort: 2000,
                },
              ],
            });
        
            // Create the CloudWatch Agent daemon service
            new ecs.Ec2Service(this, 'CloudWatchAgentDaemon', {
              cluster,
              taskDefinition: cwAgentTaskDefinition,
              daemon: true,  // Runs one container per EC2 instance
            });
        
            // Define Task Definition for user application
            const sampleAppTaskDefinition = new ecs.Ec2TaskDefinition(this, 'SampleAppTaskDefinition', {
              networkMode: ecs.NetworkMode.HOST,
            });
        
            sampleAppTaskDefinition.addContainer('app', {
              image: ecs.ContainerImage.fromRegistry('test/sample-app'),
              cpu: 0,
              memoryLimitMiB: 512,
            });
        
            // No CloudWatch Agent sidecar is needed as application container communicates to CloudWatch Agent daemon through host network
            new appsignals.ApplicationSignalsIntegration(this, 'ApplicationSignalsIntegration', {
              taskDefinition: sampleAppTaskDefinition,
              instrumentation: {
                sdkVersion: appsignals.PythonInstrumentationVersion.V0_8_0
              },
              serviceName: 'sample-app'
            });
        
            new ecs.Ec2Service(this, 'MySampleApp', {
              cluster,
              taskDefinition: sampleAppTaskDefinition,
              desiredCount: 1,
            });
          }
        }
        ```
   + **Enable Application Signals on Amazon ECS with replica mode**
**Note**  
Running CloudWatch Agent service using replica mode requires specific security group configurations to enable communication with other services. For Application Signals functionality, configure the security group with the minimum inbound rules: Port 2000 (HTTP) and Port 4316 (HTTP). This configuration ensures proper connectivity between the CloudWatch Agent and dependent services.

     1. Run CloudWatch Agent as a replica service with service connect.

     1. Configure `instrumentation` to instrument the application with the ADOT Python Agent.

     1. Override environment variables by configuring `overrideEnvironments` to use service connect endpoints to communicate to the CloudWatch agent server.

        ```
        import { Construct } from 'constructs';
        import * as appsignals from '@aws-cdk/aws-applicationsignals-alpha';
        import * as cdk from 'aws-cdk-lib';
        import * as ec2 from 'aws-cdk-lib/aws-ec2';
        import * as ecs from 'aws-cdk-lib/aws-ecs';
        import { PrivateDnsNamespace } from 'aws-cdk-lib/aws-servicediscovery';
        
        class MyStack extends cdk.Stack {
          public constructor(scope?: Construct, id?: string, props: cdk.StackProps = {}) {
            super(scope, id, props);
        
            const vpc = new ec2.Vpc(this, 'TestVpc', {});
            const cluster = new ecs.Cluster(this, 'TestCluster', { vpc });
            const dnsNamespace = new PrivateDnsNamespace(this, 'Namespace', {
              vpc,
              name: 'local',
            });
            const securityGroup = new ec2.SecurityGroup(this, 'ECSSG', { vpc });
            securityGroup.addIngressRule(securityGroup, ec2.Port.tcpRange(0, 65535));
        
            // Define Task Definition for CloudWatch agent (Replica)
            const cwAgentTaskDefinition = new ecs.FargateTaskDefinition(this, 'CloudWatchAgentTaskDefinition', {});
        
            new appsignals.CloudWatchAgentIntegration(this, 'CloudWatchAgentIntegration', {
              taskDefinition: cwAgentTaskDefinition,
              containerName: 'ecs-cwagent',
              enableLogging: false,
              cpu: 128,
              memoryLimitMiB: 64,
              portMappings: [
                {
                  name: 'cwagent-4316',
                  containerPort: 4316,
                  hostPort: 4316,
                },
                {
                  name: 'cwagent-2000',
                  containerPort: 2000,
                  hostPort: 2000,
                },
              ],
            });
        
            // Create the CloudWatch Agent replica service with service connect
            new ecs.FargateService(this, 'CloudWatchAgentService', {
              cluster: cluster,
              taskDefinition: cwAgentTaskDefinition,
              securityGroups: [securityGroup],
              serviceConnectConfiguration: {
                namespace: dnsNamespace.namespaceArn,
                services: [
                  {
                    portMappingName: 'cwagent-4316',
                    dnsName: 'cwagent-4316-http',
                    port: 4316,
                  },
                  {
                    portMappingName: 'cwagent-2000',
                    dnsName: 'cwagent-2000-http',
                    port: 2000,
                  },
                ],
              },
              desiredCount: 1,
            });
        
            // Define Task Definition for user application
            const sampleAppTaskDefinition = new ecs.FargateTaskDefinition(this, 'SampleAppTaskDefinition', {});
        
            sampleAppTaskDefinition.addContainer('app', {
              image: ecs.ContainerImage.fromRegistry('test/sample-app'),
              cpu: 0,
              memoryLimitMiB: 512,
            });
        
            // Overwrite environment variables to connect to the CloudWatch Agent service just created
            new appsignals.ApplicationSignalsIntegration(this, 'ApplicationSignalsIntegration', {
              taskDefinition: sampleAppTaskDefinition,
              instrumentation: {
                sdkVersion: appsignals.PythonInstrumentationVersion.V0_8_0,
              },
              serviceName: 'sample-app',
              overrideEnvironments: [
                {
                  name: appsignals.CommonExporting.OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT,
                  value: 'http://cwagent-4316-http:4316/v1/metrics',
                },
                {
                  name: appsignals.TraceExporting.OTEL_EXPORTER_OTLP_TRACES_ENDPOINT,
                  value: 'http://cwagent-4316-http:4316/v1/traces',
                },
                {
                  name: appsignals.TraceExporting.OTEL_TRACES_SAMPLER_ARG,
                  value: 'endpoint=http://cwagent-2000-http:2000',
                },
              ],
            });
        
            // Create ECS Service with service connect configuration
            new ecs.FargateService(this, 'MySampleApp', {
              cluster: cluster,
              taskDefinition: sampleAppTaskDefinition,
              serviceConnectConfiguration: {
                namespace: dnsNamespace.namespaceArn,
              },
              desiredCount: 1,
            });
          }
        }
        ```

1. Setting up a Node.js application with the ESM module format – There is limited support for Node.js applications with the ESM module format. For more information, see [Known limitations about Node.js with ESM](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Signals-supportmatrix.html#ESM-limitations).

   For the ESM module format, enabling Application Signals by using the `init` container to inject the Node.js instrumentation SDK doesn't apply. Skip Step 2 in this procedure, and do the following instead.
   + Install the relevant dependencies to your Node.js application for autoinstrumentation.

     ```
     npm install @aws/aws-distro-opentelemetry-node-autoinstrumentation
      npm install @opentelemetry/instrumentation@0.54.0
     ```
   + Update the task definition.

     1. Add additional configuration to your application container.

     1. Configure `NODE_OPTIONS`.

     1. (Optional) Add CloudWatch Agent if you choose sidecar mode.

        ```
         import { Construct } from 'constructs';
         import * as appsignals from '@aws-cdk/aws-applicationsignals-alpha';
         import * as cdk from 'aws-cdk-lib';
         import * as ecs from 'aws-cdk-lib/aws-ecs';
         
         class MyStack extends cdk.Stack {
           public constructor(scope?: Construct, id?: string, props: cdk.StackProps = {}) {
             super(scope, id, props);
             const fargateTaskDefinition = new ecs.FargateTaskDefinition(this, 'TestTaskDefinition', {
               cpu: 256,
               memoryLimitMiB: 512,
             });
             const appContainer = fargateTaskDefinition.addContainer('app', {
               image: ecs.ContainerImage.fromRegistry('docker/cdk-test'),
             });
         
             const volumeName = 'opentelemetry-auto-instrumentation';
             fargateTaskDefinition.addVolume({ name: volumeName });
         
             // Inject additional configurations
             const injector = new appsignals.NodeInjector(volumeName, appsignals.NodeInstrumentationVersion.V0_5_0);
             injector.renderDefaultContainer(fargateTaskDefinition);
             // Configure NODE_OPTIONS
             appContainer.addEnvironment('NODE_OPTIONS', '--import @aws/aws-distro-opentelemetry-node-autoinstrumentation/register --experimental-loader=@opentelemetry/instrumentation/hook.mjs');
         
             // Optional: add the CloudWatch agent if you choose sidecar mode
             const cwAgent = new appsignals.CloudWatchAgentIntegration(this, 'AddCloudWatchAgent', {
               containerName: 'ecs-cwagent',
               taskDefinition: fargateTaskDefinition,
               memoryReservationMiB: 50,
             });
             appContainer.addContainerDependencies({
               container: cwAgent.agentContainer,
               condition: ecs.ContainerDependencyCondition.START,
             });
           }
         }
        ```

1. Deploy the updated stack – Run the `cdk synth` command in your application's main directory. To deploy the service in your AWS account, run the `cdk deploy` command in your application's main directory.

   If you used the sidecar strategy, you'll see one service created:
   + *APPLICATION\$1SERVICE* is the service of your application. It includes the three following containers:
     + `init`– A required container for initializing Application Signals.
     + `ecs-cwagent`– A container running the CloudWatch agent
     + `my-app`– This is the example application container in our documentation. In your actual workloads, this specific container might not exist or might be replaced with your own service containers.

   If you used the daemon strategy, you'll see two services created:
   + *CloudWatchAgentDaemon * is the CloudWatch agent daemon service.
   + *APPLICATION\$1SERVICE* is the service of your application. It includes the two following containers:
     + `init`– A required container for initializing Application Signals.
     + `my-app`– This is the example application container in our documentation. In your actual workloads, this specific container might not exist or might be replaced with your own service containers.

   If you used the replica strategy, you'll see two services created:
   + *CloudWatchAgentService* is the CloudWatch agent replica service.
   + *APPLICATION\_SERVICE* is the service of your application. It includes the following two containers:
     + `init` – A required container for initializing Application Signals.
     + `my-app` – This is the example application container in our documentation. In your actual workloads, this specific container might not exist or might be replaced with your own service containers.

## Enable Application Signals on Amazon ECS using Model Context Protocol (MCP)
<a name="CloudWatch-Application-Signals-ECS-MCP"></a>

You can use the CloudWatch Application Signals Model Context Protocol (MCP) server to enable Application Signals on your Amazon ECS clusters through conversational AI interactions. This provides a natural language interface for setting up Application Signals monitoring.

The MCP server automates the enablement process by understanding your requirements and generating the appropriate configuration. Instead of manually following console steps or writing CDK code, you can simply describe what you want to enable.

### Prerequisites
<a name="CloudWatch-Application-Signals-ECS-MCP-Prerequisites"></a>

Before using the MCP server to enable Application Signals, ensure you have:
+ A Development Environment that supports MCP (such as Kiro, Claude Desktop, VSCode with MCP extensions, or other MCP-compatible tools)
+ The CloudWatch Application Signals MCP server configured in your IDE. For detailed setup instructions, see [CloudWatch Application Signals MCP Server documentation](https://awslabs.github.io/mcp/servers/cloudwatch-applicationsignals-mcp-server).

### Using the MCP server
<a name="CloudWatch-Application-Signals-ECS-MCP-Usage"></a>

Once you have configured the CloudWatch Application Signals MCP server in your IDE, you can request enablement guidance using natural language prompts. While the coding assistant can infer context from your project structure, providing specific details in your prompts helps ensure more accurate and relevant guidance. Include information such as your application language, Amazon ECS cluster name, deployment strategy (sidecar or daemon), and absolute paths to your infrastructure and application code.

**Best practice prompts (specific and complete):**

```
"Enable Application Signals for my Python service running on ECS.
My app code is in /home/user/flask-api and IaC is in /home/user/flask-api/terraform"

"I want to add observability to my Node.js application on ECS cluster 'production-cluster' using sidecar deployment.
The application code is at /Users/dev/checkout-service and
the task definitions are at /Users/dev/checkout-service/ecs"

"Help me instrument my Java Spring Boot application on ECS with Application Signals using daemon strategy.
Application directory: /opt/apps/payment-api
CDK infrastructure: /opt/apps/payment-api/cdk"
```

**Less effective prompts:**

```
"Enable monitoring for my app"
→ Missing: platform, language, paths

"Enable Application Signals. My code is in ./src and IaC is in ./infrastructure"
→ Problem: Relative paths instead of absolute paths

"Enable Application Signals for my ECS service at /home/user/myapp"
→ Missing: programming language, deployment strategy
```

**Quick template:**

```
"Enable Application Signals for my [LANGUAGE] service on ECS.
Deployment strategy: [sidecar/daemon]
App code: [ABSOLUTE_PATH_TO_APP]
IaC code: [ABSOLUTE_PATH_TO_IAC]"
```

### Benefits of using the MCP server
<a name="CloudWatch-Application-Signals-ECS-MCP-Benefits"></a>

Using the CloudWatch Application Signals MCP server offers several advantages:
+ **Natural language interface:** Describe what you want to enable without memorizing commands or configuration syntax
+ **Context-aware guidance:** The MCP server understands your specific environment and provides tailored recommendations
+ **Reduced errors:** Automated configuration generation minimizes manual typing errors
+ **Faster setup:** Get from intention to implementation more quickly
+ **Learning tool:** See the generated configurations and understand how Application Signals works

### Additional resources
<a name="CloudWatch-Application-Signals-ECS-MCP-MoreInfo"></a>

For more information about configuring and using the CloudWatch Application Signals MCP server, see the [MCP server documentation](https://awslabs.github.io/mcp/servers/cloudwatch-applicationsignals-mcp-server).

# Enable your applications on Kubernetes
<a name="CloudWatch-Application-Signals-Enable-KubernetesMain"></a>

Enable CloudWatch Application Signals on Kubernetes by using the custom setup steps described in this section.

For applications running on Kubernetes, you install and configure the CloudWatch agent and AWS Distro for OpenTelemetry yourself. On these architectures enabled with a custom Application Signals setup, Application Signals doesn't autodiscover the names of your services or the hosts or clusters they run on. You must specify these names during the custom setup, and the names that you specify are what is displayed on Application Signals dashboards.

**Requirements**
+ You have administrator permission on the Kubernetes cluster where you are enabling Application Signals.
+ You must have the AWS CLI installed on the environment where your Kubernetes cluster is running. For more information about installing the AWS CLI, see [Install or update the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
+ You have kubectl and helm installed on your local terminal. For more information, see the [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl) and [Helm](https://helm.sh/) documentation.

## Step 1: Enable Application Signals in your account
<a name="CloudWatch-Application-Signals-Kubernetes"></a>

You must first enable Application Signals in your account. If you haven't, see [Enable Application Signals in your account](CloudWatch-Application-Signals-Enable.md).

## Step 2: Install the CloudWatch agent operator in your cluster
<a name="Application-Signals-Enable-Kubernetes-agent"></a>

Installing the CloudWatch agent operator installs the operator, the CloudWatch agent, and other auto-instrumentation into your cluster. To do so, enter the following command. Replace *$REGION* with your AWS Region. Replace *$YOUR\_CLUSTER\_NAME* with the name that you want to appear for your cluster in Application Signals dashboards.

```
helm repo add aws-observability https://aws-observability.github.io/helm-charts
helm install amazon-cloudwatch-operator aws-observability/amazon-cloudwatch-observability \
--namespace amazon-cloudwatch --create-namespace \
--set region=$REGION \
--set clusterName=$YOUR_CLUSTER_NAME
```

For more information, see [amazon-cloudwatch-observability](https://github.com/aws-observability/helm-charts/tree/main/charts/amazon-cloudwatch-observability) on GitHub.

## Step 3: Set up AWS credentials for your Kubernetes clusters
<a name="Application-Signals-Enable-Kubernetes-credentials"></a>

**Important**  
If your Kubernetes cluster is hosted on Amazon EC2, you can skip this section and proceed to [Step 4: Add annotations](#Application-Signals-Enable-Kubernetes-annotations).

If your Kubernetes cluster is hosted on-premises, you must use the instructions in this section to add AWS credentials to your Kubernetes environment.

**To set up permissions for an on-premises Kubernetes cluster**

1. Create the IAM user to be used to provide permissions to your on-premises host:

   1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

   1. Choose **Users**, **Create user**.

   1. In **User details**, for **User name**, enter a name for the new IAM user. This is the sign-in name for AWS that will be used to authenticate your host. Then choose **Next**.

   1. On the **Set permissions** page, under **Permissions options**, select **Attach policies directly**.

   1. From the **Permissions policies** list, select the **CloudWatchAgentServerPolicy** policy to add to your user. Then choose **Next**.

   1. On the **Review and create** page, ensure that you are satisfied with the user name and that the **CloudWatchAgentServerPolicy** policy is in the **Permissions summary**.

   1. Choose **Create user**.

1. Create and retrieve your AWS access key and secret key:

   1. In the navigation pane in the IAM console, choose **Users** and then select the user name of the user that you created in the previous step.

   1. On the user's page, choose the **Security credentials** tab. Then, in the **Access keys** section, choose **Create access key**.

   1. For **Create access key Step 1**, choose **Command Line Interface (CLI)**.

   1. For **Create access key Step 2**, optionally enter a tag and then choose **Next**.

   1. For **Create access key Step 3**, select **Download .csv file** to save a .csv file with your IAM user's access key and secret access key. You need this information for the next steps.

   1. Choose **Done**.

1. Configure your AWS credentials in your on-premises host by entering the following command. Replace *ACCESS\_KEY\_ID* and *SECRET\_ACCESS\_ID* with your newly generated access key and secret access key from the .csv file that you downloaded in the previous step. By default, the credential file is saved in `/home/user/.aws/credentials`.

   ```
   $ aws configure --profile AmazonCloudWatchAgent
   AWS Access Key ID [None]: ACCESS_KEY_ID
   AWS Secret Access Key [None]: SECRET_ACCESS_ID
   Default region name [None]: MY_REGION
   Default output format [None]: json
   ```

1. Edit the custom resource that the CloudWatch agent installed using the Helm chart to add the newly created AWS credentials secret.

   ```
   kubectl edit amazoncloudwatchagent cloudwatch-agent -n amazon-cloudwatch
   ```

1. While your file editor is open, mount the AWS credentials into the CloudWatch agent container by adding the following configuration to the top of the deployment. Replace the path `/home/user/.aws/credentials` with the location of your local AWS credentials file.

   ```
   apiVersion: cloudwatch.aws.amazon.com/v1alpha1
   kind: AmazonCloudWatchAgent
   metadata:
     name: cloudwatch-agent
     namespace: amazon-cloudwatch
   spec:
     volumeMounts:
     - name: aws-credentials
       mountPath: /root/.aws
       readOnly: true
     volumes:
     - name: aws-credentials
       hostPath:
         path: /home/user/.aws/credentials
   ---
   ```

## Step 4: Add annotations
<a name="Application-Signals-Enable-Kubernetes-annotations"></a>

**Note**  
If you are enabling Application Signals for a Node.js application with ESM, skip the steps in this section and see [Setting up a Node.js application with the ESM module format](#Kubernetes-NodeJs-ESM) instead.

The next step is to instrument your application for CloudWatch Application Signals by adding a language-specific [annotation](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) to your Kubernetes [workload](https://kubernetes.io/docs/concepts/workloads/) or [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/). This annotation auto-instruments your application to send metrics, traces, and logs to Application Signals.

**To add the annotations for Application Signals**

1. You have two options for the annotation:
   + **Annotate Workload** auto-instruments a single workload in a cluster.
   + **Annotate Namespace** auto-instruments all workloads deployed in the selected namespace.

   Choose one of those options, and follow the appropriate steps.

1. To annotate a single workload, enter one of the following commands. Replace *$WORKLOAD\_TYPE* and *$WORKLOAD\_NAME* with the values for your workload.
   + For Java workloads:

     ```
     kubectl patch $WORKLOAD_TYPE $WORKLOAD_NAME -p '{"spec": {"template": {"metadata": {"annotations": {"instrumentation.opentelemetry.io/inject-java": "true"}}}}}'
     ```
   + For Python workloads:

     ```
     kubectl patch $WORKLOAD_TYPE $WORKLOAD_NAME -p '{"spec": {"template": {"metadata": {"annotations": {"instrumentation.opentelemetry.io/inject-python": "true"}}}}}'
     ```

     For Python applications, there are additional required configurations. For more information, see [Python application doesn't start after Application Signals is enabled](CloudWatch-Application-Signals-Enable-Troubleshoot.md#Application-Signals-troubleshoot-starting-Python).
   + For .NET workloads:

     ```
     kubectl patch $WORKLOAD_TYPE $WORKLOAD_NAME -p '{"spec": {"template": {"metadata": {"annotations": {"instrumentation.opentelemetry.io/inject-dotnet": "true"}}}}}'
     ```
**Note**  
To enable Application Signals for a .NET workload on Alpine Linux (`linux-musl-x64`) based images, add the following additional annotation.  

     ```
     instrumentation.opentelemetry.io/otel-dotnet-auto-runtime: "linux-musl-x64"
     ```
   + For Node.js workloads:

     ```
     kubectl patch $WORKLOAD_TYPE $WORKLOAD_NAME -p '{"spec": {"template": {"metadata": {"annotations": {"instrumentation.opentelemetry.io/inject-nodejs": "true"}}}}}'
     ```

1. To annotate all workloads in a namespace, enter one of the following commands. Replace *$NAMESPACE* with the name of your namespace.

   If the namespace includes workloads in more than one language, add each corresponding annotation to the namespace.
   + For Java workloads in the namespace:

     ```
     kubectl annotate ns $NAMESPACE instrumentation.opentelemetry.io/inject-java=true
     ```
   + For Python workloads in the namespace:

     ```
     kubectl annotate ns $NAMESPACE instrumentation.opentelemetry.io/inject-python=true
     ```

     For Python applications, there are additional required configurations. For more information, see [Python application doesn't start after Application Signals is enabled](CloudWatch-Application-Signals-Enable-Troubleshoot.md#Application-Signals-troubleshoot-starting-Python).
   + For .NET workloads in the namespace:

     ```
     kubectl annotate ns $NAMESPACE instrumentation.opentelemetry.io/inject-dotnet=true
     ```
   + For Node.js workloads in the namespace:

     ```
     kubectl annotate ns $NAMESPACE instrumentation.opentelemetry.io/inject-nodejs=true
     ```

   After adding the annotations, restart all pods in the namespace by entering the following command:

   ```
   kubectl rollout restart deployment -n $NAMESPACE
   ```

1. When the previous steps are completed, in the CloudWatch console, choose **Application Signals**, **Services**. This opens the dashboards where you can see the data that Application Signals collects. It might take a few minutes for data to appear.

   For more information about the **Services** view, see [Monitor the operational health of your applications with Application Signals](Services.md).

### Setting up a Node.js application with the ESM module format
<a name="Kubernetes-NodeJs-ESM"></a>

We provide limited support for Node.js applications with the ESM module format. For details, see [Known limitations about Node.js with ESM](CloudWatch-Application-Signals-supportmatrix.md#ESM-limitations).

For the ESM module format, enabling Application Signals by annotating the manifest file doesn’t work. Skip the previous procedure and do the following instead:

**To enable Application Signals for a Node.js application with ESM**

1. Install the relevant dependencies for auto-instrumentation in your Node.js application:

   ```
   npm install @aws/aws-distro-opentelemetry-node-autoinstrumentation
   npm install @opentelemetry/instrumentation@0.54.0
   ```

1. Add the following environment variables to the Dockerfile for your application and build the image.

   ```
   ...
   ENV OTEL_AWS_APPLICATION_SIGNALS_ENABLED=true
   ENV OTEL_TRACES_SAMPLER_ARG='endpoint=http://cloudwatch-agent.amazon-cloudwatch:2000'
   ENV OTEL_TRACES_SAMPLER='xray'
   ENV OTEL_EXPORTER_OTLP_PROTOCOL='http/protobuf'
   ENV OTEL_EXPORTER_OTLP_TRACES_ENDPOINT='http://cloudwatch-agent.amazon-cloudwatch:4316/v1/traces'
   ENV OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT='http://cloudwatch-agent.amazon-cloudwatch:4316/v1/metrics'
   ENV OTEL_METRICS_EXPORTER='none'
   ENV OTEL_LOGS_EXPORTER='none'
   ENV NODE_OPTIONS='--import @aws/aws-distro-opentelemetry-node-autoinstrumentation/register --experimental-loader=@opentelemetry/instrumentation/hook.mjs'
   ENV OTEL_SERVICE_NAME='YOUR_SERVICE_NAME' #replace with a proper service name
   ENV OTEL_PROPAGATORS='tracecontext,baggage,b3,xray'
   ...
   
   # command to start the application
   # for example
   # CMD ["node", "index.mjs"]
   ```

1. Add the environment variables `OTEL_RESOURCE_ATTRIBUTES_POD_NAME`, `OTEL_RESOURCE_ATTRIBUTES_NODE_NAME`, `OTEL_RESOURCE_ATTRIBUTES_DEPLOYMENT_NAME`, `POD_NAMESPACE`, and `OTEL_RESOURCE_ATTRIBUTES` to the deployment YAML file for the application. For example:

   ```
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: nodejs-app
     labels:
       app: nodejs-app
   spec:
     replicas: 2
     selector:
       matchLabels:
         app: nodejs-app
     template:
       metadata:
         labels:
           app: nodejs-app
         # annotations:
         # make sure this annotation doesn't exist
         #   instrumentation.opentelemetry.io/inject-nodejs: 'true'
       spec:
         containers:
         - name: nodejs-app
           image: your-nodejs-application-image # replace with a proper image URI
           imagePullPolicy: Always
           ports:
           - containerPort: 8000
           env:
             - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME
               valueFrom:
                 fieldRef:
                   fieldPath: metadata.name
             - name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME
               valueFrom:
                 fieldRef:
                   fieldPath: spec.nodeName
             - name: OTEL_RESOURCE_ATTRIBUTES_DEPLOYMENT_NAME
               valueFrom:
                 fieldRef:
                   fieldPath: metadata.labels['app'] # Assuming 'app' label is set to the deployment name
             - name: POD_NAMESPACE
               valueFrom:
                 fieldRef:
                   fieldPath: metadata.namespace
             - name: OTEL_RESOURCE_ATTRIBUTES
               value: "k8s.deployment.name=$(OTEL_RESOURCE_ATTRIBUTES_DEPLOYMENT_NAME),k8s.namespace.name=$(POD_NAMESPACE),k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME)"
   ```

1. Deploy the Node.js application to the Kubernetes cluster.

## (Optional) Step 5: Monitor your application health
<a name="CloudWatch-Application-Signals-Monitor-Kubernetes"></a>

Once you have enabled your applications on Kubernetes, you can monitor your application health. For more information, see [Monitor the operational health of your applications with Application Signals](Services.md).

# Enable your applications on Lambda
<a name="CloudWatch-Application-Signals-Enable-LambdaMain"></a>

You can enable Application Signals for your Lambda functions. Application Signals automatically instruments your Lambda functions using enhanced AWS Distro for OpenTelemetry (ADOT) libraries, provided through a Lambda layer. This AWS Lambda Layer for OpenTelemetry packages and deploys the libraries that are required for auto-instrumentation for Application Signals.

In addition to supporting Application Signals, this Lambda layer is also a component of Lambda OpenTelemetry support and provides tracing functionality.

You can also enhance Lambda observability by using Transaction Search, which enables the capture of trace spans for Lambda function invocations without sampling. This feature allows you to collect spans for your functions, unaffected by the `sampled` flag in trace context propagation, which ensures that there is no additional impact on downstream dependent services. By enabling Transaction Search on Lambda, you gain complete visibility into your function performance and can troubleshoot rarely occurring issues. To get started, see [Transaction Search](CloudWatch-Transaction-Search.md).

**Topics**
+ [Getting started](#Application-Signals-Enable-Lambda-Methods-Getting-Started)
+ [Use the CloudWatch Application Signals console](#Enable-Lambda-CWConsole)
+ [Use the Lambda console](#Enable-Lambda-LambdaConsole)
+ [Enable Application Signals on Lambda using AWS CDK](#CloudWatch-Application-Signals-Lambda-CDK)
+ [Enable Application Signals on Lambda using Model Context Protocol (MCP)](#CloudWatch-Application-Signals-Lambda-MCP)
+ [(Optional) Monitor your application health](#CloudWatch-Application-Signals-Monitor-Lambda)
+ [Manually enable Application Signals](#Enable-Lambda-Manually)
+ [Manually disable Application Signals](#Disable-Lambda-Manually)
+ [Configuring Application Signals](#Configuring-Lambda-AppSignals)
+ [AWS Lambda Layer for OpenTelemetry ARNs](#Enable-Lambda-Layers)
+ [Deploy Lambda functions using Amazon ECR container](#containerized-lambda)

## Getting started
<a name="Application-Signals-Enable-Lambda-Methods-Getting-Started"></a>

There are three methods for enabling Application Signals for your Lambda functions:
+ Use the CloudWatch Application Signals console
+ Use the Lambda console
+ Manually add the AWS Lambda Layer for OpenTelemetry to your Lambda function runtime

Each of these methods adds the AWS Lambda Layer for OpenTelemetry to your function.

After you enable Application Signals for a Lambda function, it takes a few minutes for telemetry from that function to appear in the Application Signals console.

## Use the CloudWatch Application Signals console
<a name="Enable-Lambda-CWConsole"></a>

Use these steps to use the Application Signals console to enable Application Signals for a Lambda function.

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the navigation pane, choose **Application Signals**, **Services**.

1. In the **Services** list area, choose **Enable Application Signals**.

1. Choose the **Lambda** tile.

1. Select each function that you want to enable for Application Signals, and then choose **Done**.

## Use the Lambda console
<a name="Enable-Lambda-LambdaConsole"></a>

Use these steps to use the Lambda console to enable Application Signals for a Lambda function.

1. Open the AWS Lambda console at [https://console.aws.amazon.com/lambda/](https://console.aws.amazon.com/lambda/).

1. In the navigation pane, choose **Functions** and then choose the name of the function that you want to enable.

1. Choose the **Configuration** tab, and then choose **Monitoring and operations tools**.

1. Choose **Edit**.

1. In the **CloudWatch Application Signals and X-Ray** section, select both **Automatically collect application traces and standard application metrics with Application Signals** and **Automatically collect Lambda service traces for end to end visibility with X-Ray**.

1. Choose **Save**.

## Enable Application Signals on Lambda using AWS CDK
<a name="CloudWatch-Application-Signals-Lambda-CDK"></a>

If you haven't enabled Application Signals in this account yet, you must grant Application Signals the permissions it needs to discover your services. For more information, see [Enable Application Signals in your account](CloudWatch-Application-Signals-Enable.md).

1. Enable Application Signals for your applications:

   ```
   import { aws_applicationsignals as applicationsignals } from 'aws-cdk-lib';
   
   const cfnDiscovery = new applicationsignals.CfnDiscovery(this,
     'ApplicationSignalsServiceRole', { }
   );
   ```

   The Discovery CloudFormation resource grants Application Signals the following permissions:
   + `xray:GetServiceGraph`
   + `logs:StartQuery`
   + `logs:GetQueryResults`
   + `cloudwatch:GetMetricData`
   + `cloudwatch:ListMetrics`
   + `tag:GetResources`

   For more information about this role, see [Service-linked role permissions for CloudWatch Application Signals](using-service-linked-roles.md#service-linked-role-signals).

1. Add the IAM policy `CloudWatchLambdaApplicationSignalsExecutionRolePolicy` to the Lambda function.

   ```
   const fn = new Function(this, 'DemoFunction', {
       code: Code.fromAsset('$YOUR_LAMBDA.zip'),
       runtime: Runtime.PYTHON_3_12,
       handler: '$YOUR_HANDLER'
   })
   
   fn.role?.addManagedPolicy(ManagedPolicy.fromAwsManagedPolicyName('CloudWatchLambdaApplicationSignalsExecutionRolePolicy'));
   ```

1. Replace `$AWS_LAMBDA_LAYER_FOR_OTEL_ARN` with the actual [AWS Lambda Layer for OpenTelemetry ARN](https://aws-otel.github.io/docs/getting-started/lambda#adot-lambda-layer-arns) for your Region.

   ```
   fn.addLayers(LayerVersion.fromLayerVersionArn(
       this, 'AwsLambdaLayerForOtel',
       '$AWS_LAMBDA_LAYER_FOR_OTEL_ARN'
   ))
   fn.addEnvironment("AWS_LAMBDA_EXEC_WRAPPER", "/opt/otel-instrument");
   ```

## Enable Application Signals on Lambda using Model Context Protocol (MCP)
<a name="CloudWatch-Application-Signals-Lambda-MCP"></a>

You can use the CloudWatch Application Signals Model Context Protocol (MCP) server to enable Application Signals on your Lambda functions through conversational AI interactions. This provides a natural language interface for setting up Application Signals monitoring.

The MCP server automates the enablement process by understanding your requirements and generating the appropriate configuration. Instead of manually following console steps or writing CDK code, you can simply describe what you want to enable.

### Prerequisites
<a name="CloudWatch-Application-Signals-Lambda-MCP-Prerequisites"></a>

Before using the MCP server to enable Application Signals, ensure you have:
+ A Development Environment that supports MCP (such as Kiro, Claude Desktop, VSCode with MCP extensions, or other MCP-compatible tools)
+ The CloudWatch Application Signals MCP server configured in your IDE. For detailed setup instructions, see [CloudWatch Application Signals MCP Server documentation](https://awslabs.github.io/mcp/servers/cloudwatch-applicationsignals-mcp-server).

### Using the MCP server
<a name="CloudWatch-Application-Signals-Lambda-MCP-Usage"></a>

Once you have configured the CloudWatch Application Signals MCP server in your IDE, you can request enablement guidance using natural language prompts. While the coding assistant can infer context from your project structure, providing specific details in your prompts helps ensure more accurate and relevant guidance. Include information such as your Lambda function's programming language, function name, and absolute paths to your Lambda function code and infrastructure code.

**Best practice prompts (specific and complete):**

```
"Enable Application Signals for my Python Lambda function.
My function code is in /home/user/order-processor/lambda and IaC is in /home/user/order-processor/terraform"

"I want to add observability to my Node.js Lambda function 'checkout-handler'.
The function code is at /Users/dev/checkout-function and
the CDK infrastructure is at /Users/dev/checkout-function/cdk"

"Help me instrument my Java Lambda function with Application Signals.
Function directory: /opt/apps/payment-lambda
CDK infrastructure: /opt/apps/payment-lambda/cdk"
```

**Less effective prompts:**

```
"Enable monitoring for my Lambda"
→ Missing: language, paths

"Enable Application Signals. My code is in ./src and IaC is in ./infrastructure"
→ Problem: Relative paths instead of absolute paths

"Enable Application Signals for my Lambda at /home/user/myfunction"
→ Missing: programming language
```

**Quick template:**

```
"Enable Application Signals for my [LANGUAGE] Lambda function.
Function code: [ABSOLUTE_PATH_TO_FUNCTION]
IaC code: [ABSOLUTE_PATH_TO_IAC]"
```

### Benefits of using the MCP server
<a name="CloudWatch-Application-Signals-Lambda-MCP-Benefits"></a>

Using the CloudWatch Application Signals MCP server offers several advantages:
+ **Natural language interface:** Describe what you want to enable without memorizing commands or configuration syntax
+ **Context-aware guidance:** The MCP server understands your specific environment and provides tailored recommendations
+ **Reduced errors:** Automated configuration generation minimizes manual typing errors
+ **Faster setup:** Get from intention to implementation more quickly
+ **Learning tool:** See the generated configurations and understand how Application Signals works

For more information about configuring and using the CloudWatch Application Signals MCP server, see the [MCP server documentation](https://awslabs.github.io/mcp/servers/cloudwatch-applicationsignals-mcp-server).

## (Optional) Monitor your application health
<a name="CloudWatch-Application-Signals-Monitor-Lambda"></a>

Once you have enabled your applications on Lambda, you can monitor your application health. For more information, see [Monitor the operational health of your applications with Application Signals](Services.md).

## Manually enable Application Signals
<a name="Enable-Lambda-Manually"></a>

Use these steps to manually enable Application Signals for a Lambda function.

1. Add the AWS Lambda Layer for OpenTelemetry to your Lambda runtime. To find the layer ARN for your Region, see [ADOT Lambda Layer ARNs](https://aws-otel.github.io/docs/getting-started/lambda#adot-lambda-layer-arns).

1. Add the environment variable `AWS_LAMBDA_EXEC_WRAPPER=/opt/otel-instrument`.

   Add the environment variable `LAMBDA_APPLICATION_SIGNALS_REMOTE_ENVIRONMENT` to configure custom Lambda environments. By default, Lambda environments are configured to `lambda:default`.

1. Attach the AWS managed IAM policy **CloudWatchLambdaApplicationSignalsExecutionRolePolicy** to the Lambda execution role.

1. (Optional) We recommend that you enable Lambda active tracing to get a better tracing experience. For more information, see [ Visualize Lambda function invocations using AWS X-Ray](https://docs.aws.amazon.com/lambda/latest/dg/services-xray.html).
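
The manual steps above (attach the layer, set the wrapper environment variable) can be sketched with boto3, the AWS SDK for Python. The function name and layer ARN below are placeholders, not real values; look up the actual AWS Lambda Layer for OpenTelemetry ARN for your Region and runtime.

```python
# Sketch only: FUNCTION_NAME and OTEL_LAYER_ARN are placeholders.
FUNCTION_NAME = "my-function"
OTEL_LAYER_ARN = "arn:aws:lambda:REGION:ACCOUNT:layer:LAYER_NAME:VERSION"

# Parameters for lambda:UpdateFunctionConfiguration, which both attaches
# the layer and sets the wrapper environment variable in one call.
params = {
    "FunctionName": FUNCTION_NAME,
    "Layers": [OTEL_LAYER_ARN],
    "Environment": {
        "Variables": {"AWS_LAMBDA_EXEC_WRAPPER": "/opt/otel-instrument"}
    },
}

# Uncomment to apply the change (requires AWS credentials and boto3):
# import boto3
# boto3.client("lambda").update_function_configuration(**params)
```

Remember that the execution role still needs the **CloudWatchLambdaApplicationSignalsExecutionRolePolicy** policy attached, as described in step 3.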

## Manually disable Application Signals
<a name="Disable-Lambda-Manually"></a>

To manually disable Application Signals for a Lambda function, remove the AWS Lambda Layer for OpenTelemetry from your Lambda runtime, and remove the `AWS_LAMBDA_EXEC_WRAPPER=/opt/otel-instrument` environment variable.

## Configuring Application Signals
<a name="Configuring-Lambda-AppSignals"></a>

You can use this section to configure Application Signals in Lambda.

 **Grouping multiple Lambda functions into one service** 

The environment variable `OTEL_SERVICE_NAME` sets the name of the service. This name is displayed as the service name for your application in Application Signals dashboards. You can assign the same service name to multiple Lambda functions, and they will be merged into a single service in Application Signals. If you don't set this variable, the Lambda function name is used by default.
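
As an illustrative sketch of this naming behavior (not the actual ADOT implementation), the effective service name resolves like this:

```python
import os

def effective_service_name() -> str:
    # OTEL_SERVICE_NAME takes precedence; otherwise Lambda's built-in
    # AWS_LAMBDA_FUNCTION_NAME variable supplies the default.
    return (
        os.environ.get("OTEL_SERVICE_NAME")
        or os.environ.get("AWS_LAMBDA_FUNCTION_NAME", "UnknownService")
    )
```

Setting the same `OTEL_SERVICE_NAME` value on several functions is what merges them into one service in the dashboards.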

 **Sampling** 

By default, the trace sampling strategy is parent-based. You can adjust the sampling strategy by setting the environment variable `OTEL_TRACES_SAMPLER`.

For example, to set the trace sampling rate to 30%:

```
OTEL_TRACES_SAMPLER=traceidratio
OTEL_TRACES_SAMPLER_ARG=0.3
```

For more information, see [OpenTelemetry Environment Variable Specification](https://opentelemetry.io/docs/specs/otel/configuration/sdk-environment-variables/).
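
To see what a `traceidratio` sampler does, here is a minimal sketch of a ratio-based sampling decision, modeled on OpenTelemetry's `TraceIdRatioBased` sampler (not the actual SDK code):

```python
def should_sample(trace_id: int, ratio: float) -> bool:
    # Compare the low 64 bits of the trace ID against ratio * 2**64, so
    # roughly `ratio` of all (uniformly random) trace IDs are sampled.
    bound = int(ratio * (1 << 64))
    return (trace_id & ((1 << 64) - 1)) < bound
```

Because the decision is a pure function of the trace ID, services that apply the same ratio make a consistent decision for a given trace.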

 **Enabling all library instrumentations** 

To reduce Lambda cold starts, by default only AWS SDK and HTTP instrumentations are enabled for Python, Node, and Java. You can set environment variables to enable instrumentation for other libraries used in your Lambda function.
+ Python – `OTEL_PYTHON_DISABLED_INSTRUMENTATIONS=none`
+ Node – `OTEL_NODE_DISABLED_INSTRUMENTATIONS=none`
+ Java – `OTEL_INSTRUMENTATION_COMMON_DEFAULT_ENABLED=true`

## AWS Lambda Layer for OpenTelemetry ARNs
<a name="Enable-Lambda-Layers"></a>

For the complete list of AWS Lambda Layer for OpenTelemetry ARNs by Region and runtime, see [ADOT Lambda Layer ARNs](https://aws-otel.github.io/docs/getting-started/lambda#adot-lambda-layer-arns) in the AWS Distro for OpenTelemetry documentation. The layer is available for Python, Node.js, .NET, and Java runtimes.

## Deploy Lambda functions using Amazon ECR container
<a name="containerized-lambda"></a>

Lambda functions deployed as container images do not support Lambda Layers in the traditional way. When using container images, you cannot attach a layer as you would with other Lambda deployment methods. Instead, you must manually incorporate the layer’s contents into your container image during the build process.

------
#### [ Java ]

Learn how to download the `layer.zip` artifact and integrate the AWS Lambda Layer for OpenTelemetry into your containerized Java Lambda function to enable Application Signals monitoring.

**Prerequisites**
+ AWS CLI configured with your credentials
+ Docker installed
+ These instructions assume you are on an x86_64 platform

1. **Set Up Project Structure**

   Create a directory for your Lambda function

   ```
   mkdir java-appsignals-container-lambda && \
   cd java-appsignals-container-lambda
   ```

   Create a Maven project structure

   ```
   mkdir -p src/main/java/com/example/java/lambda
   mkdir -p src/main/resources
   ```

1. **Create Dockerfile**

Download and integrate the OpenTelemetry Layer with Application Signals support directly into your Lambda container image. To do this, create the following `Dockerfile`:

   ```
   FROM public.ecr.aws/lambda/java:21
   
   # Install utilities
   RUN dnf install -y unzip wget maven
   
   # Download the OpenTelemetry Layer with AppSignals Support
   RUN wget https://github.com/aws-observability/aws-otel-java-instrumentation/releases/latest/download/layer.zip -O /tmp/layer.zip
   
   # Extract and include Lambda layer contents
   RUN mkdir -p /opt && \
       unzip /tmp/layer.zip -d /opt/ && \
       chmod -R 755 /opt/ && \
       rm /tmp/layer.zip
   
   # Copy and build function code
   COPY pom.xml ${LAMBDA_TASK_ROOT}
   COPY src ${LAMBDA_TASK_ROOT}/src
   RUN mvn clean package -DskipTests
   
   # Copy the JAR file to the Lambda runtime directory (from inside the container)
   RUN mkdir -p ${LAMBDA_TASK_ROOT}/lib/
   RUN cp ${LAMBDA_TASK_ROOT}/target/function.jar ${LAMBDA_TASK_ROOT}/lib/
   
   # Set the handler
   CMD ["com.example.java.lambda.App::handleRequest"]
   ```
**Note**  
The `layer.zip` file contains the OpenTelemetry instrumentation necessary for AWS Application Signals to monitor your Lambda function.  
The layer extraction steps ensure that:  
+ The layer.zip contents are extracted to the `/opt/` directory
+ The `otel-instrument` script receives the necessary execution permissions
+ The temporary layer.zip file is removed to keep the image size small

1. **Lambda function code** – Create a Java file for your Lambda handler at `src/main/java/com/example/java/lambda/App.java`:
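
   The handler source isn't included in this guide. The following is a minimal sketch that matches the handler configured in the Dockerfile (`com.example.java.lambda.App::handleRequest`); it assumes your `pom.xml` declares the `com.amazonaws:aws-lambda-java-core` dependency and builds a `function.jar`:

   ```
   package com.example.java.lambda;

   import com.amazonaws.services.lambda.runtime.Context;
   import com.amazonaws.services.lambda.runtime.RequestHandler;

   import java.util.Map;

   public class App implements RequestHandler<Map<String, Object>, String> {
       @Override
       public String handleRequest(Map<String, Object> event, Context context) {
           // Log the incoming event and return a simple response
           context.getLogger().log("Received event: " + event);
           return "Hello from Lambda!";
       }
   }
   ```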

   Your project should look something like:

   ```
   .
   ├── Dockerfile
   ├── pom.xml
   └── src
       └── main
           ├── java
           │   └── com
           │       └── example
           │           └── java
           │               └── lambda
           │                   └── App.java
           └── resources
   ```

1. **Build and deploy the container image**

   **Set up environment variables**

   ```
   AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
   AWS_REGION=$(aws configure get region)
   
   # For fish shell users:
   # set AWS_ACCOUNT_ID (aws sts get-caller-identity --query Account --output text)
   # set AWS_REGION (aws configure get region)
   ```

   **Authenticate with ECR** 

   First with public ECR (for base image):

   ```
   aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws
   ```

   Then with your private ECR:

   ```
   aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
   ```
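
   **Create Amazon ECR repository (if needed)**

   If the repository doesn't exist yet, create it before pushing (this is the same command that the other runtime tabs use):

   ```
   aws ecr create-repository \
       --repository-name lambda-appsignals-demo \
       --region $AWS_REGION
   ```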

   **Build, tag and push your image**

   ```
   # Build the Docker image
   docker build -t lambda-appsignals-demo .
   
   # Tag the image
   docker tag lambda-appsignals-demo:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/lambda-appsignals-demo:latest
   
   # Push the image
   docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/lambda-appsignals-demo:latest
   ```

1. **Create and configure the Lambda function**

   Create a new function using the Lambda console.

   Select **Container image** as the deployment option.

   Choose **Browse images** to select your Amazon ECR image.

1. **Testing and verification** – Test your Lambda with a simple event. If the layer integration is successful, your Lambda appears under the Application Signals service map.

   You will see traces and metrics for your Lambda function in the CloudWatch console.
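
   For example, you can invoke the function from the AWS CLI (this assumes you named the function `lambda-appsignals-demo`):

   ```
   aws lambda invoke \
       --function-name lambda-appsignals-demo \
       --cli-binary-format raw-in-base64-out \
       --payload '{}' \
       response.json
   ```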

**Troubleshooting**

If Application Signals is not working, check the following:
+ Check the function logs for any errors related to the OpenTelemetry instrumentation
+ Verify if the environment variable `AWS_LAMBDA_EXEC_WRAPPER` is set correctly
+ Make sure the layer extraction in the Docker file completed successfully
+ Confirm if the IAM permissions are properly attached
+ If needed, increase the **Timeout** and **Memory** settings in the general configuration of the Lambda function

------
#### [ .NET ]

Learn how to download the `layer.zip` artifact and integrate the OpenTelemetry Layer with Application Signals support into your containerized .NET Lambda function to enable Application Signals monitoring.

**Prerequisites**
+ AWS CLI configured with your credentials
+ Docker installed
+ .NET 8 SDK
+ These instructions assume you are on an x86_64 platform

1. **Set Up Project Structure**

   Create a directory for your Lambda function container image

   ```
   mkdir dotnet-appsignals-container-lambda && \
   cd dotnet-appsignals-container-lambda
   ```

1. **Create Dockerfile**

Download and integrate the OpenTelemetry Layer with Application Signals support directly into your Lambda container image. To do this, create the following `Dockerfile`:

   ```
   FROM public.ecr.aws/lambda/dotnet:8
   
   # Install utilities
   RUN dnf install -y unzip wget dotnet-sdk-8.0 which
   
   # Add dotnet command to docker container's PATH
   ENV PATH="/usr/lib64/dotnet:${PATH}"
   
   # Download the OpenTelemetry Layer with AppSignals Support
   RUN wget https://github.com/aws-observability/aws-otel-dotnet-instrumentation/releases/latest/download/layer.zip -O /tmp/layer.zip
   
   # Extract and include Lambda layer contents
   RUN mkdir -p /opt && \
       unzip /tmp/layer.zip -d /opt/ && \
       chmod -R 755 /opt/ && \
       rm /tmp/layer.zip
   
   WORKDIR ${LAMBDA_TASK_ROOT}
   
   # Copy the project files
   COPY dotnet-lambda-function/src/dotnet-lambda-function/*.csproj ${LAMBDA_TASK_ROOT}/
   COPY dotnet-lambda-function/src/dotnet-lambda-function/Function.cs ${LAMBDA_TASK_ROOT}/
   COPY dotnet-lambda-function/src/dotnet-lambda-function/aws-lambda-tools-defaults.json ${LAMBDA_TASK_ROOT}/
   
   # Install dependencies and build the application
   RUN dotnet restore
   
   # Use specific runtime identifier and disable ReadyToRun optimization
   RUN dotnet publish -c Release -o out --self-contained false /p:PublishReadyToRun=false
   
   # Copy the published files to the Lambda runtime directory
   RUN cp -r out/* ${LAMBDA_TASK_ROOT}/
   
   CMD ["dotnet-lambda-function::dotnet_lambda_function.Function::FunctionHandler"]
   ```
**Note**  
The `layer.zip` file contains the OpenTelemetry instrumentation necessary for AWS Application Signals to monitor your Lambda function.  
The layer extraction steps ensure that:  
+ The layer.zip contents are extracted to the `/opt/` directory
+ The `otel-instrument` script receives the necessary execution permissions
+ The temporary layer.zip file is removed to keep the image size small

1. **Lambda function code** – Initialize your Lambda project using the AWS Lambda .NET template:

   ```
   # Install the Lambda templates if you haven't already
   dotnet new -i Amazon.Lambda.Templates
   
   # Create a new Lambda project
   dotnet new lambda.EmptyFunction -n dotnet-lambda-function
   ```

   Your project should look something like:

   ```
   .
   ├── Dockerfile
   └── dotnet-lambda-function
       ├── src
       │   └── dotnet-lambda-function
       │       ├── Function.cs
       │       ├── Readme.md
       │       ├── aws-lambda-tools-defaults.json
       │       └── dotnet-lambda-function.csproj
       └── test
           └── dotnet-lambda-function.Tests
               ├── FunctionTest.cs
               └── dotnet-lambda-function.Tests.csproj
   ```

1. **Update the function code**

   Update the `Function.cs` code to:
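
   The listing isn't shown in this guide. The following is a minimal sketch consistent with the S3 calls used by the other runtime tabs and the packages referenced in the `.csproj`; treat the exact code as an assumption:

   ```
   using Amazon.Lambda.Core;
   using Amazon.S3;
   using Amazon.S3.Model;

   // Register the JSON serializer for the Lambda runtime
   [assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

   namespace dotnet_lambda_function;

   public class Function
   {
       private static readonly IAmazonS3 s3Client = new AmazonS3Client();

       // List the account's S3 buckets and return their names
       public async Task<List<string>> FunctionHandler(object input, ILambdaContext context)
       {
           context.Logger.LogInformation("Listing S3 buckets");
           ListBucketsResponse response = await s3Client.ListBucketsAsync();
           return response.Buckets.Select(b => b.BucketName).ToList();
       }
   }
   ```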

   Update the `dotnet-lambda-function.csproj` code to:

   ```
   <Project Sdk="Microsoft.NET.Sdk">
     <PropertyGroup>
       <TargetFramework>net8.0</TargetFramework>
       <ImplicitUsings>enable</ImplicitUsings>
       <Nullable>enable</Nullable>
       <GenerateRuntimeConfigurationFiles>true</GenerateRuntimeConfigurationFiles>
       <AWSProjectType>Lambda</AWSProjectType>

       <CopyLocalLockFileAssemblies>true</CopyLocalLockFileAssemblies>

       <PublishReadyToRun>true</PublishReadyToRun>
     </PropertyGroup>
     <ItemGroup>
       <PackageReference Include="Amazon.Lambda.Core" Version="2.5.0" />
       <PackageReference Include="Amazon.Lambda.Serialization.SystemTextJson" Version="2.4.4" />
       <PackageReference Include="AWSSDK.S3" Version="3.7.305.23" />
     </ItemGroup>
   </Project>
   ```

1. **Build and deploy the container image**

   Set up environment variables

   ```
   AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
   AWS_REGION=$(aws configure get region)
   
   # For fish shell users:
   # set AWS_ACCOUNT_ID (aws sts get-caller-identity --query Account --output text)
   # set AWS_REGION (aws configure get region)
   ```

   Authenticate with public Amazon ECR

   ```
   aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws
   ```

   Authenticate with private Amazon ECR

   ```
   aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
   ```

   Create Amazon ECR repository (if needed)

   ```
   aws ecr create-repository \
       --repository-name lambda-appsignals-demo \
       --region $AWS_REGION
   ```

   Build, tag, and push your image

   ```
   # Build the Docker image
   docker build -t lambda-appsignals-demo .
   
   # Tag the image
   docker tag lambda-appsignals-demo:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/lambda-appsignals-demo:latest
   
   # Push the image
   docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/lambda-appsignals-demo:latest
   
   ```

1. **Create and configure the Lambda function**

   Create a new function using the Lambda console.

   Select **Container image** as the deployment option.

   Choose **Browse images** to select your Amazon ECR image.

1. **Testing and verification** – Test your Lambda with a simple event. If the layer integration is successful, your Lambda appears under the Application Signals service map.

   You will see traces and metrics for your Lambda function in the CloudWatch console.

**Troubleshooting**

If Application Signals is not working, check the following:
+ Check the function logs for any errors related to the OpenTelemetry instrumentation
+ Verify if the environment variable `AWS_LAMBDA_EXEC_WRAPPER` is set correctly
+ Make sure the layer extraction in the Docker file completed successfully
+ Confirm if the IAM permissions are properly attached
+ If needed, increase the **Timeout** and **Memory** settings in the general configuration of the Lambda function

------
#### [ Node.js ]

Learn how to download the `layer.zip` artifact and integrate the OpenTelemetry Layer with Application Signals support into your containerized Node.js Lambda function to enable Application Signals monitoring.

**Prerequisites**
+ AWS CLI configured with your credentials
+ Docker installed
+ These instructions assume you are on an x86_64 platform

1. **Set Up Project Structure**

   Create a directory for your Lambda function container image

   ```
   mkdir nodejs-appsignals-container-lambda &&\
   cd nodejs-appsignals-container-lambda
   ```

1. **Create Dockerfile**

Download and integrate the OpenTelemetry Layer with Application Signals support directly into your Lambda container image. To do this, create the following `Dockerfile`:

   ```
   # Dockerfile
   FROM public.ecr.aws/lambda/nodejs:22
   
   # Install utilities
   RUN dnf install -y unzip wget
   
   # Download the OpenTelemetry Layer with AppSignals Support
   RUN wget https://github.com/aws-observability/aws-otel-js-instrumentation/releases/latest/download/layer.zip -O /tmp/layer.zip
   
   # Extract and include Lambda layer contents
   RUN mkdir -p /opt && \
       unzip /tmp/layer.zip -d /opt/ && \
       chmod -R 755 /opt/ && \
       rm /tmp/layer.zip
   
   # Install npm dependencies
   RUN npm init -y
   RUN npm install
   
   # Copy function code
   COPY *.js ${LAMBDA_TASK_ROOT}/
   
   # Set the CMD to your handler
   CMD [ "index.handler" ]
   ```
**Note**  
The `layer.zip` file contains the OpenTelemetry instrumentation necessary for AWS Application Signals to monitor your Lambda function.  
The layer extraction steps ensure that:  
+ The layer.zip contents are extracted to the `/opt/` directory
+ The `otel-instrument` script receives the necessary execution permissions
+ The temporary layer.zip file is removed to keep the image size small

1. **Lambda function code**

   Create an `index.js` file with the following content:

   ```
   const { S3Client, ListBucketsCommand } = require('@aws-sdk/client-s3');
   
   // Initialize S3 client
   const s3Client = new S3Client({ region: process.env.AWS_REGION });
   
   exports.handler = async function(event, context) {
     console.log('Received event:', JSON.stringify(event, null, 2));
     console.log('Handler initializing:', exports.handler.name);
   
     const response = {
       statusCode: 200,
       body: {}
     };
   
     try {
       // List S3 buckets
       const command = new ListBucketsCommand({});
       const data = await s3Client.send(command);
   
       // Extract bucket names
       const bucketNames = data.Buckets.map(bucket => bucket.Name);
   
       response.body = {
         message: 'Successfully retrieved buckets',
         buckets: bucketNames
       };
   
     } catch (error) {
       console.error('Error listing buckets:', error);
   
       response.statusCode = 500;
       response.body = {
         message: `Error listing buckets: ${error.message}`
       };
     }
   
     return response;
   };
   ```

   Your project structure should look something like this:

   ```
   .
   ├── Dockerfile
   └── index.js
   ```

1. **Build and deploy the container image**

   **Set up environment variables**

   ```
   AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
   AWS_REGION=$(aws configure get region)
   
   # For fish shell users:
   # set AWS_ACCOUNT_ID (aws sts get-caller-identity --query Account --output text)
   # set AWS_REGION (aws configure get region)
   ```

   Authenticate with public Amazon ECR

   ```
   aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws
   ```

   Authenticate with private Amazon ECR

   ```
   aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
   ```

   Create Amazon ECR repository (if needed)

   ```
   aws ecr create-repository \
       --repository-name lambda-appsignals-demo \
       --region $AWS_REGION
   ```

   Build, tag, and push your image

   ```
   # Build the Docker image
   docker build -t lambda-appsignals-demo .
   
   # Tag the image
   docker tag lambda-appsignals-demo:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/lambda-appsignals-demo:latest
   
   # Push the image
   docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/lambda-appsignals-demo:latest
   
   ```

1. **Create and configure the Lambda function**

   Create a new function using the Lambda console.

   Select **Container image** as the deployment option.

   Choose **Browse images** to select your Amazon ECR image.

1. **Testing and verification** – Test your Lambda with a simple event. If the layer integration is successful, your Lambda appears under the Application Signals service map.

   You will see traces and metrics for your Lambda function in the CloudWatch console.

**Troubleshooting**

If Application Signals is not working, check the following:
+ Check the function logs for any errors related to the OpenTelemetry instrumentation
+ Verify if the environment variable `AWS_LAMBDA_EXEC_WRAPPER` is set correctly
+ Make sure the layer extraction in the Docker file completed successfully
+ Confirm if the IAM permissions are properly attached
+ If needed, increase the **Timeout** and **Memory** settings in the general configuration of the Lambda function

------
#### [ Python ]

Learn how to download the `layer.zip` artifact and integrate the OpenTelemetry Layer with Application Signals support into your containerized Python Lambda function to enable Application Signals monitoring.

**Prerequisites**
+ AWS CLI configured with your credentials
+ Docker installed
+ These instructions assume you are on an x86_64 platform

1. **Set Up Project Structure**

   Create a directory for your Lambda function container image

   ```
   mkdir python-appsignals-container-lambda &&\
   cd python-appsignals-container-lambda
   ```

1. **Create Dockerfile**

Download and integrate the OpenTelemetry Layer with Application Signals support directly into your Lambda container image. To do this, create the following `Dockerfile`:

   ```
   # Dockerfile
   
   FROM public.ecr.aws/lambda/python:3.13
   
   # Copy function code
   COPY app.py ${LAMBDA_TASK_ROOT}
   
   # Install unzip and wget utilities
   RUN dnf install -y unzip wget
   
   # Download the OpenTelemetry Layer with AppSignals Support
   RUN wget https://github.com/aws-observability/aws-otel-python-instrumentation/releases/latest/download/layer.zip -O /tmp/layer.zip
   
   # Extract and include Lambda layer contents
   RUN mkdir -p /opt && \
       unzip /tmp/layer.zip -d /opt/ && \
       chmod -R 755 /opt/ && \
       rm /tmp/layer.zip
   
   # Set the CMD to your handler
   CMD [ "app.lambda_handler" ]
   ```
**Note**  
The `layer.zip` file contains the OpenTelemetry instrumentation necessary for AWS Application Signals to monitor your Lambda function.  
The layer extraction steps ensure that:  
+ The layer.zip contents are extracted to the `/opt/` directory
+ The `otel-instrument` script receives the necessary execution permissions
+ The temporary layer.zip file is removed to keep the image size small

1. **Lambda function code**

   Create your Lambda function in an `app.py` file:

   ```
   import json
   import boto3
   
   def lambda_handler(event, context):
       """
       Sample Lambda function that can be used in a container image.
   
       Parameters:
       -----------
       event: dict
           Input event data
       context: LambdaContext
           Lambda runtime information
   
       Returns:
       --------
       dict
           Response object
       """
       print("Received event:", json.dumps(event, indent=2))
   
       # Create S3 client
       s3 = boto3.client('s3')
   
       try:
           # List buckets
           response = s3.list_buckets()
   
           # Extract bucket names
           buckets = [bucket['Name'] for bucket in response['Buckets']]
   
           return {
               'statusCode': 200,
               'body': json.dumps({
                   'message': 'Successfully retrieved buckets',
                   'buckets': buckets
               })
           }
       except Exception as e:
           print(f"Error listing buckets: {str(e)}")
           return {
               'statusCode': 500,
               'body': json.dumps({
                   'message': f'Error listing buckets: {str(e)}'
               })
           }
   ```

   Your project structure should look something like this:

   ```
   .
   ├── Dockerfile
   ├── app.py
   └── instructions.md
   ```

1. **Build and deploy the container image**

   **Set up environment variables**

   ```
   AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
   AWS_REGION=$(aws configure get region)
   
   # For fish shell users:
   # set AWS_ACCOUNT_ID (aws sts get-caller-identity --query Account --output text)
   # set AWS_REGION (aws configure get region)
   ```

   Authenticate with public Amazon ECR

   ```
   aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws
   ```

   Authenticate with private Amazon ECR

   ```
   aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
   ```

   Create Amazon ECR repository (if needed)

   ```
   aws ecr create-repository \
       --repository-name lambda-appsignals-demo \
       --region $AWS_REGION
   ```

   Build, tag, and push your image

   ```
   # Build the Docker image
   docker build -t lambda-appsignals-demo .
   
   # Tag the image
   docker tag lambda-appsignals-demo:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/lambda-appsignals-demo:latest
   
   # Push the image
   docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/lambda-appsignals-demo:latest
   
   ```

1. **Create and configure the Lambda function**

   Create a new function using the Lambda console.

   Select **Container image** as the deployment option.

   Choose **Browse images** to select your Amazon ECR image.

1. **Testing and verification** – Test your Lambda with a simple event. If the layer integration is successful, your Lambda appears under the Application Signals service map.

   You will see traces and metrics for your Lambda function in the CloudWatch console.

**Troubleshooting**

If Application Signals is not working, check the following:
+ Check the function logs for any errors related to the OpenTelemetry instrumentation
+ Verify if the environment variable `AWS_LAMBDA_EXEC_WRAPPER` is set correctly
+ Make sure the layer extraction in the Docker file completed successfully
+ Confirm if the IAM permissions are properly attached
+ If needed, increase the **Timeout** and **Memory** settings in the general configuration of the Lambda function

------

# Troubleshooting your Application Signals installation
<a name="CloudWatch-Application-Signals-Enable-Troubleshoot"></a>

This section contains troubleshooting tips for CloudWatch Application Signals.

**Topics**
+ [Address OpenTelemetry configuration conflicts in Amazon EKS with Application Signals](#Application-Signals-troubleshoot-eks-applications)
+ [Application Signals Java layer cold start performance](#Application-Signals-troubleshoot-cold-start-performance)
+ [Application doesn't start after Application Signals is enabled](#Application-Signals-troubleshoot-starting)
+ [Python application doesn't start after Application Signals is enabled](#Application-Signals-troubleshoot-starting-Python)
+ [No Application Signals data for Python application that uses a WSGI server](#Application-Signals-troubleshoot-Python-WSGI)
+ [My Node.js application is not instrumented or isn't generating Application Signals telemetry](#Application-Signals-troubleshoot-telemetry-nodejs)
+ [My .NET application isn't instrumented or breaks for AWS SDK calls](#Application-Signals-troubleshoot-sdk-calls)
+ [No application data in Application Signals dashboard](#Application-Signals-troubleshoot-missingdata)
+ [Service metrics or dependency metrics have Unknown values](#Application-Signals-troubleshoot-unknown-values)
+ [Handling a ConfigurationConflict when managing the Amazon CloudWatch Observability EKS add-on](#Application-Signals-troubleshoot-conflict)
+ [I want to filter out unnecessary metrics and traces](#Application-Signals-troubleshoot-cardinality)
+ [What does `InternalOperation` mean?](#Application-Signals-troubleshoot-InternalOperation)
+ [How do I enable logging for .NET applications?](#Application-Signals-troubleshoot-dotnet-logging)
+ [How can I resolve assembly version conflicts in .NET applications?](#Application-Signals-troubleshoot-dotnet-conflicts)
+ [Can I disable FluentBit?](#Application-Signals-troubleshoot-FluentBit)
+ [Can I filter container logs before exporting to CloudWatch Logs?](#Application-Signals-troubleshoot-filter-logs)
+ [Resolving TypeError when Using AWS Distro for OpenTelemetry (ADOT) JavaScript Lambda Layer](#lambda-execution)
+ [TypeError when using Response Streaming Lambda handlers with AWS Distro for OpenTelemetry (ADOT) JavaScript Lambda Layer](#lambda-execution-streaming)
+ [Update to required versions of agents or Amazon EKS add-on](#CloudWatch-Application-Signals-Agent-Versions)
+ [Embedded Metric Format (EMF) disabled for Application Signals](#emf-appsignals)

## Address OpenTelemetry configuration conflicts in Amazon EKS with Application Signals
<a name="Application-Signals-troubleshoot-eks-applications"></a>

If you use OpenTelemetry (OTel) for application performance monitoring (APM) with Amazon EKS and configure custom OTLP exporter endpoints other than CloudWatch endpoints, you may experience the following behaviors after installing or upgrading to CloudWatch Observability add-on version 5.0.0 or later:
+ Disruption to existing OTel telemetry – The CloudWatch Observability add-on may override OTLP exporter endpoints that you hardcoded in your application. This override doesn't affect endpoints configured through container environment variables or an `envFrom` ConfigMap. When overridden, your metrics and traces may not reach their intended destination. To maintain your existing APM setup after upgrading to version 5.0.0 or later, see [Opt out of Application Signals](install-CloudWatch-Observability-EKS-addon.md#Opting-out-App-Signals).
+ Application Signals may not work if you previously enabled Application Signals using the CloudWatch Observability add-on and have a custom OTLP endpoint configured. To resolve this, either remove the custom OTLP endpoints or set the environment variable `OTEL_AWS_APPLICATION_SIGNALS_ENABLED=true` when installing or upgrading to version 5.0.0 or later.

## Application Signals Java layer cold start performance
<a name="Application-Signals-troubleshoot-cold-start-performance"></a>

Adding the Application Signals Layer to Java Lambda functions increases the startup latency (cold start time). The following tips can help reduce latency for time-sensitive functions.

**Fast startup for Java agent** – The Application Signals Java Lambda Layer includes a fast startup feature that's turned off by default. You can enable it by setting the `OTEL_JAVA_AGENT_FAST_STARTUP_ENABLED` environment variable to `true`. When enabled, this feature configures the JVM to use the tiered compilation level 1 (C1) compiler to generate quickly optimized native code for faster cold starts. The C1 compiler prioritizes speed at the cost of long-term optimization, whereas the C2 compiler provides superior overall performance by profiling data over time.

For more information, see [Fast startup for Java agent](https://github.com/open-telemetry/opentelemetry-lambda/blob/main/java/README.md#fast-startup-for-java-agent).
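
For example (the function name is a placeholder, and `--environment` replaces the function's full variable set, so include the variables your function already uses):

```
aws lambda update-function-configuration \
    --function-name my-java-function \
    --environment "Variables={AWS_LAMBDA_EXEC_WRAPPER=/opt/otel-instrument,OTEL_JAVA_AGENT_FAST_STARTUP_ENABLED=true}"
```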

**Reduce cold start times with Provisioned Concurrency** – AWS Lambda provisioned concurrency pre-allocates a specified number of function instances, keeping them initialized and ready to handle requests immediately. This reduces cold-start times by eliminating the need to initialize the function environment during execution, ensuring faster and more consistent performance, especially for latency-sensitive workloads. For more information, see [Configuring provisioned concurrency for a function](https://docs.aws.amazon.com/lambda/latest/dg/provisioned-concurrency.html).

**Optimize startup performance using Lambda SnapStart** – AWS Lambda SnapStart optimizes the startup performance of Lambda functions by creating a pre-initialized snapshot of the execution environment after the function's initialization phase. This snapshot is then reused to start new instances, significantly reducing cold-start times by skipping the initialization process during function invocation. For more information, see [Improving startup performance with Lambda SnapStart](https://docs.aws.amazon.com/lambda/latest/dg/snapstart.html).

## Application doesn't start after Application Signals is enabled
<a name="Application-Signals-troubleshoot-starting"></a>

If your application on an Amazon EKS cluster doesn't start after you enable Application Signals on the cluster, check for the following:
+ Check whether the application has been instrumented by another monitoring solution. Application Signals might not support coexisting with other instrumentation solutions.
+ Confirm that the application meets the compatibility requirements to use Application Signals. For more information, see [Supported systems](CloudWatch-Application-Signals-supportmatrix.md).
+ If your application failed to pull the Application Signals artifacts, such as the AWS Distro for OpenTelemetry Java or Python agent and the CloudWatch agent images, it could be a network issue.

To mitigate the issue, remove the annotation `instrumentation.opentelemetry.io/inject-java: "true"` or `instrumentation.opentelemetry.io/inject-python: "true"` from your application deployment manifest, and re-deploy your application. Then check if the application is working.

**Known issues**

The runtime metrics collection in the Java SDK release v1.32.5 is known to not work with applications using JBoss Wildfly. This issue extends to the Amazon CloudWatch Observability EKS add-on, affecting versions `2.3.0-eksbuild.1` through `2.5.0-eksbuild.1`.

If you are impacted, either downgrade the version or disable your runtime metrics collection by adding the environment variable `OTEL_AWS_APPLICATION_SIGNALS_RUNTIME_ENABLED=false` to your application. 

## Python application doesn't start after Application Signals is enabled
<a name="Application-Signals-troubleshoot-starting-Python"></a>

It is a known issue in OpenTelemetry auto-instrumentation that a missing `PYTHONPATH` environment variable can sometimes cause the application to fail to start. To resolve this, ensure that you set the `PYTHONPATH` environment variable to the location of your application’s working directory. For more information about this issue, see [Python autoinstrumentation setting of PYTHONPATH is not compliant with Python's module resolution behavior, breaking Django applications](https://github.com/open-telemetry/opentelemetry-operator/issues/2302). 
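As a quick check of how `PYTHONPATH` affects module resolution, the following stdlib-only sketch launches a child interpreter with `PYTHONPATH` pointing at a hypothetical `/app` working directory and confirms that the directory lands on `sys.path`:

```python
import os
import subprocess
import sys

# Launch a child interpreter with PYTHONPATH pointing at a hypothetical
# application working directory, then check that it appears on sys.path,
# which is what allows imports of modules in that directory to succeed.
env = dict(os.environ, PYTHONPATH="/app")
result = subprocess.run(
    [sys.executable, "-c", "import sys; print('/app' in sys.path)"],
    env=env, capture_output=True, text=True,
)
print(result.stdout.strip())  # prints "True"
```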

For Django applications, there are additional required configurations, which are outlined in the [ OpenTelemetry Python documentation](https://opentelemetry-python.readthedocs.io/en/latest/examples/django/README.html).
+ Use the `--noreload` flag to prevent automatic reloading.
+ Set the `DJANGO_SETTINGS_MODULE` environment variable to the location of your Django application’s `settings.py` file. This ensures that OpenTelemetry can correctly access and integrate with your Django settings.
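For example, in a Kubernetes deployment manifest the container environment for a Django application might look like the following (the settings module and directory here are hypothetical):

```
env:
  - name: DJANGO_SETTINGS_MODULE
    value: mysite.settings      # hypothetical settings module
  - name: PYTHONPATH
    value: /app                 # hypothetical working directory
```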

## No Application Signals data for Python application that uses a WSGI server
<a name="Application-Signals-troubleshoot-Python-WSGI"></a>

If you are using a WSGI server such as Gunicorn or uWSGI, you must make additional changes to make the ADOT Python auto-instrumentation work.

**Note**  
Be sure that you are using the latest version of ADOT Python and the Amazon CloudWatch Observability EKS add-on before proceeding.

**Additional steps to enable Application Signals with a WSGI server**

1. Import the auto-instrumentation in the forked worker processes.

   For Gunicorn, use the `post_fork` hook:

   ```
   # gunicorn.conf.py
   def post_fork(server, worker):
       from opentelemetry.instrumentation.auto_instrumentation import sitecustomize
   ```

   For uWSGI, use the `import` directive.

   ```
   #  uwsgi.ini
   [uwsgi]
   ; required for the instrumentation of worker processes
   enable-threads = true
   lazy-apps = true
   import = opentelemetry.instrumentation.auto_instrumentation.sitecustomize
   ```

1.  Enable the configuration for ADOT Python auto-instrumentation to skip the main process and defer to workers by setting the `OTEL_AWS_PYTHON_DEFER_TO_WORKERS_ENABLED` environment variable to `true`.
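In a Kubernetes deployment manifest, step 2 might look like the following container environment entry:

```
env:
  - name: OTEL_AWS_PYTHON_DEFER_TO_WORKERS_ENABLED
    value: "true"
```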

## My Node.js application is not instrumented or isn't generating Application Signals telemetry
<a name="Application-Signals-troubleshoot-telemetry-nodejs"></a>

To enable Application Signals for Node.js, you must ensure that your Node.js application uses the CommonJS (CJS) module format. The AWS Distro for OpenTelemetry Node.js doesn't support the ESM module format, because OpenTelemetry JavaScript’s support of ESM is experimental and is a work in progress.

To determine if your application is using CJS and not ESM, make sure that your application does not fulfill the [ conditions to enable ESM](https://nodejs.org/api/esm.html#enabling).
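For example, a minimal `package.json` (hypothetical) keeps `.js` files in the CJS format if it either omits the `type` field or sets it explicitly:

```
{
  "name": "my-app",
  "type": "commonjs"
}
```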

## My .NET application isn't instrumented or breaks for AWS SDK calls
<a name="Application-Signals-troubleshoot-sdk-calls"></a>

The AWS Distro for OpenTelemetry (ADOT) SDK for .NET does not support AWS SDK for .NET V4. Use AWS SDK for .NET V3 for full Application Signals support.

## No application data in Application Signals dashboard
<a name="Application-Signals-troubleshoot-missingdata"></a>

If metrics or traces are missing in the Application Signals dashboards, the following are possible causes. Investigate these causes only if you have waited 15 minutes for Application Signals to collect and display data since your last update.
+ Make sure that the libraries and frameworks that your application uses are supported by the ADOT Java agent. For more information, see [Libraries / Frameworks](https://github.com/open-telemetry/opentelemetry-java-instrumentation/blob/main/docs/supported-libraries.md#libraries--frameworks). 
+ Make sure that the CloudWatch agent is running. First check the status of the CloudWatch agent pods and make sure they are all in `Running` status.

  ```
  kubectl -n amazon-cloudwatch get pods
  ```

  Add the following to the CloudWatch agent configuration file to enable debugging logs, and then restart the agent.

  ```
  "agent": {
    "region": "${REGION}",
    "debug": true
  },
  ```

  Then check for errors in the CloudWatch agent pods.
+ Check for configuration issues with the CloudWatch agent. Confirm that the following is still in the CloudWatch agent configuration file and the agent has been restarted since it was added.

  ```
  "agent": {
    "region": "${REGION}",
    "debug": true
  },
  ```

  Then check the OpenTelemetry debugging logs for error messages such as `ERROR io.opentelemetry.exporter.internal.grpc.OkHttpGrpcExporter - Failed to export ...`. These messages might indicate the problem.

  If that doesn't solve the issue, dump and check the environment variables with names that start with `OTEL_` by describing the pod with the `kubectl describe pod` command.
+ To enable the OpenTelemetry Python debug logging, set the environment variable `OTEL_PYTHON_LOG_LEVEL` to `debug` and redeploy the application.
+ Check for wrong or insufficient permissions for exporting data from the CloudWatch agent. If you see `Access Denied` messages in the CloudWatch agent logs, this might be the issue. It is possible that the permissions applied when you installed the CloudWatch agent were later changed or revoked.
+ Check for an AWS Distro for OpenTelemetry (ADOT) issue when generating telemetry data.

  Make sure that the instrumentation annotations `instrumentation.opentelemetry.io/inject-java` and `sidecar.opentelemetry.io/inject-java` are applied to the application deployment and the value is `true`. Without these, the application pods will not be instrumented even if the ADOT add-on is installed correctly.

  Next, check if the `init` container is applied on the application and the `Ready` state is `True`. If the `init` container is not ready, see the status for the reason.

  If the issue persists, enable debug logging on the OpenTelemetry Java SDK by setting the environment variable `OTEL_JAVAAGENT_DEBUG` to true and redeploying the application. Then look for messages that start with `ERROR io.telemetry`.
+ The metric/span exporter might be dropping data. To find out, check the application log for messages that include `Failed to export...`
+ The CloudWatch agent might be getting throttled when sending metrics or spans to Application Signals. Check for messages indicating throttling in the CloudWatch agent logs.
+ Make sure that you've enabled the service discovery setup. You need to do this only once in your Region. 

  To confirm this, in the CloudWatch console choose **Application Signals**, **Services**. If Step 1 is not marked **Complete**, choose **Start discovering your services**. Data should start flowing in within five minutes.

## Service metrics or dependency metrics have Unknown values
<a name="Application-Signals-troubleshoot-unknown-values"></a>

If you see **UnknownService**, **UnknownOperation**, **UnknownRemoteService**, or **UnknownRemoteOperation** for a dependency name or operation in the Application Signals dashboards, check whether the data points for the unknown remote service and unknown remote operation coincide with their deployments.
+ **UnknownService** means that the name of an instrumented application is unknown. If the `OTEL_SERVICE_NAME` environment variable is undefined and `service.name` isn't specified in `OTEL_RESOURCE_ATTRIBUTES`, the service name is set to `UnknownService`. To fix this, specify the service name in `OTEL_SERVICE_NAME` or `OTEL_RESOURCE_ATTRIBUTES`.
+ **UnknownOperation** means that the name of an invoked operation is unknown. This occurs when Application Signals is unable to discover an operation name which invokes the remote call, or when the extracted operation name contains high cardinality values.
+ **UnknownRemoteService** means that the name of the destination service is unknown. This occurs when the system is unable to extract the destination service name that the remote call accesses.

  One solution is to create a custom span around the function that sends out the request, and add the attribute `aws.remote.service` with the designated value. Another option is to configure the CloudWatch agent to customize the metric value of `RemoteService`. For more information about customizations in the CloudWatch agent, see [Enable CloudWatch Application Signals](CloudWatch-Agent-Application_Signals.md). 
+ **UnknownRemoteOperation** means that the name of the destination operation is unknown. This occurs when the system is unable to extract the destination operation name that the remote call accesses.

  One solution is to create a custom span around the function that sends out the request, and add the attribute `aws.remote.operation` with the designated value. Another option is to configure the CloudWatch agent to customize the metric value of `RemoteOperation`. For more information about customizations in the CloudWatch agent, see [Enable CloudWatch Application Signals](CloudWatch-Agent-Application_Signals.md).
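As an illustrative sketch of the custom-span approach described above (it assumes the `opentelemetry-api` Python package is installed; the function name and attribute values are hypothetical):

```
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

def call_payment_api(order):  # hypothetical function that makes the remote call
    with tracer.start_as_current_span("call_payment_api") as span:
        # Name the destination explicitly so Application Signals reports it
        # instead of UnknownRemoteService / UnknownRemoteOperation.
        span.set_attribute("aws.remote.service", "payment-api")
        span.set_attribute("aws.remote.operation", "POST /charge")
        ...  # send the request here
```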

## Handling a ConfigurationConflict when managing the Amazon CloudWatch Observability EKS add-on
<a name="Application-Signals-troubleshoot-conflict"></a>

When you install or update the Amazon CloudWatch Observability EKS add-on, you might see a failure caused by a `Health Issue` of type `ConfigurationConflict` with a description that starts with `Conflicts found when trying to apply. Will not continue due to resolve conflicts mode`. This usually means that the CloudWatch agent and its associated components, such as the ServiceAccount, the ClusterRole, and the ClusterRoleBinding, are already installed on the cluster. When the add-on tries to install the CloudWatch agent and its associated components and detects a change in their contents, by default it fails the installation or update to avoid overwriting the state of the resources on the cluster.

If you are trying to onboard to the Amazon CloudWatch Observability EKS add-on and you see this failure, we recommend deleting an existing CloudWatch agent setup that you had previously installed on the cluster and then installing the EKS add-on. Be sure to back up any customizations you might have made to the original CloudWatch agent setup such as a custom agent configuration, and provide these to the Amazon CloudWatch Observability EKS add-on when you next install or update it. If you had previously installed the CloudWatch agent for onboarding to Container Insights, see [Deleting the CloudWatch agent and Fluent Bit for Container Insights](ContainerInsights-delete-agent.md) for more information.

Alternatively, the add-on supports a conflict resolution configuration option that can specify `OVERWRITE`. You can use this option to proceed with installing or updating the add-on by overwriting the conflicts on the cluster. If you are using the Amazon EKS console, you'll find the **Conflict resolution method** in the **Optional configuration settings** when you create or update the add-on. If you are using the AWS CLI, you can supply the `--resolve-conflicts OVERWRITE` option to your command to create or update the add-on. 

## I want to filter out unnecessary metrics and traces
<a name="Application-Signals-troubleshoot-cardinality"></a>

If Application Signals is collecting traces and metrics that you don't want, see [Manage high-cardinality operations](Application-Signals-Cardinality.md) for information about configuring the CloudWatch agent with custom rules to reduce cardinality.

For information about customizing trace sampling rules, see [ Configure sampling rules](https://docs.aws.amazon.com/xray/latest/devguide/aws-xray-interface-console.html#xray-console) in the X-Ray documentation.

## What does `InternalOperation` mean?
<a name="Application-Signals-troubleshoot-InternalOperation"></a>

An `InternalOperation` is an operation that is triggered by the application internally rather than by an external invocation. Seeing `InternalOperation` is expected, healthy behavior.

Some typical examples where you would see `InternalOperation` include the following:
+ **Preloading on start**– Your application performs an operation named `loadDatafromDB` which reads metadata from a database during the warm-up phase. Instead of observing `loadDatafromDB` as a service operation, you'll see it categorized as an `InternalOperation`.
+ **Async execution in the background**– Your application subscribes to an event queue, and processes streaming data accordingly whenever there’s an update. Each triggered operation will be under `InternalOperation` as a service operation.
+ **Retrieving host information from a service registry**– Your application talks to a service registry for service discovery. All interactions with the discovery system are classified as an `InternalOperation`.

## How do I enable logging for .NET applications?
<a name="Application-Signals-troubleshoot-dotnet-logging"></a>

To enable logging for .NET applications, configure the following environment variables. For more information about how to configure these environment variables, see [Troubleshooting .NET automatic instrumentation issues](https://opentelemetry.io/docs/zero-code/net/troubleshooting/#general-steps) in the OpenTelemetry documentation.
+ `OTEL_LOG_LEVEL`
+ `OTEL_DOTNET_AUTO_LOG_DIRECTORY`
+ `COREHOST_TRACE`
+ `COREHOST_TRACEFILE`
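For example, on Linux you might set the variables as follows before starting the application (the log directory paths here are hypothetical):

```
export OTEL_LOG_LEVEL=debug
export OTEL_DOTNET_AUTO_LOG_DIRECTORY=/var/log/otel-dotnet   # hypothetical path
export COREHOST_TRACE=1
export COREHOST_TRACEFILE=/var/log/otel-dotnet/corehost.log  # hypothetical path
```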

## How can I resolve assembly version conflicts in .NET applications?
<a name="Application-Signals-troubleshoot-dotnet-conflicts"></a>

If you get the following error, see [Assembly version conflicts](https://opentelemetry.io/docs/zero-code/net/troubleshooting/#assembly-version-conflicts) in the OpenTelemetry documentation for resolution steps.

```
Unhandled exception. System.IO.FileNotFoundException: Could not load file or assembly 'Microsoft.Extensions.DependencyInjection.Abstractions, Version=7.0.0.0, Culture=neutral, PublicKeyToken=adb9793829ddae60'. The system cannot find the file specified.

File name: 'Microsoft.Extensions.DependencyInjection.Abstractions, Version=7.0.0.0, Culture=neutral, PublicKeyToken=adb9793829ddae60'
   at Microsoft.AspNetCore.Builder.WebApplicationBuilder..ctor(WebApplicationOptions options, Action`1 configureDefaults)
   at Microsoft.AspNetCore.Builder.WebApplication.CreateBuilder(String[] args)
   at Program.<Main>$(String[] args) in /Blog.Core/Blog.Core.Api/Program.cs:line 26
```

## Can I disable FluentBit?
<a name="Application-Signals-troubleshoot-FluentBit"></a>

You can disable FluentBit by configuring the Amazon CloudWatch Observability EKS add-on. For more information, see [(Optional) Additional configuration](install-CloudWatch-Observability-EKS-addon.md#install-CloudWatch-Observability-EKS-addon-configuration).

## Can I filter container logs before exporting to CloudWatch Logs?
<a name="Application-Signals-troubleshoot-filter-logs"></a>

No, filtering container logs is not yet supported.

## Resolving TypeError when Using AWS Distro for OpenTelemetry (ADOT) JavaScript Lambda Layer
<a name="lambda-execution"></a>

Your Lambda function may fail with this error: `TypeError - "Cannot redefine property: handler"` when you: 
+ Use the ADOT JavaScript Lambda Layer 
+ Use `esbuild` to compile TypeScript
+ Export your handler with the `export` keyword

The ADOT JavaScript Lambda Layer needs to modify your handler at runtime. When you use the `export` keyword with `esbuild` (directly or through AWS CDK), `esbuild` makes your handler immutable, preventing these modifications. 

Export your handler function using `module.exports` instead of the `export` keyword: 

```
// Before
export const handler = (event) => {
  // Handler Code
}
```

```
// After
const handler = async (event) => {
  // Handler Code
}
module.exports = { handler }
```

## TypeError when using Response Streaming Lambda handlers with AWS Distro for OpenTelemetry (ADOT) JavaScript Lambda Layer
<a name="lambda-execution-streaming"></a>

Your Lambda function may fail with this error: `TypeError - "responseStream.write is not a function"` when you: 
+ Use the ADOT JavaScript Lambda Layer with AWS Lambda Instrumentation enabled (enabled by default) 
+ Use the response streaming feature in Node.js managed runtimes. For example, when your function handler is like:

  ```
   export const handler = awslambda.streamifyResponse(...)
  ```

The AWS Lambda Instrumentation in the ADOT JavaScript Lambda Layer currently does not support Response Streaming in Node.js managed runtimes, so it must be disabled to avoid this TypeError.

## Update to required versions of agents or Amazon EKS add-on
<a name="CloudWatch-Application-Signals-Agent-Versions"></a>

After August 9, 2024, CloudWatch Application Signals will no longer support older versions of the Amazon CloudWatch Observability EKS add-on, the CloudWatch agent, and the AWS Distro for OpenTelemetry auto-instrumentation agent. 
+ For the Amazon CloudWatch Observability EKS add-on, versions older than `v1.7.0-eksbuild.1` won't be supported.
+ For the CloudWatch agent, versions older than `1.300040.0` won't be supported.
+ For the AWS Distro for OpenTelemetry auto-instrumentation agent:
  + For Java, versions older than `1.32.2` aren't supported.
  + For Python, versions older than `0.2.0` aren't supported.
  + For .NET, versions older than `1.3.2` aren't supported.
  + For Node.js, versions older than `0.3.0` aren't supported.

**Important**  
The latest versions of the agents include updates to the Application Signals metric schema. These updates are not backward compatible, and this can result in data issues if incompatible versions are used. To help ensure a seamless transition to the new functionality, do the following:  
If your application is running on Amazon EKS, be sure to restart all instrumented applications after you update the Amazon CloudWatch Observability add-on.
For applications running on other platforms, be sure to upgrade **both** the CloudWatch agent and the AWS OpenTelemetry auto-instrumentation agent to the latest versions.

The instructions in the following sections can help you update to a supported version.

**Contents**
+ [Update the Amazon CloudWatch Observability EKS add-on](#Application-Signals-Upgrade-Addon)
  + [Use the console](#Upgrade-Addon-Console)
  + [Use the AWS CLI](#Upgrade-Addon-CLI)
+ [Update the CloudWatch agent and ADOT agent](#Application-Signals-Upgrade-Agents)
  + [Update on Amazon ECS](#Upgrade-Agents-ECS)
  + [Update on Amazon EC2 or other architectures](#Upgrade-Addon-EC2)

### Update the Amazon CloudWatch Observability EKS add-on
<a name="Application-Signals-Upgrade-Addon"></a>

To update the Amazon CloudWatch Observability EKS add-on, you can use the AWS Management Console or the AWS CLI.

#### Use the console
<a name="Upgrade-Addon-Console"></a>

**To upgrade the add-on using the console**

1. Open the Amazon EKS console at [https://console.aws.amazon.com/eks/home#/clusters](https://console.aws.amazon.com/eks/home#/clusters).

1. Choose the name of the Amazon EKS cluster to update.

1. Choose the **Add-ons** tab, then choose **Amazon CloudWatch Observability**.

1. Choose **Edit**, select the version you want to update to, and then choose **Save changes**.

   Be sure to choose `v1.7.0-eksbuild.1` or later.

1. Enter one of the following AWS CLI commands to restart your services.

   ```
     # Restart a deployment
     kubectl rollout restart deployment/name
     # Restart a daemonset
     kubectl rollout restart daemonset/name
     # Restart a statefulset
     kubectl rollout restart statefulset/name
   ```

#### Use the AWS CLI
<a name="Upgrade-Addon-CLI"></a>

**To upgrade the add-on using the AWS CLI**

1. Enter the following command to find the latest version.

   ```
   aws eks describe-addon-versions \
   --addon-name amazon-cloudwatch-observability
   ```

1. Enter the following command to update the add-on. Replace *VERSION* with a version that is `v1.7.0-eksbuild.1` or later. Replace *AWS\_REGION* and *CLUSTER* with your Region and cluster name.

   ```
   aws eks update-addon \
   --region $AWS_REGION \
   --cluster-name $CLUSTER \
   --addon-name amazon-cloudwatch-observability \
   --addon-version $VERSION \
   # required only if the advanced configuration is used.
   --configuration-values $JSON_CONFIG
   ```
**Note**  
If you're using a custom configuration for the add-on, you can find an example of the configuration to use for *JSON\_CONFIG* in [Enable CloudWatch Application Signals](CloudWatch-Agent-Application_Signals.md). 

1. Enter one of the following AWS CLI commands to restart your services.

   ```
     # Restart a deployment
     kubectl rollout restart deployment/name
     # Restart a daemonset
     kubectl rollout restart daemonset/name
     # Restart a statefulset
     kubectl rollout restart statefulset/name
   ```

### Update the CloudWatch agent and ADOT agent
<a name="Application-Signals-Upgrade-Agents"></a>

If your services are running on architectures other than Amazon EKS, you will need to upgrade both the CloudWatch agent and the ADOT auto-instrumentation agent to use the latest Application Signals features.

#### Update on Amazon ECS
<a name="Upgrade-Agents-ECS"></a>

**To upgrade your agents for services running on Amazon ECS**

1. Create a new task definition revision. For more information, see [ Updating a task definition using the console](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/update-task-definition-console-v2).

1. Replace the `$IMAGE` of the `ecs-cwagent` container with the latest image tag from [cloudwatch-agent](https://gallery.ecr.aws/cloudwatch-agent/cloudwatch-agent) on Amazon ECR.

   If you upgrade to a fixed version, be sure to use a version equal to or later than `1.300040.0`.

1. Replace the `$IMAGE` of the `init` container with the latest image tag from the following locations:
   + For Java, use [aws-observability/adot-autoinstrumentation-java](https://gallery.ecr.aws/aws-observability/adot-autoinstrumentation-java).

     If you upgrade to a fixed version, be sure to use a version equal to or later than `1.32.2`.
   + For Python, use [aws-observability/adot-autoinstrumentation-python](https://gallery.ecr.aws/aws-observability/adot-autoinstrumentation-python).

     If you upgrade to a fixed version, be sure to use a version equal to or later than `0.2.0`.
   + For .NET, use [aws-observability/adot-autoinstrumentation-dotnet](https://gallery.ecr.aws/aws-observability/adot-autoinstrumentation-dotnet).

     If you upgrade to a fixed version, be sure to use a version equal to or later than `1.3.2`.
   + For Node.js, use [aws-observability/adot-autoinstrumentation-node](https://gallery.ecr.aws/aws-observability/adot-autoinstrumentation-node).

     If you upgrade to a fixed version, be sure to use a version equal to or later than `0.3.0`.

1. Update the Application Signals environment variables in your app container by following the instructions at [Step 4: Instrument your application with the CloudWatch agent](CloudWatch-Application-Signals-ECS-Sidecar.md#CloudWatch-Application-Signals-Enable-ECS-Instrument).

1. Deploy your service with the new task definition.

#### Update on Amazon EC2 or other architectures
<a name="Upgrade-Addon-EC2"></a>

**To upgrade your agents for services running on Amazon EC2 or other architectures**

1. Be sure to select version `1.300040.0` or later of the CloudWatch agent.

1. Download the latest version of the AWS Distro for OpenTelemetry auto-instrumentation agent from one of the following locations:
   + For Java, use [aws-otel-java-instrumentation ](https://gallery.ecr.aws/aws-observability/adot-autoinstrumentation-java).

     If you upgrade to a fixed version, be sure to choose `1.32.2` or later.
   + For Python, use [aws-otel-python-instrumentation ](https://github.com/aws-observability/aws-otel-python-instrumentation/releases).

     If you upgrade to a fixed version, be sure to choose `0.2.0` or later.
   + For .NET, use [aws-otel-dotnet-instrumentation](https://github.com/aws-observability/aws-otel-dotnet-instrumentation/releases).

     If you upgrade to a fixed version, be sure to choose `1.3.2` or later.
   + For Node.js, use [aws-otel-js-instrumentation ](https://github.com/aws-observability/aws-otel-js-instrumentation/releases).

     If you upgrade to a fixed version, be sure to choose `0.3.0` or later.

1. Apply the updated Application Signals environment variables to your application, then start your application. For more information, see [Step 3: Instrument your application and start it](CloudWatch-Application-Signals-Enable-EC2Main.md#CloudWatch-Application-Signals-Enable-Other-instrument).

## Embedded Metric Format (EMF) disabled for Application Signals
<a name="emf-appsignals"></a>

Disabling EMF for the `/aws/application-signals/data` log group can have the following impact on Application Signals functionality.
+ Application Signals metrics and charts will not be displayed
+ Application Signals functionality will be degraded

**How do I restore Application Signals?**

When Application Signals displays empty charts or metrics, you must enable EMF for the `/aws/application-signals/data` log group to restore full functionality. For more information, see [PutAccountPolicy](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutAccountPolicy.html#API_PutAccountPolicy_RequestSyntax).

# (Optional) Configuring Application Signals
<a name="CloudWatch-Application-Signals-Configure"></a>

This section contains information about configuring CloudWatch Application Signals.

**Topics**
+ [Trace sampling rate](Application-Signals-SampleRate.md)
+ [Enable trace to log correlation](Application-Signals-TraceLogCorrelation.md)
+ [Enable metric to log correlation](Application-Signals-MetricLogCorrelation.md)
+ [Manage high-cardinality operations](Application-Signals-Cardinality.md)

# Trace sampling rate
<a name="Application-Signals-SampleRate"></a>

By default, when you enable Application Signals, X-Ray centralized sampling is enabled using the default sampling rate settings of `reservoir=1/s` and `fixed_rate=5%`. The environment variables for the AWS Distro for OpenTelemetry (ADOT) SDK agent are set as follows.


| Environment variable | Value | Note | 
| --- | --- | --- | 
| `OTEL_TRACES_SAMPLER` | `xray` |  | 
| `OTEL_TRACES_SAMPLER_ARG` | `endpoint=http://cloudwatch-agent.amazon-cloudwatch:2000` | Endpoint of the CloudWatch agent | 

For information about changing the sampling configuration, see the following:
+ To change X-Ray sampling, see [ Configure sampling rules](https://docs.aws.amazon.com/xray/latest/devguide/aws-xray-interface-console.html#xray-console)
+ To change ADOT sampling, see [ Configuring the OpenTelemetry Collector for X-Ray remote sampling](https://aws-otel.github.io/docs/getting-started/remote-sampling)

If you want to disable X-Ray centralized sampling and use local sampling instead, set the following values for the ADOT SDK Java agent. The following example sets the sampling rate to 5%.


| Environment variable | Value | 
| --- | --- | 
| `OTEL_TRACES_SAMPLER` | `parentbased_traceidratio` | 
| `OTEL_TRACES_SAMPLER_ARG` | `0.05` | 
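For example, you might set these values in the shell that launches your application:

```
export OTEL_TRACES_SAMPLER=parentbased_traceidratio
export OTEL_TRACES_SAMPLER_ARG=0.05
```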

For information about more advanced sampling settings, see [OTEL\_TRACES\_SAMPLER](https://opentelemetry.io/docs/concepts/sdk-configuration/general-sdk-configuration/#otel_traces_sampler).

# Enable trace to log correlation
<a name="Application-Signals-TraceLogCorrelation"></a>

You can enable *trace to log correlation* in Application Signals. This automatically injects trace IDs and span IDs into the relevant application logs. Then, when you open a trace detail page in the Application Signals console, the relevant log entries (if any) that correlate with the current trace automatically appear at the bottom of the page.

For example, suppose you notice a spike in a latency graph. You can choose the point on the graph to load the diagnostics information for that point in time. You then choose the relevant trace to get more information. When you view the trace information, you can scroll down to see the logs associated with the trace. These logs might reveal patterns or error codes associated with the issues causing the latency spike.

To achieve trace log correlation, Application Signals relies on the following:
+ [ Logger MDC auto-instrumentation](https://github.com/open-telemetry/opentelemetry-java-instrumentation/blob/main/docs/logger-mdc-instrumentation.md) for Java.
+ [ OpenTelemetry Logging Instrumentation](https://opentelemetry-python-contrib.readthedocs.io/en/latest/instrumentation/logging/logging.html) for Python.
+ The [ Pino](https://www.npmjs.com/package/@opentelemetry/instrumentation-pino), [ Winston](https://www.npmjs.com/package/@opentelemetry/instrumentation-winston), or [ Bunyan](https://www.npmjs.com/package/@opentelemetry/instrumentation-bunyan) auto-instrumentations for Node.js.

All of these instrumentations are provided by the OpenTelemetry community. Application Signals uses them to inject trace contexts such as trace ID and span ID into application logs. To enable this, you must manually change your logging configuration to enable the auto-instrumentation. 

Depending on the architecture that your application runs on, you might have to also set an environment variable to enable trace log correlation, in addition to following the steps in this section.
+ On Amazon EKS, no further steps are needed.
+ On Amazon ECS, no further steps are needed.
+ On Amazon EC2, see step 4 in the procedure in [Step 3: Instrument your application and start it](CloudWatch-Application-Signals-Enable-EC2Main.md#CloudWatch-Application-Signals-Enable-Other-instrument).

After you enable trace log correlation, the log entries that correlate with a trace automatically appear at the bottom of that trace's detail page in the Application Signals console.

## Trace log correlation setup examples
<a name="Application-Signals-TraceLogCorrelation-Examples"></a>

This section contains examples of setting up trace log correlation in several environments.

**Spring Boot for Java**

Suppose you have a Spring Boot application in a folder called `custom-app`. The application configuration is usually a YAML file named `custom-app/src/main/resources/application.yml` that might look like this: 

```
spring:
  application:
    name: custom-app
  config:
    import: optional:configserver:${CONFIG_SERVER_URL:http://localhost:8888/}
    
...
```

To enable trace log correlation, add the following logging configuration.

```
spring:
  application:
    name: custom-app
  config:
    import: optional:configserver:${CONFIG_SERVER_URL:http://localhost:8888/}
    
...    

logging:
  pattern:
    level: trace_id=%mdc{trace_id} span_id=%mdc{span_id} trace_flags=%mdc{trace_flags} %5p
```

**Logback for Java**

In the logging configuration (such as logback.xml), insert the trace context `trace_id=%mdc{trace_id} span_id=%mdc{span_id} trace_flags=%mdc{trace_flags} %5p` into `pattern` of Encoder. For example, the following configuration prepends the trace context before the log message.

```
<appender name="FILE" class="ch.qos.logback.core.FileAppender">
  <file>app.log</file>
  <append>true</append>
  <encoder> 
    <pattern>trace_id=%mdc{trace_id} span_id=%mdc{span_id} trace_flags=%mdc{trace_flags} %5p - %m%n</pattern> 
  </encoder>
</appender>
```

For more information about encoders in Logback, see [ Encoders](https://logback.qos.ch/manual/encoders.html) in the Logback documentation.

**Log4j2 for Java**

In the logging configuration (such as log4j2.xml), insert the trace context `trace_id=%mdc{trace_id} span_id=%mdc{span_id} trace_flags=%mdc{trace_flags} %5p` into `PatternLayout`. For example, the following configuration prepends the trace context before the log message.

```
<Appenders>
  <File name="FILE" fileName="app.log">
    <PatternLayout pattern="trace_id=%mdc{trace_id} span_id=%mdc{span_id} trace_flags=%mdc{trace_flags} %5p - %m%n"/>
  </File>
</Appenders>
```

For more information about pattern layouts in Log4j2, see [Pattern Layout](https://logging.apache.org/log4j/2.x/manual/layouts.html#Pattern_Layout) in the Log4j2 documentation.

**Log4j for Java**

In the logging configuration (such as log4j.xml), insert the trace context `trace_id=%mdc{trace_id} span_id=%mdc{span_id} trace_flags=%mdc{trace_flags} %5p` into `PatternLayout`. For example, the following configuration prepends the trace context before the log message.

```
<appender name="FILE" class="org.apache.log4j.FileAppender">
  <param name="File" value="app.log"/>
  <param name="Append" value="true"/>
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="trace_id=%mdc{trace_id} span_id=%mdc{span_id} trace_flags=%mdc{trace_flags} %5p - %m%n"/>
  </layout>
</appender>
```

For more information about pattern layouts in Log4j, see [Class PatternLayout](https://logging.apache.org/log4j/1.x/apidocs/org/apache/log4j/PatternLayout.html) in the Log4j documentation.

**Python**

Set the environment variable `OTEL_PYTHON_LOG_CORRELATION` to `true` while running your application. For more information, see [Enable trace context injection](https://opentelemetry-python-contrib.readthedocs.io/en/latest/instrumentation/logging/logging.html#enable-trace-context-injection) in the Python OpenTelemetry documentation.
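
To illustrate the kind of log line this produces, the following stdlib-only sketch stamps log records with placeholder trace context fields through a `logging.Filter`. The field names `otelTraceID` and `otelSpanID` and the ID values here are illustrative; with `OTEL_PYTHON_LOG_CORRELATION` enabled, the OpenTelemetry instrumentation injects the real values from the active span.

```python
import logging

class TraceContextFilter(logging.Filter):
    """Hypothetical filter that stamps records with placeholder trace context.
    The real OpenTelemetry logging instrumentation injects values from the
    active span instead of these hard-coded IDs."""
    def filter(self, record):
        record.otelTraceID = "4bf92f3577b34da6a3ce929d0e0e4736"  # placeholder
        record.otelSpanID = "00f067aa0ba902b7"                   # placeholder
        return True

def format_with_trace_context(message):
    # Build a record the same way a logger would, then format it.
    record = logging.LogRecord("custom-app", logging.INFO, __file__, 0,
                               message, None, None)
    TraceContextFilter().filter(record)
    fmt = logging.Formatter(
        "trace_id=%(otelTraceID)s span_id=%(otelSpanID)s %(levelname)s - %(message)s")
    return fmt.format(record)

print(format_with_trace_context("handled request"))
```

Each log line then carries the trace and span IDs, which is what allows CloudWatch to correlate log entries with traces.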

**Node.js**

For more information about enabling trace context injection in Node.js for the logging libraries that support it, see the npm usage documentation for the [Pino](https://www.npmjs.com/package/@opentelemetry/instrumentation-pino), [Winston](https://www.npmjs.com/package/@opentelemetry/instrumentation-winston), or [Bunyan](https://www.npmjs.com/package/@opentelemetry/instrumentation-bunyan) auto-instrumentations for Node.js.

# Enable metric to log correlation
<a name="Application-Signals-MetricLogCorrelation"></a>

If you publish application logs to log groups in CloudWatch Logs, you can enable *metric to application log correlation* in Application Signals. With metric log correlation, the Application Signals console automatically displays the relevant log groups associated with a metric.

For example, suppose you notice a spike in a latency graph. You can choose a point on the graph to load the diagnostics information for that point in time. The diagnostics information will show the relevant application log groups that are associated with the current service and metric. Then you can choose a button to run a CloudWatch Logs Insights query on those log groups. Depending on the information contained in the application logs, this might help you to investigate the cause of the latency spike.

Depending on the architecture that your application runs on, you might have to also set an environment variable to enable metric to application log correlation.
+ On Amazon EKS, no further steps are needed.
+ On Amazon ECS, no further steps are needed.
+ On Amazon EC2, see step 4 in the procedure in [Step 3: Instrument your application and start it](CloudWatch-Application-Signals-Enable-EC2Main.md#CloudWatch-Application-Signals-Enable-Other-instrument).

# Manage high-cardinality operations
<a name="Application-Signals-Cardinality"></a>

Application Signals includes settings in the CloudWatch agent that you can use to manage the cardinality of your operations and manage metric export to optimize costs. By default, the metric limiting function becomes active when the number of distinct operations for a service over time exceeds the threshold of 500. You can tune this behavior by adjusting the configuration settings. 

## Determine if metric limiting is activated
<a name="Limiting-Activated"></a>

You can use the following methods to determine whether the default metric limiting is happening. If it is, consider optimizing the cardinality control by following the steps in the next section.
+ In the CloudWatch console, choose **Application Signals**, **Services**. If you see an **Operation** named **AllOtherOperations** or a **RemoteOperation** named **AllOtherRemoteOperations**, then metric limiting is happening.
+ If any metrics collected by Application Signals have the value `AllOtherOperations` for their `Operation` dimension, then metric limiting is happening.
+ If any metrics collected by Application Signals have the value `AllOtherRemoteOperations` for their `RemoteOperation` dimension, then metric limiting is happening.
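
For example, if you have retrieved a list of metrics (such as from a `ListMetrics` response), you can scan their dimensions for the limiter's rollup values. The following is a sketch that assumes each metric is a dict with a `Dimensions` list of `Name`/`Value` pairs; the function name is illustrative:

```python
# Rollup values that the metric limiter substitutes for dropped operations.
LIMITER_VALUES = {
    "Operation": "AllOtherOperations",
    "RemoteOperation": "AllOtherRemoteOperations",
}

def metric_limiting_active(metrics):
    """Return True if any metric carries a limiter rollup dimension value.

    metrics: iterable of dicts with a 'Dimensions' list of
    {'Name': ..., 'Value': ...} entries, as in a ListMetrics response.
    """
    for metric in metrics:
        for dim in metric.get("Dimensions", []):
            if LIMITER_VALUES.get(dim["Name"]) == dim["Value"]:
                return True
    return False

sample = [{"Dimensions": [{"Name": "Operation", "Value": "AllOtherOperations"}]}]
print(metric_limiting_active(sample))  # True
```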

### Optimize cardinality control
<a name="Optimize-Cardinality"></a>

To optimize your cardinality control, you can do the following:
+ Create custom rules to aggregate operations.
+ Configure your metric limiting policy.

#### Create custom rules to aggregate operations
<a name="Optimize-Cardinality-Custom-Rules"></a>

High-cardinality operations can sometimes be caused by inappropriate unique values extracted from the context. For example, sending out HTTP/S requests that include user IDs or session IDs in the path can lead to hundreds of disparate operations. To resolve such issues, we recommend that you configure the CloudWatch agent with customization rules to rewrite these operations.

In cases where there is a surge in generating numerous different metrics through individual `RemoteOperation` calls, such as `PUT /api/customer/owners/123`, `PUT /api/customer/owners/456`, and similar requests, we recommend that you consolidate these operations into a single `RemoteOperation`. One approach is to standardize all `RemoteOperation` calls that start with `PUT /api/customer/owners/` to a uniform format, specifically `PUT /api/customer/owners/{ownerId}`. The following example illustrates this. For information about other customization rules, see [Enable CloudWatch Application Signals](CloudWatch-Agent-Application_Signals.md).

```
{
   "logs":{
      "metrics_collected":{
         "application_signals":{
            "rules":[
               {
                  "selectors":[
                     {
                        "dimension":"RemoteOperation",
                        "match":"PUT /api/customer/owners/*"
                     }
                  ],
                  "replacements":[
                     {
                        "target_dimension":"RemoteOperation",
                        "value":"PUT /api/customer/owners/{ownerId}"
                     }
                  ],
                  "action":"replace"
               }
            ]
         }
      }
   }
}
```
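
Conceptually, the rule above performs a wildcard match and replace on the `RemoteOperation` dimension. The following sketch illustrates the idea; it is not the agent's actual implementation, and the function name is hypothetical:

```python
from fnmatch import fnmatchcase

def apply_rule(remote_operation,
               match="PUT /api/customer/owners/*",
               replacement="PUT /api/customer/owners/{ownerId}"):
    # Any RemoteOperation matching the wildcard pattern is rewritten to the
    # aggregated form; everything else passes through unchanged.
    return replacement if fnmatchcase(remote_operation, match) else remote_operation

print(apply_rule("PUT /api/customer/owners/123"))  # PUT /api/customer/owners/{ownerId}
print(apply_rule("GET /api/health"))               # GET /api/health
```

This collapses what would otherwise be hundreds of distinct `RemoteOperation` values into one, keeping cardinality low.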

In other cases, high-cardinality metrics might have been aggregated to `AllOtherRemoteOperations`, and it might be unclear what specific metrics are included. The CloudWatch agent can log the dropped operations. To identify dropped operations, use the configuration in the following example to activate logging until the problem resurfaces. Then inspect the CloudWatch agent logs (accessible through the container `stdout` or EC2 log files) and search for the keyword `drop metric data`.

```
{
  "agent": {
    "config": {
      "agent": {
        "debug": true
      },
      "traces": {
        "traces_collected": {
          "application_signals": {
          }
        }
      },
      "logs": {
        "metrics_collected": {
          "application_signals": {
            "limiter": {
              "log_dropped_metrics": true
            }
          }
        }
      }
    }
  }
}
```

#### Create your metric limiting policy
<a name="Optimize-Cardinality-Metric-Limiting"></a>

If the default metric limiting configuration doesn’t address the cardinality for your service, you can customize the metric limiter configuration. To do this, add a `limiter` section under the `logs/metrics_collected/application_signals` section in the CloudWatch agent configuration file.

The following example lowers the threshold of metric limiting from 500 distinct metrics to 100.

```
{
  "logs": {
    "metrics_collected": {
      "application_signals": {
        "limiter": {
          "drop_threshold": 100
        }
      }
    }
  }
}
```
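
To illustrate the limiter's effect, the following simplified sketch rolls operations beyond the threshold into a single rollup name. This is an illustration of the behavior described above, not the agent's actual algorithm:

```python
def limit_operations(operations, drop_threshold=100,
                     rollup="AllOtherOperations"):
    # Once the number of distinct operations exceeds the threshold, later
    # operations are reported under the rollup name instead of their own.
    seen, result = set(), []
    for op in operations:
        if op in seen or len(seen) < drop_threshold:
            seen.add(op)
            result.append(op)
        else:
            result.append(rollup)
    return result

ops = [f"GET /item/{i}" for i in range(3)]
print(limit_operations(ops, drop_threshold=2))
```

With a threshold of 2, the third distinct operation is reported as `AllOtherOperations`, which is why that value appearing in your metrics indicates that limiting is active.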

# Monitor the operational health of your applications with Application Signals
<a name="Services"></a>

Use Application Signals within the [CloudWatch console](https://console.aws.amazon.com/cloudwatch/) to monitor and troubleshoot the operational health of your applications:
+ **Monitor your application services** — As part of daily operational monitoring, use the [Services](Services-page.md) page to see a summary of all your services. See services with the highest fault rate or latency, and see which services have unhealthy [service level indicators (SLIs)](CloudWatch-ServiceLevelObjectives.md). Select a service to open the [Service detail](ServiceDetail.md) page and see detailed metrics, service operations, Synthetics canaries, and client requests. This can help you troubleshoot and identify the root cause of operational issues. 
+ **Inspect your application topology** — Use the [Application Map](ServiceMap.md) to understand and monitor your application topology over time, including the relationships between clients, Synthetics canaries, services, and dependencies. Instantly see service level indicator (SLI) health and view key metrics such as call volume, fault rate, and latency. Drill down to see more detailed information in the [Service detail](ServiceDetail.md) page.

Explore an [example scenario](Services-example-scenario.md) that demonstrates how these pages can be used to quickly troubleshoot an operational service health issue, from initial detection to identifying root cause.

**How Application Signals enables operational health monitoring**

After you [enable your application](CloudWatch-Application-Signals-Enable.md) for Application Signals, your application services, APIs, and their dependencies are automatically discovered and displayed in the **Services**, **Service detail**, and **Application Map** pages. Application Signals collects information from multiple sources to enable service discovery and operational health monitoring: 
+ [AWS Distro for OpenTelemetry (ADOT)](CloudWatch-Application-Signals-supportmatrix.md) — As part of enabling Application Signals, OpenTelemetry Java and Python auto-instrumentation libraries are configured to emit metrics and traces that are collected by the CloudWatch agent. The metrics and traces are used to enable discovery of services, operations, dependencies, and other service information.
+ [Service-level objectives (SLOs)](CloudWatch-ServiceLevelObjectives.md) — After you create service level objectives for your services, the Services, Service detail, and Application Map pages display service level indicator (SLI) health. SLIs can monitor latency, availability, and other operational metrics.
+ [CloudWatch Synthetics canaries](CloudWatch_Synthetics_Canaries.md) — When you configure X-Ray tracing on your canaries, calls to your services from your canary scripts are associated with your service and displayed within the Service detail page.
+ [CloudWatch Real user monitoring (RUM)](CloudWatch-RUM.md) — When X-Ray tracing is enabled on your CloudWatch RUM web client, requests to your services are automatically associated and displayed within the service detail page.
+ [AWS Service Catalog AppRegistry](https://docs.aws.amazon.com/servicecatalog/latest/arguide/intro-app-registry.html) — Application Signals auto-discovers AWS resources within your account and allows you to group them into logical applications created in AppRegistry. The application name displayed in the Services page is based on the underlying compute resource that your services are running on.

**Note**  
Application Signals displays your services and operations based on metrics and traces emitted within the current time filter that you chose. (By default, this is the past three hours.) If there is no activity within the current time filter for a service, operation, dependency, Synthetics canary, or client page, it won't be displayed.   
Up to 1,000 services can be displayed. Discovery of your services and service topology might be delayed up to 10 minutes. Evaluation of your service level indicator (SLI) health might be delayed up to 15 minutes. 

**Note**  
The Application Signals console currently supports choosing a time range of at most one day within the past 30 days.

# View overall service activity and operational health with the Services page
<a name="Services-page"></a>

Use the Services page to see a list of your services that are [enabled for Application Signals](CloudWatch-Application-Signals-Enable.md). You can also view operational metrics and quickly see which services have unhealthy service level indicators (SLIs). Drill down to look for performance anomalies as you identify the root cause of operational issues. To view this page, open the [CloudWatch console](https://console.aws.amazon.com/cloudwatch/) and choose **Services** under the **Application Signals** section in the left navigation pane.

For un-instrumented services, the Service overview page displays limited information with prominent calls-to-action to enable Application Signals instrumentation.

## Explore operational health metrics for your services
<a name="services-top-graphs"></a>

The top of the Services page includes an overall service operational health graph and tables displaying the top services and service dependencies by fault rate. The Services graph on the left displays a breakdown of the number of services that have healthy or unhealthy service level indicators (SLIs) during the current page-level time filter. SLIs can monitor latency, availability, and other operational metrics. View the top services by fault rate in the two tables next to the graph. Select a service name in either table to open its [service detail page](ServiceDetail.md), which displays detailed service operation information. Select a dependency path to view service dependency details on its detail page.

Both tables display information for up to the past three hours, even if a longer time period filter is chosen at the top right of the page.

When you use dynamic service grouping, the operational health metrics automatically aggregate data across all services within each group. This provides:
+ Consolidated fault rates for service groups
+ Group-level SLI health status
+ Aggregated performance metrics that help identify problematic service clusters
+ Quick identification of which groups require immediate attention during incidents

![\[CloudWatch Services top graphs\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/services-top-graphs.png)


## Monitor operational health with the Services table
<a name="services-table"></a>

The Services table displays a list of your services that have been enabled for Application Signals. Choose **Enable Application Signals** to open a setup page and start configuring your services. For more information, see [Enable Application Signals](CloudWatch-Application-Signals-Enable.md). 

Filter the Services table to make it easier to find what you're looking for, by choosing one or more properties from the filter text box. As you choose each property, you are guided through filter criteria. You will see the complete filter below the filter text box. Choose **Clear filters** at any time to remove the table filter. 

The advanced filtering options allow you to:
+ Filter by service groups (both default and custom groupings)
+ Filter by recent deployment activity
+ Filter by Platform
+ Filter by SLI Health
+ Filter by Account ID (in cross-account observability setups)
+ Filter by instrumentation status (instrumented vs un-instrumented)
+ Filter by environment
+ Filter by service health status

![\[CloudWatch Services table\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/services-table-healthy-updated.png)


Un-instrumented services appear in the Services table even when they haven't been configured with Application Signals, helping you identify gaps in your observability coverage and prioritize which services to instrument next based on their position in your architecture.

Choose the name of any service in the table to view a [service detail page](ServiceDetail.md) containing service-level metrics, operations, and additional details. If you have associated the service's underlying compute resource with an application in AppRegistry or the Applications card on the AWS Management Console home page, choose the application name to display the application details in the [myApplications](https://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/aws-myApplications.html) console page. For services hosted in Amazon EKS, choose any link within the **Hosted in** column to view Cluster, Namespace, or Workload within CloudWatch Container Insights. For services running on Amazon ECS or Amazon EC2, the Environment value is shown. 

[Service level indicator (SLI)](CloudWatch-ServiceLevelObjectives.md#CloudWatch-ServiceLevelObjectives-concepts) status is displayed for each service in the table. Choose the SLI status for a service to display a pop-up containing a link to any unhealthy SLIs, and a link to see all SLOs for the service. 

![\[Service with unhealthy SLI\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/services-unhealthy-sli.png)


If no SLOs have been created for a service, choose the **Create SLO** button within the **SLI Status** column. To create additional SLOs for any service, select the option button next to the service name, and then choose **Create SLO** at the top-right of the table. When you create SLOs, you can see at a glance which of your services and operations are performing well and which are unhealthy. See [service level objectives (SLOs)](CloudWatch-ServiceLevelObjectives.md) for more information. 

## Service overview
<a name="services-overview"></a>

After you select a service from the Services table, the Service overview page opens. This page provides a comprehensive view of your service's operational health and performance metrics. The overview displays these summary metrics:
+ Total operations
+ Service dependencies
+ Canary monitoring status
+ RUM client data

These metrics give you immediate insight into your service's current state.

You can visualize key operational performance indicators over time using a series of charts. To analyze trends and identify potential issues affecting your service health, adjust the time filter. All charts automatically update to reflect data for the selected time period.

The Audit findings section automatically detects and shows critical problems in your service's behavior, so you don't need to investigate manually. Application Signals analyzes your applications to report significant observations and potential problems, simplifying root cause analysis. These automated findings consolidate relevant traces, eliminating the need to navigate through multiple clicks. The audit system helps teams quickly identify issues and their underlying causes, enabling faster problem resolution.

You can use the Change events section to identify how recent deployments or configuration changes affect your service behavior. Application Signals automatically processes CloudTrail events to track change events across your application. Monitor configuration and deployment events for services and their dependencies, providing immediate context for operational analysis and troubleshooting. Application Signals automatically correlates deployment times with performance changes, helping you quickly identify if recent deployments contributed to service issues.

![\[Service overview\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/Service_detail.png)


# View detailed service activity and operational health with the service detail page
<a name="ServiceDetail"></a>

When you instrument your application, [Amazon CloudWatch Application Signals](CloudWatch-Application-Monitoring-Sections.md) maps all of the services that your application discovers. Use the service detail page to see an overview of your services, operations, dependencies, canaries, and client requests for a single service. To view the service detail page, do the following:
+ Open the [CloudWatch console](https://console.aws.amazon.com/cloudwatch/).
+ Choose **Services** under the **Application Signals** section in the left navigation pane.
+ Choose the name of any service from the **Services**, **Top services**, or dependency tables.

Under the service name (for example, **schedule-visits**), you will see the account label and ID.

The service detail page is organized into the following tabs:
+  [Overview](#ServiceDetail-overview) — Use this tab to see an overview of a single service, including the number of operations, dependencies, Synthetics canaries, and client pages. The tab shows key metrics for your entire service and for its top operations and dependencies. These metrics include time series data on latency, faults, and errors across all operations for that service.
+  [Service operations](#ServiceDetail-operations) — Use this tab to see a list of the operations that your service calls and interactive graphs with key metrics that measure the health of each operation. You can select a data point in a graph to obtain information about traces, logs, or metrics associated with that data point.
+  [Dependencies](#ServiceDetail-dependencies) — Use this tab to see a list of dependencies that your service calls, and a list of metrics for those dependencies.
+  [Synthetics canaries](#ServiceDetail-canaries) — Use this tab to see a list of Synthetics canaries that simulate user calls to your service, and key performance metrics for those canaries. 
+  [Client pages](#ServiceDetail-clientpages) — Use this tab to see a list of client pages that call your service, and metrics that measure the quality of client interactions with your application. 
+  [Related metrics](#ServiceDetail-relatedmetrics) — Use this tab to correlate related metrics, such as standard metrics, runtime metrics, and custom metrics for a service, its operations, or its dependencies.

## View your service overview
<a name="ServiceDetail-overview"></a>

Use the service overview page to view a high-level summary of metrics for all service operations in a single location. Check the performance of all the operations, dependencies, client pages and synthetics canaries that interact with your application. Use this information to help you determine where to focus efforts to identify issues, troubleshoot errors, and find opportunities for optimization.

Choose any link in **Service Details** to view information that is related to a specific service. For example, for services hosted in Amazon EKS, the service details page shows **Cluster**, **Namespace**, and **Workload** information. For services hosted in Amazon ECS or Amazon EC2, the service details page shows the **Environment** value.

Under **Services**, the **Overview** tab displays a summary of the following:
+ Operations – Use this tab to see the health of your service operations. The health status is determined by service level indicators (SLI) that are defined as a part of a [service level objective](CloudWatch-ServiceLevelObjectives.md) (SLO).
+ Dependencies – Use this tab to see the top dependencies called by your application, listed by fault rate, and to see the health of your service dependencies. The health status is determined by service level indicators (SLIs) that are defined as a part of a service level objective (SLO).
+ Synthetics canaries – Use this tab to see the result of simulated calls to endpoints or APIs associated with your service, and the number of failed canaries.
+ Client pages – Use this tab to see top pages called by clients that have asynchronous JavaScript and XML (AJAX) errors.

The following illustration shows an overview of your services:

![\[Service overview widgets\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/service-detail-widgets.png)


The **Overview** tab also displays a graph of dependencies with the highest latency across all services. Use the **p99**, **p90** and **p50** latency metrics to quickly assess which dependencies are contributing to your total service latency, as follows:

![\[Service operations latency graph\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/service-detail-latency.png)


For example, the previous graph shows that 99% of the requests made to the **customer-service** dependency were completed in approximately 4,950 milliseconds. The other dependencies took less time.

Graphs displaying the top four service operations by latency show the volume of requests, availability, fault rate, and error rate for those services, as shown in the following image:

![\[Service operations volume, availability, fault rate, and error rate graphs\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/service-detail-operations-graphs.png)


The **Service details** section displays the details of the service including the **Account ID** and **Account label**.

## View your service operations
<a name="ServiceDetail-operations"></a>

When you instrument your application, [Application Signals](CloudWatch-Application-Monitoring-Sections.md) discovers all of the service operations that your application calls. Use the **Service operations** tab to see a table that contains the service operations and a set of metrics that measure the performance of a selected operation. These metrics include SLI status, number of dependencies, latency, volume, faults, errors, and availability, as shown in the following image:

![\[Service operations table\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/service-operations-table.png)


Filter the table to make it easier to find a service operation by choosing one or more properties from the filter text box. As you choose each property, you are guided through filter criteria and will see the complete filter below the filter text box. Choose **Clear filters** at any time to remove the table filter. 

Choose the SLI status for an operation to display a popup containing a link to any unhealthy SLI, and a link to see all SLOs for the operation, as shown in the following image:

![\[Service operation SLI status\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/service-operation-unhealthy-slo.png)


The service operations table lists the SLI status, the number of healthy or unhealthy SLIs, and the total number of SLOs for each operation.

Use SLIs to monitor latency, availability, and other operational metrics that measure the operational health of a service. Use an SLO to check the performance and health status of your services and operations.

To create an SLO, do the following:
+ If an operation does not have an SLO, choose the **Create SLO** button within the **SLI Status** column.
+ If an operation already has an SLO, do the following:
  + Select the radio button next to the operation name.
  + Choose **Create SLO** from the **Actions** down arrow at the top right of the table.

For more information, see [service level objectives (SLOs)](CloudWatch-ServiceLevelObjectives.md).

The **Dependencies** column shows the number of dependencies this operation calls. Choose this number to open the **Dependencies** tab filtered to the selected operation.

### View service operations metrics, correlated traces, and application logs
<a name="ServiceDetail-traces"></a>

Application Signals correlates service operation metrics with AWS X-Ray traces, CloudWatch [Container Insights](ContainerInsights.md), and application logs. Use these metrics to troubleshoot operational health issues. To view metrics as graphical information, do the following:

1. Select a service operation in the **Service operations** table to see a set of graphs for the selected operation above the table with metrics for **Volume and Availability**, **Latency**, and **Faults and Errors**.

1. Hover over a point in a graph to view more information.

1. Select a point to open a diagnostic pane that shows correlated traces, metrics, and application logs for the selected point in the graph.

The following image shows the tooltip that appears after hovering over a point in the graph, and the diagnostic pane which appears after clicking on a point. The tooltip contains information about the associated data point in the **Faults and Errors** graph. The pane contains **Correlated traces**, **Top contributors**, and **Application logs** associated with the selected point.

![\[Correlated traces for faults and errors\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/service-detail-correlated-traces.png)


#### Correlated traces
<a name="ServiceDetail-traces-correlated"></a>

Look at related traces to understand an underlying issue with a trace. You can check to see if correlated traces or any service nodes associated with them behave similarly. To examine correlated traces, choose a **Trace ID** from the **Correlated traces** table to open the [X-Ray trace details](https://docs.aws.amazon.com/xray/latest/devguide/xray-console-traces.html) page for the chosen trace. The trace details page contains a map of service nodes that are associated with the selected trace and a timeline of trace segments.

#### Top contributors
<a name="ServiceDetail-traces-top-contributors"></a>

View the top contributors to find main input sources to a metric. Group contributors by different components to look for similarities within the group and understand how trace behavior differs between them.

The **Top contributors** tab gives metrics for **Call volume**, **Availability**, **Avg latency**, **Errors**, and **Faults** for each group. The following example image shows top contributors to a suite of metrics for an application deployed on an Amazon EKS platform:

![\[Service operation top contributors\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/service-operations-top-contributors.png)


The **Top contributors** tab contains the following metrics:
+ **Call volume** - Use the call volume to understand the number of requests per time interval for a group.
+ **Availability** - Use availability to see what percentage of time that no faults were detected for a group.
+ **Avg latency** - Use average latency to check the average time that requests ran for a group. The evaluation interval depends on how long ago the requests that you are investigating were made: requests made less than 15 days prior are evaluated over 1-minute intervals, and requests made between 15 and 30 days prior, inclusive, are evaluated over 5-minute intervals. For example, if you are investigating requests that caused a fault 15 days ago, the metrics are evaluated over 5-minute intervals.
+ **Errors** - The number of errors per group measured over a time interval.
+ **Faults** - The number of faults per group over a time interval.
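
The interval selection described under **Avg latency** can be sketched as follows (the function name is illustrative; the thresholds come from the text above):

```python
def evaluation_interval_minutes(days_ago):
    # Requests newer than 15 days are evaluated over 1-minute intervals;
    # requests 15 to 30 days old (inclusive) over 5-minute intervals.
    if days_ago < 15:
        return 1
    if days_ago <= 30:
        return 5
    raise ValueError("Requests older than 30 days are not evaluated")

print(evaluation_interval_minutes(14))  # 1
print(evaluation_interval_minutes(15))  # 5
```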

**Top contributors using Amazon EKS or Kubernetes**

Use information about the top contributors for applications deployed on Amazon EKS or Kubernetes to see operational health metrics grouped by **Node**, **Pod**, and **PodTemplateHash**. The following definitions apply:
+ A **pod** is a group of one or more Docker containers that share storage and resources. A pod is the smallest unit that can be deployed on a Kubernetes platform. Group by pods to check if errors are related to pod-specific limitations.
+ A **node** is a server that runs pods. Group by nodes to check if errors are related to node-specific limitations.
+ A **pod template hash** is used to find a particular version of a deployment. Group by pod template hash to check if errors are related to a particular deployment.
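As a rough sketch of grouping by these dimensions (the record shapes and values here are hypothetical, for illustration only — in Application Signals these dimensions come from collected telemetry, not code you write):

```python
from collections import Counter

# Hypothetical request records with Kubernetes dimensions.
records = [
    {"node": "node-1", "pod": "checkout-7d9f", "pod_template_hash": "7d9f", "fault": True},
    {"node": "node-1", "pod": "checkout-7d9f", "pod_template_hash": "7d9f", "fault": False},
    {"node": "node-2", "pod": "checkout-5b21", "pod_template_hash": "5b21", "fault": True},
]

def faults_by(dimension):
    """Count faults grouped by a Kubernetes dimension (node, pod, ...)."""
    return Counter(r[dimension] for r in records if r["fault"])

# Grouping by pod template hash shows whether faults cluster in one deployment.
print(faults_by("pod_template_hash"))
```

If one pod template hash dominates the fault counts, the errors are likely tied to a particular deployment version rather than a node or pod limitation.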

**Top contributors using Amazon EC2**

Use information about the top contributors for applications deployed on Amazon EC2 to see operational health metrics grouped by instance ID and Auto Scaling group. The following definitions apply:
+ An **Instance ID** is a unique identifier for the Amazon EC2 instance that your service runs on. Group by instance ID to check if errors are related to a specific Amazon EC2 instance.
+ An [Auto Scaling group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/auto-scaling-groups.html) is a collection of Amazon EC2 instances that allows you to scale up or down the resources you need to serve your application requests. Group by Auto Scaling group if you want to check whether errors are limited in scope to the instances inside the group.

**Top contributors using a custom platform**

Use information about the top contributors for applications deployed using custom instrumentation to see operational health metrics grouped by **Host name**. The following definitions apply:
+ A host name identifies a device such as an endpoint or Amazon EC2 instance that is connected to a network. Group by host name to check if your errors are related to a specific physical or virtual device.

**View top contributors in Logs Insights and Container Insights**

View and modify the automatic query that generated metrics for your top contributors in [Logs Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html). View infrastructure performance metrics by specific groups such as pods or nodes in [Container Insights](ContainerInsights.md). You can sort clusters, nodes, or workloads by resource consumption, quickly identify anomalies, and mitigate risks proactively before the end user experience is impacted. An image showing how to select these options follows:

![\[Top contributors table\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/service-operations-top-contributors-insights.png)


In **Container Insights**, you can view metrics for your Amazon EKS or Amazon ECS container that are specific to the grouping of your top contributors. For example, if you grouped by pod for an EKS container to generate top contributors, Container Insights shows metrics and statistics filtered for your pod.

In **Logs Insights**, you can modify the query that generated the metrics under **Top contributors** using the following steps:

1. Select **View in Log Insights**. The **Logs Insights** page that opens contains a query that is automatically generated and contains the following information:
   + The log group name.
   + The operation that you were investigating with CloudWatch.
   + The aggregate of the operational health metric interacted with on the graph.

   The log results are automatically filtered to show data from the last five minutes before you selected the data point on the service graph.

1. To edit the query, replace the generated text with your changes. You can also use the **Query generator** to help you generate a new query, or update the existing query.

#### Application logs
<a name="ServiceDetail-traces-application-logs"></a>

Use the query in the **Application logs** tab to retrieve logged information for your current log group and service at a given timestamp. A log group is a group of log streams that you can define when you configure your application.

Use a log group to organize logs with similar characteristics including the following:
+ Capture logs from a specific organization, source or function.
+ Capture logs that are accessed by a particular user.
+ Capture logs for a specific time period.

Use these log streams to track specific groups or time frames. You can also set up monitoring rules, alarms and notifications for these log groups. For more information about log groups, see [Working with log groups and log streams](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html).

The application logs query returns logs, recurring text patterns, and graphical visualizations for your log groups.

To run the query, select **Run query in Logs Insights** to either run the automatically generated query or modify the query. To edit the query, replace the automatically generated text with your changes. You can also use the **Query generator** to help you generate a new query or update the existing query.

The following image shows the sample query that is automatically generated based on the selected point in the service operations graph:

![\[Application logs table\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/service-operations-application-logs.png)


In the preceding image, CloudWatch has automatically detected the log group that is associated with your selected point, and included it in a generated query.

## View your service dependencies
<a name="ServiceDetail-dependencies"></a>

Choose the **Dependencies** tab to display the **Dependencies** table and a set of metrics for the dependencies of all service operations or a single operation. The table contains a list of dependencies discovered by Application Signals, including metrics for SLI status, latency, call volume, fault rate, error rate, and availability.

At the top of the page, choose an operation from the drop-down list to view its dependencies, or choose **All** to see dependencies for all operations.

Filter the table to make it easier to find what you're looking for, by choosing one or more properties from the filter text box. As you choose each property, you are guided through filter criteria and will see the complete filter below the filter text box. Choose **Clear filters** at any time to remove the table filter. Select **Group by Dependency** at the top right of the table to group dependencies by service and operation name. When grouping is turned on, expand or collapse a group of dependencies with the arrow icon next to the dependency name.

![\[Dependencies table\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/service-dependencies-table.png)


The **Dependency** column displays the dependency service name, while the **Remote Operation** column displays the service operation name. The **SLI status** column displays the number of healthy or unhealthy SLIs along with the total number of SLIs for each dependency. When calling AWS services, the **Target** column displays the AWS resource, such as a DynamoDB table or Amazon SNS topic.

To select a dependency, select the option next to a dependency in the **Dependencies** table. This shows a set of graphs that display detailed metrics for call volume, availability, faults, and errors. Hover over a point in a graph to see a popup containing more information. Select a point in a graph to open a diagnostic pane that shows correlated traces for the selected point in the graph. Choose a trace ID from the **Correlated traces** table to open the [X-Ray Trace details](https://docs.aws.amazon.com/xray/latest/devguide/xray-console-traces.html) page for the selected trace.

![\[Dependency graphs and correlated traces\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/service-dependency-graph-traces.jpg)


## View your Synthetics canaries
<a name="ServiceDetail-canaries"></a>

Choose the **Synthetics Canaries** tab to display the **Synthetics Canaries** table, and a set of metrics for each canary in the table. The table includes metrics for success percentage, average duration, runs, and failure rate. Only canaries that are [enabled for AWS X-Ray tracing](CloudWatch_Synthetics_Canaries_tracing.md) are displayed.

Use the filter text box in the synthetics canaries table to find the canary that you are interested in. Each filter that you create appears below the filter text box. Choose **Clear filters** at any time to remove the table filter. 

![\[Synthetics canaries table\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/service-canaries-table.png)


Select the radio button next to the name of the canary to see a set of tabs containing graphs of detailed metrics, including success percentage, errors, and duration. Hover over a point in a graph to see a popup containing more information. Select a point in a graph to open a diagnostic pane that shows canary runs that correlate to the selected point. Select a canary run and choose the **Run time** to see artifacts for your selected canary run, including logs, HTTP Archive (HAR) files, screenshots, and suggested steps to help you troubleshoot problems. Choose **Learn more** next to **Canary runs** to open the [CloudWatch Synthetics Canaries](CloudWatch_Synthetics_Canaries.md) page.

![\[Synthetics canary graphs and runs\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/service-canary-graphs-runs.jpg)


## View your client pages
<a name="ServiceDetail-clientpages"></a>

Choose the **Client pages** tab to display a list of client web pages that call your service. Use the set of metrics for the selected client page to measure the quality of your client's experience when interacting with a service or application. These metrics include page loads, web vitals, and errors.

To display your client pages in the table, you must [configure your CloudWatch RUM web client for X-Ray tracing](CloudWatch-RUM-get-started-create-app-monitor.md) and turn on Application Signals metrics for your client pages. Choose **Manage pages** to select which pages are enabled for Application Signals metrics.

Use the filter text box to find the client page or application monitor that you are interested in. Each filter that you create appears below the filter text box. Choose **Clear filters** to remove the table filter. Select **Group by Client** to group client pages by client. When grouped, choose the arrow icon next to a client name to expand the row and see all pages for that client.

![\[Client pages table\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/service-client-pages-table.png)


To select a client page, select the option next to a client page in the **Client pages** table. You will see a set of graphs that display detailed metrics. Hover over a point in a graph to see a popup containing more information. Select a point in a graph to open a diagnostic pane that shows correlated performance navigation events for the selected point in the graph. Choose an event ID from the list of navigation events to open the [CloudWatch RUM Page view](CloudWatch-RUM-view-data.md) for the chosen event.

![\[CloudWatch RUM client page requests\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/service-client-page-graphs-events.jpg)


**Note**  
To see AJAX errors within your client pages, use the [CloudWatch RUM web client](CloudWatch-RUM-configure-client.md) version 1.15 or newer.  
 Up to 100 operations, canaries, and client pages, and up to 250 dependencies, can be shown per service. 

## View Related metrics
<a name="ServiceDetail-relatedmetrics"></a>

Use the **Related metrics** tab to visualize multiple metrics, identify correlation patterns, and determine root causes of issues.

The metrics table shows three types of metrics:
+ Standard metrics – Application Signals collects standard application metrics from the services that it discovers. For more information, see [Standard application metrics collected](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AppSignals-MetricsCollected.html#AppSignals-StandardMetrics).
+ Runtime metrics – Application Signals uses the AWS Distro for OpenTelemetry SDK to automatically collect OpenTelemetry-compatible metrics from your Java and Python applications. For more information, see [Runtime metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AppSignals-MetricsCollected.html#AppSignals-RuntimeMetrics).
+ Custom metrics – Application Signals enables you to generate custom metrics from your application. For more information, see [Custom metrics with Application Signals](AppSignals-CustomMetrics.md).

You can access the **Related metrics** tab from the Service Overview, Service Operations, Dependencies, Synthetics Canaries, or RUM tabs.

![\[View related metrics\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/Custom_metrics.png)

+ The left navigation panel starts with all operations and dependencies unselected
+ The graph initially shows the Fault metric from the operation with the highest fault rate

Before you begin correlation analysis, make sure you have data points visible in Service Operations or Dependencies. To analyze correlations:

1. Open the Service Operations or Dependencies page.

1. Select a data point on any graph.

1. In the right panel, choose **Correlate with Other Metrics**.

1. On the **Related metrics** tab that opens, you'll see:
   + Your selected operation or dependency in the left navigation
   + Your selected metric graphed in the *Browse metrics* table
   + Correlated spans when you select a data point

To graph multiple metrics, select one or more metrics from the **Browse** view in the **Related metrics** tab. Choose **Graphed Metrics** to view all graphed metrics.

To filter metrics, use the left panel filters to focus on specific operations or dependencies and use the table header filter bar to search by name, type, or other attributes. These filtering options help you detect patterns and troubleshoot issues more efficiently.

To analyze related metrics in detail, select a data point in the **Related metrics** tab. You can then view:
+ Top Contributors – Analyzes metrics by running CloudWatch Logs Insights queries. These queries process Embedded Metric Format (EMF) records that contain key attributes for detailed analysis of the following:
  + Latency measurements
  + Fault occurrences
  + Service availability metrics

  The following metrics do not support Top Contributors:
  + OTEL Metrics
  + Server-side Span Metrics

  You can view Top Contributors for RED Metrics and Client-side Span Metrics.
+ Correlated Spans – The Correlated Spans section works consistently with the Service Operations tab. To help you identify related traces and metrics, the correlation mechanism works by:
  + Comparing metric names with span attributes
  + Identifying matching patterns during the selected time period
  + Displaying relevant trace information

  To effectively analyze your metrics and spans together, you need to understand how different metric types correlate. Here are the key limitations:
  + OTEL Metrics don't correlate with spans because they use independent naming systems.
  + To correlate Server-side or Client-side Span Metrics with spans, include a `Service` dimension field in your configuration. Without this `Service` dimension, you cannot correlate these metrics with spans.
+ Log Applications – For more information about application logs, see [Application logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ServiceDetail.html#ServiceDetail-operations)
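The `Service`-dimension constraint described above can be sketched in a few lines. The record shapes, metric names, and trace IDs below are hypothetical, for illustration only:

```python
# Hypothetical span metrics and spans; in Application Signals these come
# from collected telemetry, not from code you write yourself.
metrics = [
    {"name": "Latency", "dimensions": {"Service": "orders", "Operation": "GET /order"}},
    {"name": "cpu.time", "dimensions": {}},  # OTEL metric, no Service dimension
]
spans = [
    {"service": "orders", "trace_id": "abc123"},
    {"service": "billing", "trace_id": "def456"},
]

def correlated_spans(metric):
    """Return spans matching the metric's Service dimension, if present."""
    service = metric["dimensions"].get("Service")
    if service is None:
        return []  # no Service dimension -> no correlation possible
    return [s for s in spans if s["service"] == service]

print([s["trace_id"] for s in correlated_spans(metrics[0])])  # ['abc123']
print(correlated_spans(metrics[1]))                           # []
```

A metric without the `Service` dimension (such as an OTEL metric with its own naming system) returns no correlated spans, which is why those metric types are listed as limitations.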

# View your application topology and monitor operational health with the CloudWatch application map
<a name="ServiceMap"></a>

**Note**  
The CloudWatch application map replaces the Service Map. To see a map of your application based on AWS X-Ray traces, open the [X-Ray Trace Map](https://docs.aws.amazon.com/xray/latest/devguide/xray-console-servicemap.html). Choose **Trace Map** under the **X-Ray** section in the left navigation pane of the CloudWatch console. 

After enabling your application for Application Signals, the application map displays nodes representing your groups. You then drill down in these groups to view your services and their dependencies. Use the application map to view the topology of your application clients, synthetics canaries, services and dependencies, and monitor operational health. To view the application map, open the [CloudWatch console](https://console.aws.amazon.com/cloudwatch/) and choose **Application Map** under the **Application Signals** section in the left navigation pane.



After you [enable your application for Application Signals](CloudWatch-Application-Signals-Enable.md), use the application map to make it easier to monitor your application's operational health:
+ View connections between client, canary, service, and dependency nodes to help you understand your application topology and execution flow. This is especially helpful if your service operators are not your development team. 
+ See which services are meeting or not meeting your [service level objectives (SLOs)](CloudWatch-ServiceLevelObjectives.md). When a service is not meeting your SLOs, you can quickly identify whether a downstream service or dependency might be contributing to the issue or impacting multiple upstream services. 
+ Select an individual client, synthetics canary, service, or dependency node to see related metrics. The [Service details](ServiceDetail.md) page shows more detailed information about operations, dependencies, synthetics canaries, and client pages. 
+ Filter and zoom the application map to make it easier to focus on a part of your application topology, or see the entire map. Create a filter by choosing one or more properties from the filter text box. As you choose each property, you are guided through filter criteria. You will see the complete filter below the filter text box. Choose **Clear filters** at any time to remove the filter. 
+ Monitor services across multiple AWS accounts in a single unified application map. Services from different accounts are clearly identified with account information, enabling unified observability for distributed applications.
+ Identify services not yet instrumented in your application. Application Signals automatically detects and displays services that haven't been instrumented yet, helping you achieve complete observability coverage. Un-instrumented services are visually distinguished on the map to help you prioritize instrumentation efforts.
+ Group and filter services to create customized views that match your workflows. This organization helps you quickly find and access the services you use most frequently.
+ Save your filtered and grouped views to quickly return to frequently used configurations.

## Explore the application map
<a name="Service-map-exploring"></a>

When you visit the application map, by default it shows services grouped by **Related services**. Related services group services based on their dependencies. For example, if Service A calls Service B, which calls Service C, they're grouped under Service A. You can view SLI health, metrics and service count for all services in each group.

![\[CloudWatch default application map grouped by related services.\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/explore-application-map-overview.png)


Choose a tab for information about exploring each kind of node and the edges (connections) between them.

### Dynamic grouping and filtering
<a name="Application-Map-Grouping"></a>

You can click the **Group by** dropdown to use different grouping options. By default, Application Map provides two groupings:
+ **Related services** - Groups services based on their dependencies
+ **Environment** - Groups services by their environment

If you want to define your own custom grouping, click **Manage groups** to define custom groups and then tag your services or add OTEL Resource Attributes with the group key.

**Note**  
To enable grouping via OTEL resource attributes, the CloudWatch agent version must be v1.300056.0 or later. 

![\[Create custom grouping panel\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/explore-application-map-create-custom-grouping.png)


Default grouping in Application Signals automatically organizes services based on their downstream dependencies. The system analyzes the service dependency graph and creates groups where the root node (a service with no upstream dependencies) becomes the group name. All services that depend on this root service, either directly or indirectly, are automatically included in the group. For example, if Service A calls Service B, which in turn calls Service C, all three services will be grouped together with Service A as the group name since it's the root of the dependency chain. This automatic grouping mechanism provides a natural way to visualize and manage related services based on their actual runtime interactions and dependencies.
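The default grouping mechanism described above can be sketched as a small graph walk. This is an illustrative sketch of the described behavior, not the actual Application Signals implementation; service names are hypothetical.

```python
def group_by_root(calls):
    """Group services under their dependency roots.

    calls maps each service to the services it calls downstream. A root
    is a service that no other service calls; every service reachable
    from a root (directly or indirectly) joins that root's group."""
    callees = {c for targets in calls.values() for c in targets}
    roots = [s for s in calls if s not in callees]
    groups = {}
    for root in roots:
        members, stack = set(), [root]
        while stack:  # depth-first walk of the dependency chain
            svc = stack.pop()
            if svc not in members:
                members.add(svc)
                stack.extend(calls.get(svc, []))
        groups[root] = members
    return groups

# Service A calls B, which calls C: all three land in group "A".
print(group_by_root({"A": ["B"], "B": ["C"], "C": []}))
```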

### Group actions and insights
<a name="Application-Map-Group-Actions"></a>

For each group, you can perform the following actions:
+ Click **View more** to view metrics charts, the last two change events, and last deployment time for the group  
![\[View more drawer for group in application map\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/explore-application-map-view-more.png)
+ Click **View dashboard** to view metrics dashboard, change events table, and service list for the group  
![\[View application dashboard for group\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/explore-application-map-team-overview.png)  
![\[View application dashboard for group with metrics graphs\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/explore-application-map-team-overview-2.png)

You can use **Group and filter** on the left bar to filter groups by the deployment time, SLI health status, or compute platform type of the services they contain.

![\[Grouping and filter services on the application dashboard\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/explore-application-map-grouping-filter.png)


You can also filter by account to view services from specific AWS accounts in your cross-account observability setup.

![\[Filter services by account on the application dashboard\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/explore-application-map-account-filter.png)


Use the **Search and filter** bar to search groups by name or search groups which contain specific service environment or dependency. Filter by account ID to focus on services from specific accounts.

![\[Search and filter services in application map\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/explore-application-map-search-and-filter.png)


### Configuring custom groups
<a name="Application-Map-Configure-Custom-Groups"></a>

Custom grouping allows you to organize your services logically based on your business requirements and operational priorities. This feature enables you to view and save defined views prioritized by your specific needs, create groups based on team ownership, and assemble groups of services needed for critical business transactions.

Create the custom group names (the group names you will see in the UI) and the corresponding group key names. Complete this step either from the Application Signals UI or using the [PutGroupingConfiguration](https://docs.aws.amazon.com/applicationsignals/latest/APIReference/API_PutGroupingConfiguration.html) API.

Group key names can be either an AWS tag key or an OTEL resource attribute for your service. When deciding between tags and OTEL resource attributes, consider your compute platform:
+ For single-service platforms (for example, Lambda or Auto Scaling Group) – Use AWS tags
+ For multi-service platforms (for example, Amazon EKS cluster) – Use OTEL resource attributes for more granular grouping

**Adding AWS tags**

Add an AWS tag with the custom group key as a key and a value to an Amazon EKS cluster. When there are multiple services running in one Amazon EKS cluster, all of them are tagged with the same custom group key. For example, when Amazon EKS Cluster A has Service 1, Service 2, and Service 3 running, adding an AWS tag with key *Team X* to the cluster will add all three services to *Team X*. To add only specific services to *Team X*, add OTEL resource attributes for the services as shown below.

**Adding OTEL resource attributes**

To add an OTEL resource attribute, see the configuration below:

**General configuration**

Configure the `OTEL_RESOURCE_ATTRIBUTES` environment variable in your application using the custom group key-value pairs. The keys are listed under `aws.application_signals.metric_resource_keys` separated by `&`.

For example, to create custom groups using `Application=PetClinic` and `Owner=Test`, use the following:

```
OTEL_RESOURCE_ATTRIBUTES=Application=PetClinic,Owner=Test,aws.application_signals.metric_resource_keys=Application&Owner
```
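To make the structure of that value concrete, the following sketch parses it the way the description above implies: a comma-separated list of key=value pairs, with the keys named in `aws.application_signals.metric_resource_keys` (separated by `&`) selecting which attributes become custom group keys. The parser itself is illustrative, not an AWS tool.

```python
def parse_resource_attributes(value):
    """Split an OTEL_RESOURCE_ATTRIBUTES value into attributes and the
    subset of attributes selected as custom group keys."""
    attrs = dict(pair.split("=", 1) for pair in value.split(","))
    keys = attrs.pop("aws.application_signals.metric_resource_keys", "")
    group_keys = [k for k in keys.split("&") if k]
    return attrs, {k: attrs[k] for k in group_keys if k in attrs}

env = ("Application=PetClinic,Owner=Test,"
       "aws.application_signals.metric_resource_keys=Application&Owner")
attrs, groups = parse_resource_attributes(env)
print(groups)  # {'Application': 'PetClinic', 'Owner': 'Test'}
```

Here both `Application` and `Owner` are listed under `aws.application_signals.metric_resource_keys`, so both become group keys with the values `PetClinic` and `Test`.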

**Platform-specific configuration**

The following are the deployment specifications.

**Amazon EKS and native Kubernetes**

```
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  replicas: 1
  ...
  template:
    spec:
      containers:
      - name: your-app
        image: your-app-image
        env:
          ...
          - name: OTEL_RESOURCE_ATTRIBUTES
            value: Application=PetClinic,Owner=Test,aws.application_signals.metric_resource_keys=Application&Owner
```

**Amazon EC2**

Add `OTEL_RESOURCE_ATTRIBUTES` to your application start script. For the complete example, see [Adding `OTEL_RESOURCE_ATTRIBUTES`](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Signals-Enable-EC2Main.html#CloudWatch-Application-Signals-Monitor-EC2).

```
...
OTEL_RESOURCE_ATTRIBUTES="service.name=$YOUR_SVC_NAME,Application=PetClinic,Owner=Test,aws.application_signals.metric_resource_keys=Application&Owner" \
java -jar $MY_JAVA_APP.jar
```

**Amazon ECS**

Add `OTEL_RESOURCE_ATTRIBUTES` to the TaskDefinition. For the complete example, see [Enable on Amazon ECS](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Signals-Enable-ECSMain.html).

```
{
  "name": "my-app",
   ...
  "environment": [
    {
      "name": "OTEL_RESOURCE_ATTRIBUTES",
      "value": "service.name=$YOUR_SVC_NAME,Application=PetClinic,Owner=Test,aws.application_signals.metric_resource_keys=Application&Owner"
    }, 
    ...
  ]
}
```

**Lambda**

Add `OTEL_RESOURCE_ATTRIBUTES` to the Lambda environment variable.

```
OTEL_RESOURCE_ATTRIBUTES="Application=PetClinic,Owner=Test,aws.application_signals.metric_resource_keys=Application&Owner"
```

### Viewing services within groups
<a name="Application-Map-Service-View"></a>

To view services and their dependencies in a group, click the group name. A map of the services inside the group appears. Each service node shows SLI health, metrics, and platform details. Services with an SLI breach are highlighted so that they are easy to recognize.

![\[CloudWatch application map services within group.\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/View-services-groups.png)


Un-instrumented services are displayed with a distinctive visual indicator (such as a dashed border or different color) to differentiate them from instrumented services. Hover over an un-instrumented service node to see instrumentation guidance and links to setup documentation.

![\[Filter by uninstrumented services on application map\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/explore-application-map-uninstrumented-filter.png)


All canary, RUM client, and AWS service nodes are collapsed by default. If services in this group call services that are not part of the group, those services are also collapsed by default.

![\[Canary nodes are collapsed into a group in application map\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/explore-application-map-canary-collapse.png)


If your map is still too large to investigate effectively, you can apply nested grouping to narrow down your investigation. For example, after grouping services by **Business Unit**, if you still have too many services in a group, use the Group by dropdown to select **Team**, creating a nested grouping structure.

![\[Nested grouping in application map\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/explore-application-map-nested-grouping.png)


### Service insights and details
<a name="Application-Map-Service-Details"></a>

While on this page, you can also click **Save view** next to the search bar to save your view, so that next time you don't have to apply the same grouping and filtering again.

![\[Save grouping configuration\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/explore-application-map-save-view.png)


Click **View more** in a service node to view Service Audit, Change events, SLI health, and Metrics graphs.

![\[CloudWatch application map service insights.\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/explore-application-map-service-view-more.png)


If you want to view service operations and other service details, click **View dashboard** to go to the service overview page.

![\[CloudWatch application map service overview.\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/explore-application-map-service-overview.png)


Alternatively, you can click an edge to view metrics for a specific dependency call of a service.

![\[CloudWatch application map node edge drawer\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/explore-application-map-edge.png)


### Change Events
<a name="Application-Map-Change-Events"></a>

Track change events across your application with Application Signals' automatic processing of CloudTrail events. Monitor configuration and deployment events for services and their dependencies, providing immediate context for operational analysis and troubleshooting. Change event detection is enabled alongside service discovery enablement through the CloudWatch Console or StartDiscovery API. For EKS services, deployment detection requires that the EKS services are instrumented with the Application Signals instrumentation SDK. Application Signals automatically correlates deployment times with performance changes, helping you quickly identify if recent deployments contributed to service issues. View change event history and impact across your services without additional configuration or setup requirements.

### Audit findings
<a name="Application-Map-Audit-Findings"></a>

Discover critical insights through Application Signals' audit findings. The service analyzes your applications to report significant observations and potential problems, simplifying root cause analysis. These automated findings consolidate relevant traces, eliminating the need to navigate through multiple clicks. The audit system helps teams quickly identify issues and their underlying causes, enabling faster problem resolution. 

For services running on Amazon Bedrock, Application Signals automatically monitors GenAI token usage patterns. The audit system detects anomalies in input and output token consumption, comparing current usage against historical baselines. When token usage exceeds normal patterns, audit findings provide detailed analysis including token consumption trends, cost implications, and recommendations for optimization. This helps teams identify inefficient prompts, unexpected token spikes, and opportunities to reduce GenAI operational costs.
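One common way to flag usage that "exceeds normal patterns" against a historical baseline is a standard-deviation threshold. The sketch below is a hedged illustration of that general idea only; the threshold, window, and function names are assumptions, not the actual Application Signals detection algorithm.

```python
from statistics import mean, stdev

def is_token_anomaly(history, current, n_sigma=3.0):
    """Flag current token usage that deviates from the historical mean
    by more than n_sigma standard deviations (illustrative rule)."""
    mu, sigma = mean(history), stdev(history)
    return abs(current - mu) > n_sigma * max(sigma, 1e-9)

baseline = [1000, 1100, 950, 1050, 1020]   # hypothetical daily token counts
print(is_token_anomaly(baseline, 5000))    # True: far above baseline
print(is_token_anomaly(baseline, 1030))    # False: within normal range
```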

### Cross-Account Observability on Application Map
<a name="Application-Map-Cross-Account"></a>

Application Signals supports cross-account observability, allowing you to monitor and visualize services distributed across multiple AWS accounts in a single unified application map. This capability is essential for organizations with multi-account architectures following AWS best practices.

**Key Capabilities:**
+ *Unified View*: View services from multiple AWS accounts in a single application map, providing a complete picture of your distributed application architecture.
+ *Account Identification*: Each service node clearly displays its account ID and region, making it easy to identify service ownership and location.
+ *Centralized Monitoring*: Monitor the health, performance, and SLO status of services across all connected accounts from a single monitoring account.
+ *Cross-Account Filtering*: Filter and group services by account ID to focus on specific accounts or view cross-account interactions.

**How It Works:**

Application Signals uses AWS Organizations and cross-account sharing to enable observability across multiple accounts. To set up cross-account observability, see [CloudWatch cross-account observability](CloudWatch-Unified-Cross-Account.md).

------
#### [ View your application services ]

**Service (Instrumented)**

You can view your application services and the status of their SLOs and service level indicators (SLIs) in the **Application Map**. If you didn't create SLOs for a service, choose the **Create SLO** button below the service node.

The **Application Map** displays all of your services. It also shows the customers and canaries that consume each service, and the dependencies that your services call, as shown in the following image:

![\[A CloudWatch application map displaying healthy and unhealthy service.\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/service-map-service-healthy-unhealthy.png)


When you select a service node, a pane opens displaying detailed service information: 
+ The total error and fault rates.
+ The number of SLIs and SLOs that are `healthy` or `unhealthy`. 
+ The option to view more information about an SLO.
+ The `Cluster`, `Namespace`, and `Workload` for services hosted in Amazon EKS, or the Environment for services hosted in Amazon ECS or Amazon EC2. For Amazon EKS-hosted services, choose any link to open CloudWatch Container Insights.
+ The account ID and Region.
+ The **Change** section showing recent change events and the last deployment time.
+ The **Operational Audit** tab providing automated audit findings and recommendations.
+ A service metrics chart of availability, latency, faults, and errors.

Select an edge or connection between a service node and a downstream service or dependency node. This opens a pane containing top paths by fault rate, latency, and error rate, as shown in the following example image. Choose any link in the pane to open the [Service details](ServiceDetail.md) page and see detailed information for the chosen service or dependency.

![\[A CloudWatch application map service edge\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/App-signals-service-edge.png)


When you select an edge, a pane opens displaying detailed connection information: 
+ The total request count, latency, error rate, and fault rate
+ The top path by fault rate
+ The top path by latency
+ The top path by error rate
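
The "top path" ranking in the edge pane can be illustrated with a small sketch. The per-path statistics below are hypothetical; the console derives the real values from trace data.

```python
# Hypothetical per-path statistics for one service-to-dependency edge.
paths = [
    {"path": "GET /owners -> DynamoDB",  "fault_rate": 0.02, "latency_ms": 120, "error_rate": 0.01},
    {"path": "POST /visits -> DynamoDB", "fault_rate": 0.15, "latency_ms": 480, "error_rate": 0.03},
    {"path": "GET /vets -> DynamoDB",    "fault_rate": 0.00, "latency_ms": 45,  "error_rate": 0.00},
]

def top_path(paths, key):
    """Return the path that ranks highest on the given statistic."""
    return max(paths, key=lambda p: p[key])["path"]

print(top_path(paths, "fault_rate"))  # POST /visits -> DynamoDB
print(top_path(paths, "latency_ms"))  # POST /visits -> DynamoDB
```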

**Service (Un-instrumented)**

Un-instrumented services appear on the Application Map even when they haven't been configured with Application Signals. These services are automatically discovered through AWS Resource Explorer, using application names and tags. The system can automatically detect up to 3,000 resources in your AWS account.

When you select an un-instrumented service node, a pane opens displaying:
+ The service name and identification information
+ The account ID and Region where the service is detected
+ The instrumentation status and guidance
+ An **Enable Application Signals** button that provides setup instructions
+ The compute platform type (if detectable)

Un-instrumented services help you:
+ Identify gaps in your observability coverage
+ Prioritize which services to instrument next based on their position in your architecture
+ Understand the complete application topology even before full instrumentation
+ Plan instrumentation rollout across your organization

**Note**  
Un-instrumented services display limited telemetry data since they don't actively send metrics or traces.

![\[CloudWatch application map instrumentation filter\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/explore-application-map-instrumentation-filter.png)


------
#### [ View dependencies ]

Your application dependencies are displayed on the application map, connected to the services that call them.

Choose a dependency node to open a pane containing the error rate and fault rate, and metrics charts for requests, availability, latency, fault rate, and error rate.

If the dependency node is a service or resource, the pane also displays change events for the requested time range.

![\[A CloudWatch application map displaying an expandable AWS service dependency node.\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/service-map-dependency.png)


------
#### [ View clients ]

After you [turn on X-Ray tracing](CloudWatch-RUM-get-started-create-app-monitor.md) for your CloudWatch RUM web clients, they display on the application map connected to services they call.

Choose a client node to open a pane displaying detailed client information:
+ Metrics for page loads, average load time, errors, and average web vitals
+ A graph displaying a breakdown of errors
+ A link to display the client details in CloudWatch RUM

![\[A CloudWatch application map displaying an expandable client node.\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/service-map-client.png)



------
#### [ View synthetics canaries ]

To view canaries on your application map, [turn on X-Ray tracing](CloudWatch-RUM-get-started-create-app-monitor.md) for your CloudWatch Synthetics canaries. Once enabled, canaries appear on the application map connected to the services they call.

The system groups canaries together by default into a single expandable icon. The detailed canary information pane displays metrics, traces, and status information.

Choose a canary node to open a pane displaying detailed canary information, as shown in the following image:

![\[A CloudWatch application map displaying an expandable synthetics canary node.\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/service-map-canary.png)


Choose **View dashboard** to open the canary details.

------

# Application observability for AWS Action
<a name="Service-Application-Observability-for-AWS-GitHub-Action"></a>

The Application Observability for AWS GitHub Action provides an end-to-end application observability investigation workflow that connects your source code and live production telemetry data to an AI agent. It uses CloudWatch MCP servers and generates custom prompts to provide the context that AI agents need for troubleshooting and applying code fixes.

The action sets up and configures the [CloudWatch Application Signals MCP Server](https://awslabs.github.io/mcp/servers/cloudwatch-applicationsignals-mcp-server) and [CloudWatch MCP Server](https://awslabs.github.io/mcp/servers/cloudwatch-applicationsignals-mcp-server), enabling them to access live telemetry data as troubleshooting context. You can use your preferred AI model, whether through your own API key, a third-party model, or Amazon Bedrock, for application performance investigations.

To get started, mention `@awsapm` in your GitHub issues to trigger the AI agent. The agent will troubleshoot production issues, implement fixes, and enhance observability coverage based on your live application data.

The action itself does not incur any direct costs. However, using it may result in charges for AWS services and AI model usage. See the [cost considerations documentation](https://github.com/marketplace/actions/application-observability-for-aws#-cost-considerations) for detailed information about potential costs.

## Getting Started
<a name="Service-Application-Observability-for-AWS-GitHub-Action-getting-started"></a>

This action configures AI agents within your GitHub workflow by generating AWS-specific MCP configurations and custom observability prompts. You only need to provide an IAM role to assume and either a [Bedrock Model ID](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html) you want to use or an API token from your existing LLM subscription. The example below demonstrates a workflow template that integrates this action with [Anthropic's claude-code-base-action](https://github.com/anthropics/claude-code-base-action) to run automated investigations.

### Prerequisites
<a name="Service-Application-Observability-for-AWS-GitHub-Action-prerequisites"></a>

Before you begin, ensure you have the following:
+ **GitHub Repository Permissions**: Write access or higher to the repository (required to trigger the action)
+ **AWS IAM Role**: An IAM role configured with OpenID Connect (OIDC) for GitHub Actions with permissions for:
  + CloudWatch Application Signals and CloudWatch access
  + Amazon Bedrock model access (if using Bedrock models)
+ **GitHub Token**: The workflow automatically uses `GITHUB_TOKEN` with the required permissions

### Setup Steps
<a name="Service-Application-Observability-for-AWS-GitHub-Action-setup-steps"></a>

#### Step 1: Set up AWS Credentials
<a name="Service-Application-Observability-for-AWS-GitHub-Action-step1"></a>

This action relies on the [aws-actions/configure-aws-credentials](https://github.com/aws-actions/configure-aws-credentials) action to set up AWS authentication in your GitHub Actions Environment. We recommend using OpenID Connect (OIDC) to authenticate with AWS. OIDC allows your GitHub Actions workflows to access AWS resources using short-lived AWS credentials so you do not have to store long-term credentials in your repository.

1. **Create an IAM Identity Provider**

   First, create an IAM Identity Provider that trusts GitHub's OIDC endpoint in the AWS Management Console:

   1. Open the IAM console

   1. Choose **Identity providers** under **Access management**

   1. Choose **Add provider** to add the GitHub identity provider if it has not yet been created

   1. Select **OpenID Connect** as the provider type

   1. Enter `https://token.actions.githubusercontent.com` for the **Provider URL** input box

   1. Enter `sts.amazonaws.com` for the **Audience** input box

   1. Choose **Add provider**

1. **Create an IAM Policy**

   Create an IAM policy with the required permissions for this action. See the [Required Permissions](#Service-Application-Observability-for-AWS-GitHub-Action-required-permissions) section below for details.

1. **Create an IAM Role**

   Create an IAM role (for example, `AWS_IAM_ROLE_ARN`) in the AWS Management Console with the following trust policy template. This allows authorized GitHub repositories to assume the role:

   ```
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Principal": {
           "Federated": "arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/token.actions.githubusercontent.com"
         },
         "Action": "sts:AssumeRoleWithWebIdentity",
         "Condition": {
           "StringEquals": {
             "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
           },
           "StringLike": {
             "token.actions.githubusercontent.com:sub": "repo:<GITHUB_ORG>/<GITHUB_REPOSITORY>:ref:refs/heads/<GITHUB_BRANCH>"
           }
         }
       }
     ]
   }
   ```

   Replace the following placeholders in the template:
   + `<AWS_ACCOUNT_ID>` - Your AWS account ID
   + `<GITHUB_ORG>` - Your GitHub organization name
   + `<GITHUB_REPOSITORY>` - Your repository name
   + `<GITHUB_BRANCH>` - Your branch name (e.g., main)

1. **Attach the IAM Policy**

   In the role's Permissions tab, attach the IAM policy you created in step 2.

For more information about configuring OIDC with AWS, see the [configure-aws-credentials OIDC Quick Start Guide](https://github.com/aws-actions/configure-aws-credentials/tree/main?tab=readme-ov-file#quick-start-oidc-recommended).

#### Step 2: Configure Secrets and Add Workflow
<a name="Service-Application-Observability-for-AWS-GitHub-Action-step2"></a>

1. **Configure Repository Secrets**

   Go to your repository → Settings → Secrets and variables → Actions.
   + Create a new repository secret named `AWS_IAM_ROLE_ARN` and set its value to the ARN of the IAM role you created in Step 1.
   + (Optional) Create a repository variable named `AWS_REGION` to specify your AWS region (defaults to `us-east-1` if not set)

1. **Add the Workflow File**

   The following is an example workflow that demonstrates using this action with Amazon Bedrock models. Create a workflow file from this template in the `.github/workflows` directory of your GitHub repository.

   ```
   name: Application observability for AWS
   
   on:
     issue_comment:
       types: [created, edited]
     issues:
       types: [opened, assigned, edited]
   
   jobs:
     awsapm-investigation:
       if: |
         (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@awsapm')) ||
         (github.event_name == 'issues' && (contains(github.event.issue.body, '@awsapm') || contains(github.event.issue.title, '@awsapm')))
       runs-on: ubuntu-latest
   
       permissions:
         contents: write        # To create branches for PRs
         pull-requests: write   # To post comments on PRs
         issues: write          # To post comments on issues
         id-token: write        # Required for AWS OIDC authentication
   
       steps:
         - name: Checkout repository
           uses: actions/checkout@v4
   
         - name: Configure AWS credentials
           uses: aws-actions/configure-aws-credentials@v4
           with:
             role-to-assume: ${{ secrets.AWS_IAM_ROLE_ARN }}
             aws-region: ${{ vars.AWS_REGION || 'us-east-1' }}
   
         # Step 1: Prepare AWS MCP configuration and investigation prompt
         - name: Prepare Investigation Context
           id: prepare
           uses: aws-actions/application-observability-for-aws@v1
           with:
             bot_name: "@awsapm"
             cli_tool: "claude_code"
   
         # Step 2: Execute investigation with Claude Code
         - name: Run Claude Investigation
           id: claude
           uses: anthropics/claude-code-base-action@beta
           with:
             use_bedrock: "true"
             # Set to any Bedrock Model ID
             model: "us.anthropic.claude-sonnet-4-5-20250929-v1:0"
             prompt_file: ${{ steps.prepare.outputs.prompt_file }}
             mcp_config: ${{ steps.prepare.outputs.mcp_config_file }}
             allowed_tools: ${{ steps.prepare.outputs.allowed_tools }}
   
         # Step 3: Post results back to GitHub issue/PR (reuse the same action)
         - name: Post Investigation Results
           if: always()
           uses: aws-actions/application-observability-for-aws@v1
           with:
             cli_tool: "claude_code"
             comment_id: ${{ steps.prepare.outputs.awsapm_comment_id }}
             output_file: ${{ steps.claude.outputs.execution_file }}
             output_status: ${{ steps.claude.outputs.conclusion }}
   ```

   **Configuration Notes:**
   + This workflow triggers automatically when `@awsapm` is mentioned in an issue or comment
   + The workflow uses the `AWS_IAM_ROLE_ARN` secret configured in the previous step
   + Update the `model` parameter in Step 2 to specify your preferred Amazon Bedrock model ID
   + You can customize the bot name (for example, `@awsapm-prod` or `@awsapm-staging`) in the `bot_name` parameter to support different environments

#### Step 3: Start Using the Action
<a name="Service-Application-Observability-for-AWS-GitHub-Action-step3"></a>

Once the workflow is configured, mention `@awsapm` in any GitHub issue to trigger an AI-powered investigation. The action will analyze your request, access live telemetry data, and provide recommendations or implement fixes automatically.

**Example Use Cases:**

1. Investigate performance issues and post a fix:

   `@awsapm, can you help me investigate availability issues in my appointment service?`  
![\[alt text not found\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/github-availability-issue-investigate.png)

   `@awsapm, can you post a fix?`  
![\[alt text not found\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/github-availability-issue-pr-fix.png)

1. Enable instrumentation:

   `@awsapm, please enable Application Signals for lambda-audit-service and create a PR with the required changes.`

1. Query telemetry data:

   `@awsapm, how many GenAI tokens have been consumed by my services in the past 24 hours?`

**What Happens Next:**

1. The workflow detects the `@awsapm` mention and triggers the investigation

1. The AI agent accesses your live AWS telemetry data through the configured MCP servers

1. The agent analyzes the issue and either:
   + Posts findings and recommendations directly in the issue
   + Creates a pull request with code changes (for instrumentation or fixes)

1. You can review the results and continue the conversation by mentioning @awsapm again with follow-up questions

## Security
<a name="Service-Application-Observability-for-AWS-GitHub-Action-security"></a>

This action prioritizes security with strict access controls, OIDC-based AWS authentication, and built-in protections against prompt injection attacks. Only users with write access or higher can trigger the action, and all operations are scoped to the specific repository.

For detailed security information, including:
+ Access control and permission requirements
+ AWS IAM permissions and OIDC configuration
+ Prompt injection risks and mitigations
+ Security best practices

See the [Security Documentation](https://github.com/aws-actions/application-observability-for-aws/blob/main/docs/security.md).

## Configuration
<a name="Service-Application-Observability-for-AWS-GitHub-Action-configuration"></a>

### Required Permissions
<a name="Service-Application-Observability-for-AWS-GitHub-Action-required-permissions"></a>

The IAM role assumed by GitHub Actions must have the following permissions.

**Note**: `bedrock:InvokeModel` and `bedrock:InvokeModelWithResponseStream` are only required if you're using Amazon Bedrock models.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "application-signals:ListServices",
                "application-signals:GetService",
                "application-signals:ListServiceOperations",
                "application-signals:ListServiceLevelObjectives",
                "application-signals:GetServiceLevelObjective",
                "application-signals:ListAuditFindings",
                "cloudwatch:DescribeAlarms",
                "cloudwatch:DescribeAlarmHistory",
                "cloudwatch:ListMetrics",
                "cloudwatch:GetMetricData",
                "cloudwatch:GetMetricStatistics",
                "logs:DescribeLogGroups",
                "logs:DescribeQueryDefinitions",
                "logs:ListLogAnomalyDetectors",
                "logs:ListAnomalies",
                "logs:StartQuery",
                "logs:StopQuery",
                "logs:GetQueryResults",
                "logs:FilterLogEvents",
                "xray:GetTraceSummaries",
                "xray:GetTraceSegmentDestination",
                "xray:BatchGetTraces",
                "xray:ListRetrievedTraces",
                "xray:StartTraceRetrieval",
                "servicequotas:GetServiceQuota",
                "synthetics:GetCanary",
                "synthetics:GetCanaryRuns",
                "s3:GetObject",
                "s3:ListBucket",
                "iam:GetRole",
                "iam:ListAttachedRolePolicies",
                "iam:GetPolicy",
                "iam:GetPolicyVersion",
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream"
            ],
            "Resource": "*"
        }
    ]
}
```

## Documentation
<a name="Service-Application-Observability-for-AWS-GitHub-Action-documentation"></a>

For more information, check out:
+ [CloudWatch Application Signals Documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Monitoring-Intro.html) - Learn about CloudWatch Application Signals features and capabilities
+ [Application observability for AWS Action Public Documentation](https://github.com/marketplace/actions/application-observability-for-aws) - Detailed guides and tutorials

# Example: Use Application Signals to resolve an operational health issue
<a name="Services-example-scenario"></a>

The following scenario provides an example of how you can use Application Signals to monitor your services, identify service quality issues, drill down to potential root causes, and take action to resolve them. This example focuses on a pet clinic application composed of several microservices that call AWS services such as DynamoDB. 

Jane is part of a DevOps team that oversees the operational health of a pet clinic application. Jane's team is committed to ensuring that the application is highly available and responsive. They use [service level objectives (SLOs)](CloudWatch-ServiceLevelObjectives.md) to measure application performance against these business commitments. She receives an alert about several unhealthy service level indicators (SLIs). She opens the CloudWatch console and navigates to the Services page, where she sees several services in an unhealthy state.

![\[Services with unhealthy SLIs\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/example-scenario-services-page.jpg)


At the top of the page, Jane sees that the `visits-service` is the top service by fault rate. She selects the link in the graph, which opens the Service detail page for the service. She sees that there is an unhealthy operation in the Service operations table. She selects this operation and sees in the Volume and Availability graph that there are periodic call volume spikes that seem to correlate to dips in availability. 

![\[Service operation volume and availability\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/example-scenario-unhealthy-operation.png)


In order to look closer at the dips in service availability, Jane selects one of the availability data points in the graph. A drawer opens showing X-Ray traces that are correlated to the selected data point. She sees that there are multiple traces containing faults. 

![\[Service availability and correlated traces\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/example-scenario-correlated-traces.jpg)


Jane selects one of the correlated traces with a fault status, which opens the X-Ray Trace detail page for the selected trace. Jane scrolls down to the Segments Timeline section and follows the call path until she sees that calls to a DynamoDB table are returning errors. She selects the DynamoDB segment and navigates to the Exceptions tab of the right-side drawer. 

![\[Trace segment with DynamoDB errors\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/example-scenario-DDB-segment.jpg)


Jane sees that a DynamoDB resource is misconfigured, resulting in errors during spikes in customer requests. The DynamoDB table's level of provisioned throughput is periodically exceeded, resulting in service availability issues and unhealthy SLIs. Based on this information, her team is able to configure a higher level of provisioned throughput and ensure high availability of the application. 

# Example: Use Application Signals to troubleshoot generative AI applications interacting with Amazon Bedrock models
<a name="Services-example-scenario-GenerativeAI"></a>

You can use Application Signals to troubleshoot your generative AI applications that interact with Amazon Bedrock models. Application Signals streamlines this process by providing out-of-the-box telemetry data, offering deeper insights into your application's interactions with LLM models. It helps address key use cases such as:
+ Model configuration issues
+ Model usage costs
+ Model latency
+ Model response generation stopped reasons

[Enabling Application Signals](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Signals-Enable.html) with LLM/GenAI Observability provides real-time visibility into your application's interactions with Amazon Bedrock services. Application Signals automatically generates and correlates performance metrics and traces for Amazon Bedrock API calls.

Application Signals currently supports the following LLM models from Amazon Bedrock:
+ AI21 Jamba
+ Amazon Titan
+ Anthropic Claude
+ Cohere Command
+ Meta Llama
+ Mistral AI
+ Nova

## Fine-grained metrics and traces
<a name="Services-example-scenario-GenerativeAI-metricandtraces"></a>

For each Amazon Bedrock API call, Application Signals generates detailed performance metrics at the resource level, including:
+ Model ID
+ Guardrails ID
+ Knowledge Base ID
+ Bedrock Agent ID

Additionally, correlated trace spans at the same level help provide a comprehensive view of request execution and dependencies.

![\[Performance metrics using Application Signals.\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/AppSignalsAIExample.png)


## OpenTelemetry GenAI attributes support
<a name="Services-example-scenario-GenerativeAI-OpenTelemetryAISupport"></a>

Application Signals generates the following GenAI attributes for Amazon Bedrock API calls, following the OpenTelemetry semantic conventions. These attributes help you analyze model usage, cost, and response quality, and can be used through [Transaction Search](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Transaction-Search.html) for deeper insights.
+ `gen_ai.system`
+ `gen_ai.request.model`
+ `gen_ai.request.max_tokens`
+ `gen_ai.request.temperature`
+ `gen_ai.request.top_p`
+ `gen_ai.usage.input_tokens`
+ `gen_ai.usage.output_tokens`
+ `gen_ai.response.finish_reasons`
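
As one example of using these attributes, the sketch below estimates per-request cost from the token-usage attributes. The per-token prices are invented placeholders; substitute the published Amazon Bedrock pricing for your model and Region.

```python
# Hypothetical USD prices per 1,000 tokens -- NOT real Bedrock pricing.
PRICE_PER_1K = {"input": 0.003, "output": 0.015}

def span_cost(attributes):
    """Estimate request cost from OpenTelemetry GenAI usage attributes."""
    tokens_in = attributes["gen_ai.usage.input_tokens"]
    tokens_out = attributes["gen_ai.usage.output_tokens"]
    return (tokens_in / 1000) * PRICE_PER_1K["input"] + (
        tokens_out / 1000
    ) * PRICE_PER_1K["output"]

# Hypothetical span attributes as surfaced in Transaction Search.
span = {
    "gen_ai.request.model": "example-model-id",
    "gen_ai.usage.input_tokens": 2000,
    "gen_ai.usage.output_tokens": 500,
}
print(round(span_cost(span), 4))  # 0.0135
```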

![\[GenAI attributes using Application Signals.\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/AppSignalsAIExample_1.png)


For example, you can use the analytics capabilities of Transaction Search to compare the token usage and cost across different LLM models for the same prompt, enabling cost-efficient model selection.

![\[GenAI attributes using Application Signals.\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/AppSignalsAIExample_2.png)


For more information, see [Improve Amazon Bedrock Observability with CloudWatch Application Signals](https://aws.amazon.com/blogs/mt/improve-amazon-bedrock-observability-with-amazon-cloudwatch-appsignals/).

# Metrics collected by Application Signals
<a name="AppSignals-MetricsCollected"></a>

Application Signals collects both [standard application metrics](#AppSignals-StandardMetrics) and [runtime metrics](#AppSignals-RuntimeMetrics) from the applications that you enable it for.

Standard application metrics relate to the most critical aspects of service performance: latency and availability.

Runtime metrics track application metrics over time, including memory usage, CPU usage, and garbage collection. Application Signals displays the runtime metrics in the context of the services that you have enabled for Application Signals. When you have an operational issue, observing the runtime metrics can be useful to help you find the root cause of the issue. For example, you can see if spikes in latency in your service relate to spikes in a runtime metric. 

**Topics**
+ [Standard application metrics collected](#AppSignals-StandardMetrics)
+ [Runtime metrics](#AppSignals-RuntimeMetrics)
+ [Disabling the collection of runtime metrics](#AppSignals-RuntimeMetrics-Disable)

## Standard application metrics collected
<a name="AppSignals-StandardMetrics"></a>

Application Signals collects *standard application metrics* from the services that it discovers. These metrics relate to the most critical aspects of a service's performance: latency, faults, and errors. They can help you identify issues, monitor performance trends, and optimize resources to improve the overall user experience.

The following table lists the metrics collected by Application Signals. These metrics are sent to CloudWatch in the `ApplicationSignals` namespace.


| Metric | Description | 
| --- | --- | 
|  `Latency`  |  The delay before data transfer begins after the request is made. Units: Milliseconds | 
|  `Fault`  |  A count of both HTTP 5XX server-side faults and OpenTelemetry span status errors.  Units: None | 
|  `Error`  |  A count of HTTP 4XX client-side errors. These are considered to be request errors that are not caused by service problems. Therefore, the `Availability` metric displayed on Application Signals dashboards does not regard these errors as service faults.  Units: None | 

The `Availability` metric displayed on Application Signals dashboards is computed as **(1 - `Fault`/Total) \* 100**. Total responses include all responses and are derived from `SampleCount(Latency)`. Successful responses are all responses that did not return a `5XX` fault. `4XX` responses are treated as successful when Application Signals calculates `Availability`. 
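
The dashboard calculation can be expressed directly in code. This is a minimal sketch of the documented formula, with hypothetical counts.

```python
def availability_percent(fault_count, total_responses):
    """Availability as shown on Application Signals dashboards:
    (1 - Fault/Total) * 100, where Total is SampleCount(Latency).
    4XX errors are not faults, so they do not lower availability."""
    return (1 - fault_count / total_responses) * 100

# Hypothetical: 3 faults (5XX) out of 1,000 total responses.
print(round(availability_percent(3, 1000), 2))  # 99.7
```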

### Dimensions collected and dimension combinations
<a name="AppSignals-StandardMetrics-Dimensions"></a>

The following dimensions are defined for each of the standard application metrics. For more information about dimensions, see [Dimensions](cloudwatch_concepts.md#Dimension).

Different dimensions are collected for *service metrics* and *dependency metrics*. Within the services discovered by Application Signals, when microservice A calls microservice B, microservice B is serving the request. In this case, microservice A emits dependency metrics and microservice B emits service metrics. When a client calls microservice A, microservice A is serving the request and emits service metrics.

**Dimensions for service metrics**

The following dimensions are collected for service metrics.


| Dimension | Description | 
| --- | --- | 
|  `Service`  |  The name of the service. The maximum value is 255 characters.  | 
|  `Operation`  |  The name of the API operation or other activity. The maximum value is 1024 characters. You can set service level objectives on operations only if the operation name is 194 characters or fewer.  | 
| `Environment` | The name of the environment where services are running. If services are not running on Amazon EKS, you can specify an optional custom value for `deployment.environment` in the `OTEL_RESOURCE_ATTRIBUTES` environment variable. The maximum value is 259 characters. | 

When you view these metrics in the CloudWatch console, you can view them using the following dimension combinations:
+ `[Environment, Service, Operation, [Latency, Error, Fault]]`
+ `[Environment, Service, [Latency, Error, Fault]]`
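
Because these metrics are published to the `ApplicationSignals` namespace, you can also retrieve them programmatically. The following sketch builds a CloudWatch `GetMetricData` query for the first dimension combination; the environment, service, and operation values are hypothetical examples.

```python
# Build a GetMetricData query for p99 Latency of one service operation.
# Pass it to boto3: cloudwatch.get_metric_data(MetricDataQueries=[query],
# StartTime=..., EndTime=...). The dimension values below are hypothetical.
query = {
    "Id": "latency_p99",
    "MetricStat": {
        "Metric": {
            "Namespace": "ApplicationSignals",
            "MetricName": "Latency",
            "Dimensions": [
                {"Name": "Environment", "Value": "eks:demo/default"},
                {"Name": "Service", "Value": "visits-service"},
                {"Name": "Operation", "Value": "POST /visits"},
            ],
        },
        "Period": 60,
        "Stat": "p99",
    },
}
print(query["MetricStat"]["Metric"]["Namespace"])  # ApplicationSignals
```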

**Dimensions for dependency metrics**

The following dimensions are collected for dependency metrics:


| Dimension | Description | 
| --- | --- | 
|  `Service`  |  The name of the service. The maximum value is 255 characters.  | 
|  `Operation`  |  The name of the API operation or other operation. The maximum value is 1024 characters.  | 
|  `RemoteService`  |  The name of the remote service being invoked. The maximum value is 255 characters.  | 
|  `RemoteOperation`  |  The name of the API operation being invoked. The maximum value is 1024 characters.  | 
|  `Environment`  | The name of the environment where services are running. If services are not running on Amazon EKS, you can specify an optional custom value for `deployment.environment` in the `OTEL_RESOURCE_ATTRIBUTES` parameter. The maximum value is 259 characters. | 
|  `RemoteEnvironment`  |  The name of the environment where dependency services are running. The `RemoteEnvironment` dimension is automatically generated when a service calls a dependency and they are both running in the same cluster. Otherwise, `RemoteEnvironment` is neither generated nor reported in the service dependency's metrics. Currently available only on Amazon EKS and native Kubernetes platforms. The maximum value is 259 characters.  | 
|  `RemoteResourceIdentifier`  |  The name of the resource invoked by a remote call. The `RemoteResourceIdentifier` dimension is automatically generated if the service calls a remote AWS service. Otherwise, `RemoteResourceIdentifier` is neither generated nor reported in the service dependency's metrics. The maximum value is 1024 characters.  | 
|  `RemoteResourceType`  |  The type of the resource that is invoked by a remote call. Required only if `RemoteResourceIdentifier` is defined. The maximum value is 1024 characters.  | 

When you view these metrics in the CloudWatch console, you can view them using the following dimension combinations:

**Running on Amazon EKS clusters**
+ `[Environment, Service, Operation, RemoteService, RemoteOperation, RemoteEnvironment, RemoteResourceIdentifier, RemoteResourceType, [Latency, Error, Fault]]`
+ `[Environment, Service, Operation, RemoteService, RemoteOperation, RemoteEnvironment, [Latency, Error, Fault]]`
+ `[Environment, Service, Operation, RemoteService, RemoteOperation, RemoteResourceIdentifier, RemoteResourceType, [Latency, Error, Fault]]`
+ `[Environment, Service, Operation, RemoteService, RemoteOperation, [Latency, Error, Fault]]`
+ `[Environment, Service, RemoteService, RemoteEnvironment, [Latency, Error, Fault]]`
+ `[Environment, Service, RemoteService, [Latency, Error, Fault]]`
+ `[Environment, Service, RemoteService, RemoteOperation, RemoteEnvironment, RemoteResourceIdentifier, RemoteResourceType, [Latency, Error, Fault]]`
+ `[Environment, Service, RemoteService, RemoteOperation, RemoteEnvironment, [Latency, Error, Fault]]`
+ `[Environment, Service, RemoteService, RemoteOperation, RemoteResourceIdentifier, RemoteResourceType, [Latency, Error, Fault]]`
+ `[Environment, Service, RemoteService, RemoteOperation, [Latency, Error, Fault]]`
+ `[RemoteService, [Latency, Error, Fault]]` 
+ `[RemoteService, RemoteResourceIdentifier, RemoteResourceType, [Latency, Error, Fault]]`

## Runtime metrics
<a name="AppSignals-RuntimeMetrics"></a>

Application Signals uses the AWS Distro for OpenTelemetry SDK to automatically collect OpenTelemetry-compatible metrics from your Java, Python, and .NET applications. To have runtime metrics collected, you must meet the following prerequisites:
+ Your CloudWatch agent must be version `1.300049.1` or later.
+ If you use the Amazon CloudWatch Observability EKS add-on, it must be version `2.3.0-eksbuild.1` or later. If you update the add-on, you must restart your applications.
+ For Java applications, you must be running version `1.32.5` or later of the AWS Distro for OpenTelemetry SDK for Java.
+ For Python applications, you must be running version `0.7.0` or later of the AWS Distro for OpenTelemetry SDK for Python.
+ For .NET applications, you must be running version `1.6.0` or later of the AWS Distro for OpenTelemetry SDK for .NET.

Runtime metrics are not collected for Node.js applications.

Runtime metrics are charged as part of Application Signals costs. For more information about CloudWatch pricing, see [Amazon CloudWatch Pricing](http://aws.amazon.com/cloudwatch/pricing).

**Note**  
**Known issues**  
The runtime metrics collection in the Java SDK release v1.32.5 is known to not work with applications using JBoss Wildfly. This issue extends to the Amazon CloudWatch Observability EKS add-on, affecting versions `2.3.0-eksbuild.1` through `2.6.0-eksbuild.1`. The issue is fixed in Java SDK release `v1.32.6` and the Amazon CloudWatch Observability EKS add-on version `v3.0.0-eksbuild.1`.  
If you are impacted, either upgrade the Java SDK version or disable your runtime metrics collection by adding the environment variable `OTEL_AWS_APPLICATION_SIGNALS_RUNTIME_ENABLED=false` to your application. 

### Java runtime metrics
<a name="AppSignals-RuntimeMetrics-JVM"></a>

Application Signals collects the following JVM metrics from Java applications that you enable for Application Signals. All runtime metrics are sent to CloudWatch in the `ApplicationSignals` namespace, and are collected with the `Service` and `Environment` dimension set.


| Metric name | Description | Meaningful statistics | 
| --- | --- | --- | 
|  `JVMGCDuration` |  Aggregated metric for the duration of JVM garbage collection actions. Unit: Milliseconds  | Sum, Average, Minimum, Maximum | 
|  `JVMGCOldGenDuration` |  Aggregated metric for the duration of JVM garbage collection actions of the old generation. Available only in G1. Unit: Milliseconds  | Sum, Average, Minimum, Maximum | 
|  `JVMGCYoungGenDuration` |  Aggregated metric for the duration of JVM garbage collection actions of the young generation. Available only in G1. Unit: Milliseconds  | Sum, Average, Minimum, Maximum | 
|  `JVMGCCount` |  Aggregated metric for the number of JVM garbage collection actions. Unit: None  | Sum, Average, Minimum, Maximum | 
|  `JVMGCOldGenCount` |  Aggregated metric for the number of JVM garbage collection actions of the old generation. Available only in G1. Unit: None  | Sum, Average, Minimum, Maximum | 
|  `JVMGCYoungGenCount` |  Aggregated metric for the number of JVM garbage collection actions of the young generation. Available only in G1. Unit: None  | Sum, Average, Minimum, Maximum | 
|  `JVMMemoryHeapUsed` |  The amount of memory heap used. Unit: Bytes  | Average, Minimum, Maximum | 
|  `JVMMemoryUsedAfterLastGC` |  Amount of memory used, as measured after the most recent garbage collection event on this pool. Unit: Bytes  | Average, Minimum, Maximum | 
|  `JVMMemoryOldGenUsed` |  The amount of memory used by the old generation. Unit: Bytes  | Average, Minimum, Maximum | 
|  `JVMMemorySurvivorSpaceUsed` |  The amount of memory heap used by the survivor space. Unit: Bytes  | Average, Minimum, Maximum | 
|  `JVMMemoryEdenSpaceUsed` |  The amount of memory used by the eden space. Unit: Bytes  | Average, Minimum, Maximum | 
|  `JVMMemoryNonHeapUsed` |  The amount of non-heap memory used. Unit: Bytes  | Average, Minimum, Maximum | 
|  `JVMThreadCount` |  The number of executing threads, including both daemon and non-daemon threads. Unit: None  | Sum, Average, Minimum, Maximum | 
|  `JVMClassLoaded` |  The number of classes loaded. Unit: None  | Sum, Average, Minimum, Maximum | 
|  `JVMCpuTime` |  The CPU time used by the process, as reported by the JVM. Unit: None (Nanoseconds)  | Sum, Average, Minimum, Maximum | 
|  `JVMCpuRecentUtilization` |  The recent CPU utilized by the process, as reported by the JVM. Unit: None  | Average, Minimum, Maximum | 
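As an illustration, you could alarm on sustained heap usage with the AWS CLI. This is only a sketch; the alarm name, dimension values, and threshold below are hypothetical and should be adapted to your service:

```
aws cloudwatch put-metric-alarm \
  --alarm-name my-service-jvm-heap-high \
  --namespace ApplicationSignals \
  --metric-name JVMMemoryHeapUsed \
  --dimensions Name=Service,Value=my-service Name=Environment,Value=eks:my-cluster/default \
  --statistic Average \
  --period 60 \
  --evaluation-periods 5 \
  --threshold 1073741824 \
  --comparison-operator GreaterThanThreshold
```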

### Python runtime metrics
<a name="AppSignals-RuntimeMetrics-Python"></a>

Application Signals collects the following metrics from Python applications that you enable for Application Signals. All runtime metrics are sent to CloudWatch in the `ApplicationSignals` namespace, and are collected with the `Service` and `Environment` dimension set.


| Metric name | Description | Meaningful statistics | 
| --- | --- | --- | 
|  `PythonProcessGCCount` |  The total number of objects currently being tracked. Unit: None  | Sum, Average, Minimum, Maximum | 
|  `PythonProcessGCGen0Count` |  The number of objects currently being tracked in Generation 0. Unit: None  | Sum, Average, Minimum, Maximum | 
|  `PythonProcessGCGen1Count` |  The number of objects currently being tracked in Generation 1. Unit: None  | Sum, Average, Minimum, Maximum | 
|  `PythonProcessGCGen2Count` |  The number of objects currently being tracked in Generation 2. Unit: None  | Sum, Average, Minimum, Maximum | 
|  `PythonProcessVMSMemoryUsed` |  The total amount of virtual memory used by the process. Unit: Bytes  | Average, Minimum, Maximum | 
|  `PythonProcessRSSMemoryUsed` |  The total amount of non-swapped physical memory used by the process. Unit: Bytes  | Average, Minimum, Maximum | 
|  `PythonProcessThreadCount` |  The number of threads currently used by the process. Unit: None  | Sum, Average, Minimum, Maximum | 
|  `PythonProcessCpuTime` |  The CPU time used by the process. Unit: Seconds  | Sum, Average, Minimum, Maximum | 
|  `PythonProcessCpuUtilization` |  The CPU utilization of the process. Unit: None  | Average, Minimum, Maximum | 
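For intuition, several of these metrics resemble values you can inspect locally with Python's standard library. The sketch below is an approximation for illustration only; it is an assumption, not a statement of how the SDK actually sources its values:

```python
import gc
import threading

# Objects currently tracked by the garbage collector, per generation --
# roughly what PythonProcessGCGen0Count/Gen1Count/Gen2Count report
gen0, gen1, gen2 = gc.get_count()

# Total tracked objects, analogous to PythonProcessGCCount
total_tracked = gen0 + gen1 + gen2

# Threads currently used by the process, analogous to PythonProcessThreadCount
thread_count = threading.active_count()

print(total_tracked, thread_count)
```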

### .NET runtime metrics
<a name="AppSignals-RuntimeMetrics-DotNet"></a>

Application Signals collects the following metrics from .NET applications that you enable for Application Signals. All runtime metrics are sent to CloudWatch in the `ApplicationSignals` namespace, and are collected with the `Service` and `Environment` dimension set.


| Metric name | Description | Meaningful statistics | 
| --- | --- | --- | 
|  `DotNetGCGen0Count` |  The total number of garbage collections in Generation 0 since the process started. Unit: None  | Sum, Average, Minimum, Maximum | 
|  `DotNetGCGen1Count` |  The total number of garbage collections in Generation 1 since the process started. Unit: None  | Sum, Average, Minimum, Maximum | 
|  `DotNetGCGen2Count` |  The total number of garbage collections in Generation 2 since the process started. Unit: None  | Sum, Average, Minimum, Maximum | 
|  `DotNetGCDuration` |  The total amount of time paused in garbage collection since the process started. Unit: None  | Sum, Average, Minimum, Maximum | 
|  `DotNetGCGen0HeapSize` |  The heap size (including fragmentation) of Generation 0 observed during the latest garbage collection. This metric is only available after the first Garbage Collection is complete. Unit: Bytes  | Average, Minimum, Maximum | 
|  `DotNetGCGen1HeapSize` |  The heap size (including fragmentation) of Generation 1 observed during the latest garbage collection. This metric is only available after the first Garbage Collection is complete. Unit: Bytes  | Average, Minimum, Maximum | 
|  `DotNetGCGen2HeapSize` |  The heap size (including fragmentation) of Generation 2 observed during the latest garbage collection. This metric is only available after the first Garbage Collection is complete. Unit: Bytes  | Average, Minimum, Maximum | 
|  `DotNetGCLOHHeapSize` |  The large object heap size (including fragmentation) observed during the latest garbage collection. This metric is only available after the first Garbage Collection is complete. Unit: Bytes  | Average, Minimum, Maximum | 
|  `DotNetGCPOHHeapSize` |  The pinned object heap size (including fragmentation) observed during the latest garbage collection. This metric is only available after the first Garbage Collection is complete. Unit: Bytes  | Average, Minimum, Maximum | 
|  `DotNetThreadCount` |  The number of thread pool threads that currently exist. Unit: None  | Average, Minimum, Maximum | 
|  `DotNetThreadQueueLength` |  The number of work items that are currently queued to be processed by the thread pool. Unit: None  | Average, Minimum, Maximum | 

## Disabling the collection of runtime metrics
<a name="AppSignals-RuntimeMetrics-Disable"></a>

Runtime metrics are collected by default for Java, Python, and .NET applications that are enabled for Application Signals. If you want to disable the collection of these metrics, follow the instructions in this section for your environment.

### Amazon EKS
<a name="AppSignals-RuntimeMetrics-Disable-EKS"></a>

To disable runtime metrics in Amazon EKS applications at the application level, add the following environment variable to your workload specification.

```
env:
    - name: OTEL_AWS_APPLICATION_SIGNALS_RUNTIME_ENABLED 
      value: "false"
```

To disable runtime metrics in Amazon EKS applications at the cluster level, apply the configuration to the advanced configuration of your Amazon CloudWatch Observability EKS add-on.

```
{
  "agent": {
    "config": {
      "traces": {
        "traces_collected": {
          "application_signals": {
            
          }
        }
      },
      "logs": {
        "metrics_collected": {
          "application_signals": {
            
          }
        }
      }
    },
    "manager": {
      "autoInstrumentationConfiguration": {
        "java": {
          "runtime_metrics": {
            "enabled": false
          }
        },
        "python": {
          "runtime_metrics": {
            "enabled": false
          }
        },
        "dotnet": {
          "runtime_metrics": {
            "enabled": false
          }
        }
      }
    }
  }
}
```

### Amazon ECS
<a name="AppSignals-RuntimeMetrics-Disable-ECS"></a>

To disable runtime metrics in Amazon ECS applications, add the environment variable `OTEL_AWS_APPLICATION_SIGNALS_RUNTIME_ENABLED=false` to a new task definition revision and redeploy the application.
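For example, the environment variable goes in the `environment` list of the container definition. The container name below is hypothetical, and this is only a fragment of a task definition:

```
"containerDefinitions": [
  {
    "name": "my-app",
    "environment": [
      {
        "name": "OTEL_AWS_APPLICATION_SIGNALS_RUNTIME_ENABLED",
        "value": "false"
      }
    ]
  }
]
```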

### EC2
<a name="AppSignals-RuntimeMetrics-Disable-EC2"></a>

To disable runtime metrics in Amazon EC2 applications, add the environment variable `OTEL_AWS_APPLICATION_SIGNALS_RUNTIME_ENABLED=false` before the application starts.
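For example, set the variable in the shell or service unit that launches the process. The launch command below is a hypothetical placeholder:

```shell
# Disable runtime metrics for the application this shell will start
export OTEL_AWS_APPLICATION_SIGNALS_RUNTIME_ENABLED=false

# Then start the instrumented application, for example:
# java -jar my-app.jar
```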

### Kubernetes
<a name="AppSignals-RuntimeMetrics-Disable-Kubernetes"></a>

To disable runtime metrics in Kubernetes applications at the application level, add the following environment variable to your workload specification.

```
env:
    - name: OTEL_AWS_APPLICATION_SIGNALS_RUNTIME_ENABLED 
      value: "false"
```

To disable runtime metrics in Kubernetes applications at the cluster level, use the following:

```
helm upgrade ... \
--set-string manager.autoInstrumentationConfiguration.java.runtime_metrics.enabled=false \
--set-string manager.autoInstrumentationConfiguration.python.runtime_metrics.enabled=false \
--set-string manager.autoInstrumentationConfiguration.dotnet.runtime_metrics.enabled=false
```

# Custom metrics with Application Signals
<a name="AppSignals-CustomMetrics"></a>

To monitor application performance and availability, Application Signals collects standard metrics (faults, errors, and latency) and runtime metrics from discovered applications after you enable it.

Custom metrics add valuable context to your application monitoring and help expedite troubleshooting. You can use them to:
+ Customize analysis of telemetry data
+ Identify root causes of issues
+ Make precise business and operational decisions quickly

Application Signals lets you view and correlate custom metrics generated from a service with standard and runtime metrics. For example, an application could emit metrics for request size and cache miss count. These custom metrics provide more granular insight into performance issues, helping you diagnose and resolve availability drops and latency spikes faster.

**Topics**
+ [Configuring custom metrics to Application Signals](#AppSignals-CustomMetrics-Adding)
+ [Viewing custom metrics in Application Signals](#AppSignals-CustomMetrics-Viewing)
+ [Frequently asked questions (FAQs)](#AppSignals-CustomMetrics-FAQ)

## Configuring custom metrics to Application Signals
<a name="AppSignals-CustomMetrics-Adding"></a>

You can generate custom metrics from your application using two methods: *OpenTelemetry metrics* and *Span metrics*.

**Topics**
+ [OpenTelemetry metrics](#AppSignals-CustomMetrics-OpenTelemetry)
+ [Span metrics](#AppSignals-CustomMetrics-SpanMetrics)

### OpenTelemetry metrics
<a name="AppSignals-CustomMetrics-OpenTelemetry"></a>

To use custom OpenTelemetry metrics with Application Signals, you must use either the CloudWatch agent or the OpenTelemetry Collector. Custom OpenTelemetry metrics let you create and export metrics directly from your application code using the OpenTelemetry Metrics SDK.

1. Onboard your service to Application Signals.

1. Configure the agent or collector.
   + When using the CloudWatch agent, you must [configure](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-OpenTelemetry-metrics.html) `metrics_collected` with an `otlp` section. For example, in `cloudwatch-config.json`:

     ```
     {
       "traces": {
         "traces_collected": {
           "application_signals": {}
         }
       },
       "logs": {
         "metrics_collected": {
           "application_signals": {},
           "otlp": {
             "grpc_endpoint": "0.0.0.0:4317",
             "http_endpoint": "0.0.0.0:4318"
           }
         }
       }
     }
     ```
   + When using OpenTelemetry Collector, configure a metrics pipeline. You must use [CloudWatch EMF Exporter for OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/awsemfexporter) and enable [Resource Attributes to Metric Labels](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/awsemfexporter#resource-attributes-to-metric-labels). It's recommended to configure `dimension_rollup_option: NoDimensionRollup` to avoid emitting many metric aggregations. For example, `config.yaml`:

     ```
     receivers:
       otlp:
         protocols:
           grpc:
             endpoint: 0.0.0.0:4317
           http:
             endpoint: 0.0.0.0:4318
     
     exporters:
       awsemf:
         region: $REGION
         namespace: $NAMESPACE
         log_group_name: $LOG_GROUP_NAME
         resource_to_telemetry_conversion:
           enabled: true
         dimension_rollup_option: "NoDimensionRollup"
         
       otlphttp/traces:
         compression: gzip
         traces_endpoint: https://xray.$REGION.amazonaws.com/v1/traces
         auth:
           authenticator: sigv4auth/traces
     
     extensions:
       sigv4auth/logs:
         region: "$REGION"
         service: "logs"
       sigv4auth/traces:
         region: "$REGION"
         service: "xray"
     
     processors:
       batch:
     
     service:
       extensions: [sigv4auth/logs, sigv4auth/traces]
       pipelines:
         metrics:
           receivers: [otlp]
           processors: [batch]
           exporters: [awsemf]
         traces:
           receivers: [otlp]
           processors: [batch]
           exporters: [otlphttp/traces]
     ```

1. Configure the environment. When multiple services share the same service name, it's recommended to configure the resource attribute `deployment.environment.name` so that Application Signals can correlate metrics to the correct service. Configuring this resource attribute is commonly done through environment variables.

   ```
   OTEL_RESOURCE_ATTRIBUTES="service.name=$YOUR_SVC_NAME,deployment.environment.name=$YOUR_ENV_NAME"
   ```
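   For Kubernetes workloads, the same attributes can instead be set in the workload specification; the service and environment names here are examples:

   ```
   env:
       - name: OTEL_RESOURCE_ATTRIBUTES
         value: "service.name=my-service,deployment.environment.name=production"
   ```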

1. Configure metric export to the CloudWatch agent or OpenTelemetry Collector. You can use one of the following approaches:
   + (Recommended) Custom export pipeline – In the application code, create a dedicated [MeterProvider](https://opentelemetry.io/docs/specs/otel/metrics/sdk/#meterprovider) exporting to the configured agent or collector endpoint. For example:

     ```
     Resource resource = Resource.getDefault().toBuilder()
             .put(AttributeKey.stringKey("service.name"), serviceName)
             .put(AttributeKey.stringKey("deployment.environment.name"), environment)
             .build();
     
     MetricExporter metricExporter = OtlpHttpMetricExporter.builder()
             .setEndpoint("http://localhost:4318/v1/metrics")
             .build();
     
     MetricReader metricReader = PeriodicMetricReader.builder(metricExporter)
             .setInterval(Duration.ofSeconds(10))
             .build();
     
     SdkMeterProvider meterProvider = SdkMeterProvider.builder()
         .setResource(resource)
         .registerMetricReader(metricReader)
         .build();
         
     Meter meter = meterProvider.get("myMeter");
     ```
   + Agent-based export – Configure the agent environment variables [OTEL\_METRICS\_EXPORTER](https://opentelemetry.io/docs/specs/otel/configuration/sdk-environment-variables/#exporter-selection) and [OTEL\_EXPORTER\_OTLP\_METRICS\_ENDPOINT](https://opentelemetry.io/docs/languages/sdk-configuration/otlp-exporter/#otel_exporter_otlp_metrics_endpoint). For example:

     ```
     OTEL_METRICS_EXPORTER=otlp
     OTEL_EXPORTER_OTLP_METRICS_ENDPOINT=http://localhost:4318/v1/metrics
     ```

      In the application code, rely on the global MeterProvider created by the agent. For example:

     ```
     Meter meter = GlobalOpenTelemetry.getMeter("myMeter");
     ```

1. Using the [OTEL Metrics SDK](https://opentelemetry.io/docs/specs/otel/metrics/sdk/) in the application code, add the OTEL metrics. For example, to add OTEL metrics in Java:

   ```
   counter = meter.counterBuilder("myCounter").build();
   counter.add(value);
   counter.add(value, Attributes.of(AttributeKey.stringKey("Operation"), "myOperation"));
   ```

   Adding the `Operation` attribute is not required, but can be useful for correlating Application Signals service operations to custom OpenTelemetry metrics.

### Span metrics
<a name="AppSignals-CustomMetrics-SpanMetrics"></a>

Custom Span metrics currently work only with Transaction Search. With custom Span metrics, you can:
+ Create metrics using Metrics Filters
+ Process span attributes added in application code
+ Use the OpenTelemetry Traces SDK for implementation

1. Enable Application Signals monitoring with Transaction Search. For more information, see [Transaction Search](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Transaction-Search-getting-started.html).

   To ensure 100% metric sampling, it's recommended to send 100% of spans to the endpoint.

1. Add span attributes using the [OTEL Traces SDK](https://opentelemetry.io/docs/specs/otel/trace/sdk/). There are two ways:
   + (Recommended) Add attributes to automatically generated spans. For example:

     ```
     Span.current().setAttribute("myattribute", value);
     ```
   + Add attributes to manually generated spans. For example:

     ```
     Span span = tracer.spanBuilder("myspan").startSpan();
     try (Scope scope = span.makeCurrent()) {
        span.setAttribute("myattribute", value);
     } finally {
        span.end();
     }
     ```

1. Create a metric filter with the following values. For information on how to create a metric filter, see [Create a metric filter for a log group](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CreateMetricFilterProcedure.html).
+ Log Group – `aws/spans`
+ Filter pattern – `{ $.attributes.['myattribute'] = * }`
+ Metric name – `myattribute` (The value must be an exact match, or span correlation will not work.)
+ Metric value – `$.attributes.['myattribute']`
+ Dimensions – Field Name: `Service`, Field Value: `$.attributes.['aws.local.service']`; Field Name: `Environment`, Field Value: `$.attributes.['aws.local.environment']`; Field Name: `Operation`, Field Value: `$.attributes.['aws.local.operation']`
**Note**  
When you add attributes to manually generated spans, you cannot set `Operation` because `aws.local.operation` will not be present in span data.
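The filter can also be created programmatically. Below is a hedged sketch of input for `aws logs put-metric-filter --cli-input-json`; the filter name, metric namespace, and attribute name are illustrative:

```
{
  "logGroupName": "aws/spans",
  "filterName": "myattribute-filter",
  "filterPattern": "{ $.attributes.['myattribute'] = * }",
  "metricTransformations": [
    {
      "metricName": "myattribute",
      "metricNamespace": "ApplicationSignals",
      "metricValue": "$.attributes.['myattribute']",
      "dimensions": {
        "Service": "$.attributes.['aws.local.service']",
        "Environment": "$.attributes.['aws.local.environment']",
        "Operation": "$.attributes.['aws.local.operation']"
      }
    }
  ]
}
```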

## Viewing custom metrics in Application Signals
<a name="AppSignals-CustomMetrics-Viewing"></a>

You can now view custom metrics for services and operations in the Application Signals console:
+ Select a service from the **Services** list to see the new **Related Metrics** tab
+ View standard metrics, runtime metrics, and related metrics for your selected service
+ Filter and select multiple metrics from the list
+ Graph selected metrics to identify correlations and root causes of issues

For more information on Related Metrics, see [View Related metrics](ServiceDetail.md#ServiceDetail-relatedmetrics).

## Frequently asked questions (FAQs)
<a name="AppSignals-CustomMetrics-FAQ"></a>

### What is the impact of not adding the configuration for environment for custom metrics?
<a name="AppSignals-CustomMetrics-FAQ-Environment"></a>

Application Signals uses the `deployment.environment.name` resource attribute to disambiguate applications. Without this disambiguation, Application Signals cannot correlate custom metrics generated from two different services that share the same name to the correct service.

To add environment configuration to your application, see [OpenTelemetry metrics](#AppSignals-CustomMetrics-OpenTelemetry).

### Are there any limits for metrics filters?
<a name="AppSignals-CustomMetrics-FAQ-Limits"></a>

You can create up to 100 metric filters per CloudWatch Logs log group, and each metric defined by a filter can have up to 3 dimensions. For the current quotas, see [CloudWatch Logs quotas](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/cloudwatch_limits_cwl.html).

### Why aren't metric graphs appearing in the metrics table?
<a name="AppSignals-CustomMetrics-FAQ-Graph"></a>

The solution depends on your metric type:
+ Custom metrics – See [Configuring custom metrics to Application Signals](#AppSignals-CustomMetrics-Adding) to verify the metric configuration
+ Standard or runtime metrics – See [Troubleshooting your Application Signals installation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Signals-Enable-Troubleshoot.html)