

# Enable your applications on Amazon ECS
<a name="CloudWatch-Application-Signals-Enable-ECSMain"></a>

Enable CloudWatch Application Signals on Amazon ECS by using the custom setup steps described in this section.

For applications running on Amazon ECS, you install and configure the CloudWatch agent and AWS Distro for OpenTelemetry yourself. With this custom setup, Application Signals doesn't autodiscover the names of your services or the hosts or clusters they run on. You must specify these names during the custom setup, and the names that you specify are what is displayed on Application Signals dashboards.

## Use a custom setup to enable Application Signals on Amazon ECS
<a name="CloudWatch-Application-Signals-Enable-ECS"></a>

Use these custom setup instructions to onboard your applications on Amazon ECS to CloudWatch Application Signals. You install and configure the CloudWatch agent and AWS Distro for OpenTelemetry yourself.

There are two methods for deploying Application Signals on Amazon ECS. Choose the one that is best for your environment.
+ [Deploy using the sidecar strategy](CloudWatch-Application-Signals-ECS-Sidecar.md) – You add a CloudWatch agent sidecar container to each task definition in the cluster.

  Advantages:
  + Supports both the EC2 and Fargate launch types.
  + You can always use `localhost` as the IP address when you set up environment variables.

  Disadvantages:
  + You must set up the CloudWatch agent sidecar container for each service task that runs in the cluster.
  + Only the `awsvpc` network mode is supported.
+ [Deploy using the daemon strategy](CloudWatch-Application-Signals-ECS-Daemon.md) – You add a CloudWatch agent task only once in the cluster, and the [Amazon ECS daemon scheduling strategy](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html#service_scheduler_daemon) deploys it as needed. This ensures that each container instance continuously receives traces and metrics, providing centralized visibility without the need to run the agent as a sidecar with each application task definition.

  Advantages:
  + You need to set up the daemon service for the CloudWatch agent only once in the cluster.

  Disadvantages:
  + Doesn't support the Fargate launch type.
  + If you use the `awsvpc` or `bridge` network mode, you have to manually specify each container instance's private IP address in the environment variables.

With either method, on Amazon ECS clusters Application Signals doesn't autodiscover the names of your services. You must specify your service names during the custom setup, and the names that you specify are what is displayed on Application Signals dashboards.

# Deploy using the sidecar strategy
<a name="CloudWatch-Application-Signals-ECS-Sidecar"></a>

## Step 1: Enable Application Signals in your account
<a name="CloudWatch-Application-Signals-ECS-Grant"></a>

You must first enable Application Signals in your account. If you haven't, see [Enable Application Signals in your account](CloudWatch-Application-Signals-Enable.md).

## Step 2: Create IAM roles
<a name="CloudWatch-Application-Signals-Enable-ECS-IAM"></a>

You must create an IAM role. If you have already created this role, you might need to add permissions to it.
+ **ECS task role** – Containers in the task assume this role to run. Grant the permissions that your applications need, plus **CloudWatchAgentServerPolicy**.

For more information about creating IAM roles, see [Creating IAM Roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create.html).
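If you prefer the AWS CLI, the role can be sketched as follows. The role name `ecs-cwagent-task-role` and the `/tmp` path are hypothetical examples; the trust principal `ecs-tasks.amazonaws.com` is the service principal that ECS tasks use to assume a task role.

```shell
# Write the trust policy that lets ECS tasks assume the role.
cat > /tmp/ecs-tasks-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ecs-tasks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Confirm the policy document is valid JSON before sending it to IAM.
python3 -m json.tool /tmp/ecs-tasks-trust-policy.json > /dev/null && echo "trust policy OK"

# Then create the role and attach the managed policy (requires AWS credentials):
#   aws iam create-role --role-name ecs-cwagent-task-role \
#       --assume-role-policy-document file:///tmp/ecs-tasks-trust-policy.json
#   aws iam attach-role-policy --role-name ecs-cwagent-task-role \
#       --policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy
```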

## Step 3: Prepare CloudWatch agent configuration
<a name="CloudWatch-Application-Signals-Enable-ECS-PrepareAgent"></a>

First, prepare the agent configuration with Application Signals enabled. To do this, create a local file named `/tmp/ecs-cwagent.json` with the following content.

```
{
  "traces": {
    "traces_collected": {
      "application_signals": {}
    }
  },
  "logs": {
    "metrics_collected": {
      "application_signals": {}
    }
  }
}
```
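As a quick sanity check, you can write the file and validate it from a shell before uploading. This is a sketch; `python3` is used here only as a convenient local JSON validator.

```shell
# Create the agent configuration with Application Signals enabled.
cat > /tmp/ecs-cwagent.json <<'EOF'
{
  "traces": { "traces_collected": { "application_signals": {} } },
  "logs": { "metrics_collected": { "application_signals": {} } }
}
EOF

# Fail fast on malformed JSON before the SSM upload in the next step.
python3 -m json.tool /tmp/ecs-cwagent.json > /dev/null && echo "agent config OK"
```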

Then upload this configuration to the SSM Parameter Store. To do this, enter the following command. In the command, replace *\$REGION* with your actual Region name.

```
aws ssm put-parameter \
--name "ecs-cwagent" \
--type "String" \
--value "`cat /tmp/ecs-cwagent.json`" \
--region "$REGION"
```

## Step 4: Instrument your application with the CloudWatch agent
<a name="CloudWatch-Application-Signals-Enable-ECS-Instrument"></a>

The next step is to instrument your application for CloudWatch Application Signals.

------
#### [ Java ]

**To instrument your application on Amazon ECS with the CloudWatch agent**

1. First, specify a bind mount. This volume will be used to share the auto-instrumentation files across containers later in this procedure.

   ```
   "volumes": [
     {
       "name": "opentelemetry-auto-instrumentation"
     }
   ]
   ```

1. Add a CloudWatch agent sidecar definition. To do this, append a new container called `ecs-cwagent` to your application's task definition. Replace *\$REGION* with your actual Region name. Replace *\$IMAGE* with the path to the latest CloudWatch agent container image on Amazon Elastic Container Registry. For more information, see [cloudwatch-agent](https://gallery.ecr.aws/cloudwatch-agent/cloudwatch-agent) on Amazon ECR.

   If you want to enable the CloudWatch agent with a daemon strategy instead, see the instructions at [Deploy using the daemon strategy](CloudWatch-Application-Signals-ECS-Daemon.md).

   ```
   {
     "name": "ecs-cwagent",
     "image": "$IMAGE",
     "essential": true,
     "secrets": [
       {
         "name": "CW_CONFIG_CONTENT",
         "valueFrom": "ecs-cwagent"
       }
     ],
     "logConfiguration": {
       "logDriver": "awslogs",
       "options": {
         "awslogs-create-group": "true",
         "awslogs-group": "/ecs/ecs-cwagent",
         "awslogs-region": "$REGION",
         "awslogs-stream-prefix": "ecs"
       }
     }
   }
   ```

1. Append a new container `init` to your application's task definition. Replace *\$IMAGE* with the latest image from the [AWS Distro for OpenTelemetry Amazon ECR image repository](https://gallery.ecr.aws/aws-observability/adot-autoinstrumentation-java).

   ```
   {
     "name": "init",
     "image": "$IMAGE",
     "essential": false,
     "command": [
       "cp",
       "/javaagent.jar",
       "/otel-auto-instrumentation/javaagent.jar"
     ],
     "mountPoints": [
       {
         "sourceVolume": "opentelemetry-auto-instrumentation",
         "containerPath": "/otel-auto-instrumentation",
         "readOnly": false
       }
     ]
   }
   ```

1. Add a dependency on the `init` container to make sure that this container finishes before your application container starts.

   ```
   "dependsOn": [
     {
       "containerName": "init",
       "condition": "SUCCESS"
     }
   ]
   ```

1. Add the following environment variables to your application container. You must be using version 1.32.2 or later of the AWS Distro for OpenTelemetry [auto-instrumentation agent for Java](https://opentelemetry.io/docs/zero-code/java/agent/).    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Signals-ECS-Sidecar.html)

1. Mount the volume `opentelemetry-auto-instrumentation` that you defined in step 1 of this procedure. If you don't need to enable log correlation with metrics and traces, use the following example for a Java application. If you want to enable log correlation, see the next step instead.

   ```
   {
     "name": "my-app",
      ...
     "environment": [
       {
         "name": "OTEL_RESOURCE_ATTRIBUTES",
         "value": "service.name=$SVC_NAME"
       },
       {
         "name": "OTEL_LOGS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_METRICS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_PROTOCOL",
         "value": "http/protobuf"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED",
         "value": "true"
       },
       {
         "name": "JAVA_TOOL_OPTIONS",
         "value": " -javaagent:/otel-auto-instrumentation/javaagent.jar"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT",
         "value": "http://localhost:4316/v1/metrics"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT",
         "value": "http://localhost:4316/v1/traces"
       },
       {
         "name": "OTEL_TRACES_SAMPLER",
         "value": "xray"
       },
       {
         "name": "OTEL_PROPAGATORS",
         "value": "tracecontext,baggage,b3,xray"
       }
     ],
     "dependsOn": [
       {
         "containerName": "init",
         "condition": "SUCCESS"
       }
     ],
     "mountPoints": [
       {
         "sourceVolume": "opentelemetry-auto-instrumentation",
         "containerPath": "/otel-auto-instrumentation",
         "readOnly": false
       }
     ]
   }
   ```
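Before registering the finished task definition with `aws ecs register-task-definition --cli-input-json file://taskdef.json` (the file name is an example), it can help to sanity-check the wiring between the fragments above. The following sketch validates an inline stand-in skeleton with hypothetical names; in practice you would load your real task definition file instead.

```shell
# Check that the volume, init container, sidecar, and dependsOn fit together.
python3 - <<'EOF' | tee /tmp/taskdef-check.out
# Stand-in skeleton; replace with json.load(open("taskdef.json")) for a real check.
td = {
    "family": "my-app",
    "volumes": [{"name": "opentelemetry-auto-instrumentation"}],
    "containerDefinitions": [
        {"name": "init", "essential": False,
         "mountPoints": [{"sourceVolume": "opentelemetry-auto-instrumentation",
                          "containerPath": "/otel-auto-instrumentation"}]},
        {"name": "ecs-cwagent", "essential": True},
        {"name": "my-app",
         "dependsOn": [{"containerName": "init", "condition": "SUCCESS"}],
         "mountPoints": [{"sourceVolume": "opentelemetry-auto-instrumentation",
                          "containerPath": "/otel-auto-instrumentation"}]},
    ],
}
names = {c["name"] for c in td["containerDefinitions"]}
assert {"init", "ecs-cwagent"} <= names, "sidecar containers missing"
# Every mount must refer to a volume declared at the task level.
volumes = {v["name"] for v in td["volumes"]}
for c in td["containerDefinitions"]:
    for m in c.get("mountPoints", []):
        assert m["sourceVolume"] in volumes, "mount refers to undeclared volume"
# The application container must wait for the init container to finish.
app = next(c for c in td["containerDefinitions"] if c["name"] == "my-app")
assert any(d["containerName"] == "init" and d["condition"] == "SUCCESS"
           for d in app["dependsOn"]), "application must wait for init"
print("task definition wiring OK")
EOF
```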

------
#### [ Python ]

Before you enable Application Signals for your Python applications, be aware of the following considerations.
+ In some containerized applications, a missing `PYTHONPATH` environment variable can cause the application to fail to start. To resolve this, make sure that you set the `PYTHONPATH` environment variable to the location of your application's working directory. This is due to a known issue with OpenTelemetry auto-instrumentation. For more information about this issue, see [Python autoinstrumentation setting of PYTHONPATH is not compliant](https://github.com/open-telemetry/opentelemetry-operator/issues/2302).
+ For Django applications, there are additional required configurations, which are outlined in the [OpenTelemetry Python documentation](https://opentelemetry-python.readthedocs.io/en/latest/examples/django/README.html).
  + Use the `--noreload` flag to prevent automatic reloading.
  + Set the `DJANGO_SETTINGS_MODULE` environment variable to the location of your Django application’s `settings.py` file. This ensures that OpenTelemetry can correctly access and integrate with your Django settings. 
+ If you're using a WSGI server for your Python application, in addition to the following steps in this section, see [No Application Signals data for Python application that uses a WSGI server](CloudWatch-Application-Signals-Enable-Troubleshoot.md#Application-Signals-troubleshoot-Python-WSGI) for information to make Application Signals work.
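As an illustration of the first consideration, here is how the `PYTHONPATH` value used later in these steps is composed, assuming a hypothetical application working directory of `/app` for `$APP_PATH`. The auto-instrumentation entry point must come first and the distribution root last.

```shell
# Compose the PYTHONPATH value from the environment-variable step, assuming a
# hypothetical application working directory of /app for $APP_PATH.
APP_PATH=/app
OTEL_ROOT=/otel-auto-instrumentation-python
PYTHONPATH="$OTEL_ROOT/opentelemetry/instrumentation/auto_instrumentation:$APP_PATH:$OTEL_ROOT"
# Print one search-path entry per line.
echo "$PYTHONPATH" | tr ':' '\n'
```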

**To instrument your Python application on Amazon ECS with the CloudWatch agent**

1. First, specify a bind mount. This volume will be used to share the auto-instrumentation files across containers later in this procedure.

   ```
   "volumes": [
     {
       "name": "opentelemetry-auto-instrumentation-python"
     }
   ]
   ```

1. Add a CloudWatch agent sidecar definition. To do this, append a new container called `ecs-cwagent` to your application's task definition. Replace *\$REGION* with your actual Region name. Replace *\$IMAGE* with the path to the latest CloudWatch agent container image on Amazon Elastic Container Registry. For more information, see [cloudwatch-agent](https://gallery.ecr.aws/cloudwatch-agent/cloudwatch-agent) on Amazon ECR.

   If you want to enable the CloudWatch agent with a daemon strategy instead, see the instructions at [Deploy using the daemon strategy](CloudWatch-Application-Signals-ECS-Daemon.md).

   ```
   {
     "name": "ecs-cwagent",
     "image": "$IMAGE",
     "essential": true,
     "secrets": [
       {
         "name": "CW_CONFIG_CONTENT",
         "valueFrom": "ecs-cwagent"
       }
     ],
     "logConfiguration": {
       "logDriver": "awslogs",
       "options": {
         "awslogs-create-group": "true",
         "awslogs-group": "/ecs/ecs-cwagent",
         "awslogs-region": "$REGION",
         "awslogs-stream-prefix": "ecs"
       }
     }
   }
   ```

1. Append a new container `init` to your application's task definition. Replace *\$IMAGE* with the latest image from the [AWS Distro for OpenTelemetry Amazon ECR image repository](https://gallery.ecr.aws/aws-observability/adot-autoinstrumentation-python).

   ```
   {
       "name": "init",
       "image": "$IMAGE",
       "essential": false,
       "command": [
           "cp",
           "-a",
           "/autoinstrumentation/.",
           "/otel-auto-instrumentation-python"
       ],
       "mountPoints": [
           {
               "sourceVolume": "opentelemetry-auto-instrumentation-python",
               "containerPath": "/otel-auto-instrumentation-python",
               "readOnly": false
           }
       ]
   }
   ```

1. Add a dependency on the `init` container to make sure that this container finishes before your application container starts.

   ```
   "dependsOn": [
     {
       "containerName": "init",
       "condition": "SUCCESS"
     }
   ]
   ```

1. Add the following environment variables to your application container.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Signals-ECS-Sidecar.html)

1. Mount the volume `opentelemetry-auto-instrumentation-python` that you defined in step 1 of this procedure. If you don't need to enable log correlation with metrics and traces, use the following example for a Python application. If you want to enable log correlation, see the next step instead. 

   ```
   {
     "name": "my-app",
     ...
     "environment": [
       {
         "name": "PYTHONPATH",
         "value": "/otel-auto-instrumentation-python/opentelemetry/instrumentation/auto_instrumentation:$APP_PATH:/otel-auto-instrumentation-python"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_PROTOCOL",
         "value": "http/protobuf"
       },
       {
         "name": "OTEL_TRACES_SAMPLER",
         "value": "xray"
       },
       {
         "name": "OTEL_TRACES_SAMPLER_ARG",
         "value": "endpoint=http://localhost:2000"
       },
       {
         "name": "OTEL_LOGS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_PYTHON_DISTRO",
         "value": "aws_distro"
       },
       {
         "name": "OTEL_PYTHON_CONFIGURATOR",
         "value": "aws_configurator"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT",
         "value": "http://localhost:4316/v1/traces"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT",
         "value": "http://localhost:4316/v1/metrics"
       },
       {
         "name": "OTEL_METRICS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED",
         "value": "true"
       },
       {
         "name": "OTEL_RESOURCE_ATTRIBUTES",
         "value": "service.name=$SVC_NAME"
       },
       {
         "name": "DJANGO_SETTINGS_MODULE",
         "value": "$PATH_TO_SETTINGS.settings"
       }
     ],
     "mountPoints": [
       {
         "sourceVolume": "opentelemetry-auto-instrumentation-python",
         "containerPath": "/otel-auto-instrumentation-python",
         "readOnly": false
       }
     ]
   }
   ```

1. (Optional) To enable log correlation, do the following before you mount the volume. In `OTEL_RESOURCE_ATTRIBUTES`, set an additional attribute `aws.log.group.names` that names the log groups of your application. By doing so, the traces and metrics from your application can be correlated with the relevant log entries from these log groups. For this attribute, replace *\$YOUR\_APPLICATION\_LOG\_GROUP* with the log group names for your application. If you have multiple log groups, use an ampersand (`&`) to separate them, as in this example: `aws.log.group.names=log-group-1&log-group-2`. To enable metric to log correlation, setting this attribute is enough. For more information, see [Enable metric to log correlation](Application-Signals-MetricLogCorrelation.md). To enable trace to log correlation, you must also change the logging configuration in your application. For more information, see [Enable trace to log correlation](Application-Signals-TraceLogCorrelation.md).

   The following is an example. To enable log correlation, use this example when you mount the volume `opentelemetry-auto-instrumentation-python` that you defined in step 1 of this procedure.

   ```
   {
     "name": "my-app",
     ...
     "environment": [
       {
         "name": "PYTHONPATH",
         "value": "/otel-auto-instrumentation-python/opentelemetry/instrumentation/auto_instrumentation:$APP_PATH:/otel-auto-instrumentation-python"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_PROTOCOL",
         "value": "http/protobuf"
       },
       {
         "name": "OTEL_TRACES_SAMPLER",
         "value": "xray"
       },
       {
         "name": "OTEL_TRACES_SAMPLER_ARG",
         "value": "endpoint=http://localhost:2000"
       },
       {
         "name": "OTEL_LOGS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_PYTHON_DISTRO",
         "value": "aws_distro"
       },
       {
         "name": "OTEL_PYTHON_CONFIGURATOR",
         "value": "aws_configurator"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT",
         "value": "http://localhost:4316/v1/traces"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT",
         "value": "http://localhost:4316/v1/metrics"
       },
       {
         "name": "OTEL_METRICS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED",
         "value": "true"
       },
       {
         "name": "OTEL_RESOURCE_ATTRIBUTES",
         "value": "aws.log.group.names=$YOUR_APPLICATION_LOG_GROUP,service.name=$SVC_NAME"
       },
       {
         "name": "DJANGO_SETTINGS_MODULE",
         "value": "$PATH_TO_SETTINGS.settings"
       }
     ],
     "mountPoints": [
       {
         "sourceVolume": "opentelemetry-auto-instrumentation-python",
         "containerPath": "/otel-auto-instrumentation-python",
         "readOnly": false
       }
     ]
   }
   ```

------
#### [ .NET ]

**To instrument your application on Amazon ECS with the CloudWatch agent**

1. First, specify a bind mount. This volume will be used to share the auto-instrumentation files across containers later in this procedure.

   ```
   "volumes": [
     {
       "name": "opentelemetry-auto-instrumentation"
     }
   ]
   ```

1. Add a CloudWatch agent sidecar definition. To do this, append a new container called `ecs-cwagent` to your application's task definition. Replace *\$REGION* with your actual Region name. Replace *\$IMAGE* with the path to the latest CloudWatch agent container image on Amazon Elastic Container Registry. For more information, see [cloudwatch-agent](https://gallery.ecr.aws/cloudwatch-agent/cloudwatch-agent) on Amazon ECR.

   If you want to enable the CloudWatch agent with a daemon strategy instead, see the instructions at [Deploy using the daemon strategy](CloudWatch-Application-Signals-ECS-Daemon.md).

   ```
   {
     "name": "ecs-cwagent",
     "image": "$IMAGE",
     "essential": true,
     "secrets": [
       {
         "name": "CW_CONFIG_CONTENT",
         "valueFrom": "ecs-cwagent"
       }
     ],
     "logConfiguration": {
       "logDriver": "awslogs",
       "options": {
         "awslogs-create-group": "true",
         "awslogs-group": "/ecs/ecs-cwagent",
         "awslogs-region": "$REGION",
         "awslogs-stream-prefix": "ecs"
       }
     }
   }
   ```

1. Append a new container `init` to your application's task definition. Replace *\$IMAGE* with the latest image from the [AWS Distro for OpenTelemetry Amazon ECR image repository](https://gallery.ecr.aws/aws-observability/adot-autoinstrumentation-dotnet).

   For a Linux container instance, use the following.

   ```
   {
     "name": "init",
     "image": "$IMAGE",
     "essential": false,
     "command": [
         "cp",
         "-a",
         "autoinstrumentation/.",
         "/otel-auto-instrumentation"
     ],
     "mountPoints": [
         {
             "sourceVolume": "opentelemetry-auto-instrumentation",
             "containerPath": "/otel-auto-instrumentation",
             "readOnly": false
         }
     ]
   }
   ```

   For a Windows Server container instance, use the following.

   ```
   {
     "name": "init",
     "image": "$IMAGE",
     "essential": false,
     "command": [
         "CMD",
         "/c",
         "xcopy",
         "/e",
         "C:\\autoinstrumentation\\*",
         "C:\\otel-auto-instrumentation",
         "&&",
         "icacls",
         "C:\\otel-auto-instrumentation",
         "/grant",
         "*S-1-1-0:R",
         "/T"
     ],
     "mountPoints": [
         {
             "sourceVolume": "opentelemetry-auto-instrumentation",
             "containerPath": "C:\\otel-auto-instrumentation",
             "readOnly": false
         }
     ]
   }
   ```

1. Add a dependency on the `init` container to make sure that this container finishes before your application container starts.

   ```
   "dependsOn": [
       {
           "containerName": "init",
           "condition": "SUCCESS"
       }
   ]
   ```

1. Add the following environment variables to your application container. You must be using version 1.1.0 or later of the AWS Distro for OpenTelemetry [auto-instrumentation agent for .NET](https://opentelemetry.io/docs/zero-code/net/).    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Signals-ECS-Sidecar.html)

1. Mount the volume `opentelemetry-auto-instrumentation` that you defined in step 1 of this procedure. For Linux, use the following.

   ```
   {
       "name": "my-app",
      ...
       "environment": [
           {
               "name": "CORECLR_ENABLE_PROFILING",
               "value": "1"
           },
           {
               "name": "CORECLR_PROFILER",
               "value": "{918728DD-259F-4A6A-AC2B-B85E1B658318}"
           },
           {
               "name": "CORECLR_PROFILER_PATH",
               "value": "/otel-auto-instrumentation/linux-x64/OpenTelemetry.AutoInstrumentation.Native.so"
           },
           {
               "name": "DOTNET_ADDITIONAL_DEPS",
               "value": "/otel-auto-instrumentation/AdditionalDeps"
           },
           {
               "name": "DOTNET_SHARED_STORE",
               "value": "/otel-auto-instrumentation/store"
           },
           {
               "name": "DOTNET_STARTUP_HOOKS",
               "value": "/otel-auto-instrumentation/net/OpenTelemetry.AutoInstrumentation.StartupHook.dll"
           },
           {
               "name": "OTEL_DOTNET_AUTO_HOME",
               "value": "/otel-auto-instrumentation"
           },
           {
               "name": "OTEL_DOTNET_AUTO_PLUGINS",
               "value": "AWS.Distro.OpenTelemetry.AutoInstrumentation.Plugin, AWS.Distro.OpenTelemetry.AutoInstrumentation"
           },
           {
               "name": "OTEL_RESOURCE_ATTRIBUTES",
               "value": "aws.log.group.names=$YOUR_APPLICATION_LOG_GROUP,service.name=aws-dotnet-service-name"
           },
           {
               "name": "OTEL_LOGS_EXPORTER",
               "value": "none"
           },
           {
               "name": "OTEL_METRICS_EXPORTER",
               "value": "none"
           },
           {
               "name": "OTEL_EXPORTER_OTLP_PROTOCOL",
               "value": "http/protobuf"
           },
           {
               "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED",
               "value": "true"
           },
           {
               "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT",
               "value": "http://localhost:4316/v1/metrics"
           },
           {
               "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT",
               "value": "http://localhost:4316/v1/traces"
           },
           {
               "name": "OTEL_EXPORTER_OTLP_ENDPOINT",
               "value": "http://localhost:4316"
           },
           {
              "name": "OTEL_TRACES_SAMPLER",
              "value": "xray"
          },
          {
              "name": "OTEL_TRACES_SAMPLER_ARG",
              "value": "endpoint=http://localhost:2000"
          },
           {
               "name": "OTEL_PROPAGATORS",
               "value": "tracecontext,baggage,b3,xray"
           }
       ],
       "dependsOn": [
       {
         "containerName": "init",
         "condition": "SUCCESS"
       }
     ],
       "mountPoints": [
           {
               "sourceVolume": "opentelemetry-auto-instrumentation",
               "containerPath": "/otel-auto-instrumentation",
               "readOnly": false
           }
       ]
   }
   ```

   For Windows Server, use the following.

   ```
   {
       "name": "my-app",
      ...
       "environment": [
           {
               "name": "CORECLR_ENABLE_PROFILING",
               "value": "1"
           },
           {
               "name": "CORECLR_PROFILER",
               "value": "{918728DD-259F-4A6A-AC2B-B85E1B658318}"
           },
           {
               "name": "CORECLR_PROFILER_PATH",
               "value": "C:\\otel-auto-instrumentation\\win-x64\\OpenTelemetry.AutoInstrumentation.Native.dll"
           },
           {
               "name": "DOTNET_ADDITIONAL_DEPS",
               "value": "C:\\otel-auto-instrumentation\\AdditionalDeps"
           },
           {
               "name": "DOTNET_SHARED_STORE",
               "value": "C:\\otel-auto-instrumentation\\store"
           },
           {
               "name": "DOTNET_STARTUP_HOOKS",
               "value": "C:\\otel-auto-instrumentation\\net\\OpenTelemetry.AutoInstrumentation.StartupHook.dll"
           },
           {
               "name": "OTEL_DOTNET_AUTO_HOME",
               "value": "C:\\otel-auto-instrumentation"
           },
           {
               "name": "OTEL_DOTNET_AUTO_PLUGINS",
               "value": "AWS.Distro.OpenTelemetry.AutoInstrumentation.Plugin, AWS.Distro.OpenTelemetry.AutoInstrumentation"
           },
           {
               "name": "OTEL_RESOURCE_ATTRIBUTES",
               "value": "aws.log.group.names=$YOUR_APPLICATION_LOG_GROUP,service.name=dotnet-service-name"
           },
           {
               "name": "OTEL_LOGS_EXPORTER",
               "value": "none"
           },
           {
               "name": "OTEL_METRICS_EXPORTER",
               "value": "none"
           },
           {
               "name": "OTEL_EXPORTER_OTLP_PROTOCOL",
               "value": "http/protobuf"
           },
           {
               "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED",
               "value": "true"
           },
           {
               "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT",
               "value": "http://localhost:4316/v1/metrics"
           },
           {
               "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT",
               "value": "http://localhost:4316/v1/traces"
           },
           {
               "name": "OTEL_EXPORTER_OTLP_ENDPOINT",
               "value": "http://localhost:4316"
           },
           {
              "name": "OTEL_TRACES_SAMPLER",
              "value": "xray"
          },
          {
              "name": "OTEL_TRACES_SAMPLER_ARG",
              "value": "endpoint=http://localhost:2000"
          },
           {
               "name": "OTEL_PROPAGATORS",
               "value": "tracecontext,baggage,b3,xray"
           }
       ],
       "mountPoints": [
           {
               "sourceVolume": "opentelemetry-auto-instrumentation",
               "containerPath": "C:\\otel-auto-instrumentation",
               "readOnly": false
           }
       ],
       "dependsOn": [
           {
               "containerName": "init",
               "condition": "SUCCESS"
           }
      ]
   }
   ```
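Because the profiler variables differ between the Linux and Windows examples above, a quick consistency check can catch a mismatched `CORECLR_PROFILER_PATH` before deployment. This sketch validates an inline stand-in environment list; in practice you would read the list from your real task definition instead.

```shell
# Check that the .NET profiler settings are internally consistent.
python3 - <<'EOF' | tee /tmp/profiler-check.out
# Stand-in environment list; read it from your task definition in practice.
env_list = [
    {"name": "CORECLR_ENABLE_PROFILING", "value": "1"},
    {"name": "CORECLR_PROFILER", "value": "{918728DD-259F-4A6A-AC2B-B85E1B658318}"},
    {"name": "CORECLR_PROFILER_PATH",
     "value": "/otel-auto-instrumentation/linux-x64/OpenTelemetry.AutoInstrumentation.Native.so"},
]
env = {e["name"]: e["value"] for e in env_list}
if env.get("CORECLR_ENABLE_PROFILING") == "1":
    # Profiling is on, so both the profiler CLSID and the native library must be set.
    assert "CORECLR_PROFILER" in env and "CORECLR_PROFILER_PATH" in env
    # Linux images ship a .so; Windows images ship a .dll.
    assert env["CORECLR_PROFILER_PATH"].endswith((".so", ".dll"))
print("profiler settings OK")
EOF
```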

------
#### [ Node.js ]

**Note**  
If you are enabling Application Signals for a Node.js application with ESM, see [Setting up a Node.js application with the ESM module format](#ECS-NodeJs-ESM) before you start these steps.

**To instrument your application on Amazon ECS with the CloudWatch agent**

1. First, specify a bind mount. This volume will be used to share the auto-instrumentation files across containers later in this procedure.

   ```
   "volumes": [
     {
       "name": "opentelemetry-auto-instrumentation-node"
     }
   ]
   ```

1. Add a CloudWatch agent sidecar definition. To do this, append a new container called `ecs-cwagent` to your application's task definition. Replace *\$REGION* with your actual Region name. Replace *\$IMAGE* with the path to the latest CloudWatch agent container image on Amazon Elastic Container Registry. For more information, see [cloudwatch-agent](https://gallery.ecr.aws/cloudwatch-agent/cloudwatch-agent) on Amazon ECR.

   If you want to enable the CloudWatch agent with a daemon strategy instead, see the instructions at [Deploy using the daemon strategy](CloudWatch-Application-Signals-ECS-Daemon.md).

   ```
   {
     "name": "ecs-cwagent",
     "image": "$IMAGE",
     "essential": true,
     "secrets": [
       {
         "name": "CW_CONFIG_CONTENT",
         "valueFrom": "ecs-cwagent"
       }
     ],
     "logConfiguration": {
       "logDriver": "awslogs",
       "options": {
         "awslogs-create-group": "true",
         "awslogs-group": "/ecs/ecs-cwagent",
         "awslogs-region": "$REGION",
         "awslogs-stream-prefix": "ecs"
       }
     }
   }
   ```

1. Append a new container `init` to your application's task definition. Replace *\$IMAGE* with the latest image from the [AWS Distro for OpenTelemetry Amazon ECR image repository](https://gallery.ecr.aws/aws-observability/adot-autoinstrumentation-node).

   ```
   {
     "name": "init",
     "image": "$IMAGE",
     "essential": false,
     "command": [
       "cp",
       "-a",
       "/autoinstrumentation/.",
       "/otel-auto-instrumentation-node"
     ],
     "mountPoints": [
       {
         "sourceVolume": "opentelemetry-auto-instrumentation-node",
         "containerPath": "/otel-auto-instrumentation-node",
         "readOnly": false
       }
     ]
   }
   ```

1. Add a dependency on the `init` container to make sure that this container finishes before your application container starts.

   ```
   "dependsOn": [
     {
       "containerName": "init",
       "condition": "SUCCESS"
     }
   ]
   ```

1. Add the following environment variables to your application container.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Signals-ECS-Sidecar.html)

1. Mount the volume `opentelemetry-auto-instrumentation-node` that you defined in step 1 of this procedure. If you don't need to enable log correlation with metrics and traces, use the following example for a Node.js application. If you want to enable log correlation, see the next step instead.

   For your application container, add a dependency on the `init` container to make sure that the `init` container finishes before your application container starts.

   ```
   {
     "name": "my-app",
      ...
     "environment": [
       {
         "name": "OTEL_RESOURCE_ATTRIBUTES",
         "value": "service.name=$SVC_NAME"
       },
       {
         "name": "OTEL_LOGS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_METRICS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_PROTOCOL",
         "value": "http/protobuf"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED",
         "value": "true"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT",
         "value": "http://localhost:4316/v1/metrics"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT",
         "value": "http://localhost:4316/v1/traces"
       },
       {
         "name": "OTEL_TRACES_SAMPLER",
         "value": "xray"
       },
       {
         "name": "OTEL_TRACES_SAMPLER_ARG",
         "value": "endpoint=http://localhost:2000"
       },
       {
         "name": "NODE_OPTIONS",
         "value": "--require /otel-auto-instrumentation-node/autoinstrumentation.js"
       }
       ],
     "mountPoints": [
       {
         "sourceVolume": "opentelemetry-auto-instrumentation-node",
         "containerPath": "/otel-auto-instrumentation-node",
         "readOnly": false
       }
     ],
     "dependsOn": [
       {
         "containerName": "init",
         "condition": "SUCCESS"
       }
     ]
   }
   ```

1. (Optional) To enable log correlation, do the following before you mount the volume. In `OTEL_RESOURCE_ATTRIBUTES`, set an additional attribute `aws.log.group.names` that contains the log groups of your application. Doing so correlates the traces and metrics from your application with the relevant log entries from these log groups. For this attribute, replace *\$YOUR\_APPLICATION\_LOG\_GROUP* with the log group names for your application. If you have multiple log groups, use an ampersand (`&`) to separate them, as in this example: `aws.log.group.names=log-group-1&log-group-2`. To enable metric to log correlation, setting this attribute is enough. For more information, see [Enable metric to log correlation](Application-Signals-MetricLogCorrelation.md). To enable trace to log correlation, you must also change the logging configuration in your application. For more information, see [Enable trace to log correlation](Application-Signals-TraceLogCorrelation.md). 

   The following is an example. Use it to enable log correlation when you mount the volume `opentelemetry-auto-instrumentation-node` that you defined in step 1 of this procedure.

   ```
   {
     "name": "my-app",
      ...
     "environment": [
       {
         "name": "OTEL_RESOURCE_ATTRIBUTES",
         "value": "aws.log.group.names=$YOUR_APPLICATION_LOG_GROUP,service.name=$SVC_NAME"
       },
       {
         "name": "OTEL_LOGS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_METRICS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_PROTOCOL",
         "value": "http/protobuf"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED",
         "value": "true"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT",
         "value": "http://localhost:4316/v1/metrics"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT",
         "value": "http://localhost:4316/v1/traces"
       },
       {
         "name": "OTEL_TRACES_SAMPLER",
         "value": "xray"
       },
       {
         "name": "OTEL_TRACES_SAMPLER_ARG",
         "value": "endpoint=http://localhost:2000"
       },
       {
         "name": "NODE_OPTIONS",
         "value": "--require /otel-auto-instrumentation-node/autoinstrumentation.js"
       }
       ],
     "mountPoints": [
       {
         "sourceVolume": "opentelemetry-auto-instrumentation-node",
         "containerPath": "/otel-auto-instrumentation-node",
         "readOnly": false
       }
     ],
     "dependsOn": [
       {
         "containerName": "init",
         "condition": "SUCCESS"
       }
     ]
   }
   ```<a name="ECS-NodeJs-ESM"></a>
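As an illustration of the attribute format used above, the `OTEL_RESOURCE_ATTRIBUTES` value for log correlation can be assembled programmatically. The log group and service names in this sketch are hypothetical:

```python
# Build the OTEL_RESOURCE_ATTRIBUTES value for log correlation.
# Multiple log groups are joined with "&" inside the single
# aws.log.group.names attribute; attributes themselves are comma-separated.
def build_resource_attributes(log_groups, service_name):
    attrs = []
    if log_groups:
        attrs.append("aws.log.group.names=" + "&".join(log_groups))
    attrs.append(f"service.name={service_name}")
    return ",".join(attrs)

value = build_resource_attributes(["log-group-1", "log-group-2"], "my-service")
print(value)  # aws.log.group.names=log-group-1&log-group-2,service.name=my-service
```

The resulting string is what you would place in the `value` field of the `OTEL_RESOURCE_ATTRIBUTES` environment variable entry.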

**Setting up a Node.js application with the ESM module format**

We provide limited support for Node.js applications with the ESM module format. For details, see [Known limitations about Node.js with ESM](CloudWatch-Application-Signals-supportmatrix.md#ESM-limitations).

For the ESM module format, using the `init` container to inject the Node.js instrumentation SDK doesn’t apply. To enable Application Signals for Node.js with ESM, skip steps 1 and 3 of the previous procedure, and do the following instead.

**To enable Application Signals for a Node.js application with ESM**

1. Install the relevant dependencies in your Node.js application for auto-instrumentation:

   ```
   npm install @aws/aws-distro-opentelemetry-node-autoinstrumentation
   npm install @opentelemetry/instrumentation@0.54.0
   ```

1. In steps 5 and 6 in the previous procedure, remove the mounting of the volume `opentelemetry-auto-instrumentation-node`:

   ```
    "mountPoints": [
       {
           "sourceVolume": "opentelemetry-auto-instrumentation-node",
           "containerPath": "/otel-auto-instrumentation-node",
           "readOnly": false
       }
    ]
   ```

   Replace the `NODE_OPTIONS` environment variable with the following.

   ```
   {
       "name": "NODE_OPTIONS",
       "value": "--import @aws/aws-distro-opentelemetry-node-autoinstrumentation/register --experimental-loader=@opentelemetry/instrumentation/hook.mjs"
   }
   ```

------

## Step 5: Deploy your application
<a name="CloudWatch-Application-Signals-Enable-ECS-Deploy"></a>

Create a new revision of your task definition and deploy it to your application cluster. You should see three containers in the newly created task:
+ `init` – A required container for initializing Application Signals.
+ `ecs-cwagent` – A container running the CloudWatch agent.
+ `my-app` – The example application container in our documentation. In your actual workloads, this specific container might not exist or might be replaced with your own service containers.
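Before registering the new revision, a quick local sanity check of the task definition JSON can catch a missing container. This sketch assumes the container names listed above; substitute your own application container names:

```python
import json

# Hypothetical pre-registration check: confirm the rendered task definition
# contains the Application Signals containers alongside the application.
task_def = json.loads("""
{
  "family": "my-app",
  "containerDefinitions": [
    {"name": "init"},
    {"name": "ecs-cwagent"},
    {"name": "my-app"}
  ]
}
""")

names = {c["name"] for c in task_def["containerDefinitions"]}
# The init and agent containers are required for Application Signals.
assert {"init", "ecs-cwagent"} <= names, "Application Signals containers missing"
print(sorted(names))
```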

## (Optional) Step 6: Monitor your application health
<a name="CloudWatch-Application-Signals-Monitor-sidecar"></a>

Once you have enabled your applications on Amazon ECS, you can monitor your application health. For more information, see [Monitor the operational health of your applications with Application Signals](Services.md).

# Deploy using the daemon strategy
<a name="CloudWatch-Application-Signals-ECS-Daemon"></a>

## Step 1: Enable Application Signals in your account
<a name="Application-Signals-ECS-Grant-Daemon"></a>

You must first enable Application Signals in your account. If you haven't done so already, see [Enable Application Signals in your account](CloudWatch-Application-Signals-Enable.md).

## Step 2: Create IAM roles
<a name="Application-Signals-Enable-ECS-IAM-Daemon"></a>

You must create IAM roles. If you have already created these roles, you might need to add permissions to them.
+ **ECS task role** – Containers use this role to run. The permissions should be whatever your applications need, plus **CloudWatchAgentServerPolicy**.
+ **ECS task execution role** – Amazon ECS uses this role to launch and run your containers. Make sure that the role has the **AmazonECSTaskExecutionRolePolicy** and **CloudWatchAgentServerPolicy** policies attached.

For more information about creating IAM roles, see [Creating IAM Roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create.html).

## Step 3: Prepare CloudWatch agent configuration
<a name="Application-Signals-Enable-ECS-PrepareAgent-Daemon"></a>

First, prepare the agent configuration with Application Signals enabled. To do this, create a local file named `/tmp/ecs-cwagent.json` with the following content.

```
{
  "traces": {
    "traces_collected": {
      "application_signals": {}
    }
  },
  "logs": {
    "metrics_collected": {
      "application_signals": {}
    }
  }
}
```

Then upload this configuration to the SSM Parameter Store. To do this, enter the following command. In the command, replace *\$REGION* with your actual Region name.

```
aws ssm put-parameter \
--name "ecs-cwagent" \
--type "String" \
--value "`cat /tmp/ecs-cwagent.json`" \
--region "$REGION"
```
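Before uploading, you may want to confirm that the configuration file is valid JSON and actually enables Application Signals for both traces and metrics. A minimal sketch (the configuration body mirrors the example above):

```python
import json

# Parse the agent configuration and confirm Application Signals is enabled
# in both the traces and logs sections before pushing it to SSM.
config_text = """
{
  "traces": {
    "traces_collected": {
      "application_signals": {}
    }
  },
  "logs": {
    "metrics_collected": {
      "application_signals": {}
    }
  }
}
"""
config = json.loads(config_text)
assert "application_signals" in config["traces"]["traces_collected"]
assert "application_signals" in config["logs"]["metrics_collected"]
print("agent configuration OK")
```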

## Step 4: Deploy the CloudWatch agent daemon service
<a name="Application-Signals-Enable-ECS-Sidecar-Daemon"></a>

Create the following task definition and deploy it to your application cluster. Replace *\$REGION* with your actual Region name. Replace *\$TASK\_ROLE\_ARN* and *\$EXECUTION\_ROLE\_ARN* with the ARNs of the IAM roles that you prepared in [Step 2: Create IAM roles](#Application-Signals-Enable-ECS-IAM-Daemon). Replace *\$IMAGE* with the path to the latest CloudWatch agent container image on Amazon Elastic Container Registry. For more information, see [cloudwatch-agent](https://gallery.ecr.aws/cloudwatch-agent/cloudwatch-agent) on Amazon ECR. 

**Note**  
The daemon service exposes two ports on the host: port 4316 serves as the endpoint for receiving metrics and traces, and port 2000 serves as the CloudWatch trace sampler endpoint. This setup allows the agent to collect and transmit telemetry data from all application tasks running on the host. Ensure that these ports are not used by other services on the host, to avoid conflicts.

```
{
  "family": "ecs-cwagent-daemon",
  "taskRoleArn": "$TASK_ROLE_ARN",
  "executionRoleArn": "$EXECUTION_ROLE_ARN",
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "name": "ecs-cwagent",
      "image": "$IMAGE",
      "essential": true,
      "portMappings": [
        {
          "containerPort": 4316,
          "hostPort": 4316
        },
        {
          "containerPort": 2000,
          "hostPort": 2000
        }
      ],
      "secrets": [
        {
          "name": "CW_CONFIG_CONTENT",
          "valueFrom": "ecs-cwagent"
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-create-group": "true",
          "awslogs-group": "/ecs/ecs-cwagent",
          "awslogs-region": "$REGION",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ],
  "requiresCompatibilities": [
    "EC2"
  ],
  "cpu": "128",
  "memory": "64"
}
```
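Before deploying, a quick local check of the task definition can confirm the properties the daemon strategy relies on: bridge networking, the two fixed host ports, and EC2 compatibility. This sketch uses an abbreviated copy of the definition above:

```python
import json

# Hypothetical pre-deployment check of the daemon task definition.
task_def = json.loads("""
{
  "family": "ecs-cwagent-daemon",
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "name": "ecs-cwagent",
      "portMappings": [
        {"containerPort": 4316, "hostPort": 4316},
        {"containerPort": 2000, "hostPort": 2000}
      ]
    }
  ],
  "requiresCompatibilities": ["EC2"]
}
""")

assert task_def["networkMode"] == "bridge"
assert "EC2" in task_def["requiresCompatibilities"]
# Collect every host port the agent container exposes.
host_ports = {m["hostPort"]
              for c in task_def["containerDefinitions"]
              for m in c.get("portMappings", [])}
assert {4316, 2000} <= host_ports, "agent ports 4316/2000 must be exposed"
print("daemon task definition OK")
```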

## Step 5: Instrument your application
<a name="Application-Signals-Enable-ECS-Instrument-Daemon"></a>

The next step is to instrument your application for Application Signals.

------
#### [ Java ]

**To instrument your application on Amazon ECS with the CloudWatch agent**

1. First, specify a bind mount volume. It will be used to share files across containers later in this procedure.

   ```
   "volumes": [
     {
       "name": "opentelemetry-auto-instrumentation"
     }
   ]
   ```

1. Append a new container `init` to your application's task definition. Replace *\$IMAGE* with the latest image from the [AWS Distro for OpenTelemetry Amazon ECR image repository](https://gallery.ecr.aws/aws-observability/adot-autoinstrumentation-java). 

   ```
   {
     "name": "init",
     "image": "$IMAGE",
     "essential": false,
     "command": [
       "cp",
       "/javaagent.jar",
       "/otel-auto-instrumentation/javaagent.jar"
     ],
     "mountPoints": [
       {
         "sourceVolume": "opentelemetry-auto-instrumentation",
         "containerPath": "/otel-auto-instrumentation",
         "readOnly": false
       }
     ]
   }
   ```

1. Add a dependency on the `init` container to make sure that this container finishes before your application container starts.

   ```
   "dependsOn": [
     {
       "containerName": "init",
       "condition": "SUCCESS"
     }
   ]
   ```

1. Add the following environment variables to your application container. You must be using version 1.32.2 or later of the AWS Distro for OpenTelemetry [auto-instrumentation agent for Java](https://opentelemetry.io/docs/zero-code/java/agent/).    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Signals-ECS-Daemon.html)

1. Mount the volume `opentelemetry-auto-instrumentation` that you defined in step 1 of this procedure. If you don't need to enable log correlation with metrics and traces, use the following example for a Java application. If you want to enable log correlation, see the next step instead.

   ```
   {
     "name": "my-app",
      ...
     "environment": [
       {
         "name": "OTEL_RESOURCE_ATTRIBUTES",
         "value": "service.name=$SVC_NAME"
       },
       {
         "name": "OTEL_LOGS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_METRICS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_PROTOCOL",
         "value": "http/protobuf"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED",
         "value": "true"
       },
       {
         "name": "JAVA_TOOL_OPTIONS",
         "value": " -javaagent:/otel-auto-instrumentation/javaagent.jar"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT",
         "value": "http://CW_CONTAINER_IP:4316/v1/metrics"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT",
         "value": "http://CW_CONTAINER_IP:4316/v1/traces"
       },
       {
         "name": "OTEL_TRACES_SAMPLER",
         "value": "xray"
       },
       {
         "name": "OTEL_PROPAGATORS",
         "value": "tracecontext,baggage,b3,xray"
       }
     ],
       "dependsOn": [
       {
         "containerName": "init",
         "condition": "SUCCESS"
       }
     ],
     "mountPoints": [
       {
         "sourceVolume": "opentelemetry-auto-instrumentation",
         "containerPath": "/otel-auto-instrumentation",
         "readOnly": false
       }
     ]
   }
   ```

------
#### [ Python ]

Before you enable Application Signals for your Python applications, be aware of the following considerations.
+ In some containerized applications, a missing `PYTHONPATH` environment variable can sometimes cause the application to fail to start. To resolve this, ensure that you set the `PYTHONPATH` environment variable to the location of your application’s working directory. This is due to a known issue with OpenTelemetry auto-instrumentation. For more information about this issue, see [ Python autoinstrumentation setting of PYTHONPATH is not compliant](https://github.com/open-telemetry/opentelemetry-operator/issues/2302).
+ For Django applications, there are additional required configurations, which are outlined in the [ OpenTelemetry Python documentation](https://opentelemetry-python.readthedocs.io/en/latest/examples/django/README.html).
  + Use the `--noreload` flag to prevent automatic reloading.
  + Set the `DJANGO_SETTINGS_MODULE` environment variable to the location of your Django application’s `settings.py` file. This ensures that OpenTelemetry can correctly access and integrate with your Django settings. 
+ If you're using a WSGI server for your Python application, in addition to the following steps in this section, see [No Application Signals data for Python application that uses a WSGI server](CloudWatch-Application-Signals-Enable-Troubleshoot.md#Application-Signals-troubleshoot-Python-WSGI) for information to make Application Signals work.
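The `PYTHONPATH` value used in the task definition later in this procedure is a colon-separated list whose order matters. As a sketch, with a hypothetical application working directory standing in for *\$APP\_PATH*:

```python
# Compose the PYTHONPATH value: the auto-instrumentation entry point first,
# then the application's working directory, then the distro root.
AUTO = "/otel-auto-instrumentation-python"
app_path = "/app"  # hypothetical application working directory ($APP_PATH)

pythonpath = ":".join([
    f"{AUTO}/opentelemetry/instrumentation/auto_instrumentation",
    app_path,
    AUTO,
])
print(pythonpath)
```

Setting `app_path` explicitly also addresses the missing-`PYTHONPATH` startup issue described above.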

**To instrument your Python application on Amazon ECS with the CloudWatch agent**

1. First, specify a bind mount volume. It will be used to share files across containers later in this procedure.

   ```
   "volumes": [
     {
       "name": "opentelemetry-auto-instrumentation-python"
     }
   ]
   ```

1. Append a new container `init` to your application's task definition. Replace *\$IMAGE* with the latest image from the [AWS Distro for OpenTelemetry Amazon ECR image repository](https://gallery.ecr.aws/aws-observability/adot-autoinstrumentation-python).

   ```
   {
       "name": "init",
       "image": "$IMAGE",
       "essential": false,
       "command": [
           "cp",
           "-a",
           "/autoinstrumentation/.",
           "/otel-auto-instrumentation-python"
       ],
       "mountPoints": [
           {
               "sourceVolume": "opentelemetry-auto-instrumentation-python",
               "containerPath": "/otel-auto-instrumentation-python",
               "readOnly": false
           }
       ]
   }
   ```

1. Add a dependency on the `init` container to make sure that this container finishes before your application container starts.

   ```
   "dependsOn": [
     {
       "containerName": "init",
       "condition": "SUCCESS"
     }
   ]
   ```

1. Add the following environment variables to your application container.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Signals-ECS-Daemon.html)

1. Mount the volume `opentelemetry-auto-instrumentation-python` that you defined in step 1 of this procedure. If you don't need to enable log correlation with metrics and traces, use the following example for a Python application. If you want to enable log correlation, see the next step instead. 

   ```
   {
     "name": "my-app",
     ...
     "environment": [
       {
         "name": "PYTHONPATH",
         "value": "/otel-auto-instrumentation-python/opentelemetry/instrumentation/auto_instrumentation:$APP_PATH:/otel-auto-instrumentation-python"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_PROTOCOL",
         "value": "http/protobuf"
       },
       {
         "name": "OTEL_TRACES_SAMPLER",
         "value": "xray"
       },
       {
         "name": "OTEL_TRACES_SAMPLER_ARG",
         "value": "endpoint=http://CW_CONTAINER_IP:2000"
       },
       {
         "name": "OTEL_LOGS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_PYTHON_DISTRO",
         "value": "aws_distro"
       },
       {
         "name": "OTEL_PYTHON_CONFIGURATOR",
         "value": "aws_configurator"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT",
         "value": "http://CW_CONTAINER_IP:4316/v1/traces"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT",
         "value": "http://CW_CONTAINER_IP:4316/v1/metrics"
       },
       {
         "name": "OTEL_METRICS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED",
         "value": "true"
       },
       {
         "name": "OTEL_RESOURCE_ATTRIBUTES",
         "value": "service.name=$SVC_NAME"
       },
       {
         "name": "DJANGO_SETTINGS_MODULE",
         "value": "$PATH_TO_SETTINGS.settings"
       }
     ],
     "dependsOn": [
       {
         "containerName": "init",
         "condition": "SUCCESS"
       }
     ],
     "mountPoints": [
       {
         "sourceVolume": "opentelemetry-auto-instrumentation-python",
         "containerPath": "/otel-auto-instrumentation-python",
         "readOnly": false
       }
     ]
   }
   ```

1. (Optional) To enable log correlation, do the following before you mount the volume. In `OTEL_RESOURCE_ATTRIBUTES`, set an additional attribute `aws.log.group.names` that contains the log groups of your application. Doing so correlates the traces and metrics from your application with the relevant log entries from these log groups. For this attribute, replace *\$YOUR\_APPLICATION\_LOG\_GROUP* with the log group names for your application. If you have multiple log groups, use an ampersand (`&`) to separate them, as in this example: `aws.log.group.names=log-group-1&log-group-2`. To enable metric to log correlation, setting this attribute is enough. For more information, see [Enable metric to log correlation](Application-Signals-MetricLogCorrelation.md). To enable trace to log correlation, you must also change the logging configuration in your application. For more information, see [Enable trace to log correlation](Application-Signals-TraceLogCorrelation.md). 

   The following is an example. To enable log correlation, use this example when you mount the volume `opentelemetry-auto-instrumentation-python` that you defined in step 1 of this procedure.

   ```
   {
     "name": "my-app",
     ...
     "environment": [
       {
         "name": "PYTHONPATH",
         "value": "/otel-auto-instrumentation-python/opentelemetry/instrumentation/auto_instrumentation:$APP_PATH:/otel-auto-instrumentation-python"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_PROTOCOL",
         "value": "http/protobuf"
       },
       {
         "name": "OTEL_TRACES_SAMPLER",
         "value": "xray"
       },
       {
         "name": "OTEL_TRACES_SAMPLER_ARG",
         "value": "endpoint=http://CW_CONTAINER_IP:2000"
       },
       {
         "name": "OTEL_LOGS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_PYTHON_DISTRO",
         "value": "aws_distro"
       },
       {
         "name": "OTEL_PYTHON_CONFIGURATOR",
         "value": "aws_configurator"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT",
         "value": "http://CW_CONTAINER_IP:4316/v1/traces"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT",
         "value": "http://CW_CONTAINER_IP:4316/v1/metrics"
       },
       {
         "name": "OTEL_METRICS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED",
         "value": "true"
       },
       {
         "name": "OTEL_RESOURCE_ATTRIBUTES",
         "value": "aws.log.group.names=$YOUR_APPLICATION_LOG_GROUP,service.name=$SVC_NAME"
       },
       {
         "name": "DJANGO_SETTINGS_MODULE",
         "value": "$PATH_TO_SETTINGS.settings"
       }
     ],
     "dependsOn": [
       {
         "containerName": "init",
         "condition": "SUCCESS"
       }
     ],
     "mountPoints": [
       {
         "sourceVolume": "opentelemetry-auto-instrumentation-python",
         "containerPath": "/otel-auto-instrumentation-python",
         "readOnly": false
       }
     ]
   }
   ```

------
#### [ .NET ]

**To instrument your application on Amazon ECS with the CloudWatch agent**

1. First, specify a bind mount volume. It will be used to share files across containers later in this procedure.

   ```
   "volumes": [
     {
       "name": "opentelemetry-auto-instrumentation"
     }
   ]
   ```

1. Append a new container `init` to your application's task definition. Replace *\$IMAGE* with the latest image from the [AWS Distro for OpenTelemetry Amazon ECR image repository](https://gallery.ecr.aws/aws-observability/adot-autoinstrumentation-dotnet). 

   For a Linux container instance, use the following.

   ```
   {
     "name": "init",
     "image": "$IMAGE",
     "essential": false,
     "command": [
         "cp",
         "-a",
         "autoinstrumentation/.",
         "/otel-auto-instrumentation"
     ],
     "mountPoints": [
         {
             "sourceVolume": "opentelemetry-auto-instrumentation",
             "containerPath": "/otel-auto-instrumentation",
             "readOnly": false
         }
     ]
   }
   ```

   For a Windows Server container instance, use the following.

   ```
   {
     "name": "init",
     "image": "$IMAGE",
     "essential": false,
     "command": [
         "CMD",
         "/c",
         "xcopy",
         "/e",
         "C:\\autoinstrumentation\\*",
         "C:\\otel-auto-instrumentation",
         "&&",
         "icacls",
         "C:\\otel-auto-instrumentation",
         "/grant",
         "*S-1-1-0:R",
         "/T"
     ],
     "mountPoints": [
         {
             "sourceVolume": "opentelemetry-auto-instrumentation",
             "containerPath": "C:\\otel-auto-instrumentation",
             "readOnly": false
         }
     ]
   }
   ```

1. Add a dependency on the `init` container to make sure that container finishes before your application container starts.

   ```
   "dependsOn": [
       {
           "containerName": "init",
           "condition": "SUCCESS"
       }
   ]
   ```

1. Add the following environment variables to your application container. You must be using version 1.1.0 or later of the AWS Distro for OpenTelemetry [auto-instrumentation agent for .NET](https://opentelemetry.io/docs/zero-code/net/).    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Signals-ECS-Daemon.html)

1. Mount the volume `opentelemetry-auto-instrumentation` that you defined in step 1 of this procedure. For Linux, use the following.

   ```
   {
       "name": "my-app",
      ...
       "environment": [
           {
              "name": "OTEL_RESOURCE_ATTRIBUTES",
              "value": "service.name=$SVC_NAME"
          },
           {
               "name": "CORECLR_ENABLE_PROFILING",
               "value": "1"
           },
           {
               "name": "CORECLR_PROFILER",
               "value": "{918728DD-259F-4A6A-AC2B-B85E1B658318}"
           },
           {
               "name": "CORECLR_PROFILER_PATH",
               "value": "/otel-auto-instrumentation/linux-x64/OpenTelemetry.AutoInstrumentation.Native.so"
           },
           {
               "name": "DOTNET_ADDITIONAL_DEPS",
               "value": "/otel-auto-instrumentation/AdditionalDeps"
           },
           {
               "name": "DOTNET_SHARED_STORE",
               "value": "/otel-auto-instrumentation/store"
           },
           {
               "name": "DOTNET_STARTUP_HOOKS",
               "value": "/otel-auto-instrumentation/net/OpenTelemetry.AutoInstrumentation.StartupHook.dll"
           },
           {
               "name": "OTEL_DOTNET_AUTO_HOME",
               "value": "/otel-auto-instrumentation"
           },
           {
               "name": "OTEL_DOTNET_AUTO_PLUGINS",
               "value": "AWS.Distro.OpenTelemetry.AutoInstrumentation.Plugin, AWS.Distro.OpenTelemetry.AutoInstrumentation"
           },
           {
               "name": "OTEL_LOGS_EXPORTER",
               "value": "none"
           },
           {
               "name": "OTEL_METRICS_EXPORTER",
               "value": "none"
           },
           {
               "name": "OTEL_EXPORTER_OTLP_PROTOCOL",
               "value": "http/protobuf"
           },
           {
               "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED",
               "value": "true"
           },
           {
               "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT",
               "value": "http://localhost:4316/v1/metrics"
           },
           {
               "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT",
               "value": "http://CW_CONTAINER_IP:4316/v1/traces"
           },
           {
               "name": "OTEL_EXPORTER_OTLP_ENDPOINT",
               "value": "http://CW_CONTAINER_IP:4316"
           },
           {
              "name": "OTEL_TRACES_SAMPLER",
              "value": "xray"
          },
          {
              "name": "OTEL_TRACES_SAMPLER_ARG",
              "value": "endpoint=http://CW_CONTAINER_IP:2000"
          },
           {
               "name": "OTEL_PROPAGATORS",
               "value": "tracecontext,baggage,b3,xray"
           }
       ],
         "dependsOn": [
       {
         "containerName": "init",
         "condition": "SUCCESS"
       }
     ],
       "mountPoints": [
           {
               "sourceVolume": "opentelemetry-auto-instrumentation",
               "containerPath": "/otel-auto-instrumentation",
               "readOnly": false
           }
       ],
       "dependsOn": [
           {
               "containerName": "init",
               "condition": "SUCCESS"
           }
      ]
   }
   ```

   For Windows Server, use the following.

   ```
   {
       "name": "my-app",
      ...
       "environment": [
          {
              "name": "OTEL_RESOURCE_ATTRIBUTES",
              "value": "service.name=$SVC_NAME"
          },
           {
               "name": "CORECLR_ENABLE_PROFILING",
               "value": "1"
           },
           {
               "name": "CORECLR_PROFILER",
               "value": "{918728DD-259F-4A6A-AC2B-B85E1B658318}"
           },
           {
               "name": "CORECLR_PROFILER_PATH",
               "value": "C:\\otel-auto-instrumentation\\win-x64\\OpenTelemetry.AutoInstrumentation.Native.dll"
           },
           {
               "name": "DOTNET_ADDITIONAL_DEPS",
               "value": "C:\\otel-auto-instrumentation\\AdditionalDeps"
           },
           {
               "name": "DOTNET_SHARED_STORE",
               "value": "C:\\otel-auto-instrumentation\\store"
           },
           {
               "name": "DOTNET_STARTUP_HOOKS",
               "value": "C:\\otel-auto-instrumentation\\net\\OpenTelemetry.AutoInstrumentation.StartupHook.dll"
           },
           {
               "name": "OTEL_DOTNET_AUTO_HOME",
               "value": "C:\\otel-auto-instrumentation"
           },
           {
               "name": "OTEL_DOTNET_AUTO_PLUGINS",
               "value": "AWS.Distro.OpenTelemetry.AutoInstrumentation.Plugin, AWS.Distro.OpenTelemetry.AutoInstrumentation"
           },
           {
               "name": "OTEL_LOGS_EXPORTER",
               "value": "none"
           },
           {
               "name": "OTEL_METRICS_EXPORTER",
               "value": "none"
           },
           {
               "name": "OTEL_EXPORTER_OTLP_PROTOCOL",
               "value": "http/protobuf"
           },
           {
               "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED",
               "value": "true"
           },
           {
               "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT",
               "value": "http://CW_CONTAINER_IP:4316/v1/metrics"
           },
           {
               "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT",
               "value": "http://CW_CONTAINER_IP:4316/v1/traces"
           },
           {
               "name": "OTEL_EXPORTER_OTLP_ENDPOINT",
               "value": "http://CW_CONTAINER_IP:4316"
           },
           {
              "name": "OTEL_TRACES_SAMPLER",
              "value": "xray"
          },
          {
              "name": "OTEL_TRACES_SAMPLER_ARG",
              "value": "endpoint=http://CW_CONTAINER_IP:2000"
          },
           {
               "name": "OTEL_PROPAGATORS",
               "value": "tracecontext,baggage,b3,xray"
           }
       ],
       "mountPoints": [
           {
               "sourceVolume": "opentelemetry-auto-instrumentation",
               "containerPath": "C:\\otel-auto-instrumentation",
               "readOnly": false
           }
       ],
       "dependsOn": [
           {
               "containerName": "init",
               "condition": "SUCCESS"
           }
      ]
   }
   ```

------
#### [ Node.js ]

**Note**  
If you are enabling Application Signals for a Node.js application with ESM, see [Setting up a Node.js application with the ESM module format](#ECSDaemon-NodeJs-ESM) before you start these steps.

**To instrument your application on Amazon ECS with the CloudWatch agent**

1. First, specify a bind mount volume. It will be used to share files across containers later in this procedure.

   ```
   "volumes": [
     {
       "name": "opentelemetry-auto-instrumentation-node"
     }
   ]
   ```

1. Append a new container `init` to your application's task definition. Replace *\$IMAGE* with the latest image from the [AWS Distro for OpenTelemetry Amazon ECR image repository](https://gallery.ecr.aws/aws-observability/adot-autoinstrumentation-node). 

   ```
   {
     "name": "init",
     "image": "$IMAGE",
     "essential": false,
     "command": [
       "cp",
       "-a",
       "/autoinstrumentation/.",
       "/otel-auto-instrumentation-node"
     ],
     "mountPoints": [
       {
         "sourceVolume": "opentelemetry-auto-instrumentation-node",
         "containerPath": "/otel-auto-instrumentation-node",
         "readOnly": false
       }
     ]
   }
   ```

1. Add a dependency on the `init` container to make sure that this container finishes before your application container starts.

   ```
   "dependsOn": [
     {
       "containerName": "init",
       "condition": "SUCCESS"
     }
   ]
   ```

1. Add the following environment variables to your application container.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Signals-ECS-Daemon.html)
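   The linked table has the full, language-specific list. As a condensed sketch for a Node.js container, the core variables look like the following (the same values appear in the complete example later in this procedure, with `CW_CONTAINER_IP` standing in for the CloudWatch agent address, as elsewhere on this page):

   ```
   "environment": [
     { "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED", "value": "true" },
     { "name": "OTEL_LOGS_EXPORTER", "value": "none" },
     { "name": "OTEL_METRICS_EXPORTER", "value": "none" },
     { "name": "OTEL_EXPORTER_OTLP_PROTOCOL", "value": "http/protobuf" },
     { "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT", "value": "http://CW_CONTAINER_IP:4316/v1/metrics" },
     { "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT", "value": "http://CW_CONTAINER_IP:4316/v1/traces" },
     { "name": "OTEL_TRACES_SAMPLER", "value": "xray" },
     { "name": "OTEL_TRACES_SAMPLER_ARG", "value": "endpoint=http://CW_CONTAINER_IP:2000" }
   ]
   ```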

1. Mount the volume `opentelemetry-auto-instrumentation-node` that you defined in step 1 of this procedure. If you don't need to enable log correlation with metrics and traces, use the following example for a Node.js application. If you want to enable log correlation, see the next step instead.

   In your application container, also include the dependency on the `init` container, so that the `init` container finishes before your application container starts.

   ```
   {
     "name": "my-app",
      ...
     "environment": [
       {
         "name": "OTEL_RESOURCE_ATTRIBUTES",
         "value": "service.name=$SVC_NAME"
       },
       {
         "name": "OTEL_LOGS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_METRICS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_PROTOCOL",
         "value": "http/protobuf"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED",
         "value": "true"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT",
         "value": "http://CW_CONTAINER_IP:4316/v1/metrics"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT",
         "value": "http://CW_CONTAINER_IP:4316/v1/traces"
       },
       {
         "name": "OTEL_TRACES_SAMPLER",
         "value": "xray"
       },
       {
         "name": "OTEL_TRACES_SAMPLER_ARG",
         "value": "endpoint=http://CW_CONTAINER_IP:2000"
       },
       {
         "name": "NODE_OPTIONS",
         "value": "--require /otel-auto-instrumentation-node/autoinstrumentation.js"
       }
       ],
     "mountPoints": [
       {
         "sourceVolume": "opentelemetry-auto-instrumentation-node",
         "containerPath": "/otel-auto-instrumentation-node",
         "readOnly": false
       }
     ],
     "dependsOn": [
       {
         "containerName": "init",
         "condition": "SUCCESS"
       }
     ]
   }
   ```

1. (Optional) To enable log correlation, do the following before you mount the volume. In `OTEL_RESOURCE_ATTRIBUTES`, set an additional attribute `aws.log.group.names` with the log groups of your application. By doing so, the traces and metrics from your application can be correlated with the relevant log entries from these log groups. For this attribute, replace *$YOUR_APPLICATION_LOG_GROUP* with the log group names for your application. If you have multiple log groups, use an ampersand (`&`) to separate them, as in this example: `aws.log.group.names=log-group-1&log-group-2`. To enable metric to log correlation, setting this attribute is sufficient. For more information, see [Enable metric to log correlation](Application-Signals-MetricLogCorrelation.md). To enable trace to log correlation, you must also change the logging configuration in your application. For more information, see [Enable trace to log correlation](Application-Signals-TraceLogCorrelation.md). 

   The following is an example. Use this example to enable log correlation when you mount the volume `opentelemetry-auto-instrumentation-node` that you defined in step 1 of this procedure.

   ```
   {
     "name": "my-app",
      ...
     "environment": [
       {
         "name": "OTEL_RESOURCE_ATTRIBUTES",
         "value": "aws.log.group.names=$YOUR_APPLICATION_LOG_GROUP,service.name=$SVC_NAME"
       },
       {
         "name": "OTEL_LOGS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_METRICS_EXPORTER",
         "value": "none"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_PROTOCOL",
         "value": "http/protobuf"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED",
         "value": "true"
       },
       {
         "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT",
         "value": "http://CW_CONTAINER_IP:4316/v1/metrics"
       },
       {
         "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT",
         "value": "http://CW_CONTAINER_IP:4316/v1/traces"
       },
       {
         "name": "OTEL_TRACES_SAMPLER",
         "value": "xray"
       },
       {
         "name": "OTEL_TRACES_SAMPLER_ARG",
         "value": "endpoint=http://CW_CONTAINER_IP:2000"
       },
       {
         "name": "NODE_OPTIONS",
         "value": "--require /otel-auto-instrumentation-node/autoinstrumentation.js"
       }
       ],
     "mountPoints": [
       {
         "sourceVolume": "opentelemetry-auto-instrumentation-node",
         "containerPath": "/otel-auto-instrumentation-node",
         "readOnly": false
       }
     ],
     "dependsOn": [
       {
         "containerName": "init",
         "condition": "SUCCESS"
       }
     ]
   }
   ```

<a name="ECSDaemon-NodeJs-ESM"></a>

**Setting up a Node.js application with the ESM module format**

We provide limited support for Node.js applications with the ESM module format. For details, see [Known limitations about Node.js with ESM](CloudWatch-Application-Signals-supportmatrix.md#ESM-limitations).

For the ESM module format, using the `init` container to inject the Node.js instrumentation SDK doesn't apply. To enable Application Signals for Node.js with ESM, skip steps 1 and 2 of the previous procedure, and do the following instead.

**To enable Application Signals for a Node.js application with ESM**

1. Install the relevant dependencies to your Node.js application for autoinstrumentation:

   ```
   npm install @aws/aws-distro-opentelemetry-node-autoinstrumentation
   npm install @opentelemetry/instrumentation@0.54.0
   ```

1. In steps 4 and 5 of the previous procedure, remove the mounting of the volume `opentelemetry-auto-instrumentation-node`:

   ```
   "mountPoints": [
       {
           "sourceVolume": "opentelemetry-auto-instrumentation-node",
           "containerPath": "/otel-auto-instrumentation-node",
           "readOnly": false
       }
    ]
   ```

   Replace the node options with the following.

   ```
   {
       "name": "NODE_OPTIONS",
       "value": "--import @aws/aws-distro-opentelemetry-node-autoinstrumentation/register --experimental-loader=@opentelemetry/instrumentation/hook.mjs"
   }
   ```
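   Putting the ESM changes together, the application container ends up with no `init` dependency, no shared volume, and the import-based `NODE_OPTIONS`. The following is a rough sketch, with the container name and service name as placeholders:

   ```
   {
     "name": "my-app",
     "environment": [
       { "name": "OTEL_RESOURCE_ATTRIBUTES", "value": "service.name=$SVC_NAME" },
       { "name": "OTEL_LOGS_EXPORTER", "value": "none" },
       { "name": "OTEL_METRICS_EXPORTER", "value": "none" },
       { "name": "OTEL_EXPORTER_OTLP_PROTOCOL", "value": "http/protobuf" },
       { "name": "OTEL_AWS_APPLICATION_SIGNALS_ENABLED", "value": "true" },
       { "name": "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT", "value": "http://CW_CONTAINER_IP:4316/v1/metrics" },
       { "name": "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT", "value": "http://CW_CONTAINER_IP:4316/v1/traces" },
       { "name": "OTEL_TRACES_SAMPLER", "value": "xray" },
       { "name": "OTEL_TRACES_SAMPLER_ARG", "value": "endpoint=http://CW_CONTAINER_IP:2000" },
       { "name": "NODE_OPTIONS", "value": "--import @aws/aws-distro-opentelemetry-node-autoinstrumentation/register --experimental-loader=@opentelemetry/instrumentation/hook.mjs" }
     ]
   }
   ```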

------

## Step 6: Deploy your application
<a name="Application-Signals-Enable-ECS-Deploy-Daemon"></a>

Create a new revision of your task definition and deploy it to your application cluster. You should see two containers in the newly created task:
+ `init`– A required container for initializing Application Signals
+ `my-app`– This is the example application container in our documentation. In your actual workloads, this specific container might not exist or might be replaced with your own service containers.
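At the top level, the new task definition revision assembles the pieces from the earlier steps roughly as follows. This is a sketch: the family name is a placeholder, the volume name shown is the Node.js one, and `...` elides the container settings shown earlier.

```
{
  "family": "my-app-task",
  "volumes": [
    { "name": "opentelemetry-auto-instrumentation-node" }
  ],
  "containerDefinitions": [
    { "name": "init", ... },
    { "name": "my-app", ... }
  ]
}
```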

## (Optional) Step 7: Monitor your application health
<a name="CloudWatch-Application-Signals-Monitor-daemon"></a>

Once you have enabled your applications on Amazon ECS, you can monitor your application health. For more information, see [Monitor the operational health of your applications with Application Signals](Services.md).

# Enable Application Signals on Amazon ECS using AWS CDK
<a name="CloudWatch-Application-Signals-EKS-CDK"></a>

To enable Application Signals on Amazon ECS using AWS CDK, do the following.

1. Enable Application Signals for your applications – If you haven't enabled Application Signals in this account yet, you must grant Application Signals the permissions it needs to discover your services.

   ```
   import { aws_applicationsignals as applicationsignals } from 'aws-cdk-lib';
   
   const cfnDiscovery = new applicationsignals.CfnDiscovery(this,
     'ApplicationSignalsServiceRole', { }
   );
   ```

   The Discovery CloudFormation resource grants Application Signals the following permissions:
   + `xray:GetServiceGraph`
   + `logs:StartQuery`
   + `logs:GetQueryResults`
   + `cloudwatch:GetMetricData`
   + `cloudwatch:ListMetrics`
   + `tag:GetResources`

   For more information about this role, see [Service-linked role permissions for CloudWatch Application Signals](using-service-linked-roles.md#service-linked-role-signals).

1. Instrument your application with the [AWS::ApplicationSignals Construct Library](https://www.npmjs.com/package/@aws-cdk/aws-applicationsignals-alpha) in the AWS CDK. The code snippets in this document are provided in *TypeScript*. For other language-specific alternatives, see [Supported programming languages for the AWS CDK](https://docs.aws.amazon.com/cdk/v2/guide/languages.html). 
   + **Enable Application Signals on Amazon ECS with sidecar mode**

     1. Configure `instrumentation` to instrument the application with the AWS Distro for OpenTelemetry (ADOT) SDK Agent. The following is an example of instrumenting a Java application. See [InstrumentationVersion](https://docs.aws.amazon.com/cdk/api/v2/docs/@aws-cdk_aws-applicationsignals-alpha.InstrumentationVersion.html) for all supported language versions.

     1. Specify `cloudWatchAgentSidecar` to configure the CloudWatch Agent as a sidecar container.

        ```
        import { Construct } from 'constructs';
        import * as appsignals from '@aws-cdk/aws-applicationsignals-alpha';
        import * as cdk from 'aws-cdk-lib';
        import * as ec2 from 'aws-cdk-lib/aws-ec2';
        import * as ecs from 'aws-cdk-lib/aws-ecs';
        
        class MyStack extends cdk.Stack {
          public constructor(scope?: Construct, id?: string, props: cdk.StackProps = {}) {
            super(scope, id, props);
            const vpc = new ec2.Vpc(this, 'TestVpc', {});
            const cluster = new ecs.Cluster(this, 'TestCluster', { vpc });
        
            const fargateTaskDefinition = new ecs.FargateTaskDefinition(this, 'SampleAppTaskDefinition', {
              cpu: 2048,
              memoryLimitMiB: 4096,
            });
        
            fargateTaskDefinition.addContainer('app', {
              image: ecs.ContainerImage.fromRegistry('test/sample-app'),
            });
        
            new appsignals.ApplicationSignalsIntegration(this, 'ApplicationSignalsIntegration', {
              taskDefinition: fargateTaskDefinition,
              instrumentation: {
                sdkVersion: appsignals.JavaInstrumentationVersion.V2_10_0,
              },
              serviceName: 'sample-app',
              cloudWatchAgentSidecar: {
                containerName: 'ecs-cwagent',
                enableLogging: true,
                cpu: 256,
                memoryLimitMiB: 512,
              }
            });
        
            new ecs.FargateService(this, 'MySampleApp', {
              cluster: cluster,
              taskDefinition: fargateTaskDefinition,
              desiredCount: 1,
            });
          }
        }
        ```
   + **Enable Application Signals on Amazon ECS with daemon mode**
**Note**  
The daemon deployment strategy is not supported on Amazon ECS Fargate and is only supported on Amazon ECS on Amazon EC2.

     1. Run CloudWatch Agent as a daemon service with `HOST` network mode.

     1. Configure `instrumentation` to instrument the application with the ADOT Python Agent.

        ```
        import { Construct } from 'constructs';
        import * as appsignals from '@aws-cdk/aws-applicationsignals-alpha';
        import * as cdk from 'aws-cdk-lib';
        import * as ec2 from 'aws-cdk-lib/aws-ec2';
        import * as ecs from 'aws-cdk-lib/aws-ecs';
        
        class MyStack extends cdk.Stack {
          public constructor(scope?: Construct, id?: string, props: cdk.StackProps = {}) {
            super(scope, id, props);
        
            const vpc = new ec2.Vpc(this, 'TestVpc', {});
            const cluster = new ecs.Cluster(this, 'TestCluster', { vpc });
        
            // Define Task Definition for CloudWatch agent (Daemon)
            const cwAgentTaskDefinition = new ecs.Ec2TaskDefinition(this, 'CloudWatchAgentTaskDefinition', {
              networkMode: ecs.NetworkMode.HOST,
            });
        
            new appsignals.CloudWatchAgentIntegration(this, 'CloudWatchAgentIntegration', {
              taskDefinition: cwAgentTaskDefinition,
              containerName: 'ecs-cwagent',
              enableLogging: false,
              cpu: 128,
              memoryLimitMiB: 64,
              portMappings: [
                {
                  containerPort: 4316,
                  hostPort: 4316,
                },
                {
                  containerPort: 2000,
                  hostPort: 2000,
                },
              ],
            });
        
            // Create the CloudWatch Agent daemon service
            new ecs.Ec2Service(this, 'CloudWatchAgentDaemon', {
              cluster,
              taskDefinition: cwAgentTaskDefinition,
              daemon: true,  // Runs one container per EC2 instance
            });
        
            // Define Task Definition for user application
            const sampleAppTaskDefinition = new ecs.Ec2TaskDefinition(this, 'SampleAppTaskDefinition', {
              networkMode: ecs.NetworkMode.HOST,
            });
        
            sampleAppTaskDefinition.addContainer('app', {
              image: ecs.ContainerImage.fromRegistry('test/sample-app'),
              cpu: 0,
              memoryLimitMiB: 512,
            });
        
            // No CloudWatch Agent sidecar is needed as application container communicates to CloudWatch Agent daemon through host network
            new appsignals.ApplicationSignalsIntegration(this, 'ApplicationSignalsIntegration', {
              taskDefinition: sampleAppTaskDefinition,
              instrumentation: {
                sdkVersion: appsignals.PythonInstrumentationVersion.V0_8_0
              },
              serviceName: 'sample-app'
            });
        
            new ecs.Ec2Service(this, 'MySampleApp', {
              cluster,
              taskDefinition: sampleAppTaskDefinition,
              desiredCount: 1,
            });
          }
        }
        ```
   + **Enable Application Signals on Amazon ECS with replica mode**
**Note**  
Running the CloudWatch agent service in replica mode requires specific security group configurations to enable communication with other services. For Application Signals functionality, configure the security group with these minimum inbound rules: port 2000 (HTTP) and port 4316 (HTTP). This configuration ensures proper connectivity between the CloudWatch agent and dependent services.

     1. Run CloudWatch Agent as a replica service with service connect.

     1. Configure `instrumentation` to instrument the application with the ADOT Python Agent.

     1. Override environment variables by configuring `overrideEnvironments` to use service connect endpoints to communicate to the CloudWatch agent server.

        ```
        import { Construct } from 'constructs';
        import * as appsignals from '@aws-cdk/aws-applicationsignals-alpha';
        import * as cdk from 'aws-cdk-lib';
        import * as ec2 from 'aws-cdk-lib/aws-ec2';
        import * as ecs from 'aws-cdk-lib/aws-ecs';
        import { PrivateDnsNamespace } from 'aws-cdk-lib/aws-servicediscovery';
        
        class MyStack extends cdk.Stack {
          public constructor(scope?: Construct, id?: string, props: cdk.StackProps = {}) {
            super(scope, id, props);
        
            const vpc = new ec2.Vpc(this, 'TestVpc', {});
            const cluster = new ecs.Cluster(this, 'TestCluster', { vpc });
            const dnsNamespace = new PrivateDnsNamespace(this, 'Namespace', {
              vpc,
              name: 'local',
            });
            const securityGroup = new ec2.SecurityGroup(this, 'ECSSG', { vpc });
            securityGroup.addIngressRule(securityGroup, ec2.Port.tcpRange(0, 65535));
        
            // Define Task Definition for CloudWatch agent (Replica)
            const cwAgentTaskDefinition = new ecs.FargateTaskDefinition(this, 'CloudWatchAgentTaskDefinition', {});
        
            new appsignals.CloudWatchAgentIntegration(this, 'CloudWatchAgentIntegration', {
              taskDefinition: cwAgentTaskDefinition,
              containerName: 'ecs-cwagent',
              enableLogging: false,
              cpu: 128,
              memoryLimitMiB: 64,
              portMappings: [
                {
                  name: 'cwagent-4316',
                  containerPort: 4316,
                  hostPort: 4316,
                },
                {
                  name: 'cwagent-2000',
                  containerPort: 2000,
                  hostPort: 2000,
                },
              ],
            });
        
            // Create the CloudWatch Agent replica service with service connect
            new ecs.FargateService(this, 'CloudWatchAgentService', {
              cluster: cluster,
              taskDefinition: cwAgentTaskDefinition,
              securityGroups: [securityGroup],
              serviceConnectConfiguration: {
                namespace: dnsNamespace.namespaceArn,
                services: [
                  {
                    portMappingName: 'cwagent-4316',
                    dnsName: 'cwagent-4316-http',
                    port: 4316,
                  },
                  {
                    portMappingName: 'cwagent-2000',
                    dnsName: 'cwagent-2000-http',
                    port: 2000,
                  },
                ],
              },
              desiredCount: 1,
            });
        
            // Define Task Definition for user application
            const sampleAppTaskDefinition = new ecs.FargateTaskDefinition(this, 'SampleAppTaskDefinition', {});
        
            sampleAppTaskDefinition.addContainer('app', {
              image: ecs.ContainerImage.fromRegistry('test/sample-app'),
              cpu: 0,
              memoryLimitMiB: 512,
            });
        
            // Overwrite environment variables to connect to the CloudWatch Agent service just created
            new appsignals.ApplicationSignalsIntegration(this, 'ApplicationSignalsIntegration', {
              taskDefinition: sampleAppTaskDefinition,
              instrumentation: {
                sdkVersion: appsignals.PythonInstrumentationVersion.V0_8_0,
              },
              serviceName: 'sample-app',
              overrideEnvironments: [
                {
                  name: appsignals.CommonExporting.OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT,
                  value: 'http://cwagent-4316-http:4316/v1/metrics',
                },
                {
                  name: appsignals.TraceExporting.OTEL_EXPORTER_OTLP_TRACES_ENDPOINT,
                  value: 'http://cwagent-4316-http:4316/v1/traces',
                },
                {
                  name: appsignals.TraceExporting.OTEL_TRACES_SAMPLER_ARG,
                  value: 'endpoint=http://cwagent-2000-http:2000',
                },
              ],
            });
        
            // Create ECS Service with service connect configuration
            new ecs.FargateService(this, 'MySampleApp', {
              cluster: cluster,
              taskDefinition: sampleAppTaskDefinition,
              serviceConnectConfiguration: {
                namespace: dnsNamespace.namespaceArn,
              },
              desiredCount: 1,
            });
          }
        }
        ```

1. Set up a Node.js application with the ESM module format, if applicable – There is limited support for Node.js applications with the ESM module format. For more information, see [Known limitations about Node.js with ESM](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Signals-supportmatrix.html#ESM-limitations).

   For the ESM module format, enabling Application Signals by using the `init` container to inject the Node.js instrumentation SDK doesn't apply. Skip Step 2 in this procedure, and do the following instead.
   + Install the relevant dependencies to your Node.js application for autoinstrumentation.

     ```
     npm install @aws/aws-distro-opentelemetry-node-autoinstrumentation
     npm install @opentelemetry/instrumentation@0.54.0
     ```
   + Update the task definition.

     1. Add additional configuration to your application container.

     1. Configure `NODE_OPTIONS`.

     1. (Optional) Add CloudWatch Agent if you choose sidecar mode.

        ```
         import { Construct } from 'constructs';
         import * as appsignals from '@aws-cdk/aws-applicationsignals-alpha';
         import * as cdk from 'aws-cdk-lib';
         import * as ecs from 'aws-cdk-lib/aws-ecs';
         
         class MyStack extends cdk.Stack {
           public constructor(scope?: Construct, id?: string, props: cdk.StackProps = {}) {
             super(scope, id, props);
             const fargateTaskDefinition = new ecs.FargateTaskDefinition(this, 'TestTaskDefinition', {
               cpu: 256,
               memoryLimitMiB: 512,
             });
             const appContainer = fargateTaskDefinition.addContainer('app', {
               image: ecs.ContainerImage.fromRegistry('docker/cdk-test'),
             });
         
             const volumeName = 'opentelemetry-auto-instrumentation';
             fargateTaskDefinition.addVolume({ name: volumeName });
         
             // Inject additional configurations
             const injector = new appsignals.NodeInjector(volumeName, appsignals.NodeInstrumentationVersion.V0_5_0);
             injector.renderDefaultContainer(fargateTaskDefinition);
             // Configure NODE_OPTIONS
             appContainer.addEnvironment('NODE_OPTIONS', '--import @aws/aws-distro-opentelemetry-node-autoinstrumentation/register --experimental-loader=@opentelemetry/instrumentation/hook.mjs');
         
             // Optional: add CloudWatch agent
             const cwAgent = new appsignals.CloudWatchAgentIntegration(this, 'AddCloudWatchAgent', {
               containerName: 'ecs-cwagent',
               taskDefinition: fargateTaskDefinition,
               memoryReservationMiB: 50,
             });
             appContainer.addContainerDependencies({
               container: cwAgent.agentContainer,
               condition: ecs.ContainerDependencyCondition.START,
             });
           }
         }
        ```

1. Deploy the updated stack – Run the `cdk synth` command in your application's main directory. To deploy the service in your AWS account, run the `cdk deploy` command in your application's main directory.

   If you used the sidecar strategy, you'll see one service created:
   + *APPLICATION_SERVICE* is the service of your application. It includes the following three containers:
     + `init`– A required container for initializing Application Signals.
     + `ecs-cwagent`– A container running the CloudWatch agent.
     + `my-app`– This is the example application container in our documentation. In your actual workloads, this specific container might not exist or might be replaced with your own service containers.

   If you used the daemon strategy, you'll see two services created:
   + *CloudWatchAgentDaemon* is the CloudWatch agent daemon service.
   + *APPLICATION_SERVICE* is the service of your application. It includes the following two containers:
     + `init`– A required container for initializing Application Signals.
     + `my-app`– This is the example application container in our documentation. In your actual workloads, this specific container might not exist or might be replaced with your own service containers.

   If you used the replica strategy, you'll see two services created:
   + *CloudWatchAgentService* is the CloudWatch agent replica service.
   + *APPLICATION_SERVICE* is the service of your application. It includes the following two containers:
     + `init`– A required container for initializing Application Signals.
     + `my-app`– This is the example application container in our documentation. In your actual workloads, this specific container might not exist or might be replaced with your own service containers.

## Enable Application Signals on Amazon ECS using Model Context Protocol (MCP)
<a name="CloudWatch-Application-Signals-ECS-MCP"></a>

You can use the CloudWatch Application Signals Model Context Protocol (MCP) server to enable Application Signals on your Amazon ECS clusters through conversational AI interactions. This provides a natural language interface for setting up Application Signals monitoring.

The MCP server automates the enablement process by understanding your requirements and generating the appropriate configuration. Instead of manually following console steps or writing CDK code, you can simply describe what you want to enable.

### Prerequisites
<a name="CloudWatch-Application-Signals-ECS-MCP-Prerequisites"></a>

Before using the MCP server to enable Application Signals, ensure you have:
+ A development environment that supports MCP (such as Kiro, Claude Desktop, VSCode with MCP extensions, or other MCP-compatible tools)
+ The CloudWatch Application Signals MCP server configured in your IDE. For detailed setup instructions, see [CloudWatch Application Signals MCP Server documentation](https://awslabs.github.io/mcp/servers/cloudwatch-applicationsignals-mcp-server).

### Using the MCP server
<a name="CloudWatch-Application-Signals-ECS-MCP-Usage"></a>

Once you have configured the CloudWatch Application Signals MCP server in your IDE, you can request enablement guidance using natural language prompts. While the coding assistant can infer context from your project structure, providing specific details in your prompts helps ensure more accurate and relevant guidance. Include information such as your application language, Amazon ECS cluster name, deployment strategy (sidecar or daemon), and absolute paths to your infrastructure and application code.

**Best practice prompts (specific and complete):**

```
"Enable Application Signals for my Python service running on ECS.
My app code is in /home/user/flask-api and IaC is in /home/user/flask-api/terraform"

"I want to add observability to my Node.js application on ECS cluster 'production-cluster' using sidecar deployment.
The application code is at /Users/dev/checkout-service and
the task definitions are at /Users/dev/checkout-service/ecs"

"Help me instrument my Java Spring Boot application on ECS with Application Signals using daemon strategy.
Application directory: /opt/apps/payment-api
CDK infrastructure: /opt/apps/payment-api/cdk"
```

**Less effective prompts:**

```
"Enable monitoring for my app"
→ Missing: platform, language, paths

"Enable Application Signals. My code is in ./src and IaC is in ./infrastructure"
→ Problem: Relative paths instead of absolute paths

"Enable Application Signals for my ECS service at /home/user/myapp"
→ Missing: programming language, deployment strategy
```

**Quick template:**

```
"Enable Application Signals for my [LANGUAGE] service on ECS.
Deployment strategy: [sidecar/daemon]
App code: [ABSOLUTE_PATH_TO_APP]
IaC code: [ABSOLUTE_PATH_TO_IAC]"
```

### Benefits of using the MCP server
<a name="CloudWatch-Application-Signals-ECS-MCP-Benefits"></a>

Using the CloudWatch Application Signals MCP server offers several advantages:
+ **Natural language interface:** Describe what you want to enable without memorizing commands or configuration syntax
+ **Context-aware guidance:** The MCP server understands your specific environment and provides tailored recommendations
+ **Reduced errors:** Automated configuration generation minimizes manual typing errors
+ **Faster setup:** Get from intention to implementation more quickly
+ **Learning tool:** See the generated configurations and understand how Application Signals works

### Additional resources
<a name="CloudWatch-Application-Signals-ECS-MCP-MoreInfo"></a>

For more information about configuring and using the CloudWatch Application Signals MCP server, see the [MCP server documentation](https://awslabs.github.io/mcp/servers/cloudwatch-applicationsignals-mcp-server).