

# Logging and monitoring on Amazon ECS
<a name="amazon-ecs-logging-monitoring"></a>

Amazon Elastic Container Service (Amazon ECS) provides [two launch types](https://docs.aws.amazon.com//AmazonECS/latest/developerguide/launch_types.html) for running containers: AWS Fargate and Amazon EC2. The launch type determines the type of infrastructure that hosts your tasks and services. Both launch types integrate with CloudWatch, but their configurations and supported features vary. 

The following sections help you understand how to use CloudWatch for logging and monitoring on Amazon ECS.

**Topics**
+ [Configuring CloudWatch with an EC2 launch type](configure-cloudwatch-ec2-launch-type.md)
+ [Amazon ECS container logs for EC2 and Fargate launch types](ec2-fargate-logs.md)
+ [Using custom log routing with FireLens for Amazon ECS](firelens-custom-log-routing.md)
+ [Metrics for Amazon ECS](ecs-metrics.md)

# Configuring CloudWatch with an EC2 launch type
<a name="configure-cloudwatch-ec2-launch-type"></a>

With an EC2 launch type, you provision an Amazon ECS cluster of EC2 instances that use the CloudWatch agent for logging and monitoring. An Amazon ECS optimized AMI comes pre-installed with the [Amazon ECS container agent](https://docs.aws.amazon.com//AmazonECS/latest/developerguide/ECS_agent.html) and provides CloudWatch metrics for the Amazon ECS cluster. 

These default metrics are included in the cost of Amazon ECS, but the default configuration doesn't monitor log files or additional metrics (for example, free disk space). You can use the AWS Management Console to provision an Amazon ECS cluster with the EC2 launch type. This creates a CloudFormation stack that deploys an Amazon EC2 Auto Scaling group with a launch configuration. However, this approach means that you can't choose a custom AMI or customize the launch configuration with different settings or additional boot scripts.

To monitor additional logs and metrics, you must install the CloudWatch agent on your Amazon ECS container instances. You can use the installation approach for EC2 instances from the [Installing the CloudWatch agent using Systems Manager Distributor and State Manager](install-cloudwatch-systems-manager.md) section of this guide. However, the Amazon ECS AMI doesn’t include the required Systems Manager agent. You should use a custom launch configuration with a user data script that installs the Systems Manager agent when you create your Amazon ECS cluster. This allows your container instances to register with Systems Manager and apply the State Manager associations to install, configure, and update the CloudWatch agent. When State Manager runs and updates your CloudWatch agent configuration, it also applies your standardized systems-level CloudWatch configuration for Amazon EC2. You can also store standardized CloudWatch configurations for Amazon ECS in the S3 bucket for your CloudWatch configuration and automatically apply them with State Manager. 
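For example, the user data in a custom launch configuration might install the Systems Manager agent before the instance registers with the cluster. The following CloudFormation fragment is a sketch; the `EcsCluster` resource name is an assumption, and the RPM URL follows the documented regional download pattern for the Systems Manager agent:

```
UserData:
  Fn::Base64: !Sub |
    #!/bin/bash
    # Register the instance with the Amazon ECS cluster (assumed resource name)
    echo "ECS_CLUSTER=${EcsCluster}" >> /etc/ecs/ecs.config
    # Install and start the Systems Manager agent so that State Manager
    # associations can install and configure the CloudWatch agent
    yum install -y https://s3.${AWS::Region}.amazonaws.com/amazon-ssm-${AWS::Region}/latest/linux_amd64/amazon-ssm-agent.rpm
    systemctl enable amazon-ssm-agent
    systemctl start amazon-ssm-agent
```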

You should make sure that the IAM role or instance profile applied to your Amazon ECS container instances includes the required `CloudWatchAgentServerPolicy` and `AmazonSSMManagedInstanceCore` policies. You can use the [ecs_cluster_with_cloudwatch_linux.yaml](https://github.com/aws-samples/logging-monitoring-apg-guide-examples/blob/main/examples/ecs/ecs_cluster_with_cloudwatch_linux.yaml) CloudFormation template to provision Linux-based Amazon ECS clusters. This template creates an Amazon ECS cluster with a custom launch configuration that installs Systems Manager and deploys a custom CloudWatch configuration to monitor log files specific to Amazon ECS.

You should capture the following logs for your Amazon ECS container instances, as well as your standard EC2 instance logs:
+ **Amazon ECS agent startup output** – `/var/log/ecs/ecs-init.log`
+ **Amazon ECS agent output** – `/var/log/ecs/ecs-agent.log`
+ **IAM credential provider requests log** – `/var/log/ecs/audit.log`

For more information about output level, formatting, and additional configuration options, see [Amazon ECS log file locations](https://docs.aws.amazon.com//AmazonECS/latest/developerguide/logs.html) in the Amazon ECS documentation.

**Important**  
 Agent installation or configuration is not required for the Fargate launch type because you don’t run or manage EC2 container instances.

Amazon ECS container instances should use the latest Amazon ECS optimized AMIs and container agent. AWS stores public Systems Manager Parameter Store parameters with Amazon ECS optimized AMI information, including the AMI ID. You can retrieve the latest Amazon ECS optimized AMI from Parameter Store by using the [Parameter Store parameter format](https://docs.aws.amazon.com//AmazonECS/latest/developerguide/retrieve-ecs-optimized_AMI.html) for Amazon ECS optimized AMIs. You can refer to the public Parameter Store parameter that references the most recent AMI or a specific AMI release in your CloudFormation templates.

AWS provides the same Parameter Store parameters in each supported Region. This means that CloudFormation templates that reference these parameters can be reused across Regions and accounts without updating the AMI ID. You can control the deployment of newer Amazon ECS AMIs to your organization by referring to a specific release, which helps you prevent the use of a new Amazon ECS optimized AMI until you test it. 
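
For example, a CloudFormation template can resolve the latest recommended AMI through a parameter of the SSM parameter type. This sketch uses the public Amazon Linux 2 parameter path; the instance type and resource name are illustrative:

```
Parameters:
  EcsAmiId:
    Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>
    Default: /aws/service/ecs/optimized-ami/amazon-linux-2/recommended/image_id

Resources:
  ContainerInstanceLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      # Resolved from Parameter Store at stack create or update time
      ImageId: !Ref EcsAmiId
      InstanceType: t3.medium
```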

# Amazon ECS container logs for EC2 and Fargate launch types
<a name="ec2-fargate-logs"></a>

Amazon ECS uses a task definition to deploy and manage containers as tasks and services. You configure the containers that you want to launch into your Amazon ECS cluster within a task definition. Logging is configured with a log driver at the container level. Multiple log driver options connect your containers to different logging systems (for example, `awslogs`, `fluentd`, `gelf`, `json-file`, `journald`, `logentries`, `splunk`, `syslog`, or `awsfirelens`), depending on whether you use the EC2 or Fargate launch type. The Fargate launch type supports a subset of these log drivers: `awslogs`, `splunk`, and `awsfirelens`. AWS provides the `awslogs` log driver to capture and transmit container output to CloudWatch Logs. Log driver settings enable you to customize the log group, Region, and log stream prefix along with many other options.

The default log group name, and the one used by the **Auto-configure CloudWatch Logs** option in the AWS Management Console, is `/ecs/<task_name>`. The log stream name used by Amazon ECS has the `<awslogs-stream-prefix>/<container_name>/<task_id>` format. We recommend that you use a group name that organizes your logs based on your organization's requirements. In the following table, the `image_name` and `image_tag` are included in the log stream's name.


| Setting | Recommended format | 
| --- |--- |
| Log group name | `/<Business unit>/<Project or application name>/<Environment>/<Cluster name>/<Task name>` | 
| Log stream name prefix |  `/<image_name>/<image_tag>`  | 
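
A container definition that applies the `awslogs` log driver with this naming scheme might look like the following sketch; the group, Region, and prefix values are illustrative:

```
"logConfiguration": {
  "logDriver": "awslogs",
  "options": {
    "awslogs-group": "/retail/checkout/prod/web-cluster/checkout-task",
    "awslogs-region": "us-east-1",
    "awslogs-stream-prefix": "/nginx/1.23.4"
  }
}
```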

This information is also available in the task definition. However, task definitions are regularly updated with new revisions, which means that an earlier revision might have used a different `image_name` and `image_tag` than the ones the task definition currently uses. For more information and naming suggestions, see the [Planning your CloudWatch deployment](planning-cloudwatch-deployment.md) section of this guide.

If you use a continuous integration and continuous delivery (CI/CD) pipeline or automated process, you can create a new task definition revision for your application with each new Docker image build. For example, you can include the Docker image name, image tag, GitHub revision, or other important information in your task definition revision and logging configuration as a part of your CI/CD process.
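
As an illustration, a CI/CD step could compute the log group name and stream prefix from build metadata before rendering the task definition revision. This is a hypothetical helper, not part of any AWS SDK:

```python
def build_log_naming(business_unit, app, environment, cluster, task,
                     image_name, image_tag):
    """Compose the recommended log group name and log stream prefix
    from organizational and Docker image build metadata."""
    log_group = "/" + "/".join([business_unit, app, environment, cluster, task])
    stream_prefix = f"/{image_name}/{image_tag}"
    return log_group, stream_prefix

# Example values that a pipeline might pass in from its build context
group, prefix = build_log_naming(
    "retail", "checkout", "prod", "web-cluster", "checkout-task",
    "nginx", "1.23.4")
print(group)   # /retail/checkout/prod/web-cluster/checkout-task
print(prefix)  # /nginx/1.23.4
```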

# Using custom log routing with FireLens for Amazon ECS
<a name="firelens-custom-log-routing"></a>

FireLens for Amazon ECS helps you route logs to [Fluentd](https://www.fluentd.org/) or [Fluent Bit](https://docs.fluentbit.io/manual) so that you can send container logs directly to AWS services and AWS Partner Network (APN) destinations, including CloudWatch Logs. 

AWS provides a [Docker image for Fluent Bit](https://docs.aws.amazon.com//AmazonECS/latest/developerguide/firelens-using-fluentbit.html) with pre-installed plugins for Amazon Kinesis Data Streams, Amazon Data Firehose, and CloudWatch Logs. You can use the FireLens log driver instead of the `awslogs` log driver for more customization and control over logs sent to CloudWatch Logs. 

For example, you can use the FireLens log driver to control the log format output. This means that an Amazon ECS container's CloudWatch logs are automatically formatted as JSON objects and include JSON-formatted properties for `ecs_cluster`, `ecs_task_arn`, `ecs_task_definition`, `container_id`, `container_name`, and `ec2_instance_id`. The Fluentd or Fluent Bit host is exposed to your container through the `FLUENT_HOST` and `FLUENT_PORT` environment variables when you specify the `awsfirelens` log driver. This means that you can log directly to the log router from your code by using Fluent logger libraries. For example, your application might include the `fluent-logger-python` library to log to Fluent Bit by using the values available from these environment variables.
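
A container definition that routes logs through FireLens to CloudWatch Logs might use options like the following sketch, which is based on the Fluent Bit `cloudwatch` output plugin; the group, Region, and prefix values are illustrative:

```
"logConfiguration": {
  "logDriver": "awsfirelens",
  "options": {
    "Name": "cloudwatch",
    "region": "us-east-1",
    "log_group_name": "/retail/checkout/prod/web-cluster/checkout-task",
    "log_stream_prefix": "/nginx/",
    "auto_create_group": "true"
  }
}
```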

If you choose to use FireLens for Amazon ECS, you can configure the same settings as the `awslogs` log driver [and use other settings as well](https://github.com/aws/amazon-cloudwatch-logs-for-fluent-bit). For example, you can use the [ecs-task-nginx-firelense.json](https://github.com/aws-samples/logging-monitoring-apg-guide-examples/blob/main/examples/ecs/ecs-task-nginx-firelense.json) Amazon ECS task definition that launches an NGINX server configured to use FireLens for logging to CloudWatch. It also launches a FireLens Fluent Bit container as a sidecar for logging. 

# Metrics for Amazon ECS
<a name="ecs-metrics"></a>

[Amazon ECS provides standard CloudWatch metrics](https://docs.aws.amazon.com//AmazonECS/latest/developerguide/cloudwatch-metrics.html) (for example, CPU and memory utilization) for the EC2 and Fargate launch types at the cluster and service level with the Amazon ECS container agent. You can also capture metrics for your services, tasks, and containers by using CloudWatch Container Insights, or capture your own custom container metrics by using the embedded metric format.

Container Insights is a CloudWatch feature that provides metrics such as CPU utilization, memory utilization, network traffic, and storage at the cluster, container instance, service, and task levels. Container Insights also creates automatic dashboards that help you analyze services and tasks, and see the average memory or CPU utilization at the container level. Container Insights publishes custom metrics to the `ECS/ContainerInsights` [custom namespace](https://docs.aws.amazon.com//AmazonECS/latest/developerguide/cloudwatch-metrics.html) that you can use for graphing, alarming, and dashboarding.

You can turn on Container Insights metrics by enabling Container Insights for each individual Amazon ECS cluster. If you also want to see metrics at the container instance level, you can [launch the CloudWatch agent as a daemon container on your Amazon ECS cluster](https://docs.aws.amazon.com//AmazonCloudWatch/latest/monitoring/deploy-container-insights-ECS-instancelevel.html). You can use the [cwagent-ecs-instance-metric-cfn.yaml](https://github.com/aws-samples/logging-monitoring-apg-guide-examples/blob/main/examples/ecs/cwagent-ecs-instance-metric-cfn.yaml) CloudFormation template to deploy the CloudWatch agent as an Amazon ECS service. Importantly, this example assumes that you created an appropriate custom CloudWatch agent configuration and stored it in Parameter Store with the key `ecs-cwagent-daemon-service`. 

The [CloudWatch agent](https://docs.aws.amazon.com//AmazonCloudWatch/latest/monitoring/Container-Insights-metrics-ECS.html) deployed as a daemon container for CloudWatch Container Insights includes additional disk, memory, and CPU metrics such as `instance_cpu_reserved_capacity` and `instance_memory_reserved_capacity` with the `ClusterName`, `ContainerInstanceId`, `InstanceId` dimensions. Metrics at the container instance level are implemented by Container Insights by using the CloudWatch embedded metric format. You can configure additional system-level metrics for your Amazon ECS container instances by using the approach from the [Set up State Manager and Distributor for CloudWatch agent deployment and configuration](install-cloudwatch-systems-manager.md#set-up-systems-manager-distributor) section of this guide. 

## Creating custom application metrics in Amazon ECS
<a name="ecs-metrics-applications"></a>

You can create custom metrics for your applications by using the [CloudWatch embedded metric format](https://docs.aws.amazon.com//AmazonCloudWatch/latest/monitoring/CloudWatch_Embedded_Metric_Format.html). The `awslogs` log driver can interpret CloudWatch embedded metric format statements.
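
An embedded metric format statement is plain JSON written to standard output, so a container that logs through `awslogs` can emit custom metrics without any agent. The following sketch constructs a minimal embedded metric format record by hand; the namespace, dimension, and metric names are illustrative:

```python
import json
import time

def emf_record(namespace, dimensions, metric_name, value, unit="Count"):
    """Build a minimal CloudWatch embedded metric format record.

    The "_aws" metadata tells CloudWatch Logs which log fields to
    extract as metrics; the remaining top-level keys carry the
    dimension and metric values themselves."""
    record = {
        "_aws": {
            "Timestamp": int(time.time() * 1000),
            "CloudWatchMetrics": [{
                "Namespace": namespace,
                "Dimensions": [list(dimensions.keys())],
                "Metrics": [{"Name": metric_name, "Unit": unit}],
            }],
        },
        metric_name: value,
    }
    record.update(dimensions)
    return record

# Printing the record as a single JSON log line is all the awslogs
# driver needs to ship it to CloudWatch Logs as a metric.
print(json.dumps(emf_record("MyApp", {"Service": "checkout"}, "OrdersProcessed", 7)))
```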

The `CW_CONFIG_CONTENT` environment variable in the following example is set to the contents of the `cwagentconfig` Systems Manager Parameter Store parameter. You can run the agent with this basic configuration to configure it as an embedded metric format endpoint. However, an agent sidecar is no longer required, because the `awslogs` log driver can send embedded metric format statements to CloudWatch Logs directly.

```
{
  "logs": {
    "metrics_collected": {
      "emf": {}
    }
  }
}
```

If you have Amazon ECS deployments across multiple accounts and Regions, you can use an AWS Secrets Manager secret to store your CloudWatch configuration and configure the secret policy to share it with your organization. You can use the secrets option in your task definition to set the `CW_CONFIG_CONTENT` variable. 
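
For example, the container definition in your task definition can inject the configuration through the `secrets` option. This is a sketch; the secret ARN is a placeholder for your shared secret:

```
"secrets": [
  {
    "name": "CW_CONFIG_CONTENT",
    "valueFrom": "arn:aws:secretsmanager:us-east-1:111122223333:secret:cwagentconfig"
  }
]
```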

You can use the AWS provided [open-source embedded metric format libraries](https://docs.aws.amazon.com//AmazonCloudWatch/latest/monitoring/CloudWatch_Embedded_Metric_Format_Libraries.html) in your application and specify the `AWS_EMF_AGENT_ENDPOINT` environment variable to connect to your CloudWatch agent sidecar container acting as an embedded metric format endpoint. For example, you can use the [ecs_cw_emf_example](https://github.com/aws-samples/logging-monitoring-apg-guide-examples/tree/main/examples/ecs/ecs_cw_emf_example) sample Python application to send metrics in embedded metric format to a CloudWatch agent sidecar container configured as an embedded metric format endpoint. 

The [Fluent Bit plugin](https://github.com/aws/amazon-cloudwatch-logs-for-fluent-bit) for CloudWatch can also be used to send embedded metric format messages. You can also use the [ecs_firelense_emf_example](https://github.com/aws-samples/logging-monitoring-apg-guide-examples/tree/main/examples/ecs/ecs_firelense_emf_example) sample Python application to send metrics in embedded metric format to a FireLens for Amazon ECS sidecar container.

If you don’t want to use embedded metric format, you can create and update CloudWatch metrics through the [AWS API](https://docs.aws.amazon.com//AmazonCloudWatch/latest/APIReference/Welcome.html) or [AWS SDK](https://aws.amazon.com/developer/tools/). We don't recommend this approach unless you have a specific use case, because it adds maintenance and management overhead to your code.