

# Tutorials for Amazon ECS
<a name="ecs-tutorials"></a>

The following tutorials show you how to perform common tasks when using Amazon ECS.

You can use any of the following tutorials to learn more about getting started with Amazon ECS.


| Tutorial overview | Learn more | 
| --- | --- | 
|  Get started with Amazon ECS on Fargate.  |  [Learn how to create an Amazon ECS Linux task for Fargate](getting-started-fargate.md)  | 
|  Get started with Windows containers on Fargate.  |  [Learn how to create an Amazon ECS Windows task for Fargate](Windows_fargate-getting_started.md)  | 
|  Get started with Windows containers for EC2.  |  [Learn how to create an Amazon ECS Windows task for EC2](getting-started-ecs-ec2-v2.md)  | 

You can use any of the following tutorials to deploy tasks on Amazon ECS using the AWS CLI.


| Tutorial overview | Learn more | 
| --- | --- | 
|  Create a Linux task for Fargate.  |  [Creating an Amazon ECS Linux task for Fargate with the AWS CLI](ECS_AWSCLI_Fargate.md)  | 
|  Create a Windows task for Fargate.  |  [Creating an Amazon ECS Windows task for Fargate with the AWS CLI](ECS_AWSCLI_Fargate_windows.md)  | 
|  Create a Linux task for EC2.  |  [Creating an Amazon ECS task for EC2 with the AWS CLI](ECS_AWSCLI_EC2.md)  | 

You can use any of the following tutorials to learn more about monitoring and logging.


| Tutorial overview | Learn more | 
| --- | --- | 
|  Set up a simple Lambda function that listens for task events and writes them out to a CloudWatch Logs log stream.  |  [Configuring Amazon ECS to listen for CloudWatch Events events](ecs_cwet.md)  | 
|  Configure an Amazon EventBridge event rule that only captures task events where the task has stopped running because one of its essential containers has terminated.   |  [Sending Amazon Simple Notification Service alerts for Amazon ECS task stopped events](ecs_cwet2.md)  | 
|  Concatenate log messages that originally belong to one context but were split across multiple records or log lines.  |  [Concatenating multiline or stack-trace Amazon ECS log messages](firelens-concatanate-multiline.md)  | 
|  Deploy Fluent Bit containers on Windows instances running in Amazon ECS to stream logs generated by Windows tasks to Amazon CloudWatch for centralized logging.  |  [Deploying Fluent Bit on Amazon ECS Windows containers](tutorial-deploy-fluentbit-on-windows.md)  | 

You can use any of the following tutorials to learn more about how to use Active Directory authentication with group Managed Service Account on Amazon ECS.


| Tutorial overview | Learn more | 
| --- | --- | 
|  Use group Managed Service Account with Linux containers on EC2.  |  [Using gMSA for EC2 Linux containers on Amazon ECS](linux-gmsa.md)  | 
|  Use group Managed Service Account with Windows containers on EC2.  |  [Learn how to use gMSAs for EC2 Windows containers for Amazon ECS](windows-gmsa.md)  | 
|  Use group Managed Service Account with Linux containers on Fargate.  |  [Using gMSA for Linux containers on Fargate](fargate-linux-gmsa.md)  | 
|  Create a task that runs a Windows container that has credentials to access Active Directory with domainless group Managed Service Account.  |  [Using Amazon ECS Windows containers with domainless gMSA using the AWS CLI](tutorial-gmsa-windows.md)  | 

# Creating an Amazon ECS Linux task for Fargate with the AWS CLI
<a name="ECS_AWSCLI_Fargate"></a>

The following steps help you set up a cluster, register a task definition, run a Linux task, and perform other common scenarios in Amazon ECS with the AWS CLI. Use the latest version of the AWS CLI. For more information on how to upgrade to the latest version, see [Installing or updating to the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

**Note**  
You can use dual-stack service endpoints to interact with Amazon ECS from the AWS CLI, SDKs, and the Amazon ECS API over both IPv4 and IPv6. For more information, see [Using Amazon ECS dual-stack endpoints](dual-stack-endpoint.md).

**Topics**
+ [Prerequisites](#ECS_AWSCLI_Fargate_prereq)
+ [Step 1: Create a Cluster](#ECS_AWSCLI_Fargate_create_cluster)
+ [Step 2: Register a Linux Task Definition](#ECS_AWSCLI_Fargate_register_task_definition)
+ [Step 3: List Task Definitions](#ECS_AWSCLI_Fargate_list_task_definitions)
+ [Step 4: Create a Service](#ECS_AWSCLI_Fargate_create_service)
+ [Step 5: List Services](#ECS_AWSCLI_Fargate_list_services)
+ [Step 6: Describe the Running Service](#ECS_AWSCLI_Fargate_describe_service)
+ [Step 7: Test](#ECS_AWSCLI_Fargate_test)
+ [Step 8: Clean Up](#ECS_AWSCLI_Fargate_clean_up)

## Prerequisites
<a name="ECS_AWSCLI_Fargate_prereq"></a>

This tutorial assumes that the following prerequisites have been completed.
+ The latest version of the AWS CLI is installed and configured. For more information about installing or upgrading your AWS CLI, see [Installing or updating to the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
+ The steps in [Set up to use Amazon ECS](get-set-up-for-amazon-ecs.md) have been completed.
+ Your IAM user has the required permissions specified in the [AmazonECS_FullAccess](security-iam-awsmanpol.md#security-iam-awsmanpol-AmazonECS_FullAccess) IAM policy example.
+ You have a VPC and security group created to use. This tutorial uses a container image hosted on Amazon ECR Public so your task must have internet access. To give your task a route to the internet, use one of the following options.
  + Use a private subnet with a NAT gateway that has an elastic IP address.
  + Use a public subnet and assign a public IP address to the task.

  For more information, see [Create a virtual private cloud](get-set-up-for-amazon-ecs.md#create-a-vpc).

  For information about security groups and rules, see [Default security groups for your VPCs](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#DefaultSecurityGroup) and [Example rules](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#security-group-rule-examples) in the *Amazon Virtual Private Cloud User Guide*.
+  If you follow this tutorial using a private subnet, you can use Amazon ECS Exec to directly interact with your container and test the deployment. You will need to create a task IAM role to use ECS Exec. For more information on the task IAM role and other prerequisites, see [Monitor Amazon ECS containers with Amazon ECS Exec](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-exec.html).
+ (Optional) AWS CloudShell is a tool that gives customers a command line without needing to create their own EC2 instance. For more information, see [What is AWS CloudShell?](https://docs.aws.amazon.com/cloudshell/latest/userguide/welcome.html) in the *AWS CloudShell User Guide*.

## Step 1: Create a Cluster
<a name="ECS_AWSCLI_Fargate_create_cluster"></a>

By default, your account receives a `default` cluster.

**Note**  
The benefit of using the `default` cluster that is provided for you is that you don't have to specify the `--cluster cluster_name` option in the subsequent commands. If you do create your own, non-default, cluster, you must specify `--cluster cluster_name` for each command that you intend to use with that cluster.

Create your own cluster with a unique name with the following command:

```
aws ecs create-cluster --cluster-name fargate-cluster
```

Output:

```
{
    "cluster": {
        "status": "ACTIVE", 
        "defaultCapacityProviderStrategy": [], 
        "statistics": [], 
        "capacityProviders": [], 
        "tags": [], 
        "clusterName": "fargate-cluster", 
        "settings": [
            {
                "name": "containerInsights", 
                "value": "disabled"
            }
        ], 
        "registeredContainerInstancesCount": 0, 
        "pendingTasksCount": 0, 
        "runningTasksCount": 0, 
        "activeServicesCount": 0, 
        "clusterArn": "arn:aws:ecs:region:aws_account_id:cluster/fargate-cluster"
    }
}
```
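As an optional sanity check (assuming your AWS credentials and default Region are already configured), you can confirm the cluster reached the `ACTIVE` state by filtering the **describe-clusters** output with `--query`:

```shell
# Optional check: print only the cluster status instead of the full JSON.
# Falls back to "unknown" if the call fails (for example, missing credentials).
STATUS=$(aws ecs describe-clusters --clusters fargate-cluster \
    --query 'clusters[0].status' --output text 2>/dev/null)
echo "fargate-cluster status: ${STATUS:-unknown}"
```

A status of `ACTIVE` means the cluster is ready for the following steps.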

## Step 2: Register a Linux Task Definition
<a name="ECS_AWSCLI_Fargate_register_task_definition"></a>

Before you can run a task on your Amazon ECS cluster, you must register a task definition. Task definitions are lists of containers grouped together. The following example is a simple task definition that creates a web application using the httpd container image hosted on Amazon ECR Public. For more information about the available task definition parameters, see [Amazon ECS task definitions](task_definitions.md). For this tutorial, the `taskRoleArn` is only needed if you are deploying the task in a private subnet and wish to test the deployment. Replace the `taskRoleArn` with the IAM task role you created to use ECS Exec as mentioned in [Prerequisites](#ECS_AWSCLI_Fargate_prereq).

```
 {
        "family": "sample-fargate",
        "networkMode": "awsvpc",
        "taskRoleArn": "arn:aws:iam::aws_account_id:role/execCommandRole", 
        "containerDefinitions": [
            {
                "name": "fargate-app",
                "image": "public.ecr.aws/docker/library/httpd:latest",
                "portMappings": [
                    {
                        "containerPort": 80,
                        "hostPort": 80,
                        "protocol": "tcp"
                    }
                ],
                "essential": true,
                "entryPoint": [
                    "sh",
                    "-c"
                ],
                "command": [
                    "/bin/sh -c \"echo '<html> <head> <title>Amazon ECS Sample App</title> <style>body {margin-top: 40px; background-color: #333;} </style> </head><body> <div style=color:white;text-align:center> <h1>Amazon ECS Sample App</h1> <h2>Congratulations!</h2> <p>Your application is now running on a container in Amazon ECS.</p> </div></body></html>' >  /usr/local/apache2/htdocs/index.html && httpd-foreground\""
                ]
            }
        ],
        "requiresCompatibilities": [
            "FARGATE"
        ],
        "cpu": "256",
        "memory": "512"
}
```

Save the task definition JSON as a file and pass it with the `--cli-input-json file://path_to_file.json` option. 

To use a JSON file for container definitions:

```
aws ecs register-task-definition --cli-input-json file://$HOME/tasks/fargate-task.json
```

The **register-task-definition** command returns a description of the task definition after it completes its registration.
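To confirm the registration succeeded, you can describe the new revision. The `--query` filter shown here is optional and only trims the response down to a few fields:

```shell
# Confirm the task definition registered; requires configured AWS credentials.
aws ecs describe-task-definition --task-definition sample-fargate:1 \
    --query '{family: taskDefinition.family, revision: taskDefinition.revision, status: taskDefinition.status}'
```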

## Step 3: List Task Definitions
<a name="ECS_AWSCLI_Fargate_list_task_definitions"></a>

You can list the task definitions for your account at any time with the **list-task-definitions** command. The output of this command shows the `family` and `revision` values that you can use together when calling **run-task** or **start-task**.

```
aws ecs list-task-definitions
```

Output:

```
{
    "taskDefinitionArns": [
        "arn:aws:ecs:region:aws_account_id:task-definition/sample-fargate:1"
    ]
}
```
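If your account has many task definitions, you can narrow the list to one family. This is a sketch using the `--family-prefix` and `--status` options of **list-task-definitions**:

```shell
# List only ACTIVE revisions of the sample-fargate family.
aws ecs list-task-definitions --family-prefix sample-fargate --status ACTIVE
```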

## Step 4: Create a Service
<a name="ECS_AWSCLI_Fargate_create_service"></a>

After you have registered a task definition for your account, you can create a service that runs it in your cluster. For this example, you create a service with one instance of the `sample-fargate:1` task definition running in your cluster. The task requires a route to the internet, which you can provide in one of two ways: use a private subnet configured with a NAT gateway (with an Elastic IP address) in a public subnet, or use a public subnet and assign a public IP address to the task. Both examples are provided below.

Example using a private subnet. The `--enable-execute-command` option is required to use Amazon ECS Exec.

```
aws ecs create-service --cluster fargate-cluster --service-name fargate-service --task-definition sample-fargate:1 --desired-count 1 --launch-type "FARGATE" --network-configuration "awsvpcConfiguration={subnets=[subnet-abcd1234],securityGroups=[sg-abcd1234]}" --enable-execute-command
```

Example using a public subnet.

```
aws ecs create-service --cluster fargate-cluster --service-name fargate-service --task-definition sample-fargate:1 --desired-count 1 --launch-type "FARGATE" --network-configuration "awsvpcConfiguration={subnets=[subnet-abcd1234],securityGroups=[sg-abcd1234],assignPublicIp=ENABLED}"
```

The **create-service** command returns a description of the service after it is created.
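Rather than polling manually, you can block until the service reaches a steady state with the built-in **services-stable** waiter (a sketch; assumes credentials and the names used in this tutorial; the waiter polls every 15 seconds and times out after 40 attempts):

```shell
# Wait until the service's running count matches its desired count.
aws ecs wait services-stable --cluster fargate-cluster --services fargate-service
```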

## Step 5: List Services
<a name="ECS_AWSCLI_Fargate_list_services"></a>

List the services for your cluster. You should see the service that you created in the previous section. You can take the service name or the full ARN that is returned from this command and use it to describe the service later.

```
aws ecs list-services --cluster fargate-cluster
```

Output:

```
{
    "serviceArns": [
        "arn:aws:ecs:region:aws_account_id:service/fargate-cluster/fargate-service"
    ]
}
```

## Step 6: Describe the Running Service
<a name="ECS_AWSCLI_Fargate_describe_service"></a>

Describe the service using the service name retrieved earlier to get more information about the task.

```
aws ecs describe-services --cluster fargate-cluster --services fargate-service
```

If successful, this command returns a description of the services and any failures. For example, in the `services` section, you will find information on deployments, such as the status of the tasks as running or pending. You may also find information on the task definition, the network configuration, and time-stamped events. In the `failures` section, you will find information on failures, if any, associated with the call. For troubleshooting, see [Service Event Messages](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-event-messages.html). For more information about the service description, see [DescribeServices](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_DescribeServices).

```
{
    "services": [
        {
            "networkConfiguration": {
                "awsvpcConfiguration": {
                    "subnets": [
                        "subnet-abcd1234"
                    ], 
                    "securityGroups": [
                        "sg-abcd1234"
                    ], 
                    "assignPublicIp": "ENABLED"
                }
            }, 
            "launchType": "FARGATE", 
            "enableECSManagedTags": false, 
            "loadBalancers": [], 
            "deploymentController": {
                "type": "ECS"
            }, 
            "desiredCount": 1, 
            "clusterArn": "arn:aws:ecs:region:aws_account_id:cluster/fargate-cluster", 
            "serviceArn": "arn:aws:ecs:region:aws_account_id:service/fargate-service", 
            "deploymentConfiguration": {
                "maximumPercent": 200, 
                "minimumHealthyPercent": 100
            }, 
            "createdAt": 1692283199.771, 
            "schedulingStrategy": "REPLICA", 
            "placementConstraints": [], 
            "deployments": [
                {
                    "status": "PRIMARY", 
                    "networkConfiguration": {
                        "awsvpcConfiguration": {
                            "subnets": [
                                "subnet-abcd1234"
                            ], 
                            "securityGroups": [
                                "sg-abcd1234"
                            ], 
                            "assignPublicIp": "ENABLED"
                        }
                    }, 
                    "pendingCount": 0, 
                    "launchType": "FARGATE", 
                    "createdAt": 1692283199.771, 
                    "desiredCount": 1, 
                    "taskDefinition": "arn:aws:ecs:region:aws_account_id:task-definition/sample-fargate:1", 
                    "updatedAt": 1692283199.771, 
                    "platformVersion": "1.4.0", 
                    "id": "ecs-svc/9223370526043414679", 
                    "runningCount": 0
                }
            ], 
            "serviceName": "fargate-service", 
            "events": [
                {
                    "message": "(service fargate-service) has started 2 tasks: (task 53c0de40-ea3b-489f-a352-623bf1235f08) (task d0aec985-901b-488f-9fb4-61b991b332a3).", 
                    "id": "92b8443e-67fb-4886-880c-07e73383ea83", 
                    "createdAt": 1510811841.408
                }, 
                {
                    "message": "(service fargate-service) has started 2 tasks: (task b4911bee-7203-4113-99d4-e89ba457c626) (task cc5853e3-6e2d-4678-8312-74f8a7d76474).", 
                    "id": "d85c6ec6-a693-43b3-904a-a997e1fc844d", 
                    "createdAt": 1510811601.938
                }, 
                {
                    "message": "(service fargate-service) has started 2 tasks: (task cba86182-52bf-42d7-9df8-b744699e6cfc) (task f4c1ad74-a5c6-4620-90cf-2aff118df5fc).", 
                    "id": "095703e1-0ca3-4379-a7c8-c0f1b8b95ace", 
                    "createdAt": 1510811364.691
                }
            ], 
            "runningCount": 0, 
            "status": "ACTIVE", 
            "serviceRegistries": [], 
            "pendingCount": 0, 
            "createdBy": "arn:aws:iam::aws_account_id:user/user_name", 
            "platformVersion": "LATEST", 
            "placementStrategy": [], 
            "propagateTags": "NONE", 
            "roleArn": "arn:aws:iam::aws_account_id:role/aws-service-role/ecs.amazonaws.com/AWSServiceRoleForECS", 
            "taskDefinition": "arn:aws:ecs:region:aws_account_id:task-definition/sample-fargate:1"
        }
    ], 
    "failures": []
}
```
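When you only need the rollout status, you can filter the same call down to a summary of the `PRIMARY` deployment with `--query` (optional; assumes the names used in this tutorial):

```shell
# Summarize the primary deployment instead of reading the full JSON.
aws ecs describe-services --cluster fargate-cluster --services fargate-service \
    --query 'services[0].deployments[0].{status: status, desired: desiredCount, running: runningCount, pending: pendingCount}'
```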

## Step 7: Test
<a name="ECS_AWSCLI_Fargate_test"></a>

### Testing task deployed using public subnet
<a name="ECS_AWSCLI_Fargate_test_public"></a>

Describe the task in the service so that you can get the Elastic Network Interface (ENI) for the task. 

First, get the task ARN.

```
aws ecs list-tasks --cluster fargate-cluster --service-name fargate-service
```

The output contains the task ARN.

```
{
    "taskArns": [
        "arn:aws:ecs:us-east-1:123456789012:task/fargate-service/EXAMPLE"
    ]
}
```

Describe the task and locate the ENI ID. Use the task ARN for the `tasks` parameter.

```
aws ecs describe-tasks --cluster fargate-cluster --tasks arn:aws:ecs:us-east-1:123456789012:task/fargate-service/EXAMPLE
```

The attachment information is listed in the output. 

```
{
    "tasks": [
        {
            "attachments": [
                {
                    "id": "d9e7735a-16aa-4128-bc7a-b2d5115029e9",
                    "type": "ElasticNetworkInterface",
                    "status": "ATTACHED",
                    "details": [
                        {
                            "name": "subnetId",
                            "value": "subnet-abcd1234"
                        },
                        {
                            "name": "networkInterfaceId",
                            "value": "eni-0fa40520aeEXAMPLE"
                        }
                    ]
                }
…
}
```

Describe the ENI to get the public IP address.

```
aws ec2 describe-network-interfaces --network-interface-ids eni-0fa40520aeEXAMPLE
```

The public IP address is in the output. 

```
{
    "NetworkInterfaces": [
        {
            "Association": {
                "IpOwnerId": "amazon",
                "PublicDnsName": "ec2-34-229-42-222.compute-1.amazonaws.com",
                "PublicIp": "198.51.100.2"
            },
…
}
```

Enter the public IP address in your web browser and you should see a webpage that displays the **Amazon ECS** sample application.
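The three lookups above (task ARN, then ENI ID, then public IP) can be chained into one short script. This is a sketch; it assumes configured AWS credentials and the cluster and service names used in this tutorial:

```shell
#!/bin/sh
# Chain the lookups: task ARN -> ENI ID -> public IP address.
TASK_ARN=$(aws ecs list-tasks --cluster fargate-cluster --service-name fargate-service \
    --query 'taskArns[0]' --output text)
# Pull the networkInterfaceId entry out of the task's attachment details.
ENI_ID=$(aws ecs describe-tasks --cluster fargate-cluster --tasks "$TASK_ARN" \
    --query "tasks[0].attachments[0].details[?name=='networkInterfaceId'].value | [0]" --output text)
PUBLIC_IP=$(aws ec2 describe-network-interfaces --network-interface-ids "$ENI_ID" \
    --query 'NetworkInterfaces[0].Association.PublicIp' --output text)
echo "Task is reachable at http://$PUBLIC_IP/"
```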

### Testing task deployed using private subnet
<a name="ECS_AWSCLI_Fargate_test_private"></a>

Describe the task and locate `managedAgents` to verify that the `ExecuteCommandAgent` is running. Note the `privateIPv4Address` for later use.

```
aws ecs describe-tasks --cluster fargate-cluster --tasks arn:aws:ecs:us-east-1:123456789012:task/fargate-service/EXAMPLE
```

 The managed agent information is listed in the output. 

```
{
     "tasks": [
        {
            "attachments": [
                {
                    "id": "d9e7735a-16aa-4128-bc7a-b2d5115029e9",
                    "type": "ElasticNetworkInterface",
                    "status": "ATTACHED",
                    "details": [
                        {
                            "name": "subnetId",
                            "value": "subnet-abcd1234"
                        },
                        {
                            "name": "networkInterfaceId",
                            "value": "eni-0fa40520aeEXAMPLE"
                        },
                        {
                            "name": "privateIPv4Address",
                            "value": "10.0.143.156"
                        }
                    ]
                }
            ],
     ...  
     "containers": [
         {
         ...
        "managedAgents": [
                        {
                            "lastStartedAt": "2023-08-01T16:10:13.002000+00:00",
                            "name": "ExecuteCommandAgent",
                            "lastStatus": "RUNNING"
                        } 
                ],
        ...
    }
```

After verifying that the `ExecuteCommandAgent` is running, you can run the following command to start an interactive shell on the container in the task.

```
aws ecs execute-command --cluster fargate-cluster \
    --task arn:aws:ecs:us-east-1:123456789012:task/fargate-service/EXAMPLE \
    --container fargate-app \
    --interactive \
    --command "/bin/sh"
```

After the interactive shell is running, run the following commands to install cURL.

```
apt update 
```

```
apt install curl 
```

 After installing cURL, run the following command using the private IP address you obtained earlier.

```
 curl 10.0.143.156 
```

You should see the HTML equivalent of the **Amazon ECS** sample application webpage.

```
<html>
    <head> 
     <title>Amazon ECS Sample App</title> 
     <style>body {margin-top: 40px; background-color: #333;} </style>
    </head>
      <body> 
      <div style=color:white;text-align:center> 
      <h1>Amazon ECS Sample App</h1> 
      <h2>Congratulations!</h2> <p>Your application is now running on a container in Amazon ECS.</p> 
      </div>
      </body>
</html>
```

## Step 8: Clean Up
<a name="ECS_AWSCLI_Fargate_clean_up"></a>

When you are finished with this tutorial, you should clean up the associated resources to avoid incurring charges for unused resources.

Delete the service.

```
aws ecs delete-service --cluster fargate-cluster --service fargate-service --force
```

Delete the cluster.

```
aws ecs delete-cluster --cluster fargate-cluster
```
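Optionally, you can also deregister the task definition revision you created. Deregistering marks the revision `INACTIVE`, so it can no longer be used to launch new tasks or services:

```shell
# Mark the sample-fargate:1 revision INACTIVE so it cannot launch new tasks.
aws ecs deregister-task-definition --task-definition sample-fargate:1
```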

# Creating an Amazon ECS Windows task for Fargate with the AWS CLI
<a name="ECS_AWSCLI_Fargate_windows"></a>

The following steps help you set up a cluster, register a task definition, run a Windows task, and perform other common scenarios in Amazon ECS with the AWS CLI. Ensure that you are using the latest version of the AWS CLI. For more information on how to upgrade to the latest version, see [Installing or updating to the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

**Note**  
You can use dual-stack service endpoints to interact with Amazon ECS from the AWS CLI, SDKs, and the Amazon ECS API over both IPv4 and IPv6. For more information, see [Using Amazon ECS dual-stack endpoints](dual-stack-endpoint.md).

**Topics**
+ [Prerequisites](#ECS_AWSCLI_Fargate_windows_prereq)
+ [Step 1: Create a Cluster](#ECS_AWSCLI_Fargate_windows_create_cluster)
+ [Step 2: Register a Windows Task Definition](#ECS_AWSCLI_Fargate_windows_register_task_definition)
+ [Step 3: List task definitions](#ECS_AWSCLI_Fargate_windows__list_task_definitions)
+ [Step 4: Create a service](#ECS_AWSCLI_Fargate_windows_create_service)
+ [Step 5: List services](#ECS_AWSCLI_Fargate_windows_list_services)
+ [Step 6: Describe the Running Service](#ECS_AWSCLI_Fargate_windows_describe_service)
+ [Step 7: Clean Up](#ECS_AWSCLI_Fargate_windows_clean_up)

## Prerequisites
<a name="ECS_AWSCLI_Fargate_windows_prereq"></a>

This tutorial assumes that the following prerequisites have been completed.
+ The latest version of the AWS CLI is installed and configured. For more information about installing or upgrading your AWS CLI, see [Installing or updating to the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
+ The steps in [Set up to use Amazon ECS](get-set-up-for-amazon-ecs.md) have been completed.
+ Your IAM user has the required permissions specified in the [AmazonECS_FullAccess](security-iam-awsmanpol.md#security-iam-awsmanpol-AmazonECS_FullAccess) IAM policy example.
+ You have a VPC and security group created to use. This tutorial uses a container image hosted on Docker Hub so your task must have internet access. To give your task a route to the internet, use one of the following options.
  + Use a private subnet with a NAT gateway that has an elastic IP address.
  + Use a public subnet and assign a public IP address to the task.

  For more information, see [Create a virtual private cloud](get-set-up-for-amazon-ecs.md#create-a-vpc).

  For information about security groups and rules, see [Default security groups for your VPCs](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#DefaultSecurityGroup) and [Example rules](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#security-group-rule-examples) in the *Amazon Virtual Private Cloud User Guide*.
+ (Optional) AWS CloudShell is a tool that gives customers a command line without needing to create their own EC2 instance. For more information, see [What is AWS CloudShell?](https://docs.aws.amazon.com/cloudshell/latest/userguide/welcome.html) in the *AWS CloudShell User Guide*.

## Step 1: Create a Cluster
<a name="ECS_AWSCLI_Fargate_windows_create_cluster"></a>

By default, your account receives a `default` cluster.

**Note**  
The benefit of using the `default` cluster that is provided for you is that you don't have to specify the `--cluster cluster_name` option in the subsequent commands. If you do create your own, non-default, cluster, you must specify `--cluster cluster_name` for each command that you intend to use with that cluster.

Create your own cluster with a unique name with the following command:

```
aws ecs create-cluster --cluster-name fargate-cluster
```

Output:

```
{
    "cluster": {
        "status": "ACTIVE", 
        "statistics": [], 
        "clusterName": "fargate-cluster", 
        "registeredContainerInstancesCount": 0, 
        "pendingTasksCount": 0, 
        "runningTasksCount": 0, 
        "activeServicesCount": 0, 
        "clusterArn": "arn:aws:ecs:region:aws_account_id:cluster/fargate-cluster"
    }
}
```

## Step 2: Register a Windows Task Definition
<a name="ECS_AWSCLI_Fargate_windows_register_task_definition"></a>

Before you can run a Windows task on your Amazon ECS cluster, you must register a task definition. Task definitions are lists of containers grouped together. The following example is a simple task definition that creates a web app. For more information about the available task definition parameters, see [Amazon ECS task definitions](task_definitions.md).

```
{
    "containerDefinitions": [
        {
            "command": ["New-Item -Path C:\\inetpub\\wwwroot\\index.html -Type file -Value '<html> <head> <title>Amazon ECS Sample App</title> <style>body {margin-top: 40px; background-color: #333;} </style> </head><body> <div style=color:white;text-align:center> <h1>Amazon ECS Sample App</h1> <h2>Congratulations!</h2> <p>Your application is now running on a container in Amazon ECS.</p>'; C:\\ServiceMonitor.exe w3svc"],
            "entryPoint": [
                "powershell",
                "-Command"
            ],
            "essential": true,
            "cpu": 2048,
            "memory": 4096,
            "image": "mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019",
            "name": "sample_windows_app",
            "portMappings": [
                {
                    "hostPort": 80,
                    "containerPort": 80,
                    "protocol": "tcp"
                }
            ]
        }
    ],
    "memory": "4096",
    "cpu": "2048",
    "networkMode": "awsvpc",
    "family": "windows-simple-iis-2019-core",
    "executionRoleArn": "arn:aws:iam::012345678910:role/ecsTaskExecutionRole",
    "runtimePlatform": {"operatingSystemFamily": "WINDOWS_SERVER_2019_CORE"},
    "requiresCompatibilities": ["FARGATE"]
}
```

Save the task definition JSON as a file and pass it with the `--cli-input-json file://path_to_file.json` option.

To use a JSON file for container definitions:

```
aws ecs register-task-definition --cli-input-json file://$HOME/tasks/fargate-task.json
```

The **register-task-definition** command returns a description of the task definition after it completes its registration.
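To verify that the Windows runtime platform was recorded on the registered revision, you can describe it with a `--query` filter (optional; assumes configured AWS credentials):

```shell
# Print only the runtimePlatform block of the registered revision.
aws ecs describe-task-definition --task-definition windows-simple-iis-2019-core \
    --query 'taskDefinition.runtimePlatform'
```

The output should show `WINDOWS_SERVER_2019_CORE` as the `operatingSystemFamily`.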

## Step 3: List task definitions
<a name="ECS_AWSCLI_Fargate_windows__list_task_definitions"></a>

You can list the task definitions for your account at any time with the **list-task-definitions** command. The output of this command shows the `family` and `revision` values that you can use together when calling **run-task** or **start-task**.

```
aws ecs list-task-definitions
```

Output:

```
{
    "taskDefinitionArns": [
        "arn:aws:ecs:region:aws_account_id:task-definition/sample-fargate-windows:1"
    ]
}
```

## Step 4: Create a service
<a name="ECS_AWSCLI_Fargate_windows_create_service"></a>

After you have registered a task definition for your account, you can create a service that runs it in your cluster. For this example, you create a service with one instance of the `sample-fargate-windows:1` task definition running in your cluster. The task requires a route to the internet, which you can provide in one of two ways: use a private subnet configured with a NAT gateway (with an Elastic IP address) in a public subnet, or use a public subnet and assign a public IP address to the task. Both examples are provided below.

Example using a private subnet.

```
aws ecs create-service --cluster fargate-cluster --service-name fargate-service --task-definition sample-fargate-windows:1 --desired-count 1 --launch-type "FARGATE" --network-configuration "awsvpcConfiguration={subnets=[subnet-abcd1234],securityGroups=[sg-abcd1234]}"
```

Example using a public subnet.

```
aws ecs create-service --cluster fargate-cluster --service-name fargate-service --task-definition sample-fargate-windows:1 --desired-count 1 --launch-type "FARGATE" --network-configuration "awsvpcConfiguration={subnets=[subnet-abcd1234],securityGroups=[sg-abcd1234],assignPublicIp=ENABLED}"
```

The **create-service** command returns a description of the service after it is created.

## Step 5: List services
<a name="ECS_AWSCLI_Fargate_windows_list_services"></a>

List the services for your cluster. You should see the service that you created in the previous section. You can take the service name or the full ARN that is returned from this command and use it to describe the service later.

```
aws ecs list-services --cluster fargate-cluster
```

Output:

```
{
    "serviceArns": [
        "arn:aws:ecs:region:aws_account_id:service/fargate-service"
    ]
}
```

## Step 6: Describe the running service
<a name="ECS_AWSCLI_Fargate_windows_describe_service"></a>

Describe the service using the service name retrieved earlier to get more information about the task.

```
aws ecs describe-services --cluster fargate-cluster --services fargate-service
```

If successful, this command returns a description of the service in the `services` section, including information about deployments, such as whether tasks are running or pending, along with the task definition, the network configuration, and time-stamped events. The `failures` section lists any failures associated with the call. For troubleshooting, see [Service Event Messages](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-event-messages.html). For more information about the service description, see [Describe Services](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_DescribeServices). 

```
{
    "services": [
        {
            "status": "ACTIVE", 
            "taskDefinition": "arn:aws:ecs:region:aws_account_id:task-definition/sample-fargate-windows:1", 
            "pendingCount": 2, 
            "launchType": "FARGATE", 
            "loadBalancers": [], 
            "roleArn": "arn:aws:iam::aws_account_id:role/aws-service-role/ecs.amazonaws.com/AWSServiceRoleForECS", 
            "placementConstraints": [], 
            "createdAt": 1510811361.128, 
            "desiredCount": 2, 
            "networkConfiguration": {
                "awsvpcConfiguration": {
                    "subnets": [
                        "subnet-abcd1234"
                    ], 
                    "securityGroups": [
                        "sg-abcd1234"
                    ], 
                    "assignPublicIp": "DISABLED"
                }
            }, 
            "platformVersion": "LATEST", 
            "serviceName": "fargate-service", 
            "clusterArn": "arn:aws:ecs:region:aws_account_id:cluster/fargate-cluster", 
            "serviceArn": "arn:aws:ecs:region:aws_account_id:service/fargate-service", 
            "deploymentConfiguration": {
                "maximumPercent": 200, 
                "minimumHealthyPercent": 100
            }, 
            "deployments": [
                {
                    "status": "PRIMARY", 
                    "networkConfiguration": {
                        "awsvpcConfiguration": {
                            "subnets": [
                                "subnet-abcd1234"
                            ], 
                            "securityGroups": [
                                "sg-abcd1234"
                            ], 
                            "assignPublicIp": "DISABLED"
                        }
                    }, 
                    "pendingCount": 2, 
                    "launchType": "FARGATE", 
                    "createdAt": 1510811361.128, 
                    "desiredCount": 2, 
                    "taskDefinition": "arn:aws:ecs:region:aws_account_id:task-definition/sample-fargate-windows:1", 
                    "updatedAt": 1510811361.128, 
                    "platformVersion": "0.0.1", 
                    "id": "ecs-svc/9223370526043414679", 
                    "runningCount": 0
                }
            ], 
            "events": [
                {
                    "message": "(service fargate-service) has started 2 tasks: (task 53c0de40-ea3b-489f-a352-623bf1235f08) (task d0aec985-901b-488f-9fb4-61b991b332a3).", 
                    "id": "92b8443e-67fb-4886-880c-07e73383ea83", 
                    "createdAt": 1510811841.408
                }, 
                {
                    "message": "(service fargate-service) has started 2 tasks: (task b4911bee-7203-4113-99d4-e89ba457c626) (task cc5853e3-6e2d-4678-8312-74f8a7d76474).", 
                    "id": "d85c6ec6-a693-43b3-904a-a997e1fc844d", 
                    "createdAt": 1510811601.938
                }, 
                {
                    "message": "(service fargate-service) has started 2 tasks: (task cba86182-52bf-42d7-9df8-b744699e6cfc) (task f4c1ad74-a5c6-4620-90cf-2aff118df5fc).", 
                    "id": "095703e1-0ca3-4379-a7c8-c0f1b8b95ace", 
                    "createdAt": 1510811364.691
                }
            ], 
            "runningCount": 0, 
            "placementStrategy": []
        }
    ], 
    "failures": []
}
```
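A deployment has converged when the `PRIMARY` deployment's `runningCount` matches its `desiredCount`. A minimal sketch of that check against an abbreviated `describe-services` response (the fields below are a subset of the output above):

```python
import json

# Abbreviated describe-services response, as returned by the AWS CLI.
response = json.loads("""
{
    "services": [
        {
            "serviceName": "fargate-service",
            "desiredCount": 2,
            "runningCount": 0,
            "deployments": [
                {"status": "PRIMARY", "desiredCount": 2, "runningCount": 0}
            ]
        }
    ],
    "failures": []
}
""")

service = response["services"][0]
# Find the PRIMARY deployment and compare its running count to its desired count.
primary = next(d for d in service["deployments"] if d["status"] == "PRIMARY")
converged = primary["runningCount"] == primary["desiredCount"]

print(f"{service['serviceName']} converged: {converged}")  # fargate-service converged: False
```

Re-running **describe-services** until this check passes is a simple way to wait for the tasks to reach `RUNNING`.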

## Step 7: Clean up
<a name="ECS_AWSCLI_Fargate_windows_clean_up"></a>

When you are finished with this tutorial, you should clean up the associated resources to avoid incurring charges for unused resources.

Delete the service.

```
aws ecs delete-service --cluster fargate-cluster --service fargate-service --force
```

Delete the cluster.

```
aws ecs delete-cluster --cluster fargate-cluster
```

# Creating an Amazon ECS task for EC2 with the AWS CLI
<a name="ECS_AWSCLI_EC2"></a>

The following steps help you set up a cluster, register a task definition, run a task, and perform other common scenarios in Amazon ECS with the AWS CLI. Use the latest version of the AWS CLI. For more information on how to upgrade to the latest version, see [Installing or updating to the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

**Note**  
You can use dual-stack service endpoints to interact with Amazon ECS from the AWS CLI, SDKs, and the Amazon ECS API over both IPv4 and IPv6. For more information, see [Using Amazon ECS dual-stack endpoints](dual-stack-endpoint.md).

**Topics**
+ [Prerequisites](#AWSCLI_EC2_prereq)
+ [Create a cluster](#AWSCLI_EC2_create_cluster)
+ [Launch a container instance with the Amazon ECS AMI](#AWSCLI_EC2_launch_container_instance)
+ [List container instances](#AWSCLI_EC2_list_container_instances)
+ [Describe your container instance](#AWSCLI_EC2_describe_container_instance)
+ [Register a task definition](#AWSCLI_EC2_register_task_definition)
+ [List task definitions](#AWSCLI_EC2_list_task_definitions)
+ [Create a service](#AWSCLI_EC2_run_task)
+ [List services](#AWSCLI_EC2_list_tasks)
+ [Describe the service](#AWSCLI_EC2_describe_service)
+ [Describe the running task](#AWSCLI_EC2_describe_task)
+ [Test the web server](#AWSCLI_EC2_test_web_server)
+ [Clean up resources](#AWSCLI_EC2_clean_up_resources)

## Prerequisites
<a name="AWSCLI_EC2_prereq"></a>

This tutorial assumes that the following prerequisites have been completed:
+ The latest version of the AWS CLI is installed and configured. For more information about installing or upgrading your AWS CLI, see [Installing or updating to the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
+ The steps in [Set up to use Amazon ECS](get-set-up-for-amazon-ecs.md) have been completed.
+ Your IAM user has the required permissions specified in the [AmazonECS\_FullAccess](security-iam-awsmanpol.md#security-iam-awsmanpol-AmazonECS_FullAccess) IAM policy example.
+ You have a container instance IAM role created to use. For more information, see [Amazon ECS container instance IAM role](instance_IAM_role.md).
+ You have a VPC created to use. For more information, see [Create a virtual private cloud](get-set-up-for-amazon-ecs.md#create-a-vpc).
+ (Optional) AWS CloudShell is a browser-based shell that you can use to run AWS CLI commands without launching your own EC2 instance. For more information, see [What is AWS CloudShell?](https://docs.aws.amazon.com/cloudshell/latest/userguide/welcome.html) in the *AWS CloudShell User Guide*.

## Create a cluster
<a name="AWSCLI_EC2_create_cluster"></a>

By default, your account receives a `default` cluster when you launch your first container instance.

**Note**  
The benefit of using the `default` cluster that is provided for you is that you don't have to specify the `--cluster cluster_name` option in the subsequent commands. If you do create your own, non-default, cluster, you must specify `--cluster cluster_name` for each command that you intend to use with that cluster.

Create your own cluster with a unique name with the following command:

```
aws ecs create-cluster --cluster-name MyCluster
```

Output:

```
{
    "cluster": {
        "clusterName": "MyCluster",
        "status": "ACTIVE",
        "clusterArn": "arn:aws:ecs:region:aws_account_id:cluster/MyCluster"
    }
}
```

## Launch a container instance with the Amazon ECS AMI
<a name="AWSCLI_EC2_launch_container_instance"></a>

Container instances are EC2 instances that run the Amazon ECS container agent and have been registered into a cluster. In this section, you'll launch an EC2 instance using the ECS-optimized AMI.

**To launch a container instance with the AWS CLI**

1. Retrieve the latest ECS-optimized Amazon Linux 2 AMI ID for your AWS Region using the following command. This command uses AWS Systems Manager Parameter Store to get the latest ECS-optimized AMI ID. The AMI includes the Amazon ECS container agent and Docker runtime pre-installed.

   ```
   aws ssm get-parameters --names /aws/service/ecs/optimized-ami/amazon-linux-2/recommended --query 'Parameters[0].Value' --output text | jq -r '.image_id'
   ```

   Output:

   ```
   ami-abcd1234
   ```

1. Create a security group that allows SSH access for managing your container instance and HTTP access for the web server.

   ```
   aws ec2 create-security-group --group-name ecs-tutorial-sg --description "ECS tutorial security group"
   ```

   Output:

   ```
   {
       "GroupId": "sg-abcd1234"
   }
   ```

1. Add inbound rules to the security group to allow SSH access for managing your container instance and HTTP access for the web server.

   ```
   aws ec2 authorize-security-group-ingress --group-id sg-abcd1234 --protocol tcp --port 22 --cidr 0.0.0.0/0
   aws ec2 authorize-security-group-ingress --group-id sg-abcd1234 --protocol tcp --port 80 --cidr 0.0.0.0/0
   ```

   Output for the HTTP rule:

   ```
   {
       "Return": true,
       "SecurityGroupRules": [
           {
               "SecurityGroupRuleId": "sgr-efgh5678",
               "GroupId": "sg-abcd1234",
               "GroupOwnerId": "123456789012",
               "IsEgress": false,
               "IpProtocol": "tcp",
               "FromPort": 80,
               "ToPort": 80,
               "CidrIpv4": "0.0.0.0/0"
           }
       ]
   }
   ```

   The security group now allows SSH and HTTP access from anywhere. In a production environment, restrict SSH access to your own IP address and limit HTTP access as needed.

1. Create an EC2 key pair for SSH access to your container instance.

   ```
   aws ec2 create-key-pair --key-name ecs-tutorial-key --query 'KeyMaterial' --output text > ecs-tutorial-key.pem
   chmod 400 ecs-tutorial-key.pem
   ```

   The private key is saved to your local machine with appropriate permissions for SSH access.

1. Launch an EC2 instance using the ECS-optimized AMI and configure it to join your cluster.

   ```
   aws ec2 run-instances --image-id ami-abcd1234 --instance-type t3.micro --key-name ecs-tutorial-key --security-group-ids sg-abcd1234 --iam-instance-profile Name=ecsInstanceRole --user-data '#!/bin/bash
   echo ECS_CLUSTER=MyCluster >> /etc/ecs/ecs.config'
   ```

   Output:

   ```
   {
       "Instances": [
           {
               "InstanceId": "i-abcd1234",
               "ImageId": "ami-abcd1234",
               "State": {
                   "Code": 0,
                   "Name": "pending"
               },
               "PrivateDnsName": "",
               "PublicDnsName": "",
               "StateReason": {
                   "Code": "pending",
                   "Message": "pending"
               },
               "InstanceType": "t3.micro",
               "KeyName": "ecs-tutorial-key",
               "LaunchTime": "2025-01-13T10:30:00.000Z"
           }
       ]
   }
   ```

   The user data script configures the Amazon ECS container agent to register the instance with your `MyCluster` cluster. The instance uses the `ecsInstanceRole` IAM role, which provides the necessary permissions for the agent.
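The AMI parameter value retrieved in step 1 is itself a JSON document, which is why the command pipes it through `jq` to pull out `image_id`. The same extraction in Python, using an abbreviated sample value (the fields shown are illustrative, not the full parameter contents):

```python
import json

# Abbreviated SSM parameter value for the recommended ECS-optimized AMI.
# The real value includes additional fields; image_id is the one this tutorial needs.
parameter_value = '{"image_id": "ami-abcd1234", "os": "Amazon Linux 2"}'

ami_id = json.loads(parameter_value)["image_id"]
print(ami_id)  # ami-abcd1234
```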

## List container instances
<a name="AWSCLI_EC2_list_container_instances"></a>

Within a few minutes of launching your container instance, the Amazon ECS agent registers the instance with your MyCluster cluster. You can list the container instances in a cluster by running the following command:

```
aws ecs list-container-instances --cluster MyCluster
```

Output:

```
{
    "containerInstanceArns": [
        "arn:aws:ecs:us-east-1:aws_account_id:container-instance/container_instance_ID"
    ]
}
```

## Describe your container instance
<a name="AWSCLI_EC2_describe_container_instance"></a>

After you have the ARN or ID of a container instance, you can use the **describe-container-instances** command to get valuable information on the instance, such as remaining and registered CPU and memory resources.

```
aws ecs describe-container-instances --cluster MyCluster --container-instances container_instance_ID
```

Output:

```
{
    "failures": [],
    "containerInstances": [
        {
            "status": "ACTIVE",
            "registeredResources": [
                {
                    "integerValue": 1024,
                    "longValue": 0,
                    "type": "INTEGER",
                    "name": "CPU",
                    "doubleValue": 0.0
                },
                {
                    "integerValue": 995,
                    "longValue": 0,
                    "type": "INTEGER",
                    "name": "MEMORY",
                    "doubleValue": 0.0
                },
                {
                    "name": "PORTS",
                    "longValue": 0,
                    "doubleValue": 0.0,
                    "stringSetValue": [
                        "22",
                        "2376",
                        "2375",
                        "51678"
                    ],
                    "type": "STRINGSET",
                    "integerValue": 0
                },
                {
                    "name": "PORTS_UDP",
                    "longValue": 0,
                    "doubleValue": 0.0,
                    "stringSetValue": [],
                    "type": "STRINGSET",
                    "integerValue": 0
                }
            ],
            "ec2InstanceId": "instance_id",
            "agentConnected": true,
            "containerInstanceArn": "arn:aws:ecs:us-east-1:aws_account_id:container-instance/container_instance_ID",
            "pendingTasksCount": 0,
            "remainingResources": [
                {
                    "integerValue": 1024,
                    "longValue": 0,
                    "type": "INTEGER",
                    "name": "CPU",
                    "doubleValue": 0.0
                },
                {
                    "integerValue": 995,
                    "longValue": 0,
                    "type": "INTEGER",
                    "name": "MEMORY",
                    "doubleValue": 0.0
                },
                {
                    "name": "PORTS",
                    "longValue": 0,
                    "doubleValue": 0.0,
                    "stringSetValue": [
                        "22",
                        "2376",
                        "2375",
                        "51678"
                    ],
                    "type": "STRINGSET",
                    "integerValue": 0
                },
                {
                    "name": "PORTS_UDP",
                    "longValue": 0,
                    "doubleValue": 0.0,
                    "stringSetValue": [],
                    "type": "STRINGSET",
                    "integerValue": 0
                }
            ],
            "runningTasksCount": 0,
            "attributes": [
                {
                    "name": "com.amazonaws.ecs.capability.privileged-container"
                },
                {
                    "name": "com.amazonaws.ecs.capability.docker-remote-api.1.17"
                },
                {
                    "name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
                },
                {
                    "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
                },
                {
                    "name": "com.amazonaws.ecs.capability.logging-driver.json-file"
                },
                {
                    "name": "com.amazonaws.ecs.capability.logging-driver.syslog"
                }
            ],
            "versionInfo": {
                "agentVersion": "1.5.0",
                "agentHash": "b197edd",
                "dockerVersion": "DockerVersion: 1.7.1"
            }
        }
    ]
}
```

You can also find the Amazon EC2 instance ID that you can use to monitor the instance in the Amazon EC2 console or with the **aws ec2 describe-instances --instance-ids *instance\_id*** command.

## Register a task definition
<a name="AWSCLI_EC2_register_task_definition"></a>

Before you can run a task on your Amazon ECS cluster, you must register a task definition. Task definitions are lists of containers grouped together. The following example is a simple task definition that uses an `nginx` image. For more information about the available task definition parameters, see [Amazon ECS task definitions](task_definitions.md).

```
{
    "family": "nginx-task",
    "containerDefinitions": [
        {
            "name": "nginx",
            "image": "public.ecr.aws/docker/library/nginx:latest",
            "cpu": 256,
            "memory": 512,
            "essential": true,
            "portMappings": [
                {
                    "containerPort": 80,
                    "hostPort": 80,
                    "protocol": "tcp"
                }
            ]
        }
    ],
    "requiresCompatibilities": ["EC2"],
    "networkMode": "bridge"
}
```

The above example JSON can be passed to the AWS CLI in two ways: You can save the task definition JSON as a file and pass it with the `--cli-input-json file://path_to_file.json` option. Or, you can escape the quotation marks in the JSON and pass the JSON container definitions on the command line. If you choose to pass the container definitions on the command line, your command additionally requires a `--family` parameter that is used to keep multiple versions of your task definition associated with each other.

To use a JSON file for container definitions:

```
aws ecs register-task-definition --cli-input-json file://$HOME/tasks/nginx.json
```
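Because a malformed file fails only at registration time, it can help to sanity-check the document locally first. A minimal validation sketch (the checks below are illustrative, not the full set that Amazon ECS applies):

```python
import json

# Sanity-check a task definition document before registering it.
# In practice you would read this from the file passed to --cli-input-json.
task_def = json.loads("""
{
    "family": "nginx-task",
    "containerDefinitions": [
        {"name": "nginx", "memory": 512, "essential": true}
    ],
    "requiresCompatibilities": ["EC2"],
    "networkMode": "bridge"
}
""")

assert task_def.get("family"), "family is required"
assert task_def.get("containerDefinitions"), "at least one container is required"
assert any(c.get("essential") for c in task_def["containerDefinitions"]), \
    "at least one essential container is required"
print("task definition looks valid")
```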

The **register-task-definition** command returns a description of the task definition after it completes the registration.

```
{
    "taskDefinition": {
        "taskDefinitionArn": "arn:aws:ecs:us-east-1:123456789012:task-definition/nginx-task:1",
        "family": "nginx-task",
        "revision": 1,
        "status": "ACTIVE",
        "containerDefinitions": [
            {
                "name": "nginx",
                "image": "public.ecr.aws/docker/library/nginx:latest",
                "cpu": 256,
                "memory": 512,
                "essential": true,
                "portMappings": [
                    {
                        "containerPort": 80,
                        "hostPort": 80,
                        "protocol": "tcp"
                    }
                ],
                "environment": [],
                "mountPoints": [],
                "volumesFrom": []
            }
        ],
        "volumes": [],
        "networkMode": "bridge",
        "compatibilities": [
            "EC2"
        ],
        "requiresCompatibilities": [
            "EC2"
        ]
    }
}
```

## List task definitions
<a name="AWSCLI_EC2_list_task_definitions"></a>

You can list the task definitions for your account at any time with the **list-task-definitions** command. The output of this command shows the `family` and `revision` values that you can use together when calling **create-service**.

```
aws ecs list-task-definitions
```

Output:

```
{
    "taskDefinitionArns": [
        "arn:aws:ecs:us-east-1:aws_account_id:task-definition/sleep360:1",
        "arn:aws:ecs:us-east-1:aws_account_id:task-definition/sleep360:2",
        "arn:aws:ecs:us-east-1:aws_account_id:task-definition/nginx-task:1",
        "arn:aws:ecs:us-east-1:aws_account_id:task-definition/wordpress:3",
        "arn:aws:ecs:us-east-1:aws_account_id:task-definition/wordpress:4",
        "arn:aws:ecs:us-east-1:aws_account_id:task-definition/wordpress:5",
        "arn:aws:ecs:us-east-1:aws_account_id:task-definition/wordpress:6"
    ]
}
```
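When a family accumulates several revisions, as `wordpress` does above, you often want the newest one. An illustrative sketch that finds the latest revision per family from the listed ARNs (plain string parsing, not an AWS API call):

```python
# Task definition ARNs end in .../family:revision.
arns = [
    "arn:aws:ecs:us-east-1:aws_account_id:task-definition/sleep360:1",
    "arn:aws:ecs:us-east-1:aws_account_id:task-definition/sleep360:2",
    "arn:aws:ecs:us-east-1:aws_account_id:task-definition/nginx-task:1",
    "arn:aws:ecs:us-east-1:aws_account_id:task-definition/wordpress:6",
]

# Map each family to its highest revision number.
latest = {}
for arn in arns:
    family, revision = arn.split("/", 1)[1].rsplit(":", 1)
    latest[family] = max(latest.get(family, 0), int(revision))

print(latest)  # {'sleep360': 2, 'nginx-task': 1, 'wordpress': 6}
```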

## Create a service
<a name="AWSCLI_EC2_run_task"></a>

After you have registered a task definition for your account and have launched a container instance that is registered to your cluster, you can create an Amazon ECS service that runs and maintains a desired number of tasks simultaneously using the task definition that you registered. For this example, you place a single instance of the `nginx-task:1` task definition in your `MyCluster` cluster.

```
aws ecs create-service --cluster MyCluster --service-name nginx-service --task-definition nginx-task:1 --desired-count 1
```

Output:

```
{
    "service": {
        "serviceArn": "arn:aws:ecs:us-east-1:aws_account_id:service/MyCluster/nginx-service",
        "serviceName": "nginx-service",
        "clusterArn": "arn:aws:ecs:us-east-1:aws_account_id:cluster/MyCluster",
        "taskDefinition": "arn:aws:ecs:us-east-1:aws_account_id:task-definition/nginx-task:1",
        "desiredCount": 1,
        "runningCount": 0,
        "pendingCount": 0,
        "launchType": "EC2",
        "status": "ACTIVE",
        "createdAt": "2025-01-13T10:45:00.000Z"
    }
}
```

## List services
<a name="AWSCLI_EC2_list_tasks"></a>

List the services for your cluster. You should see the service that you created in the previous section. You can take the service name or the full ARN that is returned from this command and use it to describe the service later.

```
aws ecs list-services --cluster MyCluster
```

Output:

```
{
    "serviceArns": [
        "arn:aws:ecs:us-east-1:aws_account_id:service/MyCluster/nginx-service"
    ]
}
```

## Describe the service
<a name="AWSCLI_EC2_describe_service"></a>

Describe the service using the following command to get more information about the service.

```
aws ecs describe-services --cluster MyCluster --services nginx-service
```

Output:

```
{
    "services": [
        {
            "serviceArn": "arn:aws:ecs:us-east-1:aws_account_id:service/MyCluster/nginx-service",
            "serviceName": "nginx-service",
            "clusterArn": "arn:aws:ecs:us-east-1:aws_account_id:cluster/MyCluster",
            "taskDefinition": "arn:aws:ecs:us-east-1:aws_account_id:task-definition/nginx-task:1",
            "desiredCount": 1,
            "runningCount": 1,
            "pendingCount": 0,
            "launchType": "EC2",
            "status": "ACTIVE",
            "createdAt": "2025-01-13T10:45:00.000Z",
            "events": [
                {
                    "id": "abcd1234-5678-90ab-cdef-1234567890ab",
                    "createdAt": "2025-01-13T10:45:30.000Z",
                    "message": "(service nginx-service) has started 1 tasks: (task abcd1234-5678-90ab-cdef-1234567890ab)."
                }
            ]
        }
    ]
}
```

## Describe the running task
<a name="AWSCLI_EC2_describe_task"></a>

After describing the service, list the tasks that are running as part of it.

```
aws ecs list-tasks --cluster MyCluster --service-name nginx-service
```

Output:

```
{
    "taskArns": [
        "arn:aws:ecs:us-east-1:aws_account_id:task/MyCluster/abcd1234-5678-90ab-cdef-1234567890ab"
    ]
}
```

Then describe the task using its ARN from the previous output to get more information about it.

```
aws ecs describe-tasks --cluster MyCluster --tasks arn:aws:ecs:us-east-1:aws_account_id:task/MyCluster/abcd1234-5678-90ab-cdef-1234567890ab
```

Output:

```
{
    "tasks": [
        {
            "taskArn": "arn:aws:ecs:us-east-1:aws_account_id:task/MyCluster/abcd1234-5678-90ab-cdef-1234567890ab",
            "clusterArn": "arn:aws:ecs:us-east-1:aws_account_id:cluster/MyCluster",
            "taskDefinitionArn": "arn:aws:ecs:us-east-1:aws_account_id:task-definition/nginx-task:1",
            "containerInstanceArn": "arn:aws:ecs:us-east-1:aws_account_id:container-instance/MyCluster/abcd1234-5678-90ab-cdef-1234567890ab",
            "lastStatus": "RUNNING",
            "desiredStatus": "RUNNING",
            "containers": [
                {
                    "containerArn": "arn:aws:ecs:us-east-1:aws_account_id:container/MyCluster/abcd1234-5678-90ab-cdef-1234567890ab/abcd1234-5678-90ab-cdef-1234567890ab",
                    "taskArn": "arn:aws:ecs:us-east-1:aws_account_id:task/MyCluster/abcd1234-5678-90ab-cdef-1234567890ab",
                    "name": "nginx",
                    "lastStatus": "RUNNING",
                    "networkBindings": [
                        {
                            "bindIP": "0.0.0.0",
                            "containerPort": 80,
                            "hostPort": 80,
                            "protocol": "tcp"
                        }
                    ]
                }
            ],
            "createdAt": "2025-01-13T10:45:00.000Z",
            "startedAt": "2025-01-13T10:45:30.000Z"
        }
    ]
}
```
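The `networkBindings` section shows that container port 80 is published on host port 80 of the container instance, which is what makes the web server reachable on the instance's public IP. A small sketch that pulls the host port out of an abbreviated describe-tasks response (the public IP below is the illustrative address used in this tutorial):

```python
import json

# Abbreviated describe-tasks response; the real output carries many more fields.
response = json.loads("""
{
    "tasks": [
        {
            "containers": [
                {
                    "name": "nginx",
                    "networkBindings": [
                        {"bindIP": "0.0.0.0", "containerPort": 80, "hostPort": 80, "protocol": "tcp"}
                    ]
                }
            ]
        }
    ]
}
""")

binding = response["tasks"][0]["containers"][0]["networkBindings"][0]
# 203.0.113.25 is the illustrative public IP used elsewhere in this tutorial.
url = f"http://203.0.113.25:{binding['hostPort']}"
print(url)  # http://203.0.113.25:80
```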

## Test the web server
<a name="AWSCLI_EC2_test_web_server"></a>

**To test the web server**

1. Retrieve the public IP address of your container instance by running the following command.

   ```
   aws ec2 describe-instances --instance-ids i-abcd1234 --query 'Reservations[0].Instances[0].PublicIpAddress' --output text
   ```

   Output:

   ```
   203.0.113.25
   ```

1. After retrieving the IP address, run the following `curl` command with the IP address.

   ```
   curl http://203.0.113.25
   ```

   Output:

   ```
   <!DOCTYPE html>
   <html>
   <head>
   <title>Welcome to nginx!</title>
   ...
   </head>
   <body>
   <h1>Welcome to nginx!</h1>
   <p>If you can see this page, the nginx web server is successfully installed and working.</p>
   ...
   </body>
   </html>
   ```

   The nginx welcome page confirms that your service is running successfully and accessible from the internet.

## Clean up resources
<a name="AWSCLI_EC2_clean_up_resources"></a>

To avoid incurring charges, clean up the resources that you created in this tutorial.

**To clean up resources**

1. Update the service to have zero desired tasks, which stops the running task.

   ```
   aws ecs update-service --cluster MyCluster --service nginx-service --desired-count 0
   ```

   Output:

   ```
   {
       "service": {
           "serviceArn": "arn:aws:ecs:us-east-1:123456789012:service/MyCluster/nginx-service",
           "serviceName": "nginx-service",
           "desiredCount": 0,
           "runningCount": 1,
           "pendingCount": 0,
           "status": "ACTIVE"
       }
   }
   ```

1. Wait for the running tasks to stop, then delete the service.

   ```
   aws ecs delete-service --cluster MyCluster --service nginx-service
   ```

   Output:

   ```
   {
       "service": {
           "serviceArn": "arn:aws:ecs:us-east-1:123456789012:service/MyCluster/nginx-service",
           "serviceName": "nginx-service",
           "status": "DRAINING"
       }
   }
   ```

1. Terminate the container instance you created.

   ```
   aws ec2 terminate-instances --instance-ids i-abcd1234
   ```

   Output:

   ```
   {
       "TerminatingInstances": [
           {
               "InstanceId": "i-abcd1234",
               "CurrentState": {
                   "Code": 32,
                   "Name": "shutting-down"
               },
               "PreviousState": {
                   "Code": 16,
                   "Name": "running"
               }
           }
       ]
   }
   ```

1. Clean up the security group and key pair that you created.

   ```
   aws ec2 delete-security-group --group-id sg-abcd1234
   aws ec2 delete-key-pair --key-name ecs-tutorial-key
   rm ecs-tutorial-key.pem
   ```

1. Delete the Amazon ECS cluster.

   ```
   aws ecs delete-cluster --cluster MyCluster
   ```

   Output:

   ```
   {
       "cluster": {
           "clusterArn": "arn:aws:ecs:us-east-1:123456789012:cluster/MyCluster",
           "clusterName": "MyCluster",
           "status": "INACTIVE"
       }
   }
   ```

# Configuring Amazon ECS to listen for CloudWatch Events events
<a name="ecs_cwet"></a>

Learn how to set up a simple Lambda function that listens for task events and writes them out to a CloudWatch Logs log stream.

## Prerequisite: Set up a test cluster
<a name="cwet_step_1"></a>

If you do not have a running cluster to capture events from, follow the steps in [Creating an Amazon ECS cluster for Fargate workloads](create-cluster-console-v2.md) to create one. At the end of this tutorial, you run a task on this cluster to test that you have configured your Lambda function correctly. 

## Step 1: Create the Lambda function
<a name="cwet_step_2"></a>

In this procedure, you create a simple Lambda function to serve as a target for Amazon ECS event stream messages. 

1. Open the AWS Lambda console at [https://console.aws.amazon.com/lambda/](https://console.aws.amazon.com/lambda/).

1. Choose **Create function**. 

1. On the **Author from scratch** screen, do the following:

   1. For **Name**, enter a value. 

   1. For **Runtime**, choose your version of Python, for example, **Python 3.9**.

   1. For **Role**, choose **Create a new role with basic Lambda permissions**.

1. Choose **Create function**.

1. In the **Function code** section, edit the sample code to match the following example:

   ```
   import json
   
   def lambda_handler(event, context):
       if event["source"] != "aws.ecs":
           raise ValueError("Function only supports input from events with a source type of: aws.ecs")

       print('Here is the event:')
       print(json.dumps(event))
   ```

   This is a simple Python 3.9 function that prints the event sent by Amazon ECS. If everything is configured correctly, at the end of this tutorial, you see that the event details appear in the CloudWatch Logs log stream associated with this Lambda function.

1. Choose **Save**.
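You can exercise the handler locally before wiring up the event rule. The event below is a minimal illustrative subset of a real ECS task state change event, which carries many more fields:

```python
import json

# Same handler as in the function code above.
def lambda_handler(event, context):
    if event["source"] != "aws.ecs":
        raise ValueError("Function only supports input from events with a source type of: aws.ecs")
    print('Here is the event:')
    print(json.dumps(event))

# Minimal illustrative event; real events also include region, resources, and full task detail.
sample_event = {
    "source": "aws.ecs",
    "detail-type": "ECS Task State Change",
    "detail": {"lastStatus": "RUNNING", "desiredStatus": "RUNNING"},
}

lambda_handler(sample_event, None)  # prints the event as JSON

# Events from any other source are rejected.
try:
    lambda_handler({"source": "aws.sns"}, None)
except ValueError as err:
    print(err)
```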

## Step 2: Register an event rule
<a name="cwet_step_3"></a>

 Next, you create a CloudWatch Events event rule that captures task events coming from your Amazon ECS clusters. This rule captures all events coming from all clusters within the account where it is defined. The task messages themselves contain information about the event source, including the cluster on which it resides, that you can use to filter and sort events programmatically. 

**Note**  
When you use the AWS Management Console to create an event rule, the console automatically adds the IAM permissions necessary to grant CloudWatch Events permission to call your Lambda function. If you are creating an event rule using the AWS CLI, you need to grant this permission explicitly. For more information, see [Events in Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-events.html) and [Amazon EventBridge event patterns](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-patterns.html) in the *Amazon EventBridge User Guide*.

**To route events to your Lambda function**

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. On the navigation pane, choose **Events**, **Rules**, **Create rule**.

1. For **Event Source**, choose **ECS** as the event source. By default, the rule applies to all Amazon ECS events for all of your Amazon ECS groups. Alternatively, you can select specific events or a specific Amazon ECS group.

1. For **Targets**, choose **Add target**, for **Target type**, choose **Lambda function**, and then select your Lambda function.

1. Choose **Configure details**.

1. For **Rule definition**, type a name and description for your rule and choose **Create rule**.

## Step 3: Create a task definition
<a name="cwet_step_task-def"></a>

Create a task definition.

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Task Definitions**.

1. Choose **Create new Task Definition**, **Create new revision with JSON**.

1. Copy and paste the following example task definition into the box and then choose **Save**.

   ```
   {
      "containerDefinitions": [ 
         { 
            "command": [
               "/bin/sh -c \"echo '<html> <head> <title>Amazon ECS Sample App</title> <style>body {margin-top: 40px; background-color: #333;} </style> </head><body> <div style=color:white;text-align:center> <h1>Amazon ECS Sample App</h1> <h2>Congratulations!</h2> <p>Your application is now running on a container in Amazon ECS.</p> </div></body></html>' >  /usr/local/apache2/htdocs/index.html && httpd-foreground\""
            ],
            "entryPoint": [
               "sh",
               "-c"
            ],
            "essential": true,
            "image": "public.ecr.aws/docker/library/httpd:2.4",
            "logConfiguration": { 
               "logDriver": "awslogs",
               "options": { 
                  "awslogs-group" : "/ecs/fargate-task-definition",
                  "awslogs-region": "us-east-1",
                  "awslogs-stream-prefix": "ecs"
               }
            },
            "name": "sample-fargate-app",
            "portMappings": [ 
               { 
                  "containerPort": 80,
                  "hostPort": 80,
                  "protocol": "tcp"
               }
            ]
         }
      ],
      "cpu": "256",
      "executionRoleArn": "arn:aws:iam::012345678910:role/ecsTaskExecutionRole",
      "family": "fargate-task-definition",
      "memory": "512",
      "networkMode": "awsvpc",
      "runtimePlatform": {
           "operatingSystemFamily": "LINUX"
       },
      "requiresCompatibilities": [ 
          "FARGATE" 
       ]
   }
   ```

1. Choose **Create**.

## Step 4: Test your rule
<a name="cwet_step_4"></a>

Finally, you test your rule by running a task that generates events. If your rule is configured correctly, the event details appear within a few minutes in the CloudWatch Logs log stream associated with your Lambda function.

**To test your rule**

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. Choose **Task definitions**.

1. Choose **fargate-task-definition**, and then choose **Deploy**, **Run new task**.

1. For **Cluster**, choose default, and then choose **Deploy**.

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. On the navigation pane, choose **Logs** and select the log group for your Lambda function (for example, **/aws/lambda/***my-function*).

1. Select a log stream to view the event data. 

# Sending Amazon Simple Notification Service alerts for Amazon ECS task stopped events
<a name="ecs_cwet2"></a>

Configure an Amazon EventBridge event rule that only captures task events where the task has stopped running because one of its essential containers has terminated. The event sends only task events with a specific `stoppedReason` property to the designated Amazon SNS topic.

## Prerequisite: Set up a test cluster
<a name="cwet2_step_1"></a>

 If you do not have a running cluster to capture events from, follow the steps in [Getting started with the console using Linux containers on AWS Fargate](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/getting-started-fargate.html#get-started-fargate-cluster) to create one. At the end of this tutorial, you run a task on this cluster to test that you have configured your Amazon SNS topic and EventBridge rule correctly. 

## Prerequisite: Configure permissions for Amazon SNS
<a name="cwet2_step_1a"></a>

To allow EventBridge to publish to an Amazon SNS topic, use the `aws sns get-topic-attributes` and `aws sns set-topic-attributes` commands.

For information about how to add the permission, see [Amazon SNS permissions](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-use-resource-based.html#eb-sns-permissions) in the *Amazon EventBridge User Guide*.

Add the following permissions:

```
{
  "Sid": "PublishEventsToMyTopic",
  "Effect": "Allow",
  "Principal": {
     "Service": "events.amazonaws.com"
  },
  "Action": "sns:Publish",
  "Resource": "arn:aws:sns:region:account-id:TaskStoppedAlert"
}
```

## Step 1: Create and subscribe to an Amazon SNS topic
<a name="cwet2_step_2"></a>

 For this tutorial, you configure an Amazon SNS topic to serve as an event target for your new event rule. 

For information about how to create and subscribe to an Amazon SNS topic, see [Getting started with Amazon SNS](https://docs.aws.amazon.com/sns/latest/dg/sns-getting-started.html#step-create-queue) in the *Amazon Simple Notification Service Developer Guide*, and use the following table to determine which options to select.


| Option | Value | 
| --- | --- | 
|  Type  | Standard | 
| Name |  TaskStoppedAlert  | 
| Protocol | Email | 
| Endpoint |  An email address to which you currently have access  | 

## Step 2: Register an event rule
<a name="cwet2_step_3"></a>

 Next, you register an event rule that captures only task-stopped events for tasks with stopped containers. 

For information about how to create the rule, see [Create a rule in Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-get-started.html) in the *Amazon EventBridge User Guide*, and use the following table to determine which options to select.


| Option | Value | 
| --- | --- | 
|  Rule type  |  Rule with an event pattern  | 
| Event source | AWS events or EventBridge partner events | 
| Event pattern |  Custom pattern (JSON editor)  | 
| Event pattern |  <pre>{<br />   "source":[<br />      "aws.ecs"<br />   ],<br />   "detail-type":[<br />      "ECS Task State Change"<br />   ],<br />   "detail":{<br />      "lastStatus":[<br />         "STOPPED"<br />      ],<br />      "stoppedReason":[<br />         "Essential container in task exited"<br />      ]<br />   }<br />}</pre> | 
| Target type |  AWS service  | 
| Target | SNS topic | 
| Topic |  TaskStoppedAlert (The topic you created in Step 1)  | 
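
EventBridge treats each array in the pattern as the set of acceptable values for that field, and nested objects are matched field by field. As a rough illustration of that semantics, here is a simplified matcher written for this tutorial; it is not the actual EventBridge engine.

```python
def pattern_matches(pattern: dict, event: dict) -> bool:
    """Simplified EventBridge matching: every pattern key must exist in the
    event, and the event's value must be one of the listed candidates.
    Nested dicts are matched recursively."""
    for key, expected in pattern.items():
        if key not in event:
            return False
        if isinstance(expected, dict):
            if not isinstance(event[key], dict) or not pattern_matches(expected, event[key]):
                return False
        elif event[key] not in expected:
            return False
    return True

# The pattern from the table above.
pattern = {
    "source": ["aws.ecs"],
    "detail-type": ["ECS Task State Change"],
    "detail": {
        "lastStatus": ["STOPPED"],
        "stoppedReason": ["Essential container in task exited"],
    },
}

# A trimmed-down stopped-task event, for illustration only.
stopped_event = {
    "source": "aws.ecs",
    "detail-type": "ECS Task State Change",
    "detail": {
        "lastStatus": "STOPPED",
        "stoppedReason": "Essential container in task exited",
    },
}
print(pattern_matches(pattern, stopped_event))  # prints True
```

A task event with `lastStatus` of `RUNNING` would not match, so only stopped-task alerts reach the SNS topic.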

## Step 3: Test your rule
<a name="cwet2_step_4"></a>

Verify that the rule is working by running a task that exits shortly after it starts. If your event rule is configured correctly, you receive an email message within a few minutes with the event text. If you have an existing task definition that satisfies the rule requirements, run a task using it. If you do not, the following steps walk you through registering a Fargate task definition whose container exits shortly after starting, and then running a task from it.

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Task definitions**.

1. Choose **Create new task definition**, **Create new task definition with JSON**.

1. In the JSON editor box, replace the contents with the following task definition.

   ```
   {
      "containerDefinitions":[
         {
            "command":[
               "sh",
               "-c",
               "sleep 5"
            ],
            "essential":true,
            "image":"public.ecr.aws/amazonlinux/amazonlinux:latest",
            "name":"test-sleep"
         }
      ],
      "cpu":"256",
      "executionRoleArn":"arn:aws:iam::012345678910:role/ecsTaskExecutionRole",
      "family":"fargate-task-definition",
      "memory":"512",
      "networkMode":"awsvpc",
      "requiresCompatibilities":[
         "FARGATE"
      ]
   }
   ```

1. Choose **Create**.

**To run a task from the console**

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. On the **Clusters** page, choose the cluster you created in the prerequisites.

1. From the **Tasks** tab, choose **Run new task**.

1. For **Application type**, choose **Task**.

1. For **Task definition**, choose **fargate-task-definition**.

1. For **Desired tasks**, enter the number of tasks to launch.

1. Choose **Create**.

# Concatenating multiline or stack-trace Amazon ECS log messages
<a name="firelens-concatanate-multiline"></a>

Beginning with AWS for Fluent Bit version 2.22.0, a multiline filter is included. The multiline filter helps concatenate log messages that originally belong to one context but were split across multiple records or log lines. For more information about the multiline filter, see the [Fluent Bit documentation](https://docs.fluentbit.io/manual/pipeline/filters/multiline-stacktrace). 

Common examples of split log messages are:
+ Stack traces. 
+ Applications that print logs on multiple lines. 
+ Log messages that were split because they were longer than the specified runtime max buffer size. You can concatenate log messages split by the container runtime by following the example on GitHub: [FireLens Example: Concatenate Partial/Split Container Logs](https://github.com/aws-samples/amazon-ecs-firelens-examples/tree/mainline/examples/fluent-bit/filter-multiline-partial-message-mode).

## Required IAM permissions
<a name="iam-permissions"></a>

You must have the necessary IAM permissions for the container agent to pull the container images from Amazon ECR and for the container to route logs to CloudWatch Logs.

For these permissions, you must have the following roles: 
+ A task IAM role. 
+ A task execution IAM role. 

You need the following permissions:
+ `logs:CreateLogStream`
+ `logs:CreateLogGroup`
+ `logs:PutLogEvents`

## Determine when to use the multiline log setting
<a name="determine-filter"></a>

The following are example log snippets that you see in the CloudWatch Logs console with the default log setting. You can look at the line that starts with `log` to determine whether you need the multiline filter. When the context is the same across records, you can use the multiline log setting. In this example, the context is `com.myproject.model.MyProject`.

```
2022-09-20T15:47:56:595-05-00                           {"container_id": "82ba37cada1d44d389b03e78caf74faa-EXAMPLE", "container_name": "example-app", "source": "stdout", "log": "    at com.myproject.model.MyProject.badMethod(MyProject.java:22)", ...
    {
      "container_id": "82ba37cada1d44d389b03e78caf74faa-EXAMPLE",
      "container_name": "example-app",
      "source": "stdout",
      "log": "    at com.myproject.model.MyProject.badMethod(MyProject.java:22)",
      "ecs_cluster": "default",
      "ecs_task_arn": "arn:aws:region:123456789012:task/default/b23c940d29ed4714971cba72cEXAMPLE",
      "ecs_task_definition": "firelens-example-multiline:3"
    }
```

```
2022-09-20T15:47:56:595-05-00                           {"container_id": "82ba37cada1d44d389b03e78caf74faa-EXAMPLE", "container_name": "example-app", "source": "stdout", "log": "    at com.myproject.model.MyProject.oneMoreMethod(MyProject.java:18)", ...
    {
      "container_id": "82ba37cada1d44d389b03e78caf74faa-EXAMPLE",
      "container_name": "example-app",
      "source": "stdout",
      "log": "    at com.myproject.model.MyProject.oneMoreMethod(MyProject.java:18)",
      "ecs_cluster": "default",
      "ecs_task_arn": "arn:aws:region:123456789012:task/default/b23c940d29ed4714971cba72cEXAMPLE",
      "ecs_task_definition": "firelens-example-multiline:3"
    }
```

After you apply the multiline log setting, the output looks similar to the following example. 

```
2022-09-20T15:47:56:595-05-00                           {"container_id": "82ba37cada1d44d389b03e78caf74faa-EXAMPLE", "container_name": "example-app", "source": "stdout", ...
    {
      "container_id": "82ba37cada1d44d389b03e78caf74faa-EXAMPLE",
      "container_name": "example-app",
      "source": "stdout",
      "log": "September 20, 2022 06:41:48 Exception in thread \"main\" java.lang.RuntimeException: Something has gone wrong, aborting!\n    at com.myproject.model.MyProject.badMethod(MyProject.java:22)\n    at com.myproject.model.MyProject.oneMoreMethod(MyProject.java:18)\n    at com.myproject.model.MyProject.main(MyProject.java:6)",
      "ecs_cluster": "default",
      "ecs_task_arn": "arn:aws:region:123456789012:task/default/b23c940d29ed4714971cba72cEXAMPLE",
      "ecs_task_definition": "firelens-example-multiline:2"
    }
```

## Parse and concatenate options
<a name="parse-multiline-log"></a>

To parse logs and concatenate lines that were split because of newlines, you can use either of these two options.
+ Use your own parser file that contains the rules to parse and concatenate lines that belong to the same message.
+ Use a Fluent Bit built-in parser. For a list of languages supported by the Fluent Bit built-in parsers, see the [Fluent Bit documentation](https://docs.fluentbit.io/manual/pipeline/filters/multiline-stacktrace).

The following tutorial walks you through the steps for each use case. The steps show you how to concatenate multilines and send the logs to Amazon CloudWatch. You can specify a different destination for your logs.
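
Conceptually, multiline concatenation is a small state machine: a record that matches the start rule opens a buffer, records that match the continuation rule are appended to it, and any other record flushes the buffer. The following Python sketch illustrates that idea; it is not Fluent Bit's implementation, and the two regex rules mirror the example parser file used later in this tutorial.

```python
import re

START = re.compile(r"Dec \d+ \d+:\d+:\d+")  # mirrors the start_state rule
CONT = re.compile(r"^\s+at.*")              # mirrors the cont rule

def concatenate(lines):
    """Group a start line with its continuation lines into single records."""
    records, buffer = [], []
    for line in lines:
        if START.match(line):
            if buffer:  # a new trace starts: flush the previous one
                records.append("\n".join(buffer))
            buffer = [line]
        elif buffer and CONT.match(line):
            buffer.append(line)  # continuation of the open trace
        else:
            if buffer:  # anything else ends the open trace
                records.append("\n".join(buffer))
                buffer = []
            records.append(line)
    if buffer:
        records.append("\n".join(buffer))
    return records

lines = [
    "single line...",
    'Dec 14 06:41:08 Exception in thread "main" java.lang.RuntimeException',
    "    at com.myproject.module.MyProject.badMethod(MyProject.java:22)",
    "    at com.myproject.module.MyProject.main(MyProject.java:6)",
    "another line...",
]
print(len(concatenate(lines)))  # prints 3: plain line, concatenated trace, plain line
```

The five input lines collapse into three records, with the stack trace emitted as one newline-joined message, which is the behavior the multiline filter provides.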

### Example: Use a parser that you create
<a name="customer-parser"></a>

In this example, you will complete the following steps: 

1. Build and upload the image for a Fluent Bit container. 

1. Build and upload the image for a demo multiline application that runs, fails, and generates a multiline stack trace.

1. Create the task definition and run the task. 

1. View the logs to verify that messages that span multiple lines appear concatenated. 

**Build and upload the image for a Fluent Bit container**

This image will include the parser file where you specify the regular expression and a configuration file that references the parser file. 

1. Create a folder with the name `FluentBitDockerImage`. 

1. Within the folder, create a parser file that contains the rules to parse the log and concatenate lines that belong in the same message.

   1. Paste the following contents in the parser file:

      ```
      [MULTILINE_PARSER]
          name          multiline-regex-test
          type          regex
          flush_timeout 1000
          #
          # Regex rules for multiline parsing
          # ---------------------------------
          #
          # configuration hints:
          #
          #  - first state always has the name: start_state
          #  - every field in the rule must be inside double quotes
          #
          # rules |   state name  | regex pattern                  | next state
          # ------|---------------|--------------------------------------------
          rule      "start_state"   "/(Dec \d+ \d+\:\d+\:\d+)(.*)/"  "cont"
          rule      "cont"          "/^\s+at.*/"                     "cont"
      ```

      As you customize your regex pattern, we recommend you use a regular expression editor to test the expression.

   1. Save the file as `parsers_multiline.conf`. 

1. Within the `FluentBitDockerImage` folder, create a custom configuration file that references the parser file that you created in the previous step.

   For more information about the custom configuration file, see [Specifying a custom configuration file](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/firelens-taskdef.html#firelens-taskdef-customconfig) in the *Amazon Elastic Container Service Developer Guide*.

   1. Paste the following contents in the file:

      ```
      [SERVICE]
          flush                 1
          log_level             info
          parsers_file          /parsers_multiline.conf
          
      [FILTER]
          name                  multiline
          match                 *
          multiline.key_content log
          multiline.parser      multiline-regex-test
      ```
**Note**  
You must use the absolute path of the parser. 

   1. Save the file as `extra.conf`. 

1. Within the `FluentBitDockerImage` folder, create the Dockerfile with the Fluent Bit image and the parser and configuration files that you created.

   1. Paste the following contents in the file:

      ```
      FROM public.ecr.aws/aws-observability/aws-for-fluent-bit:latest
      
      ADD parsers_multiline.conf /parsers_multiline.conf
      ADD extra.conf /extra.conf
      ```

   1. Save the file as `Dockerfile`.

1. Using the Dockerfile, build a custom Fluent Bit image with the parser and custom configuration files included.
**Note**  
You can place the parser file and configuration file anywhere in the Docker image except `/fluent-bit/etc/fluent-bit.conf` as this file path is used by FireLens.

   1. Build the image: `docker build -t fluent-bit-multiline-image .`

      Where: `fluent-bit-multiline-image` is the name for the image in this example.

   1. Verify that the image was created correctly: `docker images --filter reference=fluent-bit-multiline-image`

      If successful, the output shows the image and the `latest` tag.

1. Upload the custom Fluent Bit image to Amazon Elastic Container Registry.

   1. Create an Amazon ECR repository to store the image: `aws ecr create-repository --repository-name fluent-bit-multiline-repo --region us-east-1`

      Where: `fluent-bit-multiline-repo` is the name for the repository and `us-east-1` is the region in this example. 

      The output gives you the details of the new repository. 

   1. Tag your image with the `repositoryUri` value from the previous output: `docker tag fluent-bit-multiline-image repositoryUri` 

      Example: `docker tag fluent-bit-multiline-image xxxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/fluent-bit-multiline-repo` 

   1. List the images to verify that the tag was applied: `docker images --filter reference=repositoryUri`

      In the output, the image appears with the `repositoryUri` value as its repository name.

   1. Authenticate to Amazon ECR by running the `aws ecr get-login-password` command and specifying the registry ID you want to authenticate to: `aws ecr get-login-password | docker login --username AWS --password-stdin registry ID.dkr.ecr.region.amazonaws.com` 

      Example: `aws ecr get-login-password | docker login --username AWS --password-stdin xxxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com`

      A successful login message appears.

   1. Push the image to Amazon ECR: `docker push registry ID.dkr.ecr.region.amazonaws.com/repository name` 

      Example: `docker push xxxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/fluent-bit-multiline-repo`
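
Before using the image, you can sanity-check the two patterns from `parsers_multiline.conf` against sample log lines. Python's `re` module accepts similar syntax, so this is a rough local approximation; Fluent Bit uses its own regex engine, and edge cases can differ.

```python
import re

# The two rules from parsers_multiline.conf, translated to Python regex.
start_state = re.compile(r"(Dec \d+ \d+:\d+:\d+)(.*)")
cont = re.compile(r"^\s+at.*")

trace_start = 'Dec 14 06:41:08 Exception in thread "main" java.lang.RuntimeException'
trace_cont = "    at com.myproject.module.MyProject.badMethod(MyProject.java:22)"

print(bool(start_state.match(trace_start)))  # True: opens a multiline record
print(bool(cont.match(trace_cont)))          # True: appended to the open record
print(bool(cont.match("single line...")))    # False: flushed as its own record
```

If a line you expect to be concatenated prints `False` here, adjust the pattern in the parser file before rebuilding the image.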

**Build and upload the image for a demo multiline application**

This image will include a Python script file that runs the application and a sample log file. 

When you run the task, the application runs normally for a few seconds, then fails and writes a stack trace. 

1. Create a folder named `multiline-app`: `mkdir multiline-app` 

1. Create a Python script file.

   1. Within the `multiline-app` folder, create a file and name it `main.py`.

   1. Paste the following contents in the file:

      ```
      import os
      import time
      file1 = open('/test.log', 'r')
      Lines = file1.readlines()
       
      count = 0
      
      for i in range(10):
          print("app running normally...")
          time.sleep(1)
      
      # Strips the newline character
      for line in Lines:
          count += 1
          print(line.rstrip())
      print(count)
      print("app terminated.")
      ```

   1. Save the `main.py` file.

1. Create a sample log file. 

   1. Within the `multiline-app` folder, create a file and name it `test.log`.

   1. Paste the following contents in the file:

      ```
      single line...
      Dec 14 06:41:08 Exception in thread "main" java.lang.RuntimeException: Something has gone wrong, aborting!
          at com.myproject.module.MyProject.badMethod(MyProject.java:22)
          at com.myproject.module.MyProject.oneMoreMethod(MyProject.java:18)
          at com.myproject.module.MyProject.anotherMethod(MyProject.java:14)
          at com.myproject.module.MyProject.someMethod(MyProject.java:10)
          at com.myproject.module.MyProject.main(MyProject.java:6)
      another line...
      ```

   1. Save the `test.log` file.

1. Within the `multiline-app` folder, create the Dockerfile.

   1. Paste the following contents in the file:

      ```
      FROM public.ecr.aws/amazonlinux/amazonlinux:latest
      ADD test.log /test.log
      
      RUN yum upgrade -y && yum install -y python3
      
      WORKDIR /usr/local/bin
      
      COPY main.py .
      
      CMD ["python3", "main.py"]
      ```

   1. Save the `Dockerfile` file.

1. Using the Dockerfile, build an image.

   1. Build the image: `docker build -t multiline-app-image .`

      Where: `multiline-app-image` is the name for the image in this example.

   1. Verify that the image was created correctly: `docker images --filter reference=multiline-app-image`

      If successful, the output shows the image and the `latest` tag.

1. Upload the image to Amazon Elastic Container Registry.

   1. Create an Amazon ECR repository to store the image: `aws ecr create-repository --repository-name multiline-app-repo --region us-east-1`

      Where: `multiline-app-repo` is the name for the repository and `us-east-1` is the region in this example. 

      The output gives you the details of the new repository. Note the `repositoryUri` value as you will need it in the next steps. 

   1. Tag your image with the `repositoryUri` value from the previous output: `docker tag multiline-app-image repositoryUri` 

      Example: `docker tag multiline-app-image xxxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/multiline-app-repo` 

   1. List the images to verify that the tag was applied: `docker images --filter reference=repositoryUri`

      In the output, the image appears with the `repositoryUri` value as its repository name.

   1. Push the image to Amazon ECR: `docker push aws_account_id.dkr.ecr.region.amazonaws.com/repository name` 

      Example: `docker push xxxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/multiline-app-repo`

**Create the task definition and run the task**

1. Create a task definition file with the file name `multiline-task-definition.json`. 

1. Paste the following contents in the `multiline-task-definition.json` file: 

   ```
   {
       "family": "firelens-example-multiline",
       "taskRoleArn": "task role ARN",
       "executionRoleArn": "execution role ARN",
       "containerDefinitions": [
           {
               "essential": true,
               "image": "aws_account_id.dkr.ecr.us-east-1.amazonaws.com/fluent-bit-multiline-repo:latest",
               "name": "log_router",
               "firelensConfiguration": {
                   "type": "fluentbit",
                   "options": {
                       "config-file-type": "file",
                       "config-file-value": "/extra.conf"
                   }
               },
               "memoryReservation": 50
           },
           {
               "essential": true,
               "image": "aws_account_id.dkr.ecr.us-east-1.amazonaws.com/multiline-app-repo:latest",
               "name": "app",
               "logConfiguration": {
                   "logDriver": "awsfirelens",
                   "options": {
                       "Name": "cloudwatch_logs",
                       "region": "us-east-1",
                       "log_group_name": "multiline-test/application",
                       "auto_create_group": "true",
                       "log_stream_prefix": "multiline-"
                   }
               },
               "memoryReservation": 100
           }
       ],
       "requiresCompatibilities": ["FARGATE"],
       "networkMode": "awsvpc",
       "cpu": "256",
       "memory": "512"
   }
   ```

   Replace the following in the `multiline-task-definition.json` task definition:

   1. `task role ARN`

      To find the task role ARN, go to the IAM console. Choose **Roles** and find the `ecs-task-role-for-firelens` task role that you created. Choose the role and copy the **ARN** that appears in the **Summary** section.

   1. `execution role ARN`

      To find the execution role ARN, go to the IAM console. Choose **Roles** and find the `ecsTaskExecutionRole` role. Choose the role and copy the **ARN** that appears in the **Summary** section.

   1. `aws_account_id`

      To find your `aws_account_id`, log into the AWS Management Console. Choose your user name on the top right and copy your Account ID.

   1. `us-east-1`

      Replace the region if necessary.

1. Register the task definition file: `aws ecs register-task-definition --cli-input-json file://multiline-task-definition.json --region region` 

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Task Definitions**, and then choose the `firelens-example-multiline` family. This is the family specified in the task definition that you registered.

1. Choose the latest version. 

1. Choose **Deploy**, **Run task**. 

1. On the **Run Task** page, for **Cluster**, choose the cluster, and then under **Networking**, for **Subnets**, choose the available subnets for your task. 

1. Choose **Create**. 

**Verify that multiline log messages in Amazon CloudWatch appear concatenated**

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. From the navigation pane, expand **Logs** and choose **Log groups**. 

1. Choose the `multiline-test/application` log group. 

1. Choose the log stream and view the messages. Lines that matched the rules in the parser file are concatenated and appear as a single message. 

   The following log snippet shows lines concatenated in a single Java stack trace event: 

   ```
   {
       "container_id": "xxxxxx",
       "container_name": "app",
       "source": "stdout",
       "log": "Dec 14 06:41:08 Exception in thread \"main\" java.lang.RuntimeException: Something has gone wrong, aborting!\n    at com.myproject.module.MyProject.badMethod(MyProject.java:22)\n    at com.myproject.module.MyProject.oneMoreMethod(MyProject.java:18)\n    at com.myproject.module.MyProject.anotherMethod(MyProject.java:14)\n    at com.myproject.module.MyProject.someMethod(MyProject.java:10)\n    at com.myproject.module.MyProject.main(MyProject.java:6)",
       "ecs_cluster": "default",
       "ecs_task_arn": "arn:aws:ecs:us-east-1:xxxxxxxxxxxx:task/default/xxxxxx",
       "ecs_task_definition": "firelens-example-multiline:2"
   }
   ```

   The following log snippet shows how the same message appears with just a single line if you run an Amazon ECS container that is not configured to concatenate multiline log messages. 

   ```
   {
       "log": "Dec 14 06:41:08 Exception in thread \"main\" java.lang.RuntimeException: Something has gone wrong, aborting!",
       "container_id": "xxxxxx-xxxxxx",
       "container_name": "app",
       "source": "stdout",
       "ecs_cluster": "default",
       "ecs_task_arn": "arn:aws:ecs:us-east-1:xxxxxxxxxxxx:task/default/xxxxxx",
       "ecs_task_definition": "firelens-example-multiline:3"
   }
   ```

### Example: Use a Fluent Bit built-in parser
<a name="fluent-bit-parser"></a>

In this example, you will complete the following steps: 

1. Build and upload the image for a Fluent Bit container. 

1. Build and upload the image for a demo multiline application that runs, fails, and generates a multiline stack trace.

1. Create the task definition and run the task. 

1. View the logs to verify that messages that span multiple lines appear concatenated. 

**Build and upload the image for a Fluent Bit container**

This image will include a configuration file that references the Fluent Bit parser. 

1. Create a folder with the name `FluentBitDockerImage`. 

1. Within the `FluentBitDockerImage` folder, create a custom configuration file that references the Fluent Bit built-in parser file.

   For more information about the custom configuration file, see [Specifying a custom configuration file](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/firelens-taskdef.html#firelens-taskdef-customconfig) in the *Amazon Elastic Container Service Developer Guide*.

   1. Paste the following contents in the file:

      ```
      [FILTER]
          name                  multiline
          match                 *
          multiline.key_content log
          multiline.parser      go
      ```

   1. Save the file as `extra.conf`. 

1. Within the `FluentBitDockerImage` folder, create the Dockerfile with the Fluent Bit image and the parser and configuration files that you created.

   1. Paste the following contents in the file:

      ```
      FROM public.ecr.aws/aws-observability/aws-for-fluent-bit:latest
      ADD extra.conf /extra.conf
      ```

   1. Save the file as `Dockerfile`.

1. Using the Dockerfile, build a custom Fluent Bit image with the custom configuration file included.
**Note**  
You can place the configuration file anywhere in the Docker image except `/fluent-bit/etc/fluent-bit.conf` as this file path is used by FireLens.

   1. Build the image: `docker build -t fluent-bit-multiline-image .`

      Where: `fluent-bit-multiline-image` is the name for the image in this example.

   1. Verify that the image was created correctly: `docker images --filter reference=fluent-bit-multiline-image`

      If successful, the output shows the image and the `latest` tag.

1. Upload the custom Fluent Bit image to Amazon Elastic Container Registry.

   1. Create an Amazon ECR repository to store the image: `aws ecr create-repository --repository-name fluent-bit-multiline-repo --region us-east-1`

      Where: `fluent-bit-multiline-repo` is the name for the repository and `us-east-1` is the region in this example. 

      The output gives you the details of the new repository. 

   1. Tag your image with the `repositoryUri` value from the previous output: `docker tag fluent-bit-multiline-image repositoryUri` 

      Example: `docker tag fluent-bit-multiline-image xxxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/fluent-bit-multiline-repo` 

   1. List the images to verify that the tag was applied: `docker images --filter reference=repositoryUri`

      In the output, the image appears with the `repositoryUri` value as its repository name.

   1. Authenticate to Amazon ECR by running the `aws ecr get-login-password` command and specifying the registry ID you want to authenticate to: `aws ecr get-login-password | docker login --username AWS --password-stdin registry ID.dkr.ecr.region.amazonaws.com` 

      Example: `aws ecr get-login-password | docker login --username AWS --password-stdin xxxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com`

      A successful login message appears.

   1. Push the image to Amazon ECR: `docker push registry ID.dkr.ecr.region.amazonaws.com/repository name` 

      Example: `docker push xxxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/fluent-bit-multiline-repo`

**Build and upload the image for a demo multiline application**

This image will include a Python script file that runs the application and a sample log file. 

1. Create a folder named `multiline-app`: `mkdir multiline-app` 

1. Create a Python script file.

   1. Within the `multiline-app` folder, create a file and name it `main.py`.

   1. Paste the following contents in the file:

      ```
      import os
      import time
      file1 = open('/test.log', 'r')
      Lines = file1.readlines()
       
      count = 0
      
      for i in range(10):
          print("app running normally...")
          time.sleep(1)
      
      # Strips the newline character
      for line in Lines:
          count += 1
          print(line.rstrip())
      print(count)
      print("app terminated.")
      ```

   1. Save the `main.py` file.

1. Create a sample log file. 

   1. Within the `multiline-app` folder, create a file and name it `test.log`.

   1. Paste the following contents in the file:

      ```
      panic: my panic
      
      goroutine 4 [running]:
      panic(0x45cb40, 0x47ad70)
        /usr/local/go/src/runtime/panic.go:542 +0x46c fp=0xc42003f7b8 sp=0xc42003f710 pc=0x422f7c
      main.main.func1(0xc420024120)
        foo.go:6 +0x39 fp=0xc42003f7d8 sp=0xc42003f7b8 pc=0x451339
      runtime.goexit()
        /usr/local/go/src/runtime/asm_amd64.s:2337 +0x1 fp=0xc42003f7e0 sp=0xc42003f7d8 pc=0x44b4d1
      created by main.main
        foo.go:5 +0x58
      
      goroutine 1 [chan receive]:
      runtime.gopark(0x4739b8, 0xc420024178, 0x46fcd7, 0xc, 0xc420028e17, 0x3)
        /usr/local/go/src/runtime/proc.go:280 +0x12c fp=0xc420053e30 sp=0xc420053e00 pc=0x42503c
      runtime.goparkunlock(0xc420024178, 0x46fcd7, 0xc, 0x1000f010040c217, 0x3)
        /usr/local/go/src/runtime/proc.go:286 +0x5e fp=0xc420053e70 sp=0xc420053e30 pc=0x42512e
      runtime.chanrecv(0xc420024120, 0x0, 0xc420053f01, 0x4512d8)
        /usr/local/go/src/runtime/chan.go:506 +0x304 fp=0xc420053f20 sp=0xc420053e70 pc=0x4046b4
      runtime.chanrecv1(0xc420024120, 0x0)
        /usr/local/go/src/runtime/chan.go:388 +0x2b fp=0xc420053f50 sp=0xc420053f20 pc=0x40439b
      main.main()
        foo.go:9 +0x6f fp=0xc420053f80 sp=0xc420053f50 pc=0x4512ef
      runtime.main()
        /usr/local/go/src/runtime/proc.go:185 +0x20d fp=0xc420053fe0 sp=0xc420053f80 pc=0x424bad
      runtime.goexit()
        /usr/local/go/src/runtime/asm_amd64.s:2337 +0x1 fp=0xc420053fe8 sp=0xc420053fe0 pc=0x44b4d1
      
      goroutine 2 [force gc (idle)]:
      runtime.gopark(0x4739b8, 0x4ad720, 0x47001e, 0xf, 0x14, 0x1)
        /usr/local/go/src/runtime/proc.go:280 +0x12c fp=0xc42003e768 sp=0xc42003e738 pc=0x42503c
      runtime.goparkunlock(0x4ad720, 0x47001e, 0xf, 0xc420000114, 0x1)
        /usr/local/go/src/runtime/proc.go:286 +0x5e fp=0xc42003e7a8 sp=0xc42003e768 pc=0x42512e
      runtime.forcegchelper()
        /usr/local/go/src/runtime/proc.go:238 +0xcc fp=0xc42003e7e0 sp=0xc42003e7a8 pc=0x424e5c
      runtime.goexit()
        /usr/local/go/src/runtime/asm_amd64.s:2337 +0x1 fp=0xc42003e7e8 sp=0xc42003e7e0 pc=0x44b4d1
      created by runtime.init.4
        /usr/local/go/src/runtime/proc.go:227 +0x35
      
      goroutine 3 [GC sweep wait]:
      runtime.gopark(0x4739b8, 0x4ad7e0, 0x46fdd2, 0xd, 0x419914, 0x1)
        /usr/local/go/src/runtime/proc.go:280 +0x12c fp=0xc42003ef60 sp=0xc42003ef30 pc=0x42503c
      runtime.goparkunlock(0x4ad7e0, 0x46fdd2, 0xd, 0x14, 0x1)
        /usr/local/go/src/runtime/proc.go:286 +0x5e fp=0xc42003efa0 sp=0xc42003ef60 pc=0x42512e
      runtime.bgsweep(0xc42001e150)
        /usr/local/go/src/runtime/mgcsweep.go:52 +0xa3 fp=0xc42003efd8 sp=0xc42003efa0 pc=0x419973
      runtime.goexit()
        /usr/local/go/src/runtime/asm_amd64.s:2337 +0x1 fp=0xc42003efe0 sp=0xc42003efd8 pc=0x44b4d1
      created by runtime.gcenable
        /usr/local/go/src/runtime/mgc.go:216 +0x58
      one more line, no multiline
      ```

   1. Save the `test.log` file.

1. Within the `multiline-app` folder, create the Dockerfile.

   1. Paste the following contents in the file:

      ```
      FROM public.ecr.aws/amazonlinux/amazonlinux:latest
      ADD test.log /test.log
      
      RUN yum upgrade -y && yum install -y python3
      
      WORKDIR /usr/local/bin
      
      COPY main.py .
      
      CMD ["python3", "main.py"]
      ```

   1. Save the `Dockerfile` file.

1. Using the Dockerfile, build an image.

   1. Build the image: `docker build -t multiline-app-image .`

      Where: `multiline-app-image` is the name for the image in this example.

   1. Verify that the image was created correctly: `docker images --filter reference=multiline-app-image` 

      If successful, the output shows the image and the `latest` tag.

1. Upload the image to Amazon Elastic Container Registry.

   1. Create an Amazon ECR repository to store the image: `aws ecr create-repository --repository-name multiline-app-repo --region us-east-1`

      Where: `multiline-app-repo` is the name for the repository and `us-east-1` is the region in this example. 

      The output gives you the details of the new repository. Note the `repositoryUri` value as you will need it in the next steps. 

   1. Tag your image with the `repositoryUri` value from the previous output: `docker tag multiline-app-image repositoryUri` 

      Example: `docker tag multiline-app-image xxxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/multiline-app-repo` 

   1. List the images to verify that the tag was applied: `docker images --filter reference=repositoryUri`

      In the output, the repository name changes from `multiline-app-repo` to the `repositoryUri` value.

   1. Push the image to Amazon ECR: `docker push aws_account_id.dkr.ecr.region.amazonaws.com/repository-name` 

      Example: `docker push xxxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/multiline-app-repo`

**Create the task definition and run the task**

1. Create a task definition file with the file name `multiline-task-definition.json`. 

1. Paste the following contents in the `multiline-task-definition.json` file: 

   ```
   {
       "family": "firelens-example-multiline",
       "taskRoleArn": "task role ARN",
       "executionRoleArn": "execution role ARN",
       "containerDefinitions": [
           {
               "essential": true,
               "image": "aws_account_id.dkr.ecr.us-east-1.amazonaws.com/fluent-bit-multiline-repo:latest",
               "name": "log_router",
               "firelensConfiguration": {
                   "type": "fluentbit",
                   "options": {
                       "config-file-type": "file",
                       "config-file-value": "/extra.conf"
                   }
               },
               "memoryReservation": 50
           },
           {
               "essential": true,
               "image": "aws_account_id.dkr.ecr.us-east-1.amazonaws.com/multiline-app-repo:latest",
               "name": "app",
               "logConfiguration": {
                   "logDriver": "awsfirelens",
                   "options": {
                       "Name": "cloudwatch_logs",
                       "region": "us-east-1",
                       "log_group_name": "multiline-test/application",
                       "auto_create_group": "true",
                       "log_stream_prefix": "multiline-"
                   }
               },
               "memoryReservation": 100
           }
       ],
       "requiresCompatibilities": ["FARGATE"],
       "networkMode": "awsvpc",
       "cpu": "256",
       "memory": "512"
   }
   ```

   Replace the following in the `multiline-task-definition.json` task definition:

   1. `task role ARN`

      To find the task role ARN, go to the IAM console. Choose **Roles** and find the `ecs-task-role-for-firelens` task role that you created. Choose the role and copy the **ARN** that appears in the **Summary** section.

   1. `execution role ARN`

      To find the execution role ARN, go to the IAM console. Choose **Roles** and find the `ecsTaskExecutionRole` role. Choose the role and copy the **ARN** that appears in the **Summary** section.

   1. `aws_account_id`

      To find your `aws_account_id`, log into the AWS Management Console. Choose your user name on the top right and copy your Account ID.

   1. `us-east-1`

      Replace the region if necessary.

1. Register the task definition file: `aws ecs register-task-definition --cli-input-json file://multiline-task-definition.json --region us-east-1` 

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Task Definitions**, and then choose the `firelens-example-multiline` family. This is the family that the task definition was registered to in the first line of the task definition.

1. Choose the latest version. 

1. Choose **Deploy**, then **Run task**. 

1. On the **Run Task** page, for **Cluster**, choose the cluster. Then, under **Networking**, for **Subnets**, choose the available subnets for your task. 

1. Choose **Create**. 
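As an alternative to the console steps above, you can start the task with the AWS CLI. The following is a sketch only; the cluster name, subnet ID, and security group ID are placeholders that you need to replace with your own values:

```
aws ecs run-task \
    --cluster default \
    --task-definition firelens-example-multiline \
    --launch-type FARGATE \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-xxxxxxxx],securityGroups=[sg-xxxxxxxx],assignPublicIp=ENABLED}" \
    --region us-east-1
```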

**Verify that multiline log messages in Amazon CloudWatch appear concatenated**

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. From the navigation pane, expand **Logs** and choose **Log groups**. 

1. Choose the `multiline-test/application` log group. 

1. Choose the log and view the messages. Lines that matched the rules in the parser file are concatenated and appear as a single message. 

   The following log snippet shows a Go stack trace that is concatenated into a single event: 

   ```
   {
       "log": "panic: my panic\n\ngoroutine 4 [running]:\npanic(0x45cb40, 0x47ad70)\n  /usr/local/go/src/runtime/panic.go:542 +0x46c fp=0xc42003f7b8 sp=0xc42003f710 pc=0x422f7c\nmain.main.func1(0xc420024120)\n  foo.go:6 +0x39 fp=0xc42003f7d8 sp=0xc42003f7b8 pc=0x451339\nruntime.goexit()\n  /usr/local/go/src/runtime/asm_amd64.s:2337 +0x1 fp=0xc42003f7e0 sp=0xc42003f7d8 pc=0x44b4d1\ncreated by main.main\n  foo.go:5 +0x58\n\ngoroutine 1 [chan receive]:\nruntime.gopark(0x4739b8, 0xc420024178, 0x46fcd7, 0xc, 0xc420028e17, 0x3)\n  /usr/local/go/src/runtime/proc.go:280 +0x12c fp=0xc420053e30 sp=0xc420053e00 pc=0x42503c\nruntime.goparkunlock(0xc420024178, 0x46fcd7, 0xc, 0x1000f010040c217, 0x3)\n  /usr/local/go/src/runtime/proc.go:286 +0x5e fp=0xc420053e70 sp=0xc420053e30 pc=0x42512e\nruntime.chanrecv(0xc420024120, 0x0, 0xc420053f01, 0x4512d8)\n  /usr/local/go/src/runtime/chan.go:506 +0x304 fp=0xc420053f20 sp=0xc420053e70 pc=0x4046b4\nruntime.chanrecv1(0xc420024120, 0x0)\n  /usr/local/go/src/runtime/chan.go:388 +0x2b fp=0xc420053f50 sp=0xc420053f20 pc=0x40439b\nmain.main()\n  foo.go:9 +0x6f fp=0xc420053f80 sp=0xc420053f50 pc=0x4512ef\nruntime.main()\n  /usr/local/go/src/runtime/proc.go:185 +0x20d fp=0xc420053fe0 sp=0xc420053f80 pc=0x424bad\nruntime.goexit()\n  /usr/local/go/src/runtime/asm_amd64.s:2337 +0x1 fp=0xc420053fe8 sp=0xc420053fe0 pc=0x44b4d1\n\ngoroutine 2 [force gc (idle)]:\nruntime.gopark(0x4739b8, 0x4ad720, 0x47001e, 0xf, 0x14, 0x1)\n  /usr/local/go/src/runtime/proc.go:280 +0x12c fp=0xc42003e768 sp=0xc42003e738 pc=0x42503c\nruntime.goparkunlock(0x4ad720, 0x47001e, 0xf, 0xc420000114, 0x1)\n  /usr/local/go/src/runtime/proc.go:286 +0x5e fp=0xc42003e7a8 sp=0xc42003e768 pc=0x42512e\nruntime.forcegchelper()\n  /usr/local/go/src/runtime/proc.go:238 +0xcc fp=0xc42003e7e0 sp=0xc42003e7a8 pc=0x424e5c\nruntime.goexit()\n  /usr/local/go/src/runtime/asm_amd64.s:2337 +0x1 fp=0xc42003e7e8 sp=0xc42003e7e0 pc=0x44b4d1\ncreated by runtime.init.4\n  
/usr/local/go/src/runtime/proc.go:227 +0x35\n\ngoroutine 3 [GC sweep wait]:\nruntime.gopark(0x4739b8, 0x4ad7e0, 0x46fdd2, 0xd, 0x419914, 0x1)\n  /usr/local/go/src/runtime/proc.go:280 +0x12c fp=0xc42003ef60 sp=0xc42003ef30 pc=0x42503c\nruntime.goparkunlock(0x4ad7e0, 0x46fdd2, 0xd, 0x14, 0x1)\n  /usr/local/go/src/runtime/proc.go:286 +0x5e fp=0xc42003efa0 sp=0xc42003ef60 pc=0x42512e\nruntime.bgsweep(0xc42001e150)\n  /usr/local/go/src/runtime/mgcsweep.go:52 +0xa3 fp=0xc42003efd8 sp=0xc42003efa0 pc=0x419973\nruntime.goexit()\n  /usr/local/go/src/runtime/asm_amd64.s:2337 +0x1 fp=0xc42003efe0 sp=0xc42003efd8 pc=0x44b4d1\ncreated by runtime.gcenable\n  /usr/local/go/src/runtime/mgc.go:216 +0x58",
       "container_id": "xxxxxx-xxxxxx",
       "container_name": "app",
       "source": "stdout",
       "ecs_cluster": "default",
       "ecs_task_arn": "arn:aws:ecs:us-east-1:xxxxxxxxxxxx:task/default/xxxxxx",
       "ecs_task_definition": "firelens-example-multiline:2"
   }
   ```

   The following log snippet shows how the same event appears if you run an ECS container that is not configured to concatenate multiline log messages. The log field contains a single line.

   ```
   {
       "log": "panic: my panic",
       "container_id": "xxxxxx-xxxxxx",
       "container_name": "app",
       "source": "stdout",
       "ecs_cluster": "default",
       "ecs_task_arn": "arn:aws:ecs:us-east-1:xxxxxxxxxxxx:task/default/xxxxxx",
       "ecs_task_definition": "firelens-example-multiline:3"
    }
    ```
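The effect of the concatenation can be illustrated with a short sketch. The continuation rules below are simplified stand-ins for the real parser rules (blank lines, indented lines, and common Go stack-trace lines are treated as continuations), not the actual Fluent Bit configuration:

```python
import re

# Simplified continuation rules: blank lines, indented lines, and common
# Go stack-trace lines are appended to the previous record; anything else
# starts a new record. These only approximate a real multiline parser.
CONT = re.compile(r"^(\s|goroutine |created by |runtime\.|main\.|panic\()")

def concatenate(lines):
    records = []
    for line in lines:
        if records and (not line.strip() or CONT.match(line)):
            records[-1] += "\n" + line
        else:
            records.append(line)
    return records

sample = [
    "panic: my panic",
    "",
    "goroutine 4 [running]:",
    "panic(0x45cb40, 0x47ad70)",
    "  /usr/local/go/src/runtime/panic.go:542",
    "one more line, no multiline",
]
print(len(concatenate(sample)))  # prints 2
```

The six input lines collapse into two records: the whole stack trace becomes one record, and the final unrelated line stays its own record, mirroring the two log snippets above.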

**Note**  
If your logs go to log files instead of the standard output, we recommend specifying the `multiline.parser` and `multiline.key_content` configuration parameters in the [Tail input plugin](https://docs.fluentbit.io/manual/pipeline/inputs/tail#multiline-support) instead of the Filter.
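For the file-based case, a minimal Tail input sketch might look like the following. The `Path` value is an illustrative assumption, and `go` is one of Fluent Bit's built-in multiline parsers:

```
[INPUT]
    Name              tail
    Path              /var/log/app/*.log
    multiline.parser  go
```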

# Deploying Fluent Bit on Amazon ECS Windows containers
<a name="tutorial-deploy-fluentbit-on-windows"></a>

Fluent Bit is a fast and flexible log processor and router supported by various operating systems. It can be used to route logs to various AWS destinations such as Amazon CloudWatch Logs, Amazon Data Firehose, Amazon S3, and Amazon OpenSearch Service. Fluent Bit supports common partner solutions such as [Datadog](https://www.datadoghq.com/), [Splunk](https://www.splunk.com/), and custom HTTP servers. For more information about Fluent Bit, see the [https://fluentbit.io/](https://fluentbit.io/) website.

The **AWS for Fluent Bit** image is available on Amazon ECR on both the Amazon ECR Public Gallery and in an Amazon ECR repository in most Regions for high availability. For more information, see [https://github.com/aws/aws-for-fluent-bit](https://github.com/aws/aws-for-fluent-bit) on the GitHub website.

This tutorial walks you through how to deploy Fluent Bit containers on your Windows instances running in Amazon ECS to stream logs generated by your Windows tasks to Amazon CloudWatch for centralized logging. 

This tutorial uses the following approach:
+ Fluent Bit runs as a service with the daemon scheduling strategy. This strategy ensures that a single instance of Fluent Bit always runs on each container instance in the cluster. The service:
  + Listens on port 24224 using the forward input plug-in.
  + Exposes port 24224 to the host so that the Docker runtime can send logs to Fluent Bit using this exposed port.
  + Has a configuration which allows Fluent Bit to send the log records to the specified destinations.
+ Launch all other Amazon ECS task containers using the fluentd logging driver. For more information, see [Fluentd logging driver](https://docs.docker.com/engine/logging/drivers/fluentd/) on the Docker documentation website.
  + Docker connects to the TCP socket 24224 on localhost inside the host namespace.
  + The Amazon ECS agent adds labels to the containers which includes the cluster name, task definition family name, task definition revision number, task ARN, and the container name. The same information is added to the log record using the labels option of the fluentd docker logging driver. For more information, see [labels, labels-regex, env, and env-regex](https://docs.docker.com/config/containers/logging/fluentd/#labels-labels-regex-env-and-env-regex) on the Docker documentation website.
  + Because the `async` option of the fluentd logging driver is set to `true`, if the Fluent Bit container is restarted, Docker buffers the logs until the container is available again. You can increase the buffer limit by setting the `fluentd-buffer-limit` option. For more information, see [fluentd-buffer-limit](https://docs.docker.com/config/containers/logging/fluentd/#fluentd-buffer-limit) on the Docker documentation website.

The workflow is as follows:
+ The Fluent Bit container starts and listens on port 24224 which is exposed to the host.
+ Fluent Bit uses the task IAM role credentials specified in its task definition.
+ Other tasks launched on the same instance use the fluentd docker logging driver to connect to the Fluent Bit container on port 24224. 
+ When the application containers generate logs, the Docker runtime tags those records, adds the additional metadata specified in labels, and then forwards them on port 24224 in the host namespace. 
+ Fluent Bit receives the log record on port 24224 because it is exposed to the host namespace.
+ Fluent Bit performs its internal processing and routes the logs as specified.

This tutorial uses the default CloudWatch Fluent Bit configuration which does the following:
+ Creates a new log group for each cluster and task definition family.
+ Creates a new log stream for each task container in the above log group whenever a new task is launched. Each stream is marked with the ID of the task to which the container belongs.
+ Adds additional metadata including the cluster name, task ARN, task container name, task definition family, and the task definition revision number in each log entry.

  For example, if you have `task_1` with `container_1` and `container_2`, and `task_2` with `container_3`, then the following are the CloudWatch log streams:
  + `/aws/ecs/windows.ecs_task_1`

    `task-out.TASK_ID.container_1`

    `task-out.TASK_ID.container_2`
  + `/aws/ecs/windows.ecs_task_2`

    `task-out.TASK_ID.container_3`

**Note**  
You can use dual-stack service endpoints to interact with Amazon ECS from the AWS CLI, SDKs, and the Amazon ECS API over both IPv4 and IPv6. For more information, see [Using Amazon ECS dual-stack endpoints](dual-stack-endpoint.md).

**Topics**
+ [Prerequisites](#tutorial-deploy-fluentbit-on-windows-prereqs)
+ [Step 1: Create the IAM access roles](#tutorial-deploy-fluentbit-on-windows-iam-access-role)
+ [Step 2: Create an Amazon ECS Windows container instance](#tutorial-deploy-fluentbit-on-windows-instance)
+ [Step 3: Configure Fluent Bit](#tutorial-deploy-fluentbit-on-windows-configure-fluentbit)
+ [Step 4: Register a Windows Fluent Bit task definition which routes the logs to CloudWatch](#tutorial-deploy-fluentbit-on-windows-register-task-definition)
+ [Step 5: Run the `ecs-windows-fluent-bit` task definition as an Amazon ECS service using the daemon scheduling strategy](#tutorial-deploy-fluentbit-on-windows-run-task)
+ [Step 6: Register a Windows task definition which generates the logs](#tutorial-deploy-fluentbit-on-windows-register-task-def-logs)
+ [Step 7: Run the `windows-app-task` task definition](#tutorial-deploy-fluentbit-on-windows-run-task-fluentbit)
+ [Step 8: Verify the logs on CloudWatch](#tutorial-deploy-fluentbit-on-windows-verify)
+ [Step 9: Clean up](#tutorial-deploy-fluentbit-on-windows-cleanup)

## Prerequisites
<a name="tutorial-deploy-fluentbit-on-windows-prereqs"></a>

This tutorial assumes that the following prerequisites have been completed:
+ The latest version of the AWS CLI is installed and configured. For more information, see [Installing or updating to the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
+ The `aws-for-fluent-bit` container image is available for the following Windows operating systems:
  + Windows Server 2019 Core
  + Windows Server 2019 Full
  + Windows Server 2022 Core
  + Windows Server 2022 Full
+ The steps in [Set up to use Amazon ECS](get-set-up-for-amazon-ecs.md) have been completed.
+ You have a cluster. In this tutorial, the cluster name is **FluentBit-cluster**.
+ You have a VPC with a public subnet where the EC2 instance will be launched. You can use your default VPC. You can also use a private subnet that allows Amazon CloudWatch endpoints to reach the subnet. For more information about Amazon CloudWatch endpoints, see [Amazon CloudWatch endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/cw_region.html) in the *AWS General Reference*. For information about how to use the Amazon VPC wizard to create a VPC, see [Create a virtual private cloud](get-set-up-for-amazon-ecs.md#create-a-vpc).

## Step 1: Create the IAM access roles
<a name="tutorial-deploy-fluentbit-on-windows-iam-access-role"></a>

Create the Amazon ECS IAM roles.

1. Create the Amazon ECS container instance role named `ecsInstanceRole`. For more information, see [Amazon ECS container instance IAM role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/instance_IAM_role.html).

1. Create an IAM role for the Fluent Bit task named `fluentTaskRole`. For more information, see [Amazon ECS task IAM role](task-iam-roles.md).

    The IAM permissions granted in this IAM role are assumed by the task containers. To allow Fluent Bit to send logs to CloudWatch, you need to attach the following permissions to the task IAM role.

------
#### [ JSON ]


   ```
   {
       "Version": "2012-10-17",
       "Statement": [
       {
           "Effect": "Allow",
           "Action": [
               "logs:CreateLogStream",
               "logs:CreateLogGroup",
               "logs:DescribeLogStreams",
               "logs:PutLogEvents"
           ],
           "Resource": "*"
       }
       ]
   }
   ```

------

1. Attach the policy to the role.

   1. Save the above content in a file named `fluent-bit-policy.json`.

   1. Run the following command to attach the inline policy to `fluentTaskRole` IAM role.

      ```
      aws iam put-role-policy --role-name fluentTaskRole --policy-name fluentTaskPolicy --policy-document file://fluent-bit-policy.json
      ```
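In addition to the permissions policy above, the task role must trust the Amazon ECS tasks service so that your task containers can assume it. If you created the role in the IAM console with the Elastic Container Service Task use case, this trust policy is already in place; it is shown here for reference:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ecs-tasks.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```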

## Step 2: Create an Amazon ECS Windows container instance
<a name="tutorial-deploy-fluentbit-on-windows-instance"></a>

Create an Amazon ECS Windows container instance.

**To create an Amazon ECS instance**

1. Use the `aws ssm get-parameters` command to retrieve the AMI ID for the Region that hosts your VPC. For more information, see [Retrieving Amazon ECS-Optimized AMI metadata](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/retrieve-ecs-optimized_windows_AMI.html).

1. Use the Amazon EC2 console to launch the instance.

   1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

   1. From the navigation bar, select the Region to use.

   1. From the **EC2 Dashboard**, choose **Launch instance**.

   1. For **Name**, enter a unique name.

   1. For **Application and OS Images (Amazon Machine Image)**, choose the AMI that you retrieved in the first step.

   1. For **Instance type**, choose `t3.xlarge`.

   1. For **Key pair (login)**, choose a key pair. 

   1. Under **Network settings**, for **Security group**, choose an existing security group, or create a new one.

   1. Under **Network settings**, for **Auto-assign Public IP**, select **Enable**. 

   1. Under **Advanced details**, for **IAM instance profile** , choose **ecsInstanceRole**.

   1. Configure your Amazon ECS container instance with the following user data. Under **Advanced Details**, paste the following script into the **User data** field, replacing *cluster-name* with the name of your cluster.

      ```
      <powershell>
      Import-Module ECSTools
      Initialize-ECSAgent -Cluster cluster-name -EnableTaskENI -EnableTaskIAMRole -LoggingDrivers '["awslogs","fluentd"]'
      </powershell>
      ```

   1. When you are ready, select the acknowledgment field, and then choose **Launch Instances**. 

   1. A confirmation page lets you know that your instance is launching. Choose **View Instances** to close the confirmation page and return to the console.

## Step 3: Configure Fluent Bit
<a name="tutorial-deploy-fluentbit-on-windows-configure-fluentbit"></a>

You can use the following default configuration provided by AWS to get started quickly:
+ [Amazon CloudWatch](https://github.com/aws/aws-for-fluent-bit/blob/mainline/ecs_windows_forward_daemon/cloudwatch.conf) which is based on the Fluent Bit plug-in for [Amazon CloudWatch](https://docs.fluentbit.io/manual/v/1.9-pre/pipeline/outputs/cloudwatch) on the *Fluent Bit Official Manual*.

Alternatively, you can use other default configurations provided by AWS. For more information, see [Overriding the entrypoint for the Windows image](https://github.com/aws/aws-for-fluent-bit/tree/mainline/ecs_windows_forward_daemon#overriding-the-entrypoint-for-the-windows-image) in the `aws-for-fluent-bit` repository on the GitHub website.

The default Amazon CloudWatch Fluent Bit configuration is shown below.

Replace the following variables:
+ *region* with the Region where you want to send the Amazon CloudWatch logs.

```
[SERVICE]
    Flush               5
    Log_Level           info
    Daemon              off

[INPUT]
    Name                forward
    Listen              0.0.0.0
    Port                24224
    Buffer_Chunk_Size   1M
    Buffer_Max_Size     6M
    Tag_Prefix          ecs.

# Amazon ECS agent adds the following log keys as labels to the docker container.
# The fluentd logging driver adds these labels to the log record when sending it to Fluent Bit.
[FILTER]
    Name                modify
    Match               ecs.*
    Rename              com.amazonaws.ecs.cluster ecs_cluster
    Rename              com.amazonaws.ecs.container-name ecs_container_name
    Rename              com.amazonaws.ecs.task-arn ecs_task_arn
    Rename              com.amazonaws.ecs.task-definition-family ecs_task_definition_family
    Rename              com.amazonaws.ecs.task-definition-version ecs_task_definition_version

[FILTER]
    Name                rewrite_tag
    Match               ecs.*
    Rule                $ecs_task_arn ^([a-z-:0-9]+)/([a-zA-Z0-9-_]+)/([a-z0-9]+)$  out.$3.$ecs_container_name false
    Emitter_Name        re_emitted

[OUTPUT]
    Name                cloudwatch_logs
    Match               out.*
    region              region
    log_group_name      fallback-group
    log_group_template  /aws/ecs/$ecs_cluster.$ecs_task_definition_family
    log_stream_prefix   task-
    auto_create_group   On
```

Every log which gets into Fluent Bit has a tag which you specify, or is automatically generated when you do not supply one. The tags can be used to route different logs to different destinations. For additional information, see [Tag](https://docs.fluentbit.io/manual/concepts/key-concepts#tag) in the *Fluent Bit Official Manual*. 

The Fluent Bit configuration described above has the following properties:
+ The forward input plug-in listens for incoming traffic on TCP port 24224. 
+ Each log entry received on that port has a tag which the forward input plug-in modifies to prefix the record with the `ecs.` string. 
+ The Fluent Bit internal pipeline routes the log entry to the modify filter using the Match regex. This filter renames the keys in the log record JSON to a format which Fluent Bit can consume. 
+ The modified log entry is then consumed by the rewrite_tag filter. This filter changes the tag of the log record to the format out.*TASK_ID*.*CONTAINER_NAME*. 
+ The new tag is routed to the cloudwatch_logs output plug-in, which creates the log groups and streams as described earlier by using the `log_group_template` and `log_stream_prefix` options of the CloudWatch output plug-in. For additional information, see [Configuration parameters](https://docs.fluentbit.io/manual/v/1.9-pre/pipeline/outputs/cloudwatch#configuration-parameters) in the *Fluent Bit Official Manual*. 
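To see how the rewrite_tag rule in the configuration produces the new tag, the following sketch applies the same regular expression to a hypothetical task ARN (the account ID, cluster name, and task ID are made up):

```python
import re

# The same regex used in the Rule line of the rewrite_tag filter.
rule = re.compile(r"^([a-z-:0-9]+)/([a-zA-Z0-9-_]+)/([a-z0-9]+)$")

# Hypothetical values; a real record carries these in the ecs_task_arn and
# ecs_container_name keys after the modify filter renames the labels.
ecs_task_arn = "arn:aws:ecs:us-east-1:123456789012:task/FluentBit-cluster/1234567890abcdef0"
ecs_container_name = "sample-container"

m = rule.match(ecs_task_arn)
new_tag = f"out.{m.group(3)}.{ecs_container_name}"  # out.$3.$ecs_container_name
print(new_tag)  # prints out.1234567890abcdef0.sample-container
```

Combined with the `log_stream_prefix task-` output option, tags of this shape yield the `task-out.TASK_ID.CONTAINER_NAME` log stream names described earlier.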

## Step 4: Register a Windows Fluent Bit task definition which routes the logs to CloudWatch
<a name="tutorial-deploy-fluentbit-on-windows-register-task-definition"></a>

Register a Windows Fluent Bit task definition which routes the logs to CloudWatch.

**Note**  
This task definition exposes Fluent Bit container port 24224 to the host port 24224. Verify that this port is not open in your EC2 instance security group to prevent access from outside.

**To register a task definition**

1. Create a file named `fluent-bit.json` with the following contents.

   Replace the following variables:
   + *task-iam-role* with the Amazon Resource Name (ARN) of your task IAM role
   + *region* with the Region where your task runs

   ```
   {
     "family": "ecs-windows-fluent-bit",
     "taskRoleArn": "task-iam-role",
     "containerDefinitions": [
       {
         "name": "fluent-bit",
         "image": "public.ecr.aws/aws-observability/aws-for-fluent-bit:windowsservercore-latest",
         "cpu": 512,
         "portMappings": [
           {
             "hostPort": 24224,
             "containerPort": 24224,
             "protocol": "tcp"
           }
         ],
         "entryPoint": [
           "Powershell",
           "-Command"
         ],
         "command": [
           "C:\\entrypoint.ps1 -ConfigFile C:\\ecs_windows_forward_daemon\\cloudwatch.conf"
         ],
         "environment": [
           {
             "name": "AWS_REGION",
             "value": "region"
           }
         ],
         "memory": 512,
         "essential": true,
         "logConfiguration": {
           "logDriver": "awslogs",
           "options": {
             "awslogs-group": "/ecs/fluent-bit-logs",
             "awslogs-region": "region",
             "awslogs-stream-prefix": "flb",
             "awslogs-create-group": "true"
           }
         }
       }
     ],
     "memory": "512",
     "cpu": "512"
   }
   ```

1. Run the following command to register the task definition.

   ```
   aws ecs register-task-definition --cli-input-json file://fluent-bit.json --region region
   ```

   You can list the task definitions for your account by running the `list-task-definitions` command. The output displays the family and revision values that you can use together with `run-task` or `start-task`.

## Step 5: Run the `ecs-windows-fluent-bit` task definition as an Amazon ECS service using the daemon scheduling strategy
<a name="tutorial-deploy-fluentbit-on-windows-run-task"></a>

After you register a task definition for your account, you can run a task in the cluster. For this tutorial, you run one instance of the `ecs-windows-fluent-bit:1` task definition in your `FluentBit-cluster` cluster. Run the task in a service which uses the daemon scheduling strategy, which ensures that a single instance of Fluent Bit always runs on each of your container instances.

**To run a task**

1. Run the following command to start the `ecs-windows-fluent-bit:1` task definition (registered in the previous step) as a service.
**Note**  
Because this task definition uses the `awslogs` logging driver, your container instance needs to have the necessary permissions.

   Replace the following variables:
   + *region* with the Region where your service runs

   ```
   aws ecs create-service \
       --cluster FluentBit-cluster \
       --service-name FluentBitForwardDaemonService \
       --task-definition ecs-windows-fluent-bit:1 \
       --launch-type EC2 \
       --scheduling-strategy DAEMON \
       --region region
   ```

1. Run the following command to list your tasks.

   Replace the following variables:
   + *region* with the Region where your service tasks run

   ```
   aws ecs list-tasks --cluster FluentBit-cluster --region region
   ```
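
   To check that the daemon tasks have reached the `RUNNING` status, you can combine `list-tasks` with `describe-tasks`, similar to the following sketch:

   ```
   aws ecs describe-tasks \
       --cluster FluentBit-cluster \
       --tasks $(aws ecs list-tasks --cluster FluentBit-cluster --query 'taskArns[]' --output text) \
       --query 'tasks[].lastStatus' \
       --region region
   ```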

## Step 6: Register a Windows task definition which generates the logs
<a name="tutorial-deploy-fluentbit-on-windows-register-task-def-logs"></a>

Register a task definition that generates logs. This task definition deploys a Windows container image that writes an incrementing number to `stdout` every second.

The task definition uses the `fluentd` log driver, which connects to port 24224, where the Fluent Bit plugin listens. The Amazon ECS agent labels each Amazon ECS container with tags that include the cluster name, task ARN, task definition family name, task definition revision number, and the container name. These key-value labels are passed to Fluent Bit.

**Note**  
This task uses the `default` network mode. However, you can also use the `awsvpc` network mode with the task.

**To register a task definition**

1. Create a file named `windows-app-task.json` with the following contents.

   ```
   {
     "family": "windows-app-task",
     "containerDefinitions": [
       {
         "name": "sample-container",
         "image": "mcr.microsoft.com/windows/servercore:ltsc2019",
         "cpu": 512,
         "memory": 512,
         "essential": true,
         "entryPoint": [
           "Powershell",
           "-Command"
         ],
         "command": [
           "$count=1;while(1) { Write-Host $count; sleep 1; $count=$count+1;}"
         ],
         "logConfiguration": {
           "logDriver": "fluentd",
           "options": {
             "fluentd-address": "localhost:24224",
             "tag": "{{ index .ContainerLabels \"com.amazonaws.ecs.task-definition-family\" }}",
             "fluentd-async": "true",
             "labels": "com.amazonaws.ecs.cluster,com.amazonaws.ecs.container-name,com.amazonaws.ecs.task-arn,com.amazonaws.ecs.task-definition-family,com.amazonaws.ecs.task-definition-version"
           }
         }
       }
     ],
     "memory": "512",
     "cpu": "512"
   }
   ```

1. Run the following command to register the task definition.

   Replace the following variables:
   + *region* with the Region where your task runs

   ```
   aws ecs register-task-definition --cli-input-json file://windows-app-task.json --region region
   ```

   You can list the task definitions for your account by running the `list-task-definitions` command. The output displays the family and revision values that you can use with `run-task` or `start-task`.

## Step 7: Run the `windows-app-task` task definition
<a name="tutorial-deploy-fluentbit-on-windows-run-task-fluentbit"></a>

After you register the `windows-app-task` task definition, run it in your `FluentBit-cluster` cluster.

**To run a task**

1. Run the `windows-app-task:1` task definition you registered in the previous step.

   Replace the following variables:
   + *region* with the Region where your task runs

   ```
   aws ecs run-task --cluster FluentBit-cluster --task-definition windows-app-task:1 --count 2 --region region
   ```

1. Run the following command to list your tasks.

   ```
   aws ecs list-tasks --cluster FluentBit-cluster
   ```

## Step 8: Verify the logs in CloudWatch
<a name="tutorial-deploy-fluentbit-on-windows-verify"></a>

To verify your Fluent Bit setup, check for the following log groups in the CloudWatch console:
+ `/ecs/fluent-bit-logs` - This log group corresponds to the Fluent Bit daemon container that runs on the container instance.
+ `/aws/ecs/FluentBit-cluster.windows-app-task` - This log group corresponds to all the tasks launched for the `windows-app-task` task definition family in the `FluentBit-cluster` cluster. It contains the following log streams:
  + `task-out.FIRST_TASK_ID.sample-container` - This log stream contains all the logs generated by the first instance of the task in the `sample-container` container.
  + `task-out.SECOND_TASK_ID.sample-container` - This log stream contains all the logs generated by the second instance of the task in the `sample-container` container.

The `task-out.TASK_ID.sample-container` log stream has fields similar to the following:

```
{
    "source": "stdout",
    "ecs_task_arn": "arn:aws:ecs:region:0123456789012:task/FluentBit-cluster/13EXAMPLE",
    "container_name": "/ecs-windows-app-task-1-sample-container-cEXAMPLE",
    "ecs_cluster": "FluentBit-cluster",
    "ecs_container_name": "sample-container",
    "ecs_task_definition_version": "1",
    "container_id": "61f5e6EXAMPLE",
    "log": "10",
    "ecs_task_definition_family": "windows-app-task"
}
```
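
If you want to search these records, you can run a CloudWatch Logs Insights query against the `/aws/ecs/FluentBit-cluster.windows-app-task` log group. The following sketch filters on the field names shown in the example record:

```
fields @timestamp, log, ecs_task_arn
| filter ecs_task_definition_family = "windows-app-task"
| sort @timestamp desc
| limit 20
```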

**To verify the Fluent Bit setup**

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the navigation pane, choose **Log groups**. Make sure that you're in the Region where you deployed Fluent Bit to your containers.

   In the list of log groups in the AWS Region, you should see the following:
   + `/ecs/fluent-bit-logs`
   + `/aws/ecs/FluentBit-cluster.windows-app-task`

   If you see these log groups, the Fluent Bit setup is verified.

## Step 9: Clean up
<a name="tutorial-deploy-fluentbit-on-windows-cleanup"></a>

When you have finished this tutorial, clean up the resources associated with it to avoid incurring charges for resources that you aren't using. 

**To clean up the tutorial resources**

1. Stop the `windows-app-task` tasks, and delete the `FluentBitForwardDaemonService` service to stop the `ecs-windows-fluent-bit` daemon tasks. For more information, see [Stopping an Amazon ECS task](standalone-task-stop.md).

1. Run the following commands to delete the `/ecs/fluent-bit-logs` and `/aws/ecs/FluentBit-cluster.windows-app-task` log groups. For more information about deleting log groups, see [delete-log-group](https://docs.aws.amazon.com/cli/latest/reference/logs/delete-log-group.html) in the *AWS Command Line Interface Reference*.

   ```
   aws logs delete-log-group --log-group-name /ecs/fluent-bit-logs
   aws logs delete-log-group --log-group-name /aws/ecs/FluentBit-cluster.windows-app-task
   ```

1. Run the following command to terminate the instance.

   ```
   aws ec2 terminate-instances --instance-ids instance-id
   ```

1. Run the following commands to delete the IAM roles. 

   ```
   aws iam delete-role --role-name ecsInstanceRole
   aws iam delete-role --role-name fluentTaskRole
   ```
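
   If `delete-role` returns a `DeleteConflict` error, the role still has attached policies or belongs to an instance profile, and you must remove those first. The instance profile and policy names below are typical examples; substitute the ones attached to your roles:

   ```
   aws iam remove-role-from-instance-profile --instance-profile-name ecsInstanceRole --role-name ecsInstanceRole
   aws iam detach-role-policy --role-name ecsInstanceRole --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role
   ```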

1. Run the following command to delete the Amazon ECS cluster.

   ```
   aws ecs delete-cluster --cluster FluentBit-cluster
   ```

# Using gMSA for EC2 Linux containers on Amazon ECS
<a name="linux-gmsa"></a>

Amazon ECS supports Active Directory authentication for Linux containers on EC2 through a special kind of service account called a *group Managed Service Account* (gMSA).

Linux based network applications, such as .NET Core applications, can use Active Directory to facilitate authentication and authorization management between users and services. You can use this feature by designing applications that integrate with Active Directory and run on domain-joined servers. But, because Linux containers can't be domain-joined, you need to configure a Linux container to run with gMSA.

A Linux container that runs with gMSA relies on the `credentials-fetcher` daemon that runs on the container's host Amazon EC2 instance. That is, the daemon retrieves the gMSA credentials from the Active Directory domain controller and then transfers these credentials to the container instance. For more information about service accounts, see [Create gMSAs for Windows containers](https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/manage-serviceaccounts) on the Microsoft Learn website.

## Considerations
<a name="linux-gmsa-considerations"></a>

Consider the following before you use gMSA for Linux containers:
+ If your containers run on EC2, you can use gMSA for Windows containers and Linux containers. For information about how to use gMSA for Linux containers on Fargate, see [Using gMSA for Linux containers on Fargate](fargate-linux-gmsa.md).
+ You might need a Windows computer that's joined to the domain to complete the prerequisites. For example, you might need one to create the gMSA in Active Directory with PowerShell, because the RSAT Active Directory PowerShell tools are only available for Windows. For more information, see [Installing the Active Directory administration tools](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_install_ad_tools.html).
+ Choose between **domainless gMSA** and **joining each instance to a single domain**. By using domainless gMSA, the container instance isn't joined to the domain, other applications on the instance can't use the credentials to access the domain, and tasks that join different domains can run on the same instance.

  Then, choose the data storage for the CredSpec and optionally, for the Active Directory user credentials for domainless gMSA.

  Amazon ECS uses an Active Directory credential specification file (CredSpec). This file contains the gMSA metadata that's used to propagate the gMSA account context to the container. You generate the CredSpec file and then store it in one of the CredSpec storage options in the following table, specific to the operating system of the container instances. To use the domainless method, an optional section in the CredSpec file can specify credentials in one of the *domainless user credentials* storage options in the following table, specific to the operating system of the container instances.
<a name="gmsa-table"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/linux-gmsa.html)

## Prerequisites
<a name="linux-gmsa-prerequisites"></a>

Before you use the gMSA for Linux containers feature with Amazon ECS, make sure to complete the following:
+ You set up an Active Directory domain with the resources that you want your containers to access. Amazon ECS supports the following setups:
  + An AWS Directory Service Active Directory. AWS Directory Service is an AWS managed Active Directory that's hosted on Amazon EC2. For more information, see [Getting Started with AWS Managed Microsoft AD](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_getting_started.html) in the *AWS Directory Service Administration Guide*.
  + An on-premises Active Directory. You must ensure that the Amazon ECS Linux container instance can join the domain. For more information, see [AWS Direct Connect](https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect.html).
+ You have an existing gMSA account in the Active Directory. For more information, see [Using gMSA for EC2 Linux containers on Amazon ECS](#linux-gmsa).
+ You installed and are running the `credentials-fetcher` daemon on an Amazon ECS Linux container instance. You also added an initial set of credentials to the `credentials-fetcher` daemon to authenticate with the Active Directory.
**Note**  
The `credentials-fetcher` daemon is only available for Amazon Linux 2023 and Fedora 37 and later. The daemon isn't available for Amazon Linux 2. For more information, see [aws/credentials-fetcher](https://github.com/aws/credentials-fetcher) on GitHub.
+ You set up the credentials for the `credentials-fetcher` daemon to authenticate with the Active Directory. The credentials must belong to a user that's a member of the Active Directory security group with access to the gMSA account. For the available options, see [Decide if you want to join the instances to the domain, or use domainless gMSA.](#linux-gmsa-initial-creds).
+ You added the required IAM permissions. The permissions that are required depend on the methods that you choose for the initial credentials and for storing the credential specification:
  + If you use *domainless gMSA* for initial credentials, IAM permissions for AWS Secrets Manager are required on the task execution role.
  + If you store the credential specification in SSM Parameter Store, IAM permissions for AWS Systems Manager Parameter Store are required on the task execution role.
  + If you store the credential specification in Amazon S3, IAM permissions for Amazon Simple Storage Service are required on the task execution role.

## Setting up gMSA-capable Linux Containers on Amazon ECS
<a name="linux-gmsa-setup"></a>
<a name="linux-gmsa-setup-infra"></a>
**Prepare the infrastructure**  
The following steps cover one-time considerations and setup. After you complete these steps, you can automate creating container instances that reuse this configuration.

Decide how the initial credentials are provided and configure the EC2 user data in a reusable EC2 launch template to install the `credentials-fetcher` daemon.

1. <a name="linux-gmsa-initial-creds"></a>

**Decide if you want to join the instances to the domain, or use domainless gMSA.**
   + <a name="linux-gmsa-initial-join"></a>

**Join EC2 instances to the Active Directory domain**

     
     + <a name="linux-gmsa-initial-join-userdata"></a>

**Join the instances by user data**

       Add the steps to join the Active Directory domain to your EC2 user data in an EC2 launch template. Multiple Amazon EC2 Auto Scaling groups can use the same launch template.

        You can follow the steps in [Joining an Active Directory or FreeIPA domain](https://docs.fedoraproject.org/en-US/quick-docs/join-active-directory-freeipa/) in the Fedora Docs.
   + <a name="linux-gmsa-initial-domainless"></a>

**Make an Active Directory user for domainless gMSA**

     The `credentials-fetcher` daemon has a feature that's called *domainless gMSA*. This feature requires a domain, but the EC2 instance doesn't need to be joined to the domain. By using domainless gMSA, the container instance isn't joined to the domain, other applications on the instance can't use the credentials to access the domain, and tasks that join different domains can run on the same instance. Instead, you provide the name of a secret in AWS Secrets Manager in the CredSpec file. The secret must contain a username, password, and the domain to log in to.

     This feature is supported and can be used with Linux and Windows containers.

     This feature is similar to the *gMSA support for non-domain-joined container hosts* feature. For more information about the Windows feature, see [gMSA architecture and improvements](https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/manage-serviceaccounts#gmsa-architecture-and-improvements) on the Microsoft Learn website.

     1. Make a user in your Active Directory domain. The user in Active Directory must have permission to access the gMSA service accounts that you use in the tasks.

     1. Create a secret in AWS Secrets Manager, after you made the user in Active Directory. For more information, see [Create an AWS Secrets Manager secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_secret.html).

      1. Enter the user's username, password, and the domain into JSON key-value pairs called `username`, `password`, and `domainName`, respectively.

        ```
        {"username":"username","password":"passw0rd", "domainName":"example.com"}
        ```

     1. Add configuration to the CredSpec file for the service account. The additional `HostAccountConfig` contains the Amazon Resource Name (ARN) of the secret in Secrets Manager.

        On Windows, the `PluginGUID` must match the GUID in the following example snippet. On Linux, the `PluginGUID` is ignored. Replace the `MySecret` example with the Amazon Resource Name (ARN) of your secret.

        ```
            "ActiveDirectoryConfig": {
                "HostAccountConfig": {
                    "PortableCcgVersion": "1",
                    "PluginGUID": "{859E1386-BDB4-49E8-85C7-3070B13920E1}",
                    "PluginInput": {
                        "CredentialArn": "arn:aws:secretsmanager:aws-region:111122223333:secret:MySecret"
                    }
                }
        ```

     1. The *domainless gMSA* feature needs additional permissions in the task execution role. Follow the step [(Optional) domainless gMSA secret](#linux-gmsa-domainless-secret).

1. <a name="linux-gmsa-install"></a>

**Configure instances and install `credentials-fetcher` daemon**

   You can install the `credentials-fetcher` daemon with a user data script in your EC2 Launch Template. The following examples demonstrate two types of user data, `cloud-config` YAML or bash script. These examples are for Amazon Linux 2023 (AL2023). Replace `MyCluster` with the name of the Amazon ECS cluster that you want these instances to join.
   + <a name="linux-gmsa-install-yaml"></a>

**`cloud-config` YAML**

     ```
     Content-Type: text/cloud-config
     package_reboot_if_required: true
     packages:
       # prerequisites
       - dotnet
       - realmd
       - oddjob
       - oddjob-mkhomedir
       - sssd
       - adcli
       - krb5-workstation
       - samba-common-tools
       # https://github.com/aws/credentials-fetcher gMSA credentials management for containers
       - credentials-fetcher
     write_files:
     # configure the ECS Agent to join your cluster.
     # replace MyCluster with the name of your cluster.
     - path: /etc/ecs/ecs.config
       owner: root:root
       permissions: '0644'
       content: |
         ECS_CLUSTER=MyCluster
         ECS_GMSA_SUPPORTED=true
     runcmd:
     # start the credentials-fetcher daemon and if it succeeded, make it start after every reboot
     - "systemctl start credentials-fetcher"
     - "systemctl is-active credentials-fetcher && systemctl enable credentials-fetcher"
     ```
   + <a name="linux-gmsa-install-userdata"></a>

**bash script**

     If you're more comfortable with bash scripts and have multiple variables to write to `/etc/ecs/ecs.config`, use the following `heredoc` format. This format writes everything between the lines beginning with **cat** and `EOF` to the configuration file.

     ```
     #!/usr/bin/env bash
     set -euxo pipefail
     
     # prerequisites
     timeout 30 dnf install -y dotnet realmd oddjob oddjob-mkhomedir sssd adcli krb5-workstation samba-common-tools
     # install https://github.com/aws/credentials-fetcher gMSA credentials management for containers
     timeout 30 dnf install -y credentials-fetcher
     
     # start credentials-fetcher
     systemctl start credentials-fetcher
     systemctl is-active credentials-fetcher && systemctl enable credentials-fetcher
     
     cat <<'EOF' >> /etc/ecs/ecs.config
     ECS_CLUSTER=MyCluster
     ECS_GMSA_SUPPORTED=true
     EOF
     ```

   There are optional configuration variables for the `credentials-fetcher` daemon that you can set in `/etc/ecs/ecs.config`. We recommend that you set the variables in the user data in the YAML block or `heredoc` similar to the previous examples. Doing so prevents issues with partial configuration that can happen with editing a file multiple times. For more information about the ECS agent configuration, see [Amazon ECS Container Agent](https://github.com/aws/amazon-ecs-agent/blob/master/README.md#environment-variables) on GitHub.
   + Optionally, you can use the variable `CREDENTIALS_FETCHER_HOST` if you change the `credentials-fetcher` daemon configuration to move the socket to another location.

**Setting up permissions and secrets**  
Do the following steps once for each application and each task definition. We recommend that you use the best practice of granting the least privilege and narrow the permissions used in the policy. This way, each task can only read the secrets that it needs.

1. <a name="linux-gmsa-domainless-secret"></a>

**(Optional) domainless gMSA secret**

   If you use the domainless method where the instance isn't joined to the domain, follow this step.

   You must add the following permissions as an inline policy to the task execution IAM role. Doing so gives the `credentials-fetcher` daemon access to the Secrets Manager secret. Replace the `MySecret` example with the Amazon Resource Name (ARN) of your secret in the `Resource` list.

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "secretsmanager:GetSecretValue"
               ],
               "Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-secret-AbCdEf"
           }
       ]
   }
   ```

------
**Note**  
If you use your own KMS key to encrypt your secret, you must add the necessary permissions to this role and add this role to the AWS KMS key policy.

1. 

**Decide if you're using SSM Parameter Store or S3 to store the CredSpec**

   Amazon ECS supports the following ways to reference the file path in the `credentialSpecs` field of the task definition.

   If you join the instances to a single domain, use the prefix `credentialspec:` at the start of the ARN in the string. If you use domainless gMSA, then use `credentialspecdomainless:`.

   For more information about the CredSpec, see [Credential specification file](#linux-gmsa-credentialspec).
   + <a name="linux-gmsa-credspec-s3"></a>

**Amazon S3 Bucket**

     Add the credential spec to an Amazon S3 bucket. Then, reference the Amazon Resource Name (ARN) of the Amazon S3 bucket in the `credentialSpecs` field of the task definition.

     ```
     {
         "family": "",
         "executionRoleArn": "",
         "containerDefinitions": [
             {
                 "name": "",
                 ...
                 "credentialSpecs": [
                     "credentialspecdomainless:arn:aws:s3:::${BucketName}/${ObjectName}"
                 ],
                 ...
             }
         ],
         ...
     }
     ```
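
      For example, you can upload the CredSpec file with the following command. Replace the bucket and object names with your own:

      ```
      aws s3 cp credspec.json s3://amzn-s3-demo-bucket/credspec.json
      ```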

     To give your tasks access to the S3 bucket, add the following permissions as an inline policy to the Amazon ECS task execution IAM role.

------
#### [ JSON ]

****  

     ```
     {
          "Version": "2012-10-17",
         "Statement": [
             {
                 "Sid": "VisualEditor",
                 "Effect": "Allow",
                 "Action": [
                     "s3:Get*",
                     "s3:List*"
                 ],
                 "Resource": [
                     "arn:aws:s3:::amzn-s3-demo-bucket",
                     "arn:aws:s3:::amzn-s3-demo-bucket/{object}"
                 ]
             }
         ]
     }
     ```

------
   + <a name="linux-gmsa-credspec-ssm"></a>

**SSM Parameter Store parameter**

     Add the credential spec to an SSM Parameter Store parameter. Then, reference the Amazon Resource Name (ARN) of the SSM Parameter Store parameter in the `credentialSpecs` field of the task definition.

     ```
     {
         "family": "",
         "executionRoleArn": "",
         "containerDefinitions": [
             {
                 "name": "",
                 ...
                 "credentialSpecs": [
                     "credentialspecdomainless:arn:aws:ssm:aws-region:111122223333:parameter/parameter_name"
                 ],
                 ...
             }
         ],
         ...
     }
     ```
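
      For example, you can store the CredSpec file with `put-parameter`. The `file://` prefix reads the parameter value from the local file. Replace the parameter name with your own:

      ```
      aws ssm put-parameter \
          --name /ecs/credspec \
          --type String \
          --value file://credspec.json \
          --region aws-region
      ```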

     To give your tasks access to the SSM Parameter Store parameter, add the following permissions as an inline policy to the Amazon ECS task execution IAM role.

------
#### [ JSON ]

****  

     ```
     {
          "Version": "2012-10-17",
         "Statement": [
             {
                 "Effect": "Allow",
                 "Action": [
                     "ssm:GetParameters"
                 ],
                 "Resource": "arn:aws:ssm:us-east-1:123456789012:parameter/my-parameter"
             }
         ]
     }
     ```

------

## Credential specification file
<a name="linux-gmsa-credentialspec"></a>

Amazon ECS uses an Active Directory credential specification file (*CredSpec*). This file contains the gMSA metadata that's used to propagate the gMSA account context to the Linux container. You generate the CredSpec and reference it in the `credentialSpecs` field in your task definition. The CredSpec file doesn't contain any secrets.

The following is an example CredSpec file.

```
{
    "CmsPlugins": [
        "ActiveDirectory"
    ],
    "DomainJoinConfig": {
        "Sid": "S-1-5-21-2554468230-2647958158-2204241789",
        "MachineAccountName": "WebApp01",
        "Guid": "8665abd4-e947-4dd0-9a51-f8254943c90b",
        "DnsTreeName": "example.com",
        "DnsName": "example.com",
        "NetBiosName": "example"
    },
    "ActiveDirectoryConfig": {
        "GroupManagedServiceAccounts": [
            {
                "Name": "WebApp01",
                "Scope": "example.com"
            }
        ],
        "HostAccountConfig": {
            "PortableCcgVersion": "1",
            "PluginGUID": "{859E1386-BDB4-49E8-85C7-3070B13920E1}",
            "PluginInput": {
                "CredentialArn": "arn:aws:secretsmanager:aws-region:111122223333:secret:MySecret"
            }
        }
    }
}
```
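
Before you upload a CredSpec file, you can check it locally. The following sketch isn't part of the Amazon ECS or `credentials-fetcher` tooling; it recreates a minimal CredSpec and verifies that the file is valid JSON that lists the `ActiveDirectory` CMS plug-in and at least one gMSA:

```shell
# Write a minimal CredSpec using illustrative values from the example above.
cat > credspec.json <<'EOF'
{
  "CmsPlugins": ["ActiveDirectory"],
  "ActiveDirectoryConfig": {
    "GroupManagedServiceAccounts": [
      {"Name": "WebApp01", "Scope": "example.com"}
    ]
  }
}
EOF

# Parse the file and fail if the required fields are missing.
python3 - <<'EOF'
import json

with open("credspec.json") as f:
    credspec = json.load(f)

assert "ActiveDirectory" in credspec["CmsPlugins"]
accounts = credspec["ActiveDirectoryConfig"]["GroupManagedServiceAccounts"]
assert accounts, "expected at least one gMSA entry"
print(accounts[0]["Name"])
EOF
```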
<a name="linux-gmsa-credentialspec-create"></a>
**Creating a CredSpec**  
You create a CredSpec by using the CredSpec PowerShell module on a Windows computer that's joined to the domain. Follow the steps in [Create a credential spec](https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/manage-serviceaccounts#create-a-credential-spec) on the Microsoft Learn website.

# Using gMSA for Linux containers on Fargate
<a name="fargate-linux-gmsa"></a>

Amazon ECS supports Active Directory authentication for Linux containers on Fargate through a special kind of service account called a *group Managed Service Account* (gMSA).

Linux based network applications, such as .NET Core applications, can use Active Directory to facilitate authentication and authorization management between users and services. You can use this feature by designing applications that integrate with Active Directory and run on domain-joined servers. But, because Linux containers can't be domain-joined, you need to configure a Linux container to run with gMSA.

## Considerations
<a name="fargate-linux-gmsa-considerations"></a>

Consider the following before you use gMSA for Linux containers on Fargate:
+ You must be running Platform Version 1.4 or later.
+ You might need a Windows computer that's joined to the domain to complete the prerequisites. For example, you might need one to create the gMSA in Active Directory with PowerShell, because the RSAT Active Directory PowerShell tools are only available for Windows. For more information, see [Installing the Active Directory administration tools](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_install_ad_tools.html).
+ You must use **domainless gMSA**. 

  Amazon ECS uses an Active Directory credential specification file (CredSpec). This file contains the gMSA metadata that's used to propagate the gMSA account context to the container. You generate the CredSpec file, and then store it in an Amazon S3 bucket.
+ A task can only support one Active Directory.

## Prerequisites
<a name="fargate-linux-gmsa-prerequisites"></a>

Before you use the gMSA for Linux containers feature with Amazon ECS, make sure to complete the following:
+ You set up an Active Directory domain with the resources that you want your containers to access. Amazon ECS supports the following setups:
  + An AWS Directory Service Active Directory. AWS Directory Service is an AWS managed Active Directory that's hosted on Amazon EC2. For more information, see [Getting Started with AWS Managed Microsoft AD](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_getting_started.html) in the *AWS Directory Service Administration Guide*.
  + An on-premises Active Directory. You must ensure that the Amazon ECS Linux container instance can join the domain. For more information, see [AWS Direct Connect](https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect-network-to-amazon.html).
+ You have an existing gMSA account in the Active Directory and a user that has permission to access the gMSA service account. For more information, see [Make an Active Directory user for domainless gMSA](#fargate-linux-gmsa-initial-domainless).
+ You have an Amazon S3 bucket. For more information, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) in the *Amazon S3 User Guide*.

## Setting up gMSA-capable Linux Containers on Amazon ECS
<a name="fargate-linux-gmsa-setup"></a>
<a name="linux-gmsa-setup-infra"></a>
**Prepare the infrastructure**  
The following steps cover one-time considerations and setup.
+ <a name="fargate-linux-gmsa-initial-domainless"></a>

**Make an Active Directory user for domainless gMSA**

  When you use domainless gMSA, the container isn't joined to the domain. Other applications that run on the container can't use the credentials to access the domain. Tasks that use a different domain can run on the same container. You provide the name of a secret in AWS Secrets Manager in the CredSpec file. The secret must contain a username, password, and the domain to log in to.

  This feature is similar to the *gMSA support for non-domain-joined container hosts* feature. For more information about the Windows feature, see [gMSA architecture and improvements](https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/manage-serviceaccounts#gmsa-architecture-and-improvements) on the Microsoft Learn website.

  1. Configure a user in your Active Directory domain. The user in the Active Directory must have permission to access the gMSA service account that you use in the tasks.

  1. Ensure that you have a VPC and subnets that can resolve the Active Directory domain name. Configure the VPC with a DHCP options set that points the domain name to the Active Directory. For information about how to configure DHCP options for a VPC, see [Work with DHCP option sets](https://docs.aws.amazon.com/vpc/latest/userguide/DHCPOptionSet.html) in the *Amazon Virtual Private Cloud User Guide*.

  1. Create a secret in AWS Secrets Manager. 

  1. Create the credential specification file.

**Setting up permissions and secrets**  
Do the following steps one time for each application and each task definition. We recommend that you use the best practice of granting the least privilege and narrow the permissions used in the policy. This way, each task can only read the secrets that it needs.

1. Make a user in your Active Directory domain. The user in Active Directory must have permission to access the gMSA service accounts that you use in the tasks.

1. After you make the Active Directory user, create a secret in AWS Secrets Manager. For more information, see [Create an AWS Secrets Manager secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_secret.html).

1. Enter the user's username, password, and the domain into JSON key-value pairs called `username`, `password`, and `domainName`, respectively.

   ```
   {"username":"username","password":"passw0rd", "domainName":"example.com"}
   ```
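
   You can create the secret from the AWS CLI. The secret name `gmsa-plugin-input` is a placeholder; choose your own:

   ```
   aws secretsmanager create-secret \
       --name gmsa-plugin-input \
       --secret-string '{"username":"username","password":"passw0rd","domainName":"example.com"}' \
       --region aws-region
   ```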

1. <a name="fargate-linux-gmsa-domainless-secret"></a>You must add the following permissions as an inline policy to the task execution IAM role. Doing so gives the `credentials-fetcher` daemon access to the Secrets Manager secret. Replace the `MySecret` example with the Amazon Resource Name (ARN) of your secret in the `Resource` list.

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "secretsmanager:GetSecretValue"
               ],
               "Resource": [
                    "arn:aws:secretsmanager:us-east-1:111122223333:secret:MySecret"
               ]
           }
       ]
   }
   ```

------
**Note**  
If you use your own KMS key to encrypt your secret, you must add the necessary permissions to this role and add this role to the AWS KMS key policy.

1. <a name="linux-gmsa-credspec-ssm"></a>Add the credential spec to an Amazon S3 bucket. Then, reference the Amazon Resource Name (ARN) of the CredSpec file in Amazon S3 in the `credentialSpecs` field of the task definition.

   ```
   {
       "family": "",
       "executionRoleArn": "",
       "containerDefinitions": [
           {
               "name": "",
               ...
               "credentialSpecs": [
                   "credentialspecdomainless:arn:aws:s3:::${BucketName}/${ObjectName}"
               ],
               ...
           }
       ],
       ...
   }
   ```

   To give your tasks access to the S3 bucket, add the following permissions as an inline policy to the Amazon ECS task execution IAM role.

------
#### [ JSON ]

****  

   ```
   {
        "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "VisualEditor",
               "Effect": "Allow",
               "Action": [
                    "s3:GetObjectVersion",
                    "s3:ListBucket"
               ],
               "Resource": [
                   "arn:aws:s3:::{bucket_name}",
                   "arn:aws:s3:::{bucket_name}/{object}"
               ]
           }
       ]
   }
   ```

------
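As an illustration of how the pieces fit together, a `credentialspecdomainless:` reference can be split into the bucket and object names that the policy above must allow. The following is a minimal Python sketch; the helper name is ours, not part of any AWS SDK.

```python
def parse_credspec_reference(entry):
    """Split a credentialSpecs entry into its prefix, S3 bucket, and object key."""
    prefix, _, arn = entry.partition(":arn:")
    # An S3 ARN looks like arn:aws:s3:::bucket/key; the resource part is the
    # sixth colon-delimited field.
    resource = ("arn:" + arn).split(":", 5)[5]
    bucket, _, key = resource.partition("/")
    return prefix, bucket, key

prefix, bucket, key = parse_credspec_reference(
    "credentialspecdomainless:arn:aws:s3:::amzn-s3-demo-bucket/ecs-domainless-gmsa-credspec"
)
# prefix == "credentialspecdomainless", bucket == "amzn-s3-demo-bucket",
# key == "ecs-domainless-gmsa-credspec"
```

The bucket and key that this yields are the values that must appear in the `Resource` list of the task execution role policy.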

## Credential specification file
<a name="fargate-linux-gmsa-credentialspec"></a>

Amazon ECS uses an Active Directory credential specification file (*CredSpec*). This file contains the gMSA metadata that's used to propagate the gMSA account context to the Linux container. You generate the CredSpec and reference it in the `credentialSpecs` field in your task definition. The CredSpec file doesn't contain any secrets.

The following is an example CredSpec file.

```
{
    "CmsPlugins": [
        "ActiveDirectory"
    ],
    "DomainJoinConfig": {
        "Sid": "S-1-5-21-2554468230-2647958158-2204241789",
        "MachineAccountName": "WebApp01",
        "Guid": "8665abd4-e947-4dd0-9a51-f8254943c90b",
        "DnsTreeName": "example.com",
        "DnsName": "example.com",
        "NetBiosName": "example"
    },
    "ActiveDirectoryConfig": {
        "GroupManagedServiceAccounts": [
            {
                "Name": "WebApp01",
                "Scope": "example.com"
            }
        ],
        "HostAccountConfig": {
            "PortableCcgVersion": "1",
            "PluginGUID": "{859E1386-BDB4-49E8-85C7-3070B13920E1}",
            "PluginInput": {
                "CredentialArn": "arn:aws:secretsmanager:aws-region:111122223333:secret:MySecret"
            }
        }
    }
}
```
<a name="fargate-linux-gmsa-credentialspec-create"></a>
**Creating a CredSpec and uploading it to Amazon S3**  
You create a CredSpec by using the CredSpec PowerShell module on a Windows computer that's joined to the domain. Follow the steps in [Create a credential spec](https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/manage-serviceaccounts#create-a-credential-spec) on the Microsoft Learn website.

After you create the credential specification file, upload it to an Amazon S3 bucket. Copy the CredSpec file to the computer or environment that you are running AWS CLI commands in.

Run the following AWS CLI command to upload the CredSpec to Amazon S3. Replace `amzn-s3-demo-bucket` with the name of your Amazon S3 bucket. You can store the file as an object in any bucket and location, but you must allow access to that bucket and location in the policy that you attach to the task execution role.

For PowerShell, use the following command:

```
PS C:\> Write-S3Object -BucketName "amzn-s3-demo-bucket" -Key "ecs-domainless-gmsa-credspec" -File "gmsa-cred-spec.json"
```

The following AWS CLI command uses backslash continuation characters that are used by `sh` and compatible shells. 

```
$ aws s3 cp gmsa-cred-spec.json \
s3://amzn-s3-demo-bucket/ecs-domainless-gmsa-credspec
```
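Before uploading, you can sanity-check the CredSpec file locally. The following Python sketch checks only for the top-level sections and the plugin input shown in the example above; it is an illustrative helper under those assumptions, not an official validator.

```python
import json

REQUIRED_SECTIONS = ("CmsPlugins", "DomainJoinConfig", "ActiveDirectoryConfig")

def validate_credspec(text):
    """Parse a CredSpec document and check that its top-level sections exist."""
    credspec = json.loads(text)
    missing = [section for section in REQUIRED_SECTIONS if section not in credspec]
    if missing:
        raise ValueError(f"CredSpec is missing sections: {missing}")
    # Domainless gMSA additionally needs the ccg.exe plugin configuration
    # that carries the Secrets Manager ARN.
    host_config = credspec["ActiveDirectoryConfig"].get("HostAccountConfig", {})
    if "PluginInput" not in host_config:
        raise ValueError("HostAccountConfig.PluginInput is required for domainless gMSA")
    return credspec
```

Running this against `gmsa-cred-spec.json` catches structural mistakes locally instead of at task launch.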

# Using Amazon ECS Windows containers with domainless gMSA using the AWS CLI
<a name="tutorial-gmsa-windows"></a>

The following tutorial shows how to use the AWS CLI to create an Amazon ECS task that runs a Windows container with credentials to access Active Directory. By using domainless gMSA, the container instance isn't joined to the domain, other applications on the instance can't use the credentials to access the domain, and tasks that join different domains can run on the same instance.

**Topics**
+ [Prerequisites](#tutorial-gmsa-windows-prerequisites)
+ [Step 1: Create and configure the gMSA account on Active Directory Domain Services (AD DS)](#tutorial-gmsa-windows-step1)
+ [Step 2: Upload Credentials to Secrets Manager](#tutorial-gmsa-windows-step2)
+ [Step 3: Modify your CredSpec JSON to include domainless gMSA information](#tutorial-gmsa-windows-step3)
+ [Step 4: Upload CredSpec to Amazon S3](#tutorial-gmsa-windows-step4)
+ [Step 5: (Optional) Create an Amazon ECS cluster](#tutorial-gmsa-windows-step5)
+ [Step 6: Create an IAM role for container instances](#tutorial-gmsa-windows-step6)
+ [Step 7: Create a custom task execution role](#tutorial-gmsa-windows-step7)
+ [Step 8: Create a task role for Amazon ECS Exec](#tutorial-gmsa-windows-step8)
+ [Step 9: Register a task definition that uses domainless gMSA](#tutorial-gmsa-windows-step9)
+ [Step 10: Register a Windows container instance to the cluster](#tutorial-gmsa-windows-step10)
+ [Step 11: Verify the container instance](#tutorial-gmsa-windows-step11)
+ [Step 12: Run a Windows task](#tutorial-gmsa-windows-step12)
+ [Step 13: Verify the container has gMSA credentials](#tutorial-gmsa-windows-step13)
+ [Step 14: Clean up](#tutorial-gmsa-windows-step14)
+ [Debugging Amazon ECS domainless gMSA for Windows containers](#tutorial-gmsa-windows-debugging)

## Prerequisites
<a name="tutorial-gmsa-windows-prerequisites"></a>

This tutorial assumes that the following prerequisites have been completed:
+ The steps in [Set up to use Amazon ECS](get-set-up-for-amazon-ecs.md) have been completed.
+ Your IAM user has the required permissions specified in the [AmazonECS_FullAccess](security-iam-awsmanpol.md#security-iam-awsmanpol-AmazonECS_FullAccess) IAM policy example.
+  The latest version of the AWS CLI is installed and configured. For more information about installing or upgrading your AWS CLI, see [Installing the AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
**Note**  
You can use dual-stack service endpoints to interact with Amazon ECS from the AWS CLI, SDKs, and the Amazon ECS API over both IPv4 and IPv6. For more information, see [Using Amazon ECS dual-stack endpoints](dual-stack-endpoint.md).
+ You set up an Active Directory domain with the resources that you want your containers to access. Amazon ECS supports the following setups:
  + An AWS Directory Service Active Directory. AWS Directory Service is a managed Active Directory that's hosted on Amazon EC2. For more information, see [Getting Started with AWS Managed Microsoft AD](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_getting_started.html) in the *AWS Directory Service Administration Guide*.
  + An on-premises Active Directory. You must ensure that the Amazon ECS container instance can join the domain. For more information, see [AWS Direct Connect](https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect-network-to-amazon.html).
+ You have a VPC and subnets that can resolve the Active Directory domain name.
+ You chose between **domainless gMSA** and **joining each instance to a single domain**. By using domainless gMSA, the container instance isn't joined to the domain, other applications on the instance can't use the credentials to access the domain, and tasks that join different domains can run on the same instance.

  Then, choose the data storage for the CredSpec and optionally, for the Active Directory user credentials for domainless gMSA.

  Amazon ECS uses an Active Directory credential specification file (CredSpec). This file contains the gMSA metadata that's used to propagate the gMSA account context to the container. You generate the CredSpec file and then store it in one of the CredSpec storage options in the following table, specific to the operating system of the container instances. To use the domainless method, an optional section in the CredSpec file can specify credentials in one of the *domainless user credentials* storage options in the following table, specific to the operating system of the container instances.    
<a name="gmsa-table"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/tutorial-gmsa-windows.html)
+ (Optional) AWS CloudShell is a tool that gives you a command line without needing to create your own EC2 instance. For more information, see [What is AWS CloudShell?](https://docs.aws.amazon.com/cloudshell/latest/userguide/welcome.html) in the *AWS CloudShell User Guide*.

## Step 1: Create and configure the gMSA account on Active Directory Domain Services (AD DS)
<a name="tutorial-gmsa-windows-step1"></a>

Create and configure a gMSA account on the Active Directory domain.

**Note**  
This step creates two separate accounts: a Group Managed Service Account (gMSA) that provides the identity for your containers, and a regular user account that is used for domain authentication. These accounts serve different purposes and should have different names.

1. 

**Generate a Key Distribution Service root key**
**Note**  
If you use AWS Directory Service, you can skip this step.

   The KDS root key and gMSA permissions are already configured in your AWS Managed Microsoft AD.

   If you have not already created a gMSA service account in your domain, you must first generate a Key Distribution Service (KDS) root key. The KDS is responsible for creating, rotating, and releasing the gMSA password to authorized hosts. When `ccg.exe` needs to retrieve gMSA credentials, it contacts the KDS to retrieve the current password.

   To check if the KDS root key has already been created, run the following PowerShell cmdlet with domain admin privileges on a domain controller using the `ActiveDirectory` PowerShell module. For more information about the module, see [ActiveDirectory Module](https://learn.microsoft.com/en-us/powershell/module/activedirectory/?view=windowsserver2022-ps) on the Microsoft Learn website.

   ```
   PS C:\> Get-KdsRootKey
   ```

   If the command returns a key ID, you can skip the rest of this step. Otherwise, create the KDS root key by running the following command:

   ```
   PS C:\> Add-KdsRootKey -EffectiveImmediately
   ```

   Although the `EffectiveImmediately` argument implies that the key is effective immediately, you must wait 10 hours for the KDS root key to replicate and become available for use on all domain controllers.

1. 

**Create the gMSA account**

   To create the gMSA account and allow `ccg.exe` to retrieve the gMSA password, run the following PowerShell commands from a Windows Server or client with access to the domain. Replace `ExampleAccount` with the name that you want for your gMSA account, and replace `example-domain` with your Active Directory domain name (for example, if your domain is `contoso.com`, use `contoso`).

   1. 

      ```
      PS C:\> Install-WindowsFeature RSAT-AD-PowerShell
      ```

   1. 

      ```
      PS C:\> New-ADGroup -Name "ExampleAccount Authorized Hosts" -SamAccountName "ExampleAccountHosts" -GroupScope DomainLocal
      ```

   1. 

      ```
      PS C:\> New-ADServiceAccount -Name "ExampleAccount" -DnsHostName "example-domain" -ServicePrincipalNames "host/ExampleAccount", "host/example-domain" -PrincipalsAllowedToRetrieveManagedPassword "ExampleAccountHosts"
      ```

   1. Create a user with a permanent password that doesn't expire. These credentials are stored in AWS Secrets Manager and used by each task to join the domain. This is a separate user account from the gMSA account created above. Replace `ExampleServiceUser` with the name you want for this service user account.

      ```
      PS C:\> New-ADUser -Name "ExampleServiceUser" -AccountPassword (ConvertTo-SecureString -AsPlainText "Test123" -Force) -Enabled 1 -PasswordNeverExpires 1
      ```

   1. 

      ```
      PS C:\> Add-ADGroupMember -Identity "ExampleAccountHosts" -Members "ExampleServiceUser"
      ```

   1. Install the PowerShell module for creating CredSpec objects in Active Directory and output the CredSpec JSON.

      ```
      PS C:\> Install-PackageProvider -Name NuGet -Force
      ```

      ```
      PS C:\> Install-Module CredentialSpec
      ```

   1. 

      ```
      PS C:\> New-CredentialSpec -AccountName ExampleAccount
      ```

1. Copy the JSON output from the previous command into a file called `gmsa-cred-spec.json`. This is the CredSpec file that you modify in [Step 3: Modify your CredSpec JSON to include domainless gMSA information](#tutorial-gmsa-windows-step3).

## Step 2: Upload Credentials to Secrets Manager
<a name="tutorial-gmsa-windows-step2"></a>

Copy the Active Directory credentials into a secure credential storage system so that each task can retrieve them. This is the domainless gMSA method. By using domainless gMSA, the container instance isn't joined to the domain, other applications on the instance can't use the credentials to access the domain, and tasks that join different domains can run on the same instance.

This step uses the AWS CLI. You can run these commands in AWS CloudShell in the default shell, which is `bash`.
+ Run the following AWS CLI command and replace the username, password, and domain name to match your environment. Use the service user account name (not the gMSA account name) for the username. Keep the ARN of the secret to use in the next step, [Step 3: Modify your CredSpec JSON to include domainless gMSA information](#tutorial-gmsa-windows-step3).

  The following command uses backslash continuation characters that are used by `sh` and compatible shells. This command isn't compatible with PowerShell. You must modify the command to use it with PowerShell.

  ```
  $ aws secretsmanager create-secret \
  --name gmsa-plugin-input \
  --description "Amazon ECS - gMSA Portable Identity." \
  --secret-string "{\"username\":\"ExampleServiceUser\",\"password\":\"Test123\",\"domainName\":\"contoso.com\"}"
  ```
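The quoting in `--secret-string` is error-prone across shells. One way to avoid escaping mistakes is to generate the JSON programmatically and pass a file to the AWS CLI. The following is a sketch assuming the key names (`username`, `password`, `domainName`) shown in the command above; the helper name is illustrative.

```python
import json

def build_gmsa_secret_string(username, password, domain_name):
    """Serialize domainless gMSA credentials with the key names the plugin expects."""
    return json.dumps(
        {"username": username, "password": password, "domainName": domain_name}
    )

secret_string = build_gmsa_secret_string("ExampleServiceUser", "Test123", "contoso.com")
# Write secret_string to a file, then pass it to the AWS CLI, for example:
#   aws secretsmanager create-secret --name gmsa-plugin-input \
#       --secret-string file://gmsa-secret.json
```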

## Step 3: Modify your CredSpec JSON to include domainless gMSA information
<a name="tutorial-gmsa-windows-step3"></a>

Before uploading the CredSpec to one of the storage options, add information to the CredSpec with the ARN of the secret in Secrets Manager from the previous step. For more information, see [Additional credential spec configuration for non-domain-joined container host use case](https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/manage-serviceaccounts#additional-credential-spec-configuration-for-non-domain-joined-container-host-use-case) on the Microsoft Learn website.

1. Add the following information to the CredSpec file inside the `ActiveDirectoryConfig`. Replace the ARN with the secret in Secrets Manager from the previous step.

   Note that the `PluginGUID` value must match the GUID in the following example snippet and is required.

   ```
   "HostAccountConfig": {
         "PortableCcgVersion": "1",
         "PluginGUID": "{859E1386-BDB4-49E8-85C7-3070B13920E1}",
         "PluginInput": "{\"credentialArn\": \"arn:aws:secretsmanager:aws-region:111122223333:secret:gmsa-plugin-input\"}"
       }
   ```

   You can also use a secret in SSM Parameter Store by using the ARN in this format: `\"arn:aws:ssm:aws-region:111122223333:parameter/gmsa-plugin-input\"`.

1. After you modify the CredSpec file, it should look like the following example:

   ```
   {
     "CmsPlugins": [
       "ActiveDirectory"
     ],
     "DomainJoinConfig": {
       "Sid": "S-1-5-21-4066351383-705263209-1606769140",
       "MachineAccountName": "ExampleAccount",
       "Guid": "ac822f13-583e-49f7-aa7b-284f9a8c97b6",
       "DnsTreeName": "example-domain",
       "DnsName": "example-domain",
       "NetBiosName": "example-domain"
     },
     "ActiveDirectoryConfig": {
       "GroupManagedServiceAccounts": [
         {
           "Name": "ExampleAccount",
           "Scope": "example-domain"
         },
         {
           "Name": "ExampleAccount",
           "Scope": "example-domain"
         }
       ],
       "HostAccountConfig": {
         "PortableCcgVersion": "1",
         "PluginGUID": "{859E1386-BDB4-49E8-85C7-3070B13920E1}",
         "PluginInput": "{\"credentialArn\": \"arn:aws:secretsmanager:aws-region:111122223333:secret:gmsa-plugin-input\"}"
       }
     }
   }
   ```
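Note that `PluginInput` is itself a JSON document encoded as a string inside the outer JSON. If you script this edit, serialize the inner object separately, as in the following sketch. The function name is ours; the plugin GUID is the fixed value from the example above.

```python
import json

CCG_PLUGIN_GUID = "{859E1386-BDB4-49E8-85C7-3070B13920E1}"

def add_domainless_config(credspec, secret_arn):
    """Add the HostAccountConfig block that domainless gMSA requires."""
    # PluginInput must be double-encoded: a JSON string inside the outer JSON.
    credspec["ActiveDirectoryConfig"]["HostAccountConfig"] = {
        "PortableCcgVersion": "1",
        "PluginGUID": CCG_PLUGIN_GUID,
        "PluginInput": json.dumps({"credentialArn": secret_arn}),
    }
    return credspec
```

Load `gmsa-cred-spec.json`, apply the function with your secret's ARN, and write the result back before uploading in the next step.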

## Step 4: Upload CredSpec to Amazon S3
<a name="tutorial-gmsa-windows-step4"></a>



This step uses the AWS CLI. You can run these commands in AWS CloudShell in the default shell, which is `bash`.

1. Copy the CredSpec file to the computer or environment that you are running AWS CLI commands in.

1. Run the following AWS CLI command to upload the CredSpec to Amazon S3. Replace `MyBucket` with the name of your Amazon S3 bucket. You can store the file as an object in any bucket and location, but you must allow access to that bucket and location in the policy that you attach to the task execution role.

   The following command uses backslash continuation characters that are used by `sh` and compatible shells. This command isn't compatible with PowerShell. You must modify the command to use it with PowerShell.

   ```
   $ aws s3 cp gmsa-cred-spec.json \
   s3://MyBucket/ecs-domainless-gmsa-credspec/gmsa-cred-spec.json
   ```

## Step 5: (Optional) Create an Amazon ECS cluster
<a name="tutorial-gmsa-windows-step5"></a>

By default, your account has an Amazon ECS cluster named `default`. This cluster is used by default in the AWS CLI, SDKs, and CloudFormation. You can use additional clusters to group and organize tasks and infrastructure, and assign defaults for some configuration.

You can create a cluster from the AWS Management Console, AWS CLI, SDKs, or CloudFormation. The settings and configuration in the cluster don't affect gMSA.

This step uses the AWS CLI. You can run these commands in AWS CloudShell in the default shell, which is `bash`.

```
$ aws ecs create-cluster --cluster-name windows-domainless-gmsa-cluster
```

**Important**  
If you choose to create your own cluster, you must specify `--cluster clusterName` for each command that you intend to use with that cluster.

## Step 6: Create an IAM role for container instances
<a name="tutorial-gmsa-windows-step6"></a>

A *container instance* is a host computer that runs the containers in Amazon ECS tasks, such as an Amazon EC2 instance. Each container instance registers to an Amazon ECS cluster. Before you launch Amazon EC2 instances and register them to a cluster, you must create an IAM role for your container instances to use.

To create the container instance role, see [Amazon ECS container instance IAM role](instance_IAM_role.md). The default `ecsInstanceRole` has sufficient permissions to complete this tutorial.

## Step 7: Create a custom task execution role
<a name="tutorial-gmsa-windows-step7"></a>

Amazon ECS can use a different IAM role for the permissions needed to start each task, instead of the container instance role. This role is the *task execution role*. We recommend creating a task execution role with only the permissions required for ECS to run the task, also known as *least-privilege permissions*. For more information about the principle of least privilege, see [SEC03-BP02 Grant least privilege access](https://docs.aws.amazon.com/wellarchitected/latest/framework/sec_permissions_least_privileges.html) in the *AWS Well-Architected Framework*.

1. To create a task execution role, see [Creating the task execution role](task_execution_IAM_role.md#create-task-execution-role). The default permissions allow the container instance to pull container images from Amazon Elastic Container Registry and to send `stdout` and `stderr` from your applications to Amazon CloudWatch Logs.

   Because the role needs custom permissions for this tutorial, you can give the role a different name than `ecsTaskExecutionRole`. This tutorial uses `ecsTaskExecutionRole` in further steps.

1. Add the following permissions by creating a custom policy, either an inline policy that exists only for this role, or a policy that you can reuse. Replace the ARN for the `Resource` in the first statement with the ARN of the CredSpec file in Amazon S3, and the second `Resource` with the ARN of the secret in Secrets Manager.

   If you encrypt the secret in Secrets Manager with a custom key, you must also allow `kms:Decrypt` for the key.

   If you use SSM Parameter Store instead of Secrets Manager, you must allow `ssm:GetParameter` for the parameter, instead of `secretsmanager:GetSecretValue`.

------
#### [ JSON ]

****  

   ```
   {
        "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "s3:GetObject"
               ],
               "Resource": "arn:aws:s3:::MyBucket/ecs-domainless-gmsa-credspec/gmsa-cred-spec.json"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "secretsmanager:GetSecretValue"
               ],
               "Resource": "arn:aws:secretsmanager:us-east-1:111122223333:secret:gmsa-plugin-AbCdEf"
           }
       ]
   }
   ```

------

## Step 8: Create a task role for Amazon ECS Exec
<a name="tutorial-gmsa-windows-step8"></a>

This tutorial uses Amazon ECS Exec to verify functionality by running a command inside a running task. To use ECS Exec, the service or task must turn on ECS Exec and the task role (but not the task execution role) must have `ssmmessages` permissions. For the required IAM policy, see [ECS Exec permissions](task-iam-roles.md#ecs-exec-required-iam-permissions).

This step uses the AWS CLI. You can run these commands in AWS CloudShell in the default shell, which is `bash`.

To create a task role using the AWS CLI, follow these steps.

1. Create a file called `ecs-tasks-trust-policy.json` with the following contents:

------
#### [ JSON ]

****  

   ```
   {
        "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Principal": {
                   "Service": "ecs-tasks.amazonaws.com"
               },
               "Action": "sts:AssumeRole"
           }
       ]
   }
   ```

------

1. Create an IAM role. You can replace the name `ecs-exec-demo-task-role` but keep the name for following steps.

   The following command uses backslash continuation characters that are used by `sh` and compatible shells. This command isn't compatible with PowerShell. You must modify the command to use it with PowerShell.

   ```
   $ aws iam create-role --role-name ecs-exec-demo-task-role \
   --assume-role-policy-document file://ecs-tasks-trust-policy.json
   ```

   You can delete the file `ecs-tasks-trust-policy.json`.

1. Create a file called `ecs-exec-demo-task-role-policy.json` with the following contents:

------
#### [ JSON ]

****  

   ```
   {
        "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "ssmmessages:CreateControlChannel",
                   "ssmmessages:CreateDataChannel",
                   "ssmmessages:OpenControlChannel",
                   "ssmmessages:OpenDataChannel"
               ],
               "Resource": "*"
           }
       ]
   }
   ```

------

1. Create an IAM policy and attach it to the role from the previous step.

   The following command uses backslash continuation characters that are used by `sh` and compatible shells. This command isn't compatible with PowerShell. You must modify the command to use it with PowerShell.

   ```
   $ aws iam put-role-policy \
       --role-name ecs-exec-demo-task-role \
       --policy-name ecs-exec-demo-task-role-policy \
       --policy-document file://ecs-exec-demo-task-role-policy.json
   ```

   You can delete the file `ecs-exec-demo-task-role-policy.json`.

## Step 9: Register a task definition that uses domainless gMSA
<a name="tutorial-gmsa-windows-step9"></a>



This step uses the AWS CLI. You can run these commands in AWS CloudShell in the default shell, which is `bash`.

1. Create a file called `windows-gmsa-domainless-task-def.json` with the following contents:

   ```
   {
     "family": "windows-gmsa-domainless-task",
     "containerDefinitions": [
       {
         "name": "windows_sample_app",
         "image": "mcr.microsoft.com/windows/servercore/iis",
         "cpu": 1024,
         "memory": 1024,
         "essential": true,
         "credentialSpecs": [
            "credentialspecdomainless:arn:aws:s3:::MyBucket/ecs-domainless-gmsa-credspec/gmsa-cred-spec.json"
         ],
         "entryPoint": [
           "powershell",
           "-Command"
         ],
         "command": [
           "New-Item -Path C:\\inetpub\\wwwroot\\index.html -ItemType file -Value '<html> <head> <title>Amazon ECS Sample App</title> <style>body {margin-top: 40px; background-color: #333;} </style> </head><body> <div style=color:white;text-align:center> <h1>Amazon ECS Sample App</h1> <h2>Congratulations!</h2> <p>Your application is now running on a container in Amazon ECS.</p>' -Force ; C:\\ServiceMonitor.exe w3svc"
         ],
         "portMappings": [
           {
             "protocol": "tcp",
             "containerPort": 80,
             "hostPort": 8080
           }
         ]
       }
     ],
     "taskRoleArn": "arn:aws:iam::111122223333:role/ecs-exec-demo-task-role",
     "executionRoleArn": "arn:aws:iam::111122223333:role/ecsTaskExecutionRole"
   }
   ```

1. Register the task definition by running the following command:

   The following command uses backslash continuation characters that are used by `sh` and compatible shells. This command isn't compatible with PowerShell. You must modify the command to use it with PowerShell.

   ```
   $ aws ecs register-task-definition \
   --cli-input-json file://windows-gmsa-domainless-task-def.json
   ```

## Step 10: Register a Windows container instance to the cluster
<a name="tutorial-gmsa-windows-step10"></a>

Launch an Amazon EC2 Windows instance and run the ECS container agent to register it as a container instance in the cluster. Amazon ECS runs tasks on the container instances that are registered to the cluster in which the tasks are started.

1. To launch an Amazon EC2 Windows instance that is configured for Amazon ECS in the AWS Management Console, see [Launching an Amazon ECS Windows container instance](launch_window-container_instance.md). Stop at the step for *user data*.

1. For gMSA, the user data must set the environment variable `ECS_GMSA_SUPPORTED` before starting the ECS container agent.

   For ECS Exec, the agent must start with the argument `-EnableTaskIAMRole`.

   To secure the instance IAM role by preventing tasks from reaching the EC2 IMDS web service to retrieve the role credentials, add the argument `-AwsvpcBlockIMDS`. This only applies to tasks that use the `awsvpc` network mode.

   ```
   <powershell>
   [Environment]::SetEnvironmentVariable("ECS_GMSA_SUPPORTED", $TRUE, "Machine")
   Import-Module ECSTools
   Initialize-ECSAgent -Cluster windows-domainless-gmsa-cluster -EnableTaskIAMRole -AwsvpcBlockIMDS
   </powershell>
   ```

1. Review a summary of your instance configuration in the **Summary** panel, and when you're ready, choose **Launch instance**.

## Step 11: Verify the container instance
<a name="tutorial-gmsa-windows-step11"></a>

You can verify that there is a container instance in the cluster using the AWS Management Console. However, gMSA needs additional features that are indicated as *attributes*. These attributes aren't visible in the AWS Management Console, so this tutorial uses the AWS CLI.

This step uses the AWS CLI. You can run these commands in AWS CloudShell in the default shell, which is `bash`.

1. List the container instances in the cluster. Container instances have an ID that is different from the ID of the EC2 instance.

   ```
   $ aws ecs list-container-instances
   ```

   Output:

   ```
   {
       "containerInstanceArns": [
           "arn:aws:ecs:aws-region:111122223333:container-instance/default/MyContainerInstanceID"
       ]
   }
   ```

   For example, `526bd5d0ced448a788768334e79010fd` is a valid container instance ID.

1. Use the container instance ID from the previous step to get the details for the container instance. Replace `MyContainerInstanceID` with the ID.

   The following command uses backslash continuation characters that are used by `sh` and compatible shells. This command isn't compatible with PowerShell. You must modify the command to use it with PowerShell.

   ```
   $ aws ecs describe-container-instances \
        --container-instances MyContainerInstanceID
   ```

   Note that the output is very long.

1. Verify that the `attributes` list has an object with the key called `name` and a value `ecs.capability.gmsa-domainless`. The following is an example of the object.

   Output:

   ```
   {
       "name": "ecs.capability.gmsa-domainless"
   }
   ```
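If you script this verification, you can filter the attributes out of the `describe-container-instances` response. The following sketch assumes the response shape shown by the AWS CLI; the function name is illustrative.

```python
def has_domainless_gmsa_capability(describe_response):
    """Return True if any described container instance reports the capability."""
    for instance in describe_response.get("containerInstances", []):
        for attribute in instance.get("attributes", []):
            if attribute.get("name") == "ecs.capability.gmsa-domainless":
                return True
    return False
```

Feed it the parsed JSON output of the command from the previous step to confirm that the cluster can place domainless gMSA tasks.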

## Step 12: Run a Windows task
<a name="tutorial-gmsa-windows-step12"></a>

Run an Amazon ECS task. If there is only one container instance in the cluster, you can use `run-task`. If there are many different container instances, it might be easier to use `start-task` and specify the container instance to run the task on than to add placement constraints to the task definition to control which container instances can run the task.

This step uses the AWS CLI. You can run these commands in AWS CloudShell in the default shell, which is `bash`.

1. 

   The following command uses backslash continuation characters that are used by `sh` and compatible shells. This command isn't compatible with PowerShell. You must modify the command to use it with PowerShell.

   ```
   $ aws ecs run-task --task-definition windows-gmsa-domainless-task \
       --enable-execute-command --cluster windows-domainless-gmsa-cluster
   ```

   Note the task ID that is returned by the command.

1. Run the following command to verify that the task has started. This command waits and doesn't return the shell prompt until the task starts. Replace `MyTaskID` with the task ID from the previous step.

   ```
   $ aws ecs wait tasks-running --tasks MyTaskID
   ```

## Step 13: Verify the container has gMSA credentials
<a name="tutorial-gmsa-windows-step13"></a>

Verify that the container in the task has a Kerberos token for the gMSA account.

This step uses the AWS CLI. You can run these commands in AWS CloudShell in the default shell, which is `bash`.

1. 

   The following command uses backslash continuation characters that are used by `sh` and compatible shells. This command isn't compatible with PowerShell. You must modify the command to use it with PowerShell.

   ```
   $ aws ecs execute-command \
   --task MyTaskID \
   --container windows_sample_app \
   --interactive \
   --command powershell.exe
   ```

   The output will be a PowerShell prompt.

1. Run the following command in the PowerShell terminal inside the container.

   ```
   PS C:\> klist get ExampleAccount$
   ```

   In the output, note the `Principal` is the one that you created previously.

## Step 14: Clean up
<a name="tutorial-gmsa-windows-step14"></a>

When you are finished with this tutorial, clean up the associated resources to avoid incurring charges for unused resources.

This step uses the AWS CLI. You can run these commands in AWS CloudShell in the default shell, which is `bash`.

1. Stop the task. Replace `MyTaskID` with the task ID from [Step 12: Run a Windows task](#tutorial-gmsa-windows-step12).

   ```
   $ aws ecs stop-task --cluster windows-domainless-gmsa-cluster --task MyTaskID
   ```

1. Terminate the Amazon EC2 instance. Afterwards, the container instance in the cluster will be deleted automatically after one hour.

   You can find and terminate the instance by using the Amazon EC2 console. Or, you can run the following command. To find the EC2 instance ID, look in the output of the `aws ecs describe-container-instances` command from [Step 11: Verify the container instance](#tutorial-gmsa-windows-step11). An EC2 instance ID looks like `i-10a64379`.

   ```
   $ aws ec2 terminate-instances --instance-ids MyInstanceID
   ```

1. Delete the CredSpec file in Amazon S3. Replace `MyBucket` with the name of your Amazon S3 bucket.

   ```
   $ aws s3api delete-object --bucket MyBucket --key ecs-domainless-gmsa-credspec/gmsa-cred-spec.json
   ```

1. Delete the secret from Secrets Manager. If you used SSM Parameter Store instead, delete the parameter.

   The following command uses backslash continuation characters that are used by `sh` and compatible shells. This command isn't compatible with PowerShell. You must modify the command to use it with PowerShell.

   ```
   $ aws secretsmanager delete-secret --secret-id gmsa-plugin-input \
        --force-delete-without-recovery
   ```

1. Deregister and delete the task definition. By deregistering the task definition, you mark it as inactive so it can't be used to start new tasks. Then, you can delete the task definition.

   1. Deregister the task definition by specifying the revision. Amazon ECS automatically creates numbered revisions of task definitions, starting from 1. You refer to a revision in the same format as the tags on container images, such as `:1`.

      ```
      $ aws ecs deregister-task-definition --task-definition windows-gmsa-domainless-task:1
      ```

   1. Delete the task definition.

      ```
      $ aws ecs delete-task-definitions --task-definition windows-gmsa-domainless-task:1
      ```

1. (Optional) Delete the ECS cluster, if you created a cluster.

   ```
   $ aws ecs delete-cluster --cluster windows-domainless-gmsa-cluster
   ```

## Debugging Amazon ECS domainless gMSA for Windows containers
<a name="tutorial-gmsa-windows-debugging"></a>



The following are common places to check when debugging domainless gMSA for Windows containers.

Amazon ECS task status  
Amazon ECS tries to start a task exactly once. Any task that has an issue is stopped and set to the status `STOPPED`. There are two common types of task issues: tasks that couldn't be started, and tasks where the application stopped inside one of the containers. In the AWS Management Console, look at the **Stopped reason** field of the task for the reason why the task was stopped. In the AWS CLI, describe the task and look at the `stoppedReason`. For steps in the AWS Management Console and AWS CLI, see [Viewing Amazon ECS stopped task errors](stopped-task-errors.md).

Windows Events  
Windows Events for gMSA in containers are logged in the `Microsoft-Windows-Containers-CCG` log file and can be found in the Event Viewer in the section Applications and Services in `Logs\Microsoft\Windows\Containers-CCG\Admin`. For more debugging tips, see [Troubleshoot gMSAs for Windows containers](https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/gmsa-troubleshooting#non-domain-joined-container-hosts-use-event-logs-to-identify-configuration-issues) on the Microsoft Learn website.

ECS agent gMSA plugin  
Logs for the gMSA plugin for the ECS agent on the Windows container instance are in the following directory: `C:/ProgramData/Amazon/gmsa-plugin/`. Look in this log to verify that the domainless user credentials were downloaded from the storage location, such as Secrets Manager, and that the credential format was read correctly.

# Learn how to use gMSAs for EC2 Windows containers for Amazon ECS
<a name="windows-gmsa"></a>

Amazon ECS supports Active Directory authentication for Windows containers through a special kind of service account called a *group Managed Service Account* (gMSA).

Windows-based network applications, such as .NET applications, often use Active Directory to facilitate authentication and authorization management between users and services. Developers commonly design their applications to integrate with Active Directory and run on domain-joined servers for this purpose. Because Windows containers cannot be domain-joined, you must configure a Windows container to run with gMSA.

A Windows container running with gMSA relies on its host Amazon EC2 instance to retrieve the gMSA credentials from the Active Directory domain controller and provide them to the container. For more information, see [Create gMSAs for Windows containers](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/manage-serviceaccounts).

**Note**  
This feature is not supported on Windows containers on Fargate.

**Topics**
+ [Considerations](#windows-gmsa-considerations)
+ [Prerequisites](#windows-gmsa-prerequisites)
+ [Setting up gMSA for Windows Containers on Amazon ECS](#windows-gmsa-setup)

## Considerations
<a name="windows-gmsa-considerations"></a>

The following should be considered when using gMSAs for Windows containers:
+ When using the Amazon ECS-optimized Windows Server 2016 Full AMI for your container instances, the container hostname must be the same as the gMSA account name defined in the credential spec file. To specify a hostname for a container, use the `hostname` container definition parameter. For more information, see [Network settings](task_definition_parameters.md#container_definition_network).
+ Choose between **domainless gMSA** and **joining each instance to a single domain**. By using domainless gMSA, the container instance isn't joined to the domain, other applications on the instance can't use the credentials to access the domain, and tasks that join different domains can run on the same instance.

  Then, choose the data storage for the CredSpec and optionally, for the Active Directory user credentials for domainless gMSA.

  Amazon ECS uses an Active Directory credential specification file (CredSpec). This file contains the gMSA metadata that's used to propagate the gMSA account context to the container. You generate the CredSpec file and then store it in one of the CredSpec storage options in the following table, specific to the operating system of the container instances. To use the domainless method, an optional section in the CredSpec file can specify credentials in one of the *domainless user credentials* storage options in the following table, specific to the operating system of the container instances.    
<a name="gmsa-table"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/windows-gmsa.html)

## Prerequisites
<a name="windows-gmsa-prerequisites"></a>

Before you use the gMSA for Windows containers feature with Amazon ECS, make sure to complete the following:
+ You set up an Active Directory domain with the resources that you want your containers to access. Amazon ECS supports the following setups:
  + An AWS Directory Service Active Directory. AWS Directory Service is an AWS managed Active Directory that's hosted on Amazon EC2. For more information, see [Getting Started with AWS Managed Microsoft AD](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_getting_started.html) in the *AWS Directory Service Administration Guide*.
  + An on-premises Active Directory. You must ensure that the Amazon ECS Windows container instance can join the domain. For more information, see [AWS Direct Connect](https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect-network-to-amazon.html).
+ You have an existing gMSA account in the Active Directory. For more information, see [Create gMSAs for Windows containers](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/manage-serviceaccounts).
+ **Choose to use *domainless gMSA*, or the Amazon ECS Windows container instance hosting the Amazon ECS task must be *domain-joined* to the Active Directory and be a member of the Active Directory security group that has access to the gMSA account.**

  By using domainless gMSA, the container instance isn't joined to the domain, other applications on the instance can't use the credentials to access the domain, and tasks that join different domains can run on the same instance.
+ You added the required IAM permissions. The permissions that are required depend on the methods that you choose for the initial credentials and for storing the credential specification:
  + If you use *domainless gMSA* for initial credentials, IAM permissions for AWS Secrets Manager are required on the Amazon EC2 instance role.
  + If you store the credential specification in SSM Parameter Store, IAM permissions for AWS Systems Manager Parameter Store are required on the task execution role.
  + If you store the credential specification in Amazon S3, IAM permissions for Amazon Simple Storage Service are required on the task execution role.

## Setting up gMSA for Windows Containers on Amazon ECS
<a name="windows-gmsa-setup"></a>

To set up gMSA for Windows Containers on Amazon ECS, you can follow the complete tutorial that includes configuring the prerequisites: [Using Amazon ECS Windows containers with domainless gMSA using the AWS CLI](tutorial-gmsa-windows.md).

The following sections cover the CredSpec configuration in detail.

**Topics**
+ [Example CredSpec](#windows-gmsa-example)
+ [Domainless gMSA setup](#windows-gmsa-domainless)
+ [Referencing a Credential Spec File in a Task Definition](#windows-gmsa-credentialspec)

### Example CredSpec
<a name="windows-gmsa-example"></a>

Amazon ECS uses a credential spec file that contains the gMSA metadata used to propagate the gMSA account context to the Windows container. You can generate the credential spec file and reference it in the `credentialSpec` field in your task definition. The credential spec file does not contain any secrets.

The following is an example credential spec file:

```
{
    "CmsPlugins": [
        "ActiveDirectory"
    ],
    "DomainJoinConfig": {
        "Sid": "S-1-5-21-2554468230-2647958158-2204241789",
        "MachineAccountName": "WebApp01",
        "Guid": "8665abd4-e947-4dd0-9a51-f8254943c90b",
        "DnsTreeName": "contoso.com",
        "DnsName": "contoso.com",
        "NetBiosName": "contoso"
    },
    "ActiveDirectoryConfig": {
        "GroupManagedServiceAccounts": [
            {
                "Name": "WebApp01",
                "Scope": "contoso.com"
            }
        ]
    }
}
```
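
A malformed CredSpec is a common source of task-start failures, and a quick structural check can catch problems before the file is uploaded. The following Python sketch is a hypothetical validator (not part of any AWS tooling) that verifies the required top-level sections shown in the example above:

```python
import json

REQUIRED_DOMAIN_KEYS = {
    "Sid", "MachineAccountName", "Guid", "DnsTreeName", "DnsName", "NetBiosName",
}

def validate_credspec(text):
    """Return a list of problems found in a CredSpec JSON document.

    An empty list means the document looks structurally well formed.
    """
    try:
        doc = json.loads(text)
    except json.JSONDecodeError as err:
        return [f"not valid JSON: {err}"]

    problems = []
    if "ActiveDirectory" not in doc.get("CmsPlugins", []):
        problems.append("CmsPlugins must include 'ActiveDirectory'")
    missing = REQUIRED_DOMAIN_KEYS - set(doc.get("DomainJoinConfig", {}))
    if missing:
        problems.append(f"DomainJoinConfig missing keys: {sorted(missing)}")
    accounts = doc.get("ActiveDirectoryConfig", {}).get("GroupManagedServiceAccounts", [])
    if not accounts:
        problems.append("GroupManagedServiceAccounts must list at least one gMSA")
    return problems
```

Running the validator against the example CredSpec above returns an empty list; an empty JSON object returns one problem per missing section.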

### Domainless gMSA setup
<a name="windows-gmsa-domainless"></a>

We recommend domainless gMSA instead of joining the container instances to a single domain. By using domainless gMSA, the container instance isn't joined to the domain, other applications on the instance can't use the credentials to access the domain, and tasks that join different domains can run on the same instance.

1. Before uploading the CredSpec to one of the storage options, add information to the CredSpec with the ARN of the secret in Secrets Manager or SSM Parameter Store. For more information, see [Additional credential spec configuration for non-domain-joined container host use case](https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/manage-serviceaccounts#additional-credential-spec-configuration-for-non-domain-joined-container-host-use-case) on the Microsoft Learn website.

   **Domainless gMSA credential format**  
   The following is the JSON format for the domainless gMSA credentials for your Active Directory. Store the credentials in Secrets Manager or SSM Parameter Store.

   ```
   {
       "username":"WebApp01",
       "password":"Test123!",
       "domainName":"contoso.com"
   }
   ```

1. Add the following information to the CredSpec file inside the `ActiveDirectoryConfig`. Replace the ARN with the ARN of your secret in Secrets Manager or SSM Parameter Store.

   Note that the `PluginGUID` value must match the GUID in the following example snippet and is required.

   ```
       "HostAccountConfig": {
           "PortableCcgVersion": "1",
           "PluginGUID": "{859E1386-BDB4-49E8-85C7-3070B13920E1}",
           "PluginInput": "{\"credentialArn\": \"arn:aws:secretsmanager:aws-region:111122223333:secret:gmsa-plugin-input\"}"
       }
   ```

   You can also use a parameter in SSM Parameter Store by specifying its ARN in this format: `\"arn:aws:ssm:aws-region:111122223333:parameter/gmsa-plugin-input\"`.

1. After you modify the CredSpec file, it should look like the following example:

   ```
   {
     "CmsPlugins": [
       "ActiveDirectory"
     ],
     "DomainJoinConfig": {
       "Sid": "S-1-5-21-4066351383-705263209-1606769140",
       "MachineAccountName": "WebApp01",
       "Guid": "ac822f13-583e-49f7-aa7b-284f9a8c97b6",
       "DnsTreeName": "contoso",
       "DnsName": "contoso",
       "NetBiosName": "contoso"
     },
     "ActiveDirectoryConfig": {
       "GroupManagedServiceAccounts": [
         {
           "Name": "WebApp01",
           "Scope": "contoso"
         },
         {
           "Name": "WebApp01",
           "Scope": "contoso"
         }
       ],
       "HostAccountConfig": {
         "PortableCcgVersion": "1",
         "PluginGUID": "{859E1386-BDB4-49E8-85C7-3070B13920E1}",
         "PluginInput": "{\"credentialArn\": \"arn:aws:secretsmanager:aws-region:111122223333:secret:gmsa-plugin-input\"}"
       }
     }
   }
   ```
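
The steps above can be sketched in code. The following Python example is a hypothetical helper (`build_credential_payload` and `add_host_account_config` are not AWS APIs) that builds the domainless credential payload and injects the required `HostAccountConfig` block, including the JSON-escaped `PluginInput` string, into a parsed CredSpec:

```python
import json

# Required by the gMSA plugin; must match this GUID exactly.
PLUGIN_GUID = "{859E1386-BDB4-49E8-85C7-3070B13920E1}"

def build_credential_payload(username, password, domain):
    """JSON payload to store in Secrets Manager or SSM Parameter Store."""
    return json.dumps({"username": username, "password": password, "domainName": domain})

def add_host_account_config(credspec, credential_arn):
    """Insert the domainless HostAccountConfig into a parsed CredSpec dict."""
    credspec["ActiveDirectoryConfig"]["HostAccountConfig"] = {
        "PortableCcgVersion": "1",
        "PluginGUID": PLUGIN_GUID,
        # PluginInput is itself a JSON document, stored as an escaped string.
        "PluginInput": json.dumps({"credentialArn": credential_arn}),
    }
    return credspec
```

Serializing the returned dictionary with `json.dumps` produces a CredSpec shaped like the example above, with `PluginInput` correctly escaped.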

### Referencing a Credential Spec File in a Task Definition
<a name="windows-gmsa-credentialspec"></a>

Amazon ECS supports the following ways to reference the file path in the `credentialSpecs` field of the task definition. For each of these options, prefix the reference with `credentialspec:` or `credentialspecdomainless:`, depending on whether you are joining the container instances to a single domain or using domainless gMSA, respectively.
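
As a sketch of how the two prefixes compose with a storage location (a hypothetical helper, not an ECS API):

```python
def credential_spec_reference(location, domainless=False):
    """Build a credentialSpecs entry from a storage location.

    `location` is an S3 object ARN, an SSM parameter ARN, or a
    file:// path; `domainless` selects the domainless gMSA prefix.
    """
    prefix = "credentialspecdomainless" if domainless else "credentialspec"
    return f"{prefix}:{location}"
```

For example, `credential_spec_reference("arn:aws:s3:::amzn-s3-demo-bucket/gmsa-cred-spec.json", domainless=True)` (bucket name hypothetical) produces a value you can paste into the `credentialSpecs` array.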

#### Amazon S3 Bucket
<a name="gmsa-credspec-s3"></a>

Add the credential spec to an Amazon S3 bucket and then reference the Amazon Resource Name (ARN) of the Amazon S3 bucket in the `credentialSpecs` field of the task definition.

```
{
    "family": "",
    "executionRoleArn": "",
    "containerDefinitions": [
        {
            "name": "",
            ...
            "credentialSpecs": [
                "credentialspecdomainless:arn:aws:s3:::${BucketName}/${ObjectName}"
            ],
            ...
        }
    ],
    ...
}
```

You must also add the following permissions as an inline policy to the Amazon ECS task execution IAM role to give your tasks access to the Amazon S3 bucket.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "VisualEditor",
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": [
                "arn:aws:s3:::{bucket_name}",
                "arn:aws:s3:::{bucket_name}/{object}"
            ]
        }
    ]
}
```

------

#### SSM Parameter Store parameter
<a name="gmsa-credspec-ssm"></a>

Add the credential spec to an SSM Parameter Store parameter and then reference the Amazon Resource Name (ARN) of the SSM Parameter Store parameter in the `credentialSpecs` field of the task definition.

```
{
    "family": "",
    "executionRoleArn": "",
    "containerDefinitions": [
        {
            "name": "",
            ...
            "credentialSpecs": [
                "credentialspecdomainless:arn:aws:ssm:region:111122223333:parameter/parameter_name"
            ],
            ...
        }
    ],
    ...
}
```

You must also add the following permissions as an inline policy to the Amazon ECS task execution IAM role to give your tasks access to the SSM Parameter Store parameter.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:GetParameters"
            ],
            "Resource": "arn:aws:ssm:*:111122223333:parameter/example*"
        }
    ]
}
```

------

#### Local File
<a name="gmsa-credspec-file"></a>

With the credential spec details in a local file, reference the file path in the `credentialSpecs` field of the task definition. The file path referenced must be relative to the `C:\ProgramData\Docker\CredentialSpecs` directory and use the backslash (`\`) as the file path separator.

```
{
    "family": "",
    "executionRoleArn": "",
    "containerDefinitions": [
        {
            "name": "",
            ...
            "credentialSpecs": [
                "credentialspec:file://CredentialSpecDir\CredentialSpecFile.json"
            ],
            ...
        }
    ],
    ...
}
```
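
To illustrate the relative-path rule, the following Python sketch (a hypothetical helper) converts a file's full Windows path under `C:\ProgramData\Docker\CredentialSpecs` into the reference format the task definition expects. It uses `ntpath` so the Windows path semantics work on any platform:

```python
import ntpath  # Windows path semantics, usable on any platform

CREDSPEC_ROOT = r"C:\ProgramData\Docker\CredentialSpecs"

def local_credspec_reference(full_path):
    """Build a credentialspec:file:// reference relative to the CredSpecs directory."""
    relative = ntpath.relpath(full_path, CREDSPEC_ROOT)
    return f"credentialspec:file://{relative}"
```

For the path `C:\ProgramData\Docker\CredentialSpecs\CredentialSpecDir\CredentialSpecFile.json`, this yields the `credentialspec:file://CredentialSpecDir\CredentialSpecFile.json` value shown in the example above.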

# Using EC2 Image Builder to build customized Amazon ECS-optimized AMIs
<a name="image-builder-tutorial"></a>

AWS recommends that you use the Amazon ECS-optimized AMIs because they are preconfigured with the requirements and recommendations to run your container workloads. There might be times when you need to customize your AMI to add additional software. You can use EC2 Image Builder for the creation, management, and deployment of your server images. You retain ownership of the customized images created in your account. You can use EC2 Image Builder pipelines to automate updates and system patching for your images, or use a stand-alone command to create an image with your defined configuration resources.

You create a recipe for your image. An Image Builder image recipe is a document that defines the base image and the components that are applied to the base image to produce the desired configuration for the output AMI image. You also create a pipeline which distributes your customized AMI. For more information, see [How EC2 Image Builder works](https://docs.aws.amazon.com/imagebuilder/latest/userguide/how-image-builder-works.html) in the *EC2 Image Builder User Guide*.

We recommend that you use one of the following Amazon ECS-optimized AMIs as your "Parent image" in EC2 Image Builder:
+ Linux
  + Amazon ECS-optimized Amazon Linux 2023 (x86) AMI
  + Amazon ECS-optimized Amazon Linux 2023 (arm64) AMI
  + Amazon ECS-optimized Amazon Linux 2 kernel 5 AMI
  + Amazon ECS-optimized Amazon Linux 2 (x86) AMI
+ Windows
  + Amazon ECS-optimized Windows Server 2022 Full (x86) AMI
  + Amazon ECS-optimized Windows Server 2022 Core (x86) AMI
  + Amazon ECS-optimized Windows Server 2019 Full (x86) AMI
  + Amazon ECS-optimized Windows Server 2019 Core (x86) AMI
  + Amazon ECS-optimized Windows Server 2016 Full (x86) AMI

We also recommend that you select "Use latest available OS version". The pipeline will use semantic versioning for the parent image, which helps detect the dependency updates in automatically scheduled jobs. For more information, see [Semantic versioning](https://docs.aws.amazon.com/imagebuilder/latest/userguide/ibhow-semantic-versioning.html) in the *EC2 Image Builder User Guide*.

AWS regularly updates Amazon ECS-optimized AMI images with security patches and the new container agent version. When you use an AMI ID as your parent image in your image recipe, you need to regularly check for updates to the parent image. If there are updates, you must create a new version of your recipe with the updated AMI. This ensures that your custom images incorporate the latest version of the parent image. For information about how to create a workflow to automatically update your EC2 instances in your Amazon ECS cluster with the newly created AMIs, see [How to create an AMI hardening pipeline and automate updates to your ECS instance fleet](https://aws.amazon.com/blogs/security/how-to-create-an-ami-hardening-pipeline-and-automate-updates-to-your-ecs-instance-fleet/).

You can also specify the Amazon Resource Name (ARN) of a parent image that's published through a managed EC2 Image Builder pipeline. Amazon routinely publishes Amazon ECS-optimized AMI images through managed pipelines. These images are publicly accessible. You must have the correct permissions to access the image. When you use an image ARN instead of an AMI in your Image Builder recipe, your pipeline automatically uses the most recent version of the parent image each time it runs. This approach eliminates the need to manually create new recipe versions for each update. 

## Clean up the Amazon ECS-optimized AMI
<a name="cleanup-ecs-optimized-ami"></a>

When using an Amazon ECS-optimized AMI as a parent image, you must clean up the image to prevent transient issues. The Amazon ECS-optimized AMI is preconfigured for the Amazon ECS agent to auto-start and register as a container instance with Amazon ECS. Using it as a base image without proper cleanup can cause problems in your custom AMI.

To clean up the image for future use, create a component that runs the following commands to stop the ecs-init package and Docker processes:

```
sudo systemctl stop ecs
sudo systemctl stop docker
```

Remove all the log files from the current instance to prevent preserving them when saving the image. Use the example script in [Security best practices for EC2 Image Builder](https://docs.aws.amazon.com/imagebuilder/latest/userguide/security-best-practices.html) to clean up the various files from the instance.

To clean up the Amazon ECS specific data, run the following commands:

```
sudo rm -rf /var/log/ecs/*
sudo rm /var/lib/ecs/data/agent.db
```

For more information about creating custom Amazon ECS-optimized AMIs, see [How do I create a custom AMI from an Amazon ECS-optimized AMI?](https://forums.aws.amazon.com/knowledge-center/ecs-create-custom-amis/) in the AWS Knowledge Center.

## Using the image ARN with infrastructure as code (IaC)
<a name="infrastructure-as-code-arn"></a>

You can configure the recipe using the EC2 Image Builder console, or infrastructure as code (for example, CloudFormation), or the AWS SDK. When you specify a parent image in your recipe, you can specify an EC2 AMI ID, Image Builder image ARN, AWS Marketplace product ID, or container image. AWS publishes both AMI IDs and Image Builder image ARNs of Amazon ECS-Optimized AMIs publicly. The following is the ARN format for the image:

```
arn:${Partition}:imagebuilder:${Region}:${Account}:image/${ImageName}/${ImageVersion}
```

The `ImageVersion` has the following format. Replace *major*, *minor*, and *patch* with the latest values.

```
<major>.<minor>.<patch>
```
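
Under the documented ARN format, building a pinned or versionless ARN is mechanical. The following Python sketch (hypothetical helper names) is one way to compose it:

```python
def image_arn(region, image_name, version="x.x.x", partition="aws", account="aws"):
    """Build an EC2 Image Builder image ARN.

    Amazon managed images use the literal account 'aws'. The default
    version 'x.x.x' produces a versionless ARN that resolves to the
    latest available image; pass a semantic version to pin it.
    """
    return f"arn:{partition}:imagebuilder:{region}:{account}:image/{image_name}/{version}"
```

For example, `image_arn("us-east-1", "amazon-linux-2023-ecs-optimized-x86")` produces the versionless ARN used in the CloudFormation example later in this section.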

You can pin `major`, `minor`, and `patch` to specific values, or you can use the versionless ARN of an image so that your pipeline remains up to date with the latest version of the parent image. A versionless ARN uses the wildcard format `x.x.x` to represent the image version. This allows the Image Builder service to automatically resolve to the most recent version of the image, so your references always point to the latest image available, which streamlines keeping the images in your deployment up to date. When you create a recipe with the console, EC2 Image Builder automatically identifies the ARN for your parent image. When you use IaC to create the recipe, you must identify the ARN and select the desired image version, or use the versionless ARN to indicate the latest available image. We recommend that you create an automated script to filter and only display images that align with your criteria. The following Python script shows how to retrieve a list of Amazon ECS-optimized AMIs.

The script accepts two optional arguments, `owner` and `platform`, with default values of `Amazon` and `Windows`, respectively. The valid values for the `owner` argument are `Self`, `Shared`, `Amazon`, and `ThirdParty`. The valid values for the `platform` argument are `Windows` and `Linux`. For example, if you run the script with the `owner` argument set to `Amazon` and the `platform` set to `Linux`, the script generates a list of images published by Amazon, including Amazon ECS-optimized images.

```
import boto3
import argparse

def list_images(owner, platform):
    # Create a Boto3 session
    session = boto3.Session()
    
    # Create an EC2 Image Builder client
    client = session.client('imagebuilder')

    # Define the initial request parameters
    request_params = {
        'owner': owner,
        'filters': [
            {
                'name': 'platform',
                'values': [platform]
            }
        ]
    }

    # Initialize the results list
    all_images = []

    # Get the initial response with the first page of results
    response = client.list_images(**request_params)

    # Extract images from the response
    all_images.extend(response['imageVersionList'])

    # While 'nextToken' is present, continue paginating
    while 'nextToken' in response and response['nextToken']:
        # Update the token for the next request
        request_params['nextToken'] = response['nextToken']

        # Get the next page of results
        response = client.list_images(**request_params)

        # Extract images from the response
        all_images.extend(response['imageVersionList'])

    return all_images

def main():
    # Initialize the parser
    parser = argparse.ArgumentParser(description="List AWS images based on owner and platform")
    
    # Add the parameters/arguments
    parser.add_argument("--owner", default="Amazon", help="The owner of the images. Default is 'Amazon'.")
    parser.add_argument("--platform", default="Windows", help="The platform type of the images. Default is 'Windows'.")

    # Parse the arguments
    args = parser.parse_args()

    # Retrieve all images based on the provided owner and platform
    images = list_images(args.owner, args.platform)

    # Print the details of the images
    for image in images:
        print(f"Name: {image['name']}, Version: {image['version']}, ARN: {image['arn']}")

if __name__ == "__main__":
    main()
```
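
Because the script returns every image for the given owner and platform, you might narrow the output further. The following sketch (a hypothetical helper operating on the same record shape that `list_images` returns) keeps only ECS-optimized images:

```python
def filter_ecs_optimized(images, keyword="ecs-optimized"):
    """Keep image records whose name contains the keyword (case-insensitive)."""
    return [img for img in images if keyword in img["name"].lower()]
```

You could call `filter_ecs_optimized(list_images(args.owner, args.platform))` in `main` before printing, so that only Amazon ECS-optimized parent-image candidates are listed.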

## Using the image ARN with CloudFormation
<a name="arn-with-cloudformation"></a>

An Image Builder image recipe is a blueprint that specifies the parent image and components required to achieve the intended configuration of the output image. You use the `AWS::ImageBuilder::ImageRecipe` resource. Set the `ParentImage` value to the image ARN. Use the versionless ARN of your desired image to ensure your pipeline always uses the most recent version of the image. For example, `arn:aws:imagebuilder:us-east-1:aws:image/amazon-linux-2023-ecs-optimized-x86/x.x.x`. The following `AWS::ImageBuilder::ImageRecipe` resource definition uses an Amazon managed image ARN:

```
ECSRecipe:
    Type: AWS::ImageBuilder::ImageRecipe
    Properties:
      Name: MyRecipe
      Version: '1.0.0'
      Components:
        - ComponentArn: [<The component arns of the image recipe>]
      ParentImage: "arn:aws:imagebuilder:us-east-1:aws:image/amazon-linux-2023-ecs-optimized-x86/x.x.x"
```

For more information about the [AWS::ImageBuilder::ImageRecipe](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-imagebuilder-imagerecipe.html) resource, see the *AWS CloudFormation User Guide*.

You can automate the creation of new images in your pipeline by setting the `Schedule` property of the `AWS::ImageBuilder::ImagePipeline` resource. The schedule includes a start condition and cron expression. For more information, see [AWS::ImageBuilder::ImagePipeline](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-imagebuilder-imagepipeline.html) in the *AWS CloudFormation User Guide*.

The following `AWS::ImageBuilder::ImagePipeline` example has the pipeline run a build at 10:00 AM Coordinated Universal Time (UTC) every day. Set the following `Schedule` values:
+ Set `PipelineExecutionStartCondition` to `EXPRESSION_MATCH_AND_DEPENDENCY_UPDATES_AVAILABLE`. The build initiates only if dependent resources, such as the parent image or components that use the wildcard `x` in their semantic versions, are updated. This ensures the build incorporates the latest updates of those resources.
+ Set `ScheduleExpression` to the cron expression `cron(0 10 * * ? *)`.

```
ECSPipeline:
    Type: AWS::ImageBuilder::ImagePipeline
    Properties:
      Name: my-pipeline
      ImageRecipeArn: <arn of the recipe you created in previous step>
      InfrastructureConfigurationArn: <ARN of the infrastructure configuration associated with this image pipeline>
      Schedule:
        PipelineExecutionStartCondition: EXPRESSION_MATCH_AND_DEPENDENCY_UPDATES_AVAILABLE
        ScheduleExpression: 'cron(0 10 * * ? *)'
```

## Using the image ARN with Terraform
<a name="arn-with-terraform"></a>

The approach for specifying your pipeline's parent image and schedule in Terraform aligns with that in AWS CloudFormation. You use the `aws_imagebuilder_image_recipe` resource. Set the `parent_image` value to the image ARN. Use the versionless ARN of your desired image to ensure your pipeline always uses the most recent version of the image. For more information, see [aws_imagebuilder_image_recipe](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/imagebuilder_image_recipe#argument-reference) in the Terraform documentation.

In the `schedule` configuration block of the `aws_imagebuilder_image_pipeline` resource, set the `schedule_expression` argument to a cron expression of your choice to specify how often the pipeline runs, and set `pipeline_execution_start_condition` to `EXPRESSION_MATCH_AND_DEPENDENCY_UPDATES_AVAILABLE`. For more information, see [aws_imagebuilder_image_pipeline](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/imagebuilder_image_pipeline#argument-reference) in the Terraform documentation.

# Using AWS Deep Learning Containers on Amazon ECS
<a name="deep-learning-containers"></a>

AWS Deep Learning Containers provide a set of Docker images for training and serving models in TensorFlow and Apache MXNet (Incubating) on Amazon ECS. Deep Learning Containers enable optimized environments with TensorFlow, NVIDIA CUDA (for GPU instances), and Intel MKL (for CPU instances) libraries. Container images for Deep Learning Containers are available in Amazon ECR to reference in Amazon ECS task definitions. You can use Deep Learning Containers along with Amazon Elastic Inference to lower your inference costs.

To get started using Deep Learning Containers without Elastic Inference on Amazon ECS, see [Amazon ECS setup](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/deep-learning-containers-ecs-setup.html) in the *AWS Deep Learning Containers Developer Guide*.

# Tutorial: Creating a Service Using a Blue/Green Deployment
<a name="create-blue-green"></a>

Amazon ECS has integrated blue/green deployments into the Create Service wizard on the Amazon ECS console. For more information, see [Creating an Amazon ECS rolling update deployment](create-service-console-v2.md).

The following tutorial shows how to create an Amazon ECS service containing a Fargate task that uses the blue/green deployment type with the AWS CLI.

**Note**  
You can also perform blue/green deployments through AWS CloudFormation. For more information, see [Perform Amazon ECS blue/green deployments through CodeDeploy using CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/blue-green.html) in the *AWS CloudFormation User Guide*.

## Prerequisites
<a name="create-blue-green-prereqs"></a>

This tutorial assumes that you have completed the following prerequisites:
+ The latest version of the AWS CLI is installed and configured. For more information about installing or upgrading the AWS CLI, see [Installing the AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/installing.html).
+ The steps in [Set up to use Amazon ECS](get-set-up-for-amazon-ecs.md) have been completed.
+ Your IAM user has the required permissions specified in the [AmazonECS\_FullAccess](security-iam-awsmanpol.md#security-iam-awsmanpol-AmazonECS_FullAccess) IAM policy example.
+ You have a VPC and security group created to use. For more information, see [Create a virtual private cloud](get-set-up-for-amazon-ecs.md#create-a-vpc).
+ The Amazon ECS CodeDeploy IAM role is created. For more information, see [Amazon ECS CodeDeploy IAM Role](codedeploy_IAM_role.md).

## Step 1: Create an Application Load Balancer
<a name="create-blue-green-loadbalancer"></a>

Amazon ECS services using the blue/green deployment type require the use of either an Application Load Balancer or a Network Load Balancer. This tutorial uses an Application Load Balancer.

**To create an Application Load Balancer**

1. Use the [create-load-balancer](https://docs.aws.amazon.com/cli/latest/reference/elbv2/create-load-balancer.html) command to create an Application Load Balancer. Specify two subnets in different Availability Zones, as well as a security group.

   ```
   aws elbv2 create-load-balancer \
        --name bluegreen-alb \
        --subnets subnet-abcd1234 subnet-abcd5678 \
        --security-groups sg-abcd1234 \
        --region us-east-1
   ```

   The output includes the Amazon Resource Name (ARN) of the load balancer, with the following format:

   ```
   arn:aws:elasticloadbalancing:region:aws_account_id:loadbalancer/app/bluegreen-alb/e5ba62739c16e642
   ```

1. Use the [create-target-group](https://docs.aws.amazon.com/cli/latest/reference/elbv2/create-target-group.html) command to create a target group. This target group will route traffic to the original task set in your service.

   ```
   aws elbv2 create-target-group \
        --name bluegreentarget1 \
        --protocol HTTP \
        --port 80 \
        --target-type ip \
        --vpc-id vpc-abcd1234 \
        --region us-east-1
   ```

   The output includes the ARN of the target group, with the following format:

   ```
   arn:aws:elasticloadbalancing:region:aws_account_id:targetgroup/bluegreentarget1/209a844cd01825a4
   ```

1. Use the [create-listener](https://docs.aws.amazon.com/cli/latest/reference/elbv2/create-listener.html) command to create a load balancer listener with a default rule that forwards requests to the target group.

   ```
   aws elbv2 create-listener \
        --load-balancer-arn arn:aws:elasticloadbalancing:region:aws_account_id:loadbalancer/app/bluegreen-alb/e5ba62739c16e642 \
        --protocol HTTP \
        --port 80 \
        --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:region:aws_account_id:targetgroup/bluegreentarget1/209a844cd01825a4 \
        --region us-east-1
   ```

   The output includes the ARN of the listener, with the following format:

   ```
   arn:aws:elasticloadbalancing:region:aws_account_id:listener/app/bluegreen-alb/e5ba62739c16e642/665750bec1b03bd4
   ```
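Each of the commands above returns an ARN that later steps reference. Rather than copying ARNs by hand, you can capture them into shell variables using the AWS CLI's standard `--query` and `--output text` options, as in the following sketch. The `aws()` stub function stands in for the real AWS CLI so the sketch runs anywhere; delete it to run the command against your account.

```shell
# Sketch: capture a resource ARN with --query/--output text instead of
# copying it from the JSON output by hand. The aws() stub below stands in
# for the real AWS CLI; remove it to run against your account.
aws() {
  echo "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/bluegreen-alb/e5ba62739c16e642"
}

ALB_ARN=$(aws elbv2 create-load-balancer \
     --name bluegreen-alb \
     --subnets subnet-abcd1234 subnet-abcd5678 \
     --security-groups sg-abcd1234 \
     --region us-east-1 \
     --query 'LoadBalancers[0].LoadBalancerArn' \
     --output text)

echo "Load balancer ARN: ${ALB_ARN}"
```

The same pattern works for `create-target-group` (`--query 'TargetGroups[0].TargetGroupArn'`) and `create-listener` (`--query 'Listeners[0].ListenerArn'`).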

## Step 2: Create an Amazon ECS Cluster
<a name="create-blue-green-cluster"></a>

Use the [create-cluster](https://docs.aws.amazon.com/cli/latest/reference/ecs/create-cluster.html) command to create a cluster named `tutorial-bluegreen-cluster` to use.

```
aws ecs create-cluster \
     --cluster-name tutorial-bluegreen-cluster \
     --region us-east-1
```

The output includes the ARN of the cluster, with the following format:

```
arn:aws:ecs:region:aws_account_id:cluster/tutorial-bluegreen-cluster
```

## Step 3: Register a Task Definition
<a name="create-blue-green-taskdef"></a>

Use the [register-task-definition](https://docs.aws.amazon.com/cli/latest/reference/ecs/register-task-definition.html) command to register a task definition that is compatible with Fargate. Fargate task definitions require the `awsvpc` network mode. The following example task definition is used for this tutorial.

First, create a file named `fargate-task.json` with the following contents. Ensure that you use the ARN for your task execution role. For more information, see [Amazon ECS task execution IAM role](task_execution_IAM_role.md).

```
{
    "family": "sample-fargate", 
    "networkMode": "awsvpc", 
    "containerDefinitions": [
        {
            "name": "sample-app", 
            "image": "public.ecr.aws/docker/library/httpd:latest", 
            "portMappings": [
                {
                    "containerPort": 80, 
                    "hostPort": 80, 
                    "protocol": "tcp"
                }
            ], 
            "essential": true, 
            "entryPoint": [
                "sh",
                "-c"
            ], 
            "command": [
                "/bin/sh -c \"echo '<html> <head> <title>Amazon ECS Sample App</title> <style>body {margin-top: 40px; background-color: #333;} </style> </head><body> <div style=color:white;text-align:center> <h1>Amazon ECS Sample App</h1> <h2>Congratulations!</h2> <p>Your application is now running on a container in Amazon ECS.</p> </div></body></html>' >  /usr/local/apache2/htdocs/index.html && httpd-foreground\""
            ]
        }
    ], 
    "requiresCompatibilities": [
        "FARGATE"
    ], 
    "cpu": "256", 
    "memory": "512"
}
```

Then register the task definition using the `fargate-task.json` file that you created.

```
aws ecs register-task-definition \
     --cli-input-json file://fargate-task.json \
     --region us-east-1
```

## Step 4: Create an Amazon ECS Service
<a name="create-blue-green-service"></a>

Use the [create-service](https://docs.aws.amazon.com/cli/latest/reference/ecs/create-service.html) command to create a service. 

First, create a file named `service-bluegreen.json` with the following contents.

```
{
    "cluster": "tutorial-bluegreen-cluster",
    "serviceName": "service-bluegreen",
    "taskDefinition": "sample-fargate",
    "loadBalancers": [
        {
            "targetGroupArn": "arn:aws:elasticloadbalancing:region:aws_account_id:targetgroup/bluegreentarget1/209a844cd01825a4",
            "containerName": "sample-app",
            "containerPort": 80
        }
    ],
    "launchType": "FARGATE",
    "schedulingStrategy": "REPLICA",
    "deploymentController": {
        "type": "CODE_DEPLOY"
    },
    "platformVersion": "LATEST",
    "networkConfiguration": {
       "awsvpcConfiguration": {
          "assignPublicIp": "ENABLED",
          "securityGroups": [ "sg-abcd1234" ],
          "subnets": [ "subnet-abcd1234", "subnet-abcd5678" ]
       }
    },
    "desiredCount": 1
}
```

Then create your service using the `service-bluegreen.json` file that you created.

```
aws ecs create-service \
     --cli-input-json file://service-bluegreen.json \
     --region us-east-1
```

The output includes the ARN of the service, with the following format:

```
arn:aws:ecs:region:aws_account_id:service/service-bluegreen
```

## Step 5: Create the AWS CodeDeploy Resources
<a name="create-blue-green-codedeploy"></a>

Use the following steps to create your CodeDeploy application, the Application Load Balancer target group for the CodeDeploy deployment group, and the CodeDeploy deployment group.

**To create CodeDeploy resources**

1. Use the [create-application](https://docs.aws.amazon.com/cli/latest/reference/deploy/create-application.html) command to create a CodeDeploy application. Specify the `ECS` compute platform.

   ```
   aws deploy create-application \
        --application-name tutorial-bluegreen-app \
        --compute-platform ECS \
        --region us-east-1
   ```

   The output includes the application ID, with the following format:

   ```
   {
       "applicationId": "b8e9c1ef-3048-424e-9174-885d7dc9dc11"
   }
   ```

1. Use the [create-target-group](https://docs.aws.amazon.com/cli/latest/reference/elbv2/create-target-group.html) command to create a second Application Load Balancer target group, which will be used when creating your CodeDeploy deployment group.

   ```
   aws elbv2 create-target-group \
        --name bluegreentarget2 \
        --protocol HTTP \
        --port 80 \
        --target-type ip \
        --vpc-id vpc-abcd1234 \
        --region us-east-1
   ```

   The output includes the ARN for the target group, with the following format:

   ```
   arn:aws:elasticloadbalancing:region:aws_account_id:targetgroup/bluegreentarget2/708d384187a3cfdc
   ```

1. Use the [create-deployment-group](https://docs.aws.amazon.com/cli/latest/reference/deploy/create-deployment-group.html) command to create a CodeDeploy deployment group.

   First, create a file named `tutorial-deployment-group.json` with the following contents. This example uses the resources that you created earlier in the tutorial. For the `serviceRoleArn`, specify the ARN of your Amazon ECS CodeDeploy IAM role. For more information, see [Amazon ECS CodeDeploy IAM Role](codedeploy_IAM_role.md).

   ```
   {
      "applicationName": "tutorial-bluegreen-app",
      "autoRollbackConfiguration": {
         "enabled": true,
         "events": [ "DEPLOYMENT_FAILURE" ]
      },
      "blueGreenDeploymentConfiguration": {
         "deploymentReadyOption": {
            "actionOnTimeout": "CONTINUE_DEPLOYMENT",
            "waitTimeInMinutes": 0
         },
         "terminateBlueInstancesOnDeploymentSuccess": {
            "action": "TERMINATE",
            "terminationWaitTimeInMinutes": 5
         }
      },
      "deploymentGroupName": "tutorial-bluegreen-dg",
      "deploymentStyle": {
         "deploymentOption": "WITH_TRAFFIC_CONTROL",
         "deploymentType": "BLUE_GREEN"
      },
      "loadBalancerInfo": {
         "targetGroupPairInfoList": [
           {
             "targetGroups": [
                {
                    "name": "bluegreentarget1"
                },
                {
                    "name": "bluegreentarget2"
                }
             ],
             "prodTrafficRoute": {
                 "listenerArns": [
                     "arn:aws:elasticloadbalancing:region:aws_account_id:listener/app/bluegreen-alb/e5ba62739c16e642/665750bec1b03bd4"
                 ]
             }
           }
         ]
      },
      "serviceRoleArn": "arn:aws:iam::aws_account_id:role/ecsCodeDeployRole",
      "ecsServices": [
          {
              "serviceName": "service-bluegreen",
              "clusterName": "tutorial-bluegreen-cluster"
          }
      ]
   }
   ```

   Then create the CodeDeploy deployment group.

   ```
   aws deploy create-deployment-group \
        --cli-input-json file://tutorial-deployment-group.json \
        --region us-east-1
   ```

   The output includes the deployment group ID, with the following format:

   ```
   {
       "deploymentGroupId": "6fd9bdc6-dc51-4af5-ba5a-0a4a72431c88"
   }
   ```

## Step 6: Create and Monitor a CodeDeploy Deployment
<a name="create-blue-green-verify"></a>

Use the following steps to create and upload an application specification file (AppSpec file) and a CodeDeploy deployment.

**To create and monitor a CodeDeploy deployment**

1. Create and upload an AppSpec file using the following steps.

   1. Create a file named `appspec.yaml` with the following contents. The AppSpec file specifies the task definition, container, and port for the deployment. This example uses the resources that you created earlier in the tutorial.

      ```
      version: 0.0
      Resources:
        - TargetService:
            Type: AWS::ECS::Service
            Properties:
              TaskDefinition: "arn:aws:ecs:region:aws_account_id:task-definition/sample-fargate:1"
              LoadBalancerInfo:
                ContainerName: "sample-app"
                ContainerPort: 80
              PlatformVersion: "LATEST"
      ```

   1. Use the [s3 mb](https://docs.aws.amazon.com/cli/latest/reference/s3/mb.html) command to create an Amazon S3 bucket for the AppSpec file.

      ```
      aws s3 mb s3://tutorial-bluegreen-bucket
      ```

   1. Use the [s3 cp](https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html) command to upload the AppSpec file to the Amazon S3 bucket.

      ```
      aws s3 cp ./appspec.yaml s3://tutorial-bluegreen-bucket/appspec.yaml
      ```

1. Create the CodeDeploy deployment using the following steps.

   1. Create a file named `create-deployment.json` with the following contents. This example uses the resources that you created earlier in the tutorial.

      ```
      {
          "applicationName": "tutorial-bluegreen-app",
          "deploymentGroupName": "tutorial-bluegreen-dg",
          "revision": {
              "revisionType": "S3",
              "s3Location": {
                  "bucket": "tutorial-bluegreen-bucket",
                  "key": "appspec.yaml",
                  "bundleType": "YAML"
              }
          }
      }
      ```

   1. Use the [create-deployment](https://docs.aws.amazon.com/cli/latest/reference/deploy/create-deployment.html) command to create the deployment.

      ```
      aws deploy create-deployment \
           --cli-input-json file://create-deployment.json \
           --region us-east-1
      ```

      The output includes the deployment ID, with the following format:

      ```
      {
          "deploymentId": "d-RPCR1U3TW"
      }
      ```

   1. Use the [get-deployment-target](https://docs.aws.amazon.com/cli/latest/reference/deploy/get-deployment-target.html) command to get the details of the deployment, specifying the `deploymentId` from the previous output.

      ```
      aws deploy get-deployment-target \
           --deployment-id "d-RPCR1U3TW" \
           --target-id tutorial-bluegreen-cluster:service-bluegreen \
           --region us-east-1
      ```

      Continue to retrieve the deployment details until the status is `Succeeded`, as shown in the following output.

      ```
      {
          "deploymentTarget": {
              "deploymentTargetType": "ECSTarget",
              "ecsTarget": {
                  "deploymentId": "d-RPCR1U3TW",
                  "targetId": "tutorial-bluegreen-cluster:service-bluegreen",
                  "targetArn": "arn:aws:ecs:region:aws_account_id:service/service-bluegreen",
                  "lastUpdatedAt": 1543431490.226,
                  "lifecycleEvents": [
                      {
                          "lifecycleEventName": "BeforeInstall",
                          "startTime": 1543431361.022,
                          "endTime": 1543431361.433,
                          "status": "Succeeded"
                      },
                      {
                          "lifecycleEventName": "Install",
                          "startTime": 1543431361.678,
                          "endTime": 1543431485.275,
                          "status": "Succeeded"
                      },
                      {
                          "lifecycleEventName": "AfterInstall",
                          "startTime": 1543431485.52,
                          "endTime": 1543431486.033,
                          "status": "Succeeded"
                      },
                      {
                          "lifecycleEventName": "BeforeAllowTraffic",
                          "startTime": 1543431486.838,
                          "endTime": 1543431487.483,
                          "status": "Succeeded"
                      },
                      {
                          "lifecycleEventName": "AllowTraffic",
                          "startTime": 1543431487.748,
                          "endTime": 1543431488.488,
                          "status": "Succeeded"
                      },
                      {
                          "lifecycleEventName": "AfterAllowTraffic",
                          "startTime": 1543431489.152,
                          "endTime": 1543431489.885,
                          "status": "Succeeded"
                      }
                  ],
                  "status": "Succeeded",
                  "taskSetsInfo": [
                      {
                          "identifer": "ecs-svc/9223370493425779968",
                          "desiredCount": 1,
                          "pendingCount": 0,
                          "runningCount": 1,
                          "status": "ACTIVE",
                          "trafficWeight": 0.0,
                          "targetGroup": {
                              "name": "bluegreentarget1"
                          }
                      },
                      {
                          "identifer": "ecs-svc/9223370493423413672",
                          "desiredCount": 1,
                          "pendingCount": 0,
                          "runningCount": 1,
                          "status": "PRIMARY",
                          "trafficWeight": 100.0,
                          "targetGroup": {
                              "name": "bluegreentarget2"
                          }
                      }
                  ]
              }
          }
      }
      ```
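The polling described above can be scripted. The following sketch loops until the deployment target reports `Succeeded`; the `get_status` stub simulates a deployment that succeeds on the third poll so the sketch runs anywhere, and a comment shows the real `get-deployment-target` call it stands in for.

```shell
# Sketch: poll until the deployment status is Succeeded. In a real run,
# replace get_status with a call such as:
#   status=$(aws deploy get-deployment-target \
#        --deployment-id "d-RPCR1U3TW" \
#        --target-id tutorial-bluegreen-cluster:service-bluegreen \
#        --query 'deploymentTarget.ecsTarget.status' \
#        --output text \
#        --region us-east-1)
# The stub below simulates a deployment that succeeds on the third poll.
polls=0
get_status() {
  polls=$((polls + 1))
  if [ "$polls" -lt 3 ]; then
    status="InProgress"
  else
    status="Succeeded"
  fi
}

status=""
while [ "$status" != "Succeeded" ]; do
  get_status
  echo "Deployment status: $status"
  # sleep 15   # uncomment when polling a real deployment
done
```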

## Step 7: Clean Up
<a name="create-blue-green-cleanup"></a>

When you have finished this tutorial, clean up the resources associated with it to avoid incurring charges for resources that you aren't using.

**Cleaning up the tutorial resources**

1. Use the [delete-deployment-group](https://docs.aws.amazon.com/cli/latest/reference/deploy/delete-deployment-group.html) command to delete the CodeDeploy deployment group.

   ```
   aws deploy delete-deployment-group \
        --application-name tutorial-bluegreen-app \
        --deployment-group-name tutorial-bluegreen-dg \
        --region us-east-1
   ```

1. Use the [delete-application](https://docs.aws.amazon.com/cli/latest/reference/deploy/delete-application.html) command to delete the CodeDeploy application.

   ```
   aws deploy delete-application \
        --application-name tutorial-bluegreen-app \
        --region us-east-1
   ```

1. Use the [delete-service](https://docs.aws.amazon.com/cli/latest/reference/ecs/delete-service.html) command to delete the Amazon ECS service. Using the `--force` flag allows you to delete a service even if it has not been scaled down to zero tasks.

   ```
   aws ecs delete-service \
        --service arn:aws:ecs:region:aws_account_id:service/service-bluegreen \
        --force \
        --region us-east-1
   ```

1. Use the [delete-cluster](https://docs.aws.amazon.com/cli/latest/reference/ecs/delete-cluster.html) command to delete the Amazon ECS cluster.

   ```
   aws ecs delete-cluster \
        --cluster tutorial-bluegreen-cluster \
        --region us-east-1
   ```

1. Use the [s3 rm](https://docs.aws.amazon.com/cli/latest/reference/s3/rm.html) command to delete the AppSpec file from the Amazon S3 bucket.

   ```
   aws s3 rm s3://tutorial-bluegreen-bucket/appspec.yaml
   ```

1. Use the [s3 rb](https://docs.aws.amazon.com/cli/latest/reference/s3/rb.html) command to delete the Amazon S3 bucket.

   ```
   aws s3 rb s3://tutorial-bluegreen-bucket
   ```

1. Use the [delete-load-balancer](https://docs.aws.amazon.com/cli/latest/reference/elbv2/delete-load-balancer.html) command to delete the Application Load Balancer.

   ```
   aws elbv2 delete-load-balancer \
        --load-balancer-arn arn:aws:elasticloadbalancing:region:aws_account_id:loadbalancer/app/bluegreen-alb/e5ba62739c16e642 \
        --region us-east-1
   ```

1. Use the [delete-target-group](https://docs.aws.amazon.com/cli/latest/reference/elbv2/delete-target-group.html) command to delete the two Application Load Balancer target groups.

   ```
   aws elbv2 delete-target-group \
        --target-group-arn arn:aws:elasticloadbalancing:region:aws_account_id:targetgroup/bluegreentarget1/209a844cd01825a4 \
        --region us-east-1
   ```

   ```
   aws elbv2 delete-target-group \
        --target-group-arn arn:aws:elasticloadbalancing:region:aws_account_id:targetgroup/bluegreentarget2/708d384187a3cfdc \
        --region us-east-1
   ```