

• The AWS Systems Manager CloudWatch Dashboard will no longer be available after April 30, 2026. Customers can continue to use the Amazon CloudWatch console to view, create, and manage their Amazon CloudWatch dashboards, just as they do today. For more information, see [Amazon CloudWatch Dashboard documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html). 

# AWS Systems Manager Node tools

AWS Systems Manager provides the following tools for accessing, managing, and configuring your *managed nodes*. A managed node is any machine configured for use with Systems Manager in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment.

**Topics**
+ [AWS Systems Manager Compliance](systems-manager-compliance.md)
+ [AWS Systems Manager Distributor](distributor.md)
+ [AWS Systems Manager Fleet Manager](fleet-manager.md)
+ [AWS Systems Manager Hybrid Activations](activations.md)
+ [AWS Systems Manager Inventory](systems-manager-inventory.md)
+ [AWS Systems Manager Patch Manager](patch-manager.md)
+ [AWS Systems Manager Run Command](run-command.md)
+ [AWS Systems Manager Session Manager](session-manager.md)
+ [AWS Systems Manager State Manager](systems-manager-state.md)

# AWS Systems Manager Compliance

You can use Compliance, a tool in AWS Systems Manager, to scan your fleet of managed nodes for patch compliance and configuration inconsistencies. You can collect and aggregate data from multiple AWS accounts and Regions, and then drill down into specific resources that aren’t compliant. By default, Compliance displays current compliance data about patching in Patch Manager and associations in State Manager. (Patch Manager and State Manager are also both tools in AWS Systems Manager.) To get started with Compliance, open the [Systems Manager console](https://console.aws.amazon.com//systems-manager/compliance). In the navigation pane, choose **Compliance**.

Patch compliance data from Patch Manager can be sent to AWS Security Hub CSPM. Security Hub CSPM gives you a comprehensive view of your high-priority security alerts and compliance status. It also monitors the patching status of your fleet. For more information, see [Integrating Patch Manager with AWS Security Hub CSPM](patch-manager-security-hub-integration.md). 

Compliance offers the following additional benefits and features: 
+ View compliance history and change tracking for Patch Manager patching data and State Manager associations by using AWS Config.
+ Customize Compliance to create your own compliance types based on your IT or business requirements.
+ Remediate issues by using Run Command (another tool in AWS Systems Manager), State Manager, or Amazon EventBridge.
+ Port data to Amazon Athena and Amazon QuickSight to generate fleet-wide reports.

**EventBridge support**  
This Systems Manager tool is supported as an *event* type in Amazon EventBridge rules. For information, see [Monitoring Systems Manager events with Amazon EventBridge](monitoring-eventbridge-events.md) and [Reference: Amazon EventBridge event patterns and types for Systems Manager](reference-eventbridge-events.md).
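A rule for these events matches on the source `aws.ssm` and the detail type `Configuration Compliance State Change` (the same values selected in the console procedure later in this section). The following sketch shows that top-level matching in a few lines of Python; the sample event is illustrative, not a captured event.

```python
# Event pattern values for Systems Manager Compliance events, as used in
# the EventBridge console procedure in this guide.
PATTERN = {
    "source": ["aws.ssm"],
    "detail-type": ["Configuration Compliance State Change"],
}

def matches(event: dict, pattern: dict) -> bool:
    """Minimal top-level EventBridge matching: every pattern key must be
    present in the event with a value from the allowed list."""
    return all(event.get(key) in allowed for key, allowed in pattern.items())

# Illustrative sample event (abridged; not real API output).
sample_event = {
    "source": "aws.ssm",
    "detail-type": "Configuration Compliance State Change",
    "detail": {"compliance-status": "non_compliant", "compliance-type": "Patch"},
}

print(matches(sample_event, PATTERN))  # True
```

Events from other services (for example, `aws.ec2`) fall through the same pattern without matching, which is what lets a single rule target only Compliance state changes.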

**Chef InSpec integration**  
Systems Manager integrates with [https://www.chef.io/inspec/](https://www.chef.io/inspec/). InSpec is an open-source, runtime framework that allows you to create human-readable profiles on GitHub or Amazon Simple Storage Service (Amazon S3). You can then use Systems Manager to run compliance scans and view compliant and noncompliant managed nodes. For more information, see [Using Chef InSpec profiles with Systems Manager Compliance](integration-chef-inspec.md).

**Pricing**  
Compliance is offered at no additional charge. You only pay for the AWS resources that you use.

**Topics**
+ [Getting started with Compliance](compliance-prerequisites.md)
+ [Configuring permissions for Compliance](compliance-permissions.md)
+ [Creating a resource data sync for Compliance](compliance-datasync-create.md)
+ [Learn details about Compliance](compliance-about.md)
+ [Deleting a resource data sync for Compliance](systems-manager-compliance-delete-RDS.md)
+ [Remediating compliance issues using EventBridge](compliance-fixing.md)
+ [Assign custom compliance metadata using the AWS CLI](compliance-custom-metadata-cli.md)

# Getting started with Compliance


To get started with Compliance, a tool in AWS Systems Manager, complete the following tasks.



| Task | For more information | 
| --- | --- | 
|  Compliance works with patch data in Patch Manager and associations in State Manager. (Patch Manager and State Manager are also both tools in AWS Systems Manager.) Compliance also works with custom compliance types on managed nodes that are managed using Systems Manager. Verify that you have completed the setup requirements for your Amazon Elastic Compute Cloud (Amazon EC2) instances and non-EC2 machines in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment.  |  [Setting up Systems Manager unified console for an organization](systems-manager-setting-up-organizations.md)  | 
|  Update the AWS Identity and Access Management (IAM) role used by your managed nodes to restrict Compliance permissions.  |  [Configuring permissions for Compliance](compliance-permissions.md)  | 
|  If you plan to monitor patch compliance, verify that you've configured Patch Manager. You must perform patching operations by using Patch Manager before Compliance can display patch compliance data.  |  [AWS Systems Manager Patch Manager](patch-manager.md)  | 
|  If you plan to monitor association compliance, verify that you've created State Manager associations. You must create associations before Compliance can display association compliance data.  |  [AWS Systems Manager State Manager](systems-manager-state.md)  | 
|  (Optional) Configure the system to view compliance history and change tracking.   |  [Viewing compliance configuration history and change tracking](compliance-about.md#compliance-history)  | 
|  (Optional) Create custom compliance types.   |  [Assign custom compliance metadata using the AWS CLI](compliance-custom-metadata-cli.md)  | 
|  (Optional) Create a resource data sync to aggregate all compliance data in a target Amazon Simple Storage Service (Amazon S3) bucket.  |  [Creating a resource data sync for Compliance](compliance-datasync-create.md)  | 

# Configuring permissions for Compliance


As a security best practice, we recommend that you update the AWS Identity and Access Management (IAM) role used by your managed nodes with the following permissions to restrict the node's ability to use the [PutComplianceItems](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PutComplianceItems.html) API action. This API action registers a compliance type and other compliance details on a designated resource, such as an Amazon EC2 instance or a managed node.

If your node is an Amazon EC2 instance, you must update the IAM instance profile used by the instance with the following permissions. For more information about instance profiles for EC2 instances managed by Systems Manager, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md). For other types of managed nodes, update the IAM role used by the node with the following permissions. For more information, see [Update permissions for a role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_update-role-permissions.html) in the *IAM User Guide*.

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:PutComplianceItems"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:SourceInstanceARN": "${ec2:SourceInstanceARN}"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:PutComplianceItems"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ssm:SourceInstanceARN": "${ssm:SourceInstanceARN}"
                }
            }
        }
    ]
}
```

------
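With the policy saved to a file, one way to attach it to a node's role is the `aws iam put-role-policy` command. The sketch below builds that command as a string; the role name, policy name, and file name are placeholders, not values from this guide.

```python
# Sketch: attach the policy above to a managed node's IAM role as an
# inline policy. All three names below are hypothetical placeholders.
role_name = "MySSMManagedNodeRole"                   # your node's role
policy_file = "restrict-put-compliance-items.json"   # the JSON policy above

command = (
    "aws iam put-role-policy "
    f"--role-name {role_name} "
    "--policy-name RestrictPutComplianceItems "
    f"--policy-document file://{policy_file}"
)
print(command)
```

Run the printed command from a shell that has IAM permissions to modify the role.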

# Creating a resource data sync for Compliance


You can use the resource data sync feature in AWS Systems Manager to send compliance data from all of your managed nodes to a target Amazon Simple Storage Service (Amazon S3) bucket. When you create the sync, you can specify managed nodes from multiple AWS accounts, AWS Regions, and your [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment. Resource data sync then automatically updates the centralized data when new compliance data is collected. With all compliance data stored in a target S3 bucket, you can use services like Amazon Athena and Amazon QuickSight to query and analyze the aggregated data. Configuring resource data sync for Compliance is a one-time operation.

Use the following procedure to create a resource data sync for Compliance by using the AWS Management Console.

**To create and configure an S3 bucket for resource data sync (console)**

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Create a bucket to store your aggregated compliance data. For more information, see [Create a Bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/CreatingABucket.html) in the *Amazon Simple Storage Service User Guide*. Make a note of the bucket name and the AWS Region where you created it.

1. Open the bucket, choose the **Permissions** tab, and then choose **Bucket Policy**.

1. Copy and paste the following bucket policy into the policy editor. Replace *amzn-s3-demo-bucket* with the name of the S3 bucket you created, and replace *111122223333* with a valid AWS account ID. Optionally, replace *Bucket-Prefix* with the name of an Amazon S3 prefix (subdirectory). If you didn't create a prefix, remove *Bucket-Prefix*/ from the ARN in the policy. 

------
#### [ JSON ]


   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "SSMBucketPermissionsCheck",
               "Effect": "Allow",
               "Principal": {
                   "Service": "ssm.amazonaws.com"
               },
               "Action": "s3:GetBucketAcl",
               "Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
           },
           {
               "Sid": "SSMBucketDelivery",
               "Effect": "Allow",
               "Principal": {
                   "Service": "ssm.amazonaws.com"
               },
               "Action": "s3:PutObject",
               "Resource": ["arn:aws:s3:::amzn-s3-demo-bucket/Bucket-Prefix/*/accountid=111122223333/*"],
               "Condition": {
                   "StringEquals": {
                       "s3:x-amz-acl": "bucket-owner-full-control"
                   }
               }
           }
       ]
   }
   ```

------

**To create a resource data sync**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose **Account management**, **Resource Data Syncs**, and then choose **Create resource data sync**.

1. In the **Sync name** field, enter a name for the sync configuration.

1. In the **Bucket name** field, enter the name of the Amazon S3 bucket you created at the start of this procedure.

1. (Optional) In the **Bucket prefix** field, enter the name of an S3 bucket prefix (subdirectory).

1. In the **Bucket region** field, choose **This region** if the S3 bucket you created is located in the current AWS Region. If the bucket is located in a different AWS Region, choose **Another region**, and enter the name of the Region.
**Note**  
If the sync and the target S3 bucket are located in different Regions, you might be subject to data transfer pricing. For more information, see [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/).

1. Choose **Create**.
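The console procedure above has a CLI equivalent, the `aws ssm create-resource-data-sync` command. The sketch below builds that command as a string; the sync name, bucket name, and Region are placeholders for your own values, and `JsonSerDe` is the sync format the service expects for the S3 destination.

```python
# Sketch: CLI equivalent of the console procedure above.
# The sync name, bucket name, and Region are placeholders.
sync_name = "compliance-data-sync"
bucket = "amzn-s3-demo-bucket"
region = "us-east-1"   # Region where you created the bucket

command = (
    "aws ssm create-resource-data-sync "
    f"--sync-name {sync_name} "
    f"--s3-destination BucketName={bucket},SyncFormat=JsonSerDe,Region={region}"
)
print(command)
```

Run the printed command once per sync; as noted above, configuring resource data sync for Compliance is a one-time operation.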

# Learn details about Compliance


Compliance, a tool in AWS Systems Manager, collects and reports data about the status of patching in Patch Manager and associations in State Manager. (Patch Manager and State Manager are also both tools in AWS Systems Manager.) Compliance also reports on custom compliance types you have specified for your managed nodes. This section includes details about each of these compliance types and how to view Systems Manager compliance data. It also includes information about how to view compliance history and change tracking.

**Note**  
Systems Manager integrates with [https://www.chef.io/inspec/](https://www.chef.io/inspec/). InSpec is an open-source, runtime framework that allows you to create human-readable profiles on GitHub or Amazon Simple Storage Service (Amazon S3). Then you can use Systems Manager to run compliance scans and view compliant and noncompliant instances. For more information, see [Using Chef InSpec profiles with Systems Manager Compliance](integration-chef-inspec.md).

## About patch compliance


After you use Patch Manager to install patches on your instances, compliance status information is immediately available to you in the console or in response to AWS Command Line Interface (AWS CLI) commands or corresponding Systems Manager API operations.

For information about patch compliance status values, see [Patch compliance state values](patch-manager-compliance-states.md).

## About State Manager association compliance


After you create one or more State Manager associations, compliance status information is immediately available to you in the console or in response to AWS CLI commands or corresponding Systems Manager API operations. For associations, Compliance shows statuses of `Compliant` or `Non-compliant` and the severity level assigned to the association, such as `Critical` or `Medium`.

When State Manager executes an association on a managed node, it triggers a compliance aggregation process that updates compliance status for all associations on that node. The `ExecutionTime` value in compliance reports represents when the compliance status was captured by Systems Manager, not when the association was executed on the managed node. This means multiple associations might display identical `ExecutionTime` values even if they were executed at different times. To determine actual association execution times, refer to the association execution history using the AWS CLI command [https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-association-execution-targets.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-association-execution-targets.html) or by viewing the execution details in the console.
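The distinction matters when you read reports. In the made-up sample data below, two associations share one `ExecutionTime` because their status was captured in the same aggregation pass, even though the execution history shows they ran at different times.

```python
# Illustrative sample data (not API output): two association compliance
# items captured in the same aggregation pass share one ExecutionTime.
compliance_items = [
    {"Id": "assoc-web-config", "ExecutionTime": "2025-06-01T10:05:00Z"},
    {"Id": "assoc-av-install", "ExecutionTime": "2025-06-01T10:05:00Z"},
]
# Actual run times, as the association execution history would report them.
actual_run_times = {
    "assoc-web-config": "2025-06-01T08:55:00Z",
    "assoc-av-install": "2025-06-01T10:02:00Z",
}

capture_times = {item["ExecutionTime"] for item in compliance_items}
print(len(capture_times))                   # 1 — one shared capture time
print(len(set(actual_run_times.values())))  # 2 — two distinct run times
```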

## About custom compliance


You can assign compliance metadata to a managed node. This metadata can then be aggregated with other compliance data for compliance reporting purposes. For example, say that your business runs versions 2.0, 3.0, and 4.0 of software X on your managed nodes. The company wants to standardize on version 4.0, meaning that nodes running versions 2.0 and 3.0 are non-compliant. You can use the [PutComplianceItems](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PutComplianceItems.html) API operation to explicitly note which managed nodes are running older versions of software X. You can assign compliance metadata only by using the AWS CLI, AWS Tools for Windows PowerShell, or the SDKs. The following CLI sample command assigns compliance metadata to a managed instance and specifies the compliance type in the required format `Custom:string`. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

```
aws ssm put-compliance-items \
    --resource-id i-1234567890abcdef0 \
    --resource-type ManagedInstance \
    --compliance-type Custom:SoftwareXCheck \
    --execution-summary ExecutionTime=AnyStringToDenoteTimeOrDate \
    --items Id=Version2.0,Title=SoftwareXVersion,Severity=CRITICAL,Status=NON_COMPLIANT
```

------
#### [ Windows ]

```
aws ssm put-compliance-items ^
    --resource-id i-1234567890abcdef0 ^
    --resource-type ManagedInstance ^
    --compliance-type Custom:SoftwareXCheck ^
    --execution-summary ExecutionTime=AnyStringToDenoteTimeOrDate ^
    --items Id=Version2.0,Title=SoftwareXVersion,Severity=CRITICAL,Status=NON_COMPLIANT
```

------

**Note**  
The `ResourceType` parameter only supports `ManagedInstance`. If you add custom compliance to a managed AWS IoT Greengrass core device, you must specify a `ResourceType` of `ManagedInstance`.

Compliance managers can then view summaries or create reports about which managed nodes are or aren't compliant. You can assign a maximum of 10 different custom compliance types to a managed node.
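A hypothetical client-side pre-flight check for those two constraints, the `Custom:string` type-name format and the 10-type ceiling, might look like the following. These checks are not part of the Systems Manager API; they are a sketch of validation you could run before calling `PutComplianceItems`.

```python
# Hypothetical pre-flight checks (not part of the Systems Manager API):
# custom compliance types must use the Custom:string format, and a managed
# node supports at most 10 different custom compliance types.
MAX_CUSTOM_TYPES = 10

def validate_custom_types(compliance_types):
    customs = [t for t in compliance_types if t.startswith("Custom:")]
    for t in customs:
        if len(t) <= len("Custom:"):
            raise ValueError(f"missing name after 'Custom:' in {t!r}")
    if len(customs) > MAX_CUSTOM_TYPES:
        raise ValueError("at most 10 custom compliance types per managed node")
    return customs

print(validate_custom_types(["Custom:SoftwareXCheck", "Patch", "Association"]))
# ['Custom:SoftwareXCheck']
```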

For an example of how to create a custom compliance type and view compliance data, see [Assign custom compliance metadata using the AWS CLI](compliance-custom-metadata-cli.md).

## Viewing current compliance data


This section describes how to view compliance data in the Systems Manager console and by using the AWS CLI. For information about how to view patch and association compliance history and change tracking, see [Viewing compliance configuration history and change tracking](#compliance-history).

**Topics**
+ [Viewing current compliance data (console)](#compliance-view-results-console)
+ [Viewing current compliance data (AWS CLI)](#compliance-view-data-cli)

### Viewing current compliance data (console)


Use the following procedure to view compliance data in the Systems Manager console.

**To view current compliance reports in the Systems Manager console**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Compliance**.

1. In the **Compliance dashboard filtering** section, choose an option to filter compliance data. The **Compliance resources summary** section displays counts of compliance data based on the filter you chose.

1. To drill down into a resource for more information, scroll down to the **Details overview for resources** area and choose the ID of a managed node.

1. On the **Instance ID** or **Name** details page, choose the **Configuration compliance** tab to view a detailed configuration compliance report for the managed node.

**Note**  
For information about fixing compliance issues, see [Remediating compliance issues using EventBridge](compliance-fixing.md).

### Viewing current compliance data (AWS CLI)


You can view summaries of compliance data for patching, associations, and custom compliance types in the AWS CLI by using the following commands.

[https://docs.aws.amazon.com/cli/latest/reference/ssm/list-compliance-summaries.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/list-compliance-summaries.html)  
Returns a summary count of compliant and non-compliant association statuses according to the filter you specify. (API: [ListComplianceSummaries](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_ListComplianceSummaries.html))

[https://docs.aws.amazon.com/cli/latest/reference/ssm/list-resource-compliance-summaries.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/list-resource-compliance-summaries.html)  
Returns a resource-level summary count. The summary includes information about compliant and non-compliant statuses and detailed compliance-item severity counts, according to the filter criteria you specify. (API: [ListResourceComplianceSummaries](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_ListResourceComplianceSummaries.html))
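After you capture the JSON from `list-resource-compliance-summaries`, you can post-process it with a few lines of code. The sketch below tallies non-compliant resources by severity using an abridged, illustrative response shape; real responses carry more fields per item.

```python
from collections import Counter

# Abridged, illustrative response shape for
# list-resource-compliance-summaries (not real API output).
response = {
    "ResourceComplianceSummaryItems": [
        {"ResourceId": "i-1111111111example", "ComplianceType": "Patch",
         "Status": "NON_COMPLIANT", "OverallSeverity": "CRITICAL"},
        {"ResourceId": "i-2222222222example", "ComplianceType": "Association",
         "Status": "COMPLIANT", "OverallSeverity": "UNSPECIFIED"},
        {"ResourceId": "i-3333333333example", "ComplianceType": "Patch",
         "Status": "NON_COMPLIANT", "OverallSeverity": "HIGH"},
    ]
}

non_compliant = [item for item in response["ResourceComplianceSummaryItems"]
                 if item["Status"] == "NON_COMPLIANT"]
by_severity = Counter(item["OverallSeverity"] for item in non_compliant)
print(dict(by_severity))  # {'CRITICAL': 1, 'HIGH': 1}
```

The same tally works on a real response saved with `aws ssm list-resource-compliance-summaries > summaries.json` and loaded with `json.load`.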

You can view additional compliance data for patching by using the following AWS CLI commands.

[https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-patch-group-state.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-patch-group-state.html)  
Returns high-level aggregated patch compliance state for a patch group. (API: [DescribePatchGroupState](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribePatchGroupState.html))

[https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-instance-patch-states-for-patch-group.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-instance-patch-states-for-patch-group.html)  
Returns the high-level patch state for the instances in the specified patch group. (API: [DescribeInstancePatchStatesForPatchGroup](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribeInstancePatchStatesForPatchGroup.html))

**Note**  
For an illustration of how to configure patching and view patch compliance details by using the AWS CLI, see [Tutorial: Patch a server environment using the AWS CLI](patch-manager-patch-servers-using-the-aws-cli.md).

## Viewing compliance configuration history and change tracking


Systems Manager Compliance displays *current* patching and association compliance data for your managed nodes. You can view patching and association compliance history and change tracking by using [AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/). AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. This includes how the resources are related to one another and how they were configured in the past so that you can see how the configurations and relationships change over time. To view patching and association compliance history and change tracking, you must turn on the following resources in AWS Config: 
+ `SSM:PatchCompliance`
+ `SSM:AssociationCompliance`

For information about how to choose and configure these specific resources in AWS Config, see [Selecting Which Resources AWS Config Records](https://docs.aws.amazon.com/config/latest/developerguide/select-resources.html) in the *AWS Config Developer Guide*.

**Note**  
For information about AWS Config pricing, see [Pricing](https://aws.amazon.com/config/pricing/).

# Deleting a resource data sync for Compliance


If you no longer want to use AWS Systems Manager Compliance to view compliance data, we recommend that you also delete the resource data syncs used for Compliance data collection.

**To delete a Compliance resource data sync**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose **Account management**, **Resource data syncs**.

1. Choose a sync in the list. 
**Important**  
Make sure you choose the sync used for Compliance. Systems Manager supports resource data sync for multiple tools. If you choose the wrong sync, you could disrupt data aggregation for Systems Manager Explorer or Systems Manager Inventory.

1. Choose **Delete**.

1. Delete the Amazon Simple Storage Service (Amazon S3) bucket where the data was stored. For information about deleting an S3 bucket, see [Deleting a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/delete-bucket.html).

# Remediating compliance issues using EventBridge


You can quickly remediate patch and association compliance issues by using Run Command, a tool in AWS Systems Manager. You can target instance or AWS IoT Greengrass core device IDs or tags and run the `AWS-RunPatchBaseline` document or the `AWS-RefreshAssociation` document. If refreshing the association or re-running the patch baseline fails to resolve the compliance issue, then you need to investigate your associations, patch baselines, or instance configurations to understand why the Run Command operations didn't resolve the problem. 

For more information about patching, see [AWS Systems Manager Patch Manager](patch-manager.md) and [SSM Command document for patching: `AWS-RunPatchBaseline`](patch-manager-aws-runpatchbaseline.md).

For more information about associations, see [Working with associations in Systems Manager](state-manager-associations.md).

For more information about running a command, see [AWS Systems Manager Run Command](run-command.md).

**Specify Compliance as the target of an EventBridge event**  
You can also configure Amazon EventBridge to perform an action in response to Systems Manager Compliance events. For example, if one or more managed nodes fail to install Critical patch updates or run an association that installs anti-virus software, then you can configure EventBridge to run the `AWS-RunPatchBaseline` document or the `AWS-RefreshAssociation` document when the Compliance event occurs. 

Use the following procedure to configure Compliance as the target of an EventBridge event.

**To configure Compliance as the target of an EventBridge event (console)**

1. Open the Amazon EventBridge console at [https://console.aws.amazon.com/events/](https://console.aws.amazon.com/events/).

1. In the navigation pane, choose **Rules**.

1. Choose **Create rule**.

1. Enter a name and description for the rule.

   A rule can't have the same name as another rule in the same AWS Region and on the same event bus.

1. For **Event bus**, choose the event bus that you want to associate with this rule. If you want this rule to respond to matching events that come from your own AWS account, select **default**. When an AWS service in your account emits an event, it always goes to your account’s default event bus.

1. For **Rule type**, choose **Rule with an event pattern**.

1. Choose **Next**.

1. For **Event source**, choose **AWS events or EventBridge partner events**.

1. In the **Event pattern** section, choose **Event pattern form**.

1. For **Event source**, choose **AWS services**.

1. For **AWS service**, choose **Systems Manager**.

1. For **Event type**, choose **Configuration Compliance**.

1. For **Specific detail type(s)**, choose **Configuration Compliance State Change**.

1. Choose **Next**.

1. For **Target types**, choose **AWS service**.

1. For **Select a target**, choose **Systems Manager Run Command**.

1. In the **Document** list, choose a Systems Manager document (SSM document) to run when your target is invoked. For example, choose `AWS-RunPatchBaseline` for a non-compliant patch event, or choose `AWS-RefreshAssociation` for a non-compliant association event.

1. Specify information for the remaining fields and parameters.
**Note**  
Required fields and parameters have an asterisk (\*) next to the name. To create a target, you must specify a value for each required parameter or field. If you don't, the system creates the rule, but the rule won't run.

1. Choose **Next**.

1. (Optional) Enter one or more tags for the rule. For more information, see [Tagging Your Amazon EventBridge Resources](https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-tagging.html) in the *Amazon EventBridge User Guide*.

1. Choose **Next**.

1. Review the details of the rule and choose **Create rule**.

# Assign custom compliance metadata using the AWS CLI


The following procedure walks you through the process of using the AWS Command Line Interface (AWS CLI) to call the AWS Systems Manager [PutComplianceItems](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PutComplianceItems.html) API operation to assign custom compliance metadata to a resource. You can also use this API operation to manually assign patch or association compliance metadata to managed nodes, as shown in the following walkthrough. For more information about custom compliance, see [About custom compliance](compliance-about.md#compliance-custom).

**To assign custom compliance metadata to a managed instance (AWS CLI)**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run the following command to assign custom compliance metadata to a managed node. Replace each *example resource placeholder* with your own information. The `ResourceType` parameter only supports a value of `ManagedInstance`. Specify this value even if you are assigning custom compliance metadata to a managed AWS IoT Greengrass core device.

------
#### [ Linux & macOS ]

   ```
   aws ssm put-compliance-items \
       --resource-id instance_ID \
       --resource-type ManagedInstance \
       --compliance-type Custom:user-defined_string \
       --execution-summary ExecutionTime=user-defined_time_and/or_date_value \
       --items Id=user-defined_ID,Title=user-defined_title,Severity=one_or_more_comma-separated_severities:CRITICAL, MAJOR, MINOR, INFORMATIONAL, or UNSPECIFIED,Status=COMPLIANT or NON_COMPLIANT
   ```

------
#### [ Windows ]

   ```
   aws ssm put-compliance-items ^
       --resource-id instance_ID ^
       --resource-type ManagedInstance ^
       --compliance-type Custom:user-defined_string ^
       --execution-summary ExecutionTime=user-defined_time_and/or_date_value ^
       --items Id=user-defined_ID,Title=user-defined_title,Severity=one_or_more_comma-separated_severities:CRITICAL, MAJOR, MINOR, INFORMATIONAL, or UNSPECIFIED,Status=COMPLIANT or NON_COMPLIANT
   ```

------

1. Repeat the previous step to assign additional custom compliance metadata to one or more nodes. You can also manually assign patch or association compliance metadata to managed nodes by using the following commands:

   Association compliance metadata

------
#### [ Linux & macOS ]

   ```
   aws ssm put-compliance-items \
       --resource-id instance_ID \
       --resource-type ManagedInstance \
       --compliance-type Association \
       --execution-summary ExecutionTime=user-defined_time_and/or_date_value \
       --items Id=user-defined_ID,Title=user-defined_title,Severity=one_or_more_comma-separated_severities:CRITICAL, MAJOR, MINOR, INFORMATIONAL, or UNSPECIFIED,Status=COMPLIANT or NON_COMPLIANT
   ```

------
#### [ Windows ]

   ```
   aws ssm put-compliance-items ^
       --resource-id instance_ID ^
       --resource-type ManagedInstance ^
       --compliance-type Association ^
       --execution-summary ExecutionTime=user-defined_time_and/or_date_value ^
       --items Id=user-defined_ID,Title=user-defined_title,Severity=one_or_more_comma-separated_severities:CRITICAL, MAJOR, MINOR, INFORMATIONAL, or UNSPECIFIED,Status=COMPLIANT or NON_COMPLIANT
   ```

------

   Patch compliance metadata

------
#### [ Linux & macOS ]

   ```
   aws ssm put-compliance-items \
       --resource-id instance_ID \
       --resource-type ManagedInstance \
       --compliance-type Patch \
       --execution-summary ExecutionTime=user-defined_time_and/or_date_value,ExecutionId=user-defined_ID,ExecutionType=Command  \
       --items Id=for_example, KB12345,Title=user-defined_title,Severity=one_or_more_comma-separated_severities:CRITICAL, MAJOR, MINOR, INFORMATIONAL, or UNSPECIFIED,Status=COMPLIANT or NON_COMPLIANT,Details="{PatchGroup=name_of_group,PatchSeverity=the_patch_severity, for example, CRITICAL}"
   ```

------
#### [ Windows ]

   ```
   aws ssm put-compliance-items ^
       --resource-id instance_ID ^
       --resource-type ManagedInstance ^
       --compliance-type Patch ^
       --execution-summary ExecutionTime=user-defined_time_and/or_date_value,ExecutionId=user-defined_ID,ExecutionType=Command ^
       --items Id=patch_ID_such_as_KB12345,Title=user-defined_title,Severity=CRITICAL|HIGH|MEDIUM|LOW|INFORMATIONAL|UNSPECIFIED,Status=COMPLIANT|NON_COMPLIANT,Details="{PatchGroup=name_of_group,PatchSeverity=patch_severity_such_as_CRITICAL}"
   ```

------
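The same patch compliance metadata can be assigned through the AWS SDKs. The following sketch builds the parameter dictionary for the `PutComplianceItems` operation (for example, Boto3's `put_compliance_items`) and validates the severity and status values up front. The instance ID, patch ID, and other values are hypothetical placeholders.

```python
from datetime import datetime, timezone

# Valid values from the PutComplianceItems API.
VALID_SEVERITIES = {"CRITICAL", "HIGH", "MEDIUM", "LOW", "INFORMATIONAL", "UNSPECIFIED"}
VALID_STATUSES = {"COMPLIANT", "NON_COMPLIANT"}

def build_patch_compliance_request(instance_id, patch_id, title, severity, status,
                                   patch_group, patch_severity):
    """Build the parameter dict for a Patch-type PutComplianceItems call."""
    if severity not in VALID_SEVERITIES:
        raise ValueError(f"invalid severity: {severity}")
    if status not in VALID_STATUSES:
        raise ValueError(f"invalid status: {status}")
    return {
        "ResourceId": instance_id,
        "ResourceType": "ManagedInstance",
        "ComplianceType": "Patch",
        "ExecutionSummary": {
            "ExecutionTime": datetime.now(timezone.utc),
            "ExecutionType": "Command",
        },
        "Items": [{
            "Id": patch_id,
            "Title": title,
            "Severity": severity,
            "Status": status,
            "Details": {"PatchGroup": patch_group, "PatchSeverity": patch_severity},
        }],
    }

# Hypothetical IDs; pass the result to ssm_client.put_compliance_items(**request).
request = build_patch_compliance_request(
    "i-02573cafcfEXAMPLE", "KB12345", "Example patch", "CRITICAL",
    "COMPLIANT", "TestGroup", "CRITICAL")
print(sorted(request["Items"][0]))
```

Validating locally before the API call surfaces typos in severity or status values without a round trip to the service.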

1. Run the following command to view a list of compliance items for a specific managed node. Use filters to drill down into specific compliance data.

------
#### [ Linux & macOS ]

   ```
   aws ssm list-compliance-items \
       --resource-ids instance_ID \
       --resource-types ManagedInstance \
       --filters one_or_more_filters
   ```

------
#### [ Windows ]

   ```
   aws ssm list-compliance-items ^
       --resource-ids instance_ID ^
       --resource-types ManagedInstance ^
       --filters one_or_more_filters
   ```

------

   The following examples show you how to use this command with filters.

------
#### [ Linux & macOS ]

   ```
   aws ssm list-compliance-items \
       --resource-ids i-02573cafcfEXAMPLE \
       --resource-types ManagedInstance \
       --filters Key=DocumentName,Values=AWS-RunPowerShellScript Key=Status,Values=NON_COMPLIANT,Type=NotEqual Key=Id,Values=cee20ae7-6388-488e-8be1-a88ccEXAMPLE Key=Severity,Values=UNSPECIFIED
   ```

------
#### [ Windows ]

   ```
   aws ssm list-compliance-items ^
       --resource-ids i-02573cafcfEXAMPLE ^
       --resource-types ManagedInstance ^
       --filters Key=DocumentName,Values=AWS-RunPowerShellScript Key=Status,Values=NON_COMPLIANT,Type=NotEqual Key=Id,Values=cee20ae7-6388-488e-8be1-a88ccEXAMPLE Key=Severity,Values=UNSPECIFIED
   ```

------

------
#### [ Linux & macOS ]

   ```
   aws ssm list-resource-compliance-summaries \
       --filters Key=OverallSeverity,Values=UNSPECIFIED
   ```

------
#### [ Windows ]

   ```
   aws ssm list-resource-compliance-summaries ^
       --filters Key=OverallSeverity,Values=UNSPECIFIED
   ```

------

------
#### [ Linux & macOS ]

   ```
   aws ssm list-resource-compliance-summaries \
       --filters Key=OverallSeverity,Values=UNSPECIFIED Key=ComplianceType,Values=Association Key=InstanceId,Values=i-02573cafcfEXAMPLE
   ```

------
#### [ Windows ]

   ```
   aws ssm list-resource-compliance-summaries ^
       --filters Key=OverallSeverity,Values=UNSPECIFIED Key=ComplianceType,Values=Association Key=InstanceId,Values=i-02573cafcfEXAMPLE
   ```

------

1. Run the following command to view a summary of compliance statuses. Use filters to drill down into specific compliance data.

   ```
   aws ssm list-resource-compliance-summaries --filters one_or_more_filters
   ```

   The following examples show you how to use this command with filters.

------
#### [ Linux & macOS ]

   ```
   aws ssm list-resource-compliance-summaries \
       --filters Key=ExecutionType,Values=Command
   ```

------
#### [ Windows ]

   ```
   aws ssm list-resource-compliance-summaries ^
       --filters Key=ExecutionType,Values=Command
   ```

------

------
#### [ Linux & macOS ]

   ```
   aws ssm list-resource-compliance-summaries \
       --filters Key=AWS:InstanceInformation.PlatformType,Values=Windows Key=OverallSeverity,Values=CRITICAL
   ```

------
#### [ Windows ]

   ```
   aws ssm list-resource-compliance-summaries ^
       --filters Key=AWS:InstanceInformation.PlatformType,Values=Windows Key=OverallSeverity,Values=CRITICAL
   ```

------

1. Run the following command to view a summary count of compliant and non-compliant resources for a compliance type. Use filters to drill down into specific compliance data.

   ```
   aws ssm list-compliance-summaries --filters one_or_more_filters
   ```

   The following examples show you how to use this command with filters.

------
#### [ Linux & macOS ]

   ```
   aws ssm list-compliance-summaries \
       --filters Key=AWS:InstanceInformation.PlatformType,Values=Windows Key=PatchGroup,Values=TestGroup
   ```

------
#### [ Windows ]

   ```
   aws ssm list-compliance-summaries ^
       --filters Key=AWS:InstanceInformation.PlatformType,Values=Windows Key=PatchGroup,Values=TestGroup
   ```

------

------
#### [ Linux & macOS ]

   ```
   aws ssm list-compliance-summaries \
       --filters Key=AWS:InstanceInformation.PlatformType,Values=Windows Key=ExecutionId,Values=4adf0526-6aed-4694-97a5-14522EXAMPLE
   ```

------
#### [ Windows ]

   ```
   aws ssm list-compliance-summaries ^
       --filters Key=AWS:InstanceInformation.PlatformType,Values=Windows Key=ExecutionId,Values=4adf0526-6aed-4694-97a5-14522EXAMPLE
   ```

------
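If you process the JSON output of these summary commands programmatically, the counts live under `CompliantSummary` and `NonCompliantSummary` for each compliance type. The following is a minimal Python sketch using a hand-built sample response; the counts are illustrative, not real data.

```python
# A trimmed sample shaped like ListComplianceSummaries output;
# the counts here are made up for illustration.
sample_response = {
    "ComplianceSummaryItems": [
        {"ComplianceType": "Patch",
         "CompliantSummary": {"CompliantCount": 42},
         "NonCompliantSummary": {"NonCompliantCount": 3}},
        {"ComplianceType": "Association",
         "CompliantSummary": {"CompliantCount": 17},
         "NonCompliantSummary": {"NonCompliantCount": 0}},
    ]
}

def totals_by_type(response):
    """Map each compliance type to (compliant, non-compliant) counts."""
    return {
        item["ComplianceType"]: (
            item.get("CompliantSummary", {}).get("CompliantCount", 0),
            item.get("NonCompliantSummary", {}).get("NonCompliantCount", 0),
        )
        for item in response.get("ComplianceSummaryItems", [])
    }

print(totals_by_type(sample_response))
# → {'Patch': (42, 3), 'Association': (17, 0)}
```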

# AWS Systems Manager Distributor
Distributor

Distributor, a tool in AWS Systems Manager, helps you package and publish software to AWS Systems Manager managed nodes. You can package and publish your own software, or use Distributor to find and publish AWS-provided agent software packages, such as **AmazonCloudWatchAgent**, or third-party packages such as **Trend Micro**. Publishing a package advertises specific versions of the package's document to managed nodes that you identify by node IDs, AWS account IDs, tags, or an AWS Region. To get started with Distributor, open the [Systems Manager console](https://console.aws.amazon.com//systems-manager/distributor). In the navigation pane, choose **Distributor**.

After you create a package in Distributor, you can install the package in one of the following ways:
+ One time by using [AWS Systems Manager Run Command](run-command.md)
+ On a schedule by using [AWS Systems Manager State Manager](systems-manager-state.md)

**Important**  
Packages distributed by third-party sellers are not managed by AWS and are published by the vendor of the package. We encourage you to conduct additional due diligence to ensure compliance with your internal security controls. Security is a shared responsibility between AWS and you. This is described as the shared responsibility model. To learn more, see the [shared responsibility model](https://aws.amazon.com/compliance/shared-responsibility-model/).

## How can Distributor benefit my organization?


Distributor offers these benefits:
+  **One package, many platforms** 

  When you create a package in Distributor, the system creates an AWS Systems Manager document (SSM document). You can attach .zip files to this document. When you run Distributor, the system processes the instructions in the SSM document and installs the software package in the .zip file on the specified targets. Distributor supports multiple operating systems, including Windows, Ubuntu Server, Debian Server, and Red Hat Enterprise Linux. For more information about supported platforms, see [Supported package platforms and architectures](#what-is-a-package-platforms).
+  **Control package access across groups of managed instances** 

  You can use Run Command or State Manager to control which of your managed nodes get a package and which version of that package. Run Command and State Manager are tools in AWS Systems Manager. Managed nodes can be grouped by instance or device IDs, AWS account numbers, tags, or AWS Regions. You can use State Manager associations to deliver different versions of a package to different groups of instances.
+  **Many AWS agent packages included and ready to use** 

  Distributor includes many AWS agent packages that are ready for you to deploy to managed nodes. Look for packages in the Distributor `Packages` list page that are published by `Amazon`. Examples include `AmazonCloudWatchAgent` and `AWSPVDriver`.
+  **Automate deployment** 

  To keep your environment current, use State Manager to schedule packages for automatic deployment on target managed nodes when those machines are first launched.

## Who should use Distributor?

+ Any AWS customer who wants to create new or deploy existing software packages, including AWS published packages, to multiple Systems Manager managed nodes at one time.
+ Software developers who create software packages.
+ Administrators who are responsible for keeping Systems Manager managed nodes current with the most up-to-date software packages.

## What are the features of Distributor?

+  **Deployment of packages to both Windows and Linux instances** 

  With Distributor, you can deploy software packages to Amazon Elastic Compute Cloud (Amazon EC2) instances and AWS IoT Greengrass core devices for Linux and Windows Server. For a list of supported instance operating system types, see [Supported package platforms and architectures](#what-is-a-package-platforms).
**Note**  
Distributor isn't supported on the macOS operating system.
+  **Deploy packages one time, or on an automated schedule** 

  You can choose to deploy packages one time, on a regular schedule, or whenever the default package version is changed to a different version. 
+  **Completely reinstall packages, or perform in-place updates** 

  To install a new package version, you can completely uninstall the current version and install a new one in its place, or only update the current version with new and updated components, according to an *update script* that you provide. Your package application is unavailable during a reinstallation, but can remain available during an in-place update. In-place updates are especially useful for security monitoring applications or other scenarios where you need to avoid application downtime.
+  **Console, CLI, PowerShell, and SDK access to Distributor capabilities** 

  You can work with Distributor by using the Systems Manager console, AWS Command Line Interface (AWS CLI), AWS Tools for PowerShell, or the AWS SDK of your choice.
+  **IAM access control** 

  By using AWS Identity and Access Management (IAM) policies, you can control which members of your organization can create, update, deploy, or delete packages or package versions. For example, you might want to give an administrator permissions to deploy packages, but not to change packages or create new package versions.
+  **Logging and auditing capability support** 

  You can audit and log Distributor user actions in your AWS account through integration with other AWS services. For more information, see [Auditing and logging Distributor activity](distributor-logging-auditing.md).

## What is a package in Distributor?


A *package* is a collection of installable software or assets that includes the following.
+ A .zip file of software per target operating system platform. Each .zip file must include the following.
  + An **install** and an **uninstall** script. Windows Server-based managed nodes require PowerShell scripts (scripts named `install.ps1` and `uninstall.ps1`). Linux-based managed nodes require shell scripts (scripts named `install.sh` and `uninstall.sh`). AWS Systems Manager SSM Agent reads and carries out the instructions in the **install** and **uninstall** scripts.
  + An executable file. SSM Agent must find this executable to install the package on target managed nodes.
+ A JSON-formatted manifest file that describes the package contents. The manifest isn't included in the .zip file, but it's stored in the same Amazon Simple Storage Service (Amazon S3) bucket as the .zip files that form the package. The manifest identifies the package version and maps the .zip files in the package to target managed node attributes, such as operating system version or architecture. For information about how to create the manifest, see [Step 2: Create the JSON package manifest](distributor-working-with-packages-create.md#packages-manifest).
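To make the manifest structure concrete, the following Python sketch assembles a minimal manifest for a single .zip file targeting 64-bit Amazon Linux. The file name and checksum are placeholders; in a real package, the `sha256` value must be the digest of the actual .zip attachment.

```python
import hashlib
import json

# A minimal Distributor manifest sketch: one .zip file targeting
# 64-bit Amazon Linux on any platform version. The digest below is
# computed from placeholder bytes, not a real archive.
zip_name = "ExampleTool.zip"
fake_digest = hashlib.sha256(b"placeholder-not-a-real-archive").hexdigest()

manifest = {
    "schemaVersion": "2.0",
    "version": "1.0.0",
    "packages": {
        "amazon": {            # platform code value from the platform table
            "_any": {          # platform version, or an exact release version
                "x86_64": {"file": zip_name},
            }
        }
    },
    "files": {
        zip_name: {"checksums": {"sha256": fake_digest}},
    },
}

print(json.dumps(manifest, indent=2))
```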

When you choose **Simple** package creation in the Distributor console, Distributor generates the installation and uninstallation scripts, file hashes, and the JSON package manifest for you, based on the software executable file name and target platforms and architectures.

### Supported package platforms and architectures


You can use Distributor to publish packages to the following Systems Manager managed node platforms. A version value must match the exact release version of the operating system Amazon Machine Image (AMI) that you're targeting. For more information about determining this version, see step 4 of [Step 2: Create the JSON package manifest](distributor-working-with-packages-create.md#packages-manifest).

**Note**  
Systems Manager doesn't support all of the following operating systems for AWS IoT Greengrass core devices. For more information, see [Setting up AWS IoT Greengrass core devices](https://docs.aws.amazon.com/greengrass/v2/developerguide/setting-up.html) in the *AWS IoT Greengrass Version 2 Developer Guide*.


| Platform | Code value in manifest file | Supported architectures | 
| --- | --- | --- | 
|  AlmaLinux  |   `almalinux`   |  x86_64 ARM64  | 
|  Amazon Linux 2 and Amazon Linux 2023  |   `amazon`   |  x86_64 or x86 ARM64 (Amazon Linux 2 and AL2023, A1 instance types)  | 
|  Debian Server  |   `debian`   |  x86_64 or x86  | 
|  openSUSE  |   `opensuse`   |  x86_64  | 
|  openSUSE Leap  |   `opensuseleap`   |  x86_64  | 
|  Oracle Linux  |   `oracle`   |  x86_64  | 
|  Red Hat Enterprise Linux (RHEL)  |   `redhat`   |  x86_64 ARM64 (RHEL 7.6 and later, A1 instance types)  | 
|  Rocky Linux  |   `rocky`   |  x86_64 ARM64  | 
|  SUSE Linux Enterprise Server (SLES)  |   `suse`   |  x86_64  | 
|  Ubuntu Server  |   `ubuntu`   |  x86_64 or x86 ARM64 (Ubuntu Server 16 and later, A1 instance types)  | 
|  Windows Server  |   `windows`   |  x86_64  | 

**Topics**
+ [

## How can Distributor benefit my organization?
](#distributor-benefits)
+ [

## Who should use Distributor?
](#distributor-who)
+ [

## What are the features of Distributor?
](#distributor-features)
+ [

## What is a package in Distributor?
](#what-is-a-package)
+ [

# Setting up Distributor
](distributor-getting-started.md)
+ [

# Working with Distributor packages
](distributor-working-with.md)
+ [

# Auditing and logging Distributor activity
](distributor-logging-auditing.md)
+ [

# Troubleshooting AWS Systems Manager Distributor
](distributor-troubleshooting.md)

# Setting up Distributor


Before you use Distributor, a tool in AWS Systems Manager, to create, manage, and deploy software packages, follow these steps.

## Complete Distributor prerequisites


Before you use Distributor, a tool in AWS Systems Manager, be sure your environment meets the following requirements.


**Distributor prerequisites**  

| Requirement | Description | 
| --- | --- | 
|  SSM Agent  |  AWS Systems Manager SSM Agent version 2.3.274.0 or later must be installed on the managed nodes on which you want to deploy or from which you want to remove packages. To install or update SSM Agent, see [Working with SSM Agent](ssm-agent.md).  | 
|  AWS CLI  |  (Optional) To use the AWS Command Line Interface (AWS CLI) instead of the Systems Manager console to create and manage packages, install the newest release of the AWS CLI on your local computer. For more information about how to install or upgrade the CLI, see [Installing the AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/installing.html) in the *AWS Command Line Interface User Guide*.  | 
|  AWS Tools for PowerShell  |  (Optional) To use the Tools for PowerShell instead of the Systems Manager console to create and manage packages, install the newest release of Tools for PowerShell on your local computer. For more information about how to install or upgrade the Tools for PowerShell, see [Setting up the AWS Tools for Windows PowerShell or AWS Tools for PowerShell Core](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html) in the *AWS Tools for PowerShell User Guide*.  | 

**Note**  
Systems Manager doesn't support distributing packages to Oracle Linux managed nodes by using Distributor.

## Verify or create an IAM instance profile with Distributor permissions


By default, AWS Systems Manager doesn't have permission to perform actions on your instances. You must grant access by using an AWS Identity and Access Management (IAM) instance profile. An instance profile is a container that passes IAM role information to an Amazon Elastic Compute Cloud (Amazon EC2) instance at launch. This requirement applies to permissions for all Systems Manager tools, not just Distributor.

**Note**  
When you configure your edge devices to run AWS IoT Greengrass Core software and SSM Agent, you specify an IAM service role that enables Systems Manager to perform actions on them. You don't need to configure managed edge devices with an instance profile. 

If you already use other Systems Manager tools, such as Run Command and State Manager, an instance profile with the required permissions for Distributor is already attached to your instances. The simplest way to ensure that you have permissions to perform Distributor tasks is to attach the **AmazonSSMManagedInstanceCore** policy to your instance profile. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md).

## Control user access to packages


Using AWS Identity and Access Management (IAM) policies, you can control who can create, deploy, and manage packages. You also control which Run Command and State Manager API operations they can perform on managed nodes. Like Distributor, Run Command and State Manager are tools in AWS Systems Manager.

**ARN Format**  
User-defined packages are associated with document Amazon Resource Names (ARNs) and have the following format.

```
arn:aws:ssm:region:account-id:document/document-name
```

The following is an example.

```
arn:aws:ssm:us-west-1:123456789012:document/ExampleDocumentName
```
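When you need the account ID or document name from such an ARN (for example, when writing IAM policies or working with packages shared from another account), the colon-delimited fields split cleanly. The following helper is an illustrative sketch, not part of any AWS SDK.

```python
def parse_ssm_document_arn(arn):
    """Split an SSM document ARN into its labeled fields."""
    prefix, sep, name = arn.partition(":document/")
    parts = prefix.split(":")
    if not sep or len(parts) != 5 or parts[0] != "arn" or parts[2] != "ssm":
        raise ValueError(f"not an SSM document ARN: {arn}")
    return {"partition": parts[1], "region": parts[3],
            "account_id": parts[4], "document_name": name}

print(parse_ssm_document_arn(
    "arn:aws:ssm:us-west-1:123456789012:document/ExampleDocumentName"))
```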

You can use a pair of AWS supplied default IAM policies, one for end users and one for administrators, to grant permissions for Distributor activities. Or you can create custom IAM policies appropriate for your permissions requirements.

For more information about using variables in IAM policies, see [IAM Policy Elements: Variables](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_variables.html). 

For information about how to create policies and attach them to users or groups, see [Creating IAM Policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) and [Adding and Removing IAM Policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html) in the *IAM User Guide*.

## Create or choose an Amazon S3 bucket to store Distributor packages


When you create a package by using the **Simple** workflow in the AWS Systems Manager console, you choose an existing Amazon Simple Storage Service (Amazon S3) bucket to which Distributor uploads your software. Distributor is a tool in AWS Systems Manager. In the **Advanced** workflow, you must upload .zip files of your software or assets to an Amazon S3 bucket before you begin. Whether you create a package by using the **Simple** or **Advanced** workflow in the console, or by using the API, you must have an Amazon S3 bucket before you start creating your package. As part of the package creation process, Distributor copies your installable software and assets from this bucket to an internal Systems Manager store. Because the assets are copied to an internal store, you can delete or repurpose your Amazon S3 bucket when package creation is finished.

For more information about how to create a bucket, see [Create a Bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/CreatingABucket.html) in the *Amazon Simple Storage Service Getting Started Guide*. For more information about how to run an AWS CLI command to create a bucket, see [https://docs.aws.amazon.com/cli/latest/reference/s3/mb.html](https://docs.aws.amazon.com/cli/latest/reference/s3/mb.html) in the *AWS CLI Command Reference*.

# Working with Distributor packages


You can use the AWS Systems Manager console, AWS command line tools (AWS CLI and AWS Tools for PowerShell), and AWS SDKs to add, manage, or deploy packages in Distributor. Distributor is a tool in AWS Systems Manager. Before you add a package to Distributor:
+ Create and zip installable assets.
+ (Optional) Create a JSON manifest file for the package. This isn't required to use the **Simple** package creation process in the Distributor console. Simple package creation generates a JSON manifest file for you.

  You can use the AWS Systems Manager console or a text or JSON editor to create the manifest file.
+ Have an Amazon Simple Storage Service (Amazon S3) bucket ready to store your installable assets or software. If you're using the **Advanced** package creation process, upload your assets to the Amazon S3 bucket before you begin.
**Note**  
You can delete or repurpose this bucket after you finish creating your package because Distributor moves the package contents to an internal Systems Manager bucket as part of the package creation process.

AWS published packages are already packaged and ready for deployment. To deploy an AWS-published package to managed nodes, see [Install or update Distributor packages](distributor-working-with-packages-deploy.md).

You can share Distributor packages between AWS accounts. When you use a package shared from another account in AWS CLI commands, specify the package Amazon Resource Name (ARN) instead of the package name.

**Topics**
+ [

# View packages in Distributor
](distributor-view-packages.md)
+ [

# Create a package in Distributor
](distributor-working-with-packages-create.md)
+ [

# Edit Distributor package permissions in the console
](distributor-working-with-packages-ep.md)
+ [

# Edit Distributor package tags in the console
](distributor-working-with-packages-tags.md)
+ [

# Add a version to a Distributor package
](distributor-working-with-packages-version.md)
+ [

# Install or update Distributor packages
](distributor-working-with-packages-deploy.md)
+ [

# Uninstall a Distributor package
](distributor-working-with-packages-uninstall.md)
+ [

# Delete a Distributor package
](distributor-working-with-packages-dpkg.md)

# View packages in Distributor
View packages

To view packages that are available for installation, you can use the AWS Systems Manager console or your preferred AWS command line tool. Distributor is a tool in AWS Systems Manager. To access Distributor, open the AWS Systems Manager console and choose **Distributor** in the navigation pane. The console displays all packages available to you.

The following section describes how you can view Distributor packages using your preferred command line tool.

## View packages using the command line


This section describes how to view Distributor packages using your preferred command line tool.

------
#### [ Linux & macOS ]

**To view packages using the AWS CLI on Linux & macOS**
+ To view all packages, excluding shared packages, run the following command.

  ```
  aws ssm list-documents \
      --filters Key=DocumentType,Values=Package
  ```
+ To view all packages owned by Amazon, run the following command.

  ```
  aws ssm list-documents \
      --filters Key=DocumentType,Values=Package Key=Owner,Values=Amazon
  ```
+ To view all packages owned by third parties, run the following command.

  ```
  aws ssm list-documents \
      --filters Key=DocumentType,Values=Package Key=Owner,Values=ThirdParty
  ```

------
#### [ Windows ]

**To view packages using the AWS CLI on Windows**
+ To view all packages, excluding shared packages, run the following command.

  ```
  aws ssm list-documents ^
      --filters Key=DocumentType,Values=Package
  ```
+ To view all packages owned by Amazon, run the following command.

  ```
  aws ssm list-documents ^
      --filters Key=DocumentType,Values=Package Key=Owner,Values=Amazon
  ```
+ To view all packages owned by third parties, run the following command.

  ```
  aws ssm list-documents ^
      --filters Key=DocumentType,Values=Package Key=Owner,Values=ThirdParty
  ```

------
#### [ PowerShell ]

**To view packages using the Tools for PowerShell**
+ To view all packages, excluding shared packages, run the following command.

  ```
  $filter = New-Object Amazon.SimpleSystemsManagement.Model.DocumentKeyValuesFilter
  $filter.Key = "DocumentType"
  $filter.Values = "Package"
  
  Get-SSMDocumentList `
      -Filters @($filter)
  ```
+ To view all packages owned by Amazon, run the following command.

  ```
  $typeFilter = New-Object Amazon.SimpleSystemsManagement.Model.DocumentKeyValuesFilter
  $typeFilter.Key = "DocumentType"
  $typeFilter.Values = "Package"
  
  $ownerFilter = New-Object Amazon.SimpleSystemsManagement.Model.DocumentKeyValuesFilter
  $ownerFilter.Key = "Owner"
  $ownerFilter.Values = "Amazon"
  
  Get-SSMDocumentList `
      -Filters @($typeFilter,$ownerFilter)
  ```
+ To view all packages owned by third parties, run the following command.

  ```
  $typeFilter = New-Object Amazon.SimpleSystemsManagement.Model.DocumentKeyValuesFilter
  $typeFilter.Key = "DocumentType"
  $typeFilter.Values = "Package"
  
  $ownerFilter = New-Object Amazon.SimpleSystemsManagement.Model.DocumentKeyValuesFilter
  $ownerFilter.Key = "Owner"
  $ownerFilter.Values = "ThirdParty"
  
  Get-SSMDocumentList `
      -Filters @($typeFilter,$ownerFilter)
  ```

------
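Each filter in the commands above pairs a key with one or more accepted values, and a document is returned only when it matches every filter. The following Python sketch illustrates those semantics locally with made-up document records; real `list-documents` responses return similar metadata under `DocumentIdentifiers`.

```python
# Made-up document records mimicking list-documents metadata.
documents = [
    {"Name": "AmazonCloudWatchAgent", "DocumentType": "Package", "Owner": "Amazon"},
    {"Name": "ExampleCorp-Tool", "DocumentType": "Package", "Owner": "ThirdParty"},
    {"Name": "AWS-RunShellScript", "DocumentType": "Command", "Owner": "Amazon"},
]

def apply_filters(docs, filters):
    """Keep documents that satisfy every Key=Values filter."""
    return [d for d in docs
            if all(d.get(f["Key"]) in f["Values"] for f in filters)]

# Equivalent of: --filters Key=DocumentType,Values=Package Key=Owner,Values=Amazon
amazon_packages = apply_filters(documents, [
    {"Key": "DocumentType", "Values": ["Package"]},
    {"Key": "Owner", "Values": ["Amazon"]},
])
print([d["Name"] for d in amazon_packages])
# → ['AmazonCloudWatchAgent']
```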

# Create a package in Distributor
Create a package

To create a package, prepare your installable software or assets, one file per operating system platform. At least one file is required to create a package.

Different platforms might sometimes use the same file, but all files that you attach to your package must be listed in the `Files` section of the manifest. If you're creating a package by using the simple workflow in the console, the manifest is generated for you. The maximum number of files that you can attach to a single document is 20. The maximum size of each file is 1 GB. For more information about supported platforms, see [Supported package platforms and architectures](distributor.md#what-is-a-package-platforms).

When you create a package, the system creates a new [SSM document](documents.md). This document allows you to deploy the package to managed nodes.

For demonstration purposes only, an example package, [ExamplePackage.zip](https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/ExamplePackage.zip), is available for you to download from our website. The example package includes a completed JSON manifest and three .zip files containing installers for PowerShell v7.0.0. The installation and uninstallation scripts don't contain valid commands. Although you must zip each software installable and scripts into a .zip file to create a package in the **Advanced** workflow, you don't zip installable assets in the **Simple** workflow.
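In the **Advanced** workflow, each attachment is a .zip file containing the scripts and the installable asset. The following Python sketch assembles such an archive in memory; the file names and script contents are placeholders, not valid install commands.

```python
import io
import zipfile

# Placeholder contents for one Advanced-workflow attachment: the
# install/uninstall scripts plus the installable file. None of these
# are valid installers; they only illustrate the archive layout.
files = {
    "install.sh": "#!/bin/bash\necho placeholder install\n",
    "uninstall.sh": "#!/bin/bash\necho placeholder uninstall\n",
    "ExampleTool.rpm": "not-a-real-rpm",
}

buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
    for name, content in files.items():
        archive.writestr(name, content)

with zipfile.ZipFile(buffer) as archive:
    print(sorted(archive.namelist()))
# → ['ExampleTool.rpm', 'install.sh', 'uninstall.sh']
```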

**Topics**
+ [

## Create a package using the Simple workflow
](#distributor-working-with-packages-create-simple)
+ [

## Create a package using the Advanced workflow
](#distributor-working-with-packages-create-adv)

## Create a package using the Simple workflow


This section describes how to create a package in Distributor by choosing the **Simple** package creation workflow in the Distributor console. Distributor is a tool in AWS Systems Manager. To create a package, prepare your installable assets, one file per operating system platform. At least one file is required to create a package. The **Simple** package creation process generates installation and uninstallation scripts, file hashes, and a JSON-formatted manifest for you. The **Simple** workflow handles the process of uploading and zipping your installable files, and creating a new package and associated [SSM document](documents.md). For more information about supported platforms, see [Supported package platforms and architectures](distributor.md#what-is-a-package-platforms).

When you use the Simple method to create a package, Distributor creates `install` and `uninstall` scripts for you. However, when you create a package for an in-place update, you must provide your own `update` script content on the **Update script** tab. When you add input commands for an `update` script, Distributor includes this script in the .zip package it creates for you, along with the `install` and `uninstall` scripts.

**Note**  
Use the `In-place` update option to add new or updated files to an existing package installation without taking the associated application offline.

**To create a package using the Simple workflow**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Distributor**.

1. On the Distributor home page, choose **Create package**, and then choose **Simple**.

1. On the **Create package** page, enter a name for your package. Package names can contain letters, numbers, periods, dashes, and underscores. The name should be generic enough to apply to all versions of the package attachments, but specific enough to identify the purpose of the package.

1. (Optional) For **Version name**, enter a version name. Version names can be a maximum of 512 characters, and can't contain special characters.

1. For **Location**, choose a bucket by using the bucket name and prefix or by using the bucket URL.

1. For **Upload software**, choose **Add software**, and then navigate to installable software files with `.rpm`, `.msi`, or `.deb` extensions. If the file name contains spaces, the upload fails. You can upload more than one software file in a single action.

1. For **Target platform**, verify that the target operating system platform shown for each installable file is correct. If the operating system shown isn't correct, choose the correct operating system from the dropdown list.

   For the **Simple** package creation workflow, because you upload each installable file only once, extra steps are required to instruct Distributor to target a single file at multiple operating systems. For example, if you upload an installable software file named `Logtool_v1.1.1.rpm`, you must change some defaults in the **Simple** workflow to target the same software on supported versions of both Amazon Linux and Ubuntu Server operating systems. When targeting multiple platforms, do one of the following.
   + Use the **Advanced** workflow instead, zip each installable file into a .zip file before you begin, and manually author the manifest so that one installable file can be targeted at multiple operating system platforms or versions. For more information, see [Create a package using the Advanced workflow](#distributor-working-with-packages-create-adv).
   + Manually edit the manifest file in the **Simple** workflow so that your .zip file is targeted at multiple operating system platforms or versions. For more information about how to do this, see the end of step 4 in [Step 2: Create the JSON package manifest](#packages-manifest).

1. For **Platform version**, verify that the operating system platform version shown is either **_any_**, a major release version followed by a wildcard (such as `7.*`), or the exact operating system release version to which you want your software to apply. For more information about specifying an operating system platform version, see step 4 in [Step 2: Create the JSON package manifest](#packages-manifest).

1. For **Architecture**, choose the correct processor architecture for each installable file from the dropdown list. For more information about supported processor architectures, see [Supported package platforms and architectures](distributor.md#what-is-a-package-platforms).

1. (Optional) Expand **Scripts**, and review the scripts that Distributor generates for your installable software.

1. (Optional) To provide an update script for use with in-place updates, expand **Scripts**, choose the **Update script** tab, and enter your update script commands.

   Systems Manager doesn't generate update scripts on your behalf.

1. To add more installable software files, choose **Add software**. Otherwise, go to the next step.

1. (Optional) Expand **Manifest**, and review the JSON package manifest that Distributor generates for your installable software. If you changed any information about your software since you began this procedure, such as platform version or target platform, choose **Generate manifest** to show the updated package manifest.

   You can edit the manifest manually if you want to target a software installable at more than one operating system, as described in step 8. For more information about editing the manifest, see [Step 2: Create the JSON package manifest](#packages-manifest).

1. Choose **Create package**.

Wait for Distributor to finish uploading your software and creating your package. Distributor shows upload status for each installable file. Depending on the number and size of packages you're adding, this can take a few minutes. Distributor automatically redirects you to the **Package details** page for the new package, but you can choose to open this page yourself after the software is uploaded. The **Package details** page doesn't show all information about your package until Distributor finishes the package creation process. To stop the upload and package creation process, choose **Cancel**.

If Distributor can't upload any of the software installable files, it displays an **Upload failed** message. To retry the upload, choose **Retry upload**. For more information about how to troubleshoot package creation failures, see [Troubleshooting AWS Systems Manager Distributor](distributor-troubleshooting.md).

## Create a package using the Advanced workflow


In this section, learn how advanced users can create a package in Distributor by first uploading to an Amazon S3 bucket their installable assets, zipped together with installation and uninstallation scripts, along with a JSON manifest file.

To create a package, prepare your .zip files of installable assets, one .zip file per operating system platform. At least one .zip file is required to create a package. Next, create a JSON manifest. The manifest includes pointers to your package code files. When you have your required code files added to a folder or directory, and the manifest is populated with correct values, upload your package to an S3 bucket.
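To make the target layout concrete, the following sketch creates the folder structure that you upload in Step 3. The file names are hypothetical (borrowed from the completed-manifest example later in this topic), and the files are empty placeholders for illustration only:

```shell
# Sketch of the folder you upload in Step 3: the JSON manifest plus one .zip
# per platform, side by side (placeholder files for illustration only).
mkdir -p ExamplePackage
printf '{}\n' > ExamplePackage/manifest.json    # real contents come from Step 2
: > ExamplePackage/NewPackage_LINUX.zip          # placeholder archives
: > ExamplePackage/NewPackage_WINDOWS.zip
ls ExamplePackage
```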

An example package, [ExamplePackage.zip](https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/ExamplePackage.zip), is available for you to download from our website. The example package includes a completed JSON manifest and three .zip files.

**Topics**
+ [Step 1: Create the ZIP files](#packages-zip)
+ [Step 2: Create the JSON package manifest](#packages-manifest)
+ [Step 3: Upload the package and manifest to an Amazon S3 bucket](#packages-upload-s3)
+ [Step 4: Add a package to Distributor](#distributor-working-with-packages-add)

### Step 1: Create the ZIP files


The foundation of your package is at least one .zip file of software or installable assets. A package includes one .zip file per operating system that you want to support, unless one .zip file can be installed on multiple operating systems. For example, supported versions of Red Hat Enterprise Linux and Amazon Linux instances can typically install the same .rpm packages, so you need to attach only one .zip file to your package to support both operating systems.

**Required files**  
The following items are required in each .zip file:
+ An **install** and an **uninstall** script. Windows Server-based managed nodes require PowerShell scripts (scripts named `install.ps1` and `uninstall.ps1`). Linux-based managed nodes require shell scripts (scripts named `install.sh` and `uninstall.sh`). SSM Agent runs the instructions in the **install** and **uninstall** scripts.

  For example, your installation scripts might run an installer (such as .rpm or .msi), they might copy files, or they might set configurations.
+ An executable file, installer packages (.rpm, .deb, .msi, etc.), other scripts, or configuration files.

**Optional files**  
The following item is optional in each .zip file:
+ An **update** script. Providing an update script makes it possible for you to use the `In-place update` option to install a package. When you want to add new or updated files to an existing package installation, the `In-place update` option doesn't take the package application offline while the update is performed. Windows Server-based managed nodes require a PowerShell script (script named `update.ps1`). Linux-based managed nodes require a shell script (script named `update.sh`). SSM Agent runs the instructions in the **update** script.

For more information about installing or updating packages, see [Install or update Distributor packages](distributor-working-with-packages-deploy.md).

For examples of .zip files, including sample **install** and **uninstall** scripts, download the example package, [ExamplePackage.zip](https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/ExamplePackage.zip).
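As a toy sketch of what SSM Agent runs, the following `install.sh` installs a hypothetical tool named `logtool`. A real script would typically invoke a package installer (`rpm`, `dpkg`, and so on) instead; the demo payload here is generated only so the sketch is self-contained and runnable:

```shell
#!/bin/bash
# Toy install.sh sketch. SSM Agent runs this from the directory where the
# package .zip was extracted; a real script might run rpm/dpkg or copy files.
set -euo pipefail

install_logtool() {
  local src_dir="$1" install_dir="$2"
  mkdir -p "$install_dir"
  cp -f "$src_dir/logtool" "$install_dir/logtool"
  chmod +x "$install_dir/logtool"
}

# Self-contained demo: fake the extracted payload, then "install" it.
work="$(mktemp -d)"
printf '#!/bin/sh\necho logtool 1.1.1\n' > "$work/logtool"
install_logtool "$work" "$work/opt/examplecorp"
"$work/opt/examplecorp/logtool"    # prints: logtool 1.1.1
```

A matching `uninstall.sh` would remove the install directory and any configuration the install script created.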

### Step 2: Create the JSON package manifest


After you prepare and zip your installable files, create a JSON manifest. The following is a template. The parts of the manifest template are described in the procedure in this section. You can use a JSON editor to create this manifest in a separate file. Alternatively, you can author the manifest in the AWS Systems Manager console when you create a package.

```
{
  "schemaVersion": "2.0",
  "version": "your-version",
  "publisher": "optional-publisher-name",
  "packages": {
    "platform": {
      "platform-version": {
        "architecture": {
          "file": ".zip-file-name-1.zip"
        }
      }
    },
    "another-platform": {
      "platform-version": {
        "architecture": {
          "file": ".zip-file-name-2.zip"
        }
      }
    },
    "yet-another-platform": {
      "platform-version": {
        "architecture": {
          "file": ".zip-file-name-3.zip"
        }
      }
    }
  },
  "files": {
    ".zip-file-name-1.zip": {
      "checksums": {
        "sha256": "checksum"
      }
    },
    ".zip-file-name-2.zip": {
      "checksums": {
        "sha256": "checksum"
      }
    },
    ".zip-file-name-3.zip": {
      "checksums": {
        "sha256": "checksum"
      }
    }
  }
}
```

**To create a JSON package manifest**

1. Add the schema version to your manifest. In this release, the schema version is always `2.0`.

   ```
   { "schemaVersion": "2.0",
   ```

1. Add a user-defined package version to your manifest. This is also the value of **Version name** that you specify when you add your package to Distributor. It becomes part of the AWS Systems Manager document that Distributor creates when you add your package. You also provide this value as an input in the `AWS-ConfigureAWSPackage` document to install a version of the package other than the latest. A `version` value can contain letters, numbers, underscores, hyphens, and periods, and be a maximum of 128 characters in length. We recommend that you use a human-readable package version to make it easier for you and other administrators to specify exact package versions when you deploy. The following is an example.

   ```
   "version": "1.0.1",
   ```

1. (Optional) Add a publisher name. The following is an example.

   ```
   "publisher": "MyOrganization",
   ```

1. Add packages. The `"packages"` section describes the platforms, release versions, and architectures supported by the .zip files in your package. For more information, see [Supported package platforms and architectures](distributor.md#what-is-a-package-platforms).

   The *platform-version* can be the wildcard value, `_any`. Use it to indicate that a .zip file supports any release of the platform. You can also specify a major release version followed by a wildcard so that all minor versions are supported, for example `7.*`. If you choose to specify a *platform-version* value for a specific operating system version, be sure that it matches the exact release version of the operating system AMI that you're targeting. The following are suggested resources for getting the correct release version value for your operating system.
   + On a Windows Server-based managed node, the release version is available as Windows Management Instrumentation (WMI) data. You can run the following command from a command prompt to get version information, then parse the results for `Version`.

     ```
     wmic OS get /format:list
     ```
   + On a Linux-based managed node, get the version by running the following command to display the contents of the operating system release file. Look for the value of `VERSION_ID`.

     ```
     cat /etc/os-release
     ```

     If that doesn't return the results that you need, run the following command to get LSB release information from the `/etc/lsb-release` file, and look for the value of `DISTRIB_RELEASE`.

     ```
     lsb_release -a
     ```

     If these methods fail, you can usually find the release based on the distribution. For example, on Debian Server, you can scan the `/etc/debian_version` file, or on Red Hat Enterprise Linux, the `/etc/redhat-release` file. On systems that use `systemd`, the following command also reports the operating system name and version.

     ```
     hostnamectl
     ```
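Each of these commands reduces to extracting a single version string. On distributions that provide `/etc/os-release` (most modern ones do, though this is an assumption about your node), a one-liner like the following pulls out just the value to compare against *platform-version*:

```shell
# Extract the bare VERSION_ID value (quotes stripped) from /etc/os-release.
grep '^VERSION_ID=' /etc/os-release | cut -d= -f2 | tr -d '"'
```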

   ```
   "packages": {
       "platform": {
         "platform-version": {
           "architecture": {
             "file": ".zip-file-name-1.zip"
           }
         }
       },
       "another-platform": {
         "platform-version": {
           "architecture": {
             "file": ".zip-file-name-2.zip"
           }
         }
       },
       "yet-another-platform": {
         "platform-version": {
           "architecture": {
             "file": ".zip-file-name-3.zip"
           }
         }
       }
     }
   ```

   The following is an example. In this example, the operating system platform is `amazon`, the supported release version is `2016.09`, the architecture is `x86_64`, and the .zip file that supports this platform is `test.zip`.

   ```
   {
       "amazon": {
           "2016.09": {
               "x86_64": {
                   "file": "test.zip"
               }
           }
       }
   },
   ```

   You can add the `_any` wildcard value to indicate that the package supports all versions of the parent element. For example, to indicate that the package is supported on any release version of Amazon Linux, your package statement should be similar to the following. You can use the `_any` wildcard at the version or architecture levels to support all versions of a platform, or all architectures in a version, or all versions and all architectures of a platform.

   ```
   {
       "amazon": {
           "_any": {
               "x86_64": {
                   "file": "test.zip"
               }
           }
       }
   },
   ```

   The following example adds `_any` to show that the first package, `data1.zip`, is supported for all architectures of Amazon Linux 2016.09. The second package, `data2.zip`, is supported for all releases of Amazon Linux, but only for managed nodes with `x86_64` architecture. Both the `2023.8` and `_any` versions are entries under `amazon`. There is one platform (Amazon Linux), but different supported versions, architectures, and associated .zip files.

   ```
   {
       "amazon": {
           "2023.8": {
               "_any": {
                   "file": "data1.zip"
               }
           },
           "_any": {
               "x86_64": {
                   "file": "data2.zip"
               }
           }
       }
   }
   ```

   You can refer to a .zip file more than once in the `"packages"` section of the manifest, if the .zip file supports more than one platform. For example, if you have a .zip file that supports both Red Hat Enterprise Linux 8.x versions and Amazon Linux, you have two entries in the `"packages"` section that point to the same .zip file, as shown in the following example.

   ```
   {
       "amazon": {
           "2023.8.20250715": {
               "x86_64": {
                   "file": "test.zip"
               }
           }
       },
       "redhat": {
           "8.*": {
               "x86_64": {
                   "file": "test.zip"
               }
           }
       }
   },
   ```

1. Add the list of .zip files that you referenced in the `"packages"` section in step 4. Each file entry requires the file name and a `sha256` checksum value. Checksum values in the manifest must match the SHA-256 hash of the zipped assets; a mismatch causes the package installation to fail.

   To get the exact checksum from your installables, you can run the following commands. On Linux, run `shasum -a 256 file-name.zip` or `openssl dgst -sha256 file-name.zip`. On Windows, run the `Get-FileHash -Path path-to-.zip-file` cmdlet in [PowerShell](https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/get-filehash?view=powershell-6).

   The `"files"` section of the manifest includes one reference to each of the .zip files in your package.

   ```
   "files": {
           "test-agent-x86.deb.zip": {
               "checksums": {
                   "sha256": "EXAMPLE2706223c7616ca9fb28863a233b38e5a23a8c326bb4ae241dcEXAMPLE"
               }
           },
           "test-agent-x86_64.deb.zip": {
               "checksums": {
                   "sha256": "EXAMPLE572a745844618c491045f25ee6aae8a66307ea9bff0e9d1052EXAMPLE"
               }
           },
           "test-agent-x86_64.nano.zip": {
               "checksums": {
                   "sha256": "EXAMPLE63ccb86e830b63dfef46995af6b32b3c52ce72241b5e80c995EXAMPLE"
               }
           },
           "test-agent-rhel8-x86.nano.zip": {
               "checksums": {
                   "sha256": "EXAMPLE13df60aa3219bf117638167e5bae0a55467e947a363fff0a51EXAMPLE"
               }
           },
           "test-agent-x86.msi.zip": {
               "checksums": {
                   "sha256": "EXAMPLE12a4abb10315aa6b8a7384cc9b5ca8ad8e9ced8ef1bf0e5478EXAMPLE"
               }
           },
           "test-agent-x86_64.msi.zip": {
               "checksums": {
                   "sha256": "EXAMPLE63ccb86e830b63dfef46995af6b32b3c52ce72241b5e80c995EXAMPLE"
               }
           },
           "test-agent-rhel8-x86.rpm.zip": {
               "checksums": {
                   "sha256": "EXAMPLE13df60aa3219bf117638167e5bae0a55467e947a363fff0a51EXAMPLE"
               }
           }
       }
   ```

1. After you add your package information, save and close the manifest file.
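The checksum commands shown above can be looped to emit a ready-to-paste `"files"` section for every archive in a directory. This sketch uses `openssl`, mentioned in step 5; the two .zip files here are throwaway demo files, so point the loop at your real archives instead (and drop the trailing comma on the last entry before pasting):

```shell
# Emit a "files"-style entry for every .zip in the current directory.
# Demo archives are created here only so the loop has input.
mkdir -p pkgdemo && cd pkgdemo
printf 'payload-a' > a.zip
printf 'payload-b' > b.zip
for f in *.zip; do
  sum="$(openssl dgst -sha256 -r "$f" | awk '{print $1}')"
  printf '"%s": { "checksums": { "sha256": "%s" } },\n' "$f" "$sum"
done
```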

The following is an example of a completed manifest. In this example, you have a .zip file, `NewPackage_LINUX.zip`, that supports more than one platform, but is referenced in the `"files"` section only once.

```
{
    "schemaVersion": "2.0",
    "version": "1.7.1",
    "publisher": "Amazon Web Services",
    "packages": {
        "windows": {
            "_any": {
                "x86_64": {
                    "file": "NewPackage_WINDOWS.zip"
                }
            }
        },
        "amazon": {
            "_any": {
                "x86_64": {
                    "file": "NewPackage_LINUX.zip"
                }
            }
        },
        "ubuntu": {
            "_any": {
                "x86_64": {
                    "file": "NewPackage_LINUX.zip"
                }
            }
        }
    },
    "files": {
        "NewPackage_WINDOWS.zip": {
            "checksums": {
                "sha256": "EXAMPLEc2c706013cf8c68163459678f7f6daa9489cd3f91d52799331EXAMPLE"
            }
        },
        "NewPackage_LINUX.zip": {
            "checksums": {
                "sha256": "EXAMPLE2b8b9ed71e86f39f5946e837df0d38aacdd38955b4b18ffa6fEXAMPLE"
            }
        }
    }
}
```
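One pitfall the completed manifest illustrates: every archive named under `"packages"` must have a matching key under `"files"`. A rough grep-based check (not a real JSON parser; run here against a trimmed copy of the manifest above) can catch a missing entry before you upload:

```shell
# Write a trimmed copy of the manifest above, then verify that each
# "file" value in "packages" also appears as a key in "files".
cat > manifest.json <<'EOF'
{
  "packages": {
    "windows": { "_any": { "x86_64": { "file": "NewPackage_WINDOWS.zip" } } },
    "amazon":  { "_any": { "x86_64": { "file": "NewPackage_LINUX.zip" } } },
    "ubuntu":  { "_any": { "x86_64": { "file": "NewPackage_LINUX.zip" } } }
  },
  "files": {
    "NewPackage_WINDOWS.zip": { "checksums": { "sha256": "EXAMPLE" } },
    "NewPackage_LINUX.zip":   { "checksums": { "sha256": "EXAMPLE" } }
  }
}
EOF
missing=0
for f in $(grep -o '"file": *"[^"]*"' manifest.json | cut -d'"' -f4 | sort -u); do
  grep -q "\"$f\":" manifest.json || { echo "missing files entry: $f"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "manifest file list OK"    # prints: manifest file list OK
```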

#### Package example


An example package, [ExamplePackage.zip](https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/ExamplePackage.zip), is available for you to download from our website. The example package includes a completed JSON manifest and three .zip files.

### Step 3: Upload the package and manifest to an Amazon S3 bucket


Prepare your package by copying or moving all .zip files into a folder or directory. A valid package requires the manifest that you created in [Step 2: Create the JSON package manifest](#packages-manifest) and all .zip files identified in the manifest file list.

**To upload the package and manifest to Amazon S3**

1. Copy or move all .zip archive files that you specified in the manifest to a folder or directory. Don't zip the folder or directory that contains your .zip archive files and manifest file.

1. Create a bucket or choose an existing bucket. For more information, see [Create a Bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/CreatingABucket.html) in the *Amazon Simple Storage Service Getting Started Guide*. For more information about how to run an AWS CLI command to create a bucket, see [https://docs.aws.amazon.com/cli/latest/reference/s3/mb.html](https://docs.aws.amazon.com/cli/latest/reference/s3/mb.html) in the *AWS CLI Command Reference*.

1. Upload the folder or directory to the bucket. For more information, see [Add an Object to a Bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/PuttingAnObjectInABucket.html) in the *Amazon Simple Storage Service Getting Started Guide*. If you plan to paste your JSON manifest into the AWS Systems Manager console, don't upload the manifest. For more information about how to run an AWS CLI command to upload files to a bucket, see [https://docs.aws.amazon.com/cli/latest/reference/s3/mv.html](https://docs.aws.amazon.com/cli/latest/reference/s3/mv.html) in the *AWS CLI Command Reference*.

1. On the bucket's home page, choose the folder or directory that you uploaded. If you uploaded your files to a subfolder in a bucket, be sure to note the subfolder (also known as a *prefix*). You need the prefix to add your package to Distributor.

### Step 4: Add a package to Distributor

You can use the AWS Systems Manager console, AWS command line tools (AWS CLI and AWS Tools for PowerShell), or AWS SDKs to add a new package to Distributor. When you add a package, you're adding a new [SSM document](documents.md). The document allows you to deploy the package to managed nodes.

**Topics**
+ [Add a package using the console](#create-pkg-console)
+ [Add a package using the AWS CLI](#create-pkg-cli)

#### Add a package using the console


You can use the AWS Systems Manager console to create a package. Have ready the name of the bucket to which you uploaded your package in [Step 3: Upload the package and manifest to an Amazon S3 bucket](#packages-upload-s3).

**To add a package to Distributor (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Distributor**.

1. On the Distributor home page, choose **Create package**, and then choose **Advanced**.

1. On the **Create package** page, enter a name for your package. Package names can contain letters, numbers, periods, dashes, and underscores. The name should be generic enough to apply to all versions of the package attachments, but specific enough to identify the purpose of the package.

1. For **Version name**, enter the exact value of the `version` entry in your manifest file.

1. For **S3 bucket name**, choose the name of the bucket to which you uploaded your .zip files and manifest in [Step 3: Upload the package and manifest to an Amazon S3 bucket](#packages-upload-s3).

1. For **S3 key prefix**, enter the subfolder of the bucket where your .zip files and manifest are stored.

1. For **Manifest**, choose **Extract from package** to use a manifest that you have uploaded to the Amazon S3 bucket with your .zip files.

   (Optional) If you didn't upload your JSON manifest to the S3 bucket where you stored your .zip files, choose **New manifest**. You can author or paste the entire manifest in the JSON editor field. For more information about how to create the JSON manifest, see [Step 2: Create the JSON package manifest](#packages-manifest).

1. When you're finished with the manifest, choose **Create package**.

1. Wait for Distributor to create your package from your .zip files and manifest. Depending on the number and size of packages you are adding, this can take a few minutes. Distributor automatically redirects you to the **Package details** page for the new package, but you can choose to open this page yourself after the software is uploaded. The **Package details** page doesn't show all information about your package until Distributor finishes the package creation process. To stop the upload and package creation process, choose **Cancel**.

#### Add a package using the AWS CLI


You can use the AWS CLI to create a package. Have the URL ready from the bucket to which you uploaded your package in [Step 3: Upload the package and manifest to an Amazon S3 bucket](#packages-upload-s3).

**To add a package to Amazon S3 using the AWS CLI**

1. To use the AWS CLI to create a package, run the following command, replacing *package-name* with the name of your package, *path-to-manifest-file* with the file path of your JSON manifest file, and *amzn-s3-demo-bucket* with the URL of the Amazon S3 bucket where the entire package is stored. When you run the **create-document** command in Distributor, you specify the `Package` value for `--document-type`.

   If you didn't add your manifest file to the Amazon S3 bucket, the `--content` parameter value is the file path to the JSON manifest file.

   ```
   aws ssm create-document \
       --name "package-name" \
       --content file://path-to-manifest-file \
       --attachments Key="SourceUrl",Values="amzn-s3-demo-bucket" \
       --version-name version-value-from-manifest \
       --document-type Package
   ```

   The following is an example.

   ```
   aws ssm create-document \
       --name "ExamplePackage" \
       --content file://path-to-manifest-file \
       --attachments Key="SourceUrl",Values="https://s3.amazonaws.com/amzn-s3-demo-bucket/ExamplePackage" \
       --version-name 1.0.1 \
       --document-type Package
   ```

1. Verify that your package was added and show the package manifest by running the following command, replacing *package-name* with the name of your package. To get a specific version of the document (not the same as the version of a package), you can add the `--document-version` parameter.

   ```
   aws ssm get-document \
       --name "package-name"
   ```

For information about other options you can use with the **create-document** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/create-document.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/create-document.html) in the AWS Systems Manager section of the *AWS CLI Command Reference*. For information about other options you can use with the **get-document** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/get-document.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/get-document.html).

# Edit Distributor package permissions in the console

After you add a package to Distributor, a tool in AWS Systems Manager, you can edit the package's permissions in the Systems Manager console. You can add other AWS accounts to a package's permissions. Packages can be shared with other accounts in the same AWS Region only. Cross-Region sharing isn't supported. By default, packages are set to **Private**, meaning only those with access to the package creator's AWS account can view package information and update or delete the package. If **Private** permissions are acceptable, you can skip this procedure.

**Note**  
You can update the permissions of packages that are shared with 20 or fewer accounts. 

**To edit package permissions in the console**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Distributor**.

1. On the **Packages** page, choose the package for which you want to edit permissions.

1. On the **Package details** tab, choose **Edit permissions** to change permissions.

1. For **Edit permissions**, choose **Shared with specific accounts**.

1. Under **Shared with specific accounts**, add AWS account numbers, one at a time. When you're finished, choose **Save**.

# Edit Distributor package tags in the console

After you have added a package to Distributor, a tool in AWS Systems Manager, you can edit the package's tags in the Systems Manager console. These tags are applied to the package itself and aren't connected to tags on the managed nodes where you deploy the package. Tags are case-sensitive key-value pairs that can help you group and filter your packages by criteria that are relevant to your organization. If you don't want to add tags, you're ready to install your package or add a new version.

**To edit package tags in the console**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Distributor**.

1. On the **Packages** page, choose the package for which you want to edit tags.

1. On the **Package details** tab, in **Tags**, choose **Edit**.

1. For **Add tags**, enter a tag key, or a tag key and value pair, and then choose **Add**. Repeat if you want to add more tags. To delete tags, choose **X** on the tag at the bottom of the window.

1. When you're finished adding tags to your package, choose **Save**.

# Add a version to a Distributor package

To add a package version, [create a package](distributor-working-with-packages-create.md), and then use Distributor to add the version as a new entry in the AWS Systems Manager (SSM) document that already exists for the package's older versions. Distributor is a tool in AWS Systems Manager. To save time, update the manifest for an older version of the package: change the value of the `version` entry in the manifest (for example, from `Test_1.0` to `Test_2.0`) and save it as the manifest for the new version. The simple **Add version** workflow in the Distributor console updates the manifest file for you.

A new package version can:
+ Replace at least one of the installable files attached to the current version.
+ Add new installable files to support additional platforms.
+ Delete files to discontinue support for specific platforms.

A newer version can use the same Amazon Simple Storage Service (Amazon S3) bucket, but the file names at the end of the URLs must be different. You can use the Systems Manager console or the AWS Command Line Interface (AWS CLI) to add the new version. Uploading an installable file with the exact same name as an existing installable file in the Amazon S3 bucket overwrites the existing file. No installable files are copied from the older version to the new version; you must upload any installable files from the older version that you want to be part of the new version. After Distributor finishes creating your new package version, you can delete or repurpose the Amazon S3 bucket, because Distributor copies your software to an internal Systems Manager bucket as part of the versioning process.

**Note**  
Each package can have a maximum of 25 versions. You can delete versions that are no longer required.

**Topics**
+ [Adding a package version using the console](#add-pkg-version)
+ [Adding a package version using the AWS CLI](#add-pkg-version-cli)

## Adding a package version using the console


Before you perform these steps, follow the instructions in [Create a package in Distributor](distributor-working-with-packages-create.md) to create a new package for the version. Then, use the Systems Manager console to add a new package version to Distributor.

### Adding a package version using the Simple workflow


To add a package version by using the **Simple** workflow, prepare updated installable files or add installable files to support more platforms and architectures. Then, use Distributor to upload new and updated installable files and add a package version. The simplified **Add version** workflow in the Distributor console updates the manifest file and associated SSM document for you.

**To add a package version using the Simple workflow**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Distributor**.

1. On the Distributor home page, choose the package to which you want to add another version.

1. On the **Add version** page, choose **Simple**.

1. For **Version name**, enter a version name. The version name for the new version must be different from older version names. Version names can be a maximum of 512 characters, and can't contain special characters.

1. For **S3 bucket name**, choose an existing S3 bucket from the list. This can be the same bucket that you used to store installable files for older versions, but the installable file names must be different to avoid overwriting existing installable files in the bucket.

1. For **S3 key prefix**, enter the subfolder of the bucket where your installable assets are stored.

1. For **Upload software**, navigate to the installable software files that you want to attach to the new version. Installable files from existing versions aren't automatically copied to a new version; you must upload any installable files from older versions that you want to be part of the new version. You can upload more than one software file in a single action.

1. For **Target platform**, verify that the target operating system platform shown for each installable file is correct. If the operating system shown isn't correct, choose the correct operating system from the dropdown list.

   In the **Simple** versioning workflow, because you upload each installable file only once, extra steps are required to target a single file at multiple operating systems. For example, if you upload an installable software file named `Logtool_v1.1.1.rpm`, you must change some defaults in the **Simple** workflow to instruct Distributor to target the same software at both Amazon Linux and Ubuntu Server operating systems. You can do one of the following to work around this limitation.
   + Use the **Advanced** versioning workflow instead, zip each installable file into a .zip file before you begin, and manually author the manifest so that one installable file can be targeted at multiple operating system platforms or versions. For more information, see [Adding a package version using the Advanced workflow](#add-pkg-version-adv).
   + Manually edit the manifest file in the **Simple** workflow so that your .zip file is targeted at multiple operating system platforms or versions. For more information about how to do this, see the end of step 4 in [Step 2: Create the JSON package manifest](distributor-working-with-packages-create.md#packages-manifest).

1. For **Platform version**, verify that the operating system platform version shown is either **_any**, a major release version followed by a wildcard (`8.*`), or the exact operating system release version to which you want your software to apply. For more information about specifying a platform version, see step 4 in [Step 2: Create the JSON package manifest](distributor-working-with-packages-create.md#packages-manifest).

1. For **Architecture**, choose the correct processor architecture for each installable file from the drop-down list. For more information about supported architectures, see [Supported package platforms and architectures](distributor.md#what-is-a-package-platforms).

1. (Optional) Expand **Scripts**, and review the installation and uninstallation scripts that Distributor generates for your installable software.

1. To add more installable software files to the new version, choose **Add software**. Otherwise, go to the next step.

1. (Optional) Expand **Manifest**, and review the JSON package manifest that Distributor generates for your installable software. If you changed any information about your installable software since you began this procedure, such as platform version or target platform, choose **Generate manifest** to show the updated package manifest.

   You can edit the manifest manually if you want to target a software installable at more than one operating system, as described in step 9. For more information about editing the manifest, see [Step 2: Create the JSON package manifest](distributor-working-with-packages-create.md#packages-manifest).

1. When you finish adding software and reviewing the target platform, version, and architecture data, choose **Add version**.

1. Wait for Distributor to finish uploading your software and creating the new package version. Distributor shows upload status for each installable file. Depending on the number and size of packages you are adding, this can take a few minutes. Distributor automatically redirects you to the **Package details** page for the package, but you can choose to open this page yourself after the software is uploaded. The **Package details** page doesn't show all information about your package until Distributor finishes creating the new package version. To stop the upload and package version creation, choose **Stop upload**.

1. If Distributor can't upload any of the software installable files, it displays an **Upload failed** message. To retry the upload, choose **Retry upload**. For more information about how to troubleshoot package version creation failures, see [Troubleshooting AWS Systems Manager Distributor](distributor-troubleshooting.md).

1. When Distributor is finished creating the new package version, on the package's **Details** page, on the **Versions** tab, view the new version in the list of available package versions. Set a default version of the package by choosing a version, and then choosing **Set default version**.

   If you don't set a default version, the newest package version is the default version.

### Adding a package version using the Advanced workflow


To add a package version, [create a package](distributor-working-with-packages-create.md), and then use Distributor to add the new version by adding an entry to the SSM document that exists for the older versions. To save time, copy the manifest for an older version of the package, change the value of its `version` entry (for example, from `Test_1.0` to `Test_2.0`), and save the result as the manifest for the new version. You must have an updated manifest to add a new package version by using the **Advanced** workflow.
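The manifest-update step described above can be scripted. The following is a minimal sketch, not part of Distributor itself, assuming the old manifest is a local JSON file; the file names and version values are illustrative:

```
import json

def bump_manifest_version(old_path, new_path, new_version):
    """Copy an existing package manifest, changing only its version entry."""
    with open(old_path) as f:
        manifest = json.load(f)
    manifest["version"] = new_version  # for example, "Test_1.0" -> "Test_2.0"
    with open(new_path, "w") as f:
        json.dump(manifest, f, indent=2)
    return manifest

# Example: bump_manifest_version("manifest.json", "manifest_v2.json", "Test_2.0")
```

You would then upload the new manifest to Amazon S3 along with the .zip files for the new version.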

**To add a package version using the Advanced workflow**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Distributor**.

1. On the Distributor home page, choose the package to which you want to add another version, and then choose **Add version**.

1. For **Version name**, enter the exact value that is in the `version` entry of your manifest file.

1. For **S3 bucket name**, choose an existing S3 bucket from the list. This can be the same bucket that you used to store installable files for older versions, but the installable file names must be different to avoid overwriting existing installable files in the bucket.

1. For **S3 key prefix**, enter the subfolder of the bucket where your installable assets are stored.

1. For **Manifest**, choose **Extract from package** to use a manifest that you uploaded to the S3 bucket with your .zip files.

   (Optional) If you didn't upload your revised JSON manifest to the Amazon S3 bucket where you stored your .zip files, choose **New manifest**. You can author or paste the entire manifest in the JSON editor field. For more information about how to create the JSON manifest, see [Step 2: Create the JSON package manifest](distributor-working-with-packages-create.md#packages-manifest).

1. When you're finished with the manifest, choose **Add package version**.

1. On the package's **Details** page, on the **Versions** tab, view the new version in the list of available package versions. Set a default version of the package by choosing a version, and then choosing **Set default version**.

   If you don't set a default version, the newest package version is the default version.

## Adding a package version using the AWS CLI


You can use the AWS CLI to add a new package version to Distributor. Before you run these commands, you must create the new package version and upload it to Amazon S3, as described at the start of this topic.

**To add a package version using the AWS CLI**

1. Run the following command to edit the AWS Systems Manager document, adding an entry for the new package version. Replace *document-name* with the name of your document. Replace *S3-bucket-URL-to-manifest-file* with the URL of the JSON manifest that you copied in [Step 3: Upload the package and manifest to an Amazon S3 bucket](distributor-working-with-packages-create.md#packages-upload-s3). Replace *amzn-s3-demo-bucket* with the URL of the Amazon S3 bucket where the entire package is stored. Replace *version-name-from-updated-manifest* with the value of `version` in the manifest. Set the `--document-version` parameter to `$LATEST` to make the document associated with this package version the latest version of the document.

   ```
   aws ssm update-document \
       --name "document-name" \
       --content "S3-bucket-URL-to-manifest-file" \
       --attachments Key="SourceUrl",Values="amzn-s3-demo-bucket" \
       --version-name version-name-from-updated-manifest \
       --document-version '$LATEST'
   ```

   The following is an example.

   ```
   aws ssm update-document \
       --name ExamplePackage \
       --content "https://s3.amazonaws.com/amzn-s3-demo-bucket/ExamplePackage/manifest.json" \
       --attachments Key="SourceUrl",Values="https://s3.amazonaws.com/amzn-s3-demo-bucket/ExamplePackage" \
       --version-name 1.1.1 \
       --document-version '$LATEST'
   ```

1. Run the following command to verify that your package was updated and show the package manifest. Replace *package-name* with the name of your package, and optionally, *document-version* with the version number of the document (not the same as the package version) that you updated. If this package version is associated with the latest version of the document, you can specify `$LATEST` for the value of the optional `--document-version` parameter.

   ```
   aws ssm get-document \
       --name "package-name" \
       --document-version "document-version"
   ```

For information about other options you can use with the **update-document** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/update-document.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/update-document.html) in the AWS Systems Manager section of the *AWS CLI Command Reference*.

# Install or update Distributor packages
Install or update packages

You can deploy packages to your AWS Systems Manager managed nodes by using Distributor, a tool in AWS Systems Manager. To deploy a package, use either the AWS Management Console or the AWS Command Line Interface (AWS CLI). You can deploy one version of one package per command, and you can install new packages or update existing installations in place. You can deploy a specific version, or always deploy the latest version of a package. We recommend using State Manager, a tool in AWS Systems Manager, to install packages. Using State Manager helps ensure that your managed nodes are always running the most up-to-date version of your package.

**Important**  
Packages that you install using Distributor should be uninstalled only by using Distributor. Otherwise, Systems Manager might continue to register the application as `INSTALLED`, which can lead to other unintended results.


| Preference | AWS Systems Manager action | More info | 
| --- | --- | --- | 
|  Install or update a package immediately.  |  Run Command  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/distributor-working-with-packages-deploy.html)  | 
|  Install or update a package on a schedule, so that the installation always includes the default version.  |  State Manager  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/distributor-working-with-packages-deploy.html)  | 
|  Automatically install a package on new managed nodes that have a specific tag or set of tags. For example, installing the Amazon CloudWatch agent on new instances.  |  State Manager  |  One way to do this is to apply tags to new managed nodes, and then specify the tags as targets in your State Manager association. State Manager automatically installs the package in an association on managed nodes that have matching tags. See [Understanding targets and rate controls in State Manager associations](systems-manager-state-manager-targets-and-rate-controls.md).  | 

**Topics**
+ [

## Installing or updating a package one time using the console
](#distributor-deploy-pkg-console)
+ [

## Scheduling a package installation or update using the console
](#distributor-deploy-sm-pkg-console)
+ [

## Installing a package one time using the AWS CLI
](#distributor-deploy-pkg-cli)
+ [

## Updating a package one time using the AWS CLI
](#distributor-update-pkg-cli)
+ [

## Scheduling a package installation using the AWS CLI
](#distributor-smdeploy-pkg-cli)
+ [

## Scheduling a package update using the AWS CLI
](#distributor-smupdate-pkg-cli)

## Installing or updating a package one time using the console


You can use the AWS Systems Manager console to install or update a package one time. When you configure a one-time installation, Distributor uses [AWS Systems Manager Run Command](run-command.md), a tool in AWS Systems Manager, to perform the installation.

**To install or update a package one time using the console**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Distributor**.

1. On the Distributor home page, choose the package that you want to install.

1. Choose **Install one time**.

   This command opens Run Command with the command document `AWS-ConfigureAWSPackage` and your Distributor package already selected.

1. For **Document version**, select the version of the `AWS-ConfigureAWSPackage` document that you want to run.

1. For **Action**, choose **Install**.

1. For **Installation type**, choose one of the following: 
   + **Uninstall and reinstall**: The package is completely uninstalled, and then reinstalled. The application is unavailable until the reinstallation is complete.
   + **In-place update**: Only new or changed files are added to the existing installation according to instructions you provide in an `update` script. The application remains available throughout the update process. This option isn't supported for AWS published packages except the `AWSEC2Launch-Agent` package.

1. For **Name**, verify that the name of the package you selected is entered.

1. (Optional) For **Version**, enter the version name value of the package. If you leave this field blank, Run Command installs the default version that you selected in Distributor.

1. In the **Targets** section, choose the managed nodes on which you want to run this operation by specifying tags, selecting instances or devices manually, or by specifying a resource group.
**Note**  
If you don't see a managed node in the list, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md).

1. For **Other parameters**:
   + For **Comment**, enter information about this command.
   + For **Timeout (seconds)**, specify the number of seconds for the system to wait before failing the overall command execution. 

1. For **Rate Control**:
   + For **Concurrency**, specify either a number or a percentage of targets on which to run the command at the same time.
**Note**  
If you selected targets by specifying tags or resource groups and you aren't certain how many managed nodes are targeted, then restrict the number of targets that can run the document at the same time by specifying a percentage.
   + For **Error threshold**, specify when to stop running the command on other targets after it fails on either a number or a percentage of managed nodes. For example, if you specify three errors, then Systems Manager stops sending the command when the fourth error is received. Managed nodes still processing the command might also send errors. 

1. (Optional) For **Output options**, to save the command output to a file, select the **Write command output to an S3 bucket** box. Enter the bucket and prefix (folder) names in the boxes.
**Note**  
The S3 permissions that grant the ability to write the data to an S3 bucket are those of the instance profile (for EC2 instances) or IAM service role (hybrid-activated machines) assigned to the instance, not those of the IAM user performing this task. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md) or [Create an IAM service role for a hybrid environment](hybrid-multicloud-service-role.md). In addition, if the specified S3 bucket is in a different AWS account, make sure that the instance profile or IAM service role associated with the managed node has the necessary permissions to write to that bucket.

1. In the **SNS notifications** section, if you want notifications sent about the status of the command execution, select the **Enable SNS notifications** check box.

   For more information about configuring Amazon SNS notifications for Run Command, see [Monitoring Systems Manager status changes using Amazon SNS notifications](monitoring-sns-notifications.md).

1. When you're ready to install the package, choose **Run**.

1. The **Command status** area reports the progress of the execution. If the command is still in progress, choose the refresh icon in the top-left corner of the console until the **Overall status** or **Detailed status** column shows **Success** or **Failed**.

1. In the **Targets and outputs** area, choose the button next to a managed node name, and then choose **View output**.

   The command output page shows the results of your command execution. 

1. (Optional) If you chose to write command output to an Amazon S3 bucket, choose **Amazon S3** to view the output log data.

## Scheduling a package installation or update using the console


You can use the AWS Systems Manager console to schedule the installation or update of a package. When you schedule package installation or update, Distributor uses [AWS Systems Manager State Manager](systems-manager-state.md) to install or update.

**To schedule a package installation using the console**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Distributor**.

1. On the Distributor home page, choose the package that you want to install or update.

1. For **Package**, choose **Install on a schedule**.

   This command opens State Manager to a new association that is created for you.

1. For **Name**, enter a name (for example, **Deploy-test-agent-package**). This is optional, but recommended. Spaces aren't allowed in the name.

1. In the **Document** list, the document name `AWS-ConfigureAWSPackage` is already selected. 

1. For **Action**, verify that **Install** is selected.

1. For **Installation type**, choose one of the following: 
   + **Uninstall and reinstall**: The package is completely uninstalled, and then reinstalled. The application is unavailable until the reinstallation is complete.
   + **In-place update**: Only new or changed files are added to the existing installation according to instructions you provide in an `update` script. The application remains available throughout the update process.

1. For **Name**, verify that the name of your package is entered.

1. For **Version**, if you want to install a package version other than the latest published version, enter the version identifier.

1. For **Targets**, choose **Selecting all managed instances in this account**, **Specifying tags**, or **Manually Selecting Instance**. If you target resources by using tags, enter a tag key and a tag value in the fields provided.
**Note**  
You can choose managed AWS IoT Greengrass core devices by choosing either **Selecting all managed instances in this account** or **Manually Selecting Instance**.

1. For **Specify schedule**, choose **On Schedule** to run the association on a regular schedule, or **No Schedule** to run the association once. For more information about these options, see [Working with associations in Systems Manager](state-manager-associations.md). Use the controls to create a `cron` or rate schedule for the association.

1. Choose **Create Association**.

1. On the **Association** page, choose the button next to the association you created, and then choose **Apply association now**.

   State Manager creates and immediately runs the association on the specified targets. For more information about the results of running associations, see [Working with associations in Systems Manager](state-manager-associations.md) in this guide.

For more information about working with the options in **Advanced options**, **Rate control**, and **Output options**, see [Working with associations in Systems Manager](state-manager-associations.md). 
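For the **Specify schedule** step above, State Manager accepts rate and cron expressions. The following values are illustrative examples only; the first runs the association every 30 days, and the second runs it at 2:00 AM UTC every Sunday:

```
rate(30 days)
cron(0 2 ? * SUN *)
```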

## Installing a package one time using the AWS CLI


You can run **send-command** in the AWS CLI to install a Distributor package one time. If the package is already installed, the application will be taken offline while the package is uninstalled and the new version installed in its place.

**To install a package one time using the AWS CLI**
+ Run the following command in the AWS CLI.

  ```
  aws ssm send-command \
      --document-name "AWS-ConfigureAWSPackage" \
      --instance-ids "instance-IDs" \
      --parameters '{"action":["Install"],"installationType":["Uninstall and reinstall"],"name":["package-name (in same account) or package-ARN (shared from different account)"]}'
  ```
**Note**  
The default behavior for `installationType` is `Uninstall and reinstall`. You can omit `"installationType":["Uninstall and reinstall"]` from this command when you're installing a complete package.

  The following is an example.

  ```
  aws ssm send-command \
      --document-name "AWS-ConfigureAWSPackage" \
       --instance-ids "i-02573cafcfEXAMPLE" \
      --parameters '{"action":["Install"],"installationType":["Uninstall and reinstall"],"name":["ExamplePackage"]}'
  ```

For information about other options you can use with the **send-command** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html) in the AWS Systems Manager section of the *AWS CLI Command Reference*.
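Quoting the `--parameters` JSON by hand is a common source of errors. If you generate the command from a script, building the value with a JSON library is safer than hand-escaping. The following is a minimal sketch, not part of the AWS CLI; the function name and package name are illustrative:

```
import json

def configure_package_params(name, installation_type="Uninstall and reinstall"):
    """Build the --parameters JSON for the AWS-ConfigureAWSPackage document."""
    return json.dumps({
        "action": ["Install"],
        "installationType": [installation_type],
        "name": [name],
    })

# Example: pass the result, wrapped in single quotes, as the --parameters
# value of the send-command call shown earlier in this section.
# configure_package_params("ExamplePackage")
```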

## Updating a package one time using the AWS CLI


You can run **send-command** in the AWS CLI to update a Distributor package without taking the associated application offline. Only new or updated files in the package are replaced.

**To update a package one time using the AWS CLI**
+ Run the following command in the AWS CLI.

  ```
  aws ssm send-command \
      --document-name "AWS-ConfigureAWSPackage" \
      --instance-ids "instance-IDs" \
      --parameters '{"action":["Install"],"installationType":["In-place update"],"name":["package-name (in same account) or package-ARN (shared from different account)"]}'
  ```
**Note**  
When you add new or changed files, you must include `"installationType":["In-place update"]` in the command.

  The following is an example.

  ```
  aws ssm send-command \
      --document-name "AWS-ConfigureAWSPackage" \
      --instance-ids "i-02573cafcfEXAMPLE" \
      --parameters '{"action":["Install"],"installationType":["In-place update"],"name":["ExamplePackage"]}'
  ```

For information about other options you can use with the **send-command** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html) in the AWS Systems Manager section of the *AWS CLI Command Reference*.

## Scheduling a package installation using the AWS CLI


You can run **create-association** in the AWS CLI to install a Distributor package on a schedule. The value of `--name`, the document name, is always `AWS-ConfigureAWSPackage`. The following command uses the key `InstanceIds` to specify target managed nodes. If the package is already installed, the application will be taken offline while the package is uninstalled and the new version installed in its place.

```
aws ssm create-association \
    --name "AWS-ConfigureAWSPackage" \
    --parameters '{"action":["Install"],"installationType":["Uninstall and reinstall"],"name":["package-name (in same account) or package-ARN (shared from different account)"]}' \
    --targets '[{"Key":"InstanceIds","Values":["instance-ID1","instance-ID2"]}]'
```

**Note**  
The default behavior for `installationType` is `Uninstall and reinstall`. You can omit `"installationType":["Uninstall and reinstall"]` from this command when you're installing a complete package.

The following is an example.

```
aws ssm create-association \
    --name "AWS-ConfigureAWSPackage" \
    --parameters '{"action":["Install"],"installationType":["Uninstall and reinstall"],"name":["Test-ConfigureAWSPackage"]}' \
    --targets '[{"Key":"InstanceIds","Values":["i-02573cafcfEXAMPLE","i-0471e04240EXAMPLE"]}]'
```

For information about other options you can use with the **create-association** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/create-association.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/create-association.html) in the AWS Systems Manager section of the *AWS CLI Command Reference*.
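The `--targets` value is also JSON, and escaping its inner quotation marks in a shell is error-prone. A minimal sketch that builds the value programmatically follows; the function name is illustrative, and the instance IDs are placeholders:

```
import json

def instance_targets(*instance_ids):
    """Build the --targets JSON for create-association, keyed by InstanceIds."""
    return json.dumps([{"Key": "InstanceIds", "Values": list(instance_ids)}])

# Example: pass the result, wrapped in single quotes, as the --targets
# value of the create-association call shown earlier in this section.
# instance_targets("i-02573cafcfEXAMPLE", "i-0471e04240EXAMPLE")
```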

## Scheduling a package update using the AWS CLI


You can run **create-association** in the AWS CLI to update a Distributor package on a schedule without taking the associated application offline. Only new or updated files in the package are replaced. The value of `--name`, the document name, is always `AWS-ConfigureAWSPackage`. The following command uses the key `InstanceIds` to specify target instances.

```
aws ssm create-association \
    --name "AWS-ConfigureAWSPackage" \
    --parameters '{"action":["Install"],"installationType":["In-place update"],"name":["package-name (in same account) or package-ARN (shared from different account)"]}' \
    --targets '[{"Key":"InstanceIds","Values":["instance-ID1","instance-ID2"]}]'
```

**Note**  
When you add new or changed files, you must include `"installationType":["In-place update"]` in the command.

The following is an example.

```
aws ssm create-association \
    --name "AWS-ConfigureAWSPackage" \
    --parameters '{"action":["Install"],"installationType":["In-place update"],"name":["Test-ConfigureAWSPackage"]}' \
    --targets '[{"Key":"InstanceIds","Values":["i-02573cafcfEXAMPLE","i-0471e04240EXAMPLE"]}]'
```

For information about other options you can use with the **create-association** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/create-association.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/create-association.html) in the AWS Systems Manager section of the *AWS CLI Command Reference*.

# Uninstall a Distributor package
Uninstall a package

You can use the AWS Management Console or the AWS Command Line Interface (AWS CLI) to uninstall Distributor packages from your AWS Systems Manager managed nodes by using Run Command. Distributor and Run Command are tools in AWS Systems Manager. In this release, you can uninstall one version of one package per command. You can uninstall a specific version or the default version.

**Important**  
Packages that you install using Distributor should be uninstalled only by using Distributor. Otherwise, Systems Manager might continue to register the application as `INSTALLED`, which can lead to other unintended results.

**Topics**
+ [

## Uninstalling a package using the console
](#distributor-pkg-uninstall-console)
+ [

## Uninstalling a package using the AWS CLI
](#distributor-pkg-uninstall-cli)

## Uninstalling a package using the console


You can use Run Command in the Systems Manager console to uninstall a package one time. Distributor uses [AWS Systems Manager Run Command](run-command.md) to uninstall packages.

**To uninstall a package using the console**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Run Command**.

1. On the Run Command home page, choose **Run command**.

1. Choose the `AWS-ConfigureAWSPackage` command document.

1. For **Action**, choose **Uninstall**.

1. For **Name**, enter the name of the package that you want to uninstall.

1. For **Targets**, choose how you want to target your managed nodes. You can specify a tag key and values that are shared by the targets. You can also specify targets by choosing attributes, such as an ID, platform, and SSM Agent version.

1. You can use the advanced options to add comments about the operation, change **Concurrency** and **Error threshold** values in **Rate control**, specify output options, or configure Amazon Simple Notification Service (Amazon SNS) notifications. For more information, see [Running Commands from the Console](https://docs.aws.amazon.com/systems-manager/latest/userguide/rc-console.html) in this guide.

1. When you're ready to uninstall the package, choose **Run**, and then choose **View results**.

1. In the commands list, choose the `AWS-ConfigureAWSPackage` command that you ran. If the command is still in progress, choose the refresh icon in the top-right corner of the console.

1. When the **Status** column shows **Success** or **Failed**, choose the **Output** tab.

1. Choose **View output**. The command output page shows the results of your command execution.

## Uninstalling a package using the AWS CLI


You can use the AWS CLI to uninstall a Distributor package from managed nodes by using Run Command.

**To uninstall a package using the AWS CLI**
+ Run the following command in the AWS CLI.

  ```
  aws ssm send-command \
      --document-name "AWS-ConfigureAWSPackage" \
      --instance-ids "instance-IDs" \
      --parameters '{"action":["Uninstall"],"name":["package-name (in same account) or package-ARN (shared from different account)"]}'
  ```

  The following is an example.

  ```
  aws ssm send-command \
      --document-name "AWS-ConfigureAWSPackage" \
      --instance-ids "i-02573cafcfEXAMPLE" \
      --parameters '{"action":["Uninstall"],"name":["Test-ConfigureAWSPackage"]}'
  ```

For information about other options you can use with the **send-command** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html) in the AWS Systems Manager section of the *AWS CLI Command Reference*.

# Delete a Distributor package
Delete a package

This section describes how to delete a package or an individual package version. Deleting an entire package removes all of its versions from Distributor.

## Deleting a package using the console


You can use the AWS Systems Manager console to delete a package from Distributor, a tool in AWS Systems Manager. Deleting a package deletes all versions of the package from Distributor.

**To delete a package using the console**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Distributor**.

1. On the **Distributor** home page, choose the package that you want to delete.

1. On the package's details page, choose **Delete package**.

1. When you're prompted to confirm the deletion, choose **Delete package**.

## Deleting a package version using the console


You can use the Systems Manager console to delete a package version from Distributor.

**To delete a package version using the console**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Distributor**.

1. On the **Distributor** home page, choose the package that you want to delete a version of.

1. On the versions page for the package, choose the version to delete, and then choose **Delete version**.

1. When you're prompted to confirm the deletion, choose **Delete package version**.

## Deleting a package using the command line


You can use your preferred command line tool to delete a package from Distributor.

------
#### [ Linux & macOS ]

**To delete a package using the AWS CLI**

1. Run the following command to list documents for specific packages. In the results of this command, look for the package that you want to delete.

   ```
   aws ssm list-documents \
       --filters Key=Name,Values=package-name
   ```

1. Run the following command to delete a package. Replace *package-name* with the package name.

   ```
   aws ssm delete-document \
       --name "package-name"
   ```

1. Run the **list-documents** command again to verify that the package was deleted. The package you deleted shouldn't be included in the list.

   ```
   aws ssm list-documents \
       --filters Key=Name,Values=package-name
   ```

------
#### [ Windows ]

**To delete a package using the AWS CLI**

1. Run the following command to list documents for specific packages. In the results of this command, look for the package that you want to delete.

   ```
   aws ssm list-documents ^
       --filters Key=Name,Values=package-name
   ```

1. Run the following command to delete a package. Replace *package-name* with the package name.

   ```
   aws ssm delete-document ^
       --name "package-name"
   ```

1. Run the **list-documents** command again to verify that the package was deleted. The package you deleted shouldn't be included in the list.

   ```
   aws ssm list-documents ^
       --filters Key=Name,Values=package-name
   ```

------
#### [ PowerShell ]

**To delete a package using Tools for PowerShell**

1. Run the following command to list documents for specific packages. In the results of this command, look for the package that you want to delete.

   ```
   $filter = New-Object Amazon.SimpleSystemsManagement.Model.DocumentKeyValuesFilter
   $filter.Key = "Name"
   $filter.Values = "package-name"
   
   Get-SSMDocumentList `
       -Filters @($filter)
   ```

1. Run the following command to delete a package. Replace *package-name* with the package name.

   ```
   Remove-SSMDocument `
       -Name "package-name"
   ```

1. Run the **Get-SSMDocumentList** command again to verify that the package was deleted. The package you deleted shouldn't be included in the list.

   ```
   $filter = New-Object Amazon.SimpleSystemsManagement.Model.DocumentKeyValuesFilter
   $filter.Key = "Name"
   $filter.Values = "package-name"
   
   Get-SSMDocumentList `
       -Filters @($filter)
   ```

------

## Deleting a package version using the command line


You can use your preferred command line tool to delete a package version from Distributor.

------
#### [ Linux & macOS ]

**To delete a package version using the AWS CLI**

1. Run the following command to list the versions of your package. In the results of this command, look for the package version that you want to delete.

   ```
   aws ssm list-document-versions \
       --name "package-name"
   ```

1. Run the following command to delete a package version. Replace *package-name* with the package name and *version* with the version number.

   ```
   aws ssm delete-document \
       --name "package-name" \
       --document-version version
   ```

1. Run the **list-document-versions** command to verify that the version of the package was deleted. The package version that you deleted shouldn't be found.

   ```
   aws ssm list-document-versions \
       --name "package-name"
   ```

------
#### [ Windows ]

**To delete a package version using the AWS CLI**

1. Run the following command to list the versions of your package. In the results of this command, look for the package version that you want to delete.

   ```
   aws ssm list-document-versions ^
       --name "package-name"
   ```

1. Run the following command to delete a package version. Replace *package-name* with the package name and *version* with the version number.

   ```
   aws ssm delete-document ^
       --name "package-name" ^
       --document-version version
   ```

1. Run the **list-document-versions** command to verify that the version of the package was deleted. The package version that you deleted shouldn't be found.

   ```
   aws ssm list-document-versions ^
       --name "package-name"
   ```

------
#### [ PowerShell ]

**To delete a package version using Tools for PowerShell**

1. Run the following command to list the versions of your package. In the results of this command, look for the package version that you want to delete.

   ```
   Get-SSMDocumentVersionList `
       -Name "package-name"
   ```

1. Run the following command to delete a package version. Replace *package-name* with the package name and *version* with the version number.

   ```
   Remove-SSMDocument `
       -Name "package-name" `
       -DocumentVersion version
   ```

1. Run the **Get-SSMDocumentVersionList** command to verify that the version of the package was deleted. The package version that you deleted shouldn't be found.

   ```
   Get-SSMDocumentVersionList `
       -Name "package-name"
   ```

------

For information about other options you can use with the **list-documents** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/list-documents.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/list-documents.html) in the AWS Systems Manager section of the *AWS CLI Command Reference*. For information about other options you can use with the **delete-document** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/delete-document.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/delete-document.html).

# Auditing and logging Distributor activity


You can use AWS CloudTrail to audit activity related to Distributor, a tool in AWS Systems Manager. For more information about auditing and logging options for Systems Manager, see [Logging and monitoring in AWS Systems Manager](monitoring.md).

## Audit Distributor activity using CloudTrail


CloudTrail captures API calls made in the AWS Systems Manager console, the AWS Command Line Interface (AWS CLI), and the Systems Manager SDK. The information can be viewed in the CloudTrail console or stored in an Amazon Simple Storage Service (Amazon S3) bucket. One bucket is used for all CloudTrail logs for your account.

Logs of Run Command and State Manager actions show document creation, package installation, and package uninstallation activity. Run Command and State Manager are tools in AWS Systems Manager. For more information about viewing and using CloudTrail logs of Systems Manager activity, see [Logging AWS Systems Manager API calls with AWS CloudTrail](monitoring-cloudtrail-logs.md).

# Troubleshooting AWS Systems Manager Distributor
Troubleshooting Distributor

The following information can help you troubleshoot problems that might occur when you use Distributor, a tool in AWS Systems Manager.

**Topics**
+ [

## Wrong package with the same name is installed
](#distributor-tshoot-1)
+ [

## Error: Failed to retrieve manifest: Could not find latest version of package
](#distributor-tshoot-2)
+ [

## Error: Failed to retrieve manifest: Validation exception
](#distributor-tshoot-3)
+ [

## Package isn't supported (package is missing install action)
](#distributor-tshoot-4)
+ [

## Error: Failed to download manifest : Document with name does not exist
](#distributor-tshoot-5)
+ [

## Upload failed.
](#distributor-tshoot-6)
+ [

## Error: Failed to find platform: no manifest found for platform: oracle, version 8.9, architecture x86\_64
](#distributor-tshoot-7)

## Wrong package with the same name is installed


**Problem:** You tried to install a package, but Distributor installed a different package that has the same name.

**Cause:** During installation, Systems Manager matches AWS published packages before user-defined external packages. If your user-defined package has the same name as an AWS published package, the AWS package is installed instead of yours.

**Solution:** To avoid this problem, give your package a name that doesn't match the name of any AWS published package.

## Error: Failed to retrieve manifest: Could not find latest version of package


**Problem:** You received an error like the following.

```
Failed to retrieve manifest: ResourceNotFoundException: Could not find the latest version of package 
arn:aws:ssm:::package/package-name status code: 400, request id: guid
```

**Cause:** You're using a version of SSM Agent with Distributor that is earlier than version 2.3.274.0.

**Solution:** Update the version of SSM Agent to version 2.3.274.0 or later. For more information, see [Updating the SSM Agent using Run Command](run-command-tutorial-update-software.md#rc-console-agentexample) or [Walkthrough: Automatically update SSM Agent with the AWS CLI](state-manager-update-ssm-agent-cli.md).

## Error: Failed to retrieve manifest: Validation exception


**Problem:** You received an error like the following.

```
Failed to retrieve manifest: ValidationException: 1 validation error detected: Value 'documentArn'
at 'packageName' failed to satisfy constraint: Member must satisfy regular expression pattern:
arn:aws:ssm:region-id:account-id:package/package-name
```

**Cause:** You're using a version of SSM Agent with Distributor that is earlier than version 2.3.274.0.

**Solution:** Update the version of SSM Agent to version 2.3.274.0 or later. For more information, see [Updating the SSM Agent using Run Command](run-command-tutorial-update-software.md#rc-console-agentexample) or [Walkthrough: Automatically update SSM Agent with the AWS CLI](state-manager-update-ssm-agent-cli.md).

## Package isn't supported (package is missing install action)


**Problem:** You received an error like the following.

```
Package is not supported (package is missing install action)
```

**Cause:** The package directory structure is incorrect.

**Solution:** Don't zip a parent directory that contains the software and required scripts. Instead, create a `.zip` file of the required contents directly, so that the scripts sit at the root of the archive. To verify that the `.zip` file was created correctly, extract the target platform archive and review the directory structure. For example, after extraction, the install script's absolute path should be `/ExamplePackage_targetPlatform/install.sh`.
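The correct archive layout can be sketched on Linux as follows. The directory name `ExamplePackage_linux` and the script contents are hypothetical placeholders; the key point is compressing the directory *contents*, not the directory itself:

```shell
# Hypothetical working directory for a Linux target platform.
mkdir -p ExamplePackage_linux
printf '#!/bin/bash\necho "installing"\n' > ExamplePackage_linux/install.sh
printf '#!/bin/bash\necho "uninstalling"\n' > ExamplePackage_linux/uninstall.sh

# Correct: compress the directory contents so the scripts sit at the archive root.
(cd ExamplePackage_linux && zip -q -r ../ExamplePackage_linux.zip .)

# Incorrect would be: zip -r ExamplePackage_linux.zip ExamplePackage_linux
# That nests everything under a parent directory, which causes the
# "package is missing install action" error.

# Verify the layout: install.sh should appear at the top level of the listing.
unzip -l ExamplePackage_linux.zip
```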

## Error: Failed to download manifest : Document with name does not exist


**Problem:** You received an error like the following. 

```
Failed to download manifest - failed to retrieve package document description: InvalidDocument: Document with name filename does not exist.
```

**Cause 1:** When a Distributor package is shared from another account, Distributor can't locate the package by its name alone.

**Solution 1:** When sharing a package from another account, use the full Amazon Resource Name (ARN) for the package and not just its name.
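As a sketch of the ARN format for a shared package, the following constructs the full ARN from its parts. The account ID, Region, and package name are hypothetical placeholders; you would pass the resulting ARN wherever the package name is expected:

```shell
ACCOUNT_ID="123456789012"     # hypothetical ID of the account that shared the package
REGION="us-east-2"            # hypothetical Region where the package was created
PACKAGE_NAME="ExamplePackage" # hypothetical package name

# Distributor packages are SSM documents, so the ARN uses the document resource type.
PACKAGE_ARN="arn:aws:ssm:${REGION}:${ACCOUNT_ID}:document/${PACKAGE_NAME}"
echo "${PACKAGE_ARN}"
# → arn:aws:ssm:us-east-2:123456789012:document/ExamplePackage
```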

**Cause 2:** When using a VPC, you haven't provided your IAM instance profile with access to the AWS managed S3 bucket that contains the document `AWS-ConfigureAWSPackage` for the AWS Region you are targeting.

**Solution 2:** Ensure that your IAM instance profile provides SSM Agent with access to the AWS managed S3 bucket that contains the document `AWS-ConfigureAWSPackage` for the AWS Region you are targeting, as explained in [SSM Agent communications with AWS managed S3 buckets](ssm-agent-technical-details.md#ssm-agent-minimum-s3-permissions).

## Upload failed.


**Problem:** You received an error like the following. 

```
Upload failed. At least one of your files was not successfully uploaded to your S3 bucket.
```

**Cause:** The name of your software package includes a space. For example, `Hello World.msi` fails to upload.

**Solution:** Remove spaces from the names of your software package files before uploading them.
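As a minimal sketch of the fix, rename any file whose name contains a space before adding it to the package. The `.msi` file here is a hypothetical placeholder created only for the demonstration:

```shell
# Create a placeholder file whose name contains a space (this name would fail to upload).
touch "Hello World.msi"

# Rename it so the name contains no spaces before adding it to the package.
mv "Hello World.msi" "HelloWorld.msi"

ls -1 HelloWorld.msi
```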

## Error: Failed to find platform: no manifest found for platform: oracle, version 8.9, architecture x86\_64


**Problem:** You received an error like the following.

```
Failed to find platform: no manifest found for platform: oracle, version 8.9, architecture x86_64
```

**Cause:** The JSON package manifest doesn't contain a definition for the target platform, in this case Oracle Linux.

**Solution:** Download the package that you want to distribute from the [Trend Micro Deep Security Software](https://help.deepsecurity.trendmicro.com/software.html) site. Create an `.rpm` software package by following [Create a package using the Simple workflow](distributor-working-with-packages-create.md#distributor-working-with-packages-create-simple). Set the following values for the package, and then complete the software package upload using Distributor:

```
Platform version: _any
Target platform: oracle
Architecture: x86_64
```

# AWS Systems Manager Fleet Manager
Fleet Manager

Fleet Manager, a tool in AWS Systems Manager, is a unified user interface (UI) experience that helps you remotely manage your nodes running on AWS or on premises. With Fleet Manager, you can view the health and performance status of your entire server fleet from one console. You can also gather data from individual nodes to perform common troubleshooting and management tasks from the console. These tasks include connecting to Windows instances using the Remote Desktop Protocol (RDP), viewing folder and file contents, managing the Windows registry, managing operating system user accounts, and more.

To get started with Fleet Manager, open the [Systems Manager console](https://console.aws.amazon.com/systems-manager/fleet-manager). In the navigation pane, choose **Fleet Manager**.

## Who should use Fleet Manager?


Any AWS customer who wants a centralized way to manage their node fleet should use Fleet Manager.

## How can Fleet Manager benefit my organization?


Fleet Manager offers these benefits:
+ Perform a variety of common systems administration tasks without having to manually connect to your managed nodes.
+ Manage nodes running on multiple platforms from a single unified console.
+ Manage nodes running different operating systems from a single unified console.
+ Improve the efficiency of your systems administration.

## What are the features of Fleet Manager?


Key features of Fleet Manager include the following:
+ **Access the Red Hat Knowledgebase Portal**

  Access binaries, knowledge-shares, and discussion forums on the Red Hat Knowledgebase Portal through your Red Hat Enterprise Linux (RHEL) instances.
+ **Managed node status** 

  View which managed instances are `running` and which are `stopped`. For more information about stopped instances, see [Stop and start your instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Stop_Start.html) in the *Amazon EC2 User Guide*. For AWS IoT Greengrass core devices, you can view which are `online`, `offline`, or show a status of `Connection lost`.
**Note**  
If you stopped your managed instance before July 12, 2021, it won't display the `stopped` marker. To show the marker, start and stop the instance.
+ **View instance information**

  View information about the folder and file data stored on the volumes attached to your managed instances, performance data about your instances in real-time, and log data stored on your instances.
+ **View edge device information**

  View the AWS IoT Greengrass Thing name for the device, SSM Agent ping status and version, and more.
+ **Manage accounts and registry**

  Manage operating system (OS) user accounts on your instances and the registry on your Windows instances.
+ **Control access to features**

  Control access to Fleet Manager features using AWS Identity and Access Management (IAM) policies. With these policies, you can control which individual users or groups in your organization can use various Fleet Manager features, and which managed nodes they can manage.

**Topics**
+ [

## Who should use Fleet Manager?
](#fleet-who)
+ [

## How can Fleet Manager benefit my organization?
](#fleet-benefits)
+ [

## What are the features of Fleet Manager?
](#fleet-features)
+ [

# Setting up Fleet Manager
](setting-up-fleet-manager.md)
+ [

# Working with managed nodes
](fleet-manager-managed-nodes.md)
+ [

# Managing EC2 instances automatically with Default Host Management Configuration
](fleet-manager-default-host-management-configuration.md)
+ [

# Connecting to a Windows Server managed instance using Remote Desktop
](fleet-manager-remote-desktop-connections.md)
+ [

# Managing Amazon EBS volumes on managed instances
](fleet-manager-manage-amazon-ebs-volumes.md)
+ [

# Accessing the Red Hat Knowledge base portal
](fleet-manager-red-hat-knowledge-base-access.md)
+ [

# Troubleshooting managed node availability
](fleet-manager-troubleshooting-managed-nodes.md)

# Setting up Fleet Manager


Before users in your AWS account can use Fleet Manager, a tool in AWS Systems Manager, to monitor and manage your managed nodes, they must be granted the necessary permissions. In addition, any Amazon Elastic Compute Cloud (Amazon EC2) instances; AWS IoT Greengrass core devices; and on-premises servers, edge devices, and virtual machines (VMs) to be monitored and managed using Fleet Manager must be Systems Manager *managed nodes*. A managed node is any machine configured for use with Systems Manager in [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environments.

This means your nodes must meet certain prerequisites and be configured with the AWS Systems Manager Agent (SSM Agent).

Depending on the machine type, refer to one of the following topics to ensure your machines meet the requirements for managed nodes.
+ Amazon EC2 instances: [Managing EC2 instances with Systems Manager](systems-manager-setting-up-ec2.md)
**Tip**  
You can also use Quick Setup, a tool in AWS Systems Manager, to help you quickly configure your Amazon EC2 instances as managed instances in an individual account. If your business or organization uses AWS Organizations, you can also configure instances across multiple organizational units (OUs) and AWS Regions. For more information about using Quick Setup to configure managed instances, see [Set up Amazon EC2 host management using Quick Setup](quick-setup-host-management.md).
+ On-premises and other server types in the cloud: [Managing nodes in hybrid and multicloud environments with Systems Manager](systems-manager-hybrid-multicloud.md)
+ AWS IoT Greengrass (edge) devices: [Managing edge devices with Systems Manager](systems-manager-setting-up-edge-devices.md)

**Topics**
+ [

# Controlling access to Fleet Manager
](configuring-fleet-manager-permissions.md)

# Controlling access to Fleet Manager


To use Fleet Manager, a tool in AWS Systems Manager, your AWS Identity and Access Management (IAM) user or role must have the required permissions. You can create an IAM policy that provides access to all Fleet Manager features, or modify your policy to grant access to the features you choose. You then grant these permissions to users, or identities, in your account.

**Task 1: Create IAM policies to define access permissions**  
Follow one of the methods in the following topic in the *IAM User Guide* to create an IAM policy that provides identities (users, roles, or user groups) with access to Fleet Manager:  
+ [Define custom IAM permissions with customer managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html)
You can use one of the sample policies provided below, or modify them according to the permissions you want to grant. Sample policies are provided for full Fleet Manager access and for read-only access.

**Task 2: Attach the IAM policies to users to grant permissions**  
After you have created the IAM policy or policies that define access permissions to Fleet Manager, use one of the following procedures in the *IAM User Guide* to grant these permissions to identities in your account:  
+ [Adding IAM identity permissions (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html#add-policies-console)
+ [Adding IAM identity permissions (AWS CLI)](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html#add-policy-cli)
+ [Adding IAM identity permissions (AWS API)](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html#add-policy-api)

**Topics**
+ [

## Sample policy for Fleet Manager administrator access
](#admin-policy-sample)
+ [

## Sample policy for Fleet Manager read-only access
](#read-only-policy-sample)

## Sample policy for Fleet Manager administrator access


The following policy provides permissions to all Fleet Manager features. This means a user can create and delete local users and groups, modify group membership for any local group, and modify Windows Server registry keys or values. Replace each *example resource placeholder* with your own information.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "EC2",
            "Effect": "Allow",
            "Action": [
                "ec2:CreateTags",
                "ec2:DeleteTags",
                "ec2:DescribeInstances",
                "ec2:DescribeTags"
            ],
            "Resource": "*"
        },
        {
            "Sid": "General",
            "Effect": "Allow",
            "Action": [
                "ssm:AddTagsToResource",
                "ssm:DescribeInstanceAssociationsStatus",
                "ssm:DescribeInstancePatches",
                "ssm:DescribeInstancePatchStates",
                "ssm:DescribeInstanceProperties",
                "ssm:GetCommandInvocation",
                "ssm:GetServiceSetting",
                "ssm:GetInventorySchema",
                "ssm:ListComplianceItems",
                "ssm:ListInventoryEntries",
                "ssm:ListTagsForResource",
                "ssm:ListCommandInvocations",
                "ssm:ListAssociations",
                "ssm:RemoveTagsFromResource"
            ],
            "Resource": "*"
        },
        {
            "Sid": "DefaultHostManagement",
            "Effect": "Allow",
            "Action": [
                "ssm:ResetServiceSetting",
                "ssm:UpdateServiceSetting"
            ],
            "Resource": "arn:aws:ssm:us-east-1:111122223333:servicesetting/ssm/managed-instance/default-ec2-instance-management-role"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:PassRole"
            ],
            "Resource": "arn:aws:iam::111122223333:role/service-role/AWSSystemsManagerDefaultEC2InstanceManagementRole",
            "Condition": {
                "StringEquals": {
                    "iam:PassedToService": [
                        "ssm.amazonaws.com"
                    ]
                }
            }
        },
        {
            "Sid": "SendCommand",
            "Effect": "Allow",
            "Action": [
                "ssm:GetDocument",
                "ssm:SendCommand",
                "ssm:StartSession"
            ],
            "Resource": [
                "arn:aws:ec2:*:111122223333:instance/*",
                "arn:aws:ssm:*:111122223333:managed-instance/*",
                "arn:aws:ssm:*:111122223333:document/SSM-SessionManagerRunShell",
                "arn:aws:ssm:*:*:document/AWS-PasswordReset",
                "arn:aws:ssm:*:*:document/AWSFleetManager-AddUsersToGroups",
                "arn:aws:ssm:*:*:document/AWSFleetManager-CopyFileSystemItem",
                "arn:aws:ssm:*:*:document/AWSFleetManager-CreateDirectory",
                "arn:aws:ssm:*:*:document/AWSFleetManager-CreateGroup",
                "arn:aws:ssm:*:*:document/AWSFleetManager-CreateUser",
                "arn:aws:ssm:*:*:document/AWSFleetManager-CreateUserInteractive",
                "arn:aws:ssm:*:*:document/AWSFleetManager-CreateWindowsRegistryKey",
                "arn:aws:ssm:*:*:document/AWSFleetManager-DeleteFileSystemItem",
                "arn:aws:ssm:*:*:document/AWSFleetManager-DeleteGroup",
                "arn:aws:ssm:*:*:document/AWSFleetManager-DeleteUser",
                "arn:aws:ssm:*:*:document/AWSFleetManager-DeleteWindowsRegistryKey",
                "arn:aws:ssm:*:*:document/AWSFleetManager-DeleteWindowsRegistryValue",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetDiskInformation",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetFileContent",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetFileSystemContent",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetGroups",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetPerformanceCounters",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetProcessDetails",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetUsers",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetWindowsEvents",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetWindowsRegistryContent",
                "arn:aws:ssm:*:*:document/AWSFleetManager-MountVolume",
                "arn:aws:ssm:*:*:document/AWSFleetManager-MoveFileSystemItem",
                "arn:aws:ssm:*:*:document/AWSFleetManager-RemoveUsersFromGroups",
                "arn:aws:ssm:*:*:document/AWSFleetManager-RenameFileSystemItem",
                "arn:aws:ssm:*:*:document/AWSFleetManager-SetWindowsRegistryValue",
                "arn:aws:ssm:*:*:document/AWSFleetManager-StartProcess",
                "arn:aws:ssm:*:*:document/AWSFleetManager-TerminateProcess"
            ]
        },
        {
            "Sid": "TerminateSession",
            "Effect": "Allow",
            "Action": [
                "ssm:TerminateSession"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "ssm:resourceTag/aws:ssmmessages:session-id": [
                        "${aws:userid}"
                    ]
                }
            }
        }
    ]
}
```

------

## Sample policy for Fleet Manager read-only access


The following policy provides permissions for read-only Fleet Manager features. Replace each *example resource placeholder* with your own information.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "EC2",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:DescribeTags"
            ],
            "Resource": "*"
        },
        {
            "Sid": "General",
            "Effect": "Allow",
            "Action": [
                "ssm:DescribeInstanceAssociationsStatus",
                "ssm:DescribeInstancePatches",
                "ssm:DescribeInstancePatchStates",
                "ssm:DescribeInstanceProperties",
                "ssm:GetCommandInvocation",
                "ssm:GetServiceSetting",
                "ssm:GetInventorySchema",
                "ssm:ListComplianceItems",
                "ssm:ListInventoryEntries",
                "ssm:ListTagsForResource",
                "ssm:ListCommandInvocations",
                "ssm:ListAssociations"
            ],
            "Resource": "*"
        },
        {
            "Sid": "SendCommand",
            "Effect": "Allow",
            "Action": [
                "ssm:GetDocument",
                "ssm:SendCommand",
                "ssm:StartSession"
            ],
            "Resource": [
                "arn:aws:ec2:*:111122223333:instance/*",
                "arn:aws:ssm:*:111122223333:managed-instance/*",
                "arn:aws:ssm:*:111122223333:document/SSM-SessionManagerRunShell",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetDiskInformation",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetFileContent",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetFileSystemContent",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetGroups",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetPerformanceCounters",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetProcessDetails",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetUsers",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetWindowsEvents",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetWindowsRegistryContent"
            ]
        },
        {
            "Sid": "TerminateSession",
            "Effect": "Allow",
            "Action": [
                "ssm:TerminateSession"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "ssm:resourceTag/aws:ssmmessages:session-id": [
                        "${aws:userid}"
                    ]
                }
            }
        }
    ]
}
```

------

# Working with managed nodes


A *managed node* is any machine configured for AWS Systems Manager. You can configure the following machine types as managed nodes: 
+ Amazon Elastic Compute Cloud (Amazon EC2) instances
+ Servers on your own premises (on-premises servers)
+ AWS IoT Greengrass core devices
+ AWS IoT and non-AWS edge devices
+ Virtual machines (VMs), including VMs in other cloud environments

In the Systems Manager console, any machine prefixed with "mi-" has been configured as a managed node using a [*hybrid activation*](activations.md). Edge devices display their AWS IoT Thing name.

**Note**  
The only supported feature for macOS instances is viewing the file system.

**About Systems Manager instance tiers**  
AWS Systems Manager offers a standard-instances tier and an advanced-instances tier. Both support managed nodes in your [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment. The standard-instances tier allows you to register a maximum of 1,000 machines per AWS account per AWS Region. If you need to register more than 1,000 machines in a single account and Region, then use the advanced-instances tier. You can create as many managed nodes as you like in the advanced-instances tier. All managed nodes configured for Systems Manager are priced on a pay-per-use basis. For more information about enabling the advanced instances tier, see [Turning on the advanced-instances tier](fleet-manager-enable-advanced-instances-tier.md). For more information about pricing, see [AWS Systems Manager Pricing](https://aws.amazon.com/systems-manager/pricing/).

Note the following additional information about the standard-instances tier and advanced-instances tier:
+ Advanced instances also allow you to connect to your non-EC2 nodes in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment by using AWS Systems Manager Session Manager. Session Manager provides interactive shell access to your instances. For more information, see [AWS Systems Manager Session Manager](session-manager.md).
+ The standard-instances quota also applies to EC2 instances that use a Systems Manager on-premises activation (which isn't a common scenario).
+ To patch applications released by Microsoft on virtual machines (VMs) or on-premises instances, activate the advanced-instances tier. There is a charge to use the advanced-instances tier. There is no additional charge to patch applications released by Microsoft on Amazon Elastic Compute Cloud (Amazon EC2) instances. For more information, see [Patching applications released by Microsoft on Windows Server](patch-manager-patching-windows-applications.md).

**Display managed nodes**  
If you don't see your managed nodes listed in the console, then do the following:

1. Verify that the console is open in the AWS Region where you created your managed nodes. You can switch Regions by using the list in the upper-right corner of the console.

1. Verify that the setup steps for your managed nodes meet Systems Manager requirements. For information, see [Setting up managed nodes for AWS Systems Manager](systems-manager-setting-up-nodes.md).

1. For non-EC2 machines, verify that you completed the hybrid activation process. For more information, see [Managing nodes in hybrid and multicloud environments with Systems Manager](systems-manager-hybrid-multicloud.md).

Note the following additional information:
+ The Fleet Manager console does not display Amazon EC2 nodes that have been terminated.
+ Systems Manager requires accurate time references in order to perform operations on your machines. If the date and time aren't set correctly on your managed nodes, the machines might not match the signature date of your API requests. For more information, see [Use cases and best practices](systems-manager-best-practices.md).
+ When you create or edit tags, the system can take up to one hour to display changes in the table filter.
+ After the status of a managed node has been `Connection Lost` for at least 30 days, the node might no longer be listed in the Fleet Manager console. To restore it to the list, the issue that caused the lost connection must be resolved. For troubleshooting tips, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md).

**Verify Systems Manager support on a managed node**  
AWS Config provides AWS Managed Rules, which are predefined, customizable rules that AWS Config uses to evaluate whether your AWS resource configurations comply with common best practices. AWS Config Managed Rules include the [ec2-instance-managed-by-systems-manager](https://docs.aws.amazon.com/config/latest/developerguide/ec2-instance-managed-by-systems-manager.html) rule. This rule checks whether the Amazon EC2 instances in your account are managed by Systems Manager. For more information, see [AWS Config Managed Rules](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_use-managed-rules.html). 

**Increase security posture on managed nodes**  
For information about increasing your security posture against unauthorized root-level commands on your managed nodes, see [Restricting access to root-level commands through SSM Agent](ssm-agent-restrict-root-level-commands.md).

**Deregister managed nodes**  
You can deregister managed nodes at any time. For example, if you're managing multiple nodes with the same AWS Identity and Access Management (IAM) role and you notice any kind of malicious behavior, you can deregister any number of machines at any point. (In order to re-register the same machine, you must use a different hybrid Activation Code and Activation ID than previously used to register it.) For information about deregistering managed nodes, see [Deregistering managed nodes in a hybrid and multicloud environment](fleet-manager-deregister-hybrid-nodes.md).

**Topics**
+ [

# Configuring instance tiers
](fleet-manager-configure-instance-tiers.md)
+ [

# Resetting passwords on managed nodes
](fleet-manager-reset-password.md)
+ [

# Deregistering managed nodes in a hybrid and multicloud environment
](fleet-manager-deregister-hybrid-nodes.md)
+ [

# Working with OS file systems using Fleet Manager
](fleet-manager-file-system-management.md)
+ [

# Monitoring managed node performance
](fleet-manager-monitoring-node-performance.md)
+ [

# Working with processes
](fleet-manager-manage-processes.md)
+ [

# Viewing logs on managed nodes
](fleet-manager-view-node-logs.md)
+ [

# Managing OS user accounts and groups on managed nodes using Fleet Manager
](fleet-manager-manage-os-user-accounts.md)
+ [

# Managing the Windows registry on managed nodes
](fleet-manager-manage-windows-registry.md)

# Configuring instance tiers


This topic describes the scenarios in which you must activate the advanced-instances tier.

AWS Systems Manager offers a standard-instances tier and an advanced-instances tier for non-EC2 machines in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment. 

You can register up to 1,000 standard [hybrid-activated nodes](activations.md) per account per AWS Region at no additional cost. However, registering more than 1,000 hybrid nodes requires that you activate the advanced-instances tier. There is a charge to use the advanced-instances tier. For more information, see [AWS Systems Manager Pricing](https://aws.amazon.com/systems-manager/pricing/).
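Because the tier decision hinges on that 1,000-node count, the following sketch checks how many hybrid-activated nodes are registered in the current Region. It assumes AWS credentials are configured; the guard makes it a no-op where the AWS CLI isn't installed.

```shell
# Count hybrid-activated (non-EC2) managed nodes in the current Region.
# The CLI paginates describe-instance-information automatically.
TIER_LIMIT=1000   # standard-instances tier registration limit
if command -v aws >/dev/null 2>&1; then
  aws ssm describe-instance-information \
      --filters "Key=ResourceType,Values=ManagedInstance" \
      --query "length(InstanceInformationList)" \
      --output text || true   # requires AWS credentials
fi
echo "Standard-instances tier limit: ${TIER_LIMIT}"
```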

Even with fewer than 1,000 registered hybrid-activated nodes, two other scenarios require the advanced-instances tier: 
+ You want to use Session Manager to connect to non-EC2 nodes.
+ You want to patch applications (not operating systems) released by Microsoft on non-EC2 nodes.
**Note**  
There is no charge to patch applications released by Microsoft on Amazon EC2 instances.

## Advanced-instances tier detailed scenarios


The following information provides details on the three scenarios for which you must activate the advanced-instances tier.

Scenario 1: You want to register more than 1,000 hybrid-activated nodes  
Using the standard-instances tier, you can register a maximum of 1,000 non-EC2 nodes in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment per AWS Region in a specific account without additional charge. If you need to register more than 1,000 non-EC2 nodes in a Region, you must use the advanced-instances tier. You can then activate as many machines for your hybrid and multicloud environment as you want. Charges for the advanced-instances tier are based on the number of advanced nodes activated as Systems Manager managed nodes and the hours those nodes are running.  
All Systems Manager managed nodes that use the activation process described in [Create a hybrid activation to register nodes with Systems Manager](hybrid-activation-managed-nodes.md) are then subject to charge if you exceed 1,000 on-premises nodes in a Region in a specific account.  
You can also activate existing Amazon Elastic Compute Cloud (Amazon EC2) instances using Systems Manager hybrid activations and work with them as non-EC2 instances, such as for testing. These also qualify as hybrid nodes. This isn't a common scenario.

Scenario 2: Patching Microsoft-released applications on hybrid-activated nodes  
The advanced-instances tier is also required if you want to patch Microsoft-released applications on non-EC2 nodes in a hybrid and multicloud environment. If you activate the advanced-instances tier to patch Microsoft applications on non-EC2 nodes, charges are then incurred for all on-premises nodes, even if you have fewer than 1,000.  
There is no additional charge to patch applications released by Microsoft on Amazon Elastic Compute Cloud (Amazon EC2) instances. For more information, see [Patching applications released by Microsoft on Windows Server](patch-manager-patching-windows-applications.md).

Scenario 3: Connecting to hybrid-activated nodes using Session Manager  
Session Manager provides interactive shell access to your instances. To connect to hybrid-activated managed nodes using Session Manager, you must activate the advanced-instances tier. Charges are then incurred for all hybrid-activated nodes, even if you have fewer than 1,000.

**Summary: When do I need the advanced-instances tier?**  
Use the following table to review when you must use the advanced-instances tier, and for which scenarios additional charges apply.


****  

| Scenario | Advanced-instances tier required? | Additional charges apply? | 
| --- | --- | --- | 
|  The number of hybrid-activated nodes in my Region in a specific account is more than 1,000.  | Yes | Yes | 
|  I want to use Patch Manager to patch Microsoft-released applications on any number of hybrid-activated nodes, even fewer than 1,000.  | Yes | Yes | 
|  I want to use Session Manager to connect to any number of hybrid-activated nodes, even fewer than 1,000.  | Yes | Yes | 
|  The number of hybrid-activated nodes in my Region in a specific account is 1,000 or fewer, and I don't use Session Manager to connect to them or Patch Manager to patch Microsoft-released applications on them.  | No | No | 

**Topics**
+ [

## Advanced-instances tier detailed scenarios
](#systems-manager-managed-instances-tier-scenarios)
+ [

# Turning on the advanced-instances tier
](fleet-manager-enable-advanced-instances-tier.md)
+ [

# Reverting from the advanced-instances tier to the standard-instances tier
](fleet-manager-revert-to-standard-tier.md)

# Turning on the advanced-instances tier


AWS Systems Manager offers a standard-instances tier and an advanced-instances tier for non-EC2 machines in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment. The standard-instances tier lets you register a maximum of 1,000 hybrid-activated machines per AWS account per AWS Region. The advanced-instances tier is also required to use Patch Manager to patch Microsoft-released applications on non-EC2 nodes, and to connect to non-EC2 nodes using Session Manager. For more information, see [Configuring instance tiers](fleet-manager-configure-instance-tiers.md).

This section describes how to configure your hybrid and multicloud environment to use the advanced-instances tier.

**Before you begin**  
Review pricing details for advanced instances. Advanced instances are available on a per-use basis. For more information, see [AWS Systems Manager Pricing](https://aws.amazon.com/systems-manager/pricing/).

## Configuring permissions to turn on the advanced-instances tier


Verify that you have permission in AWS Identity and Access Management (IAM) to change your environment from the standard-instances tier to the advanced-instances tier. You must either have the `AdministratorAccess` IAM policy attached to your user, group, or role, or you must have permission to change the Systems Manager activation-tier service setting. The activation-tier setting uses the following API operations: 
+ [GetServiceSetting](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetServiceSetting.html)
+ [UpdateServiceSetting](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_UpdateServiceSetting.html)
+ [ResetServiceSetting](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_ResetServiceSetting.html)
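All three operations take the same service-setting ID, whose ARN has the following shape (the Region and account ID here are examples):

```shell
# Build the activation-tier service-setting ARN for a given Region/account.
REGION="us-east-1"          # replace with your Region
ACCOUNT_ID="111122223333"   # replace with your account ID
SETTING_ARN="arn:aws:ssm:${REGION}:${ACCOUNT_ID}:servicesetting/ssm/managed-instance/activation-tier"
echo "$SETTING_ARN"
```

The same ARN appears in the `Resource` element of the IAM policies below and as the `--setting-id` value in the CLI procedures later in this topic.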

Use the following procedure to add an inline IAM policy to a user account. This policy allows a user to view the current managed-instance tier setting, and to change or reset that setting in the specified AWS account and AWS Region.

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Users**.

1. In the list, choose the name of the user to embed a policy in.

1. Choose the **Permissions** tab.

1. On the right side of the page, under **Permission policies**, choose **Add inline policy**. 

1. Choose the **JSON** tab.

1. Replace the default content with the following:

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "ssm:GetServiceSetting"
               ],
               "Resource": "*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "ssm:ResetServiceSetting",
                   "ssm:UpdateServiceSetting"
               ],
               "Resource": "arn:aws:ssm:us-east-1:111122223333:servicesetting/ssm/managed-instance/activation-tier"
           }
       ]
   }
   ```

------

1. Choose **Review policy**.

1. On the **Review policy** page, for **Name**, enter a name for the inline policy. For example: **Managed-Instances-Tier**.

1. Choose **Create policy**.

Administrators can specify read-only permission by assigning the following inline policy to the user.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:GetServiceSetting"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Deny",
            "Action": [
                "ssm:ResetServiceSetting",
                "ssm:UpdateServiceSetting"
            ],
            "Resource": "*"
        }
    ]
}
```

------

For more information about creating and editing IAM policies, see [Creating IAM Policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the *IAM User Guide*.

## Turning on the advanced-instances tier (console)


The following procedure shows you how to use the Systems Manager console to change *all* non-EC2 nodes that were added using managed-instance activation, in the specified AWS account and AWS Region, to use the advanced-instances tier.

**Before you begin**  
Verify that the console is open in the AWS Region where you created your managed instances. You can switch Regions by using the list in the upper-right corner of the console. 

Verify that you have completed the setup requirements for your Amazon Elastic Compute Cloud (Amazon EC2) instances and non-EC2 machines in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment. For information, see [Setting up managed nodes for AWS Systems Manager](systems-manager-setting-up-nodes.md).

**Important**  
The following procedure describes how to change an account-level setting. This change results in charges being billed to your account.

**To turn on the advanced-instances tier (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose **Settings**, and then choose **Change instance tier settings**.

1. Review the information in the dialog box about changing account settings.

1. If you approve, choose the option to accept, and then choose **Change setting**.

The system can take several minutes to complete the process of moving all instances from the standard-instances tier to the advanced-instances tier.

**Note**  
For information about changing back to the standard-instances tier, see [Reverting from the advanced-instances tier to the standard-instances tier](fleet-manager-revert-to-standard-tier.md).

## Turning on the advanced-instances tier (AWS CLI)


The following procedure shows you how to use the AWS Command Line Interface to change *all* on-premises servers and VMs that were added using managed-instance activation, in the specified AWS account and AWS Region, to use the advanced-instances tier.

**Important**  
The following procedure describes how to change an account-level setting. This change results in charges being billed to your account.

**To turn on the advanced-instances tier using the AWS CLI**

1. Open the AWS CLI and run the following command. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws ssm update-service-setting \
       --setting-id arn:aws:ssm:region:aws-account-id:servicesetting/ssm/managed-instance/activation-tier \
       --setting-value advanced
   ```

------
#### [ Windows ]

   ```
   aws ssm update-service-setting ^
       --setting-id arn:aws:ssm:region:aws-account-id:servicesetting/ssm/managed-instance/activation-tier ^
       --setting-value advanced
   ```

------

   There is no output if the command succeeds.

1. Run the following command to view the current service settings for managed nodes in the current AWS account and AWS Region.

------
#### [ Linux & macOS ]

   ```
   aws ssm get-service-setting \
       --setting-id arn:aws:ssm:region:aws-account-id:servicesetting/ssm/managed-instance/activation-tier
   ```

------
#### [ Windows ]

   ```
   aws ssm get-service-setting ^
       --setting-id arn:aws:ssm:region:aws-account-id:servicesetting/ssm/managed-instance/activation-tier
   ```

------

   The command returns information like the following.

   ```
   {
       "ServiceSetting": {
           "SettingId": "/ssm/managed-instance/activation-tier",
           "SettingValue": "advanced",
           "LastModifiedDate": 1555603376.138,
           "LastModifiedUser": "arn:aws:sts::123456789012:assumed-role/Administrator/User_1",
           "ARN": "arn:aws:ssm:us-east-2:123456789012:servicesetting/ssm/managed-instance/activation-tier",
           "Status": "PendingUpdate"
       }
   }
   ```

## Turning on the advanced-instances tier (PowerShell)


The following procedure shows you how to use the AWS Tools for Windows PowerShell to change *all* on-premises servers and VMs that were added using managed-instance activation, in the specified AWS account and AWS Region, to use the advanced-instances tier.

**Important**  
The following procedure describes how to change an account-level setting. This change results in charges being billed to your account.

**To turn on the advanced-instances tier using PowerShell**

1. Open AWS Tools for Windows PowerShell and run the following command. Replace each *example resource placeholder* with your own information.

   ```
   Update-SSMServiceSetting `
       -SettingId "arn:aws:ssm:region:aws-account-id:servicesetting/ssm/managed-instance/activation-tier" `
       -SettingValue "advanced"
   ```

   There is no output if the command succeeds.

1. Run the following command to view the current service settings for managed nodes in the current AWS account and AWS Region.

   ```
   Get-SSMServiceSetting `
       -SettingId "arn:aws:ssm:region:aws-account-id:servicesetting/ssm/managed-instance/activation-tier"
   ```

   The command returns information like the following.

   ```
   ARN:arn:aws:ssm:us-east-2:123456789012:servicesetting/ssm/managed-instance/activation-tier
   LastModifiedDate : 4/18/2019 4:02:56 PM
   LastModifiedUser : arn:aws:sts::123456789012:assumed-role/Administrator/User_1
   SettingId        : /ssm/managed-instance/activation-tier
   SettingValue     : advanced
   Status           : PendingUpdate
   ```

The system can take several minutes to complete the process of moving all nodes from the standard-instances tier to the advanced-instances tier.

**Note**  
For information about changing back to the standard-instances tier, see [Reverting from the advanced-instances tier to the standard-instances tier](fleet-manager-revert-to-standard-tier.md).

# Reverting from the advanced-instances tier to the standard-instances tier


This section describes how to change hybrid-activated nodes running in the advanced-instances tier back to the standard-instances tier. This configuration applies to all hybrid-activated nodes in an AWS account and a single AWS Region.

**Before you begin**  
Review the following important details.

**Note**  
You can't revert to the standard-instances tier if you're running more than 1,000 hybrid-activated nodes in the account and Region. You must first deregister nodes until you have 1,000 or fewer. This also applies to Amazon Elastic Compute Cloud (Amazon EC2) instances that use a Systems Manager hybrid activation (which isn't a common scenario). For more information, see [Deregistering managed nodes in a hybrid and multicloud environment](fleet-manager-deregister-hybrid-nodes.md).
After you revert, you won't be able to use Session Manager, a tool in AWS Systems Manager, to interactively access your hybrid-activated nodes.
After you revert, you won't be able to use Patch Manager, a tool in AWS Systems Manager, to patch applications released by Microsoft on hybrid-activated nodes.
The process of reverting all hybrid-activated nodes to the standard-instances tier can take 30 minutes or more to complete.


## Reverting to the standard-instances tier (console)


The following procedure shows you how to use the Systems Manager console to change all hybrid-activated nodes in your [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment to use the standard-instances tier in the specified AWS account and AWS Region.

**To revert to the standard-instances tier (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Select the **Account settings** dropdown and choose **Instance tier settings**.

1. Choose **Change account setting**.

1. Review the information in the pop-up about changing account settings, and then if you approve, choose the option to accept and continue.

## Reverting to the standard-instances tier (AWS CLI)


The following procedure shows you how to use the AWS Command Line Interface to change all hybrid-activated nodes in your [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment to use the standard-instances tier in the specified AWS account and AWS Region.

**To revert to the standard-instances tier using the AWS CLI**

1. Open the AWS CLI and run the following command. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws ssm update-service-setting \
       --setting-id arn:aws:ssm:region:aws-account-id:servicesetting/ssm/managed-instance/activation-tier \
       --setting-value standard
   ```

------
#### [ Windows ]

   ```
   aws ssm update-service-setting ^
       --setting-id arn:aws:ssm:region:aws-account-id:servicesetting/ssm/managed-instance/activation-tier ^
       --setting-value standard
   ```

------

   There is no output if the command succeeds.

1. Run the following command 30 minutes later to view the settings for managed instances in the current AWS account and AWS Region.

------
#### [ Linux & macOS ]

   ```
   aws ssm get-service-setting \
       --setting-id arn:aws:ssm:region:aws-account-id:servicesetting/ssm/managed-instance/activation-tier
   ```

------
#### [ Windows ]

   ```
   aws ssm get-service-setting ^
       --setting-id arn:aws:ssm:region:aws-account-id:servicesetting/ssm/managed-instance/activation-tier
   ```

------

   The command returns information like the following.

   ```
   {
       "ServiceSetting": {
           "SettingId": "/ssm/managed-instance/activation-tier",
           "SettingValue": "standard",
           "LastModifiedDate": 1555603376.138,
           "LastModifiedUser": "System",
           "ARN": "arn:aws:ssm:us-east-2:123456789012:servicesetting/ssm/managed-instance/activation-tier",
           "Status": "Default"
       }
   }
   ```

   The status changes to *Default* after the request has been approved.

## Reverting to the standard-instances tier (PowerShell)


The following procedure shows you how to use AWS Tools for Windows PowerShell to change hybrid-activated nodes in your hybrid and multicloud environment to use the standard-instances tier in the specified AWS account and AWS Region.

**To revert to the standard-instances tier using PowerShell**

1. Open AWS Tools for Windows PowerShell and run the following command.

   ```
   Update-SSMServiceSetting `
       -SettingId "arn:aws:ssm:region:aws-account-id:servicesetting/ssm/managed-instance/activation-tier" `
       -SettingValue "standard"
   ```

   There is no output if the command succeeds.

1. Run the following command 30 minutes later to view the settings for managed instances in the current AWS account and AWS Region.

   ```
   Get-SSMServiceSetting `
       -SettingId "arn:aws:ssm:region:aws-account-id:servicesetting/ssm/managed-instance/activation-tier"
   ```

   The command returns information like the following.

   ```
   ARN: arn:aws:ssm:us-east-2:123456789012:servicesetting/ssm/managed-instance/activation-tier
   LastModifiedDate : 4/18/2019 4:02:56 PM
   LastModifiedUser : System
   SettingId        : /ssm/managed-instance/activation-tier
   SettingValue     : standard
   Status           : Default
   ```

   The status changes to *Default* after the request has been approved.

# Resetting passwords on managed nodes


You can reset the password for any user on a managed node. This includes Amazon Elastic Compute Cloud (Amazon EC2) instances; AWS IoT Greengrass core devices; and on-premises servers, edge devices, and virtual machines (VMs) that are managed by AWS Systems Manager. The password reset functionality is built on Session Manager, a tool in AWS Systems Manager. You can use this functionality to connect to managed nodes without opening inbound ports, maintaining bastion hosts, or managing SSH keys. 

Password reset is useful when a user has forgotten a password, or when you want to quickly update a password without making an RDP or SSH connection to a managed node. 

**Prerequisites**  
Before you can reset the password on a managed node, the following requirements must be met:
+ The managed node on which you want to change a password must be a Systems Manager managed node, and SSM Agent version 2.3.668.0 or later must be installed on it. For information about installing or updating SSM Agent, see [Working with SSM Agent](ssm-agent.md).
+ The password reset functionality uses the Session Manager configuration that is set up for your account to connect to the managed node. Therefore, the prerequisites for using Session Manager must have been completed for your account in the current AWS Region. For more information, see [Setting up Session Manager](session-manager-getting-started.md).
**Note**  
Session Manager support for on-premises nodes is provided for the advanced-instances tier only. For more information, see [Turning on the advanced-instances tier](fleet-manager-enable-advanced-instances-tier.md).
+ The AWS user who is changing the password must have the `ssm:SendCommand` permission for the managed node. For more information, see [Restricting Run Command access based on tags](run-command-setting-up.md#tag-based-access).

**Restricting access**  
You can limit a user's ability to reset passwords to specific managed nodes. This is done by using identity-based policies for the Session Manager `ssm:StartSession` operation with the `AWS-PasswordReset` SSM document. For more information, see [Control user session access to instances](session-manager-getting-started-restrict-access.md).
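As a sketch of that pattern (the instance ID, Region, and account number below are placeholders), an identity-based policy can allow `ssm:StartSession` only when the session request targets a specific node together with the `AWS-PasswordReset` document:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ssm:StartSession",
            "Resource": [
                "arn:aws:ec2:us-east-2:123456789012:instance/i-1234567890abcdef0",
                "arn:aws:ssm:us-east-2::document/AWS-PasswordReset"
            ]
        }
    ]
}
```

Because `AWS-PasswordReset` is an AWS-owned document, its ARN omits the account ID.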

**Encrypting data**  
Turn on AWS Key Management Service (AWS KMS) encryption of Session Manager data to use the password reset option for managed nodes. For more information, see [Turn on KMS key encryption of session data (console)](session-preferences-enable-encryption.md).

## Reset a password on a managed node


You can reset a password on a Systems Manager managed node using the Systems Manager **Fleet Manager** console or the AWS Command Line Interface (AWS CLI).

**To change the password on a managed node (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the node that needs a new password.

1. Choose **Instance actions, Reset password**.

1. For **User name**, enter the name of the user for which you're changing the password. This can be any user name that has an account on the node.

1. Choose **Submit**.

1. Follow the prompts in the **Enter new password** command window to specify the new password.
**Note**  
If the version of SSM Agent on the managed node doesn't support password resets, you're prompted to install a supported version using Run Command, a tool in AWS Systems Manager.

**To reset the password on a managed node (AWS CLI)**

1. To reset the password for a user on a managed node, run the following command. Replace each *example resource placeholder* with your own information.
**Note**  
To use the AWS CLI to reset a password, the Session Manager plugin must be installed on your local machine. For information, see [Install the Session Manager plugin for the AWS CLI](session-manager-working-with-install-plugin.md).

------
#### [ Linux & macOS ]

   ```
   aws ssm start-session \
       --target instance-id \
       --document-name "AWS-PasswordReset" \
       --parameters '{"username": ["user-name"]}'
   ```

------
#### [ Windows ]

   ```
   aws ssm start-session ^
       --target instance-id ^
       --document-name "AWS-PasswordReset" ^
       --parameters username="user-name"
   ```

------

1. Follow the prompts in the **Enter new password** command window to specify the new password.

## Troubleshoot password resets on managed nodes


Many password reset issues can be resolved by ensuring that you have completed the [password reset prerequisites](#pw-reset-prereqs). For other problems, use the following information to help you troubleshoot password reset issues.

**Topics**
+ [

### Managed node not available
](#password-reset-troubleshooting-instances)
+ [

### SSM Agent not up-to-date (console)
](#password-reset-troubleshooting-ssmagent-console)
+ [

### Password reset options aren't provided (AWS CLI)
](#password-reset-troubleshooting-ssmagent-cli)
+ [

### No authorization to run `ssm:SendCommand`
](#password-reset-troubleshooting-sendcommand)
+ [

### Session Manager error message
](#password-reset-troubleshooting-session-manager)

### Managed node not available


**Problem**: You want to reset the password for a managed node on the **Managed instances** console page, but the node isn't in the list.
+ **Solution**: The managed node you want to connect to might not be configured for Systems Manager. To use an EC2 instance with Systems Manager, an AWS Identity and Access Management (IAM) instance profile that gives Systems Manager permission to perform actions on your instances must be attached to the instance. For information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md). 

  To use a non-EC2 machine with Systems Manager, create an IAM service role that gives Systems Manager permission to perform actions on your managed nodes. For more information, see [Create the IAM service role required for Systems Manager in hybrid and multicloud environments](hybrid-multicloud-service-role.md). (Session Manager support for on-premises servers and VMs is provided for the advanced-instances tier only. For more information, see [Turning on the advanced-instances tier](fleet-manager-enable-advanced-instances-tier.md).)

### SSM Agent not up-to-date (console)


**Problem**: A message reports that the version of SSM Agent doesn't support password reset functionality.
+ **Solution**: Version 2.3.668.0 or later of SSM Agent is required to perform password resets. In the console, you can update the agent on the managed node by choosing **Update SSM Agent**. 

  An updated version of SSM Agent is released whenever new tools are added to Systems Manager or updates are made to existing tools. Failing to use the latest version of the agent can prevent your managed node from using various Systems Manager tools and features. For that reason, we recommend that you automate the process of keeping SSM Agent up to date on your machines. For information, see [Automating updates to SSM Agent](ssm-agent-automatic-updates.md). Subscribe to the [SSM Agent Release Notes](https://github.com/aws/amazon-ssm-agent/blob/mainline/RELEASENOTES.md) page on GitHub to get notifications about SSM Agent updates.

### Password reset options aren't provided (AWS CLI)


**Problem**: You connect successfully to a managed node using the AWS CLI [start-session](https://docs.aws.amazon.com/cli/latest/reference/ssm/start-session.html) command. You specified the SSM document `AWS-PasswordReset` and provided a valid user name, but prompts to change the password aren't displayed.
+ **Solution**: The version of SSM Agent on the managed node isn't up-to-date. Version 2.3.668.0 or later is required to perform password resets. 

  To avoid such issues, we recommend that you automate the process of keeping SSM Agent up to date on your machines. For information, see [Automating updates to SSM Agent](ssm-agent-automatic-updates.md). Subscribe to the [SSM Agent Release Notes](https://github.com/aws/amazon-ssm-agent/blob/mainline/RELEASENOTES.md) page on GitHub to get notifications about SSM Agent updates.

### No authorization to run `ssm:SendCommand`


**Problem**: You attempt to connect to a managed node to change the password but receive an error message saying that you aren't authorized to run `ssm:SendCommand` on the managed node.
+ **Solution**: Your IAM policy must include permission to run the `ssm:SendCommand` command. For information, see [Restricting Run Command access based on tags](run-command-setting-up.md#tag-based-access).

### Session Manager error message


**Problem**: You receive an error message related to Session Manager.
+ **Solution**: Password reset support requires that Session Manager is configured correctly. For information, see [Setting up Session Manager](session-manager-getting-started.md) and [Troubleshooting Session Manager](session-manager-troubleshooting.md).

# Deregistering managed nodes in a hybrid and multicloud environment


If you no longer want to manage an on-premises server, edge device, or virtual machine (VM) by using AWS Systems Manager, you can deregister it. Deregistering a hybrid-activated node removes it from the list of managed nodes in Systems Manager. Because the node is no longer registered, AWS Systems Manager Agent (SSM Agent) running on it can't refresh its authorization token. SSM Agent hibernates and reduces its ping frequency to Systems Manager in the cloud to once per hour. Systems Manager stores the command history for a deregistered managed node for 30 days.

**Note**  
You can reregister an on-premises server, edge device, or VM using the same activation code and ID as long as you haven't reached the instance limit for the designated activation code and ID. You can verify the instance limit in the console by choosing **Node tools**, and then choosing **Hybrid activations**. If the value of **Registered instances** is less than **Registration limit**, you can reregister a machine using the same activation code and ID. If the limit has been reached, you must use a different activation code and ID.
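You can also check these values from the AWS CLI instead of the console. This sketch assumes the CLI is configured for the account and Region where the activation was created.

```shell
# Compare RegistrationsCount with RegistrationLimit for each activation.
aws ssm describe-activations \
    --query "ActivationList[].[ActivationId,RegistrationsCount,RegistrationLimit]" \
    --output table
```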

The following procedure describes how to deregister a hybrid-activated node by using the Systems Manager console. For information about how to do this by using the AWS Command Line Interface, see [deregister-managed-instance](https://docs.aws.amazon.com/cli/latest/reference/ssm/deregister-managed-instance.html).
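For reference, the CLI equivalent of the console procedure below is a single command. The managed node ID is a placeholder; hybrid-activated nodes use IDs that begin with `mi-`.

```shell
# Deregister a hybrid-activated managed node.
# Command history for the node is retained for 30 days after deregistration.
aws ssm deregister-managed-instance \
    --instance-id mi-0123456789abcdef0
```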

For related information, see the following topics:
+ [Deregister and reregister a managed node (Linux)](hybrid-multicloud-ssm-agent-install-linux.md#systems-manager-install-managed-linux-deregister-reregister) (Linux)
+ [Deregister and reregister a managed node (Windows Server)](hybrid-multicloud-ssm-agent-install-windows.md#systems-manager-install-managed-win-deregister-reregister) (Windows Server)

**To deregister a hybrid-activated node (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Select the checkbox next to the managed node that you want to deregister.

1. Choose **Node actions, Tools, Deregister this managed node**.

1. Review the information in the **Deregister this managed node** dialog box. If you approve, choose **Deregister**.

# Working with OS file systems using Fleet Manager
Working with OS file systems

You can use Fleet Manager, a tool in AWS Systems Manager, to work with the file system on your managed nodes. Using Fleet Manager, you can view information about the directory and file data stored on the volumes attached to your managed nodes. For example, you can view the name, size, extension, owner, and permissions for your directories and files. Up to 10,000 lines of file data can be previewed as text from the Fleet Manager console. You can also use this feature to `tail` files. When using `tail` to view file data, the last 10 lines of the file are displayed initially. As new lines of data are written to the file, the view is updated in real time. As a result, you can review log data from the console, which can improve the efficiency of your troubleshooting and systems administration. Additionally, you can create directories and copy, cut, paste, rename, or delete files and directories.

We recommend creating regular backups, or taking snapshots of the Amazon Elastic Block Store (Amazon EBS) volumes attached to your managed nodes. When copying, or cutting and pasting files, existing files and directories in the destination path with the same name as the new files or directories are replaced. Serious problems can occur if you replace or modify system files and directories. AWS doesn't guarantee that these problems can be solved. Modify system files at your own risk. You're responsible for all file and directory changes, and ensuring you have backups. Deleting or replacing files and directories can't be undone.

**Note**  
Fleet Manager uses Session Manager, a tool in AWS Systems Manager, to view text previews and `tail` files. For Amazon Elastic Compute Cloud (Amazon EC2) instances, the instance profile attached to your managed instances must provide permissions for Session Manager to use this feature. For more information about adding Session Manager permissions to an instance profile, see [Add Session Manager permissions to an existing IAM role](getting-started-add-permissions-to-existing-profile.md).

**Topics**
+ [Viewing the OS file system using Fleet Manager](fleet-manager-viewing-file-system.md)
+ [Previewing OS files using Fleet Manager](fleet-manager-preview-os-files.md)
+ [Tailing OS files using Fleet Manager](fleet-manager-tailing-os-files.md)
+ [Copying, cutting, and pasting OS files or directories using Fleet Manager](fleet-manager-move-files-or-directories.md)
+ [Renaming OS files and directories using Fleet Manager](fleet-manager-renaming-files-and-directories.md)
+ [Deleting OS files and directories using Fleet Manager](fleet-manager-deleting-files-and-directories.md)
+ [Creating OS directories using Fleet Manager](fleet-manager-creating-directories.md)
+ [Cutting, copying, and pasting OS directories using Fleet Manager](fleet-manager-managing-directories.md)

# Viewing the OS file system using Fleet Manager
Viewing the OS file system

You can use Fleet Manager to view the OS file system on a Systems Manager managed node. 

**To view the OS file system using Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Select the link of the managed node with the file system you want to view.

1. Choose **Tools, File system**.

# Previewing OS files using Fleet Manager
Previewing OS files

You can use Fleet Manager to preview text files on an OS.

**To view text previews of files using Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Select the link of the managed node with the files you want to preview.

1. Choose **Tools, File system**.

1. Select the **File name** of the directory that contains the file you want to preview.

1. Choose the button next to the file whose content you want to preview.

1. Choose **Actions, Preview as text**.

# Tailing OS files using Fleet Manager
Tailing OS files

You can use Fleet Manager to tail a file on a managed node.

**To tail OS files with Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Select the link of the managed node with the files you want to tail.

1. Choose **Tools, File system**.

1. Select the **File name** of the directory that contains the file you want to tail.

1. Choose the button next to the file whose content you want to tail.

1. Choose **Actions, Tail file**.

# Copying, cutting, and pasting OS files or directories using Fleet Manager
Copying, cutting, and pasting OS files or directories

You can use Fleet Manager to copy, cut, and paste OS files on a managed node.

**To copy or cut and paste files or directories using Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Select the link of the managed node with the files you want to copy, or cut and paste.

1. Choose **Tools, File system**.

1. To copy or cut a file, select the **File name** of the directory that contains the file you want to copy or cut. To copy or cut a directory, choose the button next to the directory that you want to copy or cut and then proceed to step 7.

1. Choose the button next to the file you want to copy or cut.

1. In the **Actions** menu, choose **Copy** or **Cut**.

1. In the **File system** view, choose the button next to the directory you want to paste the file in.

1. In the **Actions** menu, choose **Paste**.

# Renaming OS files and directories using Fleet Manager
Renaming OS files and directories

You can use Fleet Manager to rename files and directories on a managed node in your account.

**To rename files or directories with Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Select the link of the managed node with the files or directories you want to rename.

1. Choose **Tools, File system**.

1. To rename a file, select the **File name** of the directory that contains the file you want to rename. To rename a directory, choose the button next to the directory that you want to rename and then proceed to step 7.

1. Choose the button next to the file you want to rename.

1. Choose **Actions, Rename**.

1. For **File name**, enter the new name for the file and select **Rename**.

# Deleting OS files and directories using Fleet Manager
Deleting OS files and directories

You can use Fleet Manager to delete files and directories on a managed node in your account.

**To delete files or directories using Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Select the link of the managed node with the files or directories you want to delete.

1. Choose **Tools, File system**.

1. To delete a file, select the **File name** of the directory that contains the file you want to delete. To delete a directory, choose the button next to the directory that you want to delete and then proceed to step 7.

1. Choose the button next to the file you want to delete.

1. Choose **Actions, Delete**.

# Creating OS directories using Fleet Manager
Creating OS directories

You can use Fleet Manager to create directories on a managed node in your account.

**To create a directory using Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Select the link of the managed node you want to create a directory in.

1. Choose **Tools, File system**.

1. Select the **File name** of the directory where you want to create a new directory.

1. Select **Create directory**.

1. For **Directory name**, enter the name for the new directory, and then select **Create directory**.

# Cutting, copying, and pasting OS directories using Fleet Manager
Cutting, copying, and pasting OS directories

You can use Fleet Manager to cut, copy, and paste directories on a managed node in your account.

**To copy or cut and paste directories with Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Select the link of the managed node with the directories you want to copy, or cut and paste.

1. Choose **Tools, File system**.

1. Choose the button next to the directory that you want to copy or cut.

1. In the **Actions** menu, choose **Copy** or **Cut**.

1. In the **File system** view, choose the button next to the directory you want to paste into.

1. In the **Actions** menu, choose **Paste**.

# Monitoring managed node performance
Monitoring performance

You can use Fleet Manager, a tool in AWS Systems Manager, to view performance data about your managed nodes in real time. The performance data is retrieved from performance counters.

The following performance counters are available in Fleet Manager:
+ CPU utilization
+ Disk input/output (I/O) utilization
+ Network traffic
+ Memory usage

**Note**  
Fleet Manager uses Session Manager, a tool in AWS Systems Manager, to retrieve performance data. For Amazon Elastic Compute Cloud (Amazon EC2) instances, the instance profile attached to your managed instances must provide permissions for Session Manager to use this feature. For more information about adding Session Manager permissions to an instance profile, see [Add Session Manager permissions to an existing IAM role](getting-started-add-permissions-to-existing-profile.md).

**To view performance data with Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed node whose performance you want to monitor.

1. Choose **View details**.

1. Choose **Tools, Performance counters**.

# Working with processes


You can use Fleet Manager, a tool in AWS Systems Manager, to work with processes on your managed nodes. Using Fleet Manager, you can view information about processes. For example, you can see the CPU utilization and memory usage of processes in addition to their handles and threads. With Fleet Manager, you can start and terminate processes from the console.

**Note**  
Fleet Manager uses Session Manager, a tool in AWS Systems Manager, to retrieve process data. For Amazon Elastic Compute Cloud (Amazon EC2) instances, the instance profile attached to your managed instances must provide permissions for Session Manager to use this feature. For more information about adding Session Manager permissions to an instance profile, see [Add Session Manager permissions to an existing IAM role](getting-started-add-permissions-to-existing-profile.md).

**Topics**
+ [Viewing details about OS processes using Fleet Manager](fleet-manager-view-process-details.md)
+ [Starting an OS process on a managed node using Fleet Manager](fleet-manager-start-process.md)
+ [Terminating an OS process using Fleet Manager](fleet-manager-terminate-process.md)

# Viewing details about OS processes using Fleet Manager
Viewing details about OS processes

You can use Fleet Manager to view details about processes on your managed nodes.

**To view details about processes with Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Select the link of the node whose processes you want to view.

1. Choose **Tools, Processes**.

# Starting an OS process on a managed node using Fleet Manager
Starting an OS process on a managed node

You can use Fleet Manager to start a process on a managed node.

**To start a process with Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Select the link of the managed node you want to start a process on.

1. Choose **Tools, Processes**.

1. Select **Start new process**.

1. For **Process name or full path**, enter the name of the process or the full path to the executable.

1. (Optional) For **Working directory**, enter the directory path where you want the process to run.

# Terminating an OS process using Fleet Manager
Terminating an OS process

**To terminate an OS process using Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Select the link of the managed node you want to terminate a process on.

1. Choose **Tools, Processes**.

1. Choose the button next to the process you want to terminate.

1. Choose **Actions, Terminate process** or **Actions, Terminate process tree**. 
**Note**  
Terminating a process tree also terminates all processes and applications using that process.

# Viewing logs on managed nodes
Viewing logs

You can use Fleet Manager, a tool in AWS Systems Manager, to view log data stored on your managed nodes. For Windows managed nodes, you can view Windows event logs and copy their details from the console. To help you search events, filter Windows event logs by **Event level**, **Event ID**, **Event source**, and **Time created**. You can also view other log data using the procedure to view the file system. For more information about viewing the file system with Fleet Manager, see [Working with OS file systems using Fleet Manager](fleet-manager-file-system-management.md).

**To view Windows event logs with Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed node whose event logs you want to view.

1. Choose **View details**.

1. Choose **Tools, Windows event logs**.

1. Choose the **Log name** that contains the events you want to view.

1. Choose the button next to the **Log name** you want to view, and then select **View events**.

1. Choose the button next to the event you want to view, and then select **View event details**.

1. (Optional) Select **Copy as JSON** to copy the event details to your clipboard.

# Managing OS user accounts and groups on managed nodes using Fleet Manager
Managing OS user accounts and groups

You can use Fleet Manager, a tool in AWS Systems Manager, to manage operating system (OS) user accounts and groups on your managed nodes. For example, you can create and delete users and groups. Additionally, you can view details like group membership, user roles, and status.

**Important**  
Fleet Manager uses Run Command and Session Manager, tools in AWS Systems Manager, for various user management operations. As a result, a user could grant an operating system user account permissions that they wouldn't otherwise be able to grant. This is because AWS Systems Manager Agent (SSM Agent) runs on Amazon Elastic Compute Cloud (Amazon EC2) instances using root permissions (Linux) or SYSTEM permissions (Windows Server). For more information about restricting access to root-level commands through SSM Agent, see [Restricting access to root-level commands through SSM Agent](ssm-agent-restrict-root-level-commands.md). To restrict access to this feature, we recommend creating AWS Identity and Access Management (IAM) policies for your users that only allow access to the actions you define. For more information about creating IAM policies for Fleet Manager, see [Controlling access to Fleet Manager](configuring-fleet-manager-permissions.md).

**Topics**
+ [Creating an OS user or group using Fleet Manager](manage-os-user-accounts-create.md)
+ [Updating user or group membership using Fleet Manager](manage-os-user-accounts-update.md)
+ [Deleting an OS user or group using Fleet Manager](manage-os-user-accounts-delete.md)

# Creating an OS user or group using Fleet Manager


**Note**  
Fleet Manager uses Session Manager to set passwords for new users. For Amazon EC2 instances, the instance profile attached to your managed instances must provide permissions for Session Manager to use this feature. For more information about adding Session Manager permissions to an instance profile, see [Add Session Manager permissions to an existing IAM role](getting-started-add-permissions-to-existing-profile.md).

Instead of logging on directly to a server to create a user account or group, you can use the Fleet Manager console to perform the same tasks.

**To create an OS user account using Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed node you want to create a new user on.

1. Choose **View details**.

1. Choose **Tools, Users and groups**.

1. Choose the **Users** tab, and then choose **Create user**.

1. Enter a value for the **Name** of the new user.

1. (Recommended) Select the check box next to **Set password**. You will be prompted to provide a password for the new user at the end of the procedure.

1. Select **Create user**. If you selected the check box to create a password for the new user, you will be prompted to enter a value for the password and select **Done**. If the password you specify doesn't meet the requirements specified by your managed node's local or domain policies, an error is returned.

**To create an OS group using Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed node you want to create a group in.

1. Choose **View details**.

1. Choose **Tools, Users and groups**.

1. Choose the **Groups** tab, and then choose **Create group**.

1. Enter a value for the **Name** of the new group.

1. (Optional) Enter a value for the **Description** of the new group.

1. (Optional) Select users to add to the **Group members** for the new group.

1. Select **Create group**.

# Updating user or group membership using Fleet Manager


Instead of logging on directly to a server to update a user account or group, you can use the Fleet Manager console to perform the same tasks.

**To add an OS user account to a new group using Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed node where the user account exists that you want to update.

1. Choose **View details**.

1. Choose **Tools, Users and groups**.

1. Choose the **Users** tab.

1. Choose the button next to the user you want to update.

1. Choose **Actions, Add user to group**.

1. Choose the group you want to add the user to under **Add to group**.

1. Select **Add user to group**.

**To edit an OS group's membership using Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed node where the group exists that you want to update.

1. Choose **View details**.

1. Choose **Tools, Users and groups**.

1. Choose the **Groups** tab.

1. Choose the button next to the group you want to update.

1. Choose **Actions, Modify group**.

1. Choose the users you want to add or remove under **Group members**.

1. Select **Modify group**.

# Deleting an OS user or group using Fleet Manager


Instead of logging on directly to a server to delete a user account or group, you can use the Fleet Manager console to perform the same tasks.

**To delete an OS user account using Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed node where the user account exists that you want to delete.

1. Choose **View details**.

1. Choose **Tools, Users and groups**.

1. Choose the **Users** tab.

1. Choose the button next to the user you want to delete.

1. Choose **Actions, Delete local user**.

**To delete an OS group using Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed node where the group exists that you want to delete.

1. Choose **View details**.

1. Choose **Tools, Users and groups**.

1. Choose the **Groups** tab.

1. Choose the button next to the group you want to delete.

1. Choose **Actions, Delete local group**.

# Managing the Windows registry on managed nodes
Managing the Windows registry

You can use Fleet Manager, a tool in AWS Systems Manager, to manage the registry on your Windows Server managed nodes. From the Fleet Manager console you can create, copy, update, and delete registry entries and values.

**Important**  
We recommend creating a backup of the registry, or taking a snapshot of the root Amazon Elastic Block Store (Amazon EBS) volume attached to your managed node, before you modify the registry. Serious problems can occur if you modify the registry incorrectly. These problems might require you to reinstall the operating system, or restore the root volume of your node from a snapshot. AWS doesn't guarantee that these problems can be solved. Modify the registry at your own risk. You're responsible for all registry changes, and ensuring you have backups.

## Create a Windows registry key or entry


**To create a Windows registry key with Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed node you want to create a registry key on.

1. Choose **View details**.

1. Choose **Tools, Windows registry**.

1. Choose the hive you want to create a new registry key in by selecting the **Registry name**.

1. Choose **Create, Create registry key**.

1. Choose the button next to the registry entry you want to create a new key in.

1. Choose **Create registry key**.

1. Enter a value for the **Name** of the new registry key, and select **Submit**.

**To create a Windows registry entry with Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed node you want to create a registry entry on.

1. Choose **View details**.

1. Choose **Tools, Windows registry**.

1. Choose the hive, and then the registry key you want to create a new registry entry in, by selecting the **Registry name**.

1. Choose **Create, Create registry entry**.

1. Enter a value for the **Name** of the new registry entry.

1. Choose the **Type** of value you want to create for the registry entry. For more information about registry value types, see [Registry value types](https://docs.microsoft.com/en-us/windows/win32/sysinfo/registry-value-types).

1. Enter a value for the **Value** of the new registry entry.

## Update a Windows registry entry


**To update a Windows registry entry with Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed node you want to update a registry entry on.

1. Choose **View details**.

1. Choose **Tools, Windows registry**.

1. Choose the hive, and then the registry key that contains the entry you want to update, by selecting the **Registry name**.

1. Choose the button next to the registry entry you want to update.

1. Choose **Actions, Update registry entry**.

1. Enter the new value for the **Value** of the registry entry.

1. Choose **Update**.

## Delete a Windows registry entry or key


**To delete a Windows registry key with Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed node you want to delete a registry key on.

1. Choose **Tools, Windows registry**.

1. Choose the hive, and then the registry key that contains the key you want to delete, by selecting the **Registry name**.

1. Choose the button next to the registry key you want to delete.

1. Choose **Actions, Delete registry key**.

**To delete a Windows registry entry with Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed node you want to delete a registry entry on.

1. Choose **View details**.

1. Choose **Tools, Windows registry**.

1. Choose the hive, and then the registry key containing the entry you want to delete, by selecting the **Registry name**.

1. Choose the button next to the registry entry you want to delete.

1. Choose **Actions, Delete registry entry**.

# Managing EC2 instances automatically with Default Host Management Configuration


The Default Host Management Configuration setting allows AWS Systems Manager to manage your Amazon EC2 instances automatically as *managed instances*. A managed instance is an EC2 instance that is configured for use with Systems Manager. 

The benefits of managing your instances with Systems Manager include the following:
+ Connect to your EC2 instances securely using Session Manager.
+ Perform automated patch scans using Patch Manager.
+ View detailed information about your instances using Systems Manager Inventory.
+ Track and manage instances using Fleet Manager.
+ Keep SSM Agent up to date automatically.

*Fleet Manager, Inventory, Patch Manager, and Session Manager are tools in Systems Manager.*

Using Default Host Management Configuration, you can manage EC2 instances without having to manually create an AWS Identity and Access Management (IAM) instance profile. Instead, Default Host Management Configuration creates and applies a default IAM role to ensure that Systems Manager has permissions to manage all instances in the AWS account and AWS Region where it's activated. 

If the permissions provided aren't sufficient for your use case, you can also add policies to the default IAM role created by the Default Host Management Configuration. Alternatively, if you don't need permissions for all of the capabilities provided by the default IAM role, you can create your own custom role and policies. Any changes made to the IAM role you choose for Default Host Management Configuration apply to all managed Amazon EC2 instances in the Region and account.

For more information about the policy used by Default Host Management Configuration, see [AWS managed policy: AmazonSSMManagedEC2InstanceDefaultPolicy](security-iam-awsmanpol.md#security-iam-awsmanpol-AmazonSSMManagedEC2InstanceDefaultPolicy).
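If you manage this setting programmatically, Default Host Management Configuration is controlled through the `update-service-setting` API. The following is a hedged sketch: it assumes you have already created a role that trusts `ssm.amazonaws.com` and attaches the `AmazonSSMManagedEC2InstanceDefaultPolicy` managed policy, and the role name shown is an illustrative placeholder.

```shell
# Turn on Default Host Management Configuration for the current account and Region.
# The role name below is a placeholder for a role you have created for this purpose.
aws ssm update-service-setting \
    --setting-id /ssm/managed-instance/default-ec2-instance-management-role \
    --setting-value service-role/SSMDefaultEC2InstanceManagementRole

# Verify the current value of the setting.
aws ssm get-service-setting \
    --setting-id /ssm/managed-instance/default-ec2-instance-management-role
```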

**Implement least privilege access**  
The procedures in this topic are intended to be performed only by administrators. Therefore, we recommend implementing *least privilege access* to prevent non-administrative users from configuring or modifying the Default Host Management Configuration. To view example policies that restrict access to the Default Host Management Configuration, see [Least privilege policy examples for Default Host Management Configuration](#least-privilege-examples) later in this topic. 

**Important**  
Registration information for instances registered using Default Host Management Configuration is stored locally in the `/var/lib/amazon/ssm` or `C:\ProgramData\Amazon` directories. Removing these directories or their files prevents the instance from acquiring the credentials needed to connect to Systems Manager using Default Host Management Configuration. In these cases, you must use an IAM instance profile to provide the required permissions to your instance, or recreate the instance.
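
As a quick diagnostic for the note above, you can check whether the registration directory is present on a node. This is a minimal sketch: the two paths are taken from this documentation, and the helper names are illustrative.

```python
import platform
from pathlib import Path

# Registration data locations described in the Important note above.
# These paths are assumptions drawn from the documentation.
REGISTRATION_DIRS = {
    "Linux": "/var/lib/amazon/ssm",
    "Windows": r"C:\ProgramData\Amazon",
}

def registration_dir(system=None):
    """Return the expected DHMC registration directory for the given OS."""
    system = system or platform.system()
    try:
        return REGISTRATION_DIRS[system]
    except KeyError:
        raise ValueError(f"unsupported platform: {system}")

def registration_data_present(system=None):
    """True if the registration directory exists and contains files."""
    d = Path(registration_dir(system))
    return d.is_dir() and any(d.iterdir())
```

An empty result here on a DHMC-managed node suggests the registration data was removed, in which case the instance needs an IAM instance profile or must be recreated.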

**Topics**
+ [

## Prerequisites
](#dhmc-prerequisites)
+ [

## Activating the Default Host Management Configuration setting
](#dhmc-activate)
+ [

## Deactivating the Default Host Management Configuration setting
](#dhmc-deactivate)
+ [

## Least privilege policy examples for Default Host Management Configuration
](#least-privilege-examples)

## Prerequisites


To use Default Host Management Configuration in the AWS Region and AWS account where you activate the setting, the following requirements must be met.
+ An instance to be managed must use Instance Metadata Service Version 2 (IMDSv2).

  Default Host Management Configuration doesn't support Instance Metadata Service Version 1. For information about transitioning to IMDSv2, see [Transition to using Instance Metadata Service Version 2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-metadata-transition-to-version-2.html) in the *Amazon EC2 User Guide*.
+ SSM Agent version 3.2.582.0 or later must be installed on the instance to be managed.

  For information about checking the version of SSM Agent installed on your instance, see [Checking the SSM Agent version number](ssm-agent-get-version.md).

  For information about updating SSM Agent, see [Automatically updating SSM Agent](ssm-agent-automatic-updates.md#ssm-agent-automatic-updates-console).
+ You, as the administrator performing the tasks in this topic, must have permissions for the [GetServiceSetting](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetServiceSetting.html), [ResetServiceSetting](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_ResetServiceSetting.html), and [UpdateServiceSetting](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_UpdateServiceSetting.html) API operations. Additionally, you must have the `iam:PassRole` permission for the `AWSSystemsManagerDefaultEC2InstanceManagementRole` IAM role. The following is an example policy providing these permissions. Replace each *example resource placeholder* with your own information.

------
#### [ JSON ]

****  

  ```
  {
      "Version":"2012-10-17",		 	 	 
      "Statement": [
          {
              "Effect": "Allow",
              "Action": [
                  "ssm:GetServiceSetting",
                  "ssm:ResetServiceSetting",
                  "ssm:UpdateServiceSetting"
              ],
              "Resource": "arn:aws:ssm:us-east-1:111122223333:servicesetting/ssm/managed-instance/default-ec2-instance-management-role"
          },
          {
              "Effect": "Allow",
              "Action": [
                  "iam:PassRole"
              ],
              "Resource": "arn:aws:iam::111122223333:role/service-role/AWSSystemsManagerDefaultEC2InstanceManagementRole",
              "Condition": {
                  "StringEquals": {
                      "iam:PassedToService": [
                          "ssm.amazonaws.com"
                      ]
                  }
              }
          }
      ]
  }
  ```

------
+ If an IAM instance profile is already attached to an EC2 instance that is to be managed using Systems Manager, you must remove any permissions from it that allow the `ssm:UpdateInstanceInformation` operation. SSM Agent attempts to use instance profile permissions before using the Default Host Management Configuration permissions. If you allow the `ssm:UpdateInstanceInformation` operation in your own IAM instance profile, the instance will not use the Default Host Management Configuration permissions.
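
The ARNs in the example policy above follow a fixed pattern, so they can be built for your own Region and account with a small helper. This is an illustrative sketch; the function names are not part of any AWS SDK.

```python
def service_setting_arn(region, account_id):
    """ARN of the Default Host Management Configuration service setting."""
    return (
        f"arn:aws:ssm:{region}:{account_id}:"
        "servicesetting/ssm/managed-instance/default-ec2-instance-management-role"
    )

def default_role_arn(account_id):
    """ARN of the default role used by Default Host Management Configuration."""
    return (
        f"arn:aws:iam::{account_id}:role/service-role/"
        "AWSSystemsManagerDefaultEC2InstanceManagementRole"
    )

# Example: build the Resource values for the policy above.
print(service_setting_arn("us-east-1", "111122223333"))
print(default_role_arn("111122223333"))
```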

## Activating the Default Host Management Configuration setting


You can activate Default Host Management Configuration from the Fleet Manager console, or by using the AWS Command Line Interface or AWS Tools for Windows PowerShell.

You must turn on the Default Host Management Configuration one by one in each Region where you want your Amazon EC2 instances managed by this setting.

After turning on Default Host Management Configuration, it might take up to 30 minutes for your instances to use the credentials of the role you choose in step 5 in the following procedure.

**To activate Default Host Management Configuration (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose **Account management, Configure Default Host Management Configuration**.

1. Turn on **Enable Default Host Management Configuration**.

1. Choose the AWS Identity and Access Management (IAM) role used to enable Systems Manager tools for your instances. We recommend using the default role provided by Default Host Management Configuration. It contains the minimum set of permissions necessary to manage your Amazon EC2 instances using Systems Manager. If you prefer to use a custom role, the role's trust policy must allow Systems Manager as a trusted entity. 

1. Choose **Configure** to complete setup. 

**To activate Default Host Management Configuration (command line)**

1. Create a JSON file on your local machine containing the following trust relationship policy.

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement":[
           {
               "Sid":"",
               "Effect":"Allow",
               "Principal":{
                   "Service":"ssm.amazonaws.com"
               },
               "Action":"sts:AssumeRole"
           }
       ]
   }
   ```

------

1. Open the AWS CLI or Tools for Windows PowerShell and run one of the following commands, depending on the operating system type of your local machine, to create a service role in your account. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws iam create-role \
   --role-name AWSSystemsManagerDefaultEC2InstanceManagementRole \
   --path /service-role/ \
   --assume-role-policy-document file://trust-policy.json
   ```

------
#### [ Windows ]

   ```
   aws iam create-role ^
   --role-name AWSSystemsManagerDefaultEC2InstanceManagementRole ^
   --path /service-role/ ^
   --assume-role-policy-document file://trust-policy.json
   ```

------
#### [ PowerShell ]

   ```
   New-IAMRole `
   -RoleName "AWSSystemsManagerDefaultEC2InstanceManagementRole" `
   -Path "/service-role/" `
   -AssumeRolePolicyDocument "file://trust-policy.json"
   ```

------

1. Run the following command to attach the `AmazonSSMManagedEC2InstanceDefaultPolicy` managed policy to your newly created role. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws iam attach-role-policy \
   --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedEC2InstanceDefaultPolicy \
   --role-name AWSSystemsManagerDefaultEC2InstanceManagementRole
   ```

------
#### [ Windows ]

   ```
   aws iam attach-role-policy ^
   --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedEC2InstanceDefaultPolicy ^
   --role-name AWSSystemsManagerDefaultEC2InstanceManagementRole
   ```

------
#### [ PowerShell ]

   ```
   Register-IAMRolePolicy `
   -PolicyArn "arn:aws:iam::aws:policy/AmazonSSMManagedEC2InstanceDefaultPolicy" `
   -RoleName "AWSSystemsManagerDefaultEC2InstanceManagementRole"
   ```

------

1. Open the AWS CLI or Tools for Windows PowerShell and run the following command. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws ssm update-service-setting \
   --setting-id arn:aws:ssm:region:account-id:servicesetting/ssm/managed-instance/default-ec2-instance-management-role \
   --setting-value service-role/AWSSystemsManagerDefaultEC2InstanceManagementRole
   ```

------
#### [ Windows ]

   ```
   aws ssm update-service-setting ^
   --setting-id arn:aws:ssm:region:account-id:servicesetting/ssm/managed-instance/default-ec2-instance-management-role ^
   --setting-value service-role/AWSSystemsManagerDefaultEC2InstanceManagementRole
   ```

------
#### [ PowerShell ]

   ```
   Update-SSMServiceSetting `
   -SettingId "arn:aws:ssm:region:account-id:servicesetting/ssm/managed-instance/default-ec2-instance-management-role" `
   -SettingValue "service-role/AWSSystemsManagerDefaultEC2InstanceManagementRole"
   ```

------

   There is no output if the command succeeds.

1. Run the following command to view the current service settings for Default Host Management Configuration in the current AWS account and AWS Region.

------
#### [ Linux & macOS ]

   ```
   aws ssm get-service-setting \
   --setting-id arn:aws:ssm:region:account-id:servicesetting/ssm/managed-instance/default-ec2-instance-management-role
   ```

------
#### [ Windows ]

   ```
   aws ssm get-service-setting ^
   --setting-id arn:aws:ssm:region:account-id:servicesetting/ssm/managed-instance/default-ec2-instance-management-role
   ```

------
#### [ PowerShell ]

   ```
   Get-SSMServiceSetting `
   -SettingId "arn:aws:ssm:region:account-id:servicesetting/ssm/managed-instance/default-ec2-instance-management-role"
   ```

------

   The command returns information like the following.

   ```
   {
       "ServiceSetting": {
           "SettingId": "/ssm/managed-instance/default-ec2-instance-management-role",
           "SettingValue": "service-role/AWSSystemsManagerDefaultEC2InstanceManagementRole",
           "LastModifiedDate": "2022-11-28T08:21:03.576000-08:00",
           "LastModifiedUser": "System",
           "ARN": "arn:aws:ssm:us-east-2:-123456789012:servicesetting/ssm/managed-instance/default-ec2-instance-management-role",
           "Status": "Custom"
       }
   }
   ```
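
You can interpret this response programmatically. The following sketch assumes, as the sample output suggests, that a `Status` of `Custom` indicates a role has been configured, while `Default` indicates the setting was never changed or was reset.

```python
import json

def dhmc_enabled(response_json):
    """Parse a get-service-setting response and report whether
    Default Host Management Configuration has a configured role."""
    setting = json.loads(response_json)["ServiceSetting"]
    return setting["Status"] == "Custom"
```

Feed the command output (for example, captured from `aws ssm get-service-setting`) to this function to confirm the setting took effect in the target Region.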

## Deactivating the Default Host Management Configuration setting


You can deactivate Default Host Management Configuration from the Fleet Manager console, or by using the AWS Command Line Interface or AWS Tools for Windows PowerShell.

You must turn off the Default Host Management Configuration setting one by one in each Region where you no longer want your Amazon EC2 instances managed by this configuration. Deactivating it in one Region doesn't deactivate it in other Regions.

If you deactivate Default Host Management Configuration, and you haven't attached an instance profile that allows access to Systems Manager, your Amazon EC2 instances will no longer be managed by Systems Manager.

**To deactivate Default Host Management Configuration (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose **Account management, Default Host Management Configuration**.

1. Turn off **Enable Default Host Management Configuration**.

1. Choose **Configure** to disable Default Host Management Configuration.

**To deactivate Default Host Management Configuration (command line)**
+ Open the AWS CLI or Tools for Windows PowerShell and run the following command. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

  ```
  aws ssm reset-service-setting \
  --setting-id arn:aws:ssm:region:account-id:servicesetting/ssm/managed-instance/default-ec2-instance-management-role
  ```

------
#### [ Windows ]

  ```
  aws ssm reset-service-setting ^
  --setting-id arn:aws:ssm:region:account-id:servicesetting/ssm/managed-instance/default-ec2-instance-management-role
  ```

------
#### [ PowerShell ]

  ```
  Reset-SSMServiceSetting `
  -SettingId "arn:aws:ssm:region:account-id:servicesetting/ssm/managed-instance/default-ec2-instance-management-role"
  ```

------

## Least privilege policy examples for Default Host Management Configuration


The following sample policies demonstrate how to prevent members of your organization from making changes to the Default Host Management Configuration setting in your account.

### Service control policy for AWS Organizations


The following policy demonstrates how to prevent non-administrative members of your organization in AWS Organizations from updating your Default Host Management Configuration setting. Replace each *example resource placeholder* with your own information.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [{
            "Effect": "Deny",
            "Action": [
                "ssm:UpdateServiceSetting",
                "ssm:ResetServiceSetting"
            ],
            "Resource": "arn:aws:ssm:*:*:servicesetting/ssm/managed-instance/default-ec2-instance-management-role",
            "Condition": {
                "StringNotEqualsIgnoreCase": {
                    "aws:PrincipalTag/job-function": [
                        "administrator"
                    ]
                }
            }
        },
        {
            "Effect": "Deny",
            "Action": [
                "iam:PassRole"
            ],
            "Resource": "arn:aws:iam::*:role/service-role/AWSSystemsManagerDefaultEC2InstanceManagementRole",
            "Condition": {
                "StringEquals": {
                    "iam:PassedToService": "ssm.amazonaws.com"
                },
                "StringNotEqualsIgnoreCase": {
                    "aws:PrincipalTag/job-function": [
                        "administrator"
                    ]
                }
            }
        },
        {
            "Effect": "Deny",
            "Resource": "arn:aws:iam::*:role/service-role/AWSSystemsManagerDefaultEC2InstanceManagementRole",
            "Action": [
                "iam:AttachRolePolicy",
                "iam:DeleteRole"
            ],
            "Condition": {
                "StringNotEqualsIgnoreCase": {
                    "aws:PrincipalTag/job-function": [
                        "administrator"
                    ]
                }
            }
        }
    ]
}
```

------

### Policy for IAM principals


The following policy demonstrates how to prevent IAM groups, roles, and users from updating your Default Host Management Configuration setting. Replace each *example resource placeholder* with your own information.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "ssm:UpdateServiceSetting",
                "ssm:ResetServiceSetting"
            ],
            "Resource": "arn:aws:ssm:us-east-1:111122223333:servicesetting/ssm/managed-instance/default-ec2-instance-management-role"
        },
        {
            "Effect": "Deny",
            "Action": [
                "iam:AttachRolePolicy",
                "iam:DeleteRole",
                "iam:PassRole"
            ],
            "Resource": "arn:aws:iam::111122223333:role/service-role/AWSSystemsManagerDefaultEC2InstanceManagementRole"
        }
    ]
}
```

------

# Connecting to a Windows Server managed instance using Remote Desktop
Connecting to a managed instance using Remote Desktop

You can use Fleet Manager, a tool in AWS Systems Manager, to connect to your Windows Server Amazon Elastic Compute Cloud (Amazon EC2) instances using the Remote Desktop Protocol (RDP). Fleet Manager Remote Desktop, which is powered by [Amazon DCV](https://docs.aws.amazon.com/dcv/latest/adminguide/what-is-dcv.html), provides you with secure connectivity to your Windows Server instances directly from the Systems Manager console. You can have up to four simultaneous connections in a single browser window.

The Fleet Manager Remote Desktop API is named AWS Systems Manager GUI Connect. For information about using the Systems Manager GUI Connect API, see the *[AWS Systems Manager GUI Connect API Reference](https://docs.aws.amazon.com/ssm-guiconnect/latest/APIReference)*.

Currently, you can use Remote Desktop only with instances that are running Windows Server 2012 RTM or later. Remote Desktop supports only English-language input. 

Fleet Manager Remote Desktop is a console-only service and doesn't support command-line connections to your managed instances. To connect to a Windows Server managed instance through a shell, you can use Session Manager, another tool in AWS Systems Manager. For more information, see [AWS Systems Manager Session Manager](session-manager.md).

**Note**  
The duration of an RDP connection is not determined by the duration of your AWS Identity and Access Management (IAM) credentials. Instead, the connection persists until the maximum connection duration or idle time limit is met, whichever comes first. For more information, see [Remote connection duration and concurrency](#rdp-duration-concurrency).

For information about configuring AWS Identity and Access Management (IAM) permissions to allow your instances to interact with Systems Manager, see [Configure instance permissions for Systems Manager](setup-instance-permissions.md).

**Topics**
+ [

## Setting up your environment
](#rdp-prerequisites)
+ [

## Configuring IAM permissions for Remote Desktop
](#rdp-iam-policy-examples)
+ [

## Authenticating Remote Desktop connections
](#rdp-authentication)
+ [

## Remote connection duration and concurrency
](#rdp-duration-concurrency)
+ [

## Systems Manager GUI Connect handling of AWS IAM Identity Center attributes
](#iam-identity-center-attribute-handling)
+ [

## Connect to a managed node using Remote Desktop
](#rdp-connect-to-node)
+ [

## Viewing information about current and completed connections
](#list-connections)

## Setting up your environment


Before using Remote Desktop, verify that your environment meets the following requirements:
+ **Managed node configuration**

  Make sure that your Amazon EC2 instances are configured as [managed nodes](fleet-manager-managed-nodes.md) in Systems Manager.
+ **SSM Agent minimum version**

  Verify that nodes are running SSM Agent version 3.0.222.0 or higher. For information about how to check which agent version is running on a node, see [Checking the SSM Agent version number](ssm-agent-get-version.md). For information about installing or updating SSM Agent, see [Working with SSM Agent](ssm-agent.md).
+ **RDP port configuration**

  To accept remote connections, the Remote Desktop Services service on your Windows Server nodes must use the default RDP port, 3389. This is the default configuration on Amazon Machine Images (AMIs) provided by AWS. You aren't required to open any inbound ports to use Remote Desktop.
+ **PSReadLine module version for keyboard functionality**

  To ensure that your keyboard functions properly in PowerShell, verify that nodes running Windows Server 2022 have PSReadLine module version 2.2.2 or higher installed. If they are running an older version, you can install the required version using the following commands.

  ```
  Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force
  ```

  After the NuGet package provider is installed, run the following command.

  ```
  Install-Module `
   -Name PSReadLine `
   -Repository PSGallery `
   -MinimumVersion 2.2.2 -Force
  ```
+ **Session Manager configuration**

  Before you can use Remote Desktop, you must complete the prerequisites for Session Manager setup. When you connect to an instance using Remote Desktop, any session preferences defined for your AWS account and AWS Region are applied. For more information, see [Setting up Session Manager](session-manager-getting-started.md).
**Note**  
If you log Session Manager activity using Amazon Simple Storage Service (Amazon S3), then your Remote Desktop connections will generate the following error in `bucket_name/Port/stderr`. This error is expected behavior and can be safely ignored.  

  ```
  Setting up data channel with id SESSION_ID failed: failed to create websocket for datachannel with error: CreateDataChannel failed with no output or error: createDataChannel request failed: unexpected response from the service <BadRequest>
  <ClientErrorMessage>Session is already terminated</ClientErrorMessage>
  </BadRequest>
  ```
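
As a best-effort check of the RDP port requirement above, you can probe TCP port 3389 from a location with network access to the node. This sketch only tests reachability from where it runs; a failed connection doesn't necessarily mean the Remote Desktop Services service is misconfigured.

```python
import socket

RDP_PORT = 3389  # default RDP port required by Fleet Manager Remote Desktop

def rdp_port_listening(host, port=RDP_PORT, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```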

## Configuring IAM permissions for Remote Desktop


In addition to the required IAM permissions for Systems Manager and Session Manager, the user or role you use must be granted permissions to initiate connections.

**Permissions for initiating connections**  
To make RDP connections to EC2 instances in the console, the following permissions are required:
+ `ssm-guiconnect:CancelConnection`
+ `ssm-guiconnect:GetConnection`
+ `ssm-guiconnect:StartConnection`

**Permissions for listing connections**  
To view lists of connections in the console, the following permission is required:

`ssm-guiconnect:ListConnections`

The following are example IAM policies that you can attach to a user or role to allow different types of interaction with Remote Desktop. Replace each *example resource placeholder* with your own information.
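
As a quick sanity check before attaching a policy, you can scan its Allow statements for the three connection permissions listed above. This simplified sketch ignores wildcards, `Deny` statements, and conditions, so it is not a substitute for the IAM policy simulator.

```python
import json

REQUIRED_CONNECT_ACTIONS = {
    "ssm-guiconnect:CancelConnection",
    "ssm-guiconnect:GetConnection",
    "ssm-guiconnect:StartConnection",
}

def missing_connect_actions(policy_json):
    """Return the required ssm-guiconnect actions not literally allowed
    by the policy document (wildcards are not expanded)."""
    policy = json.loads(policy_json)
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a single statement may be an object
        statements = [statements]
    allowed = set()
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        allowed.update(actions)
    return REQUIRED_CONNECT_ACTIONS - allowed
```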

### Standard policy for connecting to EC2 instances


------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "EC2",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:GetPasswordData"
            ],
            "Resource": "*"
        },
        {
            "Sid": "SSM",
            "Effect": "Allow",
            "Action": [
                "ssm:DescribeInstanceProperties",
                "ssm:GetCommandInvocation",
                "ssm:GetInventorySchema"
            ],
            "Resource": "*"
        },
        {
            "Sid": "TerminateSession",
            "Effect": "Allow",
            "Action": [
                "ssm:TerminateSession"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "ssm:resourceTag/aws:ssmmessages:session-id": [
                        "${aws:userid}"
                    ]
                }
            }
        },
        {
            "Sid": "SSMStartSession",
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession"
            ],
            "Resource": [
                "arn:aws:ec2:*:111122223333:instance/*",
                "arn:aws:ssm:*:111122223333:managed-instance/*",
                "arn:aws:ssm:*::document/AWS-StartPortForwardingSession"
            ],
            "Condition": {
                "ForAnyValue:StringEquals": {
                    "aws:CalledVia": "ssm-guiconnect.amazonaws.com"
                }
            }
        },
        {
            "Sid": "SSMMessages",
            "Effect": "Allow",
            "Action": [
                "ssmmessages:OpenDataChannel"
            ],
            "Resource": [
                "arn:aws:ssm:*:111122223333:session/*"
            ],
            "Condition": {
                "ForAnyValue:StringEquals": {
                    "aws:CalledVia": "ssm-guiconnect.amazonaws.com"
                }
            }
        },
        {
            "Sid": "GuiConnect",
            "Effect": "Allow",
            "Action": [
                "ssm-guiconnect:CancelConnection",
                "ssm-guiconnect:GetConnection",
                "ssm-guiconnect:StartConnection",
                "ssm-guiconnect:ListConnections"
            ],
            "Resource": "*"
        }
    ]
}
```

------

### Policy for connecting to EC2 instances with specific tags


**Note**  
In the following IAM policy, the `SSMStartSession` section requires an Amazon Resource Name (ARN) for the `ssm:StartSession` action. As shown, the ARN you specify does *not* require an AWS account ID. If you specify an account ID, Fleet Manager returns an `AccessDeniedException`.  
The `AccessTaggedInstances` section, which is located lower in the example policy, also requires ARNs for `ssm:StartSession`. For those ARNs, you do specify AWS account IDs.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "EC2",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:GetPasswordData"
            ],
            "Resource": "*"
        },
        {
            "Sid": "SSM",
            "Effect": "Allow",
            "Action": [
                "ssm:DescribeInstanceProperties",
                "ssm:GetCommandInvocation",
                "ssm:GetInventorySchema"
            ],
            "Resource": "*"
        },
        {
            "Sid": "SSMStartSession",
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession"
            ],
            "Resource": [
                "arn:aws:ssm:*::document/AWS-StartPortForwardingSession"
            ],
            "Condition": {
                "ForAnyValue:StringEquals": {
                    "aws:CalledVia": "ssm-guiconnect.amazonaws.com"
                }
            }
        },
        {
            "Sid": "AccessTaggedInstances",
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession"
            ],
            "Resource": [
                "arn:aws:ec2:*:111122223333:instance/*",
                "arn:aws:ssm:*:111122223333:managed-instance/*"
            ],
            "Condition": {
                "StringLike": {
                    "ssm:resourceTag/tag key": [
                        "tag value"
                    ]
                }
            }
        },
        {
            "Sid": "SSMMessages",
            "Effect": "Allow",
            "Action": [
                "ssmmessages:OpenDataChannel"
            ],
            "Resource": [
                "arn:aws:ssm:*:111122223333:session/*"
            ],
            "Condition": {
                "ForAnyValue:StringEquals": {
                    "aws:CalledVia": "ssm-guiconnect.amazonaws.com"
                }
            }
        },
        {
            "Sid": "GuiConnect",
            "Effect": "Allow",
            "Action": [
                "ssm-guiconnect:CancelConnection",
                "ssm-guiconnect:GetConnection",
                "ssm-guiconnect:StartConnection",
                "ssm-guiconnect:ListConnections"
            ],
            "Resource": "*"
        }
    ]
}
```

------

### Policy for AWS IAM Identity Center users to connect to EC2 instances


------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "SSO",
            "Effect": "Allow",
            "Action": [
                "sso:ListDirectoryAssociations*",
                "identitystore:DescribeUser"
            ],
            "Resource": "*"
        },
        {
            "Sid": "EC2",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:GetPasswordData"
            ],
            "Resource": "*"
        },
        {
            "Sid": "SSM",
            "Effect": "Allow",
            "Action": [
                "ssm:DescribeInstanceProperties",
                "ssm:GetCommandInvocation",
                "ssm:GetInventorySchema"
            ],
            "Resource": "*"
        },
        {
            "Sid": "TerminateSession",
            "Effect": "Allow",
            "Action": [
                "ssm:TerminateSession"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "ssm:resourceTag/aws:ssmmessages:session-id": [
                        "${aws:userName}"
                    ]
                }
            }
        },
        {
            "Sid": "SSMStartSession",
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession"
            ],
            "Resource": [
                "arn:aws:ec2:*:*:instance/*",
                "arn:aws:ssm:*:*:managed-instance/*",
                "arn:aws:ssm:*:*:document/AWS-StartPortForwardingSession"
            ],
            "Condition": {
                "ForAnyValue:StringEquals": {
                    "aws:CalledVia": "ssm-guiconnect.amazonaws.com"
                }
            }
        },
        {
            "Sid": "SSMSendCommand",
            "Effect": "Allow",
            "Action": [
                "ssm:SendCommand"
            ],
            "Resource": [
                "arn:aws:ec2:*:*:instance/*",
                "arn:aws:ssm:*:*:managed-instance/*",
                "arn:aws:ssm:*:*:document/AWSSSO-CreateSSOUser"
            ]
        },
        {
            "Sid": "SSMMessages",
            "Effect": "Allow",
            "Action": [
                "ssmmessages:OpenDataChannel"
            ],
            "Resource": [
                "arn:aws:ssm:*:111122223333:session/*"
            ],
            "Condition": {
                "ForAnyValue:StringEquals": {
                    "aws:CalledVia": "ssm-guiconnect.amazonaws.com"
                }
            }
        },
        {
            "Sid": "GuiConnect",
            "Effect": "Allow",
            "Action": [
                "ssm-guiconnect:CancelConnection",
                "ssm-guiconnect:GetConnection",
                "ssm-guiconnect:StartConnection",
                "ssm-guiconnect:ListConnections"
            ],
            "Resource": "*"
        }
    ]
}
```

------

## Authenticating Remote Desktop connections


When establishing a remote connection, you can authenticate using Windows credentials or the Amazon EC2 key pair (`.pem` file) that is associated with the instance. For information about using key pairs, see [Amazon EC2 key pairs and Windows instances](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-key-pairs.html) in the *Amazon EC2 User Guide*.

Alternatively, if you're authenticated to the AWS Management Console using AWS IAM Identity Center, you can connect to your instances without providing additional credentials. For an example of a policy to allow remote connection authentication using IAM Identity Center, see [Configuring IAM permissions for Remote Desktop](#rdp-iam-policy-examples). 

Remote Desktop connections using IAM Identity Center authentication are available in all AWS Regions where IAM Identity Center is supported.

**Before you begin**  
Note the following conditions for using IAM Identity Center authentication before you begin connecting using Remote Desktop.
+ Remote Desktop supports IAM Identity Center authentication for nodes in the same AWS Region where you enabled IAM Identity Center.
+ Remote Desktop supports IAM Identity Center user names of up to 16 characters. 
+ Remote Desktop supports IAM Identity Center user names consisting of alphanumeric characters and the following special characters: `.` `-` `_`
**Important**  
Connections won't succeed for IAM Identity Center user names that contain the following characters: `+` `=` `,`   
IAM Identity Center supports these characters in user names, but Fleet Manager RDP connections do not.  
In addition, if an IAM Identity Center user name contains one or more `@` symbols, Fleet Manager disregards the first `@` symbol and all characters that follow it, whether or not the `@` introduces the domain portion of an email address. For example, for the IAM Identity Center user name `diego_ramirez@example.com`, the `@example.com` portion is ignored and the user name for Fleet Manager becomes `diego_ramirez`. For `diego_r@mirez@example.com`, Fleet Manager disregards `@mirez@example.com`, and the user name for Fleet Manager becomes `diego_r`.
+ When a connection is authenticated using IAM Identity Center, Remote Desktop creates a local Windows user in the instance’s Local Administrators group. This user persists after the remote connection has ended. 
+ Remote Desktop does not allow IAM Identity Center authentication for nodes that are Microsoft Active Directory domain controllers.
+ Although Remote Desktop allows you to use IAM Identity Center authentication for nodes *joined* to an Active Directory domain, we do not recommend doing so. This authentication method grants administrative permissions to users which might override more restrictive permissions granted by the domain.

## Remote connection duration and concurrency


The following conditions apply to active Remote Desktop connections:
+ **Connection duration**

  By default, a Remote Desktop connection is disconnected after 60 minutes. To keep a connection open, choose **Renew session** before the session ends to reset the duration timer.
+ **Connection timeout**

  A Remote Desktop connection disconnects after it has been idle for more than 10 minutes.
+ **Connection persistence**

  After you connect to a Windows Server using Remote Desktop, the connection persists until the maximum connection duration (60 minutes) or the idle timeout limit (10 minutes) is reached. Connection duration is not determined by the duration of your AWS Identity and Access Management (IAM) credentials; the connection persists after your IAM credentials expire as long as neither limit has been reached. When using Remote Desktop, you should end your connection by leaving the browser page after your IAM credentials expire.
+ **Concurrent connections**

  By default, you can have a maximum of 5 active Remote Desktop connections at one time for the same AWS account and AWS Region. To request a service quota increase of up to 50 concurrent connections, see [Requesting a quota increase](https://docs.aws.amazon.com/servicequotas/latest/userguide/request-quota-increase.html) in the *Service Quotas User Guide*.
**Note**  
The standard license for Windows Server allows for two concurrent RDP connections. To support more connections, you must purchase additional Client Access Licenses (CALs) from Microsoft or Microsoft Remote Desktop Services licenses from AWS. For more information on supplemental licensing, see the following topics:  
[Client Access Licenses and Management Licenses](https://www.microsoft.com/en-us/licensing/product-licensing/client-access-license) on the Microsoft website
[Use License Manager user-based subscriptions for supported software products](https://docs.aws.amazon.com/license-manager/latest/userguide/user-based-subscriptions.html) in the *License Manager User Guide*
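If you manage quotas from the AWS CLI, you can look up the current connection limit before requesting an increase. This is a sketch only; the `ssm-guiconnect` service code is an assumption and might differ in your account, so verify it with `list-services` first.

```shell
# List the services registered with Service Quotas and look for the
# Fleet Manager / GUI Connect entry (filter string is illustrative).
aws service-quotas list-services \
    --query "Services[?contains(ServiceName, 'Systems Manager')]"

# Then list that service's quotas to find the concurrent-connections limit
# (the service code below is an assumption).
aws service-quotas list-service-quotas --service-code ssm-guiconnect
```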

## Systems Manager GUI Connect handling of AWS IAM Identity Center attributes


Systems Manager GUI Connect is the API that supports Fleet Manager connections to EC2 instances using RDP. The following IAM Identity Center user data is retained after a connection is closed:
+ `username`

Systems Manager GUI Connect encrypts this identity attribute at rest using an AWS managed key by default. Customer managed keys are not supported for encrypting this attribute in Systems Manager GUI Connect. If you delete a user in your IAM Identity Center instance, Systems Manager GUI Connect continues to retain the `username` attribute associated with that user for 7 years, after which it is deleted. This data is retained to support auditing events, such as listing Systems Manager GUI Connect connection history. The data can't be deleted manually.

## Connect to a managed node using Remote Desktop


**Browser copy/paste support for text**  
Using the Google Chrome and Microsoft Edge browsers, you can copy and paste text from a managed node to your local machine, and from your local machine to a managed node that you are connected to.

Using the Mozilla Firefox browser, you can copy and paste text from a managed node to your local machine only. Copying from your local machine to the managed node is not supported.

**To connect to a managed node using Fleet Manager Remote Desktop**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the node that you want to connect to. You can select either the check box or the node name.

1. On the **Node actions** menu, choose **Connect with Remote Desktop**.

1. Choose your preferred **Authentication type**. If you choose **User credentials**, enter the user name and password for a Windows user account on the node that you're connecting to. If you choose **Key pair**, you can provide authentication using one of the following methods:

   1. Choose **Browse local machine** if you want to select the PEM key associated with your instance from your local file system.

      - or -

   1. Choose **Paste key pair content** if you want to copy the contents of the PEM file and paste them into the provided field.

1. Select **Connect**.

1. To choose your preferred display resolution, in the **Actions** menu, choose **Resolutions**, and then select from the following:
   + **Adapt Automatically**
   + **1920 x 1080**
   + **1400 x 900**
   + **1366 x 768**
   + **800 x 600**

   The **Adapt Automatically** option sets the resolution based on your detected screen size.

## Viewing information about current and completed connections


You can use the Fleet Manager section of the Systems Manager console to view information about RDP connections that have been made in your account. Using a set of filters, you can limit the displayed connections by time range, specific instance, the user who made the connection, or connection status. The console also provides tabs that show all currently active connections and all past connections.

**To view information about current and completed connections**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose **Account management, Connect with Remote Desktop**.

1. Choose one of the following tabs:
   + **Active connections**
   + **Connection history**

1. To further narrow the list of connection results displayed, specify one or more filters in the search (![\[\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/search-icon.png)) box. You can also enter a free-text search term.

# Managing Amazon EBS volumes on managed instances
Managing Amazon EBS volumes

[Amazon Elastic Block Store](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html) (Amazon EBS) provides block-level storage volumes for use with Amazon Elastic Compute Cloud (Amazon EC2) instances. EBS volumes behave like raw, unformatted block devices. You can mount these volumes as devices on your instances.

You can use Fleet Manager, a tool in AWS Systems Manager, to manage Amazon EBS volumes on your managed instances. For example, you can initialize an EBS volume, format a partition, and mount the volume to make it available for use.

**Note**  
Fleet Manager currently supports Amazon EBS volume management for Windows Server instances only.
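Before working with a volume in the console, you may find it helpful to list the EBS volumes attached to an instance from the AWS CLI. A minimal sketch, in which the instance ID is a placeholder:

```shell
# List the EBS volumes attached to a specific instance
# (the instance ID is a placeholder).
aws ec2 describe-volumes \
    --filters Name=attachment.instance-id,Values=i-0123456789abcdef0 \
    --query "Volumes[].{ID:VolumeId,Size:Size,State:State,Device:Attachments[0].Device}" \
    --output table
```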

## View EBS volume details


**To view details for an EBS volume with Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed instance for which you want to view EBS volume details.

1. Choose **View details**.

1. Choose **Tools, EBS volumes**.

1. To view details for an EBS volume, choose its ID in the **Volume ID** column.

## Initialize and format an EBS volume


**To initialize and format an EBS volume with Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed instance for which you want to initialize, format, and mount an EBS volume. You can initialize an EBS volume only if its disk is empty.

1. Choose **View details**.

1. In the **Tools** menu, choose **EBS volumes**.

1. Choose the button next to the EBS volume you want to initialize and format.

1. Choose **Initialize and format**.

1. In **Partition style**, choose the partition style you want to use for the EBS volume.

1. (Optional) Choose a **Drive letter** for the partition.

1. (Optional) Enter a **Partition name** to identify the partition.

1. Choose the **File system** to use to organize files and data stored in the partition.

1. Choose **Confirm** to make the EBS volume available for use. You can't change the partition configuration from the AWS Management Console after confirming. However, you can use SSH or RDP to log in to the instance and change the partition configuration.

# Accessing the Red Hat Knowledge base portal


You can use Fleet Manager, a tool in AWS Systems Manager, to access the Knowledge base portal if you are a Red Hat customer. You are considered a Red Hat customer if you run Red Hat Enterprise Linux (RHEL) instances or use RHEL services on AWS. The Knowledge base portal includes binaries, knowledge-sharing articles, and discussion forums for community support that are available only to licensed Red Hat customers.

In addition to the required AWS Identity and Access Management (IAM) permissions for Systems Manager and Fleet Manager, the user or role you use to access the console must allow the `rhelkb:GetRhelURL` action to access the Knowledge base portal.
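As a sketch, a customer managed policy granting this action could be created as follows; the policy name is a placeholder, and you would still attach the policy to your user or role afterward.

```shell
# Create a customer managed policy that allows the rhelkb:GetRhelURL action.
cat > rhelkb-policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "rhelkb:GetRhelURL",
            "Resource": "*"
        }
    ]
}
EOF

# The policy name is a placeholder.
aws iam create-policy \
    --policy-name FleetManagerRhelKbAccess \
    --policy-document file://rhelkb-policy.json
```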

**To access the Red Hat Knowledge base portal**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the RHEL instance you want to use to connect to the Red Hat Knowledge base portal.

1. Choose **Account management**, **Access Red Hat Knowledgebase** to open the Red Hat Knowledge base page.

If you use RHEL on AWS to run fully supported RHEL workloads, you can also access the Red Hat Knowledge base through Red Hat's website by using your AWS credentials.

# Troubleshooting managed node availability


For several AWS Systems Manager tools like Run Command, Distributor, and Session Manager, you can choose to manually select the managed nodes on which you want to run an operation. In cases like these, after you specify that you want to choose nodes manually, the system displays a list of managed nodes where you can run the operation.

This topic provides information to help you diagnose why a managed node *that you have confirmed is running* isn't included in your lists of managed nodes in Systems Manager. 

In order for a node to be managed by Systems Manager and available in lists of managed nodes, it must meet the following requirements:
+ SSM Agent must be installed and running on the node with a supported operating system.
**Note**  
Some AWS managed Amazon Machine Images (AMIs) are configured to launch instances with [SSM Agent](ssm-agent.md) preinstalled. (You can also configure a custom AMI to preinstall SSM Agent.) For more information, see [Find AMIs with the SSM Agent preinstalled](ami-preinstalled-agent.md).
+ For Amazon Elastic Compute Cloud (Amazon EC2) instances, you must attach an AWS Identity and Access Management (IAM) instance profile to the instance. The instance profile enables the instance to communicate with the Systems Manager service. If you don't attach an instance profile to the instance, you can register it using a [hybrid activation](activations.md) instead, though this is not a common scenario.
+ SSM Agent must be able to connect to a Systems Manager endpoint to register itself with the service. Thereafter, the managed node must be available to the service, which is confirmed by the service sending a signal every five minutes to check the instance's health.
+ After the status of a managed node has been `Connection Lost` for at least 30 days, the node might no longer be listed in the Fleet Manager console. To restore it to the list, the issue that caused the lost connection must be resolved.
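To check a node's registration and connection status from the AWS CLI, you can query its instance information. The following is a sketch, in which the instance ID is a placeholder.

```shell
# A PingStatus of "Online" indicates the agent's health check succeeded recently;
# "ConnectionLost" or an empty result points to a registration or connectivity problem.
aws ssm describe-instance-information \
    --filters "Key=InstanceIds,Values=i-0123456789abcdefa" \
    --query "InstanceInformationList[].{Id:InstanceId,Ping:PingStatus,Agent:AgentVersion}" \
    --output table
```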

After you verify that a managed node is running, you can use the following command to check whether SSM Agent successfully registered with the Systems Manager service. This command doesn't return results until a successful registration has taken place.

------
#### [ Linux & macOS ]

```
aws ssm describe-instance-associations-status \
    --instance-id instance-id
```

------
#### [ Windows ]

```
aws ssm describe-instance-associations-status ^
    --instance-id instance-id
```

------
#### [ PowerShell ]

```
Get-SSMInstanceAssociationsStatus `
    -InstanceId instance-id
```

------

If registration was successful and the managed node is now available for Systems Manager operations, the command returns results similar to the following.

```
{
    "InstanceAssociationStatusInfos": [
        {
            "AssociationId": "fa262de1-6150-4a90-8f53-d7eb5EXAMPLE",
            "Name": "AWS-GatherSoftwareInventory",
            "DocumentVersion": "1",
            "AssociationVersion": "1",
            "InstanceId": "i-02573cafcfEXAMPLE",
            "Status": "Pending",
            "DetailedStatus": "Associated"
        },
        {
            "AssociationId": "f9ec7a0f-6104-4273-8975-82e34EXAMPLE",
            "Name": "AWS-RunPatchBaseline",
            "DocumentVersion": "1",
            "AssociationVersion": "1",
            "InstanceId": "i-02573cafcfEXAMPLE",
            "Status": "Queued",
            "AssociationName": "SystemAssociationForScanningPatches"
        }
    ]
}
```

If registration hasn't completed yet or was unsuccessful, the command returns results similar to the following:

```
{
    "InstanceAssociationStatusInfos": []
}
```

If the command doesn't return results after about five minutes, use the following information to help you troubleshoot problems with your managed nodes.

**Topics**
+ [

## Solution 1: Verify that SSM Agent is installed and running on the managed node
](#instances-missing-solution-1)
+ [

## Solution 2: Verify that an IAM instance profile has been specified for the instance (EC2 instances only)
](#instances-missing-solution-2)
+ [

## Solution 3: Verify service endpoint connectivity
](#instances-missing-solution-3)
+ [

## Solution 4: Verify target operating system support
](#instances-missing-solution-4)
+ [

## Solution 5: Verify you're working in the same AWS Region as the Amazon EC2 instance
](#instances-missing-solution-5)
+ [

## Solution 6: Verify the proxy configuration you applied to SSM Agent on your managed node
](#instances-missing-solution-6)
+ [

## Solution 7: Install a TLS certificate on managed instances
](#hybrid-tls-certificate)
+ [

# Troubleshooting managed node availability using `ssm-cli`
](troubleshooting-managed-nodes-using-ssm-cli.md)

## Solution 1: Verify that SSM Agent is installed and running on the managed node


Make sure the latest version of SSM Agent is installed and running on the managed node.

To determine whether SSM Agent is installed and running on a managed node, see [Checking SSM Agent status and starting the agent](ssm-agent-status-and-restart.md).

To install or reinstall SSM Agent on a managed node, see the following topics:
+ [Manually installing and uninstalling SSM Agent on EC2 instances for Linux](manually-install-ssm-agent-linux.md)
+ [How to install the SSM Agent on hybrid Linux nodes](hybrid-multicloud-ssm-agent-install-linux.md)
+ [Manually installing and uninstalling SSM Agent on EC2 instances for Windows Server](manually-install-ssm-agent-windows.md)
+ [How to install the SSM Agent on hybrid Windows nodes ](hybrid-multicloud-ssm-agent-install-windows.md)

## Solution 2: Verify that an IAM instance profile has been specified for the instance (EC2 instances only)


For Amazon Elastic Compute Cloud (Amazon EC2) instances, verify that the instance is configured with an AWS Identity and Access Management (IAM) instance profile that allows the instance to communicate with the Systems Manager API. Also verify that your user has an IAM policy that allows you to communicate with the Systems Manager API.

**Note**  
On-premises servers, edge devices, and virtual machines (VMs) use an IAM service role instead of an instance profile. For more information, see [Create the IAM service role required for Systems Manager in hybrid and multicloud environments](hybrid-multicloud-service-role.md).

**To determine whether an instance profile with the necessary permissions is attached to an EC2 instance**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Instances**.

1. Choose the instance to check for an instance profile.

1. On the **Description** tab in the bottom pane, locate **IAM role** and choose the name of the role.

1. On the role **Summary** page for the instance profile, on the **Permissions** tab, ensure that `AmazonSSMManagedInstanceCore` is listed under **Permissions policies**.

   If a custom policy is used instead, ensure that it provides the same permissions as `AmazonSSMManagedInstanceCore`.

   [Open `AmazonSSMManagedInstanceCore` in the console](https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore$jsonEditor)

   For information about other policies that can be attached to an instance profile for Systems Manager, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md).
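You can perform the same checks from the AWS CLI. A sketch, in which the instance ID and role name are placeholders:

```shell
# Find the instance profile (if any) associated with the instance.
aws ec2 describe-iam-instance-profile-associations \
    --filters Name=instance-id,Values=i-0123456789abcdefa

# List the managed policies attached to the role named in that profile, and
# confirm AmazonSSMManagedInstanceCore (or an equivalent policy) is present.
aws iam list-attached-role-policies --role-name MySSMInstanceRole
```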

## Solution 3: Verify service endpoint connectivity


Verify that the instance has connectivity to the Systems Manager service endpoints. This connectivity is provided by creating and configuring VPC endpoints for Systems Manager, or by allowing HTTPS (port 443) outbound traffic to the service endpoints.

For Amazon EC2 instances, the Systems Manager service endpoint for the AWS Region is used to register the instance if your virtual private cloud (VPC) configuration allows outbound traffic. However, if the VPC configuration the instance was launched in does not allow outbound traffic and you can't change this configuration to allow connectivity to the public service endpoints, you must configure interface endpoints for your VPC instead.

For more information, see [Improve the security of EC2 instances by using VPC endpoints for Systems Manager](setup-create-vpc.md).
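As a rough sketch of the interface endpoint approach, the following creates a VPC endpoint for the `ssm` service; the VPC, subnet, and security group IDs are placeholders.

```shell
# Create an interface VPC endpoint for the Systems Manager service in us-east-2.
# Repeat with com.amazonaws.us-east-2.ec2messages and
# com.amazonaws.us-east-2.ssmmessages, which SSM Agent also requires.
aws ec2 create-vpc-endpoint \
    --vpc-endpoint-type Interface \
    --vpc-id vpc-0123456789abcdef0 \
    --service-name com.amazonaws.us-east-2.ssm \
    --subnet-ids subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0
```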

## Solution 4: Verify target operating system support


Verify that the operation you have chosen can be run on the type of managed node you expect to see listed. Some Systems Manager operations can target only Windows instances or only Linux instances. For example, the Systems Manager (SSM) documents `AWS-InstallPowerShellModule` and `AWS-ConfigureCloudWatch` can be run only on Windows instances. In the **Run a command** page, if you choose either of these documents and select **Choose instances manually**, only your Windows instances are listed and available for selection.
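You can also check which operating systems an SSM document supports from the AWS CLI:

```shell
# List the operating systems an SSM document supports.
# For AWS-ConfigureCloudWatch, the result lists only Windows.
aws ssm describe-document \
    --name "AWS-ConfigureCloudWatch" \
    --query "Document.PlatformTypes"
```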

## Solution 5: Verify you're working in the same AWS Region as the Amazon EC2 instance


Amazon EC2 instances are created and available in specific AWS Regions, such as the US East (Ohio) Region (us-east-2) or Europe (Ireland) Region (eu-west-1). Ensure that you're working in the same AWS Region as the Amazon EC2 instance that you want to work with. For more information, see [Choosing a Region](https://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/getting-started.html#select-region) in *Getting Started with the AWS Management Console*.
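From the AWS CLI, you can confirm both your default Region and whether the instance is visible in the Region you expect; the instance ID and Region below are placeholders.

```shell
# Show the default Region configured for your CLI profile.
aws configure get region

# Verify the instance exists in the Region you expect.
aws ec2 describe-instances \
    --instance-ids i-0123456789abcdefa \
    --region us-east-2 \
    --query "Reservations[].Instances[].State.Name"
```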

## Solution 6: Verify the proxy configuration you applied to SSM Agent on your managed node


Verify that the proxy configuration you applied to SSM Agent on your managed node is correct. If the proxy configuration is incorrect, the node can't connect to the required service endpoints, or Systems Manager might identify the operating system of the managed node incorrectly. For more information, see [Configuring SSM Agent to use a proxy on Linux nodes](configure-proxy-ssm-agent.md) and [Configure SSM Agent to use a proxy for Windows Server instances](configure-proxy-ssm-agent-windows.md).
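On Linux nodes that run SSM Agent under systemd, one quick check is to inspect the effective service configuration for proxy settings. This is a sketch; the exact override location can vary by distribution.

```shell
# Show the effective unit configuration for SSM Agent, including any
# Environment directives (for example, http_proxy or no_proxy settings).
systemctl cat amazon-ssm-agent | grep -i proxy

# After correcting the proxy configuration, reload and restart the agent.
sudo systemctl daemon-reload
sudo systemctl restart amazon-ssm-agent
```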

## Solution 7: Install a TLS certificate on managed instances


A Transport Layer Security (TLS) certificate must be installed on each managed instance you use with AWS Systems Manager. AWS services use these certificates to encrypt calls to other AWS services.

A TLS certificate is already installed by default on each Amazon EC2 instance created from any Amazon Machine Image (AMI). Most modern operating systems include the required TLS certificate from Amazon Trust Services CAs in their trust store.

To verify whether the required certificate is installed on your instance, run the following command based on the operating system of your instance. Be sure to replace the *region* portion of the URL with the AWS Region where your managed instance is located.

------
#### [ Linux & macOS ]

```
curl -L https://ssm.region.amazonaws.com
```

------
#### [ Windows ]

```
Invoke-WebRequest -Uri https://ssm.region.amazonaws.com
```

------

The command should return an `UnknownOperationException` error. If you receive an SSL/TLS error message instead, the required certificate might not be installed.

If you find the required Amazon Trust Services CA certificates aren't installed on your base operating systems, on instances created from AMIs that aren't supplied by Amazon, or on your own on-premises servers and VMs, you must install and allow a certificate from [Amazon Trust Services](https://www.amazontrust.com/repository/), or use AWS Certificate Manager (ACM) to create and manage certificates for a supported integrated service.

Each of your managed instances must have one of the following TLS certificates installed.
+ Amazon Root CA 1
+ Starfield Services Root Certificate Authority - G2
+ Starfield Class 2 Certificate Authority

For information about using ACM, see the *[AWS Certificate Manager User Guide](https://docs.aws.amazon.com/acm/latest/userguide/)*.

If certificates in your computing environment are managed by a Group Policy Object (GPO), then you might need to configure Group Policy to include one of these certificates.

For more information about the Amazon Root and Starfield certificates, see the blog post [How to Prepare for AWS’s Move to Its Own Certificate Authority](https://aws.amazon.com/blogs/security/how-to-prepare-for-aws-move-to-its-own-certificate-authority/).
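To see which certificate authority an endpoint actually presents, you can inspect the certificate chain with OpenSSL; replace `us-east-2` with your Region.

```shell
# Print the issuer and subject of the certificate presented by the
# Systems Manager endpoint; the issuer should chain to one of the
# Amazon Trust Services roots listed above.
openssl s_client -connect ssm.us-east-2.amazonaws.com:443 \
    -servername ssm.us-east-2.amazonaws.com </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -subject
```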

# Troubleshooting managed node availability using `ssm-cli`


The `ssm-cli` is a standalone command line tool included in the SSM Agent installation. When you install SSM Agent 3.1.501.0 or later on a machine, you can run `ssm-cli` commands on that machine. The output of those commands helps you determine whether the machine meets the minimum requirements for an Amazon EC2 instance or non-EC2 machine to be managed by AWS Systems Manager, and therefore added to lists of managed nodes in Systems Manager. (SSM Agent version 3.1.501.0 was released in November 2021.)

**Minimum requirements**  
For an Amazon EC2 instance or non-EC2 machine to be managed by AWS Systems Manager, and available in lists of managed nodes, it must meet three primary requirements:
+ SSM Agent must be installed and running on a machine with a [supported operating system](operating-systems-and-machine-types.md#prereqs-operating-systems).

  Some AWS managed Amazon Machine Images (AMIs) for EC2 are configured to launch instances with [SSM Agent](ssm-agent.md) preinstalled. (You can also configure a custom AMI to preinstall SSM Agent.) For more information, see [Find AMIs with the SSM Agent preinstalled](ami-preinstalled-agent.md).
+ An AWS Identity and Access Management (IAM) instance profile (for EC2 instances) or IAM service role (for non-EC2 machines) that supplies the required permissions to communicate with the Systems Manager service must be attached to the machine.
+ SSM Agent must be able to connect to a Systems Manager endpoint to register itself with the service. Thereafter, the managed node must be available to the service, which is confirmed by the service sending a signal every five minutes to check the managed node's health.

**Preconfigured commands in `ssm-cli`**  
The `get-diagnostics` option runs a set of preconfigured commands that gather the information needed to help you diagnose why a machine that you have confirmed is running isn't included in your lists of managed nodes in Systems Manager.

On the machine, run the following command to use `ssm-cli` to help you troubleshoot managed node availability. 

------
#### [ Linux & macOS ]

```
ssm-cli get-diagnostics --output table
```

------
#### [ Windows ]

On Windows Server machines, you must navigate to the `C:\Program Files\Amazon\SSM` directory before running the command.

```
ssm-cli.exe get-diagnostics --output table
```

------
#### [ PowerShell ]

On Windows Server machines, you must navigate to the `C:\Program Files\Amazon\SSM` directory before running the command.

```
.\ssm-cli.exe get-diagnostics --output table
```

------

The command returns output as a table similar to the following. 

**Note**  
Connectivity checks to the `ssmmessages`, `s3`, `kms`, `logs`, and `monitoring` endpoints are for additional optional features such as Session Manager that can log to Amazon Simple Storage Service (Amazon S3) or Amazon CloudWatch Logs, and use AWS Key Management Service (AWS KMS) encryption.

------
#### [ Linux & macOS ]

```
[root@instance]# ssm-cli get-diagnostics --output table
┌───────────────────────────────────────┬─────────┬───────────────────────────────────────────────────────────────────────┐
│ Check                                 │ Status  │ Note                                                                  │
├───────────────────────────────────────┼─────────┼───────────────────────────────────────────────────────────────────────┤
│ EC2 IMDS                              │ Success │ IMDS is accessible and has instance id i-0123456789abcdefa in Region  │
│                                       │         │ us-east-2                                                             │
├───────────────────────────────────────┼─────────┼───────────────────────────────────────────────────────────────────────┤
│ Hybrid instance registration          │ Skipped │ Instance does not have hybrid registration                            │
├───────────────────────────────────────┼─────────┼───────────────────────────────────────────────────────────────────────┤
│ Connectivity to ssm endpoint          │ Success │ ssm.us-east-2.amazonaws.com is reachable                              │
├───────────────────────────────────────┼─────────┼───────────────────────────────────────────────────────────────────────┤
│ Connectivity to ec2messages endpoint  │ Success │ ec2messages.us-east-2.amazonaws.com is reachable                      │
├───────────────────────────────────────┼─────────┼───────────────────────────────────────────────────────────────────────┤
│ Connectivity to ssmmessages endpoint  │ Success │ ssmmessages.us-east-2.amazonaws.com is reachable                      │
├───────────────────────────────────────┼─────────┼───────────────────────────────────────────────────────────────────────┤
│ Connectivity to s3 endpoint           │ Success │ s3.us-east-2.amazonaws.com is reachable                               │
├───────────────────────────────────────┼─────────┼───────────────────────────────────────────────────────────────────────┤
│ Connectivity to kms endpoint          │ Success │ kms.us-east-2.amazonaws.com is reachable                              │
├───────────────────────────────────────┼─────────┼───────────────────────────────────────────────────────────────────────┤
│ Connectivity to logs endpoint         │ Success │ logs.us-east-2.amazonaws.com is reachable                             │
├───────────────────────────────────────┼─────────┼───────────────────────────────────────────────────────────────────────┤
│ Connectivity to monitoring endpoint   │ Success │ monitoring.us-east-2.amazonaws.com is reachable                       │
├───────────────────────────────────────┼─────────┼───────────────────────────────────────────────────────────────────────┤
│ AWS Credentials                       │ Success │ Credentials are for                                                   │
│                                       │         │ arn:aws:sts::123456789012:assumed-role/Fullaccess/i-0123456789abcdefa │
│                                       │         │ and will expire at 2021-08-17 18:47:49 +0000 UTC                      │
├───────────────────────────────────────┼─────────┼───────────────────────────────────────────────────────────────────────┤
│ Agent service                         │ Success │ Agent service is running and is running as expected user              │
├───────────────────────────────────────┼─────────┼───────────────────────────────────────────────────────────────────────┤
│ Proxy configuration                   │ Skipped │ No proxy configuration detected                                       │
├───────────────────────────────────────┼─────────┼───────────────────────────────────────────────────────────────────────┤
│ SSM Agent version                     │ Success │ SSM Agent version is 3.0.1209.0, latest available agent version is    │
│                                       │         │ 3.1.192.0                                                             │
└───────────────────────────────────────┴─────────┴───────────────────────────────────────────────────────────────────────┘
```

------
#### [ Windows Server and PowerShell ]

```
PS C:\Program Files\Amazon\SSM> .\ssm-cli.exe get-diagnostics --output table      
┌───────────────────────────────────────┬─────────┬─────────────────────────────────────────────────────────────────────┐
│ Check                                 │ Status  │ Note                                                                │
├───────────────────────────────────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
│ EC2 IMDS                              │ Success │ IMDS is accessible and has instance id i-0123456789EXAMPLE in       │
│                                       │         │ Region us-east-2                                                    │
├───────────────────────────────────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
│ Hybrid instance registration          │ Skipped │ Instance does not have hybrid registration                          │
├───────────────────────────────────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
│ Connectivity to ssm endpoint          │ Success │ ssm.us-east-2.amazonaws.com is reachable                            │
├───────────────────────────────────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
│ Connectivity to ec2messages endpoint  │ Success │ ec2messages.us-east-2.amazonaws.com is reachable                    │
├───────────────────────────────────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
│ Connectivity to ssmmessages endpoint  │ Success │ ssmmessages.us-east-2.amazonaws.com is reachable                    │
├───────────────────────────────────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
│ Connectivity to s3 endpoint           │ Success │ s3.us-east-2.amazonaws.com is reachable                             │
├───────────────────────────────────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
│ Connectivity to kms endpoint          │ Success │ kms.us-east-2.amazonaws.com is reachable                            │
├───────────────────────────────────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
│ Connectivity to logs endpoint         │ Success │ logs.us-east-2.amazonaws.com is reachable                           │
├───────────────────────────────────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
│ Connectivity to monitoring endpoint   │ Success │ monitoring.us-east-2.amazonaws.com is reachable                     │
├───────────────────────────────────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
│ AWS Credentials                       │ Success │ Credentials are for                                                 │
│                                       │         │  arn:aws:sts::123456789012:assumed-role/SSM-Role/i-123abc45EXAMPLE  │
│                                       │         │  and will expire at 2021-09-02 13:24:42 +0000 UTC                   │
├───────────────────────────────────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
│ Agent service                         │ Success │ Agent service is running and is running as expected user            │
├───────────────────────────────────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
│ Proxy configuration                   │ Skipped │ No proxy configuration detected                                     │
├───────────────────────────────────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
│ Windows sysprep image state           │ Success │ Windows image state value is at desired value IMAGE_STATE_COMPLETE  │
├───────────────────────────────────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
│ SSM Agent version                     │ Success │ SSM Agent version is 3.2.815.0, latest agent version in us-east-2   │
│                                       │         │ is 3.2.985.0                                                        │
└───────────────────────────────────────┴─────────┴─────────────────────────────────────────────────────────────────────┘
```

------

The following table provides additional details for each of the checks performed by `ssm-cli`.


**`ssm-cli` diagnostic checks**  

| Check | Details | 
| --- | --- | 
| Amazon EC2 instance metadata service | Indicates whether the managed node can reach the instance metadata service. A failed test indicates a connectivity issue to http://169.254.169.254, which can be caused by local route tables, proxy settings, or operating system (OS) firewall configurations. | 
| Hybrid instance registration | Indicates whether SSM Agent is registered using a hybrid activation. | 
| Connectivity to ssm endpoint | Indicates whether the node is able to reach the service endpoints for Systems Manager on TCP port 443. A failed test indicates connectivity issues to https://ssm.region.amazonaws.com depending on the AWS Region where the node is located. Connectivity issues can be caused by the VPC configuration including security groups, network access control lists, route tables, or OS firewalls and proxies. | 
| Connectivity to ec2messages endpoint | Indicates whether the node is able to reach the service endpoints for Systems Manager on TCP port 443. A failed test indicates connectivity issues to https://ec2messages.region.amazonaws.com depending on the AWS Region where the node is located. Connectivity issues can be caused by the VPC configuration including security groups, network access control lists, route tables, or OS firewalls and proxies. | 
| Connectivity to ssmmessages endpoint | Indicates whether the node is able to reach the service endpoints for Systems Manager on TCP port 443. A failed test indicates connectivity issues to https://ssmmessages.region.amazonaws.com depending on the AWS Region where the node is located. Connectivity issues can be caused by the VPC configuration including security groups, network access control lists, route tables, or OS firewalls and proxies. | 
| Connectivity to s3 endpoint | Indicates whether the node is able to reach the service endpoint for Amazon Simple Storage Service on TCP port 443. A failed test indicates connectivity issues to https://s3.region.amazonaws.com depending on the AWS Region where the node is located. Connectivity to this endpoint is not required for a node to appear in your managed nodes list. | 
| Connectivity to kms endpoint |  Indicates whether the node is able to reach the service endpoint for AWS Key Management Service on TCP port 443. A failed test indicates connectivity issues to `https://kms.region.amazonaws.com` depending on the AWS Region where the node is located. Connectivity to this endpoint is not required for a node to appear in your managed nodes list.  | 
| Connectivity to logs endpoint | Indicates whether the node is able to reach the service endpoint for Amazon CloudWatch Logs on TCP port 443. A failed test indicates connectivity issues to https://logs.region.amazonaws.com depending on the AWS Region where the node is located. Connectivity to this endpoint is not required for a node to appear in your managed nodes list. | 
| Connectivity to monitoring endpoint | Indicates whether the node is able to reach the service endpoint for Amazon CloudWatch on TCP port 443. A failed test indicates connectivity issues to https://monitoring.region.amazonaws.com depending on the AWS Region where the node is located. Connectivity to this endpoint is not required for a node to appear in your managed nodes list. | 
| AWS Credentials | Indicates whether SSM Agent has the required credentials based on the IAM instance profile (for EC2 instances) or IAM service role (for non-EC2 machines) attached to the machine. A failed test indicates that no IAM instance profile or IAM service role is attached to the machine, or that the attached profile or role doesn't include the required permissions for Systems Manager. | 
| Agent service | Indicates whether SSM Agent service is running, and whether the service is running as root for Linux or macOS, or SYSTEM for Windows Server. A failed test indicates SSM Agent service is not running or is not running as root or SYSTEM. | 
| Proxy configuration | Indicates whether SSM Agent is configured to use a proxy. | 
| Sysprep image state (Windows only) | Indicates the state of Sysprep on the node. SSM Agent won't start on the node if the Sysprep state is a value other than IMAGE\_STATE\_COMPLETE. | 
| SSM Agent version | Indicates whether the latest available version of SSM Agent is installed. | 
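If a connectivity check fails, you can spot-check an individual service endpoint from the node yourself. The following sketch assumes the us-east-2 Region and the `curl` utility; any HTTP response code (even an error such as 404) confirms that the endpoint is reachable on TCP port 443, while a timeout suggests a VPC, firewall, or proxy issue.

```shell
# Hypothetical spot check; substitute your own Region.
# Prints the HTTP response code, or fails after 10 seconds
# if the endpoint can't be reached on port 443.
curl -sS -o /dev/null -w "%{http_code}\n" --max-time 10 \
    https://ssm.us-east-2.amazonaws.com
```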

# AWS Systems Manager Hybrid Activations
Hybrid Activations

To configure non-EC2 machines for use with AWS Systems Manager in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment, you create a *hybrid activation*. Non-EC2 machine types supported as managed nodes include the following:
+ Servers on your own premises (on-premises servers)
+ AWS IoT Greengrass core devices
+ AWS IoT and non-AWS edge devices
+ Virtual machines (VMs), including VMs in other cloud environments

When you run the [https://docs.aws.amazon.com/cli/latest/reference/ssm/create-activation.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/create-activation.html) command to start a hybrid activation process, you receive an activation code and ID in the command response. You then include the activation code and ID with the command to install SSM Agent on the machine, as described in step 3 of [Install SSM Agent on hybrid Linux nodes](hybrid-multicloud-ssm-agent-install-linux.md) and step 4 of [Install SSM Agent on hybrid Windows Server nodes](hybrid-multicloud-ssm-agent-install-windows.md).
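As a sketch of that first step, a hybrid activation might be created with an AWS CLI command like the following. The instance name, role name, registration limit, and Region are placeholder values; the IAM service role must already exist and trust the Systems Manager service.

```shell
# Placeholder values; replace the name, role, and Region with your own.
aws ssm create-activation \
    --default-instance-name "MyOnPremServer" \
    --iam-role "SSMServiceRole" \
    --registration-limit 5 \
    --region us-east-2
# The response contains an ActivationId and ActivationCode, which you
# then supply when installing and registering SSM Agent on the machine.
```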

This activation process applies to all non-EC2 machine types *except* AWS IoT Greengrass core devices. For information about configuring AWS IoT Greengrass core devices for Systems Manager, see [Managing edge devices with Systems Manager](systems-manager-setting-up-edge-devices.md).

**Note**  
Support isn't currently provided for non-EC2 macOS machines.

**About Systems Manager instance tiers**  
AWS Systems Manager offers a standard-instances tier and an advanced-instances tier. Both support managed nodes in your [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment. The standard-instances tier allows you to register a maximum of 1,000 machines per AWS account per AWS Region. If you need to register more than 1,000 machines in a single account and Region, use the advanced-instances tier. You can create as many managed nodes as you like in the advanced-instances tier. All managed nodes configured for Systems Manager are priced on a pay-per-use basis. For more information about enabling the advanced-instances tier, see [Turning on the advanced-instances tier](fleet-manager-enable-advanced-instances-tier.md). For more information about pricing, see [AWS Systems Manager Pricing](https://aws.amazon.com/systems-manager/pricing/).

Note the following additional information about the standard-instances tier and advanced-instances tier:
+ Advanced instances also allow you to connect to your non-EC2 nodes in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment by using AWS Systems Manager Session Manager. Session Manager provides interactive shell access to your instances. For more information, see [AWS Systems Manager Session Manager](session-manager.md).
+ The standard-instances quota also applies to EC2 instances that use a Systems Manager on-premises activation (which isn't a common scenario).
+ To patch applications released by Microsoft on virtual machines (VMs) or on-premises instances, activate the advanced-instances tier. There is a charge to use the advanced-instances tier. There is no additional charge to patch applications released by Microsoft on Amazon Elastic Compute Cloud (Amazon EC2) instances. For more information, see [Patching applications released by Microsoft on Windows Server](patch-manager-patching-windows-applications.md).
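As a sketch, the advanced-instances tier can be turned on from the AWS CLI by updating the activation-tier service setting; the Region below is a placeholder, and the console procedure linked above is the documented path.

```shell
# Placeholder Region; this account-level setting is applied per Region.
aws ssm update-service-setting \
    --setting-id "/ssm/managed-instance/activation-tier" \
    --setting-value "advanced" \
    --region us-east-2
# Confirm the change:
aws ssm get-service-setting \
    --setting-id "/ssm/managed-instance/activation-tier" \
    --region us-east-2
```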

# AWS Systems Manager Inventory
Inventory

AWS Systems Manager Inventory provides visibility into your AWS computing environment. You can use Inventory to collect *metadata* from your managed nodes. You can store this metadata in a central Amazon Simple Storage Service (Amazon S3) bucket, and then use built-in tools to query the data and quickly determine which nodes are running the software and configurations required by your software policy, and which nodes need to be updated. You can configure Inventory on all of your managed nodes by using a one-click procedure. You can also configure and view inventory data from multiple AWS Regions and AWS accounts by using Amazon Athena. To get started with Inventory, open the [Systems Manager console](https://console.aws.amazon.com//systems-manager/inventory). In the navigation pane, choose **Inventory**.
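For example, after Inventory is configured, you can retrieve the collected application metadata for a single node with the AWS CLI; the instance ID and Region below are placeholders.

```shell
# Placeholder instance ID; replace with one of your managed node IDs.
aws ssm list-inventory-entries \
    --instance-id "i-0123456789abcdef0" \
    --type-name "AWS:Application" \
    --region us-east-2
```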

If the pre-configured metadata types collected by Systems Manager Inventory don't meet your needs, then you can create custom inventory. Custom inventory is simply a JSON file with information that you provide and add to the managed node in a specific directory. When Systems Manager Inventory collects data, it captures this custom inventory data. For example, if you run a large data center, you can specify the rack location of each of your servers as custom inventory. You can then view the rack space data when you view other inventory data.
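As a sketch of the rack-location example, a custom inventory file is a JSON document like the following; the type name and content keys are illustrative, and custom type names must begin with `Custom:`. You place the file in the SSM Agent custom inventory directory on the managed node.

```json
{
  "SchemaVersion": "1.0",
  "TypeName": "Custom:RackInformation",
  "Content": {
    "Location": "US-EAST-02.CMH.RACK1"
  }
}
```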

**Important**  
Systems Manager Inventory collects *only* metadata from your managed nodes. Inventory doesn't access proprietary information or data.

The following table describes the types of data you can collect with Systems Manager Inventory. The table also describes different offerings for targeting nodes and the collection intervals you can specify.


****  

| Configuration | Details | 
| --- | --- | 
|  Metadata types  |  You can configure Inventory to collect the following types of data: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-inventory.html)  To view a list of all metadata collected by Inventory, see [Metadata collected by Inventory](inventory-schema.md).   | 
|  Nodes to target  |  You can choose to inventory all managed nodes in your AWS account, individually select nodes, or target groups of nodes by using tags. For more information about collecting inventory data from all of your managed nodes, see [Inventory all managed nodes in your AWS account](inventory-collection.md#inventory-management-inventory-all).  | 
|  When to collect information  |  You can specify a collection interval in terms of minutes, hours, and days. The shortest collection interval is every 30 minutes.   | 

**Note**  
Depending on the amount of data collected, the system can take several minutes to report the data to the output you specified. After the information is collected, the data is sent over a secure HTTPS channel to a plain-text AWS store that is accessible only from your AWS account. 

You can view the data in the Systems Manager console on the **Inventory** page, which includes several predefined cards to help you query the data.

![\[Systems Manager Inventory cards in the Systems Manager console.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/inventory-cards.png)


**Note**  
Inventory cards automatically filter out Amazon EC2 managed instances with a state of *Terminated* or *Stopped*. For on-premises and AWS IoT Greengrass core device managed nodes, Inventory cards automatically filter out nodes with a state of *Terminated*. 

If you create a resource data sync to synchronize and store all of your data in a single Amazon S3 bucket, then you can drill down into the data on the **Inventory Detailed View** page. For more information, see [Querying inventory data from multiple Regions and accounts](systems-manager-inventory-query.md).
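A resource data sync of this kind can be created from the AWS CLI as sketched below; the sync name, bucket name, and Region are placeholders, and the S3 bucket must already exist with a bucket policy that allows Systems Manager to write to it.

```shell
# Placeholder names; the S3 bucket must already exist and permit
# Systems Manager to write inventory data to it.
aws ssm create-resource-data-sync \
    --sync-name "InventoryDataSync" \
    --s3-destination "BucketName=amzn-s3-demo-bucket,SyncFormat=JsonSerDe,Region=us-east-2" \
    --region us-east-2
```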

**EventBridge support**  
This Systems Manager tool is supported as an *event* type in Amazon EventBridge rules. For information, see [Monitoring Systems Manager events with Amazon EventBridge](monitoring-eventbridge-events.md) and [Reference: Amazon EventBridge event patterns and types for Systems Manager](reference-eventbridge-events.md).

**Topics**
+ [

# Learn more about Systems Manager Inventory
](inventory-about.md)
+ [

# Setting up Systems Manager Inventory
](systems-manager-inventory-setting-up.md)
+ [

# Configuring inventory collection
](inventory-collection.md)
+ [

# Querying inventory data from multiple Regions and accounts
](systems-manager-inventory-query.md)
+ [

# Querying an inventory collection by using filters
](inventory-query-filters.md)
+ [

# Aggregating inventory data
](inventory-aggregate.md)
+ [

# Working with custom inventory
](inventory-custom.md)
+ [

# Viewing inventory history and change tracking
](inventory-history.md)
+ [

# Stopping data collection and deleting inventory data
](systems-manager-inventory-delete.md)
+ [

# Assigning custom inventory metadata to a managed node
](inventory-custom-metadata.md)
+ [

# Using the AWS CLI to configure inventory data collection
](inventory-collection-cli.md)
+ [

# Walkthrough: Using resource data sync to aggregate inventory data
](inventory-resource-data-sync.md)
+ [

# Troubleshooting problems with Systems Manager Inventory
](syman-inventory-troubleshooting.md)

# Learn more about Systems Manager Inventory
Learn more about Inventory

When you configure AWS Systems Manager Inventory, you specify the type of metadata to collect, the managed nodes from where the metadata should be collected, and a schedule for metadata collection. These configurations are saved with your AWS account as an AWS Systems Manager State Manager association. An association is simply a configuration.

**Note**  
Inventory only collects metadata. It doesn't collect any personal or proprietary data.

**Topics**
+ [

# Metadata collected by Inventory
](inventory-schema.md)
+ [

# Working with file and Windows registry inventory
](inventory-file-and-registry.md)

# Metadata collected by Inventory


The following sample shows the complete list of metadata collected by each AWS Systems Manager Inventory plugin.

```
{
    "typeName": "AWS:InstanceInformation",
    "version": "1.0",
    "attributes":[
      { "name": "AgentType",                              "dataType" : "STRING"},
      { "name": "AgentVersion",                           "dataType" : "STRING"},
      { "name": "ComputerName",                           "dataType" : "STRING"},
      { "name": "InstanceId",                             "dataType" : "STRING"},
      { "name": "IpAddress",                              "dataType" : "STRING"},
      { "name": "PlatformName",                           "dataType" : "STRING"},
      { "name": "PlatformType",                           "dataType" : "STRING"},
      { "name": "PlatformVersion",                        "dataType" : "STRING"},
      { "name": "ResourceType",                           "dataType" : "STRING"},
      { "name": "AgentStatus",                            "dataType" : "STRING"},
      { "name": "InstanceStatus",                         "dataType" : "STRING"}    
    ]
  },
  {
    "typeName" : "AWS:Application",
    "version": "1.1",
    "attributes":[
      { "name": "Name",               "dataType": "STRING"},
      { "name": "ApplicationType",    "dataType": "STRING"},
      { "name": "Publisher",          "dataType": "STRING"},
      { "name": "Version",            "dataType": "STRING"},
      { "name": "Release",            "dataType": "STRING"},
      { "name": "Epoch",              "dataType": "STRING"},
      { "name": "InstalledTime",      "dataType": "STRING"},
      { "name": "Architecture",       "dataType": "STRING"},
      { "name": "URL",                "dataType": "STRING"},
      { "name": "Summary",            "dataType": "STRING"},
      { "name": "PackageId",          "dataType": "STRING"}
    ]
  },
  {
    "typeName" : "AWS:File",
    "version": "1.0",
    "attributes":[
      { "name": "Name",               "dataType": "STRING"},
      { "name": "Size",               "dataType": "STRING"},
      { "name": "Description",        "dataType": "STRING"},
      { "name": "FileVersion",        "dataType": "STRING"},
      { "name": "InstalledDate",      "dataType": "STRING"},
      { "name": "ModificationTime",   "dataType": "STRING"},
      { "name": "LastAccessTime",     "dataType": "STRING"},
      { "name": "ProductName",        "dataType": "STRING"},
      { "name": "InstalledDir",       "dataType": "STRING"},
      { "name": "ProductLanguage",    "dataType": "STRING"},
      { "name": "CompanyName",        "dataType": "STRING"},
      { "name": "ProductVersion",       "dataType": "STRING"}
    ]
  },
  {
    "typeName": "AWS:AWSComponent",
    "version": "1.0",
    "attributes":[
      { "name": "Name",               "dataType": "STRING"},
      { "name": "ApplicationType",    "dataType": "STRING"},
      { "name": "Publisher",          "dataType": "STRING"},
      { "name": "Version",            "dataType": "STRING"},
      { "name": "InstalledTime",      "dataType": "STRING"},
      { "name": "Architecture",       "dataType": "STRING"},
      { "name": "URL",                "dataType": "STRING"}
    ]
  },
  {
    "typeName": "AWS:WindowsUpdate",
    "version":"1.0",
    "attributes":[
      { "name": "HotFixId",           "dataType": "STRING"},
      { "name": "Description",        "dataType": "STRING"},
      { "name": "InstalledTime",      "dataType": "STRING"},
      { "name": "InstalledBy",        "dataType": "STRING"}
    ]
  },
  {
    "typeName": "AWS:Network",
    "version":"1.0",
    "attributes":[
      { "name": "Name",               "dataType": "STRING"},
      { "name": "SubnetMask",         "dataType": "STRING"},
      { "name": "Gateway",            "dataType": "STRING"},
      { "name": "DHCPServer",         "dataType": "STRING"},
      { "name": "DNSServer",          "dataType": "STRING"},
      { "name": "MacAddress",         "dataType": "STRING"},
      { "name": "IPV4",               "dataType": "STRING"},
      { "name": "IPV6",               "dataType": "STRING"}
    ]
  },
  {
    "typeName": "AWS:PatchSummary",
    "version":"1.0",
    "attributes":[
      { "name": "PatchGroup",                       "dataType": "STRING"},
      { "name": "BaselineId",                       "dataType": "STRING"},
      { "name": "SnapshotId",                       "dataType": "STRING"},
      { "name": "OwnerInformation",                 "dataType": "STRING"},
      { "name": "InstalledCount",                   "dataType": "NUMBER"},
      { "name": "InstalledPendingRebootCount",      "dataType": "NUMBER"},
      { "name": "InstalledOtherCount",              "dataType": "NUMBER"},
      { "name": "InstalledRejectedCount",           "dataType": "NUMBER"},
      { "name": "NotApplicableCount",               "dataType": "NUMBER"},
      { "name": "UnreportedNotApplicableCount",     "dataType": "NUMBER"},
      { "name": "MissingCount",                     "dataType": "NUMBER"},
      { "name": "FailedCount",                      "dataType": "NUMBER"},
      { "name": "OperationType",                    "dataType": "STRING"},
      { "name": "OperationStartTime",               "dataType": "STRING"},
      { "name": "OperationEndTime",                 "dataType": "STRING"},
      { "name": "InstallOverrideList",              "dataType": "STRING"},
      { "name": "RebootOption",                     "dataType": "STRING"},
      { "name": "LastNoRebootInstallOperationTime", "dataType": "STRING"},
      { "name": "ExecutionId",                      "dataType": "STRING",                 "isOptional": "true"},
      { "name": "NonCompliantSeverity",             "dataType": "STRING",                 "isOptional": "true"},
      { "name": "SecurityNonCompliantCount",        "dataType": "NUMBER",                 "isOptional": "true"},
      { "name": "CriticalNonCompliantCount",        "dataType": "NUMBER",                 "isOptional": "true"},
      { "name": "OtherNonCompliantCount",           "dataType": "NUMBER",                 "isOptional": "true"}
    ]
  },
  {
    "typeName": "AWS:PatchCompliance",
    "version":"1.0",
    "attributes":[
      { "name": "Title",                        "dataType": "STRING"},
      { "name": "KBId",                         "dataType": "STRING"},
      { "name": "Classification",               "dataType": "STRING"},
      { "name": "Severity",                     "dataType": "STRING"},
      { "name": "State",                        "dataType": "STRING"},
      { "name": "InstalledTime",                "dataType": "STRING"}
    ]
  },
  {
    "typeName": "AWS:ComplianceItem",
    "version":"1.0",
    "attributes":[
      { "name": "ComplianceType",               "dataType": "STRING",                 "isContext": "true"},
      { "name": "ExecutionId",                  "dataType": "STRING",                 "isContext": "true"},
      { "name": "ExecutionType",                "dataType": "STRING",                 "isContext": "true"},
      { "name": "ExecutionTime",                "dataType": "STRING",                 "isContext": "true"},
      { "name": "Id",                           "dataType": "STRING"},
      { "name": "Title",                        "dataType": "STRING"},
      { "name": "Status",                       "dataType": "STRING"},
      { "name": "Severity",                     "dataType": "STRING"},
      { "name": "DocumentName",                 "dataType": "STRING"},
      { "name": "DocumentVersion",              "dataType": "STRING"},
      { "name": "Classification",               "dataType": "STRING"},
      { "name": "PatchBaselineId",              "dataType": "STRING"},
      { "name": "PatchSeverity",                "dataType": "STRING"},
      { "name": "PatchState",                   "dataType": "STRING"},
      { "name": "PatchGroup",                   "dataType": "STRING"},
      { "name": "InstalledTime",                "dataType": "STRING"},
      { "name": "InstallOverrideList",          "dataType": "STRING",                 "isOptional": "true"},
      { "name": "DetailedText",                 "dataType": "STRING",                 "isOptional": "true"},
      { "name": "DetailedLink",                 "dataType": "STRING",                 "isOptional": "true"},
      { "name": "CVEIds",                       "dataType": "STRING",                 "isOptional": "true"}
    ]
  },
  {
    "typeName": "AWS:ComplianceSummary",
    "version":"1.0",
    "attributes":[
      { "name": "ComplianceType",                 "dataType": "STRING"},
      { "name": "PatchGroup",                     "dataType": "STRING"},
      { "name": "PatchBaselineId",                "dataType": "STRING"},
      { "name": "Status",                         "dataType": "STRING"},
      { "name": "OverallSeverity",                "dataType": "STRING"},
      { "name": "ExecutionId",                    "dataType": "STRING"},
      { "name": "ExecutionType",                  "dataType": "STRING"},
      { "name": "ExecutionTime",                  "dataType": "STRING"},
      { "name": "CompliantCriticalCount",         "dataType": "NUMBER"},
      { "name": "CompliantHighCount",             "dataType": "NUMBER"},
      { "name": "CompliantMediumCount",           "dataType": "NUMBER"},
      { "name": "CompliantLowCount",              "dataType": "NUMBER"},
      { "name": "CompliantInformationalCount",    "dataType": "NUMBER"},
      { "name": "CompliantUnspecifiedCount",      "dataType": "NUMBER"},
      { "name": "NonCompliantCriticalCount",      "dataType": "NUMBER"},
      { "name": "NonCompliantHighCount",          "dataType": "NUMBER"},
      { "name": "NonCompliantMediumCount",        "dataType": "NUMBER"},
      { "name": "NonCompliantLowCount",           "dataType": "NUMBER"},
      { "name": "NonCompliantInformationalCount", "dataType": "NUMBER"},
      { "name": "NonCompliantUnspecifiedCount",   "dataType": "NUMBER"}
    ]
  },
  {
    "typeName": "AWS:InstanceDetailedInformation",
    "version":"1.0",
    "attributes":[
      { "name": "CPUModel",                     "dataType": "STRING"},
      { "name": "CPUCores",                     "dataType": "NUMBER"},
      { "name": "CPUs",                         "dataType": "NUMBER"},
      { "name": "CPUSpeedMHz",                  "dataType": "NUMBER"},
      { "name": "CPUSockets",                   "dataType": "NUMBER"},
      { "name": "CPUHyperThreadEnabled",        "dataType": "STRING"},
      { "name": "OSServicePack",                "dataType": "STRING"}
    ]
   },
   {
     "typeName": "AWS:Service",
     "version":"1.0",
     "attributes":[
       { "name": "Name",                         "dataType": "STRING"},
       { "name": "DisplayName",                  "dataType": "STRING"},
       { "name": "ServiceType",                  "dataType": "STRING"},
       { "name": "Status",                       "dataType": "STRING"},
       { "name": "DependentServices",            "dataType": "STRING"},
       { "name": "ServicesDependedOn",           "dataType": "STRING"},
       { "name": "StartType",                    "dataType": "STRING"}
     ]
    },
    {
      "typeName": "AWS:WindowsRegistry",
      "version":"1.0",
      "attributes":[
        { "name": "KeyPath",                         "dataType": "STRING"},
        { "name": "ValueName",                       "dataType": "STRING"},
        { "name": "ValueType",                       "dataType": "STRING"},
        { "name": "Value",                           "dataType": "STRING"}
      ]
    },
    {
      "typeName": "AWS:WindowsRole",
      "version":"1.0",
      "attributes":[
        { "name": "Name",                         "dataType": "STRING"},
        { "name": "DisplayName",                  "dataType": "STRING"},
        { "name": "Path",                         "dataType": "STRING"},
        { "name": "FeatureType",                  "dataType": "STRING"},
        { "name": "DependsOn",                    "dataType": "STRING"},
        { "name": "Description",                  "dataType": "STRING"},
        { "name": "Installed",                    "dataType": "STRING"},
        { "name": "InstalledState",               "dataType": "STRING"},
        { "name": "SubFeatures",                  "dataType": "STRING"},
        { "name": "ServerComponentDescriptor",    "dataType": "STRING"},
        { "name": "Parent",                       "dataType": "STRING"}
      ]
    },
    {
      "typeName": "AWS:Tag",
      "version":"1.0",
      "attributes":[
        { "name": "Key",                     "dataType": "STRING"},
        { "name": "Value",                   "dataType": "STRING"}
      ]
    },
    {
      "typeName": "AWS:ResourceGroup",
      "version":"1.0",
      "attributes":[
        { "name": "Name",                   "dataType": "STRING"},
        { "name": "Arn",                    "dataType": "STRING"}
      ]
    },
    {
      "typeName": "AWS:BillingInfo",
      "version": "1.0",
      "attributes": [
        { "name": "BillingProductId",       "dataType": "STRING"}
      ]
    }
```

**Note**  
For `"typeName": "AWS:InstanceInformation"`, `InstanceStatus` can be one of the following: Active, ConnectionLost, Stopped, Terminated.
With the release of version 2.5, RPM Package Manager replaced the Serial attribute with Epoch. The Epoch attribute is a monotonically increasing integer like Serial. When you inventory by using the `AWS:Application` type, a larger value for Epoch means a newer version. If Epoch values are the same or empty, then use the value of the Version or Release attribute to determine the newer version. 
Some metadata isn't available from Linux instances. Specifically, for `"typeName": "AWS:Network"`, the following metadata types aren't yet supported for Linux instances. They are supported for Windows instances:  
`{ "name": "SubnetMask", "dataType": "STRING"}`  
`{ "name": "DHCPServer", "dataType": "STRING"}`  
`{ "name": "DNSServer", "dataType": "STRING"}`  
`{ "name": "Gateway", "dataType": "STRING"}`
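The Epoch-then-Version comparison described in the note can be sketched in code. The following is illustrative only: real RPM version ordering uses the `rpmvercmp` algorithm, which this simple tuple comparison only approximates, and the field names follow the `AWS:Application` attributes.

```python
def newer(a, b):
    """Return the newer of two AWS:Application inventory entries (sketch).

    Compares Epoch first (a larger value wins); if Epochs are equal or
    empty, falls back to Version, then Release. Plain string comparison
    of Version/Release is only an approximation of RPM version ordering.
    """
    def key(item):
        epoch = item.get("Epoch") or "0"   # empty/missing Epoch counts as 0
        return (int(epoch), item.get("Version", ""), item.get("Release", ""))
    return a if key(a) >= key(b) else b

old = {"Name": "tar", "Epoch": "2", "Version": "1.26", "Release": "31"}
new = {"Name": "tar", "Epoch": "2", "Version": "1.28", "Release": "1"}
print(newer(old, new)["Version"])  # → 1.28
```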

# Working with file and Windows registry inventory


AWS Systems Manager Inventory allows you to search and inventory files on Windows Server, Linux, and macOS operating systems. You can also search and inventory the Windows Registry.

**Files**: You can collect metadata about files, including file names, creation times, times last modified and accessed, and file sizes, among other attributes. To start collecting file inventory, specify a file path where you want to perform the inventory, one or more patterns that define the types of files to inventory, and whether the path should be traversed recursively. Systems Manager inventories the metadata of all files in the specified path that match a pattern. File inventory uses the following parameter input.

```
{
"Path": string,
"Pattern": array[string],
"Recursive": true,
"DirScanLimit" : number // Optional
}
```
+ **Path**: The directory path where you want to inventory files. For Windows, you can use environment variables such as `%PROGRAMFILES%` as long as the variable maps to a single directory path. For example, if you use `%PATH%`, which maps to multiple directory paths, Inventory throws an error.
+ **Pattern**: An array of patterns to identify files.
+ **Recursive**: A Boolean value indicating whether Inventory should recursively traverse the directories.
+ **DirScanLimit**: An optional value specifying how many directories to scan. Use this parameter to minimize performance impact on your managed nodes. Inventory scans a maximum of 5,000 directories.

**Note**  
Inventory collects metadata for a maximum of 500 files across all specified paths.

Here are some examples of how to specify the parameters when performing an inventory of files.
+ On Linux and macOS, collect metadata of .sh files in the `/home/ec2-user` directory, excluding all subdirectories.

  ```
  [{"Path":"/home/ec2-user","Pattern":["*.sh"],"Recursive":false}]
  ```
+ On Windows, collect metadata of all ".exe" files in the Program Files folder, including subdirectories recursively.

  ```
  [{"Path":"C:\Program Files","Pattern":["*.exe"],"Recursive":true}]
  ```
+ On Windows, collect metadata of specific log patterns.

  ```
  [{"Path":"C:\ProgramData\Amazon","Pattern":["*amazon*.log"],"Recursive":true}]
  ```
+ Limit the directory count when performing recursive collection.

  ```
  [{"Path":"C:\Users","Pattern":["*.ps1"],"Recursive":true, "DirScanLimit": 1000}]
  ```
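When you configure collection through the AWS CLI or an SDK rather than the console, these parameter arrays are passed to the inventory association as a single JSON string. A minimal sketch, assuming the standard `AWS-GatherSoftwareInventory` document (the instance ID is a placeholder):

```python
import json

# File inventory parameters, mirroring the examples above.
file_params = [
    {"Path": "/home/ec2-user", "Pattern": ["*.sh"], "Recursive": False},
    {"Path": "C:\\Users", "Pattern": ["*.ps1"], "Recursive": True, "DirScanLimit": 1000},
]

# The association's "files" parameter expects the array serialized as one JSON string.
files_value = json.dumps(file_params)
print(files_value)

# Applying it would look roughly like this (requires AWS credentials, so the
# call is commented out; the instance ID is a placeholder):
#
# import boto3
# boto3.client("ssm").create_association(
#     Name="AWS-GatherSoftwareInventory",
#     Targets=[{"Key": "InstanceIds", "Values": ["i-02573cafcfEXAMPLE"]}],
#     ScheduleExpression="rate(1 day)",
#     Parameters={"files": [files_value]},
# )
```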

**Windows Registry**: You can collect Windows Registry keys and values. You can choose a key path and collect all keys and values recursively. You can also collect a specific registry key and its value for a specific path. Inventory collects the key path, value name, type, and value.

```
{
"Path": string, 
"Recursive": true,
"ValueNames": array[string] // optional
}
```
+ **Path**: The path to the Registry key.
+ **Recursive**: A Boolean value indicating whether Inventory should recursively traverse Registry paths.
+ **ValueNames**: An array of value names for performing inventory of Registry keys. If you use this parameter, Systems Manager will inventory only the specified value names for the specified path.

**Note**  
Inventory collects a maximum of 250 Registry key values for all specified paths.

Here are some examples of how to specify the parameters when performing an inventory of the Windows Registry.
+ Collect all keys and values recursively for a specific path.

  ```
  [{"Path":"HKEY_LOCAL_MACHINE\SOFTWARE\Amazon","Recursive": true}]
  ```
+ Collect all keys and values for a specific path (recursive search turned off).

  ```
  [{"Path":"HKEY_LOCAL_MACHINE\SOFTWARE\Intel\PSIS\PSIS_DECODER", "Recursive": false}]
  ```
+ Collect a specific key by using the `ValueNames` option.

  ```
  {"Path":"HKEY_LOCAL_MACHINE\SOFTWARE\Amazon\MachineImage","ValueNames":["AMIName"]}
  ```
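Because registry key paths contain backslashes, hand-writing the escaped JSON is error-prone. One option is to build the parameter programmatically and let the serializer handle the escaping; a small sketch mirroring the examples above:

```python
import json

# Windows Registry inventory parameters; raw strings keep the key paths
# readable, and json.dumps escapes the backslashes for you.
registry_params = [
    {"Path": r"HKEY_LOCAL_MACHINE\SOFTWARE\Amazon", "Recursive": True},
    {"Path": r"HKEY_LOCAL_MACHINE\SOFTWARE\Amazon\MachineImage",
     "Recursive": False, "ValueNames": ["AMIName"]},
]

registry_value = json.dumps(registry_params)
print(registry_value)
```

The resulting string can then be supplied as the registry parameter of an inventory association in the same way as the file inventory parameter (the exact parameter name, `windowsRegistry` in the standard inventory document, is an assumption to verify against your document version).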

# Setting up Systems Manager Inventory
Setting up Inventory

Before you use AWS Systems Manager Inventory to collect metadata about the applications, services, AWS components, and more running on your managed nodes, we recommend that you configure resource data sync to centralize the storage of your inventory data in a single Amazon Simple Storage Service (Amazon S3) bucket. We also recommend that you configure Amazon EventBridge monitoring of inventory events. These processes make it easier to view and manage inventory data and collection.

**Topics**
+ [

# Creating a resource data sync for Inventory
](inventory-create-resource-data-sync.md)
+ [

# Using EventBridge to monitor Inventory events
](systems-manager-inventory-setting-up-eventbridge.md)

# Creating a resource data sync for Inventory


This topic describes how to set up and configure resource data sync for AWS Systems Manager Inventory. For information about resource data sync for Systems Manager Explorer, see [Setting up Systems Manager Explorer to display data from multiple accounts and Regions](Explorer-resource-data-sync.md).

## About resource data sync


You can use Systems Manager resource data sync to send inventory data collected from all of your managed nodes to a single Amazon Simple Storage Service (Amazon S3) bucket. Resource data sync then automatically updates the centralized data when new inventory data is collected. With all inventory data stored in a target Amazon S3 bucket, you can use services like Amazon Athena and Amazon QuickSight to query and analyze the aggregated data.

For example, say that you've configured inventory to collect data about the operating system (OS) and applications running on a fleet of 150 managed nodes. Some of these nodes are located in an on-premises data center, and others are running in Amazon Elastic Compute Cloud (Amazon EC2) across multiple AWS Regions. If you have *not* configured resource data sync, you either need to manually gather the collected inventory data for each managed node, or you have to create scripts to gather this information. You would then need to port the data into an application so that you can run queries and analyze it.

With resource data sync, you perform a one-time operation that synchronizes all inventory data from all of your managed nodes. After the sync is successfully created, Systems Manager creates a baseline of all inventory data and saves it in the target Amazon S3 bucket. When new inventory data is collected, Systems Manager automatically updates the data in the Amazon S3 bucket. You can then quickly and cost-effectively port the data to Amazon Athena and Amazon QuickSight.

Diagram 1 shows how resource data sync aggregates inventory data from Amazon EC2 and other machine types in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment to a target Amazon S3 bucket. This diagram also shows how resource data sync works with multiple AWS accounts and AWS Regions.

**Diagram 1: Resource data sync with multiple AWS accounts and AWS Regions**

![\[Systems Manager resource data sync architecture\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/inventory-resource-data-sync-updated.png)


If you delete a managed node, resource data sync preserves the inventory file for the deleted node. For running nodes, however, resource data sync automatically overwrites old inventory files when new files are created and written to the Amazon S3 bucket. If you want to track inventory changes over time, you can use the AWS Config service to track the `SSM:ManagedInstanceInventory` resource type. For more information, see [Getting Started with AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/getting-started.html).

Use the procedures in this section to create a resource data sync for Inventory by using the Amazon S3 and AWS Systems Manager consoles. You can also use AWS CloudFormation to create or delete a resource data sync. To use CloudFormation, add the [https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssm-resourcedatasync.html](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssm-resourcedatasync.html) resource to your CloudFormation template. For information, see one of the following documentation resources:
+ [AWS CloudFormation resource for resource data sync in AWS Systems Manager](https://aws.amazon.com/blogs/mt/aws-cloudformation-resource-for-resource-data-sync-in-aws-systems-manager/) (blog)
+ [Working with AWS CloudFormation Templates](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-guide.html) in the *AWS CloudFormation User Guide*

**Note**  
You can use AWS Key Management Service (AWS KMS) to encrypt inventory data in the Amazon S3 bucket. For an example of how to create an encrypted sync by using the AWS Command Line Interface (AWS CLI) and how to work with the centralized data in Amazon Athena and Amazon QuickSight, see [Walkthrough: Using resource data sync to aggregate inventory data](inventory-resource-data-sync.md).

## Before you begin


Before you create a resource data sync, use the following procedure to create a central Amazon S3 bucket to store aggregated inventory data. The procedure describes how to assign a bucket policy that allows Systems Manager to write inventory data to the bucket from multiple accounts. If you already have an Amazon S3 bucket that you want to use to aggregate inventory data for resource data sync, then you must configure the bucket to use the policy in the following procedure.

**Note**  
Systems Manager Inventory can't add data to a specified Amazon S3 bucket if that bucket is configured to use Object Lock. Verify that the Amazon S3 bucket you create or choose for resource data sync isn't configured to use Amazon S3 Object Lock. For more information, see [How Amazon S3 Object Lock works](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html) in the *Amazon Simple Storage Service User Guide*.

**To create and configure an Amazon S3 bucket for resource data sync**

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Create a bucket to store your aggregated Inventory data. For more information, see [Create a Bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/CreatingABucket.html) in the *Amazon Simple Storage Service User Guide*. Make a note of the bucket name and the AWS Region where you created it.

1. Choose the **Permissions** tab, and then choose **Bucket Policy**.

1. Copy and paste the following bucket policy into the policy editor. Replace *amzn-s3-demo-bucket* with the name of the S3 bucket you created. Replace each *account ID number* (for example, *111122223333*) with a valid AWS account ID.

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",
       "Statement": [
           {
               "Sid": "SSMBucketPermissionsCheck",
               "Effect": "Allow",
               "Principal": {
                   "Service": "ssm.amazonaws.com"
               },
               "Action": "s3:GetBucketAcl",
               "Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
           },
           {
               "Sid": "SSMBucketDelivery",
               "Effect": "Allow",
               "Principal": {
                   "Service": "ssm.amazonaws.com"
               },
               "Action": "s3:PutObject",
               "Resource": [
                   "arn:aws:s3:::amzn-s3-demo-bucket/*/accountid=111122223333/*",
                   "arn:aws:s3:::amzn-s3-demo-bucket/*/accountid=444455556666/*",
                   "arn:aws:s3:::amzn-s3-demo-bucket/*/accountid=123456789012/*",
                   "arn:aws:s3:::amzn-s3-demo-bucket/*/accountid=777788889999/*"
               ],
               "Condition": {
                   "StringEquals": {
                       "s3:x-amz-acl": "bucket-owner-full-control",
                       "aws:SourceAccount": "111122223333"
                   },
                   "ArnLike": {
                       "aws:SourceArn": "arn:aws:ssm:*:111122223333:resource-data-sync/*"
                   }
               }
           }
       ]
   }
   ```

------

1. Save your changes.
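If you script bucket setup rather than using the console, the same policy can be built and applied with an SDK. A simplified sketch using boto3 (the bucket name and account IDs are placeholders, the per-account `aws:SourceAccount`/`aws:SourceArn` conditions from the policy above are omitted for brevity, and the actual API call is commented out because it requires credentials):

```python
import json

bucket = "amzn-s3-demo-bucket"               # placeholder: your bucket name
accounts = ["111122223333", "444455556666"]  # placeholder: your account IDs

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SSMBucketPermissionsCheck",
            "Effect": "Allow",
            "Principal": {"Service": "ssm.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{bucket}",
        },
        {
            "Sid": "SSMBucketDelivery",
            "Effect": "Allow",
            "Principal": {"Service": "ssm.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": [
                f"arn:aws:s3:::{bucket}/*/accountid={a}/*" for a in accounts
            ],
            "Condition": {
                "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
            },
        },
    ],
}

policy_json = json.dumps(policy, indent=2)
print(policy_json)

# To apply the policy (requires credentials):
# import boto3
# boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=policy_json)
```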

## Create a resource data sync for Inventory


Use the following procedure to create a resource data sync for Systems Manager Inventory by using the Systems Manager console. For information about how to create a resource data sync by using the AWS CLI, see [Using the AWS CLI to configure inventory data collection](inventory-collection-cli.md).

**To create a resource data sync**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. In the **Account management** menu, choose **Resource data sync**.

1. Choose **Create resource data sync**.

1. In the **Sync name** field, enter a name for the sync configuration.

1. In the **Bucket name** field, enter the name of the Amazon S3 bucket you created using the **To create and configure an Amazon S3 bucket for resource data sync** procedure.

1. (Optional) In the **Bucket prefix** field, enter the name of an Amazon S3 bucket prefix (subdirectory).

1. In the **Bucket region** field, choose **This region** if the Amazon S3 bucket you created is located in the current AWS Region. If the bucket is located in a different AWS Region, choose **Another region**, and enter the name of the Region.
**Note**  
If the sync and the target Amazon S3 bucket are located in different regions, you might be subject to data transfer pricing. For more information, see [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/).

1. (Optional) In the **KMS Key ARN** field, type or paste a KMS Key ARN to encrypt inventory data in Amazon S3.

1. Choose **Create**.

To synchronize inventory data from multiple AWS Regions, you must create a resource data sync in *each* Region. Repeat this procedure in each AWS Region where you want to collect inventory data and send it to the central Amazon S3 bucket. When you create the sync in each Region, specify the central Amazon S3 bucket in the **Bucket name** field. Then use the **Bucket region** option to choose the Region where you created the central Amazon S3 bucket, as shown in the following screenshot. The next time the association runs to collect inventory data, Systems Manager stores the data in the central Amazon S3 bucket.

![\[Systems Manager resource data sync from multiple AWS Regions\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/inventory-rds-multiple-regions.png)


## Creating an inventory resource data sync for accounts defined in AWS Organizations


You can synchronize inventory data from AWS accounts defined in AWS Organizations to a central Amazon S3 bucket. After you complete the following procedures, inventory data is synchronized to *individual* Amazon S3 key prefixes in the central bucket. Each key prefix represents a different AWS account ID.

**Before you begin**  
Before you begin, verify that you set up and configured AWS accounts in AWS Organizations. For more information, see [Getting started with AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/rgs_getting-started.html) in the *AWS Organizations User Guide*.

Also, be aware that you must create the organization-based resource data sync for each AWS Region and AWS account defined in AWS Organizations. 

### Creating a central Amazon S3 bucket


Use the following procedure to create a central Amazon S3 bucket to store aggregated inventory data. The procedure describes how to assign a bucket policy that allows Systems Manager to write inventory data to the bucket from your AWS Organizations account ID. If you already have an Amazon S3 bucket that you want to use to aggregate inventory data for resource data sync, then you must configure the bucket to use the policy in the following procedure.

**To create and configure an Amazon S3 bucket for resource data sync for multiple accounts defined in AWS Organizations**

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Create a bucket to store your aggregated inventory data. For more information, see [Create a Bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/CreatingABucket.html) in the *Amazon Simple Storage Service User Guide*. Make a note of the bucket name and the AWS Region where you created it.

1. Choose the **Permissions** tab, and then choose **Bucket Policy**.

1. Copy and paste the following bucket policy into the policy editor. Replace *amzn-s3-demo-bucket* and *organization-id* with the name of the Amazon S3 bucket you created and your AWS Organizations organization ID.

   Optionally, replace *bucket-prefix* with the name of an Amazon S3 prefix (subdirectory). If you didn't create a prefix, remove *bucket-prefix*/ from the ARN in the following policy. 

------
#### [ JSON ]

****  

   ```
   {
     "Version":"2012-10-17",
     "Statement": [
       {
         "Sid": "SSMBucketPermissionsCheck",
         "Effect": "Allow",
         "Principal": {
           "Service": "ssm.amazonaws.com"
         },
         "Action": "s3:GetBucketAcl",
         "Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
       },
       {
         "Sid": "SSMBucketDelivery",
         "Effect": "Allow",
         "Principal": {
           "Service": "ssm.amazonaws.com"
         },
         "Action": "s3:PutObject",
         "Resource": [
           "arn:aws:s3:::amzn-s3-demo-bucket/bucket-prefix/*/accountid=*/*"
         ],
         "Condition": {
           "StringEquals": {
             "s3:x-amz-acl": "bucket-owner-full-control",
             "aws:SourceOrgID": "organization-id"
            }
         }
       },
       {
         "Sid": "SSMBucketDeliveryTagging",
         "Effect": "Allow",
         "Principal": {
           "Service": "ssm.amazonaws.com"
         },
         "Action": "s3:PutObjectTagging",
         "Resource": [
           "arn:aws:s3:::amzn-s3-demo-bucket/bucket-prefix/*/accountid=*/*"
         ]
       }
     ]
   }
   ```

------

### Create an inventory resource data sync for accounts defined in AWS Organizations


The following procedure describes how to use the AWS CLI to create a resource data sync for accounts that are defined in AWS Organizations. You must use the AWS CLI to perform this task. You must also perform this procedure for each AWS Region and AWS account defined in AWS Organizations.

**To create a resource data sync for an account defined in AWS Organizations (AWS CLI)**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run the following command to verify that you don't already have an *Organizations-based* resource data sync. You can have multiple standard resource data syncs, and they can coexist with an Organizations-based sync, but you can have only *one* Organizations-based resource data sync.

   ```
   aws ssm list-resource-data-sync
   ```

   If the command returns an existing Organizations-based resource data sync, you must delete it before you can create a new one.

1. Run the following command to create a resource data sync for an account defined in AWS Organizations. For *amzn-s3-demo-bucket*, specify the name of the Amazon S3 bucket you created earlier in this topic. If you created a prefix (subdirectory) for your bucket, specify it for *prefix-name*. For the `Region` value, specify the AWS Region where the bucket is located, for example `us-east-2`.

   ```
   aws ssm create-resource-data-sync --sync-name name --s3-destination "BucketName=amzn-s3-demo-bucket,Prefix=prefix-name,SyncFormat=JsonSerDe,Region=us-east-2,DestinationDataSharing={DestinationDataSharingType=Organization}"
   ```

1. Repeat Steps 2 and 3 for every AWS Region and AWS account where you want to synchronize data to the central Amazon S3 bucket.
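The CLI command above maps directly onto the `CreateResourceDataSync` API, which can help when scripting the per-Region repetition. A boto3 sketch of the same request (sync name, bucket, prefix, and Region are placeholders; the call is commented out because it requires credentials and must be repeated per Region and account):

```python
# Request parameters equivalent to the CLI command above.
sync_request = {
    "SyncName": "org-inventory-sync",            # placeholder sync name
    "S3Destination": {
        "BucketName": "amzn-s3-demo-bucket",     # placeholder bucket
        "Prefix": "bucket-prefix",               # placeholder prefix
        "SyncFormat": "JsonSerDe",
        "Region": "us-east-2",                   # Region of the central bucket
        "DestinationDataSharing": {
            "DestinationDataSharingType": "Organization"
        },
    },
}
print(sync_request["S3Destination"]["SyncFormat"])  # → JsonSerDe

# To run (requires credentials):
# import boto3
# boto3.client("ssm", region_name="us-east-2").create_resource_data_sync(**sync_request)
```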

### Managing resource data syncs


Each AWS account can have a maximum of five resource data syncs per AWS Region. You can use the AWS Systems Manager Fleet Manager console to manage your resource data syncs.

**To view resource data syncs**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. In the **Account management** dropdown, choose **Resource data syncs**.

1. Select a resource data sync from the table, and then choose **View details** to view information about your resource data sync.

**To delete a resource data sync**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. In the **Account management** dropdown, choose **Resource data syncs**.

1. Select a resource data sync from the table, and then choose **Delete**.

# Using EventBridge to monitor Inventory events


You can configure a rule in Amazon EventBridge to create an event in response to AWS Systems Manager Inventory resource state changes. EventBridge supports events for the following Inventory state changes. All events are sent on a best-effort basis.

**Custom inventory type deleted for a specific instance**: If a rule is configured to monitor for this event, EventBridge creates an event when a custom inventory type on a specific managed node is deleted. EventBridge sends one event per node per custom inventory type. Here is a sample event pattern.

```
{
    "timestampMillis": 1610042981103,
    "source": "SSM",
    "account": "123456789012",
    "type": "INVENTORY_RESOURCE_STATE_CHANGE",
    "startTime": "Jan 7, 2021 6:09:41 PM",
    "resources": [
        {
            "arn": "arn:aws:ssm:us-east-1:123456789012:managed-instance/i-12345678"
        }
    ],
    "body": {
        "action-status": "succeeded",
        "action": "delete",
        "resource-type": "managed-instance",
        "resource-id": "i-12345678",
        "action-reason": "",
        "type-name": "Custom:MyCustomInventoryType"
    }
}
```

**Custom inventory type deleted event for all instances**: If a rule is configured to monitor for this event, EventBridge creates an event when a custom inventory type for all managed nodes is deleted. Here is a sample event pattern.

```
{
    "timestampMillis": 1610042904712,
    "source": "SSM",
    "account": "123456789012",
    "type": "INVENTORY_RESOURCE_STATE_CHANGE",
    "startTime": "Jan 7, 2021 6:08:24 PM",
    "resources": [
        
    ],
    "body": {
        "action-status": "succeeded",
        "action": "delete-summary",
        "resource-type": "managed-instance",
        "resource-id": "",
        "action-reason": "The delete for type name Custom:SomeCustomInventoryType was completed. The deletion summary is: {\"totalCount\":1,\"remainingCount\":0,\"summaryItems\":[{\"version\":\"1.1\",\"count\":1,\"remainingCount\":0}]}",
        "type-name": "Custom:MyCustomInventoryType"
    }
}
```

**[PutInventory](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PutInventory.html) call with old schema version event**: If a rule is configured to monitor for this event, EventBridge creates an event when a PutInventory call is made that uses a schema version that is lower than the current schema. This event applies to all inventory types. Here is a sample event pattern.

```
{
    "timestampMillis": 1610042629548,
    "source": "SSM",
    "account": "123456789012",
    "type": "INVENTORY_RESOURCE_STATE_CHANGE",
    "startTime": "Jan 7, 2021 6:03:49 PM",
    "resources": [
        {
            "arn": "arn:aws:ssm:us-east-1:123456789012:managed-instance/i-12345678"
        }
    ],
    "body": {
        "action-status": "failed",
        "action": "put",
        "resource-type": "managed-instance",
        "resource-id": "i-01f017c1b2efbe2bc",
        "action-reason": "The inventory item with type name Custom:MyCustomInventoryType was sent with a disabled schema version 1.0. You must send a version greater than 1.0",
        "type-name": "Custom:MyCustomInventoryType"
    }
}
```

For information about how to configure EventBridge to monitor for these events, see [Configuring EventBridge for Systems Manager events](monitoring-systems-manager-events.md).
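As a sketch, a rule that matches all three event types can also be created programmatically. The `detail-type` string below is an assumption inferred from the `INVENTORY_RESOURCE_STATE_CHANGE` samples above; confirm the exact value against the prefilled Systems Manager patterns in the EventBridge console before relying on it:

```python
import json

# Event pattern matching SSM Inventory resource state changes (assumed
# detail-type; verify in the EventBridge console).
pattern = {
    "source": ["aws.ssm"],
    "detail-type": ["Inventory Resource State Change"],
}
pattern_json = json.dumps(pattern)
print(pattern_json)

# Creating the rule (requires credentials):
# import boto3
# boto3.client("events").put_rule(
#     Name="ssm-inventory-state-changes",   # placeholder rule name
#     EventPattern=pattern_json,
# )
```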

# Configuring inventory collection


This section describes how to configure AWS Systems Manager Inventory collection on one or more managed nodes by using the Systems Manager console. For an example of how to configure inventory collection by using the AWS Command Line Interface (AWS CLI), see [Using the AWS CLI to configure inventory data collection](inventory-collection-cli.md).

When you configure inventory collection, you start by creating an AWS Systems Manager State Manager association. Systems Manager collects the inventory data when the association is run. If you don't create the association first, and attempt to invoke the `aws:softwareInventory` plugin by using, for example, AWS Systems Manager Run Command, the system returns the following error: `The aws:softwareInventory plugin can only be invoked via ssm-associate.`

**Note**  
Be aware of the following behavior if you create multiple inventory associations for a managed node:  
Each node can be assigned an inventory association that targets *all* nodes (`--targets "Key=InstanceIds,Values=*"`).
Each node can also be assigned a specific association that uses either tag key/value pairs or an AWS resource group.
If a node is assigned multiple inventory associations, the status shows *Skipped* for the association that hasn't run. The association that ran most recently displays the actual status of the inventory association.
If a node is assigned multiple inventory associations and each uses a tag key/value pair, then those inventory associations fail to run on the node because of the tag conflict. The association still runs on nodes that don't have the tag key/value conflict. 

**Before You Begin**  
Before you configure inventory collection, complete the following tasks.
+ Update AWS Systems Manager SSM Agent on the nodes you want to inventory. By running the latest version of SSM Agent, you ensure that you can collect metadata for all supported inventory types. For information about how to update SSM Agent by using State Manager, see [Walkthrough: Automatically update SSM Agent with the AWS CLI](state-manager-update-ssm-agent-cli.md).
+ Verify that you have completed the setup requirements for your Amazon Elastic Compute Cloud (Amazon EC2) instances and non-EC2 machines in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment. For information, see [Setting up managed nodes for AWS Systems Manager](systems-manager-setting-up-nodes.md).
+ For Microsoft Windows Server nodes, verify that your managed node is configured with Windows PowerShell 3.0 (or later). SSM Agent uses the `ConvertTo-Json` cmdlet in PowerShell to convert Windows update inventory data to the required format.
+ (Optional) Create a resource data sync to centrally store inventory data in an Amazon S3 bucket. Resource data sync then automatically updates the centralized data when new inventory data is collected. For more information, see [Walkthrough: Using resource data sync to aggregate inventory data](inventory-resource-data-sync.md).
+ (Optional) Create a JSON file to collect custom inventory. For more information, see [Working with custom inventory](inventory-custom.md).

## Inventory all managed nodes in your AWS account


You can inventory all managed nodes in your AWS account by creating a global inventory association. A global inventory association performs the following actions:
+ Automatically applies the global inventory configuration (association) to all existing managed nodes in your AWS account. Managed nodes that already have an inventory association are skipped when the global inventory association is applied and runs. When a node is skipped, the detailed status message states `Overridden By Explicit Inventory Association`. Those nodes are skipped by the global association, but they will still report inventory when they run their assigned inventory association.
+ Automatically adds new nodes created in your AWS account to the global inventory association.

**Note**  
If a managed node is configured for the global inventory association, and you assign a specific association to that node, then Systems Manager Inventory deprioritizes the global association and applies the specific association.
Global inventory associations are available in SSM Agent version 2.0.790.0 or later. For information about how to update SSM Agent on your nodes, see [Updating the SSM Agent using Run Command](run-command-tutorial-update-software.md#rc-console-agentexample).

### Configuring inventory collection with one click (console)


Use the following procedure to configure Systems Manager Inventory for all managed nodes in your AWS account and in a single AWS Region. 

**To configure all of your managed nodes in the current Region for Systems Manager inventory**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Inventory**.

1. In the **Managed instances with inventory enabled** card, choose **Click here to enable inventory on all instances**.  
![\[Enabling Systems Manager Inventory on all managed nodes.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/inventory-one-click-1.png)

   If successful, the console displays the following message.  
![\[Enabling Systems Manager Inventory on all managed nodes.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/inventory-one-click-2.png)

   Depending on the number of managed nodes in your account, it can take several minutes for the global inventory association to be applied. Wait a few minutes and then refresh the page. Verify that the graphic changes to reflect that inventory is configured on all of your managed nodes.

### Configuring collection by using the console


This section includes information about how to configure Systems Manager Inventory to collect metadata from your managed nodes by using the Systems Manager console. You can quickly collect metadata from all nodes in a specific AWS account (and any future nodes that might be created in that account) or you can selectively collect inventory data by using tags or node IDs.

**Note**  
Before completing this procedure, check to see if a global inventory association already exists. If a global inventory association already exists, anytime you launch a new instance, the association will be applied to it, and the new instance will be inventoried.

**To configure inventory collection**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Inventory**.

1. Choose **Setup Inventory**.

1. In the **Targets** section, identify the nodes where you want to run this operation by choosing one of the following.
   + **Selecting all managed instances in this account** - This option selects all managed nodes for which there is no existing inventory association. If you choose this option, nodes that already had inventory associations are skipped during inventory collection, and shown with a status of **Skipped** in inventory results. For more information, see [Inventory all managed nodes in your AWS account](#inventory-management-inventory-all). 
   + **Specifying a tag** - Use this option to specify a single tag to identify nodes in your account from which you want to collect inventory. If you use a tag, any nodes created in the future with the same tag will also report inventory. If there is an existing inventory association with all nodes, using a tag to select specific nodes as a target for a different inventory overrides node membership in the **All managed instances** target group. Managed nodes with the specified tag are skipped on future inventory collection from **All managed instances**.
   + **Manually selecting instances** - Use this option to choose specific managed nodes in your account. Explicitly choosing specific nodes by using this option overrides inventory associations on the **All managed instances** target. The node is skipped on future inventory collection from **All managed instances**.
**Note**  
If a managed node you expect to see isn't listed, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md) for troubleshooting tips.

1. In the **Schedule** section, choose how often you want the system to collect inventory metadata from your nodes.

1. In the **Parameters** section, use the lists to turn on or turn off different types of inventory collection. For more information about collecting File and Windows Registry inventory, see [Working with file and Windows registry inventory](inventory-file-and-registry.md).

1. In the **Advanced** section, choose **Sync inventory execution logs to an Amazon S3 bucket** if you want to store the association execution status in an Amazon S3 bucket.

1. Choose **Setup Inventory**. Systems Manager creates a State Manager association and immediately runs Inventory on the nodes.

1. In the navigation pane, choose **State Manager**. Verify that a new association was created that uses the `AWS-GatherSoftwareInventory` document. The association schedule uses a rate expression. Also, verify that the **Status** field shows **Success**. If you chose the option to **Sync inventory execution logs to an Amazon S3 bucket**, then you can view the log data in Amazon S3 after a few minutes. If you want to view inventory data for a specific node, then choose **Managed Instances** in the navigation pane. 

1. Choose a node, and then choose **View details**.

1. On the node details page, choose **Inventory**. Use the **Inventory type** lists to filter the inventory.

# Querying inventory data from multiple Regions and accounts


AWS Systems Manager Inventory integrates with Amazon Athena to help you query inventory data from multiple AWS Regions and AWS accounts. Athena integration uses resource data sync so that you can view inventory data from all of your managed nodes on the **Detailed View** page in the AWS Systems Manager console.

**Important**  
This feature uses AWS Glue to crawl the data in your Amazon Simple Storage Service (Amazon S3) bucket, and Amazon Athena to query the data. Depending on how much data is crawled and queried, you can be charged for using these services. With AWS Glue, you pay an hourly rate, billed by the second, for crawlers (discovering data) and ETL jobs (processing and loading data). With Athena, you're charged based on the amount of data scanned by each query. We encourage you to view the pricing guidelines for these services before you use Amazon Athena integration with Systems Manager Inventory. For more information, see [Amazon Athena pricing](https://aws.amazon.com/athena/pricing/) and [AWS Glue pricing](https://aws.amazon.com/glue/pricing/).

You can view inventory data on the **Detailed View** page in all AWS Regions where Amazon Athena is available. For a list of supported Regions, see [Amazon Athena Service Endpoints](https://docs.aws.amazon.com/general/latest/gr/athena.html#athena_region) in the *Amazon Web Services General Reference*.

**Before you begin**  
Athena integration uses resource data sync. You must set up and configure resource data sync to use this feature. For more information, see [Walkthrough: Using resource data sync to aggregate inventory data](inventory-resource-data-sync.md).

Also, be aware that the **Detailed View** page displays inventory data for the *owner* of the central Amazon S3 bucket used by resource data sync. If you aren't the owner of the central Amazon S3 bucket, then you won't see inventory data on the **Detailed View** page.

## Configuring access


Before you can query and view data from multiple accounts and Regions on the **Detailed View** page in the Systems Manager console, you must configure your IAM entity with permission to view the data.

If the inventory data is stored in an Amazon S3 bucket that uses AWS Key Management Service (AWS KMS) encryption, you must also configure your IAM entity and the `Amazon-GlueServiceRoleForSSM` service role for AWS KMS encryption. 

**Topics**
+ [

### Configuring your IAM entity to access the Detailed View page
](#systems-manager-inventory-query-iam-user)
+ [

### (Optional) Configure permissions for viewing AWS KMS encrypted data
](#systems-manager-inventory-query-kms)

### Configuring your IAM entity to access the Detailed View page


The following are the minimum permissions required to view inventory data on the **Detailed View** page:

+ The `AWSQuicksightAthenaAccess` managed policy
+ The following policy block, which grants `iam:PassRole` and the other required permissions

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "AllowGlue",
            "Effect": "Allow",
            "Action": [
                "glue:GetCrawler",
                "glue:GetCrawlers",
                "glue:GetTables",
                "glue:StartCrawler",
                "glue:CreateCrawler"
            ],
            "Resource": "*"
        },
        {
            "Sid": "iamPassRole",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": [
                "arn:aws:iam::111122223333:role/SSMInventoryGlueRole",
                "arn:aws:iam::111122223333:role/SSMInventoryServiceRole"
            ],
            "Condition": {
                "StringEquals": {
                    "iam:PassedToService": "glue.amazonaws.com"
                }
            }
        },
        {
            "Sid": "iamRoleCreation",
            "Effect": "Allow",
            "Action": [
                "iam:CreateRole",
                "iam:AttachRolePolicy"
            ],
            "Resource": "arn:aws:iam::111122223333:role/*"
        },
        {
            "Sid": "iamPolicyCreation",
            "Effect": "Allow",
            "Action": "iam:CreatePolicy",
            "Resource": "arn:aws:iam::111122223333:policy/*"
        }
    ]
}
```

------

(Optional) If the Amazon S3 bucket used to store inventory data is encrypted by using AWS KMS, you must also add the following block to the policy.

```
{
    "Effect": "Allow",
    "Action": [
        "kms:Decrypt"
    ],
    "Resource": [
        "arn:aws:kms:Region:account_ID:key/key_ARN"
    ]
}
```

To provide access, add permissions to your users, groups, or roles:
+ Users and groups in AWS IAM Identity Center:

  Create a permission set. Follow the instructions in [Create a permission set](https://docs.aws.amazon.com//singlesignon/latest/userguide/howtocreatepermissionset.html) in the *AWS IAM Identity Center User Guide*.
+ Users managed in IAM through an identity provider:

  Create a role for identity federation. Follow the instructions in [Create a role for a third-party identity provider (federation)](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_roles_create_for-idp.html) in the *IAM User Guide*.
+ IAM users:
  + Create a role that your user can assume. Follow the instructions in [Create a role for an IAM user](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_roles_create_for-user.html) in the *IAM User Guide*.
  + (Not recommended) Attach a policy directly to a user or add a user to a user group. Follow the instructions in [Adding permissions to a user (console)](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_users_change-permissions.html#users_change_permissions-add-console) in the *IAM User Guide*.

### (Optional) Configure permissions for viewing AWS KMS encrypted data


If the Amazon S3 bucket used to store inventory data is encrypted by using the AWS Key Management Service (AWS KMS), you must configure your IAM entity and the **Amazon-GlueServiceRoleForSSM** role with `kms:Decrypt` permissions for the AWS KMS key. 

**Before you begin**  
To provide the `kms:Decrypt` permissions for the AWS KMS key, add the following policy block to your IAM entity:

```
{
    "Effect": "Allow",
    "Action": [
        "kms:Decrypt"
    ],
    "Resource": [
        "arn:aws:kms:Region:account_ID:key/key_ARN"
    ]
}
```

If you haven't already attached this policy block to your IAM entity, do so before continuing.

Use the following procedure to configure the **Amazon-GlueServiceRoleForSSM** role with `kms:Decrypt` permissions for the AWS KMS key. 

**To configure the Amazon-GlueServiceRoleForSSM role with `kms:Decrypt` permissions**

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Roles**, and then use the search field to locate the **Amazon-GlueServiceRoleForSSM** role.

1. Choose the role name. The **Summary** page opens.

1. Choose **Add inline policy**. The **Create policy** page opens.

1. Choose the **JSON** tab.

1. Delete the existing JSON text in the editor, and then copy and paste the following policy into the JSON editor. 

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "kms:Decrypt"
               ],
               "Resource": [
                   "arn:aws:kms:us-east-1:111122223333:key/key_ARN"
               ]
           }
       ]
   }
   ```

------

1. Choose **Review policy**.

1. On the **Review Policy** page, enter a name in the **Name** field.

1. Choose **Create policy**.

## Querying data on the inventory detailed view page


Use the following procedure to view inventory data from multiple AWS Regions and AWS accounts on the Systems Manager Inventory **Detailed View** page.

**Important**  
The Inventory **Detailed View** page is only available in AWS Regions that offer Amazon Athena. If the following tabs aren't displayed on the Systems Manager Inventory page, it means Athena isn't available in the Region and you can't use the **Detailed View** to query data.  

![\[Displaying Inventory Dashboard | Detailed View | Settings tabs\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/inventory-detailed-view-for-error.png)


**To view inventory data from multiple Regions and accounts in the AWS Systems Manager console**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Inventory**.

1. Choose the **Detailed View** tab.  
![\[Accessing the AWS Systems Manager Inventory Detailed View page\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/inventory-detailed-view.png)

1. Choose the resource data sync for which you want to query data.  
![\[Displaying inventory data in the AWS Systems Manager console\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/inventory-display-data.png)

1. In the **Inventory Type** list, choose the type of inventory data that you want to query, and then press Enter.  
![\[Choosing an inventory type in the AWS Systems Manager console\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/inventory-type.png)

1. To filter the data, choose the Filter bar, and then choose a filter option.  
![\[Filtering inventory data in the AWS Systems Manager console\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/inventory-filter.png)

You can use the **Export to CSV** button to view the current query set in a spreadsheet application such as Microsoft Excel. You can also use the **Query History** and **Run Advanced Queries** buttons to view history details and interact with your data in Amazon Athena.

### Editing the AWS Glue crawler schedule


AWS Glue crawls the inventory data in the central Amazon S3 bucket twice daily, by default. If you frequently change the types of data collected on your nodes, you might want to crawl the data more frequently, as described in the following procedure.

**Important**  
AWS Glue charges your AWS account based on an hourly rate, billed by the second, for crawlers (discovering data) and ETL jobs (processing and loading data). Before you change the crawler schedule, view the [AWS Glue pricing](https://aws.amazon.com/glue/pricing/) page.

**To change the inventory data crawler schedule**

1. Open the AWS Glue console at [https://console.aws.amazon.com/glue/](https://console.aws.amazon.com/glue/).

1. In the navigation pane, choose **Crawlers**.

1. In the crawlers list, choose the option next to the Systems Manager Inventory data crawler. The crawler name uses the following format:

   `AWSSystemsManager-s3-bucket-name-Region-account_ID`

1. Choose **Action**, and then choose **Edit crawler**.

1. In the navigation pane, choose **Schedule**.

1. In the **Cron expression** field, specify a new schedule by using a cron format. For more information about the cron format, see [Time-Based Schedules for Jobs and Crawlers](https://docs.aws.amazon.com/glue/latest/dg/monitor-data-warehouse-schedule.html) in the *AWS Glue Developer Guide*.
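For example, a sketch of an expression that crawls the data every six hours instead of twice daily. AWS cron format uses six fields (minutes, hours, day-of-month, month, day-of-week, year), with `?` in the day-of-week position here:

```
cron(0 */6 * * ? *)
```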

**Important**  
You can pause the crawler to stop incurring charges from AWS Glue. If you pause the crawler, or if you change the frequency so that the data is crawled less often, then the Inventory **Detailed View** might display data that isn't current.

# Querying an inventory collection by using filters


After you collect inventory data, you can use the filter capabilities in AWS Systems Manager to query a list of managed nodes that meet certain filter criteria. 

**To query nodes based on inventory filters**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Inventory**.

1. In the **Filter by resource groups, tags or inventory types** section, choose the filter box. A list of predefined filters is displayed.

1. Choose an attribute to filter on. For example, choose `AWS:Application`. If prompted, choose a secondary attribute to filter. For example, choose `AWS:Application.Name`. 

1. Choose a delimiter from the list. For example, choose **Begin with**. A text box is displayed in the filter.

1. Enter a value in the text box. For example, enter *Amazon* (SSM Agent is named *Amazon SSM Agent*). 

1. Press Enter. The system returns a list of managed nodes that include an application name that begins with the word *Amazon*.

**Note**  
You can combine multiple filters to refine your search.

# Aggregating inventory data


After you configure your managed nodes for AWS Systems Manager Inventory, you can view aggregated counts of inventory data. For example, say you configured dozens or hundreds of managed nodes to collect the `AWS:Application` inventory type. By using the information in this section, you can see an exact count of how many nodes are configured to collect this data.

You can also see specific inventory details by aggregating on a data type. For example, the `AWS:InstanceInformation` inventory type collects operating system platform information with the `Platform` data type. By aggregating data on the `Platform` data type, you can quickly see how many nodes are running Windows Server, how many are running Linux, and how many are running macOS. 

The procedures in this section describe how to view aggregated counts of inventory data by using the AWS Command Line Interface (AWS CLI). You can also view pre-configured aggregated counts in the AWS Systems Manager console on the **Inventory** page. These pre-configured dashboards are called *Inventory Insights* and they offer one-click remediation of your inventory configuration issues.

Note the following important details about aggregation counts of inventory data:
+ If you terminate a managed node that is configured to collect inventory data, Systems Manager retains the inventory data for 30 days and then deletes it. For running nodes, the system deletes inventory data that is older than 30 days. If you need to store inventory data longer than 30 days, you can use AWS Config to record history, or periodically query and upload the data to an Amazon Simple Storage Service (Amazon S3) bucket.
+ If a node was previously configured to report a specific inventory data type, for example `AWS:Network`, and later you change the configuration to stop collecting that type, aggregation counts still show `AWS:Network` data until the node has been terminated and 30 days have passed.

For information about how to quickly configure and collect inventory data from all nodes in a specific AWS account (and any future nodes that might be created in that account), see [Inventory all managed nodes in your AWS account](inventory-collection.md#inventory-management-inventory-all).

**Topics**
+ [

## Aggregating inventory data to see counts of nodes that collect specific types of data
](#inventory-aggregate-type)
+ [

## Aggregating inventory data with groups to see which nodes are and aren't configured to collect an inventory type
](#inventory-aggregate-groups)

## Aggregating inventory data to see counts of nodes that collect specific types of data
Aggregating for specific types of data

You can use the AWS Systems Manager [GetInventory](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetInventory.html) API operation to view aggregated counts of nodes that collect one or more inventory types and data types. For example, the `AWS:InstanceInformation` inventory type allows you to view an aggregate of operating systems by using the GetInventory API operation with the `AWS:InstanceInformation.PlatformType` data type. Here is an example AWS CLI command and output.

```
aws ssm get-inventory --aggregators "Expression=AWS:InstanceInformation.PlatformType"
```

The system returns information like the following.

```
{
   "Entities":[
      {
         "Data":{
            "AWS:InstanceInformation":{
               "Content":[
                  {
                     "Count":"7",
                     "PlatformType":"windows"
                  },
                  {
                     "Count":"5",
                     "PlatformType":"linux"
                  }
               ]
            }
         }
      }
   ]
}
```
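When scripting against this response, note that the `Count` values are returned as strings. A minimal Python sketch (the helper name is hypothetical) that flattens a response of this shape into integer counts per platform:

```python
def platform_counts(response):
    """Flatten a GetInventory aggregation response into {platform: count}.
    The API returns Count as a string, so convert to int before summing."""
    counts = {}
    for entity in response.get("Entities", []):
        content = entity["Data"]["AWS:InstanceInformation"]["Content"]
        for item in content:
            platform = item["PlatformType"]
            counts[platform] = counts.get(platform, 0) + int(item["Count"])
    return counts

# Response shape shown in the example output above
response = {
    "Entities": [
        {
            "Data": {
                "AWS:InstanceInformation": {
                    "Content": [
                        {"Count": "7", "PlatformType": "windows"},
                        {"Count": "5", "PlatformType": "linux"},
                    ]
                }
            }
        }
    ]
}
print(platform_counts(response))  # {'windows': 7, 'linux': 5}
```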

**Getting started**  
Determine the inventory types and data types for which you want to view counts. You can view a list of inventory types and data types that support aggregation by running the following command in the AWS CLI.

```
aws ssm get-inventory-schema --aggregator
```

The command returns a JSON list of inventory types and data types that support aggregation. The **TypeName** field shows supported inventory types, and the **Name** field shows each data type. For example, in the following list, the `AWS:Application` inventory type includes data types for `Name` and `Version`.

```
{
    "Schemas": [
        {
            "TypeName": "AWS:Application",
            "Version": "1.1",
            "DisplayName": "Application",
            "Attributes": [
                {
                    "DataType": "STRING",
                    "Name": "Name"
                },
                {
                    "DataType": "STRING",
                    "Name": "Version"
                }
            ]
        },
        {
            "TypeName": "AWS:InstanceInformation",
            "Version": "1.0",
            "DisplayName": "Platform",
            "Attributes": [
                {
                    "DataType": "STRING",
                    "Name": "PlatformName"
                },
                {
                    "DataType": "STRING",
                    "Name": "PlatformType"
                },
                {
                    "DataType": "STRING",
                    "Name": "PlatformVersion"
                }
            ]
        },
        {
            "TypeName": "AWS:ResourceGroup",
            "Version": "1.0",
            "DisplayName": "ResourceGroup",
            "Attributes": [
                {
                    "DataType": "STRING",
                    "Name": "Name"
                }
            ]
        },
        {
            "TypeName": "AWS:Service",
            "Version": "1.0",
            "DisplayName": "Service",
            "Attributes": [
                {
                    "DataType": "STRING",
                    "Name": "Name"
                },
                {
                    "DataType": "STRING",
                    "Name": "DisplayName"
                },
                {
                    "DataType": "STRING",
                    "Name": "ServiceType"
                },
                {
                    "DataType": "STRING",
                    "Name": "Status"
                },
                {
                    "DataType": "STRING",
                    "Name": "StartType"
                }
            ]
        },
        {
            "TypeName": "AWS:WindowsRole",
            "Version": "1.0",
            "DisplayName": "WindowsRole",
            "Attributes": [
                {
                    "DataType": "STRING",
                    "Name": "Name"
                },
                {
                    "DataType": "STRING",
                    "Name": "DisplayName"
                },
                {
                    "DataType": "STRING",
                    "Name": "FeatureType"
                },
                {
                    "DataType": "STRING",
                    "Name": "Installed"
                }
            ]
        }
    ]
}
```

You can aggregate data for any of the listed inventory types by creating a command that uses the following syntax.

```
aws ssm get-inventory --aggregators "Expression=InventoryType.DataType"
```

Here are some examples.

**Example 1**

This example aggregates a count of the Windows roles used by your nodes.

```
aws ssm get-inventory --aggregators "Expression=AWS:WindowsRole.Name"
```

**Example 2**

This example aggregates a count of the applications installed on your nodes.

```
aws ssm get-inventory --aggregators "Expression=AWS:Application.Name"
```

**Combining multiple aggregators**  
You can also combine multiple inventory types and data types in one command to help you better understand the data. Here are some examples.

**Example 1**

This example aggregates a count of the operating system types used by your nodes. It also returns the specific name of the operating systems.

```
aws ssm get-inventory --aggregators '[{"Expression": "AWS:InstanceInformation.PlatformType", "Aggregators":[{"Expression": "AWS:InstanceInformation.PlatformName"}]}]'
```

**Example 2**

This example aggregates a count of the applications running on your nodes and the specific version of each application.

```
aws ssm get-inventory --aggregators '[{"Expression": "AWS:Application.Name", "Aggregators":[{"Expression": "AWS:Application.Version"}]}]'
```

If you prefer, you can create an aggregation expression with one or more inventory types and data types in a JSON file and call the file from the AWS CLI. The JSON in the file must use the following syntax.

```
[
       {
           "Expression": "string",
           "Aggregators": [
               {
                  "Expression": "string"
               }
           ]
       }
]
```

You must save the file with the .json file extension. 

Here is an example that uses multiple inventory types and data types.

```
[
       {
           "Expression": "AWS:Application.Name",
           "Aggregators": [
               {
                   "Expression": "AWS:Application.Version",
                   "Aggregators": [
                     {
                     "Expression": "AWS:InstanceInformation.PlatformType"
                     }
                   ]
               }
           ]
       }
]
```

Use the following command to call the file from the AWS CLI. 

```
aws ssm get-inventory --aggregators file://file_name.json
```

The command returns information like the following.

```
{"Entities": 
 [
   {"Data": 
     {"AWS:Application": 
       {"Content": 
         [
           {"Count": "3", 
            "PlatformType": "linux", 
            "Version": "2.6.5", 
            "Name": "audit-libs"}, 
           {"Count": "2", 
            "PlatformType": "windows", 
            "Version": "2.6.5", 
            "Name": "audit-libs"}, 
           {"Count": "4", 
            "PlatformType": "windows", 
            "Version": "6.2.8", 
            "Name": "microsoft office"}, 
           {"Count": "2", 
            "PlatformType": "windows", 
            "Version": "2.6.5", 
            "Name": "chrome"}, 
           {"Count": "1", 
            "PlatformType": "linux", 
            "Version": "2.6.5", 
            "Name": "chrome"}, 
           {"Count": "2", 
            "PlatformType": "linux", 
            "Version": "6.3", 
            "Name": "authconfig"}
         ]
       }
     }, 
    "ResourceType": "ManagedInstance"}
 ]
}
```
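The nested aggregation flattens each name/version/platform combination into its own `Content` entry, so a total per application requires summing across entries. A hedged Python sketch (the helper name is hypothetical) using a subset of the response shape above:

```python
def totals_by_name(response):
    """Sum the string Count values per application Name across all
    platform/version combinations in a GetInventory response."""
    totals = {}
    for entity in response.get("Entities", []):
        for item in entity["Data"]["AWS:Application"]["Content"]:
            totals[item["Name"]] = totals.get(item["Name"], 0) + int(item["Count"])
    return totals

# Subset of the example response above
response = {
    "Entities": [
        {
            "Data": {
                "AWS:Application": {
                    "Content": [
                        {"Count": "3", "PlatformType": "linux", "Version": "2.6.5", "Name": "audit-libs"},
                        {"Count": "2", "PlatformType": "windows", "Version": "2.6.5", "Name": "audit-libs"},
                        {"Count": "2", "PlatformType": "windows", "Version": "2.6.5", "Name": "chrome"},
                        {"Count": "1", "PlatformType": "linux", "Version": "2.6.5", "Name": "chrome"},
                    ]
                }
            },
            "ResourceType": "ManagedInstance",
        }
    ]
}
print(totals_by_name(response))  # {'audit-libs': 5, 'chrome': 3}
```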

## Aggregating inventory data with groups to see which nodes are and aren't configured to collect an inventory type
Using groups

Groups in Systems Manager Inventory allow you to quickly see a count of which managed nodes are and aren’t configured to collect one or more inventory types. With groups, you specify one or more inventory types and a filter that uses the `exists` operator.

For example, say that you have four managed nodes configured to collect the following inventory types:
+ Node 1: `AWS:Application`
+ Node 2: `AWS:File`
+ Node 3: `AWS:Application`, `AWS:File`
+ Node 4: `AWS:Network`

You can run the following command from the AWS CLI to see how many nodes are configured to collect both the `AWS:Application` and `AWS:File` inventory types. The response also returns a count of how many nodes aren't configured to collect both of these inventory types.

```
aws ssm get-inventory --aggregators 'Groups=[{Name=ApplicationAndFile,Filters=[{Key=TypeName,Values=[AWS:Application],Type=Exists},{Key=TypeName,Values=[AWS:File],Type=Exists}]}]'
```

The command response shows that only one managed node is configured to collect both the `AWS:Application` and `AWS:File` inventory types.

```
{
   "Entities":[
      {
         "Data":{
            "ApplicationAndFile":{
               "Content":[
                  {
                     "notMatchingCount":"3"
                  },
                  {
                     "matchingCount":"1"
                  }
               ]
            }
         }
      }
   ]
}
```
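Group results arrive as separate `matchingCount` and `notMatchingCount` entries, both as strings. A minimal Python sketch (the helper name is hypothetical) that extracts both values for a named group from a response of this shape:

```python
def group_counts(response, group_name):
    """Return (matching, not_matching) integer counts for the named group
    in a GetInventory groups response."""
    matching = not_matching = 0
    for entity in response.get("Entities", []):
        for item in entity["Data"].get(group_name, {}).get("Content", []):
            if "matchingCount" in item:
                matching = int(item["matchingCount"])
            if "notMatchingCount" in item:
                not_matching = int(item["notMatchingCount"])
    return matching, not_matching

# Response shape shown in the example output above
response = {
    "Entities": [
        {
            "Data": {
                "ApplicationAndFile": {
                    "Content": [
                        {"notMatchingCount": "3"},
                        {"matchingCount": "1"},
                    ]
                }
            }
        }
    ]
}
print(group_counts(response, "ApplicationAndFile"))  # (1, 3)
```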

**Note**  
Groups don't return data type counts. Also, you can't drill down into the results to see the IDs of nodes that are or aren't configured to collect the inventory type.

If you prefer, you can create an aggregation expression with one or more inventory types in a JSON file and call the file from the AWS CLI. The JSON in the file must use the following syntax:

```
{
   "Aggregators":[
      {
         "Groups":[
            {
               "Name":"Name",
               "Filters":[
                  {
                     "Key":"TypeName",
                     "Values":[
                        "Inventory_type"
                     ],
                     "Type":"Exists"
                  },
                  {
                     "Key":"TypeName",
                     "Values":[
                        "Inventory_type"
                     ],
                     "Type":"Exists"
                  }
               ]
            }
         ]
      }
   ]
}
```

You must save the file with the .json file extension. 

Use the following command to call the file from the AWS CLI. 

```
aws ssm get-inventory --cli-input-json file://file_name.json
```

**Additional examples**  
The following examples show you how to aggregate inventory data to see which managed nodes are and aren't configured to collect the specified inventory types. These examples use the AWS CLI. Each example includes a full command with filters that you can run from the command line and a sample input.json file if you prefer to enter the information in a file.

**Example 1**

This example aggregates a count of nodes that are and aren't configured to collect either the `AWS:Application` or the `AWS:File` inventory types.

Run the following command from the AWS CLI.

```
aws ssm get-inventory --aggregators 'Groups=[{Name=ApplicationORFile,Filters=[{Key=TypeName,Values=[AWS:Application, AWS:File],Type=Exists}]}]'
```

If you prefer to use a file, copy and paste the following sample into a file and save it as input.json.

```
{
   "Aggregators":[
      {
         "Groups":[
            {
               "Name":"ApplicationORFile",
               "Filters":[
                  {
                     "Key":"TypeName",
                     "Values":[
                        "AWS:Application",
                        "AWS:File"
                     ],
                     "Type":"Exists"
                  }
               ]
            }
         ]
      }
   ]
}
```

Run the following command from the AWS CLI.

```
aws ssm get-inventory --cli-input-json file://input.json
```

The command returns information like the following.

```
{
   "Entities":[
      {
         "Data":{
            "ApplicationORFile":{
               "Content":[
                  {
                     "notMatchingCount":"1"
                  },
                  {
                     "matchingCount":"3"
                  }
               ]
            }
         }
      }
   ]
}
```

**Example 2**

This example aggregates a count of nodes that are and aren't configured to collect the `AWS:Application`, `AWS:File`, and `AWS:Network` inventory types.

Run the following command from the AWS CLI.

```
aws ssm get-inventory --aggregators 'Groups=[{Name=Application,Filters=[{Key=TypeName,Values=[AWS:Application],Type=Exists}]}, {Name=File,Filters=[{Key=TypeName,Values=[AWS:File],Type=Exists}]}, {Name=Network,Filters=[{Key=TypeName,Values=[AWS:Network],Type=Exists}]}]'
```

If you prefer to use a file, copy and paste the following sample into a file and save it as input.json.

```
{
   "Aggregators":[
      {
         "Groups":[
            {
               "Name":"Application",
               "Filters":[
                  {
                     "Key":"TypeName",
                     "Values":[
                        "AWS:Application"
                     ],
                     "Type":"Exists"
                  }
               ]
            },
            {
               "Name":"File",
               "Filters":[
                  {
                     "Key":"TypeName",
                     "Values":[
                        "AWS:File"
                     ],
                     "Type":"Exists"
                  }
               ]
            },
            {
               "Name":"Network",
               "Filters":[
                  {
                     "Key":"TypeName",
                     "Values":[
                        "AWS:Network"
                     ],
                     "Type":"Exists"
                  }
               ]
            }
         ]
      }
   ]
}
```

Run the following command from the AWS CLI.

```
aws ssm get-inventory --cli-input-json file://input.json
```

The command returns information like the following.

```
{
   "Entities":[
      {
         "Data":{
            "Application":{
               "Content":[
                  {
                     "notMatchingCount":"2"
                  },
                  {
                     "matchingCount":"2"
                  }
               ]
            },
            "File":{
               "Content":[
                  {
                     "notMatchingCount":"2"
                  },
                  {
                     "matchingCount":"2"
                  }
               ]
            },
            "Network":{
               "Content":[
                  {
                     "notMatchingCount":"3"
                  },
                  {
                     "matchingCount":"1"
                  }
               ]
            }
         }
      }
   ]
}
```
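
A variation on the same idea computes, for each inventory type, the fraction of nodes configured to collect it. This is a local sketch over the sample response above; the group names (`Application`, `File`, `Network`) come from the aggregator you defined, and no AWS call is made.

```python
import json

# Sample response from the get-inventory command above, abridged field for field.
response = json.loads("""
{
   "Entities":[
      {
         "Data":{
            "Application":{"Content":[{"notMatchingCount":"2"},{"matchingCount":"2"}]},
            "File":{"Content":[{"notMatchingCount":"2"},{"matchingCount":"2"}]},
            "Network":{"Content":[{"notMatchingCount":"3"},{"matchingCount":"1"}]}
         }
      }
   ]
}
""")

def coverage(response):
    """Fraction of nodes configured to collect each inventory type."""
    result = {}
    for entity in response["Entities"]:
        for group, data in entity["Data"].items():
            counts = {k: int(v) for item in data["Content"] for k, v in item.items()}
            total = counts.get("matchingCount", 0) + counts.get("notMatchingCount", 0)
            result[group] = counts.get("matchingCount", 0) / total if total else 0.0
    return result

print(coverage(response))
# {'Application': 0.5, 'File': 0.5, 'Network': 0.25}
```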

# Working with custom inventory


You can assign any metadata you want to your nodes by creating AWS Systems Manager Inventory *custom inventory*. For example, let's say you manage a large number of servers in racks in your data center, and these servers have been configured as Systems Manager managed nodes. Currently, you store information about server rack location in a spreadsheet. With custom inventory, you can specify the rack location of each node as metadata on the node. When you collect inventory by using Systems Manager, the metadata is collected with other inventory metadata. You can then port all inventory metadata to a central Amazon S3 bucket by using [resource data sync](inventory-resource-data-sync.md) and query the data.

**Note**  
Systems Manager supports a maximum of 20 custom inventory types per AWS account.

To assign custom inventory to a node, you can use either the Systems Manager [PutInventory](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PutInventory.html) API operation, as described in [Assigning custom inventory metadata to a managed node](inventory-custom-metadata.md), or a custom inventory JSON file that you create and upload to the node. This section describes how to create the JSON file.

The following example JSON file specifies custom inventory with rack information about an on-premises server. This example specifies one type of custom inventory data (`"TypeName": "Custom:RackInformation"`), with multiple entries under `Content` that describe the data.

```
{
    "SchemaVersion": "1.0",
    "TypeName": "Custom:RackInformation",
    "Content": {
        "Location": "US-EAST-02.CMH.RACK1",
        "InstalledTime": "2016-01-01T01:01:01Z",
        "vendor": "DELL",
        "Zone" : "BJS12",
        "TimeZone": "UTC-8"
      }
 }
```

You can also specify distinct entries in the `Content` section, as shown in the following example.

```
{
"SchemaVersion": "1.0",
"TypeName": "Custom:PuppetModuleInfo",
    "Content": [{
        "Name": "puppetlabs/aws",
        "Version": "1.0"
      },
      {
        "Name": "puppetlabs/dsc",
        "Version": "2.0"
      }
    ]
}
```

The JSON schema for custom inventory requires the `SchemaVersion`, `TypeName`, and `Content` sections, but you define the information within those sections, as shown in the following template.

```
{
    "SchemaVersion": "user_defined",
    "TypeName": "Custom:user_defined",
    "Content": {
        "user_defined_attribute1": "user_defined_value1",
        "user_defined_attribute2": "user_defined_value2",
        "user_defined_attribute3": "user_defined_value3",
        "user_defined_attribute4": "user_defined_value4"
      }
 }
```

The value of `TypeName` is limited to 100 characters. Also, the `TypeName` value must begin with the capitalized word `Custom`. For example, `Custom:PuppetModuleInfo`. Therefore, the following examples would result in an exception: `CUSTOM:PuppetModuleInfo`, `custom:PuppetModuleInfo`. 

The `Content` section includes attributes and *data*. These items aren't case-sensitive. However, if you define an attribute (for example: "`Vendor`": "DELL"), then you must reference this attribute consistently in your custom inventory files. If you specify "`Vendor`": "DELL" (using a capital “V” in `Vendor`) in one file, and then you specify "`vendor`": "DELL" (using a lowercase “v” in `vendor`) in another file, the system returns an error.

**Note**  
You must save the file with a `.json` extension and the inventory you define must consist only of string values.
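
Before uploading a file, you can check it against these constraints. The following validator is a sketch that covers only the rules documented here (the required sections, the `Custom:` prefix, the 100-character limit, string-only values, and the `.json` extension); the PutInventory API performs additional validation.

```python
import json

def validate_custom_inventory(path):
    """Return a list of problems found in a custom inventory file.

    An empty list means the file passed these checks. Covers only the
    constraints documented above, not every check PutInventory performs.
    """
    problems = []
    if not path.endswith(".json"):
        problems.append("file must have a .json extension")
    with open(path) as f:
        doc = json.load(f)
    for required in ("SchemaVersion", "TypeName", "Content"):
        if required not in doc:
            problems.append("missing required section: " + required)
    type_name = doc.get("TypeName", "")
    if not type_name.startswith("Custom:"):
        problems.append("TypeName must begin with 'Custom:' (case-sensitive)")
    if len(type_name) > 100:
        problems.append("TypeName is limited to 100 characters")
    content = doc.get("Content", [])
    entries = content if isinstance(content, list) else [content]
    for entry in entries:
        for key, value in entry.items():
            if not isinstance(value, str):
                problems.append("attribute %r must be a string value" % key)
    return problems
```

For example, a file containing `"TypeName": "custom:RackInformation"` would be reported, because the prefix check is case-sensitive.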

After you create the file, you must save it on the node. The following table shows the location where custom inventory JSON files must be stored on the node.


| Operating system | Path | 
| --- | --- | 
|  Linux  |  `/var/lib/amazon/ssm/node-id/inventory/custom`  | 
|  macOS  |  `/opt/aws/ssm/data/node-id/inventory/custom`  | 
|  Windows Server  |  `%SystemDrive%\ProgramData\Amazon\SSM\InstanceData\node-id\inventory\custom`  | 

For an example of how to use custom inventory, see [Get Disk Utilization of Your Fleet Using EC2 Systems Manager Custom Inventory Types](https://aws.amazon.com/blogs/mt/get-disk-utilization-of-your-fleet-using-ec2-systems-manager-custom-inventory-types/).

## Deleting custom inventory


You can use the [DeleteInventory](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DeleteInventory.html) API operation to delete a custom inventory type and the data associated with that type. To delete all data for an inventory type, call the delete-inventory command by using the AWS Command Line Interface (AWS CLI). To delete the custom inventory type itself, call the delete-inventory command with the `SchemaDeleteOption` parameter.

**Note**  
An inventory type is also called an inventory schema.

The `SchemaDeleteOption` parameter includes the following options:
+ **DeleteSchema**: This option deletes the specified custom type and all data associated with it. You can recreate the schema later, if you want.
+ **DisableSchema**: If you choose this option, the system turns off the current version, deletes all data for it, and ignores all new data if the version is less than or equal to the turned off version. You can allow this inventory type again by calling the [PutInventory](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PutInventory.html) action for a version greater than the turned off version.

**To delete or turn off custom inventory by using the AWS CLI**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run the following command to use the `dry-run` option to see which data will be deleted from the system. This command doesn't delete any data.

   ```
   aws ssm delete-inventory --type-name "Custom:custom_type_name" --dry-run
   ```

   The system returns information like the following.

   ```
   {
      "DeletionSummary":{
         "RemainingCount":3,
         "SummaryItems":[
            {
               "Count":2,
               "RemainingCount":2,
               "Version":"1.0"
            },
            {
               "Count":1,
               "RemainingCount":1,
               "Version":"2.0"
            }
         ],
         "TotalCount":3
      },
      "TypeName":"Custom:custom_type_name"
   }
   ```

   For information about how to understand the delete inventory summary, see [Understanding the delete inventory summary](#delete-custom-inventory-summary).

1. Run the following command to delete all data for a custom inventory type.

   ```
   aws ssm delete-inventory --type-name "Custom:custom_type_name"
   ```
**Note**  
The output of this command doesn't show the deletion progress. For this reason, `TotalCount` and `RemainingCount` are always the same because the system hasn't deleted anything yet. You can use the `describe-inventory-deletions` command to show the deletion progress, as described later in this topic.

   The system returns information like the following.

   ```
   {
      "DeletionId":"system_generated_deletion_ID",
      "DeletionSummary":{
         "RemainingCount":3,
         "SummaryItems":[
            {
               "Count":2,
               "RemainingCount":2,
               "Version":"1.0"
            },
            {
               "Count":1,
               "RemainingCount":1,
               "Version":"2.0"
            }
         ],
         "TotalCount":3
      },
      "TypeName":"custom_type_name"
   }
   ```

   The system deletes all data for the specified custom inventory type from the Systems Manager Inventory service. 

1. Run the following command. The command performs the following actions for the current version of the inventory type: turns off the current version, deletes all data for it, and ignores all new data if the version is less than or equal to the turned off version. 

   ```
   aws ssm delete-inventory --type-name "Custom:custom_type_name" --schema-delete-option "DisableSchema"
   ```

   The system returns information like the following.

   ```
   {
      "DeletionId":"system_generated_deletion_ID",
      "DeletionSummary":{
         "RemainingCount":3,
         "SummaryItems":[
            {
               "Count":2,
               "RemainingCount":2,
               "Version":"1.0"
            },
            {
               "Count":1,
               "RemainingCount":1,
               "Version":"2.0"
            }
         ],
         "TotalCount":3
      },
      "TypeName":"Custom:custom_type_name"
   }
   ```

   You can view a turned off inventory type by using the following command.

   ```
   aws ssm get-inventory-schema --type-name Custom:custom_type_name
   ```

1. Run the following command to delete an inventory type.

   ```
   aws ssm delete-inventory --type-name "Custom:custom_type_name" --schema-delete-option "DeleteSchema"
   ```

   The system deletes the schema and all inventory data for the specified custom type.

   The system returns information like the following.

   ```
   {
      "DeletionId":"system_generated_deletion_ID",
      "DeletionSummary":{
         "RemainingCount":3,
         "SummaryItems":[
            {
               "Count":2,
               "RemainingCount":2,
               "Version":"1.0"
            },
            {
               "Count":1,
               "RemainingCount":1,
               "Version":"2.0"
            }
         ],
         "TotalCount":3
      },
      "TypeName":"Custom:custom_type_name"
   }
   ```
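
The `DeletionSummary` structure is the same in each of these responses, so a short script can report how many items have been deleted per schema version. This sketch works on the sample output above, with no AWS call.

```python
import json

# Sample delete-inventory response, as shown in the steps above.
response = json.loads("""
{
   "DeletionSummary":{
      "RemainingCount":3,
      "SummaryItems":[
         {"Count":2,"RemainingCount":2,"Version":"1.0"},
         {"Count":1,"RemainingCount":1,"Version":"2.0"}
      ],
      "TotalCount":3
   }
}
""")

def deleted_per_version(summary):
    """Items deleted so far for each schema version (Count - RemainingCount)."""
    return {item["Version"]: item["Count"] - item["RemainingCount"]
            for item in summary["SummaryItems"]}

print(deleted_per_version(response["DeletionSummary"]))
# {'1.0': 0, '2.0': 0} -- nothing deleted yet at the time of this response
```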

### Viewing the deletion status


You can check the status of a delete operation by using the `describe-inventory-deletions` AWS CLI command. You can specify a deletion ID to view the status of a specific delete operation. Or, you can omit the deletion ID to view a list of all deletions run in the last 30 days.


1. Run the following command to view the status of a deletion operation. The system returned the deletion ID in the delete-inventory summary.

   ```
   aws ssm describe-inventory-deletions --deletion-id system_generated_deletion_ID
   ```

   The system returns the latest status. The delete operation might not be finished yet. The system returns information like the following.

   ```
   {"InventoryDeletions": 
     [
       {"DeletionId": "system_generated_deletion_ID", 
        "DeletionStartTime": 1521744844, 
        "DeletionSummary": 
         {"RemainingCount": 1, 
          "SummaryItems": 
           [
             {"Count": 1, 
              "RemainingCount": 1, 
              "Version": "1.0"}
           ], 
          "TotalCount": 1}, 
        "LastStatus": "InProgress", 
        "LastStatusMessage": "The Delete is in progress", 
        "LastStatusUpdateTime": 1521744844, 
        "TypeName": "Custom:custom_type_name"}
     ]
   }
   ```

   If the delete operation is successful, the `LastStatusMessage` states: Deletion is successful.

   ```
   {"InventoryDeletions": 
     [
       {"DeletionId": "system_generated_deletion_ID", 
        "DeletionStartTime": 1521744844, 
        "DeletionSummary": 
         {"RemainingCount": 0, 
          "SummaryItems": 
           [
             {"Count": 1, 
              "RemainingCount": 0, 
              "Version": "1.0"}
           ], 
          "TotalCount": 1}, 
        "LastStatus": "Complete", 
        "LastStatusMessage": "Deletion is successful", 
        "LastStatusUpdateTime": 1521745253, 
        "TypeName": "Custom:custom_type_name"}
     ]
   }
   ```

1. Run the following command to view a list of all deletions run in the last 30 days.

   ```
   aws ssm describe-inventory-deletions --max-results a number
   ```

   ```
   {"InventoryDeletions": 
     [
       {"DeletionId": "system_generated_deletion_ID", 
        "DeletionStartTime": 1521682552, 
        "DeletionSummary": 
         {"RemainingCount": 0, 
          "SummaryItems": 
           [
             {"Count": 1, 
              "RemainingCount": 0, 
              "Version": "1.0"}
           ], 
          "TotalCount": 1}, 
        "LastStatus": "Complete", 
        "LastStatusMessage": "Deletion is successful", 
        "LastStatusUpdateTime": 1521682852, 
        "TypeName": "Custom:custom_type_name"}, 
       {"DeletionId": "system_generated_deletion_ID", 
        "DeletionStartTime": 1521744844, 
        "DeletionSummary": 
         {"RemainingCount": 0, 
          "SummaryItems": 
           [
             {"Count": 1, 
              "RemainingCount": 0, 
              "Version": "1.0"}
           ], 
          "TotalCount": 1}, 
        "LastStatus": "Complete", 
        "LastStatusMessage": "Deletion is successful", 
        "LastStatusUpdateTime": 1521745253, 
        "TypeName": "Custom:custom_type_name"}, 
       {"DeletionId": "system_generated_deletion_ID", 
        "DeletionStartTime": 1521680145, 
        "DeletionSummary": 
         {"RemainingCount": 0, 
          "SummaryItems": 
           [
             {"Count": 1, 
              "RemainingCount": 0, 
              "Version": "1.0"}
           ], 
          "TotalCount": 1}, 
        "LastStatus": "Complete", 
        "LastStatusMessage": "Deletion is successful", 
        "LastStatusUpdateTime": 1521680471, 
        "TypeName": "Custom:custom_type_name"}
     ], 
    "NextToken": "next-token"
   ```
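
If you poll `describe-inventory-deletions` from a script, a small helper can decide when a given deletion has finished. The field names below follow the sample output above; this is a sketch, not part of the AWS CLI.

```python
def is_deletion_complete(response, deletion_id):
    """True when the named deletion reports Complete with nothing remaining."""
    for deletion in response.get("InventoryDeletions", []):
        if deletion["DeletionId"] == deletion_id:
            return (deletion["LastStatus"] == "Complete"
                    and deletion["DeletionSummary"]["RemainingCount"] == 0)
    return False

# Minimal example using the shape of the sample output above.
sample = {"InventoryDeletions": [{
    "DeletionId": "abc-123",
    "LastStatus": "Complete",
    "DeletionSummary": {"RemainingCount": 0},
}]}
print(is_deletion_complete(sample, "abc-123"))
# True
```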

### Understanding the delete inventory summary


To help you understand the contents of the delete inventory summary, consider the following example. A user assigned `Custom:RackSpace` inventory to three nodes. Inventory items 1 and 2 use custom type version 1.0 (`"SchemaVersion":"1.0"`). Inventory item 3 uses custom type version 2.0 (`"SchemaVersion":"2.0"`).

RackSpace custom inventory 1

```
{
   "CaptureTime":"2018-02-19T10:48:55Z",
   "TypeName":"CustomType:RackSpace",
   "InstanceId":"i-1234567890",
   "SchemaVersion":"1.0"   "Content":[
      {
         content of custom type omitted
      }
   ]
}
```

RackSpace custom inventory 2

```
{
   "CaptureTime":"2018-02-19T10:48:55Z",
   "TypeName":"CustomType:RackSpace",
   "InstanceId":"i-1234567891",
   "SchemaVersion":"1.0"   "Content":[
      {
         content of custom type omitted
      }
   ]
}
```

RackSpace custom inventory 3

```
{
   "CaptureTime":"2018-02-19T10:48:55Z",
   "TypeName":"CustomType:RackSpace",
   "InstanceId":"i-1234567892",
   "SchemaVersion":"2.0"   "Content":[
      {
         content of custom type omitted
      }
   ]
}
```

The user runs the following command to preview which data will be deleted.

```
aws ssm delete-inventory --type-name "Custom:RackSpace" --dry-run
```

The system returns information like the following.

```
{
   "DeletionId":"1111-2222-333-444-66666",
   "DeletionSummary":{
      "RemainingCount":3,           
      "TotalCount":3,             
                TotalCount and RemainingCount are the number of items that would be deleted if this was not a dry run. These numbers are the same because the system didn't delete anything.
      "SummaryItems":[
         {
            "Count":2,             The system found two items that use SchemaVersion 1.0. Neither item was deleted.           
            "RemainingCount":2,
            "Version":"1.0"
         },
         {
            "Count":1,             The system found one item that uses SchemaVersion 1.0. This item was not deleted.
            "RemainingCount":1,
            "Version":"2.0"
         }
      ],

   },
   "TypeName":"Custom:RackSpace"
}
```

The user runs the following command to delete the Custom:RackSpace inventory. 

**Note**  
The output of this command doesn't show the deletion progress. For this reason, `TotalCount` and `RemainingCount` are always the same because the system hasn't deleted anything yet. You can use the `describe-inventory-deletions` command to show the deletion progress.

```
aws ssm delete-inventory --type-name "Custom:RackSpace"
```

The system returns information like the following.

```
{
   "DeletionId":"1111-2222-333-444-7777777",
   "DeletionSummary":{
      "RemainingCount":3,           There are three items to delete
      "SummaryItems":[
         {
            "Count":2,              The system found two items that use SchemaVersion 1.0.
            "RemainingCount":2,     
            "Version":"1.0"
         },
         {
            "Count":1,              The system found one item that uses SchemaVersion 2.0.
            "RemainingCount":1,     
            "Version":"2.0"
         }
      ],
      "TotalCount":3                
   },
   "TypeName":"RackSpace"
}
```

### Viewing inventory delete actions in EventBridge


You can configure Amazon EventBridge to create an event anytime a user deletes custom inventory. EventBridge offers three types of events for custom inventory delete operations:
+ **Delete action for an instance**: If the custom inventory for a specific managed node was successfully deleted or not. 
+ **Delete action summary**: A summary of the delete action.
+ **Warning for turned off custom inventory type**: A warning event if a user called the [PutInventory](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PutInventory.html) API operation for a custom inventory type version that was previously turned off.

Here are examples of each event.

**Delete action for an instance**

```
{
   "version":"0",
   "id":"998c9cde-56c0-b38b-707f-0411b3ff9d11",
   "detail-type":"Inventory Resource State Change",
   "source":"aws.ssm",
   "account":"478678815555",
   "time":"2018-05-24T22:24:34Z",
   "region":"us-east-1",
   "resources":[
      "arn:aws:ssm:us-east-1:478678815555:managed-instance/i-0a5feb270fc3f0b97"
   ],
   "detail":{
      "action-status":"succeeded",
      "action":"delete",
      "resource-type":"managed-instance",
      "resource-id":"i-0a5feb270fc3f0b97",
      "action-reason":"",
      "type-name":"Custom:MyInfo"
   }
}
```

**Delete action summary**

```
{
   "version":"0",
   "id":"83898300-f576-5181-7a67-fb3e45e4fad4",
   "detail-type":"Inventory Resource State Change",
   "source":"aws.ssm",
   "account":"478678815555",
   "time":"2018-05-24T22:28:25Z",
   "region":"us-east-1",
   "resources":[

   ],
   "detail":{
      "action-status":"succeeded",
      "action":"delete-summary",
      "resource-type":"managed-instance",
      "resource-id":"",
      "action-reason":"The delete for type name Custom:MyInfo was completed. The deletion summary is: {\"totalCount\":2,\"remainingCount\":0,\"summaryItems\":[{\"version\":\"1.0\",\"count\":2,\"remainingCount\":0}]}",
      "type-name":"Custom:MyInfo"
   }
}
```

**Warning for turned off custom inventory type**

```
{
   "version":"0",
   "id":"49c1855c-9c57-b5d7-8518-b64aeeef5e4a",
   "detail-type":"Inventory Resource State Change",
   "source":"aws.ssm",
   "account":"478678815555",
   "time":"2018-05-24T22:46:58Z",
   "region":"us-east-1",
   "resources":[
      "arn:aws:ssm:us-east-1:478678815555:managed-instance/i-0ee2d86a2cfc371f6"
   ],
   "detail":{
      "action-status":"failed",
      "action":"put",
      "resource-type":"managed-instance",
      "resource-id":"i-0ee2d86a2cfc371f6",
      "action-reason":"The inventory item with type name Custom:MyInfo was sent with a disabled schema version 1.0. You must send a version greater than 1.0",
      "type-name":"Custom:MyInfo"
   }
}
```
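
When all three event types flow to the same target, the consumer has to tell them apart. The `detail.action` and `detail.action-status` fields in the samples above are enough for that; the following classifier is a sketch based only on those samples, and the category labels it returns are my own.

```python
def classify_inventory_event(event):
    """Map an 'Inventory Resource State Change' event to one of the three
    delete-related categories described above."""
    detail = event["detail"]
    if detail["action"] == "delete":
        return "instance-delete"
    if detail["action"] == "delete-summary":
        return "delete-summary"
    if detail["action"] == "put" and detail["action-status"] == "failed":
        return "disabled-schema-warning"
    return "other"

event = {"detail": {"action": "delete-summary", "action-status": "succeeded"}}
print(classify_inventory_event(event))
# delete-summary
```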

Use the following procedure to create an EventBridge rule for custom inventory delete operations. This procedure shows you how to create a rule that sends notifications for custom inventory delete operations to an Amazon SNS topic. Before you begin, verify that you have an Amazon SNS topic, or create a new one. For more information, see [Getting Started](https://docs.aws.amazon.com/sns/latest/dg/GettingStarted.html) in the *Amazon Simple Notification Service Developer Guide*.

**To configure EventBridge for delete inventory operations**

1. Open the Amazon EventBridge console at [https://console.aws.amazon.com/events/](https://console.aws.amazon.com/events/).

1. In the navigation pane, choose **Rules**.

1. Choose **Create rule**.

1. Enter a name and description for the rule.

   A rule can't have the same name as another rule in the same Region and on the same event bus.

1. For **Event bus**, choose the event bus that you want to associate with this rule. If you want this rule to respond to matching events that come from your own AWS account, select **default**. When an AWS service in your account emits an event, it always goes to your account’s default event bus.

1. For **Rule type**, choose **Rule with an event pattern**.

1. Choose **Next**.

1. For **Event source**, choose **AWS events or EventBridge partner events**.

1. In the **Event pattern** section, choose **Event pattern form**.

1. For **Event source**, choose **AWS services**.

1. For **AWS service**, choose **Systems Manager**.

1. For **Event type**, choose **Inventory**.

1. For **Specific detail type(s)**, choose **Inventory Resource State Change**.

1. Choose **Next**.

1. For **Target types**, choose **AWS service**.

1. For **Select a target**, choose **SNS topic**, and then for **Topic**, choose your topic.

1. In the **Additional settings** section, for **Configure target input**, verify that **Matched event** is selected.

1. Choose **Next**.

1. (Optional) Enter one or more tags for the rule. For more information, see [Tagging Your Amazon EventBridge Resources](https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-tagging.html) in the *Amazon EventBridge User Guide*.

1. Choose **Next**.

1. Review the details of the rule and choose **Create rule**.
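
If you prefer to create the rule programmatically, the console selections above correspond roughly to the following event pattern. Treat this as a sketch and compare it with the pattern the console generates for your rule.

```python
import json

# Event pattern matching Systems Manager "Inventory Resource State Change" events.
pattern = {
    "source": ["aws.ssm"],
    "detail-type": ["Inventory Resource State Change"],
}

# The serialized form is suitable for `aws events put-rule --event-pattern`
# or for boto3's events.put_rule(Name=..., EventPattern=...).
print(json.dumps(pattern))
```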

# Viewing inventory history and change tracking


You can view AWS Systems Manager Inventory history and change tracking for all of your managed nodes by using [AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/). AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. This includes how the resources are related to one another and how they were configured in the past so that you can see how the configurations and relationships change over time. To view inventory history and change tracking, you must turn on the following resources in AWS Config: 
+ SSM:ManagedInstanceInventory
+ SSM:PatchCompliance
+ SSM:AssociationCompliance
+ SSM:FileData

**Note**  
Note the following important details about Inventory history and change tracking:  
If you use AWS Config to track changes in your system, you must configure Systems Manager Inventory to collect `AWS:File` metadata so that you can view file changes in AWS Config (`SSM:FileData`). If you don't, then AWS Config doesn't track file changes on your system.
By turning on SSM:PatchCompliance and SSM:AssociationCompliance, you can view Systems Manager Patch Manager patching and Systems Manager State Manager association compliance history and change tracking. For more information about compliance management for these resources, see [Learn details about Compliance](compliance-about.md). 

The following procedure describes how to turn on inventory history and change-track recording in AWS Config by using the AWS Command Line Interface (AWS CLI). For more information about how to choose and configure these resources in AWS Config, see [Selecting Which Resources AWS Config Records](https://docs.aws.amazon.com/config/latest/developerguide/select-resources.html) in the *AWS Config Developer Guide*. For information about AWS Config pricing, see [Pricing](https://aws.amazon.com/config/pricing/).

**Before you begin**

AWS Config requires AWS Identity and Access Management (IAM) permissions to get configuration details about Systems Manager resources. In the following procedure, you must specify an Amazon Resource Name (ARN) for an IAM role that gives AWS Config permission to Systems Manager resources. You can attach the `AWS_ConfigRole` managed policy to the IAM role that you assign to AWS Config. For more information about this role, see [AWS managed policy: AWS\_ConfigRole](https://docs.aws.amazon.com/config/latest/developerguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AWS_ConfigRole) in the *AWS Config Developer Guide*. For information about how to create an IAM role and assign the `AWS_ConfigRole` managed policy to that role, see [Creating a role to delegate permissions to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html) in the *IAM User Guide*. 

**To turn on inventory history and change-track recording in AWS Config**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Copy and paste the following JSON sample into a simple text file and save it as recordingGroup.json.

   ```
   {
      "allSupported":false,
      "includeGlobalResourceTypes":false,
      "resourceTypes":[
         "AWS::SSM::AssociationCompliance",
         "AWS::SSM::PatchCompliance",
         "AWS::SSM::ManagedInstanceInventory",
         "AWS::SSM::FileData"
      ]
   }
   ```

1. Run the following command to load the recordingGroup.json file into AWS Config.

   ```
   aws configservice put-configuration-recorder --configuration-recorder name=myRecorder,roleARN=arn:aws:iam::123456789012:role/myConfigRole --recording-group file://recordingGroup.json
   ```

1. Run the following command to start recording inventory history and change tracking.

   ```
   aws configservice start-configuration-recorder --configuration-recorder-name myRecorder
   ```

After you configure history and change tracking, you can drill down into the history for a specific managed node by choosing the **AWS Config** button in the Systems Manager console. You can access the **AWS Config** button from either the **Managed Instances** page or the **Inventory** page. Depending on your monitor size, you might need to scroll to the right side of the page to see the button.

# Stopping data collection and deleting inventory data


If you no longer want to use AWS Systems Manager Inventory to view metadata about your AWS resources, you can stop data collection and delete data that has already been collected. This section includes the following information.

**Topics**
+ [

## Stopping data collection
](#systems-manager-inventory-delete-association)
+ [

## Deleting an Inventory resource data sync
](#systems-manager-inventory-delete-RDS)

## Stopping data collection


When you initially configure Systems Manager to collect inventory data, the system creates a State Manager association that defines the schedule and the resources from which to collect metadata. You can stop data collection by deleting any State Manager associations that use the `AWS-GatherSoftwareInventory` document.

**To delete an Inventory association**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **State Manager**.

1. Choose an association that uses the `AWS-GatherSoftwareInventory` document and then choose **Delete**.

1. Repeat step three for any remaining associations that use the `AWS-GatherSoftwareInventory` document.

## Deleting an Inventory resource data sync


If you no longer want to use AWS Systems Manager Inventory to view metadata about your AWS resources, then we also recommend deleting resource data syncs used for inventory data collection.

**To delete an Inventory resource data sync**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Inventory**.

1. Choose **Resource Data Syncs**.

1. Choose a sync in the list.
**Important**  
Make sure you choose the sync used for Inventory. Systems Manager supports resource data sync for multiple tools. If you choose the wrong sync, you could disrupt data aggregation for Systems Manager Explorer or Systems Manager Compliance.

1. Choose **Delete**.

1. Repeat these steps for any remaining resource data syncs you want to delete.

1. Delete the Amazon Simple Storage Service (Amazon S3) bucket where the data was stored. For information about deleting an Amazon S3 bucket, see [Deleting a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/delete-bucket.html).

# Assigning custom inventory metadata to a managed node


The following procedure walks you through the process of using the AWS Systems Manager [PutInventory](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PutInventory.html) API operation to assign custom inventory metadata to a managed node. This example assigns rack location information to a node. For more information about custom inventory, see [Working with custom inventory](inventory-custom.md).

**To assign custom inventory metadata to a node**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run the following command to assign rack location information to a node.

   **Linux**

   ```
   aws ssm put-inventory --instance-id "ID" --items '[{"CaptureTime": "2016-08-22T10:01:01Z", "TypeName": "Custom:RackInfo", "Content":[{"RackLocation": "Bay B/Row C/Rack D/Shelf E"}], "SchemaVersion": "1.0"}]'
   ```

   **Windows**

   ```
   aws ssm put-inventory --instance-id "ID" --items "TypeName=Custom:RackInfo,SchemaVersion=1.0,CaptureTime=2016-08-22T10:01:01Z,Content=[{RackLocation='Bay B/Row C/Rack D/Shelf E'}]"
   ```
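If you build this call from a script, it can help to construct and validate the `--items` JSON programmatically before passing it to the CLI. The following Python sketch is illustrative (the helper name `build_rack_item` is hypothetical); it encodes the constraints shown above: the type name begins with `Custom:`, and `CaptureTime` is an ISO 8601 UTC timestamp ending in `Z`.

```python
import json
from datetime import datetime, timezone

def build_rack_item(rack_location, capture_time=None):
    """Build one custom inventory item for the put-inventory --items parameter.

    Custom inventory TypeName values must begin with "Custom:", and
    CaptureTime must be an ISO 8601 UTC timestamp ending in "Z".
    """
    if capture_time is None:
        capture_time = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return {
        "TypeName": "Custom:RackInfo",
        "SchemaVersion": "1.0",
        "CaptureTime": capture_time,
        "Content": [{"RackLocation": rack_location}],
    }

# Serialize for the CLI: aws ssm put-inventory --instance-id "ID" --items '<json>'
items_json = json.dumps([build_rack_item("Bay B/Row C/Rack D/Shelf E",
                                         "2016-08-22T10:01:01Z")])
```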

1. Run the following command to view custom inventory entries for this node.

   ```
   aws ssm list-inventory-entries --instance-id ID --type-name "Custom:RackInfo"
   ```

   The system responds with information like the following.

   ```
   {
       "InstanceId": "ID", 
       "TypeName": "Custom:RackInfo", 
       "Entries": [
           {
               "RackLocation": "Bay B/Row C/Rack D/Shelf E"
           }
       ], 
       "SchemaVersion": "1.0", 
       "CaptureTime": "2016-08-22T10:01:01Z"
   }
   ```

1. Run the following command to view the custom inventory schema.

   ```
   aws ssm get-inventory-schema --type-name Custom:RackInfo
   ```

   The system responds with information like the following.

   ```
   {
       "Schemas": [
           {
               "TypeName": "Custom:RackInfo",
               "Version": "1.0",
               "Attributes": [
                   {
                       "DataType": "STRING",
                       "Name": "RackLocation"
                   }
               ]
           }
       ]
   }
   ```

# Using the AWS CLI to configure inventory data collection


The following procedures walk you through the process of configuring AWS Systems Manager Inventory to collect metadata from your managed nodes. When you configure inventory collection, you start by creating a Systems Manager State Manager association. Systems Manager collects the inventory data when the association is run. If you don't create the association first, and attempt to invoke the `aws:softwareInventory` plugin by using, for example, Systems Manager Run Command, the system returns the following error:

`The aws:softwareInventory plugin can only be invoked via ssm-associate`.

**Note**  
A node can have only one inventory association configured at a time. If you configure a node with two or more inventory associations, the association doesn't run and no inventory data is collected.

## Quickly configure all of your managed nodes for Inventory (CLI)


You can quickly configure all managed nodes in your AWS account and in the current Region to collect inventory data. This is called creating a global inventory association. To create a global inventory association by using the AWS CLI, use the wildcard option for the `instanceIds` value, as shown in the following procedure.

**To configure inventory for all managed nodes in your AWS account and in the current Region (CLI)**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run the following command.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
   --name AWS-GatherSoftwareInventory \
   --targets Key=InstanceIds,Values=* \
   --schedule-expression "rate(1 day)" \
   --parameters applications=Enabled,awsComponents=Enabled,customInventory=Enabled,instanceDetailedInformation=Enabled,networkConfig=Enabled,services=Enabled,windowsRoles=Enabled,windowsUpdates=Enabled
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association ^
   --name AWS-GatherSoftwareInventory ^
   --targets Key=InstanceIds,Values=* ^
   --schedule-expression "rate(1 day)" ^
   --parameters applications=Enabled,awsComponents=Enabled,customInventory=Enabled,instanceDetailedInformation=Enabled,networkConfig=Enabled,services=Enabled,windowsRoles=Enabled,windowsUpdates=Enabled
   ```

------

**Note**  
This command doesn't configure Inventory to collect metadata for the Windows Registry or files. To inventory those data types, use the next procedure.
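If you manage this configuration from code rather than the CLI, the same global association can be expressed as arguments to the boto3 `create_association` call. This is a sketch: the helper name is hypothetical, and the API call itself is commented out because it requires AWS credentials and a configured Region.

```python
# Inventory types enabled by the CLI command above.
INVENTORY_TYPES = [
    "applications", "awsComponents", "customInventory",
    "instanceDetailedInformation", "networkConfig", "services",
    "windowsRoles", "windowsUpdates",
]

def global_inventory_association_args(schedule="rate(1 day)"):
    """Build kwargs for ssm.create_association for a global inventory association."""
    return {
        "Name": "AWS-GatherSoftwareInventory",
        # The wildcard targets every managed node in the account and Region.
        "Targets": [{"Key": "InstanceIds", "Values": ["*"]}],
        "ScheduleExpression": schedule,
        "Parameters": {t: ["Enabled"] for t in INVENTORY_TYPES},
    }

args = global_inventory_association_args()
# import boto3
# boto3.client("ssm").create_association(**args)
```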

## Manually configuring Inventory on your managed nodes (CLI)


Use the following procedure to manually configure AWS Systems Manager Inventory on your managed nodes by using node IDs or tags.

**To manually configure your managed nodes for inventory (CLI)**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run the following command to create a State Manager association that runs Systems Manager Inventory on the node. Replace each *example resource placeholder* with your own information. This command configures the service to run every four hours and to collect network configuration, Windows Update, and application metadata from a node.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
   --name "AWS-GatherSoftwareInventory" \
   --targets "Key=instanceids,Values=an_instance_ID" \
   --schedule-expression "rate(240 minutes)" \
   --output-location "{ \"S3Location\": { \"OutputS3Region\": \"region_ID, for example us-east-2\", \"OutputS3BucketName\": \"amzn-s3-demo-bucket\", \"OutputS3KeyPrefix\": \"Test\" } }" \
   --parameters "networkConfig=Enabled,windowsUpdates=Enabled,applications=Enabled"
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association ^
   --name "AWS-GatherSoftwareInventory" ^
   --targets "Key=instanceids,Values=an_instance_ID" ^
   --schedule-expression "rate(240 minutes)" ^
   --output-location "{ \"S3Location\": { \"OutputS3Region\": \"region_ID, for example us-east-2\", \"OutputS3BucketName\": \"amzn-s3-demo-bucket\", \"OutputS3KeyPrefix\": \"Test\" } }" ^
   --parameters "networkConfig=Enabled,windowsUpdates=Enabled,applications=Enabled"
   ```

------

   The system responds with information like the following.

   ```
   {
       "AssociationDescription": {
           "ScheduleExpression": "rate(240 minutes)",
           "OutputLocation": {
               "S3Location": {
                   "OutputS3KeyPrefix": "Test",
                   "OutputS3BucketName": "Test bucket",
                   "OutputS3Region": "us-east-2"
               }
           },
           "Name": "The name you specified",
           "Parameters": {
               "applications": [
                   "Enabled"
               ],
               "networkConfig": [
                   "Enabled"
               ],
               "windowsUpdates": [
                   "Enabled"
               ]
           },
           "Overview": {
               "Status": "Pending",
               "DetailedStatus": "Creating"
           },
           "AssociationId": "1a2b3c4d5e6f7g-1a2b3c-1a2b3c-1a2b3c-1a2b3c4d5e6f7g",
           "DocumentVersion": "$DEFAULT",
           "LastUpdateAssociationDate": 1480544990.06,
           "Date": 1480544990.06,
           "Targets": [
               {
                   "Values": [
                      "i-02573cafcfEXAMPLE"
                   ],
                   "Key": "InstanceIds"
               }
           ]
       }
   }
   ```

   You can target large groups of nodes by using the `Targets` parameter with EC2 tags. See the following example.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
   --name "AWS-GatherSoftwareInventory" \
   --targets "Key=tag:Environment,Values=Production" \
   --schedule-expression "rate(240 minutes)" \
   --output-location "{ \"S3Location\": { \"OutputS3Region\": \"us-east-2\", \"OutputS3BucketName\": \"amzn-s3-demo-bucket\", \"OutputS3KeyPrefix\": \"Test\" } }" \
   --parameters "networkConfig=Enabled,windowsUpdates=Enabled,applications=Enabled"
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association ^
   --name "AWS-GatherSoftwareInventory" ^
   --targets "Key=tag:Environment,Values=Production" ^
   --schedule-expression "rate(240 minutes)" ^
   --output-location "{ \"S3Location\": { \"OutputS3Region\": \"us-east-2\", \"OutputS3BucketName\": \"amzn-s3-demo-bucket\", \"OutputS3KeyPrefix\": \"Test\" } }" ^
   --parameters "networkConfig=Enabled,windowsUpdates=Enabled,applications=Enabled"
   ```

------

   You can also inventory files and Windows Registry keys on a Windows Server node by using the `files` and `windowsRegistry` inventory types with expressions. For more information about these inventory types, see [Working with file and Windows registry inventory](inventory-file-and-registry.md).

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
   --name "AWS-GatherSoftwareInventory" \
   --targets "Key=instanceids,Values=i-0704358e3a3da9eb1" \
   --schedule-expression "rate(240 minutes)" \
   --parameters '{"files":["[{\"Path\": \"C:\\Program Files\", \"Pattern\": [\"*.exe\"], \"Recursive\": true}]"], "windowsRegistry": ["[{\"Path\":\"HKEY_LOCAL_MACHINE\\Software\\Amazon\", \"Recursive\":true}]"]}' \
   --profile dev-pdx
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association ^
   --name "AWS-GatherSoftwareInventory" ^
   --targets "Key=instanceids,Values=i-0704358e3a3da9eb1" ^
   --schedule-expression "rate(240 minutes)" ^
   --parameters '{"files":["[{\"Path\": \"C:\\Program Files\", \"Pattern\": [\"*.exe\"], \"Recursive\": true}]"], "windowsRegistry": ["[{\"Path\":\"HKEY_LOCAL_MACHINE\\Software\\Amazon\", \"Recursive\":true}]"]}' ^
   --profile dev-pdx
   ```

------

1. Run the following command to view the association status.

   ```
   aws ssm describe-instance-associations-status --instance-id an_instance_ID
   ```

   The system responds with information like the following.

   ```
   {
   "InstanceAssociationStatusInfos": [
            {
               "Status": "Pending",
               "DetailedStatus": "Associated",
               "Name": "reInvent2016PolicyDocumentTest",
               "InstanceId": "i-1a2b3c4d5e6f7g",
               "AssociationId": "1a2b3c4d5e6f7g-1a2b3c-1a2b3c-1a2b3c-1a2b3c4d5e6f7g",
               "DocumentVersion": "1"
           }
   ]
   }
   ```
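When checking many nodes, you may prefer to filter this response in code rather than read it by eye. The following sketch parses a `describe-instance-associations-status` response (the sample below mirrors the output shown above) and flags associations that haven't reached `Success`; the helper name is illustrative.

```python
import json

# Sample response in the shape returned by describe-instance-associations-status.
response_text = '''
{
    "InstanceAssociationStatusInfos": [
        {
            "Status": "Pending",
            "DetailedStatus": "Associated",
            "Name": "reInvent2016PolicyDocumentTest",
            "InstanceId": "i-1a2b3c4d5e6f7g",
            "AssociationId": "1a2b3c4d5e6f7g-1a2b3c-1a2b3c-1a2b3c-1a2b3c4d5e6f7g",
            "DocumentVersion": "1"
        }
    ]
}
'''

def unhealthy_associations(response):
    """Return (Name, Status) pairs for associations not yet in Success."""
    return [(info["Name"], info["Status"])
            for info in response["InstanceAssociationStatusInfos"]
            if info["Status"] != "Success"]

pending = unhealthy_associations(json.loads(response_text))
```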

# Walkthrough: Using resource data sync to aggregate inventory data


The following walkthrough describes how to create a resource data sync configuration for AWS Systems Manager Inventory by using the AWS Command Line Interface (AWS CLI). A resource data sync automatically ports inventory data from all of your managed nodes to a central Amazon Simple Storage Service (Amazon S3) bucket. The sync automatically updates the data in the central Amazon S3 bucket whenever new inventory data is discovered. 

This walkthrough also describes how to use Amazon Athena and Amazon Quick Suite to query and analyze the aggregated data. For information about creating a resource data sync by using Systems Manager in the AWS Management Console, see [Walkthrough: Using resource data sync to aggregate inventory data](#inventory-resource-data-sync). For information about querying inventory from multiple AWS Regions and accounts by using Systems Manager in the AWS Management Console, see [Querying inventory data from multiple Regions and accounts](systems-manager-inventory-query.md).

**Note**  
This walkthrough includes information about how to encrypt the sync by using AWS Key Management Service (AWS KMS). Inventory doesn't collect any user-specific, proprietary, or sensitive data, so encryption is optional. For more information about AWS KMS, see the [AWS Key Management Service Developer Guide](https://docs.aws.amazon.com/kms/latest/developerguide/).

**Before you begin**  
Review or complete the following tasks before you begin the walkthrough in this section:
+ Collect inventory data from your managed nodes. For the purpose of the Amazon Athena and Amazon Quick Suite sections in this walkthrough, we recommend that you collect Application data. For more information about how to collect inventory data, see [Configuring inventory collection](inventory-collection.md) or [Using the AWS CLI to configure inventory data collection](inventory-collection-cli.md).
+ (Optional) If the inventory data is stored in an Amazon Simple Storage Service (Amazon S3) bucket that uses AWS Key Management Service (AWS KMS) encryption, you must also configure your IAM account and the `Amazon-GlueServiceRoleForSSM` service role for AWS KMS encryption. If you don't configure your IAM account and this role, Systems Manager displays `Cannot load Glue tables` when you choose the **Detailed View** tab in the console. For more information, see [(Optional) Configure permissions for viewing AWS KMS encrypted data](systems-manager-inventory-query.md#systems-manager-inventory-query-kms).
+ (Optional) If you want to encrypt the resource data sync by using AWS KMS, then you must either create a new key that includes the following policy, or you must update an existing key and add this policy to it.

------
#### [ JSON ]


  ```
  {
      "Version":"2012-10-17",		 	 	 
      "Id": "ssm-access-policy",
      "Statement": [
          {
              "Sid": "ssm-access-policy-statement",
              "Action": [
                  "kms:GenerateDataKey"
              ],
              "Effect": "Allow",
              "Principal": {
                  "Service": "ssm.amazonaws.com"
              },
              "Resource": "arn:aws:kms:us-east-1:123456789012:key/KMS_key_id",
              "Condition": {
                  "StringLike": {
                      "aws:SourceAccount": "123456789012"
                  },
                  "ArnLike": {
                      "aws:SourceArn": "arn:aws:ssm:*:123456789012:resource-data-sync/*"
                  }
              }
          }
      ]
  }
  ```

------

**To create a resource data sync for Inventory**

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Create a bucket to store your aggregated inventory data. For more information, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) in the *Amazon Simple Storage Service User Guide*. Make a note of the bucket name and the AWS Region where you created it.

1. After you create the bucket, choose the **Permissions** tab, and then choose **Bucket Policy**.

1. Copy and paste the following bucket policy into the policy editor. Replace *amzn-s3-demo-bucket* and *account-id* with the name of the Amazon S3 bucket you created and a valid AWS account ID. When adding multiple accounts, add an additional `aws:SourceAccount` entry and `aws:SourceArn` ARN for each account; when adding only one account, remove the extra placeholders from the example. Optionally, replace *bucket-prefix* with the name of an Amazon S3 prefix (subdirectory). If you didn't create a prefix, remove *bucket-prefix/* from the ARN in the policy. 

------
#### [ JSON ]


   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Sid": "SSMBucketDelivery",
               "Effect": "Allow",
               "Principal": {
                   "Service": "ssm.amazonaws.com"
               },
               "Action": "s3:PutObject",
               "Resource": [
                   "arn:aws:s3:::amzn-s3-demo-bucket/bucket-prefix/*/accountid=111122223333/*"
               ],
               "Condition": {
                   "StringEquals": {
                       "s3:x-amz-acl": "bucket-owner-full-control",
                       "aws:SourceAccount": [
                           "111122223333",
                           "444455556666",
                           "123456789012",
                           "777788889999"
                       ]
                   },
                   "ArnLike": {
                       "aws:SourceArn": [
                           "arn:aws:ssm:*:111122223333:resource-data-sync/*",
                           "arn:aws:ssm:*:444455556666:resource-data-sync/*",
                           "arn:aws:ssm:*:123456789012:resource-data-sync/*",
                           "arn:aws:ssm:*:777788889999:resource-data-sync/*"
                       ]
                   }
               }
           }
       ]
   }
   ```

------
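   Because the policy grows one `Resource` entry and one `aws:SourceArn` entry per account, generating it programmatically reduces copy-paste errors. The following Python sketch builds the policy for any list of accounts; the function name is hypothetical, and the bucket, prefix, and account IDs are caller-supplied placeholders.

```python
import json

def ssm_sync_bucket_policy(bucket, prefix, account_ids):
    """Build the SSMBucketDelivery bucket policy for the given accounts.

    The Resource path must match the accountid=<id> object layout that
    resource data sync writes to the bucket.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "SSMBucketDelivery",
            "Effect": "Allow",
            "Principal": {"Service": "ssm.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": [
                f"arn:aws:s3:::{bucket}/{prefix}/*/accountid={acct}/*"
                for acct in account_ids
            ],
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control",
                    "aws:SourceAccount": list(account_ids),
                },
                "ArnLike": {
                    "aws:SourceArn": [
                        f"arn:aws:ssm:*:{acct}:resource-data-sync/*"
                        for acct in account_ids
                    ]
                },
            },
        }],
    }

policy = ssm_sync_bucket_policy("amzn-s3-demo-bucket", "bucket-prefix",
                                ["111122223333", "444455556666"])
policy_json = json.dumps(policy, indent=4)
```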

1. (Optional) If you want to encrypt the sync, you must add the following conditions to the policy listed in the previous step. Add them to the `StringEquals` section.

   ```
   "s3:x-amz-server-side-encryption":"aws:kms",
   "s3:x-amz-server-side-encryption-aws-kms-key-id":"arn:aws:kms:region:account_ID:key/KMS_key_ID"
   ```

   Here is an example:

   ```
   "StringEquals": {
             "s3:x-amz-acl": "bucket-owner-full-control",
             "aws:SourceAccount": "account-id",
             "s3:x-amz-server-side-encryption":"aws:kms",
             "s3:x-amz-server-side-encryption-aws-kms-key-id":"arn:aws:kms:region:account_ID:key/KMS_key_ID"
           }
   ```
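If you generate the bucket policy in code, the encryption conditions can be merged into each statement's `StringEquals` block mechanically. The sketch below uses a trimmed stand-in policy and a hypothetical helper; the KMS key ARN shown is a placeholder.

```python
# Trimmed stand-in for the SSMBucketDelivery bucket policy from the earlier step.
sample_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "SSMBucketDelivery",
        "Effect": "Allow",
        "Condition": {
            "StringEquals": {
                "s3:x-amz-acl": "bucket-owner-full-control",
                "aws:SourceAccount": "111122223333",
            }
        },
    }],
}

def add_kms_conditions(policy, kms_key_arn):
    """Require SSE-KMS with a specific key on every statement in the policy."""
    for statement in policy["Statement"]:
        cond = statement["Condition"].setdefault("StringEquals", {})
        cond["s3:x-amz-server-side-encryption"] = "aws:kms"
        cond["s3:x-amz-server-side-encryption-aws-kms-key-id"] = kms_key_arn
    return policy

add_kms_conditions(sample_policy,
                   "arn:aws:kms:us-east-1:111122223333:key/1234abcd-EXAMPLE")
```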

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. (Optional) If you want to encrypt the sync, run the following command to verify that the bucket policy is enforcing the AWS KMS key requirement. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws s3 cp ./A_file_in_the_bucket s3://amzn-s3-demo-bucket/prefix/ \
   --sse aws:kms \
   --sse-kms-key-id "arn:aws:kms:region:account_ID:key/KMS_key_id" \
   --region region, for example, us-east-2
   ```

------
#### [ Windows ]

   ```
   aws s3 cp ./A_file_in_the_bucket s3://amzn-s3-demo-bucket/prefix/ ^
       --sse aws:kms ^
       --sse-kms-key-id "arn:aws:kms:region:account_ID:key/KMS_key_id" ^
       --region region, for example, us-east-2
   ```

------

1. Run the following command to create a resource data sync configuration with the Amazon S3 bucket you created at the start of this procedure. This command creates a sync from the AWS Region you're logged into.
**Note**  
If the sync and the target Amazon S3 bucket are located in different AWS Regions, you might be subject to data transfer pricing. For more information, see [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/).

------
#### [ Linux & macOS ]

   ```
   aws ssm create-resource-data-sync \
   --sync-name a_name \
   --s3-destination "BucketName=amzn-s3-demo-bucket,Prefix=prefix_name, if_specified,SyncFormat=JsonSerDe,Region=bucket_region"
   ```

------
#### [ Windows ]

   ```
   aws ssm create-resource-data-sync ^
   --sync-name a_name ^
   --s3-destination "BucketName=amzn-s3-demo-bucket,Prefix=prefix_name, if_specified,SyncFormat=JsonSerDe,Region=bucket_region"
   ```

------

   You can use the `region` parameter to specify where the sync configuration is created. In the following example, inventory data from the us-west-1 Region is synchronized to the Amazon S3 bucket in the us-west-2 Region.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-resource-data-sync \
       --sync-name InventoryDataWest \
       --s3-destination "BucketName=amzn-s3-demo-bucket,Prefix=HybridEnv,SyncFormat=JsonSerDe,Region=us-west-2" \
       --region us-west-1
   ```

------
#### [ Windows ]

   ```
   aws ssm create-resource-data-sync ^
   --sync-name InventoryDataWest ^
   --s3-destination "BucketName=amzn-s3-demo-bucket,Prefix=HybridEnv,SyncFormat=JsonSerDe,Region=us-west-2" ^
   --region us-west-1
   ```

------

   (Optional) If you want to encrypt the sync by using AWS KMS, run the following command to create the sync. If you encrypt the sync, then the AWS KMS key and the Amazon S3 bucket must be in the same Region.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-resource-data-sync \
   --sync-name sync_name \
   --s3-destination "BucketName=amzn-s3-demo-bucket,Prefix=prefix_name, if_specified,SyncFormat=JsonSerDe,AWSKMSKeyARN=arn:aws:kms:region:account_ID:key/KMS_key_ID,Region=bucket_region" \
   --region region
   ```

------
#### [ Windows ]

   ```
   aws ssm create-resource-data-sync ^
   --sync-name sync_name ^
   --s3-destination "BucketName=amzn-s3-demo-bucket,Prefix=prefix_name, if_specified,SyncFormat=JsonSerDe,AWSKMSKeyARN=arn:aws:kms:region:account_ID:key/KMS_key_ID,Region=bucket_region" ^
   --region region
   ```

------
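   The variants above map directly onto the boto3 `create_resource_data_sync` call. This sketch builds the keyword arguments (the helper name is illustrative); `AWSKMSKeyARN` is included only when a key is supplied, and the API call itself is commented out because it requires credentials.

```python
def resource_data_sync_args(sync_name, bucket, region, prefix=None,
                            kms_key_arn=None):
    """Build kwargs for ssm.create_resource_data_sync."""
    dest = {"BucketName": bucket, "SyncFormat": "JsonSerDe", "Region": region}
    if prefix:
        dest["Prefix"] = prefix
    if kms_key_arn:
        # When encrypting, the key and the bucket must be in the same Region.
        dest["AWSKMSKeyARN"] = kms_key_arn
    return {"SyncName": sync_name, "S3Destination": dest}

args = resource_data_sync_args("InventoryDataWest", "amzn-s3-demo-bucket",
                               "us-west-2", prefix="HybridEnv")
# import boto3
# boto3.client("ssm", region_name="us-west-1").create_resource_data_sync(**args)
```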

1. Run the following command to view the status of the sync configuration. 

   ```
   aws ssm list-resource-data-sync 
   ```

   If you created the sync configuration in a different Region, then you must specify the `region` parameter, as shown in the following example.

   ```
   aws ssm list-resource-data-sync --region us-west-1
   ```

1. After the sync configuration is created successfully, examine the target bucket in Amazon S3. Inventory data should be displayed within a few minutes.

**Working with the Data in Amazon Athena**

The following section describes how to view and query the data in Amazon Athena. Before you begin, we recommend that you learn about Athena. For more information, see [What is Amazon Athena?](https://docs.aws.amazon.com/athena/latest/ug/what-is.html) and [Working with Data](https://docs.aws.amazon.com/athena/latest/ug/work-with-data.html) in the *Amazon Athena User Guide*.

**To view and query the data in Amazon Athena**

1. Open the Athena console at [https://console.aws.amazon.com/athena/](https://console.aws.amazon.com/athena/).

1. Copy and paste the following statement into the query editor and then choose **Run Query**.

   ```
   CREATE DATABASE ssminventory
   ```

   The system creates a database called ssminventory.

1. Copy and paste the following statement into the query editor and then choose **Run Query**. Replace *amzn-s3-demo-bucket* and *bucket-prefix* with the name and prefix of the Amazon S3 target.

   ```
   CREATE EXTERNAL TABLE IF NOT EXISTS ssminventory.AWS_Application (
   Name string,
   ResourceId string,
   ApplicationType string,
   Publisher string,
   Version string,
   InstalledTime string,
   Architecture string,
   URL string,
   Summary string,
   PackageId string
   ) 
   PARTITIONED BY (AccountId string, Region string, ResourceType string)
   ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
   WITH SERDEPROPERTIES (
     'serialization.format' = '1'
   ) LOCATION 's3://amzn-s3-demo-bucket/bucket-prefix/AWS:Application/'
   ```

1. Copy and paste the following statement into the query editor and then choose **Run Query**.

   ```
   MSCK REPAIR TABLE ssminventory.AWS_Application
   ```

   The system partitions the table.
**Note**  
If you create resource data syncs from additional AWS Regions or AWS accounts, then you must run this command again to update the partitions. You might also need to update your Amazon S3 bucket policy.

1. To preview your data, choose the view icon next to the `AWS_Application` table.  
![\[The preview data icon in Amazon Athena.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/sysman-inventory-resource-data-sync-walk.png)

1. Copy and paste the following statement into the query editor and then choose **Run Query**.

   ```
   SELECT a.name, a.version, count( a.version) frequency 
   from aws_application a where
   a.name = 'aws-cfn-bootstrap'
   group by a.name, a.version
   order  by frequency desc
   ```

   The query returns a count of different versions of `aws-cfn-bootstrap`, which is an AWS application present on Amazon Elastic Compute Cloud (Amazon EC2) instances for Linux, macOS, and Windows Server.

1. Individually copy and paste the following statements into the query editor, replace amzn-s3-demo-bucket and *bucket-prefix* with information for Amazon S3, and then choose **Run Query**. These statements set up additional inventory tables in Athena.

   ```
   CREATE EXTERNAL TABLE IF NOT EXISTS ssminventory.AWS_AWSComponent (
    `ResourceId` string,
     `Name` string,
     `ApplicationType` string,
     `Publisher` string,
     `Version` string,
     `InstalledTime` string,
     `Architecture` string,
     `URL` string
   )
   PARTITIONED BY (AccountId string, Region string, ResourceType string)
   ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
   WITH SERDEPROPERTIES (
     'serialization.format' = '1'
   ) LOCATION 's3://amzn-s3-demo-bucket/bucket-prefix/AWS:AWSComponent/'
   ```

   ```
   MSCK REPAIR TABLE ssminventory.AWS_AWSComponent
   ```

   ```
   CREATE EXTERNAL TABLE IF NOT EXISTS ssminventory.AWS_WindowsUpdate (
     `ResourceId` string,
     `HotFixId` string,
     `Description` string,
     `InstalledTime` string,
     `InstalledBy` string
   )
   PARTITIONED BY (AccountId string, Region string, ResourceType string)
   ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
   WITH SERDEPROPERTIES (
     'serialization.format' = '1'
   ) LOCATION 's3://amzn-s3-demo-bucket/bucket-prefix/AWS:WindowsUpdate/'
   ```

   ```
   MSCK REPAIR TABLE ssminventory.AWS_WindowsUpdate
   ```

   ```
   CREATE EXTERNAL TABLE IF NOT EXISTS ssminventory.AWS_InstanceInformation (
     `AgentType` string,
     `AgentVersion` string,
     `ComputerName` string,
     `IamRole` string,
     `InstanceId` string,
     `IpAddress` string,
     `PlatformName` string,
     `PlatformType` string,
     `PlatformVersion` string
   )
   PARTITIONED BY (AccountId string, Region string, ResourceType string)
   ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
   WITH SERDEPROPERTIES (
     'serialization.format' = '1'
   ) LOCATION 's3://amzn-s3-demo-bucket/bucket-prefix/AWS:InstanceInformation/'
   ```

   ```
   MSCK REPAIR TABLE ssminventory.AWS_InstanceInformation
   ```

   ```
   CREATE EXTERNAL TABLE IF NOT EXISTS ssminventory.AWS_Network (
     `ResourceId` string,
     `Name` string,
     `SubnetMask` string,
     `Gateway` string,
     `DHCPServer` string,
     `DNSServer` string,
     `MacAddress` string,
     `IPV4` string,
     `IPV6` string
   )
   PARTITIONED BY (AccountId string, Region string, ResourceType string)
   ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
   WITH SERDEPROPERTIES (
     'serialization.format' = '1'
   ) LOCATION 's3://amzn-s3-demo-bucket/bucket-prefix/AWS:Network/'
   ```

   ```
   MSCK REPAIR TABLE ssminventory.AWS_Network
   ```

   ```
   CREATE EXTERNAL TABLE IF NOT EXISTS ssminventory.AWS_PatchSummary (
     `ResourceId` string,
     `PatchGroup` string,
     `BaselineId` string,
     `SnapshotId` string,
     `OwnerInformation` string,
     `InstalledCount` int,
     `InstalledOtherCount` int,
     `NotApplicableCount` int,
     `MissingCount` int,
     `FailedCount` int,
     `OperationType` string,
     `OperationStartTime` string,
     `OperationEndTime` string
   )
   PARTITIONED BY (AccountId string, Region string, ResourceType string)
   ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
   WITH SERDEPROPERTIES (
     'serialization.format' = '1'
   ) LOCATION 's3://amzn-s3-demo-bucket/bucket-prefix/AWS:PatchSummary/'
   ```

   ```
   MSCK REPAIR TABLE ssminventory.AWS_PatchSummary
   ```

**Working with the Data in Amazon Quick Suite**

The following section provides an overview, with links, for building a visualization in Amazon Quick Suite.

**To build a visualization in Amazon Quick Suite**

1. Sign up for [Amazon Quick Suite](https://quicksight.aws/) and then log in to the Quick Suite console.

1. Create a data set from the `AWS_Application` table and any other tables you created. For more information, see [Creating a dataset using Amazon Athena data](https://docs.aws.amazon.com/quicksuite/latest/userguide/create-a-data-set-athena.html) in the *Amazon Quick Suite User Guide*.

1. Join tables. For example, you could join the `instanceid` column from `AWS_InstanceInformation` to the `resourceid` column in the other inventory tables. For more information about joining tables, see [Joining data](https://docs.aws.amazon.com/quicksuite/latest/userguide/joining-data.html) in the *Amazon Quick Suite User Guide*.

1. Build a visualization. For more information, see [Analyses and reports: Visualizing data in Amazon Quick Suite](https://docs.aws.amazon.com/quicksuite/latest/userguide/working-with-visuals.html) in the *Amazon Quick Suite User Guide*.

# Troubleshooting problems with Systems Manager Inventory
Troubleshooting Inventory

This topic includes information about how to troubleshoot common errors or problems with AWS Systems Manager Inventory. If you're having trouble viewing your nodes in Systems Manager, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md).

**Topics**
+ [

## Multiple apply all associations with document '`AWS-GatherSoftwareInventory`' are not supported
](#systems-manager-inventory-troubleshooting-multiple)
+ [

## Inventory execution status never exits pending
](#inventory-troubleshooting-pending)
+ [

## The `AWS-ListWindowsInventory` document fails to run
](#inventory-troubleshooting-ListWindowsInventory)
+ [

## Console doesn't display Inventory Dashboard | Detailed View | Settings tabs
](#inventory-troubleshooting-tabs)
+ [

## UnsupportedAgent
](#inventory-troubleshooting-unsupported-agent)
+ [

## Skipped
](#inventory-troubleshooting-skipped)
+ [

## Failed
](#inventory-troubleshooting-failed)
+ [

## Inventory compliance failed for an Amazon EC2 instance
](#inventory-troubleshooting-ec2-compliance)
+ [

## S3 bucket object contains old data
](#systems-manager-inventory-troubleshooting-s3)

## Multiple apply all associations with document '`AWS-GatherSoftwareInventory`' are not supported


The error `Multiple apply all associations with document 'AWS-GatherSoftwareInventory' are not supported` means that one or more AWS Regions where you're trying to configure an inventory association *for all nodes* are already configured with an inventory association for all nodes. If necessary, you can delete the existing inventory association for all nodes and then create a new one. To view existing inventory associations, choose **State Manager** in the Systems Manager console, and then locate associations that use the `AWS-GatherSoftwareInventory` SSM document. If the existing inventory association for all nodes was created across multiple Regions, and you want to create a new one, you must delete the existing association from each Region where it exists.

## Inventory execution status never exits pending


There are two reasons why inventory collection might never exit the `Pending` status:
+ No nodes in the selected AWS Region:

  If you create a global inventory association by using Systems Manager Quick Setup, the status of the inventory association (`AWS-GatherSoftwareInventory` document) shows `Pending` if there are no nodes available in the selected Region.
+ Insufficient permissions:

  An inventory association shows `Pending` if one or more nodes don't have permission to run Systems Manager Inventory. Verify that the AWS Identity and Access Management (IAM) instance profile includes the **AmazonSSMManagedInstanceCore** managed policy. For information about how to add this policy to an instance profile, see [Alternative configuration for EC2 instance permissions](setup-instance-permissions.md#instance-profile-add-permissions).

  At a minimum, the instance profile must have the following IAM permissions.

------
#### [ JSON ]


  ```
  {
      "Version":"2012-10-17",		 	 	 
      "Statement": [
          {
              "Effect": "Allow",
              "Action": [
                  "ssm:DescribeAssociation",
                  "ssm:ListAssociations",
                  "ssm:ListInstanceAssociations",
                  "ssm:PutInventory",
                  "ssm:PutComplianceItems",
                  "ssm:UpdateAssociationStatus",
                  "ssm:UpdateInstanceAssociationStatus",
                  "ssm:UpdateInstanceInformation",
                  "ssm:GetDocument",
                  "ssm:DescribeDocument"
              ],
              "Resource": "*"
          }
      ]
  }
  ```

------
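
If the policy is missing, one way to add it is to attach the **AmazonSSMManagedInstanceCore** managed policy to the role used by the instance profile. The following is a sketch with the AWS CLI; `MySSMInstanceRole` is a placeholder for your own role name.

```shell
# Attach the AmazonSSMManagedInstanceCore managed policy to the instance profile's role.
# MySSMInstanceRole is a placeholder; substitute the role name from your instance profile.
aws iam attach-role-policy \
    --role-name MySSMInstanceRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
```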

## The `AWS-ListWindowsInventory` document fails to run


The `AWS-ListWindowsInventory` document is deprecated. Don't use this document to collect inventory. Instead, use one of the processes described in [Configuring inventory collection](inventory-collection.md). 

## Console doesn't display Inventory Dashboard | Detailed View | Settings tabs


The Inventory **Detailed View** page is only available in AWS Regions that offer Amazon Athena. If the following tabs aren't displayed on the Inventory page, Athena isn't available in that Region and you can't use the **Detailed View** to query data.

![\[Displaying Inventory Dashboard | Detailed View | Settings tabs\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/inventory-detailed-view-for-error.png)


## UnsupportedAgent


If the detailed status of an inventory association shows **UnsupportedAgent**, and the **Association status** shows **Failed**, then the version of AWS Systems Manager SSM Agent on the managed node isn't correct. To create a global inventory association (to inventory all nodes in your AWS account) for example, you must use SSM Agent version 2.0.790.0 or later. You can view the agent version running on each of your nodes on the **Managed Instances** page in the **Agent version** column. For information about how to update SSM Agent on your nodes, see [Updating the SSM Agent using Run Command](run-command-tutorial-update-software.md#rc-console-agentexample).
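
To check agent versions across your fleet from the AWS CLI instead of the console, you can query `DescribeInstanceInformation`. This is a sketch; the `--query` expression simply selects the instance ID and agent version fields.

```shell
# List each managed node's ID and SSM Agent version.
aws ssm describe-instance-information \
    --query "InstanceInformationList[].{InstanceId:InstanceId,AgentVersion:AgentVersion}" \
    --output table
```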

## Skipped


If the status of the inventory association for a node shows **Skipped**, this means that a higher-priority inventory association is already running on that node. Systems Manager follows a specific priority order when multiple inventory associations could apply to the same managed node.

### Inventory association priority order


Systems Manager applies inventory associations in the following priority order:

1. **Quick Setup inventory associations** - Associations created using Quick Setup and the unified console. These associations have names that start with `AWS-QuickSetup-SSM-CollectInventory-` and target all managed nodes.

1. **Explicit inventory associations** - Associations that target specific managed nodes using:
   + Instance IDs
   + Tag key-value pairs
   + AWS resource groups

1. **Global inventory associations** - Associations that target all managed nodes (using `--targets "Key=InstanceIds,Values=*"`) but were **not** created through Quick Setup.
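
For reference, a global inventory association of the kind described in item 3 can be created with a command similar to the following sketch. The schedule and inventory-type parameter values are examples only.

```shell
# Create a global inventory association that targets all managed nodes.
# The schedule and inventory-type parameters are example values.
aws ssm create-association \
    --name "AWS-GatherSoftwareInventory" \
    --targets "Key=InstanceIds,Values=*" \
    --schedule-expression "rate(30 minutes)" \
    --parameters "applications=Enabled,awsComponents=Enabled,networkConfig=Enabled"
```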

### Common scenarios


**Scenario 1: Quick Setup association overrides explicit association**
+ You have a Quick Setup inventory association targeting all instances
+ You create a manual association targeting specific managed nodes by tag
+ Result: The manual association shows `Skipped` with detailed status `OverriddenByExplicitInventoryAssociation`
+ The Quick Setup association continues to collect inventory from all instances

**Scenario 2: Explicit association overrides global association**
+ You have a global inventory association targeting all instances (not created by Quick Setup)
+ You create an association targeting specific instances
+ Result: The global association shows `Skipped` for the specifically targeted instances
+ The explicit association runs on the targeted instances

### Resolution steps


**If you want to use your own inventory association instead of Quick Setup:**

1. **Identify Quick Setup associations**: In the Systems Manager console, go to State Manager and look for associations with names starting with `AWS-QuickSetup-SSM-CollectInventory-`.

1. **Remove Quick Setup configuration**:
   + Go to Quick Setup in the Systems Manager console.
   + Find your inventory collection configuration.
   + Delete the Quick Setup configuration (this removes the associated inventory association).
**Note**  
You don't need to manually delete the association created by Quick Setup.

1. **Verify your association runs**: After removing the Quick Setup configuration, your explicit inventory association should start running successfully.

**If you want to modify existing behavior:**
+ To view all existing inventory associations, choose **State Manager** in the Systems Manager console and locate associations that use the `AWS-GatherSoftwareInventory` SSM document.
+ Remember that each managed node can only have one active inventory association at a time.

**Important**  
Inventory data is still collected from skipped nodes when their assigned (higher-priority) inventory association runs.
Quick Setup inventory associations take precedence over all other types, even those with explicit targeting.
The detailed status message `OverriddenByExplicitInventoryAssociation` appears when any association is overridden by a higher-priority one, regardless of the association type.

## Failed


If the status of the inventory association for a node shows **Failed**, this could mean that the node has multiple inventory associations assigned to it. A node can only have one inventory association assigned at a time. An inventory association uses the `AWS-GatherSoftwareInventory` AWS Systems Manager document (SSM document). You can run the following command by using the AWS Command Line Interface (AWS CLI) to view a list of associations for a node.

```
aws ssm describe-instance-associations-status --instance-id instance-ID
```
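
To make duplicate inventory associations easier to spot, you can narrow the output to the association name and status fields, as in this sketch. The instance ID shown is a placeholder.

```shell
# Show only the name, status, and detailed status of each association on a node.
# i-02573cafcfEXAMPLE is a placeholder instance ID.
aws ssm describe-instance-associations-status \
    --instance-id i-02573cafcfEXAMPLE \
    --query "InstanceAssociationStatusInfos[].{Name:Name,Status:Status,Detail:DetailedStatus}" \
    --output table
```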

## Inventory compliance failed for an Amazon EC2 instance


Inventory compliance for an Amazon Elastic Compute Cloud (Amazon EC2) instance can fail if you assign multiple inventory associations to the instance. 

 To resolve this issue, delete one or more inventory associations assigned to the instance. For more information, see [Deleting an association](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-state-manager-delete-association.html). 
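
You can also delete a surplus inventory association from the AWS CLI. The association ID below is a placeholder; use an ID returned by `describe-instance-associations-status` or shown in the State Manager console.

```shell
# Delete one of the duplicate inventory associations by its ID.
# The association ID shown is a placeholder.
aws ssm delete-association \
    --association-id "8dfe3659-4309-493a-8755-0123456789ab"
```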

**Note**  
Be aware of the following behavior if you create multiple inventory associations for a managed node:  
Each node can be assigned an inventory association that targets *all* nodes (`--targets "Key=InstanceIds,Values=*"`).
Each node can also be assigned a specific association that uses either tag key-value pairs or an AWS resource group.
If a node is assigned multiple inventory associations, the status shows *Skipped* for the association that hasn't run. The association that ran most recently displays the actual status of the inventory association. 
If a node is assigned multiple inventory associations and each uses a tag key-value pair, then those inventory associations fail to run on the node because of the tag conflict. The association still runs on nodes that don't have the tag key-value conflict. 

## S3 bucket object contains old data


The data inside the Amazon S3 bucket object is updated only when the inventory association runs successfully and new data is discovered. When the association runs and fails, the object itself is updated for each node, but the data inside it is not. As a result, if the inventory association fails, you see old data in the Amazon S3 bucket object.

# AWS Systems Manager Patch Manager
Patch Manager

Patch Manager, a tool in AWS Systems Manager, automates the process of patching managed nodes with both security-related updates and other types of updates.

**Note**  
Systems Manager provides support for *patch policies* in Quick Setup, a tool in AWS Systems Manager. Using patch policies is the recommended method for configuring your patching operations. Using a single patch policy configuration, you can define patching for all accounts in all Regions in your organization; for only the accounts and Regions you choose; or for a single account-Region pair. For more information, see [Patch policy configurations in Quick Setup](patch-manager-policies.md).

You can use Patch Manager to apply patches for both operating systems and applications. (On Windows Server, application support is limited to updates for applications released by Microsoft.) You can use Patch Manager to install Service Packs on Windows nodes and perform minor version upgrades on Linux nodes. You can patch fleets of Amazon Elastic Compute Cloud (Amazon EC2) instances, edge devices, on-premises servers, and virtual machines (VMs) by operating system type. This includes supported versions of several operating systems, as listed in [Patch Manager prerequisites](patch-manager-prerequisites.md). You can scan instances to see only a report of missing patches, or you can scan and automatically install all missing patches. To get started with Patch Manager, open the [Systems Manager console](https://console.aws.amazon.com//systems-manager/patch-manager). In the navigation pane, choose **Patch Manager**.

AWS doesn't test patches before making them available in Patch Manager. Also, Patch Manager doesn't support upgrading major versions of operating systems, such as Windows Server 2016 to Windows Server 2019, or Red Hat Enterprise Linux (RHEL) 7.0 to RHEL 8.0.

For Linux-based operating system types that report a severity level for patches, Patch Manager uses the severity level reported by the software publisher for the update notice or individual patch. Patch Manager doesn't derive severity levels from third-party sources, such as the [Common Vulnerability Scoring System](https://www.first.org/cvss/) (CVSS), or from metrics released by the [National Vulnerability Database](https://nvd.nist.gov/vuln) (NVD).

## How can Patch Manager benefit my organization?


Patch Manager automates the process of patching managed nodes with both security-related updates and other types of updates. It provides several key benefits:
+ **Centralized patching control** – Using patch policies, you can set up recurring patching operations for all accounts in all Regions in your organization, specific accounts and Regions, or a single account-Region pair.
+ **Flexible patching operations** – You can choose to scan instances to see only a report of missing patches, or scan and automatically install all missing patches.
+ **Comprehensive compliance reporting** – After scanning operations, you can view detailed information about which managed nodes are out of patch compliance and which patches are missing.
+ **Cross-platform support** – Patch Manager supports multiple operating systems including various Linux distributions, macOS, and Windows Server.
+ **Custom patch baselines** – You can define what constitutes patch compliance for your organization through custom patch baselines that specify which patches are approved for installation.
+ **Integration with other AWS services** – Patch Manager integrates with AWS Organizations, AWS Security Hub CSPM, AWS CloudTrail, and AWS Config for enhanced management and security.
+ **Deterministic upgrades** – Support for deterministic upgrades through versioned repositories for operating systems like Amazon Linux 2023.

## Who should use Patch Manager?


Patch Manager is designed for the following:
+ IT administrators who need to maintain patch compliance across their fleet of managed nodes
+ Operations managers who need visibility into patch compliance status across their infrastructure
+ Cloud architects who want to implement automated patching solutions at scale
+ DevOps engineers who need to integrate patching into their operational workflows
+ Organizations with multi-account/multi-Region deployments who need centralized patch management
+ Anyone responsible for maintaining the security posture and operational health of AWS managed nodes, edge devices, on-premises servers, and virtual machines

## What are the main features of Patch Manager?


Patch Manager offers several key features:
+ **Patch policies** – Configure patching operations across multiple AWS accounts and Regions using a single policy through integration with AWS Organizations.
+ **Custom patch baselines** – Define rules for auto-approving patches within days of their release, along with approved and rejected patch lists.
+ **Multiple patching methods** – Choose from patch policies, maintenance windows, or on-demand "Patch now" operations to meet your specific needs.
+ **Compliance reporting** – Generate detailed reports on patch compliance status that can be sent to an Amazon S3 bucket in CSV format.
+ **Cross-platform support** – Patch both operating systems and applications across Windows Server, various Linux distributions, and macOS.
+ **Scheduling flexibility** – Set different schedules for scanning and installing patches using custom CRON or Rate expressions.
+ **Lifecycle hooks** – Run custom scripts before and after patching operations using Systems Manager documents.
+ **Security focus** – By default, Patch Manager focuses on security-related updates rather than installing all available patches.
+ **Rate control** – Configure concurrency and error thresholds for patching operations to minimize operational impact.

## What is compliance in Patch Manager?


The benchmark for what constitutes *patch compliance* for the managed nodes in your Systems Manager fleets is not defined by AWS, by operating system (OS) vendors, or by third parties such as security consulting firms.

Instead, you define what patch compliance means for managed nodes in your organization or account in a *patch baseline*. A patch baseline is a configuration that specifies rules for which patches must be installed on a managed node. A managed node is patch compliant when it is up to date with all the patches that meet the approval criteria that you specify in the patch baseline. 

Note that being *compliant* with a patch baseline doesn't mean that a managed node is necessarily *secure*. Compliant means that the patches defined by the patch baseline that are both *available* and *approved* have been installed on the node. The overall security of a managed node is determined by many factors outside the scope of Patch Manager. For more information, see [Security in AWS Systems Manager](security.md).

Each patch baseline is a configuration for a specific supported operating system (OS) type, such as Red Hat Enterprise Linux (RHEL), macOS, or Windows Server. A patch baseline can define patching rules for all supported versions of an OS or be limited to only the versions you specify, such as RHEL 7.8 and RHEL 9.3.

In a patch baseline, you could specify that all patches of certain classifications and severity levels are approved for installation. For example, you might include all patches classified as `Security` but exclude other classifications, such as `Bugfix` or `Enhancement`. And you could include all patches with a severity of `Critical` and exclude others, such as `Important` and `Moderate`.

You can also define patches explicitly in a patch baseline by adding their IDs to lists of specific patches to approve or reject, such as `KB2736693` for Windows Server or `dbus.x86_64:1:1.12.28-1.amzn2023.0.1` for Amazon Linux 2023 (AL2023). You can optionally specify a certain number of days to wait for patching after a patch becomes available. For Linux and macOS, you have the option of specifying an external list of patches for compliance (an Install Override list) instead of those defined by the patch baseline rules.
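
As an illustration of these approval criteria, the following sketch creates a custom patch baseline for AL2023 that auto-approves `Security`-classified patches of `Critical` or `Important` severity seven days after release. The baseline name and all rule values are example assumptions.

```shell
# Create a custom patch baseline for Amazon Linux 2023 (example values).
# Security patches rated Critical or Important are auto-approved 7 days after release.
aws ssm create-patch-baseline \
    --name "ExampleAL2023SecurityBaseline" \
    --operating-system "AMAZON_LINUX_2023" \
    --approval-rules "PatchRules=[{PatchFilterGroup={PatchFilters=[{Key=CLASSIFICATION,Values=[Security]},{Key=SEVERITY,Values=[Critical,Important]}]},ApproveAfterDays=7}]"
```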

When a patching operation runs, Patch Manager compares the patches currently applied to a managed node to those that should be applied according to the rules set up in the patch baseline or an Install Override list. You can choose for Patch Manager to show you only a report of missing patches (a `Scan` operation), or you can choose for Patch Manager to automatically install all patches it finds are missing from a managed node (a `Scan and install` operation).
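
After a `Scan` operation, you can retrieve the resulting per-node patch state from the AWS CLI, as in this sketch. The instance ID is a placeholder.

```shell
# Summarize patch state for a node after a Scan operation.
# i-02573cafcfEXAMPLE is a placeholder instance ID.
aws ssm describe-instance-patch-states \
    --instance-ids i-02573cafcfEXAMPLE \
    --query "InstancePatchStates[].{InstanceId:InstanceId,Missing:MissingCount,Installed:InstalledCount,Failed:FailedCount}"
```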

**Note**  
Patch compliance data represents a point-in-time snapshot from the latest successful patching operation. Each compliance report contains a capture time that identifies when the compliance status was calculated. When reviewing compliance data, consider the capture time to determine if the operation was executed as expected.

Patch Manager provides predefined patch baselines that you can use for your patching operations; however, these predefined configurations are provided as examples and not as recommended best practices. We recommend that you create custom patch baselines of your own to exercise greater control over what constitutes patch compliance for your fleet.

For more information about patch baselines, see the following topics:
+ [Predefined and custom patch baselines](patch-manager-predefined-and-custom-patch-baselines.md)
+ [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md)
+ [Viewing AWS predefined patch baselines](patch-manager-view-predefined-patch-baselines.md)
+ [Working with custom patch baselines](patch-manager-manage-patch-baselines.md)
+ [Working with patch compliance reports](patch-manager-compliance-reports.md)

## Primary components


Before you start working with the Patch Manager tool, you should familiarize yourself with some major components and features of the tool's patching operations.

**Patch baselines**  
Patch Manager uses *patch baselines*, which include rules for auto-approving patches within days of their release, in addition to optional lists of approved and rejected patches. When a patching operation runs, Patch Manager compares the patches currently applied to a managed node to those that should be applied according to the rules set up in the patch baseline. You can choose for Patch Manager to show you only a report of missing patches (a `Scan` operation), or you can choose for Patch Manager to automatically install all patches it finds are missing from a managed node (a `Scan and install` operation).

**Patching operation methods**  
Patch Manager currently offers four methods for running `Scan` and `Scan and install` operations:
+ **(Recommended) A patch policy configured in Quick Setup** – Based on integration with AWS Organizations, a single patch policy can define patching schedules and patch baselines for an entire organization, including multiple AWS accounts and all AWS Regions those accounts operate in. A patch policy can also target only some organizational units (OUs) in an organization. You can use a single patch policy to scan and install on different schedules. For more information, see [Configure patching for instances in an organization using a Quick Setup patch policy](quick-setup-patch-manager.md) and [Patch policy configurations in Quick Setup](patch-manager-policies.md).
+ **A Host Management option configured in Quick Setup** – Host Management configurations are also supported by integration with AWS Organizations, making it possible to run a patching operation for up to an entire Organization. However, this option is limited to scanning for missing patches using the current default patch baseline and providing results in compliance reports. This operation method can't install patches. For more information, see [Set up Amazon EC2 host management using Quick Setup](quick-setup-host-management.md).
+ **A maintenance window to run a patch `Scan` or `Install` task** – A maintenance window, which you set up in the Systems Manager tool called Maintenance Windows, can be configured to run different types of tasks on a schedule you define. A Run Command-type task can be used to run `Scan` or `Scan and install` tasks on a set of managed nodes that you choose. Each maintenance window task can target managed nodes in only a single AWS account-AWS Region pair. For more information, see [Tutorial: Create a maintenance window for patching using the console](maintenance-window-tutorial-patching.md).
+ **An on-demand Patch now operation in Patch Manager** – The **Patch now** option lets you bypass schedule setups when you need to patch managed nodes as quickly as possible. Using **Patch now**, you specify whether to run a `Scan` or `Scan and install` operation and which managed nodes to run the operation on. You can also choose to run Systems Manager documents (SSM documents) as lifecycle hooks during the patching operation. Each **Patch now** operation can target managed nodes in only a single AWS account-AWS Region pair. For more information, see [Patching managed nodes on demand](patch-manager-patch-now-on-demand.md).

**Compliance reporting**  
After a `Scan` operation, you can use the Systems Manager console to view information about which of your managed nodes are out of patch compliance, and which patches are missing from each of those nodes. You can also generate patch compliance reports in .csv format that are sent to an Amazon Simple Storage Service (Amazon S3) bucket of your choice. You can generate one-time reports, or generate reports on a regular schedule. For a single managed node, reports include details of all patches for the node. For a report on all managed nodes, only a summary of how many patches are missing is provided. After a report is generated, you can use a tool like Amazon QuickSight to import and analyze the data. For more information, see [Working with patch compliance reports](patch-manager-compliance-reports.md).

**Note**  
A compliance item generated through the use of a patch policy has an execution type of `PatchPolicy`. A compliance item not generated in a patch policy operation has an execution type of `Command`.

**Integrations**  
Patch Manager integrates with the following other AWS services: 
+ **AWS Identity and Access Management (IAM)** – Use IAM to control which users, groups, and roles have access to Patch Manager operations. For more information, see [How AWS Systems Manager works with IAM](security_iam_service-with-iam.md) and [Configure instance permissions required for Systems Manager](setup-instance-permissions.md). 
+ **AWS CloudTrail** – Use CloudTrail to record an auditable history of patching operation events initiated by users, roles, or groups. For more information, see [Logging AWS Systems Manager API calls with AWS CloudTrail](monitoring-cloudtrail-logs.md).
+ **AWS Security Hub CSPM** – Patch compliance data from Patch Manager can be sent to AWS Security Hub CSPM. Security Hub CSPM gives you a comprehensive view of your high-priority security alerts and compliance status. It also monitors the patching status of your fleet. For more information, see [Integrating Patch Manager with AWS Security Hub CSPM](patch-manager-security-hub-integration.md). 
+ **AWS Config** – Set up recording in AWS Config to view Amazon EC2 instance management data in the Patch Manager Dashboard. For more information, see [Viewing patch Dashboard summaries](patch-manager-view-dashboard-summaries.md).

**Topics**
+ [How can Patch Manager benefit my organization?](#how-can-patch-manager-benefit-my-organization)
+ [Who should use Patch Manager?](#who-should-use-patch-manager)
+ [What are the main features of Patch Manager?](#what-are-the-main-features-of-patch-manager)
+ [What is compliance in Patch Manager?](#patch-manager-definition-of-compliance)
+ [Primary components](#primary-components)
+ [Patch policy configurations in Quick Setup](patch-manager-policies.md)
+ [Patch Manager prerequisites](patch-manager-prerequisites.md)
+ [How Patch Manager operations work](patch-manager-patching-operations.md)
+ [SSM Command documents for patching managed nodes](patch-manager-ssm-documents.md)
+ [Patch baselines](patch-manager-patch-baselines.md)
+ [Using Kernel Live Patching on Amazon Linux 2 managed nodes](patch-manager-kernel-live-patching.md)
+ [Working with Patch Manager resources and compliance using the console](patch-manager-console.md)
+ [Working with Patch Manager resources using the AWS CLI](patch-manager-cli-commands.md)
+ [AWS Systems Manager Patch Manager tutorials](patch-manager-tutorials.md)
+ [Troubleshooting Patch Manager](patch-manager-troubleshooting.md)

# Patch policy configurations in Quick Setup


AWS recommends the use of *patch policies* to configure patching for your organization and AWS accounts. Patch policies were introduced in Patch Manager in December 2022.

A patch policy is a configuration you set up using Quick Setup, a tool in AWS Systems Manager. Patch policies provide more extensive and more centralized control over your patching operations than is available with previous methods of configuring patching. Patch policies can be used with [all operating systems supported by Patch Manager](patch-manager-prerequisites.md#pm-prereqs), including supported versions of Linux, macOS, and Windows Server. For instructions for creating a patch policy, see [Configure patching for instances in an organization using a Quick Setup patch policy](quick-setup-patch-manager.md).

## Major features of patch policies


Instead of using other methods of patching your nodes, use a patch policy to take advantage of these major features:
+ **Single setup** – Setting up patching operations using a maintenance window or State Manager association can require multiple tasks in different parts of the Systems Manager console. Using a patch policy, all your patching operations can be set up in a single wizard.
+ **Multi-account/Multi-Region support** – Using a maintenance window, a State Manager association, or the **Patch now** feature in Patch Manager, you're limited to targeting managed nodes in a single AWS account-AWS Region pair. If you use multiple accounts and multiple Regions, your setup and maintenance tasks can require a great deal of time, because you must perform setup tasks in each account-Region pair. However, if you use AWS Organizations, you can set up one patch policy that applies to all your managed nodes in all AWS Regions in all your AWS accounts. Or, if you choose, a patch policy can apply to only some organizational units (OUs) in the accounts and Regions you choose. A patch policy can also apply to a single local account, if you choose.
+ **Installation support at the organizational level** – The existing Host Management configuration option in Quick Setup provides support for a daily scan of your managed nodes for patch compliance. However, this scan is done at a predetermined time and results in patch compliance information only. No patch installations are performed. Using a patch policy, you can specify different schedules for scanning and installing. You can also choose the frequency and time of these operations by using custom CRON or Rate expressions. For example, you could scan for missing patches every day to provide you with regularly updated compliance information. But, your installation schedule could be just once a week to avoid unwanted downtime.
+ **Simplified patch baseline selection** – Patch policies still incorporate patch baselines, and there are no changes to the way patch baselines are configured. However, when you create or update a patch policy, you can select the AWS managed or custom baseline you want to use for each operating system (OS) type in a single list. It’s not necessary to specify the default baseline for each OS type in separate tasks.
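
As an example of the scan/install split described above, a patch policy could use schedule expressions like the following. These are illustrative values only; both CRON and Rate forms are accepted.

```
# Scan daily at 02:00 UTC
cron(0 2 * * ? *)

# Install weekly, Sundays at 03:00 UTC
cron(0 3 ? * SUN *)

# Or use a rate expression
rate(7 days)
```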

**Note**  
When patching operations based on a patch policy run, they use the `AWS-RunPatchBaseline` SSM document. For more information, see [SSM Command document for patching: `AWS-RunPatchBaseline`](patch-manager-aws-runpatchbaseline.md).
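
Outside of patch policies, you can invoke the same SSM document directly with Run Command, as in this sketch. The target instance ID is a placeholder.

```shell
# Run AWS-RunPatchBaseline in Scan mode against one instance.
# i-02573cafcfEXAMPLE is a placeholder instance ID.
aws ssm send-command \
    --document-name "AWS-RunPatchBaseline" \
    --targets "Key=InstanceIds,Values=i-02573cafcfEXAMPLE" \
    --parameters "Operation=Scan"
```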

**Related information**  
[Centrally deploy patching operations across your AWS Organization using Systems Manager Quick Setup](https://aws.amazon.com/blogs/mt/centrally-deploy-patching-operations-across-your-aws-organization-using-systems-manager-quick-setup/) (AWS Cloud Operations and Migrations Blog)

## Other differences with patch policies


Here are some other differences to note when using patch policies instead of previous methods of configuring patching:
+ **No patch groups required** – In previous patching operations, you could tag multiple nodes to belong to a patch group, and then specify the patch baseline to use for that patch group. If no patch group was defined, Patch Manager patched instances with the current default patch baseline for the OS type. Using patch policies, it’s no longer necessary to set up and maintain patch groups. 
**Note**  
Patch group functionality is not supported in the console for account-Region pairs that did not already use patch groups before patch policy support was released on December 22, 2022. Patch group functionality is still available in account-Region pairs that began using patch groups before this date.
+ **‘Configure patching’ page removed** – Before the release of patch policies, you could specify defaults for which nodes to patch, a patching schedule, and a patching operation on a **Configure patching** page. This page has been removed from Patch Manager. These options are now specified in patch policies. 
+ **No ‘Patch now’ support** – The ability to patch nodes on demand is still limited to a single AWS account-AWS Region pair at a time. For information, see [Patching managed nodes on demand](patch-manager-patch-now-on-demand.md).
+ **Patch policies and compliance information** – When your managed nodes are scanned for compliance according to a patching policy configuration, compliance data is made available to you. You can view and work with the data in the same way as with other methods of compliance scanning. Although you can set up a patch policy for an entire organization or multiple organizational units, compliance information is reported individually for each AWS account-AWS Region pair. For more information, see [Working with patch compliance reports](patch-manager-compliance-reports.md).
+ **Association compliance status and patch policies** – The patching status for a managed node that's under a Quick Setup patch policy matches the status of the State Manager association execution for that node. If the association execution status is `Compliant`, the patching status for the managed node is also marked `Compliant`. If the association execution status is `Non-Compliant`, the patching status for the managed node is also marked `Non-Compliant`. 

## AWS Regions supported for patch policies


Patch policy configurations in Quick Setup are currently supported in the following Regions:
+ US East (Ohio) (us-east-2)
+ US East (N. Virginia) (us-east-1)
+ US West (N. California) (us-west-1)
+ US West (Oregon) (us-west-2)
+ Asia Pacific (Mumbai) (ap-south-1)
+ Asia Pacific (Seoul) (ap-northeast-2)
+ Asia Pacific (Singapore) (ap-southeast-1)
+ Asia Pacific (Sydney) (ap-southeast-2)
+ Asia Pacific (Tokyo) (ap-northeast-1)
+ Canada (Central) (ca-central-1)
+ Europe (Frankfurt) (eu-central-1)
+ Europe (Ireland) (eu-west-1)
+ Europe (London) (eu-west-2)
+ Europe (Paris) (eu-west-3)
+ Europe (Stockholm) (eu-north-1)
+ South America (São Paulo) (sa-east-1)

# Patch Manager prerequisites


Make sure that you have met the required prerequisites before using Patch Manager, a tool in AWS Systems Manager. 

**Topics**
+ [SSM Agent version](#agent-versions)
+ [Python version](#python-version)
+ [Additional package requirements](#additional-package-requirements)
+ [Connectivity to the patch source](#source-connectivity)
+ [S3 endpoint access](#s3-endpoint-access)
+ [Permissions to install patches locally](#local-installation-permissions)
+ [Supported operating systems for Patch Manager](#supported-os)

## SSM Agent version


Version 2.0.834.0 or later of SSM Agent must be running on the managed nodes that you want to manage with Patch Manager.

**Note**  
An updated version of SSM Agent is released whenever new tools are added to Systems Manager or updates are made to existing tools. Failing to use the latest version of the agent can prevent your managed node from using various Systems Manager tools and features. For that reason, we recommend that you automate the process of keeping SSM Agent up to date on your machines. For information, see [Automating updates to SSM Agent](ssm-agent-automatic-updates.md). Subscribe to the [SSM Agent Release Notes](https://github.com/aws/amazon-ssm-agent/blob/mainline/RELEASENOTES.md) page on GitHub to get notifications about SSM Agent updates.

## Python version


For macOS and most Linux operating systems (OSs), Patch Manager currently supports Python versions 2.6 through 3.12. The AlmaLinux, Debian Server, and Ubuntu Server OSs require a supported version of Python 3 (3.0 through 3.12).
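
As a quick sanity check, a version-sort comparison can confirm that a node's Python version falls in a supported range. This is a sketch; the version value below is illustrative, and the comment shows one way to obtain the real value on a node:

```
min="2.6"; max="3.12"       # supported range for macOS and most Linux OSs
# On a real node: ver=$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')
ver="3.9"                   # illustrative value

lowest=$(printf '%s\n%s\n' "$min" "$ver" | sort -V | head -n1)
highest=$(printf '%s\n%s\n' "$max" "$ver" | sort -V | tail -n1)

# The version is in range if it sorts at or above min and at or below max.
if [ "$lowest" = "$min" ] && [ "$highest" = "$max" ]; then
  result="supported"
else
  result="unsupported"
fi
echo "$result"   # supported
```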

## Additional package requirements


For DNF-based operating systems, the `zstd`, `xz`, and `unzip` utilities might be required for decompressing repository information and patch files. DNF-based operating systems include Amazon Linux 2023, Red Hat Enterprise Linux 8 and later, Oracle Linux 8 and later, Rocky Linux, AlmaLinux, and CentOS 8 and later. If you see an error similar to `No such file or directory: b'zstd'` or `No such file or directory: b'unxz'`, or patching fails because `unzip` is missing, install these utilities by running the following command:

```
dnf install zstd xz unzip
```

## Connectivity to the patch source


If your managed nodes don't have a direct connection to the internet and you're using an Amazon Virtual Private Cloud (Amazon VPC) with a VPC endpoint, you must ensure that the nodes have access to the source patch repositories (repos). On Linux nodes, patch updates are typically downloaded from the remote repos configured on the node. Therefore, the node must be able to connect to the repos for patching to be performed. For more information, see [How security patches are selected](patch-manager-selecting-patches.md).

When patching a node running in an IPv6-only environment, ensure that the node has connectivity to the patch source. You can check the Run Command output from the patching execution for warnings about inaccessible repositories. For DNF-based operating systems, unavailable repositories can be skipped during patching if the `skip_if_unavailable` option is set to `True` in `/etc/dnf/dnf.conf`. DNF-based operating systems include Amazon Linux 2023, Red Hat Enterprise Linux 8 and later, Oracle Linux 8 and later, Rocky Linux, AlmaLinux, and CentOS 8 and later. On Amazon Linux 2023, the `skip_if_unavailable` option is set to `True` by default.
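
To verify or enable the `skip_if_unavailable` option, a sketch like the following can be used. It operates on a local sample file for illustration; on a managed node the path is `/etc/dnf/dnf.conf` and root permissions are required:

```
conf="./dnf.conf"                         # stand-in for /etc/dnf/dnf.conf
printf '[main]\ngpgcheck=1\n' > "$conf"   # sample config, for illustration only

# Append the option only if it isn't already set.
if ! grep -q '^skip_if_unavailable=' "$conf"; then
  echo 'skip_if_unavailable=True' >> "$conf"
fi
grep '^skip_if_unavailable=' "$conf"   # skip_if_unavailable=True
```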

**CentOS Stream: Enable the `EnableNonSecurity` flag**  
CentOS Stream nodes use DNF as the package manager, which uses the concept of an update notice. An update notice is simply a collection of packages that fix specific problems. 

However, CentOS Stream default repos aren't configured with an update notice. This means that Patch Manager doesn't detect packages on default CentOS Stream repos. To allow Patch Manager to process packages that aren't contained in an update notice, you must turn on the `EnableNonSecurity` flag in the patch baseline rules.

**Windows Server: Ensure connectivity to Windows Update Catalog or Windows Server Update Services (WSUS)**  
Windows Server managed nodes must be able to connect to the Windows Update Catalog or Windows Server Update Services (WSUS). Confirm that your nodes have connectivity to the [Microsoft Update Catalog](https://www.catalog.update.microsoft.com/home.aspx) through an internet gateway, NAT gateway, or NAT instance. If you are using WSUS, confirm that the node has connectivity to the WSUS server in your environment. For more information, see [Issue: managed node doesn't have access to Windows Update Catalog or WSUS](patch-manager-troubleshooting.md#patch-manager-troubleshooting-instance-access).

## S3 endpoint access


Whether your managed nodes operate in a private or public network, without access to the required AWS managed Amazon Simple Storage Service (Amazon S3) buckets, patching operations fail. For information about the S3 buckets your managed nodes must be able to access, see [SSM Agent communications with AWS managed S3 buckets](ssm-agent-technical-details.md#ssm-agent-minimum-s3-permissions) and [Improve the security of EC2 instances by using VPC endpoints for Systems Manager](setup-create-vpc.md).

## Permissions to install patches locally


On Windows Server and Linux operating systems, Patch Manager assumes the Administrator and root user accounts, respectively, to install patches.

On macOS, however, for Brew and Brew Cask updates, Homebrew doesn't support running its commands under the root user account. As a result, Patch Manager queries for and runs Homebrew commands as either the owner of the Homebrew directory or as a valid user belonging to the Homebrew directory's owner group. Therefore, to install patches, the owner of the Homebrew directory also needs recursive owner permissions for the `/usr/local` directory.

**Tip**  
The following command provides this permission for the specified user:  

```
sudo chown -R $USER:admin /usr/local
```

## Supported operating systems for Patch Manager


The Patch Manager tool might not support all the same operating systems versions that are supported by other Systems Manager tools. (For the full list of Systems Manager-supported operating systems, see [Supported operating systems for Systems Manager](operating-systems-and-machine-types.md#prereqs-operating-systems).) Therefore, ensure that the managed nodes you want to use with Patch Manager are running one of the operating systems listed in the following table.

**Note**  
Patch Manager relies on the patch repositories that are configured on a managed node, such as Windows Update Catalog and Windows Server Update Services for Windows, to retrieve available patches to install. Therefore, for end of life (EOL) operating system versions, if no new updates are available, Patch Manager might not be able to report on the new updates. This can be because no new updates are released by the Linux distribution maintainer, Microsoft, or Apple, or because the managed node does not have the proper license to access the new updates.  
We strongly recommend that you avoid using OS versions that have reached end of life (EOL). OS vendors, including AWS, typically don't provide security patches or other updates for versions that have reached EOL. Continuing to use an EOL system greatly increases the risk of being unable to apply upgrades, including security fixes, and of other operational problems. AWS doesn't test Systems Manager functionality on OS versions that have reached EOL.  
Patch Manager reports compliance status against the patches available on the managed node. Therefore, if an instance is running an EOL operating system and no updates are available, Patch Manager might report the node as `Compliant`, depending on the patch baselines configured for the patching operation.


| Operating system | Details | 
| --- | --- | 
|  Linux  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-prerequisites.html)  | 
| macOS |  *macOS is supported for Amazon EC2 instances only.* 13.0–13.7 (Ventura) 14*.x* (Sonoma) 15.*x* (Sequoia)  macOS OS updates Patch Manager doesn't support operating system (OS) updates or upgrades for macOS, such as from 13.1 to 13.2. To perform OS version updates on macOS, we recommend using Apple's built-in OS upgrade mechanisms. For more information, see [Device Management](https://developer.apple.com/documentation/devicemanagement) on the Apple Developer Documentation website.   Homebrew support Patch Manager requires Homebrew, the open-source software package management system, to be installed at either of the following default install locations:  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-prerequisites.html) Patching operations using Patch Manager fail to function correctly when Homebrew is not installed.  Region support macOS is not supported in all AWS Regions. For more information about Amazon EC2 support for macOS, see [Amazon EC2 Mac instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-mac-instances.html) in the *Amazon EC2 User Guide*.   macOS edge devices SSM Agent for AWS IoT Greengrass core devices is not supported on macOS. You can't use Patch Manager to patch macOS edge devices.   | 
|  Windows  |  Windows Server 2012 through Windows Server 2025, including R2 versions.  SSM Agent for AWS IoT Greengrass core devices is not supported on Windows 10. You can't use Patch Manager to patch Windows 10 edge devices.   Windows Server 2012 and 2012 R2 support Windows Server 2012 and 2012 R2 reached end of support on October 10, 2023. To use Patch Manager with these versions, we recommend also using Extended Security Updates (ESUs) from Microsoft. For more information, see [Windows Server 2012 and 2012 R2 reaching end of support](https://learn.microsoft.com/en-us/lifecycle/announcements/windows-server-2012-r2-end-of-support) on the Microsoft website.   | 

# How Patch Manager operations work


This section provides technical details that explain how Patch Manager, a tool in AWS Systems Manager, determines which patches to install and how it installs them on each supported operating system. For Linux operating systems, it also provides information about specifying a source repository, in a custom patch baseline, for patches other than the default configured on a managed node. This section also provides details about how patch baseline rules work on different distributions of the Linux operating system.

**Note**  
The information in the following topics applies no matter which method or type of configuration you are using for your patching operations:  
A patch policy configured in Quick Setup
A Host Management option configured in Quick Setup
A maintenance window to run a patch `Scan` or `Install` task
An on-demand **Patch now** operation

**Topics**
+ [

# How package release dates and update dates are calculated
](patch-manager-release-dates.md)
+ [

# How security patches are selected
](patch-manager-selecting-patches.md)
+ [

# How to specify an alternative patch source repository (Linux)
](patch-manager-alternative-source-repository.md)
+ [

# How patches are installed
](patch-manager-installing-patches.md)
+ [

# How patch baseline rules work on Linux-based systems
](patch-manager-linux-rules.md)
+ [

# Patching operation differences between Linux and Windows Server
](patch-manager-windows-and-linux-differences.md)

# How package release dates and update dates are calculated


**Important**  
The information on this page applies to the Amazon Linux 2 and Amazon Linux 2023 operating systems (OSs) for Amazon Elastic Compute Cloud (Amazon EC2) instances. The packages for these OS types are created and maintained by Amazon Web Services. How the manufacturers of other operating systems manage their packages and repositories affects how their release dates and update dates are calculated. For OSs besides Amazon Linux 2 and Amazon Linux 2023, such as Red Hat Enterprise Linux, refer to the manufacturer's documentation for information about how their packages are updated and maintained.

In the settings for [custom patch baselines](patch-manager-predefined-and-custom-patch-baselines.md#patch-manager-baselines-custom) you create, for most OS types, you can specify that patches are auto-approved for installation after a certain number of days. AWS provides several predefined patch baselines that include auto-approval delays of 7 days.

An *auto-approval delay* is the number of days to wait after a patch is released before the patch is automatically approved for patching. For example, suppose you create a rule using the `CriticalUpdates` classification and configure it with a 7-day auto-approval delay. In that case, a new critical patch with a release date or last update date of July 7 is automatically approved on July 14.
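
The delay arithmetic in this example can be sketched with GNU `date` (this assumes GNU coreutils; the dates are illustrative):

```
release_date="2025-07-07"   # release date or last update date of the patch
delay_days=7                # auto-approval delay from the baseline rule

# The patch is automatically approved once this date is reached.
approve_on=$(date -u -d "$release_date + $delay_days days" +%F)
echo "$approve_on"   # 2025-07-14
```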

To avoid unexpected results with auto-approval delays on Amazon Linux 2 and Amazon Linux 2023, it's important to understand how their release dates and update dates are calculated.

**Note**  
If an Amazon Linux 2 or Amazon Linux 2023 repository does not provide release date information for packages, Patch Manager uses the build time of the package as the date for auto-approval date specifications. If the build time of the package can't be determined, Patch Manager uses a default date of January 1st, 1970. This results in Patch Manager bypassing any auto-approval date specifications in patch baselines that are configured to approve patches for any date after January 1st, 1970.

In most cases, the auto-approval wait time before patches are installed is calculated from an `Update Date` value in `updateinfo.xml`, not a `Release Date` value. The following are important details about these date calculations: 
+ The `Release Date` is the date a *notice* is released. It does not mean the package is necessarily available in the associated repositories yet. 
+ The `Update Date` is the last date the notice was updated. An update to a notice can represent something as small as a text or description update. It does not mean the package was released from that date or is necessarily available in the associated repositories yet. 

  This means that a package could have an `Update Date` value of July 7 but not be available for installation until (for example) July 13. Suppose for this case that a patch baseline that specifies a 7-day auto-approval delay runs in an `Install` operation on July 14. Because the `Update Date` value is 7 days prior to the run date, the patches and updates in the package are installed on July 14. The installation happens even though only 1 day has passed since the package became available for actual installation.
+ A package containing operating system or application patches can be updated more than once after initial release.
+ A package can be released into the AWS managed repositories but then rolled back if issues are later discovered with it.

In some patching operations, these factors might not be important. For example, if a patch baseline is configured to install a patch with severity values of `Low` and `Medium`, and a classification of `Recommended`, any auto-approval delay might have little impact on your operations.

However, in cases where the timing of critical or high-severity patches is more important, you might want to exercise more control over when patches are installed. The recommended method for doing this is to use alternative patch source repositories instead of the default repositories for patching operations on a managed node. 

You can specify alternative patch source repositories when you create a custom patch baseline. In each custom patch baseline, you can specify patch source configurations for up to 20 versions of a supported Linux operating system. For more information, see [How to specify an alternative patch source repository (Linux)](patch-manager-alternative-source-repository.md).

# How security patches are selected


The primary focus of Patch Manager, a tool in AWS Systems Manager, is on installing security-related operating system updates on managed nodes. By default, Patch Manager doesn't install all available patches, but rather a smaller set of patches focused on security.

By default, Patch Manager doesn't replace a package that has been marked as obsolete in a package repository with any differently named replacement packages unless this replacement is required by the installation of another package update. Instead, for commands that update a package, Patch Manager only reports and installs missing updates for the package that is installed but obsolete. This is because replacing an obsolete package typically requires uninstalling an existing package and installing its replacement. Replacing the obsolete package could introduce breaking changes or additional functionality that you didn't intend.

This behavior is consistent with YUM and DNF's `update-minimal` command, which focuses on security updates rather than feature upgrades. For more information, see [How patches are installed](patch-manager-installing-patches.md).

**Note**  
When you use the `ApproveUntilDate` parameter or `ApproveAfterDays` parameter in a patch baseline rule, Patch Manager evaluates patch release dates using Coordinated Universal Time (UTC).   
For example, for `ApproveUntilDate`, if you specify a date such as `2025-11-16`, patches released between `2025-11-16T00:00:00Z` and `2025-11-16T23:59:59Z` will be approved.   
Be aware that patch release dates displayed by native package managers on your managed nodes may show different times based on your system's local timezone, but Patch Manager always uses the UTC datetime for approval calculations. This ensures consistency with patch release dates published on official security advisory websites.
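
The UTC day-window behavior described above can be sketched in shell (GNU `date` assumed; the release timestamp below is a hypothetical example, not a real patch):

```
approve_until="2025-11-16"            # ApproveUntilDate value in the baseline rule
release_ts="2025-11-16T23:10:00Z"     # hypothetical patch release time, in UTC

# Patches released up to 23:59:59Z on the ApproveUntilDate are approved.
window_end=$(date -u -d "$approve_until 23:59:59" +%s)
release_epoch=$(date -u -d "$release_ts" +%s)

if [ "$release_epoch" -le "$window_end" ]; then
  status="approved"
else
  status="not approved"
fi
echo "$status"   # approved
```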

For Linux-based operating system types that report a severity level for patches, Patch Manager uses the severity level reported by the software publisher for the update notice or individual patch. Patch Manager doesn't derive severity levels from third-party sources, such as the [Common Vulnerability Scoring System](https://www.first.org/cvss/) (CVSS), or from metrics released by the [National Vulnerability Database](https://nvd.nist.gov/vuln) (NVD).

**Note**  
On all Linux-based systems supported by Patch Manager, you can choose a different source repository configured for the managed node, typically to install nonsecurity updates. For information, see [How to specify an alternative patch source repository (Linux)](patch-manager-alternative-source-repository.md).

Choose from the following tabs to learn how Patch Manager selects security patches for your operating system.

------
#### [ Amazon Linux 2 and Amazon Linux 2023 ]

Preconfigured repositories are handled differently on Amazon Linux 2 than on Amazon Linux 2023.

On Amazon Linux 2, the Systems Manager patch baseline service uses preconfigured repositories on the managed node. There are usually two preconfigured repositories (repos) on a node:

**On Amazon Linux 2**
+ **Repo ID**: `amzn2-core/2/architecture`

  **Repo name**: `Amazon Linux 2 core repository`
+ **Repo ID**: `amzn2extra-docker/2/architecture`

  **Repo name**: `Amazon Extras repo for docker`

**Note**  
*architecture* can be `x86_64` or (for Graviton processors) `aarch64`.

When you create an Amazon Linux 2023 (AL2023) instance, it contains the updates that were available in the version of AL2023 and the specific AMI you selected. Your AL2023 instance doesn't automatically receive additional critical and important security updates at launch time. Instead, with the *deterministic upgrades through versioned repositories* feature supported for AL2023, which is turned on by default, you can apply updates based on a schedule that meets your specific needs. For more information, see [Deterministic upgrades through versioned repositories](https://docs.aws.amazon.com/linux/al2023/ug/deterministic-upgrades.html) in the *Amazon Linux 2023 User Guide*. 

On AL2023, the preconfigured repository is the following:
+ **Repo ID**: `amazonlinux`

  **Repo name**: Amazon Linux 2023 repository

On Amazon Linux 2023, the preconfigured repositories are tied to *locked versions* of package updates. When new Amazon Machine Images (AMIs) for AL2023 are released, they are locked to a specific version. For patch updates, Patch Manager retrieves the latest locked version of the patch update repository and then updates packages on the managed node based on the content of that locked version.

**Package managers**  
Amazon Linux 2 managed nodes use Yum as the package manager. Amazon Linux 2023 managed nodes use DNF as the package manager. 

Both package managers use the concept of an *update notice*, delivered as a file named `updateinfo.xml`. An update notice is simply a collection of packages that fix specific problems. Patch Manager classifies all packages in an update notice as Security. Because individual packages aren't assigned classifications or severity levels, Patch Manager assigns the attributes of an update notice to the related packages.

**Note**  
If you select the **Include non-security updates** check box in the **Create patch baseline** page, then packages that aren't classified in an `updateinfo.xml` file (or a package that contains a file without properly formatted Classification, Severity, and Date values) can be included in the prefiltered list of patches. However, in order for a patch to be applied, the patch must still meet the user-specified patch baseline rules.  
For more information about the **Include non-security updates** option, see [How patches are installed](patch-manager-installing-patches.md) and [How patch baseline rules work on Linux-based systems](patch-manager-linux-rules.md).

------
#### [  CentOS Stream ]

On CentOS Stream, the Systems Manager patch baseline service uses preconfigured repositories (repos) on the managed node. The following list provides examples for a fictitious CentOS 9.2 Amazon Machine Image (AMI):
+ **Repo ID**: `example-centos-9.2-base`

  **Repo name**: `Example CentOS-9.2 - Base`
+ **Repo ID**: `example-centos-9.2-extras` 

  **Repo name**: `Example CentOS-9.2 - Extras`
+ **Repo ID**: `example-centos-9.2-updates`

  **Repo name**: `Example CentOS-9.2 - Updates`
+ **Repo ID**: `example-centos-9.x-examplerepo`

  **Repo name**: `Example CentOS-9.x – Example Repo Packages`

**Note**  
All updates are downloaded from the remote repos configured on the managed node. Therefore, the node must have outbound access to the internet in order to connect to the repos so the patching can be performed.

CentOS Stream nodes use DNF as the package manager. The package manager uses the concept of an update notice. An update notice is simply a collection of packages that fix specific problems. 

However, CentOS Stream default repos aren't configured with an update notice. This means that Patch Manager doesn't detect packages on default CentOS Stream repos. To allow Patch Manager to process packages that aren't contained in an update notice, you must turn on the `EnableNonSecurity` flag in the patch baseline rules.

**Note**  
CentOS Stream update notices are supported. Repos with update notices can be downloaded after launch.

------
#### [ Debian Server ]

On Debian Server, the Systems Manager patch baseline service uses preconfigured repositories (repos) on the instance. These preconfigured repos are used to pull an updated list of available package upgrades. For this, Systems Manager performs the equivalent of a `sudo apt-get update` command. 

Packages are then filtered from `debian-security codename` repos. This means that on each version of Debian Server, Patch Manager only identifies upgrades that are part of the associated repo for that version, as follows:
+  Debian Server 11: `debian-security bullseye`
+ Debian Server 12: `debian-security bookworm`

------
#### [ Oracle Linux ]

On Oracle Linux, the Systems Manager patch baseline service uses preconfigured repositories (repos) on the managed node. There are usually two preconfigured repos on a node.

**Oracle Linux 7**:
+ **Repo ID**: `ol7_UEKR5/x86_64`

  **Repo name**: `Latest Unbreakable Enterprise Kernel Release 5 for Oracle Linux 7Server (x86_64)`
+ **Repo ID**: `ol7_latest/x86_64`

  **Repo name**: `Oracle Linux 7Server Latest (x86_64)` 

**Oracle Linux 8**:
+ **Repo ID**: `ol8_baseos_latest` 

  **Repo name**: `Oracle Linux 8 BaseOS Latest (x86_64)`
+ **Repo ID**: `ol8_appstream`

  **Repo name**: `Oracle Linux 8 Application Stream (x86_64)` 
+ **Repo ID**: `ol8_UEKR6`

  **Repo name**: `Latest Unbreakable Enterprise Kernel Release 6 for Oracle Linux 8 (x86_64)`

**Oracle Linux 9**:
+ **Repo ID**: `ol9_baseos_latest` 

  **Repo name**: `Oracle Linux 9 BaseOS Latest (x86_64)`
+ **Repo ID**: `ol9_appstream`

  **Repo name**: `Oracle Linux 9 Application Stream Packages(x86_64)` 
+ **Repo ID**: `ol9_UEKR7`

  **Repo name**: `Oracle Linux UEK Release 7 (x86_64)`

**Note**  
All updates are downloaded from the remote repos configured on the managed node. Therefore, the node must have outbound access to the internet in order to connect to the repos so the patching can be performed.

Oracle Linux managed nodes use Yum as the package manager, and Yum uses the concept of an update notice as a file named `updateinfo.xml`. An update notice is simply a collection of packages that fix specific problems. Individual packages aren't assigned classifications or severity levels. For this reason, Patch Manager assigns the attributes of an update notice to the related packages and installs packages based on the Classification filters specified in the patch baseline.

**Note**  
If you select the **Include non-security updates** check box in the **Create patch baseline** page, then packages that aren't classified in an `updateinfo.xml` file (or a package that contains a file without properly formatted Classification, Severity, and Date values) can be included in the prefiltered list of patches. However, in order for a patch to be applied, the patch must still meet the user-specified patch baseline rules.

------
#### [ AlmaLinux, RHEL, and Rocky Linux  ]

On AlmaLinux, Red Hat Enterprise Linux, and Rocky Linux, the Systems Manager patch baseline service uses preconfigured repositories (repos) on the managed node. There are usually three preconfigured repos on a node.

All updates are downloaded from the remote repos configured on the managed node. Therefore, the node must have outbound access to the internet in order to connect to the repos so the patching can be performed.

**Note**  
If you select the **Include non-security updates** check box in the **Create patch baseline** page, then packages that aren't classified in an `updateinfo.xml` file (or a package that contains a file without properly formatted Classification, Severity, and Date values) can be included in the prefiltered list of patches. However, in order for a patch to be applied, the patch must still meet the user-specified patch baseline rules.

Red Hat Enterprise Linux 7 managed nodes use Yum as the package manager. AlmaLinux, Red Hat Enterprise Linux 8, and Rocky Linux managed nodes use DNF as the package manager. Both package managers use the concept of an update notice as a file named `updateinfo.xml`. An update notice is simply a collection of packages that fix specific problems. Individual packages aren't assigned classifications or severity levels. For this reason, Patch Manager assigns the attributes of an update notice to the related packages and installs packages based on the Classification filters specified in the patch baseline.

RHEL 7  
The following repo IDs are associated with RHUI 2. RHUI 3 launched in December 2019 and introduced a different naming scheme for Yum repository IDs. Depending on the RHEL-7 AMI you create your managed nodes from, you might need to update your commands. For more information, see [Repository IDs for RHEL 7 in AWS Have Changed](https://access.redhat.com/articles/4599971) on the *Red Hat Customer Portal*.
+ **Repo ID**: `rhui-REGION-client-config-server-7/x86_64`

  **Repo name**: `Red Hat Update Infrastructure 2.0 Client Configuration Server 7`
+ **Repo ID**: `rhui-REGION-rhel-server-releases/7Server/x86_64`

  **Repo name**: `Red Hat Enterprise Linux Server 7 (RPMs)`
+ **Repo ID**: `rhui-REGION-rhel-server-rh-common/7Server/x86_64`

  **Repo name**: `Red Hat Enterprise Linux Server 7 RH Common (RPMs)`

AlmaLinux 8, RHEL 8, and Rocky Linux 8  
+ **Repo ID**: `rhel-8-appstream-rhui-rpms`

  **Repo name**: `Red Hat Enterprise Linux 8 for x86_64 - AppStream from RHUI (RPMs)`
+ **Repo ID**: `rhel-8-baseos-rhui-rpms`

  **Repo name**: `Red Hat Enterprise Linux 8 for x86_64 - BaseOS from RHUI (RPMs)`
+ **Repo ID**: `rhui-client-config-server-8`

  **Repo name**: `Red Hat Update Infrastructure 3 Client Configuration Server 8`

AlmaLinux 9, RHEL 9, and Rocky Linux 9  
+ **Repo ID**: `rhel-9-appstream-rhui-rpms`

  **Repo name**: `Red Hat Enterprise Linux 9 for x86_64 - AppStream from RHUI (RPMs)`
+ **Repo ID**: `rhel-9-baseos-rhui-rpms`

  **Repo name**: `Red Hat Enterprise Linux 9 for x86_64 - BaseOS from RHUI (RPMs)`
+ **Repo ID**: `rhui-client-config-server-9`

  **Repo name**: `Red Hat Enterprise Linux 9 Client Configuration`

------
#### [ SLES ]

On SUSE Linux Enterprise Server (SLES) managed nodes, the ZYPP library gets the list of available patches (a collection of packages) from the following locations:
+ List of repositories: `/etc/zypp/repos.d/*`
+ Package information: `/var/cache/zypp/raw/*`

SLES managed nodes use Zypper as the package manager, and Zypper uses the concept of a patch. A patch is simply a collection of packages that fix a specific problem. Patch Manager handles all packages referenced in a patch as security-related. Because individual packages aren't given classifications or severity, Patch Manager assigns the packages the attributes of the patch that they belong to.

------
#### [ Ubuntu Server ]

On Ubuntu Server, the Systems Manager patch baseline service uses preconfigured repositories (repos) on the managed node. These preconfigured repos are used to pull an updated list of available package upgrades. For this, Systems Manager performs the equivalent of a `sudo apt-get update` command. 

Packages are then filtered from `codename-security` repos, where the codename is unique to the release version, such as `trusty` for Ubuntu Server 14.04 LTS. Patch Manager only identifies upgrades that are part of these repos: 
+ Ubuntu Server 16.04 LTS: `xenial-security`
+ Ubuntu Server 18.04 LTS: `bionic-security`
+ Ubuntu Server 20.04 LTS: `focal-security`
+ Ubuntu Server 22.04 LTS: `jammy-security`
+ Ubuntu Server 24.04 LTS: `noble-security`
+ Ubuntu Server 25.04: `plucky-security`

------
#### [ Windows Server ]

On Microsoft Windows operating systems, Patch Manager retrieves the list of available updates that Microsoft publishes to Microsoft Update and that are automatically available to Windows Server Update Services (WSUS).

**Note**  
Patch Manager only makes available patches for Windows Server operating system versions that are supported for Patch Manager. For example, Patch Manager can't be used to patch Windows RT.

Patch Manager continually monitors for new updates in every AWS Region. The list of available updates is refreshed in each Region at least once per day. When the patch information from Microsoft is processed, Patch Manager removes updates that have been replaced by later updates from its patch list. Therefore, only the most recent update is displayed and made available for installation. For example, if `KB4012214` replaces `KB3135456`, only `KB4012214` is made available as an update in Patch Manager.

Similarly, Patch Manager can install only patches that are available on the managed node during the time of the patching operation. By default, Windows Server 2019 and Windows Server 2022 remove updates that are replaced by later updates. As a result, if you use the `ApproveUntilDate` parameter in a Windows Server patch baseline, but the date selected in the `ApproveUntilDate` parameter is *before* the date of the latest patch, then the following scenario occurs:
+ The superseded patch is removed from the node and therefore can't be installed using Patch Manager.
+ The latest, replacement patch is present on the node but not yet approved for installation per the `ApproveUntilDate` parameter's specified date. 

This means that the managed node is compliant in terms of Systems Manager operations, even though a critical patch from the previous month might not be installed. The same scenario can occur when you use the `ApproveAfterDays` parameter. Because of Microsoft's superseded patch behavior, setting a large value (generally greater than 30 days) can mean that patches for Windows Server are never installed, because the latest available patch from Microsoft is released before the number of days in `ApproveAfterDays` has elapsed. This behavior doesn't apply if you have modified your Windows Group Policy Object (GPO) settings to keep superseded patches available on your managed nodes.
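The `ApproveAfterDays` arithmetic above can be sketched with simple date math. This is an illustration only, with hypothetical dates; it isn't Patch Manager's implementation:

```
# Hypothetical example: a patch released on January 9 is evaluated on
# February 1 against a baseline with ApproveAfterDays=30.
release_epoch=$(date -u -d "2024-01-09" +%s)   # patch release date
scan_epoch=$(date -u -d "2024-02-01" +%s)      # date of the patching operation
approve_after_days=30

age_days=$(( (scan_epoch - release_epoch) / 86400 ))
if [ "$age_days" -ge "$approve_after_days" ]; then
  echo "approved"
else
  # The patch is 23 days old, so it isn't yet approved. If Microsoft
  # supersedes it before day 30, it may never be installed.
  echo "not yet approved"
fi
```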

**Note**  
In some cases, Microsoft releases patches for applications that don't specify an updated date and time. In these cases, an updated date and time of `01/01/1970` is supplied by default.

------

# How to specify an alternative patch source repository (Linux)


When you use the default repositories configured on a managed node for patching operations, Patch Manager, a tool in AWS Systems Manager, scans for or installs security-related patches. This is the default behavior for Patch Manager. For complete information about how Patch Manager selects and installs security patches, see [How security patches are selected](patch-manager-selecting-patches.md).

On Linux systems, however, you can also use Patch Manager to install patches that aren't related to security, or that are in a different source repository than the default one configured on the managed node. You can specify alternative patch source repositories when you create a custom patch baseline. In each custom patch baseline, you can specify patch source configurations for up to 20 versions of a supported Linux operating system. 

For example, suppose that your Ubuntu Server fleet includes managed nodes running more than one release version. In this case, you can specify alternative repositories for each version in the same custom patch baseline. For each version, you provide a name, specify the operating system version type (product), and provide a repository configuration. You can also specify a single alternative source repository that applies to all versions of a supported operating system.

**Note**  
Running a custom patch baseline that specifies alternative patch repositories for a managed node doesn't make them the new default repositories on the operating system. After the patching operation is complete, the repositories previously configured as the defaults for the node's operating system remain the defaults.

For a list of example scenarios for using this option, see [Sample uses for alternative patch source repositories](#patch-manager-alternative-source-repository-examples) later in this topic.

For information about default and custom patch baselines, see [Predefined and custom patch baselines](patch-manager-predefined-and-custom-patch-baselines.md).

**Example: Using the console**  
To specify alternative patch source repositories when you're working in the Systems Manager console, use the **Patch sources** section on the **Create patch baseline** page. For information about using the **Patch sources** options, see [Creating a custom patch baseline for Linux](patch-manager-create-a-patch-baseline-for-linux.md).

**Example: Using the AWS CLI**  
For an example of using the `--sources` option with the AWS Command Line Interface (AWS CLI), see [Create a patch baseline with custom repositories for different OS versions](patch-manager-cli-commands.md#patch-manager-cli-commands-create-patch-baseline-mult-sources).

**Topics**
+ [

## Important considerations for alternative repositories
](#alt-source-repository-important)
+ [

## Sample uses for alternative patch source repositories
](#patch-manager-alternative-source-repository-examples)

## Important considerations for alternative repositories


Keep in mind the following points as you plan your patching strategy using alternative patch repositories.

**Enforce repo update verifications (YUM and DNF)**  
A default configuration for a package manager on a Linux distribution might be set to skip an unreachable package repository if a connection to the repository can't be established. To enforce repository update verification, add `skip_if_unavailable=False` to the repository configuration.

For more information about the `skip_if_unavailable` option, see [Connectivity to the patch source](patch-manager-prerequisites.md#source-connectivity).
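For example, a YUM or DNF repository definition with verification enforced might look like the following. The repository name, file path, and URL are hypothetical placeholders:

```
# /etc/yum.repos.d/example-corp.repo (hypothetical repository file)
[example-corp]
name=Example Corp internal packages
baseurl=https://repo.example.com/el/8/x86_64/
enabled=1
gpgcheck=1
# Fail the patching operation if this repository can't be reached,
# instead of silently skipping it.
skip_if_unavailable=False
```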

**Only specified repositories are used for patching**  
Specifying alternative repositories doesn't mean specifying *additional* repositories. You can choose to specify repositories other than those configured as defaults on a managed node. However, you must also specify the default repositories as part of the alternative patch source configuration if you want their updates to be applied.

For example, on Amazon Linux 2 managed nodes, the default repositories are `amzn2-core` and `amzn2extra-docker`. If you want to include the Extra Packages for Enterprise Linux (EPEL) repository in your patching operations, you must specify all three repositories as alternative repositories.
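As a sketch, those three repositories could be declared together with the AWS CLI `--sources` option roughly as follows. The baseline name is hypothetical, and the `Configuration` strings are abbreviated placeholders, not complete repository definitions; see the linked CLI examples for full values:

```
aws ssm create-patch-baseline \
    --name "AL2-Defaults-Plus-EPEL" \
    --operating-system "AMAZON_LINUX_2" \
    --sources '[
        {"Name": "amzn2-core",        "Products": ["AmazonLinux2"], "Configuration": "[amzn2-core]\n..."},
        {"Name": "amzn2extra-docker", "Products": ["AmazonLinux2"], "Configuration": "[amzn2extra-docker]\n..."},
        {"Name": "epel",              "Products": ["AmazonLinux2"], "Configuration": "[epel]\n..."}
    ]'
```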


**Patching behavior for YUM-based distributions depends on the updateinfo.xml manifest**  
When you specify alternative patch repositories for YUM-based distributions, such as Amazon Linux 2 or Red Hat Enterprise Linux, patching behavior depends on whether the repository includes an update manifest in the form of a complete and correctly formatted `updateinfo.xml` file. This file specifies the release date, classifications, and severities of the various packages. Any of the following affects the patching behavior:
+ If you filter on **Classification** and **Severity**, but they aren't specified in `updateinfo.xml`, the package won't be included by the filter. This also means that packages without an `updateinfo.xml` file won't be included in patching.
+ If you filter on **ApprovalAfterDays**, but the package release date isn't in Unix Epoch format (or has no release date specified), the package won't be included by the filter.
+ There is an exception if you select the **Include non-security updates** check box on the **Create patch baseline** page. In this case, packages without an `updateinfo.xml` file, or whose file lacks properly formatted **Classification**, **Severity**, and **Date** values, *are* included in the prefiltered list of patches. (They must still meet the other patch baseline rule requirements in order to be installed.)
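For reference, the fields these filters read from `updateinfo.xml` look roughly like the following. This is a minimal, hypothetical entry; real manifests contain additional elements, and the update `id`, package names, and dates here are placeholders:

```
<updates>
  <update type="security">                  <!-- classification -->
    <id>EXAMPLE-2024-001</id>
    <title>Example security update for examplepkg</title>
    <severity>Important</severity>
    <!-- Release date; ApprovalAfterDays filtering requires Unix Epoch format -->
    <issued date="1704067200"/>
    <pkglist>
      <collection>
        <package name="examplepkg" version="1.2.3" release="1.amzn2" arch="x86_64"/>
      </collection>
    </pkglist>
  </update>
</updates>
```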

## Sample uses for alternative patch source repositories


**Example 1 – Nonsecurity Updates for Ubuntu Server**  
You're already using Patch Manager to install security patches on a fleet of Ubuntu Server managed nodes using the AWS-provided predefined patch baseline `AWS-UbuntuDefaultPatchBaseline`. You can create a new patch baseline that is based on this default, but specify in the approval rules that you want nonsecurity-related updates that are part of the default distribution to be installed as well. When this patch baseline is run against your nodes, patches for both security and nonsecurity issues are applied. You can also choose to approve nonsecurity patches in the patch exceptions you specify for a baseline.

**Example 2 - Personal Package Archives (PPA) for Ubuntu Server**  
Your Ubuntu Server managed nodes are running software that is distributed through a [Personal Package Archives (PPA) for Ubuntu](https://launchpad.net/ubuntu/+ppas). In this case, you create a patch baseline that specifies a PPA repository that you have configured on the managed node as the source repository for the patching operation. Then use Run Command to run the patch baseline document on the nodes.

**Example 3 – Internal Corporate Applications on supported Amazon Linux versions**  
You need to run some applications required for industry regulatory compliance on your Amazon Linux managed nodes. You can configure a repository for these applications on the nodes, use YUM to initially install the applications, and then update or create a new patch baseline to include this new corporate repository. After this, you can use Run Command to run the `AWS-RunPatchBaseline` document with the `Scan` option to see whether the corporate package is listed among the installed packages and is up to date on the managed node. If it isn't up to date, you can run the document again using the `Install` option to update the applications.

# How patches are installed


Patch Manager, a tool in AWS Systems Manager, uses the operating system's built-in package manager to install updates on managed nodes. For example, it uses the Windows Update API on Windows Server and `DNF` on Amazon Linux 2023. Patch Manager respects existing package manager and repository configurations on the nodes, including settings such as repository status, mirror URLs, GPG verification, and options like `skip_if_unavailable`.

Patch Manager doesn't install a new package that replaces an obsolete package that's currently installed. (Exceptions: the new package is a dependency of another package update being installed, or the new package has the same name as the obsolete package.) Instead, Patch Manager reports on and installs available updates to installed packages. This approach helps prevent unexpected changes to your system functionality that might occur when one package replaces another.

If you need to uninstall a package that has been made obsolete and install its replacement, you might need to use a custom script or use package manager commands outside of Patch Manager's standard operations.
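If you do retire an obsoleted package manually, a minimal sketch outside of Patch Manager might look like the following. The package names are hypothetical; on Amazon Linux 2, substitute `yum` for `dnf`:

```
# Remove the obsolete package and install its replacement explicitly,
# for example via Run Command (AWS-RunShellScript) or directly on the node.
sudo dnf remove -y old-examplepkg
sudo dnf install -y new-examplepkg

# Alternatively, let dnf perform obsoletion-aware upgrades in one step:
sudo dnf upgrade -y --obsoletes
```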

Choose from the following tabs to learn how Patch Manager installs patches on an operating system.

------
#### [ Amazon Linux 2 and Amazon Linux 2023 ]

On Amazon Linux 2 and Amazon Linux 2023 managed nodes, the patch installation workflow is as follows:

1. If a list of patches is specified using an https URL or an Amazon Simple Storage Service (Amazon S3) path-style URL using the `InstallOverrideList` parameter for the `AWS-RunPatchBaseline` or `AWS-RunPatchBaselineAssociation` documents, the listed patches are installed and steps 2-7 are skipped.

1. Apply [GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) as specified in the patch baseline, keeping only the qualified packages for further processing.

1. Apply [ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) as specified in the patch baseline. Each approval rule can define a package as approved.

   Approval rules, however, are also subject to whether the **Include nonsecurity updates** check box was selected when creating or last updating a patch baseline.

   If nonsecurity updates are excluded, an implicit rule is applied in order to select only packages with upgrades in security repos. For each package, the candidate version of the package (which is typically the latest version) must be part of a security repo. 

   If nonsecurity updates are included, patches from other repositories are considered as well.

1. Apply [ApprovedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovedPatches) as specified in the patch baseline. The approved patches are approved for update even if they're discarded by [GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) or if no approval rule specified in [ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) grants them approval.

1. Apply [RejectedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-RejectedPatches) as specified in the patch baseline. The rejected patches are removed from the list of approved patches and won't be applied.

1. If multiple versions of a patch are approved, the latest version is applied.

1. The YUM update API (Amazon Linux 2) or the DNF update API (Amazon Linux 2023) is applied to approved patches as follows:
   + For predefined default patch baselines provided by AWS, only patches specified in `updateinfo.xml` are applied (security updates only). This is because the **Include nonsecurity updates** check box is not selected. The predefined baselines are equivalent to a custom baseline with the following:
     + The **Include nonsecurity updates** check box is not selected
     + A SEVERITY list of `[Critical, Important]`
     + A CLASSIFICATION list of `[Security, Bugfix]`

     For Amazon Linux 2, the equivalent yum command for this workflow is:

     ```
     sudo yum update-minimal --sec-severity=Critical,Important --bugfix -y
     ```

     For Amazon Linux 2023, the equivalent dnf command for this workflow is:

     ```
     sudo dnf upgrade-minimal --sec-severity=Critical --sec-severity=Important --bugfix -y
     ```

     If the **Include nonsecurity updates** check box is selected, patches in `updateinfo.xml` and those not in `updateinfo.xml` are all applied (security and nonsecurity updates).

     For Amazon Linux 2, if a baseline with **Include nonsecurity updates** is selected, has a SEVERITY list of `[Critical, Important]` and a CLASSIFICATION list of `[Security, Bugfix]`, the equivalent yum command is:

     ```
     sudo yum update --security --sec-severity=Critical,Important --bugfix -y
     ```

     For Amazon Linux 2023, the equivalent dnf command is:

     ```
     sudo dnf upgrade --security --sec-severity=Critical --sec-severity=Important --bugfix -y
     ```
**Note**  
New packages that replace now-obsolete packages with different names are installed if you run these `yum` or `dnf` commands outside of Patch Manager. However, they are *not* installed by the equivalent Patch Manager operations.
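To preview on a node which updates the security-only workflow would consider, you can query the update manifest directly with standard `yum` and `dnf` subcommands. This is a sketch; the output depends on the node's configured repositories:

```
# Amazon Linux 2: list pending security updates and their severities
yum updateinfo list security

# Amazon Linux 2023: the same query with dnf, plus a severity summary
dnf updateinfo list security
dnf updateinfo summary
```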

**Additional patching details for Amazon Linux 2023**  
Support for severity level 'None'  
Amazon Linux 2023 also supports the patch severity level `None`, which is recognized by the DNF package manager.   
Support for severity level 'Medium'  
For Amazon Linux 2023, a patch severity level of `Medium` is equivalent to a severity of `Moderate` that might be defined in some external repositories. If you include `Medium` severity patches in the patch baseline, `Moderate` severity patches from external repositories are also installed on the instances.  
When you query for compliance data using the API action [DescribeInstancePatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribeInstancePatches.html), filtering for the severity level `Medium` reports patches with severity levels of both `Medium` and `Moderate`.  
Transitive dependency handling for Amazon Linux 2023  
For Amazon Linux 2023, Patch Manager might install different versions of transitive dependencies than the equivalent `dnf` commands install. Transitive dependencies are packages that are automatically installed to satisfy the requirements of other packages (dependencies of dependencies).   
For example, `dnf upgrade-minimal --security` installs the *minimal* versions of transitive dependencies needed to resolve known security issues, while Patch Manager installs the *latest available versions* of the same transitive dependencies.

1. The managed node is rebooted if any updates were installed. (Exception: If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaseline` document, the managed node isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](patch-manager-aws-runpatchbaseline.md#patch-manager-aws-runpatchbaseline-parameters-norebootoption).)

**Note**  
A default configuration for a package manager on a Linux distribution might be set to skip an unreachable package repository without error. In such cases, the related patching operation proceeds without installing updates from the repository and concludes with success. To enforce repository updates, add `skip_if_unavailable=False` to the repository configuration.  
For more information about the `skip_if_unavailable` option, see [Connectivity to the patch source](patch-manager-prerequisites.md#source-connectivity).

------
#### [ CentOS Stream ]

On CentOS Stream managed nodes, the patch installation workflow is as follows:

1. If a list of patches is specified using an https URL or an Amazon Simple Storage Service (Amazon S3) path-style URL using the `InstallOverrideList` parameter for the `AWS-RunPatchBaseline` or `AWS-RunPatchBaselineAssociation` documents, the listed patches are installed and steps 2-7 are skipped.

1. Apply [GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) as specified in the patch baseline, keeping only the qualified packages for further processing.

1. Apply [ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) as specified in the patch baseline. Each approval rule can define a package as approved.

   Approval rules, however, are also subject to whether the **Include nonsecurity updates** check box was selected when creating or last updating a patch baseline.

   If nonsecurity updates are excluded, an implicit rule is applied in order to select only packages with upgrades in security repos. For each package, the candidate version of the package (which is typically the latest version) must be part of a security repo. 

   If nonsecurity updates are included, patches from other repositories are considered as well.

1. Apply [ApprovedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovedPatches) as specified in the patch baseline. The approved patches are approved for update even if they're discarded by [GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) or if no approval rule specified in [ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) grants them approval.

1. Apply [RejectedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-RejectedPatches) as specified in the patch baseline. The rejected patches are removed from the list of approved patches and won't be applied.

1. If multiple versions of a patch are approved, the latest version is applied.

1. The DNF update on CentOS Stream is applied to approved patches.
**Note**  
For CentOS Stream, Patch Manager might install different versions of transitive dependencies than the equivalent `dnf` commands install. Transitive dependencies are packages that are automatically installed to satisfy the requirements of other packages (dependencies of dependencies).   
For example, `dnf upgrade-minimal --security` installs the *minimal* versions of transitive dependencies needed to resolve known security issues, while Patch Manager installs the *latest available versions* of the same transitive dependencies.

1. The managed node is rebooted if any updates were installed. (Exception: If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaseline` document, the managed node isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](patch-manager-aws-runpatchbaseline.md#patch-manager-aws-runpatchbaseline-parameters-norebootoption).)

------
#### [ Debian Server ]

On Debian Server instances, the patch installation workflow is as follows:

1. If a list of patches is specified using an https URL or an Amazon Simple Storage Service (Amazon S3) path-style URL using the `InstallOverrideList` parameter for the `AWS-RunPatchBaseline` or `AWS-RunPatchBaselineAssociation` documents, the listed patches are installed and steps 2-7 are skipped.

1. If an update is available for `python3-apt` (a Python library interface to `libapt`), it is upgraded to the latest version. (This nonsecurity package is upgraded even if you did not select the **Include nonsecurity updates** option.)

1. Apply [GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) as specified in the patch baseline, keeping only the qualified packages for further processing.

1. Apply [ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) as specified in the patch baseline. Each approval rule can define a package as approved.
**Note**  
Because it isn't possible to reliably determine the release dates of update packages for Debian Server, the auto-approval options aren't supported for this operating system.

   Approval rules, however, are also subject to whether the **Include nonsecurity updates** check box was selected when creating or last updating a patch baseline.

   If nonsecurity updates are excluded, an implicit rule is applied in order to select only packages with upgrades in security repos. For each package, the candidate version of the package (which is typically the latest version) must be part of a security repo. 

   If nonsecurity updates are included, patches from other repositories are considered as well.
**Note**  
For Debian Server, patch candidate versions are limited to patches included in `debian-security`.

1. Apply [ApprovedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovedPatches) as specified in the patch baseline. The approved patches are approved for update even if they're discarded by [GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) or if no approval rule specified in [ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) grants them approval.

1. Apply [RejectedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-RejectedPatches) as specified in the patch baseline. The rejected patches are removed from the list of approved patches and won't be applied.

1. The APT library is used to upgrade packages.
**Note**  
Patch Manager does not support using the APT `Pin-Priority` option to assign priorities to packages. Patch Manager aggregates available updates from all enabled repositories and selects the most recent update that matches the baseline for each installed package.

1. The managed node is rebooted if any updates were installed. (Exception: If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaseline` document, the managed node isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](patch-manager-aws-runpatchbaseline.md#patch-manager-aws-runpatchbaseline-parameters-norebootoption).)
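Because `Pin-Priority` is ignored, the update Patch Manager selects can differ from the one APT itself would choose. To see the installed and candidate versions APT reports for a package on a node, you can run the following standard command (the package name is a placeholder):

```
# Show the installed version, the candidate version, and each repository's
# pin priority. Patch Manager ignores the priorities and selects the most
# recent version matching the baseline across all enabled repos.
apt-cache policy examplepkg
```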

------
#### [ macOS ]

On macOS managed nodes, the patch installation workflow is as follows:

1. The `/Library/Receipts/InstallHistory.plist` property list is a record of software that has been installed and upgraded using the `softwareupdate` and `installer` package managers. Using the `pkgutil` command line tool (for `installer`) and the `softwareupdate` package manager, CLI commands are run to parse this list. 

   For `installer`, the response to the CLI commands includes `package name`, `version`, `volume`, `location`, and `install-time` details, but only the `package name` and `version` are used by Patch Manager.

   For `softwareupdate`, the response to the CLI commands includes the package name (`display name`), `version`, and `date`, but only the package name and version are used by Patch Manager.

   For Brew and Brew Cask, Homebrew doesn't support running its commands as the root user. As a result, Patch Manager queries for and runs Homebrew commands as either the owner of the Homebrew directory or as a valid user belonging to the Homebrew directory's owner group. These commands are similar to those for `softwareupdate` and `installer`: they are run through a Python subprocess to gather package data, and the output is parsed to identify package names and versions.

1. Apply [GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) as specified in the patch baseline, keeping only the qualified packages for further processing.

1. Apply [ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) as specified in the patch baseline. Each approval rule can define a package as approved.

1. Apply [ApprovedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovedPatches) as specified in the patch baseline. The approved patches are approved for update even if they're discarded by [GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) or if no approval rule specified in [ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) grants them approval.

1. Apply [RejectedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-RejectedPatches) as specified in the patch baseline. The rejected patches are removed from the list of approved patches and won't be applied.

1. If multiple versions of a patch are approved, the latest version is applied.

1. The appropriate package manager CLI is invoked on the managed node to process approved patches as follows:
**Note**  
`installer` lacks the functionality to check for and install updates. Therefore, for `installer`, Patch Manager only reports which packages are installed. As a result, `installer` packages are never reported as `Missing`.
   + For predefined default patch baselines provided by AWS, and for custom patch baselines where the **Include non-security updates** check box is *not* selected, only security updates are applied.
   + For custom patch baselines where the **Include non-security updates** check box *is* selected, both security and nonsecurity updates are applied.

1. The managed node is rebooted if any updates were installed. (Exception: If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaseline` document, the managed node isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](patch-manager-aws-runpatchbaseline.md#patch-manager-aws-runpatchbaseline-parameters-norebootoption).)
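The non-root Homebrew pattern described in step 1 can be sketched as follows. This is illustrative, not Patch Manager's exact implementation; the `stat` format string shown is the macOS/BSD form:

```
# Determine the owner of the Homebrew prefix and run brew as that user,
# because Homebrew refuses to run its commands under root.
BREW_PREFIX="$(brew --prefix)"
BREW_OWNER="$(stat -f '%Su' "$BREW_PREFIX")"

# List outdated formulae and casks as the directory owner.
sudo -u "$BREW_OWNER" brew outdated
sudo -u "$BREW_OWNER" brew outdated --cask
```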

------
#### [ Oracle Linux ]

On Oracle Linux managed nodes, the patch installation workflow is as follows:

1. If a list of patches is specified using an HTTPS URL or an Amazon Simple Storage Service (Amazon S3) path-style URL in the `InstallOverrideList` parameter for the `AWS-RunPatchBaseline` or `AWS-RunPatchBaselineAssociation` documents, the listed patches are installed and steps 2-7 are skipped.

1. Apply [GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) as specified in the patch baseline, keeping only the qualified packages for further processing.

1. Apply [ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) as specified in the patch baseline. Each approval rule can define a package as approved.

   Approval rules, however, are also subject to whether the **Include nonsecurity updates** check box was selected when creating or last updating a patch baseline.

   If nonsecurity updates are excluded, an implicit rule is applied in order to select only packages with upgrades in security repos. For each package, the candidate version of the package (which is typically the latest version) must be part of a security repo. 

   If nonsecurity updates are included, patches from other repositories are considered as well.

1. Apply [ApprovedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovedPatches) as specified in the patch baseline. These patches are approved for update even if they're discarded by [GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) or if no approval rule specified in [ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) grants them approval.

1. Apply [RejectedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-RejectedPatches) as specified in the patch baseline. The rejected patches are removed from the list of approved patches and won't be applied.

1. If multiple versions of a patch are approved, the latest version is applied.

1. On version 7 managed nodes, the YUM update API is applied to approved patches as follows:
   + For predefined default patch baselines provided by AWS, and for custom patch baselines where the **Include non-security updates** check box is *not* selected, only patches specified in `updateinfo.xml` are applied (security updates only).

     The equivalent yum command for this workflow is:

     ```
     sudo yum update-minimal --sec-severity=Important,Moderate --bugfix -y
     ```
   + For custom patch baselines where the **Include non-security updates** check box *is* selected, both patches in `updateinfo.xml` and those not in `updateinfo.xml` are applied (security and nonsecurity updates).

     The equivalent yum command for this workflow is:

     ```
     sudo yum update --security --bugfix -y
     ```

   On version 8 and 9 managed nodes, the DNF update API is applied to approved patches as follows:
   + For predefined default patch baselines provided by AWS, and for custom patch baselines where the **Include non-security updates** check box is *not* selected, only patches specified in `updateinfo.xml` are applied (security updates only).

     The equivalent dnf command for this workflow is:

     ```
     sudo dnf upgrade-minimal --security --sec-severity=Moderate --sec-severity=Important
     ```
**Note**  
For Oracle Linux, Patch Manager might install different versions of transitive dependencies than the equivalent `dnf` commands install. Transitive dependencies are packages that are automatically installed to satisfy the requirements of other packages (dependencies of dependencies).   
For example, `dnf upgrade-minimal --security` installs the *minimal* versions of transitive dependencies needed to resolve known security issues, while Patch Manager installs the *latest available versions* of the same transitive dependencies.
   + For custom patch baselines where the **Include non-security updates** check box *is* selected, both patches in `updateinfo.xml` and those not in `updateinfo.xml` are applied (security and nonsecurity updates).

     The equivalent dnf command for this workflow is:

     ```
     sudo dnf upgrade --security --bugfix
     ```
**Note**  
New packages that replace now-obsolete packages with different names are installed if you run these `yum` or `dnf` commands outside of Patch Manager. However, they are *not* installed by the equivalent Patch Manager operations.

1. The managed node is rebooted if any updates were installed. (Exception: If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaseline` document, the managed node isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](patch-manager-aws-runpatchbaseline.md#patch-manager-aws-runpatchbaseline-parameters-norebootoption).)

**Note**  
A default configuration for a package manager on a Linux distribution might be set to skip an unreachable package repository without error. In such cases, the related patching operation proceeds without installing updates from the repository and concludes with success. To enforce repository updates, add `skip_if_unavailable=False` to the repository configuration.  
For more information about the `skip_if_unavailable` option, see [Connectivity to the patch source](patch-manager-prerequisites.md#source-connectivity).
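
For example, a yum or dnf repo definition that fails the patching operation instead of silently skipping an unreachable repo might look like the following sketch (the repo ID and URL are placeholders):

```
[example-security]
name=Example security repo (placeholder)
baseurl=https://repo.example.com/security/$basearch/
enabled=1
gpgcheck=1
skip_if_unavailable=False
```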

------
#### [ AlmaLinux, RHEL, and Rocky Linux  ]

On AlmaLinux, Red Hat Enterprise Linux, and Rocky Linux managed nodes, the patch installation workflow is as follows:

1. If a list of patches is specified using an HTTPS URL or an Amazon Simple Storage Service (Amazon S3) path-style URL in the `InstallOverrideList` parameter for the `AWS-RunPatchBaseline` or `AWS-RunPatchBaselineAssociation` documents, the listed patches are installed and steps 2-7 are skipped.

1. Apply [GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) as specified in the patch baseline, keeping only the qualified packages for further processing.

1. Apply [ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) as specified in the patch baseline. Each approval rule can define a package as approved.

   Approval rules, however, are also subject to whether the **Include nonsecurity updates** check box was selected when creating or last updating a patch baseline.

   If nonsecurity updates are excluded, an implicit rule is applied in order to select only packages with upgrades in security repos. For each package, the candidate version of the package (which is typically the latest version) must be part of a security repo. 

   If nonsecurity updates are included, patches from other repositories are considered as well.

1. Apply [ApprovedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovedPatches) as specified in the patch baseline. These patches are approved for update even if they're discarded by [GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) or if no approval rule specified in [ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) grants them approval.

1. Apply [RejectedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-RejectedPatches) as specified in the patch baseline. The rejected patches are removed from the list of approved patches and won't be applied.

1. If multiple versions of a patch are approved, the latest version is applied.

1. The YUM update API (on RHEL 7) or the DNF update API (on AlmaLinux 8 and 9, RHEL 8, 9, and 10, and Rocky Linux 8 and 9) is applied to approved patches according to the following rules:

**Scenario 1: Non-security updates excluded**
   + **Applies to**: Predefined default patch baselines provided by AWS and custom patch baselines.
   + **Include non-security updates** check box: *Not* selected.
   + **Patches applied**: Patches specified in `updateinfo.xml` (security updates only) are applied *only* if they both match the patch baseline configuration and are found in the configured repos.

     In some cases, a patch specified in `updateinfo.xml` might no longer be available in a configured repo. Configured repos usually contain only the latest version of a patch, which is a cumulative roll-up of all prior updates; if that latest version doesn't match the patch baseline rules, it's omitted from the patching operation.
   + **Commands**: For RHEL 7, the equivalent yum command for this workflow is: 

     ```
     sudo yum update-minimal --sec-severity=Critical,Important --bugfix -y
     ```

     For AlmaLinux 8 and 9, RHEL 8, 9, and 10, and Rocky Linux 8 and 9, the equivalent dnf commands for this workflow are:

     ```
     sudo dnf update-minimal --sec-severity=Critical --bugfix -y ; \
     sudo dnf update-minimal --sec-severity=Important --bugfix -y
     ```
**Note**  
For AlmaLinux, RHEL, and Rocky Linux, Patch Manager might install different versions of transitive dependencies than the equivalent `dnf` commands install. Transitive dependencies are packages that are automatically installed to satisfy the requirements of other packages (dependencies of dependencies).   
For example, `dnf upgrade-minimal --security` installs the *minimal* versions of transitive dependencies needed to resolve known security issues, while Patch Manager installs the *latest available versions* of the same transitive dependencies.

**Scenario 2: Non-security updates included**
   + **Applies to**: Custom patch baselines.
   + **Include non-security updates** check box: Selected.
   + **Patches applied**: Patches in `updateinfo.xml` *and* those not in `updateinfo.xml` are applied (security and nonsecurity updates).
   + **Commands**: For RHEL 7, the equivalent yum command for this workflow is:

     ```
     sudo yum update --security --bugfix -y
     ```

     For AlmaLinux 8 and 9, RHEL 8, 9, and 10, and Rocky Linux 8 and 9, the equivalent dnf command for this workflow is:

     ```
     sudo dnf update --security --bugfix -y
     ```
**Note**  
New packages that replace now-obsolete packages with different names are installed if you run these `yum` or `dnf` commands outside of Patch Manager. However, they are *not* installed by the equivalent Patch Manager operations.

1. The managed node is rebooted if any updates were installed. (Exception: If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaseline` document, the managed node isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](patch-manager-aws-runpatchbaseline.md#patch-manager-aws-runpatchbaseline-parameters-norebootoption).)

**Note**  
A default configuration for a package manager on a Linux distribution might be set to skip an unreachable package repository without error. In such cases, the related patching operation proceeds without installing updates from the repository and concludes with success. To enforce repository updates, add `skip_if_unavailable=False` to the repository configuration.  
For more information about the `skip_if_unavailable` option, see [Connectivity to the patch source](patch-manager-prerequisites.md#source-connectivity).
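
The approval rules discussed above are configured on the patch baseline itself. The following is a hedged sketch of defining such a rule with the AWS CLI; the baseline name, filter values, and the 7-day auto-approval window are illustrative only:

```
# Hypothetical approval rule: auto-approve Important security updates
# 7 days after release. The JSON shape follows the ApprovalRules
# structure of the CreatePatchBaseline API.
cat > approval-rules.json <<'EOF'
{
  "PatchRules": [
    {
      "PatchFilterGroup": {
        "PatchFilters": [
          {"Key": "CLASSIFICATION", "Values": ["Security"]},
          {"Key": "SEVERITY", "Values": ["Important"]}
        ]
      },
      "ApproveAfterDays": 7
    }
  ]
}
EOF
# Validate the JSON locally before using it.
python3 -m json.tool approval-rules.json > /dev/null && echo "rules OK"
# The baseline would then be created with something like:
#   aws ssm create-patch-baseline --name "example-rhel-baseline" \
#     --operating-system REDHAT_ENTERPRISE_LINUX \
#     --approval-rules file://approval-rules.json
```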

------
#### [ SLES ]

On SUSE Linux Enterprise Server (SLES) managed nodes, the patch installation workflow is as follows:

1. If a list of patches is specified using an HTTPS URL or an Amazon Simple Storage Service (Amazon S3) path-style URL in the `InstallOverrideList` parameter for the `AWS-RunPatchBaseline` or `AWS-RunPatchBaselineAssociation` documents, the listed patches are installed and steps 2-7 are skipped.

1. Apply [GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) as specified in the patch baseline, keeping only the qualified packages for further processing.

1. Apply [ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) as specified in the patch baseline. Each approval rule can define a package as approved.

   Approval rules, however, are also subject to whether the **Include nonsecurity updates** check box was selected when creating or last updating a patch baseline.

   If nonsecurity updates are excluded, an implicit rule is applied in order to select only packages with upgrades in security repos. For each package, the candidate version of the package (which is typically the latest version) must be part of a security repo. 

   If nonsecurity updates are included, patches from other repositories are considered as well.

1. Apply [ApprovedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovedPatches) as specified in the patch baseline. These patches are approved for update even if they're discarded by [GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) or if no approval rule specified in [ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) grants them approval.

1. Apply [RejectedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-RejectedPatches) as specified in the patch baseline. The rejected patches are removed from the list of approved patches and won't be applied.

1. If multiple versions of a patch are approved, the latest version is applied.

1. The Zypper update API is applied to approved patches.

1. The managed node is rebooted if any updates were installed. (Exception: If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaseline` document, the managed node isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](patch-manager-aws-runpatchbaseline.md#patch-manager-aws-runpatchbaseline-parameters-norebootoption).)
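
The `InstallOverrideList` file referenced in step 1 is a YAML list of explicit patch IDs to apply for that run, overriding the baseline's rules. The following is a hedged sketch of the shape such a file can take; the package IDs are made up, and you should confirm the exact format accepted for your operating system in the `AWS-RunPatchBaseline` documentation:

```
# Hypothetical override list: only these patches are considered for the run.
cat <<'EOF' > override-list.yaml
patches:
  - id: "kernel-default.x86_64:4.12.14-122.110.1"
  - id: "libopenssl1_1.x86_64:1.1.1l-150400.7.22.1"
EOF
# The file is then referenced by HTTPS or S3 path-style URL, for example:
#   InstallOverrideList=https://s3.amazonaws.com/amzn-s3-demo-bucket/override-list.yaml
grep -c '^  - id:' override-list.yaml
```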

------
#### [ Ubuntu Server ]

On Ubuntu Server managed nodes, the patch installation workflow is as follows:

1. If a list of patches is specified using an HTTPS URL or an Amazon Simple Storage Service (Amazon S3) path-style URL in the `InstallOverrideList` parameter for the `AWS-RunPatchBaseline` or `AWS-RunPatchBaselineAssociation` documents, the listed patches are installed and steps 2-7 are skipped.

1. If an update is available for `python3-apt` (a Python library interface to `libapt`), it is upgraded to the latest version. (This nonsecurity package is upgraded even if you did not select the **Include nonsecurity updates** option.)

1. Apply [GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) as specified in the patch baseline, keeping only the qualified packages for further processing.

1. Apply [ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) as specified in the patch baseline. Each approval rule can define a package as approved.
**Note**  
Because it's not possible to reliably determine the release dates of update packages for Ubuntu Server, the auto-approval options aren't supported for this operating system.

   Approval rules, however, are also subject to whether the **Include nonsecurity updates** check box was selected when creating or last updating a patch baseline.

   If nonsecurity updates are excluded, an implicit rule is applied in order to select only packages with upgrades in security repos. For each package, the candidate version of the package (which is typically the latest version) must be part of a security repo. 

   If nonsecurity updates are included, patches from other repositories are considered as well.

**Note**  
For each version of Ubuntu Server, patch candidate versions are limited to patches that are part of the associated repo for that version, as follows:  
Ubuntu Server 16.04 LTS: `xenial-security`
Ubuntu Server 18.04 LTS: `bionic-security`
Ubuntu Server 20.04 LTS: `focal-security`
Ubuntu Server 22.04 LTS: `jammy-security`
Ubuntu Server 24.04 LTS: `noble-security`
Ubuntu Server 25.04: `plucky-security`

1. Apply [ApprovedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovedPatches) as specified in the patch baseline. These patches are approved for update even if they're discarded by [GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) or if no approval rule specified in [ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) grants them approval.

1. Apply [RejectedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-RejectedPatches) as specified in the patch baseline. The rejected patches are removed from the list of approved patches and won't be applied.

1. The APT library is used to upgrade packages.
**Note**  
Patch Manager does not support using the APT `Pin-Priority` option to assign priorities to packages. Patch Manager aggregates available updates from all enabled repositories and selects the most recent update that matches the baseline for each installed package.

1. The managed node is rebooted if any updates were installed. (Exception: If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaseline` document, the managed node isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](patch-manager-aws-runpatchbaseline.md#patch-manager-aws-runpatchbaseline-parameters-norebootoption).)
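
The implicit security-only rule described in the approval-rules step can be pictured as filtering candidate packages by the pocket they come from. A toy sketch with fabricated sample data (the real selection happens inside the APT library, not with text tools):

```
# Sample "package candidate-version pocket" data; only candidates whose
# source is the release's security pocket (here jammy-security for
# Ubuntu Server 22.04 LTS) survive the implicit rule.
cat <<'EOF' > candidates.txt
openssl 3.0.2-0ubuntu1.15 jammy-security
vim 2:8.2.3995-1ubuntu2 jammy-updates
libssl3 3.0.2-0ubuntu1.15 jammy-security
EOF
awk '$3 == "jammy-security" { print $1 }' candidates.txt
```

With **Include nonsecurity updates** selected, candidates from the other pockets would be considered as well.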

------
#### [ Windows Server ]

When a patching operation is performed on a Windows Server managed node, the node requests a snapshot of the appropriate patch baseline from Systems Manager. This snapshot contains the list of all updates available in the patch baseline that were approved for deployment. This list of updates is sent to the Windows Update API, which determines which of the updates are applicable to the managed node and installs them as needed. Windows allows only the latest available version of a KB to be installed. Patch Manager installs the latest version of a KB when it, or any previous version of the KB, matches the applied patch baseline. If any updates are installed, the managed node is rebooted afterward, as many times as needed to complete all patching. (Exception: If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaseline` document, the managed node isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](patch-manager-aws-runpatchbaseline.md#patch-manager-aws-runpatchbaseline-parameters-norebootoption).) A summary of the patching operation is available in the output of the Run Command request, and additional logs are written on the managed node in the `%PROGRAMDATA%\Amazon\PatchBaselineOperations\Logs` folder.

Because the Windows Update API is used to download and install KBs, all Group Policy settings for Windows Update are respected. No Group Policy settings are required to use Patch Manager, but any settings that you have defined will be applied, such as to direct managed nodes to a Windows Server Update Services (WSUS) server.
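
After a patching run, the per-node summary mentioned above can also be retrieved through the `DescribeInstancePatchStates` API. A sketch follows; the instance ID is a placeholder, and the sample response is illustrative rather than captured output:

```
# The command you would run (requires AWS credentials; shown, not executed):
#   aws ssm describe-instance-patch-states --instance-ids i-0123456789abcdef0
# An illustrative response shape (all values are made up):
cat <<'EOF' > sample-patch-state.json
{
  "InstancePatchStates": [
    {
      "InstanceId": "i-0123456789abcdef0",
      "Operation": "Install",
      "InstalledCount": 42,
      "MissingCount": 3,
      "FailedCount": 0
    }
  ]
}
EOF
python3 -m json.tool sample-patch-state.json > /dev/null && echo "sample OK"
```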

**Note**  
By default, Windows downloads all KBs from Microsoft's Windows Update site because Patch Manager uses the Windows Update API to drive the download and installation of KBs. As a result, the managed node must be able to reach the Microsoft Windows Update site or patching will fail. Alternatively, you can configure a WSUS server to serve as a KB repository and configure your managed nodes to target that WSUS server using Group Policies.  
Custom patch baselines for Windows Server might reference KB IDs, such as when an **Approved patches** list or **Rejected patches** list is included in the baseline configuration. Only updates that are assigned a KB ID in Microsoft Windows Update or a WSUS server are installed by Patch Manager. Updates that lack a KB ID aren't included in patching operations.   
For information about creating custom patch baselines, see the following topics:  
 [Creating a custom patch baseline for Windows Server](patch-manager-create-a-patch-baseline-for-windows.md)
 [Create a patch baseline (CLI)](patch-manager-create-a-patch-baseline-for-windows.md)
[Package name formats for Windows operating systems](patch-manager-approved-rejected-package-name-formats.md#patch-manager-approved-rejected-package-name-formats-windows)

------

# How patch baseline rules work on Linux-based systems


The rules in a patch baseline for Linux distributions operate differently based on the distribution type. Unlike patch updates on Windows Server managed nodes, rules are evaluated on each managed node so that the repos configured on that node are taken into consideration. Patch Manager, a tool in AWS Systems Manager, uses the native package manager to drive the installation of patches approved by the patch baseline.

For Linux-based operating system types that report a severity level for patches, Patch Manager uses the severity level reported by the software publisher for the update notice or individual patch. Patch Manager doesn't derive severity levels from third-party sources, such as the [Common Vulnerability Scoring System](https://www.first.org/cvss/) (CVSS), or from metrics released by the [National Vulnerability Database](https://nvd.nist.gov/vuln) (NVD).

**Topics**
+ [How patch baseline rules work on Amazon Linux 2 and Amazon Linux 2023](#linux-rules-amazon-linux)
+ [How patch baseline rules work on CentOS Stream](#linux-rules-centos)
+ [How patch baseline rules work on Debian Server](#linux-rules-debian)
+ [How patch baseline rules work on macOS](#linux-rules-macos)
+ [How patch baseline rules work on Oracle Linux](#linux-rules-oracle)
+ [How patch baseline rules work on AlmaLinux, RHEL, and Rocky Linux](#linux-rules-rhel)
+ [How patch baseline rules work on SUSE Linux Enterprise Server](#linux-rules-sles)
+ [How patch baseline rules work on Ubuntu Server](#linux-rules-ubuntu)

## How patch baseline rules work on Amazon Linux 2 and Amazon Linux 2023


**Note**  
Amazon Linux 2023 (AL2023) uses versioned repositories that can be locked to a specific version through one or more system settings. For all patching operations on AL2023 EC2 instances, Patch Manager uses the latest repository versions, independent of the system configuration. For more information, see [Deterministic upgrades through versioned repositories](https://docs.aws.amazon.com/linux/al2023/ug/deterministic-upgrades.html) in the *Amazon Linux 2023 User Guide*.

On Amazon Linux 2 and Amazon Linux 2023, the patch selection process is as follows:

1. On the managed node, the YUM library (Amazon Linux 2) or the DNF library (Amazon Linux 2023) accesses the `updateinfo.xml` file for each configured repo. 

   If no `updateinfo.xml` file is found, whether patches are installed depends on the settings for **Include non-security updates** and **Auto-approval**. For example, if non-security updates are permitted, they're installed when the auto-approval time arrives.

1. Each update notice in `updateinfo.xml` includes several attributes that denote the properties of the packages in the notice, as described in the following table.  
**Update notice attributes**    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-linux-rules.html)

   For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).

1. The product of the managed node is determined by SSM Agent. This attribute corresponds to the value of the Product key attribute in the patch baseline's [PatchFilter](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PatchFilter.html) data type.

1. Packages are selected for the update according to the following guidelines.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-linux-rules.html)
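
As a toy illustration of the update-notice attributes described in step 2, here is a minimal, entirely hypothetical `updateinfo.xml` notice and a quick extraction of the attributes that classification and severity filtering key on (real notices contain more fields):

```
# Fabricated update notice in the updateinfo.xml style; the notice ID,
# package, and versions are made up for illustration.
cat <<'EOF' > updateinfo-sample.xml
<update type="security" status="final">
  <id>EXAMPLE-2024-0001</id>
  <severity>important</severity>
  <pkglist><collection>
    <package name="openssl" version="1.0.2k" release="24.amzn2"/>
  </collection></pkglist>
</update>
EOF
# Pull out the classification (update type) and severity attributes.
grep -o 'type="[^"]*"' updateinfo-sample.xml
grep -o '<severity>[^<]*</severity>' updateinfo-sample.xml
```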

For information about patch compliance status values, see [Patch compliance state values](patch-manager-compliance-states.md).

## How patch baseline rules work on CentOS Stream


The CentOS Stream default repositories do not include an `updateinfo.xml` file. However, custom repositories that you create or use might include this file. In this topic, references to `updateinfo.xml` apply only to these custom repositories.

On CentOS Stream, the patch selection process is as follows:

1. On the managed node, the DNF library accesses the `updateinfo.xml` file, if it exists in a custom repository, for each configured repo.

   If no `updateinfo.xml` file is found, which is always the case for the default repos, whether patches are installed depends on the settings for **Include non-security updates** and **Auto-approval**. For example, if non-security updates are permitted, they're installed when the auto-approval time arrives.

1. If `updateinfo.xml` is present, each update notice in the file includes several attributes that denote the properties of the packages in the notice, as described in the following table.  
**Update notice attributes**    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-linux-rules.html)

   For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).

1. In all cases, the product of the managed node is determined by SSM Agent. This attribute corresponds to the value of the Product key attribute in the patch baseline's [PatchFilter](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PatchFilter.html) data type.

1. Packages are selected for the update according to the following guidelines.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-linux-rules.html)
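The fallback behavior in step 1 can be sketched as follows. This is an illustrative model only, not SSM Agent code, and the function name is hypothetical:

```python
from datetime import date, timedelta

def patch_approved_without_updateinfo(release_date, include_nonsecurity,
                                      auto_approval_days, today=None):
    """Illustrative sketch: when no updateinfo.xml exists (always true for
    the CentOS Stream default repos), approval falls back to the baseline's
    Include non-security updates and Auto-approval settings."""
    today = today or date.today()
    # Without updateinfo.xml, updates can't be classified as security
    # patches, so they're treated like non-security updates.
    if not include_nonsecurity:
        return False
    # The update is installed once the auto-approval delay has elapsed.
    return today >= release_date + timedelta(days=auto_approval_days)
```

For example, with a 7-day auto-approval delay, an update released on January 1 would not qualify until January 8.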

For information about patch compliance status values, see [Patch compliance state values](patch-manager-compliance-states.md).

## How patch baseline rules work on Debian Server


On Debian Server, the patch baseline service offers filtering on the *Priority* and *Section* fields. These fields are typically present for all Debian Server packages. To determine whether a patch is selected by the patch baseline, Patch Manager does the following:

1. On Debian Server systems, the equivalent of `sudo apt-get update` is run to refresh the list of available packages. Patch Manager doesn't configure repos itself; the data is pulled from the repos configured in the node's `sources` list.

1. If an update is available for `python3-apt` (a Python library interface to `libapt`), it is upgraded to the latest version. (This nonsecurity package is upgraded even if you did not select the **Include nonsecurity updates** option.)

1. Next, the [GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters), [ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules), [ApprovedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovedPatches), and [RejectedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-RejectedPatches) lists are applied.
**Note**  
Because it isn't possible to reliably determine the release dates of update packages for Debian Server, the auto-approval options aren't supported for this operating system.

   Approval rules, however, are also subject to whether the **Include nonsecurity updates** check box was selected when creating or last updating a patch baseline.

   If nonsecurity updates are excluded, an implicit rule is applied that selects only packages with upgrades in security repos. For each package, the candidate version of the package (which is typically the latest version) must be part of a security repo. In this case, for Debian Server, patch candidate versions are limited to patches included in the following repos:
   + Debian Server 11: `debian-security bullseye`
   + Debian Server 12: `debian-security bookworm`

   If nonsecurity updates are included, patches from other repositories are considered as well.

   For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).

To view the contents of the *Priority* and *Section* fields, run the following `aptitude` command:

**Note**  
You might need to first install Aptitude on Debian Server systems.

```
aptitude search -F '%p %P %s %t %V#' '~U'
```

In the response to this command, all upgradable packages are reported in this format: 

```
name, priority, section, archive, candidate version
```
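For illustration, a line in this format can be split back into its fields as follows. The parser and the sample line are hypothetical, but the field order matches the `-F '%p %P %s %t %V#'` format string above:

```python
def parse_aptitude_line(line):
    """Parse one line of the aptitude output shown above: package name,
    priority, section, archive, and candidate version, separated by
    whitespace, with '#' marking the end of the line."""
    name, priority, section, archive, version = line.rstrip("#").split()
    return {"name": name, "priority": priority, "section": section,
            "archive": archive, "candidate_version": version}
```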

For information about patch compliance status values, see [Patch compliance state values](patch-manager-compliance-states.md).

## How patch baseline rules work on macOS


On macOS, the patch selection process is as follows:

1. On the managed node, Patch Manager accesses the parsed contents of the `InstallHistory.plist` file and identifies package names and versions. 

   For details about the parsing process, see the **macOS** tab in [How patches are installed](patch-manager-installing-patches.md).

1. The product of the managed node is determined by SSM Agent. This attribute corresponds to the value of the **Product** key attribute in the patch baseline's [PatchFilter](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PatchFilter.html) data type.

1. Packages are selected for the update according to the following guidelines.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-linux-rules.html)

For information about patch compliance status values, see [Patch compliance state values](patch-manager-compliance-states.md).

## How patch baseline rules work on Oracle Linux


On Oracle Linux, the patch selection process is as follows:

1. On the managed node, the YUM library accesses the `updateinfo.xml` file for each configured repo.
**Note**  
The `updateinfo.xml` file might not be available if the repo isn't one managed by Oracle. If no `updateinfo.xml` file is found, whether patches are installed depends on the settings for **Include non-security updates** and **Auto-approval**. For example, if non-security updates are permitted, they're installed when the auto-approval time arrives.

1. Each update notice in `updateinfo.xml` includes several attributes that denote the properties of the packages in the notice, as described in the following table.  
**Update notice attributes**    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-linux-rules.html)

   For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).

1. The product of the managed node is determined by SSM Agent. This attribute corresponds to the value of the **Product** key attribute in the patch baseline's [PatchFilter](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PatchFilter.html) data type.

1. Packages are selected for the update according to the following guidelines.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-linux-rules.html)

For information about patch compliance status values, see [Patch compliance state values](patch-manager-compliance-states.md).

## How patch baseline rules work on AlmaLinux, RHEL, and Rocky Linux


On AlmaLinux, Red Hat Enterprise Linux (RHEL), and Rocky Linux, the patch selection process is as follows:

1. On the managed node, the YUM library (RHEL 7) or the DNF library (AlmaLinux 8 and 9, RHEL 8, 9, and 10, and Rocky Linux 8 and 9) accesses the `updateinfo.xml` file for each configured repo.
**Note**  
The `updateinfo.xml` file might not be available if the repo isn't one managed by Red Hat. If no `updateinfo.xml` file is found, whether patches are installed depends on the settings for **Include non-security updates** and **Auto-approval**. For example, if non-security updates are permitted, they're installed when the auto-approval time arrives.

1. Each update notice in `updateinfo.xml` includes several attributes that denote the properties of the packages in the notice, as described in the following table.  
**Update notice attributes**    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-linux-rules.html)

   For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).

1. The product of the managed node is determined by SSM Agent. This attribute corresponds to the value of the **Product** key attribute in the patch baseline's [PatchFilter](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PatchFilter.html) data type.

1. Packages are selected for the update according to the following guidelines.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-linux-rules.html)

For information about patch compliance status values, see [Patch compliance state values](patch-manager-compliance-states.md).

## How patch baseline rules work on SUSE Linux Enterprise Server


On SLES, each patch includes the following attributes that denote the properties of the packages in the patch:
+ **Category**: Corresponds to the value of the **Classification** key attribute in the patch baseline's [PatchFilter](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PatchFilter.html) data type. Denotes the type of patch included in the update notice.

  You can view the list of supported values by using the AWS CLI command [describe-patch-properties](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-patch-properties.html) or the API operation [DescribePatchProperties](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribePatchProperties.html). You can also view the list in the **Approval rules** area of the **Create patch baseline** page or **Edit patch baseline** page in the Systems Manager console.
+ **Severity**: Corresponds to the value of the **Severity** key attribute in the patch baseline's [PatchFilter](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PatchFilter.html) data type. Denotes the severity of the patches.

  You can view the list of supported values by using the AWS CLI command [describe-patch-properties](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-patch-properties.html) or the API operation [DescribePatchProperties](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribePatchProperties.html). You can also view the list in the **Approval rules** area of the **Create patch baseline** page or **Edit patch baseline** page in the Systems Manager console.

The product of the managed node is determined by SSM Agent. This attribute corresponds to the value of the **Product** key attribute in the patch baseline's [PatchFilter](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PatchFilter.html) data type.

For each patch, the patch baseline is used as a filter, allowing only the qualified packages to be included in the update. If multiple packages are applicable after applying the patch baseline definition, the latest version is used. 
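The "latest qualifying version wins" rule can be sketched as follows. This is an illustrative model, not the Patch Manager implementation; the function name is hypothetical, and version strings are simplified to dotted integers:

```python
def select_patch(qualified_versions):
    """Sketch of the SLES selection rule: the patch baseline filters
    packages first; among the remaining candidate versions of a package,
    the latest one is installed."""
    if not qualified_versions:
        return None  # no version qualified under the baseline
    # Compare versions numerically, component by component.
    return max(qualified_versions,
               key=lambda v: tuple(int(x) for x in v.split(".")))
```

Real RPM version comparison is more involved (epochs, release strings), so a naive numeric comparison like this is only for illustration.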

For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).

## How patch baseline rules work on Ubuntu Server


On Ubuntu Server, the patch baseline service offers filtering on the *Priority* and *Section* fields. These fields are typically present for all Ubuntu Server packages. To determine whether a patch is selected by the patch baseline, Patch Manager does the following:

1. On Ubuntu Server systems, the equivalent of `sudo apt-get update` is run to refresh the list of available packages. Patch Manager doesn't configure repos itself; the data is pulled from the repos configured in the node's `sources` list.

1. If an update is available for `python3-apt` (a Python library interface to `libapt`), it is upgraded to the latest version. (This nonsecurity package is upgraded even if you did not select the **Include nonsecurity updates** option.)

1. Next, the [GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters), [ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules), [ApprovedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovedPatches), and [RejectedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-RejectedPatches) lists are applied.
**Note**  
Because it's not possible to reliably determine the release dates of update packages for Ubuntu Server, the auto-approval options aren't supported for this operating system.

   Approval rules, however, are also subject to whether the **Include nonsecurity updates** check box was selected when creating or last updating a patch baseline.

   If nonsecurity updates are excluded, an implicit rule is applied that selects only packages with upgrades in security repos. For each package, the candidate version of the package (which is typically the latest version) must be part of a security repo. In this case, for Ubuntu Server, patch candidate versions are limited to patches included in the following repos:
   + Ubuntu Server 16.04 LTS: `xenial-security`
   + Ubuntu Server 18.04 LTS: `bionic-security`
   + Ubuntu Server 20.04 LTS: `focal-security`
   + Ubuntu Server 22.04 LTS: `jammy-security`
   + Ubuntu Server 24.04 LTS: `noble-security`
   + Ubuntu Server 25.04: `plucky-security`

   If nonsecurity updates are included, patches from other repositories are considered as well.

   For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).
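The implicit security-repo rule described in step 3 can be sketched as follows. This is an illustrative model only, not the Patch Manager implementation; the function name and the release-to-pocket mapping shape are hypothetical:

```python
# Security pockets per Ubuntu Server release, as listed above.
SECURITY_POCKETS = {
    "16.04": "xenial-security", "18.04": "bionic-security",
    "20.04": "focal-security", "22.04": "jammy-security",
    "24.04": "noble-security", "25.04": "plucky-security",
}

def candidate_allowed(candidate_archive, ubuntu_release, include_nonsecurity):
    """Sketch of the implicit rule: when nonsecurity updates are excluded,
    a package's candidate version qualifies only if it comes from the
    release's security pocket."""
    if include_nonsecurity:
        return True  # other repositories are considered as well
    return candidate_archive == SECURITY_POCKETS[ubuntu_release]
```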

To view the contents of the *Priority* and *Section* fields, run the following `aptitude` command:

**Note**  
You might need to first install Aptitude on Ubuntu Server 16 systems.

```
aptitude search -F '%p %P %s %t %V#' '~U'
```

In the response to this command, all upgradable packages are reported in this format: 

```
name, priority, section, archive, candidate version
```

For information about patch compliance status values, see [Patch compliance state values](patch-manager-compliance-states.md).

# Patching operation differences between Linux and Windows Server


This topic describes important differences between Linux and Windows Server patching in Patch Manager, a tool in AWS Systems Manager.

**Note**  
To patch Linux managed nodes, your nodes must be running SSM Agent version 2.0.834.0 or later.  
An updated version of SSM Agent is released whenever new tools are added to Systems Manager or updates are made to existing tools. Failing to use the latest version of the agent can prevent your managed node from using various Systems Manager tools and features. For that reason, we recommend that you automate the process of keeping SSM Agent up to date on your machines. For information, see [Automating updates to SSM Agent](ssm-agent-automatic-updates.md). Subscribe to the [SSM Agent Release Notes](https://github.com/aws/amazon-ssm-agent/blob/mainline/RELEASENOTES.md) page on GitHub to get notifications about SSM Agent updates.

## Difference 1: Patch evaluation


Patch Manager uses different processes on Windows managed nodes and Linux managed nodes in order to evaluate which patches should be present. 

**Linux**  
For Linux patching, Systems Manager evaluates patch baseline rules and the list of approved and rejected patches on *each* managed node. Systems Manager must evaluate patching on each node because the service retrieves the list of known patches and updates from the repositories that are configured on the managed node.

**Windows**  
For Windows patching, Systems Manager evaluates patch baseline rules and the list of approved and rejected patches *directly in the service*. It can do this because Windows patches are pulled from a single repository (Windows Update).

## Difference 2: `Not Applicable` patches


Due to the large number of available packages for Linux operating systems, Systems Manager doesn't report details about patches in the *Not Applicable* state. A `Not Applicable` patch is, for example, a patch for Apache software when the instance doesn't have Apache installed. Systems Manager does report the number of `Not Applicable` patches in the summary, but if you call the [DescribeInstancePatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribeInstancePatches.html) API for a managed node, the returned data doesn't include patches with a state of `Not Applicable`. This behavior is different from Windows.
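The reporting difference can be sketched as follows. This is an illustrative model of the behavior described above, not service code; the function name and patch record shape are hypothetical:

```python
def summarize_and_detail(patches):
    """Sketch of the Linux reporting behavior: the compliance summary
    counts patches in every state, but the per-node detail (what
    DescribeInstancePatches returns) omits Not Applicable patches."""
    summary = {}
    for p in patches:
        summary[p["state"]] = summary.get(p["state"], 0) + 1
    detail = [p for p in patches if p["state"] != "Not Applicable"]
    return summary, detail
```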

## Difference 3: SSM document support


The `AWS-ApplyPatchBaseline` Systems Manager document (SSM document) doesn't support Linux managed nodes. For applying patch baselines to Linux, macOS, and Windows Server managed nodes, the recommended SSM document is `AWS-RunPatchBaseline`. For more information, see [SSM Command documents for patching managed nodes](patch-manager-ssm-documents.md) and [SSM Command document for patching: `AWS-RunPatchBaseline`](patch-manager-aws-runpatchbaseline.md).

## Difference 4: Application patches


The primary focus of Patch Manager is applying patches to operating systems. However, you can also use Patch Manager to apply patches to some applications on your managed nodes.

**Linux**  
On Linux operating systems, Patch Manager uses the configured repositories for updates, and doesn't differentiate between operating systems and application patches. You can use Patch Manager to define which repositories to fetch updates from. For more information, see [How to specify an alternative patch source repository (Linux)](patch-manager-alternative-source-repository.md).

**Windows**  
On Windows Server managed nodes, you can apply approval rules, as well as *Approved* and *Rejected* patch exceptions, for applications released by Microsoft, such as Microsoft Word 2016 and Microsoft Exchange Server 2016. For more information, see [Working with custom patch baselines](patch-manager-manage-patch-baselines.md).

## Difference 5: Rejected patch list options in custom patch baselines


When you create a custom patch baseline, you can specify one or more patches for a **Rejected patches** list. For Linux managed nodes, you can also choose to allow rejected patches to be installed if they're dependencies of another patch that the baseline allows.

Windows Server, however, doesn't support the concept of patch dependencies. You can add a patch to the **Rejected patches** list in a custom baseline for Windows Server, but the result depends on (1) whether or not the rejected patch is already installed on a managed node, and (2) which option you choose for **Rejected patches action**.

Refer to the following table for details about rejected patch options on Windows Server.


| Installation status | Option: "Allow as dependency" | Option: "Block" | 
| --- | --- | --- | 
| Patch is already installed | Reported status: `INSTALLED_OTHER` | Reported status: `INSTALLED_REJECTED` | 
| Patch is not already installed | Patch skipped | Patch skipped | 
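The table above reduces to a small mapping, sketched here for illustration. The function name is hypothetical; the state strings are the values used by the `DescribeInstancePatches` API:

```python
def rejected_patch_result(already_installed, action):
    """Sketch of the Windows Server behavior for a patch on the Rejected
    patches list: it is never newly installed; if it is already present,
    the reported compliance state depends on the Rejected patches action."""
    if not already_installed:
        return "skipped"  # the patch is never applied
    return {"Allow as dependency": "INSTALLED_OTHER",
            "Block": "INSTALLED_REJECTED"}[action]
```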

Each patch for Windows Server that Microsoft releases typically contains all the information needed for the installation to succeed. Occasionally, however, a prerequisite package might be required, which you must install manually. Patch Manager doesn't report information about these prerequisites. For related information, see [Windows Update issues troubleshooting](https://learn.microsoft.com/en-us/troubleshoot/windows-client/installing-updates-features-roles/windows-update-issues-troubleshooting) on the Microsoft website.

# SSM Command documents for patching managed nodes


This topic describes the nine Systems Manager documents (SSM documents) available to help you keep your managed nodes patched with the latest security-related updates. 

We recommend using just five of these documents in your patching operations. Together, these five SSM documents provide you with a full range of patching options using AWS Systems Manager. Four of these documents were released later than the four legacy SSM documents they replace and represent expansions or consolidations of functionality.

**Recommended SSM documents for patching**  
We recommend using the following five SSM documents in your patching operations.
+ `AWS-ConfigureWindowsUpdate`
+ `AWS-InstallWindowsUpdates`
+ `AWS-RunPatchBaseline`
+ `AWS-RunPatchBaselineAssociation`
+ `AWS-RunPatchBaselineWithHooks`

**Legacy SSM documents for patching**  
The following four legacy SSM documents remain available in some AWS Regions but are no longer updated or supported. These documents support only IPv4 and aren't guaranteed to work in IPv6 environments or in all scenarios, and they might lose support entirely in the future. We recommend that you don't use these documents in your patching operations.
+ `AWS-ApplyPatchBaseline`
+ `AWS-FindWindowsUpdates`
+ `AWS-InstallMissingWindowsUpdates`
+ `AWS-InstallSpecificWindowsUpdates`

For steps to set up patching operations in an environment that supports only IPv6, see [Tutorial: Patching a server in an IPv6 only environment](patch-manager-server-patching-iPv6-tutorial.md).

Refer to the following sections for more information about using these SSM documents in your patching operations.

**Topics**
+ [SSM documents recommended for patching managed nodes](#patch-manager-ssm-documents-recommended)
+ [Legacy SSM documents for patching managed nodes](#patch-manager-ssm-documents-legacy)
+ [Known limitations of the SSM documents for patching managed nodes](#patch-manager-ssm-documents-known-limitations)
+ [SSM Command document for patching: `AWS-RunPatchBaseline`](patch-manager-aws-runpatchbaseline.md)
+ [SSM Command document for patching: `AWS-RunPatchBaselineAssociation`](patch-manager-aws-runpatchbaselineassociation.md)
+ [SSM Command document for patching: `AWS-RunPatchBaselineWithHooks`](patch-manager-aws-runpatchbaselinewithhooks.md)
+ [Sample scenario for using the InstallOverrideList parameter in `AWS-RunPatchBaseline` or `AWS-RunPatchBaselineAssociation`](patch-manager-override-lists.md)
+ [Using the BaselineOverride parameter](patch-manager-baselineoverride-parameter.md)

## SSM documents recommended for patching managed nodes


The following five SSM documents are recommended for use in your managed node patching operations.

**Topics**
+ [`AWS-ConfigureWindowsUpdate`](#patch-manager-ssm-documents-recommended-AWS-ConfigureWindowsUpdate)
+ [`AWS-InstallWindowsUpdates`](#patch-manager-ssm-documents-recommended-AWS-InstallWindowsUpdates)
+ [`AWS-RunPatchBaseline`](#patch-manager-ssm-documents-recommended-AWS-RunPatchBaseline)
+ [`AWS-RunPatchBaselineAssociation`](#patch-manager-ssm-documents-recommended-AWS-RunPatchBaselineAssociation)
+ [`AWS-RunPatchBaselineWithHooks`](#patch-manager-ssm-documents-recommended-AWS-RunPatchBaselineWithHooks)

### `AWS-ConfigureWindowsUpdate`


Supports configuring basic Windows Update functions and using them to install updates automatically (or to turn off automatic updates). Available in all AWS Regions.

This SSM document prompts Windows Update to download and install the specified updates and reboot managed nodes as needed. Use this document with State Manager, a tool in AWS Systems Manager, to ensure Windows Update maintains its configuration. You can also run it manually using Run Command, a tool in AWS Systems Manager, to change the Windows Update configuration. 

The available parameters in this document support specifying a category of updates to install (or whether to turn off automatic updates), as well as specifying the day of the week and time of day to run patching operations. This SSM document is most useful if you don't need strict control over Windows updates and don't need to collect compliance information. 

**Replaces legacy SSM documents:**
+ *None*

### `AWS-InstallWindowsUpdates`


Installs updates on a Windows Server managed node. Available in all AWS Regions.

This SSM document provides basic patching functionality in cases where you either want to install a specific update (using the `Include Kbs` parameter), or want to install patches with specific classifications or categories but don't need patch compliance information. 

**Replaces legacy SSM documents:**
+ `AWS-FindWindowsUpdates`
+ `AWS-InstallMissingWindowsUpdates`
+ `AWS-InstallSpecificWindowsUpdates`

The three legacy documents perform different functions, but you can achieve the same results by using different parameter settings with the newer SSM document `AWS-InstallWindowsUpdates`. These parameter settings are described in [Legacy SSM documents for patching managed nodes](#patch-manager-ssm-documents-legacy).

### `AWS-RunPatchBaseline`


Installs patches on your managed nodes or scans nodes to determine whether any qualified patches are missing. Available in all AWS Regions.

`AWS-RunPatchBaseline` allows you to control patch approvals using the patch baseline specified as the default for an operating system type. It reports patch compliance information that you can view using the Systems Manager Compliance tools. These tools provide insights into the patch compliance state of your managed nodes, such as which nodes are missing patches and what those patches are. When you use `AWS-RunPatchBaseline`, patch compliance information is recorded using the `PutInventory` API command. For Linux operating systems, compliance information is provided for patches from both the default source repository configured on a managed node and from any alternative source repositories you specify in a custom patch baseline. For more information about alternative source repositories, see [How to specify an alternative patch source repository (Linux)](patch-manager-alternative-source-repository.md). For more information about the Systems Manager Compliance tools, see [AWS Systems Manager Compliance](systems-manager-compliance.md).

 **Replaces legacy documents:**
+ `AWS-ApplyPatchBaseline`

The legacy document `AWS-ApplyPatchBaseline` applies only to Windows Server managed nodes, and doesn't provide support for application patching. The newer `AWS-RunPatchBaseline` provides the same support for both Windows and Linux systems. Version 2.0.834.0 or later of SSM Agent is required in order to use the `AWS-RunPatchBaseline` document. 

For more information about the `AWS-RunPatchBaseline` SSM document, see [SSM Command document for patching: `AWS-RunPatchBaseline`](patch-manager-aws-runpatchbaseline.md).

### `AWS-RunPatchBaselineAssociation`


Installs patches on your instances or scans instances to determine whether any qualified patches are missing. Available in all commercial AWS Regions. 

`AWS-RunPatchBaselineAssociation` differs from `AWS-RunPatchBaseline` in a few important ways:
+ `AWS-RunPatchBaselineAssociation` is intended for use primarily with State Manager associations created using Quick Setup, a tool in AWS Systems Manager. Specifically, when you use the Quick Setup Host Management configuration type, if you choose the option **Scan instances for missing patches daily**, the system uses `AWS-RunPatchBaselineAssociation` for the operation.

  In most cases, however, when setting up your own patching operations, you should choose [`AWS-RunPatchBaseline`](patch-manager-aws-runpatchbaseline.md) or [`AWS-RunPatchBaselineWithHooks`](patch-manager-aws-runpatchbaselinewithhooks.md) instead of `AWS-RunPatchBaselineAssociation`. 

  For more information, see the following topics:
  + [AWS Systems Manager Quick Setup](systems-manager-quick-setup.md)
  + [SSM Command document for patching: `AWS-RunPatchBaselineAssociation`](patch-manager-aws-runpatchbaselineassociation.md)
+ `AWS-RunPatchBaselineAssociation` supports the use of tags to identify which patch baseline to use with a set of targets when it runs. 
+ For patching operations that use `AWS-RunPatchBaselineAssociation`, patch compliance data is compiled in terms of a specific State Manager association. The patch compliance data collected when `AWS-RunPatchBaselineAssociation` runs is recorded using the `PutComplianceItems` API command instead of the `PutInventory` command. This prevents compliance data that isn't associated with this particular association from being overwritten.

  For Linux operating systems, compliance information is provided for patches from both the default source repository configured on an instance and from any alternative source repositories you specify in a custom patch baseline. For more information about alternative source repositories, see [How to specify an alternative patch source repository (Linux)](patch-manager-alternative-source-repository.md). For more information about the Systems Manager Compliance tools, see [AWS Systems Manager Compliance](systems-manager-compliance.md).

 **Replaces legacy documents:**
+ **None**

For more information about the `AWS-RunPatchBaselineAssociation` SSM document, see [SSM Command document for patching: `AWS-RunPatchBaselineAssociation`](patch-manager-aws-runpatchbaselineassociation.md).

### `AWS-RunPatchBaselineWithHooks`


Installs patches on your managed nodes or scans nodes to determine whether any qualified patches are missing, with optional hooks you can use to run SSM documents at three points during the patching cycle. Available in all commercial AWS Regions. Not supported on macOS.

`AWS-RunPatchBaselineWithHooks` differs from `AWS-RunPatchBaseline` in its `Install` operation.

`AWS-RunPatchBaselineWithHooks` supports lifecycle hooks that run at designated points during managed node patching. Because patch installations sometimes require managed nodes to reboot, the patching operation is divided into two events, for a total of three hooks that support custom functionality. The first hook is before the `Install with NoReboot` operation. The second hook is after the `Install with NoReboot` operation. The third hook is available after the reboot of the node.
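The three-hook lifecycle can be sketched as the following sequence. This is an illustrative model of the ordering only, not the SSM document itself; the function and parameter names are hypothetical:

```python
def run_patch_cycle(pre_hook, post_install_hook, post_reboot_hook,
                    install_no_reboot, reboot):
    """Sketch of the AWS-RunPatchBaselineWithHooks lifecycle: three
    optional hooks placed around the two-part Install operation."""
    pre_hook()            # hook 1: before the Install with NoReboot operation
    install_no_reboot()
    post_install_hook()   # hook 2: after the Install with NoReboot operation
    reboot()
    post_reboot_hook()    # hook 3: after the node reboots
```

In the real document, each hook runs the SSM document you specify for that stage.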

 **Replaces legacy documents:**
+ **None**

For more information about the `AWS-RunPatchBaselineWithHooks` SSM document, see [SSM Command document for patching: `AWS-RunPatchBaselineWithHooks`](patch-manager-aws-runpatchbaselinewithhooks.md).

## Legacy SSM documents for patching managed nodes


The following four SSM documents are still available in some AWS Regions. However, they are no longer updated and might no longer be supported in the future, so we don't recommend their use. Instead, use the documents described in [SSM documents recommended for patching managed nodes](#patch-manager-ssm-documents-recommended).

**Topics**
+ [

### `AWS-ApplyPatchBaseline`
](#patch-manager-ssm-documents-legacy-AWS-ApplyPatchBaseline)
+ [

### `AWS-FindWindowsUpdates`
](#patch-manager-ssm-documents-legacy-AWS-AWS-FindWindowsUpdates)
+ [

### `AWS-InstallMissingWindowsUpdates`
](#patch-manager-ssm-documents-legacy-AWS-InstallMissingWindowsUpdates)
+ [

### `AWS-InstallSpecificWindowsUpdates`
](#patch-manager-ssm-documents-legacy-AWS-InstallSpecificWindowsUpdates)

### `AWS-ApplyPatchBaseline`


Supports only Windows Server managed nodes, but doesn't include support for patching applications that is found in its replacement, `AWS-RunPatchBaseline`. Not available in AWS Regions launched after August 2017.

**Note**  
The replacement for this SSM document, `AWS-RunPatchBaseline`, requires version 2.0.834.0 or a later version of SSM Agent. You can use the `AWS-UpdateSSMAgent` document to update your managed nodes to the latest version of the agent. 

### `AWS-FindWindowsUpdates`


Replaced by `AWS-InstallWindowsUpdates`, which can perform all the same actions. Not available in AWS Regions launched after April 2017.

To achieve the same result that you would from this legacy SSM document, use the following parameter configuration with the recommended replacement document, `AWS-InstallWindowsUpdates`:
+ `Action` = `Scan`
+ `Allow Reboot` = `False`

### `AWS-InstallMissingWindowsUpdates`


Replaced by `AWS-InstallWindowsUpdates`, which can perform all the same actions. Not available in any AWS Regions launched after April 2017.

To achieve the same result that you would from this legacy SSM document, use the following parameter configuration with the recommended replacement document, `AWS-InstallWindowsUpdates`:
+ `Action` = `Install`
+ `Allow Reboot` = `True`

### `AWS-InstallSpecificWindowsUpdates`


Replaced by `AWS-InstallWindowsUpdates`, which can perform all the same actions. Not available in any AWS Regions launched after April 2017.

To achieve the same result that you would from this legacy SSM document, use the following parameter configuration with the recommended replacement document, `AWS-InstallWindowsUpdates`:
+ `Action` = `Install`
+ `Allow Reboot` = `True`
+ `Include Kbs` = *comma-separated list of KB articles*
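The three legacy-to-replacement parameter mappings above can be collected into a small lookup table. The following sketch is purely illustrative (the table and helper are not part of any AWS SDK); it uses the parameter labels exactly as shown in the mappings above, and the KB list value is a placeholder you would replace with your own:

```python
# Illustrative lookup table (not an AWS API): maps each legacy patching
# document to the AWS-InstallWindowsUpdates parameters that reproduce it.
LEGACY_REPLACEMENTS = {
    "AWS-FindWindowsUpdates": {"Action": "Scan", "Allow Reboot": "False"},
    "AWS-InstallMissingWindowsUpdates": {"Action": "Install", "Allow Reboot": "True"},
    "AWS-InstallSpecificWindowsUpdates": {
        "Action": "Install",
        "Allow Reboot": "True",
        "Include Kbs": "<comma-separated list of KB articles>",  # placeholder
    },
}

def replacement_parameters(legacy_document):
    """Return the AWS-InstallWindowsUpdates parameters for a legacy document."""
    try:
        return LEGACY_REPLACEMENTS[legacy_document]
    except KeyError:
        raise ValueError(f"{legacy_document} has no direct parameter mapping")

print(replacement_parameters("AWS-FindWindowsUpdates"))
```

Note that `AWS-ApplyPatchBaseline` is deliberately absent: its replacement, `AWS-RunPatchBaseline`, is a different document rather than a parameter change.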

## Known limitations of the SSM documents for patching managed nodes


### External reboot interruptions


If a reboot is initiated by the system on the node during patch installation (for example, to apply updates to firmware or features like SecureBoot), the patching document execution may be interrupted and marked as failed even though patches were successfully installed. This occurs because the SSM Agent cannot persist and resume the document execution state across external reboots.

To verify patch installation status after a failed execution, run a `Scan` patching operation, then check the patch compliance data in Patch Manager to assess the current compliance state.

# SSM Command document for patching: `AWS-RunPatchBaseline`
AWS-RunPatchBaseline

AWS Systems Manager supports `AWS-RunPatchBaseline`, a Systems Manager document (SSM document) for Patch Manager, a tool in AWS Systems Manager. This SSM document performs patching operations on managed nodes for both security-related and other types of updates. When the document is run, it uses the patch baseline specified as the "default" for an operating system type if no patch group is specified. Otherwise, it uses the patch baseline that is associated with the patch group. For information about patch groups, see [Patch groups](patch-manager-patch-groups.md). 

You can use the document `AWS-RunPatchBaseline` to apply patches for both operating systems and applications. (On Windows Server, application support is limited to updates for applications released by Microsoft.)

This document supports Linux, macOS, and Windows Server managed nodes. The document will perform the appropriate actions for each platform. 

**Note**  
Patch Manager also supports the legacy SSM document `AWS-ApplyPatchBaseline`. However, this document supports patching on Windows managed nodes only. We encourage you to use `AWS-RunPatchBaseline` instead because it supports patching on Linux, macOS, and Windows Server managed nodes. Version 2.0.834.0 or later of SSM Agent is required in order to use the `AWS-RunPatchBaseline` document.

------
#### [ Windows Server ]

On Windows Server managed nodes, the `AWS-RunPatchBaseline` document downloads and invokes a PowerShell module, which in turn downloads a snapshot of the patch baseline that applies to the managed node. This patch baseline snapshot contains a list of approved patches that is compiled by querying the patch baseline against a Windows Server Update Services (WSUS) server. This list is passed to the Windows Update API, which controls downloading and installing the approved patches as appropriate. 

------
#### [ Linux ]

On Linux managed nodes, the `AWS-RunPatchBaseline` document invokes a Python module, which in turn downloads a snapshot of the patch baseline that applies to the managed node. This patch baseline snapshot uses the defined rules and lists of approved and blocked patches to drive the appropriate package manager for each node type: 
+ Amazon Linux 2, Oracle Linux, and RHEL 7 managed nodes use YUM. For YUM operations, Patch Manager requires `Python 2.6` or a later supported version (2.6 - 3.12). Amazon Linux 2023 uses DNF. For DNF operations, Patch Manager requires a supported version of `Python 2` or `Python 3` (2.6 - 3.12).
+ RHEL 8 managed nodes use DNF. For DNF operations, Patch Manager requires a supported version of `Python 2` or `Python 3` (2.6 - 3.12). (Neither version is installed by default on RHEL 8. You must install one or the other manually.)
+ Debian Server and Ubuntu Server instances use APT. For APT operations, Patch Manager requires a supported version of `Python 3` (3.0 - 3.12).

------
#### [ macOS ]

On macOS managed nodes, the `AWS-RunPatchBaseline` document invokes a Python module, which in turn downloads a snapshot of the patch baseline that applies to the managed node. Next, a Python subprocess invokes the AWS Command Line Interface (AWS CLI) on the node to retrieve the installation and update information for the specified package managers and to drive the appropriate package manager for each update package.

------

Each snapshot is specific to an AWS account, patch group, operating system, and snapshot ID. The snapshot is delivered through a presigned Amazon Simple Storage Service (Amazon S3) URL, which expires 24 hours after the snapshot is created. If you want to apply the same snapshot content to other managed nodes after the URL expires, you can generate a new presigned Amazon S3 URL for up to 3 days after the snapshot was created. To do this, use the [get-deployable-patch-snapshot-for-instance](https://docs.aws.amazon.com/cli/latest/reference/ssm/get-deployable-patch-snapshot-for-instance.html) command. 

After all approved and applicable updates have been installed, with reboots performed as necessary, patch compliance information is generated on a managed node and reported back to Patch Manager. 

**Note**  
If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaseline` document, the managed node isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](#patch-manager-aws-runpatchbaseline-parameters-norebootoption).

For information about viewing patch compliance data, see [About patch compliance](compliance-about.md#compliance-monitor-patch). 

## `AWS-RunPatchBaseline` parameters


`AWS-RunPatchBaseline` supports seven parameters. The `Operation` parameter is required. The `AssociationId`, `InstallOverrideList`, `BaselineOverride`, and `RebootOption` parameters are optional. `Snapshot ID` is technically optional, but we recommend that you supply a custom value for it when you run `AWS-RunPatchBaseline` outside of a maintenance window. Patch Manager can supply the custom value automatically when the document is run as part of a maintenance window operation.

**Topics**
+ [

### Parameter name: `Operation`
](#patch-manager-aws-runpatchbaseline-parameters-operation)
+ [

### Parameter name: `AssociationId`
](#patch-manager-aws-runpatchbaseline-parameters-association-id)
+ [

### Parameter name: `Snapshot ID`
](#patch-manager-aws-runpatchbaseline-parameters-snapshot-id)
+ [

### Parameter name: `InstallOverrideList`
](#patch-manager-aws-runpatchbaseline-parameters-installoverridelist)
+ [

### Parameter name: `RebootOption`
](#patch-manager-aws-runpatchbaseline-parameters-norebootoption)
+ [

### Parameter name: `BaselineOverride`
](#patch-manager-aws-runpatchbaseline-parameters-baselineoverride)
+ [

### Parameter name: `StepTimeoutSeconds`
](#patch-manager-aws-runpatchbaseline-parameters-steptimeoutseconds)

### Parameter name: `Operation`


**Usage**: Required.

**Options**: `Scan` | `Install`. 

Scan  
When you choose the `Scan` option, `AWS-RunPatchBaseline` determines the patch compliance state of the managed node and reports this information back to Patch Manager. `Scan` doesn't prompt updates to be installed or managed nodes to be rebooted. Instead, the operation identifies which approved and applicable updates are missing from the node. 

Install  
When you choose the `Install` option, `AWS-RunPatchBaseline` attempts to install the approved and applicable updates that are missing from the managed node. Patch compliance information generated as part of an `Install` operation doesn't list any missing updates, but might report updates that are in a failed state if the installation of the update didn't succeed for any reason. Whenever an update is installed on a managed node, the node is rebooted to ensure the update is both installed and active. (Exception: If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaseline` document, the managed node isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](#patch-manager-aws-runpatchbaseline-parameters-norebootoption).)  
If a patch specified by the baseline rules is installed *before* Patch Manager updates the managed node, the system might not reboot as expected. This can happen when a patch is installed manually by a user or installed automatically by another program, such as the `unattended-upgrades` package on Ubuntu Server.

### Parameter name: `AssociationId`


**Usage**: Optional.

`AssociationId` is the ID of an existing association in State Manager, a tool in AWS Systems Manager. It's used by Patch Manager to add compliance data to a specified association. This association is related to a patching operation that's [set up in a patch policy in Quick Setup](quick-setup-patch-manager.md). 

**Note**  
With `AWS-RunPatchBaseline`, if an `AssociationId` value is provided along with a patch policy baseline override, patching is done as a `PatchPolicy` operation, and the `ExecutionType` value reported in `AWS:ComplianceItem` is also `PatchPolicy`. If no `AssociationId` value is provided, patching is done as a `Command` operation, and the `ExecutionType` value reported in the `AWS:ComplianceItem` submitted is also `Command`. 

If you don't already have an association you want to use, you can create one by running the [create-association](https://docs.aws.amazon.com/cli/latest/reference/ssm/create-association.html) command. 

### Parameter name: `Snapshot ID`


**Usage**: Optional.

`Snapshot ID` is a unique ID (GUID) used by Patch Manager to ensure that a set of managed nodes that are patched in a single operation all have the exact same set of approved patches. Although the parameter is defined as optional, our best practice recommendation depends on whether or not you're running `AWS-RunPatchBaseline` in a maintenance window, as described in the following table.


**`AWS-RunPatchBaseline` best practices**  

| Mode | Best practice | Details | 
| --- | --- | --- | 
| Running AWS-RunPatchBaseline inside a maintenance window | Don't supply a Snapshot ID. Patch Manager will supply it for you. |  If you use a maintenance window to run `AWS-RunPatchBaseline`, you shouldn't provide your own generated Snapshot ID. In this scenario, Systems Manager provides a GUID value based on the maintenance window execution ID. This ensures that a correct ID is used for all the invocations of `AWS-RunPatchBaseline` in that maintenance window.  If you do specify a value in this scenario, note that the snapshot of the patch baseline might not remain in place for more than 3 days. After that, a new snapshot will be generated even if you specify the same ID after the snapshot expires.   | 
| Running AWS-RunPatchBaseline outside of a maintenance window | Generate and specify a custom GUID value for the Snapshot ID.¹ |  When you aren't using a maintenance window to run `AWS-RunPatchBaseline`, we recommend that you generate and specify a unique Snapshot ID for each patch baseline, particularly if you're running the `AWS-RunPatchBaseline` document on multiple managed nodes in the same operation. If you don't specify an ID in this scenario, Systems Manager generates a different Snapshot ID for each managed node the command is sent to. This might result in varying sets of patches being specified among the managed nodes. For instance, say that you're running the `AWS-RunPatchBaseline` document directly through Run Command, a tool in AWS Systems Manager, and targeting a group of 50 managed nodes. Specifying a custom Snapshot ID results in the generation of a single baseline snapshot that is used to evaluate and patch all the nodes, ensuring that they end up in a consistent state.   | 
|  ¹ You can use any tool capable of generating a GUID to generate a value for the Snapshot ID parameter. For example, in PowerShell, you can use the `New-Guid` cmdlet to generate a GUID in the format of `12345699-9405-4f69-bc5e-9315aEXAMPLE`.  | 
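For instance, if you script your patching operations in Python rather than PowerShell, the standard library can generate a suitable GUID (this is a sketch equivalent to the `New-Guid` cmdlet mentioned above, not an AWS API call):

```python
# Generate a GUID suitable for the Snapshot ID parameter of AWS-RunPatchBaseline.
import re
import uuid

snapshot_id = str(uuid.uuid4())
print(snapshot_id)

# A GUID in this form follows the standard 8-4-4-4-12 lowercase-hex layout.
assert re.fullmatch(r"[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}", snapshot_id)
```

You would then pass `snapshot_id` as the `Snapshot ID` parameter value when sending the command to your target nodes.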

### Parameter name: `InstallOverrideList`


**Usage**: Optional.

Using `InstallOverrideList`, you specify an https URL or an Amazon S3 path-style URL to a list of patches to be installed. This patch installation list, which you maintain in YAML format, overrides the patches specified by the current default patch baseline. This provides you with more granular control over which patches are installed on your managed nodes. 

**Important**  
The `InstallOverrideList` file name can't contain the following characters: backtick (`), single quote ('), double quote ("), and dollar sign ($).

The patching operation behavior when using the `InstallOverrideList` parameter differs between Linux and macOS managed nodes and Windows Server managed nodes. On Linux and macOS, Patch Manager attempts to apply patches included in the `InstallOverrideList` patch list that are present in any repository enabled on the node, whether or not the patches match the patch baseline rules. On Windows Server nodes, however, patches in the `InstallOverrideList` patch list are applied *only* if they also match the patch baseline rules.

On Linux and macOS managed nodes, patches specified in the `InstallOverrideList` are applied only as updates to packages that are already installed on the node. If the `InstallOverrideList` includes patches for packages that aren't currently installed on the node, those patches aren't installed.

Be aware that compliance reports reflect patch states according to what's specified in the patch baseline, not what you specify in an `InstallOverrideList` patch list. In other words, `Scan` operations ignore the `InstallOverrideList` parameter. This ensures that compliance reports consistently reflect patch states according to policy rather than what was approved for a specific patching operation. 

**Note**  
When you're patching a node that only uses IPv6, ensure that the provided URL is reachable from the node. If the SSM Agent config option `UseDualStackEndpoint` is set to `true`, then a dualstack S3 client is used when an S3 URL is provided. See [Tutorial: Patching a server in an IPv6 only environment](patch-manager-server-patching-iPv6-tutorial.md) for more information on configuring the agent to use dualstack.

For a description of how you might use the `InstallOverrideList` parameter to apply different types of patches to a target group, on different maintenance window schedules, while still using a single patch baseline, see [Sample scenario for using the InstallOverrideList parameter in `AWS-RunPatchBaseline` or `AWS-RunPatchBaselineAssociation`](patch-manager-override-lists.md).

**Valid URL formats**

**Note**  
If your file is stored in a publicly available bucket, you can specify either an https URL format or an Amazon S3 path-style URL. If your file is stored in a private bucket, you must specify an Amazon S3 path-style URL.
+ **https URL format**:

  ```
  https://s3.aws-api-domain/amzn-s3-demo-bucket/my-windows-override-list.yaml
  ```
+ **Amazon S3 path-style URL**:

  ```
  s3://amzn-s3-demo-bucket/my-windows-override-list.yaml
  ```

**Valid YAML content formats**

The format you use to specify patches in your list depends on the operating system of your managed node. The general format, however, is as follows:

```
patches:
    - 
        id: '{patch-id}'
        title: '{patch-title}'
        {additional-fields}:{values}
```

Although you can provide additional fields in your YAML file, they're ignored during patch operations.

In addition, we recommend verifying that the format of your YAML file is valid before adding or updating the list in your S3 bucket. For more information about the YAML format, see [yaml.org](http://www.yaml.org). For validation tool options, perform a web search for "yaml format validators".
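As a lightweight local pre-check before uploading, you can also verify the basic structure yourself. The following sketch is not a full YAML validator (a dedicated validator is still recommended); it only confirms the top-level `patches:` key and that every entry marker carries the required `id` field:

```python
# Lightweight structural pre-check for an InstallOverrideList document.
# A sketch, not a full YAML validator: it counts '-' entry markers
# against 'id:' fields and checks the top-level 'patches:' key.
import re

def check_patch_list(text):
    """Return a list of structural problems found in the patch list text."""
    problems = []
    if not text.lstrip().startswith("patches:"):
        problems.append("document must start with a top-level 'patches:' key")
    entries = len(re.findall(r"^\s*-\s*$", text, flags=re.MULTILINE))
    ids = len(re.findall(r"^\s*id:\s*\S", text, flags=re.MULTILINE))
    if ids < entries:
        problems.append(f"{entries - ids} entries are missing the required 'id' field")
    return problems

good = "patches:\n    -\n        id: 'kernel.x86_64'\n"
bad = "patches:\n    -\n        title: 'no id on this entry'\n"
print(check_patch_list(good))  # []
print(check_patch_list(bad))   # one problem reported
```

A check like this catches the most common authoring mistake (an entry without an `id`) before the file ever reaches your S3 bucket.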

------
#### [ Linux ]

**id**  
The **id** field is required. Use it to specify patches using the package name and architecture. For example: `'dhclient.x86_64'`. You can use wildcards in id to indicate multiple packages. For example: `'dhcp*'` and `'dhcp*1.*'`.

**Title**  
The **title** field is optional, but on Linux systems it does provide additional filtering capabilities. If you use **title**, it should contain the package version information in one of the following formats:

YUM/SUSE Linux Enterprise Server (SLES):

```
{name}.{architecture}:{epoch}:{version}-{release}
```

APT:

```
{name}.{architecture}:{version}
```

For Linux patch titles, you can use one or more wildcards in any position to expand the number of package matches. For example: `'*32:9.8.2-0.*.rc1.57.amzn1'`. 
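The wildcards behave like ordinary glob-style patterns, so you can test a candidate **title** pattern against a concrete version string before adding it to your list. A sketch using Python's standard library (the candidate title below is a made-up example; Patch Manager performs its own matching during the patching operation):

```python
# Glob-style check of an InstallOverrideList 'title' pattern against a
# hypothetical full package title string.
from fnmatch import fnmatchcase

pattern = "*32:9.8.2-0.*.rc1.57.amzn1"  # wildcard title from the example above
candidate = "bind.x86_64:32:9.8.2-0.68.rc1.57.amzn1"  # hypothetical full title

print(fnmatchcase(candidate, pattern))  # True: both wildcards match
```

`fnmatchcase` is used rather than `fnmatch` so that matching stays case-sensitive regardless of platform.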

For example: 
+ apt package version 1.2.25 is currently installed on your managed node, but version 1.2.27 is now available. 
+ You add apt.amd64 version 1.2.27 to the patch list. It depends on apt-utils.amd64 version 1.2.27, but apt-utils.amd64 version 1.2.25 is specified in the list. 

In this case, apt version 1.2.27 is blocked from installation and reported as `Failed-NonCompliant`.

------
#### [ Windows Server ]

**id**  
The **id** field is required. Use it to specify patches using Microsoft Knowledge Base IDs (for example, KB2736693) and Microsoft Security Bulletin IDs (for example, MS17-023). 

Any other fields you want to provide in a patch list for Windows are optional and are for your own informational use only. You can use additional fields such as **title**, **classification**, **severity**, or anything else for providing more detailed information about the specified patches.

------
#### [ macOS ]

**id**  
The **id** field is required. The value for the **id** field can be supplied using either a `{package-name}.{package-version}` format or a `{package-name}` format.

------

**Sample patch lists**
+ **Amazon Linux 2**

  ```
  patches:
      -
          id: 'kernel.x86_64'
      -
          id: 'bind*.x86_64'
          title: '39.11.4-26.P2.amzn2.5.2'
      -
          id: 'glibc*'
      -
          id: 'dhclient*'
          title: '*4.2.5-58.amzn2'
      -
          id: 'dhcp*'
          title: '*4.2.5-77.amzn2'
  ```
+ **Debian Server**

  ```
  patches:
      -
          id: 'apparmor.amd64'
          title: '2.10.95-0ubuntu2.9'
      -
          id: 'cryptsetup.amd64'
          title: '*2:1.6.6-5ubuntu2.1'
      -
          id: 'cryptsetup-bin.*'
          title: '*2:1.6.6-5ubuntu2.1'
      -
          id: 'apt.amd64'
          title: '*1.2.27'
      -
          id: 'apt-utils.amd64'
          title: '*1.2.25'
  ```
+ **macOS**

  ```
  patches:
      -
          id: 'XProtectPlistConfigData'
      -
          id: 'MRTConfigData.1.61'
      -
          id: 'Command Line Tools for Xcode.11.5'
      -
          id: 'Gatekeeper Configuration Data'
  ```
+ **Oracle Linux**

  ```
  patches:
      -
          id: 'audit-libs.x86_64'
          title: '*2.8.5-4.el7'
      -
          id: 'curl.x86_64'
          title: '*.el7'
      -
          id: 'grub2.x86_64'
          title: 'grub2.x86_64:1:2.02-0.81.0.1.el7'
      -
          id: 'grub2.x86_64'
          title: 'grub2.x86_64:1:*-0.81.0.1.el7'
  ```
+ **Red Hat Enterprise Linux (RHEL)**

  ```
  patches:
      -
          id: 'NetworkManager.x86_64'
          title: '*1:1.10.2-14.el7_5'
      -
          id: 'NetworkManager-*.x86_64'
          title: '*1:1.10.2-14.el7_5'
      -
          id: 'audit.x86_64'
          title: '*0:2.8.1-3.el7'
      -
          id: 'dhclient.x86_64'
          title: '*.el7_5.1'
      -
          id: 'dhcp*.x86_64'
          title: '*12:5.2.5-68.el7'
  ```
+ **SUSE Linux Enterprise Server (SLES)**

  ```
  patches:
      -
          id: 'amazon-ssm-agent.x86_64'
      -
          id: 'binutils'
          title: '*0:2.26.1-9.12.1'
      -
          id: 'glibc*.x86_64'
          title: '*2.19*'
      -
          id: 'dhcp*'
          title: '0:4.3.3-9.1'
      -
          id: 'lib*'
  ```
+ **Ubuntu Server**

  ```
  patches:
      -
          id: 'apparmor.amd64'
          title: '2.10.95-0ubuntu2.9'
      -
          id: 'cryptsetup.amd64'
          title: '*2:1.6.6-5ubuntu2.1'
      -
          id: 'cryptsetup-bin.*'
          title: '*2:1.6.6-5ubuntu2.1'
      -
          id: 'apt.amd64'
          title: '*1.2.27'
      -
          id: 'apt-utils.amd64'
          title: '*1.2.25'
  ```
+ **Windows**

  ```
  patches:
      -
          id: 'KB4284819'
          title: '2018-06 Cumulative Update for Windows Server 2016 (1709) for x64-based Systems (KB4284819)'
      -
          id: 'KB4284833'
      -
          id: 'KB4284835'
          title: '2018-06 Cumulative Update for Windows Server 2016 (1803) for x64-based Systems (KB4284835)'
      -
          id: 'KB4284880'
      -
          id: 'KB4338814'
  ```

### Parameter name: `RebootOption`


**Usage**: Optional.

**Options**: `RebootIfNeeded` | `NoReboot` 

**Default**: `RebootIfNeeded`

**Warning**  
The default option is `RebootIfNeeded`. Be sure to select the correct option for your use case. For example, if your managed nodes must reboot immediately to complete a configuration process, choose `RebootIfNeeded`. Or, if you need to maintain managed node availability until a scheduled reboot time, choose `NoReboot`.

**Important**  
We don’t recommend using Patch Manager for patching cluster instances in Amazon EMR (previously called Amazon Elastic MapReduce). In particular, don’t select the `RebootIfNeeded` option for the `RebootOption` parameter. (This option is available in the SSM Command documents for patching `AWS-RunPatchBaseline`, `AWS-RunPatchBaselineAssociation`, and `AWS-RunPatchBaselineWithHooks`.)  
The underlying commands for patching using Patch Manager use `yum` and `dnf` commands. Therefore, the operations result in incompatibilities because of how packages are installed. For information about the preferred methods for updating software on Amazon EMR clusters, see [Using the default AMI for Amazon EMR](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-default-ami.html) in the *Amazon EMR Management Guide*.

RebootIfNeeded  
When you choose the `RebootIfNeeded` option, the managed node is rebooted in either of the following cases:   
+ Patch Manager installed one or more patches. 

  Patch Manager doesn't evaluate whether a reboot is *required* by the patch. The system is rebooted even if the patch doesn't require a reboot.
+ Patch Manager detects one or more patches with a status of `INSTALLED_PENDING_REBOOT` during the `Install` operation. 

  The `INSTALLED_PENDING_REBOOT` status can mean that the option `NoReboot` was selected the last time the `Install` operation was run, or that a patch was installed outside of Patch Manager since the last time the managed node was rebooted.
Rebooting managed nodes in these two cases ensures that updated packages are flushed from memory and keeps patching and rebooting behavior consistent across all operating systems.

NoReboot  
When you choose the `NoReboot` option, Patch Manager doesn't reboot a managed node even if it installed patches during the `Install` operation. This option is useful if you know that your managed nodes don't require rebooting after patches are applied, or you have applications or processes running on a node that shouldn't be disrupted by a patching operation reboot. It's also useful when you want more control over the timing of managed node reboots, such as by using a maintenance window.  
If you choose the `NoReboot` option and a patch is installed, the patch is assigned a status of `InstalledPendingReboot`. The managed node itself, however, is marked as `Non-Compliant`. After a reboot occurs and a `Scan` operation is run, the managed node status is updated to `Compliant`.  
The `NoReboot` option only prevents operating system-level restarts. Service-level restarts can still occur as part of the patching process. For example, when Docker is updated, dependent services such as Amazon Elastic Container Service might automatically restart even with `NoReboot` enabled. If you have critical services that must not be disrupted, consider additional measures such as temporarily removing instances from service or scheduling patching during maintenance windows.

**Patch installation tracking file**: To track patch installation, especially patches that were installed since the last system reboot, Systems Manager maintains a file on the managed node.

**Important**  
Don't delete or modify the tracking file. If this file is deleted or corrupted, the patch compliance report for the managed node is inaccurate. If this happens, reboot the node and run a patch Scan operation to restore the file.

This tracking file is stored in the following locations on your managed nodes:
+ Linux operating systems: 
  + `/var/log/amazon/ssm/patch-configuration/patch-states-configuration.json`
  + `/var/log/amazon/ssm/patch-configuration/patch-inventory-from-last-operation.json`
+ Windows Server operating system:
  + `C:\ProgramData\Amazon\PatchBaselineOperations\State\PatchStatesConfiguration.json`
  + `C:\ProgramData\Amazon\PatchBaselineOperations\State\PatchInventoryFromLastOperation.json`

### Parameter name: `BaselineOverride`


**Usage**: Optional.

You can define patching preferences at runtime using the `BaselineOverride` parameter. This baseline override is maintained as a JSON object in an S3 bucket. It ensures patching operations use the provided baselines that match the host operating system instead of applying the rules from the default patch baseline.

**Important**  
The `BaselineOverride` file name can't contain the following characters: backtick (`), single quote ('), double quote ("), and dollar sign ($).

For more information about how to use the `BaselineOverride` parameter, see [Using the BaselineOverride parameter](patch-manager-baselineoverride-parameter.md).

### Parameter name: `StepTimeoutSeconds`


**Usage**: Optional.

The time allowed for a command to complete, between 1 and 36,000 seconds (10 hours), before it is considered to have failed.

# SSM Command document for patching: `AWS-RunPatchBaselineAssociation`
AWS-RunPatchBaselineAssociation

Like the `AWS-RunPatchBaseline` document, `AWS-RunPatchBaselineAssociation` performs patching operations on instances for both security-related and other types of updates. You can also use the document `AWS-RunPatchBaselineAssociation` to apply patches for both operating systems and applications. (On Windows Server, application support is limited to updates for applications released by Microsoft.)

This document supports Amazon Elastic Compute Cloud (Amazon EC2) instances for Linux, macOS, and Windows Server. It does not support non-EC2 nodes in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment. The document will perform the appropriate actions for each platform, invoking a Python module on Linux and macOS instances, and a PowerShell module on Windows instances.

`AWS-RunPatchBaselineAssociation`, however, differs from `AWS-RunPatchBaseline` in the following ways: 
+ `AWS-RunPatchBaselineAssociation` is intended for use primarily with State Manager associations created using [Quick Setup](systems-manager-quick-setup.md), a tool in AWS Systems Manager. Specifically, when you use the Quick Setup Host Management configuration type, if you choose the option **Scan instances for missing patches daily**, the system uses `AWS-RunPatchBaselineAssociation` for the operation.

  In most cases, however, when setting up your own patching operations, you should choose [`AWS-RunPatchBaseline`](patch-manager-aws-runpatchbaseline.md) or [`AWS-RunPatchBaselineWithHooks`](patch-manager-aws-runpatchbaselinewithhooks.md) instead of `AWS-RunPatchBaselineAssociation`.
+ When you use the `AWS-RunPatchBaselineAssociation` document, you can specify a tag key pair in the document's `BaselineTags` parameter field. If a custom patch baseline in your AWS account shares these tags, Patch Manager, a tool in AWS Systems Manager, uses that tagged baseline when it runs on the target instances instead of the currently specified "default" patch baseline for the operating system type.
**Note**  
If you choose to use `AWS-RunPatchBaselineAssociation` in patching operations other than those set up using Quick Setup, and you want to use its optional `BaselineTags` parameter, you must provide some additional permissions to the [instance profile](setup-instance-permissions.md) for Amazon Elastic Compute Cloud (Amazon EC2) instances. For more information, see [Parameter name: `BaselineTags`](#patch-manager-aws-runpatchbaselineassociation-parameters-baselinetags).

  Both of the following formats are valid for your `BaselineTags` parameter:

  `Key=tag-key,Values=tag-value`

  `Key=tag-key,Values=tag-value1,tag-value2,tag-value3`
**Important**  
Tag keys and values can't contain the following characters: backtick (`), single quote ('), double quote ("), and dollar sign ($).
+ When `AWS-RunPatchBaselineAssociation` runs, the patch compliance data it collects is recorded using the `PutComplianceItems` API command instead of the `PutInventory` command, which is used by `AWS-RunPatchBaseline`. This difference means that patch compliance information is stored and reported per a specific *association*. Patch compliance data generated outside of this association isn't overwritten.
+ The patch compliance information reported after running `AWS-RunPatchBaselineAssociation` indicates whether or not an instance is in compliance. It doesn't include patch-level details, as demonstrated by the output of the following AWS Command Line Interface (AWS CLI) command. The command filters on `Association` as the compliance type:

  ```
  aws ssm list-compliance-items \
      --resource-ids "i-02573cafcfEXAMPLE" \
      --resource-types "ManagedInstance" \
      --filters "Key=ComplianceType,Values=Association,Type=EQUAL" \
      --region us-east-2
  ```

  The system returns information like the following.

  ```
  {
      "ComplianceItems": [
          {
              "Status": "NON_COMPLIANT", 
              "Severity": "UNSPECIFIED", 
              "Title": "MyPatchAssociation", 
              "ResourceType": "ManagedInstance", 
              "ResourceId": "i-02573cafcfEXAMPLE", 
              "ComplianceType": "Association", 
              "Details": {
                  "DocumentName": "AWS-RunPatchBaselineAssociation", 
                  "PatchBaselineId": "pb-0c10e65780EXAMPLE", 
                  "DocumentVersion": "1"
              }, 
              "ExecutionSummary": {
                  "ExecutionTime": 1590698771.0
              }, 
              "Id": "3e5d5694-cd07-40f0-bbea-040e6EXAMPLE"
          }
      ]
  }
  ```
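
If you process `list-compliance-items` output programmatically, a small filter can pull out the non-compliant resources. The following Python sketch is illustrative only; the function name is hypothetical and not part of the AWS SDK, and the input mirrors the response shape shown above:

```python
# Hypothetical helper: reduce a ListComplianceItems-style response to the
# resources that aren't compliant. Illustrative names, not AWS SDK code.

def non_compliant_resources(response):
    """Return (ResourceId, Title) pairs for items whose Status isn't COMPLIANT."""
    return [
        (item["ResourceId"], item["Title"])
        for item in response.get("ComplianceItems", [])
        if item["Status"] != "COMPLIANT"
    ]

sample = {
    "ComplianceItems": [
        {"Status": "NON_COMPLIANT", "Title": "MyPatchAssociation",
         "ResourceId": "i-02573cafcfEXAMPLE", "ComplianceType": "Association"},
        {"Status": "COMPLIANT", "Title": "OtherAssociation",
         "ResourceId": "i-07782c72faEXAMPLE", "ComplianceType": "Association"},
    ]
}

print(non_compliant_resources(sample))
```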

If a tag key pair value has been specified as a parameter for the `AWS-RunPatchBaselineAssociation` document, Patch Manager searches for a custom patch baseline that matches the operating system type and has been tagged with that same tag-key pair. This search isn't limited to the current specified default patch baseline or the baseline assigned to a patch group. If no baseline is found with the specified tags, Patch Manager next looks for a patch group, if one was specified in the command that runs `AWS-RunPatchBaselineAssociation`. If no patch group is matched, Patch Manager falls back to the current default patch baseline for the operating system type in the account.

If more than one patch baseline is found with the tags specified in the `AWS-RunPatchBaselineAssociation` document, Patch Manager returns an error message indicating that only one patch baseline can be tagged with that key-value pair in order for the operation to proceed.

**Note**  
On Linux nodes, the appropriate package manager for each node type is used to install packages:   
Amazon Linux 2, Oracle Linux, and RHEL instances use YUM. For YUM operations, Patch Manager requires `Python 2.6` or a later supported version (2.6 - 3.12). Amazon Linux 2023 uses DNF. For DNF operations, Patch Manager requires a supported version of `Python 2` or `Python 3` (2.6 - 3.12).
Debian Server and Ubuntu Server instances use APT. For APT operations, Patch Manager requires a supported version of `Python 3` (3.0 - 3.12).

After a scan is complete, or after all approved and applicable updates have been installed, with reboots performed as necessary, patch compliance information is generated on an instance and reported back to the Patch Compliance service. 

**Note**  
If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaselineAssociation` document, the instance isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](#patch-manager-aws-runpatchbaselineassociation-parameters-norebootoption).

For information about viewing patch compliance data, see [About patch compliance](compliance-about.md#compliance-monitor-patch). 

## `AWS-RunPatchBaselineAssociation` parameters


`AWS-RunPatchBaselineAssociation` supports five parameters. The `Operation` and `AssociationId` parameters are required. The `InstallOverrideList`, `RebootOption`, and `BaselineTags` parameters are optional. 

**Topics**
+ [

### Parameter name: `Operation`
](#patch-manager-aws-runpatchbaselineassociation-parameters-operation)
+ [

### Parameter name: `BaselineTags`
](#patch-manager-aws-runpatchbaselineassociation-parameters-baselinetags)
+ [

### Parameter name: `AssociationId`
](#patch-manager-aws-runpatchbaselineassociation-parameters-association-id)
+ [

### Parameter name: `InstallOverrideList`
](#patch-manager-aws-runpatchbaselineassociation-parameters-installoverridelist)
+ [

### Parameter name: `RebootOption`
](#patch-manager-aws-runpatchbaselineassociation-parameters-norebootoption)

### Parameter name: `Operation`


**Usage**: Required.

**Options**: `Scan` | `Install`. 

Scan  
When you choose the `Scan` option, `AWS-RunPatchBaselineAssociation` determines the patch compliance state of the instance and reports this information back to Patch Manager. `Scan` doesn't prompt updates to be installed or instances to be rebooted. Instead, the operation identifies where updates are missing that are approved and applicable to the instance. 

Install  
When you choose the `Install` option, `AWS-RunPatchBaselineAssociation` attempts to install the approved and applicable updates that are missing from the instance. Patch compliance information generated as part of an `Install` operation doesn't list any missing updates, but might report updates that are in a failed state if the installation of the update didn't succeed for any reason. Whenever an update is installed on an instance, the instance is rebooted to ensure the update is both installed and active. (Exception: If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaselineAssociation` document, the instance isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](#patch-manager-aws-runpatchbaselineassociation-parameters-norebootoption).)  
If a patch specified by the baseline rules is installed *before* Patch Manager updates the instance, the system might not reboot as expected. This can happen when a patch is installed manually by a user or installed automatically by another program, such as the `unattended-upgrades` package on Ubuntu Server.

### Parameter name: `BaselineTags`


**Usage**: Optional. 

`BaselineTags` is a unique tag key-value pair that you choose and assign to an individual custom patch baseline. You can specify one or more values for this parameter. Both of the following formats are valid:

`Key=tag-key,Values=tag-value`

`Key=tag-key,Values=tag-value1,tag-value2,tag-value3`

**Important**  
Tag keys and values can't contain the following characters: backtick (`), single quote ('), double quote ("), and dollar sign ($).

The `BaselineTags` value is used by Patch Manager to ensure that a set of instances that are patched in a single operation all have the exact same set of approved patches. When the patching operation runs, Patch Manager checks to see if a patch baseline for the operating system type is tagged with the same key-value pair you specify for `BaselineTags`. If there is a match, this custom patch baseline is used. If there isn't a match, a patch baseline is identified according to any patch group specified for the patching operation. If no patch group is matched, the AWS managed predefined patch baseline for that operating system is used. 
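
For illustration, the two `BaselineTags` formats can be parsed, and the forbidden characters rejected, with a few lines of Python. The function name and error handling here are hypothetical; this mirrors how the parameter string is written, not Patch Manager internals:

```python
# Illustrative parser for "Key=tag-key,Values=tag-value" and
# "Key=tag-key,Values=tag-value1,tag-value2,tag-value3".
# Backtick, single quote, double quote, and dollar sign aren't allowed
# in tag keys or values, so reject them up front.

FORBIDDEN = set("`'\"$")

def parse_baseline_tags(spec):
    """Return (key, [values]) parsed from a BaselineTags parameter string."""
    if not spec.startswith("Key="):
        raise ValueError("expected 'Key=tag-key,Values=...'")
    key_part, _, values_part = spec.partition(",Values=")
    key = key_part[len("Key="):]
    values = values_part.split(",") if values_part else []
    for token in [key, *values]:
        if set(token) & FORBIDDEN:
            raise ValueError(f"tag contains a forbidden character: {token!r}")
    return key, values

print(parse_baseline_tags("Key=Environment,Values=Prod,Staging"))
```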

**Additional permission requirements**  
If you use `AWS-RunPatchBaselineAssociation` in patching operations other than those set up using Quick Setup, and you want to use the optional `BaselineTags` parameter, you must add the following permissions to the [instance profile](setup-instance-permissions.md) for Amazon Elastic Compute Cloud (Amazon EC2) instances.

**Note**  
Quick Setup and `AWS-RunPatchBaselineAssociation` don't support on-premises servers and virtual machines (VMs).

```
{
    "Effect": "Allow",
    "Action": [
        "ssm:DescribePatchBaselines",
        "tag:GetResources"
    ],
    "Resource": "*"
},
{
    "Effect": "Allow",
    "Action": [
        "ssm:GetPatchBaseline",
        "ssm:DescribeEffectivePatchesForPatchBaseline"
    ],
    "Resource": "patch-baseline-arn"
}
```

Replace *patch-baseline-arn* with the Amazon Resource Name (ARN) of the patch baseline to which you want to provide access, in the format `arn:aws:ssm:us-east-2:123456789012:patchbaseline/pb-0c10e65780EXAMPLE`.

### Parameter name: `AssociationId`


**Usage**: Required.

`AssociationId` is the ID of an existing association in State Manager, a tool in AWS Systems Manager. It's used by Patch Manager to add compliance data to a specified association. This association is related to a patch `Scan` operation enabled in a [Host Management configuration created in Quick Setup](quick-setup-host-management.md). By sending patching results as association compliance data instead of inventory compliance data, existing inventory compliance information for your instances isn't overwritten after a patching operation, and neither is compliance data for other association IDs. If you don't already have an association you want to use, you can create one by running the [create-association](https://docs.aws.amazon.com/cli/latest/reference/ssm/create-association.html) command. For example:

------
#### [ Linux & macOS ]

```
aws ssm create-association \
    --name "AWS-RunPatchBaselineAssociation" \
    --association-name "MyPatchHostConfigAssociation" \
    --targets "Key=instanceids,Values=[i-02573cafcfEXAMPLE,i-07782c72faEXAMPLE,i-07782c72faEXAMPLE]" \
    --parameters "Operation=Scan" \
    --schedule-expression "cron(0 */30 * * * ? *)" \
    --sync-compliance "MANUAL" \
    --region us-east-2
```

------
#### [ Windows Server ]

```
aws ssm create-association ^
    --name "AWS-RunPatchBaselineAssociation" ^
    --association-name "MyPatchHostConfigAssociation" ^
    --targets "Key=instanceids,Values=[i-02573cafcfEXAMPLE,i-07782c72faEXAMPLE,i-07782c72faEXAMPLE]" ^
    --parameters "Operation=Scan" ^
    --schedule-expression "cron(0 */30 * * * ? *)" ^
    --sync-compliance "MANUAL" ^
    --region us-east-2
```

------
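
The CLI call above maps onto the SSM `CreateAssociation` API. The following sketch builds the same request as a plain Python dict; with boto3 installed and credentials configured, you would pass it as `boto3.client("ssm").create_association(**request)` (not run here). The instance IDs and names are the placeholders from the example:

```python
# Illustrative CreateAssociation request mirroring the CLI example above.
# Parameters values are lists of strings in the SSM API.
request = {
    "Name": "AWS-RunPatchBaselineAssociation",
    "AssociationName": "MyPatchHostConfigAssociation",
    "Targets": [{"Key": "instanceids",
                 "Values": ["i-02573cafcfEXAMPLE", "i-07782c72faEXAMPLE"]}],
    "Parameters": {"Operation": ["Scan"]},
    "ScheduleExpression": "cron(0 */30 * * * ? *)",
    "SyncCompliance": "MANUAL",
}
print(sorted(request))
```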

### Parameter name: `InstallOverrideList`


**Usage**: Optional.

Using `InstallOverrideList`, you specify an https URL or an Amazon Simple Storage Service (Amazon S3) path-style URL to a list of patches to be installed. This patch installation list, which you maintain in YAML format, overrides the patches specified by the current default patch baseline. This provides you with more granular control over which patches are installed on your instances.

**Important**  
The `InstallOverrideList` file name can't contain the following characters: backtick (`), single quote ('), double quote ("), and dollar sign ($).

The patching operation behavior when using the `InstallOverrideList` parameter differs between Linux & macOS managed nodes and Windows Server managed nodes. On Linux & macOS, Patch Manager attempts to apply patches included in the `InstallOverrideList` patch list that are present in any repository enabled on the node, whether or not the patches match the patch baseline rules. On Windows Server nodes, however, patches in the `InstallOverrideList` patch list are applied *only* if they also match the patch baseline rules.

On Linux & macOS managed nodes, patches specified in the `InstallOverrideList` are applied only as updates to packages that are already installed on the node. If the `InstallOverrideList` includes patches for packages that are not currently installed on the node, those patches are not installed.

Be aware that compliance reports reflect patch states according to what’s specified in the patch baseline, not what you specify in an `InstallOverrideList` list of patches. In other words, Scan operations ignore the `InstallOverrideList` parameter. This is to ensure that compliance reports consistently reflect patch states according to policy rather than what was approved for a specific patching operation. 

**Valid URL formats**

**Note**  
If your file is stored in a publicly available bucket, you can specify either an https URL format or an Amazon S3 path-style URL. If your file is stored in a private bucket, you must specify an Amazon S3 path-style URL.
+ **https URL format example**:

  ```
  https://s3.amazonaws.com/amzn-s3-demo-bucket/my-windows-override-list.yaml
  ```
+ **Amazon S3 path-style URL example**:

  ```
  s3://amzn-s3-demo-bucket/my-windows-override-list.yaml
  ```
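
If you maintain both URL forms, converting between them is a simple string operation. This sketch assumes the global `s3.amazonaws.com` endpoint shown in the https example; the function name is illustrative:

```python
# Convert an Amazon S3 path-style URL ("s3://bucket/key") into the
# https form used in the example above. Illustrative helper only.

def s3_to_https(s3_url, endpoint="https://s3.amazonaws.com"):
    """Convert 's3://bucket/key' to an https path-style URL."""
    if not s3_url.startswith("s3://"):
        raise ValueError("expected an s3:// URL")
    return f"{endpoint}/{s3_url[len('s3://'):]}"

print(s3_to_https("s3://amzn-s3-demo-bucket/my-windows-override-list.yaml"))
```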

**Valid YAML content formats**

The format you use to specify patches in your list depends on the operating system of your instance. The general format, however, is as follows:

```
patches:
    - 
        id: '{patch-id}'
        title: '{patch-title}'
        {additional-fields}:{values}
```

Although you can provide additional fields in your YAML file, they're ignored during patch operations.

In addition, we recommend verifying that the format of your YAML file is valid before adding or updating the list in your S3 bucket. For more information about the YAML format, see [yaml.org](http://www.yaml.org). For validation tool options, perform a web search for "yaml format validators".
+ Microsoft Windows

**id**  
The **id** field is required. Use it to specify patches using Microsoft Knowledge Base IDs (for example, KB2736693) and Microsoft Security Bulletin IDs (for example, MS17-023). 

  Any other fields you want to provide in a patch list for Windows are optional and are for your own informational use only. You can use additional fields such as **title**, **classification**, **severity**, or anything else for providing more detailed information about the specified patches.
+ Linux

**id**  
The **id** field is required. Use it to specify patches using the package name and architecture. For example: `'dhclient.x86_64'`. You can use wildcards in id to indicate multiple packages. For example: `'dhcp*'` and `'dhcp*1.*'`.

**title**  
The **title** field is optional, but on Linux systems it does provide additional filtering capabilities. If you use **title**, it should contain the package version information in one of the following formats:

  YUM/Red Hat Enterprise Linux (RHEL):

  ```
  {name}.{architecture}:{epoch}:{version}-{release}
  ```

  APT

  ```
  {name}.{architecture}:{version}
  ```

  For Linux patch titles, you can use one or more wildcards in any position to expand the number of package matches. For example: `'*32:9.8.2-0.*.rc1.57.amzn1'`. 

  For example: 
  + apt package version 1.2.25 is currently installed on your instance, but version 1.2.27 is now available. 
  + You add apt.amd64 version 1.2.27 to the patch list. It depends on apt-utils.amd64 version 1.2.27, but apt-utils.amd64 version 1.2.25 is specified in the list. 

  In this case, apt version 1.2.27 will be blocked from installation and reported as “Failed-NonCompliant.”

**Other fields**  
Any other fields you want to provide in a patch list for Linux are optional and are for your own informational use only. You can use additional fields such as **classification**, **severity**, or anything else for providing more detailed information about the specified patches.
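
Before uploading a list, you can sanity-check its structure locally. The following Python sketch validates an already-parsed patch list (for example, the result of loading the YAML with a YAML library) against the shape described above, and shows wildcard matching of `id` patterns with `fnmatch`, which behaves like the shell-style wildcards in the examples. The function names are illustrative, not Patch Manager behavior:

```python
# Local sanity check for an InstallOverrideList patch list after YAML
# parsing: a top-level 'patches' list whose entries each carry an 'id'.
from fnmatch import fnmatch

def validate_patch_list(doc):
    """Raise ValueError if the parsed document isn't a usable patch list."""
    if not isinstance(doc.get("patches"), list):
        raise ValueError("expected a top-level 'patches' list")
    for entry in doc["patches"]:
        if not isinstance(entry, dict) or "id" not in entry:
            raise ValueError(f"entry missing required 'id' field: {entry!r}")

def matching_packages(pattern, installed):
    """Return installed package names matched by a wildcard id like 'dhcp*'."""
    return [name for name in installed if fnmatch(name, pattern)]

doc = {"patches": [{"id": "dhclient.x86_64"}, {"id": "dhcp*"}]}
validate_patch_list(doc)  # no exception: structure is valid
print(matching_packages("dhcp*", ["dhclient.x86_64", "dhcp.x86_64", "bind.x86_64"]))
```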

**Sample patch lists**
+ **Windows**

  ```
  patches:
      -
          id: 'KB4284819'
          title: '2018-06 Cumulative Update for Windows Server 2016 (1709) for x64-based Systems (KB4284819)'
      -
          id: 'KB4284833'
      -
          id: 'KB4284835'
          title: '2018-06 Cumulative Update for Windows Server 2016 (1803) for x64-based Systems (KB4284835)'
      -
          id: 'KB4284880'
      -
          id: 'KB4338814'
  ```
+ **Amazon Linux 2**

  ```
  patches:
      -
          id: 'kernel.x86_64'
      -
          id: 'bind*.x86_64'
          title: '39.11.4-26.P2.amzn2.5.2'
      -
          id: 'glibc*'
      -
          id: 'dhclient*'
          title: '*4.2.5-58.amzn2'
      -
          id: 'dhcp*'
          title: '*4.2.5-77.amzn2'
  ```
+ **Red Hat Enterprise Linux (RHEL)**

  ```
  patches:
      -
          id: 'NetworkManager.x86_64'
          title: '*1:1.10.2-14.el7_5'
      -
          id: 'NetworkManager-*.x86_64'
          title: '*1:1.10.2-14.el7_5'
      -
          id: 'audit.x86_64'
          title: '*0:2.8.1-3.el7'
      -
          id: 'dhclient.x86_64'
          title: '*.el7_5.1'
      -
          id: 'dhcp*.x86_64'
          title: '*12:5.2.5-68.el7'
  ```
+ **SUSE Linux Enterprise Server (SLES)**

  ```
  patches:
      -
          id: 'amazon-ssm-agent.x86_64'
      -
          id: 'binutils'
          title: '*0:2.26.1-9.12.1'
      -
          id: 'glibc*.x86_64'
          title: '*2.19*'
      -
          id: 'dhcp*'
          title: '0:4.3.3-9.1'
      -
          id: 'lib*'
  ```
+ **Ubuntu Server**

  ```
  patches:
      -
          id: 'apparmor.amd64'
          title: '2.10.95-0ubuntu2.9'
      -
          id: 'cryptsetup.amd64'
          title: '*2:1.6.6-5ubuntu2.1'
      -
          id: 'cryptsetup-bin.*'
          title: '*2:1.6.6-5ubuntu2.1'
      -
          id: 'apt.amd64'
          title: '*1.2.27'
      -
          id: 'apt-utils.amd64'
          title: '*1.2.25'
  ```

### Parameter name: `RebootOption`


**Usage**: Optional.

**Options**: `RebootIfNeeded` | `NoReboot` 

**Default**: `RebootIfNeeded`

**Warning**  
The default option is `RebootIfNeeded`. Be sure to select the correct option for your use case. For example, if your instances must reboot immediately to complete a configuration process, choose `RebootIfNeeded`. Or, if you need to maintain instance availability until a scheduled reboot time, choose `NoReboot`.

**Important**  
The `NoReboot` option only prevents operating system-level restarts. Service-level restarts can still occur as part of the patching process. For example, when Docker is updated, dependent services such as Amazon Elastic Container Service might automatically restart even with `NoReboot` enabled. If you have critical services that must not be disrupted, consider additional measures such as temporarily removing instances from service or scheduling patching during maintenance windows.

**Important**  
We don’t recommend using Patch Manager for patching cluster instances in Amazon EMR (previously called Amazon Elastic MapReduce). In particular, don’t select the `RebootIfNeeded` option for the `RebootOption` parameter. (This option is available in the SSM Command documents for patching `AWS-RunPatchBaseline`, `AWS-RunPatchBaselineAssociation`, and `AWS-RunPatchBaselineWithHooks`.)  
The underlying commands for patching using Patch Manager use `yum` and `dnf` commands. Therefore, the operations result in incompatibilities because of how packages are installed. For information about the preferred methods for updating software on Amazon EMR clusters, see [Using the default AMI for Amazon EMR](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-default-ami.html) in the *Amazon EMR Management Guide*.

RebootIfNeeded  
When you choose the `RebootIfNeeded` option, the instance is rebooted in either of the following cases:   
+ Patch Manager installed one or more patches. 

  Patch Manager doesn't evaluate whether a reboot is *required* by the patch. The system is rebooted even if the patch doesn't require a reboot.
+ Patch Manager detects one or more patches with a status of `INSTALLED_PENDING_REBOOT` during the `Install` operation. 

  The `INSTALLED_PENDING_REBOOT` status can mean that the option `NoReboot` was selected the last time the `Install` operation was run, or that a patch was installed outside of Patch Manager since the last time the managed node was rebooted.
Rebooting instances in these two cases ensures that updated packages are flushed from memory and keeps patching and rebooting behavior consistent across all operating systems.

NoReboot  
When you choose the `NoReboot` option, Patch Manager doesn't reboot an instance even if it installed patches during the `Install` operation. This option is useful if you know that your instances don't require rebooting after patches are applied, or you have applications or processes running on an instance that shouldn't be disrupted by a patching operation reboot. It's also useful when you want more control over the timing of instance reboots, such as by using a maintenance window.
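
The two `RebootIfNeeded` conditions described above reduce to a small predicate. This is illustrative logic with simplified inputs, not SSM Agent source:

```python
# Sketch of the post-Install reboot decision: reboot when RebootIfNeeded
# is selected and either patches were installed or any patch is in the
# INSTALLED_PENDING_REBOOT state. Names are illustrative.

def should_reboot(reboot_option, patches_installed, patch_states):
    """Decide whether the instance is rebooted after an Install operation."""
    if reboot_option == "NoReboot":
        return False
    return patches_installed > 0 or "INSTALLED_PENDING_REBOOT" in patch_states

print(should_reboot("RebootIfNeeded", 0, ["INSTALLED", "INSTALLED_PENDING_REBOOT"]))
```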

**Patch installation tracking file**: To track patch installation, especially patches that have been installed since the last system reboot, Systems Manager maintains a file on the managed instance.

**Important**  
Don't delete or modify the tracking file. If this file is deleted or corrupted, the patch compliance report for the instance is inaccurate. If this happens, reboot the instance and run a patch Scan operation to restore the file.

This tracking file is stored in the following locations on your managed instances:
+ Linux operating systems: 
  + `/var/log/amazon/ssm/patch-configuration/patch-states-configuration.json`
  + `/var/log/amazon/ssm/patch-configuration/patch-inventory-from-last-operation.json`
+ Windows Server operating system:
  + `C:\ProgramData\Amazon\PatchBaselineOperations\State\PatchStatesConfiguration.json`
  + `C:\ProgramData\Amazon\PatchBaselineOperations\State\PatchInventoryFromLastOperation.json`

# SSM Command document for patching: `AWS-RunPatchBaselineWithHooks`
AWS-RunPatchBaselineWithHooks

AWS Systems Manager supports `AWS-RunPatchBaselineWithHooks`, a Systems Manager document (SSM document) for Patch Manager, a tool in AWS Systems Manager. This SSM document performs patching operations on managed nodes for both security related and other types of updates. 

`AWS-RunPatchBaselineWithHooks` differs from `AWS-RunPatchBaseline` in the following ways:
+ **A wrapper document** – `AWS-RunPatchBaselineWithHooks` is a wrapper for `AWS-RunPatchBaseline` and relies on `AWS-RunPatchBaseline` for some of its operations.
+ **The `Install` operation** – `AWS-RunPatchBaselineWithHooks` supports lifecycle hooks that run at designated points during managed node patching. Because patch installations sometimes require managed nodes to reboot, the patching operation is divided into two events, for a total of three hooks that support custom functionality. The first hook is before the `Install with NoReboot` operation. The second hook is after the `Install with NoReboot` operation. The third hook is available after the reboot of the managed node.
+ **No custom patch list support** – `AWS-RunPatchBaselineWithHooks` doesn't support the `InstallOverrideList` parameter.
+ **SSM Agent support** – `AWS-RunPatchBaselineWithHooks` requires that SSM Agent 3.0.502 or later be installed on the managed node to patch.

When the document is run, it uses the patch baseline currently specified as the "default" for an operating system type if no patch group is specified. Otherwise, it uses the patch baseline that is associated with the patch group. For information about patch groups, see [Patch groups](patch-manager-patch-groups.md). 

You can use the document `AWS-RunPatchBaselineWithHooks` to apply patches for both operating systems and applications. (On Windows Server, application support is limited to updates for applications released by Microsoft.)

This document supports Linux and Windows Server managed nodes. The document will perform the appropriate actions for each platform.

**Note**  
`AWS-RunPatchBaselineWithHooks` isn't supported on macOS.

------
#### [ Linux ]

On Linux managed nodes, the `AWS-RunPatchBaselineWithHooks` document invokes a Python module, which in turn downloads a snapshot of the patch baseline that applies to the managed node. This patch baseline snapshot uses the defined rules and lists of approved and blocked patches to drive the appropriate package manager for each node type: 
+ Amazon Linux 2, Oracle Linux, and RHEL 7 managed nodes use YUM. For YUM operations, Patch Manager requires `Python 2.6` or a later supported version (2.6 - 3.12). Amazon Linux 2023 uses DNF. For DNF operations, Patch Manager requires a supported version of `Python 2` or `Python 3` (2.6 - 3.12).
+ RHEL 8 managed nodes use DNF. For DNF operations, Patch Manager requires a supported version of `Python 2` or `Python 3` (2.6 - 3.12). (Neither version is installed by default on RHEL 8. You must install one or the other manually.)
+ Debian Server and Ubuntu Server instances use APT. For APT operations, Patch Manager requires a supported version of `Python 3` (3.0 - 3.12).

------
#### [ Windows Server ]

On Windows Server managed nodes, the `AWS-RunPatchBaselineWithHooks` document downloads and invokes a PowerShell module, which in turn downloads a snapshot of the patch baseline that applies to the managed node. This patch baseline snapshot contains a list of approved patches that is compiled by querying the patch baseline against a Windows Server Update Services (WSUS) server. This list is passed to the Windows Update API, which controls downloading and installing the approved patches as appropriate. 

------

Each snapshot is specific to an AWS account, patch group, operating system, and snapshot ID. The snapshot is delivered through a presigned Amazon Simple Storage Service (Amazon S3) URL, which expires 24 hours after the snapshot is created. After the URL expires, however, if you want to apply the same snapshot content to other managed nodes, you can generate a new presigned Amazon S3 URL up to three days after the snapshot was created. To do this, use the [https://docs.aws.amazon.com/cli/latest/reference/ssm/get-deployable-patch-snapshot-for-instance.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/get-deployable-patch-snapshot-for-instance.html) command. 

After all approved and applicable updates have been installed, with reboots performed as necessary, patch compliance information is generated on a managed node and reported back to Patch Manager. 

If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaselineWithHooks` document, the managed node isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](patch-manager-aws-runpatchbaseline.md#patch-manager-aws-runpatchbaseline-parameters-norebootoption).

**Important**  
While the `NoReboot` option prevents operating system restarts, it does not prevent service-level restarts that might occur when certain packages are updated. For example, updating packages like Docker may trigger automatic restarts of dependent services (such as container orchestration services) even when `NoReboot` is specified.

For information about viewing patch compliance data, see [About patch compliance](compliance-about.md#compliance-monitor-patch).

## `AWS-RunPatchBaselineWithHooks` operational steps


When the `AWS-RunPatchBaselineWithHooks` document runs, the following steps are performed:

1. **Scan** - A `Scan` operation using `AWS-RunPatchBaseline` is run on the managed node, and a compliance report is generated and uploaded. 

1. **Verify local patch states** - A script is run to determine what steps will be performed based on the selected operation and `Scan` result from Step 1. 

   1. If the selected operation is `Scan`, the operation is marked complete. The operation concludes. 

   1. If the selected operation is `Install`, Patch Manager evaluates the `Scan` result from Step 1 to determine what to run next: 

      1. If no missing patches are detected, and no pending reboots required, the operation proceeds directly to the final step (Step 8), which includes a hook you have provided. Any steps in between are skipped. 

      1. If no missing patches are detected, but there are pending reboots required and the selected reboot option is `NoReboot`, the operation proceeds directly to the final step (Step 8), which includes a hook you have provided. Any steps in between are skipped. 

      1. Otherwise, the operation proceeds to the next step.

1. **Pre-patch hook operation** - The SSM document you have provided for the first lifecycle hook, `PreInstallHookDocName`, is run on the managed node. 

1. **Install with NoReboot** - An `Install` operation with the reboot option of `NoReboot` using `AWS-RunPatchBaseline` is run on the managed node, and a compliance report is generated and uploaded. 

1. **Post-install hook operation** - The SSM document you have provided for the second lifecycle hook, `PostInstallHookDocName`, is run on the managed node.

1. **Verify reboot** - A script runs to determine whether a reboot is needed for the managed node and what steps to run:

   1. If the selected reboot option is `NoReboot`, the operation proceeds directly to the final step (Step 8), which includes a hook you have provided. Any steps in between are skipped. 

   1. If the selected reboot option is `RebootIfNeeded`, Patch Manager checks whether there are any pending reboots required from the inventory collected in Step 4. This means that the operation continues to Step 7 and the managed node is rebooted in either of the following cases:

      1. Patch Manager installed one or more patches. (Patch Manager doesn't evaluate whether a reboot is required by the patch. The system is rebooted even if the patch doesn't require a reboot.)

      1. Patch Manager detects one or more patches with a status of `INSTALLED_PENDING_REBOOT` during the Install operation. The `INSTALLED_PENDING_REBOOT` status can mean that the option `NoReboot` was selected the last time the Install operation was run, or that a patch was installed outside of Patch Manager since the last time the managed node was rebooted. 

      If no patches meeting these criteria are found, the managed node patching operation is complete, and the operation proceeds directly to the final step (Step 8), which includes a hook you have provided. Any steps in between are skipped.

1. **Reboot and report** - An installation operation with the reboot option of `RebootIfNeeded` runs on the managed node using `AWS-RunPatchBaseline`, and a compliance report is generated and uploaded. 

1. **Post-reboot hook operation** - The SSM document you have provided for the third lifecycle hook, `OnExitHookDocName`, is run on the managed node. 

For a `Scan` operation, if Step 1 fails, the process of running the document stops and the step is reported as failed, although subsequent steps are reported as successful. 

For an `Install` operation, if any of the `aws:runDocument` steps fail, that step is reported as failed and the operation proceeds directly to the final step (Step 8), which includes a hook you have provided. The final step reports the status of its own operation result, and the skipped steps in between are reported as successful.
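
The decision points in Steps 2 and 6 above can be sketched as a function that returns the ordered step names the operation would run. The hook step names stand in for the documents you supply via `PreInstallHookDocName`, `PostInstallHookDocName`, and `OnExitHookDocName`; this is a simplified illustration, not the document's actual implementation:

```python
# Minimal model of the AWS-RunPatchBaselineWithHooks flow: Scan stops
# after the scan; Install skips to the exit hook when nothing needs doing;
# otherwise hooks and install run, with a reboot step under RebootIfNeeded.

def planned_steps(operation, missing_patches, pending_reboot, reboot_option):
    steps = ["scan"]
    if operation == "Scan":
        return steps
    skip_install = missing_patches == 0 and (
        not pending_reboot or reboot_option == "NoReboot")
    if skip_install:
        return steps + ["on_exit_hook"]
    steps += ["pre_install_hook", "install_no_reboot", "post_install_hook"]
    if reboot_option == "RebootIfNeeded":
        steps.append("reboot_and_report")
    return steps + ["on_exit_hook"]

print(planned_steps("Install", missing_patches=3,
                    pending_reboot=False, reboot_option="RebootIfNeeded"))
```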

## `AWS-RunPatchBaselineWithHooks` parameters


`AWS-RunPatchBaselineWithHooks` supports six parameters. 

The `Operation` parameter is required. 

The `RebootOption`, `PreInstallHookDocName`, `PostInstallHookDocName`, and `OnExitHookDocName` parameters are optional. 

`Snapshot-ID` is technically optional, but we recommend that you supply a custom value for it when you run `AWS-RunPatchBaselineWithHooks` outside of a maintenance window. Let Patch Manager supply the value automatically when the document is run as part of a maintenance window operation.

**Topics**
+ [

### Parameter name: `Operation`
](#patch-manager-aws-runpatchbaseline-parameters-operation)
+ [

### Parameter name: `Snapshot ID`
](#patch-manager-aws-runpatchbaselinewithhook-parameters-snapshot-id)
+ [

### Parameter name: `RebootOption`
](#patch-manager-aws-runpatchbaselinewithhooks-parameters-norebootoption)
+ [

### Parameter name: `PreInstallHookDocName`
](#patch-manager-aws-runpatchbaselinewithhooks-parameters-preinstallhookdocname)
+ [

### Parameter name: `PostInstallHookDocName`
](#patch-manager-aws-runpatchbaselinewithhooks-parameters-postinstallhookdocname)
+ [

### Parameter name: `OnExitHookDocName`
](#patch-manager-aws-runpatchbaselinewithhooks-parameters-onexithookdocname)

### Parameter name: `Operation`


**Usage**: Required.

**Options**: `Scan` | `Install`. 

Scan  
When you choose the `Scan` option, the system uses the `AWS-RunPatchBaseline` document to determine the patch compliance state of the managed node and reports this information back to Patch Manager. `Scan` doesn't prompt updates to be installed or managed nodes to be rebooted. Instead, the operation identifies which approved and applicable updates are missing from the node. 

Install  
When you choose the `Install` option, `AWS-RunPatchBaselineWithHooks` attempts to install the approved and applicable updates that are missing from the managed node. Patch compliance information generated as part of an `Install` operation doesn't list any missing updates, but might report updates that are in a failed state if the installation of the update didn't succeed for any reason. Whenever an update is installed on a managed node, the node is rebooted to ensure the update is both installed and active. (Exception: If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaselineWithHooks` document, the managed node isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](#patch-manager-aws-runpatchbaselinewithhooks-parameters-norebootoption).)  
If a patch specified by the baseline rules is installed *before* Patch Manager updates the managed node, the system might not reboot as expected. This can happen when a patch is installed manually by a user or installed automatically by another program, such as the `unattended-upgrades` package on Ubuntu Server.
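The `Operation` parameter is supplied when you run the document, for example through Run Command. The following is a minimal sketch, not the documented procedure; the instance ID is a placeholder, and the actual `send_command` call (which requires AWS credentials and a registered managed node) is left commented out:

```
# Sketch: assemble Run Command arguments for a compliance-only Scan.
# The instance ID below is a placeholder.

def build_patch_scan_command(instance_id):
    """Return keyword arguments for ssm.send_command()."""
    return {
        "DocumentName": "AWS-RunPatchBaselineWithHooks",
        "InstanceIds": [instance_id],
        # SSM document parameter values are always lists of strings.
        "Parameters": {"Operation": ["Scan"]},
    }

args = build_patch_scan_command("i-1234567890abcdef0")
# With credentials and a registered managed node, you would run:
# import boto3
# boto3.client("ssm").send_command(**args)
```

Switching the value to `["Install"]` turns the same invocation into a patching operation subject to the `RebootOption` behavior described later.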

### Parameter name: `Snapshot ID`


**Usage**: Optional.

`Snapshot ID` is a unique ID (GUID) used by Patch Manager to ensure that a set of managed nodes that are patched in a single operation all have the exact same set of approved patches. Although the parameter is defined as optional, our best practice recommendation depends on whether or not you're running `AWS-RunPatchBaselineWithHooks` in a maintenance window, as described in the following table.


**`AWS-RunPatchBaselineWithHooks` best practices**  

| Mode | Best practice | Details | 
| --- | --- | --- | 
| Running AWS-RunPatchBaselineWithHooks inside a maintenance window | Don't supply a Snapshot ID. Patch Manager will supply it for you. |  If you use a maintenance window to run `AWS-RunPatchBaselineWithHooks`, you shouldn't provide your own generated Snapshot ID. In this scenario, Systems Manager provides a GUID value based on the maintenance window execution ID. This ensures that a correct ID is used for all the invocations of `AWS-RunPatchBaselineWithHooks` in that maintenance window.  If you do specify a value in this scenario, note that the snapshot of the patch baseline might not remain in place for more than 3 days. After that, a new snapshot will be generated even if you specify the same ID after the snapshot expires.   | 
| Running AWS-RunPatchBaselineWithHooks outside of a maintenance window | Generate and specify a custom GUID value for the Snapshot ID.¹ |  When you aren't using a maintenance window to run `AWS-RunPatchBaselineWithHooks`, we recommend that you generate and specify a unique Snapshot ID for each patch baseline, particularly if you're running the `AWS-RunPatchBaselineWithHooks` document on multiple managed nodes in the same operation. If you don't specify an ID in this scenario, Systems Manager generates a different Snapshot ID for each managed node the command is sent to. This might result in varying sets of patches being specified among the nodes. For instance, say that you're running the `AWS-RunPatchBaselineWithHooks` document directly through Run Command, a tool in AWS Systems Manager, and targeting a group of 50 managed nodes. Specifying a custom Snapshot ID results in the generation of a single baseline snapshot that is used to evaluate and patch all the managed nodes, ensuring that they end up in a consistent state.   | 
|  ¹ You can use any tool capable of generating a GUID to generate a value for the Snapshot ID parameter. For example, in PowerShell, you can use the `New-Guid` cmdlet to generate a GUID in the format of `12345699-9405-4f69-bc5e-9315aEXAMPLE`.  | 
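Any GUID generator works for this value. As an illustrative sketch (not an AWS-provided utility), Python's standard `uuid` module can produce one, which you would then pass as the document's `SnapshotId` parameter:

```
import uuid

# Generate one GUID and reuse it for every node in the same patching
# operation, so that all nodes evaluate the same baseline snapshot.
snapshot_id = str(uuid.uuid4())

# Illustrative parameter map for AWS-RunPatchBaselineWithHooks:
parameters = {"Operation": ["Install"], "SnapshotId": [snapshot_id]}
print(snapshot_id)
```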

### Parameter name: `RebootOption`


**Usage**: Optional.

**Options**: `RebootIfNeeded` | `NoReboot` 

**Default**: `RebootIfNeeded`

**Warning**  
The default option is `RebootIfNeeded`. Be sure to select the correct option for your use case. For example, if your managed nodes must reboot immediately to complete a configuration process, choose `RebootIfNeeded`. Or, if you need to maintain managed node availability until a scheduled reboot time, choose `NoReboot`.

**Important**  
We don’t recommend using Patch Manager to patch cluster instances in Amazon EMR (previously called Amazon Elastic MapReduce). In particular, don’t select the `RebootIfNeeded` option for the `RebootOption` parameter. (This option is available in the SSM Command documents for patching: `AWS-RunPatchBaseline`, `AWS-RunPatchBaselineAssociation`, and `AWS-RunPatchBaselineWithHooks`.)  
The underlying patching operations in Patch Manager rely on `yum` and `dnf` commands, which conflict with how packages are installed on Amazon EMR clusters and can result in incompatibilities. For information about the preferred methods for updating software on Amazon EMR clusters, see [Using the default AMI for Amazon EMR](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-default-ami.html) in the *Amazon EMR Management Guide*.

RebootIfNeeded  
When you choose the `RebootIfNeeded` option, the managed node is rebooted in either of the following cases:   
+ Patch Manager installed one or more patches. 

  Patch Manager doesn't evaluate whether a reboot is *required* by the patch. The system is rebooted even if the patch doesn't require a reboot.
+ Patch Manager detects one or more patches with a status of `INSTALLED_PENDING_REBOOT` during the `Install` operation. 

  The `INSTALLED_PENDING_REBOOT` status can mean that the option `NoReboot` was selected the last time the `Install` operation was run, or that a patch was installed outside of Patch Manager since the last time the managed node was rebooted.
Rebooting managed nodes in these two cases ensures that updated packages are flushed from memory and keeps patching and rebooting behavior consistent across all operating systems.

NoReboot  
When you choose the `NoReboot` option, Patch Manager doesn't reboot a managed node even if it installed patches during the `Install` operation. This option is useful if you know that your managed nodes don't require rebooting after patches are applied, or you have applications or processes running on a node that shouldn't be disrupted by a patching operation reboot. It's also useful when you want more control over the timing of managed node reboots, such as by using a maintenance window.  
If you choose the `NoReboot` option and a patch is installed, the patch is assigned a status of `InstalledPendingReboot`. The managed node itself, however, is marked as `Non-Compliant`. After a reboot occurs and a `Scan` operation is run, the node status is updated to `Compliant`.

**Patch installation tracking file**: To track patch installation, especially patches that were installed since the last system reboot, Systems Manager maintains a file on the managed node.

**Important**  
Don't delete or modify the tracking file. If this file is deleted or corrupted, the patch compliance report for the managed node is inaccurate. If this happens, reboot the node and run a patch Scan operation to restore the file.

This tracking file is stored in the following locations on your managed nodes:
+ Linux operating systems: 
  + `/var/log/amazon/ssm/patch-configuration/patch-states-configuration.json`
  + `/var/log/amazon/ssm/patch-configuration/patch-inventory-from-last-operation.json`
+ Windows Server operating system:
  + `C:\ProgramData\Amazon\PatchBaselineOperations\State\PatchStatesConfiguration.json`
  + `C:\ProgramData\Amazon\PatchBaselineOperations\State\PatchInventoryFromLastOperation.json`

### Parameter name: `PreInstallHookDocName`


**Usage**: Optional.

**Default**: `AWS-Noop`. 

The value to provide for the `PreInstallHookDocName` parameter is the name or Amazon Resource Name (ARN) of an SSM document of your choice. You can provide the name of an AWS managed document or the name or ARN of a custom SSM document that you have created or that has been shared with you. (For an SSM document that has been shared with you from a different AWS account, you must specify the full resource ARN, such as `arn:aws:ssm:us-east-2:123456789012:document/MySharedDocument`.)

The SSM document you specify is run before the `Install` operation and performs any actions supported by SSM Agent, such as a shell script that checks application health before patching is performed on the managed node. (For a list of actions, see [Command document plugin reference](documents-command-ssm-plugin-reference.md).) The default SSM document name is `AWS-Noop`, which doesn't perform any operation on the managed node. 

For information about creating a custom SSM document, see [Creating SSM document content](documents-creating-content.md). 
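For illustration only, a custom pre-install hook document might look like the following. This is a hypothetical sketch using schema version 2.2 and the `aws:runShellScript` action; the step name, the commands, and the service name are assumptions, not AWS-provided content:

```
{
  "schemaVersion": "2.2",
  "description": "Hypothetical pre-install hook: verify node health before patching.",
  "mainSteps": [
    {
      "action": "aws:runShellScript",
      "name": "preInstallHealthCheck",
      "inputs": {
        "runCommand": [
          "df -h /var",
          "systemctl is-active my-critical-service"
        ]
      }
    }
  ]
}
```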

### Parameter name: `PostInstallHookDocName`


**Usage**: Optional.

**Default**: `AWS-Noop`. 

The value to provide for the `PostInstallHookDocName` parameter is the name or Amazon Resource Name (ARN) of an SSM document of your choice. You can provide the name of an AWS managed document or the name or ARN of a custom SSM document that you have created or that has been shared with you. (For an SSM document that has been shared with you from a different AWS account, you must specify the full resource ARN, such as `arn:aws:ssm:us-east-2:123456789012:document/MySharedDocument`.)

The SSM document you specify is run after the `Install with NoReboot` operation and performs any actions supported by SSM Agent, such as a shell script that installs third-party updates before the reboot. (For a list of actions, see [Command document plugin reference](documents-command-ssm-plugin-reference.md).) The default SSM document name is `AWS-Noop`, which doesn't perform any operation on the managed node. 

For information about creating a custom SSM document, see [Creating SSM document content](documents-creating-content.md). 

### Parameter name: `OnExitHookDocName`


**Usage**: Optional.

**Default**: `AWS-Noop`. 

The value to provide for the `OnExitHookDocName` parameter is the name or Amazon Resource Name (ARN) of an SSM document of your choice. You can provide the name of an AWS managed document or the name or ARN of a custom SSM document that you have created or that has been shared with you. (For an SSM document that has been shared with you from a different AWS account, you must specify the full resource ARN, such as `arn:aws:ssm:us-east-2:123456789012:document/MySharedDocument`.)

The SSM document you specify is run after the managed node reboot operation and performs any actions supported by SSM Agent, such as a shell script that verifies node health after the patching operation is complete. (For a list of actions, see [Command document plugin reference](documents-command-ssm-plugin-reference.md).) The default SSM document name is `AWS-Noop`, which doesn't perform any operation on the managed node. 

For information about creating a custom SSM document, see [Creating SSM document content](documents-creating-content.md). 
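Bringing the three hooks together, the following hedged sketch assembles a Run Command invocation for an `Install` operation. The hook document names are placeholders for documents you've created, and the `send_command` call is left commented out:

```
# Sketch: build send_command arguments wiring all three lifecycle hooks.
# The hook document names below are placeholders, not AWS managed documents.

def build_hooked_install(instance_id, pre_hook, post_hook, exit_hook):
    """Return keyword arguments for ssm.send_command()."""
    return {
        "DocumentName": "AWS-RunPatchBaselineWithHooks",
        "InstanceIds": [instance_id],
        "Parameters": {
            "Operation": ["Install"],
            "RebootOption": ["RebootIfNeeded"],
            "PreInstallHookDocName": [pre_hook],
            "PostInstallHookDocName": [post_hook],
            "OnExitHookDocName": [exit_hook],
        },
    }

args = build_hooked_install(
    "i-1234567890abcdef0",
    "MyAppHealthCheck",   # runs before the Install operation
    "MyPreRebootTasks",   # runs after Install with NoReboot, before the reboot
    "MyPostPatchVerify",  # runs after the reboot
)
# import boto3
# boto3.client("ssm").send_command(**args)
```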

# Sample scenario for using the InstallOverrideList parameter in `AWS-RunPatchBaseline` or `AWS-RunPatchBaselineAssociation`
Sample scenario for using the `InstallOverrideList` parameter

You can use the `InstallOverrideList` parameter when you want to override the patches specified by the current default patch baseline in Patch Manager, a tool in AWS Systems Manager. This topic provides examples that show how to use this parameter to achieve the following:
+ Apply different sets of patches to a target group of managed nodes.
+ Apply these patch sets on different frequencies.
+ Use the same patch baseline for both operations.

Say that you want to install two different categories of patches on your Amazon Linux 2 managed nodes. You want to install these patches on different schedules using maintenance windows. You want one maintenance window to run every week and install all `Security` patches. You want another maintenance window to run once a month and install all available patches, or categories of patches other than `Security`. 

However, only one patch baseline at a time can be defined as the default for an operating system. This requirement helps avoid situations where one patch baseline approves a patch while another blocks it, which can lead to issues between conflicting versions.

With the following strategy, you use the `InstallOverrideList` parameter to apply different types of patches to a target group, on different schedules, while still using the same patch baseline:

1. In the default patch baseline, ensure that only `Security` updates are specified.

1. Create a maintenance window that runs `AWS-RunPatchBaseline` or `AWS-RunPatchBaselineAssociation` each week. Don't specify an override list.

1. Create an override list of the patches of all types that you want to apply on a monthly basis and store it in an Amazon Simple Storage Service (Amazon S3) bucket. 

1. Create a second maintenance window that runs once a month. However, for the Run Command task you register for this maintenance window, specify the location of your override list.

The result: Only `Security` patches, as defined in your default patch baseline, are installed each week. All available patches, or whatever subset of patches you define, are installed each month.
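The monthly task in step 4 might be registered with the `RegisterTaskWithMaintenanceWindow` API. The following is a hedged sketch of the request arguments; the window ID, window target ID, role ARN, and S3 URL are placeholders:

```
# Sketch: register a monthly Run Command task that supplies an override list.
# All identifiers below are placeholders.

def build_monthly_patch_task(window_id, window_target_id, role_arn, override_url):
    """Return keyword arguments for ssm.register_task_with_maintenance_window()."""
    return {
        "WindowId": window_id,
        "TaskArn": "AWS-RunPatchBaseline",
        "TaskType": "RUN_COMMAND",
        "ServiceRoleArn": role_arn,
        "Targets": [{"Key": "WindowTargetIds", "Values": [window_target_id]}],
        "MaxConcurrency": "10%",
        "MaxErrors": "5%",
        "TaskInvocationParameters": {
            "RunCommand": {
                "Parameters": {
                    "Operation": ["Install"],
                    "InstallOverrideList": [override_url],
                }
            }
        },
    }

args = build_monthly_patch_task(
    "mw-0c50858d01EXAMPLE",
    "e32eecb2-646c-4f4b-8ed1-205fEXAMPLE",
    "arn:aws:iam::123456789012:role/MyMaintenanceWindowRole",
    "https://s3.amazonaws.com/my-bucket/my-monthly-override-list.yaml",
)
# import boto3
# boto3.client("ssm").register_task_with_maintenance_window(**args)
```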

**Note**  
When you're patching a node that only uses IPv6, ensure that the provided URL is reachable from the node. If the SSM Agent config option `UseDualStackEndpoint` is set to `true`, then a dualstack S3 client is used when an S3 URL is provided. See [Tutorial: Patching a server in an IPv6 only environment](patch-manager-server-patching-iPv6-tutorial.md) for more information on configuring the agent to use dualstack.

For more information and sample lists, see [Parameter name: `InstallOverrideList`](patch-manager-aws-runpatchbaseline.md#patch-manager-aws-runpatchbaseline-parameters-installoverridelist).

# Using the BaselineOverride parameter


You can define patching preferences at runtime using the baseline override feature in Patch Manager, a tool in AWS Systems Manager. Do this by specifying an Amazon Simple Storage Service (Amazon S3) bucket containing a JSON object with a list of patch baselines. The patching operation uses the baselines provided in the JSON object that match the host operating system instead of applying the rules from the default patch baseline.

**Important**  
The `BaselineOverride` file name can't contain the following characters: backtick (`), single quote ('), double quote ("), and dollar sign ($).

Except when a patching operation uses a patch policy, using the `BaselineOverride` parameter doesn't overwrite the patch compliance of the baseline provided in the parameter. The output results are recorded in the Stdout logs from Run Command, a tool in AWS Systems Manager. The results only print out packages that are marked as `NON_COMPLIANT`. This means the package is marked as `Missing`, `Failed`, `InstalledRejected`, or `InstalledPendingReboot`.

When a patch operation uses a patch policy, however, the system passes the override parameter from the associated S3 bucket, and the compliance value is updated for the managed node. For more information about patch policy behaviors, see [Patch policy configurations in Quick Setup](patch-manager-policies.md).
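A patching command that supplies a baseline override might be assembled as follows. This is a sketch; the instance ID and S3 URL are placeholders for your own values, and the `send_command` call is commented out:

```
# Sketch: point AWS-RunPatchBaseline at a baseline override stored in S3.
# The instance ID and URL below are placeholders.

def build_override_scan(instance_id, override_url):
    """Return keyword arguments for ssm.send_command()."""
    return {
        "DocumentName": "AWS-RunPatchBaseline",
        "InstanceIds": [instance_id],
        "Parameters": {
            "Operation": ["Scan"],
            "BaselineOverride": [override_url],
        },
    }

args = build_override_scan(
    "i-1234567890abcdef0",
    "https://my-baseline-override-bucket.s3.amazonaws.com/MyBaselineOverride.json",
)
# import boto3
# boto3.client("ssm").send_command(**args)
```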

**Note**  
When you're patching a node that only uses IPv6, ensure that the provided URL is reachable from the node. If the SSM Agent config option `UseDualStackEndpoint` is set to `true`, then a dualstack S3 client is used when an S3 URL is provided. See [Tutorial: Patching a server in an IPv6 only environment](patch-manager-server-patching-iPv6-tutorial.md) for more information on configuring the agent to use dualstack.

## Using the patch baseline override with Snapshot Id or Install Override List parameters


There are two cases where the patch baseline override has noteworthy behavior.

**Using baseline override and Snapshot Id at the same time**  
A Snapshot Id ensures that all managed nodes targeted by a particular patching command apply the same set of patches. For example, if you patch 1,000 nodes at one time, the patches are identical on every node.

When you use both a Snapshot Id and a patch baseline override, the Snapshot Id takes precedence over the patch baseline override. The baseline override rules are still used, but they're evaluated only once. In the earlier example, the patches across your 1,000 managed nodes are still always the same. Even if, midway through the patching operation, you changed the JSON file in the referenced S3 bucket, the patches applied would remain the same, because the Snapshot Id was provided.

**Using baseline override and Install Override List at the same time**  
You can't use these two parameters at the same time. The patching document fails if both parameters are supplied, and it doesn't perform any scans or installs on the managed node.

## Code examples


The following code example for Python shows how to generate the patch baseline override.

```
import boto3
import json

ssm = boto3.client('ssm')
s3 = boto3.resource('s3')
s3_bucket_name = 'my-baseline-override-bucket'
s3_file_name = 'MyBaselineOverride.json'
baseline_ids_to_export = ['pb-0000000000000000', 'pb-0000000000000001']

# Export each baseline definition. Remove the boto3 response metadata so
# that only the baseline fields are serialized into the override file.
baseline_overrides = []
for baseline_id in baseline_ids_to_export:
    baseline = ssm.get_patch_baseline(
        BaselineId=baseline_id
    )
    baseline.pop('ResponseMetadata', None)
    baseline_overrides.append(baseline)

# Serialize the list of baselines and upload it to the Amazon S3 bucket.
json_content = json.dumps(baseline_overrides, indent=4, sort_keys=True, default=str)
s3.Object(bucket_name=s3_bucket_name, key=s3_file_name).put(Body=json_content)
```

This produces a patch baseline override like the following.

```
[
    {
        "ApprovalRules": {
            "PatchRules": [
                {
                    "ApproveAfterDays": 0, 
                    "ComplianceLevel": "UNSPECIFIED", 
                    "EnableNonSecurity": false, 
                    "PatchFilterGroup": {
                        "PatchFilters": [
                            {
                                "Key": "PRODUCT", 
                                "Values": [
                                    "*"
                                ]
                            }, 
                            {
                                "Key": "CLASSIFICATION", 
                                "Values": [
                                    "*"
                                ]
                            }, 
                            {
                                "Key": "SEVERITY", 
                                "Values": [
                                    "*"
                                ]
                            }
                        ]
                    }
                }
            ]
        }, 
        "ApprovedPatches": [], 
        "ApprovedPatchesComplianceLevel": "UNSPECIFIED", 
        "ApprovedPatchesEnableNonSecurity": false, 
        "GlobalFilters": {
            "PatchFilters": []
        }, 
        "OperatingSystem": "AMAZON_LINUX_2", 
        "RejectedPatches": [], 
        "RejectedPatchesAction": "ALLOW_AS_DEPENDENCY", 
        "Sources": []
    }, 
    {
        "ApprovalRules": {
            "PatchRules": [
                {
                    "ApproveUntilDate": "2021-01-06", 
                    "ComplianceLevel": "UNSPECIFIED", 
                    "EnableNonSecurity": true, 
                    "PatchFilterGroup": {
                        "PatchFilters": [
                            {
                                "Key": "PRODUCT", 
                                "Values": [
                                    "*"
                                ]
                            }, 
                            {
                                "Key": "CLASSIFICATION", 
                                "Values": [
                                    "*"
                                ]
                            }, 
                            {
                                "Key": "SEVERITY", 
                                "Values": [
                                    "*"
                                ]
                            }
                        ]
                    }
                }
            ]
        }, 
        "ApprovedPatches": [
            "open-ssl*"
        ], 
        "ApprovedPatchesComplianceLevel": "UNSPECIFIED", 
        "ApprovedPatchesEnableNonSecurity": false, 
        "GlobalFilters": {
            "PatchFilters": []
        }, 
        "OperatingSystem": "SUSE", 
        "RejectedPatches": [
            "python*"
        ], 
        "RejectedPatchesAction": "ALLOW_AS_DEPENDENCY", 
        "Sources": []
    }
]
```

# Patch baselines


The topics in this section provide information about how patch baselines work in Patch Manager, a tool in AWS Systems Manager, when you run a `Scan` or `Install` operation on your managed nodes.

**Topics**
+ [

# Predefined and custom patch baselines
](patch-manager-predefined-and-custom-patch-baselines.md)
+ [

# Package name formats for approved and rejected patch lists
](patch-manager-approved-rejected-package-name-formats.md)
+ [

# Patch groups
](patch-manager-patch-groups.md)
+ [

# Patching applications released by Microsoft on Windows Server
](patch-manager-patching-windows-applications.md)

# Predefined and custom patch baselines


Patch Manager, a tool in AWS Systems Manager, provides predefined patch baselines for each of the operating systems supported by Patch Manager. You can use these baselines as they are currently configured (you can't customize them) or you can create your own custom patch baselines. Custom patch baselines allow you greater control over which patches are approved or rejected for your environment. Also, the predefined baselines assign a compliance level of `Unspecified` to all patches installed using those baselines. For compliance values to be assigned, you can create a copy of a predefined baseline and specify the compliance values you want to assign to patches. For more information, see [Custom baselines](#patch-manager-baselines-custom) and [Working with custom patch baselines](patch-manager-manage-patch-baselines.md).

**Note**  
The information in this topic applies no matter which method or type of configuration you use for your patching operations:  
+ A patch policy configured in Quick Setup
+ A Host Management option configured in Quick Setup
+ A maintenance window to run a patch `Scan` or `Install` task
+ An on-demand **Patch now** operation

**Topics**
+ [

## Predefined baselines
](#patch-manager-baselines-pre-defined)
+ [

## Custom baselines
](#patch-manager-baselines-custom)

## Predefined baselines


The following table describes the predefined patch baselines provided with Patch Manager.

For information about which versions of each operating system Patch Manager supports, see [Patch Manager prerequisites](patch-manager-prerequisites.md).


****  

| Name | Supported operating system | Details | 
| --- | --- | --- | 
|  `AWS-AlmaLinuxDefaultPatchBaseline`  |  AlmaLinux  |  Approves all operating system patches that are classified as "Security" and that have a severity level of "Critical" or "Important". Also approves all patches that are classified as "Bugfix". Patches are auto-approved 7 days after they are released or updated.¹  | 
| AWS-AmazonLinux2DefaultPatchBaseline | Amazon Linux 2 | Approves all operating system patches that are classified as "Security" and that have a severity level of "Critical" or "Important". Also approves all patches with a classification of "Bugfix". Patches are auto-approved 7 days after release.¹ | 
| AWS-AmazonLinux2023DefaultPatchBaseline | Amazon Linux 2023 |  Approves all operating system patches that are classified as "Security" and that have a severity level of "Critical" or "Important". Also approves all patches with a classification of "Bugfix". Patches are auto-approved 7 days after release.  | 
| AWS-CentOSDefaultPatchBaseline | CentOS Stream | Approves all updates 7 days after they become available, including nonsecurity updates. | 
| AWS-DebianDefaultPatchBaseline | Debian Server | Immediately approves all operating system security-related patches that have a priority of "Required", "Important", "Standard", "Optional", or "Extra". There is no wait before approval because reliable release dates aren't available in the repositories. | 
| AWS-MacOSDefaultPatchBaseline | macOS | Approves all operating system patches that are classified as "Security". Also approves all packages with a current update. | 
| AWS-OracleLinuxDefaultPatchBaseline | Oracle Linux | Approves all operating system patches that are classified as "Security" and that have a severity level of "Important" or "Moderate". Also approves all patches that are classified as "Bugfix" 7 days after release. Patches are auto-approved 7 days after they are released or updated.¹ | 
|  `AWS-RedHatDefaultPatchBaseline`  |  Red Hat Enterprise Linux (RHEL)   |  Approves all operating system patches that are classified as "Security" and that have a severity level of "Critical" or "Important". Also approves all patches that are classified as "Bugfix". Patches are auto-approved 7 days after they are released or updated.¹  | 
|  `AWS-RockyLinuxDefaultPatchBaseline`  |  Rocky Linux  |  Approves all operating system patches that are classified as "Security" and that have a severity level of "Critical" or "Important". Also approves all patches that are classified as "Bugfix". Patches are auto-approved 7 days after they are released or updated.¹  | 
| AWS-SuseDefaultPatchBaseline | SUSE Linux Enterprise Server (SLES) | Approves all operating system patches that are classified as "Security" and with a severity of "Critical" or "Important". Patches are auto-approved 7 days after they are released or updated.¹ | 
|  `AWS-UbuntuDefaultPatchBaseline`  |  Ubuntu Server  |  Immediately approves all operating system security-related patches that have a priority of "Required", "Important", "Standard", "Optional", or "Extra". There is no wait before approval because reliable release dates aren't available in the repositories.  | 
| AWS-DefaultPatchBaseline |  Windows Server  |  Approves all Windows Server operating system patches that are classified as "CriticalUpdates" or "SecurityUpdates" and that have an MSRC severity of "Critical" or "Important". Patches are auto-approved 7 days after they are released or updated.²  | 
| AWS-WindowsPredefinedPatchBaseline-OS |  Windows Server  |  Approves all Windows Server operating system patches that are classified as "CriticalUpdates" or "SecurityUpdates" and that have an MSRC severity of "Critical" or "Important". Patches are auto-approved 7 days after they are released or updated.²  | 
| AWS-WindowsPredefinedPatchBaseline-OS-Applications | Windows Server | For the Windows Server operating system, approves all patches that are classified as "CriticalUpdates" or "SecurityUpdates" and that have an MSRC severity of "Critical" or "Important". For applications released by Microsoft, approves all patches. Patches for both OS and applications are auto-approved 7 days after they are released or updated.² | 

¹ For Amazon Linux 2, the 7-day wait before patches are auto-approved is calculated from an `Updated Date` value in `updateinfo.xml`, not a `Release Date` value. Various factors can affect the `Updated Date` value. Other operating systems handle release and update dates differently. For information to help you avoid unexpected results with auto-approval delays, see [How package release dates and update dates are calculated](patch-manager-release-dates.md).

² For Windows Server, default baselines include a 7-day auto-approval delay. To install a patch within 7 days after release, you must create a custom baseline.

## Custom baselines


Use the following information to help you create custom patch baselines to meet your patching goals.

**Topics**
+ [

### Using auto-approvals in custom baselines
](#baselines-auto-approvals)
+ [

### Additional information for creating patch baselines
](#baseline-additional-info)

### Using auto-approvals in custom baselines


If you create your own patch baseline, you can choose which patches to auto-approve by using the following categories.
+ **Operating system**: Supported versions of Windows Server, Amazon Linux, Ubuntu Server, and so on.
+ **Product name** (for operating systems): For example, RHEL 7.5, Amazon Linux 2023 2023.8.20250808, Windows Server 2012, Windows Server 2012 R2, and so on.
+ **Product name** (for applications released by Microsoft on Windows Server only): For example, Word 2016, BizTalk Server, and so on.
+ **Classification**: For example, Critical updates, Security updates, and so on.
+ **Severity**: For example, Critical, Important, and so on.

For each approval rule that you create, you can choose to specify an auto-approval delay or specify a patch approval cutoff date. 

**Note**  
Because it's not possible to reliably determine the release dates of update packages for Ubuntu Server, the auto-approval options aren't supported for this operating system.

An auto-approval delay is the number of days to wait after the patch was released or last updated, before the patch is automatically approved for patching. For example, if you create a rule using the `CriticalUpdates` classification and configure it for 7 days auto-approval delay, then a new critical patch released on July 7 is automatically approved on July 14.

If a Linux repository doesn't provide release date information for packages, Patch Manager uses the build time of the package as the date for auto-approval calculations on Amazon Linux 2, Amazon Linux 2023, and Red Hat Enterprise Linux (RHEL). If the build time of the package can't be determined, Patch Manager uses a default date of January 1, 1970. As a result, Patch Manager bypasses the auto-approval date specifications in patch baselines that are configured to approve patches for any date after January 1, 1970, because the default date always precedes them.

When you specify an auto-approval cutoff date, Patch Manager automatically applies all patches released or last updated on or before that date. For example, if you specify July 7, 2023 as the cutoff date, no patches released or last updated on or after July 8, 2023 are installed automatically.

When you create a custom patch baseline, you can specify a compliance severity level for patches approved by that patch baseline, such as `Critical` or `High`. If the patch state of any approved patch is reported as `Missing`, then the patch baseline's overall reported compliance severity is the severity level you specified.
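The concepts above (patch filters, an auto-approval delay, and a compliance severity level) map onto the `CreatePatchBaseline` API. The following is a hedged sketch of the request arguments, with a placeholder baseline name; the actual API call is left commented out:

```
# Sketch: a custom baseline that auto-approves Security patches rated
# Critical or Important for Amazon Linux 2 after a 7-day delay, and
# reports missing patches at the CRITICAL compliance level.

def build_custom_baseline():
    """Return keyword arguments for ssm.create_patch_baseline()."""
    return {
        "Name": "MyAL2SecurityBaseline",  # placeholder name
        "OperatingSystem": "AMAZON_LINUX_2",
        "ApprovalRules": {
            "PatchRules": [
                {
                    "PatchFilterGroup": {
                        "PatchFilters": [
                            {"Key": "CLASSIFICATION", "Values": ["Security"]},
                            {"Key": "SEVERITY", "Values": ["Critical", "Important"]},
                        ]
                    },
                    "ApproveAfterDays": 7,
                    "ComplianceLevel": "CRITICAL",
                }
            ]
        },
    }

args = build_custom_baseline()
# import boto3
# boto3.client("ssm").create_patch_baseline(**args)
```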

### Additional information for creating patch baselines


Keep the following in mind when you create a patch baseline:
+ Patch Manager provides one predefined patch baseline for each supported operating system. These predefined patch baselines are used as the default patch baselines for each operating system type unless you create your own patch baseline and designate it as the default for the corresponding operating system type. 
**Note**  
For Windows Server, three predefined patch baselines are provided. The patch baselines `AWS-DefaultPatchBaseline` and `AWS-WindowsPredefinedPatchBaseline-OS` support only operating system updates on the Windows operating system itself. `AWS-DefaultPatchBaseline` is used as the default patch baseline for Windows Server managed nodes unless you specify a different patch baseline. The configuration settings in these two patch baselines are the same. The newer of the two, `AWS-WindowsPredefinedPatchBaseline-OS`, was created to distinguish it from the third predefined patch baseline for Windows Server. That patch baseline, `AWS-WindowsPredefinedPatchBaseline-OS-Applications`, can be used to apply patches to both the Windows Server operating system and supported applications released by Microsoft.
+ By default, Windows Server 2019 and Windows Server 2022 remove updates that are replaced by later updates. As a result, if you use the `ApproveUntilDate` parameter in a Windows Server patch baseline, but the date selected in the `ApproveUntilDate` parameter is before the date of the latest patch, then the new patch isn't installed when the patching operation runs. For more information about Windows Server patching rules, see the Windows Server tab in [How security patches are selected](patch-manager-selecting-patches.md).

  This means that the managed node is compliant in terms of Systems Manager operations, even though a critical patch from the previous month might not be installed. The same scenario can occur when using the `ApproveAfterDays` parameter. Because of the Microsoft superseded-patch behavior, setting a large value (generally greater than 30 days) can result in patches for Windows Server never being installed, because the latest available patch from Microsoft is released before the number of days in `ApproveAfterDays` has elapsed. 
+ For Windows Server only, an available security update patch that is not approved by the patch baseline can have a compliance value of `Compliant` or `Non-Compliant`, as defined in a custom patch baseline. 

  When you create or update a patch baseline, you choose the status you want to assign to security patches that are available but not approved because they don't meet the installation criteria specified in the patch baseline. For example, security patches that you might want installed can be skipped if you have specified a long period to wait after a patch is released before installation. If an update to the patch is released during your specified waiting period, the waiting period for installing the patch starts over. If the waiting period is too long, multiple versions of the patch could be released but never installed.

  Using the console to create or update a patch baseline, you specify this option in the **Available security updates compliance status** field. Using the AWS CLI to run the [create-patch-baseline](https://docs.aws.amazon.com/cli/latest/reference/ssm/create-patch-baseline.html) or [update-patch-baseline](https://docs.aws.amazon.com/cli/latest/reference/ssm/update-patch-baseline.html) command, you specify this option in the `available-security-updates-compliance-status` parameter. 
+ For on-premises servers and virtual machines (VMs), Patch Manager attempts to use your custom default patch baseline. If no custom default patch baseline exists, the system uses the predefined patch baseline for the corresponding operating system.
+ If a patch is listed as both approved and rejected in the same patch baseline, the patch is rejected.
+ A managed node can have only one patch baseline defined for it.
+ The formats of package names you can add to lists of approved patches and rejected patches for a patch baseline depend on the type of operating system you're patching.

  For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).
+ If you are using a [patch policy configuration](patch-manager-policies.md) in Quick Setup, updates you make to custom patch baselines are synchronized with Quick Setup once an hour. 

  If a custom patch baseline that was referenced in a patch policy is deleted, a banner displays on the Quick Setup **Configuration details** page for your patch policy. The banner informs you that the patch policy references a patch baseline that no longer exists, and that subsequent patching operations will fail. In this case, return to the Quick Setup **Configurations** page, select the Patch Manager configuration, and choose **Actions**, **Edit configuration**. The deleted patch baseline name is highlighted, and you must select a new patch baseline for the affected operating system.
+ When you create an approval rule with multiple `Classification` and `Severity` values, patches are approved based on their available attributes. Packages that have both `Classification` and `Severity` attributes must match the selected baseline values for both fields. Packages that have only a `Classification` attribute are matched only against the selected baseline `Classification` values; `Severity` requirements in the same rule are ignored for packages that don't have `Severity` attributes. 

For information about creating a patch baseline, see [Working with custom patch baselines](patch-manager-manage-patch-baselines.md) and [Tutorial: Patch a server environment using the AWS CLI](patch-manager-patch-servers-using-the-aws-cli.md).

# Package name formats for approved and rejected patch lists


The formats of package names you can add to lists of approved patches and rejected patches depend on the type of operating system you're patching.

## Package name formats for Linux operating systems


The formats you can specify for approved and rejected patches in your patch baseline vary by Linux type. More specifically, the formats that are supported depend on the package manager used by the type of Linux operating system.

**Topics**
+ [

### Amazon Linux 2, Amazon Linux 2023, Oracle Linux, and Red Hat Enterprise Linux (RHEL)
](#patch-manager-approved-rejected-package-name-formats-standard)
+ [

### Debian Server and Ubuntu Server
](#patch-manager-approved-rejected-package-name-formats-ubuntu)
+ [

### SUSE Linux Enterprise Server (SLES)
](#patch-manager-approved-rejected-package-name-formats-sles)

### Amazon Linux 2, Amazon Linux 2023, Oracle Linux, and Red Hat Enterprise Linux (RHEL)


**Package manager**: YUM, except for Amazon Linux 2023 and RHEL 8, which use DNF as the package manager.

**Approved patches**: For approved patches, you can specify any of the following:
+ Bugzilla IDs, in the format `1234567` (The system processes numbers-only strings as Bugzilla IDs.)
+ CVE IDs, in the format `CVE-2018-1234567`
+ Advisory IDs, in formats such as `RHSA-2017:0864` and `ALAS-2018-123`
+ Package names that are constructed using one or more of the available components for package naming. To illustrate, for the package named `dbus.x86_64:1:1.12.28-1.amzn2023.0.1`, the components are as follows: 
  + `name`: `dbus`
  + `architecture`: `x86_64`
  + `epoch`: `1`
  + `version`: `1.12.28`
  + `release`: `1.amzn2023.0.1`

  Package names with the following constructions are supported:
  + `name`
  + `name.arch`
  + `name-version`
  + `name-version-release`
  + `name-version-release.arch`
  + `version`
  + `version-release`
  + `epoch:version-release`
  + `name-epoch:version-release`
  + `name-epoch:version-release.arch`
  + `epoch:name-version-release.arch`
  + `name.arch:epoch:version-release`

  Some examples:
  + `dbus.x86_64`
  + `dbus-1.12.28`
  + `dbus-1.12.28-1.amzn2023.0.1`
  + `dbus-1:1.12.28-1.amzn2023.0.1.x86_64`
+ We also support package name components with a single wildcard in the above formats, such as the following:
  + `dbus*` 
  + `dbus-1.12.2*`
  + `dbus-*:1.12.28-1.amzn2023.0.1.x86_64`

**Rejected patches**: For rejected patches, you can specify any of the following:
+ Package names that are constructed using one or more of the available components for package naming. To illustrate, for the package named `dbus.x86_64:1:1.12.28-1.amzn2023.0.1`, the components are as follows: 
  + `name`: `dbus`
  + `architecture`: `x86_64`
  + `epoch`: `1`
  + `version`: `1.12.28`
  + `release`: `1.amzn2023.0.1`

  Package names with the following constructions are supported:
  + `name`
  + `name.arch`
  + `name-version`
  + `name-version-release`
  + `name-version-release.arch`
  + `version`
  + `version-release`
  + `epoch:version-release`
  + `name-epoch:version-release`
  + `name-epoch:version-release.arch`
  + `epoch:name-version-release.arch`
  + `name.arch:epoch:version-release`

  Some examples:
  + `dbus.x86_64`
  + `dbus-1.12.28`
  + `dbus-1.12.28-1.amzn2023.0.1`
  + `dbus-1:1.12.28-1.amzn2023.0.1.x86_64` 
+ We also support package name components with a single wildcard in the above formats, such as the following:
  + `dbus*` 
  + `dbus-1.12.2*`
  + `dbus-*:1.12.28-1.amzn2023.0.1.x86_64`
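For example, a rejected-patches list using these formats could be applied to an existing baseline with the AWS CLI as follows. The baseline ID is illustrative, and `BLOCK` prevents the packages from being installed even as dependencies.

```
aws ssm update-patch-baseline \
    --baseline-id "pb-0c10e65780EXAMPLE" \
    --rejected-patches "dbus*" "dbus-1.12.28-1.amzn2023.0.1" \
    --rejected-patches-action "BLOCK"
```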

### Debian Server and Ubuntu Server


**Package manager**: APT

**Approved patches** and **rejected patches**: For both approved and rejected patches, specify the following:
+ Package names, in the format `ExamplePkg33`
**Note**  
For Debian Server and Ubuntu Server lists, don't include elements such as architecture or versions. For example, you specify the package name `ExamplePkg33` to include all the following in a patch list:  
`ExamplePkg33.x86.1`
`ExamplePkg33.x86.2`
`ExamplePkg33.x64.1`
`ExamplePkg33.3.2.5-364.noarch`

### SUSE Linux Enterprise Server (SLES)


**Package manager**: Zypper

**Approved patches** and **rejected patches**: For both approved and rejected patch lists, you can specify any of the following:
+ Full package names, in formats such as:
  + `SUSE-SLE-Example-Package-15-2023-123`
  + `example-pkg-2023.15.4-46.17.1.x86_64.rpm`
+ Package names with a single wildcard, such as:
  + `SUSE-SLE-Example-Package-15-2023-*`
  + `example-pkg-2023.15.4-46.17.1.*.rpm`

## Package name formats for macOS


**Supported package managers**: softwareupdate, installer, Brew, Brew Cask

**Approved patches** and **rejected patches**: For both approved and rejected patch lists, you specify full package names, in formats such as:
+ `XProtectPlistConfigData`
+ `MRTConfigData`

Wildcards aren't supported in approved and rejected patch lists for macOS.

## Package name formats for Windows operating systems


For Windows operating systems, specify patches using Microsoft Knowledge Base IDs and Microsoft Security Bulletin IDs; for example:

```
KB2032276,KB2124261,MS10-048
```
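As a sketch, these IDs could be added to a baseline's approved-patches list with the AWS CLI as follows (the baseline ID is illustrative):

```
aws ssm update-patch-baseline \
    --baseline-id "pb-0c10e65780EXAMPLE" \
    --approved-patches "KB2032276" "KB2124261" "MS10-048"
```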

# Patch groups


**Note**  
Patch groups are not used in patching operations that are based on *patch policies*. For information about working with patch policies, see [Patch policy configurations in Quick Setup](patch-manager-policies.md).  
Patch group functionality is not supported in the console for account-Region pairs that did not already use patch groups before patch policy support was released on December 22, 2022. Patch group functionality is still available in account-Region pairs that began using patch groups before this date.

You can use a *patch group* to associate managed nodes with a specific patch baseline in Patch Manager, a tool in AWS Systems Manager. Patch groups help ensure that you're deploying the appropriate patches, based on the associated patch baseline rules, to the correct set of nodes. Patch groups can also help you avoid deploying patches before they have been adequately tested. For example, you can create patch groups for different environments (such as Development, Test, and Production) and register each patch group to an appropriate patch baseline. 

When you run `AWS-RunPatchBaseline` or other SSM Command documents for patching, you can target managed nodes using their ID or tags. SSM Agent and Patch Manager then evaluate which patch baseline to use based on the patch group value that you added to the managed node.

## Using tags to define patch groups


You create a patch group by using tags applied to your Amazon Elastic Compute Cloud (Amazon EC2) instances and non-EC2 nodes in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment. Note the following details about using tags for patch groups:
+ 

  A patch group must be defined using either the tag key `Patch Group` or `PatchGroup` applied to your managed nodes. When registering a patch group for a patch baseline, any identical *values* specified for these two keys are interpreted to be part of the same group. For instance, say that you have tagged five nodes with the first of the following key-value pairs, and five with the second:
  + `key=PatchGroup,value=DEV` 
  + `key=Patch Group,value=DEV`

  The Patch Manager operation that registers a patch group for a patch baseline combines these 10 managed nodes into a single group based on the value `DEV`. The AWS CLI equivalent of this operation is as follows:

  ```
  aws ssm register-patch-baseline-for-patch-group \
      --baseline-id pb-0c10e65780EXAMPLE \
      --patch-group DEV
  ```

  Combining values from different keys into a single target is unique to this Patch Manager command for creating a new patch group and not supported by other API actions. For example, if you run [send-command](https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html) actions using `PatchGroup` and `Patch Group` keys with the same values, you are targeting two completely different sets of nodes:

  ```
  aws ssm send-command \
      --document-name AWS-RunPatchBaseline \
      --targets "Key=tag:PatchGroup,Values=DEV"
  ```

  ```
  aws ssm send-command \
      --document-name AWS-RunPatchBaseline \
      --targets "Key=tag:Patch Group,Values=DEV"
  ```
+ There are limits on tag-based targeting. Each array of targets for `SendCommand` can contain a maximum of five key-value pairs.
+ We recommend that you choose only one of these tag key conventions, either `PatchGroup` (without a space) or `Patch Group` (with a space). However, if you have [allowed tags in EC2 instance metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#allow-access-to-tags-in-IMDS) on an instance, you must use `PatchGroup`.
+ The key is case-sensitive. You can specify any *value* to help you identify and target the resources in that group, for example "web servers" or "US-EAST-PROD", but the key must be `Patch Group` or `PatchGroup`.
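For example, you could apply the tag to an EC2 instance with the AWS CLI as follows (the instance ID is illustrative):

```
aws ec2 create-tags \
    --resources "i-02573cafcfEXAMPLE" \
    --tags "Key=PatchGroup,Value=DEV"
```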

After you create a patch group and tag managed nodes, you can register the patch group with a patch baseline. Registering the patch group with a patch baseline ensures that the nodes within the patch group use the rules defined in the associated patch baseline. 

For more information about how to create a patch group and associate the patch group to a patch baseline, see [Creating and managing patch groups](patch-manager-tag-a-patch-group.md) and [Add a patch group to a patch baseline](patch-manager-tag-a-patch-group.md#sysman-patch-group-patchbaseline).

To view an example of creating a patch baseline and patch groups by using the AWS Command Line Interface (AWS CLI), see [Tutorial: Patch a server environment using the AWS CLI](patch-manager-patch-servers-using-the-aws-cli.md). For more information about Amazon EC2 tags, see [Tag your Amazon EC2 resources](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html) in the *Amazon EC2 User Guide*.

## How it works


When the system runs the task to apply a patch baseline to a managed node, SSM Agent verifies that a patch group value is defined for the node. If the node is assigned to a patch group, Patch Manager then verifies which patch baseline is registered to that group. If a patch baseline is found for that group, Patch Manager notifies SSM Agent to use the associated patch baseline. If a node isn't configured for a patch group, Patch Manager automatically notifies SSM Agent to use the currently configured default patch baseline.
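You can check which patch baseline is registered to a patch group yourself with the AWS CLI. For example, the following command looks up the baseline for a hypothetical `DEV` patch group on Windows Server:

```
aws ssm get-patch-baseline-for-patch-group \
    --patch-group "DEV" \
    --operating-system "WINDOWS"
```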

**Important**  
A managed node can only be in one patch group.  
A patch group can be registered with only one patch baseline for each operating system type.  
You can't apply the `Patch Group` tag (with a space) to an Amazon EC2 instance if the **Allow tags in instance metadata** option is enabled on the instance. Allowing tags in instance metadata prevents tag key names from containing spaces. If you have [allowed tags in EC2 instance metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#allow-access-to-tags-in-IMDS), you must use the tag key `PatchGroup` (without a space).

**Diagram 1: General example of patching operations process flow**

The following illustration shows a general example of the processes that Systems Manager performs when sending a Run Command task to your fleet of servers to patch using Patch Manager. These processes determine which patch baselines to use in patching operations. (A similar process is used when a maintenance window is configured to send a command to patch using Patch Manager.)

The full process is explained below the illustration.

![\[Patch Manager workflow for determining which patch baselines to use when performing patching operations.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/patch-groups-how-it-works.png)


In this example, we have three groups of EC2 instances for Windows Server with the following tags applied:



| EC2 instances group | Tags | 
| --- | --- | 
|  Group 1  |  `key=OS,value=Windows` `key=PatchGroup,value=DEV`  | 
|  Group 2  |  `key=OS,value=Windows`  | 
|  Group 3  |  `key=OS,value=Windows` `key=PatchGroup,value=QA`  | 

For this example, we also have these two Windows Server patch baselines:



| Patch baseline ID | Default | Associated patch group | 
| --- | --- | --- | 
|  `pb-0123456789abcdef0`  |  Yes  |  `Default`  | 
|  `pb-9876543210abcdef0`  |  No  |  `DEV`  | 

The general process to scan or install patches using Run Command, a tool in AWS Systems Manager, and Patch Manager is as follows:

1. **Send a command to patch**: Use the Systems Manager console, SDK, AWS Command Line Interface (AWS CLI), or AWS Tools for Windows PowerShell to send a Run Command task using the document `AWS-RunPatchBaseline`. The diagram shows a Run Command task to patch managed instances by targeting the tag `key=OS,value=Windows`.

1. **Patch baseline determination**: SSM Agent verifies the patch group tags applied to the EC2 instance and queries Patch Manager for the corresponding patch baseline.
   + **Matching patch group value associated with patch baseline:**

     1. SSM Agent, which is installed on EC2 instances in group one, receives the command issued in Step 1 to begin a patching operation. SSM Agent validates that the EC2 instances have the patch group tag-value `DEV` applied and queries Patch Manager for an associated patch baseline.

     1. Patch Manager verifies that patch baseline `pb-9876543210abcdef0` has the patch group `DEV` associated and notifies SSM Agent.

     1. SSM Agent retrieves a patch baseline snapshot from Patch Manager based on the approval rules and exceptions configured in `pb-9876543210abcdef0` and proceeds to the next step.
   + **No patch group tag added to instance:**

     1. SSM Agent, which is installed on EC2 instances in group two, receives the command issued in Step 1 to begin a patching operation. SSM Agent validates that the EC2 instances don't have a `Patch Group` or `PatchGroup` tag applied and, as a result, queries Patch Manager for the default Windows patch baseline.

     1. Patch Manager verifies that the default Windows Server patch baseline is `pb-0123456789abcdef0` and notifies SSM Agent.

     1. SSM Agent retrieves a patch baseline snapshot from Patch Manager based on the approval rules and exceptions configured in the default patch baseline `pb-0123456789abcdef0` and proceeds to the next step.
   + **No matching patch group value associated with a patch baseline:**

     1. SSM Agent, which is installed on EC2 instances in group three, receives the command issued in Step 1 to begin a patching operation. SSM Agent validates that the EC2 instances have the patch group tag-value `QA` applied and queries Patch Manager for an associated patch baseline.

     1. Patch Manager doesn't find a patch baseline that has the patch group `QA` associated.

     1. Patch Manager notifies SSM Agent to use the default Windows patch baseline `pb-0123456789abcdef0`.

     1. SSM Agent retrieves a patch baseline snapshot from Patch Manager based on the approval rules and exceptions configured in the default patch baseline `pb-0123456789abcdef0` and proceeds to the next step.

1. **Patch scan or install**: After determining the appropriate patch baseline to use, SSM Agent begins either scanning for or installing patches based on the operation value specified in Step 1. The patches that are scanned for or installed are determined by the approval rules and patch exceptions defined in the patch baseline snapshot provided by Patch Manager.
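Step 1 in this flow might look like the following AWS CLI command, which runs a `Scan` (rather than `Install`) operation against instances tagged `key=OS,value=Windows`:

```
aws ssm send-command \
    --document-name "AWS-RunPatchBaseline" \
    --targets "Key=tag:OS,Values=Windows" \
    --parameters "Operation=Scan"
```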

**More info**  
+ [Patch compliance state values](patch-manager-compliance-states.md)

# Patching applications released by Microsoft on Windows Server


Use the information in this topic to help you prepare to patch applications on Windows Server using Patch Manager, a tool in AWS Systems Manager.

**Microsoft application patching**  
Patching support for applications on Windows Server managed nodes is limited to applications released by Microsoft.

**Note**  
In some cases, Microsoft releases patches for applications that don't specify an updated date and time. In these cases, an updated date and time of `01/01/1970` is supplied by default.

**Patch baselines to patch applications released by Microsoft**  
For Windows Server, three predefined patch baselines are provided. The patch baselines `AWS-DefaultPatchBaseline` and `AWS-WindowsPredefinedPatchBaseline-OS` support only operating system updates on the Windows operating system itself. `AWS-DefaultPatchBaseline` is used as the default patch baseline for Windows Server managed nodes unless you specify a different patch baseline. The configuration settings in these two patch baselines are the same. The newer of the two, `AWS-WindowsPredefinedPatchBaseline-OS`, was created to distinguish it from the third predefined patch baseline for Windows Server. That patch baseline, `AWS-WindowsPredefinedPatchBaseline-OS-Applications`, can be used to apply patches to both the Windows Server operating system and supported applications released by Microsoft.

You can also create a custom patch baseline to update applications released by Microsoft on Windows Server machines.

**Support for patching applications released by Microsoft on on-premises servers, edge devices, VMs, and other non-EC2 nodes**  
To patch applications released by Microsoft on virtual machines (VMs) and other non-EC2 managed nodes, you must turn on the advanced-instances tier. There is a charge to use the advanced-instances tier. **However, there is no additional charge to patch applications released by Microsoft on Amazon Elastic Compute Cloud (Amazon EC2) instances.** For more information, see [Configuring instance tiers](fleet-manager-configure-instance-tiers.md).

**Windows update option for "other Microsoft products"**  
For Patch Manager to patch applications released by Microsoft on your Windows Server managed nodes, the Windows Update option **Give me updates for other Microsoft products when I update Windows** must be activated on the managed node. 

For information about allowing this option on a single managed node, see [Update Office with Microsoft Update](https://support.microsoft.com/en-us/office/update-office-with-microsoft-update-f59d3f9d-bd5d-4d3b-a08e-1dd659cf5282) on the Microsoft Support website.

For a fleet of managed nodes running Windows Server 2016 and later, you can use a Group Policy Object (GPO) to turn on the setting. In the Group Policy Management Editor, go to **Computer Configuration**, **Administrative Templates**, **Windows Components**, **Windows Updates**, and choose **Install updates for other Microsoft products**. We also recommend configuring the GPO with additional parameters that prevent unplanned automatic updates and reboots outside of Patch Manager. For more information, see [Configuring Automatic Updates in a Non-Active Directory Environment](https://docs.microsoft.com/de-de/security-updates/windowsupdateservices/18127499) on the Microsoft technical documentation website.

For a fleet of managed nodes running Windows Server 2012 or 2012 R2, you can turn on the option by using a script, as described in [Enabling and Disabling Microsoft Update in Windows 7 via Script](https://docs.microsoft.com/en-us/archive/blogs/technet/danbuche/enabling-and-disabling-microsoft-update-in-windows-7-via-script) on the Microsoft Docs Blog website. For example, you could do the following:

1. Save the script from the blog post in a file.

1. Upload the file to an Amazon Simple Storage Service (Amazon S3) bucket or other accessible location.

1. Use Run Command, a tool in AWS Systems Manager, to run the script on your managed nodes using the Systems Manager document (SSM document) `AWS-RunPowerShellScript` with a command similar to the following.

   ```
   Invoke-WebRequest `
       -Uri "https://s3.aws-api-domain/amzn-s3-demo-bucket/script.vbs" `
       -Outfile "C:\script.vbs"
   cscript C:\script.vbs
   ```

**Minimum parameter requirements**  
To include applications released by Microsoft in your custom patch baseline, you must, at a minimum, specify the product that you want to patch. The following AWS Command Line Interface (AWS CLI) command demonstrates the minimal requirements to patch a product, such as Microsoft Office 2016.

------
#### [ Linux & macOS ]

```
aws ssm create-patch-baseline \
    --name "My-Windows-App-Baseline" \
    --approval-rules "PatchRules=[{PatchFilterGroup={PatchFilters=[{Key=PRODUCT,Values='Office 2016'},{Key=PATCH_SET,Values='APPLICATION'}]},ApproveAfterDays=5}]"
```

------
#### [ Windows Server ]

```
aws ssm create-patch-baseline ^
    --name "My-Windows-App-Baseline" ^
    --approval-rules "PatchRules=[{PatchFilterGroup={PatchFilters=[{Key=PRODUCT,Values='Office 2016'},{Key=PATCH_SET,Values='APPLICATION'}]},ApproveAfterDays=5}]"
```

------

If you specify the Microsoft application product family, each product you specify must be a supported member of the selected product family. For example, to patch the product "Active Directory Rights Management Services Client 2.0," you must specify its product family as "Active Directory" and not, for example, "Office" or "SQL Server." The following AWS CLI command demonstrates a matched pairing of product family and product.

------
#### [ Linux & macOS ]

```
aws ssm create-patch-baseline \
    --name "My-Windows-App-Baseline" \
    --approval-rules "PatchRules=[{PatchFilterGroup={PatchFilters=[{Key=PRODUCT_FAMILY,Values='Active Directory'},{Key=PRODUCT,Values='Active Directory Rights Management Services Client 2.0'},{Key=PATCH_SET,Values='APPLICATION'}]},ApproveAfterDays=5}]"
```

------
#### [ Windows Server ]

```
aws ssm create-patch-baseline ^
    --name "My-Windows-App-Baseline" ^
    --approval-rules "PatchRules=[{PatchFilterGroup={PatchFilters=[{Key=PRODUCT_FAMILY,Values='Active Directory'},{Key=PRODUCT,Values='Active Directory Rights Management Services Client 2.0'},{Key=PATCH_SET,Values='APPLICATION'}]},ApproveAfterDays=5}]"
```

------

**Note**  
If you receive an error message about a mismatched product and family pairing, see [Issue: mismatched product family/product pairs](patch-manager-troubleshooting.md#patch-manager-troubleshooting-product-family-mismatch) for help resolving the issue.

# Using Kernel Live Patching on Amazon Linux 2 managed nodes


Kernel Live Patching for Amazon Linux 2 allows you to apply security vulnerability and critical bug patches to a running Linux kernel without reboots or disruptions to running applications. This allows you to benefit from improved service and application availability, while keeping your infrastructure secure and up to date. Kernel Live Patching is supported on Amazon EC2 instances, AWS IoT Greengrass core devices, and [on-premises virtual machines](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/amazon-linux-2-virtual-machine.html) running Amazon Linux 2.

For general information about Kernel Live Patching, see [Kernel Live Patching on AL2](https://docs.aws.amazon.com/linux/al2/ug/al2-live-patching.html) in the *Amazon Linux 2 User Guide*.

After you turn on Kernel Live Patching on an Amazon Linux 2 managed node, you can use Patch Manager, a tool in AWS Systems Manager, to apply kernel live patches to the managed node. Using Patch Manager is an alternative to using existing yum workflows on the node to apply the updates.

**Before you begin**  
To use Patch Manager to apply kernel live patches to your Amazon Linux 2 managed nodes, ensure your nodes are based on the correct architecture and kernel version. For information, see [Supported configurations and prerequisites](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/al2-live-patching.html#al2-live-patching-prereq) in the *Amazon EC2 User Guide*.

**Topics**
+ [

## Kernel Live Patching using Patch Manager
](#about-klp)
+ [

## How Kernel Live Patching using Patch Manager works
](#how-klp-works)
+ [

# Turning on Kernel Live Patching using Run Command
](enable-klp.md)
+ [

# Applying kernel live patches using Run Command
](install-klp.md)
+ [

# Turning off Kernel Live Patching using Run Command
](disable-klp.md)

## Kernel Live Patching using Patch Manager


**Updating the kernel version**  
You don't need to reboot a managed node after applying a kernel live patch update. However, AWS provides kernel live patches for an Amazon Linux 2 kernel version for up to three months after its release. After the three-month period, you must update to a later kernel version to continue to receive kernel live patches. We recommend using a maintenance window to schedule a reboot of your node at least once every three months to prompt the kernel version update.

**Uninstalling kernel live patches**  
Kernel live patches can't be uninstalled using Patch Manager. Instead, you can turn off Kernel Live Patching, which removes the RPM packages for the applied kernel live patches. For more information, see [Turning off Kernel Live Patching using Run Command](disable-klp.md).

**Kernel compliance**  
In some cases, installing all CVE fixes from live patches for the current kernel version can bring that kernel into the same compliance state that a newer kernel version would have. When that happens, the newer version is reported as `Installed`, and the managed node is reported as `Compliant`. However, no installation time is reported for the newer kernel version.

**One kernel live patch, multiple CVEs**  
If a kernel live patch addresses multiple CVEs, and those CVEs have various classification and severity values, only the highest classification and severity from among the CVEs is reported for the patch. 

The remainder of this section describes how to use Patch Manager to apply kernel live patches to managed nodes that meet these requirements.

## How Kernel Live Patching using Patch Manager works


AWS releases two types of kernel live patches for Amazon Linux 2: security updates and bug fixes. To apply these patches, you use a patch baseline that targets only the classifications and severities listed in the following table.


| Classification | Severity | 
| --- | --- | 
| Security | Critical, Important | 
| Bugfix | All | 

You can create a custom patch baseline that targets only these patches, or use the predefined `AWS-AmazonLinux2DefaultPatchBaseline` patch baseline. In other words, when you use `AWS-AmazonLinux2DefaultPatchBaseline` with Amazon Linux 2 managed nodes that have Kernel Live Patching turned on, kernel live patches are applied.

**Note**  
The `AWS-AmazonLinux2DefaultPatchBaseline` configuration specifies a 7-day waiting period after a patch is released or last updated before it's installed automatically. If you don't want to wait 7 days for kernel live patches to be auto-approved, you can create and use a custom patch baseline. In your patch baseline, you can specify no auto-approval waiting period, or specify a shorter or longer one. For more information, see [Working with custom patch baselines](patch-manager-manage-patch-baselines.md).
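
As an illustrative sketch only (the baseline name is hypothetical, not from this guide), such a custom baseline might be created with the AWS CLI as follows. The approval rules approve `Security` (`Critical`, `Important`) and `Bugfix` (`All`) patches with no waiting period:

```shell
# Hypothetical sketch: create a custom Amazon Linux 2 patch baseline with
# no auto-approval waiting period (ApproveAfterDays=0). The baseline name
# is illustrative.
rules='{
  "PatchRules": [
    {
      "PatchFilterGroup": {
        "PatchFilters": [
          {"Key": "CLASSIFICATION", "Values": ["Security"]},
          {"Key": "SEVERITY", "Values": ["Critical", "Important"]}
        ]
      },
      "ApproveAfterDays": 0
    },
    {
      "PatchFilterGroup": {
        "PatchFilters": [
          {"Key": "CLASSIFICATION", "Values": ["Bugfix"]}
        ]
      },
      "ApproveAfterDays": 0
    }
  ]
}'

# Check that the rule document is valid JSON before sending it.
echo "$rules" | python3 -m json.tool > /dev/null && echo "approval rules OK"

# Requires AWS credentials; the command is skipped if the AWS CLI is absent.
command -v aws > /dev/null && aws ssm create-patch-baseline \
    --name "KernelLivePatching-Baseline" \
    --operating-system "AMAZON_LINUX_2" \
    --approval-rules "$rules" \
  || echo "aws CLI not available; command not run"
```

The available filter keys and severity values vary by operating system; see [Working with custom patch baselines](patch-manager-manage-patch-baselines.md) for the authoritative rule options.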

We recommend the following strategy to patch your managed nodes with kernel live updates:

1. Turn on Kernel Live Patching on your Amazon Linux 2 managed nodes.

1. Use Run Command, a tool in AWS Systems Manager, to run a `Scan` operation on your managed nodes using the predefined `AWS-AmazonLinux2DefaultPatchBaseline` or a custom patch baseline that targets only `Security` updates with `Critical` and `Important` severities and `Bugfix` updates with a severity of `All`.

1. Use Compliance, a tool in AWS Systems Manager, to review whether non-compliance for patching is reported for any of the managed nodes that were scanned. If so, view the node compliance details to determine whether any kernel live patches are missing from the managed node.

1. To install missing kernel live patches, use Run Command with the same patch baseline you specified before, but this time run an `Install` operation instead of a `Scan` operation.

   Because kernel live patches are installed without the need to reboot, you can choose the `NoReboot` reboot option for this operation. 
**Note**  
You can still reboot the managed node if required for other types of patches installed on it, or if you want to update to a newer kernel. In these cases, choose the `RebootIfNeeded` reboot option instead.

1. Return to Compliance to verify that the kernel live patches were installed.
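
The compliance check in step 3 can also be performed from the AWS CLI. The following is a hedged sketch; the instance ID is a placeholder:

```shell
# Hypothetical sketch: summarize a node's patch compliance counts from
# the CLI instead of the Compliance console. The instance ID is a
# placeholder; requires AWS credentials.
instance_id="i-02573cafcfEXAMPLE"

command -v aws > /dev/null && aws ssm describe-instance-patch-states \
    --instance-ids "$instance_id" \
    --query "InstancePatchStates[].{Instance:InstanceId,Missing:MissingCount,Failed:FailedCount,Installed:InstalledCount}" \
    --output table \
  || echo "aws CLI not available; command not run"
```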

# Turning on Kernel Live Patching using Run Command
Turning on Kernel Live Patching

To turn on Kernel Live Patching, you can either run `yum` commands on your managed nodes or use Run Command with the Systems Manager document (SSM document) `AWS-ConfigureKernelLivePatching`.

For information about turning on Kernel Live Patching by running `yum` commands directly on the managed node, see [Enable Kernel Live Patching](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/al2-live-patching.html#al2-live-patching-prereq) in the *Amazon EC2 User Guide*.

**Note**  
When you turn on Kernel Live Patching, if the kernel already running on the managed node is *earlier* than `kernel-4.14.165-131.185.amzn2.x86_64` (the minimum supported version), the process installs the latest available kernel version and reboots the managed node. If the node is already running `kernel-4.14.165-131.185.amzn2.x86_64` or later, the process doesn't install a newer version and doesn't reboot the node.
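
You can check in advance whether enabling the feature will trigger a reboot by comparing the node's running kernel to the minimum version. A sketch, run on the node itself:

```shell
# Compare the running kernel to the minimum version supported by Kernel
# Live Patching. A version sort (sort -V) puts the older version first.
min="4.14.165-131.185"
cur="$(uname -r)"

if [ "$(printf '%s\n' "$min" "$cur" | sort -V | head -n1)" = "$min" ]; then
  echo "kernel $cur meets the minimum ($min); no reboot expected"
else
  echo "kernel $cur is earlier than $min; enabling will update the kernel and reboot"
fi
```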

**To turn on Kernel Live Patching using Run Command (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Run Command**.

1. Choose **Run command**.

1. In the **Command document** list, choose the SSM document `AWS-ConfigureKernelLivePatching`.

1. In the **Command parameters** section, specify whether you want managed nodes to reboot as part of this operation.

1. For information about working with the remaining controls on this page, see [Running commands from the console](running-commands-console.md).

1. Choose **Run**.

**To turn on Kernel Live Patching (AWS CLI)**
+ Run the following command on your local machine.

------
#### [ Linux & macOS ]

  ```
  aws ssm send-command \
      --document-name "AWS-ConfigureKernelLivePatching" \
      --parameters "EnableOrDisable=Enable" \
      --targets "Key=InstanceIds,Values=instance-id"
  ```

------
#### [ Windows Server ]

  ```
  aws ssm send-command ^
      --document-name "AWS-ConfigureKernelLivePatching" ^
      --parameters "EnableOrDisable=Enable" ^
      --targets "Key=InstanceIds,Values=instance-id"
  ```

------

  Replace *instance-id* with the ID of the Amazon Linux 2 managed node on which you want to turn on the feature, such as i-02573cafcfEXAMPLE. To turn on the feature on multiple managed nodes, you can use either of the following formats.
  + `--targets "Key=InstanceIds,Values=instance-id1,instance-id2"`
  + `--targets "Key=tag:tag-key,Values=tag-value"`

  For information about other options you can use in the command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html) in the *AWS CLI Command Reference*.
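
After sending the command, you can capture its ID and check the per-instance result. A hedged sketch (the instance ID is a placeholder; requires AWS credentials):

```shell
# Hypothetical sketch: send the command, capture its ID, then check the
# per-instance status. Skipped entirely if the AWS CLI is absent.
doc="AWS-ConfigureKernelLivePatching"

if command -v aws > /dev/null; then
  command_id=$(aws ssm send-command \
      --document-name "$doc" \
      --parameters "EnableOrDisable=Enable" \
      --targets "Key=InstanceIds,Values=i-02573cafcfEXAMPLE" \
      --query "Command.CommandId" --output text)
  aws ssm list-command-invocations --command-id "$command_id" --details \
      --query "CommandInvocations[].{Instance:InstanceId,Status:Status}"
else
  echo "aws CLI not available; commands not run"
fi
```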

# Applying kernel live patches using Run Command
Applying kernel live patches

To apply kernel live patches, you can either run `yum` commands on your managed nodes or use Run Command and the SSM document `AWS-RunPatchBaseline`. 

For information about applying kernel live patches by running `yum` commands directly on the managed node, see [Apply kernel live patches](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/al2-live-patching.html#al2-live-patching-apply) in the *Amazon EC2 User Guide*.

**To apply kernel live patches using Run Command (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Run Command**.

1. Choose **Run command**.

1. In the **Command document** list, choose the SSM document `AWS-RunPatchBaseline`.

1. In the **Command parameters** section, do one of the following:
   + If you're checking whether new kernel live patches are available, for **Operation**, choose `Scan`. For **Reboot Option**, if you don't want your managed nodes to reboot after this operation, choose `NoReboot`. After the operation is complete, you can check for new patches and compliance status in Compliance.
   + If you checked patch compliance already and are ready to apply available kernel live patches, for **Operation**, choose `Install`. For **Reboot Option**, if you don't want your managed nodes to reboot after this operation, choose `NoReboot`.

1. For information about working with the remaining controls on this page, see [Running commands from the console](running-commands-console.md).

1. Choose **Run**.

**To apply kernel live patches using Run Command (AWS CLI)**

1. To perform a `Scan` operation before checking your results in Compliance, run the following command from your local machine.

------
#### [ Linux & macOS ]

   ```
   aws ssm send-command \
       --document-name "AWS-RunPatchBaseline" \
       --targets "Key=InstanceIds,Values=instance-id" \
       --parameters '{"Operation":["Scan"],"RebootOption":["RebootIfNeeded"]}'
   ```

------
#### [ Windows Server ]

   ```
   aws ssm send-command ^
       --document-name "AWS-RunPatchBaseline" ^
       --targets "Key=InstanceIds,Values=instance-id" ^
       --parameters {\"Operation\":[\"Scan\"],\"RebootOption\":[\"RebootIfNeeded\"]}
   ```

------

   For information about other options you can use in the command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html) in the *AWS CLI Command Reference*.

1. To perform an `Install` operation after checking your results in Compliance, run the following command from your local machine.

------
#### [ Linux & macOS ]

   ```
   aws ssm send-command \
       --document-name "AWS-RunPatchBaseline" \
       --targets "Key=InstanceIds,Values=instance-id" \
       --parameters '{"Operation":["Install"],"RebootOption":["NoReboot"]}'
   ```

------
#### [ Windows Server ]

   ```
   aws ssm send-command ^
       --document-name "AWS-RunPatchBaseline" ^
       --targets "Key=InstanceIds,Values=instance-id" ^
       --parameters {\"Operation\":[\"Install\"],\"RebootOption\":[\"NoReboot\"]}
   ```

------

In both of the preceding commands, replace *instance-id* with the ID of the Amazon Linux 2 managed node on which you want to apply kernel live patches, such as i-02573cafcfEXAMPLE. To apply kernel live patches on multiple managed nodes, you can use either of the following formats.
+ `--targets "Key=InstanceIds,Values=instance-id1,instance-id2"`
+ `--targets "Key=tag:tag-key,Values=tag-value"`

For information about other options you can use in these commands, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html) in the *AWS CLI Command Reference*.
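
To verify the result from the CLI rather than the Compliance console, you might list patches still reported as `Missing`. A hedged sketch (the instance ID is a placeholder):

```shell
# Hypothetical sketch: list any patches still reported as Missing for a
# node after the Install operation, including their CVE IDs. The instance
# ID is a placeholder; requires AWS credentials.
instance_id="i-02573cafcfEXAMPLE"

command -v aws > /dev/null && aws ssm describe-instance-patches \
    --instance-id "$instance_id" \
    --filters "Key=State,Values=Missing" \
    --query "Patches[].{Title:Title,Severity:Severity,CVEIds:CVEIds}" \
  || echo "aws CLI not available; command not run"
```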

# Turning off Kernel Live Patching using Run Command
Turning off Kernel Live Patching

To turn off Kernel Live Patching, you can either run `yum` commands on your managed nodes or use Run Command and the SSM document `AWS-ConfigureKernelLivePatching`.

**Note**  
If you no longer need to use Kernel Live Patching, you can turn it off at any time. In most cases, turning off the feature isn't necessary.

For information about turning off Kernel Live Patching by running `yum` commands directly on the managed node, see [Disable Kernel Live Patching](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/al2-live-patching.html#al2-live-patching-enable) in the *Amazon EC2 User Guide*.

**Note**  
When you turn off Kernel Live Patching, the process uninstalls the Kernel Live Patching plugin and then reboots the managed node.

**To turn off Kernel Live Patching using Run Command (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Run Command**.

1. Choose **Run command**.

1. In the **Command document** list, choose the SSM document `AWS-ConfigureKernelLivePatching`.

1. In the **Command parameters** section, specify values for required parameters.

1. For information about working with the remaining controls on this page, see [Running commands from the console](running-commands-console.md).

1. Choose **Run**.

**To turn off Kernel Live Patching (AWS CLI)**
+ Run a command similar to the following.

------
#### [ Linux & macOS ]

  ```
  aws ssm send-command \
      --document-name "AWS-ConfigureKernelLivePatching" \
      --targets "Key=InstanceIds,Values=instance-id" \
      --parameters "EnableOrDisable=Disable"
  ```

------
#### [ Windows Server ]

  ```
  aws ssm send-command ^
      --document-name "AWS-ConfigureKernelLivePatching" ^
      --targets "Key=InstanceIds,Values=instance-id" ^
      --parameters "EnableOrDisable=Disable"
  ```

------

  Replace *instance-id* with the ID of the Amazon Linux 2 managed node on which you want to turn off the feature, such as i-02573cafcfEXAMPLE. To turn off the feature on multiple managed nodes, you can use either of the following formats.
  + `--targets "Key=InstanceIds,Values=instance-id1,instance-id2"`
  + `--targets "Key=tag:tag-key,Values=tag-value"`

  For information about other options you can use in the command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html) in the *AWS CLI Command Reference*.

# Working with Patch Manager resources and compliance using the console


To use Patch Manager, a tool in AWS Systems Manager, complete the following tasks. These tasks are described in more detail in this section.

1. Verify that the AWS predefined patch baseline for each operating system type that you use meets your needs. If it doesn't, create a patch baseline that defines a standard set of patches for that managed node type and set it as the default instead.

1. Organize managed nodes into patch groups by using Amazon Elastic Compute Cloud (Amazon EC2) tags (optional, but recommended).

1. Do one of the following:
   + (Recommended) Configure a patch policy in Quick Setup, a tool in Systems Manager, that lets you install missing patches on a schedule for an entire organization, a subset of organizational units, or a single AWS account. For more information, see [Configure patching for instances in an organization using a Quick Setup patch policy](quick-setup-patch-manager.md).
   + Create a maintenance window that uses the Systems Manager document (SSM document) `AWS-RunPatchBaseline` in a Run Command task type. For more information, see [Tutorial: Create a maintenance window for patching using the console](maintenance-window-tutorial-patching.md).
   + Manually run `AWS-RunPatchBaseline` in a Run Command operation. For more information, see [Running commands from the console](running-commands-console.md).
   + Manually patch nodes on demand using the **Patch now** feature. For more information, see [Patching managed nodes on demand](patch-manager-patch-now-on-demand.md).

1. Monitor patching to verify compliance and investigate failures.
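
Step 2 (organizing nodes into patch groups) can be sketched with the CLI as follows; the instance ID, baseline ID, and group name are all placeholders:

```shell
# Hypothetical sketch: tag an instance into a patch group, then register
# that group against a patch baseline. All IDs are placeholders; requires
# AWS credentials. Skipped entirely if the AWS CLI is absent.
patch_group="WebServers"

if command -v aws > /dev/null; then
  aws ec2 create-tags \
      --resources "i-02573cafcfEXAMPLE" \
      --tags "Key=Patch Group,Value=$patch_group"
  aws ssm register-patch-baseline-for-patch-group \
      --baseline-id "pb-0123456789abcdef0" \
      --patch-group "$patch_group"
else
  echo "aws CLI not available; commands not run"
fi
```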

**Topics**
+ [

# Creating a patch policy
](patch-manager-create-a-patch-policy.md)
+ [

# Viewing patch Dashboard summaries
](patch-manager-view-dashboard-summaries.md)
+ [

# Working with patch compliance reports
](patch-manager-compliance-reports.md)
+ [

# Patching managed nodes on demand
](patch-manager-patch-now-on-demand.md)
+ [

# Working with patch baselines
](patch-manager-create-a-patch-baseline.md)
+ [

# Viewing available patches
](patch-manager-view-available-patches.md)
+ [

# Creating and managing patch groups
](patch-manager-tag-a-patch-group.md)
+ [

# Integrating Patch Manager with AWS Security Hub CSPM
](patch-manager-security-hub-integration.md)

# Creating a patch policy


A patch policy is a configuration you set up using Quick Setup, a tool in AWS Systems Manager. Patch policies provide more extensive and more centralized control over your patching operations than is available with other methods of configuring patching. A patch policy defines the schedule and baseline to use when automatically patching your nodes and applications.

For more information, see the following topics:
+ [Patch policy configurations in Quick Setup](patch-manager-policies.md)
+ [Configure patching for instances in an organization using a Quick Setup patch policy](quick-setup-patch-manager.md)

# Viewing patch Dashboard summaries
Viewing patch Dashboard summaries

The **Dashboard** tab in Patch Manager provides you with a summary view in the console that you can use to monitor your patching operations in a consolidated view. Patch Manager is a tool in AWS Systems Manager. On the **Dashboard** tab, you can view the following:
+ A snapshot of how many managed nodes are compliant and noncompliant with patching rules.
+ A snapshot of the age of patch compliance results for your managed nodes.
+ A linked count of how many noncompliant managed nodes there are for each of the most common reasons for noncompliance.
+ A linked list of the most recent patching operations.
+ A linked list of the recurring patching tasks that have been set up.

**To view patch Dashboard summaries**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose the **Dashboard** tab.

1. Scroll to the section containing summary data that you want to view:
   + **Amazon EC2 instance management**
   + **Compliance summary**
   + **Noncompliance counts**
   + **Compliance reports**
   + **Non-patch policy-based operations**
   + **Non-patch policy-based recurring tasks**

# Working with patch compliance reports
Patch compliance reports

Use the information in the following topics to help you generate and work with patch compliance reports in Patch Manager, a tool in AWS Systems Manager.

The information in the following topics applies no matter which method or type of configuration you're using for your patching operations:
+ A patch policy configured in Quick Setup
+ A Host Management option configured in Quick Setup
+ A maintenance window to run a patch `Scan` or `Install` task
+ An on-demand **Patch now** operation

**Important**  
Patch compliance reports are point-in-time snapshots generated only by successful patching operations. Each report contains a capture time that identifies when the compliance status was calculated.  
If you have multiple types of operations in place to scan your instances for patch compliance, note that each scan overwrites the patch compliance data of previous scans. As a result, you might end up with unexpected results in your patch compliance data. For more information, see [Identifying the execution that created patch compliance data](patch-manager-compliance-data-overwrites.md).  
To verify which patch baseline was used to generate the latest compliance information, navigate to the **Compliance reporting** tab in Patch Manager, locate the row for the managed node you want information about, and then choose the baseline ID in the **Baseline ID used** column.

**Topics**
+ [

# Viewing patch compliance results
](patch-manager-view-compliance-results.md)
+ [

# Generating .csv patch compliance reports
](patch-manager-store-compliance-results-in-s3.md)
+ [

# Remediating noncompliant managed nodes with Patch Manager
](patch-manager-noncompliant-nodes.md)
+ [

# Identifying the execution that created patch compliance data
](patch-manager-compliance-data-overwrites.md)

# Viewing patch compliance results
Viewing patch compliance results

Use these procedures to view patch compliance information about your managed nodes.

These procedures apply to patch operations that use the `AWS-RunPatchBaseline` document. For information about viewing patch compliance information for patch operations that use the `AWS-RunPatchBaselineAssociation` document, see [Identifying noncompliant managed nodes](patch-manager-find-noncompliant-nodes.md).

**Note**  
The patch scanning operations for Quick Setup and Explorer use the `AWS-RunPatchBaselineAssociation` document. Quick Setup and Explorer are both tools in AWS Systems Manager.

**Identify the patch solution for a specific CVE issue (Linux)**  
For many Linux-based operating systems, patch compliance results indicate which Common Vulnerabilities and Exposure (CVE) bulletin issues are resolved by which patches. This information can help you determine how urgently you need to install a missing or failed patch.

CVE details are included for supported versions of the following operating system types:
+ AlmaLinux
+ Amazon Linux 2
+ Amazon Linux 2023
+ Oracle Linux
+ Red Hat Enterprise Linux (RHEL)
+ Rocky Linux

**Note**  
By default, CentOS Stream doesn't provide CVE information about updates. You can, however, allow this support by using third-party repositories such as the Extra Packages for Enterprise Linux (EPEL) repository published by Fedora. For information, see [EPEL](https://fedoraproject.org/wiki/EPEL) on the Fedora Wiki.  
Currently, CVE ID values are reported only for patches with a status of `Missing` or `Failed`.

You can also add CVE IDs to your lists of approved or rejected patches in your patch baselines, as the situation and your patching goals warrant.

For information about working with approved and rejected patch lists, see the following topics:
+ [Working with custom patch baselines](patch-manager-manage-patch-baselines.md)
+ [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md)
+ [How patch baseline rules work on Linux-based systems](patch-manager-linux-rules.md)
+ [How patches are installed](patch-manager-installing-patches.md)
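
As a hedged sketch, adding a CVE ID to a custom baseline's approved-patches list might look like the following; the baseline ID and CVE ID are placeholders:

```shell
# Hypothetical sketch: approve a specific CVE in a custom patch baseline.
# The baseline ID and CVE ID are placeholders; requires AWS credentials.
cve_id="CVE-2024-0001"

command -v aws > /dev/null && aws ssm update-patch-baseline \
    --baseline-id "pb-0123456789abcdef0" \
    --approved-patches "$cve_id" \
  || echo "aws CLI not available; command not run"
```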

**Note**  
In some cases, Microsoft releases patches for applications that don't specify an updated date and time. In these cases, an updated date and time of `01/01/1970` is supplied by default.

## Viewing patching compliance results


Use the following procedures to view patch compliance results in the AWS Systems Manager console. 

**Note**  
For information about generating patch compliance reports that are downloaded to an Amazon Simple Storage Service (Amazon S3) bucket, see [Generating .csv patch compliance reports](patch-manager-store-compliance-results-in-s3.md).

**To view patch compliance results**

1. Do one of the following.

   **Option 1** (recommended) – Navigate from Patch Manager, a tool in AWS Systems Manager:
   + In the navigation pane, choose **Patch Manager**.
   + Choose the **Compliance reporting** tab.
   + In the **Node patching details** area, choose the node ID of the managed node for which you want to review patch compliance results. Nodes that are `stopped` or `terminated` will not be displayed here.
   + In the **Details** area, in the **Properties** list, choose **Patches**.

   **Option 2** – Navigate from Compliance, a tool in AWS Systems Manager:
   + In the navigation pane, choose **Compliance**.
   + For **Compliance resources summary**, choose a number in the column for the types of patch resources you want to review, such as **Non-Compliant resources**.
   + Below, in the **Resource** list, choose the ID of the managed node for which you want to review patch compliance results.
   + In the **Details** area, in the **Properties** list, choose **Patches**.

   **Option 3** – Navigate from Fleet Manager, a tool in AWS Systems Manager.
   + In the navigation pane, choose **Fleet Manager**.
   + In the **Managed instances** area, choose the ID of the managed node for which you want to review patch compliance results.
   + In the **Details** area, in the **Properties** list, choose **Patches**.

1. (Optional) In the Search box (![\[The Search icon\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/search-icon.png)), choose from the available filters.

   For example, for Red Hat Enterprise Linux (RHEL), choose from the following:
   + Name
   + Classification
   + State
   + Severity

    For Windows Server, choose from the following:
   + KB
   + Classification
   + State
   + Severity

1. Choose one of the available values for the filter type you chose. For example, if you chose **State**, now choose a compliance state such as **InstalledPendingReboot**, **Failed**, or **Missing**.
**Note**  
Currently, CVE ID values are reported only for patches with a status of `Missing` or `Failed`.

1. Depending on the compliance state of the managed node, you can choose what action to take to remedy any noncompliant nodes.

   For example, you can choose to patch your noncompliant managed nodes immediately. For information about patching your managed nodes on demand, see [Patching managed nodes on demand](patch-manager-patch-now-on-demand.md).

   For information about patch compliance states, see [Patch compliance state values](patch-manager-compliance-states.md).
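
If you prefer the CLI to the console for this check, the following is a hedged sketch of summarizing patch compliance status across resources:

```shell
# Hypothetical sketch: summarize patch compliance status per resource
# from the CLI. Requires AWS credentials; skipped if the CLI is absent.
filter="Key=ComplianceType,Values=Patch,Type=EQUAL"

command -v aws > /dev/null && aws ssm list-resource-compliance-summaries \
    --filters "$filter" \
    --query "ResourceComplianceSummaryItems[].{Id:ResourceId,Status:Status}" \
  || echo "aws CLI not available; command not run"
```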

# Generating .csv patch compliance reports
Generating .csv patch compliance reports

You can use the AWS Systems Manager console to generate patch compliance reports that are saved as a .csv file to an Amazon Simple Storage Service (Amazon S3) bucket of your choice. You can generate a single on-demand report or specify a schedule for generating the reports automatically. 

Reports can be generated for a single managed node or for all managed nodes in your selected AWS account and AWS Region. For a single node, a report contains comprehensive details, including the IDs of patches related to a node being noncompliant. A report on all managed nodes provides only summary information and counts of each node's noncompliant patches.

After a report is generated, you can use a tool like Amazon Quick Suite to import and analyze the data. Quick Suite is a business intelligence (BI) service you can use to explore and interpret information in an interactive visual environment. For more information, see the [Amazon Quick Suite User Guide](https://docs.aws.amazon.com/quicksuite/latest/userguide/what-is.html).

**Note**  
When you create a custom patch baseline, you can specify a compliance severity level for patches approved by that patch baseline, such as `Critical` or `High`. If the patch state of any approved patch is reported as `Missing`, then the patch baseline's overall reported compliance severity is the severity level you specified.

You can also specify an Amazon Simple Notification Service (Amazon SNS) topic to use for sending notifications when a report is generated.

**Service roles for generating patch compliance reports**  
The first time you generate a report, Systems Manager creates an Automation assume role named `AWS-SystemsManager-PatchSummaryExportRole` to use for the export process to S3.

**Note**  
If you are exporting compliance data to an encrypted S3 bucket, you must update the key policy of the AWS KMS key associated with the bucket to grant the necessary permissions to `AWS-SystemsManager-PatchSummaryExportRole`. For instance, add a statement similar to the following to the key policy:  

```
{
    "Effect": "Allow",
    "Principal": {
        "AWS": "role-arn"
    },
    "Action": [
        "kms:GenerateDataKey"
    ],
    "Resource": "*"
}
```
Replace *role-arn* with the Amazon Resource Name (ARN) of the role created in your account, in the format `arn:aws:iam::111222333444:role/service-role/AWS-SystemsManager-PatchSummaryExportRole`.  
For more information, see [Key policies in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) in the *AWS Key Management Service Developer Guide*.

The first time you generate a report on a schedule, Systems Manager creates another service role named `AWS-EventBridge-Start-SSMAutomationRole`, along with the service role `AWS-SystemsManager-PatchSummaryExportRole` (if not created already) to use for the export process. `AWS-EventBridge-Start-SSMAutomationRole` enables Amazon EventBridge to start an automation using the runbook [AWS-ExportPatchReportToS3](https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/automation-aws-exportpatchreporttos3).

We recommend against attempting to modify these policies and roles. Doing so could cause patch compliance report generation to fail. For more information, see [Troubleshooting patch compliance report generation](#patch-compliance-reports-troubleshooting).

**Topics**
+ [

## What's in a generated patch compliance report?
](#patch-compliance-reports-to-s3-examples)
+ [

## Generating patch compliance reports for a single managed node
](#patch-compliance-reports-to-s3-one-instance)
+ [

## Generating patch compliance reports for all managed nodes
](#patch-compliance-reports-to-s3-all-instances)
+ [

## Viewing patch compliance reporting history
](#patch-compliance-reporting-history)
+ [

## Viewing patch compliance reporting schedules
](#patch-compliance-reporting-schedules)
+ [

## Troubleshooting patch compliance report generation
](#patch-compliance-reports-troubleshooting)

## What's in a generated patch compliance report?


This topic provides information about the types of content included in the patch compliance reports that are generated and downloaded to a specified S3 bucket.

### Report format for a single managed node


A report generated for a single managed node provides both summary and detailed information.

[Download a sample report (single node)](https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/Sample-single-instance-patch-compliance-report.zip)

Summary information for a single managed node includes the following:
+ Index
+ Instance ID
+ Instance name
+ Instance IP
+ Platform name
+ Platform version
+ SSM Agent version
+ Patch baseline
+ Patch group
+ Compliance status
+ Compliance severity
+ Noncompliant Critical severity patch count
+ Noncompliant High severity patch count
+ Noncompliant Medium severity patch count
+ Noncompliant Low severity patch count
+ Noncompliant Informational severity patch count
+ Noncompliant Unspecified severity patch count

Detailed information for a single managed node includes the following:
+ Index
+ Instance ID
+ Instance name
+ Patch name
+ KB ID/Patch ID
+ Patch state
+ Last report time
+ Compliance level
+ Patch severity
+ Patch classification
+ CVE ID
+ Patch baseline
+ Logs URL
+ Instance IP
+ Platform name
+ Platform version
+ SSM Agent version

**Note**  
When you create a custom patch baseline, you can specify a compliance severity level for patches approved by that patch baseline, such as `Critical` or `High`. If the patch state of any approved patch is reported as `Missing`, then the patch baseline's overall reported compliance severity is the severity level you specified.

### Report format for all managed nodes


A report generated for all managed nodes provides only summary information.

[Download a sample report (all managed nodes)](https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/Sample-all-instances-patch-compliance-report.zip)

Summary information for all managed nodes includes the following:
+ Index
+ Instance ID
+ Instance name
+ Instance IP
+ Platform name
+ Platform version
+ SSM Agent version
+ Patch baseline
+ Patch group
+ Compliance status
+ Compliance severity
+ Noncompliant Critical severity patch count
+ Noncompliant High severity patch count
+ Noncompliant Medium severity patch count
+ Noncompliant Low severity patch count
+ Noncompliant Informational severity patch count
+ Noncompliant Unspecified severity patch count

## Generating patch compliance reports for a single managed node


Use the following procedure to generate a patch summary report for a single managed node in your AWS account. The report for a single managed node provides details about each patch that is out of compliance, including patch names and IDs. 

**To generate patch compliance reports for a single managed node**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose the **Compliance reporting** tab.

1. Choose the button for the row of the managed node for which you want to generate a report, and then choose **View detail**.

1. In the **Patch summary** section, choose **Export to S3**.

1. For **Report name**, enter a name to help you identify the report later.

1. For **Reporting frequency**, choose one of the following:
   + **On demand** – Create a one-time report. Skip to Step 9.
   + **On a schedule** – Specify a recurring schedule for automatically generating reports. Continue to Step 8.

1. For **Schedule type**, specify either a rate expression, such as every 3 days, or provide a cron expression to set the report frequency.

   For information about cron expressions, see [Reference: Cron and rate expressions for Systems Manager](reference-cron-and-rate-expressions.md).

1. For **Bucket name**, select the name of an S3 bucket where you want to store the .csv report files.
**Important**  
If you're working in an AWS Region that was launched after March 20, 2019, you must select an S3 bucket in that same Region. Regions launched after that date were turned off by default. For more information and a list of these Regions, see [Enabling a Region](https://docs.aws.amazon.com/general/latest/gr/rande-manage.html#rande-manage-enable) in the *Amazon Web Services General Reference*.

1. (Optional) To send notifications when the report is generated, expand the **SNS topic** section, and then choose an existing Amazon SNS topic from **SNS topic Amazon Resource Name (ARN)**.

1. Choose **Submit**.

For information about viewing a history of generated reports, see [Viewing patch compliance reporting history](#patch-compliance-reporting-history).

For information about viewing details of reporting schedules you have created, see [Viewing patch compliance reporting schedules](#patch-compliance-reporting-schedules).
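For the **Schedule type** step in the procedures above, expressions like the following can be used (illustrative examples; the full syntax is described in [Reference: Cron and rate expressions for Systems Manager](reference-cron-and-rate-expressions.md)):

```
rate(12 hours)
rate(3 days)
cron(0 2 ? * SUN *)
```

The cron example would generate the report at 2:00 AM every Sunday.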

## Generating patch compliance reports for all managed nodes


Use the following procedure to generate a patch summary report for all managed nodes in your AWS account. The report for all managed nodes indicates which nodes are out of compliance and the numbers of noncompliant patches. It doesn't provide the names or other identifiers of the patches. For these additional details, you can generate a patch compliance report for a single managed node. For information, see [Generating patch compliance reports for a single managed node](#patch-compliance-reports-to-s3-one-instance) earlier in this topic. 

**To generate patch compliance reports for all managed nodes**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose the **Compliance reporting** tab.

1. Choose **Export to S3**. (Don't select a node ID first.)

1. For **Report name**, enter a name to help you identify the report later.

1. For **Reporting frequency**, choose one of the following:
   + **On demand** – Create a one-time report. Skip to Step 8.
   + **On a schedule** – Specify a recurring schedule for automatically generating reports. Continue to Step 7.

1. For **Schedule type**, specify either a rate expression, such as every 3 days, or a cron expression to set the report frequency.

   For information about cron expressions, see [Reference: Cron and rate expressions for Systems Manager](reference-cron-and-rate-expressions.md).

1. For **Bucket name**, select the name of an S3 bucket where you want to store the .csv report files.
**Important**  
If you're working in an AWS Region that was launched after March 20, 2019, you must select an S3 bucket in that same Region. Regions launched after that date were turned off by default. For more information and a list of these Regions, see [Enabling a Region](https://docs.aws.amazon.com/general/latest/gr/rande-manage.html#rande-manage-enable) in the *Amazon Web Services General Reference*.

1. (Optional) To send notifications when the report is generated, expand the **SNS topic** section, and then choose an existing Amazon SNS topic from **SNS topic Amazon Resource Name (ARN)**.

1. Choose **Submit**.

For information about viewing a history of generated reports, see [Viewing patch compliance reporting history](#patch-compliance-reporting-history).

For information about viewing details of reporting schedules you have created, see [Viewing patch compliance reporting schedules](#patch-compliance-reporting-schedules).

## Viewing patch compliance reporting history


Use the information in this topic to help you view details about the patch compliance reports generated in your AWS account.

**To view patch compliance reporting history**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose the **Compliance reporting** tab.

1. Choose **View all S3 exports**, and then choose the **Export history** tab.

## Viewing patch compliance reporting schedules


Use the information in this topic to help you view details about the patch compliance reporting schedules created in your AWS account.

**To view patch compliance reporting schedules**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose the **Compliance reporting** tab.

1. Choose **View all S3 exports**, and then choose the **Report schedule rules** tab.

## Troubleshooting patch compliance report generation


Use the following information to help you troubleshoot problems with patch compliance report generation in Patch Manager, a tool in AWS Systems Manager.

**Topics**
+ [

### A message reports that the `AWS-SystemsManager-PatchManagerExportRolePolicy` policy is corrupted
](#patch-compliance-reports-troubleshooting-1)
+ [

### After deleting patch compliance policies or roles, scheduled reports aren't generated successfully
](#patch-compliance-reports-troubleshooting-2)

### A message reports that the `AWS-SystemsManager-PatchManagerExportRolePolicy` policy is corrupted


**Problem**: You receive an error message similar to the following, indicating the `AWS-SystemsManager-PatchManagerExportRolePolicy` is corrupted:

```
An error occurred while updating the AWS-SystemsManager-PatchManagerExportRolePolicy
policy. If you have edited the policy, you might need to delete the policy, and any 
role that uses it, then try again. Systems Manager recreates the roles and policies 
you have deleted.
```
+ **Solution**: Use the IAM console or AWS CLI to delete the affected roles and policies before generating a new patch compliance report.

**To delete the corrupt policy using the console**

  1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

  1. Do one of the following:

     **On-demand reports** – If the problem occurred while generating a one-time on-demand report, in the left navigation, choose **Policies**, search for `AWS-SystemsManager-PatchManagerExportRolePolicy`, then delete the policy. Next, choose **Roles**, search for `AWS-SystemsManager-PatchSummaryExportRole`, then delete the role.

     **Scheduled reports** – If the problem occurred while generating a report on a schedule, in the left navigation, choose **Policies**, search one at a time for `AWS-EventBridge-Start-SSMAutomationRolePolicy` and `AWS-SystemsManager-PatchManagerExportRolePolicy`, and delete each policy. Next, choose **Roles**, search one at a time for `AWS-EventBridge-Start-SSMAutomationRole` and `AWS-SystemsManager-PatchSummaryExportRole`, and delete each role.

**To delete the corrupt policy using the AWS CLI**

Replace *account-id* with your AWS account ID.
  + If the problem occurred while generating a one-time on-demand report, run the following commands:

    ```
    aws iam delete-policy --policy-arn arn:aws:iam::account-id:policy/AWS-SystemsManager-PatchManagerExportRolePolicy
    ```

    ```
    aws iam delete-role --role-name AWS-SystemsManager-PatchSummaryExportRole
    ```

    If the problem occurred while generating a report on a schedule, run the following commands:

    ```
    aws iam delete-policy --policy-arn arn:aws:iam::account-id:policy/AWS-EventBridge-Start-SSMAutomationRolePolicy
    ```

    ```
    aws iam delete-policy --policy-arn arn:aws:iam::account-id:policy/AWS-SystemsManager-PatchManagerExportRolePolicy
    ```

    ```
    aws iam delete-role --role-name AWS-EventBridge-Start-SSMAutomationRole
    ```

    ```
    aws iam delete-role --role-name AWS-SystemsManager-PatchSummaryExportRole
    ```

  After completing either procedure, follow the steps to generate or schedule a new patch compliance report.

### After deleting patch compliance policies or roles, scheduled reports aren't generated successfully


**Problem**: The first time you generate a report, Systems Manager creates a service role and a policy to use for the export process (`AWS-SystemsManager-PatchSummaryExportRole` and `AWS-SystemsManager-PatchManagerExportRolePolicy`). The first time you generate a report on a schedule, Systems Manager creates another service role and policy (`AWS-EventBridge-Start-SSMAutomationRole` and `AWS-EventBridge-Start-SSMAutomationRolePolicy`). These allow Amazon EventBridge to start an automation using the runbook [AWS-ExportPatchReportToS3](https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/automation-aws-exportpatchreporttos3).

If you delete any of these policies or roles, the connections between your schedule and your specified S3 bucket and Amazon SNS topic might be lost. 
+ **Solution**: To work around this problem, we recommend deleting the previous schedule and creating a new schedule to replace the one that was experiencing issues.

# Remediating noncompliant managed nodes with Patch Manager
Remediating noncompliant managed nodes

The topics in this section provide overviews of how to identify managed nodes that are out of patch compliance and how to bring nodes into compliance.

**Topics**
+ [

# Identifying noncompliant managed nodes
](patch-manager-find-noncompliant-nodes.md)
+ [

# Patch compliance state values
](patch-manager-compliance-states.md)
+ [

# Patching noncompliant managed nodes
](patch-manager-compliance-remediation.md)

# Identifying noncompliant managed nodes


Out-of-compliance managed nodes are identified when either of two AWS Systems Manager documents (SSM documents) is run. These SSM documents reference the appropriate patch baseline for each managed node in Patch Manager, a tool in AWS Systems Manager. They evaluate the patch state of the managed node and then make compliance results available to you.

There are two SSM documents that are used to identify or update noncompliant managed nodes: `AWS-RunPatchBaseline` and `AWS-RunPatchBaselineAssociation`. Each one is used by different processes, and their compliance results are available through different channels. The following table outlines the differences between these documents.

**Note**  
Patch compliance data from Patch Manager can be sent to AWS Security Hub CSPM. Security Hub CSPM gives you a comprehensive view of your high-priority security alerts and compliance status. It also monitors the patching status of your fleet. For more information, see [Integrating Patch Manager with AWS Security Hub CSPM](patch-manager-security-hub-integration.md). 


|  | `AWS-RunPatchBaseline` | `AWS-RunPatchBaselineAssociation` | 
| --- | --- | --- | 
| Processes that use the document |  **Patch on demand** - You can scan or patch managed nodes on demand using the **Patch now** option. For information, see [Patching managed nodes on demand](patch-manager-patch-now-on-demand.md). **Systems Manager Quick Setup patch policies** – You can create a patching configuration in Quick Setup, a tool in AWS Systems Manager, that can scan for or install missing patches on separate schedules for an entire organization, a subset of organizational units, or a single AWS account. For information, see [Configure patching for instances in an organization using a Quick Setup patch policy](quick-setup-patch-manager.md). **Run a command** – You can manually run `AWS-RunPatchBaseline` in an operation in Run Command, a tool in AWS Systems Manager. For information, see [Running commands from the console](running-commands-console.md). **Maintenance window** – You can create a maintenance window that uses the SSM document `AWS-RunPatchBaseline` in a Run Command task type. For information, see [Tutorial: Create a maintenance window for patching using the console](maintenance-window-tutorial-patching.md).  |  **Systems Manager Quick Setup Host Management** – You can enable a Host Management configuration option in Quick Setup to scan your managed instances for patch compliance each day. For information, see [Set up Amazon EC2 host management using Quick Setup](quick-setup-host-management.md). **Systems Manager [Explorer](Explorer.md)** – When you allow Explorer, a tool in AWS Systems Manager, it regularly scans your managed instances for patch compliance and reports results in the Explorer dashboard.  | 
| Format of the patch scan result data |  After `AWS-RunPatchBaseline` runs, Patch Manager sends an `AWS:PatchSummary` object to Inventory, a tool in AWS Systems Manager. This report is generated only by successful patching operations and includes a capture time that identifies when the compliance status was calculated.  |  After `AWS-RunPatchBaselineAssociation` runs, Patch Manager sends an `AWS:ComplianceItem` object to Systems Manager Inventory.  | 
| Viewing patch compliance reports in the console |  You can view patch compliance information for processes that use `AWS-RunPatchBaseline` in [Systems Manager Configuration Compliance](systems-manager-compliance.md) and [Working with managed nodes](fleet-manager-managed-nodes.md). For more information, see [Viewing patch compliance results](patch-manager-view-compliance-results.md).  |  If you use Quick Setup to scan your managed instances for patch compliance, you can see the compliance report in [Systems Manager Fleet Manager](fleet-manager.md). In the Fleet Manager console, choose the node ID of your managed node. In the **General** menu, choose **Configuration compliance**. If you use Explorer to scan your managed instances for patch compliance, you can see the compliance report in both Explorer and [Systems Manager OpsCenter](OpsCenter.md).  | 
| AWS CLI commands for viewing patch compliance results |  For processes that use `AWS-RunPatchBaseline`, you can use the following AWS CLI commands to view summary information about patches on a managed node. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-find-noncompliant-nodes.html)  |  For processes that use `AWS-RunPatchBaselineAssociation`, you can use the following AWS CLI command to view summary information about patches on an instance. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-find-noncompliant-nodes.html)  | 
| Patching operations |  For processes that use `AWS-RunPatchBaseline`, you specify whether you want the operation to run a `Scan` operation only, or a `Scan and install` operation. If your goal is to identify noncompliant managed nodes and not remediate them, run only a `Scan` operation.  | Quick Setup and Explorer processes, which use AWS-RunPatchBaselineAssociation, run only a Scan operation. | 
| More info |  [SSM Command document for patching: `AWS-RunPatchBaseline`](patch-manager-aws-runpatchbaseline.md)  |  [SSM Command document for patching: `AWS-RunPatchBaselineAssociation`](patch-manager-aws-runpatchbaselineassociation.md)  | 

For information about the various patch compliance states you might see reported, see [Patch compliance state values](patch-manager-compliance-states.md).

For information about remediating managed nodes that are out of patch compliance, see [Patching noncompliant managed nodes](patch-manager-compliance-remediation.md).
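As a sketch of how compliance results might be evaluated programmatically, the following filters data shaped like the output of the `aws ssm describe-instance-patch-states` AWS CLI command. The response here is hard-coded for illustration; the field names `MissingCount` and `FailedCount` match that command's output, but your real data would come from the CLI or an SDK call:

```python
# Patch-state data shaped like the output of
# `aws ssm describe-instance-patch-states` (hard-coded for illustration).
response = {
    "InstancePatchStates": [
        {"InstanceId": "i-0abc123", "MissingCount": 0, "FailedCount": 0},
        {"InstanceId": "i-0def456", "MissingCount": 4, "FailedCount": 1},
    ]
}

# A node with missing or failed patches is out of patch compliance.
noncompliant = [
    state["InstanceId"]
    for state in response["InstancePatchStates"]
    if state["MissingCount"] > 0 or state["FailedCount"] > 0
]
print(noncompliant)  # ['i-0def456']
```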

# Patch compliance state values


The information about patches for a managed node includes a report of the state, or status, of each individual patch.

**Tip**  
If you want to assign a specific patch compliance state to a managed node, you can use the [https://docs.aws.amazon.com/cli/latest/reference/ssm/put-compliance-items.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/put-compliance-items.html) AWS Command Line Interface (AWS CLI) command or the [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PutComplianceItems.html](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PutComplianceItems.html) API operation. Assigning compliance state isn't supported in the console.

Use the information in the following tables to help you identify why a managed node might be out of patch compliance.

## Patch compliance values for Debian Server and Ubuntu Server


For Debian Server and Ubuntu Server, the rules for package classification into the different compliance states are described in the following table.

**Note**  
Keep the following in mind when you're evaluating the `INSTALLED`, `INSTALLED_OTHER`, and `MISSING` status values: If you don't select the **Include nonsecurity updates** check box when creating or updating a patch baseline, patch candidate versions are limited to patches in the following repositories:   
Ubuntu Server 16.04 LTS: `xenial-security`
Ubuntu Server 18.04 LTS: `bionic-security`
Ubuntu Server 20.04 LTS: `focal-security`
Ubuntu Server 22.04 LTS: `jammy-security`
Ubuntu Server 24.04 LTS: `noble-security`
Ubuntu Server 25.04: `plucky-security`
Debian Server: `debian-security`
If you do select the **Include nonsecurity updates** check box, patches from other repositories are considered as well.


| Patch state | Description | Compliance status | 
| --- | --- | --- | 
|  **`INSTALLED`**  |  The patch is listed in the patch baseline and is installed on the managed node. It could have been installed either manually by an individual or automatically by Patch Manager when the `AWS-RunPatchBaseline` document was run on the managed node.  | Compliant | 
|  **`INSTALLED_OTHER`**  |  The patch isn't included in the baseline or isn't approved by the baseline but is installed on the managed node. The patch might have been installed manually, the package could be a required dependency of another approved patch, or the patch might have been included in an InstallOverrideList operation. If you don't specify `Block` as the **Rejected patches** action, `INSTALLED_OTHER` patches also includes installed but rejected patches.   | Compliant | 
|  **`INSTALLED_PENDING_REBOOT`**  |  `INSTALLED_PENDING_REBOOT` can mean either of two things: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-compliance-states.html) In neither case does it mean that a patch with this status *requires* a reboot, only that the node hasn't been rebooted since the patch was installed.  | Non-Compliant | 
|  **`INSTALLED_REJECTED`**  |  The patch is installed on the managed node but is specified in a **Rejected patches** list. This typically means the patch was installed before it was added to a list of rejected patches.  | Non-Compliant | 
|  **`MISSING`**  |  The patch is approved in the baseline, but it isn't installed on the managed node. If you configure the `AWS-RunPatchBaseline` document task to scan (instead of install), the system reports this status for patches that were located during the scan but haven't been installed.  | Non-Compliant | 
|  **`FAILED`**  |  The patch is approved in the baseline, but it couldn't be installed. To troubleshoot this situation, review the command output for information that might help you understand the problem.  | Non-Compliant | 

## Patch compliance values for other operating systems


For all operating systems besides Debian Server and Ubuntu Server, the rules for package classification into the different compliance states are described in the following table. 


|  Patch state | Description | Compliance value | 
| --- | --- | --- | 
|  **`INSTALLED`**  |  The patch is listed in the patch baseline and is installed on the managed node. It could have been installed either manually by an individual or automatically by Patch Manager when the `AWS-RunPatchBaseline` document was run on the node.  | Compliant | 
|  **`INSTALLED_OTHER`**¹  |  The patch isn't in the baseline, but it is installed on the managed node. There are two possible reasons for this: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-compliance-states.html)  | Compliant | 
|  **`INSTALLED_REJECTED`**  |  The patch is installed on the managed node but is specified in a rejected patches list. This typically means the patch was installed before it was added to a list of rejected patches.  | Non-Compliant | 
|  **`INSTALLED_PENDING_REBOOT`**  |  `INSTALLED_PENDING_REBOOT` can mean either of two things: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-compliance-states.html) In neither case does it mean that a patch with this status *requires* a reboot, only that the node hasn't been rebooted since the patch was installed.  | Non-Compliant | 
|  **`MISSING`**  |  The patch is approved in the baseline, but it isn't installed on the managed node. If you configure the `AWS-RunPatchBaseline` document task to scan (instead of install), the system reports this status for patches that were located during the scan but haven't been installed.  | Non-Compliant | 
|  **`FAILED`**  |  The patch is approved in the baseline, but it couldn't be installed. To troubleshoot this situation, review the command output for information that might help you understand the problem.  | Non-Compliant | 
|  **`NOT_APPLICABLE`**¹  |  *This compliance state is reported only for Windows Server operating systems.* The patch is approved in the baseline, but the service or feature that uses the patch isn't installed on the managed node. For example, a patch for a web server service such as Internet Information Services (IIS) would show `NOT_APPLICABLE` if it was approved in the baseline, but the web service isn't installed on the managed node. A patch can also be marked `NOT_APPLICABLE` if it has been superseded by a subsequent update. This means that the later update is installed and the `NOT_APPLICABLE` update is no longer required.  | Not applicable | 
|  **`AVAILABLE_SECURITY_UPDATES`**  |  *This compliance state is reported only for Windows Server operating systems.* An available security update patch that is not approved by the patch baseline can have a compliance value of `Compliant` or `Non-Compliant`, as defined in a custom patch baseline. When you create or update a patch baseline, you choose the status you want to assign to security patches that are available but not approved because they don't meet the installation criteria specified in the patch baseline. For example, security patches that you might want installed can be skipped if you have specified a long period to wait after a patch is released before installation. If an update to the patch is released during your specified waiting period, the waiting period for installing the patch starts over. If the waiting period is too long, multiple versions of the patch could be released but never installed. For patch summary counts, when a patch is reported as `AvailableSecurityUpdate`, it is always included in `AvailableSecurityUpdateCount`. If the baseline is configured to report these patches as `NonCompliant`, the patch is also included in `SecurityNonCompliantCount`. If the baseline is configured to report these patches as `Compliant`, they are not included in `SecurityNonCompliantCount`. These patches are always reported with an unspecified severity and are never included in the `CriticalNonCompliantCount`.  |  Compliant or Non-Compliant, depending on the option selected for available security updates. Using the console to create or update a patch baseline, you specify this option in the **Available security updates compliance status** field. Using the AWS CLI to run the [create-patch-baseline](https://docs.aws.amazon.com/cli/latest/reference/ssm/create-patch-baseline.html) or [update-patch-baseline](https://docs.aws.amazon.com/cli/latest/reference/ssm/update-patch-baseline.html) command, you specify this option in the `available-security-updates-compliance-status` parameter.  | 

¹ For patches with the state `INSTALLED_OTHER` and `NOT_APPLICABLE`, Patch Manager omits some data from query results based on the [https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-instance-patches.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-instance-patches.html) command, such as the values for `Classification` and `Severity`. This is done to help prevent exceeding the data limit for individual nodes in Inventory, a tool in AWS Systems Manager. To view all patch details, you can use the [https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-available-patches.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-available-patches.html) command. 
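The state-to-status mapping in the preceding table can be expressed directly in code. This sketch encodes the compliance value for each patch state; the dictionary is built from the table above, not from any Systems Manager API:

```python
# Compliance value for each patch state, per the preceding table.
COMPLIANCE_BY_STATE = {
    "INSTALLED": "Compliant",
    "INSTALLED_OTHER": "Compliant",
    "INSTALLED_PENDING_REBOOT": "Non-Compliant",
    "INSTALLED_REJECTED": "Non-Compliant",
    "MISSING": "Non-Compliant",
    "FAILED": "Non-Compliant",
    "NOT_APPLICABLE": "Not applicable",
}

def is_noncompliant(patch_state):
    """Return True for states that count against patch compliance."""
    return COMPLIANCE_BY_STATE.get(patch_state) == "Non-Compliant"

print(is_noncompliant("MISSING"))  # True
```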

# Patching noncompliant managed nodes


Many of the same AWS Systems Manager tools and processes you can use to check managed nodes for patch compliance can be used to bring nodes into compliance with the patch rules that currently apply to them. To bring managed nodes into patch compliance, Patch Manager, a tool in AWS Systems Manager, must run a `Scan and install` operation. (If your goal is only to identify noncompliant managed nodes and not remediate them, run a `Scan` operation instead. For more information, see [Identifying noncompliant managed nodes](patch-manager-find-noncompliant-nodes.md).)

**Install patches using Systems Manager**  
You can choose from several tools to run a `Scan and install` operation:
+ (Recommended) Configure a patch policy in Quick Setup, a tool in Systems Manager, that lets you install missing patches on a schedule for an entire organization, a subset of organizational units, or a single AWS account. For more information, see [Configure patching for instances in an organization using a Quick Setup patch policy](quick-setup-patch-manager.md).
+ Create a maintenance window that uses the Systems Manager document (SSM document) `AWS-RunPatchBaseline` in a Run Command task type. For information, see [Tutorial: Create a maintenance window for patching using the console](maintenance-window-tutorial-patching.md).
+ Manually run `AWS-RunPatchBaseline` in a Run Command operation. For information, see [Running commands from the console](running-commands-console.md).
+ Install patches on demand using the **Patch now** option. For information, see [Patching managed nodes on demand](patch-manager-patch-now-on-demand.md).

# Identifying the execution that created patch compliance data


Patch compliance data represents a point-in-time snapshot from the latest successful patching operation. Each compliance report includes both an execution ID and a capture time that help you identify which operation created the compliance data and when it was generated.

If you have multiple types of operations in place to scan your instances for patch compliance, each scan overwrites the patch compliance data of previous scans. As a result, you might end up with unexpected results in your patch compliance data.

For example, suppose you create a patch policy that scans for patch compliance each day at 2 AM local time. That patch policy uses a patch baseline that targets patches with severity marked as `Critical`, `Important`, and `Moderate`. This patch baseline also specifies a few specifically rejected patches. 

Also suppose that you already had a maintenance window set up to scan the same set of managed nodes each day at 4 AM local time, which you don't delete or deactivate. That maintenance window’s task uses a different patch baseline, one that targets only patches with a `Critical` severity and doesn’t exclude any specific patches. 

When this second scan is performed by the maintenance window, the patch compliance data from the first scan is deleted and replaced with patch compliance from the second scan. 

Therefore, we strongly recommend using only one automated method for scanning and installing in your patching operations. If you're setting up patch policies, you should delete or deactivate other methods of scanning for patch compliance. For more information, see the following topics: 
+ To remove a patching operation task from a maintenance window – [Updating or deregistering maintenance window tasks using the console](sysman-maintenance-update.md#sysman-maintenance-update-tasks) 
+ To delete a State Manager association – [Deleting associations](systems-manager-state-manager-delete-association.md).

To deactivate daily patch compliance scans in a Host Management configuration, do the following in Quick Setup:

1. In the navigation pane, choose **Quick Setup**.

1. Select the Host Management configuration to update.

1. Choose **Actions, Edit configuration**.

1. Clear the **Scan instances for missing patches daily** check box.

1. Choose **Update**.

**Note**  
Using the **Patch now** option to scan a managed node for compliance also results in an overwrite of patch compliance data. 

# Patching managed nodes on demand
Patching managed nodes on demand

Using the **Patch now** option in Patch Manager, a tool in AWS Systems Manager, you can run on-demand patching operations from the Systems Manager console. This means you don’t have to create a schedule in order to update the compliance status of your managed nodes or to install patches on noncompliant nodes. You also don’t need to switch the Systems Manager console between Patch Manager and Maintenance Windows, a tool in AWS Systems Manager, in order to set up or modify a scheduled patching window.

**Patch now** is especially useful when you must apply zero-day updates or install other critical patches on your managed nodes as soon as possible.

**Note**  
Patching on demand is supported for a single AWS account-AWS Region pair at a time. It can't be used with patching operations that are based on *patch policies*. We recommend using patch policies for keeping all your managed nodes in compliance. For more information about working with patch policies, see [Patch policy configurations in Quick Setup](patch-manager-policies.md).

**Topics**
+ [

## How 'Patch now' works
](#patch-on-demand-how-it-works)
+ [

## Running 'Patch now'
](#run-patch-now)

## How 'Patch now' works


To run **Patch now**, you specify just two required settings:
+ Whether to scan for missing patches only, or to scan *and* install patches on your managed nodes
+ Which managed nodes to run the operation on

When the **Patch now** operation runs, it determines which patch baseline to use in the same way one is selected for other patching operations. If a managed node is associated with a patch group, the patch baseline specified for that group is used. If the managed node isn't associated with a patch group, the operation uses the patch baseline that is currently set as the default for the operating system type of the managed node. This can be a predefined baseline, or the custom baseline you have set as the default. For more information about patch baseline selection, see [Patch groups](patch-manager-patch-groups.md). 

Options you can specify for **Patch now** include choosing when, or whether, to reboot managed nodes after patching, specifying an Amazon Simple Storage Service (Amazon S3) bucket to store log data for the patching operation, and running Systems Manager documents (SSM documents) as lifecycle hooks during patching.

### Concurrency and error thresholds for 'Patch now'


For **Patch now** operations, concurrency and error threshold options are handled by Patch Manager. You don't need to specify how many managed nodes to patch at once, nor how many errors are permitted before the operation fails. Patch Manager applies the concurrency and error threshold settings described in the following tables when you patch on demand.

**Important**  
The following thresholds apply to `Scan and install` operations only. For `Scan` operations, Patch Manager attempts to scan up to 1,000 nodes concurrently, and continues scanning until it encounters up to 1,000 errors.


**Concurrency: Install operations**  

| Total number of managed nodes in the **Patch now** operation | Number of managed nodes patched at a time | 
| --- | --- | 
| Fewer than 25 | 1 | 
| 25 to 100 | 5% | 
| 101 to 1,000 | 8% | 
| More than 1,000 | 10% | 


**Error threshold: Install operations**  

| Total number of managed nodes in the **Patch now** operation | Number of errors permitted before the operation fails | 
| --- | --- | 
| Fewer than 25 | 1 | 
| 25 to 100 | 5 | 
| 101 to 1,000 | 10 | 
| More than 1,000 | 10 | 
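The two install tables can be read as a single lookup, sketched here (the function is illustrative; these values are applied by Patch Manager automatically, not configured by you):

```python
def install_thresholds(total_nodes):
    """Concurrency and error thresholds Patch Manager applies to
    'Scan and install' Patch now operations, per the tables above.
    Concurrency is a node count for small fleets and a percentage
    of the fleet for larger ones."""
    if total_nodes < 25:
        return {"concurrency": 1, "max_errors": 1}
    if total_nodes <= 100:
        return {"concurrency": "5%", "max_errors": 5}
    if total_nodes <= 1000:
        return {"concurrency": "8%", "max_errors": 10}
    return {"concurrency": "10%", "max_errors": 10}

print(install_thresholds(50))  # {'concurrency': '5%', 'max_errors': 5}
```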

### Using 'Patch now' lifecycle hooks


**Patch now** provides you with the ability to run SSM Command documents as lifecycle hooks during an `Install` patching operation. You can use these hooks for tasks such as shutting down applications before patching or running health checks on your applications after patching or after a reboot. 

For more information about using lifecycle hooks, see [SSM Command document for patching: `AWS-RunPatchBaselineWithHooks`](patch-manager-aws-runpatchbaselinewithhooks.md).

The following table lists the lifecycle hooks available for each of the three **Patch now** reboot options, in addition to sample uses for each hook.


**Lifecycle hooks and sample uses**  

| Reboot option | Hook: Before installation | Hook: After installation | Hook: On exit | Hook: After scheduled reboot | 
| --- | --- | --- | --- | --- | 
| Reboot if needed |  Run an SSM document before patching begins. Example use: Safely shut down applications before the patching process begins.   |  Run an SSM document at the end of the patching operation and before managed node reboot. Example use: Run operations such as installing third-party applications before a potential reboot.  |  Run an SSM document after the patching operation is complete and instances are rebooted. Example use: Ensure that applications are running as expected after patching.  | Not available | 
| Do not reboot my instances | Same as above. |  Run an SSM document at the end of the patching operation. Example use: Ensure that applications are running as expected after patching.  |  *Not available*   |  *Not available*   | 
| Schedule a reboot time | Same as above. | Same as for Do not reboot my instances. | Not available |  Run an SSM document immediately after a scheduled reboot is complete. Example use: Ensure that applications are running as expected after the reboot.  | 
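The availability rules in the table can be expressed as a small lookup, sketched here with descriptive labels (these names are illustrative only, not console or API values):

```python
# Which lifecycle hooks may be configured for each reboot option,
# mirroring the table above.
HOOKS_BY_REBOOT_OPTION = {
    "Reboot if needed": {"before_install", "after_install", "on_exit"},
    "Do not reboot my instances": {"before_install", "after_install"},
    "Schedule a reboot time": {"before_install", "after_install", "after_scheduled_reboot"},
}

def hook_allowed(reboot_option, hook):
    """Return whether a hook is available for the chosen reboot option."""
    return hook in HOOKS_BY_REBOOT_OPTION[reboot_option]

print(hook_allowed("Do not reboot my instances", "on_exit"))  # False
```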

## Running 'Patch now'


Use the following procedure to patch your managed nodes on demand.

**To run 'Patch now'**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose **Patch now**.

1. For **Patching operation**, choose one of the following:
   + **Scan**: Patch Manager finds which patches are missing from your managed nodes but doesn't install them. You can view the results in the **Compliance** dashboard or in other tools you use for viewing patch compliance.
   + **Scan and install**: Patch Manager finds which patches are missing from your managed nodes and installs them.

1. Use this step only if you chose **Scan and install** in the previous step. For **Reboot option**, choose one of the following:
   + **Reboot if needed**: After installation, Patch Manager reboots managed nodes only if needed to complete a patch installation.
   + **Don't reboot my instances**: After installation, Patch Manager doesn't reboot managed nodes. You can reboot nodes manually at a time you choose, or manage reboots outside of Patch Manager.
   + **Schedule a reboot time**: Specify the date, time, and UTC time zone for Patch Manager to reboot your managed nodes. After you run the **Patch now** operation, the scheduled reboot is listed as an association in State Manager with the name `AWS-PatchRebootAssociation`.
**Important**  
If you cancel the main patching operation after it has started, the `AWS-PatchRebootAssociation` association in State Manager isn't automatically canceled. To prevent unexpected reboots that could impact production workloads, manually delete `AWS-PatchRebootAssociation` from State Manager if you no longer want the scheduled reboot to occur. You can find this association in the Systems Manager console under **State Manager** > **Associations**.

1. For **Instances to patch**, choose one of the following:
   + **Patch all instances**: Patch Manager runs the specified operation on all managed nodes in your AWS account in the current AWS Region.
   + **Patch only the target instances I specify**: You specify which managed nodes to target in the next step.

1. Use this step only if you chose **Patch only the target instances I specify** in the previous step. In the **Target selection** section, identify the nodes on which you want to run this operation by specifying tags, selecting nodes manually, or specifying a resource group.
**Note**  
If a managed node you expect to see isn't listed, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md) for troubleshooting tips.  
If you choose to target a resource group, note that resource groups that are based on an AWS CloudFormation stack must still be tagged with the default `aws:cloudformation:stack-id` tag. If it has been removed, Patch Manager might be unable to determine which managed nodes belong to the resource group.

1. (Optional) For **Patching log storage**, if you want to create and save logs from this patching operation, select the S3 bucket for storing the logs.
**Note**  
The S3 permissions that grant the ability to write the data to an S3 bucket are those of the instance profile (for EC2 instances) or IAM service role (hybrid-activated machines) assigned to the instance, not those of the IAM user performing this task. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md) or [Create the IAM service role required for Systems Manager in hybrid and multicloud environments](hybrid-multicloud-service-role.md). In addition, if the specified S3 bucket is in a different AWS account, make sure that the instance profile or IAM service role associated with the managed node has the necessary permissions to write to that bucket.

1. (Optional) If you want to run SSM documents as lifecycle hooks during specific points of the patching operation, do the following:
   + Choose **Use lifecycle hooks**.
   + For each available hook, select the SSM document to run at the specified point of the operation:
     + Before installation
     + After installation
     + On exit
     + After scheduled reboot
**Note**  
The default document, `AWS-Noop`, runs no operations.

1. Choose **Patch now**.

   The **Association execution summary** page opens. (Patch now uses associations in State Manager, a tool in AWS Systems Manager, for its operations.) In the **Operation summary** area, you can monitor the status of scanning or patching on the managed nodes you specified.

# Working with patch baselines
Patch baselines

A patch baseline in Patch Manager, a tool in AWS Systems Manager, defines which patches are approved for installation on your managed nodes. You can specify approved or rejected patches one by one. You can also create auto-approval rules to specify that certain types of updates (for example, critical updates) should be automatically approved. The rejected list overrides both the rules and the approve list. To use a list of approved patches to install specific packages, you first remove all auto-approval rules. If you explicitly identify a patch as rejected, it won't be approved or installed, even if it matches all of the criteria in an auto-approval rule. Also, a patch is installed on a managed node only if it applies to the software on the node, even if the patch has otherwise been approved for the managed node.
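The precedence described above, with the rejected list overriding both the auto-approval rules and the approved list, and applicability checked last, can be sketched as follows. The helper names and example package are hypothetical.

```python
def patch_decision(patch, rejected, approved, rules_match, applies_to_node):
    """Decide whether a patch is installed under a baseline.

    Explicit rejection always wins; an explicit approval or a matching
    auto-approval rule approves the patch; and even an approved patch is
    installed only if it applies to the software on the node.
    """
    if patch in rejected:
        return "not installed (rejected)"
    if patch in approved or rules_match(patch):
        return "installed" if applies_to_node(patch) else "approved but not applicable"
    return "not installed (not approved)"

# Hypothetical rule: auto-approve anything named *-security.
rules_match = lambda p: p.endswith("-security")
applies = lambda p: True
print(patch_decision("openssl-security", rejected={"openssl-security"},
                     approved=set(), rules_match=rules_match,
                     applies_to_node=applies))  # not installed (rejected)
```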

**Topics**
+ [

# Viewing AWS predefined patch baselines
](patch-manager-view-predefined-patch-baselines.md)
+ [

# Working with custom patch baselines
](patch-manager-manage-patch-baselines.md)
+ [

# Setting an existing patch baseline as the default
](patch-manager-default-patch-baseline.md)

**More info**  
+ [Patch baselines](patch-manager-patch-baselines.md)

# Viewing AWS predefined patch baselines
Viewing AWS predefined patch baselines

Patch Manager, a tool in AWS Systems Manager, includes a predefined patch baseline for each operating system supported by Patch Manager. You can use these patch baselines (you can't customize them), or you can create your own. The following procedure describes how to view a predefined patch baseline to see if it meets your needs. To learn more about patch baselines, see [Predefined and custom patch baselines](patch-manager-predefined-and-custom-patch-baselines.md).

**To view AWS predefined patch baselines**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. In the patch baselines list, choose the baseline ID of one of the predefined patch baselines.

   -or-

   If you are accessing Patch Manager for the first time in the current AWS Region, choose **Start with an overview**, choose the **Patch baselines** tab, and then choose the baseline ID of one of the predefined patch baselines.
**Note**  
For Windows Server, three predefined patch baselines are provided. The patch baselines `AWS-DefaultPatchBaseline` and `AWS-WindowsPredefinedPatchBaseline-OS` support only operating system updates on the Windows operating system itself. `AWS-DefaultPatchBaseline` is used as the default patch baseline for Windows Server managed nodes unless you specify a different patch baseline. The configuration settings in these two patch baselines are the same. The newer of the two, `AWS-WindowsPredefinedPatchBaseline-OS`, was created to distinguish it from the third predefined patch baseline for Windows Server. That patch baseline, `AWS-WindowsPredefinedPatchBaseline-OS-Applications`, can be used to apply patches to both the Windows Server operating system and supported applications released by Microsoft.  
For more information, see [Setting an existing patch baseline as the default](patch-manager-default-patch-baseline.md).

1. In the **Approval rules** section, review the patch baseline configuration.

1. If the configuration is acceptable for your managed nodes, you can skip ahead to the procedure [Creating and managing patch groups](patch-manager-tag-a-patch-group.md). 

   -or-

   To create your own default patch baseline, continue to the topic [Working with custom patch baselines](patch-manager-manage-patch-baselines.md).

# Working with custom patch baselines
Custom patch baselines

Patch Manager, a tool in AWS Systems Manager, includes a predefined patch baseline for each operating system supported by Patch Manager. You can use these patch baselines (you can't customize them), or you can create your own. 

The following procedures describe how to create, update, and delete your own custom patch baselines. To learn more about patch baselines, see [Predefined and custom patch baselines](patch-manager-predefined-and-custom-patch-baselines.md).

**Topics**
+ [

# Creating a custom patch baseline for Linux
](patch-manager-create-a-patch-baseline-for-linux.md)
+ [

# Creating a custom patch baseline for macOS
](patch-manager-create-a-patch-baseline-for-macos.md)
+ [

# Creating a custom patch baseline for Windows Server
](patch-manager-create-a-patch-baseline-for-windows.md)
+ [

# Updating or deleting a custom patch baseline
](patch-manager-update-or-delete-a-patch-baseline.md)

# Creating a custom patch baseline for Linux


Use the following procedure to create a custom patch baseline for Linux managed nodes in Patch Manager, a tool in AWS Systems Manager. 

For information about creating a patch baseline for macOS managed nodes, see [Creating a custom patch baseline for macOS](patch-manager-create-a-patch-baseline-for-macos.md). For information about creating a patch baseline for Windows managed nodes, see [Creating a custom patch baseline for Windows Server](patch-manager-create-a-patch-baseline-for-windows.md).

**To create a custom patch baseline for Linux managed nodes**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose the **Patch baselines** tab, and then choose **Create patch baseline**.

   -or-

   If you are accessing Patch Manager for the first time in the current AWS Region, choose **Start with an overview**, choose the **Patch baselines** tab, and then choose **Create patch baseline**.

1. For **Name**, enter a name for your new patch baseline, for example, `MyRHELPatchBaseline`.

1. (Optional) For **Description**, enter a description for this patch baseline.

1. For **Operating system**, choose an operating system, for example, `Red Hat Enterprise Linux`.

1. If you want to begin using this patch baseline as the default for the selected operating system as soon as you create it, check the box next to **Set this patch baseline as the default patch baseline for *operating system name* instances**.
**Note**  
This option is available only if you first accessed Patch Manager before the [patch policies](patch-manager-policies.md) release on December 22, 2022.  
For information about setting an existing patch baseline as the default, see [Setting an existing patch baseline as the default](patch-manager-default-patch-baseline.md).

1. In the **Approval rules for operating systems** section, use the fields to create one or more auto-approval rules.
   + **Products**: The version of the operating systems the approval rule applies to, such as `RedhatEnterpriseLinux7.4`. The default selection is `All`.
   + **Classification**: The type of patches the approval rule applies to, such as `Security` or `Enhancement`. The default selection is `All`. 
**Tip**  
You can configure a patch baseline to control whether minor version upgrades for Linux are installed, such as RHEL 7.8. Minor version upgrades can be installed automatically by Patch Manager provided that the update is available in the appropriate repository.  
For Linux operating systems, minor version upgrades aren't classified consistently. They can be classified as bug fixes or security updates, or not classified, even within the same kernel version. Here are a few options for controlling whether a patch baseline installs them.   
**Option 1**: The broadest approval rule to ensure minor version upgrades are installed when available is to specify **Classification** as `All` (`*`) and choose the **Include nonsecurity updates** option.
**Option 2**: To ensure patches for an operating system version are installed, you can use a wildcard (`*`) to specify its kernel format in the **Patch exceptions** section of the baseline. For example, the kernel format for RHEL 7.x is `kernel-3.10.0-*.el7.x86_64`.  
Enter `kernel-3.10.0-*.el7.x86_64` in the **Approved patches** list in your patch baseline to ensure all patches, including minor version upgrades, are applied to your RHEL 7.x managed nodes. (If you know the exact package name of a minor version patch, you can enter that instead.)
**Option 3**: You can have the most control over which patches are applied to your managed nodes, including minor version upgrades, by using the [InstallOverrideList](patch-manager-aws-runpatchbaseline.md#patch-manager-aws-runpatchbaseline-parameters-installoverridelist) parameter in the `AWS-RunPatchBaseline` document. For more information, see [SSM Command document for patching: `AWS-RunPatchBaseline`](patch-manager-aws-runpatchbaseline.md).
   + **Severity**: The severity value of patches the rule is to apply to, such as `Critical`. The default selection is `All`. 
   + **Auto-approval**: The method for selecting patches for automatic approval.
**Note**  
Because it's not possible to reliably determine the release dates of update packages for Ubuntu Server, the auto-approval options aren't supported for this operating system.
     + **Approve patches after a specified number of days**: The number of days for Patch Manager to wait after a patch is released or last updated before a patch is automatically approved. You can enter any integer from zero (0) to 360. For most scenarios, we recommend waiting no more than 100 days.
     + **Approve patches released up to a specific date**: The patch release date for which Patch Manager automatically applies all patches released or updated on or before that date. For example, if you specify July 7, 2023, no patches released or last updated on or after July 8, 2023, are installed automatically.
   + (Optional) **Compliance reporting**: The severity level you want to assign to patches approved by the baseline, such as `Critical` or `High`.
**Note**  
If you specify a compliance reporting level and the patch state of any approved patch is reported as `Missing`, then the patch baseline's overall reported compliance severity is the severity level you specified.
   + **Include non-security updates**: Select the check box to install nonsecurity Linux operating system patches made available in the source repository, in addition to the security-related patches. 

   For more information about working with approval rules in a custom patch baseline, see [Custom baselines](patch-manager-predefined-and-custom-patch-baselines.md#patch-manager-baselines-custom).

1. If you want to explicitly approve any patches in addition to those meeting your approval rules, do the following in the **Patch exceptions** section:
   + For **Approved patches**, enter a comma-separated list of the patches you want to approve.

     For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).
   + (Optional) For **Approved patches compliance level**, assign a compliance level to the patches in the list.
   + If any approved patches you specify aren't related to security, select the **Include non-security updates** check box for these patches to be installed on your Linux operating system as well.

1. If you want to explicitly reject any patches that otherwise meet your approval rules, do the following in the **Patch exceptions** section:
   + For **Rejected patches**, enter a comma-separated list of the patches you want to reject.

     For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).
   + For **Rejected patches action**, select the action for Patch Manager to take on patches included in the **Rejected patches** list.
     + **Allow as dependency**: A package in the **Rejected patches** list is installed only if it's a dependency of another package. It's considered compliant with the patch baseline and its status is reported as *InstalledOther*. This is the default action if no option is specified.
     + **Block**: Packages in the **Rejected patches** list, and packages that include them as dependencies, aren't installed by Patch Manager under any circumstances. If a package was installed before it was added to the **Rejected patches** list, or is installed outside of Patch Manager afterward, it's considered noncompliant with the patch baseline and its status is reported as *InstalledRejected*.
**Note**  
Patch Manager searches for patch dependencies recursively.

1. (Optional) If you want to specify alternative patch repositories for different versions of an operating system, such as *AmazonLinux2016.03* and *AmazonLinux2017.09*, do the following for each product in the **Patch sources** section:
   + In **Name**, enter a name to help you identify the source configuration.
   + In **Product**, select the version of the operating systems the patch source repository is for, such as `RedhatEnterpriseLinux7.4`.
   + In **Configuration**, enter the value of the repository configuration to use in the appropriate format:

------
#### [  Example for yum repositories  ]

     ```
     [main]
     name=MyCustomRepository
     baseurl=https://my-custom-repository
     enabled=1
     ```

**Tip**  
For information about other options available for your yum repository configuration, see [dnf.conf(5)](https://man7.org/linux/man-pages/man5/dnf.conf.5.html).

------
#### [  Examples for Ubuntu Server and Debian Server ]

      `deb http://security.ubuntu.com/ubuntu jammy main` 

      `deb https://site.example.com/debian distribution component1 component2 component3` 

     Repo information for Ubuntu Server repositories must be specified in a single line. For more examples and information, see [jammy (5) sources.list.5.gz](https://manpages.ubuntu.com/manpages/jammy/man5/sources.list.5.html) on the *Ubuntu Server Manuals* website and [sources.list format](https://wiki.debian.org/SourcesList#sources.list_format) on the *Debian Wiki*.

------

     Choose **Add another source** to specify a source repository for each additional operating system version, up to a maximum of 20.

     For more information about alternative source patch repositories, see [How to specify an alternative patch source repository (Linux)](patch-manager-alternative-source-repository.md).

1. (Optional) For **Manage tags**, apply one or more tag key name/value pairs to the patch baseline.

   Tags are optional metadata that you assign to a resource. Tags allow you to categorize a resource in different ways, such as by purpose, owner, or environment. For example, you might want to tag a patch baseline to identify the severity level of patches it specifies, the operating system family it applies to, and the environment type. In this case, you could specify tags similar to the following key name/value pairs:
   + `Key=PatchSeverity,Value=Critical`
   + `Key=OS,Value=RHEL`
   + `Key=Environment,Value=Production`

1. Choose **Create patch baseline**.
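The two auto-approval options in the procedure above can be sketched with plain date arithmetic. The release dates here are hypothetical, and Patch Manager performs this evaluation itself; the sketch only illustrates the cutoff behavior.

```python
from datetime import date, timedelta

def auto_approved(release_date, today, wait_days=None, cutoff=None):
    """Evaluate a patch against one auto-approval option.

    wait_days: approve once the patch has been out at least that many days.
    cutoff: approve patches released or last updated on or before this date.
    """
    if wait_days is not None:
        return today - release_date >= timedelta(days=wait_days)
    if cutoff is not None:
        return release_date <= cutoff
    return False

# Released July 10 is after a July 7 cutoff, so it isn't auto-approved:
print(auto_approved(date(2023, 7, 10), date(2023, 8, 1), cutoff=date(2023, 7, 7)))  # False
# A 7-day wait approves a patch released 10 days ago:
print(auto_approved(date(2023, 7, 10), date(2023, 7, 20), wait_days=7))  # True
```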

# Creating a custom patch baseline for macOS


Use the following procedure to create a custom patch baseline for macOS managed nodes in Patch Manager, a tool in AWS Systems Manager. 

For information about creating a patch baseline for Windows Server managed nodes, see [Creating a custom patch baseline for Windows Server](patch-manager-create-a-patch-baseline-for-windows.md). For information about creating a patch baseline for Linux managed nodes, see [Creating a custom patch baseline for Linux](patch-manager-create-a-patch-baseline-for-linux.md). 

**Note**  
macOS is not supported in all AWS Regions. For more information about Amazon EC2 support for macOS, see [Amazon EC2 Mac instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-mac-instances.html) in the *Amazon EC2 User Guide*.

**To create a custom patch baseline for macOS managed nodes**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose the **Patch baselines** tab, and then choose **Create patch baseline**.

   -or-

   If you are accessing Patch Manager for the first time in the current AWS Region, choose **Start with an overview**, choose the **Patch baselines** tab, and then choose **Create patch baseline**.

1. For **Name**, enter a name for your new patch baseline, for example, `MymacOSPatchBaseline`.

1. (Optional) For **Description**, enter a description for this patch baseline.

1. For **Operating system**, choose macOS.

1. If you want to begin using this patch baseline as the default for macOS as soon as you create it, check the box next to **Set this patch baseline as the default patch baseline for macOS instances**.
**Note**  
This option is available only if you first accessed Patch Manager before the [patch policies](patch-manager-policies.md) release on December 22, 2022.  
For information about setting an existing patch baseline as the default, see [Setting an existing patch baseline as the default](patch-manager-default-patch-baseline.md).

1. In the **Approval rules for operating systems** section, use the fields to create one or more auto-approval rules.
   + **Products**: The version of the operating systems the approval rule applies to, such as `BigSur11.3.1` or `Ventura13.7`. The default selection is `All`.
   + **Classification**: The package manager or package managers that you want to use to apply packages during the patching process. You can choose from the following:
     + softwareupdate
     + installer
     + brew
     + brew cask

     The default selection is `All`. 
   + (Optional) **Compliance reporting**: The severity level you want to assign to patches approved by the baseline, such as `Critical` or `High`.
**Note**  
If you specify a compliance reporting level and the patch state of any approved patch is reported as `Missing`, then the patch baseline's overall reported compliance severity is the severity level you specified.

   For more information about working with approval rules in a custom patch baseline, see [Custom baselines](patch-manager-predefined-and-custom-patch-baselines.md#patch-manager-baselines-custom).

1. If you want to explicitly approve any patches in addition to those meeting your approval rules, do the following in the **Patch exceptions** section:
   + For **Approved patches**, enter a comma-separated list of the patches you want to approve.

     For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).
   + (Optional) For **Approved patches compliance level**, assign a compliance level to the patches in the list.

1. If you want to explicitly reject any patches that otherwise meet your approval rules, do the following in the **Patch exceptions** section:
   + For **Rejected patches**, enter a comma-separated list of the patches you want to reject.

     For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).
   + For **Rejected patches action**, select the action for Patch Manager to take on patches included in the **Rejected patches** list.
     + **Allow as dependency**: A package in the **Rejected patches** list is installed only if it's a dependency of another package. It's considered compliant with the patch baseline and its status is reported as *InstalledOther*. This is the default action if no option is specified.
     + **Block**: Packages in the **Rejected patches** list, and packages that include them as dependencies, aren't installed by Patch Manager under any circumstances. If a package was installed before it was added to the **Rejected patches** list, or is installed outside of Patch Manager afterward, it's considered noncompliant with the patch baseline and its status is reported as *InstalledRejected*.

1. (Optional) For **Manage tags**, apply one or more tag key name/value pairs to the patch baseline.

   Tags are optional metadata that you assign to a resource. Tags allow you to categorize a resource in different ways, such as by purpose, owner, or environment. For example, you might want to tag a patch baseline to identify the severity level of patches it specifies, the package manager it applies to, and the environment type. In this case, you could specify tags similar to the following key name/value pairs:
   + `Key=PatchSeverity,Value=Critical`
   + `Key=PackageManager,Value=softwareupdate`
   + `Key=Environment,Value=Production`

1. Choose **Create patch baseline**.

# Creating a custom patch baseline for Windows Server


Use the following procedure to create a custom patch baseline for Windows managed nodes in Patch Manager, a tool in AWS Systems Manager. 

For information about creating a patch baseline for Linux managed nodes, see [Creating a custom patch baseline for Linux](patch-manager-create-a-patch-baseline-for-linux.md). For information about creating a patch baseline for macOS managed nodes, see [Creating a custom patch baseline for macOS](patch-manager-create-a-patch-baseline-for-macos.md).

For an example of creating a patch baseline that is limited to installing Windows Service Packs only, see [Tutorial: Create a patch baseline for installing Windows Service Packs using the console](patch-manager-windows-service-pack-patch-baseline-tutorial.md).

**To create a custom patch baseline (Windows)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose the **Patch baselines** tab, and then choose **Create patch baseline**. 

   -or-

   If you are accessing Patch Manager for the first time in the current AWS Region, choose **Start with an overview**, choose the **Patch baselines** tab, and then choose **Create patch baseline**.

1. For **Name**, enter a name for your new patch baseline, for example, `MyWindowsPatchBaseline`.

1. (Optional) For **Description**, enter a description for this patch baseline.

1. For **Operating system**, choose `Windows`.

1. For **Available security updates compliance status**, choose the status, **Compliant** or **Non-Compliant**, that you want to assign to security patches that are available but not approved because they don't meet the installation criteria specified in the patch baseline.

   Example scenario: Security patches that you might want installed can be skipped if you specify a long waiting period between a patch's release and its installation. If an update to the patch is released during your specified waiting period, the waiting period for installing the patch starts over. If the waiting period is too long, multiple versions of the patch could be released but never installed.

1. If you want to begin using this patch baseline as the default for Windows as soon as you create it, select **Set this patch baseline as the default patch baseline for Windows Server instances** .
**Note**  
This option is available only if you first accessed Patch Manager before the [patch policies](patch-manager-policies.md) release on December 22, 2022.  
For information about setting an existing patch baseline as the default, see [Setting an existing patch baseline as the default](patch-manager-default-patch-baseline.md).

1. In the **Approval rules for operating systems** section, use the fields to create one or more auto-approval rules.
   + **Products**: The version of the operating systems the approval rule applies to, such as `WindowsServer2012`. The default selection is `All`.
   + **Classification**: The type of patches the approval rule applies to, such as `CriticalUpdates`, `Drivers`, and `Tools`. The default selection is `All`. 
**Tip**  
You can include Windows Service Pack installations in your approval rules by including `ServicePacks` or by choosing `All` in your **Classification** list. For an example, see [Tutorial: Create a patch baseline for installing Windows Service Packs using the console](patch-manager-windows-service-pack-patch-baseline-tutorial.md).
   + **Severity**: The severity value of the patches the rule applies to, such as `Critical`. The default selection is `All`. 
   + **Auto-approval**: The method for selecting patches for automatic approval.
     + **Approve patches after a specified number of days**: The number of days for Patch Manager to wait after a patch is released or updated before a patch is automatically approved. You can enter any integer from zero (0) to 360. For most scenarios, we recommend waiting no more than 100 days.
     + **Approve patches released up to a specific date**: The patch release date for which Patch Manager automatically applies all patches released or updated on or before that date. For example, if you specify July 7, 2023, no patches released or last updated on or after July 8, 2023, are installed automatically.
   + (Optional) **Compliance reporting**: The severity level you want to assign to patches approved by the baseline, such as `High`.
**Note**  
If you specify a compliance reporting level and the patch state of any approved patch is reported as `Missing`, then the patch baseline's overall reported compliance severity is the severity level you specified.

1. (Optional) In the **Approval rules for applications** section, use the fields to create one or more auto-approval rules.
**Note**  
Instead of specifying approval rules, you can specify lists of approved and rejected patches as patch exceptions. See steps 10 and 11. 
   + **Product family**: The general Microsoft product family for which you want to specify a rule, such as `Office` or `Exchange Server`.
   + **Products**: The version of the application the approval rule applies to, such as `Office 2016` or `Active Directory Rights Management Services Client 2.0 2016`. The default selection is `All`.
   + **Classification**: The type of patches the approval rule applies to, such as `CriticalUpdates`. The default selection is `All`. 
   + **Severity**: The severity value of patches the rule applies to, such as `Critical`. The default selection is `All`. 
   + **Auto-approval**: The method for selecting patches for automatic approval.
     + **Approve patches after a specified number of days**: The number of days for Patch Manager to wait after a patch is released or updated before a patch is automatically approved. You can enter any integer from zero (0) to 360. For most scenarios, we recommend waiting no more than 100 days.
     + **Approve patches released up to a specific date**: The patch release date for which Patch Manager automatically applies all patches released or updated on or before that date. For example, if you specify July 7, 2023, no patches released or last updated on or after July 8, 2023, are installed automatically.
   + (Optional) **Compliance reporting**: The severity level you want to assign to patches approved by the baseline, such as `Critical` or `High`.
**Note**  
If you specify a compliance reporting level and the patch state of any approved patch is reported as `Missing`, then the patch baseline's overall reported compliance severity is the severity level you specified.

1. (Optional) If you want to explicitly approve any patches instead of letting patches be selected according to approval rules, do the following in the **Patch exceptions** section:
   + For **Approved patches**, enter a comma-separated list of the patches you want to approve.

     For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).
   + (Optional) For **Approved patches compliance level**, assign a compliance level to the patches in the list.

1. If you want to explicitly reject any patches that otherwise meet your approval rules, do the following in the **Patch exceptions** section:
   + For **Rejected patches**, enter a comma-separated list of the patches you want to reject.

     For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).
   + For **Rejected patches action**, select the action for Patch Manager to take on patches included in the **Rejected patches** list.
     + **Allow as dependency**: Windows Server doesn't support the concept of package dependencies. If a package in the **Rejected patches** list is already installed on the node, its status is reported as `INSTALLED_OTHER`. Any package not already installed on the node is skipped. 
     + **Block**: Packages in the **Rejected patches** list aren't installed by Patch Manager under any circumstances. If a package was installed before it was added to the **Rejected patches** list, or is installed outside of Patch Manager afterward, it's considered noncompliant with the patch baseline and its status is reported as `INSTALLED_REJECTED`.

     For more information about rejected package actions, see [Rejected patch list options in custom patch baselines](patch-manager-windows-and-linux-differences.md#rejected-patches-diff). 

1. (Optional) For **Manage tags**, apply one or more tag key name/value pairs to the patch baseline.

   Tags are optional metadata that you assign to a resource. Tags allow you to categorize a resource in different ways, such as by purpose, owner, or environment. For example, you might want to tag a patch baseline to identify the severity level of patches it specifies, the operating system family it applies to, and the environment type. In this case, you could specify tags similar to the following key name/value pairs:
   + `Key=PatchSeverity,Value=Critical`
   + `Key=OS,Value=RHEL`
   + `Key=Environment,Value=Production`

1. Choose **Create patch baseline**.
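If you prefer to script baseline creation, the same configuration can be sketched with the AWS CLI [create-patch-baseline](https://docs.aws.amazon.com/cli/latest/reference/ssm/create-patch-baseline.html) command. The baseline name, rule values, and waiting period below are examples only; adjust them to match your own approval criteria.

```
aws ssm create-patch-baseline \
    --name "MyWindowsPatchBaseline" \
    --description "Baseline for critical Windows updates" \
    --operating-system "WINDOWS" \
    --approval-rules "PatchRules=[{PatchFilterGroup={PatchFilters=[{Key=CLASSIFICATION,Values=[CriticalUpdates]},{Key=MSRC_SEVERITY,Values=[Critical]}]},ApproveAfterDays=7,ComplianceLevel=HIGH}]"
```

The command returns the ID of the new baseline, which you can use later with commands such as `register-default-patch-baseline`.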

# Updating or deleting a custom patch baseline


You can update or delete a custom patch baseline that you have created in Patch Manager, a tool in AWS Systems Manager. When you update a patch baseline, you can change its name or description, its approval rules, and its exceptions for approved and rejected patches. You can also update the tags that are applied to the patch baseline. You can't change the operating system type that a patch baseline has been created for, and you can't make changes to a predefined patch baseline provided by AWS.

## Updating or deleting a patch baseline


Follow these steps to update or delete a patch baseline.

**Important**  
 Use caution when deleting a custom patch baseline that might be used by a patch policy configuration in Quick Setup.  
If you are using a [patch policy configuration](patch-manager-policies.md) in Quick Setup, updates you make to custom patch baselines are synchronized with Quick Setup once an hour.   
If a custom patch baseline that was referenced in a patch policy is deleted, a banner displays on the Quick Setup **Configuration details** page for your patch policy. The banner informs you that the patch policy references a patch baseline that no longer exists, and that subsequent patching operations will fail. In this case, return to the Quick Setup **Configurations** page, select the Patch Manager configuration, and choose **Actions**, **Edit configuration**. The deleted patch baseline name is highlighted, and you must select a new patch baseline for the affected operating system.

**To update or delete a patch baseline**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose the patch baseline that you want to update or delete, and then do one of the following:
   + To remove the patch baseline from your AWS account, choose **Delete**. The system prompts you to confirm your actions. 
   + To make changes to the patch baseline name or description, approval rules, or patch exceptions, choose **Edit**. On the **Edit patch baseline** page, change the values and options that you want, and then choose **Save changes**. 
   + To add, change, or delete tags applied to the patch baseline, choose the **Tags** tab, and then choose **Edit tags**. On the **Edit patch baseline tags** page, make updates to the patch baseline tags, and then choose **Save changes**. 

   For information about the configuration choices you can make, see [Working with custom patch baselines](patch-manager-manage-patch-baselines.md).
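The equivalent operations are also available in the AWS CLI through the [update-patch-baseline](https://docs.aws.amazon.com/cli/latest/reference/ssm/update-patch-baseline.html) and [delete-patch-baseline](https://docs.aws.amazon.com/cli/latest/reference/ssm/delete-patch-baseline.html) commands. The following sketch updates one custom baseline and deletes another; the baseline IDs, name, and description are placeholders.

```
# Update the name and description of an existing custom baseline
aws ssm update-patch-baseline \
    --baseline-id pb-abc123cf9bEXAMPLE \
    --name "MyUpdatedWindowsPatchBaseline" \
    --description "Revised approval rules for Windows Server"

# Delete a custom baseline that is no longer needed
aws ssm delete-patch-baseline \
    --baseline-id pb-def456cf9bEXAMPLE
```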

# Setting an existing patch baseline as the default
Setting an existing patch baseline as the default

**Important**  
Any default patch baseline selections you make here do not apply to patching operations that are based on a patch policy. Patch policies use their own patch baseline specifications. For more information about patch policies, see [Patch policy configurations in Quick Setup](patch-manager-policies.md).

When you create a custom patch baseline in Patch Manager, a tool in AWS Systems Manager, you can set the baseline as the default for the associated operating system type as soon as you create it. For information, see [Working with custom patch baselines](patch-manager-manage-patch-baselines.md).

You can also set an existing patch baseline as the default for an operating system type.

**Note**  
The steps you follow depend on whether you first accessed Patch Manager before or after the patch policies release on December 22, 2022. If you used Patch Manager before that date, you can use the console procedure. Otherwise, use the AWS CLI procedure. The **Actions** menu referenced in the console procedure is not displayed in Regions where Patch Manager wasn't used before the patch policies release.

**To set a patch baseline as the default**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose the **Patch baselines** tab.

1. In the patch baselines list, choose the button of a patch baseline that isn't currently set as the default for an operating system type.

   The **Default baseline** column indicates which baselines are currently set as the defaults.

1. In the **Actions** menu, choose **Set default patch baseline**.
**Important**  
The **Actions** menu is not available if you didn't work with Patch Manager in the current AWS account and Region before December 22, 2022. See the **Note** earlier in this topic for more information.

1. In the confirmation dialog box, choose **Set default**.

**To set a patch baseline as the default (AWS CLI)**

1. Run the [https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-patch-baselines.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-patch-baselines.html) command to view a list of available patch baselines and their IDs and Amazon Resource Names (ARNs).

   ```
   aws ssm describe-patch-baselines
   ```

1. Run the [https://docs.aws.amazon.com/cli/latest/reference/ssm/register-default-patch-baseline.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/register-default-patch-baseline.html) command to set a baseline as the default for the operating system it's associated with. Replace *baseline-id-or-ARN* with the ID of the custom patch baseline or predefined baseline to use. 

------
#### [ Linux & macOS ]

   ```
   aws ssm register-default-patch-baseline \
       --baseline-id baseline-id-or-ARN
   ```

   The following is an example of setting a custom baseline as the default.

   ```
   aws ssm register-default-patch-baseline \
       --baseline-id pb-abc123cf9bEXAMPLE
   ```

   The following is an example of setting a predefined baseline managed by AWS as the default.

   ```
   aws ssm register-default-patch-baseline \
       --baseline-id arn:aws:ssm:us-east-2:733109147000:patchbaseline/pb-0574b43a65ea646e
   ```

------
#### [ Windows Server ]

   ```
   aws ssm register-default-patch-baseline ^
       --baseline-id baseline-id-or-ARN
   ```

   The following is an example of setting a custom baseline as the default.

   ```
   aws ssm register-default-patch-baseline ^
       --baseline-id pb-abc123cf9bEXAMPLE
   ```

   The following is an example of setting a predefined baseline managed by AWS as the default.

   ```
   aws ssm register-default-patch-baseline ^
       --baseline-id arn:aws:ssm:us-east-2:733109147000:patchbaseline/pb-071da192df1226b63
   ```

------

# Viewing available patches
Viewing available patches

With Patch Manager, a tool in AWS Systems Manager, you can view all available patches for a specified operating system and, optionally, a specific operating system version.

**Tip**  
To generate a list of available patches and save them to a file, you can use the [https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-available-patches.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-available-patches.html) command and specify your preferred [output](https://docs.aws.amazon.com/cli/latest/reference/ssm/cli-usage-output.html).

**To view available patches**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose the **Patches** tab.

   -or-

   If you are accessing Patch Manager for the first time in the current AWS Region, choose **Start with an overview**, and then choose the **Patches** tab.
**Note**  
For Windows Server, the **Patches** tab displays updates that are available from Windows Server Update Service (WSUS).

1. For **Operating system**, choose the operating system for which you want to view available patches, such as `Windows` or `Amazon Linux`.

1. (Optional) For **Product**, choose an OS version, such as `WindowsServer2019` or `AmazonLinux2018.03`.

1. (Optional) To add or remove information columns for your results, choose the configure button (![\[The icon to view configuration settings.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/configure-button.png)) at the top right of the **Patches** list. (By default, the **Patches** tab displays columns for only some of the available patch metadata.)

   For information about the types of metadata you can add to your view, see [Patch](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_Patch.html) in the *AWS Systems Manager API Reference*.
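To retrieve the same information from the AWS CLI, you can filter the [describe-available-patches](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-available-patches.html) command by product. The product value below is an example; substitute the operating system version you want to review.

```
aws ssm describe-available-patches \
    --filters "Key=PRODUCT,Values=WindowsServer2019" \
    --output table
```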

# Creating and managing patch groups


If you are *not* using patch policies in your operations, you can organize your patching efforts by adding managed nodes to patch groups by using tags.

**Note**  
Patch groups are not used in patching operations that are based on *patch policies*. For information about working with patch policies, see [Patch policy configurations in Quick Setup](patch-manager-policies.md).  
Patch group functionality is not supported in the console for account-Region pairs that did not already use patch groups before patch policy support was released on December 22, 2022. Patch group functionality is still available in account-Region pairs that began using patch groups before this date.

To use tags in patching operations, you must apply the tag key `Patch Group` or `PatchGroup` to your managed nodes. You must also specify the name that you want to give the patch group as the value of the tag. You can specify any tag value, but the tag key must be `Patch Group` or `PatchGroup`.

`PatchGroup` (without a space) is required if you have [allowed tags in EC2 instance metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#allow-access-to-tags-in-IMDS). 

After you group your managed nodes using tags, you add the patch group value to a patch baseline. By registering the patch group with a patch baseline, you ensure that the correct patches are installed during the patching operation. For more information about patch groups, see [Patch groups](patch-manager-patch-groups.md).

Complete the tasks in this topic to prepare your managed nodes for patching using tags with your nodes and patch baseline. Task 1 is required only if you are patching Amazon EC2 instances. Task 2 is required only if you are patching non-EC2 instances in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment. Task 3 is required for all managed nodes.

**Tip**  
You can also add tags to managed nodes using the AWS CLI command `[https://docs.aws.amazon.com/cli/latest/reference/ssm/add-tags-to-resource.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/add-tags-to-resource.html)` or the Systems Manager API operation `[https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_AddTagsToResource.html](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_AddTagsToResource.html)`.

**Topics**
+ [

## Task 1: Add EC2 instances to a patch group using tags
](#sysman-patch-group-tagging-ec2)
+ [

## Task 2: Add managed nodes to a patch group using tags
](#sysman-patch-group-tagging-managed)
+ [

## Task 3: Add a patch group to a patch baseline
](#sysman-patch-group-patchbaseline)

## Task 1: Add EC2 instances to a patch group using tags
Task 1: Add EC2 instances to a patch group using tags

You can add tags to EC2 instances using the Systems Manager console or the Amazon EC2 console. This task is required only if you are patching Amazon EC2 instances.

**Important**  
You can't apply the `Patch Group` tag (with a space) to an Amazon EC2 instance if the **Allow tags in instance metadata** option is enabled on the instance. Allowing tags in instance metadata prevents tag key names from containing spaces. If you have [allowed tags in EC2 instance metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#allow-access-to-tags-in-IMDS), you must use the tag key `PatchGroup` (without a space).

**Option 1: To add EC2 instances to a patch group (Systems Manager console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. In the **Managed nodes** list, choose the ID of a managed EC2 instance that you want to configure for patching. Node IDs for EC2 instances begin with `i-`.
**Note**  
When using the Amazon EC2 console and AWS CLI, it's possible to apply `Key = Patch Group` or `Key = PatchGroup` tags to instances that aren't yet configured for use with Systems Manager.  
If a managed node you expect to see isn't listed, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md) for troubleshooting tips.

1. Choose the **Tags** tab, then choose **Edit**.

1. In the left column, enter **Patch Group** or **PatchGroup**. If you have [allowed tags in EC2 instance metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#allow-access-to-tags-in-IMDS), you must use `PatchGroup` (without a space).

1. In the right column, enter a tag value to serve as the name for the patch group.

1. Choose **Save**.

1. Repeat this procedure to add other EC2 instances to the same patch group.

**Option 2: To add EC2 instances to a patch group (Amazon EC2 console)**

1. Open the [Amazon EC2 console](https://console.aws.amazon.com/ec2/), and then choose **Instances** in the navigation pane. 

1. In the list of instances, choose an instance that you want to configure for patching.

1. In the **Actions** menu, choose **Instance settings**, **Manage tags**.

1. Choose **Add new tag**.

1. For **Key**, enter **Patch Group** or **PatchGroup**. If you have [allowed tags in EC2 instance metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#allow-access-to-tags-in-IMDS), you must use `PatchGroup` (without a space).

1. For **Value**, enter a value to serve as the name for the patch group.

1. Choose **Save**.

1. Repeat this procedure to add other instances to the same patch group.
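You can also apply the tag from the AWS CLI using the Amazon EC2 `create-tags` command. The instance ID and patch group name below are placeholders; remember that the `PatchGroup` key (without a space) is required if tags in instance metadata are allowed.

```
aws ec2 create-tags \
    --resources i-02573cafcfEXAMPLE \
    --tags Key=PatchGroup,Value=Production
```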

## Task 2: Add managed nodes to a patch group using tags
Task 2: Add non-EC2 managed nodes to a patch group using tags

Follow the steps in this topic to add tags to AWS IoT Greengrass core devices and non-EC2 hybrid-activated managed nodes (node IDs that begin with `mi-`). This task is required only if you are patching non-EC2 instances in a hybrid and multicloud environment.

**Note**  
You can't add tags for non-EC2 managed nodes using the Amazon EC2 console.

**To add non-EC2 managed nodes to a patch group (Systems Manager console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. In the **Managed nodes** list, choose the name of the managed node that you want to configure for patching.
**Note**  
If a managed node you expect to see isn't listed, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md) for troubleshooting tips.

1. Choose the **Tags** tab, then choose **Edit**.

1. In the left column, enter **Patch Group** or **PatchGroup**. If you have [allowed tags in EC2 instance metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#allow-access-to-tags-in-IMDS), you must use `PatchGroup` (without a space).

1. In the right column, enter a tag value to serve as the name for the patch group.

1. Choose **Save**.

1. Repeat this procedure to add other managed nodes to the same patch group.
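For non-EC2 managed nodes, the tag can also be applied with the Systems Manager `add-tags-to-resource` command. The node ID and patch group name below are placeholders.

```
aws ssm add-tags-to-resource \
    --resource-type "ManagedInstance" \
    --resource-id "mi-0123456789EXAMPLE" \
    --tags "Key=PatchGroup,Value=Production"
```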

## Task 3: Add a patch group to a patch baseline
Task 3: Add a patch group to a patch baseline

To associate a specific patch baseline with your managed nodes, you must add the patch group value to the patch baseline. By registering the patch group with a patch baseline, you can ensure that the correct patches are installed during a patching operation. This task is required whether you are patching EC2 instances, non-EC2 managed nodes, or both.

For more information about patch groups, see [Patch groups](patch-manager-patch-groups.md).

**Note**  
The steps you follow depend on whether you first accessed Patch Manager before or after the [patch policies](patch-manager-policies.md) release on December 22, 2022.

**To add a patch group to a patch baseline (Systems Manager console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. If you're accessing Patch Manager for the first time in the current AWS Region and the Patch Manager start page opens, choose **Start with an overview**.

1. Choose the **Patch baselines** tab, and then in the **Patch baselines** list, choose the name of the patch baseline that you want to configure for your patch group.

   If you first accessed Patch Manager after the patch policies release, you must choose a custom baseline that you have created.

1. If the **Baseline ID** details page includes an **Actions** menu, do the following: 
   + Choose **Actions**, then **Modify patch groups**.
   + Enter the tag *value* you added to your managed nodes in [Task 2: Add managed nodes to a patch group using tags](#sysman-patch-group-tagging-managed), then choose **Add**.

   If the **Baseline ID** details page does *not* include an **Actions** menu, patch groups can't be configured in the console. Instead, you can do either of the following:
   + (Recommended) Set up a patch policy in Quick Setup, a tool in AWS Systems Manager, to map a patch baseline to one or more EC2 instances.

     For more information, see [Using Quick Setup patch policies](https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-policies.html) and [Automate organization-wide patching using a Quick Setup patch policy](https://docs.aws.amazon.com/systems-manager/latest/userguide/quick-setup-patch-manager.html).
   + Use the [https://docs.aws.amazon.com/cli/latest/reference/ssm/register-patch-baseline-for-patch-group.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/register-patch-baseline-for-patch-group.html) command in the AWS Command Line Interface (AWS CLI) to configure a patch group.
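For example, the following sketch registers a patch group named `Production` with a custom patch baseline; the baseline ID and group name are placeholders.

```
aws ssm register-patch-baseline-for-patch-group \
    --baseline-id pb-abc123cf9bEXAMPLE \
    --patch-group "Production"
```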

# Integrating Patch Manager with AWS Security Hub CSPM
Integrating Patch Manager with AWS Security Hub CSPM

[AWS Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html) provides you with a comprehensive view of your security state in AWS. Security Hub CSPM collects security data from across AWS accounts, AWS services, and supported third-party partner products. With Security Hub CSPM, you can check your environment against security industry standards and best practices. Security Hub CSPM helps you to analyze your security trends and identify the highest priority security issues.

By using the integration between Patch Manager, a tool in AWS Systems Manager, and Security Hub CSPM, you can send findings about noncompliant nodes from Patch Manager to Security Hub CSPM. A finding is the observable record of a security check or security-related detection. Security Hub CSPM can then include those patch-related findings in its analysis of your security posture.

The information in the following topics applies no matter which method or type of configuration you are using for your patching operations:
+ A patch policy configured in Quick Setup
+ A Host Management option configured in Quick Setup
+ A maintenance window to run a patch `Scan` or `Install` task
+ An on-demand **Patch now** operation

**Contents**
+ [

## How Patch Manager sends findings to Security Hub CSPM
](#securityhub-integration-sending-findings)
  + [

### Types of findings that Patch Manager sends
](#securityhub-integration-finding-types)
  + [

### Latency for sending findings
](#securityhub-integration-finding-latency)
  + [

### Retrying when Security Hub CSPM isn't available
](#securityhub-integration-retry-send)
  + [

### Viewing findings in Security Hub CSPM
](#securityhub-integration-view-findings)
+ [

## Typical finding from Patch Manager
](#securityhub-integration-finding-example)
+ [

## Turning on and configuring the integration
](#securityhub-integration-enable)
+ [

## How to stop sending findings
](#securityhub-integration-disable)

## How Patch Manager sends findings to Security Hub CSPM


In Security Hub CSPM, security issues are tracked as findings. Some findings come from issues that are detected by other AWS services or by third-party partners. Security Hub CSPM also has a set of rules that it uses to detect security issues and generate findings.

 Patch Manager is one of the Systems Manager tools that sends findings to Security Hub CSPM. After you perform a patching operation by running an SSM document (`AWS-RunPatchBaseline`, `AWS-RunPatchBaselineAssociation`, or `AWS-RunPatchBaselineWithHooks`), the patching information is sent to Inventory or Compliance, tools in AWS Systems Manager, or both. After Inventory, Compliance, or both receive the data, Patch Manager receives a notification. Then, Patch Manager evaluates the data for accuracy, formatting, and compliance. If all conditions are met, Patch Manager forwards the data to Security Hub CSPM.

Security Hub CSPM provides tools to manage findings from across all of these sources. You can view and filter lists of findings and view details for a finding. For more information, see [Viewing findings](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings-viewing.html) in the *AWS Security Hub User Guide*. You can also track the status of an investigation into a finding. For more information, see [Taking action on findings](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings-taking-action.html) in the *AWS Security Hub User Guide*.

All findings in Security Hub CSPM use a standard JSON format called the AWS Security Finding Format (ASFF). The ASFF includes details about the source of the issue, the affected resources, and the current status of the finding. For more information, see [AWS Security Finding Format (ASFF)](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings-format.html) in the *AWS Security Hub User Guide*.

### Types of findings that Patch Manager sends


Patch Manager sends the findings to Security Hub CSPM using the [AWS Security Finding Format (ASFF)](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings-format.html). In ASFF, the `Types` field provides the finding type. Findings from Patch Manager have the following value for `Types`:
+ Software and Configuration Checks/Patch Management

 Patch Manager sends one finding per noncompliant managed node. The finding is reported with the resource type [https://docs.aws.amazon.com//securityhub/latest/userguide/securityhub-findings-format-attributes.html#asff-resourcedetails-awsec2instance](https://docs.aws.amazon.com//securityhub/latest/userguide/securityhub-findings-format-attributes.html#asff-resourcedetails-awsec2instance) so that findings can be correlated with other Security Hub CSPM integrations that report `AwsEc2Instance` resource types. Patch Manager only forwards a finding to Security Hub CSPM if the operation discovered the managed node to be noncompliant. The finding includes the Patch Summary results. 

**Note**  
After reporting a noncompliant node to Security Hub CSPM, Patch Manager doesn't send an update to Security Hub CSPM after the node is made compliant. You can manually resolve findings in Security Hub CSPM after the required patches have been applied to the managed node.

For more information about compliance definitions, see [Patch compliance state values](patch-manager-compliance-states.md). For more information about `PatchSummary`, see [PatchSummary](https://docs.aws.amazon.com//securityhub/1.0/APIReference/API_PatchSummary.html) in the *AWS Security Hub API Reference*.

### Latency for sending findings


When Patch Manager creates a new finding, it's usually sent to Security Hub CSPM within a few seconds to 2 hours. The exact timing depends on the volume of traffic being processed in the AWS Region at that time.

### Retrying when Security Hub CSPM isn't available


If Security Hub CSPM isn't available, Patch Manager retries sending the findings until they're received. If there is a service outage, an AWS Lambda function runs to put the messages back into the main queue after the service is available again. After the messages are in the main queue, the retry is automatic.

### Viewing findings in Security Hub CSPM


This procedure describes how to view findings in Security Hub CSPM about managed nodes in your fleet that are out of patch compliance.

**To review Security Hub CSPM findings for patch compliance**

1. Sign in to the AWS Management Console and open the AWS Security Hub CSPM console at [https://console.aws.amazon.com/securityhub/](https://console.aws.amazon.com/securityhub/).

1. In the navigation pane, choose **Findings**.

1. Choose the **Add filters** (![\[The Search icon\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/search-icon.png)) box.

1. In the menu, under **Filters**, choose **Product name**.

1. In the dialog box that opens, choose **is** in the first field and then enter **Systems Manager Patch Manager** in the second field.

1. Choose **Apply**.

1. Add any additional filters you want to help narrow down your results.

1. In the list of results, choose the title of a finding you want more information about.

   A pane opens on the right side of the screen with more details about the resource, the issue discovered, and a recommended remediation.
**Important**  
At this time, Security Hub CSPM reports the resource type of all managed nodes as `EC2 Instance`. This includes on-premises servers and virtual machines (VMs) that you have registered for use with Systems Manager.
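
If you prefer the AWS CLI, one way to retrieve a similarly filtered list is with the Security Hub `get-findings` command. This is a sketch; the filter values mirror the console steps above.

```
# List active findings from Patch Manager, filtered by product name.
aws securityhub get-findings \
    --filters '{"ProductName": [{"Value": "Systems Manager Patch Manager", "Comparison": "EQUALS"}], "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}]}' \
    --max-items 20
```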

**Severity classifications**  
The list of findings for **Systems Manager Patch Manager** includes a report of the severity of the finding. **Severity** levels include the following, from lowest to highest:
+ **INFORMATIONAL** – No issue was found.
+ **LOW** – The issue does not require remediation.
+ **MEDIUM** – The issue must be addressed but is not urgent.
+ **HIGH** – The issue must be addressed as a priority.
+ **CRITICAL** – The issue must be remediated immediately to avoid escalation.

Severity is determined by the most severe noncompliant package on an instance. Because multiple patch baselines with different severity levels can apply, the highest severity among all the noncompliant packages is reported. For example, if noncompliant package A has a severity of "Critical" and noncompliant package B has a severity of "Low", "Critical" is reported as the severity.

Note that the severity field correlates directly with the Patch Manager `Compliance` field. This is a value that you assign to the individual patches that match a baseline rule. Because the `Compliance` value is assigned to individual patches, it isn't reflected at the `PatchSummary` level.
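
Because severity flows from the `Compliance` value on matching patches, you can control the severity reported for missing patches when you define an approval rule. The following sketch sets the compliance level for a rule through the `ComplianceLevel` field of the `--approval-rules` parameter of `create-patch-baseline`; the baseline name is a placeholder.

```
# Assign a compliance (severity) level of CRITICAL to patches approved by this rule.
# Nodes missing these patches are then reported at CRITICAL severity.
aws ssm create-patch-baseline \
    --name "Example-Critical-Security-Updates" \
    --approval-rules "PatchRules=[{PatchFilterGroup={PatchFilters=[{Key=MSRC_SEVERITY,Values=[Critical]},{Key=CLASSIFICATION,Values=[SecurityUpdates]}]},ApproveAfterDays=5,ComplianceLevel=CRITICAL}]"
```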

**Related content**
+ [Findings](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings.html) in the *AWS Security Hub User Guide*
+ [Multi-Account patch compliance with Patch Manager and Security Hub](https://aws.amazon.com/blogs/mt/multi-account-patch-compliance-with-patch-manager-and-security-hub/) in the *AWS Management & Governance Blog*

## Typical finding from Patch Manager


Patch Manager sends findings to Security Hub CSPM using the [AWS Security Finding Format (ASFF)](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings-format.html).

Here is an example of a typical finding from Patch Manager.

```
{
  "SchemaVersion": "2018-10-08",
  "Id": "arn:aws:patchmanager:us-east-2:111122223333:instance/i-02573cafcfEXAMPLE/document/AWS-RunPatchBaseline/run-command/d710f5bd-04e3-47b4-82f6-df4e0EXAMPLE",
  "ProductArn": "arn:aws:securityhub:us-east-1::product/aws/ssm-patch-manager",
  "GeneratorId": "d710f5bd-04e3-47b4-82f6-df4e0EXAMPLE",
  "AwsAccountId": "111122223333",
  "Types": [
    "Software & Configuration Checks/Patch Management/Compliance"
  ],
  "CreatedAt": "2021-11-11T22:05:25Z",
  "UpdatedAt": "2021-11-11T22:05:25Z",
  "Severity": {
    "Label": "INFORMATIONAL",
    "Normalized": 0
  },
  "Title": "Systems Manager Patch Summary - Managed Instance Non-Compliant",
  "Description": "This AWS control checks whether each instance that is managed by AWS Systems Manager is in compliance with the rules of the patch baseline that applies to that instance when a compliance Scan runs.",
  "Remediation": {
    "Recommendation": {
      "Text": "For information about bringing instances into patch compliance, see 'Remediating out-of-compliance instances (Patch Manager)'.",
      "Url": "https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-compliance-remediation.html"
    }
  },
  "SourceUrl": "https://us-east-2.console.aws.amazon.com/systems-manager/fleet-manager/i-02573cafcfEXAMPLE/patch?region=us-east-2",
  "ProductFields": {
    "aws/securityhub/FindingId": "arn:aws:securityhub:us-east-2::product/aws/ssm-patch-manager/arn:aws:patchmanager:us-east-2:111122223333:instance/i-02573cafcfEXAMPLE/document/AWS-RunPatchBaseline/run-command/d710f5bd-04e3-47b4-82f6-df4e0EXAMPLE",
    "aws/securityhub/ProductName": "Systems Manager Patch Manager",
    "aws/securityhub/CompanyName": "AWS"
  },
  "Resources": [
    {
      "Type": "AwsEc2Instance",
      "Id": "i-02573cafcfEXAMPLE",
      "Partition": "aws",
      "Region": "us-east-2"
    }
  ],
  "WorkflowState": "NEW",
  "Workflow": {
    "Status": "NEW"
  },
  "RecordState": "ACTIVE",
  "PatchSummary": {
    "Id": "pb-0c10e65780EXAMPLE",
    "InstalledCount": 45,
    "MissingCount": 2,
    "FailedCount": 0,
    "InstalledOtherCount": 396,
    "InstalledRejectedCount": 0,
    "InstalledPendingReboot": 0,
    "OperationStartTime": "2021-11-11T22:05:06Z",
    "OperationEndTime": "2021-11-11T22:05:25Z",
    "RebootOption": "NoReboot",
    "Operation": "SCAN"
  }
}
```

## Turning on and configuring the integration


To use the Patch Manager integration with Security Hub CSPM, you must turn on Security Hub CSPM. For information about how to turn on Security Hub CSPM, see [Setting up Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-settingup.html) in the *AWS Security Hub User Guide*.

The following procedure describes how to integrate Patch Manager and Security Hub CSPM when Security Hub CSPM is already active but Patch Manager integration is turned off. You only need to complete this procedure if integration was manually turned off.

**To add Patch Manager to Security Hub CSPM integration**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose the **Settings** tab.

   -or-

   If you are accessing Patch Manager for the first time in the current AWS Region, choose **Start with an overview**, and then choose the **Settings** tab.

1. Under the **Export to Security Hub CSPM** section, to the right of **Patch compliance findings aren't being exported to Security Hub**, choose **Enable**.

## How to stop sending findings


To stop sending findings to Security Hub CSPM, you can use either the Security Hub CSPM console or the API.

For more information, see the following topics in the *AWS Security Hub User Guide*:
+ [Disabling and enabling the flow of findings from an integration (console)](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-integrations-managing.html#securityhub-integration-findings-flow-console)
+ [Disabling the flow of findings from an integration (Security Hub CSPM API, AWS CLI)](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-integrations-managing.html#securityhub-integration-findings-flow-disable-api)
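
As one hedged example of the API route, the Security Hub `disable-import-findings-for-product` command stops the flow of findings from an integration. The subscription ARN below is a placeholder; you can look up the real one with `list-enabled-products-for-import`.

```
# Find the Patch Manager product subscription ARN, then disable the integration.
aws securityhub list-enabled-products-for-import
aws securityhub disable-import-findings-for-product \
    --product-subscription-arn "arn:aws:securityhub:us-east-2:111122223333:product-subscription/aws/ssm-patch-manager"
```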

# Working with Patch Manager resources using the AWS CLI


This section includes examples of AWS Command Line Interface (AWS CLI) commands that you can use to perform configuration tasks for Patch Manager, a tool in AWS Systems Manager.

For an illustration of using the AWS CLI to patch a server environment by using a custom patch baseline, see [Tutorial: Patch a server environment using the AWS CLI](patch-manager-patch-servers-using-the-aws-cli.md).

For more information about using the AWS CLI for AWS Systems Manager tasks, see the [AWS Systems Manager section of the AWS CLI Command Reference](https://docs.aws.amazon.com/cli/latest/reference/ssm/index.html).

**Topics**
+ [

## AWS CLI commands for patch baselines
](#patch-baseline-cli-commands)
+ [

## AWS CLI commands for patch groups
](#patch-group-cli-commands)
+ [

## AWS CLI commands for viewing patch summaries and details
](#patch-details-cli-commands)
+ [

## AWS CLI commands for scanning and patching managed nodes
](#patch-operations-cli-commands)

## AWS CLI commands for patch baselines


**Topics**
+ [

### Create a patch baseline
](#patch-manager-cli-commands-create-patch-baseline)
+ [

### Create a patch baseline with custom repositories for different OS versions
](#patch-manager-cli-commands-create-patch-baseline-mult-sources)
+ [

### Update a patch baseline
](#patch-manager-cli-commands-update-patch-baseline)
+ [

### Rename a patch baseline
](#patch-manager-cli-commands-rename-patch-baseline)
+ [

### Delete a patch baseline
](#patch-manager-cli-commands-delete-patch-baseline)
+ [

### List all patch baselines
](#patch-manager-cli-commands-describe-patch-baselines)
+ [

### List all AWS-provided patch baselines
](#patch-manager-cli-commands-describe-patch-baselines-aws)
+ [

### List my patch baselines
](#patch-manager-cli-commands-describe-patch-baselines-custom)
+ [

### Display a patch baseline
](#patch-manager-cli-commands-get-patch-baseline)
+ [

### Get the default patch baseline
](#patch-manager-cli-commands-get-default-patch-baseline)
+ [

### Set a custom patch baseline as the default
](#patch-manager-cli-commands-register-default-patch-baseline)
+ [

### Reset an AWS patch baseline as the default
](#patch-manager-cli-commands-register-aws-patch-baseline)
+ [

### Tag a patch baseline
](#patch-manager-cli-commands-add-tags-to-resource)
+ [

### List the tags for a patch baseline
](#patch-manager-cli-commands-list-tags-for-resource)
+ [

### Remove a tag from a patch baseline
](#patch-manager-cli-commands-remove-tags-from-resource)

### Create a patch baseline


The following command creates a patch baseline that approves all critical and important security updates for Windows Server 2012 R2 5 days after they're released. Patches have also been specified for the Approved and Rejected patch lists. In addition, the patch baseline has been tagged to indicate that it's for a production environment.

------
#### [ Linux & macOS ]

```
aws ssm create-patch-baseline \
    --name "Windows-Server-2012R2" \
    --tags "Key=Environment,Value=Production" \
    --description "Windows Server 2012 R2, Important and Critical security updates" \
    --approved-patches "KB2032276,MS10-048" \
    --rejected-patches "KB2124261" \
    --rejected-patches-action "ALLOW_AS_DEPENDENCY" \
    --approval-rules "PatchRules=[{PatchFilterGroup={PatchFilters=[{Key=MSRC_SEVERITY,Values=[Important,Critical]},{Key=CLASSIFICATION,Values=SecurityUpdates},{Key=PRODUCT,Values=WindowsServer2012R2}]},ApproveAfterDays=5}]"
```

------
#### [ Windows Server ]

```
aws ssm create-patch-baseline ^
    --name "Windows-Server-2012R2" ^
    --tags "Key=Environment,Value=Production" ^
    --description "Windows Server 2012 R2, Important and Critical security updates" ^
    --approved-patches "KB2032276,MS10-048" ^
    --rejected-patches "KB2124261" ^
    --rejected-patches-action "ALLOW_AS_DEPENDENCY" ^
    --approval-rules "PatchRules=[{PatchFilterGroup={PatchFilters=[{Key=MSRC_SEVERITY,Values=[Important,Critical]},{Key=CLASSIFICATION,Values=SecurityUpdates},{Key=PRODUCT,Values=WindowsServer2012R2}]},ApproveAfterDays=5}]"
```

------

The system returns information like the following.

```
{
   "BaselineId":"pb-0c10e65780EXAMPLE"
}
```

### Create a patch baseline with custom repositories for different OS versions


Applies to Linux managed nodes only. The following procedure shows how to specify the patch repository to use for a particular version of the Amazon Linux operating system. This sample uses a source repository allowed by default on Amazon Linux 2017.09, but it could be adapted to a different source repository that you have configured for a managed node.

**Note**  
To better demonstrate this more complex command, we're using the `--cli-input-json` option with additional options stored in an external JSON file.

1. Create a JSON file with a name like `my-patch-repository.json` and add the following content to it.

   ```
   {
       "Description": "My patch repository for Amazon Linux 2",
       "Name": "Amazon-Linux-2",
       "OperatingSystem": "AMAZON_LINUX_2",
       "ApprovalRules": {
           "PatchRules": [
               {
                   "ApproveAfterDays": 7,
                   "EnableNonSecurity": true,
                   "PatchFilterGroup": {
                       "PatchFilters": [
                           {
                               "Key": "SEVERITY",
                               "Values": [
                                   "Important",
                                   "Critical"
                               ]
                           },
                           {
                               "Key": "CLASSIFICATION",
                               "Values": [
                                   "Security",
                                   "Bugfix"
                               ]
                           },
                           {
                               "Key": "PRODUCT",
                               "Values": [
                                   "AmazonLinux2"
                               ]
                           }
                       ]
                   }
               }
           ]
       },
       "Sources": [
           {
               "Name": "My-AL2",
               "Products": [
                   "AmazonLinux2"
               ],
               "Configuration": "[amzn-main] \nname=amzn-main-Base\nmirrorlist=http://repo./$awsregion./$awsdomain//$releasever/main/mirror.list //nmirrorlist_expire=300//nmetadata_expire=300 \npriority=10 \nfailovermethod=priority \nfastestmirror_enabled=0 \ngpgcheck=1 \ngpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-amazon-ga \nenabled=1 \nretries=3 \ntimeout=5\nreport_instanceid=yes"
           }
       ]
   }
   ```

1. In the directory where you saved the file, run the following command.

   ```
   aws ssm create-patch-baseline --cli-input-json file://my-patch-repository.json
   ```

   The system returns information like the following.

   ```
   {
       "BaselineId": "pb-0c10e65780EXAMPLE"
   }
   ```

### Update a patch baseline


The following command adds two patches as rejected and one patch as approved to an existing patch baseline.

For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).

------
#### [ Linux & macOS ]

```
aws ssm update-patch-baseline \
    --baseline-id pb-0c10e65780EXAMPLE \
    --rejected-patches "KB2032276" "MS10-048" \
    --approved-patches "KB2124261"
```

------
#### [ Windows Server ]

```
aws ssm update-patch-baseline ^
    --baseline-id pb-0c10e65780EXAMPLE ^
    --rejected-patches "KB2032276" "MS10-048" ^
    --approved-patches "KB2124261"
```

------

The system returns information like the following.

```
{
   "BaselineId":"pb-0c10e65780EXAMPLE",
   "Name":"Windows-Server-2012R2",
   "RejectedPatches":[
      "KB2032276",
      "MS10-048"
   ],
   "GlobalFilters":{
      "PatchFilters":[

      ]
   },
   "ApprovalRules":{
      "PatchRules":[
         {
            "PatchFilterGroup":{
               "PatchFilters":[
                  {
                     "Values":[
                        "Important",
                        "Critical"
                     ],
                     "Key":"MSRC_SEVERITY"
                  },
                  {
                     "Values":[
                        "SecurityUpdates"
                     ],
                     "Key":"CLASSIFICATION"
                  },
                  {
                     "Values":[
                        "WindowsServer2012R2"
                     ],
                     "Key":"PRODUCT"
                  }
               ]
            },
            "ApproveAfterDays":5
         }
      ]
   },
   "ModifiedDate":1481001494.035,
   "CreatedDate":1480997823.81,
   "ApprovedPatches":[
      "KB2124261"
   ],
   "Description":"Windows Server 2012 R2, Important and Critical security updates"
}
```

### Rename a patch baseline


------
#### [ Linux & macOS ]

```
aws ssm update-patch-baseline \
    --baseline-id pb-0c10e65780EXAMPLE \
    --name "Windows-Server-2012-R2-Important-and-Critical-Security-Updates"
```

------
#### [ Windows Server ]

```
aws ssm update-patch-baseline ^
    --baseline-id pb-0c10e65780EXAMPLE ^
    --name "Windows-Server-2012-R2-Important-and-Critical-Security-Updates"
```

------

The system returns information like the following.

```
{
   "BaselineId":"pb-0c10e65780EXAMPLE",
   "Name":"Windows-Server-2012-R2-Important-and-Critical-Security-Updates",
   "RejectedPatches":[
      "KB2032276",
      "MS10-048"
   ],
   "GlobalFilters":{
      "PatchFilters":[

      ]
   },
   "ApprovalRules":{
      "PatchRules":[
         {
            "PatchFilterGroup":{
               "PatchFilters":[
                  {
                     "Values":[
                        "Important",
                        "Critical"
                     ],
                     "Key":"MSRC_SEVERITY"
                  },
                  {
                     "Values":[
                        "SecurityUpdates"
                     ],
                     "Key":"CLASSIFICATION"
                  },
                  {
                     "Values":[
                        "WindowsServer2012R2"
                     ],
                     "Key":"PRODUCT"
                  }
               ]
            },
            "ApproveAfterDays":5
         }
      ]
   },
   "ModifiedDate":1481001795.287,
   "CreatedDate":1480997823.81,
   "ApprovedPatches":[
      "KB2124261"
   ],
   "Description":"Windows Server 2012 R2, Important and Critical security updates"
}
```

### Delete a patch baseline


```
aws ssm delete-patch-baseline --baseline-id "pb-0c10e65780EXAMPLE"
```

The system returns information like the following.

```
{
   "BaselineId":"pb-0c10e65780EXAMPLE"
}
```

### List all patch baselines


```
aws ssm describe-patch-baselines
```

The system returns information like the following.

```
{
   "BaselineIdentities":[
      {
         "BaselineName":"AWS-DefaultPatchBaseline",
         "DefaultBaseline":true,
         "BaselineDescription":"Default Patch Baseline Provided by AWS.",
         "BaselineId":"arn:aws:ssm:us-east-2:111122223333:patchbaseline/pb-0c10e65780EXAMPLE"
      },
      {
         "BaselineName":"Windows-Server-2012R2",
         "DefaultBaseline":false,
         "BaselineDescription":"Windows Server 2012 R2, Important and Critical security updates",
         "BaselineId":"pb-0c10e65780EXAMPLE"
      }
   ]
}
```

Here is another command that lists all patch baselines in an AWS Region.

------
#### [ Linux & macOS ]

```
aws ssm describe-patch-baselines \
    --region us-east-2 \
    --filters "Key=OWNER,Values=[All]"
```

------
#### [ Windows Server ]

```
aws ssm describe-patch-baselines ^
    --region us-east-2 ^
    --filters "Key=OWNER,Values=[All]"
```

------

The system returns information like the following.

```
{
   "BaselineIdentities":[
      {
         "BaselineName":"AWS-DefaultPatchBaseline",
         "DefaultBaseline":true,
         "BaselineDescription":"Default Patch Baseline Provided by AWS.",
         "BaselineId":"arn:aws:ssm:us-east-2:111122223333:patchbaseline/pb-0c10e65780EXAMPLE"
      },
      {
         "BaselineName":"Windows-Server-2012R2",
         "DefaultBaseline":false,
         "BaselineDescription":"Windows Server 2012 R2, Important and Critical security updates",
         "BaselineId":"pb-0c10e65780EXAMPLE"
      }
   ]
}
```

### List all AWS-provided patch baselines


------
#### [ Linux & macOS ]

```
aws ssm describe-patch-baselines \
    --region us-east-2 \
    --filters "Key=OWNER,Values=[AWS]"
```

------
#### [ Windows Server ]

```
aws ssm describe-patch-baselines ^
    --region us-east-2 ^
    --filters "Key=OWNER,Values=[AWS]"
```

------

The system returns information like the following.

```
{
   "BaselineIdentities":[
      {
         "BaselineName":"AWS-DefaultPatchBaseline",
         "DefaultBaseline":true,
         "BaselineDescription":"Default Patch Baseline Provided by AWS.",
         "BaselineId":"arn:aws:ssm:us-east-2:111122223333:patchbaseline/pb-0c10e65780EXAMPLE"
      }
   ]
}
```

### List my patch baselines


------
#### [ Linux & macOS ]

```
aws ssm describe-patch-baselines \
    --region us-east-2 \
    --filters "Key=OWNER,Values=[Self]"
```

------
#### [ Windows Server ]

```
aws ssm describe-patch-baselines ^
    --region us-east-2 ^
    --filters "Key=OWNER,Values=[Self]"
```

------

The system returns information like the following.

```
{
   "BaselineIdentities":[
      {
         "BaselineName":"Windows-Server-2012R2",
         "DefaultBaseline":false,
         "BaselineDescription":"Windows Server 2012 R2, Important and Critical security updates",
         "BaselineId":"pb-0c10e65780EXAMPLE"
      }
   ]
}
```

### Display a patch baseline


```
aws ssm get-patch-baseline --baseline-id pb-0c10e65780EXAMPLE
```

**Note**  
For custom patch baselines, you can specify either the patch baseline ID or the full Amazon Resource Name (ARN). For an AWS-provided patch baseline, you must specify the full ARN. For example, `arn:aws:ssm:us-east-2:075727635805:patchbaseline/pb-0c10e65780EXAMPLE`.

The system returns information like the following.

```
{
   "BaselineId":"pb-0c10e65780EXAMPLE",
   "Name":"Windows-Server-2012R2",
   "PatchGroups":[
      "Web Servers"
   ],
   "RejectedPatches":[

   ],
   "GlobalFilters":{
      "PatchFilters":[

      ]
   },
   "ApprovalRules":{
      "PatchRules":[
         {
            "PatchFilterGroup":{
               "PatchFilters":[
                  {
                     "Values":[
                        "Important",
                        "Critical"
                     ],
                     "Key":"MSRC_SEVERITY"
                  },
                  {
                     "Values":[
                        "SecurityUpdates"
                     ],
                     "Key":"CLASSIFICATION"
                  },
                  {
                     "Values":[
                        "WindowsServer2012R2"
                     ],
                     "Key":"PRODUCT"
                  }
               ]
            },
            "ApproveAfterDays":5
         }
      ]
   },
   "ModifiedDate":1480997823.81,
   "CreatedDate":1480997823.81,
   "ApprovedPatches":[

   ],
   "Description":"Windows Server 2012 R2, Important and Critical security updates"
}
```

### Get the default patch baseline


```
aws ssm get-default-patch-baseline --region us-east-2
```

The system returns information like the following.

```
{
   "BaselineId":"arn:aws:ssm:us-east-2:111122223333:patchbaseline/pb-0c10e65780EXAMPLE"
}
```

### Set a custom patch baseline as the default


------
#### [ Linux & macOS ]

```
aws ssm register-default-patch-baseline \
    --region us-east-2 \
    --baseline-id "pb-0c10e65780EXAMPLE"
```

------
#### [ Windows Server ]

```
aws ssm register-default-patch-baseline ^
    --region us-east-2 ^
    --baseline-id "pb-0c10e65780EXAMPLE"
```

------

The system returns information like the following.

```
{
   "BaselineId":"pb-0c10e65780EXAMPLE"
}
```

### Reset an AWS patch baseline as the default


------
#### [ Linux & macOS ]

```
aws ssm register-default-patch-baseline \
    --region us-east-2 \
    --baseline-id "arn:aws:ssm:us-east-2:123456789012:patchbaseline/pb-0c10e65780EXAMPLE"
```

------
#### [ Windows Server ]

```
aws ssm register-default-patch-baseline ^
    --region us-east-2 ^
    --baseline-id "arn:aws:ssm:us-east-2:123456789012:patchbaseline/pb-0c10e65780EXAMPLE"
```

------

The system returns information like the following.

```
{
   "BaselineId":"pb-0c10e65780EXAMPLE"
}
```

### Tag a patch baseline


------
#### [ Linux & macOS ]

```
aws ssm add-tags-to-resource \
    --resource-type "PatchBaseline" \
    --resource-id "pb-0c10e65780EXAMPLE" \
    --tags "Key=Project,Value=Testing"
```

------
#### [ Windows Server ]

```
aws ssm add-tags-to-resource ^
    --resource-type "PatchBaseline" ^
    --resource-id "pb-0c10e65780EXAMPLE" ^
    --tags "Key=Project,Value=Testing"
```

------

### List the tags for a patch baseline


------
#### [ Linux & macOS ]

```
aws ssm list-tags-for-resource \
    --resource-type "PatchBaseline" \
    --resource-id "pb-0c10e65780EXAMPLE"
```

------
#### [ Windows Server ]

```
aws ssm list-tags-for-resource ^
    --resource-type "PatchBaseline" ^
    --resource-id "pb-0c10e65780EXAMPLE"
```

------

### Remove a tag from a patch baseline


------
#### [ Linux & macOS ]

```
aws ssm remove-tags-from-resource \
    --resource-type "PatchBaseline" \
    --resource-id "pb-0c10e65780EXAMPLE" \
    --tag-keys "Project"
```

------
#### [ Windows Server ]

```
aws ssm remove-tags-from-resource ^
    --resource-type "PatchBaseline" ^
    --resource-id "pb-0c10e65780EXAMPLE" ^
    --tag-keys "Project"
```

------

## AWS CLI commands for patch groups


**Topics**
+ [

### Create a patch group
](#patch-manager-cli-commands-create-patch-group)
+ [

### Register a patch group "Web Servers" with a patch baseline
](#patch-manager-cli-commands-register-patch-baseline-for-patch-group-web-servers)
+ [

### Register a patch group "Backend" with the AWS-provided patch baseline
](#patch-manager-cli-commands-register-patch-baseline-for-patch-group-backend)
+ [

### Display patch group registrations
](#patch-manager-cli-commands-describe-patch-groups)
+ [

### Deregister a patch group from a patch baseline
](#patch-manager-cli-commands-deregister-patch-baseline-for-patch-group)

### Create a patch group


**Note**  
Patch groups are not used in patching operations that are based on *patch policies*. For information about working with patch policies, see [Patch policy configurations in Quick Setup](patch-manager-policies.md).

To help you organize your patching efforts, we recommend that you add managed nodes to patch groups by using tags. Patch groups require use of the tag key `Patch Group` or `PatchGroup`. If you have [allowed tags in EC2 instance metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#allow-access-to-tags-in-IMDS), you must use `PatchGroup` (without a space). You can specify any tag value, but the tag key must be `Patch Group` or `PatchGroup`. For more information about patch groups, see [Patch groups](patch-manager-patch-groups.md).

After you group your managed nodes using tags, you add the patch group value to a patch baseline. By registering the patch group with a patch baseline, you ensure that the correct patches are installed during the patching operation.

#### Task 1: Add EC2 instances to a patch group using tags


**Note**  
When using the Amazon Elastic Compute Cloud (Amazon EC2) console and AWS CLI, it's possible to apply `Key = Patch Group` or `Key = PatchGroup` tags to instances that aren't yet configured for use with Systems Manager. If an EC2 instance you expect to see in Patch Manager isn't listed after applying the `Patch Group` or `PatchGroup` tag, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md) for troubleshooting tips.

Run the following command to add the `PatchGroup` tag to an EC2 instance.

```
aws ec2 create-tags --resources "i-1234567890abcdef0" --tags "Key=PatchGroup,Value=GroupValue"
```

#### Task 2: Add managed nodes to a patch group using tags


Run the following command to add the `PatchGroup` tag to a managed node.

------
#### [ Linux & macOS ]

```
aws ssm add-tags-to-resource \
    --resource-type "ManagedInstance" \
    --resource-id "mi-0123456789abcdefg" \
    --tags "Key=PatchGroup,Value=GroupValue"
```

------
#### [ Windows Server ]

```
aws ssm add-tags-to-resource ^
    --resource-type "ManagedInstance" ^
    --resource-id "mi-0123456789abcdefg" ^
    --tags "Key=PatchGroup,Value=GroupValue"
```

------

#### Task 3: Add a patch group to a patch baseline


Run the following command to associate a `PatchGroup` tag value to the specified patch baseline.

------
#### [ Linux & macOS ]

```
aws ssm register-patch-baseline-for-patch-group \
    --baseline-id "pb-0c10e65780EXAMPLE" \
    --patch-group "Development"
```

------
#### [ Windows Server ]

```
aws ssm register-patch-baseline-for-patch-group ^
    --baseline-id "pb-0c10e65780EXAMPLE" ^
    --patch-group "Development"
```

------

The system returns information like the following.

```
{
  "PatchGroup": "Development",
  "BaselineId": "pb-0c10e65780EXAMPLE"
}
```

### Register a patch group "Web Servers" with a patch baseline


------
#### [ Linux & macOS ]

```
aws ssm register-patch-baseline-for-patch-group \
    --baseline-id "pb-0c10e65780EXAMPLE" \
    --patch-group "Web Servers"
```

------
#### [ Windows Server ]

```
aws ssm register-patch-baseline-for-patch-group ^
    --baseline-id "pb-0c10e65780EXAMPLE" ^
    --patch-group "Web Servers"
```

------

The system returns information like the following.

```
{
   "PatchGroup":"Web Servers",
   "BaselineId":"pb-0c10e65780EXAMPLE"
}
```

### Register a patch group "Backend" with the AWS-provided patch baseline


------
#### [ Linux & macOS ]

```
aws ssm register-patch-baseline-for-patch-group \
    --region us-east-2 \
    --baseline-id "arn:aws:ssm:us-east-2:111122223333:patchbaseline/pb-0c10e65780EXAMPLE" \
    --patch-group "Backend"
```

------
#### [ Windows Server ]

```
aws ssm register-patch-baseline-for-patch-group ^
    --region us-east-2 ^
    --baseline-id "arn:aws:ssm:us-east-2:111122223333:patchbaseline/pb-0c10e65780EXAMPLE" ^
    --patch-group "Backend"
```

------

The system returns information like the following.

```
{
   "PatchGroup":"Backend",
   "BaselineId":"arn:aws:ssm:us-east-2:111122223333:patchbaseline/pb-0c10e65780EXAMPLE"
}
```

### Display patch group registrations


```
aws ssm describe-patch-groups --region us-east-2
```

The system returns information like the following.

```
{
   "PatchGroupPatchBaselineMappings":[
      {
         "PatchGroup":"Backend",
         "BaselineIdentity":{
            "BaselineName":"AWS-DefaultPatchBaseline",
            "DefaultBaseline":false,
            "BaselineDescription":"Default Patch Baseline Provided by AWS.",
            "BaselineId":"arn:aws:ssm:us-east-2:111122223333:patchbaseline/pb-0c10e65780EXAMPLE"
         }
      },
      {
         "PatchGroup":"Web Servers",
         "BaselineIdentity":{
            "BaselineName":"Windows-Server-2012R2",
            "DefaultBaseline":true,
            "BaselineDescription":"Windows Server 2012 R2, Important and Critical updates",
            "BaselineId":"pb-0c10e65780EXAMPLE"
         }
      }
   ]
}
```

### Deregister a patch group from a patch baseline


------
#### [ Linux & macOS ]

```
aws ssm deregister-patch-baseline-for-patch-group \
    --region us-east-2 \
    --patch-group "Production" \
    --baseline-id "arn:aws:ssm:us-east-2:111122223333:patchbaseline/pb-0c10e65780EXAMPLE"
```

------
#### [ Windows Server ]

```
aws ssm deregister-patch-baseline-for-patch-group ^
    --region us-east-2 ^
    --patch-group "Production" ^
    --baseline-id "arn:aws:ssm:us-east-2:111122223333:patchbaseline/pb-0c10e65780EXAMPLE"
```

------

The system returns information like the following.

```
{
   "PatchGroup":"Production",
   "BaselineId":"arn:aws:ssm:us-east-2:111122223333:patchbaseline/pb-0c10e65780EXAMPLE"
}
```

## AWS CLI commands for viewing patch summaries and details


**Topics**
+ [

### Get all patches defined by a patch baseline
](#patch-manager-cli-commands-describe-effective-patches-for-patch-baseline)
+ [

### Get all patches for AmazonLinux2018.03 that have a Classification of `SECURITY` and a Severity of `Critical`
](#patch-manager-cli-commands-describe-available-patches-linux)
+ [

### Get all patches for Windows Server 2012 that have an MSRC severity of `Critical`
](#patch-manager-cli-commands-describe-available-patches)
+ [

### Get all available patches
](#patch-manager-cli-commands-describe-available-patches)
+ [

### Get patch summary states per managed node
](#patch-manager-cli-commands-describe-instance-patch-states)
+ [

### Get patch compliance details for a managed node
](#patch-manager-cli-commands-describe-instance-patches)
+ [

### View patching compliance results (AWS CLI)
](#viewing-patch-compliance-results-cli)

### Get all patches defined by a patch baseline


**Note**  
This command is supported for Windows Server patch baselines only.

------
#### [ Linux & macOS ]

```
aws ssm describe-effective-patches-for-patch-baseline \
    --region us-east-2 \
    --baseline-id "pb-0c10e65780EXAMPLE"
```

------
#### [ Windows Server ]

```
aws ssm describe-effective-patches-for-patch-baseline ^
    --region us-east-2 ^
    --baseline-id "pb-0c10e65780EXAMPLE"
```

------

The system returns information like the following.

```
{
   "NextToken":"--token string truncated--",
   "EffectivePatches":[
      {
         "PatchStatus":{
            "ApprovalDate":1384711200.0,
            "DeploymentStatus":"APPROVED"
         },
         "Patch":{
            "ContentUrl":"https://support.microsoft.com/en-us/kb/2876331",
            "ProductFamily":"Windows",
            "Product":"WindowsServer2012R2",
            "Vendor":"Microsoft",
            "Description":"A security issue has been identified in a Microsoft software 
               product that could affect your system. You can help protect your system 
               by installing this update from Microsoft. For a complete listing of the 
               issues that are included in this update, see the associated Microsoft 
               Knowledge Base article. After you install this update, you may have to 
               restart your system.",
            "Classification":"SecurityUpdates",
            "Title":"Security Update for Windows Server 2012 R2 Preview (KB2876331)",
            "ReleaseDate":1384279200.0,
            "MsrcClassification":"Critical",
            "Language":"All",
            "KbNumber":"KB2876331",
            "MsrcNumber":"MS13-089",
            "Id":"e74ccc76-85f0-4881-a738-59e9fc9a336d"
         }
      },
      {
         "PatchStatus":{
            "ApprovalDate":1428858000.0,
            "DeploymentStatus":"APPROVED"
         },
         "Patch":{
            "ContentUrl":"https://support.microsoft.com/en-us/kb/2919355",
            "ProductFamily":"Windows",
            "Product":"WindowsServer2012R2",
            "Vendor":"Microsoft",
            "Description":"Windows Server 2012 R2 Update is a cumulative 
               set of security updates, critical updates and updates. You 
               must install Windows Server 2012 R2 Update to ensure that 
               your computer can continue to receive future Windows Updates, 
               including security updates. For a complete listing of the 
               issues that are included in this update, see the associated 
               Microsoft Knowledge Base article for more information. After 
               you install this item, you may have to restart your computer.",
            "Classification":"SecurityUpdates",
            "Title":"Windows Server 2012 R2 Update (KB2919355)",
            "ReleaseDate":1428426000.0,
            "MsrcClassification":"Critical",
            "Language":"All",
            "KbNumber":"KB2919355",
            "MsrcNumber":"MS14-018",
            "Id":"8452bac0-bf53-4fbd-915d-499de08c338b"
         }
      }
     ---output truncated---
```

### Get all patches for AmazonLinux2018.03 that have a Classification of `SECURITY` and a Severity of `Critical`


------
#### [ Linux & macOS ]

```
aws ssm describe-available-patches \
    --region us-east-2 \
    --filters Key=PRODUCT,Values=AmazonLinux2018.03 Key=SEVERITY,Values=Critical
```

------
#### [ Windows Server ]

```
aws ssm describe-available-patches ^
    --region us-east-2 ^
    --filters Key=PRODUCT,Values=AmazonLinux2018.03 Key=SEVERITY,Values=Critical
```

------

The system returns information like the following.

```
{
    "Patches": [
        {
            "AdvisoryIds": ["ALAS-2011-1"],
            "BugzillaIds": [ "1234567" ],
            "Classification": "SECURITY",
            "CVEIds": [ "CVE-2011-3192"],
            "Name": "zziplib",
            "Epoch": "0",
            "Version": "2.71",
            "Release": "1.3.amzn1",
            "Arch": "i686",
            "Product": "AmazonLinux2018.03",
            "ReleaseDate": 1590519815,
            "Severity": "CRITICAL"
        }
    ]
}     
---output truncated---
```
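The `ReleaseDate` values in these responses are Unix epoch timestamps. If you need a human-readable date, a quick sketch in Python (using the `ReleaseDate` from the example response above):

```python
from datetime import datetime, timezone

release_date = 1590519815  # ReleaseDate from the example response above
print(datetime.fromtimestamp(release_date, tz=timezone.utc).strftime("%Y-%m-%d"))
# 2020-05-26
```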

### Get all patches for Windows Server 2012 that have an MSRC severity of `Critical`


------
#### [ Linux & macOS ]

```
aws ssm describe-available-patches \
    --region us-east-2 \
    --filters Key=PRODUCT,Values=WindowsServer2012 Key=MSRC_SEVERITY,Values=Critical
```

------
#### [ Windows Server ]

```
aws ssm describe-available-patches ^
    --region us-east-2 ^
    --filters Key=PRODUCT,Values=WindowsServer2012 Key=MSRC_SEVERITY,Values=Critical
```

------

The system returns information like the following.

```
{
   "Patches":[
      {
         "ContentUrl":"https://support.microsoft.com/en-us/kb/2727528",
         "ProductFamily":"Windows",
         "Product":"WindowsServer2012",
         "Vendor":"Microsoft",
         "Description":"A security issue has been identified that could 
           allow an unauthenticated remote attacker to compromise your 
           system and gain control over it. You can help protect your 
           system by installing this update from Microsoft. After you 
           install this update, you may have to restart your system.",
         "Classification":"SecurityUpdates",
         "Title":"Security Update for Windows Server 2012 (KB2727528)",
         "ReleaseDate":1352829600.0,
         "MsrcClassification":"Critical",
         "Language":"All",
         "KbNumber":"KB2727528",
         "MsrcNumber":"MS12-072",
         "Id":"1eb507be-2040-4eeb-803d-abc55700b715"
      },
      {
         "ContentUrl":"https://support.microsoft.com/en-us/kb/2729462",
         "ProductFamily":"Windows",
         "Product":"WindowsServer2012",
         "Vendor":"Microsoft",
         "Description":"A security issue has been identified that could 
           allow an unauthenticated remote attacker to compromise your 
           system and gain control over it. You can help protect your 
           system by installing this update from Microsoft. After you 
           install this update, you may have to restart your system.",
         "Classification":"SecurityUpdates",
         "Title":"Security Update for Microsoft .NET Framework 3.5 on 
           Windows 8 and Windows Server 2012 for x64-based Systems (KB2729462)",
         "ReleaseDate":1352829600.0,
         "MsrcClassification":"Critical",
         "Language":"All",
         "KbNumber":"KB2729462",
         "MsrcNumber":"MS12-074",
         "Id":"af873760-c97c-4088-ab7e-5219e120eab4"
      }
     
---output truncated---
```

### Get all available patches


```
aws ssm describe-available-patches --region us-east-2
```

The system returns information like the following.

```
{
   "NextToken":"--token string truncated--",
   "Patches":[
      {
            "Classification": "SecurityUpdates",
            "ContentUrl": "https://support.microsoft.com/en-us/kb/4074588",
            "Description": "A security issue has been identified in a Microsoft software 
            product that could affect your system. You can help protect your system by 
            installing this update from Microsoft. For a complete listing of the issues 
            that are included in this update, see the associated Microsoft Knowledge Base 
            article. After you install this update, you may have to restart your system.",
            "Id": "11adea10-0701-430e-954f-9471595ae246",
            "KbNumber": "KB4074588",
            "Language": "All",
            "MsrcNumber": "",
            "MsrcSeverity": "Critical",
            "Product": "WindowsServer2016",
            "ProductFamily": "Windows",
            "ReleaseDate": 1518548400,
            "Title": "2018-02 Cumulative Update for Windows Server 2016 (1709) for x64-based 
            Systems (KB4074588)",
            "Vendor": "Microsoft"
        },
        {
            "Classification": "SecurityUpdates",
            "ContentUrl": "https://support.microsoft.com/en-us/kb/4074590",
            "Description": "A security issue has been identified in a Microsoft software 
            product that could affect your system. You can help protect your system by 
            installing this update from Microsoft. For a complete listing of the issues that are included in this update, see the associated Microsoft Knowledge Base article. After you install this update, you may have to restart your system.",
            "Id": "f5f58231-ac5d-4640-ab1b-9dc8d857c265",
            "KbNumber": "KB4074590",
            "Language": "All",
            "MsrcNumber": "",
            "MsrcSeverity": "Critical",
            "Product": "WindowsServer2016",
            "ProductFamily": "Windows",
            "ReleaseDate": 1518544805,
            "Title": "2018-02 Cumulative Update for Windows Server 2016 for x64-based 
            Systems (KB4074590)",
            "Vendor": "Microsoft"
        }
      ---output truncated---
```

### Get patch summary states per managed node


The per-node summary gives you the number of patches in each of the following states for each managed node: `NotApplicable`, `Missing`, `Failed`, `InstalledOther`, and `Installed`. 

------
#### [ Linux & macOS ]

```
aws ssm describe-instance-patch-states \
    --instance-ids i-08ee91c0b17045407 i-09a618aec652973a9
```

------
#### [ Windows Server ]

```
aws ssm describe-instance-patch-states ^
    --instance-ids i-08ee91c0b17045407 i-09a618aec652973a9
```

------

The system returns information like the following.

```
{
   "InstancePatchStates":[
      {
            "InstanceId": "i-08ee91c0b17045407",
            "PatchGroup": "",
            "BaselineId": "pb-0c10e65780EXAMPLE",
            "SnapshotId": "6d03d6c5-f79d-41d0-8d0e-00a9aEXAMPLE",
            "InstalledCount": 50,
            "InstalledOtherCount": 353,
            "InstalledPendingRebootCount": 0,
            "InstalledRejectedCount": 0,
            "MissingCount": 0,
            "FailedCount": 0,
            "UnreportedNotApplicableCount": -1,
            "NotApplicableCount": 671,
            "OperationStartTime": "2020-01-24T12:37:56-08:00",
            "OperationEndTime": "2020-01-24T12:37:59-08:00",
            "Operation": "Scan",
            "RebootOption": "NoReboot"
        },
        {
            "InstanceId": "i-09a618aec652973a9",
            "PatchGroup": "",
            "BaselineId": "pb-0c10e65780EXAMPLE",
            "SnapshotId": "c7e0441b-1eae-411b-8aa7-973e6EXAMPLE",
            "InstalledCount": 36,
            "InstalledOtherCount": 396,
            "InstalledPendingRebootCount": 0,
            "InstalledRejectedCount": 0,
            "MissingCount": 3,
            "FailedCount": 0,
            "UnreportedNotApplicableCount": -1,
            "NotApplicableCount": 420,
            "OperationStartTime": "2020-01-24T12:37:34-08:00",
            "OperationEndTime": "2020-01-24T12:37:37-08:00",
            "Operation": "Scan",
            "RebootOption": "NoReboot"
        }
     ---output truncated---
```
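Captured as JSON, this summary makes it straightforward to flag nodes that still need attention. A minimal sketch in Python, using abbreviated fields from the example response above:

```python
import json

# Abbreviated example response from `aws ssm describe-instance-patch-states`.
response = json.loads("""
{"InstancePatchStates": [
    {"InstanceId": "i-08ee91c0b17045407", "MissingCount": 0, "FailedCount": 0},
    {"InstanceId": "i-09a618aec652973a9", "MissingCount": 3, "FailedCount": 0}
]}
""")

# Flag any node that reports missing or failed patches.
needs_attention = [s["InstanceId"] for s in response["InstancePatchStates"]
                   if s["MissingCount"] > 0 or s["FailedCount"] > 0]
print(needs_attention)  # ['i-09a618aec652973a9']
```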

### Get patch compliance details for a managed node


```
aws ssm describe-instance-patches --instance-id i-08ee91c0b17045407
```

The system returns information like the following.

```
{
   "NextToken":"--token string truncated--",
   "Patches":[
      {
            "Title": "bind-libs.x86_64:32:9.8.2-0.68.rc1.60.amzn1",
            "KBId": "bind-libs.x86_64",
            "Classification": "Security",
            "Severity": "Important",
            "State": "Installed",
            "InstalledTime": "2019-08-26T11:05:24-07:00"
        },
        {
            "Title": "bind-utils.x86_64:32:9.8.2-0.68.rc1.60.amzn1",
            "KBId": "bind-utils.x86_64",
            "Classification": "Security",
            "Severity": "Important",
            "State": "Installed",
            "InstalledTime": "2019-08-26T11:05:32-07:00"
        },
        {
            "Title": "dhclient.x86_64:12:4.1.1-53.P1.28.amzn1",
            "KBId": "dhclient.x86_64",
            "Classification": "Security",
            "Severity": "Important",
            "State": "Installed",
            "InstalledTime": "2019-08-26T11:05:31-07:00"
        },
    ---output truncated---
```

### View patching compliance results (AWS CLI)


**To view patch compliance results for a single managed node**

Run the following command in the AWS Command Line Interface (AWS CLI) to view patch compliance results for a single managed node.

```
aws ssm describe-instance-patch-states --instance-id instance-id
```

Replace *instance-id* with the ID of the managed node for which you want to view results, in the format `i-02573cafcfEXAMPLE` or `mi-0282f7c436EXAMPLE`.

The system returns information like the following.

```
{
    "InstancePatchStates": [
        {
            "InstanceId": "i-02573cafcfEXAMPLE",
            "PatchGroup": "mypatchgroup",
            "BaselineId": "pb-0c10e65780EXAMPLE",            
            "SnapshotId": "a3f5ff34-9bc4-4d2c-a665-4d1c1EXAMPLE",
            "CriticalNonCompliantCount": 2,
            "SecurityNonCompliantCount": 2,
            "OtherNonCompliantCount": 1,
            "InstalledCount": 123,
            "InstalledOtherCount": 334,
            "InstalledPendingRebootCount": 0,
            "InstalledRejectedCount": 0,
            "MissingCount": 1,
            "FailedCount": 2,
            "UnreportedNotApplicableCount": 11,
            "NotApplicableCount": 2063,
            "OperationStartTime": "2021-05-03T11:00:56-07:00",
            "OperationEndTime": "2021-05-03T11:01:09-07:00",
            "Operation": "Scan",
            "LastNoRebootInstallOperationTime": "2020-06-14T12:17:41-07:00",
            "RebootOption": "RebootIfNeeded"
        }
    ]
}
```

**To view a patch count summary for all EC2 instances in a Region**

The `describe-instance-patch-states` command returns results only for the managed nodes whose IDs you specify; it doesn't report on all managed nodes in a Region in a single call. However, by combining the command with a custom script, you can generate a more granular, Region-wide report.

For example, if the [jq filter tool](https://stedolan.github.io/jq/download/) is installed on your local machine, you can run the following command to report how many patches in the `InstalledPendingReboot` state each of your EC2 instances in a particular AWS Region has.

```
aws ssm describe-instance-patch-states \
    --instance-ids $(aws ec2 describe-instances --region region | jq '.Reservations[].Instances[] | .InstanceId' | tr '\n|"' ' ') \
    --output text --query 'InstancePatchStates[*].{Instance:InstanceId, InstalledPendingRebootCount:InstalledPendingRebootCount}'
```

*region* represents the identifier for an AWS Region supported by AWS Systems Manager, such as `us-east-2` for the US East (Ohio) Region. For a list of supported *region* values, see the **Region** column in [Systems Manager service endpoints](https://docs.aws.amazon.com/general/latest/gr/ssm.html#ssm_region) in the *Amazon Web Services General Reference*.

For example:

```
aws ssm describe-instance-patch-states \
    --instance-ids $(aws ec2 describe-instances --region us-east-2 | jq '.Reservations[].Instances[] | .InstanceId' | tr '\n|"' ' ') \
    --output text --query 'InstancePatchStates[*].{Instance:InstanceId, InstalledPendingRebootCount:InstalledPendingRebootCount}'
```

The system returns information like the following.

```
1       i-02573cafcfEXAMPLE
0       i-0471e04240EXAMPLE
3       i-07782c72faEXAMPLE
6       i-083b678d37EXAMPLE
0       i-03a530a2d4EXAMPLE
1       i-01f68df0d0EXAMPLE
0       i-0a39c0f214EXAMPLE
7       i-0903a5101eEXAMPLE
7       i-03823c2fedEXAMPLE
```

In addition to `InstalledPendingRebootCount`, the count types you can search for include the following:
+ `CriticalNonCompliantCount`
+ `SecurityNonCompliantCount`
+ `OtherNonCompliantCount`
+ `UnreportedNotApplicableCount`
+ `InstalledPendingRebootCount`
+ `FailedCount`
+ `NotApplicableCount`
+ `InstalledRejectedCount`
+ `InstalledOtherCount`
+ `MissingCount`
+ `InstalledCount`
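If jq isn't available, you can produce the same kind of fleet-wide tally from captured JSON with a short script. The following is a minimal sketch in Python that sums any of the count types listed above across nodes; the field names match the `describe-instance-patch-states` response, but the sample data here is illustrative.

```python
import json

# Illustrative records in the shape of a describe-instance-patch-states response.
states = [
    {"InstanceId": "i-02573cafcfEXAMPLE", "InstalledPendingRebootCount": 1},
    {"InstanceId": "i-0471e04240EXAMPLE", "InstalledPendingRebootCount": 0},
    {"InstanceId": "i-07782c72faEXAMPLE", "InstalledPendingRebootCount": 3},
]

def tally(states, count_type):
    """Sum one count type across all reported nodes."""
    return sum(s.get(count_type, 0) for s in states)

print(tally(states, "InstalledPendingRebootCount"))  # 4
```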

## AWS CLI commands for scanning and patching managed nodes


After running the following commands to scan for patch compliance or install patches, you can use commands in the [AWS CLI commands for viewing patch summaries and details](#patch-details-cli-commands) section to view information about patch status and compliance.

**Topics**
+ [

### Scan managed nodes for patch compliance (AWS CLI)
](#patch-operations-scan)
+ [

### Install patches on managed nodes (AWS CLI)
](#patch-operations-install-cli)

### Scan managed nodes for patch compliance (AWS CLI)


**To scan specific managed nodes for patch compliance**

Run the following command.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name 'AWS-RunPatchBaseline' \
    --targets Key=InstanceIds,Values='i-02573cafcfEXAMPLE,i-0471e04240EXAMPLE' \
    --parameters 'Operation=Scan' \
    --timeout-seconds 600
```

------
#### [ Windows Server ]

```
aws ssm send-command ^
    --document-name "AWS-RunPatchBaseline" ^
    --targets Key=InstanceIds,Values="i-02573cafcfEXAMPLE,i-0471e04240EXAMPLE" ^
    --parameters "Operation=Scan" ^
    --timeout-seconds 600
```

------

The system returns information like the following.

```
{
    "Command": {
        "CommandId": "a04ed06c-8545-40f4-87c2-a0babEXAMPLE",
        "DocumentName": "AWS-RunPatchBaseline",
        "DocumentVersion": "$DEFAULT",
        "Comment": "",
        "ExpiresAfter": 1621974475.267,
        "Parameters": {
            "Operation": [
                "Scan"
            ]
        },
        "InstanceIds": [],
        "Targets": [
            {
                "Key": "InstanceIds",
                "Values": [
                    "i-02573cafcfEXAMPLE",
                    "i-0471e04240EXAMPLE"
                ]
            }
        ],
        "RequestedDateTime": 1621952275.267,
        "Status": "Pending",
        "StatusDetails": "Pending",
        "TimeoutSeconds": 600,

    ---output truncated---

    }
}
```
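The `CommandId` in this response is what you pass to follow-up commands such as `aws ssm list-command-invocations` to track the operation's progress. A minimal sketch in Python for extracting it from a captured response; the JSON literal is abbreviated from the example above.

```python
import json

# Abbreviated example response from `aws ssm send-command --output json`.
response = json.loads("""
{"Command": {"CommandId": "a04ed06c-8545-40f4-87c2-a0babEXAMPLE",
             "DocumentName": "AWS-RunPatchBaseline",
             "Status": "Pending"}}
""")

command_id = response["Command"]["CommandId"]
print(command_id)  # a04ed06c-8545-40f4-87c2-a0babEXAMPLE
```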

**To scan managed nodes for patch compliance by patch group tag**

Run the following command.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name 'AWS-RunPatchBaseline' \
    --targets Key='tag:PatchGroup',Values='Web servers' \
    --parameters 'Operation=Scan' \
    --timeout-seconds 600
```

------
#### [ Windows Server ]

```
aws ssm send-command ^
    --document-name "AWS-RunPatchBaseline" ^
    --targets Key="tag:PatchGroup",Values="Web servers" ^
    --parameters "Operation=Scan" ^
    --timeout-seconds 600
```

------

The system returns information like the following.

```
{
    "Command": {
        "CommandId": "87a448ee-8adc-44e0-b4d1-6b429EXAMPLE",
        "DocumentName": "AWS-RunPatchBaseline",
        "DocumentVersion": "$DEFAULT",
        "Comment": "",
        "ExpiresAfter": 1621974983.128,
        "Parameters": {
            "Operation": [
                "Scan"
            ]
        },
        "InstanceIds": [],
        "Targets": [
            {
                "Key": "tag:PatchGroup",
                "Values": [
                    "Web servers"
                ]
            }
        ],
        "RequestedDateTime": 1621952783.128,
        "Status": "Pending",
        "StatusDetails": "Pending",
        "TimeoutSeconds": 600,

    ---output truncated---

    }
}
```

### Install patches on managed nodes (AWS CLI)


**To install patches on specific managed nodes**

Run the following command. 

**Note**  
The target managed nodes reboot as needed to complete patch installation. For more information, see [SSM Command document for patching: `AWS-RunPatchBaseline`](patch-manager-aws-runpatchbaseline.md).

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name 'AWS-RunPatchBaseline' \
    --targets Key=InstanceIds,Values='i-02573cafcfEXAMPLE,i-0471e04240EXAMPLE' \
    --parameters 'Operation=Install' \
    --timeout-seconds 600
```

------
#### [ Windows Server ]

```
aws ssm send-command ^
    --document-name "AWS-RunPatchBaseline" ^
    --targets Key=InstanceIds,Values="i-02573cafcfEXAMPLE,i-0471e04240EXAMPLE" ^
    --parameters "Operation=Install" ^
    --timeout-seconds 600
```

------

The system returns information like the following.

```
{
    "Command": {
        "CommandId": "5f403234-38c4-439f-a570-93623EXAMPLE",
        "DocumentName": "AWS-RunPatchBaseline",
        "DocumentVersion": "$DEFAULT",
        "Comment": "",
        "ExpiresAfter": 1621975301.791,
        "Parameters": {
            "Operation": [
                "Install"
            ]
        },
        "InstanceIds": [],
        "Targets": [
            {
                "Key": "InstanceIds",
                "Values": [
                    "i-02573cafcfEXAMPLE",
                    "i-0471e04240EXAMPLE"
                ]
            }
        ],
        "RequestedDateTime": 1621953101.791,
        "Status": "Pending",
        "StatusDetails": "Pending",
        "TimeoutSeconds": 600,

    ---output truncated---

    }
}
```

**To install patches on managed nodes in a specific patch group**

Run the following command.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name 'AWS-RunPatchBaseline' \
    --targets Key='tag:PatchGroup',Values='Web servers' \
    --parameters 'Operation=Install' \
    --timeout-seconds 600
```

------
#### [ Windows Server ]

```
aws ssm send-command ^
    --document-name "AWS-RunPatchBaseline" ^
    --targets Key="tag:PatchGroup",Values="Web servers" ^
    --parameters "Operation=Install" ^
    --timeout-seconds 600
```

------

The system returns information like the following.

```
{
    "Command": {
        "CommandId": "fa44b086-7d36-4ad5-ac8d-627ecEXAMPLE",
        "DocumentName": "AWS-RunPatchBaseline",
        "DocumentVersion": "$DEFAULT",
        "Comment": "",
        "ExpiresAfter": 1621975407.865,
        "Parameters": {
            "Operation": [
                "Install"
            ]
        },
        "InstanceIds": [],
        "Targets": [
            {
                "Key": "tag:PatchGroup",
                "Values": [
                    "Web servers"
                ]
            }
        ],
        "RequestedDateTime": 1621953207.865,
        "Status": "Pending",
        "StatusDetails": "Pending",
        "TimeoutSeconds": 600,

    ---output truncated---

    }
}
```

# AWS Systems Manager Patch Manager tutorials
Patch Manager tutorials

The tutorials in this section demonstrate how to use Patch Manager, a tool in AWS Systems Manager, for several patching scenarios.

**Topics**
+ [

# Tutorial: Patching a server in an IPv6 only environment
](patch-manager-server-patching-iPv6-tutorial.md)
+ [

# Tutorial: Create a patch baseline for installing Windows Service Packs using the console
](patch-manager-windows-service-pack-patch-baseline-tutorial.md)
+ [

# Tutorial: Update application dependencies, patch a managed node, and perform an application-specific health check using the console
](aws-runpatchbaselinewithhooks-tutorial.md)
+ [

# Tutorial: Patch a server environment using the AWS CLI
](patch-manager-patch-servers-using-the-aws-cli.md)

# Tutorial: Patching a server in an IPv6 only environment


Patch Manager supports patching managed nodes in IPv6-only environments. By updating the SSM Agent configuration, you can configure patching operations to call only IPv6-compatible (dual-stack) service endpoints.

**To patch a server in an IPv6-only environment**

1. Ensure that SSM Agent version 3.3270.0 or later is installed on the managed node.

1. On the managed node, navigate to the SSM Agent configuration file. You can find the `amazon-ssm-agent.json` file in the following directories:
   + Linux: `/etc/amazon/ssm/`
   + macOS: `/opt/aws/ssm/`
   + Windows Server: `C:\Program Files\Amazon\SSM`

   If `amazon-ssm-agent.json` doesn't exist yet, copy the contents of `amazon-ssm-agent.json.template`, located in the same directory, to a new file named `amazon-ssm-agent.json`.

1. Update the following entry to set the correct Region and set `UseDualStackEndpoint` to `true`:

   ```
   {
    --------
       "Agent": {
           "Region": "region",
           "UseDualStackEndpoint": true
       },
   --------
   }
   ```

1. Restart the SSM Agent service using the appropriate command for your operating system:
   + Linux: `sudo systemctl restart amazon-ssm-agent`
   + Ubuntu Server using Snap: `sudo snap restart amazon-ssm-agent`
   + macOS: `sudo launchctl stop com.amazon.aws.ssm` followed by `sudo launchctl start com.amazon.aws.ssm`
   + Windows Server: `Stop-Service AmazonSSMAgent` followed by `Start-Service AmazonSSMAgent`

   For the full list of commands per operating system, see [Checking SSM Agent status and starting the agent](ssm-agent-status-and-restart.md).

1. Run a patching operation to verify that patching succeeds in your IPv6-only environment. Ensure that the nodes being patched have connectivity to the patch source, and check the Run Command output from the patching operation for warnings about inaccessible repositories. For DNF-based operating systems, you can configure unavailable repositories to be skipped during patching by setting the `skip_if_unavailable` option to `True` in `/etc/dnf/dnf.conf`. DNF-based operating systems include Amazon Linux 2023, Red Hat Enterprise Linux 8 and later, Oracle Linux 8 and later, Rocky Linux, AlmaLinux, and CentOS 8 and later. On Amazon Linux 2023, the `skip_if_unavailable` option is set to `True` by default.
**Note**  
When using the Install Override List or Baseline Override features, ensure that the provided URL is reachable from the node. If the SSM Agent configuration option `UseDualStackEndpoint` is set to `true`, a dual-stack Amazon S3 client is used when an S3 URL is provided.
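Steps 2 and 3 of this procedure can be scripted. The following is a minimal sketch in Python, run on the node with appropriate privileges, that loads the existing `amazon-ssm-agent.json` (falling back to the template file), sets the two values, and writes the configuration back. The path shown is the Linux location; adjust the path and Region for your environment.

```python
import json
from pathlib import Path

def enable_dual_stack(config_path: Path, region: str) -> dict:
    """Set Agent.Region and Agent.UseDualStackEndpoint in the SSM Agent config.

    Falls back to the .template file if amazon-ssm-agent.json doesn't exist yet.
    """
    source = config_path if config_path.exists() else Path(str(config_path) + ".template")
    config = json.loads(source.read_text()) if source.exists() else {}
    agent = config.setdefault("Agent", {})
    agent["Region"] = region
    agent["UseDualStackEndpoint"] = True
    config_path.write_text(json.dumps(config, indent=4))
    return config

# Example (Linux path; adjust for macOS or Windows Server):
# enable_dual_stack(Path("/etc/amazon/ssm/amazon-ssm-agent.json"), "us-east-2")
```

After writing the file, restart the SSM Agent service as described in the next step so the change takes effect.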

# Tutorial: Create a patch baseline for installing Windows Service Packs using the console


When you create a custom patch baseline, you can specify that all, some, or only one type of supported patch is installed.

In patch baselines for Windows, you can select `ServicePacks` as the only **Classification** option to limit patching to Service Packs only. Patch Manager, a tool in AWS Systems Manager, can install Service Packs automatically, provided that the update is available in Windows Update or Windows Server Update Services (WSUS).

You can configure a patch baseline to control whether Service Packs for all Windows versions are installed, or just those for specific versions, such as Windows 7 or Windows Server 2016. 

Use the following procedure to create a custom patch baseline to be used exclusively for installing all Service Packs on your Windows managed nodes. 

**To create a patch baseline for installing Windows Service Packs (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose the **Patch baselines** tab, and then choose **Create patch baseline**. 

1. For **Name**, enter a name for your new patch baseline, for example, `MyWindowsServicePackPatchBaseline`.

1. (Optional) For **Description**, enter a description for this patch baseline.

1. For **Operating system**, choose `Windows`.

1. If you want to begin using this patch baseline as the default for Windows as soon as you create it, select **Set this patch baseline as the default patch baseline for Windows Server instances**.
**Note**  
This option is available only if you first accessed Patch Manager before the [patch policies](patch-manager-policies.md) release on December 22, 2022.  
For information about setting an existing patch baseline as the default, see [Setting an existing patch baseline as the default](patch-manager-default-patch-baseline.md).

1. In the **Approval rules for operating systems** section, use the fields to create one or more auto-approval rules.
   + **Products**: The operating system versions that the approval rule applies to, such as `WindowsServer2012`. You can choose one, more than one, or all supported versions of Windows. The default selection is `All`.
   + **Classification**: Choose `ServicePacks`. 
   + **Severity**: The severity value of patches the rule is to apply to. To ensure that all Service Packs are included by the rule, choose `All`. 
   + **Auto-approval**: The method for selecting patches for automatic approval.
     + **Approve patches after a specified number of days**: The number of days for Patch Manager to wait after a patch is released or updated before a patch is automatically approved. You can enter any integer from zero (0) to 360. For most scenarios, we recommend waiting no more than 100 days.
     + **Approve patches released up to a specific date**: The patch release date for which Patch Manager automatically applies all patches released or updated on or before that date. For example, if you specify July 7, 2023, no patches released or last updated on or after July 8, 2023, are installed automatically.
   + (Optional) **Compliance reporting**: The severity level you want to assign to Service Packs approved by the baseline, such as `High`.
**Note**  
If you specify a compliance reporting level and the patch state of any approved Service Pack is reported as `Missing`, then the patch baseline's overall reported compliance severity is the severity level you specified.

1. (Optional) For **Manage tags**, apply one or more tag key name/value pairs to the patch baseline.

   Tags are optional metadata that you assign to a resource. Tags allow you to categorize a resource in different ways, such as by purpose, owner, or environment. For this patch baseline dedicated to updating Service Packs, you could specify key-value pairs such as the following:
   + `Key=OS,Value=Windows`
   + `Key=Classification,Value=ServicePacks`

1. Choose **Create patch baseline**.

# Tutorial: Update application dependencies, patch a managed node, and perform an application-specific health check using the console


In many cases, a managed node must be rebooted after it has been patched with the latest software update. However, rebooting a node in production without safeguards in place can cause several problems, such as invoking alarms, recording incorrect metric data, and interrupting data synchronizations.

This tutorial demonstrates how to avoid problems like these by using the AWS Systems Manager document (SSM document) `AWS-RunPatchBaselineWithHooks` to achieve a complex, multi-step patching operation that accomplishes the following:

1. Prevent new connections to the application

1. Install operating system updates

1. Update the package dependencies of the application

1. Restart the system

1. Perform an application-specific health check

For this example, we have set up our infrastructure this way:
+ The virtual machines targeted are registered as managed nodes with Systems Manager.
+ `iptables` is used as the local firewall.
+ The application hosted on the managed nodes is running on port 443.
+ The application hosted on the managed nodes is a `nodeJS` application.
+ The application hosted on the managed nodes is managed by the pm2 process manager.
+ The application already has a specified health check endpoint.
+ The application’s health check endpoint requires no end user authentication. The endpoint allows for a health check that meets the organization’s requirements in establishing availability. (In your environments, it might be enough to simply ascertain that the `nodeJS` application is running and able to listen for requests. In other cases, you might want to also verify that a connection to the caching layer or database layer has already been established.)

The examples in this tutorial are for demonstration purposes only and aren't meant to be implemented as-is in production environments. Also, keep in mind that the lifecycle hooks feature of Patch Manager, a tool in Systems Manager, used with the `AWS-RunPatchBaselineWithHooks` document, supports numerous other scenarios. Here are several examples.
+ Stop a metrics reporting agent before patching and restarting it after the managed node reboots.
+ Detach the managed node from a CRM or PCS cluster before patching and reattach after the node reboots.
+ Update third-party software (for example, Java, Tomcat, Adobe applications, and so on) on Windows Server machines after operating system (OS) updates are applied, but before the managed node reboots.

**To update application dependencies, patch a managed node, and perform an application-specific health check**

1. Create an SSM document for your preinstallation script with the following contents and name it `NodeJSAppPrePatch`. Replace *your\_application* with the name of your application.

   This script immediately blocks new incoming requests and provides five seconds for already active ones to complete before beginning the patching operation. For the `sleep` option, specify a number of seconds greater than it usually takes for incoming requests to complete.

   ```
   # exit on error
   set -e
   # set up rule to block incoming traffic
   iptables -I INPUT -j DROP -p tcp --syn --destination-port 443 || exit 1
   # wait for current connections to end. Set timeout appropriate to your application's latency
   sleep 5 
   # Stop your application
   pm2 stop your_application
   ```

   For information about creating SSM documents, see [Creating SSM document content](documents-creating-content.md).
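   If you prefer the command line, a hook document like this one can also be registered with the AWS CLI. A hook must be a Command document, so the script is wrapped in an `aws:runShellScript` step. The following is a sketch only; the local file name is an assumption, and the registration command is shown commented out because it requires AWS credentials.

   ```
   # Write the preinstall script as SSM Command-document YAML (schema 2.2)
   printf '%s\n' \
       'schemaVersion: "2.2"' \
       'description: Pre-patch hook that blocks traffic and stops the app' \
       'mainSteps:' \
       '  - action: aws:runShellScript' \
       '    name: prePatch' \
       '    inputs:' \
       '      runCommand:' \
       '        - set -e' \
       '        - iptables -I INPUT -j DROP -p tcp --syn --destination-port 443' \
       '        - sleep 5' \
       '        - pm2 stop your_application' \
       > NodeJSAppPrePatch.yaml
   # Register the document (requires AWS credentials and ssm:CreateDocument permission):
   # aws ssm create-document --name "NodeJSAppPrePatch" --document-type "Command" \
   #     --document-format YAML --content file://NodeJSAppPrePatch.yaml
   ```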

1. Create another SSM document with the following content for your postinstall script, which updates your application dependencies. Name it `NodeJSAppPostPatch`. Replace */your/application/path* with the path to your application.

   ```
   cd /your/application/path
   npm update 
   # you can use npm-check-updates if you want to upgrade major versions
   ```

1. Create another SSM document with the following content for your `onExit` script to bring your application back up and perform a health check. Name this SSM document `NodeJSAppOnExitPatch`. Replace *your\_application* with the name of your application.

   ```
   # exit on error
   set -e
   # restart nodeJs application
   pm2 start your_application
   # sleep while your application starts and to allow for a crash
   sleep 10
   # check with pm2 to see if your application is running
   pm2 pid your_application
   # re-enable incoming connections
   iptables -D INPUT -j DROP -p tcp --syn --destination-port 443
   # perform health check
   /usr/bin/curl -m 10 -vk -A "" https://localhost:443/health-check || exit 1
   ```

1. Create an association in State Manager, a tool in AWS Systems Manager, to issue the operation by performing the following steps:

   1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

   1. In the navigation pane, choose **State Manager**, and then choose **Create association**.

   1. For **Name**, provide a name to help identify the purpose of the association.

   1. In the **Document** list, choose `AWS-RunPatchBaselineWithHooks`.

   1. For **Operation**, choose **Install**.

   1. (Optional) For **Snapshot Id**, provide a GUID that you generate to help speed up the operation and ensure consistency. The GUID value can be as simple as `00000000-0000-0000-0000-111122223333`.

   1. For **Pre Install Hook Doc Name**, enter `NodeJSAppPrePatch`. 

   1. For **Post Install Hook Doc Name**, enter `NodeJSAppPostPatch`. 

   1. For **On Exit Hook Doc Name**, enter `NodeJSAppOnExitPatch`.

1. For **Targets**, identify your managed nodes by specifying tags, choosing nodes manually, choosing a resource group, or choosing all managed nodes.

1. For **Specify schedule**, specify how often to run the association. For managed node patching, once per week is a common cadence.

1. In the **Rate control** section, choose options to control how the association runs on multiple managed nodes. Ensure that only a portion of managed nodes are updated at a time. Otherwise, all or most of your fleet could be taken offline at once. For more information about using rate controls, see [Understanding targets and rate controls in State Manager associations](systems-manager-state-manager-targets-and-rate-controls.md).

1. (Optional) For **Output options**, to save the command output to a file, select the **Enable writing output to S3** box. Enter the bucket and prefix (folder) names in the boxes.
**Note**  
The S3 permissions that grant the ability to write the data to an S3 bucket are those of the instance profile assigned to the managed node, not those of the IAM user performing this task. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md) or [Create an IAM service role for a hybrid environment](hybrid-multicloud-service-role.md). In addition, if the specified S3 bucket is in a different AWS account, verify that the instance profile or IAM service role associated with the managed node has the necessary permissions to write to that bucket.

1. Choose **Create Association**.
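If you prefer to script this setup, the association can also be created with the AWS CLI. The following is a sketch: the hook document names come from this tutorial, while the target tag, association name, and weekly schedule are assumptions you would replace. The `create-association` call itself is shown commented out because it requires AWS credentials.

```
# Parameters passed to the AWS-RunPatchBaselineWithHooks document
params='{"Operation":["Install"],"PreInstallHookDocName":["NodeJSAppPrePatch"],"PostInstallHookDocName":["NodeJSAppPostPatch"],"OnExitHookDocName":["NodeJSAppOnExitPatch"]}'
# Sanity-check the parameter JSON before calling the CLI
printf '%s' "$params" | python3 -m json.tool > /dev/null && echo "parameters OK"
# aws ssm create-association \
#     --name "AWS-RunPatchBaselineWithHooks" \
#     --association-name "NodeJSAppPatchWithHooks" \
#     --targets "Key=tag:PatchGroup,Values=NodeJSApp" \
#     --schedule-expression "rate(7 days)" \
#     --parameters "$params"
```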

# Tutorial: Patch a server environment using the AWS CLI


The following procedure describes how to patch a server environment by using a custom patch baseline, patch groups, and a maintenance window.

**Before you begin**
+ Install or update the SSM Agent on your managed nodes. To patch Linux managed nodes, your nodes must be running SSM Agent version 2.0.834.0 or later. For more information, see [Updating the SSM Agent using Run Command](run-command-tutorial-update-software.md#rc-console-agentexample).
+ Configure roles and permissions for Maintenance Windows, a tool in AWS Systems Manager. For more information, see [Setting up Maintenance Windows](setting-up-maintenance-windows.md).
+ Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

  For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

**To configure Patch Manager and patch managed nodes (command line)**

1. Run the following command to create a patch baseline for Windows named `Production-Baseline`. This patch baseline approves patches for a production environment 7 days after they're released or last updated. The patch baseline is also tagged to indicate that it's for a production environment.
**Note**  
The `OperatingSystem` parameter and `PatchFilters` vary depending on the operating system of the target managed nodes the patch baseline applies to. For more information, see [OperatingSystem](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-OperatingSystem) and [PatchFilter](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PatchFilter.html).

------
#### [ Linux & macOS ]

   ```
   aws ssm create-patch-baseline \
       --name "Production-Baseline" \
       --operating-system "WINDOWS" \
       --tags "Key=Environment,Value=Production" \
       --approval-rules "PatchRules=[{PatchFilterGroup={PatchFilters=[{Key=MSRC_SEVERITY,Values=[Critical,Important]},{Key=CLASSIFICATION,Values=[SecurityUpdates,Updates,ServicePacks,UpdateRollups,CriticalUpdates]}]},ApproveAfterDays=7}]" \
       --description "Baseline containing all updates approved for production systems"
   ```

------
#### [ Windows Server ]

   ```
   aws ssm create-patch-baseline ^
       --name "Production-Baseline" ^
       --operating-system "WINDOWS" ^
       --tags "Key=Environment,Value=Production" ^
       --approval-rules "PatchRules=[{PatchFilterGroup={PatchFilters=[{Key=MSRC_SEVERITY,Values=[Critical,Important]},{Key=CLASSIFICATION,Values=[SecurityUpdates,Updates,ServicePacks,UpdateRollups,CriticalUpdates]}]},ApproveAfterDays=7}]" ^
       --description "Baseline containing all updates approved for production systems"
   ```

------

   The system returns information like the following.

   ```
   {
      "BaselineId":"pb-0c10e65780EXAMPLE"
   }
   ```
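   To avoid pasting the baseline ID into each of the following commands by hand, you can capture it in a shell variable. This sketch parses the example response shown above; in practice, you could instead add `--query "BaselineId" --output text` to the `create-patch-baseline` command.

   ```
   # Example response stands in for real CLI output here
   response='{"BaselineId":"pb-0c10e65780EXAMPLE"}'
   # Extract the BaselineId value with sed
   BASELINE_ID=$(printf '%s' "$response" | sed -n 's/.*"BaselineId" *: *"\([^"]*\)".*/\1/p')
   echo "$BASELINE_ID"
   ```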

1. Run the following commands to register the `Production-Baseline` patch baseline for two patch groups, named `Database Servers` and `Front-End Servers`.

------
#### [ Linux & macOS ]

   ```
   aws ssm register-patch-baseline-for-patch-group \
       --baseline-id pb-0c10e65780EXAMPLE \
       --patch-group "Database Servers"
   ```

------
#### [ Windows Server ]

   ```
   aws ssm register-patch-baseline-for-patch-group ^
       --baseline-id pb-0c10e65780EXAMPLE ^
       --patch-group "Database Servers"
   ```

------

   The system returns information like the following.

   ```
   {
      "PatchGroup":"Database Servers",
      "BaselineId":"pb-0c10e65780EXAMPLE"
   }
   ```

------
#### [ Linux & macOS ]

   ```
   aws ssm register-patch-baseline-for-patch-group \
       --baseline-id pb-0c10e65780EXAMPLE \
       --patch-group "Front-End Servers"
   ```

------
#### [ Windows Server ]

   ```
   aws ssm register-patch-baseline-for-patch-group ^
       --baseline-id pb-0c10e65780EXAMPLE ^
       --patch-group "Front-End Servers"
   ```

------

   The system returns information like the following.

   ```
   {
      "PatchGroup":"Front-End Servers",
      "BaselineId":"pb-0c10e65780EXAMPLE"
   }
   ```

1. Run the following commands to create two maintenance windows for the production servers. The first window runs every Tuesday at 10 PM. The second window runs every Saturday at 10 PM. In addition, the maintenance windows are tagged to indicate that they're for a production environment.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-maintenance-window \
       --name "Production-Tuesdays" \
       --tags "Key=Environment,Value=Production" \
       --schedule "cron(0 0 22 ? * TUE *)" \
       --duration 1 \
       --cutoff 0 \
       --no-allow-unassociated-targets
   ```

------
#### [ Windows Server ]

   ```
   aws ssm create-maintenance-window ^
       --name "Production-Tuesdays" ^
       --tags "Key=Environment,Value=Production" ^
       --schedule "cron(0 0 22 ? * TUE *)" ^
       --duration 1 ^
       --cutoff 0 ^
       --no-allow-unassociated-targets
   ```

------

   The system returns information like the following.

   ```
   {
      "WindowId":"mw-0c50858d01EXAMPLE"
   }
   ```

------
#### [ Linux & macOS ]

   ```
   aws ssm create-maintenance-window \
       --name "Production-Saturdays" \
       --tags "Key=Environment,Value=Production" \
       --schedule "cron(0 0 22 ? * SAT *)" \
       --duration 2 \
       --cutoff 0 \
       --no-allow-unassociated-targets
   ```

------
#### [ Windows Server ]

   ```
   aws ssm create-maintenance-window ^
       --name "Production-Saturdays" ^
       --tags "Key=Environment,Value=Production" ^
       --schedule "cron(0 0 22 ? * SAT *)" ^
       --duration 2 ^
       --cutoff 0 ^
       --no-allow-unassociated-targets
   ```

------

   The system returns information like the following.

   ```
   {
      "WindowId":"mw-9a8b7c6d5eEXAMPLE"
   }
   ```

1. Run the following commands to register the `Database Servers` and `Front-End Servers` patch groups with their respective maintenance windows.

------
#### [ Linux & macOS ]

   ```
   aws ssm register-target-with-maintenance-window \
       --window-id mw-0c50858d01EXAMPLE \
       --targets "Key=tag:PatchGroup,Values=Database Servers" \
       --owner-information "Database Servers" \
       --resource-type "INSTANCE"
   ```

------
#### [ Windows Server ]

   ```
   aws ssm register-target-with-maintenance-window ^
       --window-id mw-0c50858d01EXAMPLE ^
       --targets "Key=tag:PatchGroup,Values=Database Servers" ^
       --owner-information "Database Servers" ^
       --resource-type "INSTANCE"
   ```

------

   The system returns information like the following.

   ```
   {
      "WindowTargetId":"e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE"
   }
   ```

------
#### [ Linux & macOS ]

   ```
   aws ssm register-target-with-maintenance-window \
   --window-id mw-9a8b7c6d5eEXAMPLE \
   --targets "Key=tag:PatchGroup,Values=Front-End Servers" \
   --owner-information "Front-End Servers" \
   --resource-type "INSTANCE"
   ```

------
#### [ Windows Server ]

   ```
   aws ssm register-target-with-maintenance-window ^
       --window-id mw-9a8b7c6d5eEXAMPLE ^
       --targets "Key=tag:PatchGroup,Values=Front-End Servers" ^
       --owner-information "Front-End Servers" ^
       --resource-type "INSTANCE"
   ```

------

   The system returns information like the following.

   ```
   {
      "WindowTargetId":"faa01c41-1d57-496c-ba77-ff9caEXAMPLE"
   }
   ```

1. Run the following commands to register a patch task that installs missing updates on the nodes in the `Database Servers` and `Front-End Servers` patch groups during their respective maintenance windows.

------
#### [ Linux & macOS ]

   ```
   aws ssm register-task-with-maintenance-window \
       --window-id mw-0c50858d01EXAMPLE \
       --targets "Key=WindowTargetIds,Values=e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE" \
       --task-arn "AWS-RunPatchBaseline" \
       --service-role-arn "arn:aws:iam::123456789012:role/MW-Role" \
       --task-type "RUN_COMMAND" \
       --max-concurrency 2 \
       --max-errors 1 \
       --priority 1 \
       --task-invocation-parameters "RunCommand={Parameters={Operation=Install}}"
   ```

------
#### [ Windows Server ]

   ```
   aws ssm register-task-with-maintenance-window ^
       --window-id mw-0c50858d01EXAMPLE ^
       --targets "Key=WindowTargetIds,Values=e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE" ^
       --task-arn "AWS-RunPatchBaseline" ^
       --service-role-arn "arn:aws:iam::123456789012:role/MW-Role" ^
       --task-type "RUN_COMMAND" ^
       --max-concurrency 2 ^
       --max-errors 1 ^
       --priority 1 ^
       --task-invocation-parameters "RunCommand={Parameters={Operation=Install}}"
   ```

------

   The system returns information like the following.

   ```
   {
      "WindowTaskId":"4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE"
   }
   ```

------
#### [ Linux & macOS ]

   ```
   aws ssm register-task-with-maintenance-window \
       --window-id mw-9a8b7c6d5eEXAMPLE \
       --targets "Key=WindowTargetIds,Values=faa01c41-1d57-496c-ba77-ff9caEXAMPLE" \
       --task-arn "AWS-RunPatchBaseline" \
       --service-role-arn "arn:aws:iam::123456789012:role/MW-Role" \
       --task-type "RUN_COMMAND" \
       --max-concurrency 2 \
       --max-errors 1 \
       --priority 1 \
       --task-invocation-parameters "RunCommand={Parameters={Operation=Install}}"
   ```

------
#### [ Windows Server ]

   ```
   aws ssm register-task-with-maintenance-window ^
       --window-id mw-9a8b7c6d5eEXAMPLE ^
       --targets "Key=WindowTargetIds,Values=faa01c41-1d57-496c-ba77-ff9caEXAMPLE" ^
       --task-arn "AWS-RunPatchBaseline" ^
       --service-role-arn "arn:aws:iam::123456789012:role/MW-Role" ^
       --task-type "RUN_COMMAND" ^
       --max-concurrency 2 ^
       --max-errors 1 ^
       --priority 1 ^
       --task-invocation-parameters "RunCommand={Parameters={Operation=Install}}"
   ```

------

   The system returns information like the following.

   ```
   {
      "WindowTaskId":"8a5c4629-31b0-4edd-8aea-33698EXAMPLE"
   }
   ```

1. Run the following command to get the high-level patch compliance summary for a patch group. The high-level patch compliance summary includes the number of managed nodes with patches in the respective patch states.
**Note**  
Expect to see zeroes for the numbers of managed nodes in the summary until the patch task runs during the first maintenance window.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-patch-group-state \
       --patch-group "Database Servers"
   ```

------
#### [ Windows Server ]

   ```
   aws ssm describe-patch-group-state ^
       --patch-group "Database Servers"
   ```

------

   The system returns information like the following.

   ```
   {
      "Instances": number,
      "InstancesWithFailedPatches": number,
      "InstancesWithInstalledOtherPatches": number,
      "InstancesWithInstalledPatches": number,
      "InstancesWithInstalledPendingRebootPatches": number,
      "InstancesWithInstalledRejectedPatches": number,
      "InstancesWithMissingPatches": number,
      "InstancesWithNotApplicablePatches": number,
      "InstancesWithUnreportedNotApplicablePatches": number
   }
   ```
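   As a sketch of how this summary might feed a simple compliance alert, the following parses a sample response (stand-in values, not real CLI output) with `python3` to avoid a `jq` dependency.

   ```
   # Sample of the describe-patch-group-state response, abbreviated
   summary='{"Instances": 10, "InstancesWithFailedPatches": 0, "InstancesWithMissingPatches": 2}'
   # Pull out the count of nodes with missing patches
   missing=$(printf '%s' "$summary" | python3 -c 'import json,sys; print(json.load(sys.stdin)["InstancesWithMissingPatches"])')
   # Alert if any node is missing patches
   [ "$missing" -gt 0 ] && echo "ALERT: $missing node(s) missing patches"
   ```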

1. Run the following command to get the patch summary states for each managed node in a patch group. The per-node summary reports the number of patches in each patch state for every managed node in the patch group.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-instance-patch-states-for-patch-group \
       --patch-group "Database Servers"
   ```

------
#### [ Windows Server ]

   ```
   aws ssm describe-instance-patch-states-for-patch-group ^
       --patch-group "Database Servers"
   ```

------

   The system returns information like the following.

   ```
   {
      "InstancePatchStates": [ 
         { 
            "BaselineId": "string",
            "FailedCount": number,
            "InstalledCount": number,
            "InstalledOtherCount": number,
            "InstalledPendingRebootCount": number,
            "InstalledRejectedCount": number,
            "InstallOverrideList": "string",
            "InstanceId": "string",
            "LastNoRebootInstallOperationTime": number,
            "MissingCount": number,
            "NotApplicableCount": number,
            "Operation": "string",
            "OperationEndTime": number,
            "OperationStartTime": number,
            "OwnerInformation": "string",
            "PatchGroup": "string",
            "RebootOption": "string",
            "SnapshotId": "string",
            "UnreportedNotApplicableCount": number
         }
      ]
   }
   ```

For examples of other AWS CLI commands you can use for your Patch Manager configuration tasks, see [Working with Patch Manager resources using the AWS CLI](patch-manager-cli-commands.md).

# Troubleshooting Patch Manager


Use the following information to help you troubleshoot problems with Patch Manager, a tool in AWS Systems Manager.

**Topics**
+ [

## Issue: "Invoke-PatchBaselineOperation : Access Denied" error or "Unable to download file from S3" error for `baseline_overrides.json`
](#patch-manager-troubleshooting-patch-policy-baseline-overrides)
+ [

## Issue: Patching fails without an apparent cause or error message
](#race-condition-conflict)
+ [

## Issue: Unexpected patch compliance results
](#patch-manager-troubleshooting-compliance)
+ [

## Errors when running `AWS-RunPatchBaseline` on Linux
](#patch-manager-troubleshooting-linux)
+ [

## Errors when running `AWS-RunPatchBaseline` on Windows Server
](#patch-manager-troubleshooting-windows)
+ [

## Errors when running `AWS-RunPatchBaseline` on macOS
](#patch-manager-troubleshooting-macos)
+ [

## Using AWS Support Automation runbooks
](#patch-manager-troubleshooting-using-support-runbooks)
+ [

## Contacting AWS Support
](#patch-manager-troubleshooting-contact-support)

## Issue: "Invoke-PatchBaselineOperation : Access Denied" error or "Unable to download file from S3" error for `baseline_overrides.json`


**Problem**: When the patching operations specified by your patch policy run, you receive an error similar to the following example. 

------
#### [ Example error on Windows Server ]

```
----------ERROR-------
Invoke-PatchBaselineOperation : Access Denied
At C:\ProgramData\Amazon\SSM\InstanceData\i-02573cafcfEXAMPLE\document\orchestr
ation\792dd5bd-2ad3-4f1e-931d-abEXAMPLE\PatchWindows\_script.ps1:219 char:13
+ $response = Invoke-PatchBaselineOperation -Operation Install -Snapsho ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : OperationStopped: (Amazon.Patch.Ba...UpdateOpera
tion:InstallWindowsUpdateOperation) [Invoke-PatchBaselineOperation], Amazo
nS3Exception
+ FullyQualifiedErrorId : PatchBaselineOperations,Amazon.Patch.Baseline.Op
erations.PowerShellCmdlets.InvokePatchBaselineOperation
failed to run commands: exit status 0xffffffff
```

------
#### [ Example error on Linux ]

```
[INFO]: Downloading Baseline Override from s3://aws-quicksetup-patchpolicy-123456789012-abcde/baseline_overrides.json
[ERROR]: Unable to download file from S3: s3://aws-quicksetup-patchpolicy-123456789012-abcde/baseline_overrides.json.
[ERROR]: Error loading entrance module.
```

------

**Cause**: You created a patch policy in Quick Setup, and some of your managed nodes already had an instance profile attached (for EC2 instances) or a service role attached (for non-EC2 machines). 

However, as shown in the following image, you didn't select the **Add required IAM policies to existing instance profiles attached to your instances** check box.

![Add required IAM policies to existing instance profiles option in Quick Setup](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/QS-instance-profile-option.png)


When you create a patch policy, an Amazon S3 bucket is also created to store the policy's configuration `baseline_overrides.json` file. If you don't select the **Add required IAM policies to existing instance profiles attached to your instances** check box when creating the policy, the IAM policies and resource tags that are needed to access `baseline_overrides.json` in the S3 bucket are not automatically added to your existing IAM instance profiles and service roles.

**Solution 1**: Delete the existing patch policy configuration, then create a replacement, making sure to select the **Add required IAM policies to existing instance profiles attached to your instances** check box. This selection applies the IAM policies created by this Quick Setup configuration to nodes that already have an instance profile or service role attached. (By default, Quick Setup adds the required policies to instances and nodes that do *not* already have instance profiles or service roles.) For more information, see [Automate organization-wide patching using a Quick Setup patch policy](https://docs.aws.amazon.com/systems-manager/latest/userguide/quick-setup-patch-manager.html). 

**Solution 2**: Manually add the required permissions and tags to each IAM instance profile and IAM service role that you use with Quick Setup. For instructions, see [Permissions for the patch policy S3 bucket](quick-setup-patch-manager.md#patch-policy-s3-bucket-permissions).

## Issue: Patching fails without an apparent cause or error message


**Problem**: A patching operation fails without returning an error message.

**Possible cause**: When patching managed nodes, the document execution may be interrupted and marked as failed even though patches were successfully installed. This can occur if the system initiates an unexpected reboot during the patching operation (for example, to apply updates to firmware or features like SecureBoot). The SSM Agent cannot persist and resume the document execution state across external reboots, resulting in the execution being reported as failed. This can occur with the `AWS-RunPatchBaseline`, `AWS-RunPatchBaselineAssociation`, `AWS-RunPatchBaselineWithHooks`, and `AWS-InstallWindowsUpdates` documents.

**Solution**: To verify patch installation status after a failed execution, run a `Scan` patching operation, and then check the patch compliance data in Patch Manager to assess the current compliance state.

If you determine that external reboots weren't the cause of the failure in this scenario, we recommend contacting [AWS Support](#patch-manager-troubleshooting-contact-support).

## Issue: Unexpected patch compliance results


**Problem**: When you review the patch compliance details generated after a `Scan` operation, the results don't reflect the rules set up in your patch baseline. For example, an exception you added to the **Rejected patches** list in a patch baseline is reported as `Missing`. Or, patches classified as `Important` are reported as missing even though your patch baseline specifies `Critical` patches only.

**Cause**: Patch Manager currently supports multiple methods of running `Scan` operations:
+ A patch policy configured in Quick Setup
+ A Host Management option configured in Quick Setup
+ A maintenance window to run a patch `Scan` or `Install` task
+ An on-demand **Patch now** operation

When a `Scan` operation runs, it overwrites the compliance details from the most recent scan. If you have more than one method set up to run `Scan` operations, and those methods use different patch baselines with different rules, each produces different patch compliance results.

**Solution**: To avoid unexpected patch compliance results, we recommend using only one method at a time for running the Patch Manager `Scan` operation. For more information, see [Identifying the execution that created patch compliance data](patch-manager-compliance-data-overwrites.md).

## Errors when running `AWS-RunPatchBaseline` on Linux


**Topics**
+ [

### Issue: 'No such file or directory' error
](#patch-manager-troubleshooting-linux-1)
+ [

### Issue: 'another process has acquired yum lock' error
](#patch-manager-troubleshooting-linux-2)
+ [

### Issue: 'Permission denied / failed to run commands' error
](#patch-manager-troubleshooting-linux-3)
+ [

### Issue: 'Unable to download payload' error
](#patch-manager-troubleshooting-linux-4)
+ [

### Issue: 'unsupported package manager and python version combination' error
](#patch-manager-troubleshooting-linux-5)
+ [

### Issue: Patch Manager isn't applying rules specified to exclude certain packages
](#patch-manager-troubleshooting-linux-6)
+ [

### Issue: Patching fails and Patch Manager reports that the Server Name Indication extension to TLS is not available
](#patch-manager-troubleshooting-linux-7)
+ [

### Issue: Patch Manager reports 'No more mirrors to try'
](#patch-manager-troubleshooting-linux-8)
+ [

### Issue: Patching fails with 'Error code returned from curl is 23'
](#patch-manager-troubleshooting-linux-9)
+ [

### Issue: Patching fails with ‘Error unpacking rpm package…’ message
](#error-unpacking-rpm)
+ [

### Issue: Patching fails with 'Encounter service side error when uploading the inventory'
](#inventory-upload-error)
+ [

### Issue: Patching fails with ‘Errors were encountered while downloading packages’ message
](#errors-while-downloading)
+ [

### Issue: Patching fails with an out of memory (OOM) error
](#patch-manager-troubleshooting-linux-oom)
+ [

### Issue: Patching fails with a message that 'The following signatures couldn't be verified because the public key is not available'
](#public-key-unavailable)
+ [

### Issue: Patching fails with a 'NoMoreMirrorsRepoError' message
](#no-more-mirrors-repo-error)
+ [

### Issue: Patching fails with an 'Unable to download payload' message
](#payload-download-error)
+ [

### Issue: Patching fails with a message 'install errors: dpkg: error: dpkg frontend is locked by another process'
](#dpkg-frontend-locked)
+ [

### Issue: Patching on Ubuntu Server fails with a 'dpkg was interrupted' error
](#dpkg-interrupted)
+ [

### Issue: The package manager utility can't resolve a package dependency
](#unresolved-dependency)
+ [

### Issue: Zypper package lock dependency failures on SLES managed nodes
](#patch-manager-troubleshooting-linux-zypper-locks)
+ [

### Issue: Cannot acquire lock. Another patching operation is in progress.
](#patch-manager-troubleshooting-linux-concurrent-lock)

### Issue: 'No such file or directory' error


**Problem**: When you run `AWS-RunPatchBaseline`, patching fails with one of the following errors.

```
IOError: [Errno 2] No such file or directory: 'patch-baseline-operations-X.XX.tar.gz'
```

```
Unable to extract tar file: /var/log/amazon/ssm/patch-baseline-operations/patch-baseline-operations-1.75.tar.gz.failed to run commands: exit status 155
```

```
Unable to load and extract the content of payload, abort.failed to run commands: exit status 152
```

**Cause 1**: Two commands to run `AWS-RunPatchBaseline` were running at the same time on the same managed node. This creates a race condition that results in the temporary file `patch-baseline-operations*` not being created or accessed properly.

**Cause 2**: Insufficient storage space remains under the `/var` directory. 

**Solution 1**: Ensure that no maintenance window has two or more Run Command tasks that run `AWS-RunPatchBaseline` with the same Priority level and that run on the same target IDs. If this is the case, reorder the priority. Run Command is a tool in AWS Systems Manager.

**Solution 2**: Ensure that only one maintenance window at a time is running Run Command tasks that use `AWS-RunPatchBaseline` on the same targets and on the same schedule. If this is the case, change the schedule.

**Solution 3**: Ensure that only one State Manager association is running `AWS-RunPatchBaseline` on the same schedule and targeting the same managed nodes. State Manager is a tool in AWS Systems Manager.

**Solution 4**: Free up sufficient storage space under the `/var` directory for the update packages.
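For Solution 4, a quick preflight like the following can confirm how much space is available under `/var` before patching. This is a sketch: the 1 GB threshold is an arbitrary example, not a documented requirement, and `--output` requires GNU coreutils `df`.

```
# Available space (in KB) on the filesystem holding /var
avail_kb=$(df --output=avail /var | tail -n 1)
# Warn if less than 1 GB (1048576 KB) is free
if [ "$avail_kb" -lt 1048576 ]; then
    echo "Warning: less than 1 GB free under /var" >&2
else
    echo "/var has sufficient free space"
fi
```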

### Issue: 'another process has acquired yum lock' error


**Problem**: When you run `AWS-RunPatchBaseline`, patching fails with the following error.

```
12/20/2019 21:41:48 root [INFO]: another process has acquired yum lock, waiting 2 s and retry.
```

**Cause**: The `AWS-RunPatchBaseline` document has started running on a managed node where another operation is already running it and has acquired a lock on the `yum` package manager.

**Solution**: Ensure that no State Manager association, maintenance window tasks, or other configurations that run `AWS-RunPatchBaseline` on a schedule are targeting the same managed node around the same time.
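As a sketch, you can check for a held `yum` lock before starting a manual patching operation. The lock file path shown is `yum`'s default; adjust it if your distribution differs.

```
# If yum's PID file exists and that process is alive, the lock is held
if [ -f /var/run/yum.pid ] && kill -0 "$(cat /var/run/yum.pid)" 2>/dev/null; then
    lock_status="yum is locked by PID $(cat /var/run/yum.pid)"
else
    lock_status="no active yum lock"
fi
echo "$lock_status"
```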

### Issue: 'Permission denied / failed to run commands' error


**Problem**: When you run `AWS-RunPatchBaseline`, patching fails with the following error.

```
sh: 
/var/lib/amazon/ssm/instanceid/document/orchestration/commandid/PatchLinux/_script.sh: Permission denied
failed to run commands: exit status 126
```

**Cause**: `/var/lib/amazon/` might be mounted with `noexec` permissions. This is an issue because SSM Agent downloads payload scripts to `/var/lib/amazon/ssm` and runs them from that location.

**Solution**: Ensure that you have configured dedicated partitions for `/var/log/amazon` and `/var/lib/amazon`, and that they're mounted with `exec` permissions.
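To check whether a filesystem is mounted with `noexec`, you can inspect `/proc/mounts`. This sketch examines the root filesystem so that it runs anywhere; on a managed node, check the entry for the filesystem that contains `/var/lib/amazon`.

```
# Mount options recorded for the root filesystem (field 4 of /proc/mounts)
mount_opts=$(awk '$2 == "/" {print $4; exit}' /proc/mounts)
case ",$mount_opts," in
    *,noexec,*) echo "noexec is set: scripts cannot run from this filesystem" ;;
    *)          echo "exec is allowed" ;;
esac
```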

### Issue: 'Unable to download payload' error


**Problem**: When you run `AWS-RunPatchBaseline`, patching fails with the following error.

```
Unable to download payload: https://s3.amzn-s3-demo-bucket.region.amazonaws.com/aws-ssm-region/patchbaselineoperations/linux/payloads/patch-baseline-operations-X.XX.tar.gz.failed to run commands: exit status 156
```

**Cause**: The managed node doesn't have the required network access to the specified Amazon Simple Storage Service (Amazon S3) bucket.

**Solution**: Update your network configuration so that S3 endpoints are reachable. For more details, see information about required access to S3 buckets for Patch Manager in [SSM Agent communications with AWS managed S3 buckets](ssm-agent-technical-details.md#ssm-agent-minimum-s3-permissions).
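A quick probe can confirm whether the Regional S3 endpoint is reachable from the node. This is a sketch: replace `us-east-1` with your Region, and note that any HTTP response (even an error status) proves the network path works, so only a connection failure matters here.

```shell
# Probe HTTPS reachability to the Regional S3 endpoint.
# us-east-1 is a placeholder; use the node's Region.
region=us-east-1
if curl -sS --max-time 10 -o /dev/null "https://s3.${region}.amazonaws.com/"; then
  s3_reach=reachable
else
  s3_reach=unreachable
fi
echo "S3 endpoint: $s3_reach"
```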

### Issue: 'unsupported package manager and python version combination' error


**Problem**: When you run `AWS-RunPatchBaseline`, patching fails with the following error.

```
An unsupported package manager and python version combination was found. Apt requires Python3 to be installed.
failed to run commands: exit status 1
```

**Cause**: A supported version of Python 3 isn't installed on the Debian Server or Ubuntu Server instance.

**Solution**: Install a supported version of Python 3 (3.0 through 3.12) on the server. Python 3 is required for Debian Server and Ubuntu Server managed nodes.
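One way to confirm the prerequisite is to check for the interpreter directly on the node:

```shell
# Report whether a Python 3 interpreter is available for the patching payload.
if command -v python3 >/dev/null 2>&1; then
  py_status="found: $(python3 --version 2>&1)"
else
  py_status="missing: install with, for example, sudo apt-get install -y python3"
fi
echo "$py_status"
```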

### Issue: Patch Manager isn't applying rules specified to exclude certain packages


**Problem**: You have attempted to exclude certain packages by specifying them in the `/etc/yum.conf` file, in the format `exclude=package-name`, but they aren't excluded during the Patch Manager `Install` operation.

**Cause**: Patch Manager doesn't incorporate exclusions specified in the `/etc/yum.conf` file.

**Solution**: To exclude specific packages, create a custom patch baseline and create a rule to exclude the packages you don't want installed.
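The steps above can be sketched with the AWS CLI. The baseline name and package patterns below are examples only, and the call is guarded so it degrades to a message when the AWS CLI or credentials are unavailable.

```shell
# Sketch: create a custom baseline that blocks specific packages.
# The name and patterns are hypothetical examples.
if command -v aws >/dev/null 2>&1; then
  aws ssm create-patch-baseline \
    --name "example-no-kernel-updates" \
    --operating-system AMAZON_LINUX_2 \
    --rejected-patches "kernel*" "httpd*" \
    --rejected-patches-action BLOCK \
    || echo "create-patch-baseline failed (check credentials and Region)"
  cli_state=present
else
  cli_state=absent
fi
echo "aws CLI: $cli_state"
```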

### Issue: Patching fails and Patch Manager reports that the Server Name Indication extension to TLS is not available


**Problem**: The patching operation issues the following message.

```
/var/log/amazon/ssm/patch-baseline-operations/urllib3/util/ssl_.py:369: 
SNIMissingWarning: An HTTPS request has been made, but the SNI (Server Name Indication) extension
to TLS is not available on this platform. This might cause the server to present an incorrect TLS 
certificate, which can cause validation failures. You can upgrade to a newer version of Python 
to solve this. 
For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
```

**Cause**: This message doesn't indicate an error. Instead, it's a warning that the older version of Python distributed with the operating system doesn't support TLS Server Name Indication. The Systems Manager patch payload script issues this warning when connecting to AWS APIs that support SNI.

**Solution**: To troubleshoot any patching failures when this message is reported, review the contents of the `stdout` and `stderr` files. If you haven't configured the patch baseline to store these files in an S3 bucket or in Amazon CloudWatch Logs, you can locate the files in the following location on your Linux managed node. 

`/var/lib/amazon/ssm/instance-id/document/orchestration/Run-Command-execution-id/awsrunShellScript/PatchLinux`

### Issue: Patch Manager reports 'No more mirrors to try'


**Problem**: The patching operation issues the following message.

```
[Errno 256] No more mirrors to try.
```

**Cause**: The repositories configured on the managed node are not working correctly. Possible causes for this include:
+ The `yum` cache is corrupted.
+ A repository URL can't be reached due to network-related issues.

**Solution**: Patch Manager uses the managed node’s default package manager to perform patching operations. Verify that the repositories are configured and operating correctly.

### Issue: Patching fails with 'Error code returned from curl is 23'


**Problem**: A patching operation that uses `AWS-RunPatchBaseline` fails with an error similar to the following:

```
05/01/2025 17:04:30 root [ERROR]: Error code returned from curl is 23
```

**Cause**: The curl tool in use on your system lacks the permissions needed to write to the file system. This can occur if the package manager's default curl tool was replaced by a different version, such as one installed with snap.

**Solution**: If the curl version provided by the package manager was uninstalled when a different version was installed, reinstall it.

If you need to keep multiple curl versions installed, ensure that the version associated with the package manager is in the first directory listed in the `PATH` variable. You can check this by running the command `echo $PATH` to see the current order of directories that are checked for executable files on your system.
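The following sketch lists every copy of curl on `PATH` in resolution order; the first entry is the one patching scripts will invoke.

```shell
# Enumerate curl binaries in PATH resolution order.
i=0
IFS=:
for dir in $PATH; do
  [ -x "$dir/curl" ] && { i=$((i + 1)); echo "$i: $dir/curl"; }
done
unset IFS
[ "$i" -eq 0 ] && echo "curl not found on PATH"
echo "copies found: $i"
```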

### Issue: Patching fails with ‘Error unpacking rpm package…’ message


**Problem**: A patching operation fails with an error similar to the following:

```
Error : Error unpacking rpm package python-urllib3-1.25.9-1.amzn2.0.2.noarch
python-urllib3-1.25.9-1.amzn2.0.1.noarch was supposed to be removed but is not!
failed to run commands: exit status 1
```

**Cause 1**: When a particular package is present in multiple package installers, such as both `pip` and `yum` or `dnf`, conflicts can occur when using the default package manager.

A common example occurs with the `urllib3` package, which is found in `pip`, `yum`, and `dnf`.

**Cause 2**: The `python-urllib3` package is corrupted. This can happen if the package files were installed or updated by `pip` after the `rpm` package was previously installed by `yum` or `dnf`.

**Solution**: Remove the `python-urllib3` package from pip by running the command `sudo pip uninstall urllib3`, keeping the package only in the default package manager (`yum` or `dnf`). 
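To confirm dual ownership before uninstalling anything, you can query both installers. This is a sketch for RPM-based distributions; the rpm package name varies between `python-urllib3` and `python3-urllib3` depending on the release.

```shell
# Check whether urllib3 is managed by both pip and rpm.
# Either query may legitimately return nothing.
pip3 show urllib3 2>/dev/null | head -n 1 || true
rpm -q python-urllib3 python3-urllib3 2>/dev/null || true
own_check=done
echo "ownership check: $own_check"
```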

### Issue: Patching fails with 'Encounter service side error when uploading the inventory'


**Problem**: When running the `AWS-RunPatchBaseline` document, you receive the following error message:

```
Encounter service side error when uploading the inventory
```

**Cause**: Two commands to run `AWS-RunPatchBaseline` were running at the same time on the same managed node. This creates a race condition when initializing the boto3 client during patching operations.

**Solution**: Ensure that no State Manager association, maintenance window tasks, or other configurations that run `AWS-RunPatchBaseline` on a schedule are targeting the same managed node around the same time.

### Issue: Patching fails with ‘Errors were encountered while downloading packages’ message


**Problem**: During patching, you receive an error similar to the following:

```
YumDownloadError: [u'Errors were encountered while downloading 
packages.', u'libxml2-2.9.1-6.el7_9.6.x86_64: [Errno 5] [Errno 12] 
Cannot allocate memory', u'libxslt-1.1.28-6.el7.x86_64: [Errno 5] 
[Errno 12] Cannot allocate memory', u'libcroco-0.6.12-6.el7_9.x86_64: 
[Errno 5] [Errno 12] Cannot allocate memory', u'openldap-2.4.44-25.el7_9.x86_64: 
[Errno 5] [Errno 12] Cannot allocate memory',
```

**Cause**: This error can occur when insufficient memory is available on a managed node.

**Solution**: Configure swap memory, or upgrade the instance to a type with more memory. Then start a new patching operation.

### Issue: Patching fails with an out of memory (OOM) error


**Problem**: When you run `AWS-RunPatchBaseline`, the patching operation fails due to insufficient memory on the managed node. You might see errors such as `Cannot allocate memory`, `Killed` (from the Linux OOM killer), or the operation fails unexpectedly. This error is more likely to occur on instances with less than 1 GB of RAM, but can also affect instances with more memory when a large number of updates are available.

**Cause**: Patch Manager runs patching operations using the native package manager on the managed node. The memory required during a patching operation depends on several factors, including:
+ The number of packages installed and available updates on the managed node.
+ The package manager in use and its memory characteristics.
+ Other processes running on the managed node at the time of the patching operation.

Managed nodes with a large number of installed packages or a large number of available updates require more memory during patching operations. When available memory is insufficient, the patching process will fail and exit with an error. The operating system can also terminate the patching process.

**Solution**: Try one or more of the following:
+ Schedule patching operations during periods of low workload activity on the managed node, such as by using maintenance windows.
+ Upgrade the instance to a type with more memory.
+ Configure swap memory on the managed node. Note that on instances with limited EBS throughput, heavy swap usage may cause performance degradation.
+ Review and reduce the number of processes running on the managed node during patching operations.

### Issue: Patching fails with a message that 'The following signatures couldn't be verified because the public key is not available'


**Problem**: Patching fails on Ubuntu Server with an error similar to the following:

```
02/17/2022 21:08:43 root [ERROR]: W:GPG error: 
http://repo.mysql.com/apt/ubuntu  bionic InRelease: The following 
signatures couldn't be verified because the public key is not available: 
NO_PUBKEY 467B942D3A79BD29, E:The repository ' http://repo.mysql.com/apt/ubuntu bionic
```

**Cause**: The GNU Privacy Guard (GPG) key has expired or is missing.

**Solution**: Refresh the GPG key, or add the key again.

For example, using the error shown previously, we see that the `467B942D3A79BD29` key is missing and must be added. To do so, run either of the following commands:

```
sudo apt-key adv --keyserver hkps://keyserver.ubuntu.com --recv-keys 467B942D3A79BD29
```

```
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 467B942D3A79BD29
```

Or, to refresh all keys, run the following command:

```
sudo apt-key adv --keyserver hkps://keyserver.ubuntu.com --refresh-keys
```

If the error recurs after this, we recommend reporting the issue to the organization that maintains the repository. Until a fix is available, you can edit the `/etc/apt/sources.list` file to omit the repository during the patching process.

To do so, open the `sources.list` file for editing, locate the line for the repository, and insert a `#` character at the beginning of the line to comment it out. Then save and close the file.

### Issue: Patching fails with a 'NoMoreMirrorsRepoError' message


**Problem**: You receive an error similar to the following:

```
NoMoreMirrorsRepoError: failure: repodata/repomd.xml from pgdg94: [Errno 256] No more mirrors to try.
```

**Cause**: There is an error in the source repository.

**Solution**: We recommend reporting the issue to the organization that maintains the repository. Until the error is fixed, you can disable the repository at the operating system level. To do so, run the following command, replacing the value for *repo-name* with your repository name:

```
yum-config-manager --disable repo-name
```

Following is an example.

```
yum-config-manager --disable pgdg94
```

After you run this command, run another patching operation.

### Issue: Patching fails with an 'Unable to download payload' message


**Problem**: You receive an error similar to the following:

```
Unable to download payload: 
https://s3.dualstack.eu-west-1.amazonaws.com/aws-ssm-eu-west-1/patchbaselineoperations/linux/payloads/patch-baseline-operations-1.83.tar.gz.
failed  to run commands: exit status 156
```

**Cause**: The managed node configuration contains errors or is incomplete.

**Solution**: Make sure that the managed node is configured with the following:
+ Outbound TCP 443 rule in security group.
+ Egress TCP 443 rule in NACL.
+ Ingress TCP 1024-65535 rule in NACL.
+ NAT gateway or internet gateway route in the route table to provide connectivity to an S3 endpoint. If the instance doesn't have internet access, add an S3 gateway endpoint to the VPC and associate it with the route table of the managed node.
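A single outbound probe can confirm that the security group, network ACL, and route table rules above work together end to end. This is a sketch; the host below is the generic S3 endpoint, and your Region-specific endpoint may differ.

```shell
# Probe outbound HTTPS (TCP 443) from the node. Any HTTP response, even an
# error status, means the network path is open.
host=s3.amazonaws.com
if curl -sS --max-time 10 -o /dev/null "https://$host/"; then
  egress443=open
else
  egress443=blocked
fi
echo "outbound 443 to $host: $egress443"
```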

### Issue: Patching fails with a message 'install errors: dpkg: error: dpkg frontend is locked by another process'


**Problem**: Patching fails with an error similar to the following:

```
install errors: dpkg: error: dpkg frontend is locked by another process
failed to run commands: exit status 2
Failed to install package; install status Failed
```

**Cause**: Another process at the operating system level is already running the package manager on the managed node. If that process takes a long time to complete, the Patch Manager patching operation can time out and fail.

**Solution**: After the other process that’s using the package manager completes, run a new patching operation.

### Issue: Patching on Ubuntu Server fails with a 'dpkg was interrupted' error


**Problem**: On Ubuntu Server, patching fails with an error similar to the following:

```
E: dpkg was interrupted, you must manually run
'dpkg --configure -a' to correct the problem.
```

**Cause**: One or more packages is misconfigured.

**Solution**: Perform the following steps:

1. Check to see which packages are affected, and what the issues are with each package by running the following commands, one at a time:

   ```
   sudo apt-get check
   ```

   ```
   sudo dpkg -C
   ```

   ```
   dpkg-query -W -f='${db:Status-Abbrev} ${binary:Package}\n' | grep -E ^.[^nci]
   ```

1. Correct the packages with issues by running the following command:

   ```
   sudo dpkg --configure -a
   ```

1. If the previous command didn't fully resolve the issue, run the following command:

   ```
   sudo apt --fix-broken install
   ```

### Issue: The package manager utility can't resolve a package dependency


**Problem**: The native package manager on the managed node is unable to resolve a package dependency and patching fails. The following error message example indicates this type of failure on an operating system that uses `yum` as the package manager.

```
09/22/2020 08:56:09 root [ERROR]: yum update failed with result code: 1, 
message: [u'rpm-python-4.11.3-25.amzn2.0.3.x86_64 requires rpm = 4.11.3-25.amzn2.0.3', 
u'awscli-1.18.107-1.amzn2.0.1.noarch requires python2-botocore = 1.17.31']
```

**Cause**: On Linux operating systems, Patch Manager uses the native package manager on the machine, such as `yum`, `dnf`, `apt`, or `zypper`, to run patching operations. These applications automatically detect, install, update, or remove dependent packages as required. However, some conditions can prevent the package manager from completing a dependency operation, such as:
+ Multiple conflicting repositories are configured on the operating system.
+ A remote repository URL is inaccessible due to network-related issues.
+ A package for the wrong architecture is found in the repository.

**Solution**: Patching might fail because of a dependency issue for a wide variety of reasons. Therefore, we recommend that you contact AWS Support to assist with troubleshooting.

### Issue: Zypper package lock dependency failures on SLES managed nodes


**Problem**: When you run `AWS-RunPatchBaseline` with the `Install` operation on SUSE Linux Enterprise Server instances, patching fails with dependency check errors related to package locks. You might see error messages similar to the following:

```
Problem: mock-pkg-has-dependencies-0.2.0-21.adistro.noarch requires mock-pkg-standalone = 0.2.0, but this requirement cannot be provided
  uninstallable providers: mock-pkg-standalone-0.2.0-21.adistro.noarch[local-repo]
 Solution 1: remove lock to allow installation of mock-pkg-standalone-0.2.0-21.adistro.noarch[local-repo]
 Solution 2: do not install mock-pkg-has-dependencies-0.2.0-21.adistro.noarch
 Solution 3: break mock-pkg-has-dependencies-0.2.0-21.adistro.noarch by ignoring some of its dependencies

Choose from above solutions by number or cancel [1/2/3/c] (c): c
```

In this example, the package `mock-pkg-standalone` is locked, which you could verify by running `sudo zypper locks` and looking for this package name in the output.

Or you might see log entries indicating dependency check failures:

```
Encountered a known exception in the CLI Invoker: CLIInvokerError(error_message='Dependency check failure during commit process', error_code='4')
```

**Note**  
This issue occurs only during `Install` operations. `Scan` operations don't apply package locks and aren't affected by existing locks.

**Cause**: This error occurs when zypper package locks prevent the installation or update of packages due to dependency conflicts. Package locks can be present for several reasons:
+ **Customer-applied locks**: You or your system administrator manually locked packages using zypper commands such as `zypper addlock`.
+ **Patch Manager rejected patches**: Patch Manager automatically applies package locks when you specify packages in the **Rejected patches** list of your patch baseline to prevent their installation.
+ **Residual locks from interrupted operations**: In rare cases, if a patch operation was interrupted (such as by a system reboot) before Patch Manager could clean up temporary locks, residual package locks might remain on your managed node.

**Solution**: To resolve zypper package lock issues, follow these steps based on the cause:

**Step 1: Identify locked packages**

Connect to your SLES managed node and run the following command to list all currently locked packages:

```
sudo zypper locks
```

**Step 2: Determine the source of locks**
+ If the locked packages are ones you intentionally locked for system stability, consider whether they need to remain locked or if they can be temporarily unlocked for patching.
+ If the locked packages match entries in your patch baseline's **Rejected patches** list, these are likely residual locks from an interrupted patch operation. During normal operations, Patch Manager applies these locks temporarily and removes them automatically when the operation completes. You can either remove the packages from the rejected list or modify your patch baseline rules.
+ If you don't recognize the locked packages and they weren't intentionally locked, they might be residual locks from a previous interrupted patch operation.

**Step 3: Remove locks as appropriate**

To remove specific package locks, use the following command:

```
sudo zypper removelock package-name
```

To remove all package locks (use with caution), run:

```
sudo zypper cleanlocks
```

**Step 4: Update your patch baseline (if applicable)**

If the locks were caused by rejected patches in your patch baseline:

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose the **Patch baselines** tab, and then choose your custom patch baseline.

1. Choose **Actions**, **Modify patch baseline**.

1. In the **Rejected patches** section, review the listed packages and remove any that should be allowed to install.

1. Choose **Save changes**.

**Step 5: Retry the patch operation**

After removing the problematic locks and updating your patch baseline if necessary, run the `AWS-RunPatchBaseline` document again.

**Note**  
When Patch Manager applies locks for rejected patches during `Install` operations, it's designed to remove these locks automatically after the patch operation completes. If you see these locks when running `sudo zypper locks`, a previous patch operation was interrupted before cleanup could occur. In that case, manual cleanup might be required, as described in this procedure.

**Prevention**: To avoid future zypper lock conflicts:
+ Carefully review your patch baseline's rejected patches list to ensure it only includes packages you truly want to exclude.
+ Avoid manually locking packages that might be required as dependencies for security updates.
+ If you must lock packages manually, document the reasons and review the locks periodically.
+ Monitor patch operations to completion, and avoid interrupting them with system reboots or other actions that could prevent proper cleanup of temporary locks.

### Issue: Cannot acquire lock. Another patching operation is in progress.


**Problem**: When you run `AWS-RunPatchBaseline`, patching fails with error code 4 and the following error message.

```
[ERROR]: Cannot acquire lock on /var/log/amazon/ssm/patch-baseline-concurrent.lock. Another patching operation is in progress.
```

**Cause**: This error occurs when multiple patching operations are attempting to run on the same managed node at the same time. The lock file prevents concurrent patching operations to avoid conflicts and ensure system stability.

**Solution**: Ensure that patching operations are not scheduled to run at the same time on the same managed node. Review the following configurations to identify and resolve scheduling conflicts:
+ **Patch policies**: Check your Quick Setup patch policy configurations to ensure they don't overlap with other patching schedules.
+ **Maintenance windows**: Review your maintenance window associations to verify that multiple windows aren't targeting the same managed nodes with patching tasks at overlapping times.
+ **Manual Patch now operations**: Avoid initiating manual **Patch now** operations while scheduled patching is in progress.
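When investigating, you can check directly whether the lock from the error message is still present on the node. The path below is taken from the error text above.

```shell
# See whether the patching lock file currently exists.
lockfile=/var/log/amazon/ssm/patch-baseline-concurrent.lock
if [ -e "$lockfile" ]; then
  lock_state=present
  ls -l "$lockfile"
else
  lock_state=absent
fi
echo "patch lock: $lock_state"
```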

## Errors when running `AWS-RunPatchBaseline` on Windows Server


**Topics**
+ [Issue: mismatched product family/product pairs](#patch-manager-troubleshooting-product-family-mismatch)
+ [Issue: `AWS-RunPatchBaseline` output returns an `HRESULT` (Windows Server)](#patch-manager-troubleshooting-hresult)
+ [Issue: managed node doesn't have access to Windows Update Catalog or WSUS](#patch-manager-troubleshooting-instance-access)
+ [Issue: PatchBaselineOperations PowerShell module is not downloadable](#patch-manager-troubleshooting-module-not-downloadable)
+ [Issue: missing patches](#patch-manager-troubleshooting-missing-patches)
+ [Issue: Cannot acquire lock. Another patching operation is in progress.](#patch-manager-troubleshooting-windows-concurrent-lock)

### Issue: mismatched product family/product pairs


**Problem**: When you create a patch baseline in the Systems Manager console, you specify a product family and a product. For example, you might choose:
+ **Product family**: `Office`

  **Product**: `Office 2016`

**Cause**: If you attempt to create a patch baseline with a mismatched product family/product pair, an error message is displayed. The following are reasons this can occur:
+ You selected a valid product family and product pair but then removed the product family selection.
+ You chose a product from the **Obsolete or mismatched options** sublist instead of the **Available and matching options** sublist. 

  Items in the product **Obsolete or mismatched options** sublist might have been entered in error through an SDK or AWS Command Line Interface (AWS CLI) `create-patch-baseline` command. This could mean a typo was introduced or a product was assigned to the wrong product family. A product is also included in the **Obsolete or mismatched options** sublist if it was specified for a previous patch baseline but has no patches available from Microsoft. 

**Solution**: To avoid this issue in the console, always choose options from the **Available and matching options** sublists.

You can also view the products that have available patches by using the [describe-patch-properties](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-patch-properties.html) command in the AWS CLI or the [DescribePatchProperties](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribePatchProperties.html) API operation.
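For example, the following sketch lists Windows products with available patches. The call is guarded so it degrades to a message when the AWS CLI, credentials, or a default Region are unavailable.

```shell
# List Windows products that have available patches.
if command -v aws >/dev/null 2>&1; then
  aws ssm describe-patch-properties \
    --operating-system WINDOWS --property PRODUCT --output table \
    || echo "describe-patch-properties failed (check credentials and Region)"
  dpp_cli=present
else
  dpp_cli=absent
fi
echo "aws CLI: $dpp_cli"
```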

### Issue: `AWS-RunPatchBaseline` output returns an `HRESULT` (Windows Server)


**Problem**: You received an error like the following.

```
----------ERROR-------
Invoke-PatchBaselineOperation : Exception Details: An error occurred when 
attempting to search Windows Update.
Exception Level 1:
 Error Message: Exception from HRESULT: 0x80240437
 Stack Trace: at WUApiLib.IUpdateSearcher.Search(String criteria)..
(Windows updates)
11/22/2020 09:17:30 UTC | Info | Searching for Windows Updates.
11/22/2020 09:18:59 UTC | Error | Searching for updates resulted in error: Exception from HRESULT: 0x80240437
----------ERROR-------
failed to run commands: exit status 4294967295
```

**Cause**: This output indicates that the native Windows Update APIs were unable to run the patching operations.

**Solution**: Check the `HRESULT` code in the following microsoft.com topics to identify troubleshooting steps for resolving the error:
+ [Windows Update error codes by component](https://learn.microsoft.com/en-us/windows/deployment/update/windows-update-error-reference) 
+ [Windows Update common errors and mitigation](https://learn.microsoft.com/en-us/troubleshoot/windows-client/deployment/common-windows-update-errors) 

### Issue: managed node doesn't have access to Windows Update Catalog or WSUS


**Problem**: You received an error like the following.

```
Downloading PatchBaselineOperations PowerShell module from https://s3.aws-api-domain/path_to_module.zip to C:\Windows\TEMP\Amazon.PatchBaselineOperations-1.29.zip.
Extracting PatchBaselineOperations zip file contents to temporary folder.
Verifying SHA 256 of the PatchBaselineOperations PowerShell module files.
Successfully downloaded and installed the PatchBaselineOperations PowerShell module.
Patch Summary for
PatchGroup :
BaselineId :
Baseline : null
SnapshotId :
RebootOption : RebootIfNeeded
OwnerInformation :
OperationType : Scan
OperationStartTime : 1970-01-01T00:00:00.0000000Z
OperationEndTime : 1970-01-01T00:00:00.0000000Z
InstalledCount : -1
InstalledRejectedCount : -1
InstalledPendingRebootCount : -1
InstalledOtherCount : -1
FailedCount : -1
MissingCount : -1
NotApplicableCount : -1
UnreportedNotApplicableCount : -1
EC2AMAZ-VL3099P - PatchBaselineOperations Assessment Results - 2020-12-30T20:59:46.169
----------ERROR-------
Invoke-PatchBaselineOperation : Exception Details: An error occurred when attempting to search Windows Update.
Exception Level 1:
Error Message: Exception from HRESULT: 0x80072EE2
Stack Trace: at WUApiLib.IUpdateSearcher.Search(String criteria)
at Amazon.Patch.Baseline.Operations.PatchNow.Implementations.WindowsUpdateAgent.SearchForUpdates(String
searchCriteria)
At C:\ProgramData\Amazon\SSM\InstanceData\i-02573cafcfEXAMPLE\document\orchestration\3d2d4864-04b7-4316-84fe-eafff1ea58
e3\PatchWindows\_script.ps1:230 char:13
+ $response = Invoke-PatchBaselineOperation -Operation Install -Snapsho ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : OperationStopped: (Amazon.Patch.Ba...UpdateOperation:InstallWindowsUpdateOperation) [Inv
oke-PatchBaselineOperation], Exception
+ FullyQualifiedErrorId : Exception Level 1:
Error Message: Exception Details: An error occurred when attempting to search Windows Update.
Exception Level 1:
Error Message: Exception from HRESULT: 0x80072EE2
Stack Trace: at WUApiLib.IUpdateSearcher.Search(String criteria)
at Amazon.Patch.Baseline.Operations.PatchNow.Implementations.WindowsUpdateAgent.SearchForUpdates(String searc
---Error truncated----
```

**Cause**: This error could be related to the Windows Update components, or to a lack of connectivity to the Windows Update Catalog or Windows Server Update Services (WSUS).

**Solution**: Confirm that the managed node has connectivity to the [Microsoft Update Catalog](https://www.catalog.update.microsoft.com/home.aspx) through an internet gateway, NAT gateway, or NAT instance. If you're using WSUS, confirm that the managed node has connectivity to the WSUS server in your environment. If connectivity is available to the intended destination, check the Microsoft documentation for other potential causes of `HResult 0x80072EE2`. This might indicate an operating system level issue. 

### Issue: PatchBaselineOperations PowerShell module is not downloadable


**Problem**: You received an error like the following.

```
Preparing to download PatchBaselineOperations PowerShell module from S3.
Downloading PatchBaselineOperations PowerShell module from https://s3.aws-api-domain/path_to_module.zip to C:\Windows\TEMP\Amazon.PatchBaselineOperations-1.29.zip.
----------ERROR-------
C:\ProgramData\Amazon\SSM\InstanceData\i-02573cafcfEXAMPLE\document\orchestration\aaaaaaaa-bbbb-cccc-dddd-4f6ed6bd5514\
PatchWindows\_script.ps1 : An error occurred when executing PatchBaselineOperations: Unable to connect to the remote server
+ CategoryInfo : NotSpecified: (:) [Write-Error], WriteErrorException
+ FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,_script.ps1
failed to run commands: exit status 4294967295
```

**Solution**: Check the managed node's connectivity and permissions to Amazon Simple Storage Service (Amazon S3). The managed node's AWS Identity and Access Management (IAM) role must include the minimum permissions cited in [SSM Agent communications with AWS managed S3 buckets](ssm-agent-technical-details.md#ssm-agent-minimum-s3-permissions). The node must communicate with the Amazon S3 endpoint through an Amazon S3 gateway endpoint, NAT gateway, or internet gateway. For more information about the VPC endpoint requirements for AWS Systems Manager Agent (SSM Agent), see [Improve the security of EC2 instances by using VPC endpoints for Systems Manager](setup-create-vpc.md).

### Issue: missing patches


**Problem**: `AWS-RunPatchBaseline` completed successfully, but some patches are missing.

The following are some common causes and their solutions.

**Cause 1**: The baseline isn't effective.

**Solution 1**: To check if this is the cause, use the following procedure.

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Run Command**.

1. Select the **Command history** tab and then select the command whose baseline you want to check.

1. Select the managed node that has missing patches.

1. Select **Step 1 - Output** and find the `BaselineId` value.

1. Check the assigned [patch baseline configuration](patch-manager-predefined-and-custom-patch-baselines.md#patch-manager-baselines-custom), that is, the operating system, product name, classification, and severity for the patch baseline.

1. Go to the [Microsoft Update Catalog](https://www.catalog.update.microsoft.com/home.aspx).

1. Search the Microsoft Knowledge Base (KB) article IDs (for example, KB3216916).

1. Verify that the value under **Product** matches that of your managed node and select the corresponding **Title**. A new **Update Details** window will open.

1. In the **Overview** tab, the **classification** and **MSRC severity** must match the patch baseline configuration you found earlier.
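To compare the baseline configuration against the catalog entry, you can also retrieve it with the AWS CLI. The baseline ID comes from the **Step 1 - Output** value in the procedure above; the ID shown here is a placeholder, and the call is guarded so it degrades to a message when the AWS CLI or credentials are unavailable.

```shell
# Retrieve a patch baseline's configuration by ID.
# pb-0123456789abcdef0 is a placeholder, not a real baseline.
baseline_id=pb-0123456789abcdef0
if command -v aws >/dev/null 2>&1; then
  aws ssm get-patch-baseline --baseline-id "$baseline_id" \
    || echo "get-patch-baseline failed (check the ID, credentials, and Region)"
  gpb_cli=present
else
  gpb_cli=absent
fi
echo "aws CLI: $gpb_cli"
```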

**Cause 2**: The patch was replaced.

**Solution 2**: To check if this is true, use the following procedure.

1. Go to the [Microsoft Update Catalog](https://www.catalog.update.microsoft.com/home.aspx).

1. Search the Microsoft Knowledge Base (KB) article IDs (for example, KB3216916).

1. Verify that the value under **Product** matches that of your managed node and select the corresponding **Title**. A new **Update Details** window will open.

1. Go to the **Package Details** tab. Look for an entry under the **This update has been replaced by the following updates:** header.

**Cause 3**: The same patch might have different KB numbers because Microsoft handles WSUS and Windows online updates as independent release channels.

**Solution 3**: Check the patch eligibility. If the package isn't available under WSUS, install [OS Build 14393.3115](https://support.microsoft.com/en-us/topic/july-16-2019-kb4507459-os-build-14393-3115-511a3df6-c07e-14e3-dc95-b9898a7a7a57). If the package is available for all operating system builds, install [OS Builds 18362.1256 and 18363.1256](https://support.microsoft.com/en-us/topic/december-8-2020-kb4592449-os-builds-18362-1256-and-18363-1256-c448f3df-a5f1-1d55-aa31-0e1cf7a440a9).

### Issue: Cannot acquire lock. Another patching operation is in progress.


**Problem**: When you run `AWS-RunPatchBaseline`, patching fails with error code 4 and the following error message.

```
Cannot acquire lock on C:\ProgramData\Amazon\SSM\patch-baseline-concurrent.lock. Another patching operation is in progress.
```

**Cause**: This error occurs when multiple patching operations attempt to run on the same managed node at the same time. The lock file prevents concurrent patching operations from running, which avoids conflicts and helps ensure system stability.

**Solution**: Ensure that patching operations are not scheduled to run at the same time on the same managed node. Review the following configurations to identify and resolve scheduling conflicts:
+ **Patch policies**: Check your Quick Setup patch policy configurations to ensure they don't overlap with other patching schedules.
+ **Maintenance windows**: Review your maintenance window associations to verify that multiple windows aren't targeting the same managed nodes with patching tasks at overlapping times.
+ **Manual Patch now operations**: Avoid initiating manual **Patch now** operations while scheduled patching is in progress.
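
Before starting a new patching operation, you can check whether another command is still running on the node. The following is a minimal sketch using the AWS CLI; the helper name and instance ID are illustrative placeholders.

```shell
# Sketch: list in-progress command invocations on a managed node to
# spot an overlapping patching operation before starting a new one.
# The instance ID is a placeholder; replace it with your own.
list_inprogress_invocations() {
  local instance_id="$1"
  aws ssm list-command-invocations \
      --instance-id "$instance_id" \
      --query "CommandInvocations[?Status=='InProgress'].[CommandId,DocumentName]" \
      --output text
}
```

For example, `list_inprogress_invocations i-0123456789abcdef0` returns the command ID and document name of any invocation still running on that node.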

## Errors when running `AWS-RunPatchBaseline` on macOS


**Topics**
+ [

### Issue: Cannot acquire lock. Another patching operation is in progress.
](#patch-manager-troubleshooting-macos-concurrent-lock)

### Issue: Cannot acquire lock. Another patching operation is in progress.


**Problem**: When you run `AWS-RunPatchBaseline`, patching fails with error code 4 and the following error message.

```
[ERROR]: Cannot acquire lock on /var/log/amazon/ssm/patch-baseline-concurrent.lock. Another patching operation is in progress.
```

**Cause**: This error occurs when multiple patching operations attempt to run on the same managed node at the same time. The lock file prevents concurrent patching operations from running, which avoids conflicts and helps ensure system stability.

**Solution**: Ensure that patching operations are not scheduled to run at the same time on the same managed node. Review the following configurations to identify and resolve scheduling conflicts:
+ **Patch policies**: Check your Quick Setup patch policy configurations to ensure they don't overlap with other patching schedules.
+ **Maintenance windows**: Review your maintenance window associations to verify that multiple windows aren't targeting the same managed nodes with patching tasks at overlapping times.
+ **Manual Patch now operations**: Avoid initiating manual **Patch now** operations while scheduled patching is in progress.

## Using AWS Support Automation runbooks


AWS Support provides two Automation runbooks you can use to troubleshoot certain issues related to patching.
+ `AWSSupport-TroubleshootWindowsUpdate` – The [https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/awssupport-troubleshoot-windows-update.html](https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/awssupport-troubleshoot-windows-update.html) runbook identifies issues that can cause Windows Server update failures on Amazon Elastic Compute Cloud (Amazon EC2) Windows Server instances.
+ `AWSSupport-TroubleshootPatchManagerLinux` – The [https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/automation-troubleshoot-patch-manager-linux.html](https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/automation-troubleshoot-patch-manager-linux.html) runbook troubleshoots common issues that can cause a patch failure on Linux-based managed nodes using Patch Manager. The main goal of this runbook is to identify the patch command failure root cause and suggest a remediation plan.

**Note**  
There is a charge to run Automation runbooks. For information, see [AWS Systems Manager Pricing for Automation](https://aws.amazon.com/systems-manager/pricing/#Automation).
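
As a hedged sketch, you could start one of these runbooks from the AWS CLI as follows. The helper name, instance ID, and the `InstanceId` parameter name are assumptions; check the runbook's documented parameters before running it.

```shell
# Sketch: start the AWSSupport-TroubleshootPatchManagerLinux runbook
# against a single instance. The parameter name (InstanceId) is an
# assumption; verify it against the runbook's parameter list.
start_patch_troubleshooter() {
  local instance_id="$1"
  aws ssm start-automation-execution \
      --document-name "AWSSupport-TroubleshootPatchManagerLinux" \
      --parameters "InstanceId=$instance_id" \
      --query "AutomationExecutionId" \
      --output text
}
```

The returned execution ID can then be passed to `aws ssm get-automation-execution` to follow the runbook's progress.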

## Contacting AWS Support


If you can't find troubleshooting solutions in this section or in the Systems Manager issues in [AWS re:Post](https://repost.aws/tags/TA-UbbRGVYRWCDaCvae6itYg/aws-systems-manager), and you have a [Developer, Business, or Enterprise Support plan](https://aws.amazon.com/premiumsupport/plans), you can create a technical support case at [AWS Support](https://aws.amazon.com/premiumsupport/).

Before you contact Support, collect the following items:
+ [SSM Agent logs](ssm-agent-logs.md)
+ Run Command command ID, maintenance window ID, or Automation execution ID
+ For Windows Server managed nodes, also collect the following:
  + `%PROGRAMDATA%\Amazon\PatchBaselineOperations\Logs` as described on the **Windows** tab of [How patches are installed](patch-manager-installing-patches.md)
  + Windows update logs: For Windows Server 2012 R2 and earlier, use `%windir%\WindowsUpdate.log`. For Windows Server 2016 and later, first run the PowerShell command [https://docs.microsoft.com/en-us/powershell/module/windowsupdate/get-windowsupdatelog?view=win10-ps](https://docs.microsoft.com/en-us/powershell/module/windowsupdate/get-windowsupdatelog?view=win10-ps) and then use `%windir%\WindowsUpdate.log`
+ For Linux managed nodes, also collect the following:
  + The contents of the directory `/var/lib/amazon/ssm/instance-id/document/orchestration/Run-Command-execution-id/awsrunShellScript/PatchLinux`

# AWS Systems Manager Run Command
Run Command

Using Run Command, a tool in AWS Systems Manager, you can remotely and securely manage the configuration of your managed nodes. A *managed node* is any Amazon Elastic Compute Cloud (Amazon EC2) instance or non-EC2 machine in your [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment that has been configured for Systems Manager. Run Command allows you to automate common administrative tasks and perform one-time configuration changes at scale. You can use Run Command from the AWS Management Console, the AWS Command Line Interface (AWS CLI), AWS Tools for Windows PowerShell, or the AWS SDKs. Run Command is offered at no additional cost. To get started with Run Command, open the [Systems Manager console](https://console.aws.amazon.com//systems-manager/run-command). In the navigation pane, choose **Run Command**.

Administrators use Run Command to install or bootstrap applications, build a deployment pipeline, capture log files when an instance is removed from an Auto Scaling group, join instances to a Windows domain, and more.

The Run Command API follows an eventual consistency model, due to the distributed nature of the system supporting the API. This means that the result of an API command you run that affects your resources might not be immediately visible to all subsequent commands you run. You should keep this in mind when you carry out an API command that immediately follows a previous API command.
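
For example, a command ID returned by `SendCommand` may briefly not be visible to `ListCommands`. The following is a minimal sketch of a polling loop with backoff that accounts for this; the helper name and retry counts are illustrative.

```shell
# Sketch: poll a command's status until it reaches a terminal state,
# retrying with backoff because results of a just-sent command may not
# be immediately visible due to eventual consistency.
poll_command_status() {
  local command_id="$1" status
  for attempt in 1 2 3 4 5; do
    status=$(aws ssm list-commands \
        --command-id "$command_id" \
        --query 'Commands[0].Status' \
        --output text 2>/dev/null)
    case "$status" in
      Success|Failed|Cancelled|TimedOut) echo "$status"; return 0 ;;
    esac
    sleep $((attempt * 2))   # back off before re-polling
  done
  echo "Unresolved"; return 1
}
```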

**Getting Started**  
The following table includes information to help you get started with Run Command.


****  

| Topic | Details | 
| --- | --- | 
|  [Setting up managed nodes for AWS Systems Manager](systems-manager-setting-up-nodes.md)  |  Verify that you have completed the setup requirements for your Amazon Elastic Compute Cloud (Amazon EC2) instances and non-EC2 machines in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment.  | 
|  [Managing nodes in hybrid and multicloud environments with Systems Manager](systems-manager-hybrid-multicloud.md)  |  (Optional) Register on-premises servers and VMs with AWS so you can manage them using Run Command.  | 
|  [Managing edge devices with Systems Manager](systems-manager-setting-up-edge-devices.md)  |  (Optional) Configure edge devices so you can manage them using Run Command.  | 
|  [Running commands on managed nodes](running-commands.md)  |  Learn how to run a command that targets one or more managed nodes by using the AWS Management Console.  | 
|  [Run Command walkthroughs](run-command-walkthroughs.md)  |  Learn how to run commands using either Tools for Windows PowerShell or the AWS CLI.  | 

**EventBridge support**  
This Systems Manager tool is supported as both an *event* type and a *target* type in Amazon EventBridge rules. For information, see [Monitoring Systems Manager events with Amazon EventBridge](monitoring-eventbridge-events.md) and [Reference: Amazon EventBridge event patterns and types for Systems Manager](reference-eventbridge-events.md).

**More info**  
+ [Remotely Run Command on an EC2 Instance (10 minute tutorial)](https://aws.amazon.com/getting-started/hands-on/remotely-run-commands-ec2-instance-systems-manager/)
+ [Systems Manager service quotas](https://docs.aws.amazon.com/general/latest/gr/ssm.html#limits_ssm) in the *Amazon Web Services General Reference*
+ [AWS Systems Manager API Reference](https://docs.aws.amazon.com/systems-manager/latest/APIReference/) 

**Topics**
+ [

# Setting up Run Command
](run-command-setting-up.md)
+ [

# Running commands on managed nodes
](running-commands.md)
+ [

# Using exit codes in commands
](run-command-handle-exit-status.md)
+ [

# Understanding command statuses
](monitor-commands.md)
+ [

# Run Command walkthroughs
](run-command-walkthroughs.md)
+ [

# Troubleshooting Systems Manager Run Command
](troubleshooting-remote-commands.md)

# Setting up Run Command


Before you can manage nodes by using Run Command, a tool in AWS Systems Manager, configure an AWS Identity and Access Management (IAM) policy for any user who will run commands. If you use any global condition keys for the `SendCommand` action in your IAM policies, you must include the `aws:ViaAWSService` condition key and set the boolean value to `true`. The following is an example.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:SendCommand"
            ],
            "Resource": [
                "arn:aws:ssm:us-east-1:111122223333:document/YourDocument"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:SourceVpce": [
                        "vpce-1234567890abcdef0"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:SendCommand"
            ],
            "Resource": [
                "arn:aws:ssm:us-east-1:111122223333:document/YourDocument"
            ],
            "Condition": {
                "Bool": {
                    "aws:ViaAWSService": "true"
                }
            }
        }
    ]
}
```

------

You must also configure your nodes for Systems Manager. For more information, see [Setting up managed nodes for AWS Systems Manager](systems-manager-setting-up-nodes.md).

We recommend completing the following optional setup tasks to help improve the security posture and simplify the day-to-day management of your managed nodes.

Monitor command executions using Amazon EventBridge  
You can use EventBridge to log command execution status changes. You can create a rule that runs whenever there is a state transition, or when there is a transition to one or more states that are of interest. You can also specify Run Command as a target action when an EventBridge event occurs. For more information, see [Configuring EventBridge for Systems Manager events](monitoring-systems-manager-events.md).

Monitor command executions using Amazon CloudWatch Logs  
You can configure Run Command to periodically send all command output and error logs to an Amazon CloudWatch log group. You can monitor these output logs in near real-time, search for specific phrases, values, or patterns, and create alarms based on the search. For more information, see [Configuring Amazon CloudWatch Logs for Run Command](sysman-rc-setting-up-cwlogs.md).

Restrict Run Command access to specific managed nodes  
You can restrict a user's ability to run commands on managed nodes by using AWS Identity and Access Management (IAM). Specifically, you can create an IAM policy with a condition that the user can only run commands on managed nodes that are tagged with specific tags. For more information, see [Restricting Run Command access based on tags](#tag-based-access).

## Restricting Run Command access based on tags


This section describes how to restrict a user's ability to run commands on managed nodes by specifying a tag condition in an IAM policy. Managed nodes include Amazon EC2 instances and non-EC2 nodes in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment that are configured for Systems Manager. Though the information is not explicitly presented, you can also restrict access to managed AWS IoT Greengrass core devices. To get started, you must tag your AWS IoT Greengrass devices. For more information, see [Tag your AWS IoT Greengrass Version 2 resources](https://docs.aws.amazon.com/greengrass/v2/developerguide/tag-resources.html) in the *AWS IoT Greengrass Version 2 Developer Guide*.

You can restrict command execution to specific managed nodes by creating an IAM policy that includes a condition that the user can only run commands on nodes with specific tags. In the following example, the user is allowed to use Run Command (`Effect: Allow, Action: ssm:SendCommand`) by using any SSM document (`Resource: arn:aws:ssm:*:*:document/*`) on any node (`Resource: arn:aws:ec2:*:*:instance/*`) with the condition that the node is tagged `Finance: WebServers` (`ssm:resourceTag/Finance: WebServers`). If the user sends a command to a node that isn't tagged or that has any tag other than `Finance: WebServers`, the execution results show `AccessDenied`.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "ssm:SendCommand"
         ],
         "Resource":[
            "arn:aws:ssm:*:*:document/*"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "ssm:SendCommand"
         ],
         "Resource":[
            "arn:aws:ec2:*:*:instance/*"
         ],
         "Condition":{
            "StringLike":{
               "ssm:resourceTag/Finance":[
                  "WebServers"
               ]
            }
         }
      }
   ]
}
```

------

You can create IAM policies that allow a user to run commands on managed nodes that are tagged with multiple tags. The following policy allows the user to run commands on managed nodes that have two tags. If a user sends a command to a node that isn't tagged with both of these tags, the execution results show `AccessDenied`.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "ssm:SendCommand"
         ],
         "Resource":"*",
         "Condition":{
            "StringLike":{
               "ssm:resourceTag/tag_key1":[
                  "tag_value1"
               ],
               "ssm:resourceTag/tag_key2":[
                  "tag_value2"
               ]
            }
         }
      },
      {
         "Effect":"Allow",
         "Action":[
            "ssm:SendCommand"
         ],
         "Resource":[
            "arn:aws:ssm:us-west-1::document/AWS-*",
            "arn:aws:ssm:us-east-2::document/AWS-*"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "ssm:UpdateInstanceInformation",
            "ssm:ListCommands",
            "ssm:ListCommandInvocations",
            "ssm:GetDocument"
         ],
         "Resource":"*"
      }
   ]
}
```

------

You can also create IAM policies that allow a user to run commands on multiple groups of tagged managed nodes. The following example policy allows the user to run commands on either group of tagged nodes, or on both groups.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "ssm:SendCommand"
         ],
         "Resource":"*",
         "Condition":{
            "StringLike":{
               "ssm:resourceTag/tag_key1":[
                  "tag_value1"
               ]
            }
         }
      },
      {
         "Effect":"Allow",
         "Action":[
            "ssm:SendCommand"
         ],
         "Resource":"*",
         "Condition":{
            "StringLike":{
               "ssm:resourceTag/tag_key2":[
                  "tag_value2"
               ]
            }
         }
      },
      {
         "Effect":"Allow",
         "Action":[
            "ssm:SendCommand"
         ],
         "Resource":[
            "arn:aws:ssm:us-west-1::document/AWS-*",
            "arn:aws:ssm:us-east-2::document/AWS-*"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "ssm:UpdateInstanceInformation",
            "ssm:ListCommands",
            "ssm:ListCommandInvocations",
            "ssm:GetDocument"
         ],
         "Resource":"*"
      }
   ]
}
```

------

For more information about creating IAM policies, see [Managed policies and inline policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html) in the *IAM User Guide*. For more information about tagging managed nodes, see [Tag Editor](https://docs.aws.amazon.com/ARG/latest/userguide/tag-editor.html) in the *AWS Resource Groups User Guide*. 
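
As a hedged sketch, you could apply a tag that satisfies a `ssm:resourceTag` condition like the one shown earlier by tagging the instance with the AWS CLI. The helper name and instance ID are placeholders.

```shell
# Sketch: tag an EC2 instance with Finance=WebServers so it matches a
# tag-based ssm:SendCommand condition. The instance ID is a placeholder.
tag_instance_for_runcommand() {
  local instance_id="$1"
  aws ec2 create-tags \
      --resources "$instance_id" \
      --tags Key=Finance,Value=WebServers
}
```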

# Running commands on managed nodes


This section includes information about how to send commands from the AWS Systems Manager console to managed nodes. This section also includes information about how to cancel a command.

Note that if your node is configured with the `noexec` mount option for the `/var` directory, Run Command can't successfully run commands.

**Important**  
When you send a command using Run Command, don't include sensitive information formatted as plaintext, such as passwords, configuration data, or other secrets. All Systems Manager API activity in your account is logged to an Amazon S3 bucket by AWS CloudTrail. This means that any user with access to the S3 bucket can view the plaintext values of those secrets. For this reason, we recommend creating and using `SecureString` parameters to encrypt sensitive data you use in your Systems Manager operations.  
For more information, see [Restricting access to Parameter Store parameters using IAM policies](sysman-paramstore-access.md).

**Execution history retention**  
The history of each command is available for up to 30 days. In addition, you can store a copy of all log files in Amazon Simple Storage Service (Amazon S3), or maintain an audit trail of all API calls in AWS CloudTrail.
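
For logs that need to outlive the 30-day command history, the `send-command` operation accepts S3 output options. The following is a minimal sketch; the helper name, document, bucket, and prefix are illustrative.

```shell
# Sketch: send a command and write its output to an S3 bucket so logs
# persist beyond the 30-day command history. Bucket name and key
# prefix are placeholders.
send_command_with_s3_logs() {
  local instance_id="$1" bucket="$2"
  aws ssm send-command \
      --document-name "AWS-RunShellScript" \
      --parameters commands="echo Hello" \
      --instance-ids "$instance_id" \
      --output-s3-bucket-name "$bucket" \
      --output-s3-key-prefix "run-command-logs"
}
```

Remember that the write to the bucket uses the permissions of the instance profile or service role attached to the node, not those of the caller.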

**Related information**  
For information about sending commands using other tools, see the following topics: 
+ [Walkthrough: Use the AWS Tools for Windows PowerShell with Run Command](walkthrough-powershell.md) or the examples in the [AWS Systems Manager section of the AWS Tools for PowerShell Cmdlet Reference](https://docs.aws.amazon.com/powershell/latest/reference/items/AWS_Systems_Manager_cmdlets.html).
+ [Walkthrough: Use the AWS CLI with Run Command](walkthrough-cli.md) or the examples in the [SSM CLI Reference](https://docs.aws.amazon.com/cli/latest/reference/ssm/)

**Topics**
+ [

# Running commands from the console
](running-commands-console.md)
+ [

# Running commands using a specific document version
](run-command-version.md)
+ [

# Run commands at scale
](send-commands-multiple.md)
+ [

# Canceling a command
](cancel-run-command.md)

# Running commands from the console


You can use Run Command, a tool in AWS Systems Manager, from the AWS Management Console to configure managed nodes without having to log into them. This topic includes an example that shows how to [update SSM Agent](run-command-tutorial-update-software.md#rc-console-agentexample) on a managed node by using Run Command.

**Before you begin**  
Before you send a command using Run Command, verify that your managed nodes meet all Systems Manager [setup requirements](systems-manager-setting-up-nodes.md).

**To send a command using Run Command**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Run Command**.

1. Choose **Run command**.

1. In the **Command document** list, choose a Systems Manager document.

1. In the **Command parameters** section, specify values for required parameters.

1. In the **Targets** section, choose the managed nodes on which you want to run this operation by specifying tags, selecting instances or edge devices manually, or specifying a resource group.
**Tip**  
If a managed node you expect to see isn't listed, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md) for troubleshooting tips.

1. For **Other parameters**:
   + For **Comment**, enter information about this command.
   + For **Timeout (seconds)**, specify the number of seconds for the system to wait before failing the overall command execution. 

1. For **Rate control**:
   + For **Concurrency**, specify either a number or a percentage of managed nodes on which to run the command at the same time.
**Note**  
If you selected targets by specifying tags applied to managed nodes or by specifying AWS resource groups, and you aren't certain how many managed nodes are targeted, then restrict the number of targets that can run the document at the same time by specifying a percentage.
   + For **Error threshold**, specify when to stop running the command on other managed nodes after it fails on either a number or a percentage of nodes. For example, if you specify three errors, then Systems Manager stops sending the command when the fourth error is received. Managed nodes still processing the command might also send errors.

1. (Optional) Choose a CloudWatch alarm to apply to your command for monitoring. To attach a CloudWatch alarm to your command, the IAM principal that runs the command must have permission for the `iam:createServiceLinkedRole` action. For more information about CloudWatch alarms, see [Using Amazon CloudWatch alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html). Note that if your alarm activates, any pending command invocations do not run.

1. (Optional) For **Output options**, to save the command output to a file, select the **Write command output to an S3 bucket** box. Enter the bucket and prefix (folder) names in the boxes.
**Note**  
The S3 permissions that grant the ability to write the data to an S3 bucket are those of the instance profile (for EC2 instances) or IAM service role (hybrid-activated machines) assigned to the instance, not those of the IAM user performing this task. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md) or [Create an IAM service role for a hybrid environment](hybrid-multicloud-service-role.md). In addition, if the specified S3 bucket is in a different AWS account, make sure that the instance profile or IAM service role associated with the managed node has the necessary permissions to write to that bucket.

1. In the **SNS notifications** section, if you want notifications sent about the status of the command execution, select the **Enable SNS notifications** check box.

   For more information about configuring Amazon SNS notifications for Run Command, see [Monitoring Systems Manager status changes using Amazon SNS notifications](monitoring-sns-notifications.md).

1. Choose **Run**.

For information about canceling a command, see [Canceling a command](cancel-run-command.md). 
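
As a hedged sketch, canceling from the AWS CLI uses the `cancel-command` operation with the command ID returned by `send-command`; the helper name and ID below are placeholders.

```shell
# Sketch: cancel a running command by its command ID. The ID is a
# placeholder returned from a previous send-command call.
cancel_run_command() {
  local command_id="$1"
  aws ssm cancel-command --command-id "$command_id"
}
```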

## Rerunning commands


Systems Manager includes two options to help you rerun a command from the **Run Command** page in the Systems Manager console. 
+ **Rerun**: This button allows you to run the same command without making changes to it.
+ **Copy to new**: This button copies the settings of one command to a new command and gives you the option to edit those settings before you run it.

**To rerun a command**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Run Command**.

1. Choose a command to rerun. You can rerun a command immediately after executing it from the command details page. Or, you can choose a command that you previously ran from the **Command history** tab.

1. Choose either **Rerun** to run the same command without changes, or choose **Copy to new** to edit the command settings before you run it.

# Running commands using a specific document version


You can use the document version parameter to specify which version of an AWS Systems Manager document to use when the command runs. You can specify one of the following options for this parameter:
+ `$DEFAULT`
+ `$LATEST`
+ Version number

Run the following procedure to run a command using the document version parameter. 

------
#### [ Linux ]

**To run commands using the AWS CLI on local Linux machines**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. List all available documents.

   This command lists all of the documents available for your account based on AWS Identity and Access Management (IAM) permissions.

   ```
   aws ssm list-documents
   ```

1. Run the following command to view the different versions of a document. Replace *document name* with your own information.

   ```
   aws ssm list-document-versions \
       --name "document name"
   ```

1. Run the following command to run a command that uses an SSM document version. Replace each *example resource placeholder* with your own information.

   ```
   aws ssm send-command \
       --document-name "AWS-RunShellScript" \
       --parameters commands="echo Hello" \
       --instance-ids instance-ID \
       --document-version '$LATEST'
   ```

------
#### [ Windows ]

**To run commands using the AWS CLI on local Windows machines**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. List all available documents.

   This command lists all of the documents available for your account based on AWS Identity and Access Management (IAM) permissions.

   ```
   aws ssm list-documents
   ```

1. Run the following command to view the different versions of a document. Replace *document name* with your own information.

   ```
   aws ssm list-document-versions ^
       --name "document name"
   ```

1. Run the following command to run a command that uses an SSM document version. Replace each *example resource placeholder* with your own information.

   ```
   aws ssm send-command ^
       --document-name "AWS-RunShellScript" ^
       --parameters commands="echo Hello" ^
       --instance-ids instance-ID ^
       --document-version "$LATEST"
   ```

------
#### [ PowerShell ]

**To run commands using the Tools for PowerShell**

1. Install and configure the AWS Tools for PowerShell (Tools for Windows PowerShell), if you haven't already.

   For information, see [Installing the AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html).

1. List all available documents.

   This command lists all of the documents available for your account based on AWS Identity and Access Management (IAM) permissions.

   ```
   Get-SSMDocumentList
   ```

1. Run the following command to view the different versions of a document. Replace *document name* with your own information.

   ```
   Get-SSMDocumentVersionList `
       -Name "document name"
   ```

1. Run the following command to run a command that uses an SSM document version. Replace each *example resource placeholder* with your own information.

   ```
   Send-SSMCommand `
       -DocumentName "AWS-RunShellScript" `
       -Parameter @{commands = "echo helloWorld"} `
       -InstanceIds "instance-ID" `
       -DocumentVersion '$LATEST'
   ```

------

# Run commands at scale


You can use Run Command, a tool in AWS Systems Manager, to run commands on a fleet of managed nodes by using the `targets` parameter. The `targets` parameter accepts a `Key,Value` combination based on tags that you specified for your managed nodes. When you run the command, the system locates and attempts to run the command on all managed nodes that match the specified tags. For more information about tagging managed instances, see [Tagging your AWS resources](https://docs.aws.amazon.com/tag-editor/latest/userguide/tag-editor.html) in the *Tagging AWS Resources User Guide*. For information about tagging your managed IoT devices, see [Tag your AWS IoT Greengrass Version 2 resources](https://docs.aws.amazon.com/greengrass/v2/developerguide/tag-resources.html) in the *AWS IoT Greengrass Version 2 Developer Guide*. 

You can also use the `targets` parameter to target a list of specific managed node IDs, as described in the next section.

To control how commands run across hundreds or thousands of managed nodes, Run Command also includes parameters for restricting how many nodes can simultaneously process a request and how many errors can be thrown by a command before the command is canceled.
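
In the AWS CLI, these rate controls correspond to the `--max-concurrency` and `--max-errors` options of `send-command`. The following is a minimal sketch; the helper name, document, and tag are illustrative placeholders.

```shell
# Sketch: run a command across a tagged fleet while limiting blast
# radius. At most 10% of targets run concurrently, and the command
# stops being sent to new targets after 5 errors. The tag key and
# value are placeholders.
send_with_rate_controls() {
  aws ssm send-command \
      --document-name "AWS-RunShellScript" \
      --parameters commands="echo Hello" \
      --targets "Key=tag:Environment,Values=Production" \
      --max-concurrency "10%" \
      --max-errors "5"
}
```

Specifying concurrency as a percentage is useful when you target by tags or resource groups and don't know in advance how many nodes will match.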

**Topics**
+ [

## Targeting multiple managed nodes
](#send-commands-targeting)
+ [

## Using rate controls
](#send-commands-rate)

## Targeting multiple managed nodes


You can run a command and target managed nodes by specifying tags, AWS resource group names, or managed node IDs. 

The following examples show the command format when using Run Command from the AWS Command Line Interface (AWS CLI). Replace each *example resource placeholder* with your own information. Sample commands in this section are truncated using `[...]`.

**Example 1: Targeting tags**

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name document-name \
    --targets Key=tag:tag-name,Values=tag-value \
    [...]
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name document-name ^
    --targets Key=tag:tag-name,Values=tag-value ^
    [...]
```

------

**Example 2: Targeting an AWS resource group by name**

You can specify a maximum of one resource group name per command. When you create a resource group, we recommend including `AWS::SSM::ManagedInstance` and `AWS::EC2::Instance` as resource types in your grouping criteria.

**Note**  
In order to send commands that target a resource group, you must have been granted AWS Identity and Access Management (IAM) permissions to list or view the resources that belong to that group. For more information, see [Set up permissions](https://docs.aws.amazon.com/ARG/latest/userguide/gettingstarted-prereqs.html#gettingstarted-prereqs-permissions) in the *AWS Resource Groups User Guide*. 

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name document-name \
    --targets Key=resource-groups:Name,Values=resource-group-name \
    [...]
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name document-name ^
    --targets Key=resource-groups:Name,Values=resource-group-name ^
    [...]
```

------

**Example 3: Targeting an AWS resource group by resource type**

You can specify a maximum of five resource group types per command. When you create a resource group, we recommend including `AWS::SSM::ManagedInstance` and `AWS::EC2::Instance` as resource types in your grouping criteria.

**Note**  
In order to send commands that target a resource group, you must have been granted IAM permissions to list or view the resources that belong to that group. For more information, see [Set up permissions](https://docs.aws.amazon.com/ARG/latest/userguide/gettingstarted-prereqs.html#gettingstarted-prereqs-permissions) in the *AWS Resource Groups User Guide*.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name document-name \
    --targets Key=resource-groups:ResourceTypeFilters,Values=resource-type-1,resource-type-2 \
    [...]
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name document-name ^
    --targets Key=resource-groups:ResourceTypeFilters,Values=resource-type-1,resource-type-2 ^
    [...]
```

------

**Example 4: Targeting instance IDs**

The following examples show how to target managed nodes by using the `instanceids` key with the `targets` parameter. You can use this key to target managed AWS IoT Greengrass core devices because each device is assigned an ID in the format mi-*ID_number*. You can view device IDs in Fleet Manager, a tool in AWS Systems Manager.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name document-name \
    --targets Key=instanceids,Values=instance-ID-1,instance-ID-2,instance-ID-3 \
    [...]
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name document-name ^
    --targets Key=instanceids,Values=instance-ID-1,instance-ID-2,instance-ID-3 ^
    [...]
```

------

If you tagged managed nodes for different environments using a `Key` named `Environment` and `Values` of `Development`, `Test`, `Pre-production`, and `Production`, then you could send a command to all managed nodes in *one* of these environments by using the `targets` parameter with the following syntax.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name document-name \
    --targets Key=tag:Environment,Values=Development \
    [...]
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name document-name ^
    --targets Key=tag:Environment,Values=Development ^
    [...]
```

------

You could target additional managed nodes in other environments by adding to the `Values` list. Separate items using commas.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name document-name \
    --targets Key=tag:Environment,Values=Development,Test,Pre-production \
    [...]
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name document-name ^
    --targets Key=tag:Environment,Values=Development,Test,Pre-production ^
    [...]
```

------

**Variation**: Refining your targets using multiple `Key` criteria

You can refine the number of targets for your command by including multiple `Key` criteria. If you include more than one `Key` criteria, the system targets managed nodes that meet *all* of the criteria. The following command targets all managed nodes tagged for the Finance Department *and* tagged for the database server role.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name document-name \
    --targets Key=tag:Department,Values=Finance Key=tag:ServerRole,Values=Database \
    [...]
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name document-name ^
    --targets Key=tag:Department,Values=Finance Key=tag:ServerRole,Values=Database ^
    [...]
```

------

**Variation**: Using multiple `Key` and `Value` criteria

Expanding on the previous example, you can target multiple departments and multiple server roles by including additional items in the `Values` criteria.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name document-name \
    --targets Key=tag:Department,Values=Finance,Marketing Key=tag:ServerRole,Values=WebServer,Database \
    [...]
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name document-name ^
    --targets Key=tag:Department,Values=Finance,Marketing Key=tag:ServerRole,Values=WebServer,Database ^
    [...]
```

------

**Variation**: Targeting tagged managed nodes using multiple `Values` criteria

If you tagged managed nodes for different environments using a `Key` named `Department` and `Values` of `Sales` and `Finance`, then you could send a command to all of the nodes in these environments by using the `targets` parameter with the following syntax.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name document-name \
    --targets Key=tag:Department,Values=Sales,Finance \
    [...]
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name document-name ^
    --targets Key=tag:Department,Values=Sales,Finance ^
    [...]
```

------

You can specify a maximum of five keys and five values for each key.

If either a tag key (the tag name) or a tag value includes spaces, enclose the tag key or the value in quotation marks, as shown in the following examples.

**Example**: Spaces in `Value` tag

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name document-name \
    --targets Key=tag:OS,Values="Windows Server 2016" \
    [...]
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name document-name ^
    --targets Key=tag:OS,Values="Windows Server 2016" ^
    [...]
```

------

**Example**: Spaces in `tag` key and `Value`

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name document-name \
    --targets Key="tag:Operating System",Values="Windows Server 2016" \
    [...]
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name document-name ^
    --targets Key="tag:Operating System",Values="Windows Server 2016" ^
    [...]
```

------

**Example**: Spaces in one item in a list of `Values`

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name document-name \
    --targets Key=tag:Department,Values="Sales","Finance","Systems Mgmt" \
    [...]
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name document-name ^
    --targets Key=tag:Department,Values="Sales","Finance","Systems Mgmt" ^
    [...]
```

------
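The quotation marks matter because the shell splits unquoted arguments on spaces before the AWS CLI ever sees them. Here is a quick, self-contained illustration of that word splitting; the `count_args` helper function exists only for this demonstration:

```shell
#!/bin/bash
# Demonstrates shell word splitting: without quotation marks, a value
# containing spaces reaches a command as several separate arguments.
count_args() { echo "argument count: $#"; }

count_args Key=tag:OS,Values=Windows Server 2016     # splits into 3 arguments
count_args Key=tag:OS,Values="Windows Server 2016"   # stays a single argument
```

The first call prints an argument count of 3, the second a count of 1, which is why an unquoted multi-word tag value makes `send-command` fail or target the wrong key.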

## Using rate controls


You can control the rate at which commands are sent to managed nodes in a group by using *concurrency controls* and *error controls*.

**Topics**
+ [

### Using concurrency controls
](#send-commands-velocity)
+ [

### Using error controls
](#send-commands-maxerrors)

### Using concurrency controls


You can control the number of managed nodes that run a command simultaneously by using the `max-concurrency` parameter (the **Concurrency** options on the **Run a command** page). You can specify either an absolute number of managed nodes, for example **10**, or a percentage of the target set, for example **10%**. The queueing system delivers the command to a single node and waits until the system acknowledges the initial invocation before sending the command to two more nodes. The system exponentially sends commands to more nodes until the value of `max-concurrency` is met. The default value for `max-concurrency` is 50. The following examples show you how to specify values for the `max-concurrency` parameter.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name document-name \
    --max-concurrency 10 \
    --targets Key=tag:Environment,Values=Development \
    [...]
```

```
aws ssm send-command \
    --document-name document-name \
    --max-concurrency 10% \
    --targets Key=tag:Department,Values=Finance,Marketing Key=tag:ServerRole,Values=WebServer,Database \
    [...]
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name document-name ^
    --max-concurrency 10 ^
    --targets Key=tag:Environment,Values=Development ^
    [...]
```

```
aws ssm send-command ^
    --document-name document-name ^
    --max-concurrency 10% ^
    --targets Key=tag:Department,Values=Finance,Marketing Key=tag:ServerRole,Values=WebServer,Database ^
    [...]
```

------
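The ramp-up described above (one node, then more, growing exponentially up to the `max-concurrency` cap) can be sketched as follows. The doubling schedule is an assumption for illustration only; the service's exact internal pacing isn't documented:

```shell
#!/bin/bash
# Rough illustration of the documented ramp-up: start with one node and
# grow the batch size, never exceeding max-concurrency. The doubling
# pattern is assumed for illustration; the real service logic is internal.
max_concurrency=50
remaining=120        # hypothetical number of targeted nodes
batch=1
while [ "$remaining" -gt 0 ]; do
    if [ "$batch" -gt "$max_concurrency" ]; then batch=$max_concurrency; fi
    if [ "$batch" -gt "$remaining" ]; then batch=$remaining; fi
    echo "dispatching to $batch node(s)"
    remaining=$((remaining - batch))
    batch=$((batch * 2))
done
```

Whatever the growth schedule, no batch ever exceeds the `max-concurrency` value.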

### Using error controls


You can also control the execution of a command to hundreds or thousands of managed nodes by setting an error limit using the `max-errors` parameter (the **Error threshold** field on the **Run a command** page). The parameter specifies how many errors are allowed before the system stops sending the command to additional managed nodes. You can specify either an absolute number of errors, for example **10**, or a percentage of the target set, for example **10%**. If you specify **3**, for example, the system stops sending the command when the fourth error is received. If you specify **0**, the system stops sending the command to additional managed nodes after the first error result is returned. If you send a command to 50 managed nodes and set `max-errors` to **10%**, the system stops sending the command to additional nodes when the sixth error is received.

Invocations that are already running a command when `max-errors` is reached are allowed to complete, but some of these invocations might fail as well. If you need to ensure that there won't be more than `max-errors` failed invocations, set `max-concurrency` to **1** so the invocations proceed one at a time. The default value for `max-errors` is **0**. The following examples show you how to specify values for the `max-errors` parameter.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name document-name \
    --max-errors 10 \
    --targets Key=tag:Database,Values=Development \
    [...]
```

```
aws ssm send-command \
    --document-name document-name \
    --max-errors 10% \
    --targets Key=tag:Environment,Values=Development \
    [...]
```

```
aws ssm send-command \
    --document-name document-name \
    --max-concurrency 1 \
    --max-errors 1 \
    --targets Key=tag:Environment,Values=Production \
    [...]
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name document-name ^
    --max-errors 10 ^
    --targets Key=tag:Database,Values=Development ^
    [...]
```

```
aws ssm send-command ^
    --document-name document-name ^
    --max-errors 10% ^
    --targets Key=tag:Environment,Values=Development ^
    [...]
```

```
aws ssm send-command ^
    --document-name document-name ^
    --max-concurrency 1 ^
    --max-errors 1 ^
    --targets Key=tag:Environment,Values=Production ^
    [...]
```

------
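The percentage arithmetic described earlier (50 targets with `max-errors` set to 10%) works out as follows; this is only the arithmetic behind the documented behavior, not a call to the service:

```shell
#!/bin/bash
# Arithmetic behind the 10% example: 10% of 50 targets allows 5 errors,
# so the command stops when the 6th error is received.
targets=50
max_errors_percent=10
allowed=$(( targets * max_errors_percent / 100 ))
echo "allowed errors: $allowed; sending stops at error number $(( allowed + 1 ))"
```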

# Canceling a command


You can attempt to cancel a command as long as the service shows that it's in either a Pending or Executing state. However, even if a command is still in one of these states, we can't guarantee that the command will be canceled and the underlying process stopped. 

**To cancel a command using the console**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Run Command**.

1. Select the command invocation that you want to cancel.

1. Choose **Cancel command**.

**To cancel a command using the AWS CLI**  
Run the following command. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

```
aws ssm cancel-command \
    --command-id "command-ID" \
    --instance-ids "instance-ID"
```

------
#### [ Windows ]

```
aws ssm cancel-command ^
    --command-id "command-ID" ^
    --instance-ids "instance-ID"
```

------

For information about the status of a canceled command, see [Understanding command statuses](monitor-commands.md).

# Using exit codes in commands


In some cases, you might need to manage how your commands are handled by using exit codes.

## Specify exit codes in commands


Using Run Command, a tool in AWS Systems Manager, you can specify exit codes to determine how commands are handled. By default, the exit code of the last command run in a script is reported as the exit code for the entire script. For example, suppose a script contains three commands, and the first one fails but the following two succeed. Because the final command succeeds, the status of the execution is reported as `Success`.

**Shell scripts**  
To fail the entire script at the first command failure, you can include a shell conditional statement to exit the script if any command before the final one fails. Use the following approach.

```
<command 1>
if [ $? != 0 ]
then
    exit <N>
fi
<command 2>
<command 3>
```

In the following example, the entire script fails if the first command fails.

```
cd /test
if [ $? != 0 ]
then
    echo "Failed"
    exit 1
fi
date
```
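An equivalent, more concise shell pattern uses the shell's `errexit` option: with `set -e`, the script exits at the first failing command, so no conditional block is needed after each step. This is standard shell behavior, shown here as an alternative sketch:

```shell
#!/bin/bash
# With "set -e", the shell exits as soon as any command returns a
# non-zero exit code, so no explicit "if [ $? != 0 ]" check is needed.
set -e
cd /tmp      # replace with your own directory; a failure here ends the script
date
```

Note that `set -e` applies to every command in the script, so use the explicit conditional form when only specific commands should be allowed to fail the script.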

**PowerShell scripts**  
PowerShell requires that you call `exit` explicitly in your scripts for Run Command to successfully capture the exit code.

```
<command 1>
if ($?) {<do something>}
else {exit <N>}
<command 2>
<command 3>
exit <N>
```

Here is an example:

```
cd C:\
if ($?) {echo "Success"}
else {exit 1}
date
```

# Handling reboots when running commands


If you use Run Command, a tool in AWS Systems Manager, to run scripts that reboot managed nodes, we recommend that you specify an exit code in your script. If you attempt to reboot a node from a script by using some other mechanism, the script execution status might not be updated correctly, even if the reboot is the last step in your script. For Windows managed nodes, specify `exit 3010` in your script. For Linux and macOS managed nodes, specify `exit 194`. The exit code instructs AWS Systems Manager Agent (SSM Agent) to reboot the managed node and then restart the script after the reboot is complete. Before starting the reboot, SSM Agent informs the Systems Manager service in the cloud that communication will be disrupted during the server reboot.

**Note**  
The reboot script can't be part of an `aws:runDocument` plugin. If a document contains the reboot script and another document tries to run that document through the `aws:runDocument` plugin, SSM Agent returns an error.

**Create idempotent scripts**

When developing scripts that reboot managed nodes, make the scripts idempotent so the script execution continues where it left off after the reboot. Idempotent scripts manage state and verify whether an action has already been performed. This prevents a step from running multiple times when it's only intended to run once.

Here is an outline example of an idempotent script that reboots a managed node multiple times.

```
$name = Get current computer name
If ($name -ne $desiredName)
    {
        Rename computer
        exit 3010
    }
            
$domain = Get current domain name
If ($domain -ne $desiredDomain)
    {
        Join domain
        exit 3010
    }
            
If (desired package not installed) 
    {
        Install package
        exit 3010
    }
```
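For Linux managed nodes, the same idea can be sketched in bash with a marker file that records whether the one-time step already ran. The function name, marker path, and the "step" itself are all illustrative placeholders:

```shell
#!/bin/bash
# Idempotent sketch: a marker file records that the one-time step has
# already run, so the step is skipped when SSM Agent restarts the script
# after the reboot. In a real SSM script you would "exit 194" at top
# level; the function returns the code here so the demo can continue.
run_once_then_reboot() {
    local marker="$1"
    if [ ! -f "$marker" ]; then
        echo "running one-time step"   # e.g. install a package here
        touch "$marker"
        return 194                     # caller exits 194 so SSM Agent reboots and reruns
    fi
    echo "step already done, continuing"
    return 0
}

marker=$(mktemp -u)                    # stand-in for a persistent path such as /var/tmp/step.done
run_once_then_reboot "$marker" || echo "first run would exit 194 and reboot"
run_once_then_reboot "$marker"         # after the reboot: step is skipped
```

Because the marker survives the reboot, the step runs exactly once no matter how many times the script is restarted.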

**Examples**

The following script samples use exit codes to restart managed nodes. The Linux example installs package updates on Amazon Linux, and then restarts the node. The Windows Server example installs the Telnet-Client on the node, and then restarts it. 

------
#### [ Amazon Linux 2 ]

```
#!/bin/bash
yum -y update
needs-restarting -r
if [ $? -eq 1 ]
then
        exit 194
else
        exit 0
fi
```

------
#### [ Windows ]

```
$telnet = Get-WindowsFeature -Name Telnet-Client
if (-not $telnet.Installed)
    { 
        # Install Telnet and then send a reboot request to SSM Agent.
        Install-WindowsFeature -Name "Telnet-Client"
        exit 3010 
    }
```

------

# Understanding command statuses


Run Command, a tool in AWS Systems Manager, reports detailed status information about the different states a command experiences during processing and for each managed node that processed the command. You can monitor command statuses using the following methods:
+ Choose the **Refresh** icon on the **Commands** tab in the Run Command console interface.
+ Call [list-commands](https://docs.aws.amazon.com/cli/latest/reference/ssm/list-commands.html) or [list-command-invocations](https://docs.aws.amazon.com/cli/latest/reference/ssm/list-command-invocations.html) using the AWS Command Line Interface (AWS CLI). Or call [Get-SSMCommand](https://docs.aws.amazon.com/powershell/latest/reference/items/Get-SSMCommand.html) or [Get-SSMCommandInvocation](https://docs.aws.amazon.com/powershell/latest/reference/items/Get-SSMCommandInvocation.html) using AWS Tools for Windows PowerShell.
+ Configure Amazon EventBridge to respond to state or status changes.
+ Configure Amazon Simple Notification Service (Amazon SNS) to send notifications for all status changes or specific statuses such as `Failed` or `TimedOut`.

## Run Command status


Run Command reports status details for three areas: plugins, invocations, and an overall command status. A *plugin* is a code-execution block that is defined in your command's SSM document. For more information about plugins, see [Command document plugin reference](documents-command-ssm-plugin-reference.md).

When you send a command to multiple managed nodes at the same time, each copy of the command targeting each node is a *command invocation*. For example, if you use the `AWS-RunShellScript` document and send an `ifconfig` command to 20 Linux instances, that command has 20 invocations. Each command invocation individually reports status. The plugins for a given command invocation individually report status as well. 

Lastly, Run Command includes an aggregated command status for all plugins and invocations. The aggregated command status can be different from the status reported by plugins or invocations, as noted in the following tables.

**Note**  
If you run commands on large numbers of managed nodes using the `max-concurrency` or `max-errors` parameters, command status reflects the limits imposed by those parameters, as described in the following tables. For more information about these parameters, see [Run commands at scale](send-commands-multiple.md).


**Detailed status for command plugins and invocations**  

| Status | Details | 
| --- | --- | 
| Pending | The command hasn't yet been sent to the managed node or hasn't been received by SSM Agent. If the agent doesn't receive the command within a period equal to the sum of the Timeout (seconds) parameter and the Execution timeout parameter, the status changes to Delivery Timed Out. | 
| InProgress | Systems Manager is attempting to send the command to the managed node, or the command was received by SSM Agent and has started running on the instance. Depending on the result of all command plugins, the status changes to Success, Failed, Delivery Timed Out, or Execution Timed Out. Exception: If the agent isn't running or available on the node, the command status remains at In Progress until the agent is available again, or until the execution timeout limit is reached. The status then changes to a terminal state. | 
| Delayed | The system attempted to send the command to the managed node but wasn't successful. The system retries again. | 
| Success | This status is returned under a variety of conditions. This status doesn't mean the command was processed on the node. For example, the command can be received by SSM Agent on the managed node and return an exit code of zero as a result of your PowerShell ExecutionPolicy preventing the command from running. This is a terminal state. Conditions that result in a command returning a Success status are: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/monitor-commands.html)  The same conditions apply when targeting resource groups. To troubleshoot errors or get more information about the command execution, send a command that handles errors or exceptions by returning appropriate exit codes (non-zero exit codes for command failure).  | 
| DeliveryTimedOut | The command wasn't delivered to the managed node before the total timeout expired. Total timeouts don't count against the parent command’s max-errors limit, but they do contribute to whether the parent command status is Success, Incomplete, or Delivery Timed Out. This is a terminal state. | 
| ExecutionTimedOut | Command execution started on the managed node, but the command wasn't completed before the execution timeout expired. An execution timeout counts as a failure: SSM Agent sends a non-zero reply, Systems Manager stops the attempt to run the command, and a failure status is reported. | 
| Failed |  The command wasn't successful on the managed node. For a plugin, this indicates that the result code wasn't zero. For a command invocation, this indicates that the result code for one or more plugins wasn't zero. Invocation failures count against the max-errors limit of the parent command. This is a terminal state. | 
| Cancelled | The command was canceled before it was completed. This is a terminal state. | 
| Undeliverable | The command can't be delivered to the managed node. The node might not exist or it might not be responding. Undeliverable invocations don't count against the parent command’s max-errors limit, but they do contribute to whether the parent command status is Success or Incomplete. For example, if all invocations in a command have the status Undeliverable, then the command status returned is Failed. However, if a command has five invocations, four of which return the status Undeliverable and one of which returns the status Success, then the parent command's status is Success. This is a terminal state. | 
| Terminated | The parent command exceeded its max-errors limit and subsequent command invocations were canceled by the system. This is a terminal state. | 
| InvalidPlatform | The command was sent to a managed node that didn't match the required platforms specified by the chosen document. Invalid Platform doesn't count against the parent command’s max-errors limit, but it does contribute to whether the parent command status is Success or Failed. For example, if all invocations in a command have the status Invalid Platform, then the command status returned is Failed. However, if a command has five invocations, four of which return the status Invalid Platform and one of which returns the status Success, then the parent command's status is Success. This is a terminal state. | 
| AccessDenied | The AWS Identity and Access Management (IAM) user or role initiating the command doesn't have access to the targeted managed node. Access Denied doesn't count against the parent command’s max-errors limit, but it does contribute to whether the parent command status is Success or Failed. For example, if all invocations in a command have the status Access Denied, then the command status returned is Failed. However, if a command has five invocations, four of which return the status Access Denied and one of which returns the status Success, then the parent command's status is Success. This is a terminal state. | 


**Detailed status for a command**  

| Status | Details | 
| --- | --- | 
| Pending | The command wasn't yet received by an agent on any managed nodes. | 
| InProgress | The command has been sent to at least one managed node but hasn't reached a final state on all nodes.  | 
| Delayed | The system attempted to send the command to the node but wasn't successful. The system retries again. | 
| Success | The command was received by SSM Agent on all specified or targeted managed nodes and returned an exit code of zero. All command invocations have reached a terminal state, and the value of max-errors wasn't reached. This status doesn't mean the command was successfully processed on all specified or targeted managed nodes. This is a terminal state.  To troubleshoot errors or get more information about the command execution, send a command that handles errors or exceptions by returning appropriate exit codes (non-zero exit codes for command failure).  | 
| DeliveryTimedOut | The command wasn't delivered to the managed node before the total timeout expired. The value of max-errors or more command invocations shows a status of Delivery Timed Out. This is a terminal state. | 
| Failed |  The command wasn't successful on the managed node. The value of `max-errors` or more command invocations shows a status of `Failed`. This is a terminal state.  | 
| Incomplete | The command was attempted on all managed nodes and one or more of the invocations doesn't have a value of Success. However, not enough invocations failed for the status to be Failed. This is a terminal state. | 
| Cancelled | The command was canceled before it was completed. This is a terminal state. | 
| RateExceeded | The number of managed nodes targeted by the command exceeded the account quota for pending invocations. The system canceled the command before running it on any node. This is a terminal state. | 
| AccessDenied | The user or role initiating the command doesn't have access to the targeted resource group. AccessDenied doesn't count against the parent command’s max-errors limit, but does contribute to whether the parent command status is Success or Failed. (For example, if all invocations in a command have the status AccessDenied, then the command status returned is Failed. However, if a command has 5 invocations, 4 of which return the status AccessDenied and 1 of which returns the status Success, then the parent command's status is Success.) This is a terminal state. | 
| No Instances In Tag | The tag key-value pair or resource group targeted by the command doesn't match any managed nodes. This is a terminal state. | 

## Understanding command timeout values


Systems Manager enforces the following timeout values when running commands.

**Total Timeout**  
In the Systems Manager console, you specify the timeout value in the **Timeout (seconds)** field. After a command is sent, Run Command checks whether the command has expired. If a command reaches the command expiration limit (total timeout), the status changes to `DeliveryTimedOut` for all invocations that have the status `InProgress`, `Pending`, or `Delayed`.

![\[The Timeout (seconds) field in the Systems Manager console\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/run-command-delivery-time-out-time-out-seconds.png)


On a more technical level, total timeout (**Timeout (seconds)**) is a combination of two timeout values, as shown here: 

`Total timeout = "Timeout(seconds)" from the console + "timeoutSeconds": "{{ executionTimeout }}" from your SSM document`

For example, the default value of **Timeout (seconds)** in the Systems Manager console is 600 seconds. If you run a command by using the `AWS-RunShellScript` SSM document, the default value of `"timeoutSeconds": "{{ executionTimeout }}"` is 3600 seconds, as shown in the following document sample:

```
  "executionTimeout": {
      "type": "String",
      "default": "3600",

  "runtimeConfig": {
    "aws:runShellScript": {
      "properties": [
        {
          "timeoutSeconds": "{{ executionTimeout }}"
```

This means the command runs for 4,200 seconds (70 minutes) before the system sets the command status to `DeliveryTimedOut`.
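The arithmetic behind that total is simply the two defaults added together:

```shell
#!/bin/bash
# Total timeout = console "Timeout (seconds)" default + the document's
# executionTimeout default, per the calculation described above.
timeout_seconds=600         # console default
execution_timeout=3600      # AWS-RunShellScript document default
echo "total timeout: $(( timeout_seconds + execution_timeout )) seconds"
```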

**Execution Timeout**  
In the Systems Manager console, you specify the execution timeout value in the **Execution Timeout** field, if available. Not all SSM documents require that you specify an execution timeout. The **Execution Timeout** field is only displayed when a corresponding input parameter has been defined in the SSM document. If specified, the command must complete within this time period.

**Note**  
Run Command relies on the SSM Agent document terminal response to determine whether or not the command was delivered to the agent. SSM Agent must send an `ExecutionTimedOut` signal for an invocation or command to be marked as `ExecutionTimedOut`.

![\[The Execution Timeout field in the Systems Manager console\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/run-command-execution-timeout-console.png)


**Default Execution Timeout**  
If an SSM document doesn't require that you explicitly specify an execution timeout value, Systems Manager enforces the hard-coded default execution timeout.

**How Systems Manager reports timeouts**  
If Systems Manager receives an `execution timeout` reply from SSM Agent on a target, then Systems Manager marks the command invocation as `executionTimeout`.

If Run Command doesn't receive a document terminal response from SSM Agent, the command invocation is marked as `deliveryTimeout`.

To determine timeout status on a target, SSM Agent combines all parameters and the content of the SSM document to calculate the `executionTimeout` value. When SSM Agent determines that a command has timed out, it sends `executionTimeout` to the service.

The default for **Timeout (seconds)** is 3600 seconds. The default for **Execution Timeout** is also 3600 seconds. Therefore, the total default timeout for a command is 7200 seconds.

**Note**  
SSM Agent processes `executionTimeout` differently depending on the type of SSM document and the document version. 

# Run Command walkthroughs


The walkthroughs in this section show you how to run commands with Run Command, a tool in AWS Systems Manager, using either the AWS Command Line Interface (AWS CLI) or AWS Tools for Windows PowerShell.

**Topics**
+ [

# Updating software using Run Command
](run-command-tutorial-update-software.md)
+ [

# Walkthrough: Use the AWS CLI with Run Command
](walkthrough-cli.md)
+ [

# Walkthrough: Use the AWS Tools for Windows PowerShell with Run Command
](walkthrough-powershell.md)

You can also view sample commands in the following references.
+ [Systems Manager AWS CLI Reference](https://docs.aws.amazon.com/cli/latest/reference/ssm/)
+ [AWS Tools for Windows PowerShell - AWS Systems Manager](https://docs.aws.amazon.com/powershell/latest/reference/items/SimpleSystemsManagement_cmdlets.html)

# Updating software using Run Command


The following procedures describe how to update software on your managed nodes.

## Updating the SSM Agent using Run Command


The following procedure describes how to update the SSM Agent running on your managed nodes. You can either update to the latest version of SSM Agent or downgrade to an older version. When you run the command, the system downloads the version from AWS, installs it, and then uninstalls the version that existed before the command was run. If an error occurs during this process, the system rolls back to the version that was on the server before the command was run, and the command status shows that the command failed.

**Note**  
If an instance is running macOS version 13.0 (Ventura) or later, the instance must have the SSM Agent version 3.1.941.0 or higher to run the AWS-UpdateSSMAgent document. If the instance is running a version of SSM Agent released before 3.1.941.0, you can update your SSM Agent to run the AWS-UpdateSSMAgent document by running `brew update` and `brew upgrade amazon-ssm-agent` commands.

To be notified about SSM Agent updates, subscribe to the [SSM Agent Release Notes](https://github.com/aws/amazon-ssm-agent/blob/mainline/RELEASENOTES.md) page on GitHub.

**To update SSM Agent using Run Command**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Run Command**.

1. Choose **Run command**.

1. In the **Command document** list, choose **`AWS-UpdateSSMAgent`**.

1. In the **Command parameters** section, specify values for the following parameters, if you want:

   1. (Optional) For **Version**, enter the version of SSM Agent to install. You can install [older versions](https://github.com/aws/amazon-ssm-agent/blob/mainline/RELEASENOTES.md) of the agent. If you don't specify a version, the service installs the latest version.

   1. (Optional) For **Allow Downgrade**, choose **true** to install an earlier version of SSM Agent. If you choose this option, specify the [earlier](https://github.com/aws/amazon-ssm-agent/blob/mainline/RELEASENOTES.md) version number. Choose **false** to install only the newest version of the service.

1. In the **Targets** section, choose the managed nodes on which you want to run this operation by specifying tags, selecting instances or edge devices manually, or specifying a resource group.
**Tip**  
If a managed node you expect to see isn't listed, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md) for troubleshooting tips.

1. For **Other parameters**:
   + For **Comment**, enter information about this command.
   + For **Timeout (seconds)**, specify the number of seconds for the system to wait before failing the overall command execution. 

1. For **Rate control**:
   + For **Concurrency**, specify either a number or a percentage of managed nodes on which to run the command at the same time.
**Note**  
If you selected targets by specifying tags applied to managed nodes or by specifying AWS resource groups, and you aren't certain how many managed nodes are targeted, then restrict the number of targets that can run the document at the same time by specifying a percentage.
   + For **Error threshold**, specify when to stop running the command on other managed nodes after it fails on either a number or a percentage of nodes. For example, if you specify three errors, then Systems Manager stops sending the command when the fourth error is received. Managed nodes still processing the command might also send errors.

1. (Optional) For **Output options**, to save the command output to a file, select the **Write command output to an S3 bucket** box. Enter the bucket and prefix (folder) names in the boxes.
**Note**  
The S3 permissions that grant the ability to write the data to an S3 bucket are those of the instance profile (for EC2 instances) or IAM service role (hybrid-activated machines) assigned to the instance, not those of the IAM user performing this task. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md) or [Create an IAM service role for a hybrid environment](hybrid-multicloud-service-role.md). In addition, if the specified S3 bucket is in a different AWS account, make sure that the instance profile or IAM service role associated with the managed node has the necessary permissions to write to that bucket.
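As a sketch, the instance profile or service role needs at minimum S3 write access to the target bucket. A minimal policy statement, assuming the example bucket name `amzn-s3-demo-bucket`:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
        }
    ]
}
```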

1. In the **SNS notifications** section, if you want notifications sent about the status of the command execution, select the **Enable SNS notifications** check box.

   For more information about configuring Amazon SNS notifications for Run Command, see [Monitoring Systems Manager status changes using Amazon SNS notifications](monitoring-sns-notifications.md).

1. Choose **Run**.

## Updating PowerShell using Run Command


The following procedure describes how to update PowerShell to version 5.1 on your Windows Server 2012 and 2012 R2 managed nodes. The script provided in this procedure downloads the Windows Management Framework (WMF) version 5.1 update and starts the installation of the update. The node reboots during this process, because a reboot is required when installing WMF 5.1. The download and installation of the update take approximately five minutes to complete.

**To update PowerShell using Run Command**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Run Command**.

1. Choose **Run command**.

1. In the **Command document** list, choose **`AWS-RunPowerShellScript`**.

1. In the **Commands** section, paste the following commands for your operating system.

------
#### [ Windows Server 2012 R2 ]

   ```
   Set-Location -Path "C:\Windows\Temp"
   
   Invoke-WebRequest "https://go.microsoft.com/fwlink/?linkid=839516" -OutFile "Win8.1AndW2K12R2-KB3191564-x64.msu"
   
   Start-Process -FilePath "$env:systemroot\system32\wusa.exe" -Verb RunAs -ArgumentList ('Win8.1AndW2K12R2-KB3191564-x64.msu', '/quiet')
   ```

------
#### [ Windows Server 2012 ]

   ```
   Set-Location -Path "C:\Windows\Temp"
   
   Invoke-WebRequest "https://go.microsoft.com/fwlink/?linkid=839513" -OutFile "W2K12-KB3191565-x64.msu"
   
   Start-Process -FilePath "$env:systemroot\system32\wusa.exe" -Verb RunAs -ArgumentList ('W2K12-KB3191565-x64.msu', '/quiet')
   ```

------

1. In the **Targets** section, choose the managed nodes on which you want to run this operation by specifying tags, selecting instances or edge devices manually, or specifying a resource group.
**Tip**  
If a managed node you expect to see isn't listed, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md) for troubleshooting tips.

1. For **Other parameters**:
   + For **Comment**, enter information about this command.
   + For **Timeout (seconds)**, specify the number of seconds for the system to wait before failing the overall command execution. 

1. For **Rate control**:
   + For **Concurrency**, specify either a number or a percentage of managed nodes on which to run the command at the same time.
**Note**  
If you selected targets by specifying tags applied to managed nodes or by specifying AWS resource groups, and you aren't certain how many managed nodes are targeted, then restrict the number of targets that can run the document at the same time by specifying a percentage.
   + For **Error threshold**, specify when to stop running the command on other managed nodes after it fails on either a number or a percentage of nodes. For example, if you specify three errors, then Systems Manager stops sending the command when the fourth error is received. Managed nodes still processing the command might also send errors.

1. (Optional) For **Output options**, to save the command output to a file, select the **Write command output to an S3 bucket** box. Enter the bucket and prefix (folder) names in the boxes.
**Note**  
The S3 permissions that grant the ability to write the data to an S3 bucket are those of the instance profile (for EC2 instances) or IAM service role (hybrid-activated machines) assigned to the instance, not those of the IAM user performing this task. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md) or [Create an IAM service role for a hybrid environment](hybrid-multicloud-service-role.md). In addition, if the specified S3 bucket is in a different AWS account, make sure that the instance profile or IAM service role associated with the managed node has the necessary permissions to write to that bucket.

1. In the **SNS notifications** section, if you want notifications sent about the status of the command execution, select the **Enable SNS notifications** check box.

   For more information about configuring Amazon SNS notifications for Run Command, see [Monitoring Systems Manager status changes using Amazon SNS notifications](monitoring-sns-notifications.md).

1. Choose **Run**.

After the managed node reboots and the installation of the update is complete, connect to your node to confirm that PowerShell successfully upgraded to version 5.1. To check the version of PowerShell on your node, open PowerShell and enter `$PSVersionTable`. The `PSVersion` value in the output table shows 5.1 if the upgrade was successful.

If the `PSVersion` value is different than 5.1, for example 3.0 or 4.0, review the **Setup** logs in Event Viewer under **Windows Logs**. These logs indicate why the update installation failed.

# Walkthrough: Use the AWS CLI with Run Command
Use the AWS CLI

The following sample walkthrough shows you how to use the AWS Command Line Interface (AWS CLI) to view information about commands and command parameters, how to run commands, and how to view the status of those commands. 

**Important**  
Only trusted administrators should be allowed to use AWS Systems Manager pre-configured documents shown in this topic. The commands or scripts specified in Systems Manager documents run with administrative permissions on your managed nodes. If a user has permission to run any of the pre-defined Systems Manager documents (any document that begins with `AWS-`), then that user also has administrator access to the node. For all other users, you should create restrictive documents and share them with specific users.

**Topics**
+ [

## Step 1: Getting started
](#walkthrough-cli-settings)
+ [

## Step 2: Run shell scripts to view resource details
](#walkthrough-cli-run-scripts)
+ [

## Step 3: Send simple commands using the `AWS-RunShellScript` document
](#walkthrough-cli-example-1)
+ [

## Step 4: Run a simple Python script using Run Command
](#walkthrough-cli-example-2)
+ [

## Step 5: Run a Bash script using Run Command
](#walkthrough-cli-example-3)

## Step 1: Getting started


You must either have administrator permissions on the managed node you want to configure, or you must have been granted the appropriate permissions in AWS Identity and Access Management (IAM). This example uses the US East (Ohio) Region (us-east-2). Run Command is available in the AWS Regions listed in [Systems Manager service endpoints](https://docs.aws.amazon.com/general/latest/gr/ssm.html#ssm_region) in the *Amazon Web Services General Reference*. For more information, see [Setting up managed nodes for AWS Systems Manager](systems-manager-setting-up-nodes.md).

**To run commands using the AWS CLI**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. List all available documents.

   This command lists all of the documents available for your account based on IAM permissions. 

   ```
   aws ssm list-documents
   ```

1. Verify that a managed node is ready to receive commands.

   The output of the following command shows if managed nodes are online.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-instance-information \
       --output text --query "InstanceInformationList[*]"
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-instance-information ^
       --output text --query "InstanceInformationList[*]"
   ```

------

1. Run the following command to view details about a particular managed node.
**Note**  
To run the commands in this walkthrough, replace the instance and command IDs with your own. For managed AWS IoT Greengrass core devices, use the *mi-ID_number* for the instance ID. The command ID is returned in the response to **send-command**. Instance IDs are available from Fleet Manager, a tool in AWS Systems Manager.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-instance-information \
       --instance-information-filter-list key=InstanceIds,valueSet=instance-ID
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-instance-information ^
       --instance-information-filter-list key=InstanceIds,valueSet=instance-ID
   ```

------

## Step 2: Run shell scripts to view resource details


Using Run Command and the `AWS-RunShellScript` document, you can run any command or script on a managed node as if you were logged on locally.

**View the description and available parameters**

Run the following command to view a description of the Systems Manager JSON document.

------
#### [ Linux & macOS ]

```
aws ssm describe-document \
    --name "AWS-RunShellScript" \
    --query "[Document.Name,Document.Description]"
```

------
#### [ Windows ]

```
aws ssm describe-document ^
    --name "AWS-RunShellScript" ^
    --query "[Document.Name,Document.Description]"
```

------

Run the following command to view the available parameters and details about those parameters.

------
#### [ Linux & macOS ]

```
aws ssm describe-document \
    --name "AWS-RunShellScript" \
    --query "Document.Parameters[*]"
```

------
#### [ Windows ]

```
aws ssm describe-document ^
    --name "AWS-RunShellScript" ^
    --query "Document.Parameters[*]"
```

------

## Step 3: Send simple commands using the `AWS-RunShellScript` document


Run the following command to get IP information for a Linux managed node.

If you're targeting a Windows Server managed node, change the `document-name` to `AWS-RunPowerShellScript` and change the `command` from `ifconfig` to `ipconfig`.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --instance-ids "instance-ID" \
    --document-name "AWS-RunShellScript" \
    --comment "IP config" \
    --parameters commands=ifconfig \
    --output text
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --instance-ids "instance-ID" ^
    --document-name "AWS-RunShellScript" ^
    --comment "IP config" ^
    --parameters commands=ifconfig ^
    --output text
```

------

**Get command information with response data**  
The following command uses the command ID that was returned from the previous command to get the details and response data of the command execution. The system returns the response data if the command has completed. If the command execution shows `"Pending"` or `"InProgress"`, run this command again to see the response data.

------
#### [ Linux & macOS ]

```
aws ssm list-command-invocations \
    --command-id "command-ID" \
    --details
```

------
#### [ Windows ]

```
aws ssm list-command-invocations ^
    --command-id "command-ID" ^
    --details
```

------
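Rather than rerunning the command manually, you can poll until the invocation leaves the `Pending` and `InProgress` states. The following is a minimal sketch; `get_status` is a stub standing in for the `aws ssm list-command-invocations` query shown above:

```
#!/bin/bash
# Stub standing in for:
#   aws ssm list-command-invocations --command-id "command-ID" \
#       --query "CommandInvocations[0].Status" --output text
get_status() { echo "Success"; }

status="Pending"
while [ "$status" = "Pending" ] || [ "$status" = "InProgress" ]; do
    status=$(get_status)
    # Against the live API, add a delay between polls, for example: sleep 5
done
echo "$status"
```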

**Identify user**

The following command displays the default user running the commands. 

------
#### [ Linux & macOS ]

```
sh_command_id=$(aws ssm send-command \
    --instance-ids "instance-ID" \
    --document-name "AWS-RunShellScript" \
    --comment "Demo run shell script on Linux managed node" \
    --parameters commands=whoami \
    --output text \
    --query "Command.CommandId") \
    sh -c 'aws ssm list-command-invocations \
    --command-id "$sh_command_id" \
    --details \
    --query "CommandInvocations[].CommandPlugins[].{Status:Status,Output:Output}"'
```

------

**Get command status**  
The following command uses the Command ID to get the status of the command execution on the managed node. This example uses the Command ID that was returned in the previous command. 

------
#### [ Linux & macOS ]

```
aws ssm list-commands \
    --command-id "command-ID"
```

------
#### [ Windows ]

```
aws ssm list-commands ^
    --command-id "command-ID"
```

------

**Get command details**  
The following command uses the Command ID from the previous command to get the status of the command execution on a per managed node basis.

------
#### [ Linux & macOS ]

```
aws ssm list-command-invocations \
    --command-id "command-ID" \
    --details
```

------
#### [ Windows ]

```
aws ssm list-command-invocations ^
    --command-id "command-ID" ^
    --details
```

------

**Get command information with response data for a specific managed node**  
The following command returns the output of the original `aws ssm send-command` request for a specific managed node. 

------
#### [ Linux & macOS ]

```
aws ssm list-command-invocations \
    --instance-id instance-ID \
    --command-id "command-ID" \
    --details
```

------
#### [ Windows ]

```
aws ssm list-command-invocations ^
    --instance-id instance-ID ^
    --command-id "command-ID" ^
    --details
```

------

**Display Python version**

The following command returns the version of Python running on a node.

------
#### [ Linux & macOS ]

```
sh_command_id=$(aws ssm send-command \
    --instance-ids "instance-ID" \
    --document-name "AWS-RunShellScript" \
    --comment "Demo run shell script on Linux Instances" \
    --parameters commands='python -V' \
    --output text --query "Command.CommandId") \
    sh -c 'aws ssm list-command-invocations \
    --command-id "$sh_command_id" \
    --details \
    --query "CommandInvocations[].CommandPlugins[].{Status:Status,Output:Output}"'
```

------

## Step 4: Run a simple Python script using Run Command


The following command runs a simple Python "Hello World" script using Run Command.

------
#### [ Linux & macOS ]

```
sh_command_id=$(aws ssm send-command \
    --instance-ids "instance-ID" \
    --document-name "AWS-RunShellScript" \
    --comment "Demo run shell script on Linux Instances" \
    --parameters '{"commands":["#!/usr/bin/python","print \"Hello World from python\""]}' \
    --output text \
    --query "Command.CommandId") \
    sh -c 'aws ssm list-command-invocations \
    --command-id "$sh_command_id" \
    --details \
    --query "CommandInvocations[].CommandPlugins[].{Status:Status,Output:Output}"'
```

------

## Step 5: Run a Bash script using Run Command


The examples in this section demonstrate how to run the following bash script using Run Command.

For examples of using Run Command to run scripts stored in remote locations, see [Running scripts from Amazon S3](integration-s3.md) and [Running scripts from GitHub](integration-remote-scripts.md).

```
#!/bin/bash
yum -y update
yum install -y ruby
cd /home/ec2-user
curl -O https://aws-codedeploy-us-east-2.s3.amazonaws.com/latest/install
chmod +x ./install
./install auto
```

This script installs the AWS CodeDeploy agent on Amazon Linux and Red Hat Enterprise Linux (RHEL) instances, as described in [Create an Amazon EC2 instance for CodeDeploy](https://docs.aws.amazon.com/codedeploy/latest/userguide/instances-ec2-create.html) in the *AWS CodeDeploy User Guide*.

The script installs the CodeDeploy agent from an AWS managed S3 bucket in the US East (Ohio) Region (us-east-2), `aws-codedeploy-us-east-2`.

**Run a bash script in an AWS CLI command**

The following sample demonstrates how to include the bash script in a CLI command using the `--parameters` option.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --targets '[{"Key":"InstanceIds","Values":["instance-id"]}]' \
    --parameters '{"commands":["#!/bin/bash","yum -y update","yum install -y ruby","cd /home/ec2-user","curl -O https://aws-codedeploy-us-east-2.s3.amazonaws.com/latest/install","chmod +x ./install","./install auto"]}'
```

------
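If you keep the script in a file, you can generate the `commands` JSON array from it instead of hand-escaping each line. The following is a sketch that assumes `python3` is available locally; the file name `install-codedeploy.sh` is only an example:

```
# Write an example two-line script (stand-in for the full script above)
printf '%s\n' '#!/bin/bash' 'yum -y update' > install-codedeploy.sh

# Build the "commands" JSON object from the script's lines
params=$(python3 -c 'import json, sys; print(json.dumps({"commands": open(sys.argv[1]).read().splitlines()}))' install-codedeploy.sh)

echo "$params"
```

The resulting string can then be passed to the `--parameters` option of `send-command`.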

**Run a bash script in a JSON file**

In the following example, the content of the bash script is stored in a JSON file, and the file is included in the command using the `--cli-input-json` option.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --targets "Key=InstanceIds,Values=instance-id" \
    --cli-input-json file://installCodeDeployAgent.json
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name "AWS-RunShellScript" ^
    --targets "Key=InstanceIds,Values=instance-id" ^
    --cli-input-json file://installCodeDeployAgent.json
```

------

The contents of the referenced `installCodeDeployAgent.json` file are shown in the following example.

```
{
    "Parameters": {
        "commands": [
            "#!/bin/bash",
            "yum -y update",
            "yum install -y ruby",
            "cd /home/ec2-user",
            "curl -O https://aws-codedeploy-us-east-2.s3.amazonaws.com/latest/install",
            "chmod +x ./install",
            "./install auto"
        ]
    }
}
```

# Walkthrough: Use the AWS Tools for Windows PowerShell with Run Command
Use the Tools for Windows PowerShell

The following examples show how to use the AWS Tools for Windows PowerShell to view information about commands and command parameters, how to run commands, and how to view the status of those commands. This walkthrough includes an example for each of the pre-defined AWS Systems Manager documents.

**Important**  
Only trusted administrators should be allowed to use the Systems Manager pre-configured documents shown in this topic. The commands or scripts specified in Systems Manager documents run with administrative permissions on your managed nodes. If a user has permission to run any of the predefined Systems Manager documents (any document that begins with `AWS-`), then that user also has administrator access to the node. For all other users, you should create restrictive documents and share them with specific users.

**Topics**
+ [

## Configure AWS Tools for Windows PowerShell session settings
](#walkthrough-powershell-settings)
+ [

## List all available documents
](#walkthrough-powershell-all-documents)
+ [

## Run PowerShell commands or scripts
](#walkthrough-powershell-run-script)
+ [

## Install an application using the `AWS-InstallApplication` document
](#walkthrough-powershell-install-application)
+ [

## Install a PowerShell module using the `AWS-InstallPowerShellModule` JSON document
](#walkthrough-powershell-install-module)
+ [

## Join a managed node to a Domain using the `AWS-JoinDirectoryServiceDomain` JSON document
](#walkthrough-powershell-domain-join)
+ [

## Send Windows metrics to Amazon CloudWatch Logs using the `AWS-ConfigureCloudWatch` document
](#walkthrough-powershell-windows-metrics)
+ [

## Turn on or turn off Windows automatic update using the `AWS-ConfigureWindowsUpdate` document
](#walkthrough-powershell-enable-windows-update)
+ [

## Manage Windows updates using Run Command
](#walkthough-powershell-windows-updates)

## Configure AWS Tools for Windows PowerShell session settings


**Specify your credentials**  
Open **Tools for Windows PowerShell** on your local computer and run the following command to specify your credentials. You must either have administrator permissions on the managed nodes you want to configure or you must have been granted the appropriate permission in AWS Identity and Access Management (IAM). For more information, see [Setting up managed nodes for AWS Systems Manager](systems-manager-setting-up-nodes.md).

```
Set-AWSCredentials -AccessKey key-name -SecretKey key-name
```

**Set a default AWS Region**  
Run the following command to set the region for your PowerShell session. The example uses the US East (Ohio) Region (us-east-2). Run Command is available in the AWS Regions listed in [Systems Manager service endpoints](https://docs.aws.amazon.com/general/latest/gr/ssm.html#ssm_region) in the *Amazon Web Services General Reference*.

```
Set-DefaultAWSRegion `
    -Region us-east-2
```

## List all available documents


This command lists all documents available for your account.

```
Get-SSMDocumentList
```

## Run PowerShell commands or scripts


Using Run Command and the `AWS-RunPowerShellScript` document, you can run any command or script on a managed node as if you were logged on locally. You can issue commands or specify the path to a local script to run.

**Note**  
For information about rebooting managed nodes when using Run Command to call scripts, see [Handling reboots when running commands](send-commands-reboot.md).

**View the description and available parameters**

```
Get-SSMDocumentDescription `
    -Name "AWS-RunPowerShellScript"
```

**View more information about parameters**

```
Get-SSMDocumentDescription `
    -Name "AWS-RunPowerShellScript" | Select -ExpandProperty Parameters
```

### Send a command using the `AWS-RunPowerShellScript` document


The following command shows the contents of the `"C:\Users"` directory and the contents of the `"C:\"` directory on two managed nodes. 

```
$runPSCommand = Send-SSMCommand `
    -InstanceIds @("instance-ID-1", "instance-ID-2") `
    -DocumentName "AWS-RunPowerShellScript" `
    -Comment "Demo AWS-RunPowerShellScript with two instances" `
    -Parameter @{'commands'=@('dir C:\Users', 'dir C:\')}
```

**Get command request details**  
The following command uses the `CommandId` to get the status of the command execution on both managed nodes. This example uses the `CommandId` that was returned in the previous command. 

```
Get-SSMCommand `
    -CommandId $runPSCommand.CommandId
```

The status of the command in this example can be `Success`, `Pending`, or `InProgress`.

**Get command information per managed node**  
The following command uses the `CommandId` from the previous command to get the status of the command execution on a per managed node basis.

```
Get-SSMCommandInvocation `
    -CommandId $runPSCommand.CommandId
```

**Get command information with response data for a specific managed node**  
The following command returns the output of the original `Send-SSMCommand` for a specific managed node. 

```
Get-SSMCommandInvocation `
    -CommandId $runPSCommand.CommandId `
    -Details $true `
    -InstanceId instance-ID | Select -ExpandProperty CommandPlugins
```

### Cancel a command


The following command cancels the `Send-SSMCommand` for the `AWS-RunPowerShellScript` document.

```
$cancelCommand = Send-SSMCommand `
    -InstanceIds @("instance-ID-1","instance-ID-2") `
    -DocumentName "AWS-RunPowerShellScript" `
    -Comment "Demo AWS-RunPowerShellScript with two instances" `
    -Parameter @{'commands'='Start-Sleep -Seconds 120; dir C:\'}

Stop-SSMCommand -CommandId $cancelCommand.CommandId
```

**Check the command status**  
The following command checks the status of the `Cancel` command.

```
Get-SSMCommand `
    -CommandId $cancelCommand.CommandId
```

## Install an application using the `AWS-InstallApplication` document


Using Run Command and the `AWS-InstallApplication` document, you can install, repair, or uninstall applications on managed nodes. The command requires the path or address to an MSI.

**Note**  
For information about rebooting managed nodes when using Run Command to call scripts, see [Handling reboots when running commands](send-commands-reboot.md).

**View the description and available parameters**

```
Get-SSMDocumentDescription `
    -Name "AWS-InstallApplication"
```

**View more information about parameters**

```
Get-SSMDocumentDescription `
    -Name "AWS-InstallApplication" | Select -ExpandProperty Parameters
```

### Send a command using the `AWS-InstallApplication` document


The following command installs a version of Python on your managed node in unattended mode, and logs the output to a local text file on your `C:` drive.

```
$installAppCommand = Send-SSMCommand `
    -InstanceId instance-ID `
    -DocumentName "AWS-InstallApplication" `
    -Parameter @{'source'='https://www.python.org/ftp/python/2.7.9/python-2.7.9.msi'; 'parameters'='/norestart /quiet /log c:\pythoninstall.txt'}
```

**Get command information per managed node**  
The following command uses the `CommandId` to get the status of the command execution.

```
Get-SSMCommandInvocation `
    -CommandId $installAppCommand.CommandId `
    -Details $true
```

**Get command information with response data for a specific managed node**  
The following command returns the results of the Python installation.

```
Get-SSMCommandInvocation `
    -CommandId $installAppCommand.CommandId `
    -Details $true `
    -InstanceId instance-ID | Select -ExpandProperty CommandPlugins
```

## Install a PowerShell module using the `AWS-InstallPowerShellModule` JSON document


You can use Run Command to install PowerShell modules on managed nodes. For more information about PowerShell modules, see [Windows PowerShell Modules](https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_modules?view=powershell-6).

**View the description and available parameters**

```
Get-SSMDocumentDescription `
    -Name "AWS-InstallPowerShellModule"
```

**View more information about parameters**

```
Get-SSMDocumentDescription `
    -Name "AWS-InstallPowerShellModule" | Select -ExpandProperty Parameters
```

### Install a PowerShell module


The following command downloads the EZOut.zip file, installs it, and then runs an additional command to install XPS Viewer. Lastly, the output of this command is uploaded to an S3 bucket named `amzn-s3-demo-bucket`.

```
$installPSCommand = Send-SSMCommand `
    -InstanceId instance-ID `
    -DocumentName "AWS-InstallPowerShellModule" `
    -Parameter @{'source'='https://gallery.technet.microsoft.com/EZOut-33ae0fb7/file/110351/1/EZOut.zip';'commands'=@('Add-WindowsFeature -name XPS-Viewer -restart')} `
    -OutputS3BucketName amzn-s3-demo-bucket
```

**Get command information per managed node**  
The following command uses the `CommandId` to get the status of the command execution. 

```
Get-SSMCommandInvocation `
    -CommandId $installPSCommand.CommandId `
    -Details $true
```

**Get command information with response data for the managed node**  
The following command returns the output of the original `Send-SSMCommand` for the specific `CommandId`. 

```
Get-SSMCommandInvocation `
    -CommandId $installPSCommand.CommandId `
    -Details $true | Select -ExpandProperty CommandPlugins
```

## Join a managed node to a Domain using the `AWS-JoinDirectoryServiceDomain` JSON document


Using Run Command, you can quickly join a managed node to an AWS Directory Service domain. Before running this command, [create a directory](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_getting_started_create_directory.html). We also recommend that you learn more about AWS Directory Service. For more information, see the [AWS Directory Service Administration Guide](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/).

You can only join a managed node to a domain. You can't remove a node from a domain.

**Note**  
For information about rebooting managed nodes when using Run Command to call scripts, see [Handling reboots when running commands](send-commands-reboot.md).

**View the description and available parameters**

```
Get-SSMDocumentDescription `
    -Name "AWS-JoinDirectoryServiceDomain"
```

**View more information about parameters**

```
Get-SSMDocumentDescription `
    -Name "AWS-JoinDirectoryServiceDomain" | Select -ExpandProperty Parameters
```

### Join a managed node to a domain


The following command joins a managed node to the given Directory Service domain and uploads any generated output to the example Amazon Simple Storage Service (Amazon S3) bucket. 

```
$domainJoinCommand = Send-SSMCommand `
    -InstanceId instance-ID `
    -DocumentName "AWS-JoinDirectoryServiceDomain" `
    -Parameter @{'directoryId'='d-example01'; 'directoryName'='ssm.example.com'; 'dnsIpAddresses'=@('192.168.10.195', '192.168.20.97')} `
    -OutputS3BucketName amzn-s3-demo-bucket
```

**Get command information per managed node**  
The following command uses the `CommandId` to get the status of the command execution. 

```
Get-SSMCommandInvocation `
    -CommandId $domainJoinCommand.CommandId `
    -Details $true
```

**Get command information with response data for the managed node**  
This command returns the output of the original `Send-SSMCommand` for the specific `CommandId`.

```
Get-SSMCommandInvocation `
    -CommandId $domainJoinCommand.CommandId `
    -Details $true | Select -ExpandProperty CommandPlugins
```

## Send Windows metrics to Amazon CloudWatch Logs using the `AWS-ConfigureCloudWatch` document


You can send Windows Server messages in the application, system, security, and Event Tracing for Windows (ETW) logs to Amazon CloudWatch Logs. When you turn on logging for the first time, Systems Manager sends all logs generated within one minute of the time you start uploading logs for the application, system, security, and ETW logs. Logs that occurred before this time aren't included. If you turn off logging and later turn it back on, Systems Manager resumes sending logs from the time it left off. For custom log files and Internet Information Services (IIS) logs, Systems Manager reads the log files from the beginning. Systems Manager can also send performance counter data to CloudWatch Logs.

If you previously turned on CloudWatch integration in EC2Config, the Systems Manager settings override any settings stored locally on the managed node in the `C:\Program Files\Amazon\EC2ConfigService\Settings\AWS.EC2.Windows.CloudWatch.json` file. For more information about using EC2Config to manage performance counters and logs on a single managed node, see [Collecting metrics and logs from Amazon EC2 instances and on-premises servers with the CloudWatch agent](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html) in the *Amazon CloudWatch User Guide*.

**View the description and available parameters**

```
Get-SSMDocumentDescription `
    -Name "AWS-ConfigureCloudWatch"
```

**View more information about parameters**

```
Get-SSMDocumentDescription `
    -Name "AWS-ConfigureCloudWatch" | Select -ExpandProperty Parameters
```

### Send application logs to CloudWatch


The following command configures the managed node to send Windows application logs to CloudWatch Logs.

```
$cloudWatchCommand = Send-SSMCommand `
    -InstanceID instance-ID `
    -DocumentName "AWS-ConfigureCloudWatch" `
    -Parameter @{'properties'='{"engineConfiguration": {"PollInterval":"00:00:15", "Components":[{"Id":"ApplicationEventLog", "FullName":"AWS.EC2.Windows.CloudWatch.EventLog.EventLogInputComponent,AWS.EC2.Windows.CloudWatch", "Parameters":{"LogName":"Application", "Levels":"7"}},{"Id":"CloudWatch", "FullName":"AWS.EC2.Windows.CloudWatch.CloudWatchLogsOutput,AWS.EC2.Windows.CloudWatch", "Parameters":{"Region":"region", "LogGroup":"my-log-group", "LogStream":"instance-id"}}], "Flows":{"Flows":["ApplicationEventLog,CloudWatch"]}}}'}
```
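
The `properties` value is a JSON document embedded in a PowerShell single-quoted string, so a stray quote or brace fails only at runtime on the node. One way to catch mistakes early is to validate the JSON locally before embedding it (a sketch; the file path is arbitrary):

```
# Save the engineConfiguration JSON to a file and validate it locally
# before embedding it in the Send-SSMCommand call.
cat > /tmp/cloudwatch-properties.json <<'EOF'
{"engineConfiguration": {"PollInterval":"00:00:15", "Components":[{"Id":"ApplicationEventLog", "FullName":"AWS.EC2.Windows.CloudWatch.EventLog.EventLogInputComponent,AWS.EC2.Windows.CloudWatch", "Parameters":{"LogName":"Application", "Levels":"7"}},{"Id":"CloudWatch", "FullName":"AWS.EC2.Windows.CloudWatch.CloudWatchLogsOutput,AWS.EC2.Windows.CloudWatch", "Parameters":{"Region":"region", "LogGroup":"my-log-group", "LogStream":"instance-id"}}], "Flows":{"Flows":["ApplicationEventLog,CloudWatch"]}}}
EOF
# json.tool exits nonzero and reports the error location if the JSON is malformed.
python3 -m json.tool /tmp/cloudwatch-properties.json > /dev/null && echo "properties JSON is valid"
```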

**Get command information per managed node**  
The following command uses the `CommandId` to get the status of the command execution. 

```
Get-SSMCommandInvocation `
    -CommandId $cloudWatchCommand.CommandId `
    -Details $true
```

**Get command information with response data for a specific managed node**  
The following command returns the results of the Amazon CloudWatch configuration.

```
Get-SSMCommandInvocation `
    -CommandId $cloudWatchCommand.CommandId `
    -Details $true `
    -InstanceId instance-ID | Select -ExpandProperty CommandPlugins
```

### Send performance counters to CloudWatch using the `AWS-ConfigureCloudWatch` document


The following demonstration command uploads performance counters to CloudWatch. For more information, see the *[Amazon CloudWatch User Guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/)*.

```
$cloudWatchMetricsCommand = Send-SSMCommand `
    -InstanceID instance-ID `
    -DocumentName "AWS-ConfigureCloudWatch" `
    -Parameter @{'properties'='{"engineConfiguration": {"PollInterval":"00:00:15", "Components":[{"Id":"PerformanceCounter", "FullName":"AWS.EC2.Windows.CloudWatch.PerformanceCounterComponent.PerformanceCounterInputComponent,AWS.EC2.Windows.CloudWatch", "Parameters":{"CategoryName":"Memory", "CounterName":"Available MBytes", "InstanceName":"", "MetricName":"AvailableMemory", "Unit":"Megabytes","DimensionName":"", "DimensionValue":""}},{"Id":"CloudWatch", "FullName":"AWS.EC2.Windows.CloudWatch.CloudWatch.CloudWatchOutputComponent,AWS.EC2.Windows.CloudWatch", "Parameters":{"AccessKey":"", "SecretKey":"","Region":"region", "NameSpace":"Windows-Default"}}], "Flows":{"Flows":["PerformanceCounter,CloudWatch"]}}}'}
```

## Turn on or turn off Windows automatic update using the `AWS-ConfigureWindowsUpdate` document


Using Run Command and the `AWS-ConfigureWindowsUpdate` document, you can turn on or turn off automatic Windows updates on your Windows Server managed nodes. This command configures the Windows Update Agent to download and install Windows updates on the day and hour that you specify. If an update requires a reboot, the managed node reboots automatically 15 minutes after updates have been installed. With this command you can also configure Windows Update to check for updates but not install them. The `AWS-ConfigureWindowsUpdate` document is officially supported on Windows Server 2012 and later versions.

**View the description and available parameters**

```
Get-SSMDocumentDescription `
    -Name "AWS-ConfigureWindowsUpdate"
```

**View more information about parameters**

```
Get-SSMDocumentDescription `
    -Name "AWS-ConfigureWindowsUpdate" | Select -ExpandProperty Parameters
```

### Turn on Windows automatic update


The following command configures Windows Update to automatically download and install updates daily at 10:00 PM. 

```
$configureWindowsUpdateCommand = Send-SSMCommand `
    -InstanceId instance-ID `
    -DocumentName "AWS-ConfigureWindowsUpdate" `
    -Parameters @{'updateLevel'='InstallUpdatesAutomatically'; 'scheduledInstallDay'='Daily'; 'scheduledInstallTime'='22:00'}
```

**View command status for allowing Windows automatic update**  
The following command uses the `CommandId` to get the status of the command execution for allowing Windows automatic update.

```
Get-SSMCommandInvocation `
    -Details $true `
    -CommandId $configureWindowsUpdateCommand.CommandId | Select -ExpandProperty CommandPlugins
```

### Turn off Windows automatic update


The following command lowers the Windows Update notification level so the system checks for updates but doesn't automatically update the managed node.

```
$configureWindowsUpdateCommand = Send-SSMCommand `
    -InstanceId instance-ID `
    -DocumentName "AWS-ConfigureWindowsUpdate" `
    -Parameters @{'updateLevel'='NeverCheckForUpdates'}
```

**View command status for turning off Windows automatic update**  
The following command uses the `CommandId` to get the status of the command execution for turning off Windows automatic update.

```
Get-SSMCommandInvocation `
    -Details $true `
    -CommandId $configureWindowsUpdateCommand.CommandId | Select -ExpandProperty CommandPlugins
```

## Manage Windows updates using Run Command


Using Run Command and the `AWS-InstallWindowsUpdates` document, you can manage updates for Windows Server managed nodes. This command scans for or installs missing updates on your managed nodes and optionally reboots following installation. You can also specify the appropriate classifications and severity levels for updates to install in your environment.

**Note**  
For information about rebooting managed nodes when using Run Command to call scripts, see [Handling reboots when running commands](send-commands-reboot.md).

The following examples demonstrate how to perform the specified Windows Update management tasks.

### Search for all missing Windows updates


```
Send-SSMCommand `
    -InstanceId instance-ID `
    -DocumentName "AWS-InstallWindowsUpdates" `
    -Parameters @{'Action'='Scan'}
```

### Install specific Windows updates


```
Send-SSMCommand `
    -InstanceId instance-ID `
    -DocumentName "AWS-InstallWindowsUpdates" `
    -Parameters @{'Action'='Install';'IncludeKbs'='kb-ID-1,kb-ID-2,kb-ID-3';'AllowReboot'='True'}
```

### Install important missing Windows updates


```
Send-SSMCommand `
    -InstanceId instance-ID `
    -DocumentName "AWS-InstallWindowsUpdates" `
    -Parameters @{'Action'='Install';'SeverityLevels'='Important';'AllowReboot'='True'}
```

### Install missing Windows updates with specific exclusions


```
Send-SSMCommand `
    -InstanceId instance-ID `
    -DocumentName "AWS-InstallWindowsUpdates" `
    -Parameters @{'Action'='Install';'ExcludeKbs'='kb-ID-1,kb-ID-2';'AllowReboot'='True'}
```

# Troubleshooting Systems Manager Run Command
Troubleshooting Run Command

Run Command, a tool in AWS Systems Manager, provides status details with each command execution. For more information about the details of command statuses, see [Understanding command statuses](monitor-commands.md). You can also use the information in this topic to help troubleshoot problems with Run Command.

**Topics**
+ [

## Some of my managed nodes are missing
](#where-are-instances)
+ [

## A step in my script failed, but the overall status is 'succeeded'
](#ts-exit-codes)
+ [

## SSM Agent isn't running properly
](#ts-ssmagent-linux)

## Some of my managed nodes are missing


On the **Run a command** page, after you choose an SSM document to run and select **Manually selecting instances** in the **Targets** section, a list of the managed nodes that you can run the command on is displayed.

If a managed node you expect to see isn't listed, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md) for troubleshooting tips.

After you create, activate, reboot, or restart a managed node, install SSM Agent on a node, or attach an AWS Identity and Access Management (IAM) instance profile to a node, it can take a few minutes for the managed node to be added to the list.

## A step in my script failed, but the overall status is 'succeeded'


Using Run Command, you can define how your scripts handle exit codes. By default, the exit code of the last command run in a script is reported as the exit code for the entire script. You can, however, include a conditional statement to exit the script if any command before the final one fails. For information and examples, see [Specify exit codes in commands](run-command-handle-exit-status.md#command-exit-codes). 
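
For example, in a bash script run through a shell-script document, an explicit `|| exit` after a step makes a mid-script failure surface as the script's exit code instead of being masked by a later successful command (a minimal local sketch; the script path is arbitrary):

```
# Without the `|| exit 1`, the script would continue and report the
# exit code of the final echo (0) even though an earlier step failed.
cat > /tmp/demo-steps.sh <<'EOF'
#!/bin/bash
false || exit 1            # a failing step; exit immediately
echo "this line never runs"
EOF
chmod +x /tmp/demo-steps.sh
if /tmp/demo-steps.sh; then status=0; else status=$?; fi
echo "script exit code: $status"   # prints: script exit code: 1
```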

## SSM Agent isn't running properly


If you experience problems running commands using Run Command, there might be a problem with the SSM Agent. For information about investigating issues with SSM Agent, see [Troubleshooting SSM Agent](troubleshooting-ssm-agent.md). 

# AWS Systems Manager Session Manager
Session Manager

Session Manager is a fully managed AWS Systems Manager tool. With Session Manager, you can manage your Amazon Elastic Compute Cloud (Amazon EC2) instances, edge devices, on-premises servers, and virtual machines (VMs). You can use either an interactive one-click browser-based shell or the AWS Command Line Interface (AWS CLI). Session Manager provides secure node management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys. Session Manager also allows you to comply with corporate policies that require controlled access to managed nodes, strict security practices, and logs with node access details, while providing end users with simple one-click cross-platform access to your managed nodes. To get started with Session Manager, open the [Systems Manager console](https://console.aws.amazon.com/systems-manager/session-manager). In the navigation pane, choose **Session Manager**.

## How can Session Manager benefit my organization?


Session Manager offers these benefits:
+  **Centralized access control to managed nodes using IAM policies** 

  Administrators have a single place to grant and revoke access to managed nodes. Using only AWS Identity and Access Management (IAM) policies, you can control which individual users or groups in your organization can use Session Manager and which managed nodes they can access. 
+  **No open inbound ports and no need to manage bastion hosts or SSH keys** 

  Leaving inbound SSH ports and remote PowerShell ports open on your managed nodes greatly increases the risk of entities running unauthorized or malicious commands on the managed nodes. Session Manager helps you improve your security posture by letting you close these inbound ports, freeing you from managing SSH keys and certificates, bastion hosts, and jump boxes.
+  **One-click access to managed nodes from the console and CLI** 

  Using the AWS Systems Manager console or Amazon EC2 console, you can start a session with a single click. Using the AWS CLI, you can also start a session that runs a single command or a sequence of commands. Because permissions to managed nodes are provided through IAM policies instead of SSH keys or other mechanisms, the connection time is greatly reduced.
+  **Connect to both Amazon EC2 instances and non-EC2 managed nodes in [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environments** 

  You can connect to both Amazon Elastic Compute Cloud (Amazon EC2) instances and non-EC2 nodes in your [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment. 

  To connect to non-EC2 nodes using Session Manager, you must first activate the advanced-instances tier. **There is a charge to use the advanced-instances tier.** However, there is no additional charge to connect to EC2 instances using Session Manager. For information, see [Configuring instance tiers](fleet-manager-configure-instance-tiers.md).
+  **Port forwarding** 

  Redirect any port inside your managed node to a local port on a client. After that, connect to the local port and access the server application that is running inside the node.
+  **Cross-platform support for Windows, Linux, and macOS** 

  Session Manager provides support for Windows, Linux, and macOS from a single tool. For example, you don't need to use an SSH client for Linux and macOS managed nodes or an RDP connection for Windows Server managed nodes.
+  **Logging session activity** 

  To meet operational or security requirements in your organization, you might need to provide a record of the connections made to your managed nodes and the commands that were run on them. You can also receive notifications when a user in your organization starts or ends session activity. 

  Logging capabilities are provided through integration with the following AWS services:
  + **AWS CloudTrail** – AWS CloudTrail captures information about Session Manager API calls made in your AWS account and writes it to log files that are stored in an Amazon Simple Storage Service (Amazon S3) bucket you specify. One bucket is used for all CloudTrail logs for your account. For more information, see [Logging AWS Systems Manager API calls with AWS CloudTrail](monitoring-cloudtrail-logs.md). 
  + **Amazon Simple Storage Service** – You can choose to store session log data in an Amazon S3 bucket of your choice for debugging and troubleshooting purposes. Log data can be sent to your Amazon S3 bucket with or without encryption using your AWS KMS key. For more information, see [Logging session data using Amazon S3 (console)](session-manager-logging-s3.md).
  + **Amazon CloudWatch Logs** – CloudWatch Logs allows you to monitor, store, and access log files from various AWS services. You can send session log data to a CloudWatch Logs log group for debugging and troubleshooting purposes. Log data can be sent to your log group with or without AWS KMS encryption using your KMS key. For more information, see [Logging session data using Amazon CloudWatch Logs (console)](session-manager-logging-cloudwatch-logs.md).
  + **Amazon EventBridge** and **Amazon Simple Notification Service** – EventBridge allows you to set up rules to detect when changes happen to AWS resources that you specify. You can create a rule to detect when a user in your organization starts or stops a session, and then receive a notification through Amazon SNS (for example, a text or email message) about the event. You can also configure a CloudWatch event to initiate other responses. For more information, see [Monitoring session activity using Amazon EventBridge (console)](session-manager-auditing.md#session-manager-auditing-eventbridge-events).
**Note**  
Logging isn't available for Session Manager sessions that connect through port forwarding or SSH. This is because SSH encrypts all session data within the secure TLS connection established between the AWS CLI and Session Manager endpoints, and Session Manager only serves as a tunnel for SSH connections.
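
The centralized, IAM-based access control described in the first benefit above is expressed as ordinary IAM policy statements. The following sketch (the tag key and value are placeholders) allows starting sessions only on instances tagged `Environment=Dev`:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ssm:StartSession",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringLike": {
          "ssm:resourceTag/Environment": "Dev"
        }
      }
    }
  ]
}
```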

## Who should use Session Manager?

+ Any AWS customer who wants to improve their security posture, reduce operational overhead by centralizing access control on managed nodes, and reduce inbound node access. 
+ Information Security experts who want to monitor and track managed node access and activity, close down inbound ports on managed nodes, or allow connections to managed nodes that don't have a public IP address. 
+ Administrators who want to grant and revoke access from a single location, and who want to provide one solution to users for Linux, macOS, and Windows Server managed nodes.
+ Users who want to connect to a managed node with just one click from the browser or AWS CLI without having to provide SSH keys.

## What are the main features of Session Manager?

+ **Support for Windows Server, Linux, and macOS managed nodes**

  Session Manager allows you to establish secure connections to your Amazon Elastic Compute Cloud (Amazon EC2) instances, edge devices, on-premises servers, and virtual machines (VMs). For a list of supported operating system types, see [Setting up Session Manager](session-manager-getting-started.md).
**Note**  
Session Manager support for on-premises machines is provided for the advanced-instances tier only. For information, see [Turning on the advanced-instances tier](fleet-manager-enable-advanced-instances-tier.md).
+  **Console, CLI, and SDK access to Session Manager capabilities** 

  You can work with Session Manager in the following ways:

  The **AWS Systems Manager console** includes access to all the Session Manager capabilities for both administrators and end users. You can perform any task that is related to your sessions by using the Systems Manager console. 

  The **Amazon EC2 console** allows end users to connect to EC2 instances for which they have been granted session permissions.

  The **AWS CLI** includes access to Session Manager capabilities for end users. You can start a session, view a list of sessions, and permanently end a session by using the AWS CLI. 
**Note**  
To use the AWS CLI to run session commands, you must be using version 1.16.12 of the CLI (or later), and you must have installed the Session Manager plugin on your local machine. For information, see [Install the Session Manager plugin for the AWS CLI](session-manager-working-with-install-plugin.md). To view the plugin on GitHub, see [session-manager-plugin](https://github.com/aws/session-manager-plugin).
+  **IAM access control** 

  Through the use of IAM policies, you can control which members of your organization can initiate sessions to managed nodes and which nodes they can access. You can also provide temporary access to your managed nodes. For example, you might want to give an on-call engineer (or a group of on-call engineers) access to production servers only for the duration of their rotation.
+  **Logging support** 

  Session Manager provides options for logging session histories in your AWS account through integration with a number of other AWS services. For more information, see [Logging session activity](session-manager-auditing.md) and [Enabling and disabling session logging](session-manager-logging.md).
+  **Configurable shell profiles** 

  Session Manager provides options for configuring preferences within sessions. These customizable profiles allow you to specify a preferred shell, environment variables, a working directory, and commands to run when a session starts.
+  **Customer key data encryption support** 

  You can configure Session Manager to encrypt the session data logs that you send to an Amazon Simple Storage Service (Amazon S3) bucket or stream to a CloudWatch Logs log group. You can also configure Session Manager to further encrypt the data transmitted between client machines and your managed nodes during your sessions. For information, see [Enabling and disabling session logging](session-manager-logging.md) and [Configure session preferences](session-manager-getting-started-configure-preferences.md).
+  **AWS PrivateLink support for managed nodes without public IP addresses** 

  You can also set up VPC Endpoints for Systems Manager using AWS PrivateLink to further secure your sessions. AWS PrivateLink limits all network traffic between your managed nodes, Systems Manager, and Amazon EC2 to the Amazon network. For more information, see [Improve the security of EC2 instances by using VPC endpoints for Systems Manager](setup-create-vpc.md).
+  **Tunneling** 

  In a session, use a Session-type AWS Systems Manager (SSM) document to tunnel traffic, such as HTTP or a custom protocol, between a local port on a client machine and a remote port on a managed node.
+  **Interactive commands** 

  Create a Session-type SSM document that uses a session to interactively run a single command, giving you a way to manage what users can do on a managed node.
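
Tunneling is typically started from the AWS CLI with the `AWS-StartPortForwardingSession` document. Because the `--parameters` value is a JSON string, it is easy to malform; the following sketch validates it locally (the instance ID and port numbers are placeholders, and the `aws` invocation itself is shown only as a comment):

```
# Parameter document for AWS-StartPortForwardingSession: forward remote
# port 80 on the node to local port 8080 on the client.
params='{"portNumber":["80"],"localPortNumber":["8080"]}'
echo "$params" | python3 -m json.tool > /dev/null && echo "parameters JSON is valid"
# The session itself would then be started with (not run here):
#   aws ssm start-session \
#       --target instance-ID \
#       --document-name AWS-StartPortForwardingSession \
#       --parameters "$params"
```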

## What is a session?


A session is a connection made to a managed node using Session Manager. Sessions are based on a secure bidirectional communication channel between the client (you) and the remote managed node that streams inputs and outputs for commands. Traffic between a client and a managed node is encrypted using TLS 1.2, and requests to create the connection are signed using SigV4. This two-way communication allows interactive bash and PowerShell access to managed nodes. You can also use an AWS Key Management Service (AWS KMS) key to further encrypt data beyond the default TLS encryption.

For example, say that John is an on-call engineer in your IT department. He receives notification of an issue that requires him to remotely connect to a managed node, such as a failure that requires troubleshooting or a directive to change a simple configuration option on a node. Using the AWS Systems Manager console, the Amazon EC2 console, or the AWS CLI, John starts a session connecting him to the managed node, runs commands on the node needed to complete the task, and then ends the session.

When John sends that first command to start the session, the Session Manager service authenticates his ID, verifies the permissions granted to him by an IAM policy, checks configuration settings (such as verifying allowed limits for the sessions), and sends a message to SSM Agent to open the two-way connection. After the connection is established and John types the next command, the command output from SSM Agent is uploaded to this communication channel and sent back to his local machine.

**Topics**
+ [

## How can Session Manager benefit my organization?
](#session-manager-benefits)
+ [

## Who should use Session Manager?
](#session-manager-who)
+ [

## What are the main features of Session Manager?
](#session-manager-features)
+ [

## What is a session?
](#what-is-a-session)
+ [

# Setting up Session Manager
](session-manager-getting-started.md)
+ [

# Working with Session Manager
](session-manager-working-with.md)
+ [

# Logging session activity
](session-manager-auditing.md)
+ [

# Enabling and disabling session logging
](session-manager-logging.md)
+ [

# Session document schema
](session-manager-schema.md)
+ [

# Troubleshooting Session Manager
](session-manager-troubleshooting.md)

# Setting up Session Manager


Before you use AWS Systems Manager Session Manager to connect to the managed nodes in your account, complete the steps in the following topics.

**Topics**
+ [

# Step 1: Complete Session Manager prerequisites
](session-manager-prerequisites.md)
+ [

# Step 2: Verify or add instance permissions for Session Manager
](session-manager-getting-started-instance-profile.md)
+ [

# Step 3: Control session access to managed nodes
](session-manager-getting-started-restrict-access.md)
+ [

# Step 4: Configure session preferences
](session-manager-getting-started-configure-preferences.md)
+ [

# Step 5: (Optional) Restrict access to commands in a session
](session-manager-restrict-command-access.md)
+ [

# Step 6: (Optional) Use AWS PrivateLink to set up a VPC endpoint for Session Manager
](session-manager-getting-started-privatelink.md)
+ [

# Step 7: (Optional) Turn on or turn off ssm-user account administrative permissions
](session-manager-getting-started-ssm-user-permissions.md)
+ [

# Step 8: (Optional) Allow and control permissions for SSH connections through Session Manager
](session-manager-getting-started-enable-ssh-connections.md)

# Step 1: Complete Session Manager prerequisites


Before using Session Manager, make sure your environment meets the following requirements.


**Session Manager prerequisites**  

| Requirement | Description | 
| --- | --- | 
|  Supported operating systems  |  Session Manager supports connecting to Amazon Elastic Compute Cloud (Amazon EC2) instances, edge devices, and on-premises servers and virtual machines (VMs) in your [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment that use the *advanced-instances* tier. For more information about advanced instances, see [Configuring instance tiers](fleet-manager-configure-instance-tiers.md). **Linux and macOS**: Session Manager supports all versions of Linux and macOS that are supported by AWS Systems Manager. For information, see [Supported operating systems and machine types](operating-systems-and-machine-types.md). **Windows**: Session Manager supports Windows Server 2012 and later versions. Microsoft Windows Server 2016 Nano isn't supported.  | 
|  SSM Agent  |  At minimum, AWS Systems Manager SSM Agent version 2.3.68.0 or later must be installed on the managed nodes you want to connect to through sessions. To use the option to encrypt session data using a key created in AWS Key Management Service (AWS KMS), version 2.3.539.0 or later of SSM Agent must be installed on the managed node. To use shell profiles in a session, SSM Agent version 3.0.161.0 or later must be installed on the managed node. To start a Session Manager port forwarding or SSH session, SSM Agent version 3.0.222.0 or later must be installed on the managed node. To stream session data using Amazon CloudWatch Logs, SSM Agent version 3.0.284.0 or later must be installed on the managed node. For information about how to determine the version number running on an instance, see [Checking the SSM Agent version number](ssm-agent-get-version.md). For information about manually installing or automatically updating SSM Agent, see [Working with SSM Agent](ssm-agent.md). An updated version of SSM Agent is released whenever new tools are added to Systems Manager or updates are made to existing tools. Failing to use the latest version of the agent can prevent your managed node from using various Systems Manager tools and features. For that reason, we recommend that you automate the process of keeping SSM Agent up to date on your machines. For information, see [Automating updates to SSM Agent](ssm-agent-automatic-updates.md). Subscribe to the [SSM Agent Release Notes](https://github.com/aws/amazon-ssm-agent/blob/mainline/RELEASENOTES.md) page on GitHub to get notifications about SSM Agent updates. **About the ssm-user account**: Starting with version 2.3.50.0 of SSM Agent, the agent creates a user account with root or administrator permissions, called `ssm-user`, on the managed node. (On versions before 2.3.612.0, the account is created when SSM Agent starts or restarts. On version 2.3.612.0 and later, `ssm-user` is created the first time a session starts on the managed node.) Sessions are launched using the administrative credentials of this user account. For information about restricting administrative control for this account, see [Turn off or turn on ssm-user account administrative permissions](session-manager-getting-started-ssm-user-permissions.md). **ssm-user on Windows Server domain controllers**: Beginning with SSM Agent version 2.3.612.0, the `ssm-user` account isn't created automatically on managed nodes that are used as Windows Server domain controllers. To use Session Manager on a Windows Server machine being used as a domain controller, you must create the `ssm-user` account manually if it isn't already present, and assign Domain Administrator permissions to the user. On Windows Server, SSM Agent sets a new password for the `ssm-user` account each time a session starts, so you don't need to specify a password when you create the account.  | 
|  Connectivity to endpoints  |  The managed nodes you connect to must also allow HTTPS (port 443) outbound traffic to the following endpoints: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-prerequisites.html) For more information, see the following topics: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-prerequisites.html) Alternatively, you can connect to the required endpoints by using interface endpoints. For more information, see [Step 6: (Optional) Use AWS PrivateLink to set up a VPC endpoint for Session Manager](session-manager-getting-started-privatelink.md).  | 
|  AWS CLI  |  (Optional) If you use the AWS Command Line Interface (AWS CLI) to start your sessions (instead of using the AWS Systems Manager console or Amazon EC2 console), version 1.16.12 or later of the CLI must be installed on your local machine. You can run `aws --version` to check the version. If you need to install or upgrade the CLI, see [Installing the AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/installing.html) in the *AWS Command Line Interface User Guide*. In addition, to use the CLI to manage your nodes with Session Manager, you must first install the Session Manager plugin on your local machine. For information, see [Install the Session Manager plugin for the AWS CLI](session-manager-working-with-install-plugin.md).  | 
|  Turn on advanced-instances tier ([hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environments)  |  To connect to non-EC2 machines using Session Manager, you must turn on the advanced-instances tier in the AWS account and AWS Region where you create hybrid activations to register non-EC2 machines as managed nodes. There is a charge to use the advanced-instances tier. For more information about the advanced-instances tier, see [Configuring instance tiers](fleet-manager-configure-instance-tiers.md).  | 
|  Verify IAM service role permissions ([hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environments)  |  Hybrid-activated nodes use the AWS Identity and Access Management (IAM) service role specified in the hybrid activation to communicate with Systems Manager API operations. This service role must contain the permissions required to connect to your [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) machines using Session Manager. If your service role contains the AWS managed policy `AmazonSSMManagedInstanceCore` , the required permissions for Session Manager are already provided. If you find that the service role does not contain the required permissions, you must deregister the managed instance and register it with a new hybrid activation that uses an IAM service role with the required permissions. For more information about deregistering managed instances, see [Deregistering managed nodes in a hybrid and multicloud environment](fleet-manager-deregister-hybrid-nodes.md). For more information about creating IAM policies with Session Manager permissions, see [Step 2: Verify or add instance permissions for Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started-instance-profile.html).  | 

# Step 2: Verify or add instance permissions for Session Manager


By default, AWS Systems Manager doesn't have permission to perform actions on your instances. You can provide instance permissions at the account level using an AWS Identity and Access Management (IAM) role, or at the instance level using an instance profile. If your use case allows, we recommend granting access at the account level using the Default Host Management Configuration. If you've already set up the Default Host Management Configuration for your account using the `AmazonSSMManagedEC2InstanceDefaultPolicy` policy, you can proceed to the next step. For more information about the Default Host Management Configuration, see [Managing EC2 instances automatically with Default Host Management Configuration](fleet-manager-default-host-management-configuration.md).

Alternatively, you can use instance profiles to provide the required permissions to your instances. An instance profile passes an IAM role to an Amazon EC2 instance. You can attach an IAM instance profile to an Amazon EC2 instance as you launch it or to a previously launched instance. For more information, see [Using instance profiles](https://docs.aws.amazon.com/IAM/latest/UserGuide/roles-usingrole-instanceprofile.html).
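For example, attaching an instance profile to a running instance can be done with the AWS CLI; the instance ID and profile name below are placeholder values.

```shell
# Attach an existing instance profile to a running EC2 instance.
# Both the instance ID and the profile name are example values; this
# call requires credentials with ec2:AssociateIamInstanceProfile.
aws ec2 associate-iam-instance-profile \
    --instance-id i-02573cafcfEXAMPLE \
    --iam-instance-profile Name=MySessionManagerProfile
```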

For on-premises servers or virtual machines (VMs), permissions are provided by the IAM service role associated with the hybrid activation used to register your on-premises servers and VMs with Systems Manager. On-premises servers and VMs do not use instance profiles.

If you already use other Systems Manager tools, such as Run Command or Parameter Store, an instance profile with the required basic permissions for Session Manager might already be attached to your Amazon EC2 instances. If an instance profile that contains the AWS managed policy `AmazonSSMManagedInstanceCore` is already attached to your instances, the required permissions for Session Manager are already provided. This is also true if the IAM service role used in your hybrid activation contains the `AmazonSSMManagedInstanceCore` managed policy.

However, in some cases, you might need to modify the permissions attached to your instance profile. For example, you want to provide a narrower set of instance permissions, you have created a custom policy for your instance profile, or you want to use Amazon Simple Storage Service (Amazon S3) encryption or AWS Key Management Service (AWS KMS) encryption options for securing session data. For these cases, do one of the following to allow Session Manager actions to be performed on your instances:
+  **Embed permissions for Session Manager actions in a custom IAM role** 

  To add permissions for Session Manager actions to an existing IAM role that doesn't rely on the AWS-provided default policy `AmazonSSMManagedInstanceCore`, follow the steps in [Add Session Manager permissions to an existing IAM role](getting-started-add-permissions-to-existing-profile.md).
+  **Create a custom IAM role with Session Manager permissions only** 

  To create an IAM role that contains permissions only for Session Manager actions, follow the steps in [Create a custom IAM role for Session Manager](getting-started-create-iam-instance-profile.md).
+  **Create and use a new IAM role with permissions for all Systems Manager actions** 

  To create an IAM role for Systems Manager managed instances that uses a default policy supplied by AWS to grant all Systems Manager permissions, follow the steps in [Configure instance permissions required for Systems Manager](setup-instance-permissions.md).

**Topics**
+ [Add Session Manager permissions to an existing IAM role](getting-started-add-permissions-to-existing-profile.md)
+ [Create a custom IAM role for Session Manager](getting-started-create-iam-instance-profile.md)

# Add Session Manager permissions to an existing IAM role


Use the following procedure to add Session Manager permissions to an existing AWS Identity and Access Management (IAM) role. By adding permissions to an existing role, you can enhance the security of your computing environment without having to use the AWS `AmazonSSMManagedInstanceCore` policy for instance permissions.

**Note**  
Note the following information:  
This procedure assumes that your existing role already includes other Systems Manager `ssm` permissions for actions you want to allow access to. The policy in this procedure alone isn't enough to use Session Manager.  
The following policy example includes an `s3:GetEncryptionConfiguration` action. This action is required if you chose the **Enforce S3 log encryption** option in Session Manager logging preferences.
If the `ssmmessages:OpenControlChannel` permission is removed from policies attached to your IAM instance profile or IAM service role, SSM Agent on the managed node loses connectivity to the Systems Manager service in the cloud. However, it can take up to 1 hour for a connection to be terminated after the permission is removed. This is the same behavior as when the IAM instance role or IAM service role is deleted.

**To add Session Manager permissions to an existing role (console)**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Roles**.

1. Select the name of the role that you are adding the permissions to.

1. Choose the **Permissions** tab.

1. Choose **Add permissions**, and then select **Create inline policy**.

1. Choose the **JSON** tab.

1. Replace the default policy content with the following content. Replace *key-name* with the Amazon Resource Name (ARN) of the AWS Key Management Service key (AWS KMS key) that you want to use.

------
#### [ JSON ]

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "ssmmessages:CreateControlChannel",
                   "ssmmessages:CreateDataChannel",
                   "ssmmessages:OpenControlChannel",
                   "ssmmessages:OpenDataChannel"
               ],
               "Resource": "*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "s3:GetEncryptionConfiguration"
               ],
               "Resource": "*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "kms:Decrypt"
               ],
               "Resource": "arn:aws:kms:us-east-1:111122223333:key/key-name"
           }
       ]
   }
   ```

------

   For information about using a KMS key to encrypt session data, see [Turn on KMS key encryption of session data (console)](session-preferences-enable-encryption.md).

   If you won't use AWS KMS encryption for your session data, you can remove the following content from the policy.

   ```
   ,
           {
               "Effect": "Allow",
               "Action": [
                   "kms:Decrypt"
               ],
               "Resource": "arn:aws:kms:us-east-1:111122223333:key/key-name"
           }
   ```

1. Choose **Next: Tags**.

1. (Optional) Add tags by choosing **Add tag**, and entering the preferred tags for the policy.

1. Choose **Next: Review**.

1. On the **Review policy** page, for **Name**, enter a name for the inline policy, such as **SessionManagerPermissions**.

1. (Optional) For **Description**, enter a description for the policy. 

   Choose **Create policy**.

For information about the `ssmmessages` actions, see [Reference: ec2messages, ssmmessages, and other API operations](systems-manager-setting-up-messageAPIs.md).
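If you prefer the AWS CLI over the console, the same kind of inline policy can be attached with `put-role-policy`. The following sketch uses an example role name and includes only the `ssmmessages` statement; add the Amazon S3 and AWS KMS statements from the example above if you need them.

```shell
# Save a minimal Session Manager inline policy locally.
cat > /tmp/SessionManagerPermissions.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssmmessages:CreateControlChannel",
                "ssmmessages:CreateDataChannel",
                "ssmmessages:OpenControlChannel",
                "ssmmessages:OpenDataChannel"
            ],
            "Resource": "*"
        }
    ]
}
EOF

# Attach it to an existing role; the role name is an example value.
# The trailing "|| echo" keeps this sketch non-fatal if credentials
# aren't configured.
aws iam put-role-policy \
    --role-name MyExistingRole \
    --policy-name SessionManagerPermissions \
    --policy-document file:///tmp/SessionManagerPermissions.json \
    || echo "put-role-policy failed; check credentials and role name"
```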

# Create a custom IAM role for Session Manager


You can create an AWS Identity and Access Management (IAM) role that grants Session Manager the permission to perform actions on your Amazon EC2 managed instances. You can also include a policy to grant the permissions needed for session logs to be sent to Amazon Simple Storage Service (Amazon S3) and Amazon CloudWatch Logs.

After you create the IAM role, for information about how to attach the role to an instance, see [Attach or Replace an Instance Profile](https://aws.amazon.com/premiumsupport/knowledge-center/attach-replace-ec2-instance-profile/) at the AWS re:Post website. For more information about IAM instance profiles and roles, see [Using instance profiles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html) in the *IAM User Guide* and [IAM roles for Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html) in the *Amazon Elastic Compute Cloud User Guide for Linux Instances*. For more information about creating an IAM service role for on-premises machines, see [Create the IAM service role required for Systems Manager in hybrid and multicloud environments](https://docs.aws.amazon.com/systems-manager/latest/userguide/hybrid-multicloud-service-role.html).

**Topics**
+ [Creating an IAM role with minimal Session Manager permissions (console)](#create-iam-instance-profile-ssn-only)
+ [Creating an IAM role with permissions for Session Manager and Amazon S3 and CloudWatch Logs (console)](#create-iam-instance-profile-ssn-logging)

## Creating an IAM role with minimal Session Manager permissions (console)


Use the following procedure to create a custom IAM role with a policy that provides permissions for only Session Manager actions on your instances.

**To create an instance profile with minimal Session Manager permissions (console)**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Policies**, and then choose **Create policy**. (If a **Get Started** button is displayed, choose it, and then choose **Create Policy**.)

1. Choose the **JSON** tab.

1. Replace the default content with the following policy. To encrypt session data using AWS Key Management Service (AWS KMS), replace *key-name* with the Amazon Resource Name (ARN) of the AWS KMS key that you want to use.
**Note**  
If the `ssmmessages:OpenControlChannel` permission is removed from policies attached to your IAM instance profile or IAM service role, SSM Agent on the managed node loses connectivity to the Systems Manager service in the cloud. However, it can take up to 1 hour for a connection to be terminated after the permission is removed. This is the same behavior as when the IAM instance role or IAM service role is deleted.

------
#### [ JSON ]

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "ssm:UpdateInstanceInformation",
                   "ssmmessages:CreateControlChannel",
                   "ssmmessages:CreateDataChannel",
                   "ssmmessages:OpenControlChannel",
                   "ssmmessages:OpenDataChannel"
               ],
               "Resource": "*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "kms:Decrypt"
               ],
               "Resource": "arn:aws:kms:us-east-1:111122223333:key/key-name"
           }
       ]
   }
   ```

------

   For information about using a KMS key to encrypt session data, see [Turn on KMS key encryption of session data (console)](session-preferences-enable-encryption.md).

   If you won't use AWS KMS encryption for your session data, you can remove the following content from the policy.

   ```
   ,
           {
               "Effect": "Allow",
               "Action": [
                   "kms:Decrypt"
               ],
               "Resource": "arn:aws:kms:us-east-1:111122223333:key/key-name"
           }
   ```

1. Choose **Next: Tags**.

1. (Optional) Add tags by choosing **Add tag**, and entering the preferred tags for the policy.

1. Choose **Next: Review**.

1. On the **Review policy** page, for **Name**, enter a name for the policy, such as **SessionManagerPermissions**.

1. (Optional) For **Description**, enter a description for the policy. 

1. Choose **Create policy**.

1. In the navigation pane, choose **Roles**, and then choose **Create role**.

1. On the **Create role** page, choose **AWS service**, and for **Use case**, choose **EC2**.

1. Choose **Next**.

1. On the **Add permissions** page, select the check box to the left of the name of the policy you just created, such as **SessionManagerPermissions**.

1. Choose **Next**.

1. On the **Name, review, and create** page, for **Role name**, enter a name for the IAM role, such as **MySessionManagerRole**.

1. (Optional) For **Role description**, enter a description for the instance profile. 

1. (Optional) Add tags by choosing **Add tag**, and entering the preferred tags for the role.

   Choose **Create role**.

For information about `ssmmessages` actions, see [Reference: ec2messages, ssmmessages, and other API operations](systems-manager-setting-up-messageAPIs.md).
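The console procedure above can be approximated with the AWS CLI. Unlike the console's EC2 use case, the CLI doesn't create an instance profile for you, so the sketch below creates one explicitly. All role, profile, and file names are example values, and the permissions policy (the JSON from step 4) is assumed to be saved locally as `session-manager-policy.json`.

```shell
# Trust policy that lets EC2 instances assume the role.
cat > /tmp/ec2-trust-policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "ec2.amazonaws.com" },
            "Action": "sts:AssumeRole"
        }
    ]
}
EOF

# Create the role, attach the permissions inline, then create an
# instance profile and add the role to it. The braced group with
# "|| echo" keeps the sketch non-fatal when credentials are missing.
{
    aws iam create-role \
        --role-name MySessionManagerRole \
        --assume-role-policy-document file:///tmp/ec2-trust-policy.json
    aws iam put-role-policy \
        --role-name MySessionManagerRole \
        --policy-name SessionManagerPermissions \
        --policy-document file://session-manager-policy.json
    aws iam create-instance-profile \
        --instance-profile-name MySessionManagerProfile
    aws iam add-role-to-instance-profile \
        --instance-profile-name MySessionManagerProfile \
        --role-name MySessionManagerRole
} || echo "IAM calls failed; check credentials"
```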

## Creating an IAM role with permissions for Session Manager and Amazon S3 and CloudWatch Logs (console)


Use the following procedure to create a custom IAM role with a policy that provides permissions for Session Manager actions on your instances. The policy also provides the permissions needed for session logs to be stored in Amazon Simple Storage Service (Amazon S3) buckets and Amazon CloudWatch Logs log groups.

**Important**  
To output session logs to an Amazon S3 bucket owned by a different AWS account, you must add the `s3:PutObjectAcl` permission to the IAM role policy. Additionally, you must ensure that the bucket policy grants cross-account access to the IAM role used by the owning account to grant Systems Manager permissions for managed instances. If the bucket uses Key Management Service (KMS) encryption, then the bucket's KMS policy must also grant this cross-account access. For more information about configuring cross-account bucket permissions in Amazon S3, see [Granting cross-account bucket permissions](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-walkthroughs-managing-access-example2.html) in the *Amazon Simple Storage Service User Guide*. If the cross-account permissions aren't added, the account that owns the Amazon S3 bucket can't access the session output logs.

For information about specifying preferences for storing session logs, see [Enabling and disabling session logging](session-manager-logging.md).

**To create an IAM role with permissions for Session Manager and Amazon S3 and CloudWatch Logs (console)**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Policies**, and then choose **Create policy**. (If a **Get Started** button is displayed, choose it, and then choose **Create Policy**.)

1. Choose the **JSON** tab.

1. Replace the default content with the following policy. Replace each *example resource placeholder* with your own information.

------
#### [ JSON ]

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "ssmmessages:CreateControlChannel",
                   "ssmmessages:CreateDataChannel",
                   "ssmmessages:OpenControlChannel",
                   "ssmmessages:OpenDataChannel",
                   "ssm:UpdateInstanceInformation"
               ],
               "Resource": "*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "logs:CreateLogStream",
                   "logs:PutLogEvents",
                   "logs:DescribeLogGroups",
                   "logs:DescribeLogStreams"
               ],
               "Resource": "*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "s3:PutObject"
               ],
               "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/s3-prefix/*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "s3:GetEncryptionConfiguration"
               ],
               "Resource": "*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "kms:Decrypt"
               ],
               "Resource": "arn:aws:kms:us-east-1:111122223333:key/key-name"
           },
           {
               "Effect": "Allow",
               "Action": "kms:GenerateDataKey",
               "Resource": "*"
           }
       ]
   }
   ```

------

1. Choose **Next: Tags**.

1. (Optional) Add tags by choosing **Add tag**, and entering the preferred tags for the policy.

1. Choose **Next: Review**.

1. On the **Review policy** page, for **Name**, enter a name for the policy, such as **SessionManagerPermissions**.

1. (Optional) For **Description**, enter a description for the policy. 

1. Choose **Create policy**.

1. In the navigation pane, choose **Roles**, and then choose **Create role**.

1. On the **Create role** page, choose **AWS service**, and for **Use case**, choose **EC2**.

1. Choose **Next**.

1. On the **Add permissions** page, select the check box to the left of the name of the policy you just created, such as **SessionManagerPermissions**.

1. Choose **Next**.

1. On the **Name, review, and create** page, for **Role name**, enter a name for the IAM role, such as **MySessionManagerRole**.

1. (Optional) For **Role description**, enter a description for the role. 

1. (Optional) Add tags by choosing **Add tag**, and entering the preferred tags for the role.

1. Choose **Create role**.

# Step 3: Control session access to managed nodes


You grant or revoke Session Manager access to managed nodes by using AWS Identity and Access Management (IAM) policies. You can create a policy that specifies which managed nodes an IAM user or group can connect to, and then attach the policy to that user or group. You can also specify the Session Manager API operations the user or group can perform on those managed nodes. 

To help you get started with IAM permission policies for Session Manager, we've created sample policies for an end user and an administrator user. You can use these policies with only minor changes. Or, use them as a guide to create custom IAM policies. For more information, see [Sample IAM policies for Session Manager](getting-started-restrict-access-quickstart.md). For information about how to create IAM policies and attach them to users or groups, see [Creating IAM Policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) and [Adding and Removing IAM Policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html) in the *IAM User Guide*.
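As a sketch of that flow, creating a managed policy from a locally saved sample and attaching it to a user can look like the following with the AWS CLI; the file name, policy name, user name, and account ID are all example values.

```shell
# Create a managed policy from a locally saved sample policy, then
# attach it to an IAM user. All names and the account ID are examples.
{
    aws iam create-policy \
        --policy-name SessionManagerEndUser \
        --policy-document file://end-user-policy.json
    aws iam attach-user-policy \
        --user-name JohnDoe \
        --policy-arn arn:aws:iam::111122223333:policy/SessionManagerEndUser
} || echo "IAM calls failed; check credentials and the policy file path"
```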

**About session ID ARN formats**  
When you create an IAM policy for Session Manager access, you specify a session ID as part of the Amazon Resource Name (ARN). The session ID includes the user name as a variable. To help illustrate this, here's the format of a Session Manager ARN and an example: 

```
arn:aws:ssm:region-id:account-id:session/session-id
```

For example:

```
arn:aws:ssm:us-east-2:123456789012:session/JohnDoe-1a2b3c4d5eEXAMPLE
```

For more information about using variables in IAM policies, see [IAM Policy Elements: Variables](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_variables.html). 

**Topics**
+ [Start a default shell session by specifying the default session document in IAM policies](getting-started-default-session-document.md)
+ [Start a session with a document by specifying the session documents in IAM policies](getting-started-specify-session-document.md)
+ [Sample IAM policies for Session Manager](getting-started-restrict-access-quickstart.md)
+ [Additional sample IAM policies for Session Manager](getting-started-restrict-access-examples.md)

# Start a default shell session by specifying the default session document in IAM policies
Start a default shell session

When you configure Session Manager for your AWS account or when you change session preferences in the Systems Manager console, the system creates an SSM session document called `SSM-SessionManagerRunShell`. This is the default session document. Session Manager uses this document to store your session preferences, which include information like the following:
+ A location where you want to save session data, such as an Amazon Simple Storage Service (Amazon S3) bucket or an Amazon CloudWatch Logs log group.
+ An AWS Key Management Service (AWS KMS) key ID for encrypting session data.
+ Whether Run As support is allowed for your sessions.

Here is an example of the information contained in the `SSM-SessionManagerRunShell` session preferences document.

```
{
  "schemaVersion": "1.0",
  "description": "Document to hold regional settings for Session Manager",
  "sessionType": "Standard_Stream",
  "inputs": {
    "s3BucketName": "amzn-s3-demo-bucket",
    "s3KeyPrefix": "MyS3Prefix",
    "s3EncryptionEnabled": true,
    "cloudWatchLogGroupName": "MyCWLogGroup",
    "cloudWatchEncryptionEnabled": false,
    "kmsKeyId": "1a2b3c4d",
    "runAsEnabled": true,
    "runAsDefaultUser": "RunAsUser"
  }
}
```
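If session preferences have already been configured in your account, you can view the current contents of this document with the AWS CLI:

```shell
# Print the content of the default session preferences document.
# Requires credentials with ssm:GetDocument in the target Region;
# the trailing "|| echo" keeps this sketch non-fatal otherwise.
aws ssm get-document \
    --name "SSM-SessionManagerRunShell" \
    --query 'Content' \
    --output text \
    || echo "get-document failed; check credentials and Region"
```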

By default, Session Manager uses the default session document when a user starts a session from the AWS Management Console. This applies whether the session is started from Fleet Manager or Session Manager in the Systems Manager console, or from EC2 Connect in the Amazon EC2 console. Session Manager also uses the default session document when a user starts a session by using an AWS CLI command like the following example:

```
aws ssm start-session \
    --target i-02573cafcfEXAMPLE
```

To allow users to start default shell sessions, specify the default session document as a resource in their IAM policies, as shown in the following example.

------
#### [ JSON ]

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EnableSSMSession",
      "Effect": "Allow",
      "Action": [
        "ssm:StartSession"
      ],
      "Resource": [
        "arn:aws:ec2:us-east-1:111122223333:instance/instance-id",
        "arn:aws:ssm:us-east-1:111122223333:document/SSM-SessionManagerRunShell"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ssmmessages:OpenDataChannel"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
```

------

# Start a session with a document by specifying the session documents in IAM policies
Start a session with a document

If you run the [start-session](https://docs.aws.amazon.com/cli/latest/reference/ssm/start-session.html) AWS CLI command with the default session document, you can omit the document name. The system automatically calls the `SSM-SessionManagerRunShell` session document.

In all other cases, you must specify a value for the `document-name` parameter. When a user specifies the name of a session document in a command, the system checks their IAM policy to verify that they have permission to access the document. If they don't have permission, the connection request fails. The following example includes the `document-name` parameter with the `AWS-StartPortForwardingSession` session document.

```
aws ssm start-session \
    --target i-02573cafcfEXAMPLE \
    --document-name AWS-StartPortForwardingSession \
    --parameters '{"portNumber":["80"], "localPortNumber":["56789"]}'
```

For an example of how to specify a Session Manager session document in an IAM policy, see [Quickstart end user policies for Session Manager](getting-started-restrict-access-quickstart.md#restrict-access-quickstart-end-user).

**Note**  
To start a session using SSH, you must complete configuration steps on the target managed node *and* the user's local machine. For information, see [(Optional) Allow and control permissions for SSH connections through Session Manager](session-manager-getting-started-enable-ssh-connections.md).
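As a reference for what the local-machine side of that setup involves, the linked procedure has you add a `ProxyCommand` entry to `~/.ssh/config` similar to the following (shown here as a sketch; see the linked topic for the full steps):

```
# SSH over Session Manager for EC2 instance IDs (i-*) and
# hybrid-activated managed node IDs (mi-*)
host i-* mi-*
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
```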

# Sample IAM policies for Session Manager


Use the samples in this section to help you create AWS Identity and Access Management (IAM) policies that provide the most commonly needed permissions for Session Manager access. 

**Note**  
You can also use an AWS KMS key policy to control which IAM entities (users or roles) and AWS accounts are given access to your KMS key. For information, see [Overview of Managing Access to Your AWS KMS Resources](https://docs.aws.amazon.com/kms/latest/developerguide/control-access-overview.html) and [Using Key Policies in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) in the *AWS Key Management Service Developer Guide*.

**Topics**
+ [Quickstart end user policies for Session Manager](#restrict-access-quickstart-end-user)
+ [Quickstart administrator policy for Session Manager](#restrict-access-quickstart-admin)

## Quickstart end user policies for Session Manager


Use the following examples to create IAM end user policies for Session Manager. 

You can create a policy that allows users to start sessions from only the Session Manager console and AWS Command Line Interface (AWS CLI), from only the Amazon Elastic Compute Cloud (Amazon EC2) console, or from all three.

These policies provide end users the ability to start a session to a particular managed node and the ability to end only their own sessions. Refer to [Additional sample IAM policies for Session Manager](getting-started-restrict-access-examples.md) for examples of customizations you might want to make to the policy.

In the following sample policies, replace each *example resource placeholder* with your own information. 

Choose from the following tabs to view the sample policy for the range of session access you want to provide.

------
#### [ Session Manager and Fleet Manager ]

Use this sample policy to give users the ability to start and resume sessions from only the Session Manager and Fleet Manager consoles. 

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession"
            ],
            "Resource": [
                "arn:aws:ec2:us-east-1:111122223333:instance/i-02573cafcfEXAMPLE",
                "arn:aws:ssm:us-east-1:111122223333:document/SSM-SessionManagerRunShell"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssmmessages:OpenDataChannel"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:session/${aws:userid}-*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:DescribeSessions",
                "ssm:GetConnectionStatus",
                "ssm:DescribeInstanceProperties",
                "ec2:DescribeInstances"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:TerminateSession",
                "ssm:ResumeSession"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:session/${aws:userid}-*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:GenerateDataKey"
            ],
            "Resource": "arn:aws:kms:us-east-1:111122223333:key/key-name"
        }
    ]
}
```

------

------
#### [ Amazon EC2 ]

Use this sample policy to give users the ability to start and resume sessions from only the Amazon EC2 console. This policy doesn't provide all the permissions needed to start sessions from the Session Manager console and the AWS CLI.

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession",
                "ssm:SendCommand"
            ],
            "Resource": [
                "arn:aws:ec2:us-east-1:111122223333:instance/i-02573cafcfEXAMPLE",
                "arn:aws:ssm:us-east-1:111122223333:document/SSM-SessionManagerRunShell"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssmmessages:OpenDataChannel"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:session/${aws:userid}-*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:GetConnectionStatus",
                "ssm:DescribeInstanceInformation"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:TerminateSession",
                "ssm:ResumeSession"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:session/${aws:userid}-*"
            ]
        }
    ]
}
```

------

------
#### [ AWS CLI ]

Use this sample policy to give users the ability to start and resume sessions from the AWS CLI.

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession",
                "ssm:SendCommand"
            ],
            "Resource": [
                "arn:aws:ec2:us-east-1:111122223333:instance/i-02573cafcfEXAMPLE",
                "arn:aws:ssm:us-east-1:111122223333:document/SSM-SessionManagerRunShell"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssmmessages:OpenDataChannel"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:session/${aws:userid}-*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:TerminateSession",
                "ssm:ResumeSession"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:session/${aws:userid}-*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:GenerateDataKey"
            ],
            "Resource": "arn:aws:kms:us-east-1:111122223333:key/key-name"
        }
    ]
}
```

------

------

**Note**  
`SSM-SessionManagerRunShell` is the default name of the SSM document that Session Manager creates to store your session configuration preferences. You can create a custom session document and specify it in this policy instead. You can also specify the AWS-provided document `AWS-StartSSHSession` for users who are starting sessions using SSH. For information about configuration steps needed to support sessions using SSH, see [(Optional) Allow and control permissions for SSH connections through Session Manager](session-manager-getting-started-enable-ssh-connections.md).  
The `kms:GenerateDataKey` permission enables the creation of a data encryption key that will be used to encrypt session data. If you will use AWS Key Management Service (AWS KMS) encryption for your session data, replace *key-name* with the Amazon Resource Name (ARN) of the KMS key you want to use, in the format `arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-12345EXAMPLE`. If you won't use KMS key encryption for your session data, remove the following content from the policy.  

```
{
            "Effect": "Allow",
            "Action": [
                "kms:GenerateDataKey"
            ],
            "Resource": "arn:aws:kms:us-east-1:111122223333:key/key-name"
        }
```
For information about using AWS KMS for encrypting session data, see [Turn on KMS key encryption of session data (console)](session-preferences-enable-encryption.md).  
The permission for [SendCommand](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_SendCommand.html) is needed for cases where a user attempts to start a session from the Amazon EC2 console, but SSM Agent must first be updated to the minimum version required for Session Manager. Run Command is used to send a command to the instance to update the agent.

## Quickstart administrator policy for Session Manager


Use the following examples to create IAM administrator policies for Session Manager. 

These policies give administrators the ability to start sessions on managed nodes that are tagged with `Key=Finance,Value=WebServers`; to create, update, and delete preferences; and to end only their own sessions. For examples of customizations you might want to make to the policy, see [Additional sample IAM policies for Session Manager](getting-started-restrict-access-examples.md).

You can create a policy that allows administrators to perform these tasks from only the Session Manager console and AWS CLI, from only the Amazon EC2 console, or from all three.

In the following sample policies, replace each *example resource placeholder* with your own information. 

Choose from the following tabs to view the sample policy for the access scenario you want to support.

------
#### [ Session Manager and CLI ]

Use this sample policy to give administrators the ability to perform session-related tasks from only the Session Manager console and the AWS CLI. This policy doesn't provide all the permissions needed to perform session-related tasks from the Amazon EC2 console.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession"
            ],
            "Resource": [
                "arn:aws:ec2:*:111122223333:instance/*"
            ],
            "Condition": {
                "StringLike": {
                    "ssm:resourceTag/Finance": [
                        "WebServers"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssmmessages:OpenDataChannel"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:session/${aws:userid}-*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:DescribeSessions",
                "ssm:GetConnectionStatus",
                "ssm:DescribeInstanceProperties",
                "ec2:DescribeInstances"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:CreateDocument",
                "ssm:UpdateDocument",
                "ssm:GetDocument",
                "ssm:StartSession"
            ],
            "Resource": "arn:aws:ssm:us-east-1:111122223333:document/SSM-SessionManagerRunShell"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:TerminateSession",
                "ssm:ResumeSession"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:session/${aws:userid}-*"
            ]
        }
    ]
}
```

------

------
#### [ Amazon EC2 ]

Use this sample policy to give administrators the ability to perform session-related tasks from only the Amazon EC2 console. This policy doesn't provide all the permissions needed to perform session-related tasks from the Session Manager console and the AWS CLI.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession",
                "ssm:SendCommand"
            ],
            "Resource": [
                "arn:aws:ec2:us-east-1:111122223333:instance/*"
            ],
            "Condition": {
                "StringLike": {
                    "ssm:resourceTag/tag-key": [
                        "tag-value"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession"
            ],
            "Resource": [
                "arn:aws:ssm:us-east-1:111122223333:document/SSM-SessionManagerRunShell"
            ]
        },
        {
         "Effect": "Allow",
         "Action": ["ssmmessages:OpenDataChannel"],
         "Resource": ["arn:aws:ssm:*:*:session/${aws:userid}-*"]
       },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:GetConnectionStatus",
                "ssm:DescribeInstanceInformation"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:TerminateSession",
                "ssm:ResumeSession"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:session/${aws:userid}-*"
            ]
        }
    ]
}
```

------

------
#### [ Session Manager, CLI, and Amazon EC2 ]

Use this sample policy to give administrators the ability to perform session-related tasks from the Session Manager console, the AWS CLI, and the Amazon EC2 console.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession",
                "ssm:SendCommand"
            ],
            "Resource": [
                "arn:aws:ec2:us-east-1:111122223333:instance/*"
            ],
            "Condition": {
                "StringLike": {
                    "ssm:resourceTag/tag-key": [
                        "tag-value"
                    ]
                }
            }
        },
        {
         "Effect": "Allow",
         "Action": ["ssmmessages:OpenDataChannel"],
         "Resource": ["arn:aws:ssm:*:*:session/${aws:userid}-*"]
       },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:DescribeSessions",
                "ssm:GetConnectionStatus",
                "ssm:DescribeInstanceInformation",
                "ssm:DescribeInstanceProperties",
                "ec2:DescribeInstances"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:CreateDocument",
                "ssm:UpdateDocument",
                "ssm:GetDocument",
                "ssm:StartSession"
            ],
            "Resource": "arn:aws:ssm:us-east-1:111122223333:document/SSM-SessionManagerRunShell"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:TerminateSession",
                "ssm:ResumeSession"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:session/${aws:userid}-*"
            ]
        }
    ]
}
```

------

------

**Note**  
The permission for [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_SendCommand.html](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_SendCommand.html) is needed for cases where a user attempts to start a session from the Amazon EC2 console, but a command must be sent to update SSM Agent first.

# Additional sample IAM policies for Session Manager


Refer to the following example policies to help you create a custom AWS Identity and Access Management (IAM) policy for any Session Manager user access scenarios you want to support.

**Topics**
+ [

## Example 1: Grant access to documents in the console
](#grant-access-documents-console-example)
+ [

## Example 2: Restrict access to specific managed nodes
](#restrict-access-example-instances)
+ [

## Example 3: Restrict access based on tags
](#restrict-access-example-instance-tags)
+ [

## Example 4: Allow a user to end only sessions they started
](#restrict-access-example-user-sessions)
+ [

## Example 5: Allow full (administrative) access to all sessions
](#restrict-access-example-full-access)

## Example 1: Grant access to documents in the console
Grant access to custom Session documents in the console

You can allow users to specify a custom document when they launch a session using the Session Manager console. The following example IAM policy grants permission to access documents with names that begin with **SessionDocument-** in the specified AWS Region and AWS account.

To use this policy, replace each *example resource placeholder* with your own information.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:GetDocument",
                "ssm:ListDocuments"
            ],
            "Resource": [
                "arn:aws:ssm:us-east-1:111122223333:document/SessionDocument-*"
            ]
        }
    ]
}
```

------

**Note**  
The Session Manager console supports only Session documents that have a `sessionType` of `Standard_Stream`, which are used to define session preferences. For more information, see [Session document schema](session-manager-schema.md).

## Example 2: Restrict access to specific managed nodes


You can create an IAM policy that defines which managed nodes a user is allowed to connect to using Session Manager. For example, the following policy grants a user permission to start, end, and resume sessions on three specific nodes. The policy restricts the user from connecting to nodes other than those specified.

**Note**  
For federated users, see [Example 4: Allow a user to end only sessions they started](#restrict-access-example-user-sessions).

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession"
            ],
            "Resource": [
                "arn:aws:ec2:us-east-1:111122223333:instance/i-1234567890EXAMPLE",
                "arn:aws:ec2:us-east-1:111122223333:instance/i-abcdefghijEXAMPLE",
                "arn:aws:ec2:us-east-1:111122223333:instance/i-0e9d8c7b6aEXAMPLE",
                "arn:aws:ssm:us-east-1:111122223333:document/SSM-SessionManagerRunShell"
            ]
        },
        {
         "Effect": "Allow",
         "Action": ["ssmmessages:OpenDataChannel"],
         "Resource": ["arn:aws:ssm:*:*:session/${aws:userid}-*"]
       },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:TerminateSession",
                "ssm:ResumeSession"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:session/${aws:userid}-*"
            ]
        }
    ]
}
```

------

## Example 3: Restrict access based on tags


You can restrict access to managed nodes based on specific tags. In the following example, the user is allowed to start sessions (`Action: ssm:StartSession`) on any managed node (`Resource: arn:aws:ec2:us-east-1:111122223333:instance/*`), on the condition that the node is tagged with `Finance: WebServers` (`ssm:resourceTag/Finance: WebServers`), and to end and resume only their own sessions. If the user attempts to start a session on a managed node that isn't tagged or that has any tag other than `Finance: WebServers`, the result includes `AccessDenied`.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession"
            ],
            "Resource": [
                "arn:aws:ec2:us-east-1:111122223333:instance/*"
            ],
            "Condition": {
                "StringLike": {
                    "ssm:resourceTag/Finance": [
                        "WebServers"
                    ]
                }
            }
        },
        {
         "Effect": "Allow",
         "Action": ["ssmmessages:OpenDataChannel"],
         "Resource": ["arn:aws:ssm:*:*:session/${aws:userid}-*"]
       },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:TerminateSession",
                "ssm:ResumeSession"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:session/${aws:userid}-*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession"
            ],
            "Resource": [
                "arn:aws:ssm:us-east-1:111122223333:document/SSM-SessionManagerRunShell"
            ]
        }
    ]
}
```

------

You can create IAM policies that allow a user to start sessions to managed nodes that are tagged with multiple tags. The following policy allows the user to start sessions to managed nodes that have both the specified tags applied to them. If a user sends a command to a managed node that isn't tagged with both of these tags, the command result will include `AccessDenied`.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "ssm:StartSession"
         ],
         "Resource":"*",
         "Condition":{
            "StringLike":{
               "ssm:resourceTag/tag-key1":[
                  "tag-value1"
               ],
               "ssm:resourceTag/tag-key2":[
                  "tag-value2"
               ]
            }
         }
      },
      {
         "Effect": "Allow",
         "Action": ["ssmmessages:OpenDataChannel"],
         "Resource": ["arn:aws:ssm:*:*:session/${aws:userid}-*"]
       },
      {
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession"
            ],
            "Resource": [
                "arn:aws:ssm:us-east-1:111122223333:document/SSM-SessionManagerRunShell"
            ]
      }
   ]
}
```

------
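
As a rough mental model, a `StringLike` condition block like the ones above requires every condition key to be satisfied, while the value list under each key contains patterns, any one of which may match. The following Python sketch imitates that behavior for tag-based conditions; it's a simplification of real IAM evaluation (which also handles explicit denies, multivalued context keys, and so on), and the helper and tag names are illustrative.

```python
from fnmatch import fnmatchcase

# Simplified model: AND across condition keys, OR across each key's patterns.
def condition_allows(instance_tags: dict, condition: dict) -> bool:
    for cond_key, patterns in condition.items():
        tag_key = cond_key.removeprefix("ssm:resourceTag/")
        value = instance_tags.get(tag_key)
        if value is None or not any(fnmatchcase(value, p) for p in patterns):
            return False
    return True

condition = {
    "ssm:resourceTag/tag-key1": ["tag-value1"],
    "ssm:resourceTag/tag-key2": ["tag-value2"],
}

# Node with both tags is allowed; node missing the second tag is not.
print(condition_allows({"tag-key1": "tag-value1", "tag-key2": "tag-value2"}, condition))
print(condition_allows({"tag-key1": "tag-value1"}, condition))
```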

For more information about creating IAM policies, see [Managed Policies and Inline Policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html) in the *IAM User Guide*. For more information about tagging managed nodes, see [Tagging your Amazon EC2 resources](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html) in the *Amazon EC2 User Guide* (content applies to Windows and Linux managed nodes). For more information about increasing your security posture against unauthorized root-level commands on your managed nodes, see [Restricting access to root-level commands through SSM Agent](ssm-agent-restrict-root-level-commands.md).

## Example 4: Allow a user to end only sessions they started


Session Manager provides two methods to control which sessions a user in your AWS account is allowed to end.
+ Use the variable `{aws:username}` in an AWS Identity and Access Management (IAM) permissions policy. With this method, users can end only sessions they started. This method doesn't work for accounts that grant access to AWS using federated IDs; for federated users, use Method 2.
+ Use tags supplied by AWS in an IAM permissions policy. In the policy, you include a condition that allows users to end only sessions that are tagged with specific tags that AWS has supplied. This method works for all accounts, including those that use federated IDs to grant access to AWS.

### Method 1: Grant TerminateSession privileges using the variable `{aws:username}`


The following IAM policy allows a user to view the IDs of all sessions in your account. However, users can interact with managed nodes only through sessions they started. A user who is assigned the following policy can't connect to or end other users' sessions. The policy uses the variable `{aws:username}` to achieve this.

**Note**  
This method doesn't work for accounts that grant access to AWS using federated IDs.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Action": [
                "ssm:DescribeSessions"
            ],
            "Effect": "Allow",
            "Resource": [
                "*"
            ]
        },
        {
            "Action": [
                "ssm:TerminateSession"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:ssm:*:*:session/${aws:username}-*"
            ]
        }
    ]
}
```

------

### Method 2: Grant TerminateSession privileges using tags supplied by AWS


You can control which sessions a user can end by including condition tag key variables in an IAM policy. The condition specifies that the user can end only sessions that are tagged with one or both of these specific tag keys and a specified value.

When a user in your AWS account starts a session, Session Manager applies two resource tags to the session. The first resource tag is `aws:ssmmessages:target-id`, whose value is the ID of the target node for the session. The other resource tag is `aws:ssmmessages:session-id`, whose value is either the ID of the user or, for an assumed role, a value in the format `role-id:caller-specified-role-name`.

**Note**  
Session Manager doesn’t support custom tags for this IAM access control policy. You must use the resource tags supplied by AWS, described below. 

 ** `aws:ssmmessages:target-id` **   
With this tag key, you include the managed node ID as the value in the policy. In the following policy block, the condition statement allows a user to end sessions only on the node i-02573cafcfEXAMPLE.    
****  

```
{
     "Version":"2012-10-17",		 	 	 
     "Statement": [
         {
             "Effect": "Allow",
             "Action": [
                "ssm:TerminateSession"
             ],
             "Resource": "*",
             "Condition": {
                 "StringLike": {
                     "ssm:resourceTag/aws:ssmmessages:target-id": [
                        "i-02573cafcfEXAMPLE"
                     ]
                 }
             }
         }
     ]
}
```
If the user tries to end a session for which they haven’t been granted this `TerminateSession` permission, they receive an `AccessDeniedException` error.

 ** `aws:ssmmessages:session-id` **   
The value of this tag key is set when the session is started and identifies the caller.  
The following example demonstrates a policy for cases where the caller type is `User`. The value you supply for `aws:ssmmessages:session-id` is the ID of the user. In this example, `AIDIODR4TAW7CSEXAMPLE` represents the ID of a user in your AWS account. To retrieve the ID for a user in your AWS account, use the `get-user` command. For more information, see [get-user](https://docs.aws.amazon.com/IAM/latest/UserGuide/get-user.html) in the *IAM User Guide*.  
****  

```
{
     "Version":"2012-10-17",		 	 	 
     "Statement": [
         {
             "Effect": "Allow",
             "Action": [
                "ssm:TerminateSession"
             ],
             "Resource": "*",
             "Condition": {
                 "StringLike": {
                     "ssm:resourceTag/aws:ssmmessages:session-id": [
                        "AIDIODR4TAW7CSEXAMPLE"
                     ]
                 }
             }
         }
     ]
}
```
The following example demonstrates a policy for cases where the caller type is `AssumedRole`. You can use the `{aws:userid}` variable for the value you supply for `aws:ssmmessages:session-id`. Alternatively, you can hardcode a role ID for the value you supply for `aws:ssmmessages:session-id`. If you hardcode a role ID, you must provide the value in the format `role-id:caller-specified-role-name`. For example, `AIDIODR4TAW7CSEXAMPLE:MyRole`.  
In order for system tags to be applied, the role ID you supply can contain the following characters only: Unicode letters, 0-9, space, `_`, `.`, `:`, `/`, `=`, `+`, `-`, `@`, and `\`.
To retrieve the role ID for a role in your AWS account, use the `get-caller-identity` command. For information, see [get-caller-identity](https://docs.aws.amazon.com/cli/latest/reference/sts/get-caller-identity.html) in the AWS CLI Command Reference.     
****  

```
{
     "Version":"2012-10-17",		 	 	 
     "Statement": [
         {
             "Effect": "Allow",
             "Action": [
                "ssm:TerminateSession"
             ],
             "Resource": "*",
             "Condition": {
                 "StringLike": {
                     "ssm:resourceTag/aws:ssmmessages:session-id": [
                        "${aws:userid}*"
                     ]
                 }
             }
         }
     ]
}
```
If a user tries to end a session for which they haven’t been granted this `TerminateSession` permission, they receive an `AccessDeniedException` error.

**`aws:ssmmessages:target-id`** and **`aws:ssmmessages:session-id`**  
You can also create IAM policies that allow a user to end sessions that are tagged with both system tags, as shown in this example.    
****  

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "ssm:TerminateSession"
         ],
         "Resource":"*",
         "Condition":{
            "StringLike":{
               "ssm:resourceTag/aws:ssmmessages:target-id":[
                  "i-02573cafcfEXAMPLE"
               ],
               "ssm:resourceTag/aws:ssmmessages:session-id":[
                  "${aws:userid}*"
               ]
            }
         }
      }
   ]
}
```
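
As a quick illustration of the character restriction mentioned earlier for hardcoded `role-id:caller-specified-role-name` values, the following Python sketch checks a candidate value against that character list. The helper is hypothetical, and the regular expression is an assumption built from the characters listed above (Unicode letters, 0-9, space, `_`, `.`, `:`, `/`, `=`, `+`, `-`, `@`, and `\`).

```python
import re

# \w covers Unicode letters, digits, and underscore; the rest of the class
# lists the remaining allowed characters from the documentation above.
ALLOWED = re.compile(r"[\w .:/=+\-@\\]+")

def valid_session_id_value(value: str) -> bool:
    return ALLOWED.fullmatch(value) is not None

print(valid_session_id_value("AIDIODR4TAW7CSEXAMPLE:MyRole"))   # True
print(valid_session_id_value("AIDIODR4TAW7CSEXAMPLE:My#Role"))  # False: '#' isn't allowed
```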

## Example 5: Allow full (administrative) access to all sessions


The following IAM policy allows a user to fully interact with all managed nodes and with all sessions created by any user on any node. Grant it only to an administrator who needs full control over your organization's Session Manager activities.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Action": [
                "ssm:StartSession",
                "ssm:TerminateSession",
                "ssm:ResumeSession",
                "ssm:DescribeSessions",
                "ssm:GetConnectionStatus"
            ],
            "Effect": "Allow",
            "Resource": [
                "*"
            ]
        },
        {
         "Effect": "Allow",
         "Action": ["ssmmessages:OpenDataChannel"],
         "Resource": ["arn:aws:ssm:*:*:session/${aws:userid}-*"]
       }
    ]
}
```

------

# Step 4: Configure session preferences


Users that have been granted administrative permissions in their AWS Identity and Access Management (IAM) policy can configure session preferences, including the following:
+ Turn on Run As support for Linux managed nodes. This makes it possible to start sessions using the credentials of a specified operating system user instead of the credentials of a system-generated `ssm-user` account that AWS Systems Manager Session Manager can create on a managed node.
+ Configure Session Manager to use AWS KMS key encryption to provide additional protection to the data transmitted between client machines and managed nodes.
+ Configure Session Manager to create and send session history logs to an Amazon Simple Storage Service (Amazon S3) bucket or an Amazon CloudWatch Logs log group. The stored log data can then be used to report on the session connections made to your managed nodes and the commands run on them during the sessions.
+ Configure session timeouts. You can use this setting to specify when to end a session after a period of inactivity.
+ Configure Session Manager to use configurable shell profiles. These customizable profiles allow you to define session preferences such as the shell, environment variables, and working directory, and to run multiple commands when a session is started.

For more information about the permissions needed to configure Session Manager preferences, see [Grant or deny a user permissions to update Session Manager preferences](preference-setting-permissions.md).

**Topics**
+ [

# Grant or deny a user permissions to update Session Manager preferences
](preference-setting-permissions.md)
+ [

# Specify an idle session timeout value
](session-preferences-timeout.md)
+ [

# Specify maximum session duration
](session-preferences-max-timeout.md)
+ [

# Allow configurable shell profiles
](session-preferences-shell-config.md)
+ [

# Turn on Run As support for Linux and macOS managed nodes
](session-preferences-run-as.md)
+ [

# Turn on KMS key encryption of session data (console)
](session-preferences-enable-encryption.md)
+ [

# Create a Session Manager preferences document (command line)
](getting-started-create-preferences-cli.md)
+ [

# Update Session Manager preferences (command line)
](getting-started-configure-preferences-cli.md)

For information about using the Systems Manager console to configure options for logging session data, see the following topics:
+  [Logging session data using Amazon S3 (console)](session-manager-logging-s3.md) 
+  [Streaming session data using Amazon CloudWatch Logs (console)](session-manager-logging-cwl-streaming.md) 
+  [Logging session data using Amazon CloudWatch Logs (console)](session-manager-logging-cloudwatch-logs.md) 

# Grant or deny a user permissions to update Session Manager preferences


Account preferences are stored as AWS Systems Manager (SSM) documents for each AWS Region. Before a user can update account preferences for sessions in your account, they must be granted the necessary permissions to access the type of SSM document where these preferences are stored. These permissions are granted through an AWS Identity and Access Management (IAM) policy.

**Administrator policy to allow preferences to be created and updated**  
An administrator can use the following policy to create and update preferences at any time. The policy allows permission to create, access, update, and delete the `SSM-SessionManagerRunShell` document in the us-east-1 Region of account 111122223333. 

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Action": [
                "ssm:CreateDocument",
                "ssm:GetDocument",
                "ssm:UpdateDocument",
                "ssm:DeleteDocument"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:ssm:us-east-1:111122223333:document/SSM-SessionManagerRunShell"
            ]
        }
    ]
}
```

------

**User policy to prevent preferences from being updated**  
Use the following policy to prevent end users in your account from updating or overriding any Session Manager preferences. 

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Action": [
                "ssm:CreateDocument",
                "ssm:GetDocument",
                "ssm:UpdateDocument",
                "ssm:DeleteDocument"
            ],
            "Effect": "Deny",
            "Resource": [
                "arn:aws:ssm:us-east-1:111122223333:document/SSM-SessionManagerRunShell"
            ]
        }
    ]
}
```

------
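
For orientation, the preferences these policies protect are stored in a Session-type document with a `sessionType` of `Standard_Stream`. The following Python sketch assembles an illustrative preferences body; the input names shown (`idleSessionTimeout`, `maxSessionDuration`, `runAsEnabled`, `runAsDefaultUser`) are assumptions based on the session document schema, not a complete reference.

```python
import json

# Illustrative content for an SSM-SessionManagerRunShell preferences document.
preferences = {
    "schemaVersion": "1.0",
    "description": "Session Manager account preferences",
    "sessionType": "Standard_Stream",
    "inputs": {
        "idleSessionTimeout": "20",   # minutes of inactivity before the session ends
        "maxSessionDuration": "60",   # optional hard cap, in minutes
        "runAsEnabled": False,
        "runAsDefaultUser": ""
    }
}
print(json.dumps(preferences, indent=4))
```

A body like this would be supplied when creating or updating the document; see the preferences command line topics listed earlier for the documented procedure.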

# Specify an idle session timeout value


Session Manager, a tool in AWS Systems Manager, allows you to specify how long a user can be inactive before the system ends a session. By default, sessions time out after 20 minutes of inactivity. You can modify this setting to any value from 1 to 60 minutes. Some professional computing security agencies recommend setting idle session timeouts to a maximum of 15 minutes. 

The idle session timeout timer resets when Session Manager receives client-side inputs. These inputs include, but are not limited to:
+ Keyboard input in the terminal
+ Terminal or browser window resize events
+ Session reconnection (ResumeSession), which can occur due to network interruptions, browser tab management, or WebSocket disconnections

Because these events reset the idle timer, a session might remain active longer than the configured timeout period even without direct terminal commands.

If your security requirements mandate strict session duration limits regardless of activity, use the *Maximum session duration* setting in addition to idle timeout. For more information, see [Specify maximum session duration](session-preferences-max-timeout.md).
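The interaction between the two settings can be sketched as follows. The helper below is hypothetical (it isn't part of Session Manager); it only illustrates that the idle deadline moves with client input, while the maximum session duration is a fixed cap measured from session start.

```python
# Times are in minutes since session start, for simplicity.
def session_ends_at(start_min, last_input_min, idle_timeout_min, max_duration_min=None):
    idle_deadline = last_input_min + idle_timeout_min
    if max_duration_min is None:
        return idle_deadline          # only the idle timer applies
    return min(idle_deadline, start_min + max_duration_min)

# Input at minute 50 with a 20-minute idle timeout would keep the session
# alive until minute 70, but a 60-minute maximum duration ends it at 60.
print(session_ends_at(0, 50, 20, 60))  # 60
print(session_ends_at(0, 50, 20))      # 70
```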

**To allow idle session timeout (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Session Manager**.

1. Choose the **Preferences** tab, and then choose **Edit**.

1. Specify the amount of time to allow a user to be inactive before a session ends in the **minutes** field under **Idle session timeout**.

1. Choose **Save**.

# Specify maximum session duration


Session Manager, a tool in AWS Systems Manager, allows you to specify the maximum duration of a session before it ends. By default, sessions do not have a maximum duration. The value you specify for maximum session duration must be between 1 and 1,440 minutes.

**To specify maximum session duration (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Session Manager**.

1. Choose the **Preferences** tab, and then choose **Edit**.

1. Select the check box next to **Enable maximum session duration**.

1. Specify the maximum duration of a session before it ends in the **minutes** field under **Maximum session duration**.

1. Choose **Save**.

# Allow configurable shell profiles


By default, sessions on EC2 instances for Linux start using the Bourne shell (sh). However, you might prefer to use another shell, such as bash. By allowing configurable shell profiles, you can customize session preferences such as the shell, environment variables, and working directory, and run multiple commands when a session is started.

**Important**  
Systems Manager doesn't check the commands or scripts in your shell profile to see what changes they would make to an instance before they're run. To restrict a user’s ability to modify commands or scripts entered in their shell profile, we recommend the following:  
Create a customized Session-type document for your AWS Identity and Access Management (IAM) users and roles. Then modify the IAM policy for these users and roles so the `StartSession` API operation can only use the Session-type document you have created for them. For more information, see [Create a Session Manager preferences document (command line)](getting-started-create-preferences-cli.md) and [Quickstart end user policies for Session Manager](getting-started-restrict-access-quickstart.md#restrict-access-quickstart-end-user).  
Modify the IAM policy for your IAM users and roles to deny access to the `UpdateDocument` API operation for the Session-type document resource you create. This allows your users and roles to use the document you created for their session preferences without allowing them to modify any of the settings.

**To turn on configurable shell profiles**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Session Manager**.

1. Choose the **Preferences** tab, and then choose **Edit**.

1. Specify the environment variables, shell preferences, or commands you want to run when your session starts in the fields for the applicable operating systems.

1. Choose **Save**.

The following are some example commands that can be added to your shell profile.

Change to the bash shell and change to the /usr directory on Linux instances.

```
exec /bin/bash
cd /usr
```

Output a timestamp and welcome message at the start of a session.

------
#### [ Linux & macOS ]

```
timestamp=$(date '+%Y-%m-%dT%H:%M:%SZ')
user=$(whoami)
echo $timestamp && echo "Welcome $user"'!'
echo "You have logged in to a production instance. Note that all session activity is being logged."
```

------
#### [  Windows  ]

```
$timestamp = (Get-Date).ToString("yyyy-MM-ddTHH:mm:ssZ")
$splitName = (whoami).Split("\")
$user = $splitName[1]
Write-Host $timestamp
Write-Host "Welcome $user!"
Write-Host "You have logged in to a production instance. Note that all session activity is being logged."
```

------

View dynamic system activity at the start of a session.

------
#### [ Linux & macOS ]

```
top
```

------
#### [  Windows  ]

```
while ($true) { Get-Process | Sort-Object -Descending CPU | Select-Object -First 30; `
Start-Sleep -Seconds 2; cls
Write-Host "Handles  NPM(K)    PM(K)      WS(K) VM(M)   CPU(s)     Id ProcessName"; 
Write-Host "-------  ------    -----      ----- -----   ------     -- -----------"}
```

------

# Turn on Run As support for Linux and macOS managed nodes


By default, Session Manager authenticates connections using the credentials of the system-generated `ssm-user` account that is created on a managed node. (On Linux and macOS machines, this account is added to `/etc/sudoers`.) If you choose, you can instead authenticate sessions using the credentials of an operating system (OS) user account, or a domain user for instances joined to an Active Directory. In this case, Session Manager verifies that the OS account that you specified exists on the node, or in the domain, before starting the session. If you attempt to start a session using an OS account that doesn't exist on the node, or in the domain, the connection fails.

**Note**  
Session Manager does not support using an operating system's `root` user account to authenticate connections. For sessions that are authenticated using an OS user account, the node's OS-level and directory policies, like login restrictions or system resource usage restrictions, might not apply. 

**How it works**  
If you turn on Run As support for sessions, the system checks for access permissions as follows:

1. For the user who is starting the session, has their IAM entity (user or role) been tagged with `SSMSessionRunAs = os user account name`?

   If Yes, does the OS user name exist on the managed node? If it does, start the session. If it doesn't, don't allow a session to start.

   If the IAM entity has *not* been tagged with `SSMSessionRunAs = os user account name`, continue to step 2.

1. If the IAM entity hasn't been tagged with `SSMSessionRunAs = os user account name`, has an OS user name been specified in the AWS account's Session Manager preferences?

   If Yes, does the OS user name exist on the managed node? If it does, start the session. If it doesn't, don't allow a session to start. 

**Note**  
When you activate Run As support, it prevents Session Manager from starting sessions using the `ssm-user` account on a managed node. This means that if Session Manager fails to connect using the specified OS user account, it doesn't fall back to connecting using the default method.   
If you activate Run As without specifying an OS account or tagging an IAM entity, and you have not specified an OS account in Session Manager preferences, session connection attempts will fail.
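
The access check above can be sketched in code. The following is a minimal illustration only, not the service implementation; the tag key `SSMSessionRunAs` is real, but the function and argument names are hypothetical.

```
def resolve_run_as_user(iam_tags, preferences_user, os_users):
    """Return the OS account to start the session as, or None to refuse.

    iam_tags: dict of tags on the calling IAM user or role.
    preferences_user: OS user name from Session Manager preferences ("" if unset).
    os_users: set of OS account names that exist on the managed node.
    """
    # Step 1: a tag on the IAM entity takes precedence.
    candidate = iam_tags.get("SSMSessionRunAs")
    # Step 2: otherwise fall back to the account-level preference.
    if candidate is None:
        candidate = preferences_user or None
    # With Run As turned on, there is no fallback to ssm-user:
    # no valid candidate account means no session.
    if candidate is None or candidate not in os_users:
        return None
    return candidate
```

For example, an IAM role tagged with `SSMSessionRunAs = appadmin` resolves to the `appadmin` account when that account exists on the node, and to no session at all when it doesn't.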

**To turn on Run As support for Linux and macOS managed nodes**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Session Manager**.

1. Choose the **Preferences** tab, and then choose **Edit**.

1. Select the check box next to **Enable Run As support for Linux instances**.

1. Do one of the following:
   + **Option 1**: In the **Operating system user name** field, enter the name of the OS user account that you want to use to start sessions. Using this option, all sessions are run by the same OS user for all users in your AWS account who connect using Session Manager.
   + **Option 2** (Recommended): Choose the **Open the IAM console** link. In the navigation pane, choose either **Users** or **Roles**. Choose the entity (user or role) to add tags to, and then choose the **Tags** tab. Enter `SSMSessionRunAs` for the key name. Enter the name of an OS user account for the key value. Choose **Save changes**.

     Using this option, you can specify unique OS users for different IAM entities if you choose. For more information about tagging IAM entities (users or roles), see [Tagging IAM resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags.html) in the *IAM User Guide*.

     The following is an example.  
![\[Screenshot of specifying tags for Session Manager Run As permission.\]](https://docs.aws.amazon.com/systems-manager/latest/userguide/images/ssn-run-as-tags.png)

1. Choose **Save**.

# Turn on KMS key encryption of session data (console)


Use AWS Key Management Service (AWS KMS) to create and manage encryption keys. With AWS KMS, you can control the use of encryption across a wide range of AWS services and in your applications. You can specify that session data transmitted between your managed nodes and the local machines of users in your AWS account is encrypted using KMS key encryption. (This is in addition to the TLS 1.2/1.3 encryption that AWS already provides by default.) To encrypt Session Manager session data, create a *symmetric* KMS key using AWS KMS.

AWS KMS encryption is available for `Standard_Stream`, `InteractiveCommands`, and `NonInteractiveCommands` session types. To use the option to encrypt session data using a key created in AWS KMS, version 2.3.539.0 or later of AWS Systems Manager SSM Agent must be installed on the managed node. 

**Note**  
You must allow AWS KMS encryption in order to reset passwords on your managed nodes from the AWS Systems Manager console. For more information, see [Reset a password on a managed node](fleet-manager-reset-password.md#managed-instance-reset-a-password).

You can use a key created in your own AWS account, or a key created in a different AWS account, provided that the key's creator grants you the permissions needed to use it.

After you turn on KMS key encryption for your session data, both the users who start sessions and the managed nodes that they connect to must have permission to use the key. You provide permission to use the KMS key with Session Manager through AWS Identity and Access Management (IAM) policies. For information, see the following topics:
+ Add AWS KMS permissions for users in your account: [Sample IAM policies for Session Manager](getting-started-restrict-access-quickstart.md).
+ Add AWS KMS permissions for managed nodes in your account: [Step 2: Verify or add instance permissions for Session Manager](session-manager-getting-started-instance-profile.md).

For more information about creating and managing KMS keys, see the [*AWS Key Management Service Developer Guide*](https://docs.aws.amazon.com/kms/latest/developerguide/).

For information about using the AWS CLI to turn on KMS key encryption of session data in your account, see [Create a Session Manager preferences document (command line)](getting-started-create-preferences-cli.md) or [Update Session Manager preferences (command line)](getting-started-configure-preferences-cli.md).

**Note**  
There is a charge to use KMS keys. For information, see [AWS Key Management Service pricing](https://aws.amazon.com/kms/pricing/).

**To turn on KMS key encryption of session data (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Session Manager**.

1. Choose the **Preferences** tab, and then choose **Edit**.

1. Select the check box next to **Enable KMS encryption**.

1. Do one of the following:
   + Choose the button next to **Select a KMS key in my current account**, then select a key from the list.

     -or-

     Choose the button next to **Enter a KMS key alias or KMS key ARN**. Manually enter a KMS key alias for a key created in your current account, or enter the key Amazon Resource Name (ARN) for a key in another account. The following are examples:
     + Key alias: `alias/my-kms-key-alias`
     + Key ARN: `arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-12345EXAMPLE`

     -or-

     Choose **Create new key** to create a new KMS key in your account. After you create the new key, return to the **Preferences** tab and select the key for encrypting session data in your account.

   For more information about sharing keys, see [Allowing external AWS accounts to access a key](https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying.html#key-policy-modifying-external-accounts) in the *AWS Key Management Service Developer Guide*.

1. Choose **Save**.

# Create a Session Manager preferences document (command line)


Use the following procedure to create SSM documents that define your preferences for AWS Systems Manager Session Manager sessions. You can use the document to configure session options including data encryption, session duration, and logging. For example, you can specify whether to store session log data in an Amazon Simple Storage Service (Amazon S3) bucket or Amazon CloudWatch Logs log group. You can create documents that define general preferences for all sessions for an AWS account and AWS Region, or that define preferences for individual sessions. 

**Note**  
You can also configure general session preferences by using the Session Manager console.

Documents used to set Session Manager preferences must have a `sessionType` of `Standard_Stream`. For more information about Session documents, see [Session document schema](session-manager-schema.md).

For information about using the command line to update existing Session Manager preferences, see [Update Session Manager preferences (command line)](getting-started-configure-preferences-cli.md).

For an example of how to create session preferences using CloudFormation, see [Create a Systems Manager document for Session Manager preferences](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssm-document.html#aws-resource-ssm-document--examples) in the *AWS CloudFormation User Guide*.

**Note**  
This procedure describes how to create documents for setting Session Manager preferences at the AWS account level. To create documents that will be used for setting session-level preferences, specify a value other than `SSM-SessionManagerRunShell` for the document name in the related command inputs.  
To use your document to set preferences for sessions started from the AWS Command Line Interface (AWS CLI), provide the document name as the `--document-name` parameter value. To set preferences for sessions started from the Session Manager console, you can type or select the name of your document from a list.

**To create Session Manager preferences (command line)**

1. Create a JSON file on your local machine with a name such as `SessionManagerRunShell.json`, and then paste the following content into it.

   ```
   {
       "schemaVersion": "1.0",
       "description": "Document to hold regional settings for Session Manager",
       "sessionType": "Standard_Stream",
       "inputs": {
           "s3BucketName": "",
           "s3KeyPrefix": "",
           "s3EncryptionEnabled": true,
           "cloudWatchLogGroupName": "",
           "cloudWatchEncryptionEnabled": true,
           "cloudWatchStreamingEnabled": false,
           "kmsKeyId": "",
           "runAsEnabled": false,
           "runAsDefaultUser": "",
           "idleSessionTimeout": "",
           "maxSessionDuration": "",
           "shellProfile": {
               "windows": "date",
               "linux": "pwd;ls"
           }
       }
   }
   ```

   You can also pass values to your session preferences using parameters instead of hardcoding the values as shown in the following example.

   ```
   {
      "schemaVersion":"1.0",
      "description":"Session Document Parameter Example JSON Template",
      "sessionType":"Standard_Stream",
      "parameters":{
         "s3BucketName":{
            "type":"String",
            "default":""
         },
         "s3KeyPrefix":{
            "type":"String",
            "default":""
         },
         "s3EncryptionEnabled":{
            "type":"Boolean",
            "default":"false"
         },
         "cloudWatchLogGroupName":{
            "type":"String",
            "default":""
         },
         "cloudWatchEncryptionEnabled":{
            "type":"Boolean",
            "default":"false"
         }
      },
      "inputs":{
         "s3BucketName":"{{s3BucketName}}",
         "s3KeyPrefix":"{{s3KeyPrefix}}",
         "s3EncryptionEnabled":"{{s3EncryptionEnabled}}",
         "cloudWatchLogGroupName":"{{cloudWatchLogGroupName}}",
         "cloudWatchEncryptionEnabled":"{{cloudWatchEncryptionEnabled}}",
         "kmsKeyId":""
      }
   }
   ```
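
When a session starts, each `{{name}}` placeholder in `inputs` is resolved from the supplied parameter values, falling back to the declared defaults. The following is a rough local illustration of that substitution, not the actual service code; the function name is hypothetical.

```
import re

def resolve_inputs(inputs, parameters, supplied):
    """Replace {{name}} placeholders with supplied values or declared defaults."""
    def value_for(match):
        name = match.group(1)
        if name in supplied:
            return supplied[name]
        return parameters[name].get("default", "")
    return {k: re.sub(r"\{\{(\w+)\}\}", value_for, v) if isinstance(v, str) else v
            for k, v in inputs.items()}
```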

1. Specify where you want to send session data. You can specify an S3 bucket name (with an optional prefix) or a CloudWatch Logs log group name. If you want to further encrypt data between users' local clients and the managed nodes, provide the KMS key to use for encryption. The following is an example.

   ```
   {
     "schemaVersion": "1.0",
     "description": "Document to hold regional settings for Session Manager",
     "sessionType": "Standard_Stream",
     "inputs": {
       "s3BucketName": "amzn-s3-demo-bucket",
       "s3KeyPrefix": "MyS3Prefix",
       "s3EncryptionEnabled": true,
       "cloudWatchLogGroupName": "MyLogGroupName",
       "cloudWatchEncryptionEnabled": true,
       "cloudWatchStreamingEnabled": false,
       "kmsKeyId": "MyKMSKeyID",
       "runAsEnabled": true,
       "runAsDefaultUser": "MyDefaultRunAsUser",
       "idleSessionTimeout": "20",
       "maxSessionDuration": "60",
       "shellProfile": {
           "windows": "MyCommands",
           "linux": "MyCommands"
       }
     }
   }
   ```
**Note**  
If you don't want to encrypt the session log data, change `true` to `false` for `s3EncryptionEnabled`.  
If you aren't sending logs to either an Amazon S3 bucket or a CloudWatch Logs log group, don't want to encrypt active session data, or don't want to turn on Run As support for the sessions in your account, you can delete the lines for those options. Make sure the last line in the `inputs` section doesn't end with a comma.  
If you add a KMS key ID to encrypt your session data, both the users who start sessions and the managed nodes that they connect to must have permission to use the key. You provide permission to use the KMS key with Session Manager through IAM policies. For information, see the following topics:  
Add AWS KMS permissions for users in your account: [Sample IAM policies for Session Manager](getting-started-restrict-access-quickstart.md)
Add AWS KMS permissions for managed nodes in your account: [Step 2: Verify or add instance permissions for Session Manager](session-manager-getting-started-instance-profile.md)
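
Because hand-trimming this file is where trailing-comma mistakes usually creep in, you can verify that the edited file is still valid JSON before running `create-document`. The following is an optional local check; any JSON linter works equally well.

```
import json

# A trimmed preferences document: logging, KMS, and Run As lines removed.
trimmed = """
{
    "schemaVersion": "1.0",
    "description": "Document to hold regional settings for Session Manager",
    "sessionType": "Standard_Stream",
    "inputs": {
        "idleSessionTimeout": "20",
        "maxSessionDuration": "60"
    }
}
"""
doc = json.loads(trimmed)  # raises json.JSONDecodeError on a syntax error
assert doc["sessionType"] == "Standard_Stream"

# Deleting lines often leaves a trailing comma, which is invalid JSON:
try:
    json.loads('{"inputs": {"idleSessionTimeout": "20",}}')
except json.JSONDecodeError:
    pass  # caught locally, before the service rejects the document
else:
    raise AssertionError("expected a JSONDecodeError")
```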

1. Save the file.

1. In the directory where you created the JSON file, run the following command.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-document \
       --name SSM-SessionManagerRunShell \
       --content "file://SessionManagerRunShell.json" \
       --document-type "Session" \
       --document-format JSON
   ```

------
#### [  Windows  ]

   ```
   aws ssm create-document ^
       --name SSM-SessionManagerRunShell ^
       --content "file://SessionManagerRunShell.json" ^
       --document-type "Session" ^
       --document-format JSON
   ```

------
#### [   PowerShell   ]

   ```
   New-SSMDocument `
       -Name "SSM-SessionManagerRunShell" `
       -Content (Get-Content -Raw SessionManagerRunShell.json) `
       -DocumentType "Session" `
       -DocumentFormat JSON
   ```

------

   If successful, the command returns output similar to the following.

   ```
   {
       "DocumentDescription": {
           "Status": "Creating",
           "Hash": "ce4fd0a2ab9b0fae759004ba603174c3ec2231f21a81db8690a33eb66EXAMPLE",
           "Name": "SSM-SessionManagerRunShell",
           "Tags": [],
           "DocumentType": "Session",
           "PlatformTypes": [
               "Windows",
               "Linux"
           ],
           "DocumentVersion": "1",
           "HashType": "Sha256",
           "CreatedDate": 1547750660.918,
           "Owner": "111122223333",
           "SchemaVersion": "1.0",
           "DefaultVersion": "1",
           "DocumentFormat": "JSON",
           "LatestVersion": "1"
       }
   }
   ```

# Update Session Manager preferences (command line)


The following procedure describes how to use your preferred command line tool to make changes to the AWS Systems Manager Session Manager preferences for your AWS account in the selected AWS Region. Use Session Manager preferences to specify options for logging session data in an Amazon Simple Storage Service (Amazon S3) bucket or Amazon CloudWatch Logs log group. You can also use Session Manager preferences to encrypt your session data.

**To update Session Manager preferences (command line)**

1. Create a JSON file on your local machine with a name such as `SessionManagerRunShell.json`, and then paste the following content into it.

   ```
   {
       "schemaVersion": "1.0",
       "description": "Document to hold regional settings for Session Manager",
       "sessionType": "Standard_Stream",
       "inputs": {
           "s3BucketName": "",
           "s3KeyPrefix": "",
           "s3EncryptionEnabled": true,
           "cloudWatchLogGroupName": "",
           "cloudWatchEncryptionEnabled": true,
           "cloudWatchStreamingEnabled": false,
           "kmsKeyId": "",
           "runAsEnabled": true,
           "runAsDefaultUser": "",
           "idleSessionTimeout": "",
           "maxSessionDuration": "",
           "shellProfile": {
               "windows": "date",
               "linux": "pwd;ls"
           }
       }
   }
   ```

1. Specify where you want to send session data. You can specify an S3 bucket name (with an optional prefix) or a CloudWatch Logs log group name. If you want to further encrypt data between users' local clients and the managed nodes, provide the AWS KMS key to use for encryption. The following is an example.

   ```
   {
     "schemaVersion": "1.0",
     "description": "Document to hold regional settings for Session Manager",
     "sessionType": "Standard_Stream",
     "inputs": {
       "s3BucketName": "amzn-s3-demo-bucket",
       "s3KeyPrefix": "MyS3Prefix",
       "s3EncryptionEnabled": true,
       "cloudWatchLogGroupName": "MyLogGroupName",
       "cloudWatchEncryptionEnabled": true,
       "cloudWatchStreamingEnabled": false,
       "kmsKeyId": "MyKMSKeyID",
       "runAsEnabled": true,
       "runAsDefaultUser": "MyDefaultRunAsUser",
       "idleSessionTimeout": "20",
       "maxSessionDuration": "60",
       "shellProfile": {
           "windows": "MyCommands",
           "linux": "MyCommands"
       }
     }
   }
   ```
**Note**  
If you don't want to encrypt the session log data, change `true` to `false` for `s3EncryptionEnabled`.  
If you aren't sending logs to either an Amazon S3 bucket or a CloudWatch Logs log group, don't want to encrypt active session data, or don't want to turn on Run As support for the sessions in your account, you can delete the lines for those options. Make sure the last line in the `inputs` section doesn't end with a comma.  
If you add a KMS key ID to encrypt your session data, both the users who start sessions and the managed nodes that they connect to must have permission to use the key. You provide permission to use the KMS key with Session Manager through AWS Identity and Access Management (IAM) policies. For information, see the following topics:  
Add AWS KMS permissions for users in your account: [Sample IAM policies for Session Manager](getting-started-restrict-access-quickstart.md).
Add AWS KMS permissions for managed nodes in your account: [Step 2: Verify or add instance permissions for Session Manager](session-manager-getting-started-instance-profile.md).

1. Save the file.

1. In the directory where you created the JSON file, run the following command.

------
#### [ Linux & macOS ]

   ```
   aws ssm update-document \
       --name "SSM-SessionManagerRunShell" \
       --content "file://SessionManagerRunShell.json" \
       --document-version "\$LATEST"
   ```

------
#### [  Windows  ]

   ```
   aws ssm update-document ^
       --name "SSM-SessionManagerRunShell" ^
       --content "file://SessionManagerRunShell.json" ^
       --document-version "$LATEST"
   ```

------
#### [   PowerShell   ]

   ```
   Update-SSMDocument `
       -Name "SSM-SessionManagerRunShell" `
       -Content (Get-Content -Raw SessionManagerRunShell.json) `
       -DocumentVersion '$LATEST'
   ```

------

   If successful, the command returns output similar to the following.

   ```
   {
       "DocumentDescription": {
           "Status": "Updating",
           "Hash": "ce4fd0a2ab9b0fae759004ba603174c3ec2231f21a81db8690a33eb66EXAMPLE",
           "Name": "SSM-SessionManagerRunShell",
           "Tags": [],
           "DocumentType": "Session",
           "PlatformTypes": [
               "Windows",
               "Linux"
           ],
           "DocumentVersion": "2",
           "HashType": "Sha256",
           "CreatedDate": 1537206341.565,
           "Owner": "111122223333",
           "SchemaVersion": "1.0",
           "DefaultVersion": "1",
           "DocumentFormat": "JSON",
           "LatestVersion": "2"
       }
   }
   ```

# Step 5: (Optional) Restrict access to commands in a session


You can restrict the commands that a user can run in an AWS Systems Manager Session Manager session by using a custom `Session` type AWS Systems Manager (SSM) document. In the document, you define the command that is run when the user starts a session and the parameters that the user can provide to the command. The `Session` document `schemaVersion` must be 1.0, and the `sessionType` of the document must be `InteractiveCommands`. You can then create AWS Identity and Access Management (IAM) policies that allow users to access only the `Session` documents that you define. For more information about using IAM policies to restrict access to commands in a session, see [IAM policy examples for interactive commands](#interactive-command-policy-examples).

Documents with the `sessionType` of `InteractiveCommands` are only supported for sessions started from the AWS Command Line Interface (AWS CLI). The user provides the custom document name as the `--document-name` parameter value and provides any command parameter values using the `--parameters` option. For more information about running interactive commands, see [Starting a session (interactive and noninteractive commands)](session-manager-working-with-sessions-start.md#sessions-start-interactive-commands).

Use the following procedure to create a custom `Session` type SSM document that defines the command a user is allowed to run.

## Restrict access to commands in a session (console)


**To restrict the commands a user can run in a Session Manager session (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Documents**.

1. Choose **Create command or session**.

1. For **Name**, enter a descriptive name for the document.

1. For **Document type**, choose **Session document**.

1. Enter your document content that defines the command a user can run in a Session Manager session using JSON or YAML, as shown in the following example.

------
#### [ YAML ]

   ```
   ---
   schemaVersion: '1.0'
   description: Document to view a log file on a Linux instance
   sessionType: InteractiveCommands
   parameters:
     logpath:
       type: String
       description: The log file path to read.
       default: "/var/log/amazon/ssm/amazon-ssm-agent.log"
       allowedPattern: "^[a-zA-Z0-9-_/]+(.log)$"
   properties:
     linux:
       commands: "tail -f {{ logpath }}"
       runAsElevated: true
   ```

------
#### [ JSON ]

   ```
   {
       "schemaVersion": "1.0",
       "description": "Document to view a log file on a Linux instance",
       "sessionType": "InteractiveCommands",
       "parameters": {
           "logpath": {
               "type": "String",
               "description": "The log file path to read.",
               "default": "/var/log/amazon/ssm/amazon-ssm-agent.log",
               "allowedPattern": "^[a-zA-Z0-9-_/]+(.log)$"
           }
       },
       "properties": {
           "linux": {
               "commands": "tail -f {{ logpath }}",
               "runAsElevated": true
           }
       }
   }
   ```

------

1. Choose **Create document**.
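
The `allowedPattern` in this document is an ordinary regular expression, so you can check locally which values it accepts before users depend on it. Note that the unescaped `.` in `(.log)` matches any single character, not only a literal dot.

```
import re

pattern = re.compile(r"^[a-zA-Z0-9-_/]+(.log)$")

# The default log path is accepted.
assert pattern.match("/var/log/amazon/ssm/amazon-ssm-agent.log")
# Paths without a .log suffix are rejected.
assert not pattern.match("/etc/passwd")
# Because the dot is unescaped, any character before "log" passes too.
assert pattern.match("/tmp/catalog")
```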

## Restrict access to commands in a session (command line)


**Before you begin**  
If you haven't already, install and configure the AWS Command Line Interface (AWS CLI) or the AWS Tools for PowerShell. For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [Installing the AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html).

**To restrict the commands a user can run in a Session Manager session (command line)**

1. Create a JSON or YAML file for your document content that defines the command a user can run in a Session Manager session, as shown in the following example.

------
#### [ YAML ]

   ```
   ---
   schemaVersion: '1.0'
   description: Document to view a log file on a Linux instance
   sessionType: InteractiveCommands
   parameters:
     logpath:
       type: String
       description: The log file path to read.
       default: "/var/log/amazon/ssm/amazon-ssm-agent.log"
       allowedPattern: "^[a-zA-Z0-9-_/]+(.log)$"
   properties:
     linux:
       commands: "tail -f {{ logpath }}"
       runAsElevated: true
   ```

------
#### [ JSON ]

   ```
   {
       "schemaVersion": "1.0",
       "description": "Document to view a log file on a Linux instance",
       "sessionType": "InteractiveCommands",
       "parameters": {
           "logpath": {
               "type": "String",
               "description": "The log file path to read.",
               "default": "/var/log/amazon/ssm/amazon-ssm-agent.log",
               "allowedPattern": "^[a-zA-Z0-9-_/]+(.log)$"
           }
       },
       "properties": {
           "linux": {
               "commands": "tail -f {{ logpath }}",
               "runAsElevated": true
           }
       }
   }
   ```

------

1. Run the following commands to create an SSM document using your content that defines the command a user can run in a Session Manager session.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-document \
       --content file://path/to/file/documentContent.json \
       --name "exampleAllowedSessionDocument" \
       --document-type "Session"
   ```

------
#### [  Windows  ]

   ```
   aws ssm create-document ^
       --content file://C:\path\to\file\documentContent.json ^
       --name "exampleAllowedSessionDocument" ^
       --document-type "Session"
   ```

------
#### [   PowerShell   ]

   ```
   $json = Get-Content -Path "C:\path\to\file\documentContent.json" | Out-String
   New-SSMDocument `
       -Content $json `
       -Name "exampleAllowedSessionDocument" `
       -DocumentType "Session"
   ```

------

## Interactive command parameters and the AWS CLI


There are a variety of ways you can provide interactive command parameters when using the AWS CLI. Depending on the operating system of the client machine you use to connect to managed nodes, the syntax you provide for commands that contain special or escape characters might differ. The following examples show some of the ways you can provide command parameters when using the AWS CLI, and how to handle special or escape characters.

Parameters stored in Parameter Store can be referenced in the AWS CLI for your command parameters as shown in the following example.

------
#### [ Linux & macOS ]

```
aws ssm start-session \
    --target instance-id \
    --document-name MyInteractiveCommandDocument \
    --parameters '{"command":["{{ssm:mycommand}}"]}'
```

------
#### [  Windows  ]

```
aws ssm start-session ^
    --target instance-id ^
    --document-name MyInteractiveCommandDocument ^
    --parameters '{"command":["{{ssm:mycommand}}"]}'
```

------

The following example shows how you can use a shorthand syntax with the AWS CLI to pass parameters.

------
#### [ Linux & macOS ]

```
aws ssm start-session \
    --target instance-id \
    --document-name MyInteractiveCommandDocument \
    --parameters command="ifconfig"
```

------
#### [  Windows  ]

```
aws ssm start-session ^
    --target instance-id ^
    --document-name MyInteractiveCommandDocument ^
    --parameters command="ipconfig"
```

------

You can also provide parameters in JSON as shown in the following example.

------
#### [ Linux & macOS ]

```
aws ssm start-session \
    --target instance-id \
    --document-name MyInteractiveCommandDocument \
    --parameters '{"command":["ifconfig"]}'
```

------
#### [  Windows  ]

```
aws ssm start-session ^
    --target instance-id ^
    --document-name MyInteractiveCommandDocument ^
    --parameters '{"command":["ipconfig"]}'
```

------

Parameters can also be stored in a JSON file and provided to the AWS CLI as shown in the following example. For more information about using AWS CLI parameters from a file, see [Loading AWS CLI parameters from a file](https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-parameters-file.html) in the *AWS Command Line Interface User Guide*.

```
{
    "command": [
        "my command"
    ]
}
```

------
#### [ Linux & macOS ]

```
aws ssm start-session \
    --target instance-id \
    --document-name MyInteractiveCommandDocument \
    --parameters file://complete/path/to/file/parameters.json
```

------
#### [  Windows  ]

```
aws ssm start-session ^
    --target instance-id ^
    --document-name MyInteractiveCommandDocument ^
    --parameters file://complete/path/to/file/parameters.json
```

------

You can also generate an AWS CLI skeleton from a JSON input file as shown in the following example. For more information about generating AWS CLI skeletons from JSON input files, see [Generating AWS CLI skeleton and input parameters from a JSON or YAML input file](https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-skeleton.html) in the *AWS Command Line Interface User Guide*.

```
{
    "Target": "instance-id",
    "DocumentName": "MyInteractiveCommandDocument",
    "Parameters": {
        "command": [
            "my command"
        ]
    }
}
```

------
#### [ Linux & macOS ]

```
aws ssm start-session \
    --cli-input-json file://complete/path/to/file/parameters.json
```

------
#### [  Windows  ]

```
aws ssm start-session ^
    --cli-input-json file://complete/path/to/file/parameters.json
```

------

To escape characters inside quotation marks, you must add additional backslashes to the escape characters as shown in the following example.

------
#### [ Linux & macOS ]

```
aws ssm start-session \
    --target instance-id \
    --document-name MyInteractiveCommandDocument \
    --parameters '{"command":["printf \"abc\\\\tdef\""]}'
```

------
#### [  Windows  ]

```
aws ssm start-session ^
    --target instance-id ^
    --document-name MyInteractiveCommandDocument ^
    --parameters '{"command":["printf \"abc\\\\tdef\""]}'
```

------

For information about using quotation marks with command parameters in the AWS CLI, see [Using quotation marks with strings in the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-parameters-quoting-strings.html) in the *AWS Command Line Interface User Guide*.

## IAM policy examples for interactive commands


You can create IAM policies that allow users to access only the `Session` documents you define. This restricts the commands a user can run in a Session Manager session to only the commands defined in your custom `Session` type SSM documents.

**Allow a user to run an interactive command on a single managed node**

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":"ssm:StartSession",
         "Resource":[
            "arn:aws:ec2:us-east-1:444455556666:instance/i-02573cafcfEXAMPLE",
            "arn:aws:ssm:us-east-1:444455556666:document/allowed-session-document"
         ]
      },
      {
         "Effect": "Allow",
         "Action": ["ssmmessages:OpenDataChannel"],
         "Resource": ["arn:aws:ssm:*:*:session/${aws:userid}-*"]
      }
   ]
}
```

**Allow a user to run an interactive command on all managed nodes**

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":"ssm:StartSession",
         "Resource":[
            "arn:aws:ec2:us-east-1:444455556666:instance/*",
            "arn:aws:ssm:us-east-1:444455556666:document/allowed-session-document"
         ]
      },
      {
         "Effect": "Allow",
         "Action": ["ssmmessages:OpenDataChannel"],
         "Resource": ["arn:aws:ssm:*:*:session/${aws:userid}-*"]
      }
   ]
}
```

**Allow a user to run multiple interactive commands on all managed nodes**

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":"ssm:StartSession",
         "Resource":[
            "arn:aws:ec2:us-east-1:444455556666:instance/*",
            "arn:aws:ssm:us-east-1:444455556666:document/allowed-session-document",
            "arn:aws:ssm:us-east-1:444455556666:document/allowed-session-document-2"
         ]
      },
      {
         "Effect": "Allow",
         "Action": ["ssmmessages:OpenDataChannel"],
         "Resource": ["arn:aws:ssm:*:*:session/${aws:userid}-*"]
      }
   ]
}
```
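To apply one of these policies from the command line, you can save the JSON to a file and add it as an inline policy with `aws iam put-user-policy`. The following is a minimal sketch; the file name, policy name, and user name `ExampleUser` are hypothetical, and the `aws iam` call is left commented out so you can review it and supply valid credentials before running it.

```
# Save the single-node policy body to a file (hypothetical file name).
cat > allow-interactive-command.json <<'EOF'
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":"ssm:StartSession",
         "Resource":[
            "arn:aws:ec2:us-east-1:444455556666:instance/i-02573cafcfEXAMPLE",
            "arn:aws:ssm:us-east-1:444455556666:document/allowed-session-document"
         ]
      },
      {
         "Effect": "Allow",
         "Action": ["ssmmessages:OpenDataChannel"],
         "Resource": ["arn:aws:ssm:*:*:session/${aws:userid}-*"]
      }
   ]
}
EOF

# Attach it as an inline policy (hypothetical user and policy names;
# uncomment to run with valid AWS credentials):
# aws iam put-user-policy \
#     --user-name ExampleUser \
#     --policy-name AllowInteractiveCommand \
#     --policy-document file://allow-interactive-command.json
```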

# Step 6: (Optional) Use AWS PrivateLink to set up a VPC endpoint for Session Manager


You can further improve the security posture of your managed nodes by configuring AWS Systems Manager to use an interface virtual private cloud (VPC) endpoint. Interface endpoints are powered by AWS PrivateLink, a technology that allows you to privately access Amazon Elastic Compute Cloud (Amazon EC2) and Systems Manager APIs by using private IP addresses. 

AWS PrivateLink restricts all network traffic between your managed nodes, Systems Manager, and Amazon EC2 to the Amazon network. (Managed nodes don't have access to the internet.) Also, you don't need an internet gateway, a NAT device, or a virtual private gateway. 

For information about creating a VPC endpoint, see [Improve the security of EC2 instances by using VPC endpoints for Systems Manager](setup-create-vpc.md).
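As a rough sketch of what creating the `ssmmessages` interface endpoint looks like with the AWS CLI, consider the following; the Region, VPC, subnet, and security group IDs are placeholders, and the `aws ec2 create-vpc-endpoint` call is commented out so you can substitute your own values first. Repeat the call for the `ssm` and `ec2messages` services; the linked topic covers the full procedure.

```
# Build the interface endpoint service name for your Region
# (hypothetical Region shown).
REGION=us-east-1
SERVICE="com.amazonaws.${REGION}.ssmmessages"
echo "$SERVICE"

# Create the interface endpoint (placeholder IDs; uncomment to run with
# valid AWS credentials):
# aws ec2 create-vpc-endpoint \
#     --vpc-id vpc-0abcd1234EXAMPLE \
#     --vpc-endpoint-type Interface \
#     --service-name "$SERVICE" \
#     --subnet-ids subnet-0abcd1234EXAMPLE \
#     --security-group-ids sg-0abcd1234EXAMPLE \
#     --private-dns-enabled
```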

The alternative to using a VPC endpoint is to allow outbound internet access on your managed nodes. In this case, the managed nodes must also allow HTTPS (port 443) outbound traffic to the following endpoints:
+  `ec2messages.region.amazonaws.com` 
+  `ssm.region.amazonaws.com` 
+  `ssmmessages.region.amazonaws.com` 

Systems Manager uses the last of these endpoints, `ssmmessages.region.amazonaws.com`, to make calls from SSM Agent to the Session Manager service in the cloud.

To use optional features like AWS Key Management Service (AWS KMS) encryption, streaming logs to Amazon CloudWatch Logs (CloudWatch Logs), and sending logs to Amazon Simple Storage Service (Amazon S3), you must allow HTTPS (port 443) outbound traffic to the following endpoints:
+  `kms.region.amazonaws.com` 
+  `logs.region.amazonaws.com` 
+  `s3.region.amazonaws.com` 

For more information about required endpoints for Systems Manager, see [Reference: ec2messages, ssmmessages, and other API operations](systems-manager-setting-up-messageAPIs.md).

# Step 7: (Optional) Turn on or turn off ssm-user account administrative permissions


Starting with version 2.3.50.0 of AWS Systems Manager SSM Agent, the agent creates a local user account called `ssm-user` and adds it to `/etc/sudoers` (Linux and macOS) or to the Administrators group (Windows). On agent versions earlier than 2.3.612.0, the account is created the first time SSM Agent starts or restarts after installation. On version 2.3.612.0 and later (released May 8, 2019), the account is created the first time a session is started on a node. `ssm-user` is the default operating system (OS) user when an AWS Systems Manager Session Manager session is started.

If you want to prevent Session Manager users from running administrative commands on a node, you can update the `ssm-user` account permissions. You can also restore these permissions after they have been removed.

**Topics**
+ [Managing ssm-user sudo account permissions on Linux and macOS](#ssm-user-permissions-linux)
+ [Managing ssm-user Administrator account permissions on Windows Server](#ssm-user-permissions-windows)

## Managing ssm-user sudo account permissions on Linux and macOS


Use one of the following procedures to turn on or turn off the ssm-user account sudo permissions on Linux and macOS managed nodes.

**Use Run Command to modify ssm-user sudo permissions (console)**
+ Use the procedure in [Running commands from the console](running-commands-console.md) with the following values:
  + For **Command document**, choose `AWS-RunShellScript`.
  + To remove sudo access, in the **Command parameters** area, paste the following in the **Commands** box.

    ```
    cd /etc/sudoers.d
    echo "#User rules for ssm-user" > ssm-agent-users
    ```

    -or-

    To restore sudo access, in the **Command parameters** area, paste the following in the **Commands** box.

    ```
    cd /etc/sudoers.d 
    echo "ssm-user ALL=(ALL) NOPASSWD:ALL" > ssm-agent-users
    ```
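The same change can also be scripted through Run Command from the AWS CLI with `aws ssm send-command`. The following is a minimal sketch; the instance ID is a placeholder, and the `send-command` call is commented out so the parameter JSON can be checked locally first.

```
# Parameters for AWS-RunShellScript; these commands remove sudo access,
# matching the console procedure above.
PARAMS='{"commands":["cd /etc/sudoers.d","echo \"#User rules for ssm-user\" > ssm-agent-users"]}'

# Validate the JSON locally before sending.
echo "$PARAMS" | python3 -m json.tool

# Send to a node (placeholder instance ID; uncomment to run with valid
# AWS credentials):
# aws ssm send-command \
#     --document-name "AWS-RunShellScript" \
#     --targets "Key=InstanceIds,Values=i-02573cafcfEXAMPLE" \
#     --parameters "$PARAMS"
```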

**Use the command line to modify ssm-user sudo permissions (AWS CLI)**

1. Connect to the managed node and run the following command.

   ```
   sudo -s
   ```

1. Change the working directory using the following command.

   ```
   cd /etc/sudoers.d
   ```

1. Open the file named `ssm-agent-users` for editing.

1. To remove sudo access, delete the following line.

   ```
   ssm-user ALL=(ALL) NOPASSWD:ALL
   ```

   -or-

   To restore sudo access, add the following line.

   ```
   ssm-user ALL=(ALL) NOPASSWD:ALL
   ```

1. Save the file.

## Managing ssm-user Administrator account permissions on Windows Server


Use one of the following procedures to turn on or turn off the ssm-user account Administrator permissions on Windows Server managed nodes.

**Use Run Command to modify Administrator permissions (console)**
+ Use the procedure in [Running commands from the console](running-commands-console.md) with the following values:

  For **Command document**, choose `AWS-RunPowerShellScript`.

  To remove administrative access, in the **Command parameters** area, paste the following in the **Commands** box.

  ```
  net localgroup "Administrators" "ssm-user" /delete
  ```

  -or-

  To restore administrative access, in the **Command parameters** area, paste the following in the **Commands** box.

  ```
  net localgroup "Administrators" "ssm-user" /add
  ```

**Use the PowerShell or command prompt window to modify Administrator permissions**

1. Connect to the managed node and open the PowerShell or Command Prompt window.

1. To remove administrative access, run the following command.

   ```
   net localgroup "Administrators" "ssm-user" /delete
   ```

   -or-

   To restore administrative access, run the following command.

   ```
   net localgroup "Administrators" "ssm-user" /add
   ```
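These commands can likewise be sent through Run Command with `AWS-RunPowerShellScript` from the AWS CLI. A minimal sketch, with a placeholder instance ID and the `aws ssm send-command` call commented out so the parameter JSON can be checked locally first:

```
# Parameters for AWS-RunPowerShellScript; this removes ssm-user from the
# Administrators group, matching the procedure above.
PARAMS='{"commands":["net localgroup \"Administrators\" \"ssm-user\" /delete"]}'

# Validate the JSON locally before sending.
echo "$PARAMS" | python3 -m json.tool

# Placeholder instance ID; uncomment to run with valid AWS credentials:
# aws ssm send-command \
#     --document-name "AWS-RunPowerShellScript" \
#     --targets "Key=InstanceIds,Values=i-02573cafcfEXAMPLE" \
#     --parameters "$PARAMS"
```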

**Use the Windows console to modify Administrator permissions**

1. Connect to the managed node and open the PowerShell or Command Prompt window.

1. From the command line, run `lusrmgr.msc` to open the **Local Users and Groups** console.

1. Open the **Users** directory, and then open **ssm-user**.

1. On the **Member Of** tab, do one of the following:
   + To remove administrative access, select **Administrators**, and then choose **Remove**.

     -or-

     To restore administrative access, enter **Administrators** in the text box, and then choose **Add**.

1. Choose **OK**.

# Step 8: (Optional) Allow and control permissions for SSH connections through Session Manager


You can allow users in your AWS account to use the AWS Command Line Interface (AWS CLI) to establish Secure Shell (SSH) connections to managed nodes using AWS Systems Manager Session Manager. Users who connect using SSH can also copy files between their local machines and managed nodes using Secure Copy Protocol (SCP). You can use this functionality to connect to managed nodes without opening inbound ports or maintaining bastion hosts.

 When you establish SSH connections through Session Manager, the AWS CLI and SSM Agent create secure WebSocket connections over TLS to Session Manager endpoints. The SSH session runs within this encrypted tunnel, providing an additional layer of security without requiring inbound ports to be opened on your managed nodes.

After allowing SSH connections, you can use AWS Identity and Access Management (IAM) policies to explicitly allow or deny users, groups, or roles to make SSH connections using Session Manager.

**Note**  
Logging isn't available for Session Manager sessions that connect through port forwarding or SSH. This is because SSH encrypts all session data within the secure TLS connection established between the AWS CLI and Session Manager endpoints, and Session Manager only serves as a tunnel for SSH connections.

**Topics**
+ [Allowing SSH connections for Session Manager](#ssh-connections-enable)
+ [Controlling user permissions for SSH connections through Session Manager](#ssh-connections-permissions)

## Allowing SSH connections for Session Manager


Use the following steps to allow SSH connections through Session Manager on a managed node. 

**To allow SSH connections for Session Manager**

1. On the managed node to which you want to allow SSH connections, do the following:
   + Ensure that SSH is running on the managed node. (You can close inbound ports on the node.)
   + Ensure that SSM Agent version 2.3.672.0 or later is installed on the managed node.

     For information about installing or updating SSM Agent on a managed node, see the following topics:
     + [Manually installing and uninstalling SSM Agent on EC2 instances for Windows Server](manually-install-ssm-agent-windows.md).
     +  [Manually installing and uninstalling SSM Agent on EC2 instances for Linux](manually-install-ssm-agent-linux.md) 
     +  [Manually installing and uninstalling SSM Agent on EC2 instances for macOS](manually-install-ssm-agent-macos.md) 
     +  [How to install the SSM Agent on hybrid Windows nodes](hybrid-multicloud-ssm-agent-install-windows.md) 
     +  [How to install the SSM Agent on hybrid Linux nodes](hybrid-multicloud-ssm-agent-install-linux.md) 
**Note**  
To use Session Manager with on-premises servers, edge devices, and virtual machines (VMs) that you activated as managed nodes, you must use the advanced-instances tier. For more information about advanced instances, see [Configuring instance tiers](fleet-manager-configure-instance-tiers.md).

1. On the local machine from which you want to connect to a managed node using SSH, do the following:
   + Ensure that version 1.1.23.0 or later of the Session Manager plugin is installed.

     For information about installing the Session Manager plugin, see [Install the Session Manager plugin for the AWS CLI](session-manager-working-with-install-plugin.md).
   + Update the SSH configuration file to allow running a proxy command that starts a Session Manager session and transfers all data through the connection.

      **Linux and macOS**
**Tip**  
The SSH configuration file is typically located at `~/.ssh/config`.

     Add the following to the configuration file on the local machine.

     ```
     # SSH over Session Manager
     Host i-* mi-*
         ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
         User ec2-user
     ```

      **Windows**
**Tip**  
The SSH configuration file is typically located at `C:\Users\<username>\.ssh\config`.

     Add the following to the configuration file on the local machine.

     ```
     # SSH over Session Manager
     Host i-* mi-*
         ProxyCommand C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters portNumber=%p"
     ```
   + Create or verify that you have a Privacy Enhanced Mail certificate (a PEM file), or at minimum a public key, to use when establishing connections to managed nodes. The key must already be associated with the managed node. Set the permissions of your private key file so that only you can read it, as shown in the following command.

     ```
     chmod 400 <my-key-pair>.pem
     ```

     For an Amazon Elastic Compute Cloud (Amazon EC2) instance, for example, this is the key pair file you created or selected when you launched the instance. (You specify the path to the certificate or key as part of the command to start a session. For information about starting a session using SSH, see [Starting a session (SSH)](session-manager-working-with-sessions-start.md#sessions-start-ssh).)
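After updating the SSH configuration file, you can confirm that the proxy command is picked up without actually connecting by using `ssh -G`, which prints the resolved client configuration for a host. The following sketch uses a standalone config file so it doesn't touch `~/.ssh/config`; the file name and instance ID are placeholders.

```
# Write the stanza to a standalone file (normally it lives in ~/.ssh/config).
cat > ssm-ssh-config <<'EOF'
# SSH over Session Manager
Host i-* mi-*
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
    User ec2-user
EOF

# -G resolves the configuration for a host without connecting; the output
# should include the Session Manager proxy command for a matching ID.
ssh -G -F ssm-ssh-config i-02573cafcfEXAMPLE | grep -i proxycommand
```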

## Controlling user permissions for SSH connections through Session Manager


After you enable SSH connections through Session Manager on a managed node, you can use IAM policies to allow or deny users, groups, or roles the ability to make SSH connections through Session Manager. 

**To use an IAM policy to allow SSH connections through Session Manager**
+ Use one of the following options:
  + **Option 1**: Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/). 

    In the navigation pane, choose **Policies**, and then update the permissions policy for the user or role you want to allow to start SSH connections through Session Manager. 

    For example, add the following element to the Quickstart policy you created in [Quickstart end user policies for Session Manager](getting-started-restrict-access-quickstart.md#restrict-access-quickstart-end-user). Replace each *example resource placeholder* with your own information. 

------
#### [ JSON ]


    ```
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "ssm:StartSession",
                "Resource": [
                    "arn:aws:ec2:us-east-1:111122223333:instance/instance-id",
                    "arn:aws:ssm:*:*:document/AWS-StartSSHSession"
                ]
            },
            {
                "Effect": "Allow",
                "Action": "ssmmessages:OpenDataChannel",
                "Resource": "arn:aws:ssm:*:*:session/${aws:userid}-*"
            }
        ]
    }
    ```

------
  + **Option 2**: Attach an inline policy to a user policy by using the AWS Management Console, the AWS CLI, or the AWS API.

    Using the method of your choice, attach the policy statement in **Option 1** to the policy for an AWS user, group, or role.

    For information, see [Adding and Removing IAM Identity Permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html) in the *IAM User Guide*.

**To use an IAM policy to deny SSH connections through Session Manager**
+ Use one of the following options:
  + **Option 1**: Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/). In the navigation pane, choose **Policies**, and then update the permissions policy for the user or role to block from starting Session Manager sessions. 

    For example, add the following element to the Quickstart policy you created in [Quickstart end user policies for Session Manager](getting-started-restrict-access-quickstart.md#restrict-access-quickstart-end-user).

------
#### [ JSON ]


    ```
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Deny",
                "Action": "ssm:StartSession",
                "Resource": "arn:aws:ssm:*:*:document/AWS-StartSSHSession"
            },
            {
                "Effect": "Allow",
                "Action": "ssmmessages:OpenDataChannel",
                "Resource": "arn:aws:ssm:*:*:session/${aws:userid}-*"
            }
        ]
    }
    ```

------
  + **Option 2**: Attach an inline policy to a user policy by using the AWS Management Console, the AWS CLI, or the AWS API.

    Using the method of your choice, attach the policy statement in **Option 1** to the policy for an AWS user, group, or role.

    For information, see [Adding and Removing IAM Identity Permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html) in the *IAM User Guide*.

# Working with Session Manager


You can use the AWS Systems Manager console, the Amazon Elastic Compute Cloud (Amazon EC2) console, or the AWS Command Line Interface (AWS CLI) to start sessions that connect you to the managed nodes your system administrator has granted you access to using AWS Identity and Access Management (IAM) policies. Depending on your permissions, you can also view information about sessions, resume inactive sessions that haven't timed out, and end sessions. After a session is established, it is not affected by IAM role session duration. For information about limiting session duration with Session Manager, see [Specify an idle session timeout value](session-preferences-timeout.md) and [Specify maximum session duration](session-preferences-max-timeout.md).

For more information about sessions, see [What is a session?](session-manager.md#what-is-a-session)

**Topics**
+ [Install the Session Manager plugin for the AWS CLI](session-manager-working-with-install-plugin.md)
+ [Start a session](session-manager-working-with-sessions-start.md)
+ [End a session](session-manager-working-with-sessions-end.md)
+ [View session history](session-manager-working-with-view-history.md)

# Install the Session Manager plugin for the AWS CLI


To initiate Session Manager sessions with your managed nodes by using the AWS Command Line Interface (AWS CLI), you must install the *Session Manager plugin* on your local machine. You can install the plugin on supported versions of Microsoft Windows Server, macOS, Linux, and Ubuntu Server.

**Note**  
To use the Session Manager plugin, you must have AWS CLI version 1.16.12 or later installed on your local machine. For more information, see [Installing or updating the latest version of the AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

**Topics**
+ [Session Manager plugin latest version and release history](plugin-version-history.md)
+ [Install the Session Manager plugin on Windows](install-plugin-windows.md)
+ [Install the Session Manager plugin on macOS](install-plugin-macos-overview.md)
+ [Install the Session Manager plugin on Linux](install-plugin-linux-overview.md)
+ [Verify the Session Manager plugin installation](install-plugin-verify.md)
+ [Session Manager plugin on GitHub](plugin-github.md)
+ [(Optional) Turn on Session Manager plugin logging](install-plugin-configure-logs.md)

# Session Manager plugin latest version and release history
Version history

Your local machine must be running a supported version of the Session Manager plugin. The current minimum supported version is 1.1.17.0. If you're running an earlier version, your Session Manager operations might not succeed. 

 

To see if you have the latest version, run the following command in the AWS CLI.

**Note**  
The command returns results only if the plugin is located in the default installation directory for your operating system type. You can also check the version in the contents of the `VERSION` file in the directory where you have installed the plugin.

```
session-manager-plugin --version
```
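To compare the installed version against the minimum programmatically, `sort -V` can order version strings. The following is a small sketch; it assumes a POSIX shell with a `sort` that supports `-V`, and falls back gracefully if the plugin isn't on the `PATH`.

```
MIN=1.1.17.0
# Fall back to "0" if the plugin isn't on the PATH.
INSTALLED=$(session-manager-plugin --version 2>/dev/null || echo "0")

# If MIN sorts first (or equal), the installed version meets the minimum.
if [ "$(printf '%s\n' "$MIN" "$INSTALLED" | sort -V | head -n1)" = "$MIN" ]; then
    echo "plugin version $INSTALLED meets the minimum"
else
    echo "plugin version $INSTALLED is below $MIN; upgrade recommended"
fi
```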

The following table lists all releases of the Session Manager plugin and the features and enhancements included with each version.

**Important**  
We recommend you always run the latest version. The latest version includes enhancements that improve the experience of using the plugin.


| Version | Release date | Details | 
| --- | --- | --- | 
| 1.2.804.0 |  April 2, 2026  | **Enhancement**: Bump to aws-sdk-go-v2 package. **Bug fix**: Update Windows install script. **Bug fix**: Update default plugin version for local build. | 
| 1.2.792.0 |  March 17, 2026  | **Bug fix**: Add international keyboard support for Windows. | 
| 1.2.779.0 |  February 12, 2026  | **Enhancement**: Update Go version to 1.25 in Dockerfile. **Bug fix**: Add shebang lines to debian packaging scripts. | 
| 1.2.764.0 |  November 19, 2025  | **Enhancement**: Added support for signing OpenDataChannel request. **Bug fix**: Fix checkstyle issues to support newer Go version. | 
| 1.2.707.0 |  February 6, 2025  | **Enhancement**: Upgraded the Go version to 1.23 in the Dockerfile. Updated the version configuration step in the README. | 
| 1.2.694.0 |  November 20, 2024  | **Bug fix**: Rolled back change that added credentials to OpenDataChannel requests. | 
| 1.2.688.0 |  November 6, 2024  | **This version was deprecated on 11/20/2024.** **Enhancements**:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/plugin-version-history.html) | 
| 1.2.677.0 |  October 10, 2024  | **Enhancement**: Added support for passing the plugin version with OpenDataChannel requests. | 
| 1.2.650.0 |  July 02, 2024  | **Enhancement**: Upgraded aws-sdk-go to 1.54.10. **Bug fix**: Reformatted comments for the gofmt check. | 
| 1.2.633.0 |  May 30, 2024  | Enhancement: Updated the Dockerfile to use an Amazon Elastic Container Registry (Amazon ECR) image. | 
| 1.2.553.0 |  January 10, 2024  | Enhancement: Upgraded aws-sdk-go and dependent Golang packages. | 
| 1.2.536.0 |  December 4, 2023  | Enhancement: Added support for passing a [StartSession](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_StartSession.html) API response as an environment variable to session-manager-plugin. | 
| 1.2.497.0 |  August 1, 2023  | Enhancement: Upgraded Go SDK to v1.44.302. | 
| 1.2.463.0 |  March 15, 2023  | Enhancement: Added Mac with Apple silicon support for Apple Mac (M1) in macOS bundle installer and signed installer.  | 
| 1.2.398.0 |  October 14, 2022  | Enhancement: Support golang version 1.17. Update default session-manager-plugin runner for macOS to use python3. Update import path from SSMCLI to session-manager-plugin. | 
| 1.2.339.0 |  June 16, 2022  | Bug fix: Fix idle session timeout for port sessions. | 
| 1.2.331.0 |  May 27, 2022  | Bug fix: Fix port sessions closing prematurely when the local server doesn't connect before timeout. | 
| 1.2.323.0 |  May 19, 2022  | Bug fix: Disable smux keep alive to use idle session timeout feature. | 
| 1.2.312.0 |  March 31, 2022  | Enhancement: Supports more output message payload types. | 
| 1.2.295.0 |  January 12, 2022  | Bug fix: Hung sessions caused by client resending stream data when agent becomes inactive, and incorrect logs for start_publication and pause_publication messages. | 
| 1.2.279.0 |  October 27, 2021  | Enhancement: Zip packaging for Windows platform. | 
| 1.2.245.0 |  August 19, 2021  | Enhancement: Upgrade aws-sdk-go to latest version (v1.40.17) to support AWS IAM Identity Center. | 
| 1.2.234.0 |  July 26, 2021  | Bug fix: Handle session abruptly terminated scenario in interactive session type. | 
| 1.2.205.0 |  June 10, 2021  | Enhancement: Added support for signed macOS installer. | 
| 1.2.54.0 |  January 29, 2021  | Enhancement: Added support for running sessions in NonInteractiveCommands execution mode. | 
| 1.2.30.0 |  November 24, 2020  |  **Enhancement**: (Port forwarding sessions only) Improved overall performance.  | 
| 1.2.7.0 |  October 15, 2020  |  **Enhancement**: (Port forwarding sessions only) Reduced latency and improved overall performance.  | 
| 1.1.61.0 |  April 17, 2020  |  **Enhancement**: Added ARM support for Linux and Ubuntu Server.   | 
| 1.1.54.0 |  January 6, 2020  |  **Bug fix**: Handle race condition scenario of packets being dropped when the Session Manager plugin isn't ready.   | 
|  1.1.50.0  | November 19, 2019 |  **Enhancement**: Added support for forwarding a port to a local unix socket.  | 
|  1.1.35.0  | November 7, 2019 |  **Enhancement**: (Port forwarding sessions only) Send a TerminateSession command to SSM Agent when the local user presses `Ctrl+C`.  | 
| 1.1.33.0 | September 26, 2019 | Enhancement: (Port forwarding sessions only) Send a disconnect signal to the server when the client drops the TCP connection.  | 
| 1.1.31.0 | September 6, 2019 | Enhancement: Update to keep port forwarding session open until remote server closes the connection. | 
|  1.1.26.0  | July 30, 2019 |  **Enhancement**: Update to limit the rate of data transfer during a session.  | 
|  1.1.23.0  | July 9, 2019 |  **Enhancement**: Added support for running SSH sessions using Session Manager.  | 
| 1.1.17.0 | April 4, 2019 |  **Enhancement**: Added support for further encryption of session data using AWS Key Management Service (AWS KMS).  | 
| 1.0.37.0 | September 20, 2018 |  **Enhancement**: Bug fix for Windows version.  | 
| 1.0.0.0 | September 11, 2018 |  Initial release of the Session Manager plugin.  | 

# Install the Session Manager plugin on Windows
Install on Windows

You can install the Session Manager plugin on Windows Vista or later using the standalone installer.

When updates are released, you must repeat the installation process to get the latest version of the Session Manager plugin.

**Note**  
Note the following information.  
The Session Manager plugin installer needs Administrator rights to install the plugin.
For best results, we recommend that you start sessions on Windows clients using Windows PowerShell, version 5 or later. Alternatively, you can use the Command shell in Windows 10. The Session Manager plugin only supports PowerShell and the Command shell. Third-party command line tools might not be compatible with the plugin.

**To install the Session Manager plugin using the EXE installer**

1. Download the installer using the following URL.

   ```
   https://s3.amazonaws.com/session-manager-downloads/plugin/latest/windows/SessionManagerPluginSetup.exe
   ```

   Alternatively, you can download a zipped version of the installer using the following URL.

   ```
   https://s3.amazonaws.com/session-manager-downloads/plugin/latest/windows/SessionManagerPlugin.zip
   ```

1. Run the downloaded installer, and follow the on-screen instructions. If you downloaded the zipped version of the installer, you must unzip the installer first.

   Leave the install location box blank to install the plugin to the default directory.
   +  `%PROGRAMFILES%\Amazon\SessionManagerPlugin\bin\` 

1. Verify that the installation was successful. For information, see [Verify the Session Manager plugin installation](install-plugin-verify.md).
**Note**  
If Windows is unable to find the executable, you might need to re-open the command prompt or add the installation directory to your `PATH` environment variable manually. For information, see the troubleshooting topic [Session Manager plugin not automatically added to command line path (Windows)](session-manager-troubleshooting.md#windows-plugin-env-var-not-set).

# Install the Session Manager plugin on macOS
Install on macOS

Choose one of the following topics to install the Session Manager plugin on macOS. 

**Note**  
The signed installer is a signed `.pkg` file. The bundled installer uses a `.zip` file. After the file is unzipped, you can install the plugin using the binary.

## Install the Session Manager plugin on macOS with the signed installer


This section describes how to install the Session Manager plugin on macOS using the signed installer.

**To install the Session Manager plugin using the signed installer (macOS)**

1. Download the signed installer.

------
#### [ x86_64 ]

   ```
   curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/mac/session-manager-plugin.pkg" -o "session-manager-plugin.pkg"
   ```

------
#### [ Mac with Apple silicon ]

   ```
   curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/mac_arm64/session-manager-plugin.pkg" -o "session-manager-plugin.pkg"
   ```

------

1. Run the install commands. If the command fails, verify that the `/usr/local/bin` folder exists. If it doesn't, create it and run the command again.

   ```
   sudo installer -pkg session-manager-plugin.pkg -target /
   sudo ln -s /usr/local/sessionmanagerplugin/bin/session-manager-plugin /usr/local/bin/session-manager-plugin
   ```

1. Verify that the installation was successful. For information, see [Verify the Session Manager plugin installation](install-plugin-verify.md).

## Install the Session Manager plugin on macOS


This section describes how to install the Session Manager plugin on macOS using the bundled installer.

**Important**  
Note the following important information.  
By default, the installer requires sudo access to run, because the script installs the plugin to the `/usr/local/sessionmanagerplugin` system directory. If you don't want to install the plugin using sudo, manually update the installer script to install the plugin to a directory that doesn't require sudo access.
The bundled installer doesn't support installing to paths that contain spaces.

**To install the Session Manager plugin using the bundled installer (macOS)**

1. Download the bundled installer.

------
#### [ x86_64 ]

   ```
   curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/mac/sessionmanager-bundle.zip" -o "sessionmanager-bundle.zip"
   ```

------
#### [ Mac with Apple silicon ]

   ```
   curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/mac_arm64/sessionmanager-bundle.zip" -o "sessionmanager-bundle.zip"
   ```

------

1. Unzip the package.

   ```
   unzip sessionmanager-bundle.zip
   ```

1. Run the install command.

   ```
   sudo ./sessionmanager-bundle/install -i /usr/local/sessionmanagerplugin -b /usr/local/bin/session-manager-plugin
   ```
**Note**  
 The plugin requires Python 3.10 or later. By default, the install script runs under the system default version of Python. If you have installed an alternative version of Python and want to use it to install the Session Manager plugin, run the install script with that version by specifying the absolute path to the Python executable. The following is an example.  

   ```
   sudo /usr/local/bin/python3.11 sessionmanager-bundle/install -i /usr/local/sessionmanagerplugin -b /usr/local/bin/session-manager-plugin
   ```

   The installer installs the Session Manager plugin at `/usr/local/sessionmanagerplugin` and creates the symlink `session-manager-plugin` in the `/usr/local/bin` directory. Because `/usr/local/bin` is typically already in your `$PATH`, this eliminates the need to add the install directory to the `$PATH` variable.

   To see an explanation of the `-i` and `-b` options, use the `-h` option.

   ```
   ./sessionmanager-bundle/install -h
   ```

1. Verify that the installation was successful. For information, see [Verify the Session Manager plugin installation](install-plugin-verify.md).

**Note**  
To uninstall the plugin, run the following two commands in the order shown.  

```
sudo rm -rf /usr/local/sessionmanagerplugin
```

```
sudo rm /usr/local/bin/session-manager-plugin
```

# Install the Session Manager plugin on Linux


This section includes information about verifying the signature of the Session Manager plugin installer package and installing the plugin on the following Linux distributions:
+ Amazon Linux 2
+ AL2023
+ RHEL
+ Debian Server
+ Ubuntu Server

**Topics**
+ [

# Verify the signature of the Session Manager plugin
](install-plugin-linux-verify-signature.md)
+ [

# Install the Session Manager plugin on Amazon Linux 2, Amazon Linux 2023, and Red Hat Enterprise Linux distributions
](install-plugin-linux.md)
+ [

# Install the Session Manager plugin on Debian Server and Ubuntu Server
](install-plugin-debian-and-ubuntu.md)

# Verify the signature of the Session Manager plugin


The Session Manager plugin RPM and Debian installer packages for Linux instances are cryptographically signed. You can use a public key to verify that the plugin binary and package are original and unmodified. If a file has been altered or damaged, the verification fails. You can verify the signature of the installer package using the GNU Privacy Guard (GPG) tool. The following information applies to Session Manager plugin versions 1.2.707.0 and later.

Complete the following steps to verify the signature of the Session Manager plugin installer package.

**Topics**
+ [

## Step 1: Download the Session Manager plugin installer package
](#install-plugin-linux-verify-signature-installer-packages)
+ [

## Step 2: Download the associated signature file
](#install-plugin-linux-verify-signature-packages)
+ [

## Step 3: Install the GPG tool
](#install-plugin-linux-verify-signature-packages-gpg)
+ [

## Step 4: Verify the Session Manager plugin installer package on a Linux server
](#install-plugin-linux-verify-signature-packages)

## Step 1: Download the Session Manager plugin installer package


Download the Session Manager plugin installer package you want to verify.

**Amazon Linux 2, AL2023, and RHEL RPM packages**

------
#### [ x86\_64 ]

```
curl -o "session-manager-plugin.rpm" "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_64bit/session-manager-plugin.rpm"
```

------
#### [ ARM64 ]

```
curl -o "session-manager-plugin.rpm" "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_arm64/session-manager-plugin.rpm"
```

------

**Debian Server and Ubuntu Server Deb packages**

------
#### [ x86\_64 ]

```
curl -o "session-manager-plugin.deb" "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_64bit/session-manager-plugin.deb"
```

------
#### [ ARM64 ]

```
curl -o "session-manager-plugin.deb" "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_arm64/session-manager-plugin.deb"
```

------

## Step 2: Download the associated signature file


After you download the installer package, download the associated signature file for package verification. To provide an extra layer of protection against unauthorized copying or use of the session-manager-plugin binary file inside the package, we also offer binary signatures, which you can use to validate individual binary files. You can choose to use these binary signatures based on your security needs.

**Amazon Linux 2, AL2023, and RHEL signature packages**

------
#### [ x86\_64 ]

Package:

```
curl -o "session-manager-plugin.rpm.sig" "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_64bit/session-manager-plugin.rpm.sig"
```

Binary:

```
curl -o "session-manager-plugin.sig" "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_64bit/session-manager-plugin.sig"
```

------
#### [ ARM64 ]

Package:

```
curl -o "session-manager-plugin.rpm.sig" "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_arm64/session-manager-plugin.rpm.sig"
```

Binary:

```
curl -o "session-manager-plugin.sig" "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_arm64/session-manager-plugin.sig"
```

------

**Debian Server and Ubuntu Server Deb signature packages**

------
#### [ x86\_64 ]

Package:

```
curl -o "session-manager-plugin.deb.sig" "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_64bit/session-manager-plugin.deb.sig"
```

Binary:

```
curl -o "session-manager-plugin.sig" "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_64bit/session-manager-plugin.sig"
```

------
#### [ ARM64 ]

Package:

```
curl -o "session-manager-plugin.deb.sig" "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_arm64/session-manager-plugin.deb.sig"
```

Binary:

```
curl -o "session-manager-plugin.sig" "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_arm64/session-manager-plugin.sig"
```

------

## Step 3: Install the GPG tool


To verify the signature of the Session Manager plugin, you must have the GNU Privacy Guard (GPG) tool installed on your system. The verification process requires GPG version 2.1 or later. You can check your GPG version by running the following command:

```
gpg --version
```

If your GPG version is older than 2.1, update it before proceeding with the verification process. For most systems, you can update the GPG tool using your package manager. For example, on supported Amazon Linux and RHEL versions, you can use the following commands:

```
sudo yum update
sudo yum install gnupg2
```

On supported Ubuntu Server and Debian Server systems, you can use the following commands:

```
sudo apt-get update
sudo apt-get install gnupg2
```

Ensure you have the required GPG version before continuing with the verification process.
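As a sketch, the version check can be scripted with `sort -V`, which orders version strings numerically. The `gpg_version` value below is a hard-coded example standing in for the output of `gpg --version`:

```shell
# Compare a GPG version string against the 2.1 minimum (sketch).
# gpg_version is a placeholder; in practice you might extract it with:
#   gpg --version | head -n 1 | awk '{print $3}'
gpg_version="2.0.22"
required="2.1"

# sort -V sorts version strings numerically; if the smaller of the two
# values is the required version, the installed version is new enough.
if [ "$(printf '%s\n' "$required" "$gpg_version" | sort -V | head -n 1)" = "$required" ]; then
  result="ok"
else
  result="too old"
fi
echo "GPG $gpg_version: $result"
```

With the example value `2.0.22`, the check reports the version as too old, matching the update guidance above.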

## Step 4: Verify the Session Manager plugin installer package on a Linux server


Use the following procedure to verify the Session Manager plugin installer package on a Linux server.

**Note**  
Amazon Linux 2 doesn't support gpg tool versions 2.1 or later. If the following procedure doesn't work on your Amazon Linux 2 instances, verify the signature on a different platform before installing the plugin on your Amazon Linux 2 instances.

1. Copy the following public key, and save it to a file named session-manager-plugin.gpg.

   ```
   -----BEGIN PGP PUBLIC KEY BLOCK-----
   
   mFIEZ5ERQxMIKoZIzj0DAQcCAwQjuZy+IjFoYg57sLTGhF3aZLBaGpzB+gY6j7Ix
   P7NqbpXyjVj8a+dy79gSd64OEaMxUb7vw/jug+CfRXwVGRMNtIBBV1MgU1NNIFNl
   c3Npb24gTWFuYWdlciA8c2Vzc2lvbi1tYW5hZ2VyLXBsdWdpbi1zaWduZXJAYW1h
   em9uLmNvbT4gKEFXUyBTeXN0ZW1zIE1hbmFnZXIgU2Vzc2lvbiBNYW5hZ2VyIFBs
   dWdpbiBMaW51eCBTaWduZXIgS2V5KYkBAAQQEwgAqAUCZ5ERQ4EcQVdTIFNTTSBT
   ZXNzaW9uIE1hbmFnZXIgPHNlc3Npb24tbWFuYWdlci1wbHVnaW4tc2lnbmVyQGFt
   YXpvbi5jb20+IChBV1MgU3lzdGVtcyBNYW5hZ2VyIFNlc3Npb24gTWFuYWdlciBQ
   bHVnaW4gTGludXggU2lnbmVyIEtleSkWIQR5WWNxJM4JOtUB1HosTUr/b2dX7gIe
   AwIbAwIVCAAKCRAsTUr/b2dX7rO1AQCa1kig3lQ78W/QHGU76uHx3XAyv0tfpE9U
   oQBCIwFLSgEA3PDHt3lZ+s6m9JLGJsy+Cp5ZFzpiF6RgluR/2gA861M=
   =2DQm
   -----END PGP PUBLIC KEY BLOCK-----
   ```

1. Import the public key into your keyring. The returned key value should be `2C4D4AFF6F6757EE`.

   ```
   $ gpg --import session-manager-plugin.gpg
   gpg: key 2C4D4AFF6F6757EE: public key "AWS SSM Session Manager <session-manager-plugin-signer@amazon.com> (AWS Systems Manager Session Manager Plugin Linux Signer Key)" imported
   gpg: Total number processed: 1
   gpg:               imported: 1
   ```

1. Run the following command to verify the fingerprint.

   ```
   gpg --fingerprint 2C4D4AFF6F6757EE
   ```

   The fingerprint for the command output should match the following.

   ```
   7959 6371 24CE 093A D501 D47A 2C4D 4AFF 6F67 57EE
   ```

   ```
   pub   nistp256 2025-01-22 [SC]
         7959 6371 24CE 093A D501  D47A 2C4D 4AFF 6F67 57EE
   uid           [ unknown] AWS SSM Session Manager <session-manager-plugin-signer@amazon.com> (AWS Systems Manager Session Manager Plugin Linux Signer Key)
   ```

   If the fingerprint doesn't match, don't install the plugin. Contact AWS Support.

1. Verify the installer package signature. Replace *signature-filename* and *downloaded-plugin-filename* with the file names you specified when you downloaded the signature file and the installer package earlier in this topic.

   ```
   gpg --verify signature-filename downloaded-plugin-filename
   ```

   For example, for the x86\_64 architecture on Amazon Linux 2, the command is as follows:

   ```
   gpg --verify session-manager-plugin.rpm.sig session-manager-plugin.rpm
   ```

   This command returns output similar to the following.

   ```
   gpg: Signature made Mon Feb 3 20:08:32 2025 UTC
   gpg:                using ECDSA key 2C4D4AFF6F6757EE
   gpg: Good signature from "AWS SSM Session Manager <session-manager-plugin-signer@amazon.com> (AWS Systems Manager Session Manager Plugin Linux Signer Key)" [unknown]
   gpg: WARNING: This key is not certified with a trusted signature!
   gpg:          There is no indication that the signature belongs to the owner.
   Primary key fingerprint: 7959 6371 24CE 093A D501 D47A 2C4D 4AFF 6F67 57EE
   ```

If the output includes the phrase `BAD signature`, check whether you performed the procedure correctly. If you continue to get this response, contact AWS Support and don't install the package. The warning message about trust doesn't mean that the signature isn't valid, only that you haven't verified the public key. A key is trusted only if you or someone you trust has signed it. If the output includes the phrase `Can't check signature: No public key`, verify that you downloaded version 1.2.707.0 or later of the Session Manager plugin.
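For scripted checks, parsing the human-readable `Good signature` text is fragile. The following sketch (the `verify_pkg` helper is hypothetical) uses gpg's machine-readable `--status-fd` output instead, which emits stable status lines such as `[GNUPG:] VALIDSIG`:

```shell
# Hypothetical helper: script-friendly signature check.
# --status-fd 1 makes gpg emit machine-readable status lines on stdout,
# independent of locale or output wording.
verify_pkg() {
  gpg --status-fd 1 --verify "$1" "$2" 2>/dev/null | grep -q '^\[GNUPG:\] VALIDSIG'
}

# Example usage with the x86_64 RPM names used earlier in this topic.
if verify_pkg session-manager-plugin.rpm.sig session-manager-plugin.rpm; then
  echo "signature OK"
else
  echo "signature check failed" >&2
fi
```

The same trust caveat applies: a valid signature only proves the package matches the imported public key, which you must verify via the fingerprint step above.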

# Install the Session Manager plugin on Amazon Linux 2, Amazon Linux 2023, and Red Hat Enterprise Linux distributions
Install on Amazon Linux 2, AL2023, and RHEL distros

Use the following procedure to install the Session Manager plugin on Amazon Linux 2, Amazon Linux 2023 (AL2023), and RHEL distributions.

1. Download and install the Session Manager plugin RPM package.

------
#### [ x86\_64 ]

   On Amazon Linux 2 and RHEL 7, run the following command:

   ```
   sudo yum install -y https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_64bit/session-manager-plugin.rpm
   ```

   On AL2023 and RHEL 8 and 9, run the following command:

   ```
   sudo dnf install -y https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_64bit/session-manager-plugin.rpm
   ```

------
#### [ ARM64 ]

   On Amazon Linux 2 and RHEL 7, run the following command:

   ```
   sudo yum install -y https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_arm64/session-manager-plugin.rpm
   ```

   On AL2023 and RHEL 8 and 9, run the following command:

   ```
   sudo dnf install -y https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_arm64/session-manager-plugin.rpm
   ```

------

1. Verify that the installation was successful. For information, see [Verify the Session Manager plugin installation](install-plugin-verify.md).

**Note**  
If you want to uninstall the plugin, run `sudo yum erase session-manager-plugin -y`.

# Install the Session Manager plugin on Debian Server and Ubuntu Server
Install on Debian Server and Ubuntu Server

1. Download the Session Manager plugin deb package.

------
#### [ x86\_64 ]

   ```
   curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_64bit/session-manager-plugin.deb" -o "session-manager-plugin.deb"
   ```

------
#### [ ARM64 ]

   ```
   curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_arm64/session-manager-plugin.deb" -o "session-manager-plugin.deb"
   ```

------

1. Run the install command.

   ```
   sudo dpkg -i session-manager-plugin.deb
   ```

1. Verify that the installation was successful. For information, see [Verify the Session Manager plugin installation](install-plugin-verify.md).

**Note**  
If you ever want to uninstall the plugin, run `sudo dpkg -r session-manager-plugin`.

# Verify the Session Manager plugin installation


Run the following command to verify that the Session Manager plugin installed successfully.

```
session-manager-plugin
```

If the installation was successful, the following message is returned.

```
The Session Manager plugin is installed successfully. Use the AWS CLI to start a session.
```

You can also test the installation by running the [https://docs.aws.amazon.com/cli/latest/reference/ssm/start-session.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/start-session.html) command in the [AWS Command Line Interface](https://aws.amazon.com/cli/) (AWS CLI). In the following command, replace *instance-id* with your own information.

```
aws ssm start-session --target instance-id
```

This command will work only if you have installed and configured the AWS CLI, and if your Session Manager administrator has granted you the necessary IAM permissions to access the target managed node using Session Manager.
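In scripts that wrap `start-session`, it can help to fail fast when the plugin is missing from `PATH`. The following is a minimal sketch; `have_cmd` is a hypothetical helper, not part of the plugin or the AWS CLI:

```shell
# Sketch: guard session scripts against a missing plugin install.
have_cmd() { command -v "$1" >/dev/null 2>&1; }

if have_cmd session-manager-plugin; then
  echo "Session Manager plugin found"
else
  echo "Session Manager plugin not found on PATH; install it first" >&2
fi
```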

# Session Manager plugin on GitHub


The source code for Session Manager plugin is available on [https://github.com/aws/session-manager-plugin](https://github.com/aws/session-manager-plugin) so that you can adapt the plugin to meet your needs. We encourage you to submit [pull requests](https://github.com/aws/session-manager-plugin/blob/mainline/CONTRIBUTING.md) for changes that you would like to have included. However, Amazon Web Services doesn't provide support for running modified copies of this software.

# (Optional) Turn on Session Manager plugin logging


The Session Manager plugin includes an option to allow logging for sessions that you run. By default, logging is turned off.

If you allow logging, the Session Manager plugin creates log files for both application activity (`session-manager-plugin.log`) and errors (`errors.log`) on your local machine.

**Topics**
+ [

## Turn on logging for the Session Manager plugin (Windows)
](#configure-logs-windows)
+ [

## Enable logging for the Session Manager plugin (Linux and macOS)
](#configure-logs-linux)

## Turn on logging for the Session Manager plugin (Windows)


1. Locate the `seelog.xml.template` file for the plugin. 

   The default location is `C:\Program Files\Amazon\SessionManagerPlugin\seelog.xml.template`.

1. Change the name of the file to `seelog.xml`.

1. Open the file and change `minlevel="off"` to `minlevel="info"` or `minlevel="debug"`.
**Note**  
By default, log entries about opening a data channel and reconnecting sessions are recorded at the **INFO** level. Data flow (packets and acknowledgement) entries are recorded at the **DEBUG** level.

1. Change other configuration options you want to modify. Options you can change include: 
   + **Debug level**: You can change the debug level from `formatid="fmtinfo"` to `formatid="fmtdebug"`.
   + **Log file options**: You can make changes to the log file options, including where the logs are stored, with the exception of the log file names. 
**Important**  
Don't change the file names or logging won't work correctly.

     ```
     <rollingfile type="size" filename="C:\Program Files\Amazon\SessionManagerPlugin\Logs\session-manager-plugin.log" maxsize="30000000" maxrolls="5"/>
     <filter levels="error,critical" formatid="fmterror">
     <rollingfile type="size" filename="C:\Program Files\Amazon\SessionManagerPlugin\Logs\errors.log" maxsize="10000000" maxrolls="5"/>
     ```

1. Save the file.

## Enable logging for the Session Manager plugin (Linux and macOS)


1. Locate the `seelog.xml.template` file for the plugin. 

   The default location is `/usr/local/sessionmanagerplugin/seelog.xml.template`.

1. Change the name of the file to `seelog.xml`.

1. Open the file and change `minlevel="off"` to `minlevel="info"` or `minlevel="debug"`.
**Note**  
By default, log entries about opening data channels and reconnecting sessions are recorded at the **INFO** level. Data flow (packets and acknowledgement) entries are recorded at the **DEBUG** level.

1. Change other configuration options you want to modify. Options you can change include: 
   + **Debug level**: You can change the debug level from `formatid="fmtinfo"` to `formatid="fmtdebug"`.
   + **Log file options**: You can make changes to the log file options, including where the logs are stored, with the exception of the log file names. 
**Important**  
Don't change the file names or logging won't work correctly.

     ```
     <rollingfile type="size" filename="/usr/local/sessionmanagerplugin/logs/session-manager-plugin.log" maxsize="30000000" maxrolls="5"/>
     <filter levels="error,critical" formatid="fmterror">
     <rollingfile type="size" filename="/usr/local/sessionmanagerplugin/logs/errors.log" maxsize="10000000" maxrolls="5"/>
     ```
**Important**  
If you use the specified default directory for storing logs, you must either run session commands using **sudo** or give the directory where the plugin is installed full read and write permissions. To bypass these restrictions, change the location where logs are stored.

1. Save the file.
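Steps 1 through 3 above amount to a rename and a one-line substitution. The following sketch demonstrates the transformation on a scratch copy of the template, so you can see the result without touching the real install directory (which typically requires sudo); the template content here is a minimal stand-in, not the full shipped file:

```shell
# Demonstrate the rename-and-edit on a scratch copy of seelog.xml.template.
workdir="$(mktemp -d)"
printf '<seelog minlevel="off" formatid="fmtinfo">\n</seelog>\n' > "$workdir/seelog.xml.template"

# Step 2: rename the template to seelog.xml (shown as a copy here).
cp "$workdir/seelog.xml.template" "$workdir/seelog.xml"

# Step 3: change minlevel="off" to minlevel="info".
sed -i.bak 's/minlevel="off"/minlevel="info"/' "$workdir/seelog.xml"

grep 'minlevel' "$workdir/seelog.xml"
```

Against a real install, you would run the same `cp` and `sed` commands with `sudo` on `/usr/local/sessionmanagerplugin/seelog.xml.template`.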

# Start a session
Start a session

You can use the AWS Systems Manager console, the Amazon Elastic Compute Cloud (Amazon EC2) console, the AWS Command Line Interface (AWS CLI), or SSH to start a session.

**Topics**
+ [

## Starting a session (Systems Manager console)
](#start-sys-console)
+ [

## Starting a session (Amazon EC2 console)
](#start-ec2-console)
+ [

## Starting a session (AWS CLI)
](#sessions-start-cli)
+ [

## Starting a session (SSH)
](#sessions-start-ssh)
+ [

## Starting a session (port forwarding)
](#sessions-start-port-forwarding)
+ [

## Starting a session (port forwarding to remote host)
](#sessions-remote-port-forwarding)
+ [

## Starting a session (interactive and noninteractive commands)
](#sessions-start-interactive-commands)

## Starting a session (Systems Manager console)


You can use the AWS Systems Manager console to start a session with a managed node in your account.

**Note**  
Before you start a session, make sure that you have completed the setup steps for Session Manager. For information, see [Setting up Session Manager](session-manager-getting-started.md).

**To start a session (Systems Manager console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Session Manager**.

1. Choose **Start session**.

1. (Optional) Enter a session description in the **Reason for session** field.

1. For **Target instances**, choose the option button to the left of the managed node that you want to connect to.

   If the node that you want isn't in the list, or if you select a node and receive a configuration error, see [Managed node not available or not configured for Session Manager](session-manager-troubleshooting.md#session-manager-troubleshooting-instances) for troubleshooting steps.

1. Choose **Start session** to launch the session immediately.

   -or-

   Choose **Next** for session options.

1. (Optional) For **Session document**, select the document that you want to run when the session starts. If your document supports runtime parameters, you can enter one or more comma-separated values in each parameter field.

1. Choose **Next**.

1. Choose **Start session**.

After the connection is made, you can run bash commands (Linux and macOS) or PowerShell commands (Windows) as you would through any other connection type.

**Important**  
If you want to allow users to specify a document when starting sessions in the Session Manager console, note the following:  
You must grant users the `ssm:GetDocument` and `ssm:ListDocuments` permissions in their IAM policy. For more information, see [Grant access to custom Session documents in the console](getting-started-restrict-access-examples.md#grant-access-documents-console-example).
The console only supports Session documents that have the `sessionType` defined as `Standard_Stream`. For more information, see [Session document schema](session-manager-schema.md).

## Starting a session (Amazon EC2 console)


You can use the Amazon Elastic Compute Cloud (Amazon EC2) console to start a session with an instance in your account.

**Note**  
If you receive an error that you aren't authorized to perform one or more Systems Manager actions (`ssm:command-name`), contact your administrator for assistance. Your administrator is the person who provided you with your sign-in credentials. Ask that person to update your policies to allow you to start sessions from the Amazon EC2 console. If you're an administrator, see [Sample IAM policies for Session Manager](getting-started-restrict-access-quickstart.md) for more information.

**To start a session (Amazon EC2 console)**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Instances**.

1. Select the instance and choose **Connect**.

1. For **Connection method**, choose **Session Manager**.

1. Choose **Connect**.

After the connection is made, you can run bash commands (Linux and macOS) or PowerShell commands (Windows) as you would through any other connection type.

## Starting a session (AWS CLI)


Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

Before you start a session, make sure that you have completed the setup steps for Session Manager. For information, see [Setting up Session Manager](session-manager-getting-started.md).

To use the AWS CLI to run session commands, the Session Manager plugin must also be installed on your local machine. For information, see [Install the Session Manager plugin for the AWS CLI](session-manager-working-with-install-plugin.md).

To start a session using the AWS CLI, run the following command replacing *instance-id* with your own information.

```
aws ssm start-session \
    --target instance-id
```

For information about other options you can use with the **start-session** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/start-session.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/start-session.html) in the AWS Systems Manager section of the AWS CLI Command Reference.

## Starting a session (SSH)


To start a Session Manager SSH session, version 2.3.672.0 or later of SSM Agent must be installed on the managed node.

**SSH connection requirements**  
Take note of the following requirements and limitations for session connections using SSH through Session Manager:
+ Your target managed node must be configured to support SSH connections. For more information, see [(Optional) Allow and control permissions for SSH connections through Session Manager](session-manager-getting-started-enable-ssh-connections.md).
+ You must connect using the managed node account associated with the Privacy Enhanced Mail (PEM) certificate, not the `ssm-user` account that is used for other types of session connections. For example, on EC2 instances for Linux and macOS, the default user is `ec2-user`. For information about identifying the default user for each instance type, see [Get Information About Your Instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connection-prereqs.html#connection-prereqs-get-info-about-instance) in the *Amazon EC2 User Guide*.
+ Logging isn't available for Session Manager sessions that connect through port forwarding or SSH. This is because SSH encrypts all session data within the secure TLS connection established between the AWS CLI and Session Manager endpoints, and Session Manager only serves as a tunnel for SSH connections.
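The SSH setup topic linked above routes SSH traffic through Session Manager by adding a host entry to your local `~/.ssh/config` file. The entry is shown here as a convenience sketch; see the linked topic for the authoritative version.

```
# SSH over Session Manager
host i-* mi-*
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
```

With this entry in place, `ssh` commands targeting instance IDs (`i-*`) or managed node IDs (`mi-*`) are tunneled through a Session Manager session automatically.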

**Note**  
Before you start a session, make sure that you have completed the setup steps for Session Manager. For information, see [Setting up Session Manager](session-manager-getting-started.md).

To start a session using SSH, run the following command. Replace each *example resource placeholder* with your own information.

```
ssh -i /path/my-key-pair.pem username@instance-id
```

**Tip**  
When you start a session using SSH, you can copy local files to the target managed node using the following command format.  

```
scp -i /path/my-key-pair.pem /path/ExampleFile.txt username@instance-id:~
```

For information about other options you can use with the **start-session** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/start-session.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/start-session.html) in the AWS Systems Manager section of the AWS CLI Command Reference.

## Starting a session (port forwarding)


To start a Session Manager port forwarding session, version 2.3.672.0 or later of SSM Agent must be installed on the managed node.

**Note**  
Before you start a session, make sure that you have completed the setup steps for Session Manager. For information, see [Setting up Session Manager](session-manager-getting-started.md).  
To use the AWS CLI to run session commands, you must install the Session Manager plugin on your local machine. For information, see [Install the Session Manager plugin for the AWS CLI](session-manager-working-with-install-plugin.md).  
Depending on your operating system and command line tool, the placement of quotation marks can differ and escape characters might be required.

To start a port forwarding session, run the following command from the CLI. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

```
aws ssm start-session \
    --target instance-id \
    --document-name AWS-StartPortForwardingSession \
    --parameters '{"portNumber":["80"], "localPortNumber":["56789"]}'
```

------
#### [  Windows  ]

```
aws ssm start-session ^
    --target instance-id ^
    --document-name AWS-StartPortForwardingSession ^
    --parameters portNumber="3389",localPortNumber="56789"
```

------

`portNumber` is the remote port on the managed node where you want the session traffic to be redirected. For example, you might specify port `3389` for connecting to a Windows node using the Remote Desktop Protocol (RDP). If you don't specify the `portNumber` parameter, Session Manager uses `80` as the default value. 

`localPortNumber` is the port on your local computer where traffic starts, such as `56789`. This value is what you enter when connecting to a managed node using a client. For example, **localhost:56789**.

For information about other options you can use with the **start-session** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/start-session.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/start-session.html) in the AWS Systems Manager section of the AWS CLI Command Reference.

For more information about port forwarding sessions, see [Port Forwarding Using AWS Systems Manager Session Manager](https://aws.amazon.com/blogs/aws/new-port-forwarding-using-aws-system-manager-sessions-manager/) in the *AWS News Blog*.

## Starting a session (port forwarding to remote host)


To start a Session Manager port forwarding session to a remote host, version 3.1.1374.0 or later of SSM Agent must be installed on the managed node. The remote host isn't required to be managed by Systems Manager.

**Note**  
Before you start a session, make sure that you have completed the setup steps for Session Manager. For information, see [Setting up Session Manager](session-manager-getting-started.md).  
To use the AWS CLI to run session commands, you must install the Session Manager plugin on your local machine. For information, see [Install the Session Manager plugin for the AWS CLI](session-manager-working-with-install-plugin.md).  
Depending on your operating system and command line tool, the placement of quotation marks can differ and escape characters might be required.

To start a port forwarding session, run the following command from the AWS CLI. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

```
aws ssm start-session \
    --target instance-id \
    --document-name AWS-StartPortForwardingSessionToRemoteHost \
    --parameters '{"host":["mydb.example.us-east-2.rds.amazonaws.com"],"portNumber":["3306"], "localPortNumber":["3306"]}'
```

------
#### [  Windows  ]

```
aws ssm start-session ^
    --target instance-id ^
    --document-name AWS-StartPortForwardingSessionToRemoteHost ^
    --parameters host="mydb.example.us-east-2.rds.amazonaws.com",portNumber="3306",localPortNumber="3306"
```

------

The `host` value represents the hostname or IP address of the remote host that you want to connect to. General connectivity and name resolution requirements between the managed node and the remote host still apply.

`portNumber` is the remote port on the managed node where you want the session traffic to be redirected. For example, you might specify port `3389` for connecting to a Windows node using the Remote Desktop Protocol (RDP). If you don't specify the `portNumber` parameter, Session Manager uses `80` as the default value. 

`localPortNumber` is the port on your local computer where traffic starts, such as `56789`. This value is what you enter when connecting to a managed node using a client. For example, **localhost:56789**.

For information about other options you can use with the **start-session** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/start-session.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/start-session.html) in the AWS Systems Manager section of the AWS CLI Command Reference.

### Starting a session with an Amazon ECS task


Session Manager supports starting a port forwarding session with a task inside an Amazon Elastic Container Service (Amazon ECS) cluster. To do so, enable ECS Exec. For more information, see [Monitor Amazon Elastic Container Service containers with ECS Exec](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-exec.html) in the *Amazon Elastic Container Service Developer Guide*.

You must also update the task role in IAM to include the following permissions:

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssmmessages:CreateControlChannel",
                "ssmmessages:CreateDataChannel",
                "ssmmessages:OpenControlChannel",
                "ssmmessages:OpenDataChannel"
            ],
            "Resource": "*"
        }
    ]
}
```

------

To start a port forwarding session with an Amazon ECS task, run the following command from the AWS CLI. Replace each *example resource placeholder* with your own information.

**Note**  
Remove the < and > symbols from the `target` parameter. These symbols are provided for reader clarification only.

------
#### [ Linux & macOS ]

```
aws ssm start-session \
    --target ecs:<ECS_cluster_name>_<ECS_container_ID>_<container_runtime_ID> \
    --document-name AWS-StartPortForwardingSessionToRemoteHost \
    --parameters '{"host":["URL"],"portNumber":["port_number"], "localPortNumber":["port_number"]}'
```

------
#### [  Windows  ]

```
aws ssm start-session ^
    --target ecs:<ECS_cluster_name>_<ECS_container_ID>_<container_runtime_ID> ^
    --document-name AWS-StartPortForwardingSessionToRemoteHost ^
    --parameters host="URL",portNumber="port_number",localPortNumber="port_number"
```

------
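The `target` value for an ECS session is assembled from the cluster name, container ID, and container runtime ID, joined by underscores and prefixed with `ecs:`. A minimal helper sketch (the function name is illustrative):

```python
def ecs_target(cluster_name, container_id, container_runtime_id):
    """Assemble the --target value for a Session Manager session with an ECS task.

    Format: ecs:<ECS_cluster_name>_<ECS_container_ID>_<container_runtime_ID>
    """
    return f"ecs:{cluster_name}_{container_id}_{container_runtime_id}"

print(ecs_target("my-cluster", "abcd1234", "abcd1234-0123456789"))
```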

## Starting a session (interactive and noninteractive commands)


Before you start a session, make sure that you have completed the setup steps for Session Manager. For information, see [Setting up Session Manager](session-manager-getting-started.md).

To use the AWS CLI to run session commands, the Session Manager plugin must also be installed on your local machine. For information, see [Install the Session Manager plugin for the AWS CLI](session-manager-working-with-install-plugin.md).

To start an interactive command session, run the following command. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

```
aws ssm start-session \
    --target instance-id \
    --document-name CustomCommandSessionDocument \
    --parameters '{"logpath":["/var/log/amazon/ssm/amazon-ssm-agent.log"]}'
```

------
#### [ Windows ]

```
aws ssm start-session ^
    --target instance-id ^
    --document-name CustomCommandSessionDocument ^
    --parameters logpath="/var/log/amazon/ssm/amazon-ssm-agent.log"
```

------

For information about other options you can use with the **start-session** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/start-session.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/start-session.html) in the AWS Systems Manager section of the AWS CLI Command Reference.

 **More info**   
+  [Use port forwarding in AWS Systems Manager Session Manager to connect to remote hosts](https://aws.amazon.com/blogs/mt/use-port-forwarding-in-aws-systems-manager-session-manager-to-connect-to-remote-hosts/) 
+  [Amazon EC2 instance port forwarding with AWS Systems Manager](https://aws.amazon.com/blogs/mt/amazon-ec2-instance-port-forwarding-with-aws-systems-manager/) 
+  [Manage AWS Managed Microsoft AD resources with Session Manager port forwarding](https://aws.amazon.com/blogs/mt/manage-aws-managed-microsoft-ad-resources-with-session-manager-port-forwarding/) 
+ [Port Forwarding Using AWS Systems Manager Session Manager](https://aws.amazon.com/blogs/aws/new-port-forwarding-using-aws-system-manager-sessions-manager/) on the *AWS News Blog*.

# End a session
End a session

You can end a session that you started in your account using the AWS Systems Manager console or the AWS Command Line Interface (AWS CLI). When you choose the **Terminate** button for a session in the console or call the [TerminateSession](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_TerminateSession.html) API action by using the AWS CLI, Session Manager permanently ends the session and closes the data connection between the Session Manager client and SSM Agent on the managed node. You can't resume a terminated session.

If there is no user activity in an open session for 20 minutes, the idle state triggers a timeout. Session Manager doesn't call `TerminateSession`, but it does close the underlying channel. You can't resume a session closed because of idle timeout.
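The idle timeout is configurable in your session preferences. A minimal sketch of the relevant input in your preference Session document, assuming the default `SSM-SessionManagerRunShell` document (the value is a string, in minutes):

```
"inputs": {
    "idleSessionTimeout": "20"
}
```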

We recommend always explicitly terminating a session by using the `terminate-session` command, when using the AWS CLI, or the **Terminate** button when using the console. (**Terminate** buttons are located on both the session window and main Session Manager console page.) If you only close a browser or command window, the session remains listed as **Active** in the console for 30 days. When you don't explicitly terminate a session, or when a session times out, any processes that were running on the managed node at the time will continue to run.

**Topics**
+ [

## Ending a session (console)
](#stop-sys-console)
+ [

## Ending a session (AWS CLI)
](#stop-cli)

## Ending a session (console)


You can use the AWS Systems Manager console to end a session in your account.

**To end a session (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Session Manager**.

1. For **Sessions**, choose the option button to the left of the session you want to end.

1. Choose **Terminate**.

## Ending a session (AWS CLI)


To end a session using the AWS CLI, run the following command. Replace *session-id* with your own information.

```
aws ssm terminate-session \
    --session-id session-id
```

For more information about the **terminate-session** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/terminate-session.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/terminate-session.html) in the AWS Systems Manager section of the AWS CLI Command Reference.

# View session history


You can use the AWS Systems Manager console or the AWS Command Line Interface (AWS CLI) to view information about sessions in your account. In the console, you can view session details such as the following:
+ The ID of the session
+ Which user connected to a managed node through a session
+ The ID of the managed node
+ When the session began and ended
+ The status of the session
+ The location specified for storing session logs (if turned on)

Using the AWS CLI, you can view a list of sessions in your account, but not the additional details that are available in the console.

For information about logging session history information, see [Enabling and disabling session logging](session-manager-logging.md).

**Topics**
+ [

## Viewing session history (console)
](#view-console)
+ [

## Viewing session history (AWS CLI)
](#view-history-cli)

## Viewing session history (console)


You can use the AWS Systems Manager console to view details about the sessions in your account.

**To view session history (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Session Manager**.

1. Choose the **Session history** tab.

   -or-

   If the Session Manager home page opens first, choose **Configure Preferences** and then choose the **Session history** tab.

## Viewing session history (AWS CLI)


To view a list of sessions in your account using the AWS CLI, run the following command.

```
aws ssm describe-sessions \
    --state History
```

**Note**  
This command returns only results for connections to targets initiated using Session Manager. It doesn't list connections made through other means, such as Remote Desktop Protocol (RDP) or the Secure Shell Protocol (SSH).

For information about other options you can use with the **describe-sessions** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-sessions.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-sessions.html) in the AWS Systems Manager section of the AWS CLI Command Reference.
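The `describe-sessions` output is JSON with a top-level `Sessions` list. If you want a quick summary, a short script can post-process the output; a hedged sketch assuming the documented response shape (`SessionId`, `Target`, and `Status` are field names from the API reference):

```python
import json

def summarize(payload):
    """Extract (SessionId, Target, Status) tuples from describe-sessions output."""
    return [
        (s.get("SessionId"), s.get("Target"), s.get("Status"))
        for s in payload.get("Sessions", [])
    ]

# Example with a response shaped like the API output:
sample = {"Sessions": [{"SessionId": "s-0123456789abcdef0",
                        "Target": "i-0123456789abcdef0",
                        "Status": "Terminated"}]}
print(summarize(sample))
```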

# Logging session activity


In addition to providing information about current and completed sessions in the Systems Manager console, Session Manager provides you with the ability to log session activity in your AWS account using AWS CloudTrail.

CloudTrail captures session API calls through the Systems Manager console, the AWS Command Line Interface (AWS CLI), and the Systems Manager SDK. You can view the information on the CloudTrail console or store it in a specified Amazon Simple Storage Service (Amazon S3) bucket. One Amazon S3 bucket is used for all CloudTrail logs for your account. For more information, see [Logging AWS Systems Manager API calls with AWS CloudTrail](monitoring-cloudtrail-logs.md).

**Note**  
For recurring, historical analysis of your log files, consider querying CloudTrail logs using [CloudTrail Lake](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-lake.html) or a table you maintain. For more information, see [Querying AWS CloudTrail logs](https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html) in the *Amazon Athena User Guide*. 

## Monitoring session activity using Amazon EventBridge (console)


With EventBridge, you can set up rules to detect when changes happen to AWS resources. You can create a rule to detect when a user in your organization starts or ends a session, and then, for example, receive a notification through Amazon SNS about the event. 

EventBridge support for Session Manager relies on records of API operations that were recorded by CloudTrail. (You can use CloudTrail integration with EventBridge to respond to most AWS Systems Manager events.) Actions that take place within a session, such as an `exit` command, that don't make an API call aren't detected by EventBridge.

The following steps outline how to initiate notifications through Amazon Simple Notification Service (Amazon SNS) when a Session Manager API event occurs, such as **StartSession**.

**To monitor session activity using Amazon EventBridge (console)**

1. Create an Amazon SNS topic to use for sending notifications when the Session Manager event occurs that you want to track.

   For more information, see [Create a Topic](https://docs.aws.amazon.com/sns/latest/dg/CreateTopic.html) in the *Amazon Simple Notification Service Developer Guide*.

1. Create an EventBridge rule to invoke the Amazon SNS target for the type of Session Manager event you want to track.

   For information about how to create the rule, see [Creating Amazon EventBridge rules that react to events](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-create-rule.html) in the *Amazon EventBridge User Guide*.

   As you follow the steps to create the rule, make the following selections:
   + For **AWS service**, choose **Systems Manager**.
   + For **Event type**, choose **AWS API Call through CloudTrail**.
   + Choose **Specific operation(s)**, and then enter the Session Manager command or commands (one at a time) you want to receive notifications for. You can choose **StartSession**, **ResumeSession**, and **TerminateSession**. (EventBridge doesn't support `Get*`, `List*`, and `Describe*` commands.)
   + For **Select a target**, choose **SNS topic**. For **Topic**, choose the name of the Amazon SNS topic you created in Step 1.
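The console selections above correspond to an EventBridge event pattern. A sketch of a pattern matching session start, resume, and end API calls recorded by CloudTrail (adjust `eventName` to the operations you want to track):

```
{
    "source": ["aws.ssm"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["ssm.amazonaws.com"],
        "eventName": ["StartSession", "ResumeSession", "TerminateSession"]
    }
}
```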

For more information, see the *[Amazon EventBridge User Guide](https://docs.aws.amazon.com/eventbridge/latest/userguide/)* and the *[Amazon Simple Notification Service Getting Started Guide](https://docs.aws.amazon.com/sns/latest/gsg/)*.

# Enabling and disabling session logging


Session logging records information about current and completed sessions in the Systems Manager console. You can also log details about commands run during sessions in your AWS account. Session logging enables you to do the following:
+ Create and store session logs for archival purposes.
+ Generate a report showing details of every connection made to your managed nodes using Session Manager over the past 30 days.
+ Generate notifications for session logging in your AWS account, such as Amazon Simple Notification Service (Amazon SNS) notifications.
+ Automatically initiate another action on an AWS resource as the result of actions performed during a session, such as running an AWS Lambda function, starting an AWS CodePipeline pipeline, or running an AWS Systems Manager Run Command document.

**Important**  
Note the following requirements and limitations for Session Manager:  
Session Manager logs the commands you enter and their output during a session, depending on your session preferences. To prevent sensitive data, such as passwords, from being viewed in your session logs, we recommend using the following commands when entering sensitive data during a session.  

On Linux and macOS:

  ```
  stty -echo; read passwd; stty echo;
  ```

On Windows (PowerShell):

  ```
  $Passwd = Read-Host -AsSecureString
  ```
If you're using Windows Server 2012 or earlier, the data in your logs might not be formatted optimally. We recommend using Windows Server 2012 R2 and later for optimal log formats.
If you're using Linux or macOS managed nodes, ensure that the `screen` utility is installed. If it isn't, your log data might be truncated. On Amazon Linux 2, AL2023, and Ubuntu Server, the `screen` utility is installed by default. To install `screen` manually, depending on your version of Linux, run either `sudo yum install screen` or `sudo apt-get install screen`.
Logging isn't available for Session Manager sessions that connect through port forwarding or SSH. This is because SSH encrypts all session data within the secure TLS connection established between the AWS CLI and Session Manager endpoints, and Session Manager only serves as a tunnel for SSH connections.

For more information about the permissions required to use Amazon S3 or Amazon CloudWatch Logs for logging session data, see [Creating an IAM role with permissions for Session Manager and Amazon S3 and CloudWatch Logs (console)](getting-started-create-iam-instance-profile.md#create-iam-instance-profile-ssn-logging).

Refer to the following topics for more information about logging options for Session Manager.

**Topics**
+ [

# Streaming session data using Amazon CloudWatch Logs (console)
](session-manager-logging-cwl-streaming.md)
+ [

# Logging session data using Amazon S3 (console)
](session-manager-logging-s3.md)
+ [

# Logging session data using Amazon CloudWatch Logs (console)
](session-manager-logging-cloudwatch-logs.md)
+ [

# Configuring session logging to disk
](session-manager-logging-disk.md)
+ [

# Adjusting how long the Session Manager temporary log file is stored on disk
](session-manager-logging-disk-retention.md)
+ [

# Disabling Session Manager logging in CloudWatch Logs and Amazon S3
](session-manager-enable-and-disable-logging.md)

# Streaming session data using Amazon CloudWatch Logs (console)


You can send a continual stream of session data logs to Amazon CloudWatch Logs. Essential details, such as the commands a user has run in a session, the ID of the user who ran the commands, and timestamps for when the session data is streamed to CloudWatch Logs, are included when streaming session data. When streaming session data, the logs are JSON-formatted to help you integrate with your existing logging solutions. Streaming session data isn't supported for interactive commands.

**Note**  
To stream session data from Windows Server managed nodes, you must have PowerShell 5.1 or later installed. By default, Windows Server 2016 and later have the required PowerShell version installed. However, Windows Server 2012 and 2012 R2 don't have the required PowerShell version installed by default. If you haven't already updated PowerShell on your Windows Server 2012 or 2012 R2 managed nodes, you can do so using Run Command. For information about updating PowerShell using Run Command, see [Updating PowerShell using Run Command](run-command-tutorial-update-software.md#rc-console-pwshexample).

**Important**  
If you have the **PowerShell Transcription** policy setting configured on your Windows Server managed nodes, you won't be able to stream session data.

**To stream session data using Amazon CloudWatch Logs (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Session Manager**.

1. Choose the **Preferences** tab, and then choose **Edit**.

1. Select the check box next to **Enable** under **CloudWatch logging**.

1. Choose the **Stream session logs** option.

1. (Recommended) Select the check box next to **Allow only encrypted CloudWatch log groups**. With this option turned on, log data is encrypted using the server-side encryption key specified for the log group. If you don't want to encrypt the log data that is sent to CloudWatch Logs, clear the check box. You must also clear the check box if encryption isn't allowed on the log group.

1. For **CloudWatch logs**, to specify the existing CloudWatch Logs log group in your AWS account to upload session logs to, select one of the following:
   + **Enter a log group name in the text box**: Enter the name of a log group that has already been created in your account to store session log data.
   + **Browse log groups**: Select a log group that has already been created in your account to store session log data.

1. Choose **Save**.

# Logging session data using Amazon S3 (console)


You can choose to store session log data in a specified Amazon Simple Storage Service (Amazon S3) bucket for debugging and troubleshooting purposes. The default option is for logs to be sent to an encrypted Amazon S3 bucket. Encryption is performed using the key specified for the bucket, either an AWS KMS key or an Amazon S3 Server-Side Encryption (SSE) key (AES-256). 

**Important**  
When you use virtual hosted–style buckets with Secure Sockets Layer (SSL), the SSL wildcard certificate only matches buckets that don't contain periods. To work around this, use HTTP or write your own certificate verification logic. We recommend that you don't use periods (".") in bucket names when using virtual hosted–style buckets.

**Amazon S3 bucket encryption**  
In order to send logs to your Amazon S3 bucket with encryption, encryption must be allowed on the bucket. For more information about Amazon S3 bucket encryption, see [Amazon S3 Default Encryption for S3 Buckets](https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html).

**Customer managed key**  
If you're using a KMS key that you manage yourself to encrypt your bucket, then the IAM instance profile attached to your instances must have explicit permissions to read the key. If you use an AWS managed key, the instance doesn't require this explicit permission. For more information about providing the instance profile with access to use the key, see [Allows Key Users to Use the key](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html#key-policy-default-allow-users) in the *AWS Key Management Service Developer Guide*.

Follow these steps to configure Session Manager to store session logs in an Amazon S3 bucket.

**Note**  
You can also use the AWS CLI to specify or change the Amazon S3 bucket that session data is sent to. For information, see [Update Session Manager preferences (command line)](getting-started-configure-preferences-cli.md).

**To log session data using Amazon S3 (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Session Manager**.

1. Choose the **Preferences** tab, and then choose **Edit**.

1. Select the check box next to **Enable** under **S3 logging**.

1. (Recommended) Select the check box next to **Allow only encrypted S3 buckets**. With this option turned on, log data is encrypted using the server-side encryption key specified for the bucket. If you don't want to encrypt the log data that is sent to Amazon S3, clear the check box. You must also clear the check box if encryption isn't allowed on the S3 bucket.

1. For **S3 bucket name**, select one of the following:
**Note**  
We recommend that you don't use periods (".") in bucket names when using virtual hosted–style buckets. For more information about Amazon S3 bucket-naming conventions, see [Bucket Restrictions and Limitations](https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html#bucketnamingrules) in the *Amazon Simple Storage Service User Guide*.
   + **Choose a bucket name from the list**: Select an Amazon S3 bucket that has already been created in your account to store session log data.
   + **Enter a bucket name in the text box**: Enter the name of an Amazon S3 bucket that has already been created in your account to store session log data.

1. (Optional) For **S3 key prefix**, enter the name of an existing or new folder to store logs in the selected bucket.

1. Choose **Save**.

For more information about working with Amazon S3 and Amazon S3 buckets, see the *[Amazon Simple Storage Service User Guide](https://docs.aws.amazon.com/AmazonS3/latest/userguide/)*.

# Logging session data using Amazon CloudWatch Logs (console)


With Amazon CloudWatch Logs, you can monitor, store, and access log files from various AWS services. You can send session log data to a CloudWatch Logs log group for debugging and troubleshooting purposes. The default option is for log data to be sent with encryption using your KMS key, but you can send the data to your log group with or without encryption. 

Follow these steps to configure AWS Systems Manager Session Manager to send session log data to a CloudWatch Logs log group at the end of your sessions.

**Note**  
You can also use the AWS CLI to specify or change the CloudWatch Logs log group that session data is sent to. For information, see [Update Session Manager preferences (command line)](getting-started-configure-preferences-cli.md).

**To log session data using Amazon CloudWatch Logs (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Session Manager**.

1. Choose the **Preferences** tab, and then choose **Edit**.

1. Select the check box next to **Enable** under **CloudWatch logging**.

1. Choose the **Upload session logs** option.

1. (Recommended) Select the check box next to **Allow only encrypted CloudWatch log groups**. With this option turned on, log data is encrypted using the server-side encryption key specified for the log group. If you don't want to encrypt the log data that is sent to CloudWatch Logs, clear the check box. You must also clear the check box if encryption isn't allowed on the log group.

1. For **CloudWatch logs**, to specify the existing CloudWatch Logs log group in your AWS account to upload session logs to, select one of the following:
   + **Choose a log group from the list**: Select a log group that has already been created in your account to store session log data.
   + **Enter a log group name in the text box**: Enter the name of a log group that has already been created in your account to store session log data.

1. Choose **Save**.

For more information about working with CloudWatch Logs, see the *[Amazon CloudWatch Logs User Guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/)*.

# Configuring session logging to disk


After you enable Session Manager logging to CloudWatch or Amazon S3, all commands executed during a session (and the resulting output from those commands) are logged to a temporary file on the disk of the target instance. The temporary file is named `ipcTempFile.log`. 

The `ipcTempFile.log` is controlled by the `SessionLogsDestination` parameter in the SSM Agent configuration file. This parameter accepts the following values:
+ **disk**: If you specify this value and session logging to CloudWatch or Amazon S3 is *enabled*, SSM Agent creates the `ipcTempFile.log` temporary log file and logs session commands and output to disk. Session Manager uploads this log to either CloudWatch or S3 during or after the session, depending on the logging configuration. The log is then deleted according to the duration specified for the SSM Agent `SessionLogsRetentionDurationHours` configuration parameter.

  If you specify this value and session logging to CloudWatch and Amazon S3 is *disabled*, SSM Agent still logs command history and output in the `ipcTempFile.log` file. The file is deleted according to the duration specified for the SSM Agent `SessionLogsRetentionDurationHours` configuration parameter.
+ **none**: If you specify this value and session logging to CloudWatch or Amazon S3 is *enabled*, logging to disk works exactly as if you'd specified `disk`. SSM Agent requires the temporary file when session logging to CloudWatch or Amazon S3 is enabled.

  If you specify this value and session logging to CloudWatch or Amazon S3 is *disabled*, SSM Agent doesn't create the `ipcTempFile.log` file.
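In the agent configuration file, this setting lives under the `Ssm` section. A minimal `amazon-ssm-agent.json` sketch (the surrounding structure is abbreviated; keep any other settings your file already contains):

```
{
    "Ssm": {
        "SessionLogsDestination": "disk"
    }
}
```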

Use the following procedure to enable or disable creating the `ipcTempFile.log` temporary log file on disk when a session is started.

**To enable or disable creating the Session Manager temporary log file on disk**

1. Either install SSM Agent on your instance or upgrade to version 3.2.2086 or higher. For information about how to check the agent version number, see [Checking the SSM Agent version number](ssm-agent-get-version.md). For information about how to manually install the agent, locate the procedure for your operating system in the following sections:
   + [Manually installing and uninstalling SSM Agent on EC2 instances for Linux](manually-install-ssm-agent-linux.md)
   + [Manually installing and uninstalling SSM Agent on EC2 instances for macOS](manually-install-ssm-agent-macos.md)
   + [Manually installing and uninstalling SSM Agent on EC2 instances for Windows Server](manually-install-ssm-agent-windows.md)

1. Connect to your instance and locate the `amazon-ssm-agent.json` file in the following location.
   + **Linux**: /etc/amazon/ssm/
   + **macOS**: /opt/aws/ssm/
   + **Windows Server**: C:\Program Files\Amazon\SSM

   If the file `amazon-ssm-agent.json` doesn't exist, copy the contents of the `amazon-ssm-agent.json.template` to a new file in the same directory. Name the new file `amazon-ssm-agent.json`. 

1. Specify either `none` or `disk` for the `SessionLogsDestination` parameter. Save your changes.

1. [Restart](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-status-and-restart.html) SSM Agent.

If you specified `disk` for the `SessionLogsDestination` parameter, you can verify that SSM Agent creates the temporary log file by starting a new session and then locating the `ipcTempFile.log` in the following location:
+ **Linux**: /var/lib/amazon/ssm/*target ID*/session/orchestration/*session ID*/Standard_Stream/ipcTempFile.log
+ **macOS**: /opt/aws/ssm/data/*target ID*/session/orchestration/*session ID*/Standard_Stream/ipcTempFile.log
+ **Windows Server**: C:\ProgramData\Amazon\SSM\InstanceData\\*target ID*\session\orchestration\\*session ID*\Standard_Stream\ipcTempFile.log

**Note**  
By default, the temporary log file is saved on the instance for 14 days.

If you want to update the `SessionLogsDestination` parameter across multiple instances, we recommend you create an SSM Document that specifies the new configuration. You can then use Systems Manager Run Command to implement the change on your instances. For more information, see [Writing your own AWS Systems Manager documents (blog)](https://aws.amazon.com/blogs/mt/writing-your-own-aws-systems-manager-documents/) and [Running commands on managed nodes](running-commands.md).

# Adjusting how long the Session Manager temporary log file is stored on disk


After you enable Session Manager logging to CloudWatch or Amazon S3, all commands executed during a session (and the resulting output from those commands) are logged to a temporary file on the disk of the target instance. The temporary file is named `ipcTempFile.log`. During a session, or after it is completed, Session Manager uploads this temporary log to either CloudWatch or S3. The temporary log is then deleted according to the duration specified for the SSM Agent `SessionLogsRetentionDurationHours` configuration parameter. By default, the temporary log file is saved on the instance for 14 days in the following location:
+ **Linux**: /var/lib/amazon/ssm/*target ID*/session/orchestration/*session ID*/Standard_Stream/ipcTempFile.log
+ **macOS**: /opt/aws/ssm/data/*target ID*/session/orchestration/*session ID*/Standard_Stream/ipcTempFile.log
+ **Windows Server**: C:\ProgramData\Amazon\SSM\InstanceData\\*target ID*\session\orchestration\\*session ID*\Standard_Stream\ipcTempFile.log

Use the following procedure to adjust how long the Session Manager temporary log file is stored on disk.

**To adjust how long the `ipcTempFile.log` file is stored on disk**

1. Connect to your instance and locate the `amazon-ssm-agent.json` file in the following location.
   + **Linux**: /etc/amazon/ssm/
   + **macOS**: /opt/aws/ssm/
   + **Windows Server**: C:\Program Files\Amazon\SSM

   If the file `amazon-ssm-agent.json` doesn't exist, copy the contents of the `amazon-ssm-agent.json.template` to a new file in the same directory. Name the new file `amazon-ssm-agent.json`. 

1. Change the value of `SessionLogsRetentionDurationHours` to the desired number of hours. If you set it to `0`, the temporary log file is created during the session and deleted as soon as the session completes, so the file doesn't persist after the session ends.

1. Save your changes.

1. [Restart](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-status-and-restart.html) SSM Agent.
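Putting both settings together, a sketch of the relevant `Ssm` section of `amazon-ssm-agent.json` (336 hours is the 14-day default; leave any other settings in your file in place):

```
{
    "Ssm": {
        "SessionLogsDestination": "disk",
        "SessionLogsRetentionDurationHours": 336
    }
}
```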

# Disabling Session Manager logging in CloudWatch Logs and Amazon S3


You can use the Systems Manager console or AWS CLI to disable session logging in your account.

**To disable session logging (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Session Manager**.

1. Choose the **Preferences** tab, and then choose **Edit**.

1. To disable CloudWatch logging, in the **CloudWatch logging** section, clear the **Enable** checkbox.

1. To disable S3 logging, in the **S3 logging** section, clear the **Enable** checkbox.

1. Choose **Save**.

**To disable session logging (AWS CLI)**  
To disable session logging using the AWS CLI, follow the instructions in [Update Session Manager preferences (command line)](getting-started-configure-preferences-cli.md).

 In your JSON file, ensure that the `s3BucketName` and `cloudWatchLogGroupName` inputs contain no values. For example: 

```
"inputs": {
        "s3BucketName": "",
        ...
        "cloudWatchLogGroupName": "",
        ...
    }
```

Alternatively, to disable logging, you can remove all `S3*` and `cloudWatch*` inputs from your JSON file.
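
As a quick local sanity check before you upload your preferences, you can confirm that both logging destinations are empty in the JSON you plan to use. This sketch is purely illustrative and not part of any AWS tooling; the inline JSON stands in for your preferences file:

```python
import json

# Stand-in for the session preferences document you plan to upload.
doc = json.loads("""
{
    "schemaVersion": "1.0",
    "sessionType": "Standard_Stream",
    "inputs": {
        "s3BucketName": "",
        "cloudWatchLogGroupName": ""
    }
}
""")

inputs = doc.get("inputs", {})
# Logging is disabled when both destinations are empty strings or absent.
logging_disabled = not inputs.get("s3BucketName") and not inputs.get("cloudWatchLogGroupName")
print(logging_disabled)  # True
```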

**Note**  
Depending on your configuration, after you disable CloudWatch or S3, a temporary log file might still be generated to disk by SSM Agent. For information about how to disable logging to disk, see [Configuring session logging to disk](session-manager-logging-disk.md).

# Session document schema


The following information describes the schema elements of a Session document. AWS Systems Manager Session Manager uses Session documents to determine which type of session to start, such as a standard session, a port forwarding session, or a session to run an interactive command.

 [schemaVersion](#version)   
The schema version of the Session document. Session documents only support version 1.0.  
Type: String  
Required: Yes

 [description](#descript)   
A description you specify for the Session document. For example, "Document to start port forwarding session with Session Manager".  
Type: String  
Required: No

 [sessionType](#type)   
The type of session the Session document is used to establish.  
Type: String  
Required: Yes  
Valid values: `InteractiveCommands` | `NonInteractiveCommands` | `Port` | `Standard_Stream`

 [inputs](#in)   
The session preferences to use for sessions established using this Session document. This element is required for Session documents that are used to create `Standard_Stream` sessions.  
Type: StringMap  
Required: No    
 [s3BucketName](#bucket)   
The Amazon Simple Storage Service (Amazon S3) bucket you want to send session logs to at the end of your sessions.  
Type: String  
Required: No  
 [s3KeyPrefix](#prefix)   
The prefix to use when sending logs to the Amazon S3 bucket you specified in the `s3BucketName` input. For more information about using a shared prefix with objects stored in Amazon S3, see [How do I use folders in an S3 bucket?](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/using-folders.html) in the *Amazon Simple Storage Service User Guide*.  
Type: String  
Required: No  
 [s3EncryptionEnabled](#s3Encrypt)   
If set to `true`, the Amazon S3 bucket you specified in the `s3BucketName` input must be encrypted.  
Type: Boolean  
Required: Yes  
 [cloudWatchLogGroupName](#logGroup)   
The name of the Amazon CloudWatch Logs (CloudWatch Logs) group you want to send session logs to at the end of your sessions.  
Type: String  
Required: No  
 [cloudWatchEncryptionEnabled](#cwEncrypt)   
If set to `true`, the log group you specified in the `cloudWatchLogGroupName` input must be encrypted.  
Type: Boolean  
Required: Yes  
 [cloudWatchStreamingEnabled](#cwStream)   
If set to `true`, a continual stream of session data logs is sent to the log group you specified in the `cloudWatchLogGroupName` input. If set to `false`, session logs are sent to the log group you specified in the `cloudWatchLogGroupName` input at the end of your sessions.  
Type: Boolean  
Required: Yes  
 [kmsKeyId](#kms)   
The ID of the AWS KMS key you want to use to further encrypt data between your local client machines and the Amazon Elastic Compute Cloud (Amazon EC2) managed nodes you connect to.  
Type: String  
Required: No  
 [runAsEnabled](#run)   
If set to `true`, you must specify a user account that exists on the managed nodes you will be connecting to in the `runAsDefaultUser` input. Otherwise, sessions will fail to start. By default, sessions are started using the `ssm-user` account created by the AWS Systems Manager SSM Agent. The Run As feature is only supported for connecting to Linux and macOS managed nodes.  
Type: Boolean  
Required: Yes  
 [runAsDefaultUser](#runUser)   
The name of the user account to start sessions with on Linux and macOS managed nodes when the `runAsEnabled` input is set to `true`. The user account you specify for this input must exist on the managed nodes you will be connecting to; otherwise, sessions will fail to start. When determining which OS user account to use, Session Manager checks in the following order: the `SSMSessionRunAs` tag on the IAM user's session tags, then the `SSMSessionRunAs` tag on the assumed IAM role, and finally this `runAsDefaultUser` value from session preferences. For more information, see [Turn on Run As support for Linux and macOS managed nodes](session-preferences-run-as.md).  
Type: String  
Required: No  
 [idleSessionTimeout](#timeout)   
The amount of idle time you want to allow before a session ends. This input is measured in minutes.  
Type: String  
Valid values: 1-60  
Required: No  
 [maxSessionDuration](#maxDuration)   
The maximum amount of time you want to allow before a session ends. This input is measured in minutes.  
Type: String  
Valid values: 1-1440  
Required: No  
 [shellProfile](#shell)   
The preferences you specify per operating system to apply within sessions such as shell preferences, environment variables, working directories, and running multiple commands when a session is started.  
Type: StringMap  
Required: No    
 [windows](#win)   
The shell preferences, environment variables, working directories, and commands you specify for sessions on Windows Server managed nodes.  
Type: String  
Required: No  
 [linux](#lin)   
The shell preferences, environment variables, working directories, and commands you specify for sessions on Linux and macOS managed nodes.  
Type: String  
Required: No

 [parameters](#param)   
An object that defines the parameters the document accepts. For more information about defining document parameters, see **parameters** in the [Top-level data elements](documents-syntax-data-elements-parameters.md#top-level). For parameters that you reference often, we recommend that you store those parameters in Systems Manager Parameter Store and then reference them. You can reference `String` and `StringList` Parameter Store parameters in this section of a document. You can't reference `SecureString` Parameter Store parameters in this section of a document. You can reference a Parameter Store parameter using the following format.  

```
{{ssm:parameter-name}}
```
For more information about Parameter Store, see [AWS Systems Manager Parameter Store](systems-manager-parameter-store.md).  
Type: StringMap  
Required: No
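
To illustrate the reference format, the following sketch expands `{{ssm:parameter-name}}` tokens from an in-memory map. It's illustrative only; in practice, Systems Manager resolves Parameter Store references for you when the document runs:

```python
import re

# Hardcoded example values; Systems Manager would fetch these from Parameter Store.
params = {"logpath": "/var/log/amazon/ssm/amazon-ssm-agent.log"}

def resolve(text, params):
    # Replace each {{ssm:name}} token with the matching parameter value.
    return re.sub(r"\{\{\s*ssm:([\w.\-/]+)\s*\}\}", lambda m: params[m.group(1)], text)

print(resolve("tail -f {{ssm:logpath}}", params))
# tail -f /var/log/amazon/ssm/amazon-ssm-agent.log
```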

 [properties](#props)   
An object whose values you specify that are used in the `StartSession` API operation.  
For Session documents that are used for `InteractiveCommands` sessions, the properties object includes the commands to run on the operating systems you specify. You can also determine whether commands are run as `root` using the `runAsElevated` boolean property. For more information, see [Restrict access to commands in a session](session-manager-restrict-command-access.md).  
For Session documents that are used for `Port` sessions, the properties object contains the port number where traffic should be redirected to. For an example, see the `Port` type Session document example later in this topic.  
Type: StringMap  
Required: No

`Standard_Stream` type Session document example

------
#### [ YAML ]

```
---
schemaVersion: '1.0'
description: Document to hold regional settings for Session Manager
sessionType: Standard_Stream
inputs:
  s3BucketName: ''
  s3KeyPrefix: ''
  s3EncryptionEnabled: true
  cloudWatchLogGroupName: ''
  cloudWatchEncryptionEnabled: true
  cloudWatchStreamingEnabled: true
  kmsKeyId: ''
  runAsEnabled: true
  runAsDefaultUser: ''
  idleSessionTimeout: '20'
  maxSessionDuration: '60'
  shellProfile:
    windows: ''
    linux: ''
```

------
#### [ JSON ]

```
{
    "schemaVersion": "1.0",
    "description": "Document to hold regional settings for Session Manager",
    "sessionType": "Standard_Stream",
    "inputs": {
        "s3BucketName": "",
        "s3KeyPrefix": "",
        "s3EncryptionEnabled": true,
        "cloudWatchLogGroupName": "",
        "cloudWatchEncryptionEnabled": true,
        "cloudWatchStreamingEnabled": true,
        "kmsKeyId": "",
        "runAsEnabled": true,
        "runAsDefaultUser": "",
        "idleSessionTimeout": "20",
        "maxSessionDuration": "60",
        "shellProfile": {
            "windows": "date",
            "linux": "pwd;ls"
        }
    }
}
```

------

`InteractiveCommands` type Session document example

------
#### [ YAML ]

```
---
schemaVersion: '1.0'
description: Document to view a log file on a Linux instance
sessionType: InteractiveCommands
parameters:
  logpath:
    type: String
    description: The log file path to read.
    default: "/var/log/amazon/ssm/amazon-ssm-agent.log"
    allowedPattern: "^[a-zA-Z0-9-_/]+(.log)$"
properties:
  linux:
    commands: "tail -f {{ logpath }}"
    runAsElevated: true
```

------
#### [ JSON ]

```
{
    "schemaVersion": "1.0",
    "description": "Document to view a log file on a Linux instance",
    "sessionType": "InteractiveCommands",
    "parameters": {
        "logpath": {
            "type": "String",
            "description": "The log file path to read.",
            "default": "/var/log/amazon/ssm/amazon-ssm-agent.log",
            "allowedPattern": "^[a-zA-Z0-9-_/]+(.log)$"
        }
    },
    "properties": {
        "linux": {
            "commands": "tail -f {{ logpath }}",
            "runAsElevated": true
        }
    }
}
```

------
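
The `allowedPattern` in the `InteractiveCommands` example above constrains what callers can pass for `logpath`. You can test such a pattern locally before publishing the document; a short sketch:

```python
import re

# The allowedPattern from the InteractiveCommands example above.
ALLOWED = r"^[a-zA-Z0-9-_/]+(.log)$"

def is_allowed(logpath):
    return re.fullmatch(ALLOWED, logpath) is not None

print(is_allowed("/var/log/amazon/ssm/amazon-ssm-agent.log"))  # True
print(is_allowed("/var/log/syslog; rm -rf /"))                 # False: spaces and ';' are rejected
```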

`Port` type Session document example

------
#### [ YAML ]

```
---
schemaVersion: '1.0'
description: Document to open given port connection over Session Manager
sessionType: Port
parameters:
  paramExample:
    type: string
    description: document parameter
properties:
  portNumber: anyPortNumber
```

------
#### [ JSON ]

```
{
    "schemaVersion": "1.0",
    "description": "Document to open given port connection over Session Manager",
    "sessionType": "Port",
    "parameters": {
        "paramExample": {
            "type": "string",
            "description": "document parameter"
        }
    },
    "properties": {
        "portNumber": "anyPortNumber"
    }
}
```

------

Session document example with special characters

------
#### [ YAML ]

```
---
schemaVersion: '1.0'
description: Example document with quotation marks
sessionType: InteractiveCommands
parameters:
  Test:
    type: String
    description: Test Input
    maxChars: 32
properties:
  windows:
    commands: |
        $Test = '{{ Test }}'
        $myVariable = \"Computer name is $env:COMPUTERNAME\"
        Write-Host "Test variable: $myVariable`.`nInput parameter: $Test"
    runAsElevated: false
```

------
#### [ JSON ]

```
{
   "schemaVersion":"1.0",
   "description":"Test document with quotation marks",
   "sessionType":"InteractiveCommands",
   "parameters":{
      "Test":{
         "type":"String",
         "description":"Test Input",
         "maxChars":32
      }
   },
   "properties":{
      "windows":{
         "commands":[
            "$Test = '{{ Test }}'",
            "$myVariable = \\\"Computer name is $env:COMPUTERNAME\\\"",
            "Write-Host \"Test variable: $myVariable`.`nInput parameter: $Test\""
         ],
         "runAsElevated":false
      }
   }
}
```

------

# Troubleshooting Session Manager


Use the following information to help you troubleshoot problems with AWS Systems Manager Session Manager.

**Topics**
+ [

## AccessDeniedException when calling the TerminateSession operation
](#session-manager-troubleshooting-access-denied-exception)
+ [

## Document process failed unexpectedly: document worker timed out
](#session-manager-troubleshooting-document-worker-timed-out)
+ [

## Session Manager can't connect from the Amazon EC2 console
](#session-manager-troubleshooting-EC2-console)
+ [

## No permission to start a session
](#session-manager-troubleshooting-start-permissions)
+ [

## SSM Agent not online
](#session-manager-troubleshooting-agent-not-online)
+ [

## No permission to change session preferences
](#session-manager-troubleshooting-preferences-permissions)
+ [

## Managed node not available or not configured for Session Manager
](#session-manager-troubleshooting-instances)
+ [

## Session Manager plugin not found
](#plugin-not-found)
+ [

## Session Manager plugin not automatically added to command line path (Windows)
](#windows-plugin-env-var-not-set)
+ [

## Session Manager plugin becomes unresponsive
](#plugin-unresponsive)
+ [

## TargetNotConnected
](#ssh-target-not-connected)
+ [

## Blank screen displays after starting a session
](#session-manager-troubleshooting-start-blank-screen)
+ [

## Managed node becomes unresponsive during long running sessions
](#session-manager-troubleshooting-log-retention)
+ [

## An error occurred (InvalidDocument) when calling the StartSession operation
](#session-manager-troubleshooting-invalid-document)

## AccessDeniedException when calling the TerminateSession operation


**Problem**: When attempting to terminate a session, Systems Manager returns the following error:

```
An error occurred (AccessDeniedException) when calling the TerminateSession operation: 
User: <user_arn> is not authorized to perform: ssm:TerminateSession on resource: 
<ssm_session_arn> because no identity-based policy allows the ssm:TerminateSession action.
```

**Solution A: Confirm that the [latest version of the Session Manager plugin](https://docs.aws.amazon.com/systems-manager/latest/userguide/plugin-version-history.html) is installed on your local machine**

Enter the following command in the terminal and press Enter.

```
session-manager-plugin --version
```

**Solution B: Install or reinstall the latest version of the plugin**

For more information, see [Install the Session Manager plugin for the AWS CLI](session-manager-working-with-install-plugin.md).

**Solution C: Attempt to reestablish a connection to the node**

Verify that the node is responding to requests, and then try reestablishing the session. If necessary, open the Amazon EC2 console and verify that the instance's status is running.

## Document process failed unexpectedly: document worker timed out


**Problem**: When starting a session to a Linux host, Systems Manager returns the following error:

```
document process failed unexpectedly: document worker timed out, 
check [ssm-document-worker]/[ssm-session-worker] log for crash reason
```

If you configured SSM Agent logging, as described in [Viewing SSM Agent logs](ssm-agent-logs.md), you can view more details in the debugging log. For this issue, Session Manager shows the following log entry:

```
failed to create channel: too many open files
```

This error typically indicates that there are too many Session Manager worker processes running and the underlying operating system reached a limit. You have two options for resolving this issue.

**Solution A: Increase the operating system file notification limit**

You can increase the limit by running the following command from a separate Linux host. This command uses Systems Manager Run Command. The specified value increases `max_user_instances` to 8192. This value is considerably higher than the default value of 128, but it won't strain host resources:

```
aws ssm send-command --document-name AWS-RunShellScript \
--instance-id i-02573cafcfEXAMPLE  --parameters \
"commands=sudo sysctl fs.inotify.max_user_instances=8192"
```
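
Note that a value set with `sysctl` this way lasts only until the next reboot. To make the higher limit persistent, you can also write it to a sysctl configuration file on the host; the file name below is only an example:

```
# /etc/sysctl.d/99-session-manager.conf
fs.inotify.max_user_instances = 8192
```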

**Solution B: Decrease the number of file notifications used by Session Manager on the target host**

Run the following command from a separate Linux host to list sessions running on the target host:

```
aws ssm describe-sessions --state Active --filters key=Target,value=i-02573cafcfEXAMPLE
```

Review the command output to identify sessions that are no longer needed. You can terminate those sessions by running the following command from a separate Linux host:

```
aws ssm terminate-session --session-id session-id
```

Optionally, once there are no more sessions running on the remote server, you can free additional resources by running the following command from a separate Linux host. This command terminates all Session Manager processes running on the remote host, and consequently all sessions to the remote host. Before you run this command, verify there are no ongoing sessions you would like to keep:

```
aws ssm send-command --document-name AWS-RunShellScript \
            --instance-id i-02573cafcfEXAMPLE --parameters \
'{"commands":["sudo kill $(ps aux | grep ssm-session-worker | grep -v grep | awk '"'"'{print $2}'"'"')"]}'
```

## Session Manager can't connect from the Amazon EC2 console


**Problem**: After creating a new instance, choosing **Connect** and then the **Session Manager** tab in the Amazon Elastic Compute Cloud (Amazon EC2) console doesn't give you the option to connect.

**Solution A: Create an instance profile**: If you haven't already done so (as instructed by the information on the **Session Manager** tab in the EC2 console), create an AWS Identity and Access Management (IAM) instance profile by using Quick Setup. Quick Setup is a tool in AWS Systems Manager.

Session Manager requires an IAM instance profile to connect to your instance. You can create an instance profile and assign it to your instance by creating a [host management configuration](https://docs.aws.amazon.com/systems-manager/latest/userguide/quick-setup-host-management.html) with Quick Setup. A *host management configuration* creates an instance profile with the required permissions and assigns it to your instance. A host management configuration also enables other Systems Manager tools and creates IAM roles for running those tools. There is no charge to use Quick Setup or the tools enabled by the host management configuration. [Open Quick Setup and create a host management configuration](https://console.aws.amazon.com/systems-manager/quick-setup/create-configuration?configurationType=SSMHostMgmt).

**Important**  
After you create the host management configuration, Amazon EC2 can take several minutes to register the change and refresh the **Session Manager** tab. If the tab doesn't show you a **Connect** button after two minutes, reboot your instance. After it reboots, if you still don't see the option to connect, open [Quick Setup](https://console.aws.amazon.com/systems-manager/quick-setup/create-configuration?configurationType=SSMHostMgmt) and verify you have only one host management configuration. If there are two, delete the older configuration and wait a few minutes.

If you still can't connect after creating a host management configuration, or if you receive an error, including an error about SSM Agent, see one of the following solutions:
+  [Solution B: No error, but still can't connect](#session-manager-troubleshooting-EC2-console-no-error) 
+  [Solution C: Error about missing SSM Agent](#session-manager-troubleshooting-EC2-console-no-agent) 

### Solution B: No error, but still can't connect


If you created the host management configuration, waited several minutes before trying to connect, and still can't connect, then you might need to manually apply the host management configuration to your instance. Use the following procedure to update a Quick Setup host management configuration and apply changes to an instance.

**To update a host management configuration using Quick Setup**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Quick Setup**.

1. In the **Configurations** list, choose the **Host Management** configuration you created.

1. Choose **Actions**, and then choose **Edit configuration**.

1. Near the bottom of the **Targets** section, under **Choose how you want to target instances**, choose **Manual**.

1. In the **Instances** section, choose the instance you created.

1. Choose **Update**.

Wait a few minutes for EC2 to refresh the **Session Manager** tab. If you still can't connect or if you receive an error, review the remaining solutions for this issue.

### Solution C: Error about missing SSM Agent


If you weren't able to create a host management configuration by using Quick Setup, or if you received an error about SSM Agent not being installed, you may need to manually install SSM Agent on your instance. SSM Agent is Amazon software that enables Systems Manager to connect to your instance by using Session Manager. SSM Agent is installed by default on most Amazon Machine Images (AMIs). If your instance was created from a non-standard AMI or an older AMI, you might have to manually install the agent. For the procedure to install SSM Agent, see the following topic that corresponds to your instance operating system.
+  [https://docs.aws.amazon.com/systems-manager/latest/userguide/manually-install-ssm-agent-windows.html](https://docs.aws.amazon.com/systems-manager/latest/userguide/manually-install-ssm-agent-windows.html) 
+  [https://docs.aws.amazon.com/systems-manager/latest/userguide/manually-install-ssm-agent-macos.html](https://docs.aws.amazon.com/systems-manager/latest/userguide/manually-install-ssm-agent-macos.html) 
+  [AlmaLinux](https://docs.aws.amazon.com/systems-manager/latest/userguide/agent-install-alma.html) 
+  [Amazon Linux 2 and AL2023](https://docs.aws.amazon.com/systems-manager/latest/userguide/agent-install-al2.html) 
+  [https://docs.aws.amazon.com/systems-manager/latest/userguide/agent-install-deb.html](https://docs.aws.amazon.com/systems-manager/latest/userguide/agent-install-deb.html) 
+  [https://docs.aws.amazon.com/systems-manager/latest/userguide/agent-install-oracle.html](https://docs.aws.amazon.com/systems-manager/latest/userguide/agent-install-oracle.html) 
+  [https://docs.aws.amazon.com/systems-manager/latest/userguide/agent-install-rhel.html](https://docs.aws.amazon.com/systems-manager/latest/userguide/agent-install-rhel.html) 
+  [https://docs.aws.amazon.com/systems-manager/latest/userguide/agent-install-rocky.html](https://docs.aws.amazon.com/systems-manager/latest/userguide/agent-install-rocky.html) 
+  [https://docs.aws.amazon.com/systems-manager/latest/userguide/agent-install-ubuntu.html](https://docs.aws.amazon.com/systems-manager/latest/userguide/agent-install-ubuntu.html) 

For issues with SSM Agent, see [Troubleshooting SSM Agent](troubleshooting-ssm-agent.md).

## No permission to start a session


**Problem**: You try to start a session, but the system tells you that you don't have the necessary permissions.
+ **Solution**: A system administrator hasn't granted you AWS Identity and Access Management (IAM) policy permissions for starting Session Manager sessions. For information, see [Control user session access to instances](session-manager-getting-started-restrict-access.md).

## SSM Agent not online


**Problem**: You see a message on the Amazon EC2 instance **Session Manager** tab that states: "SSM Agent is not online. The SSM Agent was unable to connect to a Systems Manager endpoint to register itself with the service."

**Solution**: SSM Agent is Amazon software that runs on Amazon EC2 instances so that Session Manager can connect to them. If you see this error, SSM Agent is unable to establish a connection with the Systems Manager endpoint. Possible sources of the problem could be firewall restrictions, routing problems, or lack of internet connectivity. To resolve this issue, investigate network connectivity problems. For more information, see [Troubleshooting SSM Agent](troubleshooting-ssm-agent.md) and [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md). For information about Systems Manager endpoints, see [AWS Systems Manager endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/ssm.html) in the AWS General Reference.

## No permission to change session preferences


**Problem**: You try to update global session preferences for your organization, but the system tells you that you don't have the necessary permissions.
+ **Solution**: A system administrator hasn't granted you IAM policy permissions for setting Session Manager preferences. For information, see [Grant or deny a user permissions to update Session Manager preferences](preference-setting-permissions.md).

## Managed node not available or not configured for Session Manager


**Problem 1**: You want to start a session on the **Start a session** console page, but a managed node isn't in the list.
+ **Solution A**: The managed node you want to connect to might not have been configured for AWS Systems Manager. For more information, see [Setting up Systems Manager unified console for an organization](systems-manager-setting-up-organizations.md). 
**Note**  
If AWS Systems Manager SSM Agent is already running on a managed node when you attach the IAM instance profile, you might need to restart the agent before the instance is listed on the **Start a session** console page.
+ **Solution B**: The proxy configuration you applied to the SSM Agent on your managed node might be incorrect. If the proxy configuration is incorrect, the managed node won't be able to reach the needed service endpoints, or the node might report as a different operating system to Systems Manager. For more information, see [Configuring SSM Agent to use a proxy on Linux nodes](configure-proxy-ssm-agent.md) and [Configure SSM Agent to use a proxy for Windows Server instances](configure-proxy-ssm-agent-windows.md).

**Problem 2**: A managed node you want to connect is in the list on the **Start a session** console page, but the page reports that "The instance you selected isn't configured to use Session Manager." 
+ **Solution A**: The managed node has been configured for use with the Systems Manager service, but the IAM instance profile attached to the node might not include permissions for the Session Manager tool. For information, see [Verify or Create an IAM Instance Profile with Session Manager Permissions](session-manager-getting-started-instance-profile.md).
+ **Solution B**: The managed node isn't running a version of SSM Agent that supports Session Manager. Update SSM Agent on the node to version 2.3.68.0 or later. 

  Update SSM Agent manually on a managed node by following the steps in [Manually installing and uninstalling SSM Agent on EC2 instances for Windows Server](manually-install-ssm-agent-windows.md), [Manually installing and uninstalling SSM Agent on EC2 instances for Linux](manually-install-ssm-agent-linux.md), or [Manually installing and uninstalling SSM Agent on EC2 instances for macOS](manually-install-ssm-agent-macos.md), depending on the operating system. 

  Alternatively, use the Run Command document `AWS-UpdateSSMAgent` to update the agent version on one or more managed nodes at a time. For information, see [Updating the SSM Agent using Run Command](run-command-tutorial-update-software.md#rc-console-agentexample).
**Tip**  
To always keep your agent up to date, we recommend updating SSM Agent to the latest version on an automated schedule that you define using either of the following methods:  
Run `AWS-UpdateSSMAgent` as part of a State Manager association. For information, see [Walkthrough: Automatically update SSM Agent with the AWS CLI](state-manager-update-ssm-agent-cli.md).
Run `AWS-UpdateSSMAgent` as part of a maintenance window. For information about working with maintenance windows, see [Create and manage maintenance windows using the console](sysman-maintenance-working.md) and [Tutorial: Create and configure a maintenance window using the AWS CLI](maintenance-windows-cli-tutorials-create.md).
+ **Solution C**: The managed node can't reach the requisite service endpoints. You can improve the security posture of your managed nodes by using interface endpoints powered by AWS PrivateLink to connect to Systems Manager endpoints. The alternative to using interface endpoints is to allow outbound internet access on your managed nodes. For more information, see [Use PrivateLink to set up a VPC endpoint for Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started-privatelink.html).
+ **Solution D**: The managed node has limited available CPU or memory resources. Although your managed node might otherwise be functional, if the node doesn't have enough available resources, you can't establish a session. For more information, see [Troubleshooting an Unreachable Instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-console.html).

## Session Manager plugin not found


To use the AWS CLI to run session commands, the Session Manager plugin must also be installed on your local machine. For information, see [Install the Session Manager plugin for the AWS CLI](session-manager-working-with-install-plugin.md).

## Session Manager plugin not automatically added to command line path (Windows)


When you install the Session Manager plugin on Windows, the `session-manager-plugin` executable should be automatically added to your operating system's `PATH` environment variable. If the command you ran to verify that the Session Manager plugin installed correctly (`aws ssm start-session --target instance-id`) failed, you might need to add the plugin to the `PATH` variable manually using the following procedure.

**To modify your PATH variable (Windows)**

1. Press the Windows key and enter **environment variables**.

1. Choose **Edit environment variables for your account**.

1. Choose **PATH** and then choose **Edit**.

1. Add paths to the **Variable value** field, separated by semicolons, as shown in this example: *`C:\existing\path`*;*`C:\new\path`*

   *`C:\existing\path`* represents the value already in the field. *`C:\new\path`* represents the path you want to add, as shown in the following example.
   + **64-bit machines**: `C:\Program Files\Amazon\SessionManagerPlugin\bin\`

1. Choose **OK** twice to apply the new settings.

1. Close any running command prompts and reopen them.

## Session Manager plugin becomes unresponsive


During a port forwarding session, traffic might stop forwarding if you have antivirus software installed on your local machine. In some cases, antivirus software interferes with the Session Manager plugin causing process deadlocks. To resolve this issue, allow or exclude the Session Manager plugin from the antivirus software. For information about the default installation path for the Session Manager plugin, see [Install the Session Manager plugin for the AWS CLI](session-manager-working-with-install-plugin.md).

## TargetNotConnected


**Problem**: You try to start a session, but the system returns the error message, "An error occurred (TargetNotConnected) when calling the StartSession operation: *InstanceID* isn't connected."
+ **Solution A**: This error is returned when the specified target managed node for the session isn't fully configured for use with Session Manager. For information, see [Setting up Session Manager](session-manager-getting-started.md).
+ **Solution B**: This error is also returned if you attempt to start a session on a managed node that is located in a different AWS account or AWS Region.

## Blank screen displays after starting a session


**Problem**: You start a session and Session Manager displays a blank screen.
+ **Solution A**: This issue can occur when the root volume on the managed node is full. Due to lack of disk space, SSM Agent on the node stops working. To resolve this issue, use Amazon CloudWatch to collect metrics and logs from the operating systems. For information, see [Collect metrics, logs, and traces with the CloudWatch agent](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html) in the *Amazon CloudWatch User Guide*.
+ **Solution B**: A blank screen might display if you accessed the console using a link that includes a mismatched endpoint and Region pair. For example, in the following console URL, `us-west-2` is the specified endpoint, but `us-west-1` is the specified AWS Region.

  ```
  https://us-west-2.console.aws.amazon.com/systems-manager/session-manager/sessions?region=us-west-1
  ```
+ **Solution C**: The managed node is connecting to Systems Manager using VPC endpoints, and your Session Manager preferences write session output to an Amazon S3 bucket or Amazon CloudWatch Logs log group, but an `s3` gateway endpoint or `logs` interface endpoint doesn't exist in the VPC. An `s3` endpoint in the format **`com.amazonaws.region.s3`** is required if your managed nodes are connecting to Systems Manager using VPC endpoints, and your Session Manager preferences write session output to an Amazon S3 bucket. Alternatively, a `logs` endpoint in the format **`com.amazonaws.region.logs`** is required if your managed nodes are connecting to Systems Manager using VPC endpoints, and your Session Manager preferences write session output to a CloudWatch Logs log group. For more information, see [Creating VPC endpoints for Systems Manager](setup-create-vpc.md#create-vpc-endpoints).
+ **Solution D**: The log group or Amazon S3 bucket you specified in your session preferences has been deleted. To resolve this issue, update your session preferences with a valid log group or S3 bucket.
+ **Solution E**: The log group or Amazon S3 bucket you specified in your session preferences isn't encrypted, but you have set the `cloudWatchEncryptionEnabled` or `s3EncryptionEnabled` input to `true`. To resolve this issue, update your session preferences with a log group or Amazon S3 bucket that is encrypted, or set the `cloudWatchEncryptionEnabled` or `s3EncryptionEnabled` input to `false`. This scenario is only applicable to customers who create session preferences using command line tools.

## Managed node becomes unresponsive during long running sessions


**Problem**: Your managed node becomes unresponsive or crashes during a long running session.

**Solution**: Decrease the SSM Agent log retention duration for Session Manager.

**To decrease the SSM Agent log retention duration for sessions**

1. Locate the `amazon-ssm-agent.json.template` file in the `/etc/amazon/ssm/` directory on Linux, or in the `C:\Program Files\Amazon\SSM` directory on Windows.

1. Copy the contents of the `amazon-ssm-agent.json.template` to a new file in the same directory named `amazon-ssm-agent.json`.

1. Decrease the default value of the `SessionLogsRetentionDurationHours` value in the `SSM` property, and save the file.

1. Restart the SSM Agent.
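
Steps 2 and 3 can be scripted. The following is a hedged sketch only; the file paths follow the steps above, and the section key casing (`Ssm` versus `SSM`) may vary by agent version, so the code accepts either:

```python
import json
from pathlib import Path

def lower_session_log_retention(template_path, config_path, hours):
    """Copy amazon-ssm-agent.json.template to amazon-ssm-agent.json,
    reducing the SessionLogsRetentionDurationHours value."""
    config = json.loads(Path(template_path).read_text())
    # The setting lives in the agent config's SSM section; accept either casing.
    key = "Ssm" if "Ssm" in config else "SSM"
    config.setdefault(key, {})["SessionLogsRetentionDurationHours"] = hours
    Path(config_path).write_text(json.dumps(config, indent=4))
```

After writing the file, restart SSM Agent (for example, `sudo systemctl restart amazon-ssm-agent` on systemd-based Linux distributions) so the new value takes effect.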

## An error occurred (InvalidDocument) when calling the StartSession operation


**Problem**: You receive the following error when starting a session by using the AWS CLI.

```
An error occurred (InvalidDocument) when calling the StartSession operation: Document type: 'Command' is not supported. Only type: 'Session' is supported for Session Manager.
```

**Solution**: The SSM document you specified for the `--document-name` parameter isn't a *Session* document. Use the following procedure to view a list of Session documents in the AWS Management Console.

**To view a list of Session documents**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Documents**.

1. In the **Categories** list, choose **Session documents**.

# AWS Systems Manager State Manager
State Manager

State Manager, a tool in AWS Systems Manager, is a secure and scalable configuration management service that automates the process of keeping your managed nodes and other AWS resources in a state that you define. To get started with State Manager, open the [Systems Manager console](https://console.aws.amazon.com//systems-manager/state-manager). In the navigation pane, choose **State Manager**.

**Note**  
State Manager and Maintenance Windows can perform some similar types of updates on your managed nodes. Which one you choose depends on whether you need to automate system compliance or perform high-priority, time-sensitive tasks during periods you specify.  
For more information, see [Choosing between State Manager and Maintenance Windows](state-manager-vs-maintenance-windows.md).

## How can State Manager benefit my organization?


By using pre-configured Systems Manager documents (SSM documents), State Manager offers the following benefits for managing your nodes:
+ Bootstrap nodes with specific software at start-up.
+ Download and update agents on a defined schedule, including the SSM Agent.
+ Configure network settings.
+ Join nodes to a Microsoft Active Directory domain.
+ Run scripts on Linux, macOS, and Windows Server managed nodes throughout their lifecycle.

To manage configuration drift across other AWS resources, you can use Automation, a tool in Systems Manager, with State Manager to perform the following types of tasks:
+ Attach a Systems Manager role to Amazon Elastic Compute Cloud (Amazon EC2) instances to make them *managed nodes*.
+ Enforce desired ingress and egress rules for a security group.
+ Create or delete Amazon DynamoDB backups.
+ Create or delete Amazon Elastic Block Store (Amazon EBS) snapshots.
+ Turn off read and write permissions on Amazon Simple Storage Service (Amazon S3) buckets.
+ Start, restart, or stop managed nodes and Amazon Relational Database Service (Amazon RDS) instances.
+ Apply patches to Linux, macOS, and Windows AMIs.

For information about using State Manager with Automation runbooks, see [Scheduling automations with State Manager associations](scheduling-automations-state-manager-associations.md).

## Who should use State Manager?


State Manager is appropriate for any AWS customer that wants to improve the management and governance of their AWS resources and reduce configuration drift.

## What are the features of State Manager?


Key features of State Manager include the following:
+ 

**State Manager associations**  
A State Manager *association* is a configuration that you assign to your AWS resources. The configuration defines the state that you want to maintain on your resources. For example, an association can specify that antivirus software must be installed and running on a managed node, or that certain ports must be closed.

  An association specifies a schedule for when to apply the configuration and the targets for the association. For example, an association for antivirus software might run once a day on all managed nodes in an AWS account. If the software isn't installed on a node, then the association could instruct State Manager to install it. If the software is installed, but the service isn't running, then the association could instruct State Manager to start the service.
+ 

**Flexible scheduling options**  
State Manager offers the following options for scheduling when an association runs:
  + **Immediate or delayed processing**

    When you create an association, by default, the system immediately runs it on the specified resources. After the initial run, the association runs in intervals according to the schedule that you defined. 

    You can instruct State Manager not to run an association immediately by using the **Apply association only at the next specified Cron interval** option in the console or the `ApplyOnlyAtCronInterval` parameter from the command line.
  + **Cron and rate expressions**

    When you create an association, you specify a schedule for when State Manager applies the configuration. State Manager supports most standard cron and rate expressions for scheduling when an association runs. State Manager also supports cron expressions that include a day of the week and the number sign (#) to designate the *n*th day of a month to run an association, and the (L) sign to indicate the last *X* day of the month.
**Note**  
State Manager doesn't currently support specifying months in cron expressions for associations.

    To further control when an association runs, for example if you want to run an association two days after patch Tuesday, you can specify an offset. An *offset* defines how many days to wait after the scheduled day to run an association.

    For information about building cron and rate expressions, see [Reference: Cron and rate expressions for Systems Manager](reference-cron-and-rate-expressions.md).
+ 

**Multiple targeting options**  
An association also specifies the targets for the association. State Manager supports targeting AWS resources by using tags, AWS Resource Groups, individual node IDs, or all managed nodes in the current AWS Region and AWS account.
+ 

**Amazon S3 support**  
Store the command output from association runs in an Amazon S3 bucket of your choice. For more information, see [Working with associations in Systems Manager](state-manager-associations.md).
+ 

**EventBridge support**  
This Systems Manager tool is supported as both an *event* type and a *target* type in Amazon EventBridge rules. For information, see [Monitoring Systems Manager events with Amazon EventBridge](monitoring-eventbridge-events.md) and [Reference: Amazon EventBridge event patterns and types for Systems Manager](reference-eventbridge-events.md).
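
As an illustration of the offset behavior described under **Flexible scheduling options**, the "two days after patch Tuesday" target date can be computed as follows. This is a sketch only; State Manager evaluates the cron expression and offset for you, and the function names here are illustrative:

```python
from datetime import date, timedelta

def nth_weekday(year, month, weekday, n):
    """Date of the nth given weekday in a month (Monday=0 ... Sunday=6)."""
    first = date(year, month, 1)
    days_until = (weekday - first.weekday()) % 7
    return first + timedelta(days=days_until + 7 * (n - 1))

def patch_tuesday_plus_offset(year, month, offset_days=2):
    # Patch Tuesday is the second Tuesday of the month (Tuesday=1);
    # offset_days mirrors an association offset, in days after the scheduled day.
    return nth_weekday(year, month, 1, 2) + timedelta(days=offset_days)
```

For example, `patch_tuesday_plus_offset(2024, 6)` lands two days after the second Tuesday of June 2024.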

## Is there a charge to use State Manager?


State Manager is available at no additional charge.

**Topics**
+ [

## How can State Manager benefit my organization?
](#state-manager-benefits)
+ [

## Who should use State Manager?
](#state-manager-who)
+ [

## What are the features of State Manager?
](#state-manager-features)
+ [

## Is there a charge to use State Manager?
](#state-manager-cost)
+ [

# Understanding how State Manager works
](state-manager-about.md)
+ [

# Working with associations in Systems Manager
](state-manager-associations.md)
+ [

# Creating associations that run MOF files
](systems-manager-state-manager-using-mof-file.md)
+ [

# Creating associations that run Ansible playbooks
](systems-manager-state-manager-ansible.md)
+ [

# Creating associations that run Chef recipes
](systems-manager-state-manager-chef.md)
+ [

# Walkthrough: Automatically update SSM Agent with the AWS CLI
](state-manager-update-ssm-agent-cli.md)
+ [

# Walkthrough: Automatically update PV drivers on EC2 instances for Windows Server
](state-manager-update-pv-drivers.md)

**More info**  
+ [Combating Configuration Drift Using Amazon EC2 Systems Manager and Windows PowerShell DSC](https://aws.amazon.com/blogs/mt/combating-configuration-drift-using-amazon-ec2-systems-manager-and-windows-powershell-dsc/)
+ [Configure Amazon EC2 Instances in an Auto Scaling Group Using State Manager](https://aws.amazon.com/blogs/mt/configure-amazon-ec2-instances-in-an-auto-scaling-group-using-state-manager/)

# Understanding how State Manager works


State Manager, a tool in AWS Systems Manager, is a secure and scalable service that automates the process of keeping managed nodes in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) infrastructure in a state that you define.

Here's how State Manager works:

**1. Determine the state you want to apply to your AWS resources.**  
Do you want to guarantee that your managed nodes are configured with specific applications, such as antivirus or malware applications? Do you want to automate the process of updating the SSM Agent or other AWS packages such as `AWSPVDriver`? Do you need to guarantee that specific ports are closed or open? To get started with State Manager, determine the state that you want to apply to your AWS resources. The state that you want to apply determines which SSM document you use to create a State Manager association.  
A State Manager *association* is a configuration that you assign to your AWS resources. The configuration defines the state that you want to maintain on your resources. For example, an association can specify that antivirus software must be installed and running on a managed node, or that certain ports must be closed.  
An association specifies a schedule for when to apply the configuration and the targets for the association. For example, an association for antivirus software might run once a day on all managed nodes in an AWS account. If the software isn't installed on a node, then the association could instruct State Manager to install it. If the software is installed, but the service isn't running, then the association could instruct State Manager to start the service.

**2. Determine if a preconfigured SSM document can help you create the desired state on your AWS resources.**  
Systems Manager includes dozens of preconfigured SSM documents that you can use to create an association. Preconfigured documents are ready to perform common tasks like installing applications, configuring Amazon CloudWatch, running AWS Systems Manager automations, running PowerShell and Shell scripts, and joining managed nodes to a directory service domain for Active Directory.  
You can view all SSM documents in the [Systems Manager console](https://console.aws.amazon.com/systems-manager/documents). Choose the name of a document to learn more about it. Here are two examples: [AWS-ConfigureAWSPackage](https://console.aws.amazon.com/systems-manager/documents/AWS-ConfigureAWSPackage/description) and [AWS-InstallApplication](https://console.aws.amazon.com/systems-manager/documents/AWS-InstallApplication/description).

**3. Create an association.**  
You can create an association by using the Systems Manager console, the AWS Command Line Interface (AWS CLI), AWS Tools for Windows PowerShell (Tools for Windows PowerShell), or the Systems Manager API. When you create an association, you specify the following information:  
+ A name for the association.
+ The parameters for the SSM document (for example, the path to the application to install or the script to run on the nodes).
+ Targets for the association. You can target managed nodes by specifying tags, by choosing individual node IDs, or by choosing a group in AWS Resource Groups. You can also target *all* managed nodes in the current AWS Region and AWS account. If your targets include more than 1,000 nodes, the system uses an hourly throttling mechanism. This means you might see inaccuracies in your status aggregation count since the aggregation process runs hourly and only when the execution status for a node changes.
+ A role used by the association to take actions on your behalf. State Manager assumes this role and calls the required APIs when dispatching configurations to nodes. For information about setting up the custom-provided role, see [Setup roles for `AssociationDispatchAssumeRole`](#setup-assume-role). If no role is provided, the [service-linked role for Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/using-service-linked-roles.html) is used.
**Note**  
We recommend that you define a custom IAM role so that you have full control of the permissions that State Manager has when taking actions on your behalf.  
Service-linked role support in State Manager is being phased out. Associations that rely on the service-linked role might require updates in the future to continue functioning properly.  
For information about managing the usage of a custom-provided role, see [Manage usage of AssociationDispatchAssumeRole with `ssm:AssociationDispatchAssumeRole`](#context-key-assume-role).
+ A schedule for when or how often to apply the state. You can specify a cron or rate expression. For more information about creating schedules by using cron and rate expressions, see [Cron and rate expressions for associations](reference-cron-and-rate-expressions.md#reference-cron-and-rate-expressions-association).
**Note**  
State Manager doesn't currently support specifying months in cron expressions for associations.
When you run the command to create the association, Systems Manager binds the information you specified (schedule, targets, SSM document, and parameters) to the targeted resources. The status of the association initially shows "Pending" as the system attempts to reach all targets and *immediately* apply the state specified in the association.   
If you create a new association that is scheduled to run while an earlier association is still running, the earlier association times out and the new association runs.
Systems Manager reports the status of the request to create associations on the resources. You can view status details in the console or (for managed nodes) by using the [DescribeInstanceAssociationsStatus](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribeInstanceAssociationsStatus.html) API operation. If you choose to write the output of the command to Amazon Simple Storage Service (Amazon S3) when you create an association, you can also view the output in the Amazon S3 bucket you specified.  
For more information, see [Working with associations in Systems Manager](state-manager-associations.md).   
API operations that are initiated by the SSM document during an association run are not logged in AWS CloudTrail.

**4. Monitor and update.**  
After you create the association, State Manager reapplies the configuration according to the schedule that you defined in the association. You can view the status of your associations on the [State Manager page](https://console.aws.amazon.com/systems-manager/state-manager) in the console or by directly calling the association ID generated by Systems Manager when you created the association. For more information, see [Viewing association histories](state-manager-associations-history.md). You can update your association documents and reapply them as necessary. You can also create multiple versions of an association. For more information, see [Editing and creating a new version of an association](state-manager-associations-edit.md).

## Understanding when associations are applied to resources


When you create an association, you specify an SSM document that defines the configuration, a list of target resources, and a schedule for applying the configuration. By default, State Manager runs the association when you create it and then according to your schedule. State Manager also attempts to run the association in the following situations: 
+ **Association edit** – State Manager runs the association after a user edits and saves their changes to any of the following association fields: `DOCUMENT_VERSION`, `PARAMETERS`, `SCHEDULE_EXPRESSION`, `OUTPUT_S3_LOCATION`.
+ **Document edit** – State Manager runs the association after a user edits and saves changes to the SSM document that defines the association's configuration state. Specifically, the association runs after the following edits to the document:
  + A user specifies a new `$DEFAULT` document version and the association was created using the `$DEFAULT` version. 
  + A user updates a document and the association was created using the `$LATEST` version.
  + A user deletes the document that was specified when the association was created.
+ **Manual start** – State Manager runs the association when initiated by the user from either the Systems Manager console or programmatically.
+ **Target changes** – State Manager runs the association after any of the following activity occurs on a target node:
  + A managed node comes online for the first time.
  + A managed node comes online after missing a scheduled association run.
  + A managed node comes online after being stopped for more than 30 days.

     
**Note**  
State Manager doesn't monitor documents or packages used in associations across AWS accounts. If you update a document or package in one account, the update won't cause the association to run in the second account. You must manually run the association in the second account.

**Preventing associations from running when a target changes**  
In some cases, you might not want an association to run when a target that consists of managed nodes changes, but only according to its specified schedule.
**Note**  
Running an Automation runbook incurs a cost. If an association with an Automation runbook targets all instances in your account and you regularly launch a large number of instances, the runbook is run on each of the instances when it launches. This can lead to elevated automation charges.

  To prevent an association from running when the targets for that association change, select the **Apply association only at the next specified cron interval** check box. This check box is located in the **Specify schedule** area of the **Create association** and **Edit association** pages.

  This option applies to associations that incorporate either an Automation runbook or an SSM document.

## About target updates with Automation runbooks


For associations created with Automation runbooks to be applied when new target nodes are detected, the following conditions must be true:
+ The association must have been created by a [Quick Setup](systems-manager-quick-setup.md) configuration. Quick Setup is a tool in AWS Systems Manager. Associations created by other processes are not currently supported.
+ The Automation runbook must explicitly target the resource type `AWS::EC2::Instance` or `AWS::SSM::ManagedInstance`. 
+ The association must specify both parameters and targets.

  In the console, the **Parameter** and **Targets** fields are displayed when you choose a rate control execution.  
![\[Parameter and target options are presented in the console for rate control executions\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/sm_Rate_control_execution_options.png)

  When you use the [CreateAssociation](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreateAssociation.html), [CreateAssociationBatch](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreateAssociationBatch.html), or [UpdateAssociation](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_UpdateAssociation.html) API actions, you can specify these values using the `AutomationTargetParameterName` and `Targets` inputs. In each of these API actions, you can also prevent the association from running each time a target changes by setting the `ApplyOnlyAtCronInterval` parameter to `true`. 

  For information about using the console to control when associations run, including details for avoiding unexpectedly high costs for Automation executions, see [Understanding when associations are applied to resources](#state-manager-about-scheduling). 

## Setup roles for `AssociationDispatchAssumeRole`


To set up custom dispatch assume roles that State Manager assumes to perform actions on your behalf, the roles must trust `ssm.amazonaws.com` and have the required permissions to call `ssm:SendCommand` or `ssm:StartAutomationExecution`, depending on the association use case.

Sample trust policy: 

```
{
    "Version": "2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "ssm.amazonaws.com"
                ]
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```

## Manage usage of AssociationDispatchAssumeRole with `ssm:AssociationDispatchAssumeRole`


To manage the usage of custom dispatch assume roles that State Manager assumes to perform actions on your behalf, use the `ssm:AssociationDispatchAssumeRole` condition key. This condition controls whether associations can be created or updated without specifying a custom dispatch assume role. 

In the following sample policy, the `"Allow"` statement grants permissions to association create and update APIs only when the `AssociationDispatchAssumeRole` parameter is specified. Without this parameter in API requests, the policy does not grant permission to create or update associations: 

```
{
    "Version": "2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:CreateAssociation",
                "ssm:UpdateAssociation",
                "ssm:CreateAssociationBatch"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "ssm:AssociationDispatchAssumeRole": "*"
                }
            }
        }
    ]
}
```
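
As an illustration of how the `StringLike` condition above behaves, IAM-style wildcard matching against the request's `ssm:AssociationDispatchAssumeRole` value can be sketched as follows. This is a simplified model for intuition, not the real IAM policy evaluator:

```python
import fnmatch

def condition_allows(policy_pattern, dispatch_role_value):
    """Simplified StringLike check: the request must include the
    ssm:AssociationDispatchAssumeRole context key, and its value must
    match the policy's wildcard pattern ('*' matches any sequence)."""
    if dispatch_role_value is None:  # key absent from the API request
        return False
    return fnmatch.fnmatchcase(dispatch_role_value, policy_pattern)
```

With the `"*"` pattern from the sample policy, any request that specifies a dispatch role matches, while a request that omits the role does not, so the `"Allow"` statement doesn't apply.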

# Working with associations in Systems Manager
Working with associations

This section describes how to create and manage State Manager associations by using the AWS Systems Manager console, the AWS Command Line Interface (AWS CLI), and AWS Tools for PowerShell. 

**Topics**
+ [

# Understanding targets and rate controls in State Manager associations
](systems-manager-state-manager-targets-and-rate-controls.md)
+ [

# Creating associations
](state-manager-associations-creating.md)
+ [

# Editing and creating a new version of an association
](state-manager-associations-edit.md)
+ [

# Deleting associations
](systems-manager-state-manager-delete-association.md)
+ [

# Running Auto Scaling groups with associations
](systems-manager-state-manager-asg.md)
+ [

# Viewing association histories
](state-manager-associations-history.md)
+ [

# Working with associations using IAM
](systems-manager-state-manager-iam.md)

# Understanding targets and rate controls in State Manager associations
Understanding targets and rate controls

This topic describes State Manager features that help you deploy an association to dozens or hundreds of nodes while controlling the number of nodes that run the association at the scheduled time. State Manager is a tool in AWS Systems Manager.

## Using targets


When you create a State Manager association, you choose which nodes to configure with the association in the **Targets** section of the Systems Manager console, as shown here.

![\[Different options for targeting nodes when creating a State Manager association\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/state-manager-targets.png)


If you create an association by using a command line tool such as the AWS Command Line Interface (AWS CLI), then you specify the `targets` parameter. Targeting nodes allows you to configure tens, hundreds, or thousands of nodes with an association without having to specify or choose individual node IDs. 

Each managed node can be targeted by a maximum of 20 associations.

State Manager includes the following target options when creating an association.

**Specify tags**  
Use this option to specify a tag key and (optionally) a tag value assigned to your nodes. When you run the request, the system locates and attempts to create the association on all nodes that match the specified tag key and value. If you specified multiple tag values, the association targets any node with at least one of those tag values. When the system initially creates the association, it runs the association. After this initial run, the system runs the association according to the schedule you specified.

If you create new nodes and assign the specified tag key and value to those nodes, the system automatically applies the association, runs it immediately, and then runs it according to the schedule. This applies when the association uses a Command or Policy document and doesn't apply if the association uses an Automation runbook. If you delete the specified tags from a node, the system no longer runs the association on those nodes.

**Note**  
If you use Automation runbooks with State Manager and the tagging limitation prevents you from achieving a specific goal, consider using Automation runbooks with Amazon EventBridge. For more information, see [Run automations based on EventBridge events](running-automations-event-bridge.md). For more information about using runbooks with State Manager, see [Scheduling automations with State Manager associations](scheduling-automations-state-manager-associations.md). 

As a best practice, we recommend using tags when creating associations that use a Command or Policy document. We also recommend using tags when creating associations to run Auto Scaling groups. For more information, see [Running Auto Scaling groups with associations](systems-manager-state-manager-asg.md).

**Note**  
Note the following information.  
When creating an association in the AWS Management Console that targets nodes by using tags, you can specify only one tag key for an automation association and five tag keys for a command association. *All* tag keys specified in the association must be currently assigned to the node. If they aren't, State Manager fails to target the node for an association.
If you want to use the console *and* you want to target your nodes by using more than one tag key for an automation association or more than five tag keys for a command association, assign the tag keys to an AWS Resource Groups group and add the nodes to it. You can then choose the **Resource Group** option in the **Targets** list when you create the State Manager association.
You can specify a maximum of five tag keys by using the AWS CLI. If you use the AWS CLI, *all* tag keys specified in the `create-association` command must be currently assigned to the node. If they aren't, State Manager fails to target the node for an association.
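
The tag-matching behavior described above (every specified tag key must be assigned to the node; for a given key, any one of the specified values matches) can be sketched as follows. This is a simplified model for illustration, and the function name is hypothetical:

```python
def node_matches_targets(node_tags, target_tags):
    """node_tags: {key: value} assigned to the managed node.
    target_tags: {key: [values]} specified in the association's targets.
    Every targeted key must be present on the node, and the node's value
    for that key must be one of the listed values (any one is enough)."""
    return all(
        key in node_tags and node_tags[key] in values
        for key, values in target_tags.items()
    )
```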

**Choose nodes manually**  
Use this option to manually select the nodes where you want to create the association. The **Instances** pane displays all Systems Manager managed nodes in the current AWS account and AWS Region. You can manually select as many nodes as you want. When the system initially creates the association, it runs the association. After this initial run, the system runs the association according to the schedule you specified.

**Note**  
If a managed node you expect to see isn't listed, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md) for troubleshooting tips.

**Choose a resource group**  
Use this option to create an association on all nodes returned by an AWS Resource Groups tag-based or AWS CloudFormation stack-based query. 

The following details apply when you target a resource group for an association.
+ If you add new nodes to a group, the system automatically maps the nodes to the association that targets the resource group. The system applies the association to the nodes when it discovers the change. After this initial run, the system runs the association according to the schedule you specified.
+ If you create an association that targets a resource group and the `AWS::SSM::ManagedInstance` resource type was specified for that group, then by design, the association runs on both Amazon Elastic Compute Cloud (Amazon EC2) instances and non-EC2 nodes in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment.

  The converse is also true. If you create an association that targets a resource group and the `AWS::EC2::Instance` resource type was specified for that group, then by design, the association runs on both non-EC2 nodes in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment and Amazon EC2 instances.
+ If you create an association that targets a resource group, the resource group must not have more than five tag keys assigned to it or more than five values specified for any one tag key. If either of these conditions applies to the tags and keys assigned to your resource group, the association fails to run and returns an `InvalidTarget` error. 
+ If you create an association that targets a resource group using tags, you can't choose the **(empty value)** option for the tag value.
+ If you delete a resource group, all instances in that group no longer run the association. As a best practice, delete associations targeting the group.
+ You can target only a single resource group for an association. Multiple or nested groups aren't supported.
+ After you create an association, State Manager periodically updates the association with information about resources in the Resource Group. If you add new resources to a Resource Group, the schedule for when the system applies the association to the new resources depends on several factors. You can determine the status of the association in the State Manager page of the Systems Manager console.
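
The tag limits described above can be checked before you create the association. The following Python sketch is purely illustrative (the function name is hypothetical, not part of any AWS SDK); it validates a tag map against the documented limits:

```python
def validate_group_tags(tags):
    """Return True if a resource group's tags stay within the documented
    limits: at most 5 tag keys, and at most 5 values per tag key.
    Exceeding either limit causes the association to fail with InvalidTarget."""
    if len(tags) > 5:
        return False
    return all(len(values) <= 5 for values in tags.values())

print(validate_group_tags({"Environment": ["Dev", "Test"]}))          # True
print(validate_group_tags({"Env": ["a", "b", "c", "d", "e", "f"]}))   # False (6 values)
```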

**Warning**  
An AWS Identity and Access Management (IAM) user, group, or role with permission to create an association that targets a resource group of Amazon EC2 instances automatically has root-level control of all instances in the group. Only trusted administrators should be permitted to create associations. 

For more information about Resource Groups, see [What Is AWS Resource Groups?](https://docs.aws.amazon.com/ARG/latest/userguide/) in the *AWS Resource Groups User Guide*.

**Choose all nodes**  
Use this option to target all nodes in the current AWS account and AWS Region. When you run the request, the system locates and attempts to create the association on all nodes in the current AWS account and AWS Region. When the system initially creates the association, it runs the association. After this initial run, the system runs the association according to the schedule you specified. If you create new nodes, the system automatically applies the association, runs it immediately, and then runs it according to the schedule.

## Using rate controls


You can control the execution of an association on your nodes by specifying a concurrency value and an error threshold. The concurrency value specifies how many nodes can run the association simultaneously. An error threshold specifies how many association executions can fail before Systems Manager sends a command to each node configured with that association to stop running the association. The command stops the association from running until the next scheduled execution. The concurrency and error threshold features are collectively called *rate controls*. 

![\[Different rate control options when creating a State Manager association\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/state-manager-rate-controls.png)


**Concurrency**  
Concurrency helps to limit the impact on your nodes by allowing you to specify that only a certain number of nodes can process an association at one time. You can specify either an absolute number of nodes, for example 20, or a percentage of the target set of nodes, for example 10%.

State Manager concurrency has the following restrictions and limitations:
+ If you choose to create an association by using targets, but you don't specify a concurrency value, then State Manager automatically enforces a maximum concurrency of 50 nodes.
+ If new nodes that match the target criteria come online while an association that uses concurrency is running, then the new nodes run the association if the concurrency value isn't exceeded. If the concurrency value is exceeded, then the nodes are ignored during the current association execution interval. The nodes run the association during the next scheduled interval while conforming to the concurrency requirements.
+ If you update an association that uses concurrency, and one or more nodes are processing that association when it's updated, then any node that is running the association is allowed to complete. Those associations that haven't started are stopped. After running associations complete, all target nodes immediately run the association again because it was updated. When the association runs again, the concurrency value is enforced. 
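
As a rough illustration of how a concurrency value maps to a node count (the rounding behavior here is an assumption for the sketch, not the service's documented implementation):

```python
import math

def concurrency_cap(value, target_count):
    """Approximate how many nodes may run the association at once.
    value: None (use the default cap of 50 when targets are used),
    an absolute count such as "20", or a percentage such as "10%".
    Rounding for percentages is an assumption in this sketch."""
    if value is None:
        return min(50, target_count)   # default maximum concurrency of 50 nodes
    if value.endswith("%"):
        return max(1, math.floor(target_count * int(value.rstrip("%")) / 100))
    return min(int(value), target_count)

print(concurrency_cap(None, 200))   # 50
print(concurrency_cap("10%", 200))  # 20
print(concurrency_cap("20", 8))     # 8
```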

**Error thresholds**  
An error threshold specifies how many association executions are allowed to fail before Systems Manager sends a command to each node configured with that association. The command stops the association from running until the next scheduled execution. You can specify either an absolute number of errors, for example 10, or a percentage of the target set, for example 10%.

If you specify an absolute number of three errors, for example, State Manager sends the stop command when the fourth error is returned. If you specify 0, then State Manager sends the stop command after the first error result is returned.

If you specify an error threshold of 10% for 50 associations, then State Manager sends the stop command when the sixth error is returned. Associations that are already running when an error threshold is reached are allowed to complete, but some of these associations might fail. To ensure that there aren't more errors than the number specified for the error threshold, set the **Concurrency** value to 1 so that associations proceed one at a time. 
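
The arithmetic in these examples can be summarized as: the stop command is sent on the first error after the allowed count is exhausted. A minimal sketch (illustrative only, not the service implementation):

```python
import math

def stop_on_error_number(threshold, target_count):
    """Error count at which State Manager sends the stop command.
    threshold: an absolute count such as 3, or a percentage such as "10%"."""
    if isinstance(threshold, str) and threshold.endswith("%"):
        allowed = math.floor(target_count * int(threshold.rstrip("%")) / 100)
    else:
        allowed = int(threshold)
    return allowed + 1

print(stop_on_error_number("10%", 50))  # 6  (10% of 50 = 5 allowed errors)
print(stop_on_error_number(3, 50))      # 4
print(stop_on_error_number(0, 50))      # 1
```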

State Manager error thresholds have the following restrictions and limitations:
+ Error thresholds are enforced for the current interval.
+ Information about each error, including step-level details, is recorded in the association history.
+ If you choose to create an association by using targets, but you don't specify an error threshold, then State Manager automatically enforces a failure threshold of 100%.

# Creating associations


State Manager, a tool in AWS Systems Manager, helps you keep your AWS resources in a state that you define and reduce configuration drift. To do this, State Manager uses associations. An *association* is a configuration that you assign to your AWS resources. The configuration defines the state that you want to maintain on your resources. For example, an association can specify that antivirus software must be installed and running on a managed node, or that certain ports must be closed.

An association specifies a schedule for when to apply the configuration and the targets for the association. For example, an association for antivirus software might run once a day on all managed nodes in an AWS account. If the software isn't installed on a node, then the association could instruct State Manager to install it. If the software is installed, but the service isn't running, then the association could instruct State Manager to start the service.

**Warning**  
When you create an association, you can choose an AWS resource group of managed nodes as the target for the association. If an AWS Identity and Access Management (IAM) user, group, or role has permission to create an association that targets a resource group of managed nodes, then that user, group, or role automatically has root-level control of all nodes in the group. Permit only trusted administrators to create associations. 

**Association targets and rate controls**  
An association specifies which managed nodes, or targets, should receive the association. State Manager includes several features to help you target your managed nodes and control how the association is deployed to those targets. For more information about targets and rate controls, see [Understanding targets and rate controls in State Manager associations](systems-manager-state-manager-targets-and-rate-controls.md).

**Tagging associations**  
You can assign tags to an association when you create it by using a command line tool such as the AWS CLI or AWS Tools for PowerShell. Adding tags to an association by using the Systems Manager console isn't supported. 

**Running associations**  
By default, State Manager runs an association immediately after you create it, and then according to the schedule that you've defined. 

The system also runs associations according to the following rules:
+ State Manager attempts to run the association on all specified or targeted nodes during an interval.
+ If an association doesn't run during an interval (because, for example, a concurrency value limited the number of nodes that could process the association at one time), then State Manager attempts to run the association during the next interval.
+ State Manager runs the association after changes to the association's configuration, target nodes, documents, or parameters. For more information, see [Understanding when associations are applied to resources](state-manager-about.md#state-manager-about-scheduling).
+ State Manager records history for all skipped intervals. You can view the history on the **Execution History** tab.

## Scheduling associations


You can schedule associations to run at basic intervals such as *every 10 hours*, or you can create more advanced schedules using custom cron and rate expressions. You can also prevent associations from running when you first create them. 

**Using cron and rate expressions to schedule association runs**  
In addition to standard cron and rate expressions, State Manager also supports cron expressions that include a day of the week and the number sign (#) to designate the *n*th day of a month to run an association. Here is an example that runs a cron schedule on the third Tuesday of every month at 23:30 UTC:

`cron(30 23 ? * TUE#3 *)`

Here is an example that runs on the second Thursday of every month at midnight UTC:

`cron(0 0 ? * THU#2 *)`

State Manager also supports the (L) sign to indicate the last *X* day of the month. Here is an example that runs a cron schedule on the last Tuesday of every month at midnight UTC:

`cron(0 0 ? * 3L *)`

To further control when an association runs, for example if you want to run an association two days after patch Tuesday, you can specify an offset. An *offset* defines how many days to wait after the scheduled day to run an association. For example, if you specified a cron schedule of `cron(0 0 ? * THU#2 *)`, you could specify the number 3 in the **Schedule offset** field to run the association each Sunday after the second Thursday of the month.
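
To see why an offset of 3 on `cron(0 0 ? * THU#2 *)` lands on a Sunday, the date arithmetic can be sketched in Python (a standalone illustration; State Manager performs this scheduling itself):

```python
import datetime

def nth_weekday(year, month, weekday, n):
    """Date of the nth occurrence of a weekday (Mon=0 ... Sun=6) in a month."""
    first = datetime.date(year, month, 1)
    offset = (weekday - first.weekday()) % 7
    return first + datetime.timedelta(days=offset + 7 * (n - 1))

# Second Thursday of June 2025, then apply a schedule offset of 3 days
run_day = nth_weekday(2025, 6, 3, 2) + datetime.timedelta(days=3)
print(run_day, run_day.strftime("%A"))  # 2025-06-15 Sunday
```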

**Note**  
To use offsets, you must either select **Apply association only at the next specified Cron interval** in the console or specify the `ApplyOnlyAtCronInterval` parameter from the command line. When either of these options is activated, State Manager doesn't run the association immediately after you create it.

For more information about cron and rate expressions, see [Reference: Cron and rate expressions for Systems Manager](reference-cron-and-rate-expressions.md).

## Create an association (console)


The following procedure describes how to use the Systems Manager console to create a State Manager association.

**Note**  
Note the following information.  
This procedure describes how to create an association that uses either a `Command` or a `Policy` document to target managed nodes. For information about creating an association that uses an Automation runbook to target nodes or other types of AWS resources, see [Scheduling automations with State Manager associations](scheduling-automations-state-manager-associations.md).
When creating an association, you can specify a maximum of five tag keys by using the AWS Management Console. *All* tag keys specified for the association must be currently assigned to the node. If they aren't, State Manager fails to target the node for the association.

**To create a State Manager association**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **State Manager**.

1. Choose **Create association**.

1. In the **Name** field, specify a name.

1. In the **Document** list, choose the option next to a document name. Note the document type. This procedure applies to `Command` and `Policy` documents. For information about creating an association that uses an Automation runbook, see [Scheduling automations with State Manager associations](scheduling-automations-state-manager-associations.md).
**Important**  
State Manager doesn't support running associations that use a new version of a document if that document is shared from another account. State Manager always runs the `default` version of a document if shared from another account, even though the Systems Manager console shows that a new version was processed. If you want to run an association using a new version of a document shared from another account, you must set the document version to `default`.

1. For **Parameters**, specify the required input parameters.

1. (Optional) For **Association Dispatch Assume Role**, select a role from the drop-down list. State Manager takes actions using this role on your behalf. For information about setting up the custom-provided role, see [Setup roles for `AssociationDispatchAssumeRole`](state-manager-about.md#setup-assume-role).
**Note**  
It is recommended that you define a custom IAM role so that you have full control of the permissions that State Manager has when taking actions on your behalf.  
Service-linked role support in State Manager is being phased out. Associations relying on a service-linked role may require updates in the future to continue functioning properly.  
For information about managing the usage of a custom-provided role, see [Manage usage of AssociationDispatchAssumeRole with `ssm:AssociationDispatchAssumeRole`](state-manager-about.md#context-key-assume-role).

1. (Optional) Choose a CloudWatch alarm to apply to your association for monitoring. 
**Note**  
Note the following information about this step.  
The alarms list displays a maximum of 100 alarms. If you don't see your alarm in the list, use the AWS Command Line Interface to create the association. For more information, see [Create an association (command line)](#create-state-manager-association-commandline).
To attach a CloudWatch alarm to your command, the IAM principal that creates the association must have permission for the `iam:createServiceLinkedRole` action. For more information about CloudWatch alarms, see [Using Amazon CloudWatch alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html).
If your alarm activates, any pending command invocations or automations do not run.

1. For **Targets**, choose an option. For information about using targets, see [Understanding targets and rate controls in State Manager associations](systems-manager-state-manager-targets-and-rate-controls.md).
**Note**  
In order for associations that are created with Automation runbooks to be applied when new target nodes are detected, certain conditions must be met. For information, see [About target updates with Automation runbooks](state-manager-about.md#runbook-target-updates).

1. In the **Specify schedule** section, choose either **On Schedule** or **No schedule**. If you choose **On Schedule**, use the buttons provided to create a cron or rate schedule for the association. 

   If you don't want the association to run immediately after you create it, choose **Apply association only at the next specified Cron interval**.

1. (Optional) In the **Schedule offset** field, specify a number between 1 and 6. 

1. In the **Advanced options** section, use **Compliance severity** to choose a severity level for the association and use **Change Calendars** to choose a change calendar for the association.

   Compliance reporting indicates whether the association state is compliant or noncompliant, along with the severity level you indicate here. For more information, see [About State Manager association compliance](compliance-about.md#compliance-about-association).

   The change calendar determines when the association runs. If the calendar is closed, the association isn't applied. If the calendar is open, the association runs accordingly. For more information, see [AWS Systems Manager Change Calendar](systems-manager-change-calendar.md).

1. In the **Rate control** section, choose options to control how the association runs on multiple nodes. For more information about using rate controls, see [Understanding targets and rate controls in State Manager associations](systems-manager-state-manager-targets-and-rate-controls.md).

   In the **Concurrency** section, choose an option: 
   + Choose **targets** to enter an absolute number of targets that can run the association simultaneously.
   + Choose **percentage** to enter a percentage of the target set that can run the association simultaneously.

   In the **Error threshold** section, choose an option:
   + Choose **errors** to enter an absolute number of errors that are allowed before State Manager stops running associations on additional targets.
   + Choose **percentage** to enter a percentage of errors that are allowed before State Manager stops running associations on additional targets.

1. (Optional) For **Output options**, to save the command output to a file, select the **Enable writing output to S3** box. Enter the bucket and prefix (folder) names in the boxes.
**Note**  
The S3 permissions that grant the ability to write the data to an S3 bucket are those of the instance profile assigned to the managed node, not those of the IAM user performing this task. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md) or [Create an IAM service role for a hybrid environment](hybrid-multicloud-service-role.md). In addition, if the specified S3 bucket is in a different AWS account, verify that the instance profile or IAM service role associated with the managed node has the necessary permissions to write to that bucket.

   Following are the minimal permissions required to turn on Amazon S3 output for an association. You can further restrict access by attaching IAM policies to users or roles within an account. At minimum, an Amazon EC2 instance profile should have an IAM role with the `AmazonSSMManagedInstanceCore` managed policy and the following inline policy. 

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "s3:PutObject",
                   "s3:GetObject",
                   "s3:PutObjectAcl"
               ],
               "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
           }
       ]
   }
   ```

------

   For minimal permissions, the Amazon S3 bucket you export to must have the default settings defined by the Amazon S3 console. For more information about creating Amazon S3 buckets, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) in the *Amazon S3 User Guide*. 
**Note**  
API operations that are initiated by the SSM document during an association run are not logged in AWS CloudTrail.

1. Choose **Create Association**.

**Note**  
If you delete the association you created, the association no longer runs on any targets of that association.

## Create an association (command line)


The following procedure describes how to use the AWS CLI (on Linux or Windows Server) or Tools for PowerShell to create a State Manager association. This section includes several examples that show how to use targets and rate controls. Targets and rate controls allow you to assign an association to dozens or hundreds of nodes while controlling the execution of those associations. For more information about targets and rate controls, see [Understanding targets and rate controls in State Manager associations](systems-manager-state-manager-targets-and-rate-controls.md).

**Important**  
This procedure describes how to create an association that uses either a `Command` or a `Policy` document to target managed nodes. For information about creating an association that uses an Automation runbook to target nodes or other types of AWS resources, see [Scheduling automations with State Manager associations](scheduling-automations-state-manager-associations.md).

**Before you begin**  
The `targets` parameter is an array of search criteria that targets nodes using a `Key`,`Value` combination that you specify. If you plan to create an association on dozens or hundreds of nodes by using the `targets` parameter, review the following targeting options before you begin the procedure.

Target specific nodes by specifying IDs

```
--targets Key=InstanceIds,Values=instance-id-1,instance-id-2,instance-id-3
```

```
--targets Key=InstanceIds,Values=i-02573cafcfEXAMPLE,i-0471e04240EXAMPLE,i-07782c72faEXAMPLE
```

Target instances by using tags

```
--targets Key=tag:tag-key,Values=tag-value-1,tag-value-2,tag-value-3
```

```
--targets Key=tag:Environment,Values=Development,Test,Pre-production
```

Target nodes by using AWS Resource Groups

```
--targets Key=resource-groups:Name,Values=resource-group-name
```

```
--targets Key=resource-groups:Name,Values=WindowsInstancesGroup
```

Target all instances in the current AWS account and AWS Region

```
--targets Key=InstanceIds,Values=*
```

**Note**  
Note the following information.  
State Manager doesn't support running associations that use a new version of a document if that document is shared from another account. State Manager always runs the `default` version of a document if shared from another account, even though the Systems Manager console shows that a new version was processed. If you want to run an association using a new version of a document shared from another account, you must set the document version to `default`.
State Manager doesn't support the `IncludeChildOrganizationUnits`, `ExcludeAccounts`, `TargetsMaxErrors`, `TargetsMaxConcurrency`, `Targets`, and `TargetLocationAlarmConfiguration` parameters for [TargetLocation](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_TargetLocation.html).
You can specify a maximum of five tag keys by using the AWS CLI. If you use the AWS CLI, *all* tag keys specified in the `create-association` command must be currently assigned to the node. If they aren't, State Manager fails to target the node for an association.
When you create an association, you specify when the schedule runs. Specify the schedule by using a cron or rate expression. For more information about cron and rate expressions, see [Cron and rate expressions for associations](reference-cron-and-rate-expressions.md#reference-cron-and-rate-expressions-association).
In order for associations that are created with Automation runbooks to be applied when new target nodes are detected, certain conditions must be met. For information, see [About target updates with Automation runbooks](state-manager-about.md#runbook-target-updates).

**To create an association**

1. Install and configure the AWS CLI or the AWS Tools for PowerShell, if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [Installing the AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html).

1. Use the following format to create a command that creates a State Manager association. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
       --name document_name \
       --document-version version_of_document_applied \
       --instance-id instances_to_apply_association_on \
       --parameters (if any) \
       --targets target_options \
       --association-dispatch-assume-role arn_of_role_to_be_used_when_dispatching_configurations \
       --schedule-expression "cron_or_rate_expression" \
       --apply-only-at-cron-interval required_parameter_for_schedule_offsets \
       --schedule-offset number_between_1_and_6 \
       --output-location s3_bucket_to_store_output_details \
       --association-name association_name \
       --max-errors a_number_of_errors_or_a_percentage_of_target_set \
       --max-concurrency a_number_of_instances_or_a_percentage_of_target_set \
       --compliance-severity severity_level \
       --calendar-names change_calendar_names \
       --target-locations aws_region_or_account \
       --tags "Key=tag_key,Value=tag_value"
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association ^
       --name document_name ^
       --document-version version_of_document_applied ^
       --instance-id instances_to_apply_association_on ^
       --parameters (if any) ^
       --targets target_options ^
       --association-dispatch-assume-role arn_of_role_to_be_used_when_dispatching_configurations ^
       --schedule-expression "cron_or_rate_expression" ^
       --apply-only-at-cron-interval required_parameter_for_schedule_offsets ^
       --schedule-offset number_between_1_and_6 ^
       --output-location s3_bucket_to_store_output_details ^
       --association-name association_name ^
       --max-errors a_number_of_errors_or_a_percentage_of_target_set ^
       --max-concurrency a_number_of_instances_or_a_percentage_of_target_set ^
       --compliance-severity severity_level ^
       --calendar-names change_calendar_names ^
       --target-locations aws_region_or_account ^
       --tags "Key=tag_key,Value=tag_value"
   ```

------
#### [ PowerShell ]

   ```
   New-SSMAssociation `
       -Name document_name `
       -DocumentVersion version_of_document_applied `
       -InstanceId instances_to_apply_association_on `
       -Parameters (if any) `
       -Target target_options `
       -AssociationDispatchAssumeRole arn_of_role_to_be_used_when_dispatching_configurations `
       -ScheduleExpression "cron_or_rate_expression" `
       -ApplyOnlyAtCronInterval required_parameter_for_schedule_offsets `
       -ScheduleOffSet number_between_1_and_6 `
       -OutputLocation s3_bucket_to_store_output_details `
       -AssociationName association_name `
       -MaxError a_number_of_errors_or_a_percentage_of_target_set `
       -MaxConcurrency a_number_of_instances_or_a_percentage_of_target_set `
       -ComplianceSeverity severity_level `
       -CalendarNames change_calendar_names `
       -TargetLocations aws_region_or_account `
       -Tags "Key=tag_key,Value=tag_value"
   ```

------

   The following example creates an association on nodes tagged with `"Environment,Linux"`. The association uses the `AWS-UpdateSSMAgent` document to update the SSM Agent on the targeted nodes at 2:00 UTC every Sunday morning. This association runs simultaneously on 10 nodes maximum at any given time. Also, this association stops running on more nodes for a particular execution interval if the error count exceeds 5. For compliance reporting, this association is assigned a severity level of Medium.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
     --association-name Update_SSM_Agent_Linux \
     --targets Key=tag:Environment,Values=Linux \
     --name AWS-UpdateSSMAgent  \
     --association-dispatch-assume-role arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole \
     --compliance-severity "MEDIUM" \
     --schedule-expression "cron(0 2 ? * SUN *)" \
     --max-errors "5" \
     --max-concurrency "10"
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association ^
     --association-name Update_SSM_Agent_Linux ^
     --targets Key=tag:Environment,Values=Linux ^
     --name AWS-UpdateSSMAgent  ^
     --association-dispatch-assume-role arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole ^
     --compliance-severity "MEDIUM" ^
     --schedule-expression "cron(0 2 ? * SUN *)" ^
     --max-errors "5" ^
     --max-concurrency "10"
   ```

------
#### [ PowerShell ]

   ```
   New-SSMAssociation `
     -AssociationName Update_SSM_Agent_Linux `
     -Name AWS-UpdateSSMAgent `
     -AssociationDispatchAssumeRole "arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole" `
     -Target @{
         "Key"="tag:Environment"
         "Values"="Linux"
       } `
     -ComplianceSeverity MEDIUM `
     -ScheduleExpression "cron(0 2 ? * SUN *)" `
     -MaxConcurrency 10 `
     -MaxError 5
   ```

------

   The following example targets node IDs by specifying a wildcard value (*). This allows Systems Manager to create an association on *all* nodes in the current AWS account and AWS Region. This association runs simultaneously on 10 nodes maximum at any given time. Also, this association stops running on more nodes for a particular execution interval if the error count exceeds 5. For compliance reporting, this association is assigned a severity level of Medium. This association uses a schedule offset, which means it runs two days after the specified cron schedule. It also includes the `ApplyOnlyAtCronInterval` parameter, which is required to use the schedule offset and which means the association won't run immediately after it is created.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
     --association-name Update_SSM_Agent_Linux \
     --name "AWS-UpdateSSMAgent" \
     --association-dispatch-assume-role arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole \
     --targets "Key=instanceids,Values=*" \
     --compliance-severity "MEDIUM" \
     --schedule-expression "cron(0 2 ? * SUN#2 *)" \
     --apply-only-at-cron-interval \
     --schedule-offset 2 \
     --max-errors "5" \
     --max-concurrency "10" \
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association ^
     --association-name Update_SSM_Agent_Linux ^
     --name "AWS-UpdateSSMAgent" ^
     --association-dispatch-assume-role arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole ^
     --targets "Key=instanceids,Values=*" ^
     --compliance-severity "MEDIUM" ^
     --schedule-expression "cron(0 2 ? * SUN#2 *)" ^
     --apply-only-at-cron-interval ^
     --schedule-offset 2 ^
     --max-errors "5" ^
     --max-concurrency "10"
   ```

------
#### [ PowerShell ]

   ```
   New-SSMAssociation `
     -AssociationName Update_SSM_Agent_All `
     -Name AWS-UpdateSSMAgent `
     -AssociationDispatchAssumeRole "arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole" `
     -Target @{
         "Key"="InstanceIds"
         "Values"="*"
       } `
     -ScheduleExpression "cron(0 2 ? * SUN#2 *)" `
     -ApplyOnlyAtCronInterval `
     -ScheduleOffset 2 `
     -MaxConcurrency 10 `
     -MaxError 5 `
     -ComplianceSeverity MEDIUM
   ```

------

   The following example creates an association on nodes in Resource Groups. The group is named "HR-Department". The association uses the `AWS-UpdateSSMAgent` document to update SSM Agent on the targeted nodes at 2:00 UTC every Sunday morning. This association runs simultaneously on 10 nodes maximum at any given time. Also, this association stops running on more nodes for a particular execution interval if the error count exceeds 5. For compliance reporting, this association is assigned a severity level of Medium. This association runs at the specified cron schedule. It doesn't run immediately after the association is created.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
     --association-name Update_SSM_Agent_Linux \
     --targets Key=resource-groups:Name,Values=HR-Department \
     --name AWS-UpdateSSMAgent  \
     --association-dispatch-assume-role arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole \
     --compliance-severity "MEDIUM" \
     --schedule-expression "cron(0 2 ? * SUN *)" \
     --max-errors "5" \
     --max-concurrency "10" \
     --apply-only-at-cron-interval
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association ^
     --association-name Update_SSM_Agent_Linux ^
     --targets Key=resource-groups:Name,Values=HR-Department ^
     --name AWS-UpdateSSMAgent  ^
     --association-dispatch-assume-role arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole ^
     --compliance-severity "MEDIUM" ^
     --schedule-expression "cron(0 2 ? * SUN *)" ^
     --max-errors "5" ^
     --max-concurrency "10" ^
     --apply-only-at-cron-interval
   ```

------
#### [ PowerShell ]

   ```
   New-SSMAssociation `
     -AssociationName Update_SSM_Agent_Linux `
     -Name AWS-UpdateSSMAgent `
     -AssociationDispatchAssumeRole "arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole" `
     -Target @{
         "Key"="resource-groups:Name"
         "Values"="HR-Department"
       } `
     -ScheduleExpression "cron(0 2 ? * SUN *)" `
     -MaxConcurrency 10 `
     -MaxError 5 `
     -ComplianceSeverity MEDIUM `
     -ApplyOnlyAtCronInterval
   ```

------

   The following example creates an association that targets a specific node by its node ID. The association uses the `AWS-UpdateSSMAgent` document to update SSM Agent on the targeted node, but only when the change calendar is open. The association checks the calendar state each time it runs. If the calendar is closed when a scheduled run comes due, that run is skipped; if the calendar is open, the association runs as scheduled.
**Note**  
If you add new nodes to the tags or resource groups that an association acts on when the change calendar is closed, the association is applied to those nodes once the change calendar opens.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
     --association-name CalendarAssociation \
     --targets "Key=instanceids,Values=i-0cb2b964d3e14fd9f" \
     --name AWS-UpdateSSMAgent  \
     --association-dispatch-assume-role arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole \
     --calendar-names "arn:aws:ssm:us-east-1:123456789012:document/testCalendar1" \
      --schedule-expression "rate(1 day)"
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association ^
     --association-name CalendarAssociation ^
     --targets "Key=instanceids,Values=i-0cb2b964d3e14fd9f" ^
     --name AWS-UpdateSSMAgent  ^
     --association-dispatch-assume-role arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole ^
     --calendar-names "arn:aws:ssm:us-east-1:123456789012:document/testCalendar1" ^
      --schedule-expression "rate(1 day)"
   ```

------
#### [ PowerShell ]

   ```
   New-SSMAssociation `
     -AssociationName CalendarAssociation `
     -Target @{
          "Key"="InstanceIds"
         "Values"="i-0cb2b964d3e14fd9f"
       } `
     -Name AWS-UpdateSSMAgent `
     -AssociationDispatchAssumeRole "arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole" `
     -CalendarNames "arn:aws:ssm:us-east-1:123456789012:document/testCalendar1" `
      -ScheduleExpression "rate(1 day)"
   ```

------

   The following example creates an association that targets a specific node by its node ID. The association uses the `AWS-UpdateSSMAgent` document to update SSM Agent on the targeted node at 2:00 AM every Sunday, and it runs only at the specified cron schedule when the change calendars are open. When the association is created, it checks the calendar state; if a calendar is closed, the association isn't applied. When the scheduled interval starts at 2:00 AM on Sunday, the association again checks whether the calendars are open and, if so, runs as scheduled.
**Note**  
If you add new nodes to the tags or resource groups that an association acts on when the change calendar is closed, the association is applied to those nodes once the change calendar opens.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
     --association-name MultiCalendarAssociation \
     --targets "Key=instanceids,Values=i-0cb2b964d3e14fd9f" \
     --name AWS-UpdateSSMAgent  \
     --association-dispatch-assume-role arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole \
     --calendar-names "arn:aws:ssm:us-east-1:123456789012:document/testCalendar1" "arn:aws:ssm:us-east-2:123456789012:document/testCalendar2" \
     --schedule-expression "cron(0 2 ? * SUN *)"
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association ^
     --association-name MultiCalendarAssociation ^
     --targets "Key=instanceids,Values=i-0cb2b964d3e14fd9f" ^
     --name AWS-UpdateSSMAgent  ^
     --association-dispatch-assume-role arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole ^
     --calendar-names "arn:aws:ssm:us-east-1:123456789012:document/testCalendar1" "arn:aws:ssm:us-east-2:123456789012:document/testCalendar2" ^
     --schedule-expression "cron(0 2 ? * SUN *)"
   ```

------
#### [ PowerShell ]

   ```
   New-SSMAssociation `
     -AssociationName MultiCalendarAssociation `
     -Name AWS-UpdateSSMAgent `
     -AssociationDispatchAssumeRole "arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole" `
     -Target @{
          "Key"="InstanceIds"
         "Values"="i-0cb2b964d3e14fd9f"
       } `
     -CalendarNames "arn:aws:ssm:us-east-1:123456789012:document/testCalendar1","arn:aws:ssm:us-east-2:123456789012:document/testCalendar2" `
     -ScheduleExpression "cron(0 2 ? * SUN *)"
   ```

------

**Note**  
If you delete an association, it no longer runs on any of its targets. Also, if you created the association with the `apply-only-at-cron-interval` parameter, you can reset that option by specifying the `no-apply-only-at-cron-interval` parameter when you update the association from the command line. Doing so causes the association to run immediately after the update, and then according to the specified schedule.

# Editing and creating a new version of an association
Editing an association

You can edit a State Manager association to specify a new name, schedule, severity level, targets, or other values. For associations based on SSM Command-type documents, you can also choose to write the output of the command to an Amazon Simple Storage Service (Amazon S3) bucket. After you edit an association, State Manager creates a new version. You can view different versions after editing, as described in the following procedures. 

**Note**  
In order for associations that are created with Automation runbooks to be applied when new target nodes are detected, certain conditions must be met. For information, see [About target updates with Automation runbooks](state-manager-about.md#runbook-target-updates).

The following procedures describe how to edit and create a new version of an association using the Systems Manager console, AWS Command Line Interface (AWS CLI), and AWS Tools for PowerShell (Tools for PowerShell). 

**Important**  
State Manager doesn't support running associations that use a new version of a document if that document is shared from another account. State Manager always runs the `default` version of a document shared from another account, even if the Systems Manager console shows that a new version was processed. If you want to run an association using a new version of a document shared from another account, you must set the document version to `default`.

## Edit an association (console)


The following procedure describes how to use the Systems Manager console to edit and create a new version of an association.

**Note**  
For associations that use SSM Command documents, not Automation runbooks, this procedure requires that you have write access to an existing Amazon S3 bucket. If you haven't used Amazon S3 before, be aware that you will incur charges for using Amazon S3. For information about how to create a bucket, see [Create a Bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/CreatingABucket.html).

**To edit a State Manager association**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **State Manager**.

1. Choose an existing association, and then choose **Edit**.

1. Reconfigure the association to meet your current requirements. 

   For information about association options with `Command` and `Policy` documents, see [Creating associations](state-manager-associations-creating.md). For information about association options with Automation runbooks, see [Scheduling automations with State Manager associations](scheduling-automations-state-manager-associations.md).

1. Choose **Save Changes**. 

1. (Optional) To view association information, in the **Associations** page, choose the name of the association you edited, and then choose the **Versions** tab. The system lists each version of the association you created and edited.

1. (Optional) To view output for associations based on SSM `Command` documents, do the following:

   1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

   1. Choose the name of the Amazon S3 bucket you specified for storing command output, and then choose the folder named with the ID of the node that ran the association. (If you chose to store output in a folder in the bucket, open it first.)

   1. Drill down several levels, through the `awsrunPowerShell` folder, to the `stdout` file.

   1. Choose **Open** or **Download** to view the host name.

## Edit an association (command line)


The following procedure describes how to use the AWS CLI (on Linux or Windows Server) or AWS Tools for PowerShell to edit and create a new version of an association.

**To edit a State Manager association**

1. Install and configure the AWS CLI or the AWS Tools for PowerShell, if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [Installing the AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html).

1. Use the following format to create a command to edit and create a new version of an existing State Manager association. Replace each *example resource placeholder* with your own information.
**Important**  
When you call [update-association](https://docs.aws.amazon.com/cli/latest/reference/ssm/update-association.html), the system drops all optional parameters from the request and overwrites the association with null values for those parameters. This is by design. You must specify all optional parameters in the call, even those you aren't changing. This includes the `--name` parameter. Before calling this action, we recommend that you call the [describe-association](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-association.html) operation and make a note of all optional parameters required for your `update-association` call.
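The preserve-then-update pattern can be sketched locally: read the current association description first, carry its optional fields into the update request, then overlay only the fields you want to change. The following Python sketch illustrates the idea with plain dictionaries shaped like the CLI output shown later in this section. The `build_update_request` helper and its field list are illustrative only; they aren't part of any AWS SDK, so consult the UpdateAssociation API reference for the authoritative parameter set.

```python
# Illustrative helper: carry optional fields from a describe-association
# result into an update request so they aren't reset to null.
# Partial, hypothetical field mapping for illustration.
CARRY_OVER_FIELDS = [
    "Name",
    "AssociationName",
    "ScheduleExpression",
    "Parameters",
    "Targets",
    "ComplianceSeverity",
    "MaxErrors",
    "MaxConcurrency",
]

def build_update_request(description, **changes):
    """Start from the current description, then overlay only the changes."""
    request = {"AssociationId": description["AssociationId"]}
    for field in CARRY_OVER_FIELDS:
        if field in description:
            request[field] = description[field]
    request.update(changes)
    return request

# Example: rename the association without losing its schedule.
# (The "1" suffix on the old name is a hypothetical starting value.)
current = {
    "AssociationId": "b85ccafe-9f02-4812-9b81-01234EXAMPLE",
    "Name": "AWS-RunPowerShellScript",
    "AssociationName": "TestHostnameAssociation1",
    "ScheduleExpression": "cron(0 */1 * * ? *)",
}
request = build_update_request(current, AssociationName="TestHostnameAssociation2")
```

Built this way, `request` still carries `ScheduleExpression`, so the update doesn't silently drop the schedule even though only the name changed.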

------
#### [ Linux & macOS ]

   ```
   aws ssm update-association \
       --name document_name \
       --document-version version_of_document_applied \
       --instance-id instances_to_apply_association_on \
       --parameters (if any) \
       --targets target_options \
       --association-dispatch-assume-role arn_of_role_to_be_used_when_dispatching_configurations \
       --schedule-expression "cron_or_rate_expression" \
       --schedule-offset "number_between_1_and_6" \
       --output-location s3_bucket_to_store_output_details \
       --association-name association_name \
       --max-errors a_number_of_errors_or_a_percentage_of_target_set \
       --max-concurrency a_number_of_instances_or_a_percentage_of_target_set \
       --compliance-severity severity_level \
       --calendar-names change_calendar_names \
       --target-locations aws_region_or_account
   ```

------
#### [ Windows ]

   ```
   aws ssm update-association ^
       --name document_name ^
       --document-version version_of_document_applied ^
       --instance-id instances_to_apply_association_on ^
       --parameters (if any) ^
       --targets target_options ^
       --association-dispatch-assume-role arn_of_role_to_be_used_when_dispatching_configurations ^
       --schedule-expression "cron_or_rate_expression" ^
       --schedule-offset "number_between_1_and_6" ^
       --output-location s3_bucket_to_store_output_details ^
       --association-name association_name ^
       --max-errors a_number_of_errors_or_a_percentage_of_target_set ^
       --max-concurrency a_number_of_instances_or_a_percentage_of_target_set ^
       --compliance-severity severity_level ^
       --calendar-names change_calendar_names ^
       --target-locations aws_region_or_account
   ```

------
#### [ PowerShell ]

   ```
   Update-SSMAssociation `
       -Name document_name `
       -DocumentVersion version_of_document_applied `
       -InstanceId instances_to_apply_association_on `
       -Parameters (if any) `
       -Target target_options `
       -AssociationDispatchAssumeRole arn_of_role_to_be_used_when_dispatching_configurations `
       -ScheduleExpression "cron_or_rate_expression" `
       -ScheduleOffset "number_between_1_and_6" `
       -OutputLocation s3_bucket_to_store_output_details `
       -AssociationName association_name `
        -MaxError a_number_of_errors_or_a_percentage_of_target_set `
       -MaxConcurrency a_number_of_instances_or_a_percentage_of_target_set `
       -ComplianceSeverity severity_level `
       -CalendarNames change_calendar_names `
       -TargetLocations aws_region_or_account
   ```

------

   The following example updates an existing association to change the name to `TestHostnameAssociation2`. The new association version runs every hour and writes the output of commands to the specified Amazon S3 bucket.

------
#### [ Linux & macOS ]

   ```
   aws ssm update-association \
     --association-id 8dfe3659-4309-493a-8755-01234EXAMPLE \
     --association-name TestHostnameAssociation2 \
     --parameters commands="echo Association" \
     --association-dispatch-assume-role arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole \
     --output-location S3Location='{OutputS3Region=us-east-1,OutputS3BucketName=amzn-s3-demo-bucket,OutputS3KeyPrefix=logs}' \
     --schedule-expression "cron(0 */1 * * ? *)"
   ```

------
#### [ Windows ]

   ```
   aws ssm update-association ^
     --association-id 8dfe3659-4309-493a-8755-01234EXAMPLE ^
     --association-name TestHostnameAssociation2 ^
     --parameters commands="echo Association" ^
     --association-dispatch-assume-role arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole ^
     --output-location S3Location='{OutputS3Region=us-east-1,OutputS3BucketName=amzn-s3-demo-bucket,OutputS3KeyPrefix=logs}' ^
     --schedule-expression "cron(0 */1 * * ? *)"
   ```

------
#### [ PowerShell ]

   ```
   Update-SSMAssociation `
     -AssociationId b85ccafe-9f02-4812-9b81-01234EXAMPLE `
     -AssociationName TestHostnameAssociation2 `
     -Parameter @{"commands"="echo Association"} `
     -AssociationDispatchAssumeRole "arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole" `
     -S3Location_OutputS3BucketName amzn-s3-demo-bucket `
     -S3Location_OutputS3KeyPrefix logs `
     -S3Location_OutputS3Region us-east-1 `
     -ScheduleExpression "cron(0 */1 * * ? *)"
   ```

------

   The following example updates an existing association to change the name to `CalendarAssociation`. The new association runs when the calendar is open and writes command output to the specified Amazon S3 bucket. 

------
#### [ Linux & macOS ]

   ```
   aws ssm update-association \
     --association-id 8dfe3659-4309-493a-8755-01234EXAMPLE \
     --association-name CalendarAssociation \
     --parameters commands="echo Association" \
     --output-location S3Location='{OutputS3Region=us-east-1,OutputS3BucketName=amzn-s3-demo-bucket,OutputS3KeyPrefix=logs}' \
     --calendar-names "arn:aws:ssm:us-east-1:123456789012:document/testCalendar2"
   ```

------
#### [ Windows ]

   ```
   aws ssm update-association ^
     --association-id 8dfe3659-4309-493a-8755-01234EXAMPLE ^
     --association-name CalendarAssociation ^
     --parameters commands="echo Association" ^
     --output-location S3Location='{OutputS3Region=us-east-1,OutputS3BucketName=amzn-s3-demo-bucket,OutputS3KeyPrefix=logs}' ^
     --calendar-names "arn:aws:ssm:us-east-1:123456789012:document/testCalendar2"
   ```

------
#### [ PowerShell ]

   ```
   Update-SSMAssociation `
     -AssociationId b85ccafe-9f02-4812-9b81-01234EXAMPLE `
     -AssociationName CalendarAssociation `
     -Parameter @{"commands"="echo Association"} `
     -S3Location_OutputS3BucketName amzn-s3-demo-bucket `
     -CalendarNames "arn:aws:ssm:us-east-1:123456789012:document/testCalendar2"
   ```

------

   The following example updates an existing association to change the name to `MultiCalendarAssociation`. The new association runs when the calendars are open and writes command output to the specified Amazon S3 bucket. 

------
#### [ Linux & macOS ]

   ```
   aws ssm update-association \
     --association-id 8dfe3659-4309-493a-8755-01234EXAMPLE \
     --association-name MultiCalendarAssociation \
     --parameters commands="echo Association" \
     --output-location S3Location='{OutputS3Region=us-east-1,OutputS3BucketName=amzn-s3-demo-bucket,OutputS3KeyPrefix=logs}' \
     --calendar-names "arn:aws:ssm:us-east-1:123456789012:document/testCalendar1" "arn:aws:ssm:us-east-2:123456789012:document/testCalendar2"
   ```

------
#### [ Windows ]

   ```
   aws ssm update-association ^
     --association-id 8dfe3659-4309-493a-8755-01234EXAMPLE ^
     --association-name MultiCalendarAssociation ^
     --parameters commands="echo Association" ^
     --output-location S3Location='{OutputS3Region=us-east-1,OutputS3BucketName=amzn-s3-demo-bucket,OutputS3KeyPrefix=logs}' ^
     --calendar-names "arn:aws:ssm:us-east-1:123456789012:document/testCalendar1" "arn:aws:ssm:us-east-2:123456789012:document/testCalendar2"
   ```

------
#### [ PowerShell ]

   ```
   Update-SSMAssociation `
     -AssociationId b85ccafe-9f02-4812-9b81-01234EXAMPLE `
     -AssociationName MultiCalendarAssociation `
     -Parameter @{"commands"="echo Association"} `
     -S3Location_OutputS3BucketName amzn-s3-demo-bucket `
      -CalendarNames "arn:aws:ssm:us-east-1:123456789012:document/testCalendar1","arn:aws:ssm:us-east-2:123456789012:document/testCalendar2" `
   ```

------

1. To view the new version of the association, run the following command.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-association \
     --association-id b85ccafe-9f02-4812-9b81-01234EXAMPLE
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-association ^
     --association-id b85ccafe-9f02-4812-9b81-01234EXAMPLE
   ```

------
#### [ PowerShell ]

   ```
   Get-SSMAssociation `
     -AssociationId b85ccafe-9f02-4812-9b81-01234EXAMPLE | Select-Object *
   ```

------

   The system returns information like the following.

------
#### [ Linux & macOS ]

   ```
   {
       "AssociationDescription": {
           "ScheduleExpression": "cron(0 */1 * * ? *)",
           "OutputLocation": {
               "S3Location": {
                   "OutputS3KeyPrefix": "logs",
                   "OutputS3BucketName": "amzn-s3-demo-bucket",
                   "OutputS3Region": "us-east-1"
               }
           },
           "Name": "AWS-RunPowerShellScript",
           "Parameters": {
               "commands": [
                   "echo Association"
               ]
           },
           "LastExecutionDate": 1559316400.338,
           "Overview": {
               "Status": "Success",
               "DetailedStatus": "Success",
               "AssociationStatusAggregatedCount": {}
           },
           "AssociationId": "b85ccafe-9f02-4812-9b81-01234EXAMPLE",
           "DocumentVersion": "$DEFAULT",
           "LastSuccessfulExecutionDate": 1559316400.338,
           "LastUpdateAssociationDate": 1559316389.753,
           "Date": 1559314038.532,
           "AssociationVersion": "2",
           "AssociationName": "TestHostnameAssociation2",
           "Targets": [
               {
                   "Values": [
                       "Windows"
                   ],
                   "Key": "tag:Environment"
               }
           ]
       }
   }
   ```

------
#### [ Windows ]

   ```
   {
       "AssociationDescription": {
           "ScheduleExpression": "cron(0 */1 * * ? *)",
           "OutputLocation": {
               "S3Location": {
                   "OutputS3KeyPrefix": "logs",
                   "OutputS3BucketName": "amzn-s3-demo-bucket",
                   "OutputS3Region": "us-east-1"
               }
           },
           "Name": "AWS-RunPowerShellScript",
           "Parameters": {
               "commands": [
                   "echo Association"
               ]
           },
           "LastExecutionDate": 1559316400.338,
           "Overview": {
               "Status": "Success",
               "DetailedStatus": "Success",
               "AssociationStatusAggregatedCount": {}
           },
           "AssociationId": "b85ccafe-9f02-4812-9b81-01234EXAMPLE",
           "DocumentVersion": "$DEFAULT",
           "LastSuccessfulExecutionDate": 1559316400.338,
           "LastUpdateAssociationDate": 1559316389.753,
           "Date": 1559314038.532,
           "AssociationVersion": "2",
           "AssociationName": "TestHostnameAssociation2",
           "Targets": [
               {
                   "Values": [
                       "Windows"
                   ],
                   "Key": "tag:Environment"
               }
           ]
       }
   }
   ```

------
#### [ PowerShell ]

   ```
   AssociationId                 : b85ccafe-9f02-4812-9b81-01234EXAMPLE
   AssociationName               : TestHostnameAssociation2
   AssociationVersion            : 2
   AutomationTargetParameterName : 
   ComplianceSeverity            : 
   Date                          : 5/31/2019 2:47:18 PM
   DocumentVersion               : $DEFAULT
   InstanceId                    : 
   LastExecutionDate             : 5/31/2019 3:26:40 PM
   LastSuccessfulExecutionDate   : 5/31/2019 3:26:40 PM
   LastUpdateAssociationDate     : 5/31/2019 3:26:29 PM
   MaxConcurrency                : 
   MaxErrors                     : 
   Name                          : AWS-RunPowerShellScript
   OutputLocation                : Amazon.SimpleSystemsManagement.Model.InstanceAssociationOutputLocation
   Overview                      : Amazon.SimpleSystemsManagement.Model.AssociationOverview
   Parameters                    : {[commands, Amazon.Runtime.Internal.Util.AlwaysSendList`1[System.String]]}
   ScheduleExpression            : cron(0 */1 * * ? *)
   Status                        : 
   Targets                       : {tag:Environment}
   ```

------

# Deleting associations


Use the following procedure to delete an association by using the AWS Systems Manager console.

**To delete an association**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **State Manager**.

1. Select an association and then choose **Delete**.

You can delete multiple associations in a single operation by running an automation from the AWS Systems Manager console. When you select multiple associations for deletion, State Manager launches the automation runbook start page with the association IDs entered as input parameter values. 

**To delete multiple associations in a single operation**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **State Manager**.

1. Select each association that you want to delete and then choose **Delete**.

1. (Optional) In the **Additional input parameters** area, select the Amazon Resource Name (ARN) for the *assume role* that you want the automation to use while running. To create a new assume role, choose **Create**.

1. Choose **Submit**.

# Running Auto Scaling groups with associations


The best practice when using associations with Auto Scaling groups is to target by tags. Targeting individual nodes instead of tags might cause you to reach the association limit. 

If all nodes are tagged with the same key and value, you only need one association to run your Auto Scaling group. The following procedure describes how to create such an association.

**To create an association that runs Auto Scaling groups**

1. Ensure all nodes in the Auto Scaling group are tagged with the same key and value. For instructions on tagging nodes, see [Tagging Auto Scaling groups and instances](https://docs.aws.amazon.com/autoscaling/ec2/userguide/autoscaling-tagging.html) in the *Amazon EC2 Auto Scaling User Guide*. 

1. Create an association by using the procedure in [Working with associations in Systems Manager](state-manager-associations.md). 

   If you're working in the console, choose **Specify instance tags** in the **Targets** field. For **Instance tags**, enter the **Tag** key and value for your Auto Scaling group.

   If you're using the AWS Command Line Interface (AWS CLI), specify `--targets Key=tag:tag-key,Values=tag-value` where the key and value match what you tagged your nodes with. 
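As a sanity check, the tag-target shape described above can also be built programmatically, for example when calling the API through an SDK. The following Python sketch shows the equivalent `Targets` structure in JSON form; the `tag_target` helper and the `Environment=Production` tag are hypothetical, for illustration only.

```python
def tag_target(tag_key, tag_value):
    """Build a Targets entry equivalent to the CLI form
    --targets Key=tag:tag-key,Values=tag-value."""
    return [{"Key": f"tag:{tag_key}", "Values": [tag_value]}]

# Example: target every node tagged Environment=Production
# (hypothetical tag key and value).
targets = tag_target("Environment", "Production")
```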

# Viewing association histories


You can view all executions for a specific association ID by using the [DescribeAssociationExecutions](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribeAssociationExecutions.html) API operation. Use this operation to see the status, detailed status, results, last execution time, and more information for a State Manager association. State Manager is a tool in AWS Systems Manager. This API operation also includes filters to help you locate executions according to the criteria you specify. For example, you can specify an exact date and time, and use a `GREATER_THAN` filter to view only executions processed after the specified date and time.

If, for example, an association execution failed, you can drill down into the details of a specific execution by using the [DescribeAssociationExecutionTargets](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribeAssociationExecutionTargets.html) API operation. This operation shows you the resources, such as node IDs, where the association ran and the various association statuses. You can then see which resource or node failed to run an association. With the resource ID you can then view the command execution details to see which step in a command failed.

The examples in this section also include information about how to use the [StartAssociationsOnce](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_StartAssociationsOnce.html) API operation to run an association once at the time of creation. You can use this API operation when you investigate failed association executions. If you see that an association failed, you can make a change on the resource, and then immediately run the association to see if the change on the resource allows the association to run successfully.
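To illustrate the triage flow described above, the following Python sketch filters a `DescribeAssociationExecutions` response, shaped like the sample output later in this section, down to the execution IDs worth inspecting with `DescribeAssociationExecutionTargets`. The `failed_execution_ids` helper is illustrative and isn't part of any AWS SDK; the `Failed` status in the sample is a hypothetical value for demonstration.

```python
def failed_execution_ids(response):
    """Return the ExecutionId of every execution that didn't succeed."""
    return [
        execution["ExecutionId"]
        for execution in response.get("AssociationExecutions", [])
        if execution.get("Status") != "Success"
    ]

# Sample response with one failed execution; the IDs reuse the
# placeholder values from this section's sample output.
sample = {
    "AssociationExecutions": [
        {"ExecutionId": "76a5a04f-caf6-490c-b448-92c02EXAMPLE", "Status": "Success"},
        {"ExecutionId": "791b72e0-f0da-4021-8b35-f95dfEXAMPLE", "Status": "Failed"},
    ]
}
ids = failed_execution_ids(sample)
```

Each ID returned this way can then be passed to `DescribeAssociationExecutionTargets` to see which resources failed and why.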

**Note**  
API operations that are initiated by the SSM document during an association run are not logged in AWS CloudTrail.

## Viewing association histories (console)


Use the following procedure to view the execution history for a specific association ID and then view execution details for one or more resources. 

**To view execution history for a specific association ID**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. Choose **State Manager**.

1. In the **Association id** field, choose an association for which you want to view the history.

1. Choose the **View details** button.

1. Choose the **Execution history** tab.

1. Choose an association for which you want to view resource-level execution details. For example, choose an association that shows a status of **Failed**. You can then view the execution details for the nodes that failed to run the association.

   Use the search box filters to locate the execution for which you want to view details.  
![\[Filtering the list of State Manager association executions.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/sysman-state-executions-filter.png)

1. Choose an execution ID. The **Association execution targets** page opens. This page shows all the resources that ran the association.

1. Choose a resource ID to view specific information about that resource.

   Use the search box filters to locate the resource for which you want to view details.  
![\[Filtering the list of State Manager association executions targets.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/sysman-state-executions-targets-filter.png)

1. If you're investigating an association that failed to run, you can use the **Apply association now** button to run the association immediately. After you make changes on the resource where the association failed to run, choose the **Association ID** link in the navigation breadcrumb.

1. Choose the **Apply association now** button. After the execution is complete, verify that the association execution succeeded.

## Viewing association histories (command line)


The following procedure describes how to use the AWS Command Line Interface (AWS CLI) (on Linux or Windows Server) or AWS Tools for PowerShell to view the execution history for a specific association ID. Following this, the procedure describes how to view execution details for one or more resources.

**To view execution history for a specific association ID**

1. Install and configure the AWS CLI or the AWS Tools for PowerShell, if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [Installing the AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html).

1. Run the following command to view a list of executions for a specific association ID.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-association-executions \
     --association-id ID \
     --filters Key=CreatedTime,Value="2018-04-10T19:15:38.372Z",Type=GREATER_THAN
   ```

**Note**  
This command includes a filter to limit the results to only those executions that occurred after a specific date and time. If you want to view all executions for a specific association ID, remove the `--filters` parameter and its `Key=CreatedTime,Value="2018-04-10T19:15:38.372Z",Type=GREATER_THAN` value.

------
#### [ Windows ]

   ```
   aws ssm describe-association-executions ^
     --association-id ID ^
     --filters Key=CreatedTime,Value="2018-04-10T19:15:38.372Z",Type=GREATER_THAN
   ```

**Note**  
This command includes a filter to limit the results to only those executions that occurred after a specific date and time. If you want to view all executions for a specific association ID, remove the `--filters` option and its `Key=CreatedTime,Value="2018-04-10T19:15:38.372Z",Type=GREATER_THAN` value.

------
#### [ PowerShell ]

   ```
   Get-SSMAssociationExecution `
     -AssociationId ID `
     -Filter @{"Key"="CreatedTime";"Value"="2019-06-01T19:15:38.372Z";"Type"="GREATER_THAN"}
   ```

**Note**  
This command includes a filter to limit the results to only those executions that occurred after a specific date and time. If you want to view all executions for a specific association ID, remove the `-Filter` parameter and its `@{"Key"="CreatedTime";"Value"="2019-06-01T19:15:38.372Z";"Type"="GREATER_THAN"}` value.

------

   The system returns information like the following.

------
#### [ Linux & macOS ]

   ```
   {
      "AssociationExecutions":[
         {
            "Status":"Success",
            "DetailedStatus":"Success",
            "AssociationId":"c336d2ab-09de-44ba-8f6a-6136cEXAMPLE",
            "ExecutionId":"76a5a04f-caf6-490c-b448-92c02EXAMPLE",
            "CreatedTime":1523986028.219,
            "AssociationVersion":"1"
         },
         {
            "Status":"Success",
            "DetailedStatus":"Success",
            "AssociationId":"c336d2ab-09de-44ba-8f6a-6136cEXAMPLE",
            "ExecutionId":"791b72e0-f0da-4021-8b35-f95dfEXAMPLE",
            "CreatedTime":1523984226.074,
            "AssociationVersion":"1"
         },
         {
            "Status":"Success",
            "DetailedStatus":"Success",
            "AssociationId":"c336d2ab-09de-44ba-8f6a-6136cEXAMPLE",
            "ExecutionId":"ecec60fa-6bb0-4d26-98c7-140308EXAMPLE",
            "CreatedTime":1523982404.013,
            "AssociationVersion":"1"
         }
      ]
   }
   ```

------
#### [ Windows ]

   ```
   {
      "AssociationExecutions":[
         {
            "Status":"Success",
            "DetailedStatus":"Success",
            "AssociationId":"c336d2ab-09de-44ba-8f6a-6136cEXAMPLE",
            "ExecutionId":"76a5a04f-caf6-490c-b448-92c02EXAMPLE",
            "CreatedTime":1523986028.219,
            "AssociationVersion":"1"
         },
         {
            "Status":"Success",
            "DetailedStatus":"Success",
            "AssociationId":"c336d2ab-09de-44ba-8f6a-6136cEXAMPLE",
            "ExecutionId":"791b72e0-f0da-4021-8b35-f95dfEXAMPLE",
            "CreatedTime":1523984226.074,
            "AssociationVersion":"1"
         },
         {
            "Status":"Success",
            "DetailedStatus":"Success",
            "AssociationId":"c336d2ab-09de-44ba-8f6a-6136cEXAMPLE",
            "ExecutionId":"ecec60fa-6bb0-4d26-98c7-140308EXAMPLE",
            "CreatedTime":1523982404.013,
            "AssociationVersion":"1"
         }
      ]
   }
   ```

------
#### [ PowerShell ]

   ```
   AssociationId         : c336d2ab-09de-44ba-8f6a-6136cEXAMPLE
   AssociationVersion    : 1
   CreatedTime           : 8/18/2019 2:00:50 AM
   DetailedStatus        : Success
   ExecutionId           : 76a5a04f-caf6-490c-b448-92c02EXAMPLE
   LastExecutionDate     : 1/1/0001 12:00:00 AM
   ResourceCountByStatus : {Success=1}
   Status                : Success
   
   AssociationId         : c336d2ab-09de-44ba-8f6a-6136cEXAMPLE
   AssociationVersion    : 1
   CreatedTime           : 8/11/2019 2:00:54 AM
   DetailedStatus        : Success
   ExecutionId           : 791b72e0-f0da-4021-8b35-f95dfEXAMPLE
   LastExecutionDate     : 1/1/0001 12:00:00 AM
   ResourceCountByStatus : {Success=1}
   Status                : Success
   
   AssociationId         : c336d2ab-09de-44ba-8f6a-6136cEXAMPLE
   AssociationVersion    : 1
   CreatedTime           : 8/4/2019 2:01:00 AM
   DetailedStatus        : Success
   ExecutionId           : ecec60fa-6bb0-4d26-98c7-140308EXAMPLE
   LastExecutionDate     : 1/1/0001 12:00:00 AM
   ResourceCountByStatus : {Success=1}
   Status                : Success
   ```

------

   You can limit the results by using one or more filters. The following example returns all associations that were run before a specific date and time. 

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-association-executions \
     --association-id ID \
     --filters Key=CreatedTime,Value="2018-04-10T19:15:38.372Z",Type=LESS_THAN
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-association-executions ^
     --association-id ID ^
     --filters Key=CreatedTime,Value="2018-04-10T19:15:38.372Z",Type=LESS_THAN
   ```

------
#### [ PowerShell ]

   ```
   Get-SSMAssociationExecution `
     -AssociationId 14bea65d-5ccc-462d-a2f3-e99c8EXAMPLE `
     -Filter @{"Key"="CreatedTime";"Value"="2019-06-01T19:15:38.372Z";"Type"="LESS_THAN"}
   ```

------

   The following returns all associations that were *successfully* run after a specific date and time.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-association-executions \
     --association-id ID \
     --filters Key=CreatedTime,Value="2018-04-10T19:15:38.372Z",Type=GREATER_THAN Key=Status,Value=Success,Type=EQUAL
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-association-executions ^
     --association-id ID ^
     --filters Key=CreatedTime,Value="2018-04-10T19:15:38.372Z",Type=GREATER_THAN Key=Status,Value=Success,Type=EQUAL
   ```

------
#### [ PowerShell ]

   ```
   Get-SSMAssociationExecution `
     -AssociationId 14bea65d-5ccc-462d-a2f3-e99c8EXAMPLE `
     -Filter @{
         "Key"="CreatedTime";
         "Value"="2019-06-01T19:15:38.372Z";
         "Type"="GREATER_THAN"
       },
       @{
         "Key"="Status";
         "Value"="Success";
         "Type"="EQUAL"
       }
   ```

------

1. Run the following command to view all targets where the specific execution ran.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-association-execution-targets \
     --association-id ID \
     --execution-id ID
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-association-execution-targets ^
     --association-id ID ^
     --execution-id ID
   ```

------
#### [ PowerShell ]

   ```
   Get-SSMAssociationExecutionTarget `
     -AssociationId 14bea65d-5ccc-462d-a2f3-e99c8EXAMPLE `
     -ExecutionId 76a5a04f-caf6-490c-b448-92c02EXAMPLE
   ```

------

   You can limit the results by using one or more filters. The following example returns information about all targets where the specific association failed to run.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-association-execution-targets \
     --association-id ID \
     --execution-id ID \
     --filters Key=Status,Value="Failed"
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-association-execution-targets ^
     --association-id ID ^
     --execution-id ID ^
     --filters Key=Status,Value="Failed"
   ```

------
#### [ PowerShell ]

   ```
   Get-SSMAssociationExecutionTarget `
     -AssociationId 14bea65d-5ccc-462d-a2f3-e99c8EXAMPLE `
     -ExecutionId 76a5a04f-caf6-490c-b448-92c02EXAMPLE `
     -Filter @{
         "Key"="Status";
         "Value"="Failed"
       }
   ```

------

   The following example returns information about a specific managed node where an association failed to run.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-association-execution-targets \
     --association-id ID \
     --execution-id ID \
     --filters Key=Status,Value=Failed Key=ResourceId,Value="i-02573cafcfEXAMPLE" Key=ResourceType,Value=ManagedInstance
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-association-execution-targets ^
     --association-id ID ^
     --execution-id ID ^
     --filters Key=Status,Value=Failed Key=ResourceId,Value="i-02573cafcfEXAMPLE" Key=ResourceType,Value=ManagedInstance
   ```

------
#### [ PowerShell ]

   ```
   Get-SSMAssociationExecutionTarget `
     -AssociationId 14bea65d-5ccc-462d-a2f3-e99c8EXAMPLE `
     -ExecutionId 76a5a04f-caf6-490c-b448-92c02EXAMPLE `
     -Filter @{
         "Key"="Status";
         "Value"="Success"
       },
       @{
         "Key"="ResourceId";
         "Value"="i-02573cafcfEXAMPLE"
       },
       @{
         "Key"="ResourceType";
         "Value"="ManagedInstance"
       }
   ```

------

1. If you're investigating an association that failed to run, you can use the [StartAssociationsOnce](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_StartAssociationsOnce.html) API operation to run the association immediately and only one time. After you make changes on the resource where the association failed to run, run the following command.

------
#### [ Linux & macOS ]

   ```
   aws ssm start-associations-once \
     --association-id ID
   ```

------
#### [ Windows ]

   ```
   aws ssm start-associations-once ^
     --association-id ID
   ```

------
#### [ PowerShell ]

   ```
   Start-SSMAssociationsOnce `
     -AssociationId ID
   ```

------

# Working with associations using IAM


State Manager, a tool in AWS Systems Manager, uses [targets](systems-manager-state-manager-targets-and-rate-controls.md#systems-manager-state-manager-targets-and-rate-controls-about-targets) to choose which instances you configure your associations with. Originally, you created an association by specifying a document name (`Name`) and an instance ID (`InstanceId`), which associated a document with an instance or managed node, and associations were identified by these parameters. These parameters are now deprecated, but they're still supported. The resources `instance` and `managed-instance` were added as resources to actions that use `Name` and `InstanceId`.

AWS Identity and Access Management (IAM) policy enforcement behavior depends on the type of resource specified. Resources for State Manager operations are only enforced based on the passed-in request. State Manager doesn't perform a deep check for the properties of resources in your account. A request is only validated against policy resources if the request parameter contains the specified policy resources. For example, if you specify an instance in the resource block, the policy is enforced if the request uses the `InstanceId` parameter. The `Targets` parameter for each resource in the account isn't checked for that `InstanceId`. 

Following are some cases with confusing behavior:
+  [DescribeAssociation](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribeAssociation.html), [DeleteAssociation](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DeleteAssociation.html), and [UpdateAssociation](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_UpdateAssociation.html) use `instance`, `managed-instance`, and `document` resources to specify the deprecated way of referring to associations. This includes all associations created with the deprecated `InstanceId` parameter.
+ [CreateAssociation](https://docs.aws.amazon.com//systems-manager/latest/APIReference/API_CreateAssociation.html), [CreateAssociationBatch](https://docs.aws.amazon.com//systems-manager/latest/APIReference/API_CreateAssociationBatch.html), and [UpdateAssociation](https://docs.aws.amazon.com//systems-manager/latest/APIReference/API_UpdateAssociation.html) use `instance` and `managed-instance` resources to specify the deprecated way of referring to associations. This includes all associations created with the deprecated `InstanceId` parameter. The `document` resource type is part of the deprecated way of referring to associations and is an actual property of an association. This means you can construct IAM policies with `Allow` or `Deny` permissions for both `Create` and `Update` actions based on document name.
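To make the last point concrete, the following is a hedged sketch of a policy that allows creating or updating associations only for a specific document. The account ID and document name (`MyApprovedDocument`) are placeholders, not values from this guide:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:CreateAssociation",
        "ssm:UpdateAssociation"
      ],
      "Resource": "arn:aws:ssm:*:111122223333:document/MyApprovedDocument"
    }
  ]
}
```

Because the `document` resource is an actual property of an association, this policy is enforced by document name for both `Create` and `Update` actions.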

For more information about using IAM policies with Systems Manager, see [Identity and access management for AWS Systems Manager](security-iam.md) or [Actions, resources, and condition keys for AWS Systems Manager](https://docs.aws.amazon.com/service-authorization/latest/reference/list_awssystemsmanager.html) in the *Service Authorization Reference*.

# Creating associations that run MOF files


You can run Managed Object Format (MOF) files to enforce a desired state on Windows Server managed nodes with State Manager, a tool in AWS Systems Manager, by using the `AWS-ApplyDSCMofs` SSM document. The `AWS-ApplyDSCMofs` document has two execution modes. In the first mode, the association scans the managed nodes and reports whether they're in the desired state defined in the specified MOF files. In the second mode, the association runs the MOF files and changes the configuration of your nodes based on the resources and values defined in the MOF files. The `AWS-ApplyDSCMofs` document can download and run MOF configuration files from Amazon Simple Storage Service (Amazon S3), a local share, or a secure website with an HTTPS domain.

State Manager logs and reports the status of each MOF file execution during each association run. State Manager also reports the output of each MOF file execution as a compliance event which you can view on the [AWS Systems Manager Compliance](https://console.aws.amazon.com/systems-manager/compliance) page.

MOF file execution is built on Windows PowerShell Desired State Configuration (PowerShell DSC). PowerShell DSC is a declarative platform used for configuration, deployment, and management of Windows systems. PowerShell DSC allows administrators to describe, in simple text documents called DSC configurations, how they want a server to be configured. A PowerShell DSC configuration is a specialized PowerShell script that states what to do, but not how to do it. Running the configuration produces a MOF file. The MOF file can be applied to one or more servers to achieve the desired configuration for those servers. PowerShell DSC resources do the actual work of enforcing configuration. For more information, see [Windows PowerShell Desired State Configuration Overview](https://download.microsoft.com/download/4/3/1/43113F44-548B-4DEA-B471-0C2C8578FBF8/Quick_Reference_DSC_WS12R2.pdf).

**Topics**
+ [

## Using Amazon S3 to store artifacts
](#systems-manager-state-manager-using-mof-file-S3-storage)
+ [

## Resolving credentials in MOF files
](#systems-manager-state-manager-using-mof-file-credentials)
+ [

## Using tokens in MOF files
](#systems-manager-state-manager-using-mof-file-tokens)
+ [

## Prerequisites for creating associations that run MOF files
](#systems-manager-state-manager-using-mof-file-prereqs)
+ [

## Creating an association that runs MOF files
](#systems-manager-state-manager-using-mof-file-creating)
+ [

## Troubleshooting issues when creating associations that run MOF files
](#systems-manager-state-manager-using-mof-file-troubleshooting)
+ [

## Viewing DSC resource compliance details
](#systems-manager-state-manager-viewing-mof-file-compliance)

## Using Amazon S3 to store artifacts


If you're using Amazon S3 to store PowerShell modules, MOF files, compliance reports, or status reports, then the AWS Identity and Access Management (IAM) role used by AWS Systems Manager SSM Agent must have `GetObject` and `ListBucket` permissions on the bucket. If you don't provide these permissions, the system returns an *Access Denied* error. The following is important information about storing artifacts in Amazon S3.
+ If the bucket is in a different AWS account, create a bucket resource policy that grants the account (or the IAM role) `GetObject` and `ListBucket` permissions.
+ If you want to use custom DSC resources, you can download these resources from an Amazon S3 bucket. You can also install them automatically from the PowerShell gallery. 
+ If you're using Amazon S3 as a module source, upload the module as a Zip file in the following case-sensitive format: *ModuleName*\_*ModuleVersion*.zip. For example: MyModule\_1.0.0.zip.
+ All files must be in the bucket root. Folder structures aren't supported.
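As a minimal sketch of the naming rule above (the helper function is hypothetical, not part of any AWS SDK), the expected object key can be built like this:

```python
def module_zip_key(module_name: str, module_version: str) -> str:
    """Build the case-sensitive object key expected for a PowerShell module
    package: ModuleName_ModuleVersion.zip, stored at the bucket root."""
    return f"{module_name}_{module_version}.zip"

print(module_zip_key("MyModule", "1.0.0"))  # MyModule_1.0.0.zip
```

Note that the key contains no folder prefix, because folder structures aren't supported for module storage.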

## Resolving credentials in MOF files


Credentials are resolved by using [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/) or [AWS Systems Manager Parameter Store](systems-manager-parameter-store.md). This allows you to set up automatic credential rotation. This also allows DSC to automatically propagate credentials to your servers without redeploying MOFs.

To use an AWS Secrets Manager secret in a configuration, create a PSCredential object where the Username is the SecretId or SecretARN of the secret containing the credential. You can specify any value for the password. The value is ignored. Following is an example.

```
Configuration MyConfig
{
   $ss = ConvertTo-SecureString -String 'a_string' -AsPlaintext -Force
   $credential = New-Object PSCredential('a_secret_or_ARN', $ss)

    Node localhost
    {
       File file_name
       {
           DestinationPath = 'C:\MyFile.txt'
           SourcePath = '\\FileServer\Share\MyFile.txt'
           Credential = $credential
       }
    }
}
```

Compile your MOF by using the `PSDscAllowPlainTextPassword` setting in configuration data. This is acceptable because the credential contains only a label, not an actual password.

In Secrets Manager, ensure that the node has `GetSecretValue` access in an IAM managed policy, and optionally in the secret's resource policy if one exists. To work with DSC, the secret must be in the following format.

```
{ 'Username': 'a_name', 'Password': 'a_password' }
```

The secret can have other properties (for example, properties used for rotation), but it must at least have the username and password properties.
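Before storing a secret for DSC, you might sanity-check that it has the required shape. The following is an illustrative sketch; the validation helper is hypothetical and not part of Secrets Manager or any AWS SDK:

```python
import json

REQUIRED_KEYS = {"Username", "Password"}

def is_dsc_compatible(secret_string: str) -> bool:
    """Return True if the secret JSON contains at least the Username and
    Password properties that DSC credential resolution requires."""
    try:
        secret = json.loads(secret_string)
    except ValueError:
        return False
    return isinstance(secret, dict) and REQUIRED_KEYS <= secret.keys()

print(is_dsc_compatible('{"Username": "a_name", "Password": "a_password"}'))  # True
print(is_dsc_compatible('{"Username": "a_name"}'))  # False
```

Extra properties (for example, rotation metadata) don't affect the check, matching the rule that the secret can carry other properties as long as the required two are present.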

We recommend that you use a multi-user rotation method, where you have two different usernames and passwords and a rotation AWS Lambda function that flips between them. This method allows you to have multiple active accounts while eliminating the risk of locking out a user during rotation.

## Using tokens in MOF files


Tokens give you the ability to modify resource property values *after* the MOF has been compiled. This allows you to reuse common MOF files on multiple servers that require similar configurations.

Token substitution only works for Resource Properties of type `String`. However, if your resource has a nested CIM node property, it also resolves tokens from `String` properties in that CIM node. You can't use token substitution for numerals or arrays.

For example, consider a scenario where you're using the xComputerManagement resource and you want to rename the computer using DSC. Normally you would need a dedicated MOF file for that machine. However, with token support, you can create a single MOF file and apply it to all your nodes. In the `ComputerName` property, instead of hardcoding the computer name into the MOF, you can use an Instance Tag type token. The value is resolved during MOF parsing. See the following example.

```
Configuration MyConfig
{
    xComputer Computer
    {
        ComputerName = '{tag:ComputerName}'
    }
}
```

You then set a tag on either the managed node in the Systems Manager console, or an Amazon Elastic Compute Cloud (Amazon EC2) tag in the Amazon EC2 console. When you run the document, the script replaces the `{tag:ComputerName}` token with the value of the instance tag.

You can also combine multiple tags into a single property, as shown in the following example.

```
Configuration MyConfig
{
    File MyFile
    {
        DestinationPath = '{env:TMP}\{tag:ComputerName}'
        Type = 'Directory'
    }
}
```

There are five different types of tokens you can use:
+ **tag**: Amazon EC2 or managed node tags.
+ **tagb64**: The same as `tag`, but the system uses Base64 to decode the value. This allows you to use special characters in tag values.
+ **env**: Resolves environment variables.
+ **ssm**: Parameter Store values. Only `String` and `SecureString` types are supported.
+ **tagssm**: This is the same as tag, but if the tag isn't set on the node, the system tries to resolve the value from a Systems Manager parameter with the same name. This is useful in situations when you want a 'default global value' but you want to be able to override it on a single node (for example, one-box deployments).

Here is a Parameter Store example that uses the `ssm` token type. 

```
File MyFile
{
    DestinationPath = "C:\ProgramData\ConnectionData.txt"
    Content = "{ssm:%servicePath%/ConnectionData}"
}
```
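To make the substitution behavior concrete, the following is a minimal sketch of how `tag` and `env` tokens could be resolved against lookup tables. The resolver is hypothetical; the actual document's parser also handles the `tagb64`, `ssm`, and `tagssm` types and only substitutes into `String` resource properties:

```python
import re

def resolve_tokens(value: str, tags: dict, env: dict) -> str:
    """Replace {tag:Name} and {env:Name} tokens in a string property.
    Tokens with no matching entry are left unchanged."""
    def lookup(match):
        kind, name = match.group(1), match.group(2)
        table = {"tag": tags, "env": env}.get(kind, {})
        return table.get(name, match.group(0))
    return re.sub(r"\{(tag|env):([^}]+)\}", lookup, value)

# Combining multiple token types in one property, as in the example above.
resolved = resolve_tokens(
    "{env:TMP}\\{tag:ComputerName}",
    tags={"ComputerName": "WEB-01"},
    env={"TMP": "C:\\Temp"},
)
print(resolved)  # C:\Temp\WEB-01
```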

Tokens play an important role in reducing redundant code by making MOF files generic and reusable. If you can avoid server-specific MOF files, then there's no need for a MOF building service. A MOF building service increases costs, slows provisioning time, and increases the risk of configuration drift between grouped nodes, because differing module versions might be installed on the build server when their MOFs are compiled.

## Prerequisites for creating associations that run MOF files


Before you create an association that runs MOF files, verify that your managed nodes have the following prerequisites installed:
+ Windows PowerShell version 5.0 or later. For more information, see [Windows PowerShell System Requirements](https://docs.microsoft.com/en-us/powershell/scripting/install/windows-powershell-system-requirements?view=powershell-6) on Microsoft.com.
+ [AWS Tools for Windows PowerShell](https://aws.amazon.com/powershell/) version 3.3.261.0 or later.
+ SSM Agent version 2.2 or later.

## Creating an association that runs MOF files


**To create an association that runs MOF files**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **State Manager**.

1. Choose **State Manager**, and then choose **Create association**.

1. In the **Name** field, specify a name. This is optional, but recommended. A name can help you understand the purpose of the association later. Spaces aren't allowed in the name.

1. In the **Document** list, choose **`AWS-ApplyDSCMofs`**.

1. In the **Parameters** section, specify your choices for the required and optional input parameters.

   1. **Mofs To Apply**: Specify one or more MOF files to run when this association runs. Use commas to separate multiple MOF files. Systems Manager runs the MOF files in the order specified in the comma-separated list.
      + An Amazon S3 bucket name. Bucket names must use lowercase letters. Specify this information by using the following format.

        ```
        s3:amzn-s3-demo-bucket:MOF_file_name.mof
        ```

        If you want to specify an AWS Region, then use the following format.

        ```
        s3:bucket_Region:amzn-s3-demo-bucket:MOF_file_name.mof
        ```
      + A secure website. Specify this information by using the following format.

        ```
        https://domain_name/MOF_file_name.mof
        ```

        Here is an example.

        ```
        https://www.example.com/TestMOF.mof
        ```
      + A file system on a local share. Specify this information by using the following format.

        ```
        \\server_name\shared_folder_name\MOF_file_name.mof
        ```

        Here is an example.

        ```
        \\StateManagerAssociationsBox\MOFs_folder\MyMof.mof
        ```

   1. **Service Path**: (Optional) A service path is either an Amazon S3 bucket prefix where you want to write reports and status information, or a path for Parameter Store parameter-based tags. When resolving parameter-based tags, the system uses `{ssm:%servicePath%/parameter_name}` to inject the servicePath value into the parameter name. For example, if your service path is "WebServers/Production", then the system resolves the parameter as WebServers/Production/*parameter_name*. This is useful when you're running multiple environments in the same account.

   1. **Report Bucket Name**: (Optional) Enter the name of an Amazon S3 bucket where you want to write compliance data. Reports are saved in this bucket in JSON format.
**Note**  
You can prefix the bucket name with the Region where the bucket is located. Here's an example: `us-west-2:amzn-s3-demo-bucket`. If you're using a proxy for Amazon S3 endpoints in a specific Region that doesn't include us-east-1, prefix the bucket name with a Region. If the bucket name isn't prefixed, the system automatically discovers the bucket Region by using the us-east-1 endpoint.

   1. **Mof Operation Mode**: Choose State Manager behavior when running the **`AWS-ApplyDSCMofs`** association:
      + **Apply**: Correct node configurations that aren't compliant. 
      + **ReportOnly**: Don't correct node configurations, but instead log all compliance data and report nodes that aren't compliant.

   1. **Status Bucket Name**: (Optional) Enter the name of an Amazon S3 bucket where you want to write MOF execution status information. These status reports are singleton summaries of the most recent compliance run of a node. This means that the report is overwritten the next time the association runs MOF files.
**Note**  
You can prefix the bucket name with the Region where the bucket is located. Here's an example: `us-west-2:amzn-s3-demo-bucket`. If you're using a proxy for Amazon S3 endpoints in a specific Region that doesn't include us-east-1, prefix the bucket name with a Region. If the bucket name isn't prefixed, the system automatically discovers the bucket Region by using the us-east-1 endpoint.

   1. **Module Source Bucket Name**: (Optional) Enter the name of an Amazon S3 bucket that contains PowerShell module files. If you specify **None**, choose **True** for the next option, **Allow PS Gallery Module Source**.
**Note**  
You can prefix the bucket name with the Region where the bucket is located. Here's an example: `us-west-2:amzn-s3-demo-bucket`. If you're using a proxy for Amazon S3 endpoints in a specific Region that doesn't include us-east-1, prefix the bucket name with a Region. If the bucket name isn't prefixed, the system automatically discovers the bucket Region by using the us-east-1 endpoint.

   1. **Allow PS Gallery Module Source**: (Optional) Choose **True** to download PowerShell modules from [https://www.powershellgallery.com/](https://www.powershellgallery.com/). If you choose **False**, specify a source for the previous option, **ModuleSourceBucketName**.

   1. **Proxy Uri**: (Optional) Use this option to download MOF files from a proxy server.

   1. **Reboot Behavior**: (Optional) Specify one of the following reboot behaviors if your MOF file execution requires rebooting:
      + **AfterMof**: Reboots the node after all MOF executions are complete. Even if multiple MOF executions request reboots, the system waits until all MOF executions are complete to reboot.
      + **Immediately**: Reboots the node whenever a MOF execution requests it. If you run multiple MOF files that request reboots, the node is rebooted multiple times.
      + **Never**: Nodes aren't rebooted, even if the MOF execution explicitly requests a reboot.

   1. **Use Computer Name For Reporting**: (Optional) Turn on this option to use the name of the computer when reporting compliance information. The default value is **false**, which means that the system uses the node ID when reporting compliance information.

   1. **Turn on Verbose Logging**: (Optional) We recommend that you turn on verbose logging when deploying MOF files for the first time.
**Important**  
When allowed, verbose logging writes more data to your Amazon S3 bucket than standard association execution logging. This might result in slower performance and higher storage charges for Amazon S3. To mitigate storage size issues, we recommend that you turn on lifecycle policies on your Amazon S3 bucket. For more information, see [How Do I Create a Lifecycle Policy for an S3 Bucket?](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-lifecycle.html) in the *Amazon Simple Storage Service User Guide*.

   1. **Turn on Debug Logging**: (Optional) We recommend that you turn on debug logging to troubleshoot MOF failures. We also recommend that you deactivate this option for normal use.
**Important**  
When allowed, debug logging writes more data to your Amazon S3 bucket than standard association execution logging. This might result in slower performance and higher storage charges for Amazon S3. To mitigate storage size issues, we recommend that you turn on lifecycle policies on your Amazon S3 bucket. For more information, see [How Do I Create a Lifecycle Policy for an S3 Bucket?](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-lifecycle.html) in the *Amazon Simple Storage Service User Guide*.

   1. **Compliance Type**: (Optional) Specify the compliance type to use when reporting compliance information. The default compliance type is **Custom:DSC**. If you create multiple associations that run MOF files, then be sure to specify a different compliance type for each association. If you don't, each additional association that uses **Custom:DSC** overwrites the existing compliance data.

   1. **Pre Reboot Script**: (Optional) Specify a script to run if the configuration has indicated that a reboot is necessary. The script runs before the reboot. The script must be a single line. Separate additional lines by using semicolons.

1. In the **Targets** section, choose either **Specifying tags** or **Manually Selecting Instance**. If you choose to target resources by using tags, then enter a tag key and a tag value in the fields provided. For more information about using targets, see [Understanding targets and rate controls in State Manager associations](systems-manager-state-manager-targets-and-rate-controls.md).

1. In the **Specify schedule** section, choose either **On Schedule** or **No schedule**. If you choose **On Schedule**, then use the buttons provided to create a cron or rate schedule for the association. 

1. In the **Advanced options** section:
   + In **Compliance severity**, choose a severity level for the association. Compliance reporting indicates whether the association state is compliant or noncompliant, along with the severity level you indicate here. For more information, see [About State Manager association compliance](compliance-about.md#compliance-about-association).

1. In the **Rate control** section, configure options for running State Manager associations across a fleet of managed nodes. For more information about these options, see [Understanding targets and rate controls in State Manager associations](systems-manager-state-manager-targets-and-rate-controls.md).

   In the **Concurrency** section, choose an option: 
   + Choose **targets** to enter an absolute number of targets that can run the association simultaneously.
   + Choose **percentage** to enter a percentage of the target set that can run the association simultaneously.

   In the **Error threshold** section, choose an option:
   + Choose **errors** to enter an absolute number of errors allowed before State Manager stops running associations on additional targets.
   + Choose **percentage** to enter a percentage of errors allowed before State Manager stops running associations on additional targets.

1. (Optional) For **Output options**, to save the command output to a file, select the **Enable writing output to S3** box. Enter the bucket and prefix (folder) names in the boxes.
**Note**  
The S3 permissions that grant the ability to write the data to an S3 bucket are those of the instance profile assigned to the managed node, not those of the IAM user performing this task. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md) or [Create an IAM service role for a hybrid environment](hybrid-multicloud-service-role.md). In addition, if the specified S3 bucket is in a different AWS account, verify that the instance profile or IAM service role associated with the managed node has the necessary permissions to write to that bucket.

1. Choose **Create Association**. 

State Manager creates and immediately runs the association on the specified nodes or targets. After the initial execution, the association runs in intervals according to the schedule that you defined and according to the following rules:
+ State Manager runs associations on nodes that are online when the interval starts and skips offline nodes.
+ State Manager attempts to run the association on all configured nodes during an interval.
+ If an association isn't run during an interval (because, for example, a concurrency value limited the number of nodes that could process the association at one time), then State Manager attempts to run the association during the next interval.
+ State Manager records history for all skipped intervals. You can view the history on the **Execution History** tab.

**Note**  
The `AWS-ApplyDSCMofs` document is a Systems Manager Command document. This means that you can also run it by using Run Command, a tool in AWS Systems Manager. For more information, see [AWS Systems Manager Run Command](run-command.md).

## Troubleshooting issues when creating associations that run MOF files


This section includes information to help you troubleshoot issues creating associations that run MOF files.

**Turn on enhanced logging**  
As a first step to troubleshooting, turn on enhanced logging. More specifically, do the following:

1. Verify that the association is configured to write command output to either Amazon S3 or Amazon CloudWatch Logs (CloudWatch).

1. Set the **Enable Verbose Logging** parameter to True.

1. Set the **Enable Debug Logging** parameter to True.

With verbose and debug logging turned on, the **Stdout** output file includes details about the script execution. This output file can help you identify where the script failed. The **Stderr** output file contains errors that occurred during the script execution. 

**Common problems when creating associations that run MOF files**  
This section includes information about common problems that can occur when creating associations that run MOF files and steps to troubleshoot these issues.

**My MOF wasn't applied**  
If State Manager failed to apply the association to your nodes, then start by reviewing the **Stderr** output file. This file can help you understand the root cause of the issue. Also, verify the following:
+ The node has the required access permissions to all MOF-related Amazon S3 buckets. Specifically:
  + **s3:GetObject permissions**: This is required for MOF files in private Amazon S3 buckets and custom modules in Amazon S3 buckets.
  + **s3:PutObject permission**: This is required to write compliance reports and compliance status to Amazon S3 buckets.
+ If you're using tags, then ensure that the node has the required IAM policy. Using tags requires the instance IAM role to have a policy allowing the `ec2:DescribeInstances` and `ssm:ListTagsForResource` actions.
+ Ensure that the node has the expected tags or SSM parameters assigned.
+ Ensure that the tags or SSM parameters aren't misspelled.
+ Try applying the MOF locally on the node to make sure there isn't an issue with the MOF file itself.
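  The S3 and tag-related permissions described above could be combined in an instance profile policy along the following lines. This is a minimal, illustrative sketch only; the bucket name is a placeholder, and your own policy might scope the resources differently.

  ```
  {
     "Version": "2012-10-17",
     "Statement": [
        {
           "Effect": "Allow",
           "Action": ["s3:GetObject", "s3:PutObject"],
           "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
        },
        {
           "Effect": "Allow",
           "Action": ["ec2:DescribeInstances", "ssm:ListTagsForResource"],
           "Resource": "*"
        }
     ]
  }
  ```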

**My MOF seemed to fail, but the Systems Manager execution was successful**  
If the `AWS-ApplyDSCMofs` document successfully ran, then the Systems Manager execution status shows **Success**. This status doesn't reflect the compliance status of your node against the configuration requirements in the MOF file. To view the compliance status of your nodes, view the compliance reports. You can view a JSON report in the Amazon S3 Report Bucket. This applies to Run Command and State Manager executions. Also, for State Manager, you can view compliance details on the Systems Manager Compliance page.

**Stderr states: Name resolution failure attempting to reach service**  
This error indicates that the script can't reach a remote service. Most likely, the script can't reach Amazon S3. This issue most often occurs when the script attempts to write compliance reports or compliance status to the Amazon S3 bucket supplied in the document parameters. Typically, this error occurs when a computing environment uses a firewall or transparent proxy that includes an allow list. To resolve this issue:
+ Use Region-specific bucket syntax for all Amazon S3 bucket parameters. For example, the **Mofs to Apply** parameter should be formatted as follows:

  s3:*bucket-region*:*amzn-s3-demo-bucket*:*mof-file-name*.mof

  Here is an example: `s3:us-west-2:amzn-s3-demo-bucket:my-mof.mof`

  The Report, Status, and Module Source bucket names should be formatted as follows.

  *bucket-region*:*amzn-s3-demo-bucket*

  Here is an example: `us-west-1:amzn-s3-demo-bucket`
+ If Region-specific syntax doesn't fix the problem, then make sure that the targeted nodes can access Amazon S3 in the desired Region. To verify this:

  1. Find the endpoint name for Amazon S3 in the appropriate Amazon S3 Region. For information, see [Amazon S3 Service Endpoints](https://docs.aws.amazon.com/general/latest/gr/s3.html#s3_region) in the *Amazon Web Services General Reference*.

  1. Log on to the target node and run the following ping command.

     ```
     ping s3.s3-region.amazonaws.com
     ```

     If the ping fails, then either Amazon S3 is down, a firewall or transparent proxy is blocking access to the Amazon S3 Region, or the node can't access the internet.
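The Region-specific bucket syntax above is a colon-separated string that a small helper can take apart, which can be useful when checking document parameters for typos. This is an illustrative sketch only; `parse_mof_path` is a hypothetical helper, not part of the `AWS-ApplyDSCMofs` document.

```python
def parse_mof_path(value):
    """Split a Region-specific MOF path such as
    's3:us-west-2:amzn-s3-demo-bucket:my-mof.mof' into its parts."""
    prefix, region, bucket, key = value.split(":", 3)
    if prefix != "s3":
        raise ValueError(f"expected an s3: path, got {value!r}")
    return {"region": region, "bucket": bucket, "key": key}

# Example: confirm the pieces of a Mofs to Apply parameter value.
parts = parse_mof_path("s3:us-west-2:amzn-s3-demo-bucket:my-mof.mof")
print(parts)
```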

## Viewing DSC resource compliance details


Systems Manager captures compliance information about DSC resource failures in the Amazon S3 **Status Bucket** you specified when you ran the `AWS-ApplyDSCMofs` document. Searching for information about DSC resource failures in an Amazon S3 bucket can be time-consuming. Instead, you can view this information on the Systems Manager **Compliance** page. 

The **Compliance resources summary** section displays a count of resources that failed. In the following example, the **ComplianceType** is **Custom:DSC** and one resource is noncompliant.

**Note**  
Custom:DSC is the default **ComplianceType** value in the `AWS-ApplyDSCMofs` document. This value is customizable.

![\[Viewing counts in the Compliance resources summary section of the Compliance page.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/state-manager-mof-detailed-status-3.png)


The **Details overview for resources** section displays information about the AWS resource with the noncompliant DSC resource. This section also includes the MOF name, script execution steps, and (when applicable) a **View output** link to view detailed status information. 

![\[Viewing compliance details for a MOF execution resource failure\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/state-manager-mof-detailed-status-1.png)


The **View output** link displays the last 4,000 characters of the detailed status. Systems Manager starts with the exception as the first element, and then scans back through the verbose messages, prepending as many as it can until it reaches the 4,000-character quota. This process displays the log messages that were output before the exception was thrown, which are the most relevant messages for troubleshooting.
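The prepend-until-quota behavior described above can be sketched as follows. The exact Systems Manager implementation isn't documented, so this is only an illustrative Python reconstruction; `build_view_output` is a hypothetical name.

```python
def build_view_output(exception, verbose_messages, quota=4000):
    """Start with the exception text, then prepend earlier verbose messages,
    newest first, stopping before the character quota would be exceeded."""
    output = exception
    # Walk backward through the log so the messages closest to the
    # failure are the ones that are kept.
    for message in reversed(verbose_messages):
        candidate = message + "\n" + output
        if len(candidate) > quota:
            break
        output = candidate
    return output
```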

![\[Viewing detailed output for MOF resource compliance issue\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/state-manager-mof-detailed-status-2.png)


For information about how to view compliance information, see [AWS Systems Manager Compliance](systems-manager-compliance.md).

**Situations that affect compliance reporting**  
If the State Manager association fails, then no compliance data is reported. More specifically, if a MOF fails to process, then Systems Manager doesn’t report any compliance items because the association fails. For example, if Systems Manager attempts to download a MOF from an Amazon S3 bucket that the node doesn't have permission to access, then the association fails and no compliance data is reported.

If a resource within a MOF fails, then Systems Manager *does* report compliance data. For example, if a MOF tries to create a file on a drive that doesn’t exist, then Systems Manager reports compliance because the `AWS-ApplyDSCMofs` document is able to process completely, which means the association runs successfully.

# Creating associations that run Ansible playbooks
Creating associations that run Ansible playbooks

You can create State Manager associations that run Ansible playbooks by using the `AWS-ApplyAnsiblePlaybooks` SSM document. State Manager is a tool in AWS Systems Manager. This document offers the following benefits for running playbooks:
+ Support for running complex playbooks
+ Support for downloading playbooks from GitHub and Amazon Simple Storage Service (Amazon S3)
+ Support for compressed playbook structure
+ Enhanced logging
+ Ability to specify which playbook to run when playbooks are bundled

**Note**  
Systems Manager includes two SSM documents that allow you to create State Manager associations that run Ansible playbooks: `AWS-RunAnsiblePlaybook` and `AWS-ApplyAnsiblePlaybooks`. The `AWS-RunAnsiblePlaybook` document is deprecated. It remains available in Systems Manager for legacy purposes. We recommend that you use the `AWS-ApplyAnsiblePlaybooks` document because of the enhancements described here.  
Associations that run Ansible playbooks aren't supported on macOS.

**Support for running complex playbooks**

The `AWS-ApplyAnsiblePlaybooks` document supports bundled, complex playbooks because it copies the entire file structure to a local directory before executing the specified main playbook. You can provide source playbooks in Zip files or in a directory structure. The Zip file or directory can be stored in GitHub or Amazon S3. 

**Support for downloading playbooks from GitHub**

The `AWS-ApplyAnsiblePlaybooks` document uses the `aws:downloadContent` plugin to download playbook files. Playbooks can be stored in GitHub as a single file or as a combined set of playbook files. To download content from GitHub, specify information about your GitHub repository in JSON format. Here is an example.

```
{
   "owner":"TestUser",
   "repository":"GitHubTest",
   "path":"scripts/python/test-script",
   "getOptions":"branch:master",
   "tokenInfo":"{{ssm-secure:secure-string-token}}"
}
```

**Support for downloading playbooks from Amazon S3**

You can also store and download Ansible playbooks in Amazon S3 as either a single .zip file or a directory structure. To download content from Amazon S3, specify the path to the file. Here are two examples.

**Example 1: Download a specific playbook file**

```
{
   "path":"https://s3.amazonaws.com/amzn-s3-demo-bucket/playbook.yml"
}
```

**Example 2: Download the contents of a directory**

```
{
   "path":"https://s3.amazonaws.com/amzn-s3-demo-bucket/ansible/webservers/"
}
```

**Important**  
If you specify Amazon S3, then the AWS Identity and Access Management (IAM) instance profile on your managed nodes must include permissions for the S3 bucket. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md). 

**Support for compressed playbook structure**

The `AWS-ApplyAnsiblePlaybooks` document allows you to run compressed .zip files in the downloaded bundle. The document checks whether the downloaded files contain a compressed file in .zip format. If a .zip file is found, the document automatically decompresses it and then runs the specified Ansible automation.

**Enhanced logging**

The `AWS-ApplyAnsiblePlaybooks` document includes an optional parameter for specifying different levels of logging. Specify `-v` for low verbosity, `-vv` or `-vvv` for medium verbosity, and `-vvvv` for debug-level logging. These options map directly to Ansible verbosity options.

**Ability to specify which playbook to run when playbooks are bundled**

The `AWS-ApplyAnsiblePlaybooks` document includes a required parameter for specifying which playbook to run when multiple playbooks are bundled. This option provides flexibility for running playbooks to support different use cases.

## Understanding installed dependencies


If you specify **True** for the **InstallDependencies** parameter, then Systems Manager verifies that your nodes have the following dependencies installed:
+ **Ubuntu Server/Debian Server**: Apt-get (Package Management), Python 3, Ansible, Unzip
+ **Amazon Linux** supported versions: Ansible
+ **RHEL**: Python 3, Ansible, Unzip

If one or more of these dependencies aren't found, then Systems Manager automatically installs them.

## Create an association that runs Ansible playbooks (console)


The following procedure describes how to use the Systems Manager console to create a State Manager association that runs Ansible playbooks by using the `AWS-ApplyAnsiblePlaybooks` document.

**To create an association that runs Ansible playbooks (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **State Manager**.

1. Choose **State Manager**, and then choose **Create association**.

1. For **Name**, specify a name that helps you remember the purpose of the association.

1. In the **Document** list, choose **`AWS-ApplyAnsiblePlaybooks`**.

1. In the **Parameters** section, for **Source Type**, choose either **GitHub** or **S3**.

   **GitHub**

   If you choose **GitHub**, enter repository information in the following format.

   ```
   {
      "owner":"user_name",
      "repository":"name",
      "path":"path_to_directory_or_playbook_to_download",
      "getOptions":"branch:branch_name",
      "tokenInfo":"{{(Optional)_token_information}}"
   }
   ```

   **S3**

   If you choose **S3**, enter path information in the following format.

   ```
   {
      "path":"https://s3.amazonaws.com/path_to_directory_or_playbook_to_download"
   }
   ```

1. For **Install Dependencies**, choose an option.

1. (Optional) For **Playbook File**, enter a file name. If the playbook is bundled in a Zip file, specify the playbook's path relative to the root of the Zip file.

1. (Optional) For **Extra Variables**, enter variables that you want State Manager to send to Ansible at runtime.

1. (Optional) For **Check**, choose an option.

1. (Optional) For **Verbose**, choose an option.

1. For **Targets**, choose an option. For information about using targets, see [Understanding targets and rate controls in State Manager associations](systems-manager-state-manager-targets-and-rate-controls.md).

1. In the **Specify schedule** section, choose either **On schedule** or **No schedule**. If you choose **On schedule**, then use the buttons provided to create a cron or rate schedule for the association. 

1. In the **Advanced options** section, for **Compliance severity**, choose a severity level for the association. Compliance reporting indicates whether the association state is compliant or noncompliant, along with the severity level you indicate here. For more information, see [About State Manager association compliance](compliance-about.md#compliance-about-association).

1. In the **Rate control** section, configure options to run State Manager associations across a fleet of managed nodes. For information about using rate controls, see [Understanding targets and rate controls in State Manager associations](systems-manager-state-manager-targets-and-rate-controls.md).

   In the **Concurrency** section, choose an option: 
   + Choose **targets** to enter an absolute number of targets that can run the association simultaneously.
   + Choose **percentage** to enter a percentage of the target set that can run the association simultaneously.

   In the **Error threshold** section, choose an option:
   + Choose **errors** to enter an absolute number of errors that are allowed before State Manager stops running associations on additional targets.
   + Choose **percentage** to enter a percentage of errors that are allowed before State Manager stops running associations on additional targets.

1. (Optional) For **Output options**, to save the command output to a file, select the **Enable writing output to S3** box. Enter the bucket and prefix (folder) names in the boxes.
**Note**  
The S3 permissions that grant the ability to write the data to an S3 bucket are those of the instance profile assigned to the managed node, not those of the IAM user performing this task. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md) or [Create an IAM service role for a hybrid environment](hybrid-multicloud-service-role.md). In addition, if the specified S3 bucket is in a different AWS account, verify that the instance profile or IAM service role associated with the managed node has the necessary permissions to write to that bucket.

1. Choose **Create Association**.

**Note**  
If you use tags to create an association on one or more target nodes, and then you remove the tags from a node, that node no longer runs the association. The node is disassociated from the State Manager document. 

## Create an association that runs Ansible playbooks (CLI)


The following procedure describes how to use the AWS Command Line Interface (AWS CLI) to create a State Manager association that runs Ansible playbooks by using the `AWS-ApplyAnsiblePlaybooks` document. 

**To create an association that runs Ansible playbooks (CLI)**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run one of the following commands to create an association that runs Ansible playbooks by targeting nodes using tags. Replace each *example resource placeholder* with your own information. Command (A) specifies GitHub as the source type. Command (B) specifies Amazon S3 as the source type.

   **(A) GitHub source**

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association --name "AWS-ApplyAnsiblePlaybooks" \
       --targets Key=tag:TagKey,Values=TagValue \
       --parameters '{"SourceType":["GitHub"],"SourceInfo":["{\"owner\":\"owner_name\", \"repository\": \"name\", \"getOptions\": \"branch:master\"}"],"InstallDependencies":["True_or_False"],"PlaybookFile":["file_name.yml"],"ExtraVariables":["key/value_pairs_separated_by_a_space"],"Check":["True_or_False"],"Verbose":["-v,-vv,-vvv, or -vvvv"],"TimeoutSeconds":["3600"]}' \
       --association-name "name" \
       --schedule-expression "cron_or_rate_expression"
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association --name "AWS-ApplyAnsiblePlaybooks" ^
       --targets Key=tag:TagKey,Values=TagValue ^
       --parameters '{"SourceType":["GitHub"],"SourceInfo":["{\"owner\":\"owner_name\", \"repository\": \"name\", \"getOptions\": \"branch:master\"}"],"InstallDependencies":["True_or_False"],"PlaybookFile":["file_name.yml"],"ExtraVariables":["key/value_pairs_separated_by_a_space"],"Check":["True_or_False"],"Verbose":["-v,-vv,-vvv, or -vvvv"], "TimeoutSeconds":["3600"]}' ^
       --association-name "name" ^
       --schedule-expression "cron_or_rate_expression"
   ```

------

   Here is an example.

   ```
   aws ssm create-association --name "AWS-ApplyAnsiblePlaybooks" \
       --targets "Key=tag:OS,Values=Linux" \
       --parameters '{"SourceType":["GitHub"],"SourceInfo":["{\"owner\":\"ansibleDocumentTest\", \"repository\": \"Ansible\", \"getOptions\": \"branch:master\"}"],"InstallDependencies":["True"],"PlaybookFile":["hello-world-playbook.yml"],"ExtraVariables":["SSM=True"],"Check":["False"],"Verbose":["-v"]}' \
       --association-name "AnsibleAssociation" \
       --schedule-expression "cron(0 2 ? * SUN *)"
   ```

   **(B) S3 source**

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association --name "AWS-ApplyAnsiblePlaybooks" \
       --targets Key=tag:TagKey,Values=TagValue \
       --parameters '{"SourceType":["S3"],"SourceInfo":["{\"path\":\"https://s3.amazonaws.com/path_to_Zip_file,_directory,_or_playbook_to_download\"}"],"InstallDependencies":["True_or_False"],"PlaybookFile":["file_name.yml"],"ExtraVariables":["key/value_pairs_separated_by_a_space"],"Check":["True_or_False"],"Verbose":["-v,-vv,-vvv, or -vvvv"]}' \
       --association-name "name" \
       --schedule-expression "cron_or_rate_expression"
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association --name "AWS-ApplyAnsiblePlaybooks" ^
       --targets Key=tag:TagKey,Values=TagValue ^
       --parameters '{"SourceType":["S3"],"SourceInfo":["{\"path\":\"https://s3.amazonaws.com/path_to_Zip_file,_directory,_or_playbook_to_download\"}"],"InstallDependencies":["True_or_False"],"PlaybookFile":["file_name.yml"],"ExtraVariables":["key/value_pairs_separated_by_a_space"],"Check":["True_or_False"],"Verbose":["-v,-vv,-vvv, or -vvvv"]}' ^
       --association-name "name" ^
       --schedule-expression "cron_or_rate_expression"
   ```

------

   Here is an example.

   ```
   aws ssm create-association --name "AWS-ApplyAnsiblePlaybooks" \
       --targets "Key=tag:OS,Values=Linux" \
       --parameters '{"SourceType":["S3"],"SourceInfo":["{\"path\":\"https://s3.amazonaws.com/amzn-s3-demo-bucket/playbook.yml\"}"],"InstallDependencies":["True"],"PlaybookFile":["playbook.yml"],"ExtraVariables":["SSM=True"],"Check":["False"],"Verbose":["-v"]}' \
       --association-name "AnsibleAssociation" \
       --schedule-expression "cron(0 2 ? * SUN *)"
   ```
**Note**  
State Manager associations don't support all cron and rate expressions. For more information about creating cron and rate expressions for associations, see [Reference: Cron and rate expressions for Systems Manager](reference-cron-and-rate-expressions.md).

   The system attempts to create the association on the nodes and immediately apply the state. 

1. Run the following command to view an updated status of the association you just created. 

   ```
   aws ssm describe-association --association-id "ID"
   ```
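Because `SourceInfo` is itself a JSON string embedded inside the `--parameters` JSON, escaping it by hand is error-prone. The following Python sketch builds the parameters value programmatically by serializing twice with `json.dumps`; the repository details are placeholders taken from the examples above.

```python
import json

# The nested SourceInfo document, as a plain dictionary.
source_info = {
    "owner": "owner_name",
    "repository": "name",
    "getOptions": "branch:master",
}

# SourceInfo must be a JSON *string* inside the parameters JSON,
# so it is serialized on its own before the outer document is built.
parameters = {
    "SourceType": ["GitHub"],
    "SourceInfo": [json.dumps(source_info)],
    "InstallDependencies": ["True"],
    "PlaybookFile": ["hello-world-playbook.yml"],
    "Verbose": ["-v"],
}

# This string is what you would pass to --parameters.
print(json.dumps(parameters))
```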

# Creating associations that run Chef recipes
Creating associations that run Chef recipes

You can create State Manager associations that run Chef recipes by using the `AWS-ApplyChefRecipes` SSM document. State Manager is a tool in AWS Systems Manager. You can target Linux-based Systems Manager managed nodes with the `AWS-ApplyChefRecipes` SSM document. This document offers the following benefits for running Chef recipes:
+ Supports multiple releases of Chef (Chef 11 through Chef 18).
+ Automatically installs the Chef client software on target nodes.
+ Optionally runs [Systems Manager compliance checks](systems-manager-compliance.md) on target nodes, and stores the results of compliance checks in an Amazon Simple Storage Service (Amazon S3) bucket.
+ Runs multiple cookbooks and recipes in a single run of the document.
+ Optionally runs recipes in `why-run` mode, to show which recipes change on target nodes without making changes.
+ Optionally applies custom JSON attributes to `chef-client` runs.
+ Optionally applies custom JSON attributes from a source file that is stored at a location that you specify.

You can use [Git](#state-manager-chef-git), [GitHub](#state-manager-chef-github), [HTTP](#state-manager-chef-http), or [Amazon S3](#state-manager-chef-s3) buckets as download sources for Chef cookbooks and recipes that you specify in an `AWS-ApplyChefRecipes` document.

**Note**  
Associations that run Chef recipes aren't supported on macOS.

## Getting started


Before you create an `AWS-ApplyChefRecipes` document, prepare your Chef cookbooks and cookbook repository. If you don't already have a Chef cookbook that you want to use, you can get started by using a test `HelloWorld` cookbook that AWS has prepared for you. The `AWS-ApplyChefRecipes` document already points to this cookbook by default. Your cookbooks should be set up similarly to the following directory structure. In the following example, `jenkins` and `nginx` are examples of Chef cookbooks that are available on [Chef Supermarket](https://supermarket.chef.io/) on the Chef website.

Though AWS can't officially support cookbooks on the [Chef Supermarket](https://supermarket.chef.io/) website, many of them work with the `AWS-ApplyChefRecipes` document. The following are examples of criteria to check when you're testing a community cookbook:
+ The cookbook should support the Linux-based operating systems of the Systems Manager managed nodes that you're targeting.
+ The cookbook should be valid for the Chef client version (Chef 11 through Chef 18) that you use.
+ The cookbook should be compatible with the Chef Infra Client and shouldn't require a Chef server.

Verify that you can reach the `Chef.io` website, so that any cookbooks you specify in your run list can be installed when the Systems Manager document (SSM document) runs. Using a nested `cookbooks` folder is supported, but not required; you can store cookbooks directly under the root level.

```
<Top-level directory, or the top level of the archive file (ZIP or tgz or tar.gz)>
    └── cookbooks (optional level)
        ├── jenkins
        │   ├── metadata.rb
        │   └── recipes
        └── nginx
            ├── metadata.rb
            └── recipes
```
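A quick way to check that a local cookbook tree matches the layout above is to verify that each cookbook directory contains a `metadata.rb` file and a `recipes` directory. The following Python sketch is illustrative only; `find_cookbooks` is a hypothetical helper that assumes cookbooks sit either at the top level or under the optional `cookbooks` folder.

```python
from pathlib import Path

def find_cookbooks(root):
    """Return the names of directories under root (or root/cookbooks)
    that look like Chef cookbooks: metadata.rb plus a recipes directory."""
    base = Path(root)
    # The nested cookbooks folder is supported but optional.
    if (base / "cookbooks").is_dir():
        base = base / "cookbooks"
    return sorted(
        child.name
        for child in base.iterdir()
        if child.is_dir()
        and (child / "metadata.rb").is_file()
        and (child / "recipes").is_dir()
    )
```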

**Important**  
Before you create a State Manager association that runs Chef recipes, be aware that the document run installs the Chef client software on your Systems Manager managed nodes, unless you set the value of **Chef client version** to `None`. This operation uses an installation script from Chef to install Chef components on your behalf. Before you run an `AWS-ApplyChefRecipes` document, be sure your enterprise can comply with any applicable legal requirements, including license terms applicable to the use of Chef software. For more information, see the [Chef website](https://www.chef.io/).

Systems Manager can deliver compliance reports to an S3 bucket or the Systems Manager console, or it can make compliance results available in response to Systems Manager API commands. To run Systems Manager compliance reports, the instance profile attached to Systems Manager managed nodes must have permissions to write to the S3 bucket. The instance profile must also have permissions to use the Systems Manager `PutComplianceItem` API. For more information about Systems Manager compliance, see [AWS Systems Manager Compliance](systems-manager-compliance.md).

### Logging the document run


When you run a Systems Manager document (SSM document) by using a State Manager association, you can configure the association to capture the output of the document run and send it to Amazon S3 or Amazon CloudWatch Logs (CloudWatch Logs). To help ease troubleshooting after an association has finished running, verify that the association is configured to write command output to either an Amazon S3 bucket or CloudWatch Logs. For more information, see [Working with associations in Systems Manager](state-manager-associations.md).

## Applying JSON attributes to targets when running a recipe


You can specify JSON attributes for your Chef client to apply to target nodes during an association run. When setting up the association, you can provide raw JSON or provide the path to a JSON file stored in Amazon S3.

Use JSON attributes when you want to customize how the recipe is run without having to modify the recipe itself, for example:
+ **Overriding a small number of attributes**

  Use custom JSON to avoid having to maintain multiple versions of a recipe to accommodate minor differences.
+ **Providing variable values**

  Use custom JSON to specify values that may change from run to run. For example, if your Chef cookbooks configure a third-party application that accepts payments, you might use custom JSON to specify the payment endpoint URL.

**Specifying attributes in raw JSON**

The following is an example of the format you can use to specify custom JSON attributes for your Chef recipe.

```
{"filepath":"/tmp/example.txt", "content":"Hello, World!"}
```

**Specifying a path to a JSON file**  
The following is an example of the format you can use to specify the path to custom JSON attributes for your Chef recipe.

```
{"sourceType":"s3", "sourceInfo":"someS3URL1"}, {"sourceType":"s3", "sourceInfo":"someS3URL2"}
```

## Use Git as a cookbook source


The `AWS-ApplyChefRecipes` document uses the [aws:downloadContent](documents-command-ssm-plugin-reference.md#aws-downloadContent) plugin to download Chef cookbooks. To download content from Git, specify information about your Git repository in JSON format as in the following example. Replace each *example-resource-placeholder* with your own information.

```
{
   "repository":"GitCookbookRepository",
   "privateSSHKey":"{{ssm-secure:ssh-key-secure-string-parameter}}",
   "skipHostKeyChecking":"false",
   "getOptions":"branch:refs/head/main",
   "username":"{{ssm-secure:username-secure-string-parameter}}",
   "password":"{{ssm-secure:password-secure-string-parameter}}"
}
```

## Use GitHub as a cookbook source


The `AWS-ApplyChefRecipes` document uses the [aws:downloadContent](documents-command-ssm-plugin-reference.md#aws-downloadContent) plugin to download cookbooks. To download content from GitHub, specify information about your GitHub repository in JSON format as in the following example. Replace each *example-resource-placeholder* with your own information.

```
{
   "owner":"TestUser",
   "repository":"GitHubCookbookRepository",
   "path":"cookbooks/HelloWorld",
   "getOptions":"branch:refs/head/main",
   "tokenInfo":"{{ssm-secure:token-secure-string-parameter}}"
}
```

## Use HTTP as a cookbook source


You can store Chef cookbooks at a custom HTTP location as either a single `.zip` or `tar.gz` file, or a directory structure. To download content from HTTP, specify the path to the file or directory in JSON format as in the following example. Replace each *example-resource-placeholder* with your own information.

```
{
   "url":"https://my.website.com/chef-cookbooks/HelloWorld.zip",
   "allowInsecureDownload":"false",
   "authMethod":"Basic",
   "username":"{{ssm-secure:username-secure-string-parameter}}",
   "password":"{{ssm-secure:password-secure-string-parameter}}"
}
```

## Use Amazon S3 as a cookbook source


You can also store and download Chef cookbooks in Amazon S3 as either a single `.zip` or `tar.gz` file, or a directory structure. To download content from Amazon S3, specify the path to the file in JSON format as in the following examples. Replace each *example-resource-placeholder* with your own information.

**Example 1: Download a specific cookbook**

```
{
   "path":"https://s3.amazonaws.com/chef-cookbooks/HelloWorld.zip"
}
```

**Example 2: Download the contents of a directory**

```
{
   "path":"https://s3.amazonaws.com/chef-cookbooks-test/HelloWorld"
}
```

**Important**  
If you specify Amazon S3, the AWS Identity and Access Management (IAM) instance profile on your managed nodes must be configured with the `AmazonS3ReadOnlyAccess` policy. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md).

## Create an association that runs Chef recipes (console)


The following procedure describes how to use the Systems Manager console to create a State Manager association that runs Chef cookbooks by using the `AWS-ApplyChefRecipes` document.

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **State Manager**.

1. Choose **Create association**.

1. For **Name**, enter a name that helps you remember the purpose of the association.

1. In the **Document** list, choose **`AWS-ApplyChefRecipes`**.

1. In **Parameters**, for **Source Type**, choose **Git**, **GitHub**, **HTTP**, or **S3**.

1. For **Source info**, enter cookbook source information using the appropriate format for the **Source Type** that you selected in step 6. For more information, see the following topics:
   + [Use Git as a cookbook source](#state-manager-chef-git)
   + [Use GitHub as a cookbook source](#state-manager-chef-github)
   + [Use HTTP as a cookbook source](#state-manager-chef-http)
   + [Use Amazon S3 as a cookbook source](#state-manager-chef-s3)

1. In **Run list**, list the recipes that you want to run in the following format, separating each recipe with a comma as shown. Don't include a space after the comma. Replace each *example-resource-placeholder* with your own information.

   ```
   recipe[cookbook-name1::recipe-name],recipe[cookbook-name2::recipe-name]
   ```
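As a sketch, you could assemble that run list string from cookbook and recipe names like the following (Python; the names are placeholders). The helper keeps the required format: comma-separated, with no space after the comma.

```python
def build_run_list(recipes):
    """Build a Chef run list string from (cookbook, recipe) pairs."""
    return ",".join(f"recipe[{cookbook}::{recipe}]" for cookbook, recipe in recipes)

run_list = build_run_list(
    [("HelloWorld", "HelloWorldRecipe"), ("HelloWorld", "InstallApp")]
)
print(run_list)
# recipe[HelloWorld::HelloWorldRecipe],recipe[HelloWorld::InstallApp]
```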

1. (Optional) Specify custom JSON attributes that you want the Chef client to pass to your target nodes.

   1. In **JSON attributes content**, add any attributes that you want the Chef client to pass to your target nodes.

   1. In **JSON attributes sources**, add the paths to any attributes that you want the Chef client to pass to your target nodes.

   For more information, see [Applying JSON attributes to targets when running a recipe](#apply-custom-json-attributes).

1. For **Chef client version**, specify a Chef version. Valid values are `11` through `18`, or `None`. If you specify a number between `11` and `18` (inclusive), Systems Manager installs the corresponding Chef client version on your target nodes. If you specify `None`, Systems Manager doesn't install the Chef client on target nodes before running the document's recipes.

1. (Optional) For **Chef client arguments**, specify additional arguments that are supported for the version of Chef you're using. To learn more about supported arguments, run `chef-client -h` on a node that is running the Chef client.

1. (Optional) Turn on **Why-run** to show changes made to target nodes if the recipes are run, without actually changing target nodes.

1. For **Compliance severity**, choose the severity of Systems Manager Compliance results that you want reported. Compliance reporting indicates whether the association state is compliant or noncompliant, along with the severity level you specify. Compliance reports are stored in an S3 bucket that you specify as the value of the **Compliance report bucket** parameter (step 15). For more information about Compliance, see [Learn details about Compliance](compliance-about.md) in this guide.

   Compliance scans measure drift between configuration that is specified in your Chef recipes and node resources. Valid values are `Critical`, `High`, `Medium`, `Low`, `Informational`, `Unspecified`, or `None`. To skip compliance reporting, choose `None`.

1. For **Compliance type**, specify the compliance type for which you want results reported. Valid values are `Association` for State Manager associations, or `Custom:`*custom-type*. The default value is `Custom:Chef`.

1. For **Compliance report bucket**, enter the name of an S3 bucket in which to store information about every Chef run performed by this document, including resource configuration and Compliance results.

1. In **Rate control**, configure options to run State Manager associations across a fleet of managed nodes. For information about using rate controls, see [Understanding targets and rate controls in State Manager associations](systems-manager-state-manager-targets-and-rate-controls.md).

   In **Concurrency**, choose an option:
   + Choose **targets** to enter an absolute number of targets that can run the association simultaneously.
   + Choose **percentage** to enter a percentage of the target set that can run the association simultaneously.

   In **Error threshold**, choose an option:
   + Choose **errors** to enter an absolute number of errors that are allowed before State Manager stops running associations on additional targets.
   + Choose **percentage** to enter a percentage of errors that are allowed before State Manager stops running associations on additional targets.

1. (Optional) For **Output options**, to save the command output to a file, select the **Enable writing output to S3** box. Enter the bucket and prefix (folder) names in the boxes.
**Note**  
The S3 permissions that grant the ability to write the data to an S3 bucket are those of the instance profile assigned to the managed node, not those of the IAM user performing this task. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md) or [Create an IAM service role for a hybrid environment](hybrid-multicloud-service-role.md). In addition, if the specified S3 bucket is in a different AWS account, verify that the instance profile or IAM service role associated with the managed node has the necessary permissions to write to that bucket.

1. Choose **Create Association**.

## Create an association that runs Chef recipes (CLI)


The following procedure describes how to use the AWS Command Line Interface (AWS CLI) to create a State Manager association that runs Chef cookbooks by using the `AWS-ApplyChefRecipes` document.

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run one of the following commands to create an association that runs Chef cookbooks on target nodes that have the specified tags. Use the command that is appropriate for your cookbook source type and operating system. Replace each *example-resource-placeholder* with your own information.

   1. **Git source**

------
#### [ Linux & macOS ]

      ```
      aws ssm create-association --name "AWS-ApplyChefRecipes" \
          --targets Key=tag:TagKey,Values=TagValue \
          --parameters '{"SourceType":["Git"],"SourceInfo":["{\"repository\":\"repository-name\", \"getOptions\": \"branch:branch-name\", \"username\": \"{{ ssm-secure:username-secure-string-parameter }}\", \"password\": \"{{ ssm-secure:password-secure-string-parameter }}\"}"], "RunList":["{\"recipe[cookbook-name-1::recipe-name]\", \"recipe[cookbook-name-2::recipe-name]\"}"], "JsonAttributesContent": ["{custom-json-content}"], "JsonAttributesSources": "{\"sourceType\":\"s3\", \"sourceInfo\":\"s3-bucket-endpoint-1\"}, {\"sourceType\":\"s3\", \"sourceInfo\":\"s3-bucket-endpoint-2\"}", "ChefClientVersion": ["version-number"], "ChefClientArguments":["{chef-client-arguments}"], "WhyRun": boolean, "ComplianceSeverity": ["severity-value"], "ComplianceType": ["Custom:Chef"], "ComplianceReportBucket": ["s3-bucket-name"]}' \
          --association-name "name" \
          --schedule-expression "cron-or-rate-expression"
      ```

------
#### [ Windows ]

      ```
      aws ssm create-association --name "AWS-ApplyChefRecipes" ^
          --targets Key=tag:TagKey,Values=TagValue ^
          --parameters '{"SourceType":["Git"],"SourceInfo":["{\"repository\":\"repository-name\", \"getOptions\": \"branch:branch-name\", \"username\": \"{{ ssm-secure:username-secure-string-parameter }}\", \"password\": \"{{ ssm-secure:password-secure-string-parameter }}\"}"], "RunList":["{\"recipe[cookbook-name-1::recipe-name]\", \"recipe[cookbook-name-2::recipe-name]\"}"], "JsonAttributesContent": ["{custom-json}"], "JsonAttributesSources": "{\"sourceType\":\"s3\", \"sourceInfo\":\"s3-bucket-endpoint-1\"}, {\"sourceType\":\"s3\", \"sourceInfo\":\"s3-bucket-endpoint-2\"}", "ChefClientVersion": ["version-number"], "ChefClientArguments":["{chef-client-arguments}"], "WhyRun": boolean, "ComplianceSeverity": ["severity-value"], "ComplianceType": ["Custom:Chef"], "ComplianceReportBucket": ["s3-bucket-name"]}' ^
          --association-name "name" ^
          --schedule-expression "cron-or-rate-expression"
      ```

------
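      The `SourceInfo` value in these commands is a JSON string embedded inside a JSON document, which is why its inner quotes are escaped. Rather than escaping by hand, you can let a JSON library do it. The following Python sketch (placeholder values only) shows the idea of serializing the inner object twice:

```python
import json

# Inner SourceInfo object for a Git source (placeholder values).
source_info = {
    "repository": "repository-name",
    "getOptions": "branch:branch-name",
}

# The SSM document takes SourceInfo as a list containing one
# JSON-encoded string, so the inner object is serialized twice.
parameters = {
    "SourceType": ["Git"],
    "SourceInfo": [json.dumps(source_info)],
}

# This string is what you'd pass (quoted) as the --parameters value.
print(json.dumps(parameters))
```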

   1. **GitHub source**

------
#### [ Linux & macOS ]

      ```
      aws ssm create-association --name "AWS-ApplyChefRecipes" \
          --targets Key=tag:TagKey,Values=TagValue \
          --parameters '{"SourceType":["GitHub"],"SourceInfo":["{\"owner\":\"owner-name\", \"repository\": \"name\", \"path\": \"path-to-directory-or-cookbook-to-download\", \"getOptions\": \"branch:branch-name\"}"], "RunList":["{\"recipe[cookbook-name-1::recipe-name]\", \"recipe[cookbook-name-2::recipe-name]\"}"], "JsonAttributesContent": ["{custom-json}"], "ChefClientVersion": ["version-number"], "ChefClientArguments":["{chef-client-arguments}"], "WhyRun": boolean, "ComplianceSeverity": ["severity-value"], "ComplianceType": ["Custom:Chef"], "ComplianceReportBucket": ["s3-bucket-name"]}' \
          --association-name "name" \
          --schedule-expression "cron-or-rate-expression"
      ```

------
#### [ Windows ]

      ```
      aws ssm create-association --name "AWS-ApplyChefRecipes" ^
          --targets Key=tag:TagKey,Values=TagValue ^
          --parameters '{"SourceType":["GitHub"],"SourceInfo":["{\"owner\":\"owner-name\", \"repository\": \"name\", \"path\": \"path-to-directory-or-cookbook-to-download\", \"getOptions\": \"branch:branch-name\"}"], "RunList":["{\"recipe[cookbook-name-1::recipe-name]\", \"recipe[cookbook-name-2::recipe-name]\"}"], "JsonAttributesContent": ["{custom-json}"], "ChefClientVersion": ["version-number"], "ChefClientArguments":["{chef-client-arguments}"], "WhyRun": boolean, "ComplianceSeverity": ["severity-value"], "ComplianceType": ["Custom:Chef"], "ComplianceReportBucket": ["s3-bucket-name"]}' ^
          --association-name "name" ^
          --schedule-expression "cron-or-rate-expression"
      ```

------

      Here is an example.

------
#### [ Linux & macOS ]

      ```
      aws ssm create-association --name "AWS-ApplyChefRecipes" \
          --targets Key=tag:OS,Values=Linux \
          --parameters '{"SourceType":["GitHub"],"SourceInfo":["{\"owner\":\"ChefRecipeTest\", \"repository\": \"ChefCookbooks\", \"path\": \"cookbooks/HelloWorld\", \"getOptions\": \"branch:master\"}"], "RunList":["{\"recipe[HelloWorld::HelloWorldRecipe]\", \"recipe[HelloWorld::InstallApp]\"}"], "JsonAttributesContent": ["{\"state\": \"visible\",\"colors\": {\"foreground\": \"light-blue\",\"background\": \"dark-gray\"}}"], "ChefClientVersion": ["14"], "ChefClientArguments":["{--fips}"], "WhyRun": false, "ComplianceSeverity": ["Medium"], "ComplianceType": ["Custom:Chef"], "ComplianceReportBucket": ["ChefComplianceResultsBucket"]}' \
          --association-name "MyChefAssociation" \
          --schedule-expression "cron(0 2 ? * SUN *)"
      ```

------
#### [ Windows ]

      ```
      aws ssm create-association --name "AWS-ApplyChefRecipes" ^
          --targets Key=tag:OS,Values=Linux ^
          --parameters '{"SourceType":["GitHub"],"SourceInfo":["{\"owner\":\"ChefRecipeTest\", \"repository\": \"ChefCookbooks\", \"path\": \"cookbooks/HelloWorld\", \"getOptions\": \"branch:master\"}"], "RunList":["{\"recipe[HelloWorld::HelloWorldRecipe]\", \"recipe[HelloWorld::InstallApp]\"}"], "JsonAttributesContent": ["{\"state\": \"visible\",\"colors\": {\"foreground\": \"light-blue\",\"background\": \"dark-gray\"}}"], "ChefClientVersion": ["14"], "ChefClientArguments":["{--fips}"], "WhyRun": false, "ComplianceSeverity": ["Medium"], "ComplianceType": ["Custom:Chef"], "ComplianceReportBucket": ["ChefComplianceResultsBucket"]}' ^
          --association-name "MyChefAssociation" ^
          --schedule-expression "cron(0 2 ? * SUN *)"
      ```

------

   1. **HTTP source**

------
#### [ Linux & macOS ]

      ```
      aws ssm create-association --name "AWS-ApplyChefRecipes" \
          --targets Key=tag:TagKey,Values=TagValue \
          --parameters '{"SourceType":["HTTP"],"SourceInfo":["{\"url\":\"url-to-zip-file|directory|cookbook\", \"authMethod\": \"auth-method\", \"username\": \"{{ ssm-secure:username-secure-string-parameter }}\", \"password\": \"{{ ssm-secure:password-secure-string-parameter }}\"}"], "RunList":["{\"recipe[cookbook-name-1::recipe-name]\", \"recipe[cookbook-name-2::recipe-name]\"}"], "JsonAttributesContent": ["{custom-json-content}"], "JsonAttributesSources": "{\"sourceType\":\"s3\", \"sourceInfo\":\"s3-bucket-endpoint-1\"}, {\"sourceType\":\"s3\", \"sourceInfo\":\"s3-bucket-endpoint-2\"}", "ChefClientVersion": ["version-number"], "ChefClientArguments":["{chef-client-arguments}"], "WhyRun": boolean, "ComplianceSeverity": ["severity-value"], "ComplianceType": ["Custom:Chef"], "ComplianceReportBucket": ["s3-bucket-name"]}' \
          --association-name "name" \
          --schedule-expression "cron-or-rate-expression"
      ```

------
#### [ Windows ]

      ```
      aws ssm create-association --name "AWS-ApplyChefRecipes" ^
          --targets Key=tag:TagKey,Values=TagValue ^
          --parameters '{"SourceType":["HTTP"],"SourceInfo":["{\"url\":\"url-to-zip-file|directory|cookbook\", \"authMethod\": \"auth-method\", \"username\": \"{{ ssm-secure:username-secure-string-parameter }}\", \"password\": \"{{ ssm-secure:password-secure-string-parameter }}\"}"], "RunList":["{\"recipe[cookbook-name-1::recipe-name]\", \"recipe[cookbook-name-2::recipe-name]\"}"], "JsonAttributesContent": ["{custom-json-content}"], "JsonAttributesSources": "{\"sourceType\":\"s3\", \"sourceInfo\":\"s3-bucket-endpoint-1\"}, {\"sourceType\":\"s3\", \"sourceInfo\":\"s3-bucket-endpoint-2\"}", "ChefClientVersion": ["version-number"], "ChefClientArguments":["{chef-client-arguments}"], "WhyRun": boolean, "ComplianceSeverity": ["severity-value"], "ComplianceType": ["Custom:Chef"], "ComplianceReportBucket": ["s3-bucket-name"]}' ^
          --association-name "name" ^
          --schedule-expression "cron-or-rate-expression"
      ```

------

   1. **Amazon S3 source**

------
#### [ Linux & macOS ]

      ```
      aws ssm create-association --name "AWS-ApplyChefRecipes" \
          --targets Key=tag:TagKey,Values=TagValue \
          --parameters '{"SourceType":["S3"],"SourceInfo":["{\"path\":\"https://s3.amazonaws.com/path_to_Zip_file,_directory,_or_cookbook_to_download\"}"], "RunList":["{\"recipe[cookbook_name1::recipe_name]\", \"recipe[cookbook_name2::recipe_name]\"}"], "JsonAttributesContent": ["{Custom_JSON}"], "ChefClientVersion": ["version_number"], "ChefClientArguments":["{chef_client_arguments}"], "WhyRun": true_or_false, "ComplianceSeverity": ["severity_value"], "ComplianceType": ["Custom:Chef"], "ComplianceReportBucket": ["amzn-s3-demo-bucket"]}' \
          --association-name "name" \
          --schedule-expression "cron_or_rate_expression"
      ```

------
#### [ Windows ]

      ```
      aws ssm create-association --name "AWS-ApplyChefRecipes" ^
          --targets Key=tag:TagKey,Values=TagValue ^
          --parameters '{"SourceType":["S3"],"SourceInfo":["{\"path\":\"https://s3.amazonaws.com/path_to_Zip_file,_directory,_or_cookbook_to_download\"}"], "RunList":["{\"recipe[cookbook_name1::recipe_name]\", \"recipe[cookbook_name2::recipe_name]\"}"], "JsonAttributesContent": ["{Custom_JSON}"], "ChefClientVersion": ["version_number"], "ChefClientArguments":["{chef_client_arguments}"], "WhyRun": true_or_false, "ComplianceSeverity": ["severity_value"], "ComplianceType": ["Custom:Chef"], "ComplianceReportBucket": ["amzn-s3-demo-bucket"]}' ^
          --association-name "name" ^
          --schedule-expression "cron_or_rate_expression"
      ```

------

      Here is an example.

------
#### [ Linux & macOS ]

      ```
      aws ssm create-association --name "AWS-ApplyChefRecipes" \
          --targets "Key=tag:OS,Values=Linux" \
          --parameters '{"SourceType":["S3"],"SourceInfo":["{\"path\":\"https://s3.amazonaws.com/amzn-s3-demo-bucket/HelloWorld\"}"], "RunList":["{\"recipe[HelloWorld::HelloWorldRecipe]\", \"recipe[HelloWorld::InstallApp]\"}"], "JsonAttributesContent": ["{\"state\": \"visible\",\"colors\": {\"foreground\": \"light-blue\",\"background\": \"dark-gray\"}}"], "ChefClientVersion": ["14"], "ChefClientArguments":["{--fips}"], "WhyRun": false, "ComplianceSeverity": ["Medium"], "ComplianceType": ["Custom:Chef"], "ComplianceReportBucket": ["ChefComplianceResultsBucket"]}' \
          --association-name "name" \
          --schedule-expression "cron(0 2 ? * SUN *)"
      ```

------
#### [ Windows ]

      ```
      aws ssm create-association --name "AWS-ApplyChefRecipes" ^
          --targets "Key=tag:OS,Values=Linux" ^
          --parameters '{"SourceType":["S3"],"SourceInfo":["{\"path\":\"https://s3.amazonaws.com/amzn-s3-demo-bucket/HelloWorld\"}"], "RunList":["{\"recipe[HelloWorld::HelloWorldRecipe]\", \"recipe[HelloWorld::InstallApp]\"}"], "JsonAttributesContent": ["{\"state\": \"visible\",\"colors\": {\"foreground\": \"light-blue\",\"background\": \"dark-gray\"}}"], "ChefClientVersion": ["14"], "ChefClientArguments":["{--fips}"], "WhyRun": false, "ComplianceSeverity": ["Medium"], "ComplianceType": ["Custom:Chef"], "ComplianceReportBucket": ["ChefComplianceResultsBucket"]}' ^
          --association-name "name" ^
          --schedule-expression "cron(0 2 ? * SUN *)"
      ```

------

      The system creates the association, and unless your specified cron or rate expression prevents it, the system runs the association on the target nodes.
**Note**  
State Manager associations don't support all cron and rate expressions. For more information about creating cron and rate expressions for associations, see [Reference: Cron and rate expressions for Systems Manager](reference-cron-and-rate-expressions.md).

1. Run the following command to view the status of the association you just created. 

   ```
   aws ssm describe-association --association-id "ID"
   ```

## Viewing Chef resource compliance details


Systems Manager captures compliance information about Chef-managed resources in the Amazon S3 **Compliance report bucket** value that you specified when you ran the `AWS-ApplyChefRecipes` document. Searching for information about Chef resource failures in an S3 bucket can be time consuming. Instead, you can view this information on the Systems Manager **Compliance** page.

A Systems Manager Compliance scan collects information about resources on your managed nodes that were created or checked in the most recent Chef run. The resources can include files, directories, `systemd` services, `yum` packages, templated files, `gem` packages, and dependent cookbooks, among others.

The **Compliance resources summary** section displays a count of resources that failed. In the following example, the **ComplianceType** is **Custom:Chef** and one resource is noncompliant.

**Note**  
`Custom:Chef` is the default **ComplianceType** value in the `AWS-ApplyChefRecipes` document. This value is customizable.

![\[Viewing counts in the Compliance resources summary section of the Compliance page.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/state-manager-chef-compliance-summary.png)


The **Details overview for resources** section shows information about the AWS resource that isn't in compliance. This section also includes the Chef resource type against which compliance was run, the severity of the issue, the compliance status, and links to more information when applicable.

![\[Viewing compliance details for a Chef managed resource failure\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/state-manager-chef-compliance-details.png)


**View output** shows the last 4,000 characters of the detailed status. Systems Manager starts with the exception as the first element, finds verbose messages, and shows them until it reaches the 4,000 character quota. This process displays the log messages that were output before the exception was thrown, which are the most relevant messages for troubleshooting.
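The truncation behavior can be pictured with a simple sketch. This is only an illustration of keeping the tail of a log so the messages nearest the exception survive, not Systems Manager's actual implementation:

```python
def tail_of_log(detailed_status: str, limit: int = 4000) -> str:
    """Return at most the last `limit` characters of a status string,
    so the messages nearest the exception are what gets displayed."""
    return detailed_status[-limit:]

# A long log whose useful message is at the end.
log = "x" * 5000 + "FinalError: resource not found"
tail = tail_of_log(log)
print(len(tail))  # 4000
```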

For information about how to view compliance information, see [AWS Systems Manager Compliance](systems-manager-compliance.md).

**Important**  
If the State Manager association fails, no compliance data is reported. For example, if Systems Manager attempts to download a Chef cookbook from an S3 bucket that the node doesn't have permission to access, the association fails, and Systems Manager reports no compliance data.

# Walkthrough: Automatically update SSM Agent with the AWS CLI


The following procedure walks you through the process of creating a State Manager association using the AWS Command Line Interface. The association automatically updates the SSM Agent according to a schedule that you specify. For more information about SSM Agent, see [Working with SSM Agent](ssm-agent.md). To customize the update schedule for SSM Agent using the console, see [Automatically updating SSM Agent](ssm-agent-automatic-updates.md#ssm-agent-automatic-updates-console).

To be notified about SSM Agent updates, subscribe to the [SSM Agent Release Notes](https://github.com/aws/amazon-ssm-agent/blob/master/RELEASENOTES.md) page on GitHub.

**Before you begin**  
Before you complete the following procedure, verify that you have at least one running Amazon Elastic Compute Cloud (Amazon EC2) instance for Linux, macOS, or Windows Server that is configured for Systems Manager. For more information, see [Setting up managed nodes for AWS Systems Manager](systems-manager-setting-up-nodes.md). 

If you create an association by using either the AWS CLI or AWS Tools for Windows PowerShell, use the `--targets` parameter to target instances, as shown in the following example. Don't use the `--instance-id` parameter; it's a legacy parameter.

**To create an association for automatically updating SSM Agent**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run the following command to create an association by targeting instances using Amazon Elastic Compute Cloud (Amazon EC2) tags. Replace each *example resource placeholder* with your own information. The `Schedule` parameter sets a schedule to run the association every Sunday morning at 2:00 a.m. (UTC).

   State Manager associations don't support all cron and rate expressions. For more information about creating cron and rate expressions for associations, see [Reference: Cron and rate expressions for Systems Manager](reference-cron-and-rate-expressions.md).

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
   --targets Key=tag:tag_key,Values=tag_value \
   --name AWS-UpdateSSMAgent \
   --schedule-expression "cron(0 2 ? * SUN *)"
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association ^
   --targets Key=tag:tag_key,Values=tag_value ^
   --name AWS-UpdateSSMAgent ^
   --schedule-expression "cron(0 2 ? * SUN *)"
   ```

------

   You can target multiple instances by specifying instance IDs in a comma-separated list.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
   --targets Key=instanceids,Values=instance_ID,instance_ID,instance_ID \
   --name AWS-UpdateSSMAgent \
   --schedule-expression "cron(0 2 ? * SUN *)"
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association ^
   --targets Key=instanceids,Values=instance_ID,instance_ID,instance_ID ^
   --name AWS-UpdateSSMAgent ^
   --schedule-expression "cron(0 2 ? * SUN *)"
   ```

------

   You can specify the version of SSM Agent that you want to update to.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
   --targets Key=instanceids,Values=instance_ID,instance_ID,instance_ID \
   --name AWS-UpdateSSMAgent \
   --schedule-expression "cron(0 2 ? * SUN *)" \
   --parameters version=ssm_agent_version_number
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association ^
   --targets Key=instanceids,Values=instance_ID,instance_ID,instance_ID ^
   --name AWS-UpdateSSMAgent ^
   --schedule-expression "cron(0 2 ? * SUN *)" ^
   --parameters version=ssm_agent_version_number
   ```

------

   The system returns information like the following.

   ```
   {
       "AssociationDescription": {
           "ScheduleExpression": "cron(0 2 ? * SUN *)",
           "Name": "AWS-UpdateSSMAgent",
           "Overview": {
               "Status": "Pending",
               "DetailedStatus": "Creating"
           },
           "AssociationId": "123..............",
           "DocumentVersion": "$DEFAULT",
           "LastUpdateAssociationDate": 1504034257.98,
           "Date": 1504034257.98,
           "AssociationVersion": "1",
           "Targets": [
               {
                   "Values": [
                       "TagValue"
                   ],
                   "Key": "tag:TagKey"
               }
           ]
       }
   }
   ```

   The system attempts to create the association on the instance(s) and applies the state following creation. The association status shows `Pending`.

1. Run the following command to view an updated status of the association you created. 

   ```
   aws ssm list-associations
   ```

   If your instances *aren't* running the most recent version of the SSM Agent, the status shows `Failed`. When a new version of SSM Agent is published, the association automatically installs the new agent, and the status shows `Success`.
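If you script this check, you can read the status out of the CLI's JSON response. A minimal sketch (Python, using the abridged shape of the sample response shown above):

```python
import json

# Abridged shape of the JSON returned by create-association or
# describe-association (values from the sample response above).
response = json.loads("""
{
    "AssociationDescription": {
        "Name": "AWS-UpdateSSMAgent",
        "Overview": {"Status": "Pending", "DetailedStatus": "Creating"}
    }
}
""")

status = response["AssociationDescription"]["Overview"]["Status"]
print(status)  # Pending
```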

# Walkthrough: Automatically update PV drivers on EC2 instances for Windows Server


Amazon Machine Images (AMIs) for Windows Server contain a set of drivers that permit access to virtualized hardware. These drivers are used by Amazon Elastic Compute Cloud (Amazon EC2) to map instance store and Amazon Elastic Block Store (Amazon EBS) volumes to their devices. We recommend that you install the latest drivers to improve the stability and performance of your EC2 instances for Windows Server. For more information about PV drivers, see [AWS PV Drivers](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/xen-drivers-overview.html#xen-driver-awspv).

The following walkthrough shows you how to configure a State Manager association to automatically download and install new AWS PV drivers when the drivers become available. State Manager is a tool in AWS Systems Manager.

**Before you begin**  
Before you complete the following procedure, verify that you have at least one Amazon EC2 instance for Windows Server running that is configured for Systems Manager. For more information, see [Setting up managed nodes for AWS Systems Manager](systems-manager-setting-up-nodes.md). 

**To create a State Manager association that automatically updates PV drivers**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **State Manager**.

1. Choose **Create association**.

1. In the **Name** field, enter a descriptive name for the association.

1. In the **Document** list, choose `AWS-ConfigureAWSPackage`.

1. In the **Parameters** area, do the following:
   + For **Action**, choose **Install**.
   + For **Installation type**, choose **Uninstall and reinstall**.
**Note**  
In-place upgrades are not supported for this package. It must be uninstalled and reinstalled.
   + For **Name**, enter **AWSPVDriver**.

     You don't need to enter anything for **Version** and **Additional Arguments**.

1. In the **Targets** section, choose the managed nodes on which you want to run this operation by specifying tags, selecting instances or edge devices manually, or specifying a resource group.
**Tip**  
If a managed node you expect to see isn't listed, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md) for troubleshooting tips.
**Note**  
If you choose to target instances by using tags, and you specify tags that map to Linux instances, the association succeeds on the Windows Server instance but fails on the Linux instances. The overall status of the association shows **Failed**.

1. In the **Specify schedule** area, choose whether to run the association on a schedule that you configure, or just once. Updated PV drivers are released several times a year, so you can schedule the association to run once a month, if you want.

1. In the **Advanced options** area, for **Compliance severity**, choose a severity level for the association. Compliance reporting indicates whether the association state is compliant or noncompliant, along with the severity level you indicate here. For more information, see [About State Manager association compliance](compliance-about.md#compliance-about-association).

1. For **Rate control**:
   + For **Concurrency**, specify either a number or a percentage of managed nodes on which to run the command at the same time.
**Note**  
If you selected targets by specifying tags applied to managed nodes or by specifying AWS resource groups, and you aren't certain how many managed nodes are targeted, then restrict the number of targets that can run the document at the same time by specifying a percentage.
   + For **Error threshold**, specify when to stop running the command on other managed nodes after it fails on either a number or a percentage of nodes. For example, if you specify three errors, then Systems Manager stops sending the command when the fourth error is received. Managed nodes still processing the command might also send errors.

1. (Optional) For **Output options**, to save the command output to a file, select the **Enable writing output to S3** box. Enter the bucket and prefix (folder) names in the boxes.
**Note**  
The S3 permissions that grant the ability to write the data to an S3 bucket are those of the instance profile assigned to the managed node, not those of the IAM user performing this task. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md) or [Create an IAM service role for a hybrid environment](hybrid-multicloud-service-role.md). In addition, if the specified S3 bucket is in a different AWS account, verify that the instance profile or IAM service role associated with the managed node has the necessary permissions to write to that bucket.

1. (Optional) In the **CloudWatch alarm** section, for **Alarm name**, choose a CloudWatch alarm to apply to your association for monitoring. 
**Note**  
Note the following information about this step.  
The alarms list displays a maximum of 100 alarms. If you don't see your alarm in the list, use the AWS Command Line Interface to create the association. For more information, see [Create an association (command line)](state-manager-associations-creating.md#create-state-manager-association-commandline).
To attach a CloudWatch alarm to your command, the IAM principal that creates the association must have permission for the `iam:createServiceLinkedRole` action. For more information about CloudWatch alarms, see [Using Amazon CloudWatch alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html).
If your alarm activates, any pending command invocations or automations do not run.

1. Choose **Create association**, and then choose **Close**. The system attempts to create the association on the instances and immediately apply the state. 

   If you created the association on one or more Amazon EC2 instances for Windows Server, the status changes to **Success**. If your instances aren't configured for Systems Manager, or if you inadvertently targeted Linux instances, the status shows **Failed**.

   If the status is **Failed**, choose the association ID, choose the **Resources** tab, and then verify that the association was successfully created on your EC2 instances for Windows Server. If EC2 instances for Windows Server show a status of **Failed**, verify that the SSM Agent is running on the instance, and verify that the instance is configured with an AWS Identity and Access Management (IAM) role for Systems Manager. For more information, see [Setting up Systems Manager unified console for an organization](systems-manager-setting-up-organizations.md).