

# Prerequisites
<a name="sap-nw-pacemaker-sles-prerequisites"></a>

**Topics**
+ [AWS Infrastructure Setup](sap-nw-pacemaker-sles-infra-setup.md)
+ [EC2 Instance Configuration](sap-nw-pacemaker-sles-ec2-configuration.md)
+ [Operating System Requirements](sap-nw-pacemaker-sles-os-settings.md)

# AWS Infrastructure Setup
<a name="sap-nw-pacemaker-sles-infra-setup"></a>

This section covers the one-time setup tasks required to prepare your AWS environment for the cluster deployment:

**Note**  
We recommend using administrative privileges from an administrative workstation or AWS Console for the initial infrastructure setup instead of granting instance-based privileges, as this maintains the principle of least privilege. Infrastructure setup APIs (such as CreateRoute, ModifyInstanceAttribute, and CreateTags) are only required during initial configuration and are not needed for ongoing cluster operations.

**Topics**
+ [Create IAM Roles and Policies for Pacemaker](#iam-roles-sles)
+ [Modify Security Groups for Cluster Communication](#sg-sles)
+ [Add VPC Route Table Entries for Overlay IPs](#rt-sles)

## Create IAM Roles and Policies for Pacemaker
<a name="iam-roles-sles"></a>

In addition to the permissions required for standard SAP operations, two IAM policies are required for the cluster to control AWS resources. These policies must be assigned to your Amazon EC2 instances using an IAM role. This allows the instances, and therefore the cluster, to call AWS services.

**Note**  
Create policies with least-privilege permissions, granting access to only the specific resources that are required within the cluster. For multiple clusters, you may need to create multiple policies.

For more information, see [IAM roles for Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#ec2-instance-profile).

### STONITH Policy
<a name="stonith-policy-nw-sles"></a>

The SLES STONITH resource agent (external/ec2) requires permission to start and stop both nodes of the cluster. Create a policy as shown in the following example. Attach this policy to the IAM role assigned to both Amazon EC2 instances in the cluster.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeTags"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:StartInstances",
        "ec2:StopInstances"
      ],
      "Resource": [
        "arn:aws:ec2:us-east-1:123456789012:instance/i-1234567890abcdef0",
        "arn:aws:ec2:us-east-1:123456789012:instance/i-0fedcba9876543210"
      ]
    }
  ]
}
```
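As a sketch, you can generate this policy document from the shell with your own values before attaching it. The account ID, region, and instance IDs below are placeholders, not values from any real environment:

```shell
# Generate the STONITH policy with placeholder values; substitute your own.
ACCOUNT_ID=123456789012
REGION=us-east-1
INSTANCE_1=i-1234567890abcdef0
INSTANCE_2=i-0fedcba9876543210

cat > stonith-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:DescribeInstances", "ec2:DescribeTags"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["ec2:StartInstances", "ec2:StopInstances"],
      "Resource": [
        "arn:aws:ec2:${REGION}:${ACCOUNT_ID}:instance/${INSTANCE_1}",
        "arn:aws:ec2:${REGION}:${ACCOUNT_ID}:instance/${INSTANCE_2}"
      ]
    }
  ]
}
EOF

# Sanity check: two Allow statements are expected
grep -c '"Effect": "Allow"' stonith-policy.json
```

The generated file can then be supplied to `aws iam create-policy` with `--policy-document file://stonith-policy.json`.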

### AWS Overlay IP Policy
<a name="overlay-policy-nw-sles"></a>

The SLES Overlay IP resource agent (aws-vpc-move-ip) requires permission to modify a routing entry in route tables. Create a policy as shown in the following example. Attach this policy to the IAM role assigned to both Amazon EC2 instances in the cluster.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:ReplaceRoute",
            "Resource": [
                "arn:aws:ec2:us-east-1:123456789012:route-table/rtb-0123456789abcdef0",
                "arn:aws:ec2:us-east-1:123456789012:route-table/rtb-0123456789abcdef1"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "ec2:DescribeRouteTables",
            "Resource": "*"
        }
    ]
}
```

### Shared VPC (optional)
<a name="_shared_vpc_optional"></a>

**Note**  
The following directions are only required for setups which include a Shared VPC.

Amazon VPC sharing enables you to share subnets with other AWS accounts within the same organization in AWS Organizations. Amazon EC2 instances can be deployed using the subnets of the shared Amazon VPC.

In the pacemaker cluster, the aws-vpc-move-ip resource agent has been enhanced to support a shared VPC setup while maintaining backward compatibility with existing features.

The following checks and changes are required. We refer to the AWS account that owns the Amazon VPC as the sharing VPC account, and to the consumer account where the cluster nodes are deployed as the cluster account.

**Minimum Version Requirements**  
The latest version of the aws-vpc-move-ip agent shipped with SLES 15 SP3 supports the shared VPC setup by default. The following are the minimum versions required to support a shared VPC setup:
+ SLES 12 SP5 - resource-agents-4.3.018.a7fb5035-3.79.1.x86\_64
+ SLES 15 SP2 - resource-agents-4.4.0+git57.70549516-3.30.1.x86\_64
+ SLES 15 SP3 - resource-agents-4.8.0+git30.d0077df0-8.5.1
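Because these version strings mix numeric and git-hash components, a plain string comparison is unreliable. A minimal sketch of a version-aware check using `sort -V` follows; the installed version shown is an example, and in practice would come from `rpm -q resource-agents --qf '%{VERSION}-%{RELEASE}'`:

```shell
# Minimum version for SLES 15 SP2 and an example installed version
min="4.4.0+git57.70549516-3.30.1"
installed="4.8.0+git30.d0077df0-8.5.1"

# sort -V orders version strings numerically; if the minimum sorts first,
# the installed version is greater than or equal to it.
if [ "$(printf '%s\n' "$min" "$installed" | sort -V | head -n1)" = "$min" ]; then
    echo "resource-agents $installed meets the shared VPC minimum"
else
    echo "resource-agents $installed is too old; update the package"
fi
```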

**IAM Roles and Policies**  
Using the Overlay IP agent with a shared Amazon VPC requires a different set of IAM permissions to be granted on both AWS accounts (sharing VPC account and cluster account).

**Sharing VPC Account**  
In the sharing VPC account, create an IAM role to delegate permissions to the EC2 instances that will be part of the cluster. During the IAM role creation, select "Another AWS account" as the type of trusted entity, and enter the AWS account ID where the EC2 instances will be deployed.

After the IAM role has been created, create the following IAM policy in the sharing VPC account, and attach it to the IAM role. Add or remove route table entries as needed.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "ec2:ReplaceRoute",
      "Resource": [
        "arn:aws:ec2:us-east-1:123456789012:route-table/rtb-0123456789abcdef0",
        "arn:aws:ec2:us-east-1:123456789012:route-table/rtb-0123456789abcdef1"
      ]
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "ec2:DescribeRouteTables",
      "Resource": "*"
    }
  ]
}
```

Next, move to the **Trust relationships** tab in the IAM role, and ensure that the AWS account you entered while creating the role has been correctly added.

In the cluster account, create the following IAM policies, and attach them to an IAM role. This is the IAM role that is going to be attached to the EC2 instances.

 **STS Policy** 

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::123456789012:role/sharing-vpc-account-cluster-role"
    }
  ]
}
```

 **STONITH Policy** 

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "ec2:StartInstances",
        "ec2:StopInstances"
      ],
      "Resource": [
        "arn:aws:ec2:us-east-1:123456789012:instance/i-1234567890abcdef0",
        "arn:aws:ec2:us-east-1:123456789012:instance/i-0fedcba9876543210"
      ]
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "ec2:DescribeInstances",
      "Resource": "*"
    }
  ]
}
```

## Modify Security Groups for Cluster Communication
<a name="sg-sles"></a>

A security group controls the traffic that is allowed to reach and leave the resources that it is associated with. For more information, see [Control traffic to your AWS resources using security groups](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html).

In addition to the standard ports required to access SAP and administrative functions, the following rules must be applied to the security groups assigned to all Amazon EC2 instances in the cluster.


| Source | Protocol | Port range | Description | 
| --- | --- | --- | --- | 
|  The security group ID (its own resource ID)  |  UDP  |  5405  |  Allows UDP traffic between cluster resources for corosync communication  | 
|  Bastion host security group or CIDR range for administration  |  TCP  |  7630  |  (optional) Used for SLES Hawk2 Interface for monitoring and administration using a Web Interface. For more details, see SUSE documentation [Configuring and Managing Cluster Resources with Hawk2](https://documentation.suse.com/sle-ha/15-SP6/html/SLE-HA-all/cha-ha-manage-resources.html#sec-conf-hawk2-manage-edit).  | 
+ Note the use of the `UDP` protocol.
+ If you are running a local firewall, such as iptables, ensure that communication on the preceding ports is allowed between the two Amazon EC2 instances.

## Add VPC Route Table Entries for Overlay IPs
<a name="rt-sles"></a>

You need to add initial route table entries for the Overlay IP. For more information on Overlay IP, see [AWS – Overlay IP](sap-nw-pacemaker-sles-concepts.md#overlay-ip-sles).

Add entries to the VPC route table or tables associated with the subnets of your Amazon EC2 instance for the cluster. The entries for destination (Overlay IP CIDR) and target (Amazon EC2 instance or ENI) must be added manually for the ASCS and the ERS. This ensures that the cluster resource has a route to modify. It also supports the install of SAP using the virtual names associated with the Overlay IP before the configuration of the cluster.

Using either the Amazon VPC console or an AWS CLI command, add a route to the table or tables for the Overlay IP.

------
#### [  AWS Console ]

1. Identify the EC2 instance IDs for both cluster nodes and determine which route tables are associated with their subnets. For details, see [Parameter Reference](sap-nw-pacemaker-sles-parameters.md#sap-pacemaker-resource-parameters-nw-sles) 

1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc.

1. In the navigation pane, choose **Route Tables**, and select the first route table.

1. Choose **Actions** → **Edit routes**.

1. Choose **Add route** and configure the ASCS route. For **Destination**, enter the ASCS Overlay IP CIDR (`<ascs_overlayip>/32`). For **Target**, choose the instance ID of the first cluster node.

1. Choose **Add route** and configure the ERS route. For **Destination**, enter the ERS Overlay IP CIDR (`<ers_overlayip>/32`). For **Target**, choose the instance ID of the second cluster node.

1. Choose **Save changes**.

1. Repeat for any additional associated route tables or route tables from the VPC which require connectivity to the ASCS.

   Your route table now includes entries for required Overlay IPs, in addition to the standard routes.

------
#### [  AWS CLI ]

Identify the EC2 instance IDs for both cluster nodes and determine which route tables are associated with their subnets. For details, see [Parameter Reference](sap-nw-pacemaker-sles-parameters.md#sap-pacemaker-resource-parameters-nw-sles).

For the ASCS:

```
$ aws ec2 create-route --route-table-id <routetable_id> --destination-cidr-block <ascs_overlayip>/32 --instance-id <instance_id_1>
```

For the ERS:

```
$ aws ec2 create-route --route-table-id <routetable_id> --destination-cidr-block <ers_overlayip>/32 --instance-id <instance_id_2>
```
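If several route tables need the entries, the two commands above can be scripted. The following sketch only prints the commands it would run; the route table IDs, overlay IPs, and instance ID are placeholders, and dropping the `echo` would execute the calls:

```shell
# Placeholders for your environment
route_tables="rtb-0123456789abcdef0 rtb-0123456789abcdef1"
ascs_overlayip="10.1.1.10"
ers_overlayip="10.1.1.11"
instance_id="i-1234567890abcdef0"

# Print (not run) one create-route call per overlay IP per route table
for rtb in $route_tables; do
    echo aws ec2 create-route --route-table-id "$rtb" \
        --destination-cidr-block "${ascs_overlayip}/32" --instance-id "$instance_id"
    echo aws ec2 create-route --route-table-id "$rtb" \
        --destination-cidr-block "${ers_overlayip}/32" --instance-id "$instance_id"
done
```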

------

# EC2 Instance Configuration
<a name="sap-nw-pacemaker-sles-ec2-configuration"></a>

Amazon EC2 instance settings can be applied using Infrastructure as Code, or manually using the AWS Command Line Interface or the AWS Console. We recommend Infrastructure as Code automation to reduce manual steps and ensure consistency.

**Topics**
+ [Assign or Review Pacemaker IAM Role](#assign-review-pacemaker-iam-role-nw-sles)
+ [Assign or Review Security Groups](#assign-review-security-groups-nw-sles)
+ [Assign Secondary IP Addresses](#assign-secondary-ip-addresses-nw-sles)
+ [Disable Source/Destination Check](#source-dest-nw-sles)
+ [Review Stop Protection](#stop-protection-nw-sles)
+ [Review Automatic Recovery](#auto-recovery-nw-sles)
+ [Create Amazon EC2 Resource Tags Used by Amazon EC2 STONITH Agent](#create-cluster-tags-nw-sles)

**Important**  
The following configurations must be performed on all cluster nodes. Ensure consistency across nodes to prevent cluster issues.

## Assign or Review Pacemaker IAM Role
<a name="assign-review-pacemaker-iam-role-nw-sles"></a>

The two cluster resource IAM policies must be assigned to an IAM role associated with your Amazon EC2 instance. If an IAM role is not associated with your instance, create a new IAM role for cluster operations.

The following AWS Console or AWS CLI commands can be used to modify the IAM role assignment.

------
#### [  AWS Console ]

1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2.

1. Select one of your cluster nodes.

1. Choose **Actions** → **Security** → **Modify IAM role**.

1. Choose the IAM role that contains the policies created in [Create IAM Roles and Policies for Pacemaker](sap-nw-pacemaker-sles-infra-setup.md#iam-roles-sles).

1. Choose **Update IAM role**.

1. Repeat these steps for all nodes in the cluster.

------
#### [  AWS CLI ]

To assign an IAM role using the AWS CLI:

```
$ aws ec2 associate-iam-instance-profile --instance-id <instance_id> --iam-instance-profile Name=<iam_instance_profile_name>
```

Repeat for all nodes in the cluster.

------

You can verify the IAM role assignment on your instances using the AWS CLI:

```
$ aws ec2 describe-instances --instance-ids <instance_id> --query 'Reservations[0].Instances[0].IamInstanceProfile' --output table
```

You can check the specific permissions of the roles created for pacemaker in [Create IAM Roles and Policies for Pacemaker](sap-nw-pacemaker-sles-infra-setup.md#iam-roles-sles) by running the following commands on both of your instances.

When `--dry-run` is used, the AWS CLI or SDK sends the request to the EC2 service with this flag. EC2 then performs all necessary permission checks and validates the request parameters. If the caller has the required permissions and the request is well-formed, the service returns a `DryRunOperation` error response, indicating that the operation would have succeeded.

Check that the tags are correctly set and can be queried from both instances if using the external/ec2 STONITH agent:

```
$ aws ec2 describe-tags --filters "Name=resource-id,Values=<instance_id_1>" "Name=key,Values=<cluster_tag>" --region=<region> --output=text | cut -f5
```

Check that the fencing resource has the permission to shut down both instances:

```
$ aws ec2 stop-instances --instance-ids <instance_id_1> --dry-run
$ aws ec2 stop-instances --instance-ids <instance_id_2> --dry-run
```

Check that the overlay IP resource has the permissions to update the route tables:

```
$ aws ec2 replace-route --route-table-id <routetable_id> --destination-cidr-block <ascs_overlayip>/32 --instance-id <instance_id_1> --dry-run
```
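The result of a `--dry-run` call is reported as an error on stderr. A small helper like the following can classify the captured text; `DryRunOperation` and `UnauthorizedOperation` are the error codes EC2 returns for a permitted and a denied dry run, and the message passed in below is simulated for illustration:

```shell
# Classify the stderr text of an "aws ... --dry-run" call
check_dry_run() {
    case "$1" in
        *DryRunOperation*)       echo "permission OK" ;;
        *UnauthorizedOperation*) echo "permission MISSING" ;;
        *)                       echo "unexpected response" ;;
    esac
}

# Example with a simulated message
check_dry_run "An error occurred (DryRunOperation) when calling the StopInstances operation: Request would have succeeded, but DryRun flag is set."
```

In practice, capture stderr with `result=$(aws ec2 stop-instances --instance-ids <instance_id_1> --dry-run 2>&1)` and pass `$result` to the helper.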

## Assign or Review Security Groups
<a name="assign-review-security-groups-nw-sles"></a>

The security group rules created in the AWS [Modify Security Groups for Cluster Communication](sap-nw-pacemaker-sles-infra-setup.md#sg-sles) section must be assigned to your Amazon EC2 instances. If a security group is not associated with your instance, or if the required rules are not present in the assigned security group, add the security group or update the rules.

The following AWS Console or AWS CLI commands can be used to modify security group assignments.

------
#### [  AWS Console ]

1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2.

1. Select one of your cluster nodes.

1. In the **Security** tab, review the security groups, ports, and source of traffic.

1. If required, choose **Actions** → **Security** → **Change security groups**.

1. Under **Associated security groups**, search for and select the required groups.

1. Choose **Save**.

1. Repeat these steps for all nodes in the cluster.

------
#### [  AWS CLI ]

To modify security groups using the AWS CLI:

```
$ aws ec2 modify-instance-attribute --instance-id <instance_id> --groups <security_group_id1> <security_group_id2>
```

Repeat for all nodes in the cluster.

------

You can verify the security group rules on your instances using the AWS CLI:

```
$ aws ec2 describe-instance-attribute --instance-id <instance_id> --attribute groupSet
```

## Assign Secondary IP Addresses
<a name="assign-secondary-ip-addresses-nw-sles"></a>

Secondary IP addresses are used to create a redundant communication channel (secondary ring) in corosync for clusters. The cluster nodes can use the secondary ring to communicate in case of underlying network disruptions.

These IPs are only used in cluster configurations. The secondary IPs provide the same fault tolerance as a secondary Elastic Network Interface (ENI). For more information, see [Secondary IP addresses for your EC2 Instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-secondary-ip-addresses.html).

The following AWS Console or AWS CLI commands can be used to assign secondary IP addresses.

------
#### [  AWS Console ]

1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2.

1. Select one of your cluster nodes.

1. In the **Networking** tab, choose the network interface ID.

1. Choose **Actions** → **Manage IP addresses**.

1. Choose **Assign new IP address**.

1. Select **Auto-assign** or specify an IP from the subnet range.

1. Choose **Yes, Update**.

1. Repeat these steps for all nodes in the cluster.

------
#### [  AWS CLI ]

To assign secondary IP addresses using the AWS CLI:

```
$ ENI_ID=$(aws ec2 describe-instances --instance-id <instance_id> \
    --query 'Reservations[0].Instances[0].NetworkInterfaces[0].NetworkInterfaceId' \
    --output text)
$ aws ec2 assign-private-ip-addresses --network-interface-id $ENI_ID --secondary-private-ip-address-count 1
```

Repeat for all nodes in the cluster.

------

You can verify the secondary IP configuration on your instances using the AWS CLI:

```
$ aws ec2 describe-instances --instance-id <instance_id> \
    --query 'Reservations[*].Instances[*].NetworkInterfaces[*].PrivateIpAddresses[*].PrivateIpAddress' \
    --output text
```

Verify that:
+ Each instance returns two IP addresses from the same subnet
+ The primary network interface (eth0) has both IPs assigned
+ The secondary IPs will be used later for ring0\_addr and ring1\_addr in corosync.conf
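A quick sketch of the first check, counting the addresses returned by the query. The two IPs below simulate the `--output text` result; in practice, capture it from the `describe-instances` command above:

```shell
# Simulated output of the describe-instances private-IP query
ips="10.1.10.10 10.1.10.20"   # tab- or space-separated text output

# Count the returned addresses; the cluster setup expects two per instance
count=$(echo $ips | wc -w)
if [ "$count" -eq 2 ]; then
    echo "OK: found $count private IPs on the primary interface"
else
    echo "Expected 2 private IPs, found $count"
fi
```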

## Disable Source/Destination Check
<a name="source-dest-nw-sles"></a>

Amazon EC2 instances perform source/destination checks by default, which require that an instance be either the source or the destination of any traffic it sends or receives. In the pacemaker cluster, the source/destination check must be disabled on both instances so that they can receive traffic addressed to the Overlay IP.

The following AWS Console or AWS CLI commands can be used to modify the attribute.

------
#### [  AWS Console ]

1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2.

1. Select one of your cluster nodes.

1. Choose **Actions** → **Networking** → **Change source/destination check**.

1. For Source/Destination Checking, choose **Stop** to allow traffic when the source or destination is not the instance itself.

1. Repeat these steps for all nodes in the cluster.

------
#### [  AWS CLI ]

To modify using the AWS CLI (requires appropriate configuration permissions):

```
$ aws ec2 modify-instance-attribute --instance-id <instance_id> --no-source-dest-check
```

Repeat for all nodes in the cluster.

------

To confirm the value of an attribute for a particular instance, use the following command. The value `false` means that source/destination checking is disabled.

```
$ aws ec2 describe-instance-attribute --instance-id <instance_id> --attribute sourceDestCheck
```

The output:

```
{
    "InstanceId": "i-xxxxinstidforhost1",
    "SourceDestCheck": {
        "Value": false
    }
}
```
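To script this check, you can test the returned value directly. A sketch using the JSON shape shown above, fed here from a string instead of a live call:

```shell
# Simulated output of: aws ec2 describe-instance-attribute ... --attribute sourceDestCheck
out='{ "InstanceId": "i-xxxxinstidforhost1", "SourceDestCheck": { "Value": false } }'

# A grep-based check, assuming the output shape shown above
if echo "$out" | grep -q '"Value": false'; then
    echo "source/destination check is disabled"
else
    echo "WARNING: source/destination check is still enabled"
fi
```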

## Review Stop Protection
<a name="stop-protection-nw-sles"></a>

To ensure that STONITH actions can be executed, stop protection must be disabled for Amazon EC2 instances that are part of a pacemaker cluster. If the default settings have been modified, use the following commands to disable stop protection on both instances.

The following AWS Console or CLI commands can be used to modify the attribute.

------
#### [  AWS Console ]

1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2.

1. Select one of your cluster nodes.

1. Choose **Actions** → **Instance settings** → **Change stop protection**.

1. Ensure **Stop protection** is not enabled.

1. Repeat these steps for all nodes in the cluster.

------
#### [  AWS CLI ]

To modify using the AWS CLI (requires appropriate configuration permissions):

```
$ aws ec2 modify-instance-attribute --instance-id <instance_id> --no-disable-api-stop
```

Repeat this command for all nodes in the cluster.

------

To confirm the value of an attribute for a particular instance, use the following command. The value `false` means that the instance can be stopped using the AWS CLI.

```
$ aws ec2 describe-instance-attribute --instance-id <instance_id> --attribute disableApiStop
```

The output:

```
{
    "InstanceId": "i-xxxxinstidforhost1",
    "DisableApiStop": {
        "Value": false
    }
}
```

## Review Automatic Recovery
<a name="auto-recovery-nw-sles"></a>

After a failure, cluster-controlled operations must be resumed in a coordinated way. This helps ensure that the cause of failure is known and addressed, and that the status of the cluster is as expected (for example, that there are no pending fencing actions). For this reason, disable Amazon EC2 automatic recovery on the cluster nodes so that the cluster, not Amazon EC2, controls recovery.

The following AWS Console or CLI commands can be used to modify the attribute.

------
#### [  AWS Console ]

1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2.

1. Select one of your cluster nodes.

1. Choose **Actions** → **Instance settings** → **Change auto-recovery behavior**.

1. Select **Off** to disable auto-recovery for system status check failures.

1. Repeat these steps for all nodes in the cluster.

------
#### [  AWS CLI ]

To modify auto-recovery settings (requires appropriate configuration permissions):

```
$ aws ec2 modify-instance-maintenance-options --instance-id <instance_id> --auto-recovery disabled
```

Repeat this command for all nodes in the cluster.

------

To confirm the value of an attribute for a particular instance, use the following command. The value `disabled` means that auto-recovery will not be attempted.

```
$ aws ec2 describe-instances --instance-ids <instance_id> --query 'Reservations[*].Instances[*].MaintenanceOptions.AutoRecovery'
```

The output:

```
[
    [
        "disabled"
    ]
]
```

## Create Amazon EC2 Resource Tags Used by Amazon EC2 STONITH Agent
<a name="create-cluster-tags-nw-sles"></a>

The Amazon EC2 STONITH agent uses AWS resource tags to identify Amazon EC2 instances. Create a tag for the primary and secondary Amazon EC2 instances using the AWS Console or AWS CLI. For more information, see [Using Tags](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html).

Use the same tag key across instances, with the value set to the local hostname returned by the `hostname` command. For example, a configuration with the values defined in Global AWS parameters would require the tags shown in the following table.


| Amazon EC2 | Key example | Value example | 
| --- | --- | --- | 
|   `<instance_id>`   |   `<cluster_tag>`   |   `<hostname>`   | 
|   `i-xxxxinstidforhost1`   |   `pacemaker`   |   `slxhost01`   | 
|   `i-xxxxinstidforhost2`   |   `pacemaker`   |   `slxhost02`   | 

The following AWS Console or AWS CLI commands can be used to create resource tags.

------
#### [  AWS Console ]

1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2.

1. Select one of your cluster nodes.

1. In the **Tags** tab, choose **Manage tags**.

1. Choose **Add tag**.

1. For **Key**, enter the cluster tag (for example, `pacemaker`).

1. For **Value**, enter the hostname of the instance.

1. Choose **Save**.

1. Repeat these steps for all nodes in the cluster.

------
#### [  AWS CLI ]

To create tags using the AWS CLI:

```
$ aws ec2 create-tags --resources <instance_id> --tags Key=<cluster_tag>,Value=<hostname>
```

Repeat for all nodes in the cluster with their respective hostnames.

------

You can run the following command locally to validate the tag values and the IAM permissions to describe the tags. Run this command on every instance in the cluster, once for each instance in the cluster.

```
$ aws ec2 describe-tags --filters "Name=resource-id,Values=<instance_id>" "Name=key,Values=<cluster_tag>" --region=<region> --output=text | cut -f5
```
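Because the STONITH agent matches the tag value against the local hostname, it is worth scripting that comparison on each node. A sketch follows; the tag value is simulated here, and in practice it is the output of the `describe-tags` command above:

```shell
# Simulated tag value; replace with: aws ec2 describe-tags ... | cut -f5
tag_value="$(hostname)"

# The STONITH agent expects the tag value to equal the local hostname
if [ "$tag_value" = "$(hostname)" ]; then
    echo "tag value matches local hostname"
else
    echo "MISMATCH: tag='$tag_value' hostname='$(hostname)'"
fi
```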

# Operating System Requirements
<a name="sap-nw-pacemaker-sles-os-settings"></a>

This section outlines the required operating system configurations for SUSE Linux Enterprise Server for SAP (SLES for SAP) cluster nodes. Note that this is not a comprehensive list of configuration requirements for running SAP on AWS, but rather focuses specifically on cluster management prerequisites.

Consider using configuration management tools or automated deployment scripts to ensure accurate and repeatable setup across your cluster infrastructure.

**Topics**
+ [Root Access](#_root_access)
+ [Install Missing Operating System Packages](#packages-nw-sles)
+ [Update and Check Operating System Versions](#_update_and_check_operating_system_versions)
+ [System Logging](#_system_logging)
+ [Time Synchronization Services](#_time_synchronization_services)
+ [Install AWS CLI and Configure Profiles](#install_shared_aws_cli_and_configure_profiles)
+ [Pacemaker Proxy Settings (Optional)](#_pacemaker_proxy_settings_optional)

**Important**  
The following configurations must be performed on all cluster nodes. Ensure consistency across nodes to prevent cluster issues.

## Root Access
<a name="_root_access"></a>

Verify root access on both cluster nodes. Most of the setup commands in this document are performed as the root user. Assume that commands should be run as root unless explicitly stated otherwise.

## Install Missing Operating System Packages
<a name="packages-nw-sles"></a>

This is applicable to all cluster nodes. You must install any missing operating system packages.

The following packages and their dependencies are required for the pacemaker setup. Depending on your baseline image, for example, SLES for SAP, these packages may already be installed.


| Package | Description | Category | Required | Configuration Pattern | 
| --- | --- | --- | --- | --- | 
|  chrony  |  Time Synchronization  |  System Support  |  Mandatory  |  All  | 
|  rsyslog  |  System Logging  |  System Support  |  Mandatory  |  All  | 
|  pacemaker  |  Cluster Resource Manager  |  Core Cluster  |  Mandatory  |  All  | 
|  corosync  |  Cluster Communication Engine  |  Core Cluster  |  Mandatory  |  All  | 
|  resource-agents  |  Resource Agents including SAPInstance  |  Core Cluster  |  Mandatory  |  All  | 
|  fence-agents  |  Fencing Capabilities  |  Core Cluster  |  Mandatory  |  All  | 
|  sap-suse-cluster-connector  |  SAP HA-Script Connector (≥3.1.1 for SimpleMount)  |  SAP Integration  |  Mandatory  |  All  | 
|  sapstartsrv-resource-agents  |  SAP Start Service Resource Agents  |  SAP Integration  |  Mandatory\*  |  SimpleMount  | 
|  supportutils  |  System Information Gathering  |  Support Tools  |  Recommended  |  All  | 
|  sysstat  |  Performance Monitoring Tools  |  Support Tools  |  Recommended  |  All  | 
|  zypper-lifecycle-plugin  |  Software Lifecycle Management  |  Support Tools  |  Recommended  |  All  | 
|  supportutils-plugin-ha-sap  |  HA/SAP Support Data Collection  |  Support Tools  |  Recommended  |  All  | 
|  supportutils-plugin-suse-public-cloud  |  Cloud Support Data Collection  |  Support Tools  |  Recommended  |  All  | 
|  dstat  |  System Resource Statistics  |  Monitoring  |  Recommended  |  All  | 
|  iotop  |  I/O Monitoring  |  Monitoring  |  Recommended  |  All  | 

**Note**  
Refer to [Vendor Support of Deployment Types](sap-nw-pacemaker-sles-references.md#deployments-sles) for more information on Configuration Patterns. `Mandatory*` indicates that this package is mandatory based on the Configuration Pattern.

```
#!/bin/bash
# Mandatory core packages for SAP NetWeaver HA on AWS
mandatory_packages="corosync pacemaker resource-agents fence-agents rsyslog chrony sap-suse-cluster-connector"

# SimpleMount specific packages
simplemount_packages="sapstartsrv-resource-agents"

# Recommended monitoring and support packages
support_packages="supportutils supportutils-plugin-ha-sap supportutils-plugin-suse-public-cloud sysstat dstat iotop zypper-lifecycle-plugin"

# Default to checking all packages
packages="${mandatory_packages} ${simplemount_packages} ${support_packages}"

missingpackages=""

echo "Checking SAP NetWeaver HA package requirements..."
echo "Note: sapstartsrv-resource-agents is only required for SimpleMount architecture"

for package in ${packages}; do
    echo "Checking if ${package} is installed..."
    if ! rpm -q ${package} --quiet; then
        echo " ${package} is missing and needs to be installed"
        missingpackages="${missingpackages} ${package}"
    fi
done

if [ -z "$missingpackages" ]; then
    echo "All packages are installed."
else
    echo "Missing mandatory packages: $(echo ${missingpackages} | tr ' ' '\n' | grep -E "^($(echo ${mandatory_packages} | tr ' ' '|'))$")"
    echo "Missing SimpleMount packages: $(echo ${missingpackages} | tr ' ' '\n' | grep -E "^($(echo ${simplemount_packages} | tr ' ' '|'))$")"
    echo "Missing support packages: $(echo ${missingpackages} | tr ' ' '\n' | grep -E "^($(echo ${support_packages} | tr ' ' '|'))$")"

    echo -n "Do you want to install the missing packages (y/n)? "
    read response
    if [ "$response" = "y" ]; then
        zypper install -y $missingpackages
    fi
fi

# Check sap-suse-cluster-connector version if installed
if rpm -q sap-suse-cluster-connector --quiet; then
    version=$(rpm -q sap-suse-cluster-connector --qf '%{VERSION}')
    echo "sap-suse-cluster-connector version: $version"
    # Version-aware comparison: SimpleMount requires 3.1.1 or higher
    if [ "$(printf '%s\n' "3.1.1" "$version" | sort -V | head -n1)" = "3.1.1" ]; then
        echo "sap-suse-cluster-connector version is suitable for SimpleMount architecture"
    else
        echo "WARNING: SimpleMount architecture requires sap-suse-cluster-connector version 3.1.1 or higher"
    fi
fi
```

If a package is not installed, and you are unable to install it using zypper, it may be because the SUSE Linux Enterprise High Availability Extension is not available as a repository in your chosen image. You can verify the availability of the extension using the following command:

```
$ sudo zypper repos
```

To install or update a package or packages with confirmation, use the following command:

```
$ sudo zypper install <package_name(s)>
```

## Update and Check Operating System Versions
<a name="_update_and_check_operating_system_versions"></a>

You must update and confirm versions across nodes. Apply all the latest patches to your operating system versions. This ensures that bugs are addressed and new features are available.

You can update the patches individually or update all system patches using the `zypper update` command. A clean reboot is recommended prior to setting up a cluster.

```
$ sudo zypper update
$ sudo reboot
```

Compare the operating system package versions on the two cluster nodes and ensure that the versions match on both nodes.
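One way to compare the nodes is to capture a sorted package list from each and diff them. A sketch with simulated capture files follows; in practice, generate each file on its node with `rpm -qa | sort` and copy them to one host:

```shell
# Simulated package lists from the two nodes (normally: rpm -qa | sort)
printf 'corosync-2.4.5\npacemaker-2.1.2\n' > node1-packages.txt
printf 'corosync-2.4.5\npacemaker-2.1.2\n' > node2-packages.txt

# diff exits 0 only when the lists are identical
if diff -u node1-packages.txt node2-packages.txt > /dev/null; then
    echo "package versions match on both nodes"
else
    echo "version drift detected:"
    diff -u node1-packages.txt node2-packages.txt
fi
```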

## System Logging
<a name="_system_logging"></a>

Both systemd-journald and rsyslog are suggested for comprehensive logging: systemd-journald (enabled by default) provides structured, indexed logging with immediate access to events, while rsyslog is maintained for backward compatibility and traditional file-based logging. This dual approach ensures both modern logging capabilities and compatibility with existing log management tools and practices.

 **1. Enable and start rsyslog:** 

```
# systemctl enable --now rsyslog
```

**2. (Optional) Configure persistent logging for systemd-journald:**  
If you are not using a logging agent (like the AWS CloudWatch Unified Agent or Vector) to ship logs to a centralized location, you may want to configure persistent logging to retain logs after system reboots.

```
# mkdir -p /etc/systemd/journald.conf.d
```

Create `/etc/systemd/journald.conf.d/99-logstorage.conf` with:

```
[Journal]
Storage=persistent
```

Persistent logging requires careful storage management. Configure appropriate retention and rotation settings in `journald.conf` to prevent logs from consuming excessive disk space. Review `man journald.conf` for available options such as SystemMaxUse, RuntimeMaxUse, and MaxRetentionSec.
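For example, a drop-in such as the following bounds persistent journal growth. The 1 GB cap and one-month retention are illustrative values, not recommendations; size them for your environment.

```
[Journal]
Storage=persistent
SystemMaxUse=1G
MaxRetentionSec=1month
```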

To apply the changes, restart journald:

```
# systemctl restart systemd-journald
```

After enabling persistent storage, only new logs will be stored persistently. Existing logs from the current boot session will remain in volatile storage until the next reboot.

 **3. Verify services are running:** 

```
# systemctl status systemd-journald
# systemctl status rsyslog
```

## Time Synchronization Services
<a name="_time_synchronization_services"></a>

Time synchronization is essential for cluster operation. Ensure that the chrony RPM is installed, and configure appropriate time servers in the configuration file.

You can use the Amazon Time Sync Service, which is available on any instance running in a VPC and does not require internet access. To ensure consistent handling of leap seconds, don't mix the Amazon Time Sync Service with other NTP servers or pools.

Create or check the `/etc/chrony.d/ec2.conf` file to define the server:

```
# Amazon EC2 time source config
server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4
```

Enable and start the chronyd service using the following commands:

```
# systemctl enable --now chronyd.service
# systemctl status chronyd
```

Verify time synchronization is working:

```
# chronyc tracking
```

Ensure that the output shows `Reference ID : A9FEA97B (169.254.169.123)`, confirming synchronization with the Amazon Time Sync Service.
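If you want to script this verification, a small check can parse the `chronyc tracking` output. The sample line below is illustrative; in practice you would pass `"$(chronyc tracking)"` to the helper.

```
# is_amazon_synced succeeds when the chronyc tracking output names the
# Amazon Time Sync Service link-local address as the reference source.
is_amazon_synced() {
    echo "$1" | grep -q '^Reference ID.*169\.254\.169\.123'
}

# Live usage:  is_amazon_synced "$(chronyc tracking)"
sample='Reference ID    : A9FEA97B (169.254.169.123)'
if is_amazon_synced "$sample"; then
    echo "synchronized with Amazon Time Sync Service"
fi
```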

For more information, see [Set the time for your Linux instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html).

## Install AWS CLI and Configure Profiles
<a name="install_shared_aws_cli_and_configure_profiles"></a>

The AWS cluster resource agents require AWS Command Line Interface (AWS CLI). Check if AWS CLI is already installed, and install it if necessary.

Check if AWS CLI is installed:

```
# aws --version
```

If the command is not found, install AWS CLI v2 using the following commands:

```
# cd /tmp
# curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
# zypper install -y unzip
# unzip awscliv2.zip
# ./aws/install --update
```

Create a symlink to ensure that AWS CLI is in the system PATH:

```
# ln -sf /usr/local/bin/aws /usr/bin/aws
```

Verify the installation:

```
# aws --version
```

The installer creates a symbolic link at `/usr/local/bin/aws`. The additional symlink in `/usr/bin` ensures that the `aws` command is found even by processes, such as cluster resource agents, that run with a minimal PATH.

For more information, see [Installing or updating to the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

After installing AWS CLI, you need to create an AWS CLI profile for the root account.

You can either edit the config file at `/root/.aws/config` manually or use the `aws configure` AWS CLI command.

Leave the access key and secret access key fields empty. The permissions are provided through the IAM role attached to the Amazon EC2 instance.

```
# aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]: <region>
Default output format [None]:
```

The profile name is `default` unless you specify a different name with `--profile`. This example uses the profile name `cluster`, which is referenced later in the AWS resource agent definitions for Pacemaker. The AWS Region must be the default AWS Region of the instance.

```
# aws configure --profile cluster
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]: <region>
Default output format [None]:
```
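After both commands, `/root/.aws/config` contains one section per profile, similar to the following. The Region shown is a placeholder; use your instance's Region.

```
[default]
region = us-east-1

[profile cluster]
region = us-east-1
```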

On the hosts, you can verify the available profiles using the following command:

```
# aws configure list-profiles
```

Confirm that the IAM role attached to the instance is assumed by querying the caller identity:

```
# aws sts get-caller-identity --profile=<profile_name>
```
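The returned `Arn` should reference an assumed role rather than an IAM user, confirming that credentials come from the instance profile. A minimal sketch for checking this in a script follows; the sample JSON, account ID, and role name are placeholders.

```
# arn_is_assumed_role extracts the Arn field from get-caller-identity JSON
# output and succeeds when it refers to an assumed role.
arn_is_assumed_role() {
    echo "$1" | grep -o '"Arn": *"[^"]*"' | grep -q ':assumed-role/'
}

# Live usage:  arn_is_assumed_role "$(aws sts get-caller-identity --profile cluster)"
sample='{ "Arn": "arn:aws:sts::111122223333:assumed-role/my-cluster-role/i-0abcd1234" }'
if arn_is_assumed_role "$sample"; then
    echo "instance credentials come from an assumed IAM role"
fi
```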

## Pacemaker Proxy Settings (Optional)
<a name="_pacemaker_proxy_settings_optional"></a>

If your Amazon EC2 instance has been configured to access the internet and/or AWS Cloud through proxy servers, then you need to replicate the settings in the pacemaker configuration. For more information, see [Using an HTTP Proxy](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-proxy.html).

Add the following lines to `/etc/sysconfig/pacemaker`:

```
http_proxy=http://<proxyhost>:<proxyport>
https_proxy=http://<proxyhost>:<proxyport>
no_proxy=127.0.0.1,localhost,169.254.169.254,fd00:ec2::254
```
+ Modify proxyhost and proxyport to match your settings.
+ Ensure that you exempt the addresses used to access the instance metadata service.
+ Configure no_proxy to include the IP addresses of the instance metadata service – 169.254.169.254 (IPv4) and fd00:ec2::254 (IPv6). These addresses do not vary.
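As a sanity check, you can verify that a `no_proxy` value exempts the IPv4 metadata address. The helper below is a hypothetical sketch, not part of the Pacemaker configuration.

```
# no_proxy_ok succeeds when the comma-separated no_proxy value contains
# the IMDS IPv4 address as an exact entry.
no_proxy_ok() {
    case ",$1," in
        *,169.254.169.254,*) return 0 ;;
        *) return 1 ;;
    esac
}

if no_proxy_ok "127.0.0.1,localhost,169.254.169.254,fd00:ec2::254"; then
    echo "instance metadata service is exempt from the proxy"
fi
```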