

# Deployment
<a name="rhel-ase-ha-deployment"></a>

This section covers the following topics.

**Topics**
+ [Settings and prerequisites](rhel-ase-ha-settings.md)
+ [SAP and cluster setup](rhel-sap-ase-ha-setup.md)
+ [Cluster configuration](rhel-netweaver-ha-cluster-configuration.md)

# Settings and prerequisites
<a name="rhel-ase-ha-settings"></a>

The cluster setup uses parameters, including `DBSID`, that are unique to your setup. It is useful to predetermine the values using the following examples and guidance.

**Topics**
+ [Define reference parameters for setup](#define-parameters)
+ [Amazon EC2 instance settings](#instance-settings)
+ [Operating system prerequisites](#os-prerequisites)
+ [IP and hostname resolution prerequisites](#ip-prerequisites)
+ [FSx for ONTAP prerequisites](#filesystem-prerequisites)
+ [Shared VPC – *optional*](#rhel-ase-ha-shared-vpc)

## Define reference parameters for setup
<a name="define-parameters"></a>

The cluster setup relies on the following parameters.

**Topics**
+ [Global AWS parameters](#global-aws-parameters)
+ [Amazon EC2 instance parameters](#ec2-parameters)
+ [SAP and Pacemaker resource parameters](#sap-pacemaker-resource-parameters)
+ [RHEL cluster parameters](#rhel-cluster-parameters)

### Global AWS parameters
<a name="global-aws-parameters"></a>


| Name | Parameter | Example | 
| --- | --- | --- | 
|   AWS account ID  |   `<account_id>`   |   `123456789100`   | 
|   AWS Region  |   `<region_id>`   |   `us-east-1`   | 
+  AWS account – For more details, see [Your AWS account ID and its alias](https://docs.aws.amazon.com/IAM/latest/UserGuide/console_account-alias.html).
+  AWS Region – For more details, see [Describe your Regions](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#using-regions-availability-zones-describe).
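
Both values can also be retrieved with the AWS CLI, assuming a working CLI profile. A quick sketch:

```shell
# Account ID of the credentials currently in use
aws sts get-caller-identity --query Account --output text

# Region configured for the active CLI profile
aws configure get region
```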

### Amazon EC2 instance parameters
<a name="ec2-parameters"></a>


| Name | Parameter | Primary example | Secondary example | 
| --- | --- | --- | --- | 
|  Amazon EC2 instance ID  |   `<instance_id>`   |   `i-xxxxinstidforhost1`   |   `i-xxxxinstidforhost2`   | 
|  Hostname  |   `<hostname>`   |   `rhxdbhost01`   |   `rhxdbhost02`   | 
|  Host IP  |   `<host_ip>`   |   `10.1.10.1`   |   `10.1.20.1`   | 
|  Host additional IP  |   `<host_additional_ip>`   |   `10.1.10.2`   |   `10.1.20.2`   | 
|  Configured subnet  |   `<subnet_id>`   |   `subnet-xxxxxxxxxxsubnet1`   |   `subnet-xxxxxxxxxxsubnet2`   | 
+ Hostname – Hostnames must comply with SAP requirements outlined in [SAP Note 611361 - Hostnames of SAP ABAP Platform servers](https://me.sap.com/notes/611361) (requires SAP portal access).

  Run the following command on your instances to retrieve the hostname.

  ```
  hostname
  ```
+ Amazon EC2 instance ID – run the following command (IMDSv2 compatible) on your instances to retrieve instance metadata.

  ```
  /usr/bin/curl --noproxy '*' -w "\n" -s -H "X-aws-ec2-metadata-token: $(curl --noproxy '*' -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")" http://169.254.169.254/latest/meta-data/instance-id
  ```

  For more details, see [Retrieve instance metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html) and [Instance identity documents](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-identity-documents.html).

### SAP and Pacemaker resource parameters
<a name="sap-pacemaker-resource-parameters"></a>


| Name | Parameter | Example | 
| --- | --- | --- | 
|  DBSID  |   `<DBSID>` or `<dbsid>`   |   `ARD`   | 
|  Virtual hostname  |   `<db_virt_hostname>`   |   `rhxvdb`   | 
|  Database Overlay IP  |   `<ase_db_oip>`   |   `172.16.0.23`   | 
|  VPC Route Tables  |   `<rtb_id>`   |   `rtb-xxxxxroutetable1`   | 
|  FSx for ONTAP mount points  |   `<ase_db_fs>`   |   `svm-xxx.fs-xxx.fsx.us-east-1.amazonaws.com`   | 
+ SAP details – SAP parameters, including SID and instance number, must follow the guidance and limitations of SAP and Software Provisioning Manager. Refer to [SAP Note 1979280 - Reserved SAP System Identifiers (SAPSID) with Software Provisioning Manager](https://me.sap.com/notes/1979280) for more details.

  Post-installation, use the following command to find the details of the instances running on a host.

  ```
  sudo /usr/sap/hostctrl/exe/saphostctrl -function ListDatabases
  ```
+ Overlay IP – This value is defined by you. For more information, see [Overlay IP](https://docs.aws.amazon.com/sap/latest/sap-netweaver/rhel-netweaver-ha-planning.html#overlay-ip).
+ FSx for ONTAP mount points – This value is defined by you. Consider the required mount points specified in [SAP ASE on AWS with Amazon FSx for NetApp ONTAP](https://docs.aws.amazon.com/sap/latest/sap-AnyDB/sap-ase-amazon-fsx.html).
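
The VPC route table IDs (`<rtb_id>`) associated with your cluster subnets can be looked up with the AWS CLI. A sketch, assuming the example subnet IDs from the Amazon EC2 instance parameters table:

```shell
# List the route tables associated with the cluster subnets
aws ec2 describe-route-tables \
    --filters "Name=association.subnet-id,Values=subnet-xxxxxxxxxxsubnet1,subnet-xxxxxxxxxxsubnet2" \
    --query "RouteTables[].RouteTableId" \
    --output text
```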

### RHEL cluster parameters
<a name="rhel-cluster-parameters"></a>


| Name | Parameter | Example | 
| --- | --- | --- | 
|  Cluster name  |   `cluster_name`   |   `rhelha`   | 
|  Cluster user  |   `cluster_user`   |   `hacluster`   | 
|  Cluster password  |   `cluster_password`   |  | 

## Amazon EC2 instance settings
<a name="instance-settings"></a>

Amazon EC2 instance settings can be applied using Infrastructure as Code, or manually using the AWS Command Line Interface or the AWS Management Console. We recommend Infrastructure as Code automation to reduce manual steps and to ensure consistency.

**Topics**
+ [Create IAM roles and policies](#iam)
+ [AWS Overlay IP policy](#overlay-ip-policy)
+ [Assign IAM role](#role)
+ [Modify security groups for cluster communication](#security-groups)
+ [Disable source/destination check](#disable-check)
+ [Review automatic recovery and stop protection](#auto-recovery)

### Create IAM roles and policies
<a name="iam"></a>

In addition to the permissions required for standard SAP operations, two IAM policies are required for the cluster to control AWS resources. These policies must be assigned to your Amazon EC2 instances using an IAM role. This enables the Amazon EC2 instance, and therefore the cluster, to call AWS services.

Create these policies with least-privilege permissions, granting access only to the specific resources that are required within the cluster. If you run multiple clusters, create a separate set of policies for each.

For more information, see [IAM roles for Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#ec2-instance-profile).

#### STONITH policy
<a name="stonith"></a>

The RHEL STONITH agent requires permission to start and stop both nodes of the cluster. Create a policy as shown in the following example. Attach this policy to the IAM role assigned to both Amazon EC2 instances in the cluster.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:DescribeTags"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances"
            ],
            "Resource": [
              "arn:aws:ec2:us-east-1:123456789012:instance/i-1234567890abcdef0",
              "arn:aws:ec2:us-east-1:123456789012:instance/i-0abcdef1234567890"
            ]
        }
    ]
}
```
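
The policy can also be created with the AWS CLI instead of the console. A sketch, assuming the JSON above has been saved as `stonith-policy.json` and `pacemaker-stonith-policy` is your chosen policy name (both names are illustrative):

```shell
# Create the STONITH policy from the JSON document above
aws iam create-policy \
    --policy-name pacemaker-stonith-policy \
    --policy-document file://stonith-policy.json
```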

### AWS Overlay IP policy
<a name="overlay-ip-policy"></a>

The RHEL Overlay IP resource agent (`aws-vpc-move-ip`) requires permission to modify a routing entry in route tables. Create a policy as shown in the following example. Attach this policy to the IAM role assigned to both Amazon EC2 instances in the cluster.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:ReplaceRoute",
            "Resource": [
                 "arn:aws:ec2:us-east-1:123456789012:route-table/rtb-0123456789abcdef0"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "ec2:DescribeRouteTables",
            "Resource": "*"
        }
    ]
}
```

**Note**  
If you are using a shared VPC, see [Shared VPC – optional](#rhel-ase-ha-shared-vpc).

### Assign IAM role
<a name="role"></a>

The two cluster resource IAM policies must be assigned to an IAM role associated with your Amazon EC2 instances. If an IAM role is not associated with your instance, create a new IAM role for cluster operations. To assign the role, open https://console.aws.amazon.com/ec2/, select each instance in turn, and then choose **Actions** > **Security** > **Modify IAM role**.
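
The same association can be made with the AWS CLI. A sketch, assuming a hypothetical instance profile named `sap-cluster-profile` that contains the role:

```shell
# Run once per cluster node, substituting the instance ID
aws ec2 associate-iam-instance-profile \
    --instance-id <i-xxxxinstidforhost1> \
    --iam-instance-profile Name=sap-cluster-profile
```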

### Modify security groups for cluster communication
<a name="security-groups"></a>

A security group controls the traffic that is allowed to reach and leave the resources that it is associated with. For more information, see [Control traffic to your AWS resources using security groups](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html).

In addition to the standard ports required to access SAP and administrative functions, the following rules must be applied to the security groups assigned to both Amazon EC2 instances in the cluster.


**Inbound**  

| Source | Protocol | Port range | Description | 
| --- | --- | --- | --- | 
|  The security group ID (its own resource ID)  |   **UDP**   |  5405  |  Allows UDP traffic between cluster resources for corosync communication  | 

**Note**  
Note the use of the UDP protocol.

If you are running a local firewall, such as `iptables`, ensure that communication on the preceding ports is allowed between the two Amazon EC2 instances.
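
The inbound rule can be added with the AWS CLI. A sketch, assuming `sg-xxxxxxxxclustersg` is the security group attached to both cluster nodes (a self-referencing rule, so the group is both the target and the source):

```shell
# Allow corosync traffic (UDP 5405) between members of the same security group
aws ec2 authorize-security-group-ingress \
    --group-id sg-xxxxxxxxclustersg \
    --protocol udp \
    --port 5405 \
    --source-group sg-xxxxxxxxclustersg
```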

### Disable source/destination check
<a name="disable-check"></a>

Amazon EC2 instances perform source/destination checks by default. This requires that an instance is either the source or the destination of any traffic it sends or receives.

In the pacemaker cluster, the source/destination check must be disabled on both instances receiving traffic from the overlay IP. You can disable the check using the AWS CLI or the AWS Management Console.

**Example**  
Use the [modify-instance-attribute](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/modify-instance-attribute.html) command to disable source/destination check.  
Run the following commands on both instances in the cluster.  
+ Primary example –

  ```
  aws ec2 modify-instance-attribute --instance-id <i-xxxxinstidforhost1> --no-source-dest-check
  ```
+ Secondary example –

  ```
  aws ec2 modify-instance-attribute --instance-id <i-xxxxinstidforhost2> --no-source-dest-check
  ```
If you use the AWS Management Console instead, select the instance at https://console.aws.amazon.com/ec2/, choose **Actions** > **Networking** > **Change source/destination check**, and ensure that the **Stop** option is checked.
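
To verify the change, query the attribute on both instances; `Value` should be `false`:

```shell
# Confirm that the source/destination check is disabled
aws ec2 describe-instance-attribute \
    --instance-id <i-xxxxinstidforhost1> \
    --attribute sourceDestCheck
```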

### Review automatic recovery and stop protection
<a name="auto-recovery"></a>

After a failure, cluster-controlled operations must be resumed in a coordinated way. This helps ensure that the cause of failure is known and addressed, and the status of the cluster is as expected. For example, verifying that there are no pending fencing actions.

You can achieve this by not enabling pacemaker to run as a service at the operating system level, and by avoiding automatic instance restarts after a hardware failure.

If you want to control the restarts resulting from hardware failure, disable simplified automatic recovery and do not configure Amazon CloudWatch action-based recovery for Amazon EC2 instances that are part of a pacemaker cluster. To disable simplified automatic recovery with the AWS CLI, run the following commands on both Amazon EC2 instances in the cluster.

**Note**  
Modifying instance maintenance options requires admin privileges that are not covered by the IAM instance roles defined for cluster operations.

```
aws ec2 modify-instance-maintenance-options --instance-id <i-xxxxinstidforhost1> --auto-recovery disabled
```

```
aws ec2 modify-instance-maintenance-options --instance-id <i-xxxxinstidforhost2> --auto-recovery disabled
```

To ensure that STONITH actions can be executed, stop protection must be disabled for Amazon EC2 instances that are part of a pacemaker cluster. If the default settings have been modified, use the following commands on both instances to disable stop protection with the AWS CLI.

**Note**  
Modifying instance attributes requires admin privileges that are not covered by the IAM instance roles defined for cluster operations.

```
aws ec2 modify-instance-attribute --instance-id <i-xxxxinstidforhost1> --no-disable-api-stop
```

```
aws ec2 modify-instance-attribute --instance-id <i-xxxxinstidforhost2> --no-disable-api-stop
```
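
Both settings can be verified afterwards. A sketch for one node; `disableApiStop` should report `false`, and the maintenance options should show auto-recovery as `disabled`:

```shell
# Check stop protection
aws ec2 describe-instance-attribute \
    --instance-id <i-xxxxinstidforhost1> \
    --attribute disableApiStop

# Check the automatic recovery setting
aws ec2 describe-instances \
    --instance-ids <i-xxxxinstidforhost1> \
    --query "Reservations[].Instances[].MaintenanceOptions"
```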

## Operating system prerequisites
<a name="os-prerequisites"></a>

This section covers the following topics.

**Topics**
+ [Root access](#root-access)
+ [Install missing operating system packages](#os-packages)
+ [Update and check operating system versions](#confirm-versions)
+ [Stop and disable `nm-cloud-setup`](#disable-nm-cloud-setup)
+ [Time synchronization services](#time-sync)
+ [AWS CLI profile](#cli-profile)
+ [Pacemaker proxy settings](#proxy-settings)

### Root access
<a name="root-access"></a>

Verify root access on both cluster nodes. Most of the setup commands in this document are performed as the root user. Assume that commands should be run as root unless explicitly stated otherwise.

### Install missing operating system packages
<a name="os-packages"></a>

This is applicable to both cluster nodes. You must install any missing operating system packages.

The following packages and their dependencies are required for the pacemaker setup. Depending on your baseline image, for example, RHEL for SAP, these packages may already be installed.

```
awscli
chrony
corosync
pcs
pacemaker
fence-agents-aws
resource-agents-sap (Version resource-agents-sap-3.9.5-124.el7.x86_64 or higher)
sap-cluster-connector
```

We highly recommend installing the following additional packages for troubleshooting.

```
sysstat
pcp-system-tools
sos
```

For more information, see the Red Hat documentation [What are all the Performance Co-Pilot (PCP) RPM packages in RHEL?](https://access.redhat.com/articles/1146003).

**Note**  
The preceding list of packages is not a complete list required for running SAP applications. For the complete list, see [SAP and Red Hat references](https://docs.aws.amazon.com/sap/latest/sap-netweaver/rhel-netweaver-ha-planning.html#references).

Use the following command to check packages and versions.

```
for package in awscli chrony corosync pcs pacemaker fence-agents-aws resource-agents-sap sap-cluster-connector sysstat pcp-system-tools sos; do
  echo "Checking if ${package} is installed..."
  if ! rpm -q "${package}" --quiet; then
    echo "   ${package} is missing and needs to be installed"
  fi
done
```

If a package is not installed, and you are unable to install it using `yum`, it may be because the Red Hat Enterprise Linux for SAP extension is not available as a repository in your chosen image. You can verify the availability of the extension using the following command.

```
yum repolist
```

To install or update a package or packages with confirmation, use the following command.

```
yum install <package_name(s)>
```

### Update and check operating system versions
<a name="confirm-versions"></a>

You must update and confirm operating system versions across nodes. Apply all the latest patches to your operating system. This ensures that bugs are addressed and new features are available.

You can apply patches individually, or use `yum update` to apply all available updates. A clean reboot is recommended before setting up a cluster.

```
yum update
reboot
```

Compare the operating system package versions on the two cluster nodes and ensure that the versions match on both nodes.
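
One way to compare versions is to record them on each node and diff the two files. A sketch using the cluster packages from this section:

```shell
# Run on each node, then compare the resulting files (for example, with diff)
for package in corosync pcs pacemaker fence-agents-aws resource-agents-sap sap-cluster-connector; do
  rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE}\n' "${package}"
done > /tmp/pkg-versions-$(hostname).txt
```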

### Stop and disable `nm-cloud-setup`
<a name="disable-nm-cloud-setup"></a>

This is applicable to both cluster nodes. If you are using Red Hat 8.6 or later, the following services must be stopped and disabled on both cluster nodes. This prevents NetworkManager from removing the overlay IP address from the network interface.

```
systemctl disable nm-cloud-setup.timer
systemctl stop nm-cloud-setup.timer
systemctl disable nm-cloud-setup
systemctl stop nm-cloud-setup
```
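
You can confirm that both units are disabled and inactive on each node; the commands should report `disabled` and `inactive` respectively:

```shell
systemctl is-enabled nm-cloud-setup.service nm-cloud-setup.timer
systemctl is-active nm-cloud-setup.service nm-cloud-setup.timer
```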

### Time synchronization services
<a name="time-sync"></a>

This is applicable to both cluster nodes. Time synchronization is important for cluster operation. Ensure that the `chrony` package is installed, and configure appropriate time servers in the configuration file.

You can use the Amazon Time Sync Service, which is available on any instance running in a VPC and does not require internet access. To ensure consistency in the handling of leap seconds, don’t mix the Amazon Time Sync Service with any other `ntp` time sync servers or pools.

Create or check the `/etc/chrony.d/ec2.conf` file to define the server.

```
# Amazon EC2 time source config
server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4
```

Enable and start `chronyd.service` using the following commands.

```
systemctl enable --now chronyd.service
systemctl status chronyd
```

For more information, see [Set the time for your Linux instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html).
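
To confirm that chrony is synchronizing against the Amazon Time Sync Service, check the selected time source on each node:

```shell
# The Amazon Time Sync Service address should carry the '*' (current source) marker
chronyc sources -v

# Offset and stratum details for the selected source
chronyc tracking
```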

### AWS CLI profile
<a name="cli-profile"></a>

This is applicable to both cluster nodes. The cluster resource agents use AWS Command Line Interface (AWS CLI). You need to create an AWS CLI profile for the root account on both instances.

You can either edit the config file at `/root/.aws/config` manually, or use the [aws configure](https://docs.aws.amazon.com/cli/latest/reference/configure/index.html) AWS CLI command.

You can skip providing the access key and secret access key. The permissions are provided through the IAM role attached to the Amazon EC2 instances.

```
aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]: <region_id>
Default output format [None]:
```

### Pacemaker proxy settings
<a name="proxy-settings"></a>

This is applicable to both cluster nodes. If your Amazon EC2 instances have been configured to access the internet or the AWS Cloud through proxy servers, you need to replicate these settings in the pacemaker configuration. For more information, see [Use an HTTP proxy](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-proxy.html).

Add the following lines to `/etc/sysconfig/pacemaker`.

```
http_proxy=http://<proxyhost>:<proxyport>
https_proxy=http://<proxyhost>:<proxyport>
no_proxy=127.0.0.1,localhost,169.254.169.254,fd00:ec2::254
```

Modify `proxyhost` and `proxyport` to match your settings. Ensure that you exempt the addresses used to access the instance metadata service. Configure `no_proxy` to include the instance metadata service addresses `169.254.169.254` (IPv4) and `fd00:ec2::254` (IPv6); these addresses do not vary.

## IP and hostname resolution prerequisites
<a name="ip-prerequisites"></a>

This section covers the following topics.

**Topics**
+ [Add initial VPC route table entries for overlay IPs](#route-entries)
+ [Add overlay IPs to host IP configuration](#overlay-host)
+ [Hostname resolution](#hostname-resolution)

### Add initial VPC route table entries for overlay IPs
<a name="route-entries"></a>

You need to add initial route table entries for overlay IPs. For more information on overlay IP, see [Overlay IP](https://docs.aws.amazon.com/sap/latest/sap-netweaver/rhel-netweaver-ha-planning.html#overlay-ip).

Add entries to the VPC route table or tables associated with the subnets of your Amazon EC2 instances for the cluster. The entries for destination (overlay IP CIDR) and target (Amazon EC2 instance or ENI) must be added manually for the SAP ASE database. This ensures that the cluster resource has a route to modify. It also supports installing SAP using the virtual hostname associated with the overlay IP before the cluster is configured.

 **Modify or add a route to a route table using AWS Management Console** 

1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.

1. In the navigation pane, choose **Route Tables**, and select the route table associated with the subnets where your instances have been deployed.

1. Choose **Actions**, **Edit routes**.

1. To add a route, choose **Add route**.

1. Add your chosen overlay IP address CIDR and the instance ID of your primary instance for the SAP ASE database. See the following **example**, which uses the reference parameters defined earlier.

   | Destination | Target | 
   | --- | --- | 
   |  `172.16.0.23/32`  |  `i-xxxxinstidforhost1`  | 

1. Choose **Save changes**.

The preceding steps can also be performed programmatically. To preserve least privilege, we suggest performing these steps using administrative privileges rather than instance-based privileges, because the `CreateRoute` API isn’t necessary for ongoing operations.

Run the following command as a dry run on both nodes to confirm that the instances have the necessary permissions.

```
aws ec2 replace-route --route-table-id <rtb-xxxxxroutetable1> --destination-cidr-block <172.16.0.23/32> --instance-id <i-xxxxinstidforhost1> --dry-run --profile <aws_cli_cluster_profile>
```

### Add overlay IPs to host IP configuration
<a name="overlay-host"></a>

You must configure the overlay IP as an additional IP address on the standard interface to enable the SAP installation. This is later managed by the cluster IP resource. However, to install SAP using the correct IP addresses before the cluster configuration is in place, you need to add these entries manually.

If you reboot the instance during setup, the assignment is lost and must be re-added.

See the following **examples**. You must update the commands with your chosen IP addresses.

On EC2 instance 1, where you are installing SAP ASE database, add the overlay IP.

```
ip addr add <172.16.0.23/32> dev eth0
```
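
You can verify that the address was added:

```shell
# The overlay IP should be listed as a secondary address on eth0
ip addr show dev eth0
```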

### Hostname resolution
<a name="hostname-resolution"></a>

This is applicable to both cluster nodes. You must ensure that both instances can resolve all hostnames in use. Add the hostnames for cluster nodes to `/etc/hosts` file on both cluster nodes. This ensures that hostnames for cluster nodes can be resolved even in case of DNS issues. See the following example.

```
cat /etc/hosts
<10.1.10.1 rhxdbhost01.example.com rhxdbhost01>
<10.1.20.1 rhxdbhost02.example.com rhxdbhost02>
<172.16.0.23 rhxvdb.example.com rhxvdb>
```
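
After editing the file, confirm on both nodes that the names resolve as expected:

```shell
# Should return the overlay IP defined in /etc/hosts
getent hosts <rhxvdb>
```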

**Important**  
The overlay IP is out of VPC range, and cannot be reached from locations not associated with the route table, including on-premises.

## FSx for ONTAP prerequisites
<a name="filesystem-prerequisites"></a>

This section covers the following topics.

**Topics**
+ [Shared file systems](#shared-filesystems)
+ [Create volumes and file systems](#create-filesystems)

### Shared file systems
<a name="shared-filesystems"></a>

Amazon FSx for NetApp ONTAP is supported for SAP ASE database file systems.

FSx for ONTAP provides fully managed shared storage in AWS Cloud with data access and management capabilities of ONTAP. For more information, see [Create an Amazon FSx for NetApp ONTAP file system](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/getting-started-step1.html).

Select a file system based on your business requirements, evaluating the resilience, performance, and cost of your choice.

The SVM’s DNS name is the simplest mounting option. It automatically resolves to the mount target’s IP address in the Availability Zone of the connecting Amazon EC2 instance.

 `svm-id.fs-id.fsx.aws-region.amazonaws.com` 

**Note**  
Review the `enableDnsHostnames` and `enableDnsSupport` DNS attributes for your VPC. For more information, see [View and update DNS attributes for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html#vpc-dns-updating).

### Create volumes and file systems
<a name="create-filesystems"></a>

You can review the following resources to understand the FSx for ONTAP mount points for SAP ASE database.
+  [Host setup for SAP ASE](https://docs.aws.amazon.com/sap/latest/sap-AnyDB/host-setup-fsx-sap-ase.html) 
+ SAP – [Setup of Database Layout](https://help.sap.com/docs/SLTOOLSET/e345db692e3c43928199d701df58c0d8/f231f7924dd34e9e85291bfb9af709f1.html?version=CURRENT_VERSION) (ABAP)
+ SAP – [Setup of Database Layout](https://help.sap.com/docs/SLTOOLSET/01f04921ac57452983980fe83a3ce10d/f231f7924dd34e9e85291bfb9af709f1.html?version=CURRENT_VERSION) (JAVA)

The following are the FSx for ONTAP mount points covered in this topic.


| Unique NFS Location (example) | File system location | 
| --- | --- | 
|  SVM-xxx:/sybase  |  /sybase  | 
|  SVM-xxx:/asedata  |  /sybase/<DBSID>/sapdata_1  | 
|  SVM-xxx:/aselog  |  /sybase/<DBSID>/saplog_1  | 
|  SVM-xxx:/sapdiag  |  /sybase/<DBSID>/sapdiag  | 
|  SVM-xxx:/saptmp  |  /sybase/<DBSID>/saptmp  | 
|  SVM-xxx:/backup  |  /sybasebackup  | 
|  SVM-xxx:/usrsap  |  /usr/sap  | 

Ensure that you have properly mounted the file systems and performed the necessary host setup adjustments. See [Host setup for SAP ASE](https://docs.aws.amazon.com/sap/latest/sap-AnyDB/host-setup-fsx-sap-ase.html). You can temporarily add the entries to `/etc/fstab` so that they are not lost during a reboot. The entries must be removed before configuring the cluster, because the cluster resource manages the mounting of the NFS file systems.

You need to perform this step only on the primary Amazon EC2 instance for the initial installation.

Review the mount options to ensure that they match your operating system, NFS file system type, and SAP’s latest recommendations.
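
As an illustration, a temporary NFS mount of the `/sybase` volume might look like the following. The mount options shown are an assumption for this sketch; confirm them against the host setup guidance referenced above.

```shell
# Mount the /sybase volume from the SVM DNS name (options are illustrative)
mkdir -p /sybase
mount -t nfs -o nfsvers=4.1,rsize=262144,wsize=262144,hard \
  svm-xxx.fs-xxx.fsx.us-east-1.amazonaws.com:/sybase /sybase
```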

Use the following command to check that the required file systems are available.

```
df -h
```

## Shared VPC – *optional*
<a name="rhel-ase-ha-shared-vpc"></a>

Amazon VPC sharing enables you to share subnets with other AWS accounts in the same organization in AWS Organizations. Amazon EC2 instances can be deployed using the subnets of the shared Amazon VPC.

In the pacemaker cluster, the `aws-vpc-move-ip` resource agent has been enhanced to support a shared VPC setup while maintaining backward compatibility with existing features.

The following checks and changes are required. We refer to the AWS account that owns the Amazon VPC as the sharing VPC account, and to the consumer account where the cluster nodes are deployed as the cluster account.

This section covers the following topics.

**Topics**
+ [Minimum version requirements](#minimum-version-requirements)
+ [IAM roles and policies](#iam-roles-policies)
+ [Shared VPC cluster resources](#shared-vpc-clsuter-resources-rhel-ase)

### Minimum version requirements
<a name="minimum-version-requirements"></a>

The version of the `aws-vpc-move-ip` agent shipped with Red Hat 8.2 supports the shared VPC setup by default. The following are the minimum versions required to support a shared VPC setup:
+ Red Hat 7.9 - resource-agents-4.1.1-61.10
+ Red Hat 8.1 - resource-agents-4.1.1-33.10
+ Red Hat 8.2 - resource-agents-4.1.1-44.12

### IAM roles and policies
<a name="iam-roles-policies"></a>

Using the overlay IP agent with a shared Amazon VPC requires a different set of IAM permissions to be granted on both AWS accounts (sharing VPC account and cluster account).

#### Sharing VPC account
<a name="sharing-vpc-account"></a>

In the sharing VPC account, create an IAM role to delegate permissions to the EC2 instances that will be part of the cluster. During role creation, select **Another AWS account** as the type of trusted entity, and enter the AWS account ID where the EC2 instances will be deployed.

After the IAM role has been created, create the following IAM policy in the sharing VPC account, and attach it to the IAM role. Add or remove route table entries as needed.

```
{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "VisualEditor0",
        "Effect": "Allow",
        "Action": "ec2:ReplaceRoute",
        "Resource": [
            "arn:aws:ec2:us-east-1:123456789012:route-table/rtb-0123456789abcdef0"
        ]
      },
      {
        "Sid": "VisualEditor1",
        "Effect": "Allow",
        "Action": "ec2:DescribeRouteTables",
        "Resource": "*"
      }
    ]  
}
```

Next, go to the **Trust relationships** tab of the IAM role, and verify that the AWS account you entered while creating the role has been added correctly.

#### Cluster account
<a name="cluster-account"></a>

In the cluster account, create the following IAM policies, and attach them to an IAM role. This is the IAM role that is attached to the EC2 instances.

**AWS STS policy**

```
{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "VisualEditor0",
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": "arn:aws:iam::123456789012:role/sharing-vpc-account-cluster-role"
      }
    ]
}
```

**STONITH policy**

```
{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "VisualEditor0",
        "Effect": "Allow",
        "Action": [
            "ec2:StartInstances",
            "ec2:StopInstances"
        ],
        "Resource": [
            "arn:aws:ec2:us-east-1:123456789012:instance/i-1234567890abcdef0",
            "arn:aws:ec2:us-east-1:123456789012:instance/i-0abcdef1234567890"
        ]
      },
      {
        "Sid": "VisualEditor1",
        "Effect": "Allow",
        "Action": "ec2:DescribeInstances",
        "Resource": "*"
      }
    ]
}
```
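
You can verify the cross-account trust from a cluster node by assuming the role directly; if the trust relationship is correct, temporary credentials are returned. `cluster-trust-check` is an arbitrary session name used for this sketch:

```shell
# Attempt to assume the sharing VPC account role from the cluster account
aws sts assume-role \
    --role-arn arn:aws:iam::123456789012:role/sharing-vpc-account-cluster-role \
    --role-session-name cluster-trust-check
```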

### Shared VPC cluster resources
<a name="shared-vpc-clsuter-resources-rhel-ase"></a>

The cluster resource agent `aws-vpc-move-ip` also uses a different configuration syntax. When configuring the `aws-vpc-move-ip` resource agent, the following additional parameters must be used:
+ `lookup_type=NetworkInterfaceId`
+ `routing_table_role="arn:aws:iam::<account_id>:role/<VPC-Account-Cluster-Role>"`

The following IP resource for the SAP ASE database needs to be created.

```
pcs resource create rsc_ip_ARD_ASEDB ocf:heartbeat:aws-vpc-move-ip ip=172.16.0.23 interface=eth0  routing_table=rtb-xxxxxroutetable1 lookup_type=NetworkInterfaceId  routing_table_role="arn:aws:iam::<sharing_vpc_account_id>:role/<sharing_vpc_account_cluster_role>" op monitor interval=20s timeout=40s --group rsc_asedb_group
```

# SAP and cluster setup
<a name="rhel-sap-ase-ha-setup"></a>

This section covers the following topics.

**Topics**
+ [Install SAP](#install-sap)
+ [Cluster prerequisites](#cluster-prerequisites)
+ [Create cluster and node associations](#associations)

## Install SAP
<a name="install-sap"></a>

The following topics provide information about installing SAP ASE database on AWS Cloud in a highly available cluster. Review SAP Documentation for more details.

**Topics**
+ [Use SWPM with high availability](#swpm-ha)
+ [Install SAP database instance](#sap-instances)
+ [Check SAP host agent version](#host-agent-version)

### Use SWPM with high availability
<a name="swpm-ha"></a>

Before running SAP Software Provisioning Manager (SWPM), ensure that the following prerequisites are met.
+ If the operating system users and groups for SAP are pre-defined, ensure that the user identifier (UID) and group identifier (GID) values for `syb<dbsid>`, `sapadm`, and `sapsys` are consistent across both instances.
+ You have downloaded the most recent version of Software Provisioning Manager for your SAP version. For more information, see SAP Documentation [Software Provisioning Manager](https://support.sap.com/en/tools/software-logistics-tools/software-provisioning-manager.html?anchorId=section).
+ Ensure that routes, overlay IPs, and virtual host names are mapped to the instance where the installation will run. This is to ensure that the virtual hostname for SAP ASE database is available on the primary instance. For more information, see [IP and hostname resolution prerequisites](https://docs.aws.amazon.com/sap/latest/sap-netweaver/sles-setup.html#ip-prerequisites).
+ Ensure that FSx for ONTAP mount points are available, either in `/etc/fstab` or using the mount command. For more information, see [File system prerequisites](https://docs.aws.amazon.com/sap/latest/sap-netweaver/sles-setup.html#filesystem-prerequisites). If you are adding the entries in `/etc/fstab`, ensure that they are removed before configuring the cluster.
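The first prerequisite above can be scripted. The following is a minimal sketch that compares UID/GID values collected from both nodes; the user names and numeric IDs are assumed example values, and on real hosts you would gather them with `id -u <user>` and `id -g <user>` over SSH.

```shell
# Sketch: verify UID/GID consistency for the SAP users across both nodes.
# The values below are placeholders; collect the real ones per node, e.g.:
#   ssh <host> 'for u in sybadm sapadm; do id -u $u; id -g $u; done'
node1_ids="sybadm=3001 sapadm=3002 sapsys=79"   # assumed values from primary
node2_ids="sybadm=3001 sapadm=3002 sapsys=79"   # assumed values from secondary
if [ "$node1_ids" = "$node2_ids" ]; then
  result="consistent"
else
  result="mismatch"
fi
echo "UID/GID check: $result"
```

If the check reports a mismatch, align the IDs before running SWPM; changing them after installation requires fixing file ownership on every affected file system.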

### Install SAP database instance
<a name="sap-instances"></a>

The commands in this section use the example values provided in [Define reference parameters for setup](https://docs.aws.amazon.com/sap/latest/sap-netweaver/sles-setup.html#define-parameters).

Install SAP ASE database on `<rhxdbhost01>` with virtual hostname `rhxvdb`, using the high availability option of Software Provisioning Manager (SWPM) tool. You can use the `SAPINST_USE_HOSTNAME` parameter to install SAP using a virtual hostname.

```
<swpm location>/sapinst SAPINST_USE_HOSTNAME=<rhxvdb>
```

**Note**  
Before installing SAP ASE database, ASCS and ERS must be installed, and the `/sapmnt` directory must be available on the database server.

### Check SAP host agent version
<a name="host-agent-version"></a>

The SAP host agent is used for ASE database instance control and monitoring. This agent is used by SAP cluster resource agents and hooks. It is recommended that you have the latest version installed on both instances. For more details, see [SAP Note 2219592 – Upgrade Strategy of SAP Host Agent](https://me.sap.com/notes/2219592).

Use the following command to check the version of the host agent.

```
/usr/sap/hostctrl/exe/saphostexec -version
```

## Cluster prerequisites
<a name="cluster-prerequisites"></a>

This section covers the following topics.

**Topics**
+ [Update the `hacluster` password](#update-hacluster)
+ [Set up passwordless authentication between nodes](#setup-authentication)

### Update the `hacluster` password
<a name="update-hacluster"></a>

This is applicable to both cluster nodes. Change the password of the operating system user `hacluster` using the following command.

```
passwd hacluster
```
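For scripted setups, the password can also be set non-interactively. The following sketch only prints the `user:password` line that `chpasswd` expects; the password value is a placeholder, and you should read it from a secure source rather than hardcoding it.

```shell
# Sketch: prepare a non-interactive password change for hacluster (run as root).
# PASS is a placeholder value; do not hardcode real passwords in scripts.
PASS='Example-Passw0rd'
line=$(printf 'hacluster:%s' "$PASS")
echo "$line"
# On the real node, pipe the line into chpasswd instead of echoing it:
# printf 'hacluster:%s\n' "$PASS" | chpasswd
```

Use the same password on both nodes; it is needed later to authenticate `pcs`.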

### Set up passwordless authentication between nodes
<a name="setup-authentication"></a>

For a more comprehensive and easily consumable view of cluster activity, Red Hat provides additional reporting tools. Many of these tools require access to both nodes without entering a password. Red Hat recommends performing this setup for the root user.

For more details, see Red Hat documentation [How to setup SSH Key passwordless login in Red Hat Enterprise Linux?](https://access.redhat.com/solutions/9194) 

## Create cluster and node associations
<a name="associations"></a>

This section covers the following topics.

**Topics**
+ [Start `pcsd` service](#start-pcsd)
+ [Reset configuration – *optional*](#reset-configuration)
+ [Authenticate `pcs` with user `hacluster`](#autheticate-pcs)
+ [Set up node configuration](#node-configuration)

### Start `pcsd` service
<a name="start-pcsd"></a>

This is applicable to both cluster nodes. Run the following command to enable and start the cluster service `pcsd` (pacemaker/corosync configuration system daemon) on both the primary and secondary nodes.

```
systemctl start pcsd.service
systemctl enable pcsd.service
```

Run the following command to check the status of the cluster service.

```
systemctl status pcsd.service
● pcsd.service - PCS GUI and remote configuration interface
   Loaded: loaded (/usr/lib/systemd/system/pcsd.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2023-01-13 14:15:32 IST; 7min ago
     Docs: man:pcsd(8)
           man:pcs(8)
 Main PID: 1445 (pcsd)
    Tasks: 1 (limit: 47675)
   Memory: 27.1M
   CGroup: /system.slice/pcsd.service
           └─1445 /usr/libexec/platform-python -Es /usr/sbin/pcsd
```

### Reset configuration – *optional*
<a name="reset-configuration"></a>

**Note**  
The following instructions help you reset the complete configuration. Run these commands only if you want to start setup from the beginning. You can make minor changes with individual `pcs` commands instead.

Run the following command to back up the current configuration for reference.

```
pcs config show > /tmp/pcsconfig_backup.txt
```

Run the following command to clear the current configuration.

```
pcs cluster destroy
```

### Authenticate `pcs` with user `hacluster`
<a name="autheticate-pcs"></a>

The following command authenticates `pcs` to the `pcs` daemon on the cluster nodes. Run it on only one of the cluster nodes. The username and password must be the same on both nodes, and the username must be `hacluster`.

 **RHEL 7.x** 

```
pcs cluster auth <rhxdbhost01> <rhxdbhost02>
Username: <hacluster>
Password:
rhxhost02: Authorized
rhxhost01: Authorized
```

 **RHEL 8.x** 

```
pcs host auth <rhxdbhost01> <rhxdbhost02>
Username: <hacluster>
Password:
rhxhost02: Authorized
rhxhost01: Authorized
```

### Set up node configuration
<a name="node-configuration"></a>

The following command creates the cluster configuration file and syncs the configuration to both nodes. Run it on only one of the cluster nodes.

 **RHEL 7.x** 

```
pcs cluster setup --name <rhelha> <rhxdbhost01> <rhxdbhost02>
Destroying cluster on nodes: <rhxdbhost01>, <rhxdbhost02>...
<rhxdbhost02>: Stopping Cluster (pacemaker)...
<rhxdbhost01>: Stopping Cluster (pacemaker)...
<rhxdbhost02>: Successfully destroyed cluster
<rhxdbhost01>: Successfully destroyed cluster

Sending 'pacemaker_remote authkey' to '<rhxdbhost01>', '<rhxdbhost02>'
<rhxdbhost01>: successful distribution of the file 'pacemaker_remote authkey'
<rhxdbhost02>: successful distribution of the file 'pacemaker_remote authkey'
Sending cluster config files to the nodes...
<rhxdbhost01>: Succeeded
<rhxdbhost02>: Succeeded

Synchronizing pcsd certificates on nodes <rhxdbhost01>, <rhxdbhost02>...
<rhxdbhost01>: Success
<rhxdbhost02>: Success
Restarting pcsd on the nodes in order to reload the certificates...
<rhxdbhost01>: Success
<rhxdbhost02>: Success.
```

 **RHEL 8.x** 

```
pcs cluster setup <rhelha> <rhxdbhost01> <rhxdbhost02>
        No addresses specified for host '<rhxdbhost01>', using '<rhxdbhost01>'
        No addresses specified for host '<rhxdbhost02>', using '<rhxdbhost02>'
        Destroying cluster on hosts: '<rhxdbhost01>', '<rhxdbhost02>'...
        <rhxdbhost01>: Successfully destroyed cluster
        <rhxdbhost02>: Successfully destroyed cluster
        Requesting remove 'pcsd settings' from '<rhxdbhost01>', '<rhxdbhost02>'
        <rhxdbhost01>: successful removal of the file 'pcsd settings'
        <rhxdbhost02>: successful removal of the file 'pcsd settings'
        Sending 'corosync authkey', 'pacemaker authkey' to '<rhxdbhost01>', '<rhxdbhost02>'
        <rhxdbhost01>: successful distribution of the file 'corosync authkey'
        <rhxdbhost01>: successful distribution of the file 'pacemaker authkey'
        <rhxdbhost02>: successful distribution of the file 'corosync authkey'
        <rhxdbhost02>: successful distribution of the file 'pacemaker authkey'
        Sending 'corosync.conf' to '<rhxdbhost01>', '<rhxdbhost02>'
        <rhxdbhost01>: successful distribution of the file 'corosync.conf'
        <rhxdbhost02>: successful distribution of the file 'corosync.conf'
        Cluster has been successfully set up.
```

# Cluster configuration
<a name="rhel-netweaver-ha-cluster-configuration"></a>

This section covers the following topics.

**Topics**
+ [Cluster resources](rhel-ase-ha-cluster-resources.md)
+ [Sample configuration (pcs config show)](rhel-ase-sample-configuration.md)

# Cluster resources
<a name="rhel-ase-ha-cluster-resources"></a>

This section covers the following topics.

**Topics**
+ [Enable and start the cluster](#start-cluster)
+ [Increase corosync totem timeout](#increase-timeout)
+ [Check cluster status](#cluster-status)
+ [Prepare for resource creation](#resource-creation)
+ [Cluster bootstrap](#cluster-bootstrap)
+ [Create `fence_aws` STONITH resource](#create-stonith)
+ [Create file system resources](#filesystem-resources)
+ [Create overlay IP resources](#overlay-ip-resources)
+ [Create SAP ASE database resource](#ase-database-resource)
+ [Activate cluster](#activate-cluster)

## Enable and start the cluster
<a name="start-cluster"></a>

This is applicable to both cluster nodes. Run the following command to enable and start the `pacemaker` cluster service on both nodes.

```
pcs cluster enable --all
rhxdbhost01: Cluster Enabled
rhxdbhost02: Cluster Enabled


pcs cluster start --all
rhxdbhost01: Starting Cluster...
rhxdbhost02: Starting Cluster...
```

Enabling the `pacemaker` service makes the server automatically rejoin the cluster after a reboot, which ensures that your system remains protected. Alternatively, you can start the `pacemaker` service manually after boot and then investigate the cause of the failure. However, this is generally not required for an SAP ASE database cluster.

## Increase corosync totem timeout
<a name="increase-timeout"></a>

 **RHEL 7.x** 

1. Edit the `/etc/corosync/corosync.conf` file on all cluster nodes to increase the token value, or to add a value if it is not present.

   ```
   totem {
       version: 2
       secauth: off
       cluster_name: my-rhel-sap-cluster
       transport: udpu
       rrp_mode: passive
       token: 29000  <------ Value to be set
   }
   ```

1. Reload corosync with the following command on any one of the cluster nodes. This does not cause any downtime.

   ```
   pcs cluster reload corosync
   ```

1. Use the following command to confirm that the change is active.

   ```
   corosync-cmapctl | grep totem.token
   Runtime.config.totem.token (u32) = 29000
   ```

 **RHEL 8.x** 

Use the following command to increase the token value or to add a value if it is not present.

```
pcs cluster config update totem token=29000
```
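Whichever method you use, it helps to confirm the new timeout programmatically on every node. The following sketch parses the value out of `corosync-cmapctl`-style output; the `out` string is a hardcoded example of the output shown earlier, not a live query.

```shell
# Sketch: extract the runtime totem token value from corosync-cmapctl-style
# output so a script can assert the new timeout is active.
# 'out' is an example line; on a real node use:
#   out=$(corosync-cmapctl | grep totem.token)
out='runtime.config.totem.token (u32) = 29000'
token=${out##*= }     # strip everything through the last '= '
echo "token=$token"
```

A value of `29000` (milliseconds) confirms that the increased timeout is in effect.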

## Check cluster status
<a name="cluster-status"></a>

Once the cluster service `pacemaker` is started, check the cluster status with the `pcs status` command, as shown in the following example. Both the primary (`rhxdbhost01`) and secondary (`rhxdbhost02`) servers should be seen as online.

```
pcs status
Cluster name: rhelha

WARNINGS:
No stonith devices and stonith-enabled is not false

Cluster Summary:
  * Stack: corosync
  * Current DC: rhxdbhost01 (version 2.0.3-5.el8_2.5-4b1f869f0f) - partition with quorum
  * Last updated: Tue Jan 10 21:32:15 2023
  * Last change:  Tue Jan 10 19:46:50 2023 by hacluster via crmd on rhxdbhost01
  * 2 nodes configured
  * 0 resource instances configured

Node List:
  * Online: [ rhxdbhost01 rhxdbhost02 ]

Full List of Resources:
  * No resources

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
```

## Prepare for resource creation
<a name="resource-creation"></a>

To ensure that the cluster does not perform any unexpected actions during resource setup and configuration, set maintenance mode to true.

Run the following command to put the cluster in maintenance mode.

```
pcs property set maintenance-mode=true
```

## Cluster bootstrap
<a name="cluster-bootstrap"></a>

Configure the cluster bootstrap parameters by running the following commands.

```
pcs resource defaults update resource-stickiness=1
pcs resource defaults update migration-threshold=3
```

## Create `fence_aws` STONITH resource
<a name="create-stonith"></a>

Modify the following command to match your configuration values.

```
pcs stonith create rsc_aws_stonith_<DBSID> fence_aws region=us-east-1 pcmk_host_map="rhxdbhost01:i-xxxxinstidforhost1;rhxdbhost02:i-xxxxinstidforhost2" power_timeout=240 pcmk_reboot_timeout=300 pcmk_reboot_retries=2 pcmk_delay_max=30 pcmk_reboot_action=reboot op start timeout=180 op stop timeout=180 op monitor interval=180 timeout=60
```

**Note**  
The default `pcmk_reboot_action` is `reboot`. If you want the instance to remain in a stopped state until it has been investigated, and then manually started again, set `pcmk_reboot_action=off`. Any High Memory (`u-*tb1`) instance or metal instance running on a dedicated host does not support reboot and requires `pcmk_reboot_action=off`.
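The `pcmk_host_map` parameter maps each cluster hostname to its EC2 instance ID. A small sketch can assemble the value and catch formatting mistakes (the `hostname:instance-id` pairs must be separated by semicolons); the hostnames and instance IDs below are the placeholder values from this guide.

```shell
# Sketch: build the pcmk_host_map value from hostname/instance-ID pairs.
# Hostnames and instance IDs are placeholders from this guide's examples;
# on a real node the instance ID is available from the instance metadata.
host1=rhxdbhost01; id1=i-xxxxinstidforhost1
host2=rhxdbhost02; id2=i-xxxxinstidforhost2
pcmk_host_map="${host1}:${id1};${host2}:${id2}"
echo "$pcmk_host_map"
```

Pass the resulting string as `pcmk_host_map="..."` in the `pcs stonith create` command above.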

## Create file system resources
<a name="filesystem-resources"></a>

File system resources are mounted and unmounted by the cluster so that they follow the active location of the SAP ASE database.

Modify and run the following commands to create these file system resources.

 **/sybase** 

```
pcs resource create rsc_fs_<DBSID>_sybase ocf:heartbeat:Filesystem params device="<nfs.fqdn>:/sybase" directory="/sybase" fstype="nfs4" options=" rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,nconnect=2,timeo=600,retrans=2" op start timeout=60s interval=0 op stop timeout=60s interval=0 op monitor interval=20s timeout=40s
```

 **/sybase/<DBSID>/sapdata_1** 

```
pcs resource create rsc_fs_<DBSID>_data ocf:heartbeat:Filesystem params device="<nfs.fqdn>:/asedata" directory="/sybase/<DBSID>/sapdata_1" fstype="nfs4"
options="rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,nconnect=8,timeo=600,retrans=2" op start timeout=60s interval=0 op stop timeout=60s interval=0 op monitor interval=20s timeout=40s
```

 **/sybase/<DBSID>/saplog_1** 

```
pcs resource create rsc_fs_<DBSID>_log ocf:heartbeat:Filesystem params device="<nfs.fqdn>:/aselog" directory="/sybase/<DBSID>/saplog_1" fstype="nfs4"
options="rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,nconnect=2,timeo=600,retrans=2" op start timeout=60s interval=0 op stop timeout=60s interval=0 op monitor interval=20s timeout=40s
```

 **/sybase/<DBSID>/sapdiag** 

```
pcs resource create rsc_fs_<DBSID>_diag ocf:heartbeat:Filesystem params device="<nfs.fqdn>:/sapdiag" directory="/sybase/<DBSID>/sapdiag" fstype="nfs4"
options="rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,nconnect=2,timeo=600,retrans=2" op start timeout=60s interval=0 op stop timeout=60s interval=0 op monitor interval=20s timeout=40s
```

 **/sybase/<DBSID>/saptmp** 

```
pcs resource create rsc_fs_<DBSID>_tmp ocf:heartbeat:Filesystem params device="<nfs.fqdn>:/saptmp" directory="/sybase/<DBSID>/saptmp" fstype="nfs4"
options="rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,nconnect=2,timeo=600,retrans=2" op start timeout=60s interval=0 op stop timeout=60s interval=0 op monitor interval=20s timeout=40s
```

 **/sybasebackup** 

```
pcs resource create rsc_fs_<DBSID>_bkp ocf:heartbeat:Filesystem params device="<nfs.fqdn>:/sybasebackup" directory="/backup" fstype="nfs4"
options="rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,nconnect=2,timeo=600,retrans=2" op start timeout=60s interval=0 op stop timeout=60s interval=0 op monitor interval=20s timeout=40s
```

 **/usr/sap** 

```
pcs resource create rsc_fs_<DBSID>_sap ocf:heartbeat:Filesystem params device="<nfs.fqdn>:/usrsap" directory="/usr/sap" fstype="nfs4"
options="rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,nconnect=2,timeo=600,retrans=2" op start timeout=60s interval=0 op stop timeout=60s interval=0 op monitor interval=20s timeout=40s
```

 **Notes** 
+ Review the mount options to ensure that they match with your operating system, NFS file system type, and the latest recommendations from SAP and AWS.
+ `<nfs.fqdn>` must be the DNS name of the FSx for ONTAP SVM, or a DNS alias that resolves to it. For example, `svm-xxxxxx.fs-xxxxxx.fsx.us-east-1.amazonaws.com`.
+ It is important to create the resources in the correct mount order. For example, `/sybase` must be mounted before the file systems nested under it.
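Because the six `pcs resource create` commands differ only in resource name, volume, and directory, it can help to derive those values from a single spec list and review the output before running anything. This is a sketch; `DBSID`, the SVM DNS name, and the volume paths are placeholder values from this section.

```shell
# Sketch: derive the device/directory pair for each Filesystem resource from a
# short "name:volume:directory" spec list and print it for review.
# DBSID and NFS are placeholders; extend the list with the remaining volumes.
DBSID=ARD
NFS=svm-xxxxxx.fs-xxxxxx.fsx.us-east-1.amazonaws.com
for spec in "sybase:/sybase:/sybase" \
            "data:/asedata:/sybase/${DBSID}/sapdata_1" \
            "log:/aselog:/sybase/${DBSID}/saplog_1"; do
  name=${spec%%:*}; rest=${spec#*:}
  vol=${rest%%:*};  dir=${rest#*:}
  echo "rsc_fs_${DBSID}_${name}: device=${NFS}:${vol} directory=${dir}"
done
```

Printing the derived pairs first makes it easy to spot a wrong volume or directory before any cluster resource is created.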

## Create overlay IP resources
<a name="overlay-ip-resources"></a>

The IP resource provides the details necessary to update the route table entry for overlay IP.

Use the following command to create an SAP ASE database IP resource.

```
pcs resource create rsc_ip_<DBSID>_ASEDB ocf:heartbeat:aws-vpc-move-ip ip=172.16.0.23 interface=eth0 routing_table=rtb-xxxxxroutetable1 op monitor interval=20s timeout=40s --group rsc_asedb_group
```

 **Notes** 
+ If more than one route table is required for connectivity or because of subnet associations, the `routing_table` parameter can have multiple values separated by a comma, with no spaces. For example, `routing_table=rtb-xxxxxroutetable1,rtb-xxxxxroutetable2`.
+ Additional parameters – `lookup_type` and `routing_table_role` are required for shared VPC. For more information, see [Shared VPC – optional](https://docs.aws.amazon.com/sap/latest/sap-netweaver/rhel-netweaver-ha-settings.html#rhel-netweaver-ha-shared-vpc).
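When several route tables are involved, a one-liner can join the IDs into the comma-separated value the `routing_table` parameter expects. The route table IDs below are the placeholder values from this section.

```shell
# Sketch: join multiple route table IDs into the comma-separated value for
# the routing_table parameter (no spaces between the IDs).
tables="rtb-xxxxxroutetable1 rtb-xxxxxroutetable2"
routing_table=$(printf '%s' "$tables" | tr ' ' ',')
echo "routing_table=${routing_table}"
```

Use the printed value directly in the `pcs resource create` command for the overlay IP.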

## Create SAP ASE database resource
<a name="ase-database-resource"></a>

SAP ASE database is started and stopped using cluster resources.

Modify and run the following command to create the `SAPDatabase` resource.

```
pcs resource create rsc_ase_<DBSID>_ASEDB ocf:heartbeat:SAPDatabase SID=<DBSID> DBTYPE=SYB STRICT_MONITORING=TRUE op start timeout=300 op stop timeout=300 --group grp_<DBSID>_ASEDB
```

Use the following command for more details on the resource.

```
pcs resource describe ocf:heartbeat:SAPDatabase
```

## Activate cluster
<a name="activate-cluster"></a>

Use `pcs config show` to verify that all values have been entered correctly.

After confirming the values, set maintenance mode to false using the following command. This allows the cluster to take control of the resources.

```
pcs property set maintenance-mode=false
```

See the [Sample configuration](https://docs.aws.amazon.com/sap/latest/sap-netweaver/rhel-sample-configuration.html).

# Sample configuration (pcs config show)
<a name="rhel-ase-sample-configuration"></a>

The following sample configuration is for the SAP ASE database cluster described in this section.

```
pcs config show
Cluster Name: rhelha
Corosync Nodes:
 rhxdbhost01 rhxdbhost02
Pacemaker Nodes:
 rhxdbhost01 rhxdbhost02

Resources:
 Group: rsc_asedb_group
  Meta Attrs: resource-stickiness=5000
  Resource: rsc_vip_asedb (class=ocf provider=heartbeat type=aws-vpc-move-ip)
   Attributes: interface=eth0 ip=172.16.0.23 routing_table=rtb-0b3f1d6196f45300d
   Operations: monitor interval=60s timeout=30s (rsc_vip_asedb-monitor-interval-60s)
               start interval=0s timeout=180s (rsc_vip_asedb-start-interval-0s)
               stop interval=0s timeout=180s (rsc_vip_asedb-stop-interval-0s)
  Resource: rsc_fs_sybase (class=ocf provider=heartbeat type=Filesystem)
   Attributes: device=svm-09794aeece44cc025.fs-04af26e8311974f41.fsx.us-east-1.amazonaws.com:/sybase directory=/sybase force_unmount=safe fstype=nfs4 options=rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,nconnect=2,timeo=600,retrans=2
   Operations: monitor interval=20s timeout=40s (rsc_fs_sybase-monitor-interval-20s)
               start interval=0s timeout=60s (rsc_fs_sybase-start-interval-0s)
               stop interval=0s timeout=60s (rsc_fs_sybase-stop-interval-0s)
  Resource: rsc_fs_data (class=ocf provider=heartbeat type=Filesystem)
   Attributes: device=svm-01c02d046ae5a24a2.fs-04af26e8311974f41.fsx.us-east-1.amazonaws.com:/asedata directory=/sybase/ARD/sapdata_1 force_unmount=safe fstype=nfs4 options=rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,nconnect=8,timeo=600,retrans=2
   Operations: monitor interval=20s timeout=40s (rsc_fs_data-monitor-interval-20s)
               start interval=0s timeout=60s (rsc_fs_data-start-interval-0s)
               stop interval=0s timeout=60s (rsc_fs_data-stop-interval-0s)
  Resource: rsc_fs_log (class=ocf provider=heartbeat type=Filesystem)
   Attributes: device=svm-04cd525dbd0b354d2.fs-04af26e8311974f41.fsx.us-east-1.amazonaws.com:/aselog directory=/sybase/ARD/saplog_1 force_unmount=safe fstype=nfs4 options=rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,nconnect=2,timeo=600,retrans=2
   Operations: monitor interval=20s timeout=40s (rsc_fs_log-monitor-interval-20s)
               start interval=0s timeout=60s (rsc_fs_log-start-interval-0s)
               stop interval=0s timeout=60s (rsc_fs_log-stop-interval-0s)
  Resource: rsc_fs_sapdiag (class=ocf provider=heartbeat type=Filesystem)
   Attributes: device=svm-09794aeece44cc025.fs-04af26e8311974f41.fsx.us-east-1.amazonaws.com:/sapdiag directory=/sybase/ARD/sapdiag force_unmount=safe fstype=nfs4 options=rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,nconnect=2,timeo=600,retrans=2
   Operations: monitor interval=20s timeout=40s (rsc_fs_sapdiag-monitor-interval-20s)
               start interval=0s timeout=60s (rsc_fs_sapdiag-start-interval-0s)
               stop interval=0s timeout=60s (rsc_fs_sapdiag-stop-interval-0s)
  Resource: rsc_fs_saptmp (class=ocf provider=heartbeat type=Filesystem)
   Attributes: device=svm-09794aeece44cc025.fs-04af26e8311974f41.fsx.us-east-1.amazonaws.com:/saptmp directory=/sybase/ARD/saptmp force_unmount=safe fstype=nfs4 options=rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,nconnect=2,timeo=600,retrans=2
   Operations: monitor interval=20s timeout=40s (rsc_fs_saptmp-monitor-interval-20s)
               start interval=0s timeout=60s (rsc_fs_saptmp-start-interval-0s)
               stop interval=0s timeout=60s (rsc_fs_saptmp-stop-interval-0s)
  Resource: rsc_fs_backup (class=ocf provider=heartbeat type=Filesystem)
   Attributes: device=svm-09794aeece44cc025.fs-04af26e8311974f41.fsx.us-east-1.amazonaws.com:/backup directory=/sybasebackup force_unmount=safe fstype=nfs4 options=rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,nconnect=2,timeo=600,retrans=2
   Operations: monitor interval=20s timeout=40s (rsc_fs_backup-monitor-interval-20s)
               start interval=0s timeout=60s (rsc_fs_backup-start-interval-0s)
               stop interval=0s timeout=60s (rsc_fs_backup-stop-interval-0s)
  Resource: rsc_fs_usrsap (class=ocf provider=heartbeat type=Filesystem)
   Attributes: device=svm-09794aeece44cc025.fs-04af26e8311974f41.fsx.us-east-1.amazonaws.com:/usrsap directory=/usr/sap force_unmount=safe fstype=nfs4 options=rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,nconnect=2,timeo=600,retrans=2
   Operations: monitor interval=20s timeout=40s (rsc_fs_usrsap-monitor-interval-20s)
               start interval=0s timeout=60s (rsc_fs_usrsap-start-interval-0s)
               stop interval=0s timeout=60s (rsc_fs_usrsap-stop-interval-0s)
  Resource: sybaseARD (class=ocf provider=heartbeat type=SAPDatabase)
   Attributes: DBTYPE=SYB SID=ARD STRICT_MONITORING=TRUE
   Operations: methods interval=0s timeout=5s (sybaseARD-methods-interval-0s)
               monitor interval=120s timeout=60s (sybaseARD-monitor-interval-120s)
               start interval=0s timeout=300 (sybaseARD-start-interval-0s)
               stop interval=0s timeout=300 (sybaseARD-stop-interval-0s)
Stonith Devices:
 Resource: clusterfence (class=stonith type=fence_aws)
  Attributes: pcmk_delay_max=45 pcmk_host_map=rhxdbhost01:i-03939ad3f07e14e3f;rhxdbhost02:i-09f138e3a1290bfde pcmk_reboot_action=off pcmk_reboot_retries=4 pcmk_reboot_timeout=600 power_timeout=240 region=us-east-1
  Operations: monitor interval=300 timeout=60 (clusterfence-monitor-interval-300)
              start interval=0s timeout=600 (clusterfence-start-interval-0s)
Fencing Levels:
Location Constraints:
Ordering Constraints:
Colocation Constraints:
Ticket Constraints:
Alerts:
 No alerts defined
Resources Defaults:
  Meta Attrs: rsc_defaults-meta_attributes
    migration-threshold=1
Operations Defaults:
  No defaults set
Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: rhelha
 dc-version: 2.1.2-4.el8_6.7-ada5c3b36e2
 have-watchdog: false
 last-lrm-refresh: 1693394303
 maintenance-mode: false
Tags:
 No tags defined
Quorum:
  Options:
```