

# SAP ASCS and Cluster Setup
<a name="sap-nw-pacemaker-rhel-setup"></a>

This section covers the following topics.

**Topics**
+ [SAP Shared File Systems](sap-shared-filesystems-nw-rhel.md)
+ [Check IP availability and resolution](check-ip-availability-resolution-nw-rhel.md)
+ [Install SAP](install-sap-nw-rhel.md)
+ [Configure SAP for Cluster Control](sap-ascs-service-control-nw-rhel.md)
+ [Cluster Node Setup](cluster-node-setup-nw-rhel.md)
+ [Cluster Configuration](cluster-config-nw-rhel.md)

# SAP Shared File Systems
<a name="sap-shared-filesystems-nw-rhel"></a>

**Topics**
+ [Select Shared Storage](#select-storage-type-nw-rhel)
+ [Create file systems](#create-filesystems-nw-rhel)
+ [Create mount point directories](#create-mount-dirs-nw-rhel)
+ [Update /etc/fstab](#update-fstab-nw-rhel)
+ [Temporarily mount ASCS and ERS directories for installation (classic only)](#temp-mount-dirs-nw-rhel)

## Select Shared Storage
<a name="select-storage-type-nw-rhel"></a>

SAP NetWeaver high availability deployments require shared file systems. On Linux, you can use either [Amazon Elastic File System](https://aws.amazon.com/efs/) or [Amazon FSx for NetApp ONTAP](https://aws.amazon.com/fsx/netapp-ontap/). Choose between these options based on your requirements for resilience, performance, and cost. For detailed setup information, see [Getting started with Amazon Elastic File System](https://docs.aws.amazon.com/efs/latest/ug/getting-started.html) or [Getting started with Amazon FSx for NetApp ONTAP](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/getting-started.html).

We recommend sharing a single Amazon EFS or FSx for ONTAP file system across multiple SIDs within an account.

The file system’s DNS name is the simplest mounting option. When connecting from an Amazon EC2 instance, the DNS name automatically resolves to the mount target’s IP address in that instance’s Availability Zone. You can also create an alias (CNAME) to help identify the shared file system’s purpose. Throughout this document, we use `<nfs.fqdn>`.

Examples:
+  `file-system-id.efs.aws-region.amazonaws.com` 
+  `svm-id.fs-id.fsx.aws-region.amazonaws.com` 
+  `qas_sapmnt_share.example.com` 

**Note**  
Review the `enableDnsHostnames` and `enableDnsSupport` DNS attributes for your VPC. For more information, see [View and update DNS attributes for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html#vpc-dns-updating).

## Create file systems
<a name="create-filesystems-nw-rhel"></a>

The following shared file systems are covered in this document:


| NFS Location Structure | NFS Location Example | File System Location Structure | File System Location Example | 
| --- | --- | --- | --- | 
|  <SID>\_sapmnt  |   `RHX_sapmnt`   |  /sapmnt/<SID>  |   `/sapmnt/RHX`   | 
|  <SID>\_ASCS<ascs\_sys\_nr>  |   `RHX_ASCS00`   |  /usr/sap/<SID>/ASCS<ascs\_sys\_nr>  |   `/usr/sap/RHX/ASCS00`   | 
|  <SID>\_ERS<ers\_sys\_nr>  |   `RHX_ERS10`   |  /usr/sap/<SID>/ERS<ers\_sys\_nr>  |   `/usr/sap/RHX/ERS10`   | 

The following options can differ depending on how you architect and operate your systems:
+ ASCS and ERS mount points - In simple-mount architecture, you can share the entire `/usr/sap/<SID>` directory. This document uses separate mount points to simplify migration and follow SAP’s recommendation for local application server executables when co-hosting ASCS/ERS.
+ Transport directory - `/usr/sap/trans` is optional for ASCS installations. Add this shared directory if your change management processes require it.
+ Home directory - This document uses local home directories to ensure `<sid>adm` access during NFS issues. Consider a shared home directory if you need consistent user environments across nodes.
+ NFS location naming - The "NFS Location" names are arbitrary and can be chosen based on your naming conventions (e.g., `myEFSMount1`, `prod_sapmnt`, etc.). The "File system location" follows the standard SAP directory structure and should use the parameter references shown.

For more information, see [SAP System Directories on UNIX](https://help.sap.com/docs/SAP_NETWEAVER_750/ff18034f08af4d7bb33894c2047c3b71/2744f17a26a74a8abfd202c4f5dc9a0f.html).

Using the file system created in the previous step, temporarily mount the root directory of the NFS file system. `/mnt` is available by default; you can substitute another temporary location.

**Note**  
The following commands use the NFS location names from the table above. Replace `<SID>_sapmnt`, `<SID>_ASCS<ascs_sys_nr>`, and `<SID>_ERS<ers_sys_nr>` with your chosen NFS location names and parameter values.

```
# mount <nfs.fqdn>:/ /mnt
# mkdir -p /mnt/<SID>_sapmnt
# mkdir -p /mnt/<SID>_ASCS<ascs_sys_nr>
# mkdir -p /mnt/<SID>_ERS<ers_sys_nr>
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md) *:

  ```
  # mount fs-xxxxxxxxxxxxxefs1.efs.us-east-1.amazonaws.com:/ /mnt
  # mkdir -p /mnt/RHX_sapmnt
  # mkdir -p /mnt/RHX_ASCS00
  # mkdir -p /mnt/RHX_ERS10
  ```
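Because the NFS location names follow a fixed pattern, you can derive the paths from shell variables instead of typing each one by hand. A minimal sketch, assuming variables that mirror the Parameter Reference values (the variable names themselves are illustrative):

```shell
# Illustrative variables mirroring the Parameter Reference values
SID=RHX
ASCS_NR=00
ERS_NR=10

# Derive the temporary mount paths used in the mkdir commands above
for name in "${SID}_sapmnt" "${SID}_ASCS${ASCS_NR}" "${SID}_ERS${ERS_NR}"; do
  echo "/mnt/${name}"
done
```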

During SAP installation, the `<sid>adm` user and proper directory ownership are created. Until then, ensure that the installation process has sufficient access by setting temporary permissions on the directories:

```
# chmod 777 /mnt/<SID>_sapmnt /mnt/<SID>_ASCS<ascs_sys_nr> /mnt/<SID>_ERS<ers_sys_nr>
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md) *:

  ```
  # chmod 777 /mnt/RHX_sapmnt /mnt/RHX_ASCS00 /mnt/RHX_ERS10
  ```

The SAP installation process will automatically set the correct ownership and permissions for operational use.

Unmount the temporary mount:

```
# umount /mnt
```

## Create mount point directories
<a name="create-mount-dirs-nw-rhel"></a>

This is applicable to both cluster nodes. Create the directories for the required mount points (permanent or cluster controlled):

```
# mkdir /sapmnt
# mkdir -p /usr/sap/<SID>/ASCS<ascs_sys_nr>
# mkdir -p /usr/sap/<SID>/ERS<ers_sys_nr>
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md) *:

  ```
  # mkdir /sapmnt
  # mkdir -p /usr/sap/RHX/ASCS00
  # mkdir -p /usr/sap/RHX/ERS10
  ```

## Update /etc/fstab
<a name="update-fstab-nw-rhel"></a>

This is applicable to both cluster nodes. `/etc/fstab` is a configuration table containing the details required for mounting and unmounting file systems to a host.

Add the file systems not managed by the cluster to `/etc/fstab`.

For both **simple-mount** and **classic** architectures, prepare and append an entry for the `sapmnt` file system to `/etc/fstab`:

```
<nfs.fqdn>:/<SID>_sapmnt    /sapmnt    nfs    nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport    0    0
```

 **Simple-mount only** – prepare and append entries for the ASCS and ERS file systems to `/etc/fstab`:

```
<nfs.fqdn>:/<SID>_ASCS<ascs_sys_nr>   /usr/sap/<SID>/ASCS<ascs_sys_nr>  nfs    nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport    0    0
<nfs.fqdn>:/<SID>_ERS<ers_sys_nr>     /usr/sap/<SID>/ERS<ers_sys_nr>    nfs    nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport    0    0
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md) *:

  ```
  fs-xxxxxxxxxxxxxefs1.efs.us-east-1.amazonaws.com:/RHX_sapmnt    /sapmnt               nfs    nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport    0    0
  fs-xxxxxxxxxxxxxefs1.efs.us-east-1.amazonaws.com:/RHX_ASCS00    /usr/sap/RHX/ASCS00   nfs    nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport    0    0
  fs-xxxxxxxxxxxxxefs1.efs.us-east-1.amazonaws.com:/RHX_ERS10     /usr/sap/RHX/ERS10    nfs    nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport    0    0
  ```

Verify that your mount options are:
+ Compatible with your operating system version
+ Supported by your chosen NFS file system type (EFS or FSx for ONTAP)
+ Aligned with current SAP recommendations

Consult SAP and AWS documentation for the latest mount option recommendations.
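Because the fstab entries are long and repetitive, generating them from variables helps avoid transcription errors. A sketch using illustrative variable names (review the output before appending it to `/etc/fstab`):

```shell
# Illustrative values; substitute your own file system DNS name and SID
NFS_FQDN="fs-xxxxxxxxxxxxxefs1.efs.us-east-1.amazonaws.com"
SID=RHX
NFS_OPTS="nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport"

# Render the sapmnt entry; review it, then append it to /etc/fstab
printf '%s:/%s_sapmnt    /sapmnt    nfs    %s    0    0\n' \
  "$NFS_FQDN" "$SID" "$NFS_OPTS"
```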

Use the following command to mount the file systems defined in `/etc/fstab`:

```
# mount -a
```

Use the following command to check that the required file systems are available:

```
# df -h
```

## Temporarily mount ASCS and ERS directories for installation (classic only)
<a name="temp-mount-dirs-nw-rhel"></a>

This is only applicable to the classic architecture. Simple-mount architecture has these directories permanently available in `/etc/fstab`.

Mount ASCS and ERS directories for installation.

Use the following command on the instance where you plan to install ASCS:

```
# mount <nfs.fqdn>:/<SID>_ASCS<ascs_sys_nr>  /usr/sap/<SID>/ASCS<ascs_sys_nr>
```

Use the following command on the instance where you plan to install ERS:

```
# mount <nfs.fqdn>:/<SID>_ERS<ers_sys_nr>  /usr/sap/<SID>/ERS<ers_sys_nr>
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md) *:

  ```
  # mount fs-xxxxxxxxxxxxxefs1.efs.us-east-1.amazonaws.com:/RHX_ASCS00  /usr/sap/RHX/ASCS00
  # mount fs-xxxxxxxxxxxxxefs1.efs.us-east-1.amazonaws.com:/RHX_ERS10   /usr/sap/RHX/ERS10
  ```

# Check IP availability and resolution
<a name="check-ip-availability-resolution-nw-rhel"></a>

## Add Overlay IP for SAP Installation
<a name="add-oip-sapinst-nw-rhel"></a>

SAP installation should be done using the virtual hostnames assigned to the overlay IPs. Before adding the overlay IPs to the instances, ensure that the VPC route table entries have been created as described in [Add VPC Route Table Entries for Overlay IPs](sap-nw-pacemaker-rhel-infra-setup.md#rt-rhel).

To facilitate SAP installation, manually add the Overlay IPs to the instances:

1. To the instance where you intend to install the **ASCS** 

   ```
   # ip addr add <ascs_overlayip>/32 dev eth0
   ```

1. To the instance where you intend to install the **ERS** 

   ```
   # ip addr add <ers_overlayip>/32 dev eth0
   ```

Note the following:
+ Route table entries for the overlay IPs must be created first (see [Add VPC Route Table Entries for Overlay IPs](sap-nw-pacemaker-rhel-infra-setup.md#rt-rhel))
+ This IP configuration is temporary and will be lost after instance reboot
+ The cluster will take over management of these IPs once configured

## Hostname Resolution
<a name="hostname-resolution-nw-rhel"></a>

You must ensure that all instances can resolve all hostnames in use. Add the hostnames for the cluster nodes to the `/etc/hosts` file on all cluster nodes. This ensures that the hostnames can be resolved even in the case of DNS issues. Configure the `/etc/hosts` file for a two-node cluster:

```
# cat /etc/hosts
<primary_ip_1> <hostname_1>.example.com <hostname_1>
<primary_ip_2> <hostname_2>.example.com <hostname_2>
<ascs_overlayip> <ascs_virt_hostname>.example.com <ascs_virt_hostname>
<ers_overlayip> <ers_virt_hostname>.example.com <ers_virt_hostname>
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md) *:

  ```
  # cat /etc/hosts
  10.1.10.1 rhxhost01.example.com rhxhost01
  10.1.20.1 rhxhost02.example.com rhxhost02
  172.16.30.5 rhxascs.example.com rhxascs
  172.16.30.6 rhxers.example.com rhxers
  ```

In this configuration, the secondary IPs used for the second cluster ring are not included; they are used only in the cluster configuration. You can allocate virtual hostnames for administration and identification purposes.

**Important**  
The overlay IP is outside of the VPC CIDR range and cannot be reached from locations that are not associated with the route table, including on-premises networks.
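The hosts entries can likewise be rendered from parameter values, which helps keep both nodes identical. A sketch with illustrative variable names:

```shell
# Illustrative values from the Parameter Reference
ASCS_OIP=172.16.30.5
ASCS_VHOST=rhxascs
DOMAIN=example.com

# Render one entry; append the output to /etc/hosts on both nodes
printf '%s %s.%s %s\n' "$ASCS_OIP" "$ASCS_VHOST" "$DOMAIN" "$ASCS_VHOST"
```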

# Install SAP
<a name="install-sap-nw-rhel"></a>

The following topics provide information about installing SAP on AWS in a highly available cluster. Review the SAP documentation for more details.

**Topics**
+ [Final checks for software provisioning](#final-checks-software-provisioning-nw-rhel)
+ [Install SAP ASCS and ERS instances](#install-sap-instances-nw-rhel)
+ [Kernel upgrade and ENSA2 – optional](#kernel-ensa2-nw-rhel)
+ [Check SAP host agent version](#check-host-agent-nw-rhel)

## Final checks for software provisioning
<a name="final-checks-software-provisioning-nw-rhel"></a>

Before running SAP Software Provisioning Manager (SWPM), ensure that the following prerequisites are consistent across both cluster nodes:
+ Collect any missing details and populate the [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md) section to ensure clarity on the specific values used in installation commands.
+  **User and Group Configuration** - If operating system groups are pre-defined, ensure matching UID and GID values for `<sid>adm` and `sapsys` across both cluster nodes.
+  **Installation Software** - Download the latest version of Software Provisioning Manager (SWPM) and SAP installation media for your SAP release from [Software Provisioning Manager](https://support.sap.com/en/tools/software-logistics-tools/software-provisioning-manager.html).
+  **Network Configuration** - Verify both cluster nodes have identical configuration with all routes, overlay IPs, and virtual hostnames accessible. This ensures that either node can run ASCS or ERS roles.
+  **File Systems** - Verify all shared file systems are mounted and accessible from both nodes with consistent mount points and permissions.
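Checking that UID and GID values match across nodes takes only `id` and `getent`. The sketch below demonstrates the commands with the root user, because `<sid>adm` and `sapsys` exist only after installation; substitute your own names and compare the output across both nodes:

```shell
# Numeric UID of a user (substitute <sid>adm); output must match on both nodes
id -u root

# Numeric GID of a group (substitute sapsys); output must match on both nodes
getent group root | cut -d: -f3
```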

## Install SAP ASCS and ERS instances
<a name="install-sap-instances-nw-rhel"></a>

Install the SAP ASCS and ERS instances using their virtual hostnames to ensure installation against the overlay IP addresses. This approach is required for proper cluster integration.

Install the ASCS instance on `<instance_id_1>` using virtual hostname `<ascs_virt_hostname>` with the `SAPINST_USE_HOSTNAME` parameter. This ensures the installation uses the overlay IP rather than the physical hostname:

```
# <swpm location>/sapinst SAPINST_USE_HOSTNAME=<ascs_virt_hostname>
```

Install the ERS instance on `<instance_id_2>` using virtual hostname `<ers_virt_hostname>` with the `SAPINST_USE_HOSTNAME` parameter. This ensures the installation uses the overlay IP rather than the physical hostname:

```
# <swpm location>/sapinst SAPINST_USE_HOSTNAME=<ers_virt_hostname>
```

Once the ASCS and ERS installations are complete, you must install and configure the database and the SAP Primary Application Server (PAS); these components are not covered in this cluster setup documentation. Optionally, you can also install and configure an Additional Application Server (AAS). For more details on installing these SAP NetWeaver components, refer to the SAP Help Portal.

For additional information on unattended installation options, see [SAP Note 2230669 – System Provisioning Using an Input Parameter File](https://me.sap.com/notes/2230669) (requires SAP portal access).

## Kernel upgrade and ENSA2 – optional
<a name="kernel-ensa2-nw-rhel"></a>

As of AS ABAP Release 7.53 (ABAP Platform 1809), the new Standalone Enqueue Server 2 (ENSA2) is installed by default. ENSA2 replaces the previous version – ENSA1.

If you have an older version of SAP NetWeaver, consider following the SAP guidance to upgrade the kernel and update the Enqueue Server configuration. An upgrade will allow you to take advantage of the features available in the latest version. For more information, see the following SAP Notes (require SAP portal access):
+  [SAP Note 2630416 – Support for Standalone Enqueue Server 2](https://me.sap.com/notes/2630416) 
+  [SAP Note 2711036 – Usage of the Standalone Enqueue Server 2 in an HA Environment](https://me.sap.com/notes/2711036) 

## Check SAP host agent version
<a name="check-host-agent-nw-rhel"></a>

This is applicable to both cluster nodes. The SAP host agent is used for system instance control and monitoring. This agent is used by SAP cluster resource agents and hooks. It is recommended that you have the latest version installed on both instances. For more details, see [SAP Note 2219592 – Upgrade Strategy of SAP Host Agent](https://me.sap.com/notes/2219592).

Use the following command to check the version of the host agent:

```
# /usr/sap/hostctrl/exe/saphostexec -version
```

# Configure SAP for Cluster Control
<a name="sap-ascs-service-control-nw-rhel"></a>

Modify SAP service configurations, user permissions, and system integration settings to enable proper cluster control of ASCS and ERS instances.

**Topics**
+ [Add <sid>adm to haclient group](#add-sidadm-haclient-nw-rhel)
+ [Modify SAP profiles for start operations and cluster hook](#modify-sap-profiles-nw-rhel)
+ [Enable sapping and sappong Services (Simple-Mount Only)](#sapping-sappong-services-nw-rhel)
+ [Ensure ASCS and ERS SAP Services can run on either node (systemd)](#modify-sapservices-nw-rhel)
+ [Configure dependencies for Pacemaker and SAP services (systemd)](#configure-systemd-deps-nw-rhel)
+ [(Alternative) Ensure ASCS and ERS SAP Services can run on either node (sysV)](#modify-sapservices-sysv-nw-rhel)

## Add <sid>adm to haclient group
<a name="add-sidadm-haclient-nw-rhel"></a>

This is applicable to both cluster nodes. An `haclient` operating system group is created when the cluster connector package is installed. Adding the `<sid>adm` user to this group ensures that the cluster has the necessary access. Run the following command as root:

```
# usermod -a -G haclient <sid>adm
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md) *:

  ```
  # usermod -a -G haclient rhxadm
  ```
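You can confirm that the membership took effect with `id -nG`, shown here with the root user as a stand-in for `<sid>adm`. Note that processes already running as `<sid>adm` only pick up the new group after a fresh login:

```shell
# List the user's groups one per line and look for an exact match
# (substitute <sid>adm for root, and haclient for the expected group)
id -nG root | tr ' ' '\n' | grep -x root
```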

## Modify SAP profiles for start operations and cluster hook
<a name="modify-sap-profiles-nw-rhel"></a>

This action ensures compatibility between the SAP start framework and cluster actions. Modify the SAP profiles to change the start behavior of the SAP instance and processes, and ensure that `sapcontrol` is aware that the system is being managed by a Pacemaker cluster.
+ ASCS profile – `/usr/sap/<SID>/SYS/profile/<SID>_ASCS<ascs_sys_nr>_<ascs_virt_hostname>` 
+ ERS profile – `/usr/sap/<SID>/SYS/profile/<SID>_ERS<ers_sys_nr>_<ers_virt_hostname>` 

The profile directory `/usr/sap/<SID>/SYS/profile/` is typically a symbolic link to `/sapmnt/<SID>/profile/` on the shared NFS file system. This means profile modifications made on one node are immediately visible on all cluster nodes. You can modify the profiles from either node.
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md) *:
  + ASCS profile example – `/usr/sap/RHX/SYS/profile/RHX_ASCS00_rhxascs` 
  + ERS profile example – `/usr/sap/RHX/SYS/profile/RHX_ERS10_rhxers` 

Follow the procedure outlined below to make the necessary changes:

1.  **Program or process start behavior** – In case of failure, processes must be restarted. Where a process starts, and in what order, must be controlled by the cluster rather than by the SAP start framework behavior defined in the profiles. Enqueue locks can be lost if this parameter is not changed. In newer SAP installations, the profiles may already contain `Start_Program_XX` instead of `Restart_Program_XX`. If `Start_Program_XX` is already present, no changes are needed for this step.  
**Example**  

------
#### [ ENSA1 ]

    **ASCS** 

   ```
   #For ENSA1 (_EN)
   #Changing Restart to Start for Cluster compatibility
   #Old value: Restart_Program_XX = local $(_EN) pf=$(_PF)
   
   Start_Program_XX = local $(_EN) pf=$(_PF)
   ```

    **ERS** 

   ```
   #For ENSA1 (_ER)
   #Changing Restart to Start for Cluster compatibility
   #Old value: Restart_Program_XX = local $(_ER) pf=$(_PFL) NR=$(SCSID)
   
   Start_Program_XX = local $(_ER) pf=$(_PFL) NR=$(SCSID)
   ```

    *`XX` indicates the start-up order. This value may differ in your installation; retain the existing number.* 

------
#### [ ENSA2 ]

    **ASCS** 

   ```
   #For ENSA2 (_ENQ)
   #Changing Restart to Start for Cluster compatibility
   #Old value: Restart_Program_XX = local $(_ENQ) pf=$(_PF)
   
   Start_Program_XX = local $(_ENQ) pf=$(_PF)
   ```

    **ERS** 

   ```
   #For ENSA2 (_ENQR)
   #Changing Restart to Start for Cluster compatibility
   #Old value: Restart_Program_XX = local $(_ENQR) pf=$(_PFL) NR=$(SCSID)
   
   Start_Program_XX = local $(_ENQR) pf=$(_PFL) NR=$(SCSID)
   ```

    *`XX` indicates the start order. This value may differ in your installation; retain the existing number.* 

------

1.  **Disable instance auto start in both profiles** – When an instance restarts, the SAP start framework should not start ASCS and ERS automatically. Add the following parameter in both profiles to prevent an auto start:

   ```
   # Disable instance auto start
   Autostart = 0
   ```

1.  **Add cluster connector details in both profiles** – The connector integrates the SAP start and control frameworks of SAP NetWeaver with the Red Hat cluster to assist with maintenance and awareness of state. Add the following parameters in both profiles:

   ```
   # Added for Cluster Connectivity
   service/halib = $(DIR_EXECUTABLE)/saphascriptco.so
   service/halib_cluster_connector = /usr/bin/sap_cluster_connector
   ```
**Important**  
The RPM package `sap-cluster-connector` is spelled with *dashes*, while the executable `/usr/bin/sap_cluster_connector` installed by the package is spelled with *underscores*. Ensure that the executable name `/usr/bin/sap_cluster_connector` is used in both profiles.

1.  **Restart services** – Restart the SAP services for ASCS and ERS to ensure that the preceding settings take effect. Adjust the system number to match the service.

    **ASCS** 

   ```
   # /usr/sap/hostctrl/exe/sapcontrol -nr <ascs_sys_nr> -function RestartService
   ```

    **ERS** 

   ```
   # /usr/sap/hostctrl/exe/sapcontrol -nr <ers_sys_nr> -function RestartService
   ```
   +  *Example using values from [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md) *:

      **ASCS** 

     ```
     # /usr/sap/hostctrl/exe/sapcontrol -nr 00 -function RestartService
     ```

      **ERS** 

     ```
     # /usr/sap/hostctrl/exe/sapcontrol -nr 10 -function RestartService
     ```

1.  **Check integration using `sapcontrol`** – `sapcontrol` includes the functions `HACheckConfig` and `HACheckFailoverConfig`, which can be used to check the configuration, including awareness of the cluster connector. These checks have limited value before the cluster is configured, but you can run `HACheckFailoverConfig` to ensure that the base configuration is in place.

    **ASCS** 

   ```
   # /usr/sap/hostctrl/exe/sapcontrol -nr <ascs_sys_nr> -function HACheckFailoverConfig
   ```
   +  *Example using values from [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md) *:

      **ASCS** 

     ```
     # /usr/sap/hostctrl/exe/sapcontrol -nr 00 -function HACheckFailoverConfig
     
     10.10.2025 01:23:55
     HACheckFailoverConfig
     OK
     state, category, description, comment
     SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version, SAPInstance includes is-ers patch
     ```
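The **Program or process start behavior** edit above can also be scripted. A hedged sketch, demonstrated on a temporary file rather than a real instance profile (always back up the profile before editing it in place):

```shell
# Temporary stand-in for an instance profile
profile=$(mktemp)
echo 'Restart_Program_00 = local $(_ENQ) pf=$(_PF)' > "$profile"

# Rewrite Restart_Program_XX to Start_Program_XX, preserving the number
sed -i 's/^Restart_Program_\([0-9][0-9]\)/Start_Program_\1/' "$profile"
cat "$profile"
```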

## Enable sapping and sappong Services (Simple-Mount Only)
<a name="sapping-sappong-services-nw-rhel"></a>

For simple-mount architecture, enable the sapping and sappong systemd services on both cluster nodes. These services ensure proper SAP instance startup coordination between systemd and the cluster.

The sapping service runs before sapinit during boot and temporarily hides the `/usr/sap/sapservices` file to prevent automatic SAP instance startup. The sappong service runs after sapinit and restores the sapservices file, making it available for cluster management while maintaining compatibility with SAP management tools.

```
# systemctl enable sapping
# systemctl enable sappong
```

Verify the services are enabled:

```
# systemctl status sapping
# systemctl status sappong
```

**Note**  
Both services will show "inactive (dead)" status, which is normal for one-shot services that only run during system boot.
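Conceptually, the hide-and-restore mechanic described above behaves like the following sketch. This is an illustration only, using a temporary directory and an arbitrary hidden file name; the real services operate on `/usr/sap/sapservices` around `sapinit` during boot:

```shell
dir=$(mktemp -d)
echo '# sapstartsrv entries' > "$dir/sapservices"

# sapping (runs before sapinit): hide the file so no instance auto-starts
mv "$dir/sapservices" "$dir/sapservices.hidden"
test ! -e "$dir/sapservices" && echo "hidden during boot"

# sappong (runs after sapinit): restore the file for the cluster and SAP tools
mv "$dir/sapservices.hidden" "$dir/sapservices"
cat "$dir/sapservices"
```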

## Ensure ASCS and ERS SAP Services can run on either node (systemd)
<a name="modify-sapservices-nw-rhel"></a>

This is applicable to both cluster nodes.

To ensure that the cluster can orchestrate availability by starting and stopping instances on either cluster node, the SAP services must be registered on both nodes and auto-start must be disabled.

In recent operating system and SAP kernel versions, SAP offers systemd integration for sapstartsrv, which controls how SAP instances are stopped and started. This is the recommended configuration and a requirement for the simple-mount architecture.

For more details, see the following SAP Notes (require SAP portal access):
+  [SAP Note 3139184 – Linux: systemd integration for sapstartsrv and SAP Host Agent](https://me.sap.com/notes/3139184) 
+  [SAP Note 3115048 – sapstartsrv with native Linux systemd support](https://me.sap.com/notes/3115048) 

You can confirm whether systemd integration is in place by running the following command. Systemd is in place if SAP services (for example, `SAPRHX_00.service` and `SAPRHX_10.service`) are listed.

```
# systemctl list-unit-files SAP*
```

If you have installed an ASCS or ERS on this host but no SAP services are returned, the classic SysV init may be in use. In that case, you can skip to section [(Alternative) Ensure ASCS and ERS SAP Services can run on either node (sysV)](#modify-sapservices-sysv-nw-rhel).

1.  **On the instance where the ASCS was installed** 

   Register the missing ERS service on the node where you have installed ASCS.

   1. Temporarily mount the ERS directory (classic only):

      ```
      # mount <nfs.fqdn>:/<SID>_ERS<ers_sys_nr>  /usr/sap/<SID>/ERS<ers_sys_nr>
      ```

   1. Register the ERS service:

      ```
      # export LD_LIBRARY_PATH=/usr/sap/<SID>/ERS<ers_sys_nr>/exe
      # /usr/sap/<SID>/ERS<ers_sys_nr>/exe/sapstartsrv pf=/usr/sap/<SID>/SYS/profile/<SID>_ERS<ers_sys_nr>_<ers_virt_hostname> -reg
      # systemctl start SAP<SID>_<ers_sys_nr>
      ```

   1. Check the existence and state of SAP services (example):

      ```
      # systemctl list-unit-files SAP*
      UNIT FILE                    STATE   VENDOR PRESET
      SAPRHX_00.service           disabled disabled
      SAPRHX_10.service           disabled disabled
      SAP.slice                   static  -
      3 unit files listed.
      ```

   1. If the state is not disabled, run the following commands to disable `sapservices` integration for `SAP<SID>_<ascs_sys_nr>` and `SAP<SID>_<ers_sys_nr>` on both nodes:
**Important**  
Stopping these services also stops the associated SAP instances.

      ```
      # systemctl stop SAP<SID>_<ascs_sys_nr>.service
      # systemctl disable SAP<SID>_<ascs_sys_nr>.service
      # systemctl stop SAP<SID>_<ers_sys_nr>.service
      # systemctl disable SAP<SID>_<ers_sys_nr>.service
      ```

   1. Unmount the ERS directory (classic only):

      ```
      # umount /usr/sap/<SID>/ERS<ers_sys_nr>
      ```
      +  *Example using values from [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md) *:

        ```
        # mount <nfs.fqdn>:/RHX_ERS10  /usr/sap/RHX/ERS10
        # export LD_LIBRARY_PATH=/usr/sap/RHX/ERS10/exe
        # /usr/sap/RHX/ERS10/exe/sapstartsrv pf=/usr/sap/RHX/SYS/profile/RHX_ERS10_rhxers -reg
        # systemctl start SAPRHX_10
        # systemctl stop SAPRHX_00.service
        # systemctl disable SAPRHX_00.service
        # systemctl stop SAPRHX_10.service
        # systemctl disable SAPRHX_10.service
        # umount /usr/sap/RHX/ERS10
        ```

1.  **On the instance where the ERS was installed** 

   Register the missing ASCS service on the node where you have installed ERS.

   1. Temporarily mount the ASCS directory (classic only):

      ```
      # mount <nfs.fqdn>:/<SID>_ASCS<ascs_sys_nr> /usr/sap/<SID>/ASCS<ascs_sys_nr>
      ```

   1. Register the ASCS service:

      ```
      # export LD_LIBRARY_PATH=/usr/sap/<SID>/ASCS<ascs_sys_nr>/exe
      # /usr/sap/<SID>/ASCS<ascs_sys_nr>/exe/sapstartsrv pf=/usr/sap/<SID>/SYS/profile/<SID>_ASCS<ascs_sys_nr>_<ascs_virt_hostname> -reg
      # systemctl start SAP<SID>_<ascs_sys_nr>
      ```

   1. Check the existence and state of SAP services (example):

      ```
      # systemctl list-unit-files SAP*
      UNIT FILE                    STATE   VENDOR PRESET
      SAPRHX_00.service           disabled disabled
      SAPRHX_10.service           disabled disabled
      SAP.slice                   static   -
      3 unit files listed.
      ```

   1. If the state is not disabled, run the following commands to disable `sapservices` integration for `SAP<SID>_<ascs_sys_nr>` and `SAP<SID>_<ers_sys_nr>` on both nodes:
**Important**  
Stopping these services also stops the associated SAP instances.

      ```
      # systemctl stop SAP<SID>_<ascs_sys_nr>.service
      # systemctl disable SAP<SID>_<ascs_sys_nr>.service
      # systemctl stop SAP<SID>_<ers_sys_nr>.service
      # systemctl disable SAP<SID>_<ers_sys_nr>.service
      ```

   1. Unmount the ASCS directory (classic only):

      ```
      # umount /usr/sap/<SID>/ASCS<ascs_sys_nr>
      ```
      +  *Example using values from [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md) *:

        ```
        # mount <nfs.fqdn>:/RHX_ASCS00 /usr/sap/RHX/ASCS00
        # export LD_LIBRARY_PATH=/usr/sap/RHX/ASCS00/exe
        # /usr/sap/RHX/ASCS00/exe/sapstartsrv pf=/usr/sap/RHX/SYS/profile/RHX_ASCS00_rhxascs -reg
        # systemctl start SAPRHX_00
        # systemctl stop SAPRHX_00.service
        # systemctl disable SAPRHX_00.service
        # systemctl stop SAPRHX_10.service
        # systemctl disable SAPRHX_10.service
        # umount /usr/sap/RHX/ASCS00
        ```

## Configure dependencies for Pacemaker and SAP services (systemd)
<a name="configure-systemd-deps-nw-rhel"></a>

This step is required on both cluster nodes when using systemd integration.

When an EC2 instance shuts down unexpectedly, Pacemaker (the cluster resource manager) may trigger unnecessary fencing actions because it cannot distinguish between planned SAP service shutdowns and system failures. To prevent this, configure systemd dependencies that inform Pacemaker about the relationship between SAP services and cluster operations.

Create a systemd drop-in configuration for the `resource-agents-deps.target`, which is a systemd target that Pacemaker uses to understand external service dependencies:

```
# mkdir -p /etc/systemd/system/resource-agents-deps.target.d/
# cd /etc/systemd/system/resource-agents-deps.target.d/

# cat > sap_systemd_<sid>.conf <<_EOF
[Unit]
Requires=sapinit.service
After=sapinit.service
After=SAP<SID>_<ascs_sys_nr>.service
After=SAP<SID>_<ers_sys_nr>.service
_EOF

# systemctl daemon-reload
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md) *:

  ```
  # cat > sap_systemd_rhx.conf <<_EOF
  [Unit]
  Requires=sapinit.service
  After=sapinit.service
  After=SAPRHX_00.service
  After=SAPRHX_10.service
  _EOF
  
  # systemctl daemon-reload
  ```

## (Alternative) Ensure ASCS and ERS SAP Services can run on either node (sysV)
<a name="modify-sapservices-sysv-nw-rhel"></a>

This is only applicable if systemd integration is not in place.

To ensure that the SAP instances can be managed by the cluster, and manually during planned maintenance activities, add the missing entries for the ASCS and ERS `sapstartsrv` services to the `/usr/sap/sapservices` file on both cluster nodes (the ASCS host is missing the ERS entry and vice versa; copy each missing entry from the other host). After the modifications, the `/usr/sap/sapservices` file looks as follows on both hosts:

```
#!/bin/sh
LD_LIBRARY_PATH=/usr/sap/<SID>/ASCS<ascs_sys_nr>/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/<SID>/ASCS<ascs_sys_nr>/exe/sapstartsrv pf=/usr/sap/<SID>/SYS/profile/<SID>_ASCS<ascs_sys_nr>_<ascs_virt_hostname> -D -u <sid>adm
LD_LIBRARY_PATH=/usr/sap/<SID>/ERS<ers_sys_nr>/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/<SID>/ERS<ers_sys_nr>/exe/sapstartsrv pf=/usr/sap/<SID>/SYS/profile/<SID>_ERS<ers_sys_nr>_<ers_virt_hostname> -D -u <sid>adm
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md)*:

  ```
  #!/bin/sh
  LD_LIBRARY_PATH=/usr/sap/RHX/ASCS00/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/RHX/ASCS00/exe/sapstartsrv pf=/usr/sap/RHX/SYS/profile/RHX_ASCS00_rhxascs -D -u rhxadm
  LD_LIBRARY_PATH=/usr/sap/RHX/ERS10/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/RHX/ERS10/exe/sapstartsrv pf=/usr/sap/RHX/SYS/profile/RHX_ERS10_rhxers -D -u rhxadm
  ```
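
The copy step above can be sketched offline. Assuming the RHX example entries, the following merges both hosts' files into one list that keeps the shebang once plus the union of `sapstartsrv` entries (temporary files stand in for `/usr/sap/sapservices` on each host):

```shell
# Offline sketch of the merge; temporary files stand in for /usr/sap/sapservices
# on the ASCS and ERS hosts. Entries reuse the RHX example values.
ASCS_HOST=$(mktemp); ERS_HOST=$(mktemp); MERGED=$(mktemp)
cat > "$ASCS_HOST" <<'EOF'
#!/bin/sh
LD_LIBRARY_PATH=/usr/sap/RHX/ASCS00/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/RHX/ASCS00/exe/sapstartsrv pf=/usr/sap/RHX/SYS/profile/RHX_ASCS00_rhxascs -D -u rhxadm
EOF
cat > "$ERS_HOST" <<'EOF'
#!/bin/sh
LD_LIBRARY_PATH=/usr/sap/RHX/ERS10/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/RHX/ERS10/exe/sapstartsrv pf=/usr/sap/RHX/SYS/profile/RHX_ERS10_rhxers -D -u rhxadm
EOF

# Keep the shebang once, then the union of sapstartsrv entries from both hosts.
{ head -n1 "$ASCS_HOST"; grep -h sapstartsrv "$ASCS_HOST" "$ERS_HOST" | sort -u; } > "$MERGED"
grep -c sapstartsrv "$MERGED"   # both ASCS and ERS entries should be present
```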

# Cluster Node Setup
<a name="cluster-node-setup-nw-rhel"></a>

Establish cluster communication between nodes using Corosync and configure required authentication.

**Topics**
+ [Change the hacluster Password](#change-hacluster-password-nw-rhel)
+ [Setup Passwordless Authentication](#setup-passwordless-auth-nw-rhel)
+ [Start and Enable the pcsd Service](#start-pcsd-service-nw-rhel)
+ [Authorize the Cluster](#configure-cluster-nodes-nw-rhel)
+ [Generate Corosync Configuration](#generate-corosync-config-nw-rhel)
+ [Start and Verify the Cluster](#start-cluster-nw-rhel)
+ [Configure Cluster Services](#configure-cluster-services-nw-rhel)
+ [Verify Cluster Status](#verify-cluster-status-nw-rhel)

## Change the hacluster Password
<a name="change-hacluster-password-nw-rhel"></a>

On all cluster nodes, change the password of the operating system user hacluster:

```
# passwd hacluster
```

## Setup Passwordless Authentication
<a name="setup-passwordless-auth-nw-rhel"></a>

Red Hat cluster tools provide comprehensive reporting and troubleshooting capabilities for cluster activity. Many of these tools require passwordless SSH access between nodes to collect cluster-wide information effectively. Red Hat recommends configuring passwordless SSH for the root user to enable seamless cluster diagnostics and reporting.

For more details, see Red Hat Documentation [How to setup SSH Key passwordless login in Red Hat Enterprise Linux](https://access.redhat.com/solutions/9194).

**Warning**  
Review the security implications for your organization, including root access controls and network segmentation, before implementing this configuration.

## Start and Enable the pcsd Service
<a name="start-pcsd-service-nw-rhel"></a>

On all cluster nodes, enable and start the pcsd service:

```
# systemctl enable pcsd --now
```

## Authorize the Cluster
<a name="configure-cluster-nodes-nw-rhel"></a>

Run the following command to authenticate the cluster nodes. You will be prompted for the hacluster password you set earlier:

```
# pcs host auth <hostname_1> <hostname_2> -u hacluster -p <password>
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md)*:

  ```
  # pcs host auth rhxhost01 rhxhost02 -u hacluster -p <password>
  ```

## Generate Corosync Configuration
<a name="generate-corosync-config-nw-rhel"></a>

Corosync provides cluster membership and node-to-node communication for high availability clusters. Perform the initial setup using the following command, which configures dual network rings for redundant communication:

```
# pcs cluster setup <cluster_name> \
<hostname_1> addr=<host_ip_1> addr=<host_additional_ip_1> \
<hostname_2> addr=<host_ip_2> addr=<host_additional_ip_2>
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md)*:

  ```
  # pcs cluster setup myCluster rhxhost01 addr=10.1.10.1 addr=10.1.10.2 rhxhost02 addr=10.1.20.1 addr=10.1.20.2
  Destroying cluster on hosts: 'rhxhost01', 'rhxhost02'...
  rhxhost01: Successfully destroyed cluster
  rhxhost02: Successfully destroyed cluster
  Requesting remove 'pcsd settings' from 'rhxhost01', 'rhxhost02'
  rhxhost01: successful removal of the file 'pcsd settings'
  rhxhost02: successful removal of the file 'pcsd settings'
  Sending 'corosync authkey', 'pacemaker authkey' to 'rhxhost01', 'rhxhost02'
  rhxhost01: successful distribution of the file 'corosync authkey'
  rhxhost01: successful distribution of the file 'pacemaker authkey'
  rhxhost02: successful distribution of the file 'corosync authkey'
  rhxhost02: successful distribution of the file 'pacemaker authkey'
  Sending 'corosync.conf' to 'rhxhost01', 'rhxhost02'
  rhxhost01: successful distribution of the file 'corosync.conf'
  rhxhost02: successful distribution of the file 'corosync.conf'
  Cluster has been successfully set up.
  ```

Update the token timeout to a value optimized for AWS cloud environments. This provides reliable cluster operation while accommodating normal cloud network characteristics:

```
# pcs cluster config update totem token=15000
```
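
To confirm that the update took effect, check the token value in the generated configuration. A minimal sketch, using an inline sample in place of the real file:

```shell
# Sample totem section; on a cluster node, read /etc/corosync/corosync.conf instead.
CONF='totem {
    version: 2
    cluster_name: myCluster
    transport: knet
    token: 15000
}'
# Extract the token value from the totem section.
printf '%s\n' "$CONF" | awk '/token:/ {print $2}'
```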

## Start and Verify the Cluster
<a name="start-cluster-nw-rhel"></a>

Start the cluster on all nodes:

```
# pcs cluster start --all
```

**Note**  
By enabling the pacemaker service, the server automatically joins the cluster after a reboot. This ensures that your system is protected. Alternatively, you can leave the service disabled and start pacemaker manually after a boot, for example to first investigate the cause of a failure.

Run the following command to check the cluster status:

```
# pcs status
```

Example output:

```
Cluster name: myCluster

WARNINGS:
No stonith devices and stonith-enabled is not false

Cluster Summary:
  * Stack: corosync
  * Current DC: rhxhost01 (version 2.1.2-4.el9_0.5-ada5c3b36e2) - partition with quorum
  * Last updated: Fri Oct 24 06:35:46 2025
  * Last change:  Fri Oct 24 06:26:38 2025 by hacluster via crmd on rhxhost01
  * 2 nodes configured
  * 0 resource instances configured

Node List:
  * Online: [ rhxhost01 rhxhost02 ]

Full List of Resources:
  * No resources

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
```

Both cluster nodes must show up as online. You can find the link status and the associated IP addresses of the cluster with the `corosync-cfgtool` command:

```
# corosync-cfgtool -s
```

Example output:

```
Local node ID 1, transport knet
LINK ID 0 udp
        addr    = 10.1.10.114
        status:
                nodeid:          1:     localhost
                nodeid:          2:     connected
LINK ID 1 udp
        addr    = 10.1.10.215
        status:
                nodeid:          1:     localhost
                nodeid:          2:     connected
```

Both links should report the peer node as connected. If either link is missing, review the corosync configuration and check that changes to `/etc/corosync/corosync.conf` have been synced to the secondary node. You may need to copy the file manually. Restart the cluster if needed.
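
As a quick check, you can count how many links report the peer node as connected. The sketch below parses the sample output shown above; on a cluster node, replace the inline sample with `OUTPUT=$(corosync-cfgtool -s)`:

```shell
# Sample corosync-cfgtool -s output (knet transport, two links).
OUTPUT='Local node ID 1, transport knet
LINK ID 0 udp
        addr    = 10.1.10.114
        status:
                nodeid:          1:     localhost
                nodeid:          2:     connected
LINK ID 1 udp
        addr    = 10.1.10.215
        status:
                nodeid:          1:     localhost
                nodeid:          2:     connected'

LINKS=$(printf '%s\n' "$OUTPUT" | grep -c '^LINK ID')
CONNECTED=$(printf '%s\n' "$OUTPUT" | grep -c 'connected$')
echo "$CONNECTED of $LINKS links report the peer as connected"
```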

## Configure Cluster Services
<a name="configure-cluster-services-nw-rhel"></a>

Enable pacemaker to start automatically after reboot:

```
# pcs cluster enable --all
```

Enabling pacemaker also handles corosync through service dependencies. The cluster will start automatically after reboot. For troubleshooting scenarios, you can choose to manually start services after boot instead.

## Verify Cluster Status
<a name="verify-cluster-status-nw-rhel"></a>

 **1. Check pacemaker service status:** 

```
# systemctl status pacemaker
```

 **2. Verify cluster status:** 

```
# pcs status
```

 *Example output*:

```
Cluster name: myCluster
Cluster Summary:
  * Stack: corosync
  * Current DC: rhxhost01 (version 2.1.5+20221208.a3f44794f) - partition with quorum
  * 2 nodes configured
  * 0 resource instances configured

Node List:
  * Online: [ rhxhost01 rhxhost02 ]

Full List of Resources:
  * No resources
```

# Cluster Configuration
<a name="cluster-config-nw-rhel"></a>

The following sections provide details on the resources, groups and constraints necessary to ensure high availability of SAP Central Services.

**Topics**
+ [Prepare for Resource Creation](#prepare-resource-nw-rhel)
+ [Cluster Bootstrap](#cluster-bootstrap-nw-rhel)
+ [Create STONITH Fencing Resource](#create-stonith-ec2-nw-rhel)
+ [SAP Resource Groups and Ordering](#resource-groups-nw-rhel)
+ [Create Filesystem resources (classic only)](#filesystem-resources-nw-rhel)
+ [Create overlay IP resources](#overlay-ip-resources-nw-rhel)
+ [Create SAPStartSrv resources (simple-mount only)](#sapstartsrv-resources-nw-rhel)
+ [Create SAPInstance resources (simple-mount only)](#sap-resources-simple-nw-rhel)
+ [Create SAPInstance resources (classic only)](#sap-resources-classic-nw-rhel)
+ [Review ASCS Resource group and modify stickiness.](#resource-groups-review-nw-rhel)
+ [Create resource constraints](#resource-constraints-nw-rhel)
+ [Reset Configuration – Optional](#reset-config-nw-rhel)

## Prepare for Resource Creation
<a name="prepare-resource-nw-rhel"></a>

To ensure that the cluster does not perform any unexpected actions during setup of resources and configuration, set the maintenance mode to true.

Run the following command to put the cluster in maintenance mode:

```
# pcs property set maintenance-mode=true
```

To verify the current maintenance state:

```
$ pcs status
```

**Note**  
There are two types of maintenance mode:  
Cluster-wide maintenance (set with `pcs property set maintenance-mode=true`)
Node-specific maintenance (set with `pcs node maintenance nodename`)
Always use cluster-wide maintenance mode when making configuration changes. For node-specific operations like hardware maintenance, refer to the Operations section for proper procedures.  
To disable maintenance mode after configuration is complete:  

```
# pcs property set maintenance-mode=false
```

## Cluster Bootstrap
<a name="cluster-bootstrap-nw-rhel"></a>

### Configure Cluster Properties
<a name="_configure_cluster_properties"></a>

Configure cluster properties to establish fencing behavior and resource failover settings:

```
# pcs property set stonith-enabled="true"
# pcs property set stonith-timeout="600"
# pcs property set priority-fencing-delay="20"
```
+ The **priority-fencing-delay** is recommended for protecting the SAP ASCS node during network partitioning events. When a cluster partition occurs, this delay gives preference to nodes hosting higher priority resources, with the ASCS receiving additional priority weighting over the ERS. This helps ensure that the ASCS node survives in split-brain scenarios. The recommended 20 second `priority-fencing-delay` works in conjunction with the `pcmk_delay_max` value (10 seconds) configured in the stonith resource, providing a total potential delay of up to 30 seconds before fencing occurs.

To verify your cluster property settings:

```
# pcs property config
# pcs property config <property_name>
```

### Configure Resource Defaults
<a name="_configure_resource_defaults"></a>

Configure resource default behaviors:

------
#### [ RHEL 8.4 and above ]

```
# pcs resource defaults update resource-stickiness="1"
# pcs resource defaults update migration-threshold="3"
# pcs resource defaults update failure-timeout="600s"
```

------
#### [ RHEL 7.x and RHEL 8.0 to 8.3 ]

```
# pcs resource defaults resource-stickiness="1"
# pcs resource defaults migration-threshold="3"
# pcs resource defaults failure-timeout="600s"
```
+ The **resource-stickiness** value of 1 encourages the ASCS resource to stay on its current node, avoiding unnecessary resource movement.
+ The **migration-threshold** causes a resource to move to a different node after 3 consecutive failures, ensuring timely failover when issues persist.
+ The **failure-timeout** automatically removes a failure count after 10 minutes, preventing individual historical failures from accumulating and affecting long-term resource behavior. If testing failover scenarios in quick succession, it may be necessary to manually query and clear accumulated failure counts between tests. Use `pcs resource failcount` and `pcs resource refresh`.

------

Individual resources may override these defaults with their own defined values.

To verify your resource default settings:

```
# pcs resource defaults
```

### Configure Operation Defaults
<a name="_configure_operation_defaults"></a>

```
# pcs resource op defaults update timeout="600"
```
+ The operation defaults **timeout** ensures that all cluster operations have a reasonable default timeout of 600 seconds. Individual resources may override this with their own timeout values.

To verify your operation default settings:

```
# pcs resource op defaults
```

## Create STONITH Fencing Resource
<a name="create-stonith-ec2-nw-rhel"></a>

An AWS STONITH resource is required for proper cluster fencing operations. The `fence_aws` agent is recommended for AWS deployments because it leverages the AWS API to safely fence failed or unreachable nodes by stopping their EC2 instances.

Create the STONITH resource using the resource agent **`fence_aws`**:

```
# pcs stonith create <stonith_resource_name> fence_aws \
pcmk_host_map="<hostname_1>:<instance_id_1>;<hostname_2>:<instance_id_2>" \
region="<aws_region>" \
skip_os_shutdown="true" \
pcmk_delay_max="10" \
pcmk_reboot_timeout="300" \
pcmk_reboot_retries="2" \
op start interval="0" timeout="180" \
op stop interval="0" timeout="180" \
op monitor interval="180" timeout="60"
```

Details:
+  **pcmk_host_map** - Maps cluster node hostnames to their EC2 instance IDs. This mapping must be unique within the AWS account and follow the format hostname:instance-id, with multiple entries separated by semicolons.
+  **region** - AWS Region where the EC2 instances are deployed.
+  **pcmk_delay_max** - Random delay before fencing operations. Works in conjunction with the cluster property `priority-fencing-delay` to prevent simultaneous fencing in 2-node clusters. Historically set to higher values, but with `priority-fencing-delay` now handling primary node protection, a lower value (10s) is sufficient. Omit in clusters with real quorum (3+ nodes) to avoid unnecessary delay.
+  **pcmk_reboot_timeout** - Maximum time in seconds allowed for a reboot operation.
+  **pcmk_reboot_retries** - Number of times to retry a failed reboot operation.
+  **skip_os_shutdown** (NEW) - Leverages a new EC2 stop-instance API flag to forcefully stop an EC2 instance by skipping the shutdown of the operating system.
  +  [Red Hat Solution 4963741 - fence_aws fence action fails with "Timed out waiting to power OFF"](https://access.redhat.com/solutions/4963741) (requires Red Hat Customer Portal access)
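
The hostname:instance-id format can be sanity-checked before creating the resource. A minimal sketch using the placeholder values from the examples (substitute your real hostnames and instance IDs):

```shell
# Placeholder map; the instance IDs here are illustrative, not real.
HOST_MAP="rhxhost01:i-xxxxinstidforhost1;rhxhost02:i-xxxxinstidforhost2"

# Split on ';' into one hostname:instance-id pair per line.
printf '%s\n' "$HOST_MAP" | tr ';' '\n' | while IFS=':' read -r host instance; do
  echo "node=$host fences instance=$instance"
done

# Count the pairs; a 2-node cluster needs exactly two entries.
PAIRS=$(printf '%s\n' "$HOST_MAP" | tr ';' '\n' | wc -l)
echo "pairs: $PAIRS"
```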

------
#### [ ENSA1 ]

 *Example using values from [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md)*:

```
# pcs stonith create rsc_fence_aws fence_aws \
pcmk_host_map="rhxhost01:i-xxxxinstidforhost1;rhxhost02:i-xxxxinstidforhost2" \
region="us-east-1" \
skip_os_shutdown="true" \
pcmk_delay_max="30" \
pcmk_reboot_timeout="120" \
pcmk_reboot_retries="4" \
op start interval="0" timeout="180" \
op stop interval="0" timeout="180" \
op monitor interval="180" timeout="60"
```

------
#### [ ENSA2 ]

 *Example using values from [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md)*:

```
# pcs stonith create rsc_fence_aws fence_aws \
pcmk_host_map="rhxhost01:i-xxxxinstidforhost1;rhxhost02:i-xxxxinstidforhost2" \
region="us-east-1" \
skip_os_shutdown="true" \
pcmk_delay_max="10" \
pcmk_reboot_timeout="120" \
pcmk_reboot_retries="4" \
op start interval="0" timeout="180" \
op stop interval="0" timeout="180" \
op monitor interval="180" timeout="60"
```

------

## SAP Resource Groups and Ordering
<a name="resource-groups-nw-rhel"></a>

When creating the resources for the SAP ASCS and ERS, it is necessary to specify a group.

A cluster resource group is a set of resources that must be located together, started sequentially, and stopped in reverse order.

Depending on the configuration pattern, the following groups are created for the ASCS and ERS:
+  **Classic**: Filesystem, IP, SAPInstance
+  **SimpleMount**: IP, SAPStartSrv, SAPInstance

Since RHEL 9.4, a new syntax for creating a resource in a group has been introduced in addition to the `--group` parameter. You now receive the following deprecation warning:

```
Deprecation Warning: Using '--group' is deprecated and will be replaced with 'group' in a future release. Specify --future to switch to the future behavior.
```

## Create Filesystem resources (classic only)
<a name="filesystem-resources-nw-rhel"></a>

In the classic configuration, cluster resources mount and unmount the file systems so that they follow the location of the SAP services.

Create **ASCS** file system resources:

```
# pcs resource create rsc_fs_<SID>_ASCS<ascs_sys_nr> ocf:heartbeat:Filesystem \
device="<nfs.fqdn>:/<SID>_ASCS<ascs_sys_nr>" \
directory="/usr/sap/<SID>/ASCS<ascs_sys_nr>" \
fstype="nfs4" \
options="rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2" \
force_unmount="safe" \
fast_stop="no" \
op start timeout="60" interval="0" \
op stop timeout="60" interval="0" \
op monitor interval="20" timeout="40" \
--group "grp_<SID>_ASCS<ascs_sys_nr>"
```

Create **ERS** file system resources:

```
# pcs resource create rsc_fs_<SID>_ERS<ers_sys_nr> ocf:heartbeat:Filesystem \
device="<nfs.fqdn>:/<SID>_ERS<ers_sys_nr>" \
directory="/usr/sap/<SID>/ERS<ers_sys_nr>" \
fstype="nfs4" \
force_unmount="safe" \
fast_stop="no" \
options="rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2" \
op start timeout="60" interval="0" \
op stop timeout="60" interval="0" \
op monitor interval="20" timeout="40" \
--group "grp_<SID>_ERS<ers_sys_nr>"
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md)*:

  ```
  # pcs resource create rsc_fs_RHX_ASCS00 ocf:heartbeat:Filesystem \
  device="fs-xxxxxxxxxxxxxefs1.efs.us-east-1.amazonaws.com:/RHX_ASCS00" \
  directory="/usr/sap/RHX/ASCS00" \
  fstype="nfs4" \
  force_unmount="safe" \
  fast_stop="no" \
  options="rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2" \
  op start timeout="60" interval="0" \
  op stop timeout="60" interval="0" \
  op monitor interval="20" timeout="40" \
  --group grp_RHX_ASCS00
  
  # pcs resource create rsc_fs_RHX_ERS10 ocf:heartbeat:Filesystem \
  device="fs-xxxxxxxxxxxxxefs1.efs.us-east-1.amazonaws.com:/RHX_ERS10" \
  directory="/usr/sap/RHX/ERS10" \
  fstype="nfs4" \
  force_unmount="safe" \
  fast_stop="no" \
  options="rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2" \
  op start timeout="60" interval="0" \
  op stop timeout="60" interval="0" \
  op monitor interval="20" timeout="40" \
  --group grp_RHX_ERS10
  ```

 **Notes** 
+ Review the mount options to ensure that they match your operating system, NFS file system type, and the latest recommendations from SAP.
+ `<nfs.fqdn>` can either be an alias or the default DNS name of the Amazon EFS or FSx for ONTAP file system. For example, `fs-xxxxxx.efs.xxxxxx.amazonaws.com`.
+  `force_unmount` and `fast_stop` are recommendations for ensuring that the filesystem can be quickly unmounted. See the following Red Hat solutions:
  +  [Red Hat Solution 3357961 - During failover of a pacemaker resource, a Filesystem resource kills processes not using the filesystem](https://access.redhat.com/solutions/3357961) (requires Red Hat customer portal login)
  +  [Red Hat Solution 4801371 - What is the fast_stop option for a Filesystem resource in a Pacemaker cluster?](https://access.redhat.com/solutions/4801371) (requires Red Hat customer portal login)

## Create overlay IP resources
<a name="overlay-ip-resources-nw-rhel"></a>

The IP resource provides the details necessary to update the route table entry for the overlay IP.

Create **ASCS** IP Resource:

```
# pcs resource create rsc_ip_<SID>_ASCS<ascs_sys_nr> ocf:heartbeat:aws-vpc-move-ip \
ip="<ascs_overlayip>" \
routing_table="<routetable_id>" \
interface="eth0" \
profile="<cli_cluster_profile>" \
op start interval="0" timeout="180" \
op stop interval="0" timeout="180" \
op monitor interval="20" timeout="40" \
--group "grp_<SID>_ASCS<ascs_sys_nr>"
```

Create **ERS** IP Resource:

```
# pcs resource create rsc_ip_<SID>_ERS<ers_sys_nr> ocf:heartbeat:aws-vpc-move-ip \
ip="<ers_overlayip>" \
routing_table="<routetable_id>" \
interface="eth0" \
profile="<cli_cluster_profile>" \
op start interval="0" timeout="180" \
op stop interval="0" timeout="180" \
op monitor interval="20" timeout="40" \
--group "grp_<SID>_ERS<ers_sys_nr>"
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md)*:

  ```
  # pcs resource create rsc_ip_RHX_ASCS00 ocf:heartbeat:aws-vpc-move-ip \
  ip="172.16.30.5" \
  routing_table="rtb-xxxxxroutetable1" \
  interface="eth0" \
  profile="cluster" \
  op start interval="0" timeout="180" \
  op stop interval="0" timeout="180" \
  op monitor interval="20" timeout="40" \
  --group grp_RHX_ASCS00
  
  # pcs resource create rsc_ip_RHX_ERS10 ocf:heartbeat:aws-vpc-move-ip \
  ip="172.16.30.6" \
  routing_table="rtb-xxxxxroutetable1" \
  interface="eth0" \
  profile="cluster" \
  op start interval="0" timeout="180" \
  op stop interval="0" timeout="180" \
  op monitor interval="20" timeout="40" \
  --group grp_RHX_ERS10
  ```

 **Notes** 
+ If more than one route table is required for connectivity or because of subnet associations, the `routing_table` parameter can have multiple values separated by a comma. For example, `routing_table=rtb-xxxxxroutetable1,rtb-xxxxxroutetable2`.
+ Additional parameters – `lookup_type` and `routing_table_role` – are required for a shared VPC. For more information, see [Shared VPC – optional](https://docs.aws.amazon.com/sap/latest/sap-netweaver/rhel-netweaver-ha-settings.html#rhel-netweaver-ha-shared-vpc).

## Create SAPStartSrv resources (simple-mount only)
<a name="sapstartsrv-resources-nw-rhel"></a>

In the simple-mount architecture, the `sapstartsrv` process, which controls the start, stop, and monitoring of an SAP instance, is managed by a cluster resource. This additional resource removes the requirement for file system resources to be restricted to a single node.

Modify and run the following commands to create the SAPStartSrv resources.

Create **ASCS** SAPStartSrv Resource

Use the following command to create an ASCS SAPStartSrv resource.

```
# pcs resource create rsc_sapstart_<SID>_ASCS<ascs_sys_nr> ocf:heartbeat:SAPStartSrv \
InstanceName=<SID>_ASCS<ascs_sys_nr>_<ascs_virt_hostname> \
op monitor interval=0 timeout=20 enabled=0 \
--group grp_<SID>_ASCS<ascs_sys_nr>
```

Create **ERS** SAPStartSrv Resource

Use the following command to create an ERS SAPStartSrv resource.

```
# pcs resource create rsc_sapstart_<SID>_ERS<ers_sys_nr> ocf:heartbeat:SAPStartSrv \
InstanceName=<SID>_ERS<ers_sys_nr>_<ers_virt_hostname> \
op monitor interval=0 timeout=20 enabled=0 \
--group grp_<SID>_ERS<ers_sys_nr>
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md)*:

  ```
  # pcs resource create rsc_sapstart_RHX_ASCS00 ocf:heartbeat:SAPStartSrv \
  InstanceName=RHX_ASCS00_rhxascs \
  op monitor interval=0 timeout=20 enabled=0 \
  --group grp_RHX_ASCS00
  
  # pcs resource create rsc_sapstart_RHX_ERS10 ocf:heartbeat:SAPStartSrv \
  InstanceName=RHX_ERS10_rhxers \
  op monitor interval=0 timeout=20 enabled=0 \
  --group grp_RHX_ERS10
  ```

## Create SAPInstance resources (simple-mount only)
<a name="sap-resources-simple-nw-rhel"></a>

The minor difference in creating SAP instance resources between classic and simple-mount configurations is the addition of the `MINIMAL_PROBE=true` parameter.

The SAP instance is started and stopped using cluster resources.

**Example**  
For **ENSA1**, create an **ASCS** SAP instance resource:  

```
# pcs resource create rsc_sap_<SID>_ASCS<ascs_sys_nr> ocf:heartbeat:SAPInstance \
InstanceName="<SID>_ASCS<ascs_sys_nr>_<ascs_virt_hostname>" \
START_PROFILE="/usr/sap/<SID>/SYS/profile/<SID>_ASCS<ascs_sys_nr>_<ascs_virt_hostname>" \
AUTOMATIC_RECOVER="false" \
MINIMAL_PROBE="true" \
op start interval="0" timeout="600" \
op stop interval="0" timeout="240" \
op monitor interval="11" timeout="60" on-fail="restart" \
meta resource-stickiness="5000" \
meta failure-timeout="60" \
meta migration-threshold="1" \
meta priority="10"
```
For **ENSA1**, create an **ERS** SAP instance resource:  

```
# pcs resource create rsc_sap_<SID>_ERS<ers_sys_nr> ocf:heartbeat:SAPInstance \
InstanceName="<SID>_ERS<ers_sys_nr>_<ers_virt_hostname>" \
START_PROFILE="/usr/sap/<SID>/SYS/profile/<SID>_ERS<ers_sys_nr>_<ers_virt_hostname>" \
AUTOMATIC_RECOVER="false" \
MINIMAL_PROBE="true" \
IS_ERS="true" \
op start interval="0" timeout="240" \
op stop interval="0" timeout="240" \
op monitor interval="11" timeout="60" on-fail="restart" \
meta priority="1000"
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md)*:

  ```
  # pcs resource create rsc_sap_RHX_ASCS00 ocf:heartbeat:SAPInstance \
  InstanceName="RHX_ASCS00_rhxascs" \
  START_PROFILE="/usr/sap/RHX/SYS/profile/RHX_ASCS00_rhxascs" \
  AUTOMATIC_RECOVER="false" \
  MINIMAL_PROBE="true" \
  op start interval="0" timeout="600" \
  op stop interval="0" timeout="240" \
  op monitor interval="11" timeout="60" on-fail="restart" \
  meta resource-stickiness="5000" \
  meta failure-timeout="60" \
  meta migration-threshold="1" \
  meta priority="10"
  
  # pcs resource create rsc_sap_RHX_ERS10 ocf:heartbeat:SAPInstance \
  InstanceName="RHX_ERS10_rhxers" \
  START_PROFILE="/usr/sap/RHX/SYS/profile/RHX_ERS10_rhxers" \
  AUTOMATIC_RECOVER="false" \
  MINIMAL_PROBE="true" \
  IS_ERS="true" \
  op start interval="0" timeout="240" \
  op stop interval="0" timeout="240" \
  op monitor interval="11" timeout="60" on-fail="restart" \
  meta priority="1000"
  ```
For **ENSA2**, create an **ASCS** SAP instance resource:  

```
# pcs resource create rsc_sap_<SID>_ASCS<ascs_sys_nr> ocf:heartbeat:SAPInstance \
InstanceName="<SID>_ASCS<ascs_sys_nr>_<ascs_virt_hostname>" \
START_PROFILE="/usr/sap/<SID>/SYS/profile/<SID>_ASCS<ascs_sys_nr>_<ascs_virt_hostname>" \
AUTOMATIC_RECOVER="false" \
MINIMAL_PROBE="true" \
op start interval="0" timeout="600" \
op stop interval="0" timeout="240" \
op monitor interval="20" timeout="60" on-fail="restart" \
meta resource-stickiness="5000" \
meta priority="1000"
```
For **ENSA2**, create an **ERS** SAP instance resource:  

```
# pcs resource create rsc_sap_<SID>_ERS<ers_sys_nr> ocf:heartbeat:SAPInstance \
InstanceName="<SID>_ERS<ers_sys_nr>_<ers_virt_hostname>" \
START_PROFILE="/usr/sap/<SID>/SYS/profile/<SID>_ERS<ers_sys_nr>_<ers_virt_hostname>" \
AUTOMATIC_RECOVER="false" \
IS_ERS="true" \
op start interval="0" timeout="600" \
op stop interval="0" timeout="240" \
op monitor interval="20" timeout="60" on-fail="restart"
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md)*:

  ```
  # pcs resource create rsc_sap_RHX_ASCS00 ocf:heartbeat:SAPInstance \
  InstanceName="RHX_ASCS00_rhxascs" \
  START_PROFILE="/usr/sap/RHX/SYS/profile/RHX_ASCS00_rhxascs" \
  AUTOMATIC_RECOVER="false" \
  MINIMAL_PROBE="true" \
  op start interval="0" timeout="600" \
  op stop interval="0" timeout="240" \
  op monitor interval="20" timeout="60" on-fail="restart" \
  meta resource-stickiness="5000" \
  meta priority="1000"
  
  # pcs resource create rsc_sap_RHX_ERS10 ocf:heartbeat:SAPInstance \
  InstanceName="RHX_ERS10_rhxers" \
  START_PROFILE="/usr/sap/RHX/SYS/profile/RHX_ERS10_rhxers" \
  AUTOMATIC_RECOVER="false" \
  IS_ERS="true" \
  op start interval="0" timeout="600" \
  op stop interval="0" timeout="240" \
  op monitor interval="20" timeout="60" on-fail="restart"
  ```

The difference between ENSA1 and ENSA2 is that ENSA2 allows the lock table to be consumed remotely, which means that for ENSA2, the ASCS can restart in its current location (assuming the node is still available). This change impacts the stickiness, migration, and priority parameters. Ensure that you use the right command for your enqueue version.

## Create SAPInstance resources (classic only)
<a name="sap-resources-classic-nw-rhel"></a>

The SAP instance is started and stopped using cluster resources.

**Example**  
For **ENSA1**, create an **ASCS** SAPInstance resource:  

```
# pcs resource create rsc_sap_<SID>_ASCS<ascs_sys_nr> ocf:heartbeat:SAPInstance \
InstanceName="<SID>_ASCS<ascs_sys_nr>_<ascs_virt_hostname>" \
START_PROFILE="/usr/sap/<SID>/SYS/profile/<SID>_ASCS<ascs_sys_nr>_<ascs_virt_hostname>" \
AUTOMATIC_RECOVER="false" \
op start interval="0" timeout="600" \
op stop interval="0" timeout="240" \
op monitor interval="11" timeout="60" on-fail="restart" \
meta resource-stickiness="5000" \
meta failure-timeout="60" \
meta migration-threshold="1" \
meta priority="10" \
--group "grp_<SID>_ASCS<ascs_sys_nr>"
```
For **ENSA1**, create an **ERS** SAPInstance resource:  

```
# pcs resource create rsc_sap_<SID>_ERS<ers_sys_nr> ocf:heartbeat:SAPInstance \
InstanceName="<SID>_ERS<ers_sys_nr>_<ers_virt_hostname>" \
START_PROFILE="/usr/sap/<SID>/SYS/profile/<SID>_ERS<ers_sys_nr>_<ers_virt_hostname>" \
AUTOMATIC_RECOVER="false" \
IS_ERS="true" \
op start interval="0" timeout="600" \
op stop interval="0" timeout="240" \
op monitor interval="11" timeout="60" on-fail="restart" \
meta priority="1000" \
--group "grp_<SID>_ERS<ers_sys_nr>"
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md)*:

  ```
  # pcs resource create rsc_sap_RHX_ASCS00 ocf:heartbeat:SAPInstance \
  InstanceName="RHX_ASCS00_rhxascs" \
  START_PROFILE="/usr/sap/RHX/SYS/profile/RHX_ASCS00_rhxascs" \
  AUTOMATIC_RECOVER="false" \
  op start interval="0" timeout="600" \
  op stop interval="0" timeout="240" \
  op monitor interval="11" timeout="60" on-fail="restart" \
  meta resource-stickiness="5000" \
  meta failure-timeout="60" \
  meta migration-threshold="1" \
  meta priority="10"
  
  # pcs resource create rsc_sap_RHX_ERS10 ocf:heartbeat:SAPInstance \
  InstanceName="RHX_ERS10_rhxers" \
  START_PROFILE="/usr/sap/RHX/SYS/profile/RHX_ERS10_rhxers" \
  AUTOMATIC_RECOVER="false" \
  IS_ERS="true" \
  op start interval="0" timeout="600" \
  op stop interval="0" timeout="240" \
  op monitor interval="11" timeout="60" on-fail="restart" \
  meta priority="1000"
  ```
For **ENSA2**, create an **ASCS** SAPInstance resource:  

```
# pcs resource create rsc_sap_<SID>_ASCS<ascs_sys_nr> ocf:heartbeat:SAPInstance \
InstanceName="<SID>_ASCS<ascs_sys_nr>_<ascs_virt_hostname>" \
START_PROFILE="/usr/sap/<SID>/SYS/profile/<SID>_ASCS<ascs_sys_nr>_<ascs_virt_hostname>" \
AUTOMATIC_RECOVER="false" \
op start interval="0" timeout="600" \
op stop interval="0" timeout="240" \
op monitor interval="11" timeout="60" on-fail="restart" \
meta resource-stickiness="5000" \
meta priority="1000" \
--group "grp_<SID>_ASCS<ascs_sys_nr>"
```
For **ENSA2**, create an **ERS** SAPInstance resource:  

```
# pcs resource create rsc_sap_<SID>_ERS<ers_sys_nr> ocf:heartbeat:SAPInstance \
InstanceName="<SID>_ERS<ers_sys_nr>_<ers_virt_hostname>" \
START_PROFILE="/usr/sap/<SID>/SYS/profile/<SID>_ERS<ers_sys_nr>_<ers_virt_hostname>" \
AUTOMATIC_RECOVER="false" \
IS_ERS="true" \
op start interval="0" timeout="600" \
op stop interval="0" timeout="240" \
op monitor interval="11" timeout="60" on-fail="restart" \
--group "grp_<SID>_ERS<ers_sys_nr>"
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md)*:

  ```
  # pcs resource create rsc_sap_RHX_ASCS00 ocf:heartbeat:SAPInstance \
  InstanceName="RHX_ASCS00_rhxascs" \
  START_PROFILE="/usr/sap/RHX/SYS/profile/RHX_ASCS00_rhxascs" \
  AUTOMATIC_RECOVER="false" \
  op start interval="0" timeout="600" \
  op stop interval="0" timeout="240" \
  op monitor interval="11" timeout="60" on-fail="restart" \
  meta resource-stickiness="5000" \
  meta priority="1000" \
  --group "grp_RHX_ASCS00"
  
  # pcs resource create rsc_sap_RHX_ERS10 ocf:heartbeat:SAPInstance \
  InstanceName="RHX_ERS10_rhxers" \
  START_PROFILE="/usr/sap/RHX/SYS/profile/RHX_ERS10_rhxers" \
  AUTOMATIC_RECOVER="false" \
  IS_ERS="true" \
  op start interval="0" timeout="600" \
  op stop interval="0" timeout="240" \
  op monitor interval="11" timeout="60" on-fail="restart" \
  --group "grp_RHX_ERS10"
  ```

With ENSA2, the lock table can be consumed remotely. If the node is still available, ENSA2 allows ASCS to restart in its current location instead of failing over. This difference is reflected in the stickiness, migration, and priority parameters. Make sure to use the right command set for your enqueue server version.

## Review ASCS resource group and modify stickiness
<a name="resource-groups-review-nw-rhel"></a>

A cluster resource group is a set of resources that must be located together, started in sequence, and stopped in reverse order.

```
# pcs resource meta grp_<SID>_ASCS<ascs_sys_nr> resource-stickiness=3000
```
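+ *Example using values from [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md)*:

  ```
  # pcs resource meta grp_RHX_ASCS00 resource-stickiness=3000
  ```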

In the simple-mount architecture, the overlay IP must be available first, and the SAP services must be started before the SAP instance can start.

## Create resource constraints
<a name="resource-constraints-nw-rhel"></a>

Resource constraints determine where resources can run based on defined conditions. The constraints for SAP NetWeaver ensure that ASCS and ERS start on separate nodes and that locks are preserved in case of failure. The following are the different types of constraints.

### Colocation constraint
<a name="_colocation_constraint"></a>

The negative score ensures that ASCS and ERS run on separate nodes whenever possible.

```
# pcs constraint colocation add grp_<SID>_ERS<ers_sys_nr> with grp_<SID>_ASCS<ascs_sys_nr> score=-5000
```
+ *Example using values from [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md)*:

  ```
  # pcs constraint colocation add grp_RHX_ERS10 with grp_RHX_ASCS00 score=-5000
  ```

### Order constraint
<a name="_order_constraint"></a>

This constraint ensures that the ASCS instance starts before the ERS instance stops. This is necessary so that ASCS can take over the lock table held by ERS.

```
# pcs constraint order start rsc_sap_<SID>_ASCS<ascs_sys_nr> then stop rsc_sap_<SID>_ERS<ers_sys_nr> kind=Optional symmetrical=false
```
+ *Example using values from [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md)*:

  ```
  # pcs constraint order start rsc_sap_RHX_ASCS00 then stop rsc_sap_RHX_ERS10 kind=Optional symmetrical=false
  ```

### Location constraint (ENSA1 only)
<a name="_location_constraint_ensa1_only"></a>

This constraint is only required for ENSA1. With ENSA2, the lock table can be retrieved remotely, so ASCS doesn’t need to fail over to the node where ERS is running.

```
# pcs constraint location rsc_sap_<SID>_ASCS<ascs_sys_nr> rule score=2000 runs_ers_<SID> eq 1
```
+ *Example using values from [Parameter Reference](sap-nw-pacemaker-rhel-parameters.md)*:

  ```
  # pcs constraint location rsc_sap_RHX_ASCS00 rule score=2000 runs_ers_RHX eq 1
  ```
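After creating the constraints, you can review them to confirm the colocation, order, and (for ENSA1) location rules are in place. A minimal check, assuming the bare listing form (older pcs releases also accept `pcs constraint show`; newer ones prefer `pcs constraint config`):

```
# pcs constraint
```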

## Reset Configuration – Optional
<a name="reset-config-nw-rhel"></a>

**Important**  
The following instructions help you reset the complete configuration. Run these commands only if you want to restart the setup from the beginning. You can make minor changes with `pcs` commands instead.

Run the following command to back up the current configuration for reference:

```
# pcs config > /tmp/pcsconfig_backup.txt
```

Run the following command to clear the current configuration:

```
# pcs cluster cib-push --config /dev/null
```

Running the preceding command removes all of the cluster resources from the Cluster Information Base (CIB). Before starting the resource configuration again, run `pcs cluster start --all` to ensure that the cluster is running properly. The restart removes maintenance mode. Reapply maintenance mode before commencing additional configuration and resource setup.
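If your setup uses the cluster-wide `maintenance-mode` property (an assumption; some setups instead unmanage individual resources), you can reapply maintenance mode as follows:

```
# pcs property set maintenance-mode=true
```

Set the property back to `false` after the resource configuration is complete.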