

# SAP ASCS and Cluster Setup
<a name="sap-nw-pacemaker-sles-setup"></a>

This section covers the following topics.

**Topics**
+ [SAP Shared File Systems](sap-shared-filesystems-nw-sles.md)
+ [Check IP availability and resolution](check-ip-availability-resolution-nw-sles.md)
+ [Install SAP](install-sap-nw-sles.md)
+ [Configure SAP for Cluster Control](sap-ascs-service-control-nw-sles.md)
+ [Cluster Node Setup](cluster-node-setup-nw-sles.md)
+ [Cluster Configuration](cluster-config-nw-sles.md)

# SAP Shared File Systems
<a name="sap-shared-filesystems-nw-sles"></a>

**Topics**
+ [Select Shared Storage](#select-storage-type-nw-sles)
+ [Create file systems](#create-filesystems-nw-sles)
+ [Create mount point directories](#create-mount-dirs-nw-sles)
+ [Update /etc/fstab](#update-fstab-nw-sles)
+ [Temporarily mount ASCS and ERS directories for installation (classic only)](#temp-mount-dirs-nw-sles)

## Select Shared Storage
<a name="select-storage-type-nw-sles"></a>

SAP NetWeaver high availability deployments require shared file systems. On Linux, you can use either [Amazon Elastic File System](https://aws.amazon.com/efs/) or [Amazon FSx for NetApp ONTAP](https://aws.amazon.com/fsx/netapp-ontap/). Choose between these options based on your requirements for resilience, performance, and cost. For detailed setup information, see [Getting started with Amazon Elastic File System](https://docs.aws.amazon.com/efs/latest/ug/getting-started.html) or [Getting started with Amazon FSx for NetApp ONTAP](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/getting-started.html).

We recommend sharing a single Amazon EFS or FSx for ONTAP file system across multiple SIDs within an account.

The file system’s DNS name is the simplest mounting option. When connecting from an Amazon EC2 instance, the DNS name automatically resolves to the mount target’s IP address in that instance’s Availability Zone. You can also create a CNAME alias to help identify the shared file system’s purpose. Throughout this document, we use `<nfs.fqdn>` to represent this name.

Examples:
+  `file-system-id.efs.aws-region.amazonaws.com` 
+  `svm-id.fs-id.fsx.aws-region.amazonaws.com` 
+  `qas_sapmnt_share.example.com` 

**Note**  
Review the `enableDnsHostnames` and `enableDnsSupport` DNS attributes for your VPC. For more information, see [View and update DNS attributes for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html#vpc-dns-updating).

## Create file systems
<a name="create-filesystems-nw-sles"></a>

The following shared file systems are covered in this document:


| NFS Location Structure | NFS Location Example | File System Location Structure | File System Location Example | 
| --- | --- | --- | --- | 
|  <SID>_sapmnt  |   `SLX_sapmnt`   |  /sapmnt/<SID>  |   `/sapmnt/SLX`   | 
|  <SID>_ASCS<ascs_sys_nr>  |   `SLX_ASCS00`   |  /usr/sap/<SID>/ASCS<ascs_sys_nr>  |   `/usr/sap/SLX/ASCS00`   | 
|  <SID>_ERS<ers_sys_nr>  |   `SLX_ERS10`   |  /usr/sap/<SID>/ERS<ers_sys_nr>  |   `/usr/sap/SLX/ERS10`   | 

The following options can differ depending on how you architect and operate your systems:
+ ASCS and ERS mount points - In simple-mount architecture, you can share the entire `/usr/sap/<SID>` directory. This document uses separate mount points to simplify migration and follow SAP’s recommendation for local application server executables when co-hosting ASCS/ERS.
+ Transport directory - `/usr/sap/trans` is optional for ASCS installations. Add this shared directory if your change management processes require it.
+ Home directory - This document uses local home directories to ensure `<sid>adm` access during NFS issues. Consider a shared home directory if you need consistent user environments across nodes.
+ NFS location naming - The "NFS Location" names are arbitrary and can be chosen based on your naming conventions (e.g., `myEFSMount1`, `prod_sapmnt`, etc.). The "File system location" follows the standard SAP directory structure and should use the parameter references shown.

For more information, see [SAP System Directories on UNIX](https://help.sap.com/docs/SAP_NETWEAVER_750/ff18034f08af4d7bb33894c2047c3b71/2744f17a26a74a8abfd202c4f5dc9a0f.html).

Using the NFS file system created in the previous step, temporarily mount its root directory. `/mnt` is available by default; you can substitute another temporary location.

**Note**  
The following commands use the NFS location names from the table above. Replace `<SID>_sapmnt`, `<SID>_ASCS<ascs_sys_nr>`, and `<SID>_ERS<ers_sys_nr>` with your chosen NFS location names and parameter values.

```
# mount <nfs.fqdn>:/ /mnt
# mkdir -p /mnt/<SID>_sapmnt
# mkdir -p /mnt/<SID>_ASCS<ascs_sys_nr>
# mkdir -p /mnt/<SID>_ERS<ers_sys_nr>
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md) *:

  ```
  # mount fs-xxxxxxxxxxxxxefs1.efs.us-east-1.amazonaws.com:/ /mnt
  # mkdir -p /mnt/SLX_sapmnt
  # mkdir -p /mnt/SLX_ASCS00
  # mkdir -p /mnt/SLX_ERS10
  ```

During SAP installation, the `<sid>adm` user and proper directory ownership are created. Until then, ensure that the installation process has sufficient access by setting temporary permissions on the directories:

```
# chmod 777 /mnt/<SID>_sapmnt /mnt/<SID>_ASCS<ascs_sys_nr> /mnt/<SID>_ERS<ers_sys_nr>
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md) *:

  ```
  # chmod 777 /mnt/SLX_sapmnt /mnt/SLX_ASCS00 /mnt/SLX_ERS10
  ```

The SAP installation process will automatically set the correct ownership and permissions for operational use.

Unmount the temporary mount:

```
# umount /mnt
```
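The temporary mount, directory creation, and permission steps above can be wrapped in one small script. This is a minimal sketch, not part of the official procedure; the base directory, SID, and system numbers are parameters so you can adapt it, and SWPM still sets the final ownership and permissions later:

```shell
#!/bin/sh
# Sketch: create and temporarily open up the NFS subdirectories used for
# installation. BASE is the temporary mount point (e.g. /mnt after running
# "mount <nfs.fqdn>:/ /mnt").
prepare_nfs_dirs() {
  BASE="$1" SID="$2" ASCS_NR="$3" ERS_NR="$4"
  for d in "${SID}_sapmnt" "${SID}_ASCS${ASCS_NR}" "${SID}_ERS${ERS_NR}"; do
    mkdir -p "${BASE}/${d}"
    chmod 777 "${BASE}/${d}"   # temporary; SWPM sets final ownership/permissions
  done
}

# Example (after mounting the NFS root on /mnt):
# prepare_nfs_dirs /mnt SLX 00 10
```

Remember to unmount the temporary location afterwards, as shown above.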

## Create mount point directories
<a name="create-mount-dirs-nw-sles"></a>

This is applicable to both cluster nodes. Create the directories for the required mount points (permanent or cluster controlled):

```
# mkdir -p /sapmnt
# mkdir -p /usr/sap/<SID>/ASCS<ascs_sys_nr>
# mkdir -p /usr/sap/<SID>/ERS<ers_sys_nr>
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md) *:

  ```
  # mkdir -p /sapmnt
  # mkdir -p /usr/sap/SLX/ASCS00
  # mkdir -p /usr/sap/SLX/ERS10
  ```

## Update /etc/fstab
<a name="update-fstab-nw-sles"></a>

This is applicable to both cluster nodes. `/etc/fstab` is a configuration table containing the details required for mounting and unmounting file systems to a host.

Add the file systems not managed by the cluster to `/etc/fstab`.

For both **simple-mount** and **classic** architectures, prepare and append an entry for the `sapmnt` file system to `/etc/fstab`:

```
<nfs.fqdn>:/<SID>_sapmnt    /sapmnt    nfs    nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport    0    0
```

 **Simple-mount only** – prepare and append entries for the ASCS and ERS file systems to `/etc/fstab`:

```
<nfs.fqdn>:/<SID>_ASCS<ascs_sys_nr>   /usr/sap/<SID>/ASCS<ascs_sys_nr>  nfs    nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport    0    0
<nfs.fqdn>:/<SID>_ERS<ers_sys_nr>     /usr/sap/<SID>/ERS<ers_sys_nr>    nfs    nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport    0    0
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md) *:

  ```
  fs-xxxxxxxxxxxxxefs1.efs.us-east-1.amazonaws.com:/SLX_sapmnt    /sapmnt               nfs    nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport    0    0
  fs-xxxxxxxxxxxxxefs1.efs.us-east-1.amazonaws.com:/SLX_ASCS00    /usr/sap/SLX/ASCS00   nfs    nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport    0    0
  fs-xxxxxxxxxxxxxefs1.efs.us-east-1.amazonaws.com:/SLX_ERS10     /usr/sap/SLX/ERS10    nfs    nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport    0    0
  ```

Verify that your mount options are:
+ Compatible with your operating system version
+ Supported by your chosen NFS file system type (EFS or FSx for ONTAP)
+ Aligned with current SAP recommendations

Consult SAP and AWS documentation for the latest mount option recommendations.
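If you maintain these entries with automation, it helps to make the append idempotent. The following sketch is an assumption-laden illustration, not part of the official procedure; the target file is a parameter so you can test it against a copy before touching `/etc/fstab`, and the mount options mirror the values shown above:

```shell
#!/bin/sh
# Sketch: append an NFS entry to an fstab-style file only if the mount point
# is not already listed.
NFS_OPTS="nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport"

add_fstab_entry() {
  FSTAB="$1" SOURCE="$2" MOUNTPOINT="$3"
  if grep -qs "[[:space:]]${MOUNTPOINT}[[:space:]]" "$FSTAB"; then
    echo "entry for ${MOUNTPOINT} already present, skipping"
  else
    printf '%s    %s    nfs    %s    0    0\n' \
      "$SOURCE" "$MOUNTPOINT" "$NFS_OPTS" >> "$FSTAB"
  fi
}

# Example:
# add_fstab_entry /etc/fstab '<nfs.fqdn>:/SLX_sapmnt' /sapmnt
```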

Use the following command to mount the file systems defined in `/etc/fstab`:

```
# mount -a
```

Use the following command to check that the required file systems are available:

```
# df -h
```

## Temporarily mount ASCS and ERS directories for installation (classic only)
<a name="temp-mount-dirs-nw-sles"></a>

This is only applicable to the classic architecture. Simple-mount architecture has these directories permanently available in `/etc/fstab`.

Mount ASCS and ERS directories for installation.

Use the following command on the instance where you plan to install ASCS:

```
# mount <nfs.fqdn>:/<SID>_ASCS<ascs_sys_nr>  /usr/sap/<SID>/ASCS<ascs_sys_nr>
```

Use the following command on the instance where you plan to install ERS:

```
# mount <nfs.fqdn>:/<SID>_ERS<ers_sys_nr>  /usr/sap/<SID>/ERS<ers_sys_nr>
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md) *:

  ```
  # mount fs-xxxxxxxxxxxxxefs1.efs.us-east-1.amazonaws.com:/SLX_ASCS00  /usr/sap/SLX/ASCS00
  # mount fs-xxxxxxxxxxxxxefs1.efs.us-east-1.amazonaws.com:/SLX_ERS10   /usr/sap/SLX/ERS10
  ```

# Check IP availability and resolution
<a name="check-ip-availability-resolution-nw-sles"></a>

## Add Overlay IP for SAP Installation
<a name="add-oip-sapinst-nw-sles"></a>

Install SAP using the virtual hostnames assigned to the overlay IPs. Before adding the overlay IPs to the instances, ensure that the VPC route table entries have been created as described in [Add VPC Route Table Entries for Overlay IPs](sap-nw-pacemaker-sles-infra-setup.md#rt-sles).

To facilitate SAP installation, manually add the Overlay IPs to the instances:

1. To the instance where you intend to install the **ASCS** 

   ```
   # ip addr add <ascs_overlayip>/32 dev eth0
   ```

1. To the instance where you intend to install the **ERS** 

   ```
   # ip addr add <ers_overlayip>/32 dev eth0
   ```

Note the following:
+ Route table entries for the overlay IPs must be created first (see [Add VPC Route Table Entries for Overlay IPs](sap-nw-pacemaker-sles-infra-setup.md#rt-sles))
+ This IP configuration is temporary and will be lost after instance reboot
+ The cluster will take over management of these IPs once configured
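Because the manual IP assignment is temporary, you may need to repeat it after a reboot. A small sketch that adds the overlay IP only when it is not already assigned, so the step is safe to re-run; the device name is an assumption, and the script relies on standard iproute2 output:

```shell
#!/bin/sh
# Sketch: add an overlay IP to an interface only if it is not already there.
# Assumes iproute2; run as root on the relevant node.
has_ip() {
  # "-o" prints one line per address, which makes grepping reliable
  ip -o addr show dev "$1" | grep -qw "$2"
}

add_overlay_ip() {
  DEV="$1" IP="$2"
  if has_ip "$DEV" "$IP"; then
    echo "${IP} already configured on ${DEV}"
  else
    ip addr add "${IP}/32" dev "$DEV"
  fi
}

# Example:
# add_overlay_ip eth0 172.16.30.5
```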

## Hostname Resolution
<a name="hostname-resolution-nw-sles"></a>

You must ensure that all instances can resolve all hostnames in use. Add the hostnames of the cluster nodes to the `/etc/hosts` file on all cluster nodes. This ensures that the cluster node hostnames can be resolved even in the case of DNS issues. Configure the `/etc/hosts` file for a two-node cluster as follows:

```
# cat /etc/hosts
<primary_ip_1> <hostname_1>.example.com <hostname_1>
<primary_ip_2> <hostname_2>.example.com <hostname_2>
<ascs_overlayip> <ascs_virt_hostname>.example.com <ascs_virt_hostname>
<ers_overlayip> <ers_virt_hostname>.example.com <ers_virt_hostname>
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md) *:

  ```
  # cat /etc/hosts
  10.1.10.1 slxhost01.example.com slxhost01
  10.1.20.1 slxhost02.example.com slxhost02
  172.16.30.5 slxascs.example.com slxascs
  172.16.30.6 slxers.example.com slxers
  ```

In this configuration, the secondary IPs used for the second cluster ring are not mentioned. They are only used in the cluster configuration. You can allocate virtual hostnames for administration and identification purposes.
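To confirm that every required name is present on each node, you can script the check. This sketch is illustrative only; the hosts file path is a parameter so you can try it against a copy first:

```shell
#!/bin/sh
# Sketch: verify that a hosts file contains an entry for each required
# hostname. Returns nonzero and lists the missing names if any are absent.
check_hosts() {
  HOSTS_FILE="$1"; shift
  MISSING=0
  for NAME in "$@"; do
    if ! grep -qw "$NAME" "$HOSTS_FILE"; then
      echo "missing: $NAME" >&2
      MISSING=1
    fi
  done
  return $MISSING
}

# Example:
# check_hosts /etc/hosts slxhost01 slxhost02 slxascs slxers
```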

**Important**  
The overlay IP is outside of the VPC CIDR range and cannot be reached from locations that are not associated with the route table, including on-premises networks.

# Install SAP
<a name="install-sap-nw-sles"></a>

The following topics provide information about installing SAP on AWS Cloud in a highly available cluster. Review SAP Documentation for more details.

**Topics**
+ [

## Final checks for software provisioning
](#final-checks-software-provisioning-nw-sles)
+ [

## Install SAP ASCS and ERS instances
](#install-sap-instances-nw-sles)
+ [

## Kernel upgrade and ENSA2 – optional
](#kernel-ensa2-nw-sles)
+ [

## Check SAP host agent version
](#check-host-agent-nw-sles)

## Final checks for software provisioning
<a name="final-checks-software-provisioning-nw-sles"></a>

Before running SAP Software Provisioning Manager (SWPM), ensure that the following prerequisites are consistent across both cluster nodes:
+ Collect any missing details and populate the [Parameter Reference](sap-nw-pacemaker-sles-parameters.md) section to ensure clarity on the specific values used in installation commands.
+  **User and Group Configuration** - If operating system groups are pre-defined, ensure matching UID and GID values for `<sid>adm` and `sapsys` across both cluster nodes.
+  **Installation Software** - Download the latest version of Software Provisioning Manager (SWPM) and SAP installation media for your SAP release from [Software Provisioning Manager](https://support.sap.com/en/tools/software-logistics-tools/software-provisioning-manager.html).
+  **Network Configuration** - Verify both cluster nodes have identical configuration with all routes, overlay IPs, and virtual hostnames accessible. This ensures that either node can run ASCS or ERS roles.
+  **File Systems** - Verify all shared file systems are mounted and accessible from both nodes with consistent mount points and permissions.
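The UID/GID consistency check in the list above can be scripted: run the same command on both nodes and compare the output. A minimal sketch using `getent`; the user and group names shown in the example are the ones this guide assumes:

```shell
#!/bin/sh
# Sketch: print the UID of a user and the GID of a group so the values can
# be compared across cluster nodes before running SWPM.
uid_of() { getent passwd "$1" | cut -d: -f3; }
gid_of() { getent group  "$1" | cut -d: -f3; }

# Example (run on each node and compare the output):
# echo "slxadm uid: $(uid_of slxadm)  sapsys gid: $(gid_of sapsys)"
```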

## Install SAP ASCS and ERS instances
<a name="install-sap-instances-nw-sles"></a>

Install the SAP ASCS and ERS instances using their virtual hostnames to ensure installation against the overlay IP addresses. This approach is required for proper cluster integration.

Install the ASCS instance on `<instance_id_1>` using virtual hostname `<ascs_virt_hostname>` with the `SAPINST_USE_HOSTNAME` parameter. This ensures the installation uses the overlay IP rather than the physical hostname:

 *Example using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md) *:

```
# <swpm location>/sapinst SAPINST_USE_HOSTNAME=<ascs_virt_hostname>
```

Install the ERS instance on `<instance_id_2>` using virtual hostname `<ers_virt_hostname>` with the `SAPINST_USE_HOSTNAME` parameter. This ensures the installation uses the overlay IP rather than the physical hostname:

```
# <swpm location>/sapinst SAPINST_USE_HOSTNAME=<ers_virt_hostname>
```

Once the ASCS and ERS installations are complete, you will need to install and configure the database and the SAP Primary Application Server (PAS); these components are not covered in this cluster setup documentation. Optionally, you can also install and configure an Additional Application Server (AAS). For more details on installing these SAP NetWeaver components, refer to the SAP Help Portal.

For additional information on unattended installation options, see [SAP Note 2230669 – System Provisioning Using an Input Parameter File](https://me.sap.com/notes/2230669) (requires SAP portal access).

## Kernel upgrade and ENSA2 – optional
<a name="kernel-ensa2-nw-sles"></a>

As of AS ABAP Release 7.53 (ABAP Platform 1809), the new Standalone Enqueue Server 2 (ENSA2) is installed by default. ENSA2 replaces the previous version – ENSA1.

If you have an older version of SAP NetWeaver, consider following the SAP guidance to upgrade the kernel and update the Enqueue Server configuration. An upgrade will allow you to take advantage of the features available in the latest version. For more information, see the following SAP Notes (require SAP portal access):
+  [SAP Note 2630416 – Support for Standalone Enqueue Server 2](https://me.sap.com/notes/2630416) 
+  [SAP Note 2711036 – Usage of the Standalone Enqueue Server 2 in an HA Environment](https://me.sap.com/notes/2711036) 

## Check SAP host agent version
<a name="check-host-agent-nw-sles"></a>

This is applicable to both cluster nodes. The SAP host agent is used for system instance control and monitoring. This agent is used by SAP cluster resource agents and hooks. It is recommended that you have the latest version installed on both instances. For more details, see [SAP Note 2219592 – Upgrade Strategy of SAP Host Agent](https://me.sap.com/notes/2219592).

Use the following command to check the version of the host agent:

```
# /usr/sap/hostctrl/exe/saphostexec -version
```

# Configure SAP for Cluster Control
<a name="sap-ascs-service-control-nw-sles"></a>

Modify SAP service configurations, user permissions, and system integration settings to enable proper cluster control of ASCS and ERS instances.

**Topics**
+ [Add <sid>adm to haclient group](#add-sidadm-haclient-nw-sles)
+ [Modify SAP profiles for start operations and cluster hook](#modify-sap-profiles-nw-sles)
+ [Enable sapping and sappong Services (Simple-Mount Only)](#sapping-sappong-services-nw-sles)
+ [Ensure ASCS and ERS SAP Services can run on either node (systemd)](#modify-sapservices-nw-sles)
+ [Configure dependencies for Pacemaker and SAP services (systemd)](#configure-systemd-deps-nw-sles)
+ [(Alternative) Ensure ASCS and ERS SAP Services can run on either node (sysV)](#modify-sapservices-sysv-nw-sles)

## Add <sid>adm to haclient group
<a name="add-sidadm-haclient-nw-sles"></a>

This is applicable to both cluster nodes. An `haclient` operating system group is created when the cluster connector package is installed. Adding the `<sid>adm` user to this group ensures that your cluster has necessary access. Run the following command as root:

```
# usermod -a -G haclient <sid>adm
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md) *:

  ```
  # usermod -a -G haclient slxadm
  ```
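When scripting node preparation, you can make the group addition idempotent and verify the result. A sketch, assuming the `haclient` group already exists (it is created by the cluster connector package):

```shell
#!/bin/sh
# Sketch: add a user to a supplementary group only if not already a member,
# then verify membership.
in_group() { id -nG "$1" | grep -qw "$2"; }

add_to_group() {
  USER="$1" GROUP="$2"
  if in_group "$USER" "$GROUP"; then
    echo "$USER is already in $GROUP"
  else
    usermod -a -G "$GROUP" "$USER"
  fi
}

# Example:
# add_to_group slxadm haclient && in_group slxadm haclient && echo OK
```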

## Modify SAP profiles for start operations and cluster hook
<a name="modify-sap-profiles-nw-sles"></a>

This action ensures compatibility between the SAP start framework and cluster actions. Modify the SAP profiles to change the start behavior of the SAP instance and processes, and ensure that `sapcontrol` is aware that the system is managed by a Pacemaker cluster.
+ ASCS profile – `/usr/sap/<SID>/SYS/profile/<SID>_ASCS<ascs_sys_nr>_<ascs_virt_hostname>` 
+ ERS profile – `/usr/sap/<SID>/SYS/profile/<SID>_ERS<ers_sys_nr>_<ers_virt_hostname>` 

The profile directory `/usr/sap/<SID>/SYS/profile/` is typically a symbolic link to `/sapmnt/<SID>/profile/` on the shared NFS file system. This means profile modifications made on one node are immediately visible on all cluster nodes. You can modify the profiles from either node.
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md) *:
  + ASCS profile example – `/usr/sap/SLX/SYS/profile/SLX_ASCS00_slxascs` 
  + ERS profile example – `/usr/sap/SLX/SYS/profile/SLX_ERS10_slxers` 

Follow the procedure outlined below to make the necessary changes:

1.  **Program or process start behavior** – After a failure, processes must be restarted. The cluster, not the SAP start framework behavior defined in the profiles, must control where each process starts and in what order. Your enqueue locks can be lost if this parameter is not changed. In newer SAP installations, the profiles may already contain `Start_Program_XX` instead of `Restart_Program_XX`; if `Start_Program_XX` is already present, no change is needed for this step.  
**Example**  

------
#### [ ENSA1 ]

    **ASCS** 

   ```
   #For ENSA1 (_EN)
   #Changing Restart to Start for Cluster compatibility
   #Old value: Restart_Program_XX = local $(_EN) pf=$(_PF)
   
   Start_Program_XX = local $(_EN) pf=$(_PF)
   ```

    **ERS** 

   ```
   #For ENSA1 (_ER)
   #Changing Restart to Start for Cluster compatibility
   #Old value: Restart_Program_XX = local $(_ER) pf=$(_PFL)NR=$(SCSID)
   
   Start_Program_XX = local $(_ER) pf=$(_PFL) NR=$(SCSID)
   ```

    *`XX` indicates the start-up order. This value may differ in your installation; retain the existing value.* 

------
#### [ ENSA2 ]

    **ASCS** 

   ```
   #For ENSA2 (_ENQ)
   #Changing Restart to Start for Cluster compatibility
   #Old value: Restart_Program_XX = local $(_ENQ) pf=$(_PF)
   
   Start_Program_XX = local $(_ENQ) pf=$(_PF)
   ```

    **ERS** 

   ```
   #For ENSA2 (_ENQR)
   #Changing Restart to Start for Cluster compatibility
   #Old value: Restart_Program_XX = local $(_ENQR) pf=$(_PFL)NR=$(SCSID)
   
   Start_Program_XX = local $(_ENQR) pf=$(_PFL) NR=$(SCSID)
   ```

    *`XX` indicates the start order. This value may differ in your installation; retain the existing value.* 

------

1.  **Disable instance auto start in both profiles** – When an instance restarts, the SAP start framework should not start ASCS and ERS automatically. Add the following parameter to both profiles to prevent an auto start:

   ```
   # Disable instance auto start
   Autostart = 0
   ```

1.  **Add cluster connector details in both profiles** – The connector integrates the SAP start and control frameworks of SAP NetWeaver with the SUSE cluster to assist with maintenance and state awareness. Add the following parameters to both profiles:

   ```
   # Added for Cluster Connectivity
   service/halib = $(DIR_EXECUTABLE)/saphascriptco.so
   service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector
   ```
**Important**  
RPM package `sap-suse-cluster-connector` has *dashes*. The executable `/usr/bin/sap_suse_cluster_connector` available after installation has *underscores*. Ensure that the correct name, that is executable `/usr/bin/sap_suse_cluster_connector`, is used in both profiles.

1.  **Restart services** – Restart SAP services for ASCS and ERS to ensure that the preceding settings take effect. Adjust the system number to match the service.

    **ASCS** 

   ```
   # /usr/sap/hostctrl/exe/sapcontrol -nr <ascs_sys_nr> -function RestartService
   ```

    **ERS** 

   ```
   # /usr/sap/hostctrl/exe/sapcontrol -nr <ers_sys_nr> -function RestartService
   ```
   +  *Example using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md) *:

      **ASCS** 

     ```
     # /usr/sap/hostctrl/exe/sapcontrol -nr 00 -function RestartService
     ```

      **ERS** 

     ```
     # /usr/sap/hostctrl/exe/sapcontrol -nr 10 -function RestartService
     ```

1.  **Check integration using `sapcontrol`** – `sapcontrol` includes the functions `HACheckConfig` and `HACheckFailoverConfig`, which can be used to check the configuration, including awareness of the cluster connector. These checks have limited value before the cluster is configured, but you can run `HACheckFailoverConfig` to confirm that the base configuration is in place.

    **ASCS** 

   ```
   # /usr/sap/hostctrl/exe/sapcontrol -nr <ascs_sys_nr> -function HACheckFailoverConfig
   ```
   +  *Example using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md) *:

      **ASCS** 

     ```
     # /usr/sap/hostctrl/exe/sapcontrol -nr 00 -function HACheckFailoverConfig
     
     10.10.2025 01:23:55
     HACheckFailoverConfig
     OK
     state, category, description, comment
     SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version, SAPInstance includes is-ers patch
     ```
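The profile changes from steps 1 through 3 can also be applied with a short script. This is a sketch only, not part of the official procedure: it assumes the GNU sed `-i` flag, works for both ENSA1 and ENSA2 because it only rewrites the `Restart_Program_XX` prefix, and should first be tested on a copy of the profile:

```shell
#!/bin/sh
# Sketch: apply the cluster-related profile changes (steps 1-3) to a profile
# file, idempotently.
adjust_profile() {
  PROFILE="$1"
  # Step 1: Restart_Program_XX -> Start_Program_XX (keeps the start order XX)
  sed -i 's/^Restart_Program_\([0-9][0-9]\)/Start_Program_\1/' "$PROFILE"
  # Step 2: disable instance auto start (append only if not already set)
  grep -q '^Autostart' "$PROFILE" || echo 'Autostart = 0' >> "$PROFILE"
  # Step 3: cluster connector entries (note underscores in the executable name)
  if ! grep -q 'halib_cluster_connector' "$PROFILE"; then
    {
      echo '# Added for Cluster Connectivity'
      echo 'service/halib = $(DIR_EXECUTABLE)/saphascriptco.so'
      echo 'service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector'
    } >> "$PROFILE"
  fi
}

# Example:
# adjust_profile /usr/sap/SLX/SYS/profile/SLX_ASCS00_slxascs
```

Review the modified profile before restarting the SAP services, as described in step 4.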

## Enable sapping and sappong Services (Simple-Mount Only)
<a name="sapping-sappong-services-nw-sles"></a>

For simple-mount architecture, enable the sapping and sappong systemd services on both cluster nodes. These services ensure proper SAP instance startup coordination between systemd and the cluster.

The sapping service runs before sapinit during boot and temporarily hides the `/usr/sap/sapservices` file to prevent automatic SAP instance startup. The sappong service runs after sapinit and restores the sapservices file, making it available for cluster management while maintaining compatibility with SAP management tools.

```
# systemctl enable sapping
# systemctl enable sappong
```

Verify the services are enabled:

```
# systemctl status sapping
# systemctl status sappong
```

**Note**  
Both services will show "inactive (dead)" status, which is normal for one-shot services that only run during system boot.

## Ensure ASCS and ERS SAP Services can run on either node (systemd)
<a name="modify-sapservices-nw-sles"></a>

This is applicable to both cluster nodes.

To ensure that the cluster can orchestrate availability by starting and stopping instances on either cluster node, the SAP services must be registered on both nodes and auto start must be disabled.

In recent operating system and SAP kernel versions, SAP offers systemd integration for `sapstartsrv`, which controls how SAP instances are stopped and started. This is the recommended configuration and a requirement for the simple-mount architecture.

For more details, see the following SAP Notes (require SAP portal access):
+  [SAP Note 3139184 – Linux: systemd integration for sapstartsrv and SAP Host Agent](https://me.sap.com/notes/3139184) 
+  [SAP Note 3115048 – sapstartsrv with native Linux systemd support](https://me.sap.com/notes/3115048) 

You can confirm whether systemd is in place by running the following command. systemd is in place if SAP services (for example, `SAPSLX_00.service`, `SAPSLX_10.service`) are listed.

```
# systemctl list-unit-files SAP*
```

If you have installed an ASCS or ERS instance on this host but no SAP services are returned, the classic SysV init may be in use. In that case, skip to the section [(Alternative) Ensure ASCS and ERS SAP Services can run on either node (sysV)](#modify-sapservices-sysv-nw-sles).

1.  **On the instance where the ASCS was installed** 

   Register the missing ERS service on the node where you have installed ASCS.

   1. Temporarily mount the ERS directory (classic only):

      ```
      # mount <nfs.fqdn>:/<SID>_ERS<ers_sys_nr>  /usr/sap/<SID>/ERS<ers_sys_nr>
      ```

   1. Register the ERS service:

      ```
      # export LD_LIBRARY_PATH=/usr/sap/<SID>/ERS<ers_sys_nr>/exe
      # /usr/sap/<SID>/ERS<ers_sys_nr>/exe/sapstartsrv pf=/usr/sap/<SID>/SYS/profile/<SID>_ERS<ers_sys_nr>_<ers_virt_hostname> -reg
      # systemctl start SAP<SID>_<ers_sys_nr>
      ```

   1. Check the existence and state of SAP services (example):

      ```
      # systemctl list-unit-files SAP*
      UNIT FILE                 STATE    VENDOR PRESET
      SAPSLX_00.service         disabled disabled
      SAPSLX_10.service         disabled disabled
      SAP.slice                 static   -
      3 unit files listed.
      ```

   1. If the state is not `disabled`, run the following commands to disable `sapservices` integration for `SAP<SID>_<ascs_sys_nr>` and `SAP<SID>_<ers_sys_nr>` on both nodes:
**Important**  
Stopping these services also stops the associated SAP instances.

      ```
      # systemctl stop SAP<SID>_<ascs_sys_nr>.service
      # systemctl disable SAP<SID>_<ascs_sys_nr>.service
      # systemctl stop SAP<SID>_<ers_sys_nr>.service
      # systemctl disable SAP<SID>_<ers_sys_nr>.service
      ```

   1. Unmount the ERS directory (classic only):

      ```
      # umount /usr/sap/<SID>/ERS<ers_sys_nr>
      ```
      +  *Example using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md) *:

        ```
        # mount fs-xxxxxxxxxxxxxefs1.efs.us-east-1.amazonaws.com:/SLX_ERS10  /usr/sap/SLX/ERS10
        # export LD_LIBRARY_PATH=/usr/sap/SLX/ERS10/exe
        # /usr/sap/SLX/ERS10/exe/sapstartsrv pf=/usr/sap/SLX/SYS/profile/SLX_ERS10_slxers -reg
        # systemctl start SAPSLX_10
        # systemctl stop SAPSLX_00.service
        # systemctl disable SAPSLX_00.service
        # systemctl stop SAPSLX_10.service
        # systemctl disable SAPSLX_10.service
        # umount /usr/sap/SLX/ERS10
        ```

1.  **On the instance where the ERS was installed** 

   Register the missing ASCS service on the node where you have installed ERS.

   1. Temporarily mount the ASCS directory (classic only):

      ```
      # mount <nfs.fqdn>:/<SID>_ASCS<ascs_sys_nr> /usr/sap/<SID>/ASCS<ascs_sys_nr>
      ```

   1. Register the ASCS service:

      ```
      # export LD_LIBRARY_PATH=/usr/sap/<SID>/ASCS<ascs_sys_nr>/exe
      # /usr/sap/<SID>/ASCS<ascs_sys_nr>/exe/sapstartsrv pf=/usr/sap/<SID>/SYS/profile/<SID>_ASCS<ascs_sys_nr>_<ascs_virt_hostname> -reg
      # systemctl start SAP<SID>_<ascs_sys_nr>
      ```

   1. Check the existence and state of SAP services (example):

      ```
      # systemctl list-unit-files SAP*
      UNIT FILE                 STATE    VENDOR PRESET
      SAPSLX_00.service         disabled disabled
      SAPSLX_10.service         disabled disabled
      SAP.slice                 static   -
      3 unit files listed.
      ```

   1. If the state is not `disabled`, run the following commands to disable `sapservices` integration for `SAP<SID>_<ascs_sys_nr>` and `SAP<SID>_<ers_sys_nr>` on both nodes:
**Important**  
Stopping these services also stops the associated SAP instances.

      ```
      # systemctl stop SAP<SID>_<ascs_sys_nr>.service
      # systemctl disable SAP<SID>_<ascs_sys_nr>.service
      # systemctl stop SAP<SID>_<ers_sys_nr>.service
      # systemctl disable SAP<SID>_<ers_sys_nr>.service
      ```

   1. Unmount the ASCS directory (classic only):

      ```
      # umount /usr/sap/<SID>/ASCS<ascs_sys_nr>
      ```
      +  *Example using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md) *:

        ```
        # mount fs-xxxxxxxxxxxxxefs1.efs.us-east-1.amazonaws.com:/SLX_ASCS00 /usr/sap/SLX/ASCS00
        # export LD_LIBRARY_PATH=/usr/sap/SLX/ASCS00/exe
        # /usr/sap/SLX/ASCS00/exe/sapstartsrv pf=/usr/sap/SLX/SYS/profile/SLX_ASCS00_slxascs -reg
        # systemctl start SAPSLX_00
        # systemctl stop SAPSLX_00.service
        # systemctl disable SAPSLX_00.service
        # systemctl stop SAPSLX_10.service
        # systemctl disable SAPSLX_10.service
        # umount /usr/sap/SLX/ASCS00
        ```

## Configure dependencies for Pacemaker and SAP services (systemd)
<a name="configure-systemd-deps-nw-sles"></a>

This step is required on both cluster nodes when using systemd integration.

When an EC2 instance shuts down unexpectedly, Pacemaker (the cluster resource manager) may trigger unnecessary fencing actions because it cannot distinguish between planned SAP service shutdowns and system failures. To prevent this, configure systemd dependencies that inform Pacemaker about the relationship between SAP services and cluster operations.

Create a systemd drop-in configuration for the `resource-agents-deps.target`, which is a systemd target that Pacemaker uses to understand external service dependencies:

```
# mkdir -p /etc/systemd/system/resource-agents-deps.target.d/
# cd /etc/systemd/system/resource-agents-deps.target.d/

# cat > sap_systemd_<sid>.conf <<_EOF
[Unit]
Requires=sapinit.service
After=sapinit.service
After=SAP<SID>_<ascs_sys_nr>.service
After=SAP<SID>_<ers_sys_nr>.service
_EOF

# systemctl daemon-reload
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md) *:

  ```
  # cat > sap_systemd_slx.conf <<_EOF
  [Unit]
  Requires=sapinit.service
  After=sapinit.service
  After=SAPSLX_00.service
  After=SAPSLX_10.service
  _EOF
  
  # systemctl daemon-reload
  ```

## (Alternative) Ensure ASCS and ERS SAP Services can run on either node (sysV)
<a name="modify-sapservices-sysv-nw-sles"></a>

This is only applicable if systemd integration is not in place.

To ensure that the SAP instances can be managed by the cluster, and also manually during planned maintenance activities, add the missing entries for the ASCS and ERS `sapstartsrv` services to the `/usr/sap/sapservices` file on both cluster nodes (ASCS and ERS hosts). Copy the missing entry from the other host. After the modifications, the `/usr/sap/sapservices` file looks as follows on both hosts:

```
#!/bin/sh
LD_LIBRARY_PATH=/usr/sap/<SID>/ASCS<ascs_sys_nr>/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/<SID>/ASCS<ascs_sys_nr>/exe/sapstartsrv pf=/usr/sap/<SID>/SYS/profile/<SID>_ASCS<ascs_sys_nr>_<ascs_virt_hostname> -D -u <sid>adm
LD_LIBRARY_PATH=/usr/sap/<SID>/ERS<ers_sys_nr>/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/<SID>/ERS<ers_sys_nr>/exe/sapstartsrv pf=/usr/sap/<SID>/SYS/profile/<SID>_ERS<ers_sys_nr>_<ers_virt_hostname> -D -u <sid>adm
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md) *:

  ```
  #!/bin/sh
  LD_LIBRARY_PATH=/usr/sap/SLX/ASCS00/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/SLX/ASCS00/exe/sapstartsrv pf=/usr/sap/SLX/SYS/profile/SLX_ASCS00_slxascs -D -u slxadm
  LD_LIBRARY_PATH=/usr/sap/SLX/ERS10/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/SLX/ERS10/exe/sapstartsrv pf=/usr/sap/SLX/SYS/profile/SLX_ERS10_slxers -D -u slxadm
  ```
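A quick completeness check is that both the ASCS and the ERS `sapstartsrv` entries are present. The following sketch runs the check against a temporary copy built from the SLX example values; on a cluster node, point `SAPSERVICES` at `/usr/sap/sapservices` instead:

```shell
# Sketch only: checks a temp copy of the file.
# On a cluster node, use: SAPSERVICES=/usr/sap/sapservices
SAPSERVICES=$(mktemp)
cat > "$SAPSERVICES" <<'EOF'
#!/bin/sh
LD_LIBRARY_PATH=/usr/sap/SLX/ASCS00/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/SLX/ASCS00/exe/sapstartsrv pf=/usr/sap/SLX/SYS/profile/SLX_ASCS00_slxascs -D -u slxadm
LD_LIBRARY_PATH=/usr/sap/SLX/ERS10/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/SLX/ERS10/exe/sapstartsrv pf=/usr/sap/SLX/SYS/profile/SLX_ERS10_slxers -D -u slxadm
EOF
# Both the ASCS and the ERS sapstartsrv entry must be present.
grep -q 'ASCS00.*sapstartsrv' "$SAPSERVICES" \
  && grep -q 'ERS10.*sapstartsrv' "$SAPSERVICES" \
  && echo "both entries present"
```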

# Cluster Node Setup
<a name="cluster-node-setup-nw-sles"></a>

Establish cluster communication between nodes using Corosync and configure required authentication.

**Topics**
+ [

## Change the hacluster Password
](#change-hacluster-password-nw-sles)
+ [

## Setup Passwordless Authentication
](#setup-passwordless-auth-nw-sles)
+ [

## Configure the Cluster Nodes
](#configure-cluster-nodes-nw-sles)
+ [

## Modify Generated Corosync Configuration
](#modify-corosync-config-nw-sles)
+ [

## Verify Corosync Configuration
](#verify-corosync-config-nw-sles)
+ [

## Configure Cluster Services
](#configure-cluster-services-nw-sles)
+ [

## Verify Cluster Status
](#verify-cluster-status-nw-sles)

## Change the hacluster Password
<a name="change-hacluster-password-nw-sles"></a>

On all cluster nodes, change the password of the operating system user hacluster:

```
# passwd hacluster
```

## Setup Passwordless Authentication
<a name="setup-passwordless-auth-nw-sles"></a>

SUSE cluster tools provide comprehensive reporting and troubleshooting capabilities for cluster activity. Many of these tools require passwordless SSH access between nodes to collect cluster-wide information effectively. SUSE recommends configuring passwordless SSH for the root user to enable seamless cluster diagnostics and reporting.

EC2 instances typically have no root password set. Use the shared `/sapmnt` filesystem to exchange SSH keys:

 **On the primary node (<hostname1>):** 

```
# ssh-keygen -t rsa -b 4096 -f /root/.ssh/id_rsa -N ''
# cp /root/.ssh/id_rsa.pub /sapmnt/node1_key.pub
```

 **On the secondary node (<hostname2>):** 

```
# ssh-keygen -t rsa -b 4096 -f /root/.ssh/id_rsa -N ''
# cp /root/.ssh/id_rsa.pub /sapmnt/node2_key.pub
# cat /sapmnt/node1_key.pub >> /root/.ssh/authorized_keys
# chmod 600 /root/.ssh/authorized_keys
```

 **Back on the primary node (<hostname1>):** 

```
# cat /sapmnt/node2_key.pub >> /root/.ssh/authorized_keys
# chmod 600 /root/.ssh/authorized_keys
```

 **Test connectivity from both nodes:** 

```
# ssh root@<opposite_hostname> 'hostname'
```

 **Clean up temporary files (from either node):** 

```
# rm /sapmnt/node1_key.pub /sapmnt/node2_key.pub
```

An alternative is to review the SUSE documentation for [Running cluster reports without root access](https://documentation.suse.com/sle-ha/15-SP7/html/SLE-HA-all/app-crmreport-nonroot.html).

**Warning**  
Review the security implications for your organization, including root access controls and network segmentation, before implementing this configuration.

## Configure the Cluster Nodes
<a name="configure-cluster-nodes-nw-sles"></a>

Initialize the cluster framework on the first node so that it recognizes both cluster nodes.

On the primary node as root, run:

```
# crm cluster init -u -y -n <cluster_name> -N <hostname_1> -N <hostname_2>
```

 *Example using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md) *:

```
# crm cluster init -u -y -n slx-sap-cluster -N slxhost01 -N slxhost02
INFO: Detected "amazon-web-services" platform
INFO: Loading "default" profile from /etc/crm/profiles.yml
INFO: "amazon-web-services" profile does not exist in /etc/crm/profiles.yml

INFO: Configuring csync2
INFO: Starting csync2.socket service on slxhost01
INFO: BEGIN csync2 checking files
INFO: END csync2 checking files
INFO: Configuring corosync (unicast)
WARNING: Not configuring SBD - STONITH will be disabled.
INFO: Hawk cluster interface is now running. To see cluster status, open:
INFO:   https://10.2.10.1:7630/
INFO: Log in with username 'hacluster'
INFO: Starting pacemaker.service on slxhost01
INFO: BEGIN Waiting for cluster
...........
INFO: END Waiting for cluster
INFO: Loading initial cluster configuration
INFO: Done (log saved to /var/log/crmsh/crmsh.log on slxhost01)
INFO: Adding node slxhost02 to cluster
INFO: Running command on slxhost02: crm cluster join -y  -c root@slxhost01
INFO: Configuring csync2
INFO: Starting csync2.socket service
INFO: BEGIN csync2 syncing files in cluster
INFO: END csync2 syncing files in cluster
INFO: Merging known_hosts
INFO: BEGIN Probing for new partitions
INFO: END Probing for new partitions
INFO: Hawk cluster interface is now running. To see cluster status, open:
INFO:   https://10.1.20.7:7630/
INFO: Log in with username 'hacluster'
INFO: Starting pacemaker.service on slxhost02
INFO: BEGIN Waiting for cluster
INFO: END Waiting for cluster
INFO: Set property "priority" in rsc_defaults to 1
INFO: BEGIN Reloading cluster configuration
INFO: END Reloading cluster configuration
INFO: Done (log saved to /var/log/crmsh/crmsh.log on slxhost02)
```

This command:
+ Initializes a two-node cluster with the name given by `-n` (`slx-sap-cluster` in the example)
+ Configures unicast communication (`-u`)
+ Sets up the basic corosync configuration
+ Automatically joins the second node to the cluster
+ Does not configure SBD, because the AWS fencing agent is used for STONITH in AWS environments
+ Does not cover QDevice configuration, which is possible but out of scope for this document. Refer to [SUSE Linux Enterprise High Availability Documentation - QDevice and QNetD](https://documentation.suse.com/en-us/sle-ha/15-SP7/html/SLE-HA-all/cha-ha-qdevice.html).

## Modify Generated Corosync Configuration
<a name="modify-corosync-config-nw-sles"></a>

After initializing the cluster, the generated corosync configuration requires some modification to optimize it for cloud environments.

 **1. Edit the corosync configuration:** 

```
# vi /etc/corosync/corosync.conf
```

The generated file typically looks like this:

```
# Please read the corosync.conf.5 manual page
totem {
        version: 2
        cluster_name: myCluster
        clear_node_high_bit: yes
        interface {
                ringnumber: 0
                mcastport: 5405
                ttl: 1
        }

        transport: udpu
        crypto_hash: sha1
        crypto_cipher: aes256
        token: 5000     # This needs to be changed
        join: 60
        max_messages: 20
        token_retransmits_before_loss_const: 10
}

logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        logfile: /var/log/cluster/corosync.log
        to_syslog: yes
        debug: off
        timestamp: on
        logger_subsys {
                subsys: QUORUM
                debug: off
        }

}

nodelist {
    node {
        ring0_addr: <node1_primary_ip>    # Only single ring configured
        nodeid: 1
    }
    node {
        ring0_addr: <node2_primary_ip>    # Only single ring configured
        nodeid: 2
    }
}

quorum {

        # Enable and configure quorum subsystem (default: off)
        # see also corosync.conf.5 and votequorum.5
        provider: corosync_votequorum
        expected_votes: 2
        two_node: 1
}

```

 **2. Modify the configuration to add the second ring and optimize settings:** 

```
totem {
    token: 15000           # Changed from 5000 to 15000
    rrp_mode: passive      # Added for dual ring support
}

nodelist {
    node {
        ring0_addr: <node1_primary_ip>     # Primary network
        ring1_addr: <node1_secondary_ip>   # Added secondary network
        nodeid: 1
    }
    node {
        ring0_addr: <node2_primary_ip>     # Primary network
        ring1_addr: <node2_secondary_ip>   # Added secondary network
        nodeid: 2
    }
}
```

 *Example IP configuration:* 


| Network Interface | Node 1 | Node 2 | 
| --- | --- | --- | 
|  ring0\_addr  |  10.2.10.1  |  10.2.20.1  | 
|  ring1\_addr  |  10.2.10.2  |  10.2.20.2  | 
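Before syncing the file, it is worth confirming that every node stanza now carries both rings. The following is a minimal sketch over a temporary copy of the nodelist built from the example IPs above; on a cluster node, run the same greps against `/etc/corosync/corosync.conf`:

```shell
# Sketch only: validates a temp copy of the nodelist section.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
nodelist {
    node {
        ring0_addr: 10.2.10.1
        ring1_addr: 10.2.10.2
        nodeid: 1
    }
    node {
        ring0_addr: 10.2.20.1
        ring1_addr: 10.2.20.2
        nodeid: 2
    }
}
EOF
# Each of the two node stanzas must define both rings.
ring0=$(grep -c 'ring0_addr' "$CONF")
ring1=$(grep -c 'ring1_addr' "$CONF")
[ "$ring0" -eq 2 ] && [ "$ring1" -eq 2 ] && echo "dual ring configured"
```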

 **3. Synchronize the modified configuration to all nodes:** 

```
# csync2 -xvF /etc/corosync/corosync.conf
```

 **4. Restart the cluster** 

```
# crm cluster restart
# ssh root@<hostname2> 'crm cluster restart'
```

## Verify Corosync Configuration
<a name="verify-corosync-config-nw-sles"></a>

Verify network rings are active:

```
# corosync-cfgtool -s
```

 *Example output*:

```
Printing ring status.
Local node ID 1
RING ID 0
        id      = 10.2.10.1
        status  = ring 0 active with no faults
RING ID 1
        id      = 10.2.10.2
        status  = ring 1 active with no faults
```

Both network rings should report "active with no faults". If either ring is missing, review the corosync configuration and check that the `/etc/corosync/corosync.conf` changes have been synced to the secondary node. If they have not, copy the file manually. Restart the cluster if needed.

## Configure Cluster Services
<a name="configure-cluster-services-nw-sles"></a>

Enable pacemaker to start automatically after reboot:

```
# systemctl enable pacemaker
```

Enabling pacemaker also handles corosync through service dependencies. The cluster will start automatically after reboot. For troubleshooting scenarios, you can choose to manually start services after boot instead.

## Verify Cluster Status
<a name="verify-cluster-status-nw-sles"></a>

 **1. Check pacemaker service status:** 

```
# systemctl status pacemaker
```

 **2. Verify cluster status:** 

```
# crm_mon -1
```

 *Example output*:

```
Cluster Summary:
  * Stack: corosync
  * Current DC: slxhost01 (version 2.1.5+20221208.a3f44794f) - partition with quorum
  * 2 nodes configured
  * 0 resource instances configured

Node List:
  * Online: [ slxhost01 slxhost02 ]

Active Resources:
  * No active resources
```

# Cluster Configuration
<a name="cluster-config-nw-sles"></a>

The following sections provide details on the resources, groups and constraints necessary to ensure high availability of SAP Central Services.

**Topics**
+ [

## Prepare for Resource Creation
](#prepare-resource-nw-sles)
+ [

## Cluster Bootstrap
](#cluster-bootstrap-nw-sles)
+ [

## Create STONITH (external/ec2) resource
](#create-stonith-ec2-nw-sles)
+ [

## Create Filesystem resources (classic only)
](#filesystem-resources-nw-sles)
+ [

## Create Overlay IP (aws-vpc-move-ip) resources
](#overlay-ip-resources-nw-sles)
+ [

## Create SAPStartSrv resources (simple-mount only)
](#sapstartsrv-resources-nw-sles)
+ [

## Create SAPInstance resources (simple-mount only)
](#sap-resources-simple-nw-sles)
+ [

## Create SAPInstance resources (classic only)
](#sap-resources-classic-nw-sles)
+ [

## Create resource groups for aws-vpc-move-ip / SAPStartSrv / SAPInstance (simple-mount only)
](#resource-groups-simple-nw-sles)
+ [

## Create resource groups for Filesystem / aws-vpc-move-ip / SAPInstance (classic only)
](#resource-groups-classic-nw-sles)
+ [

## Create resource constraints
](#resource-constraints-nw-sles)
+ [

## Reset Configuration – Optional
](#reset-config-nw-sles)

## Prepare for Resource Creation
<a name="prepare-resource-nw-sles"></a>

To ensure that the cluster does not perform unexpected actions while you create resources and apply configuration changes, set maintenance mode to true.

Run the following command to put the cluster in maintenance mode:

```
# crm maintenance on
```

To verify the current maintenance state:

```
# crm status
```

**Note**  
There are two types of maintenance mode:  
+ Cluster-wide maintenance (set with `crm maintenance on`)
+ Node-specific maintenance (set with `crm node maintenance <hostname>`)
Always use cluster-wide maintenance mode when making configuration changes. For node-specific operations, such as hardware maintenance, refer to the Operations documentation for proper procedures.  
To disable maintenance mode after configuration is complete:  

```
# crm maintenance off
```

## Cluster Bootstrap
<a name="cluster-bootstrap-nw-sles"></a>

### Configure Cluster Properties
<a name="_configure_cluster_properties"></a>

Configure cluster properties to establish fencing behavior and resource failover settings:

```
# crm configure property stonith-enabled="true"
# crm configure property stonith-timeout="600"
# crm configure property priority-fencing-delay="20"
```
+ The **priority-fencing-delay** property is recommended for protecting the SAP ASCS node during network partitioning events. When a cluster partition occurs, this delay gives preference to nodes hosting higher priority resources, with the ASCS receiving additional priority weighting over the ERS. This helps ensure the ASCS node survives in split-brain scenarios. The recommended 20-second priority-fencing-delay works in conjunction with the `pcmk_delay_max` value (10 seconds) configured in the stonith resource, providing a total potential delay of up to 30 seconds before fencing occurs.
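The worst-case wait is simple arithmetic: the fixed priority-fencing-delay plus the maximum random stonith delay. Using the values from this section:

```shell
# Worst-case delay before a fence can complete, per the values above.
PRIORITY_FENCING_DELAY=20   # cluster property, seconds
PCMK_DELAY_MAX=10           # pcmk_delay_max on the stonith resource, seconds
echo $(( PRIORITY_FENCING_DELAY + PCMK_DELAY_MAX ))   # prints 30
```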

To verify your cluster property settings:

```
# crm configure show property
```

### Configure Resource Defaults
<a name="_configure_resource_defaults"></a>

Configure resource default behaviors:

```
# crm configure rsc_defaults resource-stickiness="1"
# crm configure rsc_defaults migration-threshold="3"
# crm configure rsc_defaults failure-timeout="600s"
```
+ The **resource-stickiness** value of 1 encourages the ASCS resource to stay on its current node, avoiding unnecessary resource movement.
+ The **migration-threshold** of 3 causes a resource to move to a different node after 3 consecutive failures, ensuring timely failover when issues persist.
+ The **failure-timeout** automatically removes a failure count after 10 minutes, preventing individual historical failures from accumulating and affecting long-term resource behavior. If testing failover scenarios in quick succession, it may be necessary to manually query and clear accumulated failure counts between tests. Use `crm resource failcount <resource_name> show <hostname>` and `crm resource refresh`.

Individual resources may override these defaults with their own defined values.

To verify your resource default settings:

```
# crm configure show rsc_defaults
```

### Configure Operation Defaults
<a name="_configure_operation_defaults"></a>

Configure operation timeout defaults:

```
# crm configure op_defaults timeout="600"
```
+ The **op_defaults timeout** ensures all cluster operations have a reasonable default timeout of 600 seconds. Individual resources may override this with their own timeout values.

To verify your operation default settings:

```
# crm configure show op_defaults
```

## Create STONITH (external/ec2) resource
<a name="create-stonith-ec2-nw-sles"></a>

Create the STONITH or Fencing resource using resource agent ** `external/ec2` **:

```
# crm configure primitive <stonith_resource_name> stonith:external/ec2 \
params tag="<cluster_tag>" profile="<cli_cluster_profile>" pcmk_delay_max="<delay_value>" \
op start interval="0" timeout="180" \
op stop interval="0" timeout="180" \
op monitor interval="300" timeout="60"
```

Details:
+  **tag** - EC2 instance tag key name that associates instances with this cluster configuration. This tag key must be unique within the AWS account and have a value which matches the instance hostname. See [Create Amazon EC2 Resource Tags Used by Amazon EC2 STONITH Agent](sap-nw-pacemaker-sles-ec2-configuration.md#create-cluster-tags-nw-sles) for EC2 instance tagging configuration.
+  **profile** - (optional) AWS CLI profile name for API authentication. Verify that the profile exists with `aws configure list-profiles`. If a profile is not explicitly configured, the default profile is used.
+  **pcmk_delay_max** - Maximum random delay before fencing operations. Works in conjunction with the cluster property `priority-fencing-delay` to prevent simultaneous fencing in 2-node clusters. For ENSA1 use 30 seconds; for ENSA2 use 10 seconds (a lower value is sufficient because `priority-fencing-delay` handles primary node protection). Omit it in clusters with real quorum (3 or more nodes) to avoid unnecessary delay.

**Example**  
 *ENSA1 example (`pcmk_delay_max` of 30) using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md) *:  

```
# crm configure primitive res_stonith_ec2 stonith:external/ec2 \
params tag="pacemaker" profile="cluster" \
pcmk_delay_max="30" \
op start interval="0" timeout="180" \
op stop interval="0" timeout="180" \
op monitor interval="300" timeout="60"
```
 *ENSA2 example (`pcmk_delay_max` of 10) using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md) *:  

```
# crm configure primitive res_stonith_ec2 stonith:external/ec2 \
params tag="pacemaker" profile="cluster" \
pcmk_delay_max="10" \
op start interval="0" timeout="180" \
op stop interval="0" timeout="180" \
op monitor interval="300" timeout="60"
```

## Create Filesystem resources (classic only)
<a name="filesystem-resources-nw-sles"></a>

In the classic configuration, cluster resources mount and unmount the file systems so that they follow the location of the SAP services.

Create **ASCS** file system resources:

```
# crm configure primitive rsc_fs_<SID>_ASCS<ascs_sys_nr> ocf:heartbeat:Filesystem \
params \
device="<nfs.fqdn>:/<SID>_ASCS<ascs_sys_nr>" \
directory="/usr/sap/<SID>/ASCS<ascs_sys_nr>" \
fstype="nfs4" \
options="rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2" \
op start timeout="60" interval="0" \
op stop timeout="60" interval="0" \
op monitor interval="20" timeout="40"
```

Create **ERS** file system resources:

```
# crm configure primitive rsc_fs_<SID>_ERS<ers_sys_nr> ocf:heartbeat:Filesystem \
params \
device="<nfs.fqdn>:/<SID>_ERS<ers_sys_nr>" \
directory="/usr/sap/<SID>/ERS<ers_sys_nr>" \
fstype="nfs4" \
options="rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2" \
op start timeout="60" interval="0" \
op stop timeout="60" interval="0" \
op monitor interval="20" timeout="40"
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md) *:

  ```
  # crm configure primitive rsc_fs_SLX_ASCS00 ocf:heartbeat:Filesystem \
  params \
  device="fs-xxxxxxxxxxxxxefs1.efs.us-east-1.amazonaws.com:/SLX_ASCS00" \
  directory="/usr/sap/SLX/ASCS00" \
  fstype="nfs4" \
  options="rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2" \
  op start timeout="60" interval="0" \
  op stop timeout="60" interval="0" \
  op monitor interval="20" timeout="40"
  
  # crm configure primitive rsc_fs_SLX_ERS10 ocf:heartbeat:Filesystem \
  params \
  device="fs-xxxxxxxxxxxxxefs1.efs.us-east-1.amazonaws.com:/SLX_ERS10" \
  directory="/usr/sap/SLX/ERS10" \
  fstype="nfs4" \
  options="rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2" \
  op start timeout="60" interval="0" \
  op stop timeout="60" interval="0" \
  op monitor interval="20" timeout="40"
  ```

 **Notes** 
+ Review the mount options to ensure that they match with your operating system, NFS file system type, and the latest recommendations from SAP.
+ `<nfs.fqdn>` can either be an alias or the default DNS name of the Amazon EFS or FSx for ONTAP resource. For example, `fs-xxxxxx.efs.xxxxxx.amazonaws.com`.

## Create Overlay IP (aws-vpc-move-ip) resources
<a name="overlay-ip-resources-nw-sles"></a>

The IP resource provides the details necessary to update the route table entry for the overlay IP.

Create **ASCS** IP Resource:

```
# crm configure primitive rsc_ip_<SID>_ASCS<ascs_sys_nr> ocf:heartbeat:aws-vpc-move-ip \
params \
ip="<ascs_overlayip>" \
routing_table="<routetable_id>" \
interface="eth0" \
profile="<cli_cluster_profile>" \
op start interval="0" timeout="180" \
op stop interval="0" timeout="180" \
op monitor interval="20" timeout="40"
```

Create **ERS** IP Resource:

```
# crm configure primitive rsc_ip_<SID>_ERS<ers_sys_nr> ocf:heartbeat:aws-vpc-move-ip \
params \
ip="<ers_overlayip>" \
routing_table="<routetable_id>" \
interface="eth0" \
profile="<cli_cluster_profile>" \
op start interval="0" timeout="180" \
op stop interval="0" timeout="180" \
op monitor interval="20" timeout="40"
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md) *:

  ```
  # crm configure primitive rsc_ip_SLX_ASCS00 ocf:heartbeat:aws-vpc-move-ip \
  params \
  ip="172.16.30.5" \
  routing_table="rtb-xxxxxroutetable1" \
  interface="eth0" \
  profile="cluster" \
  op start interval="0" timeout="180" \
  op stop interval="0" timeout="180" \
  op monitor interval="20" timeout="40"
  
  # crm configure primitive rsc_ip_SLX_ERS10 ocf:heartbeat:aws-vpc-move-ip \
  params \
  ip="172.16.30.6" \
  routing_table="rtb-xxxxxroutetable1" \
  interface="eth0" \
  profile="cluster" \
  op start interval="0" timeout="180" \
  op stop interval="0" timeout="180" \
  op monitor interval="20" timeout="40"
  ```

 **Notes** 
+ If more than one route table is required for connectivity or because of subnet associations, the `routing_table` parameter can have multiple values separated by a comma. For example, `routing_table=rtb-xxxxxroutetable1,rtb-xxxxxroutetable2`.
+ Additional parameters – `lookup_type` and `routing_table_role` – are required for a shared VPC. For more information, see [Shared VPC – optional](https://docs.aws.amazon.com/sap/latest/sap-netweaver/sles-netweaver-ha-settings.html#sles-netweaver-ha-shared-vpc).

## Create SAPStartSrv resources (simple-mount only)
<a name="sapstartsrv-resources-nw-sles"></a>

In the simple-mount architecture, the `sapstartsrv` process, which controls the start/stop and monitoring of an SAP instance, is managed by a cluster resource. This resource adds control that removes the requirement for the file system resources to be restricted to a single node.

Modify and run the following commands to create the `SAPStartSrv` resources.

Create **ASCS** SAPStartSrv Resource

Use the following command to create an ASCS SAPStartSrv resource.

```
# crm configure primitive rsc_sapstart_<SID>_ASCS<ascs_sys_nr> ocf:suse:SAPStartSrv \
params \
InstanceName=<SID>_ASCS<ascs_sys_nr>_<ascs_virt_hostname>
```

Create **ERS** SAPStartSrv Resource

Use the following command to create an ERS SAPStartSrv resource.

```
# crm configure primitive rsc_sapstart_<SID>_ERS<ers_sys_nr> ocf:suse:SAPStartSrv \
params  \
InstanceName=<SID>_ERS<ers_sys_nr>_<ers_virt_hostname>
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md) *:

  ```
  #crm configure primitive rsc_sapstart_SLX_ASCS00 ocf:suse:SAPStartSrv \
  params \
  InstanceName=SLX_ASCS00_slxascs
  
  #crm configure primitive rsc_sapstart_SLX_ERS10 ocf:suse:SAPStartSrv \
  params \
  InstanceName=SLX_ERS10_slxers
  ```

## Create SAPInstance resources (simple-mount only)
<a name="sap-resources-simple-nw-sles"></a>

The minor difference in creating SAP instance resources between the classic and simple-mount configurations is the addition of the `MINIMAL_PROBE=true` parameter.

The SAP instance is started and stopped using cluster resources.

**Example**  
Create an **ASCS** SAP instance resource (ENSA1):  

```
# crm configure primitive rsc_sap_<SID>_ASCS<ascs_sys_nr> ocf:heartbeat:SAPInstance \
params \
InstanceName="<SID>_ASCS<ascs_sys_nr>_<ascs_virt_hostname>" \
START_PROFILE="/usr/sap/<SID>/SYS/profile/<SID>_ASCS<ascs_sys_nr>_<ascs_virt_hostname>" \
AUTOMATIC_RECOVER="false" \
MINIMAL_PROBE="true" \
operations \$id="rsc_sap_<SID>_ASCS<ascs_sys_nr>-operations" \
op start interval="0" timeout="600" \
op stop interval="0" timeout="240" \
op monitor interval="11" timeout="60" on-fail="restart" \
meta \
resource-stickiness="5000" \
failure-timeout="60" \
migration-threshold="1" \
priority="10"
```
Create an **ERS** SAP instance resource (ENSA1):  

```
# crm configure primitive rsc_sap_<SID>_ERS<ers_sys_nr> ocf:heartbeat:SAPInstance \
params \
InstanceName="<SID>_ERS<ers_sys_nr>_<ers_virt_hostname>" \
START_PROFILE="/usr/sap/<SID>/SYS/profile/<SID>_ERS<ers_sys_nr>_<ers_virt_hostname>" \
AUTOMATIC_RECOVER="false" \
MINIMAL_PROBE="true" \
IS_ERS="true" \
operations \$id="rsc_sap_<SID>_ERS<ers_sys_nr>-operations" \
op start interval="0" timeout="240" \
op stop interval="0" timeout="240" \
op monitor interval="11" timeout="60" on-fail="restart" \
meta \
priority="1000"
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md) *:

  ```
  # crm configure primitive rsc_sap_SLX_ASCS00 ocf:heartbeat:SAPInstance \
  params \
  InstanceName="SLX_ASCS00_slxascs" \
  START_PROFILE="/usr/sap/SLX/SYS/profile/SLX_ASCS00_slxascs" \
  AUTOMATIC_RECOVER="false" \
  MINIMAL_PROBE="true" \
  operations \$id="rsc_sap_SLX_ASCS00-operations" \
  op start interval="0" timeout="600" \
  op stop interval="0" timeout="240" \
  op monitor interval="11" timeout="60" on-fail="restart" \
  meta \
  resource-stickiness="5000" \
  failure-timeout="60" \
  migration-threshold="1" \
  priority="10"
  
  # crm configure primitive rsc_sap_SLX_ERS10 ocf:heartbeat:SAPInstance \
  params \
  InstanceName="SLX_ERS10_slxers" \
  START_PROFILE="/usr/sap/SLX/SYS/profile/SLX_ERS10_slxers" \
  AUTOMATIC_RECOVER="false" \
  MINIMAL_PROBE="true" \
  IS_ERS="true" \
  operations \$id="rsc_sap_SLX_ERS10-operations" \
  op start interval="0" timeout="240" \
  op stop interval="0" timeout="240" \
  op monitor interval="11" timeout="60" on-fail="restart" \
  meta \
  priority="1000"
  ```
Create an **ASCS** SAP instance resource (ENSA2):  

```
# crm configure primitive rsc_sap_<SID>_ASCS<ascs_sys_nr> ocf:heartbeat:SAPInstance \
params \
InstanceName="<SID>_ASCS<ascs_sys_nr>_<ascs_virt_hostname>" \
START_PROFILE="/usr/sap/<SID>/SYS/profile/<SID>_ASCS<ascs_sys_nr>_<ascs_virt_hostname>" \
AUTOMATIC_RECOVER="false" \
MINIMAL_PROBE="true" \
operations \$id="rsc_sap_<SID>_ASCS<ascs_sys_nr>-operations" \
op start interval="0" timeout="600" \
op stop interval="0" timeout="240" \
op monitor interval="11" timeout="60" on-fail="restart" \
meta \
resource-stickiness="5000" \
priority="1000"
```
Create an **ERS** SAP instance resource (ENSA2):  

```
# crm configure primitive rsc_sap_<SID>_ERS<ers_sys_nr> ocf:heartbeat:SAPInstance \
params \
InstanceName="<SID>_ERS<ers_sys_nr>_<ers_virt_hostname>" \
START_PROFILE="/usr/sap/<SID>/SYS/profile/<SID>_ERS<ers_sys_nr>_<ers_virt_hostname>" \
AUTOMATIC_RECOVER="false" \
MINIMAL_PROBE="true" \
IS_ERS="true" \
operations \$id="rsc_sap_<SID>_ERS<ers_sys_nr>-operations" \
op start interval="0" timeout="600" \
op stop interval="0" timeout="240" \
op monitor interval="11" timeout="60" on-fail="restart"
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md) *:

  ```
  # crm configure primitive rsc_sap_SLX_ASCS00 ocf:heartbeat:SAPInstance \
  params \
  InstanceName="SLX_ASCS00_slxascs" \
  START_PROFILE="/usr/sap/SLX/SYS/profile/SLX_ASCS00_slxascs" \
  AUTOMATIC_RECOVER="false" \
  MINIMAL_PROBE="true" \
  operations \$id="rsc_sap_SLX_ASCS00-operations" \
  op start interval="0" timeout="600" \
  op stop interval="0" timeout="240" \
  op monitor interval="11" timeout="60" on-fail="restart" \
  meta \
  resource-stickiness="5000" \
  priority="1000"
  
  # crm configure primitive rsc_sap_SLX_ERS10 ocf:heartbeat:SAPInstance \
  params \
  InstanceName="SLX_ERS10_slxers" \
  START_PROFILE="/usr/sap/SLX/SYS/profile/SLX_ERS10_slxers" \
  AUTOMATIC_RECOVER="false" \
  MINIMAL_PROBE="true" \
  IS_ERS="true" \
  operations \$id="rsc_sap_SLX_ERS10-operations" \
  op start interval="0" timeout="600" \
  op stop interval="0" timeout="240" \
  op monitor interval="11" timeout="60" on-fail="restart"
  ```

The difference between ENSA1 and ENSA2 is that ENSA2 allows the lock table to be consumed remotely. This means that with ENSA2, the ASCS can restart in its current location (assuming the node is still available). This change impacts the stickiness, migration, and priority parameters. Ensure that you use the right command for your enqueue version.
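One way to tell the versions apart is the enqueue program that the ASCS start profile launches: ENSA2 profiles typically start `enq_server`, while ENSA1 profiles start the classic `en.sap<SID>_<instance>` program. The following sketch runs the check against an illustrative ENSA2-style profile fragment in a temp file; on a real system, inspect the actual ASCS start profile under `/usr/sap/<SID>/SYS/profile/` instead:

```shell
# Sketch only: an illustrative ENSA2-style profile fragment in a temp file.
# On a real host, point PROFILE at the ASCS start profile.
PROFILE=$(mktemp)
cat > "$PROFILE" <<'EOF'
_ENQ = enq_server$(FT_EXE)
Start_Program_01 = local $(_ENQ) pf=$(_PF)
EOF
# enq_server indicates standalone enqueue server 2 (ENSA2).
if grep -q 'enq_server' "$PROFILE"; then
  echo "ENSA2 (standalone enqueue server 2)"
else
  echo "ENSA1 (classic standalone enqueue server)"
fi
```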

## Create SAPInstance resources (classic only)
<a name="sap-resources-classic-nw-sles"></a>

The SAP instance is started and stopped using cluster resources.

**Example**  
Create an **ASCS** SAPInstance resource (ENSA1):  

```
# crm configure primitive rsc_sap_<SID>_ASCS<ascs_sys_nr> ocf:heartbeat:SAPInstance \
params \
InstanceName="<SID>_ASCS<ascs_sys_nr>_<ascs_virt_hostname>" \
START_PROFILE="/usr/sap/<SID>/SYS/profile/<SID>_ASCS<ascs_sys_nr>_<ascs_virt_hostname>" \
AUTOMATIC_RECOVER="false" \
operations \$id="rsc_sap_<SID>_ASCS<ascs_sys_nr>-operations" \
op start interval="0" timeout="600" \
op stop interval="0" timeout="240" \
op monitor interval="11" timeout="60" on-fail="restart" \
meta \
resource-stickiness="5000" \
failure-timeout="60" \
migration-threshold="1" \
priority="10"
```
Create an **ERS** SAPInstance resource (ENSA1):  

```
# crm configure primitive rsc_sap_<SID>_ERS<ers_sys_nr> ocf:heartbeat:SAPInstance \
params \
InstanceName="<SID>_ERS<ers_sys_nr>_<ers_virt_hostname>" \
START_PROFILE="/usr/sap/<SID>/SYS/profile/<SID>_ERS<ers_sys_nr>_<ers_virt_hostname>" \
AUTOMATIC_RECOVER="false" \
IS_ERS="true" \
operations \$id="rsc_sap_<SID>_ERS<ers_sys_nr>-operations" \
op start interval="0" timeout="600" \
op stop interval="0" timeout="240" \
op monitor interval="11" timeout="60" on-fail="restart" \
meta \
priority="1000"
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md) *:

  ```
  # crm configure primitive rsc_sap_SLX_ASCS00 ocf:heartbeat:SAPInstance \
  params \
  InstanceName="SLX_ASCS00_slxascs" \
  START_PROFILE="/usr/sap/SLX/SYS/profile/SLX_ASCS00_slxascs" \
  AUTOMATIC_RECOVER="false" \
  operations \$id="rsc_sap_SLX_ASCS00-operations" \
  op start interval="0" timeout="600" \
  op stop interval="0" timeout="240" \
  op monitor interval="11" timeout="60" on-fail="restart" \
  meta \
  resource-stickiness="5000" \
  failure-timeout="60" \
  migration-threshold="1" \
  priority="10"
  
  # crm configure primitive rsc_sap_SLX_ERS10 ocf:heartbeat:SAPInstance \
  params \
  InstanceName="SLX_ERS10_slxers" \
  START_PROFILE="/usr/sap/SLX/SYS/profile/SLX_ERS10_slxers" \
  AUTOMATIC_RECOVER="false" \
  IS_ERS="true" \
  operations \$id="rsc_sap_SLX_ERS10-operations" \
  op start interval="0" timeout="600" \
  op stop interval="0" timeout="240" \
  op monitor interval="11" timeout="60" on-fail="restart" \
  meta \
  priority="1000"
  ```
Create an **ASCS** SAPInstance resource (ENSA2):  

```
# crm configure primitive rsc_sap_<SID>_ASCS<ascs_sys_nr> ocf:heartbeat:SAPInstance \
params \
InstanceName="<SID>_ASCS<ascs_sys_nr>_<ascs_virt_hostname>" \
START_PROFILE="/usr/sap/<SID>/SYS/profile/<SID>_ASCS<ascs_sys_nr>_<ascs_virt_hostname>" \
AUTOMATIC_RECOVER="false" \
operations \$id="rsc_sap_<SID>_ASCS<ascs_sys_nr>-operations" \
op start interval="0" timeout="600" \
op stop interval="0" timeout="240" \
op monitor interval="11" timeout="60" on-fail="restart" \
meta \
resource-stickiness="5000" \
priority="1000"
```
Create an **ERS** SAPInstance resource (ENSA2):  

```
# crm configure primitive rsc_sap_<SID>_ERS<ers_sys_nr> ocf:heartbeat:SAPInstance \
params \
InstanceName="<SID>_ERS<ers_sys_nr>_<ers_virt_hostname>" \
START_PROFILE="/usr/sap/<SID>/SYS/profile/<SID>_ERS<ers_sys_nr>_<ers_virt_hostname>" \
AUTOMATIC_RECOVER="false" \
IS_ERS="true" \
operations \$id="rsc_sap_<SID>_ERS<ers_sys_nr>-operations" \
op start interval="0" timeout="600" \
op stop interval="0" timeout="240" \
op monitor interval="11" timeout="60" on-fail="restart"
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md) *:

  ```
  # crm configure primitive rsc_sap_SLX_ASCS00 ocf:heartbeat:SAPInstance \
  params \
  InstanceName="SLX_ASCS00_slxascs" \
  START_PROFILE="/usr/sap/SLX/SYS/profile/SLX_ASCS00_slxascs" \
  AUTOMATIC_RECOVER="false" \
  operations \$id="rsc_sap_SLX_ASCS00-operations" \
  op start interval="0" timeout="600" \
  op stop interval="0" timeout="240" \
  op monitor interval="11" timeout="60" on-fail="restart" \
  meta \
  resource-stickiness="5000" \
  priority="1000"
  
  # crm configure primitive rsc_sap_SLX_ERS10 ocf:heartbeat:SAPInstance \
  params \
  InstanceName="SLX_ERS10_slxers" \
  START_PROFILE="/usr/sap/SLX/SYS/profile/SLX_ERS10_slxers" \
  AUTOMATIC_RECOVER="false" \
  IS_ERS="true" \
  operations \$id="rsc_sap_SLX_ERS10-operations" \
  op start interval="0" timeout="600" \
  op stop interval="0" timeout="240" \
  op monitor interval="11" timeout="60" on-fail="restart"
  ```

With ENSA2, the lock table can be retrieved remotely, so if the node is still available, ASCS can restart in its current location. This difference is reflected in the stickiness, migration, and priority parameters. Make sure to use the command that matches your enqueue server version.
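If you are unsure which enqueue server generation your system runs, the `sapcontrol` process list is one way to check: ENSA1 systems typically report an `enserver` process, while ENSA2 systems report `enq_server`. The following helper is a hypothetical sketch that classifies captured `GetProcessList` output under that assumption; the `ensa_version` function name and the sample line are illustrative, not part of the SAP tooling.

```
# Hypothetical helper: classify the enqueue server generation from
# captured `sapcontrol -nr <ascs_sys_nr> -function GetProcessList` output.
# Assumes ENSA1 lists "enserver" and ENSA2 lists "enq_server".
ensa_version() {
  if printf '%s' "$1" | grep -q 'enq_server'; then
    echo "ENSA2"
  elif printf '%s' "$1" | grep -q 'enserver'; then
    echo "ENSA1"
  else
    echo "unknown"
  fi
}

# Illustrative sample line, not real system output:
sample='enq_server, Enqueue Server 2, GREEN, Running'
ensa_version "$sample"   # prints ENSA2
```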

## Create resource groups for aws-vpc-move-ip / SAPStartSrv / SAPInstance (simple-mount only)
<a name="resource-groups-simple-nw-sles"></a>

A cluster resource group is a set of resources that must be located together, started sequentially, and stopped in reverse order.

In the simple-mount architecture, the overlay IP must be available first, and the SAP start services must be running before the SAP instance can start. Define the group in exactly the order shown here.

Create an **ASCS** cluster resource group:

```
# crm configure group grp_<SID>_ASCS<ascs_sys_nr> \
rsc_ip_<SID>_ASCS<ascs_sys_nr> \
rsc_sapstart_<SID>_ASCS<ascs_sys_nr> \
rsc_sap_<SID>_ASCS<ascs_sys_nr> \
meta resource-stickiness="3000"
```

Create an **ERS** cluster resource group:

```
# crm configure group grp_<SID>_ERS<ers_sys_nr> \
rsc_ip_<SID>_ERS<ers_sys_nr> \
rsc_sapstart_<SID>_ERS<ers_sys_nr> \
rsc_sap_<SID>_ERS<ers_sys_nr>
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md)*:

  ```
  # crm configure group grp_SLX_ASCS00 \
  rsc_ip_SLX_ASCS00 \
  rsc_sapstart_SLX_ASCS00 \
  rsc_sap_SLX_ASCS00 \
  meta resource-stickiness="3000"
  
  # crm configure group grp_SLX_ERS10 \
  rsc_ip_SLX_ERS10 \
  rsc_sapstart_SLX_ERS10 \
  rsc_sap_SLX_ERS10
  ```

## Create resource groups for Filesystem / aws-vpc-move-ip / SAPInstance (classic only)
<a name="resource-groups-classic-nw-sles"></a>

A cluster resource group is a set of resources that must be located together, started sequentially, and stopped in reverse order.

In the classic architecture, the file system is mounted first, and the overlay IP must be available before the SAP instance can start.

Create an **ASCS** cluster resource group:

```
# crm configure group grp_<SID>_ASCS<ascs_sys_nr> \
rsc_fs_<SID>_ASCS<ascs_sys_nr> \
rsc_ip_<SID>_ASCS<ascs_sys_nr> \
rsc_sap_<SID>_ASCS<ascs_sys_nr> \
meta resource-stickiness="3000"
```

Create an **ERS** cluster resource group:

```
# crm configure group grp_<SID>_ERS<ers_sys_nr> \
rsc_fs_<SID>_ERS<ers_sys_nr> \
rsc_ip_<SID>_ERS<ers_sys_nr> \
rsc_sap_<SID>_ERS<ers_sys_nr>
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md)*:

  ```
  # crm configure group grp_SLX_ASCS00 \
  rsc_fs_SLX_ASCS00 \
  rsc_ip_SLX_ASCS00 \
  rsc_sap_SLX_ASCS00 \
  meta resource-stickiness="3000"
  
  # crm configure group grp_SLX_ERS10 \
  rsc_fs_SLX_ERS10 \
  rsc_ip_SLX_ERS10 \
  rsc_sap_SLX_ERS10
  ```

## Create resource constraints
<a name="resource-constraints-nw-sles"></a>

Resource constraints determine where and under what conditions resources run. Constraints for SAP NetWeaver ensure that ASCS and ERS start on separate nodes and that locks are preserved in case of failure. The following are the different types of constraints.

### Colocation constraint
<a name="_colocation_constraint"></a>

The negative score ensures that ASCS and ERS run on separate nodes, wherever possible.

```
# crm configure colocation col_sap_<SID>_ascs_ers_separate_nodes \
-5000: grp_<SID>_ERS<ers_sys_nr> grp_<SID>_ASCS<ascs_sys_nr>
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md)*:

  ```
  # crm configure colocation col_sap_SLX_ascs_ers_separate_nodes \
  -5000: grp_SLX_ERS10 grp_SLX_ASCS00
  ```
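
To spot-check that the constraint has the intended effect, you can look at which node hosts each SAPInstance resource, for example in `crm_mon -1` output. The snippet below is a sketch that parses captured status lines for the node name; the `node_of` helper and the sample status text are assumptions for illustration, and the exact `crm_mon` line format can vary by version.

```
# Hypothetical parser: print the last field (the node name) of the
# status line for a given resource, from captured `crm_mon -1` output.
node_of() {
  awk -v r="$1" '$1 == r { print $NF }'
}

# Illustrative status lines, not real crm_mon output:
status='rsc_sap_SLX_ASCS00 (ocf::heartbeat:SAPInstance): Started slxhost01
rsc_sap_SLX_ERS10 (ocf::heartbeat:SAPInstance): Started slxhost02'

ascs=$(printf '%s\n' "$status" | node_of rsc_sap_SLX_ASCS00)
ers=$(printf '%s\n' "$status" | node_of rsc_sap_SLX_ERS10)
if [ "$ascs" != "$ers" ]; then
  echo "ASCS (${ascs}) and ERS (${ers}) are on separate nodes"
fi
```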

### Order constraint
<a name="_order_constraint"></a>

This constraint ensures that the ASCS instance starts before the ERS instance stops. This is necessary so that ASCS can consume the lock table.

```
# crm configure order ord_sap_<SID>_ascs_start_before_ers_stop \
Optional: rsc_sap_<SID>_ASCS<ascs_sys_nr>:start rsc_sap_<SID>_ERS<ers_sys_nr>:stop \
symmetrical="false"
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md)*:

  ```
  # crm configure order ord_sap_SLX_ascs_start_before_ers_stop \
  Optional: rsc_sap_SLX_ASCS00:start rsc_sap_SLX_ERS10:stop \
  symmetrical="false"
  ```

### Location constraint (ENSA1 only)
<a name="_location_constraint_ensa1_only"></a>

This constraint is only required for ENSA1. With ENSA2, the lock table can be retrieved remotely, so ASCS doesn't need to fail over to the node where ERS is running.

```
# crm configure location loc_sap_<SID>_ascs_follows_ers \
rsc_sap_<SID>_ASCS<ascs_sys_nr> rule 2000: runs_ers_<SID> eq 1
```
+  *Example using values from [Parameter Reference](sap-nw-pacemaker-sles-parameters.md)*:

  ```
  # crm configure location loc_sap_SLX_ascs_follows_ers \
  rsc_sap_SLX_ASCS00 rule 2000: runs_ers_SLX eq 1
  ```

## Reset Configuration – Optional
<a name="reset-config-nw-sles"></a>

**Important**  
The following instructions help you reset the complete configuration. Run these commands only if you want to start the setup from the beginning. For minor changes, use the `crm configure edit` command instead.

Run the following command to back up the current configuration for reference:

```
# crm configure show > /tmp/crmconfig_backup.txt
```

Run the following command to clear the current configuration:

```
# crm configure erase
```

Running the preceding erase command removes all of the cluster resources from the Cluster Information Base (CIB) and disconnects communication between the cluster and corosync. Before starting the resource configuration again, run `crm cluster restart` so that the cluster reestablishes communication with corosync and retrieves the configuration. Restarting the cluster also removes maintenance mode; reapply it before continuing with additional configuration and resource setup.
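
Taken together, the reset sequence described above looks like the following sketch. The maintenance-mode property command shown last is one common way to reapply maintenance mode; adjust it to match how maintenance mode was originally enabled in your setup.

```
# Back up, erase, restart, and reapply maintenance mode before reconfiguring:
# crm configure show > /tmp/crmconfig_backup.txt
# crm configure erase
# crm cluster restart
# crm configure property maintenance-mode="true"
```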