

# SAP HANA and Cluster Setup
<a name="sap-hana-pacemaker-rhel-deployment-cluster"></a>

**Topics**
+ [SAP HANA Setup and HSR](sap-hana-pacemaker-rhel-hana-setup-hsr.md)
+ [SAP HANA Service Control](sap-hana-pacemaker-rhel-hana-control.md)
+ [Cluster Node Setup](sap-hana-pacemaker-rhel-cluster-node-setup.md)
+ [Cluster Configuration](sap-hana-pacemaker-rhel-cluster-config.md)
+ [Client Connectivity](sap-hana-pacemaker-rhel-client-connectivity.md)

# SAP HANA Setup and HSR
<a name="sap-hana-pacemaker-rhel-hana-setup-hsr"></a>

Prepare SAP HANA for System Replication (HSR) by configuring parameters and creating required backups.

**Topics**
+ [Review AWS and SAP Installation Guides](#review_guides)
+ [Check global.ini parameters](#global_ini)
+ [Create a SAP HANA Backup on the Primary System](#pre_setup_backup)
+ [Configure System Replication on Primary and Secondary Systems](#register_hsr)
+ [Check SAP Host Agent Version](#sap_host_agent)

**Important**  
This guide assumes that SAP HANA Platform has been installed either as a scale-up configuration with two EC2 instances in different availability zones, or as a scale-out configuration with multiple EC2 instances in two availability zones, following the guidance from AWS and SAP.

## Review AWS and SAP Installation Guides
<a name="review_guides"></a>
+  AWS Documentation - [SAP HANA Environment Setup on AWS](https://docs.aws.amazon.com/sap/latest/sap-hana/std-sap-hana-environment-setup.html) 
+ SAP Documentation - [SAP HANA Server Installation and Update Guide](https://help.sap.com/docs/SAP_HANA_PLATFORM/2c1988d620e04368aa4103bf26f17727/7eb0167eb35e4e2885415205b8383584.html) 

SAP provides documentation on how to configure SAP HANA System Replication using the SAP HANA Cockpit, SAP HANA Studio, or `hdbnsutil` on the command line. Review the documentation for your SAP HANA version to confirm that the guidance has not changed, or if you prefer to use a method other than the command line.
+ SAP Documentation: [Configuring SAP HANA System Replication](https://help.sap.com/docs/SAP_HANA_PLATFORM/4e9b18c116aa42fc84c7dbfd02111aba/442bf027937746248f69701aa9b94112.html) 

## Check global.ini parameters
<a name="global_ini"></a>

Run the following commands as <sid>adm. These commands will prompt for the SYSTEM user password for the SYSTEMDB database.

**Check log_mode is set to normal**  
Ensure that the configuration parameter `log_mode` is set to `normal` in the persistence section of the global.ini file:

```
hdbsql -jx -i <hana_sys_nr> -u system -d SYSTEMDB "SELECT VALUE FROM M_INIFILE_CONTENTS WHERE FILE_NAME = 'global.ini' AND SECTION = 'persistence' AND KEY = 'log_mode';"
```

For example:

```
hdbadm> hdbsql -jx -i 00 -u system -d SYSTEMDB "SELECT VALUE FROM M_INIFILE_CONTENTS WHERE FILE_NAME = 'global.ini' AND SECTION = 'persistence' AND KEY = 'log_mode';"
VALUE
"normal"
```
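If the query returns a value other than `normal`, the parameter can be changed with an SQL statement along the following lines (a sketch; note that after changing `log_mode` a new full data backup is typically required before log backups can resume):

```
hdbsql -i <hana_sys_nr> -u system -d SYSTEMDB "ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('persistence', 'log_mode') = 'normal' WITH RECONFIGURE;"
```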

**Review global.ini file replication**  
SAP HANA System Replication requires consistent configuration between primary and secondary systems to ensure proper operation, especially during failover scenarios. The `inifile_checker/replicate` parameter in global.ini provides an automated solution to this requirement. When enabled on the primary system, any configuration changes made to ini files on the primary are automatically synchronized to the secondary site. This removes the need for manual configuration replication and helps prevent configuration mismatches that could impact system availability. The parameter only needs to be configured on the primary system, as the secondary system will receive these configuration changes through the normal System Replication process.

Add the following to `global.ini`:

```
[inifile_checker]
replicate = true
```

See SAP Note [2978895 - Changing parameters on Primary and Secondary site of SAP HANA system](https://me.sap.com/notes/2978895) 

## Create a SAP HANA Backup on the Primary System
<a name="pre_setup_backup"></a>

 **Get a list of all active databases:** 

```
hdbsql -jx -i <hana_sys_nr> -u system -d SYSTEMDB "SELECT DATABASE_NAME,ACTIVE_STATUS from M_DATABASES"
```

For example:

```
hdbadm> hdbsql -jx -i 00 -u system -d SYSTEMDB "SELECT DATABASE_NAME,ACTIVE_STATUS from M_DATABASES"
Password:
DATABASE_NAME,ACTIVE_STATUS
"SYSTEMDB","YES"
"HDB","YES"
```

**Create a backup of the SYSTEMDB and each tenant database:**  
The following commands are examples of file-based backups. Backups can be performed using your preferred tool and location. If using a file system (for example, /backup), ensure there is sufficient space for a full backup.

------
#### [ Backint ]

For the SystemDB

```
hdbsql -i <hana_sys_nr> -u SYSTEM -d SYSTEMDB "BACKUP DATA USING BACKINT ('initial_hsr_db_SYSTEMDB') COMMENT 'Initial backup for HSR'";
```

For each Tenant DB

```
hdbsql -i <hana_sys_nr> -u SYSTEM -d <TENANT_DB> "BACKUP DATA USING BACKINT ('initial_hsr_db_<TENANT_DB>') COMMENT 'Initial backup for HSR'";
```
+ Run as <sid>adm
+ Ensure that backint has been configured correctly
+ You will be prompted for a password; alternatively, you can use `-p <password>` 

------
#### [ File ]

For the SystemDB

```
hdbsql -i <hana_sys_nr> -u system -d SYSTEMDB "BACKUP DATA USING FILE ('/<backup location>/initial_hsr_db_SYSTEMDB') COMMENT 'Initial backup for HSR'";
```

For each Tenant DB

```
hdbsql -i <hana_sys_nr> -u system -d <TENANT_DB> "BACKUP DATA USING FILE ('/<backup location>/initial_hsr_db_<TENANT_DB>') COMMENT 'Initial backup for HSR'";
```
+ Run as <sid>adm
+ Ensure that a backup location exists with sufficient space and the correct file permissions
+ You will be prompted for a password; alternatively, you can use `-p <password>` 

------
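When there are several tenants, the per-tenant backup commands can be generated from the database list gathered earlier. A minimal sketch, assuming file-based backups to `/backup` and instance number 00 (both placeholders), with the inlined CSV sample standing in for the actual hdbsql output:

```shell
# Generate one BACKUP DATA command per tenant from the M_DATABASES CSV
# output (sample inlined here; SYSTEMDB is backed up separately).
db_list='"SYSTEMDB","YES"
"HDB","YES"'

echo "$db_list" | while IFS=, read -r name active; do
  db=${name//\"/}                      # strip the surrounding quotes
  [ "$db" = "SYSTEMDB" ] && continue   # SYSTEMDB backup is run separately
  echo "hdbsql -i 00 -u SYSTEM -d $db \"BACKUP DATA USING FILE ('/backup/initial_hsr_db_$db') COMMENT 'Initial backup for HSR'\""
done
```

Review the generated commands before running them as <sid>adm.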

### Stop the Secondary System and Copy System PKI Keys
<a name="copy_keys"></a>

**Stop the secondary system**  
Stop the SAP HANA application on the secondary as <sid>adm:

```
sapcontrol -nr <hana_sys_nr> -function StopSystem <SID>
```

**Copy the system PKI keys**  
Copy the following system PKI SSFS key and data files from the primary system to the same location on the secondary system using scp, a shared file system, or an S3 bucket:

```
/usr/sap/<SID>/SYS/global/security/rsecssfs/data/SSFS_<SID>.DAT
/usr/sap/<SID>/SYS/global/security/rsecssfs/key/SSFS_<SID>.KEY
```

For example using scp:

```
hdbadm>scp -p /usr/sap/HDB/SYS/global/security/rsecssfs/data/SSFS_HDB.DAT hdbadm@hanahost02:/usr/sap/HDB/SYS/global/security/rsecssfs/data/SSFS_HDB.DAT
hdbadm>scp -p /usr/sap/HDB/SYS/global/security/rsecssfs/key/SSFS_HDB.KEY hdbadm@hanahost02:/usr/sap/HDB/SYS/global/security/rsecssfs/key/SSFS_HDB.KEY
```

## Configure System Replication on Primary and Secondary Systems
<a name="register_hsr"></a>

**Enable System Replication on the Primary System**  
Ensure the primary SAP HANA system is **started**, then as <sid>adm, enable system replication using a unique site name:

```
hdbnsutil -sr_enable --name=<site_1>
```

For example:

```
hdbadm> hdbnsutil -sr_enable --name=siteA
```

**Register System Replication on the Secondary System**  
Ensure the secondary SAP HANA system is **stopped**, then as <sid>adm, enable system replication using a unique site name, the connection details of the primary system and preferred replication options.

```
hdbnsutil -sr_register \
 --name=<site_2> \
 --remoteHost=<hostname_1> \
 --remoteInstance=<hana_sys_nr> \
 --replicationMode=[sync|syncmem] \
 --operationMode=[logreplay|logreplay_readenabled]
```

For example:

```
hdbadm> hdbnsutil -sr_register --name=siteB --remoteHost=hanahost01 --remoteInstance=00 --replicationMode=syncmem --operationMode=logreplay
```

Alternatively, if your setup requires active/active read-enabled access to the secondary:

```
hdbadm> hdbnsutil -sr_register --name=siteB --remoteHost=hanahost01 --remoteInstance=00 --replicationMode=syncmem --operationMode=logreplay_readenabled
```
+  `hostname_1` is the hostname used to install SAP HANA, which may be a virtual name.
+ The replication mode can be either `sync` or `syncmem`.
+ For replication to support a clustered system and a hot standby, the operation mode must be `logreplay` or `logreplay_readenabled`.
+ For more information review the SAP Documentation
  + SAP Documentation: [Replication Modes for SAP HANA System Replication](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/c039a1a5b8824ecfa754b55e0caffc01.html) 
  + SAP Documentation: [Operation Modes for SAP HANA System Replication](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/627bd11e86c84ec2b9fcdf585d24011c.html) 
  + SAP Documentation: [SAP HANA System Replication - Active/Active (Read Enabled)](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/fe5fc53706a34048bf4a3a93a5d7c866.html) 
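Once the secondary has been registered and started, replication status can be verified as <sid>adm. Two commonly used checks (a sketch; output format varies by SAP HANA revision):

```
hdbnsutil -sr_state
HDBSettings.sh systemReplicationStatus.py
```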

## Check SAP Host Agent Version
<a name="sap_host_agent"></a>

The SAP host agent is used for SAP instance control and monitoring. This agent is used by SAP cluster resource agents and hooks. It is recommended that you have the latest version installed on all instances. For more details, see [SAP Note 2219592 – Upgrade Strategy of SAP Host Agent](https://me.sap.com/notes/2219592).

Use the following command to check the version of the host agent, repeat on all SAP HANA nodes:

```
# /usr/sap/hostctrl/exe/saphostexec -version
```

# SAP HANA Service Control
<a name="sap-hana-pacemaker-rhel-hana-control"></a>

Modify how SAP HANA services are managed to enable cluster takeover and operation.

**Topics**
+ [Add sidadm to haclient Group](#_add_sidadm_to_haclient_group)
+ [Modify SAP Profile for HANA](#_modify_sap_profile_for_hana)
+ [Configure SAPHanaSR Cluster Hook for Optimized Cluster Response](#hook_saphanasr)
+ [(Optional) Configure Fast Restart Option](#_optional_configure_fast_start_option)
+ [Review systemd Integration](#_review_systemd_integration)

## Add sidadm to haclient Group
<a name="_add_sidadm_to_haclient_group"></a>

The pacemaker software creates an haclient operating system group. To ensure proper cluster access permissions, add the <sid>adm user to this group on all cluster nodes. Run the following command as root (shown here for SID HDB):

```
# usermod -a -G haclient hdbadm
```

## Modify SAP Profile for HANA
<a name="_modify_sap_profile_for_hana"></a>

To prevent automatic SAP HANA startup by the SAP start framework when an instance restarts, modify the SAP HANA instance profiles on all nodes. These profiles are located at `/usr/sap/<SID>/SYS/profile/`.

As <sid>adm, edit the SAP HANA profile `<SID>_HDB<hana_sys_nr>_<hostname>` and modify or add the Autostart parameter, ensuring it is set to 0:

```
Autostart = 0
```
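The check-and-set logic can be scripted. A minimal sketch, demonstrated on a temporary sample file rather than the real profile (the profile path and SID are placeholders):

```shell
# Ensure "Autostart = 0" in a profile file (sample file used here; the
# real profile is /usr/sap/<SID>/SYS/profile/<SID>_HDB<hana_sys_nr>_<hostname>).
profile=$(mktemp)
printf 'SAPSYSTEMNAME = HDB\nAutostart = 1\n' > "$profile"

if grep -q '^Autostart' "$profile"; then
  sed -i 's/^Autostart.*/Autostart = 0/' "$profile"   # change existing entry
else
  echo 'Autostart = 0' >> "$profile"                  # append when missing
fi

grep '^Autostart' "$profile"   # prints: Autostart = 0
rm -f "$profile"
```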

## Configure SAPHanaSR Cluster Hook for Optimized Cluster Response
<a name="hook_saphanasr"></a>

The SAPHanaSR hook provides immediate notification to the cluster if system replication fails, complementing the standard cluster polling mechanism. This optimization can significantly improve failover response time.

Follow these steps to configure the SAPHanaSR hook:

1.  **Verify Cluster Package** 

   The hook configuration varies based on the resource agents in use (see [Deployment Guidance](sap-hana-pacemaker-rhel-references.md#deployments-rhel) for details).

------
#### [ SAPHanaSR ]

   Check the expected package is installed

   ```
   # rpm -qa resource-agents-sap-hana
   ```

------
#### [ SAPHanaSR-angi ]

   Check the expected package is installed

   ```
   # rpm -qa sap-hana-ha
   ```

------

1.  **Confirm Hook Location** 

   By default the hook scripts are installed in `/usr/share/sap-hana-ha/` or `/usr/share/SAPHanaSR/srHook`. We suggest using the default location, but you can optionally copy them to a custom directory; for example, `/hana/share/myHooks`. The hook must be available on all SAP HANA cluster nodes.

1.  **Configure global.ini** 

   Update the `global.ini` file located at `/hana/shared/<SID>/global/hdb/custom/config/` on each SAP HANA cluster node. Make a backup copy before proceeding.

------
#### [ SAPHanaSR ]

   ```
   [ha_dr_provider_SAPHanaSR]
   provider = SAPHanaSR
   path = /usr/share/SAPHanaSR/srHook
   execution_order = 1
   
   [trace]
   ha_dr_saphanasr = info
   ```

**Note**  
Update the path if you have modified the package location.

------
#### [ SAPHanaSR-angi ]

   ```
   [ha_dr_provider_sushanasr]
   provider = HanaSR
   path = /usr/share/sap-hana-ha/
   execution_order = 1
   
   [trace]
   ha_dr_sushanasr = info
   ```

**Note**  
Update the path if you have modified the package location.

------

1.  **Configure Sudo Privileges** 

   The SAPHanaSR Python hook requires sudo privileges for the <sid>adm user to access cluster attributes:

   1. Create a new sudoers file as root user in `/etc/sudoers.d/`, for example `60-SAPHanaSR-hook` 

   1. Use visudo to safely edit the new file `visudo /etc/sudoers.d/60-SAPHanaSR-hook` 

   1. Add the following configuration, replacing <sid> with lowercase system ID and <SID> with uppercase system ID:

      ```
      Cmnd_Alias SITE_SOK = /usr/sbin/crm_attribute -n hana_<sid>_site_srHook_[a-zA-Z0-9_]* -v SOK -t crm_config -s SAPHanaSR
      Cmnd_Alias SITE_SFAIL = /usr/sbin/crm_attribute -n hana_<sid>_site_srHook_[a-zA-Z0-9_]* -v SFAIL -t crm_config -s SAPHanaSR
      Cmnd_Alias HOOK_HELPER  = /usr/sbin/SAPHanaSR-hookHelper --sid=<SID> --case=checkTakeover
      <sid>adm ALL=(ALL) NOPASSWD: SITE_SOK, SITE_SFAIL, HOOK_HELPER
      ```

      For example:

      ```
      Cmnd_Alias SITE_SOK = /usr/sbin/crm_attribute -n hana_hdb_site_srHook_[a-zA-Z0-9_]* -v SOK -t crm_config -s SAPHanaSR
      Cmnd_Alias SITE_SFAIL = /usr/sbin/crm_attribute -n hana_hdb_site_srHook_[a-zA-Z0-9_]* -v SFAIL -t crm_config -s SAPHanaSR
      Cmnd_Alias HOOK_HELPER  = /usr/sbin/SAPHanaSR-hookHelper --sid=HDB --case=checkTakeover
      hdbadm ALL=(ALL) NOPASSWD: SITE_SOK, SITE_SFAIL, HOOK_HELPER
      ```
**Note**  
The syntax uses a character-class glob expression, which adapts to different HSR site names while avoiding a blanket wildcard. This ensures flexibility and security. A modification is still required if the SID changes: replace `<sid>` with the lowercase SID and `<SID>` with the uppercase SID that match your installation.
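To illustrate how the character-class glob behaves, the following stands alone in a shell (sudoers uses fnmatch-style patterns, which follow the same matching rules; the attribute names below are made-up samples for SID HDB):

```shell
# The pattern accepts any site-name suffix of letters, digits, and
# underscores, but rejects attributes for a different SID.
pattern='hana_hdb_site_srHook_[a-zA-Z0-9_]*'
for attr in hana_hdb_site_srHook_siteA \
            hana_hdb_site_srHook_DR_site_2 \
            hana_xyz_site_srHook_siteA; do
  case "$attr" in
    $pattern) echo "$attr: matched" ;;
    *)        echo "$attr: not matched" ;;
  esac
done
```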

1.  **Reload Configuration** 

   As <sid>adm reload the changes to `global.ini` using either a HANA restart or the command:

   ```
   hdbadm> hdbnsutil -reconfig
   ```

1.  **Verify Hook Configuration** 

   As <sid>adm, verify the hook is loaded:

   ```
   hdbadm> cdtrace
   hdbadm> grep "loading HA/DR Provider" nameserver*
   ```

1.  **Replicate Configuration to Secondary** 

   1. Confirm that global.ini changes have been replicated to the secondary system

   1. Create corresponding sudoers.d file on the secondary system

## (Optional) Configure Fast Restart Option
<a name="_optional_configure_fast_start_option"></a>

Although out of scope for this document, the SAP HANA Fast Restart option uses tmpfs file systems to preserve and reuse MAIN data fragments, speeding up SAP HANA restarts. This is effective in cases where the operating system is not restarted, including local restarts of the index server.

The Fast Restart option may be an alternative to the susChkSrv hook.

For more information, see SAP Documentation: [SAP HANA Fast Restart Option](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/ce158d28135147f099b761f8b1ee43fc.html) 

## Review systemd Integration
<a name="_review_systemd_integration"></a>

Review the SAP HANA version and the systemd version to determine whether the prerequisites for systemd integration are met:

```
sidadm> systemctl --version
```

**OS versions**
+ Red Hat Enterprise Linux 8 (systemd version 239)

**SAP HANA Revisions**
+ SAP HANA SPS07 revision 70

When using an SAP HANA version with systemd integration (SPS07 and later), you must perform the following steps to prevent the nodes from being fenced when Amazon EC2 instances are intentionally stopped. See SAP Note [3189534 - Linux: systemd integration for sapstartsrv and SAP HANA](https://me.sap.com/notes/3189534).

1. Verify whether SAP HANA is integrated with systemd. If it is, a systemd service with a name such as SAP<SID>_<hana_sys_nr>.service is present. For example, for SID HDB and instance number 00, SAPHDB_00.service is the service name.

   Use the following command as root to find SAP systemd services:

   ```
   # systemctl list-unit-files | grep -i sap
   ```

1. Create a pacemaker service drop-in file:

   ```
   # mkdir -p /etc/systemd/system/pacemaker.service.d/
   ```

1. Create the file `/etc/systemd/system/pacemaker.service.d/50-saphana.conf` with the following content:

   ```
   [Unit]
   Description=pacemaker needs SAP instance service
   Documentation=man:SAPHanaSR_basic_cluster(7)
   Wants=SAP<SID>_<hana_sys_nr>.service
   After=SAP<SID>_<hana_sys_nr>.service
   ```

1. Enable the drop-in file by reloading systemd:

   ```
   # systemctl daemon-reload
   ```

1. Verify that the change is active:

   ```
   # systemctl show pacemaker.service | grep SAP<SID>_<hana_sys_nr>
   ```

   For example, for SID HDB and instance number 00, the following output is expected:

   ```
   # systemctl show pacemaker.service | grep SAPHDB_00
   Wants=SAPHDB_00.service resource-agents-deps.target dbus.service
   After=system.slice network.target corosync.service resource-agents-deps.target basic.target rsyslog.service SAPHDB_00.service systemd-journald.socket sysinit.target time-sync.target dbus.service sbd.service
   ```
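Substituting the SID and instance number into the drop-in can be sketched as follows (the `SID` and `NR` values are placeholders, and the rendered unit text is written to stdout here rather than to `/etc/systemd`):

```shell
# Render the pacemaker drop-in for a given SID and instance number.
SID=HDB
NR=00
cat <<EOF
[Unit]
Description=pacemaker needs SAP instance service
Documentation=man:SAPHanaSR_basic_cluster(7)
Wants=SAP${SID}_${NR}.service
After=SAP${SID}_${NR}.service
EOF
```

Redirect the output to `/etc/systemd/system/pacemaker.service.d/50-saphana.conf` once the values are confirmed.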

# Cluster Node Setup
<a name="sap-hana-pacemaker-rhel-cluster-node-setup"></a>

Establish cluster communication between nodes using Corosync and configure required authentication.

**Topics**
+ [Deploy a Majority Maker Node (Scale-Out Clusters Only)](#_deploy_a_majority_maker_node_scale_out_clusters_only)
+ [Setup Passwordless Authentication](#_setup_passwordless_authentication)
+ [Start and Enable the pcsd service](#_start_and_enable_the_pcsd_service)
+ [Authorize the Cluster](#_authorize_the_cluster)
+ [Generate Corosync Configuration](#_generate_corosync_configuration)
+ [Verify Configuration](#_verify_configuration)

## Deploy a Majority Maker Node (Scale-Out Clusters Only)
<a name="_deploy_a_majority_maker_node_scale_out_clusters_only"></a>

**Note**  
Only required for clusters with more than two nodes.

When deploying an SAP HANA Scale-Out cluster in AWS, you must include a majority maker node in a third Availability Zone (AZ). The majority maker (tie-breaker) node ensures the cluster remains operational if one AZ fails by preserving the quorum. For the Scale-Out cluster to function, at least all nodes in one AZ plus the majority maker node must be running. If this minimum requirement is not met, the cluster loses its quorum state and any remaining SAP HANA nodes are fenced.

The majority maker requires a minimum EC2 instance configuration of 2 vCPUs, 2 GB RAM, and 50 GB disk space; this instance is exclusively used for quorum management and does not host an SAP HANA database or any other cluster resources.

## Change the hacluster Password
<a name="_change_the_hacluster_password"></a>

On all cluster nodes, change the password of the operating system user hacluster:

```
# passwd hacluster
```

## Setup Passwordless Authentication
<a name="_setup_passwordless_authentication"></a>

Red Hat cluster tools provide comprehensive reporting and troubleshooting capabilities for cluster activity. Many of these tools require passwordless SSH access between nodes to collect cluster-wide information effectively. Red Hat recommends configuring passwordless SSH for the root user to enable seamless cluster diagnostics and reporting.

See Red Hat Documentation [How to setup SSH Key passwordless login in Red Hat Enterprise Linux](https://access.redhat.com/solutions/9194) 

See [Accessing the Red Hat Knowledge base portal](https://docs.aws.amazon.com/systems-manager/latest/userguide/fleet-manager-red-hat-knowledge-base-access.html) 

**Warning**  
Review the security implications for your organization, including root access controls and network segmentation, before implementing this configuration.

## Start and Enable the pcsd service
<a name="_start_and_enable_the_pcsd_service"></a>

```
# systemctl enable pcsd --now
```

## Authorize the Cluster
<a name="_authorize_the_cluster"></a>

Run the following command on one node to authenticate the hacluster user on both cluster nodes:

```
# pcs host auth <hostname_1> <hostname_2> -u hacluster -p <password>
```
+ Use the hacluster password you set earlier. If you omit `-p <password>`, you will be prompted for it.

## Generate Corosync Configuration
<a name="_generate_corosync_configuration"></a>

Corosync provides cluster membership and communication services for high availability clusters.

Initial setup can be performed using the following command:

```
# pcs cluster setup <cluster_name> \
<hostname_1> addr=<host_ip_1> addr=<host_additional_ip_1> \
<hostname_2> addr=<host_ip_2> addr=<host_additional_ip_2>
```
For example:

```
# pcs cluster setup hana_cluster hanahost01 addr=10.2.10.1 addr=10.2.10.2 hanahost02 addr=10.2.20.1 addr=10.2.20.2
```


| IP address type | Example | 
| --- | --- | 
|  <host_ip_1>  |  10.2.10.1  | 
|  <host_additional_ip_1>  |  10.2.10.2  | 
|  <host_ip_2>  |  10.2.20.1  | 
|  <host_additional_ip_2>  |  10.2.20.2  | 

The timing parameters are optimized for AWS cloud environments:
+ Increasing the value of totem token to 15s provides reliable cluster operation while accommodating normal cloud network characteristics. This setting prevents unnecessary failovers during brief network variations.
+ When scaling beyond two nodes, remove the two_node parameter from the quorum section. The timing parameters will automatically adjust using the token_coefficient feature to maintain appropriate failure detection as nodes are added.

```
# pcs cluster config update totem token=15000
```

## Verify Configuration
<a name="_verify_configuration"></a>

```
# pcs cluster start --all
```

**Note**  
Enabling the pacemaker service makes the server rejoin the cluster automatically after a reboot, ensuring that your system stays protected. Alternatively, you can leave the service disabled and start pacemaker manually after a reboot, which gives you the opportunity to investigate the cause of the failure first.

Run the following command to check the status of the pacemaker service:

```
# systemctl status pacemaker
```

Example output:

```
● pacemaker.service - Pacemaker High Availability Cluster Manager
     Loaded: loaded (/usr/lib/systemd/system/pacemaker.service; enabled; vendor preset: disabled)
     Active: active (running) since Mon 2025-06-02 13:27:48 AEST; 39s ago
       Docs: man:pacemakerd
             https://clusterlabs.org/pacemaker/doc/
   Main PID: 38554 (pacemakerd)
      Tasks: 7
     Memory: 31.3M
        CPU: 136ms
     CGroup: /system.slice/pacemaker.service
             ├─38554 /usr/sbin/pacemakerd
             ├─38555 /usr/libexec/pacemaker/pacemaker-based
             ├─38556 /usr/libexec/pacemaker/pacemaker-fenced
             ├─38557 /usr/libexec/pacemaker/pacemaker-execd
             ├─38558 /usr/libexec/pacemaker/pacemaker-attrd
             ├─38559 /usr/libexec/pacemaker/pacemaker-schedulerd
             └─38560 /usr/libexec/pacemaker/pacemaker-controld
```

Once the cluster service pacemaker is started, check the cluster status with pcs command, as shown in the following example:

```
# pcs status
```

Example output:

```
# pcs status
Cluster name: hana_cluster

WARNINGS:
No stonith devices and stonith-enabled is not false

Cluster Summary:
  * Stack: corosync
  * Current DC: hanahost02 (version 2.0.5-9.el8_4.8-ba59be7122) - partition with quorum
  * Last updated: Mon May 12 12:59:35 2025
  * Last change:  Mon May 12 12:59:25 2025 by hacluster via crmd on hanahost02
  * 2 nodes configured
  * 0 resource instances configured

Node List:
  * Online: [ hanahost01 hanahost02 ]

Full List of Resources:
  * No resources

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
```
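In the sample output above, corosync and pacemaker report `active/disabled`, meaning they will not start automatically after a reboot. If you want the nodes to rejoin the cluster on boot, enable the services on all nodes (one approach; weigh this against starting pacemaker manually to investigate failures first):

```
# pcs cluster enable --all
```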

The primary (hanahost01) and secondary (hanahost02) must show up as online. You can find the ring status and the associated IP addresses of the cluster with the corosync-cfgtool command, as shown in the following example:

```
# corosync-cfgtool -s
```

Example output:

```
Local node ID 1, transport knet
LINK ID 0 udp
        addr    = 10.2.10.1
        status:
                nodeid:          1:     localhost
                nodeid:          2:     connected
LINK ID 1 udp
        addr    = 10.2.10.2
        status:
                nodeid:          1:     localhost
                nodeid:          2:     connected
```

# Cluster Configuration
<a name="sap-hana-pacemaker-rhel-cluster-config"></a>

Bootstrap the cluster and configure all required cluster resources and constraints.

**Topics**
+ [Prepare for Resource Creation](#_prepare_for_resource_creation)
+ [Cluster Bootstrap](#cluster-bootstrap)
+ [Create STONITH Fencing Resource](#resource-stonith)
+ [Create Overlay IP Resources](#resource-overlayip)
+ [Create SAPHanaTopology Resource](#resource-saphanatop)
+ [Create SAPHANA Resource (based on resource agent SAPHana or SAPHanaController)](#resource-saphana)
+ [Create Resource Constraints](#resource-constraints)
+ [Reset Configuration – Optional](#_reset_configuration_optional)

## Prepare for Resource Creation
<a name="_prepare_for_resource_creation"></a>

To ensure that the cluster does not perform any unexpected actions during setup of resources and configuration, set the maintenance mode to true.

Run the following command to put the cluster in maintenance mode:

```
# pcs property set maintenance-mode=true
```

To verify the current maintenance state:

```
$ pcs status
```

**Note**  
There are two types of maintenance mode:  
Cluster-wide maintenance (set with `pcs property set maintenance-mode=true`)
Node-specific maintenance (set with `pcs node maintenance nodename`)
Always use cluster-wide maintenance mode when making configuration changes. For node-specific operations like hardware maintenance, refer to the Operations section for proper procedures.  
To disable maintenance mode after configuration is complete:  

```
# pcs property set maintenance-mode=false
```

## Cluster Bootstrap
<a name="cluster-bootstrap"></a>

### Configure Cluster Properties
<a name="_configure_cluster_properties"></a>

Configure cluster properties to establish fencing behavior and resource failover settings:

```
# pcs property set stonith-enabled="true"
# pcs property set stonith-timeout="600"
# pcs property set priority-fencing-delay="20"
```
+ The **priority-fencing-delay** is recommended for protecting SAP HANA nodes during network partitioning events. When a cluster partition occurs, this delay gives preference to nodes hosting higher priority resources, with SAP HANA Primary (promoted) instances receiving additional priority weighting. This helps ensure the Primary HANA node survives in split-brain scenarios. The recommended 20-second priority-fencing-delay works in conjunction with the pcmk_delay_max (10 seconds) configured in the stonith resource, providing a total potential delay of up to 30 seconds before fencing occurs.

To verify your cluster property settings:

```
# pcs property list
# pcs property config <property_name>
```

### Configure Resource Defaults
<a name="_configure_resource_defaults"></a>

Configure resource default behaviors:

------
#### [ RHEL 8.4 and above ]

```
# pcs resource defaults update resource-stickiness="1000"
# pcs resource defaults update migration-threshold="5000"
```

------
#### [ RHEL 7.x and RHEL 8.0 to 8.3 ]

```
# pcs resource defaults resource-stickiness="1000"
# pcs resource defaults migration-threshold="5000"
```
+ The **resource-stickiness** value prevents unnecessary resource movement, effectively setting a "cost" for moving resources. A value of 1000 strongly encourages resources to remain on their current node, avoiding the downtime associated with movement.
+ The **migration-threshold** of 5000 ensures the cluster will attempt to recover a resource on the same node many times before declaring that node unsuitable for hosting the resource.

------

Individual resources may override these defaults with their own defined values.

To verify your resource default settings, run `pcs resource defaults`.

### Configure Operation Defaults
<a name="_configure_operation_defaults"></a>

```
# pcs resource op defaults update timeout="600"
```

The op defaults timeout ensures all cluster operations have a reasonable default timeout of 600 seconds when resource-specific timeouts are not defined. These defaults do not apply to resources that override them with their own defined values.

## Create STONITH Fencing Resource
<a name="resource-stonith"></a>

An AWS STONITH resource is required for proper cluster fencing operations. The `fence_aws` resource is recommended for AWS deployments as it leverages the AWS API to safely fence failed or incommunicable nodes by stopping their EC2 instances.

Create the STONITH resource using the resource agent `fence_aws`:

```
# pcs stonith create <stonith_resource_name> fence_aws \
pcmk_host_map="<hostname_1>:<instance_id_1>;<hostname_2>:<instance_id_2>" \
region="<aws_region>" \
skip_os_shutdown="true" \
pcmk_delay_max="10" \
pcmk_reboot_timeout="600" \
pcmk_reboot_retries="4" \
op start interval="0" timeout="600" \
op stop interval="0" timeout="180" \
op monitor interval="300" timeout="60"
```

Details:
+ **pcmk_host_map** - Maps cluster node hostnames to their EC2 instance IDs. This mapping must be unique within the AWS account and follow the format hostname:instance-id, with multiple entries separated by semicolons.
+ **region** - AWS region where the EC2 instances are deployed
+ **pcmk_delay_max** - Random delay before fencing operations. Works in conjunction with cluster property `priority-fencing-delay` to prevent simultaneous fencing in 2-node clusters. Historically set to higher values, but with `priority-fencing-delay` now handling primary node protection, a lower value (10s) is sufficient. Omit in clusters with real quorum (3+ nodes) to avoid unnecessary delay.
+ **pcmk_reboot_timeout** - Maximum time in seconds allowed for a reboot operation
+ **pcmk_reboot_retries** - Number of times to retry a failed reboot operation
+ **skip_os_shutdown** (NEW) - Leverages a new EC2 stop-instance API flag to forcefully stop an EC2 instance by skipping the shutdown of the operating system.
  + [Red Hat Solution 4963741 - fence_aws fence action fails with "Timed out waiting to power OFF"](https://access.redhat.com/solutions/4963741) (requires Red Hat Customer Portal access)
+ Example using values from [Parameter Reference](sap-hana-pacemaker-rhel-parameters.md):

  ```
  # pcs stonith create rsc_fence_aws fence_aws \
  pcmk_host_map="hanahost01:i-xxxxinstidforhost1;hanahost02:i-xxxxinstidforhost2" \
  region="us-east-1" \
  skip_os_shutdown="true" \
  pcmk_delay_max="10" \
  pcmk_reboot_timeout="600" \
  pcmk_reboot_retries="4" \
  op start interval="0" timeout="600" \
  op stop interval="0" timeout="180" \
  op monitor interval="300" timeout="60"
  ```

**Note**  
When configuring the STONITH resource, consider your instance’s startup and shutdown times. The default pcmk_reboot_action is 'reboot', where the cluster waits for both stop and start actions to complete before considering the fencing action successful. This allows the cluster to return to a protected state. Setting `pcmk_reboot_action=off` allows the cluster to proceed immediately after shutdown. For High Memory Metal instances, only 'off' is recommended due to the extended time to initialize memory during startup.  

```
# pcs resource update <stonith_resource_name> pcmk_reboot_action="off"
# pcs resource update <stonith_resource_name> pcmk_off_timeout="600"
# pcs resource update <stonith_resource_name> pcmk_off_retries="4"
```
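The `pcmk_host_map` string is easy to get wrong. A small sketch that assembles it from paired hostname and instance-id lists (the instance IDs below are the placeholder values from the example above):

```shell
# Build "host1:id1;host2:id2" for pcmk_host_map from paired lists.
hosts="hanahost01 hanahost02"
ids="i-xxxxinstidforhost1 i-xxxxinstidforhost2"

map=""
set -- $ids
for h in $hosts; do
  map="${map:+$map;}$h:$1"   # append "host:id", separated by semicolons
  shift
done
echo "$map"
```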

## Create Overlay IP Resources
<a name="resource-overlayip"></a>

This resource ensures client connections follow the SAP HANA primary instance during failover by updating AWS route table entries. It manages an overlay IP address that always points to the active SAP HANA database.

Create the IP resource:

```
# pcs resource create rsc_ip_<SID>_HDB<hana_sys_nr> ocf:heartbeat:aws-vpc-move-ip \
ip="<hana_overlayip>" \
routing_table="<routetable_id>" \
interface="eth0" \
profile="<cli_cluster_profile>" \
op start interval="0" timeout="180" \
op stop interval="0" timeout="180" \
op monitor interval="60" timeout="60"
```
+  **ip** - Overlay IP address that will be used to connect to the Primary SAP HANA database. See [Overlay IP Concept](sap-hana-pacemaker-rhel-concepts.md#overlay-ip-rhel) 
+  **routing_table** - AWS route table ID(s) that need to be updated. Multiple route tables can be specified using commas (for example, `routing_table=rtb-xxxxxroutetable1,rtb-xxxxxroutetable2`). Ensure initial entries have been created following [Add VPC Route Table Entries for Overlay IPs](sap-hana-pacemaker-rhel-infra-setup.md#rt-rhel) 
+  **interface** - Network interface for the IP address (typically eth0)
+  **profile** - (optional) AWS CLI profile name for API authentication. Verify that the profile exists with `aws configure list-profiles`. If a profile is not explicitly configured, the default profile is used.
+  **awscli** - (optional) Path to the AWS CLI executable. The default path is `/usr/bin/aws`. Only specify this parameter if the AWS CLI is installed in a different location. To confirm the path on your system, run `which aws`.
+  *Example using values from [Parameter Reference](sap-hana-pacemaker-rhel-parameters.md) * :  
**Example**  

  ```
  # pcs resource create rsc_ip_HDB_HDB00 ocf:heartbeat:aws-vpc-move-ip \
  ip="172.16.52.1" \
  routing_table="rtb-xxxxxroutetable1" \
  interface="eth0" \
  profile="cluster" \
  op start interval="0" timeout="180" \
  op stop interval="0" timeout="180" \
  op monitor interval="60" timeout="60"
  ```

**Note**  
To update any resource parameter after creation, use `pcs resource update`. For example, if the AWS CLI is not installed at the default path (`/usr/bin/aws`), run:  

```
# pcs resource update rsc_ip_<SID>_HDB<hana_sys_nr> awscli=$(which aws)
```
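Independently of the cluster, you can confirm that the route table entry for the overlay IP points at the current primary node. A sketch using the AWS CLI and the example values above (`rtb-xxxxxroutetable1`, `172.16.52.1`):

```shell
# aws ec2 describe-route-tables \
  --route-table-ids rtb-xxxxxroutetable1 \
  --region us-east-1 \
  --query "RouteTables[].Routes[?DestinationCidrBlock=='172.16.52.1/32']"
```

The returned route should reference the instance or network interface of the node currently running the IP resource.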

**For Active/Active Read Enabled**  
If you are using `logreplay_readenabled` and require that your secondary is accessible via an overlay IP, you can create an additional IP resource.

```
# pcs resource create rsc_ip_<SID>_HDB<hana_sys_nr>_readenabled ocf:heartbeat:aws-vpc-move-ip \
ip="<readenabled_overlayip>" \
routing_table="<routetable_id>" \
interface="eth0" \
profile="<cli_cluster_profile>" \
op start interval="0" timeout="180" \
op stop interval="0" timeout="180" \
op monitor interval="60" timeout="60"
```
+  *Example using values from [Parameter Reference](sap-hana-pacemaker-rhel-parameters.md) * :  
**Example**  

  ```
  # pcs resource create rsc_ip_HDB_HDB00_readenabled ocf:heartbeat:aws-vpc-move-ip \
  ip="172.16.52.2" \
  routing_table="rtb-xxxxxroutetable1" \
  interface="eth0" \
  profile="cluster" \
  op start interval="0" timeout="180" \
  op stop interval="0" timeout="180" \
  op monitor interval="60" timeout="60"
  ```

**For Shared VPC**  
If your configuration uses a shared VPC, two additional parameters are required.

```
# pcs resource create rsc_ip_<SID>_HDB<hana_sys_nr> ocf:heartbeat:aws-vpc-move-ip \
ip="<hana_overlayip>" routing_table=<routetable_id> interface=eth0 \
profile="<cli_cluster_profile>" lookup_type=NetworkInterfaceId \
routing_table_role="arn:aws:iam::<sharing_vpc_account_id>:role/<sharing_vpc_account_cluster_role>" \
op start interval="0" timeout="180" \
op stop interval="0" timeout="180" \
op monitor interval="60" timeout="60"
```

Additional details:
+ **lookup_type** - Set to `NetworkInterfaceId`
+ **routing_table_role** - `"arn:aws:iam::<sharing_vpc_account_id>:role/<sharing_vpc_account_cluster_role>"`
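Before creating the resource, it is worth confirming from a cluster node that the cross-account role can actually be assumed; if this call fails, the resource agent will not be able to update the route table in the sharing account. A sketch using the same role ARN placeholder:

```shell
# aws sts assume-role \
  --role-arn "arn:aws:iam::<sharing_vpc_account_id>:role/<sharing_vpc_account_cluster_role>" \
  --role-session-name cluster-route-check
```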

## Create SAPHanaTopology Resource
<a name="resource-saphanatop"></a>

The SAPHanaTopology resource agent helps manage high availability for SAP HANA databases with system replication. It analyzes the HANA topology and reports findings via node status attributes. These attributes are used by either the SAPHana or SAPHanaController resource agents to control the HANA databases. SAPHanaTopology starts and monitors the local saphostagent, leveraging SAP interfaces like landscapeHostConfiguration.py, hdbnsutil, and saphostctrl to gather information about system status, roles, and configuration.

The following configuration applies to both scale-up and scale-out deployments.

For documentation on the resource, review the man page:

```
# man ocf_heartbeat_SAPHanaTopology
```

------
#### [ For scale-up (2-node) ]

For the primitive and clone:

```
# pcs resource create rsc_SAPHanaTopology_<SID>_HDB<hana_sys_nr> ocf:heartbeat:SAPHanaTopology \
SID="<SID>" InstanceNumber="<hana_sys_nr>" \
op start interval="0" timeout="600" \
op stop interval="0" timeout="300" \
op monitor interval="10" timeout="600" \
clone clone-node-max="1" interleave="true" clone-max="2"
```
+  *Example using values from [Parameter Reference](sap-hana-pacemaker-rhel-parameters.md) * :  
**Example**  

  ```
  # pcs resource create rsc_SAPHanaTopology_HDB_HDB00 ocf:heartbeat:SAPHanaTopology \
  SID="HDB" \
  InstanceNumber="00" \
  op start interval="0" timeout="600" \
  op stop interval="0" timeout="300" \
  op monitor interval="10" timeout="600" \
  clone clone-node-max="1" interleave="true" clone-max="2"
  ```

------
#### [ For scale-out ]

For the primitive and clone:

```
# pcs resource create rsc_SAPHanaTopology_<SID>_HDB<hana_sys_nr> ocf:heartbeat:SAPHanaTopology \
SID="<SID>" InstanceNumber="<hana_sys_nr>" \
op start interval="0" timeout="600" \
op stop interval="0" timeout="300" \
op monitor interval="10" timeout="600" \
clone clone-node-max="1" interleave="true" clone-max="<number-of-nodes>"
```
+  *Example using values from [Parameter Reference](sap-hana-pacemaker-rhel-parameters.md) * :  
**Example**  

  ```
  # pcs resource create rsc_SAPHanaTopology_HDB_HDB00 ocf:heartbeat:SAPHanaTopology \
  SID="HDB" InstanceNumber="00" \
  op start interval="0" timeout="600" \
  op stop interval="0" timeout="300" \
  op monitor interval="10" timeout="600" \
  clone clone-node-max="1" interleave="true" clone-max="6"
  ```

------

Details:
+  **SID** - SAP System ID for the HANA instance
+  **InstanceNumber** - Instance number of the SAP HANA instance
+  **clone-node-max** - Defines how many copies of the resource agent can be started on a single node (set to 1)
+  **interleave** - Enables parallel starting of dependent clone resources on the same node (set to true)
+  **clone-max** - Defines the total number of clone instances that can be started in the cluster (for example, 2 for scale-up, or 6 for scale-out with 3 nodes per site; do not include the majority maker node)
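Once the clone is running, you can inspect the attributes that SAPHanaTopology publishes for each node. A quick check (the `hana_<sid>_*` attribute names follow the convention used by the SAPHanaSR packages):

```shell
# crm_mon -A1        # one-shot status including node attributes such as hana_hdb_roles
# pcs status --full  # clone state of the topology resource on each node
```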

## Create SAP HANA Resource (based on the SAPHana or SAPHanaController resource agent)
<a name="resource-saphana"></a>

The SAP HANA resource agents manage system replication and failover between SAP HANA databases. These agents control start, stop, and monitoring operations while checking synchronization status to maintain data consistency. They leverage SAP interfaces including sapcontrol, landscapeHostConfiguration, hdbnsutil, systemReplicationStatus, and saphostctrl. All configurations work in conjunction with the SAPHanaTopology agent, which gathers information about the system replication status across cluster nodes.

Choose the appropriate resource agent configuration based on your SAP HANA architecture:

### SAPHanaSR-angi Deployments (Available in RHEL 9.6 and 10)
<a name="_saphanasr_angi_deployments_available_in_rhel_9_6_and_10"></a>

Available and recommended for new deployments on RHEL 9.6 and 10. The SAPHanaController resource agent with the next-generation system replication architecture (SAPHanaSR-angi) provides improved integration and management capabilities for both scale-up and scale-out deployments.

For documentation on the resource, review the man page:

```
# man ocf_heartbeat_SAPHanaController
```

------
#### [ For scale-up (2-node) ]

Create the primitive:

```
# pcs resource create rsc_SAPHanaController_<SID>_HDB<hana_sys_nr> ocf:heartbeat:SAPHanaController \
SID="<SID>" \
InstanceNumber="<hana_sys_nr>" \
PREFER_SITE_TAKEOVER="true" \
DUPLICATE_PRIMARY_TIMEOUT="7200" \
AUTOMATED_REGISTER="true" \
op start interval="0" timeout="3600" \
op stop interval="0" timeout="3600" \
op promote interval="0" timeout="3600" \
op monitor interval="60" role="Promoted" timeout="700" \
op monitor interval="61" role="Unpromoted" timeout="700" \
promotable notify="true" clone-node-max="1" interleave="true" clone-max="2" \
meta priority="100"
```
+  *Example using values from [Parameter Reference](sap-hana-pacemaker-rhel-parameters.md) * :

  ```
  # pcs resource create rsc_SAPHanaController_HDB_HDB00 ocf:heartbeat:SAPHanaController \
  SID="HDB" \
  InstanceNumber="00" \
  PREFER_SITE_TAKEOVER="true" \
  DUPLICATE_PRIMARY_TIMEOUT="7200" \
  AUTOMATED_REGISTER="true" \
  op start interval="0" timeout="3600" \
  op stop interval="0" timeout="3600" \
  op promote interval="0" timeout="3600" \
  op monitor interval="60" role="Promoted" timeout="700" \
  op monitor interval="61" role="Unpromoted" timeout="700" \
  promotable notify="true" clone-node-max="1" interleave="true" clone-max="2" \
  meta priority="100"
  ```

------
#### [ For scale-out ]

Create the primitive using the SAPHanaController Resource Agent:

```
# pcs resource create rsc_SAPHanaController_<SID>_HDB<hana_sys_nr> ocf:heartbeat:SAPHanaController \
SID="<SID>" \
InstanceNumber="<hana_sys_nr>" \
PREFER_SITE_TAKEOVER="true" \
DUPLICATE_PRIMARY_TIMEOUT="7200" \
AUTOMATED_REGISTER="true" \
op start interval="0" timeout="3600" \
op stop interval="0" timeout="3600" \
op promote interval="0" timeout="3600" \
op monitor interval="60" role="Promoted" timeout="700" \
op monitor interval="61" role="Unpromoted" timeout="700" \
promotable notify="true" clone-node-max="1" interleave="true" clone-max="<number-of-nodes>"
```
+  *Example using values from [Parameter Reference](sap-hana-pacemaker-rhel-parameters.md) * :  
**Example**  

  ```
  # pcs resource create rsc_SAPHanaController_HDB_HDB00 ocf:heartbeat:SAPHanaController \
  SID="HDB" \
  InstanceNumber="00" \
  PREFER_SITE_TAKEOVER="true" \
  DUPLICATE_PRIMARY_TIMEOUT="7200" \
  AUTOMATED_REGISTER="true" \
  op start interval="0" timeout="3600" \
  op stop interval="0" timeout="3600" \
  op promote interval="0" timeout="3600" \
  op monitor interval="60" role="Promoted" timeout="700" \
  op monitor interval="61" role="Unpromoted" timeout="700" \
  promotable notify="true" clone-node-max="1" interleave="true" clone-max="6"
  ```

------

Details:
+  **SID** - SAP System ID for the HANA instance
+  **InstanceNumber** - Instance number of the SAP HANA instance
+  **clone-node-max** - Defines how many copies of the resource agent can be started on a single node (set to 1)
+  **interleave** - Enables parallel starting of dependent clone resources on the same node (set to true)
+  **clone-max** - Defines the total number of clone instances that can be started in the cluster (for example, 2 for scale-up, or 6 for scale-out with 3 nodes per site; do not include the majority maker node)
+  **PREFER_SITE_TAKEOVER** - Defines whether a takeover to the secondary site is preferred. Review for non-standard deployments.
+  **AUTOMATED_REGISTER** - Defines whether the former primary should be registered as a secondary. Review for non-standard deployments.
+  **DUPLICATE_PRIMARY_TIMEOUT** - Wait time to minimize the risk of an unintended dual primary.
+  **meta priority** - Setting this to 100 works in conjunction with priority-fencing-delay to ensure proper failover order and prevent simultaneous fencing operations
+ The start and stop timeout values (3600s) may need to be increased for larger databases. Adjust these values based on your database size and observed startup/shutdown times.
+ If you need to update your configuration after creation, the following examples show the relevant commands:

  ```
  # pcs resource update rsc_SAPHanaController_HDB_HDB00 op monitor role="Promoted" timeout=900
  # pcs resource update rsc_SAPHanaController_HDB_HDB00 DUPLICATE_PRIMARY_TIMEOUT=3600
  # pcs resource meta rsc_SAPHanaController_HDB_HDB00-clone priority=100
  ```
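With SAPHanaSR-angi deployments, the `SAPHanaSR-showAttr` tool shipped with the package summarizes the replication state and cluster attributes in a single view, which is helpful for confirming the setup before and after a takeover:

```shell
# SAPHanaSR-showAttr
```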

### Classic Deployments
<a name="_classic_deployments"></a>

For classic scale-up deployments, the SAPHana resource agent manages takeover between two SAP HANA databases. For documentation on the resource, review the man page:

```
# man ocf_heartbeat_SAPHana
```

------
#### [ For scale-up (2-node) ]

Create the primitive using the SAPHana Resource Agent:

```
# pcs resource create rsc_SAPHana_<SID>_HDB<hana_sys_nr> ocf:heartbeat:SAPHana \
SID="<SID>" \
InstanceNumber="<hana_sys_nr>" \
PREFER_SITE_TAKEOVER="true" \
DUPLICATE_PRIMARY_TIMEOUT="7200" \
AUTOMATED_REGISTER="true" \
op start interval="0" timeout="3600" \
op stop interval="0" timeout="3600" \
op promote interval="0" timeout="3600" \
op monitor interval="60" role="Promoted" timeout="700" \
op monitor interval="61" role="Unpromoted" timeout="700" \
promotable notify="true" clone-node-max="1" interleave="true" clone-max="2" \
meta priority="100"
```
+  *Example using values from [Parameter Reference](sap-hana-pacemaker-rhel-parameters.md) * :

  ```
  # pcs resource create rsc_SAPHana_HDB_HDB00 ocf:heartbeat:SAPHana \
  SID="HDB" \
  InstanceNumber="00" \
  PREFER_SITE_TAKEOVER="true" \
  DUPLICATE_PRIMARY_TIMEOUT="7200" \
  AUTOMATED_REGISTER="true" \
  op start interval="0" timeout="3600" \
  op stop interval="0" timeout="3600" \
  op promote interval="0" timeout="3600" \
  op monitor interval="60" role="Promoted" timeout="700" \
  op monitor interval="61" role="Unpromoted" timeout="700" \
  promotable notify="true" clone-node-max="1" interleave="true" clone-max="2" \
  meta priority="100"
  ```

------
#### [ For scale-out ]

Create the primitive using the SAPHanaController Resource Agent:

```
# pcs resource create rsc_SAPHanaController_<SID>_HDB<hana_sys_nr> ocf:heartbeat:SAPHanaController \
SID="<SID>" \
InstanceNumber="<hana_sys_nr>" \
PREFER_SITE_TAKEOVER="true" \
DUPLICATE_PRIMARY_TIMEOUT="7200" \
AUTOMATED_REGISTER="true" \
op start interval="0" timeout="3600" \
op stop interval="0" timeout="3600" \
op promote interval="0" timeout="3600" \
op monitor interval="60" role="Promoted" timeout="700" \
op monitor interval="61" role="Unpromoted" timeout="700" \
promotable notify="true" clone-node-max="1" interleave="true" clone-max="<number-of-nodes>"
```
+  *Example using values from [Parameter Reference](sap-hana-pacemaker-rhel-parameters.md) * :

  ```
  # pcs resource create rsc_SAPHanaController_HDB_HDB00 ocf:heartbeat:SAPHanaController \
  SID="HDB" \
  InstanceNumber="00" \
  PREFER_SITE_TAKEOVER="true" \
  DUPLICATE_PRIMARY_TIMEOUT="7200" \
  AUTOMATED_REGISTER="true" \
  op start interval="0" timeout="3600" \
  op stop interval="0" timeout="3600" \
  op promote interval="0" timeout="3600" \
  op monitor interval="60" role="Promoted" timeout="700" \
  op monitor interval="61" role="Unpromoted" timeout="700" \
  promotable notify="true" clone-node-max="1" interleave="true" clone-max="6"
  ```

------

Details:
+  **SID** - SAP System ID for the HANA instance
+  **InstanceNumber** - Instance number of the SAP HANA instance
+  **clone-node-max** - Defines how many copies of the resource agent can be started on a single node (set to 1)
+  **interleave** - Enables parallel starting of dependent clone resources on the same node (set to true)
+  **clone-max** - Defines the total number of clone instances that can be started in the cluster (for example, 2 for scale-up, or 6 for scale-out with 3 nodes per site; do not include the majority maker node)
+  **PREFER_SITE_TAKEOVER** - Defines whether a takeover to the secondary site is preferred. Review for non-standard deployments.
+  **AUTOMATED_REGISTER** - Defines whether the former primary should be registered as a secondary. Review for non-standard deployments.
+  **DUPLICATE_PRIMARY_TIMEOUT** - Wait time to minimize the risk of an unintended dual primary.
+  **meta priority** - Setting this to 100 works in conjunction with priority-fencing-delay to ensure proper failover order and prevent simultaneous fencing operations
+ The start and stop timeout values (3600s) may need to be increased for larger databases. Adjust these values based on your database size and observed startup/shutdown times.
+ If you need to update your configuration after creation, the following examples show the relevant commands:

  ```
  # pcs resource update rsc_SAPHana_HDB_HDB00 op monitor role="Promoted" timeout=900
  # pcs resource update rsc_SAPHana_HDB_HDB00 DUPLICATE_PRIMARY_TIMEOUT=3600
  # pcs resource meta rsc_SAPHana_HDB_HDB00-clone priority=100
  ```

## Create Resource Constraints
<a name="resource-constraints"></a>

The following constraints are required.

### Order Constraint
<a name="_order_constraint"></a>

This constraint defines the start order between the SAPHanaTopology and SAPHana resources:

```
# pcs constraint order start <SAPHanaTopology-clone> then <SAPHana/SAPHanaController-clone> symmetrical=false
```
+  *Example* :

  ```
  # pcs constraint order start rsc_SAPHanaTopology_HDB_HDB00-clone then rsc_SAPHana_HDB_HDB00-clone symmetrical=false
  ```

### Colocation Constraint
<a name="_colocation_constraint"></a>

#### IP with Primary
<a name="_ip_with_primary"></a>

This constraint ensures that the IP resource, which determines the target of the overlay IP, runs on the node that holds the primary SAP HANA role:

```
# pcs constraint colocation add <ip_resource> with promoted <SAPHana/SAPHanaController-clone> 2000
```
+  *Example* :

  ```
  # pcs constraint colocation add rsc_ip_HDB_HDB00 with promoted rsc_SAPHana_HDB_HDB00-clone 2000
  ```

#### ReadOnly IP with Secondary (Only for ReadOnly Patterns)
<a name="_readonly_ip_with_secondary_only_for_readonly_patterns"></a>

This constraint ensures that the read-enabled IP resource runs on the secondary (Unpromoted) node. When the secondary node is unavailable, the IP will move to the primary node, where read workloads will share capacity with primary workloads:

```
# pcs constraint colocation add <ip_resource> with unpromoted <SAPHana/SAPHanaController-clone> 2000
```
+  *Example* :

  ```
  # pcs constraint colocation add rsc_ip_HDB_HDB00_readenabled with unpromoted rsc_SAPHana_HDB_HDB00-clone 2000
  ```

### Location Constraint
<a name="_location_constraint"></a>

#### No SAP HANA Resources on the Majority Maker (Scale Out Only)
<a name="_no_sap_hana_resources_on_the_majority_maker_scale_out_only"></a>

This location constraint ensures that the SAP HANA resources avoid the majority maker node, which is not suited to running them:

```
# pcs constraint location <SAPHanaTopology-clone> avoids <hostname_mm>
# pcs constraint location <SAPHana/SAPHanaController-clone> avoids <hostname_mm>
```
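After creating all constraints, review them together to confirm that the order, colocation, and location rules match your intent:

```shell
# pcs constraint          # summary of all configured constraints
# pcs constraint --full   # includes constraint IDs, useful for later updates or deletion
```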

### Activate Cluster
<a name="_activate_cluster"></a>

Use `pcs config show` to review that all the values have been entered correctly.

On confirmation of correct values, set the maintenance mode to false using the following command. This allows the cluster to take control of the resources:

```
# pcs property set maintenance-mode=false
```
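After leaving maintenance mode, watch the cluster bring the resources online and verify that the promoted (primary) role lands on the expected node:

```shell
# pcs status --full
# crm_mon -r    # refreshing view that also lists inactive resources
```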

## Reset Configuration – Optional
<a name="_reset_configuration_optional"></a>

**Important**  
The following instructions help you reset the complete configuration. Run these commands only if you want to start setup from the beginning.

Run the following command to back up the current configuration for reference:

```
# pcs config backup /tmp/cluster_backup_$(date +%Y%m%d)
# pcs config show > /tmp/config_backup_$(date +%Y%m%d).txt
```

Run the following commands to stop the cluster and clear the current configuration:

```
# pcs cluster stop --all
hanahost02: Stopping Cluster (pacemaker)...
hanahost01: Stopping Cluster (pacemaker)...
hanahost02: Stopping Cluster (corosync)...
hanahost01: Stopping Cluster (corosync)...
# pcs cluster destroy
Shutting down pacemaker/corosync services...
Killing any remaining services...
Removing all cluster configuration files...
```

The `pcs cluster destroy` command removes all cluster resources from the Cluster Information Base (CIB) and deletes the corosync configuration, disconnecting the nodes from the cluster. Only perform these steps if you absolutely need to reset everything to defaults. For minor changes, use `pcs resource update` or `pcs property set` instead.

# Client Connectivity
<a name="sap-hana-pacemaker-rhel-client-connectivity"></a>

For proper SAP HANA database connectivity:
+ Ensure that the Overlay IP can be correctly resolved in all application servers
+ DNS configuration or local host entries must be valid
+ Network routing must be properly configured
+ SAP HANA client libraries must be installed and up to date

Ensure that the connectivity data for the SAP HANA Database references the hostname associated with the Overlay IP. For more information see SAP Documentation: [Setting Connectivity Data for the SAP HANA Database](https://help.sap.com/docs/SLTOOLSET/39c32e9783f6439e871410848f61544c/b7ed2d55b0a7f857e10000000a441470.html?version=CURRENT_VERSION_SWPM20) 
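The SQL port that clients combine with the overlay IP hostname is derived from the instance number. A small sketch of the derivation, assuming the default port pattern `3<hana_sys_nr>15` for the tenant database (standard installations; verify the actual port in your system):

```shell
hana_sys_nr="00"               # instance number used in the examples above
sql_port="3${hana_sys_nr}15"   # default tenant SQL port pattern 3<nr>15
echo "${sql_port}"             # prints 30015
```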

Test database connectivity using the R3trans utility:

```
sidadm> R3trans -d
```

Review any additional connections to SAP HANA that require high availability. While application connectivity should use the overlay IP, administrative tools (SAP HANA Studio, hdbsql commands, monitoring tools) require direct connectivity to individual SAP HANA instances.