

# SAP HANA and Cluster Setup
<a name="sap-hana-pacemaker-sles-deployment-cluster"></a>

**Topics**
+ [SAP HANA Setup and HSR](sap-hana-pacemaker-sles-hana-setup-hsr.md)
+ [SAP HANA Service Control](sap-hana-pacemaker-sles-hana-control.md)
+ [Cluster Node Setup](sap-hana-pacemaker-sles-cluster-node-setup.md)
+ [Cluster Configuration](sap-hana-pacemaker-sles-cluster-config.md)
+ [Client Connectivity](sap-hana-pacemaker-sles-client-connectivity.md)

# SAP HANA Setup and HSR
<a name="sap-hana-pacemaker-sles-hana-setup-hsr"></a>

Prepare SAP HANA for System Replication (HSR) by configuring parameters and creating required backups.

**Topics**
+ [Review AWS and SAP Installation Guides](#review_guides)
+ [Check global.ini parameters](#global_ini)
+ [Create a SAP HANA Backup on the Primary System](#pre_setup_backup)
+ [Configure System Replication on Primary and Secondary Systems](#register_hsr)
+ [Check SAP Host Agent Version](#sap_host_agent)

**Important**  
This guide assumes that SAP HANA Platform has been installed either as a scale-up configuration with two EC2 instances in different availability zones, or as a scale-out configuration with multiple EC2 instances in two availability zones, following the guidance from AWS and SAP.

## Review AWS and SAP Installation Guides
<a name="review_guides"></a>
+ AWS Documentation - [SAP HANA Environment Setup on AWS](https://docs.aws.amazon.com/sap/latest/sap-hana/std-sap-hana-environment-setup.html) 
+ SAP Documentation - [SAP HANA Server Installation and Update Guide](https://help.sap.com/docs/SAP_HANA_PLATFORM/2c1988d620e04368aa4103bf26f17727/7eb0167eb35e4e2885415205b8383584.html) 

SAP provides documentation on how to configure SAP HANA System Replication using SAP HANA Cockpit, SAP HANA Studio, or `hdbnsutil` on the command line. Review the documentation for your SAP HANA version to check for changes to the guidance, or if you want to use a method other than the command line.
+ SAP Documentation: [Configuring SAP HANA System Replication](https://help.sap.com/docs/SAP_HANA_PLATFORM/4e9b18c116aa42fc84c7dbfd02111aba/442bf027937746248f69701aa9b94112.html) 

## Check global.ini parameters
<a name="global_ini"></a>

Run the following commands as <sid>adm. These commands prompt for the SYSTEM user password of the SYSTEMDB database.

**Check `log_mode` is set to normal**  
Ensure that the configuration parameter `log_mode` is set to `normal` in the `persistence` section of the `global.ini` file:

```
hdbsql -jx -i <hana_sys_nr> -u system -d SYSTEMDB "SELECT VALUE FROM M_INIFILE_CONTENTS WHERE FILE_NAME = 'global.ini' AND SECTION = 'persistence' AND KEY = 'log_mode';"
```

For example:

```
hdbadm> hdbsql -jx -i 00 -u system -d SYSTEMDB "SELECT VALUE FROM M_INIFILE_CONTENTS WHERE FILE_NAME = 'global.ini' AND SECTION = 'persistence' AND KEY = 'log_mode';"
VALUE
"normal"
```
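
If `log_mode` is not set to `normal`, it must be changed before configuring HSR. A hedged sketch of the change using hdbsql (note that after changing `log_mode`, SAP HANA must be restarted and a new full data backup is required before log backups become effective):

```
hdbsql -i <hana_sys_nr> -u system -d SYSTEMDB "ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('persistence', 'log_mode') = 'normal' WITH RECONFIGURE;"
```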

**Review global.ini file replication**  
SAP HANA System Replication requires consistent configuration between primary and secondary systems to ensure proper operation, especially during failover scenarios. The `inifile_checker/replicate` parameter in global.ini provides an automated solution to this requirement. When enabled on the primary system, any configuration changes made to ini files on the primary are automatically synchronized to the secondary site. This removes the need for manual configuration replication and helps prevent configuration mismatches that could impact system availability. The parameter only needs to be configured on the primary system, as the secondary system will receive these configuration changes through the normal System Replication process.

Add the following to `global.ini`:

```
[inifile_checker]
replicate = true
```

See SAP Note [2978895 - Changing parameters on Primary and Secondary site of SAP HANA system](https://me.sap.com/notes/2978895) 
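
If you manage `global.ini` directly on the file system, the edit can be scripted. A minimal sketch (the helper name is hypothetical; it appends the section only if one is not already present):

```
# enable_inifile_replication <path-to-global.ini>
# Appends the [inifile_checker] section unless the file already contains one.
enable_inifile_replication() {
  local ini="$1"
  grep -q '^\[inifile_checker\]' "$ini" || \
    printf '\n[inifile_checker]\nreplicate = true\n' >> "$ini"
}
```

Run it against `/hana/shared/<SID>/global/hdb/custom/config/global.ini` on the primary system, then reconfigure or restart SAP HANA for the change to take effect.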

## Create a SAP HANA Backup on the Primary System
<a name="pre_setup_backup"></a>

 **Get a list of all active databases:** 

```
hdbsql -jx -i <hana_sys_nr> -u system -d SYSTEMDB "SELECT DATABASE_NAME,ACTIVE_STATUS from M_DATABASES"
```

For example:

```
hdbadm> hdbsql -jx -i 00 -u system -d SYSTEMDB "SELECT DATABASE_NAME,ACTIVE_STATUS from M_DATABASES"
Password:
DATABASE_NAME,ACTIVE_STATUS
"SYSTEMDB","YES"
"HDB","YES"
```

**Create a backup of the SYSTEMDB and each tenant database:**  
The following commands are examples of file-based backups. You can perform backups using your preferred tool and location. If using a file system (for example, `/backup`), ensure there is sufficient space for a full backup.

------
#### [ Backint ]

For the SystemDB

```
hdbsql -i 00 -u SYSTEM  -d SYSTEMDB "BACKUP DATA USING BACKINT ('initial_hsr_db_SYSTEMDB') COMMENT 'Initial backup for HSR'";
```

For each Tenant DB

```
hdbsql -i 00 -u SYSTEM  -d <TENANT_DB> "BACKUP DATA USING BACKINT ('initial_hsr_db_<TENANT_DB>') COMMENT 'Initial backup for HSR'";
```
+ Run as <sid>adm
+ Ensure that backint has been configured correctly
+ You will be prompted to provide a password or alternatively can use `-p password` 

------
#### [ File ]

For the SystemDB

```
hdbsql -i <hana_sys_nr> -u system -d SYSTEMDB "BACKUP DATA USING FILE ('/<backup location>/initial_hsr_db_SYSTEMDB') COMMENT 'Initial backup for HSR'";
```

For each Tenant DB

```
hdbsql -i <hana_sys_nr> -u system -d <TENANT_DB> "BACKUP DATA USING FILE ('/<backup location>/initial_hsr_db_<TENANT_DB>') COMMENT 'Initial backup for HSR'";
```
+ Run as <sid>adm
+ Ensure that a backup location exists with sufficient space and the correct file permissions
+ You will be prompted to provide a password or alternatively can use `-p password` 

------
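
After the backups complete, you can confirm success in the backup catalog. A hedged example querying the `M_BACKUP_CATALOG` system view for the most recent entries (adjust the instance number and credentials, and repeat with `-d <TENANT_DB>` for each tenant):

```
hdbsql -jx -i <hana_sys_nr> -u system -d SYSTEMDB "SELECT TOP 3 ENTRY_TYPE_NAME, STATE_NAME, UTC_END_TIME FROM M_BACKUP_CATALOG ORDER BY UTC_END_TIME DESC;"
```

Each recent data backup should report a `STATE_NAME` of `successful`.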

### Stop the Secondary System and Copy System PKI Keys
<a name="copy_keys"></a>

**Stop the secondary system**  
Stop the SAP HANA application on the secondary system. As <sid>adm, run:

```
sapcontrol -nr <hana_sys_nr> -function StopSystem <SID>
```

**Copy the system PKI keys**  
Copy the following system PKI SSFS key and data files from the primary system to the same location on the secondary system using scp, a shared file system, or an S3 bucket:

```
/usr/sap/<SID>/SYS/global/security/rsecssfs/data/SSFS_<SID>.DAT
/usr/sap/<SID>/SYS/global/security/rsecssfs/key/SSFS_<SID>.KEY
```

For example using scp:

```
hdbadm> scp -p /usr/sap/HDB/SYS/global/security/rsecssfs/data/SSFS_HDB.DAT hdbadm@hanahost02:/usr/sap/HDB/SYS/global/security/rsecssfs/data/SSFS_HDB.DAT
hdbadm> scp -p /usr/sap/HDB/SYS/global/security/rsecssfs/key/SSFS_HDB.KEY hdbadm@hanahost02:/usr/sap/HDB/SYS/global/security/rsecssfs/key/SSFS_HDB.KEY
```

## Configure System Replication on Primary and Secondary Systems
<a name="register_hsr"></a>

**Enable System Replication on the Primary System**  
Ensure the primary SAP HANA system is **started**, then as <sid>adm, enable system replication using a unique site name:

```
hdbnsutil -sr_enable --name=<site_1>
```

For example:

```
hdbadm> hdbnsutil -sr_enable --name=siteA
```

**Register System Replication on the Secondary System**  
Ensure the secondary SAP HANA system is **stopped**, then as <sid>adm, register the secondary for system replication using a unique site name, the connection details of the primary system, and your preferred replication options.

```
hdbnsutil -sr_register \
 --name=<site_2> \
 --remoteHost=<hostname_1> \
 --remoteInstance=<hana_sys_nr> \
 --replicationMode=[sync|syncmem] \
 --operationMode=[logreplay|logreplay_readenabled]
```

For example:

```
hdbadm> hdbnsutil -sr_register --name=siteB --remoteHost=hanahost01 --remoteInstance=00 --replicationMode=syncmem --operationMode=logreplay
```

Alternatively, if your setup requires active/active read-enabled access to the secondary:

```
hdbadm> hdbnsutil -sr_register --name=siteB --remoteHost=hanahost01 --remoteInstance=00 --replicationMode=syncmem --operationMode=logreplay_readenabled
```
+  `hostname_1` is the hostname used to install SAP HANA, which may be a virtual name.
+ The replication mode can be either `sync` or `syncmem`.
+ For replication to support a clustered system and a hot standby, the operation mode must be `logreplay` or `logreplay_readenabled`.
+ For more information review the SAP Documentation
  + SAP Documentation: [Replication Modes for SAP HANA System Replication](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/c039a1a5b8824ecfa754b55e0caffc01.html) 
  + SAP Documentation: [Operation Modes for SAP HANA System Replication](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/627bd11e86c84ec2b9fcdf585d24011c.html) 
  + SAP Documentation: [SAP HANA System Replication - Active/Active (Read Enabled)](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/fe5fc53706a34048bf4a3a93a5d7c866.html) 
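
Once the secondary is registered and started, replication status can be checked on the primary node as <sid>adm using the standard SAP HANA tools:

```
hdbnsutil -sr_state
HDBSettings.sh systemReplicationStatus.py
```

All services should report an active replication status once the initial data shipping has completed.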

## Check SAP Host Agent Version
<a name="sap_host_agent"></a>

The SAP host agent is used for SAP instance control and monitoring. This agent is used by SAP cluster resource agents and hooks. It is recommended that you have the latest version installed on all instances. For more details, see [SAP Note 2219592 – Upgrade Strategy of SAP Host Agent](https://me.sap.com/notes/2219592).

Use the following command to check the version of the host agent. Repeat on all SAP HANA nodes:

```
# /usr/sap/hostctrl/exe/saphostexec -version
```

# SAP HANA Service Control
<a name="sap-hana-pacemaker-sles-hana-control"></a>

Modify how SAP HANA services are managed to enable cluster takeover and operation.

**Topics**
+ [Add sidadm to haclient Group](#_add_sidadm_to_haclient_group)
+ [Modify SAP Profile for HANA](#_modify_sap_profile_for_hana)
+ [Configure SAPHanaSR Cluster Hook for Optimized Cluster Response](#hook_saphanasr)
+ [Configure susTkOver Cluster Hook to Ensure Cluster Awareness of Manual Takeover](#hook_sustkover)
+ [(Optional) Configure susChkSrv Cluster Hook (Fast Dying Index Server)](#hook_suschksrv)
+ [(Optional) Configure Fast Start Option](#_optional_configure_fast_start_option)
+ [Review systemd Integration](#_review_systemd_integration)

## Add sidadm to haclient Group
<a name="_add_sidadm_to_haclient_group"></a>

The Pacemaker software creates an operating system group named haclient. To ensure proper cluster access permissions, add the <sid>adm user to this group on all cluster nodes. For example, for SID HDB, run the following command as root:

```
# usermod -a -G haclient hdbadm
```

## Modify SAP Profile for HANA
<a name="_modify_sap_profile_for_hana"></a>

To prevent automatic SAP HANA startup by the SAP start framework when an instance restarts, modify the SAP HANA instance profiles on all nodes. These profiles are located at `/usr/sap/<SID>/SYS/profile/`.

As <sid>adm, edit the SAP HANA profile `<SID>_HDB<hana_sys_nr>_<hostname>` and modify or add the `Autostart` parameter, ensuring it is set to `0`:

```
Autostart = 0
```
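
The profile change can be scripted across nodes. A minimal sketch (the function name is hypothetical; it forces `Autostart = 0`, adding the parameter if it is missing):

```
# set_autostart_off <profile-file>
# Sets Autostart = 0 in an SAP instance profile, appending the line if absent.
set_autostart_off() {
  local profile="$1"
  if grep -q '^Autostart' "$profile"; then
    sed -i 's/^Autostart.*/Autostart = 0/' "$profile"
  else
    printf 'Autostart = 0\n' >> "$profile"
  fi
}
# e.g.: set_autostart_off /usr/sap/HDB/SYS/profile/HDB_HDB00_hanahost01
```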

## Configure SAPHanaSR Cluster Hook for Optimized Cluster Response
<a name="hook_saphanasr"></a>

The SAPHanaSR hook provides immediate notification to the cluster if system replication fails, complementing the standard cluster polling mechanism. This optimization can significantly improve failover response time.

Follow these steps to configure the SAPHanaSR hook:

1.  **Verify Cluster Package** 

   The hook configuration varies based on the resource agents in use (see [Deployment Guidance](sap-hana-pacemaker-sles-references.md#deployments-sles) for details).

------
#### [ SAPHanaSR ]

   Check the expected package is installed

   ```
   # rpm -qa SAPHanaSR
   ```

   Review the man pages for more details.

   ```
   # man SAPHanaSR
   # man SAPHanaSR.py
   ```

------
#### [ SAPHanaSR-angi ]

   Check the expected package is installed

   ```
   # rpm -qa SAPHanaSR-angi
   ```

   Review the man pages for more details

   ```
   # man SAPHanaSR-angi
   # man SAPHanaSR.py
   ```

------

1.  **Confirm Hook Location** 

   By default the package is installed in `/usr/share/SAPHanaSR-angi` or `/usr/share/SAPHanaSR`. We suggest using the default location, but you can optionally copy it to a custom directory, for example `/hana/shared/myHooks`. The hook must be available on all SAP HANA cluster nodes.

1.  **Configure global.ini** 

   Update the `global.ini` file located at `/hana/shared/<SID>/global/hdb/custom/config/` on each SAP HANA cluster node. Make a backup copy before proceeding.

------
#### [ SAPHanaSR ]

   ```
   [ha_dr_provider_SAPHanaSR]
   provider = SAPHanaSR
   path = /usr/share/SAPHanaSR
   execution_order = 1
   
   [trace]
   ha_dr_saphanasr = info
   ```

**Note**  
Update the path if you have modified the package location.

------
#### [ SAPHanaSR-angi ]

   ```
   [ha_dr_provider_sushanasr]
   provider = susHanaSR
   path = /usr/share/SAPHanaSR-angi
   execution_order = 1
   
   [trace]
   ha_dr_sushanasr = info
   ```

**Note**  
Update the path if you have modified the package location.

------

1.  **Configure Sudo Privileges** 

   The SAPHanaSR Python hook requires sudo privileges for the <sid>adm user to access cluster attributes:

   1. Create a new sudoers file as root user in `/etc/sudoers.d/`, for example `60-SAPHanaSR-hook` 

   1. Use visudo to safely edit the new file `visudo /etc/sudoers.d/60-SAPHanaSR-hook` 

   1. Add the following configuration, replacing <sid> with lowercase system ID and <SID> with uppercase system ID:

      ```
      Cmnd_Alias SITE_SOK = /usr/sbin/crm_attribute -n hana_<sid>_site_srHook_[a-zA-Z0-9_]* -v SOK -t crm_config -s SAPHanaSR
      Cmnd_Alias SITE_SFAIL = /usr/sbin/crm_attribute -n hana_<sid>_site_srHook_[a-zA-Z0-9_]* -v SFAIL -t crm_config -s SAPHanaSR
      Cmnd_Alias HOOK_HELPER  = /usr/sbin/SAPHanaSR-hookHelper --sid=<SID> --case=checkTakeover
      <sid>adm ALL=(ALL) NOPASSWD: SITE_SOK, SITE_SFAIL, HOOK_HELPER
      ```

      For example:

      ```
      Cmnd_Alias SITE_SOK = /usr/sbin/crm_attribute -n hana_hdb_site_srHook_[a-zA-Z0-9_]* -v SOK -t crm_config -s SAPHanaSR
      Cmnd_Alias SITE_SFAIL = /usr/sbin/crm_attribute -n hana_hdb_site_srHook_[a-zA-Z0-9_]* -v SFAIL -t crm_config -s SAPHanaSR
      Cmnd_Alias HOOK_HELPER  = /usr/sbin/SAPHanaSR-hookHelper --sid=HDB --case=checkTakeover
      hdbadm ALL=(ALL) NOPASSWD: SITE_SOK, SITE_SFAIL, HOOK_HELPER
      ```
**Note**  
The syntax uses a glob expression, which allows it to adapt to different HSR site names while avoiding broad wildcards. This ensures both flexibility and security. A modification is still required if the SID changes. Replace `<sid>` with the lowercase SID and `<SID>` with the uppercase SID that matches your installation.

1.  **Reload Configuration** 

   As <sid>adm reload the changes to `global.ini` using either a HANA restart or the command:

   ```
   hdbadm> hdbnsutil -reconfig
   ```

1.  **Verify Hook Configuration** 

   As <sid>adm, verify the hook is loaded:

   ```
   hdbadm> cdtrace
   hdbadm> grep "loading HA/DR Provider" nameserver*
   ```

1.  **Replicate Configuration to Secondary** 

   1. Confirm that global.ini changes have been replicated to the secondary system

   1. Create corresponding sudoers.d file on the secondary system
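
The sudoers content from step 4 can be generated for any SID to avoid substitution mistakes. A sketch using the same command aliases as above (the helper name is hypothetical):

```
# gen_saphanasr_sudoers <SID>
# Prints the sudoers content for the given (uppercase) SID to stdout.
gen_saphanasr_sudoers() {
  local SID="$1" sid
  sid=$(printf '%s' "$SID" | tr '[:upper:]' '[:lower:]')
  cat <<EOF
Cmnd_Alias SITE_SOK = /usr/sbin/crm_attribute -n hana_${sid}_site_srHook_[a-zA-Z0-9_]* -v SOK -t crm_config -s SAPHanaSR
Cmnd_Alias SITE_SFAIL = /usr/sbin/crm_attribute -n hana_${sid}_site_srHook_[a-zA-Z0-9_]* -v SFAIL -t crm_config -s SAPHanaSR
Cmnd_Alias HOOK_HELPER  = /usr/sbin/SAPHanaSR-hookHelper --sid=${SID} --case=checkTakeover
${sid}adm ALL=(ALL) NOPASSWD: SITE_SOK, SITE_SFAIL, HOOK_HELPER
EOF
}
# e.g. as root: gen_saphanasr_sudoers HDB > /etc/sudoers.d/60-SAPHanaSR-hook && visudo -c
```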

## Configure susTkOver Cluster Hook to Ensure Cluster Awareness of Manual Takeover
<a name="hook_sustkover"></a>

susTkOver.py prevents a manual takeover of the HANA primary if the SAP HANA multi-state resource (managed by SAPHana or SAPHanaController) is active, unless the cluster is set into maintenance mode or the Linux cluster is stopped.

For more details:

```
# man susTkOver.py
```

In addition to the steps for the previous hook, add the following entry to the `global.ini` on each node. A restart of SAP HANA is required:

```
[ha_dr_provider_susTkOver]
provider = susTkOver
path = /usr/share/SAPHanaSR
execution_order = 2
sustkover_timeout = 30

[trace]
ha_dr_sustkover = info
```

## (Optional) Configure susChkSrv Cluster Hook (Fast Dying Index Server)
<a name="hook_suschksrv"></a>

In the default configuration, a failure of the SAP HANA indexserver results in the process being restarted locally, even when protected by a cluster. The time taken to stop the process and reload the data into memory can impact both the Recovery Time Objective (RTO) and performance. The SAP HANA hook susChkSrv provides an option to trigger an action, such as fencing or a shutdown, based on the HA/DR provider hook method srServiceStateChanged(), which in turn triggers a failover.

**Important**  
This hook can be configured using several different options. We suggest consulting the man page or SUSE documentation and evaluating the best option for your setup.

```
# man susChkSrv.py
```

Test the scenario with a production-sized system to assess whether the time to resume operations aligns with your non-functional requirements.

For more information, see SUSE Blog: [Emergency Braking for SAP HANA Dying Index Server](https://www.suse.com/c/emergency-braking-for-sap-hana-dying-indexserver/) 

## (Optional) Configure Fast Start Option
<a name="_optional_configure_fast_start_option"></a>

Although out of scope of this document, the SAP HANA Fast Restart option uses tmpfs file systems to preserve and reuse MAIN data fragments to speed up SAP HANA restarts. This is effective in cases where the operating system is not restarted, including local restarts of the indexserver.

The Fast Restart option may be an alternative to the susChkSrv hook.

For more information, see SAP Documentation: [SAP HANA Fast Restart Option](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/ce158d28135147f099b761f8b1ee43fc.html) 

## Review systemd Integration
<a name="_review_systemd_integration"></a>

Review SAP HANA version and systemd version to determine whether the prerequisites for systemd are available:

```
sidadm> systemctl --version
```

**Operating System versions**
+ SUSE Linux Enterprise Server 15 (systemd version 234)

**SAP HANA Revisions**
+ SAP HANA SPS07 revision 70
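
The systemd prerequisite can be checked with a small script. A sketch (the helper name is hypothetical; it strips any non-numeric suffix from the reported version and compares against the minimum of 234):

```
# systemd_meets_min <version-string>  -> success if the version is >= 234
systemd_meets_min() { [ "${1%%[!0-9]*}" -ge 234 ]; }
# e.g.: systemd_meets_min "$(systemctl --version | awk 'NR==1{print $2}')" && echo "systemd version OK"
```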

When using an SAP HANA version with systemd integration (SPS07 and later), you must run the following steps to prevent the nodes from being fenced when Amazon EC2 instances are intentionally stopped. See SAP Note [3189534 - Linux: systemd integration for sapstartsrv and SAP HANA](https://me.sap.com/notes/3189534).

1. Verify if SAP HANA is integrated with systemd. If it is integrated, a systemd service name, such as `SAP<SID>_<hana_sys_nr>.service` is present. For example, for SID HDB and instance number 00, `SAPHDB_00.service` is the service name.

   Use the following command as root to find SAP systemd services:

   ```
   # systemctl list-unit-files | grep -i sap
   ```

1. Create a pacemaker service drop-in file:

   ```
   # mkdir -p /etc/systemd/system/pacemaker.service.d/
   ```

1. Create the file /etc/systemd/system/pacemaker.service.d/50-saphana.conf with the following content:

   ```
   [Unit]
   Description=pacemaker needs SAP instance service
   Documentation=man:SAPHanaSR_basic_cluster(7)
   Wants=SAP<SID>_<hana_sys_nr>.service
   After=SAP<SID>_<hana_sys_nr>.service
   ```

1. Enable the drop-in file by reloading systemd:

   ```
   # systemctl daemon-reload
   ```

1. Verify that the change is active:

   ```
   # systemctl show pacemaker.service | grep SAP<SID>_<hana_sys_nr>
   ```

   For example, for SID HDB and instance number 00, the following output is expected:

   ```
   # systemctl show pacemaker.service | grep SAPHDB_00
   Wants=SAPHDB_00.service resource-agents-deps.target dbus.service
   After=system.slice network.target corosync.service resource-agents-deps.target basic.target rsyslog.service SAPHDB_00.service systemd-journald.socket sysinit.target time-sync.target dbus.service sbd.service
   ```
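
Steps 2 through 4 can be combined in a small script. A sketch (the function name is hypothetical; the optional third argument exists only so the function can be exercised outside `/etc`):

```
# create_pacemaker_dropin <SID> <instance_nr> [target_dir]
# Writes the 50-saphana.conf drop-in so pacemaker waits for the SAP instance service.
create_pacemaker_dropin() {
  local SID="$1" NR="$2" dir="${3:-/etc/systemd/system/pacemaker.service.d}"
  mkdir -p "$dir"
  cat > "$dir/50-saphana.conf" <<EOF
[Unit]
Description=pacemaker needs SAP instance service
Documentation=man:SAPHanaSR_basic_cluster(7)
Wants=SAP${SID}_${NR}.service
After=SAP${SID}_${NR}.service
EOF
}
# e.g. as root: create_pacemaker_dropin HDB 00 && systemctl daemon-reload
```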

# Cluster Node Setup
<a name="sap-hana-pacemaker-sles-cluster-node-setup"></a>

Establish cluster communication between nodes using Corosync and configure required authentication.

**Topics**
+ [Deploy a Majority Maker Node (Scale-Out Clusters Only)](#_deploy_a_majority_maker_node_scale_out_clusters_only)
+ [Change the hacluster Password](#_change_the_hacluster_password)
+ [Setup Passwordless Authentication](#_setup_passwordless_authentication)
+ [Configure the Cluster Nodes](#_configure_the_cluster_nodes)
+ [Modify Generated Corosync Configuration](#_modify_generated_corosync_configuration)
+ [Verify Corosync Configuration](#_verify_corosync_configuration)
+ [Configure Cluster Services](#_configure_cluster_services)
+ [Verify Cluster Status](#_verify_cluster_status)

## Deploy a Majority Maker Node (Scale-Out Clusters Only)
<a name="_deploy_a_majority_maker_node_scale_out_clusters_only"></a>

**Note**  
Only required for clusters with more than two nodes.

When deploying an SAP HANA Scale-Out cluster in AWS, you must include a majority maker node in a third Availability Zone (AZ). The majority maker (tie-breaker) node ensures the cluster remains operational if one AZ fails by preserving the quorum. For the Scale-Out cluster to function, at least all nodes in one AZ plus the majority maker node must be running. If this minimum requirement is not met, the cluster loses its quorum state and any remaining SAP HANA nodes are fenced.

The majority maker requires a minimum EC2 instance configuration of 2 vCPUs, 2 GB RAM, and 50 GB disk space; this instance is exclusively used for quorum management and does not host an SAP HANA database or any other cluster resources.

## Change the hacluster Password
<a name="_change_the_hacluster_password"></a>

On all cluster nodes, change the password of the operating system user hacluster:

```
# passwd hacluster
```

## Setup Passwordless Authentication
<a name="_setup_passwordless_authentication"></a>

For a more comprehensive and easily consumable view of cluster activity, SUSE provides additional reporting tools. Many of these tools require access to both nodes without entering a password. SUSE recommends performing this setup for the root user.

For more details, see the "Configuration to collect cluster report as root with root SSH access between cluster nodes" section in SUSE Documentation [Usage of hb_report for SLES HAE](https://www.suse.com/support/kb/doc/?id=000017501).

**Warning**  
Review the security implications for your organization, including root access controls and network segmentation, before implementing this configuration.

## Configure the Cluster Nodes
<a name="_configure_the_cluster_nodes"></a>

Initialize the cluster framework on the first node, including all known cluster nodes.

On the primary node as root, run:

```
# crm cluster init -u -n <cluster_name> -N <hostname_1> -N <hostname_2>
```

 *Example using values from [Parameter Reference](sap-hana-pacemaker-sles-parameters.md) *:

```
hanahost01:~ # crm cluster init -u -n myCluster -N hanahost01 -N hanahost02
INFO: Detected "amazon-web-services" platform
INFO: Loading "default" profile from /etc/crm/profiles.yml
INFO: Configure Corosync (unicast):
  This will configure the cluster messaging layer.  You will need
  to specify a network address over which to communicate (default
  is eth0's network, but you can use the network address of any
  active interface).

Address for ring0 [10.2.10.1]
Port for ring0 [5405]

Do you wish to use SBD (y/n)? n
WARNING: Not configuring SBD - STONITH will be disabled.

Do you wish to configure a virtual IP address (y/n)? n

Do you want to configure QDevice (y/n)? n
INFO: Done (log saved to /var/log/crmsh/crmsh.log)

INFO: Adding node hanahost02 to cluster
INFO: Running command on hanahost02: crm cluster join -y -c root@hanahost01
...
INFO: Done (log saved to /var/log/crmsh/crmsh.log)
```

This command:
+ Initializes a two-node cluster named `myCluster` 
+ Configures unicast communication (`-u`)
+ Sets up the basic corosync configuration
+ Automatically joins the second node to the cluster

Note the following:
+ We do not configure SBD, as the AWS EC2 STONITH agent (`external/ec2`) will be used for fencing in AWS environments.
+ QDevice configuration is possible but not covered in this document. Refer to [SUSE Linux Enterprise High Availability Documentation - QDevice and QNetD](https://documentation.suse.com/en-us/sle-ha/15-SP7/html/SLE-HA-all/cha-ha-qdevice.html).
+ For clusters with more than two nodes, additional nodes can be added either during initialization with additional `-N <hostname_3>` parameters, or later using the following command on each new node:

  ```
  # crm cluster join -c <hostname_1>
  ```

## Modify Generated Corosync Configuration
<a name="_modify_generated_corosync_configuration"></a>

After initializing the cluster, the generated corosync configuration requires some modification to be optimized for cloud environments.

 **1. Edit the corosync configuration:** 

```
# vi /etc/corosync/corosync.conf
```

The generated file typically looks like this:

```
# Please read the corosync.conf.5 manual page
totem {
        version: 2
        cluster_name: myCluster
        clear_node_high_bit: yes
        interface {
                ringnumber: 0
                mcastport: 5405
                ttl: 1
        }

        transport: udpu
        crypto_hash: sha1
        crypto_cipher: aes256
        token: 5000     # This needs to be changed
        join: 60
        max_messages: 20
        token_retransmits_before_loss_const: 10
}

logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        logfile: /var/log/cluster/corosync.log
        to_syslog: yes
        debug: off
        timestamp: on
        logger_subsys {
                subsys: QUORUM
                debug: off
        }

}

nodelist {
    node {
        ring0_addr: <node1_primary_ip>    # Only single ring configured
        nodeid: 1
    }
    node {
        ring0_addr: <node2_primary_ip>    # Only single ring configured
        nodeid: 2
    }
}

quorum {

        # Enable and configure quorum subsystem (default: off)
        # see also corosync.conf.5 and votequorum.5
        provider: corosync_votequorum
        expected_votes: 2
        two_node: 1
}

```

 **2. Modify the configuration to add the second ring and optimize settings:** 

```
totem {
    token: 15000           # Changed from 5000 to 15000
    rrp_mode: passive      # Added for dual ring support
}

nodelist {
    node {
        ring0_addr: <node1_primary_ip>     # Primary network
        ring1_addr: <node1_secondary_ip>   # Added secondary network
        nodeid: 1
    }
    node {
        ring0_addr: <node2_primary_ip>     # Primary network
        ring1_addr: <node2_secondary_ip>   # Added secondary network
        nodeid: 2
    }
}
```

 *Example IP configuration:* 


| Network Interface | Node 1 | Node 2 | 
| --- | --- | --- | 
|  `ring0_addr`  |  10.2.10.1  |  10.2.20.1  | 
|  `ring1_addr`  |  10.2.10.2  |  10.2.20.2  | 
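
Before syncing the file, a quick sanity check that both rings are defined for every node can help catch editing mistakes (hypothetical helper; for a two-node cluster each count should be 2):

```
# count_ring_entries <corosync.conf>
# Prints how many ring0_addr and ring1_addr entries the file contains.
count_ring_entries() {
  printf 'ring0: %s ring1: %s\n' \
    "$(grep -c 'ring0_addr' "$1")" "$(grep -c 'ring1_addr' "$1")"
}
```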

 **3. Synchronize the modified configuration to all nodes:** 

```
# csync2 -f /etc/corosync/corosync.conf
```

 **4. Restart the cluster** 

```
# crm cluster restart --all
```

## Verify Corosync Configuration
<a name="_verify_corosync_configuration"></a>

Verify network rings are active:

```
# corosync-cfgtool -s
```

 *Example output*:

```
Printing ring status.
Local node ID 1
RING ID 0
        id      = 10.2.10.1
        status  = ring 0 active with no faults
RING ID 1
        id      = 10.2.10.2
        status  = ring 1 active with no faults
```

Both network rings should report "active with no faults". If either ring is missing, review the corosync configuration and confirm that the `/etc/corosync/corosync.conf` changes have been synced to the secondary node; copy the file manually if required. Restart the cluster if needed.
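
The check can also be scripted, for example for monitoring, by flagging any status line that does not report "no faults" (hypothetical helper):

```
# rings_healthy <cfgtool-output>
# Succeeds only if every "status" line in the output reports "no faults".
rings_healthy() {
  ! printf '%s\n' "$1" | grep 'status' | grep -qv 'no faults'
}
# e.g.: rings_healthy "$(corosync-cfgtool -s)" || echo "ring fault detected"
```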

## Configure Cluster Services
<a name="_configure_cluster_services"></a>

Enable pacemaker to start automatically after reboot:

```
# systemctl enable pacemaker
```

Enabling pacemaker also handles corosync through service dependencies. The cluster will start automatically after reboot. For troubleshooting scenarios, you can choose to manually start services after boot instead.

## Verify Cluster Status
<a name="_verify_cluster_status"></a>

 **1. Check pacemaker service status:** 

```
# systemctl status pacemaker
```

 **2. Verify cluster status:** 

```
# crm_mon -1
```

 *Example output*:

```
Cluster Summary:
  * Stack: corosync
  * Current DC: hanahost01 (version 2.1.5+20221208.a3f44794f) - partition with quorum
  * 2 nodes configured
  * 0 resource instances configured

Node List:
  * Online: [ hanahost01 hanahost02 ]

Active Resources:
  * No active resources
```

# Cluster Configuration
<a name="sap-hana-pacemaker-sles-cluster-config"></a>

Bootstrap the cluster and configure all required cluster resources and constraints.

**Topics**
+ [Prepare for Resource Creation](#_prepare_for_resource_creation)
+ [Cluster Bootstrap](#cluster-bootstrap)
+ [Create STONITH Fencing Resource](#resource-stonith)
+ [Create Overlay IP Resources](#resource-overlayip)
+ [Create SAPHanaTopology Resource](#resource-saphanatop)
+ [Create SAPHANA Resource (based on resource agent SAPHana or SAPHanaController)](#resource-saphana)
+ [Create Resource Constraints](#resource-constraints)
+ [Activate Cluster](#_activate_cluster)
+ [Reset Configuration – Optional](#_reset_configuration_optional)

## Prepare for Resource Creation
<a name="_prepare_for_resource_creation"></a>

To ensure that the cluster does not perform unexpected actions during setup of resources and configuration, set the maintenance mode to true.

Run the following command to put the cluster in maintenance mode:

```
# crm maintenance on
```

To verify the current maintenance state:

```
# crm status
```

**Note**  
There are two types of maintenance mode:  
Cluster-wide maintenance (set with `crm maintenance on`)
Node-specific maintenance (set with `crm node maintenance <nodename>`)
Always use cluster-wide maintenance mode when making configuration changes. For node-specific operations such as hardware maintenance, refer to the Operations section for proper procedures.  
To disable maintenance mode after configuration is complete:  

```
# crm maintenance off
```

## Cluster Bootstrap
<a name="cluster-bootstrap"></a>

### Configure Cluster Properties
<a name="_configure_cluster_properties"></a>

Configure cluster properties to establish fencing behavior and resource failover settings:

```
# crm configure property stonith-enabled="true"
# crm configure property stonith-timeout="600"
# crm configure property priority-fencing-delay="20"
# crm configure property stonith-action="off"
```
+ The **priority-fencing-delay** is recommended for protecting SAP HANA nodes during network partitioning events. When a cluster partition occurs, this delay gives preference to nodes hosting higher priority resources, with SAP HANA Primary (promoted) instances receiving additional priority weighting. This helps ensure the Primary HANA node survives in split-brain scenarios. The recommended 20 second priority-fencing-delay works in conjunction with the `pcmk_delay_max` (10 seconds) configured in the stonith resource, providing a total potential delay of up to 30 seconds before fencing occurs.
+ Setting **stonith-action="off"** ensures fenced nodes remain down until manually investigated, preventing potentially compromised nodes from automatically rejoining the cluster. While "reboot" is available as an alternative if automated recovery is preferred, "off" is recommended for SAP HANA clusters to prevent potential data corruption and enable root cause analysis.

To verify your cluster property settings:

```
# crm configure show property
```

### Configure Resource Defaults
<a name="_configure_resource_defaults"></a>

Configure resource default behaviors:

```
# crm configure rsc_defaults resource-stickiness="1000"
# crm configure rsc_defaults migration-threshold="5000"
```
+ The **resource-stickiness** value prevents unnecessary resource movement, effectively setting a "cost" for moving resources. A value of 1000 strongly encourages resources to remain on their current node, avoiding the downtime associated with movement.
+ The **migration-threshold** of 5000 ensures the cluster will attempt to recover a resource on the same node many times before declaring that node unsuitable for hosting the resource.

Individual resources may override these defaults with their own defined values.

To verify your resource default settings:

```
# crm configure show rsc_defaults
```

### Configure Operation Defaults
<a name="_configure_operation_defaults"></a>

Configure operation timeout defaults:

```
# crm configure op_defaults timeout="600"
```
+ The **op\_defaults timeout** ensures all cluster operations have a reasonable default timeout of 600 seconds. Individual resources may override this with their own timeout values.

To verify your operation default settings:

```
# crm configure show op_defaults
```

## Create STONITH Fencing Resource
<a name="resource-stonith"></a>

An AWS STONITH resource agent is recommended for AWS deployments on SUSE as it leverages the AWS API to safely fence failed or incommunicable nodes by stopping the EC2 instances. See [Pacemaker - STONITH Fencing Agent](sap-hana-pacemaker-sles-concepts.md#fencing-sles).
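Conceptually, the agent identifies the peer instance by its cluster tag and stops it through the EC2 API. The following is only an illustration of the equivalent AWS CLI calls, with placeholder values; the agent performs these calls itself, so you do not run them during setup:

```
# aws ec2 describe-instances --filters "Name=tag:<cluster_tag>,Values=<peer_hostname>" \
#     --query "Reservations[].Instances[].InstanceId"
# aws ec2 stop-instances --instance-ids <instance_id> --force
```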

Create the STONITH resource using the **`external/ec2`** resource agent:

```
# crm configure primitive <stonith_resource_name> stonith:external/ec2 \
params tag="<cluster_tag>" profile="<cli_cluster_profile>" pcmk_delay_max="10" \
op start interval="0" timeout="180" \
op stop interval="0" timeout="180" \
op monitor interval="300" timeout="60"
```

Details:
+  **tag** - EC2 instance tag key name that associates instances with this cluster configuration. This tag key must be unique within the AWS account and have a value which matches the instance hostname. See [Create Amazon EC2 Resource Tags Used by Amazon EC2 STONITH Agent](sap-hana-pacemaker-sles-ec2-configuration.md#create-cluster-tags) for EC2 instance tagging configuration.
+  **profile** - (optional) AWS CLI profile name for API authentication. Verify profile exists with `aws configure list-profiles`. If a profile is not explicitly configured the default profile will be used.
+  **pcmk\_delay\_max** - Random delay before fencing operations. Works in conjunction with the cluster property `priority-fencing-delay` to prevent simultaneous fencing. Historically set to higher values (45s), but with `priority-fencing-delay` now handling primary node protection, a lower value (10s) is sufficient.
+  *Example using values from [Parameter Reference](sap-hana-pacemaker-sles-parameters.md)*:

```
# crm configure primitive res_stonith_ec2 stonith:external/ec2 \
params tag="pacemaker" profile="cluster" \
pcmk_delay_max="10" \
op start interval="0" timeout="180" \
op stop interval="0" timeout="180" \
op monitor interval="300" timeout="60"
```

## Create Overlay IP Resources
<a name="resource-overlayip"></a>

This resource ensures client connections follow the SAP HANA primary instance during failover by updating AWS route table entries. It manages an overlay IP address that always points to the active SAP HANA database.

Create the IP resource:

```
# crm configure primitive rsc_ip_<SID>_HDB<hana_sys_nr> ocf:heartbeat:aws-vpc-move-ip \
params ip="<hana_overlayip>" \
routing_table="<routetable_id>" \
interface="eth0" \
profile="<cli_cluster_profile>" \
op start interval="0" timeout="180" \
op stop interval="0" timeout="180" \
op monitor interval="60" timeout="60"
```

Details:
+  **ip** - Overlay IP address that will be used to connect to the Primary SAP HANA database. See [Overlay IP Concept](sap-hana-pacemaker-sles-concepts.md#overlay-ip-sles) 
+  **routing\_table** - AWS route table ID(s) that need to be updated. Multiple route tables can be specified using commas (for example, `routing_table=rtb-xxxxxroutetable1,rtb-xxxxxroutetable2`). Ensure initial entries have been created following [Add VPC Route Table Entries for Overlay IPs](sap-hana-pacemaker-sles-infra-setup.md#rt-sles) 
+  **interface** - Network interface for the IP address (typically eth0)
+  **profile** - (optional) AWS CLI profile name for API authentication. Verify profile exists with `aws configure list-profiles`. If a profile is not explicitly configured the default profile will be used.
+  **awscli** - (optional) Path to the AWS CLI executable. The default path is `/usr/bin/aws`. Only specify this parameter if the AWS CLI is installed in a different location. To confirm the path on your system, run `which aws`.
+  *Example using values from [Parameter Reference](sap-hana-pacemaker-sles-parameters.md)*:  
**Example**  

  ```
  # crm configure primitive rsc_ip_HDB_HDB00 ocf:heartbeat:aws-vpc-move-ip \
  params ip="172.16.52.1" \
  routing_table="rtb-xxxxxroutetable1" \
  interface="eth0" \
  profile="cluster" \
  op start interval="0" timeout="180" \
  op stop interval="0" timeout="180" \
  op monitor interval="60" timeout="60"
  ```
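To illustrate the comma-separated `routing_table` format, here is a small sketch that splits the list into the individual route table IDs the agent updates (placeholder IDs from this guide, not real resources):

```shell
# Split a comma-separated routing_table value into individual route table IDs.
ROUTING_TABLE="rtb-xxxxxroutetable1,rtb-xxxxxroutetable2"
COUNT=0
for rtb in $(printf '%s' "$ROUTING_TABLE" | tr ',' ' '); do
  COUNT=$((COUNT + 1))
  echo "route table to update: $rtb"
done
echo "route tables found: $COUNT"
```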

**For Active/Active Read Enabled**  
If you are using the `logreplay_readenabled` operation mode and require that your secondary is accessible via an overlay IP, you can create an additional IP resource.

```
# crm configure primitive rsc_ip_<SID>_HDB<hana_sys_nr>_readenabled ocf:heartbeat:aws-vpc-move-ip \
params ip="<readenabled_overlayip>" \
routing_table="<routetable_id>" \
interface="eth0" \
profile="<cli_cluster_profile>" \
op start interval="0" timeout="180" \
op stop interval="0" timeout="180" \
op monitor interval="60" timeout="60"
```
+  *Example using values from [Parameter Reference](sap-hana-pacemaker-sles-parameters.md)*:  
**Example**  

  ```
  # crm configure primitive rsc_ip_HDB_HDB00_readenabled ocf:heartbeat:aws-vpc-move-ip \
  params ip="172.16.52.2" \
  routing_table="rtb-xxxxxroutetable1" \
  interface="eth0" \
  profile="cluster" \
  op start interval="0" timeout="180" \
  op stop interval="0" timeout="180" \
  op monitor interval="60" timeout="60"
  ```

**For Shared VPC**  
If your deployment uses a shared VPC, two additional parameters are required.

```
# crm configure primitive rsc_ip_<SID>_HDB<hana_sys_nr> ocf:heartbeat:aws-vpc-move-ip \
params ip="<hana_overlayip>" routing_table=<routetable_id> interface=eth0 \
profile="<cli_cluster_profile>" lookup_type=NetworkInterfaceId \
routing_table_role="arn:aws:iam::<sharing_vpc_account_id>:role/<sharing_vpc_account_cluster_role>" \
op start interval="0" timeout="180" \
op stop interval="0" timeout="180" \
op monitor interval="60" timeout="60"
```

Additional details:
+  **lookup\_type** = NetworkInterfaceId
+  **routing\_table\_role** = "arn:aws:iam::<sharing_vpc_account_id>:role/<sharing_vpc_account_cluster_role>"

## Create SAPHanaTopology Resource
<a name="resource-saphanatop"></a>

The SAPHanaTopology resource agent helps manage high availability for SAP HANA databases with system replication. It analyzes the SAP HANA topology and reports findings via node status attributes. These attributes are used by either the SAPHana or SAPHanaController resource agents to control the SAP HANA databases. SAPHanaTopology starts and monitors the local saphostagent, leveraging SAP interfaces like landscapeHostConfiguration.py, hdbnsutil, and saphostctrl to gather information about system status, roles, and configuration.

### SAPHanaSR-angi and Classic Deployments
<a name="_saphanasr_angi_and_classic_deployments"></a>

The following configuration applies to both scale-up and scale-out deployments. For detailed documentation on the resource, review the man page:

```
# man ocf_suse_SAPHanaTopology
```

------
#### [ For scale-up (2-node) ]

For the primitive:

```
# crm configure primitive rsc_SAPHanaTopology_<SID>_HDB<hana_sys_nr> ocf:suse:SAPHanaTopology \
params SID="<SID>" \
InstanceNumber="<hana_sys_nr>" \
op start interval="0" timeout="600" \
op stop interval="0" timeout="300" \
op monitor interval="10" timeout="600"
```

For the clone:

```
# crm configure clone cln_SAPHanaTopology_<SID>_HDB<hana_sys_nr> rsc_SAPHanaTopology_<SID>_HDB<hana_sys_nr> \
meta clone-node-max="1" interleave="true" clone-max="2"
```
+  *Example using values from [Parameter Reference](sap-hana-pacemaker-sles-parameters.md)*:  
**Example**  

  ```
  # crm configure primitive rsc_SAPHanaTopology_HDB_HDB00 ocf:suse:SAPHanaTopology \
  params SID="HDB" \
  InstanceNumber="00" \
  op start interval="0" timeout="600" \
  op stop interval="0" timeout="300" \
  op monitor interval="10" timeout="600"
  
  # crm configure clone cln_SAPHanaTopology_HDB_HDB00 rsc_SAPHanaTopology_HDB_HDB00 \
  meta clone-node-max="1" interleave="true" clone-max="2"
  ```

------
#### [ For scale-out ]

For the primitive:

```
# crm configure primitive rsc_SAPHanaTopology_<SID>_HDB<hana_sys_nr> ocf:suse:SAPHanaTopology \
params SID="<SID>" InstanceNumber="<hana_sys_nr>" \
op start interval="0" timeout="600" \
op stop interval="0" timeout="300" \
op monitor interval="10" timeout="600"
```

For the clone:

```
# crm configure clone cln_SAPHanaTopology_<SID>_HDB<hana_sys_nr> rsc_SAPHanaTopology_<SID>_HDB<hana_sys_nr> \
meta clone-node-max="1" interleave="true" clone-max="<number-of-nodes>"
```
+  *Example using values from [Parameter Reference](sap-hana-pacemaker-sles-parameters.md)*:  
**Example**  

  ```
  # crm configure primitive rsc_SAPHanaTopology_HDB_HDB00 ocf:suse:SAPHanaTopology \
  params SID="HDB" InstanceNumber="00" \
  op start interval="0" timeout="600" \
  op stop interval="0" timeout="300" \
  op monitor interval="10" timeout="600"
  
  # crm configure clone cln_SAPHanaTopology_HDB_HDB00 rsc_SAPHanaTopology_HDB_HDB00 \
  meta clone-node-max="1" interleave="true" clone-max="6"
  ```

------

Details:
+  **SID** - SAP System ID for the HANA instance
+  **InstanceNumber** - Instance number of the SAP HANA instance
+  **clone-node-max** - Defines how many copies of the resource agent can be started on a single node (set to 1)
+  **interleave** - Enables parallel starting of dependent clone resources on the same node (set to true)
+  **clone-max** - Defines the total number of clone instances that can be started in the cluster (for example, use 2 for scale-up, or 6 for scale-out with 3 nodes per site; do not include the majority maker node)
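The clone-max rule can be checked with simple arithmetic. This sketch assumes a hypothetical layout of 3 worker nodes per site across 2 sites, with the majority maker excluded from the count:

```shell
# clone-max = worker nodes per site x number of sites (majority maker excluded).
NODES_PER_SITE=3
SITES=2
CLONE_MAX=$(( NODES_PER_SITE * SITES ))
echo "clone-max=$CLONE_MAX"
```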

## Create SAPHana Resource (based on resource agent SAPHana or SAPHanaController)
<a name="resource-saphana"></a>

The SAP HANA resource agents manage system replication and failover between SAP HANA databases. These agents control start, stop, and monitoring operations while checking synchronization status to maintain data consistency. They leverage SAP interfaces including sapcontrol, landscapeHostConfiguration, hdbnsutil, systemReplicationStatus, and saphostctrl. All configurations work in conjunction with the SAPHanaTopology agent, which gathers information about the system replication status across cluster nodes.

Choose the appropriate resource agent configuration based on your SAP HANA architecture:

### SAPHanaSR-angi Deployments (Available in SLES 15 SP4 and Above)
<a name="_saphanasr_angi_deployments_available_in_sles_15_sp4"></a>

Available and recommended for new deployments on SLES 15 SP4 and above. The SAPHanaController resource agent with the next generation system replication architecture (SAPHanaSR-angi) provides improved integration and management capabilities for both scale-up and scale-out deployments.

For detailed documentation on the resource, review the man page:

```
# man ocf_suse_SAPHanaController
```

------
#### [ For scale-up (2-node) ]

Create the primitive

```
# crm configure primitive rsc_SAPHanaController_<SID>_HDB<hana_sys_nr> ocf:suse:SAPHanaController \
params SID="<SID>" \
InstanceNumber="<hana_sys_nr>" \
PREFER_SITE_TAKEOVER="true" \
DUPLICATE_PRIMARY_TIMEOUT="7200" \
AUTOMATED_REGISTER="true" \
op start interval="0" timeout="3600" \
op stop interval="0" timeout="3600" \
op promote interval="0" timeout="3600" \
op monitor interval="60" role="Promoted" timeout="700" \
op monitor interval="61" role="Unpromoted" timeout="700" \
meta priority="100"
```

Create the clone

```
# crm configure clone msl_SAPHanaController_<SID>_HDB<hana_sys_nr> rsc_SAPHanaController_<SID>_HDB<hana_sys_nr> \
meta clone-node-max="1" promotable="true" interleave="true" clone-max="2"
```
+  *Example using values from [Parameter Reference](sap-hana-pacemaker-sles-parameters.md)*:  
**Example**  

  ```
  # crm configure primitive rsc_SAPHanaController_HDB_HDB00 ocf:suse:SAPHanaController \
  params SID="HDB" \
  InstanceNumber="00" \
  PREFER_SITE_TAKEOVER="true" \
  DUPLICATE_PRIMARY_TIMEOUT="7200" \
  AUTOMATED_REGISTER="true" \
  op start interval="0" timeout="3600" \
  op stop interval="0" timeout="3600" \
  op promote interval="0" timeout="3600" \
  op monitor interval="60" role="Promoted" timeout="700" \
  op monitor interval="61" role="Unpromoted" timeout="700" \
  meta priority="100"
  # crm configure clone msl_SAPHanaController_HDB_HDB00 rsc_SAPHanaController_HDB_HDB00 \
  meta clone-node-max="1" promotable="true" interleave="true" clone-max="2"
  ```

------
#### [ For scale-out ]

Create the primitive

```
# crm configure primitive rsc_SAPHanaController_<SID>_HDB<hana_sys_nr> ocf:suse:SAPHanaController \
params SID="<SID>" \
InstanceNumber="<hana_sys_nr>" \
PREFER_SITE_TAKEOVER="true" \
DUPLICATE_PRIMARY_TIMEOUT="7200" \
AUTOMATED_REGISTER="true" \
op start interval="0" timeout="3600" \
op stop interval="0" timeout="3600" \
op promote interval="0" timeout="3600" \
op monitor interval="60" role="Promoted" timeout="700" \
op monitor interval="61" role="Unpromoted" timeout="700"
```

Create the clone

```
# crm configure clone msl_SAPHanaController_<SID>_HDB<hana_sys_nr> rsc_SAPHanaController_<SID>_HDB<hana_sys_nr> \
meta clone-node-max="1" promotable="true" interleave="true" clone-max="<number-of-nodes>"
```
+  *Example using values from [Parameter Reference](sap-hana-pacemaker-sles-parameters.md)*:  
**Example**  

  ```
  # crm configure primitive rsc_SAPHanaController_HDB_HDB00 ocf:suse:SAPHanaController \
  params SID="HDB" \
  InstanceNumber="00" \
  PREFER_SITE_TAKEOVER="true" \
  DUPLICATE_PRIMARY_TIMEOUT="7200" \
  AUTOMATED_REGISTER="true" \
  op start interval="0" timeout="3600" \
  op stop interval="0" timeout="3600" \
  op promote interval="0" timeout="3600" \
  op monitor interval="60" role="Promoted" timeout="700" \
  op monitor interval="61" role="Unpromoted" timeout="700"
  
  # crm configure clone msl_SAPHanaController_HDB_HDB00 rsc_SAPHanaController_HDB_HDB00 \
  meta clone-node-max="1" promotable="true" interleave="true" clone-max="6"
  ```

------

Details:
+  **SID** - SAP System ID for the HANA instance
+  **InstanceNumber** - Instance number of the SAP HANA instance
+  **clone-node-max** - Defines how many copies of the resource agent can be started on a single node (set to 1)
+  **interleave** - Enables parallel starting of dependent clone resources on the same node (set to true)
+  **clone-max** - Defines the total number of clone instances that can be started in the cluster (for example, use 2 for scale-up, or 6 for scale-out with 3 nodes per site; do not include the majority maker node)
+  **PREFER\_SITE\_TAKEOVER** - Defines whether a takeover to the secondary site is preferred. Review for non-standard deployments.
+  **AUTOMATED\_REGISTER** - Defines whether the former primary should be automatically registered as a secondary. Review for non-standard deployments.
+  **DUPLICATE\_PRIMARY\_TIMEOUT** - Wait time to minimize the risk of an unintended dual primary.
+  **meta priority** - Setting this to 100 works in conjunction with priority-fencing-delay to ensure proper failover order and prevent simultaneous fencing operations
+ The start and stop timeout values (3600s) may need to be increased for larger databases. Adjust these values based on your database size and observed startup/shutdown times

### Classic Deployments
<a name="_classic_deployments"></a>

For classic scale-up deployments, the SAPHana resource agent manages takeover between two SAP HANA databases. For detailed documentation on the resource, review the man page:

```
# man ocf_suse_SAPHana
```

------
#### [ For scale-up (2-node) ]

Create the primitive using the SAPHana Resource Agent

```
# crm configure primitive rsc_SAPHana_<SID>_HDB<hana_sys_nr> ocf:suse:SAPHana \
params SID="<SID>" \
InstanceNumber="<hana_sys_nr>" \
PREFER_SITE_TAKEOVER="true" \
DUPLICATE_PRIMARY_TIMEOUT="7200" \
AUTOMATED_REGISTER="true" \
op start interval="0" timeout="3600" \
op stop interval="0" timeout="3600" \
op promote interval="0" timeout="3600" \
op monitor interval="60" role="Master" timeout="700" \
op monitor interval="61" role="Slave" timeout="700" \
meta priority="100"
```

Create the clone

```
# crm configure ms msl_SAPHana_<SID>_HDB<hana_sys_nr> rsc_SAPHana_<SID>_HDB<hana_sys_nr> \
meta clone-node-max="1" interleave="true" clone-max="2"
```
+  *Example using values from [Parameter Reference](sap-hana-pacemaker-sles-parameters.md)*:

  ```
  # crm configure primitive rsc_SAPHana_HDB_HDB00 ocf:suse:SAPHana \
  params SID="HDB" \
  InstanceNumber="00" \
  PREFER_SITE_TAKEOVER="true" \
  DUPLICATE_PRIMARY_TIMEOUT="7200" \
  AUTOMATED_REGISTER="true" \
  op start interval="0" timeout="3600" \
  op stop interval="0" timeout="3600" \
  op promote interval="0" timeout="3600" \
  op monitor interval="60" role="Master" timeout="700" \
  op monitor interval="61" role="Slave" timeout="700" \
  meta priority="100"
  
  # crm configure ms msl_SAPHana_HDB_HDB00 rsc_SAPHana_HDB_HDB00 \
  meta clone-node-max="1" interleave="true" clone-max="2"
  ```

------
#### [ For scale-out ]

Create the primitive using the SAPHanaController Resource Agent:

```
# crm configure primitive rsc_SAPHanaController_<SID>_HDB<hana_sys_nr> ocf:suse:SAPHanaController \
params SID="<SID>" \
InstanceNumber="<hana_sys_nr>" \
PREFER_SITE_TAKEOVER="true" \
DUPLICATE_PRIMARY_TIMEOUT="7200" \
AUTOMATED_REGISTER="true" \
op start interval="0" timeout="3600" \
op stop interval="0" timeout="3600" \
op promote interval="0" timeout="3600" \
op monitor interval="60" role="Master" timeout="700" \
op monitor interval="61" role="Slave" timeout="700"
```

Create the clone

```
# crm configure ms msl_SAPHanaController_<SID>_HDB<hana_sys_nr> rsc_SAPHanaController_<SID>_HDB<hana_sys_nr> \
meta clone-node-max="1" interleave="true" clone-max="<number-of-nodes>"
```
+  *Example using values from [Parameter Reference](sap-hana-pacemaker-sles-parameters.md)*:

  ```
  # crm configure primitive rsc_SAPHanaController_HDB_HDB00 ocf:suse:SAPHanaController \
  params SID="HDB" \
  InstanceNumber="00" \
  PREFER_SITE_TAKEOVER="true" \
  DUPLICATE_PRIMARY_TIMEOUT="7200" \
  AUTOMATED_REGISTER="true" \
  op start interval="0" timeout="3600" \
  op stop interval="0" timeout="3600" \
  op promote interval="0" timeout="3600" \
  op monitor interval="60" role="Master" timeout="700" \
  op monitor interval="61" role="Slave" timeout="700"
  
  # crm configure ms msl_SAPHanaController_HDB_HDB00 rsc_SAPHanaController_HDB_HDB00 \
  meta clone-node-max="1" interleave="true" clone-max="6"
  ```

------

Details:
+  **SID** - SAP System ID for the HANA instance
+  **InstanceNumber** - Instance number of the SAP HANA instance
+  **clone-node-max** - Defines how many copies of the resource agent can be started on a single node (set to 1)
+  **interleave** - Enables parallel starting of dependent clone resources on the same node (set to true)
+  **clone-max** - Defines the total number of clone instances that can be started in the cluster (for example, use 2 for scale-up, or 6 for scale-out with 3 nodes per site; do not include the majority maker node)
+  **PREFER\_SITE\_TAKEOVER** - Defines whether a takeover to the secondary site is preferred. Review for non-standard deployments.
+  **AUTOMATED\_REGISTER** - Defines whether the former primary should be automatically registered as a secondary. Review for non-standard deployments.
+  **DUPLICATE\_PRIMARY\_TIMEOUT** - Wait time to minimize the risk of an unintended dual primary.
+  **meta priority** - Setting this to 100 works in conjunction with priority-fencing-delay to ensure proper failover order and prevent simultaneous fencing operations
+ The start and stop timeout values (3600s) may need to be increased for larger databases. Adjust these values based on your database size and observed startup/shutdown times

## Create Resource Constraints
<a name="resource-constraints"></a>

The following constraints are required.

### Order Constraint
<a name="_order_constraint"></a>

This constraint defines the start order between the SAPHanaTopology and SAPHana resources:

```
# crm configure order <order_rule_name> Optional: <SAPHanaTopology_clone> <SAPHana/SAPHanaController_Clone>
```
+  *Example* :

  ```
  # crm configure order ord_SAPHana Optional: cln_SAPHanaTopology_HDB_HDB00 msl_SAPHana_HDB_HDB00
  ```

### Colocation Constraint
<a name="_colocation_constraint"></a>

#### IP with Primary
<a name="_ip_with_primary"></a>

This constraint ensures that the IP resource which determines the target of the overlay IP runs on the node which has the primary SAP HANA role:

```
# crm configure colocation <colocation_rule_name> 2000: <ip_resource_name> <saphana/saphanacontroller name>:Master
```
+  *Example* :

  ```
  # crm configure colocation col_ip_SAPHana_Primary 2000: rsc_ip_HDB_HDB00 msl_SAPHana_HDB_HDB00:Master
  ```

#### ReadOnly IP with Secondary (Only for ReadOnly Patterns)
<a name="_readonly_ip_with_secondary_only_for_readonly_patterns"></a>

This constraint ensures that the read-enabled IP resource runs on the secondary (Unpromoted) node. When the secondary node is unavailable, the IP will move to the primary node, where read workloads will share capacity with primary workloads:

```
# crm configure colocation <colocation_rule_name> 2000: rsc_ip_<SID>_HDB<hana_sys_nr>_readenabled msl_SAPHana/SAPHanaController_<SID>_HDB<hana_sys_nr>:Unpromoted
```
+  *Example* :

  ```
  # crm configure colocation col_ip_readenabled_SAPHana_Secondary 2000: rsc_ip_HDB_HDB00_readenabled msl_SAPHana_HDB_HDB00:Unpromoted
  ```

### Location Constraint
<a name="_location_constraint"></a>

#### No SAP HANA Resources on the Majority Maker (Scale Out Only)
<a name="_no_sap_hana_resources_on_the_majority_maker_scale_out_only"></a>

This location constraint ensures that SAP HANA Resources avoid the Majority Maker, which is not suited to running them.

```
# crm configure location loc_SAPHanaTopology_avoid_majority_maker cln_SAPHanaTopology_<SID>_HDB<hana_sys_nr> -inf:<hostname_mm>

# crm configure location loc_SAPHana/SAPHanaController_avoid_majority_maker msl_SAPHana/SAPHanaController_<SID>_HDB<hana_sys_nr> -inf:<hostname_mm>
```
+  *Example* :

  ```
  # crm configure location loc_SAPHanaTopology_avoid_majority_maker cln_SAPHanaTopology_HDB_HDB00 -inf:hanamm
  # crm configure location loc_SAPHana_avoid_majority_maker msl_SAPHana_HDB_HDB00 -inf:hanamm
  ```

## Activate Cluster
<a name="_activate_cluster"></a>

Use the `crm configure show` and `crm configure edit` commands to verify that all values have been entered correctly.

On confirmation of correct values, set the maintenance mode to false using the following command. This enables the cluster to take control of the resources:

```
# crm maintenance off
```

## Reset Configuration – Optional
<a name="_reset_configuration_optional"></a>

**Important**  
The following instructions help you reset the complete configuration. Run these commands only if you want to start setup from the beginning. You can make minor changes with the `crm configure edit` command.

Run the following command to back up the current configuration for reference:

```
# crm configure show > /tmp/crmconfig_backup.txt
```

Run the following command to clear the current configuration:

```
# crm configure erase
```

The `crm configure erase` command removes all cluster resources from the Cluster Information Base (CIB) and disconnects communication between the cluster and Corosync. Before starting resource configuration again, run `crm cluster restart` so that the cluster reestablishes communication with Corosync and retrieves the configuration. The restart also removes maintenance mode; reapply it before commencing additional configuration and resource setup.
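Putting these steps together, the full reset sequence looks like this (destructive; run only when restarting setup from scratch):

```
# crm configure show > /tmp/crmconfig_backup.txt
# crm configure erase
# crm cluster restart
# crm maintenance on
```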

# Client Connectivity
<a name="sap-hana-pacemaker-sles-client-connectivity"></a>

For proper SAP HANA database connectivity:
+ Ensure that the Overlay IP can be correctly resolved from all application servers
+ DNS configuration or local host entries must be valid
+ Network routing must be properly configured
+ SAP HANA client libraries must be installed and up to date

Ensure that the connectivity data for the SAP HANA Database references the hostname associated with the Overlay IP. For more information see SAP Documentation: [Setting Connectivity Data for the SAP HANA Database](https://help.sap.com/docs/SLTOOLSET/39c32e9783f6439e871410848f61544c/b7ed2d55b0a7f857e10000000a441470.html?version=CURRENT_VERSION_SWPM20) 
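A quick sketch of a resolution check from an application server; `hanadb` is a hypothetical hostname mapped to the overlay IP, and `172.16.52.1` is the example overlay IP used in this guide:

```shell
# Check that the overlay hostname resolves to the expected overlay IP.
OVERLAY_HOST="hanadb"        # hypothetical DNS name / hosts entry for the overlay IP
EXPECTED_IP="172.16.52.1"    # example overlay IP from this guide
RESOLVED=$(getent hosts "$OVERLAY_HOST" | awk '{print $1}')
if [ "$RESOLVED" = "$EXPECTED_IP" ]; then
  echo "OK: $OVERLAY_HOST -> $RESOLVED"
else
  echo "WARN: $OVERLAY_HOST resolves to '$RESOLVED', expected $EXPECTED_IP"
fi
```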

Test database connectivity using R3trans utility:

```
sidadm> R3trans -d
```

Review additional connections to SAP HANA that require High Availability. While application connectivity should use the overlay IP, administrative tools (SAP HANA Studio, hdbsql commands, monitoring tools) require direct connectivity to individual SAP HANA instances.