

# Configure storage (FSx for ONTAP)
<a name="sap-hana-amazon-fsx"></a>

Amazon FSx for NetApp ONTAP is a fully managed service that provides highly reliable, scalable, high-performing, and feature-rich file storage built on NetApp’s popular ONTAP file system. You can now deploy and operate SAP HANA on AWS with Amazon FSx for NetApp ONTAP. For more information, see [Amazon FSx for NetApp ONTAP](https://aws.amazon.com/fsx/netapp-ontap/).

SAP HANA stores and processes all of its data in memory and provides protection against data loss by saving the data in persistent storage locations. To achieve optimal performance, the storage solution used for SAP HANA data and log volumes must meet SAP’s storage KPI. As a fully managed service, Amazon FSx for NetApp ONTAP makes it easier to launch and scale reliable, high-performing, and secure shared file storage in the cloud.

If you are a first-time user, see [How Amazon FSx for NetApp ONTAP works](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/how-it-works-fsx-ontap.html).

This guide covers the following topics.
+  [Supported configurations](instances-sizing-sap-hana-amazon-fsx.md) 
+  [Set up FSx for ONTAP file system SVMs and volumes](amazon-fsx-sap-hana.md) 
+  [Set up host](host-setup-fsx-sap-hana.md) 

For SAP specifications, refer to [SAP Note 2039883 - FAQ: SAP HANA database and data snapshots](https://me.sap.com/notes/2039883) and [SAP Note 3024346 - Linux Kernel Settings for NetApp NFS](https://me.sap.com/notes/3024346).

# Supported configurations
<a name="instances-sizing-sap-hana-amazon-fsx"></a>

The following rules and limitations are applicable for deploying SAP HANA on AWS with Amazon FSx for NetApp ONTAP.
+ FSx for ONTAP file systems for SAP HANA data and log volumes are only supported for single Availability Zone deployment.
+ Amazon EC2 instances where you plan to deploy your SAP HANA workload and FSx for ONTAP file systems must be in the same subnet.
+ Use separate storage virtual machines (SVMs) for SAP HANA data and log volumes at no additional cost. This ensures that your I/O traffic flows through different IP addresses and TCP sessions.
+ For SAP HANA scale-out with a standby node, the `basepath_shared` parameter must be set to *Yes*. You can locate it in the *Persistence* section of the `global.ini` file.
+ SAP HANA on FSx for ONTAP is only supported with the NFSv4.1 protocol. SAP HANA volumes must be created and mounted using the NFSv4.1 protocol.
+ SAP HANA on FSx for ONTAP is only supported on the following operating systems:
  + Red Hat Enterprise Linux 8.4 and above
  + SUSE Linux Enterprise Server 15 SP2 and above
+  `/hana/data` and `/hana/log` must have their own FSx for ONTAP volumes. `/hana/shared` and `/usr/sap` can share a volume.

## Supported Amazon EC2 instance types
<a name="instance-types-sap-hana-amazon-fsx"></a>

Amazon FSx for NetApp ONTAP is certified by SAP for scale-up and scale-out (OLTP/OLAP) SAP HANA workloads in a single Availability Zone setup. You can use Amazon FSx for NetApp ONTAP as the primary storage for SAP HANA data, log, binary, and shared volumes. For a complete list of supported Amazon EC2 instances for SAP HANA, see [SAP HANA certified instances](https://docs.aws.amazon.com/sap/latest/general/sap-hana-aws-ec2.html).

## Sizing
<a name="sizing-sap-hana-amazon-fsx"></a>

You can configure the throughput capacity of FSx for ONTAP when you create a new file system by scaling up to 4 GB/s of read throughput and 1000 MB/s of write throughput in a single Availability Zone deployment. For more information, see [Amazon FSx for NetApp ONTAP performance](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/performance.html).

**Topics**
+ [SAP KPIs](#sizing-sap-kpi)
+ [Minimum requirement](#sizing-min-req)
+ [Higher throughput](#sizing-high-throughput)

### SAP KPIs
<a name="sizing-sap-kpi"></a>

**SAP requires the following KPIs for SAP HANA volumes.**


|  | Read | Write | 
| --- | --- | --- | 
|  Data  |  400 MB/s  |  250 MB/s  | 
|  Log  |  250 MB/s  |  250 MB/s  | 
|  Latency for log  |  Less than 1 millisecond write latency with 4K and 16K block sized I/O  |   | 

### Minimum requirement
<a name="sizing-min-req"></a>

You must provision FSx for ONTAP volumes with sufficient capacity and performance, based on the requirements of your SAP HANA workload. To meet the storage KPIs for SAP HANA, you need a throughput capacity of at least **1,024 MB/s**. Lower throughput may be acceptable for non-production systems.

Sharing a file system between multiple SAP HANA nodes is supported when the file system meets the requirements of all SAP HANA nodes. When sharing a file system, you can use the quality of service feature for consistent performance and reduced interference between competing workloads. For more information, see [Using Quality of Service in Amazon FSx for NetApp ONTAP](https://aws.amazon.com/blogs/storage/using-quality-of-service-in-amazon-fsx-for-netapp-ontap/).

### Higher throughput
<a name="sizing-high-throughput"></a>

If you require higher throughput, you can do one of the following:
+ Create separate data and log volumes on different FSx for ONTAP file systems.
+ Create additional data volume partitions across multiple FSx for ONTAP file systems.

To learn more about FSx for ONTAP performance, see [Performance details](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/performance.html#performance-details-fsxw).

## SAP HANA parameters
<a name="sap-hana-amazon-fsx"></a>

Set the following SAP HANA database parameters in the `global.ini` file.

```
[fileio]
max_parallel_io_requests=128
async_read_submit=on
async_write_submit_active=on
async_write_submit_blocks=all
```

Use the following SQL commands to set these parameters at the `SYSTEM` level.

```
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('fileio', 'max_parallel_io_requests') = '128' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('fileio', 'async_read_submit') = 'on' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('fileio', 'async_write_submit_active') = 'on' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('fileio', 'async_write_submit_blocks') = 'all' WITH RECONFIGURE;
```
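If you want to stage these settings before the database is running, the same `[fileio]` section can be appended with a small script. This is a sketch, not an official procedure: the file path and the idea of editing `global.ini` directly (instead of using the SQL commands above) are assumptions for illustration, and appending blindly will duplicate an existing `[fileio]` section.

```shell
# Append the recommended [fileio] section to a global.ini file.
# GLOBAL_INI defaults to a local file for illustration; on a real host the
# customer configuration is typically under
# /usr/sap/<SID>/SYS/global/hdb/custom/config/global.ini.
GLOBAL_INI="${GLOBAL_INI:-./global.ini}"

cat >> "$GLOBAL_INI" <<'EOF'
[fileio]
max_parallel_io_requests=128
async_read_submit=on
async_write_submit_active=on
async_write_submit_blocks=all
EOF
```

On a running system, prefer the SQL commands above, which apply the change with `WITH RECONFIGURE` and keep the configuration consistent.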

# Set up FSx for ONTAP file system, SVMs, and volumes
<a name="amazon-fsx-sap-hana"></a>

Before you create an FSx for ONTAP file system, determine the total storage space you need for your SAP HANA workload. You can increase the storage size later. To decrease the storage size, you must create a new file system.

To create an FSx for ONTAP file system, see [Step 1: Create an Amazon FSx for NetApp ONTAP file system](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/getting-started-step1.html). For more information, see [Managing FSx for ONTAP file systems](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/managing-file-systems.html).

**Note**  
Only single Availability Zone file systems are supported for SAP HANA workloads.

**Topics**
+ [Create storage virtual machines (SVM)](#svm-sap-hana)
+ [Volume configuration](#volume-fsx-sap-hana)
+ [Sample estimate](#sizing-estimation)
+ [Volume layout](#vol-layout-fsx-sap-hana)
+ [File system setup](#filesys-fsx-sap-hana)
+ [Disable snapshots](#snaps-fsx-sap-hana)
+ [Quality of Service (QoS)](#fsx-qos)
+ [Backup](#fsx-backup)

## Create storage virtual machines (SVM)
<a name="svm-sap-hana"></a>

You get one SVM per FSx for ONTAP file system by default. You can create additional SVMs at any time. For optimal performance, mount data and log volumes using different IP addresses. You can achieve this using separate SVMs for data and log volumes. If you plan to use NetApp SnapCenter, all SVMs used for SAP HANA must have unique names. You don’t need to join your file system to Active Directory for SAP HANA. For more information, see [Managing FSx for ONTAP storage virtual machines](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/managing-svms.html).

## Volume configuration
<a name="volume-fsx-sap-hana"></a>

The storage capacity of your file system should align with the needs of `/hana/shared`, `/hana/data`, and `/hana/log` volumes. You must also consider the capacity required for snapshots, if applicable.

We recommend creating separate FSx for ONTAP volumes for each of SAP HANA data, log, shared, and binary volumes. The following table lists the recommended minimum sizes per volume.


| Volume | Recommended size for scale-up | Recommended size for scale-out | 
| --- | --- | --- | 
|   `/usr/sap`   |  50 GiB  |  50 GiB  | 
|   `/hana/shared`   |  Minimum of 1 x memory of your Amazon EC2 instance or 1 TB  |  1 x memory of your Amazon EC2 instance for every 4 subordinate nodes\*  | 
|   `/hana/data`   |  At least 1.2 x memory of your Amazon EC2 instance  |  At least 1.2 x memory of your Amazon EC2 instance  | 
|   `/hana/log`   |  Minimum of 0.5 x memory of your Amazon EC2 instance or 600 GiB  |  Minimum of 0.5 x memory of your Amazon EC2 instance or 600 GiB  | 

\* For example, if you have 2-4 scale-out nodes, you need 1 x memory of your single Amazon EC2 instance. If you have 5-8 scale-out nodes, you need 2 x memory of your single Amazon EC2 instance.
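The sizing rules in the table above reduce to simple arithmetic. The following sketch computes the recommended minimum volume sizes from the instance memory and node count; the helper name, the GiB integer rounding, and the reading of "minimum of X or Y" as the smaller value are illustrative assumptions, not an official sizing tool.

```shell
# Illustrative sizing helper based on the volume table above.
hana_volume_sizes() {
  local mem_gib="$1"       # memory of one Amazon EC2 instance, in GiB
  local nodes="${2:-1}"    # number of subordinate (worker) nodes; 1 = scale-up
  local data log shared
  data=$(( mem_gib * 12 / 10 ))                           # at least 1.2 x memory
  log=$(( mem_gib / 2 )); [ "$log" -gt 600 ] && log=600   # min(0.5 x memory, 600 GiB)
  if [ "$nodes" -le 1 ]; then
    shared=$mem_gib; [ "$shared" -gt 1024 ] && shared=1024   # min(1 x memory, 1 TB)
  else
    shared=$(( (nodes + 3) / 4 * mem_gib ))               # 1 x memory per 4 nodes
  fi
  echo "data=${data}GiB log=${log}GiB shared=${shared}GiB usr_sap=50GiB"
}

hana_volume_sizes 2048 4   # example: a 2 TiB instance with 4 worker nodes
```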

The following limitations apply when you create an FSx for ONTAP file system for SAP HANA.
+  *Capacity Pool Tiering* is not supported for SAP HANA and must be set to **None**.
+  *Daily automatic backups* must be **disabled** for SAP HANA. Default FSx for ONTAP backups are not application-aware and cannot be used to restore SAP HANA to a consistent state.

## Sample estimate
<a name="sizing-estimation"></a>

You can use the formulas in the following table to create estimates for SAP HANA performance KPIs for production systems. These systems can be in a single Availability Zone or a multi-Availability Zone setup. See the storage architecture for [Amazon FSx for NetApp ONTAP](https://docs.aws.amazon.com/sap/latest/sap-hana/architecture-fsx.html) to learn more.

**Note**  
Amazon EC2 root volumes used as boot volumes for the operating system must always be based on Amazon EBS (for example, `gp3`). Using an EBS-based SAP HANA log volume together with FSx for ONTAP is supported.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sap/latest/sap-hana/amazon-fsx-sap-hana.html)

**Note**  
(\*) You must provision a secondary FSx for ONTAP volume for SAP HANA multi-Availability Zone deployments.
(\*\*) This can be deployed in a single-Availability Zone setup for cost efficiency.

 **Common parameters** 
+ CHANGE-RATE-DB: 30% for production, 5% for non-production
+ CHANGE-RATE-BINARIES: 5%
+ LOG-RATE: 5%
+ SNAPSHOTS-KEPT-AT-PRIMARY: 3 days
+ RETENTION: 30 days
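Using these common parameters, a back-of-the-envelope capacity estimate can be sketched. How the rates combine here is an illustrative assumption; the AWS documentation page linked above has the authoritative formulas.

```shell
# Rough snapshot capacity estimate: data size x daily change rate x days kept.
# Illustrative only; not the official AWS estimation formula.
snapshot_capacity_gib() {
  local data_gib="$1" change_pct="$2" days="$3"
  echo $(( data_gib * change_pct * days / 100 ))
}

# 2048 GiB data volume, 30% daily change rate (production):
snapshot_capacity_gib 2048 30 3    # capacity held by 3 days of primary snapshots
snapshot_capacity_gib 2048 30 30   # capacity replicated for 30-day retention
```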

## Volume layout
<a name="vol-layout-fsx-sap-hana"></a>

**Topics**
+ [SAP HANA scale-up](#fsx-volume-layout-scaleup)
+ [SAP HANA scale-out](#fsx-volume-layout-scaleout)

### SAP HANA scale-up
<a name="fsx-volume-layout-scaleup"></a>

The following table presents an example of volume and mount point configuration for a scale-up setup. It includes a single host. `HDB` is the SAP HANA system ID. To place the home directory of the `hdbadm` user on the central storage, the `/usr/sap/HDB` file system must be mounted from the `HDB_shared` volume.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sap/latest/sap-hana/amazon-fsx-sap-hana.html)

### SAP HANA scale-out
<a name="fsx-volume-layout-scaleout"></a>

You must mount all the data, log, and shared volumes on every node, including the standby node.

The following table presents an example of volume and mount point configuration for a scale-out setup. It includes four active hosts and one standby host. `HDB` is the SAP HANA system ID. The home (`/usr/sap/HDB`) and shared (`/hana/shared`) directories of every host are stored in the `HDB_shared` volume. To place the home directory of the `hdbadm` user on the central storage, the `/usr/sap/HDB` file system must be mounted from the `HDB_shared` volume.


| Volume name | Junction path | Directory | Mount point | Note | 
| --- | --- | --- | --- | --- | 
|  HDB_data_mnt00001  |  HDB_data_mnt00001  |  N/A  |  /hana/data/HDB/mnt00001  |  Mounted on all hosts  | 
|  HDB_log_mnt00001  |  HDB_log_mnt00001  |  N/A  |  /hana/log/HDB/mnt00001  |  Mounted on all hosts  | 
|  HDB_data_mnt00002  |  HDB_data_mnt00002  |  N/A  |  /hana/data/HDB/mnt00002  |  Mounted on all hosts  | 
|  HDB_log_mnt00002  |  HDB_log_mnt00002  |  N/A  |  /hana/log/HDB/mnt00002  |  Mounted on all hosts  | 
|  HDB_data_mnt00003  |  HDB_data_mnt00003  |  N/A  |  /hana/data/HDB/mnt00003  |  Mounted on all hosts  | 
|  HDB_log_mnt00003  |  HDB_log_mnt00003  |  N/A  |  /hana/log/HDB/mnt00003  |  Mounted on all hosts  | 
|  HDB_data_mnt00004  |  HDB_data_mnt00004  |  N/A  |  /hana/data/HDB/mnt00004  |  Mounted on all hosts  | 
|  HDB_log_mnt00004  |  HDB_log_mnt00004  |  N/A  |  /hana/log/HDB/mnt00004  |  Mounted on all hosts  | 
|  HDB_shared  |  HDB_shared  |  shared  |  /hana/shared/HDB  |  Mounted on all hosts  | 
|  HDB_shared  |  HDB_shared  |  usr-sap-host1  |  /usr/sap/HDB  |  Mounted on host 1  | 
|  HDB_shared  |  HDB_shared  |  usr-sap-host2  |  /usr/sap/HDB  |  Mounted on host 2  | 
|  HDB_shared  |  HDB_shared  |  usr-sap-host3  |  /usr/sap/HDB  |  Mounted on host 3  | 
|  HDB_shared  |  HDB_shared  |  usr-sap-host4  |  /usr/sap/HDB  |  Mounted on host 4  | 
|  HDB_shared  |  HDB_shared  |  usr-sap-host5  |  /usr/sap/HDB  |  Mounted on host 5  | 

## File system setup
<a name="filesys-fsx-sap-hana"></a>

After creating an FSx for ONTAP file system, you must complete additional file system setup.

### Set administrative password
<a name="password-filesys-fsx-sap-hana"></a>

If you did not create an administrative password during FSx for ONTAP file system creation, you must set an ONTAP administrative password for the `fsxadmin` user.

The administrative password enables you to access the file system through SSH, the ONTAP CLI, and the REST API. To use tools like NetApp SnapCenter, you must have an administrative password.

### Sign in to the management endpoint via SSH
<a name="ssh-filesys-fsx-sap-hana"></a>

Get the DNS name of the management endpoint from the AWS console. Sign in to the management endpoint via SSH, using the `fsxadmin` user and the administrative password.

```
ssh fsxadmin@management.<file-system-id>.fsx.<aws-region>.amazonaws.com
Password:
```

### Set TCP max transfer size
<a name="tcp-filesys-fsx-sap-hana"></a>

We recommend a TCP max transfer size of 262,144 for your SAP HANA workloads. Elevate the privilege level to *advanced* and use the following command on each SVM.

```
set advanced
nfs modify -vserver <svm> -tcp-max-xfer-size 262144
set admin
```

### Set the lease time on NFSv4 protocol
<a name="nfs-filesys-fsx-sap-hana"></a>

This task applies to SAP HANA scale-out with standby node setup.

Lease period refers to the time for which ONTAP irrevocably grants a lock to a client. It is set to 30 seconds by default. Setting a shorter lease time enables faster server recovery.

You can change the lease time with the following command.

```
set advanced
nfs modify -vserver <svm> -v4-lease-seconds 10
set admin
```

**Note**  
Starting with SAP HANA 2.0 SPS4, SAP provides parameters to control failover behavior. NetApp recommends using these parameters instead of setting the lease time at the SVM level.

## Disable snapshots
<a name="snaps-fsx-sap-hana"></a>

FSx for ONTAP automatically enables a snapshot policy that takes hourly snapshots of volumes. The default policy offers limited value for SAP HANA because it lacks application awareness. We recommend disabling automatic snapshots by setting the snapshot policy to `none`. You can do this during volume creation or by using the following command.

```
volume modify -vserver <vserver-name> -volume <volume-name> -snapshot-policy none
```

### Data volume
<a name="data-snaps-fsx-sap-hana"></a>

The automatic FSx for ONTAP snapshots do not have application awareness. A database-consistent snapshot of the SAP HANA data volume must be prepared by creating a data snapshot. For more information, see [Create a Data Snapshot](https://help.sap.com/docs/SAP_HANA_COCKPIT/afa922439b204e9caf22c78b6b69e4f2/9fd1c8bb3b60455caa93b7491ae6d830.html).

### Log volume
<a name="log-snaps-fsx-sap-hana"></a>

The log volume is automatically backed up every 15 minutes by SAP HANA. An hourly volume snapshot does not offer any additional value in terms of RPO reduction.

The high frequency of changes on the log volume can rapidly increase the total capacity used for snapshots. This can cause the log volume to run out of capacity, making the SAP HANA workload unresponsive.

## Quality of Service (QoS)
<a name="fsx-qos"></a>

Quality of Service (QoS) enables FSx for ONTAP to consistently deliver predictable performance to multiple applications, and eliminate noisy neighbor applications. When sharing a file system, you can use the quality of service feature for consistent performance and reduced interference between competing workloads. For more information, see [Using Quality of Service in Amazon FSx for NetApp ONTAP](https://aws.amazon.com/blogs/storage/using-quality-of-service-in-amazon-fsx-for-netapp-ontap/).

QoS is configured by creating a QoS policy group, setting ceiling or floor performance levels (maximum or minimum performance), and assigning the policy to an SVM or volume. Performance can be specified in either IOPS or throughput.

 **Example** 

You are creating a test system, based on a snapshot from production, on the same file system as your production SAP HANA database. You want to ensure that the test system does not impact the performance of the production system. You create a QoS policy group (`qos-test`) and define an upper limit of 200 MB/s for data and log volumes (`vol-data` and `vol-log`), which share the same SVM (`svm-test`).

```
# Create the QoS policy group
qos policy-group create -policy-group qos-test -vserver svm-test -is-shared false -max-throughput 200MB/s

# Assign the QoS policy group to the data and log volumes
volume modify -vserver svm-test -volume vol-data -qos-policy-group qos-test
volume modify -vserver svm-test -volume vol-log -qos-policy-group qos-test
```

## Backup
<a name="fsx-backup"></a>

You must disable automatic backups for FSx for ONTAP volumes and file systems for SAP HANA. The backups cannot be used to restore SAP HANA to a consistent state. You can use the SnapCenter plugin for SAP HANA backups. For more details, see NetApp docs – [SnapCenter Plug-in for SAP HANA Database overview](https://docs.netapp.com/us-en/snapcenter/protect-hana/concept_snapcenter_plug_in_for_sap_hana_database_overview.html) and [SAP HANA on Amazon FSx for NetApp ONTAP - Backup and recovery with SnapCenter](https://docs.netapp.com/us-en/netapp-solutions-sap/backup/fsxn-overview.html).

You can also use SnapMirror for SAP HANA backups. For more information, see [How can I optimize SnapMirror performance, and what are the best practices for FSx for ONTAP?](https://repost.aws/knowledge-center/fsx-ontap-optimize-snapmirror) 

For point-in-time resilient restores, we highly recommend storing three days of snapshots on a local disk and replicating older backups via SnapVault to a secondary FSx for ONTAP file system using the capacity pool tier. For more information, see [Managing storage capacity](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/managing-storage-capacity.html#storage-tiers).

# Set up host
<a name="host-setup-fsx-sap-hana"></a>

This section walks you through an example host setup for deploying SAP HANA scale-up and scale-out systems on AWS using Amazon FSx for NetApp ONTAP as the primary storage solution.

You must configure your Amazon EC2 instance on an operating system level to use FSx for ONTAP with SAP HANA on AWS.

**Note**  
The following examples apply to an SAP HANA workload with SAP System ID `HDB`. The operating system user is `hdbadm`.

**Topics**
+ [SAP HANA scale-up](fsx-host-scaleup.md)
+ [SAP HANA scale-out](fsx-host-scaleout.md)

# SAP HANA scale-up
<a name="fsx-host-scaleup"></a>

The following section is an example host setup for SAP HANA scale-up deployment with FSx for ONTAP.

**Topics**
+ [Linux kernel parameters](#linux-setup-scaleup)
+ [Network File System (NFS)](#nfs-setup-scaleup)
+ [Create subdirectories](#subdirectories-scaleup)
+ [Create mount points](#mount-points-scaleup)
+ [Mount file systems](#mount-filesys-scaleup)
+ [Data volume partitions](#partitions-scaleup)

## Linux kernel parameters
<a name="linux-setup-scaleup"></a>

1. Create a file `/etc/sysctl.d/91-NetApp-HANA.conf` with the following configuration.

   ```
   net.core.rmem_max = 16777216
   net.core.wmem_max = 16777216
   net.ipv4.tcp_rmem = 4096 131072 16777216
   net.ipv4.tcp_wmem = 4096 16384  16777216
   net.core.netdev_max_backlog = 300000
   net.ipv4.tcp_slow_start_after_idle = 0
   net.ipv4.tcp_no_metrics_save = 1
   net.ipv4.tcp_moderate_rcvbuf = 1
   net.ipv4.tcp_window_scaling = 1
   net.ipv4.tcp_timestamps = 1
   net.ipv4.tcp_sack = 1
   sunrpc.tcp_slot_table_entries = 128
   ```

1. To reduce I/O errors during failover of FSx for ONTAP Single-AZ file systems, including during [planned maintenance windows](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/maintenance-windows.html), create an additional file `/etc/sysctl.d/99-fsx-failover.conf`. These parameters optimize NFS client behavior to detect and respond to failover events more quickly.

   ```
   # NFS client optimizations for faster failover detection
   # Replace 'default' with your interface name (e.g., eth0, ens5) to target a specific interface
   net.ipv4.neigh.default.base_reachable_time_ms = 5000
   net.ipv4.neigh.default.delay_first_probe_time = 1
   net.ipv4.neigh.default.ucast_solicit = 0
   net.ipv4.tcp_syn_retries = 3
   ```

   For more information and options, see [Troubleshooting I/O errors and NFS lock reclaim failures](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/nfs-failover-issues.html).

   In some cases, these errors may cause SAP HANA to perform an emergency shutdown of the indexserver process to protect database consistency.

1. Increase the maximum session slots for NFSv4 to 180.

   ```
   echo "options nfs max_session_slots=180" > /etc/modprobe.d/nfsclient.conf
   ```

To activate these changes, run `sysctl --system` to load the kernel parameters and reload the NFS module, or reboot the instance during a planned maintenance window (recommended).

## Network File System (NFS)
<a name="nfs-setup-scaleup"></a>

Network File System (NFS) version 4 and higher requires user authentication. You can authenticate with a Lightweight Directory Access Protocol (LDAP) server or with local user accounts.

If you are using local user accounts, the NFSv4 domain must be set to the same value on all Linux servers and SVMs. You can set the domain parameter (`Domain = <domain name>`) in the `/etc/idmapd.conf` file on the Linux hosts.

To identify the domain setting of the SVM, use the following command:

```
nfs show -vserver hana-data -fields v4-id-domain
```

The following is example output:

```
vserver   v4-id-domain
--------- ------------
hana-data ec2.internal
```
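To align the Linux side with the SVM, the `Domain` line in `/etc/idmapd.conf` can be set with a short script. This is a sketch: it operates on a local copy by default, and the seeded file contents are illustrative. On a real host, point it at `/etc/idmapd.conf`, run it as root, and substitute the domain reported by the SVM.

```shell
# Set the NFSv4 domain in an idmapd.conf-style file so that it matches the
# SVM's v4-id-domain ('ec2.internal' here, from the example output above).
IDMAPD_CONF="${IDMAPD_CONF:-./idmapd.conf}"
NFS_DOMAIN="ec2.internal"

# Seed a minimal file if none exists (illustration only).
[ -f "$IDMAPD_CONF" ] || printf '[General]\n#Domain = localdomain\n' > "$IDMAPD_CONF"

# Replace an existing (possibly commented-out) Domain line, or append one.
if grep -Eq '^[#[:space:]]*Domain[[:space:]]*=' "$IDMAPD_CONF"; then
  sed -Ei "s/^[#[:space:]]*Domain[[:space:]]*=.*/Domain = ${NFS_DOMAIN}/" "$IDMAPD_CONF"
else
  printf 'Domain = %s\n' "$NFS_DOMAIN" >> "$IDMAPD_CONF"
fi
```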

## Create subdirectories
<a name="subdirectories-scaleup"></a>

Mount the `/hana/shared` volume, create the `shared`, `lss-shared`, and `usr-sap` subdirectories, and unmount it.

```
mkdir /mnt/tmp
mount -t nfs -o sec=sys,vers=4.1 <svm-shared>:/HDB-shared /mnt/tmp
cd /mnt/tmp
mkdir shared
mkdir lss-shared
mkdir usr-sap
cd ..
umount /mnt/tmp
```

## Create mount points
<a name="mount-points-scaleup"></a>

On single-host systems, create the following mount points on your Amazon EC2 instance.

```
mkdir -p /hana/data/HDB/mnt00001
mkdir -p /hana/log/HDB/mnt00001
mkdir -p /hana/shared
mkdir -p /lss/shared/
mkdir -p /usr/sap/HDB
```

## Mount file systems
<a name="mount-filesys-scaleup"></a>

The created file systems must be mounted as NFS file systems on Amazon EC2. The following table shows example recommended NFS mount options for the different SAP HANA file systems.


| File systems | Common mount options | Version options | Transfer size options | Connection options | 
| --- | --- | --- | --- | --- | 
|  SAP HANA data  |  rw,bg,hard,timeo=600,noatime,  |  vers=4,minorversion=1,lock,  |  rsize=262144,wsize=262144,  |  nconnect=4  | 
|  SAP HANA log  |  rw,bg,hard,timeo=600,noatime,  |  vers=4,minorversion=1,lock,  |  rsize=262144,wsize=262144,  |  nconnect=2  | 
|  SAP HANA shared  |  rw,bg,hard,timeo=600,noatime,  |  vers=4,minorversion=1,lock,  |  rsize=262144,wsize=262144,  |  nconnect=2  | 
|  SAP HANA binary  |  rw,bg,hard,timeo=600,noatime,  |  vers=4,minorversion=1,lock,  |  rsize=262144,wsize=262144,  |  nconnect=2  | 
|  SAP HANA LSS shared  |  rw,bg,hard,timeo=600,noatime,  |  vers=4,minorversion=1,lock,  |  rsize=262144,wsize=262144,  |  nconnect=2  | 
+ Changes to the `nconnect` parameter take effect only if the NFS file system is unmounted and mounted again.
+ Client systems must have unique host names when accessing FSx for ONTAP. If there are systems with the same name, the second system may not be able to access FSx for ONTAP.

 **Example** 

Add the following lines to `/etc/fstab` to preserve mounted file systems during an instance reboot. You can then run `mount -a` to mount the NFS file systems.

```
<svm-data>:/HDB_data_mnt00001 /hana/data/HDB/mnt00001 nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=4
<svm-log>:/HDB_log_mnt00001 /hana/log/HDB/mnt00001 nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=2
<svm-shared>:/HDB_shared/usr-sap /usr/sap/HDB nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=2
<svm-shared>:/HDB_shared/shared /hana/shared nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=2
```
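Because every entry repeats the same option string, the lines can also be generated to avoid typos. The helper name and defaults below are illustrative assumptions; the option string matches the table above.

```shell
# Build one /etc/fstab line for an SAP HANA NFS volume, using the mount
# options recommended in the table above. Helper name is illustrative.
fstab_line() {
  local src="$1" mnt="$2" nconnect="${3:-2}"
  printf '%s %s nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=%s\n' \
    "$src" "$mnt" "$nconnect"
}

fstab_line '<svm-data>:/HDB_data_mnt00001' /hana/data/HDB/mnt00001 4
fstab_line '<svm-log>:/HDB_log_mnt00001' /hana/log/HDB/mnt00001
```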

## Data volume partitions
<a name="partitions-scaleup"></a>

With SAP HANA 2.0 SPS4, additional data volume partitions allow configuring two or more file system volumes for the DATA volume of an SAP HANA tenant database in a single-host or multi-host system. Data volume partitions enable SAP HANA to scale beyond the size and performance limits of a single volume. You can add additional data volume partitions at any time. For more information, see [Adding additional data volume partitions](https://docs.netapp.com/us-en/netapp-solutions-sap/bp/hana-aff-nfs-add-data-volume-partitions.html).

### Host preparation
<a name="host-preparation-scaleup"></a>

Additional mount points and `/etc/fstab` entries must be created, and the new volumes must be mounted.
+ Create additional mount points and assign the required permissions, group, and ownership.

  ```
  mkdir -p /hana/data2/HDB/mnt00001
  chmod -R 777 /hana/data2/HDB/mnt00001
  ```
+ Add additional file systems to `/etc/fstab`.

  ```
  <data2>:/data2 /hana/data2/HDB/mnt00001 nfs <mount options>
  ```
+ Set the permissions to 777. This is required to enable SAP HANA to add a new data volume in the subsequent step. SAP HANA sets more restrictive permissions automatically during data volume creation.

### Enabling data volume partitioning
<a name="enable-partition-scaleup"></a>

To enable data volume partitions, add the following entry to the `global.ini` file in the `SYSTEMDB` configuration.

```
[customizable_functionalities]
persistence_datavolume_partition_multipath = true
```

Alternatively, you can set the parameter with the following SQL statement.

```
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
SET ('customizable_functionalities', 'PERSISTENCE_DATAVOLUME_PARTITION_MULTIPATH') = 'true'
WITH RECONFIGURE;
```

**Note**  
You must restart your database after updating the `global.ini` file.

### Adding additional data volume partition
<a name="add-partition-scaleup"></a>

Run the following SQL statement against the tenant database to add an additional data volume partition.

```
ALTER SYSTEM ALTER DATAVOLUME ADD PARTITION PATH '/hana/data2/HDB/mnt00001/';
```

Adding a data volume partition is quick. The new data volume partitions are empty after creation. Data is distributed equally across data volumes over time.

After you configure and mount FSx for ONTAP file systems, you can install and set up your SAP HANA workload on AWS. For more information, see [SAP HANA Environment Setup on AWS](https://docs.aws.amazon.com/sap/latest/sap-hana/std-sap-hana-environment-setup.html).

# SAP HANA scale-out
<a name="fsx-host-scaleout"></a>

The following section is an example host setup for SAP HANA scale-out with standby node on AWS using FSx for ONTAP as the primary storage solution. You can use SAP HANA host auto failover, an automated solution provided by SAP, for recovering from a failure on your SAP HANA host. For more information, see [SAP HANA - Host Auto-Failover](https://www.sap.com/documents/2016/06/f6b3861d-767c-0010-82c7-eda71af511fa.html).

**Topics**
+ [Linux kernel parameters](#linux-setup-scaleout)
+ [Network File System (NFS)](#nfs-setup-scaleout)
+ [Create subdirectories](#subdirectories-scaleout)
+ [Create mount points](#mount-points-scaleout)
+ [Mount file systems](#mount-filesys-scaleout)
+ [Set ownership for directories](#directories-scaleout)
+ [SAP HANA parameters](#parameters-scaleout)
+ [Data volume partitions](#partitions-scaleout)
+ [Testing host auto failover](#failover-scaleout)

## Linux kernel parameters
<a name="linux-setup-scaleout"></a>

1. Create a file `/etc/sysctl.d/91-NetApp-HANA.conf` with the following configuration.

   ```
   net.core.rmem_max = 16777216
   net.core.wmem_max = 16777216
   net.ipv4.tcp_rmem = 4096 131072 16777216
   net.ipv4.tcp_wmem = 4096 16384  16777216
   net.core.netdev_max_backlog = 300000
   net.ipv4.tcp_slow_start_after_idle = 0
   net.ipv4.tcp_no_metrics_save = 1
   net.ipv4.tcp_moderate_rcvbuf = 1
   net.ipv4.tcp_window_scaling = 1
   net.ipv4.tcp_timestamps = 1
   net.ipv4.tcp_sack = 1
   sunrpc.tcp_slot_table_entries = 128
   ```

1. To reduce I/O errors during failover of FSx for ONTAP Single-AZ file systems, including during [planned maintenance windows](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/maintenance-windows.html), create an additional file `/etc/sysctl.d/99-fsx-failover.conf`. These parameters optimize NFS client behavior to detect and respond to failover events more quickly.

   ```
   # NFS client optimizations for faster failover detection
   # Replace 'default' with your interface name (e.g., eth0, ens5) to target a specific interface
   net.ipv4.neigh.default.base_reachable_time_ms = 5000
   net.ipv4.neigh.default.delay_first_probe_time = 1
   net.ipv4.neigh.default.ucast_solicit = 0
   net.ipv4.tcp_syn_retries = 3
   ```

   For more information and options, see [Troubleshooting I/O errors and NFS lock reclaim failures](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/nfs-failover-issues.html).

   In some cases, these errors may cause SAP HANA to perform an emergency shutdown of the indexserver process to protect database consistency.

1. Increase the maximum session slots for NFSv4 to 180.

   ```
   echo "options nfs max_session_slots=180" > /etc/modprobe.d/nfsclient.conf
   ```

To activate these changes, run `sysctl --system` to load the kernel parameters and reload the NFS module, or reboot the instance during a planned maintenance window (recommended).

## Network File System (NFS)
<a name="nfs-setup-scaleout"></a>

**Important**  
For SAP HANA scale-out systems, FSx for ONTAP only supports NFS version 4.1.

Network File System (NFS) version 4 and higher requires user authentication. You can authenticate with a Lightweight Directory Access Protocol (LDAP) server or with local user accounts.

If you are using local user accounts, the NFSv4 domain must be set to the same value on all Linux servers and SVMs. You can set the domain parameter (`Domain = <domain name>`) in the `/etc/idmapd.conf` file on the Linux hosts.

To identify the domain setting of the SVM, use the following command:

```
nfs show -vserver hana-data -fields v4-id-domain
```

The following is example output:

```
vserver   v4-id-domain
--------- ------------
hana-data ec2.internal
```
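
Based on the example output above, the `/etc/idmapd.conf` on each Linux host would contain the same domain value. The value `ec2.internal` is taken from the example output and must match your SVM's setting.

```
[General]
Domain = ec2.internal
```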

## Create subdirectories
<a name="subdirectories-scaleout"></a>

Mount the `/hana/shared` volume and create `shared` and `usr-sap` subdirectories for each host. The following example commands apply to a 4+1 (four worker hosts plus one standby host) SAP HANA scale-out system.

```
mkdir /mnt/tmp
mount -t nfs -o sec=sys,vers=4.1 <svm-shared>:/HDB-shared /mnt/tmp
cd /mnt/tmp
mkdir shared
mkdir lss-shared
mkdir usr-sap-host1
mkdir usr-sap-host2
mkdir usr-sap-host3
mkdir usr-sap-host4
mkdir usr-sap-host5
cd
umount /mnt/tmp
```
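
The steps above can be parameterized for other host counts. The following is a minimal sketch, assuming the shared volume is already mounted at the directory you pass as the first argument:

```shell
# Create the shared-volume subdirectories for an N-host scale-out system.
# $1 = base directory (the temporarily mounted shared volume, e.g. /mnt/tmp)
# $2 = number of hosts, including the standby host
make_shared_dirs() {
  base="$1"
  hosts="$2"
  mkdir -p "$base/shared" "$base/lss-shared"
  i=1
  while [ "$i" -le "$hosts" ]; do
    mkdir -p "$base/usr-sap-host$i"
    i=$((i + 1))
  done
}
```

For the example above, `make_shared_dirs /mnt/tmp 5` creates the same directory layout.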

## Create mount points
<a name="mount-points-scaleout"></a>

On scale-out systems, create the following mount points on all the subordinate and standby nodes. The following example commands apply to a 4+1 SAP HANA scale-out system.

```
mkdir -p /hana/data/HDB/mnt00001
mkdir -p /hana/log/HDB/mnt00001
mkdir -p /hana/data/HDB/mnt00002
mkdir -p /hana/log/HDB/mnt00002
mkdir -p /hana/data/HDB/mnt00003
mkdir -p /hana/log/HDB/mnt00003
mkdir -p /hana/data/HDB/mnt00004
mkdir -p /hana/log/HDB/mnt00004
mkdir -p /hana/shared
mkdir -p /lss/shared
mkdir -p /usr/sap/HDB
```

## Mount file systems
<a name="mount-filesys-scaleout"></a>

Mount the file systems you created as NFS file systems on your Amazon EC2 instances. The following table shows an example of recommended NFS mount options for the different SAP HANA file systems.


|   **File systems**   |   **Common mount options**   |   **Version options**   |   **Transfer size options**   |   **Connection options**   | 
| --- |--- |--- |--- |--- |
|  SAP HANA data  |  rw,bg,hard,timeo=600,noatime,  |  vers=4,minorversion=1,lock,  |  rsize=262144,wsize=262144,  |  nconnect=4  | 
|  SAP HANA log  |  rw,bg,hard,timeo=600,noatime,  |  vers=4,minorversion=1,lock,  |  rsize=262144,wsize=262144,  |  nconnect=2  | 
|  SAP HANA shared  |  rw,bg,hard,timeo=600,noatime,  |  vers=4,minorversion=1,lock,  |  rsize=262144,wsize=262144,  |  nconnect=2  | 
|  SAP HANA binary  |  rw,bg,hard,timeo=600,noatime,  |  vers=4,minorversion=1,lock,  |  rsize=262144,wsize=262144,  |  nconnect=2  | 
|  SAP HANA LSS shared  |  rw,bg,hard,timeo=600,noatime,  |  vers=4,minorversion=1,lock,  |  rsize=262144,wsize=262144,  |  nconnect=2  | 
+ Changes to the `nconnect` parameter take effect only if the NFS file system is unmounted and mounted again.
+ Client systems must have unique host names when accessing FSx for ONTAP. If there are systems with the same name, the second system may not be able to access FSx for ONTAP.

 **Example - mount shared volumes** 

Add the following lines to `/etc/fstab` on **all** the hosts to preserve mounted file systems during an instance reboot. You can then run `mount -a` to mount the NFS file systems.

```
<svm-data_1>:/HDB_data_mnt00001 /hana/data/HDB/mnt00001 nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=4
<svm-log_1>:/HDB_log_mnt00001 /hana/log/HDB/mnt00001 nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=2
<svm-data_2>:/HDB_data_mnt00002 /hana/data/HDB/mnt00002 nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=4
<svm-log_2>:/HDB_log_mnt00002 /hana/log/HDB/mnt00002 nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=2
<svm-data_3>:/HDB_data_mnt00003 /hana/data/HDB/mnt00003 nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=4
<svm-log_3>:/HDB_log_mnt00003 /hana/log/HDB/mnt00003 nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=2
<svm-data_4>:/HDB_data_mnt00004 /hana/data/HDB/mnt00004 nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=4
<svm-log_4>:/HDB_log_mnt00004 /hana/log/HDB/mnt00004 nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=2
<svm-shared>:/HDB_shared/shared /hana/shared nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=2
<svm-lss-shared>:/HDB_shared/lss-shared /lss/shared nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=2
```

 **Example - mount host-specific volumes** 

Add the host-specific line to `/etc/fstab` of **each** host to preserve mounted file systems during an instance reboot. You can then run `mount -a` to mount the NFS file systems.


| Host | Line | 
| --- | --- | 
|  Host 1  |   `<svm-shared>:/HDB_shared/usr-sap-host1 /usr/sap/HDB nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=2`   | 
|  Host 2  |   `<svm-shared>:/HDB_shared/usr-sap-host2 /usr/sap/HDB nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=2`   | 
|  Host 3  |   `<svm-shared>:/HDB_shared/usr-sap-host3 /usr/sap/HDB nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=2`   | 
|  Host 4  |   `<svm-shared>:/HDB_shared/usr-sap-host4 /usr/sap/HDB nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=2`   | 
|  Host 5 (standby host)  |   `<svm-shared>:/HDB_shared/usr-sap-host5 /usr/sap/HDB nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=2`   | 
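
The repetitive `/etc/fstab` entries are easy to get wrong by hand. The following sketch prints the data and log volume entries for a given SID and partition count. The `<svm-…>` names are the same placeholders used above and must be replaced with your SVM DNS names or IP addresses.

```shell
# Print /etc/fstab entries for the SAP HANA data and log volumes.
# $1 = SAP HANA SID (e.g. HDB), $2 = number of data/log partitions
COMMON="rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144"
print_fstab_entries() {
  sid="$1"
  partitions="$2"
  i=1
  while [ "$i" -le "$partitions" ]; do
    mnt=$(printf 'mnt%05d' "$i")
    echo "<svm-data_$i>:/${sid}_data_${mnt} /hana/data/${sid}/${mnt} nfs ${COMMON},nconnect=4"
    echo "<svm-log_$i>:/${sid}_log_${mnt} /hana/log/${sid}/${mnt} nfs ${COMMON},nconnect=2"
    i=$((i + 1))
  done
}
```

For example, `print_fstab_entries HDB 4` prints the eight data and log entries shown in the example above.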

## Set ownership for directories
<a name="directories-scaleout"></a>

Use the following commands to set `hdbadm` ownership on the SAP HANA data and log directories.

```
sudo chown hdbadm:sapsys /hana/data/HDB
sudo chown hdbadm:sapsys /hana/log/HDB
```

## SAP HANA parameters
<a name="parameters-scaleout"></a>

Install your SAP HANA system with the required configuration, and then set the following parameters. For more information on SAP HANA installation, see [SAP HANA Server Installation and Update Guide](https://help.sap.com/docs/SAP_HANA_PLATFORM/2c1988d620e04368aa4103bf26f17727/7eb0167eb35e4e2885415205b8383584.html?version=2.0.04).

**Topics**
+ [Optimal performance](#parameters-performance-scaleout)
+ [NFS lock lease](#parameters-nfslock-scaleout)

### Optimal performance
<a name="parameters-performance-scaleout"></a>

For optimal performance, set the following parameters in the `global.ini` file.

```
[fileio]
max_parallel_io_requests=128
async_read_submit=on
async_write_submit_active=on
async_write_submit_blocks=all
```

You can use the following SQL commands to set these parameters at the `SYSTEM` level.

```
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('fileio', 'max_parallel_io_requests') = '128' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('fileio', 'async_read_submit') = 'on' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('fileio', 'async_write_submit_active') = 'on' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('fileio', 'async_write_submit_blocks') = 'all' WITH RECONFIGURE;
```

### NFS lock lease
<a name="parameters-nfslock-scaleout"></a>

Starting with SAP HANA 2.0 SPS4, SAP HANA provides parameters to control the failover behavior. We recommend using these parameters instead of setting the lease time at the SVM level. The following parameters are configured in the `nameserver.ini` file.


| Section | Parameter | Value | 
| --- | --- | --- | 
|   `failover`   |   `normal_retries`   |  9  | 
|   `distributed_watchdog`   |   `deactivation_retries`   |  11  | 
|   `distributed_watchdog`   |   `takeover_retries`   |  9  | 

You can use the following SQL commands to set these parameters at the `SYSTEM` level.

```
ALTER SYSTEM ALTER CONFIGURATION ('nameserver.ini', 'SYSTEM') SET ('failover', 'normal_retries') = '9' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('nameserver.ini', 'SYSTEM') SET ('distributed_watchdog', 'deactivation_retries') = '11' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('nameserver.ini', 'SYSTEM') SET ('distributed_watchdog', 'takeover_retries') = '9' WITH RECONFIGURE;
```

## Data volume partitions
<a name="partitions-scaleout"></a>

Starting with SAP HANA 2.0 SPS4, you can use additional data volume partitions to configure two or more file system volumes for the DATA volume of an SAP HANA tenant database in a single-host or multi-host system. Data volume partitions enable SAP HANA to scale beyond the size and performance limits of a single volume. You can add data volume partitions at any time. For more information, see [Adding additional data volume partitions](https://docs.netapp.com/us-en/netapp-solutions-sap/bp/hana-aff-nfs-add-data-volume-partitions.html).

**Topics**
+ [Host preparation](#host-preparation-scaleout)
+ [Enabling data volume partitioning](#enable-partition-scaleout)
+ [Adding additional data volume partition](#add-partition-scaleout)

### Host preparation
<a name="host-preparation-scaleout"></a>

Additional mount points and `/etc/fstab` entries must be created and the new volumes must be mounted.
+ Create additional mount points and assign the required permissions, group, and ownership.

  ```
  mkdir -p /hana/data2/HDB/mnt00001
  chmod -R 777 /hana/data2/HDB/mnt00001
  ```
+ Add additional file systems to `/etc/fstab`.

  ```
  <data2>:/data2 /hana/data2/HDB/mnt00001 nfs <mount options>
  ```
+ Set the permissions to 777. This is required to enable SAP HANA to add a new data volume in the subsequent step. SAP HANA sets more restrictive permissions automatically during data volume creation.

### Enabling data volume partitioning
<a name="enable-partition-scaleout"></a>

To enable data volume partitions, add the following entry in the `global.ini` file in the `SYSTEMDB` configuration.

```
[customizable_functionalities]
persistence_datavolume_partition_multipath = true
```

You can also use the following SQL command to set this parameter at the `SYSTEM` level.

```
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
SET ('customizable_functionalities', 'PERSISTENCE_DATAVOLUME_PARTITION_MULTIPATH') = 'true'
WITH RECONFIGURE;
```

**Note**  
You must restart your database after updating the `global.ini` file.

### Adding additional data volume partition
<a name="add-partition-scaleout"></a>

Run the following SQL statement against the tenant database to add an additional data volume partition.

```
ALTER SYSTEM ALTER DATAVOLUME ADD PARTITION PATH '/hana/data2/HDB/';
```

Adding a data volume partition is quick. The new data volume partitions are empty after creation. Data is distributed equally across data volumes over time.

## Testing host auto failover
<a name="failover-scaleout"></a>

We recommend testing your SAP HANA host auto failover scenarios. For more information, see [SAP HANA - Host Auto-Failover](https://www.sap.com/documents/2016/06/f6b3861d-767c-0010-82c7-eda71af511fa.html).

Some words have been redacted and replaced by inclusive terms. These words may appear different in your product, system code or table. For additional details, see [Inclusive Language at SAP](https://help.sap.com/docs/TERMINOLOGY/25cbeaaad3c24eba8ea10b579ce81aa1/83a23df24013403ea4c1fdd0107cc0fd.html).

The following table presents the expected results of different test scenarios.


| Scenario | Expected result | 
| --- | --- | 
|  SAP HANA subordinate node failure using `echo b > /proc/sysrq-trigger`   |  Subordinate node failover to standby node  | 
|  SAP HANA coordinator node failure using `HDB kill`  |  SAP HANA service failover to standby node (other candidate for coordinator node)  | 
|  SAP HANA coordinator node failure while other coordinator nodes act as subordinate nodes  |  Coordinator node failover to standby node while other coordinator nodes act as subordinate nodes  | 

**Topics**
+ [SAP HANA subordinate node failure](#scenario1-scaleout)
+ [SAP HANA coordinator node failure](#scenario2-scaleout)
+ [SAP HANA coordinator node failure while other coordinator nodes act as subordinate nodes](#scenario3-scaleout)

### SAP HANA subordinate node failure
<a name="scenario1-scaleout"></a>

Check the status of the landscape before testing.

```
hdbadm@hana:/usr/sap/HDB/HDB00/exe/python_support> python landscapeHostConfiguration.py
| Host    | Host   | Host   | Failover | Remove | Storage   | Storage   | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host    | Host    | Worker  | Worker  |
|         | Active | Status | Status   | Status | Config    | Actual    | Config   | Actual   | Config     | Actual     | Config      | Actual      | Config  | Actual  | Config  | Actual  |
|         |        |        |          |        | Partition | Partition | Group    | Group    | Role       | Role       | Role        | Role        | Roles   | Roles   | Groups  | Groups  |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hana    | yes    | ok     |          |        |         1 |         1 | default  | default  | coordinator 1   | coordinator     | worker      | coordinator      | worker  | worker  | default | default |
| hanaw01 | yes    | ok     |          |        |         2 |         2 | default  | default  | subordinate      | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw02 | yes    | ok     |          |        |         3 |         3 | default  | default  | subordinate      | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw03 | yes    | ok     |          |        |         4 |         4 | default  | default  | coordinator 3   | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw04 | yes    | ignore |          |        |         0 |         0 | default  | default  | coordinator 2   | subordinate      | standby     | standby     | standby | standby | default | -       |

overall host status: ok
```

Run the following command on the subordinate node as `root` to simulate a node crash. In this case, the subordinate node is `hanaw01`.

```
echo b > /proc/sysrq-trigger
```

```
hdbadm@hana:/usr/sap/HDB/HDB00/exe/python_support> python landscapeHostConfiguration.py
| Host    | Host   | Host   | Failover | Remove | Storage   | Storage   | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host    | Host    | Worker  | Worker  |
|         | Active | Status | Status   | Status | Config    | Actual    | Config   | Actual   | Config     | Actual     | Config      | Actual      | Config  | Actual  | Config  | Actual  |
|         |        |        |          |        | Partition | Partition | Group    | Group    | Role       | Role       | Role        | Role        | Roles   | Roles   | Groups  | Groups  |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hana    | yes    | ok     |          |        |         1 |         1 | default  | default  | coordinator 1   | coordinator     | worker      | coordinator      | worker  | worker  | default | default |
| hanaw01 | no     | info   |          |        |         2 |         0 | default  | default  | subordinate      | subordinate      | worker      | standby     | worker  | standby | default | -       |
| hanaw02 | yes    | ok     |          |        |         3 |         3 | default  | default  | subordinate      | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw03 | yes    | ok     |          |        |         4 |         4 | default  | default  | coordinator 3   | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw04 | yes    | info   |          |        |         0 |         2 | default  | default  | coordinator 2   | subordinate      | standby     | subordinate       | standby | worker  | default | default |

overall host status: info
hdbadm@hana:/usr/sap/HDB/HDB00/exe/python_support>
```

### SAP HANA coordinator node failure
<a name="scenario2-scaleout"></a>

Check the status of the landscape before crashing the node.

```
hdbadm@hana:/usr/sap/HDB/HDB00/exe/python_support> python landscapeHostConfiguration.py
| Host    | Host   | Host   | Failover | Remove | Storage   | Storage   | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host    | Host    | Worker  | Worker  |
|         | Active | Status | Status   | Status | Config    | Actual    | Config   | Actual   | Config     | Actual     | Config      | Actual      | Config  | Actual  | Config  | Actual  |
|         |        |        |          |        | Partition | Partition | Group    | Group    | Role       | Role       | Role        | Role        | Roles   | Roles   | Groups  | Groups  |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hana    | yes    | ok     |          |        |         1 |         1 | default  | default  | coordinator 1   | coordinator     | worker      | coordinator      | worker  | worker  | default | default |
| hanaw01 | yes    | ok     |          |        |         2 |         2 | default  | default  | subordinate      | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw02 | yes    | ok     |          |        |         3 |         3 | default  | default  | subordinate      | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw03 | yes    | ok     |          |        |         4 |         4 | default  | default  | coordinator 3   | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw04 | yes    | ignore |          |        |         0 |         0 | default  | default  | coordinator 2   | subordinate      | standby     | standby     | standby | standby | default | -       |

overall host status: ok
hdbadm@hana:/usr/sap/HDB/HDB00/exe/python_support>
```

Use the following command on the coordinator node to simulate a failure by stopping the SAP HANA processes. In this case, the coordinator node is `hana`.

```
hdbadm@hana:/usr/sap/HDB/HDB00/exe/python_support> HDB kill
```

```
hdbadm@hana:/usr/sap/HDB/HDB00/exe/python_support> python landscapeHostConfiguration.py
nameserver hana:30001 not responding.
| Host    | Host   | Host   | Failover | Remove | Storage   | Storage   | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host    | Host    | Worker  | Worker  |
|         | Active | Status | Status   | Status | Config    | Actual    | Config   | Actual   | Config     | Actual     | Config      | Actual      | Config  | Actual  | Config  | Actual  |
|         |        |        |          |        | Partition | Partition | Group    | Group    | Role       | Role       | Role        | Role        | Roles   | Roles   | Groups  | Groups  |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hana    | no     | info   |          |        |         1 |         0 | default  | default  | coordinator 1   | subordinate      | worker      | standby     | worker  | standby | default | -       |
| hanaw01 | yes    | ok     |          |        |         2 |         2 | default  | default  | subordinate      | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw02 | yes    | ok     |          |        |         3 |         3 | default  | default  | subordinate      | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw03 | yes    | ok     |          |        |         4 |         4 | default  | default  | coordinator 3   | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw04 | yes    | info   |          |        |         0 |         1 | default  | default  | coordinator 2   | coordinator     | standby     | coordinator      | standby | worker  | default | default |

overall host status: info
hdbadm@hana:/usr/sap/HDB/HDB00/exe/python_support>
```

### SAP HANA coordinator node failure while other coordinator nodes act as subordinate nodes
<a name="scenario3-scaleout"></a>

Check the status of the landscape before testing.

```
hdbadm@hana:/usr/sap/HDB/HDB00/exe/python_support> python landscapeHostConfiguration.py
| Host    | Host   | Host   | Failover | Remove | Storage   | Storage   | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host    | Host    | Worker  | Worker  |
|         | Active | Status | Status   | Status | Config    | Actual    | Config   | Actual   | Config     | Actual     | Config      | Actual      | Config  | Actual  | Config  | Actual  |
|         |        |        |          |        | Partition | Partition | Group    | Group    | Role       | Role       | Role        | Role        | Roles   | Roles   | Groups  | Groups  |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hana    | yes    | ok     |          |        |         1 |         2 | default  | default  | coordinator 1   | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw01 | yes    | info   |          |        |         2 |         0 | default  | default  | subordinate      | subordinate      | worker      | standby     | worker  | standby | default | -       |
| hanaw02 | yes    | ok     |          |        |         3 |         4 | default  | default  | subordinate      | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw03 | yes    | ok     |          |        |         4 |         3 | default  | default  | coordinator 3   | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw04 | yes    | info   |          |        |         0 |         1 | default  | default  | coordinator 2   | coordinator     | standby     | coordinator      | standby | worker  | default | default |

overall host status: info
hdbadm@hana:/usr/sap/HDB/HDB00/exe/python_support>
```

Use the following command on the coordinator node to simulate a failure by stopping the SAP HANA processes. In this case, the coordinator node is `hanaw04`.

```
hdbadm@hanaw04:/usr/sap/HDB/HDB00> HDB kill
```

```
hdbadm@hana:/usr/sap/HDB/HDB00/exe/python_support> python landscapeHostConfiguration.py
| Host    | Host     | Host    | Failover         | Remove | Storage   | Storage   | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host    | Host    | Worker  | Worker  |
|         | Active   | Status  | Status           | Status | Config    | Actual    | Config   | Actual   | Config     | Actual     | Config      | Actual      | Config  | Actual  | Config  | Actual  |
|         |          |         |                  |        | Partition | Partition | Group    | Group    | Role       | Role       | Role        | Role        | Roles   | Roles   | Groups  | Groups  |
| ------- | -------- | ------- | ---------------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hana    | starting | warning |                  |        |         1 |         1 | default  | default  | coordinator 1   | coordinator     | worker      | coordinator      | worker  | worker  | default | default |
| hanaw01 | starting | warning |                  |        |         2 |         2 | default  | default  | subordinate      | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw02 | yes      | ok      |                  |        |         3 |         3 | default  | default  | subordinate      | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw03 | yes      | ok      |                  |        |         4 |         4 | default  | default  | coordinator 3   | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw04 | no       | warning | failover to hana |        |         0 |         0 | default  | default  | coordinator 2   | subordinate      | standby     | standby     | standby | standby | default | -       |

overall host status: warning
hdbadm@hana:/usr/sap/HDB/HDB00/exe/python_support> python landscapeHostConfiguration.py
| Host    | Host   | Host   | Failover | Remove | Storage   | Storage   | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host    | Host    | Worker  | Worker  |
|         | Active | Status | Status   | Status | Config    | Actual    | Config   | Actual   | Config     | Actual     | Config      | Actual      | Config  | Actual  | Config  | Actual  |
|         |        |        |          |        | Partition | Partition | Group    | Group    | Role       | Role       | Role        | Role        | Roles   | Roles   | Groups  | Groups  |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hana    | yes    | ok     |          |        |         1 |         1 | default  | default  | coordinator 1   | coordinator     | worker      | coordinator      | worker  | worker  | default | default |
| hanaw01 | yes    | ok     |          |        |         2 |         2 | default  | default  | subordinate      | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw02 | yes    | ok     |          |        |         3 |         3 | default  | default  | subordinate      | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw03 | yes    | ok     |          |        |         4 |         4 | default  | default  | coordinator 3   | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw04 | no     | ignore |          |        |         0 |         0 | default  | default  | coordinator 2   | subordinate      | standby     | standby     | standby | standby | default | -       |

overall host status: ok
hdbadm@hana:/usr/sap/HDB/HDB00/exe/python_support>
```