

# SAP HANA Environment Setup on AWS
<a name="std-sap-hana-environment-setup"></a>

 *Last updated: December 2022* 

This guide is part of a content series that provides detailed information about hosting, configuring, and using SAP technologies in the AWS Cloud. For the other guides in the series, ranging from overviews to advanced topics, see the [SAP on AWS Technical Documentation home page](https://aws.amazon.com/sap/docs/).

This document provides guidance on how to set up AWS resources and configure SUSE Linux Enterprise Server (SLES) and Red Hat Enterprise Linux (RHEL) operating systems to deploy SAP HANA on Amazon Elastic Compute Cloud (Amazon EC2) instances in an existing virtual private cloud (VPC). It includes instructions for configuring storage for scale-up and scale-out workloads with Amazon Elastic Block Store (Amazon EBS), Amazon Elastic File System (Amazon EFS), and Amazon FSx for NetApp ONTAP (FSx for ONTAP).

This document follows AWS best practices to ensure that your system meets all key performance indicators (KPIs) that are required for Tailored Data Center Integration (TDI)-based SAP HANA implementations on AWS. It also follows recommendations provided by SAP, SUSE, and Red Hat for SAP HANA in the following SAP OSS Notes (SAP portal access required).
+  [1944799 - SAP HANA Guidelines for SLES Operating System Installation](https://me.sap.com/notes/1944799) 
+  [2205917 - SAP HANA DB: Recommended OS settings for SLES 12 / SLES for SAP Applications 12](https://me.sap.com/notes/2205917) 
+  [2684254 - SAP HANA DB: Recommended OS settings for SLES 15 / SLES for SAP Applications 15](https://me.sap.com/notes/2684254) 
+  [2009879 - SAP HANA Guidelines for Red Hat Enterprise Linux (RHEL) Operating System](https://me.sap.com/notes/2009879) 
+  [2292690 - SAP HANA DB: Recommended OS settings for RHEL 7](https://me.sap.com/notes/2292690) 
+  [2777782 - SAP HANA DB: Recommended OS Settings for RHEL 8](https://me.sap.com/notes/2777782) 

**Note**  
SAP, SUSE, and Red Hat regularly update these OSS notes. Review the latest version of the OSS notes for up-to-date information before proceeding.

This guide is intended for users who have a good understanding of AWS services, network concepts, the Linux operating system, and SAP HANA administration, and who need to launch and configure the resources required for SAP HANA.

 AWS Launch Wizard for SAP is a service that guides you through the sizing, configuration, and deployment of SAP HANA-based applications on AWS, following best practices from AWS, SAP, and operating system vendors, including SUSE and Red Hat. AWS Launch Wizard for SAP supports a wide range of deployment models, including SAP HANA database in scale-up and scale-out modes with cross-Availability Zone high availability. AWS Launch Wizard for SAP enables you to set up your SAP HANA-based systems in a few hours with minimal manual intervention. For more information, see [AWS Launch Wizard for SAP](https://docs.aws.amazon.com/launchwizard/latest/userguide/launch-wizard-sap.html).

If your organization can’t use AWS Launch Wizard for SAP for the deployment and you require additional customization to meet internal policies, you can follow the steps in this document to manually set up AWS resources such as Amazon EC2, Amazon EBS, Amazon EFS, and FSx for ONTAP by using the AWS Command Line Interface (AWS CLI) or the AWS Management Console.

This document doesn’t provide guidance on how to set up network and security constructs such as Amazon VPC, subnets, route tables, access control lists (ACLs), NAT Gateway, AWS Identity and Access Management (IAM) roles, security groups, etc. Instead, this document focuses on configuring compute, storage, and operating system resources for SAP HANA deployment on AWS.

# Prerequisites
<a name="prerequisites"></a>

## Specialized Knowledge
<a name="specialized-knowledge"></a>

If you are new to AWS, see [Getting Started with AWS](https://aws.amazon.com/getting-started/).

## Technical Requirements
<a name="technical-requirements"></a>

1. If necessary, [request a service limit increase](https://console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase) for the instance type that you’re planning to use for your SAP HANA system. If you already have an existing deployment that uses this instance type, and you think you might exceed the default limit with this deployment, you will need to request an increase. For details, see [Amazon EC2 Service Limits](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-resource-limits.html) in the AWS documentation.

1. Ensure that you have a key pair that you can use to launch your Amazon EC2 instance. If you need to create or import a key pair, refer to [Amazon EC2 Key Pairs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) in the AWS documentation.

1. Ensure that you have the network details of the VPC, such as VPC ID and subnet ID, where you plan to launch the Amazon EC2 instance that will host SAP HANA.

1. Ensure that you have a security group to attach to the Amazon EC2 instance that will host SAP HANA and that the required ports are open. If needed, create a new security group that allows the traffic for SAP HANA ports. For additional details on the list of ports, see [Security groups in AWS Launch Wizard for SAP](https://docs.aws.amazon.com/launchwizard/latest/userguide/launch-wizard-sap-security-groups.html).

1. If you intend to use AWS CLI to launch your instances, ensure that you have installed and configured AWS CLI with the necessary credentials. For details, see [Installing the AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/installing.html) in the AWS documentation.

1. If you intend to use the console to launch your instances, ensure that you have credentials and permissions to launch and configure Amazon EC2, Amazon EBS, and other services. For details, see [Access Management](https://docs.aws.amazon.com/IAM/latest/UserGuide/access.html) in the AWS documentation.

# Plan the deployment
<a name="planning-the-deployment"></a>

Consider the following when planning your SAP HANA deployment.

**Topics**
+ [Compute](#compute)
+ [Operating System](#operating-system)
+ [Amazon Machine Image (AMI)](#amazon-machine-image-ami)
+ [Storage](#storage)
+ [Network](#network)

## Compute
<a name="compute"></a>

 AWS provides multiple instance families with different sizes to run SAP HANA workloads. See the SAP [Certified and Supported SAP HANA Hardware Directory](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:23) and the [Amazon EC2 Instance Types for SAP](https://aws.amazon.com/sap/instance-types/) page to find the list of certified Amazon EC2 instances. For your production workloads, ensure that you choose an instance type that has been certified by SAP. You can run your non-production workloads on any size of a particular certified instance family to save costs.

## Operating System
<a name="operating-system"></a>

You can deploy your SAP HANA workload on SLES, SLES for SAP, RHEL for SAP with High Availability and Update Services (HA and US), or RHEL for SAP Solutions.

SLES for SAP and RHEL for SAP with High Availability and Update Services are available in AWS Marketplace under an hourly or an annual subscription model.

**SLES for SAP**  
SLES for SAP provides additional benefits, including Extended Service Pack Overlap Support (ESPOS), configuration and tuning packages for SAP applications, and High Availability Extension (HAE). See the SUSE [SLES for SAP product page](https://www.suse.com/products/sles-for-sap/) to learn more about these benefits. We strongly recommend using SLES for SAP instead of SLES for all your SAP workloads.

If you plan to use Bring Your Own Subscription (BYOS) images provided by SUSE, ensure that you have the registration code required to register your instance with SUSE to access repositories for software updates.

**RHEL for SAP**  
RHEL for SAP with High Availability and Update Services provides access to the Red Hat Pacemaker cluster software for high availability, extended update support, and the libraries that are required to run SAP HANA. For details, see the [RHEL for SAP Offerings on AWS FAQ](https://access.redhat.com/articles/3671571) in the Red Hat Knowledgebase.

If you plan to use the BYOS model with RHEL, either through the [Red Hat Cloud Access](https://access.redhat.com/articles/3490141) program or another means, ensure that you have access to a RHEL for SAP Solutions subscription. For details, see [Introduction to Red Hat Enterprise Linux for SAP Solutions](https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions) in the Red Hat documentation.

## Amazon Machine Image (AMI)
<a name="amazon-machine-image-ami"></a>

A base AMI is required to launch an Amazon EC2 instance. Depending on your choice of operating system, ensure that you have access to the appropriate AMI in your target region for the deployment.

If you plan to use the SLES for SAP or RHEL for SAP Amazon Machine Images (AMIs) offered in AWS Marketplace, ensure that you have completed the subscription process. You can search for *SLES for SAP* or *RHEL for SAP* in the AWS Marketplace, and follow the instructions to complete your subscription.

If you are using AWS CLI, you will need to provide the AMI ID when you launch the instance.

## Storage
<a name="storage"></a>

Deploying SAP HANA on AWS requires specific storage sizes and performance levels to ensure that SAP HANA data and log volumes meet both the SAP KPIs and sizing recommendations. Refer to the [SAP HANA on AWS Operations Guide](https://docs.aws.amazon.com/sap/latest/sap-hana/hana-ops-storage-config.html) to understand the storage configuration details for different instance types. You need to configure your storage based on these recommendations during instance launch. If you plan to use FSx for ONTAP storage, see [SAP HANA on AWS with FSx for ONTAP](https://docs.aws.amazon.com/sap/latest/sap-hana/sap-hana-amazon-fsx.html) for more details.

## Network
<a name="network"></a>

Ensure that your network constructs are set up to deploy resources related to SAP HANA. If you haven’t already set up network components such as Amazon VPC, subnets, route table, etc., you can use the AWS Modular and Scalable VPC reference deployment to easily deploy a scalable VPC architecture in minutes. For details, see the [reference deployment guide](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-getting-started.html).

# Configure the operating system
<a name="operating-system-configuration"></a>

This section includes instructions for configuring your operating system for SAP HANA.

**Topics**
+ [SLES 12/15](configure-operating-system-sles-for-sap-12.x.md)
+ [RHEL 7/8/9](configure-operating-system-rhel-for-sap-7.x.md)

**Note**  
For scale-out workloads, you must repeat these steps for every node in the cluster.

# Configure SLES 12/15 for SAP
<a name="configure-operating-system-sles-for-sap-12.x"></a>

**Important**  
In the following steps, you need to update several configuration files. We recommend taking a backup of the files before you modify them. This will help you to revert to the previous configuration if needed.

1. After your instance is up and running, connect to the instance by using Secure Shell (SSH) and the key pair that you used to launch the instance.
**Note**  
Depending on your network and security settings, you might have to first connect by using SSH to a bastion host before accessing your SAP HANA instance, or you might have to add IP addresses or ports to the security group to allow SSH access.

1. Switch to root user.

   Alternatively, you can use `sudo` to execute the following commands as ec2-user.

1. Set a hostname and fully qualified domain name (FQDN) for your instance by executing the `hostnamectl` command and updating the `/etc/hostname` file.

   ```
      hostnamectl set-hostname --static <your_hostname>
      echo <your_hostname.example.com> > /etc/hostname
   ```

   Open a new session to verify the hostname change.

1. Ensure that the `DHCLIENT_SET_HOSTNAME` parameter is set to **no** to prevent DHCP from changing the hostname during restart.

   ```
      grep DHCLIENT_SET_HOSTNAME /etc/sysconfig/network/dhcp
   ```
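
   If the value is not **no**, the parameter needs to be changed in `/etc/sysconfig/network/dhcp`. The following sketch shows the substitution on a sample line (a hypothetical value), so you can confirm the pattern before applying it to the real file with `sed -i`:

   ```
      # Preview of the substitution; apply the same expression with
      # sed -i to /etc/sysconfig/network/dhcp once verified.
      echo 'DHCLIENT_SET_HOSTNAME="yes"' | sed 's/^DHCLIENT_SET_HOSTNAME=.*/DHCLIENT_SET_HOSTNAME="no"/'
   ```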

1. Set the `preserve_hostname` parameter to true to ensure your hostname is preserved during restart.

   ```
      sed -i '/preserve_hostname/ c\preserve_hostname: true' /etc/cloud/cloud.cfg
   ```

1. Add an entry to the `/etc/hosts` file with the new hostname and IP address.

   ```
     <ip_address> <hostname.example.com> <hostname>
   ```

1. If you are using a BYOS SLES for SAP image, register your instance with SUSE. Ensure that your subscription is for SLES for SAP.

   ```
      SUSEConnect -r <Your_Registration_Code>
      SUSEConnect -s
   ```

1. Ensure that the following packages are installed:

    `systemd`, `tuned`, `saptune`, `libgcc_s1`, `libstdc++6`, `cpupower`, `autofs`, `nvme-cli`, `libssh2-1`, `libopenssl1_0_0` 

   You can use the `rpm` command to check whether a package is installed.

   ```
      rpm -qi <package_name>
   ```

   You can then use the `zypper install` command to install the missing packages.

   ```
      zypper install <package_name>
   ```
**Note**  
If you are importing your own SLES image, additional packages might be required to ensure that your instance is optimally set up. For the latest information, refer to the Package List section in the SLES for SAP Application Configuration Guide for SAP HANA, which is attached to SAP OSS Note [1944799](https://me.sap.com/notes/1944799).

1. Ensure that your instance is running on a kernel version that is recommended in SAP OSS Note [2205917](https://me.sap.com/notes/2205917) or [2684254](https://me.sap.com/notes/2684254) depending on your version. If needed, update your system to meet the minimum kernel version. You can check the version of the kernel and other packages by using the following command:

   ```
      rpm -qi kernel*
   ```

1. Start the `saptune` daemon and set it to start automatically when the system reboots by using the following command.

   ```
      saptune daemon start
   ```

1. Check whether the `force_latency` parameter is set in the `saptune` configuration file.

   ```
      grep force_latency /usr/lib/tuned/saptune/tuned.conf
   ```

   If the parameter is set, skip the next step and proceed with activating the HANA profile with `saptune`.

1. Update the `saptune HANA` profile according to SAP OSS Note [2205917](https://me.sap.com/notes/2205917), and then run the following commands to create a custom profile for SAP HANA. This step is not required if the `force_latency` parameter is already set.

   ```
      mkdir /etc/tuned/saptune
      cp /usr/lib/tuned/saptune/tuned.conf /etc/tuned/saptune/tuned.conf
      sed -i "/\[cpu\]/ a force_latency=70" /etc/tuned/saptune/tuned.conf
      sed -i "s/script.sh/\/usr\/lib\/tuned\/saptune\/script.sh/" /etc/tuned/saptune/tuned.conf
   ```
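
   The `a` (append) expression in the third command places `force_latency=70` directly after the `[cpu]` section header. A minimal illustration of its effect on a sample configuration fragment (the setting shown is only an example):

   ```
      printf '[cpu]\ngovernor=performance\n' | sed '/\[cpu\]/ a force_latency=70'
   ```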

1. Switch the `tuned` profile to HANA and verify that all settings are configured appropriately.

   ```
      saptune solution apply HANA
      saptune solution verify HANA
   ```

1. Configure and start the Network Time Protocol (NTP) service. You can adjust the NTP server pool based on your requirements; for example:
**Note**  
Remove any existing invalid NTP server pools from `/etc/ntp.conf` before adding the following.

   ```
      echo "server 0.pool.ntp.org" >> /etc/ntp.conf
      echo "server 1.pool.ntp.org" >> /etc/ntp.conf
      echo "server 2.pool.ntp.org" >> /etc/ntp.conf
      echo "server 3.pool.ntp.org" >> /etc/ntp.conf
      systemctl enable ntpd.service
      systemctl start ntpd.service
   ```
**Tip**  
Instead of connecting to the global NTP server pool, you can connect to your internal NTP server if needed. Or you can use [Amazon Time Sync Service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html) to keep your system time in sync.

1. Set the clocksource to `tsc` by updating the `current_clocksource` file and the GRUB2 boot loader.

   ```
      echo "tsc" > /sys/devices/system/clocksource/*/current_clocksource
      cp /etc/default/grub /etc/default/grub.backup
      sed -i '/GRUB_CMDLINE_LINUX/ s|"| clocksource=tsc"|2' /etc/default/grub
      grub2-mkconfig -o /boot/grub2/grub.cfg
   ```
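
   The `2` flag at the end of the `sed` expression targets the second double quote on the line, so the new parameter lands just inside the closing quote. An illustration on a hypothetical kernel command line:

   ```
      echo 'GRUB_CMDLINE_LINUX="console=ttyS0 net.ifnames=0"' | sed '/GRUB_CMDLINE_LINUX/ s|"| clocksource=tsc"|2'
   ```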

1. Reboot your system for the changes to take effect.

1. Continue with [storage configuration for SAP HANA](configure-storage-for-sap-hana.md).

# Configure RHEL 7/8/9 for SAP
<a name="configure-operating-system-rhel-for-sap-7.x"></a>

**Important**  
In the following steps, you need to update several configuration files. We recommend taking a backup of the files before you modify them. This will help you to revert to the previous configuration if needed.

1. After your instance is up and running, connect to the instance by using Secure Shell (SSH) and the key pair that you used to launch the instance.
**Note**  
Depending on your network and security settings, you might have to first connect by using SSH to a bastion host before accessing your SAP HANA instance, or you might have to add IP addresses or ports to the security group to allow SSH access.

1. Switch to root user.

   Alternatively, you can use sudo to execute the following commands as ec2-user.

1. Set a hostname for your instance by executing the `hostnamectl` command and update the `/etc/cloud/cloud.cfg` file to ensure that your hostname is preserved during system reboots.

   ```
      hostnamectl set-hostname --static <your_hostname>
      echo "preserve_hostname: true" >> /etc/cloud/cloud.cfg
   ```

   Open a new session to verify the hostname change.

1. Add an entry to the `/etc/hosts` file with the new hostname and IP address.

   ```
     <ip address> <hostname.example.com> <hostname>
   ```

1. Ensure that the packages listed in the following SAP Notes (SAP portal access required) are installed:
   +  [SAP Note 2002167 - Red Hat Enterprise Linux 7.x: Installation and Upgrade](https://me.sap.com/notes/2002167) 
   +  [SAP Note 2772999 - Red Hat Enterprise Linux 8.x: Installation and Configuration](https://me.sap.com/notes/2772999) 
   +  [SAP Note 3108316 - Red Hat Enterprise Linux 9.x: Installation and Configuration](https://me.sap.com/notes/3108316) 

     Note that your instance should have access to the SAP HANA channel to install the libraries required for SAP HANA installations.

     You can use the `rpm` command to check whether a package is installed:

     ```
       rpm -qi <package_name>
     ```

     You can then install any missing packages by using the `yum -y install` command.

     ```
       yum -y install <package_name>
     ```
**Note**  
Depending on your base RHEL image, additional packages might be required to ensure that your instance is optimally set up. (You can skip this step if you are using the RHEL for SAP with HA and US image.) For the latest information, refer to the RHEL configuration guide that is attached to SAP OSS Note [2009879](https://me.sap.com/notes/2009879). Review the packages in the Install Additional Required Packages section and the Appendix: Required Packages for SAP HANA on RHEL 7 section.

1. Ensure that your instance is running on a kernel version that is recommended in SAP OSS Note [2292690](https://me.sap.com/notes/2292690), [2777782](https://me.sap.com/notes/2777782), or [3108302](https://me.sap.com/notes/3108302), depending on your RHEL version. If needed, update your system to meet the minimum kernel version. You can check the version of the kernel and other packages by using the following command.

   ```
   rpm -qi kernel*
   ```

1. Start the `tuned` daemon and use the following commands to set it to start automatically when the system reboots.

   ```
   systemctl start tuned
   
   systemctl enable tuned
   ```

1. Configure the `sap-hana` tuned profile to optimize your instance for SAP HANA workloads.

   Check whether the `force_latency` parameter is already set in the `/usr/lib/tuned/sap-hana/tuned.conf` file. If the parameter is set, execute the following commands to apply and activate the `sap-hana` profile.

   ```
   tuned-adm profile sap-hana
   tuned-adm active
   ```

   If the `force_latency` parameter is not set, execute the following steps to modify and activate the `sap-hana` profile.

   ```
   mkdir /etc/tuned/sap-hana
   cp /usr/lib/tuned/sap-hana/tuned.conf /etc/tuned/sap-hana/tuned.conf
   sed -i '/force_latency/ c\force_latency=70' /etc/tuned/sap-hana/tuned.conf
   tuned-adm profile sap-hana
   tuned-adm active
   ```

1. Disable Security-Enhanced Linux (SELinux) by running the following command. (Skip this step if you are using the RHEL for SAP with HA & US image.)

   ```
      sed -i 's/\(SELINUX=enforcing\|SELINUX=permissive\)/SELINUX=disabled/g' /etc/selinux/config
   ```
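
   To preview the effect before editing `/etc/selinux/config`, you can run the same substitution against a sample line:

   ```
      echo 'SELINUX=enforcing' | sed 's/\(SELINUX=enforcing\|SELINUX=permissive\)/SELINUX=disabled/g'
   ```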

1. Disable Transparent Hugepages (THP) at boot time by adding `transparent_hugepage=never` to the line that starts with `GRUB_CMDLINE_LINUX` in the `/etc/default/grub` file. Execute the following commands to add the required parameter and to re-configure GRUB. (Skip this step if you are using the RHEL for SAP with HA & US image.)

   ```
      sed -i '/GRUB_CMDLINE_LINUX/ s|"| transparent_hugepage=never"|2' /etc/default/grub
      cat /etc/default/grub
      grub2-mkconfig -o /boot/grub2/grub.cfg
   ```
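
   The trailing `2` in the `sed` expression replaces the second double quote on the line, appending the parameter just before the closing quote of the `GRUB_CMDLINE_LINUX` value. Shown here on a hypothetical line:

   ```
      echo 'GRUB_CMDLINE_LINUX="console=ttyS0"' | sed '/GRUB_CMDLINE_LINUX/ s|"| transparent_hugepage=never"|2'
   ```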

1. Add symbolic links by executing following commands. (Skip this step if you are using the RHEL for SAP with HA & US image.)

   ```
      ln -s /usr/lib64/libssl.so.10 /usr/lib64/libssl.so.1.0.1
      ln -s /usr/lib64/libcrypto.so.10 /usr/lib64/libcrypto.so.1.0.1
   ```

1. Configure and start the Network Time Protocol (NTP) service. You can adjust the NTP server pool based on your requirements. The following is just an example.
**Note**  
Remove any existing invalid NTP server pools from `/etc/ntp.conf` before adding the following.

   ```
      echo "server 0.pool.ntp.org" >> /etc/ntp.conf
      echo "server 1.pool.ntp.org" >> /etc/ntp.conf
      echo "server 2.pool.ntp.org" >> /etc/ntp.conf
      echo "server 3.pool.ntp.org" >> /etc/ntp.conf
      systemctl enable ntpd.service
      systemctl start ntpd.service
      systemctl restart systemd-timedated.service
   ```
**Tip**  
Instead of connecting to the global NTP server pool, you can connect to your internal NTP server if needed. Alternatively, you can also use [Amazon Time Sync Service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html) to keep your system time in sync.

1. Set the clocksource to `tsc` by updating the `current_clocksource` file and the GRUB2 boot loader.

   ```
      echo "tsc" > /sys/devices/system/clocksource/*/current_clocksource
      cp /etc/default/grub /etc/default/grub.backup
      sed -i '/GRUB_CMDLINE_LINUX/ s|"| clocksource=tsc"|2' /etc/default/grub
      grub2-mkconfig -o /boot/grub2/grub.cfg
   ```

1. For RHEL 9 only, disable LVM device persistence by using the following commands.

   ```
   sed -i'.bkp' -e 's/ use_devicesfile = 1/ use_devicesfile = 0/g' /etc/lvm/lvm.conf
   mv /etc/lvm/devices/system.devices /etc/lvm/devices/system.devices.bkp
   ```

1. Reboot your system for the changes to take effect.

1. After the reboot, log in as root and execute the `tuned-adm` command to verify that all SAP recommended settings are in place.

   ```
      tuned-adm verify
   ```

   The `tuned-adm verify` command creates a log file under `/var/log/tuned/tuned.log`. Review this log file and ensure that all checks have passed.

1. Continue with storage configuration.

# Configure storage
<a name="configure-storage"></a>

This section includes instructions for configuring your storage for SAP HANA.

**Topics**
+ [Storage architecture](architecture.md)
+ [Configure storage (EBS)](storage-configuration-ebs.md)
+ [Configure storage (FSx for ONTAP)](sap-hana-amazon-fsx.md)
+ [Configure storage (EFS)](configure-nfs-for-scale-out-workloads.md)

# Storage architecture
<a name="architecture"></a>

This section includes architecture diagrams for scale-up and scale-out environments for SAP HANA.

**Topics**
+ [Amazon FSx for NetApp ONTAP](architecture-fsx.md)

# Amazon FSx for NetApp ONTAP
<a name="architecture-fsx"></a>

The following architecture diagrams show different options for SAP HANA workloads using Amazon FSx for NetApp ONTAP.

**Topics**
+ [Scale-up environment](#fsx-scale-up)
+ [Scale-out environment](#fsx-scale-out)
+ [Single Availability Zone deployment](#fsx-single)
+ [Multi-Availability Zone deployment](#fsx-multi)

## Scale-up environment
<a name="fsx-scale-up"></a>

The following architecture diagram shows a scale-up environment for SAP HANA workloads using FSx for ONTAP.

![\[Diagram of a scale-up environment for SAP HANA workloads using FSx for ONTAP.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/scaleup.png)


## Scale-out environment
<a name="fsx-scale-out"></a>

The following architecture diagram shows a scale-out environment for SAP HANA workloads using FSx for ONTAP.

![\[Diagram of a scale-out environment for SAP HANA workloads using FSx for ONTAP.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/scaleout.png)


## Single Availability Zone deployment
<a name="fsx-single"></a>

The following architecture diagram shows a single Availability Zone deployment for SAP HANA workloads using FSx for ONTAP.

![\[Diagram of a single Availability Zone deployment for SAP HANA workloads using FSx for ONTAP.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/fsx-single-az.png)


## Multi-Availability Zone deployment
<a name="fsx-multi"></a>

The following architecture diagram shows a multi-Availability Zone deployment for SAP HANA workloads using FSx for ONTAP.

![\[Diagram of a multi-Availability Zone deployment for SAP HANA workloads using FSx for ONTAP.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/fsx-multi-az.png)


# Configure storage (Amazon EBS)
<a name="storage-configuration-ebs"></a>

This section explains how to deploy and configure SAP HANA scale-up and scale-out workloads with Amazon EBS.

**Topics**
+ [Calculate Requirements](hana-storage-config-ebs.md)
+ [Storage Reference](hana-storage-config-reference-layout.md)
+ [Deploy Workloads](deployment-steps-using-the-aws-management-console.md)
+ [Configure Filesystems](configure-storage-for-sap-hana.md)
+ [Architecture](architecture-ebs.md)
+ [Legacy Reference](hana-storage-legacy-ebs.md)

# Calculate EBS Storage Requirements for SAP HANA
<a name="hana-storage-config-ebs"></a>

## Overview
<a name="_overview"></a>

This guide provides storage configuration recommendations for SAP HANA workloads running on Amazon EC2. Learn how to configure Amazon EBS volumes to meet SAP’s storage key performance indicators (KPIs).

SAP HANA stores and processes its data primarily in memory, and provides protection against data loss by saving the data to persistent storage locations. To achieve optimal performance, the storage solution used for SAP HANA data and log volumes should meet SAP’s storage KPI requirements. AWS has worked with SAP to certify both Amazon EBS General Purpose SSD (gp2 and gp3) and Provisioned IOPS SSD (io1, io2 Block Express) storage solutions for SAP HANA workloads.

## New Amazon EBS Storage Guidelines for SAP HANA
<a name="_new_amazon_ebs_storage_guidelines_for_sap_hana"></a>

This document introduces a memory-based formula approach for storage sizing, replacing previous instance-specific recommendations. This change enables customers to better understand storage configuration logic and maintain greater control over performance optimization decisions.

The new guidance focuses on gp3 and io2 Block Express volumes as the current standard recommendation for all new deployments. While gp2 and io1 volumes remain supported for existing deployments, we recommend gp3 for new implementations due to its predictable performance and cost-effectiveness, with io2 Block Express as the upgrade path for systems requiring additional performance.

**Note**  
If your SAP HANA system was deployed using previous guidance, including the use of Launch Wizard, it is not necessary to change the configuration. Existing configurations based on previous recommendations continue to meet the necessary requirements.

## Testing with SAP HANA Hardware and Cloud Measurement Tools
<a name="_testing_with_sap_hana_hardware_and_cloud_measurement_tools"></a>

AWS has ensured that the storage configuration guidelines meet the key performance indicators (KPIs) for running SAP HANA. However, for workloads with high performance requirements or those that deviate from the standard recommendations, we strongly recommend validating the performance of your storage configuration with SAP HANA Hardware and Cloud Measurement Tools.

See:
+ SAP Note: [2493172 - SAP HANA Hardware and Cloud Measurement Tools](https://me.sap.com/notes/2493172) 
+ SAP Documentation: [SAP HANA Hardware and Cloud Measurement Tools](https://help.sap.com/viewer/product/HANA_HW_CLOUD_TOOLS/latest/en-US) 

## EBS Storage Volume Configurations
<a name="_ebs_storage_volume_configurations"></a>

This section provides the formulas and methodology for calculating SAP HANA storage requirements. The calculations factor in memory size and workload characteristics to determine appropriate volume sizing and performance configuration. Adjust these baseline recommendations based on your specific workload requirements and growth projections.

Refer to [SAP HANA EBS Storage Reference](hana-storage-config-reference-layout.md) for calculated requirements based on available memory.

**Topics**
+ [Root and SAP Binary Volumes](#root_and_sap)
+ [HANA Data Volume](#hana_data)
+ [HANA Log Volume](#hana_log)
+ [HANA Shared Volume](#hana_shared)
+ [HANA Backup Volume (Optional)](#hana_backup)
+ [When to Stripe Volumes](#_when_to_stripe_volumes)

**Important**  
Some EC2 instance types may include [instance storage](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html). This storage is ephemeral and must not be used for SAP HANA files. Configure Amazon EBS volumes for all SAP HANA storage requirements.

### Root and SAP Binary Volumes
<a name="root_and_sap"></a>

The root volume contains operating system files, configuration, and logs. The sizing recommendation is suitable for most SAP HANA deployments and sizes, but may vary based on your AMI strategy and non-SAP software requirements. Consider using a separate volume for the `/tmp` filesystem in high-usage environments.

The SAP binaries volume (default `/usr/sap/<SID>/`) contains common SAP executables and binaries.

**Storage Type**  
Use gp3 volumes for root storage. The baseline performance characteristics meet operating system requirements.

 **Size Calculation** 

```
root_and_sap_volume_size = 50 GiB
```

 **IOPS Formula** 

```
root_and_sap_iops_target = 3000 (fixed)
```

 **Throughput Formula** 

```
root_and_sap_throughput_target = 125 MB/s (fixed)
```

**Example**  
 *ANY Memory System Root Volume:* 
+ Size = 50 GiB
+ IOPS = 3000
+ Throughput = 125 MB/s

### HANA Data Volume
<a name="hana_data"></a>

The HANA data filesystem (default `/hana/data`) stores the persistent copy of the SAP HANA in-memory database. While SAP HANA operates with data in memory, this volume ensures data durability through regular savepoints. The storage must handle mixed workload patterns, including random reads and writes during normal operation and sequential patterns during savepoints, with consistently low latency to maintain database performance.

**Size Calculation**  
The data volume size requirements are derived from system memory size. While actual storage requirements depend on your specific workload, compression ratios, and growth projections, use the following calculation as a baseline. Consult SAP sizing tools for precise calculations.
+ SAP Documentation: [SAP Benchmark Sizing](https://www.sap.com/about/benchmark/sizing.html) 

```
data_volume_size = MROUND(memory_size * 1.2, 100)

Where:
- Size factor = 1.2
- Rounding factor = 100
```
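As a sketch, the sizing math can be checked with a few lines of Python. The `mround` helper (our name, mirroring the spreadsheet-style MROUND used above) rounds to the nearest multiple:

```python
def mround(value, multiple):
    """Round to the nearest multiple, half away from zero (spreadsheet MROUND)."""
    return int(value / multiple + 0.5) * multiple

def data_volume_size(memory_gib, size_factor=1.2, rounding=100):
    """Baseline HANA data volume size in GiB for a given system memory size."""
    return mround(memory_gib * size_factor, rounding)

# A 512 GiB system needs a 600 GiB data volume; a 4 TiB system needs 4,900 GiB.
print(data_volume_size(512))   # 600
print(data_volume_size(4096))  # 4900
```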

**Note**  
While SAP has updated its size factor recommendation from 1.2 to 1.5 to accommodate operational requirements, AWS maintains the 1.2 factor as the baseline for initial deployments. This cost-effective approach leverages the dynamic scaling capabilities of Amazon EBS: you can increase volume size online, without service interruption, as your storage needs grow.

**Storage Type Selection**
+ Use gp3 with custom IOPS/throughput up to volume limits
+ Consider io2 Block Express when requiring consistent sub-millisecond latency
+ For Xen based instances, use gp2 (striped) or io2 Block Express since gp3 may not meet the SAP HANA storage latency KPI for log writes.

 **IOPS Formula** 

```
data_iops_target = MROUND(7200 + (0.45 * memory_size), 100)

Where:
- Base IOPS = 7200
- IOPS factor = 0.45 per GiB of memory
- Rounding factor = 100
```
+ Large instances may require multiple volumes to achieve the specified `data_iops_target`. Refer to the striping guidelines below.
+ The minimum IOPS required to meet SAP HANA KPIs for Data is 7000.

 **Throughput Formula** 

```
data_throughput_target = MIN(MROUND(450 + (0.2 * memory_size), 125), 2000)

Where:
- Base throughput = 450 MB/s
- Throughput factor = 0.2 MB/s per GiB of memory
- Maximum throughput = 2000 MB/s (see exception)
- Rounding factor = 125
```
+ For large instances using gp3 volumes, a single volume might not achieve the required `data_throughput_target`. For more information about using multiple volumes, see [When to Stripe Volumes](#_when_to_stripe_volumes).
+ SAP’s minimum throughput requirement for HANA data volumes is 400 MB/s. The base throughput value of 450 MB/s in our formula ensures this SAP KPI is met with additional headroom for optimal performance.
+ Every instance type has its own Amazon EBS throughput maximum. For details, see [Amazon EBS-Optimized Instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-optimized.html) in the AWS documentation.
+ Exception: For instances of 32 TiB and larger (currently instance type u7inh-32tb.480xlarge), we recommend provisioning 4000 MB/s of throughput or higher. For all other instance sizes, if you need more than 2000 MB/s of throughput, you can adjust the maximum throughput value in the formula accordingly.
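Taken together, the IOPS and throughput targets can be sketched in Python (the helper and function names are ours, not part of any AWS tooling):

```python
def mround(value, multiple):
    """Round to the nearest multiple, half away from zero (spreadsheet MROUND)."""
    return int(value / multiple + 0.5) * multiple

def data_iops_target(memory_gib):
    """Baseline data volume IOPS: 7,200 base plus 0.45 per GiB of memory."""
    return mround(7200 + 0.45 * memory_gib, 100)

def data_throughput_target(memory_gib, cap=2000):
    """Baseline data volume throughput in MB/s, capped at 2,000 (see exception)."""
    return min(mround(450 + 0.2 * memory_gib, 125), cap)

print(data_iops_target(512), data_throughput_target(512))    # 7400 500
print(data_iops_target(4096), data_throughput_target(4096))  # 9000 1250
```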

**Volume Striping**  
Implement volume striping when you need to meet specific technical limits, performance requirements, or operational demands. Refer to the [When to Stripe Volumes](#_when_to_stripe_volumes) section for detailed guidance on when striping is appropriate.

For gp3 volumes, throughput is typically the first limit you’ll encounter. For io2 Block Express volumes, throughput is calculated as IOPS × I/O size. SAP HANA workloads typically use 256 KiB I/O operations; at this size, a single io2 Block Express volume can achieve 4,000 MB/s throughput with 16,000 IOPS. Given these capabilities, volume striping is not required for most HANA deployments on io2 Block Express. If higher throughput is needed, you can adjust the provisioned IOPS accordingly.

If implementing striping for data volumes, use a 256 KB stripe size to optimize for data operations.

**Examples**  
 *512 GiB Memory System HANA Data Volume:* 

**Example**  
+ Storage Type Selection = GP3
+ Size = MROUND((512 GiB * 1.2), 100) = 600 GiB
+ IOPS = MROUND(7,200 + (0.45 * 512), 100) = 7,400 IOPS
+ Throughput = MIN(MROUND(450 + (0.2 * 512), 125), 2,000) = 500 MB/s
+ Striping = Not required.

 *4 TiB Memory System HANA Data Volume:* 

**Example**  
+ Storage Type Selection = GP3
+ Size = MROUND((4,096 GiB * 1.2), 100) = 4,900 GiB
+ IOPS = MROUND(7,200 + (0.45 * 4,096), 100) = 9,000 IOPS
+ Throughput = MIN(MROUND(450 + (0.2 * 4,096), 125), 2,000) = 1,250 MB/s
+ Striping = Required for throughput. Consider 2 x 2,450 GiB filesystems, each with 4,500 IOPS and 625 MB/s throughput.

### HANA Log Volume
<a name="hana_log"></a>

The HANA log filesystem (default `/hana/log`) stores the redo log files that ensure data durability and consistency. This filesystem handles write-intensive workloads with high-frequency, small, sequential writes. Because log writes directly impact database response time and transaction performance, storage volumes require consistent sub-millisecond latency.

**Size Calculation**  
The log volume size requirements are derived from system memory size. Modifications can be made based on transaction volume and log backup frequency.

```
log_volume_size = MROUND((memory_size * 0.5),100)

Where:
- Minimum Size = 50 GiB
- Maximum Size = 500 GiB
- Rounding factor = 100
```
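The minimum and maximum bounds are applied after rounding. As a sketch in Python (the function names are ours):

```python
def mround(value, multiple):
    """Round to the nearest multiple, half away from zero (spreadsheet MROUND)."""
    return int(value / multiple + 0.5) * multiple

def log_volume_size(memory_gib):
    """Baseline HANA log volume size in GiB, clamped to the 50-500 GiB range."""
    size = mround(memory_gib * 0.5, 100)
    return max(50, min(size, 500))

print(log_volume_size(512))   # 300
print(log_volume_size(64))    # 50 (minimum applies)
print(log_volume_size(4096))  # 500 (maximum applies)
```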

**Storage Type Selection**
+ Use gp3 with custom IOPS/throughput up to volume limits
+ Consider io2 Block Express when requiring consistent sub-millisecond latency
+ For Xen based instances, use gp2 (striped) or io2 Block Express since gp3 may not meet the SAP HANA storage latency KPI for log writes.

 **IOPS Formula** 

```
log_iops_target = 3000

Where:
- Base IOPS = 3000
```
+ The minimum IOPS required to meet SAP HANA KPIs for Log is 3000.

 **Throughput Formula** 

```
log_throughput_target = MIN(MROUND(300 + (0.015 * memory_size), 300), 500)

Where:
- Base throughput = 300 MB/s
- Throughput factor = 0.015 MB/s per GiB of memory
- Maximum throughput = 500 MB/s
- Rounding factor = 300
```
+ SAP’s minimum throughput requirement for HANA log volumes is 250 MB/s. The base throughput value of 300 MB/s and the rounding factor in our formula ensure that this value remains constant across memory sizes and that the SAP KPI is met with additional headroom for optimal performance.

**Volume Striping**  
For log volumes, striping is generally not required to achieve the `log_throughput_target` when using gp3 or io2 Block Express volumes. Single volumes typically provide sufficient performance for log operations.

If implementing striping for log volumes, use a 64 KB stripe size to optimize for sequential write patterns typical of log operations. Refer to the [When to Stripe Volumes](#_when_to_stripe_volumes) section to understand where striping is required in order to achieve the throughput, IOPS or performance targets.

**Examples**  
 *512 GiB Memory System HANA Log Volume:* 

**Example**  
+ Storage Type Selection = GP3
+ Size = MROUND((512 GiB * 0.5), 100) = 300 GiB (within 500 GiB maximum)
+ IOPS = 3000
+ Throughput = MIN(MROUND(300 \$1 (0.015 \$1 512), 300), 500) = 300 MB/s
+ Striping = Not required.

### HANA Shared Volume
<a name="hana_shared"></a>

The HANA Shared filesystem (default `/hana/shared`) contains SAP HANA installation files, trace files, and shared configuration files.

**Note**  
This file system must be accessible to all nodes in scale-out deployments.

**Size Calculation**  
For single-node deployments:

```
shared_volume_size = MIN(memory_size, 1024)

Where:
- memory_size is system memory in GiB
- 1024 represents 1 TiB maximum
```

For scale-out deployments:

For scale-out SAP HANA systems, the `/hana/shared` filesystem requires disk space equal to one worker node’s memory for every four worker nodes in the deployment.

```
shared_volume_size = worker_node_memory * CEILING(number_of_worker_nodes/4)

Where:
- worker_node_memory is the memory size of a single worker node in GiB
- number_of_worker_nodes is the total number of worker nodes
- CEILING rounds up to the nearest whole number
```
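Both calculations can be sketched in Python (the function names are ours); `math.ceil` plays the role of CEILING:

```python
import math

def shared_volume_size_single(memory_gib):
    """Single-node deployments: system memory size, capped at 1 TiB (1,024 GiB)."""
    return min(memory_gib, 1024)

def shared_volume_size_scale_out(worker_memory_gib, worker_nodes):
    """Scale-out deployments: one worker's memory per group of four worker nodes."""
    return worker_memory_gib * math.ceil(worker_nodes / 4)

print(shared_volume_size_single(512))         # 512
print(shared_volume_size_scale_out(2048, 5))  # 4096 (5 nodes -> 2 groups of 4)
```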

 **Examples for scale-out deployments** 


|  |  |  |  | 
| --- |--- |--- |--- |
|  Worker Node Memory  |  Number of Nodes  |  Calculation  |  Required Size  | 
|  2 TiB  |  1-4 nodes  |  2048 * 1  |  2 TiB  | 
|  2 TiB  |  5-8 nodes  |  2048 * 2  |  4 TiB  | 
|  2 TiB  |  9-12 nodes  |  2048 * 3  |  6 TiB  | 
|  2 TiB  |  13-16 nodes  |  2048 * 4  |  8 TiB  | 

**Storage Type Selection**
+ GP3 provides the required performance characteristics for scale-up deployments
+ Amazon EFS is a viable option for both scale-up and scale-out deployments, providing shared access across all nodes with the required performance characteristics. For scale-out configurations, see [Configure Storage (EFS)](configure-nfs-for-scale-out-workloads.md) 

 **IOPS Formula** 

```
shared_iops_target = 3000

Where:
- Base IOPS = 3000 (fixed)
```

 **Throughput Formula** 

```
shared_throughput_target = 125

Where:
- Base throughput = 125 MB/s (fixed)
```

**Examples**  
 *512 GiB Memory System HANA Shared Volume:* 

**Example**  
+ Size = 512 GiB
+ IOPS = 3000
+ Throughput = 125 MB/s

### HANA Backup Volume (Optional)
<a name="hana_backup"></a>

The `/backup` filesystem provides local storage for SAP HANA file-based backups, including data and log backups. While local filesystem backups can be useful for non-critical systems or as a secondary backup option, they present several challenges in production environments:
+ An additional sync step is required to move backups to durable storage like Amazon S3
+ The recovery point objectives may be impacted if there is a disk or hardware failure
+ Careful management of local storage capacity is required via housekeeping and monitoring
+ In scale-out deployments, the volume needs to be accessible across all nodes

**Important**  
 AWS recommends using [AWS Backup for SAP HANA](https://docs.aws.amazon.com/sap/latest/sap-hana/aws-backint-agent-backup.html) or the [AWS Backint Agent](https://docs.aws.amazon.com/sap/latest/sap-hana/aws-backint-agent-backup.html) instead of file-based backups. These solutions provide direct backup to durable storage and simplify backup management.

**Size Calculation**  
The size of the backup volume depends heavily on system usage. Use the following as an initial baseline, and adapt it after deployment based on backup size, volume of change, retention of local copies, and contingency requirements.

```
backup_volume_size = memory_size * 3

Where:
- memory_size is system memory in GiB
```

**Storage Type Selection**
+ For single-node deployment, we recommend using Amazon EBS Throughput Optimized HDD (st1) volumes for SAP HANA to perform file-based backup. This volume type provides low-cost magnetic storage designed for large sequential workloads. SAP HANA uses sequential I/O with large blocks to back up the database, so st1 volumes provide a low-cost, high-performance option for this scenario. To learn more about st1 volumes, see [Amazon EBS Volume Types](https://docs.aws.amazon.com/ebs/latest/userguide/ebs-volume-types.html).
+ For multi-node deployment, we recommend using Amazon EFS for SAP HANA to perform file-based backup. It can support performance over 10 GB/sec and over 500,000 IOPS.

 **IOPS Formula** 

```
backup_iops_target = n/a
```

**Note**  
The st1 baseline is 500 IOPS, and this is not configurable. Backup operations typically depend more on throughput than on IOPS performance.

**Throughput Formula**  
For st1 volumes, use this formula as a starting point to determine the number of volumes needed for backup throughput. Adjust the final volume count based on your actual backup window requirements and performance monitoring data.

```
backup_volume_count = CEILING(memory_size / 6000)
backup_throughput_target = backup_volume_count * 500

Where:
- memory_size is system memory in GiB
- 6000 represents the GiB threshold for striping across additional volumes
- 500 is the maximum throughput in MB/s per st1 volume
```
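As a sketch in Python (the function names are ours), separating the volume count from the resulting aggregate throughput:

```python
import math

def backup_volume_count(memory_gib, stripe_threshold_gib=6000):
    """Number of st1 volumes to stripe: one per 6,000 GiB of system memory."""
    return math.ceil(memory_gib / stripe_threshold_gib)

def backup_throughput_target(memory_gib):
    """Aggregate backup throughput in MB/s, at 500 MB/s per st1 volume."""
    return backup_volume_count(memory_gib) * 500

print(backup_volume_count(4096), backup_throughput_target(4096))  # 1 500
```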

 **Examples** 


|  |  |  | 
| --- |--- |--- |
|  Worker Node Memory  |  Volumes  |  Throughput  | 
|  4 TiB  |  1  |  500 MB/s  | 
|  12 TiB  |  2  |  1000 MB/s  | 
|  24 TiB  |  4  |  2000 MB/s  | 

### When to Stripe Volumes
<a name="_when_to_stripe_volumes"></a>

Linux Logical Volume Management (LVM) striping distributes data across multiple EBS volumes to increase I/O performance. The striped volumes act as a single logical volume, with reads and writes distributed across all volumes in the stripe set.

Implement storage volume striping in the following scenarios:

Technical Limits  
+ Throughput requirements exceed single volume maximum (1,000 MB/s for gp3, 4,000 MB/s for io2 Block Express).
  + For io2 Block Express volumes, throughput is calculated as IOPS × I/O size. SAP HANA workloads typically use 256 KiB I/O operations; at this size, a single io2 Block Express volume can achieve 4,000 MB/s throughput with 16,000 IOPS. Given these capabilities, volume striping is not required for most HANA deployments on io2 Block Express. If higher throughput is needed, you can adjust the provisioned IOPS accordingly.
+ IOPS requirements exceed single volume maximum (16,000 for gp3, 256,000 for io2 Block Express)
+ Volume size requirements exceed single volume maximum (16 TiB for gp3, 64 TiB for io2 Block Express)

Operational Requirements  
+ Large data loads or backups that need to complete within specific time windows
+ Systems with memory sizes > 4 TiB where data operations exceed acceptable durations
+ High-throughput analytical workloads requiring sustained parallel I/O
+ Expected growth that will exceed single-volume limits

**Important**  
Before implementing striping, first consider using higher performance EBS volume types or adjusting the IOPS and throughput settings within single-volume limits. Striping requires volumes of the same type and size, and balanced I/O patterns to be effective.
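As an illustrative sketch only, the commands below create a two-volume LVM stripe set for a HANA data filesystem with the 256 KB stripe size recommended for data volumes (64 KB is suggested for log volumes). The device names, the volume group name, and the XFS filesystem choice are placeholder assumptions; verify attached device names with `lsblk` on your instance before running anything.

```shell
# Placeholder device names for two identically sized EBS volumes -- verify
# with lsblk, since NVMe device names vary by instance and attachment order.
pvcreate /dev/nvme1n1 /dev/nvme2n1
vgcreate vg_hana_data /dev/nvme1n1 /dev/nvme2n1

# -i 2: stripe across both physical volumes; -I 256: 256 KB stripe size.
lvcreate -n lv_hana_data -i 2 -I 256 -l 100%FREE vg_hana_data

# Create the filesystem and mount it at the default data path.
mkfs.xfs /dev/vg_hana_data/lv_hana_data
mkdir -p /hana/data
mount /dev/vg_hana_data/lv_hana_data /hana/data
```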

# SAP HANA EBS Storage Reference
<a name="hana-storage-config-reference-layout"></a>

**Important**  
These values serve as a starting point. For guidance on how to size and configure storage for your specific workload, including calculations and striping considerations, refer to [Calculate Requirements](hana-storage-config-ebs.md).

**Topics**
+ [Certified Instances - General](#general)
+ [Certified Instances - High Memory](#_certified_instances_high_memory)
+ [Suitable for Non-Production Use](#_suitable_for_non_production_use)

## Certified Instances - General
<a name="general"></a>

For systems with less than 2 TiB of memory, storage can typically be configured using standard Amazon EBS volumes. gp3 volumes usually balance price and performance for a variety of workloads, while io2 volumes should be considered when higher durability is required or to improve startup and EBS snapshot restore times.

Sample layouts are provided for the following memory configurations:

Memory sizes: [256 GiB](#mem-256), [384 GiB](#mem-384), [488 GiB](#mem-512), [512 GiB](#mem-512), [768 GiB](#mem-768), [976 GiB](#mem-1024), [1024 GiB](#mem-1024), [1536 GiB](#mem-1536), [2 TiB](#mem-2tb) 

### 256 GiB Memory Systems
<a name="mem-256"></a>

Applicable Instance Types: **r8i.8xlarge**, **r7i.8xlarge**, **r6i.8xlarge**, **r5.8xlarge**, **r5b.8xlarge**, **x2iedn.2xlarge**, **r4.8xlarge**, **r3.8xlarge**¹, **x1e.2xlarge**¹ 

Suggested Storage Configuration:


| System Configuration | Target Size (GiB) | Target IOPS | Target Throughput (MB/s) | Target Volume Type | Stripe Configuration | Comments | 
| --- | --- | --- | --- | --- | --- | --- | 
|  Root/OS  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  SAP Binaries  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  HANA Data  |  300  |  7,300  |  500  |  gp3/io2  |  Not Required  |  | 
|  HANA Log  |  100  |  3,000  |  300  |  gp3/io2  |  Not Required  |  | 
|  HANA Shared  |  256  |  3,000  |  125  |  gp3  |  |  For scale out, review formula or use EFS  | 
|  HANA Backup  |  -  |  -  |  -  |  st1/efs  |  |  Optional and Workload Dependent. Review [HANA Backup](hana-storage-config-ebs.md#hana_backup)   | 

 ¹ Xen instance types. We suggest migrating to a Nitro instance type.

### 384 GiB Memory Systems
<a name="mem-384"></a>

Applicable Instance Types: **r8i.12xlarge**, **r7i.12xlarge**, **r6i.12xlarge**, **r5.12xlarge**, **r5b.12xlarge** 

Suggested Storage Configuration:


| System Configuration | Target Size (GiB) | Target IOPS | Target Throughput (MB/s) | Target Volume Type | Stripe Configuration | Comments | 
| --- | --- | --- | --- | --- | --- | --- | 
|  Root/OS  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  SAP Binaries  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  HANA Data  |  500  |  7,400  |  500  |  gp3/io2  |  Not Required  |  | 
|  HANA Log  |  200  |  3,000  |  300  |  gp3/io2  |  Not Required  |  | 
|  HANA Shared  |  384  |  3,000  |  125  |  gp3  |  |  For scale out, review formula or use EFS  | 
|  HANA Backup  |  -  |  -  |  -  |  st1/efs  |  |  Optional and Workload Dependent. Review [HANA Backup](hana-storage-config-ebs.md#hana_backup)   | 

### 488 GiB / 512 GiB Memory Systems
<a name="mem-512"></a>

Applicable Instance Types: **r8i.16xlarge**, **r7i.16xlarge**, **r6i.16xlarge**, **r5.16xlarge**, **r5b.16xlarge**, **x2iedn.4xlarge**, **r4.16xlarge**, **x1e.4xlarge** 

Suggested Storage Configuration:


| System Configuration | Target Size (GiB) | Target IOPS | Target Throughput (MB/s) | Target Volume Type | Stripe Configuration | Comments | 
| --- | --- | --- | --- | --- | --- | --- | 
|  Root/OS  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  SAP Binaries  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  HANA Data  |  600  |  7,400  |  500  |  gp3/io2  |  Not Required  |  | 
|  HANA Log  |  300  |  3,000  |  300  |  gp3/io2  |  Not Required  |  | 
|  HANA Shared  |  512  |  3,000  |  125  |  gp3  |  |  For scale out, review formula or use EFS  | 
|  HANA Backup  |  -  |  -  |  -  |  st1/efs  |  |  Optional and Workload Dependent. Review [HANA Backup](hana-storage-config-ebs.md#hana_backup)   | 

### 768 GiB Memory Systems
<a name="mem-768"></a>

Applicable Instance Types: **r8i.24xlarge**, **r7i.24xlarge**, **r6i.24xlarge**, **r5.24xlarge**, **r5.metal**, **r5b.24xlarge**, **r5b.metal**, **x8i.12xlarge** 

Suggested Storage Configuration:


| System Configuration | Target Size (GiB) | Target IOPS | Target Throughput (MB/s) | Target Volume Type | Stripe Configuration | Comments | 
| --- | --- | --- | --- | --- | --- | --- | 
|  Root/OS  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  SAP Binaries  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  HANA Data  |  900  |  7,500  |  625  |  gp3/io2  |  Not Required  |  | 
|  HANA Log  |  400  |  3,000  |  300  |  gp3/io2  |  Not Required  |  | 
|  HANA Shared  |  768  |  3,000  |  125  |  gp3  |  |  For scale out, review formula or use EFS  | 
|  HANA Backup  |  -  |  -  |  -  |  st1/efs  |  |  Optional and Workload Dependent. Review [HANA Backup](hana-storage-config-ebs.md#hana_backup)   | 

### 976 GiB / 1024 GiB Memory Systems
<a name="mem-1024"></a>

Applicable Instance Types: **x2idn.16xlarge**, **r6i.32xlarge**, **x1.16xlarge**¹, **x8i.16xlarge** 

Suggested Storage Configuration:


| System Configuration | Target Size (GiB) | Target IOPS | Target Throughput (MB/s) | Target Volume Type | Stripe Configuration | Comments | 
| --- | --- | --- | --- | --- | --- | --- | 
|  Root/OS  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  SAP Binaries  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  HANA Data  |  1,200  |  7,700  |  625  |  gp3/io2  |  Not Required  |  | 
|  HANA Log  |  500  |  3,000  |  300  |  gp3/io2  |  Not Required  |  | 
|  HANA Shared  |  1,024  |  3,000  |  125  |  gp3  |  |  For scale out, review formula or use EFS  | 
|  HANA Backup  |  -  |  -  |  -  |  st1/efs  |  |  Optional and Workload Dependent. Review [HANA Backup](hana-storage-config-ebs.md#hana_backup)   | 

 ¹ Xen instance types. We suggest migrating to a Nitro instance type.

### 1,536 GiB Memory Systems
<a name="mem-1536"></a>

Applicable Instance Types: **r8i.48xlarge**, **x2idn.24xlarge**, **r7i.48xlarge**, **x8i.24xlarge** 

Suggested Storage Configuration:


| System Configuration | Target Size (GiB) | Target IOPS | Target Throughput (MB/s) | Target Volume Type | Stripe Configuration | Comments | 
| --- | --- | --- | --- | --- | --- | --- | 
|  Root/OS  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  SAP Binaries  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  HANA Data  |  1,800  |  7,900  |  750  |  gp3/io2  |  Not Required  |  | 
|  HANA Log  |  500  |  3,000  |  300  |  gp3/io2  |  Not Required  |  | 
|  HANA Shared  |  1,024  |  3,000  |  125  |  gp3  |  |  For scale out, review formula or use EFS  | 
|  HANA Backup  |  -  |  -  |  -  |  st1/efs  |  |  Optional and Workload Dependent. Review [HANA Backup](hana-storage-config-ebs.md#hana_backup)   | 

### 2 TiB Memory Systems
<a name="mem-2tb"></a>

Applicable Instance Types: **x2idn.32xlarge**, **x1.32xlarge**, **x8i.32xlarge** 

Suggested Storage Configuration:


| System Configuration | Target Size (GiB) | Target IOPS | Target Throughput (MB/s) | Target Volume Type | Stripe Configuration | Comments | 
| --- | --- | --- | --- | --- | --- | --- | 
|  Root/OS  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  SAP Binaries  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  HANA Data  |  2,500  |  8,100  |  875  |  gp3/io2  |  Not Required  |  | 
|  HANA Log  |  500  |  3,000  |  300  |  gp3/io2  |  Not Required  |  | 
|  HANA Shared  |  1,024  |  3,000  |  125  |  gp3  |  |  For scale out, review formula or use EFS  | 
|  HANA Backup  |  -  |  -  |  -  |  st1/efs  |  |  Optional and Workload Dependent. Review [HANA Backup](hana-storage-config-ebs.md#hana_backup)   | 

## Certified Instances - High Memory
<a name="_certified_instances_high_memory"></a>

Storage configuration for high memory systems requires careful planning to meet increased I/O demands. Multiple EBS volumes in striped configurations, or io2 volumes, may be required to meet the higher IOPS and throughput demands, particularly for data volumes. As with smaller systems, durability, startup times, and snapshot restore times should also be considered.

Sample layouts are provided for the following memory configurations:

Memory sizes: [3 TiB](#mem-3tb), [4 TiB](#mem-4tb), [6 TiB](#mem-6tb), [8 TiB](#mem-8tb), [9 TiB](#mem-9tb), [12 TiB](#mem-12tb), [16 TiB](#mem-16tb), [18 TiB](#mem-18tb), [24 TiB](#mem-24tb), [32 TiB](#mem-32tb) 

### 3 TiB Memory Systems
<a name="mem-3tb"></a>

Applicable Instance Types: **r8i.96xlarge**, **x2iedn.24xlarge**, **x8i.48xlarge**, **u-3tb1.56xlarge** 

Suggested Storage Configuration:


| System Configuration | Target Size (GiB) | Target IOPS | Target Throughput (MB/s) | Target Volume Type | Stripe Configuration | Comments | 
| --- | --- | --- | --- | --- | --- | --- | 
|  Root/OS  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  SAP Binaries  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  HANA Data  |  3,700  |  8,600  |  1,125  |  gp3/io2  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sap/latest/sap-hana/hana-storage-config-reference-layout.html)  |  | 
|  HANA Log  |  500  |  3,000  |  300  |  gp3/io2  |  Not Required  |  | 
|  HANA Shared  |  1,024  |  3,000  |  125  |  gp3  |  |  For scale out, review formula or use EFS  | 
|  HANA Backup  |  -  |  -  |  -  |  st1/efs  |  |  Optional and Workload Dependent. Review [HANA Backup](hana-storage-config-ebs.md#hana_backup)   | 

### 4 TiB Memory Systems
<a name="mem-4tb"></a>

Applicable Instance Types: **x2iedn.32xlarge**, **x1e.32xlarge**, **x8i.64xlarge** 

Suggested Storage Configuration:


| System Configuration | Target Size (GiB) | Target IOPS | Target Throughput (MB/s) | Target Volume Type | Stripe Configuration | Comments | 
| --- | --- | --- | --- | --- | --- | --- | 
|  Root/OS  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  SAP Binaries  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  HANA Data  |  4,900  |  9,000  |  1,250  |  gp3/io2  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sap/latest/sap-hana/hana-storage-config-reference-layout.html)  |  | 
|  HANA Log  |  500  |  3,000  |  300  |  gp3/io2  |  Not Required  |  | 
|  HANA Shared  |  1,024  |  3,000  |  125  |  gp3  |  |  For scale out, review formula or use EFS  | 
|  HANA Backup  |  -  |  -  |  -  |  st1/efs  |  |  Optional and Workload Dependent. Review [HANA Backup](hana-storage-config-ebs.md#hana_backup)   | 

### 6 TiB Memory Systems
<a name="mem-6tb"></a>

Applicable Instance Types: **x8i.96xlarge**, **u-6tb1.112xlarge**, **u-6tb1.56xlarge**, **u-6tb1.metal**, **u7i-6tb.112xlarge** 

Suggested Storage Configuration:


| System Configuration | Target Size (GiB) | Target IOPS | Target Throughput (MB/s) | Target Volume Type | Stripe Configuration | Comments | 
| --- | --- | --- | --- | --- | --- | --- | 
|  Root/OS  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  SAP Binaries  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  HANA Data  |  7,300  |  10,000  |  1,625  |  gp3/io2  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sap/latest/sap-hana/hana-storage-config-reference-layout.html)  |  | 
|  HANA Log  |  500  |  3,000  |  300  |  gp3/io2  |  Not Required  |  | 
|  HANA Shared  |  1,024  |  3,000  |  125  |  gp3  |  |  For scale out, review formula or use EFS  | 
|  HANA Backup  |  -  |  -  |  -  |  st1/efs  |  |  Optional and Workload Dependent. Review [HANA Backup](hana-storage-config-ebs.md#hana_backup)   | 

### 8 TiB Memory Systems
<a name="mem-8tb"></a>

Applicable Instance Types: **u7i-8tb.112xlarge** 

Suggested Storage Configuration:


| System Configuration | Target Size (GiB) | Target IOPS | Target Throughput (MB/s) | Target Volume Type | Stripe Configuration | Comments | 
| --- | --- | --- | --- | --- | --- | --- | 
|  Root/OS  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  SAP Binaries  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  HANA Data  |  9,800  |  10,900  |  2,000  |  gp3/io2  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sap/latest/sap-hana/hana-storage-config-reference-layout.html)  |  | 
|  HANA Log  |  500  |  3,000  |  300  |  gp3/io2  |  Not Required  |  | 
|  HANA Shared  |  1,024  |  3,000  |  125  |  gp3  |  |  For scale out, review formula or use EFS  | 
|  HANA Backup  |  -  |  -  |  -  |  st1/efs  |  |  Optional and Workload Dependent. Review [HANA Backup](hana-storage-config-ebs.md#hana_backup)   | 

### 9 TiB Memory Systems
<a name="mem-9tb"></a>

Applicable Instance Types: **u-9tb1.112xlarge**, **u-9tb1.metal** 

Suggested Storage Configuration:


| System Configuration | Target Size (GiB) | Target IOPS | Target Throughput (MB/s) | Target Volume Type | Stripe Configuration | Comments | 
| --- | --- | --- | --- | --- | --- | --- | 
|  Root/OS  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  SAP Binaries  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  HANA Data  |  11,100  |  11,300  |  2,000  |  gp3/io2  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sap/latest/sap-hana/hana-storage-config-reference-layout.html)  |  | 
|  HANA Log  |  500  |  3,000  |  300  |  gp3/io2  |  Not Required  |  | 
|  HANA Shared  |  1,024  |  3,000  |  125  |  gp3  |  |  For scale out, review formula or use EFS  | 
|  HANA Backup  |  -  |  -  |  -  |  st1/efs  |  |  Optional and Workload Dependent. Review [HANA Backup](hana-storage-config-ebs.md#hana_backup)   | 

### 12 TiB Memory Systems
<a name="mem-12tb"></a>

Applicable Instance Types: **u-12tb1.112xlarge**, **u-12tb1.metal**, **u7i-12tb.224xlarge** 

Suggested Storage Configuration:


| System Configuration | Target Size (GiB) | Target IOPS | Target Throughput (MB/s) | Target Volume Type | Stripe Configuration | Comments | 
| --- | --- | --- | --- | --- | --- | --- | 
|  Root/OS  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  SAP Binaries  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  HANA Data  |  14,700  |  12,700  |  2,000  |  gp3/io2  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sap/latest/sap-hana/hana-storage-config-reference-layout.html)  |  | 
|  HANA Log  |  500  |  3,000  |  500  |  gp3/io2  |  Not Required  |  | 
|  HANA Shared  |  1,024  |  3,000  |  125  |  gp3  |  |  For scale out, review formula or use EFS  | 
|  HANA Backup  |  -  |  -  |  -  |  st1/efs  |  |  Optional and Workload Dependent. Review [HANA Backup](hana-storage-config-ebs.md#hana_backup)   | 

### 16 TiB Memory Systems
<a name="mem-16tb"></a>

Applicable Instance Types: **u7in-16tb.112xlarge**, **u7in-16tb.224xlarge** 

Suggested Storage Configuration:


| System Configuration | Target Size (GiB) | Target IOPS | Target Throughput (MB/s) | Target Volume Type | Stripe Configuration | Comments | 
| --- | --- | --- | --- | --- | --- | --- | 
|  Root/OS  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  SAP Binaries  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  HANA Data  |  19,700  |  14,600  |  2,000  |  gp3/io2  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sap/latest/sap-hana/hana-storage-config-reference-layout.html)  |  | 
|  HANA Log  |  500  |  3,000  |  500  |  gp3/io2  |  Not Required  |  | 
|  HANA Shared  |  1,024  |  3,000  |  125  |  gp3  |  |  For scale out, review formula or use EFS  | 
|  HANA Backup  |  -  |  -  |  -  |  st1/efs  |  |  Optional and Workload Dependent. Review [HANA Backup](hana-storage-config-ebs.md#hana_backup)   | 

### 18 TiB Memory Systems
<a name="mem-18tb"></a>

Applicable Instance Types: **u-18tb1.112xlarge**, **u-18tb1.metal** 

Suggested Storage Configuration:


| System Configuration | Target Size (GiB) | Target IOPS | Target Throughput (MB/s) | Target Volume Type | Stripe Configuration | Comments | 
| --- | --- | --- | --- | --- | --- | --- | 
|  Root/OS  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  SAP Binaries  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  HANA Data  |  22,100  |  15,500  |  2,000  |  gp3/io2  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sap/latest/sap-hana/hana-storage-config-reference-layout.html)  |  | 
|  HANA Log  |  500  |  3,000  |  500  |  gp3/io2  |  Not Required  |  | 
|  HANA Shared  |  1,024  |  3,000  |  125  |  gp3  |  |  For scale out, review formula or use EFS  | 
|  HANA Backup  |  -  |  -  |  -  |  st1/efs  |  |  Optional and Workload Dependent. Review [HANA Backup](hana-storage-config-ebs.md#hana_backup)   | 

### 24 TiB Memory Systems
<a name="mem-24tb"></a>

Applicable Instance Types: **u7in-24tb.224xlarge**, **u-24tb1.metal** 

Suggested Storage Configuration:


| System Configuration | Target Size (GiB) | Target IOPS | Target Throughput (MB/s) | Target Volume Type | Stripe Configuration | Comments | 
| --- | --- | --- | --- | --- | --- | --- | 
|  Root/OS  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  SAP Binaries  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  HANA Data  |  29,500  |  18,300  |  2,000  |  gp3/io2 (io2 recommended)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sap/latest/sap-hana/hana-storage-config-reference-layout.html)  |  Throughput target can be met with 2 stripes for gp3, 3 recommended to reduce volume size.  | 
|  HANA Log  |  500  |  3,000  |  500  |  gp3/io2  |  Not Required  |  | 
|  HANA Shared  |  1,024  |  3,000  |  125  |  gp3  |  |  For scale out, review formula or use EFS  | 
|  HANA Backup  |  -  |  -  |  -  |  st1/efs  |  |  Optional and Workload Dependent. Review [HANA Backup](hana-storage-config-ebs.md#hana_backup)   | 

### 32 TiB Memory Systems
<a name="mem-32tb"></a>

Applicable Instance Types: **u7inh-32tb.480xlarge** 

Suggested Storage Configuration:


| System Configuration | Target Size (GiB) | Target IOPS | Target Throughput (MB/s) | Target Volume Type | Stripe Configuration | Comments | 
| --- | --- | --- | --- | --- | --- | --- | 
|  Root/OS  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  SAP Binaries  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  HANA Data  |  39,300  |  21,900  |  4,000  |  gp3/io2 (io2 recommended)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sap/latest/sap-hana/hana-storage-config-reference-layout.html)  |  | 
|  HANA Log  |  500  |  3,000  |  500  |  gp3/io2  |  Not Required  |  | 
|  HANA Shared  |  1,024  |  3,000  |  125  |  gp3  |  |  For scale out, review formula or use EFS  | 
|  HANA Backup  |  -  |  -  |  -  |  st1/efs  |  |  Optional and Workload Dependent. Review [HANA Backup](hana-storage-config-ebs.md#hana_backup)   | 

## Suitable for Non-Production Use
<a name="_suitable_for_non_production_use"></a>

While not SAP-certified, these configurations are suitable for small non-production environments where cost optimization is a priority. Storage targets listed represent minimum requirements and can be increased to improve performance or meet SAP storage KPIs.

Sample layouts are provided for the following memory configurations:

Memory sizes: [64 GiB](#mem-64), [128 GiB](#mem-128) 

### 64 GiB Memory Systems
<a name="mem-64"></a>

Applicable Instance Types: **r8i.2xlarge**, **r7i.2xlarge**, **r6i.2xlarge**, **r5.2xlarge**, **r5b.2xlarge**, **r4.2xlarge**¹, **r3.2xlarge**¹ 

Suggested Storage Configuration:


| System Configuration | Target Size (GiB) | Target IOPS | Target Throughput (MB/s) | Target Volume Type | Stripe Configuration | Comments | 
| --- | --- | --- | --- | --- | --- | --- | 
|  Root/OS  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  SAP Binaries  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  HANA Data  |  100  |  3,000  |  125  |  gp3  |  Not required  |  | 
|  HANA Log  |  50  |  3,000  |  125  |  gp3  |  Not required  |  | 
|  HANA Shared  |  64  |  3,000  |  125  |  gp3  |  |  For scale out, review formula or use EFS  | 
|  HANA Backup  |  -  |  -  |  -  |  st1/efs  |  |  Optional and Workload Dependent. Review [HANA Backup](hana-storage-config-ebs.md#hana_backup)   | 

¹ Xen instance types. We suggest migrating to a Nitro instance type.

### 128 GiB Memory Systems
<a name="mem-128"></a>

Applicable Instance Types: **r8i.4xlarge**, **x2iedn.xlarge**, **r7i.4xlarge**, **r6i.4xlarge**, **r5.4xlarge**, **r5b.4xlarge**, **x1e.xlarge**, **r4.4xlarge**¹, **r3.4xlarge**¹ 

Suggested Storage Configuration:


| System Configuration | Target Size (GiB) | Target IOPS | Target Throughput (MB/s) | Target Volume Type | Stripe Configuration | Comments | 
| --- | --- | --- | --- | --- | --- | --- | 
|  Root/OS  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  SAP Binaries  |  50  |  3,000  |  125  |  gp3  |  |  | 
|  HANA Data  |  200  |  7,300  |  500  |  gp3/io2  |  Not Required  |  | 
|  HANA Log  |  100  |  3,000  |  300  |  gp3/io2  |  Not Required  |  | 
|  HANA Shared  |  128  |  3,000  |  125  |  gp3  |  |  For scale out, review formula or use EFS  | 
|  HANA Backup  |  -  |  -  |  -  |  st1/efs  |  |  Optional and Workload Dependent. Review [HANA Backup](hana-storage-config-ebs.md#hana_backup)   | 

¹ Xen instance types. We suggest migrating to a Nitro instance type.

# Deploy SAP HANA Workloads with Amazon EBS Volumes
<a name="deployment-steps-using-the-aws-management-console"></a>

This topic explains how to assign Amazon EBS volumes when launching an Amazon EC2 instance. Choose one of the following methods.

**Example**  

1. Log in to the console with appropriate permissions and ensure that you have the right Region selected.

1. Choose **Services**, and then choose **EC2** (under **Compute**).

1. Choose **Launch Instance**.

1. In Section **Application and OS Images (Amazon Machine Images)**:
   + Choose a recently used AMI or **My AMIs** to search for your BYOS or custom AMI ID.
   + Choose **Browse more AMIs** to search for more AMIs from AWS, AWS Marketplace, and the community.

1. In Section **Choose an Instance Type**, select the instance type that you identified when [planning the deployment](planning-the-deployment.md#compute).

1. In Section **Key Pair (login)**, select an existing key pair if you have one. Otherwise, create a new key pair.

1. In Section **Network Settings** 
   + Select the VPC ID and subnet for the network.
   + Turn off the **Auto-assign Public IP** option.
   + Select **Security Groups** 
     + Choose **Select an existing security group** and select a security group, if you have one, to attach to your instance. Otherwise, choose **Create a new security group** and configure the **Type**, **Protocol**, **Port Range**, and the **Source IP address** from where you want to allow traffic to your SAP HANA instance. Refer to [Security groups in AWS Launch Wizard for SAP](https://docs.aws.amazon.com/launchwizard/latest/userguide/launch-wizard-sap-security-groups.html) for a list of ports that we recommend. You can change the port as needed to meet your security requirements.

1. In Section **Configure Storage** 
   + Choose **Advanced** to see extended details, and **Add new volume** to provision volumes for SAP binaries and for SAP HANA data, log, shared, and optionally backup. Ensure that you follow the guidance for size, IOPS, and throughput in [Calculate Requirements](hana-storage-config-ebs.md) or [Storage Reference](hana-storage-config-reference-layout.md).
   + If you are planning to deploy scale-out workloads, you can optionally include Amazon EFS or Amazon FSx file systems for SAP HANA shared and backup volumes.  
![\[Image of EC2 console showing the storage configuration\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/std-sap-hana-storage-configuration.png)

      **Figure 1: SAP HANA Storage Configuration with the console** 

1. In Section **Advanced Details**, review and modify the options to suit your workload.

1. Choose **Launch Instance**.

1. Your instance should be launching now with the selected configuration. After the instance is launched, you can proceed with the operating system and storage configuration steps.

1.  **Prepare Storage Configuration for SAP HANA** 

   Use the editor of your choice to create a .json file that contains block device mapping details similar to the following example, and save the file in a temporary directory. The example shows the block device mapping details for the x2iedn.24xlarge instance with gp3 volumes for SAP HANA data and log. Change the details depending on the instance and storage types that you intend to use for your deployment.

   ```
   [
   {"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":50,"VolumeType":"gp3","Iops":3000,"Throughput":125,"Encrypted":true,"DeleteOnTermination":true}},
   {"DeviceName":"/dev/sdb","Ebs":{"VolumeSize":50,"VolumeType":"gp3","Iops":3000,"Throughput":125,"Encrypted":true,"DeleteOnTermination":true}},
   {"DeviceName":"/dev/sdc","Ebs":{"VolumeSize":2300,"VolumeType":"gp3","Iops":3600,"Throughput":625,"Encrypted":true,"DeleteOnTermination":true}},
   {"DeviceName":"/dev/sdd","Ebs":{"VolumeSize":2300,"VolumeType":"gp3","Iops":3600,"Throughput":625,"Encrypted":true,"DeleteOnTermination":true}},
   {"DeviceName":"/dev/sde","Ebs":{"VolumeSize":500,"VolumeType":"gp3","Iops":3000,"Throughput":300,"Encrypted":true,"DeleteOnTermination":true}},
   {"DeviceName":"/dev/sdf","Ebs":{"VolumeSize":1024,"VolumeType":"gp3","Iops":3000,"Throughput":125,"Encrypted":true,"DeleteOnTermination":true}}
   ]
   ```

    **Notes** 
   + The device name for the root volume must match the root device name of the AMI that you are using. Query it with

     ```
     $ aws ec2 describe-images --image-ids ami-0123456789abcdef0 --query 'Images[].RootDeviceName' --output text
     ```
   + You may choose to set the `DeleteOnTermination` flag to `false` so that Amazon EBS volumes are not deleted when you terminate your Amazon EC2 instance. This helps preserve your data if the instance is terminated accidentally. After you terminate the instance, you need to manually delete the Amazon EBS volumes that were associated with it to stop incurring storage costs.
   + If you plan to deploy scale-out workloads, you can use Amazon EFS and Network File System (NFS) to mount the SAP HANA shared and backup volumes to your coordinator and subordinate nodes after deployment.
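
   Before launching, you can sanity-check the mapping file. The following is a minimal sketch, assuming the current per-volume gp3 ranges (3,000–16,000 IOPS and 125–1,000 MiB/s); the sample is abridged from the mapping above, and in practice you would read the saved .json file instead:

   ```python
   import json

   # Abridged sample mapping; in practice, read the saved .json file.
   SAMPLE = """[
   {"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":50,"VolumeType":"gp3","Iops":3000,"Throughput":125}},
   {"DeviceName":"/dev/sdc","Ebs":{"VolumeSize":2300,"VolumeType":"gp3","Iops":3600,"Throughput":625}}
   ]"""

   GP3_LIMITS = {"Iops": (3000, 16000), "Throughput": (125, 1000)}  # current gp3 ranges

   def validate_mappings(mappings):
       """Raise ValueError on duplicate device names or out-of-range gp3 settings."""
       seen = set()
       for m in mappings:
           name, ebs = m["DeviceName"], m["Ebs"]
           if name in seen:
               raise ValueError(f"duplicate device name: {name}")
           seen.add(name)
           if ebs.get("VolumeType") == "gp3":
               for key, (lo, hi) in GP3_LIMITS.items():
                   value = ebs.get(key, lo)
                   if not lo <= value <= hi:
                       raise ValueError(f"{name}: {key}={value} outside gp3 range {lo}-{hi}")

   validate_mappings(json.loads(SAMPLE))
   print("block device mappings look valid")
   ```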

1.  **Launch the Amazon EC2 instance** 

   Use the AWS CLI to launch the Amazon EC2 instance for SAP HANA, including Amazon EBS storage, in the VPC in your target AWS Region, using the information that you gathered during the preparation steps. For example:

   ```
   aws ec2 run-instances \
     --image-id ami-0123456789abcdef0 \
     --instance-type x2iedn.24xlarge \
     --count 1 \
     --region us-west-2 \
     --key-name my_key \
     --security-group-ids sg-0123456789abcdef0 \
     --subnet-id subnet-0123456789abcdef0 \
     --block-device-mappings file:///tmp/ebs_hana.json \
     --tag-specifications \
         'ResourceType=instance,Tags=[{Key=Name,Value=PRD-HANA01},{Key=Environment,Value=Production},{Key=SID,Value=PRD},{Key=ApplicationComponent,Value=HANA}]' \
         'ResourceType=volume,Tags=[{Key=Environment,Value=Production},{Key=SID,Value=PRD}]' \
     --ebs-optimized \
     --metadata-options "HttpTokens=required,HttpEndpoint=enabled"
   ```

    **Notes** 
   + This is a sample command only, with a focus on `block-device-mappings`. Review instance requirements separately. It can be helpful to explore the options in the console, and then generate and adjust the code to replicate the setup for future deployments.
   + The `iam-instance-profile` and `user-data` flags can be used to ensure connectivity via Systems Manager.
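
   The same launch can also be scripted with the AWS SDK for Python (boto3). The following sketch only assembles `run_instances` arguments that mirror the CLI example (tag specifications omitted for brevity); the identifiers are the same placeholders, and the actual call is commented out because it requires boto3 and valid credentials:

   ```python
   # Assemble run_instances arguments equivalent to the CLI example above.
   def build_run_instances_kwargs(image_id, instance_type, key_name,
                                  security_group_id, subnet_id, block_device_mappings):
       return {
           "ImageId": image_id,
           "InstanceType": instance_type,
           "MinCount": 1,
           "MaxCount": 1,
           "KeyName": key_name,
           "SecurityGroupIds": [security_group_id],
           "SubnetId": subnet_id,
           "BlockDeviceMappings": block_device_mappings,
           "EbsOptimized": True,
           "MetadataOptions": {"HttpTokens": "required", "HttpEndpoint": "enabled"},
       }

   kwargs = build_run_instances_kwargs(
       "ami-0123456789abcdef0", "x2iedn.24xlarge", "my_key",
       "sg-0123456789abcdef0", "subnet-0123456789abcdef0", [])
   # import boto3
   # boto3.client("ec2", region_name="us-west-2").run_instances(**kwargs)
   print(len(kwargs), "arguments prepared")
   ```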

# Configure SAP HANA Filesystems
<a name="configure-storage-for-sap-hana"></a>

## Overview
<a name="_overview"></a>

This guide explains how to configure Amazon EBS storage for SAP HANA on Amazon EC2. It covers volume identification, filesystem creation, and LVM configuration where required.

**Note**  
This guide uses NVMe device names (e.g., /dev/nvme1n1) which are standard on Nitro-based instances. On non-Nitro instances, devices will use different naming (e.g., /dev/sdb). Adjust commands according to your device names.

Before beginning configuration, verify that you have the following:
+ An EC2 instance with appropriate EBS volumes attached
+ Root or administrative access to the instance

## Identify Volumes
<a name="_identify_volumes"></a>

Identify block devices, their sizes, and associated volume IDs in order to assign them to the appropriate filesystems.

1.  **Run lsblk to view the associations** 

   As root, on the host, run the following:

   ```
   # lsblk -o NAME,SIZE,TYPE,FSTYPE,LABEL,PATH,SERIAL | sed 's/vol0/vol-0/g'
   ```

    *Example* 

   ```
   NAME         SIZE TYPE FSTYPE LABEL PATH           SERIAL
   nvme1n1      2.2T disk              /dev/nvme1n1   vol-0abc123def456789a
   nvme0n1       50G disk              /dev/nvme0n1   vol-0xyz987uvw654321b
   ├─nvme0n1p1    2M part              /dev/nvme0n1p1
   ├─nvme0n1p2   20M part vfat   EFI   /dev/nvme0n1p2
   └─nvme0n1p3   50G part xfs    ROOT  /dev/nvme0n1p3
   nvme4n1        1T disk              /dev/nvme4n1   vol-0pqr456mno789123c
   nvme2n1      2.2T disk              /dev/nvme2n1   vol-0jkl789ghi123456d
   nvme3n1      500G disk              /dev/nvme3n1   vol-0def456abc789123e
   ```

1.  **Record the Volume Associations** 

   Document the volume requirements and assignments in a structured format. This table helps ensure that you run the correct commands when setting up the volumes.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sap/latest/sap-hana/configure-storage-for-sap-hana.html)

1.  **Review or Assign Tags (optional)** 

   Tags help identify volumes in the AWS console and in API commands, and are particularly useful during maintenance, volume extensions, or backup and restore operations. Review existing tags, or add new ones, using the following command or the AWS console.

    *Example* 

   ```
   $ aws ec2 create-tags --resources vol-0abc123def456789a --tags Key=Name,Value="PRD - Hana Data Volume 1 of 2"
   ```

   Repeat for all volumes.
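
   The association step can also be scripted: `lsblk` supports JSON output, and the NVMe serial is the EBS volume ID with its hyphen dropped (which the `sed` filter above restores). The following sketch uses captured sample values; run `lsblk -o NAME,PATH,SERIAL --json` on the host to obtain real data:

   ```python
   import json

   # Captured from `lsblk -o NAME,PATH,SERIAL --json` (sample values)
   LSBLK_JSON = """{"blockdevices": [
     {"name": "nvme1n1", "path": "/dev/nvme1n1", "serial": "vol0abc123def456789a"},
     {"name": "nvme0n1", "path": "/dev/nvme0n1", "serial": "vol0xyz987uvw654321b"},
     {"name": "nvme3n1", "path": "/dev/nvme3n1", "serial": "vol0def456abc789123e"}
   ]}"""

   def volume_map(lsblk_output):
       """Map EBS volume IDs (vol-...) to device paths, restoring the hyphen lsblk drops."""
       devices = json.loads(lsblk_output)["blockdevices"]
       return {d["serial"].replace("vol0", "vol-0", 1): d["path"]
               for d in devices if d.get("serial")}

   print(volume_map(LSBLK_JSON)["vol-0abc123def456789a"])  # /dev/nvme1n1
   ```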

## Create Filesystems
<a name="_create_filesystems"></a>

Create filesystems according to whether or not striping has been identified as a requirement.

1.  **Configure Single Volumes** 

   When performance requirements can be met using a single volume (including capacity for growth), create XFS filesystems directly on the device.

    *Example* 

   ```
   # Create XFS filesystem with label for HANA Shared
   mkfs.xfs -f /dev/nvme4n1 -L HANA_SHARED
   
   # Create XFS filesystem with label for HANA Log
   mkfs.xfs -f /dev/nvme3n1 -L HANA_LOG
   ```
**Tip**  
Labels provide consistent device identification across instance restarts. You can add or change a label on an existing XFS filesystem using `xfs_admin -L LABEL_NAME /dev/device_name`. Always use labels in /etc/fstab by referencing `/dev/disk/by-label/LABEL_NAME`.

1.  **Configure Striped Volumes** 

   Logical Volume Management (LVM) manages storage in three layers: Physical Volumes (created with pvcreate) are the actual disks, Volume Groups (created with vgcreate) combine these disks into storage pools, and Logical Volumes (created with lvcreate) are virtual partitions that can span multiple disks for features like striping.

    *Example* 

   ```
   # Create physical volumes
   pvcreate /dev/nvme1n1 /dev/nvme2n1
   
   # Create volume group
   vgcreate vg_hana_data /dev/nvme1n1 /dev/nvme2n1
   
   # Create striped logical volume
   lvcreate -i 2 -I 256 -l 100%VG -n lv_hana_data vg_hana_data
   
   # Create XFS filesystem with label for HANA data
   mkfs.xfs -L HANA_DATA /dev/vg_hana_data/lv_hana_data
   ```
**Important**  
Use a 256 KB stripe size (`-I 256`) for data volumes.  
Use a 64 KB stripe size (`-I 64`) for log volumes.  
The `-i` parameter must match the number of physical volumes; in this example there are 2 volumes.
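
The striping rules above can be captured in a small helper that generates the command sequence for a given device list and stripe size. This is an illustrative sketch only, not part of the standard procedure; print and review the commands before running them as root:

```python
# Generate the LVM and mkfs command sequence for a striped HANA volume.
# Illustrative helper only; review the output before running it as root.
def lvm_commands(devices, vg, lv, stripe_kb, label):
    devs = " ".join(devices)
    return [
        f"pvcreate {devs}",                                                   # physical volumes
        f"vgcreate {vg} {devs}",                                              # volume group
        f"lvcreate -i {len(devices)} -I {stripe_kb} -l 100%VG -n {lv} {vg}",  # striped LV
        f"mkfs.xfs -L {label} /dev/{vg}/{lv}",                                # labeled filesystem
    ]

for cmd in lvm_commands(["/dev/nvme1n1", "/dev/nvme2n1"],
                        "vg_hana_data", "lv_hana_data", 256, "HANA_DATA"):
    print(cmd)
```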

## Create Mount Points
<a name="_create_mount_points"></a>

1.  **Create mount point directories and modify permissions** 

   ```
   # mkdir -p /hana/data /hana/log /hana/shared
   # chown <sid>adm:sapsys /hana/data /hana/log /hana/shared
   # chmod 750 /hana/data /hana/log /hana/shared
   ```

1.  **Configure fstab** 

   The fstab file controls how Linux filesystem partitions, remote filesystems, and block devices are mounted into the filesystem.

   Add the following entries to `/etc/fstab`:

    *Example* 

   ```
   # SAP HANA Storage Configuration
   /dev/disk/by-label/HANA_DATA       /hana/data       xfs    noatime,nodiratime,logbsize=256k       0  0
   /dev/disk/by-label/HANA_LOG        /hana/log        xfs    noatime,nodiratime,logbsize=256k       0  0
   /dev/disk/by-label/HANA_SHARED     /hana/shared     xfs    noatime,nodiratime,logbsize=256k       0  0
   ```

## Mount and Verify
<a name="_mount_and_verify"></a>

1.  **Mount all filesystems** 

   ```
   # mount -a
   ```

1.  **Verify your final configuration** 

   ```
   # lsblk -o NAME,SIZE,TYPE,FSTYPE,LABEL,PATH,SERIAL | sed 's/vol0/vol-0/g'
   ```

    *Example* 

   ```
   NAME                         SIZE TYPE FSTYPE      LABEL       PATH                                  SERIAL
   nvme0n1                       50G disk                         /dev/nvme0n1                          vol-0xyz987uvw654321b
   ├─nvme0n1p1                    2M part                         /dev/nvme0n1p1
   ├─nvme0n1p2                   20M part vfat        EFI         /dev/nvme0n1p2
   └─nvme0n1p3                   50G part xfs         ROOT        /dev/nvme0n1p3
   nvme1n1                      2.2T disk LVM2_member             /dev/nvme1n1                          vol-0abc123def456789a
   └─vg_hana_data-lv_hana_data  4.5T lvm  xfs         HANA_DATA   /dev/mapper/vg_hana_data-lv_hana_data
   nvme2n1                      2.2T disk LVM2_member             /dev/nvme2n1                          vol-0jkl789ghi123456d
   └─vg_hana_data-lv_hana_data  4.5T lvm  xfs         HANA_DATA   /dev/mapper/vg_hana_data-lv_hana_data
   nvme3n1                      500G disk xfs         HANA_LOG    /dev/nvme3n1                          vol-0def456abc789123e
   nvme4n1                        1T disk xfs         HANA_SHARED /dev/nvme4n1                          vol-0pqr456mno789123c
   ```

1.  **Restart System** 

   Verify all mount points are correct using `mount` and `df -h` before rebooting, as incorrect /etc/fstab entries can prevent successful system boot. Once confirmed, restart the operating system to ensure filesystem persistence before HANA installation.

# Architecture
<a name="architecture-ebs"></a>

The following architecture diagrams show scale-up and scale-out environments for SAP HANA workloads using Amazon EBS volumes.

**Topics**
+ [Scale-up environment](#ebs-scale-up)
+ [Scale-out environment](#ebs-scale-out)

## Scale-up environment
<a name="ebs-scale-up"></a>

The following architecture diagram shows a scale-up environment for SAP HANA workloads using Amazon EBS volumes.

![\[Configuration for scale-up SAP HANA workloads.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/std-sap-hana-scale-up-diagram.png)


## Scale-out environment
<a name="ebs-scale-out"></a>

The following architecture diagram shows a scale-out environment for SAP HANA workloads using Amazon EBS volumes.

![\[configuration for scale-out SAP HANA workloads.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/std-sap-hana-scale-out-diagram.png)


# Legacy storage configuration (Amazon EBS)
<a name="hana-storage-legacy-ebs"></a>

**Important**  
This page contains the previous instance-specific storage configuration tables for SAP HANA on Amazon EBS. This content is provided as a reference for existing deployments and is no longer being updated to include new instance types.  
For all new deployments, use the memory-based sizing approach in [Calculate EBS Storage Requirements](hana-storage-config-ebs.md) and the pre-calculated values in [SAP HANA EBS Storage Reference](hana-storage-config-reference-layout.md).  
If your SAP HANA system was deployed using the configurations below, including through Launch Wizard, it is not necessary to change the configuration. Existing configurations continue to meet the necessary requirements.

For multi-node deployments, storage volumes for SAP HANA data and logs are provisioned in the master and worker nodes.

In the following configurations, we intentionally kept the same storage configuration for SAP HANA data and log volumes for all R3, certain R4 and R5, and smaller X1e/X2iedn instance types so you can scale up from smaller instances to larger instances without having to reconfigure your storage.

**Note**  
The X1, X1e, X2idn, and X2iedn instance types include instance store volumes, which should not be used to persist any SAP HANA related files.

## `gp2` and `gp3` for HANA
<a name="gp2-gp3-hana-legacy"></a>

**Example**  


**Certified for production use**  

|  |  |  |  |  |  |  | 
| --- |--- |--- |--- |--- |--- |--- |
|   **Instance type**   |   **Memory (GiB)**   |   **vCPUs / logical processors**¹  |   **General Purpose SSD (gp2) storage with LVM**   |   **Total maximum throughput (MiB/s)**   |   **Total baseline IOPS**   |   **Total burst IOPS**   | 
|   **u-24tb1.112xlarge**   |  24,576  |  448  |  6 x 4,800 GiB  |  1,500  |  86,400  |  N/A  | 
|   **u-24tb1.metal**   |  24,576  |  448  |  6 x 4,800 GiB  |  1,500  |  86,400  |  N/A  | 
|   **u-18tb1.112xlarge**   |  18,432  |  448  |  6 x 3,600 GiB  |  1,500  |  64,800  |  N/A  | 
|   **u-18tb1.metal**   |  18,432  |  448  |  6 x 3,600 GiB  |  1,500  |  64,800  |  N/A  | 
|   **u-12tb1.112xlarge**   |  12,288  |  448  |  6 x 2,400 GiB  |  1,500  |  43,200  |  N/A  | 
|   **u-12tb1.metal**   |  12,288  |  448  |  6 x 2,400 GiB  |  1,500  |  43,200  |  N/A  | 
|   **u-9tb1.112xlarge**   |  9,216  |  448  |  6 x 1,800 GiB  |  1,500  |  32,400  |  N/A  | 
|   **u-9tb1.metal**   |  9,216  |  448  |  6 x 1,800 GiB  |  1,500  |  32,400  |  N/A  | 
|   **u7in-24tb.224xlarge**   |  24,576  |  896  |  6 x 4,800 GiB  |  1,500  |  86,400  |  N/A  | 
|   **u7in-16tb.224xlarge**   |  16,384  |  896  |  6 x 3,200 GiB  |  1,500  |  57,600  |  N/A  | 
|   **u7i-12tb.224xlarge**   |  12,288  |  896  |  6 x 2,400 GiB  |  1,500  |  43,200  |  N/A  | 
|   **u7i-8tb.112xlarge**   |  8,192  |  448  |  6 x 1,600 GiB  |  1,500  |  28,800  |  N/A  | 
|   **u7i-6tb.112xlarge**   |  6,144  |  448  |  6 x 1,200 GiB  |  1,500  |  21,600  |  N/A  | 
|   **u7inh-32tb.480xlarge**   |  32,768  |  1,920  |  6 x 6,400 GiB  |  1,500  |  96,000  |  N/A  | 
|   **u-6tb1.112xlarge**   |  6,144  |  448  |  6 x 1,200 GiB  |  1,500  |  21,600  |  N/A  | 
|   **u-6tb1.56xlarge**   |  6,144  |  224  |  6 x 1,200 GiB  |  1,500  |  21,600  |  N/A  | 
|   **u-6tb1.metal**   |  6,144  |  448  |  6 x 1,200 GiB  |  1,500  |  21,600  |  N/A  | 
|   **u-3tb1.56xlarge**   |  3,072  |  224  |  3 x 1,200 GiB  |  750  |  10,800  |  N/A  | 
|   **x2iedn.32xlarge**   |  4,096  |  128  |  3 x 1,600 GiB  |  750  |  14,400  |  N/A  | 
|   **x2iedn.24xlarge**   |  3,072  |  96  |  3 x 1,200 GiB  |  750  |  10,800  |  N/A  | 
|   **x2idn.32xlarge**   |  2,048  |  128  |  3 x 800 GiB  |  750  |  7,200  |  9,000  | 
|   **x2idn.24xlarge**   |  1,536  |  96  |  3 x 600 GiB  |  750  |  5,400  |  9,000  | 
|   **x2idn.16xlarge**   |  1,024  |  64  |  3 x 400 GiB  |  750  |  3,600  |  9,000  | 
|   **x1e.32xlarge**   |  3,904  |  128  |  3 x 1,600 GiB  |  750  |  14,400  |  N/A  | 
|   **x1.32xlarge**   |  1,952  |  128  |  3 x 800 GiB  |  750  |  7,200  |  9,000  | 
|   **x1.16xlarge**   |  976  |  64  |  3 x 400 GiB  |  750  |  3,600  |  9,000  | 
|   **r7i.48xlarge**   |  1,536  |  192  |  3 x 600 GiB  |  750  |  5,400  |  9,000  | 
|   **r7i.24xlarge**   |  768  |  96  |  3 x 400 GiB  |  750  |  3,600  |  9,000  | 
|   **r7i.16xlarge**   |  512  |  64  |  3 x 225 GiB  |  750  |  2,025  |  9,000  | 
|   **r7i.12xlarge**   |  384  |  48  |  3 x 225 GiB  |  750  |  2,025  |  9,000  | 
|   **r7i.8xlarge**   |  256  |  32  |  3 x 225 GiB  |  750  |  2,025  |  9,000  | 
|   **r6i.32xlarge**   |  1,024  |  128  |  3 x 400 GiB  |  750  |  3,600  |  9,000  | 
|   **r6i.24xlarge**   |  768  |  96  |  3 x 400 GiB  |  750  |  3,600  |  9,000  | 
|   **r6i.16xlarge**   |  512  |  64  |  3 x 225 GiB  |  750  |  2,025  |  9,000  | 
|   **r6i.12xlarge**   |  384  |  48  |  3 x 225 GiB  |  750  |  2,025  |  9,000  | 
|   **r6i.8xlarge**   |  256  |  32  |  3 x 225 GiB  |  750  |  2,025  |  9,000  | 
|   **r5.24xlarge**   |  768  |  96  |  3 x 400 GiB  |  750  |  3,600  |  9,000  | 
|   **r5.16xlarge**   |  512  |  64  |  3 x 225 GiB  |  750  |  2,025  |  9,000  | 
|   **r5.12xlarge**   |  384  |  48  |  3 x 225 GiB  |  750  |  2,025  |  9,000  | 
|   **r5.8xlarge**   |  256  |  32  |  3 x 225 GiB  |  750  |  2,025  |  9,000  | 
|   **r5.metal**   |  768  |  96  |  3 x 400 GiB  |  750  |  3,600  |  9,000  | 
|   **r5b.24xlarge**   |  768  |  96  |  3 x 400 GiB  |  750  |  3,600  |  9,000  | 
|   **r5b.16xlarge**   |  512  |  64  |  3 x 225 GiB  |  750  |  2,025  |  9,000  | 
|   **r5b.12xlarge**   |  384  |  48  |  3 x 225 GiB  |  750  |  2,025  |  9,000  | 
|   **r5b.8xlarge**   |  256  |  32  |  3 x 225 GiB  |  750  |  2,025  |  9,000  | 
|   **r5b.metal**   |  768  |  96  |  3 x 400 GiB  |  750  |  3,600  |  9,000  | 
|   **r4.16xlarge**   |  488  |  64  |  3 x 225 GiB  |  750  |  2,025  |  9,000  | 
|   **r4.8xlarge**   |  244  |  32  |  3 x 225 GiB  |  750  |  2,025  |  9,000  | 
|   **r3.8xlarge**   |  244  |  32  |  3 x 225 GiB  |  750  |  2,025  |  9,000  | 


**Supported for nonproduction use only**  

|  |  |  |  |  |  |  | 
| --- |--- |--- |--- |--- |--- |--- |
|   **Instance type**   |   **Memory (GiB)**   |   **vCPUs / logical processors**¹  |   **General Purpose SSD (gp2) storage with LVM**   |   **Total maximum throughput (MiB/s)**   |   **Total baseline IOPS**   |   **Total burst IOPS**   | 
|   **x2iedn.4xlarge**   |  512  |  16  |  3 x 225 GiB  |  750  |  2,025  |  9,000  | 
|   **x2iedn.2xlarge**   |  256  |  8  |  3 x 225 GiB  |  750  |  2,025  |  9,000  | 
|   **x2iedn.xlarge**   |  128  |  4  |  3 x 225 GiB  |  750  |  2,025  |  9,000  | 
|   **x1e.4xlarge**   |  488  |  16  |  3 x 225 GiB  |  750²  |  2,025  |  9,000  | 
|   **x1e.2xlarge**   |  244  |  8  |  3 x 225 GiB  |  750²  |  2,025  |  9,000  | 
|   **x1e.xlarge**   |  122  |  4  |  3 x 225 GiB  |  750²  |  2,025  |  9,000  | 
|   **r7i.4xlarge**   |  128  |  16  |  3 x 225 GiB  |  750  |  2,025  |  9,000  | 
|   **r7i.2xlarge**   |  64  |  8  |  3 x 225 GiB  |  750  |  2,025  |  9,000  | 
|   **r6i.4xlarge**   |  128  |  16  |  3 x 225 GiB  |  750  |  2,025  |  9,000  | 
|   **r6i.2xlarge**   |  64  |  8  |  3 x 225 GiB  |  750  |  2,025  |  9,000  | 
|   **r5.4xlarge**   |  128  |  16  |  3 x 225 GiB  |  750²  |  2,025  |  9,000  | 
|   **r5.2xlarge**   |  64  |  8  |  3 x 225 GiB  |  750²  |  2,025  |  9,000  | 
|   **r5b.4xlarge**   |  128  |  16  |  3 x 225 GiB  |  750  |  2,025  |  9,000  | 
|   **r5b.2xlarge**   |  64  |  8  |  3 x 225 GiB  |  750  |  2,025  |  9,000  | 
|   **r4.4xlarge**   |  122  |  16  |  3 x 225 GiB  |  750²  |  2,025  |  9,000  | 
|   **r4.2xlarge**   |  61  |  8  |  3 x 225 GiB  |  750²  |  2,025  |  9,000  | 
|   **r3.4xlarge**   |  122  |  16  |  3 x 225 GiB  |  750²  |  2,025  |  9,000  | 
|   **r3.2xlarge**   |  61  |  8  |  3 x 225 GiB  |  750²  |  2,025  |  9,000  | 
+ ¹ Each logical processor offered by Amazon EC2 High Memory Instances is a hyperthread on a physical CPU core.
+ ² This value represents the maximum throughput that could be achieved when striping multiple EBS volumes. Actual throughput depends on the instance type. Every instance type has its own Amazon EBS throughput maximum. For details, see [Amazon EBS-Optimized Instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html) in the AWS documentation.
+ ³ gp3-based configurations are only supported in production for Nitro-based instances, not for Xen-based instances, as SAP HANA HCMT storage tests may not meet the minimum required KPI for log writes.


**Certified for production use**  

|  |  |  |  |  |  |  | 
| --- |--- |--- |--- |--- |--- |--- |
|   **Instance type**   |   **Memory (GiB)**   |   **vCPUs / logical processors**¹  |   **General Purpose SSD (gp2) storage with LVM**   |   **Total maximum throughput (MiB/s)**   |   **Total baseline IOPS**   |   **Total burst IOPS**   | 
|   **u-24tb1.112xlarge**   |  24,576  |  448  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **u-24tb1.metal**   |  24,576  |  448  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **u-18tb1.112xlarge**   |  18,432  |  448  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **u-18tb1.metal**   |  18,432  |  448  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **u-12tb1.112xlarge**   |  12,288  |  448  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **u-12tb1.metal**   |  12,288  |  448  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **u-9tb1.112xlarge**   |  9,216  |  448  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **u-9tb1.metal**   |  9,216  |  448  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **u7in-24tb.224xlarge**   |  24,576  |  896  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **u7in-16tb.224xlarge**   |  16,384  |  896  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **u7i-12tb.224xlarge**   |  12,288  |  896  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **u7i-8tb.112xlarge**   |  8,192  |  448  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **u7i-6tb.112xlarge**   |  6,144  |  448  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **u7inh-32tb.480xlarge**   |  32,768  |  1,920  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **u-6tb1.112xlarge**   |  6,144  |  448  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **u-6tb1.56xlarge**   |  6,144  |  224  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **u-6tb1.metal**   |  6,144  |  448  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **u-3tb1.56xlarge**   |  3,072  |  224  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **x2iedn.32xlarge**   |  4,096  |  128  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **x2iedn.24xlarge**   |  3,072  |  96  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **x2idn.32xlarge**   |  2,048  |  128  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **x2idn.24xlarge**   |  1,536  |  96  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **x2idn.16xlarge**   |  1,024  |  64  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **x1e.32xlarge**   |  3,904  |  128  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **x1.32xlarge**   |  1,952  |  128  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **x1.16xlarge**   |  976  |  64  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **r7i.48xlarge**   |  1,536  |  192  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **r7i.24xlarge**   |  768  |  96  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **r7i.16xlarge**   |  512  |  64  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **r7i.12xlarge**   |  384  |  48  |  2 x 175 GiB  |  500²  |  1,050  |  6,000  | 
|   **r7i.8xlarge**   |  256  |  32  |  2 x 175 GiB  |  500²  |  1,050  |  6,000  | 
|   **r6i.32xlarge**   |  1,024  |  128  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **r6i.24xlarge**   |  768  |  96  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **r6i.16xlarge**   |  512  |  64  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **r6i.12xlarge**   |  384  |  48  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **r6i.8xlarge**   |  256  |  32  |  2 x 175 GiB  |  500  |  1,050  |  6,000  | 
|   **r5.24xlarge**   |  768  |  96  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **r5.16xlarge**   |  512  |  64  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **r5.12xlarge**   |  384  |  48  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **r5.8xlarge**   |  256  |  32  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **r5.metal**   |  768  |  96  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **r5b.24xlarge**   |  768  |  96  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **r5b.16xlarge**   |  512  |  64  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **r5b.12xlarge**   |  384  |  48  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **r5b.8xlarge**   |  256  |  32  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **r5b.metal**   |  768  |  96  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **r4.16xlarge**   |  488  |  64  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **r4.8xlarge**   |  244  |  32  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 
|   **r3.8xlarge**   |  244  |  32  |  2 x 300 GiB  |  500  |  1,800  |  6,000  | 


**Supported for nonproduction use only**  

|  |  |  |  |  |  |  | 
| --- |--- |--- |--- |--- |--- |--- |
|   **Instance type**   |   **Memory (GiB)**   |   **vCPUs / logical processors**   |   **General Purpose SSD (gp2) storage with LVM**   |   **Total maximum throughput (MiB/s)**   |   **Total baseline IOPS**   |   **Total burst IOPS**   | 
|   **x2iedn.4xlarge**   |  512  |  16  |  2 x 175 GiB  |  500²  |  1,050  |  6,000  | 
|   **x2iedn.2xlarge**   |  256  |  8  |  2 x 175 GiB  |  500²  |  1,050  |  6,000  | 
|   **x2iedn.xlarge**   |  128  |  4  |  2 x 175 GiB  |  500²  |  1,050  |  6,000  | 
|   **x1e.4xlarge**   |  488  |  16  |  2 x 175 GiB  |  500²  |  1,050  |  6,000  | 
|   **x1e.2xlarge**   |  244  |  8  |  2 x 175 GiB  |  500²  |  1,050  |  6,000  | 
|   **x1e.xlarge**   |  122  |  4  |  2 x 175 GiB  |  500²  |  1,050  |  6,000  | 
|   **r7i.4xlarge**   |  128  |  16  |  2 x 175 GiB  |  500²  |  1,050  |  6,000  | 
|   **r7i.2xlarge**   |  64  |  8  |  2 x 175 GiB  |  500²  |  1,050  |  6,000  | 
|   **r6i.4xlarge**   |  128  |  16  |  2 x 175 GiB  |  500  |  1,050  |  6,000  | 
|   **r6i.2xlarge**   |  64  |  8  |  2 x 175 GiB  |  500  |  1,050  |  6,000  | 
|   **r5.4xlarge**   |  128  |  16  |  2 x 175 GiB  |  500\$1\$1  |  1,050  |  6,000  | 
|   **r5.2xlarge**   |  64  |  8  |  2 x 175 GiB  |  500\$1\$1  |  1,050  |  6,000  | 
|   **r5b.4xlarge**   |  128  |  16  |  2 x 175 GiB  |  500  |  1,050  |  6,000  | 
|   **r5b.2xlarge**   |  64  |  8  |  2 x 175 GiB  |  500  |  1,050  |  6,000  | 
|   **r4.4xlarge**   |  122  |  16  |  2 x 175 GiB  |  500\$1\$1  |  1,050  |  6,000  | 
|   **r4.2xlarge**   |  61  |  8  |  2 x 175 GiB  |  500\$1\$1  |  1,050  |  6,000  | 
|   **r3.4xlarge**   |  122  |  16  |  2 x 175 GiB  |  500\$1\$1  |  1,050  |  6,000  | 
|   **r3.2xlarge**   |  61  |  8  |  2 x 175 GiB  |  500\$1\$1  |  1,050  |  6,000  | 
+ Each logical processor offered by Amazon EC2 High Memory Instances is a hyperthread on a physical CPU core.
  + This value represents the maximum throughput that could be achieved when striping multiple EBS volumes. Actual throughput depends on the instance type. Every instance type has its own Amazon EBS throughput maximum. For details, see [Amazon EBS-Optimized Instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html) in the AWS documentation.

    \$1\$1\$1gp3 based configurations are only supported in production for Nitro based instances, not for Xen based instances as SAP HANA HCMT storage tests may not meet the minimum required KPI for log writes.
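
As the footnotes note, an LVM stripe set's aggregate performance is the sum of the per-volume figures, capped by the instance's own Amazon EBS maximum. A minimal sketch of that arithmetic (the function name and the instance cap values in the example are illustrative placeholders, not figures for any specific instance type):

```python
def striped_set_limits(num_volumes: int,
                       per_volume_throughput_mib_s: int,
                       per_volume_iops: int,
                       instance_max_throughput_mib_s: int,
                       instance_max_iops: int) -> tuple:
    """Effective (throughput MiB/s, IOPS) ceiling of an LVM stripe set:
    the summed per-volume figures, capped by the instance's EBS maximums."""
    return (min(num_volumes * per_volume_throughput_mib_s, instance_max_throughput_mib_s),
            min(num_volumes * per_volume_iops, instance_max_iops))

# Two 300 GiB gp2 volumes bursting at 250 MiB/s / 3,000 IOPS each, on an
# instance whose EBS maximums (1,000 MiB/s / 40,000 IOPS here) leave headroom:
print(striped_set_limits(2, 250, 3000, 1000, 40000))  # → (500, 6000)
```

This reproduces the 500 MiB/s total throughput and 6,000 burst IOPS shown for the 2 x 300 GiB gp2 configurations above; for a less capable instance type, the instance-level cap rather than the volume sum becomes the binding limit.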


**Certified for production use**  

|  |  |  |  |  |  |  |  | 
| --- |--- |--- |--- |--- |--- |--- |--- |
|   **Instance type**   |   **Memory (GiB)**   |   **vCPUs / logical processors**¹  |   **General Purpose SSD (gp3) storage with LVM**   |   **Configured throughput per volume (MiB/s)**   |   **Configured IOPS per volume**   |   **Total throughput (MiB/s)**   |   **Total IOPS**   | 
|   **u-24tb1.112xlarge**   |  24,576  |  448  |  2 x 14,400 GiB  |  1,000  |  9,000  |  2,000  |  18,000  | 
|   **u-24tb1.metal**   |  24,576  |  448  |  2 x 14,400 GiB  |  1,000  |  9,000  |  2,000  |  18,000  | 
|   **u-18tb1.112xlarge**   |  18,432  |  448  |  2 x 10,800 GiB  |  1,000  |  9,000  |  2,000  |  18,000  | 
|   **u-18tb1.metal**   |  18,432  |  448  |  2 x 10,800 GiB  |  1,000  |  9,000  |  2,000  |  18,000  | 
|   **u-12tb1.112xlarge**   |  12,288  |  448  |  2 x 7,200 GiB  |  1,000  |  6,000  |  2,000  |  12,000  | 
|   **u-12tb1.metal**   |  12,288  |  448  |  2 x 7,200 GiB  |  1,000  |  6,000  |  2,000  |  12,000  | 
|   **u-9tb1.112xlarge**   |  9,216  |  448  |  2 x 5,400 GiB  |  1,000  |  6,000  |  2,000  |  12,000  | 
|   **u-9tb1.metal**   |  9,216  |  448  |  2 x 5,400 GiB  |  1,000  |  6,000  |  2,000  |  12,000  | 
|   **u7in-24tb.112xlarge**   |  24,576  |  896  |  2 x 14,400 GiB  |  1,000  |  9,000  |  2,000  |  18,000  | 
|   **u7in-16tb.112xlarge**   |  16,384  |  896  |  2 x 9,600 GiB  |  1,000  |  9,000  |  2,000  |  18,000  | 
|   **u7i-12tb.224xlarge**   |  12,288  |  896  |  2 x 7,200 GiB  |  1,000  |  6,000  |  2,000  |  12,000  | 
|   **u7i-8tb.112xlarge**   |  8,192  |  448  |  2 x 4,800 GiB  |  1,000  |  6,000  |  2,000  |  12,000  | 
|   **u7i-6tb.112xlarge**   |  6,144  |  448  |  2 x 3,600 GiB  |  1,000  |  6,000  |  2,000  |  12,000  | 
|   **u7inh-32tb.480xlarge**   |  32,768  |  1,920  |  4 x 9,600 GiB  |  1,000  |  6,000  |  4,000  |  24,000  | 
|   **u-6tb1.112xlarge**   |  6,144  |  448  |  2 x 3,600 GiB  |  1,000  |  6,000  |  2,000  |  12,000  | 
|   **u-6tb1.56xlarge**   |  6,144  |  224  |  2 x 3,600 GiB  |  1,000  |  6,000  |  2,000  |  12,000  | 
|   **u-6tb1.metal**   |  6,144  |  448  |  2 x 3,600 GiB  |  1,000  |  6,000  |  2,000  |  12,000  | 
|   **u-3tb1.56xlarge**   |  3,072  |  224  |  2 x 1,800 GiB  |  750  |  4,500  |  1,500  |  9,000  | 
|   **x2iedn.32xlarge**   |  4,096  |  128  |  2 x 2,400 GiB  |  750  |  4,500  |  1,500  |  9,000  | 
|   **x2iedn.24xlarge**   |  3,072  |  96  |  2 x 1,800 GiB  |  750  |  4,500  |  1,500  |  9,000  | 
|   **x2idn.32xlarge**   |  2,048  |  128  |  2 x 1,200 GiB  |  750  |  4,500  |  1,500  |  9,000  | 
|   **x2idn.24xlarge**   |  1,536  |  96  |  2 x 900 GiB  |  750  |  4,500  |  1,500  |  9,000  | 
|   **x2idn.16xlarge**   |  1,024  |  64  |  2 x 600 GiB  |  500  |  3,750  |  1,000  |  7,500  | 
|   **x1e.32xlarge**   |  3,904  |  128  |  2 x 2,400 GiB  |  750  |  4,500  |  1,500  |  9,000  | 
|   **x1.32xlarge**   |  1,952  |  128  |  2 x 1,200 GiB  |  750  |  4,500  |  1,500  |  9,000  | 
|   **x1.16xlarge**   |  976  |  64  |  1 x 1,200 GiB  |  500  |  7,500  |  500  |  7,500  | 
|   **r7i.48xlarge**   |  1,536  |  192  |  2 x 900 GiB  |  750  |  4,500  |  1,500  |  9,000  | 
|   **r7i.24xlarge**   |  768  |  96  |  1 x 920 GiB  |  500  |  7,500  |  500  |  7,500  | 
|   **r7i.16xlarge**   |  512  |  64  |  1 x 615 GiB  |  500  |  7,500  |  500  |  7,500  | 
|   **r7i.12xlarge**   |  384  |  48  |  1 x 460 GiB  |  500  |  7,500  |  500  |  7,500  | 
|   **r7i.8xlarge**   |  256  |  32  |  1 x 320 GiB  |  500  |  7,500  |  500  |  7,500  | 
|   **r6i.32xlarge**   |  1,024  |  128  |  1 x 1,200 GiB  |  500  |  7,500  |  500  |  7,500  | 
|   **r6i.24xlarge**   |  768  |  96  |  1 x 920 GiB  |  500  |  7,500  |  500  |  7,500  | 
|   **r6i.16xlarge**   |  512  |  64  |  1 x 615 GiB  |  500  |  7,500  |  500  |  7,500  | 
|   **r6i.12xlarge**   |  384  |  48  |  1 x 460 GiB  |  500  |  7,500  |  500  |  7,500  | 
|   **r6i.8xlarge**   |  256  |  32  |  1 x 320 GiB  |  500  |  7,500  |  500  |  7,500  | 
|   **r5.24xlarge**   |  768  |  96  |  1 x 920 GiB  |  500  |  7,500  |  500  |  7,500  | 
|   **r5.16xlarge**   |  512  |  64  |  1 x 615 GiB  |  500  |  7,500  |  500  |  7,500  | 
|   **r5.12xlarge**   |  384  |  48  |  1 x 460 GiB  |  500  |  7,500  |  500  |  7,500  | 
|   **r5.8xlarge**   |  256  |  32  |  1 x 320 GiB  |  500  |  7,500  |  500  |  7,500  | 
|   **r5.metal**   |  768  |  96  |  1 x 920 GiB  |  500  |  7,500  |  500  |  7,500  | 
|   **r5b.24xlarge**   |  768  |  96  |  1 x 920 GiB  |  500  |  7,500  |  500  |  7,500  | 
|   **r5b.16xlarge**   |  512  |  64  |  1 x 615 GiB  |  500  |  7,500  |  500  |  7,500  | 
|   **r5b.12xlarge**   |  384  |  48  |  1 x 460 GiB  |  500  |  7,500  |  500  |  7,500  | 
|   **r5b.8xlarge**   |  256  |  32  |  1 x 320 GiB  |  500  |  7,500  |  500  |  7,500  | 
|   **r5b.metal**   |  768  |  96  |  1 x 920 GiB  |  500  |  7,500  |  500  |  7,500  | 
|   **r4.16xlarge**   |  488  |  64  |  1 x 585 GiB  |  500  |  7,500  |  500  |  7,500  | 
|   **r4.8xlarge**   |  244  |  32  |  1 x 300 GiB  |  500  |  7,500  |  500  |  7,500  | 
|   **r3.8xlarge**   |  244  |  32  |  1 x 300 GiB  |  500  |  7,500  |  500  |  7,500  | 


**Supported for nonproduction use only**  

|  |  |  |  |  |  |  |  | 
| --- |--- |--- |--- |--- |--- |--- |--- |
|   **Instance type**   |   **Memory (GiB)**   |   **vCPUs / logical processors**¹  |   **General Purpose SSD (gp3) storage with LVM**   |   **Configured throughput per volume (MiB/s)**   |   **Configured IOPS per volume**   |   **Total throughput (MiB/s)**   |   **Total IOPS**   | 
|   **x2iedn.4xlarge**   |  512  |  16  |  1 x 585 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **x2iedn.2xlarge**   |  256  |  8  |  1 x 295 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **x2iedn.xlarge**   |  128  |  4  |  1 x 150 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **x1e.4xlarge**   |  488  |  16  |  1 x 585 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **x1e.2xlarge**   |  244  |  8  |  1 x 295 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **x1e.xlarge**   |  122  |  4  |  1 x 150 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **r7i.4xlarge**   |  128  |  16  |  1 x 150 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **r7i.2xlarge**   |  64  |  8  |  1 x 80 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **r6i.4xlarge**   |  128  |  16  |  1 x 150 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **r6i.2xlarge**   |  64  |  8  |  1 x 80 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **r5.4xlarge**   |  128  |  16  |  1 x 150 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **r5.2xlarge**   |  64  |  8  |  1 x 80 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **r5b.4xlarge**   |  128  |  16  |  1 x 150 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **r5b.2xlarge**   |  64  |  8  |  1 x 80 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **r4.4xlarge**   |  122  |  16  |  1 x 150 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **r4.2xlarge**   |  61  |  8  |  1 x 80 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **r3.4xlarge**   |  122  |  16  |  1 x 150 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **r3.2xlarge**   |  61  |  8  |  1 x 80 GiB  |  125  |  3,000  |  125  |  3,000  | 
¹ Each logical processor offered by Amazon EC2 High Memory Instances is a hyperthread on a physical CPU core.

² This value represents the maximum throughput that can be achieved when striping multiple EBS volumes. Actual throughput depends on the instance type; every instance type has its own Amazon EBS throughput maximum. For details, see [Amazon EBS-Optimized Instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html) in the AWS documentation.

³ gp3-based configurations are supported in production only on Nitro-based instances, not on Xen-based instances, because SAP HANA HCMT storage tests may not meet the minimum required KPI for log writes.


**Certified for production use**  

|  |  |  |  |  |  |  |  | 
| --- |--- |--- |--- |--- |--- |--- |--- |
|   **Instance type**   |   **Memory (GiB)**   |   **vCPUs / logical processors**¹  |   **General Purpose SSD (gp3) storage with LVM**   |   **Configured throughput per volume (MiB/s)**   |   **Configured IOPS per volume**   |   **Total throughput (MiB/s)**   |   **Total IOPS**   | 
|   **u-24tb1.112xlarge**   |  24,576  |  448  |  1 x 512 GiB  |  500  |  3,000  |  500  |  3,000  | 
|   **u-24tb1.metal**   |  24,576  |  448  |  1 x 512 GiB  |  500  |  3,000  |  500  |  3,000  | 
|   **u-18tb1.112xlarge**   |  18,432  |  448  |  1 x 512 GiB  |  500  |  3,000  |  500  |  3,000  | 
|   **u-18tb1.metal**   |  18,432  |  448  |  1 x 512 GiB  |  500  |  3,000  |  500  |  3,000  | 
|   **u-12tb1.112xlarge**   |  12,288  |  448  |  1 x 512 GiB  |  500  |  3,000  |  500  |  3,000  | 
|   **u-12tb1.metal**   |  12,288  |  448  |  1 x 512 GiB  |  500  |  3,000  |  500  |  3,000  | 
|   **u-9tb1.112xlarge**   |  9,216  |  448  |  1 x 512 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **u-9tb1.metal**   |  9,216  |  448  |  1 x 512 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **u7in-24tb.112xlarge**   |  24,576  |  896  |  1 x 512 GiB  |  500  |  3,000  |  500  |  3,000  | 
|   **u7in-16tb.112xlarge**   |  16,384  |  896  |  1 x 512 GiB  |  500  |  3,000  |  500  |  3,000  | 
|   **u7i-12tb.224xlarge**   |  12,288  |  896  |  1 x 512 GiB  |  500  |  3,000  |  500  |  3,000  | 
|   **u7i-8tb.112xlarge**   |  8,192  |  448  |  1 x 512 GiB  |  500  |  3,000  |  500  |  3,000  | 
|   **u7i-6tb.112xlarge**   |  6,144  |  448  |  1 x 512 GiB  |  500  |  3,000  |  500  |  3,000  | 
|   **u7inh-32tb.480xlarge**   |  32,768  |  1,920  |  1 x 512 GiB  |  500  |  3,000  |  500  |  3,000  | 
|   **u-6tb1.112xlarge**   |  6,144  |  448  |  1 x 512 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **u-6tb1.56xlarge**   |  6,144  |  224  |  1 x 512 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **u-6tb1.metal**   |  6,144  |  448  |  1 x 512 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **u-3tb1.56xlarge**   |  3,072  |  224  |  1 x 512 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **x2iedn.32xlarge**   |  4,096  |  128  |  1 x 512 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **x2iedn.24xlarge**   |  3,072  |  96  |  1 x 512 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **x2idn.32xlarge**   |  2,048  |  128  |  1 x 512 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **x2idn.24xlarge**   |  1,536  |  96  |  1 x 512 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **x2idn.16xlarge**   |  1,024  |  64  |  1 x 512 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **x1e.32xlarge**   |  3,904  |  128  |  1 x 512 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **x1.32xlarge**   |  1,952  |  128  |  1 x 512 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **x1.16xlarge**   |  976  |  64  |  1 x 512 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **r7i.48xlarge**   |  1,536  |  192  |  1 x 512 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **r7i.24xlarge**   |  768  |  96  |  1 x 512 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **r7i.16xlarge**   |  512  |  64  |  1 x 256 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **r7i.12xlarge**   |  384  |  48  |  1 x 192 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **r7i.8xlarge**   |  256  |  32  |  1 x 128 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **r6i.32xlarge**   |  1,024  |  128  |  1 x 512 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **r6i.24xlarge**   |  768  |  96  |  1 x 512 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **r6i.16xlarge**   |  512  |  64  |  1 x 256 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **r6i.12xlarge**   |  384  |  48  |  1 x 192 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **r6i.8xlarge**   |  256  |  32  |  1 x 128 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **r5.24xlarge**   |  768  |  96  |  1 x 512 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **r5.16xlarge**   |  512  |  64  |  1 x 256 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **r5.12xlarge**   |  384  |  48  |  1 x 192 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **r5.8xlarge**   |  256  |  32  |  1 x 128 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **r5.metal**   |  768  |  96  |  1 x 512 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **r5b.24xlarge**   |  768  |  96  |  1 x 512 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **r5b.16xlarge**   |  512  |  64  |  1 x 256 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **r5b.12xlarge**   |  384  |  48  |  1 x 192 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **r5b.8xlarge**   |  256  |  32  |  1 x 128 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **r5b.metal**   |  768  |  96  |  1 x 512 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **r4.16xlarge**   |  488  |  64  |  1 x 256 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **r4.8xlarge**   |  244  |  32  |  1 x 128 GiB  |  300  |  3,000  |  300  |  3,000  | 
|   **r3.8xlarge**   |  244  |  32  |  1 x 128 GiB  |  300  |  3,000  |  300  |  3,000  | 


**Supported for nonproduction use only**  

|  |  |  |  |  |  |  |  | 
| --- |--- |--- |--- |--- |--- |--- |--- |
|   **Instance type**   |   **Memory (GiB)**   |   **vCPUs / logical processors**¹  |   **General Purpose SSD (gp3) storage with LVM**   |   **Configured throughput per volume (MiB/s)**   |   **Configured IOPS per volume**   |   **Total throughput (MiB/s)**   |   **Total IOPS**   | 
|   **x2iedn.4xlarge**   |  512  |  16  |  1 x 245 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **x2iedn.2xlarge**   |  256  |  8  |  1 x 125 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **x2iedn.xlarge**   |  128  |  4  |  1 x 64 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **x1e.4xlarge**   |  488  |  16  |  1 x 245 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **x1e.2xlarge**   |  244  |  8  |  1 x 125 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **x1e.xlarge**   |  122  |  4  |  1 x 64 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **r7i.4xlarge**   |  128  |  16  |  1 x 64 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **r7i.2xlarge**   |  64  |  8  |  1 x 32 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **r6i.4xlarge**   |  128  |  16  |  1 x 64 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **r6i.2xlarge**   |  64  |  8  |  1 x 32 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **r5.4xlarge**   |  128  |  16  |  1 x 64 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **r5.2xlarge**   |  64  |  8  |  1 x 32 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **r5b.4xlarge**   |  128  |  16  |  1 x 64 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **r5b.2xlarge**   |  64  |  8  |  1 x 32 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **r4.4xlarge**   |  122  |  16  |  1 x 64 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **r4.2xlarge**   |  61  |  8  |  1 x 32 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **r3.4xlarge**   |  122  |  16  |  1 x 64 GiB  |  125  |  3,000  |  125  |  3,000  | 
|   **r3.2xlarge**   |  61  |  8  |  1 x 32 GiB  |  125  |  3,000  |  125  |  3,000  | 
¹ Each logical processor offered by Amazon EC2 High Memory Instances is a hyperthread on a physical CPU core.

² This value represents the maximum throughput that can be achieved when striping multiple EBS volumes. Actual throughput depends on the instance type; every instance type has its own Amazon EBS throughput maximum. For details, see [Amazon EBS-Optimized Instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html) in the AWS documentation.

³ gp3-based configurations are supported in production only on Nitro-based instances, not on Xen-based instances, because SAP HANA HCMT storage tests may not meet the minimum required KPI for log writes.

General Purpose SSD (`gp2`) volumes created or modified after 12/03/2018 have a throughput maximum between 128 MiB/s and 250 MiB/s, depending on volume size. Volumes larger than 170 GiB and smaller than 334 GiB deliver a maximum throughput of 250 MiB/s if burst credits are available. Volumes of 334 GiB and above deliver 250 MiB/s irrespective of burst credits. For details, see [Amazon EBS Volume Types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) in the AWS documentation.
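
The size-dependent gp2 throughput ceiling described above can be sketched as a small helper. This is illustrative only (the 128 MiB/s floor for small volumes is inferred from the 128–250 MiB/s range stated above); consult the EBS volume types documentation for authoritative limits:

```python
def gp2_max_throughput_mib_s(size_gib: int, burst_credits: bool = True) -> int:
    """Per-volume throughput ceiling for a gp2 volume created or modified
    after 2018-12-03, following the size thresholds described above."""
    if size_gib >= 334:
        return 250                    # sustained; no burst credits required
    if size_gib > 170:
        return 250 if burst_credits else 128
    return 128                        # smaller volumes top out at 128 MiB/s

print(gp2_max_throughput_mib_s(300))                       # → 250 (while bursting)
print(gp2_max_throughput_mib_s(300, burst_credits=False))  # → 128
```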

General Purpose SSD (`gp3`) volumes deliver a consistent baseline of 3,000 IOPS and 125 MiB/s. You can also purchase additional IOPS (up to 16,000) and throughput (up to 1,000 MiB/s). While we recommend that you use the configurations shown in this guide, gp3 volumes give you the flexibility to customize your SAP HANA storage configuration (IOPS and throughput) to your needs and usage.

The **minimum** gp3 configuration required to meet the SAP HANA KPIs is as follows:


| Storage Area | IOPS | Throughput | 
| --- | --- | --- | 
|   **SAP HANA Data**   |  7,000  |  425 MiB/s  | 
|   **SAP HANA Logs**   |  3,000  |  275 MiB/s  | 
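
A quick way to sanity-check a planned gp3 volume is to compare it against both the KPI minimums in the table above and gp3's configurable range (up to 16,000 IOPS and 1,000 MiB/s). The helper below is a hypothetical sketch, not part of any SAP or AWS tooling:

```python
# Minimum KPIs from the table above, plus gp3's configurable maximums.
HANA_KPI_MINIMUMS = {
    "data": {"iops": 7000, "throughput_mib_s": 425},
    "log":  {"iops": 3000, "throughput_mib_s": 275},
}
GP3_MAX_IOPS, GP3_MAX_THROUGHPUT = 16000, 1000

def gp3_meets_hana_kpi(area: str, iops: int, throughput_mib_s: int) -> bool:
    """True if the planned gp3 settings satisfy the SAP HANA KPI minimums
    and are actually configurable on a single gp3 volume."""
    minimum = HANA_KPI_MINIMUMS[area]
    return (minimum["iops"] <= iops <= GP3_MAX_IOPS
            and minimum["throughput_mib_s"] <= throughput_mib_s <= GP3_MAX_THROUGHPUT)

print(gp3_meets_hana_kpi("data", 7000, 425))  # → True
print(gp3_meets_hana_kpi("log", 3000, 250))   # → False (throughput below 275 MiB/s)
```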

## `io1`, `io2`, and `io2 Block Express` for HANA
<a name="io1-io2-hana-legacy"></a>

**Example**  


**Certified for production use**  

|  |  |  |  |  |  |  | 
| --- |--- |--- |--- |--- |--- |--- |
|   **Instance type**   |   **Memory (GiB)**   |   **vCPUs / logical processors**¹  |   **Provisioned IOPS SSD (io1/io2) storage with LVM**   |   **Total maximum throughput (MiB/s)**   |   **Provisioned IOPS per volume**   |   **Total provisioned IOPS**   | 
|   **u-24tb1.112xlarge**   |  24,576  |  448  |  6 x 4,800 GiB  |  3,000  |  3,000  |  18,000  | 
|   **u-24tb1.metal**   |  24,576  |  448  |  6 x 4,800 GiB  |  3,000  |  3,000  |  18,000  | 
|   **u-18tb1.112xlarge**   |  18,432  |  448  |  6 x 3,600 GiB  |  3,000  |  3,000  |  18,000  | 
|   **u-18tb1.metal**   |  18,432  |  448  |  6 x 3,600 GiB  |  3,000  |  3,000  |  18,000  | 
|   **u-12tb1.112xlarge**   |  12,288  |  448  |  6 x 2,400 GiB  |  3,000  |  2,000  |  12,000  | 
|   **u-12tb1.metal**   |  12,288  |  448  |  6 x 2,400 GiB  |  3,000  |  2,000  |  12,000  | 
|   **u-9tb1.112xlarge**   |  9,216  |  448  |  6 x 1,800 GiB  |  3,000  |  2,000  |  12,000  | 
|   **u-9tb1.metal**   |  9,216  |  448  |  6 x 1,800 GiB  |  3,000  |  2,000  |  12,000  | 
|   **u7in-24tb.112xlarge**   |  24,576  |  896  |  6 x 4,800 GiB  |  3,000  |  3,000  |  18,000  | 
|   **u7in-16tb.112xlarge**   |  16,384  |  896  |  6 x 3,200 GiB  |  3,000  |  3,000  |  18,000  | 
|   **u7i-12tb.224xlarge**   |  12,288  |  896  |  6 x 2,400 GiB  |  3,000  |  3,000  |  18,000  | 
|   **u7i-8tb.112xlarge**   |  8,192  |  448  |  6 x 1,600 GiB  |  3,000  |  2,000  |  12,000  | 
|   **u7i-6tb.112xlarge**   |  6,144  |  448  |  6 x 1,200 GiB  |  3,000  |  2,000  |  12,000  | 
|   **u7inh-32tb.480xlarge**   |  32,768  |  1,920  |  6 x 6,400 GiB  |  3,000  |  3,000  |  18,000  | 
|   **u-6tb1.112xlarge**   |  6,144  |  448  |  6 x 1,200 GiB  |  3,000  |  2,000  |  12,000  | 
|   **u-6tb1.56xlarge**   |  6,144  |  224  |  6 x 1,200 GiB  |  3,000  |  2,000  |  12,000  | 
|   **u-6tb1.metal**   |  6,144  |  448  |  6 x 1,200 GiB  |  3,000  |  2,000  |  12,000  | 
|   **u-3tb1.56xlarge**   |  3,072  |  224  |  3 x 1,200 GiB  |  1,500  |  3,000  |  9,000  | 
|   **x2iedn.32xlarge**   |  4,096  |  128  |  2 x 2,400 GiB  |  1,000  |  4,500  |  9,000  | 
|   **x2iedn.24xlarge**   |  3,072  |  96  |  2 x 1,800 GiB  |  1,000  |  4,500  |  9,000  | 
|   **x2idn.32xlarge**   |  2,048  |  128  |  2 x 1,200 GiB  |  1,000  |  4,500  |  9,000  | 
|   **x2idn.24xlarge**   |  1,536  |  96  |  2 x 900 GiB  |  1,000  |  4,500  |  9,000  | 
|   **x2idn.16xlarge**   |  1,024  |  64  |  2 x 600 GiB  |  1,000  |  3,750  |  7,500  | 
|   **x1e.32xlarge**   |  3,904  |  128  |  3 x 1,600 GiB  |  1,500  |  3,000  |  9,000  | 
|   **x1.32xlarge**   |  1,952  |  128  |  3 x 800 GiB  |  1,500  |  3,000  |  9,000  | 
|   **x1.16xlarge**   |  976  |  64  |  1 x 1,200 GiB  |  500  |  7,500  |  7,500  | 
|   **r7i.48xlarge**   |  1,536  |  192  |  1 x 1,800 GiB  |  500  |  7,500  |  7,500  | 
|   **r7i.24xlarge**   |  768  |  96  |  1 x 900 GiB  |  500  |  7,500  |  7,500  | 
|   **r7i.16xlarge**   |  512  |  64  |  1 x 600 GiB  |  500  |  7,500  |  7,500  | 
|   **r7i.12xlarge**   |  384  |  48  |  1 x 600 GiB  |  500  |  7,500  |  7,500  | 
|   **r7i.8xlarge**   |  256  |  32  |  1 x 300 GiB  |  500  |  7,500  |  7,500  | 
|   **r6i.32xlarge**   |  1,024  |  128  |  1 x 1,200 GiB  |  500  |  7,500  |  7,500  | 
|   **r6i.24xlarge**   |  768  |  96  |  1 x 1,200 GiB  |  500  |  7,500  |  7,500  | 
|   **r6i.16xlarge**   |  512  |  64  |  1 x 600 GiB  |  500  |  7,500  |  7,500  | 
|   **r6i.12xlarge**   |  384  |  48  |  1 x 600 GiB  |  500  |  7,500  |  7,500  | 
|   **r6i.8xlarge**   |  256  |  32  |  1 x 300 GiB  |  500  |  7,500  |  7,500  | 
|   **r5.24xlarge**   |  768  |  96  |  1 x 1,200 GiB  |  500  |  7,500  |  7,500  | 
|   **r5.16xlarge**   |  512  |  64  |  1 x 600 GiB  |  500  |  7,500  |  7,500  | 
|   **r5.12xlarge**   |  384  |  48  |  1 x 600 GiB  |  500  |  7,500  |  7,500  | 
|   **r5.8xlarge**   |  256  |  32  |  1 x 300 GiB  |  500  |  7,500  |  7,500  | 
|   **r5.metal**   |  768  |  96  |  1 x 1,200 GiB  |  500  |  7,500  |  7,500  | 
|   **r5b.24xlarge**   |  768  |  96  |  1 x 1,200 GiB  |  500  |  7,500  |  7,500  | 
|   **r5b.16xlarge**   |  512  |  64  |  1 x 600 GiB  |  500  |  7,500  |  7,500  | 
|   **r5b.12xlarge**   |  384  |  48  |  1 x 600 GiB  |  500  |  7,500  |  7,500  | 
|   **r5b.8xlarge**   |  256  |  32  |  1 x 300 GiB  |  500  |  7,500  |  7,500  | 
|   **r5b.metal**   |  768  |  96  |  1 x 1,200 GiB  |  500  |  7,500  |  7,500  | 
|   **r4.16xlarge**   |  488  |  64  |  1 x 600 GiB  |  500  |  7,500  |  7,500  | 
|   **r4.8xlarge**   |  244  |  32  |  1 x 300 GiB  |  500  |  7,500  |  7,500  | 
|   **r3.8xlarge**   |  244  |  32  |  1 x 300 GiB  |  500  |  7,500  |  7,500  | 


**Supported for nonproduction use only**  

|  |  |  |  |  |  |  | 
| --- |--- |--- |--- |--- |--- |--- |
|   **Instance type**   |   **Memory (GiB)**   |   **vCPUs / logical processors**¹  |   **Provisioned IOPS SSD (io1/io2) storage with LVM**   |   **Total maximum throughput (MiB/s)**   |   **Provisioned IOPS per volume**   |   **Total provisioned IOPS**   | 
|   **x2iedn.4xlarge**   |  512  |  16  |  1 x 600 GiB  |  500  |  2,000  |  2,000  | 
|   **x2iedn.2xlarge**   |  256  |  8  |  1 x 300 GiB  |  500  |  2,000  |  2,000  | 
|   **x2iedn.xlarge**   |  128  |  4  |  1 x 300 GiB  |  500  |  2,000  |  2,000  | 
|   **x1e.4xlarge**   |  488  |  16  |  1 x 600 GiB  |  500²  |  2,000  |  2,000  | 
|   **x1e.2xlarge**   |  244  |  8  |  1 x 300 GiB  |  500²  |  2,000  |  2,000  | 
|   **x1e.xlarge**   |  122  |  4  |  1 x 300 GiB  |  500²  |  2,000  |  2,000  | 
|   **r7i.4xlarge**   |  128  |  16  |  1 x 300 GiB  |  500  |  7,500  |  7,500  | 
|   **r7i.2xlarge**   |  64  |  8  |  1 x 300 GiB  |  500  |  7,500  |  7,500  | 
|   **r6i.4xlarge**   |  128  |  16  |  1 x 300 GiB  |  500  |  2,000  |  2,000  | 
|   **r6i.2xlarge**   |  64  |  8  |  1 x 300 GiB  |  500  |  2,000  |  2,000  | 
|   **r5.4xlarge**   |  128  |  16  |  1 x 300 GiB  |  500  |  2,000  |  2,000  | 
|   **r5.2xlarge**   |  64  |  8  |  1 x 300 GiB  |  500  |  2,000  |  2,000  | 
|   **r5b.4xlarge**   |  128  |  16  |  1 x 300 GiB  |  500  |  2,000  |  2,000  | 
|   **r5b.2xlarge**   |  64  |  8  |  1 x 300 GiB  |  500  |  2,000  |  2,000  | 
|   **r4.4xlarge**   |  122  |  16  |  1 x 300 GiB  |  500²  |  2,000  |  2,000  | 
|   **r4.2xlarge**   |  61  |  8  |  1 x 300 GiB  |  500²  |  2,000  |  2,000  | 
|   **r3.4xlarge**   |  122  |  16  |  1 x 300 GiB  |  500²  |  2,000  |  2,000  | 
|   **r3.2xlarge**   |  61  |  8  |  1 x 300 GiB  |  500²  |  2,000  |  2,000  | 
¹ Each logical processor offered by Amazon EC2 High Memory Instances is a hyperthread on a physical CPU core.

² This value represents the maximum throughput that can be achieved when striping multiple EBS volumes. Actual throughput depends on the instance type; every instance type has its own Amazon EBS throughput maximum. For details, see [Amazon EBS-Optimized Instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html) in the AWS documentation.


**Certified for production use**  

|  |  |  |  |  |  |  | 
| --- |--- |--- |--- |--- |--- |--- |
|   **Instance type**   |   **Memory (GiB)**   |   **vCPUs / logical processors**¹  |   **Provisioned IOPS SSD (io1/io2) storage with LVM**   |   **Total maximum throughput (MiB/s)**   |   **Provisioned IOPS per volume**   |   **Total provisioned IOPS**   | 
|   **u-24tb1.112xlarge**   |  24,576  |  448  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u-24tb1.metal**   |  24,576  |  448  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u-18tb1.112xlarge**   |  18,432  |  448  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u-18tb1.metal**   |  18,432  |  448  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u-12tb1.112xlarge**   |  12,288  |  448  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u-12tb1.metal**   |  12,288  |  448  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u-9tb1.112xlarge**   |  9,216  |  448  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u-9tb1.metal**   |  9,216  |  448  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u7in-24tb.112xlarge**   |  24,576  |  896  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u7in-16tb.112xlarge**   |  16,384  |  896  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u7i-12tb.224xlarge**   |  12,288  |  896  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u7i-8tb.112xlarge**   |  8,192  |  448  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u7i-6tb.112xlarge**   |  6,144  |  448  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u7inh-32tb.480xlarge**   |  32,768  |  1,920  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u-6tb1.112xlarge**   |  6,144  |  448  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u-6tb1.56xlarge**   |  6,144  |  224  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u-6tb1.metal**   |  6,144  |  448  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u-3tb1.56xlarge**   |  3,072  |  224  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **x2iedn.32xlarge**   |  4,096  |  128  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **x2iedn.24xlarge**   |  3,072  |  96  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **x2idn.32xlarge**   |  2,048  |  128  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **x2idn.24xlarge**   |  1,536  |  96  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **x2idn.16xlarge**   |  1,024  |  64  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **x1e.32xlarge**   |  3,904  |  128  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **x1.32xlarge**   |  1,952  |  128  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **x1.16xlarge**   |  976  |  64  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **r7i.48xlarge**   |  1,536  |  192  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **r7i.24xlarge**   |  768  |  96  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **r7i.16xlarge**   |  512  |  64  |  1 x 260 GiB  |  500  |  2,000  |  2,000  | 
|   **r7i.12xlarge**   |  384  |  48  |  1 x 260 GiB  |  500  |  2,000  |  2,000  | 
|   **r7i.8xlarge**   |  256  |  32  |  1 x 260 GiB  |  500  |  2,000  |  2,000  | 
|   **r6i.32xlarge**   |  1,024  |  128  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **r6i.24xlarge**   |  768  |  96  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **r6i.16xlarge**   |  512  |  64  |  1 x 260 GiB  |  500  |  2,000  |  2,000  | 
|   **r6i.12xlarge**   |  384  |  48  |  1 x 260 GiB  |  500  |  2,000  |  2,000  | 
|   **r6i.8xlarge**   |  256  |  32  |  1 x 260 GiB  |  250  |  1,000  |  1,000  | 
|   **r5.24xlarge**   |  768  |  96  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **r5.16xlarge**   |  512  |  64  |  1 x 260 GiB  |  500  |  2,000  |  2,000  | 
|   **r5.12xlarge**   |  384  |  48  |  1 x 260 GiB  |  500  |  2,000  |  2,000  | 
|   **r5.8xlarge**   |  256  |  32  |  1 x 260 GiB  |  500  |  2,000  |  2,000  | 
|   **r5.metal**   |  768  |  96  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **r5b.24xlarge**   |  768  |  96  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **r5b.16xlarge**   |  512  |  64  |  1 x 260 GiB  |  500  |  2,000  |  2,000  | 
|   **r5b.12xlarge**   |  384  |  48  |  1 x 260 GiB  |  500  |  2,000  |  2,000  | 
|   **r5b.8xlarge**   |  256  |  32  |  1 x 260 GiB  |  500  |  2,000  |  2,000  | 
|   **r5b.metal**   |  768  |  96  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **r4.16xlarge**   |  488  |  64  |  1 x 260 GiB  |  500  |  2,000  |  2,000  | 
|   **r4.8xlarge**   |  244  |  32  |  1 x 260 GiB  |  500  |  2,000  |  2,000  | 
|   **r3.8xlarge**   |  244  |  32  |  1 x 260 GiB  |  500  |  2,000  |  2,000  | 


**Supported for nonproduction use only**  

|  |  |  |  |  |  |  | 
| --- |--- |--- |--- |--- |--- |--- |
|   **Instance type**   |   **Memory (GiB)**   |   **vCPUs / logical processors**¹  |   **Provisioned IOPS SSD (io1/io2) storage with LVM**   |   **Total maximum throughput (MiB/s)**   |   **Provisioned IOPS per volume**   |   **Total provisioned IOPS**   | 
|   **x2iedn.4xlarge**   |  512  |  16  |  1 x 260 GiB  |  250  |  1,000  |  1,000  | 
|   **x2iedn.2xlarge**   |  256  |  8  |  1 x 260 GiB  |  250  |  1,000  |  1,000  | 
|   **x2iedn.xlarge**   |  128  |  4  |  1 x 260 GiB  |  250  |  1,000  |  1,000  | 
|   **x1e.4xlarge**   |  488  |  16  |  1 x 260 GiB  |  250²  |  1,000  |  1,000  | 
|   **x1e.2xlarge**   |  244  |  8  |  1 x 260 GiB  |  250²  |  1,000  |  1,000  | 
|   **x1e.xlarge**   |  122  |  4  |  1 x 260 GiB  |  250²  |  1,000  |  1,000  | 
|   **r7i.4xlarge**   |  128  |  16  |  1 x 260 GiB  |  500  |  2,000  |  2,000  | 
|   **r7i.2xlarge**   |  64  |  8  |  1 x 260 GiB  |  250  |  1,000  |  1,000  | 
|   **r6i.4xlarge**   |  128  |  16  |  1 x 260 GiB  |  250  |  1,000  |  1,000  | 
|   **r6i.2xlarge**   |  64  |  8  |  1 x 260 GiB  |  250  |  1,000  |  1,000  | 
|   **r5.4xlarge**   |  128  |  16  |  1 x 260 GiB  |  250  |  1,000  |  1,000  | 
|   **r5.2xlarge**   |  64  |  8  |  1 x 260 GiB  |  250  |  1,000  |  1,000  | 
|   **r5b.4xlarge**   |  128  |  16  |  1 x 260 GiB  |  250  |  1,000  |  1,000  | 
|   **r5b.2xlarge**   |  64  |  8  |  1 x 260 GiB  |  250  |  1,000  |  1,000  | 
|   **r4.4xlarge**   |  122  |  16  |  1 x 260 GiB  |  250  |  1,000  |  1,000  | 
|   **r4.2xlarge**   |  61  |  8  |  1 x 260 GiB  |  250\*\*  |  1,000  |  1,000  | 
|   **r3.4xlarge**   |  122  |  16  |  1 x 260 GiB  |  250  |  1,000  |  1,000  | 
|   **r3.2xlarge**   |  61  |  8  |  1 x 260 GiB  |  250\*\*  |  1,000  |  1,000  | 
 *\* Each logical processor offered by Amazon EC2 High Memory Instances is a hyperthread on a physical CPU core.* 

 *\*\* This value represents the maximum achievable throughput when striping multiple EBS volumes. Actual throughput depends on the instance type, because each instance type has its own Amazon EBS throughput maximum. For more information, see [Amazon EBS-Optimized Instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html).* 


**Certified for production use**  

|  |  |  |  |  |  |  | 
| --- |--- |--- |--- |--- |--- |--- |
|   **Instance type**   |   **Memory (GiB)**   |   **vCPUs / logical processors**\*  |   **Provisioned IOPS SSD (io1/io2) storage with LVM**   |   **Total maximum throughput (MiB/s)**   |   **Provisioned IOPS per volume**   |   **Total provisioned IOPS**   | 
|   **x1e.32xlarge**   |  3,904  |  128  |  3 x 1,600 GiB  |  1,500  |  3,000  |  9,000  | 
|   **x1.32xlarge**   |  1,952  |  128  |  3 x 800 GiB  |  1,500  |  3,000  |  9,000  | 
|   **x1.16xlarge**   |  976  |  64  |  1 x 1,200 GiB  |  500  |  7,500  |  7,500  | 
|   **r4.16xlarge**   |  488  |  64  |  1 x 600 GiB  |  500  |  7,500  |  7,500  | 
|   **r4.8xlarge**   |  244  |  32  |  1 x 300 GiB  |  500  |  7,500  |  7,500  | 


**Supported for nonproduction use only**  

|  |  |  |  |  |  |  | 
| --- |--- |--- |--- |--- |--- |--- |
|   **Instance type**   |   **Memory (GiB)**   |   **vCPUs / logical processors**\*  |   **Provisioned IOPS SSD (io1/io2) storage with LVM**   |   **Total maximum throughput (MiB/s)**   |   **Provisioned IOPS per volume**   |   **Total provisioned IOPS**   | 
|   **x1e.4xlarge**   |  488  |  16  |  1 x 600 GiB  |  500\*\*  |  2,000  |  2,000  | 
|   **x1e.2xlarge**   |  244  |  8  |  1 x 300 GiB  |  500\*\*  |  2,000  |  2,000  | 
|   **x1e.xlarge**   |  122  |  4  |  1 x 300 GiB  |  500\*\*  |  2,000  |  2,000  | 
|   **r4.4xlarge**   |  122  |  16  |  1 x 300 GiB  |  500\*\*  |  2,000  |  2,000  | 
|   **r4.2xlarge**   |  61  |  8  |  1 x 300 GiB  |  500\*\*  |  2,000  |  2,000  | 
|   **r3.4xlarge**   |  122  |  16  |  1 x 300 GiB  |  500\*\*  |  2,000  |  2,000  | 
|   **r3.2xlarge**   |  61  |  8  |  1 x 300 GiB  |  500\*\*  |  2,000  |  2,000  | 
 *\* Each logical processor offered by Amazon EC2 High Memory Instances is a hyperthread on a physical CPU core.* 

 *\*\* This value represents the maximum achievable throughput when striping multiple EBS volumes. Actual throughput depends on the instance type, because each instance type has its own Amazon EBS throughput maximum. For details, see [Amazon EBS-Optimized Instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html) in the AWS documentation.* 


**Certified for production use**  

|  |  |  |  |  |  |  | 
| --- |--- |--- |--- |--- |--- |--- |
|   **Instance type**   |   **Memory (GiB)**   |   **vCPUs / logical processors**\*  |   **Provisioned IOPS SSD (io1/io2) storage with LVM**   |   **Total maximum throughput (MiB/s)**   |   **Provisioned IOPS per volume**   |   **Total provisioned IOPS**   | 
|   **x1e.32xlarge**   |  3,904  |  128  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **x1.32xlarge**   |  1,952  |  128  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **x1.16xlarge**   |  976  |  64  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **r4.16xlarge**   |  488  |  64  |  1 x 260 GiB  |  500  |  2,000  |  2,000  | 
|   **r4.8xlarge**   |  244  |  32  |  1 x 260 GiB  |  500  |  2,000  |  2,000  | 


**Supported for nonproduction use only**  

|  |  |  |  |  |  |  | 
| --- |--- |--- |--- |--- |--- |--- |
|   **Instance type**   |   **Memory (GiB)**   |   **vCPUs / logical processors**\*  |   **Provisioned IOPS SSD (io1/io2) storage with LVM**   |   **Total maximum throughput (MiB/s)**   |   **Provisioned IOPS per volume**   |   **Total provisioned IOPS**   | 
|   **x1e.4xlarge**   |  488  |  16  |  1 x 260 GiB  |  250\*\*  |  1,000  |  1,000  | 
|   **x1e.2xlarge**   |  244  |  8  |  1 x 260 GiB  |  250\*\*  |  1,000  |  1,000  | 
|   **x1e.xlarge**   |  122  |  4  |  1 x 260 GiB  |  250\*\*  |  1,000  |  1,000  | 
|   **r4.4xlarge**   |  122  |  16  |  1 x 260 GiB  |  250  |  1,000  |  1,000  | 
|   **r4.2xlarge**   |  61  |  8  |  1 x 260 GiB  |  250\*\*  |  1,000  |  1,000  | 
|   **r3.4xlarge**   |  122  |  16  |  1 x 260 GiB  |  250  |  1,000  |  1,000  | 
|   **r3.2xlarge**   |  61  |  8  |  1 x 260 GiB  |  250\*\*  |  1,000  |  1,000  | 
 *\* Each logical processor offered by Amazon EC2 High Memory Instances is a hyperthread on a physical CPU core.* 

 *\*\* This value represents the maximum achievable throughput when striping multiple EBS volumes. Actual throughput depends on the instance type, because each instance type has its own Amazon EBS throughput maximum. For more information, see [Amazon EBS-Optimized Instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html).* 


**Certified for production use**  

|  |  |  |  |  |  |  | 
| --- |--- |--- |--- |--- |--- |--- |
|   **Instance type**   |   **Memory (GiB)**   |   **vCPUs / logical processors**\*  |   **Provisioned IOPS SSD (io1/io2) storage with LVM**   |   **Total maximum throughput (MiB/s)**   |   **Provisioned IOPS per volume**   |   **Total provisioned IOPS**   | 
|   **u-24tb1.112xlarge**   |  24,576  |  448  |  2 x 14,400 GiB  |  4,500  |  9,000  |  18,000  | 
|   **u-24tb1.metal**   |  24,576  |  448  |  2 x 14,400 GiB  |  4,500  |  9,000  |  18,000  | 
|   **u-18tb1.112xlarge**   |  18,432  |  448  |  2 x 10,800 GiB  |  4,500  |  9,000  |  18,000  | 
|   **u-18tb1.metal**   |  18,432  |  448  |  2 x 10,800 GiB  |  4,500  |  9,000  |  18,000  | 
|   **u-12tb1.112xlarge**   |  12,288  |  448  |  2 x 7,200 GiB  |  3,000  |  6,000  |  12,000  | 
|   **u-12tb1.metal**   |  12,288  |  448  |  2 x 7,200 GiB  |  3,000  |  6,000  |  12,000  | 
|   **u-9tb1.112xlarge**   |  9,216  |  448  |  2 x 5,400 GiB  |  3,000  |  6,000  |  12,000  | 
|   **u-9tb1.metal**   |  9,216  |  448  |  2 x 5,400 GiB  |  3,000  |  6,000  |  12,000  | 
|   **u7in-24tb.112xlarge**   |  24,576  |  896  |  2 x 14,400 GiB  |  4,500  |  9,000  |  18,000  | 
|   **u7in-16tb.112xlarge**   |  16,384  |  896  |  2 x 9,600 GiB  |  4,500  |  9,000  |  18,000  | 
|   **u7i-12tb.224xlarge**   |  12,288  |  896  |  2 x 7,200 GiB  |  3,000  |  6,000  |  12,000  | 
|   **u7i-8tb.112xlarge**   |  8,192  |  448  |  2 x 4,800 GiB  |  3,000  |  6,000  |  12,000  | 
|   **u7i-6tb.112xlarge**   |  6,144  |  448  |  2 x 3,600 GiB  |  3,000  |  6,000  |  12,000  | 
|   **u7inh-32tb.480xlarge**   |  32,768  |  1,920  |  4 x 9,600 GiB  |  9,000  |  9,000  |  36,000  | 
|   **u-6tb1.112xlarge**   |  6,144  |  448  |  2 x 3,600 GiB  |  3,000  |  6,000  |  12,000  | 
|   **u-6tb1.56xlarge**   |  6,144  |  224  |  2 x 3,600 GiB  |  3,000  |  6,000  |  12,000  | 
|   **u-6tb1.metal**   |  6,144  |  448  |  2 x 3,600 GiB  |  3,000  |  6,000  |  12,000  | 
|   **u-3tb1.56xlarge**   |  3,072  |  224  |  2 x 1,800 GiB  |  2,250  |  4,500  |  9,000  | 
|   **x2iedn.32xlarge**   |  4,096  |  128  |  2 x 2,400 GiB  |  2,250  |  4,500  |  9,000  | 
|   **x2iedn.24xlarge**   |  3,072  |  96  |  2 x 1,800 GiB  |  2,250  |  4,500  |  9,000  | 
|   **x2idn.32xlarge**   |  2,048  |  128  |  2 x 1,200 GiB  |  2,250  |  4,500  |  9,000  | 
|   **x2idn.24xlarge**   |  1,536  |  96  |  2 x 900 GiB  |  1,875  |  3,750  |  7,500  | 
|   **x2idn.16xlarge**   |  1,024  |  64  |  2 x 600 GiB  |  1,875  |  3,750  |  7,500  | 
|   **r7i.48xlarge**   |  1,536  |  192  |  1 x 1,800 GiB  |  1,875  |  7,500  |  7,500  | 
|   **r7i.24xlarge**   |  768  |  96  |  1 x 900 GiB  |  1,875  |  7,500  |  7,500  | 
|   **r7i.16xlarge**   |  512  |  64  |  1 x 600 GiB  |  1,875  |  7,500  |  7,500  | 
|   **r7i.12xlarge**   |  384  |  48  |  1 x 300 GiB  |  1,875  |  7,500  |  7,500  | 
|   **r7i.8xlarge**   |  256  |  32  |  1 x 300 GiB  |  1,875  |  7,500  |  7,500  | 
|   **r6i.32xlarge**   |  1,024  |  128  |  1 x 1,200 GiB  |  1,875  |  7,500  |  7,500  | 
|   **r6i.24xlarge**   |  768  |  96  |  1 x 1,200 GiB  |  1,875  |  7,500  |  7,500  | 
|   **r6i.16xlarge**   |  512  |  64  |  1 x 600 GiB  |  1,875  |  7,500  |  7,500  | 
|   **r6i.12xlarge**   |  384  |  48  |  1 x 600 GiB  |  1,875  |  7,500  |  7,500  | 
|   **r6i.8xlarge**   |  256  |  32  |  1 x 300 GiB  |  1,875  |  7,500  |  7,500  | 
|   **r5.24xlarge**   |  768  |  96  |  1 x 1,200 GiB  |  1,875  |  7,500  |  7,500  | 
|   **r5.16xlarge**   |  512  |  64  |  1 x 600 GiB  |  1,875  |  7,500  |  7,500  | 
|   **r5.12xlarge**   |  384  |  48  |  1 x 600 GiB  |  1,875  |  7,500  |  7,500  | 
|   **r5.8xlarge**   |  256  |  32  |  1 x 300 GiB  |  1,875  |  7,500  |  7,500  | 
|   **r5.metal**   |  768  |  96  |  1 x 1,200 GiB  |  1,875  |  7,500  |  7,500  | 
|   **r5b.24xlarge**   |  768  |  96  |  1 x 1,200 GiB  |  1,875  |  7,500  |  7,500  | 
|   **r5b.16xlarge**   |  512  |  64  |  1 x 600 GiB  |  1,875  |  7,500  |  7,500  | 
|   **r5b.12xlarge**   |  384  |  48  |  1 x 600 GiB  |  1,875  |  7,500  |  7,500  | 
|   **r5b.8xlarge**   |  256  |  32  |  1 x 300 GiB  |  1,875  |  7,500  |  7,500  | 
|   **r5b.metal**   |  768  |  96  |  1 x 1,200 GiB  |  1,875  |  7,500  |  7,500  | 


**Supported for nonproduction use only**  

|  |  |  |  |  |  |  | 
| --- |--- |--- |--- |--- |--- |--- |
|   **Instance type**   |   **Memory (GiB)**   |   **vCPUs / logical processors**\*  |   **Provisioned IOPS SSD (io1/io2) storage with LVM**   |   **Total maximum throughput (MiB/s)**   |   **Provisioned IOPS per volume**   |   **Total provisioned IOPS**   | 
|   **x2iedn.4xlarge**   |  512  |  16  |  1 x 300 GiB  |  500  |  2,000  |  2,000  | 
|   **x2iedn.2xlarge**   |  256  |  8  |  1 x 300 GiB  |  500  |  2,000  |  2,000  | 
|   **x2iedn.xlarge**   |  128  |  4  |  1 x 300 GiB  |  500  |  2,000  |  2,000  | 
|   **r7i.4xlarge**   |  128  |  16  |  1 x 300 GiB  |  500  |  2,000  |  2,000  | 
|   **r7i.2xlarge**   |  64  |  8  |  1 x 300 GiB  |  500  |  2,000  |  2,000  | 
|   **r6i.4xlarge**   |  128  |  16  |  1 x 300 GiB  |  500  |  2,000  |  2,000  | 
|   **r6i.2xlarge**   |  64  |  8  |  1 x 300 GiB  |  500  |  2,000  |  2,000  | 
|   **r5.4xlarge**   |  128  |  16  |  1 x 300 GiB  |  500  |  2,000  |  2,000  | 
|   **r5.2xlarge**   |  64  |  8  |  1 x 300 GiB  |  500  |  2,000  |  2,000  | 
|   **r5b.4xlarge**   |  128  |  16  |  1 x 300 GiB  |  500  |  2,000  |  2,000  | 
|   **r5b.2xlarge**   |  64  |  8  |  1 x 300 GiB  |  500  |  2,000  |  2,000  | 
 *\* Each logical processor offered by Amazon EC2 High Memory Instances is a hyperthread on a physical CPU core.* 

 *\*\* This value represents the maximum achievable throughput when striping multiple EBS volumes. Actual throughput depends on the instance type, because each instance type has its own Amazon EBS throughput maximum. For details, see [Amazon EBS-Optimized Instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html) in the AWS documentation.* 


**Certified for production use**  

|  |  |  |  |  |  |  | 
| --- |--- |--- |--- |--- |--- |--- |
|   **Instance type**   |   **Memory (GiB)**   |   **vCPUs / logical processors**\*  |   **Provisioned IOPS SSD (io1/io2) storage with LVM**   |   **Total maximum throughput (MiB/s)**   |   **Provisioned IOPS per volume**   |   **Total provisioned IOPS**   | 
|   **u-24tb1.112xlarge**   |  24,576  |  448  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u-24tb1.metal**   |  24,576  |  448  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u-18tb1.112xlarge**   |  18,432  |  448  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u-18tb1.metal**   |  18,432  |  448  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u-12tb1.112xlarge**   |  12,288  |  448  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u-12tb1.metal**   |  12,288  |  448  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u-9tb1.112xlarge**   |  9,216  |  448  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u-9tb1.metal**   |  9,216  |  448  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u7in-24tb.112xlarge**   |  24,576  |  896  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u7in-16tb.112xlarge**   |  16,384  |  896  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u7i-12tb.224xlarge**   |  12,288  |  896  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u7i-8tb.112xlarge**   |  8,192  |  448  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u7i-6tb.112xlarge**   |  6,144  |  448  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u7inh-32tb.480xlarge**   |  32,768  |  1,920  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u-6tb1.112xlarge**   |  6,144  |  448  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u-6tb1.56xlarge**   |  6,144  |  224  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u-6tb1.metal**   |  6,144  |  448  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **u-3tb1.56xlarge**   |  3,072  |  224  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **x2iedn.32xlarge**   |  4,096  |  128  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **x2iedn.24xlarge**   |  3,072  |  96  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **x2idn.32xlarge**   |  2,048  |  128  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **x2idn.24xlarge**   |  1,536  |  96  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **x2idn.16xlarge**   |  1,024  |  64  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **r7i.48xlarge**   |  1,536  |  192  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **r7i.24xlarge**   |  768  |  96  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **r7i.16xlarge**   |  512  |  64  |  1 x 260 GiB  |  500  |  2,000  |  2,000  | 
|   **r7i.12xlarge**   |  384  |  48  |  1 x 260 GiB  |  500  |  2,000  |  2,000  | 
|   **r7i.8xlarge**   |  256  |  32  |  1 x 260 GiB  |  500  |  2,000  |  2,000  | 
|   **r6i.32xlarge**   |  1,024  |  128  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **r6i.24xlarge**   |  768  |  96  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **r6i.16xlarge**   |  512  |  64  |  1 x 260 GiB  |  500  |  2,000  |  2,000  | 
|   **r6i.12xlarge**   |  384  |  48  |  1 x 260 GiB  |  500  |  2,000  |  2,000  | 
|   **r6i.8xlarge**   |  256  |  32  |  1 x 260 GiB  |  500  |  2,000  |  2,000  | 
|   **r5.24xlarge**   |  768  |  96  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **r5.16xlarge**   |  512  |  64  |  1 x 260 GiB  |  500  |  2,000  |  2,000  | 
|   **r5.12xlarge**   |  384  |  48  |  1 x 260 GiB  |  500  |  2,000  |  2,000  | 
|   **r5.8xlarge**   |  256  |  32  |  1 x 260 GiB  |  500  |  2,000  |  2,000  | 
|   **r5.metal**   |  768  |  96  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **r5b.24xlarge**   |  768  |  96  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 
|   **r5b.16xlarge**   |  512  |  64  |  1 x 260 GiB  |  500  |  2,000  |  2,000  | 
|   **r5b.12xlarge**   |  384  |  48  |  1 x 260 GiB  |  500  |  2,000  |  2,000  | 
|   **r5b.8xlarge**   |  256  |  32  |  1 x 260 GiB  |  500  |  2,000  |  2,000  | 
|   **r5b.metal**   |  768  |  96  |  1 x 525 GiB  |  500  |  2,000  |  2,000  | 


**Supported for nonproduction use only**  

|  |  |  |  |  |  |  | 
| --- |--- |--- |--- |--- |--- |--- |
|   **Instance type**   |   **Memory (GiB)**   |   **vCPUs / logical processors**\*  |   **Provisioned IOPS SSD (io1/io2) storage with LVM**   |   **Total maximum throughput (MiB/s)**   |   **Provisioned IOPS per volume**   |   **Total provisioned IOPS**   | 
|   **x2iedn.4xlarge**   |  512  |  16  |  1 x 260 GiB  |  250  |  1,000  |  1,000  | 
|   **x2iedn.2xlarge**   |  256  |  8  |  1 x 260 GiB  |  250  |  1,000  |  1,000  | 
|   **x2iedn.xlarge**   |  128  |  4  |  1 x 260 GiB  |  250  |  1,000  |  1,000  | 
|   **r7i.4xlarge**   |  128  |  16  |  1 x 260 GiB  |  250  |  1,000  |  1,000  | 
|   **r7i.2xlarge**   |  64  |  8  |  1 x 260 GiB  |  250  |  1,000  |  1,000  | 
|   **r6i.4xlarge**   |  128  |  16  |  1 x 260 GiB  |  250  |  1,000  |  1,000  | 
|   **r6i.2xlarge**   |  64  |  8  |  1 x 260 GiB  |  250  |  1,000  |  1,000  | 
|   **r5.4xlarge**   |  128  |  16  |  1 x 260 GiB  |  250  |  1,000  |  1,000  | 
|   **r5.2xlarge**   |  64  |  8  |  1 x 260 GiB  |  250  |  1,000  |  1,000  | 
|   **r5b.4xlarge**   |  128  |  16  |  1 x 260 GiB  |  250  |  1,000  |  1,000  | 
|   **r5b.2xlarge**   |  64  |  8  |  1 x 260 GiB  |  250  |  1,000  |  1,000  | 
 *\* Each logical processor offered by Amazon EC2 High Memory Instances is a hyperthread on a physical CPU core.* 

 *\*\* This value represents the maximum achievable throughput when striping multiple EBS volumes. Actual throughput depends on the instance type, because each instance type has its own Amazon EBS throughput maximum. For details, see [Amazon EBS-Optimized Instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html) in the AWS documentation.* 

**Note**  
io2 Block Express volumes support up to 4,000 MiB/s of throughput per volume with 16,000 IOPS at a 256 KiB I/O size, or with 64,000 IOPS at a 16 KiB I/O size. The maximum throughput value shown in the *Total maximum throughput* column = Total provisioned IOPS × 256 KiB I/O size. To increase the throughput, increase the provisioned IOPS.
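
The relationship in the note above can be sketched as a small calculation (an illustrative helper, not an AWS tool; the function name is ours):

```python
def max_throughput_mib_s(provisioned_iops: int, io_size_kib: int) -> float:
    """Maximum EBS volume throughput in MiB/s for a given IOPS budget and
    I/O size: provisioned IOPS x I/O size, converted from KiB/s to MiB/s."""
    return provisioned_iops * io_size_kib / 1024

# io2 Block Express: 16,000 IOPS at a 256 KiB I/O size reaches 4,000 MiB/s
print(max_throughput_mib_s(16_000, 256))  # 4000.0

# The nonproduction rows above: 2,000 provisioned IOPS yields 500 MiB/s
print(max_throughput_mib_s(2_000, 256))   # 500.0
```

This is why the note recommends increasing provisioned IOPS to raise throughput: at a fixed 256 KiB I/O size, throughput scales linearly with IOPS.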

## Root, binaries, shared, and backup volumes
<a name="root-binaries-hana-legacy"></a>

In addition to the SAP HANA data and log volumes, we recommend the following storage configuration for root, SAP binaries, and SAP HANA shared and backup volumes:


**Certified for production use**  

|  |  |  |  |  |  |  | 
| --- |--- |--- |--- |--- |--- |--- |
|   **Instance type**   |   **Memory (GiB)**   |   **vCPUs / logical processors**\*  |   **Root volume**   |   **SAP binaries**   |   **SAP HANA shared**\*\*  |   **SAP HANA backup**\*\*\*  | 
|   **u-24tb1.112xlarge**   |  24,576  |  448  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  2 x 16,384 GiB  | 
|   **u-24tb1.metal**   |  24,576  |  448  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  2 x 16,384 GiB  | 
|   **u-18tb1.112xlarge**   |  18,432  |  448  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  2 x 16,384 GiB  | 
|   **u-18tb1.metal**   |  18,432  |  448  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  2 x 16,384 GiB  | 
|   **u-12tb1.112xlarge**   |  12,288  |  448  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  1 x 16,384 GiB  | 
|   **u-12tb1.metal**   |  12,288  |  448  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  1 x 16,384 GiB  | 
|   **u-9tb1.112xlarge**   |  9,216  |  448  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  1 x 16,384 GiB  | 
|   **u-9tb1.metal**   |  9,216  |  448  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  1 x 16,384 GiB  | 
|   **u7in-24tb.112xlarge**   |  24,576  |  896  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  2 x 16,384 GiB  | 
|   **u7in-16tb.112xlarge**   |  16,384  |  896  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  2 x 16,384 GiB  | 
|   **u7i-12tb.224xlarge**   |  12,288  |  896  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  2 x 16,384 GiB  | 
|   **u7i-8tb.112xlarge**   |  8,192  |  448  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  1 x 16,384 GiB  | 
|   **u7i-6tb.112xlarge**   |  6,144  |  448  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  1 x 12,288 GiB  | 
|   **u7inh-32tb.480xlarge**   |  32,768  |  1,920  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  3 x 16,384 GiB  | 
|   **u-6tb1.112xlarge**   |  6,144  |  448  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  1 x 12,288 GiB  | 
|   **u-6tb1.56xlarge**   |  6,144  |  224  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  1 x 12,288 GiB  | 
|   **u-6tb1.metal**   |  6,144  |  448  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  1 x 12,288 GiB  | 
|   **u-3tb1.56xlarge**   |  3,072  |  224  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  1 x 6,144 GiB  | 
|   **x2iedn.32xlarge**   |  4,096  |  128  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  1 x 8,192 GiB  | 
|   **x2iedn.24xlarge**   |  3,072  |  96  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  1 x 6,144 GiB  | 
|   **x2idn.32xlarge**   |  2,048  |  128  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  1 x 4,096 GiB  | 
|   **x2idn.24xlarge**   |  1,536  |  96  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  1 x 3,072 GiB  | 
|   **x2idn.16xlarge**   |  1,024  |  64  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  1 x 2,048 GiB  | 
|   **x1e.32xlarge**   |  3,904  |  128  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  1 x 8,192 GiB  | 
|   **x1.32xlarge**   |  1,952  |  128  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  1 x 4,096 GiB  | 
|   **x1.16xlarge**   |  976  |  64  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  1 x 2,048 GiB  | 
|   **r7i.48xlarge**   |  1,536  |  192  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  1 x 3,072 GiB  | 
|   **r7i.24xlarge**   |  768  |  96  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  1 x 2,048 GiB  | 
|   **r7i.16xlarge**   |  512  |  64  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 512 GiB  |  1 x 1,024 GiB  | 
|   **r7i.12xlarge**   |  384  |  48  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 512 GiB  |  1 x 1,024 GiB  | 
|   **r7i.8xlarge**   |  256  |  32  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 300 GiB  |  1 x 1,024 GiB  | 
|   **r6i.32xlarge**   |  1,024  |  128  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  1 x 2,048 GiB  | 
|   **r6i.24xlarge**   |  768  |  96  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  1 x 2,048 GiB  | 
|   **r6i.16xlarge**   |  512  |  64  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 512 GiB  |  1 x 1,024 GiB  | 
|   **r6i.12xlarge**   |  384  |  48  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 512 GiB  |  1 x 1,024 GiB  | 
|   **r6i.8xlarge**   |  256  |  32  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 300 GiB  |  1 x 1,024 GiB  | 
|   **r5.24xlarge**   |  768  |  96  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  1 x 2,048 GiB  | 
|   **r5.16xlarge**   |  512  |  64  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 512 GiB  |  1 x 1,024 GiB  | 
|   **r5.12xlarge**   |  384  |  48  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 512 GiB  |  1 x 1,024 GiB  | 
|   **r5.8xlarge**   |  256  |  32  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 300 GiB  |  1 x 1,024 GiB  | 
|   **r5.metal**   |  768  |  96  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  1 x 2,048 GiB  | 
|   **r5b.24xlarge**   |  768  |  96  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  1 x 2,048 GiB  | 
|   **r5b.16xlarge**   |  512  |  64  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 512 GiB  |  1 x 1,024 GiB  | 
|   **r5b.12xlarge**   |  384  |  48  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 512 GiB  |  1 x 1,024 GiB  | 
|   **r5b.8xlarge**   |  256  |  32  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 300 GiB  |  1 x 1,024 GiB  | 
|   **r5b.metal**   |  768  |  96  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 1,024 GiB  |  1 x 2,048 GiB  | 
|   **r4.16xlarge**   |  488  |  64  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 512 GiB  |  1 x 1,024 GiB  | 
|   **r4.8xlarge**   |  244  |  32  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 300 GiB  |  1 x 1,024 GiB  | 


**Supported for nonproduction use only**  

|  |  |  |  |  |  |  | 
| --- |--- |--- |--- |--- |--- |--- |
|   **Instance type**   |   **Memory (GiB)**   |   **vCPUs / logical processors**\*  |   **Root volume**   |   **SAP binaries**   |   **SAP HANA shared**\*\*  |   **SAP HANA backup**\*\*\*  | 
|   **x2iedn.4xlarge**   |  512  |  16  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 512 GiB  |  1 x 1,024 GiB  | 
|   **x2iedn.2xlarge**   |  256  |  8  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 300 GiB  |  1 x 512 GiB  | 
|   **x2iedn.xlarge**   |  128  |  4  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 300 GiB  |  1 x 512 GiB  | 
|   **x1e.4xlarge**   |  488  |  16  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 512 GiB  |  1 x 1,024 GiB  | 
|   **x1e.2xlarge**   |  244  |  8  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 300 GiB  |  1 x 512 GiB  | 
|   **x1e.xlarge**   |  122  |  4  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 300 GiB  |  1 x 512 GiB  | 
|   **r7i.4xlarge**   |  128  |  16  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 300 GiB  |  1 x 512 GiB  | 
|   **r7i.2xlarge**   |  64  |  8  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 300 GiB  |  1 x 512 GiB  | 
|   **r6i.4xlarge**   |  128  |  16  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 300 GiB  |  1 x 512 GiB  | 
|   **r6i.2xlarge**   |  64  |  8  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 300 GiB  |  1 x 512 GiB  | 
|   **r5.4xlarge**   |  128  |  16  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 300 GiB  |  1 x 512 GiB  | 
|   **r5.2xlarge**   |  64  |  8  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 300 GiB  |  1 x 512 GiB  | 
|   **r5b.4xlarge**   |  128  |  16  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 300 GiB  |  1 x 512 GiB  | 
|   **r5b.2xlarge**   |  64  |  8  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 300 GiB  |  1 x 512 GiB  | 
|   **r4.4xlarge**   |  122  |  16  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 300 GiB  |  1 x 512 GiB  | 
|   **r4.2xlarge**   |  61  |  8  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 300 GiB  |  1 x 512 GiB  | 
|   **r3.4xlarge**   |  122  |  16  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 300 GiB  |  1 x 512 GiB  | 
|   **r3.2xlarge**   |  61  |  8  |  1 x 50 GiB  |  1 x 50 GiB  |  1 x 300 GiB  |  1 x 512 GiB  | 

 *\* Each logical processor offered by Amazon EC2 High Memory Instances is a hyperthread on a physical CPU core.* 

 *\*\* In a multi-node architecture, the SAP HANA NFS shared volume is provisioned only once, on the master node.* 

 *\*\*\* In a multi-node architecture, the SAP HANA backup volume can be deployed as NFS or Amazon EFS. The size of the SAP HANA NFS backup volume is multiplied by the number of nodes. The SAP HANA backup volume is provisioned only once, on the master node, and is NFS-mounted on the worker nodes. No provisioning is needed for [Amazon EFS](https://aws.amazon.com/efs/features/), because it is built to scale on demand, growing and shrinking automatically as files are added and removed.* 
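
The NFS sizing rule in the footnotes can be written as a one-line calculation (an illustrative sketch; the function name and inputs are ours):

```python
def nfs_backup_volume_gib(per_node_backup_gib: int, nodes: int) -> int:
    """Total size of the SAP HANA NFS backup volume provisioned on the
    master node: the per-node backup size multiplied by the node count.
    Not needed for Amazon EFS, which scales on demand."""
    return per_node_backup_gib * nodes

# e.g. a 3-node scale-out cluster of r5.24xlarge (2,048 GiB backup per node)
print(nfs_backup_volume_gib(2048, 3))  # 6144
```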

## Backup options
<a name="backup-options-legacy"></a>

For SAP HANA backup, you can choose file-based backup with the storage configuration recommended in this guide, or use [AWS Backint Agent for SAP HANA](https://aws.amazon.com/backint-agent/) to back up your database to Amazon S3. AWS Backint Agent for SAP HANA is an SAP-certified backup and restore solution for SAP HANA workloads running on Amazon EC2 instances. With AWS Backint Agent as your backup solution, provisioning additional Amazon EBS storage volumes or Amazon EFS file systems becomes optional. For more details, see [AWS Backint Agent for SAP HANA](https://aws.amazon.com/backint-agent/).

For disaster recovery (DR) purposes, you can also automate the creation of application-consistent EBS snapshots for SAP HANA by using Amazon Data Lifecycle Manager and the AWS Systems Manager document for SAP HANA. EBS snapshots make it easy to maintain a copy of your SAP HANA databases in another Region or account. Restoring an entire SAP HANA database from an EBS snapshot can take longer than restoring from other backup types. However, you can reduce the restore time by enabling [Amazon EBS fast snapshot restore](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-fast-snapshot-restore.html) on the snapshots. We recommend that you use EBS snapshots to supplement your existing AWS Backint Agent backups, and that you use Amazon Data Lifecycle Manager to automate the copying and retention of EBS snapshots in DR Regions as needed. For more information, see [Amazon EBS snapshots for SAP HANA](ebs-sap-hana.md).

For single-node deployment, we recommend using [Amazon EBS](https://aws.amazon.com/ebs/features/) Throughput Optimized HDD (`st1`) volumes for SAP HANA to perform file-based backup. This volume type provides low-cost magnetic storage designed for large sequential workloads. SAP HANA uses sequential I/O with large blocks to back up the database, so `st1` volumes provide a low-cost, high-performance option for this scenario. To learn more about `st1` volumes, see [Amazon EBS Volume Types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html).

The SAP HANA backup volume size is designed to provide optimal baseline and burst throughput, as well as the ability to hold several backup sets. Holding multiple backup sets in the backup volume makes it easier to recover your database if necessary. You can resize your SAP HANA backup volume after the initial setup if needed. To learn more about resizing Amazon EBS volumes, see [Expanding the Storage Size of an EBS Volume on Linux](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-expand-volume.html).
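
The link between backup volume size and `st1` performance can be illustrated with the published `st1` scaling rules (40 MiB/s baseline and 250 MiB/s burst per TiB, each capped at 500 MiB/s); treat these constants as a sketch to be checked against the current Amazon EBS documentation:

```python
def st1_throughput_mib_s(size_gib: int) -> tuple[float, float]:
    """(baseline, burst) throughput in MiB/s for an st1 volume of the
    given size: 40 MiB/s baseline and 250 MiB/s burst per TiB,
    each capped at 500 MiB/s."""
    size_tib = size_gib / 1024
    baseline = min(40 * size_tib, 500)
    burst = min(250 * size_tib, 500)
    return baseline, burst

# A 2,048 GiB (2 TiB) backup volume: 80 MiB/s baseline, 500 MiB/s burst
print(st1_throughput_mib_s(2048))  # (80.0, 500.0)
```

This is why resizing the backup volume upward also raises its baseline throughput, which matters for large sequential backup writes.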

For multi-node deployments, we recommend using [Amazon EFS](https://aws.amazon.com/efs/features/) for file-based SAP HANA backups. Amazon EFS can support throughput of more than 10 GB/s and over 500,000 IOPS.

The configurations recommended in this guide are used by [AWS Launch Wizard for SAP](https://aws.amazon.com/launchwizard/).

# Configure storage (FSx for ONTAP)
<a name="sap-hana-amazon-fsx"></a>

Amazon FSx for NetApp ONTAP is a fully managed service that provides highly reliable, scalable, high-performing, and feature-rich file storage built on NetApp’s popular ONTAP file system. You can now deploy and operate SAP HANA on AWS with Amazon FSx for NetApp ONTAP. For more information, see [Amazon FSx for NetApp ONTAP](https://aws.amazon.com/fsx/netapp-ontap/).

SAP HANA stores and processes all of its data in memory and provides protection against data loss by saving the data in persistent storage locations. To achieve optimal performance, the storage solution used for SAP HANA data and log volumes must meet SAP’s storage KPI. As a fully managed service, Amazon FSx for NetApp ONTAP makes it easier to launch and scale reliable, high-performing, and secure shared file storage in the cloud.

If you are a first-time user, see [How Amazon FSx for NetApp ONTAP works](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/how-it-works-fsx-ontap.html).

This guide covers the following topics.
+  [Supported configurations](instances-sizing-sap-hana-amazon-fsx.md) 
+  [Set up FSx for ONTAP file system SVMs and volumes](amazon-fsx-sap-hana.md) 
+  [Set up host](host-setup-fsx-sap-hana.md) 

For SAP specifications, refer to [SAP Note 2039883 - FAQ: SAP HANA database and data snapshots](https://me.sap.com/notes/2039883) and [SAP Note 3024346 - Linux Kernel Settings for NetApp NFS](https://me.sap.com/notes/3024346).

# Supported configurations
<a name="instances-sizing-sap-hana-amazon-fsx"></a>

The following rules and limitations are applicable for deploying SAP HANA on AWS with Amazon FSx for NetApp ONTAP.
+ FSx for ONTAP file systems for SAP HANA data and log volumes are only supported for single Availability Zone deployment.
+ Amazon EC2 instances where you plan to deploy your SAP HANA workload and FSx for ONTAP file systems must be in the same subnet.
+ Use separate storage virtual machines (SVMs) for SAP HANA data and log volumes; SVMs incur no additional cost. This ensures that your I/O traffic flows through different IP addresses and TCP sessions.
+ For SAP HANA scale-out with a standby node, the `basepath_shared` parameter must be set to *yes*. You can locate it in the *persistence* section of the `global.ini` file.
+ SAP HANA on FSx for ONTAP is only supported with the NFSv4.1 protocol. SAP HANA volumes must be created and mounted using the NFSv4.1 protocol.
+ SAP HANA on FSx for ONTAP is only supported on the following operating systems:
  + Red Hat Enterprise Linux 8.4 and above
  + SUSE Linux Enterprise Server 15 SP2 and above
+  `/hana/data` and `/hana/log` must have their own FSx for ONTAP volumes. `/hana/shared` and `/usr/sap` can share a volume.

## Supported Amazon EC2 instance types
<a name="instance-types-sap-hana-amazon-fsx"></a>

Amazon FSx for NetApp ONTAP is certified by SAP for scale-up and scale-out (OLTP/OLAP) SAP HANA workloads in a single Availability Zone setup. You can use Amazon FSx for NetApp ONTAP as the primary storage for SAP HANA data, log, binary, and shared volumes. For a complete list of supported Amazon EC2 instances for SAP HANA, see [SAP HANA certified instances](https://docs.aws.amazon.com/sap/latest/general/sap-hana-aws-ec2.html).

## Sizing
<a name="sizing-sap-hana-amazon-fsx"></a>

You can configure the throughput capacity of an FSx for ONTAP file system when you create it, scaling up to 4 GB/s of read throughput and 1,000 MB/s of write throughput in a single Availability Zone deployment. For more information, see [Amazon FSx for NetApp ONTAP performance](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/performance.html).

**Topics**
+ [SAP KPIs](#sizing-sap-kpi)
+ [Minimum requirement](#sizing-min-req)
+ [Higher throughput](#sizing-high-throughput)

### SAP KPIs
<a name="sizing-sap-kpi"></a>

 **SAP requires the following KPIs for SAP HANA volumes.** 


|  | Read | Write | 
| --- | --- | --- | 
|  Data  |  400 MB/s  |  250 MB/s  | 
|  Log  |  250 MB/s  |  250 MB/s  | 
|  Latency for log  |  Less than 1 millisecond write latency with 4 KB and 16 KB block-sized I/O  |  | 

### Minimum requirement
<a name="sizing-min-req"></a>

You must provision FSx for ONTAP volumes with sufficient capacity and performance, based on the requirements of your SAP HANA workload. To meet the storage KPIs for SAP HANA, you need a throughput capacity of at least **1,024 MB/s**. Lower throughput may be acceptable for non-production systems.

Sharing a file system between multiple SAP HANA nodes is supported when the file system meets the requirements of all SAP HANA nodes. When sharing a file system, you can use the quality of service feature for consistent performance and reduced interference between competing workloads. For more information, see [Using Quality of Service in Amazon FSx for NetApp ONTAP](https://aws.amazon.com/blogs/storage/using-quality-of-service-in-amazon-fsx-for-netapp-ontap/).

### Higher throughput
<a name="sizing-high-throughput"></a>

If you require higher throughput, you can do one of the following:
+ Create separate data and log volumes on different FSx for ONTAP file systems.
+ Create additional data volume partitions across multiple FSx for ONTAP file systems.

To learn more about FSx for ONTAP performance, see [Performance details](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/performance.html#performance-details-fsxw).

## SAP HANA parameters
<a name="sap-hana-amazon-fsx"></a>

Set the following SAP HANA database parameters in the `global.ini` file.

```
[fileio]
max_parallel_io_requests=128
async_read_submit=on
async_write_submit_active=on
async_write_submit_blocks=all
```

Use the following SQL commands to set these parameters on `SYSTEM` level.

```
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('fileio', 'max_parallel_io_requests') = '128' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('fileio', 'async_read_submit') = 'on' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('fileio', 'async_write_submit_active') = 'on' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('fileio', 'async_write_submit_blocks') = 'all' WITH RECONFIGURE;
```
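If you prefer to apply these settings from the Linux shell, the SQL statements above can also be issued through SAP's `hdbsql` client. This is a sketch only: the instance number `00`, the `SYSTEM` user, and the password placeholder are assumptions to replace with your own values.

```shell
# Sketch: apply one of the [fileio] settings via hdbsql.
# Placeholders (assumptions): instance number 00, user SYSTEM, <password>.
hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p "<password>" \
  "ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('fileio', 'max_parallel_io_requests') = '128' WITH RECONFIGURE"
```

Repeat the call for the remaining three parameters, or run all four statements in one `hdbsql` session.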

# Set up FSx for ONTAP file system, SVMs, and volumes
<a name="amazon-fsx-sap-hana"></a>

Before you create an FSx for ONTAP file system, determine the total storage space that your SAP HANA workload needs. You can increase the storage size later; to decrease it, you must create a new file system.

To create an FSx for ONTAP file system, see [Step 1: Create an Amazon FSx for NetApp ONTAP file system](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/getting-started-step1.html). For more information, see [Managing FSx for ONTAP file systems](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/managing-file-systems.html).

**Note**  
Only single Availability Zone file systems are supported for SAP HANA workloads.

**Topics**
+ [Create storage virtual machines (SVM)](#svm-sap-hana)
+ [Volume configuration](#volume-fsx-sap-hana)
+ [Sample estimate](#sizing-estimation)
+ [Volume layout](#vol-layout-fsx-sap-hana)
+ [File system setup](#filesys-fsx-sap-hana)
+ [Disable snapshots](#snaps-fsx-sap-hana)
+ [Quality of Service (QoS)](#fsx-qos)
+ [Backup](#fsx-backup)

## Create storage virtual machines (SVM)
<a name="svm-sap-hana"></a>

You get one SVM per FSx for ONTAP file system by default. You can create additional SVMs at any time. For optimal performance, mount data and log volumes using different IP addresses. You can achieve this using separate SVMs for data and log volumes. If you plan to use NetApp SnapCenter, all SVMs used for SAP HANA must have unique names. You don’t need to join your file system to Active Directory for SAP HANA. For more information, see [Managing FSx for ONTAP storage virtual machines](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/managing-svms.html).
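As a sketch, separate SVMs for data and log volumes can be created with the AWS CLI's `aws fsx create-storage-virtual-machine` operation. The file system ID and SVM names below are placeholder assumptions, not values from this guide.

```shell
# Sketch: create separate SVMs for SAP HANA data and log volumes.
# fs-0123456789abcdef0, hana-data, and hana-log are placeholders (assumptions).
aws fsx create-storage-virtual-machine \
    --file-system-id fs-0123456789abcdef0 \
    --name hana-data

aws fsx create-storage-virtual-machine \
    --file-system-id fs-0123456789abcdef0 \
    --name hana-log
```

Each SVM exposes its own NFS endpoint, so data and log traffic use different IP addresses and TCP sessions.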

## Volume configuration
<a name="volume-fsx-sap-hana"></a>

The storage capacity of your file system should align with the needs of `/hana/shared`, `/hana/data`, and `/hana/log` volumes. You must also consider the capacity required for snapshots, if applicable.

We recommend creating separate FSx for ONTAP volumes for the SAP HANA data, log, shared, and binary file systems. The following table lists the recommended minimum sizes per volume.


| Volume | Recommended size for scale-up | Recommended size for scale-out | 
| --- | --- | --- | 
|   `/usr/sap`   |  50 GiB  |  50 GiB  | 
|   `/hana/shared`   |  Minimum of 1 x memory of your Amazon EC2 instance or 1 TB  |  1 x memory of your Amazon EC2 instance for every 4 subordinate nodes¹  | 
|   `/hana/data`   |  At least 1.2 x memory of your Amazon EC2 instance  |  At least 1.2 x memory of your Amazon EC2 instance  | 
|   `/hana/log`   |  Minimum of 0.5 x memory of your Amazon EC2 instance or 600 GiB  |  Minimum of 0.5 x memory of your Amazon EC2 instance or 600 GiB  | 

¹ For example, if you have 2-4 scale-out nodes, you need 1 x memory of your single Amazon EC2 instance. If you have 5-8 scale-out nodes, you need 2 x memory of your single Amazon EC2 instance.
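As an illustration of the sizing table above, the minimum scale-up volume sizes can be derived from the instance memory. The sketch below assumes a hypothetical instance with 1,024 GiB of memory and treats 1 TB as 1,024 GiB for simplicity.

```shell
# Sketch: minimum scale-up volume sizes (GiB) from instance memory.
MEM_GIB=1024   # assumption: instance with 1,024 GiB of memory

data_gib=$(( MEM_GIB * 12 / 10 ))                     # /hana/data: at least 1.2 x memory
log_gib=$(( MEM_GIB / 2 < 600 ? MEM_GIB / 2 : 600 ))  # /hana/log: min(0.5 x memory, 600 GiB)
shared_gib=$(( MEM_GIB < 1024 ? MEM_GIB : 1024 ))     # /hana/shared: min(1 x memory, ~1 TB)

echo "data=${data_gib} log=${log_gib} shared=${shared_gib}"
# → data=1228 log=512 shared=1024
```

Add headroom on top of these minimums for snapshots, if you plan to keep any on the volumes.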

The following limitations apply when you create an FSx for ONTAP file system for SAP HANA.
+  *Capacity Pool Tiering* is not supported for SAP HANA and must be set to **None**.
+  *Daily automatic backups* must be **disabled** for SAP HANA. Default FSx for ONTAP backups are not application-aware and cannot be used to restore SAP HANA to a consistent state.

## Sample estimate
<a name="sizing-estimation"></a>

You can use the formulas in the following table to estimate SAP HANA performance KPIs for production systems, whether in a single Availability Zone or a multi-Availability Zone setup. See the storage architecture for [Amazon FSx for NetApp ONTAP](https://docs.aws.amazon.com/sap/latest/sap-hana/architecture-fsx.html) to learn more.

Note: Amazon EC2 root volumes used as boot volumes for the operating system must always be based on Amazon EBS (for example, `gp3`). Using an EBS-based SAP HANA log volume with FSx for ONTAP is supported.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sap/latest/sap-hana/amazon-fsx-sap-hana.html)

**Note**  
(¹) You must provision a secondary FSx for ONTAP volume for SAP HANA multi-Availability Zone deployments.
(²) This can be deployed in a single-Availability Zone setup for cost efficiency.

 **Common parameters** 
+ CHANGE-RATE-DB: 30% for prod, 5% for non-prod
+ CHANGE-RATE-BINARIES: 5%
+ LOG-RATE: 5%
+ SNAPSHOTS-KEPT-AT-PRIMARY: 3 days
+ RETENTION: 30 days

## Volume layout
<a name="vol-layout-fsx-sap-hana"></a>

**Topics**
+ [SAP HANA scale-up](#fsx-volume-layout-scaleup)
+ [SAP HANA scale-out](#fsx-volume-layout-scaleout)

### SAP HANA scale-up
<a name="fsx-volume-layout-scaleup"></a>

The following table presents an example of volume and mount point configuration for scale-up setup. It includes a single host. `HDB` is the SAP HANA system ID. To place the home directory of the `hdbadm` user on the central storage, the `/usr/sap/HDB` file system must be mounted from the `HDB_shared` volume.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sap/latest/sap-hana/amazon-fsx-sap-hana.html)

### SAP HANA scale-out
<a name="fsx-volume-layout-scaleout"></a>

You must mount all the data, log, and shared volumes in every node, including the standby node.

The following table presents an example of volume and mount point configuration for a scale-out setup. It includes four active hosts and one standby host. `HDB` is the SAP HANA system ID. The home (`/usr/sap/HDB`) and shared (`/hana/shared`) directories of every host are stored in the `HDB_shared` volume. To place the home directory of the `hdbadm` user on the central storage, the `/usr/sap/HDB` file system must be mounted from the `HDB_shared` volume.


| Volume name | Junction path | Directory | Mount point | Note | 
| --- | --- | --- | --- | --- | 
|  HDB_data_mnt00001  |  HDB_data_mnt00001  |  N/A  |  /hana/data/HDB/mnt00001  |  Mounted on all hosts  | 
|  HDB_log_mnt00001  |  HDB_log_mnt00001  |  N/A  |  /hana/log/HDB/mnt00001  |  Mounted on all hosts  | 
|  HDB_data_mnt00002  |  HDB_data_mnt00002  |  N/A  |  /hana/data/HDB/mnt00002  |  Mounted on all hosts  | 
|  HDB_log_mnt00002  |  HDB_log_mnt00002  |  N/A  |  /hana/log/HDB/mnt00002  |  Mounted on all hosts  | 
|  HDB_data_mnt00003  |  HDB_data_mnt00003  |  N/A  |  /hana/data/HDB/mnt00003  |  Mounted on all hosts  | 
|  HDB_log_mnt00003  |  HDB_log_mnt00003  |  N/A  |  /hana/log/HDB/mnt00003  |  Mounted on all hosts  | 
|  HDB_data_mnt00004  |  HDB_data_mnt00004  |  N/A  |  /hana/data/HDB/mnt00004  |  Mounted on all hosts  | 
|  HDB_log_mnt00004  |  HDB_log_mnt00004  |  N/A  |  /hana/log/HDB/mnt00004  |  Mounted on all hosts  | 
|  HDB_shared  |  HDB_shared  |  shared  |  /hana/shared/HDB  |  Mounted on all hosts  | 
|  HDB_shared  |  HDB_shared  |  usr-sap-host1  |  /usr/sap/HDB  |  Mounted on host 1  | 
|  HDB_shared  |  HDB_shared  |  usr-sap-host2  |  /usr/sap/HDB  |  Mounted on host 2  | 
|  HDB_shared  |  HDB_shared  |  usr-sap-host3  |  /usr/sap/HDB  |  Mounted on host 3  | 
|  HDB_shared  |  HDB_shared  |  usr-sap-host4  |  /usr/sap/HDB  |  Mounted on host 4  | 
|  HDB_shared  |  HDB_shared  |  usr-sap-host5  |  /usr/sap/HDB  |  Mounted on host 5  | 

## File system setup
<a name="filesys-fsx-sap-hana"></a>

After creating an FSx for ONTAP file system, you must complete additional file system setup.

### Set administrative password
<a name="password-filesys-fsx-sap-hana"></a>

If you did not create an administrative password during FSx for ONTAP file system creation, you must set an ONTAP administrative password for the `fsxadmin` user.

The administrative password enables you to access the file system through SSH, the ONTAP CLI, and the ONTAP REST API. Tools such as NetApp SnapCenter require an administrative password.

### Sign in to the management endpoint via SSH
<a name="ssh-filesys-fsx-sap-hana"></a>

Get the DNS name of the management endpoint from the Amazon FSx console. Sign in to the management endpoint via SSH as the `fsxadmin` user, using the administrative password.

```
ssh fsxadmin@management.<file-system-id>.fsx.<aws-region>.amazonaws.com
Password:
```

### Set TCP max transfer size
<a name="tcp-filesys-fsx-sap-hana"></a>

We recommend a TCP max transfer size of 262,144 for your SAP HANA workloads. Elevate the privilege level to *advanced* and use the following command on each SVM.

```
set advanced
nfs modify -vserver <svm> -tcp-max-xfer-size 262144
set admin
```

### Set the lease time on NFSv4 protocol
<a name="nfs-filesys-fsx-sap-hana"></a>

This task applies to SAP HANA scale-out with standby node setup.

The lease period is the time for which ONTAP irrevocably grants a lock to a client. It is set to 30 seconds by default. Setting a shorter lease time enables faster server recovery.

You can change the lease time with the following command.

```
set advanced
nfs modify -vserver <svm> -v4-lease-seconds 10
set admin
```

**Note**  
Starting with SAP HANA 2.0 SPS4, SAP provides parameters to control failover behavior. NetApp recommends using these parameters instead of setting the lease time at the SVM level.

## Disable snapshots
<a name="snaps-fsx-sap-hana"></a>

FSx for ONTAP automatically applies a snapshot policy that takes hourly snapshots of your volumes. Because it is not application-aware, the default policy offers limited value for SAP HANA. We recommend disabling automatic snapshots by setting the snapshot policy to `none`, either during volume creation or with the following command.

```
volume modify -vserver <vserver-name> -volume <volume-name> -snapshot-policy none
```

### Data volume
<a name="data-snaps-fsx-sap-hana"></a>

The automatic FSx for ONTAP snapshots are not application-aware. To obtain a database-consistent snapshot of the SAP HANA data volume, you must first prepare the database by creating a data snapshot. For more information, see [Create a Data Snapshot](https://help.sap.com/docs/SAP_HANA_COCKPIT/afa922439b204e9caf22c78b6b69e4f2/9fd1c8bb3b60455caa93b7491ae6d830.html).

### Log volume
<a name="log-snaps-fsx-sap-hana"></a>

The log volume is automatically backed up every 15 minutes by SAP HANA. An hourly volume snapshot does not offer any additional value in terms of RPO reduction.

The high frequency of changes on the log volume can rapidly increase the total capacity used for snapshots. This can cause the log volume to run out of capacity, making the SAP HANA workload unresponsive.

## Quality of Service (QoS)
<a name="fsx-qos"></a>

Quality of Service (QoS) enables FSx for ONTAP to consistently deliver predictable performance to multiple applications and to reduce the impact of noisy neighbor workloads. When sharing a file system, you can use QoS for consistent performance and reduced interference between competing workloads. For more information, see [Using Quality of Service in Amazon FSx for NetApp ONTAP](https://aws.amazon.com/blogs/storage/using-quality-of-service-in-amazon-fsx-for-netapp-ontap/).

QoS is configured by creating a QoS policy group, setting ceiling or floor performance levels (minimum or maximum performance), and assigning the policy to an SVM or volume. Performance can be specified in either IOPS or throughput.

 **Example** 

You are creating a test system, based on a snapshot from production, on the same file system as your production SAP HANA database. You want to ensure that the test system does not impact the performance of the production system. You create a QoS policy group (`qos-test`) and define an upper limit of 200 MB/s for data and log volumes (`vol-data` and `vol-log`), which share the same SVM (`svm-test`).

```
# Create the QoS policy group
qos policy-group create -policy-group qos-test -vserver svm-test -is-shared false -max-throughput 200MBs

# Assign the QoS policy group to the data and log volumes
volume modify -vserver svm-test -volume vol-data -qos-policy-group qos-test
volume modify -vserver svm-test -volume vol-log -qos-policy-group qos-test
```

## Backup
<a name="fsx-backup"></a>

You must disable automatic backups for FSx for ONTAP volumes and file systems for SAP HANA, because these backups cannot be used to restore SAP HANA to a consistent state. Instead, you can use the SnapCenter plug-in for SAP HANA backups. For more details, see the NetApp documentation: [SnapCenter Plug-in for SAP HANA Database overview](https://docs.netapp.com/us-en/snapcenter/protect-hana/concept_snapcenter_plug_in_for_sap_hana_database_overview.html) and [SAP HANA on Amazon FSx for NetApp ONTAP - Backup and recovery with SnapCenter](https://docs.netapp.com/us-en/netapp-solutions-sap/backup/fsxn-overview.html).

You can also use SnapMirror for SAP HANA backups. For more information, see [How can I optimize SnapMirror performance, and what are the best practices for FSx for ONTAP?](https://repost.aws/knowledge-center/fsx-ontap-optimize-snapmirror) 

For point-in-time resilient restores, we highly recommend storing three days of snapshots on a local disk and replicating older backups via SnapVault to a secondary FSx for ONTAP file system using the capacity pool tier. For more information, see [Managing storage capacity](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/managing-storage-capacity.html#storage-tiers).

# Set up host
<a name="host-setup-fsx-sap-hana"></a>

This section walks you through an example host setup for deploying SAP HANA scale-up and scale-out systems on AWS using Amazon FSx for NetApp ONTAP as the primary storage solution.

You must configure the operating system of your Amazon EC2 instance to use FSx for ONTAP with SAP HANA on AWS.

**Note**  
The following examples apply to an SAP HANA workload with SAP System ID `HDB`. The operating system user is `hdbadm`.

**Topics**
+ [SAP HANA scale-up](fsx-host-scaleup.md)
+ [SAP HANA scale-out](fsx-host-scaleout.md)

# SAP HANA scale-up
<a name="fsx-host-scaleup"></a>

The following section is an example host setup for SAP HANA scale-up deployment with FSx for ONTAP.

**Topics**
+ [Linux kernel parameters](#linux-setup-scaleup)
+ [Network File System (NFS)](#nfs-setup-scaleup)
+ [Create subdirectories](#subdirectories-scaleup)
+ [Create mount points](#mount-points-scaleup)
+ [Mount file systems](#mount-filesys-scaleup)
+ [Data volume partitions](#partitions-scaleup)

## Linux kernel parameters
<a name="linux-setup-scaleup"></a>

1. Create a file `/etc/sysctl.d/91-NetApp-HANA.conf` with the following configuration.

   ```
   net.core.rmem_max = 16777216
   net.core.wmem_max = 16777216
   net.ipv4.tcp_rmem = 4096 131072 16777216
   net.ipv4.tcp_wmem = 4096 16384  16777216
   net.core.netdev_max_backlog = 300000
   net.ipv4.tcp_slow_start_after_idle = 0
   net.ipv4.tcp_no_metrics_save = 1
   net.ipv4.tcp_moderate_rcvbuf = 1
   net.ipv4.tcp_window_scaling = 1
   net.ipv4.tcp_timestamps = 1
   net.ipv4.tcp_sack = 1
   sunrpc.tcp_slot_table_entries = 128
   ```

1. To reduce I/O errors during failover of FSx for ONTAP Single-AZ file systems, including [planned maintenance windows](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/maintenance-windows.html), create an additional file `/etc/sysctl.d/99-fsx-failover.conf`. These parameters optimize NFS client behavior to detect and respond to failover events more quickly.

   ```
   # NFS client optimizations for faster failover detection
   # Replace 'default' with your interface name (e.g., eth0, ens5) to target a specific interface
   net.ipv4.neigh.default.base_reachable_time_ms = 5000
   net.ipv4.neigh.default.delay_first_probe_time = 1
   net.ipv4.neigh.default.ucast_solicit = 0
   net.ipv4.tcp_syn_retries = 3
   ```

   For more information and options, see [Troubleshooting I/O errors and NFS lock reclaim failures](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/nfs-failover-issues.html).

   If these errors occur, in some cases they may cause SAP HANA to perform an emergency shutdown of the indexserver process to protect database consistency.

1. Increase the maximum session slots for NFSv4.1 to 180.

   ```
   echo "options nfs max_session_slots=180" > /etc/modprobe.d/nfsclient.conf
   ```

To activate these changes, run `sysctl -p` for the kernel parameters and reload the NFS module, or reboot the instance during a planned maintenance window (recommended).

## Network File System (NFS)
<a name="nfs-setup-scaleup"></a>

Network File System (NFS) version 4 and higher requires user authentication. You can authenticate with a Lightweight Directory Access Protocol (LDAP) server or with local user accounts.

If you are using local user accounts, the NFSv4 domain must be set to the same value on all Linux servers and SVMs. You can set the domain parameter (`Domain = <domain name>`) in the `/etc/idmapd.conf` file on the Linux hosts.

To identify the domain setting of the SVM, use the following command:

```
nfs show -vserver hana-data -fields v4-id-domain
```

The following is example output:

```
vserver   v4-id-domain
--------- ------------
hana-data ec2.internal
```
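On the client side, the matching domain can be set with a short sketch. It assumes the SVM reported `ec2.internal`, as in the example output above, and that you run it as root.

```shell
# Sketch: align the client's NFSv4 domain with the SVM's v4-id-domain.
# Assumption: the SVM reported "ec2.internal" (see example output above).
DOMAIN=ec2.internal
CONF=/etc/idmapd.conf

if grep -Eq '^[#[:space:]]*Domain[[:space:]]*=' "$CONF" 2>/dev/null; then
    # Replace (or uncomment) an existing Domain line.
    sed -i -E "s/^[#[:space:]]*Domain[[:space:]]*=.*/Domain = ${DOMAIN}/" "$CONF"
else
    # Append a [General] section with the domain if none is present.
    printf '[General]\nDomain = %s\n' "$DOMAIN" >> "$CONF"
fi
```

Apply the same domain on every Linux host that mounts the SAP HANA volumes.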

## Create subdirectories
<a name="subdirectories-scaleup"></a>

Mount the `/hana/shared` volume, create the `shared`, `lss-shared`, and `usr-sap` subdirectories, and then unmount it.

```
mkdir /mnt/tmp
mount -t nfs -o sec=sys,vers=4.1 <svm-shared>:/HDB-shared /mnt/tmp
cd /mnt/tmp
mkdir shared
mkdir lss-shared
mkdir usr-sap
cd ..
umount /mnt/tmp
```

## Create mount points
<a name="mount-points-scaleup"></a>

On single-host systems, create the following mount points on your Amazon EC2 instance.

```
mkdir -p /hana/data/HDB/mnt00001
mkdir -p /hana/log/HDB/mnt00001
mkdir -p /hana/shared
mkdir -p /lss/shared/
mkdir -p /usr/sap/HDB
```

## Mount file systems
<a name="mount-filesys-scaleup"></a>

The file systems that you created must be mounted as NFS file systems on the Amazon EC2 instance. The following table shows example recommended NFS mount options for the different SAP HANA file systems.


|   **File systems**   |   **Common mount options**   |   **Version options**   |   **Transfer size options**   |   **Connection options**   | 
| --- |--- |--- |--- |--- |
|  SAP HANA data  |  rw,bg,hard,timeo=600,noatime,  |  vers=4,minorversion=1,lock,  |  rsize=262144,wsize=262144,  |  nconnect=4  | 
|  SAP HANA log  |  rw,bg,hard,timeo=600,noatime,  |  vers=4,minorversion=1,lock,  |  rsize=262144,wsize=262144,  |  nconnect=2  | 
|  SAP HANA shared  |  rw,bg,hard,timeo=600,noatime,  |  vers=4,minorversion=1,lock,  |  rsize=262144,wsize=262144,  |  nconnect=2  | 
|  SAP HANA binary  |  rw,bg,hard,timeo=600,noatime,  |  vers=4,minorversion=1,lock,  |  rsize=262144,wsize=262144,  |  nconnect=2  | 
|  SAP HANA LSS shared  |  rw,bg,hard,timeo=600,noatime,  |  vers=4,minorversion=1,lock,  |  rsize=262144,wsize=262144,  |  nconnect=2  | 
+ Changes to the `nconnect` parameter take effect only if the NFS file system is unmounted and mounted again.
+ Client systems must have unique host names when accessing FSx for ONTAP. If there are systems with the same name, the second system may not be able to access FSx for ONTAP.

 **Example** 

Add the following lines to `/etc/fstab` to preserve mounted file systems during an instance reboot. You can then run `mount -a` to mount the NFS file systems.

```
<svm-data>:/HDB_data_mnt00001 /hana/data/HDB/mnt00001 nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=4
<svm-log>:/HDB_log_mnt00001 /hana/log/HDB/mnt00001 nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=2
<svm-shared>:/HDB_shared/usr-sap /usr/sap/HDB nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=2
<svm-shared>:/HDB_shared/shared /hana/shared nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=2
```
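To avoid copy-paste errors, the four entries above can be generated with a short script. The `<svm-*>` names remain placeholders for your SVMs' NFS DNS names, and `fstab.hana` is an arbitrary output file name.

```shell
# Sketch: generate the scale-up /etc/fstab entries shown above.
# <svm-data>, <svm-log>, and <svm-shared> are placeholders (assumptions).
SVM_DATA="<svm-data>"
SVM_LOG="<svm-log>"
SVM_SHARED="<svm-shared>"
SID=HDB
OPTS="rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144"

{
  printf '%s:/%s_data_mnt00001 /hana/data/%s/mnt00001 nfs %s,nconnect=4\n' "$SVM_DATA" "$SID" "$SID" "$OPTS"
  printf '%s:/%s_log_mnt00001 /hana/log/%s/mnt00001 nfs %s,nconnect=2\n' "$SVM_LOG" "$SID" "$SID" "$OPTS"
  printf '%s:/%s_shared/usr-sap /usr/sap/%s nfs %s,nconnect=2\n' "$SVM_SHARED" "$SID" "$SID" "$OPTS"
  printf '%s:/%s_shared/shared /hana/shared nfs %s,nconnect=2\n' "$SVM_SHARED" "$SID" "$OPTS"
} > fstab.hana   # review, append to /etc/fstab, then run: mount -a
```

Review the generated file, substitute the real SVM DNS names, and append it to `/etc/fstab` before running `mount -a`.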

## Data volume partitions
<a name="partitions-scaleup"></a>

With SAP HANA 2.0 SPS4, additional data volume partitions allow configuring two or more file system volumes for the DATA volume of an SAP HANA tenant database in a single-host or multi-host system. Data volume partitions enable SAP HANA to scale beyond the size and performance limits of a single volume. You can add additional data volume partitions at any time. For more information, see [Adding additional data volume partitions](https://docs.netapp.com/us-en/netapp-solutions-sap/bp/hana-aff-nfs-add-data-volume-partitions.html).

### Host preparation
<a name="host-preparation-scaleup"></a>

Additional mount points and `/etc/fstab` entries must be created and the new volumes must be mounted.
+ Create additional mount points and assign the required permissions, group, and ownership.

  ```
  mkdir -p /hana/data2/HDB/mnt00001
  chmod -R 777 /hana/data2/HDB/mnt00001
  ```
+ Add additional file systems to `/etc/fstab`.

  ```
  <data2>:/data2 /hana/data2/HDB/mnt00001 nfs <mount options>
  ```
+ Set the permissions to 777. This is required to enable SAP HANA to add a new data volume in the subsequent step. SAP HANA sets more restrictive permissions automatically during data volume creation.

### Enabling data volume partitioning
<a name="enable-partition-scaleup"></a>

To enable data volume partitions, add the following entry to the `global.ini` file of the `SYSTEMDB` configuration.

```
[customizable_functionalities]
persistence_datavolume_partition_multipath = true
```

Alternatively, you can set the parameter at the `SYSTEM` level with the following SQL statement.

```
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
SET ('customizable_functionalities', 'PERSISTENCE_DATAVOLUME_PARTITION_MULTIPATH') = 'true'
WITH RECONFIGURE;
```

**Note**  
You must restart your database after updating the `global.ini` file.

### Adding additional data volume partition
<a name="add-partition-scaleup"></a>

Run the following SQL statement against the tenant database to add an additional data volume partition to your tenant database.

```
ALTER SYSTEM ALTER DATAVOLUME ADD PARTITION PATH '/hana/data2/HDB/mnt00001/';
```

Adding a data volume partition is quick. The new data volume partitions are empty after creation. Data is distributed equally across data volumes over time.

After you configure and mount FSx for ONTAP file systems, you can install and set up your SAP HANA workload on AWS. For more information, see [SAP HANA Environment Setup on AWS](https://docs.aws.amazon.com/sap/latest/sap-hana/std-sap-hana-environment-setup.html).

# SAP HANA scale-out
<a name="fsx-host-scaleout"></a>

The following section is an example host setup for SAP HANA scale-out with standby node on AWS using FSx for ONTAP as the primary storage solution. You can use SAP HANA host auto failover, an automated solution provided by SAP, for recovering from a failure on your SAP HANA host. For more information, see [SAP HANA - Host Auto-Failover](https://www.sap.com/documents/2016/06/f6b3861d-767c-0010-82c7-eda71af511fa.html).

**Topics**
+ [Linux kernel parameters](#linux-setup-scaleout)
+ [Network File System (NFS)](#nfs-setup-scaleout)
+ [Create subdirectories](#subdirectories-scaleout)
+ [Create mount points](#mount-points-scaleout)
+ [Mount file systems](#mount-filesys-scaleout)
+ [Set ownership for directories](#directories-scaleout)
+ [SAP HANA parameters](#parameters-scaleout)
+ [Data volume partitions](#partitions-scaleout)
+ [Testing host auto failover](#failover-scaleout)

## Linux kernel parameters
<a name="linux-setup-scaleout"></a>

1. Create a file `/etc/sysctl.d/91-NetApp-HANA.conf` with the following configuration.

   ```
   net.core.rmem_max = 16777216
   net.core.wmem_max = 16777216
   net.ipv4.tcp_rmem = 4096 131072 16777216
   net.ipv4.tcp_wmem = 4096 16384  16777216
   net.core.netdev_max_backlog = 300000
   net.ipv4.tcp_slow_start_after_idle = 0
   net.ipv4.tcp_no_metrics_save = 1
   net.ipv4.tcp_moderate_rcvbuf = 1
   net.ipv4.tcp_window_scaling = 1
   net.ipv4.tcp_timestamps = 1
   net.ipv4.tcp_sack = 1
   sunrpc.tcp_slot_table_entries = 128
   ```

1. To reduce I/O errors during failover of FSx for ONTAP Single-AZ file systems, including [planned maintenance windows](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/maintenance-windows.html), create an additional file `/etc/sysctl.d/99-fsx-failover.conf`. These parameters optimize NFS client behavior to detect and respond to failover events more quickly.

   ```
   # NFS client optimizations for faster failover detection
   # Replace 'default' with your interface name (e.g., eth0, ens5) to target a specific interface
   net.ipv4.neigh.default.base_reachable_time_ms = 5000
   net.ipv4.neigh.default.delay_first_probe_time = 1
   net.ipv4.neigh.default.ucast_solicit = 0
   net.ipv4.tcp_syn_retries = 3
   ```

   For more information and options, see [Troubleshooting I/O errors and NFS lock reclaim failures](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/nfs-failover-issues.html).

   If these errors occur, in some cases they may cause SAP HANA to perform an emergency shutdown of the indexserver process to protect database consistency.

1. Increase the maximum session slots for NFSv4.1 to 180.

   ```
   echo "options nfs max_session_slots=180" > /etc/modprobe.d/nfsclient.conf
   ```

To activate these changes, run `sysctl -p` for the kernel parameters and reload the NFS module, or reboot the instance during a planned maintenance window (recommended).

## Network File System (NFS)
<a name="nfs-setup-scaleout"></a>

**Important**  
For SAP HANA scale-out systems, FSx for ONTAP only supports NFS version 4.1.

Network File System (NFS) version 4 and higher requires user authentication. You can authenticate with a Lightweight Directory Access Protocol (LDAP) server or with local user accounts.

If you are using local user accounts, the NFSv4 domain must be set to the same value on all Linux servers and SVMs. You can set the domain parameter (`Domain = <domain name>`) in the `/etc/idmapd.conf` file on the Linux hosts.

To identify the domain setting of the SVM, use the following command:

```
nfs show -vserver hana-data -fields v4-id-domain
```

The following is example output:

```
vserver   v4-id-domain
--------- ------------
hana-data ec2.internal
```

## Create subdirectories
<a name="subdirectories-scaleout"></a>

Mount the `/hana/shared` volume and create `shared` and `usr-sap` subdirectories for each host. The following example commands apply to a 4+1 SAP HANA scale-out system (four active hosts and one standby host).

```
mkdir /mnt/tmp
mount -t nfs -o sec=sys,vers=4.1 <svm-shared>:/HDB-shared /mnt/tmp
cd /mnt/tmp
mkdir shared
mkdir lss-shared
mkdir usr-sap-host1
mkdir usr-sap-host2
mkdir usr-sap-host3
mkdir usr-sap-host4
mkdir usr-sap-host5
cd
umount /mnt/tmp
```

## Create mount points
<a name="mount-points-scaleout"></a>

On scale-out systems, create the following mount points on all nodes, including the subordinate and standby nodes. The following example commands apply to a 4+1 SAP HANA scale-out system.

```
mkdir -p /hana/data/HDB/mnt00001
mkdir -p /hana/log/HDB/mnt00001
mkdir -p /hana/data/HDB/mnt00002
mkdir -p /hana/log/HDB/mnt00002
mkdir -p /hana/data/HDB/mnt00003
mkdir -p /hana/log/HDB/mnt00003
mkdir -p /hana/data/HDB/mnt00004
mkdir -p /hana/log/HDB/mnt00004
mkdir -p /hana/shared
mkdir -p /lss/shared
mkdir -p /usr/sap/HDB
```
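The same layout can be created in a loop; this is a sketch for the 4+1 example above, run as root.

```shell
# Sketch: create the scale-out mount points from the example above in a loop.
SID=HDB
for i in 1 2 3 4; do
  mkdir -p "/hana/data/${SID}/mnt0000${i}" "/hana/log/${SID}/mnt0000${i}"
done
mkdir -p /hana/shared /lss/shared "/usr/sap/${SID}"
```

Run the loop on every node, because all data and log mounts must exist on the subordinate and standby hosts as well.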

## Mount file systems
<a name="mount-filesys-scaleout"></a>

The file systems that you created must be mounted as NFS file systems on the Amazon EC2 instances. The following table shows example recommended NFS mount options for the different SAP HANA file systems.


|   **File systems**   |   **Common mount options**   |   **Version options**   |   **Transfer size options**   |   **Connection options**   | 
| --- |--- |--- |--- |--- |
|  SAP HANA data  |  rw,bg,hard,timeo=600,noatime,  |  vers=4,minorversion=1,lock,  |  rsize=262144,wsize=262144,  |  nconnect=4  | 
|  SAP HANA log  |  rw,bg,hard,timeo=600,noatime,  |  vers=4,minorversion=1,lock,  |  rsize=262144,wsize=262144,  |  nconnect=2  | 
|  SAP HANA shared  |  rw,bg,hard,timeo=600,noatime,  |  vers=4,minorversion=1,lock,  |  rsize=262144,wsize=262144,  |  nconnect=2  | 
|  SAP HANA binary  |  rw,bg,hard,timeo=600,noatime,  |  vers=4,minorversion=1,lock,  |  rsize=262144,wsize=262144,  |  nconnect=2  | 
|  SAP HANA LSS shared  |  rw,bg,hard,timeo=600,noatime,  |  vers=4,minorversion=1,lock,  |  rsize=262144,wsize=262144,  |  nconnect=2  | 
+ Changes to the `nconnect` parameter take effect only if the NFS file system is unmounted and mounted again.
+ Client systems must have unique host names when accessing FSx for ONTAP. If there are systems with the same name, the second system may not be able to access FSx for ONTAP.
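
Read across a row, the four columns join into a single `-o` option string. The following is a minimal sketch for the data file system; the `<svm-data>` endpoint and paths are placeholders from the examples below.

```shell
# Compose the full NFS option string for the SAP HANA data file system
# from the four columns of the table above.
common="rw,bg,hard,timeo=600,noatime"
version="vers=4,minorversion=1,lock"
transfer="rsize=262144,wsize=262144"
connection="nconnect=4"
opts="${common},${version},${transfer},${connection}"

# Resulting mount command (placeholder endpoint; do not run as-is):
echo "mount -t nfs -o ${opts} <svm-data>:/HDB_data_mnt00001 /hana/data/HDB/mnt00001"
```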

 **Example - mount shared volumes** 

Add the following lines to `/etc/fstab` on **all** the hosts to preserve mounted file systems during an instance reboot. You can then run `mount -a` to mount the NFS file systems.

```
<svm-data_1>:/HDB_data_mnt00001 /hana/data/HDB/mnt00001 nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=4
<svm-log_1>:/HDB_log_mnt00001 /hana/log/HDB/mnt00001 nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=2
<svm-data_2>:/HDB_data_mnt00002 /hana/data/HDB/mnt00002 nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=4
<svm-log_2>:/HDB_log_mnt00002 /hana/log/HDB/mnt00002 nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=2
<svm-data_3>:/HDB_data_mnt00003 /hana/data/HDB/mnt00003 nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=4
<svm-log_3>:/HDB_log_mnt00003 /hana/log/HDB/mnt00003 nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=2
<svm-data_4>:/HDB_data_mnt00004 /hana/data/HDB/mnt00004 nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=4
<svm-log_4>:/HDB_log_mnt00004 /hana/log/HDB/mnt00004 nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=2
<svm-shared>:/HDB_shared/shared /hana/shared nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=2
<svm-lss-shared>:/HDB_shared/lss-shared /lss/shared nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=2
```

 **Example - mount host-specific volumes** 

Add the host-specific line to `/etc/fstab` of **each** host to preserve mounted file systems during an instance reboot. You can then run `mount -a` to mount the NFS file systems.


| Host | Line | 
| --- | --- | 
|  Host 1  |   `<svm-shared>:/HDB_shared/usr-sap-host1 /usr/sap/HDB nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=2`   | 
|  Host 2  |   `<svm-shared>:/HDB_shared/usr-sap-host2 /usr/sap/HDB nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=2`   | 
|  Host 3  |   `<svm-shared>:/HDB_shared/usr-sap-host3 /usr/sap/HDB nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=2`   | 
|  Host 4  |   `<svm-shared>:/HDB_shared/usr-sap-host4 /usr/sap/HDB nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=2`   | 
|  Host 5 (standby host)  |   `<svm-shared>:/HDB_shared/usr-sap-host5 /usr/sap/HDB nfs rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=2`   | 
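
Because the per-host lines differ only in the subdirectory index, they can be generated with a small helper. The following is a sketch assuming the host-to-index mapping shown in the table; `<svm-shared>` remains a placeholder for the SVM NFS endpoint.

```shell
# Emit the host-specific /etc/fstab line for host index n (1-5).
fstab_line() {
  local n="$1"
  local opts="rw,bg,hard,timeo=600,noatime,vers=4,minorversion=1,lock,rsize=262144,wsize=262144,nconnect=2"
  printf '%s\n' "<svm-shared>:/HDB_shared/usr-sap-host${n} /usr/sap/HDB nfs ${opts}"
}

fstab_line 1   # line for host 1; append to /etc/fstab on that host only
```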

## Set ownership for directories
<a name="directories-scaleout"></a>

Use the following commands to set `hdbadm` user and `sapsys` group ownership on the SAP HANA data and log directories.

```
sudo chown hdbadm:sapsys /hana/data/HDB
sudo chown hdbadm:sapsys /hana/log/HDB
```

## SAP HANA parameters
<a name="parameters-scaleout"></a>

Install your SAP HANA system with the required configuration, and then set the following parameters. For more information on SAP HANA installation, see [SAP HANA Server Installation and Update Guide](https://help.sap.com/docs/SAP_HANA_PLATFORM/2c1988d620e04368aa4103bf26f17727/7eb0167eb35e4e2885415205b8383584.html?version=2.0.04).

**Topics**
+ [Optimal performance](#parameters-performance-scaleout)
+ [NFS lock lease](#parameters-nfslock-scaleout)

### Optimal performance
<a name="parameters-performance-scaleout"></a>

For optimal performance, set the following parameters in the `global.ini` file.

```
[fileio]
max_parallel_io_requests=128
async_read_submit=on
async_write_submit_active=on
async_write_submit_blocks=all
```

The following SQL commands can be used to set these parameters at the `SYSTEM` level.

```
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('fileio', 'max_parallel_io_requests') = '128' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('fileio', 'async_read_submit') = 'on' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('fileio', 'async_write_submit_active') = 'on' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('fileio', 'async_write_submit_blocks') = 'all' WITH RECONFIGURE;
```
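
The four statements differ only in key and value, so they can be rendered from a list of `key=value` pairs. The following is a sketch; the rendered output could then be passed to the SAP HANA `hdbsql` client, though invoking it is outside this example.

```shell
# Render the fileio ALTER SYSTEM statements from key=value pairs.
render_fileio_sql() {
  for p in max_parallel_io_requests=128 async_read_submit=on \
           async_write_submit_active=on async_write_submit_blocks=all; do
    printf "ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('fileio', '%s') = '%s' WITH RECONFIGURE;\n" \
      "${p%%=*}" "${p#*=}"
  done
}

render_fileio_sql
```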

### NFS lock lease
<a name="parameters-nfslock-scaleout"></a>

Starting with SAP HANA 2.0 SPS4, SAP HANA provides parameters to control the failover behavior. We recommend using these parameters instead of setting the lease time at the `SVM` level. The following parameters are configured in the `nameserver.ini` file.


| Section | Parameter | Value | 
| --- | --- | --- | 
|   `failover`   |   `normal_retries`   |  9  | 
|   `distributed_watchdog`   |   `deactivation_retries`   |  11  | 
|   `distributed_watchdog`   |   `takeover_retries`   |  9  | 

The following SQL commands can be used to set these parameters at the `SYSTEM` level.

```
ALTER SYSTEM ALTER CONFIGURATION ('nameserver.ini', 'SYSTEM') SET ('failover', 'normal_retries') = '9' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('nameserver.ini', 'SYSTEM') SET ('distributed_watchdog', 'deactivation_retries') = '11' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('nameserver.ini', 'SYSTEM') SET ('distributed_watchdog', 'takeover_retries') = '9' WITH RECONFIGURE;
```

## Data volume partitions
<a name="partitions-scaleout"></a>

Starting with SAP HANA 2.0 SPS4, data volume partitions allow you to configure two or more file system volumes for the DATA volume of an SAP HANA tenant database in a single-host or multi-host system. Data volume partitions enable SAP HANA to scale beyond the size and performance limits of a single volume. You can add data volume partitions at any time. For more information, see [Adding additional data volume partitions](https://docs.netapp.com/us-en/netapp-solutions-sap/bp/hana-aff-nfs-add-data-volume-partitions.html).

**Topics**
+ [Host preparation](#host-preparation-scaleout)
+ [Enabling data volume partitioning](#enable-partition-scaleout)
+ [Adding an additional data volume partition](#add-partition-scaleout)

### Host preparation
<a name="host-preparation-scaleout"></a>

Additional mount points and `/etc/fstab` entries must be created and the new volumes must be mounted.
+ Create additional mount points and assign the required permissions, group, and ownership.

  ```
  mkdir -p /hana/data2/HDB/mnt00001
  chmod -R 777 /hana/data2/HDB/mnt00001
  ```
+ Add additional file systems to `/etc/fstab`.

  ```
  <data2>:/data2 /hana/data2/HDB/mnt00001 nfs <mount options>
  ```
+ Set the permissions to 777. This is required to enable SAP HANA to add a new data volume in the subsequent step. SAP HANA sets more restrictive permissions automatically during data volume creation.

### Enabling data volume partitioning
<a name="enable-partition-scaleout"></a>

To enable data volume partitions, add the following entry to the `global.ini` file in the `SYSTEMDB` configuration.

```
[customizable_functionalities]
persistence_datavolume_partition_multipath = true
```

The following SQL command can be used to set this parameter at the `SYSTEM` level.

```
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
SET ('customizable_functionalities', 'PERSISTENCE_DATAVOLUME_PARTITION_MULTIPATH') = 'true'
WITH RECONFIGURE;
```

**Note**  
You must restart your database after updating the `global.ini` file.

### Adding an additional data volume partition
<a name="add-partition-scaleout"></a>

Run the following SQL statement against the tenant database to add an additional data volume partition to your tenant database.

```
ALTER SYSTEM ALTER DATAVOLUME ADD PARTITION PATH '/hana/data2/HDB/';
```

Adding a data volume partition is quick. The new data volume partitions are empty after creation. Data is distributed equally across data volumes over time.

## Testing host auto failover
<a name="failover-scaleout"></a>

We recommend testing your SAP HANA host auto failover scenarios. For more information, see [SAP HANA - Host Auto-Failover](https://www.sap.com/documents/2016/06/f6b3861d-767c-0010-82c7-eda71af511fa.html).

Some words have been redacted and replaced by inclusive terms. These words may appear different in your product, system code or table. For additional details, see [Inclusive Language at SAP](https://help.sap.com/docs/TERMINOLOGY/25cbeaaad3c24eba8ea10b579ce81aa1/83a23df24013403ea4c1fdd0107cc0fd.html).

The following table presents the expected results of different test scenarios.


| Scenario | Expected result | 
| --- | --- | 
|  SAP HANA subordinate node failure using `echo b > /proc/sysrq-trigger`   |  Subordinate node failover to standby node  | 
|  SAP HANA coordinator node failure using `HDB` kill  |  SAP HANA service failover to standby node (other candidate for coordinator node)  | 
|  SAP HANA coordinator node failure while other coordinator nodes act as subordinate nodes  |  Coordinator node failover to standby node while other coordinator nodes act as subordinate nodes  | 
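
During these tests, it helps to watch only the summary line of the `landscapeHostConfiguration.py` output. The following is a sketch that extracts that line from captured output; the sample file and its contents here are hypothetical.

```shell
# Extract the overall landscape status from saved output of
# landscapeHostConfiguration.py.
get_overall_status() {
  grep -o 'overall host status: .*' "$1"
}

# Hypothetical usage against a captured output file:
printf 'some table rows...\noverall host status: ok\n' > /tmp/landscape.out
get_overall_status /tmp/landscape.out
```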

**Topics**
+ [SAP HANA subordinate node failure](#scenario1-scaleout)
+ [SAP HANA coordinator node failure](#scenario2-scaleout)
+ [SAP HANA coordinator node failure while other coordinator nodes act as subordinate nodes](#scenario3-scaleout)

### SAP HANA subordinate node failure
<a name="scenario1-scaleout"></a>

Check the status of the landscape before testing.

```
hdbadm@hana:/usr/sap/HDB/HDB00/exe/python_support> python landscapeHostConfiguration.py
| Host    | Host   | Host   | Failover | Remove | Storage   | Storage   | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host    | Host    | Worker  | Worker  |
|         | Active | Status | Status   | Status | Config    | Actual    | Config   | Actual   | Config     | Actual     | Config      | Actual      | Config  | Actual  | Config  | Actual  |
|         |        |        |          |        | Partition | Partition | Group    | Group    | Role       | Role       | Role        | Role        | Roles   | Roles   | Groups  | Groups  |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hana    | yes    | ok     |          |        |         1 |         1 | default  | default  | coordinator 1   | coordinator     | worker      | coordinator      | worker  | worker  | default | default |
| hanaw01 | yes    | ok     |          |        |         2 |         2 | default  | default  | subordinate      | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw02 | yes    | ok     |          |        |         3 |         3 | default  | default  | subordinate      | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw03 | yes    | ok     |          |        |         4 |         4 | default  | default  | coordinator 3   | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw04 | yes    | ignore |          |        |         0 |         0 | default  | default  | coordinator 2   | subordinate      | standby     | standby     | standby | standby | default | -       |

overall host status: ok
```

Run the following command on the subordinate node as `root` to simulate a node crash. In this case, the subordinate node is `hanaw01`.

```
echo b > /proc/sysrq-trigger
```

```
hdbadm@hana:/usr/sap/HDB/HDB00/exe/python_support> python landscapeHostConfiguration.py
| Host    | Host   | Host   | Failover | Remove | Storage   | Storage   | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host    | Host    | Worker  | Worker  |
|         | Active | Status | Status   | Status | Config    | Actual    | Config   | Actual   | Config     | Actual     | Config      | Actual      | Config  | Actual  | Config  | Actual  |
|         |        |        |          |        | Partition | Partition | Group    | Group    | Role       | Role       | Role        | Role        | Roles   | Roles   | Groups  | Groups  |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hana    | yes    | ok     |          |        |         1 |         1 | default  | default  | coordinator 1   | coordinator     | worker      | coordinator      | worker  | worker  | default | default |
| hanaw01 | no     | info   |          |        |         2 |         0 | default  | default  | subordinate      | subordinate      | worker      | standby     | worker  | standby | default | -       |
| hanaw02 | yes    | ok     |          |        |         3 |         3 | default  | default  | subordinate      | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw03 | yes    | ok     |          |        |         4 |         4 | default  | default  | coordinator 3   | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw04 | yes    | info   |          |        |         0 |         2 | default  | default  | coordinator 2   | subordinate      | standby     | subordinate       | standby | worker  | default | default |

overall host status: info
hdbadm@hana:/usr/sap/HDB/HDB00/exe/python_support>
```

### SAP HANA coordinator node failure
<a name="scenario2-scaleout"></a>

Check the status of the landscape before crashing the node.

```
hdbadm@hana:/usr/sap/HDB/HDB00/exe/python_support> python landscapeHostConfiguration.py
| Host    | Host   | Host   | Failover | Remove | Storage   | Storage   | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host    | Host    | Worker  | Worker  |
|         | Active | Status | Status   | Status | Config    | Actual    | Config   | Actual   | Config     | Actual     | Config      | Actual      | Config  | Actual  | Config  | Actual  |
|         |        |        |          |        | Partition | Partition | Group    | Group    | Role       | Role       | Role        | Role        | Roles   | Roles   | Groups  | Groups  |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hana    | yes    | ok     |          |        |         1 |         1 | default  | default  | coordinator 1   | coordinator     | worker      | coordinator      | worker  | worker  | default | default |
| hanaw01 | yes    | ok     |          |        |         2 |         2 | default  | default  | subordinate      | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw02 | yes    | ok     |          |        |         3 |         3 | default  | default  | subordinate      | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw03 | yes    | ok     |          |        |         4 |         4 | default  | default  | coordinator 3   | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw04 | yes    | ignore |          |        |         0 |         0 | default  | default  | coordinator 2   | subordinate      | standby     | standby     | standby | standby | default | -       |

overall host status: ok
hdbadm@hana:/usr/sap/HDB/HDB00/exe/python_support>
```

Use the following command on the coordinator node to simulate a failure by ending the SAP HANA processes. In this case, the coordinator node is `hana`.

```
hdbadm@hana:/usr/sap/HDB/HDB00/exe/python_support> HDB kill
```

```
hdbadm@hana:/usr/sap/HDB/HDB00/exe/python_support> python landscapeHostConfiguration.py
nameserver hana:30001 not responding.
| Host    | Host   | Host   | Failover | Remove | Storage   | Storage   | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host    | Host    | Worker  | Worker  |
|         | Active | Status | Status   | Status | Config    | Actual    | Config   | Actual   | Config     | Actual     | Config      | Actual      | Config  | Actual  | Config  | Actual  |
|         |        |        |          |        | Partition | Partition | Group    | Group    | Role       | Role       | Role        | Role        | Roles   | Roles   | Groups  | Groups  |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hana    | no     | info   |          |        |         1 |         0 | default  | default  | coordinator 1   | subordinate      | worker      | standby     | worker  | standby | default | -       |
| hanaw01 | yes    | ok     |          |        |         2 |         2 | default  | default  | subordinate      | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw02 | yes    | ok     |          |        |         3 |         3 | default  | default  | subordinate      | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw03 | yes    | ok     |          |        |         4 |         4 | default  | default  | coordinator 3   | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw04 | yes    | info   |          |        |         0 |         1 | default  | default  | coordinator 2   | coordinator     | standby     | coordinator      | standby | worker  | default | default |

overall host status: info
hdbadm@hana:/usr/sap/HDB/HDB00/exe/python_support>
```

### SAP HANA coordinator node failure while other coordinator nodes act as subordinate nodes
<a name="scenario3-scaleout"></a>

Check the status of the landscape before testing.

```
hdbadm@hana:/usr/sap/HDB/HDB00/exe/python_support> python landscapeHostConfiguration.py
| Host    | Host   | Host   | Failover | Remove | Storage   | Storage   | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host    | Host    | Worker  | Worker  |
|         | Active | Status | Status   | Status | Config    | Actual    | Config   | Actual   | Config     | Actual     | Config      | Actual      | Config  | Actual  | Config  | Actual  |
|         |        |        |          |        | Partition | Partition | Group    | Group    | Role       | Role       | Role        | Role        | Roles   | Roles   | Groups  | Groups  |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hana    | yes    | ok     |          |        |         1 |         2 | default  | default  | coordinator 1   | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw01 | yes    | info   |          |        |         2 |         0 | default  | default  | subordinate      | subordinate      | worker      | standby     | worker  | standby | default | -       |
| hanaw02 | yes    | ok     |          |        |         3 |         4 | default  | default  | subordinate      | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw03 | yes    | ok     |          |        |         4 |         3 | default  | default  | coordinator 3   | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw04 | yes    | info   |          |        |         0 |         1 | default  | default  | coordinator 2   | coordinator     | standby     | coordinator      | standby | worker  | default | default |

overall host status: info
hdbadm@hana:/usr/sap/HDB/HDB00/exe/python_support>
```

Use the following command on the coordinator node to simulate a failure by ending the SAP HANA processes. In this case, the coordinator node is `hanaw04`.

```
hdbadm@hanaw04:/usr/sap/HDB/HDB00> HDB kill
```

```
hdbadm@hana:/usr/sap/HDB/HDB00/exe/python_support> python landscapeHostConfiguration.py
| Host    | Host     | Host    | Failover         | Remove | Storage   | Storage   | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host    | Host    | Worker  | Worker  |
|         | Active   | Status  | Status           | Status | Config    | Actual    | Config   | Actual   | Config     | Actual     | Config      | Actual      | Config  | Actual  | Config  | Actual  |
|         |          |         |                  |        | Partition | Partition | Group    | Group    | Role       | Role       | Role        | Role        | Roles   | Roles   | Groups  | Groups  |
| ------- | -------- | ------- | ---------------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hana    | starting | warning |                  |        |         1 |         1 | default  | default  | coordinator 1   | coordinator     | worker      | coordinator      | worker  | worker  | default | default |
| hanaw01 | starting | warning |                  |        |         2 |         2 | default  | default  | subordinate      | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw02 | yes      | ok      |                  |        |         3 |         3 | default  | default  | subordinate      | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw03 | yes      | ok      |                  |        |         4 |         4 | default  | default  | coordinator 3   | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw04 | no       | warning | failover to hana |        |         0 |         0 | default  | default  | coordinator 2   | subordinate      | standby     | standby     | standby | standby | default | -       |

overall host status: warning
hdbadm@hana:/usr/sap/HDB/HDB00/exe/python_support> python landscapeHostConfiguration.py
| Host    | Host   | Host   | Failover | Remove | Storage   | Storage   | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host    | Host    | Worker  | Worker  |
|         | Active | Status | Status   | Status | Config    | Actual    | Config   | Actual   | Config     | Actual     | Config      | Actual      | Config  | Actual  | Config  | Actual  |
|         |        |        |          |        | Partition | Partition | Group    | Group    | Role       | Role       | Role        | Role        | Roles   | Roles   | Groups  | Groups  |
| ------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| hana    | yes    | ok     |          |        |         1 |         1 | default  | default  | coordinator 1   | coordinator     | worker      | coordinator      | worker  | worker  | default | default |
| hanaw01 | yes    | ok     |          |        |         2 |         2 | default  | default  | subordinate      | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw02 | yes    | ok     |          |        |         3 |         3 | default  | default  | subordinate      | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw03 | yes    | ok     |          |        |         4 |         4 | default  | default  | coordinator 3   | subordinate      | worker      | subordinate       | worker  | worker  | default | default |
| hanaw04 | no     | ignore |          |        |         0 |         0 | default  | default  | coordinator 2   | subordinate      | standby     | standby     | standby | standby | default | -       |

overall host status: ok
hdbadm@hana:/usr/sap/HDB/HDB00/exe/python_support>
```

# Configure storage (Amazon EFS)
<a name="configure-nfs-for-scale-out-workloads"></a>

**Note**  
If you plan to use FSx for ONTAP storage for your deployment, refer to the SAP HANA on AWS with Amazon FSx for NetApp ONTAP guide, and skip the Amazon EFS configuration steps in this section.

Amazon EFS provides easy-to-set-up, scalable, and highly available shared file systems that can be mounted with the NFSv4 client. For scale-out workloads, we recommend using Amazon EFS for SAP HANA shared and backup volumes. You can choose between different performance options for your file systems depending on your requirements. We recommend starting with the General Purpose and Provisioned Throughput options, with approximately 100 MiB/s to 200 MiB/s throughput. To set up your file systems, do the following:

1. Install the `nfs-utils` package in all the nodes in your scale-out cluster.
   + For RHEL, use `yum install nfs-utils`.
   + For SLES, use `zypper install nfs-utils`.

1. Create two Amazon EFS file systems and target mounts for SAP HANA shared and backup in your target VPC and subnet. For detailed steps, follow the instructions specified in the [AWS documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEFS.html).

1. After the file systems are created, mount the newly created file systems in all the nodes by using the following commands:

   ```
      mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 <EFS DNS Name>:/ /hana/shared
   
      mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 <EFS DNS Name>:/ /backup
   ```
**Note**  
If you have trouble mounting the NFS file systems, you might need to adjust your security groups to allow access to port 2049. For details, see [Security Groups for Amazon EC2 Instances and Mount Targets](https://docs.aws.amazon.com/efs/latest/ug/security-considerations.html#network-access) in the AWS documentation.

1. Add NFS mount entries to the `/etc/fstab` file in all the nodes to automatically mount these file systems during system restart; for example:

   ```
      echo "<EFS DNS Name>:/ /hana/shared nfs nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 0 0" >> /etc/fstab
      echo "<EFS DNS Name>:/ /backup nfs nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 0 0" >> /etc/fstab
   ```

1. Set appropriate permissions and ownership for your target mount points.
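
If a mount hangs, a quick TCP check against port 2049 can narrow down whether security groups are the cause. The following is a sketch; the DNS name shown is hypothetical, and stands in for the `<EFS DNS Name>` placeholder used in the steps above.

```shell
# Test TCP reachability of the NFS port (2049) on an EFS mount target.
check_nfs_port() {
  if timeout 3 bash -c "exec 3<>/dev/tcp/${1}/2049" 2>/dev/null; then
    echo "port 2049 reachable on ${1}"
  else
    echo "port 2049 NOT reachable on ${1} (check security groups)"
  fi
}

check_nfs_port "fs-12345678.efs.us-east-1.amazonaws.com"   # hypothetical DNS name
```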

# Configure ENA Express
<a name="ena-express-sap-hana"></a>

SAP HANA scale-out systems require a minimum of 9 Gbps of single flow network bandwidth between nodes. Amazon EC2 instances now support ENA Express, allowing a single flow bandwidth of up to 25 Gbps between instances, without requiring a cluster placement group. For more information, see [Improve network performance with ENA Express on Linux instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ena-express.html).

## Prerequisites
<a name="prerequisites-ena-express-sap-hana"></a>

Before setting up ENA Express for SAP HANA scale-out systems or SAP NetWeaver workloads, verify the following prerequisites.
+ Verify that your chosen instance type is certified for SAP HANA or supported for SAP NetWeaver.
  + For **SAP HANA scale-out workloads**, you can enable ENA Express on a certified and supported Amazon EC2 instance. For information on supported instances, see [Supported instance types for ENA Express](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ena-express.html#ena-express-supported-instance-types). For information on certified instances, see [Certified and Supported SAP HANA Hardware](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=iaas;ve:23;v:b046dad8-7aa0-457a-ade5-286ebaf88a2f;v:963a354b-c138-4c78-b95f-2bca33f1fc0a). If an Amazon EC2 instance is certified for scale-out but doesn’t support ENA Express, you can continue to use a cluster placement group to obtain up to 10 Gbps of single flow network bandwidth.
  + For **SAP NetWeaver workloads**, you can use ENA Express with all of the SAP certified Amazon EC2 instances that support ENA Express. For more information, see the following resources.
    +  [SAP NetWeaver supported instances](https://docs.aws.amazon.com/sap/latest/general/sap-netweaver-aws-ec2.html) 
    +  [SAP Note 1656099 – SAP Applications on AWS: Supported DB/OS and Amazon EC2 products](https://me.sap.com/notes/1656099/E) 
+ Ensure that you are using the minimum required operating system version with the latest kernel version.
  + RHEL for SAP 8.4 and above
  + SLES 12 SP5 for SAP or SLES 15 SP2 for SAP and above
**Note**  
Verify that your chosen operating system is certified for SAP HANA. For more information, see [Certified and Supported SAP HANA Hardware](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=iaas;ve:23;v:b046dad8-7aa0-457a-ade5-286ebaf88a2f;v:963a354b-c138-4c78-b95f-2bca33f1fc0a).

## Configure operating system
<a name="os-ena-express-sap-hana"></a>

You must configure some of the network-related parameters at the operating system level to ensure that ENA Express works effectively. This includes configuring the correct maximum transmission unit (MTU) required for ENA Express, among other parameters. For more information, see [Prerequisites](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ena-express.html#ena-express-prereq-linux) for ENA Express.

You can also use the [check-ena-express-settings.sh](https://github.com/amzn/amzn-ec2-ena-utilities/blob/main/ena-express/check-ena-express-settings.sh) script to check the operating system prerequisites. You can run the script from AWS Systems Manager against multiple instances simultaneously. To run the script with Systems Manager, your instances must have the AWS Systems Manager Agent (SSM Agent) installed. Use the following steps to run the script.

1. Go to https://console.aws.amazon.com/systems-manager/.

1. Select **Node Management** > **Run Command**.

1. Select **Run a command**, and search for **`AWS-RunRemoteScript`**.

1. Choose **`AWS-RunRemoteScript`**, and input the following parameters.
   +  **Source Type** – GitHub
   +  **Source Info** – `{ "owner": "amzn", "repository": "amzn-ec2-ena-utilities", "path": "ena-express", "getOptions": "branch: main" }` 
   +  **Command Line** – `check-ena-express-settings.sh eth0` 
**Note**  
You must repeat this check for all elastic network interfaces, such as `eth1`, `eth2`, etc.

1. In **Target selection**, specify the instances against which you want to run the script.

1. Select **Run**.

After the command completes, review the output and take corrective action, if required.

## ENA Express settings
<a name="settings-ena-express-sap-hana"></a>

After configuring your operating system, you can enable ENA Express for your target instance through the AWS Management Console or the AWS CLI. For more information, see [Configure ENA Express settings](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ena-express-configure.html). This setting must be repeated on all nodes in a scale-out setup.

After you have successfully enabled ENA Express, you no longer need a cluster placement group to achieve the minimum required single-flow network throughput for SAP HANA scale-out systems. To remove a placement group, see [Working with placement groups](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html#concepts-placement-groups).

## Check SAP HANA scale-out performance
<a name="performance-ena-express-sap-hana"></a>

After enabling ENA Express, you can use [SAP HANA Hardware and Cloud Measurement Tools](https://help.sap.com/docs/HANA_HW_CLOUD_TOOLS/02bb1e64c2ae4de7a11369f4e70a6394/7e878f6e16394f2990f126e639386333.html) to check its performance. For additional details, see [Measure System Configuration and Performance - Scale-out Systems](https://help.sap.com/docs/HANA_HW_CLOUD_TOOLS/02bb1e64c2ae4de7a11369f4e70a6394/61c3401eff904a349032e450cd031a65.html).

# Post Deployment Steps
<a name="post-deployment-steps"></a>

1. Complete the steps required to connect your instance to your corporate directory service, such as Microsoft Active Directory, if needed.

1. Set up any monitoring required for your environment.

1. Set up a CloudWatch alarm and Amazon EC2 automatic recovery to automatically recover your instance from hardware failures. For details, see [Recover Your Instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-recover.html) in the AWS documentation. You can also refer to the Knowledge Center [video](https://aws.amazon.com/premiumsupport/knowledge-center/automatic-recovery-ec2-cloudwatch/) for detailed instructions.
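As a hedged sketch, an automatic recovery alarm can be created with the AWS CLI. The alarm name, instance ID, and Region are assumptions for illustration.

```shell
# Recover the instance when the system status check fails
# for two consecutive one-minute periods
aws cloudwatch put-metric-alarm \
  --alarm-name "hana-prod-auto-recover" \
  --namespace AWS/EC2 \
  --metric-name StatusCheckFailed_System \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Maximum \
  --period 60 \
  --evaluation-periods 2 \
  --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions "arn:aws:automate:us-east-1:ec2:recover"
```

Adjust the Region in the `arn:aws:automate` action to match where your instance runs.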
**Note**  
Automatic recovery is not supported for Amazon EC2 instances running on Dedicated Hosts.

1. Create an AMI of your newly deployed system to take a full backup of your instance. For details, see [Create an AMI from an Amazon EC2 Instance](https://docs.aws.amazon.com/toolkit-for-visual-studio/latest/user-guide/tkv-create-ami-from-instance.html) in the AWS documentation.
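A minimal sketch of creating the AMI with the AWS CLI follows; the instance ID and image name are placeholders.

```shell
# Create an AMI from the instance. By default Amazon EC2 reboots the
# instance to ensure filesystem consistency; for a running SAP HANA
# system, plan this during a maintenance window or stop the database first.
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "sap-hana-post-install-$(date +%Y%m%d)" \
  --description "Full backup after initial SAP HANA deployment"
```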

1. If you have deployed an SAP HANA scale-out cluster, consider adding additional elastic network interfaces and security groups to logically separate network traffic for client, inter-node, and optional SAP HANA System Replication (HSR) communications. For details, see the [SAP HANA on AWS Operations Guide](https://docs.aws.amazon.com/sap/latest/sap-hana/sap-hana-on-aws-operations.html).
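As an illustration of adding a dedicated network path, the following commands create and attach an additional elastic network interface. The subnet, security group, instance, and interface IDs are hypothetical; see the operations guide for the recommended network layout.

```shell
# Create an ENI in a subnet reserved for inter-node traffic,
# with a security group that permits only that traffic
aws ec2 create-network-interface \
  --subnet-id subnet-0123456789abcdef0 \
  --groups sg-0123456789abcdef0 \
  --description "SAP HANA inter-node communication"

# Attach the new ENI to the instance as device index 1
aws ec2 attach-network-interface \
  --network-interface-id eni-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 \
  --device-index 1
```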