

# SAP HANA on AWS Operations Guide
<a name="sap-hana-on-aws-operations"></a>

 *SAP specialists, Amazon Web Services* 

 *Last updated: February 2022* 

Amazon Web Services enables you to run SAP HANA systems of various sizes on a choice of supported operating systems. Running SAP systems on AWS is very similar to running SAP systems in your data center. To an SAP Basis or NetWeaver administrator, there are minimal differences between the two environments. There are a number of AWS Cloud considerations relating to security, storage, compute configurations, management, and monitoring that will help you get the most out of your SAP HANA implementation on AWS.

This technical article provides the best practices for deployment, operations, and management of SAP HANA systems on AWS. The target audience is SAP Basis and NetWeaver administrators who have experience running SAP HANA systems in an on-premises environment and want to run their SAP HANA systems on AWS.

**Note**  
You must have SAP portal access to view the SAP Notes. For more information, see the [SAP Support website](https://support.sap.com/en/my-support/knowledge-base.html).

## About this Guide
<a name="hana-ops-about"></a>

This guide is part of a content series that provides detailed information about hosting, configuring, and using SAP technologies in the AWS Cloud. For the other guides in the series, ranging from overviews to advanced topics, see the [SAP on AWS Technical Documentation home page](https://aws.amazon.com/sap/docs/).

# Introduction
<a name="hana-ops-intro"></a>

This guide provides best practices for operating SAP HANA systems that have been deployed on AWS. This guide is not intended to replace any of the standard SAP documentation. See the following SAP guides and notes:
+  [SAP Library (help.sap.com) - SAP HANA Administration Guide](https://help.sap.com/hana/SAP_HANA_Administration_Guide_en.pdf) 
+  [SAP installation guides](https://help.sap.com/docs) (SAP portal access required)
+  [SAP notes](https://me.sap.com/notes) (SAP portal access required)

This guide assumes that you have a basic knowledge of AWS. If you are new to AWS, see the following on the AWS website before continuing:
+  [AWS Getting Started Resource Center](https://aws.amazon.com/getting-started/) 
+  [What is Amazon EC2?](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html) 

# Administration
<a name="hana-ops-administration"></a>

This section provides guidance on common administrative tasks required to operate an SAP HANA system, including information about starting, stopping, and cloning systems.

## Starting and Stopping EC2 Instances Running SAP HANA Hosts
<a name="hana-ops-starting-and-stopping-ec2"></a>

At any time, you can stop one or multiple SAP HANA hosts. Before stopping the EC2 instance of an SAP HANA host, first stop SAP HANA on that instance.

When you resume the instance, it will automatically start with the same IP address, network, and storage configuration as before. You also have the option of using the [EC2 Scheduler](https://aws.amazon.com/answers/infrastructure-management/ec2-scheduler/) to schedule starts and stops of your EC2 instances. The EC2 Scheduler relies on the native shutdown and start-up mechanisms of the operating system. These native mechanisms will invoke the orderly shutdown and startup of your SAP HANA instance. Here is an architectural diagram of how the EC2 Scheduler works:

 **Figure 1: EC2 Scheduler** 

![\[EC2 Scheduler\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/hana-ops-ec2-scheduler.jpg)
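As an illustrative sketch of an orderly stop and restart from the command line (the instance ID `i-0123456789abcdef0`, SID `HDB`, instance number `00`, and `hdbadm` user are placeholder assumptions, not values from this guide):

```shell
# 1. Stop SAP HANA cleanly before stopping the EC2 instance
#    (run sapcontrol as the <sid>adm OS user).
sudo -u hdbadm /usr/sap/HDB/HDB00/exe/sapcontrol -nr 00 -function StopSystem HDB

# 2. Stop the EC2 instance and wait until it is fully stopped.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0

# 3. Later, start the instance again; storage and private IP
#    configuration are retained across the stop/start cycle.
aws ec2 start-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-running --instance-ids i-0123456789abcdef0
```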


## Tagging SAP Resources on AWS
<a name="hana-ops-tagging"></a>

Tagging your SAP resources on AWS can significantly simplify identification, security, manageability, and billing of those resources. You can tag your resources using the AWS Management Console or by using the `create-tags` functionality of the AWS Command Line Interface (AWS CLI). This table lists some example tag names and tag values:



| Tag name | Tag value | 
| --- | --- | 
|   **Name**   |  SAP server’s virtual (host) name  | 
|   **Environment**   |  SAP server’s landscape role; for example: SBX, DEV, QAT, STG, PRD.  | 
|   **Application**   |  SAP solution or product; for example: ECC, CRM, BW, PI, SCM, SRM, EP  | 
|   **Owner**   |  SAP point of contact  | 
|   **Service level**   |  Known uptime and downtime schedule  | 

After you have tagged your resources, you can apply specific security restrictions such as access control, based on the tag values. Here is an example of such a policy from the [AWS Security blog](https://aws.amazon.com/blogs/security/how-to-automatically-tag-amazon-ec2-resources-in-response-to-api-events/):

```
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "LaunchEC2Instances",
         "Effect": "Allow",
         "Action": [
            "ec2:Describe*",
            "ec2:RunInstances"
         ],
         "Resource": [
            "*"
         ]
      },
      {
         "Sid": "AllowActionsIfYouAreTheOwner",
         "Effect": "Allow",
         "Action": [
            "ec2:StopInstances",
            "ec2:StartInstances",
            "ec2:RebootInstances",
            "ec2:TerminateInstances"
         ],
         "Condition": {
            "StringEquals": {
               "ec2:ResourceTag/PrincipalId": "${aws:userid}"
            }
         },
         "Resource": [
            "*"
         ]
      }
   ]
}
```

This AWS Identity and Access Management (IAM) policy grants permissions based on a tag value: the stop, start, reboot, and terminate actions are allowed only when the caller's user ID matches the instance's `PrincipalId` tag. For more information on tagging, see the [AWS documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html) and [AWS blog](https://aws.amazon.com/blogs/aws/new-aws-resource-tagging-api/).

## Monitoring
<a name="hana-ops-monitoring"></a>

You can use various AWS, SAP, and third-party solutions to monitor your SAP workloads. Here are some of the core AWS monitoring services:
+  [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/) – CloudWatch is a monitoring service for AWS resources. For SAP workloads, it is used to collect resource utilization logs and to create alarms that automatically react to changes in AWS resources.
+  [AWS CloudTrail](https://aws.amazon.com/cloudtrail/) – CloudTrail keeps track of all API calls made within your AWS account. It records key details about each API call and can be useful for auditing activity on your SAP resources.

Configuring CloudWatch detailed monitoring for SAP resources is mandatory for getting AWS and SAP support. You can use native AWS monitoring services in a complementary fashion with the SAP Solution Manager. You can find third-party monitoring tools in [AWS Marketplace](https://aws.amazon.com/marketplace).
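As an illustrative sketch (the instance ID, threshold, and SNS topic ARN are placeholder assumptions), a CloudWatch alarm on CPU utilization for a HANA host can be created from the AWS CLI:

```shell
# Alarm when average CPU of the HANA instance exceeds 90% for
# three consecutive 5-minute periods; notify an SNS topic.
aws cloudwatch put-metric-alarm \
  --alarm-name hana-prd-high-cpu \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 3 \
  --threshold 90 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:111122223333:sap-alerts
```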

## Automation
<a name="hana-ops-automation"></a>

 AWS offers multiple options for scripting your resources programmatically so that you can operate or scale them in a predictable and repeatable manner. You can use AWS CloudFormation to automate and operate SAP systems on AWS. Here are some examples for automating your SAP environment on AWS:


|   **Area**   |   **Activities**   |   **AWS services**   | 
| --- | --- | --- | 
|   **Infrastructure deployment**   |  Provision a new SAP environment; SAP system cloning  |   [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/GettingStarted.html)   [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html)   | 
|   **Capacity management**   |  Automate scale-up/scale-out of SAP application servers  |   [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/getting-started.html)   [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/GettingStarted.html)   | 
|   **Operations**   |  SAP backup automation (see the [backup example](#hana-ops-backup-example)); performance monitoring and visualization  |   [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html)   [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html)   | 

## Patching
<a name="hana-ops-patching"></a>

There are two ways for you to patch your SAP HANA database, with options for minimizing cost and/or downtime. With AWS, you can provision additional servers as needed to minimize downtime for patching in a cost-effective manner. You can also minimize risks by creating on-demand copies of your existing production SAP HANA databases for lifelike production readiness testing.

This table summarizes the tradeoffs of the two patching methods:


| Patching method | Benefits | Tradeoff | Technologies available | 
| --- | --- | --- | --- | 
|   **Patch an existing server**   |  No costs for additional on-demand instances Lowest levels of relative complexity and setup tasks involved  |  Need to patch the existing operating system and database Longest downtime to the existing server and database  |  Native OS patching tools [Patch Manager](https://aws.amazon.com/ec2/systems-manager/patch-manager/)   [Native SAP HANA patching tools](https://help.sap.com/viewer/2c1988d620e04368aa4103bf26f17727/2.0.00/en-US/9731208b85fa4c2fa68c529404ffa75a.html)   | 
|   **Provision and patch a new server**   |  Leverage latest AMIs (only database patch is required) Shortest downtime on the existing server and database Option to patch and test the operating system and database separately or together  |  More costs for additional on-demand instances More complexity and setup tasks involved  |   [Amazon Machine Image (AMI)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html)   [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-ec2-launch.html)   [AWS CloudFormation](https://aws.amazon.com/cloudformation/)   [SAP HANA System Replication](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.00/en-US/38ad53e538ad41db9d12d22a6c8f2503.html)   [SAP HANA System Cloning](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.00/en-US/c622d640e47e4c0ebca8cbe74ff9550a.html)   [SAP HANA backups](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.00/en-US/ea70213a0e114ec29724e4a10b6bb176.html)  SAP Notes:  [1984882](https://launchpad.support.sap.com/%23/notes/1984882/E) - Using HANA System Replication for Hardware Exchange with minimum/zero downtime  [1913302](https://launchpad.support.sap.com/%23/notes/1913302/E) - HANA: Suspend DB connections for short maintenance tasks  | 

The first method (patch an existing server) involves patching the operating system (OS) and database (DB) components of your SAP HANA server. The goal of this method is to minimize any additional server costs and to avoid any tasks needed to set up additional systems or tests. This method may be most appropriate if you have a well-defined patching process and are satisfied with your current downtime and costs. With this method you must use the correct OS update process and tools for your Linux distribution. See this [SUSE blog](https://www.suse.com/communities/blog/upgrading-running-demand-instances-public-cloud/) and [Red Hat FAQ](https://aws.amazon.com/partners/redhat/faqs/), or check each vendor’s documentation for their specific processes and procedures.

In addition to patching tools provided by our Linux partners, AWS offers a [free of charge patching service](https://aws.amazon.com/about-aws/whats-new/2016/12/amazon-ec2-systems-manager-now-offers-patch-management/) called [Patch Manager](https://aws.amazon.com/ec2/systems-manager/patch-manager/). Patch Manager is an automated tool that helps you simplify your OS patching process. You can scan your EC2 instances for missing patches and automatically install them, select the timing for patch rollouts, control instance reboots, and perform many other tasks. You can also define auto-approval rules for patches with an added ability to block or allow specific patches, control how the patches are deployed on the target instances (for example, stop services before applying the patch), and schedule the automatic rollout through maintenance windows.

The second method (provision and patch a new server) involves provisioning a new EC2 instance that will receive a copy of your source system and database. The goal of this method is to minimize downtime, minimize risks (by having production data and executing production-like testing), and have repeatable processes. This method may be most appropriate if you are looking for higher degrees of automation to enable these goals and are comfortable with the trade-offs. This method is more complex and has many more options to fit your requirements. Certain options are not exclusive and can be used together. For example, your AWS CloudFormation template can include the latest Amazon Machine Images (AMIs), which you can then use to automate the provisioning, setup, and configuration of a new SAP HANA server.

For more information, see [Automated patching](https://docs.aws.amazon.com/sap/latest/sap-hana/automated-patching.html).

### Backup and Recovery
<a name="hana-ops-backup-recovery"></a>

This section provides an overview of the AWS services used in the backup and recovery of SAP HANA systems and provides an example backup and recovery scenario. This guide does not include detailed instructions on how to execute database backups using native HANA backup and recovery features or third-party backup tools. Refer to the standard OS, SAP, and SAP HANA documentation or the documentation provided by backup software vendors. In addition, backup schedules, frequency, and retention periods might vary with your system type and business requirements. See the following standard SAP documentation for guidance on these topics.


| SAP Note | Description | 
| --- | --- | 
|   [1642148](https://me.sap.com/notes/1642148)   |  FAQ: SAP HANA Database Backup & Recovery  | 
|   [1821207](https://me.sap.com/notes/1821207)   |  Determining required recovery files  | 
|   [1869119](https://me.sap.com/notes/1869119)   |  Checking backups using hdbbackupcheck  | 
|   [1873247](https://me.sap.com/notes/1873247)   |  Checking recoverability with hdbbackupdiag --check  | 
|   [1651055](https://me.sap.com/notes/1651055)   |  Scheduling SAP HANA Database Backups in Linux  | 
|   [2484177](https://me.sap.com/notes/2484177)   |  Scheduling backups for multi-tenant SAP HANA Cockpit 2.0  | 

### Creating an Image of an SAP HANA System
<a name="hana-ops-creating-image"></a>

You can use the AWS Management Console or the command line to create your own AMI based on an existing instance. For more information, see the [AWS documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-an-ami-ebs.html). You can use an AMI of your SAP HANA instance for the following purposes:
+  **To create a full offline system backup** (of the OS, /usr/sap, HANA shared, backup, data, and log files) – AMIs are automatically saved in multiple Availability Zones within the same AWS Region.
+  **To move a HANA system from one AWS Region to another** – You can create an image of an existing EC2 instance and move it to another AWS Region by following the instructions in the [AWS documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html). When the AMI has been copied to the target AWS Region, you can launch the new instance there.
+  **To clone an SAP HANA system** – You can create an AMI of an existing SAP HANA system to create an exact clone of the system. See the next section for additional information.

**Note**  
See [Restoring SAP HANA Backups and Snapshots](#hana-ops-restoring-backups-snapshots) later in this whitepaper to view the recommended restoration steps for production environments.

**Tip**  
The SAP HANA system should be in a consistent state before you create an AMI. To do this, stop the SAP HANA instance before creating the AMI, or follow the instructions in [SAP Note 1703435](https://me.sap.com/notes/1703435).
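As a sketch, creating and copying an AMI from the AWS CLI might look like the following; the instance ID, AMI ID, names, and Regions are placeholder assumptions:

```shell
# Create an image of the (consistent or stopped) HANA instance.
aws ec2 create-image --instance-id i-0123456789abcdef0 \
  --name "hana-prd-backup-2022-02-01" \
  --description "Offline HANA system backup"

# Optionally copy the resulting AMI to another Region, for example
# to move or clone the system there.
aws ec2 copy-image --source-image-id ami-0abcd1234abcd1234 \
  --source-region us-east-1 --region us-west-2 \
  --name "hana-prd-backup-2022-02-01"
```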

### AWS Services and Components for Backup Solutions
<a name="hana-ops-aws-services-for-backup"></a>

 AWS provides a number of services and options for storage and backup, including Amazon Simple Storage Service (Amazon S3), AWS Identity and Access Management (IAM), and S3 Glacier.

#### Amazon S3
<a name="hana-ops-s3"></a>

 [Amazon S3](https://aws.amazon.com/s3/) is the center of any SAP backup and recovery solution on AWS. It provides a highly durable storage infrastructure designed for mission-critical and primary data storage. It is designed to provide 99.999999999% durability and 99.99% availability over a given year. See the [Amazon S3 documentation](https://aws.amazon.com/documentation/s3/) for detailed instructions on how to create and configure an S3 bucket to store your SAP HANA backup files.

#### IAM
<a name="hana-ops-iam"></a>

With [IAM](https://aws.amazon.com/iam/), you can securely control access to AWS services and resources for your users. You can create and manage AWS users and groups and use permissions to grant user access to AWS resources. You can create roles in IAM and manage permissions to control which operations can be performed by the entity, or AWS service, that assumes the role. You can also define which entity is allowed to assume the role.

During the deployment process, AWS CloudFormation creates an IAM role that allows access to get objects from and/or put objects into Amazon S3. That role is subsequently assigned, at launch time, to each EC2 instance hosting SAP HANA master and worker nodes.

 **Figure 2: IAM role example** 

![\[IAM role example\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/hana-ops-iam-example.png)


To ensure security that applies the principle of least privilege, permissions for this role are limited only to actions that are required for backup and recovery.

```
{
   "Statement": [
      {
         "Resource": "arn:aws:s3:::<amzn-s3-demo-bucket>/*",
         "Action": [
            "s3:GetObject",
            "s3:PutObject",
            "s3:DeleteObject",
            "s3:ListBucket",
            "s3:Get*",
            "s3:List*"
         ],
         "Effect": "Allow"
      },
      {
         "Resource": "*",
         "Action": [
            "s3:List*",
            "ec2:Describe*",
            "ec2:AttachNetworkInterface",
            "ec2:AttachVolume",
            "ec2:CreateTags",
            "ec2:CreateVolume",
            "ec2:RunInstances",
            "ec2:StartInstances"
         ],
         "Effect": "Allow"
      }
   ]
}
```

To add functions later, you can use the AWS Management Console to modify the IAM role.

#### S3 Glacier
<a name="hana-ops-glacier"></a>

 [S3 Glacier](https://aws.amazon.com/glacier) is an extremely low-cost service that provides secure and durable storage for data archiving and backup. S3 Glacier is optimized for data that is infrequently accessed and provides multiple options such as expedited, standard, and bulk methods for data retrieval. With standard and bulk retrievals, data is available in 3-5 hours or 5-12 hours, respectively.

However, with expedited retrieval, S3 Glacier provides you with an option to retrieve data in 3-5 minutes, which can be ideal for occasional urgent requests. With S3 Glacier, you can reliably store large or small amounts of data for as little as \$0.01 per gigabyte per month, a significant savings compared to on-premises solutions. You can use [lifecycle policies](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-archival.html), as explained in the *Amazon S3 Developer Guide*, to push SAP HANA backups to S3 Glacier for long-term archiving.
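For example, a lifecycle rule that transitions backup objects to S3 Glacier after 30 days could be applied as follows; the bucket name, `bkps/` prefix, and retention timings are illustrative assumptions:

```shell
# Transition objects under bkps/ to Glacier after 30 days and
# expire them after one year (placeholder values).
aws s3api put-bucket-lifecycle-configuration --bucket <amzn-s3-demo-bucket> \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "archive-hana-backups",
      "Filter": {"Prefix": "bkps/"},
      "Status": "Enabled",
      "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
      "Expiration": {"Days": 365}
    }]
  }'
```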

### Backup Destination
<a name="hana-ops-backup-destination"></a>

The primary difference between backing up SAP systems on AWS compared with traditional on-premises infrastructure is the backup destination. Tape is the typical backup destination used with on-premises infrastructure. On AWS, backups are stored in Amazon S3. Amazon S3 has many benefits over tape, including the ability to automatically store backups offsite from the source system, since data in Amazon S3 is replicated across multiple facilities within the AWS Region.

SAP HANA systems provisioned with AWS Launch Wizard for SAP are configured with a set of EBS volumes to be used as an initial local backup destination. HANA backups are first stored on these local EBS volumes and then copied to Amazon S3 for long-term storage.

You can use SAP HANA Studio, SQL commands, or the DBA Cockpit to start or schedule SAP HANA data backups. Log backups are written automatically unless disabled. The /backup file system is configured as part of the deployment process.
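For example, assuming an `hdbuserstore` key named `BACKUP` has been configured, a data backup into the /backup file system can be started with a single `hdbsql` call; the SID `HDB`, the `hdbadm` user, and the backup prefix are placeholder assumptions:

```shell
# Run as the <sid>adm OS user; the file prefix below is illustrative.
sudo -u hdbadm hdbsql -U BACKUP \
  "BACKUP DATA USING FILE ('/backup/data/HDB/COMPLETE_2022_02_01')"
```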

 **Figure 3: SAP HANA file system layout** 

![\[SAP HANA file system layout\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/hana-ops-fs-layout.jpg)


The SAP HANA global.ini configuration file has been customized for database backups to go directly to `/backup/data/<SID>`, while automatic log archival files go to `/backup/log/<SID>`.

```
[persistence]
basepath_shared = no
savepoint_intervals = 300
basepath_datavolumes = /hana/data/<SID>
basepath_logvolumes = /hana/log/<SID>
basepath_databackup = /backup/data/<SID>
basepath_logbackup = /backup/log/<SID>
```

Some third-party backup tools like Commvault, NetBackup, and IBM Tivoli Storage Manager (IBM TSM) are integrated with Amazon S3 capabilities and can be used to trigger and save SAP HANA backups directly into Amazon S3 without needing to store the backups on EBS volumes first.

### AWS CLI
<a name="hana-ops-cli"></a>

The [AWS Command Line Interface](https://aws.amazon.com/cli/) (AWS CLI), which is a unified tool to manage AWS services, is installed as part of the base image. Using various commands, you can control multiple AWS services from the command line directly and automate them through scripts. Access to your S3 bucket is available through the IAM role assigned to the instance (as [discussed earlier](#hana-ops-iam)). Using the AWS CLI commands for Amazon S3, you can list the contents of the previously created bucket, back up files, and restore files, as explained in the [AWS CLI documentation](https://docs.aws.amazon.com/cli/latest/reference/s3/).

```
imdbmaster:/backup # aws s3 ls --region=us-east-1 s3://node2-hana-s3bucket-gcynh5v2nqs3

Bucket: node2-hana-s3bucket-gcynh5v2nqs3
Prefix:
      LastWriteTime      Length      Name
      -------------      ------      ----
```

### Backup Example
<a name="hana-ops-backup-example"></a>

Here are the steps you can take for a typical backup task:

1. In the SAP HANA Backup Editor, choose **Open Backup Wizard**. You can also open the Backup Wizard by right-clicking the system that you want to back up and choosing **Back Up**.

   1. Select the destination type **File**. This will back up the database to files in the specified file system.

   1. Specify the backup destination (`/backup/data/<SID>`) and the backup prefix.

       **Figure 4: SAP HANA backup example**   
![\[SAP HANA backup example\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/hana-ops-backup-example.jpg)

   1. Choose **Next** and then **Finish**. A confirmation message will appear when the backup is complete.

   1. Verify that the backup files are available at the OS level. The next step is to push or synchronize the backup files from the /backup file system to Amazon S3 by using the [aws s3 sync](https://docs.aws.amazon.com/cli/latest/reference/s3/sync.html) command.

      ```
      imdbmaster:/ # aws s3 sync backup s3://node2-hana-s3bucket-gcynh5v2nqs3 --region=us-east-1
      ```

1. Use the AWS Management Console to verify that the files have been pushed to Amazon S3. You can also use the [aws s3 ls](https://docs.aws.amazon.com/cli/latest/reference/s3/ls.html) command shown previously in the [AWS Command Line Interface section](#hana-ops-cli).

    **Figure 5: Amazon S3 bucket contents after backup**   
![\[Amazon S3 bucket contents after backup\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/hana-ops-bucket-contents.jpg)
**Tip**  
The `aws s3 sync` command will only upload new files that don’t exist in Amazon S3. Use a periodically scheduled `cron` job to sync, and then delete files that have been uploaded. See [SAP Note 1651055](https://me.sap.com/notes/1651055) for scheduling periodic backup jobs in Linux, and extend the supplied scripts with `aws s3 sync` commands.
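A minimal sketch of such a scheduled sync (the schedule, paths, and bucket name are placeholder assumptions) could look like this `cron` entry:

```shell
# /etc/cron.d/hana-backup-sync -- illustrative: sync new backup
# files to S3 at the top of every hour.
0 * * * * root /usr/local/bin/aws s3 sync /backup s3://node2-hana-s3bucket-gcynh5v2nqs3 --region us-east-1
```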

### Scheduling and Executing Backups Remotely
<a name="hana-ops-remote-backups"></a>

You can use the [AWS Systems Manager Run Command](https://docs.aws.amazon.com/systems-manager/latest/userguide/execute-remote-commands.html), along with Amazon CloudWatch Events, to schedule backups of your SAP HANA system remotely without the need to log in to the EC2 instances. You can also use `cron` or any other instance-level scheduling mechanism.

The Systems Manager Run Command lets you remotely and securely manage the configuration of your managed instances. A managed instance is any EC2 instance or on-premises machine in your hybrid environment that has been configured for Systems Manager. The Run Command enables you to automate common administrative tasks and perform ad hoc configuration changes at scale. You can use the Run Command from the Amazon EC2 console, the AWS CLI, Windows PowerShell, or the AWS SDKs.

#### Systems Manager Prerequisites
<a name="hana-ops-sm-prereq"></a>

Systems Manager has the following prerequisites.


|  |  | 
| --- |--- |
|   **Supported operating system (Linux)**   |  Instances must run a supported version of Linux. 64-bit and 32-bit systems: Amazon Linux 2014.09, 2014.03 or later; Ubuntu Server 16.04 LTS, 14.04 LTS, or 12.04 LTS; Red Hat Enterprise Linux (RHEL) 6.5 or later; CentOS 6.3 or later. 64-bit systems only: Amazon Linux 2015.09, 2015.03 or later; Red Hat Enterprise Linux (RHEL) 7.x or later; CentOS 7.1 or later; SUSE Linux Enterprise Server (SLES) 12 or higher. For the latest information about supported operating systems, see the [AWS Systems Manager documentation](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-prereqs.html#prereqs-operating-systems).  | 
|   **Roles for Systems Manager**   |  Systems Manager requires an IAM role for instances that will process commands and a separate role for users who are executing commands. Both roles require permission policies that enable them to communicate with the Systems Manager API. You can choose to use Systems Manager managed policies or you can create your own roles and specify permissions. For more information, see [Configuring Security Roles for Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-access.html) in the AWS documentation. If you are configuring on-premises servers or virtual machines (VMs) that you want to configure using Systems Manager, you must also configure an IAM service role. For more information, see [Create an IAM Service Role](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-managedinstances.html#sysman-service-role) in the AWS documentation.  | 
|   **SSM Agent (EC2 Linux instances)**   |   AWS Systems Manager Agent (SSM Agent) processes Systems Manager requests and configures your machine as specified in the request. You must download and install SSM Agent to your EC2 Linux instances. For more information, see [Installing SSM Agent on Linux](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent.html#sysman-install-ssm-agent) in the AWS documentation.  | 

To schedule remote backups, follow these high-level steps:

1. Install and configure SSM Agent on the EC2 instance. For detailed installation steps, see the [AWS Systems Manager documentation](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent.html#sysman-install-ssm-agent).

1. Provide SSM access to the EC2 instance role that is assigned to the SAP HANA instance. For detailed information on how to assign SSM access to a role, see the [AWS Systems Manager documentation](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-access.html).

1. Create an SAP HANA backup script. You can use the following sample script as a starting point and modify it to meet your requirements.

   ```
   #!/bin/sh
   set -x
   S3Bucket_Name=<Name of the S3 bucket where backup files will be copied>
   TIMESTAMP=$(date +\%F\_%H\%M)
   # send all script output to a timestamped log file
   exec 1>/backup/data/${SAPSYSTEMNAME}/${TIMESTAMP}_backup_log.out 2>&1
   echo "Starting to take backup of Hana Database and upload the backup files to S3"
   echo "Backup Timestamp for $SAPSYSTEMNAME is $TIMESTAMP"
   BACKUP_PREFIX=${SAPSYSTEMNAME}_${TIMESTAMP}
   echo $BACKUP_PREFIX
   # source the HANA environment
   source $DIR_INSTANCE/hdbenv.sh
   # execute the backup using the hdbuserstore key BACKUP
   hdbsql -U BACKUP "backup data using file ('$BACKUP_PREFIX')"
   echo "HANA Backup is completed"
   echo "Continue with copying the backup files in to S3"
   echo $BACKUP_PREFIX
   sudo -u root /usr/local/bin/aws s3 cp --recursive /backup/data/${SAPSYSTEMNAME}/ s3://${S3Bucket_Name}/bkps/${SAPSYSTEMNAME}/data/ --exclude "*" --include "${BACKUP_PREFIX}*"
   echo "Copying HANA Database log files in to S3"
   sudo -u root /usr/local/bin/aws s3 sync /backup/log/${SAPSYSTEMNAME}/ s3://${S3Bucket_Name}/bkps/${SAPSYSTEMNAME}/log/ --exclude "*" --include "log_backup*"
   sudo -u root /usr/local/bin/aws s3 cp /backup/data/${SAPSYSTEMNAME}/${TIMESTAMP}_backup_log.out s3://${S3Bucket_Name}/bkps/${SAPSYSTEMNAME}
   ```
**Note**  
This script assumes that `hdbuserstore` contains a key named `BACKUP`.

1. Test a one-time backup by executing an `ssm` command directly.
**Note**  
For this command to execute successfully, you must enable login as `<sid>adm` through `sudo`.

   ```
   aws ssm send-command --instance-ids <HANA master instance ID> --document-name AWS-RunShellScript \
   --parameters commands="sudo -u <HANA_SID>adm TIMESTAMP=$(date +\%F\_%H\%M) SAPSYSTEMNAME=<HANA_SID>
   DIR_INSTANCE=/hana/shared/${SAPSYSTEMNAME}/HDB00 -i /usr/sap/HDB/HDB00/hana_backup.sh"
   ```

1. Using CloudWatch Events, you can schedule backups remotely at any desired frequency. Navigate to the CloudWatch Events page and create a rule.

   1. Choose **Schedule**.

   1. Select **SSM Run Command** as the target.

   1. Select **AWS-RunShellScript (Linux)** as the document type.

   1. Choose **InstanceIds** or **Tags** as the target key.

   1. Choose **Constant** under **Configure Parameters**, and enter the command for Run Command to execute.

       **Figure 6: Creating Amazon CloudWatch Events rules**   
![\[Creating Amazon CloudWatch Events rules\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/hana-ops-create-rule.png)
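The scheduling steps above can also be sketched with the AWS CLI; the rule name, schedule, account IDs, role, instance ID, and script path are all placeholder assumptions:

```shell
# Create a rule that fires daily at 02:00 UTC.
aws events put-rule --name hana-nightly-backup \
  --schedule-expression "cron(0 2 * * ? *)"

# Point the rule at the AWS-RunShellScript SSM document, targeting
# the HANA master instance and running the backup script.
aws events put-targets --rule hana-nightly-backup --targets '[{
  "Id": "1",
  "Arn": "arn:aws:ssm:us-east-1:111122223333:document/AWS-RunShellScript",
  "RoleArn": "arn:aws:iam::111122223333:role/CloudWatchEventsSsmRole",
  "RunCommandParameters": {
    "RunCommandTargets": [{"Key": "InstanceIds", "Values": ["i-0123456789abcdef0"]}]
  },
  "Input": "{\"commands\":[\"/usr/sap/HDB/HDB00/hana_backup.sh\"]}"
}]'
```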

## Restoring SAP HANA Backups and Snapshots
<a name="hana-ops-restoring-backups-snapshots"></a>

### Restoring SAP Backups
<a name="hana-ops-restoring-backups"></a>

To restore your SAP HANA database from a backup, perform the following steps:

1. If the backup files are not already available in the /backup file system but are in Amazon S3, restore the files from Amazon S3 by using the [aws s3 cp](https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html) command. This command has the following syntax:

   ```
   aws --region <region> s3 cp <s3-bucket/path> . --recursive --include "<backup-prefix>*"
   ```

   For example:

   ```
   imdbmaster:/backup/data/YYZ # aws --region us-east-1 s3 cp s3://node2-hana-s3bucket-gcynh5v2nqs3/data/YYZ . --recursive --include COMPLETE*
   ```

1. Recover the SAP HANA database by using the Recovery Wizard as outlined in the [SAP HANA Administration Guide](https://help.sap.com/hana/SAP_HANA_Administration_Guide_en.pdf). Specify **File** as the destination type and enter the correct backup prefix.

    **Figure 7: Restore example**   
![\[Restore example\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/hana-ops-restore-example.jpg)

1. When the recovery is complete, you can resume normal operations and clean up backup files from the `/backup/<SID>` directories.

### Restoring EBS Snapshots
<a name="hana-ops-restoring-ebs-snapshots"></a>

To restore EBS snapshots, perform the following steps:

1. Create a new volume from the snapshot:

   ```
   aws ec2 create-volume --region us-west-2 --availability-zone us-west-2a --snapshot-id snap-1234abc123a12345a --volume-type gp2
   ```

1. Attach the newly created volume to your EC2 host:

   ```
   aws ec2 attach-volume --region us-west-2 --volume-id vol-4567c123e45678dd9 --instance-id i-03add123456789012 --device /dev/sdf
   ```

1. Mount the logical volume associated with SAP HANA data on the host:

   ```
   mount /dev/sdf /hana/data
   ```

1. Start your SAP HANA instance.

**Note**  
For large mission-critical systems, we highly recommend that you execute the volume initialization command on the database data and log volumes after restoring the AMI but before starting the database. Executing the volume initialization command will help you avoid extensive wait times before the database is available. Here is the sample `fio` command that you can use:  

```
sudo fio --filename=/dev/xvdf --rw=read --bs=128K --iodepth=32 --ioengine=libaio --direct=1 --name=volume-initialize
```

For more information about initializing Amazon EBS volumes, see the [AWS documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-initialize.html).
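The snapshot-restore steps above can be combined into a single script, including an explicit wait for the new volume before attaching it. The following dry-run sketch prints the AWS CLI calls instead of running them (remove the `echo` prefix to execute); the snapshot, volume, and instance IDs are the example values from the steps above:

```shell
# Dry-run sketch of the snapshot-restore sequence (IDs are examples).
REGION=us-west-2
SNAPSHOT=snap-1234abc123a12345a
INSTANCE=i-03add123456789012

echo aws ec2 create-volume --region "$REGION" --availability-zone "${REGION}a" \
  --snapshot-id "$SNAPSHOT" --volume-type gp2
# create-volume returns the new VolumeId; block until it is available.
VOLUME=vol-4567c123e45678dd9
echo aws ec2 wait volume-available --region "$REGION" --volume-ids "$VOLUME"
echo aws ec2 attach-volume --region "$REGION" --volume-id "$VOLUME" \
  --instance-id "$INSTANCE" --device /dev/sdf
# Then, on the instance: mount the device and start SAP HANA.
echo mount /dev/sdf /hana/data
```

The `aws ec2 wait volume-available` call avoids attempting to attach a volume that is still being created from the snapshot.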

### Restoring AMI Snapshots
<a name="hana-ops-restoring-ami-snapshots"></a>

You can restore your SAP HANA AMI snapshots through the AWS Management Console. Open the [Amazon EC2 console](https://console.aws.amazon.com/ec2/), and choose **AMIs** in the navigation pane.

Choose the AMI that you want to restore, expand **Actions**, and then choose **Launch**.

 **Figure 8: Restoring an AMI snapshot** 

![\[Restoring an AMI snapshot\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/hana-ops-restore-ami-snapshot.jpg)


# Automated patching for SAP HANA
<a name="automated-patching"></a>

Maintaining the SAP HANA database software keeps the database on supported software versions, and enables you to stay current with security fixes and software improvements.

This section provides information about automating the update of your SAP HANA database software version with AWS Systems Manager. You must have a good understanding of SAP HANA patching processes, paths, and prerequisites. Apart from SAP HANA, you must also keep all other components of an SAP system updated to an SAP-supported version.

**Topics**
+ [SAP references](#sap-references)
+ [Architecture](#architecture-patching)
+ [Prerequisites](#prerequisites)
+ [SSM automation document](#ssm-automation-document)
+ [AWS services](#services-patching)
+ [Prepare to run the SSM automation document](#preparations)
+ [Troubleshoot](#troubleshoot-patching)
+ [SAP HANA version reporting](#version-reporting)

## SAP references
<a name="sap-references"></a>

It is recommended that you familiarize yourself with the following SAP documents to understand SAP HANA patching processes, paths, and prerequisites.

You must have SAP portal access to view the SAP Notes.
+ SAP Note : [2115815 - FAQ: SAP HANA Database Patches and Upgrades](https://me.sap.com/notes/2115815) 
+ SAP Note : [1948334 - SAP HANA Database Update Paths for SAP HANA Maintenance Revisions](https://me.sap.com/notes/1948334) 
+ SAP Note : [2378962 - SAP HANA 2.0 Revision and Maintenance Strategy](https://me.sap.com/notes/2378962) 
+ SAP HANA Master Guide : [Updating an SAP HANA System Landscape](https://help.sap.com/docs/SAP_HANA_PLATFORM/eb3777d5495d46c5b2fa773206bbfb46/e396b93cbb571014a319bfdf7fb84638.html) 

## Architecture
<a name="architecture-patching"></a>

Based on your governance strategy, you can centralize AWS SSM automation documents in a Shared Services account or an automation account. For more information, see [Infrastructure OU - Shared Services account](https://docs.aws.amazon.com/prescriptive-guidance/latest/security-reference-architecture/shared-services.html).

A Shared Services account is used in this document. The AWS SSM automation document is stored in this account. It is connected to the child AWS accounts that host Amazon EC2 instances running SAP HANA workloads. The Shared Services account also hosts the Amazon S3 bucket containing the SAP HANA media software, and specific parameters stored in AWS Secrets Manager. These parameters are required for the automation document to run.

The automation account can be a production account running SAP workloads or a dedicated account used only for running SSM automation documents. Using a Shared Services account for automation reduces administrative overhead by keeping the automation document and its dependencies in a single account.

![\[Diagram of an automation account connected to the child accounts that host Amazon EC2 instances running SAP HANA workloads.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/automated-patching-architecture.jpg)


## Prerequisites
<a name="prerequisites"></a>
+ You must set up IAM permissions in the Shared Services account as well as the connected child accounts, so that AWS Systems Manager can run automation documents from the Shared Services account in the connected accounts. For more information, see [Running automations in multiple AWS Regions and accounts](https://docs.aws.amazon.com/systems-manager/latest/userguide/running-automations-multiple-accounts-regions.html).
+ You must set up your Amazon EC2 instance running SAP workloads to be managed by AWS Systems Manager. For more information, see [Working with SSM Agent on Amazon EC2 instances on Linux](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-ssm-agent.html).

## SSM automation document
<a name="ssm-automation-document"></a>

You can find the code for the SSM automation document on [AWS Samples](https://github.com/aws-samples) GitHub repository. For more information, see [sap-hana-patch-sample.yml](https://github.com/aws-samples/sap-automated-hana-patching/blob/main/sap-hana-patch-sample.yml). The following diagram illustrates the steps run by the SSM automation document.

![\[Diagram of the steps run by the SSM automation document.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/ssm-automation-steps.jpg)


## AWS services
<a name="services-patching"></a>

The sample code interacts with the following AWS services to run the SSM automation documents.

**Topics**
+ [Amazon S3](#services-s3)
+ [Amazon EC2](#services-ec2)
+ [AWS Identity and Access Management](#services-iam)
+ [AWS Secrets Manager](#services-secrets-manager)
+ [AWS Key Management Service](#services-kms)

### Amazon S3
<a name="services-s3"></a>

You have the following three options to store the SAP HANA software media.
+ Amazon EBS volume attached to your Amazon EC2 instance
+ NFS mount point – Amazon EFS or Amazon FSx for NetApp ONTAP
+ Amazon S3 bucket

An Amazon S3 bucket can be used to store all the SAP HANA software media for different versions. The target software version to be used in the SSM automation document can then be selected from this bucket.

Store the SAP media in a compressed `SAR` file. The SSM automation document extracts information from this file when you choose to download SAP HANA media from Amazon S3.

The bucket can reside in a Shared Services account and can be shared with all AWS accounts that run SAP HANA workloads. The following table provides an example structure of the SAP HANA software media in Amazon S3.


|  |  |  |  |  | 
| --- |--- |--- |--- |--- |
|   **Software**   |   **Version**   |   **Revision**   |   **Patch**   |   **Amazon S3 path**   | 
|  SAP HANA database software  |  2  |  SP04  |  48  |  s3://<Your SAP software bucket>/linuxx86/hanadb/2.0/SP04/48  | 
|  SAP HANA database software  |  2  |  SP05  |  59  |  s3://<Your SAP software bucket>/linuxx86/hanadb/2.0/SP05/59  | 
|  SAP HANA database software  |  2  |  SP05  |  59.5  |  s3://<Your SAP software bucket>/linuxx86/hanadb/2.0/SP05/59p5  | 
|  SAP HANA database software  |  2  |  SP06  |  60  |  s3://<Your SAP software bucket>/linuxx86/hanadb/2.0/SP06/60  | 
|  SAP HANA database software  |  2  |  SP06  |  64  |  s3://<Your SAP software bucket>/linuxx86/hanadb/2.0/SP06/64  | 
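The prefixes in the preceding table follow a regular layout, so they can be generated rather than typed by hand. The following helper is a sketch (the bucket name is a placeholder, and the `linuxx86/hanadb` layout is taken from the table above):

```shell
# Build the S3 prefix for SAP HANA media, following the table's layout:
#   s3://<bucket>/linuxx86/hanadb/<version>/<revision>/<patch>
hana_media_path() {
  bucket=$1; version=$2; revision=$3; patch=$4
  echo "s3://${bucket}/linuxx86/hanadb/${version}/${revision}/${patch}"
}

# Example: the SP05 revision 59 media prefix for a hypothetical bucket.
hana_media_path my-sap-software 2.0 SP05 59
```

Using one function to build every prefix keeps uploads and the SSM automation document's input parameters consistent.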

 **Amazon S3 bucket policies** 

The Amazon S3 bucket containing the SAP HANA software media must be accessible to all Amazon EC2 instances running SAP HANA workloads in all of your AWS accounts. Use Amazon S3 bucket policies to grant limited access to Amazon S3 buckets and their contents only to specific authorized entities. For more information, see the following documents.
+  [Policies and Permissions in Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-policy-language-overview.html) 
+  [Security best practices for Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html) 

The following policy is an example Amazon S3 bucket policy that grants access to a specific role on a specific account to download all files from an Amazon S3 bucket.

```
{
    "Version":"2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:role/service-role/{ec2_role}"
            },
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::{bucket_name}/*",
                "arn:aws:s3:::{bucket_name}"
            ]
        }
    ]
}
```

 **Dedicated Linux file system** 

If the SAP HANA database software is stored in an Amazon S3 bucket, it is downloaded to a local Linux directory on the Amazon EC2 instance. It is recommended to have at least 30 GB of free space when downloading the SAP HANA software media files from the Amazon S3 bucket to a local Linux directory. The directory path must be specified in the input parameters of the SSM automation document, as shown in the following image.

![\[Input parameters of the SSM automation document.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/ssm-automation-document.jpg)


The files must be present in the specified directory on the Amazon EC2 instance. They must be unzipped and stored in the following structure, based on the AWS SSM automation document code.

```
/{{HanaUpgradeBaseDir}}/x-sap-lnx-patch-hanadb/{{HANADBVersion}}/SAP_HANA_DATABASE/
```

The downloaded files are removed from the local directory once the SSM automation document has completed updating the SAP HANA database.
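As a sketch, the expected extraction path can be computed and the available space checked before the download. The base directory and version below are placeholders standing in for `{{HanaUpgradeBaseDir}}` and `{{HANADBVersion}}`, and the 30 GB figure follows the recommendation above:

```shell
# Compute the extraction path used by the automation document.
hana_patch_dir() {
  base=$1; version=$2
  echo "${base}/x-sap-lnx-patch-hanadb/${version}/SAP_HANA_DATABASE"
}

BASE=/hana/patch          # placeholder for {{HanaUpgradeBaseDir}}
VERSION=2.00.059.00       # placeholder for {{HANADBVersion}}
hana_patch_dir "$BASE" "$VERSION"

# Show available space (in GB) on the filesystem holding $BASE;
# at least 30 GB free is recommended. Falls back to / if $BASE is absent.
df -BG --output=avail "$BASE" 2>/dev/null || df -BG --output=avail /
```

Checking the free space before the download avoids a failed automation run partway through the media transfer.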

### Amazon EC2
<a name="services-ec2"></a>

Your Amazon EC2 instance running SAP HANA workloads requires two tags to support the SSM automation document code. For more information, see [Tag your Amazon EC2 resources](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html).

The `DBSid:{SID}` and `HanaPatchGroup:{Usage}` tags are accessed by AWS Secrets Manager. Both of these tags are depicted in the [Architecture](#architecture-patching) section.

The `HanaPatchGroup` tag is used to filter different Amazon Resource Names (ARNs) that are retrieved from AWS Secrets Manager for the SAP HANA database user. The following is an example of `HanaPatchGroup` tag values.

```
DBSid = HDB
HanaPatchGroup = DEV
HanaPatchGroup = QAS
HanaPatchGroup = PRD
HanaPatchGroup = SBX
```

You can customize the tags based on your strategy for user and password management of the database user that is going to perform the SAP HANA update process.
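Creating the two tags can be scripted. The following sketch builds the AWS CLI call as a string so you can review it before running (the instance ID is an example; pipe the output to `sh` or remove the `echo` to execute):

```shell
# Build the create-tags call for the patching automation's two tags.
tag_cmd() {
  # $1 = instance ID, $2 = DBSid value, $3 = HanaPatchGroup value
  echo "aws ec2 create-tags --resources $1 --tags Key=DBSid,Value=$2 Key=HanaPatchGroup,Value=$3"
}

# Example: tag a DEV instance running the HDB database.
tag_cmd i-0123456789abcdef0 HDB DEV
```

Tagging through a single helper keeps the tag keys consistent across all accounts, which matters because the SSM automation document filters on them.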

### AWS Identity and Access Management
<a name="services-iam"></a>

 AWS Systems Manager must be able to manage the Amazon EC2 instances running SAP HANA workloads. For more information, see [Create an IAM instance profile for Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/setup-instance-profile.html).

If your SAP HANA database instance is provisioned via AWS Launch Wizard for SAP, this permission is included in the deployment. For more information, see [AWS Launch Wizard for SAP](https://docs.aws.amazon.com/launchwizard/latest/userguide/launch-wizard-sap.html).

### AWS Secrets Manager
<a name="services-secrets-manager"></a>

 AWS Secrets Manager is used to store the parameters of the SAP HANA database that are required to run the SSM automation document. AWS Secrets Manager enables sharing of secrets across multiple accounts. With this flexibility, you can manage the parameters in one location, and outside of the code.

Sharing the secrets across different accounts requires additional permissions. For more information, see [How do I share AWS Secrets Manager secrets between AWS accounts?](https://repost.aws/knowledge-center/secrets-manager-share-between-accounts) 

The following table shows the example secrets created in the Shared Services account to run the sample code.


|  |  |  | 
| --- |--- |--- |
|   **Secret name**   |   **Secret key**   |   **Secret value**   | 
|  zsap/hana/upgrade/user  |  User  |  <HANA Upgrade User ID>  | 
|  zsap/hana/upgrade/password/DEV  |  Password  |  <HANA DEV Upgrade User Password>  | 
|  zsap/hana/upgrade/password/QAS  |  Password  |  <HANA QAS Upgrade User Password>  | 
|  zsap/hana/upgrade/password/PRD  |  Password  |  <HANA PRD Upgrade User Password>  | 
|  zsap/hana/upgrade/password/SBX  |  Password  |  <HANA SBX Upgrade User Password>  | 
|  zsap/hana/upgrade/bucket  |  Amazon S3 bucket  |  <Amazon S3 bucket for SAP HANA software>  | 
|  zsap/sap/bucket/version-repo  |  Amazon S3 bucket  |  <Amazon S3 bucket for SAP HANA version repository>  | 
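Creating these secrets can be scripted in the Shared Services account. The following sketch builds the `aws secretsmanager create-secret` call as a string for review (the secret names follow the table above; the values are placeholders):

```shell
# Build a create-secret call for one table entry.
secret_cmd() {
  # $1 = secret name, $2 = secret key, $3 = secret value (placeholder)
  echo "aws secretsmanager create-secret --name $1 --secret-string {\"$2\":\"$3\"}"
}

# Examples mirroring the first two rows of the table; values are dummies.
secret_cmd zsap/hana/upgrade/user User '<HANA Upgrade User ID>'
secret_cmd zsap/hana/upgrade/password/DEV Password '<HANA DEV Upgrade User Password>'
```

In practice, pass the real values from a secure source rather than the shell history, and restrict who can read the secrets with resource policies as described below.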

**Note**  
The sample code references the Amazon Resource Names (ARNs) of the secrets. This is required because the secrets are stored in a different AWS account than the Amazon EC2 instance running the SAP HANA workload.

 **Policies for AWS Secrets Manager** 

The secrets created in AWS Secrets Manager must be set up to be accessible to target AWS accounts. For more information, see [Resource-based policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_resource-based).

The following is an example policy that is assigned to a Secret, granting access from a different AWS account.

```
{
  "Version" : "2012-10-17",
  "Statement" : [ {
    "Effect" : "Allow",
    "Principal" : {
      "AWS" : "arn:aws:iam::{sap_workloads_account_id}:role/service-role/{ec2_role}"
    },
    "Action" : "secretsmanager:GetSecretValue",
    "Resource" : "arn:aws:secretsmanager:{region}:{automation_account_id}:{secret_ARN}"
  } ]
}
```

**Note**  
A valid user in the SAP HANA database `SYSTEMDB` with the required authorization to make the SAP HANA update is required.

In the sample code, the user and password are stored in AWS Secrets Manager as a secret. Follow the principle of granting least privilege, and use a user with the required authorizations. For more details, see [Create a Lesser-Privileged Database User for Update](https://help.sap.com/docs/SAP_HANA_PLATFORM/2c1988d620e04368aa4103bf26f17727/df3de8c31cef45c0847d2804b97604ea.html).

### AWS Key Management Service
<a name="services-kms"></a>

The sample code uses AWS Secrets Manager to share secrets across different AWS accounts. As AWS Secrets Manager encrypts the contents of the parameters, a KMS key is used for encryption and decryption operations. The KMS key must be accessible to all of your AWS accounts. For more information, see [Creating keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html).

## Prepare to run the SSM automation document
<a name="preparations"></a>

Before running the SSM automation document, you must ensure that a valid backup of the SAP HANA database exists, and that the applications connecting to the SAP HANA database are properly stopped. For more details, see [Administration](https://docs.aws.amazon.com/sap/latest/sap-hana/hana-ops-administration.html).

For SAP HANA databases managed by operating system or third-party cluster software, the cluster must be placed in maintenance mode before you initiate automated patching. The SSM automation document must run on the secondary node first.

For more details on SAP HANA clustered environments, see [SAP HANA on AWS: High Availability Configuration Guide for SLES and RHEL](https://docs.aws.amazon.com/sap/latest/sap-hana/sap-hana-on-aws-ha-configuration.html). For more details on updating SAP HANA databases with SAP HANA System Replication enabled, see [Update SAP HANA Systems Running in a System Replication Setup](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/e94ce3e59ece4b918b5c90d998ab7fae.html).

Concurrency enables you to define how many SAP HANA databases should be updated in parallel. For more information, see [Control automations at scale](https://docs.aws.amazon.com/systems-manager/latest/userguide/running-automations-scale-controls.html).
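As a sketch, the concurrency and error controls can be passed when starting the automation against tagged instances. This dry run prints the call instead of running it (remove the `echo` to execute); the document name is a placeholder, while the tag key matches the `HanaPatchGroup` tag described earlier:

```shell
# Dry-run sketch: patch the DEV group two databases at a time,
# stopping after the first error. Document name is a placeholder.
MAX_CONCURRENCY=2
MAX_ERRORS=1

echo aws ssm start-automation-execution \
  --document-name SAP-HANA-Patch \
  --target-parameter-name InstanceId \
  --targets Key=tag:HanaPatchGroup,Values=DEV \
  --max-concurrency "$MAX_CONCURRENCY" --max-errors "$MAX_ERRORS"
```

Low `--max-errors` values stop a rollout early if an update fails, which is usually what you want for database patching.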

## Troubleshoot
<a name="troubleshoot-patching"></a>

Follow these steps to see the status of each SSM automation.

1. Open the [AWS Systems Manager console](https://console.aws.amazon.com/systems-manager/).

1. On the left navigation pane, select **Automation**.

1. Select **Configure preferences** > **Executions**.

1. You can see the status of your SSM automations in the **Automation executions** section.

 The AWS Management Console enables you to drill into each execution and review the steps that were run and the result of each step. This helps you understand failures that occur *before* the SSM automation starts. For troubleshooting *after* the SSM automation has been initiated, review the logs. You can find the SSM logs on the Amazon EC2 instance at the following path.

```
/var/lib/amazon/ssm/{instance-id}/document/orchestration/{automation_step_execution_id}/awsrunShellScript/0.awsrunShellScript
```

You can send the output of each SSM automation to Amazon CloudWatch Logs. For more information, see [Configuring Amazon CloudWatch Logs for Run Command](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-rc-setting-up-cwlogs.html).

## SAP HANA version reporting
<a name="version-reporting"></a>

You can use [Amazon QuickSight](https://aws.amazon.com/quicksight/?nc1=h_ls) to create serverless BI dashboards that can serve as a repository for your SAP HANA software versions. With Amazon QuickSight, you can review all of your SAP HANA database versions in all of your AWS accounts. For more information, see [Maintain an SAP landscape inventory with AWS Systems Manager and Amazon Athena](https://aws.amazon.com/blogs/awsforsap/maintain-an-sap-landscape-inventory-with-aws-systems-manager-and-amazon-athena/).

The `HDB_Report_Version` step in the sample code gathers SAP HANA version information, and uploads that data into an Amazon S3 bucket. (In the sample code, the Amazon S3 bucket has a `/HANA` folder that contains the SAP HANA version information.) You can use the data in this bucket as a source dataset to feed Amazon QuickSight dashboards. For more information, see [Creating a dataset using Amazon S3 files](https://docs.aws.amazon.com/quicksight/latest/user/create-a-data-set-s3.html). You can ensure accuracy of the data by scheduling automatic refreshes. For more information, see [Refreshing SPICE data](https://docs.aws.amazon.com/quicksight/latest/user/refreshing-imported-data.html).

You must set up IAM permissions for the Amazon S3 bucket. The following is a sample Amazon S3 bucket policy for storing SAP HANA version information.

```
{
    "Version":"2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:role/service-role/{ec2_role}"
            },
            "Action": "s3:PutObject",
            "Resource": [
                "arn:aws:s3:::{bucket_name}/*",
                "arn:aws:s3:::{bucket_name}"
            ]
        }
    ]
}
```

# Storage Configuration for SAP HANA
<a name="hana-ops-storage-config"></a>

**Important**  
This page has moved. The storage configuration documentation is now located in the Environment Setup section.  
See [Configure storage (Amazon EBS)](storage-configuration-ebs.md) for the current guidance, including memory-based sizing formulas, pre-calculated reference tables, and legacy instance-specific configurations.

# Networking
<a name="hana-ops-networking"></a>

SAP HANA components communicate over the following logical network zones:
+ Client zone – to communicate with different clients such as SQL clients, SAP Application Server, SAP HANA Extended Application Services (XS), and SAP HANA Studio
+ Internal zone – to communicate with hosts in a distributed SAP HANA system as well as for SAP HSR
+ Storage zone – to persist SAP HANA data in the storage infrastructure for resumption after start or recovery after failure

Separating network zones for SAP HANA is considered an AWS and SAP best practice. It enables you to isolate the traffic required for each communication channel.

In a traditional, bare-metal setup, these different network zones are set up by having multiple physical network cards or virtual LANs (VLANs). Conversely, on the AWS Cloud, you can use elastic network interfaces combined with security groups to achieve this network isolation. Amazon EBS-optimized instances can also be used for further isolation for storage I/O.

## EBS-Optimized Instances
<a name="hana-ops-ebs-optimized-instances"></a>

Many newer Amazon EC2 instance types such as the X1 use an optimized configuration stack and provide additional, dedicated capacity for Amazon EBS I/O. These are called [EBS-optimized instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html). This optimization provides the best performance for your EBS volumes by minimizing contention between Amazon EBS I/O and other traffic from your instance.

![\[EBS-optimized instances\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/hana-ops-ebs-optimized.jpg)


 **Figure 9: EBS-optimized instances** 

## Elastic Network Interfaces
<a name="hana-ops-elastic-network-interfaces"></a>

An elastic network interface is a virtual network interface that you can attach to an EC2 instance in an Amazon Virtual Private Cloud (Amazon VPC). With an elastic network interface (referred to as *network interface* in the remainder of this guide), you can create different logical networks by specifying multiple private IP addresses for your instances.

For more information about network interfaces, see the [AWS documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html). In the following example, two network interfaces are attached to each SAP HANA node as well as in a separate communication channel for storage.

![\[Network interfaces attached to SAP HANA nodes\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/hana-ops-network-interfaces.jpg)


 **Figure 10: Network interfaces attached to SAP HANA nodes** 

## Security Groups
<a name="hana-ops-security-groups"></a>

A security group acts as a virtual firewall that controls the traffic for one or more instances. When you launch an instance, you associate one or more security groups with the instance. You add rules to each security group that allow traffic to or from its associated instances. You can modify the rules for a security group at any time. The new rules are automatically applied to all instances that are associated with the security group. To learn more about security groups, see the [AWS documentation](https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html). In the following example, ENI-1 of each instance shown is a member of the same security group that controls inbound and outbound network traffic for the client network.

![\[Network interfaces and security groups\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/hana-ops-security-groups.jpg)


 **Figure 11: Network interfaces and security groups** 

## Network Configuration for SAP HANA System Replication (HSR)
<a name="hana-ops-hsr"></a>

You can configure additional network interfaces and security groups to further isolate inter-node communication as well as SAP HSR network traffic. In Figure 12, ENI-2 has its own security group (not shown) to separate client traffic from inter-node communication. ENI-3 is configured to secure SAP HSR traffic to another Availability Zone within the same Region. In this example, the target SAP HANA cluster would be configured with additional network interfaces similar to the source environment, and ENI-3 would share a common security group.

![\[Further isolation with additional ENIs and security groups\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/hana-ops-isolation.jpg)


 **Figure 12: Further isolation with additional ENIs and security groups** 

## Configuration Steps for Logical Network Separation
<a name="hana-ops-config"></a>

To configure your logical network for SAP HANA, follow these steps:

1. Create new security groups to allow for isolation of client, internal communication, and, if applicable, SAP HSR network traffic. See [Ports and Connections](https://help.sap.com/saphelp_hanaplatform/helpdata/en/a9/326f20b39342a7bc3d08acb8ffc68a/frameset.htm) in the SAP HANA documentation to learn about the list of ports used for different network zones. For more information about how to create and configure security groups, see the [AWS documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#creating-security-group).

1. Use Secure Shell (SSH) to connect to your EC2 instance at the OS level. Follow the steps described in the [appendix](hana-ops-appendix.md) to configure the OS to properly recognize and name the Ethernet devices associated with the new network interfaces you will be creating.

1. Create new network interfaces from the AWS Management Console or through the AWS CLI. Make sure that the new network interfaces are created in the subnet where your SAP HANA instance is deployed. As you create each new network interface, associate it with the appropriate security group you created in step 1. For more information about how to create a new network interface, see the [AWS documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#create_eni).

1. Attach the network interfaces you created to your EC2 instance where SAP HANA is installed. For more information about how to attach a network interface to an EC2 instance, see the [AWS documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#attach_eni_running_stopped).

1. Create virtual host names and map them to the IP addresses associated with client, internal, and replication network interfaces. Ensure that host name-to-IP-address resolution is working by creating entries in all applicable host files or in the Domain Name System (DNS). When complete, test that the virtual host names can be resolved from all SAP HANA nodes and clients.

1. For scale-out deployments, configure SAP HANA inter-service communication to let SAP HANA communicate over the internal network. To learn more about this step, see [Configuring SAP HANA Inter-Service Communication](https://help.sap.com/saphelp_hanaplatform/helpdata/en/bb/cb76c7fa7f45b4adb99e60ad6c85ba/frameset.htm) in the SAP HANA documentation.

1. Configure SAP HANA hostname resolution to let SAP HANA communicate over the replication network for SAP HSR. To learn more about this step, see [Configuring Hostname Resolution for SAP HANA System Replication](https://help.sap.com/saphelp_hanaplatform/helpdata/en/9a/cd6482a5154b7e95ce72e83b04f94d/frameset.htm) in the SAP HANA documentation.
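Steps 3 and 4 above can be scripted with the AWS CLI. The following sketch builds the calls as strings for review before running (the subnet, security group, instance, and ENI IDs are placeholders; `--device-index 1` attaches the ENI as the second interface):

```shell
# Build the CLI calls to create an ENI in the HANA subnet, associate it
# with the internal-zone security group, and attach it to the instance.
eni_cmds() {
  subnet=$1; sg=$2; instance=$3; index=$4
  echo "aws ec2 create-network-interface --subnet-id $subnet --groups $sg"
  echo "aws ec2 attach-network-interface --network-interface-id <eni-id> --instance-id $instance --device-index $index"
}

# Example with placeholder IDs; <eni-id> comes from the create call's output.
eni_cmds subnet-0abc123 sg-0internal i-0hana123 1
```

The `<eni-id>` is returned by `create-network-interface`, so in a real script you would capture it from that call's output before attaching.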

# SAP Support Access
<a name="hana-ops-support"></a>

In some situations it may be necessary to allow an SAP support engineer to access your SAP HANA systems on AWS. The following information serves only as a supplement to the information contained in the "Getting Support" section of the [SAP HANA Administration Guide](https://help.sap.com/hana/SAP_HANA_Administration_Guide_en.pdf).

A few steps are required to configure proper connectivity to SAP. These steps differ depending on whether you want to use an existing remote network connection to SAP, or you are setting up a new connection directly with SAP from systems on AWS.

## Support Channel Setup with SAProuter on AWS
<a name="hana-ops-saprouter"></a>

When setting up a direct support connection to SAP from AWS, consider the following steps:

1. For the SAProuter instance, create and configure a specific SAProuter security group, which only allows the required inbound and outbound access to the SAP support network. This should be limited to a specific IP address that SAP gives you to connect to, along with TCP port 3299. See the [Amazon EC2 security group documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html) for additional details about creating and configuring security groups.

1. Launch the instance that the SAProuter software will be installed on into a public subnet of the VPC and assign it an Elastic IP address.

1. Install the SAProuter software and create a saprouttab file that allows access from SAP to your SAP HANA system on AWS.

1. Set up the connection with SAP. For your internet connection, use **Secure Network Communication (SNC)**. For more information, see the [SAP Remote Support – Help](https://support.sap.com/remote-support/help.html) page.

1. Modify the existing SAP HANA security groups to trust the new SAProuter security group you have created.
**Tip**  
For added security, shut down the EC2 instance that hosts the SAProuter service when it is not needed for support purposes.
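Step 1 above can be sketched with the AWS CLI. This dry run prints the calls instead of running them (remove the `echo` prefix to execute); the VPC ID and group ID are placeholders, and the SAP support address is the one SAP gives you:

```shell
# Dry-run sketch: a security group that only allows SAProuter traffic
# (TCP 3299) to and from the SAP-provided support address.
SAP_IP="<SAP support IP>/32"   # placeholder; provided by SAP

echo aws ec2 create-security-group --group-name saprouter-sg \
  --description "SAProuter support access" --vpc-id "<vpc-id>"
echo aws ec2 authorize-security-group-ingress --group-id "<sg-id>" \
  --protocol tcp --port 3299 --cidr "$SAP_IP"
echo aws ec2 authorize-security-group-egress --group-id "<sg-id>" \
  --protocol tcp --port 3299 --cidr "$SAP_IP"
```

Limiting both ingress and egress to the single SAP address and port keeps the SAProuter instance's exposure to a minimum.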

![\[Support connectivity with SAProuter\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/hana-ops-saprouter.jpg)


 **Figure 13: Support connectivity with SAProuter on AWS** 

## Support Channel Setup with SAProuter on Premises
<a name="hana-ops-saprouter-onprem"></a>

In many cases, you may already have a support connection configured between your data center and SAP. This can easily be extended to support SAP systems on AWS. This scenario assumes that connectivity between your data center and AWS has already been established, either by way of a secure VPN tunnel over the internet or by using [AWS Direct Connect](https://aws.amazon.com/directconnect/).

You can extend this connectivity as follows:

1. Ensure that the proper saprouttab entries exist to allow access from SAP to resources in the VPC.

1. Modify the SAP HANA security groups to allow access from the on-premises SAProuter IP address.

1. Ensure that the proper firewall ports are open on your gateway to allow traffic to pass over TCP port 3299.
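A minimal saprouttab for this scenario might look like the following sketch; the IP addresses and the HANA SQL port (3\<instance\>15, shown for instance number 00) are placeholders to replace with your own values:

```
# Permit the SAProuter at SAP (placeholder IP) to reach the HANA instance on its SQL port
P 203.0.113.10 10.0.1.50 30015
# Deny all other connections
D * * *
```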

![\[Support connectivity with SAProuter on premises\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/hana-ops-saprouter-onprem.jpg)


 **Figure 14: Support connectivity with SAProuter on premises** 

# Security
<a name="hana-ops-security"></a>

Here are additional AWS security resources to help you achieve the level of security you require for your SAP HANA environment on AWS.
+  [AWS Cloud Security Center](https://aws.amazon.com/security/) 
+  [CIS AWS Foundations Benchmark](https://docs.aws.amazon.com/securityhub/latest/userguide/cis-aws-foundations-benchmark.html) 
+  [Introduction to AWS Security](https://docs.aws.amazon.com/whitepapers/latest/introduction-aws-security/welcome.html) 
+  [AWS Well-Architected Framework – Security Pillar](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/welcome.html) 

## OS Hardening
<a name="hana-ops-os-hardening"></a>

You may want to lock down the OS configuration further, for example, to avoid providing a DB administrator with root credentials when logging into an instance.

You can also refer to the following SAP notes:
+  [1730999](https://me.sap.com/notes/1730999): *Configuration changes in HANA appliance* 
+  [1731000](https://me.sap.com/notes/1731000): *Unrecommended configuration changes* 

## Disabling HANA Services
<a name="hana-ops-disabling-services"></a>

HANA services such as HANA XS are optional and should be deactivated if they are not needed. For instructions, see [SAP Note 1697613](https://me.sap.com/notes/1697613): *Remove XS Engine out of SAP HANA database*. If you deactivate a service, also remove its TCP ports from the SAP HANA security groups on AWS for complete security.
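For example, if you deactivated SAP HANA XS for instance number 00, you could revoke its HTTP and HTTPS ports (80\<instance\> and 43\<instance\>) from the security group. The group ID and CIDR below are placeholders:

```shell
# Revoke the XS engine HTTP port (8000 for instance 00) from the HANA security group
aws ec2 revoke-security-group-ingress \
  --group-id sg-0def67890example \
  --protocol tcp --port 8000 \
  --cidr 10.0.0.0/16

# Revoke the XS engine HTTPS port (4300 for instance 00)
aws ec2 revoke-security-group-ingress \
  --group-id sg-0def67890example \
  --protocol tcp --port 4300 \
  --cidr 10.0.0.0/16
```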

## API Call Logging
<a name="hana-ops-api-logging"></a>

 [AWS CloudTrail](https://aws.amazon.com/cloudtrail/) is a web service that records AWS API calls for your account and delivers log files to you. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service.

With CloudTrail, you can get a history of AWS API calls for your account, including API calls made via the AWS Management Console, AWS SDKs, command line tools, and higher-level AWS services (such as AWS CloudFormation). The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing.
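As an illustration, recorded API activity can be queried from the AWS CLI; the event name below is just an example:

```shell
# List the 10 most recent StopInstances API calls recorded by CloudTrail
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=StopInstances \
  --max-results 10 \
  --query 'Events[].{Time:EventTime,User:Username,Source:EventSource}'
```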

## Notifications on Access
<a name="hana-ops-notifications"></a>

You can use [Amazon Simple Notification Service (Amazon SNS)](https://aws.amazon.com/sns/) or third-party applications to set up notifications on SSH login to your email address or mobile phone.
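One lightweight approach, sketched below under the assumption that the instance profile is allowed to publish to SNS, is to publish a message from a login profile script. The script path and topic ARN are hypothetical placeholders:

```shell
# /etc/profile.d/ssh-login-alert.sh (hypothetical): notify on interactive SSH logins
TOPIC_ARN="arn:aws:sns:us-east-1:111122223333:ssh-login-alerts"   # placeholder ARN

if [ -n "$SSH_CONNECTION" ]; then
  aws sns publish \
    --topic-arn "$TOPIC_ARN" \
    --subject "SSH login on $(hostname)" \
    --message "User $(whoami) logged in from ${SSH_CONNECTION%% *} at $(date -u)"
fi
```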

# Architecture patterns for SAP HANA on AWS
<a name="hana-ops-patterns"></a>

This section provides information on architecture patterns that can be used as guidelines for deploying SAP HANA systems on AWS. For more information on the architecture patterns for SAP NetWeaver-based applications on AWS, see [Architecture guidance for availability and reliability of SAP on AWS](https://docs.aws.amazon.com/sap/latest/general/architecture-guidance-of-sap-on-aws.html).

You can change the patterns to fit your changing business requirements with minimum to no downtime, depending on the complexity of your chosen architecture pattern.

**Topics**
+ [SAP HANA System Replication](#hana-ops-patterns-hsr)
+ [Secondary SAP HANA instance](#hana-ops-secondary-instance)
+ [Overview of patterns](#hana-ops-patterns-types)
+ [Single Region architecture patterns for SAP HANA](hana-ops-patterns-single.md)
+ [Multi-Region architecture patterns for SAP HANA](hana-ops-patterns-multi.md)

## SAP HANA System Replication
<a name="hana-ops-patterns-hsr"></a>

SAP HANA System Replication is a high availability solution provided by SAP for SAP HANA that can be used to reduce outages caused by maintenance activities, faults, and disasters. It continuously replicates data to a secondary instance, so that the changes already persist on the alternate instance if the primary instance fails. For more information, see [Configuring SAP HANA System Replication](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/676844172c2442f0bf6c8b080db05ae7.html?version=2.0.01).
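The basic replication setup uses SAP's standard `hdbnsutil` tool; the site names, host name, and instance number below are illustrative:

```shell
# On the primary system (as <sid>adm), enable system replication
hdbnsutil -sr_enable --name=SiteA

# On the secondary system (as <sid>adm, with SAP HANA stopped), register it
# against the primary host; replicationMode=sync suits low-latency links
# such as two Availability Zones in the same Region
hdbnsutil -sr_register --name=SiteB \
  --remoteHost=hana-primary --remoteInstance=00 \
  --replicationMode=sync --operationMode=logreplay

# Check the replication state
hdbnsutil -sr_state
```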

## Secondary SAP HANA instance
<a name="hana-ops-secondary-instance"></a>

In the AWS Cloud, a secondary SAP HANA instance can exist in the same Region in a different Availability Zone, or in a separate Region. For more information, see [Architecture guidelines and decisions](https://docs.aws.amazon.com/sap/latest/general/arch-guide-architecture-guidelines-and-decisions.html). The secondary instance can be deployed as a passive instance or an active (read-only) instance. When the secondary instance is deployed as a passive instance, you can reuse the Amazon EC2 instance capacity to accommodate a non-production SAP HANA workload.

## Overview of patterns
<a name="hana-ops-patterns-types"></a>

The architecture patterns for SAP HANA are divided into the following two categories:
+  [Single Region architecture patterns for SAP HANA](hana-ops-patterns-single.md) 
+  [Multi-Region architecture patterns for SAP HANA](hana-ops-patterns-multi.md) 

You must consider the risk and impact of each failure type, and the cost of mitigation when choosing a pattern. The following table provides a quick overview of the architecture patterns for SAP HANA systems on AWS.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sap/latest/sap-hana/hana-ops-patterns.html)

 *¹ To achieve a near-zero recovery point objective, SAP HANA System Replication must be set up in sync mode for the SAP HANA instances within the same Region.* 

 *² To achieve the lowest recovery time objective, we recommend using a high availability setup with third-party cluster solutions in combination with SAP HANA System Replication.* 

 *³ A production-sized Amazon EC2 instance can be deployed as an MCOS installation to accommodate a non-production SAP HANA instance.* 

 *⁴ SAP HANA System Replication and the number of SAP HANA instance copies as targets.* 

 *⁵ Same-Region Replication copies objects across Amazon S3 buckets in the same Region.* 

# Single Region architecture patterns for SAP HANA
<a name="hana-ops-patterns-single"></a>

Single Region architecture patterns help you avoid network latency because your SAP workload components are located in close proximity within the same Region. Every AWS Region generally has three Availability Zones. For more information, see [AWS Global Infrastructure Map](https://aws.amazon.com/about-aws/global-infrastructure/).

You can choose these patterns when you need to ensure that your SAP data resides within regional boundaries stipulated by data sovereignty laws.

The following are the four single Region architecture patterns.

**Topics**
+ [Pattern 1: Single Region with two Availability Zones for production](#hana-ops-patterns-pattern1)
+ [Pattern 2: Single Region with two Availability Zones for production and production sized non-production in a third Availability Zone](#hana-ops-patterns-pattern2)
+ [Pattern 3: Single Region with one Availability Zone for production and another Availability Zone for non-production](#hana-ops-patterns-pattern3)
+ [Pattern 4: Single Region with one Availability Zone for production](#hana-ops-patterns-pattern4)

## Pattern 1: Single Region with two Availability Zones for production
<a name="hana-ops-patterns-pattern1"></a>

In this pattern, the SAP HANA instance is deployed across two Availability Zones with SAP HANA System Replication configured between the two instances. The primary and secondary instances are of the same instance type. The secondary instance can be deployed in active/passive or active/active mode. We recommend using the sync mode of SAP HANA System Replication because of the low-latency connectivity between the two Availability Zones. For more information, see [Replication Modes for SAP HANA System Replication](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/c039a1a5b8824ecfa754b55e0caffc01.html?version=2.0.05).

This pattern is foundational if you are looking for high availability cluster solutions for automated failover to fulfill near-zero recovery point and time objectives. SAP HANA System Replication with high availability cluster solutions for automated failover provides resiliency against failure scenarios. For more information, see [Failure scenarios](https://docs.aws.amazon.com/sap/latest/general/arch-guide-failure-scenarios.html).

You need to consider the cost of licensing for third-party cluster solutions. If the secondary SAP HANA instance is not being used for read-only operations, it is idle capacity. Provisioning a production-equivalent instance type as standby adds to the total cost of ownership.

Your SAP HANA instance backups can be stored in Amazon S3 buckets using AWS Backint Agent for SAP HANA. Amazon S3 objects are automatically stored across multiple devices spanning a minimum of three Availability Zones across a Region. To protect against logical data loss, you can use the Same-Region Replication feature of Amazon S3. For more information, see [Setting up replication](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-how-setup.html).

![\[Diagram of Pattern 1: Single Region with two Availability Zones for production.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/pattern1.png)


## Pattern 2: Single Region with two Availability Zones for production and production sized non-production in a third Availability Zone
<a name="hana-ops-patterns-pattern2"></a>

In this pattern, the SAP HANA instance is deployed in a multi-tier SAP HANA System Replication configuration across three Availability Zones. The primary and secondary SAP HANA instances are of the same instance type and can be configured in a highly available setup using third-party cluster solutions. The secondary SAP HANA instance can be deployed in an active/passive or active/active configuration. We recommend using the sync mode of SAP HANA System Replication because of the low-latency connectivity between the two Availability Zones. The tertiary SAP HANA instance is deployed in a third Availability Zone as a Multiple Components on One System (MCOS) installation, where the production instance is co-hosted on the same Amazon EC2 instance with a non-production SAP HANA instance.

This architectural pattern is cost-optimized. It aids disaster recovery in the unlikely event of losing connection to two Availability Zones at the same time. For disaster recovery, the non-production SAP HANA workload is stopped to make resources available for the production workload. However, invoking disaster recovery (the third Availability Zone) is a manual activity. MCOS requires that you provision the non-production SAP HANA instance with the same instance type as the primary instance and locate it in a third Availability Zone. Operating an MCOS system also requires additional storage for non-production workloads and detailed, tested procedures to invoke disaster recovery.

In comparison to pattern 1, pattern 2 further enhances application availability. No restoration or recovery from backups is required to invoke disaster recovery. The additional cost of the third instance is justified because the otherwise idle capacity is used for non-production workloads.

![\[Diagram of Pattern 2: Single Region with two Availability Zones for production and production sized non-production in a third Availability Zone.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/pattern2.png)


## Pattern 3: Single Region with one Availability Zone for production and another Availability Zone for non-production
<a name="hana-ops-patterns-pattern3"></a>

In this pattern, the SAP HANA instance is deployed in a two-tier SAP HANA System Replication configuration across two Availability Zones. The primary and secondary SAP HANA instances are of the same instance type, and there is no idle capacity or high availability licensing requirement. Additional storage is required for the non-production SAP HANA workloads on the secondary instance.

The secondary instance is an MCOS installation and co-hosts a non-production SAP HANA workload. For more information, see [SAP Note 1681092: Multiple SAP HANA DBMSs (SIDs) on one SAP HANA system](https://launchpad.support.sap.com/#/notes/1681092). This is a cost-optimized solution without high availability. In the event of a failure on the primary instance, the non-production SAP HANA workload is stopped and a takeover is performed on the secondary instance. Considering the time taken to recover services on the secondary instance, this pattern is suitable for SAP HANA workloads that can tolerate a higher recovery time objective and for systems functioning as disaster recovery systems.

![\[Diagram of Pattern 3: Single Region with one Availability Zone for production and another Availability Zone for non-production.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/pattern3.png)


## Pattern 4: Single Region with one Availability Zone for production
<a name="hana-ops-patterns-pattern4"></a>

In this pattern, the SAP HANA instance is deployed as a standalone installation with no target systems to replicate data to. This is the most basic and cost-efficient deployment option. However, it is the least resilient of all the architectures and is not recommended for business-critical SAP HANA workloads. To restore business operations during a failure scenario, you can rely on Amazon EC2 automatic recovery in the event of an instance failure, or on restoration and recovery from the most recent valid backups in the event of a significant issue impacting the Availability Zone. The non-production SAP HANA workloads have no dependency on the production SAP HANA instance; they can be deployed in any Availability Zone within the Region and sized appropriately for their workloads.

![\[Diagram of Pattern 4: Single Region with one Availability Zone for production\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/pattern4.png)


# Multi-Region architecture patterns for SAP HANA
<a name="hana-ops-patterns-multi"></a>

 The AWS Global Infrastructure spans multiple Regions around the world, and this footprint is constantly increasing. For the latest updates, see [AWS Global Infrastructure](https://aws.amazon.com/about-aws/global-infrastructure/). If you need your SAP data to reside in multiple Regions at any given point to ensure increased availability and minimal downtime in the event of a failure, opt for multi-Region architecture patterns.

When deploying a multi-Region pattern, you can benefit from an automated approach, such as a cluster solution, for failover between Availability Zones to minimize overall downtime and remove the need for human intervention. Multi-Region patterns provide not only high availability but also disaster recovery, thereby lowering overall costs. The distance between the chosen Regions has a direct impact on latency, which must be factored into the overall design of SAP HANA System Replication.

There are additional cost implications from cross-Region replication or data transfer that also need to be factored into the overall solution pricing. The pricing varies between Regions.

The following are the four multi-Region architecture patterns.

**Topics**
+ [Pattern 5: Primary Region with two Availability Zones for production and secondary Region with a replica of backups/AMIs](#hana-ops-patterns-pattern5)
+ [Pattern 6: Primary Region with two Availability Zones for production and secondary Region with compute and storage capacity deployed in a single Availability Zone](#hana-ops-patterns-pattern6)
+ [Pattern 7: Primary Region with two Availability Zones for production and a secondary Region with compute and storage capacity deployed, and data replication across two Availability Zones](#hana-ops-patterns-pattern7)
+ [Pattern 8: Primary Region with one Availability Zone for production and a secondary Region with a replica of backups/AMIs](#hana-ops-patterns-pattern8)
+ [Summary](#hana-ops-patterns-summary)

## Pattern 5: Primary Region with two Availability Zones for production and secondary Region with a replica of backups/AMIs
<a name="hana-ops-patterns-pattern5"></a>

This pattern is similar to pattern 1, where your SAP HANA instance is highly available. You deploy your production SAP HANA instance across two Availability Zones in the primary Region using synchronous SAP HANA System Replication. You can restore your SAP HANA instance in a secondary Region from a replica of the backups stored in Amazon S3, Amazon EBS snapshots, and Amazon Machine Images (AMIs).

With cross-Region replication of files stored in Amazon S3, the data stored in a bucket is automatically (asynchronously) copied to the target Region. Amazon EBS snapshots can be copied between Regions. For more information, see [Copy an Amazon EBS snapshot](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-copy-snapshot.html). You can copy an AMI within or across Regions using AWS CLI, AWS Management Console, AWS SDKs or Amazon EC2 APIs. For more information, see [Copy an AMI](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html). You can also use AWS Backup to schedule and run snapshots and replications across Regions.
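As a sketch, the snapshot and AMI copies described above can be driven from the AWS CLI; the Regions and resource IDs below are placeholders:

```shell
# Copy an EBS snapshot from us-east-1 into eu-west-1 (run against the destination Region)
aws ec2 copy-snapshot \
  --region eu-west-1 \
  --source-region us-east-1 \
  --source-snapshot-id snap-0abc12345example \
  --description "SAP HANA DR snapshot copy"

# Copy the SAP HANA AMI into the secondary Region
aws ec2 copy-image \
  --region eu-west-1 \
  --source-region us-east-1 \
  --source-image-id ami-0abc12345example \
  --name "sap-hana-prod-dr"
```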

In the event of a complete Region failure, the production SAP HANA instance needs to be rebuilt in the secondary Region using an AMI. You can use AWS CloudFormation templates to automate the launch of a new SAP HANA instance. Once your instance is launched, you can download the last set of backups from Amazon S3 to restore your SAP HANA instance to a point in time before the disaster event. You can also use AWS Backint agent to restore and recover your SAP HANA instance, and redirect your client traffic to the new instance in the secondary Region.

This architecture provides you with the advantage of implementing your SAP HANA instance across multiple Availability Zones with the ability to fail over instantly in the event of a failure. For disaster recovery outside the primary Region, the recovery point objective is constrained by how often you store your SAP HANA backup files in your Amazon S3 bucket and the time it takes to replicate your Amazon S3 bucket to the target Region. You can use Amazon S3 Replication Time Control for time-bound replication. For more information, see [Enabling Amazon S3 Replication Time Control](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-time-control.html#enabling-replication-time-control).

Your recovery time objective depends on the time it takes to build the system in the secondary Region and to restore operations from backup files. The amount of time varies depending on the size of the database. Also, acquiring the compute capacity for restore procedures may take longer in the absence of reserved instance capacity. This pattern is suitable when you need the lowest possible recovery time and point objectives within a Region and can accept higher recovery point and time objectives for disaster recovery outside the primary Region.

![\[Diagram of Pattern 5: Primary Region with two Availability Zones for production and secondary Region with a replica of backups/AMIs.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/pattern5.png)


## Pattern 6: Primary Region with two Availability Zones for production and secondary Region with compute and storage capacity deployed in a single Availability Zone
<a name="hana-ops-patterns-pattern6"></a>

In addition to the architecture of pattern 5, this pattern sets up asynchronous SAP HANA System Replication between the SAP HANA instance in the primary Region and an identical third instance in one of the Availability Zones in the secondary Region. We recommend using the asynchronous mode of SAP HANA System Replication when replicating between AWS Regions because of the increased latency.

In the event of a failure in the primary Region, production workloads are failed over to the secondary Region manually. This pattern ensures that your SAP systems are highly available and disaster-tolerant. It provides a quicker failover and continuity of business operations with continuous data replication.

There is an increased cost of deploying the required compute and storage for the production SAP HANA instance in the secondary Region and of data transfers between Regions. This pattern is suitable when you require disaster recovery outside of the primary Region with low recovery point and time objectives.

This pattern can be deployed in a multi-tier as well as multi-target replication configuration.

The following diagram shows a multi-target replication where the primary SAP HANA instance is replicated on both Availability Zones within the same Region and also in the secondary Region.

![\[Diagram of Pattern 6: Primary Region with two Availability Zones for production and secondary Region with compute and storage capacity deployed in a single Availability Zone.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/pattern6.1.png)


The following diagram shows a multi-tier replication where the replication is configured in a chained fashion.

![\[Diagram of a multi-tier replication where the replication is configured in a chained fashion.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/pattern6.2.png)


## Pattern 7: Primary Region with two Availability Zones for production and a secondary Region with compute and storage capacity deployed, and data replication across two Availability Zones
<a name="hana-ops-patterns-pattern7"></a>

In this pattern, two sets of two-tier SAP HANA System Replication are deployed across two AWS Regions. Two-tier SAP HANA System Replication is configured across two Availability Zones within the primary Region, and replication outside the primary Region is configured using SAP HANA multi-target system replication. This setup can be extended with a high availability cluster solution for automatic failover capability in the primary Region. For more information, see [SAP HANA Multi-target System Replication](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/ba457510958241889a459e606bbcf3d3.html?version=2.0.04).

This pattern provides protection against failures in Availability Zones and Regions. However, a cross-Region takeover of the SAP HANA instance requires manual intervention. After a failover to the secondary Region, the SAP HANA instance continues to have SAP HANA System Replication up and running in the new Region without any manual intervention. This setup is applicable if you are looking for the highest application availability at all times and for disaster recovery outside the primary Region with the lowest possible recovery point and time objectives. This pattern can withstand the extremely rare failure of three Availability Zones spread across multiple Regions.

This pattern is highly suitable if you operate active/active (read-only) SAP HANA instances in the primary Region and plan to continue the same SAP HANA System Replication configuration with read-only capability. If you are looking for read-only capability across two Regions along with an existing read-only instance within the Region, you can configure multiple secondary systems supporting an active/active (read-only) configuration. However, only one of the systems can be accessed through hint-based statement routing; the others must be accessed through a direct connection.

With this pattern, the redundant compute and storage capacity deployed across two Availability Zones in two Regions and the cross-Region communication add to the total cost of ownership.

![\[Diagram of Pattern 7: Primary Region with two Availability Zones for production and a secondary Region with compute and storage capacity deployed, and data replication across two Availability Zones.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/pattern7.png)


## Pattern 8: Primary Region with one Availability Zone for production and a secondary Region with a replica of backups/AMIs
<a name="hana-ops-patterns-pattern8"></a>

This pattern is similar to pattern 4, with additional disaster recovery provided by a secondary Region containing replicas of the SAP HANA backups stored in Amazon S3, Amazon EBS snapshots, and AMIs. In this pattern, the SAP HANA instance is deployed as a standalone installation in one Availability Zone in the primary Region, with no target SAP HANA systems to replicate data to.

With this pattern, your SAP HANA instance is not highly available. In the event of a complete Region failure, the production SAP HANA instance needs to be rebuilt in the secondary Region using an AMI. You can use AWS CloudFormation templates to automate the launch of a new SAP HANA instance. Once your instance is launched, you can download the last set of backups from Amazon S3 to restore your SAP HANA instance to a point in time before the disaster event. You can also use AWS Backint agent to restore and recover your SAP HANA instance, and redirect your client traffic to the new instance in the secondary Region.

For disaster recovery outside the primary Region, the recovery point objective is constrained by how often you store your SAP HANA backup files in your Amazon S3 bucket and the time it takes to replicate your Amazon S3 bucket to the target Region. Your recovery time objective depends on the time it takes to build the system in the secondary Region and to restore operations from backup files. The amount of time varies depending on the size of the database. This pattern is suitable for non-production or non-critical production systems that can tolerate the downtime required to restore normal operations.

![\[Diagram of Pattern 8: Primary Region with one Availability Zone for production and a secondary Region with a replica of backups/AMIs.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/pattern8.png)


## Summary
<a name="hana-ops-patterns-summary"></a>

We highly recommend operating business-critical SAP HANA instances across two Availability Zones. You can use a third-party cluster solution, such as Pacemaker, along with SAP HANA System Replication to ensure a highly available setup.

A high availability setup with a third-party cluster solution adds to the licensing cost, but is still recommended because it provides a highly resilient architecture with near-zero recovery time and point objectives.

# High availability and disaster recovery
<a name="hana-ops-ha-dr"></a>

 AWS provides multiple options for performing disaster recovery and making your SAP HANA systems highly available. This section provides information about these solutions. It also covers the support on the AWS platform for native SAP HANA recovery features provided by SAP.

**Topics**
+ [Amazon EC2 recovery options](#ec2-recovery-hana-hadr)
+ [SAP HANA service auto-restart](#hana-restart-hana-hadr)
+ [SAP HANA backup/restore](#hana-backup-hana-hadr)
+ [AWS Backint Agent for SAP HANA](#backint-hana-hadr)
+ [Amazon EBS snapshots](#ebs-hana-hadr)
+ [Cluster solutions](#cluster-hana-hadr)
+ [Pacemaker cluster](#pacemaker-hana-hadr)
+ [AWS Launch Wizard for SAP](#lwsap-hana-hadr)
+ [AWS Application Migration Service and AWS Elastic Disaster Recovery](#mgn-drs-hana-hadr)
+ [SAP HANA system replication](hana-ops-ha-dr-hsr.md)
+ [Testing SAP HANA high availability deployments](hana-ops-ha-dr-testing.md)
+ [Troubleshoot high availability SAP HANA deployments](hana-ops-ha-dr-troubleshoot.md)

## Amazon EC2 recovery options
<a name="ec2-recovery-hana-hadr"></a>

You can recover your SAP HANA databases running on Amazon EC2 instances with the following recovery options.

+ The default configuration of an Amazon EC2 instance enables automatic recovery of a supported instance due to hardware failure or a problem requiring the involvement of AWS. Automatic recovery of your Amazon EC2 instance increases the resiliency of your SAP workload. For more information, see [Simplified automatic recovery based on instance configuration](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-recover.html#instance-configuration-recovery).
+ You can create a `StatusCheckFailed_System` CloudWatch alarm to monitor your Amazon EC2 instance. The system status check can fail for the following reasons:
  + Loss of network connectivity
  + Loss of system power
  + Software issues on the physical host
  + Hardware issues on the physical host that impact network reachability
When the CloudWatch alarm detects this failure, a recover action is initiated. A recovered instance is identical to the original instance, including the instance ID, private IP addresses, Elastic IP addresses, and all instance metadata. For more information, see [Amazon CloudWatch action based recovery](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-recover.html#cloudwatch-recovery).  
**Tip**  
When you create the `StatusCheckFailed_System` CloudWatch alarm using the AWS Management Console, associate it with Amazon SNS to receive email notifications. Alternatively, you can set up Amazon SNS notifications after creating the alarm. For more information, see [Setting up Amazon SNS notifications](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/US_SetupSNS.html).
+ Dedicated host auto recovery restarts your instances on to a new replacement host when there is a Dedicated Host failure due to system power or network connectivity events. For more information, see [Host recovery](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-hosts-recovery.html).
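The `StatusCheckFailed_System` alarm with a recover action can also be created from the AWS CLI; the instance ID, Region, and SNS topic ARN below are placeholders:

```shell
# Recover the instance (and notify an SNS topic) after two failed system status checks
aws cloudwatch put-metric-alarm \
  --alarm-name "hana-ec2-system-check-recover" \
  --namespace AWS/EC2 \
  --metric-name StatusCheckFailed_System \
  --dimensions Name=InstanceId,Value=i-0abc12345example \
  --statistic Maximum \
  --period 60 \
  --evaluation-periods 2 \
  --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions "arn:aws:automate:us-east-1:ec2:recover" \
                  "arn:aws:sns:us-east-1:111122223333:ops-alerts"
```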

We recommend configuring your Amazon EC2 instances (except instances that are part of a third-party cluster solution) and Dedicated Hosts with automatic recovery to protect against hardware failure. The following diagram illustrates Amazon EC2 recovery options.

![\[Image showing recovery options for SAP HANA databases running on Amazon EC2.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/ec2automaticrecovery.png)


## SAP HANA service auto-restart
<a name="hana-restart-hana-hadr"></a>

SAP HANA service auto-restart is a fault recovery solution provided by SAP. SAP HANA has many configured services running at all times for various activities. When any of these services stops due to a software failure or human error, the SAP HANA service auto-restart watchdog function restarts it automatically. When the service restarts, it loads all the necessary data back into memory and resumes operation. SAP HANA service auto-restart works the same way on AWS as it does on any other platform. Using SAP HANA service auto-restart along with [Amazon EC2 recovery options](#ec2-recovery-hana-hadr) provides a robust recovery solution.

## SAP HANA backup/restore
<a name="hana-backup-hana-hadr"></a>

Although SAP HANA is an in-memory database, it persists all changes in persistent storage to recover and resume from any failures, such as power outages. If the persistent storage is damaged or any logical errors occur, SAP HANA backups are required to restore the database. The SAP HANA database backup files can be regularly backed up to a remote location for disaster recovery purposes. SAP HANA backup/restore works the same way on AWS as it does on any other platform. For more information, see [SAP HANA Administration Guide](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/330e5550b09d4f0f8b6cceb14a64cd22.html).

## AWS Backint Agent for SAP HANA
<a name="backint-hana-hadr"></a>

AWS Backint Agent for SAP HANA (AWS Backint agent) is an SAP-certified backup and restore application for SAP HANA workloads running on Amazon EC2 instances in the cloud. AWS Backint agent runs as a standalone application that integrates with your existing workflows to back up your SAP HANA database to Amazon S3 and to restore it using SAP HANA Cockpit, SAP HANA Studio, and SQL commands. AWS Backint agent supports full, incremental, and differential backup of SAP HANA databases. Additionally, you can back up log files and catalogs to Amazon S3. For more information, see [AWS Backint Agent for SAP HANA](https://docs.aws.amazon.com/sap/latest/sap-hana/aws-backint-agent-sap-hana.html).

**Topics**
+ [Example scenario](#example-backint-hana-hadr)
+ [Time to back up](#time-backint-hana-hadr)
+ [Recovery time and point objectives](#rto-rpo-backint-hana-hadr)

### Example scenario
<a name="example-backint-hana-hadr"></a>

 AWS Backint Agent for SAP HANA enables you to make your SAP HANA systems on AWS highly available and ready for disaster recovery. See the following example scenario to learn more.

1. Run your SAP HANA system on Amazon EC2 in Availability Zone 1.

1. Set up the `StatusCheckFailed_System` CloudWatch alarm to automatically recover your Amazon EC2 instance if the system check fails.

   1. Your instance is recovered within the same Availability Zone.

   1. You may not be able to access the instance when the Availability Zone becomes unavailable.

1. Launch a new Amazon EC2 instance using an AWS CloudFormation template in Availability Zone 2. For more information, see [Launch an instance from a launch template](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-templates.html).

1. Restore your SAP HANA database from Amazon S3 with AWS Backint agent. For more information, see [Back up and restore your SAP HANA system with AWS Backint Agent for SAP HANA](https://docs.aws.amazon.com/sap/latest/sap-hana/aws-backint-agent-backup-restore.html).

1. Redirect your client traffic to the new SAP HANA system on Amazon EC2 when it is operational.  
![\[Backup and restore using Backint Agent for SAP HANA\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/backinthanahadr.png)
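Step 2 of the scenario above can be sketched with the CloudWatch API. This is a minimal sketch, not the guide's prescribed implementation: the helper only assembles the `PutMetricAlarm` parameters, the instance ID and Region are placeholders, and the commented-out lines show how the request might be submitted with boto3.

```python
def recovery_alarm_params(instance_id, region):
    """Parameters for a CloudWatch alarm that triggers EC2 automatic
    recovery when StatusCheckFailed_System is raised. All values here
    (name, periods, threshold) are illustrative choices."""
    return {
        "AlarmName": f"ec2-autorecover-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "StatusCheckFailed_System",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Maximum",
        "Period": 60,
        "EvaluationPeriods": 2,
        "Threshold": 1.0,
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        # The special "automate" ARN invokes the built-in recover action.
        "AlarmActions": [f"arn:aws:automate:{region}:ec2:recover"],
    }

# To create the alarm (requires AWS credentials and boto3):
# import boto3
# boto3.client("cloudwatch", region_name="us-east-1").put_metric_alarm(
#     **recovery_alarm_params("i-0123456789abcdef0", "us-east-1"))
```

Because the recover action is a built-in alarm action, no Lambda function or custom automation is needed for this step.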

In this scenario, you avoid the cost of a standby node. Using AWS multi-Availability Zone infrastructure and backup/restore with AWS Backint Agent for SAP HANA, you can quickly resume operations and significantly reduce downtime costs.

Because the recovery procedure involves several steps, this model is suitable for a longer recovery time objective and a recovery point objective greater than zero. Your recovery point objective depends on how frequently you store your SAP HANA backup files in Amazon S3.

You can lower your recovery point objective by using AWS Backint agent to store your SAP HANA system backups in Amazon S3. Additionally, you can quickly restore from the backup files in Amazon S3 without creating custom scripts to manually copy your SAP HANA backup files to and from Amazon S3.

### Time to back up
<a name="time-backint-hana-hadr"></a>

The time taken to back up and restore your SAP HANA database on Amazon EC2 with AWS Backint agent depends on the configuration of your system, including the Amazon EC2 instance type, Amazon EBS volume type, and database size. The following key variables impact the time taken to back up and restore your SAP HANA system.
+ Storage throughput of the underlying Amazon EBS volume supporting the SAP HANA database
+ Network throughput supporting the communication channel with Amazon S3
+ Available CPU resources on the instance type

### Recovery time and point objectives
<a name="rto-rpo-backint-hana-hadr"></a>

We recommend performing various tests to identify the system configuration that suits your business recovery time and point objectives. AWS Backint Agent for SAP HANA maximizes the available throughput by processing backup and restore operations in parallel. The recovery time objective is optimized for any given system configuration. For example, with an SAP HANA scale-up node on r5.2xlarge, AWS Backint agent was able to upload 551 GB of data in 4 minutes and 15 seconds, achieving an overall throughput of 2.16 GB/s. Similarly, for a four-node SAP HANA scale-out cluster running on u-6tb1.metal instances, AWS Backint agent was able to upload 22.86 TB of data in 23 minutes, achieving an overall throughput of 16.8 GB/s.
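The overall throughput figures quoted above are simply payload size divided by wall-clock time, as this small helper illustrates:

```python
def overall_throughput_gb_s(size_gb, minutes, seconds=0):
    """Overall backup throughput in GB/s: payload size over elapsed time."""
    return size_gb / (minutes * 60 + seconds)

# Scale-up example: 551 GB in 4 minutes 15 seconds
print(round(overall_throughput_gb_s(551, 4, 15), 2))  # 2.16
```

You can use the same arithmetic on your own test runs to compare configurations against your recovery time objective.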

Based on our testing, the time taken for restore operations using AWS Backint agent is normally 1.5 to 2 times the backup time. For more information, see [Performance tuning](https://docs.aws.amazon.com/sap/latest/sap-hana/aws-backint-agent-installing-configuring.html#aws-backint-agent-performance-tuning).

## Amazon EBS snapshots
<a name="ebs-hana-hadr"></a>

You can back up your data on Amazon EBS volumes to Amazon S3 by taking point-in-time snapshots. Snapshots provide a fast backup process, regardless of the database size. They are stored in Amazon S3 and replicated across Availability Zones automatically.

Amazon EBS snapshots are incremental by default; only the blocks that have changed since the last snapshot are stored. Snapshots are also crash consistent: they contain the blocks of completed I/O operations. You can copy snapshots across AWS Regions or share them with other AWS accounts. You can restore Amazon EBS volumes from a snapshot, or create a new volume from a snapshot in the same or a different Availability Zone, and launch Amazon EC2 instances. Amazon EBS snapshots provide a simple and secure data protection solution that is designed to protect your block storage data, such as Amazon EBS volumes, boot volumes, and on-premises block data. For more information, see [Amazon EBS snapshots](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html).

Amazon EBS snapshots can also be used to enable disaster recovery and to migrate data across AWS Regions and accounts. Amazon EBS fast snapshot restore enables you to create a volume from a snapshot that is fully initialized at creation. This eliminates the latency of I/O operations on a block when it is accessed for the first time. Volumes that are created using fast snapshot restore instantly deliver all of their provisioned performance. Amazon EBS fast snapshot restore can be enabled on a snapshot while it is being created. It helps you achieve a low recovery time objective. For more information, see [Amazon EBS fast snapshot restore](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-fast-snapshot-restore.html).
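As a sketch, enabling fast snapshot restore is a single API call. The helper below only assembles the request for `ec2:EnableFastSnapshotRestores`; the snapshot ID and Availability Zones are placeholders, and the commented-out lines show how the request might be submitted with boto3.

```python
def fsr_request(snapshot_id, availability_zones):
    """Request body for ec2:EnableFastSnapshotRestores. Once enabled,
    volumes created from this snapshot deliver full provisioned
    performance immediately."""
    return {
        "SourceSnapshotIds": [snapshot_id],
        "AvailabilityZones": list(availability_zones),
    }

# import boto3
# boto3.client("ec2").enable_fast_snapshot_restores(
#     **fsr_request("snap-0123456789abcdef0", ["us-east-1a", "us-east-1b"]))
```

Note that fast snapshot restore is billed per enabled snapshot per Availability Zone, so enable it only where your recovery plan needs it.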

## Cluster solutions
<a name="cluster-hana-hadr"></a>

SAP HANA workloads on AWS are configured in a highly available and fault tolerant manner at the infrastructure layer. A failure still needs to be managed at the SAP HANA database layer. If a failure is detected at the hardware or software level, you can perform a manual failover with SAP HANA cockpit, SAP HANA studio, or the `hdbnsutil` command line tool. These manual processes can affect the availability of your business processes.

You can also use the Python-based API included with SAP HANA to create your own high availability and disaster recovery providers, or hooks. You can then integrate these hooks with the SAP HANA system replication takeover process to automate tasks such as restarting the primary node, IP redirection, DNS redirection, and shutdown of dev/QA systems on the secondary node. For more information, see [Implementing a HA/DR Provider](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/1367c8fdefaa4808a7485b09815ae0f3.html).
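A minimal sketch of such a hook, assuming the class and method names of SAP's HA/DR provider API (`hdb_ha_dr.client.HADRBase` with a `postTakeover` method). The stub base class only lets the sketch run outside an SAP HANA host; on a real system the hook must also be registered in `global.ini` before SAP HANA will call it.

```python
try:
    # On an SAP HANA host, the base class is shipped by SAP.
    from hdb_ha_dr.client import HADRBase
except ImportError:
    class HADRBase:  # illustration-only stand-in
        def __init__(self, *args, **kwargs):
            pass

class AwsFailoverHook(HADRBase):
    """Hypothetical hook invoked by SAP HANA around a takeover."""

    def postTakeover(self, rc, **kwargs):
        # rc == 0 indicates the takeover completed on this node; here you
        # could repoint an overlay IP or update DNS (logic elided).
        return 0
```

Treat this as a scaffold only; the exact method set and return-code contract are defined in SAP's HA/DR provider documentation linked above.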

Based on the operating system of your SAP HANA database, you can implement a third-party high availability cluster solution. It can reduce downtime and automate failover steps. The following solutions include a pacemaker framework along with SAP HANA hooks that are certified by SAP and supported on AWS.
+ SUSE Linux Enterprise Server (SLES) High Availability Extension (HAE)
+ Red Hat Enterprise Linux (RHEL) for SAP high availability

For more information, see [SAP HANA on AWS: High Availability Configuration Guide for SLES and RHEL](https://docs.aws.amazon.com/sap/latest/sap-hana/sap-hana-on-aws-ha-configuration.html).

## Pacemaker cluster
<a name="pacemaker-hana-hadr"></a>

The SAP HANA high availability solution based on SAP HANA system replication automates failover between the primary and secondary SAP HANA instances. The primary and secondary instances are configured together as a pacemaker cluster. The clustering software runs at the operating system layer and is integrated with the SAP HANA database using SAP HANA hooks. The clustering software detects failures and automates the failover. The recovery time can be minutes or less. For more information, see [SAP HANA system replication](https://docs.aws.amazon.com/sap/latest/sap-hana/hana-ops-ha-dr-hsr.html).

The SAPHanaSR and SAPHanaSR-ScaleOut solutions from SUSE are based on pacemaker and corosync. These solutions, along with dedicated resource agents for SAP HANA, are released as part of SLES for SAP Applications. For more information on how to set up a high availability cluster on SLES for SAP Applications on AWS, see [High availability cluster configuration on SLES](https://docs.aws.amazon.com/sap/latest/sap-hana/sap-hana-on-aws-ha-cluster-configuration-on-sles.html).

The high availability solution from RHEL also provides a pacemaker cluster framework and the resource agents required to automate the failover process of SAP HANA system replication. For more information on how to set up a high availability cluster on RHEL on AWS, see [High availability cluster configuration on RHEL](https://docs.aws.amazon.com/sap/latest/sap-hana/sap-hana-on-aws-ha-cluster-configuration-on-rhel.html). The following resources are available from Red Hat.
+  [Configuring SAP HANA Scale-Up System Replication with the RHEL HA Add-On on Amazon Web Services (AWS)](https://access.redhat.com/articles/3569621) 
+  [Configuring SAP HANA Scale-Out System Replication with the RHEL HA Add-On on Amazon Web Services (AWS)](https://access.redhat.com/articles/6093611) 

For automated deployment of SAP HANA system replication using AWS Launch Wizard for SAP, see [AWS Launch Wizard for SAP](#lwsap-hana-hadr).

The pacemaker cluster uses a virtual IP address to connect to the master SAP HANA instance. The virtual IP address is migrated to the secondary instance during failover, and the secondary instance is then promoted to active primary so that traffic is redirected to it. An overlay IP address is used for the networking configuration on AWS. It is a virtual IP address configured to point to the master SAP HANA instance, whether it runs on the primary node or the secondary node. You can configure overlay IP routing with AWS Transit Gateway or Network Load Balancer. For more information, see [SAP on AWS High Availability with Overlay IP Address Routing](https://docs.aws.amazon.com/sap/latest/sap-hana/sap-ha-overlay-ip.html).
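At the VPC level, repointing an overlay IP is commonly done by updating a route table entry so the overlay /32 targets the active node. The helper below only assembles an `ec2:ReplaceRoute` request as a sketch; all IDs and the IP address are placeholders, and in a real cluster the resource agent performs this call for you.

```python
def overlay_ip_route(route_table_id, overlay_ip, instance_id):
    """Request body for ec2:ReplaceRoute, pointing the overlay IP
    (a /32 outside the VPC CIDR) at the active SAP HANA node."""
    return {
        "RouteTableId": route_table_id,
        "DestinationCidrBlock": f"{overlay_ip}/32",
        "InstanceId": instance_id,
    }

# import boto3
# boto3.client("ec2").replace_route(
#     **overlay_ip_route("rtb-0123456789abcdef0",
#                        "192.168.10.5", "i-0123456789abcdef0"))
```

Because the overlay IP lies outside the VPC CIDR, clients reach it only through this route entry, which is what makes the redirect near-instant during failover.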

## AWS Launch Wizard for SAP
<a name="lwsap-hana-hadr"></a>

 AWS Launch Wizard for SAP offers guided deployment for production-ready applications on AWS with resource sizing, customizable deployments, application configuration, and cost estimation. These tools eliminate the complexity of high availability deployments. For more information, see [AWS Launch Wizard for SAP](https://docs.aws.amazon.com/launchwizard/latest/userguide/launch-wizard-sap.html).

 AWS Launch Wizard for SAP fast-tracks your SAP HANA deployments on AWS. It requires minimal manual intervention. The following high availability automated deployment patterns for SAP HANA are supported by AWS Launch Wizard.
+  **Cross-AZ SAP HANA database high availability setup**: Deploy SAP HANA with high availability configured across two Availability Zones.
+  **Cross-AZ SAP NetWeaver system setup**: Deploy Amazon EC2 instances for ASCS/ERS and SAP HANA databases across two Availability Zones, and spread the deployment of application servers across them.
+  **SUSE/RHEL cluster setup**: For SAP HANA and NetWeaver on HANA high availability deployments, Launch Wizard for SAP configures SUSE/RHEL clustering when you provide the SAP software and specify the deployment of SAP database or application software. Clustering is enabled between the ASCS and ERS nodes, and for SAP HANA databases across two Availability Zones. See the following diagram.  
![\[Example high availability configuration for SAP HANA across two Availability Zones.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/lwsaphanahadr.png)
**Note**  
We strongly recommend that you validate the setup of your environment before using the high availability cluster for deployment. Run tests before deploying an application on your SAP HANA instance set up by Launch Wizard. The tests can ensure that failover and fail-back operations are working properly.

The following table summarizes the deployment patterns supported by AWS Launch Wizard for SAP.


| Deployment pattern | Support | 
| --- | --- | 
|  SAP HANA database on a single Amazon EC2 instance  |  Yes  | 
|  SAP NetWeaver on SAP HANA system on a single Amazon EC2 instance  |  Yes  | 
|  SAP HANA database on multiple Amazon EC2 instances  |  Yes  | 
|  SAP NetWeaver system on multiple Amazon EC2 instances  |  Yes  | 
|  Cross-Availability Zone SAP HANA database high availability setup  |  Yes  | 
|  Cross-Availability Zone SAP NetWeaver system setup  |  Yes  | 
|  SUSE/RHEL cluster setup  |  Yes  | 

For more information, see [Supported deployments and features of AWS Launch Wizard](https://docs.aws.amazon.com/launchwizard/latest/userguide/launch-wizard-sap-deployments.html).

## AWS Application Migration Service and AWS Elastic Disaster Recovery
<a name="mgn-drs-hana-hadr"></a>

We recommend using AWS Application Migration Service to migrate your SAP HANA databases to AWS. For more information, see [What is AWS Application Migration Service](https://docs.aws.amazon.com/mgn/latest/ug/what-is-application-migration-service.html)?

For disaster recovery, we recommend using AWS Elastic Disaster Recovery. It uses block level replication to continuously replicate data from source to target. It helps reduce the infrastructure costs and total cost of ownership. It provides sub-second recovery point objective and recovery time objective of minutes. For more information, see [What is AWS Elastic Disaster Recovery](https://docs.aws.amazon.com/drs/latest/userguide/what-is-drs.html)?

 *CloudEndure, an AWS company, also provides migration and disaster recovery services. CloudEndure Disaster Recovery is a business continuity offering that can be used for SAP and non-SAP workloads.* 

# SAP HANA system replication
<a name="hana-ops-ha-dr-hsr"></a>

SAP HANA system replication is a high availability solution provided by SAP for SAP HANA. It is used to reduce SAP HANA outages due to planned maintenance, faults, and disasters. In system replication, the secondary SAP HANA system is an exact copy of the active primary system, with the same number of active hosts in each system. Each service in the primary system communicates with its counterpart in the secondary system, which operates in live replication mode to replicate and persist data and logs, and typically preloads data into memory. SAP HANA system replication is fully supported on AWS.

**Topics**
+ [Architecture patterns](#hsr-patterns)
+ [Replication and operation modes](#hsr-modes)
+ [Configuration scenarios](#hsr-configuration-scenarios)
+ [Takeover considerations](#hsr-takeover)

## Architecture patterns
<a name="hsr-patterns"></a>

 AWS isolates facilities geographically, in Regions and Availability Zones. A multi-Availability Zone architecture reduces the risk of location failure while maintaining performance.

With the single-Region, multi-Availability Zone pattern, the secondary system can be installed in a different Availability Zone in the same AWS Region as the primary system. This provides a rapid failover solution for planned downtime, storage corruption, or other local faults.

For disaster recovery, you can use a multi-Region architecture pattern where the secondary system is installed in a different AWS Region. You can choose the Region based on your business requirements, such as data residency limitations for compliance.

For more information, see [Architecture patterns for SAP HANA on AWS](https://docs.aws.amazon.com/sap/latest/sap-hana/hana-ops-patterns.html).

## Replication and operation modes
<a name="hsr-modes"></a>

SAP HANA system replication offers the following replication and operation modes that are fully supported on AWS.

 **Replication modes** 

Different replication mode options for the replication of redo logs, including synchronous on disk, synchronous in-memory, and asynchronous, can be used depending on your recovery time and point objectives. Synchronous SAP HANA system replication is recommended for multi-Availability Zone deployments, ensuring near zero recovery point objectives. AWS provides low latency and high bandwidth connectivity between the different Availability Zones within a Region.

Asynchronous replication is recommended for system replication across AWS Regions. You can select a multi-Region architecture pattern if your business requirements are not impacted by the potential network latency. You must also factor in the cost of AWS services in different Regions and cross-Region data transfer.

 **Operation modes** 

Different operation modes can be used while registering the secondary SAP HANA system, such as `delta_datashipping`, `logreplay`, or `logreplay_readaccess`. The database sends different types of data packages to the secondary system accordingly.
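For illustration, both the replication mode and the operation mode are chosen when you register the secondary with `hdbnsutil -sr_register` (run as the `<sid>adm` user on the secondary). The helper below only assembles the command line as a sketch; the site and host names are placeholders, and the flags follow SAP's documented syntax.

```python
def sr_register_cmd(site_name, remote_host, remote_instance="00",
                    replication_mode="sync", operation_mode="logreplay"):
    """Assemble an hdbnsutil command that registers this host as a
    secondary for SAP HANA system replication."""
    return [
        "hdbnsutil", "-sr_register",
        f"--name={site_name}",
        f"--remoteHost={remote_host}",
        f"--remoteInstance={remote_instance}",
        f"--replicationMode={replication_mode}",
        f"--operationMode={operation_mode}",
    ]

# import subprocess
# subprocess.run(sr_register_cmd("SiteB", "hana-primary"), check=True)
```

For a multi-Availability Zone pair you would typically keep `sync`; for a cross-Region tier you would pass `replication_mode="async"` as discussed above.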

## Configuration scenarios
<a name="hsr-configuration-scenarios"></a>

SAP HANA system replication offers the following configuration scenarios, all of which are fully supported on AWS.

**Topics**
+ [Active/Passive secondary system](#hsr-active-passive-secondary)
+ [Active/Active (read enabled) secondary system](#hsr-active-active-read-secondary)
+ [SAP HANA secondary time travel](#hsr-secondary-time-travel)
+ [SAP HANA replication scenarios in AWS](#hsr-hana-replication-aws)
+ [SAP HANA multi-tier replication](#hsr-multi-tier)
+ [SAP HANA multi-target replication](#hsr-multi-target)

### Active/Passive secondary system
<a name="hsr-active-passive-secondary"></a>

In this scenario, system replication does not allow read access or SQL querying on the secondary system until the active system is switched from the current primary to the secondary system by takeover. The secondary system acts as a hot standby with the `logreplay` operation mode.

### Active/Active (read enabled) secondary system
<a name="hsr-active-active-read-secondary"></a>

In this scenario, system replication supports read access on the secondary system. It requires the `logreplay_readaccess` operation mode.

### SAP HANA secondary time travel
<a name="hsr-secondary-time-travel"></a>

In this scenario, you can gain access to data that was deleted in the primary system, or intentionally delay the `logreplay` on the secondary system to read older data while replication continues. You can recover from logical errors and achieve a faster recovery. You can use the secondary time travel configuration only with the `logreplay` operation mode.

You must properly size the memory of the secondary time travel instance for replication. The minimum memory requirement with preload for the `logreplay` operation mode is the row store size plus the column store memory size plus 50 GB. For more information, see [SAP Note 1999880 - FAQ: SAP HANA System Replication](https://me.sap.com/notes/1999880). The following parameters are required for setup.
+ `global.ini/[system_replication]/timetravel_max_retention_time` must be configured on the secondary system. This parameter defines how far back in time the secondary system can be brought.
+ `global.ini/[system_replication]/timetravel_snapshot_creation_interval` is an optional parameter that adjusts how often the secondary system creates snapshots. Once set, the secondary system starts retaining logs and snapshots.
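Assuming the standard parameter names (`timetravel_max_retention_time` and `timetravel_snapshot_creation_interval` in the `[system_replication]` section), the relevant part of the secondary system's `global.ini` might look like the following. The values are purely illustrative; see SAP Note 1999880 for units and defaults.

```ini
[system_replication]
timetravel_max_retention_time = 1440
timetravel_snapshot_creation_interval = 60
```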

The following diagram shows the SAP HANA secondary time travel configuration scenario.

![\[Diagram of the SAP HANA secondary time travel configuration scenario.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/hsr-secondary-time-travel.png)


### SAP HANA replication scenarios in AWS
<a name="hsr-hana-replication-aws"></a>

In a two-tier SAP HANA system replication, the deployment on AWS can be optimized for performance or cost. For the fastest takeover time, use a secondary instance of the same size as the primary instance. This is a performance optimized deployment. A cost optimized deployment can reduce overall costs with a compromise on the recovery time objective. Cost optimized scenarios are also referred to as pilot light disaster recovery. For more information, see [Rapidly recover mission-critical systems in a disaster](https://aws.amazon.com/blogs/publicsector/rapidly-recover-mission-critical-systems-in-a-disaster/).

**Topics**
+ [Performance optimized](#hsr-performance-optimized)
+ [Cost optimized](#hsr-cost-optimized)

#### Performance optimized
<a name="hsr-performance-optimized"></a>

SAP HANA database systems that are critical to business continuity require a near-zero recovery time objective during planned and unplanned outages. You can optimize performance with a secondary instance of the same size as the primary. This configuration can accommodate preloaded column tables in memory and synchronous system replication. We do not recommend hosting your SAP HANA instances across AWS Regions in this setup, to avoid latency while replicating in synchronous mode. This deployment protects your critical SAP HANA systems against the failure of an Availability Zone, a rare occurrence.

You can set up a third-party cluster solution along with SAP HANA system replication to detect failure and automate failover. For more information, see [Pacemaker cluster](https://docs.aws.amazon.com/sap/latest/sap-hana/hana-ops-ha-dr.html#pacemaker-hana-hadr). The following diagram shows a performance optimized deployment.

![\[Diagram of a performance optimized deployment.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/hsr-performance-optimized.png)


#### Cost optimized
<a name="hsr-cost-optimized"></a>

You can reduce costs by using a smaller or shared secondary SAP HANA system. In the smaller secondary option, the infrastructure is initially sized smaller than the primary and resized before performing a takeover. In the shared secondary option, the unused memory on the secondary system is used by a non-production or sacrificial instance.

The `preload_column_tables` parameter is set to *false* for both the smaller and shared secondary options. You can find this parameter in the `global.ini` file located at `/hana/shared/<SID>/global/hdb/custom/config`. Setting the parameter to *false* enables the secondary system to operate with reduced memory. The default value of `preload_column_tables` is *true*.
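For reference, the corresponding entry in the cost optimized secondary's `global.ini` might look like the following (the section name is assumed per SAP's system replication parameters):

```ini
[system_replication]
preload_column_tables = false
```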

**Note**  
Before performing a takeover in a cost optimized deployment, you must set the `preload_column_tables` parameter to its default value of *true* and restart the SAP HANA system.

The size of your SAP HANA database impacts the time taken to load the column tables into main memory. This affects your overall recovery time objective. You can use SQL scripts to get a rough estimate of the minimum memory required for these tables. Refer to the *HANA_Tables_ColumnStore_Columns_LastTouchTime* section in [SAP Note 1969700 – SQL Statement Collection for SAP HANA](https://me.sap.com/notes/1969700) for more information.

 **Smaller secondary** 

The following diagram shows the deployment of a smaller secondary SAP HANA system in a different Availability Zone within the same AWS Region.

![\[Diagram of the deployment of a smaller secondary SAP HANA system in a different Availability Zones within the same Region.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/hsr-smaller-secondary.png)


This deployment is also possible across multiple AWS Regions. We recommend using the asynchronous mode when replicating across Regions. Note that when you resize the secondary system before a takeover, there is no reserved capacity; obtaining a production-sized instance is subject to the current availability in your Availability Zone.

 **Shared secondary** 

The multiple components one system (MCOS) model is a common use case of the shared secondary deployment option. You can operate an active quality assurance instance along with the secondary instance on the same host. This setup requires additional storage to operate the additional instances. During a takeover, the instance with lower priority can be shut down to make the underlying host resources available for production workloads.

You must set the `global_allocation_limit` for all instances running on the site. This ensures that no single instance occupies the entire memory available on the host, which an instance with `global_allocation_limit` left at `0` (unlimited) would otherwise be free to do. For more information, see [SAP Note 1681092 – Multiple SAP HANA systems (SIDs) on the same underlying server(s)](https://me.sap.com/notes/1681092).
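For example, a non-production instance sharing the host could be capped as follows in its `global.ini`. The section name follows SAP's memory manager parameters; the value is specified in MB and is purely illustrative:

```ini
[memorymanager]
global_allocation_limit = 65536
```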

The following diagram shows a shared secondary deployment on AWS.

![\[Diagram of a shared secondary deployment.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/hsr-shared-secondary.png)


 **Sizing considerations for cost optimized deployments** 

Despite disabling the preload of column tables, the actual memory usage on the secondary host is also dependent on the operation mode of system replication. For more information, see [SAP Note 1999880 - FAQ: SAP HANA System Replication](https://me.sap.com/notes/1999880).

Even when the `preload_column_tables` parameter is set to *false*, the `logreplay` operation mode still contributes to the memory size. You should consider the size of column store tables with data modified in the 30 days preceding the date of evaluation.

The `logreplay` operation mode may not provide true cost optimization. The `delta_datashipping` operation mode can be an alternative. However, `delta_datashipping` has limitations, including a higher recovery time and increased demand for network bandwidth between the replication sites. If your business requirements can accommodate higher network bandwidth and relaxed recovery times, the `delta_datashipping` mode can be a viable option.

The potential cost savings are higher with larger database instances. Even for smaller database instances, the memory footprint on the secondary system has a minimum requirement of row store memory and buffer space. Calculating the memory requirement and setting the `global_allocation_limit` accordingly is an iterative process. The column store demand for delta merges grows with the size of the production database. Therefore, memory allocations for all hosts on a site should be monitored periodically, and after mass data loads, go-lives, and SAP system-specific lifecycle events.

### SAP HANA multi-tier replication
<a name="hsr-multi-tier"></a>

This configuration scenario is suitable if you are looking for both high availability and disaster recovery. This setup provides a chained replication model where a primary system can replicate to only one secondary system at any given point in time. For more information, see [Setting Up SAP HANA Multi-tier System Replication](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/f730f308fede4040bcb5ccea6751e74d.html).

In this scenario, there can be a mix of performance and cost optimized deployment options. The primary and secondary systems can be deployed in a high availability setup using a pacemaker cluster. The tertiary or disaster recovery system can be a cost optimized deployment, where an active non-production instance runs on the same node in a multiple components one system installation model. This setup is shown in the following diagram.

![\[Diagram of an active non-production instance run on the same node\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/hsr-multi-tier.png)


### SAP HANA multi-target replication
<a name="hsr-multi-target"></a>

In the SAP HANA multi-tier scenario, replication happens sequentially: from the primary to the secondary system, and then from the secondary to the tertiary system. Starting with SAP HANA 2.0 SPS 03, SAP HANA provides a multi-target system replication configuration that enables a single primary system to replicate to multiple secondary systems. For more information, see [SAP HANA Multitarget System Replication](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/ba457510958241889a459e606bbcf3d3.html).

The following diagram shows a multi-target replication configuration on AWS.

![\[Diagram of a multi-tier target replication configuration.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/hsr-multi-target.png)


 **Replication mode** 

The primary, secondary, and tertiary systems can be placed in different Availability Zones within the same AWS Region or across AWS Regions. Of the replication modes supported by SAP, SAP HANA systems deployed across different AWS Regions must use the async replication mode due to latency. To see the replication modes supported by SAP, see [Supported Replication Modes between Sites](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/c3fe0a3c263c49dc9404143306455e16.html).

 **Operation mode** 

It is not possible to combine `logreplay` and `delta_datashipping` operation modes in a multi-tier or multi-target system replication. For example, if the primary and secondary systems use `logreplay` for system replication, then `delta_datashipping` cannot be used between the secondary and tertiary systems or vice-versa.

Only the `logreplay` operation mode is supported in a multi-target system replication scenario. To implement a high availability pacemaker cluster solution along with multi-target replication, check the relevant resources from SUSE and Red Hat.

The `logreplay_readaccess` operation mode is supported on an Active/Active (read enabled) configuration with multi-target system replication. However, in a multi-tier replication, only the secondary system can be used for read-only capability, and cannot be extended to the tertiary system.

 **Disaster recovery** 

Multi-target system replication offers automated re-registration of the secondary systems to a new primary source in case of a failure on the primary. You can enable this automation with the `register_secondaries_on_takeover` parameter. For more information, see [Disaster Recovery Scenarios for Multitarget System Replication](https://help.sap.com/docs/SAP_HANA_PLATFORM/4e9b18c116aa42fc84c7dbfd02111aba/8428f79ca32d4869848a1aefe437151c.html).

## Takeover considerations
<a name="hsr-takeover"></a>

When an SAP HANA system replication takeover is needed, you must trigger it on your secondary system by following the standard SAP HANA takeover process. If you have enabled automatic recovery, you must decide whether to wait for your system to recover in the primary Availability Zone before performing a takeover. For more information, see [SAP Note 2063657 - SAP HANA System Replication Takeover Decision Guideline](https://me.sap.com/notes/2063657).

**Topics**
+ [Client redirect options](#hsr-client-redirect)
+ [Client redirection for Active/Active high availability scenario](#hsr-ha-client-redirect)

### Client redirect options
<a name="hsr-client-redirect"></a>

In almost all scenarios, failover of the SAP HANA system alone does not guarantee business continuity. You must ensure that your client applications, such as the NetWeaver application server and JDBC or ODBC clients, are able to connect to the SAP HANA system after the failover. Connections can be reestablished by network-based IP or DNS redirection. IP redirection can be scripted and is typically faster than synchronizing changes to DNS entries over a global network. For more information, see the *Client Connection Recovery* section in the [SAP HANA Administration Guide](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/330e5550b09d4f0f8b6cceb14a64cd22.html).

#### DNS redirection
<a name="hsr-redirect-dns"></a>

For network-based DNS redirection, you update the host name record with the IP address of the secondary system. The DNS records must point to the active SAP HANA instance in its Availability Zone. You can use a script as part of the takeover to modify the DNS records, or you can change the DNS records manually.

Modifying DNS records typically requires a vendor-proprietary solution. With AWS, you can use Amazon Route 53 to automate the modification of DNS records with the AWS CLI or AWS API. For more information, see [Configuring Amazon Route 53 as your DNS service](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-configuring.html).
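
For illustration, a takeover script could upsert the DNS record with the AWS CLI. The hosted zone ID, host name, and IP address below are placeholder values, and the final call is only echoed so the script can be dry-run safely; remove the `echo` to apply the change:

```shell
# Hypothetical values -- replace with your hosted zone ID, SAP HANA
# host name, and the IP address of the new primary system.
ZONE_ID="Z0123456789EXAMPLE"
HANA_HOST="hanadb.example.internal"
NEW_IP="10.1.2.10"

# Build the UPSERT change batch that repoints the host name.
cat > /tmp/hana-dns-change.json <<EOF
{
  "Comment": "Repoint SAP HANA host name after takeover",
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "${HANA_HOST}",
      "Type": "A",
      "TTL": 60,
      "ResourceRecords": [{ "Value": "${NEW_IP}" }]
    }
  }]
}
EOF

# Dry run: remove 'echo' to apply the change with Route 53.
echo aws route53 change-resource-record-sets \
  --hosted-zone-id "${ZONE_ID}" \
  --change-batch file:///tmp/hana-dns-change.json
```

A low TTL (60 seconds here) keeps client caches from holding the old address for long after a takeover.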

![\[Diagram showing Amazon Route 53 automating the modification of DNS records.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/hsr-redirect-dns.png)


#### IP redirection
<a name="hsr-redirect-ip"></a>

With network-based IP redirection, a virtual IP address is assigned to the virtual host name. In case of a takeover, the virtual IP unbinds from the network adapter of the primary system and binds to the network adapter on the secondary system.

Your Amazon VPC setup includes assigning subnets to the primary and secondary nodes of the SAP HANA database. Each configured subnet has a classless inter-domain routing (CIDR) block from the Amazon VPC and resides entirely within one Availability Zone. The CIDR block cannot span multiple zones, and its addresses cannot be reassigned to the secondary instance in a different Availability Zone during a failover. For more information, see [How Amazon VPC works](https://docs.aws.amazon.com/vpc/latest/userguide/how-it-works.html).

**Topics**
+ [AWS Transit Gateway](#hsr-redirect-ip-gateway)
+ [Network Load Balancer](#hsr-redirect-ip-nlb)

##### AWS Transit Gateway
<a name="hsr-redirect-ip-gateway"></a>

With Transit Gateway, you use route table rules which allow the overlay IP address to communicate to the SAP instance without having to configure any additional components, like a Network Load Balancer or Route 53. You can connect to the overlay IP from another VPC, another subnet (not sharing the same route table where overlay IP address is maintained), over a VPN connection, or via an AWS Direct Connect connection from a corporate network. For more information, see [What is a Transit Gateway?](https://docs.aws.amazon.com/vpc/latest/tgw/what-is-transit-gateway.html) 
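
As a hedged sketch of the route table adjustment a takeover script performs, the AWS CLI call below repoints the overlay IP route at the elastic network interface of the new primary node. The route table, overlay IP, and ENI IDs are placeholders, and the command is composed into a variable and only echoed as a dry run:

```shell
# Hypothetical IDs -- substitute the VPC route table that carries the
# overlay IP route, the overlay IP (outside the VPC CIDR), and the
# elastic network interface of the new primary SAP HANA node.
ROUTE_TABLE_ID="rtb-0123456789abcdef0"
OVERLAY_IP="192.168.10.1/32"
TARGET_ENI="eni-0123456789abcdef0"

# On takeover, repoint the overlay IP route at the new primary.
CMD="aws ec2 replace-route --route-table-id ${ROUTE_TABLE_ID} \
  --destination-cidr-block ${OVERLAY_IP} \
  --network-interface-id ${TARGET_ENI}"

# Dry run: print the call instead of executing it.
echo "${CMD}"
```

In a Pacemaker-based setup, cluster resource agents typically issue an equivalent call automatically during failover.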

![\[Diagram showing using a Transit Gateway and route table rules to allow the overlay IP address to communicate to the SAP instance.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/hsr-redirect-ip-gateway.png)


##### Network Load Balancer
<a name="hsr-redirect-ip-nlb"></a>

If you do not use Amazon Route 53 or AWS Transit Gateway, you can use Network Load Balancer for accessing the overlay IP address externally. The Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model and can handle millions of requests per second. After the load balancer receives a connection request, it selects a target from the Network Load Balancer target group and routes the connection request to a destination address, which can be an overlay IP address. For more information, see [What is a Network Load Balancer?](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html) 

![\[Diagram showing using a Network Load Balancer for accessing the overlay IP address externally.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/hsr-redirect-ip-nlb.png)


### Client redirection for Active/Active high availability scenario
<a name="hsr-ha-client-redirect"></a>

In this configuration, you use an additional overlay IP address for your secondary read-only system. The IP address binds to the active secondary system as part of the cluster failover. The DNS records for the secondary system can be updated manually or by using a script during takeover.

An additional Network Load Balancer needs to be created for load balancing your secondary system.

With Transit Gateway, you use an additional overlay IP address that routes to the Amazon VPC and subnet where your secondary system runs.

**Topics**
+ [Active/Active scenario with DNS](#hsr-redirect-ha-dns)
+ [Active/Active scenario with AWS Transit Gateway](#hsr-redirect-ha-ip)
+ [Active/Active scenario with Network Load Balancer](#hsr-redirect-ha-gateway)

#### Active/Active scenario with DNS
<a name="hsr-redirect-ha-dns"></a>

In this scenario, you use two DNS records: one for the SAP HANA read/write primary instance and one for the SAP HANA read-only secondary instance. In case of failover, the modification of DNS records can be automated or performed manually.

![\[Diagram showing the Active/Active scenario with DNS.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/hsr-redirect-ha-dns.png)


#### Active/Active scenario with AWS Transit Gateway
<a name="hsr-redirect-ha-ip"></a>

In this scenario, you use two overlay IP addresses: one for the SAP HANA read/write primary instance and one for the SAP HANA read-only secondary instance. In case of failover, the route table is adjusted in its Availability Zone, and Transit Gateway reroutes the connections to these IP addresses. This applies to both overlay IP addresses.

![\[Diagram showing the Active/Active scenario with Transit Gateway.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/hsr-redirect-ha-ip.png)


#### Active/Active scenario with Network Load Balancer
<a name="hsr-redirect-ha-gateway"></a>

In this scenario, you use two overlay IP addresses: one for the SAP HANA read/write primary instance and one for the SAP HANA read-only secondary instance. In case of failover, the route table is adjusted in its Availability Zone, and the Network Load Balancer for the read/write or read-only endpoint points to the overlay IP address in its Availability Zone. This applies to both overlay IP addresses.

![\[Diagram showing the Active/Active scenario with Network Load Balancer.\]](http://docs.aws.amazon.com/sap/latest/sap-hana/images/hsr-redirect-ha-gateway.png)


# Testing SAP HANA high availability deployments
<a name="hana-ops-ha-dr-testing"></a>

This section covers failure scenarios for backup, testing guidance and considerations for high availability and disaster recovery solutions, and disaster recovery mock exercise.

**Topics**
+ [Failure scenarios for backup and recommendations](#hana-ops-ha-dr-scenarios)
+ [Testing guidance and considerations](#hana-ops-ha-dr-testing-guidance)

## Failure scenarios for backup and recommendations
<a name="hana-ops-ha-dr-scenarios"></a>

The following table provides an overview of different failure scenarios for the SAP HANA system, the risk of occurrence, potential data loss, maximum outage, and impact. It is important to determine which failure scenarios will require a recovery from backup. Note that the granularity of the scenarios, classification, and impact will vary depending on your requirements and architecture.


|   **Data protection/disaster recovery**   |   **Failure scenarios**   |   **Comparative risk of occurrence**   |   **Potential data loss**   |   **Maximum outage**   |   **Impact**   | 
| --- |--- |--- |--- |--- |--- |
|   **No high availability**   |  Resource exhausted or compromised (high CPU utilization/file system full/out of memory/storage issues)  |  Medium  |  0 (uncommitted transactions)  |  Avoidable  |  Region  | 
|   **High availability**   |  Single point of failure (database)  |  Medium  |  0 (uncommitted transactions)  |  Time to detect failure and failover (automated)  |  Region  | 
|   **High availability**   |  Availability Zone/network failure  |  Low  |  0 (uncommitted transactions)  |  Time to detect failure and failover (automated)  |  Region  | 
|   **High availability**   |  Core service failure  |  Low  |  0  |  Dependent on failure  |  Region  | 
|   **Disaster recovery**   |  Corruption/accidental deletion/malicious activities/faulty code deployment  |  Low  |  Last consistent restore point before failure  |  Time to detect failure and failover (manual)  |  Cross-Region  | 
|   **Disaster recovery**   |  Region failure  |  Very low  |  Replication delay  |  Time to detect failure and make a decision to invoke disaster recovery and takeover  |  Cross-Region  | 

For SAP HANA systems without a high availability implementation, the critical infrastructure components subject to failure are compute, memory, and storage. Compute or memory failure scenarios include processor or memory hardware failure and resource exhaustion, such as high CPU utilization or out-of-memory conditions. We recommend the following approaches for recovery of the SAP HANA system in case of a CPU or memory issue.
+ Use Amazon EC2 automatic recovery or host recovery to bring the SAP HANA system up on new host. For more information, see [Amazon EC2 recovery options](https://docs.aws.amazon.com/sap/latest/sap-hana/hana-ops-ha-dr.html#ec2-recovery-hana-hadr).
+ Create a full backup of your Amazon EC2 instance using an Amazon Machine Image along with snapshots of the individual Amazon EBS volumes. Use this as a golden image to launch a new instance in case of any failure.
+ Implement a monitoring solution, such as Amazon CloudWatch, to detect CPU or memory resource exhaustion before it leads to a failure.
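
As one hedged example of such monitoring, a CloudWatch alarm on the `CPUUtilization` metric can notify an SNS topic before exhaustion causes an outage. The instance ID and topic ARN below are placeholders, and the call is composed into a variable and echoed as a dry run:

```shell
# Hypothetical identifiers -- replace with your instance ID and an
# existing SNS topic ARN for notifications.
INSTANCE_ID="i-0123456789abcdef0"
SNS_TOPIC="arn:aws:sns:us-east-1:111122223333:hana-alerts"

# Alarm when average CPU stays above 90% for two 5-minute periods.
CMD="aws cloudwatch put-metric-alarm \
  --alarm-name hana-cpu-high \
  --namespace AWS/EC2 --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=${INSTANCE_ID} \
  --statistic Average --period 300 --evaluation-periods 2 \
  --threshold 90 --comparison-operator GreaterThanThreshold \
  --alarm-actions ${SNS_TOPIC}"

# Dry run: print the call instead of executing it.
echo "${CMD}"
```

Similar alarms can be defined for memory and file system metrics published by the CloudWatch agent.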

You can resize or upgrade your Amazon EC2 instance to support a greater number of CPU cores or instance memory size. For more information, see [Change the instance type](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-resize.html).
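
As a sketch of the resize procedure, the AWS CLI sequence below stops the instance, changes its type, and starts it again. The instance ID and target type are placeholders, and each call is prefixed with `echo` as a dry run; remove the `echo` to execute:

```shell
# Hypothetical instance ID and target type -- adjust for your sizing.
INSTANCE_ID="i-0123456789abcdef0"
NEW_TYPE="r6i.8xlarge"

# The instance must be stopped before its type can be changed.
# Dry run: remove the leading 'echo' on each line to execute.
echo aws ec2 stop-instances --instance-ids "${INSTANCE_ID}"
echo aws ec2 wait instance-stopped --instance-ids "${INSTANCE_ID}"
echo aws ec2 modify-instance-attribute --instance-id "${INSTANCE_ID}" \
  --instance-type "Value=${NEW_TYPE}"
echo aws ec2 start-instances --instance-ids "${INSTANCE_ID}"
```

Stopping and starting (rather than rebooting) is required because the instance is moved to new underlying hardware when its type changes.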

For an SAP HANA system, Amazon EBS volumes can be the primary storage for the `root`, `data`, and `log` volumes. Different failure scenarios are possible, such as Amazon EBS volume failure, disk corruption, accidental deletion of data, malicious attack, or faulty code deployment. We recommend the following options to safeguard your data.
+ Use SAP HANA backup and restore to back up your SAP HANA database to Amazon S3 using AWS Backint Agent for SAP HANA.
+ Take Amazon Machine Images and Amazon EBS snapshots of your servers on a regular basis.

Configure Amazon S3 Same-Region Replication to protect against data loss within the primary Region. For disaster recovery, we recommend using Amazon S3 Cross-Region Replication to copy backups and snapshots to a secondary Region, so that they are available in the event of a failure in the primary Region. You can restore the SAP HANA system in the secondary Region from the last set of backups or snapshots. Here, the recovery point objective depends on the last consistent restore point before the failure.
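
For illustration, a cross-Region replication rule for a backup bucket could be applied as below. The bucket names and role ARN are placeholders, both buckets are assumed to exist with versioning enabled, and the final call is echoed as a dry run:

```shell
# Hypothetical names -- both buckets must exist with versioning
# enabled, and the IAM role must permit S3 replication.
SOURCE_BUCKET="hana-backups-primary"
DEST_BUCKET_ARN="arn:aws:s3:::hana-backups-dr"
ROLE_ARN="arn:aws:iam::111122223333:role/s3-crr-role"

# Write the replication configuration for the backup bucket.
cat > /tmp/hana-crr.json <<EOF
{
  "Role": "${ROLE_ARN}",
  "Rules": [{
    "ID": "hana-backup-crr",
    "Status": "Enabled",
    "Priority": 1,
    "Filter": { "Prefix": "" },
    "DeleteMarkerReplication": { "Status": "Disabled" },
    "Destination": { "Bucket": "${DEST_BUCKET_ARN}" }
  }]
}
EOF

# Dry run: remove 'echo' to apply the configuration.
echo aws s3api put-bucket-replication \
  --bucket "${SOURCE_BUCKET}" \
  --replication-configuration file:///tmp/hana-crr.json
```

An empty `Prefix` filter replicates every object; narrow it to the Backint destination prefix if the bucket also holds other data.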

## Testing guidance and considerations
<a name="hana-ops-ha-dr-testing-guidance"></a>

A Pacemaker cluster can help you perform planned downtime tasks, such as patching the SAP HANA database, by automating failover and failback of cluster members. Various unplanned fault situations can also arise during SAP HANA database operations. These can include, but are not limited to, the following.
+ Hardware failures, such as memory module failures on bare-metal instances
+ Software failures, such as process crashes due to out-of-memory issues
+ Network outage

Most of these failure scenarios can be simulated using SAP HANA database and Linux operating system commands. Scenarios for the AWS infrastructure can also be simulated in the AWS Management Console or by using AWS APIs. For more information, see [AWS APIs](https://docs.aws.amazon.com/general/latest/gr/aws-apis.html).

High availability cluster solutions constantly monitor the configured resources to detect failures and react according to pre-defined thresholds, dependencies, and target states. SAP HANA Pacemaker cluster configuration can vary depending on factors such as the size of the database and application availability requirements. The following are some considerations for testing SAP HANA high availability deployments based on a Pacemaker cluster.
+ An SAP HANA high availability installation based on a Pacemaker cluster must undergo planned and unplanned outage scenarios to verify stability.
+ You can perform initial cluster tests without loading business data into the SAP HANA database. The first iteration of testing verifies that the cluster behaves as intended during various fault scenarios. In this iteration, you can also run an initial cycle of test cases and uncover any product or configuration issues.
+ The second iteration of testing can be performed with production size data loaded into the SAP HANA database. The main objective is to tune the cluster monitors for effective timeouts.

Large SAP HANA databases take more time to start and stop. If they are hosted on AWS bare-metal instances, reboots can also take longer. Because these factors affect cluster behavior, the cluster timeout values have to be tuned accordingly.
+ An SAP application can have many single points of failure, and the SAP HANA database is one of them. The availability of an SAP application depends on all single points of failure being resilient to failure situations. Include all single points of failure in overall testing. For example, validate an AWS Availability Zone failure where both the SAP NetWeaver central services component (ASCS) and the SAP HANA database are deployed in the same Availability Zone. The cluster solution must be able to fail over the pre-configured resources, and the SAP application must be restored in the target Availability Zone.
+ Test cases that comprise planned and unplanned downtimes should be tested as a minimum validation. You can also include scenarios where single points of failure were observed in the past, for instance, year-end consolidation jobs that test the instance memory limits and lead to database crashes.

For SAP HANA high availability deployment with pacemaker cluster on **SLES** on AWS test cases, see [Testing the cluster](https://docs.aws.amazon.com/sap/latest/sap-hana/sap-hana-pacemaker-sles-testing.html).

For SAP HANA high availability deployment with pacemaker cluster on **RHEL** on AWS test cases, see [Testing the cluster](https://docs.aws.amazon.com/sap/latest/sap-hana/sap-hana-pacemaker-rhel-testing.html).
+ Pacemaker cluster solutions require virtual IP address configuration for client connections. With virtual IP addresses, the actual hardware where the SAP workloads run remains transparent to client applications, and connections fail over seamlessly in the event of a failure. You must verify that all the intended SAP or third-party interfaces are able to connect to the target SAP application after failover.

You can start by preparing a list of client connections or interfaces that includes all critical connections to the target SAP system. Identify the modifications required in your connection configuration to point to a virtual IP address or load balancing mechanism. During testing, validate each connection for connectivity, the time taken to establish a new connection, and the loss of locks set by the application before the cluster performs a failover. For more information, see [Client redirect options](https://docs.aws.amazon.com/sap/latest/sap-hana/hana-ops-ha-dr-hsr.html#hsr-client-redirect).
+ If you have both high availability and disaster recovery for your SAP HANA workloads, you must take additional steps to perform cluster validations. A Pacemaker cluster only has visibility into its cluster members (primary and secondary). The cluster software does not control disaster recovery operations (tier-3/tertiary).

When a failover is triggered in a multi-tier SAP HANA system replication setup and the secondary database takes over the role of primary, replication continues on the tertiary system. However, once the fault with the original primary system is rectified and the system is made available again, manual intervention is required to complete the reverse replication from the new primary SAP HANA database to the original primary. These manual steps are needed for SAP HANA databases (lower than SAP HANA 2.0) that do not support multi-target replication. For more information, see [SAP HANA multi-target replication](https://docs.aws.amazon.com/sap/latest/sap-hana/hana-ops-ha-dr-hsr.html#hsr-multi-target).

After failing back to the original primary, some manual steps have to be performed to re-enable replication on the tertiary site. It is very important to validate the flow of these steps, and the time taken for services to start up, during each testing scenario before releasing the systems for productive use.

# Troubleshoot high availability SAP HANA deployments
<a name="hana-ops-ha-dr-troubleshoot"></a>

This section provides guidance for troubleshooting SAP HANA high availability deployments.

A healthy SAP HANA system replication status is a foundational requirement for the cluster solution to maintain stability. Because SAP HANA system replication does not depend on the cluster solution, it can be verified independently using [SAP Note 2518979 - HANA : how to check system replication status](https://me.sap.com/notes/2518979).

For manual deployments, the cluster member systems must not have any underlying issues that affect continuous system replication and takeover procedures. Verify this independently before integrating a cluster solution for automation. SAP HANA system replication depends on various factors to function smoothly. To troubleshoot any issues, see [Troubleshoot System Replication.](https://help.sap.com/docs/SAP_HANA_PLATFORM/4e9b18c116aa42fc84c7dbfd02111aba/782a0583f3af4a0992c5075b2ee7bd98.html?locale=en-US) 

Alternatively, you can use guided troubleshooting provided by SAP. For more information, see [SAP HANA Troubleshooting](https://ga.support.sap.com/index.html#/tree/1623/actions/21021:21032). You can also chat with experts or open an incident with SAP. For a speedy resolution, collect the relevant SAP HANA logs as per [SAP Note 2934640 - HANA and Replication - Collecting Support Data for Replication / Network related Tickets](https://me.sap.com/notes/2934640). The *fullSystemInfoDump* logs must be collected from all cluster member systems for a complete analysis.

For troubleshooting issues with AWS Launch Wizard, see [Troubleshoot AWS Launch Wizard for SAP](https://docs.aws.amazon.com/launchwizard/latest/userguide/launch-wizard-sap-troubleshooting.html).

For troubleshooting issues with high availability SAP HANA setup on SLES, see [Indepth HANA Cluster Debug Data Collection (PACEMAKER, SAP).](https://www.suse.com/support/kb/doc/?id=000019142) 

For troubleshooting issues with high availability SAP HANA setup on RHEL, see [How can I debug the SAPHana and SAPHanaTopology resource agents in a Pacemaker cluster?](https://access.redhat.com/solutions/4191201) 

# Appendix: Configuring Linux to Recognize Ethernet Devices for Multiple Network Interfaces
<a name="hana-ops-appendix"></a>

Follow these steps to configure the Linux operating system to recognize and name the Ethernet devices associated with the new elastic network interfaces created for logical network separation, which were discussed [earlier in this guide](hana-ops-networking.md#hana-ops-config).

1. Use SSH to connect to your SAP HANA host as `ec2-user`, and `sudo` to root.

1. Remove the existing `udev` rule; for example:

   ```
   hanamaster:# rm -f /etc/udev/rules.d/70-persistent-net.rules
   ```

1. Create a new `udev` rule that writes rules based on MAC address rather than other device attributes. This will ensure that on reboot, `eth0` is still `eth0`, `eth1` is `eth1`, and so on. For example:

   ```
   hanamaster:# cat <<EOF >/etc/udev/rules.d/75-persistent-net-generator.rules
   # Copyright (C) 2012 Amazon.com, Inc. or its affiliates.
   # All Rights Reserved.
   #
   # Licensed under the Apache License, Version 2.0 (the "License").
   # You may not use this file except in compliance with the License.
   # A copy of the License is located at
   #
   #     https://aws.amazon.com/apache2.0/
   #
   # or in the "license" file accompanying this file. This file is
   # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS
   # OF ANY KIND, either express or implied. See the License for the
   # specific language governing permissions and limitations under the
   # License.
   
   # these rules generate rules for persistent network device naming
   SUBSYSTEM!="net", GOTO="persistent_net_generator_end"
   KERNEL!="eth*", GOTO="persistent_net_generator_end"
   ACTION!="add", GOTO="persistent_net_generator_end"
   NAME=="?*", GOTO="persistent_net_generator_end"
   
   # do not create rule for eth0
   ENV{INTERFACE}=="eth0", GOTO="persistent_net_generator_end"
   
   # read MAC address
   ENV{MATCHADDR}="\$attr{address}"
   
   # do not use empty address
   ENV{MATCHADDR}=="00:00:00:00:00:00", GOTO="persistent_net_generator_end"
   
   # discard any interface name not generated by our rules
   ENV{INTERFACE_NAME}=="?*", ENV{INTERFACE_NAME}=""
   
   # default comment
   ENV{COMMENT}="elastic network interface"
   
   # write rule
   IMPORT{program}="write_net_rules"
   
   # rename interface if needed
   ENV{INTERFACE_NEW}=="?*", NAME="\$env{INTERFACE_NEW}"
   
   LABEL="persistent_net_generator_end"
   EOF
   ```

1. Ensure proper interface properties. For example:

   ```
   hanamaster:# cd /etc/sysconfig/network/
   
   hanamaster:# cat <<EOF >/etc/sysconfig/network/ifcfg-ethN
   BOOTPROTO='dhcp4'
   MTU="9000"
   REMOTE_IPADDR=''
   STARTMODE='onboot'
   LINK_REQUIRED=no
   LINK_READY_WAIT=5
   EOF
   ```

1. Ensure that you can accommodate up to seven more Ethernet devices or network interfaces, and restart `wicked`. For example:

   ```
   hanamaster:# for dev in eth{1..7}; do
       ln -s -f ifcfg-ethN /etc/sysconfig/network/ifcfg-${dev}
   done
   
   hanamaster:# systemctl restart wicked
   ```

1. Create and attach a new network interface to the instance.

1. Reboot.

1. Modify `/etc/iproute2/rt_tables`.
**Important**  
Repeat the following for each ENI that you attach to your instance.

   For example:

   ```
   hanamaster:# cd /etc/iproute2
   hanamaster:/etc/iproute2 # echo "2 eth1_rt" >> rt_tables
   hanamaster:/etc/iproute2 # ip route add default via 172.16.1.122 dev eth1 table eth1_rt
   
   hanamaster:/etc/iproute2 # ip rule
   0:      from all lookup local
   32766:  from all lookup main
   32767:  from all lookup default
   
   hanamaster:/etc/iproute2 # ip rule add from <ENI IP Address> lookup eth1_rt prio 1000
   
   hanamaster:/etc/iproute2 # ip rule
   0:      from all lookup local
   1000:   from <ENI IP address> lookup eth1_rt
   32766:  from all lookup main
   32767:  from all lookup default
   ```

# Document history
<a name="hana-ops-doc-history"></a>



| Date | Change | 
| --- | --- | 
|  September 2022  |  High availability and disaster recovery for SAP HANA  | 
|  July 2022  |  Architecture patterns for SAP HANA  | 
|  December 2021  |  r6i instances updated on storage configuration for SAP HANA  | 
|  July 2021  |  Storage configuration for SAP HANA  | 
|  December 2017  |  Initial publication  | 