

# Using AWS services to migrate data from Db2 to Amazon RDS for Db2

There are several ways to migrate data from a Db2 database to Amazon RDS for Db2. You can perform a one-time migration of your Db2 database from Linux, AIX, or Windows environments to Amazon RDS for Db2. To minimize downtime, you can perform a near-zero downtime migration. You can migrate your data by saving it to Amazon S3 and loading it one table at a time into your Db2 database. You can also perform a synchronous migration through replication or use AWS Database Migration Service.

For one-time migrations of Linux-based Db2 databases, Amazon RDS supports only offline and online backups. Amazon RDS doesn't support incremental or delta backups. For near-zero downtime migrations of Linux-based Db2 databases, Amazon RDS requires online backups. We recommend that you use online backups for near-zero downtime migrations and offline backups for migrations that can tolerate downtime.

**Topics**
+ [Migrating from Linux to Linux for Amazon RDS for Db2](db2-one-time-migration-linux.md)
+ [Migrating from Linux to Linux with near-zero downtime for Amazon RDS for Db2](db2-near-zero-downtime-migration.md)
+ [Migrating synchronously from Linux to Linux for Amazon RDS for Db2](db2-synchronous-migration-linux.md)
+ [Migrating from AIX or Windows to Linux for Amazon RDS for Db2](db2-one-time-migration-aix-windows-linux.md)
+ [Migrating Db2 data through Amazon S3 to Amazon RDS for Db2](db2-migration-load-from-s3.md)
+ [Migrating to Amazon RDS for Db2 with AWS Database Migration Service (AWS DMS)](db2-migration-amazon-dms.md)

# Migrating from Linux to Linux for Amazon RDS for Db2

With this migration approach, you back up your self-managed Db2 database to an Amazon S3 bucket. Then, you use Amazon RDS stored procedures to restore your Db2 database to an Amazon RDS for Db2 DB instance. For more information about using Amazon S3, see [Integrating an Amazon RDS for Db2 DB instance with Amazon S3](db2-s3-integration.md).

Backup and restore for RDS for Db2 follows the IBM Db2 supported upgrade paths and restrictions. For more information, see [Supported upgrade paths for Db2 servers](https://www.ibm.com/docs/en/db2/11.5?topic=servers-supported-upgrade-paths-db2) and [Upgrade restrictions for Db2 servers](https://www.ibm.com/docs/en/db2/11.5?topic=servers-upgrade-restrictions) in the IBM Db2 documentation.

**Topics**
+ [Limitations and recommendations for using native restore](#db2-linux-migration-limitations)
+ [Backing up your database to Amazon S3](#db2-linux-backing-up-database)
+ [Creating a default automatic storage group](#db2-linux-creating-auto-storage-group)
+ [Restoring your Db2 database](#db2-linux-restoring-db2-database)

## Limitations and recommendations for using native restore


The following limitations and recommendations apply to using native restore: 
+ Amazon RDS only supports migrating on-premises versions of Db2 that match supported RDS for Db2 versions. For more information about the supported versions, see [Upgrade management for Amazon RDS Db2 instances](Db2.Concepts.VersionMgmt.Supported.md).
+ Amazon RDS supports only offline and online backups for native restore. Amazon RDS doesn't support incremental or delta backups.
+ You can't restore from an Amazon S3 bucket in an AWS Region that is different from the Region where your RDS for Db2 DB instance is located. 
+ Amazon S3 limits the size of files that are uploaded to an Amazon S3 bucket to 5 TB. If your database backup file exceeds 5 TB, then split the backup file into smaller files.
+ Amazon RDS doesn't support non-fenced external routines, incremental restores, or delta restores.
+ You can't restore from an encrypted source database, but you can restore to an encrypted Amazon RDS DB instance.
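If a single backup image would exceed the 5 TB limit, one way to produce smaller files is to back up to multiple target paths, because Db2 writes one backup file per path. The following sketch uses placeholder paths:

```
db2 backup database source_database to /backup/part1, /backup/part2, /backup/part3
```

You can then upload each resulting file to your S3 bucket separately.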

The restoration process differs depending on your configuration.

If you set `USE_STREAMING_RESTORE` to `TRUE`, Amazon RDS directly streams your backup from your S3 bucket during restoration. Streaming significantly reduces storage requirements. You only need to provision storage space equal to or greater than either the size of the backup or the size of the original database, whichever is larger.

If you set `USE_STREAMING_RESTORE` to `FALSE`, Amazon RDS first downloads the backup to your RDS for Db2 DB instance and then extracts the backup. Extraction requires additional storage space. You must provision storage space equal to or greater than the sum of the size of the backup plus the size of the original database.

The maximum size of the restored database equals the maximum supported database size minus any space required for temporary storage during the restoration process.
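The two sizing rules above can be captured in a quick estimate. The following is a sketch; the function name is illustrative, and sizes are in GiB:

```python
def min_storage_gib(backup_gib, database_gib, use_streaming_restore):
    """Estimate the minimum storage to provision for a restore.

    A streaming restore never stages the full backup on disk, so the
    requirement is the larger of the backup size or the original
    database size. A non-streaming restore downloads the backup and
    then extracts it, so it needs the sum of both sizes.
    """
    if use_streaming_restore:
        return max(backup_gib, database_gib)
    return backup_gib + database_gib

# Example: a 200 GiB backup of a 500 GiB database
print(min_storage_gib(200, 500, True))   # 500 with streaming
print(min_storage_gib(200, 500, False))  # 700 without streaming
```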

## Backing up your database to Amazon S3


To back up your database on Amazon S3, you need the following AWS components:
+ *An Amazon S3 bucket to store your backup files*: Upload any backup files that you want to migrate to Amazon RDS. We recommend that you use offline backups for migrations that can handle downtime. If you already have an S3 bucket, you can use that bucket. If you don't have an S3 bucket, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) in the *Amazon S3 User Guide*.
**Note**  
If your database is large and would take a long time to transfer to an S3 bucket, you can order an AWS Snow Family device and ask AWS to perform the backup. After you copy your files to the device and return it to the Snow Family team, the team transfers your backed-up images to your S3 bucket. For more information, see the [AWS Snow Family documentation](https://docs.aws.amazon.com/snowball/).
+ *An IAM role to access the S3 bucket*: If you already have an IAM role, you can use that role. If you don't have a role, see [Step 2: Create an IAM role and attach your IAM policy](db2-s3-integration.md#db2-creating-iam-role). 
+ *An IAM policy with trust relationships and permissions attached to your IAM role*: For more information, see [Step 1: Create an IAM policy](db2-s3-integration.md#db2-creating-iam-policy).
+ *The IAM role added to your RDS for Db2 DB instance*: For more information, see [Step 3: Add your IAM role to your RDS for Db2 DB instance](db2-s3-integration.md#db2-adding-iam-role).

## Creating a default automatic storage group


Your source database must have a default automatic storage group. If your database doesn't have a default automatic storage group, you must create one.

**To create a default automatic storage group**

1. Connect to your source database. In the following example, replace *source\_database* with the name of your database.

   ```
   db2 connect to source_database 
   ```

1. Create an automatic storage group and set it as the default. In the following example, replace *storage\_path* with the absolute path to where the storage group is located.

   ```
   db2 "create stogroup IBMSTOGROUP ON storage_path set as default"
   ```

1. Terminate the command line processor's back-end process.

   ```
   db2 terminate
   ```

1. Deactivate the database and stop all database services. In the following example, replace *source\_database* with the name of the database that you created the storage group for.

   ```
   db2 deactivate db source_database
   ```

1. Back up the database. In the following example, replace *source\_database* with the name of the database that you created the storage group for. Replace *file\_system\_path* with the absolute path to where you want to back up the database.

   ```
   db2 backup database source_database to file_system_path 
   ```

## Restoring your Db2 database


After you back up your database on Amazon S3 and create an automatic storage group, you are ready to restore your Db2 database to your RDS for Db2 DB instance.

**To restore your Db2 database from your Amazon S3 bucket to your RDS for Db2 DB instance**

1. Connect to your RDS for Db2 DB instance. For more information, see [Connecting to your Db2 DB instance](USER_ConnectToDb2DBInstance.md).

1. (Optional) To ensure that your database is configured with the optimal settings, check the values for the following parameters by calling [rdsadmin.show\_configuration](db2-sp-managing-databases.md#db2-sp-show-configuration):
   + `RESTORE_DATABASE_NUM_BUFFERS`
   + `RESTORE_DATABASE_PARALLELISM`
   + `RESTORE_DATABASE_NUM_MULTI_PATHS`
   + `USE_STREAMING_RESTORE`

   Use [rdsadmin.set\_configuration](db2-sp-managing-databases.md#db2-sp-set-configuration) to modify these values as needed. Properly configuring these parameters can significantly improve performance when restoring databases with large volumes of data. For most migration scenarios, we recommend setting `USE_STREAMING_RESTORE` to `TRUE` because it reduces storage requirements and can improve restoration speed.
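   A check-and-set sequence might look like the following sketch. The call syntax here is illustrative; confirm the exact parameter formats against the linked stored procedure references:

   ```
   db2 "call rdsadmin.show_configuration('USE_STREAMING_RESTORE')"
   db2 "call rdsadmin.set_configuration('USE_STREAMING_RESTORE', 'TRUE')"
   ```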

1. Restore your database by calling `rdsadmin.restore_database`. For more information, see [rdsadmin.restore\_database](db2-sp-managing-databases.md#db2-sp-restore-database). 
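   The general shape of the call is sketched below. The argument list here is a placeholder, not the actual signature, so check the linked reference for the exact parameters before running it:

   ```
   db2 "call rdsadmin.restore_database(?, 'source_database', 'amzn-s3-demo-bucket', ...)"
   ```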

# Migrating from Linux to Linux with near-zero downtime for Amazon RDS for Db2

With this migration approach, you migrate a Linux-based Db2 database from a self-managed Db2 database (source) to Amazon RDS for Db2 with minimal or no downtime for the application or its users. This approach backs up your database and restores it with log replay, which helps prevent disruptions to ongoing operations and provides high availability of your database. 

To achieve near-zero downtime migration, RDS for Db2 implements restore with log replay. This approach takes a backup of your self-managed Linux-based Db2 database and restores it on the RDS for Db2 server. With Amazon RDS stored procedures, you then apply subsequent transaction logs to bring the database up to date. 

**Topics**
+ [Limitations and recommendations for near-zero downtime migration](#db2-near-zero-downtime-migration-limitations)
+ [Backing up your database to Amazon S3](#db2-near-zero-downtime-backing-up-database)
+ [Creating a default automatic storage group](#db2-near-zero-migration-creating-auto-storage-group)
+ [Migrating your Db2 database](#db2-migrating-db2-database)

## Limitations and recommendations for near-zero downtime migration


The following limitations and recommendations apply to using near-zero downtime migration:
+ Amazon RDS requires an online backup for near-zero downtime migration. This is because Amazon RDS keeps your database in a rollforward pending state as you upload your archived transaction logs. For more information, see [Migrating your Db2 database](#db2-migrating-db2-database). 
+ You can't restore from an Amazon S3 bucket in an AWS Region that is different from the Region where your RDS for Db2 DB instance is located. 
+ Amazon S3 limits the size of files uploaded to an S3 bucket to 5 TB. If your database backup file exceeds 5 TB, then split the backup file into smaller files.
+ Amazon RDS doesn't support non-fenced external routines, incremental restores, or delta restores.
+ You can't restore from an encrypted source database, but you can restore to an encrypted Amazon RDS DB instance.

The restoration process differs depending on your configuration.

If you set `USE_STREAMING_RESTORE` to `TRUE`, Amazon RDS directly streams your backup from your S3 bucket during restoration. Streaming significantly reduces storage requirements. You only need to provision storage space equal to or greater than either the size of the backup or the size of the original database, whichever is larger.

If you set `USE_STREAMING_RESTORE` to `FALSE`, Amazon RDS first downloads the backup to your RDS for Db2 DB instance and then extracts the backup. Extraction requires additional storage space. You must provision storage space equal to or greater than the sum of the size of the backup plus the size of the original database.

The maximum size of the restored database equals the maximum supported database size minus any space required for temporary storage during the restoration process. 

## Backing up your database to Amazon S3


To back up your database on Amazon S3, you need the following AWS components:
+ *An Amazon S3 bucket to store your backup files*: Upload any backup files that you want to migrate to Amazon RDS. Amazon RDS requires an online backup for near-zero downtime migration. If you already have an S3 bucket, you can use that bucket. If you don't have an S3 bucket, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) in the *Amazon S3 User Guide*.
**Note**  
If your database is large and would take a long time to transfer to an S3 bucket, you can order an AWS Snow Family device and ask AWS to perform the backup. After you copy your files to the device and return it to the Snow Family team, the team transfers your backed-up images to your S3 bucket. For more information, see the [AWS Snow Family documentation](https://docs.aws.amazon.com/snowball/).
+ *An IAM role to access the S3 bucket*: If you already have an AWS Identity and Access Management (IAM) role, you can use that role. If you don't have a role, see [Step 2: Create an IAM role and attach your IAM policy](db2-s3-integration.md#db2-creating-iam-role). 
+ *An IAM policy with trust relationships and permissions attached to your IAM role*: For more information, see [Step 1: Create an IAM policy](db2-s3-integration.md#db2-creating-iam-policy).
+ *The IAM role added to your RDS for Db2 DB instance*: For more information, see [Step 3: Add your IAM role to your RDS for Db2 DB instance](db2-s3-integration.md#db2-adding-iam-role).

## Creating a default automatic storage group


Your source database must have a default automatic storage group. If your database doesn't have a default automatic storage group, you must create one.

**To create a default automatic storage group**

1. Connect to your source database. In the following example, replace *source\_database* with the name of your database.

   ```
   db2 connect to source_database 
   ```

1. Create an automatic storage group and set it as the default. In the following example, replace *storage\_path* with the absolute path to where the storage group is located.

   ```
   db2 "create stogroup IBMSTOGROUP ON storage_path set as default"
   ```

1. Terminate the command line processor's back-end process.

   ```
   db2 terminate
   ```

## Migrating your Db2 database


After you set up for near-zero downtime migration, you are ready to migrate your Db2 database from your Amazon S3 bucket to your RDS for Db2 DB instance.

**To perform a near-zero downtime migration of backup files from your Amazon S3 bucket to your RDS for Db2 DB instance**

1. Perform an online backup of your source database. For more information, see [BACKUP DATABASE command](https://www.ibm.com/docs/en/db2/11.5?topic=commands-backup-database) in the IBM Db2 documentation.

1. Copy the backup of your database to an Amazon S3 bucket. For information about using Amazon S3, see the [Amazon Simple Storage Service User Guide](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html).

1. Connect to the `rdsadmin` database with the *master\_username* and *master\_password* for your RDS for Db2 DB instance.

   ```
   db2 connect to rdsadmin user master_username using master_password
   ```

1. (Optional) To ensure that your database is configured with the optimal settings, check the values for the following parameters by calling [rdsadmin.show\_configuration](db2-sp-managing-databases.md#db2-sp-show-configuration):
   + `RESTORE_DATABASE_NUM_BUFFERS`
   + `RESTORE_DATABASE_PARALLELISM`
   + `RESTORE_DATABASE_NUM_MULTI_PATHS`
   + `USE_STREAMING_RESTORE`

   Use [rdsadmin.set\_configuration](db2-sp-managing-databases.md#db2-sp-set-configuration) to modify these values as needed. Properly configuring these parameters can significantly improve performance when restoring databases with large volumes of data. For most migration scenarios, we recommend setting `USE_STREAMING_RESTORE` to `TRUE` because it reduces storage requirements and can improve restoration speed.

1. Restore the backup on the RDS for Db2 server by calling `rdsadmin.restore_database`. Set `backup_type` to `ONLINE`. For more information, see [rdsadmin.restore\_database](db2-sp-managing-databases.md#db2-sp-restore-database).

1. Copy your archive logs from your source server to your S3 bucket. For more information, see [Archive logging](https://www.ibm.com/docs/en/db2/11.5?topic=logging-archive) in the IBM Db2 documentation.

1. Apply archive logs as many times as needed by calling `rdsadmin.rollforward_database`. Set `complete_rollforward` to `FALSE` to keep the database in a `ROLL-FORWARD PENDING` state. For more information, see [rdsadmin.rollforward\_database](db2-sp-managing-databases.md#db2-sp-rollforward-database).

1. After you apply all of the archive logs, bring the database online by calling `rdsadmin.complete_rollforward`. For more information, see [rdsadmin.complete\_rollforward](db2-sp-managing-databases.md#db2-sp-complete-rollforward).

1. Switch application connections to the RDS for Db2 server by either updating your application endpoints for the database or by updating the DNS endpoints to redirect traffic to the RDS for Db2 server. You can also use the Db2 automatic client reroute feature on your self-managed Db2 database with the RDS for Db2 database endpoint. For more information, see [Automatic client reroute description and setup](https://www.ibm.com/docs/en/db2/11.5?topic=reroute-configuring-automatic-client) in the IBM Db2 documentation.
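   For example, you can configure automatic client reroute on the source by registering the RDS for Db2 endpoint as the alternate server. The hostname and port below are placeholders for your DB instance's endpoint:

   ```
   db2 update alternate server for database source_database using hostname mydb.0123456789ab.us-east-1.rds.amazonaws.com port 50000
   ```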

1. (Optional) Shut down your source database.

# Migrating synchronously from Linux to Linux for Amazon RDS for Db2

With this migration approach, you set up replication between your self-managed Db2 database and your Amazon RDS for Db2 DB instance. Changes made to the self-managed database replicate to the RDS for Db2 DB instance in near real time. This approach can provide continuous availability and minimize downtime during the migration process.

# Migrating from AIX or Windows to Linux for Amazon RDS for Db2

With this migration approach, you use native Db2 tools to back up your self-managed Db2 database to an Amazon S3 bucket. Native Db2 tools include the `export` utility, the `db2move` system command, and the `db2look` system command. Your Db2 database can be on premises or on Amazon Elastic Compute Cloud (Amazon EC2). You move data from your AIX or Windows system to your Amazon S3 bucket, and then use a Db2 client to load the data directly from the S3 bucket into your Amazon RDS for Db2 database. Downtime depends on the size of your database. For more information about using Amazon S3, see [Integrating an Amazon RDS for Db2 DB instance with Amazon S3](db2-s3-integration.md).

**To migrate your Db2 database to RDS for Db2**

1. Prepare to back up your database. Configure enough storage to hold the backup on your self-managed Db2 system.

1. Back up your database.

   1. Run the [db2look system command](https://www.ibm.com/docs/en/db2/11.5?topic=commands-db2look-db2-statistics-ddl-extraction-tool) to extract the data definition language (DDL) file for all objects.

   1. Run either the [Db2 export utility](https://www.ibm.com/docs/en/db2/11.5?topic=utility-exporting-data), the [db2move system command](https://www.ibm.com/docs/en/db2/11.5?topic=commands-db2move-database-movement-tool), or a [CREATE EXTERNAL TABLE statement](https://www.ibm.com/docs/en/db2/11.5?topic=statements-create-table-external) to unload the Db2 table data to storage on your Db2 system.
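   For example, the DDL extraction and a full-database export might look like the following. The database name and output file are placeholders:

   ```
   db2look -d source_database -e -l -x -o source_database_ddl.sql
   db2move source_database export
   ```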

1. Move your backup to an Amazon S3 bucket. For more information, see [Integrating an Amazon RDS for Db2 DB instance with Amazon S3](db2-s3-integration.md). 
**Note**  
If your database is large and would take a long time to transfer to an S3 bucket, you can order an AWS Snow Family device and ask AWS to perform the backup. After you copy your files to the device and return it to the Snow Family team, the team transfers your backed-up images to your S3 bucket. For more information, see the [AWS Snow Family documentation](https://docs.aws.amazon.com/snowball/).

1. Use a Db2 client to load data directly from your S3 bucket to your RDS for Db2 database. For more information, see [Migrating with Amazon S3](db2-migration-load-from-s3.md).

# Migrating Db2 data through Amazon S3 to Amazon RDS for Db2

With this migration approach, you first save data from a single table into a data file that you place in an Amazon S3 bucket. Then, you use the [LOAD command](https://www.ibm.com/docs/en/db2/11.5?topic=commands-load) to load the data from that data file into a table in your Amazon RDS for Db2 database. For more information about using Amazon S3, see [Integrating an Amazon RDS for Db2 DB instance with Amazon S3](db2-s3-integration.md).

**Topics**
+ [Saving your data to Amazon S3](#db2-migration-load-from-s3-saving-data-file)
+ [Loading your data into RDS for Db2 tables](#db2-migration-load-from-s3-into-db-table)

## Saving your data to Amazon S3


To save data from a single table to Amazon S3, use a database utility to extract the data from your database management system (DBMS) into a CSV file. Then, upload the data file to Amazon S3.
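For example, with the Db2 `export` utility and the AWS CLI, saving a single table might look like the following. The table, schema, file, and bucket names are placeholders:

```
db2 "export to my_db2_table.csv of del select * from my_schema.my_db2_table"
aws s3 cp my_db2_table.csv s3://amzn-s3-demo-bucket/my_db2_table.csv
```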

For storing data files on Amazon S3, you need the following AWS components:
+ *An Amazon S3 bucket to store your backup files*: If you already have an S3 bucket, you can use that bucket. If you don't have an S3 bucket, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) in the *Amazon S3 User Guide*.
+ *An IAM role to access the S3 bucket*: If you already have an IAM role, you can use that role. If you don't have a role, see [Step 2: Create an IAM role and attach your IAM policy](db2-s3-integration.md#db2-creating-iam-role). 
+ *An IAM policy with trust relationships and permissions attached to your IAM role*: For more information, see [Step 1: Create an IAM policy](db2-s3-integration.md#db2-creating-iam-policy).
+ *The IAM role added to your RDS for Db2 DB instance*: For more information, see [Step 3: Add your IAM role to your RDS for Db2 DB instance](db2-s3-integration.md#db2-adding-iam-role).

## Loading your data into RDS for Db2 tables


After you save your data files to Amazon S3, you can load the data from these files into individual tables on your RDS for Db2 DB instance.

**To load your Db2 table data into your RDS for Db2 DB database table**

1. Connect to the `rdsadmin` database using the master username and master password for your RDS for Db2 DB instance. In the following example, replace *master\_username* and *master\_password* with your own information.

   ```
   db2 connect to rdsadmin user master_username using master_password
   ```

1. Catalog a storage access alias that points to the Amazon S3 bucket where your saved files are stored. Take note of the name of this alias for use in the next step. You only need to perform this step once if you plan to load multiple tables from data files stored in the same Amazon S3 bucket.

   The following example catalogs an alias named *my\_s3\_alias* that grants a user named *jorge\_souza* access to a bucket named *amzn-s3-demo-bucket*.

   ```
   db2 "call rdsadmin.catalog_storage_access(?, 'my_s3_alias', 'amzn-s3-demo-bucket', 'USER', 'jorge_souza')"
   ```

   For more information about this stored procedure, see [rdsadmin.catalog\_storage\_access](db2-sp-managing-storage-access.md#db2-sp-catalog-storage-access).

1. Run the `LOAD` command using the storage access alias that points to your Amazon S3 bucket. 
**Note**  
If the `LOAD` command returns an error, then you might need to create a VPC gateway endpoint for Amazon S3 and add outbound rules to the security group. For more information, see [File I/O error](db2-troubleshooting.md#db2-file-input-output-error).

   The following example loads data from a data file named *my\_s3\_datafile.csv* into a table named *my\_db2\_table*. The example assumes that the data file is in the Amazon S3 bucket that the alias named *my\_s3\_alias* points to.

   ```
   db2 "load from db2remote://my_s3_alias//my_s3_datafile.csv of DEL insert into my_db2_table"
   ```

   The following example loads LOBs from a data file named *my\_table1\_export.ixf* into a table named *my\_db2\_table*. The example assumes that the data file is in the Amazon S3 bucket that the alias named *my\_s3\_alias* points to.

   ```
   db2 "call sysproc.admin_cmd('load from "db2remote://my_s3_alias//my_table1_export.ixf" of ixf
           lobs from "db2remote://my_s3_alias//" xml from "db2remote://my_s3_alias//"
           modified by lobsinfile implicitlyhiddeninclude identityoverride generatedoverride periodoverride transactionidoverride
           messages on server
           replace into "my_schema"."my_db2_table"
                                  nonrecoverable
           indexing mode incremental allow no access')"
   ```

   Repeat this step for each data file in the Amazon S3 bucket that you want to load into a table in your RDS for Db2 DB instance.

   For more information about the `LOAD` command, see [LOAD command](https://www.ibm.com/docs/en/db2/11.5?topic=commands-load).

# Migrating to Amazon RDS for Db2 with AWS Database Migration Service (AWS DMS)

You can use AWS DMS to perform a one-time migration, followed by ongoing synchronization, from Db2 on Linux, UNIX (such as AIX), or Windows to Amazon RDS for Db2. For more information, see [What is AWS Database Migration Service?](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html)