

# Phase 3: Migrate


After you complete migration planning and identify a migration strategy, the actual migration takes place. In this phase, you design the target database, migrate the source data to the target, and validate the migrated data.

![Iterative migration process](http://docs.aws.amazon.com/prescriptive-guidance/latest/strategy-database-migration/images/iterative-migration-process.png)

This is an iterative process that includes multiple cycles of conversion, migration, and testing. After the functional and performance testing is complete, you can cut over to the new database.

The migration phase consists of the following key steps, which are discussed in the following sections:
+ [Converting the schema](convert-schema.md)
+ [Migrating the data](migrate-data.md)
+ [Updating the application](update-app.md)
+ [Testing the migration](test-migration.md)
+ [Cutting over to the new database](cut-over.md)

# Convert the schema


One of the key tasks during the database migration is to migrate your schema from the source database engine to the target database engine. If you rehost or replatform, your database engine won’t change. This is referred to as a *homogeneous database migration*, and you can use your native database tools to migrate the schema.

However, if you are rearchitecting your application, schema conversion might require more effort. In this case, you will be doing a *heterogeneous database migration*, where your source and target database engines are different. Your current database schema might use packages and features that cannot be directly converted to the target database engine. Some features might be available under a different name. Therefore, converting the schema requires a good understanding of your source and target database engines. This task can be challenging, depending on the complexity of your current schema.

AWS provides two resources to help you with schema conversion: AWS Schema Conversion Tool (AWS SCT) and migration playbooks.

## AWS SCT


AWS SCT is a free tool that can help you convert your existing database from one engine to another. AWS SCT supports a number of source databases, including Oracle, Microsoft SQL Server, MySQL, Sybase, and IBM Db2 LUW. You can choose from target databases such as Aurora MySQL and Aurora PostgreSQL.

AWS SCT provides a graphical user interface that directly connects to the source and target databases to fetch the current schema objects. When connected, you can generate a database migration assessment report to get a high-level summary of the conversion effort and action items. The following screen illustration shows a sample database migration assessment report.

![Sample database migration assessment report from AWS SCT](http://docs.aws.amazon.com/prescriptive-guidance/latest/strategy-database-migration/images/sct-assessment-report.png)

With AWS SCT, you can convert the schema and deploy it directly into the target database, or you can generate SQL files for the converted schema. For more information, see [Using the AWS Schema Conversion Tool User Interface](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_UserInterface.html) in the AWS documentation.

## Migration playbooks


Although AWS SCT converts many of your source objects, some aspects of conversion require manual intervention and adjustments. To help with this task, AWS provides migration playbooks that detail incompatibilities and similarities between two databases. For more information about these playbooks, see [AWS Database Migration Service resources](https://aws.amazon.com/dms/resources/) on the AWS website.

# Migrate the data


When the schema migration is complete, you can move your data from the source database to the target database. Depending on your application availability requirements, you can run a simple extraction job that performs a one-time copy of the source data into the new database. Or, you can use a tool that copies the current data and continues to replicate all changes until you are ready to cut over to the new database. For rehost and replatform migrations, we recommend that you use native database-specific tools to migrate your data.

Tools that can help you with the data transfer include AWS Database Migration Service (AWS DMS) and offline migration tools. These are described in the following sections.



## AWS DMS


After you use AWS SCT to convert your schema objects from the source database engine to the target engine, you can use AWS DMS to migrate the data. With AWS DMS you can keep the source database up and running while the data is being replicated. You can perform a one-time copy of your data or copy with continuous replication. When the source and target databases are in sync, you can take your database offline and move your operations to the target database. AWS DMS can be used for homogeneous database migrations (for example, from an on-premises Oracle database to an Amazon RDS for Oracle database) as well as heterogeneous migrations (for example, from an on-premises Oracle database to an Amazon RDS for PostgreSQL database). For more information about working with AWS DMS, see the [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_GettingStarted.html).
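As a concrete sketch of the table-mappings document that an AWS DMS replication task expects, the following Python builds a minimal set of selection rules. The schema and table names are illustrative; the rule fields follow the DMS selection-rule format.

```python
import json

def build_table_mappings(schema_name, table_patterns):
    """Build an AWS DMS table-mappings document that selects the
    given tables for replication, using "selection" rules."""
    rules = []
    for i, pattern in enumerate(table_patterns, start=1):
        rules.append({
            "rule-type": "selection",
            "rule-id": str(i),
            "rule-name": f"include-{pattern}",
            "object-locator": {
                "schema-name": schema_name,
                "table-name": pattern,
            },
            "rule-action": "include",
        })
    return json.dumps({"rules": rules}, indent=2)

# Illustrative example: replicate every table in the HR schema
# plus one audit table (the % wildcard matches all table names).
mappings = build_table_mappings("HR", ["%", "AUDIT_LOG"])
print(mappings)
```

You would typically pass a document like this as the `TableMappings` parameter when creating the replication task (for example, through the AWS CLI or the boto3 `dms` client's `create_replication_task` call), with the migration type set to full load plus CDC for continuous replication.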

## Offline migration options


You can use other options in addition to AWS DMS to extract your data from the source database and load it to the target database. These options are mostly suitable when application downtime is allowed during the data migration activity. Examples of these methods include:
+ A comma-separated values (CSV) extract from the source database that is loaded into the target database
+ For Oracle source databases, the **ora2pg** utility to copy the data to PostgreSQL
+ Custom extract, transform, load (ETL) jobs to copy the data from source to target
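The CSV option above can be sketched as a small export/load round trip. This example uses SQLite in-memory databases as stand-ins for the source and target engines; a real migration would use each engine's own bulk tools (for example, Oracle SQL*Loader or PostgreSQL `COPY`), and the table name is illustrative.

```python
import csv
import io
import sqlite3

def export_to_csv(conn, table):
    """Dump every row of a table to CSV text (header plus data rows)."""
    cur = conn.execute(f"SELECT * FROM {table}")
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow([col[0] for col in cur.description])  # column header
    writer.writerows(cur.fetchall())
    return buf.getvalue()

def load_from_csv(conn, table, csv_text):
    """Bulk-insert CSV rows into an existing target table."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, data = rows[0], rows[1:]
    placeholders = ",".join("?" for _ in header)
    conn.executemany(
        f"INSERT INTO {table} ({','.join(header)}) VALUES ({placeholders})",
        data,
    )
    conn.commit()

# Stand-in source and target databases.
source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
source.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "Ana"), (2, "Raj")])

target = sqlite3.connect(":memory:")
target.execute("CREATE TABLE customers (id INTEGER, name TEXT)")

load_from_csv(target, "customers", export_to_csv(source, "customers"))
count = target.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
```

Because this copies everything in one pass, it fits the downtime-tolerant scenarios described above; changes made on the source after the extract are not carried over.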

# Update the application


A database migration is hardly ever a database-only migration. You have to look at the application that’s using the database to make sure that it works as expected with the new database. The changes are minimal if you are simply rehosting or replatforming the same database engine, but can be more significant if you decide to move to a new database engine.

If your application relies on an object-relational mapping (ORM) to interact with the database, it won’t require as many changes when you migrate to a new database engine. However, if your application has custom database interactions or dynamically built SQL queries, the changes can be sizable. There might be differences in the query formats that need to be corrected to make sure that the application works as expected.

For example, in Oracle, concatenating a string with `NULL` returns the original string. However, in PostgreSQL, concatenating a string with `NULL` returns `NULL`. Another example is how `NULL` and empty strings are treated. In PostgreSQL, `NULL` and empty strings are two different things, whereas databases like Oracle treat them in the same way. In Oracle, if you insert a row with the column value set to `NULL` or empty string, you can fetch both types of values by using the `where` clause: `where <mycolumn> is NULL`. In PostgreSQL, this `where` clause will return only one row where the column value is actually NULL; it won’t return the row that has an empty string value. For more information about these differences, see the migration playbooks listed on the [AWS Database Migration Service resources](https://aws.amazon.com/dms/resources/) webpage.
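You can observe both differences with a small experiment. SQLite happens to follow the PostgreSQL-style semantics in these two cases, so this sketch uses it as a convenient stand-in; the table and column names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# 1. Concatenating a string with NULL yields NULL
#    (Oracle would return the original string instead).
concat = conn.execute("SELECT 'abc' || NULL").fetchone()[0]

# 2. NULL and '' are distinct values, so IS NULL misses the empty string.
conn.execute("CREATE TABLE t (val TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [(None,), ("",)])
null_rows = conn.execute(
    "SELECT COUNT(*) FROM t WHERE val IS NULL"
).fetchone()[0]

# A portable rewrite that matches both rows, which is the usual fix
# when porting Oracle queries that relied on the old behavior:
both_rows = conn.execute(
    "SELECT COUNT(*) FROM t WHERE val IS NULL OR val = ''"
).fetchone()[0]
```

Here `concat` comes back as `NULL`, the `IS NULL` query matches only one of the two inserted rows, and the rewritten predicate matches both.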

# Test the migration


Functional and performance testing is an essential part of database migrations. Detailed functional testing will make sure that your application is working with the new database without any issues. You should invest time to develop unit tests to test out the application workflows.

Performance testing makes sure that your database response times are within an acceptable time range. You can identify bottlenecks, optimize, and repeat the performance test. You repeat the cycle as required to get the desired performance results.

Testing can be manual or automated. We recommend that you use an automated framework for testing. During migration, you will need to run the test multiple times, so having an automated testing framework helps speed up the bug fixing and optimization cycles.

This testing can reveal issues that were missed during development. For example, an incorrectly converted query will fail or return incorrect results, causing the functional tests to fail. Performance testing can reveal issues such as missing indexes that cause slow query response times. It can also reveal problems that require tuning the database engine for the workload or modifying the query.
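A minimal automated functional check might run the same queries against both databases and report any differences in results. This sketch uses two SQLite in-memory databases as stand-ins for the source and target; in practice, the two connections would point at the old and new engines through their respective drivers.

```python
import sqlite3

def query_parity(source, target, queries):
    """Run each query against both databases and collect mismatches
    as (query, source_rows, target_rows) tuples."""
    mismatches = []
    for q in queries:
        src = source.execute(q).fetchall()
        tgt = target.execute(q).fetchall()
        if src != tgt:
            mismatches.append((q, src, tgt))
    return mismatches

# Stand-in databases seeded with identical data.
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")
for db in (source, target):
    db.execute("CREATE TABLE orders (id INTEGER, total REAL)")
    db.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.5), (2, 20.0)])

checks = [
    "SELECT COUNT(*) FROM orders",
    "SELECT id, total FROM orders ORDER BY id",
]
diffs = query_parity(source, target, checks)  # empty when results agree
```

A harness like this slots naturally into an automated test framework, so the same checks can be rerun after every bug-fix or optimization cycle.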

# Cut over


The database cutover strategy is usually tightly coupled with the downtime requirements for the application. Strategies that you can use for the database cutover include offline migration, flash-cut migration, active/active database configuration, and incremental migration. These are discussed in the following sections.

## Offline migration


If you can take your application offline for write operations for an extended period, you can use the AWS DMS full-load task settings or one of the offline migration options for your data migration. Read traffic can continue while the migration is in progress, but write traffic must be stopped. Because all the data must be copied from the source database, source database resources such as I/O and CPU are utilized.

At a high level, offline migration involves these steps:

1. Complete the schema conversion.

1. Start downtime for write traffic.

1. Migrate the data using one of the offline migration options.

1. Verify your data.

1. Point your application to the new database.

1. End the application downtime.
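The data verification step above can be sketched as a row-level fingerprint comparison between the source and target: hash every row in a deterministic order and compare the digests. This example uses SQLite stand-ins, and the table name is illustrative.

```python
import hashlib
import sqlite3

def table_fingerprint(conn, table, order_by):
    """Hash every row of a table in a deterministic order; equal
    fingerprints on source and target are quick evidence that the
    copy is complete and uncorrupted."""
    h = hashlib.sha256()
    for row in conn.execute(f"SELECT * FROM {table} ORDER BY {order_by}"):
        h.update(repr(row).encode("utf-8"))
    return h.hexdigest()

# Stand-in databases seeded with identical data.
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")
for db in (source, target):
    db.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
    db.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100.0), (2, 250.5)])

match = (table_fingerprint(source, "accounts", "id")
         == table_fingerprint(target, "accounts", "id"))
```

For large tables, you would typically compare per-chunk fingerprints (for example, by primary-key range) so that a mismatch can be localized and re-copied.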

## Flash-cut migration


In flash-cut migration, the main objective is to keep the downtime to a minimum. This strategy relies on continuous data replication through change data capture (CDC) from the source database to the target database. All read/write traffic continues on the current database while the data is being migrated. Because all the data must be copied from the source database, source server resources such as I/O and CPU are utilized. You should test to make sure that this data migration activity doesn’t impact your application performance SLAs.

At a high level, flash-cut migration involves these steps:

1. Complete the schema conversion.

1. Set up AWS DMS in continuous data replication mode.

1. When the source and target databases are in sync, verify the data.

1. Start the application downtime.

1. Roll out the new version of the application, which points to the new database.

1. End the application downtime.
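Before starting the downtime window, you typically wait for replication lag to fall below a threshold. The following sketch polls an injected lag probe until the databases are close enough to sync; the probe here is a deterministic stand-in, whereas a real one might read the DMS task's CDC latency metrics from Amazon CloudWatch.

```python
import time

def wait_until_in_sync(get_replication_lag_seconds, threshold=1.0,
                       poll_interval=0.01, timeout=5.0):
    """Poll a replication-lag probe until lag drops below the
    threshold, signalling that the cutover window can begin.
    Returns False if the timeout expires first."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_replication_lag_seconds() < threshold:
            return True
        time.sleep(poll_interval)
    return False

# Stand-in probe: lag shrinks on every poll, as it would once the
# full load finishes and CDC catches up with the change backlog.
lag_samples = iter([30.0, 12.0, 4.0, 0.4])
in_sync = wait_until_in_sync(lambda: next(lag_samples))
```

Gating the cutover on a lag threshold, rather than a fixed wait, keeps the downtime window as short as the replication backlog allows.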

## Active/active database configuration


Active/active database configuration involves setting up a mechanism to keep the source and target databases in sync while both databases are being used for write traffic. This strategy involves more work than offline or flash-cut migration, but it also provides more flexibility during migration. For example, in addition to experiencing minimal downtime during migration, you can move your production traffic to the new database in small, controlled batches instead of performing a one-time cutover. You can either perform dual write operations so that changes are made to both databases, or use a bi-directional replication tool like [HVR](https://www.hvr-software.com/product/) to keep the databases in sync. This strategy has a higher complexity in terms of setup and maintenance, so more testing is required to avoid data consistency issues.

At a high level, active/active database configuration involves these steps:

1. Complete the schema conversion.

1. Copy the existing data from the source database to the target database, and then keep the two databases in sync by using a bi-directional replication tool or dual writes from the application.

1. When the source and target databases are in sync, verify the data.

1. Start moving a subset of your traffic to the new database.

1. Keep moving the traffic until all your database traffic has been moved to the new database.
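The dual-write option in the steps above can be sketched as a thin wrapper that fans each write out to both databases. This shows only the write path; real deployments also need conflict handling, failure retries, and a reconciliation job. SQLite in-memory databases stand in for the two engines, and the table name is illustrative.

```python
import sqlite3

class DualWriter:
    """Apply every write to both databases so they stay in sync
    while both are serving traffic during the migration window."""

    def __init__(self, old_db, new_db):
        self.databases = (old_db, new_db)

    def execute(self, sql, params=()):
        # Fan the same statement out to the old and new databases.
        for db in self.databases:
            db.execute(sql, params)
            db.commit()

old_db = sqlite3.connect(":memory:")
new_db = sqlite3.connect(":memory:")
for db in (old_db, new_db):
    db.execute("CREATE TABLE events (id INTEGER, payload TEXT)")

writer = DualWriter(old_db, new_db)
writer.execute("INSERT INTO events VALUES (?, ?)", (1, "signup"))

counts = [db.execute("SELECT COUNT(*) FROM events").fetchone()[0]
          for db in (old_db, new_db)]
```

Because a failure between the two writes leaves the databases divergent, the verification and reconciliation testing mentioned above is essential with this approach.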

## Incremental migration


In incremental migration, you migrate your application in smaller parts instead of performing a one-time, full cutover. This cutover strategy could have many variations, based on your current application architecture or the refactoring you’re willing to do in the application.

You can use a [design pattern](https://samirbehara.com/2018/09/10/monolith-to-microservices-using-strangler-pattern/) to add new independent microservices to replace parts of an existing, monolithic legacy application. These independent microservices have their own tables that are not shared or accessed by any other part of the application. You migrate these microservices to the new database one by one, using any of the other cutover strategies. The migrated microservices start using the new database for read/write traffic while all other parts of the application continue to use the old database. When all microservices have been migrated, you decommission your legacy application. This strategy breaks up the migration into smaller, manageable pieces and can, therefore, reduce the risks that are associated with one big migration.
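Because each microservice owns its tables exclusively, routing in this pattern can start as a simple lookup from service name to database connection: migrated services resolve to the new database, and everything else stays on the legacy one. The service names and SQLite stand-ins below are illustrative.

```python
import sqlite3

legacy_db = sqlite3.connect(":memory:")
new_db = sqlite3.connect(":memory:")

# Grows one entry at a time as each microservice is cut over.
MIGRATED_SERVICES = {"billing"}

def connection_for(service):
    """Route a service to the database it has been cut over to."""
    return new_db if service in MIGRATED_SERVICES else legacy_db

routed_new = connection_for("billing") is new_db      # already migrated
routed_old = connection_for("inventory") is legacy_db  # still on legacy
```

Adding a service to the migrated set is the per-service cutover; removing the legacy entries entirely corresponds to decommissioning the old application at the end of the migration.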

# Follow best practices on AWS


In addition to the migration activities discussed in the previous sections, you should invest time to make sure that you are following the best practices to host your database in the AWS Cloud. See the [AWS documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_BestPractices.html) for best practices for working with relational databases on AWS.