

# Migration tools and services overview

This chapter provides conceptual content about AWS tools and features used to migrate from Microsoft SQL Server 2019 to Amazon Aurora MySQL. It introduces key tools and services such as AWS Schema Conversion Tool (AWS SCT), AWS Database Migration Service (AWS DMS), Amazon RDS on Outposts, Amazon RDS Proxy, Amazon Aurora Serverless, and Aurora’s Backtrack and parallel query features. These concepts are interconnected, offering a comprehensive overview of the migration process, database management options, and performance optimization techniques available within the AWS ecosystem.

**Topics**
+ [AWS Schema Conversion Tool overview](chap-sql-server-aurora-mysql.tools.awssct.md)
+ [AWS SCT action code index](chap-sql-server-aurora-mysql.tools.actioncode.md)
+ [AWS Database Migration Service overview](chap-sql-server-aurora-mysql.tools.awsdms.md)
+ [Amazon RDS on Outposts overview](chap-sql-server-aurora-mysql.tools.rdsoutposts.md)
+ [Amazon RDS Proxy overview](chap-sql-server-aurora-mysql.tools.rdsproxy.md)
+ [Amazon Aurora Serverless v1 overview](chap-sql-server-aurora-mysql.tools.auroraserverless.md)
+ [Amazon Aurora Backtrack overview](chap-sql-server-aurora-mysql.tools.aurorabacktrack.md)
+ [Amazon Aurora Parallel Query overview](chap-sql-server-aurora-mysql.tools.parallelquery.md)

# AWS Schema Conversion Tool overview


You can use the AWS Schema Conversion Tool (AWS SCT) to streamline the migration of your Microsoft SQL Server 2019 database to Amazon Aurora MySQL. This tool not only converts compatible objects automatically but also provides detailed recommendations for handling objects that require manual intervention. With AWS SCT, you can efficiently plan and execute your database migration, saving time and minimizing potential errors in the transition to Aurora MySQL.

The AWS Schema Conversion Tool (AWS SCT) is a Java utility that connects to source and target databases, scans the source database schema objects (tables, views, indexes, procedures, and so on), and converts them to target database objects.

This section provides a step-by-step process for using AWS SCT to migrate an SQL Server database to an Aurora MySQL database cluster. Since AWS SCT can automatically migrate most of the database objects, it greatly reduces manual effort.

We recommend starting every migration with the process outlined in this section and then using the rest of the Playbook to further explore manual solutions for objects that couldn’t be migrated automatically. For more information, see the AWS Schema Conversion Tool [User Guide](http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/Welcome.html).

**Note**  
This walkthrough uses the AWS Database Migration Service Sample Database. You can download it from [GitHub](https://github.com/aws-samples/aws-database-migration-samples).

## Download the Software and Drivers


Download and install AWS SCT. For more information, see [Installing, verifying, and updating](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Installing.html) in the AWS Schema Conversion Tool User Guide.

Download the [Microsoft SQL Server](https://docs.microsoft.com/en-us/sql/connect/jdbc/release-notes-for-the-jdbc-driver?view=sql-server-ver15#72) and [MySQL](https://dev.mysql.com/downloads/connector/j/) drivers. For more information, see [Installing the required database drivers](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Installing.html#CHAP_Installing.JDBCDrivers) in the AWS Schema Conversion Tool User Guide.

## Configure AWS SCT


1. Start AWS Schema Conversion Tool (AWS SCT).

1. Choose **Settings** and then choose **Global settings**.

1. On the left navigation bar, choose **Drivers**.

1. Enter the paths for the SQL Server and MySQL drivers downloaded in the first step.

    ![\[Enter the paths for the Microsoft SQL Server and MySQL drivers\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-sql-server-aurora-mysql-configure-aws-sct.png) 

1. Choose **Apply** and then **OK**.

## Create a New Migration Project


1. In AWS SCT, choose **File**, and then choose **New project wizard**. Alternatively, use the keyboard shortcut **Ctrl+W**.

1. Enter a project name and select a location for the project files. For **Source engine**, choose **Microsoft SQL Server**, and then choose **Next**.

1. Enter connection details for the source SQL Server database and choose **Test connection** to verify. Choose **Next**.

1. Select the schema or database to migrate and choose **Next**.

The progress bar displays the objects that AWS SCT analyzes. When AWS SCT completes the analysis, the application displays the database migration assessment report. Read the Executive summary and other sections. Note that the information on the screen is only partial. To read the full report, including details of the individual issues, choose **Save to PDF** at the top right and open the PDF document.

![\[Assessment report\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-sql-server-aurora-mysql-aws-sct-assessment-report.png)


Scroll down to the **Database objects with conversion actions for Amazon Aurora (MySQL compatible)** section.

![\[Assessment report conversion statistics\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-sql-server-aurora-mysql-aws-sct-assessment-report-conversion-statistics.png)


Scroll further down to the **Detailed recommendations for Amazon Aurora (MySQL compatible) migrations** section and review the migration recommendations.

Return to AWS SCT and choose **Next**. Enter the connection details for the target Aurora MySQL database and choose **Finish**.

When the connection is complete, AWS SCT displays the main window. In this interface, you can explore the individual issues and recommendations discovered by AWS SCT.

Choose the schema, open the context (right-click) menu, and then choose **Create report** to create a report tailored for the target database type. You can view this report in AWS SCT.

The progress bar updates while the report is generated.

 AWS SCT displays the executive summary page of the database migration assessment report.

Choose **Action items**. In this window, you can investigate each issue in detail and view the suggested course of action. For each issue, drill down to view all instances of that issue.

Choose the database name, open the context (right-click) menu, and choose **Convert schema**. Make sure that you uncheck the `sys` and `information_schema` system schemas. This step doesn’t make any changes to the target database.

On the right pane, AWS SCT displays the new virtual schema as if it exists in the target database. Drilling down into individual objects displays the actual syntax generated by AWS SCT to migrate the objects.

Choose the database on the right pane, open the context (right-click) menu, and choose either **Apply to database** to automatically run the conversion script against the target database, or choose **Save as SQL** to save to an SQL file.

We recommend saving to an SQL file because you can verify and QA the converted code. Also, you can make the adjustments needed for objects that couldn’t be automatically converted.

For more information, see the AWS Schema Conversion Tool [User Guide](http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/Welcome.html).

# AWS SCT action code index


This topic provides reference information for the automation levels and action codes used by AWS Schema Conversion Tool (AWS SCT) when migrating from Microsoft SQL Server 2019 to Amazon Aurora MySQL. You can use this information to understand the degree of automation available for various database objects and features during the migration process.

The following table shows the icons we use to describe the automation levels of AWS Schema Conversion Tool (AWS SCT) and AWS Database Migration Service (AWS DMS).


| Automation level icon | Description | 
| --- | --- | 
|   ![\[Five star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-5.png)   |   **Full automation** — AWS SCT performs fully automatic conversion, no manual conversion needed.  | 
|   ![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-4.png)   |   **High automation** — Minor, simple manual conversions may be needed.  | 
|   ![\[Three star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-3.png)   |   **Medium automation** — Low-medium complexity manual conversions may be needed.  | 
|   ![\[Two star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-2.png)   |   **Low automation** — Medium-high complexity manual conversions may be needed.  | 
|   ![\[One star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-1.png)   |   **Very low automation** — High risk or complex manual conversions may be needed.  | 
|   ![\[No automation\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-0.png)   |   **No automation** — Not currently supported by AWS SCT, manual conversion is required for this feature.  | 

The following sections list the AWS Schema Conversion Tool action codes for topics that are covered in this playbook.

**Note**  
The links in the table point to the Microsoft SQL Server topic pages, which are immediately followed by the MySQL pages for the same topics.

## Creating Tables


![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-4.png)


 AWS SCT automatically converts the most commonly used constructs of the `CREATE TABLE` statement because both SQL Server and Amazon Aurora MySQL-Compatible Edition (Aurora MySQL) support entry-level American National Standards Institute (ANSI) compliance. These items include table names (including the containing security schema or database), column names, basic column data types, column and table constraints, column default values, and primary, `UNIQUE`, and foreign keys. Some changes may be required for computed columns and global temporary tables.

For more information, see [Creating Tables](chap-sql-server-aurora-mysql.sql.creatingtables.md).


| Action code | Action message | 
| --- | --- | 
|  659  |  If you use recursion, make sure that table variables in your source database and temporary tables in your target database have the same scope.  | 
|  679  |   AWS SCT replaced computed columns with triggers.  | 
|  680  |  MySQL doesn’t support global temporary tables.  | 
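As a sketch of the computed-column replacement (action code 679), the converted schema keeps a regular column and maintains it with triggers. The `items` table and its columns are hypothetical, not from the sample database:

```sql
-- SQL Server source (computed column):
-- CREATE TABLE items (price MONEY, qty INT, total AS (price * qty));

-- Aurora MySQL target: a regular column maintained by triggers
CREATE TABLE items (
  price DECIMAL(19,4),
  qty   INT,
  total DECIMAL(19,4)
);

DELIMITER //
CREATE TRIGGER items_total_ins BEFORE INSERT ON items
FOR EACH ROW SET NEW.total = NEW.price * NEW.qty//
CREATE TRIGGER items_total_upd BEFORE UPDATE ON items
FOR EACH ROW SET NEW.total = NEW.price * NEW.qty//
DELIMITER ;
```

Compare the triggers that AWS SCT generates for your schema against this pattern before applying them.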

## Constraints


![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-4.png)


 AWS Schema Conversion Tool (AWS SCT) automatically converts most constraints because SQL Server and Amazon Aurora MySQL-Compatible Edition (Aurora MySQL) support entry-level ANSI compliance. These items include primary keys, foreign keys, null constraints, unique constraints, and default constraints with some exceptions. Manual conversions are required for some foreign key cascading options. AWS SCT replaces check constraints with triggers, and some default expressions for `DateTime` columns aren’t supported for automatic conversion. AWS SCT can’t automatically convert complex expressions for other default values.

For more information, see [Constraints](chap-sql-server-aurora-mysql.sql.constraints.md).


| Action code | Action message | 
| --- | --- | 
|  676  |  MySQL doesn’t support the `SET DEFAULT` referential constraint action.  | 
|  677  |  MySQL doesn’t support functions or expressions as a default value for `BLOB` and `TEXT` columns.  | 
|  678  |  MySQL doesn’t support check constraints.  | 
|  825  |   AWS SCT removed the default value of the %s column.  | 
|  826  |   AWS SCT can’t convert the default value of the %s variable.  | 
|  827  |   AWS SCT can’t convert default values.  | 
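To illustrate the check-constraint replacement (action code 678), a trigger-based emulation might look like the following. The `orders` table and constraint name are hypothetical:

```sql
-- SQL Server source:
-- ALTER TABLE orders ADD CONSTRAINT chk_amount CHECK (amount > 0);

-- Aurora MySQL 5.7 parses but doesn't enforce CHECK constraints, so the
-- rule moves into a BEFORE trigger that rejects invalid rows:
DELIMITER //
CREATE TRIGGER orders_chk_amount BEFORE INSERT ON orders
FOR EACH ROW
BEGIN
  IF NEW.amount <= 0 THEN
    SIGNAL SQLSTATE '45000'
      SET MESSAGE_TEXT = 'Check constraint chk_amount violated';
  END IF;
END//
DELIMITER ;
```

A matching `BEFORE UPDATE` trigger is also needed to cover updates.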

## Data Types


![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-4.png)


Data type syntax and rules are very similar between SQL Server and Aurora MySQL, and AWS SCT automatically converts most data type syntax and rules. Note that date and time handling paradigms are different for SQL Server and Aurora MySQL and require manual verification or conversion. Also note that because of differences in data type behavior between SQL Server and Aurora MySQL, manual verification and strict testing are highly recommended.

For more information, see [Data Types](chap-sql-server-aurora-mysql.sql.datatypes.md).


| Action code | Action message | 
| --- | --- | 
|  601  |  MySQL doesn’t support including `BLOB` and `TEXT` columns in foreign keys.  | 
|  706  |   AWS SCT replaced the unsupported %s data type.  | 
|  707  |   AWS SCT can’t convert the usage of a variable of the unsupported %s data type.  | 
|  708  |   AWS SCT can’t convert the usage of the unsupported %s data type.  | 
|  775  |  Converted code might lose accuracy compared to the source code.  | 
|  844  |   AWS SCT expanded fractional seconds support for `TIME`, `DATETIME2`, and `DATETIMEOFFSET` values with up to 6 digits of precision.  | 
|  919  |  MySQL doesn’t support the `DECIMAL` data type with scale greater than 30.  | 
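To make the fractional-seconds note (action code 844) concrete, a typical conversion might look like the following. The `event_log` table is hypothetical, and the exact mappings should be verified against the AWS SCT output for your schema:

```sql
-- SQL Server source:
-- CREATE TABLE event_log (id BIGINT, logged_at DATETIME2(7), note NVARCHAR(MAX));

-- Typical Aurora MySQL target:
CREATE TABLE event_log (
  id        BIGINT,
  logged_at DATETIME(6),  -- fractional seconds capped at 6 digits
  note      LONGTEXT      -- common mapping for NVARCHAR(MAX)
);
```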

## Collations


![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-4.png)


The collation paradigms of SQL Server and Aurora MySQL are significantly different. AWS SCT can successfully migrate most common use cases including data type differences such as `NCHAR` and `NVARCHAR` in SQL Server that don’t exist in Aurora MySQL. Aurora MySQL provides more options and flexibility in terms of collations. Rewrites are required for explicit collation clauses that aren’t supported by Aurora MySQL.

For more information, see [Collations](chap-sql-server-aurora-mysql.tsql.collations.md).


| Action code | Action message | 
| --- | --- | 
|  646  |  MySQL doesn’t support the `COLLATE` clause.  | 

## Window Functions


![\[No automation\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-0.png)


 Aurora MySQL version 5.7 doesn’t support window functions. AWS SCT can’t automatically convert window functions.

For workarounds using traditional SQL syntax, see [Window Functions](chap-sql-server-aurora-mysql.sql.windowfunctions.md).


| Action code | Action message | 
| --- | --- | 
|  647  |  MySQL doesn’t support the analytic form of the %s function.  | 
|  648  |  MySQL doesn’t support the `RANK` function.  | 
|  649  |  MySQL doesn’t support the `DENSE_RANK` function.  | 
|  650  |  MySQL doesn’t support the `NTILE` function.  | 
|  754  |  MySQL doesn’t support `STDEV` functions with the `DISTINCT` clause.  | 
|  755  |  MySQL doesn’t support `STDEVP` functions with the `DISTINCT` clause.  | 
|  756  |  MySQL doesn’t support `VAR` functions with the `DISTINCT` clause.  | 
|  757  |  MySQL doesn’t support `VARP` functions with the `DISTINCT` clause.  | 
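As an example of a manual workaround, a `ROW_NUMBER` query can be emulated in Aurora MySQL 5.7 with user variables. The `players` table is hypothetical, and this technique relies on evaluation order that MySQL doesn’t guarantee, so verify results on your workload:

```sql
-- SQL Server source:
-- SELECT name, score, ROW_NUMBER() OVER (ORDER BY score DESC) AS rn FROM players;

-- Aurora MySQL 5.7 emulation: sort in a derived table, then number the rows
SELECT name, score, (@rn := @rn + 1) AS rn
FROM (SELECT name, score FROM players ORDER BY score DESC) AS sorted
CROSS JOIN (SELECT @rn := 0) AS init;
```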

## PIVOT and UNPIVOT


![\[No automation\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-0.png)


 Aurora MySQL version 5.7 doesn’t support the `PIVOT` and `UNPIVOT` syntax. AWS SCT can’t automatically convert the PIVOT and UNPIVOT clauses.

For workarounds using traditional SQL syntax, see [PIVOT and UNPIVOT](chap-sql-server-aurora-mysql.tsql.pivot.md).


| Action code | Action message | 
| --- | --- | 
|  905  |  MySQL doesn’t support `PIVOT` clauses for `SELECT` statements.  | 
|  906  |  MySQL doesn’t support `UNPIVOT` clauses for `SELECT` statements.  | 
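The usual workaround replaces `PIVOT` with conditional aggregation. The `sales` table and quarter values here are illustrative:

```sql
-- SQL Server source:
-- SELECT * FROM sales PIVOT (SUM(amount) FOR quarter IN ([Q1], [Q2])) AS p;

-- Aurora MySQL workaround with conditional aggregation:
SELECT product,
       SUM(CASE WHEN quarter = 'Q1' THEN amount ELSE 0 END) AS Q1,
       SUM(CASE WHEN quarter = 'Q2' THEN amount ELSE 0 END) AS Q2
FROM sales
GROUP BY product;
```

Unlike `PIVOT`, the output columns must be listed explicitly, so dynamic column sets require dynamic SQL.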

## TOP and FETCH


![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-4.png)


 Aurora MySQL supports the non-ANSI-compliant (although popular with other common RDBMS engines) `LIMIT ... OFFSET` operator for paging of result sets. Despite the differences, AWS SCT can automatically convert most common paging queries to use the Aurora MySQL syntax. Some options such as `PERCENT` and `WITH TIES` can’t be automatically converted and require manual conversion.

For more information, see [SQL Server TOP and FETCH and MySQL LIMIT](chap-sql-server-aurora-mysql.tsql.topfetch.md).


| Action code | Action message | 
| --- | --- | 
|  604  |  MySQL doesn’t support the `PERCENT` argument in `TOP` clauses. AWS SCT skips this argument in the converted code.  | 
|  605  |  MySQL doesn’t support the `WITH TIES` argument in `TOP` clauses. AWS SCT skips this argument in the converted code.  | 
|  608  |  MySQL doesn’t support the `PERCENT` argument in `TOP` clauses of `INSERT` statements.  | 
|  612  |  MySQL doesn’t support the `PERCENT` argument in `TOP` clauses of `UPDATE` statements.  | 
|  621  |  MySQL doesn’t support the `PERCENT` argument in `TOP` clauses. AWS SCT skips this argument in the converted code.  | 
|  830  |  MySQL doesn’t support `LIMIT` clauses with `IN`, `ALL`, `ANY`, or `SOME` subqueries.  | 
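The typical conversions look like the following; the `orders` table is hypothetical:

```sql
-- SQL Server source:
-- SELECT TOP 10 * FROM orders ORDER BY order_date DESC;
SELECT * FROM orders ORDER BY order_date DESC LIMIT 10;

-- SQL Server source:
-- SELECT * FROM orders ORDER BY order_date
--   OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY;
SELECT * FROM orders ORDER BY order_date LIMIT 10 OFFSET 20;
```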

## Common Table Expressions


![\[No automation\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-0.png)


 Aurora MySQL version 5.7 doesn’t support common table expressions. AWS SCT can’t automatically convert common table expressions.

For workarounds using traditional SQL syntax, see [Common Table Expressions](chap-sql-server-aurora-mysql.sql.cte.md).


| Action code | Action message | 
| --- | --- | 
|  611  |  MySQL doesn’t support `WITH` queries in `UPDATE` statements.  | 
|  619  |  MySQL doesn’t support query definitions for common table expressions.  | 
|  839  |  MySQL doesn’t support query definitions for common table expressions.  | 
|  840  |   AWS SCT can’t convert updated common table expressions.  | 
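For non-recursive common table expressions, the standard workaround is a derived table in the `FROM` clause. The `orders` table and column names are illustrative:

```sql
-- SQL Server source:
-- WITH totals AS (
--   SELECT customer_id, SUM(amount) AS total FROM orders GROUP BY customer_id
-- )
-- SELECT * FROM totals WHERE total > 1000;

-- Aurora MySQL 5.7 workaround with a derived table:
SELECT t.customer_id, t.total
FROM (SELECT customer_id, SUM(amount) AS total
      FROM orders
      GROUP BY customer_id) AS t
WHERE t.total > 1000;
```

Recursive CTEs have no direct equivalent in Aurora MySQL 5.7 and typically need a stored procedure loop or a staged temporary table.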

## Cursors


![\[Three star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-3.png)


 AWS SCT automatically converts the most commonly used cursor operations, including forward-only, read-only cursors and the `DECLARE CURSOR`, `CLOSE CURSOR`, and `FETCH NEXT` operations. Modifications through cursors and non-forward-only fetches, which aren’t supported by Aurora MySQL, require manual conversions.

For more information, see [Cursors](chap-sql-server-aurora-mysql.tsql.cursors.md).


| Action code | Action message | 
| --- | --- | 
|  618  |  MySQL doesn’t support `CURRENT OF` clauses for DML queries that are in the body of a cursor loop.  | 
|  624  |  MySQL doesn’t support `CURRENT OF` clauses for DML queries that are in the body of a cursor loop.  | 
|  625  |  MySQL doesn’t support the `CURSOR` data type as a procedure argument.  | 
|  637  |  MySQL doesn’t support global cursors.  | 
|  638  |  MySQL doesn’t support the `SCROLL` option in cursors.  | 
|  639  |  MySQL doesn’t support dynamic cursors.  | 
|  667  |  MySQL doesn’t support the %s option in cursors.  | 
|  668  |  MySQL doesn’t support the `FIRST` option in cursors.  | 
|  669  |  MySQL doesn’t support the `PRIOR` option in cursors.  | 
|  670  |  MySQL doesn’t support the `ABSOLUTE` option in cursors.  | 
|  671  |  MySQL doesn’t support the `RELATIVE` option in cursors.  | 
|  692  |  MySQL doesn’t support cursor variables.  | 
|  700  |   AWS SCT can’t convert the `KEYSET` option because MySQL doesn’t support changing the membership and order of rows for cursors.  | 
|  701  |   AWS SCT doesn’t convert the `FAST_FORWARD` option because this is a default option for cursors in MySQL.  | 
|  702  |   AWS SCT doesn’t convert the `READ_ONLY` option because this is a default option for cursors in MySQL.  | 
|  703  |  MySQL doesn’t support the `SCROLL_LOCKS` option.  | 
|  704  |  MySQL doesn’t support the `OPTIMISTIC` option for cursors.  | 
|  705  |  MySQL doesn’t support the `TYPE_WARNING` option for cursors.  | 
|  842  |  MySQL doesn’t support the %s option in cursors.  | 
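A converted forward-only, read-only cursor typically looks like the following sketch. The `accounts` table and procedure name are hypothetical:

```sql
DELIMITER //
CREATE PROCEDURE sum_balances (OUT p_total DECIMAL(12,2))
BEGIN
  DECLARE v_balance DECIMAL(12,2);
  DECLARE v_done INT DEFAULT 0;
  -- Forward-only, read-only cursor: the only kind MySQL supports
  DECLARE cur CURSOR FOR SELECT balance FROM accounts;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET v_done = 1;

  SET p_total = 0;
  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO v_balance;   -- rough equivalent of T-SQL FETCH NEXT
    IF v_done = 1 THEN
      LEAVE read_loop;
    END IF;
    SET p_total = p_total + v_balance;
  END LOOP;
  CLOSE cur;
END//
DELIMITER ;
```

Note the `NOT FOUND` handler, which replaces the T-SQL `@@FETCH_STATUS` loop condition.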

## Flow Control


![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-4.png)


Although the flow control syntax of SQL Server differs from Aurora MySQL, AWS SCT can convert most constructs automatically including loops, command blocks, and delays. Aurora MySQL doesn’t support the `GOTO` command nor the `WAITFOR TIME` command, which require manual conversion.

For more information, see [Flow Control](chap-sql-server-aurora-mysql.tsql.flowcontrol.md).


| Action code | Action message | 
| --- | --- | 
|  628  |  MySQL doesn’t support `GOTO` statements.  | 
|  691  |  MySQL doesn’t support the `WAITFOR TIME` feature.  | 
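For example, a delay converts directly, while a scheduled wait does not:

```sql
-- T-SQL: WAITFOR DELAY '00:00:05';
DO SLEEP(5);

-- T-SQL: WAITFOR TIME '22:00' has no direct equivalent; reschedule the
-- work instead (for example, with a MySQL EVENT or an application-side
-- scheduler).
```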

## Transaction Isolation


![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-4.png)


 Aurora MySQL supports the following four transaction isolation levels specified in the SQL:92 standard: `READ UNCOMMITTED`, `READ COMMITTED`, `REPEATABLE READ`, and `SERIALIZABLE`. AWS SCT automatically converts all these transaction isolation levels. AWS SCT also converts `BEGIN`, `COMMIT`, and `ROLLBACK` commands that use slightly different syntax. Manual conversion is required for named, marked, and delayed durability transactions that aren’t supported by Aurora MySQL.

For more information, see [Transactions](chap-sql-server-aurora-mysql.tsql.transactions.md).


| Action code | Action message | 
| --- | --- | 
|  629  |  MySQL doesn’t support named transactions.  | 
|  630  |  MySQL doesn’t support `WITH MARK` options.  | 
|  631  |  MySQL doesn’t support distributed transactions.  | 
|  632  |  MySQL doesn’t support rolling back named transactions.  | 
|  633  |  MySQL doesn’t support the `DELAYED_DURABILITY` option.  | 
|  916  |  MySQL doesn’t support the `SNAPSHOT` transaction isolation level.  | 
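The converted transaction syntax looks like the following; the `accounts` table is hypothetical:

```sql
-- SQL Server: BEGIN TRANSACTION ... COMMIT TRANSACTION
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
```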

## Stored Procedures


![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-4.png)


 Aurora MySQL stored procedures provide very similar functionality to SQL Server stored procedures. AWS SCT automatically converts SQL Server stored procedures. Manual conversion is required for procedures that use `RETURN` values and some less common `EXECUTE` options such as `RECOMPILE` and `RESULTS SETS`.

For more information, see [Stored Procedures](chap-sql-server-aurora-mysql.tsql.storedprocedures.md).


| Action code | Action message | 
| --- | --- | 
|  640  |  MySQL doesn’t support `EXECUTE` statements with the `WITH RECOMPILE` option.  | 
|  641  |  MySQL doesn’t support `EXECUTE` statements with the `RESULT SETS UNDEFINED` option.  | 
|  642  |  MySQL doesn’t support `EXECUTE` statements with the `RESULT SETS NONE` option.  | 
|  643  |  MySQL doesn’t support `EXECUTE` statements with the `RESULT SETS DEFINITION` option.  | 
|  689  |  MySQL doesn’t support `RETURN` statements that are used to return values from a procedure.  | 
|  695  |  MySQL doesn’t support the call of a procedure as a variable.  | 
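To address action code 689, a procedure that used `RETURN` in T-SQL is typically rewritten with an `OUT` parameter. The `orders` table and procedure name are hypothetical:

```sql
-- T-SQL procedures can RETURN an integer status; a converted MySQL
-- procedure surfaces that value through an OUT parameter instead.
DELIMITER //
CREATE PROCEDURE get_order_count (IN p_customer_id INT, OUT p_count INT)
BEGIN
  SELECT COUNT(*) INTO p_count
  FROM orders
  WHERE customer_id = p_customer_id;
END//
DELIMITER ;

-- Usage:
CALL get_order_count(42, @cnt);
SELECT @cnt;
```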

## Triggers


![\[Three star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-3.png)


 Aurora MySQL supports `BEFORE` and `AFTER` triggers for `INSERT`, `UPDATE`, and `DELETE`. Aurora MySQL triggers differ substantially from SQL Server triggers, but most common use cases can be migrated with minimal code changes. Although AWS SCT can automatically migrate trigger code, manual inspection and potential code modifications may be required because Aurora MySQL triggers run once for each row, not once for each statement as SQL Server triggers do.

For more information, see [Triggers](chap-sql-server-aurora-mysql.tsql.triggers.md).


| Action code | Action message | 
| --- | --- | 
|  686  |  MySQL doesn’t support triggers with the `FOR STATEMENT` clause.  | 
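The per-row model looks like the following sketch; the `orders` and `orders_audit` tables are hypothetical:

```sql
-- A SQL Server statement-level trigger sees all affected rows through the
-- inserted/deleted pseudo-tables; an Aurora MySQL trigger fires once per
-- row and sees only NEW and OLD:
DELIMITER //
CREATE TRIGGER orders_status_audit AFTER UPDATE ON orders
FOR EACH ROW
BEGIN
  IF NEW.status <> OLD.status THEN
    INSERT INTO orders_audit (order_id, old_status, new_status)
    VALUES (OLD.id, OLD.status, NEW.status);
  END IF;
END//
DELIMITER ;
```

Logic that aggregated over the whole `inserted` set in SQL Server must be restructured for this per-row model.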

## GROUP BY


![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-4.png)


 AWS SCT automatically converts `GROUP BY` queries, except for those that use `CUBE` and `GROUPING SETS`. You can create workarounds for these queries, but they require manual code changes.

For more information, see [GROUP BY](chap-sql-server-aurora-mysql.sql.groupby.md).


| Action code | Action message | 
| --- | --- | 
|  654  |  MySQL doesn’t support the `GROUP BY CUBE` option.  | 
|  655  |  MySQL doesn’t support `GROUP BY GROUPING SETS` clauses.  | 
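As one manual workaround, a two-column `CUBE` can be emulated with a union of its groupings. The `sales` table is hypothetical:

```sql
-- T-SQL: SELECT region, product, SUM(amount) FROM sales
--        GROUP BY CUBE (region, product);

-- Aurora MySQL 5.7 offers only WITH ROLLUP; a full CUBE over two columns
-- is the union of all four groupings:
SELECT region, product, SUM(amount) AS total FROM sales GROUP BY region, product
UNION ALL
SELECT region, NULL,    SUM(amount)          FROM sales GROUP BY region
UNION ALL
SELECT NULL,   product, SUM(amount)          FROM sales GROUP BY product
UNION ALL
SELECT NULL,   NULL,    SUM(amount)          FROM sales;
```

The number of branches grows quickly with more grouping columns, so consider whether `WITH ROLLUP` covers the actual reporting need.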

## Identity and Sequences


![\[Three star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-3.png)


Although the syntax for SQL Server `IDENTITY` and Aurora MySQL `AUTO_INCREMENT` auto-enumeration columns differs significantly, AWS SCT can convert it automatically. Some limitations imposed by Aurora MySQL require manual conversion, such as explicit `SEED` and `INCREMENT` values, auto-enumeration columns that aren’t part of the primary key, and table-independent `SEQUENCE` objects.

For more information, see [Identity and Sequences](chap-sql-server-aurora-mysql.tsql.identitysequences.md).


| Action code | Action message | 
| --- | --- | 
|  696  |  MySQL doesn’t support identity columns with seed and increment.  | 
|  697  |  MySQL doesn’t support identity columns outside the primary key.  | 
|  732  |  MySQL doesn’t support identity columns in compound primary keys.  | 
|  815  |  MySQL doesn’t support sequences.  | 
|  841  |  MySQL doesn’t support numeric (x, 0) or decimal (x, 0) data types in columns with the `AUTO_INCREMENT` option. AWS SCT replaced this data type with a compatible data type.  | 
|  920  |  MySQL doesn’t support identity columns of the `DECIMAL` or `NUMERIC` data type with precision greater than 19.  | 
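A typical conversion looks like the following sketch; the `invoices` table is hypothetical:

```sql
-- SQL Server source:
-- CREATE TABLE invoices (id INT IDENTITY(1,1) PRIMARY KEY, total MONEY);
CREATE TABLE invoices (
  id    INT NOT NULL AUTO_INCREMENT,
  total DECIMAL(19,4),
  PRIMARY KEY (id)   -- an AUTO_INCREMENT column must be part of a key
);

-- A non-default seed can be emulated per table; a non-default increment
-- step can only be set server-wide (auto_increment_increment):
ALTER TABLE invoices AUTO_INCREMENT = 1000;
```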

## Error Handling


![\[Three star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-3.png)


The error handling paradigms in Aurora MySQL and SQL Server are significantly different; the former uses condition and handler objects. AWS SCT migrates the basic error handling constructs automatically. Due to the paradigm differences, we highly recommend that you perform strict inspection and validation of the migrated code. Manual conversions are required for `THROW` with variables and for built-in messages in SQL Server.

For more information, see [Error Handling](chap-sql-server-aurora-mysql.tsql.errorhandling.md).


| Action code | Action message | 
| --- | --- | 
|  729  |   AWS SCT can’t convert `THROW` operators with variables.  | 
|  730  |   AWS SCT truncated the error code.  | 
|  733  |  MySQL doesn’t support `PRINT` procedures.  | 
|  814  |   AWS SCT can’t convert the `RAISERROR` operator with messages from the `sys.messages` view.  | 
|  837  |  MySQL uses a different approach to handle errors compared to the source code.  | 
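A rough equivalent of the T-SQL `BEGIN TRY ... BEGIN CATCH` pattern uses a MySQL condition handler; the `orders` table and procedure name are hypothetical:

```sql
DELIMITER //
CREATE PROCEDURE insert_order (IN p_id INT)
BEGIN
  -- Fires on any SQL error inside the procedure body, then exits
  DECLARE EXIT HANDLER FOR SQLEXCEPTION
  BEGIN
    ROLLBACK;
    RESIGNAL;   -- re-raise after cleanup, similar to THROW in a CATCH block
  END;

  START TRANSACTION;
  INSERT INTO orders (id) VALUES (p_id);
  COMMIT;
END//
DELIMITER ;
```

Because handlers are declared before the statements they guard, converted code reads in a different order than the original `TRY`/`CATCH` blocks, which is one reason careful inspection is recommended.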

## Date and Time Functions


![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-4.png)


 AWS SCT automatically converts the most commonly used date and time functions despite the significant difference in syntax. Be aware of differences in data types, time zone awareness, and locale handling, as well as the functions themselves, and inspect the expression value output carefully. Some less commonly used options such as millisecond, nanosecond, and time zone offsets require manual conversion.

For more information, see [Date and Time Functions](chap-sql-server-aurora-mysql.tsql.datetime.md).


| Action code | Action message | 
| --- | --- | 
|  759  |  MySQL doesn’t support `DATEADD` functions with the nanosecond date part.  | 
|  760  |  MySQL doesn’t support `DATEDIFF` functions with the week date part.  | 
|  761  |  MySQL doesn’t support `DATEDIFF` functions with the millisecond date part.  | 
|  762  |  MySQL doesn’t support `DATEDIFF` functions with the nanosecond date part.  | 
|  763  |  MySQL doesn’t support `DATENAME` functions with the millisecond date part.  | 
|  764  |  MySQL doesn’t support `DATENAME` functions with the nanosecond date part.  | 
|  765  |  MySQL doesn’t support `DATENAME` functions with the TZoffset date part.  | 
|  767  |  MySQL doesn’t support `DATEPART` functions with the nanosecond date part.  | 
|  768  |  MySQL doesn’t support `DATEPART` functions with the TZoffset date part.  | 
|  773  |   AWS SCT can’t convert arithmetic operations with dates.  | 
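A few common conversions, shown against a hypothetical `orders` table; note the reversed argument order for `DATEDIFF`:

```sql
-- GETDATE()            -> NOW()
-- DATEADD(DAY, 7, d)   -> DATE_ADD(d, INTERVAL 7 DAY)
-- DATEDIFF(DAY, a, b)  -> DATEDIFF(b, a)
SELECT NOW() AS today,
       DATE_ADD(order_date, INTERVAL 7 DAY) AS due_date,
       DATEDIFF(NOW(), order_date) AS age_in_days
FROM orders;
```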

## User-Defined Functions


![\[Three star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-3.png)


 Aurora MySQL supports only scalar user-defined functions, which are automatically converted by AWS SCT. Table-valued user-defined functions, both in-line and multi-statement, require manual conversion. Workarounds using views or derived tables should be straightforward in most cases.

For more information, see [User-Defined Functions](chap-sql-server-aurora-mysql.tsql.udf.md).


| Action code | Action message | 
| --- | --- | 
|  777  |   AWS SCT can’t emulate a table-valued function because the column from the current query is used as a function parameter.  | 
|  822  |  MySQL doesn’t support table-valued functions in views.  | 

## User-Defined Types


![\[Three star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-3.png)


 Aurora MySQL 5.7 doesn’t support user-defined types or user-defined table-valued parameters. AWS SCT can convert standard user-defined types by replacing them with their base types, but manual conversion is required for user-defined table types, which are used as table-valued parameters for stored procedures.

For more information, see [User-Defined Types](chap-sql-server-aurora-mysql.tsql.udt.md).


| Action code | Action message | 
| --- | --- | 
|  690  |  MySQL doesn’t support table types.  | 

## Synonyms


![\[No automation\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-0.png)


 Aurora MySQL version 5.7 doesn’t support synonyms. AWS SCT can’t automatically convert synonyms.

For more information, see [Synonyms](chap-sql-server-aurora-mysql.tsql.synonyms.md).


| Action code | Action message | 
| --- | --- | 
|  792  |  MySQL doesn’t support synonyms.  | 

## XML and JSON


![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-4.png)


 Aurora MySQL provides minimal support for XML, but it does offer a native JSON data type and more than 25 dedicated JSON functions. Despite these differences, the most commonly used basic XML functions can be automatically migrated by AWS SCT. Some options such as `EXPLICIT`, used in functions or with subqueries, require manual conversion.

For more information, see [JSON and XML](chap-sql-server-aurora-mysql.tsql.xml.md).


| Action code | Action message | 
| --- | --- | 
|  817  |   AWS SCT can’t convert `FOR XML` clauses with `EXPLICIT` mode specified.  | 
|  818  |   AWS SCT can’t convert correlated subqueries with `FOR XML` clauses.  | 
|  843  |   AWS SCT can’t convert `FOR XML` statements in functions.  | 
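Because Aurora MySQL’s strength here is its native JSON type, one common manual workaround for XML-heavy code paths is to convert XML payloads to JSON before loading them into a JSON column. The following is a minimal, hypothetical sketch (the element names and the target table in the comment are illustrative, not from the source schema):

```python
import json
import xml.etree.ElementTree as ET

def xml_to_json(xml_text: str) -> str:
    """Convert a flat XML fragment to a JSON object string.

    Illustrative only: handles a single level of child elements,
    which is often enough for simple FOR XML-style payloads.
    """
    root = ET.fromstring(xml_text)
    return json.dumps({child.tag: child.text for child in root})

row = xml_to_json("<row><id>1</id><name>Widget</name></row>")
# The resulting string can be stored in a MySQL JSON column, for example:
# INSERT INTO products (doc) VALUES (CAST(%s AS JSON))
```

Once the data is in a JSON column, Aurora MySQL’s dedicated JSON functions (such as `JSON_EXTRACT`) take over the role that the XML functions played in SQL Server.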

## Table Joins


![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-4.png)


 AWS SCT automatically converts the most commonly used join types. These types include `INNER`, `OUTER`, and `CROSS` joins. `APPLY` joins, also known as `LATERAL` joins, aren’t supported by Aurora MySQL and require manual conversion.

For more information, see [Table JOIN](chap-sql-server-aurora-mysql.sql.tablejoin.md).


| Action code | Action message | 
| --- | --- | 
|  831  |  MySQL doesn’t support the `CROSS APPLY` and `OUTER APPLY` operators where the subquery references a column of the attached table.  | 
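To see why these operators need manual attention, it helps to recall their semantics: `CROSS APPLY` evaluates a row-producing expression once per outer row and keeps only the outer rows that produce results. A hypothetical in-memory sketch of that behavior (the function and data names are illustrative):

```python
def cross_apply(rows, fn):
    """Pair each outer row with every row produced by fn(row).

    Outer rows with no matches are dropped, like an inner join;
    OUTER APPLY would keep them with NULL placeholders instead.
    In MySQL, this logic usually becomes a rewritten JOIN or a
    correlated subquery during manual conversion.
    """
    return [(row, item) for row in rows for item in fn(row)]

orders = [{"id": 1, "items": ["a", "b"]}, {"id": 2, "items": []}]
pairs = cross_apply(orders, lambda o: o["items"])
# Order 2 produced no rows, so it does not appear in the output.
```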

## MERGE


![\[No automation\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-0.png)


 Aurora MySQL version 5.7 doesn’t support the `MERGE` statement. AWS SCT can’t automatically convert `MERGE` statements. Manual conversion is straightforward in most cases.

For more information and potential workarounds, see [MERGE](chap-sql-server-aurora-mysql.tsql.merge.md).


| Action code | Action message | 
| --- | --- | 
|  832  |  MySQL doesn’t support `MERGE` statements.  | 
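The usual manual rewrite maps a simple `MERGE` onto MySQL’s `INSERT ... ON DUPLICATE KEY UPDATE`. The following hypothetical sketch illustrates the matched/not-matched semantics in memory; the table and column names in the SQL comments are illustrative:

```python
def upsert(table, key, values):
    """Emulate the row-level behavior of MySQL's
    INSERT ... ON DUPLICATE KEY UPDATE, the usual manual
    replacement for a simple SQL Server MERGE:

      -- SQL Server source (sketch)
      MERGE INTO t USING s ON t.id = s.id
        WHEN MATCHED THEN UPDATE SET qty = s.qty
        WHEN NOT MATCHED THEN INSERT (id, qty) VALUES (s.id, s.qty);

      -- MySQL target (sketch)
      INSERT INTO t (id, qty) VALUES (%s, %s)
        ON DUPLICATE KEY UPDATE qty = VALUES(qty);
    """
    if key in table:                  # WHEN MATCHED
        table[key].update(values)
        return "updated"
    table[key] = dict(values)         # WHEN NOT MATCHED
    return "inserted"

inventory = {}
upsert(inventory, 1, {"qty": 10})     # takes the insert branch
upsert(inventory, 1, {"qty": 7})      # takes the update branch
```

Note that `MERGE` clauses beyond this pattern (such as `WHEN NOT MATCHED BY SOURCE THEN DELETE`) need additional statements in MySQL.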

## Query Hints


![\[Three star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-3.png)


AWS SCT can automatically convert basic query hints such as index hints, except for hints in DML statements. Note that specific optimizations used for SQL Server may be completely inapplicable to a new query optimizer. We recommend that you remove all hints before you start migration testing. Then, selectively apply hints as a last resort if other means such as schema, index, and query optimizations have failed. Aurora MySQL doesn’t support plan guides.

For more information, see [Query Hints and Plan Guides](chap-sql-server-aurora-mysql.tuning.queryhints.md).


| Action code | Action message | 
| --- | --- | 
|  610  |  MySQL doesn’t support hints in `INSERT` statements. AWS SCT skips `WITH(Table_Hint_Limited)` options in the converted code.  | 
|  617  |  MySQL doesn’t support hints in `UPDATE` statements. AWS SCT skips `WITH(Table_Hint_Limited)` options in the converted code.  | 
|  623  |  MySQL doesn’t support hints in `DELETE` statements. AWS SCT skips `WITH(Table_Hint_Limited)` options in the converted code.  | 
|  823  |  MySQL doesn’t support table hints in DML statements.  | 
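As a starting point for the hint-removal step recommended above, a rough text pass over your T-SQL scripts can locate and strip table hints. This is a simplified, hypothetical sketch; it does not handle hints containing nested parentheses such as `INDEX(...)`, and a production pass should use a real T-SQL parser so that other uses of `WITH` (such as common table expressions) are left alone:

```python
import re

# Matches simple T-SQL table hints such as WITH (NOLOCK).
# A CTE like "WITH cte AS (...)" is not matched, because the
# pattern requires "(" immediately after WITH.
TABLE_HINT = re.compile(r"\s+WITH\s*\([^)]*\)", re.IGNORECASE)

def strip_table_hints(sql: str) -> str:
    """Remove simple table hints from a T-SQL statement."""
    return TABLE_HINT.sub("", sql)

print(strip_table_hints("SELECT * FROM dbo.Orders WITH (NOLOCK) WHERE id = 1"))
# SELECT * FROM dbo.Orders WHERE id = 1
```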

## Full-Text Search


![\[No automation\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-0.png)


Migrating full-text indexes from SQL Server to Aurora MySQL requires a full rewrite of the code that creates, manages, and queries full-text indexes. AWS SCT can’t automatically convert full-text indexes.

For more information, see [Full-Text Search](chap-sql-server-aurora-mysql.tsql.fulltextsearch.md).


| Action code | Action message | 
| --- | --- | 
|  687  |  MySQL doesn’t support the `CONTAINS` predicate.  | 
|  688  |  MySQL doesn’t support the `FREETEXT` predicate.  | 

## Indexes


![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-4.png)


 AWS SCT automatically converts basic non-clustered indexes, which are the most commonly used type of indexes. User-defined clustered indexes aren’t supported by Aurora MySQL because clustered indexes are always created for the primary key. In addition, filtered indexes, indexes with included columns, and some SQL Server-specific index options can’t be migrated automatically and require manual conversion.

For more information, see [Indexes](chap-sql-server-aurora-mysql.indexes.md).


| Action code | Action message | 
| --- | --- | 
|  602  |  MySQL has reached the limit of the internal InnoDB maximum key length.  | 
|  681  |  MySQL doesn’t support clustered indexes.  | 
|  682  |  MySQL doesn’t support the `INCLUDE` clause in indexes.  | 
|  683  |  MySQL doesn’t support the `WHERE` clause in indexes.  | 
|  684  |  MySQL doesn’t support the `WITH` clause in indexes.  | 

## Partitioning


![\[No automation\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-0.png)


Because Aurora MySQL stores each table in its own file, and because file management is performed by AWS and can’t be modified, some of the physical aspects of partitioning in SQL Server don’t apply to Aurora MySQL. For example, the concepts of file groups and of assigning partitions to file groups don’t apply. Aurora MySQL supports a much richer framework for table partitioning than SQL Server, with many additional options such as hash partitioning and subpartitioning. However, due to the vast differences in partition creation, querying, and management between Aurora MySQL and SQL Server, AWS SCT doesn’t automatically convert table and index partitions. These items require manual conversion.

For more information, see [Storage](chap-sql-server-aurora-mysql.storage.md).


| Action code | Action message | 
| --- | --- | 
|  907  |   AWS SCT can’t convert tables arranged in several partitions.  | 

## Backup


![\[No automation\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-0.png)


Migrating from a self-managed backup policy to a Platform as a Service (PaaS) environment such as Aurora MySQL is a complete paradigm shift. You no longer need to worry about transaction logs, file groups, disks running out of space, and purging old backups. Amazon Relational Database Service (Amazon RDS) provides guaranteed continuous backup with point-in-time restore up to 35 days. Therefore, AWS SCT doesn’t automatically convert backups.

For more information, see [Backup and Restore](chap-sql-server-aurora-mysql.hadr.backuprestore.md).


| Action code | Action message | 
| --- | --- | 
|  903  |  MySQL doesn’t support functionality similar to SQL Server Backup.  | 

## SQL Server Database Mail


![\[No automation\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-0.png)


 Aurora MySQL doesn’t provide native support for sending mail from the database.

For more information and potential workarounds, see [Database Mail](chap-sql-server-aurora-mysql.management.databasemail.md).


| Action code | Action message | 
| --- | --- | 
|  900  |  MySQL doesn’t support functionality similar to SQL Server Database Mail.  | 

## SQL Server Agent


![\[No automation\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-0.png)


 Aurora MySQL doesn’t provide functionality similar to SQL Server Agent as an external, cross-instance scheduler. However, Aurora MySQL provides a native, in-database scheduler. It is limited to the cluster scope and can’t be used to manage multiple clusters. Therefore, AWS SCT can’t automatically convert Agent jobs and alerts.

For more information, see [SQL Server Agent and MySQL Agent](chap-sql-server-aurora-mysql.management.agent.md).


| Action code | Action message | 
| --- | --- | 
|  902  |  MySQL doesn’t support functionality similar to SQL Server Agent.  | 

## Linked Servers


![\[No automation\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-0.png)


 Aurora MySQL doesn’t support remote data access from the database. Connectivity between schemas is trivial, but connectivity to other instances requires a custom solution. AWS SCT can’t automatically convert commands that run on linked servers.

For more information, see [Linked Servers](chap-sql-server-aurora-mysql.management.linkedservers.md).


| Action code | Action message | 
| --- | --- | 
|  645  |  MySQL doesn’t support running pass-through commands on linked servers.  | 

## Views


![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-4.png)


MySQL views are similar to views in SQL Server. However, there are slight differences between the two, mostly around indexing and triggers on views, and also in the query definition.

For more information, see [Views](chap-sql-server-aurora-mysql.sql.views.md).


| Action code | Action message | 
| --- | --- | 
|  779  |   AWS SCT can’t convert `SELECT` statements that contain a subquery in the `FROM` clause.  | 

# AWS Database Migration Service overview


This topic provides conceptual content about AWS Database Migration Service (AWS DMS). It introduces you to the key features and benefits of AWS DMS, explaining how it can help you migrate databases to AWS quickly and securely.

The AWS Database Migration Service (AWS DMS) helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely-used commercial and open-source databases.

The service supports homogeneous migrations, such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora or Microsoft SQL Server to MySQL. You can also use AWS DMS to stream data to Amazon Redshift, Amazon DynamoDB, and Amazon S3 from any of the supported sources (Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, SAP ASE, SQL Server, IBM Db2 LUW, and MongoDB), enabling consolidation and easy analysis of data in a petabyte-scale data warehouse. You can also use AWS DMS for continuous data replication with high availability.

For AWS DMS pricing, see [Database Migration Service pricing](https://aws.amazon.com/dms/pricing).

For all supported sources for AWS DMS, see [Sources for data migration](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.html).

For all supported targets for AWS DMS, see [Targets for data migration](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.html).

## Migration Tasks Performed by AWS DMS


In a traditional solution, you need to perform capacity analysis, procure hardware and software, install and administer systems, and test and debug the installation. AWS DMS automatically manages the deployment, management, and monitoring of all hardware and software needed for your migration. You can start your migration within minutes of starting the AWS DMS configuration process.

With AWS DMS, you can scale up (or scale down) your migration resources as needed to match your actual workload. For example, if you determine that you need additional storage, you can easily increase your allocated storage and restart your migration, usually within minutes. On the other hand, if you discover that you aren’t using all of the resource capacity you configured, you can easily downsize to meet your actual workload.

 AWS DMS uses a pay-as-you-go model. You only pay for AWS DMS resources while you use them, as opposed to traditional licensing models with up-front purchase costs and ongoing maintenance charges.

 AWS DMS automatically manages all of the infrastructure that supports your migration server including hardware and software, software patching, and error reporting.

 AWS DMS provides automatic failover. If your primary replication server fails for any reason, a backup replication server can take over with little or no interruption of service.

 AWS DMS can help you switch to a modern, perhaps more cost-effective database engine than the one you are running now. For example, AWS DMS can help you take advantage of the managed database services provided by Amazon RDS or Amazon Aurora. Or, it can help you move to the managed data warehouse service provided by Amazon Redshift, NoSQL platforms like Amazon DynamoDB, or low-cost storage platforms like Amazon S3. Conversely, if you want to migrate away from old infrastructure but continue to use the same database engine, AWS DMS also supports that process.

 AWS DMS supports nearly all popular modern DBMS engines as data sources, including Oracle, Microsoft SQL Server, MySQL, MariaDB, PostgreSQL, Db2 LUW, SAP, MongoDB, and Amazon Aurora.

 AWS DMS provides a broad coverage of available target engines including Oracle, Microsoft SQL Server, PostgreSQL, MySQL, Amazon Redshift, SAP ASE, Amazon S3, and Amazon DynamoDB.

You can migrate from any of the supported data sources to any of the supported data targets. AWS DMS supports fully heterogeneous data migrations between the supported engines.

 AWS DMS ensures that your data migration is secure. Data at rest is encrypted with AWS Key Management Service (AWS KMS) encryption. During migration, you can use Secure Sockets Layer (SSL) to encrypt your in-flight data as it travels from source to target.

## How AWS DMS Works


At its most basic level, AWS DMS is a server in the AWS Cloud that runs replication software. You create a source and target connection to tell AWS DMS where to extract from and load to. Then, you schedule a task that runs on this server to move your data. AWS DMS creates the tables and associated primary keys if they don’t exist on the target. You can pre-create the target tables manually if you prefer. Or you can use AWS SCT to create some or all of the target tables, indexes, views, triggers, and so on.

The following diagram illustrates the AWS DMS process.

![\[How the database migration service works\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-how-aws-dms-works.png)


For more information about AWS DMS, see [What is Database Migration Service?](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html) and [Best practices for Database Migration Service](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html).

# Amazon RDS on Outposts overview


This topic provides conceptual information about Amazon RDS on Outposts, a service that extends Amazon RDS capabilities to on-premises environments. It explains how this service enables you to run fully managed databases in your own data centers or co-location facilities, offering low-latency access to local systems and data processing capabilities.

**Note**  
This topic is related to Amazon Relational Database Service (Amazon RDS) and isn’t supported with Amazon Aurora.

 Amazon RDS on Outposts is a fully managed service that offers the same AWS infrastructure, AWS services, APIs, and tools to virtually any data center, co-location space, or on-premises facility for a truly consistent hybrid experience. Amazon RDS on Outposts is ideal for workloads that require low latency access to on-premises systems, local data processing, data residency, and migration of applications with local system inter-dependencies.

When you deploy Amazon RDS on Outposts, you can run Amazon RDS on premises for low latency workloads that need to be run in close proximity to your on-premises data and applications. Amazon RDS on Outposts also enables automatic backup to an AWS Region. You can manage Amazon RDS databases both in the cloud and on premises using the same AWS Management Console, APIs, and CLI. Amazon RDS on Outposts supports Microsoft SQL Server, MySQL, and PostgreSQL database engines, with support for additional database engines coming soon.

## How It Works


 Amazon RDS on Outposts lets you run Amazon RDS in your on-premises or co-location site. You can deploy and scale an Amazon RDS database instance in Outposts just as you do in the cloud, using the AWS console, APIs, or CLI. Amazon RDS databases in Outposts are encrypted at rest using AWS KMS keys. Amazon RDS automatically stores all automatic backups and manual snapshots in the AWS Region.

![\[How RDS on Outposts works\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-rds-outposts-how-it-works.png)


This option is helpful when you need to run Amazon RDS on premises for low latency workloads that need to be run in close proximity to your on-premises data and applications.

For more information, see [AWS Outposts Family](https://aws.amazon.com/outposts), [Amazon RDS on Outposts](https://aws.amazon.com/rds/outposts), and [Create Amazon RDS DB Instances on Outposts](https://aws.amazon.com/blogs/aws/new-create-amazon-rds-db-instances-on-aws-outposts).

# Amazon RDS Proxy overview


This topic provides conceptual content about Amazon RDS Proxy, a fully managed database proxy service for Amazon RDS. It introduces the key benefits and functionality of RDS Proxy, explaining how it improves application scalability, resilience, and security, and helps you understand the purpose and advantages of using Amazon RDS Proxy in your database architecture.

 Amazon RDS Proxy is a fully managed, highly available database proxy for Amazon Relational Database Service (RDS) that makes applications more scalable, more resilient to database failures, and more secure.

Many applications, including those built on modern serverless architectures, can have many open connections to the database server and may open and close database connections at a high rate, exhausting database memory and compute resources. Amazon RDS Proxy allows applications to pool and share connections established with the database, improving database efficiency and application scalability. With Amazon RDS Proxy, failover times for Aurora and Amazon RDS databases are reduced by up to 66%. You can manage database credentials, authentication, and access through integration with AWS Secrets Manager and AWS Identity and Access Management (IAM).

You can turn on Amazon RDS Proxy for most applications with no code changes. You don’t need to provision or manage any additional infrastructure. Pricing is simple and predictable: you pay for each vCPU of the database instance for which the proxy is enabled. Amazon RDS Proxy is now generally available for Aurora MySQL, Aurora PostgreSQL, Amazon RDS for MySQL, and Amazon RDS for PostgreSQL.

## Amazon RDS Proxy Benefits

+  **Improved application performance** — Amazon RDS Proxy manages connection pooling, which reduces the stress on database compute and memory resources that typically occurs when new connections are established, and efficiently supports a large number and high frequency of application connections.
+  **Increased application availability** — By automatically connecting to a new database instance while preserving application connections, Amazon RDS Proxy can reduce failover time by up to 66%.
+  **Managed application security** — Amazon RDS Proxy also enables you to centrally manage database credentials using AWS Secrets Manager.
+  **Fully managed** — Amazon RDS Proxy gives you the benefits of a database proxy without the additional burden of patching and managing your own proxy server.
+  **Fully compatible with your database** — Amazon RDS Proxy is fully compatible with the protocols of supported database engines, so you can deploy Amazon RDS Proxy for your application without making changes to your application code.
+  **Available and durable** — Amazon RDS Proxy is highly available and deployed over multiple Availability Zones (AZs) to protect you from infrastructure failure.
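Conceptually, the pooling that RDS Proxy performs on your behalf resembles a classic client-side connection pool. The following is a minimal, hypothetical sketch of the idea, not of how RDS Proxy is actually implemented:

```python
import queue

class ConnectionPool:
    """Minimal illustration of connection pooling: reuse a small set of
    database connections across many short-lived application requests,
    instead of opening and closing a connection per request."""

    def __init__(self, connect, size):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(connect())  # open all connections up front

    def acquire(self):
        return self._pool.get()        # blocks if every connection is busy

    def release(self, conn):
        self._pool.put(conn)

# A stand-in for a real database connection factory.
opened = []
def connect():
    conn = object()
    opened.append(conn)
    return conn

pool = ConnectionPool(connect, size=1)
first = pool.acquire()
pool.release(first)
second = pool.acquire()   # the same connection object is reused
```

With RDS Proxy, this pooling happens inside the managed proxy fleet, which is why applications typically need no code changes to benefit from it.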

## How Amazon RDS Proxy Works


![\[How Amazon RDS Proxy Works\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-how-rds-proxy-works.png)


For more information, see [Amazon RDS Proxy for Scalable Serverless Applications](https://aws.amazon.com/blogs/aws/amazon-rds-proxy-now-generally-available) and [Amazon RDS Proxy](https://aws.amazon.com/rds/proxy).

# Amazon Aurora Serverless v1 overview


This topic provides conceptual information about Amazon Aurora Serverless. It introduces Aurora Serverless as an on-demand autoscaling configuration for Amazon Aurora, explaining how it automatically adjusts compute capacity based on application needs.

 Amazon Aurora Serverless version 1 (v1) is an on-demand autoscaling configuration for Amazon Aurora. An Aurora Serverless DB cluster is a DB cluster that scales compute capacity up and down based on your application’s needs. This contrasts with Aurora provisioned DB clusters, for which you manually manage capacity. Aurora Serverless v1 provides a relatively simple, cost-effective option for infrequent, intermittent, or unpredictable workloads. It is cost-effective because it automatically starts up, scales compute capacity to match your application’s usage, and shuts down when it’s not in use.

To learn more about pricing, see Serverless Pricing under MySQL-Compatible Edition or PostgreSQL-Compatible Edition on the [Amazon Aurora pricing page](https://aws.amazon.com/rds/aurora/pricing).

 Aurora Serverless v1 clusters have the same kind of high-capacity, distributed, and highly available storage volume that provisioned DB clusters use. The cluster volume for an Aurora Serverless v1 cluster is always encrypted. You can choose the encryption key, but you can’t turn off encryption. That means that you can perform the same operations on an Aurora Serverless v1 cluster that you can on encrypted snapshots. For more information, see Aurora Serverless v1 and snapshots.

 Aurora Serverless v1 provides the following advantages:
+  **Simpler than provisioned** — Aurora Serverless v1 removes much of the complexity of managing DB instances and capacity.
+  **Scalable** — Aurora Serverless v1 seamlessly scales compute and memory capacity as needed, with no disruption to client connections.
+  **Cost-effective** — When you use Aurora Serverless v1, you pay only for the database resources that you consume, on a per-second basis.
+  **Highly available storage** — Aurora Serverless v1 uses the same fault-tolerant, distributed storage system with six-way replication as Aurora to protect against data loss.

 Aurora Serverless v1 is designed for the following use cases:
+  **Infrequently used applications** — You have an application that is only used for a few minutes several times each day or week, such as a low-volume blog site. With Aurora Serverless v1, you pay for only the database resources that you consume, on a per-second basis.
+  **New applications** — You’re deploying a new application and you’re unsure about the instance size you need. By using Aurora Serverless v1, you can create a database endpoint and have the database automatically scale to the capacity requirements of your application.
+  **Variable workloads** — You’re running a lightly used application, with peaks of 30 minutes to several hours a few times each day, or several times each year. Examples include applications for human resources, budgeting, and operational reporting. With Aurora Serverless v1, you no longer need to provision for peak or average capacity.
+  **Unpredictable workloads** — You’re running daily workloads that have sudden and unpredictable increases in activity. An example is a traffic site that sees a surge of activity when it starts raining. With Aurora Serverless v1, your database automatically scales capacity to meet the needs of the application’s peak load and scales back down when the surge of activity is over.
+  **Development and test databases** — Your developers use databases during work hours but don’t need them on nights or weekends. With Aurora Serverless v1, your database automatically shuts down when it’s not in use.
+  **Multi-tenant applications** — With Aurora Serverless v1, you don’t have to individually manage database capacity for each application in your fleet. Aurora Serverless v1 manages individual database capacity for you.

Scaling takes almost no time. Because the storage is shared between nodes, Aurora can scale up or down in seconds for most workloads. The service currently has autoscaling thresholds of 1.5 minutes to scale up and 5 minutes to scale down. That means metrics must exceed the limits for 1.5 minutes to trigger a scale up, or fall below the limits for 5 minutes to trigger a scale down. The cool-down period between scaling activities is 5 minutes to scale up and 15 minutes to scale down. Before scaling can happen, the service has to find a “scaling point,” which may take longer than anticipated if you have long-running transactions. Scaling operations are transparent to the connected clients and applications because existing connections and session state are transferred to the new nodes. The only difference with pausing and resuming is a higher latency for the first connection, typically around 25 seconds. You can find more details in the documentation.

![\[How Aurora Serverless Works\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-aurora-serverless.png)


## Amazon Aurora Serverless v2


 Amazon Aurora Serverless v2 has been architected from the ground up to support serverless DB clusters that are instantly scalable. The Aurora Serverless v2 architecture rests on a lightweight foundation that’s engineered to provide the security and isolation needed in multitenant serverless cloud environments. This foundation has very little overhead so it can respond quickly. It’s also powerful enough to meet dramatic increases in processing demand.

When you create your Aurora Serverless v2 DB cluster, you define its capacity as a range between minimum and maximum number of Aurora capacity units (ACUs):
+  **Minimum Aurora capacity units** — The smallest number of ACUs down to which your Aurora Serverless v2 DB cluster can scale.
+  **Maximum Aurora capacity units** — The largest number of ACUs up to which your Aurora Serverless v2 DB cluster can scale.

Each ACU provides 2 GiB (gibibytes) of memory (RAM) with associated virtual processor (vCPU) and networking capacity. Unlike Aurora Serverless v1, which scales by doubling ACUs each time the DB cluster reaches a threshold, Aurora Serverless v2 can increase ACUs incrementally. When your workload demand begins to reach the current resource capacity, your Aurora Serverless v2 DB cluster scales the number of ACUs. Your cluster scales ACUs in the increments required to provide the best performance for the resources consumed.
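The contrast between the two scaling models can be sketched as follows, assuming only the 2 GiB-per-ACU figure above; the step logic is illustrative, not the service’s exact algorithm:

```python
ACU_MEMORY_GIB = 2  # each Aurora capacity unit (ACU) provides 2 GiB of memory

def v1_scaling_steps(min_acu, max_acu):
    """Aurora Serverless v1 scales by doubling capacity at each step."""
    steps, acu = [], min_acu
    while acu < max_acu:
        steps.append(acu)
        acu *= 2
    steps.append(max_acu)  # capacity is capped at the configured maximum
    return steps

def v2_memory_gib(acus):
    """Aurora Serverless v2 scales in fine-grained ACU increments,
    so memory grows smoothly rather than doubling."""
    return acus * ACU_MEMORY_GIB

print(v1_scaling_steps(1, 8))  # [1, 2, 4, 8]
print(v2_memory_gib(2.5))      # 5.0
```

The practical consequence is that a v2 cluster can settle close to the capacity a workload actually needs, instead of jumping between power-of-two sizes.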

## How to Provision


Log in to your [Management Console](https://eu-central-1.console.aws.amazon.com/rds/home?#databases:), choose **Amazon RDS**, and then choose **Create database**.

On **Engine options**, for **Engine versions**, choose **Show versions that support Serverless v2**.

![\[Provision Serverless v2\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-aurora-serverless-provision.png)


Choose the capacity settings for your use case.

For more information, see [Amazon Aurora Serverless](https://aws.amazon.com/rds/aurora/serverless), [Aurora Serverless MySQL Generally Available](https://aws.amazon.com/blogs/aws/aurora-serverless-ga/), and [Amazon Aurora PostgreSQL Serverless Now Generally Available](https://aws.amazon.com/blogs/aws/amazon-aurora-postgresql-serverless-now-generally-available).

# Amazon Aurora Backtrack overview


This topic provides conceptual information about the Amazon Aurora Backtrack feature. You can use Backtrack to quickly undo mistakes or explore earlier data changes in your Aurora MySQL database clusters. The feature works by maintaining a log of changes, allowing you to rewind your database to a specific point in time within a configurable window.

We’ve all been there: you need to make a quick, seemingly simple fix to an important production database. You compose the query, give it a once-over, and let it run. Seconds later you realize that you forgot the `WHERE` clause, dropped the wrong table, or made another serious mistake. You interrupt the query, but the damage has been done. You take a deep breath, whistle through your teeth, and wish that reality came with an Undo option.

Backtracking rewinds the DB cluster to the time you specify. Backtracking isn’t a replacement for backing up your DB cluster so that you can restore it to a point in time. However, backtracking provides the following advantages over traditional backup and restore:
+ You can easily undo mistakes. If you mistakenly perform a destructive action, such as a `DELETE` without a `WHERE` clause, you can backtrack the DB cluster to a time before the destructive action with minimal interruption of service.
+ You can backtrack a DB cluster quickly. Restoring a DB cluster to a point in time launches a new DB cluster and restores it from backup data or a DB cluster snapshot, which can take hours. Backtracking a DB cluster doesn’t require a new DB cluster and rewinds the DB cluster in minutes.
+ You can explore earlier data changes. You can repeatedly backtrack a DB cluster back and forth in time to help determine when a particular data change occurred. For example, you can backtrack a DB cluster three hours and then backtrack forward in time one hour. In this case, the backtrack time is two hours before the original time.
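The back-and-forth example in the last advantage is simple time arithmetic, shown here with an illustrative starting timestamp:

```python
from datetime import datetime, timedelta

original = datetime(2024, 1, 1, 12, 0)   # hypothetical current cluster time

# Backtrack the DB cluster three hours...
after_first = original - timedelta(hours=3)
# ...then backtrack forward in time one hour.
after_second = after_first + timedelta(hours=1)

# The cluster now sits two hours before the original time.
offset = original - after_second
print(offset)   # 2:00:00
```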

 Amazon Aurora uses a distributed, log-structured storage system (read Design Considerations for High Throughput Cloud-Native Relational Databases to learn a lot more); each change to your database generates a new log record, identified by a Log Sequence Number (LSN). Enabling the backtrack feature provisions a FIFO buffer in the cluster for storage of LSNs. This allows for quick access and recovery times measured in seconds.

When you create a new Aurora MySQL DB cluster, backtracking is configured when you choose **Enable Backtrack** and specify a **Target Backtrack window** value that is greater than zero in the Backtrack section.

To create a DB cluster, follow the instructions in [Creating an Amazon Aurora DB cluster](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.CreateInstance.html). The following image shows the Backtrack section.

![\[How Amazon Aurora Backtrack works\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-amazon-aurora-backtrack.png)


After a production error, you can simply pause your application, open up the Aurora Console, select the cluster, and choose **Backtrack DB cluster**.

Then you select **Backtrack** and choose the point in time just before your epic fail, and choose **Backtrack DB cluster**.

![\[Backtrack DB cluster\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-backtrack-db-cluster.png)


Then you wait for the rewind to take place, unpause your application, and proceed as if nothing had happened. When you initiate a backtrack, Aurora pauses the database, closes any open connections, drops uncommitted writes, and waits for the backtrack to complete. Then it resumes normal operation and can accept requests. The instance state is `backtracking` while the rewind is underway.

## Backtrack Window


With backtracking, there is a target backtrack window and an actual backtrack window:
+ The target backtrack window is the amount of time you want to be able to backtrack your DB cluster. When you enable backtracking, you specify a target backtrack window. For example, you might specify a target backtrack window of 24 hours if you want to be able to backtrack the DB cluster one day.
+ The actual backtrack window is the actual amount of time you can backtrack your DB cluster, which can be smaller than the target backtrack window. The actual backtrack window is based on your workload and the storage available for storing information about database changes, called change records.

As you make updates to your Aurora DB cluster with backtracking enabled, you generate change records. Aurora retains change records for the target backtrack window, and you pay an hourly rate for storing them. Both the target backtrack window and the workload on your DB cluster determine the number of change records you store. The workload is the number of changes you make to your DB cluster in a given amount of time. If your workload is heavy, you store more change records in your backtrack window than you do if your workload is light.

You can think of your target backtrack window as the goal for the maximum amount of time you want to be able to backtrack your DB cluster. In most cases, you can backtrack the maximum amount of time that you specified. However, in some cases, the DB cluster can’t store enough change records to backtrack the maximum amount of time, and your actual backtrack window is smaller than your target. Typically, the actual backtrack window is smaller than the target when you have an extremely heavy workload on your DB cluster. When your actual backtrack window is smaller than your target, we send you a notification.
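The relationship between the two windows can be illustrated with a toy model: the actual window is the target window capped by how many hours of change records fit in the available change-record storage. This is a conceptual illustration only, not an AWS API, and all numbers are made up.

```python
def actual_window_hours(target_hours, records_per_hour, record_capacity):
    """Hours of change records that fit in storage, capped at the target window."""
    if records_per_hour == 0:
        return target_hours
    return min(target_hours, record_capacity / records_per_hour)

# Light workload: the full 24-hour target fits.
print(actual_window_hours(24, 1_000, 100_000))   # → 24
# Heavy workload: only 10 hours of records fit, so the actual window shrinks.
print(actual_window_hours(24, 10_000, 100_000))  # → 10.0
```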

When backtracking is turned on for a DB cluster, and you delete a table stored in the DB cluster, Aurora keeps that table in the backtrack change records. It does this so that you can revert back to a time before you deleted the table. If you don’t have enough space in your backtrack window to store the table, the table might be removed from the backtrack change records eventually.

## Backtracking Limitations


The following limitations apply to backtracking:
+ Backtracking an Aurora DB cluster is available in certain AWS Regions and for specific Aurora MySQL versions only. For more information, see Backtracking in Aurora.
+ Backtracking is only available for DB clusters that were created with the Backtrack feature enabled. You can enable the Backtrack feature when you create a new DB cluster or restore a snapshot of a DB cluster. For DB clusters that were created with the Backtrack feature enabled, you can create a clone DB cluster with the Backtrack feature enabled. Currently, you can’t perform backtracking on DB clusters that were created with the Backtrack feature turned off.
+ The limit for a backtrack window is 72 hours.
+ Backtracking affects the entire DB cluster. For example, you can’t selectively backtrack a single table or a single data update.
+ Backtracking isn’t supported with binary log (binlog) replication. Cross-Region replication must be turned off before you can configure or use backtracking.
+ You can’t backtrack a database clone to a time before that database clone was created. However, you can use the original database to backtrack to a time before the clone was created. For more information about database cloning, see Cloning an Aurora DB cluster volume.
+ Backtracking causes a brief DB instance disruption. You must stop or pause your applications before starting a backtrack operation to ensure that there are no new read or write requests. During the backtrack operation, Aurora pauses the database, closes any open connections, and drops any uncommitted reads and writes. It then waits for the backtrack operation to complete.
+ Backtracking isn’t supported for the following AWS Regions:
  + Africa (Cape Town)
  + China (Ningxia)
  + Asia Pacific (Hong Kong)
  + Europe (Milan)
  + Europe (Stockholm)
  + Middle East (Bahrain)
  + South America (São Paulo)
+ You can’t restore a cross-region snapshot of a backtrack-enabled cluster in an AWS Region that doesn’t support backtracking.
+ You can’t use backtrack with Aurora multi-master clusters.
+ If you perform an in-place upgrade for a backtrack-enabled cluster from Aurora MySQL version 1 to version 2, you can’t backtrack to a point in time before the upgrade happened.
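Some of these limits are easy to validate up front when automating cluster creation. The helper below, a hypothetical illustration rather than an AWS API, rejects target windows outside the documented range (greater than zero, at most 72 hours).

```python
MAX_BACKTRACK_HOURS = 72  # service limit for a backtrack window

def validate_target_window(hours):
    """Reject target backtrack windows outside the documented limits."""
    if hours <= 0:
        raise ValueError("Target backtrack window must be greater than zero.")
    if hours > MAX_BACKTRACK_HOURS:
        raise ValueError(
            f"Target backtrack window is capped at {MAX_BACKTRACK_HOURS} hours."
        )

validate_target_window(24)  # fine
```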

For more information, see: [Amazon Aurora Backtrack — Turn Back Time](https://aws.amazon.com/blogs/aws/amazon-aurora-backtrack-turn-back-time).

# Amazon Aurora Parallel Query overview


This topic provides conceptual information about Amazon Aurora parallel query, a feature that enhances analytical query performance in Amazon Aurora databases. You can leverage this feature to accelerate your analytical queries while maintaining high throughput for transactional workloads. By offloading query processing to the Aurora storage layer, parallel query reduces contention with transactional operations and enables faster data analysis on fresh, real-time data.

 Amazon Aurora parallel query is a feature of the Amazon Aurora database that provides faster analytical queries over your current data, without having to copy the data into a separate system. It can speed up queries by up to two orders of magnitude, while maintaining high throughput for your core transactional workload.

While some databases can parallelize query processing across CPUs in one or a handful of servers, parallel query takes advantage of Aurora's unique architecture to push down and parallelize query processing across thousands of CPUs in the Aurora storage layer. By offloading analytical query processing to the Aurora storage layer, parallel query reduces network, CPU, and buffer pool contention with the transactional workload.

## Features


 **Accelerate Your Analytical Queries** 

In a traditional database, running analytical queries directly on the database means accepting slower query performance and risking a slowdown of your transactional workload, even when running light queries. Queries can run for several minutes to hours, depending on the size of the tables and database server instances. Queries are also slowed down by network latency, since the storage layer may have to transfer entire tables to the database server for processing.

With Amazon Aurora parallel query, query processing is pushed down to the Aurora storage layer. The query gains a large amount of computing power, and it needs to transfer far less data over the network. In the meantime, the Amazon Aurora database instance can continue serving transactions with much less interruption. This way, you can run transactional and analytical workloads alongside each other in the same Aurora database, while maintaining high performance.
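The effect of pushing filtering and projection down to storage can be sketched with a toy model (this is illustrative Python, not Aurora internals): without pushdown, every full row crosses the network to the head node before filtering; with pushdown, storage returns only the projected columns of matching rows.

```python
# Synthetic table: 1,000 rows of 4 columns each.
rows = [
    {"id": i, "region": "eu" if i % 2 else "us", "amount": i * 10, "note": "x" * 50}
    for i in range(1000)
]

# Without pushdown: every column of every row is shipped, then filtered
# at the head node.
transferred_full = sum(len(r) for r in rows)  # column values transferred

# With pushdown: storage applies the WHERE clause and projection, returning
# compact tuples of only the needed columns for matching rows.
pushed = [(r["id"], r["amount"]) for r in rows if r["region"] == "eu"]
transferred_pushed = 2 * len(pushed)  # column values transferred

print(transferred_full, transferred_pushed)  # → 4000 1000
```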

 **Query on Fresh Data** 

Many analytical workloads require both fresh data and good query performance. For example, operational systems such as network monitoring, cybersecurity, or fraud detection rely on fresh, real-time data from a transactional database and can’t wait for it to be extracted to an analytics system.

By running your queries in the same database that you use for transaction processing, without degrading transaction performance, Amazon Aurora parallel query enables smarter operational decisions with no additional software and no changes to your queries.

## Benefits of Using Parallel Query

+ Improved I/O performance, due to parallelizing physical read requests across multiple storage nodes.
+ Reduced network traffic. Amazon Aurora doesn’t transmit entire data pages from storage nodes to the head node and then filter out unnecessary rows and columns afterward. Instead, Aurora transmits compact tuples containing only the column values needed for the result set.
+ Reduced CPU usage on the head node, due to pushing down function processing, row filtering, and column projection for the WHERE clause.
+ Reduced memory pressure on the buffer pool. The pages processed by the parallel query aren’t added to the buffer pool. This approach reduces the chance of a data-intensive scan evicting frequently used data from the buffer pool.
+ Potentially reduced data duplication in your extract, transform, and load (ETL) pipeline, by making it practical to perform long-running analytic queries on existing data.

## Important Notes

+  **Table Formats** — The table row format must be `COMPACT`; partitioned tables aren’t supported.
+  **Data Types** — The `TEXT`, `BLOB`, and `GEOMETRY` data types aren’t supported.
+  **DDL** — The table can’t have any pending fast online DDL operations.
+  **Cost** — You can use parallel query at no extra charge. However, because parallel query accesses storage directly, your I/O cost might increase.
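The eligibility rules in the notes above can be expressed as a simple pre-check. This is an illustrative helper, not an Aurora API: it flags a table as ineligible if its row format isn't `COMPACT`, if it is partitioned, or if it uses an unsupported column type.

```python
UNSUPPORTED_TYPES = {"TEXT", "BLOB", "GEOMETRY"}

def parallel_query_eligible(row_format, partitioned, column_types):
    """Return True if a table meets the parallel query notes listed above."""
    if row_format.upper() != "COMPACT":
        return False
    if partitioned:
        return False
    return not (UNSUPPORTED_TYPES & {t.upper() for t in column_types})

print(parallel_query_eligible("COMPACT", False, ["INT", "VARCHAR"]))  # → True
print(parallel_query_eligible("DYNAMIC", False, ["INT"]))             # → False
print(parallel_query_eligible("COMPACT", False, ["TEXT"]))            # → False
```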

For more information, see [Amazon Aurora Parallel Query](https://aws.amazon.com/rds/aurora/parallel-query/).