

# Working with Amazon Aurora MySQL<a name="mysql"></a>

Amazon Aurora MySQL is a fully managed, MySQL-compatible, relational database engine that combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. Aurora MySQL is a drop-in replacement for MySQL and makes it simple and cost-effective to set up, operate, and scale your new and existing MySQL deployments, thus freeing you to focus on your business and applications. Amazon RDS provides administration for Aurora by handling routine database tasks such as provisioning, patching, backup, recovery, failure detection, and repair. Amazon RDS also provides push-button migration tools to convert your existing Amazon RDS for MySQL applications to Aurora MySQL.

**Topics**
+ [Overview of Amazon Aurora MySQL](Aurora.AuroraMySQL.Overview.md)
+ [Security with Amazon Aurora MySQL](AuroraMySQL.Security.md)
+ [Updating applications to connect to Aurora MySQL DB clusters using new TLS certificates](ssl-certificate-rotation-aurora-mysql.md)
+ [Using Kerberos authentication for Aurora MySQL](aurora-mysql-kerberos.md)
+ [Migrating data to an Amazon Aurora MySQL DB cluster](AuroraMySQL.Migrating.md)
+ [Managing Amazon Aurora MySQL](AuroraMySQL.Managing.md)
+ [Tuning Aurora MySQL](AuroraMySQL.Managing.Tuning.md)
+ [Parallel query for Amazon Aurora MySQL](aurora-mysql-parallel-query.md)
+ [Using Advanced Auditing with an Amazon Aurora MySQL DB cluster](AuroraMySQL.Auditing.md)
+ [Replication with Amazon Aurora MySQL](AuroraMySQL.Replication.md)
+ [Using local write forwarding in an Amazon Aurora MySQL DB cluster](aurora-mysql-write-forwarding.md)
+ [Integrating Amazon Aurora MySQL with other AWS services](AuroraMySQL.Integrating.md)
+ [Amazon Aurora MySQL lab mode](AuroraMySQL.Updates.LabMode.md)
+ [Best practices with Amazon Aurora MySQL](AuroraMySQL.BestPractices.md)
+ [Troubleshooting Amazon Aurora MySQL database performance](aurora-mysql-troubleshooting.md)
+ [Amazon Aurora MySQL reference](AuroraMySQL.Reference.md)
+ [Database engine updates for Amazon Aurora MySQL](AuroraMySQL.Updates.md)

# Overview of Amazon Aurora MySQL

The following sections provide an overview of Amazon Aurora MySQL.

**Topics**
+ [Amazon Aurora MySQL performance enhancements](#Aurora.AuroraMySQL.Performance)
+ [Amazon Aurora MySQL and spatial data](#Aurora.AuroraMySQL.Spatial)
+ [Aurora MySQL version 3 compatible with MySQL 8.0](AuroraMySQL.MySQL80.md)
+ [Aurora MySQL version 2 compatible with MySQL 5.7](AuroraMySQL.CompareMySQL57.md)

## Amazon Aurora MySQL performance enhancements


Amazon Aurora includes performance enhancements to support the diverse needs of high-end commercial databases.

### Fast insert


Fast insert accelerates parallel inserts sorted by primary key and applies specifically to `LOAD DATA` and `INSERT INTO ... SELECT ...` statements. Fast insert caches the position of a cursor in an index traversal while executing the statement. This avoids unnecessarily traversing the index again.

Fast insert is enabled only for regular InnoDB tables in Aurora MySQL version 3.03.2 and higher. This optimization doesn't apply to InnoDB temporary tables, and it's disabled in all Aurora MySQL version 2.11 and 2.12 releases. Fast insert works only when the Adaptive Hash Index optimization is disabled.
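For instance, a statement like the following (with hypothetical `orders` and `orders_archive` tables) is the kind that fast insert can accelerate, because the rows are inserted in primary key order:

```
-- Hypothetical tables: rows arrive sorted by the primary key (order_id),
-- so the engine can reuse the cached cursor position instead of
-- traversing the index again for each row.
INSERT INTO orders_archive
SELECT * FROM orders ORDER BY order_id;
```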

You can monitor the following metrics to determine the effectiveness of fast insert for your DB cluster:
+ `aurora_fast_insert_cache_hits`: A counter that is incremented when the cached cursor is successfully retrieved and verified. 
+ `aurora_fast_insert_cache_misses`: A counter that is incremented when the cached cursor is no longer valid and Aurora performs a normal index traversal.

You can retrieve the current value of the fast insert metrics using the following command:

```
mysql> show global status like 'Aurora_fast_insert%';
```

You will get output similar to the following:

```
+---------------------------------+-----------+
| Variable_name                   | Value     |
+---------------------------------+-----------+
| Aurora_fast_insert_cache_hits   | 3598300   |
| Aurora_fast_insert_cache_misses | 436401336 |
+---------------------------------+-----------+
```

## Amazon Aurora MySQL and spatial data

The following list summarizes the main Aurora MySQL spatial features and explains how they correspond to spatial features in MySQL: 
+ Aurora MySQL version 2 supports the same spatial data types and spatial relation functions as MySQL 5.7. For more information about these data types and functions, see [Spatial Data Types](https://dev.mysql.com/doc/refman/5.7/en/spatial-types.html) and [Spatial Relation Functions](https://dev.mysql.com/doc/refman/5.7/en/spatial-relation-functions-object-shapes.html) in the MySQL 5.7 documentation.
+ Aurora MySQL version 3 supports the same spatial data types and spatial relation functions as MySQL 8.0. For more information about these data types and functions, see [Spatial Data Types](https://dev.mysql.com/doc/refman/8.0/en/spatial-types.html) and [Spatial Relation Functions](https://dev.mysql.com/doc/refman/8.0/en/spatial-relation-functions-object-shapes.html) in the MySQL 8.0 documentation.
+ Aurora MySQL supports spatial indexing on InnoDB tables. Spatial indexing improves query performance on large datasets for queries on spatial data. In MySQL, spatial indexing for InnoDB tables is available in MySQL 5.7 and 8.0.

  Aurora MySQL uses a different spatial indexing strategy from MySQL for high performance with spatial queries. The Aurora spatial index implementation uses a space-filling curve on a B-tree, which is intended to provide higher performance for spatial range scans than an R-tree.
**Note**  
In Aurora MySQL, a transaction on a table with a spatial index defined on a column with a spatial reference identifier (SRID) can't insert into an area selected for update by another transaction.

The following data definition language (DDL) statements are supported for creating indexes on columns that use spatial data types.

### CREATE TABLE


You can use the `SPATIAL INDEX` keywords in a `CREATE TABLE` statement to add a spatial index to a column in a new table. Following is an example.

```
CREATE TABLE test (shape POLYGON NOT NULL, SPATIAL INDEX(shape));
```

### ALTER TABLE


You can use the `SPATIAL INDEX` keywords in an `ALTER TABLE` statement to add a spatial index to a column in an existing table. Following is an example.

```
ALTER TABLE test ADD SPATIAL INDEX(shape);
```

### CREATE INDEX


You can use the `SPATIAL` keyword in a `CREATE INDEX` statement to add a spatial index to a column in an existing table. Following is an example.

```
CREATE SPATIAL INDEX shape_index ON test (shape);
```
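With a spatial index in place, queries that filter on spatial relationships can use it. The following is a sketch against the `test` table from the `CREATE TABLE` example; the polygon value is arbitrary:

```
-- Count rows whose shape falls within the minimum bounding
-- rectangle of a query polygon.
SET @area = ST_GeomFromText('POLYGON((0 0, 0 10, 10 10, 10 0, 0 0))');
SELECT COUNT(*) FROM test WHERE MBRContains(@area, shape);
```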

# Aurora MySQL version 3 compatible with MySQL 8.0


You can use Aurora MySQL version 3 to get the latest MySQL-compatible features, performance enhancements, and bug fixes. Following, you can learn about Aurora MySQL version 3, with MySQL 8.0 compatibility, and how to upgrade your clusters and applications to it.

 Some Aurora features, such as Aurora Serverless v2, require Aurora MySQL version 3. 

**Topics**
+ [Features from MySQL 8.0 Community Edition](#AuroraMySQL.8.0-features-community)
+ [Aurora MySQL version 3 prerequisite for Aurora MySQL Serverless v2](#AuroraMySQL.serverless-v2-8.0-prereq)
+ [Release notes for Aurora MySQL version 3](#AuroraMySQL.mysql80-bugs-fixed)
+ [New parallel query optimizations](#AuroraMySQL.8.0-features-pq)
+ [Optimizations to reduce database restart time](#ReducedRestartTime)
+ [New temporary table behavior in Aurora MySQL version 3](ams3-temptable-behavior.md)
+ [Comparing Aurora MySQL version 2 and Aurora MySQL version 3](AuroraMySQL.Compare-v2-v3.md)
+ [Comparing Aurora MySQL version 3 and MySQL 8.0 Community Edition](AuroraMySQL.Compare-80-v3.md)
+ [Upgrading to Aurora MySQL version 3](AuroraMySQL.mysql80-upgrade-procedure.md)

## Features from MySQL 8.0 Community Edition


 The initial release of Aurora MySQL version 3 is compatible with MySQL 8.0.23 Community Edition. MySQL 8.0 introduces several new features, including the following: 
+ Atomic Data Definition Language (DDL) support. For more information, see [Atomic Data Definition Language (DDL) support](AuroraMySQL.Compare-v2-v3.md#AuroraMySQL.Compare-v2-v3-atomic-ddl).
+ JSON functions. For usage information, see [JSON Functions](https://dev.mysql.com/doc/refman/8.0/en/json-functions.html) in the *MySQL Reference Manual*.
+ Window functions. For usage information, see [Window Functions](https://dev.mysql.com/doc/refman/8.0/en/window-functions.html) in the *MySQL Reference Manual*.
+ Common table expressions (CTEs), using the `WITH` clause. For usage information, see [WITH (Common Table Expressions)](https://dev.mysql.com/doc/refman/8.0/en/with.html) in the *MySQL Reference Manual*.
+ Optimized `ADD COLUMN` and `RENAME COLUMN` clauses for the `ALTER TABLE` statement. These optimizations are called "instant DDL." Aurora MySQL version 3 is compatible with the community MySQL instant DDL feature. The former Aurora fast DDL feature isn't used. For usage information for instant DDL, see [Instant DDL (Aurora MySQL version 3)](AuroraMySQL.Managing.FastDDL.md#AuroraMySQL.mysql80-instant-ddl).
+ Descending, functional, and invisible indexes. For usage information, see [Invisible Indexes](https://dev.mysql.com/doc/refman/8.0/en/invisible-indexes.html), [Descending Indexes](https://dev.mysql.com/doc/refman/8.0/en/descending-indexes.html), and [CREATE INDEX Statement](https://dev.mysql.com/doc/refman/8.0/en/create-index.html#create-index-functional-key-parts) in the *MySQL Reference Manual*.
+ Role-based privileges controlled through SQL statements. For more information on changes to the privilege model, see [Role-based privilege model](AuroraMySQL.Compare-80-v3.md#AuroraMySQL.privilege-model).
+ `NOWAIT` and `SKIP LOCKED` clauses with the `SELECT ... FOR SHARE` statement. These clauses avoid waiting for other transactions to release row locks. For usage information, see [Locking Reads](https://dev.mysql.com/doc/refman/8.0/en/innodb-locking-reads.html) in the *MySQL Reference Manual*. 
+ Improvements to binary log (binlog) replication. For the Aurora MySQL details, see [Binary log replication](AuroraMySQL.Compare-v2-v3.md#AuroraMySQL.mysql80-binlog). In particular, you can perform filtered replication. For usage information about filtered replication, see [How Servers Evaluate Replication Filtering Rules](https://dev.mysql.com/doc/refman/8.0/en/replication-rules.html) in the *MySQL Reference Manual*.
+ Hints. Some of the MySQL 8.0–compatible hints were already backported to Aurora MySQL version 2. For information about using hints with Aurora MySQL, see [Aurora MySQL hints](AuroraMySQL.Reference.Hints.md). For the full list of hints in community MySQL 8.0, see [Optimizer Hints](https://dev.mysql.com/doc/refman/8.0/en/optimizer-hints.html) in the *MySQL Reference Manual*.
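As a brief sketch of two of these features working together, the following query combines a CTE with a window function; the `orders` table and its columns are hypothetical:

```
-- Monthly totals with a running total, using a CTE (WITH clause)
-- and a window function (SUM ... OVER), both new in MySQL 8.0.
WITH monthly (month, total) AS (
    SELECT DATE_FORMAT(order_date, '%Y-%m'), SUM(amount)
    FROM orders
    GROUP BY DATE_FORMAT(order_date, '%Y-%m')
)
SELECT month,
       total,
       SUM(total) OVER (ORDER BY month) AS running_total
FROM monthly;
```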

For the full list of features added to MySQL 8.0 community edition, see the blog post [The complete list of new features in MySQL 8.0](https://dev.mysql.com/blog-archive/the-complete-list-of-new-features-in-mysql-8-0/).

Aurora MySQL version 3 also includes changes to keywords for inclusive language, backported from community MySQL 8.0.26. For details about those changes, see [Inclusive language changes for Aurora MySQL version 3](AuroraMySQL.Compare-v2-v3.md#AuroraMySQL.8.0-inclusive-language).

## Aurora MySQL version 3 prerequisite for Aurora MySQL Serverless v2


 Aurora MySQL version 3 is a prerequisite for all DB instances in an Aurora MySQL Serverless v2 cluster. Aurora MySQL Serverless v2 includes support for reader instances in a DB cluster, and other Aurora features that aren't available for Aurora MySQL Serverless v1. It also has faster and more granular scaling than Aurora MySQL Serverless v1. 

## Release notes for Aurora MySQL version 3

For the release notes for all Aurora MySQL version 3 releases, see [Database engine updates for Amazon Aurora MySQL version 3](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraMySQLReleaseNotes/AuroraMySQL.Updates.30Updates.html) in the *Release Notes for Aurora MySQL*.

## New parallel query optimizations


 The Aurora parallel query optimization now applies to more SQL operations: 
+  Parallel query now applies to tables containing the data types `TEXT`, `BLOB`, `JSON`, and `GEOMETRY`, and to `VARCHAR` and `CHAR` columns longer than 768 bytes. 
+  Parallel query can optimize queries involving partitioned tables. 
+  Parallel query can optimize queries involving aggregate function calls in the select list and the `HAVING` clause. 

 For more information about these enhancements, see [Upgrading parallel query clusters to Aurora MySQL version 3](aurora-mysql-parallel-query-optimizing.md#aurora-mysql-parallel-query-upgrade-pqv2). For general information about Aurora parallel query, see [Parallel query for Amazon Aurora MySQL](aurora-mysql-parallel-query.md). 

## Optimizations to reduce database restart time


You want your Aurora MySQL DB cluster to be highly available during both planned and unplanned outages.

Database administrators need to perform occasional database maintenance. This maintenance includes database patching, upgrades, database parameter modifications requiring a manual reboot, performing a failover to reduce the time it takes for instance class changes, and so on. These planned actions require downtime.

However, downtime can also be caused by unplanned actions, such as an unexpected failover due to an underlying hardware fault or database resource throttling. All of these planned and unplanned actions result in a database restart.

In Aurora MySQL version 3.05 and higher, we've introduced optimizations that reduce the database restart time. These optimizations provide up to 65% less downtime than without optimizations, and fewer disruptions to your database workloads, after a restart.

During database startup, many internal memory components are initialized. The largest of these is the [InnoDB buffer pool](https://aws.amazon.com/blogs/database/best-practices-for-amazon-aurora-mysql-database-configuration/), which in Aurora MySQL is 75% of the instance memory size by default. Our testing has found that the initialization time is proportional to the size of the InnoDB buffer pool, and therefore scales with the DB instance class size. During this initialization phase, the database can't accept connections, which causes longer downtime during restarts. The first phase of Aurora MySQL fast restart optimizes the buffer pool initialization, which reduces the time for database initialization and thereby reduces the overall restart time.

For more details, see the blog post [Reduce downtime with Amazon Aurora MySQL database restart time optimizations](https://aws.amazon.com/blogs/database/reduce-downtime-with-amazon-aurora-mysql-database-restart-time-optimizations/).

# New temporary table behavior in Aurora MySQL version 3


Aurora MySQL version 3 handles temporary tables differently from earlier Aurora MySQL versions. This new behavior is inherited from MySQL 8.0 Community Edition. There are two types of temporary tables that can be created with Aurora MySQL version 3:
+ Internal (or *implicit*) temporary tables – Created by the Aurora MySQL engine to handle operations such as sorting, aggregation, derived tables, or common table expressions (CTEs).
+ User-created (or *explicit*) temporary tables – Created by the Aurora MySQL engine when you use the `CREATE TEMPORARY TABLE` statement.
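As a sketch, an explicit temporary table (using a hypothetical `orders` table) looks like the following. It's visible only to the session that creates it and is dropped automatically when that session ends:

```
-- Explicit (user-created) temporary table, scoped to the current session.
CREATE TEMPORARY TABLE recent_orders AS
SELECT * FROM orders
WHERE order_date > CURRENT_DATE - INTERVAL 7 DAY;
```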

There are additional considerations for both internal and user-created temporary tables on Aurora reader DB instances. We discuss these changes in the following sections.

**Topics**
+ [Storage engine for internal (implicit) temporary tables](#ams3-temptable-behavior-engine)
+ [Limiting the size of internal, in-memory temporary tables](#ams3-temptable-behavior-limit)
+ [Mitigating fullness issues for internal temporary tables on Aurora Replicas](#ams3-temptable-behavior-mitigate)
+ [Optimizing the temptable\_max\_mmap parameter on Aurora MySQL DB instances](#ams-optimize-temptable_max_mmap)
+ [User-created (explicit) temporary tables on reader DB instances](#ams3-temptable-behavior.user)
+ [Temporary table creation errors and mitigation](#ams3-temptable-behavior.errors)

## Storage engine for internal (implicit) temporary tables


When generating intermediate result sets, Aurora MySQL initially attempts to write to in-memory temporary tables. This might be unsuccessful because of incompatible data types or configured limits. If so, the temporary table is converted to an on-disk temporary table rather than being held in memory. For more information, see [Internal Temporary Table Use in MySQL](https://dev.mysql.com/doc/refman/8.0/en/internal-temporary-tables.html) in the MySQL documentation.

In Aurora MySQL version 3, the way internal temporary tables work is different from earlier Aurora MySQL versions. Instead of choosing between the InnoDB and MyISAM storage engines for such temporary tables, now you choose between the `TempTable` and `MEMORY` storage engines.

With the `TempTable` storage engine, you can also choose how Aurora MySQL handles data that overflows the memory pool that holds all the internal temporary tables for the DB instance.

Those choices can influence the performance for queries that generate high volumes of temporary data, for example while performing aggregations such as `GROUP BY` on large tables.

**Tip**  
If your workload includes queries that generate internal temporary tables, confirm how your application performs with this change by running benchmarks and monitoring performance-related metrics.   
In some cases, the amount of temporary data fits within the `TempTable` memory pool or only overflows the memory pool by a small amount. In these cases, we recommend using the `TempTable` setting for internal temporary tables and memory-mapped files to hold any overflow data. This setting is the default.

The `TempTable` storage engine is the default. `TempTable` uses a common memory pool for all temporary tables that use this engine, instead of a maximum memory limit per table. The size of this memory pool is specified by the [temptable\_max\_ram](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_temptable_max_ram) parameter. It defaults to 1 GiB on DB instances with 16 GiB or more of memory, and 16 MB on DB instances with less than 16 GiB of memory. The size of the memory pool influences session-level memory consumption.

In some cases when you use the `TempTable` storage engine, the temporary data might exceed the size of the memory pool. If so, Aurora MySQL stores the overflow data using a secondary mechanism.

You can set the [temptable\_max\_mmap](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_temptable_max_mmap) parameter to choose whether the data overflows to memory-mapped temporary files or to InnoDB internal temporary tables on disk. The different data formats and overflow criteria of these overflow mechanisms can affect query performance. They do so by influencing the amount of data written to disk and the demand on disk storage throughput.
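You can check the values currently in effect on a DB instance by querying the corresponding system variables, for example:

```
-- Inspect the storage engine and memory limits for internal temporary tables.
SELECT @@internal_tmp_mem_storage_engine,
       @@temptable_max_ram,
       @@temptable_max_mmap;
```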

Aurora MySQL version 3 stores the overflow data in the following way:
+ On the writer DB instance, data that overflows to InnoDB internal temporary tables or memory-mapped temporary files resides in local storage on the instance.
+ On reader DB instances, overflow data always resides in memory-mapped temporary files in local storage.

  Read-only instances can't store any data on the Aurora cluster volume.

The configuration parameters related to internal temporary tables apply differently to the writer and reader instances in your cluster:
+ On reader instances, Aurora MySQL always uses the `TempTable` storage engine.
+ The size for `temptable_max_mmap` defaults to 1 GiB for both writer and reader instances, regardless of the DB instance memory size. You can adjust this value on both writer and reader instances.
+ Setting `temptable_max_mmap` to `0` turns off the use of memory-mapped temporary files on writer instances. 
+ You can't set `temptable_max_mmap` to `0` on reader instances.

**Note**  
We don't recommend using the [temptable\_use\_mmap](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_temptable_use_mmap) parameter. It has been deprecated, and support for it is expected to be removed in a future MySQL release.

## Limiting the size of internal, in-memory temporary tables


As discussed in [Storage engine for internal (implicit) temporary tables](#ams3-temptable-behavior-engine), you can control temporary table resources globally by using the [temptable\_max\_ram](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_temptable_max_ram) and [temptable\_max\_mmap](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_temptable_max_mmap) settings.

You can also limit the size of any individual internal, in-memory temporary table by using the [tmp\_table\_size](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_tmp_table_size) DB parameter. This limit is intended to prevent individual queries from consuming an inordinate amount of global temporary table resources, which can affect the performance of concurrent queries that require these resources.

The `tmp_table_size` parameter defines the maximum size of temporary tables created by the `MEMORY` storage engine in Aurora MySQL version 3.

In Aurora MySQL version 3.04 and higher, `tmp_table_size` also defines the maximum size of temporary tables created by the `TempTable` storage engine when the `aurora_tmptable_enable_per_table_limit` DB parameter is set to `ON`. This behavior is disabled by default (`OFF`), which is the same behavior as in Aurora MySQL version 3.03 and lower versions.
+ When `aurora_tmptable_enable_per_table_limit` is `OFF`, `tmp_table_size` isn't considered for internal, in-memory temporary tables created by the `TempTable` storage engine.

  However, the global `TempTable` resources limit still applies. Aurora MySQL has the following behavior when the global `TempTable` resources limit is reached:
  + Writer DB instances – Aurora MySQL automatically converts the in-memory temporary table to an InnoDB on-disk temporary table.
  + Reader DB instances – The query ends with an error.

    ```
    ERROR 1114 (HY000): The table '/rdsdbdata/tmp/#sqlxx_xxx' is full
    ```
+ When `aurora_tmptable_enable_per_table_limit` is `ON`, Aurora MySQL has the following behavior when the `tmp_table_size` limit is reached:
  + Writer DB instances – Aurora MySQL automatically converts the in-memory temporary table to an InnoDB on-disk temporary table.
  + Reader DB instances – The query ends with an error.

    ```
    ERROR 1114 (HY000): The table '/rdsdbdata/tmp/#sqlxx_xxx' is full
    ```

    Both the global `TempTable` resources limit and the per-table limit apply in this case.

**Note**  
The `aurora_tmptable_enable_per_table_limit` parameter has no effect when [internal\_tmp\_mem\_storage\_engine](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_internal_tmp_mem_storage_engine) is set to `MEMORY`. In this case, the maximum size of an in-memory temporary table is defined by the [tmp\_table\_size](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_tmp_table_size) or [max\_heap\_table\_size](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_max_heap_table_size) value, whichever is smaller.

The following examples show the behavior of the `aurora_tmptable_enable_per_table_limit` parameter for writer and reader DB instances.

**Example of writer DB instance with `aurora_tmptable_enable_per_table_limit` set to `OFF`**  
The in-memory temporary table isn't converted to an InnoDB on-disk temporary table.  

```
mysql> set aurora_tmptable_enable_per_table_limit=0;
Query OK, 0 rows affected (0.00 sec)

mysql> select @@innodb_read_only,@@aurora_version,@@aurora_tmptable_enable_per_table_limit,@@temptable_max_ram,@@temptable_max_mmap;
+--------------------+------------------+------------------------------------------+---------------------+----------------------+
| @@innodb_read_only | @@aurora_version | @@aurora_tmptable_enable_per_table_limit | @@temptable_max_ram | @@temptable_max_mmap |
+--------------------+------------------+------------------------------------------+---------------------+----------------------+
|                  0 | 3.04.0           |                                        0 |          1073741824 |           1073741824 |
+--------------------+------------------+------------------------------------------+---------------------+----------------------+
1 row in set (0.00 sec)

mysql> show status like '%created_tmp_disk%';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 0     |
+-------------------------+-------+
1 row in set (0.00 sec)

mysql> set cte_max_recursion_depth=4294967295;
Query OK, 0 rows affected (0.00 sec)

mysql> WITH RECURSIVE cte (n) AS (SELECT 1 UNION ALL SELECT n + 1 FROM cte WHERE n < 60000000) SELECT max(n) FROM cte;
+----------+
| max(n)   |
+----------+
| 60000000 |
+----------+
1 row in set (13.99 sec)

mysql> show status like '%created_tmp_disk%';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 0     |
+-------------------------+-------+
1 row in set (0.00 sec)
```

**Example of writer DB instance with `aurora_tmptable_enable_per_table_limit` set to `ON`**  
The in-memory temporary table is converted to an InnoDB on-disk temporary table.  

```
mysql> set aurora_tmptable_enable_per_table_limit=1;
Query OK, 0 rows affected (0.00 sec)

mysql> select @@innodb_read_only,@@aurora_version,@@aurora_tmptable_enable_per_table_limit,@@tmp_table_size;
+--------------------+------------------+------------------------------------------+------------------+
| @@innodb_read_only | @@aurora_version | @@aurora_tmptable_enable_per_table_limit | @@tmp_table_size |
+--------------------+------------------+------------------------------------------+------------------+
|                  0 | 3.04.0           |                                        1 |         16777216 |
+--------------------+------------------+------------------------------------------+------------------+
1 row in set (0.00 sec)

mysql> set cte_max_recursion_depth=4294967295;
Query OK, 0 rows affected (0.00 sec)

mysql> show status like '%created_tmp_disk%';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 0     |
+-------------------------+-------+
1 row in set (0.00 sec)

mysql> WITH RECURSIVE cte (n) AS (SELECT 1 UNION ALL SELECT n + 1 FROM cte WHERE n < 6000000) SELECT max(n) FROM cte;
+---------+
| max(n)  |
+---------+
| 6000000 |
+---------+
1 row in set (4.10 sec)

mysql> show status like '%created_tmp_disk%';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 1     |
+-------------------------+-------+
1 row in set (0.00 sec)
```

**Example of reader DB instance with `aurora_tmptable_enable_per_table_limit` set to `OFF`**  
The query finishes without an error because `tmp_table_size` doesn't apply, and the global `TempTable` resources limit hasn't been reached.  

```
mysql> set aurora_tmptable_enable_per_table_limit=0;
Query OK, 0 rows affected (0.00 sec)

mysql> select @@innodb_read_only,@@aurora_version,@@aurora_tmptable_enable_per_table_limit,@@temptable_max_ram,@@temptable_max_mmap;
+--------------------+------------------+------------------------------------------+---------------------+----------------------+
| @@innodb_read_only | @@aurora_version | @@aurora_tmptable_enable_per_table_limit | @@temptable_max_ram | @@temptable_max_mmap |
+--------------------+------------------+------------------------------------------+---------------------+----------------------+
|                  1 | 3.04.0           |                                        0 |          1073741824 |           1073741824 |
+--------------------+------------------+------------------------------------------+---------------------+----------------------+
1 row in set (0.00 sec)

mysql> set cte_max_recursion_depth=4294967295;
Query OK, 0 rows affected (0.00 sec)

mysql> WITH RECURSIVE cte (n) AS (SELECT 1 UNION ALL SELECT n + 1 FROM cte WHERE n < 60000000) SELECT max(n) FROM cte;
+----------+
| max(n)   |
+----------+
| 60000000 |
+----------+
1 row in set (14.05 sec)
```

**Example of reader DB instance reaching the global `TempTable` resources limit with `aurora_tmptable_enable_per_table_limit` set to `OFF`**  
This query exceeds the global `TempTable` resources limit. Because overflow to InnoDB on-disk temporary tables isn't available on reader instances, the query ends with an error.  

```
mysql> set aurora_tmptable_enable_per_table_limit=0;
Query OK, 0 rows affected (0.00 sec)

mysql> select @@innodb_read_only,@@aurora_version,@@aurora_tmptable_enable_per_table_limit,@@temptable_max_ram,@@temptable_max_mmap;
+--------------------+------------------+------------------------------------------+---------------------+----------------------+
| @@innodb_read_only | @@aurora_version | @@aurora_tmptable_enable_per_table_limit | @@temptable_max_ram | @@temptable_max_mmap |
+--------------------+------------------+------------------------------------------+---------------------+----------------------+
|                  1 | 3.04.0           |                                        0 |          1073741824 |           1073741824 |
+--------------------+------------------+------------------------------------------+---------------------+----------------------+
1 row in set (0.00 sec)

mysql> set cte_max_recursion_depth=4294967295;
Query OK, 0 rows affected (0.01 sec)

mysql> WITH RECURSIVE cte (n) AS (SELECT 1 UNION ALL SELECT n + 1 FROM cte WHERE n < 120000000) SELECT max(n) FROM cte;
ERROR 1114 (HY000): The table '/rdsdbdata/tmp/#sqlfd_1586_2' is full
```

**Example of reader DB instance with `aurora_tmptable_enable_per_table_limit` set to `ON`**  
The query ends with an error when the `tmp_table_size` limit is reached.  

```
mysql> set aurora_tmptable_enable_per_table_limit=1;
Query OK, 0 rows affected (0.00 sec)

mysql> select @@innodb_read_only,@@aurora_version,@@aurora_tmptable_enable_per_table_limit,@@tmp_table_size;
+--------------------+------------------+------------------------------------------+------------------+
| @@innodb_read_only | @@aurora_version | @@aurora_tmptable_enable_per_table_limit | @@tmp_table_size |
+--------------------+------------------+------------------------------------------+------------------+
|                  1 | 3.04.0           |                                        1 |         16777216 |
+--------------------+------------------+------------------------------------------+------------------+
1 row in set (0.00 sec)

mysql> set cte_max_recursion_depth=4294967295;
Query OK, 0 rows affected (0.00 sec)

mysql> WITH RECURSIVE cte (n) AS (SELECT 1 UNION ALL SELECT n + 1 FROM cte WHERE n < 6000000) SELECT max(n) FROM cte;
ERROR 1114 (HY000): The table '/rdsdbdata/tmp/#sqlfd_8_2' is full
```

## Mitigating fullness issues for internal temporary tables on Aurora Replicas


To avoid size limitation issues for temporary tables, set the `temptable_max_ram` and `temptable_max_mmap` parameters so that their combined value fits the requirements of your workload.

Be careful when setting the value of the `temptable_max_ram` parameter. Setting the value too high reduces the available memory on the database instance, which can cause an out-of-memory condition. Monitor the average freeable memory on the DB instance. Then determine an appropriate value for `temptable_max_ram` so that a reasonable amount of free memory remains on the instance. For more information, see [Freeable memory issues in Amazon Aurora](CHAP_Troubleshooting.md#Troubleshooting.FreeableMemory).

It is also important to monitor the size of the local storage and the temporary table space consumption. You can monitor the temporary storage available for a specific DB instance with the `FreeLocalStorage` Amazon CloudWatch metric, described in [Amazon CloudWatch metrics for Amazon Aurora](Aurora.AuroraMonitoring.Metrics.md).

**Note**  
This procedure doesn't work when the `aurora_tmptable_enable_per_table_limit` parameter is set to `ON`. For more information, see [Limiting the size of internal, in-memory temporary tables](#ams3-temptable-behavior-limit).

**Example 1**  
You know that your temporary tables grow to a cumulative size of 20 GiB. You want in-memory temporary tables to use up to 2 GiB, and to grow to a maximum of 20 GiB on disk.  
Set `temptable_max_ram` to **2,147,483,648** and `temptable_max_mmap` to **21,474,836,480**. These values are in bytes.  
These parameter settings make sure that your temporary tables can grow to a cumulative total of 22 GiB.

**Example 2**  
Your current instance size is 16xlarge or larger. You don't know the total size of the temporary tables that you might need. You want to be able to use up to 4 GiB in memory and up to the maximum available storage size on disk.  
Set `temptable_max_ram` to **4,294,967,296** and `temptable_max_mmap` to **1,099,511,627,776**. These values are in bytes.  
Here you're setting `temptable_max_mmap` to 1 TiB, which is less than the maximum local storage of 1.2 TiB on a 16xlarge Aurora DB instance.  
On a smaller instance size, adjust the value of `temptable_max_mmap` so that it doesn't fill up the available local storage. For example, a 2xlarge instance has only 160 GiB of local storage available. Hence, we recommend setting the value to less than 160 GiB. For more information on the available local storage for DB instance sizes, see [Temporary storage limits for Aurora MySQL](AuroraMySQL.Managing.Performance.md#AuroraMySQL.Managing.TempStorage).
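As a quick sanity check on the byte values in Examples 1 and 2, the conversions are plain powers of 1,024. The following minimal sketch reproduces them; the instance sizes and limits are taken from the examples above:

```python
GIB = 1024 ** 3  # bytes per GiB
TIB = 1024 ** 4  # bytes per TiB

# Example 1: 2 GiB of in-memory temporary tables, up to 20 GiB on disk
temptable_max_ram = 2 * GIB
temptable_max_mmap = 20 * GIB
print(temptable_max_ram)   # 2147483648
print(temptable_max_mmap)  # 21474836480
# Cumulative total available to temporary tables: 22 GiB
print((temptable_max_ram + temptable_max_mmap) // GIB)  # 22

# Example 2: 4 GiB in memory, 1 TiB on disk (16xlarge or larger)
print(4 * GIB)  # 4294967296
print(1 * TIB)  # 1099511627776
```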

## Optimizing the temptable\_max\_mmap parameter on Aurora MySQL DB instances


The `temptable_max_mmap` parameter in Aurora MySQL controls the maximum amount of local disk space that can be used by memory mapped files before overflowing to the on-disk InnoDB temporary tables (on writer DB instances) or causing an error (on reader DB instances). Setting this DB instance parameter properly can help optimize the performance of your DB instances.

**Prerequisites**  

1. Make sure that the Performance Schema is enabled. You can verify this by running the following SQL command:

   ```
   SELECT @@performance_schema;
   ```

   An output value of `1` indicates that it's enabled.

1. Confirm that the temporary table memory instrumentation is enabled. You can verify this by running the following SQL command:

   ```
   SELECT name, enabled FROM performance_schema.setup_instruments WHERE name LIKE '%memory%temptable%';
   ```

   The `enabled` column shows `YES` for the relevant temporary table memory instrumentation entries.

**Monitoring temporary table usage**  
When setting the initial value for `temptable_max_mmap`, we recommend that you start with 80% of the local storage size for the DB instance class that you're using. This ensures that the temporary tables have enough disk space to operate efficiently, while leaving room for other disk usage on the instance.  
To find the local storage size for your DB instance class, see [Temporary storage limits for Aurora MySQL](AuroraMySQL.Managing.Performance.md#AuroraMySQL.Managing.TempStorage).  
For example, if you're using the db.r5.large DB instance class, the local storage size is 32 GiB. In this case, you would initially set the `temptable_max_mmap` parameter to 80% of 32 GiB, which is 25.6 GiB.  
After setting the initial `temptable_max_mmap` value, run your peak workload on the Aurora MySQL instances. Monitor the current and high temporary table disk usage using the following SQL query:  

```
SELECT event_name, current_count, current_alloc, current_avg_alloc, high_count, high_alloc, high_avg_alloc
FROM sys.memory_global_by_current_bytes WHERE event_name LIKE 'memory/temptable/%';
```
This query retrieves the following information:  
+ `event_name` – The name of the temporary table memory or disk usage event.
+ `current_count` – The current number of allocated temporary table memory or disk blocks.
+ `current_alloc` – The current amount of memory or disk allocated for temporary tables.
+ `current_avg_alloc` – The current average size of temporary table memory or disk blocks.
+ `high_count` – The highest number of allocated temporary table memory or disk blocks.
+ `high_alloc` – The highest amount of memory or disk allocated for temporary tables.
+ `high_avg_alloc` – The highest average size of temporary table memory or disk blocks.
If your queries fail with a `Table is full` error with this setting, your workload requires more disk space for temporary table operations. In this case, consider increasing your DB instance size to one with more local storage space.
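The 80% starting-point guidance above can be expressed as a small helper. This is an illustrative sketch, not an AWS-provided formula; the 32 GiB figure is the db.r5.large local storage size mentioned earlier:

```python
GIB = 1024 ** 3  # bytes per GiB

def initial_temptable_max_mmap(local_storage_gib, fraction=0.8):
    """Starting value for temptable_max_mmap, in bytes: a fraction
    (80% by default) of the instance class's local storage size."""
    return int(local_storage_gib * fraction * GIB)

# db.r5.large: 32 GiB of local storage -> start at about 25.6 GiB
print(initial_temptable_max_mmap(32))  # 27487790694
```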

**Setting the optimal `temptable_max_mmap` value**  
Use the following procedure to monitor and set the right size for the `temptable_max_mmap` parameter.  

1. Review the output of the previous query, and identify the peak temporary table disk usage, as indicated by the `high_alloc` column.

1. Based on the peak temporary table disk usage, adjust the `temptable_max_mmap` parameter in the DB parameter group for your Aurora MySQL DB instances.

   Set the value to be slightly higher than the peak temporary table disk usage to accommodate future growth.

1. Apply the parameter group changes to your DB instances.

1. Monitor the temporary table disk usage again during your peak workload to make sure that the new `temptable_max_mmap` value is appropriate.

1. Repeat the previous steps as needed to fine-tune the `temptable_max_mmap` parameter.
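The adjustment in the steps above can be sketched as follows. The 20% headroom is an assumption for illustration, not an AWS recommendation; choose a margin that fits your expected growth:

```python
def adjusted_temptable_max_mmap(peak_high_alloc_bytes, headroom=0.2):
    """Return a temptable_max_mmap value slightly above the observed
    peak disk usage (the high_alloc column), leaving room for growth."""
    return int(peak_high_alloc_bytes * (1 + headroom))

# Suppose the peak observed by the monitoring query was 10 GiB:
peak = 10 * 1024 ** 3
print(adjusted_temptable_max_mmap(peak))  # 12884901888
```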

## User-created (explicit) temporary tables on reader DB instances


You can create explicit temporary tables using the `TEMPORARY` keyword in your `CREATE TABLE` statement. Explicit temporary tables are supported on the writer DB instance in an Aurora DB cluster. You can also use explicit temporary tables on reader DB instances, but those tables can't use the InnoDB storage engine.

To avoid errors while creating explicit temporary tables on Aurora MySQL reader DB instances, make sure that you run all `CREATE TEMPORARY TABLE` statements in one or both of the following ways:
+ Don't specify the `ENGINE=InnoDB` clause.
+ Don't set the SQL mode to `NO_ENGINE_SUBSTITUTION`.

## Temporary table creation errors and mitigation


The error that you receive is different depending on whether you use a plain `CREATE TEMPORARY TABLE` statement or the variation `CREATE TEMPORARY TABLE AS SELECT`. The following examples show the different kinds of errors.

This temporary table behavior applies only to read-only instances. The first example confirms that the session is connected to that kind of instance.

```
mysql> select @@innodb_read_only;
+--------------------+
| @@innodb_read_only |
+--------------------+
|                  1 |
+--------------------+
```

For plain `CREATE TEMPORARY TABLE` statements, the statement fails when the `NO_ENGINE_SUBSTITUTION` SQL mode is turned on. When `NO_ENGINE_SUBSTITUTION` is turned off (default), the appropriate engine substitution is made, and the temporary table creation succeeds.

```
mysql> set sql_mode = 'NO_ENGINE_SUBSTITUTION';

mysql>  CREATE TEMPORARY TABLE tt2 (id int) ENGINE=InnoDB;
ERROR 3161 (HY000): Storage engine InnoDB is disabled (Table creation is disallowed).

mysql> SET sql_mode = '';

mysql> CREATE TEMPORARY TABLE tt4 (id int) ENGINE=InnoDB;

mysql> SHOW CREATE TABLE tt4\G
*************************** 1. row ***************************
       Table: tt4
Create Table: CREATE TEMPORARY TABLE `tt4` (
  `id` int DEFAULT NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
```

For `CREATE TEMPORARY TABLE AS SELECT` statements, the statement fails when the `NO_ENGINE_SUBSTITUTION` SQL mode is turned on. When `NO_ENGINE_SUBSTITUTION` is turned off (default), the appropriate engine substitution is made, and the temporary table creation succeeds.

```
mysql> set sql_mode = 'NO_ENGINE_SUBSTITUTION';

mysql> CREATE TEMPORARY TABLE tt1 ENGINE=InnoDB AS SELECT * FROM t1;
ERROR 3161 (HY000): Storage engine InnoDB is disabled (Table creation is disallowed).

mysql> SET sql_mode = '';

mysql> CREATE TEMPORARY TABLE tt3 ENGINE=InnoDB AS SELECT * FROM t1;

mysql> show create table tt3;
+-------+----------------------------------------------------------+
| Table | Create Table                                             |
+-------+----------------------------------------------------------+
| tt3   | CREATE TEMPORARY TABLE `tt3` (
  `id` int DEFAULT NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci |
+-------+----------------------------------------------------------+
1 row in set (0.00 sec)
```

For more information about the storage aspects and performance implications of temporary tables in Aurora MySQL version 3, see the blog post [Use the TempTable storage engine on Amazon RDS for MySQL and Amazon Aurora MySQL](https://aws.amazon.com/blogs/database/use-the-temptable-storage-engine-on-amazon-rds-for-mysql-and-amazon-aurora-mysql/).

# Comparing Aurora MySQL version 2 and Aurora MySQL version 3


Use the following to learn about changes to be aware of when you upgrade your Aurora MySQL version 2 cluster to version 3.

**Topics**
+ [

## Atomic Data Definition Language (DDL) support
](#AuroraMySQL.Compare-v2-v3-atomic-ddl)
+ [

## Feature differences between Aurora MySQL version 2 and 3
](#AuroraMySQL.Compare-v2-v3-features)
+ [

## Instance class support
](#AuroraMySQL.mysql80-instance-classes)
+ [

## Parameter changes for Aurora MySQL version 3
](#AuroraMySQL.mysql80-parameter-changes)
+ [

## Status variables
](#AuroraMySQL.mysql80-status-vars)
+ [

## Inclusive language changes for Aurora MySQL version 3
](#AuroraMySQL.8.0-inclusive-language)
+ [

## AUTO\_INCREMENT values
](#AuroraMySQL.mysql80-autoincrement)
+ [

## Binary log replication
](#AuroraMySQL.mysql80-binlog)

## Atomic Data Definition Language (DDL) support


One of the largest changes from MySQL 5.7 to 8.0 is the introduction of the [Atomic Data Dictionary](https://dev.mysql.com/doc/refman/8.0/en/data-dictionary-file-removal.html). Before MySQL 8.0, the MySQL data dictionary used a file-based approach to store metadata such as table definitions (.frm), triggers (.trg), and functions separately from the storage engine's metadata (such as InnoDB's). This had some issues, including the risk of tables becoming "[orphaned](https://dev.mysql.com/doc/refman/5.7/en/innodb-troubleshooting-datadict.html)" if something unexpected happened during a DDL operation, causing the file-based and storage engine metadata to get out of sync.

To fix this, MySQL 8.0 introduced the Atomic Data Dictionary, which stores all metadata in a set of internal InnoDB tables in the `mysql` schema. This new architecture provides a transactional, [ACID](https://en.wikipedia.org/wiki/ACID)-compliant way to manage database metadata, solving the "atomic DDL" problem from the old file-based approach. For more information on the Atomic Data Dictionary, see [Removal of file-based metadata storage](https://dev.mysql.com/doc/refman/8.0/en/data-dictionary-file-removal.html) and [Atomic data definition statement support](https://dev.mysql.com/doc/refman/8.0/en/atomic-ddl.html) in the *MySQL Reference Manual*.

Due to this architectural change, you must consider the following when upgrading from Aurora MySQL version 2 to version 3:
+ The file-based metadata from version 2 must be migrated to the new data dictionary tables during the upgrade process to version 3. Depending on how many database objects are migrated, this could take some time.
+ The changes have also introduced some new incompatibilities that might need to be addressed before you can upgrade from MySQL 5.7 to 8.0. For example, 8.0 has some new reserved keywords that could conflict with existing database object names.

To help you identify these incompatibilities before upgrading the engine, Aurora MySQL runs a series of upgrade compatibility checks (prechecks) to determine whether there are any incompatible objects in your database dictionary, before performing the data dictionary upgrade. For more information on the prechecks, see [Major version upgrade prechecks for Aurora MySQL](AuroraMySQL.upgrade-prechecks.md).

## Feature differences between Aurora MySQL version 2 and 3


The following Amazon Aurora MySQL features are supported in Aurora MySQL version 2 (MySQL 5.7–compatible), but they aren't supported in Aurora MySQL version 3 (MySQL 8.0–compatible):
+ You can't use Aurora MySQL version 3 for Aurora Serverless v1 clusters. Aurora MySQL version 3 works with Aurora Serverless v2.
+ Lab mode doesn't apply to Aurora MySQL version 3. There aren't any lab mode features in Aurora MySQL version 3. Instant DDL supersedes the fast online DDL feature that was formerly available in lab mode. For an example, see [Instant DDL (Aurora MySQL version 3)](AuroraMySQL.Managing.FastDDL.md#AuroraMySQL.mysql80-instant-ddl).
+ The query cache is removed from community MySQL 8.0 and also from Aurora MySQL version 3.
+ Aurora MySQL version 3 is compatible with the community MySQL hash join feature. The Aurora-specific implementation of hash joins in Aurora MySQL version 2 isn't used. For information about using hash joins with Aurora parallel query, see [Turning on hash join for parallel query clusters](aurora-mysql-parallel-query-enabling.md#aurora-mysql-parallel-query-enabling-hash-join) and [Aurora MySQL hints](AuroraMySQL.Reference.Hints.md). For general usage information about hash joins, see [Hash Join Optimization](https://dev.mysql.com/doc/refman/8.0/en/hash-joins.html) in the *MySQL Reference Manual*.
+ The `mysql.lambda_async` stored procedure that was deprecated in Aurora MySQL version 2 is removed in version 3. For version 3, use the asynchronous function `lambda_async` instead.
+ The default character set in Aurora MySQL version 3 is `utf8mb4`. In Aurora MySQL version 2, the default character set was `latin1`. For information about this character set, see [The utf8mb4 Character Set (4-Byte UTF-8 Unicode Encoding)](https://dev.mysql.com/doc/refman/8.0/en/charset-unicode-utf8mb4.html) in the *MySQL Reference Manual*.

Some Aurora MySQL features are available for certain combinations of AWS Region and DB engine version. For details, see [Supported features in Amazon Aurora by AWS Region and Aurora DB engine](Concepts.AuroraFeaturesRegionsDBEngines.grids.md).

## Instance class support


Aurora MySQL version 3 supports a different set of instance classes from Aurora MySQL version 2:
+ For larger instances, you can use the modern instance classes such as `db.r5`, `db.r6g`, and `db.x2g`.
+ For smaller instances, you can use the modern instance classes such as `db.t3` and `db.t4g`.
**Note**  
We recommend using the T DB instance classes only for development and test servers, or other non-production servers. For more details on the T instance classes, see [Using T instance classes for development and testing](AuroraMySQL.BestPractices.Performance.md#AuroraMySQL.BestPractices.T2Medium).

The following instance classes from Aurora MySQL version 2 aren't available for Aurora MySQL version 3:
+  `db.r4` 
+  `db.r3` 
+  `db.t3.small` 
+  `db.t2` 

 Check your administration scripts for any CLI statements that create Aurora MySQL DB instances and hardcode instance class names that aren't available for Aurora MySQL version 3. If necessary, change the instance class names to ones that Aurora MySQL version 3 supports. 

**Tip**  
 To check the instance classes that you can use for a specific combination of Aurora MySQL version and AWS Region, use the `describe-orderable-db-instance-options` AWS CLI command. 

 For full details about Aurora instance classes, see [Amazon Aurora DB instance classes](Concepts.DBInstanceClass.md). 

## Parameter changes for Aurora MySQL version 3

Aurora MySQL version 3 includes new cluster-level and instance-level configuration parameters. Aurora MySQL version 3 also removes some parameters that were present in Aurora MySQL version 2. Some parameter names are changed as a result of the initiative for inclusive language. For backward compatibility, you can still retrieve the parameter values using either the old names or the new names. However, you must use the new names to specify parameter values in a custom parameter group.

In Aurora MySQL version 3, the value of the `lower_case_table_names` parameter is set permanently at the time the cluster is created. If you use a nondefault value for this option, set up your Aurora MySQL version 3 custom parameter group before upgrading. Then specify the parameter group during the create cluster or snapshot restore operation.

**Note**  
With an Aurora global database based on Aurora MySQL, you can't perform an in-place upgrade from Aurora MySQL version 2 to version 3 if the `lower_case_table_names` parameter is turned on. Use the snapshot restore method instead.

In Aurora MySQL version 3, the `init_connect` and `read_only` parameters don't apply for users who have the `CONNECTION_ADMIN` privilege. This includes the Aurora master user. For more information, see [Role-based privilege model](AuroraMySQL.Compare-80-v3.md#AuroraMySQL.privilege-model).

For the full list of Aurora MySQL cluster parameters, see [Cluster-level parameters](AuroraMySQL.Reference.ParameterGroups.md#AuroraMySQL.Reference.Parameters.Cluster). The table covers all the parameters from Aurora MySQL version 2 and 3. The table includes notes showing which parameters are new in Aurora MySQL version 3 or were removed from Aurora MySQL version 3.

For the full list of Aurora MySQL instance parameters, see [Instance-level parameters](AuroraMySQL.Reference.ParameterGroups.md#AuroraMySQL.Reference.Parameters.Instance). The table covers all the parameters from Aurora MySQL version 2 and 3. The table includes notes showing which parameters are new in Aurora MySQL version 3 and which parameters were removed from Aurora MySQL version 3. It also includes notes showing which parameters were modifiable in earlier versions but not Aurora MySQL version 3.

For information about parameter names that changed, see [Inclusive language changes for Aurora MySQL version 3](#AuroraMySQL.8.0-inclusive-language).

## Status variables


For information about status variables that aren't applicable to Aurora MySQL, see [MySQL status variables that don't apply to Aurora MySQL](AuroraMySQL.Reference.GlobalStatusVars.md#AuroraMySQL.Reference.StatusVars.Inapplicable).

## Inclusive language changes for Aurora MySQL version 3


 Aurora MySQL version 3 is compatible with version 8.0.23 from the MySQL community edition. Aurora MySQL version 3 also includes changes from MySQL 8.0.26 related to keywords and system schemas for inclusive language. For example, the `SHOW REPLICA STATUS` command is now preferred instead of `SHOW SLAVE STATUS`. 

 The following Amazon CloudWatch metrics have new names in Aurora MySQL version 3. 

 In Aurora MySQL version 3, only the new metric names are available. Make sure to update any alarms or other automation that relies on metric names when you upgrade to Aurora MySQL version 3. 


|  Old name  |  New name  | 
| --- | --- | 
|  ForwardingMasterDMLLatency  |  ForwardingWriterDMLLatency  | 
|  ForwardingMasterOpenSessions  |  ForwardingWriterOpenSessions  | 
|  AuroraDMLRejectedMasterFull  |  AuroraDMLRejectedWriterFull  | 
|  ForwardingMasterDMLThroughput  |  ForwardingWriterDMLThroughput  | 

 The following status variables have new names in Aurora MySQL version 3. 

 For compatibility, you can use either name in the initial Aurora MySQL version 3 release. The old status variable names are to be removed in a future release. 


|  Name to be removed  |  New or preferred name  | 
| --- | --- | 
|  Aurora\_fwd\_master\_dml\_stmt\_duration  |  Aurora\_fwd\_writer\_dml\_stmt\_duration  | 
|  Aurora\_fwd\_master\_dml\_stmt\_count  |  Aurora\_fwd\_writer\_dml\_stmt\_count  | 
|  Aurora\_fwd\_master\_select\_stmt\_duration  |  Aurora\_fwd\_writer\_select\_stmt\_duration  | 
|  Aurora\_fwd\_master\_select\_stmt\_count  |  Aurora\_fwd\_writer\_select\_stmt\_count  | 
|  Aurora\_fwd\_master\_errors\_session\_timeout  |  Aurora\_fwd\_writer\_errors\_session\_timeout  | 
|  Aurora\_fwd\_master\_open\_sessions  |  Aurora\_fwd\_writer\_open\_sessions  | 
|  Aurora\_fwd\_master\_errors\_session\_limit  |  Aurora\_fwd\_writer\_errors\_session\_limit  | 
|  Aurora\_fwd\_master\_errors\_rpc\_timeout  |  Aurora\_fwd\_writer\_errors\_rpc\_timeout  | 

The following configuration parameters have new names in Aurora MySQL version 3.

For compatibility, you can check the parameter values in the `mysql` client by using either name in the initial Aurora MySQL version 3 release. You can use only the new names when modifying values in a custom parameter group. The old parameter names are to be removed in a future release.


|  Name to be removed  |  New or preferred name  | 
| --- | --- | 
|  aurora\_fwd\_master\_idle\_timeout  |  aurora\_fwd\_writer\_idle\_timeout  | 
|  aurora\_fwd\_master\_max\_connections\_pct  |  aurora\_fwd\_writer\_max\_connections\_pct  | 
|  master\_verify\_checksum  |  source\_verify\_checksum  | 
|  sync\_master\_info  |  sync\_source\_info  | 
|  init\_slave  |  init\_replica  | 
|  rpl\_stop\_slave\_timeout  |  rpl\_stop\_replica\_timeout  | 
|  log\_slow\_slave\_statements  |  log\_slow\_replica\_statements  | 
|  slave\_max\_allowed\_packet  |  replica\_max\_allowed\_packet  | 
|  slave\_compressed\_protocol  |  replica\_compressed\_protocol  | 
|  slave\_exec\_mode  |  replica\_exec\_mode  | 
|  slave\_type\_conversions  |  replica\_type\_conversions  | 
|  slave\_sql\_verify\_checksum  |  replica\_sql\_verify\_checksum  | 
|  slave\_parallel\_type  |  replica\_parallel\_type  | 
|  slave\_preserve\_commit\_order  |  replica\_preserve\_commit\_order  | 
|  log\_slave\_updates  |  log\_replica\_updates  | 
|  slave\_allow\_batching  |  replica\_allow\_batching  | 
|  slave\_load\_tmpdir  |  replica\_load\_tmpdir  | 
|  slave\_net\_timeout  |  replica\_net\_timeout  | 
|  sql\_slave\_skip\_counter  |  sql\_replica\_skip\_counter  | 
|  slave\_skip\_errors  |  replica\_skip\_errors  | 
|  slave\_checkpoint\_period  |  replica\_checkpoint\_period  | 
|  slave\_checkpoint\_group  |  replica\_checkpoint\_group  | 
|  slave\_transaction\_retries  |  replica\_transaction\_retries  | 
|  slave\_parallel\_workers  |  replica\_parallel\_workers  | 
|  slave\_pending\_jobs\_size\_max  |  replica\_pending\_jobs\_size\_max  | 
|  pseudo\_slave\_mode  |  pseudo\_replica\_mode  | 

 The following stored procedures have new names in Aurora MySQL version 3. 

 For compatibility, you can use either name in the initial Aurora MySQL version 3 release. The old procedure names are to be removed in a future release. 


|  Name to be removed  |  New or preferred name  | 
| --- | --- | 
|  mysql.rds\_set\_master\_auto\_position  |  mysql.rds\_set\_source\_auto\_position  | 
|  mysql.rds\_set\_external\_master  |  mysql.rds\_set\_external\_source  | 
|  mysql.rds\_set\_external\_master\_with\_auto\_position  |  mysql.rds\_set\_external\_source\_with\_auto\_position  | 
|  mysql.rds\_reset\_external\_master  |  mysql.rds\_reset\_external\_source  | 
|  mysql.rds\_next\_master\_log  |  mysql.rds\_next\_source\_log  | 

## AUTO\_INCREMENT values


 In Aurora MySQL version 3, Aurora preserves the `AUTO_INCREMENT` value for each table when it restarts each DB instance. In Aurora MySQL version 2, the `AUTO_INCREMENT` value wasn't preserved after a restart. 

 The `AUTO_INCREMENT` value isn't preserved when you set up a new cluster by restoring from a snapshot, performing a point-in-time recovery, or cloning a cluster. In these cases, the `AUTO_INCREMENT` value is initialized based on the largest column value in the table at the time the snapshot was created. This behavior differs from RDS for MySQL 8.0, where the `AUTO_INCREMENT` value is preserved during these operations. 

## Binary log replication


 In MySQL 8.0 community edition, binary log replication is turned on by default. In Aurora MySQL version 3, binary log replication is turned off by default. 

**Tip**  
 If your high availability requirements are fulfilled by the Aurora built-in replication features, you can leave binary log replication turned off. That way, you can avoid the performance overhead of binary log replication. You can also avoid the associated monitoring and troubleshooting that are needed to manage binary log replication. 

 Aurora supports binary log replication from a MySQL 5.7–compatible source to Aurora MySQL version 3. The source system can be an Aurora MySQL DB cluster, an RDS for MySQL DB instance, or an on-premises MySQL instance. 

 As does community MySQL, Aurora MySQL supports replication from a source running a specific version to a target running the same major version or one major version higher. For example, replication from a MySQL 5.6–compatible system to Aurora MySQL version 3 isn't supported. Replicating from Aurora MySQL version 3 to a MySQL 5.7–compatible or MySQL 5.6–compatible system isn't supported. For details about using binary log replication, see [Replication between Aurora and MySQL or between Aurora and another Aurora DB cluster (binary log replication)](AuroraMySQL.Replication.MySQL.md). 

 Aurora MySQL version 3 includes improvements to binary log replication in community MySQL 8.0, such as filtered replication. For details about the community MySQL 8.0 improvements, see [How Servers Evaluate Replication Filtering Rules](https://dev.mysql.com/doc/refman/8.0/en/replication-rules.html) in the *MySQL Reference Manual*. 

### Transaction compression for binary log replication


 For usage information about binary log compression, see [Binary Log Transaction Compression](https://dev.mysql.com/doc/refman/8.0/en/binary-log-transaction-compression.html) in the MySQL Reference Manual. 

 The following limitations apply to binary log compression in Aurora MySQL version 3: 
+  Transactions whose binary log data is larger than the maximum allowed packet size aren't compressed. This is true regardless of whether the Aurora MySQL binary log compression setting is turned on. Such transactions are replicated without being compressed. 
+  If you use a connector for change data capture (CDC) that doesn't support MySQL 8.0 yet, you can't use this feature. We recommend that you test any third-party connectors thoroughly with binary log compression before turning it on in systems that use binary log replication for CDC. 

# Comparing Aurora MySQL version 3 and MySQL 8.0 Community Edition


You can use the following information to learn about the changes to be aware of when you convert from a different MySQL 8.0–compatible system to Aurora MySQL version 3.

 In general, Aurora MySQL version 3 supports the feature set of community MySQL 8.0.23. Some new features from MySQL 8.0 community edition don't apply to Aurora MySQL. Some of those features aren't compatible with some aspect of Aurora, such as the Aurora storage architecture. Other features aren't needed because the Amazon RDS management service provides equivalent functionality. The following features in community MySQL 8.0 aren't supported or work differently in Aurora MySQL version 3.

 For release notes for all Aurora MySQL version 3 releases, see [Database engine updates for Amazon Aurora MySQL version 3](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraMySQLReleaseNotes/AuroraMySQL.Updates.30Updates.html) in the *Release Notes for Aurora MySQL*.

**Topics**
+ [

## MySQL 8.0 features not available in Aurora MySQL version 3
](#AuroraMySQL.Compare-80-v3-features)
+ [

## Role-based privilege model
](#AuroraMySQL.privilege-model)
+ [

## Finding the database server ID
](#AuroraMySQL.server-id)
+ [

## Authentication
](#AuroraMySQL.mysql80-authentication)

## MySQL 8.0 features not available in Aurora MySQL version 3


The following features from community MySQL 8.0 aren't available or work differently in Aurora MySQL version 3.
+ Resource groups and associated SQL statements aren't supported in Aurora MySQL.
+ Aurora MySQL doesn't support user-defined undo tablespaces and associated SQL statements, such as `CREATE UNDO TABLESPACE`, `ALTER UNDO TABLESPACE ... SET INACTIVE`, and `DROP UNDO TABLESPACE`.
+ Aurora MySQL doesn't support undo tablespace truncation for Aurora MySQL versions lower than 3.06. In Aurora MySQL version 3.06 and higher, [automated undo tablespace truncation](https://dev.mysql.com/doc/refman/8.0/en/innodb-undo-tablespaces.html#truncate-undo-tablespace) is supported.
+ The password validation plugin is supported.
+ You can't modify the settings of any MySQL plugins, including the password validation plugin.
+ The X plugin isn't supported.
+ Multisource replication isn't supported.

## Role-based privilege model


With Aurora MySQL version 3, you can't modify the tables in the `mysql` database directly. In particular, you can't set up users by inserting into the `mysql.user` table. Instead, you use SQL statements to grant role-based privileges. You also can't create other kinds of objects such as stored procedures in the `mysql` database. You can still query the `mysql` tables. If you use binary log replication, changes made directly to the `mysql` tables on the source cluster aren't replicated to the target cluster. 

 In some cases, your application might use shortcuts to create users or other objects by inserting into the `mysql` tables. If so, change your application code to use the corresponding statements such as `CREATE USER`. If your application creates stored procedures or other objects in the `mysql` database, use a different database instead. 

To export metadata for database users during the migration from an external MySQL database, you can use a MySQL Shell command instead of `mysqldump`. For more information, see [Instance Dump Utility, Schema Dump Utility, and Table Dump Utility](https://dev.mysql.com/doc/mysql-shell/8.0/en/mysql-shell-utilities-dump-instance-schema.html#mysql-shell-utilities-dump-about).

To simplify managing permissions for many users or applications, you can use the `CREATE ROLE` statement to create a role that has a set of permissions. Then you can use the `GRANT` and `SET ROLE` statements and the `current_role` function to assign roles to users or applications, switch the current role, and check which roles are in effect. For more information on the role-based permission system in MySQL 8.0, see [Using Roles](https://dev.mysql.com/doc/refman/8.0/en/roles.html) in the MySQL Reference Manual.

**Important**  
We strongly recommend that you do not use the master user directly in your applications. Instead, adhere to the best practice of using a database user created with the minimal privileges required for your application.

**Topics**
+ [

### rds\_superuser\_role
](#AuroraMySQL.privilege-model.rds_superuser_role)
+ [

### Privilege checks user for binary log replication
](#AuroraMySQL.privilege-model.binlog)
+ [

### Roles for accessing other AWS services
](#AuroraMySQL.privilege-model.other)

### rds\_superuser\_role


Aurora MySQL version 3 includes a special role named `rds_superuser_role`. The primary administrative user for each cluster already has this role granted. The `rds_superuser_role` role includes the following privileges for all database objects:
+ `ALTER`
+ `APPLICATION_PASSWORD_ADMIN`
+ `ALTER ROUTINE`
+ `CONNECTION_ADMIN`
+ `CREATE`
+ `CREATE ROLE`
+ `CREATE ROUTINE`
+ `CREATE TEMPORARY TABLES`
+ `CREATE USER`
+ `CREATE VIEW`
+ `DELETE`
+ `DROP`
+ `DROP ROLE`
+ `EVENT`
+ `EXECUTE`
+ `FLUSH_OPTIMIZER_COSTS` (Aurora MySQL version 3.09 and higher)
+ `FLUSH_STATUS` (Aurora MySQL version 3.09 and higher)
+ `FLUSH_TABLES` (Aurora MySQL version 3.09 and higher)
+ `FLUSH_USER_RESOURCES` (Aurora MySQL version 3.09 and higher)
+ `INDEX`
+ `INSERT`
+ `LOCK TABLES`
+ `PROCESS`
+ `REFERENCES`
+ `RELOAD`
+ `REPLICATION CLIENT`
+ `REPLICATION SLAVE`
+ `ROLE_ADMIN`
+ `SET_USER_ID`
+ `SELECT`
+ `SHOW DATABASES`
+ `SHOW_ROUTINE` (Aurora MySQL version 3.04 and higher)
+ `SHOW VIEW`
+ `TRIGGER`
+ `UPDATE`
+ `XA_RECOVER_ADMIN`

The role definition also includes `WITH GRANT OPTION` so that an administrative user can grant that role to other users. In particular, the administrator must grant any privileges needed to perform binary log replication with the Aurora MySQL cluster as the target.

**Tip**  
To see the full details of the permissions, enter the following statements.  

```
SHOW GRANTS FOR rds_superuser_role@'%';
SHOW GRANTS FOR name_of_administrative_user_for_your_cluster@'%';
```

### Privilege checks user for binary log replication


Aurora MySQL version 3 includes a privilege checks user for binary log (binlog) replication, `rdsrepladmin_priv_checks_user`. In addition to the privileges of `rds_superuser_role`, this user has the `replication_applier` privilege.

When you turn on binlog replication by calling the `mysql.rds_start_replication` stored procedure, `rdsrepladmin_priv_checks_user` is created.

The `rdsrepladmin_priv_checks_user@localhost` user is a reserved user. Don't modify it.

### Roles for accessing other AWS services


Aurora MySQL version 3 includes roles that you can use to access other AWS services. You can set many of these roles as an alternative to granting privileges. For example, you specify `GRANT AWS_LAMBDA_ACCESS TO user` instead of `GRANT INVOKE LAMBDA ON *.* TO user`. For the procedures to access other AWS services, see [Integrating Amazon Aurora MySQL with other AWS services](AuroraMySQL.Integrating.md). Aurora MySQL version 3 includes the following roles related to accessing other AWS services:
+ `AWS_LAMBDA_ACCESS` – An alternative to the `INVOKE LAMBDA` privilege. For usage information, see [Invoking a Lambda function from an Amazon Aurora MySQL DB cluster](AuroraMySQL.Integrating.Lambda.md).
+ `AWS_LOAD_S3_ACCESS` – An alternative to the `LOAD FROM S3` privilege. For usage information, see [Loading data into an Amazon Aurora MySQL DB cluster from text files in an Amazon S3 bucket](AuroraMySQL.Integrating.LoadFromS3.md).
+ `AWS_SELECT_S3_ACCESS` – An alternative to the `SELECT INTO S3` privilege. For usage information, see [Saving data from an Amazon Aurora MySQL DB cluster into text files in an Amazon S3 bucket](AuroraMySQL.Integrating.SaveIntoS3.md).
+ `AWS_COMPREHEND_ACCESS` – An alternative to the `INVOKE COMPREHEND` privilege. For usage information, see [Granting database users access to Aurora machine learning](mysql-ml.md#aurora-ml-sql-privileges).
+ `AWS_SAGEMAKER_ACCESS` – An alternative to the `INVOKE SAGEMAKER` privilege. For usage information, see [Granting database users access to Aurora machine learning](mysql-ml.md#aurora-ml-sql-privileges).
+ `AWS_BEDROCK_ACCESS` – There's no analogous `INVOKE` privilege for Amazon Bedrock. For usage information, see [Granting database users access to Aurora machine learning](mysql-ml.md#aurora-ml-sql-privileges).

When you grant access by using roles in Aurora MySQL version 3, you also activate the role by using the `SET ROLE role_name` or `SET ROLE ALL` statement. The following example shows how. Substitute the appropriate role name for `AWS_SELECT_S3_ACCESS`.

```
# Grant role to user.

mysql> GRANT AWS_SELECT_S3_ACCESS TO 'user'@'domain-or-ip-address';

# Check the current roles for your user. In this case, the AWS_SELECT_S3_ACCESS role has not been activated.
# Only the rds_superuser_role is currently in effect.
mysql> SELECT CURRENT_ROLE();
+--------------------------+
| CURRENT_ROLE()           |
+--------------------------+
| `rds_superuser_role`@`%` |
+--------------------------+
1 row in set (0.00 sec)

# Activate all roles associated with this user using SET ROLE.
# You can activate specific roles or all roles.
# In this case, the user only has 2 roles, so we specify ALL.
mysql> SET ROLE ALL;
Query OK, 0 rows affected (0.00 sec)

# Verify role is now active
mysql> SELECT CURRENT_ROLE();
+-----------------------------------------------------+
| CURRENT_ROLE()                                      |
+-----------------------------------------------------+
| `AWS_SELECT_S3_ACCESS`@`%`,`rds_superuser_role`@`%` |
+-----------------------------------------------------+
```

## Finding the database server ID


The database server ID (`server_id`) is required for binary log (binlog) replication. The way to find the server ID differs between Aurora MySQL and Community MySQL.

In Community MySQL, the server ID is a number, which you obtain by using the following syntax while logged into the server:

```
mysql> select @@server_id;

+-------------+
| @@server_id |
+-------------+
| 2           |
+-------------+
1 row in set (0.00 sec)
```

In Aurora MySQL, the server ID is the DB instance ID, which you obtain by using the following syntax while logged into the DB instance:

```
mysql> select @@aurora_server_id;

+------------------------+
| @@aurora_server_id     |
+------------------------+
| mydbcluster-instance-2 |
+------------------------+
1 row in set (0.00 sec)
```

For more information on binlog replication, see [Replication between Aurora and MySQL or between Aurora and another Aurora DB cluster (binary log replication)](AuroraMySQL.Replication.MySQL.md).

## Authentication


In Community MySQL 8.0, the default authentication plugin is `caching_sha2_password`. Aurora MySQL version 3 still uses the `mysql_native_password` plugin, and you can't change the `default_authentication_plugin` setting. You can, however, create new users or alter existing users so that their individual passwords use the `caching_sha2_password` authentication plugin. The following is an example.

```
mysql> CREATE USER 'testnewsha'@'%' IDENTIFIED WITH caching_sha2_password BY 'aNewShaPassword';
Query OK, 0 rows affected (0.74 sec)
```
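
Similarly, you can switch an existing user to the `caching_sha2_password` plugin with `ALTER USER`. The user name and password here are placeholders.

```
mysql> ALTER USER 'existinguser'@'%' IDENTIFIED WITH caching_sha2_password BY 'aNewShaPassword';
```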

# Upgrading to Aurora MySQL version 3


For information on upgrading your database from Aurora MySQL version 2 to version 3, see [Upgrading the major version of an Amazon Aurora MySQL DB cluster](AuroraMySQL.Updates.MajorVersionUpgrade.md).

# Aurora MySQL version 2 compatible with MySQL 5.7


This topic describes the differences between Aurora MySQL version 2 and MySQL 5.7 Community Edition.

**Important**  
Aurora MySQL version 2 reached the end of standard support on October 31, 2024. For more information, see [Preparing for Amazon Aurora MySQL-Compatible Edition version 2 end of standard support](Aurora.MySQL57.EOL.md).

## Features not supported in Aurora MySQL version 2


The following features are supported in MySQL 5.7, but are currently not supported in Aurora MySQL version 2:
+ `CREATE TABLESPACE` SQL statement
+ Group replication plugin
+ Increased page size
+ InnoDB buffer pool loading at startup
+ InnoDB full-text parser plugin
+ Multisource replication
+ Online buffer pool resizing
+ Password validation plugin – You can install the plugin, but it isn't supported. You can't customize the plugin.
+ Query rewrite plugins
+ Replication filtering
+ X Protocol

For more information about these features, see the [MySQL 5.7 documentation](https://dev.mysql.com/doc/refman/5.7/en/).

## Temporary tablespace behavior in Aurora MySQL version 2


In MySQL 5.7, the temporary tablespace is autoextending and increases in size as necessary to accommodate on-disk temporary tables. When temporary tables are dropped, the freed space can be reused for new temporary tables, but the temporary tablespace remains at its extended size and doesn't shrink. The temporary tablespace is dropped and re-created when the engine is restarted.

In Aurora MySQL version 2, the following behavior applies:
+ For new Aurora MySQL DB clusters created with version 2.10 and higher, the temporary tablespace is removed and re-created when you restart the database. This allows the dynamic resizing feature to reclaim the storage space.
+ For existing Aurora MySQL DB clusters upgraded to:
  + Version 2.10 or higher – The temporary tablespace is removed and re-created when you restart the database. This allows the dynamic resizing feature to reclaim the storage space.
  + Version 2.09 – The temporary tablespace isn't removed when you restart the database.

You can check the size of the temporary tablespace on your Aurora MySQL version 2 DB cluster by using the following query:

```
SELECT
    FILE_NAME,
    TABLESPACE_NAME,
    ROUND((TOTAL_EXTENTS * EXTENT_SIZE) / 1024 / 1024 / 1024, 4) AS SIZE
FROM
    INFORMATION_SCHEMA.FILES
WHERE
    TABLESPACE_NAME = 'innodb_temporary';
```

For more information, see [The Temporary Tablespace](https://dev.mysql.com/doc/refman/5.7/en/innodb-temporary-tablespace.html) in the MySQL documentation.

## Storage engine for on-disk temporary tables


Aurora MySQL version 2 uses different storage engines for on-disk internal temporary tables depending on the role of the instance.
+ On the writer instance, on-disk temporary tables use the InnoDB storage engine by default. They're stored in the temporary tablespace in the Aurora cluster volume.

  You can change this behavior on the writer instance by modifying the value for the DB parameter `internal_tmp_disk_storage_engine`. For more information, see [Instance-level parameters](AuroraMySQL.Reference.ParameterGroups.md#AuroraMySQL.Reference.Parameters.Instance).
+ On reader instances, on-disk temporary tables use the MyISAM storage engine, which uses local storage. That's because read-only instances can't store any data on the Aurora cluster volume.

# Security with Amazon Aurora MySQL
Security with Aurora MySQL

Security for Amazon Aurora MySQL is managed at three levels:
+ To control who can perform Amazon RDS management actions on Aurora MySQL DB clusters and DB instances, you use AWS Identity and Access Management (IAM). When you connect to AWS using IAM credentials, your AWS account must have IAM policies that grant the permissions required to perform Amazon RDS management operations. For more information, see [Identity and access management for Amazon Aurora](UsingWithRDS.IAM.md).

  If you are using IAM to access the Amazon RDS console, make sure to first sign in to the AWS Management Console with your IAM user credentials. Then go to the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).
+ Make sure to create Aurora MySQL DB clusters in a virtual private cloud (VPC) based on the Amazon VPC service. To control which devices and Amazon EC2 instances can open connections to the endpoint and port of the DB instance for Aurora MySQL DB clusters in a VPC, use a VPC security group. You can make these endpoint and port connections by using Transport Layer Security (TLS). In addition, firewall rules at your company can control whether devices running at your company can open connections to a DB instance. For more information on VPCs, see [Amazon VPC and Amazon Aurora](USER_VPC.md).

  The supported VPC tenancy depends on the DB instance class used by your Aurora MySQL DB clusters. With `default` VPC tenancy, the VPC runs on shared hardware. With `dedicated` VPC tenancy, the VPC runs on a dedicated hardware instance. The burstable performance DB instance classes support default VPC tenancy only. The burstable performance DB instance classes include the db.t2, db.t3, and db.t4g DB instance classes. All other Aurora MySQL DB instance classes support both default and dedicated VPC tenancy.
**Note**  
We recommend using the T DB instance classes only for development and test servers, or other non-production servers. For more details on the T instance classes, see [Using T instance classes for development and testing](AuroraMySQL.BestPractices.Performance.md#AuroraMySQL.BestPractices.T2Medium).

  For more information about instance classes, see [Amazon Aurora DB instance classes](Concepts.DBInstanceClass.md). For more information about `default` and `dedicated` VPC tenancy, see [Dedicated instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-instance.html) in the *Amazon Elastic Compute Cloud User Guide*.
+ To authenticate login and permissions for an Amazon Aurora MySQL DB cluster, you can take either of the following approaches, or a combination of them:
  + You can take the same approach as with a standalone instance of MySQL.

    Commands such as `CREATE USER`, `RENAME USER`, `GRANT`, `REVOKE`, and `SET PASSWORD` work just as they do in on-premises databases, as does directly modifying database schema tables. For more information, see [ Access control and account management](https://dev.mysql.com/doc/refman/8.0/en/access-control.html) in the MySQL documentation.
  + You can also use IAM database authentication.

    With IAM database authentication, you authenticate to your DB cluster by using an IAM user or IAM role and an authentication token. An *authentication token* is a unique value that is generated using the Signature Version 4 signing process. By using IAM database authentication, you can use the same credentials to control access to your AWS resources and your databases. For more information, see [IAM database authentication](UsingWithRDS.IAMDBAuth.md).

**Note**  
For more information, see [Security in Amazon Aurora](UsingWithRDS.md).

In the following sections, see information about user permissions for Aurora MySQL and TLS connections with Aurora MySQL DB clusters.

**Topics**
+ [

## Master user privileges with Amazon Aurora MySQL
](#AuroraMySQL.Security.MasterUser)
+ [

## TLS connections to Aurora MySQL DB clusters
](#AuroraMySQL.Security.SSL)

## Master user privileges with Amazon Aurora MySQL
Master user privileges with Aurora MySQL

When you create an Amazon Aurora MySQL DB instance, the master user has the default privileges listed in [Master user account privileges](UsingWithRDS.MasterAccounts.md).

To provide management services for each DB cluster, the `admin` and `rdsadmin` users are created when the DB cluster is created. In Aurora MySQL version 3 DB clusters, the `rds_superuser_role` role is also created. Attempting to drop, rename, change the password of, or change privileges for the `rdsadmin` account results in an error.

**Important**  
We strongly recommend that you do not use the master user directly in your applications. Instead, adhere to the best practice of using a database user created with the minimal privileges required for your application.

For management of the Aurora MySQL DB cluster, the standard `kill` and `kill_query` commands have been restricted. Instead, use the Amazon RDS commands `rds_kill` and `rds_kill_query` to terminate user sessions or queries on Aurora MySQL DB instances. 

**Note**  
Encryption of a database instance and snapshots is not supported for the China (Ningxia) region.

## TLS connections to Aurora MySQL DB clusters
TLS connections

Amazon Aurora MySQL DB clusters support Transport Layer Security (TLS) connections from applications using the same process and public key as RDS for MySQL DB instances.

Amazon RDS creates a TLS certificate and installs the certificate on the DB instance when Amazon RDS provisions the instance. These certificates are signed by a certificate authority. The TLS certificate includes the DB instance endpoint as the Common Name (CN) for the TLS certificate to guard against spoofing attacks. As a result, you can only use the DB cluster endpoint to connect to a DB cluster using TLS if your client supports Subject Alternative Names (SAN). Otherwise, you must use the instance endpoint of a writer instance.

For information about downloading certificates, see [Using SSL/TLS to encrypt a connection to a DB cluster](UsingWithRDS.SSL.md).

We recommend the AWS JDBC Driver as a client that supports SAN with TLS. For more information about the AWS JDBC Driver and complete instructions for using it, see the [Amazon Web Services (AWS) JDBC Driver GitHub repository](https://github.com/aws/aws-advanced-jdbc-wrapper).

**Topics**
+ [

### Requiring a TLS connection to an Aurora MySQL DB cluster
](#AuroraMySQL.Security.SSL.RequireSSL)
+ [

### TLS versions for Aurora MySQL
](#AuroraMySQL.Security.SSL.TLS_Version)
+ [

### Configuring cipher suites for connections to Aurora MySQL DB clusters
](#AuroraMySQL.Security.SSL.ConfiguringCipherSuites)
+ [

### Encrypting connections to an Aurora MySQL DB cluster
](#AuroraMySQL.Security.SSL.EncryptingConnections)

### Requiring a TLS connection to an Aurora MySQL DB cluster


You can require that all user connections to your Aurora MySQL DB cluster use TLS by using the `require_secure_transport` DB cluster parameter. By default, the `require_secure_transport` parameter is set to `OFF`. You can set the `require_secure_transport` parameter to `ON` to require TLS for connections to your DB cluster.

You can set the `require_secure_transport` parameter value by updating the DB cluster parameter group for your DB cluster. You don't need to reboot your DB cluster for the change to take effect. For more information on parameter groups, see [Parameter groups for Amazon Aurora](USER_WorkingWithParamGroups.md).
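
As a sketch, the following AWS CLI command turns on `require_secure_transport` in a custom DB cluster parameter group. The parameter group name is a placeholder for your own; because the change takes effect without a reboot, the example uses the `immediate` apply method.

```
aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-aurora-mysql-params \
    --parameters "ParameterName=require_secure_transport,ParameterValue=ON,ApplyMethod=immediate"
```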

**Note**  
The `require_secure_transport` parameter is available for Aurora MySQL version 2 and 3. You can set this parameter in a custom DB cluster parameter group. The parameter isn't available in DB instance parameter groups.

When the `require_secure_transport` parameter is set to `ON` for a DB cluster, a database client can connect to it only if it can establish an encrypted connection. Otherwise, an error message similar to the following is returned to the client:

```
MySQL Error 3159 (HY000): Connections using insecure transport are prohibited while --require_secure_transport=ON.
```

### TLS versions for Aurora MySQL


Aurora MySQL supports Transport Layer Security (TLS) versions 1.0, 1.1, 1.2, and 1.3. In Aurora MySQL version 3.04.0 and higher, you can use the TLS 1.3 protocol to secure your connections. The following table shows the TLS support for Aurora MySQL versions.



| Aurora MySQL version | TLS 1.0 | TLS 1.1 | TLS 1.2 | TLS 1.3 | Default | 
| --- | --- | --- | --- | --- | --- | 
|  Aurora MySQL version 2  | Deprecated | Deprecated |  Supported  | Not supported | All supported TLS versions | 
|  Aurora MySQL version 3 (lower than 3.04.0)  | Deprecated | Deprecated | Supported | Not supported | All supported TLS versions | 
|  Aurora MySQL version 3 (3.04.0 and higher)  | Not supported  | Not supported  | Supported | Supported | All supported TLS versions | 

**Important**  
If you're using custom parameter groups for your Aurora MySQL clusters with version 2 and versions lower than 3.04.0, we recommend using TLS 1.2 because TLS 1.0 and 1.1 are less secure. The community edition of MySQL 8.0.26 and Aurora MySQL 3.03 and its minor versions deprecated support for TLS versions 1.1 and 1.0.  
The community edition of MySQL 8.0.28 and compatible Aurora MySQL versions 3.04.0 and higher don't support TLS 1.1 and TLS 1.0. If you're using Aurora MySQL version 3.04.0 or higher, don't set the TLS protocol to 1.0 or 1.1 in your custom parameter group.  
For Aurora MySQL versions 3.04.0 and higher, the default setting is TLS 1.3 and TLS 1.2.

You can use the `tls_version` DB cluster parameter to indicate the permitted protocol versions. Similar client parameters exist for most client tools or database drivers. Some older clients might not support newer TLS versions. By default, the DB cluster attempts to use the highest TLS protocol version allowed by both the server and client configuration.

Set the `tls_version` DB cluster parameter to one of the following values:
+ `TLSv1.3` 
+ `TLSv1.2` 
+ `TLSv1.1`
+ `TLSv1`

You can also set the `tls_version` parameter to a comma-separated list of protocol versions. If you want to use both the TLS 1.2 and TLS 1.3 protocols, the `tls_version` parameter must include all protocols from the lowest to the highest. In this case, set `tls_version` as follows:

```
tls_version=TLSv1.2,TLSv1.3
```

For information about modifying parameters in a DB cluster parameter group, see [Modifying parameters in a DB cluster parameter group in Amazon Aurora](USER_WorkingWithParamGroups.ModifyingCluster.md). If you use the AWS CLI to modify the `tls_version` DB cluster parameter, the `ApplyMethod` must be set to `pending-reboot`. When the application method is `pending-reboot`, changes to parameters are applied after you stop and restart the DB clusters associated with the parameter group.
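
As a sketch, the following CLI call allows only TLS 1.2 and TLS 1.3. The parameter group name is a placeholder; the `--parameters` value uses JSON so that the comma inside `ParameterValue` isn't interpreted as a field separator.

```
aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-aurora-mysql-params \
    --parameters '[{"ParameterName":"tls_version","ParameterValue":"TLSv1.2,TLSv1.3","ApplyMethod":"pending-reboot"}]'
```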

### Configuring cipher suites for connections to Aurora MySQL DB clusters


By using configurable cipher suites, you can have more control over the security of your database connections. You can specify a list of cipher suites that you want to allow to secure client TLS connections to your database. With configurable cipher suites, you can control the connection encryption that your database server accepts. Doing this prevents the use of insecure or deprecated ciphers.

Configurable cipher suites are supported in Aurora MySQL version 3 and Aurora MySQL version 2. To specify the list of permissible TLS 1.2, TLS 1.1, and TLS 1.0 ciphers for encrypting connections, modify the `ssl_cipher` cluster parameter. Set the `ssl_cipher` parameter in a cluster parameter group using the AWS Management Console, the AWS CLI, or the RDS API.

Set the `ssl_cipher` parameter to a string of comma-separated cipher values for your TLS version. For the client application, you can specify the ciphers to use for encrypted connections by using the `--ssl-cipher` option when connecting to the database. For more about connecting to your database, see [Connecting to an Amazon Aurora MySQL DB cluster](Aurora.Connecting.md#Aurora.Connecting.AuroraMySQL).

In Aurora MySQL version 3.04.0 and higher, you can specify TLS 1.3 cipher suites. To specify the permissible TLS 1.3 cipher suites, modify the `tls_ciphersuites` parameter in your parameter group. TLS 1.3 has a reduced number of available cipher suites because its naming convention removes the key exchange mechanism and certificate type from the cipher name. Set the `tls_ciphersuites` parameter to a string of comma-separated cipher values for TLS 1.3.

The following table shows the supported ciphers along with the TLS encryption protocol and valid Aurora MySQL engine versions for each cipher.


| Cipher | Encryption protocol | Supported Aurora MySQL versions | 
| --- | --- | --- | 
| `ECDHE-RSA-AES128-SHA` | TLS 1.0 | 3.04.0 and higher, 2.11.0 and higher | 
| `ECDHE-RSA-AES128-SHA256` | TLS 1.2 | 3.04.0 and higher, 2.11.0 and higher | 
| `ECDHE-RSA-AES128-GCM-SHA256` | TLS 1.2 | 3.04.0 and higher, 2.11.0 and higher | 
| `ECDHE-RSA-AES256-SHA` | TLS 1.0 | 3.04.0 and higher, 2.11.0 and higher | 
| `ECDHE-RSA-AES256-GCM-SHA384` | TLS 1.2 | 3.04.0 and higher, 2.11.0 and higher | 
| `ECDHE-RSA-CHACHA20-POLY1305` | TLS 1.2 | 3.04.0 and higher, 2.11.0 and higher | 
| `ECDHE-ECDSA-AES128-SHA` | TLS 1.0 | 3.04.0 and higher, 2.11.0 and higher | 
| `ECDHE-ECDSA-AES256-SHA` | TLS 1.0 | 3.04.0 and higher, 2.11.0 and higher | 
| `ECDHE-ECDSA-AES128-GCM-SHA256` | TLS 1.2 | 3.04.0 and higher, 2.11.0 and higher | 
| `ECDHE-ECDSA-AES256-GCM-SHA384` | TLS 1.2 | 3.04.0 and higher, 2.11.0 and higher | 
| `ECDHE-ECDSA-CHACHA20-POLY1305` | TLS 1.2 | 3.04.0 and higher, 2.11.0 and higher | 
| `TLS_AES_128_GCM_SHA256` | TLS 1.3 | 3.04.0 and higher | 
| `TLS_AES_256_GCM_SHA384` | TLS 1.3 | 3.04.0 and higher | 
| `TLS_CHACHA20_POLY1305_SHA256` | TLS 1.3 | 3.04.0 and higher | 

For information about modifying parameters in a DB cluster parameter group, see [Modifying parameters in a DB cluster parameter group in Amazon Aurora](USER_WorkingWithParamGroups.ModifyingCluster.md). If you use the CLI to modify the `ssl_cipher` DB cluster parameter, make sure to set the `ApplyMethod` to `pending-reboot`. When the application method is `pending-reboot`, changes to parameters are applied after you stop and restart the DB clusters associated with the parameter group.

You can also use the [describe-engine-default-cluster-parameters](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-engine-default-cluster-parameters.html) CLI command to determine which cipher suites are currently supported for a specific parameter group family. The following example shows how to get the allowed values for the `ssl_cipher` cluster parameter for Aurora MySQL version 2.

```
aws rds describe-engine-default-cluster-parameters --db-parameter-group-family aurora-mysql5.7

        ...some output truncated...
	{
        "ParameterName": "ssl_cipher",
        "ParameterValue": "ECDHE-RSA-AES128-SHA,ECDHE-RSA-AES128-SHA256,ECDHE-RSA-AES128-GCM-SHA256,ECDHE-RSA-AES256-SHA,ECDHE-RSA-AES256-GCM-SHA384,ECDHE-RSA-CHACHA20-POLY1305,ECDHE-ECDSA-AES256-SHA,ECDHE-ECDSA-CHACHA20-POLY1305,ECDHE-ECDSA-AES256-GCM-SHA384,ECDHE-ECDSA-AES128-GCM-SHA256,ECDHE-ECDSA-AES128-SHA",
        "Description": "The list of permissible ciphers for connection encryption.",
        "Source": "system",
        "ApplyType": "static",
        "DataType": "list",
        "AllowedValues": "ECDHE-RSA-AES128-SHA,ECDHE-RSA-AES128-SHA256,ECDHE-RSA-AES128-GCM-SHA256,ECDHE-RSA-AES256-SHA,ECDHE-RSA-AES256-GCM-SHA384,ECDHE-RSA-CHACHA20-POLY1305,ECDHE-ECDSA-AES256-SHA,ECDHE-ECDSA-CHACHA20-POLY1305,ECDHE-ECDSA-AES256-GCM-SHA384,ECDHE-ECDSA-AES128-GCM-SHA256,ECDHE-ECDSA-AES128-SHA",
        "IsModifiable": true,
        "SupportedEngineModes": [
            "provisioned"
        ]
    },
       ...some output truncated...
```

For more information about ciphers, see the [ssl\_cipher](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_ssl_cipher) variable in the MySQL documentation. For more information about cipher suite formats, see the [openssl-ciphers list format](https://www.openssl.org/docs/manmaster/man1/openssl-ciphers.html#CIPHER-LIST-FORMAT) and [openssl-ciphers string format](https://www.openssl.org/docs/manmaster/man1/openssl-ciphers.html#CIPHER-STRINGS) documentation on the OpenSSL website.

### Encrypting connections to an Aurora MySQL DB cluster


To encrypt connections using the default `mysql` client, launch the `mysql` client using the `--ssl-ca` parameter to reference the public key.

For MySQL 5.7 and 8.0:

```
mysql -h myinstance.123456789012.rds-us-east-1.amazonaws.com
--ssl-ca=full_path_to_CA_certificate --ssl-mode=VERIFY_IDENTITY
```

For MySQL 5.6:

```
mysql -h myinstance.123456789012.rds-us-east-1.amazonaws.com
--ssl-ca=full_path_to_CA_certificate --ssl-verify-server-cert
```

Replace *full\_path\_to\_CA\_certificate* with the full path to your Certificate Authority (CA) certificate. For information about downloading a certificate, see [Using SSL/TLS to encrypt a connection to a DB cluster](UsingWithRDS.SSL.md).

You can require TLS connections for specific user accounts. For example, depending on your MySQL version, you can use one of the following statements to require TLS connections on the user account `encrypted_user`.

For MySQL 5.7 and 8.0:

```
ALTER USER 'encrypted_user'@'%' REQUIRE SSL;            
```

For MySQL 5.6:

```
GRANT USAGE ON *.* TO 'encrypted_user'@'%' REQUIRE SSL;
```

When you use RDS Proxy, you connect to the proxy endpoint instead of the usual cluster endpoint. You can make SSL/TLS required or optional for connections to the proxy, in the same way as for connections directly to the Aurora DB cluster. For information about using RDS Proxy, see [Amazon RDS Proxy for Aurora](rds-proxy.md).

**Note**  
For more information on TLS connections with MySQL, see the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/using-encrypted-connections.html).

# Updating applications to connect to Aurora MySQL DB clusters using new TLS certificates
Updating applications for new TLS certificates

As of January 13, 2023, Amazon RDS has published new Certificate Authority (CA) certificates for connecting to your Aurora DB clusters using Transport Layer Security (TLS). Following, you can find information about updating your applications to use the new certificates.

This topic can help you to determine whether any client applications use TLS to connect to your DB clusters. If they do, you can further check whether those applications require certificate verification to connect. 

**Note**  
Some applications are configured to connect to Aurora MySQL DB clusters only if they can successfully verify the certificate on the server.   
For such applications, you must update your client application trust stores to include the new CA certificates. 

After you update your CA certificates in the client application trust stores, you can rotate the certificates on your DB clusters. We strongly recommend testing these procedures in a development or staging environment before implementing them in your production environments.

For more information about certificate rotation, see [Rotating your SSL/TLS certificate](UsingWithRDS.SSL-certificate-rotation.md). For more information about downloading certificates, see [Using SSL/TLS to encrypt a connection to a DB cluster](UsingWithRDS.SSL.md). For information about using TLS with Aurora MySQL DB clusters, see [TLS connections to Aurora MySQL DB clusters](AuroraMySQL.Security.md#AuroraMySQL.Security.SSL).

**Topics**
+ [

## Determining whether any applications are connecting to your Aurora MySQL DB cluster using TLS
](#ssl-certificate-rotation-aurora-mysql.determining-server)
+ [

## Determining whether a client requires certificate verification to connect
](#ssl-certificate-rotation-aurora-mysql.determining-client)
+ [

## Updating your application trust store
](#ssl-certificate-rotation-aurora-mysql.updating-trust-store)
+ [

## Example Java code for establishing TLS connections
](#ssl-certificate-rotation-aurora-mysql.java-example)

## Determining whether any applications are connecting to your Aurora MySQL DB cluster using TLS


If you are using Aurora MySQL version 2 (compatible with MySQL 5.7) and the Performance Schema is enabled, run the following query to check if connections are using TLS. For information about enabling the Performance Schema, see [ Performance Schema quick start](https://dev.mysql.com/doc/refman/8.0/en/performance-schema-quick-start.html) in the MySQL documentation.

```
mysql> SELECT id, user, host, connection_type
       FROM performance_schema.threads pst
       INNER JOIN information_schema.processlist isp
       ON pst.processlist_id = isp.id;
```

In this sample output, you can see that both your own session (`admin`) and an application logged in as `webapp1` are using TLS.

```
+----+-----------------+------------------+-----------------+
| id | user            | host             | connection_type |
+----+-----------------+------------------+-----------------+
|  8 | admin           | 10.0.4.249:42590 | SSL/TLS         |
|  4 | event_scheduler | localhost        | NULL            |
| 10 | webapp1         | 159.28.1.1:42189 | SSL/TLS         |
+----+-----------------+------------------+-----------------+
3 rows in set (0.00 sec)
```
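As a quicker spot check for a single session, you can also inspect the `Ssl_cipher` session status variable, which is empty for unencrypted connections and shows the negotiated cipher for TLS connections.

```
mysql> SHOW SESSION STATUS LIKE 'Ssl_cipher';
```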

## Determining whether a client requires certificate verification to connect


You can check whether JDBC clients and MySQL clients require certificate verification to connect.

### JDBC


The following example with MySQL Connector/J 8.0 shows one way to check an application's JDBC connection properties to determine whether successful connections require a valid certificate. For more information on all of the JDBC connection options for MySQL, see [Configuration properties](https://dev.mysql.com/doc/connector-j/en/connector-j-reference-configuration-properties.html) in the MySQL documentation.

When you use MySQL Connector/J 8.0, a TLS connection requires verification against the server CA certificate if your connection properties have `sslMode` set to `VERIFY_CA` or `VERIFY_IDENTITY`, as in the following example.

```
Properties properties = new Properties();
properties.setProperty("sslMode", "VERIFY_IDENTITY");
properties.put("user", DB_USER);
properties.put("password", DB_PASSWORD);
```

**Note**  
If you use the MySQL Java Connector v5.1.38 or later, or the MySQL Java Connector v8.0.9 or later, to connect to your databases, these client drivers default to using TLS even if you haven't explicitly configured your applications to use it. In addition, when using TLS, they perform partial certificate verification and fail to connect if the database server certificate is expired.
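
With Connector/J 8.0, you can also set the verification mode directly in the JDBC connection URL instead of in a `Properties` object. The host, port, and database name in this URL are placeholders, not values from your environment.

```
jdbc:mysql://mydbcluster.cluster-example.us-east-1.rds.amazonaws.com:3306/mydb?sslMode=VERIFY_IDENTITY
```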

### MySQL


The following examples with the MySQL Client show two ways to check a script's MySQL connection to determine whether successful connections require a valid certificate. For more information on all of the connection options with the MySQL Client, see [Client-side configuration for encrypted connections](https://dev.mysql.com/doc/refman/8.0/en/using-encrypted-connections.html#using-encrypted-connections-client-side-configuration) in the MySQL documentation.

When you use the MySQL 5.7 or MySQL 8.0 client, a TLS connection requires verification against the server CA certificate if you specify `VERIFY_CA` or `VERIFY_IDENTITY` for the `--ssl-mode` option, as in the following example.

```
mysql -h mysql-database.rds.amazonaws.com -uadmin -ppassword --ssl-ca=/tmp/ssl-cert.pem --ssl-mode=VERIFY_CA
```

When you use the MySQL 5.6 client, an SSL connection requires verification against the server CA certificate if you specify the `--ssl-verify-server-cert` option, as in the following example.

```
mysql -h mysql-database.rds.amazonaws.com -uadmin -ppassword --ssl-ca=/tmp/ssl-cert.pem --ssl-verify-server-cert
```

## Updating your application trust store


For information about updating the trust store for MySQL applications, see [Installing SSL certificates](https://dev.mysql.com/doc/mysql-monitor/8.0/en/mem-ssl-installation.html) in the MySQL documentation.

**Note**  
When you update the trust store, you can retain older certificates in addition to adding the new certificates.

### Updating your application trust store for JDBC


You can update the trust store for applications that use JDBC for TLS connections.

For information about downloading the root certificate, see [Using SSL/TLS to encrypt a connection to a DB cluster](UsingWithRDS.SSL.md).

For sample scripts that import certificates, see [Sample script for importing certificates into your trust store](UsingWithRDS.SSL-certificate-rotation.md#UsingWithRDS.SSL-certificate-rotation-sample-script).

If you are using the MySQL JDBC driver in an application, set the following properties in the application.

```
System.setProperty("javax.net.ssl.trustStore", certs);
System.setProperty("javax.net.ssl.trustStorePassword", "password");
```

**Note**  
As a security best practice, specify a password other than the placeholder shown here.

When you start the application, set the following properties.

```
java -Djavax.net.ssl.trustStore=/path_to_truststore/MyTruststore.jks -Djavax.net.ssl.trustStorePassword=my_truststore_password com.companyName.MyApplication
```
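
As an alternative to importing certificates with keytool, you can build or update a JKS trust store programmatically. The following sketch uses only JDK classes; the class name, file paths, and alias prefix are illustrative, not part of any RDS tooling. Because any existing store is loaded first, older CA certificates stay in the store alongside the new ones.

```
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;

public class TrustStoreImport {

    // Imports every X.509 certificate found in a PEM bundle into a JKS
    // trust store, creating the store if it doesn't exist yet. Existing
    // entries are kept, so older CA certificates remain trusted.
    public static int importPem(Path pemBundle, Path trustStore, char[] storePass)
            throws Exception {
        KeyStore ks = KeyStore.getInstance("JKS");
        if (Files.exists(trustStore)) {
            try (InputStream in = Files.newInputStream(trustStore)) {
                ks.load(in, storePass);
            }
        } else {
            ks.load(null, storePass); // initialize a new, empty store
        }

        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        int added = 0;
        if (Files.size(pemBundle) > 0) { // tolerate an empty bundle file
            try (InputStream in = Files.newInputStream(pemBundle)) {
                for (Certificate cert : cf.generateCertificates(in)) {
                    // The alias prefix is arbitrary; each entry needs a unique alias.
                    ks.setCertificateEntry("rds-ca-" + added, cert);
                    added++;
                }
            }
        }

        try (OutputStream out = Files.newOutputStream(trustStore)) {
            ks.store(out, storePass);
        }
        return added;
    }
}
```

Point the resulting file at your application with the same `javax.net.ssl.trustStore` properties shown above.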

## Example Java code for establishing TLS connections


The following code example shows how to set up an SSL connection that validates the server certificate using JDBC.

```
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class MySQLSSLTest {

        private static final String DB_USER = "user name";
        private static final String DB_PASSWORD = "password";
        // This key store has only the prod root ca.
        private static final String KEY_STORE_FILE_PATH = "file-path-to-keystore";
        private static final String KEY_STORE_PASS = "keystore-password";

    public static void main(String[] args) throws Exception {
        // Connector/J 8.0 driver class (the old com.mysql.jdbc.Driver
        // name still works, but is deprecated).
        Class.forName("com.mysql.cj.jdbc.Driver");

        System.setProperty("javax.net.ssl.trustStore", KEY_STORE_FILE_PATH);
        System.setProperty("javax.net.ssl.trustStorePassword", KEY_STORE_PASS);

        Properties properties = new Properties();
        properties.setProperty("sslMode", "VERIFY_IDENTITY");
        properties.put("user", DB_USER);
        properties.put("password", DB_PASSWORD);

        try (Connection connection = DriverManager.getConnection(
                 "jdbc:mysql://jagdeeps-ssl-test.cni62e2e7kwh.us-east-1.rds.amazonaws.com:3306",
                 properties);
             Statement stmt = connection.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1 FROM dual")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }
    }
}
```

**Important**  
After you have determined that your database connections use TLS and have updated your application trust store, you can update your database to use the rds-ca-rsa2048-g1 certificates. For instructions, see step 3 in [Updating your CA certificate by modifying your DB instance](UsingWithRDS.SSL-certificate-rotation.md#UsingWithRDS.SSL-certificate-rotation-updating).

# Using Kerberos authentication for Aurora MySQL


You can use Kerberos authentication to authenticate users when they connect to your Aurora MySQL DB cluster. To do so, configure your DB cluster to use AWS Directory Service for Microsoft Active Directory for Kerberos authentication. AWS Directory Service for Microsoft Active Directory is also called AWS Managed Microsoft AD. It's a feature available with Directory Service. To learn more, see [What is Directory Service?](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/what_is.html) in the *AWS Directory Service Administration Guide*.

To start, create an AWS Managed Microsoft AD directory to store user credentials. Then, provide the Active Directory's domain and other information to your Aurora MySQL DB cluster. When users authenticate with the Aurora MySQL DB cluster, authentication requests are forwarded to the AWS Managed Microsoft AD directory.

Keeping all of your credentials in the same directory can save you time and effort. With this approach, you have a centralized location for storing and managing credentials for multiple DB clusters. Using a directory can also improve your overall security profile.

In addition, you can access credentials from your own on-premises Microsoft Active Directory. To do so, create a trusting domain relationship so that the AWS Managed Microsoft AD directory trusts your on-premises Microsoft Active Directory. In this way, your users can access your Aurora MySQL DB clusters with the same Windows single sign-on (SSO) experience as when they access workloads in your on-premises network.

A database can use Kerberos, AWS Identity and Access Management (IAM), or both Kerberos and IAM authentication. However, because Kerberos and IAM authentication provide different authentication methods, a specific user can log in to a database using only one or the other authentication method, but not both. For more information about IAM authentication, see [IAM database authentication ](UsingWithRDS.IAMDBAuth.md).

**Contents**
+ [

## Overview of Kerberos authentication for Aurora MySQL DB clusters
](#aurora-mysql-kerberos-setting-up-overview)
+ [

## Limitations of Kerberos authentication for Aurora MySQL
](#aurora-mysql-kerberos.limitations)
+ [

# Setting up Kerberos authentication for Aurora MySQL DB clusters
](aurora-mysql-kerberos-setting-up.md)
  + [

## Step 1: Create a directory using AWS Managed Microsoft AD
](aurora-mysql-kerberos-setting-up.md#aurora-mysql-kerberos-setting-up.create-directory)
  + [

## Step 2: (Optional) Create a trust for an on-premises Active Directory
](aurora-mysql-kerberos-setting-up.md#aurora-mysql-kerberos-setting-up.create-trust)
  + [

## Step 3: Create an IAM role for use by Amazon Aurora
](aurora-mysql-kerberos-setting-up.md#aurora-mysql-kerberos-setting-up.CreateIAMRole)
  + [

## Step 4: Create and configure users
](aurora-mysql-kerberos-setting-up.md#aurora-mysql-kerberos-setting-up.create-users)
  + [

## Step 5: Create or modify an Aurora MySQL DB cluster
](aurora-mysql-kerberos-setting-up.md#aurora-mysql-kerberos-setting-up.create-modify)
  + [

## Step 6: Create Aurora MySQL users that use Kerberos authentication
](aurora-mysql-kerberos-setting-up.md#aurora-mysql-kerberos-setting-up.create-logins)
    + [

### Modifying an existing Aurora MySQL login
](aurora-mysql-kerberos-setting-up.md#aurora-mysql-kerberos.modify-login)
  + [

## Step 7: Configure a MySQL client
](aurora-mysql-kerberos-setting-up.md#aurora-mysql-kerberos-setting-up.configure-client)
  + [

## Step 8: (Optional) Configure case-insensitive username comparison
](aurora-mysql-kerberos-setting-up.md#aurora-mysql-kerberos-setting-up.case-insensitive)
+ [

# Connecting to Aurora MySQL with Kerberos authentication
](aurora-mysql-kerberos-connecting.md)
  + [

## Using the Aurora MySQL Kerberos login to connect to the DB cluster
](aurora-mysql-kerberos-connecting.md#aurora-mysql-kerberos-connecting.login)
  + [

## Kerberos authentication with Aurora global databases
](aurora-mysql-kerberos-connecting.md#aurora-mysql-kerberos-connecting.global)
  + [

## Migrating from RDS for MySQL to Aurora MySQL
](aurora-mysql-kerberos-connecting.md#aurora-mysql-kerberos-connecting.rds)
  + [

## Preventing ticket caching
](aurora-mysql-kerberos-connecting.md#aurora-mysql-kerberos.destroy-tickets)
  + [

## Logging for Kerberos authentication
](aurora-mysql-kerberos-connecting.md#aurora-mysql-kerberos.logging)
+ [

# Managing a DB cluster in a domain
](aurora-mysql-kerberos-managing.md)
  + [

## Understanding domain membership
](aurora-mysql-kerberos-managing.md#aurora-mysql-kerberos-managing.understanding)

## Overview of Kerberos authentication for Aurora MySQL DB clusters
Overview of Kerberos authentication for Aurora MySQL

To set up Kerberos authentication for an Aurora MySQL DB cluster, complete the following general steps. These steps are described in more detail later.

1. Use AWS Managed Microsoft AD to create an AWS Managed Microsoft AD directory. You can use the AWS Management Console, the AWS CLI, or the AWS Directory Service API to create the directory. For detailed instructions, see [Create your AWS Managed Microsoft AD directory](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_getting_started_create_directory.html) in the *AWS Directory Service Administration Guide*.

1. Create an AWS Identity and Access Management (IAM) role that uses the managed IAM policy `AmazonRDSDirectoryServiceAccess`. The role allows Amazon Aurora to make calls to your directory.

   For the role to allow access, the AWS Security Token Service (AWS STS) endpoint must be activated in the AWS Region for your AWS account. AWS STS endpoints are active by default in all AWS Regions, and you can use them without any further action. For more information, see [ Activating and deactivating AWS STS in an AWS Region](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html#sts-regions-activate-deactivate) in the *IAM User Guide*.

1. Create and configure users in the AWS Managed Microsoft AD directory using the Microsoft Active Directory tools. For more information about creating users in your Active Directory, see [Manage users and groups in AWS managed Microsoft AD](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_manage_users_groups.html) in the *AWS Directory Service Administration Guide*.

1. Create or modify an Aurora MySQL DB cluster. If you use either the CLI or RDS API in the create request, specify a domain identifier with the `Domain` parameter. Use the `d-*` identifier that was generated when you created your directory and the name of the IAM role that you created.

   If you modify an existing Aurora MySQL DB cluster to use Kerberos authentication, set the domain and IAM role parameters for the DB cluster. Locate the DB cluster in the same VPC as the domain directory.

1. Use the Amazon RDS primary user credentials to connect to the Aurora MySQL DB cluster. Create the database user in Aurora MySQL by using the instructions in [Step 6: Create Aurora MySQL users that use Kerberos authentication](aurora-mysql-kerberos-setting-up.md#aurora-mysql-kerberos-setting-up.create-logins).

   Users that you create this way can log in to the Aurora MySQL DB cluster using Kerberos authentication. For more information, see [Connecting to Aurora MySQL with Kerberos authentication](aurora-mysql-kerberos-connecting.md).

To use Kerberos authentication with an on-premises or self-hosted Microsoft Active Directory, create a *forest trust*. A forest trust is a trust relationship between two groups of domains. The trust can be one-way or two-way. For more information about setting up forest trusts using Directory Service, see [When to create a trust relationship](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_setup_trust.html) in the *AWS Directory Service Administration Guide*.

## Limitations of Kerberos authentication for Aurora MySQL
Limitations

The following limitations apply to Kerberos authentication for Aurora MySQL:
+ Kerberos authentication is supported for Aurora MySQL version 3.03 and higher.

  For information about AWS Region support, see [Kerberos authentication with Aurora MySQL](Concepts.Aurora_Fea_Regions_DB-eng.Feature.KerberosAuthentication.md#Concepts.Aurora_Fea_Regions_DB-eng.Feature.KerberosAuthentication.amy).
+ To use Kerberos authentication with Aurora MySQL, your MySQL client or connector must use version 8.0.26 or higher on Unix platforms, or version 8.0.27 or higher on Windows. Otherwise, the client-side `authentication_kerberos_client` plugin isn't available and you can't authenticate.
+ Only AWS Managed Microsoft AD is supported on Aurora MySQL. However, you can join Aurora MySQL DB clusters to shared Managed Microsoft AD domains owned by different accounts in the same AWS Region.

  You can also use your own on-premises Active Directory. For more information, see [Step 2: (Optional) Create a trust for an on-premises Active Directory](aurora-mysql-kerberos-setting-up.md#aurora-mysql-kerberos-setting-up.create-trust).
+ When using Kerberos to authenticate a user connecting to the Aurora MySQL cluster from MySQL clients or from drivers on the Windows operating system, by default the character case of the database username must match the case of the user in the Active Directory. For example, if the user in the Active Directory appears as `Admin`, the database username must be `Admin`.

  However, you can now use case-insensitive username comparison with the `authentication_kerberos` plugin. For more information, see [Step 8: (Optional) Configure case-insensitive username comparison](aurora-mysql-kerberos-setting-up.md#aurora-mysql-kerberos-setting-up.case-insensitive).
+ You must reboot the reader DB instances after turning on the feature to install the `authentication_kerberos` plugin.
+ Replicating to DB instances that don't support the `authentication_kerberos` plugin can lead to replication failure.
+ For Aurora global databases to use Kerberos authentication, you must configure it for every DB cluster in the global database.
+ The domain name must be less than 62 characters long.
+ Don't modify the DB cluster port after turning on Kerberos authentication. If you modify the port, Kerberos authentication no longer works.

# Setting up Kerberos authentication for Aurora MySQL DB clusters
Setting up Kerberos authentication for Aurora MySQL

Use AWS Managed Microsoft AD to set up Kerberos authentication for an Aurora MySQL DB cluster. To set up Kerberos authentication, take the following steps.

**Topics**
+ [

## Step 1: Create a directory using AWS Managed Microsoft AD
](#aurora-mysql-kerberos-setting-up.create-directory)
+ [

## Step 2: (Optional) Create a trust for an on-premises Active Directory
](#aurora-mysql-kerberos-setting-up.create-trust)
+ [

## Step 3: Create an IAM role for use by Amazon Aurora
](#aurora-mysql-kerberos-setting-up.CreateIAMRole)
+ [

## Step 4: Create and configure users
](#aurora-mysql-kerberos-setting-up.create-users)
+ [

## Step 5: Create or modify an Aurora MySQL DB cluster
](#aurora-mysql-kerberos-setting-up.create-modify)
+ [

## Step 6: Create Aurora MySQL users that use Kerberos authentication
](#aurora-mysql-kerberos-setting-up.create-logins)
+ [

## Step 7: Configure a MySQL client
](#aurora-mysql-kerberos-setting-up.configure-client)
+ [

## Step 8: (Optional) Configure case-insensitive username comparison
](#aurora-mysql-kerberos-setting-up.case-insensitive)

## Step 1: Create a directory using AWS Managed Microsoft AD
Create a directory

Directory Service creates a fully managed Active Directory in the AWS Cloud. When you create an AWS Managed Microsoft AD directory, Directory Service creates two domain controllers and Domain Name System (DNS) servers on your behalf. The directory servers are created in different subnets in a VPC. This redundancy helps make sure that your directory remains accessible even if a failure occurs.

When you create an AWS Managed Microsoft AD directory, Directory Service performs the following tasks on your behalf:
+ Sets up an Active Directory within the VPC.
+ Creates a directory administrator account with the username `Admin` and the specified password. You use this account to manage your directory.
**Note**  
Be sure to save this password. Directory Service doesn't store it. You can reset it, but you can't retrieve it.
+ Creates a security group for the directory controllers.

When you launch an AWS Managed Microsoft AD directory, AWS creates an Organizational Unit (OU) that contains all of your directory's objects. This OU has the NetBIOS name that you entered when you created your directory. It is located in the domain root, which is owned and managed by AWS.

The `Admin` account that was created with your AWS Managed Microsoft AD directory has permissions for the most common administrative activities for your OU, including:
+ Create, update, or delete users
+ Add resources to your domain, such as file or print servers, and then assign permissions for those resources to users in your OU
+ Create additional OUs and containers
+ Delegate authority
+ Restore deleted objects from the Active Directory Recycle Bin
+ Run AD and DNS Windows PowerShell modules on the Active Directory Web Service 

The `Admin` account also has rights to perform the following domain-wide activities:
+ Manage DNS configurations (add, remove, or update records, zones, and forwarders)
+ View DNS event logs
+ View security event logs

**To create a directory with AWS Managed Microsoft AD**

1. Sign in to the AWS Management Console and open the Directory Service console at [https://console.aws.amazon.com/directoryservicev2/](https://console.aws.amazon.com/directoryservicev2/).

1. In the navigation pane, choose **Directories** and choose **Set up Directory**.

1. Choose **AWS Managed Microsoft AD**. AWS Managed Microsoft AD is the only option that you can currently use with Amazon RDS.

1. Enter the following information:  
**Directory DNS name**  
The fully qualified name for the directory, such as **corp.example.com**.  
**Directory NetBIOS name**  
The short name for the directory, such as **CORP**.  
**Directory description**  
(Optional) A description for the directory.  
**Admin password**  
The password for the directory administrator. The directory creation process creates an administrator account with the username Admin and this password.  
The directory administrator password can't include the word "admin." The password is case-sensitive and must be 8–64 characters in length. It must also contain at least one character from three of the following four categories:  
   + Lowercase letters (a–z)
   + Uppercase letters (A–Z)
   + Numbers (0–9)
   + Non-alphanumeric characters (~!@#$%^&*_-+=`|\(){}[]:;"'<>,.?/)  
**Confirm password**  
The administrator password re-entered.

1. Choose **Next**.

1.  Enter the following information in the **Networking** section and then choose **Next**:  
**VPC**  
The VPC for the directory. Create the Aurora MySQL DB cluster in this same VPC.  
**Subnets**  
Subnets for the directory servers. The two subnets must be in different Availability Zones.

1. Review the directory information and make any necessary changes. When the information is correct, choose **Create directory**.  
![\[Directory details page during creation\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/WinAuth2.png)

It takes several minutes to create the directory. When it has been successfully created, the **Status** value changes to **Active**.

To see information about your directory, choose the directory name in the directory listing. Note the **Directory ID** value because you need this value when you create or modify your Aurora MySQL DB cluster.
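
You can also retrieve the directory ID from the AWS CLI. The `--query` filter here is optional and just trims the output to the most useful fields.

```
aws ds describe-directories \
    --query "DirectoryDescriptions[].[DirectoryId,Name,Stage]" \
    --output table
```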

![\[Directory ID in the Directory details page\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/WinAuth3.png)


## Step 2: (Optional) Create a trust for an on-premises Active Directory
(Optional) Create a trust

If you don't plan to use your own on-premises Microsoft Active Directory, skip to [Step 3: Create an IAM role for use by Amazon Aurora](#aurora-mysql-kerberos-setting-up.CreateIAMRole).

To use Kerberos authentication with your on-premises Active Directory, you need to create a trusting domain relationship using a forest trust between your on-premises Microsoft Active Directory and the AWS Managed Microsoft AD directory (created in [Step 1: Create a directory using AWS Managed Microsoft AD](#aurora-mysql-kerberos-setting-up.create-directory)). The trust can be one-way, where the AWS Managed Microsoft AD directory trusts the on-premises Microsoft Active Directory. The trust can also be two-way, where both Active Directories trust each other. For more information about setting up trusts using Directory Service, see [When to create a trust relationship](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_setup_trust.html) in the *AWS Directory Service Administration Guide*.

**Note**  
If you use an on-premises Microsoft Active Directory:  
Windows clients can't connect using Aurora custom endpoints. To learn more, see [Amazon Aurora endpoint connections](Aurora.Overview.Endpoints.md).
For [global databases](aurora-global-database.md):  
Windows clients can connect using instance endpoints or cluster endpoints in the primary AWS Region of the global database only.
Windows clients can't connect using cluster endpoints in secondary AWS Regions.

Make sure that your on-premises Microsoft Active Directory domain name includes a DNS suffix routing that corresponds to the newly created trust relationship. The following screenshot shows an example.

![\[DNS routing corresponds to the created trust\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/kerberos-auth-trust.png)


## Step 3: Create an IAM role for use by Amazon Aurora
Create an IAM role

For Amazon Aurora to call Directory Service for you, you need an AWS Identity and Access Management (IAM) role that uses the managed IAM policy `AmazonRDSDirectoryServiceAccess`. This role allows Aurora to make calls to the Directory Service.

When you create a DB cluster using the AWS Management Console, and you have the `iam:CreateRole` permission, the console creates this role automatically. In this case, the role name is `rds-directoryservice-kerberos-access-role`. Otherwise, you must create the IAM role manually. When you create this IAM role, choose `Directory Service`, and attach the AWS managed policy `AmazonRDSDirectoryServiceAccess` to it.

For more information about creating IAM roles for a service, see [Creating a role to delegate permissions to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html) in the *IAM User Guide*.

Optionally, you can create policies with the required permissions instead of using the managed IAM policy `AmazonRDSDirectoryServiceAccess`. In this case, the IAM role must have the following IAM trust policy.

------
#### [ JSON ]

****  

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "directoryservice.rds.amazonaws.com",
          "rds.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

------

The role must also have the following IAM role policy.

------
#### [ JSON ]

****  

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ds:DescribeDirectories",
        "ds:AuthorizeApplication",
        "ds:UnauthorizeApplication",
        "ds:GetAuthorizedApplicationDetails"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
```

------
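
If you create the role from the AWS CLI instead of the console, the following commands sketch the approach. The role name is an example, and `trust-policy.json` is assumed to be a local file containing the trust policy shown above.

```
aws iam create-role \
    --role-name rds-directoryservice-access-role \
    --assume-role-policy-document file://trust-policy.json

aws iam attach-role-policy \
    --role-name rds-directoryservice-access-role \
    --policy-arn arn:aws:iam::aws:policy/service-role/AmazonRDSDirectoryServiceAccess
```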

## Step 4: Create and configure users
Create and configure users

You can create users with the Active Directory Users and Computers tool. This tool is part of the Active Directory Domain Services and Active Directory Lightweight Directory Services tools. Users represent individual people or entities that have access to your directory.

To create users in a Directory Service directory, you use an on-premises or Amazon EC2 instance based on Microsoft Windows that is joined to your Directory Service directory. You must be logged in to the instance as a user who has privileges to create users. For more information, see [Manage users and groups in AWS Managed Microsoft AD](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/creating_ad_users_and_groups.html) in the *AWS Directory Service Administration Guide*.

## Step 5: Create or modify an Aurora MySQL DB cluster
Create or modify a DB cluster

Create or modify an Aurora MySQL DB cluster for use with your directory. You can use the console, AWS CLI, or RDS API to associate a DB cluster with a directory. You can do this task in one of the following ways:
+ Create a new Aurora MySQL DB cluster using the console, the [ create-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-cluster.html) CLI command, or the [CreateDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBCluster.html) RDS API operation.

  For instructions, see [Creating an Amazon Aurora DB cluster](Aurora.CreateInstance.md).
+ Modify an existing Aurora MySQL DB cluster using the console, the [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) CLI command, or the [ModifyDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) RDS API operation.

  For instructions, see [Modifying an Amazon Aurora DB cluster](Aurora.Modifying.md).
+ Restore an Aurora MySQL DB cluster from a DB snapshot using the console, the [restore-db-cluster-from-snapshot](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-from-snapshot.html) CLI command, or the [RestoreDBClusterFromSnapshot](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterFromSnapshot.html) RDS API operation.

  For instructions, see [Restoring from a DB cluster snapshot](aurora-restore-snapshot.md).
+ Restore an Aurora MySQL DB cluster to a point-in-time using the console, the [ restore-db-cluster-to-point-in-time](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-to-point-in-time.html) CLI command, or the [RestoreDBClusterToPointInTime](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterToPointInTime.html) RDS API operation.

  For instructions, see [Restoring a DB cluster to a specified time](aurora-pitr.md).

Kerberos authentication is only supported for Aurora MySQL DB clusters in a VPC. The DB cluster can be in the same VPC as the directory, or in a different VPC. The DB cluster's VPC must have a VPC security group that allows outbound communication to your directory. 

### Console


When you use the console to create, modify, or restore a DB cluster, choose **Kerberos authentication** in the **Database authentication** section. Choose **Browse Directory** and then select the directory, or choose **Create a new directory**.

![\[Kerberos authentication setting when creating a DB cluster\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/kerberos-auth-create-cluster.png)


### AWS CLI


When you use the AWS CLI or RDS API, associate a DB cluster with a directory. The following parameters are required for the DB cluster to use the domain directory you created:
+ For the `--domain` parameter, use the domain identifier (the "d-*" identifier) generated when you created the directory.
+ For the `--domain-iam-role-name` parameter, use the role you created that uses the managed IAM policy `AmazonRDSDirectoryServiceAccess`.

For example, the following CLI command modifies a DB cluster to use a directory.

For Linux, macOS, or Unix:

```
aws rds modify-db-cluster \
    --db-cluster-identifier mydbcluster \
    --domain d-ID \
    --domain-iam-role-name role-name
```

For Windows:

```
aws rds modify-db-cluster ^
    --db-cluster-identifier mydbcluster ^
    --domain d-ID ^
    --domain-iam-role-name role-name
```
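
You can pass the same two parameters when creating a new cluster. The following sketch is illustrative only; the cluster identifier, username, domain ID, and role name are placeholders, and the remaining `create-db-cluster` options (instance class, networking, and so on) are omitted.

```
aws rds create-db-cluster \
    --db-cluster-identifier mydbcluster \
    --engine aurora-mysql \
    --master-username admin \
    --manage-master-user-password \
    --domain d-ID \
    --domain-iam-role-name role-name
```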

**Important**  
If you modify a DB cluster to turn on Kerberos authentication, reboot the reader DB instances after making the change.

## Step 6: Create Aurora MySQL users that use Kerberos authentication
Create logins

The DB cluster is joined to the AWS Managed Microsoft AD domain. Thus, you can create Aurora MySQL users from the Active Directory users in your domain. Database permissions are managed through standard Aurora MySQL permissions that are granted to and revoked from these users.

You can allow an Active Directory user to authenticate with Aurora MySQL. To do this, first use the Amazon RDS primary user credentials to connect to the Aurora MySQL DB cluster as with any other DB cluster. After you're logged in, create an externally authenticated user with Kerberos authentication in Aurora MySQL as shown here:

```
CREATE USER user_name@'host_name' IDENTIFIED WITH 'authentication_kerberos' BY 'realm_name';
```
+ Replace `user_name` with the username. Users (both humans and applications) from your domain can now connect to the DB cluster from a domain-joined client machine using Kerberos authentication.
+ Replace `host_name` with the hostname. You can use `%` as a wild card. You can also use specific IP addresses for the hostname.
+ Replace `realm_name` with the directory realm name of the domain. The realm name is usually the same as the DNS domain name in uppercase letters, such as `CORP.EXAMPLE.COM`. A realm is a group of systems that use the same Kerberos Key Distribution Center.

The following example creates a database user with the name `Admin` that authenticates against the Active Directory with the realm name `MYSQL.LOCAL`.

```
CREATE USER Admin@'%' IDENTIFIED WITH 'authentication_kerberos' BY 'MYSQL.LOCAL';
```

### Modifying an existing Aurora MySQL login


You can also modify an existing Aurora MySQL login to use Kerberos authentication by using the following syntax:

```
ALTER USER user_name IDENTIFIED WITH 'authentication_kerberos' BY 'realm_name';
```

## Step 7: Configure a MySQL client
Configure a MySQL client

To configure a MySQL client, take the following steps:

1. Create a `krb5.conf` file (or equivalent) to point to the domain.

1. Verify that traffic can flow between the client host and Directory Service. Use a network utility such as Netcat to do the following:
   + Verify traffic over DNS for port 53.
   + Verify traffic over TCP/UDP for port 53 and for Kerberos, which includes ports 88 and 464 for Directory Service.

1. Verify that traffic can flow between the client host and the DB instance over the database port. For example, use `mysql` to connect and access the database.
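The connectivity checks in the preceding steps can be run with `nc` (Netcat). The domain name and DB endpoint below are placeholders; substitute your Directory Service DNS name and your cluster's instance endpoint.

```shell
DOMAIN=example.com  # placeholder: your Directory Service DNS name
DB_ENDPOINT=mydbinstance.abc123.us-east-1.rds.amazonaws.com  # placeholder

# DNS over TCP and UDP on port 53
nc -vz "$DOMAIN" 53
nc -vzu "$DOMAIN" 53

# Kerberos authentication (88) and password changes (464)
nc -vz "$DOMAIN" 88
nc -vz "$DOMAIN" 464

# Database port on the DB instance
nc -vz "$DB_ENDPOINT" 3306
```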

The following is sample `krb5.conf` content for AWS Managed Microsoft AD.

```
[libdefaults]
 default_realm = EXAMPLE.COM
[realms]
 EXAMPLE.COM = {
  kdc = example.com
  admin_server = example.com
 }
[domain_realm]
 .example.com = EXAMPLE.COM
 example.com = EXAMPLE.COM
```

The following is sample `krb5.conf` content for an on-premises Microsoft Active Directory.

```
[libdefaults]
 default_realm = EXAMPLE.COM
[realms]
 EXAMPLE.COM = {
  kdc = example.com
  admin_server = example.com
 }
 ONPREM.COM = {
  kdc = onprem.com
  admin_server = onprem.com
 }
[domain_realm]
 .example.com = EXAMPLE.COM
 example.com = EXAMPLE.COM
 .onprem.com = ONPREM.COM
 onprem.com = ONPREM.COM  
 .rds.amazonaws.com = EXAMPLE.COM
 .amazonaws.com.rproxy.govskope.us.cn = EXAMPLE.COM
 .amazon.com = EXAMPLE.COM
```

## Step 8: (Optional) Configure case-insensitive username comparison
(Optional) Configure case-insensitive username comparison

By default, the character case of the MySQL database username must match that of the Active Directory login. However, you can now use case-insensitive username comparison with the `authentication_kerberos` plugin. To do so, you set the `authentication_kerberos_caseins_cmp` DB cluster parameter to `true`.

**To use case-insensitive username comparison**

1. Create a custom DB cluster parameter group. Follow the procedures in [Creating a DB cluster parameter group in Amazon Aurora](USER_WorkingWithParamGroups.CreatingCluster.md).

1. Edit the new parameter group to set the value of `authentication_kerberos_caseins_cmp` to `true`. Follow the procedures in [Modifying parameters in a DB cluster parameter group in Amazon Aurora](USER_WorkingWithParamGroups.ModifyingCluster.md).

1. Associate the DB cluster parameter group with your Aurora MySQL DB cluster. Follow the procedures in [Associating a DB cluster parameter group with a DB cluster in Amazon Aurora](USER_WorkingWithParamGroups.AssociatingCluster.md).

1. Reboot the DB cluster.
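The same procedure can be performed with the AWS CLI. The names `my-aurora-params`, `mydbcluster`, and `mydbcluster-instance-1` are placeholders, and the parameter group family shown assumes an Aurora MySQL version 3 cluster; use the family that matches your engine version.

```shell
# 1. Create a custom DB cluster parameter group (family is a placeholder).
aws rds create-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-aurora-params \
    --db-parameter-group-family aurora-mysql8.0 \
    --description "Kerberos case-insensitive username comparison"

# 2. Turn on case-insensitive username comparison.
aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-aurora-params \
    --parameters "ParameterName=authentication_kerberos_caseins_cmp,ParameterValue=true,ApplyMethod=pending-reboot"

# 3. Associate the parameter group with your DB cluster.
aws rds modify-db-cluster \
    --db-cluster-identifier mydbcluster \
    --db-cluster-parameter-group-name my-aurora-params

# 4. Reboot each DB instance in the cluster so the change takes effect.
aws rds reboot-db-instance --db-instance-identifier mydbcluster-instance-1
```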

# Connecting to Aurora MySQL with Kerberos authentication


To avoid errors, use a MySQL client version 8.0.26 or higher on Unix platforms, or 8.0.27 or higher on Windows.

## Using the Aurora MySQL Kerberos login to connect to the DB cluster


To connect to Aurora MySQL with Kerberos authentication, you log in as a database user that you created using the instructions in [Step 6: Create Aurora MySQL users that use Kerberos authentication](aurora-mysql-kerberos-setting-up.md#aurora-mysql-kerberos-setting-up.create-logins).

At a command prompt, connect to one of the endpoints associated with your Aurora MySQL DB cluster. When you're prompted for the password, enter the Kerberos password associated with that username.

When you authenticate with Kerberos, a *ticket-granting ticket* (TGT) is generated if one doesn't already exist. The `authentication_kerberos` plugin uses the TGT to get a *service ticket*, which is then presented to the Aurora MySQL database server.

You can use the MySQL client to connect to Aurora MySQL with Kerberos authentication using either Windows or Unix.

### Unix


You can connect by using either of the following methods:
+ Obtain the TGT manually. In this case, you don't need to supply the password to the MySQL client.
+ Supply the password for the Active Directory login directly to the MySQL client.

The client-side plugin is supported on Unix platforms for MySQL client versions 8.0.26 and higher.

**To connect by obtaining the TGT manually**

1. At the command line interface, use the following command to obtain the TGT.

   ```
   kinit user_name
   ```

1. Use the following `mysql` command to log in to the DB instance endpoint of your DB cluster.

   ```
   mysql -h DB_instance_endpoint -P 3306 -u user_name -p
   ```
**Note**  
Authentication can fail if the keytab is rotated on the DB instance. In this case, obtain a new TGT by rerunning `kinit`.

**To connect directly**

1. At the command line interface, use the following `mysql` command to log in to the DB instance endpoint of your DB cluster.

   ```
   mysql -h DB_instance_endpoint -P 3306 -u user_name -p
   ```

1. Enter the password for the Active Directory user.

### Windows


On Windows, authentication is usually done at login time, so you don't need to obtain the TGT manually to connect to the Aurora MySQL DB cluster. The case of the database username must match the character case of the user in the Active Directory. For example, if the user in the Active Directory appears as `Admin`, the database username must be `Admin`.

The client-side plugin is supported on Windows for MySQL client versions 8.0.27 and higher.

**To connect directly**
+ At the command line interface, use the following `mysql` command to log in to the DB instance endpoint of your DB cluster.

  ```
  mysql -h DB_instance_endpoint -P 3306 -u user_name
  ```

## Kerberos authentication with Aurora global databases


Kerberos authentication for Aurora MySQL is supported for Aurora global databases. To authenticate users on the secondary DB cluster using the Active Directory of the primary DB cluster, replicate the Active Directory to the secondary AWS Region. You turn on Kerberos authentication on the secondary cluster using the same domain ID as for the primary cluster. AWS Managed Microsoft AD replication is supported only with the Enterprise Edition of AWS Managed Microsoft AD. For more information, see [Multi-Region replication](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_configure_multi_region_replication.html) in the *AWS Directory Service Administration Guide*.
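For example, assuming the directory has already been replicated to the secondary AWS Region, you might turn on Kerberos authentication on the secondary cluster with the same domain ID. All identifiers and the Region below are placeholders.

```shell
# Turn on Kerberos authentication for the secondary cluster, reusing
# the domain ID that the primary cluster is associated with.
aws rds modify-db-cluster \
    --region us-west-2 \
    --db-cluster-identifier my-secondary-cluster \
    --domain d-1234567890 \
    --domain-iam-role-name my-directory-access-role
```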

## Migrating from RDS for MySQL to Aurora MySQL


After you migrate from RDS for MySQL with Kerberos authentication enabled to Aurora MySQL, modify users created with the `auth_pam` plugin to use the `authentication_kerberos` plugin. For example:

```
ALTER USER user_name IDENTIFIED WITH 'authentication_kerberos' BY 'realm_name';
```

## Preventing ticket caching


If a valid TGT doesn't exist when the MySQL client application starts, the application can obtain and cache the TGT. If you want to prevent the TGT from being cached, set a configuration parameter in the `/etc/krb5.conf` file.

**Note**  
This configuration only applies to client hosts running Unix, not Windows.

**To prevent TGT caching**
+ Add an `[appdefaults]` section to `/etc/krb5.conf` as follows:

  ```
  [appdefaults]
    mysql = {
      destroy_tickets = true
    }
  ```

## Logging for Kerberos authentication


The `AUTHENTICATION_KERBEROS_CLIENT_LOG` environment variable sets the logging level for Kerberos authentication. You can use the logs for client-side debugging.

The permitted values are 1–5. Log messages are written to the standard error output. The following table describes each logging level.


| Logging level | Description | 
| --- | --- | 
| 1 or not set | No logging | 
| 2 | Error messages | 
| 3 | Error and warning messages | 
| 4 | Error, warning, and information messages | 
| 5 | Error, warning, information, and debug messages | 
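For example, to capture debug-level output on a Unix client and save it for inspection, you can export the variable before connecting and redirect standard error to a file. The endpoint below is a placeholder.

```shell
# Level 5 logs error, warning, information, and debug messages.
export AUTHENTICATION_KERBEROS_CLIENT_LOG=5

# Log messages go to standard error, so redirect stderr to a file.
mysql -h mydbinstance.abc123.us-east-1.rds.amazonaws.com \
    -P 3306 -u user_name -p 2> kerberos-client.log
```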

# Managing a DB cluster in a domain


You can use the AWS CLI or the RDS API to manage your DB cluster and its relationship with your managed Active Directory. For example, you can associate an Active Directory for Kerberos authentication and disassociate an Active Directory to turn off Kerberos authentication. You can also move a DB cluster from being externally authenticated by one Active Directory to another.

For example, using the Amazon RDS API, you can do the following:
+ To reattempt turning on Kerberos authentication for a failed membership, use the `ModifyDBCluster` API operation and specify the current membership's directory ID.
+ To update the IAM role name for membership, use the `ModifyDBCluster` API operation and specify the current membership's directory ID and the new IAM role.
+ To turn off Kerberos authentication on a DB cluster, use the `ModifyDBCluster` API operation and specify `none` as the domain parameter.
+ To move a DB cluster from one domain to another, use the `ModifyDBCluster` API operation and specify the domain identifier of the new domain as the domain parameter.
+ To list membership for each DB cluster, use the `DescribeDBClusters` API operation.

## Understanding domain membership


After you create or modify your DB cluster, it becomes a member of the domain. You can view the status of the domain membership for the DB cluster by running the [describe-db-clusters](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-clusters.html) CLI command. The status of the DB cluster can be one of the following:
+ `kerberos-enabled` – The DB cluster has Kerberos authentication turned on.
+  `enabling-kerberos` – AWS is in the process of turning on Kerberos authentication on this DB cluster.
+ `pending-enable-kerberos` – Turning on Kerberos authentication is pending on this DB cluster.
+ `pending-maintenance-enable-kerberos` – AWS will attempt to turn on Kerberos authentication on the DB cluster during the next scheduled maintenance window.
+ `pending-disable-kerberos` – Turning off Kerberos authentication is pending on this DB cluster.
+ `pending-maintenance-disable-kerberos` – AWS will attempt to turn off Kerberos authentication on the DB cluster during the next scheduled maintenance window.
+ `enable-kerberos-failed` – A configuration problem has prevented AWS from turning on Kerberos authentication on the DB cluster. Check and fix your configuration before reissuing the DB cluster modify command.
+ `disabling-kerberos` – AWS is in the process of turning off Kerberos authentication on this DB cluster.

A request to turn on Kerberos authentication can fail because of a network connectivity issue or an incorrect IAM role. For example, suppose that you create a DB cluster or modify an existing DB cluster and the attempt to turn on Kerberos authentication fails. In this case, reissue the modify command or modify the newly created DB cluster to join the domain.
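To check the membership status from the AWS CLI, you can filter the `describe-db-clusters` output down to the `DomainMemberships` field, which includes the domain ID, membership status, FQDN, and IAM role name. The cluster name is a placeholder.

```shell
aws rds describe-db-clusters \
    --db-cluster-identifier mydbcluster \
    --query 'DBClusters[0].DomainMemberships'
```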

# Migrating data to an Amazon Aurora MySQL DB cluster
Migrating data to Aurora MySQL

You have several options for migrating data from your existing database to an Amazon Aurora MySQL DB cluster. Your migration options also depend on the database that you are migrating from and the size of the data that you are migrating.

There are two different types of migration: physical and logical. Physical migration means that physical copies of database files are used to migrate the database. Logical migration means that the migration is accomplished by applying logical database changes, such as inserts, updates, and deletes.

Physical migration has the following advantages:
+ Physical migration is faster than logical migration, especially for large databases.
+ Database performance does not suffer when a backup is taken for physical migration.
+ Physical migration can migrate everything in the source database, including complex database components.

Physical migration has the following limitations:
+ The `innodb_page_size` parameter must be set to its default value (`16KB`).
+ The `innodb_data_file_path` parameter must be configured with only one data file that uses the default data file name `"ibdata1:12M:autoextend"`. Databases with two data files, or with a data file with a different name, can't be migrated using this method.

  The following are examples of settings that aren't allowed: `"innodb_data_file_path=ibdata1:50M; ibdata2:50M:autoextend"` and `"innodb_data_file_path=ibdata01:50M:autoextend"`.
+ The `innodb_log_files_in_group` parameter must be set to its default value (`2`).

Logical migration has the following advantages:
+ You can migrate subsets of the database, such as specific tables or parts of a table.
+ The data can be migrated regardless of the physical storage structure.

Logical migration has the following limitations:
+ Logical migration is usually slower than physical migration.
+ Complex database components can slow down the logical migration process. In some cases, complex database components can even block logical migration.

The following table describes your options and the type of migration for each option.


| Migrating from | Migration type | Solution | 
| --- | --- | --- | 
| An RDS for MySQL DB instance | Physical |  You can migrate from an RDS for MySQL DB instance by first creating an Aurora MySQL read replica of a MySQL DB instance. When the replica lag between the MySQL DB instance and the Aurora MySQL read replica is 0, you can direct your client applications to read from the Aurora read replica and then stop replication to make the Aurora MySQL read replica a standalone Aurora MySQL DB cluster for reading and writing. For details, see [Migrating data from an RDS for MySQL DB instance to an Amazon Aurora MySQL DB cluster by using an Aurora read replica](AuroraMySQL.Migrating.RDSMySQL.Replica.md).  | 
| An RDS for MySQL DB snapshot | Physical |  You can migrate data directly from an RDS for MySQL DB snapshot to an Amazon Aurora MySQL DB cluster. For details, see [Migrating an RDS for MySQL snapshot to Aurora](AuroraMySQL.Migrating.RDSMySQL.Snapshot.md).  | 
| A MySQL database external to Amazon RDS | Logical |  You can create a dump of your data using the `mysqldump` utility, and then import that data into an existing Amazon Aurora MySQL DB cluster. For details, see [Logical migration from MySQL to Amazon Aurora MySQL by using mysqldump](AuroraMySQL.Migrating.ExtMySQL.mysqldump.md). To export metadata for database users during the migration from an external MySQL database, you can also use a MySQL Shell command instead of `mysqldump`. For more information, see [Instance Dump Utility, Schema Dump Utility, and Table Dump Utility](https://dev.mysql.com/doc/mysql-shell/8.0/en/mysql-shell-utilities-dump-instance-schema.html#mysql-shell-utilities-dump-about).  The [mysqlpump](https://dev.mysql.com/doc/refman/8.0/en/mysqlpump.html) utility is deprecated as of MySQL 8.0.34.   | 
| A MySQL database external to Amazon RDS | Physical |  You can copy the backup files from your database to an Amazon Simple Storage Service (Amazon S3) bucket, and then restore an Amazon Aurora MySQL DB cluster from those files. This option can be considerably faster than migrating data using `mysqldump`. For details, see [Physical migration from MySQL by using Percona XtraBackup and Amazon S3](AuroraMySQL.Migrating.ExtMySQL.S3.md).  | 
| A MySQL database external to Amazon RDS | Logical |  You can save data from your database as text files and copy those files to an Amazon S3 bucket. You can then load that data into an existing Aurora MySQL DB cluster using the `LOAD DATA FROM S3` MySQL command. For more information, see [Loading data into an Amazon Aurora MySQL DB cluster from text files in an Amazon S3 bucket](AuroraMySQL.Integrating.LoadFromS3.md).  | 
| A database that isn't MySQL-compatible | Logical |  You can use AWS Database Migration Service (AWS DMS) to migrate data from a database that isn't MySQL-compatible. For more information on AWS DMS, see [What is AWS Database Migration Service?](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html) | 

**Note**  
If you're migrating a MySQL database external to Amazon RDS, the migration options described in the table are supported only if your database supports the InnoDB or MyISAM tablespaces.  
If the MySQL database you're migrating to Aurora MySQL uses `memcached`, remove `memcached` before migrating it.  
You can't migrate to Aurora MySQL version 3.05 and higher from some older MySQL 8.0 versions, including 8.0.11, 8.0.13, and 8.0.15. We recommend that you upgrade to MySQL version 8.0.28 before migrating.

# Migrating data from an external MySQL database to an Amazon Aurora MySQL DB cluster
Migrating from an external MySQL database to Aurora MySQL

If your database supports the InnoDB or MyISAM tablespaces, you have these options for migrating your data to an Amazon Aurora MySQL DB cluster: 
+ You can create a dump of your data using the `mysqldump` utility, and then import that data into an existing Amazon Aurora MySQL DB cluster. For more information, see [Logical migration from MySQL to Amazon Aurora MySQL by using mysqldump](AuroraMySQL.Migrating.ExtMySQL.mysqldump.md).
+ You can copy the full and incremental backup files from your database to an Amazon S3 bucket, and then restore to an Amazon Aurora MySQL DB cluster from those files. This option can be considerably faster than migrating data using `mysqldump`. For more information, see [Physical migration from MySQL by using Percona XtraBackup and Amazon S3](AuroraMySQL.Migrating.ExtMySQL.S3.md).
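As a rough sketch of the `mysqldump` path (host names, credentials, and the database name `mydb` are placeholders), the dump is created from the source database and then loaded into the Aurora MySQL cluster endpoint:

```shell
# Dump the source database. --single-transaction gives a consistent
# snapshot for InnoDB tables without locking them.
mysqldump --single-transaction --routines --triggers --events \
    -h source-host -u source_user -p mydb > mydb-dump.sql

# Load the dump into the Aurora MySQL DB cluster endpoint.
mysql -h mydbcluster.cluster-abc123.us-east-1.rds.amazonaws.com \
    -u admin -p mydb < mydb-dump.sql
```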

**Topics**
+ [

# Physical migration from MySQL by using Percona XtraBackup and Amazon S3
](AuroraMySQL.Migrating.ExtMySQL.S3.md)
+ [

# Logical migration from MySQL to Amazon Aurora MySQL by using mysqldump
](AuroraMySQL.Migrating.ExtMySQL.mysqldump.md)

# Physical migration from MySQL by using Percona XtraBackup and Amazon S3
Physical migration using Percona XtraBackup and Amazon S3

You can copy the full and incremental backup files from your source MySQL version 5.7 or 8.0 database to an Amazon S3 bucket. Then you can restore to an Amazon Aurora MySQL DB cluster with the same major DB engine version from those files.

This option can be considerably faster than migrating data using `mysqldump`, because using `mysqldump` replays all of the commands to recreate the schema and data from your source database in your new Aurora MySQL DB cluster. By copying your source MySQL data files, Aurora MySQL can immediately use those files as the data for an Aurora MySQL DB cluster.

You can also minimize downtime by using binary log replication during the migration process. If you use binary log replication, the external MySQL database remains open to transactions while the data is being migrated to the Aurora MySQL DB cluster. After the Aurora MySQL DB cluster has been created, you use binary log replication to synchronize the Aurora MySQL DB cluster with the transactions that happened after the backup. When the Aurora MySQL DB cluster is caught up with the MySQL database, you finish the migration by completely switching to the Aurora MySQL DB cluster for new transactions. For more information, see [Synchronizing the Amazon Aurora MySQL DB cluster with the MySQL database using replication](#AuroraMySQL.Migrating.ExtMySQL.S3.RepSync).

**Contents**
+ [

## Limitations and considerations
](#AuroraMySQL.Migrating.ExtMySQL.S3.Limits)
+ [

## Before you begin
](#AuroraMySQL.Migrating.ExtMySQL.S3.Prereqs)
  + [

### Installing Percona XtraBackup
](#AuroraMySQL.Migrating.ExtMySQL.S3.Prereqs.XtraBackup)
  + [

### Required permissions
](#AuroraMySQL.Migrating.ExtMySQL.S3.Prereqs.Permitting)
  + [

### Creating the IAM service role
](#AuroraMySQL.Migrating.ExtMySQL.S3.Prereqs.CreateRole)
+ [

## Backing up files to be restored as an Amazon Aurora MySQL DB cluster
](#AuroraMySQL.Migrating.ExtMySQL.S3.Backup)
  + [

### Creating a full backup with Percona XtraBackup
](#AuroraMySQL.Migrating.ExtMySQL.S3.Backup.Full)
  + [

### Using incremental backups with Percona XtraBackup
](#AuroraMySQL.Migrating.ExtMySQL.S3.Backup.Incr)
  + [

### Backup considerations
](#AuroraMySQL.Migrating.ExtMySQL.S3.Backup.Considerations)
+ [

## Restoring an Amazon Aurora MySQL DB cluster from an Amazon S3 bucket
](#AuroraMySQL.Migrating.ExtMySQL.S3.Restore)
+ [

## Synchronizing the Amazon Aurora MySQL DB cluster with the MySQL database using replication
](#AuroraMySQL.Migrating.ExtMySQL.S3.RepSync)
  + [

### Configuring your external MySQL database and your Aurora MySQL DB cluster for encrypted replication
](#AuroraMySQL.Migrating.ExtMySQL.S3.RepSync.ConfigureEncryption)
  + [

### Synchronizing the Amazon Aurora MySQL DB cluster with the external MySQL database
](#AuroraMySQL.Migrating.ExtMySQL.S3.RepSync.Synchronizing)
+ [

# Reducing the time for physical migration to Amazon Aurora MySQL
](AuroraMySQL.Migrating.ExtMySQL.Prechecks.md)
  + [

## Unsupported table types
](AuroraMySQL.Migrating.ExtMySQL.Prechecks.md#AuroraMySQL.Migrating.ExtMySQL.Prechecks.Tables)
  + [

## User accounts with unsupported privileges
](AuroraMySQL.Migrating.ExtMySQL.Prechecks.md#AuroraMySQL.Migrating.ExtMySQL.Prechecks.Users)
  + [

## Dynamic privileges in Aurora MySQL version 3
](AuroraMySQL.Migrating.ExtMySQL.Prechecks.md#AuroraMySQL.Migrating.ExtMySQL.Prechecks.Dynamic)
  + [

## Stored objects with 'rdsadmin'@'localhost' as the definer
](AuroraMySQL.Migrating.ExtMySQL.Prechecks.md#AuroraMySQL.Migrating.ExtMySQL.Prechecks.Objects)

## Limitations and considerations


The following limitations and considerations apply to restoring to an Amazon Aurora MySQL DB cluster from an Amazon S3 bucket:
+ You can migrate your data only to a new DB cluster, not an existing DB cluster.
+ You must use Percona XtraBackup to back up your data to S3. For more information, see [Installing Percona XtraBackup](#AuroraMySQL.Migrating.ExtMySQL.S3.Prereqs.XtraBackup).
+ The Amazon S3 bucket and the Aurora MySQL DB cluster must be in the same AWS Region.
+ You can't restore from the following:
  + A DB cluster snapshot export to Amazon S3. You also can't migrate data from a DB cluster snapshot export to your S3 bucket.
  + An encrypted source database. However, you can encrypt the data being migrated, or you can leave it unencrypted during the migration process.
  + A MySQL 5.5 or 5.6 database
+ Percona Server for MySQL isn't supported as a source database, because it can contain `compression_dictionary*` tables in the `mysql` schema.
+ You can't restore to an Aurora Serverless DB cluster.
+ Backward migration isn't supported for either major versions or minor versions. For example, you can't migrate from MySQL version 8.0 to Aurora MySQL version 2 (compatible with MySQL 5.7), and you can't migrate from MySQL version 8.0.32 to Aurora MySQL version 3.03, which is compatible with MySQL community version 8.0.26.
+ You can't migrate to Aurora MySQL version 3.05 and higher from some older MySQL 8.0 versions, including 8.0.11, 8.0.13, and 8.0.15. We recommend that you upgrade to MySQL version 8.0.28 before migrating.
+ Importing from Amazon S3 isn't supported on the db.t2.micro DB instance class. However, you can restore to a different DB instance class, and change the DB instance class later. For more information about DB instance classes, see [Amazon Aurora DB instance classes](Concepts.DBInstanceClass.md).
+ Amazon S3 limits the size of a file uploaded to an S3 bucket to 5 TB. If a backup file exceeds 5 TB, then you must split the backup file into smaller files.
+ Amazon RDS limits the number of files uploaded to an S3 bucket to 1 million. If the backup data for your database, including all full and incremental backups, exceeds 1 million files, use a Gzip (.gz), tar (.tar.gz), or Percona xbstream (.xbstream) file to store full and incremental backup files in the S3 bucket. Percona XtraBackup 8.0 only supports Percona xbstream for compression.
+ To provide management services for each DB cluster, the `rdsadmin` user is created when the DB cluster is created. As this is a reserved user in RDS, the following limitations apply:
  + Functions, procedures, views, events, and triggers with the `'rdsadmin'@'localhost'` definer aren't imported. For more information, see [Stored objects with 'rdsadmin'@'localhost' as the definer](AuroraMySQL.Migrating.ExtMySQL.Prechecks.md#AuroraMySQL.Migrating.ExtMySQL.Prechecks.Objects) and [Master user privileges with Amazon Aurora MySQL](AuroraMySQL.Security.md#AuroraMySQL.Security.MasterUser).
  + When the Aurora MySQL DB cluster is created, a master user is created with the maximum privileges supported. While restoring from backup, any unsupported privileges assigned to users being imported are removed automatically during import.

    To identify users that might be affected by this, see [User accounts with unsupported privileges](AuroraMySQL.Migrating.ExtMySQL.Prechecks.md#AuroraMySQL.Migrating.ExtMySQL.Prechecks.Users). For more information on supported privileges in Aurora MySQL, see [Role-based privilege model](AuroraMySQL.Compare-80-v3.md#AuroraMySQL.privilege-model).
+ For Aurora MySQL version 3, dynamic privileges aren't imported. Aurora-supported dynamic privileges can be imported after migration. For more information, see [Dynamic privileges in Aurora MySQL version 3](AuroraMySQL.Migrating.ExtMySQL.Prechecks.md#AuroraMySQL.Migrating.ExtMySQL.Prechecks.Dynamic).
+ User-created tables in the `mysql` schema aren't migrated.
+ The `innodb_data_file_path` parameter must be configured with only one data file that uses the default data file name `ibdata1:12M:autoextend`. Databases with two data files, or with a data file with a different name, can't be migrated using this method.

  The following are examples of settings that aren't allowed: `innodb_data_file_path=ibdata1:50M;ibdata2:50M:autoextend` and `innodb_data_file_path=ibdata01:50M:autoextend`.
+ You can't migrate from a source database that has tables defined outside of the default MySQL data directory.
+ The maximum supported size for uncompressed backups using this method is currently 64 TiB. For compressed backups, the limit is lower to account for the space required during decompression. In such cases, the maximum supported backup size is (`64 TiB – compressed backup size`).
+ Aurora MySQL doesn't support importing MySQL and other external components and plugins.
+ Aurora MySQL doesn't restore everything from your database. We recommend that you save the database schema and values for the following items from your source MySQL database, then add them to your restored Aurora MySQL DB cluster after it has been created:
  + User accounts
  + Functions
  + Stored procedures
  + Time zone information. Time zone information is loaded from the local operating system of your Aurora MySQL DB cluster. For more information, see [Local time zone for Amazon Aurora DB clusters](Concepts.RegionsAndAvailabilityZones.md#Aurora.Overview.LocalTimeZone).

## Before you begin


Before you can copy your data to an Amazon S3 bucket and restore to a DB cluster from those files, you must do the following:
+ Install Percona XtraBackup on your local server.
+ Permit Aurora MySQL to access your Amazon S3 bucket on your behalf.

### Installing Percona XtraBackup


Amazon Aurora can restore a DB cluster from files that were created using Percona XtraBackup. You can install Percona XtraBackup from [Software Downloads - Percona](https://www.percona.com/downloads).

For MySQL 5.7 migration, use Percona XtraBackup 2.4.

For MySQL 8.0 migration, use Percona XtraBackup 8.0. Make sure that the Percona XtraBackup version is compatible with the engine version of your source database.

### Required permissions


To migrate your MySQL data to an Amazon Aurora MySQL DB cluster, several permissions are required:
+ The user that is requesting that Aurora create a new cluster from an Amazon S3 bucket must have permission to list the buckets for your AWS account. You grant the user this permission using an AWS Identity and Access Management (IAM) policy.
+ Aurora requires permission to act on your behalf to access the Amazon S3 bucket where you store the files used to create your Amazon Aurora MySQL DB cluster. You grant Aurora the required permissions using an IAM service role. 
+ The user making the request must also have permission to list the IAM roles for your AWS account.
+ If the user making the request is to create the IAM service role or request that Aurora create the IAM service role (by using the console), then the user must have permission to create an IAM role for your AWS account.
+ If you plan to encrypt the data during the migration process, update the IAM policy of the user who will perform the migration to grant RDS access to the AWS KMS keys used for encrypting the backups. For instructions, see [Creating an IAM policy to access AWS KMS resources](AuroraMySQL.Integrating.Authorizing.IAM.KMSCreatePolicy.md).

For example, the following IAM policy grants a user the minimum required permissions to use the console to list IAM roles, create an IAM role, list the Amazon S3 buckets for your account, and list the KMS keys.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:ListRoles",
                "iam:CreateRole",
                "iam:CreatePolicy",
                "iam:AttachRolePolicy",
                "s3:ListBucket",
                "kms:ListKeys"
            ],
            "Resource": "*"
        }
    ]
}
```

------

Additionally, for a user to associate an IAM role with an Amazon S3 bucket, the IAM user must have the `iam:PassRole` permission for that IAM role. This permission allows an administrator to restrict which IAM roles a user can associate with Amazon S3 buckets. 

For example, the following IAM policy allows a user to associate the role named `S3Access` with an Amazon S3 bucket.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement":[
        {
            "Sid":"AllowS3AccessRole",
            "Effect":"Allow",
            "Action":"iam:PassRole",
            "Resource":"arn:aws:iam::123456789012:role/S3Access"
        }
    ]
}
```

------

For more information on IAM user permissions, see [Managing access using policies](UsingWithRDS.IAM.md#security_iam_access-manage).

### Creating the IAM service role


You can have the AWS Management Console create a role for you by choosing the **Create a New Role** option (shown later in this topic). If you select this option and specify a name for the new role, then Aurora creates the IAM service role required for Aurora to access your Amazon S3 bucket with the name that you supply.

As an alternative, you can manually create the role using the following procedure.

**To create an IAM role for Aurora to access Amazon S3**

1. Complete the steps in [Creating an IAM policy to access Amazon S3 resources](AuroraMySQL.Integrating.Authorizing.IAM.S3CreatePolicy.md).

1. Complete the steps in [Creating an IAM role to allow Amazon Aurora to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.CreateRole.md).

1. Complete the steps in [Associating an IAM role with an Amazon Aurora MySQL DB cluster](AuroraMySQL.Integrating.Authorizing.IAM.AddRoleToDBCluster.md).

## Backing up files to be restored as an Amazon Aurora MySQL DB cluster

You can create a full backup of your MySQL database files using Percona XtraBackup and upload the backup files to an Amazon S3 bucket. Alternatively, if you already use Percona XtraBackup to back up your MySQL database files, you can upload your existing full and incremental backup directories and files to an Amazon S3 bucket.

**Topics**
+ [

### Creating a full backup with Percona XtraBackup
](#AuroraMySQL.Migrating.ExtMySQL.S3.Backup.Full)
+ [

### Using incremental backups with Percona XtraBackup
](#AuroraMySQL.Migrating.ExtMySQL.S3.Backup.Incr)
+ [

### Backup considerations
](#AuroraMySQL.Migrating.ExtMySQL.S3.Backup.Considerations)

### Creating a full backup with Percona XtraBackup

To create a full backup of your MySQL database files that can be restored from Amazon S3 to create an Aurora MySQL DB cluster, use the Percona XtraBackup utility (`xtrabackup`) to back up your database. 

For example, the following command creates a backup of a MySQL database and stores the files in the `/on-premises/s3-restore/backup` folder.

```
xtrabackup --backup --user=<myuser> --password=<password> --target-dir=</on-premises/s3-restore/backup>
```

If you want to compress your backup into a single file (which can be split, if needed), you can use the `--stream` option to write your backup as a `tar` or `xbstream` stream. You can then process the stream to produce backup files in one of the following formats:
+ Gzip (.gz)
+ tar (.tar)
+ Percona xbstream (.xbstream)

The following command creates a backup of your MySQL database split into multiple Gzip files.

```
xtrabackup --backup --user=<myuser> --password=<password> --stream=tar \
   --target-dir=</on-premises/s3-restore/backup> | gzip - | split -d --bytes=500MB \
   - </on-premises/s3-restore/backup/backup>.tar.gz
```

The following command creates a backup of your MySQL database split into multiple tar files.

```
xtrabackup --backup --user=<myuser> --password=<password> --stream=tar \
   --target-dir=</on-premises/s3-restore/backup> | split -d --bytes=500MB \
   - </on-premises/s3-restore/backup/backup>.tar
```

The following command creates a backup of your MySQL database split into multiple xbstream files.

```
xtrabackup --backup --user=<myuser> --password=<password> --stream=xbstream \
   --target-dir=</on-premises/s3-restore/backup> | split -d --bytes=500MB \
   - </on-premises/s3-restore/backup/backup>.xbstream
```

**Note**  
If you see the following error, it might be caused by mixing file formats in your command:  

```
ERROR:/bin/tar: This does not look like a tar archive
```

Once you have backed up your MySQL database using the Percona XtraBackup utility, you can copy your backup directories and files to an Amazon S3 bucket.

For information on creating and uploading a file to an Amazon S3 bucket, see [Getting started with Amazon Simple Storage Service](https://docs.aws.amazon.com/AmazonS3/latest/userguide/GetStartedWithS3.html) in the *Amazon S3 Getting Started Guide*.

### Using incremental backups with Percona XtraBackup

Amazon Aurora MySQL supports both full and incremental backups created using Percona XtraBackup. If you already use Percona XtraBackup to perform full and incremental backups of your MySQL database files, you don't need to create a full backup and upload the backup files to Amazon S3. Instead, you can save a significant amount of time by copying your existing backup directories and files for your full and incremental backups to an Amazon S3 bucket. For more information, see [Create an incremental backup](https://docs.percona.com/percona-xtrabackup/8.0/create-incremental-backup.html) on the Percona website.

When copying your existing full and incremental backup files to an Amazon S3 bucket, you must recursively copy the contents of the base directory. Those contents include the full backup and also all incremental backup directories and files. This copy must preserve the directory structure in the Amazon S3 bucket. Aurora iterates through all files and directories. Aurora uses the `xtrabackup-checkpoints` file included with each incremental backup to identify the base directory and to order incremental backups by log sequence number (LSN) range.
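To illustrate the LSN ordering, the following self-contained sketch builds mock backup directories containing `xtrabackup_checkpoints` files and lists them in apply order. The directory names and LSN values are invented for this example; real checkpoint files contain additional fields.

```
# Mock backup directories with hypothetical xtrabackup_checkpoints files.
mkdir -p /tmp/bk/base /tmp/bk/inc1 /tmp/bk/inc2
printf 'backup_type = full-backuped\nfrom_lsn = 0\nto_lsn = 1000\n'  > /tmp/bk/base/xtrabackup_checkpoints
printf 'backup_type = incremental\nfrom_lsn = 1000\nto_lsn = 2000\n' > /tmp/bk/inc1/xtrabackup_checkpoints
printf 'backup_type = incremental\nfrom_lsn = 2000\nto_lsn = 3000\n' > /tmp/bk/inc2/xtrabackup_checkpoints

# List the backup directories in apply order (ascending from_lsn),
# as the restore process must order them.
for f in /tmp/bk/*/xtrabackup_checkpoints; do
  lsn=$(awk -F' = ' '/^from_lsn/ {print $2}' "$f")
  printf '%s %s\n' "$lsn" "$(dirname "$f")"
done | sort -n
```

Each incremental backup's `from_lsn` matches the `to_lsn` of its predecessor, which is how the chain back to the base directory is established.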

For information on creating and uploading a file to an Amazon S3 bucket, see [Getting started with Amazon Simple Storage Service](https://docs.aws.amazon.com/AmazonS3/latest/userguide/GetStartedWithS3.html) in the *Amazon S3 Getting Started Guide*.

### Backup considerations


Aurora doesn't support partial backups created using Percona XtraBackup. You can't use the following options to create a partial backup when you back up the source files for your database: `--tables`, `--tables-exclude`, `--tables-file`, `--databases`, `--databases-exclude`, or `--databases-file`.

For more information about backing up your database with Percona XtraBackup, see [Percona XtraBackup - Documentation](https://www.percona.com/doc/percona-xtrabackup/LATEST/index.html) and [Work with binary logs](https://docs.percona.com/percona-xtrabackup/8.0/working-with-binary-logs.html) on the Percona website.

Aurora supports incremental backups created using Percona XtraBackup. For more information, see [Create an incremental backup](https://docs.percona.com/percona-xtrabackup/8.0/create-incremental-backup.html) on the Percona website.

Aurora consumes your backup files based on the file name. Be sure to name your backup files with the appropriate file extension based on the file format—for example, `.xbstream` for files stored using the Percona xbstream format.

Aurora consumes your backup files in alphabetical order and also in natural number order. Always pipe the output through the `split` command when you issue the `xtrabackup` command, as shown in the preceding examples, so that your backup files are written and named in the proper order.

Amazon S3 limits the size of a file uploaded to an Amazon S3 bucket to 5 TB. If the backup data for your database exceeds 5 TB, use the `split` command to split the backup files into multiple files that are each less than 5 TB.
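The following sketch demonstrates the split-and-reassemble round trip on a small stand-in file. The file name and 16K part size are hypothetical; for a real backup you would use your actual backup archive and a part size such as `--bytes=500MB`.

```
# Create a small stand-in file for a real backup archive.
cd /tmp
dd if=/dev/urandom of=backup.xbstream bs=1024 count=64 2>/dev/null

# Split into numerically suffixed parts (part00, part01, ...), which
# sort in the order they must be consumed.
split -d --bytes=16K backup.xbstream backup.xbstream.part
ls backup.xbstream.part*

# Concatenating the parts in name order reproduces the original.
cat backup.xbstream.part* > backup-reassembled.xbstream
cmp backup.xbstream backup-reassembled.xbstream && echo "parts match original"
```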

Aurora limits the number of source files uploaded to an Amazon S3 bucket to 1 million files. In some cases, backup data for your database, including all full and incremental backups, can include a large number of files. In these cases, use a tarball (.tar.gz) file to store full and incremental backup files in the Amazon S3 bucket.
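As a sketch, packing a directory of full and incremental backups into a single tarball looks like the following. The directory names and file contents are placeholders.

```
# Hypothetical backup layout: one full backup and one incremental.
mkdir -p /tmp/mybackups/full /tmp/mybackups/incremental1
echo 'full backup data'  > /tmp/mybackups/full/ibdata1
echo 'incremental data'  > /tmp/mybackups/incremental1/ibdata1.delta

# Pack everything into one tar.gz, preserving the directory structure.
tar -czf /tmp/mybackups.tar.gz -C /tmp mybackups

# List the archived paths to confirm the structure survived.
tar -tzf /tmp/mybackups.tar.gz
```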

When you upload a file to an Amazon S3 bucket, you can use server-side encryption to encrypt the data. You can then restore an Amazon Aurora MySQL DB cluster from those encrypted files. Amazon Aurora MySQL can restore a DB cluster with files encrypted using the following types of server-side encryption:
+ Server-side encryption with Amazon S3–managed keys (SSE-S3) – Each object is encrypted with a unique key employing strong multifactor encryption.
+ Server-side encryption with AWS KMS–managed keys (SSE-KMS) – Similar to SSE-S3, but you have the option to create and manage the encryption keys yourself, among other differences.

For information about using server-side encryption when uploading files to an Amazon S3 bucket, see [Protecting data using server-side encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/serv-side-encryption.html) in the *Amazon S3 Developer Guide*.

## Restoring an Amazon Aurora MySQL DB cluster from an Amazon S3 bucket

You can restore your backup files from your Amazon S3 bucket to create a new Amazon Aurora MySQL DB cluster by using the Amazon RDS console. 

**To restore an Amazon Aurora MySQL DB cluster from files on an Amazon S3 bucket**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the top right corner of the Amazon RDS console, choose the AWS Region in which to create your DB cluster. Choose the same AWS Region as the Amazon S3 bucket that contains your database backup. 

1. In the navigation pane, choose **Databases**.

1. Choose **Restore from S3**.

   The **Create database by restoring from S3** page appears.  
![\[The page where you specify the details for restoring a DB cluster from S3\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/AuroraMigrateS3_01.png)

1. Under **S3 destination**:

   1. Choose the **S3 bucket** that contains the backup files.

   1. (Optional) For **S3 folder path prefix**, enter a file path prefix for the files stored in your Amazon S3 bucket.

      If you don't specify a prefix, then RDS creates your DB instance using all of the files and folders in the root folder of the S3 bucket. If you do specify a prefix, then RDS creates your DB instance using only the files and folders in the S3 bucket whose paths begin with the specified prefix.

      For example, suppose that you store your backup files on S3 in a subfolder named backups, and you have multiple sets of backup files, each in its own directory (gzip\_backup1, gzip\_backup2, and so on). In this case, you specify a prefix of backups/gzip\_backup1 to restore from the files in the gzip\_backup1 folder. 

1. Under **Engine options**:

   1. For **Engine type**, choose **Amazon Aurora**.

   1. For **Version**, choose the Aurora MySQL engine version for your restored DB instance.

1. For **IAM role**, you can choose an existing IAM role.

1. (Optional) You can also have a new IAM role created for you by choosing **Create a new role**. If so:

   1. Enter the **IAM role name**.

   1.  Choose whether to **Allow access to KMS key**:
      + If you didn't encrypt the backup files, choose **No**.
      + If you encrypted the backup files with AES-256 (SSE-S3) when you uploaded them to Amazon S3, choose **No**. In this case, the data is decrypted automatically.
      + If you encrypted the backup files with AWS KMS (SSE-KMS) server-side encryption when you uploaded them to Amazon S3, choose **Yes**. Next, choose the correct KMS key for **AWS KMS key**.

        The AWS Management Console creates an IAM policy that enables Aurora to decrypt the data.

      For more information, see [Protecting data using server-side encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/serv-side-encryption.html) in the *Amazon S3 Developer Guide*.

1. Choose settings for your DB cluster, such as the DB cluster storage configuration, DB instance class, DB cluster identifier, and login credentials. For information about each setting, see [Settings for Aurora DB clusters](Aurora.CreateInstance.md#Aurora.CreateInstance.Settings).

1. Customize additional settings for your Aurora MySQL DB cluster as needed.

1. Choose **Create database** to launch your Aurora DB instance.

On the Amazon RDS console, the new DB instance appears in the list of DB instances. The DB instance has a status of **creating** until the DB instance is created and ready for use. When the state changes to **available**, you can connect to the primary instance for your DB cluster. Depending on the DB instance class and storage allocated, it can take several minutes for the new instance to become available.

To view the newly created cluster, choose the **Databases** view in the Amazon RDS console and choose the DB cluster. For more information, see [Viewing an Amazon Aurora DB cluster](accessing-monitoring.md#Aurora.Viewing).

![\[Amazon Aurora DB Instances List\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/AuroraLaunch04.png)


Note the port and the writer endpoint of the DB cluster. Use the writer endpoint and port of the DB cluster in your JDBC and ODBC connection strings for any application that performs write or read operations.
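As a quick sketch, connection strings are assembled from the writer endpoint and port like the following. The endpoint, database name, and ODBC driver name shown here are placeholders; substitute the values for your own DB cluster and client installation.

```
# Placeholders: substitute your own cluster's writer endpoint and port.
ENDPOINT="mydbcluster.cluster-c7tj4example.us-east-1.rds.amazonaws.com"
PORT=3306

# JDBC connection string for read and write workloads.
JDBC_URL="jdbc:mysql://${ENDPOINT}:${PORT}/mydb"

# ODBC connection string; the exact driver name depends on your installation.
ODBC_STR="DRIVER={MySQL ODBC Driver};SERVER=${ENDPOINT};PORT=${PORT};DATABASE=mydb"

echo "$JDBC_URL"
echo "$ODBC_STR"
```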

## Synchronizing the Amazon Aurora MySQL DB cluster with the MySQL database using replication

To achieve little or no downtime during the migration, you can replicate transactions that were committed on your MySQL database to your Aurora MySQL DB cluster. Replication enables the DB cluster to catch up with the transactions on the MySQL database that happened during the migration. When the DB cluster is completely caught up, you can stop the replication and finish the migration to Aurora MySQL.

**Topics**
+ [

### Configuring your external MySQL database and your Aurora MySQL DB cluster for encrypted replication
](#AuroraMySQL.Migrating.ExtMySQL.S3.RepSync.ConfigureEncryption)
+ [

### Synchronizing the Amazon Aurora MySQL DB cluster with the external MySQL database
](#AuroraMySQL.Migrating.ExtMySQL.S3.RepSync.Synchronizing)

### Configuring your external MySQL database and your Aurora MySQL DB cluster for encrypted replication

To replicate data securely, you can use encrypted replication.

**Note**  
If you don't need to use encrypted replication, you can skip these steps and move on to the instructions in [Synchronizing the Amazon Aurora MySQL DB cluster with the external MySQL database](#AuroraMySQL.Migrating.ExtMySQL.S3.RepSync.Synchronizing).

The following are prerequisites for using encrypted replication:
+ Secure Sockets Layer (SSL) must be enabled on the external MySQL primary database.
+ A client key and client certificate must be prepared for the Aurora MySQL DB cluster.

During encrypted replication, the Aurora MySQL DB cluster acts as a client to the MySQL database server. The certificates and keys for the Aurora MySQL client are in files in .pem format.

**To configure your external MySQL database and your Aurora MySQL DB cluster for encrypted replication**

1. Ensure that you are prepared for encrypted replication:
   + If you don't have SSL enabled on the external MySQL primary database and don't have a client key and client certificate prepared, enable SSL on the MySQL database server and generate the required client key and client certificate.
   + If SSL is enabled on the external primary, supply a client key and certificate for the Aurora MySQL DB cluster. If you don't have these, generate a new key and certificate for the Aurora MySQL DB cluster. To sign the client certificate, you must have the certificate authority key that you used to configure SSL on the external MySQL primary database.

   For more information, see [Creating SSL certificates and keys using openssl](https://dev.mysql.com/doc/refman/5.6/en/creating-ssl-files-using-openssl.html) in the MySQL documentation.

   You need the certificate authority certificate, the client key, and the client certificate.

1. Connect to the Aurora MySQL DB cluster as the primary user using SSL.

   For information about connecting to an Aurora MySQL DB cluster with SSL, see [TLS connections to Aurora MySQL DB clusters](AuroraMySQL.Security.md#AuroraMySQL.Security.SSL).

1. Run the [mysql.rds\_import\_binlog\_ssl\_material](mysql-stored-proc-replicating.md#mysql_rds_import_binlog_ssl_material) stored procedure to import the SSL information into the Aurora MySQL DB cluster.

   For the `ssl_material_value` parameter, insert the information from the .pem format files for the Aurora MySQL DB cluster in the correct JSON payload.

   The following example imports SSL information into an Aurora MySQL DB cluster. In .pem format files, the body of the certificate or key is typically longer than the body shown in the example.

   ```
   call mysql.rds_import_binlog_ssl_material(
   '{"ssl_ca":"-----BEGIN CERTIFICATE-----
   AAAAB3NzaC1yc2EAAAADAQABAAABAQClKsfkNkuSevGj3eYhCe53pcjqP3maAhDFcvBS7O6V
   hz2ItxCih+PnDSUaw+WNQn/mZphTk/a/gU8jEzoOWbkM4yxyb/wB96xbiFveSFJuOp/d6RJhJOI0iBXr
   lsLnBItntckiJ7FbtxJMXLvvwJryDUilBMTjYtwB+QhYXUMOzce5Pjz5/i8SeJtjnV3iAoG/cQk+0FzZ
   qaeJAAHco+CY/5WrUBkrHmFJr6HcXkvJdWPkYQS3xqC0+FmUZofz221CBt5IMucxXPkX4rWi+z7wB3Rb
   BQoQzd8v7yeb7OzlPnWOyN0qFU0XA246RA8QFYiCNYwI3f05p6KLxEXAMPLE
   -----END CERTIFICATE-----\n","ssl_cert":"-----BEGIN CERTIFICATE-----
   AAAAB3NzaC1yc2EAAAADAQABAAABAQClKsfkNkuSevGj3eYhCe53pcjqP3maAhDFcvBS7O6V
   hz2ItxCih+PnDSUaw+WNQn/mZphTk/a/gU8jEzoOWbkM4yxyb/wB96xbiFveSFJuOp/d6RJhJOI0iBXr
   lsLnBItntckiJ7FbtxJMXLvvwJryDUilBMTjYtwB+QhYXUMOzce5Pjz5/i8SeJtjnV3iAoG/cQk+0FzZ
   qaeJAAHco+CY/5WrUBkrHmFJr6HcXkvJdWPkYQS3xqC0+FmUZofz221CBt5IMucxXPkX4rWi+z7wB3Rb
   BQoQzd8v7yeb7OzlPnWOyN0qFU0XA246RA8QFYiCNYwI3f05p6KLxEXAMPLE
   -----END CERTIFICATE-----\n","ssl_key":"-----BEGIN RSA PRIVATE KEY-----
   AAAAB3NzaC1yc2EAAAADAQABAAABAQClKsfkNkuSevGj3eYhCe53pcjqP3maAhDFcvBS7O6V
   hz2ItxCih+PnDSUaw+WNQn/mZphTk/a/gU8jEzoOWbkM4yxyb/wB96xbiFveSFJuOp/d6RJhJOI0iBXr
   lsLnBItntckiJ7FbtxJMXLvvwJryDUilBMTjYtwB+QhYXUMOzce5Pjz5/i8SeJtjnV3iAoG/cQk+0FzZ
   qaeJAAHco+CY/5WrUBkrHmFJr6HcXkvJdWPkYQS3xqC0+FmUZofz221CBt5IMucxXPkX4rWi+z7wB3Rb
   BQoQzd8v7yeb7OzlPnWOyN0qFU0XA246RA8QFYiCNYwI3f05p6KLxEXAMPLE
   -----END RSA PRIVATE KEY-----\n"}');
   ```

   For more information, see [mysql.rds\_import\_binlog\_ssl\_material](mysql-stored-proc-replicating.md#mysql_rds_import_binlog_ssl_material) and [TLS connections to Aurora MySQL DB clusters](AuroraMySQL.Security.md#AuroraMySQL.Security.SSL).
**Note**  
After running the procedure, the secrets are stored in files. To erase the files later, you can run the [mysql.rds\_remove\_binlog\_ssl\_material](mysql-stored-proc-replicating.md#mysql_rds_remove_binlog_ssl_material) stored procedure.
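Step 1 of the preceding procedure calls for a client key and a CA-signed client certificate. The following is a minimal, self-contained sketch using `openssl`. In practice, sign the client certificate with the same certificate authority key that you used to configure SSL on the external MySQL primary; here a demo CA is generated so that the sketch runs on its own, and all file and subject names are placeholders.

```
# Working directory for the demo materials (placeholder path).
mkdir -p /tmp/aurora-ssl && cd /tmp/aurora-ssl

# Demo certificate authority (stand-in for your existing CA).
openssl genrsa -out ca-key.pem 2048 2>/dev/null
openssl req -new -x509 -nodes -days 365 -key ca-key.pem \
  -out ca.pem -subj "/CN=demo-ca"

# Client key and CA-signed client certificate for the Aurora MySQL DB cluster.
openssl genrsa -out client-key.pem 2048 2>/dev/null
openssl req -new -key client-key.pem -out client-req.pem -subj "/CN=aurora-client"
openssl x509 -req -in client-req.pem -days 365 -CA ca.pem -CAkey ca-key.pem \
  -set_serial 01 -out client-cert.pem 2>/dev/null

# The client certificate must chain to the CA; "OK" confirms it.
openssl verify -CAfile ca.pem client-cert.pem
```

The resulting `ca.pem`, `client-cert.pem`, and `client-key.pem` contents are what you paste into the `ssl_ca`, `ssl_cert`, and `ssl_key` fields of the JSON payload shown earlier.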

### Synchronizing the Amazon Aurora MySQL DB cluster with the external MySQL database

You can synchronize your Amazon Aurora MySQL DB cluster with the MySQL database using replication.

**To synchronize your Aurora MySQL DB cluster with the MySQL database using replication**

1. Ensure that the /etc/my.cnf file for the external MySQL database has the relevant entries.

   If encrypted replication is not required, ensure that the external MySQL database is started with binary logs (binlogs) enabled and SSL disabled. The following are the relevant entries in the /etc/my.cnf file for unencrypted data.

   ```
   log-bin=mysql-bin
   server-id=2133421
   innodb_flush_log_at_trx_commit=1
   sync_binlog=1
   ```

   If encrypted replication is required, ensure that the external MySQL database is started with SSL and binlogs enabled. The entries in the /etc/my.cnf file include the .pem file locations for the MySQL database server.

   ```
   log-bin=mysql-bin
   server-id=2133421
   innodb_flush_log_at_trx_commit=1
   sync_binlog=1
   
   # Setup SSL.
   ssl-ca=/home/sslcerts/ca.pem
   ssl-cert=/home/sslcerts/server-cert.pem
   ssl-key=/home/sslcerts/server-key.pem
   ```

   You can verify that SSL is enabled with the following command.

   ```
   mysql> show variables like 'have_ssl';
   ```

   Your output should be similar to the following.

   ```
   +---------------+-------+
   | Variable_name | Value |
   +---------------+-------+
   | have_ssl      | YES   |
   +---------------+-------+
   1 row in set (0.00 sec)
   ```

1. Determine the starting binary log position for replication. You specify the position to start replication in a later step.

   **Using the AWS Management Console**

   1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

   1. In the navigation pane, choose **Events**.

   1. In the **Events** list, note the position in the **Recovered from Binary log filename** event.  
![\[View MySQL primary\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-mysql-rep-binary-log-position.png)

   **Using the AWS CLI**

   You can also get the binlog file name and position by using the [describe-events](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-events.html) AWS CLI command. The following shows an example `describe-events` command.

   ```
   PROMPT> aws rds describe-events
   ```

   In the output, identify the event that shows the binlog position.

1. While connected to the external MySQL database, create a user to be used for replication. This account is used solely for replication and must be restricted to your domain to improve security. The following is an example.

   ```
   mysql> CREATE USER '<user_name>'@'<domain_name>' IDENTIFIED BY '<password>';
   ```

   The user requires the `REPLICATION CLIENT` and `REPLICATION SLAVE` privileges. Grant these privileges to the user.

   ```
   GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO '<user_name>'@'<domain_name>';
   ```

   If you need to use encrypted replication, require SSL connections for the replication user. For example, you can use the following statement to require SSL connections on the user account `<user_name>`.

   ```
   GRANT USAGE ON *.* TO '<user_name>'@'<domain_name>' REQUIRE SSL;
   ```
**Note**  
If `REQUIRE SSL` is not included, the replication connection might silently fall back to an unencrypted connection.

1. In the Amazon RDS console, add the IP address of the server that hosts the external MySQL database to the VPC security group for the Aurora MySQL DB cluster. For more information on modifying a VPC security group, see [Security groups for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) in the *Amazon Virtual Private Cloud User Guide*. 

   You might also need to configure your local network to permit connections from the IP address of your Aurora MySQL DB cluster, so that it can communicate with your external MySQL database. To find the IP address of the Aurora MySQL DB cluster, use the `host` command.

   ```
   host <db_cluster_endpoint>
   ```

   The host name is the DNS name from the Aurora MySQL DB cluster endpoint.

1. Configure binary log replication by running the [mysql.rds\_set\_external\_master (Aurora MySQL version 2)](mysql-stored-proc-replicating.md#mysql_rds_set_external_master) or [mysql.rds\_set\_external\_source (Aurora MySQL version 3)](mysql-stored-proc-replicating.md#mysql_rds_set_external_source) stored procedure. This stored procedure has the following syntax.

   ```
   CALL mysql.rds_set_external_master (
     host_name
     , host_port
     , replication_user_name
     , replication_user_password
     , mysql_binary_log_file_name
     , mysql_binary_log_file_location
     , ssl_encryption
   );
   
   CALL mysql.rds_set_external_source (
     host_name
     , host_port
     , replication_user_name
     , replication_user_password
     , mysql_binary_log_file_name
     , mysql_binary_log_file_location
     , ssl_encryption
   );
   ```

   For information about the parameters, see [mysql.rds\_set\_external\_master (Aurora MySQL version 2)](mysql-stored-proc-replicating.md#mysql_rds_set_external_master) and [mysql.rds\_set\_external\_source (Aurora MySQL version 3)](mysql-stored-proc-replicating.md#mysql_rds_set_external_source).

   For `mysql_binary_log_file_name` and `mysql_binary_log_file_location`, use the binary log file name and position from the **Recovered from Binary log filename** event that you noted earlier.

   If you aren't using encrypted replication, set the `ssl_encryption` parameter to `0`. If you are using encrypted replication, set the `ssl_encryption` parameter to `1`.

   The following example runs the procedure for an Aurora MySQL DB cluster that uses encrypted replication.

   ```
   CALL mysql.rds_set_external_master(
     'Externaldb.some.com',
     3306,
     'repl_user',
     'password',
     'mysql-bin.000010',
     120,
     1);
   
   CALL mysql.rds_set_external_source(
     'Externaldb.some.com',
     3306,
     'repl_user',
     'password',
     'mysql-bin.000010',
     120,
     1);
   ```

   This stored procedure sets the parameters that the Aurora MySQL DB cluster uses for connecting to the external MySQL database and reading its binary log. If you're using encrypted replication, it also downloads the SSL certificate authority certificate, client certificate, and client key to the local disk.

1. Start binary log replication by running the [mysql.rds\_start\_replication](mysql-stored-proc-replicating.md#mysql_rds_start_replication) stored procedure.

   ```
   CALL mysql.rds_start_replication;
   ```

1. Monitor how far the Aurora MySQL DB cluster is behind the MySQL replication primary database. To do so, connect to the Aurora MySQL DB cluster and run the following command.

   ```
   -- Aurora MySQL version 2:
   SHOW SLAVE STATUS;
   
   -- Aurora MySQL version 3:
   SHOW REPLICA STATUS;
   ```

   In the command output, the `Seconds_Behind_Master` field shows how far the Aurora MySQL DB cluster is behind the MySQL primary. When this value is `0` (zero), the Aurora MySQL DB cluster has caught up to the primary, and you can move on to the next step to stop replication.

1. Stop replication. To do so, while still connected to the Aurora MySQL DB cluster, run the [mysql.rds\_stop\_replication](mysql-stored-proc-replicating.md#mysql_rds_stop_replication) stored procedure.

   ```
   CALL mysql.rds_stop_replication;
   ```
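In step 2 of the preceding procedure, you can also extract the binlog file name and position from the `describe-events` output on the command line. The following sketch runs against a saved, hypothetical sample of that output; the exact event message wording can vary, so adjust the pattern to match what your `aws rds describe-events` call actually returns.

```
# Hypothetical sample of describe-events output saved to a file. In
# practice you would capture the output of:
#   aws rds describe-events
cat > /tmp/events.json <<'EOF'
{"Events":[{"SourceIdentifier":"my-cluster-instance",
"Message":"Recovered from Binary log filename mysql-bin.000010, position 120"}]}
EOF

# Pull the binlog file name and position out of the recovery event.
grep -o 'mysql-bin\.[0-9]*, position [0-9]*' /tmp/events.json
```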

# Reducing the time for physical migration to Amazon Aurora MySQL

You can make the following database modifications to speed up the process of migrating a database to Amazon Aurora MySQL.

**Important**  
Make sure to perform these updates on a copy of a production database, rather than on a production database. You can then back up the copy and restore it to your Aurora MySQL DB cluster to avoid any service interruptions on your production database.

## Unsupported table types


Aurora MySQL supports only the InnoDB engine for database tables. If you have MyISAM tables in your database, then those tables must be converted before migrating to Aurora MySQL. The conversion process requires additional space for the MyISAM to InnoDB conversion during the migration procedure.

To reduce your chances of running out of space or to speed up the migration process, convert all of your MyISAM tables to InnoDB tables before migrating them. The size of the resulting InnoDB table is equivalent to the size required by Aurora MySQL for that table. To convert a MyISAM table to InnoDB, run the following command:

```
ALTER TABLE schema.table_name ENGINE=InnoDB, ALGORITHM=COPY;
```

Aurora MySQL doesn't support compressed tables or pages, that is, tables created with `ROW_FORMAT=COMPRESSED` or `COMPRESSION = {"zlib"|"lz4"}`.

To reduce your chances of running out of space or to speed up the migration process, expand your compressed tables by setting `ROW_FORMAT` to `DEFAULT`, `COMPACT`, `DYNAMIC`, or `REDUNDANT`. For compressed pages, set `COMPRESSION="none"`.

For more information, see [InnoDB row formats](https://dev.mysql.com/doc/refman/8.0/en/innodb-row-format.html) and [InnoDB table and page compression](https://dev.mysql.com/doc/refman/8.0/en/innodb-compression.html) in the MySQL documentation.

You can use the following SQL script on your existing MySQL DB instance to list the tables in your database that are MyISAM tables or compressed tables.

```
-- This script examines a MySQL database for conditions that block
-- migrating the database into Aurora MySQL.
-- It must be run from an account that has read permission for the
-- INFORMATION_SCHEMA database.

-- Verify that this is a supported version of MySQL.

select msg as `==> Checking current version of MySQL.`
from
  (
  select
    concat('This script should be run on MySQL version 5.6 or higher. ',
      'Earlier versions are not supported.') as msg,
    cast(substring_index(version(), '.', 1) as unsigned) * 100 +
      cast(substring_index(substring_index(version(), '.', 2), '.', -1)
      as unsigned)
    as major_minor
  ) as T
where major_minor < 506;


-- List MyISAM and compressed tables. Include the table size.

select concat(TABLE_SCHEMA, '.', TABLE_NAME) as `==> MyISAM or Compressed Tables`,
round(((data_length + index_length) / 1024 / 1024), 2) "Approx size (MB)"
from INFORMATION_SCHEMA.TABLES
where
  ENGINE <> 'InnoDB'
  and
  (
    -- User tables
    TABLE_SCHEMA not in ('mysql', 'performance_schema',
                         'information_schema')
    or
    -- Non-standard system tables
    (
      TABLE_SCHEMA = 'mysql' and TABLE_NAME not in
        (
          'columns_priv', 'db', 'event', 'func', 'general_log',
          'help_category', 'help_keyword', 'help_relation',
          'help_topic', 'host', 'ndb_binlog_index', 'plugin',
          'proc', 'procs_priv', 'proxies_priv', 'servers', 'slow_log',
          'tables_priv', 'time_zone', 'time_zone_leap_second',
          'time_zone_name', 'time_zone_transition',
          'time_zone_transition_type', 'user'
        )
    )
  )
  or
  (
    -- Compressed tables
       ROW_FORMAT = 'Compressed'
  );
```

## User accounts with unsupported privileges


User accounts with privileges that aren't supported by Aurora MySQL are imported without the unsupported privileges. For the list of supported privileges, see [Role-based privilege model](AuroraMySQL.Compare-80-v3.md#AuroraMySQL.privilege-model).

You can run the following SQL query on your source database to list the user accounts that have unsupported privileges.

```
SELECT
    user,
    host
FROM
    mysql.user
WHERE
    Shutdown_priv = 'y'
    OR File_priv = 'y'
    OR Super_priv = 'y'
    OR Create_tablespace_priv = 'y';
```

## Dynamic privileges in Aurora MySQL version 3


Dynamic privileges aren't imported. Aurora MySQL version 3 supports the following dynamic privileges.

```
'APPLICATION_PASSWORD_ADMIN',
'CONNECTION_ADMIN',
'REPLICATION_APPLIER',
'ROLE_ADMIN',
'SESSION_VARIABLES_ADMIN',
'SET_USER_ID',
'XA_RECOVER_ADMIN'
```

The following example script grants the supported dynamic privileges to the user accounts in the Aurora MySQL DB cluster.

```
-- This script finds the user accounts that have Aurora MySQL supported dynamic privileges 
-- and grants them to corresponding user accounts in the Aurora MySQL DB cluster.

/home/ec2-user/opt/mysql/8.0.26/bin/mysql -uusername -pxxxxx -P8026 -h127.0.0.1 -BNe "SELECT
  CONCAT('GRANT ', GRANTS, ' ON *.* TO ', GRANTEE ,';') AS grant_statement
  FROM (select GRANTEE, group_concat(privilege_type) AS GRANTS FROM information_schema.user_privileges 
      WHERE privilege_type IN (
        'APPLICATION_PASSWORD_ADMIN',
        'CONNECTION_ADMIN',
        'REPLICATION_APPLIER',
        'ROLE_ADMIN',
        'SESSION_VARIABLES_ADMIN',
        'SET_USER_ID',
        'XA_RECOVER_ADMIN')
      AND GRANTEE NOT IN (\"'mysql.session'@'localhost'\",\"'mysql.infoschema'@'localhost'\",\"'mysql.sys'@'localhost'\") GROUP BY GRANTEE)
      AS PRIVGRANTS; " | /home/ec2-user/opt/mysql/8.0.26/bin/mysql -u master_username -pmaster_password -h DB_cluster_endpoint
```

## Stored objects with 'rdsadmin'@'localhost' as the definer


Functions, procedures, views, events, and triggers with `'rdsadmin'@'localhost'` as the definer aren't imported.

You can use the following SQL script on your source MySQL database to list the stored objects that have the unsupported definer.

```
-- This SQL query lists routines with `rdsadmin`@`localhost` as the definer.

SELECT
    ROUTINE_SCHEMA,
    ROUTINE_NAME
FROM
    information_schema.routines
WHERE
    definer = 'rdsadmin@localhost';

-- This SQL query lists triggers with `rdsadmin`@`localhost` as the definer.

SELECT
    TRIGGER_SCHEMA,
    TRIGGER_NAME,
    DEFINER
FROM
    information_schema.triggers
WHERE
    DEFINER = 'rdsadmin@localhost';

-- This SQL query lists events with `rdsadmin`@`localhost` as the definer.

SELECT
    EVENT_SCHEMA,
    EVENT_NAME
FROM
    information_schema.events
WHERE
    DEFINER = 'rdsadmin@localhost';

-- This SQL query lists views with `rdsadmin`@`localhost` as the definer.
SELECT
    TABLE_SCHEMA,
    TABLE_NAME
FROM
    information_schema.views
WHERE
    DEFINER = 'rdsadmin@localhost';
```

# Logical migration from MySQL to Amazon Aurora MySQL by using mysqldump
Logical migration using mysqldump

Because Amazon Aurora MySQL is a MySQL-compatible database, you can use the `mysqldump` utility to copy data from your MySQL database, or the `mariadb-dump` utility to copy data from your MariaDB database, to an existing Aurora MySQL DB cluster.

For a discussion of how to do so with MySQL or MariaDB databases that are very large, see the following topics in the *Amazon Relational Database Service User Guide*:
+ MySQL – [Importing data to an Amazon RDS for MySQL database with reduced downtime](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/mysql-importing-data-reduced-downtime.html)
+ MariaDB – [Importing data to an Amazon RDS for MariaDB database with reduced downtime](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/mariadb-importing-data-reduced-downtime.html)

For MySQL or MariaDB databases that have smaller amounts of data, see the following topics in the *Amazon Relational Database Service User Guide*:
+ MySQL – [Importing data from an external MySQL database to an Amazon RDS for MySQL DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/mysql-importing-data-external-database.html)
+ MariaDB – [Importing data from an external MariaDB database to an Amazon RDS for MariaDB DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/mariadb-importing-data-external-database.html)

# Migrating data from an RDS for MySQL DB instance to an Amazon Aurora MySQL DB cluster
Migrating from a MySQL DB instance to Aurora MySQL

You can migrate (copy) data to an Amazon Aurora MySQL DB cluster from an RDS for MySQL DB instance.

**Topics**
+ [

# Migrating an RDS for MySQL snapshot to Aurora
](AuroraMySQL.Migrating.RDSMySQL.Snapshot.md)
+ [

# Migrating data from an RDS for MySQL DB instance to an Amazon Aurora MySQL DB cluster by using an Aurora read replica
](AuroraMySQL.Migrating.RDSMySQL.Replica.md)

**Note**  
Because Amazon Aurora MySQL is compatible with MySQL, you can migrate data from your MySQL database by setting up replication between your MySQL database and an Amazon Aurora MySQL DB cluster. For more information, see [Replication with Amazon Aurora](Aurora.Replication.md).

# Migrating an RDS for MySQL snapshot to Aurora


You can migrate a DB snapshot of an RDS for MySQL DB instance to create an Aurora MySQL DB cluster. The new Aurora MySQL DB cluster is populated with the data from the original RDS for MySQL DB instance. The DB snapshot must have been made from an Amazon RDS DB instance running a MySQL version that's compatible with Aurora MySQL.

You can migrate either a manual or automated DB snapshot. After the DB cluster is created, you can then create optional Aurora Replicas.

**Note**  
You can also migrate an RDS for MySQL DB instance to an Aurora MySQL DB cluster by creating an Aurora read replica of your source RDS for MySQL DB instance. For more information, see [Migrating data from an RDS for MySQL DB instance to an Amazon Aurora MySQL DB cluster by using an Aurora read replica](AuroraMySQL.Migrating.RDSMySQL.Replica.md).  
You can't migrate to Aurora MySQL version 3.05 and higher from some older MySQL 8.0 versions, including 8.0.11, 8.0.13, and 8.0.15. We recommend that you upgrade to MySQL version 8.0.28 before migrating.

The general steps you must take are as follows:

1. Determine the amount of space to provision for your Aurora MySQL DB cluster. For more information, see [How much space do I need?](#AuroraMySQL.Migrating.RDSMySQL.Space)

1. Use the console to create the snapshot in the AWS Region where the RDS for MySQL DB instance is located. For information about creating a DB snapshot, see [Creating a DB snapshot](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateSnapshot.html).

1. If the DB snapshot is not in the AWS Region where you want to create your Aurora MySQL DB cluster, use the Amazon RDS console to copy the DB snapshot to that AWS Region. For information about copying a DB snapshot, see [Copying a DB snapshot](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CopySnapshot.html).

1. Use the console to migrate the DB snapshot and create an Aurora MySQL DB cluster with the same databases as the original MySQL DB instance. 

**Warning**  
Amazon RDS limits each AWS account to one snapshot copy into each AWS Region at a time.

## How much space do I need?


When you migrate a snapshot of a MySQL DB instance into an Aurora MySQL DB cluster, Aurora uses an Amazon Elastic Block Store (Amazon EBS) volume to format the data from the snapshot before migrating it. In some cases, additional space is needed to format the data for migration.

Tables that are neither MyISAM tables nor compressed can be up to 16 TB in size. If you have MyISAM tables, then Aurora must use additional space in the volume to convert the tables to be compatible with Aurora MySQL. If you have compressed tables, then Aurora must use additional space in the volume to expand these tables before storing them on the Aurora cluster volume. Because of this additional space requirement, make sure that none of the MyISAM and compressed tables being migrated from your MySQL DB instance exceeds 8 TB in size.
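As a quick pre-check, you can apply the 8 TB guideline to table metadata before starting a migration. The following Python sketch is hypothetical: it assumes you have already queried each table's name, engine, row format, and size (for example, from `INFORMATION_SCHEMA.TABLES`), and it approximates the 8 TB guideline as 8 TiB:

```python
# Sketch: flag tables that need extra conversion space during migration,
# and any MyISAM or compressed table above the size guideline.

EIGHT_TB_BYTES = 8 * 1024**4  # 8 TiB, approximating the 8 TB guideline

def needs_conversion(engine, row_format):
    """MyISAM tables and compressed tables require extra volume space."""
    return engine != "InnoDB" or row_format == "Compressed"

def oversized_tables(tables):
    """tables: iterable of (name, engine, row_format, size_bytes) tuples.

    Returns the names of tables that both need conversion and exceed
    the size guideline."""
    return [
        name
        for name, engine, row_format, size in tables
        if needs_conversion(engine, row_format) and size > EIGHT_TB_BYTES
    ]
```

Note that an uncompressed InnoDB table larger than 8 TiB is not flagged, because it needs no conversion space and falls under the 16 TB limit instead.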

## Reducing the amount of space required to migrate data into Amazon Aurora MySQL


You might want to modify your database schema prior to migrating it into Amazon Aurora. Such modification can be helpful in the following cases: 
+ You want to speed up the migration process.
+ You are unsure of how much space you need to provision.
+ You have attempted to migrate your data and the migration has failed due to a lack of provisioned space.

You can make the following changes to improve the process of migrating a database into Amazon Aurora.

**Important**  
Be sure to perform these updates on a new DB instance restored from a snapshot of a production database, rather than on a production instance. You can then migrate the data from the snapshot of your new DB instance into your Aurora DB cluster to avoid any service interruptions on your production database.


| Table type | Limitation or guideline | 
| --- | --- | 
|  MyISAM tables  |  Aurora MySQL supports InnoDB tables only. If you have MyISAM tables in your database, then those tables must be converted before being migrated into Aurora MySQL. The conversion process requires additional space for the MyISAM to InnoDB conversion during the migration procedure. To reduce your chances of running out of space or to speed up the migration process, convert all of your MyISAM tables to InnoDB tables before migrating them. The size of the resulting InnoDB table is equivalent to the size required by Aurora MySQL for that table. To convert a MyISAM table to InnoDB, run the following command:  `alter table <schema>.<table_name> engine=innodb, algorithm=copy;`   | 
|  Compressed tables  |  Aurora MySQL doesn't support compressed tables (that is, tables created with `ROW_FORMAT=COMPRESSED`).  To reduce your chances of running out of space or to speed up the migration process, expand your compressed tables by setting `ROW_FORMAT` to `DEFAULT`, `COMPACT`, `DYNAMIC`, or `REDUNDANT`. For more information, see [InnoDB row formats](https://dev.mysql.com/doc/refman/8.0/en/innodb-row-format.html) in the MySQL documentation.  | 

You can use the following SQL script on your existing MySQL DB instance to list the tables in your database that are MyISAM tables or compressed tables.

```
-- This script examines a MySQL database for conditions that block
-- migrating the database into Amazon Aurora.
-- It needs to be run from an account that has read permission for the
-- INFORMATION_SCHEMA database.

-- Verify that this is a supported version of MySQL.

select msg as `==> Checking current version of MySQL.`
from
  (
  select
    concat('This script should be run on MySQL version 5.6 or higher. ',
           'Earlier versions are not supported.') as msg,
    cast(substring_index(version(), '.', 1) as unsigned) * 100 +
      cast(substring_index(substring_index(version(), '.', 2), '.', -1)
      as unsigned)
    as major_minor
  ) as T
where major_minor < 506;


-- List MyISAM and compressed tables. Include the table size.

select concat(TABLE_SCHEMA, '.', TABLE_NAME) as `==> MyISAM or Compressed Tables`,
round(((data_length + index_length) / 1024 / 1024), 2) "Approx size (MB)"
from INFORMATION_SCHEMA.TABLES
where
  ENGINE <> 'InnoDB'
  and
  (
    -- User tables
    TABLE_SCHEMA not in ('mysql', 'performance_schema',
                         'information_schema')
    or
    -- Non-standard system tables
    (
      TABLE_SCHEMA = 'mysql' and TABLE_NAME not in
        (
          'columns_priv', 'db', 'event', 'func', 'general_log',
          'help_category', 'help_keyword', 'help_relation',
          'help_topic', 'host', 'ndb_binlog_index', 'plugin',
          'proc', 'procs_priv', 'proxies_priv', 'servers', 'slow_log',
          'tables_priv', 'time_zone', 'time_zone_leap_second',
          'time_zone_name', 'time_zone_transition',
          'time_zone_transition_type', 'user'
        )
    )
  )
  or
  (
    -- Compressed tables
       ROW_FORMAT = 'Compressed'
  );
```

The script produces output similar to the output in the following example. The example shows two tables that must be converted from MyISAM to InnoDB. The output also includes the approximate size of each table in megabytes (MB). 

```
+---------------------------------+------------------+
| ==> MyISAM or Compressed Tables | Approx size (MB) |
+---------------------------------+------------------+
| test.name_table                 |          2102.25 |
| test.my_table                   |            65.25 |
+---------------------------------+------------------+
2 rows in set (0.01 sec)
```
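The version test in the script encodes `major.minor` as `major * 100 + minor`, so MySQL 5.6 becomes 506 and the warning fires only below that value. A Python sketch of the same encoding, for reference:

```python
# Sketch: encode a MySQL version string the same way the SQL script does,
# so that it can be compared against 506 (MySQL 5.6).

def encode_major_minor(version):
    parts = version.split(".")
    major, minor = int(parts[0]), int(parts[1])
    return major * 100 + minor

def is_supported(version):
    """The script targets MySQL version 5.6 or higher."""
    return encode_major_minor(version) >= 506
```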

## Migrating an RDS for MySQL DB snapshot to an Aurora MySQL DB cluster
Migrating a DB snapshot to a DB cluster

You can migrate a DB snapshot of an RDS for MySQL DB instance to create an Aurora MySQL DB cluster using the AWS Management Console or the AWS CLI. The new Aurora MySQL DB cluster is populated with the data from the original RDS for MySQL DB instance. For information about creating a DB snapshot, see [Creating a DB snapshot](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateSnapshot.html).

If the DB snapshot is not in the AWS Region where you want to locate your data, copy the DB snapshot to that AWS Region. For information about copying a DB snapshot, see [Copying a DB snapshot](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CopySnapshot.html).

### Console


When you migrate the DB snapshot by using the AWS Management Console, the console takes the actions necessary to create both the DB cluster and the primary instance.

You can also choose for your new Aurora MySQL DB cluster to be encrypted at rest using an AWS KMS key.

**To migrate a MySQL DB snapshot by using the AWS Management Console**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. Either start the migration from the MySQL DB instance or from the snapshot:

   To start the migration from the DB instance:

   1. In the navigation pane, choose **Databases**, and then select the MySQL DB instance.

   1. For **Actions**, choose **Migrate latest snapshot**.

   To start the migration from the snapshot:

   1. Choose **Snapshots**.

   1. On the **Snapshots** page, choose the snapshot that you want to migrate into an Aurora MySQL DB cluster.

   1. Choose **Snapshot Actions**, and then choose **Migrate Snapshot**.

   The **Migrate Database** page appears.

1. Set the following values on the **Migrate Database** page:
   + **Migrate to DB Engine**: Select `aurora`.
   + **DB Engine Version**: Select the DB engine version for the Aurora MySQL DB cluster.
   + **DB Instance Class**: Select a DB instance class that has the required storage and capacity for your database, for example `db.r3.large`. Aurora cluster volumes automatically grow as the amount of data in your database increases. An Aurora cluster volume can grow to a maximum size of 128 tebibytes (TiB). So you only need to select a DB instance class that meets your current storage requirements. For more information, see [Overview of Amazon Aurora storage](Aurora.Overview.StorageReliability.md#Aurora.Overview.Storage).
   + **DB Instance Identifier**: Type a name for the DB cluster that is unique for your account in the AWS Region you selected. This identifier is used in the endpoint addresses for the instances in your DB cluster. You might choose to add some intelligence to the name, such as including the AWS Region and DB engine you selected, for example **aurora-cluster1**.

     The DB instance identifier has the following constraints:
     + It must contain from 1 to 63 alphanumeric characters or hyphens.
     + Its first character must be a letter.
     + It cannot end with a hyphen or contain two consecutive hyphens.
     + It must be unique for all DB instances per AWS account, per AWS Region.
   + **Virtual Private Cloud (VPC)**: If you have an existing VPC, then you can use that VPC with your Aurora MySQL DB cluster by selecting your VPC identifier, for example `vpc-a464d1c1`. For information on creating a VPC, see [Tutorial: Create a VPC for use with a DB cluster (IPv4 only)](CHAP_Tutorials.WebServerDB.CreateVPC.md).

     Otherwise, you can choose to have Aurora create a VPC for you by selecting **Create a new VPC**. 
   + **DB subnet group**: If you have an existing subnet group, then you can use that subnet group with your Aurora MySQL DB cluster by selecting your subnet group identifier, for example `gs-subnet-group1`.

     Otherwise, you can choose to have Aurora create a subnet group for you by selecting **Create a new subnet group**. 
   + **Public accessibility**: Select **No** to specify that instances in your DB cluster can only be accessed by resources inside of your VPC. Select **Yes** to specify that instances in your DB cluster can be accessed by resources on the public network. The default is **Yes**.
**Note**  
Your production DB cluster might not need to be in a public subnet, because only your application servers require access to your DB cluster. If your DB cluster doesn't need to be in a public subnet, set **Publicly Accessible** to **No**.
   + **Availability Zone**: Select the Availability Zone to host the primary instance for your Aurora MySQL DB cluster. To have Aurora select an Availability Zone for you, select **No Preference**.
   + **Database Port**: Type the default port to be used when connecting to instances in the Aurora MySQL DB cluster. The default is `3306`.
**Note**  
You might be behind a corporate firewall that doesn't allow access to default ports such as the MySQL default port, 3306. In this case, provide a port value that your corporate firewall allows. Remember that port value later when you connect to the Aurora MySQL DB cluster.
   + **Encryption**: Choose **Enable Encryption** for your new Aurora MySQL DB cluster to be encrypted at rest. If you choose **Enable Encryption**, you must choose a KMS key as the **AWS KMS key** value.

     If your DB snapshot isn't encrypted, specify an encryption key to have your DB cluster encrypted at rest.

     If your DB snapshot is encrypted, specify an encryption key to have your DB cluster encrypted at rest using the specified encryption key. You can specify the encryption key used by the DB snapshot or a different key. You can't create an unencrypted DB cluster from an encrypted DB snapshot.
   + **Auto Minor Version Upgrade**: This setting doesn't apply to Aurora MySQL DB clusters.

     For more information about engine updates for Aurora MySQL, see [Database engine updates for Amazon Aurora MySQL](AuroraMySQL.Updates.md).

1. Choose **Migrate** to migrate your DB snapshot. 

1. Choose **Instances**, and then choose the arrow icon to show the DB cluster details and monitor the progress of the migration. On the details page, you can find the cluster endpoint used to connect to the primary instance of the DB cluster. For more information on connecting to an Aurora MySQL DB cluster, see [Connecting to an Amazon Aurora DB cluster](Aurora.Connecting.md). 
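The DB instance identifier constraints listed in the procedure above can be checked before you submit the form. The following Python sketch is a hypothetical helper that encodes those four rules (it doesn't check uniqueness within your account, which only the service can verify):

```python
import re

# Sketch: validate a DB instance identifier against the documented
# constraints: 1-63 alphanumeric characters or hyphens, first character
# is a letter, no trailing hyphen, no two consecutive hyphens.

def is_valid_db_identifier(name):
    if not 1 <= len(name) <= 63:
        return False
    if not re.fullmatch(r"[A-Za-z][A-Za-z0-9-]*", name):
        return False
    if name.endswith("-") or "--" in name:
        return False
    return True
```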

### AWS CLI


You can create an Aurora DB cluster from a DB snapshot of an RDS for MySQL DB instance by using the [https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-from-snapshot.html](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-from-snapshot.html) command with the following parameters:
+ `--db-cluster-identifier` – The name of the DB cluster to create.
+ `--engine aurora-mysql` – For a MySQL 5.7–compatible or 8.0–compatible DB cluster.
+ `--kms-key-id` – The AWS KMS key to optionally encrypt the DB cluster with, depending on whether your DB snapshot is encrypted.
  + If your DB snapshot isn't encrypted, specify an encryption key to have your DB cluster encrypted at rest. Otherwise, your DB cluster isn't encrypted.
  + If your DB snapshot is encrypted, specify an encryption key to have your DB cluster encrypted at rest using the specified encryption key. Otherwise, your DB cluster is encrypted at rest using the encryption key for the DB snapshot.
**Note**  
You can't create an unencrypted DB cluster from an encrypted DB snapshot.
+ `--snapshot-identifier` – The Amazon Resource Name (ARN) of the DB snapshot to migrate. For more information about Amazon RDS ARNs, see [Amazon Relational Database Service (Amazon RDS)](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#arn-syntax-rds).

When you migrate the DB snapshot by using the `restore-db-cluster-from-snapshot` command, the command creates only the DB cluster. You must then use the `create-db-instance` command to create the primary instance for the new DB cluster.

In this example, you create a MySQL 5.7–compatible DB cluster named *mydbcluster* from a DB snapshot with an ARN set to *mydbsnapshotARN*.

For Linux, macOS, or Unix:

```
aws rds restore-db-cluster-from-snapshot \
    --db-cluster-identifier mydbcluster \
    --snapshot-identifier mydbsnapshotARN \
    --engine aurora-mysql
```

For Windows:

```
aws rds restore-db-cluster-from-snapshot ^
    --db-cluster-identifier mydbcluster ^
    --snapshot-identifier mydbsnapshotARN ^
    --engine aurora-mysql
```

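If you script the migration instead of running the CLI interactively, the same parameters map directly onto the API call. The following Python sketch is a hypothetical helper; the dictionary keys follow the boto3 `restore_db_cluster_from_snapshot` parameter names, and the actual client call is left out:

```python
# Sketch: assemble the parameters for restoring an Aurora MySQL DB cluster
# from an RDS for MySQL DB snapshot. Keys follow the boto3
# restore_db_cluster_from_snapshot parameter names.

def restore_params(cluster_id, snapshot_arn, kms_key_id=None):
    params = {
        "DBClusterIdentifier": cluster_id,
        "SnapshotIdentifier": snapshot_arn,
        "Engine": "aurora-mysql",  # MySQL 5.7- or 8.0-compatible
    }
    if kms_key_id is not None:
        # Optional: encrypt the new DB cluster at rest with this KMS key.
        params["KmsKeyId"] = kms_key_id
    return params
```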

# Migrating data from an RDS for MySQL DB instance to an Amazon Aurora MySQL DB cluster by using an Aurora read replica
Migrating from RDS for MySQL to Aurora MySQL using a read replica

Aurora uses the MySQL DB engine's binary log replication functionality to create a special type of DB cluster, called an Aurora read replica, for a source RDS for MySQL DB instance. Updates made to the source RDS for MySQL DB instance are asynchronously replicated to the Aurora read replica.

We recommend using this functionality to migrate from an RDS for MySQL DB instance to an Aurora MySQL DB cluster by creating an Aurora read replica of your source RDS for MySQL DB instance. When the replica lag between the RDS for MySQL DB instance and the Aurora read replica reaches 0, you can direct your client applications to the Aurora read replica and then stop replication to make the Aurora read replica a standalone Aurora MySQL DB cluster. Be prepared for migration to take a while, roughly several hours per tebibyte (TiB) of data.
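The cutover condition in this workflow can be scripted: poll the replica lag and proceed only when it reaches 0. The Python sketch below shows only the decision logic; the lag samples are hypothetical, and in practice they would come from monitoring the replica (for example, the `AuroraBinlogReplicaLag` CloudWatch metric):

```python
# Sketch: decide when it is safe to cut over to the Aurora read replica.
# Polling, promotion, and application redirection are intentionally omitted.

def first_safe_cutover(lag_samples):
    """Return the index of the first sample where replica lag is 0 seconds,
    or None if the replica never caught up in the observed window."""
    for i, lag in enumerate(lag_samples):
        if lag == 0:
            return i
    return None
```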

For a list of AWS Regions where Aurora is available, see [Amazon Aurora](https://docs.aws.amazon.com/general/latest/gr/rande.html#aurora) in the *AWS General Reference*.

When you create an Aurora read replica of an RDS for MySQL DB instance, Amazon RDS creates a DB snapshot of your source RDS for MySQL DB instance (private to Amazon RDS, and incurring no charges). Amazon RDS then migrates the data from the DB snapshot to the Aurora read replica. After the data from the DB snapshot has been migrated to the new Aurora MySQL DB cluster, Amazon RDS starts replication between your RDS for MySQL DB instance and the Aurora MySQL DB cluster.

If your RDS for MySQL DB instance contains tables that use storage engines other than InnoDB, or that use compressed row format, you can speed up the process of creating an Aurora read replica by altering those tables to use the InnoDB storage engine and dynamic row format before you create your Aurora read replica. For more information about the process of copying a MySQL DB snapshot to an Aurora MySQL DB cluster, see [Migrating data from an RDS for MySQL DB instance to an Amazon Aurora MySQL DB cluster](AuroraMySQL.Migrating.RDSMySQL.md).

You can have only one Aurora read replica for an RDS for MySQL DB instance.

**Note**  
Replication issues can arise due to feature differences between Aurora MySQL and the MySQL database engine version of your RDS for MySQL DB instance that is the replication primary. If you encounter an error, you can find help in the [Amazon RDS community forum](https://forums.aws.amazon.com/forum.jspa?forumID=60) or by contacting AWS Support.  
You can't create an Aurora read replica if your RDS for MySQL DB instance is already the source for a cross-Region read replica.  
You can't migrate to Aurora MySQL version 3.05 and higher from some older RDS for MySQL 8.0 versions, including 8.0.11, 8.0.13, and 8.0.15. We recommend that you upgrade to RDS for MySQL version 8.0.28 before migrating.

For more information on MySQL read replicas, see [Working with read replicas of MariaDB, MySQL, and PostgreSQL DB instances](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html).

## Creating an Aurora read replica


You can create an Aurora read replica for an RDS for MySQL DB instance by using the console, the AWS CLI, or the RDS API.

### Console


**To create an Aurora read replica from a source RDS for MySQL DB instance**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**. 

1. Choose the MySQL DB instance that you want to use as the source for your Aurora read replica.

1. For **Actions**, choose **Create Aurora read replica**.

1. Choose the DB cluster specifications you want to use for the Aurora read replica, as described in the following table.     
<a name="aurora_read_replica_param_advice"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Migrating.RDSMySQL.Replica.html)

1. Choose **Create read replica**.

### AWS CLI


To create an Aurora read replica from a source RDS for MySQL DB instance, use the [https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-cluster.html](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-cluster.html) and [https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html) AWS CLI commands to create a new Aurora MySQL DB cluster. When you call the `create-db-cluster` command, include the `--replication-source-identifier` parameter to identify the Amazon Resource Name (ARN) for the source MySQL DB instance. For more information about Amazon RDS ARNs, see [Amazon Relational Database Service (Amazon RDS)](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#arn-syntax-rds).

Don't specify the master username, master password, or database name. The Aurora read replica uses the same master username, master password, and database name as the source MySQL DB instance.

For Linux, macOS, or Unix:

```
aws rds create-db-cluster --db-cluster-identifier sample-replica-cluster --engine aurora \
    --db-subnet-group-name mysubnetgroup --vpc-security-group-ids sg-c7e5b0d2 \
    --replication-source-identifier arn:aws:rds:us-west-2:123456789012:db:primary-mysql-instance
```

For Windows:

```
aws rds create-db-cluster --db-cluster-identifier sample-replica-cluster --engine aurora ^
    --db-subnet-group-name mysubnetgroup --vpc-security-group-ids sg-c7e5b0d2 ^
    --replication-source-identifier arn:aws:rds:us-west-2:123456789012:db:primary-mysql-instance
```

If you use the console to create an Aurora read replica, then Aurora automatically creates the primary instance for the Aurora read replica's DB cluster. If you use the AWS CLI to create an Aurora read replica, you must explicitly create the primary instance for your DB cluster. The primary instance is the first instance that is created in a DB cluster.

You can create a primary instance for your DB cluster by using the [https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html) AWS CLI command with the following parameters.
+ `--db-cluster-identifier`

  The name of your DB cluster.
+ `--db-instance-class`

  The name of the DB instance class to use for your primary instance.
+ `--db-instance-identifier`

  The name of your primary instance.
+ `--engine aurora`

In this example, you create a primary instance named *myreadreplicainstance* for the DB cluster named *myreadreplicacluster*, using the DB instance class specified in *myinstanceclass*.

**Example**  
For Linux, macOS, or Unix:  

```
aws rds create-db-instance \
    --db-cluster-identifier myreadreplicacluster \
    --db-instance-class myinstanceclass \
    --db-instance-identifier myreadreplicainstance \
    --engine aurora
```
For Windows:  

```
aws rds create-db-instance ^
    --db-cluster-identifier myreadreplicacluster ^
    --db-instance-class myinstanceclass ^
    --db-instance-identifier myreadreplicainstance ^
    --engine aurora
```

### RDS API


To create an Aurora read replica from a source RDS for MySQL DB instance, use the [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBCluster.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBCluster.html) and [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html) Amazon RDS API commands to create a new Aurora DB cluster and primary instance. Don't specify the master username, master password, or database name. The Aurora read replica uses the same master username, master password, and database name as the source RDS for MySQL DB instance.

You can create a new Aurora DB cluster for an Aurora read replica from a source RDS for MySQL DB instance by using the [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBCluster.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBCluster.html) Amazon RDS API command with the following parameters:
+ `DBClusterIdentifier`

  The name of the DB cluster to create.
+ `DBSubnetGroupName`

  The name of the DB subnet group to associate with this DB cluster.
+ `Engine=aurora`
+ `KmsKeyId`

  The AWS KMS key to optionally encrypt the DB cluster with, depending on whether your MySQL DB instance is encrypted.
  + If your MySQL DB instance isn't encrypted, specify an encryption key to have your DB cluster encrypted at rest. Otherwise, your DB cluster is encrypted at rest using the default encryption key for your account.
  + If your MySQL DB instance is encrypted, specify an encryption key to have your DB cluster encrypted at rest using the specified encryption key. Otherwise, your DB cluster is encrypted at rest using the encryption key for the MySQL DB instance.
**Note**  
You can't create an unencrypted DB cluster from an encrypted MySQL DB instance.
+ `ReplicationSourceIdentifier`

  The Amazon Resource Name (ARN) for the source MySQL DB instance. For more information about Amazon RDS ARNs, see [Amazon Relational Database Service (Amazon RDS)](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#arn-syntax-rds). 
+ `VpcSecurityGroupIds`

  The list of EC2 VPC security groups to associate with this DB cluster.

In this example, you create a DB cluster named *myreadreplicacluster* from a source MySQL DB instance with an ARN set to *mysqlprimaryARN*, associated with a DB subnet group named *mysubnetgroup* and a VPC security group named *mysecuritygroup*.

**Example**  

```
https://rds.us-east-1.amazonaws.com/
    ?Action=CreateDBCluster
    &DBClusterIdentifier=myreadreplicacluster
    &DBSubnetGroupName=mysubnetgroup
    &Engine=aurora
    &ReplicationSourceIdentifier=mysqlprimaryARN
    &SignatureMethod=HmacSHA256
    &SignatureVersion=4
    &Version=2014-10-31
    &VpcSecurityGroupIds=mysecuritygroup
    &X-Amz-Algorithm=AWS4-HMAC-SHA256
    &X-Amz-Credential=AKIADQKE4SARGYLE/20150927/us-east-1/rds/aws4_request
    &X-Amz-Date=20150927T164851Z
    &X-Amz-SignedHeaders=content-type;host;user-agent;x-amz-content-sha256;x-amz-date
    &X-Amz-Signature=6a8f4bd6a98f649c75ea04a6b3929ecc75ac09739588391cd7250f5280e716db
```

If you use the console to create an Aurora read replica, then Aurora automatically creates the primary instance for the Aurora read replica's DB cluster. If you use the RDS API to create an Aurora read replica, you must explicitly create the primary instance for your DB cluster. The primary instance is the first instance that is created in a DB cluster.

You can create a primary instance for your DB cluster by using the [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html) Amazon RDS API command with the following parameters:
+ `DBClusterIdentifier`

  The name of your DB cluster.
+ `DBInstanceClass`

  The name of the DB instance class to use for your primary instance.
+ `DBInstanceIdentifier`

  The name of your primary instance.
+ `Engine=aurora`

In this example, you create a primary instance named *myreadreplicainstance* for the DB cluster named *myreadreplicacluster*, using the DB instance class specified in *myinstanceclass*.

**Example**  

```
https://rds.us-east-1.amazonaws.com/
    ?Action=CreateDBInstance
    &DBClusterIdentifier=myreadreplicacluster
    &DBInstanceClass=myinstanceclass
    &DBInstanceIdentifier=myreadreplicainstance
    &Engine=aurora
    &SignatureMethod=HmacSHA256
    &SignatureVersion=4
    &Version=2014-09-01
    &X-Amz-Algorithm=AWS4-HMAC-SHA256
    &X-Amz-Credential=AKIADQKE4SARGYLE/20140424/us-east-1/rds/aws4_request
    &X-Amz-Date=20140424T194844Z
    &X-Amz-SignedHeaders=content-type;host;user-agent;x-amz-content-sha256;x-amz-date
    &X-Amz-Signature=bee4aabc750bf7dad0cd9e22b952bd6089d91e2a16592c2293e532eeaab8bc77
```

## Viewing an Aurora read replica


You can view the MySQL to Aurora MySQL replication relationships for your Aurora MySQL DB clusters by using the AWS Management Console or the AWS CLI.

### Console


**To view the primary MySQL DB instance for an Aurora read replica**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**. 

1. Choose the DB cluster for the Aurora read replica to display its details. The primary MySQL DB instance information is in the **Replication source** field.  
![\[View MySQL primary\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-repl6.png)

### AWS CLI


To view the MySQL to Aurora MySQL replication relationships for your Aurora MySQL DB clusters by using the AWS CLI, use the [https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-clusters.html](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-clusters.html) and [https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-instances.html](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-instances.html) commands. 

To determine which MySQL DB instance is the replication primary, use the [https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-clusters.html](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-clusters.html) command and specify the cluster identifier of the Aurora read replica for the `--db-cluster-identifier` option. Refer to the `ReplicationSourceIdentifier` element in the output for the ARN of the DB instance that is the replication primary. 

To determine which DB cluster is the Aurora read replica, use the [https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-instances.html](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-instances.html) command and specify the instance identifier of the MySQL DB instance for the `--db-instance-identifier` option. Refer to the `ReadReplicaDBClusterIdentifiers` element in the output for the DB cluster identifier of the Aurora read replica. 

**Example**  
For Linux, macOS, or Unix:  

```
aws rds describe-db-clusters \
    --db-cluster-identifier myreadreplicacluster
```

```
aws rds describe-db-instances \
    --db-instance-identifier mysqlprimary
```
For Windows:  

```
aws rds describe-db-clusters ^
    --db-cluster-identifier myreadreplicacluster
```

```
aws rds describe-db-instances ^
    --db-instance-identifier mysqlprimary
```
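In a script, you can extract the replication primary's ARN from the `describe-db-clusters` output. The following Python sketch is hypothetical: the sample values are made up, and it assumes the JSON shape documented in the API reference.

```python
import json
from typing import Optional

# Sample output (hypothetical values), as returned by:
#   aws rds describe-db-clusters --db-cluster-identifier myreadreplicacluster
response_json = """
{
  "DBClusters": [
    {
      "DBClusterIdentifier": "myreadreplicacluster",
      "Engine": "aurora-mysql",
      "ReplicationSourceIdentifier": "arn:aws:rds:us-east-1:123456789012:db:mysqlprimary"
    }
  ]
}
"""

def replication_source(response: dict, cluster_id: str) -> Optional[str]:
    """Return the ARN of the replication primary for the named cluster, if any."""
    for cluster in response.get("DBClusters", []):
        if cluster["DBClusterIdentifier"] == cluster_id:
            return cluster.get("ReplicationSourceIdentifier")
    return None

arn = replication_source(json.loads(response_json), "myreadreplicacluster")
print(arn)
```

The same field lookup works on output from the signed HTTP request shown earlier, because both surface the `ReplicationSourceIdentifier` element.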

## Promoting an Aurora read replica


After migration completes, you can promote the Aurora read replica to a stand-alone DB cluster using the AWS Management Console or AWS CLI.

Then you can direct your client applications to the endpoint for the Aurora read replica. For more information about Aurora endpoints, see [Amazon Aurora endpoint connections](Aurora.Overview.Endpoints.md). Promotion should complete fairly quickly, and you can read from and write to the Aurora read replica during promotion. However, you can't delete the primary MySQL DB instance or unlink the DB instance and the Aurora read replica during this time.

Before you promote your Aurora read replica, stop any transactions from being written to the source MySQL DB instance, and then wait for the replica lag on the Aurora read replica to reach 0. You can view the replica lag for an Aurora read replica by running the `SHOW SLAVE STATUS` (Aurora MySQL version 2) or `SHOW REPLICA STATUS` (Aurora MySQL version 3) statement on your Aurora read replica. Check the `Seconds_Behind_Master` (version 2) or `Seconds_Behind_Source` (version 3) value. 

You can start writing to the Aurora read replica after write transactions to the primary have stopped and replica lag is 0. If you write to the Aurora read replica before this and you modify tables that are also being modified on the MySQL primary, you risk breaking replication to Aurora. If this happens, you must delete and recreate your Aurora read replica.
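This pre-promotion check can be automated. The following Python sketch is a hypothetical helper: in practice you would feed it the row returned by `SHOW REPLICA STATUS` or `SHOW SLAVE STATUS` through a MySQL client library. It treats the replica as ready only when the lag has reached 0.

```python
from typing import Optional

def is_ready_to_promote(status: dict) -> bool:
    """Return True when the replica has fully caught up with the primary.

    `status` is one row of SHOW REPLICA STATUS (Aurora MySQL version 3) or
    SHOW SLAVE STATUS (version 2), as a column-name -> value mapping.
    """
    # Column names differ between versions; check both spellings.
    lag: Optional[int] = status.get("Seconds_Behind_Source",
                                    status.get("Seconds_Behind_Master"))
    # Lag is NULL while replication is stopped or broken -- never promote then.
    if lag is None:
        return False
    return int(lag) == 0

# Example: a replica that is still 42 seconds behind is not ready.
print(is_ready_to_promote({"Seconds_Behind_Master": 42}))   # False
print(is_ready_to_promote({"Seconds_Behind_Source": 0}))    # True
```

Poll this after stopping writes on the source; promote only once it returns `True`.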

### Console


**To promote an Aurora read replica to an Aurora DB cluster**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the DB cluster for the Aurora read replica.

1. For **Actions**, choose **Promote**.

1. Choose **Promote read replica**.

After you promote, confirm that the promotion has completed by using the following procedure.

**To confirm that the Aurora read replica was promoted**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Events**.

1. On the **Events** page, verify that there is a `Promoted Read Replica cluster to a stand-alone database cluster` event for the cluster that you promoted.

After promotion is complete, the primary MySQL DB instance and the Aurora read replica are unlinked, and you can safely delete the DB instance if you want.

### AWS CLI


To promote an Aurora read replica to a stand-alone DB cluster, use the [https://docs.aws.amazon.com/cli/latest/reference/rds/promote-read-replica-db-cluster.html](https://docs.aws.amazon.com/cli/latest/reference/rds/promote-read-replica-db-cluster.html) AWS CLI command. 

**Example**  
For Linux, macOS, or Unix:  

```
aws rds promote-read-replica-db-cluster \
    --db-cluster-identifier myreadreplicacluster
```
For Windows:  

```
aws rds promote-read-replica-db-cluster ^
    --db-cluster-identifier myreadreplicacluster
```

# Managing Amazon Aurora MySQL
Managing Aurora MySQL

The following sections discuss managing an Amazon Aurora MySQL DB cluster.

**Topics**
+ [

# Managing performance and scaling for Amazon Aurora MySQL
](AuroraMySQL.Managing.Performance.md)
+ [

# Backtracking an Aurora DB cluster
](AuroraMySQL.Managing.Backtrack.md)
+ [

# Testing Amazon Aurora MySQL using fault injection queries
](AuroraMySQL.Managing.FaultInjectionQueries.md)
+ [

# Altering tables in Amazon Aurora using Fast DDL
](AuroraMySQL.Managing.FastDDL.md)
+ [

# Displaying volume status for an Aurora MySQL DB cluster
](AuroraMySQL.Managing.VolumeStatus.md)

# Managing performance and scaling for Amazon Aurora MySQL


## Scaling Aurora MySQL DB instances


You can scale Aurora MySQL DB instances in two ways: instance scaling and read scaling. For more information about read scaling, see [Read scaling](Aurora.Managing.Performance.md#Aurora.Managing.Performance.ReadScaling).

You can scale your Aurora MySQL DB cluster by modifying the DB instance class for each DB instance in the DB cluster. Aurora MySQL supports several DB instance classes optimized for Aurora. Don't use db.t2 or db.t3 instance classes for Aurora clusters larger than 40 TB. For the specifications of the DB instance classes supported by Aurora MySQL, see [Amazon Aurora DB instance classes](Concepts.DBInstanceClass.md).

**Note**  
We recommend using the T DB instance classes only for development and test servers, or other non-production servers. For more details on the T instance classes, see [Using T instance classes for development and testing](AuroraMySQL.BestPractices.Performance.md#AuroraMySQL.BestPractices.T2Medium).

## Maximum connections to an Aurora MySQL DB instance
Maximum connections<a name="max_connections"></a>

The maximum number of connections allowed to an Aurora MySQL DB instance is determined by the `max_connections` parameter in the instance-level parameter group for the DB instance.

The following table lists the resulting default value of `max_connections` for each DB instance class available to Aurora MySQL. You can increase the maximum number of connections to your Aurora MySQL DB instance by scaling the instance up to a DB instance class with more memory, or by setting a larger value for the `max_connections` parameter in the DB parameter group for your instance, up to 16,000.

**Tip**  
If your applications frequently open and close connections, or keep a large number of long-lived connections open, we recommend that you use Amazon RDS Proxy. RDS Proxy is a fully managed, highly available database proxy that uses connection pooling to share database connections securely and efficiently. To learn more about RDS Proxy, see [Amazon RDS Proxy for Aurora](rds-proxy.md).

 For details about how Aurora Serverless v2 instances handle this parameter, see [Maximum connections for Aurora Serverless v2](aurora-serverless-v2.setting-capacity.md#aurora-serverless-v2.max-connections). 


| Instance class | max\_connections default value | 
| --- | --- | 
|  db.t2.small  |  45  | 
|  db.t2.medium  |  90  | 
|  db.t3.small  |  45  | 
|  db.t3.medium  |  90  | 
|  db.t3.large  |  135  | 
|  db.t4g.medium  |  90  | 
|  db.t4g.large  |  135  | 
|  db.r3.large  |  1000  | 
|  db.r3.xlarge  |  2000  | 
|  db.r3.2xlarge  |  3000  | 
|  db.r3.4xlarge  |  4000  | 
|  db.r3.8xlarge  |  5000  | 
|  db.r4.large  |  1000  | 
|  db.r4.xlarge  |  2000  | 
|  db.r4.2xlarge  |  3000  | 
|  db.r4.4xlarge  |  4000  | 
|  db.r4.8xlarge  |  5000  | 
|  db.r4.16xlarge  |  6000  | 
|  db.r5.large  |  1000  | 
|  db.r5.xlarge  |  2000  | 
|  db.r5.2xlarge  |  3000  | 
|  db.r5.4xlarge  |  4000  | 
|  db.r5.8xlarge  |  5000  | 
|  db.r5.12xlarge  |  6000  | 
|  db.r5.16xlarge  |  6000  | 
|  db.r5.24xlarge  |  7000  | 
| db.r6g.large | 1000 | 
| db.r6g.xlarge | 2000 | 
| db.r6g.2xlarge | 3000 | 
| db.r6g.4xlarge | 4000 | 
| db.r6g.8xlarge | 5000 | 
| db.r6g.12xlarge | 6000 | 
| db.r6g.16xlarge | 6000 | 
| db.r6i.large | 1000 | 
| db.r6i.xlarge | 2000 | 
| db.r6i.2xlarge | 3000 | 
| db.r6i.4xlarge | 4000 | 
| db.r6i.8xlarge | 5000 | 
| db.r6i.12xlarge | 6000 | 
| db.r6i.16xlarge | 6000 | 
| db.r6i.24xlarge | 7000 | 
| db.r6i.32xlarge | 7000 | 
| db.r7g.large | 1000 | 
| db.r7g.xlarge | 2000 | 
| db.r7g.2xlarge | 3000 | 
| db.r7g.4xlarge | 4000 | 
| db.r7g.8xlarge | 5000 | 
| db.r7g.12xlarge | 6000 | 
| db.r7g.16xlarge | 6000 | 
| db.r7i.large | 1000 | 
| db.r7i.xlarge | 2000 | 
| db.r7i.2xlarge | 3000 | 
| db.r7i.4xlarge | 4000 | 
| db.r7i.8xlarge | 5000 | 
| db.r7i.12xlarge | 6000 | 
| db.r7i.16xlarge | 6000 | 
| db.r7i.24xlarge | 7000 | 
| db.r7i.48xlarge | 8000 | 
| db.r8g.large | 1000 | 
| db.r8g.xlarge | 2000 | 
| db.r8g.2xlarge | 3000 | 
| db.r8g.4xlarge | 4000 | 
| db.r8g.8xlarge | 5000 | 
| db.r8g.12xlarge | 6000 | 
| db.r8g.16xlarge | 6000 | 
| db.r8g.24xlarge | 7000 | 
| db.r8g.48xlarge | 8000 | 
| db.x2g.large | 2000 | 
| db.x2g.xlarge | 3000 | 
| db.x2g.2xlarge | 4000 | 
| db.x2g.4xlarge | 5000 | 
| db.x2g.8xlarge | 6000 | 
| db.x2g.12xlarge | 7000 | 
| db.x2g.16xlarge | 7000 | 

**Tip**  
The `max_connections` calculation uses a base-2 logarithm of the `DBInstanceClassMemory` value, measured in bytes, for the selected Aurora MySQL instance class. The parameter accepts only integer values, so the decimal portion of the result is truncated. Because `DBInstanceClassMemory` reflects the memory available to the database engine rather than the instance class's full nominal memory, the computed default can be somewhat lower than a calculation based on nominal memory. For example, for db.r6g.large, a calculation based on the full 16 GiB yields roughly 1069, while the actual default is 1000.

If you create a new parameter group to customize your own default for the connection limit, you'll see that the default connection limit is derived using a formula based on the `DBInstanceClassMemory` value. As shown in the preceding table, the formula produces connection limits that increase by 1000 as the memory doubles between progressively larger R3, R4, and R5 instances, and by 45 for different memory sizes of T2 and T3 instances.

See [Specifying DB parameters](USER_ParamValuesRef.md) for more details on how `DBInstanceClassMemory` is calculated.
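The formula's behavior can be sketched in Python. The constants below are taken from the formula in the default Aurora MySQL parameter group as documented for `max_connections`; treat this as an illustration and verify the actual default in your DB parameter group.

```python
import math

def default_max_connections(db_instance_class_memory_bytes: int) -> int:
    """Default max_connections per the formula in the default Aurora MySQL
    parameter group:
      GREATEST({log(DBInstanceClassMemory/805306368,2)*45},
               {log(DBInstanceClassMemory/8187281408,2)*1000})
    The decimal portion is truncated. DBInstanceClassMemory is the memory
    available to the engine (instance memory minus system overhead), so it is
    smaller than the instance class's nominal memory.
    """
    t_term = math.log2(db_instance_class_memory_bytes / 805306368) * 45
    r_term = math.log2(db_instance_class_memory_bytes / 8187281408) * 1000
    return int(max(t_term, r_term))

# Once the second term dominates, doubling the memory adds roughly 1000
# connections, matching the steps in the preceding table.
for gib in (16, 32, 64):
    print(gib, default_max_connections(gib * 2**30))
```

This also shows why small (T-class) instances land on multiples of 45: for their memory sizes, the first term is the larger one.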

Aurora MySQL and RDS for MySQL DB instances have different amounts of memory overhead. Therefore, the `max_connections` value can be different for Aurora MySQL and RDS for MySQL DB instances that use the same instance class. The values in the table only apply to Aurora MySQL DB instances.

**Note**  
The much lower connection limits for T2 and T3 instances exist because, with Aurora, those instance classes are intended only for development and test scenarios, not for production workloads.

The default connection limits are tuned for systems that use the default values for other major memory consumers, such as the buffer pool and query cache. If you change those other settings for your cluster, consider adjusting the connection limit to account for the increase or decrease in available memory on the DB instances.

## Temporary storage limits for Aurora MySQL
Temporary storage limits

Aurora MySQL stores tables and indexes in the Aurora storage subsystem. Aurora MySQL uses separate temporary or local storage for nonpersistent temporary files and non-InnoDB temporary tables. Local storage also includes files that are used for such purposes as sorting large datasets during query processing or for index build operations. It doesn't include InnoDB temporary tables.

For more information on temporary tables in Aurora MySQL version 3, see [New temporary table behavior in Aurora MySQL version 3](ams3-temptable-behavior.md). For more information on temporary tables in version 2, see [Temporary tablespace behavior in Aurora MySQL version 2](AuroraMySQL.CompareMySQL57.md#AuroraMySQL.TempTables57).

The data and temporary files on these volumes are lost when starting and stopping the DB instance, and during host replacement.

These local storage volumes are backed by Amazon Elastic Block Store (EBS) and can be extended by using a larger DB instance class. For more information about storage, see [Amazon Aurora storage](Aurora.Overview.StorageReliability.md).

Local storage is also used for importing data from Amazon S3 using `LOAD DATA FROM S3` or `LOAD XML FROM S3`, and for exporting data to S3 using `SELECT INTO OUTFILE S3`. For more information on importing from and exporting to S3, see the following:
+ [Loading data into an Amazon Aurora MySQL DB cluster from text files in an Amazon S3 bucket](AuroraMySQL.Integrating.LoadFromS3.md)
+ [Saving data from an Amazon Aurora MySQL DB cluster into text files in an Amazon S3 bucket](AuroraMySQL.Integrating.SaveIntoS3.md)

Aurora MySQL uses separate permanent storage for error logs, general logs, slow query logs, and audit logs for most of the Aurora MySQL DB instance classes (not including burstable-performance instance class types such as db.t2, db.t3, and db.t4g). The data on this volume is retained when starting and stopping the DB instance, and during host replacement.

This permanent storage volume is also backed by Amazon EBS and has a fixed size according to the DB instance class. It can't be extended by using a larger DB instance class.

The following table shows the maximum amount of temporary and permanent storage available for each Aurora MySQL DB instance class. For more information on DB instance class support for Aurora, see [Amazon Aurora DB instance classes](Concepts.DBInstanceClass.md).


| DB instance class | Maximum temporary/local storage available (GiB) | Additional maximum storage available for log files (GiB) | 
| --- | --- | --- | 
| db.x2g.16xlarge | 1280 | 500 | 
| db.x2g.12xlarge | 960 | 500 | 
| db.x2g.8xlarge | 640 | 500 | 
| db.x2g.4xlarge | 320 | 500 | 
| db.x2g.2xlarge | 160 | 60 | 
| db.x2g.xlarge | 80 | 60 | 
| db.x2g.large | 40 | 60 | 
| db.r8g.48xlarge | 3840 | 500 | 
| db.r8g.24xlarge | 1920 | 500 | 
| db.r8g.16xlarge | 1280 | 500 | 
| db.r8g.12xlarge | 960 | 500 | 
| db.r8g.8xlarge | 640 | 500 | 
| db.r8g.4xlarge | 320 | 500 | 
| db.r8g.2xlarge | 160 | 60 | 
| db.r8g.xlarge | 80 | 60 | 
| db.r8g.large | 32 | 60 | 
| db.r7i.48xlarge | 3840 | 500 | 
| db.r7i.24xlarge | 1920 | 500 | 
| db.r7i.16xlarge | 1280 | 500 | 
| db.r7i.12xlarge | 960 | 500 | 
| db.r7i.8xlarge | 640 | 500 | 
| db.r7i.4xlarge | 320 | 500 | 
| db.r7i.2xlarge | 160 | 60 | 
| db.r7i.xlarge | 80 | 60 | 
| db.r7i.large | 32 | 60 | 
| db.r7g.16xlarge | 1280 | 500 | 
| db.r7g.12xlarge | 960 | 500 | 
| db.r7g.8xlarge | 640 | 500 | 
| db.r7g.4xlarge | 320 | 500 | 
| db.r7g.2xlarge | 160 | 60 | 
| db.r7g.xlarge | 80 | 60 | 
| db.r7g.large | 32 | 60 | 
| db.r6i.32xlarge | 2560 | 500 | 
| db.r6i.24xlarge | 1920 | 500 | 
| db.r6i.16xlarge | 1280 | 500 | 
| db.r6i.12xlarge | 960 | 500 | 
| db.r6i.8xlarge | 640 | 500 | 
| db.r6i.4xlarge | 320 | 500 | 
| db.r6i.2xlarge | 160 | 60 | 
| db.r6i.xlarge | 80 | 60 | 
| db.r6i.large | 32 | 60 | 
| db.r6g.16xlarge | 1280 | 500 | 
| db.r6g.12xlarge | 960 | 500 | 
| db.r6g.8xlarge | 640 | 500 | 
| db.r6g.4xlarge | 320 | 500 | 
| db.r6g.2xlarge | 160 | 60 | 
| db.r6g.xlarge | 80 | 60 | 
| db.r6g.large | 32 | 60 | 
| db.r5.24xlarge | 1920 | 500 | 
| db.r5.16xlarge | 1280 | 500 | 
| db.r5.12xlarge | 960 | 500 | 
| db.r5.8xlarge | 640 | 500 | 
| db.r5.4xlarge | 320 | 500 | 
| db.r5.2xlarge | 160 | 60 | 
| db.r5.xlarge | 80 | 60 | 
| db.r5.large | 32 | 60 | 
| db.r4.16xlarge | 1280 | 500 | 
| db.r4.8xlarge | 640 | 500 | 
| db.r4.4xlarge | 320 | 500 | 
| db.r4.2xlarge | 160 | 60 | 
| db.r4.xlarge | 80 | 60 | 
| db.r4.large | 32 | 60 | 
| db.t4g.large | 32 | – | 
| db.t4g.medium | 32 | – | 
| db.t3.large | 32 | – | 
| db.t3.medium | 32 | – | 
| db.t3.small | 32 | – | 
| db.t2.medium | 32 | – | 
| db.t2.small | 32 | – | 

**Important**  
 These values represent the theoretical maximum amount of free storage on each DB instance. The actual local storage available to you might be lower. Aurora uses some local storage for its management processes, and the DB instance uses some local storage even before you load any data. You can monitor the temporary storage available for a specific DB instance with the `FreeLocalStorage` CloudWatch metric, described in [Amazon CloudWatch metrics for Amazon Aurora](Aurora.AuroraMonitoring.Metrics.md). You can check the amount of free storage at the present time. You can also chart the amount of free storage over time. Monitoring the free storage over time helps you to determine whether the value is increasing or decreasing, or to find the minimum, maximum, or average values.  
(This doesn't apply to Aurora Serverless v2.)
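To make a `FreeLocalStorage` datapoint easier to interpret, you can compare it to the class maximum from the preceding table. The following Python sketch is illustrative: the table subset is copied from above, and it assumes the metric is reported in bytes.

```python
# Maximum temporary/local storage (GiB) for a few instance classes,
# copied from the preceding table.
MAX_LOCAL_STORAGE_GIB = {
    "db.r6g.large": 32,
    "db.r6g.xlarge": 80,
    "db.r6g.2xlarge": 160,
}

def free_local_storage_fraction(instance_class: str, free_bytes: int) -> float:
    """Fraction of the theoretical maximum local storage still free, given a
    FreeLocalStorage CloudWatch datapoint (assumed to be in bytes)."""
    max_bytes = MAX_LOCAL_STORAGE_GIB[instance_class] * 2**30
    return free_bytes / max_bytes

# Example: 8 GiB free on a db.r6g.large (32 GiB maximum) is 25% free.
print(free_local_storage_fraction("db.r6g.large", 8 * 2**30))   # 0.25
```

Remember that the actual usable maximum is lower than the theoretical value, as noted above, so alert on a conservative threshold.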

# Backtracking an Aurora DB cluster
Backtracking a DB cluster

With Amazon Aurora MySQL-Compatible Edition, you can backtrack a DB cluster to a specific time, without restoring data from a backup.

**Contents**
+ [

## Overview of backtracking
](#AuroraMySQL.Managing.Backtrack.Overview)
  + [

### Backtrack window
](#AuroraMySQL.Managing.Backtrack.Overview.BacktrackWindow)
  + [

### Backtracking time
](#AuroraMySQL.Managing.Backtrack.Overview.BacktrackTime)
  + [

### Backtracking limitations
](#AuroraMySQL.Managing.Backtrack.Limitations)
+ [

## Region and version availability
](#AuroraMySQL.Managing.Backtrack.Availability)
+ [

## Upgrade considerations for backtrack-enabled clusters
](#AuroraMySQL.Managing.Backtrack.Upgrade)
+ [

# Configuring backtracking for an Aurora MySQL DB cluster
](AuroraMySQL.Managing.Backtrack.Configuring.md)
+ [

# Performing a backtrack for an Aurora MySQL DB cluster
](AuroraMySQL.Managing.Backtrack.Performing0.md)
+ [

# Monitoring backtracking for an Aurora MySQL DB cluster
](AuroraMySQL.Managing.Backtrack.Monitoring.md)
+ [

## Subscribing to a backtrack event with the console
](#AuroraMySQL.Managing.Backtrack.Event.Console)
+ [

## Retrieving existing backtracks
](#AuroraMySQL.Managing.Backtrack.Retrieving)
+ [

# Disabling backtracking for an Aurora MySQL DB cluster
](AuroraMySQL.Managing.Backtrack.Disabling.md)

## Overview of backtracking
Overview of backtracking

Backtracking "rewinds" the DB cluster to the time you specify. Backtracking is not a replacement for backing up your DB cluster so that you can restore it to a point in time. However, backtracking provides the following advantages over traditional backup and restore:
+ You can easily undo mistakes. If you mistakenly perform a destructive action, such as a DELETE without a WHERE clause, you can backtrack the DB cluster to a time before the destructive action with minimal interruption of service.
+ You can backtrack a DB cluster quickly. Restoring a DB cluster to a point in time launches a new DB cluster and restores it from backup data or a DB cluster snapshot, which can take hours. Backtracking a DB cluster doesn't require a new DB cluster and rewinds the DB cluster in minutes.
+ You can explore earlier data changes. You can repeatedly backtrack a DB cluster back and forth in time to help determine when a particular data change occurred. For example, you can backtrack a DB cluster three hours and then backtrack forward in time one hour. In this case, the backtrack time is two hours before the original time.
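The back-and-forth arithmetic in the last example can be sketched with plain timestamps (the times here are hypothetical):

```python
from datetime import datetime, timedelta

original = datetime(2024, 1, 1, 12, 0)   # hypothetical current cluster time

# Backtrack three hours, then backtrack forward one hour.
after_first = original - timedelta(hours=3)
after_second = after_first + timedelta(hours=1)

# Net effect: the cluster sits two hours before the original time.
print(original - after_second)   # 2:00:00
```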

**Note**  
For information about restoring a DB cluster to a point in time, see [Overview of backing up and restoring an Aurora DB cluster](Aurora.Managing.Backups.md).

### Backtrack window


With backtracking, there is a target backtrack window and an actual backtrack window:
+ The *target backtrack window* is the amount of time you want to be able to backtrack your DB cluster. When you enable backtracking, you specify a *target backtrack window*. For example, you might specify a target backtrack window of 24 hours if you want to be able to backtrack the DB cluster one day.
+ The *actual backtrack window* is the actual amount of time you can backtrack your DB cluster, which can be smaller than the target backtrack window. The actual backtrack window is based on your workload and the storage available for storing information about database changes, called *change records.*

As you make updates to your Aurora DB cluster with backtracking enabled, you generate change records. Aurora retains change records for the target backtrack window, and you pay an hourly rate for storing them. Both the target backtrack window and the workload on your DB cluster determine the number of change records you store. The workload is the number of changes you make to your DB cluster in a given amount of time. If your workload is heavy, you store more change records in your backtrack window than you do if your workload is light.

You can think of your target backtrack window as the goal for the maximum amount of time you want to be able to backtrack your DB cluster. In most cases, you can backtrack the maximum amount of time that you specified. However, in some cases, the DB cluster can't store enough change records to backtrack the maximum amount of time, and your actual backtrack window is smaller than your target. Typically, the actual backtrack window is smaller than the target when you have an extremely heavy workload on your DB cluster. When your actual backtrack window is smaller than your target, we send you a notification.

When backtracking is enabled for a DB cluster, and you delete a table stored in the DB cluster, Aurora keeps that table in the backtrack change records. It does this so that you can revert back to a time before you deleted the table. If you don't have enough space in your backtrack window to store the table, the table might be removed from the backtrack change records eventually.

### Backtracking time


Aurora always backtracks to a time that is consistent for the DB cluster. Doing so eliminates the possibility of uncommitted transactions when the backtrack is complete. When you specify a time for a backtrack, Aurora automatically chooses the nearest possible consistent time. This approach means that the completed backtrack might not exactly match the time you specify, but you can determine the exact time for a backtrack by using the [describe-db-cluster-backtracks](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-cluster-backtracks.html) AWS CLI command. For more information, see [Retrieving existing backtracks](#AuroraMySQL.Managing.Backtrack.Retrieving).

### Backtracking limitations


The following limitations apply to backtracking:
+ Backtracking is only available for DB clusters that were created with the Backtrack feature enabled. You can't modify a DB cluster to enable the Backtrack feature. You can enable the Backtrack feature when you create a new DB cluster or restore a snapshot of a DB cluster.
+ The limit for a backtrack window is 72 hours.
+ Backtracking affects the entire DB cluster. For example, you can't selectively backtrack a single table or a single data update.
+ You can't create cross-Region read replicas from a backtrack-enabled cluster, but you can still enable binary log (binlog) replication on the cluster. If you try to backtrack a DB cluster for which binary logging is enabled, an error typically occurs unless you choose to force the backtrack. Any attempts to force a backtrack will break downstream read replicas and interfere with other operations such as blue/green deployments.
+ You can't backtrack a database clone to a time before that database clone was created. However, you can use the original database to backtrack to a time before the clone was created. For more information about database cloning, see [Cloning a volume for an Amazon Aurora DB cluster](Aurora.Managing.Clone.md).
+ Backtracking causes a brief DB instance disruption. You must stop or pause your applications before starting a backtrack operation to ensure that there are no new read or write requests. During the backtrack operation, Aurora pauses the database, closes any open connections, and drops any uncommitted reads and writes. It then waits for the backtrack operation to complete.
+ You can't restore a cross-Region snapshot of a backtrack-enabled cluster in an AWS Region that doesn't support backtracking.
+ If you perform an in-place upgrade for a backtrack-enabled cluster from Aurora MySQL version 2 to version 3, you can't backtrack to a point in time before the upgrade happened.

## Region and version availability
Region and version availability

Backtrack is not available for Aurora PostgreSQL.

Following are the supported engines and Region availability for Backtrack with Aurora MySQL.


| Region | Aurora MySQL version 3 | Aurora MySQL version 2 | 
| --- | --- | --- | 
| US East (N. Virginia) | All versions | All versions | 
| US East (Ohio) | All versions | All versions | 
| US West (N. California) | All versions | All versions | 
| US West (Oregon) | All versions | All versions | 
| Africa (Cape Town) | – | – | 
| Asia Pacific (Hong Kong) | – | – | 
| Asia Pacific (Jakarta) | – | – | 
| Asia Pacific (Malaysia) | – | – | 
| Asia Pacific (Melbourne) | – | – | 
| Asia Pacific (Mumbai) | All versions | All versions | 
| Asia Pacific (New Zealand) | – | – | 
| Asia Pacific (Osaka) | All versions | Version 2.07.3 and higher | 
| Asia Pacific (Seoul) | All versions | All versions | 
| Asia Pacific (Singapore) | All versions | All versions | 
| Asia Pacific (Sydney) | All versions | All versions | 
| Asia Pacific (Taipei) | – | – | 
| Asia Pacific (Thailand) | – | – | 
| Asia Pacific (Tokyo) | All versions | All versions | 
| Canada (Central) | All versions | All versions | 
| Canada West (Calgary) | – | – | 
| China (Beijing) | – | – | 
| China (Ningxia) | – | – | 
| Europe (Frankfurt) | All versions | All versions | 
| Europe (Ireland) | All versions | All versions | 
| Europe (London) | All versions | All versions | 
| Europe (Milan) | – | – | 
| Europe (Paris) | All versions | All versions | 
| Europe (Spain) | – | – | 
| Europe (Stockholm) | – | – | 
| Europe (Zurich) | – | – | 
| Israel (Tel Aviv) | – | – | 
| Mexico (Central) | – | – | 
| Middle East (Bahrain) | – | – | 
| Middle East (UAE) | – | – | 
| South America (São Paulo) | – | – | 
| AWS GovCloud (US-East) | – | – | 
| AWS GovCloud (US-West) | – | – | 

## Upgrade considerations for backtrack-enabled clusters


You can upgrade a backtrack-enabled DB cluster from Aurora MySQL version 2 to version 3, because all minor versions of Aurora MySQL version 3 are supported for Backtrack.

# Configuring backtracking for an Aurora MySQL DB cluster


To use the Backtrack feature, you must enable backtracking and specify a target backtrack window. Otherwise, backtracking is disabled.

For the target backtrack window, specify the amount of time that you want to be able to rewind your database using Backtrack. Aurora tries to retain enough change records to support that window of time.

## Console


You can use the console to configure backtracking when you create a new DB cluster. You can also modify a DB cluster to change the backtrack window for a backtrack-enabled cluster. If you turn off backtracking entirely for a cluster by setting the backtrack window to 0, you can't enable backtrack again for that cluster.

**Topics**
+ [

### Configuring backtracking with the console when creating a DB cluster
](#AuroraMySQL.Managing.Backtrack.Configuring.Console.Creating)
+ [

### Configuring backtrack with the console when modifying a DB cluster
](#AuroraMySQL.Managing.Backtrack.Configuring.Console.Modifying)

### Configuring backtracking with the console when creating a DB cluster
Configuring backtrack with the console at cluster creation

When you create a new Aurora MySQL DB cluster, backtracking is configured when you choose **Enable Backtrack** and specify a **Target Backtrack window** value that is greater than zero in the **Backtrack** section.

To create a DB cluster, follow the instructions in [Creating an Amazon Aurora DB cluster](Aurora.CreateInstance.md). The following image shows the **Backtrack** section.

![\[Enable Backtrack during DB cluster creation with console\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-backtrack-create.png)


When you create a new DB cluster, Aurora has no data for the DB cluster's workload. So it can't estimate a cost specifically for the new DB cluster. Instead, the console presents a typical user cost for the specified target backtrack window based on a typical workload. The typical cost is meant to provide a general reference for the cost of the Backtrack feature.

**Important**  
Your actual cost might not match the typical cost, because your actual cost is based on your DB cluster's workload.

### Configuring backtrack with the console when modifying a DB cluster
Configuring backtrack with the console at cluster modification

You can modify backtracking for a DB cluster using the console.

**Note**  
Currently, you can modify backtracking only for a DB cluster that has the Backtrack feature enabled. The **Backtrack** section doesn't appear for a DB cluster that was created with the Backtrack feature disabled or if the Backtrack feature has been disabled for the DB cluster.

**To modify backtracking for a DB cluster using the console**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. Choose **Databases**.

1. Choose the cluster that you want to modify, and choose **Modify**.

1. For **Target Backtrack window**, modify the amount of time that you want to be able to backtrack. The limit is 72 hours.  
![\[Modify Backtrack with console\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-backtrack-modify.png)

   The console shows the estimated cost for the amount of time you specified based on the DB cluster's past workload:
   + If backtracking was disabled on the DB cluster, the cost estimate is based on the `VolumeWriteIOPS` metric for the DB cluster in Amazon CloudWatch.
   + If backtracking was enabled previously on the DB cluster, the cost estimate is based on the `BacktrackChangeRecordsCreationRate` metric for the DB cluster in Amazon CloudWatch.

1. Choose **Continue**.

1. For **Scheduling of Modifications**, choose one of the following:
   + **Apply during the next scheduled maintenance window** – Wait to apply the **Target Backtrack window** modification until the next maintenance window.
   + **Apply immediately** – Apply the **Target Backtrack window** modification as soon as possible.

1. Choose **Modify cluster**.

## AWS CLI


When you create a new Aurora MySQL DB cluster using the [create-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-cluster.html) AWS CLI command, backtracking is configured when you specify a `--backtrack-window` value that is greater than zero. The `--backtrack-window` value specifies the target backtrack window. For more information, see [Creating an Amazon Aurora DB cluster](Aurora.CreateInstance.md).

You can also specify the `--backtrack-window` value using the following AWS CLI commands:
+  [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) 
+  [restore-db-cluster-from-s3](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-from-s3.html) 
+  [restore-db-cluster-from-snapshot](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-from-snapshot.html) 
+  [restore-db-cluster-to-point-in-time](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-to-point-in-time.html) 

The following procedure describes how to modify the target backtrack window for a DB cluster using the AWS CLI.

**To modify the target backtrack window for a DB cluster using the AWS CLI**
+ Call the [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) AWS CLI command and supply the following values:
  + `--db-cluster-identifier` – The name of the DB cluster.
  + `--backtrack-window` – The maximum number of seconds that you want to be able to backtrack the DB cluster.

  The following example sets the target backtrack window for `sample-cluster` to one day (86,400 seconds).

  For Linux, macOS, or Unix:

  ```
  aws rds modify-db-cluster \
      --db-cluster-identifier sample-cluster \
      --backtrack-window 86400
  ```

  For Windows:

  ```
  aws rds modify-db-cluster ^
      --db-cluster-identifier sample-cluster ^
      --backtrack-window 86400
  ```

**Note**  
Currently, you can enable backtracking only for a DB cluster that was created with the Backtrack feature enabled.
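
Because `--backtrack-window` takes a value in seconds, it's easy to miscompute the value for a window expressed in hours. The following Python sketch (illustrative only, not part of the AWS CLI) converts hours to the seconds value and enforces the 72-hour maximum before you pass the result to `modify-db-cluster`.

```python
# Illustrative helper: convert a target backtrack window in hours to the
# seconds value expected by --backtrack-window, rejecting values beyond
# the 72-hour maximum.

MAX_BACKTRACK_HOURS = 72

def backtrack_window_seconds(hours: float) -> int:
    if not 0 <= hours <= MAX_BACKTRACK_HOURS:
        raise ValueError(f"target backtrack window must be 0-{MAX_BACKTRACK_HOURS} hours")
    return int(hours * 3600)

print(backtrack_window_seconds(24))  # one day: 86400
```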

## RDS API


When you create a new Aurora MySQL DB cluster using the [CreateDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBCluster.html) Amazon RDS API operation, backtracking is configured when you specify a `BacktrackWindow` value that is greater than zero. The `BacktrackWindow` value specifies the target backtrack window for the DB cluster specified in the `DBClusterIdentifier` value. For more information, see [Creating an Amazon Aurora DB cluster](Aurora.CreateInstance.md).

You can also specify the `BacktrackWindow` value using the following API operations:
+  [ModifyDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) 
+  [RestoreDBClusterFromS3](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterFromS3.html) 
+  [RestoreDBClusterFromSnapshot](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterFromSnapshot.html) 
+  [RestoreDBClusterToPointInTime](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterToPointInTime.html) 

**Note**  
Currently, you can enable backtracking only for a DB cluster that was created with the Backtrack feature enabled.

# Performing a backtrack for an Aurora MySQL DB cluster


You can backtrack a DB cluster to a specified backtrack time stamp. If the backtrack time stamp isn't earlier than the earliest possible backtrack time, and isn't in the future, the DB cluster is backtracked to that time stamp. 

Otherwise, an error occurs. Also, if you try to backtrack a DB cluster for which binary logging is enabled, an error occurs unless you've chosen to force the backtrack. Forcing a backtrack can interfere with other operations that use binary logging.

**Important**  
Backtracking doesn't generate binlog entries for the changes that it makes. If you have binary logging enabled for the DB cluster, backtracking might not be compatible with your binlog implementation.

**Note**  
For database clones, you can't backtrack the DB cluster earlier than the date and time when the clone was created. For more information about database cloning, see [Cloning a volume for an Amazon Aurora DB cluster](Aurora.Managing.Clone.md).

## Console


The following procedure describes how to perform a backtrack operation for a DB cluster using the console.

**To perform a backtrack operation using the console**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Instances**.

1. Choose the primary instance for the DB cluster that you want to backtrack.

1. For **Actions**, choose **Backtrack DB cluster**.

1. On the **Backtrack DB cluster** page, enter the backtrack time stamp to backtrack the DB cluster to.  
![\[Backtrack DB cluster\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-backtrack-db-cluster.png)

1. Choose **Backtrack DB cluster**.

## AWS CLI


The following procedure describes how to backtrack a DB cluster using the AWS CLI.

**To backtrack a DB cluster using the AWS CLI**
+ Call the [backtrack-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/backtrack-db-cluster.html) AWS CLI command and supply the following values:
  + `--db-cluster-identifier` – The name of the DB cluster.
  + `--backtrack-to` – The backtrack time stamp to backtrack the DB cluster to, specified in ISO 8601 format.

  The following example backtracks the DB cluster `sample-cluster` to March 19, 2018, at 10 a.m.

  For Linux, macOS, or Unix:

  ```
  aws rds backtrack-db-cluster \
      --db-cluster-identifier sample-cluster \
      --backtrack-to 2018-03-19T10:00:00+00:00
  ```

  For Windows:

  ```
  aws rds backtrack-db-cluster ^
      --db-cluster-identifier sample-cluster ^
      --backtrack-to 2018-03-19T10:00:00+00:00
  ```

## RDS API


To backtrack a DB cluster using the Amazon RDS API, use the [BacktrackDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_BacktrackDBCluster.html) operation. This operation backtracks the DB cluster specified in the `DBClusterIdentifier` value to the specified time.
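
As a sketch of the same call from code, the following Python example wraps the boto3 `backtrack_db_cluster` method. The cluster identifier is a placeholder, the call itself requires AWS credentials, and the validation helper is a hypothetical convenience, not part of the API.

```python
from datetime import datetime, timezone

def validate_backtrack_to(ts: datetime) -> datetime:
    """Reject timestamps that would make BacktrackDBCluster fail outright."""
    if ts.tzinfo is None:
        raise ValueError("use a timezone-aware timestamp (ISO 8601 with offset)")
    if ts > datetime.now(timezone.utc):
        raise ValueError("the backtrack timestamp can't be in the future")
    return ts

def backtrack_cluster(rds_client, cluster_id: str, ts: datetime) -> dict:
    # rds_client is a boto3 RDS client; calling this requires AWS credentials.
    return rds_client.backtrack_db_cluster(
        DBClusterIdentifier=cluster_id,
        BacktrackTo=validate_backtrack_to(ts),
    )
```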

# Monitoring backtracking for an Aurora MySQL DB cluster

You can view backtracking information and monitor backtracking metrics for a DB cluster.

## Console


**To view backtracking information and monitor backtracking metrics using the console**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. Choose **Databases**.

1. Choose the DB cluster name to open information about it.

   The backtrack information is in the **Backtrack** section.  
![\[Backtrack details for a DB cluster\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-backtrack-details.png)

   When backtracking is enabled, the following information is available:
   + **Target window** – The current amount of time specified for the target backtrack window. The target is the maximum amount of time that you can backtrack if there is sufficient storage.
   + **Actual window** – The actual amount of time you can backtrack, which can be smaller than the target backtrack window. The actual backtrack window is based on your workload and the storage available for retaining backtrack change records.
   + **Earliest backtrack time** – The earliest possible backtrack time for the DB cluster. You can't backtrack the DB cluster to a time before the displayed time.

1. Do the following to view backtracking metrics for the DB cluster:

   1. In the navigation pane, choose **Instances**.

   1. Choose the name of the primary instance for the DB cluster to display its details.

   1. In the **CloudWatch** section, type **Backtrack** into the **CloudWatch** box to show only the Backtrack metrics.  
![\[Backtrack metrics\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-backtrack-metrics.png)

      The following metrics are displayed:
      + **Backtrack Change Records Creation Rate (Count)** – This metric shows the number of backtrack change records created over five minutes for your DB cluster. You can use this metric to estimate the backtrack cost for your target backtrack window.
      + **[Billed] Backtrack Change Records Stored (Count)** – This metric shows the actual number of backtrack change records used by your DB cluster.
      + **Backtrack Window Actual (Minutes)** – This metric shows whether there is a difference between the target backtrack window and the actual backtrack window. For example, if your target backtrack window is 2 hours (120 minutes), and this metric shows that the actual backtrack window is 100 minutes, then the actual backtrack window is smaller than the target.
      + **Backtrack Window Alert (Count)** – This metric shows how often the actual backtrack window is smaller than the target backtrack window for a given period of time.
**Note**  
The following metrics might lag behind the current time:  
**Backtrack Change Records Creation Rate (Count)**
**[Billed] Backtrack Change Records Stored (Count)**
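
As an illustration of how the alert metric relates to the others, the following Python sketch counts how many **Backtrack Window Actual** samples (in minutes) fall short of the target window. The sample values are made up.

```python
# Illustrative check mirroring the "Backtrack Window Alert" idea: given the
# target backtrack window and a series of actual-window samples in minutes,
# count how often the actual window fell short of the target.

def window_shortfalls(target_minutes: int, actual_samples: list[int]) -> int:
    return sum(1 for actual in actual_samples if actual < target_minutes)

# Target of 2 hours (120 minutes); two of the four samples fall short.
print(window_shortfalls(120, [120, 100, 120, 90]))  # 2
```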

## AWS CLI


The following procedure describes how to view backtrack information for a DB cluster using the AWS CLI.

**To view backtrack information for a DB cluster using the AWS CLI**
+ Call the [describe-db-clusters](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-clusters.html) AWS CLI command and supply the following values:
  + `--db-cluster-identifier` – The name of the DB cluster.

  The following example lists backtrack information for `sample-cluster`.

  For Linux, macOS, or Unix:

  ```
  aws rds describe-db-clusters \
      --db-cluster-identifier sample-cluster
  ```

  For Windows:

  ```
  aws rds describe-db-clusters ^
      --db-cluster-identifier sample-cluster
  ```

## RDS API


To view backtrack information for a DB cluster using the Amazon RDS API, use the [DescribeDBClusters](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBClusters.html) operation. This operation returns backtrack information for the DB cluster specified in the `DBClusterIdentifier` value.
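
To work with these values programmatically, you can read them from the `DescribeDBClusters` response. The following Python sketch parses a hand-made sample dict shaped like the response; with boto3 you would call `describe_db_clusters` instead. The field names follow the API reference.

```python
# Illustrative reading of the backtrack fields from a DescribeDBClusters
# response. The sample dict mimics the response shape; with boto3 you would
# call client.describe_db_clusters(DBClusterIdentifier="sample-cluster").

sample_response = {
    "DBClusters": [
        {
            "DBClusterIdentifier": "sample-cluster",
            "BacktrackWindow": 86400,  # target window, in seconds
            "EarliestBacktrackTime": "2018-03-19T10:00:00Z",
        }
    ]
}

cluster = sample_response["DBClusters"][0]
hours = cluster["BacktrackWindow"] / 3600
print(f"target window: {hours:.0f} hours, "
      f"earliest backtrack time: {cluster['EarliestBacktrackTime']}")
```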

## Subscribing to a backtrack event with the console

The following procedure describes how to subscribe to a backtrack event using the console. The event sends you an email or text notification when your actual backtrack window is smaller than your target backtrack window.

**To subscribe to a backtrack event using the console**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. Choose **Event subscriptions**.

1. Choose **Create event subscription**.

1. In the **Name** box, type a name for the event subscription, and ensure that **Yes** is selected for **Enabled**.

1. In the **Target** section, choose **New email topic**.

1. For **Topic name**, type a name for the topic, and for **With these recipients**, enter the email addresses or phone numbers to receive the notifications.

1. In the **Source** section, choose **Instances** for **Source type**.

1. For **Instances to include**, choose **Select specific instances**, and choose your DB instance.

1. For **Event categories to include**, choose **Select specific event categories**, and choose **backtrack**.

   Your page should look similar to the following page.  
![\[Backtrack event subscription\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-backtrack-event.png)

1. Choose **Create**.

## Retrieving existing backtracks


You can retrieve information about existing backtracks for a DB cluster. This information includes the unique identifier of the backtrack, the date and time backtracked to and from, the date and time the backtrack was requested, and the current status of the backtrack.

**Note**  
Currently, you can't retrieve existing backtracks using the console.

### AWS CLI


The following procedure describes how to retrieve existing backtracks for a DB cluster using the AWS CLI.

**To retrieve existing backtracks using the AWS CLI**
+ Call the [describe-db-cluster-backtracks](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-cluster-backtracks.html) AWS CLI command and supply the following values:
  + `--db-cluster-identifier` – The name of the DB cluster.

  The following example retrieves existing backtracks for `sample-cluster`.

  For Linux, macOS, or Unix:

  ```
  aws rds describe-db-cluster-backtracks \
      --db-cluster-identifier sample-cluster
  ```

  For Windows:

  ```
  aws rds describe-db-cluster-backtracks ^
      --db-cluster-identifier sample-cluster
  ```

### RDS API


To retrieve information about the backtracks for a DB cluster using the Amazon RDS API, use the [DescribeDBClusterBacktracks](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBClusterBacktracks.html) operation. This operation returns information about backtracks for the DB cluster specified in the `DBClusterIdentifier` value.
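
The following Python sketch shows one way to work with a `DescribeDBClusterBacktracks`-style result: find the most recently requested backtrack that completed. The field names follow the API reference (where `Status` values are lowercase strings such as `completed` and `failed`); the sample records are invented.

```python
# Illustrative parsing of a DescribeDBClusterBacktracks-style response:
# find the most recently requested backtrack that completed.
from datetime import datetime

sample_backtracks = [
    {"BacktrackIdentifier": "a1b2", "Status": "completed",
     "BacktrackRequestCreationTime": datetime(2018, 3, 19, 10, 0)},
    {"BacktrackIdentifier": "c3d4", "Status": "failed",
     "BacktrackRequestCreationTime": datetime(2018, 3, 20, 9, 0)},
]

completed = [b for b in sample_backtracks if b["Status"] == "completed"]
latest = max(completed, key=lambda b: b["BacktrackRequestCreationTime"])
print(latest["BacktrackIdentifier"])  # a1b2
```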

# Disabling backtracking for an Aurora MySQL DB cluster


You can disable the Backtrack feature for a DB cluster.

## Console


You can disable backtracking for a DB cluster using the console. After you turn off backtracking entirely for a cluster, you can't enable it again for that cluster.

**To disable the Backtrack feature for a DB cluster using the console**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. Choose **Databases**.

1. Choose the cluster you want to modify, and choose **Modify**.

1. In the **Backtrack** section, choose **Disable Backtrack**.

1. Choose **Continue**.

1. For **Scheduling of Modifications**, choose one of the following:
   + **Apply during the next scheduled maintenance window** – Wait to apply the modification until the next maintenance window.
   + **Apply immediately** – Apply the modification as soon as possible.

1. Choose **Modify Cluster**.

## AWS CLI


You can disable the Backtrack feature for a DB cluster using the AWS CLI by setting the target backtrack window to `0` (zero). After you turn off backtracking entirely for a cluster, you can't enable it again for that cluster.

**To modify the target backtrack window for a DB cluster using the AWS CLI**
+ Call the [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) AWS CLI command and supply the following values:
  + `--db-cluster-identifier` – The name of the DB cluster.
  + `--backtrack-window` – Specify `0` to turn off backtracking.

  The following example disables the Backtrack feature for `sample-cluster` by setting `--backtrack-window` to `0`.

  For Linux, macOS, or Unix:

  ```
  aws rds modify-db-cluster \
      --db-cluster-identifier sample-cluster \
      --backtrack-window 0
  ```

  For Windows:

  ```
  aws rds modify-db-cluster ^
      --db-cluster-identifier sample-cluster ^
      --backtrack-window 0
  ```

## RDS API


To disable the Backtrack feature for a DB cluster using the Amazon RDS API, use the [ModifyDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) operation. Set the `BacktrackWindow` value to `0` (zero), and specify the DB cluster in the `DBClusterIdentifier` value. After you turn off backtracking entirely for a cluster, you can't enable it again for that cluster.
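
As a sketch in code, the following Python function makes the same call through the boto3 `modify_db_cluster` method. The client argument is a boto3 RDS client (credentials required), and the cluster name is a placeholder.

```python
# Sketch of disabling backtracking through the API: set BacktrackWindow to 0.
# Remember that after backtracking is turned off entirely, it can't be
# enabled again for that cluster.

def disable_backtrack(rds_client, cluster_id: str) -> dict:
    return rds_client.modify_db_cluster(
        DBClusterIdentifier=cluster_id,
        BacktrackWindow=0,
        ApplyImmediately=True,
    )
```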

# Testing Amazon Aurora MySQL using fault injection queries


You can test the fault tolerance of your Aurora MySQL DB cluster by using fault injection queries. Fault injection queries are issued as SQL commands to an Amazon Aurora instance. They let you schedule a simulated occurrence of one of the following events:
+ A crash of a writer or reader DB instance
+ A failure of an Aurora Replica
+ A disk failure
+ Disk congestion

When a fault injection query specifies a crash, it forces a crash of the Aurora MySQL DB instance. The other fault injection queries result in simulations of failure events, but don't cause the event to occur. When you submit a fault injection query, you also specify the amount of time for the failure event simulation to last.

You can submit a fault injection query to one of your Aurora Replica instances by connecting to the endpoint for the Aurora Replica. For more information, see [Amazon Aurora endpoint connections](Aurora.Overview.Endpoints.md).

Running fault injection queries requires master user privileges. For more information, see [Master user account privileges](UsingWithRDS.MasterAccounts.md).

## Testing an instance crash


You can force a crash of an Amazon Aurora instance using the `ALTER SYSTEM CRASH` fault injection query.

This fault injection query doesn't cause a failover. If you want to test a failover, you can choose the **Failover** instance action for your DB cluster in the RDS console, or use the [failover-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/failover-db-cluster.html) AWS CLI command or the [FailoverDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_FailoverDBCluster.html) RDS API operation.

### Syntax


```
ALTER SYSTEM CRASH [ INSTANCE | DISPATCHER | NODE ];
```

### Options


This fault injection query takes one of the following crash types:
+ **`INSTANCE`** — A crash of the MySQL-compatible database for the Amazon Aurora instance is simulated.
+ **`DISPATCHER`** — A crash of the dispatcher on the writer instance for the Aurora DB cluster is simulated. The *dispatcher* writes updates to the cluster volume for an Amazon Aurora DB cluster.
+ **`NODE`** — A crash of both the MySQL-compatible database and the dispatcher for the Amazon Aurora instance is simulated. For this fault injection simulation, the cache is also deleted.

The default crash type is `INSTANCE`.

## Testing an Aurora replica failure


You can simulate the failure of an Aurora Replica using the `ALTER SYSTEM SIMULATE READ REPLICA FAILURE` fault injection query.

An Aurora Replica failure blocks all requests from the writer instance to an Aurora Replica or to all Aurora Replicas in the DB cluster for a specified time interval. When the time interval completes, the affected Aurora Replicas are automatically synchronized with the writer instance.

### Syntax


```
ALTER SYSTEM SIMULATE percentage_of_failure PERCENT READ REPLICA FAILURE
    [ TO ALL | TO "replica name" ]
    FOR INTERVAL quantity { YEAR | QUARTER | MONTH | WEEK | DAY | HOUR | MINUTE | SECOND };
```

### Options


This fault injection query takes the following parameters:
+ **`percentage_of_failure`** — The percentage of requests to block during the failure event. This value can be a double between 0 and 100. If you specify 0, then no requests are blocked. If you specify 100, then all requests are blocked.
+ **Failure type** — The type of failure to simulate. Specify `TO ALL` to simulate failures for all Aurora Replicas in the DB cluster. Specify `TO` and the name of the Aurora Replica to simulate a failure of a single Aurora Replica. The default failure type is `TO ALL`.
+ **`quantity`** — The amount of time for which to simulate the Aurora Replica failure. The interval is an amount followed by a time unit. The simulation will occur for that amount of the specified unit. For example, `20 MINUTE` will result in the simulation running for 20 minutes.
**Note**  
Take care when specifying the time interval for your Aurora Replica failure event. If you specify too long of a time interval, and your writer instance writes a large amount of data during the failure event, then your Aurora DB cluster might assume that your Aurora Replica has crashed and replace it.

## Testing a disk failure


You can simulate a disk failure for an Aurora DB cluster using the `ALTER SYSTEM SIMULATE DISK FAILURE` fault injection query.

During a disk failure simulation, the Aurora DB cluster randomly marks disk segments as faulting. Requests to those segments will be blocked for the duration of the simulation.

### Syntax


```
ALTER SYSTEM SIMULATE percentage_of_failure PERCENT DISK FAILURE
    [ IN DISK index | NODE index ]
    FOR INTERVAL quantity { YEAR | QUARTER | MONTH | WEEK | DAY | HOUR | MINUTE | SECOND };
```

### Options


This fault injection query takes the following parameters:
+ **`percentage_of_failure`** — The percentage of the disk to mark as faulting during the failure event. This value can be a double between 0 and 100. If you specify 0, then none of the disk is marked as faulting. If you specify 100, then the entire disk is marked as faulting.
+ **`DISK index`** — A specific logical block of data to simulate the failure event for. If you exceed the range of available logical blocks of data, you will receive an error that tells you the maximum index value that you can specify. For more information, see [Displaying volume status for an Aurora MySQL DB cluster](AuroraMySQL.Managing.VolumeStatus.md).
+ **`NODE index`** — A specific storage node to simulate the failure event for. If you exceed the range of available storage nodes, you will receive an error that tells you the maximum index value that you can specify. For more information, see [Displaying volume status for an Aurora MySQL DB cluster](AuroraMySQL.Managing.VolumeStatus.md).
+ **`quantity`** — The amount of time for which to simulate the disk failure. The interval is an amount followed by a time unit. The simulation will occur for that amount of the specified unit. For example, `20 MINUTE` will result in the simulation running for 20 minutes.

## Testing disk congestion


You can simulate disk congestion for an Aurora DB cluster using the `ALTER SYSTEM SIMULATE DISK CONGESTION` fault injection query.

During a disk congestion simulation, the Aurora DB cluster randomly marks disk segments as congested. Requests to those segments will be delayed between the specified minimum and maximum delay time for the duration of the simulation.

### Syntax


```
ALTER SYSTEM SIMULATE percentage_of_failure PERCENT DISK CONGESTION
    BETWEEN minimum AND maximum MILLISECONDS
    [ IN DISK index | NODE index ]
    FOR INTERVAL quantity { YEAR | QUARTER | MONTH | WEEK | DAY | HOUR | MINUTE | SECOND };
```

### Options


This fault injection query takes the following parameters:
+ **`percentage_of_failure`** — The percentage of the disk to mark as congested during the failure event. This value can be a double between 0 and 100. If you specify 0, then none of the disk is marked as congested. If you specify 100, then the entire disk is marked as congested.
+ **`DISK index` or `NODE index`** — A specific disk or node to simulate the failure event for. If you exceed the range of indexes for the disk or node, you receive an error that tells you the maximum index value that you can specify.
+ **`minimum` and `maximum`** — The minimum and maximum amount of congestion delay, in milliseconds. Disk segments marked as congested are delayed for a random amount of time within the range of the minimum and maximum number of milliseconds for the duration of the simulation.
+ **`quantity`** — The amount of time for which to simulate the disk congestion. The interval is an amount followed by a time unit. The simulation will occur for that amount of the specified time unit. For example, `20 MINUTE` will result in the simulation running for 20 minutes.

# Altering tables in Amazon Aurora using Fast DDL


Amazon Aurora includes optimizations to run an `ALTER TABLE` operation in place, nearly instantaneously. The operation completes without requiring the table to be copied and without having a material impact on other DML statements. Because the operation doesn't consume temporary storage for a table copy, it makes DDL statements practical even for large tables on small instance classes.

Aurora MySQL version 3 is compatible with the MySQL 8.0 feature called instant DDL. Aurora MySQL version 2 uses a different implementation called Fast DDL.

**Topics**
+ [

## Instant DDL (Aurora MySQL version 3)
](#AuroraMySQL.mysql80-instant-ddl)
+ [

## Fast DDL (Aurora MySQL version 2)
](#AuroraMySQL.Managing.FastDDL-v2)

## Instant DDL (Aurora MySQL version 3)
<a name="instant_ddl"></a>

 The optimization performed by Aurora MySQL version 3 to improve the efficiency of some DDL operations is called instant DDL. 

 Aurora MySQL version 3 is compatible with the instant DDL from community MySQL 8.0. You perform an instant DDL operation by using the clause `ALGORITHM=INSTANT` with the `ALTER TABLE` statement. For syntax and usage details about instant DDL, see [ALTER TABLE](https://dev.mysql.com/doc/refman/8.0/en/alter-table.html) and [Online DDL Operations](https://dev.mysql.com/doc/refman/8.0/en/innodb-online-ddl-operations.html) in the MySQL documentation. 

 The following examples demonstrate the instant DDL feature. The `ALTER TABLE` statements add columns and change default column values. The examples include both regular and virtual columns, and both regular and partitioned tables. At each step, you can see the results by issuing `SHOW CREATE TABLE` and `DESCRIBE` statements. 

```
mysql> CREATE TABLE t1 (a INT, b INT, KEY(b)) PARTITION BY KEY(b) PARTITIONS 6;
Query OK, 0 rows affected (0.02 sec)

mysql> ALTER TABLE t1 RENAME TO t2, ALGORITHM = INSTANT;
Query OK, 0 rows affected (0.01 sec)

mysql> ALTER TABLE t2 ALTER COLUMN b SET DEFAULT 100, ALGORITHM = INSTANT;
Query OK, 0 rows affected (0.00 sec)

mysql> ALTER TABLE t2 ALTER COLUMN b DROP DEFAULT, ALGORITHM = INSTANT;
Query OK, 0 rows affected (0.01 sec)

mysql> ALTER TABLE t2 ADD COLUMN c ENUM('a', 'b', 'c'), ALGORITHM = INSTANT;
Query OK, 0 rows affected (0.01 sec)

mysql> ALTER TABLE t2 MODIFY COLUMN c ENUM('a', 'b', 'c', 'd', 'e'), ALGORITHM = INSTANT;
Query OK, 0 rows affected (0.01 sec)

mysql> ALTER TABLE t2 ADD COLUMN (d INT GENERATED ALWAYS AS (a + 1) VIRTUAL), ALGORITHM = INSTANT;
Query OK, 0 rows affected (0.02 sec)

mysql> ALTER TABLE t2 ALTER COLUMN a SET DEFAULT 20,
    ->   ALTER COLUMN b SET DEFAULT 200, ALGORITHM = INSTANT;
Query OK, 0 rows affected (0.01 sec)

mysql> CREATE TABLE t3 (a INT, b INT) PARTITION BY LIST(a)(
    ->   PARTITION mypart1 VALUES IN (1,3,5),
    ->   PARTITION MyPart2 VALUES IN (2,4,6)
    -> );
Query OK, 0 rows affected (0.03 sec)

mysql> ALTER TABLE t3 ALTER COLUMN a SET DEFAULT 20, ALTER COLUMN b SET DEFAULT 200, ALGORITHM = INSTANT;
Query OK, 0 rows affected (0.01 sec)

mysql> CREATE TABLE t4 (a INT, b INT) PARTITION BY RANGE(a)
    ->   (PARTITION p0 VALUES LESS THAN(100), PARTITION p1 VALUES LESS THAN(1000),
    ->   PARTITION p2 VALUES LESS THAN MAXVALUE);
Query OK, 0 rows affected (0.05 sec)

mysql> ALTER TABLE t4 ALTER COLUMN a SET DEFAULT 20,
    ->   ALTER COLUMN b SET DEFAULT 200, ALGORITHM = INSTANT;
Query OK, 0 rows affected (0.01 sec)

/* Sub-partitioning example */
mysql> CREATE TABLE ts (id INT, purchased DATE, a INT, b INT)
    ->   PARTITION BY RANGE( YEAR(purchased) )
    ->     SUBPARTITION BY HASH( TO_DAYS(purchased) )
    ->     SUBPARTITIONS 2 (
    ->       PARTITION p0 VALUES LESS THAN (1990),
    ->       PARTITION p1 VALUES LESS THAN (2000),
    ->       PARTITION p2 VALUES LESS THAN MAXVALUE
    ->    );
Query OK, 0 rows affected (0.10 sec)

mysql> ALTER TABLE ts ALTER COLUMN a SET DEFAULT 20,
    ->   ALTER COLUMN b SET DEFAULT 200, ALGORITHM = INSTANT;
Query OK, 0 rows affected (0.01 sec)
```

## Fast DDL (Aurora MySQL version 2)


 <a name="fast_ddl"></a>

Fast DDL in Aurora MySQL is an optimization designed to improve the performance of certain schema changes, such as adding or dropping columns, by reducing downtime and resource usage. It allows these operations to be completed more efficiently compared to traditional DDL methods.

**Important**  
Currently, you must enable Aurora lab mode to use Fast DDL. For information about enabling lab mode, see [Amazon Aurora MySQL lab mode](AuroraMySQL.Updates.LabMode.md).  
The Fast DDL optimization was initially introduced in lab mode on Aurora MySQL version 2 to enhance the efficiency of certain DDL operations. In Aurora MySQL version 3, lab mode has been discontinued, and Fast DDL has been replaced by the MySQL 8.0 Instant DDL feature.

In MySQL, many data definition language (DDL) operations have a significant performance impact.

For example, suppose that you use an `ALTER TABLE` operation to add a column to a table. Depending on the algorithm specified for the operation, this operation can involve the following:
+ Creating a full copy of the table
+ Creating a temporary table to process concurrent data manipulation language (DML) operations
+ Rebuilding all indexes for the table
+ Applying table locks while applying concurrent DML changes
+ Slowing concurrent DML throughput

This performance impact can be particularly challenging in environments with large tables or high transaction volumes. Fast DDL helps mitigate these challenges by optimizing schema changes, which enables quicker and less resource-intensive operations.

### Fast DDL limitations


Currently, Fast DDL has the following limitations:
+ Fast DDL only supports adding nullable columns, without default values, to the end of an existing table.
+ Fast DDL doesn't work for partitioned tables.
+ Fast DDL doesn't work for InnoDB tables that use the REDUNDANT row format.
+  Fast DDL doesn't work for tables with full-text search indexes. 
+ Fast DDL isn't used if the maximum possible record size for the DDL operation is too large. A record size is too large if it's greater than half the page size. The maximum size of a record is computed by adding the maximum sizes of all columns. For variable-sized columns, following InnoDB standards, extern bytes aren't included in the computation.
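
The record-size rule can be sketched as a simple check. The following Python example is purely illustrative: the column sizes are made-up maximum byte widths, not a model of InnoDB's actual record format.

```python
# Illustrative check of the record-size rule above: Fast DDL is skipped when
# the maximum possible record size exceeds half the page size.

PAGE_SIZE = 16 * 1024  # default InnoDB page size, in bytes

def fast_ddl_size_ok(max_column_sizes: list[int], page_size: int = PAGE_SIZE) -> bool:
    return sum(max_column_sizes) <= page_size // 2

print(fast_ddl_size_ok([4, 255, 1024]))   # small record: True
print(fast_ddl_size_ok([4, 8000, 2000]))  # more than 8 KB: False
```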

### Fast DDL syntax


```
ALTER TABLE tbl_name ADD COLUMN col_name column_definition
```

This statement takes the following options:
+ **`tbl_name`** — The name of the table to be modified.
+ **`col_name`** — The name of the column to be added.
+ **`column_definition`** — The definition of the column to be added.
**Note**  
You must specify a nullable column definition without a default value. Otherwise, Fast DDL isn't used.

### Fast DDL examples


 The following examples demonstrate the speedup from Fast DDL operations. The first SQL example runs `ALTER TABLE` statements on a large table without using Fast DDL. This operation takes substantial time. A CLI example shows how to enable Fast DDL for the cluster. Then another SQL example runs the same `ALTER TABLE` statements on an identical table. With Fast DDL enabled, the operation is very fast. 

 This example uses the `ORDERS` table from the TPC-H benchmark, containing 150 million rows. This cluster intentionally uses a relatively small instance class, to demonstrate how long `ALTER TABLE` statements can take when you can't use Fast DDL. The example creates a clone of the original table containing identical data. Checking the `aurora_lab_mode` setting confirms that the cluster can't use Fast DDL, because lab mode isn't enabled. Then `ALTER TABLE ADD COLUMN` statements take substantial time to add new columns at the end of the table. 

```
mysql> create table orders_regular_ddl like orders;
Query OK, 0 rows affected (0.06 sec)

mysql> insert into orders_regular_ddl select * from orders;
Query OK, 150000000 rows affected (1 hour 1 min 25.46 sec)

mysql> select @@aurora_lab_mode;
+-------------------+
| @@aurora_lab_mode |
+-------------------+
|                 0 |
+-------------------+

mysql> ALTER TABLE orders_regular_ddl ADD COLUMN o_refunded boolean;
Query OK, 0 rows affected (40 min 31.41 sec)

mysql> ALTER TABLE orders_regular_ddl ADD COLUMN o_coverletter varchar(512);
Query OK, 0 rows affected (40 min 44.45 sec)
```

 This example does the same preparation of a large table as the previous example. However, you can't simply enable lab mode within an interactive SQL session. That setting must be enabled in a custom parameter group. Doing so requires switching out of the `mysql` session and running some AWS CLI commands or using the AWS Management Console. 

```
mysql> create table orders_fast_ddl like orders;
Query OK, 0 rows affected (0.02 sec)

mysql> insert into orders_fast_ddl select * from orders;
Query OK, 150000000 rows affected (58 min 3.25 sec)

mysql> set aurora_lab_mode=1;
ERROR 1238 (HY000): Variable 'aurora_lab_mode' is a read only variable
```

 Enabling lab mode for the cluster requires some work with a parameter group. This AWS CLI example uses a cluster parameter group, to ensure that all DB instances in the cluster use the same value for the lab mode setting. 

```
$ aws rds create-db-cluster-parameter-group \
  --db-parameter-group-family aurora5.7 \
  --db-cluster-parameter-group-name lab-mode-enabled-57 --description 'TBD'
$ aws rds describe-db-cluster-parameters \
  --db-cluster-parameter-group-name lab-mode-enabled-57 \
  --query '*[*].[ParameterName,ParameterValue]' \
  --output text | grep aurora_lab_mode
aurora_lab_mode 0
$ aws rds modify-db-cluster-parameter-group \
  --db-cluster-parameter-group-name lab-mode-enabled-57 \
  --parameters ParameterName=aurora_lab_mode,ParameterValue=1,ApplyMethod=pending-reboot
{
    "DBClusterParameterGroupName": "lab-mode-enabled-57"
}

# Assign the custom parameter group to the cluster that's going to use Fast DDL.
$ aws rds modify-db-cluster --db-cluster-identifier tpch100g \
  --db-cluster-parameter-group-name lab-mode-enabled-57
{
  "DBClusterIdentifier": "tpch100g",
  "DBClusterParameterGroup": "lab-mode-enabled-57",
  "Engine": "aurora-mysql",
  "EngineVersion": "5.7.mysql_aurora.2.10.2",
  "Status": "available"
}

# Reboot the primary instance for the cluster tpch100g:
$ aws rds reboot-db-instance --db-instance-identifier instance-2020-12-22-5208
{
  "DBInstanceIdentifier": "instance-2020-12-22-5208",
  "DBInstanceStatus": "rebooting"
}

$ aws rds describe-db-clusters --db-cluster-identifier tpch100g \
  --query '*[].[DBClusterParameterGroup]' --output text
lab-mode-enabled-57

$ aws rds describe-db-cluster-parameters \
  --db-cluster-parameter-group-name lab-mode-enabled-57 \
  --query '*[*].{ParameterName:ParameterName,ParameterValue:ParameterValue}' \
  --output text | grep aurora_lab_mode
aurora_lab_mode 1
```

 The following example shows the remaining steps after the parameter group change takes effect. It tests the `aurora_lab_mode` setting to make sure that the cluster can use Fast DDL. Then it runs `ALTER TABLE` statements to add columns to the end of another large table. This time, the statements finish very quickly. 

```
mysql> select @@aurora_lab_mode;
+-------------------+
| @@aurora_lab_mode |
+-------------------+
|                 1 |
+-------------------+

mysql> ALTER TABLE orders_fast_ddl ADD COLUMN o_refunded boolean;
Query OK, 0 rows affected (1.51 sec)

mysql> ALTER TABLE orders_fast_ddl ADD COLUMN o_coverletter varchar(512);
Query OK, 0 rows affected (0.40 sec)
```

# Displaying volume status for an Aurora MySQL DB cluster
Displaying volume status for an Aurora DB cluster

In Amazon Aurora, a DB cluster volume consists of a collection of logical blocks. Each of these represents 10 gigabytes of allocated storage. These blocks are called *protection groups*.

The data in each protection group is replicated across six physical storage devices, called *storage nodes*. These storage nodes are allocated across three Availability Zones (AZs) in the AWS Region where the DB cluster resides. In turn, each storage node contains one or more logical blocks of data for the DB cluster volume. For more information about protection groups and storage nodes, see [Introducing the Aurora storage engine](https://aws.amazon.com/blogs/database/introducing-the-aurora-storage-engine/) on the AWS Database Blog.

You can simulate the failure of an entire storage node, or a single logical block of data within a storage node. To do so, you use the `ALTER SYSTEM SIMULATE DISK FAILURE` fault injection statement. For the statement, you specify the index value of a specific logical block of data or storage node. However, if you specify an index value greater than the number of logical blocks of data or storage nodes used by the DB cluster volume, the statement returns an error. For more information about fault injection queries, see [Testing Amazon Aurora MySQL using fault injection queries](AuroraMySQL.Managing.FaultInjectionQueries.md).

You can avoid that error by using the `SHOW VOLUME STATUS` statement. The statement returns two server status variables, `Disks` and `Nodes`. These variables represent the total number of logical blocks of data and storage nodes, respectively, for the DB cluster volume.

## Syntax


```
SHOW VOLUME STATUS
```

## Example


The following example illustrates a typical `SHOW VOLUME STATUS` result.

```
mysql> SHOW VOLUME STATUS;
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Disks         | 96    |
| Nodes         | 74    |
+---------------+-------+
```
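Because fault injection fails when you pass an index that's out of range, you can derive the valid index ranges directly from the `Disks` and `Nodes` values. The following sketch uses the values from the example output above; the helper function is hypothetical, not an Aurora API.

```python
# Hypothetical sketch: compute the valid index ranges for
# ALTER SYSTEM SIMULATE DISK FAILURE from SHOW VOLUME STATUS output.
def valid_fault_injection_indexes(disks, nodes):
    """Index values must be less than the reported totals, so the valid
    ranges are 0..disks-1 for logical blocks and 0..nodes-1 for nodes."""
    return range(disks), range(nodes)

# Values from the example SHOW VOLUME STATUS output: Disks=96, Nodes=74.
disk_indexes, node_indexes = valid_fault_injection_indexes(disks=96, nodes=74)
print(max(disk_indexes), max(node_indexes))  # 95 73
```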

# Tuning Aurora MySQL


Wait events and thread states are important tuning tools for Aurora MySQL. If you can find out why sessions are waiting for resources and what they are doing, you are better able to reduce bottlenecks. You can use the information in this section to find possible causes and corrective actions.

Amazon DevOps Guru for RDS can proactively determine whether your Aurora MySQL databases are experiencing problematic conditions that might cause bigger problems later. Amazon DevOps Guru for RDS publishes an explanation and recommendations for corrective actions in a proactive insight. This section contains insights for common problems.

**Important**  
The wait events and thread states in this section are specific to Aurora MySQL. Use the information in this section to tune only Amazon Aurora, not Amazon RDS for MySQL.  
Some wait events in this section have no analogs in the open source versions of these database engines. Other wait events have the same names as events in open source engines, but behave differently. For example, Amazon Aurora storage works differently from open source storage, so storage-related wait events indicate different resource conditions.

**Topics**
+ [

# Essential concepts for Aurora MySQL tuning
](AuroraMySQL.Managing.Tuning.concepts.md)
+ [

# Tuning Aurora MySQL with wait events
](AuroraMySQL.Managing.Tuning.wait-events.md)
+ [

# Tuning Aurora MySQL with thread states
](AuroraMySQL.Managing.Tuning.thread-states.md)
+ [

# Tuning Aurora MySQL with Amazon DevOps Guru proactive insights
](MySQL.Tuning.proactive-insights.md)

# Essential concepts for Aurora MySQL tuning


Before you tune your Aurora MySQL database, make sure to learn what wait events and thread states are and why they occur. Also review the basic memory and disk architecture of Aurora MySQL when using the InnoDB storage engine. For a helpful architecture diagram, see the [MySQL Reference Manual](https://dev.mysql.com/doc/refman/8.0/en/innodb-architecture.html).

**Topics**
+ [

## Aurora MySQL wait events
](#AuroraMySQL.Managing.Tuning.concepts.waits)
+ [

## Aurora MySQL thread states
](#AuroraMySQL.Managing.Tuning.concepts.thread-states)
+ [

## Aurora MySQL memory
](#AuroraMySQL.Managing.Tuning.concepts.memory)
+ [

## Aurora MySQL processes
](#AuroraMySQL.Managing.Tuning.concepts.processes)

## Aurora MySQL wait events


A *wait event* indicates a resource for which a session is waiting. For example, the wait event `io/socket/sql/client_connection` indicates that a thread is in the process of handling a new connection. Typical resources that a session waits for include the following:
+ Single-threaded access to a buffer, for example, when a session is attempting to modify a buffer
+ A row that is currently locked by another session
+ A data file read
+ A log file write

For example, to satisfy a query, the session might perform a full table scan. If the data isn't already in memory, the session waits for the disk I/O to complete. When the buffers are read into memory, the session might need to wait because other sessions are accessing the same buffers. The database records the waits by using a predefined wait event. These events are grouped into categories.

A wait event doesn't by itself show a performance problem. For example, if requested data isn't in memory, reading data from disk is necessary. If one session locks a row for an update, another session waits for the row to be unlocked so that it can update it. A commit requires waiting for the write to a log file to complete. Waits are integral to the normal functioning of a database. 

Large numbers of wait events typically show a performance problem. In such cases, you can use wait event data to determine where sessions are spending time. For example, if a report that typically runs in minutes now runs for hours, you can identify the wait events that contribute the most to total wait time. If you can determine the causes of the top wait events, you can sometimes make changes that improve performance. For example, if your session is waiting on a row that has been locked by another session, you can end the locking session. 
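The ranking step described above can be sketched as a simple aggregation. In practice you would pull these figures from Performance Insights; the event names below come from the wait-event table later in this section, and the wait times are made up for illustration.

```python
# Illustrative sketch: given sampled (wait_event, wait_seconds) pairs,
# rank the events that contribute the most to total wait time.
from collections import Counter

samples = [
    ("synch/cond/innodb/row_lock_wait", 420.0),
    ("io/table/sql/handler", 180.0),
    ("synch/cond/innodb/row_lock_wait", 310.0),
    ("io/socket/sql/client_connection", 15.0),
]

totals = Counter()
for event, seconds in samples:
    totals[event] += seconds

# The top contributors are where tuning effort pays off first.
for event, seconds in totals.most_common(2):
    print(f"{event}: {seconds:.0f}s")
```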

## Aurora MySQL thread states


A *general thread state* is a `State` value that is associated with general query processing. For example, the thread state `sending data` indicates that a thread is reading and filtering rows for a query to determine the correct result set. 

You can use thread states to tune Aurora MySQL in a similar fashion to how you use wait events. For example, frequent occurrences of `sending data` usually indicate that a query isn't using an index. For more information about thread states, see [General Thread States](https://dev.mysql.com/doc/refman/5.7/en/general-thread-states.html) in the *MySQL Reference Manual*.

When you use Performance Insights, one of the following conditions is true:
+ Performance Schema is turned on – Aurora MySQL shows wait events rather than the thread state.
+ Performance Schema isn't turned on – Aurora MySQL shows the thread state.

We recommend that you configure the Performance Schema for automatic management. The Performance Schema provides additional insights and better tools to investigate potential performance problems. For more information, see [Overview of the Performance Schema for Performance Insights on Aurora MySQL](USER_PerfInsights.EnableMySQL.md).

## Aurora MySQL memory


In Aurora MySQL, the most important memory areas are the buffer pool and log buffer.

**Topics**
+ [

### Buffer pool
](#AuroraMySQL.Managing.Tuning.concepts.memory.buffer-pool)

### Buffer pool


The *buffer pool* is the shared memory area where Aurora MySQL caches table and index data. Queries can access frequently used data directly from memory without reading from disk.

The buffer pool is structured as a linked list of pages. A *page* can hold multiple rows. Aurora MySQL uses a least recently used (LRU) algorithm to age pages out of the pool.
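The LRU behavior described above can be sketched with a small cache. This is a deliberate simplification: InnoDB's actual algorithm is a midpoint-insertion variant of LRU, and the page contents here are placeholders.

```python
# Minimal LRU sketch of the buffer pool idea: recently used pages stay
# cached; the least recently used page is evicted when the pool is full.
from collections import OrderedDict

class BufferPool:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # page_id -> page data

    def read_page(self, page_id):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)   # cache hit: mark as recently used
            return self.pages[page_id]
        data = f"rows-of-page-{page_id}"      # cache miss: "read from disk"
        self.pages[page_id] = data
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)    # evict the least recently used page
        return data

pool = BufferPool(capacity=2)
pool.read_page(1)
pool.read_page(2)
pool.read_page(1)      # touch page 1, so page 2 becomes least recently used
pool.read_page(3)      # evicts page 2
print(list(pool.pages))  # [1, 3]
```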

For more information, see [Buffer Pool](https://dev.mysql.com/doc/refman/8.0/en/innodb-buffer-pool.html) in the *MySQL Reference Manual*.

## Aurora MySQL processes


Aurora MySQL uses a process model that is very different from that of Aurora PostgreSQL.

**Topics**
+ [

### MySQL server (mysqld)
](#AuroraMySQL.Managing.Tuning.concepts.processes.mysqld)
+ [

### Threads
](#AuroraMySQL.Managing.Tuning.concepts.processes.threads)
+ [

### Thread pool
](#AuroraMySQL.Managing.Tuning.concepts.processes.pool)

### MySQL server (mysqld)


The MySQL server is a single operating-system process named `mysqld`. The MySQL server doesn't spawn additional processes. Thus, an Aurora MySQL database uses `mysqld` to perform most of its work.

When the MySQL server starts, it listens for network connections from MySQL clients. When a client connects to the database, `mysqld` opens a thread.

### Threads


Connection manager threads associate each client connection with a dedicated thread that manages authentication, runs statements, and returns results to the client. The connection manager creates new threads when necessary.

The *thread cache* is the set of available threads. When a connection ends, MySQL returns the thread to the thread cache if the cache isn't full. The `thread_cache_size` system variable determines the thread cache size.

### Thread pool


The *thread pool* consists of a number of thread groups. Each group manages a set of client connections. When a client connects to the database, the thread pool assigns the connections to thread groups in round-robin fashion. The thread pool separates connections and threads. There is no fixed relationship between connections and the threads that run statements received from those connections.
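The round-robin assignment described above can be sketched as follows. The group count and connection IDs are arbitrary illustrations, not Aurora internals.

```python
# Sketch: assign incoming client connections to thread groups in
# round-robin fashion, as the thread pool does.
from itertools import cycle

thread_groups = {group: [] for group in range(4)}
assigner = cycle(thread_groups)          # cycles over group IDs 0..3

for conn_id in range(10):                # 10 incoming client connections
    group = next(assigner)
    thread_groups[group].append(conn_id)

print(thread_groups)
# {0: [0, 4, 8], 1: [1, 5, 9], 2: [2, 6], 3: [3, 7]}
```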

# Tuning Aurora MySQL with wait events


The following table summarizes the Aurora MySQL wait events that most commonly indicate performance problems. These wait events are a subset of the list in [Aurora MySQL wait events](AuroraMySQL.Reference.Waitevents.md).


| Wait event | Description | 
| --- | --- | 
|  [cpu](ams-waits.cpu.md)  |  This event occurs when a thread is active in CPU or is waiting for CPU.  | 
|  [io/aurora\_redo\_log\_flush](ams-waits.io-auredologflush.md)  |  This event occurs when a session is writing persistent data to Aurora storage.  | 
|  [io/aurora\_respond\_to\_client](ams-waits.respond-to-client.md)  |  This event occurs when a thread is waiting to return a result set to a client.  | 
|  [io/redo\_log\_flush](ams-waits.io-redologflush.md)  |  This event occurs when a session is writing persistent data to Aurora storage.  | 
|  [io/socket/sql/client\_connection](ams-waits.client-connection.md)  |  This event occurs when a thread is in the process of handling a new connection.  | 
|  [io/table/sql/handler](ams-waits.waitio.md)  |  This event occurs when work has been delegated to a storage engine.   | 
|  [synch/cond/innodb/row\_lock\_wait](ams-waits.row-lock-wait.md)  |  This event occurs when one session has locked a row for an update, and another session tries to update the same row.  | 
|  [synch/cond/innodb/row\_lock\_wait\_cond](ams-waits.row-lock-wait-cond.md)  |  This event occurs when one session has locked a row for an update, and another session tries to update the same row.  | 
|  [synch/cond/sql/MDL\_context::COND\_wait\_status](ams-waits.cond-wait-status.md)  |  This event occurs when there are threads waiting on a table metadata lock.  | 
|  [synch/mutex/innodb/aurora\_lock\_thread\_slot\_futex](ams-waits.waitsynch.md)  |  This event occurs when one session has locked a row for an update, and another session tries to update the same row.  | 
|  [synch/mutex/innodb/buf\_pool\_mutex](ams-waits.bufpoolmutex.md)  |  This event occurs when a thread has acquired a lock on the InnoDB buffer pool to access a page in memory.  | 
|  [synch/mutex/innodb/fil\_system\_mutex](ams-waits.innodb-fil-system-mutex.md)  |  This event occurs when a session is waiting to access the tablespace memory cache.  | 
|  [synch/mutex/innodb/trx\_sys\_mutex](ams-waits.trxsysmutex.md)  |  This event occurs when there is high database activity with a large number of transactions.  | 
|  [synch/sxlock/innodb/hash\_table\_locks](ams-waits.sx-lock-hash-table-locks.md)  |  This event occurs when pages not found in the buffer pool must be read from a file.  | 
|  [synch/mutex/innodb/temp\_pool\_manager\_mutex](ams-waits.io-temppoolmanager.md)  |  This event occurs when a session is waiting to acquire a mutex for managing the pool of session temporary tablespaces.   | 

# cpu


The `cpu` wait event occurs when a thread is active in CPU or is waiting for CPU.

**Topics**
+ [

## Supported engine versions
](#ams-waits.cpu.context.supported)
+ [

## Context
](#ams-waits.cpu.context)
+ [

## Likely causes of increased waits
](#ams-waits.cpu.causes)
+ [

## Actions
](#ams-waits.cpu.actions)

## Supported engine versions


This wait event information is supported for the following engine versions:
+ Aurora MySQL versions 2 and 3

## Context


Each vCPU can run the work of one connection at a time. In some situations, the number of active connections that are ready to run is higher than the number of vCPUs. This imbalance results in connections waiting for CPU resources. If the number of active connections stays consistently higher than the number of vCPUs, your instance experiences CPU contention. The contention causes the `cpu` wait event to occur.
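This threshold can be expressed as a one-line check. The comparison below is a sketch of the rule of thumb just described, not an Aurora API; the sample values come from the sizing example later in this topic (3.1 AAS on a 2-vCPU instance).

```python
# Sketch: flag likely CPU contention by comparing average active
# sessions (AAS) in the CPU state against the instance's vCPU count.
def cpu_contention(aas_on_cpu, vcpus):
    """Contention when runnable sessions consistently exceed vCPUs."""
    return aas_on_cpu > vcpus

print(cpu_contention(aas_on_cpu=3.1, vcpus=2))  # True: instance is CPU bound
print(cpu_contention(aas_on_cpu=1.6, vcpus=2))  # False: below the vCPU line
```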

**Note**  
The Performance Insights metric for CPU is `DBLoadCPU`. The value for `DBLoadCPU` can differ from the value for the Amazon CloudWatch metric `CPUUtilization`, which is collected from the hypervisor for a database instance.

Performance Insights OS metrics provide detailed information about CPU utilization. For example, you can display the following metrics:
+ `os.cpuUtilization.nice.avg`
+ `os.cpuUtilization.total.avg`
+ `os.cpuUtilization.wait.avg`
+ `os.cpuUtilization.idle.avg`

Performance Insights reports the CPU usage by the database engine as `os.cpuUtilization.nice.avg`.

## Likely causes of increased waits


When this event occurs more than normal, possibly indicating a performance problem, typical causes include the following:
+ Analytic queries
+ Highly concurrent transactions
+ Long-running transactions
+ A sudden increase in the number of connections, known as a *login storm*
+ An increase in context switching

## Actions


If the `cpu` wait event dominates database activity, it doesn't necessarily indicate a performance problem. Respond to this event only when performance degrades. 

Depending on the cause of the increase in CPU utilization, consider the following strategies:
+ Increase the CPU capacity of the host. This approach typically gives only temporary relief.
+ Identify top queries for potential optimization.
+ Redirect some read-only workload to reader nodes, if applicable.

**Topics**
+ [

### Identify the sessions or queries that are causing the problem
](#ams-waits.cpu.actions.az-vpc-subnet)
+ [

### Analyze and optimize the high CPU workload
](#ams-waits.cpu.actions.db-instance-class)

### Identify the sessions or queries that are causing the problem


To find the sessions and queries, look at the **Top SQL** table in Performance Insights for the SQL statements that have the highest CPU load. For more information, see [Analyzing metrics with the Performance Insights dashboard](USER_PerfInsights.UsingDashboard.md).

Typically, one or two SQL statements consume the majority of CPU cycles. Concentrate your efforts on these statements. Suppose that your DB instance has 2 vCPUs with a DB load of 3.1 average active sessions (AAS), all in the CPU state. In this case, your instance is CPU bound. Consider the following strategies:
+ Upgrade to a larger instance class with more vCPUs.
+ Tune your queries to have lower CPU load.

In this example, the top two SQL statements have a DB load of 1.5 AAS each, both in the CPU state. Another SQL statement has a load of 0.1 AAS in the CPU state. If you stopped the lowest-load SQL statement, you wouldn't significantly reduce the database load. However, if you optimize the two high-load queries to be twice as efficient, you eliminate the CPU bottleneck. Halving each 1.5 AAS CPU load reduces those statements to 0.75 AAS each, so the total DB load spent on CPU drops to 1.6 AAS. This value is below the maximum vCPU line of 2.0.
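The arithmetic in this example can be spelled out as follows. The query names are placeholders; the AAS values and vCPU count are the ones from the scenario above.

```python
# Reproduce the DB-load arithmetic: two high-load statements plus one
# small one, on an instance with 2 vCPUs.
vcpus = 2
loads = {"top_query_1": 1.5, "top_query_2": 1.5, "small_query": 0.1}

print(sum(loads.values()))      # 3.1 AAS: above the 2.0 vCPU line

# Making the two high-load queries twice as efficient halves their load.
for query in ("top_query_1", "top_query_2"):
    loads[query] /= 2

total = sum(loads.values())
print(total, total < vcpus)     # 1.6 True: below the vCPU line
```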

For a useful overview of troubleshooting using Performance Insights, see the blog post [Analyze Amazon Aurora MySQL Workloads with Performance Insights](https://aws.amazon.com/blogs/database/analyze-amazon-aurora-mysql-workloads-with-performance-insights/). Also see the AWS Support article [How can I troubleshoot and resolve high CPU utilization on my Amazon RDS for MySQL instances?](https://aws.amazon.com/premiumsupport/knowledge-center/rds-instance-high-cpu/).

### Analyze and optimize the high CPU workload


After you identify the query or queries increasing CPU usage, you can either optimize them or end the connection. The following example shows how to end a connection.

```
CALL mysql.rds_kill(processID);
```

For more information, see [mysql.rds\_kill](mysql-stored-proc-ending.md#mysql_rds_kill).

If you end a session, the action might trigger a long rollback.

#### Follow the guidelines for optimizing queries


To optimize queries, consider the following guidelines:
+ Run the `EXPLAIN` statement. 

  This command shows the individual steps involved in running a query. For more information, see [Optimizing Queries with EXPLAIN](https://dev.mysql.com/doc/refman/5.7/en/using-explain.html) in the MySQL documentation.
+ Run the `SHOW PROFILE` statement.

  Use this statement to review profile details that can indicate resource usage for statements that are run during the current session. For more information, see [SHOW PROFILE Statement](https://dev.mysql.com/doc/refman/5.7/en/show-profile.html) in the MySQL documentation.
+ Run the `ANALYZE TABLE` statement.

  Use this statement to refresh the index statistics for the tables accessed by the high-CPU consuming query. By analyzing the statement, you can help the optimizer choose an appropriate execution plan. For more information, see [ANALYZE TABLE Statement](https://dev.mysql.com/doc/refman/5.7/en/analyze-table.html) in the MySQL documentation.

#### Follow the guidelines for improving CPU usage


To improve CPU usage in a database instance, follow these guidelines:
+ Ensure that all queries are using proper indexes.
+ Find out whether you can use Aurora parallel queries. You can use this technique to reduce CPU usage on the head node by pushing down function processing, row filtering, and column projection for the `WHERE` clause.
+ Find out whether the number of SQL executions per second meets the expected thresholds.
+ Find out whether index maintenance or new index creation takes up CPU cycles needed by your production workload. Schedule maintenance activities outside of peak activity times.
+ Find out whether you can use partitioning to help reduce the query data set. For more information, see the blog post [How to plan and optimize Amazon Aurora with MySQL compatibility for consolidated workloads](https://aws.amazon.com/blogs/database/planning-and-optimizing-amazon-aurora-with-mysql-compatibility-for-consolidated-workloads/).

#### Check for connection storms


 If the `DBLoadCPU` metric is not very high, but the `CPUUtilization` metric is high, the cause of the high CPU utilization lies outside of the database engine. A classic example is a connection storm.

Check whether the following conditions are true:
+ There is an increase in both the Performance Insights `CPUUtilization` metric and the Amazon CloudWatch `DatabaseConnections` metric.
+ The number of threads in the CPU is greater than the number of vCPUs.

If the preceding conditions are true, consider decreasing the number of database connections. For example, you can use a connection pool such as RDS Proxy. To learn the best practices for effective connection management and scaling, see the whitepaper [Amazon Aurora MySQL DBA Handbook for Connection Management](https://d1.awsstatic.com/whitepapers/RDS/amazon-aurora-mysql-database-administrator-handbook.pdf).
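The connection-storm signature described above can be sketched as a combined check: engine load (`DBLoadCPU`) stays modest while host CPU and connection counts climb. The thresholds and metric values below are illustrative assumptions, not values pulled from CloudWatch or Performance Insights.

```python
# Sketch of the connection-storm signature: low engine CPU load but
# high host CPU utilization and a rising connection count.
def looks_like_connection_storm(db_load_cpu, vcpus,
                                cpu_utilization_pct, connections_delta):
    """True when DBLoadCPU is below the vCPU count while host CPU is
    high and DatabaseConnections is increasing."""
    return (db_load_cpu < vcpus
            and cpu_utilization_pct >= 80.0
            and connections_delta > 0)

print(looks_like_connection_storm(1.2, 4, 95.0, 500))  # True: likely a storm
print(looks_like_connection_storm(6.0, 4, 95.0, 0))    # False: engine-bound CPU
```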

# io/aurora\_redo\_log\_flush


The `io/aurora_redo_log_flush` event occurs when a session is writing persistent data to Amazon Aurora storage.

**Topics**
+ [

## Supported engine versions
](#ams-waits.io-auredologflush.context.supported)
+ [

## Context
](#ams-waits.io-auredologflush.context)
+ [

## Likely causes of increased waits
](#ams-waits.io-auredologflush.causes)
+ [

## Actions
](#ams-waits.io-auredologflush.actions)

## Supported engine versions


This wait event information is supported for the following engine versions:
+ Aurora MySQL version 2

## Context


The `io/aurora_redo_log_flush` event is for a write input/output (I/O) operation in Aurora MySQL.

**Note**  
In Aurora MySQL version 3, this wait event is named [io/redo\_log\_flush](ams-waits.io-redologflush.md).

## Likely causes of increased waits


For data persistence, commits require a durable write to stable storage. If the database is doing too many commits, the sessions wait on the write I/O operation, which appears as the `io/aurora_redo_log_flush` wait event.

In the following examples, 50,000 records are inserted into an Aurora MySQL DB cluster using the db.r5.xlarge DB instance class:
+ In the first example, each session inserts 10,000 records row by row. By default, if a data manipulation language (DML) command isn't within a transaction, Aurora MySQL uses implicit commits. Autocommit is turned on. This means that for each row insertion there is a commit. Performance Insights shows that the connections spend most of their time waiting on the `io/aurora_redo_log_flush` wait event.   
![\[Performance Insights example of the wait event\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/auredologflush_PI_example1.png)

  This wait pattern results from the single-row insert statements, each of which is committed individually.  
![\[Insert statements in Top SQL\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/auredologflush_top_SQL1.png)

  The 50,000 records take 3.5 minutes to be inserted.
+ In the second example, the inserts are made in batches of 1,000 rows. That is, each connection performs 10 commits instead of 10,000. Performance Insights shows that the connections don't spend most of their time on the `io/aurora_redo_log_flush` wait event.  
![\[Performance Insights example of the wait event having less impact\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/auredologflush_PI_example2.png)

  The 50,000 records take 4 seconds to be inserted.
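The commit counts behind these two examples can be worked out directly. The session count of 5 is implied by the examples (5 × 10,000 = 50,000 records); the batch size matches the second example.

```python
# Commit-count arithmetic for the two examples above.
rows_per_session, sessions = 10_000, 5

# Row-by-row with autocommit on: one commit per inserted row.
print(rows_per_session * sessions)                 # 50000 commits in total

# Batches of 1,000 rows: 10 commits per session.
batch_size = 1_000
print(rows_per_session // batch_size * sessions)   # 50 commits in total
```

Reducing commits by three orders of magnitude is what turns the 3.5-minute load into a 4-second one.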

## Actions


We recommend different actions depending on the causes of your wait event.

**Topics**
+ [

### Identify the problematic sessions and queries
](#ams-waits.io-auredologflush.actions.identify-queries)
+ [

### Group your write operations
](#ams-waits.io-auredologflush.actions.action0)
+ [

### Turn off autocommit
](#ams-waits.io-auredologflush.actions.action1)
+ [

### Use transactions
](#ams-waits.io-auredologflush.action2)
+ [

### Use batches
](#ams-waits.io-auredologflush.action3)

### Identify the problematic sessions and queries


If your DB instance is experiencing a bottleneck, your first task is to find the sessions and queries that cause it. For a useful AWS Database Blog post, see [Analyze Amazon Aurora MySQL Workloads with Performance Insights](https://aws.amazon.com/blogs/database/analyze-amazon-aurora-mysql-workloads-with-performance-insights/).

**To identify sessions and queries causing a bottleneck**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Performance Insights**.

1. Choose your DB instance.

1. In **Database load**, choose **Slice by wait**.

1. At the bottom of the page, choose **Top SQL**.

   The queries at the top of the list are causing the highest load on the database.

### Group your write operations


The following examples trigger the `io/aurora_redo_log_flush` wait event. (Autocommit is turned on.)

```
INSERT INTO `sampleDB`.`sampleTable` (sampleCol2, sampleCol3) VALUES ('xxxx','xxxxx');
INSERT INTO `sampleDB`.`sampleTable` (sampleCol2, sampleCol3) VALUES ('xxxx','xxxxx');
INSERT INTO `sampleDB`.`sampleTable` (sampleCol2, sampleCol3) VALUES ('xxxx','xxxxx');
....
INSERT INTO `sampleDB`.`sampleTable` (sampleCol2, sampleCol3) VALUES ('xxxx','xxxxx');

UPDATE `sampleDB`.`sampleTable` SET sampleCol3='xxxxx' WHERE id=xx;
UPDATE `sampleDB`.`sampleTable` SET sampleCol3='xxxxx' WHERE id=xx;
UPDATE `sampleDB`.`sampleTable` SET sampleCol3='xxxxx' WHERE id=xx;
....
UPDATE `sampleDB`.`sampleTable` SET sampleCol3='xxxxx' WHERE id=xx;

DELETE FROM `sampleDB`.`sampleTable` WHERE sampleCol1=xx;
DELETE FROM `sampleDB`.`sampleTable` WHERE sampleCol1=xx;
DELETE FROM `sampleDB`.`sampleTable` WHERE sampleCol1=xx;
....
DELETE FROM `sampleDB`.`sampleTable` WHERE sampleCol1=xx;
```

To reduce the time spent waiting on the `io/aurora_redo_log_flush` wait event, group your write operations logically into a single commit to reduce persistent calls to storage.

### Turn off autocommit


Turn off autocommit before making large changes that aren't within a transaction, as shown in the following example.

```
SET SESSION AUTOCOMMIT=OFF;
UPDATE `sampleDB`.`sampleTable` SET sampleCol3='xxxxx' WHERE sampleCol1=xx;
UPDATE `sampleDB`.`sampleTable` SET sampleCol3='xxxxx' WHERE sampleCol1=xx;
UPDATE `sampleDB`.`sampleTable` SET sampleCol3='xxxxx' WHERE sampleCol1=xx;
....
UPDATE `sampleDB`.`sampleTable` SET sampleCol3='xxxxx' WHERE sampleCol1=xx;
-- Other DML statements here
COMMIT;

SET SESSION AUTOCOMMIT=ON;
```

### Use transactions


You can use transactions, as shown in the following example.

```
BEGIN;
INSERT INTO `sampleDB`.`sampleTable` (sampleCol2, sampleCol3) VALUES ('xxxx','xxxxx');
INSERT INTO `sampleDB`.`sampleTable` (sampleCol2, sampleCol3) VALUES ('xxxx','xxxxx');
INSERT INTO `sampleDB`.`sampleTable` (sampleCol2, sampleCol3) VALUES ('xxxx','xxxxx');
....
INSERT INTO `sampleDB`.`sampleTable` (sampleCol2, sampleCol3) VALUES ('xxxx','xxxxx');

DELETE FROM `sampleDB`.`sampleTable` WHERE sampleCol1=xx;
DELETE FROM `sampleDB`.`sampleTable` WHERE sampleCol1=xx;
DELETE FROM `sampleDB`.`sampleTable` WHERE sampleCol1=xx;
....
DELETE FROM `sampleDB`.`sampleTable` WHERE sampleCol1=xx;

-- Other DML statements here
COMMIT;
```

### Use batches


You can make changes in batches, as shown in the following example. However, using batches that are too large can cause performance issues, especially in read replicas or when doing point-in-time recovery (PITR). 

```
INSERT INTO `sampleDB`.`sampleTable` (sampleCol2, sampleCol3) VALUES
('xxxx','xxxxx'),('xxxx','xxxxx'),...,('xxxx','xxxxx'),('xxxx','xxxxx');

UPDATE `sampleDB`.`sampleTable` SET sampleCol3='xxxxx' WHERE sampleCol1 BETWEEN xx AND xxx;

DELETE FROM `sampleDB`.`sampleTable` WHERE sampleCol1<xx;
```

# io/aurora\_respond\_to\_client


The `io/aurora_respond_to_client` event occurs when a thread is waiting to return a result set to a client.

**Topics**
+ [

## Supported engine versions
](#ams-waits.respond-to-client.context.supported)
+ [

## Context
](#ams-waits.respond-to-client.context)
+ [

## Likely causes of increased waits
](#ams-waits.respond-to-client.causes)
+ [

## Actions
](#ams-waits.respond-to-client.actions)

## Supported engine versions


This wait event information is supported for the following engine versions:
+ Aurora MySQL version 2

## Context


The event `io/aurora_respond_to_client` indicates that a thread is waiting to return a result set to a client.

The query processing is complete, and the results are being returned to the application client. However, because there isn't enough network bandwidth on the DB cluster, a thread waits to return the result set.

## Likely causes of increased waits


When the `io/aurora_respond_to_client` event appears more than normal, possibly indicating a performance problem, typical causes include the following:

**DB instance class insufficient for the workload**  
The DB instance class used by the DB cluster doesn't have the necessary network bandwidth to process the workload efficiently.

**Large result sets**  
There was an increase in the size of the result set being returned because the query returns a higher number of rows. The larger result set consumes more network bandwidth.

**Increased load on the client**  
There might be CPU pressure, memory pressure, or network saturation on the client. An increase in load on the client delays the reception of data from the Aurora MySQL DB cluster.

**Increased network latency**  
There might be increased network latency between the Aurora MySQL DB cluster and client. Higher network latency increases the time required for the client to receive the data.

## Actions


We recommend different actions depending on the causes of your wait event.

**Topics**
+ [

### Identify the sessions and queries causing the events
](#ams-waits.respond-to-client.actions.identify)
+ [

### Scale the DB instance class
](#ams-waits.respond-to-client.actions.scale-db-instance-class)
+ [

### Check workload for unexpected results
](#ams-waits.respond-to-client.actions.workload)
+ [

### Distribute workload with reader instances
](#ams-waits.respond-to-client.actions.balance)
+ [

### Use the SQL\_BUFFER\_RESULT modifier
](#ams-waits.respond-to-client.actions.sql-buffer-result)

### Identify the sessions and queries causing the events


You can use Performance Insights to show queries blocked by the `io/aurora_respond_to_client` wait event. Typically, databases with moderate to significant load have wait events. The wait events might be acceptable if performance is optimal. If performance isn't optimal, then examine where the database is spending the most time. Look at the wait events that contribute to the highest load, and find out whether you can optimize the database and application to reduce those events. 

**To find SQL queries that are responsible for high load**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Performance Insights**.

1. Choose a DB instance. The Performance Insights dashboard is shown for that DB instance.

1. In the **Database load** chart, choose **Slice by wait**.

1. At the bottom of the page, choose **Top SQL**.

   The chart lists the SQL queries that are responsible for the load. Those at the top of the list are most responsible. To resolve a bottleneck, focus on these statements.

For a useful overview of troubleshooting using Performance Insights, see the AWS Database Blog post [Analyze Amazon Aurora MySQL Workloads with Performance Insights](https://aws.amazon.com/blogs/database/analyze-amazon-aurora-mysql-workloads-with-performance-insights/).

### Scale the DB instance class


Check for an increase in the value of the Amazon CloudWatch metrics related to network throughput, such as `NetworkReceiveThroughput` and `NetworkTransmitThroughput`. If you're reaching the network bandwidth limit of the current DB instance class, scale the DB instance class used by the DB cluster by modifying the DB cluster. A DB instance class with larger network bandwidth returns data to clients more efficiently.

For information about monitoring Amazon CloudWatch metrics, see [Viewing metrics in the Amazon RDS console](USER_Monitoring.md). For information about DB instance classes, see [Amazon Aurora DB instance classes](Concepts.DBInstanceClass.md). For information about modifying a DB cluster, see [Modifying an Amazon Aurora DB cluster](Aurora.Modifying.md).

### Check workload for unexpected results


Check the workload on the DB cluster and make sure that it isn't producing unexpected results. For example, there might be queries that are returning a higher number of rows than expected. In this case, you can use Performance Insights counter metrics such as `Innodb_rows_read`. For more information, see [Performance Insights counter metrics](USER_PerfInsights_Counters.md).
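To cross-check from inside the database, you can also sample the corresponding MySQL server status variable directly and compare readings taken a short time apart. This is a standard MySQL technique, not an Aurora-specific interface.

```
SHOW GLOBAL STATUS LIKE 'Innodb_rows_read';
-- Wait a few seconds, then run the statement again.
-- A large difference between the two readings indicates
-- queries that are reading many more rows than expected.
SHOW GLOBAL STATUS LIKE 'Innodb_rows_read';
```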

### Distribute workload with reader instances


You can distribute read-only workload with Aurora Replicas. You can scale horizontally by adding more Aurora Replicas. Because each DB instance has its own network bandwidth allocation, doing so increases the aggregate bandwidth available to the workload. For more information, see [Amazon Aurora DB clusters](Aurora.Overview.md).

### Use the SQL\_BUFFER\_RESULT modifier


You can add the `SQL_BUFFER_RESULT` modifier to `SELECT` statements to force the result set into a temporary table before it is returned to the client. This modifier can help with performance issues when InnoDB locks aren't being freed because queries are in the `io/aurora_respond_to_client` wait state. For more information, see [SELECT Statement](https://dev.mysql.com/doc/refman/5.7/en/select.html) in the MySQL documentation.
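For example, using the sample table from the earlier examples, the modifier goes immediately after the `SELECT` keyword.

```
SELECT SQL_BUFFER_RESULT sampleCol1, sampleCol2, sampleCol3
FROM `sampleDB`.`sampleTable`
WHERE sampleCol1 BETWEEN xx AND xxx;
```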

# io/redo\_log\_flush


The `io/redo_log_flush` event occurs when a session is writing persistent data to Amazon Aurora storage.

**Topics**
+ [

## Supported engine versions
](#ams-waits.io-redologflush.context.supported)
+ [

## Context
](#ams-waits.io-redologflush.context)
+ [

## Likely causes of increased waits
](#ams-waits.io-redologflush.causes)
+ [

## Actions
](#ams-waits.io-redologflush.actions)

## Supported engine versions


This wait event information is supported for the following engine versions:
+ Aurora MySQL version 3

## Context


The `io/redo_log_flush` event is for a write input/output (I/O) operation in Aurora MySQL.

**Note**  
In Aurora MySQL version 2, this wait event is named [io/aurora\_redo\_log\_flush](ams-waits.io-auredologflush.md).

## Likely causes of increased waits


For data persistence, each commit requires a durable write to stable storage. If the database commits too frequently, sessions wait on the write I/O operation, which appears as the `io/redo_log_flush` wait event.

For examples of the behavior of this wait event, see [io/aurora\_redo\_log\_flush](ams-waits.io-auredologflush.md).

## Actions


We recommend different actions depending on the causes of your wait event.

**Topics**
+ [

### Identify the problematic sessions and queries
](#ams-waits.io-redologflush.actions.identify-queries)
+ [

### Group your write operations
](#ams-waits.io-redologflush.actions.action0)
+ [

### Turn off autocommit
](#ams-waits.io-redologflush.actions.action1)
+ [

### Use transactions
](#ams-waits.io-redologflush.action2)
+ [

### Use batches
](#ams-waits.io-redologflush.action3)

### Identify the problematic sessions and queries


If your DB instance is experiencing a bottleneck, your first task is to find the sessions and queries that cause it. For a useful AWS Database Blog post, see [Analyze Amazon Aurora MySQL Workloads with Performance Insights](https://aws.amazon.com/blogs/database/analyze-amazon-aurora-mysql-workloads-with-performance-insights/).

**To identify sessions and queries causing a bottleneck**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Performance Insights**.

1. Choose your DB instance.

1. In **Database load**, choose **Slice by wait**.

1. At the bottom of the page, choose **Top SQL**.

   The queries at the top of the list are causing the highest load on the database.

### Group your write operations


The following examples trigger the `io/redo_log_flush` wait event. (Autocommit is turned on.)

```
INSERT INTO `sampleDB`.`sampleTable` (sampleCol2, sampleCol3) VALUES ('xxxx','xxxxx');
INSERT INTO `sampleDB`.`sampleTable` (sampleCol2, sampleCol3) VALUES ('xxxx','xxxxx');
INSERT INTO `sampleDB`.`sampleTable` (sampleCol2, sampleCol3) VALUES ('xxxx','xxxxx');
....
INSERT INTO `sampleDB`.`sampleTable` (sampleCol2, sampleCol3) VALUES ('xxxx','xxxxx');

UPDATE `sampleDB`.`sampleTable` SET sampleCol3='xxxxx' WHERE id=xx;
UPDATE `sampleDB`.`sampleTable` SET sampleCol3='xxxxx' WHERE id=xx;
UPDATE `sampleDB`.`sampleTable` SET sampleCol3='xxxxx' WHERE id=xx;
....
UPDATE `sampleDB`.`sampleTable` SET sampleCol3='xxxxx' WHERE id=xx;

DELETE FROM `sampleDB`.`sampleTable` WHERE sampleCol1=xx;
DELETE FROM `sampleDB`.`sampleTable` WHERE sampleCol1=xx;
DELETE FROM `sampleDB`.`sampleTable` WHERE sampleCol1=xx;
....
DELETE FROM `sampleDB`.`sampleTable` WHERE sampleCol1=xx;
```

To reduce the time spent waiting on the `io/redo_log_flush` wait event, group your write operations logically into a single commit to reduce persistent calls to storage.

### Turn off autocommit


Turn off autocommit before making large changes that aren't within a transaction, as shown in the following example.

```
SET SESSION AUTOCOMMIT=OFF;
UPDATE `sampleDB`.`sampleTable` SET sampleCol3='xxxxx' WHERE sampleCol1=xx;
UPDATE `sampleDB`.`sampleTable` SET sampleCol3='xxxxx' WHERE sampleCol1=xx;
UPDATE `sampleDB`.`sampleTable` SET sampleCol3='xxxxx' WHERE sampleCol1=xx;
....
UPDATE `sampleDB`.`sampleTable` SET sampleCol3='xxxxx' WHERE sampleCol1=xx;
-- Other DML statements here
COMMIT;

SET SESSION AUTOCOMMIT=ON;
```

### Use transactions


You can use transactions, as shown in the following example.

```
BEGIN;
INSERT INTO `sampleDB`.`sampleTable` (sampleCol2, sampleCol3) VALUES ('xxxx','xxxxx');
INSERT INTO `sampleDB`.`sampleTable` (sampleCol2, sampleCol3) VALUES ('xxxx','xxxxx');
INSERT INTO `sampleDB`.`sampleTable` (sampleCol2, sampleCol3) VALUES ('xxxx','xxxxx');
....
INSERT INTO `sampleDB`.`sampleTable` (sampleCol2, sampleCol3) VALUES ('xxxx','xxxxx');

DELETE FROM `sampleDB`.`sampleTable` WHERE sampleCol1=xx;
DELETE FROM `sampleDB`.`sampleTable` WHERE sampleCol1=xx;
DELETE FROM `sampleDB`.`sampleTable` WHERE sampleCol1=xx;
....
DELETE FROM `sampleDB`.`sampleTable` WHERE sampleCol1=xx;

-- Other DML statements here
COMMIT;
```

### Use batches


You can make changes in batches, as shown in the following example. However, using batches that are too large can cause performance issues, especially in read replicas or when doing point-in-time recovery (PITR).

```
INSERT INTO `sampleDB`.`sampleTable` (sampleCol2, sampleCol3) VALUES
('xxxx','xxxxx'),('xxxx','xxxxx'),...,('xxxx','xxxxx'),('xxxx','xxxxx');

UPDATE `sampleDB`.`sampleTable` SET sampleCol3='xxxxx' WHERE sampleCol1 BETWEEN xx AND xxx;

DELETE FROM `sampleDB`.`sampleTable` WHERE sampleCol1<xx;
```

# io/socket/sql/client\_connection


The `io/socket/sql/client_connection` event occurs when a thread is in the process of handling a new connection.

**Topics**
+ [

## Supported engine versions
](#ams-waits.client-connection.context.supported)
+ [

## Context
](#ams-waits.client-connection.context)
+ [

## Likely causes of increased waits
](#ams-waits.client-connection.causes)
+ [

## Actions
](#ams-waits.client-connection.actions)

## Supported engine versions


This wait event information is supported for the following engine versions:
+ Aurora MySQL versions 2 and 3

## Context


The event `io/socket/sql/client_connection` indicates that mysqld is busy creating threads to handle incoming new client connections. In this scenario, servicing new client connection requests slows down while connections wait for a thread to be assigned. For more information, see [MySQL server (mysqld)](AuroraMySQL.Managing.Tuning.concepts.md#AuroraMySQL.Managing.Tuning.concepts.processes.mysqld).

## Likely causes of increased waits


When this event appears more than normal, possibly indicating a performance problem, typical causes include the following:
+ There is a sudden increase in new user connections from the application to your Amazon RDS instance.
+ Your DB instance can't process new connections because the network, CPU, or memory is being throttled.
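To confirm a connection surge from the database side, you can compare the connection-related status counters, as in the following example.

```
SHOW GLOBAL STATUS WHERE Variable_name IN
('Connections', 'Threads_connected', 'Threads_created', 'Threads_running');
-- A Threads_created value growing quickly relative to Connections
-- suggests that new connections aren't being reused from the cache.
```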

## Actions


If `io/socket/sql/client_connection` dominates database activity, it doesn't necessarily indicate a performance problem. In a database that isn't idle, a wait event is always on top. Act only when performance degrades. We recommend different actions depending on the causes of your wait event.

**Topics**
+ [

### Identify the problematic sessions and queries
](#ams-waits.client-connection.actions.identify-queries)
+ [

### Follow best practices for connection management
](#ams-waits.client-connection.actions.manage-connections)
+ [

### Scale up your instance if resources are being throttled
](#ams-waits.client-connection.upgrade)
+ [

### Check the top hosts and top users
](#ams-waits.client-connection.top-hosts)
+ [

### Query the performance\_schema tables
](#ams-waits.client-connection.perf-schema)
+ [

### Check the thread states of your queries
](#ams-waits.client-connection.thread-states)
+ [

### Audit your requests and queries
](#ams-waits.client-connection.auditing)
+ [

### Pool your database connections
](#ams-waits.client-connection.pooling)

### Identify the problematic sessions and queries


If your DB instance is experiencing a bottleneck, your first task is to find the sessions and queries that cause it. For a useful blog post, see [Analyze Amazon Aurora MySQL Workloads with Performance Insights](https://aws.amazon.com/blogs/database/analyze-amazon-aurora-mysql-workloads-with-performance-insights/).

**To identify sessions and queries causing a bottleneck**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Performance Insights**.

1. Choose your DB instance.

1. In **Database load**, choose **Slice by wait**.

1. At the bottom of the page, choose **Top SQL**.

   The queries at the top of the list are causing the highest load on the database.

### Follow best practices for connection management


To manage your connections, consider the following strategies:
+ Use connection pooling.

  You can gradually increase the number of connections as required. For more information, see the whitepaper [Amazon Aurora MySQL Database Administrator’s Handbook](https://d1.awsstatic.com/whitepapers/RDS/amazon-aurora-mysql-database-administrator-handbook.pdf).
+ Use a reader node to redistribute read-only traffic.

  For more information, see [Aurora Replicas](Aurora.Replication.md#Aurora.Replication.Replicas) and [Amazon Aurora endpoint connections](Aurora.Overview.Endpoints.md).

### Scale up your instance if resources are being throttled


Look for examples of throttling in the following resources:
+ CPU

  Check your Amazon CloudWatch metrics for high CPU usage.
+ Network

Check for an increase in the value of the CloudWatch metrics `NetworkReceiveThroughput` and `NetworkTransmitThroughput`. If your instance has reached the network bandwidth limit for your instance class, consider scaling up your RDS instance to a higher instance class type. For more information, see [Amazon Aurora DB instance classes](Concepts.DBInstanceClass.md).
+ Freeable memory 

  Check for a drop in the CloudWatch metric `FreeableMemory`. Also, consider turning on Enhanced Monitoring. For more information, see [Monitoring OS metrics with Enhanced Monitoring](USER_Monitoring.OS.md).

### Check the top hosts and top users


Use Performance Insights to check the top hosts and top users. For more information, see [Analyzing metrics with the Performance Insights dashboard](USER_PerfInsights.UsingDashboard.md).

### Query the performance\_schema tables


To get an accurate count of the current and total connections, query the `performance_schema` tables. With this technique, you can identify the source user or host that is responsible for creating a high number of connections. For example, query the `performance_schema` tables as follows.

```
SELECT * FROM performance_schema.accounts;
SELECT * FROM performance_schema.users;
SELECT * FROM performance_schema.hosts;
```
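To rank the sources, you can sort the same tables by current connection count. The column names below come from the standard `performance_schema` table definitions.

```
SELECT host, current_connections, total_connections
FROM performance_schema.hosts
ORDER BY current_connections DESC;

SELECT user, host, current_connections
FROM performance_schema.accounts
ORDER BY current_connections DESC;
```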

### Check the thread states of your queries


If your performance issue is ongoing, check the thread states of your queries. In the `mysql` client, issue the following command.

```
show processlist;
```

### Audit your requests and queries


To check the nature of the requests and queries from user accounts, use Aurora MySQL Advanced Auditing. To learn how to turn on auditing, see [Using Advanced Auditing with an Amazon Aurora MySQL DB cluster](AuroraMySQL.Auditing.md).

### Pool your database connections


Consider using Amazon RDS Proxy for connection management. By using RDS Proxy, you can allow your applications to pool and share database connections to improve their ability to scale. RDS Proxy makes applications more resilient to database failures by automatically connecting to a standby DB instance while preserving application connections. For more information, see [Amazon RDS Proxy for Aurora](rds-proxy.md).

# io/table/sql/handler


The `io/table/sql/handler` event occurs when work has been delegated to a storage engine.

**Topics**
+ [

## Supported engine versions
](#ams-waits.waitio.context.supported)
+ [

## Context
](#ams-waits.waitio.context)
+ [

## Likely causes of increased waits
](#ams-waits.waitio.causes)
+ [

## Actions
](#ams-waits.waitio.actions)

## Supported engine versions


This wait event information is supported for the following engine versions:
+ Aurora MySQL versions 2 and 3

## Context


The event `io/table` indicates a wait for access to a table. This event occurs regardless of whether the data is cached in the buffer pool or accessed on disk. The `io/table/sql/handler` event indicates an increase in workload activity. 

A *handler* is a routine specialized in a certain type of data or focused on certain special tasks. For example, an event handler receives and digests events and signals from the operating system or from a user interface. A memory handler performs tasks related to memory. A file input handler is a function that receives file input and performs special tasks on the data, according to context.

Views such as `performance_schema.events_waits_current` often show `io/table/sql/handler` when the actual wait is a nested wait event such as a lock. When the actual wait isn't `io/table/sql/handler`, Performance Insights reports the nested wait event. When Performance Insights reports `io/table/sql/handler`, it represents InnoDB processing of the I/O request and not a hidden nested wait event. For more information, see [Performance Schema Atom and Molecule Events](https://dev.mysql.com/doc/refman/5.7/en/performance-schema-atom-molecule-events.html) in the *MySQL Reference Manual*.

The `io/table/sql/handler` event often appears in top wait events with I/O waits such as `io/aurora_redo_log_flush`.

## Likely causes of increased waits


In Performance Insights, sudden spikes in the `io/table/sql/handler` event indicate an increase in workload activity. Increased activity means increased I/O. 

Performance Insights filters the nesting event IDs and doesn't report a `io/table/sql/handler` wait when the underlying nested event is a lock wait. For example, if the root cause event is [synch/mutex/innodb/aurora\_lock\_thread\_slot\_futex](ams-waits.waitsynch.md), Performance Insights displays this wait in top wait events and not `io/table/sql/handler`.

## Actions


If this wait event dominates database activity, it doesn't necessarily indicate a performance problem. A wait event is always on top when the database is active. You need to act only when performance degrades.

We recommend different actions depending on the other wait events that you see.

**Topics**
+ [

### Identify the sessions and queries causing the events
](#ams-waits.waitio.actions.identify)
+ [

### Check for a correlation with Performance Insights counter metrics
](#ams-waits.waitio.actions.filters)
+ [

### Check for other correlated wait events
](#ams-waits.waitio.actions.maintenance)

### Identify the sessions and queries causing the events


Typically, databases with moderate to significant load have wait events. The wait events might be acceptable if performance is optimal. If performance isn't optimal, then examine where the database is spending the most time. Look at the wait events that contribute to the highest load, and find out whether you can optimize the database and application to reduce those events.

**To find SQL queries that are responsible for high load**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Performance Insights**.

1. Choose a DB instance. The Performance Insights dashboard is shown for that DB instance.

1. In the **Database load** chart, choose **Slice by wait**.

1. At the bottom of the page, choose **Top SQL**.

   The chart lists the SQL queries that are responsible for the load. Those at the top of the list are most responsible. To resolve a bottleneck, focus on these statements.

For a useful overview of troubleshooting using Performance Insights, see the blog post [Analyze Amazon Aurora MySQL Workloads with Performance Insights](https://aws.amazon.com/blogs/database/analyze-amazon-aurora-mysql-workloads-with-performance-insights/).

### Check for a correlation with Performance Insights counter metrics


Check for Performance Insights counter metrics such as `Innodb_rows_changed`. If counter metrics are correlated with `io/table/sql/handler`, follow these steps:

1. In Performance Insights, look for the SQL statements accounting for the `io/table/sql/handler` top wait event. If possible, optimize this statement so that it returns fewer rows.

1. Retrieve the top tables from the `schema_table_statistics` and `x$schema_table_statistics` views. These views show the amount of time spent per table. For more information, see [The schema\_table\_statistics and x$schema\_table\_statistics Views](https://dev.mysql.com/doc/refman/5.7/en/sys-schema-table-statistics.html) in the *MySQL Reference Manual*.

   By default, rows are sorted by descending total wait time. Tables with the most contention appear first. The output indicates whether time is spent on reads, writes, fetches, inserts, updates, or deletes.

   ```
   mysql> select * from sys.schema_table_statistics limit 1\G
   
   *************************** 1. row ***************************
        table_schema: read_only_db
          table_name: sbtest41
       total_latency: 54.11 m
        rows_fetched: 6001557
       fetch_latency: 39.14 m
       rows_inserted: 14833
      insert_latency: 5.78 m
        rows_updated: 30470
      update_latency: 5.39 m
        rows_deleted: 14833
      delete_latency: 3.81 m
    io_read_requests: NULL
             io_read: NULL
     io_read_latency: NULL
   io_write_requests: NULL
            io_write: NULL
    io_write_latency: NULL
    io_misc_requests: NULL
     io_misc_latency: NULL
   1 row in set (0.11 sec)
   ```

### Check for other correlated wait events


If `synch/sxlock/innodb/btr_search_latch` and `io/table/sql/handler` contribute most to the DB load anomaly together, check whether the `innodb_adaptive_hash_index` variable is turned on. If it is, consider increasing the `innodb_adaptive_hash_index_parts` parameter value.
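You can check the current settings from a client session, as in the following example. On Aurora, you change these parameters in the DB cluster parameter group rather than with a `SET` statement.

```
-- Returns innodb_adaptive_hash_index (ON/OFF)
-- and innodb_adaptive_hash_index_parts.
SHOW GLOBAL VARIABLES LIKE 'innodb_adaptive_hash_index%';
```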

If the Adaptive Hash Index is turned off, consider turning it on. To learn more about the MySQL Adaptive Hash Index, see the following resources:
+ The article [Is Adaptive Hash Index in InnoDB right for my workload?](https://www.percona.com/blog/2016/04/12/is-adaptive-hash-index-in-innodb-right-for-my-workload) on the Percona website
+ [Adaptive Hash Index](https://dev.mysql.com/doc/refman/5.7/en/innodb-adaptive-hash.html) in the *MySQL Reference Manual*
+ The article [Contention in MySQL InnoDB: Useful Info From the Semaphores Section](https://www.percona.com/blog/2019/12/20/contention-in-mysql-innodb-useful-info-from-the-semaphores-section/) on the Percona website

**Note**  
The Adaptive Hash Index isn't supported on Aurora reader DB instances.  
In some cases, performance might be poor on a reader instance when `synch/sxlock/innodb/btr_search_latch` and `io/table/sql/handler` are dominant. If so, consider redirecting the workload temporarily to the writer DB instance and turning on the Adaptive Hash Index.

# synch/cond/innodb/row\_lock\_wait


The `synch/cond/innodb/row_lock_wait` event occurs when one session has locked a row for an update, and another session tries to update the same row. For more information, see [InnoDB locking](https://dev.mysql.com/doc/refman/8.0/en/innodb-locking.html) in the MySQL documentation.



## Supported engine versions


This wait event information is supported for the following engine versions:
+ Aurora MySQL version 3

## Likely causes of increased waits


Multiple data manipulation language (DML) statements are accessing the same row or rows simultaneously.

## Actions


We recommend different actions depending on the other wait events that you see.

**Topics**
+ [

### Find and respond to the SQL statements responsible for this wait event
](#ams-waits.row-lock-wait.actions.id)
+ [

### Find and respond to the blocking session
](#ams-waits.row-lock-wait.actions.blocker)

### Find and respond to the SQL statements responsible for this wait event


Use Performance Insights to identify the SQL statements responsible for this wait event. Consider the following strategies:
+ If row locks are a persistent problem, consider rewriting the application to use optimistic locking.
+ Use multirow statements.
+ Spread the workload over different database objects. You can do this through partitioning.
+ Check the value of the `innodb_lock_wait_timeout` parameter. It controls how long transactions wait before generating a timeout error.
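As an illustration of the optimistic locking approach, the following sketch assumes a hypothetical `version` counter column on the sample table. The update succeeds only if no other session changed the row since it was read.

```
-- Read the row and remember its version (the version column is
-- hypothetical; add one to the table to use this pattern).
SELECT sampleCol3, version FROM `sampleDB`.`sampleTable` WHERE sampleCol1=xx;

-- Update only if the version is unchanged; bump it on success.
UPDATE `sampleDB`.`sampleTable`
SET sampleCol3='xxxxx', version=version+1
WHERE sampleCol1=xx AND version=yy;
-- If the affected row count is 0, another session changed the row;
-- re-read and retry instead of waiting on a row lock.
```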

For a useful overview of troubleshooting using Performance Insights, see the blog post [Analyze Amazon Aurora MySQL Workloads with Performance Insights](https://aws.amazon.com/blogs/database/analyze-amazon-aurora-mysql-workloads-with-performance-insights/).

### Find and respond to the blocking session


Determine whether the blocking session is idle or active. Also, find out whether the session comes from an application or an active user.

To identify the session holding the lock, you can run `SHOW ENGINE INNODB STATUS`. The following example shows sample output.

```
mysql> SHOW ENGINE INNODB STATUS;

---TRANSACTION 1688153, ACTIVE 82 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 2 lock struct(s), heap size 1136, 2 row lock(s)
MySQL thread id 4244, OS thread handle 70369524330224, query id 4020834 172.31.14.179 reinvent executing
select id1 from test.t1 where id1=1 for update
------- TRX HAS BEEN WAITING 24 SEC FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 11 page no 4 n bits 72 index GEN_CLUST_INDEX of table test.t1 trx id 1688153 lock_mode X waiting
Record lock, heap no 2 PHYSICAL RECORD: n_fields 5; compact format; info bits 0
```

Or you can use the following query to extract details on current locks.

```
mysql> SELECT p1.id waiting_thread,
    p1.user waiting_user,
    p1.host waiting_host,
    it1.trx_query waiting_query,
    ilw.requesting_engine_transaction_id waiting_transaction,
    ilw.blocking_engine_lock_id blocking_lock,
    il.lock_mode blocking_mode,
    il.lock_type blocking_type,
    ilw.blocking_engine_transaction_id blocking_transaction,
    CASE it.trx_state
        WHEN 'LOCK WAIT'
        THEN it.trx_state
        ELSE p.state end blocker_state,
    concat(il.object_schema,'.', il.object_name) as locked_table,
    it.trx_mysql_thread_id blocker_thread,
    p.user blocker_user,
    p.host blocker_host
FROM performance_schema.data_lock_waits ilw
JOIN performance_schema.data_locks il
ON ilw.blocking_engine_lock_id = il.engine_lock_id
AND ilw.blocking_engine_transaction_id = il.engine_transaction_id
JOIN information_schema.innodb_trx it
ON ilw.blocking_engine_transaction_id = it.trx_id join information_schema.processlist p
ON it.trx_mysql_thread_id = p.id join information_schema.innodb_trx it1
ON ilw.requesting_engine_transaction_id = it1.trx_id join information_schema.processlist p1
ON it1.trx_mysql_thread_id = p1.id\G

*************************** 1. row ***************************
waiting_thread: 4244
waiting_user: reinvent
waiting_host: 123.456.789.012:18158
waiting_query: select id1 from test.t1 where id1=1 for update
waiting_transaction: 1688153
blocking_lock: 70369562074216:11:4:2:70369549808672
blocking_mode: X
blocking_type: RECORD
blocking_transaction: 1688142
blocker_state: User sleep
locked_table: test.t1
blocker_thread: 4243
blocker_user: reinvent
blocker_host: 123.456.789.012:18156
1 row in set (0.00 sec)
```

When you identify the session, your options include the following:
+ Contact the application owner or the user.
+ If the blocking session is idle, consider ending the blocking session. This action might trigger a long rollback. To learn how to end a session, see [Ending a session or query](mysql-stored-proc-ending.md).
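On Aurora MySQL, you end a session with the `mysql.rds_kill` procedure rather than the `KILL` statement. The following example uses the `blocker_thread` value from the sample output shown earlier.

```
CALL mysql.rds_kill(4243);  -- thread ID of the blocking session
```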

For more information about identifying blocking transactions, see [Using InnoDB transaction and locking information](https://dev.mysql.com/doc/refman/8.0/en/innodb-information-schema-examples.html) in the MySQL documentation.

# synch/cond/innodb/row\_lock\_wait\_cond


The `synch/cond/innodb/row_lock_wait_cond` event occurs when one session has locked a row for an update, and another session tries to update the same row. For more information, see [InnoDB locking](https://dev.mysql.com/doc/refman/5.7/en/innodb-locking.html) in the MySQL documentation.



## Supported engine versions


This wait event information is supported for the following engine versions:
+ Aurora MySQL version 2

## Likely causes of increased waits


Multiple data manipulation language (DML) statements are accessing the same row or rows simultaneously.

## Actions


We recommend different actions depending on the other wait events that you see.

**Topics**
+ [

### Find and respond to the SQL statements responsible for this wait event
](#ams-waits.row-lock-wait-cond.actions.id)
+ [

### Find and respond to the blocking session
](#ams-waits.row-lock-wait-cond.actions.blocker)

### Find and respond to the SQL statements responsible for this wait event


Use Performance Insights to identify the SQL statements responsible for this wait event. Consider the following strategies:
+ If row locks are a persistent problem, consider rewriting the application to use optimistic locking.
+ Use multirow statements.
+ Spread the workload over different database objects. You can do this through partitioning.
+ Check the value of the `innodb_lock_wait_timeout` parameter. It controls how long transactions wait before generating a timeout error.
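
The first and last strategies above can be sketched in SQL. This is illustrative only: the `version` column and the timeout value `10` are assumptions, not part of the preceding examples.

```
-- Optimistic locking: update only if the row is unchanged since it was read.
-- Assumes the application maintains a version column on the table;
-- a real statement would also set the data columns being changed.
UPDATE test.t1
   SET version = version + 1
 WHERE id1 = 1
   AND version = 42;    -- the version value read earlier;
                        -- 0 rows affected means another session won, so retry

-- Check the lock wait timeout and, if appropriate, shorten it for the session.
SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';
SET SESSION innodb_lock_wait_timeout = 10;    -- seconds; illustrative value
```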

For a useful overview of troubleshooting using Performance Insights, see the blog post [Analyze Amazon Aurora MySQL Workloads with Performance Insights](https://aws.amazon.com/blogs/database/analyze-amazon-aurora-mysql-workloads-with-performance-insights/).

### Find and respond to the blocking session


Determine whether the blocking session is idle or active. Also, find out whether the session comes from an application or an active user.

To identify the session holding the lock, you can run `SHOW ENGINE INNODB STATUS`. The following example shows sample output.

```
mysql> SHOW ENGINE INNODB STATUS;

---TRANSACTION 2771110, ACTIVE 112 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 2 lock struct(s), heap size 1136, 1 row lock(s)
MySQL thread id 24, OS thread handle 70369573642160, query id 13271336 172.31.14.179 reinvent Sending data
select id1 from test.t1 where id1=1 for update
------- TRX HAS BEEN WAITING 43 SEC FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 11 page no 3 n bits 0 index GEN_CLUST_INDEX of table test.t1 trx id 2771110 lock_mode X waiting
Record lock, heap no 2 PHYSICAL RECORD: n_fields 5; compact format; info bits 0
```

Or you can use the following query to extract details on current locks.

```
mysql> SELECT p1.id waiting_thread,
              p1.user waiting_user,
              p1.host waiting_host,
              it1.trx_query waiting_query,        
              ilw.requesting_trx_id waiting_transaction, 
              ilw.blocking_lock_id blocking_lock, 
              il.lock_mode blocking_mode,
              il.lock_type blocking_type,
              ilw.blocking_trx_id blocking_transaction,
              CASE it.trx_state 
                WHEN 'LOCK WAIT' 
                THEN it.trx_state 
                ELSE p.state 
              END blocker_state, 
              il.lock_table locked_table,        
              it.trx_mysql_thread_id blocker_thread, 
              p.user blocker_user, 
              p.host blocker_host 
       FROM information_schema.innodb_lock_waits ilw 
       JOIN information_schema.innodb_locks il 
         ON ilw.blocking_lock_id = il.lock_id 
        AND ilw.blocking_trx_id = il.lock_trx_id
       JOIN information_schema.innodb_trx it 
         ON ilw.blocking_trx_id = it.trx_id
       JOIN information_schema.processlist p 
         ON it.trx_mysql_thread_id = p.id 
       JOIN information_schema.innodb_trx it1 
         ON ilw.requesting_trx_id = it1.trx_id 
       JOIN information_schema.processlist p1 
         ON it1.trx_mysql_thread_id = p1.id\G

*************************** 1. row ***************************
      waiting_thread: 3561959471
        waiting_user: reinvent
        waiting_host: 123.456.789.012:20485
       waiting_query: select id1 from test.t1 where id1=1 for update
 waiting_transaction: 312337314
       blocking_lock: 312337287:261:3:2
       blocking_mode: X
       blocking_type: RECORD
blocking_transaction: 312337287
       blocker_state: User sleep
        locked_table: `test`.`t1`
      blocker_thread: 3561223876
        blocker_user: reinvent
        blocker_host: 123.456.789.012:17746
1 row in set (0.04 sec)
```

When you identify the session, your options include the following:
+ Contact the application owner or the user.
+ If the blocking session is idle, consider ending the blocking session. This action might trigger a long rollback. To learn how to end a session, see [Ending a session or query](mysql-stored-proc-ending.md).

For more information about identifying blocking transactions, see [Using InnoDB transaction and locking information](https://dev.mysql.com/doc/refman/5.7/en/innodb-information-schema-examples.html) in the MySQL documentation.

# synch/cond/sql/MDL\_context::COND\_wait\_status


The `synch/cond/sql/MDL_context::COND_wait_status` event occurs when there are threads waiting on a table metadata lock.

**Topics**
+ [

## Supported engine versions
](#ams-waits.cond-wait-status.context.supported)
+ [

## Context
](#ams-waits.cond-wait-status.context)
+ [

## Likely causes of increased waits
](#ams-waits.cond-wait-status.causes)
+ [

## Actions
](#ams-waits.cond-wait-status.actions)

## Supported engine versions


This wait event information is supported for the following engine versions:
+ Aurora MySQL versions 2 and 3

## Context


The event `synch/cond/sql/MDL_context::COND_wait_status` indicates that there are threads waiting on a table metadata lock. In some cases, one session holds a metadata lock on a table and another session tries to get the same lock on the same table. In such a case, the second session waits on the `synch/cond/sql/MDL_context::COND_wait_status` wait event.

MySQL uses metadata locking to manage concurrent access to database objects and to ensure data consistency. Metadata locking applies to tables, schemas, scheduled events, tablespaces, and user locks acquired with the `get_lock` function, and stored programs. Stored programs include procedures, functions, and triggers. For more information, see [Metadata locking](https://dev.mysql.com/doc/refman/5.7/en/metadata-locking.html) in the MySQL documentation.
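
As a minimal sketch of how a metadata lock wait arises (the table `test.t1` is an assumed example), an open transaction that has merely read a table holds a shared metadata lock until it commits, which blocks later DDL:

```
-- Session 1: reading inside an open transaction takes a shared metadata lock,
-- held until COMMIT or ROLLBACK.
START TRANSACTION;
SELECT COUNT(*) FROM test.t1;
-- ... transaction left open ...

-- Session 2: DDL requires an exclusive metadata lock on the same table,
-- so it waits in the state "Waiting for table metadata lock".
ALTER TABLE test.t1 ADD COLUMN c2 INT;
```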

The MySQL process list shows this session in the state `Waiting for table metadata lock`. In Performance Insights, if the Performance Schema is turned on, the event `synch/cond/sql/MDL_context::COND_wait_status` appears.

The default timeout for a query waiting on a metadata lock is based on the value of the `lock_wait_timeout` parameter, which defaults to 31,536,000 seconds (365 days).
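
You can check the current value and, if appropriate, lower it at the session level so that a statement fails quickly instead of queuing behind a long-lived metadata lock. The value `60` here is only illustrative.

```
SHOW VARIABLES LIKE 'lock_wait_timeout';
SET SESSION lock_wait_timeout = 60;    -- seconds; illustrative value
```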

For more details on different InnoDB locks and the types of locks that can cause conflicts, see [InnoDB Locking](https://dev.mysql.com/doc/refman/5.7/en/innodb-locking.html) in the MySQL documentation.

## Likely causes of increased waits


When the `synch/cond/sql/MDL_context::COND_wait_status` event appears more than normal, possibly indicating a performance problem, typical causes include the following:

**Long-running transactions**  
One or more transactions are modifying a large amount of data and holding locks on tables for a very long time.

**Idle transactions**  
One or more transactions remain open for a long time, without being committed or rolled back.

**DDL statements on large tables**  
One or more data definition language (DDL) statements, such as `ALTER TABLE` commands, were run on very large tables.

**Explicit table locks**  
There are explicit locks on tables that aren't being released in a timely manner. For example, an application might run `LOCK TABLES` statements improperly.

## Actions


We recommend different actions depending on the causes of your wait event and on the version of the Aurora MySQL DB cluster.

**Topics**
+ [

### Identify the sessions and queries causing the events
](#ams-waits.cond-wait-status.actions.identify)
+ [

### Check for past events
](#ams-waits.cond-wait-status.actions.past-events)
+ [

### Run queries on Aurora MySQL version 2
](#ams-waits.cond-wait-status.actions.run-queries-aurora-mysql-57)
+ [

### Respond to the blocking session
](#ams-waits.cond-wait-status.actions.blocker)

### Identify the sessions and queries causing the events


You can use Performance Insights to show queries blocked by the `synch/cond/sql/MDL_context::COND_wait_status` wait event. However, to identify the blocking session, query metadata tables from `performance_schema` and `information_schema` on the DB cluster.
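
For example, a quick way to list the sessions currently stuck behind a metadata lock is to filter the process list by state:

```
SELECT id, user, host, time, state, info
  FROM information_schema.processlist
 WHERE state = 'Waiting for table metadata lock';
```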

Typically, databases with moderate to significant load have wait events. The wait events might be acceptable if performance is optimal. If performance isn't optimal, then examine where the database is spending the most time. Look at the wait events that contribute to the highest load, and find out whether you can optimize the database and application to reduce those events.

**To find SQL queries that are responsible for high load**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Performance Insights**.

1. Choose a DB instance. The Performance Insights dashboard for that DB instance appears.

1. In the **Database load** chart, choose **Slice by wait**.

1. At the bottom of the page, choose **Top SQL**.

   The chart lists the SQL queries that are responsible for the load. Those at the top of the list are most responsible. To resolve a bottleneck, focus on these statements.

For a useful overview of troubleshooting using Performance Insights, see the AWS Database Blog post [Analyze Amazon Aurora MySQL Workloads with Performance Insights](https://aws.amazon.com/blogs/database/analyze-amazon-aurora-mysql-workloads-with-performance-insights/).

### Check for past events


You can check for past occurrences of this wait event to gain insight into the issue. To do so, complete the following actions:
+ Check the data manipulation language (DML) and DDL throughput and latency to see if there were any changes in workload.

  You can use Performance Insights to find queries waiting on this event at the time of the issue. Also, you can view the digest of the queries run near the time of issue.
+ If audit logs or general logs are turned on for the DB cluster, you can check for all queries run on the objects (schema.table) involved in the waiting transaction. You can also check for the queries that completed running before the transaction.

The information available to troubleshoot past events is limited. Performing these checks doesn't show which object the waiting transaction was blocked on. However, you can identify the tables under heavy load at the time of the event and the set of frequently modified rows that caused the conflict. You can then use this information to reproduce the issue in a test environment and gain insight into its cause.

### Run queries on Aurora MySQL version 2


In Aurora MySQL version 2, you can identify the blocking and blocked sessions directly by querying `performance_schema` tables or `sys` schema views. The following example illustrates how.

In the following process list output, connection ID `89` is waiting on a metadata lock while it runs a `TRUNCATE TABLE` command.

```
MySQL [(none)]> select @@version, @@aurora_version;
+-----------+------------------+
| @@version | @@aurora_version |
+-----------+------------------+
| 5.7.12    | 2.11.5           |
+-----------+------------------+
1 row in set (0.01 sec)

MySQL [(none)]> show processlist;
+----+-----------------+--------------------+-----------+---------+------+---------------------------------+-------------------------------+
| Id | User            | Host               | db        | Command | Time | State                           | Info                          |
+----+-----------------+--------------------+-----------+---------+------+---------------------------------+-------------------------------+
|  2 | rdsadmin        | localhost          | NULL      | Sleep   |    0 | NULL                            | NULL                          |
|  4 | rdsadmin        | localhost          | NULL      | Sleep   |    2 | NULL                            | NULL                          |
|  5 | rdsadmin        | localhost          | NULL      | Sleep   |    1 | NULL                            | NULL                          |
| 20 | rdsadmin        | localhost          | NULL      | Sleep   |    0 | NULL                            | NULL                          |
| 21 | rdsadmin        | localhost          | NULL      | Sleep   |  261 | NULL                            | NULL                          |
| 66 | auroramysql5712 | 172.31.21.51:52154 | sbtest123 | Sleep   |    0 | NULL                            | NULL                          |
| 67 | auroramysql5712 | 172.31.21.51:52158 | sbtest123 | Sleep   |    0 | NULL                            | NULL                          |
| 68 | auroramysql5712 | 172.31.21.51:52150 | sbtest123 | Sleep   |    0 | NULL                            | NULL                          |
| 69 | auroramysql5712 | 172.31.21.51:52162 | sbtest123 | Sleep   |    0 | NULL                            | NULL                          |
| 70 | auroramysql5712 | 172.31.21.51:52160 | sbtest123 | Sleep   |    0 | NULL                            | NULL                          |
| 71 | auroramysql5712 | 172.31.21.51:52152 | sbtest123 | Sleep   |    0 | NULL                            | NULL                          |
| 72 | auroramysql5712 | 172.31.21.51:52156 | sbtest123 | Sleep   |    0 | NULL                            | NULL                          |
| 73 | auroramysql5712 | 172.31.21.51:52164 | sbtest123 | Sleep   |    0 | NULL                            | NULL                          |
| 74 | auroramysql5712 | 172.31.21.51:52166 | sbtest123 | Sleep   |    0 | NULL                            | NULL                          |
| 75 | auroramysql5712 | 172.31.21.51:52168 | sbtest123 | Sleep   |    0 | NULL                            | NULL                          |
| 76 | auroramysql5712 | 172.31.21.51:52170 | NULL      | Query   |    0 | starting                        | show processlist              |
| 88 | auroramysql5712 | 172.31.21.51:52194 | NULL      | Query   |   22 | User sleep                      | select sleep(10000)           |
| 89 | auroramysql5712 | 172.31.21.51:52196 | NULL      | Query   |    5 | Waiting for table metadata lock | truncate table sbtest.sbtest1 |
+----+-----------------+--------------------+-----------+---------+------+---------------------------------+-------------------------------+
18 rows in set (0.00 sec)
```

Next, a query on the `performance_schema` tables or `sys` schema views shows that the blocking session is `76`.

```
MySQL [(none)]> select * from sys.schema_table_lock_waits;                                                                
+---------------+-------------+-------------------+-------------+------------------------------+-------------------+-----------------------+-------------------------------+--------------------+-----------------------------+-----------------------------+--------------------+--------------+------------------------------+--------------------+------------------------+-------------------------+------------------------------+
| object_schema | object_name | waiting_thread_id | waiting_pid | waiting_account              | waiting_lock_type | waiting_lock_duration | waiting_query                 | waiting_query_secs | waiting_query_rows_affected | waiting_query_rows_examined | blocking_thread_id | blocking_pid | blocking_account             | blocking_lock_type | blocking_lock_duration | sql_kill_blocking_query | sql_kill_blocking_connection |
+---------------+-------------+-------------------+-------------+------------------------------+-------------------+-----------------------+-------------------------------+--------------------+-----------------------------+-----------------------------+--------------------+--------------+------------------------------+--------------------+------------------------+-------------------------+------------------------------+
| sbtest        | sbtest1     |               121 |          89 | auroramysql5712@192.0.2.0    | EXCLUSIVE         | TRANSACTION           | truncate table sbtest.sbtest1 |                 10 |                           0 |                           0 |                108 |           76 | auroramysql5712@192.0.2.0    | SHARED_READ        | TRANSACTION            | KILL QUERY 76           | KILL 76                      |
+---------------+-------------+-------------------+-------------+------------------------------+-------------------+-----------------------+-------------------------------+--------------------+-----------------------------+-----------------------------+--------------------+--------------+------------------------------+--------------------+------------------------+-------------------------+------------------------------+
1 row in set (0.00 sec)
```

### Respond to the blocking session


When you identify the session, your options include the following:
+ Contact the application owner or the user.
+ If the blocking session is idle, consider ending the blocking session. This action might trigger a long rollback. To learn how to end a session, see [Ending a session or query](mysql-stored-proc-ending.md).
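
On Aurora MySQL, you end a session with the `mysql.rds_kill` stored procedure rather than a direct `KILL` statement. For example, using the blocking process ID `76` from the preceding output:

```
CALL mysql.rds_kill(76);          -- ends the blocking connection
-- Or cancel only the running statement without closing the session:
CALL mysql.rds_kill_query(76);
```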

For more information about identifying blocking transactions, see [Using InnoDB transaction and locking information](https://dev.mysql.com/doc/refman/5.7/en/innodb-information-schema-examples.html) in the MySQL documentation.

# synch/mutex/innodb/aurora\_lock\_thread\_slot\_futex


The `synch/mutex/innodb/aurora_lock_thread_slot_futex` event occurs when one session has locked a row for an update, and another session tries to update the same row. For more information, see [InnoDB locking](https://dev.mysql.com/doc/refman/5.7/en/innodb-locking.html) in the MySQL documentation.



## Supported engine versions


This wait event information is supported for the following engine versions:
+ Aurora MySQL version 2

## Likely causes of increased waits


Multiple data manipulation language (DML) statements are accessing the same row or rows simultaneously.

## Actions


We recommend different actions depending on the other wait events that you see.

**Topics**
+ [

### Find and respond to the SQL statements responsible for this wait event
](#ams-waits.waitsynch.actions.id)
+ [

### Find and respond to the blocking session
](#ams-waits.waitsynch.actions.blocker)

### Find and respond to the SQL statements responsible for this wait event


Use Performance Insights to identify the SQL statements responsible for this wait event. Consider the following strategies:
+ If row locks are a persistent problem, consider rewriting the application to use optimistic locking.
+ Use multirow statements.
+ Spread the workload over different database objects. You can do this through partitioning.
+ Check the value of the `innodb_lock_wait_timeout` parameter. It controls how long transactions wait before generating a timeout error.

For a useful overview of troubleshooting using Performance Insights, see the blog post [Analyze Amazon Aurora MySQL Workloads with Performance Insights](https://aws.amazon.com/blogs/database/analyze-amazon-aurora-mysql-workloads-with-performance-insights/).

### Find and respond to the blocking session


Determine whether the blocking session is idle or active. Also, find out whether the session comes from an application or an active user.

To identify the session holding the lock, you can run `SHOW ENGINE INNODB STATUS`. The following example shows sample output.

```
mysql> SHOW ENGINE INNODB STATUS;

---------------------TRANSACTION 302631452, ACTIVE 2 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 2 lock struct(s), heap size 376, 1 row lock(s)
MySQL thread id 80109, OS thread handle 0x2ae915060700, query id 938819 10.0.4.12 reinvent updating
UPDATE sbtest1 SET k=k+1 WHERE id=503
------- TRX HAS BEEN WAITING 2 SEC FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 148 page no 11 n bits 30 index `PRIMARY` of table `sysbench2`.`sbtest1` trx id 302631452 lock_mode X locks rec but not gap waiting
Record lock, heap no 30 PHYSICAL RECORD: n_fields 6; compact format; info bits 0
```

Or you can use the following query to extract details on current locks.

```
mysql> SELECT p1.id waiting_thread,
              p1.user waiting_user,
              p1.host waiting_host,
              it1.trx_query waiting_query,        
              ilw.requesting_trx_id waiting_transaction, 
              ilw.blocking_lock_id blocking_lock, 
              il.lock_mode blocking_mode,
              il.lock_type blocking_type,
              ilw.blocking_trx_id blocking_transaction,
              CASE it.trx_state 
                WHEN 'LOCK WAIT' 
                THEN it.trx_state 
                ELSE p.state 
              END blocker_state, 
              il.lock_table locked_table,        
              it.trx_mysql_thread_id blocker_thread, 
              p.user blocker_user, 
              p.host blocker_host 
       FROM information_schema.innodb_lock_waits ilw 
       JOIN information_schema.innodb_locks il 
         ON ilw.blocking_lock_id = il.lock_id 
        AND ilw.blocking_trx_id = il.lock_trx_id
       JOIN information_schema.innodb_trx it 
         ON ilw.blocking_trx_id = it.trx_id
       JOIN information_schema.processlist p 
         ON it.trx_mysql_thread_id = p.id 
       JOIN information_schema.innodb_trx it1 
         ON ilw.requesting_trx_id = it1.trx_id 
       JOIN information_schema.processlist p1 
         ON it1.trx_mysql_thread_id = p1.id\G

*************************** 1. row ***************************
      waiting_thread: 3561959471
        waiting_user: reinvent
        waiting_host: 123.456.789.012:20485
       waiting_query: select id1 from test.t1 where id1=1 for update
 waiting_transaction: 312337314
       blocking_lock: 312337287:261:3:2
       blocking_mode: X
       blocking_type: RECORD
blocking_transaction: 312337287
       blocker_state: User sleep
        locked_table: `test`.`t1`
      blocker_thread: 3561223876
        blocker_user: reinvent
        blocker_host: 123.456.789.012:17746
1 row in set (0.04 sec)
```

When you identify the session, your options include the following:
+ Contact the application owner or the user.
+ If the blocking session is idle, consider ending the blocking session. This action might trigger a long rollback. To learn how to end a session, see [Ending a session or query](mysql-stored-proc-ending.md).

For more information about identifying blocking transactions, see [Using InnoDB transaction and locking information](https://dev.mysql.com/doc/refman/5.7/en/innodb-information-schema-examples.html) in the MySQL documentation.

# synch/mutex/innodb/buf\_pool\_mutex


The `synch/mutex/innodb/buf_pool_mutex` event occurs when a thread waits for the mutex on the InnoDB buffer pool, which another thread holds to access a page in memory.

**Topics**
+ [

## Relevant engine versions
](#ams-waits.bufpoolmutex.context.supported)
+ [

## Context
](#ams-waits.bufpoolmutex.context)
+ [

## Likely causes of increased waits
](#ams-waits.bufpoolmutex.causes)
+ [

## Actions
](#ams-waits.bufpoolmutex.actions)

## Relevant engine versions


This wait event information is supported for the following engine versions:
+ Aurora MySQL version 2

## Context


The `buf_pool` mutex is a single mutex that protects the control data structures of the buffer pool.

For more information, see [Monitoring InnoDB Mutex Waits Using Performance Schema](https://dev.mysql.com/doc/refman/5.7/en/monitor-innodb-mutex-waits-performance-schema.html) in the MySQL documentation.
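
To gauge how much time this mutex is costing, you can query the Performance Schema wait summary. Timer values are reported in picoseconds, so the division below converts the total to seconds.

```
SELECT EVENT_NAME,
       COUNT_STAR,
       SUM_TIMER_WAIT / 1000000000000 AS total_wait_seconds
  FROM performance_schema.events_waits_summary_global_by_event_name
 WHERE EVENT_NAME = 'wait/synch/mutex/innodb/buf_pool_mutex';
```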

## Likely causes of increased waits


This is a workload-specific wait event. Common causes for `synch/mutex/innodb/buf_pool_mutex` to appear among the top wait events include the following:
+ The buffer pool size isn't large enough to hold the working set of data.
+ The workload concentrates on certain pages of a specific table in the database, leading to contention in the buffer pool.

## Actions


We recommend different actions depending on the causes of your wait event.

**Topics**
+ [

### Identify the sessions and queries causing the events
](#ams-waits.bufpoolmutex.actions.identify)
+ [

### Use Performance Insights
](#ams-waits.bufpoolmutex.actions.action1)
+ [

### Create Aurora Replicas
](#ams-waits.bufpoolmutex.actions.action2)
+ [

### Examine the buffer pool size
](#ams-waits.bufpoolmutex.actions.action3)
+ [

### Monitor the global status history
](#ams-waits.bufpoolmutex.actions.action4)

### Identify the sessions and queries causing the events


Typically, databases with moderate to significant load have wait events. The wait events might be acceptable if performance is optimal. If performance isn't optimal, then examine where the database is spending the most time. Look at the wait events that contribute to the highest load, and find out whether you can optimize the database and application to reduce those events.

**To view the Top SQL chart in the AWS Management Console**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Performance Insights**.

1. Choose a DB instance. The Performance Insights dashboard is shown for that DB instance.

1. In the **Database load** chart, choose **Slice by wait**.

1. Underneath the **Database load** chart, choose **Top SQL**.

   The chart lists the SQL queries that are responsible for the load. Those at the top of the list are most responsible. To resolve a bottleneck, focus on these statements.

For a useful overview of troubleshooting using Performance Insights, see the blog post [Analyze Amazon Aurora MySQL Workloads with Performance Insights](https://aws.amazon.com/blogs/database/analyze-amazon-aurora-mysql-workloads-with-performance-insights/).

### Use Performance Insights


This event is related to workload. You can use Performance Insights to do the following:
+ Identify when wait events start, and whether there's any change in the workload around that time from the application logs or related sources.
+ Identify the SQL statements responsible for this wait event. Examine the execution plan of the queries to make sure that these queries are optimized and using appropriate indexes.

  If the top queries responsible for the wait event are related to the same database object or table, then consider partitioning that object or table.

### Create Aurora Replicas


You can create Aurora Replicas to serve read-only traffic. You can also use Aurora Auto Scaling to handle surges in read traffic. Make sure to run scheduled read-only tasks and logical backups on Aurora Replicas.

For more information, see [Amazon Aurora Auto Scaling with Aurora Replicas](Aurora.Integrating.AutoScaling.md).

### Examine the buffer pool size


Check whether the buffer pool size is sufficient for the workload by looking at the metric `innodb_buffer_pool_wait_free`. If the value of this metric is high and increasing continuously, that indicates that the size of the buffer pool isn't sufficient to handle the workload. If `innodb_buffer_pool_size` has been set properly, the value of `innodb_buffer_pool_wait_free` should be small. For more information, see [Innodb\_buffer\_pool\_wait\_free](https://dev.mysql.com/doc/refman/5.7/en/server-status-variables.html#statvar_Innodb_buffer_pool_wait_free) in the MySQL documentation.

Increase the buffer pool size if the DB instance has enough memory for session buffers and operating-system tasks. If it doesn't, change the DB instance to a larger DB instance class to get additional memory that can be allocated to the buffer pool.
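
A quick check of both values looks like the following. `Innodb_buffer_pool_wait_free` is a cumulative counter since the last restart, so watch its rate of change rather than its absolute value.

```
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_wait_free';
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';    -- in bytes
```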

**Note**  
Aurora MySQL automatically adjusts the value of `innodb_buffer_pool_instances` based on the configured `innodb_buffer_pool_size`.

### Monitor the global status history


By monitoring the change rates of status variables, you can detect locking or memory issues on your DB instance. Turn on Global Status History (GoSH) if it isn't already turned on. For more information on GoSH, see [Managing the global status history](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.MySQL.CommonDBATasks.html#Appendix.MySQL.CommonDBATasks.GoSH).

You can also create custom Amazon CloudWatch metrics to monitor status variables. For more information, see [Publishing custom metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html).

# synch/mutex/innodb/fil\_system\_mutex


The `synch/mutex/innodb/fil_system_mutex` event occurs when a session is waiting to access the tablespace memory cache.

**Topics**
+ [

## Supported engine versions
](#ams-waits.innodb-fil-system-mutex.context.supported)
+ [

## Context
](#ams-waits.innodb-fil-system-mutex.context)
+ [

## Likely causes of increased waits
](#ams-waits.innodb-fil-system-mutex.causes)
+ [

## Actions
](#ams-waits.innodb-fil-system-mutex.actions)

## Supported engine versions


This wait event information is supported for the following engine versions:
+ Aurora MySQL versions 2 and 3

## Context


InnoDB uses tablespaces to manage the storage area for tables and log files. The *tablespace memory cache* is a global memory structure that maintains information about tablespaces. MySQL uses `synch/mutex/innodb/fil_system_mutex` waits to control concurrent access to the tablespace memory cache. 

The event `synch/mutex/innodb/fil_system_mutex` indicates that there is currently more than one operation that needs to retrieve and manipulate information in the tablespace memory cache for the same tablespace.

## Likely causes of increased waits


When the `synch/mutex/innodb/fil_system_mutex` event appears more than normal, possibly indicating a performance problem, this typically occurs when all of the following conditions are present:
+ An increase in concurrent data manipulation language (DML) operations that update or delete data in the same table.
+ The tablespace for this table is very large and has a lot of data pages.
+ The fill factor for these data pages is low.

## Actions


We recommend different actions depending on the causes of your wait event.

**Topics**
+ [

### Identify the sessions and queries causing the events
](#ams-waits.innodb-fil-system-mutex.actions.identify)
+ [

### Reorganize large tables during off-peak hours
](#ams-waits.innodb-fil-system-mutex.actions.reorganize)

### Identify the sessions and queries causing the events


Typically, databases with moderate to significant load have wait events. The wait events might be acceptable if performance is optimal. If performance isn't optimal, examine where the database is spending the most time. Look at the wait events that contribute to the highest load, and find out whether you can optimize the database and application to reduce those events.

**To find SQL queries that are responsible for high load**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Performance Insights**.

1. Choose a DB instance. The Performance Insights dashboard appears for that DB instance.

1. In the **Database load** chart, choose **Slice by wait**.

1. At the bottom of the page, choose **Top SQL**.

   The chart lists the SQL queries that are responsible for the load. Those at the top of the list are most responsible. To resolve a bottleneck, focus on these statements.

For a useful overview of troubleshooting using Performance Insights, see the blog post [Analyze Amazon Aurora MySQL Workloads with Performance Insights](https://aws.amazon.com/blogs/database/analyze-amazon-aurora-mysql-workloads-with-performance-insights/).

Another way to find out which queries are causing high numbers of `synch/mutex/innodb/fil_system_mutex` waits is to check `performance_schema`, as in the following example.

```
mysql> select * from performance_schema.events_waits_current where EVENT_NAME='wait/synch/mutex/innodb/fil_system_mutex'\G
*************************** 1. row ***************************
            THREAD_ID: 19
             EVENT_ID: 195057
         END_EVENT_ID: 195057
           EVENT_NAME: wait/synch/mutex/innodb/fil_system_mutex
               SOURCE: fil0fil.cc:6700
          TIMER_START: 1010146190118400
            TIMER_END: 1010146196524000
           TIMER_WAIT: 6405600
                SPINS: NULL
        OBJECT_SCHEMA: NULL
          OBJECT_NAME: NULL
           INDEX_NAME: NULL
          OBJECT_TYPE: NULL
OBJECT_INSTANCE_BEGIN: 47285552262176
     NESTING_EVENT_ID: NULL
   NESTING_EVENT_TYPE: NULL
            OPERATION: lock
      NUMBER_OF_BYTES: NULL
                FLAGS: NULL
*************************** 2. row ***************************
            THREAD_ID: 23
             EVENT_ID: 5480
         END_EVENT_ID: 5480
           EVENT_NAME: wait/synch/mutex/innodb/fil_system_mutex
               SOURCE: fil0fil.cc:5906
          TIMER_START: 995269979908800
            TIMER_END: 995269980159200
           TIMER_WAIT: 250400
                SPINS: NULL
        OBJECT_SCHEMA: NULL
          OBJECT_NAME: NULL
           INDEX_NAME: NULL
          OBJECT_TYPE: NULL
OBJECT_INSTANCE_BEGIN: 47285552262176
     NESTING_EVENT_ID: NULL
   NESTING_EVENT_TYPE: NULL
            OPERATION: lock
      NUMBER_OF_BYTES: NULL
                FLAGS: NULL
*************************** 3. row ***************************
            THREAD_ID: 55
             EVENT_ID: 23233794
         END_EVENT_ID: NULL
           EVENT_NAME: wait/synch/mutex/innodb/fil_system_mutex
               SOURCE: fil0fil.cc:449
          TIMER_START: 1010492125341600
            TIMER_END: 1010494304900000
           TIMER_WAIT: 2179558400
                SPINS: NULL
        OBJECT_SCHEMA: NULL
          OBJECT_NAME: NULL
           INDEX_NAME: NULL
          OBJECT_TYPE: NULL
OBJECT_INSTANCE_BEGIN: 47285552262176
     NESTING_EVENT_ID: 23233786
   NESTING_EVENT_TYPE: WAIT
            OPERATION: lock
      NUMBER_OF_BYTES: NULL
                FLAGS: NULL
```

### Reorganize large tables during off-peak hours


Reorganize large tables that you identify as the source of high numbers of `synch/mutex/innodb/fil_system_mutex` wait events during a maintenance window outside of production hours. Doing so ensures that the cleanup of the internal tablespace map doesn't occur when quick access to the table is critical. For information about reorganizing tables, see [OPTIMIZE TABLE Statement](https://dev.mysql.com/doc/refman/5.7/en/optimize-table.html) in the *MySQL Reference Manual*.
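For example, assuming a large table named `orders` (a hypothetical name) has been identified as a source of the waits, you might rebuild it during the maintenance window as follows:

```
-- Rebuilds the table and its indexes, reclaiming unused space and
-- raising the fill factor of the data pages.
-- For InnoDB tables, OPTIMIZE TABLE maps to ALTER TABLE ... FORCE.
mysql> OPTIMIZE TABLE orders;

-- Equivalent explicit rebuild:
mysql> ALTER TABLE orders ENGINE=InnoDB;
```

Both forms rebuild the table, so run them only when the table can tolerate the additional load.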

# synch/mutex/innodb/trx_sys_mutex


The `synch/mutex/innodb/trx_sys_mutex` event occurs when there is high database activity with a large number of transactions.

**Topics**
+ [

## Relevant engine versions
](#ams-waits.trxsysmutex.context.supported)
+ [

## Context
](#ams-waits.trxsysmutex.context)
+ [

## Likely causes of increased waits
](#ams-waits.trxsysmutex.causes)
+ [

## Actions
](#ams-waits.trxsysmutex.actions)

## Relevant engine versions


This wait event information is supported for the following engine versions:
+ Aurora MySQL versions 2 and 3

## Context


Internally, the InnoDB database engine uses the repeatable read isolation level with snapshots to provide read consistency. This gives you a point-in-time view of the database at the time the snapshot was created.

In InnoDB, all changes are applied to the database as soon as they arrive, regardless of whether they're committed. This approach means that without multiversion concurrency control (MVCC), all users connected to the database see all of the changes and the latest rows. Therefore, InnoDB requires a way to track the changes to understand what to roll back when necessary.

To do this, InnoDB uses a transaction system (`trx_sys`) to track snapshots. The transaction system does the following:
+ Tracks the transaction ID for each row in the undo logs.
+ Uses an internal InnoDB structure called `ReadView` that helps to identify which transaction IDs are visible for a snapshot.

## Likely causes of increased waits


Any database operation that requires the consistent and controlled handling (creating, reading, updating, and deleting) of transaction IDs generates a call from `trx_sys` to the mutex.

These calls happen inside three functions:
+ `trx_sys_mutex_enter` – Acquires the mutex.
+ `trx_sys_mutex_exit` – Releases the mutex.
+ `trx_sys_mutex_own` – Tests whether the mutex is owned.

The InnoDB Performance Schema instrumentation tracks all `trx_sys` mutex calls. Tracking includes, but isn't limited to, management of `trx_sys` on database startup or shutdown, rollback operations, undo cleanups, row read access, and buffer pool loads. High database activity with a large number of transactions results in `synch/mutex/innodb/trx_sys_mutex` appearing among the top wait events.

For more information, see [Monitoring InnoDB Mutex Waits Using Performance Schema](https://dev.mysql.com/doc/refman/5.7/en/monitor-innodb-mutex-waits-performance-schema.html) in the MySQL documentation.
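As a sketch of this kind of monitoring, you can check the aggregate impact of the mutex in the Performance Schema summary tables. Rows appear only if the corresponding wait instrumentation is enabled:

```
mysql> SELECT EVENT_NAME, COUNT_STAR, SUM_TIMER_WAIT
    -> FROM performance_schema.events_waits_summary_global_by_event_name
    -> WHERE EVENT_NAME = 'wait/synch/mutex/innodb/trx_sys_mutex'\G
```

`COUNT_STAR` shows how many times the mutex was waited on, and `SUM_TIMER_WAIT` shows the total wait time in picoseconds.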

## Actions


We recommend different actions depending on the causes of your wait event.

**Topics**
+ [

### Identify the sessions and queries causing the events
](#ams-waits.trxsysmutex.actions.identify)
+ [

### Examine other wait events
](#ams-waits.trxsysmutex.actions.action1)

### Identify the sessions and queries causing the events


Typically, databases with moderate to significant load have wait events. The wait events might be acceptable if performance is optimal. If performance isn't optimal, then examine where the database is spending the most time. Look at the wait events that contribute to the highest load. Find out whether you can optimize the database and application to reduce those events.

**To view the Top SQL chart in the AWS Management Console**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Performance Insights**.

1. Choose a DB instance. The Performance Insights dashboard is shown for that DB instance.

1. In the **Database load** chart, choose **Slice by wait**.

1. Under the **Database load** chart, choose **Top SQL**.

   The chart lists the SQL queries that are responsible for the load. Those at the top of the list are most responsible. To resolve a bottleneck, focus on these statements.

For a useful overview of troubleshooting using Performance Insights, see the blog post [Analyze Amazon Aurora MySQL Workloads with Performance Insights](https://aws.amazon.com/blogs/database/analyze-amazon-aurora-mysql-workloads-with-performance-insights/).

### Examine other wait events


Examine the other wait events associated with the `synch/mutex/innodb/trx_sys_mutex` wait event. Doing this can provide more information about the nature of the workload. A large number of transactions might reduce throughput, but the workload might also make this necessary.

For more information on how to optimize transactions, see [Optimizing InnoDB Transaction Management](https://dev.mysql.com/doc/refman/5.7/en/optimizing-innodb-transaction-management.html) in the MySQL documentation.

# synch/sxlock/innodb/hash_table_locks


The `synch/sxlock/innodb/hash_table_locks` event occurs when pages not found in the buffer pool must be read from storage.

**Topics**
+ [

## Supported engine versions
](#ams-waits.sx-lock-hash-table-locks.context.supported)
+ [

## Context
](#ams-waits.sx-lock-hash-table-locks.context)
+ [

## Likely causes of increased waits
](#ams-waits.sx-lock-hash-table-locks.causes)
+ [

## Actions
](#ams-waits.sx-lock-hash-table-locks.actions)

## Supported engine versions


This wait event information is supported for the following versions:
+ Aurora MySQL versions 2 and 3

## Context


The event `synch/sxlock/innodb/hash_table_locks` indicates that a workload is frequently accessing data that isn't stored in the buffer pool. This wait event is associated with the addition of new pages to, and the eviction of old data from, the buffer pool. As the data stored in the buffer pool ages and new data must be cached, aged pages are evicted to make room for the new pages. MySQL uses a least recently used (LRU) algorithm to evict pages from the buffer pool. The workload is trying to access data that hasn't been loaded into the buffer pool or that has been evicted from it.

This wait event occurs when the workload must access data in files on disk, or when blocks are freed from or added to the buffer pool's LRU list. These operations wait to obtain a shared exclusive lock (SX-lock). The SX-lock synchronizes access to the *hash table*, an in-memory table designed to improve buffer pool access performance.

For more information, see [Buffer Pool](https://dev.mysql.com/doc/refman/5.7/en/innodb-buffer-pool.html) in the MySQL documentation.

## Likely causes of increased waits


When the `synch/sxlock/innodb/hash_table_locks` wait event appears more often than normal, possibly indicating a performance problem, typical causes include the following:

**An undersized buffer pool**  
The size of the buffer pool is too small to keep all of the frequently accessed pages in memory.

**Heavy workload**  
The workload is causing frequent evictions of data pages from, and reloads into, the buffer pool.

**Errors reading the pages**  
There are errors reading pages in the buffer pool, which might indicate data corruption.

## Actions


We recommend different actions depending on the causes of your wait event.

**Topics**
+ [

### Increase the size of the buffer pool
](#ams-waits.sx-lock-hash-table-locks.actions.increase-buffer-pool-size)
+ [

### Improve data access patterns
](#ams-waits.sx-lock-hash-table-locks.actions.improve-data-access-patterns)
+ [

### Reduce or avoid full-table scans
](#ams-waits.sx-lock-hash-table-locks.actions.reduce-full-table-scans)
+ [

### Check the error logs for page corruption
](#ams-waits.sx-lock-hash-table-locks.actions.check-error-logs)

### Increase the size of the buffer pool


Make sure that the buffer pool is appropriately sized for the workload. To do so, you can check the buffer pool cache hit rate. Typically, if the value drops below 95 percent, consider increasing the buffer pool size. A larger buffer pool can keep frequently accessed pages in memory longer. To increase the size of the buffer pool, modify the value of the `innodb_buffer_pool_size` parameter. The default value of this parameter is based on the DB instance class size. For more information, see [Best practices for Amazon Aurora MySQL database configuration](https://aws.amazon.com/blogs/database/best-practices-for-amazon-aurora-mysql-database-configuration/).
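As a rough sketch of checking the hit rate, you can compare the buffer pool read counters. The exact threshold that matters depends on your workload:

```
mysql> SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';

-- Approximate hit rate (percent):
--   100 * (1 - Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests)
-- Innodb_buffer_pool_reads counts logical reads that missed the buffer
-- pool and had to go to storage; Innodb_buffer_pool_read_requests counts
-- all logical read requests.
```

If the ratio of misses to requests is consistently above a few percent, a larger buffer pool is a candidate fix.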

### Improve data access patterns


Check the queries affected by this wait and their execution plans. Consider improving data access patterns. For example, if you are using [mysqli_result::fetch_array](https://www.php.net/manual/en/mysqli-result.fetch-array.php), you can try increasing the array fetch size.

You can use Performance Insights to show queries and sessions that might be causing the `synch/sxlock/innodb/hash_table_locks` wait event.

**To find SQL queries that are responsible for high load**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Performance Insights**.

1. Choose a DB instance. The Performance Insights dashboard is shown for that DB instance.

1. In the **Database load** chart, choose **Slice by wait**.

1. At the bottom of the page, choose **Top SQL**.

   The chart lists the SQL queries that are responsible for the load. Those at the top of the list are most responsible. To resolve a bottleneck, focus on these statements.

For a useful overview of troubleshooting using Performance Insights, see the AWS Database Blog post [Analyze Amazon Aurora MySQL Workloads with Performance Insights](https://aws.amazon.com/blogs/database/analyze-amazon-aurora-mysql-workloads-with-performance-insights/).

### Reduce or avoid full-table scans


Monitor your workload to see if it's running full-table scans and, if it is, reduce or avoid them. For example, you can monitor status variables such as `Handler_read_rnd_next`. For more information, see [Server Status Variables](https://dev.mysql.com/doc/refman/5.7/en/server-status-variables.html#statvar_Handler_read_rnd_next) in the MySQL documentation.
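For example, a `Handler_read_rnd_next` value that rises much faster than `Handler_read_key` suggests that table scans dominate index lookups:

```
mysql> SHOW GLOBAL STATUS LIKE 'Handler_read%';
-- Handler_read_rnd_next increments for each row read during a table scan.
-- Handler_read_key increments for each row read through an index.
-- Sample the counters twice and compare the deltas for your workload.
```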

### Check the error logs for page corruption


Check the mysql-error.log for corruption-related messages logged near the time of the issue. These messages can help you resolve the problem. You might need to recreate objects that were reported as corrupted.

# synch/mutex/innodb/temp_pool_manager_mutex


The `synch/mutex/innodb/temp_pool_manager_mutex` wait event occurs when a session is waiting to acquire a mutex for managing the pool of session temporary tablespaces.

**Topics**
+ [

## Supported engine versions
](#ams-waits.io-temppoolmanager.context.supported)
+ [

## Context
](#ams-waits.io-temppoolmanager.context)
+ [

## Likely causes of increased waits
](#ams-waits.io-temppoolmanager.causes)
+ [

## Actions
](#ams-waits.io-temppoolmanager.actions)

## Supported engine versions


This wait event information is supported for the following engine versions:
+ Aurora MySQL version 3

## Context


Aurora MySQL version 3.x and higher uses `temp_pool_manager_mutex` to control multiple sessions accessing the temporary tablespace pool at the same time. Aurora MySQL manages storage through an Aurora cluster volume for persistent data and local storage for temporary files. A temporary tablespace is needed when a session creates a temporary table on the Aurora cluster volume. 

When a session first requests a temporary tablespace, MySQL allocates session temporary tablespaces from the shared pool. A session can hold up to 2 temporary tablespaces at a time for the following table types:
+ User-created temporary tables
+ Optimizer-generated internal temporary tables

The default `TempTable` engine uses the following overflow mechanism to handle temporary tables:
+ Stores tables in RAM up to the [https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_temptable_max_ram](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_temptable_max_ram) limit.
+ Moves to memory-mapped files on local storage when RAM is full.
+ Uses the shared cluster volume when memory-mapped files reach their [https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_temptable_max_mmap](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_temptable_max_mmap) limit.

After temporary tables exceed both RAM and local storage limits, MySQL manages them using on-disk tablespace.

When a session requires an on-disk temporary table, MySQL:
+ Looks for available `INACTIVE` tablespaces in the pool to reuse.
+ Creates 10 new tablespaces if no `INACTIVE` spaces exist.

When a session disconnects, MySQL:
+ Truncates the session's temporary tablespaces.
+ Marks them as `INACTIVE` in the pool for reuse.
+ Maintains the current pool size until server restart.
+ Returns to the default pool size (10 tablespaces) after restart.

## Likely causes of increased waits


Common situations that cause this wait event:
+ Concurrent sessions creating internal temporary tables on the cluster volume.
+ Concurrent sessions creating user temporary tables on the cluster volume.
+ Sudden termination of sessions using active tablespaces.
+ Tablespace pool expansion during heavy write workloads.
+ Concurrent queries accessing `INFORMATION_SCHEMA` tables.

## Actions


We recommend different actions depending on the causes of your wait event.

**Topics**
+ [

### Monitor and optimize temporary table usage
](#ams-waits.io-temppoolmanager.actions.monitor)
+ [

### Review queries using INFORMATION_SCHEMA
](#ams-waits.io-temppoolmanager.actions.schema-queries)
+ [

### Increase innodb_sync_array_size parameter
](#ams-waits.io-temppoolmanager.actions.sync_array)
+ [

### Implement connection pooling
](#ams-waits.io-temppoolmanager.actions.connection_pooling)

### Monitor and optimize temporary table usage


To monitor and optimize temporary table usage, use one of these methods:
+ Check the `Created_tmp_disk_tables` counter in Performance Insights to track on-disk temporary table creation across your Aurora cluster.
+ Run this command in your database to directly monitor temporary table creation: `mysql> show status like '%created_tmp_disk%'`.

**Note**  
Temporary table behavior differs between Aurora MySQL reader nodes and writer nodes. For more information, see [New temporary table behavior in Aurora MySQL version 3](ams3-temptable-behavior.md).

After identifying queries creating temporary tables, take these optimization steps:
+ Use `EXPLAIN` to examine query execution plans and identify where and why temporary tables are being created.
+ Modify queries to reduce temporary table usage where possible.
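For example, `EXPLAIN` flags statements that materialize internal temporary tables in the `Extra` column. The table name here is hypothetical:

```
mysql> EXPLAIN SELECT c1, COUNT(*) FROM mytable GROUP BY c1\G
-- Look for "Using temporary" (and often "Using filesort") in the
-- Extra column. Adding an index on c1 can let the optimizer group
-- rows in index order and avoid the temporary table.
```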

If query optimization alone doesn't resolve performance issues, consider adjusting these configuration parameters:
+ [https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_temptable_max_ram](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_temptable_max_ram) – Controls the maximum RAM usage for temporary tables.
+ [https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_temptable_max_mmap](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_temptable_max_mmap) – Sets the limit for memory-mapped file storage.
+ [https://dev.mysql.com/doc/refman/8.4/en/server-system-variables.html#sysvar_tmp_table_size](https://dev.mysql.com/doc/refman/8.4/en/server-system-variables.html#sysvar_tmp_table_size) – Applies when `aurora_tmptable_enable_per_table_limit` is enabled (disabled by default).

**Important**  
Certain query conditions always require on-disk temporary tables, regardless of configuration settings. For more information about the `TempTable` storage engine, see [Use the TempTable storage engine on Amazon RDS for MySQL and Amazon Aurora MySQL](https://aws.amazon.com/blogs/database/use-the-temptable-storage-engine-on-amazon-rds-for-mysql-and-amazon-aurora-mysql/).

### Review queries using INFORMATION_SCHEMA


When you query `INFORMATION_SCHEMA` tables, MySQL creates InnoDB temporary tables on the cluster volume. Each session needs a temporary tablespace for these tables, which can lead to performance issues during high concurrent access.

To improve performance:
+ Use `PERFORMANCE_SCHEMA` instead of `INFORMATION_SCHEMA` where possible.
+ If you must use `INFORMATION_SCHEMA`, reduce how often you run these queries.
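For example, in Aurora MySQL version 3 (MySQL 8.0 compatible, where `performance_schema.processlist` is available in MySQL 8.0.22 and higher), you can query the Performance Schema equivalent of a commonly used `INFORMATION_SCHEMA` table:

```
-- Instead of INFORMATION_SCHEMA.PROCESSLIST, which can require an
-- internal temporary table, query the Performance Schema directly:
mysql> SELECT ID, USER, DB, COMMAND, TIME, STATE
    -> FROM performance_schema.processlist;
```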

### Increase innodb_sync_array_size parameter


The `innodb_sync_array_size` parameter controls the size of the mutex/lock wait array in MySQL. The default value of `1` works for general workloads, but increasing it can reduce thread contention during high concurrency.

When your workload shows increasing numbers of waiting threads:
+ Monitor the number of waiting threads in your workload.
+ Set `innodb_sync_array_size` equal to or higher than your instance's vCPU count to split the thread coordination structure and reduce contention.

**Note**  
To determine the number of vCPUs available on your RDS instance, see the vCPU specifications in [Amazon RDS instance types](https://aws.amazon.com/rds/instance-types/).
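Before changing the parameter, you can check its current value. `innodb_sync_array_size` isn't dynamic, so in Aurora you change it in the DB parameter group and the new value takes effect after a reboot:

```
mysql> SHOW VARIABLES LIKE 'innodb_sync_array_size';
-- The parameter can only be set at startup. Modify it in the
-- DB parameter group, then reboot the instance for it to apply.
```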

### Implement connection pooling


MySQL assigns a dedicated tablespace to each session that creates a temporary table. This tablespace remains active until the database connection ends. To manage your resources more efficiently:
+ Implement connection pooling to limit the number of active temporary tablespaces.
+ Reuse existing connections instead of creating new ones for each operation.

# Tuning Aurora MySQL with thread states


The following table summarizes the most common general thread states for Aurora MySQL.


| General thread state | Description | 
| --- | --- | 
|  [creating sort index](ams-states.sort-index.md)  |  This thread state indicates that a thread is processing a `SELECT` statement that requires the use of an internal temporary table to sort the data.  | 
|  [sending data](ams-states.sending-data.md)  |  This thread state indicates that a thread is reading and filtering rows for a query to determine the correct result set.  | 

# creating sort index


The `creating sort index` thread state indicates that a thread is processing a `SELECT` statement that requires the use of an internal temporary table to sort the data.

**Topics**
+ [

## Supported engine versions
](#ams-states.sort-index.context.supported)
+ [

## Context
](#ams-states.sort-index.context)
+ [

## Likely causes of increased waits
](#ams-states.sort-index.causes)
+ [

## Actions
](#ams-states.sort-index.actions)

## Supported engine versions


This thread state information is supported for the following versions:
+ Aurora MySQL version 2 up to 2.09.2

## Context


The `creating sort index` state appears when a query with an `ORDER BY` or `GROUP BY` clause can't use an existing index to perform the operation. In this case, MySQL needs to perform a more expensive `filesort` operation. This operation is typically performed in memory if the result set isn't too large. Otherwise, it involves creating a file on disk.

## Likely causes of increased waits


The appearance of `creating sort index` doesn't by itself indicate a problem. If performance is poor, and you see frequent instances of `creating sort index`, the most likely cause is slow queries with `ORDER BY` or `GROUP BY` operators.

## Actions


The general guideline is to find queries with `ORDER BY` or `GROUP BY` clauses that are associated with the increases in the `creating sort index` state. Then see whether adding an index or increasing the sort buffer size solves the problem.

**Topics**
+ [

### Turn on the Performance Schema if it isn't turned on
](#ams-states.sort-index.actions.enable-pfs)
+ [

### Identify the problem queries
](#ams-states.sort-index.actions.identify)
+ [

### Examine the explain plans for filesort usage
](#ams-states.sort-index.actions.plan)
+ [

### Increase the sort buffer size
](#ams-states.sort-index.actions.increasebuffersize)

### Turn on the Performance Schema if it isn't turned on


Performance Insights reports thread states only if Performance Schema instruments aren't turned on. When Performance Schema instruments are turned on, Performance Insights reports wait events instead. Performance Schema instruments provide additional insights and better tools when you investigate potential performance problems. Therefore, we recommend that you turn on the Performance Schema. For more information, see [Overview of the Performance Schema for Performance Insights on Aurora MySQL](USER_PerfInsights.EnableMySQL.md).

### Identify the problem queries


To identify current queries that are causing increases in the `creating sort index` state, run `show processlist` and see if any of the queries have `ORDER BY` or `GROUP BY`. Optionally, run `explain for connection N`, where `N` is the process list ID of the query with `filesort`.

To identify past queries that are causing these increases, turn on the slow query log and find the queries with `ORDER BY`. Run `EXPLAIN` on the slow queries and look for "using filesort." For more information, see [Examine the explain plans for filesort usage](#ams-states.sort-index.actions.plan).

### Examine the explain plans for filesort usage


Identify the statements with `ORDER BY` or `GROUP BY` clauses that result in the `creating sort index` state. 

The following example shows how to run `explain` on a query. The `Extra` column shows that this query uses `filesort`.

```
mysql> explain select * from mytable order by c1 limit 10\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: mytable
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 2064548
     filtered: 100.00
        Extra: Using filesort
1 row in set, 1 warning (0.01 sec)
```

The following example shows the result of running `EXPLAIN` on the same query after an index is created on column `c1`.

```
mysql> alter table mytable add index (c1);
```

```
mysql> explain select * from mytable order by c1 limit 10\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: mytable
   partitions: NULL
         type: index
possible_keys: NULL
          key: c1
      key_len: 1023
          ref: NULL
         rows: 10
     filtered: 100.00
        Extra: Using index
1 row in set, 1 warning (0.01 sec)
```

For information on using indexes for sort order optimization, see [ORDER BY Optimization](https://dev.mysql.com/doc/refman/5.7/en/order-by-optimization.html) in the MySQL documentation.

### Increase the sort buffer size


To see whether a specific query required a `filesort` process that created a file on disk, check the `sort_merge_passes` variable value after running the query. The following shows an example.

```
mysql> show session status like 'sort_merge_passes';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| Sort_merge_passes | 0     |
+-------------------+-------+
1 row in set (0.01 sec)

-- run query
mysql> select * from mytable order by u limit 10;
-- run status again:

mysql> show session status like 'sort_merge_passes';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| Sort_merge_passes | 0     |
+-------------------+-------+
1 row in set (0.01 sec)
```

If the value of `sort_merge_passes` is high, consider increasing the sort buffer size. Apply the increase at the session level, because increasing it globally can significantly increase the amount of RAM MySQL uses. The following example shows how to change the sort buffer size before running a query. 

```
mysql> set session sort_buffer_size=10*1024*1024;
Query OK, 0 rows affected (0.00 sec)
-- run query
```

# sending data


The `sending data` thread state indicates that a thread is reading and filtering rows for a query to determine the correct result set. The name is misleading because it suggests that the state is transferring data, when the thread is actually collecting and preparing data to be sent later.

**Topics**
+ [

## Supported engine versions
](#ams-states.sending-data.context.supported)
+ [

## Context
](#ams-states.sending-data.context)
+ [

## Likely causes of increased waits
](#ams-states.sending-data.causes)
+ [

## Actions
](#ams-states.sending-data.actions)

## Supported engine versions


This thread state information is supported for the following versions:
+ Aurora MySQL version 2 up to 2.09.2

## Context


Many thread states are short-lasting. Operations occurring during `sending data` tend to perform large numbers of disk or cache reads. Therefore, `sending data` is often the longest-running state over the lifetime of a given query. This state appears when Aurora MySQL is doing the following:
+ Reading and processing rows for a `SELECT` statement
+ Performing a large number of reads from either disk or memory
+ Completing a full read of all data from a specific query
+ Reading data from a table, an index, or the work of a stored procedure
+ Sorting, grouping, or ordering data

After the `sending data` state finishes preparing the data, the thread state `writing to net` indicates the return of data to the client. Typically, `writing to net` is captured only when the result set is very large or severe network latency is slowing the transfer.

## Likely causes of increased waits


The appearance of `sending data` doesn't by itself indicate a problem. If performance is poor, and you see frequent instances of `sending data`, the most likely causes are as follows.

**Topics**
+ [

### Inefficient query
](#ams-states.sending-data.causes.structure)
+ [

### Suboptimal server configuration
](#ams-states.sending-data.causes.server)

### Inefficient query


In most cases, this state is caused by a query that isn't using an appropriate index to find its result set. For example, consider a query reading a 10 million record table for all orders placed in California, where the state column isn't indexed or is poorly indexed. In the latter case, the index might exist, but the optimizer ignores it because of low cardinality.

### Suboptimal server configuration


If several queries appear in the `sending data` state, the database server might be configured poorly. Specifically, the server might have the following issues:
+ The database server doesn't have enough computing capacity: disk I/O, disk type and speed, CPU, or number of CPUs.
+ The server is starved for allocated resources, such as the InnoDB buffer pool for InnoDB tables or the key buffer for MyISAM tables.
+ Per-thread memory settings such as `sort_buffer`, `read_buffer`, and `join_buffer` consume more RAM than required, starving the physical server for memory resources.

## Actions


The general guideline is to find queries that return large numbers of rows by checking the Performance Schema. If logging queries that don't use indexes is turned on, you can also examine the results from the slow logs.

**Topics**
+ [

### Turn on the Performance Schema if it isn't turned on
](#ams-states.sending-data.actions.enable-pfs)
+ [

### Examine memory settings
](#ams-states.sending-data.actions.memory)
+ [

### Examine the explain plans for index usage
](#ams-states.sending-data.actions.plans)
+ [

### Check the volume of data returned
](#ams-states.sending-data.actions.maintenance)
+ [

### Check for concurrency issues
](#ams-states.sending-data.actions.concurrent-queries)
+ [

### Check the structure of your queries
](#ams-states.sending-data.actions.subqueries)

### Turn on the Performance Schema if it isn't turned on


Performance Insights reports thread states only if Performance Schema instruments aren't turned on. When Performance Schema instruments are turned on, Performance Insights reports wait events instead. Performance Schema instruments provide additional insights and better tools when you investigate potential performance problems. Therefore, we recommend that you turn on the Performance Schema. For more information, see [Overview of the Performance Schema for Performance Insights on Aurora MySQL](USER_PerfInsights.EnableMySQL.md).

### Examine memory settings


Examine the memory settings for the primary buffer pools. Make sure that these pools are appropriately sized for the workload. If your database uses multiple buffer pool instances, make sure that they aren't divided into many small buffer pools. Threads can only use one buffer pool at a time.

Make sure that the following memory settings used for each thread are properly sized:
+ `read_buffer`
+ `read_rnd_buffer`
+ `sort_buffer`
+ `join_buffer`
+ `binlog_cache`

Unless you have a specific reason to modify the settings, use the default values.
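To review the current per-thread settings for a session, you can query the corresponding system variables directly:

```
mysql> SHOW VARIABLES WHERE Variable_name IN
    -> ('read_buffer_size', 'read_rnd_buffer_size',
    ->  'sort_buffer_size', 'join_buffer_size', 'binlog_cache_size');
-- Each of these buffers can be allocated per thread, so the total
-- memory cost scales with the number of concurrent connections.
```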

### Examine the explain plans for index usage


For queries in the `sending data` thread state, examine the plan to determine whether appropriate indexes are used. If a query isn't using a useful index, consider adding hints like `USE INDEX` or `FORCE INDEX`. Hints can greatly increase or decrease the time it takes to run a query, so use care before adding them.
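For example, assuming a hypothetical table `mytable` with an index on `c1` that the optimizer is ignoring, you can compare the plan with and without the hint:

```
mysql> EXPLAIN SELECT * FROM mytable WHERE c1 = 'abc'\G
mysql> EXPLAIN SELECT * FROM mytable FORCE INDEX (c1) WHERE c1 = 'abc'\G
-- Compare the type, key, and rows columns of the two plans before
-- committing to the hint in application code.
```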

### Check the volume of data returned


Check the tables that are being queried and the amount of data that they contain. Can any of this data be archived? In many cases, the cause of poor query execution times isn't the result of the query plan, but the volume of data to be processed. Many developers are very efficient in adding data to a database but seldom consider dataset life cycle in the design and development phases.

Look for queries that perform well in low-volume databases but perform poorly in your current system. Sometimes developers who design specific queries might not realize that these queries are returning 350,000 rows. The developers might have developed the queries in a lower-volume environment with smaller datasets than production environments have.
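One way to gauge data volume is to list the largest tables from the data dictionary. Note that `table_rows` and `data_length` are estimates based on table statistics, not exact counts.

```
-- Approximate row counts and data sizes for the ten largest user tables.
SELECT table_schema, table_name, table_rows,
       ROUND(data_length / 1024 / 1024) AS data_mb
FROM   information_schema.tables
WHERE  table_schema NOT IN
       ('mysql', 'information_schema', 'performance_schema', 'sys')
ORDER  BY data_length DESC
LIMIT  10;
```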

### Check for concurrency issues


Check whether multiple queries of the same type are running at the same time. Some forms of queries run efficiently when they run alone. However, if similar forms of query run together, or in high volume, they can cause concurrency issues. Often, these issues are caused when the database uses temp tables to render results. A restrictive transaction isolation level can also cause concurrency issues.

If tables are read and written to concurrently, the database might be using locks. To help identify periods of poor performance, examine the use of databases through large-scale batch processes. To see recent locks and rollbacks, examine the output of the `SHOW ENGINE INNODB STATUS` command.
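In the `mysql` client, the following commands show recent InnoDB activity and sessions currently blocked on row locks. The `sys.innodb_lock_waits` view is available in the `sys` schema on MySQL 5.7 and 8.0 compatible versions.

```
-- Recent lock activity, deadlocks, and rollbacks.
SHOW ENGINE INNODB STATUS\G

-- Sessions currently waiting on row locks, with the blocking session.
SELECT * FROM sys.innodb_lock_waits;
```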

### Check the structure of your queries


Check whether captured queries from these states use subqueries. This type of query often leads to poor performance because the database compiles the results internally and then substitutes them back into the query to render data. This process is an extra step for the database. In many cases, this step can cause poor performance in a highly concurrent loading condition.

Also check whether your queries use large numbers of `ORDER BY` and `GROUP BY` clauses. In such operations, often the database must first form the entire dataset in memory. Then it must order or group it in a specific manner before returning it to the client.

# Tuning Aurora MySQL with Amazon DevOps Guru proactive insights


DevOps Guru proactive insights detect known problematic conditions on your Aurora MySQL DB clusters before they occur. DevOps Guru can do the following:
+ Prevent many common database issues by cross-checking your database configuration against common recommended settings.
+ Alert you to critical issues in your fleet that, if left unchecked, can lead to larger problems later.
+ Alert you to newly discovered problems.

Every proactive insight contains an analysis of the cause of the problem and recommendations for corrective actions.

**Topics**
+ [

# The InnoDB history list length increased significantly
](proactive-insights.history-list.md)
+ [

# Database is creating temporary tables on disk
](proactive-insights.temp-tables.md)

# The InnoDB history list length increased significantly


Starting on *date*, your history list for row changes increased significantly, up to *length* on *db-instance*. This increase affects query and database shutdown performance.

**Topics**
+ [

## Supported engine versions
](#proactive-insights.history-list.context.supported)
+ [

## Context
](#proactive-insights.history-list.context)
+ [

## Likely causes for this issue
](#proactive-insights.history-list.causes)
+ [

## Actions
](#proactive-insights.history-list.actions)
+ [

## Relevant metrics
](#proactive-insights.history-list.metrics)

## Supported engine versions


This insight information is supported for all versions of Aurora MySQL.

## Context


The InnoDB transaction system maintains multiversion concurrency control (MVCC). When a row is modified, the pre-modification version of the data is stored as an undo record in an undo log. Every undo record has a reference to its previous undo record, forming a linked list.

The InnoDB history list is a global list of the undo logs for committed transactions. MySQL uses the history list to purge records and log pages when transactions no longer require the history. The history list length is the total number of undo logs that contain modifications in the history list. Each log contains one or more modifications. If the InnoDB history list length grows too large, indicating a large number of old row versions, queries and database shutdowns become slower.
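You can check the current history list length directly on a DB instance. It also appears as `History list length` in the `TRANSACTIONS` section of `SHOW ENGINE INNODB STATUS` output.

```
-- Current InnoDB history list length from the metrics table.
SELECT name, `count`
FROM   information_schema.innodb_metrics
WHERE  name = 'trx_rseg_history_len';
```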

## Likely causes for this issue


Typical causes of a long history list include the following:
+ Long-running transactions, either read or write
+ A heavy write load

## Actions


We recommend different actions depending on the causes of your insight.

**Topics**
+ [

### Don't begin any operation involving a database shutdown until the InnoDB history list decreases
](#proactive-insights.history-list.actions.no-shutdown)
+ [

### Identify and end long-running transactions
](#proactive-insights.history-list.actions.long-txn)
+ [

### Identify the top hosts and top users by using Performance Insights.
](#proactive-insights.history-list.actions.top-PI)

### Don't begin any operation involving a database shutdown until the InnoDB history list decreases


Because a long InnoDB history list slows database shutdowns, reduce the list size before initiating operations involving a database shutdown. These operations include major version database upgrades.

### Identify and end long-running transactions


You can find long-running transactions by querying `information_schema.innodb_trx`.

**Note**  
Make sure also to look for long-running transactions on read replicas.

**To identify and end long-running transactions**

1. In your SQL client, run the following query:

   ```
   SELECT a.trx_id, 
         a.trx_state, 
         a.trx_started, 
         TIMESTAMPDIFF(SECOND,a.trx_started, now()) as "Seconds Transaction Has Been Open", 
         a.trx_rows_modified, 
         b.USER, 
         b.host, 
         b.db, 
         b.command, 
         b.time, 
         b.state 
   FROM  information_schema.innodb_trx a, 
         information_schema.processlist b 
   WHERE a.trx_mysql_thread_id=b.id
     AND TIMESTAMPDIFF(SECOND,a.trx_started, now()) > 10 
   ORDER BY trx_started
   ```

1. End each long-running transaction with the stored procedure [mysql.rds_kill](mysql-stored-proc-ending.md#mysql_rds_kill).
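   For example, if the previous query shows a long-running transaction on thread ID 52 (a hypothetical value taken from the `processlist` `id` column), you can end it as follows.

   ```
   CALL mysql.rds_kill(52);
   ```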

### Identify the top hosts and top users by using Performance Insights.


In Performance Insights, group the database load by **Hosts** or **Users** to identify which clients are generating the heaviest write activity. Then optimize those transactions so that large numbers of modified rows are committed promptly rather than held open.

## Relevant metrics


The following metrics are related to this insight:
+ `trx_rseg_history_len` – This counter metric can be viewed in Performance Insights, as well as in the `INFORMATION_SCHEMA.INNODB_METRICS` table. For more information, see [InnoDB INFORMATION_SCHEMA metrics table](https://dev.mysql.com/doc/refman/8.0/en/innodb-information-schema-metrics-table.html) in the MySQL documentation.
+ `RollbackSegmentHistoryListLength` – This Amazon CloudWatch metric measures the undo logs that record committed transactions with delete-marked records. These records are scheduled to be processed by the InnoDB purge operation. The metric `trx_rseg_history_len` has the same value as `RollbackSegmentHistoryListLength`.
+ `PurgeBoundary` – The transaction number up to which InnoDB purging is allowed. If this CloudWatch metric doesn't advance for extended periods of time, it's a good indication that InnoDB purging is blocked by long-running transactions. To investigate, check the active transactions on your Aurora MySQL DB cluster. This metric is available only for Aurora MySQL version 2.11 and higher, and version 3.08 and higher.
+ `PurgeFinishedPoint` – The transaction number up to which InnoDB purging is performed. This CloudWatch metric can help you examine how fast InnoDB purging is progressing. This metric is available only for Aurora MySQL version 2.11 and higher, and version 3.08 and higher.
+ `TransactionAgeMaximum` – The age of the oldest active running transaction. This CloudWatch metric is available only for Aurora MySQL version 3.08 and higher.
+ `TruncateFinishedPoint` – The transaction number up to which undo truncation is performed. This CloudWatch metric is available only for Aurora MySQL version 2.11 and higher, and version 3.08 and higher.

For more information on the CloudWatch metrics, see [Instance-level metrics for Amazon Aurora](Aurora.AuroraMonitoring.Metrics.md#Aurora.AuroraMySQL.Monitoring.Metrics.instances).

# Database is creating temporary tables on disk


Your recent on-disk temporary table usage increased significantly, up to *percentage*. The database is creating around *number* temporary tables per second. This might impact performance and increase disk operations on *db-instance*.

**Topics**
+ [

## Supported engine versions
](#proactive-insights.temp-tables.context.supported)
+ [

## Context
](#proactive-insights.temp-tables.context)
+ [

## Likely causes for this issue
](#proactive-insights.temp-tables.causes)
+ [

## Actions
](#proactive-insights.temp-tables.actions)
+ [

## Relevant metrics
](#proactive-insights.temp-tables.metrics)

## Supported engine versions


This insight information is supported for all versions of Aurora MySQL.

## Context


Sometimes it's necessary for the MySQL server to create an internal temporary table while processing a query. Aurora MySQL can hold an internal temporary table in memory, where it can be processed by the TempTable or MEMORY storage engine, or stored on disk by InnoDB. For more information, see [Internal Temporary Table Use in MySQL](https://dev.mysql.com/doc/refman/5.6/en/internal-temporary-tables.html) in the *MySQL Reference Manual*.

## Likely causes for this issue


An increase in on-disk temporary tables indicates the use of complex queries. If the configured memory is insufficient to store temporary tables in memory, Aurora MySQL creates the tables on disk. This can impact performance and increase disk operations.

## Actions


We recommend different actions depending on the causes of your insight.
+ For Aurora MySQL version 3, we recommend that you use the TempTable storage engine.
+ Optimize your queries to return less data by selecting only necessary columns.

  If you turn on the Performance Schema with all `statement` instruments enabled and timed, you can query `SYS.statements_with_temp_tables` to retrieve the list of queries that use temporary tables. For more information, see [Prerequisites for Using the sys Schema](https://dev.mysql.com/doc/refman/8.0/en/sys-schema-prerequisites.html) in the MySQL documentation.
+ Consider indexing columns that are involved in sorting and grouping operations.
+ Rewrite your queries to avoid `BLOB` and `TEXT` columns. These columns always use disk.
+ Tune the following database parameters: `tmp_table_size` and `max_heap_table_size`.

   The default value for each of these parameters is 16 MiB. When you use the MEMORY storage engine for in-memory temporary tables, the maximum table size is defined by the `tmp_table_size` or `max_heap_table_size` value, whichever is smaller. When this maximum size is reached, MySQL automatically converts the in-memory internal temporary table to an InnoDB on-disk internal temporary table. For more information, see [Use the TempTable storage engine on Amazon RDS for MySQL and Amazon Aurora MySQL](https://aws.amazon.com/blogs/database/use-the-temptable-storage-engine-on-amazon-rds-for-mysql-and-amazon-aurora-mysql/).
**Note**  
When explicitly creating MEMORY tables with CREATE TABLE, only the `max_heap_table_size` variable determines how large a table can grow. There is also no conversion to an on-disk format.
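The `sys` schema query mentioned above can be as simple as the following, which lists the statements that created the most on-disk temporary tables. It requires the Performance Schema with all `statement` instruments enabled and timed.

```
-- Statements ranked by on-disk temporary table usage.
SELECT query, exec_count, disk_tmp_tables, tmp_tables_to_disk_pct
FROM   sys.statements_with_temp_tables
ORDER  BY disk_tmp_tables DESC
LIMIT  10;
```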

## Relevant metrics


The following Performance Insights metrics are related to this insight:
+ `Created_tmp_disk_tables`
+ `Created_tmp_tables`

For more information, see [Created_tmp_disk_tables](https://dev.mysql.com/doc/refman/8.0/en/server-status-variables.html#statvar_Created_tmp_disk_tables) in the MySQL documentation.
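You can also sample these counters directly on the DB instance. The on-disk ratio is the growth of `Created_tmp_disk_tables` relative to `Created_tmp_tables` over an interval.

```
SHOW GLOBAL STATUS LIKE 'Created_tmp%';
```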

# Parallel query for Amazon Aurora MySQL
Parallel query for Aurora MySQL<a name="parallel_query"></a><a name="pq"></a>

This topic describes the parallel query performance optimization for Amazon Aurora MySQL-Compatible Edition. This feature uses a special processing path for certain data-intensive queries, taking advantage of the Aurora shared storage architecture. Parallel query works best with Aurora MySQL DB clusters that have tables with millions of rows and analytic queries that take minutes or hours to complete. 

**Topics**
+ [

## Overview of parallel query for Aurora MySQL
](#aurora-mysql-parallel-query-overview)
+ [

# Creating a parallel query DB cluster in Aurora MySQL
](aurora-mysql-parallel-query-creating-cluster.md)
+ [

# Turning parallel query on and off in Aurora MySQL
](aurora-mysql-parallel-query-enabling.md)
+ [

# Optimizing parallel query in Aurora MySQL
](aurora-mysql-parallel-query-optimizing.md)
+ [

# Verifying which statements use parallel query for Aurora MySQL
](aurora-mysql-parallel-query-verifying.md)
+ [

# Monitoring parallel query for Aurora MySQL
](aurora-mysql-parallel-query-monitoring.md)
+ [

# SQL constructs for parallel query in Aurora MySQL
](aurora-mysql-parallel-query-sql.md)

## Overview of parallel query for Aurora MySQL
Overview of parallel query

 Aurora MySQL parallel query is an optimization that parallelizes some of the I/O and computation involved in processing data-intensive queries. The work that is parallelized includes retrieving rows from storage, extracting column values, and determining which rows match the conditions in the `WHERE` clause and join clauses. This data-intensive work is delegated (in database optimization terms, *pushed down*) to multiple nodes in the Aurora distributed storage layer. Without parallel query, each query brings all the scanned data to a single node within the Aurora MySQL cluster (the head node) and performs all the query processing there. 

**Tip**  
The PostgreSQL database engine also has a feature called "parallel query." That feature is unrelated to Aurora parallel query.

 When the parallel query feature is turned on, the Aurora MySQL engine automatically determines when queries can benefit, without requiring SQL changes such as hints or table attributes. In the following sections, you can find an explanation of when parallel query is applied to a query. You can also find how to make sure that parallel query is applied where it provides the most benefit. 

**Note**  
 The parallel query optimization provides the most benefit for long-running queries that take minutes or hours to complete. Aurora MySQL generally doesn't perform parallel query optimization for inexpensive queries. It also generally doesn't perform parallel query optimization if another optimization technique makes more sense, such as query caching, buffer pool caching, or index lookups. If you find that parallel query isn't being used when you expect it, see [Verifying which statements use parallel query for Aurora MySQL](aurora-mysql-parallel-query-verifying.md). 

**Topics**
+ [

### Benefits
](#aurora-mysql-parallel-query-benefits)
+ [

### Architecture
](#aurora-mysql-parallel-query-architecture)
+ [

### Prerequisites
](#aurora-mysql-parallel-query-prereqs)
+ [

### Limitations
](#aurora-mysql-parallel-query-limitations)
+ [

### I/O costs with parallel query
](#aurora-mysql-parallel-query-cost)

### Benefits


 With parallel query, you can run data-intensive analytic queries on Aurora MySQL tables. In many cases, you can get an order-of-magnitude performance improvement over the traditional division of labor for query processing. 

 Benefits of parallel query include the following: 
+  Improved I/O performance, due to parallelizing physical read requests across multiple storage nodes. 
+  Reduced network traffic. Aurora doesn't transmit entire data pages from storage nodes to the head node and then filter out unnecessary rows and columns afterward. Instead, Aurora transmits compact tuples containing only the column values needed for the result set. 
+  Reduced CPU usage on the head node, due to pushing down function processing, row filtering, and column projection for the `WHERE` clause. 
+  Reduced memory pressure on the buffer pool. The pages processed by the parallel query aren't added to the buffer pool. This approach reduces the chance of a data-intensive scan evicting frequently used data from the buffer pool. 
+  Potentially reduced data duplication in your extract, transform, load (ETL) pipeline, by making it practical to perform long-running analytic queries on existing data. 

### Architecture


 The parallel query feature uses the major architectural principles of Aurora MySQL: decoupling the database engine from the storage subsystem, and reducing network traffic by streamlining communication protocols. Aurora MySQL uses these techniques to speed up write-intensive operations such as redo log processing. Parallel query applies the same principles to read operations. 

**Note**  
 The architecture of Aurora MySQL parallel query differs from that of similarly named features in other database systems. Aurora MySQL parallel query doesn't involve symmetric multiprocessing (SMP) and so doesn't depend on the CPU capacity of the database server. The parallel processing happens in the storage layer, independent of the Aurora MySQL server that serves as the query coordinator. 

 By default, without parallel query, the processing for an Aurora query involves transmitting raw data to a single node within the Aurora cluster (the *head node*). Aurora then performs all further processing for that query in a single thread on that single node. With parallel query, much of this I/O-intensive and CPU-intensive work is delegated to nodes in the storage layer. Only the compact rows of the result set are transmitted back to the head node, with rows already filtered, and column values already extracted and transformed. The performance benefit comes from the reduction in network traffic, reduction in CPU usage on the head node, and parallelizing the I/O across the storage nodes. The amount of parallel I/O, filtering, and projection is independent of the number of DB instances in the Aurora cluster that runs the query. 

### Prerequisites


Using all features of parallel query requires an Aurora MySQL DB cluster that's running version 2.09 or higher. If you already have a cluster that you want to use with parallel query, you can upgrade it to a compatible version and turn on parallel query afterward. In that case, make sure to follow the upgrade procedure in [Upgrade considerations for parallel query](aurora-mysql-parallel-query-optimizing.md#aurora-mysql-parallel-query-upgrade) because the configuration setting names and default values are different in these newer versions.

The DB instances in your cluster must use the `db.r*` instance classes.

Make sure that hash join optimization is turned on for your cluster. To learn how, see [Turning on hash join for parallel query clusters](aurora-mysql-parallel-query-enabling.md#aurora-mysql-parallel-query-enabling-hash-join).

 To customize parameters such as `aurora_parallel_query` and `aurora_disable_hash_join`, you must have a custom parameter group that you use with your cluster. You can specify these parameters individually for each DB instance by using a DB parameter group. However, we recommend that you specify them in a DB cluster parameter group. That way, all DB instances in your cluster inherit the same settings for these parameters. 

### Limitations


The following limitations apply to the parallel query feature:
+ Parallel query isn't supported with the Aurora I/O-Optimized DB cluster storage configuration.
+ You can't use parallel query with the db.t2 or db.t3 instance classes. This limitation applies even if you request parallel query using the `aurora_pq_force` session variable.
+ Parallel query doesn't apply to tables using the `COMPRESSED` or `REDUNDANT` row formats. Use the `COMPACT` or `DYNAMIC` row formats for tables you plan to use with parallel query.
+ Aurora uses a cost-based algorithm to determine whether to use the parallel query mechanism for each SQL statement. Using certain SQL constructs in a statement can prevent parallel query or make parallel query unlikely for that statement. For information about compatibility of SQL constructs with parallel query, see [SQL constructs for parallel query in Aurora MySQL](aurora-mysql-parallel-query-sql.md).
+ Each Aurora DB instance can run only a certain number of parallel query sessions at one time. If a query has multiple parts that use parallel query, such as subqueries, joins, or `UNION` operators, those phases run in sequence. The statement only counts as a single parallel query session at any one time. You can monitor the number of active sessions using the [parallel query status variables](aurora-mysql-parallel-query-monitoring.md). You can check the limit on concurrent sessions for a given DB instance by querying the status variable `Aurora_pq_max_concurrent_requests`.
+ Parallel query is available in all AWS Regions that Aurora supports. For most AWS Regions, the minimum required Aurora MySQL version to use parallel query is 2.09.
+ Parallel query is designed to improve the performance of data-intensive queries. It isn't designed for lightweight queries.
+ We recommend that you use reader nodes for SELECT statements, especially data-intensive ones.

### I/O costs with parallel query


If your Aurora MySQL cluster uses parallel query, you might see an increase in `VolumeReadIOPS` values. Parallel queries don't use the buffer pool. Thus, although the queries are fast, this optimized processing can result in an increase in read operations and associated charges.

Parallel query I/O costs for your query are metered at the storage layer, and will be the same or larger with parallel query turned on. Your benefit is the improvement in query performance. There are two reasons for potentially higher I/O costs with parallel query:
+ Even if some of the data in a table is in the buffer pool, parallel query requires all data to be scanned at the storage layer, incurring I/O costs.
+ Running a parallel query doesn't warm up the buffer pool. As a result, consecutive runs of the same parallel query incur the full I/O cost.

# Creating a parallel query DB cluster in Aurora MySQL
Creating a parallel query cluster

 To create an Aurora MySQL cluster with parallel query, add new instances to it, or perform other administrative operations, you use the same AWS Management Console and AWS CLI techniques that you do with other Aurora MySQL clusters. You can create a new cluster to work with parallel query. You can also create a DB cluster to work with parallel query by restoring from a snapshot of a MySQL-compatible Aurora DB cluster. If you aren't familiar with the process for creating a new Aurora MySQL cluster, you can find background information and prerequisites in [Creating an Amazon Aurora DB cluster](Aurora.CreateInstance.md). 

When you choose an Aurora MySQL engine version, we recommend that you choose the latest one available. Currently, all available Aurora MySQL versions support parallel query. You have more flexibility to turn parallel query on and off, or use parallel query with existing clusters, if you use the latest versions.

 Whether you create a new cluster or restore from a snapshot, you use the same techniques to add new DB instances that you do with other Aurora MySQL clusters. 

You can create a parallel query cluster using the Amazon RDS console or the AWS CLI.

**Contents**
+ [

## Creating a parallel query cluster using the console
](#aurora-mysql-parallel-query-creating-cluster-console)
+ [

## Creating a parallel query cluster using the CLI
](#aurora-mysql-parallel-query-creating-cluster-cli)

## Creating a parallel query cluster using the console


 You can create a new parallel query cluster with the console as described following. 

**To create a parallel query cluster with the AWS Management Console**

1.  Follow the general AWS Management Console procedure in [Creating an Amazon Aurora DB cluster](Aurora.CreateInstance.md). 

1. For **Engine type**, choose Aurora MySQL.

1. For **Additional configuration**, choose a parameter group that you created for **DB cluster parameter group**. Using such a custom parameter group is required for Aurora MySQL 2.09 and higher. In your DB cluster parameter group, specify the parameter settings `aurora_parallel_query=ON` and `aurora_disable_hash_join=OFF`. Doing so turns on parallel query for the cluster, and turns on the hash join optimization that works in combination with parallel query.

**To verify that a new cluster can use parallel query**

1. Create a cluster using the preceding technique.

1. (For Aurora MySQL version 2 or 3) Check that the `aurora_parallel_query` configuration setting is true.

   ```
   mysql> select @@aurora_parallel_query;
   +-------------------------+
   | @@aurora_parallel_query |
   +-------------------------+
   |                       1 |
   +-------------------------+
   ```

1. (For Aurora MySQL version 2) Check that the `aurora_disable_hash_join` setting is false.

   ```
   mysql> select @@aurora_disable_hash_join;
   +----------------------------+
   | @@aurora_disable_hash_join |
   +----------------------------+
   |                          0 |
   +----------------------------+
   ```

1.  With some large tables and data-intensive queries, check the query plans to confirm that some of your queries are using the parallel query optimization. To do so, follow the procedure in [Verifying which statements use parallel query for Aurora MySQL](aurora-mysql-parallel-query-verifying.md). 

## Creating a parallel query cluster using the CLI


 You can create a new parallel query cluster with the CLI as described following. 

**To create a parallel query cluster with the AWS CLI**

1.  (Optional) Check which Aurora MySQL versions are compatible with parallel query clusters. To do so, use the `describe-db-engine-versions` command and check the value of the `SupportsParallelQuery` field. For an example, see [Checking Aurora MySQL version compatibility for parallel query](aurora-mysql-parallel-query-optimizing.md#aurora-mysql-parallel-query-checking-compatibility). 

1.  (Optional) Create a custom DB cluster parameter group with the settings `aurora_parallel_query=ON` and `aurora_disable_hash_join=OFF`. Use commands such as the following.

   ```
   aws rds create-db-cluster-parameter-group --db-parameter-group-family aurora-mysql8.0 --db-cluster-parameter-group-name pq-enabled-80-compatible
   aws rds modify-db-cluster-parameter-group --db-cluster-parameter-group-name pq-enabled-80-compatible \
     --parameters ParameterName=aurora_parallel_query,ParameterValue=ON,ApplyMethod=pending-reboot
   aws rds modify-db-cluster-parameter-group --db-cluster-parameter-group-name pq-enabled-80-compatible \
     --parameters ParameterName=aurora_disable_hash_join,ParameterValue=OFF,ApplyMethod=pending-reboot
   ```

    If you perform this step, specify the option `--db-cluster-parameter-group-name my_cluster_parameter_group` in the subsequent `create-db-cluster` statement. Substitute the name of your own parameter group. If you omit this step, you create the parameter group and associate it with the cluster later, as described in [Turning parallel query on and off in Aurora MySQL](aurora-mysql-parallel-query-enabling.md). 

1.  Follow the general AWS CLI procedure in [Creating an Amazon Aurora DB cluster](Aurora.CreateInstance.md). 

1. Specify the following set of options:
   + For the `--engine` option, use `aurora-mysql`. This value produces a parallel query cluster that's compatible with MySQL 5.7 or 8.0, depending on the engine version that you specify.
   +  For the `--db-cluster-parameter-group-name` option, specify the name of a DB cluster parameter group that you created and specified the parameter value `aurora_parallel_query=ON`. If you omit this option, you can create the cluster with a default parameter group and later modify it to use such a custom parameter group. 
   + For the `--engine-version` option, use an Aurora MySQL version that's compatible with parallel query. Use the procedure from [Planning for a parallel query cluster](aurora-mysql-parallel-query-optimizing.md#aurora-mysql-parallel-query-planning) to get a list of versions if necessary.

      The following code example shows how. Substitute your own value for each of the environment variables such as *$CLUSTER_ID*. This example also specifies the `--manage-master-user-password` option to generate the master user password and manage it in Secrets Manager. For more information, see [Password management with Amazon Aurora and AWS Secrets Manager](rds-secrets-manager.md). Alternatively, you can use the `--master-user-password` option to specify and manage the password yourself. 

     ```
     aws rds create-db-cluster --db-cluster-identifier $CLUSTER_ID \
       --engine aurora-mysql --engine-version 8.0.mysql_aurora.3.04.1 \
       --master-username $MASTER_USER_ID --manage-master-user-password \
       --db-cluster-parameter-group-name $CUSTOM_CLUSTER_PARAM_GROUP
     
     aws rds create-db-instance --db-instance-identifier ${INSTANCE_ID}-1 \
       --engine same_value_as_in_create_cluster_command \
       --db-cluster-identifier $CLUSTER_ID --db-instance-class $INSTANCE_CLASS
     ```

1. Verify that a cluster you created or restored has the parallel query feature available.

   Check that the `aurora_parallel_query` configuration setting exists. If this setting has the value 1, parallel query is ready for you to use. If this setting has the value 0, set it to 1 before you can use parallel query. Either way, the cluster is capable of performing parallel queries.

   ```
   mysql> select @@aurora_parallel_query;
   +------------------------+
   | @@aurora_parallel_query|
   +------------------------+
   |                      1 |
   +------------------------+
   ```

**To restore a snapshot to a parallel query cluster with the AWS CLI**

1.  Check which Aurora MySQL versions are compatible with parallel query clusters. To do so, use the `describe-db-engine-versions` command and check the value of the `SupportsParallelQuery` field. For an example, see [Checking Aurora MySQL version compatibility for parallel query](aurora-mysql-parallel-query-optimizing.md#aurora-mysql-parallel-query-checking-compatibility). Decide which version to use for the restored cluster.

1.  Locate an Aurora MySQL-compatible cluster snapshot. 

1. Follow the general AWS CLI procedure in [Restoring from a DB cluster snapshot](aurora-restore-snapshot.md).

   ```
   aws rds restore-db-cluster-from-snapshot \
     --db-cluster-identifier mynewdbcluster \
     --snapshot-identifier mydbclustersnapshot \
     --engine aurora-mysql
   ```

1.  Verify that a cluster you created or restored has the parallel query feature available. Use the same verification procedure as in [Creating a parallel query cluster using the CLI](#aurora-mysql-parallel-query-creating-cluster-cli). 

# Turning parallel query on and off in Aurora MySQL
Turning parallel query on and off

When parallel query is turned on, Aurora MySQL determines whether to use it at runtime for each query. In the case of joins, unions, subqueries, and so on, Aurora MySQL determines whether to use parallel query at runtime for each query block. For details, see [Verifying which statements use parallel query for Aurora MySQL](aurora-mysql-parallel-query-verifying.md) and [SQL constructs for parallel query in Aurora MySQL](aurora-mysql-parallel-query-sql.md).

You can turn parallel query on and off dynamically at both the global and session level for a DB instance by using the `aurora_parallel_query` parameter. You can change the `aurora_parallel_query` setting in your DB cluster parameter group to turn parallel query on or off by default.

```
mysql> select @@aurora_parallel_query;
+------------------------+
| @@aurora_parallel_query|
+------------------------+
|                      1 |
+------------------------+
```

 To toggle the `aurora_parallel_query` parameter at the session level, use the standard methods to change a client configuration setting. For example, you can do so through the `mysql` command line or within a JDBC or ODBC application. The command on the standard MySQL client is `set session aurora_parallel_query = {'ON'/'OFF'}`. You can also add the session-level parameter to the JDBC configuration or within your application code to turn on or turn off parallel query dynamically. 

 You can permanently change the setting for the `aurora_parallel_query` parameter, either for a specific DB instance or for your whole cluster. If you specify the parameter value in a DB parameter group, that value applies only to the specific DB instances in your cluster that use that parameter group. If you specify the parameter value in a DB cluster parameter group, all DB instances in the cluster inherit the same setting. To toggle the `aurora_parallel_query` parameter, use the techniques for working with parameter groups, as described in [Parameter groups for Amazon Aurora](USER_WorkingWithParamGroups.md). Follow these steps: 

1.  Create a custom cluster parameter group (recommended) or a custom DB parameter group. 

1.  In this parameter group, update `aurora_parallel_query` to the value that you want. 

1.  Depending on whether you created a DB cluster parameter group or a DB parameter group, attach the parameter group to your Aurora cluster or to the specific DB instances where you plan to use the parallel query feature. 
**Tip**  
Because `aurora_parallel_query` is a dynamic parameter, changing this setting doesn't require a cluster restart. However, any connections that were using parallel query before the change continue to do so until the connection is closed or the instance is rebooted.

 You can modify the parallel query parameter by using the [ModifyDBClusterParameterGroup](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBClusterParameterGroup.html) or [ModifyDBParameterGroup](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBParameterGroup.html) API operation or the AWS Management Console. 
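As an illustration of the API-based change described above, the following Python sketch only assembles the request arguments for boto3's `modify_db_cluster_parameter_group` (or `modify_db_parameter_group`) call. The `build_pq_toggle` helper is hypothetical, and the `ApplyMethod` of `immediate` relies on `aurora_parallel_query` being a dynamic parameter, as noted in the tip above.

```python
# Hypothetical helper: assembles the arguments for toggling
# aurora_parallel_query through the RDS API.
def build_pq_toggle(group_name, turn_on, cluster_level=True):
    args = {
        "Parameters": [{
            "ParameterName": "aurora_parallel_query",
            "ParameterValue": "ON" if turn_on else "OFF",
            # aurora_parallel_query is dynamic, so "immediate" avoids a reboot.
            "ApplyMethod": "immediate",
        }]
    }
    key = "DBClusterParameterGroupName" if cluster_level else "DBParameterGroupName"
    args[key] = group_name
    return args
```

You would pass the result to `boto3.client('rds').modify_db_cluster_parameter_group(**args)`, or to `modify_db_parameter_group` when `cluster_level=False`.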

You can turn on hash join for parallel query clusters, turn parallel query on and off using the Amazon RDS console or the AWS CLI, and override the parallel query optimizer.

**Contents**
+ [

## Turning on hash join for parallel query clusters
](#aurora-mysql-parallel-query-enabling-hash-join)
+ [

## Turning on and turning off parallel query using the console
](#aurora-mysql-parallel-query-enabling-console)
+ [

## Turning on and turning off parallel query using the CLI
](#aurora-mysql-parallel-query-enabling-cli)
+ [

## Overriding the parallel query optimizer
](#aurora-mysql-parallel-query-enabling.aurora_pq_force)

## Turning on hash join for parallel query clusters


Parallel query is typically used for the kinds of resource-intensive queries that benefit from the hash join optimization. Thus, it's helpful to make sure that hash joins are turned on for clusters where you plan to use parallel query. For information about how to use hash joins effectively, see [Optimizing large Aurora MySQL join queries with hash joins](AuroraMySQL.BestPractices.Performance.md#Aurora.BestPractices.HashJoin).

## Turning on and turning off parallel query using the console


 You can turn on or turn off parallel query at the DB instance level or the DB cluster level by working with parameter groups. 

**To turn on or turn off parallel query for a DB cluster with the AWS Management Console**

1.  Create a custom parameter group, as described in [Parameter groups for Amazon Aurora](USER_WorkingWithParamGroups.md). 

1. Update **aurora\_parallel\_query** to **1** (turned on) or **0** (turned off). For clusters where the parallel query feature is available, **aurora\_parallel\_query** is turned off by default.

1.  If you use a custom cluster parameter group, attach it to the Aurora DB cluster where you plan to use the parallel query feature. If you use a custom DB parameter group, attach it to one or more DB instances in the cluster. We recommend using a cluster parameter group. Doing so makes sure that all DB instances in the cluster have the same settings for parallel query and associated features such as hash join. 

## Turning on and turning off parallel query using the CLI


 You can modify the parallel query parameter by using the `modify-db-cluster-parameter-group` or `modify-db-parameter-group` command. Choose the appropriate command depending on whether you specify the value of `aurora_parallel_query` through a DB cluster parameter group or a DB parameter group. 

**To turn on or turn off parallel query for a DB cluster with the CLI**
+  Modify the parallel query parameter by using the `modify-db-cluster-parameter-group` command. Use a command such as the following. Substitute the appropriate name for your own custom parameter group. Substitute either `ON` or `OFF` for the `ParameterValue` portion of the `--parameters` option. 

  ```
  $ aws rds modify-db-cluster-parameter-group --db-cluster-parameter-group-name cluster_param_group_name \
    --parameters ParameterName=aurora_parallel_query,ParameterValue=ON,ApplyMethod=pending-reboot
  {
      "DBClusterParameterGroupName": "cluster_param_group_name"
  }
  ```

  For older Aurora MySQL versions that used the now-deprecated `aurora_pq` parameter, substitute `aurora_pq` for the parameter name:

  ```
  aws rds modify-db-cluster-parameter-group --db-cluster-parameter-group-name cluster_param_group_name \
    --parameters ParameterName=aurora_pq,ParameterValue=ON,ApplyMethod=pending-reboot
  ```

 You can also turn on or turn off parallel query at the session level, for example through the `mysql` command line or within a JDBC or ODBC application. To do so, use the standard methods to change a client configuration setting. For example, the command on the standard MySQL client is `set session aurora_parallel_query = {'ON'/'OFF'}` for Aurora MySQL.

 You can also add the session-level parameter to the JDBC configuration or within your application code to turn on or turn off parallel query dynamically. 

## Overriding the parallel query optimizer


You can use the `aurora_pq_force` session variable to override the parallel query optimizer and request parallel query for every query. We recommend that you do this only for testing purposes. The following example shows how to use `aurora_pq_force` in a session.

```
set SESSION aurora_parallel_query = ON;
set SESSION aurora_pq_force = ON;
```

To turn off the override, do the following:

```
set SESSION aurora_pq_force = OFF;
```

# Optimizing parallel query in Aurora MySQL
Optimizing parallel query

To optimize your DB cluster for parallel query, consider which DB clusters would benefit from parallel query and whether to upgrade for parallel query. Then, tune your workload and create schema objects for parallel query.

**Contents**
+ [

## Planning for a parallel query cluster
](#aurora-mysql-parallel-query-planning)
  + [

### Checking Aurora MySQL version compatibility for parallel query
](#aurora-mysql-parallel-query-checking-compatibility)
+ [

## Upgrade considerations for parallel query
](#aurora-mysql-parallel-query-upgrade)
  + [

### Upgrading parallel query clusters to Aurora MySQL version 3
](#aurora-mysql-parallel-query-upgrade-pqv2)
  + [

### Upgrading to Aurora MySQL 2.09 and higher
](#aurora-mysql-parallel-query-upgrade-2.09)
+ [

## Performance tuning for parallel query
](#aurora-mysql-parallel-query-performance)
+ [

## Creating schema objects to take advantage of parallel query
](#aurora-mysql-parallel-query-tables)

## Planning for a parallel query cluster
Planning for a parallel query cluster

Planning for a DB cluster that has parallel query turned on requires making some choices. These include performing setup steps (either creating or restoring a full Aurora MySQL cluster) and deciding how broadly to turn on parallel query across your DB cluster.

Consider the following as part of planning:
+ If you use Aurora MySQL that's compatible with MySQL 5.7, you must create a provisioned cluster. Then you turn on parallel query using the `aurora_parallel_query` parameter.

  If you have an existing Aurora MySQL cluster, you don't have to create a new cluster to use parallel query. You can associate your cluster, or specific DB instances in the cluster, with a parameter group that has the `aurora_parallel_query` parameter turned on. By doing so, you can reduce the time and effort to set up the relevant data to use with parallel query.
+ Plan for any large tables that you need to reorganize so that you can use parallel query when accessing them. You might need to create new versions of some large tables where parallel query is useful. For example, you might need to remove full-text search indexes. For details, see [Creating schema objects to take advantage of parallel query](#aurora-mysql-parallel-query-tables).

### Checking Aurora MySQL version compatibility for parallel query
Checking Aurora MySQL version compatibility for parallel query

To check which Aurora MySQL versions are compatible with parallel query clusters, use the `describe-db-engine-versions` AWS CLI command and check the value of the `SupportsParallelQuery` field. The following code example shows how to check which combinations are available for parallel query clusters in a specified AWS Region. Make sure to specify the full `--query` parameter string on a single line.

```
aws rds describe-db-engine-versions --region us-east-1 --engine aurora-mysql \
--query '*[]|[?SupportsParallelQuery == `true`].[EngineVersion]' --output text
```

The preceding command produces output similar to the following. The output might vary depending on which Aurora MySQL versions are available in the specified AWS Region.

```
5.7.mysql_aurora.2.11.1
5.7.mysql_aurora.2.11.2
5.7.mysql_aurora.2.11.3
5.7.mysql_aurora.2.11.4
5.7.mysql_aurora.2.11.5
5.7.mysql_aurora.2.11.6
5.7.mysql_aurora.2.12.0
5.7.mysql_aurora.2.12.1
5.7.mysql_aurora.2.12.2
5.7.mysql_aurora.2.12.3
5.7.mysql_aurora.2.12.4
8.0.mysql_aurora.3.04.0
8.0.mysql_aurora.3.04.1
8.0.mysql_aurora.3.04.2
8.0.mysql_aurora.3.04.3
8.0.mysql_aurora.3.05.2
8.0.mysql_aurora.3.06.0
8.0.mysql_aurora.3.06.1
8.0.mysql_aurora.3.07.0
8.0.mysql_aurora.3.07.1
```

 After you start using parallel query with a cluster, you can monitor performance and remove obstacles to parallel query usage. For those instructions, see [Performance tuning for parallel query](#aurora-mysql-parallel-query-performance). 
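If you save the full JSON output of `describe-db-engine-versions` (that is, run the command without the `--query` filter), you can apply the same `SupportsParallelQuery` check in a script. The following Python sketch uses an abbreviated, illustrative sample of that output:

```python
import json

# Abbreviated, illustrative sample of `aws rds describe-db-engine-versions`
# JSON output; a real response contains many more fields per version.
sample = json.loads("""
{"DBEngineVersions": [
  {"EngineVersion": "5.7.mysql_aurora.2.12.4", "SupportsParallelQuery": true},
  {"EngineVersion": "8.0.mysql_aurora.3.04.0", "SupportsParallelQuery": true},
  {"EngineVersion": "8.0.mysql_aurora.3.01.0", "SupportsParallelQuery": false}
]}
""")

# Keep only the versions where the SupportsParallelQuery field is true.
pq_versions = [v["EngineVersion"]
               for v in sample["DBEngineVersions"]
               if v.get("SupportsParallelQuery")]
print(pq_versions)  # prints ['5.7.mysql_aurora.2.12.4', '8.0.mysql_aurora.3.04.0']
```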

## Upgrade considerations for parallel query
Upgrading a parallel query cluster

 Depending on the original and destination versions when you upgrade a parallel query cluster, you might find enhancements in the types of queries that parallel query can optimize. You might also find that you don't need to specify a special engine mode parameter for parallel query. The following sections explain the considerations when you upgrade a cluster that has parallel query turned on. 

### Upgrading parallel query clusters to Aurora MySQL version 3


 Several SQL statements, clauses, and data types have new or improved parallel query support starting in Aurora MySQL version 3. When you upgrade from a release that's earlier than version 3, check whether additional queries can benefit from parallel query optimizations. For information about these parallel query enhancements, see [Column data types](aurora-mysql-parallel-query-sql.md#aurora-mysql-parallel-query-sql-datatypes), [Partitioned tables](aurora-mysql-parallel-query-sql.md#aurora-mysql-parallel-query-sql-partitioning), and [Aggregate functions, GROUP BY clauses, and HAVING clauses](aurora-mysql-parallel-query-sql.md#aurora-mysql-parallel-query-sql-aggregation). 

If you're upgrading a parallel query cluster from Aurora MySQL 2.08 or lower, also learn about changes in how to turn on parallel query. To do so, read [Upgrading to Aurora MySQL 2.09 and higher](#aurora-mysql-parallel-query-upgrade-2.09).

In Aurora MySQL version 3, hash join optimization is turned on by default. The `aurora_disable_hash_join` configuration option from earlier versions isn't used.

### Upgrading to Aurora MySQL 2.09 and higher


In Aurora MySQL version 2.09 and higher, parallel query works for provisioned clusters and doesn't require the `parallelquery` engine mode parameter. Thus, you don't need to create a new cluster or restore from an existing snapshot to use parallel query with these versions. You can use the upgrade procedures described in [Upgrading the minor version or patch level of an Aurora MySQL DB cluster](AuroraMySQL.Updates.Patching.md) to upgrade your cluster to such a version. You can upgrade an older cluster regardless of whether it was a parallel query cluster or a provisioned cluster. To reduce the number of choices in the **Engine version** menu, you can choose **Show versions that support the parallel query feature** to filter the entries in that menu. Then choose Aurora MySQL 2.09 or higher.

After you upgrade an earlier parallel query cluster to Aurora MySQL 2.09 or higher, you turn on parallel query in the upgraded cluster. Parallel query is turned off by default in these versions, and the procedure for turning it on is different. The hash join optimization is also turned off by default and must be turned on separately. Thus, make sure that you turn on these settings again after the upgrade. For instructions on doing so, see [Turning parallel query on and off in Aurora MySQL](aurora-mysql-parallel-query-enabling.md) and [Turning on hash join for parallel query clusters](aurora-mysql-parallel-query-enabling.md#aurora-mysql-parallel-query-enabling-hash-join).

In particular, you turn on parallel query by using the configuration parameters `aurora_parallel_query=ON` and `aurora_disable_hash_join=OFF` instead of `aurora_pq_supported` and `aurora_pq`. The `aurora_pq_supported` and `aurora_pq` parameters are deprecated in the newer Aurora MySQL versions.

 In the upgraded cluster, the `EngineMode` attribute has the value `provisioned` instead of `parallelquery`. To check whether parallel query is available for a specified engine version, now you check the value of the `SupportsParallelQuery` field in the output of the `describe-db-engine-versions` AWS CLI command. In earlier Aurora MySQL versions, you checked for the presence of `parallelquery` in the `SupportedEngineModes` list. 

After you upgrade to Aurora MySQL version 2.09 or higher, you can take advantage of the following features. These features aren't available to parallel query clusters running older Aurora MySQL versions.
+ Performance Insights. For more information, see [Monitoring DB load with Performance Insights on Amazon Aurora](USER_PerfInsights.md).
+ Backtracking. For more information, see [Backtracking an Aurora DB cluster](AuroraMySQL.Managing.Backtrack.md).
+ Stopping and starting the cluster. For more information, see [Stopping and starting an Amazon Aurora DB cluster](aurora-cluster-stop-start.md).

## Performance tuning for parallel query
Performance tuning

 To manage the performance of a workload with parallel query, make sure that parallel query is used for the queries where this optimization helps the most. 

 To do so, you can do the following: 
+  Make sure that your biggest tables are compatible with parallel query. You might change table properties or recreate some tables so that queries for those tables can take advantage of the parallel query optimization. To learn how, see [Creating schema objects to take advantage of parallel query](#aurora-mysql-parallel-query-tables). 
+  Monitor which queries use parallel query. To learn how, see [Monitoring parallel query for Aurora MySQL](aurora-mysql-parallel-query-monitoring.md). 
+  Verify that parallel query is being used for the most data-intensive and long-running queries, and with the right level of concurrency for your workload. To learn how, see [Verifying which statements use parallel query for Aurora MySQL](aurora-mysql-parallel-query-verifying.md). 
+  Fine-tune your SQL code so that parallel query applies to the queries where you expect it to help. To learn how, see [SQL constructs for parallel query in Aurora MySQL](aurora-mysql-parallel-query-sql.md). 

## Creating schema objects to take advantage of parallel query
Creating schema objects

 Before you create or modify tables that you plan to use for parallel query, make sure to familiarize yourself with the requirements described in [Prerequisites](aurora-mysql-parallel-query.md#aurora-mysql-parallel-query-prereqs) and [Limitations](aurora-mysql-parallel-query.md#aurora-mysql-parallel-query-limitations). 

 Because parallel query requires tables to use the `ROW_FORMAT=Compact` or `ROW_FORMAT=Dynamic` setting, check your Aurora configuration settings for any changes to the `INNODB_FILE_FORMAT` configuration option. Issue the `SHOW TABLE STATUS` statement to confirm the row format for all the tables in a database. 

 Before changing your schema to turn on parallel query for more tables, make sure to test. Your tests should confirm whether parallel query results in a net increase in performance for those tables. Also, make sure that the schema requirements for parallel query are otherwise compatible with your goals. 

 For example, before switching from `ROW_FORMAT=Compressed` to `ROW_FORMAT=Compact` or `ROW_FORMAT=Dynamic`, test the performance of workloads for the original and new tables. Also, consider other potential effects such as increased data volume. 
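As a sketch of the row-format check described above, the following Python snippet flags tables whose row format doesn't meet the requirement in this section. The table names and formats are illustrative sample data, not real `SHOW TABLE STATUS` output:

```python
# Row formats that satisfy the parallel query requirement in this section.
SUPPORTED_FORMATS = {"Compact", "Dynamic"}

def incompatible_tables(table_status_rows):
    """Return names of tables whose row format rules out parallel query."""
    return [name for name, row_format in table_status_rows
            if row_format not in SUPPORTED_FORMATS]

# Illustrative (Name, Row_format) pairs, as reported by SHOW TABLE STATUS.
rows = [("lineitem", "Dynamic"), ("archive", "Compressed"), ("part", "Compact")]
print(incompatible_tables(rows))  # prints ['archive']
```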

# Verifying which statements use parallel query for Aurora MySQL
Verifying parallel query usage

 In typical operation, you don't need to perform any special actions to take advantage of parallel query. After a query meets the essential requirements for parallel query, the query optimizer automatically decides whether to use parallel query for each specific query. 

 If you run experiments in a development or test environment, you might find that parallel query isn't used because your tables are too small in number of rows or overall data volume. The data for the table might also be entirely in the buffer pool, especially for tables that you created recently to perform experiments. 

 As you monitor or tune cluster performance, make sure to decide whether parallel query is being used in the appropriate contexts. You might adjust the database schema, settings, SQL queries, or even the cluster topology and application connection settings to take advantage of this feature. 

 To check if a query is using parallel query, check the query plan (also known as the "explain plan") by running the [EXPLAIN](https://dev.mysql.com/doc/refman/5.7/en/execution-plan-information.html) statement. For examples of how SQL statements, clauses, and expressions affect `EXPLAIN` output for parallel query, see [SQL constructs for parallel query in Aurora MySQL](aurora-mysql-parallel-query-sql.md). 

 The following example demonstrates the difference between a traditional query plan and a parallel query plan. This explain plan is from Query 3 from the TPC-H benchmark. Many of the sample queries throughout this section use the tables from the TPC-H dataset. You can get the table definitions, queries, and the `dbgen` program that generates sample data from [the TPC-H website](http://www.tpc.org/tpch/). 

```
EXPLAIN SELECT l_orderkey,
  sum(l_extendedprice * (1 - l_discount)) AS revenue,
  o_orderdate,
  o_shippriority
FROM customer,
  orders,
  lineitem
WHERE c_mktsegment = 'AUTOMOBILE'
AND c_custkey = o_custkey
AND l_orderkey = o_orderkey
AND o_orderdate < date '1995-03-13'
AND l_shipdate > date '1995-03-13'
GROUP BY l_orderkey,
  o_orderdate,
  o_shippriority
ORDER BY revenue DESC,
  o_orderdate LIMIT 10;
```

 By default, the query might have a plan like the following. If you don't see hash join used in the query plan, make sure that optimization is turned on first. 

```
+----+-------------+----------+------------+------+---------------+------+---------+------+----------+----------+----------------------------------------------------+
| id | select_type | table    | partitions | type | possible_keys | key  | key_len | ref  | rows     | filtered | Extra                                              |
+----+-------------+----------+------------+------+---------------+------+---------+------+----------+----------+----------------------------------------------------+
|  1 | SIMPLE      | customer | NULL       | ALL  | NULL          | NULL | NULL    | NULL |  1480234 |    10.00 | Using where; Using temporary; Using filesort       |
|  1 | SIMPLE      | orders   | NULL       | ALL  | NULL          | NULL | NULL    | NULL | 14875240 |     3.33 | Using where; Using join buffer (Block Nested Loop) |
|  1 | SIMPLE      | lineitem | NULL       | ALL  | NULL          | NULL | NULL    | NULL | 59270573 |     3.33 | Using where; Using join buffer (Block Nested Loop) |
+----+-------------+----------+------------+------+---------------+------+---------+------+----------+----------+----------------------------------------------------+
```

For Aurora MySQL version 3, you turn on hash join at the session level by issuing the following statement.

```
SET optimizer_switch='block_nested_loop=on';
```

For Aurora MySQL version 2.09 and higher, you set the `aurora_disable_hash_join` DB parameter or DB cluster parameter to `0` (off). Turning off `aurora_disable_hash_join` sets the value of `optimizer_switch` to `hash_join=on`.

After you turn on hash join, try running the `EXPLAIN` statement again. For information about how to use hash joins effectively, see [Optimizing large Aurora MySQL join queries with hash joins](AuroraMySQL.BestPractices.Performance.md#Aurora.BestPractices.HashJoin).

 With hash join turned on but parallel query turned off, the query might have a plan like the following, which uses hash join but not parallel query. 

```
+----+-------------+----------+...+-----------+-----------------------------------------------------------------+
| id | select_type | table    |...| rows      | Extra                                                           |
+----+-------------+----------+...+-----------+-----------------------------------------------------------------+
|  1 | SIMPLE      | customer |...|   5798330 | Using where; Using index; Using temporary; Using filesort       |
|  1 | SIMPLE      | orders   |...| 154545408 | Using where; Using join buffer (Hash Join Outer table orders)   |
|  1 | SIMPLE      | lineitem |...| 606119300 | Using where; Using join buffer (Hash Join Outer table lineitem) |
+----+-------------+----------+...+-----------+-----------------------------------------------------------------+
```

 After parallel query is turned on, two steps in this query plan can use the parallel query optimization, as shown under the `Extra` column in the `EXPLAIN` output. The I/O-intensive and CPU-intensive processing for those steps is pushed down to the storage layer. 

```
+----+...+--------------------------------------------------------------------------------------------------------------------------------+
| id |...| Extra                                                                                                                          |
+----+...+--------------------------------------------------------------------------------------------------------------------------------+
|  1 |...| Using where; Using index; Using temporary; Using filesort                                                                      |
|  1 |...| Using where; Using join buffer (Hash Join Outer table orders); Using parallel query (4 columns, 1 filters, 1 exprs; 0 extra)   |
|  1 |...| Using where; Using join buffer (Hash Join Outer table lineitem); Using parallel query (4 columns, 1 filters, 1 exprs; 0 extra) |
+----+...+--------------------------------------------------------------------------------------------------------------------------------+
```

 For information about how to interpret `EXPLAIN` output for a parallel query and the parts of SQL statements that parallel query can apply to, see [SQL constructs for parallel query in Aurora MySQL](aurora-mysql-parallel-query-sql.md). 
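One way to automate that check is to scan the `Extra` column of each `EXPLAIN` row for the `Using parallel query` marker. A minimal Python sketch, using rows shaped like the preceding output:

```python
# Count how many steps in an EXPLAIN result use parallel query, by
# scanning the Extra column for the "Using parallel query" marker.
def parallel_query_steps(explain_rows):
    return sum(1 for row in explain_rows
               if "Using parallel query" in row.get("Extra", ""))

# Rows shaped like the preceding EXPLAIN output (other columns omitted).
rows = [
    {"Extra": "Using where; Using index; Using temporary; Using filesort"},
    {"Extra": "Using where; Using join buffer (Hash Join Outer table orders); "
              "Using parallel query (4 columns, 1 filters, 1 exprs; 0 extra)"},
    {"Extra": "Using where; Using join buffer (Hash Join Outer table lineitem); "
              "Using parallel query (4 columns, 1 filters, 1 exprs; 0 extra)"},
]
print(parallel_query_steps(rows))  # prints 2
```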

 The following example output shows the results of running the preceding query on a db.r4.2xlarge instance with a cold buffer pool. The query runs substantially faster when using parallel query. 

**Note**  
 Because timings depend on many environmental factors, your results might be different. Always conduct your own performance tests to confirm the findings with your own environment, workload, and so on. 

```
-- Without parallel query
+------------+-------------+-------------+----------------+
| l_orderkey | revenue     | o_orderdate | o_shippriority |
+------------+-------------+-------------+----------------+
|   92511430 | 514726.4896 | 1995-03-06  |              0 |
.
.
|   28840519 | 454748.2485 | 1995-03-08  |              0 |
+------------+-------------+-------------+----------------+
10 rows in set (24 min 49.99 sec)
```

```
-- With parallel query
+------------+-------------+-------------+----------------+
| l_orderkey | revenue     | o_orderdate | o_shippriority |
+------------+-------------+-------------+----------------+
|   92511430 | 514726.4896 | 1995-03-06  |              0 |
.
.
|   28840519 | 454748.2485 | 1995-03-08  |              0 |
+------------+-------------+-------------+----------------+
10 rows in set (1 min 49.91 sec)
```
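The two timings above work out to roughly a 13.6x speedup:

```python
# Speedup implied by the timings shown above.
without_pq = 24 * 60 + 49.99   # 24 min 49.99 sec
with_pq = 1 * 60 + 49.91       # 1 min 49.91 sec
speedup = without_pq / with_pq
print(f"{speedup:.1f}x")       # prints 13.6x
```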

 Many of the sample queries throughout this section use the tables from this TPC-H dataset, particularly the `PART` table, which has 20 million rows and the following definition. 

```
+---------------+---------------+------+-----+---------+-------+
| Field         | Type          | Null | Key | Default | Extra |
+---------------+---------------+------+-----+---------+-------+
| p_partkey     | int(11)       | NO   | PRI | NULL    |       |
| p_name        | varchar(55)   | NO   |     | NULL    |       |
| p_mfgr        | char(25)      | NO   |     | NULL    |       |
| p_brand       | char(10)      | NO   |     | NULL    |       |
| p_type        | varchar(25)   | NO   |     | NULL    |       |
| p_size        | int(11)       | NO   |     | NULL    |       |
| p_container   | char(10)      | NO   |     | NULL    |       |
| p_retailprice | decimal(15,2) | NO   |     | NULL    |       |
| p_comment     | varchar(23)   | NO   |     | NULL    |       |
+---------------+---------------+------+-----+---------+-------+
```

 Experiment with your workload to get a sense of whether individual SQL statements can take advantage of parallel query. Then use the following monitoring techniques to help verify how often parallel query is used in real workloads over time. For real workloads, extra factors such as concurrency limits apply. 

# Monitoring parallel query for Aurora MySQL
Monitoring parallel query

 If your Aurora MySQL cluster uses parallel query, you might see an increase in `VolumeReadIOPS` values. Parallel queries don't use the buffer pool. Thus, although the queries are fast, this optimized processing can result in an increase in read operations and associated charges. 

 In addition to the Amazon CloudWatch metrics described in [Viewing metrics in the Amazon RDS console](USER_Monitoring.md), Aurora provides other global status variables. You can use these global status variables to help monitor parallel query execution. They can give you insights into why the optimizer might use or not use parallel query in a given situation. To access these variables, you can use the [`SHOW GLOBAL STATUS`](https://dev.mysql.com/doc/refman/5.7/en/server-status-variables.html) command. You can also find these variables listed following. 

 A parallel query session isn't necessarily a one-to-one mapping with the queries performed by the database. For example, suppose that your query plan has two steps that use parallel query. In that case, the query involves two parallel sessions and the counters for requests attempted and requests successful are incremented by two. 

 When you experiment with parallel query by issuing `EXPLAIN` statements, expect to see increases in the counters designated as "not chosen" even though the queries aren't actually running. When you work with parallel query in production, you can check if the "not chosen" counters are increasing faster than you expect. At this point, you can adjust so that parallel query runs for the queries that you expect. To do so, you can change your cluster settings, query mix, DB instances where parallel query is turned on, and so on.

 These counters are tracked at the DB instance level. When you connect to a different endpoint, you might see different metrics because each DB instance runs its own set of parallel queries. You might also see different metrics when the reader endpoint connects to a different DB instance for each session. 
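As an illustration of how you might combine these counters, the following Python sketch derives a success ratio and remaining concurrency headroom from `SHOW GLOBAL STATUS` values. The numbers are made up for the example:

```python
# Illustrative values, as returned by SHOW GLOBAL STATUS on one DB instance.
status = {
    "Aurora_pq_request_attempted": 120,
    "Aurora_pq_request_executed": 100,
    "Aurora_pq_request_in_progress": 3,
    "Aurora_pq_max_concurrent_requests": 16,
}

# Fraction of requested parallel query sessions that ran successfully.
executed_ratio = (status["Aurora_pq_request_executed"]
                  / status["Aurora_pq_request_attempted"])

# How many more parallel query sessions this instance could run right now.
headroom = (status["Aurora_pq_max_concurrent_requests"]
            - status["Aurora_pq_request_in_progress"])

print(f"executed: {executed_ratio:.0%}, concurrency headroom: {headroom}")
# prints executed: 83%, concurrency headroom: 13
```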


| Name | Description | 
| --- | --- | 
|  `Aurora_pq_bytes_returned`  |  The number of bytes for the tuple data structures transmitted to the head node during parallel queries. Divide by 16,384 to compare against `Aurora_pq_pages_pushed_down`.  | 
|  `Aurora_pq_max_concurrent_requests`  |  The maximum number of parallel query sessions that can run concurrently on this Aurora DB instance. This is a fixed number that depends on the AWS DB instance class.  | 
|  `Aurora_pq_pages_pushed_down`  |  The number of data pages (each with a fixed size of 16 KiB) where parallel query avoided a network transmission to the head node.  | 
|  `Aurora_pq_request_attempted`  |  The number of parallel query sessions requested. This value might represent more than one session per query, depending on SQL constructs such as subqueries and joins.  | 
|  `Aurora_pq_request_executed`  |  The number of parallel query sessions run successfully.  | 
|  `Aurora_pq_request_failed`  |  The number of parallel query sessions that returned an error to the client. In some cases, a request for a parallel query might fail, for example due to a problem in the storage layer. In these cases, the query part that failed is retried using the nonparallel query mechanism. If the retried query also fails, an error is returned to the client and this counter is incremented.  | 
|  `Aurora_pq_request_in_progress`  |  The number of parallel query sessions currently in progress. This number applies to the particular Aurora DB instance that you are connected to, not the entire Aurora DB cluster. To see if a DB instance is close to its concurrency limit, compare this value to `Aurora_pq_max_concurrent_requests`.  | 
|  `Aurora_pq_request_not_chosen`  |  The number of times parallel query wasn't chosen to satisfy a query. This value is the sum of several other more granular counters. An `EXPLAIN` statement can increment this counter even though the query isn't actually performed.  | 
|  `Aurora_pq_request_not_chosen_below_min_rows`  |  The number of times parallel query wasn't chosen due to the number of rows in the table. An `EXPLAIN` statement can increment this counter even though the query isn't actually performed.  | 
|  `Aurora_pq_request_not_chosen_column_bit`  |  The number of parallel query requests that use the nonparallel query processing path because of an unsupported data type in the list of projected columns.  | 
|  `Aurora_pq_request_not_chosen_column_geometry`  |  The number of parallel query requests that use the nonparallel query processing path because the table has columns with the `GEOMETRY` data type. For information about Aurora MySQL versions that remove this limitation, see [Upgrading parallel query clusters to Aurora MySQL version 3](aurora-mysql-parallel-query-optimizing.md#aurora-mysql-parallel-query-upgrade-pqv2).  | 
|  `Aurora_pq_request_not_chosen_column_lob`  |  The number of parallel query requests that use the nonparallel query processing path because the table has columns with a `LOB` data type, or `VARCHAR` columns that are stored externally due to the declared length. For information about Aurora MySQL versions that remove this limitation, see [Upgrading parallel query clusters to Aurora MySQL version 3](aurora-mysql-parallel-query-optimizing.md#aurora-mysql-parallel-query-upgrade-pqv2).  | 
|  `Aurora_pq_request_not_chosen_column_virtual`  |  The number of parallel query requests that use the nonparallel query processing path because the table contains a virtual column.  | 
|  `Aurora_pq_request_not_chosen_custom_charset`  |  The number of parallel query requests that use the nonparallel query processing path because the table has columns with a custom character set.  | 
|  `Aurora_pq_request_not_chosen_fast_ddl`  |  The number of parallel query requests that use the nonparallel query processing path because the table is currently being altered by a fast DDL `ALTER` statement.  | 
|  `Aurora_pq_request_not_chosen_few_pages_outside_buffer_pool`  |  The number of times parallel query wasn't chosen, even though less than 95 percent of the table data was in the buffer pool, because there wasn't enough unbuffered table data to make parallel query worthwhile.  | 
|  `Aurora_pq_request_not_chosen_full_text_index`  |  The number of parallel query requests that use the nonparallel query processing path because the table has full-text indexes.  | 
|  `Aurora_pq_request_not_chosen_high_buffer_pool_pct`  |  The number of times parallel query wasn't chosen because a high percentage of the table data (currently, greater than 95 percent) was already in the buffer pool. In these cases, the optimizer determines that reading the data from the buffer pool is more efficient. An `EXPLAIN` statement can increment this counter even though the query isn't actually performed.  | 
|  `Aurora_pq_request_not_chosen_index_hint`  |  The number of parallel query requests that use the nonparallel query processing path because the query includes an index hint.  | 
|  `Aurora_pq_request_not_chosen_innodb_table_format`  |  The number of parallel query requests that use the nonparallel query processing path because the table uses an unsupported InnoDB row format. Aurora parallel query only applies to the `COMPACT`, `REDUNDANT`, and `DYNAMIC` row formats.  | 
|  `Aurora_pq_request_not_chosen_long_trx`  |  The number of parallel query requests that use the nonparallel query processing path because the query was started inside a long-running transaction. An `EXPLAIN` statement can increment this counter even though the query isn't actually performed.  | 
|  `Aurora_pq_request_not_chosen_no_where_clause`  |  The number of parallel query requests that use the nonparallel query processing path because the query doesn't include any `WHERE` clause.  | 
|  `Aurora_pq_request_not_chosen_range_scan`  |  The number of parallel query requests that use the nonparallel query processing path because the query uses a range scan on an index.  | 
|  `Aurora_pq_request_not_chosen_row_length_too_long`  |  The number of parallel query requests that use the nonparallel query processing path because the total combined length of all the columns is too long.  | 
|  `Aurora_pq_request_not_chosen_small_table`  |  The number of times parallel query wasn't chosen due to the overall size of the table, as determined by number of rows and average row length. An `EXPLAIN` statement can increment this counter even though the query isn't actually performed.  | 
|  `Aurora_pq_request_not_chosen_temporary_table`  |  The number of parallel query requests that use the nonparallel query processing path because the query refers to temporary tables that use the unsupported `MyISAM` or `memory` table types.  | 
|  `Aurora_pq_request_not_chosen_tx_isolation`  |  The number of parallel query requests that use the nonparallel query processing path because the query uses an unsupported transaction isolation level. On reader DB instances, parallel query only applies to the `REPEATABLE READ` and `READ COMMITTED` isolation levels.  | 
|  `Aurora_pq_request_not_chosen_update_delete_stmts`  |  The number of parallel query requests that use the nonparallel query processing path because the query is part of an `UPDATE` or `DELETE` statement.  | 
|  `Aurora_pq_request_not_chosen_unsupported_access`  |  The number of parallel query requests that use the nonparallel query processing path because the `WHERE` clause doesn't meet the criteria for parallel query. This result can occur if the query doesn't require a data-intensive scan, or if the query is a `DELETE` or `UPDATE` statement.  | 
|  `Aurora_pq_request_not_chosen_unsupported_storage_type`  |  The number of parallel query requests that use the nonparallel query processing path because the Aurora MySQL DB cluster isn't using a supported Aurora cluster storage configuration. This parameter is available in Aurora MySQL version 3.04 and higher. For more information, see [Limitations](aurora-mysql-parallel-query.md#aurora-mysql-parallel-query-limitations).  | 
|  `Aurora_pq_request_throttled`  |  The number of times parallel query wasn't chosen due to the maximum number of concurrent parallel queries already running on a particular Aurora DB instance.  | 
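You can check any of these counters from a client session with a standard `SHOW STATUS` statement. The following is a minimal sketch; the exact set of `Aurora_pq_` counters returned depends on your Aurora MySQL version.

```
mysql> -- Global counters reflect activity on the DB instance you're connected to.
mysql> show global status like 'Aurora_pq%';
```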

# SQL constructs for parallel query in Aurora MySQL
SQL constructs for parallel query

 In the following section, you can find more detail about why particular SQL statements use or don't use parallel query, and about how Aurora MySQL features interact with parallel query. This information can help you diagnose performance issues for a cluster that uses parallel query, or understand how parallel query applies to your particular workload. 

 The decision to use parallel query depends on many factors that exist at the moment that the statement runs. Thus, a particular query might use parallel query always, never, or only under certain conditions. 

**Tip**  
 When you view these examples in HTML, you can use the **Copy** widget in the upper-right corner of each code listing to copy the SQL code to try yourself. Using the **Copy** widget avoids copying the extra characters around the `mysql>` prompt and `->` continuation lines. 

**Topics**
+ [

## EXPLAIN statement
](#aurora-mysql-parallel-query-sql-explain)
+ [

## WHERE clause
](#aurora-mysql-parallel-query-sql-where)
+ [

## Data definition language (DDL)
](#aurora-mysql-parallel-query-sql-ddl)
+ [

## Column data types
](#aurora-mysql-parallel-query-sql-datatypes)
+ [

## Partitioned tables
](#aurora-mysql-parallel-query-sql-partitioning)
+ [

## Aggregate functions, GROUP BY clauses, and HAVING clauses
](#aurora-mysql-parallel-query-sql-aggregation)
+ [

## Function calls in WHERE clause
](#aurora-mysql-parallel-query-sql-functions)
+ [

## LIMIT clause
](#aurora-mysql-parallel-query-sql-limit)
+ [

## Comparison operators
](#aurora-mysql-parallel-query-sql-comparisons)
+ [

## Joins
](#aurora-mysql-parallel-query-sql-joins)
+ [

## Subqueries
](#aurora-mysql-parallel-query-sql-subqueries)
+ [

## UNION
](#aurora-mysql-parallel-query-sql-union)
+ [

## Views
](#aurora-mysql-parallel-query-sql-views)
+ [

## Data manipulation language (DML) statements
](#aurora-mysql-parallel-query-sql-dml)
+ [

## Transactions and locking
](#aurora-mysql-parallel-query-sql-transactions)
+ [

## B-tree indexes
](#aurora-mysql-parallel-query-sql-indexes)
+ [

## Full-text search (FTS) indexes
](#aurora-mysql-parallel-query-sql-fts)
+ [

## Virtual columns
](#aurora-mysql-parallel-query-sql-virtual-column)
+ [

## Built-in caching mechanisms
](#aurora-mysql-parallel-query-sql-caching)
+ [

## Optimizer hints
](#aurora-mysql-parallel-query-hints)
+ [

## MyISAM temporary tables
](#aurora-mysql-parallel-query-sql-myisam)

## EXPLAIN statement


 As shown in examples throughout this section, the `EXPLAIN` statement indicates whether each stage of a query is currently eligible for parallel query. It also indicates which aspects of a query can be pushed down to the storage layer. The most important items in the query plan are the following: 
+  A value other than `NULL` for the `key` column suggests that the query can be performed efficiently using index lookups, and parallel query is unlikely. 
+  A small value for the `rows` column (a value not in the millions) suggests that the query isn't accessing enough data to make parallel query worthwhile. This means that parallel query is unlikely. 
+  The `Extra` column shows you if parallel query is expected to be used. This output looks like the following example. 

  ```
  Using parallel query (A columns, B filters, C exprs; D extra)
  ```

   The `columns` number represents how many columns are referred to in the query block. 

   The `filters` number represents the number of `WHERE` predicates representing a simple comparison of a column value to a constant. The comparison can be for equality, inequality, or a range. Aurora can parallelize these kinds of predicates most effectively. 

   The `exprs` number represents the number of expressions such as function calls, operators, or other expressions that can also be parallelized, though not as effectively as a filter condition. 

   The `extra` number represents how many expressions can't be pushed down and are performed by the head node. 

 For example, consider the following `EXPLAIN` output. 

```
mysql> explain select p_name, p_mfgr from part
    -> where p_brand is not null
    -> and upper(p_type) is not null
    -> and round(p_retailprice) is not null;
+----+-------------+-------+...+----------+----------------------------------------------------------------------------+
| id | select_type | table |...| rows     | Extra                                                                      |
+----+-------------+-------+...+----------+----------------------------------------------------------------------------+
|  1 | SIMPLE      | part  |...| 20427936 | Using where; Using parallel query (5 columns, 1 filters, 2 exprs; 0 extra) |
+----+-------------+-------+...+----------+----------------------------------------------------------------------------+
```

 The information from the `Extra` column shows that five columns are extracted from each row to evaluate the query conditions and construct the result set. One `WHERE` predicate involves a filter, that is, a column that is directly tested in the `WHERE` clause. Two `WHERE` predicates require evaluating more complicated expressions, in this case involving function calls. The `0 extra` field confirms that all the operations in the `WHERE` clause are pushed down to the storage layer as part of parallel query processing. 

 In cases where parallel query isn't chosen, you can typically deduce the reason from the other columns of the `EXPLAIN` output. For example, the `rows` value might be too small, or the `possible_keys` column might indicate that the query can use an index lookup instead of a data-intensive scan. The following example shows a query where the optimizer can estimate that the query will scan only a small number of rows. It does so based on the characteristics of the primary key. In this case, parallel query isn't required. 

```
mysql> explain select count(*) from part where p_partkey between 1 and 100;
+----+-------------+-------+-------+---------------+---------+---------+------+------+--------------------------+
| id | select_type | table | type  | possible_keys | key     | key_len | ref  | rows | Extra                    |
+----+-------------+-------+-------+---------------+---------+---------+------+------+--------------------------+
|  1 | SIMPLE      | part  | range | PRIMARY       | PRIMARY | 4       | NULL |   99 | Using where; Using index |
+----+-------------+-------+-------+---------------+---------+---------+------+------+--------------------------+
```

 The output showing whether parallel query will be used takes into account all available factors at the moment that the `EXPLAIN` statement is run. The optimizer might make a different choice when the query is actually run, if the situation changed in the meantime. For example, `EXPLAIN` might report that a statement will use parallel query. But when the query is actually run later, it might not use parallel query based on the conditions then. Such conditions can include several other parallel queries running concurrently. They can also include rows being deleted from the table, a new index being created, too much time passing within an open transaction, and so on. 
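One way to confirm what actually happened at run time is to compare the parallel query counters before and after running the statement. The following is a sketch; it assumes that a counter such as `Aurora_pq_request_executed` is available in your Aurora MySQL version, and it reuses the `part` table from the examples in this section.

```
mysql> show global status like 'Aurora_pq_request%';
mysql> select count(*) from part where p_name is not null;
mysql> -- If Aurora_pq_request_executed increased, the query ran as a parallel query.
mysql> show global status like 'Aurora_pq_request%';
```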

## WHERE clause


 For a query to use the parallel query optimization, it *must* include a `WHERE` clause. 

 The parallel query optimization speeds up many kinds of expressions used in the `WHERE` clause: 
+  Simple comparisons of a column value to a constant, known as *filters*. These comparisons benefit the most from being pushed down to the storage layer. The number of filter expressions in a query is reported in the `EXPLAIN` output. 
+  Other kinds of expressions in the `WHERE` clause are also pushed down to the storage layer where possible. The number of such expressions in a query is reported in the `EXPLAIN` output. These expressions can be function calls, `LIKE` operators, `CASE` expressions, and so on. 
+  Certain functions and operators aren't currently pushed down by parallel query. The number of such expressions in a query is reported as the `extra` counter in the `EXPLAIN` output. The rest of the query can still use parallel query. 
+  While expressions in the select list aren't pushed down, queries containing such functions can still benefit from reduced network traffic for the intermediate results of parallel queries. For example, queries that call aggregation functions in the select list can benefit from parallel query, even though the aggregation functions aren't pushed down. 

 For example, the following query does a full-table scan and processes all the values for the `P_BRAND` column. However, it doesn't use parallel query because the query doesn't include any `WHERE` clause. 

```
mysql> explain select count(*), p_brand from part group by p_brand;
+----+-------------+-------+------+---------------+------+---------+------+----------+---------------------------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows     | Extra                           |
+----+-------------+-------+------+---------------+------+---------+------+----------+---------------------------------+
|  1 | SIMPLE      | part  | ALL  | NULL          | NULL | NULL    | NULL | 20427936 | Using temporary; Using filesort |
+----+-------------+-------+------+---------------+------+---------+------+----------+---------------------------------+
```

 In contrast, the following query includes `WHERE` predicates that filter the results, so parallel query can be applied: 

```
mysql> explain select count(*), p_brand from part where p_name is not null
    ->   and p_mfgr in ('Manufacturer#1', 'Manufacturer#3') and p_retailprice > 1000
    -> group by p_brand;
+----+...+----------+-------------------------------------------------------------------------------------------------------------+
| id |...| rows     | Extra                                                                                                       |
+----+...+----------+-------------------------------------------------------------------------------------------------------------+
|  1 |...| 20427936 | Using where; Using temporary; Using filesort; Using parallel query (5 columns, 1 filters, 2 exprs; 0 extra) |
+----+...+----------+-------------------------------------------------------------------------------------------------------------+
```

 If the optimizer estimates that the number of returned rows for a query block is small, parallel query isn't used for that query block. The following example shows a case where a greater-than operator on the primary key column applies to millions of rows, which causes parallel query to be used. The converse less-than test is estimated to apply to only a few rows and doesn't use parallel query. 

```
mysql> explain select count(*) from part where p_partkey > 10;
+----+...+----------+----------------------------------------------------------------------------+
| id |...| rows     | Extra                                                                      |
+----+...+----------+----------------------------------------------------------------------------+
|  1 |...| 20427936 | Using where; Using parallel query (1 columns, 1 filters, 0 exprs; 0 extra) |
+----+...+----------+----------------------------------------------------------------------------+

mysql> explain select count(*) from part where p_partkey < 10;
+----+...+------+--------------------------+
| id |...| rows | Extra                    |
+----+...+------+--------------------------+
|  1 |...|    9 | Using where; Using index |
+----+...+------+--------------------------+
```

## Data definition language (DDL)


In Aurora MySQL version 2, parallel query is only available for tables for which no fast data definition language (DDL) operations are pending. In Aurora MySQL version 3, you can use parallel query on a table at the same time as an instant DDL operation.

Instant DDL in Aurora MySQL version 3 replaces the fast DDL feature in Aurora MySQL version 2. For information about instant DDL, see [Instant DDL (Aurora MySQL version 3)](AuroraMySQL.Managing.FastDDL.md#AuroraMySQL.mysql80-instant-ddl). 

## Column data types


 In Aurora MySQL version 3, parallel query can work with tables containing columns with data types `TEXT`, `BLOB`, `JSON`, and `GEOMETRY`. It can also work with `VARCHAR` and `CHAR` columns with a maximum declared length longer than 768 bytes. If your query refers to any columns containing such large object types, the additional work to retrieve them does add some overhead to query processing. In that case, check if the query can omit the references to those columns. If not, run benchmarks to confirm if such queries are faster with parallel query turned on or turned off. 
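For such benchmarks, you can toggle parallel query for the current session and compare timings for the same statement. This sketch assumes Aurora MySQL version 3, where the session variable is named `aurora_parallel_query` (earlier versions use `aurora_pq`).

```
mysql> -- Run the same query with parallel query on, then off, and compare timings.
mysql> set session aurora_parallel_query = ON;
mysql> select sql_no_cache count(*) from part where p_name like '%steel%';
mysql> set session aurora_parallel_query = OFF;
mysql> select sql_no_cache count(*) from part where p_name like '%steel%';
```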

In Aurora MySQL version 2, parallel query has these limitations for large object types:
+ `TEXT`, `BLOB`, `JSON`, and `GEOMETRY` data types aren't supported with parallel query. A query that refers to any columns of these types can't use parallel query.
+ Variable-length columns (`VARCHAR` and `CHAR` data types) are compatible with parallel query up to a maximum declared length of 768 bytes. A query that refers to any columns of the types declared with a longer maximum length can't use parallel query. For columns that use multibyte character sets, the byte limit takes into account the maximum number of bytes in the character set. For example, for the character set `utf8mb4` (which has a maximum character length of 4 bytes), a `VARCHAR(192)` column is compatible with parallel query but a `VARCHAR(193)` column isn't.
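The byte arithmetic works out as follows for `utf8mb4`, which uses up to 4 bytes per character. The table names in this sketch are hypothetical.

```
mysql> -- 192 characters x 4 bytes = 768 bytes: compatible with parallel query.
mysql> create table pq_compatible (id int primary key,
    ->   c varchar(192) character set utf8mb4);
mysql> -- 193 characters x 4 bytes = 772 bytes: queries against this table
mysql> -- can't use parallel query in Aurora MySQL version 2.
mysql> create table pq_incompatible (id int primary key,
    ->   c varchar(193) character set utf8mb4);
```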

## Partitioned tables


 You can use partitioned tables with parallel query in Aurora MySQL version 3. Because partitioned tables are represented internally as multiple smaller tables, a query that uses parallel query on a nonpartitioned table might not use parallel query on an identical partitioned table. Aurora MySQL considers whether each partition is large enough to qualify for the parallel query optimization, instead of evaluating the size of the entire table. Check whether the `Aurora_pq_request_not_chosen_small_table` status variable is incremented if a query on a partitioned table doesn't use parallel query when you expect it to. 

 For example, consider one table partitioned with `PARTITION BY HASH (column) PARTITIONS 2` and another table partitioned with `PARTITION BY HASH (column) PARTITIONS 10`. In the table with two partitions, each partition is five times as large as each partition in the table with ten partitions. Thus, parallel query is more likely to be used for queries against the table with fewer partitions. In the following example, the table `PART_BIG_PARTITIONS` has two partitions and `PART_SMALL_PARTITIONS` has ten partitions. With identical data, parallel query is more likely to be used for the table with fewer big partitions. 

```
mysql> explain select count(*), p_brand from part_big_partitions where p_name is not null
    ->   and p_mfgr in ('Manufacturer#1', 'Manufacturer#3') and p_retailprice > 1000 group by p_brand;
+----+-------------+---------------------+------------+-------------------------------------------------------------------------------------------------------------------+
| id | select_type | table               | partitions | Extra                                                                                                             |
+----+-------------+---------------------+------------+-------------------------------------------------------------------------------------------------------------------+
|  1 | SIMPLE      | part_big_partitions | p0,p1      | Using where; Using temporary; Using parallel query (4 columns, 1 filters, 1 exprs; 0 extra; 1 group-bys, 1 aggrs) |
+----+-------------+---------------------+------------+-------------------------------------------------------------------------------------------------------------------+

mysql> explain select count(*), p_brand from part_small_partitions where p_name is not null
    ->   and p_mfgr in ('Manufacturer#1', 'Manufacturer#3') and p_retailprice > 1000 group by p_brand;
+----+-------------+-----------------------+-------------------------------+------------------------------+
| id | select_type | table                 | partitions                    | Extra                        |
+----+-------------+-----------------------+-------------------------------+------------------------------+
|  1 | SIMPLE      | part_small_partitions | p0,p1,p2,p3,p4,p5,p6,p7,p8,p9 | Using where; Using temporary |
+----+-------------+-----------------------+-------------------------------+------------------------------+
```

## Aggregate functions, GROUP BY clauses, and HAVING clauses


 Queries involving aggregate functions are often good candidates for parallel query, because they involve scanning large numbers of rows within large tables. 

 In Aurora MySQL 3, parallel query can optimize aggregate function calls in the select list and the `HAVING` clause. 

 Before Aurora MySQL 3, aggregate function calls in the select list or the `HAVING` clause aren't pushed down to the storage layer. However, parallel query can still improve the performance of such queries with aggregate functions. It does so by first extracting column values from the raw data pages in parallel at the storage layer. It then transmits those values back to the head node in a compact tuple format instead of as entire data pages. As always, the query requires at least one `WHERE` predicate for parallel query to be activated. 

 The following simple examples illustrate the kinds of aggregate queries that can benefit from parallel query. They do so by returning intermediate results in compact form to the head node, filtering nonmatching rows from the intermediate results, or both. 

```
mysql> explain select sql_no_cache count(distinct p_brand) from part where p_mfgr = 'Manufacturer#5';
+----+...+----------------------------------------------------------------------------+
| id |...| Extra                                                                      |
+----+...+----------------------------------------------------------------------------+
|  1 |...| Using where; Using parallel query (2 columns, 1 filters, 0 exprs; 0 extra) |
+----+...+----------------------------------------------------------------------------+

mysql> explain select sql_no_cache p_mfgr from part where p_retailprice > 1000 group by p_mfgr having count(*) > 100;
+----+...+-------------------------------------------------------------------------------------------------------------+
| id |...| Extra                                                                                                       |
+----+...+-------------------------------------------------------------------------------------------------------------+
|  1 |...| Using where; Using temporary; Using filesort; Using parallel query (3 columns, 0 filters, 1 exprs; 0 extra) |
+----+...+-------------------------------------------------------------------------------------------------------------+
```

## Function calls in WHERE clause


 Aurora can apply the parallel query optimization to calls to most built-in functions in the `WHERE` clause. Parallelizing these function calls offloads some CPU work from the head node. Evaluating the predicate functions in parallel during the earliest query stage helps Aurora minimize the amount of data transmitted and processed during later stages. 

 Currently, the parallelization doesn't apply to function calls in the select list. Those functions are evaluated by the head node, even if identical function calls appear in the `WHERE` clause. The original values from relevant columns are included in the tuples transmitted from the storage nodes back to the head node. The head node performs any transformations such as `UPPER`, `CONCAT`, and so on to produce the final values for the result set. 

 In the following example, parallel query parallelizes the call to `LOWER` because it appears in the `WHERE` clause. Parallel query doesn't affect the calls to `SUBSTR` and `UPPER` because they appear in the select list. 

```
mysql> explain select sql_no_cache distinct substr(upper(p_name),1,5) from part
    -> where lower(p_name) like '%cornflower%' or lower(p_name) like '%goldenrod%';
+----+...+---------------------------------------------------------------------------------------------+
| id |...| Extra                                                                                       |
+----+...+---------------------------------------------------------------------------------------------+
|  1 |...| Using where; Using temporary; Using parallel query (2 columns, 0 filters, 1 exprs; 0 extra) |
+----+...+---------------------------------------------------------------------------------------------+
```

 The same considerations apply to other expressions, such as `CASE` expressions or `LIKE` operators. For example, the following example shows that parallel query evaluates the `CASE` expression and `LIKE` operators in the `WHERE` clause. 

```
mysql> explain select p_mfgr, p_retailprice from part
    -> where p_retailprice > case p_mfgr
    ->   when 'Manufacturer#1' then 1000
    ->   when 'Manufacturer#2' then 1200
    ->   else 950
    -> end
    -> and p_name like '%vanilla%'
    -> group by p_retailprice;
+----+...+-------------------------------------------------------------------------------------------------------------+
| id |...| Extra                                                                                                       |
+----+...+-------------------------------------------------------------------------------------------------------------+
|  1 |...| Using where; Using temporary; Using filesort; Using parallel query (4 columns, 0 filters, 2 exprs; 0 extra) |
+----+...+-------------------------------------------------------------------------------------------------------------+
```

## LIMIT clause


 Currently, parallel query isn't used for any query block that includes a `LIMIT` clause. Parallel query might still be used for earlier query phases with `GROUP BY`, `ORDER BY`, or joins. 
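For example, in a query such as the following, the block containing `LIMIT` doesn't use parallel query, but the scan and filtering that feed the `ORDER BY` might still qualify, subject to the usual conditions. This is a sketch against the `part` table used throughout this section.

```
mysql> -- The LIMIT itself is applied by the head node; the data-intensive
mysql> -- scan and WHERE filtering can still use parallel query.
mysql> select p_name, p_retailprice from part
    ->   where p_retailprice > 1000
    ->   order by p_retailprice desc limit 10;
```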

## Comparison operators


 The optimizer estimates how many rows to scan to evaluate comparison operators, and determines whether to use parallel query based on that estimate. 

 The first example following shows that an equality comparison against the primary key column can be performed efficiently without parallel query. The second example following shows that a similar comparison against an unindexed column requires scanning millions of rows and therefore can benefit from parallel query. 

```
mysql> explain select * from part where p_partkey = 10;
+----+...+------+-------+
| id |...| rows | Extra |
+----+...+------+-------+
|  1 |...|    1 | NULL  |
+----+...+------+-------+

mysql> explain select * from part where p_type = 'LARGE BRUSHED BRASS';
+----+...+----------+----------------------------------------------------------------------------+
| id |...| rows     | Extra                                                                      |
+----+...+----------+----------------------------------------------------------------------------+
|  1 |...| 20427936 | Using where; Using parallel query (9 columns, 1 filters, 0 exprs; 0 extra) |
+----+...+----------+----------------------------------------------------------------------------+
```

 The same considerations apply for not-equals tests and for range comparisons such as less than, greater than or equal to, or `BETWEEN`. The optimizer estimates the number of rows to scan, and determines whether parallel query is worthwhile based on the overall volume of I/O. 

## Joins


 Join queries with large tables typically involve data-intensive operations that benefit from the parallel query optimization. The comparisons of column values between multiple tables (that is, the join predicates themselves) currently aren't parallelized. However, parallel query can push down some of the internal processing for other join phases, such as constructing the Bloom filter during a hash join. Parallel query can apply to join queries even without a `WHERE` clause. Therefore, a join query is an exception to the rule that a `WHERE` clause is required to use parallel query. 

 Each phase of join processing is evaluated to check if it is eligible for parallel query. If more than one phase can use parallel query, these phases are performed in sequence. Thus, each join query counts as a single parallel query session in terms of concurrency limits. 

 For example, when a join query includes `WHERE` predicates to filter the rows from one of the joined tables, that filtering option can use parallel query. As another example, suppose that a join query uses the hash join mechanism, for example to join a big table with a small table. In this case, the table scan to produce the Bloom filter data structure might be able to use parallel query. 

**Note**  
 Parallel query is typically used for the kinds of resource-intensive queries that benefit from the hash join optimization. The method for turning on the hash join optimization depends on the Aurora MySQL version. For details for each version, see [Turning on hash join for parallel query clusters](aurora-mysql-parallel-query-enabling.md#aurora-mysql-parallel-query-enabling-hash-join). For information about how to use hash joins effectively, see [Optimizing large Aurora MySQL join queries with hash joins](AuroraMySQL.BestPractices.Performance.md#Aurora.BestPractices.HashJoin). 

```
mysql> explain select count(*) from orders join customer where o_custkey = c_custkey;
+----+...+----------+-------+---------------+-------------+...+-----------+-----------------------------------------------------------------------------------------------------------------+
| id |...| table    | type  | possible_keys | key         |...| rows      | Extra                                                                                                           |
+----+...+----------+-------+---------------+-------------+...+-----------+-----------------------------------------------------------------------------------------------------------------+
|  1 |...| customer | index | PRIMARY       | c_nationkey |...|  15051972 | Using index                                                                                                     |
|  1 |...| orders   | ALL   | o_custkey     | NULL        |...| 154545408 | Using join buffer (Hash Join Outer table orders); Using parallel query (1 columns, 0 filters, 1 exprs; 0 extra) |
+----+...+----------+-------+---------------+-------------+...+-----------+-----------------------------------------------------------------------------------------------------------------+
```

 For a join query that uses the nested loop mechanism, the outermost nested loop block might use parallel query. The use of parallel query depends on the same factors as usual, such as the presence of additional filter conditions in the `WHERE` clause. 

```
mysql> -- Nested loop join with extra filter conditions can use parallel query.
mysql> explain select count(*) from part, partsupp where p_partkey != ps_partkey and p_name is not null and ps_availqty > 0;
+----+-------------+----------+...+----------+----------------------------------------------------------------------------+
| id | select_type | table    |...| rows     | Extra                                                                      |
+----+-------------+----------+...+----------+----------------------------------------------------------------------------+
|  1 | SIMPLE      | part     |...| 20427936 | Using where; Using parallel query (2 columns, 1 filters, 0 exprs; 0 extra) |
|  1 | SIMPLE      | partsupp |...| 78164450 | Using where; Using join buffer (Block Nested Loop)                         |
+----+-------------+----------+...+----------+----------------------------------------------------------------------------+
```

## Subqueries


 The outer query block and inner subquery block might each use parallel query, or not. Whether they do is based on the usual characteristics of the table, `WHERE` clause, and so on, for each block. For example, the following query uses parallel query for the subquery block but not the outer block. 

```
mysql> explain select count(*) from part where
   --> p_partkey < (select max(p_partkey) from part where p_name like '%vanilla%');
+----+-------------+...+----------+----------------------------------------------------------------------------+
| id | select_type |...| rows     | Extra                                                                      |
+----+-------------+...+----------+----------------------------------------------------------------------------+
|  1 | PRIMARY     |...|     NULL | Impossible WHERE noticed after reading const tables                        |
|  2 | SUBQUERY    |...| 20427936 | Using where; Using parallel query (2 columns, 0 filters, 1 exprs; 0 extra) |
+----+-------------+...+----------+----------------------------------------------------------------------------+
```

 Currently, correlated subqueries can't use the parallel query optimization. 
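For reference, a correlated subquery is one whose inner query block refers to a column of the outer block, so the inner block must conceptually rerun for each outer row. The following sketch (using the same TPC-H-style `partsupp` columns as the earlier examples) shows the shape of a query that currently can't use parallel query; the output is omitted because the plan details depend on your data.

```
mysql> -- The inner block references ps.ps_partkey from the outer block,
mysql> -- making this a correlated subquery that is ineligible for parallel query.
mysql> select count(*) from partsupp ps
    ->   where ps_supplycost < (select avg(ps_supplycost) from partsupp
    ->                            where ps_partkey = ps.ps_partkey);
```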

## UNION


 Each query block in a `UNION` query can use parallel query, or not, based on the usual characteristics of the table, `WHERE` clause, and so on, for each part of the `UNION`. 

```
mysql> explain select p_partkey from part where p_name like '%choco_ate%'
    -> union select p_partkey from part where p_name like '%vanil_a%';
+----+----------------+...+----------+----------------------------------------------------------------------------+
| id | select_type    |...| rows     | Extra                                                                      |
+----+----------------+...+----------+----------------------------------------------------------------------------+
|  1 | PRIMARY        |...| 20427936 | Using where; Using parallel query (2 columns, 0 filters, 1 exprs; 0 extra) |
|  2 | UNION          |...| 20427936 | Using where; Using parallel query (2 columns, 0 filters, 1 exprs; 0 extra) |
| NULL | UNION RESULT   |...|     NULL | Using temporary                                                            |
+----+----------------+...+----------+----------------------------------------------------------------------------+
```

**Note**  
 Each `UNION` clause within the query is run sequentially. Even if the query includes multiple stages that all use parallel query, it only runs a single parallel query at any one time. Therefore, even a complex multistage query only counts as 1 toward the limit of concurrent parallel queries. 

## Views


 The optimizer rewrites any query using a view as a longer query using the underlying tables. Thus, parallel query works the same whether table references are views or real tables. All the same considerations about whether to use parallel query for a query, and which parts are pushed down, apply to the final rewritten query. 

 For example, the following query plan shows a view definition that usually doesn't use parallel query. When the view is queried with additional `WHERE` clauses, Aurora MySQL uses parallel query. 

```
mysql> create view part_view as select * from part;
mysql> explain select count(*) from part_view where p_partkey is not null;
+----+...+----------+----------------------------------------------------------------------------+
| id |...| rows     | Extra                                                                      |
+----+...+----------+----------------------------------------------------------------------------+
|  1 |...| 20427936 | Using where; Using parallel query (1 columns, 0 filters, 0 exprs; 1 extra) |
+----+...+----------+----------------------------------------------------------------------------+
```

## Data manipulation language (DML) statements


 The `INSERT` statement can use parallel query for the `SELECT` phase of processing, if the `SELECT` part meets the other conditions for parallel query. 

```
mysql> create table part_subset like part;
mysql> explain insert into part_subset select * from part where p_mfgr = 'Manufacturer#1';
+----+...+----------+----------------------------------------------------------------------------+
| id |...| rows     | Extra                                                                      |
+----+...+----------+----------------------------------------------------------------------------+
|  1 |...| 20427936 | Using where; Using parallel query (9 columns, 1 filters, 0 exprs; 0 extra) |
+----+...+----------+----------------------------------------------------------------------------+
```

**Note**  
 Typically, after an `INSERT` statement, the data for the newly inserted rows is in the buffer pool. Therefore, a table might not be eligible for parallel query immediately after inserting a large number of rows. Later, after the data is evicted from the buffer pool during normal operation, queries against the table might begin using parallel query again. 

 The `CREATE TABLE AS SELECT` statement doesn't use parallel query, even if the `SELECT` portion of the statement would otherwise be eligible for parallel query. The DDL aspect of this statement makes it incompatible with parallel query processing. In contrast, in the `INSERT ... SELECT` statement, the `SELECT` portion can use parallel query. 

 Parallel query is never used for `DELETE` or `UPDATE` statements, regardless of the size of the table and predicates in the `WHERE` clause. 

```
mysql> explain delete from part where p_name is not null;
+----+-------------+...+----------+-------------+
| id | select_type |...| rows     | Extra       |
+----+-------------+...+----------+-------------+
|  1 | SIMPLE      |...| 20427936 | Using where |
+----+-------------+...+----------+-------------+
```

## Transactions and locking


 You can use all the isolation levels on the Aurora primary instance. 

On Aurora reader DB instances, parallel query applies to statements run under the `REPEATABLE READ` isolation level. Aurora MySQL version 2.09 and higher can also use the `READ COMMITTED` isolation level on reader DB instances. `REPEATABLE READ` is the default isolation level for Aurora reader DB instances. To use the `READ COMMITTED` isolation level on a reader DB instance, set the `aurora_read_replica_read_committed` configuration option at the session level. The `READ COMMITTED` isolation level for reader instances complies with SQL standard behavior. However, the isolation is less strict on reader instances than when queries use the `READ COMMITTED` isolation level on the writer instance.

 For more information about Aurora isolation levels, especially the differences in `READ COMMITTED` between writer and reader instances, see [Aurora MySQL isolation levels](AuroraMySQL.Reference.IsolationLevels.md). 
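As a sketch, enabling `READ COMMITTED` for the current session on a reader DB instance involves turning on the configuration option and then changing the session isolation level:

```
mysql> -- On a reader DB instance (Aurora MySQL 2.09 or higher):
mysql> set session aurora_read_replica_read_committed = ON;
mysql> set session transaction isolation level read committed;
```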

 After a big transaction is finished, the table statistics might be stale. Such stale statistics might require an `ANALYZE TABLE` statement before Aurora can accurately estimate the number of rows. A large-scale DML statement might also bring a substantial portion of the table data into the buffer pool. Having this data in the buffer pool can lead to parallel query being chosen less frequently for that table until the data is evicted from the pool. 

 When your session is inside a long-running transaction (by default, 10 minutes), further queries inside that session don't use parallel query. A timeout can also occur during a single long-running query. This type of timeout might happen if the query runs for longer than the maximum interval (currently 10 minutes) before the parallel query processing starts. 

 You can reduce the chance of accidentally starting a long-running transaction by setting `autocommit=1` in `mysql` sessions where you perform ad hoc (one-time) queries. Even a `SELECT` statement against a table begins a transaction by creating a read view. A *read view* is a consistent set of data for subsequent queries that remains until the transaction is committed. Also be aware of this restriction when using JDBC or ODBC applications with Aurora, because such applications might run with the `autocommit` setting turned off. 

 The following example shows how, with the `autocommit` setting turned off, running a query against a table creates a read view that implicitly begins a transaction. Queries that are run shortly afterward can still use parallel query. However, after a pause of several minutes, queries are no longer eligible for parallel query. Ending the transaction with `COMMIT` or `ROLLBACK` restores parallel query eligibility. 

```
mysql> set autocommit=0;

mysql> explain select sql_no_cache count(*) from part where p_retailprice > 10.0;
+----+...+---------+----------------------------------------------------------------------------+
| id |...| rows    | Extra                                                                      |
+----+...+---------+----------------------------------------------------------------------------+
|  1 |...| 2976129 | Using where; Using parallel query (1 columns, 1 filters, 0 exprs; 0 extra) |
+----+...+---------+----------------------------------------------------------------------------+

mysql> select sleep(720); explain select sql_no_cache count(*) from part where p_retailprice > 10.0;
+------------+
| sleep(720) |
+------------+
|          0 |
+------------+
1 row in set (12 min 0.00 sec)

+----+...+---------+-------------+
| id |...| rows    | Extra       |
+----+...+---------+-------------+
|  1 |...| 2976129 | Using where |
+----+...+---------+-------------+

mysql> commit;

mysql> explain select sql_no_cache count(*) from part where p_retailprice > 10.0;
+----+...+---------+----------------------------------------------------------------------------+
| id |...| rows    | Extra                                                                      |
+----+...+---------+----------------------------------------------------------------------------+
|  1 |...| 2976129 | Using where; Using parallel query (1 columns, 1 filters, 0 exprs; 0 extra) |
+----+...+---------+----------------------------------------------------------------------------+
```

 To see how many times queries weren't eligible for parallel query because they were inside long-running transactions, check the status variable `Aurora_pq_request_not_chosen_long_trx`. 

```
mysql> show global status like '%pq%trx%';
+---------------------------------------+-------+
| Variable_name                         | Value |
+---------------------------------------+-------+
| Aurora_pq_request_not_chosen_long_trx | 4     |
+---------------------------------------+-------+
```

 Any `SELECT` statement that acquires locks, such as the `SELECT FOR UPDATE` or `SELECT LOCK IN SHARE MODE` syntax, can't use parallel query. 

 Parallel query can work for a table that is locked by a `LOCK TABLES` statement. 

```
mysql> explain select o_orderpriority, o_shippriority from orders where o_clerk = 'Clerk#000095055';
+----+...+-----------+----------------------------------------------------------------------------+
| id |...| rows      | Extra                                                                      |
+----+...+-----------+----------------------------------------------------------------------------+
|  1 |...| 154545408 | Using where; Using parallel query (3 columns, 1 filters, 0 exprs; 0 extra) |
+----+...+-----------+----------------------------------------------------------------------------+

mysql> explain select o_orderpriority, o_shippriority from orders where o_clerk = 'Clerk#000095055' for update;
+----+...+-----------+-------------+
| id |...| rows      | Extra       |
+----+...+-----------+-------------+
|  1 |...| 154545408 | Using where |
+----+...+-----------+-------------+
```

## B-tree indexes


 The statistics gathered by the `ANALYZE TABLE` statement help the optimizer to decide when to use parallel query or index lookups, based on the characteristics of the data for each column. Keep statistics current by running `ANALYZE TABLE` after DML operations that make substantial changes to the data within a table. 
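For example, after a bulk load or other large-scale DML against the tables from the earlier examples, you might refresh the statistics like this (the statement accepts one or more table names):

```
mysql> analyze table part, orders;
```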

 If index lookups can perform a query efficiently without a data-intensive scan, Aurora might use index lookups. Doing so avoids the overhead of parallel query processing. There are also concurrency limits on the number of parallel queries that can run simultaneously on any Aurora DB cluster. Make sure to use best practices for indexing your tables, so that your most frequent and most highly concurrent queries use index lookups. 

## Full-text search (FTS) indexes


 Currently, parallel query isn't used for tables that contain a full-text search index, regardless of whether the query refers to such indexed columns or uses the `MATCH` operator. 

## Virtual columns


 Currently, parallel query isn't used for tables that contain a virtual column, regardless of whether the query refers to any virtual columns. 

## Built-in caching mechanisms


 Aurora includes built-in caching mechanisms, namely the buffer pool and the query cache. The Aurora optimizer chooses between these caching mechanisms and parallel query depending on which one is most effective for a particular query. 

 When a parallel query filters rows and transforms and extracts column values, data is transmitted back to the head node as tuples rather than as data pages. Therefore, running a parallel query doesn't add any pages to the buffer pool, or evict pages that are already in the buffer pool. 

 Aurora checks the number of pages of table data that are present in the buffer pool, and what proportion of the table data that number represents. Aurora uses that information to determine whether it is more efficient to use parallel query (and bypass the data in the buffer pool). Alternatively, Aurora might use the nonparallel query processing path, which uses data cached in the buffer pool. Which pages are cached and how data-intensive queries affect caching and eviction depends on configuration settings related to the buffer pool. Therefore, it can be hard to predict whether any particular query uses parallel query, because the choice depends on the ever-changing data within the buffer pool. 

 Also, Aurora imposes concurrency limits on parallel queries. Because not every query uses parallel query, tables that are accessed by multiple queries simultaneously typically have a substantial portion of their data in the buffer pool. Therefore, Aurora often doesn't choose these tables for parallel queries. 

 When you run a sequence of nonparallel queries on the same table, the first query might be slow due to the data not being in the buffer pool. Then the second and subsequent queries are much faster because the buffer pool is now "warmed up". Parallel queries typically show consistent performance from the very first query against the table. When conducting performance tests, benchmark the nonparallel queries with both a cold and a warm buffer pool. In some cases, the results with a warm buffer pool can compare well to parallel query times. In these cases, consider factors such as the frequency of queries against that table. Also consider whether it is worthwhile to keep the data for that table in the buffer pool. 

 The query cache avoids rerunning a query when an identical query is submitted and the underlying table data hasn't changed. Queries optimized by the parallel query feature can go into the query cache, effectively making them instantaneous when run again. 

**Note**  
 When conducting performance comparisons, the query cache can produce artificially low timing numbers. Therefore, in benchmark-like situations, you can use the `sql_no_cache` hint. This hint prevents the result from being served from the query cache, even if the same query had been run previously. The hint comes immediately after the `SELECT` statement in a query. Many parallel query examples in this topic include this hint, to make query times comparable between versions of the query for which parallel query is turned on and turned off.   
 Make sure that you remove this hint from your source when you move to production use of parallel query. 

## Optimizer hints


Another way to control the optimizer is by using optimizer hints, which can be specified within individual statements. For example, you can turn on an optimization for one table in a statement, and then turn off the optimization for a different table. For more information about these hints, see [Optimizer Hints](https://dev.mysql.com/doc/refman/8.0/en/optimizer-hints.html) in the *MySQL Reference Manual*.

You can use SQL hints with Aurora MySQL queries to fine-tune performance. You can also use hints to prevent execution plans for important queries from changing because of unpredictable conditions.

We have extended the SQL hints feature to help you control optimizer choices for your query plans. These hints apply to queries that use parallel query optimization. For more information, see [Aurora MySQL hints](AuroraMySQL.Reference.Hints.md).

## MyISAM temporary tables


The parallel query optimization only applies to InnoDB tables. Because Aurora MySQL uses MyISAM behind the scenes for temporary tables, internal query phases involving temporary tables never use parallel query. These query phases are indicated by `Using temporary` in the `EXPLAIN` output. 

# Using Advanced Auditing with an Amazon Aurora MySQL DB cluster
Advanced Auditing with Aurora MySQL<a name="auditing"></a><a name="advanced_auditing"></a>

You can use the high-performance Advanced Auditing feature in Amazon Aurora MySQL to audit database activity. To do so, you enable the collection of audit logs by setting several DB cluster parameters. When Advanced Auditing is enabled, you can use it to log any combination of supported events. 

 You can view or download the audit logs to review the audit information for one DB instance at a time. To do so, you can use the procedures in [Monitoring Amazon Aurora log files](USER_LogAccess.md). 

**Tip**  
 For an Aurora DB cluster containing multiple DB instances, you might find it more convenient to examine the audit logs for all instances in the cluster. To do so, you can use CloudWatch Logs. You can turn on a setting at the cluster level to publish the Aurora MySQL audit log data to a log group in CloudWatch. Then you can view, filter, and search the audit logs through the CloudWatch interface. For more information, see [Publishing Amazon Aurora MySQL logs to Amazon CloudWatch Logs](AuroraMySQL.Integrating.CloudWatch.md). 

## Enabling Advanced Auditing


Use the parameters described in this section to enable and configure Advanced Auditing for your DB cluster. 

Use the `server_audit_logging` parameter to enable or disable Advanced Auditing.

Use the `server_audit_events` parameter to specify what events to log.

Use the `server_audit_incl_users` and `server_audit_excl_users` parameters to specify who gets audited. By default, all users are audited. For details about how these parameters work when one or both are left empty, or the same user names are specified in both, see [server\_audit\_incl\_users](#AuroraMySQL.Auditing.Enable.server_audit_incl_users) and [server\_audit\_excl\_users](#AuroraMySQL.Auditing.Enable.server_audit_excl_users). 

Configure Advanced Auditing by setting these parameters in the parameter group used by your DB cluster. You can use the procedure shown in [Modifying parameters in a DB parameter group in Amazon Aurora](USER_WorkingWithParamGroups.Modifying.md) to modify DB cluster parameters using the AWS Management Console. You can use the [modify-db-cluster-parameter-group](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster-parameter-group.html) AWS CLI command or the [ModifyDBClusterParameterGroup](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBClusterParameterGroup.html) Amazon RDS API operation to modify DB cluster parameters programmatically.

Modifying these parameters doesn't require a DB cluster restart when the parameter group is already associated with your cluster. When you associate the parameter group with the cluster for the first time, a cluster restart is required.
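As a hedged sketch of the programmatic path, the following AWS CLI command turns on audit logging and logs connection events. The parameter group name `my-aurora-cluster-params` is a placeholder for your own DB cluster parameter group; to log several event types, supply the comma-delimited list as the value of `server_audit_events` (quoting it carefully, because the CLI shorthand syntax also uses commas).

```shell
# Enable Advanced Auditing and log CONNECT events (parameter group name is hypothetical).
aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-aurora-cluster-params \
    --parameters "ParameterName=server_audit_logging,ParameterValue=1,ApplyMethod=immediate" \
                 "ParameterName=server_audit_events,ParameterValue=CONNECT,ApplyMethod=immediate"
```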

**Topics**
+ [

### server\_audit\_logging
](#AuroraMySQL.Auditing.Enable.server_audit_logging)
+ [

### server\_audit\_events
](#AuroraMySQL.Auditing.Enable.server_audit_events)
+ [

### server\_audit\_incl\_users
](#AuroraMySQL.Auditing.Enable.server_audit_incl_users)
+ [

### server\_audit\_excl\_users
](#AuroraMySQL.Auditing.Enable.server_audit_excl_users)

### server\_audit\_logging


Enables or disables Advanced Auditing. This parameter defaults to OFF; set it to ON to enable Advanced Auditing. 

 No audit data appears in the logs unless you also define one or more types of events to audit using the `server_audit_events` parameter. 

 To confirm that audit data is logged for a DB instance, check that some log files for that instance have names of the form `audit/audit.log.other_identifying_information`. To see the names of the log files, follow the procedure in [Viewing and listing database log files](USER_LogAccess.Procedural.Viewing.md). 

### server\_audit\_events


Contains the comma-delimited list of events to log. Events must be specified in all caps, and there should be no white space between the list elements, for example: `CONNECT,QUERY_DDL`. This parameter defaults to an empty string.

You can log any combination of the following events:
+ CONNECT – Logs both successful and failed connections and also disconnections. This event includes user information.
+ QUERY – Logs all queries in plain text, including queries that fail due to syntax or permission errors.
**Tip**  
 With this event type turned on, the audit data includes information about the continuous monitoring and health-checking information that Aurora does automatically. If you are only interested in particular kinds of operations, you can use the more specific kinds of events. You can also use the CloudWatch interface to search in the logs for events related to specific databases, tables, or users. 
+ QUERY\_DCL – Similar to the QUERY event, but returns only data control language (DCL) queries (GRANT, REVOKE, and so on).
+ QUERY\_DDL – Similar to the QUERY event, but returns only data definition language (DDL) queries (CREATE, ALTER, and so on).
+ QUERY\_DML – Similar to the QUERY event, but returns only data manipulation language (DML) queries (INSERT, UPDATE, and so on, and also SELECT).
+ TABLE – Logs the tables that were affected by query execution.

**Note**  
There's no filter in Aurora that excludes certain queries from audit logs. To exclude `SELECT` queries, you must exclude all DML statements.  
If a certain user is reporting these internal `SELECT` queries in the audit logs, then you can exclude that user by setting the [server\_audit\_excl\_users](#AuroraMySQL.Auditing.Enable.server_audit_excl_users) DB cluster parameter. However, if that user is also used in other activities and can't be omitted, then there is no other option for excluding `SELECT` queries.

### server\_audit\_incl\_users


Contains the comma-delimited list of user names for users whose activity is logged. There should be no white space between the list elements, for example: `user_3,user_4`. This parameter defaults to an empty string. The maximum length is 1024 characters. Specified user names must match corresponding values in the `User` column of the `mysql.user` table. For more information about user names, see [Account User Names and Passwords](https://dev.mysql.com/doc/refman/8.0/en/user-names.html) in the MySQL documentation.

 If `server_audit_incl_users` and `server_audit_excl_users` are both empty (the default), all users are audited. 

 If you add users to `server_audit_incl_users` and leave `server_audit_excl_users` empty, then only those users are audited. 

 If you add users to `server_audit_excl_users` and leave `server_audit_incl_users` empty, then all users are audited, except for those listed in `server_audit_excl_users`. 

 If you add the same users to both `server_audit_excl_users` and `server_audit_incl_users`, then those users are audited. When the same user is listed in both settings, `server_audit_incl_users` is given higher priority. 

Connect and disconnect events aren't affected by this variable; they are always logged if specified. A user is logged even if that user is also specified in the `server_audit_excl_users` parameter, because `server_audit_incl_users` has higher priority. 

### server\_audit\_excl\_users


Contains the comma-delimited list of user names for users whose activity isn't logged. There should be no white space between the list elements, for example: `rdsadmin,user_1,user_2`. This parameter defaults to an empty string. The maximum length is 1024 characters. Specified user names must match corresponding values in the `User` column of the `mysql.user` table. For more information about user names, see [Account User Names and Passwords](https://dev.mysql.com/doc/refman/8.0/en/user-names.html) in the MySQL documentation.

 If `server_audit_incl_users` and `server_audit_excl_users` are both empty (the default), all users are audited. 

 If you add users to `server_audit_excl_users` and leave `server_audit_incl_users` empty, then only those users that you list in `server_audit_excl_users` are not audited, and all other users are. 

 If you add the same users to both `server_audit_excl_users` and `server_audit_incl_users`, then those users are audited. When the same user is listed in both settings, `server_audit_incl_users` is given higher priority. 

Connect and disconnect events aren't affected by this variable; they are always logged if specified. A user is logged if that user is also specified in the `server_audit_incl_users` parameter, because that setting has higher priority than `server_audit_excl_users`.

## Viewing audit logs


 You can view and download the audit logs by using the console. On the **Databases** page, choose the DB instance to show its details, then scroll to the **Logs** section. The audit logs produced by the Advanced Auditing feature have names of the form `audit/audit.log.other_identifying_information`. 

To download a log file, choose that file in the **Logs** section and then choose **Download**.

You can also get a list of the log files by using the [describe-db-log-files](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-log-files.html) AWS CLI command. You can download the contents of a log file by using the [download-db-log-file-portion](https://docs.aws.amazon.com/cli/latest/reference/rds/download-db-log-file-portion.html) AWS CLI command. For more information, see [Viewing and listing database log files](USER_LogAccess.Procedural.Viewing.md) and [Downloading a database log file](USER_LogAccess.Procedural.Downloading.md).
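Sketching those two CLI steps together (the instance identifier and log file name below are placeholders; use a name returned by the first command):

```shell
# List the audit log files for one DB instance (identifier is hypothetical).
aws rds describe-db-log-files \
    --db-instance-identifier my-aurora-instance \
    --filename-contains audit

# Download one of the listed files; substitute a real log file name
# from the describe-db-log-files output.
aws rds download-db-log-file-portion \
    --db-instance-identifier my-aurora-instance \
    --log-file-name audit/audit.log.EXAMPLE \
    --starting-token 0 --output text > audit-portion.log
```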

## Audit log details


Log files are represented as comma-separated value (CSV) files in UTF-8 format. Queries are wrapped in single quotes (').

The audit log is stored separately on the local storage of each Aurora MySQL DB instance. Each instance distributes writes across four log files at a time. The maximum size of a log file is 100 MB. When this non-configurable limit is reached, Aurora rotates the file and generates a new file.

**Tip**  
Log file entries are not in sequential order. To order the entries, use the timestamp value. To see the latest events, you might have to review all log files. For more flexibility in sorting and searching the log data, turn on the setting to upload the audit logs to CloudWatch and view them using the CloudWatch interface.  
 To view audit data with more types of fields and with output in JSON format, you can also use the Database Activity Streams feature. For more information, see [Monitoring Amazon Aurora with Database Activity Streams](DBActivityStreams.md). 

The audit log files include the following comma-delimited information in rows, in the specified order:


| Field | Description | 
| --- | --- | 
|  timestamp  |  The Unix time stamp for the logged event with microsecond precision.  | 
|  serverhost  |  The name of the instance that the event is logged for.  | 
|  username  |  The connected user name of the user.  | 
|  host  |  The host that the user connected from.  | 
|  connectionid  |  The connection ID number for the logged operation.  | 
|  queryid  |  The query ID number, which can be used for finding the relational table events and related queries. For `TABLE` events, multiple lines are added.  | 
|  operation  |  The recorded action type. Possible values are: `CONNECT`, `QUERY`, `READ`, `WRITE`, `CREATE`, `ALTER`, `RENAME`, and `DROP`.  | 
|  database  |  The active database, as set by the `USE` command.  | 
|  object  |  For `QUERY` events, this value indicates the query that the database performed. For `TABLE` events, it indicates the table name.  | 
|  retcode  |  The return code of the logged operation.  | 
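For illustration only, a `QUERY` event row with made-up values might look like the following, with the fields in the order shown in the table:

```
1654123456789012,my-aurora-instance,admin,203.0.113.10,17,2345,QUERY,mydb,'select count(*) from orders',0
```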

# Replication with Amazon Aurora MySQL
Replication with Aurora MySQL<a name="replication"></a>

 The Aurora MySQL replication features are key to the high availability and performance of your cluster. Aurora makes it easy to create or resize clusters with up to 15 Aurora Replicas. 

 All the replicas work from the same underlying data. If some database instances go offline, others remain available to continue processing queries or to take over as the writer if needed. Aurora automatically spreads your read-only connections across multiple database instances, helping an Aurora cluster to support query-intensive workloads. 

In the following topics, you can find information about how Aurora MySQL replication works and how to fine-tune replication settings for best availability and performance. 

**Topics**
+ [

## Using Aurora Replicas
](#AuroraMySQL.Replication.Replicas)
+ [

## Replication options for Amazon Aurora MySQL
](#AuroraMySQL.Replication.Options)
+ [

## Performance considerations for Amazon Aurora MySQL replication
](#AuroraMySQL.Replication.Performance)
+ [

# Configuring replication filters with Aurora MySQL
](AuroraMySQL.Replication.Filters.md)
+ [

## Monitoring Amazon Aurora MySQL replication
](#AuroraMySQL.Replication.Monitoring)
+ [

# Replicating Amazon Aurora MySQL DB clusters across AWS Regions
](AuroraMySQL.Replication.CrossRegion.md)
+ [

# Replication between Aurora and MySQL or between Aurora and another Aurora DB cluster (binary log replication)
](AuroraMySQL.Replication.MySQL.md)
+ [

# Using GTID-based replication
](mysql-replication-gtid.md)

## Using Aurora Replicas
Aurora Replicas

 Aurora Replicas are independent endpoints in an Aurora DB cluster, best used for scaling read operations and increasing availability. Up to 15 Aurora Replicas can be distributed across the Availability Zones that a DB cluster spans within an AWS Region. Although the DB cluster volume is made up of multiple copies of the data for the DB cluster, the data in the cluster volume is represented as a single, logical volume to the primary instance and to Aurora Replicas in the DB cluster. For more information about Aurora Replicas, see [Aurora Replicas](Aurora.Replication.md#Aurora.Replication.Replicas). 

 Aurora Replicas work well for read scaling because they are fully dedicated to read operations on your cluster volume. Write operations are managed by the primary instance. Because the cluster volume is shared among all instances in your Aurora MySQL DB cluster, no additional work is required to replicate a copy of the data for each Aurora Replica. In contrast, MySQL read replicas must replay, on a single thread, all write operations from the source DB instance to their local data store. This limitation can affect the ability of MySQL read replicas to support large volumes of read traffic. 

 With Aurora MySQL, when an Aurora Replica is deleted, its instance endpoint is removed immediately, and the Aurora Replica is removed from the reader endpoint. If statements are running on the Aurora Replica that is being deleted, there is a three-minute grace period. Existing statements can finish gracefully during the grace period. When the grace period ends, the Aurora Replica is shut down and deleted. 

**Important**  
 Aurora Replicas for Aurora MySQL always use the `REPEATABLE READ` default transaction isolation level for operations on InnoDB tables. You can use the `SET TRANSACTION ISOLATION LEVEL` command to change the transaction level only for the primary instance of an Aurora MySQL DB cluster. This restriction avoids user-level locks on Aurora Replicas, and allows Aurora Replicas to scale to support thousands of active user connections while still keeping replica lag to a minimum. 

**Note**  
 DDL statements that run on the primary instance might interrupt database connections on the associated Aurora Replicas. If an Aurora Replica connection is actively using a database object, such as a table, and that object is modified on the primary instance using a DDL statement, the Aurora Replica connection is interrupted. 

**Note**  
 The China (Ningxia) Region does not support cross-Region read replicas. 

## Replication options for Amazon Aurora MySQL
Replication options

You can set up replication between any of the following options:
+ Two Aurora MySQL DB clusters in different AWS Regions, by creating a cross-Region read replica of an Aurora MySQL DB cluster.

  For more information, see [Replicating Amazon Aurora MySQL DB clusters across AWS Regions](AuroraMySQL.Replication.CrossRegion.md).
+ Two Aurora MySQL DB clusters in the same AWS Region, by using MySQL binary log (binlog) replication.

  For more information, see [Replication between Aurora and MySQL or between Aurora and another Aurora DB cluster (binary log replication)](AuroraMySQL.Replication.MySQL.md).
+ An RDS for MySQL DB instance as the source and an Aurora MySQL DB cluster, by creating an Aurora read replica of an RDS for MySQL DB instance.

  You can use this approach to bring existing and ongoing data changes into Aurora MySQL during migration to Aurora. For more information, see [Migrating data from an RDS for MySQL DB instance to an Amazon Aurora MySQL DB cluster by using an Aurora read replica](AuroraMySQL.Migrating.RDSMySQL.Replica.md). 

  You can also use this approach to increase the scalability of read queries for your data. You do so by querying the data using one or more DB instances within a read-only Aurora MySQL cluster. For more information, see [Scaling reads for your MySQL database with Amazon Aurora](AuroraMySQL.Replication.ReadScaling.md).
+ An Aurora MySQL DB cluster in one AWS Region and up to five read-only Aurora MySQL DB clusters in different Regions, by creating an Aurora global database.

  You can use an Aurora global database to support applications with a world-wide footprint. The primary Aurora MySQL DB cluster has a Writer instance and up to 15 Aurora Replicas. The read-only secondary Aurora MySQL DB clusters can each be made up of as many as 16 Aurora Replicas. For more information, see [Using Amazon Aurora Global Database](aurora-global-database.md).

**Note**  
Rebooting the primary instance of an Amazon Aurora DB cluster also automatically reboots the Aurora Replicas for that DB cluster, to re-establish an entry point that guarantees read/write consistency across the DB cluster.

## Performance considerations for Amazon Aurora MySQL replication
Replication performance

The following features help you to fine-tune the performance of Aurora MySQL replication.

The replica log compression feature automatically reduces network bandwidth for replication messages. Because each message is transmitted to all Aurora Replicas, the benefits are greater for larger clusters. This feature involves some CPU overhead on the writer node to perform the compression. It's always enabled in Aurora MySQL version 2 and version 3.

The binlog filtering feature automatically reduces network bandwidth for replication messages. Because the Aurora Replicas don't use the binlog information that is included in the replication messages, that data is omitted from the messages sent to those nodes.

In Aurora MySQL version 2, you can control this feature by changing the `aurora_enable_repl_bin_log_filtering` parameter. This parameter is on by default. Because this optimization is intended to be transparent, you might turn off this setting only during diagnosis or troubleshooting for issues related to replication. For example, you can do so to match the behavior of an older Aurora MySQL cluster where this feature was not available.

Binlog filtering is always enabled in Aurora MySQL version 3.

# Configuring replication filters with Aurora MySQL
Replication filters

You can use replication filters to specify which databases and tables are replicated with a read replica. Replication filters can include databases and tables in replication or exclude them from replication.

The following are some use cases for replication filters:
+ To reduce the size of a read replica. With replication filtering, you can exclude the databases and tables that aren't needed on the read replica.
+ To exclude databases and tables from read replicas for security reasons.
+ To replicate different databases and tables for specific use cases at different read replicas. For example, you might use specific read replicas for analytics or sharding.
+ For a DB cluster that has read replicas in different AWS Regions, to replicate different databases or tables in different AWS Regions.
+ To specify which databases and tables are replicated with an Aurora MySQL DB cluster that is configured as a replica in an inbound replication topology. For more information about this configuration, see [Replication between Aurora and MySQL or between Aurora and another Aurora DB cluster (binary log replication)](AuroraMySQL.Replication.MySQL.md).

**Topics**
+ [

## Setting replication filtering parameters for Aurora MySQL
](#AuroraMySQL.Replication.Filters.Configuring)
+ [

## Replication filtering limitations for Aurora MySQL
](#AuroraMySQL.Replication.Filters.Limitations)
+ [

## Replication filtering examples for Aurora MySQL
](#AuroraMySQL.Replication.Filters.Examples)
+ [

## Viewing the replication filters for a read replica
](#AuroraMySQL.Replication.Filters.Viewing)

## Setting replication filtering parameters for Aurora MySQL


To configure replication filters, set the following parameters:
+ `binlog-do-db` – Write binary log events only for the specified databases. When you set this parameter for a binlog source cluster, only changes to the specified databases are written to the binary log.
+ `binlog-ignore-db` – Don't write binary log events for the specified databases. When the `binlog-do-db` parameter is set for a binlog source cluster, this parameter isn't evaluated.
+ `replicate-do-db` – Replicate changes to the specified databases. When you set this parameter for a binlog replica cluster, only the databases specified in the parameter are replicated.
+ `replicate-ignore-db` – Don't replicate changes to the specified databases. When the `replicate-do-db` parameter is set for a binlog replica cluster, this parameter isn't evaluated.
+ `replicate-do-table` – Replicate changes to the specified tables. When you set this parameter for a read replica, only the tables specified in the parameter are replicated. Also, when the `replicate-do-db` or `replicate-ignore-db` parameter is set, make sure that the database that contains the specified tables is included in replication with the binlog replica cluster.
+ `replicate-ignore-table` – Don't replicate changes to the specified tables. When the `replicate-do-table` parameter is set for a binlog replica cluster, this parameter isn't evaluated.
+ `replicate-wild-do-table` – Replicate tables based on the specified database and table name patterns. The `%` and `_` wildcard characters are supported. When the `replicate-do-db` or `replicate-ignore-db` parameter is set, make sure that the database that contains the specified tables is included in replication with the binlog replica cluster.
+ `replicate-wild-ignore-table` – Don't replicate tables based on the specified database and table name patterns. The `%` and `_` wildcard characters are supported. When the `replicate-do-table` or `replicate-wild-do-table` parameter is set for a binlog replica cluster, this parameter isn't evaluated.

The parameters are evaluated in the order that they are listed. For more information about how these parameters work, see the MySQL documentation:
+ For general information, see [Replica Server Options and Variables](https://dev.mysql.com/doc/refman/8.0/en/replication-options-replica.html).
+ For information about how database replication filtering parameters are evaluated, see [Evaluation of Database-Level Replication and Binary Logging Options](https://dev.mysql.com/doc/refman/8.0/en/replication-rules-db-options.html).
+ For information about how table replication filtering parameters are evaluated, see [Evaluation of Table-Level Replication Options](https://dev.mysql.com/doc/refman/8.0/en/replication-rules-table-options.html).
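
The database-level precedence can be summarized in a short sketch. The following hypothetical Python model (an illustration only, ignoring the binlog-level and table-level rules) captures the rule that a non-empty `replicate-do-db` list takes precedence and `replicate-ignore-db` is then not evaluated:

```python
def replicate_database(db, do_db=(), ignore_db=()):
    """Decide whether changes to `db` replicate, mirroring the
    database-level precedence: when replicate-do-db is set, only
    listed databases replicate and replicate-ignore-db isn't
    evaluated; otherwise replicate-ignore-db excludes its databases
    and everything else replicates."""
    if do_db:                       # replicate-do-db takes precedence
        return db in do_db
    return db not in ignore_db      # fall back to replicate-ignore-db

# With replicate-do-db='mydb1,mydb2', only those databases replicate.
print(replicate_database("mydb1", do_db=("mydb1", "mydb2")))      # True
print(replicate_database("mydb3", do_db=("mydb1", "mydb2")))      # False
# With only replicate-ignore-db set, everything else replicates.
print(replicate_database("mydb5", ignore_db=("mydb5", "mydb6")))  # False
```

For the authoritative evaluation rules, rely on the MySQL documentation linked above.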

By default, each of these parameters has an empty value. On each binlog cluster, you can use these parameters to set, change, and delete replication filters. When you set one of these parameters, separate each filter from others with a comma.

You can use the `%` and `_` wildcard characters in the `replicate-wild-do-table` and `replicate-wild-ignore-table` parameters. The `%` wildcard matches any number of characters, and the `_` wildcard matches only one character.
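
To see how the wildcards behave, the following hypothetical Python sketch translates a filter pattern into a regular expression, treating `%` as any run of characters and `_` as exactly one character (a simplified model of the matching, not the server's implementation):

```python
import re

def wildcard_to_regex(pattern):
    """Translate a replication-filter pattern to a compiled regex:
    % matches any number of characters, _ matches exactly one.
    All other characters match literally."""
    parts = []
    for ch in pattern:
        if ch == "%":
            parts.append(".*")
        elif ch == "_":
            parts.append(".")
        else:
            parts.append(re.escape(ch))
    return re.compile("^" + "".join(parts) + "$")

# A replicate-wild-do-table value of mydb.order% matches these tables:
rx = wildcard_to_regex("mydb.order%")
print(bool(rx.match("mydb.orders")))       # True
print(bool(rx.match("mydb.order_items")))  # True
print(bool(rx.match("mydb.returns")))      # False
```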

The binary logging format of the source DB instance is important for replication because it determines the record of data changes. The setting of the `binlog_format` parameter determines whether the replication is row-based or statement-based. For more information, see [Configuring Aurora MySQL binary logging for Single-AZ databases](USER_LogAccess.MySQL.BinaryFormat.md).

**Note**  
All data definition language (DDL) statements are replicated as statements, regardless of the `binlog_format` setting on the source DB instance.

## Replication filtering limitations for Aurora MySQL


The following limitations apply to replication filtering for Aurora MySQL:
+ Replication filters are supported only for Aurora MySQL version 3.
+ Each replication filtering parameter has a 2,000-character limit.
+ Commas aren't supported in replication filters.
+ Replication filtering doesn't support XA transactions.

  For more information, see [Restrictions on XA Transactions](https://dev.mysql.com/doc/refman/8.0/en/xa-restrictions.html) in the MySQL documentation.

## Replication filtering examples for Aurora MySQL


To configure replication filtering for a read replica, modify the replication filtering parameters in the DB cluster parameter group associated with the read replica.

**Note**  
You can't modify a default DB cluster parameter group. If the read replica is using a default parameter group, create a new parameter group and associate it with the read replica. For more information on DB cluster parameter groups, see [Parameter groups for Amazon Aurora](USER_WorkingWithParamGroups.md).

You can set parameters in a DB cluster parameter group using the AWS Management Console, AWS CLI, or RDS API. For information about setting parameters, see [Modifying parameters in a DB parameter group in Amazon Aurora](USER_WorkingWithParamGroups.Modifying.md). When you set parameters in a DB cluster parameter group, all of the DB clusters associated with the parameter group use the parameter settings. If you set the replication filtering parameters in a DB cluster parameter group, make sure that the parameter group is associated only with read replica clusters. Leave the replication filtering parameters empty for source DB instances.

The following examples set the parameters using the AWS CLI. These examples set `ApplyMethod` to `immediate` so that the parameter changes occur immediately after the CLI command completes. If you want a pending change to be applied after the read replica is rebooted, set `ApplyMethod` to `pending-reboot`. 
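
Because each filter value is itself a comma-separated list embedded in the CLI shorthand syntax, the value must be wrapped in single quotes so its commas aren't parsed as shorthand separators. As a rough illustration, this hypothetical Python helper assembles the string used in the examples that follow:

```python
def filter_parameter(name, values, apply_method="immediate"):
    """Assemble the shorthand --parameters value for one replication
    filter: the comma-separated filter list is wrapped in single
    quotes so its commas aren't parsed as shorthand separators."""
    value = ",".join(values)
    return (f"ParameterName={name},"
            f"ParameterValue='{value}',"
            f"ApplyMethod={apply_method}")

print(filter_parameter("replicate-do-db", ["mydb1", "mydb2"]))
# ParameterName=replicate-do-db,ParameterValue='mydb1,mydb2',ApplyMethod=immediate
```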

The following examples set replication filters:
+ [Including databases in replication](#rep-filter-in-dbs-ams)
+ [Including tables in replication](#rep-filter-in-tables-ams)
+ [Including tables in replication using wildcard characters](#rep-filter-in-tables-wildcards-ams)
+ [Excluding databases from replication](#rep-filter-ex-dbs-ams)
+ [Excluding tables from replication](#rep-filter-ex-tables-ams)
+ [Excluding tables from replication using wildcard characters](#rep-filter-ex-tables-wildcards-ams)<a name="rep-filter-in-dbs-ams"></a>

**Example Including databases in replication**  
The following example includes the `mydb1` and `mydb2` databases in replication.  
For Linux, macOS, or Unix:  

```
aws rds modify-db-cluster-parameter-group \
  --db-cluster-parameter-group-name myparametergroup \
  --parameters "ParameterName=replicate-do-db,ParameterValue='mydb1,mydb2',ApplyMethod=immediate"
```
For Windows:  

```
aws rds modify-db-cluster-parameter-group ^
  --db-cluster-parameter-group-name myparametergroup ^
  --parameters "ParameterName=replicate-do-db,ParameterValue='mydb1,mydb2',ApplyMethod=immediate"
```<a name="rep-filter-in-tables-ams"></a>

**Example Including tables in replication**  
The following example includes the `table1` and `table2` tables in database `mydb1` in replication.  
For Linux, macOS, or Unix:  

```
aws rds modify-db-cluster-parameter-group \
  --db-cluster-parameter-group-name myparametergroup \
  --parameters "ParameterName=replicate-do-table,ParameterValue='mydb1.table1,mydb1.table2',ApplyMethod=immediate"
```
For Windows:  

```
aws rds modify-db-cluster-parameter-group ^
  --db-cluster-parameter-group-name myparametergroup ^
  --parameters "ParameterName=replicate-do-table,ParameterValue='mydb1.table1,mydb1.table2',ApplyMethod=immediate"
```<a name="rep-filter-in-tables-wildcards-ams"></a>

**Example Including tables in replication using wildcard characters**  
The following example includes tables with names that begin with `order` and `return` in database `mydb` in replication.  
For Linux, macOS, or Unix:  

```
aws rds modify-db-cluster-parameter-group \
  --db-cluster-parameter-group-name myparametergroup \
  --parameters "ParameterName=replicate-wild-do-table,ParameterValue='mydb.order%,mydb.return%',ApplyMethod=immediate"
```
For Windows:  

```
aws rds modify-db-cluster-parameter-group ^
  --db-cluster-parameter-group-name myparametergroup ^
  --parameters "ParameterName=replicate-wild-do-table,ParameterValue='mydb.order%,mydb.return%',ApplyMethod=immediate"
```<a name="rep-filter-ex-dbs-ams"></a>

**Example Excluding databases from replication**  
The following example excludes the `mydb5` and `mydb6` databases from replication.  
For Linux, macOS, or Unix:  

```
aws rds modify-db-cluster-parameter-group \
  --db-cluster-parameter-group-name myparametergroup \
  --parameters "ParameterName=replicate-ignore-db,ParameterValue='mydb5,mydb6',ApplyMethod=immediate"
```
For Windows:  

```
aws rds modify-db-cluster-parameter-group ^
  --db-cluster-parameter-group-name myparametergroup ^
  --parameters "ParameterName=replicate-ignore-db,ParameterValue='mydb5,mydb6',ApplyMethod=immediate"
```<a name="rep-filter-ex-tables-ams"></a>

**Example Excluding tables from replication**  
The following example excludes tables `table1` in database `mydb5` and `table2` in database `mydb6` from replication.  
For Linux, macOS, or Unix:  

```
aws rds modify-db-cluster-parameter-group \
  --db-cluster-parameter-group-name myparametergroup \
  --parameters "ParameterName=replicate-ignore-table,ParameterValue='mydb5.table1,mydb6.table2',ApplyMethod=immediate"
```
For Windows:  

```
aws rds modify-db-cluster-parameter-group ^
  --db-cluster-parameter-group-name myparametergroup ^
  --parameters "ParameterName=replicate-ignore-table,ParameterValue='mydb5.table1,mydb6.table2',ApplyMethod=immediate"
```<a name="rep-filter-ex-tables-wildcards-ams"></a>

**Example Excluding tables from replication using wildcard characters**  
The following example excludes tables with names that begin with `order` and `return` in database `mydb7` from replication.  
For Linux, macOS, or Unix:  

```
aws rds modify-db-cluster-parameter-group \
  --db-cluster-parameter-group-name myparametergroup \
  --parameters "ParameterName=replicate-wild-ignore-table,ParameterValue='mydb7.order%,mydb7.return%',ApplyMethod=immediate"
```
For Windows:  

```
aws rds modify-db-cluster-parameter-group ^
  --db-cluster-parameter-group-name myparametergroup ^
  --parameters "ParameterName=replicate-wild-ignore-table,ParameterValue='mydb7.order%,mydb7.return%',ApplyMethod=immediate"
```

## Viewing the replication filters for a read replica


You can view the replication filters for a read replica in the following ways:
+ Check the settings of the replication filtering parameters in the parameter group associated with the read replica.

  For instructions, see [Viewing parameter values for a DB parameter group in Amazon Aurora](USER_WorkingWithParamGroups.Viewing.md).
+ In a MySQL client, connect to the read replica and run the `SHOW REPLICA STATUS` statement.

  In the output, the following fields show the replication filters for the read replica:
  + `Binlog_Do_DB`
  + `Binlog_Ignore_DB`
  + `Replicate_Do_DB`
  + `Replicate_Ignore_DB`
  + `Replicate_Do_Table`
  + `Replicate_Ignore_Table`
  + `Replicate_Wild_Do_Table`
  + `Replicate_Wild_Ignore_Table`

  For more information about these fields, see [Checking Replication Status](https://dev.mysql.com/doc/refman/8.0/en/replication-administration-status.html) in the MySQL documentation.
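
If you inspect the output programmatically, the `key: value` lines of `SHOW REPLICA STATUS\G` output can be parsed with a small sketch like the following (a simplified, hypothetical parser, not an official client API):

```python
def parse_replica_status(text):
    """Parse the key: value lines of SHOW REPLICA STATUS\\G output
    into a dict so the replication filter fields can be inspected.
    Lines without a colon (such as the row separator) are skipped."""
    status = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            status[key.strip()] = value.strip()
    return status

sample = """\
*************************** 1. row ***************************
          Replica_IO_State: Waiting for source to send event
           Replicate_Do_DB: mydb1,mydb2
   Replicate_Wild_Do_Table: mydb.order%,mydb.return%
"""
status = parse_replica_status(sample)
print(status["Replicate_Do_DB"])          # mydb1,mydb2
print(status["Replicate_Wild_Do_Table"])  # mydb.order%,mydb.return%
```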

## Monitoring Amazon Aurora MySQL replication
Monitoring replication

Read scaling and high availability depend on minimal lag time. You can monitor how far an Aurora Replica is lagging behind the primary instance of your Aurora MySQL DB cluster by monitoring the Amazon CloudWatch `AuroraReplicaLag` metric. The `AuroraReplicaLag` metric is recorded in each Aurora Replica.

The primary DB instance also records the `AuroraReplicaLagMaximum` and `AuroraReplicaLagMinimum` Amazon CloudWatch metrics. The `AuroraReplicaLagMaximum` metric records the maximum amount of lag between the primary DB instance and each Aurora Replica in the DB cluster. The `AuroraReplicaLagMinimum` metric records the minimum amount of lag between the primary DB instance and each Aurora Replica in the DB cluster.
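
To illustrate how the three metrics relate (not how Aurora computes them internally), this hypothetical sketch derives the maximum and minimum from per-replica lag samples:

```python
def lag_summary(replica_lags_ms):
    """Given per-replica lag samples in milliseconds (as reported by
    the AuroraReplicaLag metric on each replica), return the pair of
    values the primary surfaces as AuroraReplicaLagMaximum and
    AuroraReplicaLagMinimum."""
    lags = list(replica_lags_ms)
    return max(lags), min(lags)

# Hypothetical lag samples for a three-replica cluster:
lags = {"replica-1": 18.0, "replica-2": 23.5, "replica-3": 20.1}
maximum, minimum = lag_summary(lags.values())
print(maximum)  # 23.5
print(minimum)  # 18.0
```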

If you need the most current value for Aurora Replica lag, you can check the `AuroraReplicaLag` metric in Amazon CloudWatch. The Aurora Replica lag is also recorded on each Aurora Replica of your Aurora MySQL DB cluster in the `information_schema.replica_host_status` table. For more information on this table, see [information\_schema\.replica\_host\_status](AuroraMySQL.Reference.ISTables.md#AuroraMySQL.Reference.ISTables.replica_host_status).

For more information on monitoring RDS instances and CloudWatch metrics, see [Monitoring metrics in an Amazon Aurora cluster](MonitoringAurora.md).

# Replicating Amazon Aurora MySQL DB clusters across AWS Regions
Cross-Region replication

 You can create an Amazon Aurora MySQL DB cluster as a read replica in a different AWS Region than the source DB cluster. Taking this approach can improve your disaster recovery capabilities, let you scale read operations into an AWS Region that is closer to your users, and make it easier to migrate from one AWS Region to another. 

 You can create read replicas of both encrypted and unencrypted DB clusters. The read replica must be encrypted if the source DB cluster is encrypted. 

 For each source DB cluster, you can have up to five cross-Region DB clusters that are read replicas. 

**Note**  
 As an alternative to cross-Region read replicas, you can scale read operations with minimal lag time by using an Aurora global database. An Aurora global database has a primary Aurora DB cluster in one AWS Region and up to 10 secondary read-only DB clusters in different Regions. Each secondary DB cluster can include up to 16 (rather than 15) Aurora Replicas. Replication from the primary DB cluster to all secondaries is handled by the Aurora storage layer rather than by the database engine, so lag time for replicating changes is minimal—typically, less than 1 second. Keeping the database engine out of the replication process means that the database engine is dedicated to processing workloads. It also means that you don't need to configure or manage the Aurora MySQL binlog (binary logging) replication. To learn more, see [Using Amazon Aurora Global Database](aurora-global-database.md). 

 When you create an Aurora MySQL DB cluster read replica in another AWS Region, you should be aware of the following: 
+  Both your source DB cluster and your cross-Region read replica DB cluster can have up to 15 Aurora Replicas, along with the primary instance for the DB cluster. By using this functionality, you can scale read operations for both your source AWS Region and your replication target AWS Region. 
+  In a cross-Region scenario, there is more lag time between the source DB cluster and the read replica due to the longer network channels between AWS Regions. 
+  Data transferred for cross-Region replication incurs Amazon RDS data transfer charges. The following cross-Region replication actions generate charges for the data transferred out of the source AWS Region: 
  +  When you create the read replica, Amazon RDS takes a snapshot of the source cluster and transfers the snapshot to the AWS Region that holds the read replica. 
  +  For each data modification made in the source databases, Amazon RDS transfers data from the source AWS Region to the AWS Region that holds the read replica. 

   For more information about Amazon RDS data transfer pricing, see [Amazon Aurora pricing](http://aws.amazon.com/rds/aurora/pricing/). 
+  You can run multiple concurrent create or delete actions for read replicas that reference the same source DB cluster. However, you must stay within the limit of five read replicas for each source DB cluster. 
+  For replication to operate effectively, each read replica should have the same amount of compute and storage resources as the source DB cluster. If you scale the source DB cluster, you should also scale the read replicas. 

**Topics**
+ [

## Before you begin
](#AuroraMySQL.Replication.CrossRegion.Prerequisites)
+ [

# Creating a cross-Region read replica DB cluster for Aurora MySQL
](AuroraMySQL.Replication.CrossRegion.Creating.md)
+ [

# Promoting a read replica to a DB cluster for Aurora MySQL
](AuroraMySQL.Replication.CrossRegion.Promote.md)
+ [

# Troubleshooting cross-Region replicas for Amazon Aurora MySQL
](AuroraMySQL.Replication.CrossRegion.Troubleshooting.md)

## Before you begin


 Before you can create an Aurora MySQL DB cluster that is a cross-Region read replica, you must turn on binary logging on your source Aurora MySQL DB cluster. Cross-region replication for Aurora MySQL uses MySQL binary replication to replay changes on the cross-Region read replica DB cluster. 

 To turn on binary logging on an Aurora MySQL DB cluster, update the `binlog_format` parameter for your source DB cluster. The `binlog_format` parameter is a cluster-level parameter that is in the default cluster parameter group. If your DB cluster uses the default DB cluster parameter group, create a new DB cluster parameter group to modify `binlog_format` settings. We recommend that you set the `binlog_format` to `MIXED`. However, you can also set `binlog_format` to `ROW` or `STATEMENT` if you need a specific binlog format. Reboot your Aurora DB cluster for the change to take effect. 

 For more information about using binary logging with Aurora MySQL, see [Replication between Aurora and MySQL or between Aurora and another Aurora DB cluster (binary log replication)](AuroraMySQL.Replication.MySQL.md). For more information about modifying Aurora MySQL configuration parameters, see [Amazon Aurora DB cluster and DB instance parameters](USER_WorkingWithDBClusterParamGroups.md#Aurora.Managing.ParameterGroups) and [Parameter groups for Amazon Aurora](USER_WorkingWithParamGroups.md). 

# Creating a cross-Region read replica DB cluster for Aurora MySQL
Creating a cross-Region read replica

 You can create an Aurora DB cluster that is a cross-Region read replica by using the AWS Management Console, the AWS Command Line Interface (AWS CLI), or the Amazon RDS API. You can create cross-Region read replicas from both encrypted and unencrypted DB clusters. 

 When you create a cross-Region read replica for Aurora MySQL by using the AWS Management Console, Amazon RDS creates a DB cluster in the target AWS Region, and then automatically creates a DB instance that is the primary instance for that DB cluster. 

 When you create a cross-Region read replica using the AWS CLI or RDS API, you first create the DB cluster in the target AWS Region and wait for it to become active. Once it is active, you then create a DB instance that is the primary instance for that DB cluster. 

 Replication begins when the primary instance of the read replica DB cluster becomes available. 

 Use the following procedures to create a cross-Region read replica from an Aurora MySQL DB cluster. These procedures work for creating read replicas from either encrypted or unencrypted DB clusters. 

## Console


**To create an Aurora MySQL DB cluster that is a cross-Region read replica with the AWS Management Console**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1.  In the top-right corner of the AWS Management Console, select the AWS Region that hosts your source DB cluster. 

1.  In the navigation pane, choose **Databases**.

1.  Choose the DB cluster for which you want to create a cross-Region read replica.

1. For **Actions**, choose **Create cross-Region read replica**.

1.  On the **Create cross region read replica** page, choose the option settings for your cross-Region read replica DB cluster, as described in the following table.    
<a name="cross-region-read-replica-settings"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.CrossRegion.Creating.html)

1.  Choose **Create** to create your cross-Region read replica for Aurora.

## AWS CLI


**To create an Aurora MySQL DB cluster that is a cross-Region read replica with the CLI**

1.  Call the AWS CLI [create-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-cluster.html) command in the AWS Region where you want to create the read replica DB cluster. Include the `--replication-source-identifier` option and specify the Amazon Resource Name (ARN) of the source DB cluster to create a read replica for. 

    For cross-Region replication where the DB cluster identified by `--replication-source-identifier` is encrypted, specify the `--kms-key-id` option and the `--storage-encrypted` option. 
**Note**  
 You can set up cross-Region replication from an unencrypted DB cluster to an encrypted read replica by specifying `--storage-encrypted` and providing a value for `--kms-key-id`. 

    You can't specify the `--master-username` and `--master-user-password` parameters. Those values are taken from the source DB cluster. 

    The following code example creates a read replica in the us-east-1 Region from an unencrypted DB cluster in the us-west-2 Region. The command is called in the us-east-1 Region. 

   For Linux, macOS, or Unix:

   ```
   aws rds create-db-cluster \
     --db-cluster-identifier sample-replica-cluster \
     --engine aurora-mysql \
     --engine-version 8.0.mysql_aurora.3.08.0 \
     --replication-source-identifier arn:aws:rds:us-west-2:123456789012:cluster:sample-master-cluster
   ```

   For Windows:

   ```
   aws rds create-db-cluster ^
     --db-cluster-identifier sample-replica-cluster ^
     --engine aurora-mysql ^
     --engine-version 8.0.mysql_aurora.3.08.0 ^
     --replication-source-identifier arn:aws:rds:us-west-2:123456789012:cluster:sample-master-cluster
   ```

    The following code example creates a read replica in the us-east-1 Region from an encrypted DB cluster in the us-west-2 Region. The command is called in the us-east-1 Region. 

   For Linux, macOS, or Unix:

   ```
   aws rds create-db-cluster \
     --db-cluster-identifier sample-replica-cluster \
     --engine aurora-mysql \
     --engine-version 8.0.mysql_aurora.3.08.0 \
     --replication-source-identifier arn:aws:rds:us-west-2:123456789012:cluster:sample-master-cluster \
     --kms-key-id my-us-east-1-key \
     --storage-encrypted
   ```

   For Windows:

   ```
   aws rds create-db-cluster ^
     --db-cluster-identifier sample-replica-cluster ^
     --engine aurora-mysql ^
     --engine-version 8.0.mysql_aurora.3.08.0 ^
     --replication-source-identifier arn:aws:rds:us-west-2:123456789012:cluster:sample-master-cluster ^
     --kms-key-id my-us-east-1-key ^
     --storage-encrypted
   ```

   The `--source-region` option is required for cross-Region replication between the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions, where the DB cluster identified by `--replication-source-identifier` is encrypted. For `--source-region`, specify the AWS Region of the source DB cluster.

   If `--source-region` isn't specified, specify a `--pre-signed-url` value. A *presigned URL* is a URL that contains a Signature Version 4 signed request for the `create-db-cluster` command that is called in the source AWS Region. To learn more about the `pre-signed-url` option, see [create-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-cluster.html) in the *AWS CLI Command Reference*.

1.  Check that the DB cluster has become available to use by using the AWS CLI [describe-db-clusters](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-clusters.html) command, as shown in the following example. 

   ```
   aws rds describe-db-clusters --db-cluster-identifier sample-replica-cluster
   ```

    When the **`describe-db-clusters`** results show a status of `available`, create the primary instance for the DB cluster so that replication can begin. To do so, use the AWS CLI [create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html) command as shown in the following example. 

   For Linux, macOS, or Unix:

   ```
   aws rds create-db-instance \
     --db-cluster-identifier sample-replica-cluster \
     --db-instance-class db.r5.large \
     --db-instance-identifier sample-replica-instance \
     --engine aurora-mysql
   ```

   For Windows:

   ```
   aws rds create-db-instance ^
     --db-cluster-identifier sample-replica-cluster ^
     --db-instance-class db.r5.large ^
     --db-instance-identifier sample-replica-instance ^
     --engine aurora-mysql
   ```

    When the DB instance is created and available, replication begins. You can determine if the DB instance is available by calling the AWS CLI [describe-db-instances](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-instances.html) command. 
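The availability checks in the preceding steps lend themselves to scripting. The following is a minimal Python sketch that inspects the JSON printed by `describe-db-instances`; the helper name `instance_available` and the trimmed sample payload are illustrative, not part of the AWS CLI.

```python
import json

def instance_available(describe_output, instance_id):
    """Return True when the named DB instance reports the 'available' status.

    Parses the JSON document that `aws rds describe-db-instances` prints.
    The DBInstances/DBInstanceStatus field names follow the RDS API shape.
    """
    doc = json.loads(describe_output)
    for inst in doc.get("DBInstances", []):
        if inst.get("DBInstanceIdentifier") == instance_id:
            return inst.get("DBInstanceStatus") == "available"
    return False

# Sample payload in the shape the CLI returns, trimmed to relevant fields.
sample = json.dumps({
    "DBInstances": [
        {"DBInstanceIdentifier": "sample-replica-instance",
         "DBInstanceStatus": "creating"}
    ]
})
```

In practice, you would feed the function the output of the CLI call (for example, captured with `subprocess`) and poll until it returns true.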

## RDS API


**To create an Aurora MySQL DB cluster that is a cross-Region read replica with the API**

1.  Call the RDS API [CreateDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBCluster.html) operation in the AWS Region where you want to create the read replica DB cluster. Include the `ReplicationSourceIdentifier` parameter and specify the Amazon Resource Name (ARN) of the source DB cluster to create a read replica for. 

    For cross-Region replication where the DB cluster identified by `ReplicationSourceIdentifier` is encrypted, specify the `KmsKeyId` parameter and set the `StorageEncrypted` parameter to `true`. 
**Note**  
 You can set up cross-Region replication from an unencrypted DB cluster to an encrypted read replica by specifying `StorageEncrypted` as **true** and providing a value for `KmsKeyId`. In this case, you don't need to specify `PreSignedUrl`. 

    You don't need to include the `MasterUsername` and `MasterUserPassword` parameters, because those values are taken from the source DB cluster. 

    The following code example creates a read replica in the us-east-1 Region from an unencrypted DB cluster in the us-west-2 Region. The action is called in the us-east-1 Region. 

   ```
   https://rds.us-east-1.amazonaws.com/
     ?Action=CreateDBCluster
     &ReplicationSourceIdentifier=arn:aws:rds:us-west-2:123456789012:cluster:sample-master-cluster
     &DBClusterIdentifier=sample-replica-cluster
     &Engine=aurora-mysql
     &SignatureMethod=HmacSHA256
     &SignatureVersion=4
     &Version=2014-10-31
     &X-Amz-Algorithm=AWS4-HMAC-SHA256
     &X-Amz-Credential=AKIADQKE4SARGYLE/20161117/us-east-1/rds/aws4_request
     &X-Amz-Date=20160201T001547Z
     &X-Amz-SignedHeaders=content-type;host;user-agent;x-amz-content-sha256;x-amz-date
     &X-Amz-Signature=a04c831a0b54b5e4cd236a90dcb9f5fab7185eb3b72b5ebe9a70a4e95790c8b7
   ```

    The following code example creates a read replica in the us-east-1 Region from an encrypted DB cluster in the us-west-2 Region. The action is called in the us-east-1 Region. 

   ```
   https://rds.us-east-1.amazonaws.com/
     ?Action=CreateDBCluster
     &KmsKeyId=my-us-east-1-key
     &StorageEncrypted=true
     &PreSignedUrl=https%253A%252F%252Frds.us-west-2.amazonaws.com%252F
            %253FAction%253DCreateDBCluster
            %2526DestinationRegion%253Dus-east-1
            %2526KmsKeyId%253Dmy-us-east-1-key
            %2526ReplicationSourceIdentifier%253Darn%25253Aaws%25253Ards%25253Aus-west-2%25253A123456789012%25253Acluster%25253Asample-master-cluster
            %2526SignatureMethod%253DHmacSHA256
            %2526SignatureVersion%253D4
            %2526Version%253D2014-10-31
            %2526X-Amz-Algorithm%253DAWS4-HMAC-SHA256
            %2526X-Amz-Credential%253DAKIADQKE4SARGYLE%252F20161117%252Fus-west-2%252Frds%252Faws4_request
            %2526X-Amz-Date%253D20161117T215409Z
            %2526X-Amz-Expires%253D3600
            %2526X-Amz-SignedHeaders%253Dcontent-type%253Bhost%253Buser-agent%253Bx-amz-content-sha256%253Bx-amz-date
            %2526X-Amz-Signature%253D255a0f17b4e717d3b67fad163c3ec26573b882c03a65523522cf890a67fca613
     &ReplicationSourceIdentifier=arn:aws:rds:us-west-2:123456789012:cluster:sample-master-cluster
     &DBClusterIdentifier=sample-replica-cluster
     &Engine=aurora-mysql
     &SignatureMethod=HmacSHA256
     &SignatureVersion=4
     &Version=2014-10-31
     &X-Amz-Algorithm=AWS4-HMAC-SHA256
     &X-Amz-Credential=AKIADQKE4SARGYLE/20161117/us-east-1/rds/aws4_request
     &X-Amz-Date=20160201T001547Z
     &X-Amz-SignedHeaders=content-type;host;user-agent;x-amz-content-sha256;x-amz-date
     &X-Amz-Signature=a04c831a0b54b5e4cd236a90dcb9f5fab7185eb3b72b5ebe9a70a4e95790c8b7
   ```

   For cross-Region replication between the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions, where the DB cluster identified by `ReplicationSourceIdentifier` is encrypted, also specify the `PreSignedUrl` parameter. The presigned URL must be a valid request for the `CreateDBCluster` API operation that can be performed in the source AWS Region that contains the encrypted DB cluster to be replicated. The KMS key identifier is used to encrypt the read replica, and must be a KMS key valid for the destination AWS Region. To automatically rather than manually generate a presigned URL, use the AWS CLI [create-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-cluster.html) command with the `--source-region` option instead. 

1.  Check that the DB cluster has become available by using the RDS API [DescribeDBClusters](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBClusters.html) operation, as shown in the following example. 

   ```
   https://rds.us-east-1.amazonaws.com/
     ?Action=DescribeDBClusters
     &DBClusterIdentifier=sample-replica-cluster
     &SignatureMethod=HmacSHA256
     &SignatureVersion=4
     &Version=2014-10-31
     &X-Amz-Algorithm=AWS4-HMAC-SHA256
     &X-Amz-Credential=AKIADQKE4SARGYLE/20161117/us-east-1/rds/aws4_request
     &X-Amz-Date=20160201T002223Z
     &X-Amz-SignedHeaders=content-type;host;user-agent;x-amz-content-sha256;x-amz-date
     &X-Amz-Signature=84c2e4f8fba7c577ac5d820711e34c6e45ffcd35be8a6b7c50f329a74f35f426
   ```

    When `DescribeDBClusters` results show a status of `available`, create the primary instance for the DB cluster so that replication can begin. To do so, use the RDS API [CreateDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html) operation as shown in the following example. 

   ```
   https://rds.us-east-1.amazonaws.com/
     ?Action=CreateDBInstance
     &DBClusterIdentifier=sample-replica-cluster
     &DBInstanceClass=db.r5.large
     &DBInstanceIdentifier=sample-replica-instance
     &Engine=aurora-mysql
     &SignatureMethod=HmacSHA256
     &SignatureVersion=4
     &Version=2014-10-31
     &X-Amz-Algorithm=AWS4-HMAC-SHA256
     &X-Amz-Credential=AKIADQKE4SARGYLE/20161117/us-east-1/rds/aws4_request
     &X-Amz-Date=20160201T003808Z
     &X-Amz-SignedHeaders=content-type;host;user-agent;x-amz-content-sha256;x-amz-date
     &X-Amz-Signature=125fe575959f5bbcebd53f2365f907179757a08b5d7a16a378dfa59387f58cdb
   ```

    When the DB instance is created and available, replication begins. You can determine if the DB instance is available by calling the RDS API [DescribeDBInstances](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBInstances.html) operation. 

## Viewing Amazon Aurora MySQL cross-Region replicas


 You can view the cross-Region replication relationships for your Amazon Aurora MySQL DB clusters by calling the [describe-db-clusters](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-clusters.html) AWS CLI command or the [DescribeDBClusters](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBClusters.html) RDS API operation. In the response, refer to the `ReadReplicaIdentifiers` field for the DB cluster identifiers of any cross-Region read replica DB clusters. Refer to the `ReplicationSourceIdentifier` element for the ARN of the source DB cluster that is the replication source. 
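A small script can pull these two fields out of the `describe-db-clusters` output. The following Python sketch assumes the documented response shape; the function name `replication_relationships` and the trimmed sample response are illustrative.

```python
import json

def replication_relationships(describe_output):
    """Map each DB cluster to its cross-Region replicas and its source ARN.

    ReadReplicaIdentifiers and ReplicationSourceIdentifier are the field
    names in the DescribeDBClusters response.
    """
    doc = json.loads(describe_output)
    relationships = {}
    for cluster in doc.get("DBClusters", []):
        relationships[cluster["DBClusterIdentifier"]] = {
            "replicas": cluster.get("ReadReplicaIdentifiers", []),
            "source": cluster.get("ReplicationSourceIdentifier"),
        }
    return relationships

# Trimmed sample response for a source cluster with one cross-Region replica.
sample = json.dumps({
    "DBClusters": [
        {"DBClusterIdentifier": "sample-master-cluster",
         "ReadReplicaIdentifiers":
             ["arn:aws:rds:us-east-1:123456789012:cluster:sample-replica-cluster"],
         "ReplicationSourceIdentifier": None}
    ]
})
```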

# Promoting a read replica to a DB cluster for Aurora MySQL
Promoting a read replica

 You can promote an Aurora MySQL read replica to a standalone DB cluster. When you promote an Aurora MySQL read replica, its DB instances are rebooted before they become available. 

 Typically, you promote an Aurora MySQL read replica to a standalone DB cluster as a data recovery scheme if the source DB cluster fails. 

 To do this, first create a read replica and then monitor the source DB cluster for failures. In the event of a failure, do the following: 

1.  Promote the read replica. 

1.  Direct database traffic to the promoted DB cluster. 

1.  Create a replacement read replica with the promoted DB cluster as its source. 

 When you promote a read replica, the read replica becomes a standalone Aurora DB cluster. The promotion process can take several minutes or longer to complete, depending on the size of the read replica. After you promote the read replica to a new DB cluster, it's just like any other DB cluster. For example, you can create read replicas from it and perform point-in-time restore operations. You can also create Aurora Replicas for the DB cluster. 

 Because the promoted DB cluster is no longer a read replica, you can't use it as a replication target. 

 The following steps show the general process for promoting a read replica to a DB cluster: 

1.  Stop any transactions from being written to the read replica source DB cluster, and then wait for all updates to be made to the read replica. Database updates occur on the read replica after they have occurred on the source DB cluster, and this replication lag can vary significantly. Use the `ReplicaLag` metric to determine when all updates have been made to the read replica. The `ReplicaLag` metric records the amount of time a read replica DB instance lags behind the source DB instance. When the `ReplicaLag` metric reaches `0`, the read replica has caught up to the source DB instance. 

1.  Promote the read replica by using the **Promote** option on the Amazon RDS console, the AWS CLI command [promote-read-replica-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/promote-read-replica-db-cluster.html), or the [PromoteReadReplicaDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_PromoteReadReplicaDBCluster.html) Amazon RDS API operation. 

    You promote the read replica at the DB cluster level, not at the DB instance level. After the read replica is promoted, the Aurora MySQL DB cluster becomes a standalone DB cluster. The DB instance with the highest failover priority is promoted to the primary DB instance for the DB cluster. The other DB instances become Aurora Replicas. 
**Note**  
 The promotion process takes a few minutes to complete. When you promote a read replica, replication is stopped and the DB instances are rebooted. When the reboot is complete, the read replica is available as a new DB cluster. 
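The lag check in the first step can be automated by reading the `ReplicaLag` metric and promoting only when the latest datapoint reaches 0. The following Python sketch works on (timestamp, seconds) pairs such as a CloudWatch query for that metric might return; the function name and data shape are illustrative.

```python
def caught_up(replica_lag_datapoints):
    """True when the most recent ReplicaLag datapoint is 0.

    `replica_lag_datapoints` is a list of (timestamp, lag_seconds) pairs,
    for example collected from CloudWatch for the read replica instance.
    An empty series means we can't confirm the replica has caught up.
    """
    if not replica_lag_datapoints:
        return False
    latest = max(replica_lag_datapoints, key=lambda point: point[0])
    return latest[1] == 0
```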

## Console


**To promote an Aurora MySQL read replica to a DB cluster**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1.  On the console, choose **Instances**. 

    The **Instance** pane appears. 

1.  In the **Instances** pane, choose the read replica that you want to promote. 

    The read replicas appear as Aurora MySQL DB instances. 

1.  For **Actions**, choose **Promote read replica**. 

1.  On the acknowledgment page, choose **Promote read replica**. 

## AWS CLI


 To promote a read replica to a DB cluster, use the AWS CLI [promote-read-replica-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/promote-read-replica-db-cluster.html) command. 

**Example**  
For Linux, macOS, or Unix:  

```
aws rds promote-read-replica-db-cluster \
    --db-cluster-identifier mydbcluster
```
For Windows:  

```
aws rds promote-read-replica-db-cluster ^
    --db-cluster-identifier mydbcluster
```

## RDS API


 To promote a read replica to a DB cluster, call [PromoteReadReplicaDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_PromoteReadReplicaDBCluster.html). 

# Troubleshooting cross-Region replicas for Amazon Aurora MySQL
Troubleshooting cross-Region replicas

 Following, you can find common error messages that you might encounter when creating an Amazon Aurora cross-Region read replica, and ways to resolve these errors. 

## Source cluster [DB cluster ARN] doesn't have binlogs enabled


 To resolve this issue, turn on binary logging on the source DB cluster. For more information, see [Before you begin](AuroraMySQL.Replication.CrossRegion.md#AuroraMySQL.Replication.CrossRegion.Prerequisites). 

## Source cluster [DB cluster ARN] doesn't have cluster parameter group in sync on writer


 You receive this error if you have updated the `binlog_format` DB cluster parameter, but have not rebooted the primary instance for the DB cluster. Reboot the primary instance (that is, the writer) for the DB cluster and try again. 

## Source cluster [DB cluster ARN] already has a read replica in this region


 You can have up to five cross-Region DB clusters that are read replicas for each source DB cluster in any AWS Region. If you already have the maximum number of read replicas for a DB cluster in a particular AWS Region, you must delete an existing one before you can create a new cross-Region DB cluster in that Region. 

## DB cluster [DB cluster ARN] requires a database engine upgrade for cross-Region replication support


 To resolve this issue, upgrade the database engine version for all of the instances in the source DB cluster to the most recent database engine version, and then try creating the cross-Region read replica DB cluster again. 

# Replication between Aurora and MySQL or between Aurora and another Aurora DB cluster (binary log replication)
Binary log (binlog) replication<a name="binlog_replication"></a><a name="binlog"></a>

Because Amazon Aurora MySQL is compatible with MySQL, you can set up replication between a MySQL database and an Amazon Aurora MySQL DB cluster. This type of replication uses the MySQL binary log replication, also referred to as *binlog replication*. If you use binary log replication with Aurora, we recommend that your MySQL database run MySQL version 5.5 or later. You can set up replication where your Aurora MySQL DB cluster is the replication source or the replica. You can replicate with an Amazon RDS MySQL DB instance, a MySQL database external to Amazon RDS, or another Aurora MySQL DB cluster.

**Note**  
You can't use binlog replication to or from certain types of Aurora DB clusters. In particular, binlog replication isn't available for Aurora Serverless v1 clusters. If the `SHOW MASTER STATUS` and `SHOW SLAVE STATUS` (Aurora MySQL version 2) or `SHOW REPLICA STATUS` (Aurora MySQL version 3) statement returns no output, check that the cluster you're using supports binlog replication.

You can also replicate with an RDS for MySQL DB instance or Aurora MySQL DB cluster in another AWS Region. When you're performing replication across AWS Regions, make sure that your DB clusters and DB instances are publicly accessible. If the Aurora MySQL DB clusters are in private subnets in your VPC, use VPC peering between the AWS Regions. For more information, see [A DB cluster in a VPC accessed by an EC2 instance in a different VPC](USER_VPC.Scenarios.md#USER_VPC.Scenario3).

If you want to configure replication between an Aurora MySQL DB cluster and an Aurora MySQL DB cluster in another AWS Region, you can create an Aurora MySQL DB cluster as a read replica in a different AWS Region from the source DB cluster. For more information, see [Replicating Amazon Aurora MySQL DB clusters across AWS Regions](AuroraMySQL.Replication.CrossRegion.md).

With Aurora MySQL version 2 and 3, you can replicate between Aurora MySQL and an external source or target that uses global transaction identifiers (GTIDs) for replication. Ensure that the GTID-related parameters in the Aurora MySQL DB cluster have settings that are compatible with the GTID status of the external database. To learn how to do this, see [Using GTID-based replication](mysql-replication-gtid.md). In Aurora MySQL version 3.01 and higher, you can choose how to assign GTIDs to transactions that are replicated from a source that doesn't use GTIDs. For information about the stored procedure that controls that setting, see [mysql.rds\_assign\_gtids\_to\_anonymous\_transactions (Aurora MySQL version 3)](mysql-stored-proc-gtid.md#mysql_assign_gtids_to_anonymous_transactions).

**Warning**  
 When you replicate between Aurora MySQL and MySQL, make sure that you use only InnoDB tables. If you have MyISAM tables that you want to replicate, convert them to InnoDB before you set up replication by using the following command.   

```
alter table <schema>.<table_name> engine=innodb, algorithm=copy;
```
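If there are many MyISAM tables, the conversion statements can be generated rather than typed by hand. The following Python sketch builds them from (schema, table) pairs, such as the result of querying `information_schema.tables` for `engine = 'MyISAM'`; the helper name is illustrative.

```python
def innodb_conversion_statements(myisam_tables):
    """Build one ALTER TABLE statement per MyISAM table to convert it
    to InnoDB, mirroring the command shown in the warning above."""
    return [
        f"alter table `{schema}`.`{table}` engine=innodb, algorithm=copy;"
        for schema, table in myisam_tables
    ]
```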

The following sections describe how to set up and stop replication, scale reads for your database, optimize binlog replication, and set up enhanced binlog.

**Topics**
+ [

# Setting up binary log replication for Aurora MySQL
](AuroraMySQL.Replication.MySQL.SettingUp.md)
+ [

# Stopping binary log replication for Aurora MySQL
](AuroraMySQL.Replication.MySQL.Stopping.md)
+ [

# Scaling reads for your MySQL database with Amazon Aurora
](AuroraMySQL.Replication.ReadScaling.md)
+ [

# Optimizing binary log replication for Aurora MySQL
](binlog-optimization.md)
+ [

# Setting up enhanced binlog for Aurora MySQL
](AuroraMySQL.Enhanced.binlog.md)

# Setting up binary log replication for Aurora MySQL
Setting up binlog replication

Setting up MySQL replication with Aurora MySQL involves the following steps, which are discussed in detail:

**Contents**
+ [

## 1. Turn on binary logging on the replication source
](#AuroraMySQL.Replication.MySQL.EnableBinlog)
+ [

## 2. Retain binary logs on the replication source until no longer needed
](#AuroraMySQL.Replication.MySQL.RetainBinlogs)
+ [

## 3. Create a copy or dump of your replication source
](#AuroraMySQL.Replication.MySQL.CreateSnapshot)
+ [

## 4. Load the dump into your replica target (if needed)
](#AuroraMySQL.Replication.MySQL.LoadSnapshot)
+ [

## 5. Create a replication user on your replication source
](#AuroraMySQL.Replication.MySQL.CreateReplUser)
+ [

## 6. Turn on replication on your replica target
](#AuroraMySQL.Replication.MySQL.EnableReplication)
  + [

### Setting a location to stop replication to a read replica
](#AuroraMySQL.Replication.StartReplicationUntil)
+ [

## 7. Monitor your replica
](#AuroraMySQL.Replication.MySQL.Monitor)
+ [

## Synchronizing passwords between replication source and target
](#AuroraMySQL.Replication.passwords)

## 1. Turn on binary logging on the replication source


 Use the following instructions to turn on binary logging on the replication source for your database engine. 


|  Database engine  |  Instructions  | 
| --- | --- | 
|   Aurora MySQL   |   **To turn on binary logging on an Aurora MySQL DB cluster**  Set the `binlog_format` DB cluster parameter to `ROW`, `STATEMENT`, or `MIXED`. `MIXED` is recommended unless you have a need for a specific binlog format. (The default value is `OFF`.) To change the `binlog_format` parameter, create a custom DB cluster parameter group and associate that custom parameter group with your DB cluster. You can't change parameters in the default DB cluster parameter group. If you're changing the `binlog_format` parameter from `OFF` to another value, reboot your Aurora DB cluster for the change to take effect.  For more information, see [Amazon Aurora DB cluster and DB instance parameters](USER_WorkingWithDBClusterParamGroups.md#Aurora.Managing.ParameterGroups) and [Parameter groups for Amazon Aurora](USER_WorkingWithParamGroups.md).   | 
|   RDS for MySQL   |   **To turn on binary logging on an Amazon RDS DB instance**   You can't turn on binary logging directly for an Amazon RDS DB instance, but you can turn it on by doing one of the following:  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.MySQL.SettingUp.html)  | 
|   MySQL (external)  |  **To set up encrypted replication** To replicate data securely with Aurora MySQL version 2, you can use encrypted replication.   If you don't need to use encrypted replication, you can skip these steps.    The following are prerequisites for using encrypted replication:  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.MySQL.SettingUp.html)  During encrypted replication, the Aurora MySQL DB cluster acts a client to the MySQL database server. The certificates and keys for the Aurora MySQL client are in files in .pem format.  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.MySQL.SettingUp.html)  **To turn on binary logging on an external MySQL database**  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.MySQL.SettingUp.html)  | 

## 2. Retain binary logs on the replication source until no longer needed


When you use MySQL binary log replication, Amazon RDS doesn't manage the replication process. As a result, you need to ensure that the binlog files on your replication source are retained until after the changes have been applied to the replica. This maintenance helps you to restore your source database in the event of a failure.

Use the following instructions to retain binary logs for your database engine.


|  Database engine  |  Instructions  | 
| --- | --- | 
|   Aurora MySQL  |  **To retain binary logs on an Aurora MySQL DB cluster** You don't have access to the binlog files for an Aurora MySQL DB cluster. As a result, you must choose a time frame to retain the binlog files on your replication source long enough to ensure that the changes have been applied to your replica before the binlog file is deleted by Amazon RDS. You can retain binlog files on an Aurora MySQL DB cluster for up to 90 days. If you're setting up replication with a MySQL database or RDS for MySQL DB instance as the replica, and the database that you are creating a replica for is very large, choose a large time frame to retain binlog files until the initial copy of the database to the replica is complete and the replica lag has reached 0. To set the binary log retention time frame, use the [mysql.rds\_set\_configuration](mysql-stored-proc-configuring.md#mysql_rds_set_configuration) procedure and specify a configuration parameter of `'binlog retention hours'` along with the number of hours to retain binlog files on the DB cluster. The maximum value for Aurora MySQL version 2.11.0 and higher and version 3 is 2160 (90 days). The following example sets the retention period for binlog files to 6 days: <pre>CALL mysql.rds_set_configuration('binlog retention hours', 144);</pre> After replication has been started, you can verify that changes have been applied to your replica by running the `SHOW SLAVE STATUS` (Aurora MySQL version 2) or `SHOW REPLICA STATUS` (Aurora MySQL version 3) command on your replica and checking the `Seconds behind master` field. If the `Seconds behind master` field is 0, then there is no replica lag. When there is no replica lag, reduce the length of time that binlog files are retained by setting the `binlog retention hours` configuration parameter to a smaller time frame. If this setting isn't specified, the default for Aurora MySQL is 24 (1 day). If you specify a value for `'binlog retention hours'` that is higher than the maximum value, then Aurora MySQL uses the maximum.  | 
|   RDS for MySQL   |   **To retain binary logs on an Amazon RDS DB instance**   You can retain binary log files on an Amazon RDS DB instance by setting the binlog retention hours just as with an Aurora MySQL DB cluster, described in the previous row. You can also retain binlog files on an Amazon RDS DB instance by creating a read replica for the DB instance. This read replica is temporary and solely for the purpose of retaining binlog files. After the read replica has been created, call the [mysql.rds\_stop\_replication](mysql-stored-proc-replicating.md#mysql_rds_stop_replication) procedure on the read replica. While replication is stopped, Amazon RDS doesn't delete any of the binlog files on the replication source. After you have set up replication with your permanent replica, you can delete the read replica when the replica lag (`Seconds behind master` field) between your replication source and your permanent replica reaches 0.  | 
|   MySQL (external)   |  **To retain binary logs on an external MySQL database** Because binlog files on an external MySQL database are not managed by Amazon RDS, they are retained until you delete them. After replication has been started, you can verify that changes have been applied to your replica by running the `SHOW SLAVE STATUS` (Aurora MySQL version 2) or `SHOW REPLICA STATUS` (Aurora MySQL version 3) command on your replica and checking the `Seconds behind master` field. If the `Seconds behind master` field is 0, then there is no replica lag. When there is no replica lag, you can delete old binlog files.  | 
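The retention limits described in the table can be expressed as a small helper: values above the 90-day maximum are clamped, and an unset value falls back to the 24-hour default. This is a Python sketch of the documented behavior, not an API call; the constant and function names are illustrative.

```python
MAX_BINLOG_RETENTION_HOURS = 2160     # 90 days, the Aurora MySQL maximum
DEFAULT_BINLOG_RETENTION_HOURS = 24   # used when the parameter isn't set

def effective_retention_hours(requested_hours=None):
    """Hours Aurora MySQL actually retains binlogs for a requested setting."""
    if requested_hours is None:
        return DEFAULT_BINLOG_RETENTION_HOURS
    return min(requested_hours, MAX_BINLOG_RETENTION_HOURS)
```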

## 3. Create a copy or dump of your replication source


You use a snapshot, clone, or dump of your replication source to load a baseline copy of your data onto your replica. Then you start replicating from that point.

Use the following instructions to create a copy or dump of the replication source for your database engine.


| Database engine | Instructions | 
| --- | --- | 
|   Aurora MySQL   |  **To create a copy of an Aurora MySQL DB cluster** Use one of the following methods: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.MySQL.SettingUp.html) **To determine the binlog file name and position** Use one of the following methods: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.MySQL.SettingUp.html) **To create a dump of an Aurora MySQL DB cluster** If your replica target is an external MySQL database or an RDS for MySQL DB instance, then you must create a dump file from your Aurora DB cluster. Be sure to run the `mysqldump` command against the copy of your source DB cluster that you created. This is to avoid locking considerations when taking the dump. If the dump were taken on the source DB cluster directly, it would be necessary to lock the source tables to prevent concurrent writes to them while the dump is in progress. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.MySQL.SettingUp.html)  | 
|  RDS for MySQL  |  **To create a snapshot of an Amazon RDS DB instance** Create a read replica of your Amazon RDS DB instance. For more information, see [Creating a read replica](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html#USER_ReadRepl.Create) in the *Amazon Relational Database Service User Guide*.  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.MySQL.SettingUp.html)  | 
|  MySQL (external)  |  **To create a dump of an external MySQL database** [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.MySQL.SettingUp.html)  | 

## 4. Load the dump into your replica target (if needed)


If you plan to load data from a dump of a MySQL database that is external to Amazon RDS, you might want to create an EC2 instance to copy the dump files to. Then you can load the data into your DB cluster or DB instance from that EC2 instance. Using this approach, you can compress the dump files before copying them to the EC2 instance to reduce the network costs associated with copying data to Amazon RDS. You can also encrypt the dump files to secure the data as it is transferred across the network.

**Note**  
If you create a new Aurora MySQL DB cluster as your replica target, then you don't need to load a dump file:  
+ You can restore from a DB cluster snapshot to create a new DB cluster. For more information, see [Restoring from a DB cluster snapshot](aurora-restore-snapshot.md).
+ You can clone your source DB cluster to create a new DB cluster. For more information, see [Cloning a volume for an Amazon Aurora DB cluster](Aurora.Managing.Clone.md).
+ You can migrate the data from a DB instance snapshot into a new DB cluster. For more information, see [Migrating data to an Amazon Aurora MySQL DB cluster](AuroraMySQL.Migrating.md).

Use the following instructions to load the dump of your replication source into your replica target for your database engine.


| Database engine | Instructions | 
| --- | --- | 
|  Aurora MySQL   |   **To load a dump into an Aurora MySQL DB cluster**  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.MySQL.SettingUp.html)  | 
|   RDS for MySQL   |  **To load a dump into an Amazon RDS DB instance** [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.MySQL.SettingUp.html)  | 
|  MySQL (external)  |  **To load a dump into an external MySQL database** You can't load a DB snapshot or a DB cluster snapshot into an external MySQL database. Instead, you must use the output from the `mysqldump` command. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.MySQL.SettingUp.html)  | 

## 5. Create a replication user on your replication source


Create a user ID on the source that is used solely for replication. The following example is for RDS for MySQL or external MySQL source databases.

```
mysql> CREATE USER 'repl_user'@'domain_name' IDENTIFIED BY 'password';
```

For Aurora MySQL source databases, the `skip_name_resolve` DB cluster parameter is set to `1` (`ON`) and can't be modified, so you must use an IP address for the host instead of a domain name. For more information, see [skip\_name\_resolve](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_skip_name_resolve) in the MySQL documentation.

```
mysql> CREATE USER 'repl_user'@'IP_address' IDENTIFIED BY 'password';
```

The user requires the `REPLICATION CLIENT` and `REPLICATION SLAVE` privileges. Grant these privileges to the user.

If you need to use encrypted replication, require SSL connections for the replication user. For example, you can use one of the following statements to require SSL connections on the user account `repl_user`.

```
GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'IP_address' REQUIRE SSL;
```

```
GRANT USAGE ON *.* TO 'repl_user'@'IP_address' REQUIRE SSL;
```

**Note**  
If `REQUIRE SSL` isn't included, the replication connection might silently fall back to an unencrypted connection.
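The statements above can be generated for a given replication user and host. In this Python sketch the function name and placeholder credentials are illustrative; the emitted SQL follows the `CREATE USER` and `GRANT` forms shown earlier, with `REQUIRE SSL` added when encrypted replication is needed.

```python
def replication_user_sql(user, host, password, require_ssl=True):
    """Build the CREATE USER and GRANT statements for a replication user.

    `host` is an IP address for Aurora MySQL sources (skip_name_resolve
    is always on) or a domain name for other sources. With require_ssl,
    the GRANT carries REQUIRE SSL so the replication connection can't
    silently fall back to an unencrypted connection.
    """
    ssl_clause = " REQUIRE SSL" if require_ssl else ""
    return [
        f"CREATE USER '{user}'@'{host}' IDENTIFIED BY '{password}';",
        f"GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* "
        f"TO '{user}'@'{host}'{ssl_clause};",
    ]
```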

## 6. Turn on replication on your replica target


Before you turn on replication, we recommend that you take a manual snapshot of the Aurora MySQL DB cluster or RDS for MySQL DB instance replica target. If a problem arises and you need to re-establish replication with the DB cluster or DB instance replica target, you can restore the DB cluster or DB instance from this snapshot instead of having to import the data into your replica target again.

Use the following instructions to turn on replication for your database engine.


|  Database engine  |  Instructions  | 
| --- | --- | 
|   Aurora MySQL   |  **To turn on replication from an Aurora MySQL DB cluster**  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.MySQL.SettingUp.html) To use SSL encryption, set the final value to `1` instead of `0`.  | 
|   RDS for MySQL   |   **To turn on replication from an Amazon RDS DB instance**  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.MySQL.SettingUp.html) To use SSL encryption, set the final value to `1` instead of `0`.  | 
|   MySQL (external)   |   **To turn on replication from an external MySQL database**  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.MySQL.SettingUp.html)  | 

If replication fails, it can result in a large increase in unintentional I/O on the replica, which can degrade performance. If replication fails or is no longer needed, you can run the [mysql.rds\_reset\_external\_master (Aurora MySQL version 2)](mysql-stored-proc-replicating.md#mysql_rds_reset_external_master) or [mysql.rds\_reset\_external\_source (Aurora MySQL version 3)](mysql-stored-proc-replicating.md#mysql_rds_reset_external_source) stored procedure to remove the replication configuration.

### Setting a location to stop replication to a read replica


In Aurora MySQL version 3.04 and higher, you can start replication and then stop it at a specified binary log file location using the [mysql.rds\_start\_replication\_until (Aurora MySQL version 3)](mysql-stored-proc-replicating.md#mysql_rds_start_replication_until) stored procedure.

**To start replication to a read replica and stop replication at a specific location**

1. Using a MySQL client, connect to the replica Aurora MySQL DB cluster as the master user.

1. Run the [mysql.rds\_start\_replication\_until (Aurora MySQL version 3)](mysql-stored-proc-replicating.md#mysql_rds_start_replication_until) stored procedure.

   The following example initiates replication and replicates changes until it reaches location `120` in the `mysql-bin-changelog.000777` binary log file. In a disaster recovery scenario, assume that location `120` is just before the disaster.

   ```
   call mysql.rds_start_replication_until(
     'mysql-bin-changelog.000777',
     120);
   ```

Replication stops automatically when the stop point is reached. The following RDS event is generated: `Replication has been stopped since the replica reached the stop point specified by the rds_start_replication_until stored procedure`.

If you use GTID-based replication, use the [mysql.rds\_start\_replication\_until\_gtid (Aurora MySQL version 3)](mysql-stored-proc-gtid.md#mysql_rds_start_replication_until_gtid) stored procedure instead of the [mysql.rds\_start\_replication\_until (Aurora MySQL version 3)](mysql-stored-proc-replicating.md#mysql_rds_start_replication_until) stored procedure. For more information about GTID-based replication, see [Using GTID-based replication](mysql-replication-gtid.md).
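As a sketch, the GTID-based procedure takes a GTID set instead of a binary log file name and position. The GTID set shown here is a placeholder:

```
CALL mysql.rds_start_replication_until_gtid('3E11FA47-71CA-11E1-9E33-C80AA9429562:1-5');
```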

## 7. Monitor your replica


 When you set up MySQL replication with an Aurora MySQL DB cluster, you must monitor failover events for the Aurora MySQL DB cluster when it is the replica target. If a failover occurs, then the DB cluster that is your replica target might be recreated on a new host with a different network address. For information on how to monitor failover events, see [Working with Amazon RDS event notification](USER_Events.md). 

 You can also monitor how far the replica target is behind the replication source by connecting to the replica target and running the `SHOW SLAVE STATUS` (Aurora MySQL version 2) or `SHOW REPLICA STATUS` (Aurora MySQL version 3) command. In the command output, the `Seconds_Behind_Master` field tells you how far the replica target is behind the source. 
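For example, connect to the replica target and run the following command. On Aurora MySQL version 3, the corresponding field in the output is reported as `Seconds_Behind_Source`.

```
mysql> SHOW REPLICA STATUS\G
```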

**Important**  
If you upgrade your DB cluster and specify a custom parameter group, make sure to manually reboot the cluster after the upgrade finishes. Doing so makes the cluster use your new custom parameter settings, and restarts binlog replication.

## Synchronizing passwords between replication source and target


 When you change user accounts and passwords on the replication source using SQL statements, those changes are replicated to the replication target automatically. 

 If you use the AWS Management Console, the AWS CLI, or the RDS API to change the master password on the replication source, those changes are not automatically replicated to the replication target. If you want to synchronize the master user and master password between the source and target systems, you must make the same change on the replication target yourself. 
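For example, if you reset the master password on the source through the console, you can apply the matching change on the replica target yourself with a statement such as the following. The user name, host, and password are placeholders:

```
ALTER USER 'admin'@'%' IDENTIFIED BY 'new_password';
```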

# Stopping binary log replication for Aurora MySQL
Stopping binlog replication

To stop binary log replication with a MySQL DB instance, external MySQL database, or another Aurora DB cluster, follow these steps, which are described in detail in this topic.

[1. Stop binary log replication on the replica target](#AuroraMySQL.Replication.MySQL.Stopping.StopReplication)

[2. Turn off binary logging on the replication source](#AuroraMySQL.Replication.MySQL.Stopping.DisableBinaryLogging)

## 1. Stop binary log replication on the replica target


Use the following instructions to stop binary log replication for your database engine.


|  Database engine  |  Instructions  | 
| --- | --- | 
|   Aurora MySQL   |  **To stop binary log replication on an Aurora MySQL DB cluster replica target** Connect to the Aurora DB cluster that is the replica target, and call the [mysql.rds\_stop\_replication](mysql-stored-proc-replicating.md#mysql_rds_stop_replication) procedure.  | 
|   RDS for MySQL   |  **To stop binary log replication on an Amazon RDS DB instance** Connect to the RDS DB instance that is the replica target and call the [mysql.rds\_stop\_replication](mysql-stored-proc-replicating.md#mysql_rds_stop_replication) procedure.  | 
|   MySQL (external)   |  **To stop binary log replication on an external MySQL database** Connect to the MySQL database and run the `STOP SLAVE` (version 5.7) or `STOP REPLICA` (version 8.0) command.  | 

## 2. Turn off binary logging on the replication source


Use the instructions in the following table to turn off binary logging on the replication source for your database engine.


| Database engine | Instructions | 
| --- | --- | 
|   Aurora MySQL   |  **To turn off binary logging on an Amazon Aurora DB cluster** [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.MySQL.Stopping.html)  | 
|   RDS for MySQL   |  **To turn off binary logging on an Amazon RDS DB instance** You can't turn off binary logging directly for an Amazon RDS DB instance, but you can turn it off by doing the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.MySQL.Stopping.html)  | 
|   MySQL (external)   |  **To turn off binary logging on an external MySQL database** Connect to the MySQL database and run the `STOP SLAVE` (version 5.7) or `STOP REPLICA` (version 8.0) command. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.MySQL.Stopping.html)  | 

# Scaling reads for your MySQL database with Amazon Aurora
Scaling MySQL reads

You can use Amazon Aurora with your MySQL DB instance to take advantage of the read scaling capabilities of Amazon Aurora and expand the read workload for your MySQL DB instance. To use Aurora to scale reads for your MySQL DB instance, create an Amazon Aurora MySQL DB cluster and make it a read replica of your MySQL DB instance. This applies to an RDS for MySQL DB instance, or a MySQL database running external to Amazon RDS.

For information on creating an Amazon Aurora DB cluster, see [Creating an Amazon Aurora DB cluster](Aurora.CreateInstance.md).

When you set up replication between your MySQL DB instance and your Amazon Aurora DB cluster, be sure to follow these guidelines:
+ Use the Amazon Aurora DB cluster endpoint address when you reference your Amazon Aurora MySQL DB cluster. If a failover occurs, then the Aurora Replica that is promoted to the primary instance for the Aurora MySQL DB cluster continues to use the DB cluster endpoint address.
+ Maintain the binlogs on your writer instance until you have verified that they have been applied to the Aurora Replica. This maintenance ensures that you can restore your writer instance in the event of a failure.

**Important**  
When using self-managed replication, you're responsible for monitoring and resolving any replication issues that may occur. For more information, see [Diagnosing and resolving lag between read replicas](CHAP_Troubleshooting.md#CHAP_Troubleshooting.MySQL.ReplicaLag).

**Note**  
The permissions required to start replication on an Aurora MySQL DB cluster are restricted and not available to your Amazon RDS master user. Therefore, you must use the [mysql.rds\_set\_external\_master (Aurora MySQL version 2)](mysql-stored-proc-replicating.md#mysql_rds_set_external_master) or [mysql.rds\_set\_external\_source (Aurora MySQL version 3)](mysql-stored-proc-replicating.md#mysql_rds_set_external_source) and [mysql.rds\_start\_replication](mysql-stored-proc-replicating.md#mysql_rds_start_replication) procedures to set up replication between your Aurora MySQL DB cluster and your MySQL DB instance.

## Start replication between an external source instance and an Aurora MySQL DB cluster


1.  Make the source MySQL DB instance read-only: 

   ```
   mysql> FLUSH TABLES WITH READ LOCK;
   mysql> SET GLOBAL read_only = ON;
   ```

1.  Run the `SHOW MASTER STATUS` command on the source MySQL DB instance to determine the binlog location. You receive output similar to the following example: 

   ```
   File                        Position
   ------------------------------------
    mysql-bin-changelog.000031      107
   ------------------------------------
   ```

1. Copy the database from the external MySQL DB instance to the Amazon Aurora MySQL DB cluster using `mysqldump`. For very large databases, you might want to use the procedure in [Importing data to an Amazon RDS for MySQL database with reduced downtime](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/mysql-importing-data-reduced-downtime.html) in the *Amazon Relational Database Service User Guide*.

   For Linux, macOS, or Unix:

   ```
   mysqldump \
       --databases <database_name> \
       --single-transaction \
       --compress \
       --order-by-primary \
       -u local_user \
       -plocal_password | mysql \
           --host aurora_cluster_endpoint_address \
           --port 3306 \
           -u RDS_user_name \
           -pRDS_password
   ```

   For Windows:

   ```
   mysqldump ^
       --databases <database_name> ^
       --single-transaction ^
       --compress ^
       --order-by-primary ^
       -u local_user ^
       -plocal_password | mysql ^
           --host aurora_cluster_endpoint_address ^
           --port 3306 ^
           -u RDS_user_name ^
           -pRDS_password
   ```
**Note**  
Make sure that there is not a space between the `-p` option and the entered password.

   Use the `--host`, `--user` (`-u`), `--port`, and `-p` options in the `mysql` command to specify the host name, user name, port, and password to connect to your Aurora DB cluster. The host name is the DNS name from the Amazon Aurora DB cluster endpoint, for example, `mydbcluster.cluster-123456789012.us-east-1.rds.amazonaws.com`. You can find the endpoint value in the cluster details in the Amazon RDS Management Console.

1. Make the source MySQL DB instance writeable again:

   ```
   mysql> SET GLOBAL read_only = OFF;
   mysql> UNLOCK TABLES;
   ```

   For more information on making backups for use with replication, see [http://dev.mysql.com/doc/refman/8.0/en/replication-solutions-backups-read-only.html](http://dev.mysql.com/doc/refman/8.0/en/replication-solutions-backups-read-only.html) in the MySQL documentation.

1. In the Amazon RDS Management Console, add the IP address of the server that hosts the source MySQL database to the VPC security group for the Amazon Aurora DB cluster. For more information on modifying a VPC security group, see [Security groups for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) in the *Amazon Virtual Private Cloud User Guide*.

   You might also need to configure your local network to permit connections from the IP address of your Amazon Aurora DB cluster, so that it can communicate with your source MySQL instance. To find the IP address of the Amazon Aurora DB cluster, use the `host` command.

   ```
   host aurora_endpoint_address
   ```

   The host name is the DNS name from the Amazon Aurora DB cluster endpoint.

1. Using the client of your choice, connect to the external MySQL instance and create a MySQL user to be used for replication. This account is used solely for replication and must be restricted to your domain to improve security. The following is an example.

   ```
   CREATE USER 'repl_user'@'example.com' IDENTIFIED BY 'password';
   ```

1. For the external MySQL instance, grant `REPLICATION CLIENT` and `REPLICATION SLAVE` privileges to your replication user. For example, to grant the `REPLICATION CLIENT` and `REPLICATION SLAVE` privileges on all databases for the '`repl_user`' user for your domain, issue the following command.

   ```
   GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'example.com';
   ```

1. Take a manual snapshot of the Aurora MySQL DB cluster to be the read replica before setting up replication. If you need to reestablish replication with the DB cluster as a read replica, you can restore the Aurora MySQL DB cluster from this snapshot instead of having to import the data from your MySQL DB instance into a new Aurora MySQL DB cluster.

1. Make the Amazon Aurora DB cluster the replica. Connect to the Amazon Aurora DB cluster as the master user and identify the source MySQL database as the replication source by using the [mysql.rds\_set\_external\_master (Aurora MySQL version 2)](mysql-stored-proc-replicating.md#mysql_rds_set_external_master) or [mysql.rds\_set\_external\_source (Aurora MySQL version 3)](mysql-stored-proc-replicating.md#mysql_rds_set_external_source) and [mysql.rds\_start\_replication](mysql-stored-proc-replicating.md#mysql_rds_start_replication) procedures.

   Use the binlog file name and position that you determined in Step 2. The following is an example.

   For Aurora MySQL version 2:

   ```
   CALL mysql.rds_set_external_master ('mymasterserver.example.com', 3306,
       'repl_user', 'password', 'mysql-bin-changelog.000031', 107, 0);
   ```

   For Aurora MySQL version 3:

   ```
   CALL mysql.rds_set_external_source ('mymasterserver.example.com', 3306,
       'repl_user', 'password', 'mysql-bin-changelog.000031', 107, 0);
   ```

1. On the Amazon Aurora DB cluster, call the [mysql.rds\_start\_replication](mysql-stored-proc-replicating.md#mysql_rds_start_replication) procedure to start replication.

   ```
   CALL mysql.rds_start_replication; 
   ```

After you have established replication between your source MySQL DB instance and your Amazon Aurora DB cluster, you can add Aurora Replicas to your Amazon Aurora DB cluster. You can then connect to the Aurora Replicas to read scale your data. For information on creating an Aurora Replica, see [Adding Aurora Replicas to a DB cluster](aurora-replicas-adding.md).

# Optimizing binary log replication for Aurora MySQL
Optimizing binlog replication

 Following, you can learn how to optimize binary log replication performance and troubleshoot related issues in Aurora MySQL. 

**Tip**  
 This discussion presumes that you are familiar with the MySQL binary log replication mechanism and how it works. For background information, see [Replication Implementation](https://dev.mysql.com/doc/refman/8.0/en/replication-implementation.html) in the MySQL documentation. 

## Multithreaded binary log replication


With multithreaded binary log replication, a SQL thread reads events from the relay log and queues them up for SQL worker threads to apply. The SQL worker threads are managed by the coordinator thread. The binary log events are applied in parallel when possible. The level of parallelism depends on factors including version, parameters, schema design, and workload characteristics.

Multithreaded binary log replication is supported in Aurora MySQL version 3, and in Aurora MySQL version 2.12.1 and higher. For a multithreaded replica to efficiently process binlog events in parallel, you must configure the source for multithreaded binary log replication, and the source must use a version that includes the parallelism information on its binary log files. 

When an Aurora MySQL DB instance is configured to use binary log replication, by default the replica instance uses single-threaded replication for Aurora MySQL versions lower than 3.04. To enable multithreaded replication, you update the `replica_parallel_workers` parameter to a value greater than `1` in your custom parameter group.

For Aurora MySQL version 3.04 and higher, replication is multithreaded by default, with `replica_parallel_workers` set to `4`. You can modify this parameter in your custom parameter group.
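For example, you can confirm the effective setting on the replica instance with the following statement. The value shown reflects whatever is configured in your parameter group:

```
mysql> SHOW VARIABLES LIKE 'replica_parallel_workers';
```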

To increase the resilience of your database against unexpected halts, we recommend that you enable GTID replication on the source and allow GTIDs on the replica. To allow GTID replication, set `gtid_mode` to `ON_PERMISSIVE` on both the source and replica. For more information about GTID-based replication, see [Using GTID-based replication](mysql-replication-gtid.md).

The following configuration options help you to fine-tune multithreaded replication. For usage information, see [Replication and Binary Logging Options and Variables](https://dev.mysql.com/doc/refman/8.0/en/replication-options.html) in the *MySQL Reference Manual*. For more information about multithreaded replication, see the MySQL Blog [https://dev.mysql.com/blog-archive/improving-the-parallel-applier-with-writeset-based-dependency-tracking/](https://dev.mysql.com/blog-archive/improving-the-parallel-applier-with-writeset-based-dependency-tracking/).

Optimal parameter values depend on several factors. For example, performance for binary log replication is influenced by your database workload characteristics and the DB instance class the replica is running on. Thus, we recommend that you thoroughly test all changes to these configuration parameters before applying new parameter settings to a production instance:
+ `binlog_format` – recommended value is `ROW`
+ `binlog_group_commit_sync_delay`
+ `binlog_group_commit_sync_no_delay_count`
+ `binlog_transaction_dependency_history_size`
+ `binlog_transaction_dependency_tracking` – recommended value is `WRITESET`
+ `replica_preserve_commit_order`
+ `replica_parallel_type` – recommended value is `LOGICAL_CLOCK`
+ `replica_parallel_workers`
+ `replica_pending_jobs_size_max`
+ `transaction_write_set_extraction` – recommended value is `XXHASH64`

Your schema and workload characteristics are factors that affect replication in parallel. The most common factors are the following.
+ Absence of primary keys – RDS can't establish writeset dependency for tables without primary keys. With `ROW` format, a single multi-row statement requires only one full table scan on the source, but results in one full table scan per modified row on the replica. The absence of primary keys significantly decreases replication throughput.
+ Presence of foreign keys – If foreign keys are present, Amazon RDS can't use writeset dependency for parallelism of the tables with the FK relationship.
+ Size of transactions – If a single transaction spans dozens or hundreds of megabytes or gigabytes, the coordinator thread and one of the worker threads might spend a long time processing only that transaction. During that time, all other worker threads might remain idle after they conclude processing their previous transactions.

In Aurora MySQL version 3.06 and higher, you can improve performance for binary log replicas when replicating transactions for large tables with more than one secondary index. This feature introduces a thread pool to apply secondary index changes in parallel on a binlog replica. The feature is controlled by the `aurora_binlog_replication_sec_index_parallel_workers` DB cluster parameter, which controls the total number of parallel threads available to apply the secondary index changes. The parameter is set to `0` (disabled) by default. Enabling this feature doesn't require an instance restart. To enable this feature, stop ongoing replication, set the desired number of parallel worker threads, and then start replication again.
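The enable sequence described above might look like the following sketch. The parameter value itself is changed in the DB cluster parameter group, not through SQL, and the worker count of `4` is only an example:

```
CALL mysql.rds_stop_replication;
-- In the DB cluster parameter group, set
-- aurora_binlog_replication_sec_index_parallel_workers to a value such as 4.
CALL mysql.rds_start_replication;
```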

## Optimizing binlog replication
<a name="binlog_boost"></a><a name="binlog_io_cache"></a>

 In Aurora MySQL 2.10 and higher, Aurora automatically applies an optimization known as the binlog I/O cache to binary log replication. By caching the most recently committed binlog events, this optimization is designed to improve binlog dump thread performance while limiting the impact to foreground transactions on the binlog source instance. 

**Note**  
 The memory used for this feature is independent of the MySQL `binlog_cache` setting.   
 This feature doesn't apply to Aurora DB instances that use the `db.t2` and `db.t3` instance classes. 

You don't need to adjust any configuration parameters to turn on this optimization. In particular, if you had adjusted the configuration parameter `aurora_binlog_replication_max_yield_seconds` to a nonzero value in earlier Aurora MySQL versions, set it back to zero for currently available versions.

The status variables `aurora_binlog_io_cache_reads` and `aurora_binlog_io_cache_read_requests` help you to monitor how often the data is read from the binlog I/O cache.
+  `aurora_binlog_io_cache_read_requests` shows the number of binlog I/O read requests from the cache. 
+  `aurora_binlog_io_cache_reads` shows the number of binlog I/O reads that retrieve information from the cache. 

 The following SQL query computes the percentage of binlog read requests that take advantage of the cached information. In this case, the closer the ratio is to 100, the better it is. 

```
mysql> SELECT
  (SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.GLOBAL_STATUS
    WHERE VARIABLE_NAME='aurora_binlog_io_cache_reads')
  / (SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.GLOBAL_STATUS
    WHERE VARIABLE_NAME='aurora_binlog_io_cache_read_requests')
  * 100
  as binlog_io_cache_hit_ratio;
+---------------------------+
| binlog_io_cache_hit_ratio |
+---------------------------+
|         99.99847949080622 |
+---------------------------+
```

 The binlog I/O cache feature also includes new metrics related to the binlog dump threads. *Dump threads* are the threads that are created when new binlog replicas are connected to the binlog source instance. 

The dump thread metrics are printed to the database log every 60 seconds with the prefix `[Dump thread metrics]`. The metrics include information for each binlog replica such as `Secondary_id`, `Secondary_uuid`, binlog file name, and the position that each replica is reading. The metrics also include `Bytes_behind_primary` representing the distance in bytes between replication source and replica. This metric measures the lag of the replica I/O thread. That figure is different from the lag of the replica SQL applier thread, which is represented by the `seconds_behind_master` metric on the binlog replica. You can determine whether binlog replicas are catching up to the source or falling behind by checking whether the distance decreases or increases. 

## In-memory relay log


In Aurora MySQL version 3.10 and higher, Aurora introduces an optimization known as in-memory relay log to improve replication throughput. This optimization enhances relay log I/O performance by caching all intermediate relay log content in memory. As a result, it reduces commit latency by minimizing storage I/O operations since the relay log content remains readily accessible in memory.

By default, the in-memory relay log feature is automatically enabled for Aurora-managed replication scenarios (including blue-green deployments, Aurora-Aurora replication, and cross-region replicas) when the replica meets any of these configurations:
+ Single-threaded replication mode (`replica_parallel_workers = 0`)
+ Multi-threaded replication with GTID mode enabled:
  + Auto-position enabled
  + GTID mode set to ON on the replica
+ File-based replication with `replica_preserve_commit_order = ON`

The in-memory relay log feature is supported on instance classes larger than t3.large, but is not available on Aurora Serverless instances. The relay log circular buffer has a fixed size of 128 MB. To monitor the memory consumption of this feature, you can run the following query:

```
SELECT event_name, current_alloc FROM sys.memory_global_by_current_bytes WHERE event_name = 'memory/sql/relaylog_io_cache';
```

The in-memory relay log feature is controlled by the `aurora_in_memory_relaylog` parameter, which can be set at either the DB cluster or instance level. You can enable or disable this feature dynamically without restarting your instance:

1. Stop the ongoing replication

1. Set `aurora_in_memory_relaylog` to `ON` (to enable) or `OFF` (to disable) in the parameter group

1. Restart replication

Example:

```
CALL mysql.rds_stop_replication;
-- In the cluster parameter group, set aurora_in_memory_relaylog
-- to ON (enable) or OFF (disable).
CALL mysql.rds_start_replication;
```

Even when `aurora_in_memory_relaylog` is set to `ON`, the in-memory relay log feature might still be disabled under certain conditions. To verify the feature's current status, you can use the following command:

```
SHOW GLOBAL STATUS LIKE 'Aurora_in_memory_relaylog_status';
```

If the feature is unexpectedly disabled, you can identify the reason by running:

```
SHOW GLOBAL STATUS LIKE 'Aurora_in_memory_relaylog_disabled_reason';
```

This command returns a message explaining why the feature is currently disabled.

# Setting up enhanced binlog for Aurora MySQL
Setting up enhanced binlog

Enhanced binlog reduces the compute performance overhead caused by turning on binlog, which can reach up to 50% in certain cases. With enhanced binlog, this overhead can be reduced to about 13%. To reduce overhead, enhanced binlog writes the binary and transactions logs to storage in parallel, which minimizes the data written at the transaction commit time.

Using enhanced binlog also improves database recovery time after restarts and failovers by up to 99% compared to community MySQL binlog. The enhanced binlog is compatible with existing binlog-based workloads, and you interact with it the same way you interact with the community MySQL binlog.

Enhanced binlog is available on Aurora MySQL version 3.03.1 and higher.

**Topics**
+ [

## Configuring enhanced binlog parameters
](#AuroraMySQL.Enhanced.binlog.enhancedbinlog.parameters)
+ [

## Other related parameters
](#AuroraMySQL.Enhanced.binlog.other.parameters)
+ [

## Differences between enhanced binlog and community MySQL binlog
](#AuroraMySQL.Enhanced.binlog.differences)
+ [

## Amazon CloudWatch metrics for enhanced binlog
](#AuroraMySQL.Enhanced.binlog.cloudwatch.metrics)
+ [

## Enhanced binlog limitations
](#AuroraMySQL.Enhanced.binlog.limitations)

## Configuring enhanced binlog parameters


You can switch between community MySQL binlog and enhanced binlog by turning on/off the enhanced binlog parameters. The existing binlog consumers can continue to read and consume the binlog files without any gaps in the binlog file sequence.

To turn on enhanced binlog, set the following parameters:


| Parameter | Default | Description | 
| --- | --- | --- | 
| binlog\_format | – | Set the binlog\_format parameter to the binary logging format of your choice to turn on enhanced binlog. Make sure the binlog\_format parameter isn't set to OFF. For more information, see [Configuring Aurora MySQL binary logging](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_LogAccess.MySQL.BinaryFormat.html). | 
| aurora\_enhanced\_binlog | 0 | Set the value of this parameter to 1 in the DB cluster parameter group associated with the Aurora MySQL cluster. When you change the value of this parameter, you must reboot the writer instance when the DBClusterParameterGroupStatus value is shown as pending-reboot. | 
| binlog\_backup | 1 |  Turn off this parameter to turn on enhanced binlog. To do so, set the value of this parameter to 0. | 
| binlog\_replication\_globaldb | 1 |  Turn off this parameter to turn on enhanced binlog. To do so, set the value of this parameter to 0. | 

**Important**  
You can turn off the `binlog_backup` and `binlog_replication_globaldb` parameters only when you use enhanced binlog.

To turn off enhanced binlog, set the following parameters:


| Parameter | Description | 
| --- | --- | 
| aurora\_enhanced\_binlog | Set the value of this parameter to 0 in the DB cluster parameter group associated with the Aurora MySQL cluster. Whenever you change the value of this parameter, you must reboot the writer instance when the DBClusterParameterGroupStatus value is shown as pending-reboot. | 
| binlog\_backup | Turn on this parameter when you turn off enhanced binlog. To do so, set the value of this parameter to 1. | 
| binlog\_replication\_globaldb | Turn on this parameter when you turn off enhanced binlog. To do so, set the value of this parameter to 1. | 

To check whether enhanced binlog is turned on, use the following command in the MySQL client:

```
mysql>show status like 'aurora_enhanced_binlog';
              
+------------------------+--------+
| Variable_name          | Value  |
+------------------------+--------+
| aurora_enhanced_binlog | ACTIVE |
+------------------------+--------+
1 row in set (0.00 sec)
```

When enhanced binlog is turned on, the output shows `ACTIVE` for `aurora_enhanced_binlog`.

## Other related parameters


When you turn on the enhanced binlog, the following parameters are affected:
+ The `max_binlog_size` parameter is visible but not modifiable. Its default value of `134217728` is automatically adjusted to `268435456` when enhanced binlog is turned on.
+ Unlike in community MySQL binlog, the `binlog_checksum` parameter doesn't act as a dynamic parameter when enhanced binlog is turned on. For a change to this parameter to take effect, you must manually reboot the DB cluster even when the `ApplyMethod` is `immediate`.
+ The value you set on the `binlog_order_commits` parameter has no effect on the order of the commits when enhanced binlog is turned on. The commits are always ordered without any further performance implications.
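For example, you can confirm the adjusted `max_binlog_size` value on an instance where enhanced binlog is active. Based on the behavior described above, the expected value is `268435456`:

```
mysql> SHOW VARIABLES LIKE 'max_binlog_size';
```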

## Differences between enhanced binlog and community MySQL binlog


Enhanced binlog interacts differently with clones, backups, and Aurora global databases than community MySQL binlog does. We recommend that you understand the following differences before using enhanced binlog.
+ Enhanced binlog files from the source DB cluster aren't available on a cloned DB cluster.
+ Enhanced binlog files aren't included in Aurora backups. Therefore, enhanced binlog files from the source DB cluster aren't available after you restore a DB cluster, regardless of the retention period set on it.
+ When enhanced binlog is used with an Aurora global database, the enhanced binlog files of the primary DB cluster aren't replicated to the DB clusters in the secondary AWS Regions.

**Examples**  
The following examples illustrate the differences between enhanced binlog and community MySQL binlog.

**On a restored or cloned DB cluster**

When enhanced binlog is turned on, the historical binlog files aren't available in the restored or cloned DB cluster. After a restore or clone operation, if binlog is turned on, the new DB cluster starts writing its own sequence of binlog files, starting from 1 (mysql-bin-changelog.000001).

To turn on enhanced binlog after a restore or clone operation, set the required DB cluster parameters on the restored or cloned DB cluster. For more information, see [Configuring enhanced binlog parameters](#AuroraMySQL.Enhanced.binlog.enhancedbinlog.parameters).

**Example: Clone or restore operation performed when enhanced binlog is turned on**  
Source DB cluster:  

```
mysql> show binary logs;
                      
+----------------------------+-----------+-----------+
| Log_name                   | File_size | Encrypted |
+----------------------------+-----------+-----------+
| mysql-bin-changelog.000001 |       156 | No        |
| mysql-bin-changelog.000002 |       156 | No        |
| mysql-bin-changelog.000003 |       156 | No        |
| mysql-bin-changelog.000004 |       156 | No        | --> Enhanced Binlog turned on
| mysql-bin-changelog.000005 |       156 | No        | --> Enhanced Binlog turned on
| mysql-bin-changelog.000006 |       156 | No        | --> Enhanced Binlog turned on
+----------------------------+-----------+-----------+
6 rows in set (0.00 sec)
```
On a restored or cloned DB cluster, binlog files aren't backed up when enhanced binlog is turned on. To avoid discontinuity in the binlog data, the binlog files written before enhanced binlog was turned on are also not available.  

```
mysql>show binary logs;
                      
+----------------------------+-----------+-----------+
| Log_name                   | File_size | Encrypted |
+----------------------------+-----------+-----------+
| mysql-bin-changelog.000001 |       156 | No        | --> New sequence of Binlog files
+----------------------------+-----------+-----------+ 
1 row in set (0.00 sec)
```

**Example: Clone or restore operation performed when enhanced binlog is turned off**  
Source DB cluster:  

```
mysql>show binary logs;
                                                
+----------------------------+-----------+-----------+
| Log_name                   | File_size | Encrypted |
+----------------------------+-----------+-----------+
| mysql-bin-changelog.000001 |       156 | No        |
| mysql-bin-changelog.000002 |       156 | No        | --> Enhanced Binlog enabled
| mysql-bin-changelog.000003 |       156 | No        | --> Enhanced Binlog enabled
| mysql-bin-changelog.000004 |       156 | No        | 
| mysql-bin-changelog.000005 |       156 | No        | 
| mysql-bin-changelog.000006 |       156 | No        |
+----------------------------+-----------+-----------+
6 rows in set (0.00 sec)
```
Enhanced binlog is turned off after `mysql-bin-changelog.000003`. On a restored or cloned DB cluster, only the binlog files written after enhanced binlog was turned off are available.  

```
mysql>show binary logs;
                      
+----------------------------+-----------+-----------+
| Log_name                   | File_size | Encrypted |
+----------------------------+-----------+-----------+
| mysql-bin-changelog.000004 |       156 | No        | 
| mysql-bin-changelog.000005 |       156 | No        | 
| mysql-bin-changelog.000006 |       156 | No        |
+----------------------------+-----------+-----------+
1 row in set (0.00 sec)
```

**On an Amazon Aurora global database**

On an Amazon Aurora global database, the binlog data of the primary DB cluster isn't replicated to the secondary DB clusters. After a cross-Region failover process, the binlog data isn't available in the newly promoted primary DB cluster. If binlog is turned on, the newly promoted DB cluster starts its own sequence of binlog files, starting from 1 (mysql-bin-changelog.000001).

To turn on enhanced binlog after failover, you must set the required DB cluster parameters on the secondary DB cluster. For more information, see [Configuring enhanced binlog parameters](#AuroraMySQL.Enhanced.binlog.enhancedbinlog.parameters).

**Example: Global database failover operation is performed when enhanced binlog is turned on**  
Old primary DB Cluster (before failover):  

```
mysql>show binary logs;
                  
+----------------------------+-----------+-----------+
| Log_name                   | File_size | Encrypted |
+----------------------------+-----------+-----------+
| mysql-bin-changelog.000001 |       156 | No        |
| mysql-bin-changelog.000002 |       156 | No        |
| mysql-bin-changelog.000003 |       156 | No        |
| mysql-bin-changelog.000004 |       156 | No        | --> Enhanced Binlog enabled
| mysql-bin-changelog.000005 |       156 | No        | --> Enhanced Binlog enabled
| mysql-bin-changelog.000006 |       156 | No        | --> Enhanced Binlog enabled
+----------------------------+-----------+-----------+
6 rows in set (0.00 sec)
```
New primary DB cluster (after failover):  
Binlog files aren't replicated to secondary AWS Regions when enhanced binlog is turned on. To avoid discontinuity in the binlog data, the binlog files written before enhanced binlog was turned on aren't available.  

```
mysql>show binary logs;
                      
+----------------------------+-----------+-----------+
| Log_name                   | File_size | Encrypted |
+----------------------------+-----------+-----------+
| mysql-bin-changelog.000001 |       156 | No        | --> Fresh sequence of Binlog files
+----------------------------+-----------+-----------+ 
1 row in set (0.00 sec)
```

**Example: Global database failover operation is performed when enhanced binlog is turned off**  
Old primary DB cluster (before failover):  

```
mysql>show binary logs;
                  
+----------------------------+-----------+-----------+
| Log_name                   | File_size | Encrypted |
+----------------------------+-----------+-----------+
| mysql-bin-changelog.000001 |       156 | No        |
| mysql-bin-changelog.000002 |       156 | No        | --> Enhanced Binlog enabled
| mysql-bin-changelog.000003 |       156 | No        | --> Enhanced Binlog enabled
| mysql-bin-changelog.000004 |       156 | No        | 
| mysql-bin-changelog.000005 |       156 | No        | 
| mysql-bin-changelog.000006 |       156 | No        |
+----------------------------+-----------+-----------+
6 rows in set (0.00 sec)
```
**New primary DB cluster (after failover):**  
Enhanced binlog is turned off after `mysql-bin-changelog.000003`. Binlog files written after enhanced binlog was turned off are replicated and available in the newly promoted DB cluster.  

```
mysql>show binary logs;
                  
+----------------------------+-----------+-----------+
| Log_name                   | File_size | Encrypted |
+----------------------------+-----------+-----------+
| mysql-bin-changelog.000004 |       156 | No        | 
| mysql-bin-changelog.000005 |       156 | No        | 
| mysql-bin-changelog.000006 |       156 | No        |
+----------------------------+-----------+-----------+
3 rows in set (0.00 sec)
```

## Amazon CloudWatch metrics for enhanced binlog


The following Amazon CloudWatch metrics are published only when enhanced binlog is turned on.


| CloudWatch metric | Description | Units | 
| --- | --- | --- | 
| ChangeLogBytesUsed | The amount of storage used by the enhanced binlog. | Bytes | 
| ChangeLogReadIOPs | The number of read I/O operations performed in the enhanced binlog within a 5-minute interval. | Count per 5 minutes | 
| ChangeLogWriteIOPs | The number of write disk I/O operations performed in the enhanced binlog within a 5-minute interval. | Count per 5 minutes | 

## Enhanced binlog limitations


The following limitations apply to Amazon Aurora DB clusters when enhanced binlog is turned on.
+ Enhanced binlog is supported only on Aurora MySQL version 3.03.1 and higher.
+ The enhanced binlog files written on the primary DB cluster aren't copied to cloned or restored DB clusters.
+ When enhanced binlog is used with an Amazon Aurora global database, the enhanced binlog files of the primary DB cluster aren't replicated to the secondary DB clusters. Therefore, after the failover process, the historical binlog data isn't available in the new primary DB cluster.
+ The following binlog configuration parameters are ignored:
  + `binlog_group_commit_sync_delay`
  + `binlog_group_commit_sync_no_delay_count`
  + `binlog_max_flush_queue_time`
+ You can't drop or rename a corrupted table in a database. To drop these tables, contact AWS Support.
+ The binlog I/O cache is disabled when enhanced binlog is turned on. For more information, see [Optimizing binary log replication for Aurora MySQL](binlog-optimization.md).
**Note**  
Enhanced binlog provides read performance improvements similar to the binlog I/O cache, along with better write performance. 
+ The backtrack feature isn't supported. Enhanced binlog can't be turned on for a DB cluster under the following conditions:
  + A DB cluster with the backtrack feature currently enabled.
  + A DB cluster where the backtrack feature was previously enabled, but is now disabled.
  + A DB cluster restored from a source DB cluster or a snapshot with the backtrack feature enabled.

# Using GTID-based replication
GTID-based replication

The following content explains how to use global transaction identifiers (GTIDs) with binary log (binlog) replication between an Aurora MySQL cluster and an external source. 

**Note**  
For Aurora, you can use this feature only with Aurora MySQL clusters that use binlog replication to or from an external MySQL database. The other database might be an Amazon RDS MySQL instance, an on-premises MySQL database, or an Aurora DB cluster in a different AWS Region. To learn how to configure that kind of replication, see [Replication between Aurora and MySQL or between Aurora and another Aurora DB cluster (binary log replication)](AuroraMySQL.Replication.MySQL.md). 

If you use binlog replication and aren't familiar with GTID-based replication with MySQL, see [Replication with global transaction identifiers](https://dev.mysql.com/doc/refman/5.7/en/replication-gtids.html) in the MySQL documentation.

GTID-based replication is supported for Aurora MySQL versions 2 and 3.

**Topics**
+ [

## Overview of global transaction identifiers (GTIDs)
](#mysql-replication-gtid.overview)
+ [

## Parameters for GTID-based replication
](#mysql-replication-gtid.parameters)
+ [

# Enabling GTID-based replication for an Aurora MySQL cluster
](mysql-replication-gtid.configuring-aurora.md)
+ [

# Disabling GTID-based replication for an Aurora MySQL DB cluster
](mysql-replication-gtid.disabling.md)

## Overview of global transaction identifiers (GTIDs)


*Global transaction identifiers (GTIDs)* are unique identifiers generated for committed MySQL transactions. You can use GTIDs to make binlog replication simpler and easier to troubleshoot.

**Note**  
When Aurora synchronizes data among the DB instances in a cluster, that replication mechanism doesn't involve the binary log (binlog). For Aurora MySQL, GTID-based replication only applies when you also use binlog replication to replicate into or out of an Aurora MySQL DB cluster from an external MySQL-compatible database. 

MySQL uses two different types of transactions for binlog replication:
+ *GTID transactions* – Transactions that are identified by a GTID.
+ *Anonymous transactions* – Transactions that don't have a GTID assigned.

In a replication configuration, GTIDs are unique across all DB instances. GTIDs simplify replication configuration because when you use them, you don't have to refer to log file positions. GTIDs also make it easier to track replicated transactions and determine whether the source instance and replicas are consistent.
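As a quick illustration, you can inspect a server's GTID state with standard MySQL statements. The GTID value shown here is illustrative; each server generates its own source UUID:

```
mysql> SELECT @@global.gtid_executed;

+------------------------------------------+
| @@global.gtid_executed                   |
+------------------------------------------+
| 3e11fa47-71ca-11e1-9e33-c80aa9429562:1-5 |
+------------------------------------------+
1 row in set (0.00 sec)
```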

 You typically use GTID-based replication with Aurora when replicating from an external MySQL-compatible database into an Aurora cluster. You can set up this replication configuration as part of a migration from an on-premises or Amazon RDS database into Aurora MySQL. If the external database already uses GTIDs, enabling GTID-based replication for the Aurora cluster simplifies the replication process. 

 You configure GTID-based replication for an Aurora MySQL cluster by first setting the relevant configuration parameters in a DB cluster parameter group. You then associate that parameter group with the cluster. 

## Parameters for GTID-based replication


Use the following parameters to configure GTID-based replication.


| Parameter | Valid values | Description | 
| --- | --- | --- | 
|  `gtid_mode`  |  `OFF`, `OFF_PERMISSIVE`, `ON_PERMISSIVE`, `ON`  |  `OFF` specifies that new transactions are anonymous transactions (that is, don't have GTIDs), and a transaction must be anonymous to be replicated.  `OFF_PERMISSIVE` specifies that new transactions are anonymous transactions, but all transactions can be replicated.  `ON_PERMISSIVE` specifies that new transactions are GTID transactions, but all transactions can be replicated.  `ON` specifies that new transactions are GTID transactions, and a transaction must be a GTID transaction to be replicated.   | 
|  `enforce_gtid_consistency`  |  `OFF`, `ON`, `WARN`  |  `OFF` allows transactions to violate GTID consistency.  `ON` prevents transactions from violating GTID consistency.  `WARN` allows transactions to violate GTID consistency but generates a warning when a violation occurs.   | 

**Note**  
In the AWS Management Console, the `gtid_mode` parameter appears as `gtid-mode`.
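To check which values are currently in effect on a DB instance, you can query both parameters from the MySQL client. The output shown is illustrative:

```
mysql> SELECT @@gtid_mode, @@enforce_gtid_consistency;

+-------------+----------------------------+
| @@gtid_mode | @@enforce_gtid_consistency |
+-------------+----------------------------+
| OFF         | OFF                        |
+-------------+----------------------------+
1 row in set (0.00 sec)
```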

For GTID-based replication, use these settings for the DB cluster parameter group for your Aurora MySQL DB cluster: 
+ `ON` and `ON_PERMISSIVE` apply only to outgoing replication from an Aurora MySQL cluster. Both of these values cause your Aurora DB cluster to use GTIDs for transactions that are replicated to an external database. `ON` requires that the external database also use GTID-based replication. `ON_PERMISSIVE` makes GTID-based replication optional on the external database. 
+ `OFF_PERMISSIVE`, if set, means that your Aurora DB cluster can accept incoming replication from an external database. It can do this whether the external database uses GTID-based replication or not.
+  `OFF`, if set, means that your Aurora DB cluster only accepts incoming replication from external databases that don't use GTID-based replication. 

**Tip**  
Incoming replication is the most common binlog replication scenario for Aurora MySQL clusters. For incoming replication, we recommend that you set the GTID mode to `OFF_PERMISSIVE`. That setting allows incoming replication from external databases regardless of the GTID settings at the replication source. 

For more information about parameter groups, see [Parameter groups for Amazon Aurora](USER_WorkingWithParamGroups.md).

# Enabling GTID-based replication for an Aurora MySQL cluster
Enabling GTID-based replication<a name="gtid"></a>

When GTID-based replication is enabled for an Aurora MySQL DB cluster, the GTID settings apply to both inbound and outbound binlog replication. 

**To enable GTID-based replication for an Aurora MySQL cluster**

1. Create or edit a DB cluster parameter group using the following parameter settings:
   + `gtid_mode` – `ON` or `ON_PERMISSIVE`
   + `enforce_gtid_consistency` – `ON`

1. Associate the DB cluster parameter group with the Aurora MySQL cluster. To do so, follow the procedures in [Parameter groups for Amazon Aurora](USER_WorkingWithParamGroups.md).

1. (Optional) Specify how to assign GTIDs to transactions that don't include them. To do so, call the stored procedure in [mysql.rds\_assign\_gtids\_to\_anonymous\_transactions (Aurora MySQL version 3)](mysql-stored-proc-gtid.md#mysql_assign_gtids_to_anonymous_transactions).

# Disabling GTID-based replication for an Aurora MySQL DB cluster
Disabling GTID-based replication

You can disable GTID-based replication for an Aurora MySQL DB cluster. Doing so means that the Aurora cluster can't perform inbound or outbound binlog replication with external databases that use GTID-based replication. 

**Note**  
In the following procedure, *read replica* means the replication target in an Aurora configuration with binlog replication to or from an external database. It doesn't mean the read-only Aurora Replica DB instances. For example, when an Aurora cluster accepts incoming replication from an external source, the Aurora primary instance acts as the read replica for binlog replication. 

For more details about the stored procedures mentioned in this section, see [Aurora MySQL stored procedure reference](AuroraMySQL.Reference.StoredProcs.md). 

**To disable GTID-based replication for an Aurora MySQL DB cluster**

1. On the Aurora replicas, run the following procedure:

   For version 3

   ```
   CALL mysql.rds_set_source_auto_position(0);
   ```

   For version 2

   ```
   CALL mysql.rds_set_master_auto_position(0);
   ```

1. Reset the `gtid_mode` to `ON_PERMISSIVE`.

   1. Make sure that the DB cluster parameter group associated with the Aurora MySQL cluster has `gtid_mode` set to `ON_PERMISSIVE`.

      For more information about setting configuration parameters using parameter groups, see [Parameter groups for Amazon Aurora](USER_WorkingWithParamGroups.md).

   1. Restart the Aurora MySQL DB cluster.

1. Reset the `gtid_mode` to `OFF_PERMISSIVE`.

   1. Make sure that the DB cluster parameter group associated with the Aurora MySQL cluster has `gtid_mode` set to `OFF_PERMISSIVE`.

   1. Restart the Aurora MySQL DB cluster.

1. Wait for all of the GTID transactions to be applied on the Aurora primary instance. To check that these are applied, do the following steps:

   1. On the Aurora primary instance, run the `SHOW MASTER STATUS` command.

      Your output should be similar to the following output.

      ```
      File                        Position
      ------------------------------------
      mysql-bin-changelog.000031      107
      ------------------------------------
      ```

      Note the file and position in your output.

   1. On each read replica, use the file and position information from its source instance in the previous step to run the following query:

      For version 3

      ```
      SELECT SOURCE_POS_WAIT('file', position);
      ```

      For version 2

      ```
      SELECT MASTER_POS_WAIT('file', position);
      ```

      For example, if the file name is `mysql-bin-changelog.000031` and the position is `107`, run the following statement:

      For version 3

      ```
      SELECT SOURCE_POS_WAIT('mysql-bin-changelog.000031', 107);
      ```

      For version 2

      ```
      SELECT MASTER_POS_WAIT('mysql-bin-changelog.000031', 107);
      ```

1. Reset the GTID parameters to disable GTID-based replication.

   1. Make sure that the DB cluster parameter group associated with the Aurora MySQL cluster has the following parameter settings:
      + `gtid_mode` – `OFF`
      + `enforce_gtid_consistency` – `OFF`

   1. Restart the Aurora MySQL DB cluster.

# Using local write forwarding in an Amazon Aurora MySQL DB cluster
Local write forwarding

*Local (in-cluster) write forwarding* allows your applications to issue read/write transactions directly on an Aurora Replica. These transactions are then forwarded to the writer DB instance to be committed. You can use local write forwarding when your applications require *read-after-write consistency*, which is the ability to read the latest write in a transaction.

Read replicas receive updates asynchronously from the writer. Without write forwarding, you have to direct any reads that require read-after-write consistency to the writer DB instance. Otherwise, you have to develop complex custom application logic to take advantage of multiple read replicas for scalability. Your applications must fully split read and write traffic, maintaining two sets of database connections to send the traffic to the correct endpoint. This development overhead complicates application design when the queries are part of a single logical session, or transaction, within the application. Moreover, because replication lag can differ among read replicas, it's difficult to achieve global read consistency across all instances in the database.

Write forwarding avoids the need to split those transactions or send them exclusively to the writer, which simplifies application development. This capability makes it easier to achieve read scaling for workloads that need to read the latest write in a transaction and aren't sensitive to write latency.

Local write forwarding is different from global write forwarding, which forwards writes from a secondary DB cluster to the primary DB cluster in an Aurora global database. You can use local write forwarding in a DB cluster that is part of an Aurora global database. For more information, see [Using write forwarding in an Amazon Aurora global database](aurora-global-database-write-forwarding.md).

Local write forwarding requires Aurora MySQL version 3.04 or higher.

**Topics**
+ [

# Enabling local write forwarding
](aurora-mysql-write-forwarding-enabling.md)
+ [

## Checking if a DB cluster has write forwarding enabled
](#aurora-mysql-write-forwarding-describing)
+ [

## Application and SQL compatibility with write forwarding
](#aurora-mysql-write-forwarding-compatibility)
+ [

## Isolation levels for write forwarding
](#aurora-mysql-write-forwarding-isolation)
+ [

# Read consistency for write forwarding
](aurora-mysql-write-forwarding-consistency.md)
+ [

## Running multipart statements with write forwarding
](#aurora-mysql-write-forwarding-multipart)
+ [

## Transactions with write forwarding
](#aurora-mysql-write-forwarding-txns)
+ [

## Configuration parameters for write forwarding
](#aurora-mysql-write-forwarding-params)
+ [

# Amazon CloudWatch metrics and Aurora MySQL status variables for write forwarding
](aurora-mysql-write-forwarding-cloudwatch.md)
+ [

## Identifying forwarded transactions and queries
](#aurora-write-forwarding-processlist)

# Enabling local write forwarding


By default, local write forwarding isn't enabled for Aurora MySQL DB clusters. You enable local write forwarding at the cluster level, not at the instance level.

**Important**  
You can also enable local write forwarding for cross-Region read replicas that use binary logging, but write operations aren't forwarded to the source AWS Region. They're forwarded to the writer DB instance of the binlog read replica cluster.  
Use this method only if you have a use case for writing to the binlog read replica in the secondary AWS Region. Otherwise, you might end up with a "split-brain" scenario where replicated datasets are inconsistent with each other.   
We recommend that you use global write forwarding with global databases, rather than local write forwarding on cross-Region read replicas, unless absolutely necessary. For more information, see [Using write forwarding in an Amazon Aurora global database](aurora-global-database-write-forwarding.md).

## Console


Using the AWS Management Console, select the **Turn on local write forwarding** check box under **Read replica write forwarding** when you create or modify a DB cluster.

## AWS CLI


To enable write forwarding with the AWS CLI, use the `--enable-local-write-forwarding` option. This option works when you create a new DB cluster using the `create-db-cluster` command. It also works when you modify an existing DB cluster using the `modify-db-cluster` command. You can disable write forwarding by using the `--no-enable-local-write-forwarding` option with these same CLI commands.

The following example creates an Aurora MySQL DB cluster with write forwarding enabled.

```
aws rds create-db-cluster \
  --db-cluster-identifier write-forwarding-test-cluster \
  --enable-local-write-forwarding \
  --engine aurora-mysql \
  --engine-version 8.0.mysql_aurora.3.04.0 \
  --master-username myuser \
  --master-user-password mypassword \
  --backup-retention-period 1
```

You then create writer and reader DB instances so that you can use write forwarding. For more information, see [Creating an Amazon Aurora DB cluster](Aurora.CreateInstance.md).

## RDS API


To enable write forwarding using the Amazon RDS API, set the `EnableLocalWriteForwarding` parameter to `true`. This parameter works when you create a new DB cluster using the `CreateDBCluster` operation. It also works when you modify an existing DB cluster using the `ModifyDBCluster` operation. You can disable write forwarding by setting the `EnableLocalWriteForwarding` parameter to `false`.

## Enabling write forwarding for database sessions


The `aurora_replica_read_consistency` parameter is a DB parameter and DB cluster parameter that enables write forwarding. You can specify `EVENTUAL`, `SESSION`, or `GLOBAL` for the read consistency level. To learn more about consistency levels, see [Read consistency for write forwarding](aurora-mysql-write-forwarding-consistency.md). 

The following rules apply to this parameter:
+ The default value is `''` (empty).
+ Write forwarding is available only if you set `aurora_replica_read_consistency` to `EVENTUAL`, `SESSION`, or `GLOBAL`. This parameter is relevant only on reader instances of DB clusters that have write forwarding enabled.
+ You can't set this parameter (when it's empty) or unset it (when it's already set) inside a multistatement transaction. You can change it from one valid value to another during such a transaction, but we don't recommend doing so.
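For example, assuming you're connected to a reader instance in a cluster that has local write forwarding enabled, a session might turn on forwarding and then rely on read-after-write consistency like this (the schema and table names are hypothetical):

```
mysql> SET SESSION aurora_replica_read_consistency = 'SESSION';

mysql> INSERT INTO myschema.orders (id, status) VALUES (101, 'new');  -- forwarded to the writer

mysql> SELECT status FROM myschema.orders WHERE id = 101;  -- waits until the forwarded write is visible on this reader
```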

## Checking if a DB cluster has write forwarding enabled


To determine that you can use write forwarding in a DB cluster, confirm that the cluster has the attribute `LocalWriteForwardingStatus` set to `enabled`.

In the AWS Management Console, on the **Configuration** tab of the details page for the cluster, you see the status **Enabled** for **Local read replica write forwarding**.

To see the status of the write forwarding setting for all of your clusters, run the following AWS CLI command.

**Example**  

```
aws rds describe-db-clusters \
--query '*[].{DBClusterIdentifier:DBClusterIdentifier,LocalWriteForwardingStatus:LocalWriteForwardingStatus}'

[
    {
        "LocalWriteForwardingStatus": "enabled",
        "DBClusterIdentifier": "write-forwarding-test-cluster-1"
    },
    {
        "LocalWriteForwardingStatus": "disabled",
        "DBClusterIdentifier": "write-forwarding-test-cluster-2"
    },
    {
        "LocalWriteForwardingStatus": "requested",
        "DBClusterIdentifier": "test-global-cluster-2"
    },
    {
        "LocalWriteForwardingStatus": "null",
        "DBClusterIdentifier": "aurora-mysql-v2-cluster"
    }
]
```

A DB cluster can have the following values for `LocalWriteForwardingStatus`:
+ `disabled` – Write forwarding is disabled.
+ `disabling` – Write forwarding is in the process of being disabled.
+ `enabled` – Write forwarding is enabled.
+ `enabling` – Write forwarding is in the process of being enabled.
+ `null` – Write forwarding isn't available for this DB cluster.
+ `requested` – Write forwarding has been requested, but is not yet active.

## Application and SQL compatibility with write forwarding


You can use the following kinds of SQL statements with write forwarding:
+ Data manipulation language (DML) statements, such as `INSERT`, `DELETE`, and `UPDATE`. There are some restrictions on the properties of these statements that you can use with write forwarding, as described later in this section.
+ `SELECT ... LOCK IN SHARE MODE` and `SELECT FOR UPDATE` statements.
+ `PREPARE` and `EXECUTE` statements.

Certain statements aren't allowed or can produce stale results when you use them in a DB cluster with write forwarding. In addition, user-defined functions and user-defined procedures aren't supported. Thus, the `EnableLocalWriteForwarding` setting is disabled by default for DB clusters. Before enabling it, make sure that your application code isn't affected by any of these restrictions.

The following restrictions apply to the SQL statements that you use with write forwarding. In some cases, you can use the statements on DB clusters with write forwarding enabled. This approach works if write forwarding isn't enabled within the session by the `aurora_replica_read_consistency` configuration parameter. If you try to use a statement when it's not allowed because of write forwarding, you see an error message similar to the following:

```
ERROR 1235 (42000): This version of MySQL doesn't yet support 'operation with write forwarding'.
```

**Data definition language (DDL)**  
Connect to the writer DB instance to run DDL statements. You can't run them from reader DB instances.

**Updating a permanent table using data from a temporary table**  
You can use temporary tables on DB clusters with write forwarding enabled. However, you can't use a DML statement to modify a permanent table if the statement refers to a temporary table. For example, you can't use an `INSERT ... SELECT` statement that takes the data from a temporary table.
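For instance, a pattern like the following isn't allowed on a reader when write forwarding is enabled in the session. The table names are hypothetical:

```
CREATE TEMPORARY TABLE temp_orders (id INT);
INSERT INTO temp_orders VALUES (1), (2);

-- Not allowed: DML on a permanent table that refers to a temporary table
INSERT INTO orders (id) SELECT id FROM temp_orders;
```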

**XA transactions**  
You can't use the following statements on a DB cluster when write forwarding is enabled within the session. You can use these statements on DB clusters that don't have write forwarding enabled, or within sessions where the `aurora_replica_read_consistency` setting is empty. Before enabling write forwarding within a session, check if your code uses these statements.  

```
XA {START|BEGIN} xid [JOIN|RESUME]
XA END xid [SUSPEND [FOR MIGRATE]]
XA PREPARE xid
XA COMMIT xid [ONE PHASE]
XA ROLLBACK xid
XA RECOVER [CONVERT XID]
```

**LOAD statements for permanent tables**  
You can't use the following statements on a DB cluster with write forwarding enabled.  

```
LOAD DATA INFILE 'data.txt' INTO TABLE t1;
LOAD XML LOCAL INFILE 'test.xml' INTO TABLE t1;
```

**Plugin statements**  
You can't use the following statements on a DB cluster with write forwarding enabled.  

```
INSTALL PLUGIN example SONAME 'ha_example.so';
UNINSTALL PLUGIN example;
```

**SAVEPOINT statements**  
You can't use the following statements on a DB cluster when write forwarding is enabled within the session. You can use these statements on DB clusters that don't have write forwarding enabled, or within sessions where the `aurora_replica_read_consistency` setting is blank. Check if your code uses these statements before enabling write forwarding within a session.  

```
SAVEPOINT t1_save;
ROLLBACK TO SAVEPOINT t1_save;
RELEASE SAVEPOINT t1_save;
```

## Isolation levels for write forwarding


In sessions that use write forwarding, you can only use the `REPEATABLE READ` isolation level. Although you can also use the `READ COMMITTED` isolation level with Aurora Replicas, that isolation level doesn't work with write forwarding. For information about the `REPEATABLE READ` and `READ COMMITTED` isolation levels, see [Aurora MySQL isolation levels](AuroraMySQL.Reference.IsolationLevels.md).

# Read consistency for write forwarding
Read consistency

You can control the degree of read consistency on a DB cluster. The read consistency level determines how long the DB cluster waits before each read operation to ensure that some or all changes are replicated from the writer. You can adjust the read consistency level to make sure that all forwarded write operations from your session are visible in the DB cluster before any subsequent queries. You can also use this setting to make sure that queries on the DB cluster always see the most current updates from the writer. This setting also applies to queries submitted by other sessions or other clusters. To specify this type of behavior for your application, choose a value for the `aurora_replica_read_consistency` DB parameter or DB cluster parameter.

**Important**  
Always set the `aurora_replica_read_consistency` DB parameter or DB cluster parameter when you want to forward writes. If you don't, then Aurora doesn't forward writes. This parameter has an empty value by default, so choose a specific value when you use this parameter. The `aurora_replica_read_consistency` parameter only affects DB clusters or instances that have write forwarding enabled.

As you increase the consistency level, your application spends more time waiting for changes to be propagated between DB instances. You can choose the balance between fast response time and making sure that changes made in other DB instances are fully available before your queries run.

You can specify the following values for the `aurora_replica_read_consistency` parameter:
+ `EVENTUAL` – Results of write operations in the same session aren't visible until the write operation is performed on the writer DB instance. The query doesn't wait for the updated results to be available. Thus it might retrieve the older data or the updated data, depending on the timing of the statements and the amount of replication lag. This is the same consistency as for Aurora MySQL DB clusters that don't use write forwarding.
+ `SESSION` – All queries that use write forwarding see the results of all changes made in that session. The changes are visible regardless of whether the transaction is committed. If necessary, the query waits for the results of forwarded write operations to be replicated.
+ `GLOBAL` – A session sees all committed changes across all sessions and instances in the DB cluster. Each query might wait for a period that varies depending on the amount of session lag. The query proceeds when the DB cluster is up-to-date with all committed data from the writer, as of the time that the query began.

For information about the configuration parameters involved in write forwarding, see [Configuration parameters for write forwarding](aurora-mysql-write-forwarding.md#aurora-mysql-write-forwarding-params).

**Note**  
You can also use `aurora_replica_read_consistency` as a session variable, for example:  

```
mysql> set aurora_replica_read_consistency = 'session';
```

## Examples of using write forwarding


The following examples show the effects of the `aurora_replica_read_consistency` parameter on running `INSERT` statements followed by `SELECT` statements. The results can differ, depending on the value of `aurora_replica_read_consistency` and the timing of the statements.

To achieve higher consistency, you might wait briefly before issuing the `SELECT` statement. Or Aurora can automatically wait until the results finish replicating before proceeding with `SELECT`.

For information on setting DB parameters, see [Parameter groups for Amazon Aurora](USER_WorkingWithParamGroups.md).

**Example with `aurora_replica_read_consistency` set to `EVENTUAL`**  
Running an `INSERT` statement, immediately followed by a `SELECT` statement, returns a value for `COUNT(*)` with the number of rows before the new row is inserted. Running the `SELECT` again a short time later returns the updated row count. The `SELECT` statements don't wait.  

```
mysql> select count(*) from t1;
+----------+
| count(*) |
+----------+
|        5 |
+----------+
1 row in set (0.00 sec)

mysql> insert into t1 values (6); select count(*) from t1;
+----------+
| count(*) |
+----------+
|        5 |
+----------+
1 row in set (0.00 sec)

mysql> select count(*) from t1;
+----------+
| count(*) |
+----------+
|        6 |
+----------+
1 row in set (0.00 sec)
```

**Example with `aurora_replica_read_consistency` set to `SESSION`**  
A `SELECT` statement immediately after an `INSERT` waits until the changes from the `INSERT` statement are visible. Subsequent `SELECT` statements don't wait.  

```
mysql> select count(*) from t1;
+----------+
| count(*) |
+----------+
|        6 |
+----------+
1 row in set (0.01 sec)

mysql> insert into t1 values (6); select count(*) from t1; select count(*) from t1;
Query OK, 1 row affected (0.08 sec)
+----------+
| count(*) |
+----------+
|        7 |
+----------+
1 row in set (0.37 sec)
+----------+
| count(*) |
+----------+
|        7 |
+----------+
1 row in set (0.00 sec)
```
With the read consistency setting still set to `SESSION`, introducing a brief wait after performing an `INSERT` statement makes the updated row count available by the time the next `SELECT` statement runs.  

```
mysql> insert into t1 values (6); select sleep(2); select count(*) from t1;
Query OK, 1 row affected (0.07 sec)
+----------+
| sleep(2) |
+----------+
|        0 |
+----------+
1 row in set (2.01 sec)
+----------+
| count(*) |
+----------+
|        8 |
+----------+
1 row in set (0.00 sec)
```

**Example with `aurora_replica_read_consistency` set to `GLOBAL`**  
Each `SELECT` statement waits for all data changes, as of the start time of the statement, to be visible before performing the query. The wait time for each `SELECT` statement varies, depending on the amount of replication lag.  

```
mysql> select count(*) from t1;
+----------+
| count(*) |
+----------+
|        8 |
+----------+
1 row in set (0.75 sec)

mysql> select count(*) from t1;
+----------+
| count(*) |
+----------+
|        8 |
+----------+
1 row in set (0.37 sec)

mysql> select count(*) from t1;
+----------+
| count(*) |
+----------+
|        8 |
+----------+
1 row in set (0.66 sec)
```

## Running multipart statements with write forwarding


A DML statement might consist of multiple parts, such as an `INSERT ... SELECT` statement or a `DELETE ... WHERE` statement. In this case, the entire statement is forwarded to the writer DB instance and run there.
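For example, when issued in a session with write forwarding enabled, multipart statements such as the following run entirely on the writer DB instance (the table and column names are illustrative):

```
mysql> INSERT INTO t2 SELECT c1, c2 FROM t1 WHERE c1 > 100;
mysql> DELETE FROM t1 WHERE c1 > 100;
```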

## Transactions with write forwarding


If the transaction access mode is set to read only, write forwarding isn't used. You can specify the access mode for the transaction by using the `SET TRANSACTION` statement or the `START TRANSACTION` statement. You can also specify the transaction access mode by changing the value of the [transaction_read_only](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_transaction_read_only) session variable. You can change this session value only while you're connected to a DB cluster that has write forwarding enabled.
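As a sketch, either of the following forms marks a transaction as read only, so write forwarding isn't used for the statements inside it:

```
mysql> SET TRANSACTION READ ONLY;
mysql> START TRANSACTION READ ONLY;
```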

If a long-running transaction doesn't issue any statement for a substantial period, it might exceed the idle timeout, which defaults to one minute. You can set the `aurora_fwd_writer_idle_timeout` parameter to increase it up to one day. A transaction that exceeds the idle timeout is canceled by the writer instance. The next statement that you submit receives a timeout error, and Aurora then rolls back the transaction.
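You can raise the idle timeout by changing the cluster parameter, for example with the AWS CLI. This is a sketch; the parameter group name `my-cluster-params` is an assumption:

```
aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-cluster-params \
    --parameters "ParameterName=aurora_fwd_writer_idle_timeout,ParameterValue=3600,ApplyMethod=immediate"
```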

This type of error can occur in other cases when write forwarding becomes unavailable. For example, Aurora cancels any transactions that use write forwarding if you restart the DB cluster or if you disable write forwarding.

When a writer instance in a cluster using local write forwarding is restarted, any active, forwarded transactions and queries on reader instances using local write forwarding are automatically closed. After the writer instance is available again, you can retry these transactions.

## Configuration parameters for write forwarding


The Aurora DB parameter groups include settings for the write forwarding feature. Details about these parameters are summarized in the following table, with usage notes after the table.


| Parameter | Scope | Type | Default value | Valid values | 
| --- | --- | --- | --- | --- | 
| `aurora_fwd_writer_idle_timeout` | Cluster | Unsigned integer | 60 | 1–86,400 | 
| `aurora_fwd_writer_max_connections_pct` | Cluster | Unsigned long integer | 10 | 0–90 | 
| `aurora_replica_read_consistency` | Cluster or instance | Enum | '' (null) | EVENTUAL, SESSION, GLOBAL | 

To control incoming write requests, use these settings:
+ `aurora_fwd_writer_idle_timeout` – The number of seconds the writer DB instance waits for activity on a connection that's forwarded from a reader instance before closing it. If the session remains idle beyond this period, Aurora cancels the session.
+ `aurora_fwd_writer_max_connections_pct` – The upper limit on database connections that can be used on a writer DB instance to handle queries forwarded from reader instances. It's expressed as a percentage of the `max_connections` setting for the writer. For example, if `max_connections` is 800 and `aurora_fwd_master_max_connections_pct` or `aurora_fwd_writer_max_connections_pct` is 10, then the writer allows a maximum of 80 simultaneous forwarded sessions. These connections come from the same connection pool managed by the `max_connections` setting.

  This setting applies only on the writer when it has write forwarding enabled. If you decrease the value, existing connections aren't affected. Aurora takes the new value of the setting into account when attempting to create a new connection from a DB cluster. The default value is 10, representing 10% of the `max_connections` value.
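The cap is simple arithmetic. The following sketch computes the maximum number of forwarded sessions for a given `max_connections` value and percentage; the function name is illustrative:

```python
def max_forwarded_sessions(max_connections: int, fwd_writer_max_connections_pct: int) -> int:
    """Upper bound on simultaneous forwarded sessions on the writer.

    The limit is a percentage of max_connections; forwarded sessions
    draw from the same connection pool as ordinary connections.
    """
    return max_connections * fwd_writer_max_connections_pct // 100

# With max_connections=800 and the default 10 percent,
# the writer allows at most 80 forwarded sessions.
print(max_forwarded_sessions(800, 10))  # 80
```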

**Note**  
Because `aurora_fwd_writer_idle_timeout` and `aurora_fwd_writer_max_connections_pct` are DB cluster parameters, all DB instances in each cluster have the same values for these parameters.

For more information about `aurora_replica_read_consistency`, see [Read consistency for write forwarding](aurora-mysql-write-forwarding-consistency.md).

For more information on DB parameter groups, see [Parameter groups for Amazon Aurora](USER_WorkingWithParamGroups.md).

# Amazon CloudWatch metrics and Aurora MySQL status variables for write forwarding

The following Amazon CloudWatch metrics and Aurora MySQL status variables apply when you use write forwarding for Aurora MySQL. For more information about metrics for Aurora MySQL writer and reader DB instances, see the following topics.

**Topics**
+ [Metrics for write forwarding for Aurora MySQL writer DB instances](#aurora-mysql-write-forwarding-cloudwatch-writer-metrics)
+ [Metrics for write forwarding for Aurora MySQL reader DB instances](#aurora-mysql-write-forwarding-cloudwatch-reader-metrics)

## Metrics for write forwarding for Aurora MySQL writer DB instances

The following Amazon CloudWatch metrics apply when you use write forwarding on one or more DB clusters. These metrics are all measured on the writer DB instance.


| CloudWatch metric | Unit | Description | 
| --- | --- | --- | 
|  `ForwardingWriterDMLLatency`  | Milliseconds |  Average time to process each forwarded DML statement on the writer DB instance. It doesn't include the time for the DB cluster to forward the write request, or the time to replicate changes back to the writer.  | 
|  `ForwardingWriterDMLThroughput`   | Count per second | Number of forwarded DML statements processed each second by this writer DB instance. | 
|  `ForwardingWriterOpenSessions`  | Count | Number of forwarded sessions on the writer DB instance. | 

The following Aurora MySQL status variables apply when you use write forwarding on one or more DB clusters. These status variables are all measured on the writer DB instance.


| Aurora MySQL status variable | Unit | Description | 
| --- | --- | --- | 
| `Aurora_fwd_writer_dml_stmt_count` | Count | Total number of DML statements forwarded to this writer DB instance. | 
| `Aurora_fwd_writer_dml_stmt_duration` | Microseconds | Total duration of DML statements forwarded to this writer DB instance. | 
| `Aurora_fwd_writer_open_sessions` | Count | Number of forwarded sessions on the writer DB instance. | 
| `Aurora_fwd_writer_select_stmt_count` | Count | Total number of SELECT statements forwarded to this writer DB instance. | 
| `Aurora_fwd_writer_select_stmt_duration` | Microseconds | Total duration of SELECT statements forwarded to this writer DB instance. | 

## Metrics for write forwarding for Aurora MySQL reader DB instances

The following CloudWatch metrics are measured on each reader DB instance in a DB cluster with write forwarding enabled.


| CloudWatch metric | Unit | Description | 
| --- | --- | --- | 
|  `ForwardingReplicaDMLLatency`  | Milliseconds | Average response time of forwarded DMLs on replica. | 
|  `ForwardingReplicaDMLThroughput`  | Count per second | Number of forwarded DML statements processed each second. | 
|  `ForwardingReplicaOpenSessions`  | Count | Number of sessions that are using write forwarding on a reader DB instance. | 
|  `ForwardingReplicaReadWaitLatency`  | Milliseconds |  Average wait time that a `SELECT` statement on a reader DB instance waits to catch up to the writer. The degree to which the reader DB instance waits before processing a query depends on the `aurora_replica_read_consistency` setting.  | 
|  `ForwardingReplicaReadWaitThroughput`  | Count per second | Total number of SELECT statements processed each second in all sessions that are forwarding writes. | 
|   `ForwardingReplicaSelectLatency`  | Milliseconds | Forwarded SELECT latency, averaged over all forwarded SELECT statements within the monitoring period. | 
|   `ForwardingReplicaSelectThroughput`  | Count per second | Forwarded SELECT throughput per second average within the monitoring period. | 

The following Aurora MySQL status variables are measured on each reader DB instance in a DB cluster with write forwarding enabled.


| Aurora MySQL status variable | Unit | Description | 
| --- | --- | --- | 
| `Aurora_fwd_replica_dml_stmt_count` | Count | Total number of DML statements forwarded from this reader DB instance. | 
| `Aurora_fwd_replica_dml_stmt_duration` | Microseconds | Total duration of all DML statements forwarded from this reader DB instance. | 
| `Aurora_fwd_replica_errors_session_limit` | Count |  Number of sessions rejected by the primary cluster due to one of the following error conditions: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-mysql-write-forwarding-cloudwatch.html)  | 
| `Aurora_fwd_replica_open_sessions` | Count | Number of sessions that are using write forwarding on a reader DB instance. | 
| `Aurora_fwd_replica_read_wait_count` | Count | Total number of read-after-write waits on this reader DB instance. | 
| `Aurora_fwd_replica_read_wait_duration` | Microseconds | Total duration of waits due to the read consistency setting on this reader DB instance. | 
| `Aurora_fwd_replica_select_stmt_count` | Count | Total number of SELECT statements forwarded from this reader DB instance. | 
| `Aurora_fwd_replica_select_stmt_duration` | Microseconds | Total duration of SELECT statements forwarded from this reader DB instance. | 

## Identifying forwarded transactions and queries


You can use the `information_schema.aurora_forwarding_processlist` table to identify forwarded transactions and queries. For more information on this table, see [information_schema.aurora_forwarding_processlist](AuroraMySQL.Reference.ISTables.md#AuroraMySQL.Reference.ISTables.aurora_forwarding_processlist).

The following example shows all forwarded connections on a writer DB instance.

```
mysql> select * from information_schema.AURORA_FORWARDING_PROCESSLIST where IS_FORWARDED=1 order by REPLICA_SESSION_ID;

+-----+----------+--------------------+----------+---------+------+--------------+--------------------------------------------+--------------+--------------------+---------------------------------+----------------------+----------------+
| ID  | USER     | HOST               | DB       | COMMAND | TIME | STATE        | INFO                                       | IS_FORWARDED | REPLICA_SESSION_ID | REPLICA_INSTANCE_IDENTIFIER     | REPLICA_CLUSTER_NAME | REPLICA_REGION |
+-----+----------+--------------------+----------+---------+------+--------------+--------------------------------------------+--------------+--------------------+---------------------------------+----------------------+----------------+
| 648 | myuser   | IP_address:port1   | sysbench | Query   |    0 | async commit | UPDATE sbtest58 SET k=k+1 WHERE id=4802579 |            1 |                637 | my-db-cluster-instance-2        | my-db-cluster        | us-west-2      |
| 650 | myuser   | IP_address:port2   | sysbench | Query   |    0 | async commit | UPDATE sbtest54 SET k=k+1 WHERE id=2503953 |            1 |                639 | my-db-cluster-instance-2        | my-db-cluster        | us-west-2      |
+-----+----------+--------------------+----------+---------+------+--------------+--------------------------------------------+--------------+--------------------+---------------------------------+----------------------+----------------+
```

On the forwarding reader DB instance, you can see the threads associated with these writer DB connections by running `SHOW PROCESSLIST`. The `REPLICA_SESSION_ID` values on the writer, 637 and 639, are the same as the `Id` values on the reader.

```
mysql> select @@aurora_server_id;

+---------------------------------+
| @@aurora_server_id              |
+---------------------------------+
| my-db-cluster-instance-2        |
+---------------------------------+
1 row in set (0.00 sec)

mysql> show processlist;

+-----+----------+--------------------+----------+---------+------+--------------+---------------------------------------------+
| Id  | User     | Host               | db       | Command | Time | State        | Info                                        |
+-----+----------+--------------------+----------+---------+------+--------------+---------------------------------------------+
| 637 | myuser   | IP_address:port1   | sysbench | Query   |    0 | async commit | UPDATE sbtest12 SET k=k+1 WHERE id=4802579  |
| 639 | myuser   | IP_address:port2   | sysbench | Query   |    0 | async commit | UPDATE sbtest61 SET k=k+1 WHERE id=2503953  |
+-----+----------+--------------------+----------+---------+------+--------------+---------------------------------------------+
12 rows in set (0.00 sec)
```

# Integrating Amazon Aurora MySQL with other AWS services

Amazon Aurora MySQL integrates with other AWS services so that you can extend your Aurora MySQL DB cluster to use additional capabilities in the AWS Cloud. Your Aurora MySQL DB cluster can use AWS services to do the following:
+ Synchronously or asynchronously invoke an AWS Lambda function using the native functions `lambda_sync` or `lambda_async`. For more information, see [Invoking a Lambda function from an Amazon Aurora MySQL DB cluster](AuroraMySQL.Integrating.Lambda.md).
+ Load data from text or XML files stored in an Amazon Simple Storage Service (Amazon S3) bucket into your DB cluster using the `LOAD DATA FROM S3` or `LOAD XML FROM S3` command. For more information, see [Loading data into an Amazon Aurora MySQL DB cluster from text files in an Amazon S3 bucket](AuroraMySQL.Integrating.LoadFromS3.md).
+ Save data to text files stored in an Amazon S3 bucket from your DB cluster using the `SELECT INTO OUTFILE S3` command. For more information, see [Saving data from an Amazon Aurora MySQL DB cluster into text files in an Amazon S3 bucket](AuroraMySQL.Integrating.SaveIntoS3.md). 
+ Automatically add or remove Aurora Replicas with Application Auto Scaling. For more information, see [Amazon Aurora Auto Scaling with Aurora Replicas](Aurora.Integrating.AutoScaling.md).
+  Perform sentiment analysis with Amazon Comprehend, or a wide variety of machine learning algorithms with SageMaker AI. For more information, see [Using Amazon Aurora machine learning](aurora-ml.md). 

Aurora secures the ability to access other AWS services by using AWS Identity and Access Management (IAM). You grant permission to access other AWS services by creating an IAM role with the necessary permissions, and then associating the role with your DB cluster. For details and instructions on how to permit your Aurora MySQL DB cluster to access other AWS services on your behalf, see [Authorizing Amazon Aurora MySQL to access other AWS services on your behalf](AuroraMySQL.Integrating.Authorizing.md).
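As a sketch, after the required IAM role is associated with the DB cluster, statements such as the following use these integrations. The function ARN, bucket name, payload, and table name are illustrative:

```
mysql> SELECT lambda_sync('arn:aws:lambda:us-east-1:123456789012:function:example_function', '{"operation": "ping"}');
mysql> LOAD DATA FROM S3 's3://amzn-s3-demo-bucket/data.txt' INTO TABLE t1;
```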

# Authorizing Amazon Aurora MySQL to access other AWS services on your behalf

For your Aurora MySQL DB cluster to access other services on your behalf, create and configure an AWS Identity and Access Management (IAM) role. This role authorizes database users in your DB cluster to access other AWS services. For more information, see [Setting up IAM roles to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.md).

You must also configure your Aurora DB cluster to allow outbound connections to the target AWS service. For more information, see [Enabling network communication from Amazon Aurora to other AWS services](AuroraMySQL.Integrating.Authorizing.Network.md).

If you do so, your database users can perform these actions using other AWS services:
+ Synchronously or asynchronously invoke an AWS Lambda function using the native functions `lambda_sync` or `lambda_async`. Or, asynchronously invoke an AWS Lambda function using the `mysql.lambda_async` procedure. For more information, see [Invoking a Lambda function with an Aurora MySQL native function](AuroraMySQL.Integrating.NativeLambda.md).
+ Load data from text or XML files stored in an Amazon S3 bucket into your DB cluster by using the `LOAD DATA FROM S3` or `LOAD XML FROM S3` statement. For more information, see [Loading data into an Amazon Aurora MySQL DB cluster from text files in an Amazon S3 bucket](AuroraMySQL.Integrating.LoadFromS3.md).
+ Save data from your DB cluster into text files stored in an Amazon S3 bucket by using the `SELECT INTO OUTFILE S3` statement. For more information, see [Saving data from an Amazon Aurora MySQL DB cluster into text files in an Amazon S3 bucket](AuroraMySQL.Integrating.SaveIntoS3.md).
+ Export log data to Amazon CloudWatch Logs. For more information, see [Publishing Amazon Aurora MySQL logs to Amazon CloudWatch Logs](AuroraMySQL.Integrating.CloudWatch.md).
+ Automatically add or remove Aurora Replicas with Application Auto Scaling. For more information, see [Amazon Aurora Auto Scaling with Aurora Replicas](Aurora.Integrating.AutoScaling.md).

# Setting up IAM roles to access AWS services

To permit your Aurora DB cluster to access another AWS service, do the following:

1. Create an IAM policy that grants permission to the AWS service. For more information, see the following topics.
   + [Creating an IAM policy to access Amazon S3 resources](AuroraMySQL.Integrating.Authorizing.IAM.S3CreatePolicy.md)
   + [Creating an IAM policy to access AWS Lambda resources](AuroraMySQL.Integrating.Authorizing.IAM.LambdaCreatePolicy.md)
   + [Creating an IAM policy to access CloudWatch Logs resources](AuroraMySQL.Integrating.Authorizing.IAM.CWCreatePolicy.md)
   + [Creating an IAM policy to access AWS KMS resources](AuroraMySQL.Integrating.Authorizing.IAM.KMSCreatePolicy.md)

1. Create an IAM role and attach the policy that you created. For more information, see [Creating an IAM role to allow Amazon Aurora to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.CreateRole.md).

1. Associate that IAM role with your Aurora DB cluster. For more information, see [Associating an IAM role with an Amazon Aurora MySQL DB cluster](AuroraMySQL.Integrating.Authorizing.IAM.AddRoleToDBCluster.md).

# Creating an IAM policy to access Amazon S3 resources

Aurora can access Amazon S3 resources to either load data to or save data from an Aurora DB cluster. However, you must first create an IAM policy that provides the bucket and object permissions that allow Aurora to access Amazon S3.

The following table lists the Aurora features that can access an Amazon S3 bucket on your behalf, and the minimum bucket and object permissions required by each feature.


| Feature | Bucket permissions | Object permissions | 
| --- | --- | --- | 
|  `LOAD DATA FROM S3`  |  `ListBucket`  |  `GetObject` `GetObjectVersion`  | 
|  `LOAD XML FROM S3`  |  `ListBucket`  |  `GetObject` `GetObjectVersion`  | 
|  `SELECT INTO OUTFILE S3`  |  `ListBucket`  |  `AbortMultipartUpload` `DeleteObject` `GetObject` `ListMultipartUploadParts` `PutObject`  | 

The following policy adds the permissions that might be required by Aurora to access an Amazon S3 bucket on your behalf. 

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAuroraToExampleBucket",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:AbortMultipartUpload",
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:GetObjectVersion",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-bucket/*",
                "arn:aws:s3:::amzn-s3-demo-bucket"
            ]
        }
    ]
}
```

------

**Note**  
 Make sure to include both entries for the `Resource` value. Aurora needs the permissions on both the bucket itself and all the objects inside the bucket.   
Based on your use case, you might not need to add all of the permissions in the sample policy. Also, other permissions might be required. For example, if your Amazon S3 bucket is encrypted, you need to add `kms:Decrypt` permissions.
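For example, adding a statement like the following to the policy grants decrypt permission on the KMS key that encrypts the bucket. The key ARN is illustrative:

```
{
    "Sid": "AllowDecryptForEncryptedBucket",
    "Effect": "Allow",
    "Action": "kms:Decrypt",
    "Resource": "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"
}
```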

You can use the following steps to create an IAM policy that provides the minimum required permissions for Aurora to access an Amazon S3 bucket on your behalf. To allow Aurora to access all of your Amazon S3 buckets, you can skip these steps and use either the `AmazonS3ReadOnlyAccess` or `AmazonS3FullAccess` predefined IAM policy instead of creating your own.

**To create an IAM policy to grant access to your Amazon S3 resources**

1. Open the [IAM console](https://console.aws.amazon.com/iam/home?#home).

1. In the navigation pane, choose **Policies**.

1. Choose **Create policy**.

1. On the **Visual editor** tab, choose **Choose a service**, and then choose **S3**.

1. For **Actions**, choose **Expand all**, and then choose the bucket permissions and object permissions needed for the IAM policy.

   Object permissions are permissions for object operations in Amazon S3, and need to be granted for objects in a bucket, not the bucket itself. For more information about permissions for object operations in Amazon S3, see [Permissions for object operations](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-actions.html#using-with-s3-actions-related-to-objects).

1. Choose **Resources**, and choose **Add ARN** for **bucket**.

1. In the **Add ARN(s)** dialog box, provide the details about your resource, and choose **Add**.

   Specify the Amazon S3 bucket to allow access to. For instance, if you want to allow Aurora to access the Amazon S3 bucket named *amzn-s3-demo-bucket*, then set the Amazon Resource Name (ARN) value to `arn:aws:s3:::amzn-s3-demo-bucket`.

1. If the **object** resource is listed, choose **Add ARN** for **object**.

1. In the **Add ARN(s)** dialog box, provide the details about your resource.

   For the Amazon S3 bucket, specify the Amazon S3 bucket to allow access to. For the object, you can choose **Any** to grant permissions to any object in the bucket.
**Note**  
You can set **Amazon Resource Name (ARN)** to a more specific ARN value in order to allow Aurora to access only specific files or folders in an Amazon S3 bucket. For more information about how to define an access policy for Amazon S3, see [Managing access permissions to your Amazon S3 resources](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-access-control.html).

1. (Optional) Choose **Add ARN** for **bucket** to add another Amazon S3 bucket to the policy, and repeat the previous steps for the bucket.
**Note**  
You can repeat this to add corresponding bucket permission statements to your policy for each Amazon S3 bucket that you want Aurora to access. Optionally, you can also grant access to all buckets and objects in Amazon S3.

1. Choose **Review policy**.

1. For **Name**, enter a name for your IAM policy, for example `AllowAuroraToExampleBucket`. You use this name when you create an IAM role to associate with your Aurora DB cluster. You can also add an optional **Description** value.

1. Choose **Create policy**.

1. Complete the steps in [Creating an IAM role to allow Amazon Aurora to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.CreateRole.md).

# Creating an IAM policy to access AWS Lambda resources

You can create an IAM policy that provides the minimum required permissions for Aurora to invoke an AWS Lambda function on your behalf.

The following policy adds the permissions required by Aurora to invoke an AWS Lambda function on your behalf.

------
#### [ JSON ]

****  

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAuroraToExampleFunction",
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:example_function"
    }
  ]
}
```

------

You can use the following steps to create an IAM policy that provides the minimum required permissions for Aurora to invoke an AWS Lambda function on your behalf. To allow Aurora to invoke all of your AWS Lambda functions, you can skip these steps and use the predefined `AWSLambdaRole` policy instead of creating your own.

**To create an IAM policy to grant invoke access to your AWS Lambda functions**

1. Open the [IAM console](https://console.aws.amazon.com/iam/home?#home).

1. In the navigation pane, choose **Policies**.

1. Choose **Create policy**.

1. On the **Visual editor** tab, choose **Choose a service**, and then choose **Lambda**.

1. For **Actions**, choose **Expand all**, and then choose the AWS Lambda permissions needed for the IAM policy.

   Ensure that `InvokeFunction` is selected. It is the minimum required permission to enable Amazon Aurora to invoke an AWS Lambda function.

1. Choose **Resources** and choose **Add ARN** for **function**.

1. In the **Add ARN(s)** dialog box, provide the details about your resource.

   Specify the Lambda function to allow access to. For instance, if you want to allow Aurora to access a Lambda function named `example_function`, then set the ARN value to `arn:aws:lambda:::function:example_function`. 

   For more information on how to define an access policy for AWS Lambda, see [Authentication and access control for AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/lambda-auth-and-access-control.html).

1. Optionally, choose **Add additional permissions** to add another AWS Lambda function to the policy, and repeat the previous steps for the function.
**Note**  
You can repeat this to add corresponding function permission statements to your policy for each AWS Lambda function that you want Aurora to access.

1. Choose **Review policy**.

1. Set **Name** to a name for your IAM policy, for example `AllowAuroraToExampleFunction`. You use this name when you create an IAM role to associate with your Aurora DB cluster. You can also add an optional **Description** value.

1. Choose **Create policy**.

1. Complete the steps in [Creating an IAM role to allow Amazon Aurora to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.CreateRole.md).

# Creating an IAM policy to access CloudWatch Logs resources

Aurora can access CloudWatch Logs to export audit log data from an Aurora DB cluster. However, you must first create an IAM policy that provides the log group and log stream permissions that allow Aurora to access CloudWatch Logs. 

The following policy adds the permissions required by Aurora to access Amazon CloudWatch Logs on your behalf, and the minimum required permissions to create log groups and export data. 

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "EnableCreationAndManagementOfRDSCloudwatchLogEvents",
            "Effect": "Allow",
            "Action": [
                "logs:GetLogEvents",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:log-group:/aws/rds/*:log-stream:*"
        },
        {
            "Sid": "EnableCreationAndManagementOfRDSCloudwatchLogGroupsAndStreams",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:DescribeLogStreams",
                "logs:PutRetentionPolicy",
                "logs:CreateLogGroup"
            ],
            "Resource": "arn:aws:logs:*:*:log-group:/aws/rds/*"
        }
    ]
}
```

------

You can modify the ARNs in the policy to restrict access to a specific AWS Region and account.
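As a sketch of that narrowing, the wildcard Region and account fields in the two `Resource` ARNs can be replaced with concrete values. The following Python snippet builds the scoped ARN strings; the Region and account ID shown are hypothetical placeholders.

```python
# Build Region- and account-scoped ARNs for the CloudWatch Logs policy.
# The Region and account ID below are hypothetical placeholders.
region = "us-east-1"
account_id = "123456789012"

log_group_arn = f"arn:aws:logs:{region}:{account_id}:log-group:/aws/rds/*"
log_stream_arn = (
    f"arn:aws:logs:{region}:{account_id}:log-group:/aws/rds/*:log-stream:*"
)

print(log_group_arn)
print(log_stream_arn)
```

Substitute the scoped strings for the `Resource` values in the corresponding policy statements.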

You can use the following steps to create an IAM policy that provides the minimum required permissions for Aurora to access CloudWatch Logs on your behalf. To allow Aurora full access to CloudWatch Logs, you can skip these steps and use the `CloudWatchLogsFullAccess` predefined IAM policy instead of creating your own. For more information, see [Using identity-based policies (IAM policies) for CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/iam-identity-based-access-control-cwl.html#managed-policies-cwl) in the *Amazon CloudWatch User Guide*.

**To create an IAM policy to grant access to your CloudWatch Logs resources**

1. Open the [IAM console](https://console.aws.amazon.com/iam/home?#home).

1. In the navigation pane, choose **Policies**.

1. Choose **Create policy**.

1. On the **Visual editor** tab, choose **Choose a service**, and then choose **CloudWatch Logs**.

1. For **Actions**, choose **Expand all** (on the right), and then choose the Amazon CloudWatch Logs permissions needed for the IAM policy.

   Ensure that the following permissions are selected:
   + `CreateLogGroup`
   + `CreateLogStream`
   + `DescribeLogStreams`
   + `GetLogEvents`
   + `PutLogEvents`
   + `PutRetentionPolicy`

1. Choose **Resources** and choose **Add ARN** for **log-group**.

1. In the **Add ARN(s)** dialog box, enter the following values:
   + **Region** – An AWS Region or `*`
   + **Account** – An account number or `*`
   + **Log Group Name** – `/aws/rds/*`

1. In the **Add ARN(s)** dialog box, choose **Add**.

1. Choose **Add ARN** for **log-stream**.

1. In the **Add ARN(s)** dialog box, enter the following values:
   + **Region** – An AWS Region or `*`
   + **Account** – An account number or `*`
   + **Log Group Name** – `/aws/rds/*`
   + **Log Stream Name** – `*`

1. In the **Add ARN(s)** dialog box, choose **Add**.

1. Choose **Review policy**.

1. Set **Name** to a name for your IAM policy, for example `AmazonRDSCloudWatchLogs`. You use this name when you create an IAM role to associate with your Aurora DB cluster. You can also add an optional **Description** value.

1. Choose **Create policy**.

1. Complete the steps in [Creating an IAM role to allow Amazon Aurora to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.CreateRole.md).

# Creating an IAM policy to access AWS KMS resources
Creating an IAM policy to access AWS KMS

Aurora can access the AWS KMS keys used to encrypt your database backups. However, you must first create an IAM policy that provides the permissions that allow Aurora to access KMS keys.

The following policy adds the permissions required by Aurora to access KMS keys on your behalf.

------
#### [ JSON ]

****  

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAuroraToAccessKey",
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt"
      ],
      "Resource": "arn:aws:kms:us-east-1:123456789012:key/key-ID"
    }
  ]
}
```

------

You can use the following steps to create an IAM policy that provides the minimum required permissions for Aurora to access KMS keys on your behalf.

**To create an IAM policy to grant access to your KMS keys**

1. Open the [IAM console](https://console.aws.amazon.com/iam/home?#home).

1. In the navigation pane, choose **Policies**.

1. Choose **Create policy**.

1. On the **Visual editor** tab, choose **Choose a service**, and then choose **KMS**.

1. For **Actions**, choose **Write**, and then choose **Decrypt**.

1. Choose **Resources**, and choose **Add ARN**.

1. In the **Add ARN(s)** dialog box, enter the following values:
   + **Region** – Enter the AWS Region, such as `us-west-2`.
   + **Account** – Enter the account number.
   + **Key id** – Enter the KMS key identifier.

1. In the **Add ARN(s)** dialog box, choose **Add**.

1. Choose **Review policy**.

1. Set **Name** to a name for your IAM policy, for example `AmazonRDSKMSKey`. You use this name when you create an IAM role to associate with your Aurora DB cluster. You can also add an optional **Description** value.

1. Choose **Create policy**.

1. Complete the steps in [Creating an IAM role to allow Amazon Aurora to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.CreateRole.md).

# Creating an IAM role to allow Amazon Aurora to access AWS services
Creating an IAM role to access AWS services

After creating an IAM policy to allow Aurora to access AWS resources, you must create an IAM role and attach the IAM policy to the new IAM role.

To create an IAM role to permit your Amazon RDS cluster to communicate with other AWS services on your behalf, take the following steps.<a name="Create.IAMRole.AWSServices"></a>

**To create an IAM role to allow Amazon RDS to access AWS services**

1. Open the [IAM console](https://console.aws.amazon.com/iam/home?#home).

1. In the navigation pane, choose **Roles**.

1. Choose **Create role**.

1. Under **AWS service**, choose **RDS**.

1. Under **Select your use case**, choose **RDS – Add Role to Database**.

1. Choose **Next**.

1. On the **Permissions policies** page, enter the name of your policy in the **Search** field.

1. When it appears in the list, select the policy that you defined earlier using the instructions in one of the following sections:
   + [Creating an IAM policy to access Amazon S3 resources](AuroraMySQL.Integrating.Authorizing.IAM.S3CreatePolicy.md)
   + [Creating an IAM policy to access AWS Lambda resources](AuroraMySQL.Integrating.Authorizing.IAM.LambdaCreatePolicy.md)
   + [Creating an IAM policy to access CloudWatch Logs resources](AuroraMySQL.Integrating.Authorizing.IAM.CWCreatePolicy.md)
   + [Creating an IAM policy to access AWS KMS resources](AuroraMySQL.Integrating.Authorizing.IAM.KMSCreatePolicy.md)

1. Choose **Next**.

1. In **Role name**, enter a name for your IAM role, for example `RDSLoadFromS3`. You can also add an optional **Description** value.

1. Choose **Create role**.

1. Complete the steps in [Associating an IAM role with an Amazon Aurora MySQL DB cluster](AuroraMySQL.Integrating.Authorizing.IAM.AddRoleToDBCluster.md).
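When you choose **RDS – Add Role to Database**, the console attaches a trust policy that lets the RDS service assume the role. If you create the role outside the console (for example, with the AWS CLI), you supply an equivalent trust policy yourself. The following is a minimal sketch of that trust policy.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "rds.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```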

# Associating an IAM role with an Amazon Aurora MySQL DB cluster
Associating an IAM role with a DB cluster

To permit database users in an Amazon Aurora DB cluster to access other AWS services, you associate the IAM role that you created in [Creating an IAM role to allow Amazon Aurora to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.CreateRole.md) with that DB cluster. You can also have AWS create a new IAM role by associating the service directly.

**Note**  
You can't associate an IAM role with an Aurora Serverless v1 DB cluster. For more information, see [Using Amazon Aurora Serverless v1](aurora-serverless.md).  
You can associate an IAM role with an Aurora Serverless v2 DB cluster.

To associate an IAM role with a DB cluster, you do two things:

1. Add the role to the list of associated roles for a DB cluster by using the RDS console, the [add-role-to-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/add-role-to-db-cluster.html) AWS CLI command, or the [AddRoleToDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_AddRoleToDBCluster.html) RDS API operation.

   You can add a maximum of five IAM roles for each Aurora DB cluster.

1. Set the cluster-level parameter for the related AWS service to the ARN for the associated IAM role.

   The following table describes the cluster-level parameter names for the IAM roles used to access other AWS services.    
<a name="aurora_cluster_params_iam_roles"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.Authorizing.IAM.AddRoleToDBCluster.html)

To associate an IAM role to permit your Amazon RDS cluster to communicate with other AWS services on your behalf, take the following steps.

## Console


**To associate an IAM role with an Aurora DB cluster using the console**

1. Open the RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. Choose **Databases**.

1. Choose the name of the Aurora DB cluster that you want to associate an IAM role with to show its details.

1. On the **Connectivity & security** tab, in the **Manage IAM roles** section, do one of the following:
   + **Select IAM roles to add to this cluster** (default)
   + **Select a service to connect to this cluster**  
![\[Associate an IAM role with a DB cluster\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/AuroraAssociateIAMRole-02.png)

1. To use an existing IAM role, choose it from the menu, then choose **Add role**.

   If adding the role is successful, its status shows as `Pending`, then `Available`.

1. To connect a service directly:

   1. Choose **Select a service to connect to this cluster**.

   1. Choose the service from the menu, then choose **Connect service**.

   1. For **Connect cluster to *Service Name***, enter the Amazon Resource Name (ARN) to use to connect to the service, then choose **Connect service**.

   AWS creates a new IAM role for connecting to the service. Its status shows as `Pending`, then `Available`.

1. (Optional) To stop associating an IAM role with a DB cluster and remove the related permission, choose the role and then choose **Delete**.

**To set the cluster-level parameter for the associated IAM role**

1. In the RDS console, choose **Parameter groups** in the navigation pane.

1. If you are already using a custom DB parameter group, you can select that group to use instead of creating a new DB cluster parameter group. If you are using the default DB cluster parameter group, create a new DB cluster parameter group, as described in the following steps:

   1. Choose **Create parameter group**.

   1. For **Parameter group family**, choose `aurora-mysql8.0` for an Aurora MySQL 8.0-compatible DB cluster, or `aurora-mysql5.7` for an Aurora MySQL 5.7-compatible DB cluster.

   1. For **Type**, choose **DB Cluster Parameter Group**. 

   1. For **Group name**, type the name of your new DB cluster parameter group.

   1. For **Description**, type a description for your new DB cluster parameter group.  
![\[Create a DB cluster parameter group\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/AuroraAssociateIAMRole-03.png)

   1. Choose **Create**. 

1. On the **Parameter groups** page, select your DB cluster parameter group and choose **Edit** for **Parameter group actions**.

1. Set the appropriate cluster-level [parameters](#aurora_cluster_params_iam_roles) to the related IAM role ARN values.

   For example, you can set just the `aws_default_s3_role` parameter to `arn:aws:iam::123456789012:role/AllowS3Access`.

1. Choose **Save changes**.

1. To change the DB cluster parameter group for your DB cluster, complete the following steps:

   1. Choose **Databases**, and then choose your Aurora DB cluster.

   1. Choose **Modify**.

   1. Scroll to **Database options** and set **DB cluster parameter group** to the DB cluster parameter group.

   1. Choose **Continue**.

   1. Verify your changes and then choose **Apply immediately**.

   1. Choose **Modify cluster**.

   1. Choose **Databases**, and then choose the primary instance for your DB cluster.

   1. For **Actions**, choose **Reboot**.

      When the instance has rebooted, your IAM role is associated with your DB cluster.

      For more information about cluster parameter groups, see [Aurora MySQL configuration parameters](AuroraMySQL.Reference.ParameterGroups.md).

## CLI


**To associate an IAM role with a DB cluster by using the AWS CLI**

1. Call the `add-role-to-db-cluster` command from the AWS CLI to add the ARNs for your IAM roles to the DB cluster, as shown following. 

   ```
   PROMPT> aws rds add-role-to-db-cluster --db-cluster-identifier my-cluster --role-arn arn:aws:iam::123456789012:role/AllowAuroraS3Role
   PROMPT> aws rds add-role-to-db-cluster --db-cluster-identifier my-cluster --role-arn arn:aws:iam::123456789012:role/AllowAuroraLambdaRole
   ```

1. If you are using the default DB cluster parameter group, create a new DB cluster parameter group. If you are already using a custom DB parameter group, you can use that group instead of creating a new DB cluster parameter group.

   To create a new DB cluster parameter group, call the `create-db-cluster-parameter-group` command from the AWS CLI, as shown following.

   ```
   PROMPT> aws rds create-db-cluster-parameter-group  --db-cluster-parameter-group-name AllowAWSAccess \
        --db-parameter-group-family aurora-mysql5.7 --description "Allow access to Amazon S3 and AWS Lambda"
   ```

   For an Aurora MySQL 5.7-compatible DB cluster, specify `aurora-mysql5.7` for `--db-parameter-group-family`. For an Aurora MySQL 8.0-compatible DB cluster, specify `aurora-mysql8.0` for `--db-parameter-group-family`.

1. Set the appropriate cluster-level parameter or parameters and the related IAM role ARN values in your DB cluster parameter group, as shown following. 

   ```
   PROMPT> aws rds modify-db-cluster-parameter-group --db-cluster-parameter-group-name AllowAWSAccess \
       --parameters "ParameterName=aws_default_s3_role,ParameterValue=arn:aws:iam::123456789012:role/AllowAuroraS3Role,method=pending-reboot" \
       --parameters "ParameterName=aws_default_lambda_role,ParameterValue=arn:aws:iam::123456789012:role/AllowAuroraLambdaRole,method=pending-reboot"
   ```

1. Modify the DB cluster to use the new DB cluster parameter group and then reboot the cluster, as shown following.

   ```
   PROMPT> aws rds modify-db-cluster --db-cluster-identifier my-cluster --db-cluster-parameter-group-name AllowAWSAccess
   PROMPT> aws rds reboot-db-instance --db-instance-identifier my-cluster-primary
   ```

   When the instance has rebooted, your IAM roles are associated with your DB cluster.

   For more information about cluster parameter groups, see [Aurora MySQL configuration parameters](AuroraMySQL.Reference.ParameterGroups.md).

# Enabling network communication from Amazon Aurora to other AWS services
Enabling network communication to AWS services

To use certain other AWS services with Amazon Aurora, the network configuration of your Aurora DB cluster must allow outbound connections to endpoints for those services. The following operations require this network configuration.
+  Invoking AWS Lambda functions. To learn about this feature, see [Invoking a Lambda function with an Aurora MySQL native function](AuroraMySQL.Integrating.NativeLambda.md). 
+  Accessing files from Amazon S3. To learn about this feature, see [Loading data into an Amazon Aurora MySQL DB cluster from text files in an Amazon S3 bucket](AuroraMySQL.Integrating.LoadFromS3.md) and [Saving data from an Amazon Aurora MySQL DB cluster into text files in an Amazon S3 bucket](AuroraMySQL.Integrating.SaveIntoS3.md). 
+ Accessing AWS KMS endpoints. AWS KMS access is required to use database activity streams with Aurora MySQL. To learn about this feature, see [Monitoring Amazon Aurora with Database Activity Streams](DBActivityStreams.md).
+ Accessing SageMaker AI endpoints. SageMaker AI access is required to use SageMaker AI machine learning with Aurora MySQL. To learn about this feature, see [Using Amazon Aurora machine learning with Aurora MySQL](mysql-ml.md).

Aurora returns the following error messages if it can't connect to a service endpoint.

```
ERROR 1871 (HY000): S3 API returned error: Network Connection
```

```
ERROR 1873 (HY000): Lambda API returned error: Network Connection. Unable to connect to endpoint
```

```
ERROR 1815 (HY000): Internal error: Unable to initialize S3Stream
```

For database activity streams using Aurora MySQL, the activity stream stops functioning if the DB cluster can't access the AWS KMS endpoint. Aurora notifies you about this issue using RDS Events.

If you encounter these messages while using the corresponding AWS services, check if your Aurora DB cluster is public or private. If your Aurora DB cluster is private, you must configure it to enable connections.

For an Aurora DB cluster to be public, it must be marked as publicly accessible. If you look at the details for the DB cluster in the AWS Management Console, **Publicly Accessible** is **Yes** if this is the case. The DB cluster must also be in an Amazon VPC public subnet. For more information about publicly accessible DB instances, see [Working with a DB cluster in a VPC](USER_VPC.WorkingWithRDSInstanceinaVPC.md). For more information about public Amazon VPC subnets, see [Your VPC and subnets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html).

If your Aurora DB cluster isn't publicly accessible and in a VPC public subnet, it is private. You might have a DB cluster that is private and want to use one of the features that requires this network configuration. If so, configure the cluster so that it can connect to internet addresses through Network Address Translation (NAT). As an alternative for Amazon S3, Amazon SageMaker AI, and AWS Lambda, you can instead configure the VPC to have a VPC endpoint for the other service that is associated with the DB cluster's route table. For more information, see [Working with a DB cluster in a VPC](USER_VPC.WorkingWithRDSInstanceinaVPC.md). For more information about configuring NAT in your VPC, see [NAT gateways](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html). For more information about configuring VPC endpoints, see [VPC endpoints](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html). You can also create an S3 gateway endpoint to access your S3 bucket. For more information, see [Gateway endpoints for Amazon S3](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html).
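For example, the following AWS CLI command creates an S3 gateway endpoint and associates it with a route table used by the DB cluster's subnets. The VPC ID, route table ID, and Region shown are placeholders for your own values.

```
PROMPT> aws ec2 create-vpc-endpoint --vpc-id vpc-0abc1234def567890 \
    --service-name com.amazonaws.us-east-1.s3 \
    --vpc-endpoint-type Gateway \
    --route-table-ids rtb-0abc1234def567890
```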

You might also have to open the ephemeral ports for your network access control lists (ACLs) in the outbound rules for your VPC security group. For more information on ephemeral ports for network ACLs, see [Ephemeral ports](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html#nacl-ephemeral-ports) in the *Amazon Virtual Private Cloud User Guide*.

## Related topics

+ [Integrating Aurora with other AWS services](Aurora.Integrating.md)
+ [Managing an Amazon Aurora DB cluster](CHAP_Aurora.md)

# Loading data into an Amazon Aurora MySQL DB cluster from text files in an Amazon S3 bucket
Loading data from text files in Amazon S3<a name="load_from_s3"></a><a name="load_data"></a><a name="load_xml"></a>

You can use the `LOAD DATA FROM S3` or `LOAD XML FROM S3` statement to load data from files stored in an Amazon S3 bucket. In Aurora MySQL, the files are first stored on local disk and then imported into the database. After the import finishes, the local files are deleted.

**Note**  
Loading data into a table from text files isn't supported for Aurora Serverless v1. It is supported for Aurora Serverless v2.

**Contents**
+ [

## Giving Aurora access to Amazon S3
](#AuroraMySQL.Integrating.LoadFromS3.Authorize)
+ [

## Granting privileges to load data in Amazon Aurora MySQL
](#AuroraMySQL.Integrating.LoadFromS3.Grant)
+ [

## Specifying the path (URI) to an Amazon S3 bucket
](#AuroraMySQL.Integrating.LoadFromS3.URI)
+ [

## LOAD DATA FROM S3
](#AuroraMySQL.Integrating.LoadFromS3.Text)
  + [

### Syntax
](#AuroraMySQL.Integrating.LoadFromS3.Text.Syntax)
  + [

### Parameters
](#AuroraMySQL.Integrating.LoadFromS3.Text.Parameters)
  + [

### Using a manifest to specify data files to load
](#AuroraMySQL.Integrating.LoadFromS3.Manifest)
    + [

#### Verifying loaded files using the aurora_s3_load_history table
](#AuroraMySQL.Integrating.LoadFromS3.Manifest.History)
  + [

### Examples
](#AuroraMySQL.Integrating.LoadFromS3.Text.Examples)
+ [

## LOAD XML FROM S3
](#AuroraMySQL.Integrating.LoadFromS3.XML)
  + [

### Syntax
](#AuroraMySQL.Integrating.LoadFromS3.XML.Syntax)
  + [

### Parameters
](#AuroraMySQL.Integrating.LoadFromS3.XML.Parameters)

## Giving Aurora access to Amazon S3


Before you can load data from an Amazon S3 bucket, you must first give your Aurora MySQL DB cluster permission to access Amazon S3.

**To give Aurora MySQL access to Amazon S3**

1. Create an AWS Identity and Access Management (IAM) policy that provides the bucket and object permissions that allow your Aurora MySQL DB cluster to access Amazon S3. For instructions, see [Creating an IAM policy to access Amazon S3 resources](AuroraMySQL.Integrating.Authorizing.IAM.S3CreatePolicy.md).
**Note**  
In Aurora MySQL version 3.05 and higher, you can load objects that are encrypted using customer-managed AWS KMS keys. To do so, include the `kms:Decrypt` permission in your IAM policy. For more information, see [Creating an IAM policy to access AWS KMS resources](AuroraMySQL.Integrating.Authorizing.IAM.KMSCreatePolicy.md).  
You don't need this permission to load objects that are encrypted using AWS managed keys or Amazon S3 managed keys (SSE-S3).

1. Create an IAM role, and attach the IAM policy you created in [Creating an IAM policy to access Amazon S3 resources](AuroraMySQL.Integrating.Authorizing.IAM.S3CreatePolicy.md) to the new IAM role. For instructions, see [Creating an IAM role to allow Amazon Aurora to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.CreateRole.md).

1. Make sure the DB cluster is using a custom DB cluster parameter group.

   For more information about creating a custom DB cluster parameter group, see [Creating a DB cluster parameter group in Amazon Aurora](USER_WorkingWithParamGroups.CreatingCluster.md).

1. For Aurora MySQL version 2, set either the `aurora_load_from_s3_role` or `aws_default_s3_role` DB cluster parameter to the Amazon Resource Name (ARN) of the new IAM role. If an IAM role isn't specified for `aurora_load_from_s3_role`, Aurora uses the IAM role specified in `aws_default_s3_role`.

   For Aurora MySQL version 3, use `aws_default_s3_role`.

   If the cluster is part of an Aurora global database, set this parameter for each Aurora cluster in the global database. Although only the primary cluster in an Aurora global database can load data, another cluster might be promoted by the failover mechanism and become the primary cluster.

   For more information about DB cluster parameters, see [Amazon Aurora DB cluster and DB instance parameters](USER_WorkingWithDBClusterParamGroups.md#Aurora.Managing.ParameterGroups).

1. To permit database users in an Aurora MySQL DB cluster to access Amazon S3, associate the role that you created in [Creating an IAM role to allow Amazon Aurora to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.CreateRole.md) with the DB cluster. For an Aurora global database, associate the role with each Aurora cluster in the global database. For information about associating an IAM role with a DB cluster, see [Associating an IAM role with an Amazon Aurora MySQL DB cluster](AuroraMySQL.Integrating.Authorizing.IAM.AddRoleToDBCluster.md).

1. Configure your Aurora MySQL DB cluster to allow outbound connections to Amazon S3. For instructions, see [Enabling network communication from Amazon Aurora to other AWS services](AuroraMySQL.Integrating.Authorizing.Network.md). 

   If your DB cluster isn't publicly accessible and in a VPC public subnet, it is private. You can create an S3 gateway endpoint to access your S3 bucket. For more information, see [Gateway endpoints for Amazon S3](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html).

    For an Aurora global database, enable outbound connections for each Aurora cluster in the global database. 

## Granting privileges to load data in Amazon Aurora MySQL
Granting privileges to load data

The database user that issues the `LOAD DATA FROM S3` or `LOAD XML FROM S3` statement must have a specific role or privilege to issue either statement. In Aurora MySQL version 3, you grant the `AWS_LOAD_S3_ACCESS` role. In Aurora MySQL version 2, you grant the `LOAD FROM S3` privilege. The administrative user for a DB cluster is granted the appropriate role or privilege by default. You can grant the privilege to another user by using one of the following statements.

 Use the following statement for Aurora MySQL version 3: 

```
GRANT AWS_LOAD_S3_ACCESS TO 'user'@'domain-or-ip-address'
```

**Tip**  
When you use the role technique in Aurora MySQL version 3, you can also activate the role by using the `SET ROLE role_name` or `SET ROLE ALL` statement. If you aren't familiar with the MySQL 8.0 role system, you can learn more in [Role-based privilege model](AuroraMySQL.Compare-80-v3.md#AuroraMySQL.privilege-model). For more details, see [Using roles](https://dev.mysql.com/doc/refman/8.0/en/roles.html) in the *MySQL Reference Manual*.  
This only applies to the current active session. When you reconnect, you must run the `SET ROLE` statement again to grant privileges. For more information, see [SET ROLE statement](https://dev.mysql.com/doc/refman/8.0/en/set-role.html) in the *MySQL Reference Manual*.  
You can use the `activate_all_roles_on_login` DB cluster parameter to automatically activate all roles when a user connects to a DB instance. When this parameter is set, you generally don't have to call the `SET ROLE` statement explicitly to activate a role. For more information, see [activate_all_roles_on_login](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_activate_all_roles_on_login) in the *MySQL Reference Manual*.  
However, you must call `SET ROLE ALL` explicitly at the beginning of a stored procedure to activate the role, when the stored procedure is called by a different user.
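For example, an Aurora MySQL version 3 session might grant and then activate the role as follows; the user name and host pattern are hypothetical.

```
-- Grant the role, then activate it for the current session
GRANT AWS_LOAD_S3_ACCESS TO 'sample_user'@'%';
SET ROLE AWS_LOAD_S3_ACCESS;

-- Or activate every role granted to the current user
SET ROLE ALL;
```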

Use the following statement for Aurora MySQL version 2:

```
GRANT LOAD FROM S3 ON *.* TO 'user'@'domain-or-ip-address'
```

The `AWS_LOAD_S3_ACCESS` role and `LOAD FROM S3` privilege are specific to Amazon Aurora and are not available for external MySQL databases or RDS for MySQL DB instances. If you have set up replication between an Aurora DB cluster as the replication source and a MySQL database as the replication client, then the `GRANT` statement for the role or privilege causes replication to stop with an error. You can safely skip the error to resume replication. To skip the error on an RDS for MySQL instance, use the [mysql.rds_skip_repl_error](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/mysql_rds_skip_repl_error.html) procedure. To skip the error on an external MySQL database, use the [slave_skip_errors](https://dev.mysql.com/doc/refman/5.7/en/replication-options-replica.html#sysvar_slave_skip_errors) system variable (Aurora MySQL version 2) or the [replica_skip_errors](https://dev.mysql.com/doc/refman/8.0/en/replication-options-replica.html#sysvar_replica_skip_errors) system variable (Aurora MySQL version 3).

**Note**  
The database user must have `INSERT` privileges for the database into which it's loading data.

## Specifying the path (URI) to an Amazon S3 bucket


The syntax for specifying the path (URI) to files stored on an Amazon S3 bucket is as follows.

```
s3-region://amzn-s3-demo-bucket/file-name-or-prefix
```

The path includes the following values:
+ `region` (optional) – The AWS Region that contains the Amazon S3 bucket to load from. If you don't specify a `region` value, then Aurora loads your file from Amazon S3 in the same AWS Region as your DB cluster.
+ `amzn-s3-demo-bucket` – The name of the Amazon S3 bucket that contains the data to load. Object prefixes that identify a virtual folder path are supported.
+ `file-name-or-prefix` – The name of the Amazon S3 text file or XML file, or a prefix that identifies one or more text or XML files to load. You can also specify a manifest file that identifies one or more text files to load. For more information about using a manifest file to load text files from Amazon S3, see [Using a manifest to specify data files to load](#AuroraMySQL.Integrating.LoadFromS3.Manifest).
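For example, the following URIs load a single file from a bucket in the DB cluster's own AWS Region and all files under a prefix in `us-west-2`, respectively; the bucket and object names are placeholders.

```
s3://amzn-s3-demo-bucket/data/sample.txt
s3-us-west-2://amzn-s3-demo-bucket/data-files/
```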

**To copy the URI for files in an S3 bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation pane, choose **Buckets**, and then choose the bucket whose URI you want to copy.

1. Select the prefix or file that you want to load from S3.

1. Choose **Copy S3 URI**.

## LOAD DATA FROM S3


You can use the `LOAD DATA FROM S3` statement to load data from any text file format that is supported by the MySQL [LOAD DATA INFILE](https://dev.mysql.com/doc/refman/8.0/en/load-data.html) statement, such as text data that is comma-delimited. Compressed files are not supported.

**Note**  
Make sure that your Aurora MySQL DB cluster allows outbound connections to S3. For more information, see [Enabling network communication from Amazon Aurora to other AWS services](AuroraMySQL.Integrating.Authorizing.Network.md).

### Syntax


```
LOAD DATA [FROM] S3 [FILE | PREFIX | MANIFEST] 'S3-URI'
    [REPLACE | IGNORE]
    INTO TABLE tbl_name
    [PARTITION (partition_name,...)]
    [CHARACTER SET charset_name]
    [{FIELDS | COLUMNS}
        [TERMINATED BY 'string']
        [[OPTIONALLY] ENCLOSED BY 'char']
        [ESCAPED BY 'char']
    ]
    [LINES
        [STARTING BY 'string']
        [TERMINATED BY 'string']
    ]
    [IGNORE number {LINES | ROWS}]
    [(col_name_or_user_var,...)]
    [SET col_name = expr,...]
```

**Note**  
In Aurora MySQL version 3.05 and higher, the keyword `FROM` is optional.

### Parameters


The `LOAD DATA FROM S3` statement uses the following required and optional parameters. You can find more details about some of these parameters in [LOAD DATA Statement](https://dev.mysql.com/doc/refman/8.0/en/load-data.html) in the MySQL documentation.

**FILE | PREFIX | MANIFEST**  
Identifies whether to load the data from a single file, from all files that match a given prefix, or from all files in a specified manifest. `FILE` is the default.

**S3-URI**  
Specifies the URI for a text or manifest file to load, or an Amazon S3 prefix to use. Specify the URI using the syntax described in [Specifying the path (URI) to an Amazon S3 bucket](#AuroraMySQL.Integrating.LoadFromS3.URI).

**REPLACE | IGNORE**  
Determines what action to take if an input row has the same unique key values as an existing row in the database table.  
+ Specify `REPLACE` if you want the input row to replace the existing row in the table.
+ Specify `IGNORE` if you want to discard the input row.

**INTO TABLE**  
Identifies the name of the database table to load the input rows into.

**PARTITION**  
Requires that all input rows be inserted into the partitions identified by the specified list of comma-separated partition names. If an input row cannot be inserted into one of the specified partitions, then the statement fails and an error is returned.

**CHARACTER SET**  
Identifies the character set of the data in the input file.

**FIELDS | COLUMNS**  
Identifies how the fields or columns in the input file are delimited. Fields are tab-delimited by default.

**LINES**  
Identifies how the lines in the input file are delimited. Lines are delimited by a newline character (`'\n'`) by default.

**IGNORE *number* LINES | ROWS**  
Specifies to ignore a certain number of lines or rows at the start of the input file. For example, you can use `IGNORE 1 LINES` to skip over an initial header line containing column names, or `IGNORE 2 ROWS` to skip over the first two rows of data in the input file. If you also use `PREFIX`, `IGNORE` skips a certain number of lines or rows at the start of the first input file.

**col_name_or_user_var, ...**  
Specifies a comma-separated list of one or more column names or user variables that identify which columns to load by name. The name of a user variable used for this purpose must match the name of a column from the text file, prefixed with @. You can use user variables to store the corresponding field values for later reuse.  
For example, the following statement loads the first column from the input file into the first column of `table1`, and sets the value of the `table_column2` column in `table1` to the input value of the second column divided by 100.  

```
LOAD DATA FROM S3 's3://amzn-s3-demo-bucket/data.txt'
    INTO TABLE table1
    (column1, @var1)
    SET table_column2 = @var1/100;
```

**SET**  
Specifies a comma-separated list of assignment operations that set the values of columns in the table to values not included in the input file.  
For example, the following statement sets the first two columns of `table1` to the values in the first two columns from the input file, and then sets the value of the `column3` in `table1` to the current time stamp.  

```
LOAD DATA FROM S3  's3://amzn-s3-demo-bucket/data.txt'
    INTO TABLE table1
    (column1, column2)
    SET column3 = CURRENT_TIMESTAMP;
```
You can use subqueries in the right side of `SET` assignments. For a subquery that returns a value to be assigned to a column, you can use only a scalar subquery. Also, you cannot use a subquery to select from the table that is being loaded. 

You can't use the `LOCAL` keyword of the `LOAD DATA FROM S3` statement if you're loading data from an Amazon S3 bucket.
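The clauses above must appear in the order shown in the syntax. As an informal illustration, the following Python sketch assembles a statement from its optional parts in that order. The `build_load_from_s3` helper is hypothetical, not an Aurora or AWS API; it only shows how the pieces compose.

```python
# Hypothetical helper that assembles a LOAD DATA FROM S3 statement from parts,
# in the clause order the grammar requires. Illustration only.
def build_load_from_s3(s3_uri, table, source="FILE", duplicate=None,
                       fields_terminated=None, ignore_lines=0, columns=None):
    parts = [f"LOAD DATA FROM S3 {source} '{s3_uri}'"]
    if duplicate:                          # 'REPLACE' or 'IGNORE'
        parts.append(duplicate)
    parts.append(f"INTO TABLE {table}")
    if fields_terminated:
        parts.append(f"FIELDS TERMINATED BY '{fields_terminated}'")
    if ignore_lines:
        parts.append(f"IGNORE {ignore_lines} LINES")
    if columns:
        parts.append("(" + ", ".join(columns) + ")")
    return "\n    ".join(parts) + ";"

stmt = build_load_from_s3(
    "s3://amzn-s3-demo-bucket/data.txt", "table1",
    fields_terminated=",", ignore_lines=1, columns=["column1", "column2"])
print(stmt)
```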

### Using a manifest to specify data files to load
Using a manifest

You can use the `LOAD DATA FROM S3` statement with the `MANIFEST` keyword to specify a manifest file in JSON format that lists the text files to be loaded into a table in your DB cluster.

The following JSON schema describes the format and content of a manifest file.

```
{
    "$schema": "http://json-schema.org/draft-04/schema#",
    "additionalProperties": false,
    "definitions": {},
    "id": "Aurora_LoadFromS3_Manifest",
    "properties": {
        "entries": {
            "additionalItems": false,
            "id": "/properties/entries",
            "items": {
                "additionalProperties": false,
                "id": "/properties/entries/items",
                "properties": {
                    "mandatory": {
                        "default": "false",
                        "id": "/properties/entries/items/properties/mandatory",
                        "type": "boolean"
                    },
                    "url": {
                        "id": "/properties/entries/items/properties/url",
                        "maxLength": 1024,
                        "minLength": 1,
                        "type": "string"
                    }
                },
                "required": [
                    "url"
                ],
                "type": "object"
            },
            "type": "array",
            "uniqueItems": true
        }
    },
    "required": [
        "entries"
    ],
    "type": "object"
}
```

Each `url` in the manifest must specify a URL with the bucket name and full object path for the file, not just a prefix. You can use a manifest to load files from different buckets, different regions, or files that do not share the same prefix. If a region is not specified in the URL, the region of the target Aurora DB cluster is used. The following example shows a manifest file that loads four files from different buckets.

```
{
  "entries": [
    {
      "url":"s3://aurora-bucket/2013-10-04-customerdata", 
      "mandatory":true
    },
    {
      "url":"s3-us-west-2://aurora-bucket-usw2/2013-10-05-customerdata",
      "mandatory":true
    },
    {
      "url":"s3://aurora-bucket/2013-10-04-customerdata", 
      "mandatory":false
    },
    {
      "url":"s3://aurora-bucket/2013-10-05-customerdata"
    }
  ]
}
```

The optional `mandatory` flag specifies whether `LOAD DATA FROM S3` should return an error if the file is not found. The `mandatory` flag defaults to `false`. Regardless of how `mandatory` is set, `LOAD DATA FROM S3` terminates if no files are found.
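To make the manifest rules concrete, here is a minimal Python sketch that builds a manifest matching the JSON schema shown earlier. The bucket names and object keys are placeholders; the `manifest_entry` helper is illustrative, and `mandatory` defaults to false, matching the behavior described above.

```python
import json

# Sketch: build a load manifest matching the schema above.
# Placeholder bucket/key names; "mandatory" defaults to false.
def manifest_entry(url, mandatory=False):
    entry = {"url": url}
    if mandatory:
        entry["mandatory"] = True
    return entry

manifest = {"entries": [
    manifest_entry("s3://aurora-bucket/2013-10-04-customerdata", mandatory=True),
    manifest_entry("s3://aurora-bucket/2013-10-05-customerdata"),
]}

# Every entry must carry a full object path in "url" (required by the schema).
assert all("url" in e for e in manifest["entries"])
print(json.dumps(manifest, indent=2))
```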

Manifest files can have any extension. The following example runs the `LOAD DATA FROM S3` statement with the manifest in the previous example, which is named **customer.manifest**. 

```
LOAD DATA FROM S3 MANIFEST 's3-us-west-2://aurora-bucket/customer.manifest'
    INTO TABLE CUSTOMER
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
    (ID, FIRSTNAME, LASTNAME, EMAIL);
```

After the statement completes, an entry for each successfully loaded file is written to the `aurora_s3_load_history` table. 

#### Verifying loaded files using the aurora_s3_load_history table


Every successful `LOAD DATA FROM S3` statement updates the `aurora_s3_load_history` table in the `mysql` schema with an entry for each file that was loaded.

After you run the `LOAD DATA FROM S3` statement, you can verify which files were loaded by querying the `aurora_s3_load_history` table. To see the files that were loaded from one iteration of the statement, use the `WHERE` clause to filter the records on the Amazon S3 URI for the manifest file used in the statement. If you have used the same manifest file before, filter the results using the `timestamp` field.

```
select * from mysql.aurora_s3_load_history where load_prefix = 'S3_URI';
```

The following table describes the fields in the `aurora_s3_load_history` table.


| Field | Description | 
| --- | --- | 
| `load_prefix` |  The URI that was specified in the load statement. This URI can map to any of the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.LoadFromS3.html)  | 
|  `file_name`  |  The name of a file that was loaded into Aurora from Amazon S3 using the URI identified in the `load_prefix` field.  | 
| `version_number` |  The version number of the file identified by the `file_name` field that was loaded, if the Amazon S3 bucket has a version number.  | 
|  `bytes_loaded`  |  The size of the file loaded, in bytes.  | 
| `load_timestamp`  |  The timestamp when the `LOAD DATA FROM S3` statement completed.  | 

### Examples


The following statement loads data from an Amazon S3 bucket that is in the same region as the Aurora DB cluster. The statement reads the comma-delimited data in the file `customerdata.csv` that is in the *amzn-s3-demo-bucket* Amazon S3 bucket, and then loads the data into the table `store-schema.customer-table`.

```
LOAD DATA FROM S3 's3://amzn-s3-demo-bucket/customerdata.csv' 
    INTO TABLE store-schema.customer-table
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
    (ID, FIRSTNAME, LASTNAME, ADDRESS, EMAIL, PHONE);
```

The following statement loads data from an Amazon S3 bucket that is in a different region from the Aurora DB cluster. The statement reads the comma-delimited data from all files that match the `employee_data` object prefix in the *amzn-s3-demo-bucket* Amazon S3 bucket in the `us-west-2` region, and then loads the data into the `employees` table.

```
LOAD DATA FROM S3 PREFIX 's3-us-west-2://amzn-s3-demo-bucket/employee_data'
    INTO TABLE employees
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
    (ID, FIRSTNAME, LASTNAME, EMAIL, SALARY);
```

The following statement loads data from the files specified in a JSON manifest file named q1_sales.json into the `sales` table. 

```
LOAD DATA FROM S3 MANIFEST 's3-us-west-2://amzn-s3-demo-bucket1/q1_sales.json'
    INTO TABLE sales
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
    (MONTH, STORE, GROSS, NET);
```

## LOAD XML FROM S3


You can use the `LOAD XML FROM S3` statement to load data from XML files stored in an Amazon S3 bucket in one of three different XML formats:
+ Column names as attributes of a `<row>` element. The attribute value identifies the contents of the table field.

  ```
  <row column1="value1" column2="value2" .../>
  ```
+ Column names as child elements of a `<row>` element. The value of the child element identifies the contents of the table field.

  ```
  <row>
    <column1>value1</column1>
    <column2>value2</column2>
  </row>
  ```
+ Column names in the `name` attribute of `<field>` elements in a `<row>` element. The value of the `<field>` element identifies the contents of the table field. 

  ```
  <row>
    <field name='column1'>value1</field>
    <field name='column2'>value2</field>
  </row>
  ```

### Syntax


```
LOAD XML FROM S3 'S3-URI'
    [REPLACE | IGNORE]
    INTO TABLE tbl_name
    [PARTITION (partition_name,...)]
    [CHARACTER SET charset_name]
    [ROWS IDENTIFIED BY '<element-name>']
    [IGNORE number {LINES | ROWS}]
    [(field_name_or_user_var,...)]
    [SET col_name = expr,...]
```

### Parameters


The `LOAD XML FROM S3` statement uses the following required and optional parameters. You can find more details about some of these parameters in [LOAD XML Statement](https://dev.mysql.com/doc/refman/8.0/en/load-xml.html) in the MySQL documentation.

**FILE | PREFIX**  
Identifies whether to load the data from a single file, or from all files that match a given prefix. `FILE` is the default.

**REPLACE | IGNORE**  
Determines what action to take if an input row has the same unique key values as an existing row in the database table.  
+ Specify `REPLACE` if you want the input row to replace the existing row in the table.
+ Specify `IGNORE` if you want to discard the input row. `IGNORE` is the default.

**INTO TABLE**  
Identifies the name of the database table to load the input rows into.

**PARTITION**  
Requires that all input rows be inserted into the partitions identified by the specified list of comma-separated partition names. If an input row cannot be inserted into one of the specified partitions, then the statement fails and an error is returned.

**CHARACTER SET**  
Identifies the character set of the data in the input file.

**ROWS IDENTIFIED BY**  
Specifies the name of the XML element that identifies a row in the input file. The default is `<row>`.

**IGNORE *number* LINES | ROWS**  
Specifies to ignore a certain number of lines or rows at the start of the input file. For example, you can use `IGNORE 1 LINES` to skip over the first line in the text file, or `IGNORE 2 ROWS` to skip over the first two rows of data in the input XML.

**field_name_or_user_var, ...**  
Specifies a comma-separated list of one or more XML element names or user variables that identify which elements to load by name. The name of a user variable used for this purpose must match the name of an element from the XML file, prefixed with @. You can employ user variables to store the corresponding field values for subsequent reuse.  
For example, the following statement loads the first column from the input file into the first column of `table1`, and sets the value of the `table_column2` column in `table1` to the input value of the second column divided by 100.  

```
LOAD XML FROM S3 's3://amzn-s3-demo-bucket/data.xml'
   INTO TABLE table1
   (column1, @var1)
   SET table_column2 = @var1/100;
```

**SET**  
Specifies a comma-separated list of assignment operations that set the values of columns in the table to values not included in the input file.  
For example, the following statement sets the first two columns of `table1` to the values in the first two columns from the input file, and then sets the value of the `column3` in `table1` to the current time stamp.  

```
LOAD XML FROM S3 's3://amzn-s3-demo-bucket/data.xml'
   INTO TABLE table1
   (column1, column2)
   SET column3 = CURRENT_TIMESTAMP;
```
You can use subqueries in the right side of `SET` assignments. For a subquery that returns a value to be assigned to a column, you can use only a scalar subquery. Also, you can't use a subquery to select from the table that's being loaded.

# Saving data from an Amazon Aurora MySQL DB cluster into text files in an Amazon S3 bucket
Saving data into text files in Amazon S3<a name="save_into_s3"></a><a name="select_into_outfile"></a>

You can use the `SELECT INTO OUTFILE S3` statement to query data from an Amazon Aurora MySQL DB cluster and save it into text files stored in an Amazon S3 bucket. In Aurora MySQL, the files are first stored on the local disk, and then exported to S3. After the exports are done, the local files are deleted.

You can encrypt the Amazon S3 bucket using an Amazon S3 managed key (SSE-S3) or AWS KMS key (SSE-KMS: AWS managed key or customer managed key).

The `LOAD DATA FROM S3` statement can use files created by the `SELECT INTO OUTFILE S3` statement to load data into an Aurora DB cluster. For more information, see [Loading data into an Amazon Aurora MySQL DB cluster from text files in an Amazon S3 bucket](AuroraMySQL.Integrating.LoadFromS3.md).

**Note**  
This feature isn't supported for Aurora Serverless v1 DB clusters. It is supported for Aurora Serverless v2 DB clusters.  
You can also save DB cluster data and DB cluster snapshot data to Amazon S3 using the AWS Management Console, AWS CLI, or Amazon RDS API. For more information, see [Exporting DB cluster data to Amazon S3](export-cluster-data.md) and [Exporting DB cluster snapshot data to Amazon S3](aurora-export-snapshot.md).

**Contents**
+ [

## Giving Aurora MySQL access to Amazon S3
](#AuroraMySQL.Integrating.SaveIntoS3.Authorize)
+ [

## Granting privileges to save data in Aurora MySQL
](#AuroraMySQL.Integrating.SaveIntoS3.Grant)
+ [

## Specifying a path to an Amazon S3 bucket
](#AuroraMySQL.Integrating.SaveIntoS3.URI)
+ [

## Creating a manifest to list data files
](#AuroraMySQL.Integrating.SaveIntoS3.Manifest)
+ [

## SELECT INTO OUTFILE S3
](#AuroraMySQL.Integrating.SaveIntoS3.Statement)
  + [

### Syntax
](#AuroraMySQL.Integrating.SaveIntoS3.Statement.Syntax)
  + [

### Parameters
](#AuroraMySQL.Integrating.SaveIntoS3.Statement.Parameters)
  + [

### Considerations
](#AuroraMySQL.Integrating.SaveIntoS3.Considerations)
  + [

### Examples
](#AuroraMySQL.Integrating.SaveIntoS3.Examples)

## Giving Aurora MySQL access to Amazon S3


Before you can save data into an Amazon S3 bucket, you must first give your Aurora MySQL DB cluster permission to access Amazon S3.

**To give Aurora MySQL access to Amazon S3**

1. Create an AWS Identity and Access Management (IAM) policy that provides the bucket and object permissions that allow your Aurora MySQL DB cluster to access Amazon S3. For instructions, see [Creating an IAM policy to access Amazon S3 resources](AuroraMySQL.Integrating.Authorizing.IAM.S3CreatePolicy.md).
**Note**  
In Aurora MySQL version 3.05 and higher, you can encrypt objects using AWS KMS customer managed keys. To do so, include the `kms:GenerateDataKey` permission in your IAM policy. For more information, see [Creating an IAM policy to access AWS KMS resources](AuroraMySQL.Integrating.Authorizing.IAM.KMSCreatePolicy.md).  
You don't need this permission to encrypt objects using AWS managed keys or Amazon S3 managed keys (SSE-S3).

1. Create an IAM role, and attach the IAM policy you created in [Creating an IAM policy to access Amazon S3 resources](AuroraMySQL.Integrating.Authorizing.IAM.S3CreatePolicy.md) to the new IAM role. For instructions, see [Creating an IAM role to allow Amazon Aurora to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.CreateRole.md).

1. For Aurora MySQL version 2, set either the `aurora_select_into_s3_role` or `aws_default_s3_role` DB cluster parameter to the Amazon Resource Name (ARN) of the new IAM role. If an IAM role isn't specified for `aurora_select_into_s3_role`, Aurora uses the IAM role specified in `aws_default_s3_role`.

   For Aurora MySQL version 3, use `aws_default_s3_role`.

   If the cluster is part of an Aurora global database, set this parameter for each Aurora cluster in the global database.

   For more information about DB cluster parameters, see [Amazon Aurora DB cluster and DB instance parameters](USER_WorkingWithDBClusterParamGroups.md#Aurora.Managing.ParameterGroups).

1. To permit database users in an Aurora MySQL DB cluster to access Amazon S3, associate the role that you created in [Creating an IAM role to allow Amazon Aurora to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.CreateRole.md) with the DB cluster.

   For an Aurora global database, associate the role with each Aurora cluster in the global database.

   For information about associating an IAM role with a DB cluster, see [Associating an IAM role with an Amazon Aurora MySQL DB cluster](AuroraMySQL.Integrating.Authorizing.IAM.AddRoleToDBCluster.md).

1. Configure your Aurora MySQL DB cluster to allow outbound connections to Amazon S3. For instructions, see [Enabling network communication from Amazon Aurora to other AWS services](AuroraMySQL.Integrating.Authorizing.Network.md). 

    For an Aurora global database, enable outbound connections for each Aurora cluster in the global database. 

## Granting privileges to save data in Aurora MySQL


The database user that issues the `SELECT INTO OUTFILE S3` statement must have a specific role or privilege. In Aurora MySQL version 3, you grant the `AWS_SELECT_S3_ACCESS` role. In Aurora MySQL version 2, you grant the `SELECT INTO S3` privilege. The administrative user for a DB cluster is granted the appropriate role or privilege by default. You can grant the privilege to another user by using one of the following statements.

 Use the following statement for Aurora MySQL version 3: 

```
GRANT AWS_SELECT_S3_ACCESS TO 'user'@'domain-or-ip-address'
```

**Tip**  
When you use the role technique in Aurora MySQL version 3, you can also activate the role by using the `SET ROLE role_name` or `SET ROLE ALL` statement. If you aren't familiar with the MySQL 8.0 role system, you can learn more in [Role-based privilege model](AuroraMySQL.Compare-80-v3.md#AuroraMySQL.privilege-model). For more details, see [Using roles](https://dev.mysql.com/doc/refman/8.0/en/roles.html) in the *MySQL Reference Manual*.  
This only applies to the current active session. When you reconnect, you must run the `SET ROLE` statement again to grant privileges. For more information, see [SET ROLE statement](https://dev.mysql.com/doc/refman/8.0/en/set-role.html) in the *MySQL Reference Manual*.  
You can use the `activate_all_roles_on_login` DB cluster parameter to automatically activate all roles when a user connects to a DB instance. When this parameter is set, you generally don't have to call the `SET ROLE` statement explicitly to activate a role. For more information, see [activate_all_roles_on_login](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_activate_all_roles_on_login) in the *MySQL Reference Manual*.  
However, you must call `SET ROLE ALL` explicitly at the beginning of a stored procedure to activate the role, when the stored procedure is called by a different user.

Use the following statement for Aurora MySQL version 2:

```
GRANT SELECT INTO S3 ON *.* TO 'user'@'domain-or-ip-address'
```

The `AWS_SELECT_S3_ACCESS` role and `SELECT INTO S3` privilege are specific to Amazon Aurora MySQL and are not available for MySQL databases or RDS for MySQL DB instances. If you have set up replication between an Aurora MySQL DB cluster as the replication source and a MySQL database as the replication client, then the `GRANT` statement for the role or privilege causes replication to stop with an error. You can safely skip the error to resume replication. To skip the error on an RDS for MySQL DB instance, use the [mysql_rds_skip_repl_error](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/mysql_rds_skip_repl_error.html) procedure. To skip the error on an external MySQL database, use the [slave_skip_errors](https://dev.mysql.com/doc/refman/5.7/en/replication-options-replica.html#sysvar_slave_skip_errors) system variable (Aurora MySQL version 2) or [replica_skip_errors](https://dev.mysql.com/doc/refman/8.0/en/replication-options-replica.html#sysvar_replica_skip_errors) system variable (Aurora MySQL version 3).

## Specifying a path to an Amazon S3 bucket


The syntax for specifying a path to store the data and manifest files on an Amazon S3 bucket is similar to that used in the `LOAD DATA FROM S3 PREFIX` statement, as shown following.

```
s3-region://bucket-name/file-prefix
```

The path includes the following values:
+ `region` (optional) – The AWS Region that contains the Amazon S3 bucket to save the data into. If you don't specify a `region` value, then Aurora saves your files into Amazon S3 in the same region as your DB cluster.
+ `bucket-name` – The name of the Amazon S3 bucket to save the data into. Object prefixes that identify a virtual folder path are supported.
+ `file-prefix` – The Amazon S3 object prefix that identifies the files to be saved in Amazon S3. 

The data files created by the `SELECT INTO OUTFILE S3` statement use the following path, in which *00000* represents a 5-digit, zero-based integer number.

```
s3-region://bucket-name/file-prefix.part_00000
```

For example, suppose that a `SELECT INTO OUTFILE S3` statement specifies `s3-us-west-2://bucket/prefix` as the path in which to store data files and creates three data files. The specified Amazon S3 bucket contains the following data files.
+ s3-us-west-2://bucket/prefix.part_00000
+ s3-us-west-2://bucket/prefix.part_00001
+ s3-us-west-2://bucket/prefix.part_00002
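The naming scheme can be reproduced mechanically. A small Python sketch (illustrative only; `part_file_keys` is not an AWS API) generates the expected object names from a prefix and file count:

```python
# Sketch of the naming scheme described above: each data file is the prefix
# plus ".part_" and a 5-digit, zero-based sequence number.
def part_file_keys(file_prefix, file_count):
    return [f"{file_prefix}.part_{i:05d}" for i in range(file_count)]

keys = part_file_keys("s3-us-west-2://bucket/prefix", 3)
assert keys == [
    "s3-us-west-2://bucket/prefix.part_00000",
    "s3-us-west-2://bucket/prefix.part_00001",
    "s3-us-west-2://bucket/prefix.part_00002",
]
```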

## Creating a manifest to list data files
Creating a manifest

You can use the `SELECT INTO OUTFILE S3` statement with the `MANIFEST ON` option to create a manifest file in JSON format that lists the text files created by the statement. The `LOAD DATA FROM S3` statement can use the manifest file to load the data files back into an Aurora MySQL DB cluster. For more information about using a manifest to load data files from Amazon S3 into an Aurora MySQL DB cluster, see [Using a manifest to specify data files to load](AuroraMySQL.Integrating.LoadFromS3.md#AuroraMySQL.Integrating.LoadFromS3.Manifest). 

The data files included in the manifest created by the `SELECT INTO OUTFILE S3` statement are listed in the order that they're created by the statement. For example, suppose that a `SELECT INTO OUTFILE S3` statement specifies `s3-us-west-2://bucket/prefix` as the path in which to store data files, and creates three data files and a manifest file. The specified Amazon S3 bucket contains a manifest file named `s3-us-west-2://bucket/prefix.manifest` that contains the following information.

```
{
  "entries": [
    {
      "url":"s3-us-west-2://bucket/prefix.part_00000"
    },
    {
      "url":"s3-us-west-2://bucket/prefix.part_00001"
    },
    {
      "url":"s3-us-west-2://bucket/prefix.part_00002"
    }
  ]
}
```

## SELECT INTO OUTFILE S3


You can use the `SELECT INTO OUTFILE S3` statement to query data from a DB cluster and save it directly into delimited text files stored in an Amazon S3 bucket.

Compressed files aren't supported. Encrypted files are supported starting in Aurora MySQL version 2.09.0.

### Syntax


```
SELECT
    [ALL | DISTINCT | DISTINCTROW ]
        [HIGH_PRIORITY]
        [STRAIGHT_JOIN]
        [SQL_SMALL_RESULT] [SQL_BIG_RESULT] [SQL_BUFFER_RESULT]
        [SQL_CACHE | SQL_NO_CACHE] [SQL_CALC_FOUND_ROWS]
    select_expr [, select_expr ...]
    [FROM table_references
        [PARTITION partition_list]
    [WHERE where_condition]
    [GROUP BY {col_name | expr | position}
        [ASC | DESC], ... [WITH ROLLUP]]
    [HAVING where_condition]
    [ORDER BY {col_name | expr | position}
         [ASC | DESC], ...]
    [LIMIT {[offset,] row_count | row_count OFFSET offset}]
INTO OUTFILE S3 's3_uri'
[CHARACTER SET charset_name]
    [export_options]
    [MANIFEST {ON | OFF}]
    [OVERWRITE {ON | OFF}]
    [ENCRYPTION {ON | OFF | SSE_S3 | SSE_KMS ['cmk_id']}]

export_options:
    [FORMAT {CSV|TEXT} [HEADER]]
    [{FIELDS | COLUMNS}
        [TERMINATED BY 'string']
        [[OPTIONALLY] ENCLOSED BY 'char']
        [ESCAPED BY 'char']
    ]
    [LINES
        [STARTING BY 'string']
        [TERMINATED BY 'string']
]
```

### Parameters


The `SELECT INTO OUTFILE S3` statement uses the following required and optional parameters that are specific to Aurora.

**s3-uri**  
Specifies the URI for an Amazon S3 prefix to use. Use the syntax described in [Specifying a path to an Amazon S3 bucket](#AuroraMySQL.Integrating.SaveIntoS3.URI).

**FORMAT {CSV|TEXT} [HEADER]**  
Optionally saves the data in CSV format.  
The `TEXT` option is the default and produces the existing MySQL export format.  
The `CSV` option produces comma-separated data values. The CSV format follows the specification in [RFC-4180](https://tools.ietf.org/html/rfc4180). If you specify the optional keyword `HEADER`, the output file contains one header line. The labels in the header line correspond to the column names from the `SELECT` statement. You can use the CSV files for training data models for use with AWS ML services. For more information about using exported Aurora data with AWS ML services, see [Exporting data to Amazon S3 for SageMaker AI model training (Advanced)](mysql-ml.md#exporting-data-to-s3-for-model-training).

**MANIFEST {ON | OFF}**  
Indicates whether a manifest file is created in Amazon S3. The manifest file is a JavaScript Object Notation (JSON) file that can be used to load data into an Aurora DB cluster with the `LOAD DATA FROM S3 MANIFEST` statement. For more information about `LOAD DATA FROM S3 MANIFEST`, see [Loading data into an Amazon Aurora MySQL DB cluster from text files in an Amazon S3 bucket](AuroraMySQL.Integrating.LoadFromS3.md).  
If `MANIFEST ON` is specified in the query, the manifest file is created in Amazon S3 after all data files have been created and uploaded. The manifest file is created using the following path:  

```
s3-region://bucket-name/file-prefix.manifest
```
For more information about the format of the manifest file's contents, see [Creating a manifest to list data files](#AuroraMySQL.Integrating.SaveIntoS3.Manifest).

**OVERWRITE {ON | OFF}**  
Indicates whether existing files in the specified Amazon S3 bucket are overwritten. If `OVERWRITE ON` is specified, existing files that match the file prefix in the URI specified in `s3-uri` are overwritten. Otherwise, an error occurs.

**ENCRYPTION {ON | OFF | SSE_S3 | SSE_KMS ['*cmk_id*']}**  
Indicates whether to use server-side encryption with Amazon S3 managed keys (SSE-S3) or AWS KMS keys (SSE-KMS, including AWS managed keys and customer managed keys). The `SSE_S3` and `SSE_KMS` settings are available in Aurora MySQL version 3.05 and higher.  
You can also use the `aurora_select_into_s3_encryption_default` session variable instead of the `ENCRYPTION` clause, as shown in the following example. Use either the SQL clause or the session variable, but not both.  

```
set session aurora_select_into_s3_encryption_default={ON | OFF | SSE_S3 | SSE_KMS};
```
When you set `aurora_select_into_s3_encryption_default` to the following value:  
+ `OFF` – The default encryption policy of the S3 bucket is followed. The default value of `aurora_select_into_s3_encryption_default` is `OFF`.
+ `ON` or `SSE_S3` – The S3 object is encrypted using Amazon S3 managed keys (SSE-S3).
+ `SSE_KMS` – The S3 object is encrypted using an AWS KMS key.

  In this case, you also include the session variable `aurora_s3_default_cmk_id`, for example:

  ```
  set session aurora_select_into_s3_encryption_default={SSE_KMS};
  set session aurora_s3_default_cmk_id={NULL | 'cmk_id'};
  ```
  + When `aurora_s3_default_cmk_id` is `NULL`, the S3 object is encrypted using an AWS managed key.
  + When `aurora_s3_default_cmk_id` is a nonempty string `cmk_id`, the S3 object is encrypted using a customer managed key.

    The value of `cmk_id` can't be an empty string.
When you use the `SELECT INTO OUTFILE S3` command, Aurora determines the encryption as follows:  
+ If the `ENCRYPTION` clause is present in the SQL command, Aurora relies only on the value of `ENCRYPTION`, and doesn't use a session variable.
+ If the `ENCRYPTION` clause isn't present, Aurora relies on the value of the session variable.
For more information, see [Using server-side encryption with Amazon S3 managed keys (SSE-S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingServerSideEncryption.html) and [Using server-side encryption with AWS KMS keys (SSE-KMS)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingKMSEncryption.html) in the *Amazon Simple Storage Service User Guide*.

You can find more details about other parameters in [SELECT statement](https://dev.mysql.com/doc/refman/8.0/en/select.html) and [LOAD DATA statement](https://dev.mysql.com/doc/refman/8.0/en/load-data.html), in the MySQL documentation.
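The precedence rules for the `ENCRYPTION` clause and the session variables can be summarized in a short Python sketch. The `effective_encryption` function and its return labels are illustrative, not an Aurora API, and the clause's optional `cmk_id` is simplified into a single parameter here:

```python
# Sketch of the encryption precedence described above: an explicit ENCRYPTION
# clause wins; otherwise the aurora_select_into_s3_encryption_default session
# variable applies. Illustration only, not an Aurora API.
def effective_encryption(encryption_clause=None,
                         session_default="OFF", cmk_id=None):
    setting = encryption_clause if encryption_clause is not None else session_default
    if setting in ("ON", "SSE_S3"):
        return "SSE-S3"                      # Amazon S3 managed keys
    if setting == "SSE_KMS":
        # NULL cmk_id -> AWS managed key; nonempty string -> customer managed key
        return "SSE-KMS:" + (cmk_id or "aws-managed")
    return "bucket-default"                  # OFF: follow the bucket's policy

assert effective_encryption() == "bucket-default"
assert effective_encryption(encryption_clause="SSE_S3") == "SSE-S3"
assert effective_encryption(session_default="SSE_KMS",
                            cmk_id="my-key-id") == "SSE-KMS:my-key-id"
# The SQL clause overrides the session variable:
assert effective_encryption(encryption_clause="OFF",
                            session_default="SSE_KMS") == "bucket-default"
```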

### Considerations


The number of files written to the Amazon S3 bucket depends on the amount of data selected by the `SELECT INTO OUTFILE S3` statement and the file size threshold for Aurora MySQL. The default file size threshold is 6 gigabytes (GB). If the data selected by the statement is less than the file size threshold, a single file is created; otherwise, multiple files are created. Other considerations for files created by this statement include the following:
+ Aurora MySQL guarantees that rows in data files are not split across file boundaries. For multiple files, the size of every data file except the last is typically close to the file size threshold. However, occasionally keeping a file under the size threshold would require splitting a row across two data files. In this case, Aurora MySQL creates a data file that keeps the row intact, but that might be larger than the file size threshold. 
+ Because each `SELECT` statement in Aurora MySQL runs as an atomic transaction, a `SELECT INTO OUTFILE S3` statement that selects a large data set might run for some time. If the statement fails for any reason, you might need to start over and issue the statement again. If the statement fails, however, files already uploaded to Amazon S3 remain in the specified Amazon S3 bucket. You can use another statement to upload the remaining data instead of starting over again.
+ If the amount of data to be selected is large (more than 25 GB), we recommend that you use multiple `SELECT INTO OUTFILE S3` statements to save the data to Amazon S3. Each statement should select a different portion of the data to be saved, and also specify a different `file_prefix` in the `s3-uri` parameter to use when saving the data files. Partitioning the data to be selected with multiple statements makes it easier to recover from an error in one statement. If an error occurs for one statement, only a portion of data needs to be re-selected and uploaded to Amazon S3. Using multiple statements also helps to avoid a single long-running transaction, which can improve performance.
+ If multiple `SELECT INTO OUTFILE S3` statements that use the same `file_prefix` in the `s3-uri` parameter run in parallel to select data into Amazon S3, the behavior is undefined.
+ Metadata, such as table schema or file metadata, is not uploaded by Aurora MySQL to Amazon S3.
+ In some cases, you might re-run a `SELECT INTO OUTFILE S3` query, such as to recover from a failure. In these cases, you must either remove any existing data files in the Amazon S3 bucket with the same file prefix specified in `s3-uri`, or include `OVERWRITE ON` in the `SELECT INTO OUTFILE S3` query.
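
The multiple-statement approach described above can be sketched as follows, assuming a hypothetical `orders` table with a numeric `id` column. Each statement selects a different range of rows and writes to its own file prefix, so a failure in one statement requires re-running only that range.

```
SELECT * FROM orders WHERE id < 1000000
    INTO OUTFILE S3 's3://aurora-select-into-s3-pdx/orders_part_1'
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
    MANIFEST ON;

SELECT * FROM orders WHERE id >= 1000000
    INTO OUTFILE S3 's3://aurora-select-into-s3-pdx/orders_part_2'
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
    MANIFEST ON;
```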

The `SELECT INTO OUTFILE S3` statement returns a typical MySQL error number and response on success or failure. If you don't have access to the MySQL error number and response, the easiest way to determine when it's done is by specifying `MANIFEST ON` in the statement. The manifest file is the last file written by the statement. In other words, if you have a manifest file, the statement has completed.

Currently, there's no way to directly monitor the progress of the `SELECT INTO OUTFILE S3` statement while it runs. However, suppose that you're writing a large amount of data from Aurora MySQL to Amazon S3 using this statement, and you know the size of the data selected by the statement. In this case, you can estimate progress by monitoring the creation of data files in Amazon S3.

To do so, you can use the fact that a data file is created in the specified Amazon S3 bucket for about every 6 GB of data selected by the statement. Divide the size of the data selected by 6 GB to get the estimated number of data files to create. You can then estimate the progress of the statement by monitoring the number of files uploaded to Amazon S3 while the statement runs.
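
As an illustration, the following Python sketch performs the estimate described above. The 6 GB threshold comes from this section; the helper functions themselves are purely illustrative.

```
import math

# Default SELECT INTO OUTFILE S3 file size threshold, per this section.
FILE_SIZE_THRESHOLD_BYTES = 6 * 1024**3  # 6 GB

def estimated_file_count(selected_bytes: int) -> int:
    """Estimate how many data files the statement will create."""
    # One file per ~6 GB of selected data, with at least one file.
    return max(1, math.ceil(selected_bytes / FILE_SIZE_THRESHOLD_BYTES))

def progress_percent(files_uploaded: int, selected_bytes: int) -> float:
    """Rough progress estimate from the number of files seen in the bucket."""
    total = estimated_file_count(selected_bytes)
    return min(100.0, 100.0 * files_uploaded / total)

# Example: 25 GB of selected data -> about 5 data files.
print(estimated_file_count(25 * 1024**3))   # 5
print(progress_percent(2, 25 * 1024**3))    # 40.0
```

You could feed `progress_percent` the number of objects with the statement's file prefix, counted by listing the bucket while the statement runs.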

### Examples


The following statement selects all of the data in the `employees` table and saves the data into an Amazon S3 bucket that is in a different region from the Aurora MySQL DB cluster. The statement creates data files in which each field is terminated by a comma (`,`) character and each row is terminated by a newline (`\n`) character. The statement returns an error if files that match the `sample_employee_data` file prefix exist in the specified Amazon S3 bucket.

```
SELECT * FROM employees INTO OUTFILE S3 's3-us-west-2://aurora-select-into-s3-pdx/sample_employee_data'
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n';
```

The following statement selects all of the data in the `employees` table and saves the data into an Amazon S3 bucket that is in the same region as the Aurora MySQL DB cluster. The statement creates data files in which each field is terminated by a comma (`,`) character and each row is terminated by a newline (`\n`) character, and also a manifest file. The statement returns an error if files that match the `sample_employee_data` file prefix exist in the specified Amazon S3 bucket.

```
SELECT * FROM employees INTO OUTFILE S3 's3://aurora-select-into-s3-pdx/sample_employee_data'
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
    MANIFEST ON;
```

The following statement selects all of the data in the `employees` table and saves the data into an Amazon S3 bucket that is in a different region from the Aurora DB cluster. The statement creates data files in which each field is terminated by a comma (`,`) character and each row is terminated by a newline (`\n`) character. The statement overwrites any existing files that match the `sample_employee_data` file prefix in the specified Amazon S3 bucket.

```
SELECT * FROM employees INTO OUTFILE S3 's3-us-west-2://aurora-select-into-s3-pdx/sample_employee_data'
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
    OVERWRITE ON;
```

The following statement selects all of the data in the `employees` table and saves the data into an Amazon S3 bucket that is in the same region as the Aurora MySQL DB cluster. The statement creates data files in which each field is terminated by a comma (`,`) character and each row is terminated by a newline (`\n`) character, and also a manifest file. The statement overwrites any existing files that match the `sample_employee_data` file prefix in the specified Amazon S3 bucket.

```
SELECT * FROM employees INTO OUTFILE S3 's3://aurora-select-into-s3-pdx/sample_employee_data'
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
    MANIFEST ON
    OVERWRITE ON;
```

# Invoking a Lambda function from an Amazon Aurora MySQL DB cluster
Invoking a Lambda function from Aurora MySQL<a name="lambda"></a>

You can invoke an AWS Lambda function from an Amazon Aurora MySQL-Compatible Edition DB cluster with the native function `lambda_sync` or `lambda_async`. Before invoking a Lambda function from Aurora MySQL, the Aurora DB cluster must have access to Lambda. For details about granting access to Aurora MySQL, see [Giving Aurora access to Lambda](AuroraMySQL.Integrating.LambdaAccess.md). For information about the `lambda_sync` and `lambda_async` native functions, see [Invoking a Lambda function with an Aurora MySQL native function](AuroraMySQL.Integrating.NativeLambda.md). 

 You can also call an AWS Lambda function by using a stored procedure. However, using a stored procedure is deprecated. We strongly recommend using an Aurora MySQL native function if you are using one of the following Aurora MySQL versions: 
+ Aurora MySQL version 2, for MySQL 5.7-compatible clusters.
+ Aurora MySQL version 3.01 and higher, for MySQL 8.0-compatible clusters. The stored procedure isn't available in Aurora MySQL version 3.

For information about giving Aurora access to Lambda and invoking a Lambda function, see the following topics.

**Topics**
+ [

# Giving Aurora access to Lambda
](AuroraMySQL.Integrating.LambdaAccess.md)
+ [

# Invoking a Lambda function with an Aurora MySQL native function
](AuroraMySQL.Integrating.NativeLambda.md)
+ [

# Invoking a Lambda function with an Aurora MySQL stored procedure (deprecated)
](AuroraMySQL.Integrating.ProcLambda.md)

# Giving Aurora access to Lambda


Before you can invoke Lambda functions from an Aurora MySQL DB cluster, make sure to first give your cluster permission to access Lambda.

**To give Aurora MySQL access to Lambda**

1. Create an AWS Identity and Access Management (IAM) policy that provides the permissions that allow your Aurora MySQL DB cluster to invoke Lambda functions. For instructions, see [Creating an IAM policy to access AWS Lambda resources](AuroraMySQL.Integrating.Authorizing.IAM.LambdaCreatePolicy.md).

1. Create an IAM role, and attach the IAM policy you created in [Creating an IAM policy to access AWS Lambda resources](AuroraMySQL.Integrating.Authorizing.IAM.LambdaCreatePolicy.md) to the new IAM role. For instructions, see [Creating an IAM role to allow Amazon Aurora to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.CreateRole.md).

1. Set the `aws_default_lambda_role` DB cluster parameter to the Amazon Resource Name (ARN) of the new IAM role.

   If the cluster is part of an Aurora global database, apply the same setting for each Aurora cluster in the global database. 

   For more information about DB cluster parameters, see [Amazon Aurora DB cluster and DB instance parameters](USER_WorkingWithDBClusterParamGroups.md#Aurora.Managing.ParameterGroups).

1. To permit database users in an Aurora MySQL DB cluster to invoke Lambda functions, associate the role that you created in [Creating an IAM role to allow Amazon Aurora to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.CreateRole.md) with the DB cluster. For information about associating an IAM role with a DB cluster, see [Associating an IAM role with an Amazon Aurora MySQL DB cluster](AuroraMySQL.Integrating.Authorizing.IAM.AddRoleToDBCluster.md).

    If the cluster is part of an Aurora global database, associate the role with each Aurora cluster in the global database. 

1. Configure your Aurora MySQL DB cluster to allow outbound connections to Lambda. For instructions, see [Enabling network communication from Amazon Aurora to other AWS services](AuroraMySQL.Integrating.Authorizing.Network.md).

    If the cluster is part of an Aurora global database, enable outbound connections for each Aurora cluster in the global database. 
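
If you manage the cluster with the AWS CLI, steps 3 and 4 above can be sketched as follows. The cluster identifier, parameter group name, and role ARN are placeholders for your own values.

```
# Step 3: point the aws_default_lambda_role cluster parameter at the IAM role.
aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-cluster-params \
    --parameters "ParameterName=aws_default_lambda_role,ParameterValue=arn:aws:iam::123456789012:role/AuroraLambdaRole,ApplyMethod=pending-reboot"

# Step 4: associate the same role with the DB cluster.
aws rds add-role-to-db-cluster \
    --db-cluster-identifier mydbcluster \
    --role-arn arn:aws:iam::123456789012:role/AuroraLambdaRole
```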

# Invoking a Lambda function with an Aurora MySQL native function
Invoking a Lambda function with a native function

**Note**  
You can call the native functions `lambda_sync` and `lambda_async` when you use Aurora MySQL version 2, or Aurora MySQL version 3.01 and higher. For more information about Aurora MySQL versions, see [Database engine updates for Amazon Aurora MySQL](AuroraMySQL.Updates.md).

You can invoke an AWS Lambda function from an Aurora MySQL DB cluster by calling the native functions `lambda_sync` and `lambda_async`. This approach can be useful when you want to integrate your database running on Aurora MySQL with other AWS services. For example, you might want to send a notification using Amazon Simple Notification Service (Amazon SNS) whenever a row is inserted into a specific table in your database.

**Contents**
+ [

## Working with native functions to invoke a Lambda function
](#AuroraMySQL.Integrating.NativeLambda.lambda_functions)
  + [

### Granting the role in Aurora MySQL version 3
](#AuroraMySQL.Integrating.NativeLambda.lambda_functions.v3)
  + [

### Granting the privilege in Aurora MySQL version 2
](#AuroraMySQL.Integrating.NativeLambda.lambda_functions.v2)
  + [

### Syntax for the lambda\_sync function
](#AuroraMySQL.Integrating.NativeLambda.lambda_functions.Sync.Syntax)
  + [

### Parameters for the lambda\_sync function
](#AuroraMySQL.Integrating.NativeLambda.lambda_functions.Sync.Parameters)
  + [

### Example for the lambda\_sync function
](#AuroraMySQL.Integrating.NativeLambda.lambda_functions.Sync.Example)
  + [

### Syntax for the lambda\_async function
](#AuroraMySQL.Integrating.NativeLambda.lambda_functions.Async.Syntax)
  + [

### Parameters for the lambda\_async function
](#AuroraMySQL.Integrating.NativeLambda.lambda_functions.Async.Parameters)
  + [

### Example for the lambda\_async function
](#AuroraMySQL.Integrating.NativeLambda.lambda_functions.Async.Example)
  + [

### Invoking a Lambda function within a trigger
](#AuroraMySQL.Integrating.NativeLambda.lambda_functions.trigger)

## Working with native functions to invoke a Lambda function
Working with native functions

The `lambda_sync` and `lambda_async` functions are built-in, native functions that invoke a Lambda function synchronously or asynchronously. When you must know the result of the Lambda function before moving on to another action, use the synchronous function `lambda_sync`. When you don't need to know the result of the Lambda function before moving on to another action, use the asynchronous function `lambda_async`.

### Granting the role in Aurora MySQL version 3


In Aurora MySQL version 3, the user invoking a native function must be granted the `AWS_LAMBDA_ACCESS` role. To grant this role to a user, connect to the DB instance as the administrative user, and run the following statement.

```
GRANT AWS_LAMBDA_ACCESS TO user@domain-or-ip-address
```

You can revoke this role by running the following statement.

```
REVOKE AWS_LAMBDA_ACCESS FROM user@domain-or-ip-address
```

**Tip**  
When you use the role technique in Aurora MySQL version 3, you can also activate the role by using the `SET ROLE role_name` or `SET ROLE ALL` statement. If you aren't familiar with the MySQL 8.0 role system, you can learn more in [Role-based privilege model](AuroraMySQL.Compare-80-v3.md#AuroraMySQL.privilege-model). For more details, see [Using roles](https://dev.mysql.com/doc/refman/8.0/en/roles.html) in the *MySQL Reference Manual*.  
This only applies to the current active session. When you reconnect, you must run the `SET ROLE` statement again to grant privileges. For more information, see [SET ROLE statement](https://dev.mysql.com/doc/refman/8.0/en/set-role.html) in the *MySQL Reference Manual*.  
You can use the `activate_all_roles_on_login` DB cluster parameter to automatically activate all roles when a user connects to a DB instance. When this parameter is set, you generally don't have to call the `SET ROLE` statement explicitly to activate a role. For more information, see [activate\_all\_roles\_on\_login](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_activate_all_roles_on_login) in the *MySQL Reference Manual*.  
However, you must call `SET ROLE ALL` explicitly at the beginning of a stored procedure to activate the role, when the stored procedure is called by a different user.
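
As a sketch of that last point, a stored procedure that invokes Lambda might activate roles explicitly at the start. The procedure name and payload here are hypothetical.

```
DELIMITER ;;
CREATE PROCEDURE ping_lambda() LANGUAGE SQL
BEGIN
  -- Activate granted roles, including AWS_LAMBDA_ACCESS, for this call.
  SET ROLE ALL;
  SELECT lambda_sync(
      'arn:aws:lambda:us-east-1:123456789012:function:BasicTestLambda',
      '{"operation": "ping"}') INTO @result;
END
;;
DELIMITER ;
```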

If you get an error such as the following when you try to invoke a Lambda function, then run a `SET ROLE` statement.

```
SQL Error [1227] [42000]: Access denied; you need (at least one of) the Invoke Lambda privilege(s) for this operation
```

Make sure that you're granting the role to the correct user, as shown in the `mysql.user` table entries. There might be multiple users with the same name on different hosts. Depending on which application or host invokes the `lambda_sync` function, MySQL selects the user with the best match according to the `host` column entries.
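
To check which entries exist, you can query the grant tables. This is a minimal sketch; `myuser` is a placeholder for your database user.

```
SELECT user, host FROM mysql.user WHERE user = 'myuser';
SHOW GRANTS FOR 'myuser'@'%';
```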

### Granting the privilege in Aurora MySQL version 2


In Aurora MySQL version 2, the user invoking a native function must be granted the `INVOKE LAMBDA` privilege. To grant this privilege to a user, connect to the DB instance as the administrative user, and run the following statement.

```
GRANT INVOKE LAMBDA ON *.* TO user@domain-or-ip-address
```

You can revoke this privilege by running the following statement.

```
REVOKE INVOKE LAMBDA ON *.* FROM user@domain-or-ip-address
```

### Syntax for the lambda\_sync function


You invoke the `lambda_sync` function synchronously with the `RequestResponse` invocation type. The function returns the result of the Lambda invocation in a JSON payload. The function has the following syntax.

```
lambda_sync (
  lambda_function_ARN,
  JSON_payload
)
```

### Parameters for the lambda\_sync function


The `lambda_sync` function has the following parameters.

*lambda\_function\_ARN*  
The Amazon Resource Name (ARN) of the Lambda function to invoke.

*JSON\_payload*  
The payload for the invoked Lambda function, in JSON format.

**Note**  
Aurora MySQL version 3 supports the JSON parsing functions from MySQL 8.0. However, Aurora MySQL version 2 doesn't include those functions. JSON parsing isn't required when a Lambda function returns an atomic value, such as a number or a string.

### Example for the lambda\_sync function


The following query based on `lambda_sync` invokes the Lambda function `BasicTestLambda` synchronously using the function ARN. The payload for the function is `{"operation": "ping"}`.

```
SELECT lambda_sync(
    'arn:aws:lambda:us-east-1:123456789012:function:BasicTestLambda',
    '{"operation": "ping"}');
```
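
In Aurora MySQL version 3, you can combine the call with the MySQL 8.0 JSON functions. The following hedged sketch assumes the function's response payload contains a `status` field; adapt the JSON path to your function's actual return value.

```
SELECT JSON_EXTRACT(
    lambda_sync(
        'arn:aws:lambda:us-east-1:123456789012:function:BasicTestLambda',
        '{"operation": "ping"}'),
    '$.status') AS status;
```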

### Syntax for the lambda\_async function


You invoke the `lambda_async` function asynchronously with the `Event` invocation type. The function returns the result of the Lambda invocation in a JSON payload. The function has the following syntax.

```
lambda_async (
  lambda_function_ARN,
  JSON_payload
)
```

### Parameters for the lambda\_async function


The `lambda_async` function has the following parameters.

*lambda\_function\_ARN*  
The Amazon Resource Name (ARN) of the Lambda function to invoke.

*JSON\_payload*  
The payload for the invoked Lambda function, in JSON format.

**Note**  
Aurora MySQL version 3 supports the JSON parsing functions from MySQL 8.0. However, Aurora MySQL version 2 doesn't include those functions. JSON parsing isn't required when a Lambda function returns an atomic value, such as a number or a string.

### Example for the lambda\_async function


The following query based on `lambda_async` invokes the Lambda function `BasicTestLambda` asynchronously using the function ARN. The payload for the function is `{"operation": "ping"}`.

```
SELECT lambda_async(
    'arn:aws:lambda:us-east-1:123456789012:function:BasicTestLambda',
    '{"operation": "ping"}');
```

### Invoking a Lambda function within a trigger


You can use triggers to call Lambda on data-modifying statements. The following example uses the `lambda_async` native function and stores the result in a variable.

```
mysql>SET @result=0;
mysql>DELIMITER //
mysql>CREATE TRIGGER myFirstTrigger
      AFTER INSERT
          ON Test_trigger FOR EACH ROW
      BEGIN
      SELECT lambda_async(
          'arn:aws:lambda:us-east-1:123456789012:function:BasicTestLambda',
          '{"operation": "ping"}')
          INTO @result;
      END; //
mysql>DELIMITER ;
```

**Note**  
Triggers aren't run once per SQL statement, but once per row modified, one row at a time. When a trigger runs, the process is synchronous. The data-modifying statement only returns when the trigger completes.  
Be careful when invoking an AWS Lambda function from triggers on tables that experience high write traffic. `INSERT`, `UPDATE`, and `DELETE` triggers are activated per row. A write-heavy workload on a table with `INSERT`, `UPDATE`, or `DELETE` triggers results in a large number of calls to your AWS Lambda function.
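
One hedged mitigation is to gate the invocation inside the trigger so that only rows of interest call Lambda. In this sketch, the `status` column and its value are hypothetical.

```
DELIMITER //
CREATE TRIGGER myFilteredTrigger
    AFTER INSERT
        ON Test_trigger FOR EACH ROW
    BEGIN
    -- Invoke Lambda only for rows that need a notification.
    IF NEW.status = 'important' THEN
        SELECT lambda_async(
            'arn:aws:lambda:us-east-1:123456789012:function:BasicTestLambda',
            '{"operation": "ping"}')
            INTO @result;
    END IF;
    END; //
DELIMITER ;
```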

# Invoking a Lambda function with an Aurora MySQL stored procedure (deprecated)
Invoking a Lambda function with a stored procedure (deprecated)

You can invoke an AWS Lambda function from an Aurora MySQL DB cluster by calling the `mysql.lambda_async` procedure. This approach can be useful when you want to integrate your database running on Aurora MySQL with other AWS services. For example, you might want to send a notification using Amazon Simple Notification Service (Amazon SNS) whenever a row is inserted into a specific table in your database. 

**Contents**
+ [

## Aurora MySQL version considerations
](#AuroraMySQL.Integrating.ProcLambda.caveats)
+ [

## Working with the mysql.lambda\_async procedure to invoke a Lambda function (deprecated)
](#AuroraMySQL.Integrating.Lambda.mysql_lambda_async)
  + [

### Syntax
](#AuroraMySQL.Integrating.Lambda.mysql_lambda_async.Syntax)
  + [

### Parameters
](#AuroraMySQL.Integrating.Lambda.mysql_lambda_async.Parameters)
  + [

### Examples
](#AuroraMySQL.Integrating.Lambda.mysql_lambda_async.Examples)

## Aurora MySQL version considerations
Version considerations

Starting in Aurora MySQL version 2, you can use the native function method instead of these stored procedures to invoke a Lambda function. For more information about the native functions, see [Working with native functions to invoke a Lambda function](AuroraMySQL.Integrating.NativeLambda.md#AuroraMySQL.Integrating.NativeLambda.lambda_functions).

In Aurora MySQL version 2, the stored procedure `mysql.lambda_async` is deprecated. We strongly recommend that you work with native Lambda functions instead.

In Aurora MySQL version 3, the stored procedure isn't available.

## Working with the mysql.lambda\_async procedure to invoke a Lambda function (deprecated)
Working with mysql.lambda\_async (deprecated)

The `mysql.lambda_async` procedure is a built-in stored procedure that invokes a Lambda function asynchronously. To use this procedure, your database user must have `EXECUTE` privilege on the `mysql.lambda_async` stored procedure.

### Syntax


The `mysql.lambda_async` procedure has the following syntax.

```
CALL mysql.lambda_async (
  lambda_function_ARN,
  lambda_function_input
)
```

### Parameters


The `mysql.lambda_async` procedure has the following parameters.

*lambda\_function\_ARN*  
The Amazon Resource Name (ARN) of the Lambda function to invoke.

*lambda\_function\_input*  
The input string, in JSON format, for the invoked Lambda function.

### Examples


As a best practice, we recommend that you wrap calls to the `mysql.lambda_async` procedure in a stored procedure that can be called from different sources such as triggers or client code. This approach can help to avoid impedance mismatch issues and make it easier to invoke Lambda functions. 

**Note**  
Be careful when invoking an AWS Lambda function from triggers on tables that experience high write traffic. `INSERT`, `UPDATE`, and `DELETE` triggers are activated per row. A write-heavy workload on a table with `INSERT`, `UPDATE`, or `DELETE` triggers results in a large number of calls to your AWS Lambda function.   
Although calls to the `mysql.lambda_async` procedure are asynchronous, triggers are synchronous. A statement that results in a large number of trigger activations doesn't wait for the call to the AWS Lambda function to complete, but it does wait for the triggers to complete before returning control to the client.

**Example: Invoke an AWS Lambda function to send email**  
The following example creates a stored procedure that you can call in your database code to send an email using a Lambda function.  
**AWS Lambda Function**  

```
import boto3

ses = boto3.client('ses')

def SES_send_email(event, context):

    return ses.send_email(
        Source=event['email_from'],
        Destination={
            'ToAddresses': [
            event['email_to'],
            ]
        },

        Message={
            'Subject': {
            'Data': event['email_subject']
            },
            'Body': {
                'Text': {
                    'Data': event['email_body']
                }
            }
        }
    )
```
**Stored Procedure**  

```
DROP PROCEDURE IF EXISTS SES_send_email;
DELIMITER ;;
  CREATE PROCEDURE SES_send_email(IN email_from VARCHAR(255),
                                  IN email_to VARCHAR(255),
                                  IN subject VARCHAR(255),
                                  IN body TEXT) LANGUAGE SQL
  BEGIN
    CALL mysql.lambda_async(
         'arn:aws:lambda:us-west-2:123456789012:function:SES_send_email',
         CONCAT('{"email_to" : "', email_to,
             '", "email_from" : "', email_from,
             '", "email_subject" : "', subject,
             '", "email_body" : "', body, '"}')
     );
  END
  ;;
DELIMITER ;
```
**Call the Stored Procedure to Invoke the AWS Lambda Function**  

```
mysql> call SES_send_email('example_from@amazon.com', 'example_to@amazon.com', 'Email subject', 'Email content');
```

**Example: Invoke an AWS Lambda function to publish an event from a trigger**  
The following example creates a stored procedure that publishes an event by using Amazon SNS. The code calls the procedure from a trigger when a row is added to a table.  
**AWS Lambda Function**  

```
import boto3

sns = boto3.client('sns')

def SNS_publish_message(event, context):

    return sns.publish(
        TopicArn='arn:aws:sns:us-west-2:123456789012:Sample_Topic',
        Message=event['message'],
        Subject=event['subject'],
        MessageStructure='string'
    )
```
**Stored Procedure**  

```
DROP PROCEDURE IF EXISTS SNS_Publish_Message;
DELIMITER ;;
CREATE PROCEDURE SNS_Publish_Message (IN subject VARCHAR(255),
                                      IN message TEXT) LANGUAGE SQL
BEGIN
  CALL mysql.lambda_async('arn:aws:lambda:us-west-2:123456789012:function:SNS_publish_message',
     CONCAT('{ "subject" : "', subject,
            '", "message" : "', message, '" }')
     );
END
;;
DELIMITER ;
```
**Table**  

```
CREATE TABLE `Customer_Feedback` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `customer_name` varchar(255) NOT NULL,
  `customer_feedback` varchar(1024) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
```
**Trigger**  

```
DELIMITER ;;
CREATE TRIGGER TR_Customer_Feedback_AI
  AFTER INSERT ON Customer_Feedback
  FOR EACH ROW
BEGIN
  SELECT CONCAT('New customer feedback from ', NEW.customer_name), NEW.customer_feedback INTO @subject, @feedback;
  CALL SNS_Publish_Message(@subject, @feedback);
END
;;
DELIMITER ;
```
**Insert a Row into the Table to Trigger the Notification**  

```
mysql> insert into Customer_Feedback (customer_name, customer_feedback) VALUES ('Sample Customer', 'Good job guys!');
```

# Publishing Amazon Aurora MySQL logs to Amazon CloudWatch Logs
Publishing Aurora MySQL logs to CloudWatch Logs

You can configure your Aurora MySQL DB cluster to publish general, slow, audit, and error log data to a log group in Amazon CloudWatch Logs. With CloudWatch Logs, you can perform real-time analysis of the log data, and use CloudWatch to create alarms and view metrics. You can use CloudWatch Logs to store your log records in highly durable storage.

To publish logs to CloudWatch Logs, the respective logs must be enabled. Error logs are enabled by default, but you must enable the other types of logs explicitly. For information about enabling logs in MySQL, see [Selecting general query and slow query log output destinations](https://dev.mysql.com/doc/refman/8.0/en/log-destinations.html) in the MySQL documentation. For more information about enabling Aurora MySQL audit logs, see [Enabling Advanced Auditing](AuroraMySQL.Auditing.md#AuroraMySQL.Auditing.Enable).

**Note**  
If you disable exporting of log data, Aurora doesn't delete the existing log groups or log streams. The existing log data remains available in CloudWatch Logs, depending on log retention, and you still incur charges for stored audit log data. You can delete log streams and log groups using the CloudWatch Logs console, the AWS CLI, or the CloudWatch Logs API.
An alternative way to publish audit logs to CloudWatch Logs is by enabling Advanced Auditing, then creating a custom DB cluster parameter group and setting the `server_audit_logs_upload` parameter to `1`. The default for the `server_audit_logs_upload` DB cluster parameter is `0`. For information on enabling Advanced Auditing, see [Using Advanced Auditing with an Amazon Aurora MySQL DB cluster](AuroraMySQL.Auditing.md).  
If you use this alternative method, you must have an IAM role to access CloudWatch Logs and set the `aws_default_logs_role` cluster-level parameter to the ARN for this role. For information about creating the role, see [Setting up IAM roles to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.md). However, if you have the `AWSServiceRoleForRDS` service-linked role, it provides access to CloudWatch Logs and overrides any custom-defined roles. For information about service-linked roles for Amazon RDS, see [Using service-linked roles for Amazon Aurora](UsingWithRDS.IAM.ServiceLinkedRoles.md).
If you don't want to export audit logs to CloudWatch Logs, make sure that all methods of exporting audit logs are disabled. These methods are the AWS Management Console, the AWS CLI, the RDS API, and the `server_audit_logs_upload` parameter.
 The procedure is slightly different for Aurora Serverless v1 DB clusters than for DB clusters with provisioned or Aurora Serverless v2 DB instances. Aurora Serverless v1 clusters automatically upload all of the logs that you enable through configuration parameters.  
Therefore, you turn on or turn off log upload for Aurora Serverless v1 DB clusters by turning different log types on and off in the DB cluster parameter group. You don't modify the settings of the cluster itself through the AWS Management Console, AWS CLI, or RDS API. For information about turning on and off MySQL logs for Aurora Serverless v1 clusters, see [Parameter groups for Aurora Serverless v1](aurora-serverless-v1.how-it-works.md#aurora-serverless.parameter-groups). 

## Console


You can publish Aurora MySQL logs for provisioned clusters to CloudWatch Logs with the console.

**To publish Aurora MySQL logs from the console**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the Aurora MySQL DB cluster that you want to publish the log data for.

1. Choose **Modify**.

1. In the **Log exports** section, choose the logs that you want to start publishing to CloudWatch Logs.

1. Choose **Continue**, and then choose **Modify DB Cluster** on the summary page.

## AWS CLI


You can publish Aurora MySQL logs for provisioned clusters with the AWS CLI. To do so, you run the [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) AWS CLI command with the following options: 
+ `--db-cluster-identifier`—The DB cluster identifier.
+ `--cloudwatch-logs-export-configuration`—The configuration setting for the log types to be enabled for export to CloudWatch Logs for the DB cluster.

You can also publish Aurora MySQL logs by running one of the following AWS CLI commands: 
+ [create-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-cluster.html)
+ [restore-db-cluster-from-s3](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-from-s3.html)
+ [restore-db-cluster-from-snapshot](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-from-snapshot.html)
+ [restore-db-cluster-to-point-in-time](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-to-point-in-time.html)

Run one of these AWS CLI commands with the following options:
+ `--db-cluster-identifier`—The DB cluster identifier.
+ `--engine`—The database engine.
+ `--enable-cloudwatch-logs-exports`—The configuration setting for the log types to be enabled for export to CloudWatch Logs for the DB cluster.

Other options might be required depending on the AWS CLI command that you run.

**Example**  
The following command modifies an existing Aurora MySQL DB cluster to publish log files to CloudWatch Logs.  
For Linux, macOS, or Unix:  

```
aws rds modify-db-cluster \
    --db-cluster-identifier mydbcluster \
    --cloudwatch-logs-export-configuration '{"EnableLogTypes":["error","general","slowquery","audit","instance"]}'
```
For Windows:  

```
aws rds modify-db-cluster ^
    --db-cluster-identifier mydbcluster ^
    --cloudwatch-logs-export-configuration '{"EnableLogTypes":["error","general","slowquery","audit","instance"]}'
```

**Example**  
The following command creates an Aurora MySQL DB cluster to publish log files to CloudWatch Logs.  
For Linux, macOS, or Unix:  

```
aws rds create-db-cluster \
    --db-cluster-identifier mydbcluster \
    --engine aurora \
    --enable-cloudwatch-logs-exports '["error","general","slowquery","audit","instance"]'
```
For Windows:  

```
aws rds create-db-cluster ^
    --db-cluster-identifier mydbcluster ^
    --engine aurora ^
    --enable-cloudwatch-logs-exports '["error","general","slowquery","audit","instance"]'
```

## RDS API


You can publish Aurora MySQL logs for provisioned clusters with the RDS API. To do so, you run the [ModifyDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) operation with the following options: 
+ `DBClusterIdentifier`—The DB cluster identifier.
+ `CloudwatchLogsExportConfiguration`—The configuration setting for the log types to be enabled for export to CloudWatch Logs for the DB cluster.

You can also publish Aurora MySQL logs with the RDS API by running one of the following RDS API operations: 
+ [CreateDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBCluster.html)
+ [RestoreDBClusterFromS3](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterFromS3.html)
+ [RestoreDBClusterFromSnapshot](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterFromSnapshot.html)
+ [RestoreDBClusterToPointInTime](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterToPointInTime.html)

Run the RDS API operation with the following parameters: 
+ `DBClusterIdentifier`—The DB cluster identifier.
+ `Engine`—The database engine.
+ `EnableCloudwatchLogsExports`—The configuration setting for the log types to be enabled for export to CloudWatch Logs for the DB cluster.

Other parameters might be required depending on the RDS API operation that you run.
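As an illustration of the parameter shapes described above, the following Python sketch builds a request for the `ModifyDBCluster` operation. The cluster identifier is a placeholder; with the AWS SDK for Python, you would pass the dictionary to `boto3.client("rds").modify_db_cluster(**params)`.

```
# Illustrative request parameters for the ModifyDBCluster API operation.
# The identifier is a placeholder. To make the call, pass the dictionary to
# boto3.client("rds").modify_db_cluster(**params).
params = {
    "DBClusterIdentifier": "mydbcluster",
    "CloudwatchLogsExportConfiguration": {
        "EnableLogTypes": ["error", "general", "slowquery", "audit", "instance"],
    },
}
```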

## Monitoring log events in Amazon CloudWatch


After enabling Aurora MySQL log events, you can monitor the events in Amazon CloudWatch Logs. A new log group is automatically created for the Aurora DB cluster under the following prefix, in which `cluster-name` represents the DB cluster name, and `log_type` represents the log type.

```
/aws/rds/cluster/cluster-name/log_type
```

For example, if you configure the export function to include the slow query log for a DB cluster named `mydbcluster`, slow query data is stored in the `/aws/rds/cluster/mydbcluster/slowquery` log group.
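Because the naming convention is fixed, you can compose a log group name directly, as in this small shell sketch (the cluster name and log type are placeholders):

```
# Compose the CloudWatch Logs log group name for a given cluster and log type.
cluster_name=mydbcluster
log_type=slowquery
log_group="/aws/rds/cluster/${cluster_name}/${log_type}"
echo "${log_group}"
```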

The events from all instances in your cluster are pushed to a log group using different log streams. The behavior depends on which of the following conditions is true:
+ A log group with the specified name exists.

  Aurora uses the existing log group to export log data for the cluster. To create log groups with predefined log retention periods, metric filters, and customer access, you can use automated configuration, such as AWS CloudFormation.
+ A log group with the specified name doesn't exist.

  When a matching log entry is detected in the log file for the instance, Aurora MySQL creates a new log group in CloudWatch Logs automatically. The log group uses the default log retention period of **Never Expire**.

  To change the log retention period, use the CloudWatch Logs console, the AWS CLI, or the CloudWatch Logs API. For more information about changing log retention periods in CloudWatch Logs, see [Change log data retention in CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SettingLogRetention.html).

To search for information within the log events for a DB cluster, use the CloudWatch Logs console, the AWS CLI, or the CloudWatch Logs API. For more information about searching and filtering log data, see [Searching and filtering log data](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/MonitoringLogData.html).
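From the AWS CLI, setting a retention period and searching a log group might look like the following. The log group name and filter pattern are placeholders, and the commands require valid AWS credentials.

```
aws logs put-retention-policy \
    --log-group-name /aws/rds/cluster/mydbcluster/error \
    --retention-in-days 30

aws logs filter-log-events \
    --log-group-name /aws/rds/cluster/mydbcluster/error \
    --filter-pattern "ERROR"
```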

# Amazon Aurora MySQL lab mode
Aurora MySQL lab mode<a name="labmode"></a>

**Important**  
Lab mode was introduced in Aurora MySQL version 2 to enable the [Fast DDL](AuroraMySQL.Managing.FastDDL.md) optimization, which improved the efficiency of certain DDL operations. In Aurora MySQL version 3, lab mode has been removed, and Fast DDL has been replaced by the MySQL 8.0 feature called [Instant DDL](https://dev.mysql.com/doc/refman/8.4/en/innodb-online-ddl-operations.html).

Aurora lab mode is used to enable Aurora features that are available in the current Aurora database version, but are not enabled by default. While Aurora lab mode features are not recommended for use in production DB clusters, you can use Aurora lab mode to enable these features for DB clusters in your development and test environments. For more information about Aurora features available when Aurora lab mode is enabled, see [Aurora lab mode features](#AuroraMySQL.Updates.LabModeFeatures).

The `aurora_lab_mode` parameter is an instance-level parameter that is in the default parameter group. The parameter is set to `0` (disabled) in the default parameter group. To enable Aurora lab mode, create a custom parameter group, set the `aurora_lab_mode` parameter to `1` (enabled) in the custom parameter group, and modify one or more DB instances in your Aurora cluster to use the custom parameter group. Then connect to the appropriate instance endpoint to try the lab mode features. For information on modifying a DB parameter group, see [Modifying parameters in a DB parameter group in Amazon Aurora](USER_WorkingWithParamGroups.Modifying.md). For information on parameter groups and Amazon Aurora, see [Aurora MySQL configuration parameters](AuroraMySQL.Reference.ParameterGroups.md).
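As a sketch of those steps with the AWS CLI, the following commands create a custom DB parameter group, enable `aurora_lab_mode` in it, and attach it to a DB instance. The group name and instance identifier are placeholders; because lab mode applies to Aurora MySQL version 2, the example assumes the `aurora-mysql5.7` parameter group family, and because `aurora_lab_mode` is a static parameter, the change takes effect after a reboot.

```
aws rds create-db-parameter-group \
    --db-parameter-group-name my-lab-mode-params \
    --db-parameter-group-family aurora-mysql5.7 \
    --description "Parameter group with Aurora lab mode enabled"

aws rds modify-db-parameter-group \
    --db-parameter-group-name my-lab-mode-params \
    --parameters "ParameterName=aurora_lab_mode,ParameterValue=1,ApplyMethod=pending-reboot"

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --db-parameter-group-name my-lab-mode-params
```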

## Aurora lab mode features
Aurora lab mode features

The following Aurora feature is currently available when Aurora lab mode is enabled. You must enable Aurora lab mode before you can use this feature.

**Fast DDL**  
This feature allows you to run an `ALTER TABLE tbl_name ADD COLUMN col_name column_definition` operation nearly instantaneously. The operation completes without requiring the table to be copied and without materially impacting other DML statements. Because it doesn't consume temporary storage for a table copy, it makes DDL statements practical even for large tables on small instance classes.  
Fast DDL is currently only supported for adding a nullable column, without a default value, at the end of a table. For more information about using this feature, see [Altering tables in Amazon Aurora using Fast DDL](AuroraMySQL.Managing.FastDDL.md).
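For example, the supported form of the operation looks like the following (the table and column names are illustrative). The new column is nullable, has no default value, and is appended at the end of the table.

```
-- Qualifies for Fast DDL: nullable, no DEFAULT clause, added at the end of the table.
ALTER TABLE orders ADD COLUMN note VARCHAR(100) NULL;
```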

# Best practices with Amazon Aurora MySQL
Best practices with Aurora MySQL<a name="best_practices"></a>

This topic includes information on best practices and options for using or migrating data to an Amazon Aurora MySQL DB cluster. The information in this topic summarizes and reiterates some of the guidelines and procedures that you can find in [Managing an Amazon Aurora DB cluster](CHAP_Aurora.md).

**Contents**
+ [

## Determining which DB instance you are connected to
](#AuroraMySQL.BestPractices.DeterminePrimaryInstanceConnection)
+ [

# Best practices for Aurora MySQL performance and scaling
](AuroraMySQL.BestPractices.Performance.md)
  + [

## Using T instance classes for development and testing
](AuroraMySQL.BestPractices.Performance.md#AuroraMySQL.BestPractices.T2Medium)
  + [

## Optimizing Aurora MySQL indexed join queries with asynchronous key prefetch
](AuroraMySQL.BestPractices.Performance.md#Aurora.BestPractices.AKP)
    + [

### Enabling asynchronous key prefetch
](AuroraMySQL.BestPractices.Performance.md#Aurora.BestPractices.AKP.Enabling)
    + [

### Optimizing queries for asynchronous key prefetch
](AuroraMySQL.BestPractices.Performance.md#Aurora.BestPractices.AKP.Optimizing)
  + [

## Optimizing large Aurora MySQL join queries with hash joins
](AuroraMySQL.BestPractices.Performance.md#Aurora.BestPractices.HashJoin)
    + [

### Enabling hash joins
](AuroraMySQL.BestPractices.Performance.md#Aurora.BestPractices.HashJoin.Enabling)
    + [

### Optimizing queries for hash joins
](AuroraMySQL.BestPractices.Performance.md#Aurora.BestPractices.HashJoin.Optimizing)
  + [

## Using Amazon Aurora to scale reads for your MySQL database
](AuroraMySQL.BestPractices.Performance.md#AuroraMySQL.BestPractices.ReadScaling)
  + [

## Optimizing timestamp operations
](AuroraMySQL.BestPractices.Performance.md#AuroraMySQL.BestPractices.Performance.TimeZone)
  + [

## Virtual index ID overflow errors
](AuroraMySQL.BestPractices.Performance.md#AuroraMySQL.BestPractices.Performance.VirtualIndexIDOverflow)
+ [

# Best practices for Aurora MySQL high availability
](AuroraMySQL.BestPractices.HA.md)
  + [

## Using Amazon Aurora for Disaster Recovery with your MySQL databases
](AuroraMySQL.BestPractices.HA.md#AuroraMySQL.BestPractices.DisasterRecovery)
  + [

## Migrating from MySQL to Amazon Aurora MySQL with reduced downtime
](AuroraMySQL.BestPractices.HA.md#AuroraMySQL.BestPractices.Migrating)
  + [

## Avoiding slow performance, automatic restart, and failover for Aurora MySQL DB instances
](AuroraMySQL.BestPractices.HA.md#AuroraMySQL.BestPractices.Avoiding)
+ [

# Recommendations for MySQL features in Aurora MySQL
](AuroraMySQL.BestPractices.FeatureRecommendations.md)
  + [

## Using multithreaded replication in Aurora MySQL
](AuroraMySQL.BestPractices.FeatureRecommendations.md#AuroraMySQL.BestPractices.MTReplica)
  + [

## Invoking AWS Lambda functions using native MySQL functions
](AuroraMySQL.BestPractices.FeatureRecommendations.md#AuroraMySQL.BestPractices.Lambda)
  + [

## Avoiding XA transactions with Amazon Aurora MySQL
](AuroraMySQL.BestPractices.FeatureRecommendations.md#AuroraMySQL.BestPractices.XA)
  + [

## Keeping foreign keys turned on during DML statements
](AuroraMySQL.BestPractices.FeatureRecommendations.md#Aurora.BestPractices.ForeignKeys)
  + [

## Configuring how frequently the log buffer is flushed
](AuroraMySQL.BestPractices.FeatureRecommendations.md#AuroraMySQL.BestPractices.Flush)
  + [

## Minimizing and troubleshooting Aurora MySQL deadlocks
](AuroraMySQL.BestPractices.FeatureRecommendations.md#AuroraMySQL.BestPractices.deadlocks)
    + [

### Minimizing InnoDB deadlocks
](AuroraMySQL.BestPractices.FeatureRecommendations.md#AuroraMySQL.BestPractices.deadlocks-minimize)
    + [

### Monitoring InnoDB deadlocks
](AuroraMySQL.BestPractices.FeatureRecommendations.md#AuroraMySQL.BestPractices.deadlocks-monitor)
+ [

# Evaluating DB instance usage for Aurora MySQL with Amazon CloudWatch metrics
](AuroraMySQL.BestPractices.CW.md)

## Determining which DB instance you are connected to


To determine which DB instance in an Aurora MySQL DB cluster a connection is connected to, check the `innodb_read_only` global variable, as shown in the following example.

```
SHOW GLOBAL VARIABLES LIKE 'innodb_read_only'; 
```

The `innodb_read_only` variable is set to `ON` if you are connected to a reader DB instance. This setting is `OFF` if you are connected to a writer DB instance, such as the primary instance in a provisioned cluster.

This approach can be helpful if you want to add logic to your application code to balance the workload or to ensure that a write operation is using the correct connection.

# Best practices for Aurora MySQL performance and scaling
Best practices for performance and scaling

You can apply the following best practices to improve the performance and scalability of your Aurora MySQL clusters.

**Topics**
+ [

## Using T instance classes for development and testing
](#AuroraMySQL.BestPractices.T2Medium)
+ [

## Optimizing Aurora MySQL indexed join queries with asynchronous key prefetch
](#Aurora.BestPractices.AKP)
+ [

## Optimizing large Aurora MySQL join queries with hash joins
](#Aurora.BestPractices.HashJoin)
+ [

## Using Amazon Aurora to scale reads for your MySQL database
](#AuroraMySQL.BestPractices.ReadScaling)
+ [

## Optimizing timestamp operations
](#AuroraMySQL.BestPractices.Performance.TimeZone)
+ [

## Virtual index ID overflow errors
](#AuroraMySQL.BestPractices.Performance.VirtualIndexIDOverflow)

## Using T instance classes for development and testing


Amazon Aurora MySQL instances that use the `db.t2`, `db.t3`, or `db.t4g` DB instance classes are best suited for applications that do not support a high workload for an extended amount of time. The T instances are designed to provide moderate baseline performance and the capability to burst to significantly higher performance as required by your workload. They are intended for workloads that don't use the full CPU often or consistently, but occasionally need to burst. We recommend using the T DB instance classes only for development and test servers, or other non-production servers. For more details on the T instance classes, see [Burstable performance instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstable-performance-instances.html).

If your Aurora cluster is larger than 40 TB, don't use the T instance classes. When your database has a large volume of data, the memory overhead for managing schema objects can exceed the capacity of a T instance.

Don't enable the MySQL Performance Schema on Amazon Aurora MySQL T instances. If the Performance Schema is enabled, the instance might run out of memory.

**Tip**  
 If your database is sometimes idle but at other times has a substantial workload, you can use Aurora Serverless v2 as an alternative to T instances. With Aurora Serverless v2, you define a capacity range and Aurora automatically scales your database up or down depending on the current workload. For usage details, see [Using Aurora Serverless v2](aurora-serverless-v2.md). For the database engine versions that you can use with Aurora Serverless v2, see [Requirements and limitations for Aurora Serverless v2](aurora-serverless-v2.requirements.md). 

When you use a T instance as a DB instance in an Aurora MySQL DB cluster, we recommend the following:
+ Use the same DB instance class for all instances in your DB cluster. For example, if you use `db.t2.medium` for your writer instance, then we recommend that you use `db.t2.medium` for your reader instances also.
+ Don't adjust any memory-related configuration settings, such as `innodb_buffer_pool_size`. Aurora uses a highly tuned set of default values for memory buffers on T instances. These special defaults are needed for Aurora to run on memory-constrained instances. If you change any memory-related settings on a T instance, you are much more likely to encounter out-of-memory conditions, even if your change is intended to increase buffer sizes.
+ Monitor your CPU Credit Balance (`CPUCreditBalance`) to ensure that it is at a sustainable level. That is, CPU credits are being accumulated at the same rate as they are being used.

  When you have exhausted the CPU credits for an instance, you see an immediate drop in the available CPU and an increase in the read and write latency for the instance. This situation results in a severe decrease in the overall performance of the instance.

  If your CPU credit balance is not at a sustainable level, then we recommend that you modify your DB instance to use one of the supported R DB instance classes (scale compute).

  For more information on monitoring metrics, see [Viewing metrics in the Amazon RDS console](USER_Monitoring.md).
+ Monitor the replica lag (`AuroraReplicaLag`) between the writer instance and the reader instances.

  If a reader instance runs out of CPU credits before the writer instance does, the resulting lag can cause the reader instance to restart frequently. This result is common when an application has a heavy load of read operations distributed among reader instances, at the same time that the writer instance has a minimal load of write operations.

  If you see a sustained increase in replica lag, make sure that your CPU credit balance for the reader instances in your DB cluster is not being exhausted.

  If your CPU credit balance is not at a sustainable level, then we recommend that you modify your DB instance to use one of the supported R DB instance classes (scale compute).
+ Keep the number of inserts per transaction below 1 million for DB clusters that have binary logging enabled.

  If the DB cluster parameter group for your DB cluster has the `binlog_format` parameter set to a value other than `OFF`, then your DB cluster might experience out-of-memory conditions if the DB cluster receives transactions that contain over 1 million rows to insert. You can monitor the freeable memory (`FreeableMemory`) metric to determine if your DB cluster is running out of available memory. You then check the write operations (`VolumeWriteIOPS`) metric to see if a writer instance is receiving a heavy load of write operations. If this is the case, then we recommend that you update your application to limit the number of inserts in a transaction to less than 1 million. Alternatively, you can modify your instance to use one of the supported R DB instance classes (scale compute).
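To check the credit balance from the AWS CLI, you can retrieve the `CPUCreditBalance` metric along the following lines. The instance identifier and time range are placeholders, and the command requires valid AWS credentials.

```
aws cloudwatch get-metric-statistics \
    --namespace AWS/RDS \
    --metric-name CPUCreditBalance \
    --dimensions Name=DBInstanceIdentifier,Value=mydbinstance \
    --start-time 2024-01-01T00:00:00Z \
    --end-time 2024-01-01T06:00:00Z \
    --period 300 \
    --statistics Average
```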

## Optimizing Aurora MySQL indexed join queries with asynchronous key prefetch
Asynchronous key prefetch for indexed join queries

Aurora MySQL can use the asynchronous key prefetch (AKP) feature to improve the performance of queries that join tables across indexes. This feature improves performance by anticipating the rows needed to run queries in which a JOIN query requires use of the Batched Key Access (BKA) Join algorithm and Multi-Range Read (MRR) optimization features. For more information about BKA and MRR, see [Block nested-loop and batched key access joins](https://dev.mysql.com/doc/refman/5.6/en/bnl-bka-optimization.html) and [Multi-range read optimization](https://dev.mysql.com/doc/refman/5.6/en/mrr-optimization.html) in the MySQL documentation.

To take advantage of the AKP feature, a query must use both BKA and MRR. Typically, such a query occurs when the JOIN clause of a query uses a secondary index, but also needs some columns from the primary index. For example, you can use AKP when a JOIN clause represents an equijoin on index values between a small outer and large inner table, and the index is highly selective on the larger table. AKP works in concert with BKA and MRR to perform a secondary to primary index lookup during the evaluation of the JOIN clause. AKP identifies the rows required to run the query during the evaluation of the JOIN clause. It then uses a background thread to asynchronously load the pages containing those rows into memory before running the query.

AKP is available for Aurora MySQL version 2.10 and higher, and version 3. For more information about Aurora MySQL versions, see [Database engine updates for Amazon Aurora MySQL](AuroraMySQL.Updates.md).

### Enabling asynchronous key prefetch


You can enable the AKP feature by setting `aurora_use_key_prefetch`, a MySQL server variable, to `on`. By default, this value is set to `on`. However, AKP can't be enabled until you also enable the BKA Join algorithm and disable cost-based MRR functionality. To do so, you must set the following values for `optimizer_switch`, a MySQL server variable:
+ Set `batched_key_access` to `on`. This value controls the use of the BKA Join algorithm. By default, this value is set to `off`.
+ Set `mrr_cost_based` to `off`. This value controls the use of cost-based MRR functionality. By default, this value is set to `on`.

Currently, you can set these values only at the session level. The following example illustrates how to set these values to enable AKP for the current session by executing SET statements.

```
mysql> set @@session.aurora_use_key_prefetch=on;
mysql> set @@session.optimizer_switch='batched_key_access=on,mrr_cost_based=off';
```

Similarly, you can use SET statements to disable AKP and the BKA Join algorithm and re-enable cost-based MRR functionality for the current session, as shown in the following example.

```
mysql> set @@session.aurora_use_key_prefetch=off;
mysql> set @@session.optimizer_switch='batched_key_access=off,mrr_cost_based=on';
```

For more information about the `batched_key_access` and `mrr_cost_based` optimizer switches, see [Switchable optimizations](https://dev.mysql.com/doc/refman/5.6/en/switchable-optimizations.html) in the MySQL documentation.

### Optimizing queries for asynchronous key prefetch


You can confirm whether a query can take advantage of the AKP feature. To do so, use the `EXPLAIN` statement to profile the query before running it. The `EXPLAIN` statement provides information about the execution plan to use for a specified query.

In the output for the `EXPLAIN` statement, the `Extra` column describes additional information included with the execution plan. If the AKP feature applies to a table used in the query, this column includes one of the following values:
+ `Using Key Prefetching`
+ `Using join buffer (Batched Key Access with Key Prefetching)`

The following example shows the use of `EXPLAIN` to view the execution plan for a query that can take advantage of AKP.

```
mysql> explain select sql_no_cache
    ->     ps_partkey,
    ->     sum(ps_supplycost * ps_availqty) as value
    -> from
    ->     partsupp,
    ->     supplier,
    ->     nation
    -> where
    ->     ps_suppkey = s_suppkey
    ->     and s_nationkey = n_nationkey
    ->     and n_name = 'ETHIOPIA'
    -> group by
    ->     ps_partkey having
    ->         sum(ps_supplycost * ps_availqty) > (
    ->             select
    ->                 sum(ps_supplycost * ps_availqty) * 0.0000003333
    ->             from
    ->                 partsupp,
    ->                 supplier,
    ->                 nation
    ->             where
    ->                 ps_suppkey = s_suppkey
    ->                 and s_nationkey = n_nationkey
    ->                 and n_name = 'ETHIOPIA'
    ->         )
    -> order by
    ->     value desc;
+----+-------------+----------+------+-----------------------+---------------+---------+----------------------------------+------+----------+-------------------------------------------------------------+
| id | select_type | table    | type | possible_keys         | key           | key_len | ref                              | rows | filtered | Extra                                                       |
+----+-------------+----------+------+-----------------------+---------------+---------+----------------------------------+------+----------+-------------------------------------------------------------+
|  1 | PRIMARY     | nation   | ALL  | PRIMARY               | NULL          | NULL    | NULL                             |   25 |   100.00 | Using where; Using temporary; Using filesort                |
|  1 | PRIMARY     | supplier | ref  | PRIMARY,i_s_nationkey | i_s_nationkey | 5       | dbt3_scale_10.nation.n_nationkey | 2057 |   100.00 | Using index                                                 |
|  1 | PRIMARY     | partsupp | ref  | i_ps_suppkey          | i_ps_suppkey  | 4       | dbt3_scale_10.supplier.s_suppkey |   42 |   100.00 | Using join buffer (Batched Key Access with Key Prefetching) |
|  2 | SUBQUERY    | nation   | ALL  | PRIMARY               | NULL          | NULL    | NULL                             |   25 |   100.00 | Using where                                                 |
|  2 | SUBQUERY    | supplier | ref  | PRIMARY,i_s_nationkey | i_s_nationkey | 5       | dbt3_scale_10.nation.n_nationkey | 2057 |   100.00 | Using index                                                 |
|  2 | SUBQUERY    | partsupp | ref  | i_ps_suppkey          | i_ps_suppkey  | 4       | dbt3_scale_10.supplier.s_suppkey |   42 |   100.00 | Using join buffer (Batched Key Access with Key Prefetching) |
+----+-------------+----------+------+-----------------------+---------------+---------+----------------------------------+------+----------+-------------------------------------------------------------+
6 rows in set, 1 warning (0.00 sec)
```

For more information about the `EXPLAIN` output format, see [Extended EXPLAIN output format](https://dev.mysql.com/doc/refman/8.0/en/explain-extended.html) in the MySQL documentation.

## Optimizing large Aurora MySQL join queries with hash joins
Hash joins for large join queries

When you need to join a large amount of data by using an equijoin, a hash join can improve query performance. You can enable hash joins for Aurora MySQL.

A hash join column can be any complex expression. In a hash join column, you can compare across data types in the following ways:
+ You can compare anything across the category of precise numeric data types, such as `int`, `bigint`, `numeric`, and `bit`.
+ You can compare anything across the category of approximate numeric data types, such as `float` and `double`.
+ You can compare items across string types if the string types have the same character set and collation.
+ You can compare items with date and timestamp data types if the types are the same.

**Note**  
You can't compare data types in different categories.

The following restrictions apply to hash joins for Aurora MySQL:
+ Left-right outer joins aren't supported for Aurora MySQL version 2, but are supported for version 3.
+ Semijoins such as subqueries aren't supported, unless the subqueries are materialized first.
+ Multiple-table updates or deletes aren't supported.
**Note**  
Single-table updates or deletes are supported.
+ BLOB and spatial data type columns can't be join columns in a hash join.

### Enabling hash joins


To enable hash joins:
+ Aurora MySQL version 2 – Set the DB parameter or DB cluster parameter `aurora_disable_hash_join` to `0`. Turning off `aurora_disable_hash_join` sets the value of `optimizer_switch` to `hash_join=on`.
+ Aurora MySQL version 3 – Set the MySQL server parameter `optimizer_switch` to `block_nested_loop=on`.

Hash joins are turned on by default in Aurora MySQL version 3 and turned off by default in Aurora MySQL version 2. The following example illustrates how to enable hash joins for Aurora MySQL version 3. You can issue the statement `select @@optimizer_switch` first to see what other settings are present in the `SET` parameter string. Updating one setting in the `optimizer_switch` parameter doesn't erase or modify the other settings.

```
mysql> SET optimizer_switch='block_nested_loop=on';
```

**Note**  
For Aurora MySQL version 3, hash join support is available in all minor versions and is turned on by default.  
For Aurora MySQL version 2, hash join support is available in all minor versions. In Aurora MySQL version 2, the hash join feature is always controlled by the `aurora_disable_hash_join` value.
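For Aurora MySQL version 2, setting the cluster-level parameter with the AWS CLI might look like the following. The parameter group name is a placeholder; check whether `aurora_disable_hash_join` is dynamic or static in your engine version before choosing the apply method.

```
aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-cluster-params \
    --parameters "ParameterName=aurora_disable_hash_join,ParameterValue=0,ApplyMethod=pending-reboot"
```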

With this setting, the optimizer chooses to use a hash join based on cost, query characteristics, and resource availability. If the cost estimation is incorrect, you can force the optimizer to choose a hash join. You do so by setting `hash_join_cost_based`, a MySQL server variable, to `off`. The following example illustrates how to force the optimizer to choose a hash join.

```
mysql> SET optimizer_switch='hash_join_cost_based=off';
```

**Note**  
This setting overrides the decisions of the cost-based optimizer. While the setting can be useful for testing and development, we recommend that you not use it in production.

### Optimizing queries for hash joins


To find out whether a query can take advantage of a hash join, use the `EXPLAIN` statement to profile the query first. The `EXPLAIN` statement provides information about the execution plan to use for a specified query.

In the output for the `EXPLAIN` statement, the `Extra` column describes additional information included with the execution plan. If a hash join applies to the tables used in the query, this column includes values similar to the following:
+ `Using where; Using join buffer (Hash Join Outer table table1_name)`
+ `Using where; Using join buffer (Hash Join Inner table table2_name)`

The following example shows the use of `EXPLAIN` to view the execution plan for a hash join query.

```
mysql> explain SELECT sql_no_cache * FROM hj_small, hj_big, hj_big2
    ->     WHERE hj_small.col1 = hj_big.col1 and hj_big.col1=hj_big2.col1 ORDER BY 1;
+----+-------------+----------+------+---------------+------+---------+------+------+----------------------------------------------------------------+
| id | select_type | table    | type | possible_keys | key  | key_len | ref  | rows | Extra                                                          |
+----+-------------+----------+------+---------------+------+---------+------+------+----------------------------------------------------------------+
|  1 | SIMPLE      | hj_small | ALL  | NULL          | NULL | NULL    | NULL |    6 | Using temporary; Using filesort                                |
|  1 | SIMPLE      | hj_big   | ALL  | NULL          | NULL | NULL    | NULL |   10 | Using where; Using join buffer (Hash Join Outer table hj_big)  |
|  1 | SIMPLE      | hj_big2  | ALL  | NULL          | NULL | NULL    | NULL |   15 | Using where; Using join buffer (Hash Join Inner table hj_big2) |
+----+-------------+----------+------+---------------+------+---------+------+------+----------------------------------------------------------------+
3 rows in set (0.04 sec)
```

In the output, the `Hash Join Inner table` is the table used to build the hash table, and the `Hash Join Outer table` is the table that is used to probe the hash table.

For more information about the extended `EXPLAIN` output format, see [Extended EXPLAIN Output Format](https://dev.mysql.com/doc/refman/8.0/en/explain-extended.html) in the MySQL product documentation.

 In Aurora MySQL 2.08 and higher, you can use SQL hints to influence whether a query uses hash join or not, and which tables to use for the build and probe sides of the join. For details, see [Aurora MySQL hints](AuroraMySQL.Reference.Hints.md). 
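As a sketch of that capability, a hint-bearing query might look like the following. The hint names are taken from the Aurora MySQL hints reference and the table names are illustrative; confirm which hints are available in your engine version.

```
-- Request a hash join and choose which table builds the hash table
-- (hint names per the Aurora MySQL hints reference; table names are illustrative).
SELECT /*+ HASH_JOIN(t2) HASH_JOIN_BUILDING(t2) */ t1.col1, t2.col2
FROM t1, t2
WHERE t1.col1 = t2.col1;
```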

## Using Amazon Aurora to scale reads for your MySQL database


You can use Amazon Aurora with your MySQL DB instance to take advantage of the read scaling capabilities of Amazon Aurora and expand the read workload for your MySQL DB instance. To use Aurora to scale reads for your MySQL DB instance, create an Aurora MySQL DB cluster and make it a read replica of your MySQL DB instance. Then connect to the Aurora MySQL cluster to process the read queries. The source database can be an RDS for MySQL DB instance, or a MySQL database running external to Amazon RDS. For more information, see [Scaling reads for your MySQL database with Amazon Aurora](AuroraMySQL.Replication.ReadScaling.md).

## Optimizing timestamp operations


When the value of the system variable `time_zone` is set to `SYSTEM`, each MySQL function call that requires a time zone calculation makes a system library call. When you run SQL statements that return or change such `TIMESTAMP` values at high concurrency, you might experience increased latency, lock contention, and CPU usage. For more information, see [time_zone](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_time_zone) in the MySQL documentation.

To avoid this behavior, we recommend that you change the value of the `time_zone` DB cluster parameter to `UTC`. For more information, see [Modifying parameters in a DB cluster parameter group in Amazon Aurora](USER_WorkingWithParamGroups.ModifyingCluster.md).

While the `time_zone` parameter is dynamic (doesn't require a database server restart), the new value is used only for new connections. To make sure that all connections are updated to use the new `time_zone` value, we recommend that you recycle your application connections after updating the DB cluster parameter.
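From the AWS CLI, that parameter change might look like the following. The parameter group name is a placeholder; because the parameter is dynamic, the `immediate` apply method applies it without a database restart.

```
aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-cluster-params \
    --parameters "ParameterName=time_zone,ParameterValue=UTC,ApplyMethod=immediate"
```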

## Virtual index ID overflow errors


Aurora MySQL limits values for virtual index IDs to 8 bits to prevent an issue caused by the undo format in MySQL. If an index exceeds the virtual index ID limit, your cluster might become unavailable. When an index approaches the virtual index ID limit, or when you attempt to create an index above the limit, RDS might throw error code `63955` or warning code `63955`. To address a virtual index ID limit error, we recommend that you recreate your database with a logical dump and restore.

For more information about logical dump and restore for Amazon Aurora MySQL, see [Migrate very large databases to Amazon Aurora MySQL using MyDumper and MyLoader](https://aws.amazon.com/blogs/database/migrate-very-large-databases-to-amazon-aurora-mysql-using-mydumper-and-myloader/). For more information about accessing error logs in Amazon Aurora, see [Monitoring Amazon Aurora log files](USER_LogAccess.md).

# Best practices for Aurora MySQL high availability
Best practices for high availability

You can apply the following best practices to improve the availability of your Aurora MySQL clusters.

**Topics**
+ [

## Using Amazon Aurora for Disaster Recovery with your MySQL databases
](#AuroraMySQL.BestPractices.DisasterRecovery)
+ [

## Migrating from MySQL to Amazon Aurora MySQL with reduced downtime
](#AuroraMySQL.BestPractices.Migrating)
+ [

## Avoiding slow performance, automatic restart, and failover for Aurora MySQL DB instances
](#AuroraMySQL.BestPractices.Avoiding)

## Using Amazon Aurora for Disaster Recovery with your MySQL databases


You can use Amazon Aurora with your MySQL DB instance to create an offsite backup for disaster recovery. To use Aurora for disaster recovery of your MySQL DB instance, create an Amazon Aurora DB cluster and make it a read replica of your MySQL DB instance. This applies to an RDS for MySQL DB instance, or a MySQL database running external to Amazon RDS.

**Important**  
When you set up replication between a MySQL DB instance and an Amazon Aurora MySQL DB cluster, you should monitor the replication to ensure that it remains healthy and repair it if necessary.

For instructions on how to create an Amazon Aurora MySQL DB cluster and make it a read replica of your MySQL DB instance, follow the procedure in [Using Amazon Aurora to scale reads for your MySQL database](AuroraMySQL.BestPractices.Performance.md#AuroraMySQL.BestPractices.ReadScaling).

For more information on disaster recovery models, see [How to choose the best disaster recovery option for your Amazon Aurora MySQL cluster](https://aws.amazon.com/blogs/database/how-to-choose-the-best-disaster-recovery-option-for-your-amazon-aurora-mysql-cluster/).

## Migrating from MySQL to Amazon Aurora MySQL with reduced downtime


When importing data from a MySQL database that supports a live application to an Amazon Aurora MySQL DB cluster, you might want to reduce the time that service is interrupted while you migrate. To do so, you can use the procedure documented in [Importing data to an Amazon RDS for MySQL DB instance with reduced downtime](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/mysql-importing-data-reduced-downtime.html) in the *Amazon Relational Database Service User Guide*. This procedure can especially help if you are working with a very large database. You can use the procedure to reduce the cost of the import by minimizing the amount of data that is passed across the network to AWS.

The procedure lists steps to transfer a copy of your database data to an Amazon EC2 instance and import the data into a new RDS for MySQL DB instance. Because Amazon Aurora is compatible with MySQL, you can instead use an Amazon Aurora DB cluster as the target in place of the RDS for MySQL DB instance.

## Avoiding slow performance, automatic restart, and failover for Aurora MySQL DB instances


If you're running a heavy workload or workloads that spike beyond the allocated resources of your DB instance, you can exhaust the resources on which you're running your application and Aurora database. To get metrics on your database instance such as CPU utilization, memory usage, and number of database connections used, you can refer to the metrics provided by Amazon CloudWatch, Performance Insights, and Enhanced Monitoring. For more information on monitoring your DB instance, see [Monitoring metrics in an Amazon Aurora cluster](MonitoringAurora.md).

If your workload exhausts the resources you're using, your DB instance might slow down, restart, or even fail over to another DB instance. To avoid this, monitor your resource utilization, examine the workload running on your DB instance, and make optimizations where necessary. If optimizations don't improve the instance metrics and mitigate the resource exhaustion, consider scaling up your DB instance before you reach its limits. For more information on available DB instance classes and their specifications, see [Amazon Aurora DB instance classes](Concepts.DBInstanceClass.md).

# Recommendations for MySQL features in Aurora MySQL
Recommendations for MySQL features

The following features are available in Aurora MySQL for MySQL compatibility. However, they have performance, scalability, stability, or compatibility issues in the Aurora environment. Thus, we recommend that you follow certain guidelines in your use of these features. For example, we recommend that you don't use certain features for production Aurora deployments.

**Topics**
+ [

## Using multithreaded replication in Aurora MySQL
](#AuroraMySQL.BestPractices.MTReplica)
+ [

## Invoking AWS Lambda functions using native MySQL functions
](#AuroraMySQL.BestPractices.Lambda)
+ [

## Avoiding XA transactions with Amazon Aurora MySQL
](#AuroraMySQL.BestPractices.XA)
+ [

## Keeping foreign keys turned on during DML statements
](#Aurora.BestPractices.ForeignKeys)
+ [

## Configuring how frequently the log buffer is flushed
](#AuroraMySQL.BestPractices.Flush)
+ [

## Minimizing and troubleshooting Aurora MySQL deadlocks
](#AuroraMySQL.BestPractices.deadlocks)

## Using multithreaded replication in Aurora MySQL


With multithreaded binary log replication, a SQL thread reads events from the relay log and queues them up for SQL worker threads to apply. The SQL worker threads are managed by a coordinator thread. The binary log events are applied in parallel when possible.

Multithreaded replication is supported in Aurora MySQL version 3, and in Aurora MySQL version 2.12.1 and higher.

For Aurora MySQL versions lower than 3.04, Aurora uses single-threaded replication by default when an Aurora MySQL DB cluster is used as a read replica for binary log replication.

Earlier versions of Aurora MySQL version 2 inherited several issues regarding multithreaded replication from MySQL Community Edition. For those versions, we recommend that you not use multithreaded replication in production.

If you do use multithreaded replication, we recommend that you test it thoroughly.

For more information about using replication in Amazon Aurora, see [Replication with Amazon Aurora](Aurora.Replication.md). For more information about multithreaded replication in Aurora MySQL, see [Multithreaded binary log replication](binlog-optimization.md#binlog-optimization-multithreading). 

## Invoking AWS Lambda functions using native MySQL functions


We recommend using the native MySQL functions `lambda_sync` and `lambda_async` to invoke Lambda functions.

If you are using the deprecated `mysql.lambda_async` procedure, we recommend that you wrap calls to the `mysql.lambda_async` procedure in a stored procedure. You can call this stored procedure from different sources, such as triggers or client code. This approach can help to avoid impedance mismatch issues and make it easier for your database programmers to invoke Lambda functions.

For more information on invoking Lambda functions from Amazon Aurora, see [Invoking a Lambda function from an Amazon Aurora MySQL DB cluster](AuroraMySQL.Integrating.Lambda.md).

## Avoiding XA transactions with Amazon Aurora MySQL


We recommend that you don't use eXtended Architecture (XA) transactions with Aurora MySQL, because they can cause long recovery times if an XA transaction was left in the `PREPARED` state. If you must use XA transactions with Aurora MySQL, follow these best practices:
+ Don't leave an XA transaction open in the `PREPARED` state.
+ Keep XA transactions as small as possible.

For more information about using XA transactions with MySQL, see [XA transactions](https://dev.mysql.com/doc/refman/8.0/en/xa.html) in the MySQL documentation.
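
The practices above can be sketched as a helper that keeps the XA transaction small and never leaves it in the `PREPARED` state. The `cursor` object, `xid` value, and statements below are illustrative (any DB-API-style cursor works); the XA statements themselves follow standard MySQL syntax.

```python
def run_xa_transaction(cursor, xid, statements):
    """Run a small XA transaction and commit immediately after prepare,
    so the transaction is never left in the PREPARED state."""
    cursor.execute(f"XA START '{xid}'")
    try:
        for stmt in statements:  # keep this list short: small XA transactions
            cursor.execute(stmt)
        cursor.execute(f"XA END '{xid}'")
        cursor.execute(f"XA PREPARE '{xid}'")
        cursor.execute(f"XA COMMIT '{xid}'")  # commit right after prepare
    except Exception:
        # From the ACTIVE state, MySQL requires XA END before XA ROLLBACK.
        try:
            cursor.execute(f"XA END '{xid}'")
        except Exception:
            pass  # transaction was already ended
        cursor.execute(f"XA ROLLBACK '{xid}'")
        raise
```

On failure, the helper rolls back rather than leaving the transaction open, which is the condition that causes long recovery times.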

## Keeping foreign keys turned on during DML statements


We strongly recommend that you don't run any data definition language (DDL) statements when the `foreign_key_checks` variable is set to `0` (off).

If you need to insert or update rows that require a transient violation of foreign keys, follow these steps:

1. Set `foreign_key_checks` to `0`.

1. Make your data manipulation language (DML) changes.

1. Make sure that your completed changes don't violate any foreign key constraints.

1. Set `foreign_key_checks` to `1` (on).

In addition, follow these other best practices for foreign key constraints:
+ Make sure that your client applications don't set the `foreign_key_checks` variable to `0` as a part of the `init_connect` variable.
+ If a restore from a logical backup such as `mysqldump` fails or is incomplete, make sure that `foreign_key_checks` is set to `1` before starting any other operations in the same session. A logical backup sets `foreign_key_checks` to `0` when it starts.
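
The steps above can be wrapped in a small helper so that `foreign_key_checks` is always restored to `1`, even if a DML statement fails mid-sequence. The `cursor` object here is illustrative (any DB-API-style cursor works):

```python
def run_with_fk_checks_off(cursor, dml_statements):
    """Temporarily set foreign_key_checks to 0, run DML statements,
    and always restore foreign_key_checks to 1 afterward."""
    cursor.execute("SET foreign_key_checks = 0")
    try:
        for stmt in dml_statements:  # DML only -- never run DDL here
            cursor.execute(stmt)
    finally:
        # Restore the check even if a statement fails, so later
        # operations in this session validate foreign keys again.
        cursor.execute("SET foreign_key_checks = 1")
```

The `finally` block guards against the failure mode described for logical restores: a session left with `foreign_key_checks` set to `0`.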

## Configuring how frequently the log buffer is flushed


In MySQL Community Edition, to make transactions durable, the InnoDB log buffer must be flushed to durable storage. You use the `innodb_flush_log_at_trx_commit` parameter to configure how frequently the log buffer is flushed to disk.

When you set the `innodb_flush_log_at_trx_commit` parameter to the default value of 1, the log buffer is flushed at each transaction commit. This setting helps to keep the database [ACID](https://dev.mysql.com/doc/refman/5.7/en/glossary.html#glos_acid) compliant. We recommend that you keep the default setting of 1.

Changing `innodb_flush_log_at_trx_commit` to a nondefault value can help reduce data manipulation language (DML) latency, but sacrifices the durability of the log records. This lack of durability makes the database ACID noncompliant. We recommend that your databases be ACID compliant to avoid the risk of data loss in the event of a server restart. For more information on this parameter, see [innodb\_flush\_log\_at\_trx\_commit](https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_flush_log_at_trx_commit) in the MySQL documentation.

In Aurora MySQL, redo log processing is offloaded to the storage layer, so no flushing to log files occurs on the DB instance. When a write is issued, redo logs are sent from the writer DB instance directly to the Aurora cluster volume. The only writes that cross the network are redo log records. No pages are ever written from the database tier.

By default, each thread committing a transaction waits for confirmation from the Aurora cluster volume. This confirmation indicates that this record and all previous redo log records are written and have achieved [quorum](https://aws.amazon.com/blogs/database/amazon-aurora-under-the-hood-quorum-and-correlated-failure/). Persisting the log records and achieving quorum make the transaction durable, whether through autocommit or explicit commit. For more information on the Aurora storage architecture, see [Amazon Aurora storage demystified](https://d1.awsstatic.com/events/reinvent/2020/Amazon_Aurora_storage_demystified_DAT401.pdf).

Aurora MySQL doesn't flush logs to data files as MySQL Community Edition does. However, you can use the `innodb_flush_log_at_trx_commit` parameter to relax durability constraints when writing redo log records to the Aurora cluster volume.

For Aurora MySQL version 2:
+ `innodb_flush_log_at_trx_commit` = 0 or 2 – The database doesn't wait for confirmation that the redo log records are written to the Aurora cluster volume.
+ `innodb_flush_log_at_trx_commit` = 1 – The database waits for confirmation that the redo log records are written to the Aurora cluster volume.

For Aurora MySQL version 3:
+ `innodb_flush_log_at_trx_commit` = 0 – The database doesn't wait for confirmation that the redo log records are written to the Aurora cluster volume.
+ `innodb_flush_log_at_trx_commit` = 1 or 2 – The database waits for confirmation that the redo log records are written to the Aurora cluster volume.

Therefore, to obtain the same nondefault behavior in Aurora MySQL version 3 that you would with the value set to 0 or 2 in Aurora MySQL version 2, set the parameter to 0.
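
The version-specific behavior above can be summarized as a small lookup. The function below is purely illustrative and simply encodes the two lists:

```python
def waits_for_storage_ack(aurora_mysql_major_version, setting):
    """Return True if a committing thread waits for confirmation that
    redo log records are written to the Aurora cluster volume."""
    if aurora_mysql_major_version == 2:
        return setting == 1        # version 2: only 1 waits; 0 and 2 don't
    if aurora_mysql_major_version == 3:
        return setting in (1, 2)   # version 3: 1 and 2 both wait; only 0 doesn't
    raise ValueError("unknown Aurora MySQL major version")
```

The lookup makes the migration point explicit: a value of `2` that skipped the wait in version 2 waits in version 3, so only `0` reproduces the old nondefault behavior.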

While these settings can lower DML latency to the client, they can also result in data loss in the event of a failover or restart. Therefore, we recommend that you keep the `innodb_flush_log_at_trx_commit` parameter set to the default value of 1.

While data loss can occur in both MySQL Community Edition and Aurora MySQL, behavior differs in each database because of their different architectures. These architectural differences can lead to varying degrees of data loss. To make sure that your database is ACID compliant, always set `innodb_flush_log_at_trx_commit` to 1.

**Note**  
In Aurora MySQL version 3, before you can change `innodb_flush_log_at_trx_commit` to a value other than 1, you must first change the value of `innodb_trx_commit_allow_data_loss` to 1. By doing so, you acknowledge the risk of data loss.

## Minimizing and troubleshooting Aurora MySQL deadlocks


Workloads that concurrently modify records on the same data page and regularly cause constraint violations on unique secondary indexes or foreign keys might experience increased deadlocks and lock wait timeouts. These deadlocks and timeouts are because of a MySQL Community Edition [bug fix](https://bugs.mysql.com/bug.php?id=98324).

This fix is included in MySQL Community Edition versions 5.7.26 and higher, and was backported into Aurora MySQL versions 2.10.3 and higher. The fix is necessary to enforce *serializability*: it implements additional locking for these types of data manipulation language (DML) operations on changes made to records in an InnoDB table. This issue was uncovered as part of an investigation into deadlock issues introduced by a previous MySQL Community Edition [bug fix](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-26.html).

The fix changed the internal handling for the *partial rollback* of a tuple (row) update in the InnoDB storage engine. Operations that generate constraint violations on foreign keys or unique secondary indexes cause partial rollback. This includes, but isn't limited to, concurrent `INSERT...ON DUPLICATE KEY UPDATE`, `REPLACE INTO`, and `INSERT IGNORE` statements (*upserts*).

In this context, partial rollback doesn't refer to the rollback of application-level transactions, but rather an internal InnoDB rollback of changes to a clustered index, when a constraint violation is encountered. For example, a duplicate key value is found during an upsert operation.

In a normal insert operation, InnoDB atomically creates [clustered](https://dev.mysql.com/doc/refman/5.7/en/innodb-index-types.html) and secondary index entries for each index. If InnoDB detects a duplicate value on a unique secondary index during an upsert operation, the inserted entry in the clustered index has to be reverted (partial rollback), and the update then has to be applied to the existing duplicate row. During this internal partial rollback step, InnoDB must lock each record seen as part of the operation. The fix ensures transaction serializability by introducing additional locking after the partial rollback.

### Minimizing InnoDB deadlocks


You can take the following approaches to reduce the frequency of deadlocks in your database instance. More examples can be found in the [MySQL documentation](https://bugs.mysql.com/bug.php?id=98324).

1. To reduce the chances of deadlocks, commit transactions immediately after making a related set of changes. You can do this by breaking up large transactions (multiple row updates between commits) into smaller ones. If you're batch inserting rows, then try to reduce batch insert sizes, especially when using the upsert operations mentioned previously.

   To reduce the number of possible partial rollbacks, you can try some of the following approaches:

   1. Replace batch insert operations with inserting one row at a time. This can reduce the amount of time where locks are held by transactions that might have conflicts.

   1. Instead of using `REPLACE INTO`, rewrite the SQL statement as a multistatement transaction such as the following:

      ```
      BEGIN;
      DELETE conflicting rows;
      INSERT new rows;
      COMMIT;
      ```

   1. Instead of using `INSERT...ON DUPLICATE KEY UPDATE`, rewrite the SQL statement as a multistatement transaction such as the following:

      ```
      BEGIN;
      SELECT rows that conflict on secondary indexes;
      UPDATE conflicting rows;
      INSERT new rows;
      COMMIT;
      ```

1. Avoid long-running transactions, active or idle, that might hold onto locks. This includes interactive MySQL client sessions that might be open for an extended period with an uncommitted transaction. When optimizing transaction sizes or batch sizes, the impact can vary depending on a number of factors such as concurrency, number of duplicates, and table structure. Any changes should be implemented and tested based on your workload.

1. In some situations, deadlocks can occur when two transactions attempt to access the same datasets, either in one or multiple tables, in different orders. To prevent this, you can modify the transactions to access the data in the same order, thereby serializing the access. For example, create a queue of transactions to be completed. This approach can help to avoid deadlocks when multiple transactions occur concurrently.

1. Adding carefully chosen indexes to your tables can improve selectivity and reduce the need to access rows, which leads to less locking.

1. If you encounter [gap locking](https://dev.mysql.com/doc/refman/5.7/en/innodb-locking.html#innodb-gap-locks), you can modify the transaction isolation level to `READ COMMITTED` for the session or transaction to prevent it. For more information on InnoDB isolation levels and their behaviors, see [Transaction isolation levels](https://dev.mysql.com/doc/refman/5.7/en/innodb-transaction-isolation-levels.html) in the MySQL documentation.

**Note**  
While you can take precautions to reduce the possibility of deadlocks occurring, deadlocks are an expected database behavior and can still occur. Applications should have the necessary logic to handle deadlocks when they are encountered. For example, implement retry and backoff logic in the application. It's best to address the root cause of the issue, but if a deadlock does occur, the application can wait and retry.
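
The retry-and-backoff logic described in the note can be sketched as follows. MySQL reports deadlocks with error code 1213 (`ER_LOCK_DEADLOCK`); the `run_transaction` callable and the retry limits here are illustrative, not prescriptive:

```python
import random
import time

DEADLOCK_ERROR_CODE = 1213  # MySQL ER_LOCK_DEADLOCK

def run_with_deadlock_retry(run_transaction, max_attempts=3, base_delay=0.1):
    """Retry a transaction on deadlock, with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return run_transaction()
        except Exception as err:
            code = getattr(err, "errno", None)
            if code != DEADLOCK_ERROR_CODE or attempt == max_attempts - 1:
                raise  # not a deadlock, or out of retries
            # Back off exponentially, with jitter to avoid retry collisions
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))
```

Most MySQL client libraries expose the server error code on the raised exception (shown here as an `errno` attribute); adapt the check to the driver you use.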

### Monitoring InnoDB deadlocks


[Deadlocks](https://dev.mysql.com/doc/refman/8.0/en/glossary.html#glos_deadlock) can occur in MySQL when application transactions take table-level and row-level locks in a way that results in circular waiting. An occasional InnoDB deadlock isn't necessarily an issue, because the InnoDB storage engine detects the condition immediately and rolls back one of the transactions automatically. If you encounter deadlocks frequently, we recommend reviewing and modifying your application to alleviate performance issues and avoid deadlocks. When [deadlock detection](https://dev.mysql.com/doc/refman/8.0/en/glossary.html#glos_deadlock_detection) is turned on (the default), InnoDB automatically detects transaction deadlocks and rolls back a transaction or transactions to break the deadlock. InnoDB tries to pick small transactions to roll back, where the size of a transaction is determined by the number of rows inserted, updated, or deleted. You can use the following tools to monitor deadlocks:
+ `SHOW ENGINE` statement – The `SHOW ENGINE INNODB STATUS \G` statement contains [details](https://dev.mysql.com/doc/refman/5.7/en/show-engine.html) of the most recent deadlock encountered on the database since the last restart.
+ MySQL error log – If you encounter frequent deadlocks where the output of the `SHOW ENGINE` statement is inadequate, you can turn on the [innodb\_print\_all\_deadlocks](https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_print_all_deadlocks) DB cluster parameter.

  When this parameter is turned on, information about all deadlocks in InnoDB user transactions is recorded in the Aurora MySQL [error log](https://dev.mysql.com/doc/refman/8.0/en/error-log.html).
+ Amazon CloudWatch metrics – We also recommend that you proactively monitor deadlocks using the CloudWatch metric `Deadlocks`. For more information, see [Instance-level metrics for Amazon Aurora](Aurora.AuroraMonitoring.Metrics.md#Aurora.AuroraMySQL.Monitoring.Metrics.instances).
+ Amazon CloudWatch Logs – With CloudWatch Logs, you can view metrics, analyze log data, and create real-time alarms. For more information, see [Monitor errors in Amazon Aurora MySQL and Amazon RDS for MySQL using Amazon CloudWatch and send notifications using Amazon SNS](https://aws.amazon.com/blogs/database/monitor-errors-in-amazon-aurora-mysql-and-amazon-rds-for-mysql-using-amazon-cloudwatch-and-send-notifications-using-amazon-sns/).

  Using CloudWatch Logs with `innodb_print_all_deadlocks` turned on, you can configure alarms to notify you when the number of deadlocks exceeds a given threshold. To define a threshold, we recommend that you observe your trends and use a value based on your normal workload.
+ Performance Insights – When you use Performance Insights, you can monitor the `innodb_deadlocks` and `innodb_lock_wait_timeout` metrics. For more information on these metrics, see [Non-native counters for Aurora MySQL](USER_PerfInsights_Counters.md#USER_PerfInsights_Counters.Aurora_MySQL.NonNative).
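
One simple way to derive an alarm threshold from your observed trends, as suggested above, is the baseline mean plus a few standard deviations. The sample counts and the multiplier below are illustrative; use values drawn from your own normal workload:

```python
import statistics

def deadlock_alarm_threshold(baseline_counts, num_stdevs=3):
    """Suggest a CloudWatch alarm threshold for the Deadlocks metric,
    from per-period counts observed under normal workload."""
    mean = statistics.mean(baseline_counts)
    stdev = statistics.pstdev(baseline_counts)
    return mean + num_stdevs * stdev

# Illustrative: deadlock counts per 5-minute period under normal load
baseline = [0, 1, 0, 2, 1, 0, 1, 1]
threshold = deadlock_alarm_threshold(baseline)
```

An alarm on the `Deadlocks` metric set just above this value fires only when deadlock frequency clearly departs from the observed baseline.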

# Evaluating DB instance usage for Aurora MySQL with Amazon CloudWatch metrics


You can use CloudWatch metrics to monitor your DB instance throughput and determine whether your DB instance class provides sufficient resources for your applications. For information about your DB instance class limits, see [Hardware specifications for DB instance classes for Aurora](Concepts.DBInstanceClass.Summary.md). Check the specifications for your DB instance class to find its network performance.

If your DB instance usage is near the instance class limit, then performance might begin to slow. The CloudWatch metrics can confirm this situation so that you can plan to manually scale up to a larger instance class.

Combine the following CloudWatch metrics values to find out if you are nearing the instance class limit:
+ **NetworkThroughput** – The amount of network throughput received and transmitted by the clients for each instance in the Aurora DB cluster. This throughput value doesn't include network traffic between instances in the DB cluster and the cluster volume. 
+ **StorageNetworkThroughput** – The amount of network throughput received and sent to the Aurora storage subsystem by each instance in the Aurora DB cluster. 

Add **NetworkThroughput** to **StorageNetworkThroughput** to find the combined client and storage network throughput used by each instance in your Aurora DB cluster. The instance class limit for your instance should be greater than the sum of these two metrics.

 You can use the following metrics to review additional details of the network traffic from your client applications when sending and receiving:
+ **NetworkReceiveThroughput** – The amount of network throughput received from clients by each DB instance in the Aurora MySQL DB cluster. This throughput doesn't include network traffic between instances in the DB cluster and the cluster volume.
+ **NetworkTransmitThroughput** – The amount of network throughput sent to clients by each instance in the Aurora DB cluster. This throughput doesn't include network traffic between instances in the DB cluster and the cluster volume.
+ **StorageNetworkReceiveThroughput** – The amount of network throughput received from the Aurora storage subsystem by each instance in the DB cluster.
+ **StorageNetworkTransmitThroughput** – The amount of network throughput sent to the Aurora storage subsystem by each instance in the DB cluster.

Add all of these metrics together to evaluate how your network usage compares to the DB instance class limit. The instance class limit should be greater than the sum of these combined metrics.
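
The comparison above can be expressed as a simple check. The metric values and the instance class limit below are illustrative, not real instance specifications:

```python
def nearing_network_limit(metrics_mbps, instance_limit_mbps, headroom=0.8):
    """Return True if combined client and storage network throughput
    exceeds the given fraction (headroom) of the instance class limit."""
    total = sum(metrics_mbps.values())
    return total > headroom * instance_limit_mbps

# Illustrative 1-minute averages, in megabytes per second
metrics = {
    "NetworkReceiveThroughput": 40.0,
    "NetworkTransmitThroughput": 55.0,
    "StorageNetworkReceiveThroughput": 60.0,
    "StorageNetworkTransmitThroughput": 80.0,
}
```

Here the combined throughput is 235 MB/s, so an instance class limiting network performance to 250 MB/s would be nearly saturated, while a 400 MB/s limit would leave comfortable headroom.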

The network limits and CPU usage for storage are directly related. When the network throughput increases, then the CPU usage also increases. Monitoring the CPU and network usage provides information about how and why the resources are being exhausted.

To help minimize network usage, you can consider the following:
+ Using a larger DB instance class.
+ Dividing the write requests in batches to reduce overall transactions.
+ Directing the read-only workload to a read-only instance.
+ Deleting any unused indexes.

# Troubleshooting Amazon Aurora MySQL database performance
Troubleshooting Aurora MySQL performance

This topic focuses on some common Aurora MySQL DB performance issues, and how to troubleshoot or collect information to remediate these issues quickly. We divide database performance into two categories:
+ Server performance – The entire database server runs slower.
+ Query performance – One or more queries take longer to run.

## AWS monitoring options


We recommend that you use the following AWS monitoring options to help with troubleshooting:
+ Amazon CloudWatch – Amazon CloudWatch monitors your AWS resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are variables you can measure for your resources and applications. For more information, see [What is Amazon CloudWatch?](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html).

  You can view all of the system metrics and process information for your DB instances on the AWS Management Console. You can configure your Aurora MySQL DB cluster to publish general, slow, audit, and error log data to a log group in Amazon CloudWatch Logs. This allows you to view trends, maintain logs if a host is impacted, and create a baseline for "normal" performance to easily identify anomalies or changes. For more information, see [Publishing Amazon Aurora MySQL logs to Amazon CloudWatch Logs](AuroraMySQL.Integrating.CloudWatch.md).
+ Enhanced Monitoring – To enable additional Amazon CloudWatch metrics for an Aurora MySQL database, turn on Enhanced Monitoring. When you create or modify an Aurora DB cluster, select **Enable Enhanced Monitoring**. This allows Aurora to publish performance metrics to CloudWatch. Some of the key metrics available include CPU usage, database connections, storage usage, and query latency. These can help identify performance bottlenecks.

  The amount of information transferred for a DB instance is directly proportional to the defined granularity for Enhanced Monitoring. A smaller monitoring interval results in more frequent reporting of OS metrics and increases your monitoring cost. To manage costs, set different granularities for different instances in your AWS accounts. The default granularity at creation of an instance is 60 seconds. For more information, see [Cost of Enhanced Monitoring](USER_Monitoring.OS.md#USER_Monitoring.OS.cost).
+ Performance Insights – You can view all of the database call metrics. This includes DB locks, waits, and the number of rows processed, all of which you can use for troubleshooting. When you create or modify an Aurora DB cluster, select **Turn on Performance Insights**. By default, Performance Insights has a 7-day data retention period, but you can extend it to analyze longer-term performance trends. For retention longer than 7 days, you need to upgrade to the paid tier. For more information, see [Performance Insights pricing](https://aws.amazon.com/rds/performance-insights/pricing/). You can set the data retention period for each Aurora DB instance separately. For more information, see [Monitoring DB load with Performance Insights on Amazon Aurora](USER_PerfInsights.md).

## Most common reasons for Aurora MySQL database performance issues
Most common reasons for DB performance issues

You can use the following steps to troubleshoot performance issues in your Aurora MySQL database. We list these steps in the logical order of investigation, but they're not intended to be strictly linear. A discovery in one step might lead you to another, opening additional investigative paths.

1. [Workload](aurora-mysql-troubleshooting-workload.md) – Understand your database workload.

1. [Logging](aurora-mysql-troubleshooting-logging.md) – Review all of the database logs.

1. [Database connections ](mysql-troubleshooting-dbconn.md) – Make sure that the connections between your applications and your database are reliable.

1. [Query performance](aurora-mysql-troubleshooting-query.md) – Examine your query execution plans to see if they've changed. Code changes can cause plans to change.

# Troubleshooting workload issues for Aurora MySQL databases
Troubleshooting workload issues

Database workload can be viewed as reads and writes. With an understanding of "normal" database workload, you can tune queries and the database server to meet demand as it changes. There are a number of different reasons why performance can change, so the first step is to understand what has changed.
+ Has there been a major or minor version upgrade?

  A major version upgrade includes changes to the engine code, particularly in the optimizer, that can change the query execution plan. When upgrading database versions, especially major versions, it's very important that you analyze the database workload and tune accordingly. Tuning can involve optimizing and rewriting queries, or adding and updating parameter settings, depending on the results of testing. Understanding what is causing the impact will allow you to start focusing on that specific area.

  For more information, see [What is new in MySQL 8.0](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html) and [Server and status variables and options added, deprecated, or removed in MySQL 8.0](https://dev.mysql.com/doc/refman/8.0/en/added-deprecated-removed.html) in the MySQL documentation, and [Comparing Aurora MySQL version 2 and Aurora MySQL version 3](AuroraMySQL.Compare-v2-v3.md).
+ Has there been an increase in data being processed (row counts)?
+ Are there more queries running concurrently?
+ Are there schema or database changes?
+ Have there been code defects or fixes?

**Contents**
+ [

## Instance host metrics
](#ams-workload-instance)
  + [

### CPU usage
](#ams-workload-cpu)
  + [

### Memory usage
](#ams-workload-instance-memory)
  + [

### Network throughput
](#ams-workload-network)
+ [

## Database metrics
](#ams-workload-db)
+ [

# Troubleshooting memory usage issues for Aurora MySQL databases
](ams-workload-memory.md)
  + [

## Example 1: Continuous high memory usage
](ams-workload-memory.md#ams-workload-memory.example1)
  + [

## Example 2: Transient memory spikes
](ams-workload-memory.md#ams-workload-memory.example2)
  + [

## Example 3: Freeable memory drops continuously and isn't reclaimed
](ams-workload-memory.md#ams-workload-memory.example3)
+ [

# Troubleshooting out-of-memory issues for Aurora MySQL databases
](AuroraMySQLOOM.md)

## Instance host metrics


Monitor instance host metrics such as CPU, memory, and network activity to help understand whether there has been a workload change. There are two main concepts for understanding workload changes:
+ Utilization – The usage of a device, such as CPU or disk. It can be time-based or capacity-based.
  + Time-based – The amount of time that a resource is busy over a particular observation period.
  + Capacity-based – The amount of throughput that a system or component can deliver, as a percentage of its capacity.
+ Saturation – The degree to which more work is required of a resource than it can process. When capacity-based usage reaches 100%, the extra work can't be processed and must be queued.

### CPU usage


You can use the following tools to identify CPU usage and saturation:
+ CloudWatch provides the `CPUUtilization` metric. If this reaches 100%, then the instance is saturated. However, CloudWatch metrics are averaged over 1 minute, so they lack the granularity to reveal short CPU spikes.

  For more information on CloudWatch metrics, see [Instance-level metrics for Amazon Aurora](Aurora.AuroraMonitoring.Metrics.md#Aurora.AuroraMySQL.Monitoring.Metrics.instances).
+ Enhanced Monitoring provides metrics returned by the operating system `top` command. It shows load averages and the following CPU states, with 1-second granularity:
  + `Idle (%)` = Idle time
  + `IRQ (%)` = Software interrupts
  + `Nice (%)` = Nice time for processes with a [niced](https://en.wikipedia.org/wiki/Nice_(Unix)) priority.
  + `Steal (%)` = Time spent serving other tenants (virtualization related)
  + `System (%)` = System time
  + `User (%)` = User time
  + `Wait (%)` = I/O wait

  For more information on Enhanced Monitoring metrics, see [OS metrics for Aurora](USER_Monitoring-Available-OS-Metrics.md#USER_Monitoring-Available-OS-Metrics-RDS).

### Memory usage


If the system is under memory pressure and resource consumption is reaching saturation, you typically observe a high degree of page scanning, paging, swapping, and out-of-memory errors.

You can use the following tools to identify memory usage and saturation:

CloudWatch provides the `FreeableMemory` metric, which shows how much memory can be reclaimed by flushing some of the OS caches, plus the current free memory.

For more information on CloudWatch metrics, see [Instance-level metrics for Amazon Aurora](Aurora.AuroraMonitoring.Metrics.md#Aurora.AuroraMySQL.Monitoring.Metrics.instances).

Enhanced Monitoring provides the following metrics that can help you identify memory usage issues:
+ `Buffers (KB)` – The amount of memory used for buffering I/O requests before writing to the storage device, in kilobytes.
+ `Cached (KB)` – The amount of memory used for caching file system–based I/O.
+ `Free (KB)` – The amount of unassigned memory, in kilobytes.
+ `Swap` – Swap memory metrics: Cached, Free, and Total.

For example, if you see that your DB instance uses `Swap` memory, then the total amount of memory for your workload is larger than your instance currently has available. We recommend increasing the size of your DB instance or tuning your workload to use less memory.

For more information on Enhanced Monitoring metrics, see [OS metrics for Aurora](USER_Monitoring-Available-OS-Metrics.md#USER_Monitoring-Available-OS-Metrics-RDS).

For more detailed information on using the Performance Schema and `sys` schema to determine which connections and components are using memory, see [Troubleshooting memory usage issues for Aurora MySQL databases](ams-workload-memory.md).

### Network throughput


CloudWatch provides the following metrics for total network throughput, all averaged over 1 minute:
+ `NetworkReceiveThroughput` – The amount of network throughput received from clients by each instance in the Aurora DB cluster.
+ `NetworkTransmitThroughput` – The amount of network throughput sent to clients by each instance in the Aurora DB cluster.
+ `NetworkThroughput` – The amount of network throughput both received from and transmitted to clients by each instance in the Aurora DB cluster.
+ `StorageNetworkReceiveThroughput` – The amount of network throughput received from the Aurora storage subsystem by each instance in the DB cluster.
+ `StorageNetworkTransmitThroughput` – The amount of network throughput sent to the Aurora storage subsystem by each instance in the Aurora DB cluster.
+ `StorageNetworkThroughput` – The amount of network throughput received from and sent to the Aurora storage subsystem by each instance in the Aurora DB cluster.

For more information on CloudWatch metrics, see [Instance-level metrics for Amazon Aurora](Aurora.AuroraMonitoring.Metrics.md#Aurora.AuroraMySQL.Monitoring.Metrics.instances).

Enhanced Monitoring provides the `network` received (**RX**) and transmitted (**TX**) graphs, with up to 1-second granularity.

For more information on Enhanced Monitoring metrics, see [OS metrics for Aurora](USER_Monitoring-Available-OS-Metrics.md#USER_Monitoring-Available-OS-Metrics-RDS).

## Database metrics


Examine the following CloudWatch metrics for workload changes:
+ `BlockedTransactions` – The average number of transactions in the database that are blocked per second.
+ `BufferCacheHitRatio` – The percentage of requests that are served by the buffer cache.
+ `CommitThroughput` – The average number of commit operations per second.
+ `DatabaseConnections` – The number of client network connections to the database instance.
+ `Deadlocks` – The average number of deadlocks in the database per second.
+ `DMLThroughput` – The average number of inserts, updates, and deletes per second.
+ `ResultSetCacheHitRatio` – The percentage of requests that are served by the query cache.
+ `RollbackSegmentHistoryListLength` – The length of the history list of undo logs that record committed transactions with delete-marked records.
+ `RowLockTime` – The total time spent acquiring row locks for InnoDB tables.
+ `SelectThroughput` – The average number of select queries per second.

For more information on CloudWatch metrics, see [Instance-level metrics for Amazon Aurora](Aurora.AuroraMonitoring.Metrics.md#Aurora.AuroraMySQL.Monitoring.Metrics.instances).
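Several of these CloudWatch metrics correspond to engine status counters that you can also sample directly from a SQL session. The following is a sketch using standard MySQL status variables (not an Aurora-specific API):

```
SHOW GLOBAL STATUS
  WHERE Variable_name IN ('Innodb_row_lock_time',
                          'Threads_connected',
                          'Com_commit',
                          'Com_select');
```

Sampling these counters at two points in time and taking the difference gives per-interval rates that you can compare against the CloudWatch throughput metrics.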

Consider the following questions when examining the workload:

1. Were there recent changes to the DB instance class, for example reducing the instance size from 8xlarge to 4xlarge, or changing the instance class from db.r5 to db.r6?

1. Can you create a clone and reproduce the issue, or is it happening only on that one instance?

1. Is there server resource exhaustion, such as high CPU usage or memory exhaustion? If so, this could mean that additional hardware is required.

1. Are one or more queries taking longer?

1. Are the changes caused by an upgrade, especially a major version upgrade? If yes, then compare the pre- and post-upgrade metrics.

1. Are there changes in the number of reader DB instances?

1. Have you enabled general, audit, or binary logging? For more information, see [Logging for Aurora MySQL databases](aurora-mysql-troubleshooting-logging.md).

1. Did you enable, disable, or change your use of binary log (binlog) replication?

1. Are there any long-running transactions holding large numbers of row locks? Examine the InnoDB history list length (HLL) for indications of long-running transactions.

   For more information, see [The InnoDB history list length increased significantly](proactive-insights.history-list.md) and the blog post [Why is my SELECT query running slowly on my Amazon Aurora MySQL DB cluster?](https://repost.aws/knowledge-center/aurora-mysql-slow-select-query).

   1. If a large HLL is caused by a write transaction, it means that `UNDO` logs are accumulating (not being purged regularly). In a large write transaction, this accumulation can grow quickly. In MySQL, `UNDO` is stored in the [SYSTEM tablespace](https://dev.mysql.com/doc/refman/5.7/en/innodb-system-tablespace.html). The `SYSTEM` tablespace is not shrinkable. The `UNDO` log might cause the `SYSTEM` tablespace to grow to several GB, or even TB. After the purge, release the allocated space by taking a logical backup (dump) of the data and then importing the dump into a new DB instance.

   1. If a large HLL is caused by a read transaction (long-running query), it can mean that the query is using a large amount of temporary space. Release the temporary space by rebooting. Examine Performance Insights DB metrics for any changes in the `Temp` section, such as `created_tmp_tables`. For more information, see [Monitoring DB load with Performance Insights on Amazon Aurora](USER_PerfInsights.md).

1. Can you split long-running transactions into smaller ones that modify fewer rows?

1. Are there any changes in blocked transactions or increases in deadlocks? Examine Performance Insights DB metrics for any changes in status variables in the `Locks` section, such as `innodb_row_lock_time`, `innodb_row_lock_waits`, and `innodb_dead_locks`. Use 1-minute or 5-minute intervals.

1. Are there increased wait events? Examine Performance Insights wait events and wait types using 1-minute or 5-minute intervals. Analyze the top wait events and see whether they are correlated to workload changes or database contention. For example, `buf_pool mutex` indicates buffer pool contention. For more information, see [Tuning Aurora MySQL with wait events](AuroraMySQL.Managing.Tuning.wait-events.md).
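Several of the checks above can be run directly from a SQL session. The following is a sketch using standard MySQL `information_schema` tables (not Aurora-specific):

```
-- Current InnoDB history list length (HLL)
SELECT count
  FROM information_schema.innodb_metrics
  WHERE name = 'trx_rseg_history_len';

-- Oldest open transactions, the usual cause of a growing HLL
SELECT trx_id, trx_started, trx_mysql_thread_id, trx_rows_locked
  FROM information_schema.innodb_trx
  ORDER BY trx_started
  LIMIT 5;
```

Once you identify the MySQL thread ID of a long-running transaction, you can decide whether to end it with `KILL` or to split the work into smaller transactions.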

# Troubleshooting memory usage issues for Aurora MySQL databases
Troubleshooting Aurora MySQL memory usage issues

While CloudWatch, Enhanced Monitoring, and Performance Insights provide a good overview of memory usage at the operating system level, such as how much memory the database process is using, they don't let you break down which connections or components within the engine might be causing this memory usage.

To troubleshoot this, you can use the Performance Schema and `sys` schema. In Aurora MySQL version 3, memory instrumentation is enabled by default when the Performance Schema is enabled. In Aurora MySQL version 2, only memory instrumentation for the Performance Schema's own memory usage is enabled by default. For information on the Performance Schema tables available to track memory usage, and on enabling Performance Schema memory instrumentation, see [Memory summary tables](https://dev.mysql.com/doc/refman/8.3/en/performance-schema-memory-summary-tables.html) in the MySQL documentation. For more information on using the Performance Schema with Performance Insights, see [Overview of the Performance Schema for Performance Insights on Aurora MySQL](USER_PerfInsights.EnableMySQL.md).
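In versions where the memory instruments aren't fully enabled by default, you can turn them on at runtime. The following is a sketch using the standard Performance Schema setup table; instruments enabled this way collect data only from that point forward:

```
UPDATE performance_schema.setup_instruments
  SET ENABLED = 'YES'
  WHERE NAME LIKE 'memory/%';
```

This runtime change doesn't persist across a restart; to make it permanent, use the corresponding `performance-schema-instrument` parameter instead.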

While detailed information is available in the Performance Schema to track current memory usage, the MySQL [sys schema](https://dev.mysql.com/doc/refman/8.0/en/sys-schema.html) has views on top of Performance Schema tables that you can use to quickly pinpoint where memory is being used.

In the `sys` schema, the following views are available to track memory usage by connection, component, and query.


| View | Description | 
| --- | --- | 
|  [memory\_by\_host\_by\_current\_bytes](https://dev.mysql.com/doc/refman/8.0/en/sys-memory-by-host-by-current-bytes.html)  |  Provides information on engine memory usage by host. This can be useful for identifying which application servers or client hosts are consuming memory.  | 
|  [memory\_by\_thread\_by\_current\_bytes](https://dev.mysql.com/doc/refman/8.0/en/sys-memory-by-thread-by-current-bytes.html)  |  Provides information on engine memory usage by thread ID. The thread ID in MySQL can be a client connection or a background thread. You can map thread IDs to MySQL connection IDs by using the [sys.processlist](https://dev.mysql.com/doc/refman/8.0/en/sys-processlist.html) view or the [performance\_schema.threads](https://dev.mysql.com/doc/refman/8.0/en/performance-schema-threads-table.html) table.  | 
|  [memory\_by\_user\_by\_current\_bytes](https://dev.mysql.com/doc/refman/8.0/en/sys-memory-by-user-by-current-bytes.html)  |  Provides information on engine memory usage by user. This can be useful for identifying which user accounts or clients are consuming memory.  | 
|  [memory\_global\_by\_current\_bytes](https://dev.mysql.com/doc/refman/8.0/en/sys-memory-global-by-current-bytes.html)  |  Provides information on engine memory usage by engine component. This can be useful for identifying memory usage globally by engine buffers or components. For example, you might see the `memory/innodb/buf_buf_pool` event for the InnoDB buffer pool, or the `memory/sql/Prepared_statement::main_mem_root` event for prepared statements.  | 
|  [memory\_global\_total](https://dev.mysql.com/doc/refman/8.0/en/sys-memory-global-total.html)  |  Provides an overview of total tracked memory usage in the database engine.  | 
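As a quick starting point, you can query the total-tracked-memory view and then break usage down by user (standard `sys` schema views):

```
-- Total memory tracked by the engine
SELECT * FROM sys.memory_global_total;

-- Break usage down by user account
SELECT * FROM sys.memory_by_user_by_current_bytes;
```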

In Aurora MySQL version 3.05 and higher, you can also track maximum memory usage by statement digest in the [Performance Schema statement summary tables](https://dev.mysql.com/doc/refman/8.0/en/performance-schema-statement-summary-tables.html). The statement summary tables contain normalized statement digests and aggregated statistics on their execution. The `MAX_TOTAL_MEMORY` column can help you identify maximum memory used by query digest since the statistics were last reset, or since the database instance was restarted. This can be useful in identifying specific queries that might be consuming a lot of memory.
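For example, the following sketch finds the statement digests with the highest maximum memory usage (Aurora MySQL version 3.05 and higher, per the text above; `LEFT()` just shortens the digest text for readability):

```
SELECT SCHEMA_NAME,
       LEFT(DIGEST_TEXT, 60) AS digest_text,
       COUNT_STAR,
       MAX_TOTAL_MEMORY
  FROM performance_schema.events_statements_summary_by_digest
  ORDER BY MAX_TOTAL_MEMORY DESC
  LIMIT 5;
```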

**Note**  
The Performance Schema and `sys` schema show you the current memory usage on the server, and the high-water marks for memory consumed per connection and engine component. Because the Performance Schema is maintained in memory, information is reset when the DB instance restarts. To maintain a history over time, we recommend that you configure retrieval and storage of this data outside of the Performance Schema.

**Topics**
+ [

## Example 1: Continuous high memory usage
](#ams-workload-memory.example1)
+ [

## Example 2: Transient memory spikes
](#ams-workload-memory.example2)
+ [

## Example 3: Freeable memory drops continuously and isn't reclaimed
](#ams-workload-memory.example3)

## Example 1: Continuous high memory usage


Looking globally at `FreeableMemory` in CloudWatch, we can see that memory usage greatly increased at 2024-03-26 02:59 UTC.

![\[FreeableMemory graph showing high memory usage.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/ams-freeable-memory.png)


This doesn't tell us the whole picture. To determine which component is using the most memory, you can log in to the database and query `sys.memory_global_by_current_bytes`. This view contains a list of memory events that MySQL tracks, along with information on memory allocation per event. Each memory tracking event starts with `memory/%`, followed by other information on which engine component or feature the event is associated with.

For example, `memory/performance_schema/%` is for memory events related to the Performance Schema, `memory/innodb/%` is for InnoDB, and so on. For more information on event naming conventions, see [Performance Schema instrument naming conventions](https://dev.mysql.com/doc/refman/8.0/en/performance-schema-instrument-naming.html) in the MySQL documentation.
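To see which memory events exist for a given component, you can list the instruments themselves (standard Performance Schema):

```
SELECT NAME
  FROM performance_schema.setup_instruments
  WHERE NAME LIKE 'memory/innodb/%'
  LIMIT 10;
```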

From the following query, we can find the likely culprit based on `current_alloc`, but we can also see many `memory/performance_schema/%` events.

```
mysql> SELECT * FROM sys.memory_global_by_current_bytes LIMIT 10;

+-----------------------------------------------------------------------------+---------------+---------------+-------------------+------------+------------+----------------+
| event_name                                                                  | current_count | current_alloc | current_avg_alloc | high_count | high_alloc | high_avg_alloc |
+-----------------------------------------------------------------------------+---------------+---------------+-------------------+------------+------------+----------------+
| memory/sql/Prepared_statement::main_mem_root                                |        512817 | 4.91 GiB      | 10.04 KiB         |     512823 | 4.91 GiB   | 10.04 KiB      |
| memory/performance_schema/prepared_statements_instances                     |           252 | 488.25 MiB    | 1.94 MiB          |        252 | 488.25 MiB | 1.94 MiB       |
| memory/innodb/hash0hash                                                     |             4 | 79.07 MiB     | 19.77 MiB         |          4 | 79.07 MiB  | 19.77 MiB      |
| memory/performance_schema/events_errors_summary_by_thread_by_error          |          1028 | 52.27 MiB     | 52.06 KiB         |       1028 | 52.27 MiB  | 52.06 KiB      |
| memory/performance_schema/events_statements_summary_by_thread_by_event_name |             4 | 47.25 MiB     | 11.81 MiB         |          4 | 47.25 MiB  | 11.81 MiB      |
| memory/performance_schema/events_statements_summary_by_digest               |             1 | 40.28 MiB     | 40.28 MiB         |          1 | 40.28 MiB  | 40.28 MiB      |
| memory/performance_schema/memory_summary_by_thread_by_event_name            |             4 | 31.64 MiB     | 7.91 MiB          |          4 | 31.64 MiB  | 7.91 MiB       |
| memory/innodb/memory                                                        |         15227 | 27.44 MiB     | 1.85 KiB          |      20619 | 33.33 MiB  | 1.66 KiB       |
| memory/sql/String::value                                                    |         74411 | 21.85 MiB     |  307 bytes        |      76867 | 25.54 MiB  |  348 bytes     |
| memory/sql/TABLE                                                            |          8381 | 21.03 MiB     | 2.57 KiB          |       8381 | 21.03 MiB  | 2.57 KiB       |
+-----------------------------------------------------------------------------+---------------+---------------+-------------------+------------+------------+----------------+
10 rows in set (0.02 sec)
```

We mentioned previously that the Performance Schema is stored in memory, which means that it's also tracked in the `performance_schema` memory instrumentation.

**Note**  
If you find that the Performance Schema is using a lot of memory, and want to limit its memory usage, you can tune database parameters based on your requirements. For more information, see [The Performance Schema memory-allocation model](https://dev.mysql.com/doc/refman/8.0/en/performance-schema-memory-model.html) in the MySQL documentation.

For readability, you can rerun the same query but exclude Performance Schema events. The output shows the following:
+ The main memory consumer is `memory/sql/Prepared_statement::main_mem_root`.
+ The `current_alloc` column tells us that MySQL has 4.91 GiB currently allocated to this event.
+ The `high_alloc` column tells us that 4.91 GiB is the high-water mark of `current_alloc` since the stats were last reset or since the server restarted. This means that `memory/sql/Prepared_statement::main_mem_root` is at its highest value.

```
mysql> SELECT * FROM sys.memory_global_by_current_bytes WHERE event_name NOT LIKE 'memory/performance_schema/%' LIMIT 10;

+-----------------------------------------------+---------------+---------------+-------------------+------------+------------+----------------+
| event_name                                    | current_count | current_alloc | current_avg_alloc | high_count | high_alloc | high_avg_alloc |
+-----------------------------------------------+---------------+---------------+-------------------+------------+------------+----------------+
| memory/sql/Prepared_statement::main_mem_root  |        512817 | 4.91 GiB      | 10.04 KiB         |     512823 | 4.91 GiB   | 10.04 KiB      |
| memory/innodb/hash0hash                       |             4 | 79.07 MiB     | 19.77 MiB         |          4 | 79.07 MiB  | 19.77 MiB      |
| memory/innodb/memory                          |         17096 | 31.68 MiB     | 1.90 KiB          |      22498 | 37.60 MiB  | 1.71 KiB       |
| memory/sql/String::value                      |        122277 | 27.94 MiB     |  239 bytes        |     124699 | 29.47 MiB  |  247 bytes     |
| memory/sql/TABLE                              |          9927 | 24.67 MiB     | 2.55 KiB          |       9929 | 24.68 MiB  | 2.55 KiB       |
| memory/innodb/lock0lock                       |          8888 | 19.71 MiB     | 2.27 KiB          |       8888 | 19.71 MiB  | 2.27 KiB       |
| memory/sql/Prepared_statement::infrastructure |        257623 | 16.24 MiB     |   66 bytes        |     257631 | 16.24 MiB  |   66 bytes     |
| memory/mysys/KEY_CACHE                        |             3 | 16.00 MiB     | 5.33 MiB          |          3 | 16.00 MiB  | 5.33 MiB       |
| memory/innodb/sync0arr                        |             3 | 7.03 MiB      | 2.34 MiB          |          3 | 7.03 MiB   | 2.34 MiB       |
| memory/sql/THD::main_mem_root                 |           815 | 6.56 MiB      | 8.24 KiB          |        849 | 7.19 MiB   | 8.67 KiB       |
+-----------------------------------------------+---------------+---------------+-------------------+------------+------------+----------------+
10 rows in set (0.06 sec)
```

From the name of the event, we can tell that this memory is being used for prepared statements. If you want to see which connections are using this memory, you can check [memory\_by\_thread\_by\_current\_bytes](https://dev.mysql.com/doc/refman/8.0/en/sys-memory-by-thread-by-current-bytes.html).

In the following example, each connection has approximately 7 MiB allocated, with a high-water mark of approximately 6.29 MiB (`current_max_alloc`). This makes sense, because the example is using `sysbench` with 80 tables and 800 connections with prepared statements. If you want to reduce memory usage in this scenario, you can optimize your application's usage of prepared statements to reduce memory consumption.

```
mysql> SELECT * FROM sys.memory_by_thread_by_current_bytes;

+-----------+-------------------------------------------+--------------------+-------------------+-------------------+-------------------+-----------------+
| thread_id | user                                      | current_count_used | current_allocated | current_avg_alloc | current_max_alloc | total_allocated |
+-----------+-------------------------------------------+--------------------+-------------------+-------------------+-------------------+-----------------+
|        46 | rdsadmin@localhost                        |                405 | 8.47 MiB          | 21.42 KiB         | 8.00 MiB          | 155.86 MiB      |
|        61 | reinvent@10.0.4.4                         |               1749 | 6.72 MiB          | 3.93 KiB          | 6.29 MiB          | 14.24 MiB       |
|       101 | reinvent@10.0.4.4                         |               1845 | 6.71 MiB          | 3.72 KiB          | 6.29 MiB          | 14.50 MiB       |
|        55 | reinvent@10.0.4.4                         |               1674 | 6.68 MiB          | 4.09 KiB          | 6.29 MiB          | 14.13 MiB       |
|        57 | reinvent@10.0.4.4                         |               1416 | 6.66 MiB          | 4.82 KiB          | 6.29 MiB          | 13.52 MiB       |
|       112 | reinvent@10.0.4.4                         |               1759 | 6.66 MiB          | 3.88 KiB          | 6.29 MiB          | 14.17 MiB       |
|        66 | reinvent@10.0.4.4                         |               1428 | 6.64 MiB          | 4.76 KiB          | 6.29 MiB          | 13.47 MiB       |
|        75 | reinvent@10.0.4.4                         |               1389 | 6.62 MiB          | 4.88 KiB          | 6.29 MiB          | 13.40 MiB       |
|       116 | reinvent@10.0.4.4                         |               1333 | 6.61 MiB          | 5.08 KiB          | 6.29 MiB          | 13.21 MiB       |
|        90 | reinvent@10.0.4.4                         |               1448 | 6.59 MiB          | 4.66 KiB          | 6.29 MiB          | 13.58 MiB       |
|        98 | reinvent@10.0.4.4                         |               1440 | 6.57 MiB          | 4.67 KiB          | 6.29 MiB          | 13.52 MiB       |
|        94 | reinvent@10.0.4.4                         |               1433 | 6.57 MiB          | 4.69 KiB          | 6.29 MiB          | 13.49 MiB       |
|        62 | reinvent@10.0.4.4                         |               1323 | 6.55 MiB          | 5.07 KiB          | 6.29 MiB          | 13.48 MiB       |
|        87 | reinvent@10.0.4.4                         |               1323 | 6.55 MiB          | 5.07 KiB          | 6.29 MiB          | 13.25 MiB       |
|        99 | reinvent@10.0.4.4                         |               1346 | 6.54 MiB          | 4.98 KiB          | 6.29 MiB          | 13.24 MiB       |
|       105 | reinvent@10.0.4.4                         |               1347 | 6.54 MiB          | 4.97 KiB          | 6.29 MiB          | 13.34 MiB       |
|        73 | reinvent@10.0.4.4                         |               1335 | 6.54 MiB          | 5.02 KiB          | 6.29 MiB          | 13.23 MiB       |
|        54 | reinvent@10.0.4.4                         |               1510 | 6.53 MiB          | 4.43 KiB          | 6.29 MiB          | 13.49 MiB       |
.                                                                                                                                                          .
.                                                                                                                                                          .
.                                                                                                                                                          .
|       812 | reinvent@10.0.4.4                         |               1259 | 6.38 MiB          | 5.19 KiB          | 6.29 MiB          | 13.05 MiB       |
|       214 | reinvent@10.0.4.4                         |               1279 | 6.38 MiB          | 5.10 KiB          | 6.29 MiB          | 12.90 MiB       |
|       325 | reinvent@10.0.4.4                         |               1254 | 6.38 MiB          | 5.21 KiB          | 6.29 MiB          | 12.99 MiB       |
|       705 | reinvent@10.0.4.4                         |               1273 | 6.37 MiB          | 5.13 KiB          | 6.29 MiB          | 13.03 MiB       |
|       530 | reinvent@10.0.4.4                         |               1268 | 6.37 MiB          | 5.15 KiB          | 6.29 MiB          | 12.92 MiB       |
|       307 | reinvent@10.0.4.4                         |               1263 | 6.37 MiB          | 5.17 KiB          | 6.29 MiB          | 12.87 MiB       |
|       738 | reinvent@10.0.4.4                         |               1260 | 6.37 MiB          | 5.18 KiB          | 6.29 MiB          | 13.00 MiB       |
|       819 | reinvent@10.0.4.4                         |               1252 | 6.37 MiB          | 5.21 KiB          | 6.29 MiB          | 13.01 MiB       |
|        31 | innodb/srv_purge_thread                   |              17810 | 3.14 MiB          |  184 bytes        | 2.40 MiB          | 205.69 MiB      |
|        38 | rdsadmin@localhost                        |                599 | 1.76 MiB          | 3.01 KiB          | 1.00 MiB          | 25.58 MiB       |
|         1 | sql/main                                  |               3756 | 1.32 MiB          |  367 bytes        | 355.78 KiB        | 6.19 MiB        |
|       854 | rdsadmin@localhost                        |                 46 | 1.08 MiB          | 23.98 KiB         | 1.00 MiB          | 5.10 MiB        |
|        30 | innodb/clone_gtid_thread                  |               1596 | 573.14 KiB        |  367 bytes        | 254.91 KiB        | 970.69 KiB      |
|        40 | rdsadmin@localhost                        |                235 | 245.19 KiB        | 1.04 KiB          | 128.88 KiB        | 808.64 KiB      |
|       853 | rdsadmin@localhost                        |                 96 | 94.63 KiB         | 1009 bytes        | 29.73 KiB         | 422.45 KiB      |
|        36 | rdsadmin@localhost                        |                 33 | 36.29 KiB         | 1.10 KiB          | 16.08 KiB         | 74.15 MiB       |
|        33 | sql/event_scheduler                       |                  3 | 16.27 KiB         | 5.42 KiB          | 16.04 KiB         | 16.27 KiB       |
|        35 | sql/compress_gtid_table                   |                  8 | 14.20 KiB         | 1.77 KiB          | 8.05 KiB          | 18.62 KiB       |
|        25 | innodb/fts_optimize_thread                |                 12 | 1.86 KiB          |  158 bytes        |  648 bytes        | 1.98 KiB        |
|        23 | innodb/srv_master_thread                  |                 11 | 1.23 KiB          |  114 bytes        |  361 bytes        | 24.40 KiB       |
|        24 | innodb/dict_stats_thread                  |                 11 | 1.23 KiB          |  114 bytes        |  361 bytes        | 1.35 KiB        |
|         5 | innodb/io_read_thread                     |                  1 |  144 bytes        |  144 bytes        |  144 bytes        |  144 bytes      |
|         6 | innodb/io_read_thread                     |                  1 |  144 bytes        |  144 bytes        |  144 bytes        |  144 bytes      |
|         2 | sql/aws_oscar_log_level_monitor           |                  0 |    0 bytes        |    0 bytes        |    0 bytes        |    0 bytes      |
|         4 | innodb/io_ibuf_thread                     |                  0 |    0 bytes        |    0 bytes        |    0 bytes        |    0 bytes      |
|         7 | innodb/io_write_thread                    |                  0 |    0 bytes        |    0 bytes        |    0 bytes        |    0 bytes      |
|         8 | innodb/io_write_thread                    |                  0 |    0 bytes        |    0 bytes        |    0 bytes        |    0 bytes      |
|         9 | innodb/io_write_thread                    |                  0 |    0 bytes        |    0 bytes        |    0 bytes        |    0 bytes      |
|        10 | innodb/io_write_thread                    |                  0 |    0 bytes        |    0 bytes        |    0 bytes        |    0 bytes      |
|        11 | innodb/srv_lra_thread                     |                  0 |    0 bytes        |    0 bytes        |    0 bytes        |    0 bytes      |
|        12 | innodb/srv_akp_thread                     |                  0 |    0 bytes        |    0 bytes        |    0 bytes        |    0 bytes      |
|        18 | innodb/srv_lock_timeout_thread            |                  0 |    0 bytes        |    0 bytes        |    0 bytes        |  248 bytes      |
|        19 | innodb/srv_error_monitor_thread           |                  0 |    0 bytes        |    0 bytes        |    0 bytes        |    0 bytes      |
|        20 | innodb/srv_monitor_thread                 |                  0 |    0 bytes        |    0 bytes        |    0 bytes        |    0 bytes      |
|        21 | innodb/buf_resize_thread                  |                  0 |    0 bytes        |    0 bytes        |    0 bytes        |    0 bytes      |
|        22 | innodb/btr_search_sys_toggle_thread       |                  0 |    0 bytes        |    0 bytes        |    0 bytes        |    0 bytes      |
|        32 | innodb/dict_persist_metadata_table_thread |                  0 |    0 bytes        |    0 bytes        |    0 bytes        |    0 bytes      |
|        34 | sql/signal_handler                        |                  0 |    0 bytes        |    0 bytes        |    0 bytes        |    0 bytes      |
+-----------+-------------------------------------------+--------------------+-------------------+-------------------+-------------------+-----------------+
831 rows in set (2.48 sec)
```

As mentioned earlier, the thread ID (`thd_id`) value here can refer to server background threads or database connections. If you want to map thread ID values to database connection IDs, you can use the `performance_schema.threads` table or the `sys.processlist` view, where `conn_id` is the connection ID.

```
mysql> SELECT thd_id,conn_id,user,db,command,state,time,last_wait FROM sys.processlist WHERE user='reinvent@10.0.4.4';

+--------+---------+-------------------+----------+---------+----------------+------+-------------------------------------------------+
| thd_id | conn_id | user              | db       | command | state          | time | last_wait                                       |
+--------+---------+-------------------+----------+---------+----------------+------+-------------------------------------------------+
|    590 |     562 | reinvent@10.0.4.4 | sysbench | Execute | closing tables |    0 | wait/io/redo_log_flush                          |
|    578 |     550 | reinvent@10.0.4.4 | sysbench | Sleep   | NULL           |    0 | idle                                            |
|    579 |     551 | reinvent@10.0.4.4 | sysbench | Execute | closing tables |    0 | wait/io/redo_log_flush                          |
|    580 |     552 | reinvent@10.0.4.4 | sysbench | Execute | updating       |    0 | wait/io/table/sql/handler                       |
|    581 |     553 | reinvent@10.0.4.4 | sysbench | Execute | updating       |    0 | wait/io/table/sql/handler                       |
|    582 |     554 | reinvent@10.0.4.4 | sysbench | Sleep   | NULL           |    0 | idle                                            |
|    583 |     555 | reinvent@10.0.4.4 | sysbench | Sleep   | NULL           |    0 | idle                                            |
|    584 |     556 | reinvent@10.0.4.4 | sysbench | Execute | updating       |    0 | wait/io/table/sql/handler                       |
|    585 |     557 | reinvent@10.0.4.4 | sysbench | Execute | closing tables |    0 | wait/io/redo_log_flush                          |
|    586 |     558 | reinvent@10.0.4.4 | sysbench | Execute | updating       |    0 | wait/io/table/sql/handler                       |
|    587 |     559 | reinvent@10.0.4.4 | sysbench | Execute | closing tables |    0 | wait/io/redo_log_flush                          |
.                                                                                                                                     .
.                                                                                                                                     .
.                                                                                                                                     .
|    323 |     295 | reinvent@10.0.4.4 | sysbench | Sleep   | NULL           |    0 | idle                                            |
|    324 |     296 | reinvent@10.0.4.4 | sysbench | Execute | updating       |    0 | wait/io/table/sql/handler                       |
|    325 |     297 | reinvent@10.0.4.4 | sysbench | Execute | closing tables |    0 | wait/io/redo_log_flush                          |
|    326 |     298 | reinvent@10.0.4.4 | sysbench | Execute | updating       |    0 | wait/io/table/sql/handler                       |
|    438 |     410 | reinvent@10.0.4.4 | sysbench | Execute | System lock    |    0 | wait/lock/table/sql/handler                     |
|    280 |     252 | reinvent@10.0.4.4 | sysbench | Sleep   | starting       |    0 | wait/io/socket/sql/client_connection            |
|     98 |      70 | reinvent@10.0.4.4 | sysbench | Query   | freeing items  |    0 | NULL                                            |
+--------+---------+-------------------+----------+---------+----------------+------+-------------------------------------------------+
804 rows in set (5.51 sec)
```

Now we stop the `sysbench` workload, which closes the connections and releases the memory. Checking the events again, we can confirm that the memory is released, but `high_alloc` still shows the high-water mark. The `high_alloc` column can be very useful for identifying short spikes in memory usage, where you might not be able to identify the usage from `current_alloc`, which shows only currently allocated memory.

```
mysql> SELECT * FROM sys.memory_global_by_current_bytes WHERE event_name='memory/sql/Prepared_statement::main_mem_root' LIMIT 10;

+----------------------------------------------+---------------+---------------+-------------------+------------+------------+----------------+
| event_name                                   | current_count | current_alloc | current_avg_alloc | high_count | high_alloc | high_avg_alloc |
+----------------------------------------------+---------------+---------------+-------------------+------------+------------+----------------+
| memory/sql/Prepared_statement::main_mem_root |            17 | 253.80 KiB    | 14.93 KiB         |     512823 | 4.91 GiB   | 10.04 KiB      |
+----------------------------------------------+---------------+---------------+-------------------+------------+------------+----------------+
1 row in set (0.00 sec)
```

If you want to reset `high_alloc`, you can truncate the `performance_schema` memory summary tables, but this resets all memory instrumentation. For more information, see [Performance Schema general table characteristics](https://dev.mysql.com/doc/refman/8.0/en/performance-schema-table-characteristics.html) in the MySQL documentation.

In the following example, we can see that `high_alloc` is reset after truncation.

```
mysql> TRUNCATE `performance_schema`.`memory_summary_global_by_event_name`;
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT * FROM sys.memory_global_by_current_bytes WHERE event_name='memory/sql/Prepared_statement::main_mem_root' LIMIT 10;

+----------------------------------------------+---------------+---------------+-------------------+------------+------------+----------------+
| event_name                                   | current_count | current_alloc | current_avg_alloc | high_count | high_alloc | high_avg_alloc |
+----------------------------------------------+---------------+---------------+-------------------+------------+------------+----------------+
| memory/sql/Prepared_statement::main_mem_root |            17 | 253.80 KiB    | 14.93 KiB         |         17 | 253.80 KiB | 14.93 KiB      |
+----------------------------------------------+---------------+---------------+-------------------+------------+------------+----------------+
1 row in set (0.00 sec)
```

## Example 2: Transient memory spikes


Another common occurrence is short spikes in memory usage on a database server. These can be periodic drops in freeable memory that are difficult to troubleshoot using `current_alloc` in `sys.memory_global_by_current_bytes`, because the memory has already been freed.

**Note**  
If Performance Schema statistics have been reset, or the database instance has been restarted, this information won't be available in `sys` or `performance_schema`. To retain this information, we recommend that you configure external metrics collection.

The following graph of the `os.memory.free` metric in Enhanced Monitoring shows brief 7-second spikes in memory usage. Enhanced Monitoring supports monitoring intervals as short as 1 second, which makes it well suited to catching transient spikes like these.

![\[Graph showing transient memory usage spikes over time with periodic pattern indicating potential memory management issues.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/ams-free-memory-spikes.png)


To help diagnose the cause of the memory usage here, we can use a combination of `high_alloc` in the `sys` memory summary views and [Performance Schema statement summary tables](https://dev.mysql.com/doc/refman/8.0/en/performance-schema-statement-summary-tables.html) to try to identify offending sessions and connections.

As expected, because memory usage isn't currently high, we can't see any major offenders in the `sys` schema view under `current_alloc`.

```
mysql> SELECT * FROM sys.memory_global_by_current_bytes LIMIT 10;

+-----------------------------------------------------------------------------+---------------+---------------+-------------------+------------+------------+----------------+
| event_name                                                                  | current_count | current_alloc | current_avg_alloc | high_count | high_alloc | high_avg_alloc |
+-----------------------------------------------------------------------------+---------------+---------------+-------------------+------------+------------+----------------+
| memory/innodb/hash0hash                                                     |             4 | 79.07 MiB     | 19.77 MiB         |          4 | 79.07 MiB  | 19.77 MiB      |
| memory/innodb/os0event                                                      |        439372 | 60.34 MiB     |  144 bytes        |     439372 | 60.34 MiB  |  144 bytes     |
| memory/performance_schema/events_statements_summary_by_digest               |             1 | 40.28 MiB     | 40.28 MiB         |          1 | 40.28 MiB  | 40.28 MiB      |
| memory/mysys/KEY_CACHE                                                      |             3 | 16.00 MiB     | 5.33 MiB          |          3 | 16.00 MiB  | 5.33 MiB       |
| memory/performance_schema/events_statements_history_long                    |             1 | 14.34 MiB     | 14.34 MiB         |          1 | 14.34 MiB  | 14.34 MiB      |
| memory/performance_schema/events_errors_summary_by_thread_by_error          |           257 | 13.07 MiB     | 52.06 KiB         |        257 | 13.07 MiB  | 52.06 KiB      |
| memory/performance_schema/events_statements_summary_by_thread_by_event_name |             1 | 11.81 MiB     | 11.81 MiB         |          1 | 11.81 MiB  | 11.81 MiB      |
| memory/performance_schema/events_statements_summary_by_digest.digest_text   |             1 | 9.77 MiB      | 9.77 MiB          |          1 | 9.77 MiB   | 9.77 MiB       |
| memory/performance_schema/events_statements_history_long.digest_text        |             1 | 9.77 MiB      | 9.77 MiB          |          1 | 9.77 MiB   | 9.77 MiB       |
| memory/performance_schema/events_statements_history_long.sql_text           |             1 | 9.77 MiB      | 9.77 MiB          |          1 | 9.77 MiB   | 9.77 MiB       |
+-----------------------------------------------------------------------------+---------------+---------------+-------------------+------------+------------+----------------+
10 rows in set (0.01 sec)
```

Expanding the view to order by `high_alloc`, we can now see that the `memory/temptable/physical_ram` component is a very good candidate here. At its highest, it consumed 515.00 MiB.

As its name suggests, `memory/temptable/physical_ram` instruments memory usage for the TempTable storage engine, which was introduced in MySQL 8.0. For more information on how MySQL uses temporary tables, see [Internal temporary table use in MySQL](https://dev.mysql.com/doc/refman/8.0/en/internal-temporary-tables.html) in the MySQL documentation.
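
To see how much memory the TempTable engine is allowed to use before spilling to disk, and how frequently internal temporary tables are being created, you can check the related system variable and status counters. The following is a sketch; in community MySQL the default for `temptable_max_ram` is 1 GiB, but the value in effect on your Aurora instance might differ.

```
mysql> SHOW GLOBAL VARIABLES LIKE 'temptable_max_ram';

mysql> SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';
```

A high ratio of `Created_tmp_disk_tables` to `Created_tmp_tables` indicates that in-memory temporary tables are frequently spilling to disk.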

**Note**  
We're using the `sys.x$memory_global_by_current_bytes` view in this example because it returns raw numeric byte values, which lets us order the results by `high_alloc`.

```
mysql> SELECT event_name, format_bytes(current_alloc) AS "currently allocated", format_bytes(high_alloc) AS "high-water mark"
FROM sys.x$memory_global_by_current_bytes ORDER BY high_alloc DESC LIMIT 10;

+-----------------------------------------------------------------------------+---------------------+-----------------+
| event_name                                                                  | currently allocated | high-water mark |
+-----------------------------------------------------------------------------+---------------------+-----------------+
| memory/temptable/physical_ram                                               | 4.00 MiB            | 515.00 MiB      |
| memory/innodb/hash0hash                                                     | 79.07 MiB           | 79.07 MiB       |
| memory/innodb/os0event                                                      | 63.95 MiB           | 63.95 MiB       |
| memory/performance_schema/events_statements_summary_by_digest               | 40.28 MiB           | 40.28 MiB       |
| memory/mysys/KEY_CACHE                                                      | 16.00 MiB           | 16.00 MiB       |
| memory/performance_schema/events_statements_history_long                    | 14.34 MiB           | 14.34 MiB       |
| memory/performance_schema/events_errors_summary_by_thread_by_error          | 13.07 MiB           | 13.07 MiB       |
| memory/performance_schema/events_statements_summary_by_thread_by_event_name | 11.81 MiB           | 11.81 MiB       |
| memory/performance_schema/events_statements_summary_by_digest.digest_text   | 9.77 MiB            | 9.77 MiB        |
| memory/performance_schema/events_statements_history_long.sql_text           | 9.77 MiB            | 9.77 MiB        |
+-----------------------------------------------------------------------------+---------------------+-----------------+
10 rows in set (0.00 sec)
```

In [Example 1: Continuous high memory usage](#ams-workload-memory.example1), we checked the current memory usage for each connection to determine which connection is responsible for using the memory in question. In this example, the memory is already freed, so checking the memory usage for current connections isn't useful.

To dig deeper and find the offending statements, users, and hosts, we use the Performance Schema. The Performance Schema contains multiple statement summary tables that are sliced by different dimensions, such as event name, statement digest, host, thread, and user. Each view allows you to dig deeper into where certain statements are run and what they're doing. This section focuses on `MAX_TOTAL_MEMORY`, but you can find information about all of the available columns in the [Performance Schema statement summary tables](https://dev.mysql.com/doc/refman/8.0/en/performance-schema-statement-summary-tables.html) documentation.

```
mysql> SHOW TABLES IN performance_schema LIKE 'events_statements_summary_%';

+------------------------------------------------------------+
| Tables_in_performance_schema (events_statements_summary_%) |
+------------------------------------------------------------+
| events_statements_summary_by_account_by_event_name         |
| events_statements_summary_by_digest                        |
| events_statements_summary_by_host_by_event_name            |
| events_statements_summary_by_program                       |
| events_statements_summary_by_thread_by_event_name          |
| events_statements_summary_by_user_by_event_name            |
| events_statements_summary_global_by_event_name             |
+------------------------------------------------------------+
7 rows in set (0.00 sec)
```

First we check `events_statements_summary_by_digest` to see `MAX_TOTAL_MEMORY`.

From this we can see the following:
+ The query with digest `20676ce4a690592ff05debcffcbc26faeb76f22005e7628364d7a498769d0c4a` seems to be a good candidate for this memory usage. The `MAX_TOTAL_MEMORY` is 537450710, which matches the high-water mark we saw for the `memory/temptable/physical_ram` event in `sys.x$memory_global_by_current_bytes`.
+ It has been run four times (`COUNT_STAR`), first at 2024-03-26 04:08:34.943256, and last at 2024-03-26 04:43:06.998310.

```
mysql> SELECT SCHEMA_NAME,DIGEST,COUNT_STAR,MAX_TOTAL_MEMORY,FIRST_SEEN,LAST_SEEN
FROM performance_schema.events_statements_summary_by_digest ORDER BY MAX_TOTAL_MEMORY DESC LIMIT 5;

+-------------+------------------------------------------------------------------+------------+------------------+----------------------------+----------------------------+
| SCHEMA_NAME | DIGEST                                                           | COUNT_STAR | MAX_TOTAL_MEMORY | FIRST_SEEN                 | LAST_SEEN                  |
+-------------+------------------------------------------------------------------+------------+------------------+----------------------------+----------------------------+
| sysbench    | 20676ce4a690592ff05debcffcbc26faeb76f22005e7628364d7a498769d0c4a |          4 |        537450710 | 2024-03-26 04:08:34.943256 | 2024-03-26 04:43:06.998310 |
| NULL        | f158282ea0313fefd0a4778f6e9b92fc7d1e839af59ebd8c5eea35e12732c45d |          4 |          3636413 | 2024-03-26 04:29:32.712348 | 2024-03-26 04:36:26.269329 |
| NULL        | 0046bc5f642c586b8a9afd6ce1ab70612dc5b1fd2408fa8677f370c1b0ca3213 |          2 |          3459965 | 2024-03-26 04:31:37.674008 | 2024-03-26 04:32:09.410718 |
| NULL        | 8924f01bba3c55324701716c7b50071a60b9ceaf17108c71fd064c20c4ab14db |          1 |          3290981 | 2024-03-26 04:31:49.751506 | 2024-03-26 04:31:49.751506 |
| NULL        | 90142bbcb50a744fcec03a1aa336b2169761597ea06d85c7f6ab03b5a4e1d841 |          1 |          3131729 | 2024-03-26 04:15:09.719557 | 2024-03-26 04:15:09.719557 |
+-------------+------------------------------------------------------------------+------------+------------------+----------------------------+----------------------------+
5 rows in set (0.00 sec)
```

Now that we know the offending digest, we can get more details, such as the query text, the user who ran it, and where it was run. Based on the digest text returned, we can see that this is a common table expression (CTE). Across its four executions, it created four temporary tables and never used an index (`SUM_NO_INDEX_USED` is 4), which is very inefficient.

```
mysql> SELECT SCHEMA_NAME,DIGEST_TEXT,QUERY_SAMPLE_TEXT,MAX_TOTAL_MEMORY,SUM_ROWS_SENT,SUM_ROWS_EXAMINED,SUM_CREATED_TMP_TABLES,SUM_NO_INDEX_USED
FROM performance_schema.events_statements_summary_by_digest
WHERE DIGEST='20676ce4a690592ff05debcffcbc26faeb76f22005e7628364d7a498769d0c4a'\G;

*************************** 1. row ***************************
           SCHEMA_NAME: sysbench
           DIGEST_TEXT: WITH RECURSIVE `cte` ( `n` ) AS ( SELECT ? FROM `sbtest1` UNION ALL SELECT `id` + ? FROM `sbtest1` ) SELECT * FROM `cte`
     QUERY_SAMPLE_TEXT: WITH RECURSIVE cte (n) AS (   SELECT 1  from sbtest1 UNION ALL   SELECT id + 1 FROM sbtest1) SELECT * FROM cte
      MAX_TOTAL_MEMORY: 537450710
         SUM_ROWS_SENT: 80000000
     SUM_ROWS_EXAMINED: 80000000
SUM_CREATED_TMP_TABLES: 4
     SUM_NO_INDEX_USED: 4
1 row in set (0.01 sec)
```

For more information on the `events_statements_summary_by_digest` table and other Performance Schema statement summary tables, see [Statement summary tables](https://dev.mysql.com/doc/refman/8.0/en/performance-schema-statement-summary-tables.html) in the MySQL documentation.

You can also run an [EXPLAIN](https://dev.mysql.com/doc/refman/8.0/en/explain.html) or [EXPLAIN ANALYZE](https://dev.mysql.com/doc/refman/8.0/en/explain.html#explain-analyze) statement to see more details.

**Note**  
`EXPLAIN ANALYZE` can provide more information than `EXPLAIN`, but it also actually runs the query, so use it with caution.

```
-- EXPLAIN
mysql> EXPLAIN WITH RECURSIVE cte (n) AS (SELECT 1  FROM sbtest1 UNION ALL SELECT id + 1 FROM sbtest1) SELECT * FROM cte;

+----+-------------+------------+------------+-------+---------------+------+---------+------+----------+----------+-------------+
| id | select_type | table      | partitions | type  | possible_keys | key  | key_len | ref  | rows     | filtered | Extra       |
+----+-------------+------------+------------+-------+---------------+------+---------+------+----------+----------+-------------+
|  1 | PRIMARY     | <derived2> | NULL       | ALL   | NULL          | NULL | NULL    | NULL | 19221520 |   100.00 | NULL        |
|  2 | DERIVED     | sbtest1    | NULL       | index | NULL          | k_1  | 4       | NULL |  9610760 |   100.00 | Using index |
|  3 | UNION       | sbtest1    | NULL       | index | NULL          | k_1  | 4       | NULL |  9610760 |   100.00 | Using index |
+----+-------------+------------+------------+-------+---------------+------+---------+------+----------+----------+-------------+
3 rows in set, 1 warning (0.00 sec)

-- EXPLAIN format=tree 
mysql> EXPLAIN format=tree WITH RECURSIVE cte (n) AS (SELECT 1 FROM sbtest1 UNION ALL SELECT id + 1 FROM sbtest1) SELECT * FROM cte\G;

*************************** 1. row ***************************
EXPLAIN: -> Table scan on cte  (cost=4.11e+6..4.35e+6 rows=19.2e+6)
    -> Materialize union CTE cte  (cost=4.11e+6..4.11e+6 rows=19.2e+6)
        -> Index scan on sbtest1 using k_1  (cost=1.09e+6 rows=9.61e+6)
        -> Index scan on sbtest1 using k_1  (cost=1.09e+6 rows=9.61e+6)
1 row in set (0.00 sec)

-- EXPLAIN ANALYZE 
mysql> EXPLAIN ANALYZE WITH RECURSIVE cte (n) AS (SELECT 1 from sbtest1 UNION ALL SELECT id + 1 FROM sbtest1) SELECT * FROM cte\G;

*************************** 1. row ***************************
EXPLAIN: -> Table scan on cte  (cost=4.11e+6..4.35e+6 rows=19.2e+6) (actual time=6666..9201 rows=20e+6 loops=1)
    -> Materialize union CTE cte  (cost=4.11e+6..4.11e+6 rows=19.2e+6) (actual time=6666..6666 rows=20e+6 loops=1)
        -> Covering index scan on sbtest1 using k_1  (cost=1.09e+6 rows=9.61e+6) (actual time=0.0365..2006 rows=10e+6 loops=1)
        -> Covering index scan on sbtest1 using k_1  (cost=1.09e+6 rows=9.61e+6) (actual time=0.0311..2494 rows=10e+6 loops=1)
1 row in set (10.53 sec)
```

But who ran it? We can see in the Performance Schema that the `destructive_operator` user had a `MAX_TOTAL_MEMORY` of 537450710, which again matches the previous results.

**Note**  
The Performance Schema is stored in memory, so it should not be relied on as the sole source for auditing. If you need to maintain a history of which statements were run and by which users, we recommend that you enable [Aurora Advanced Auditing](AuroraMySQL.Auditing.md). If you also need to maintain information about memory usage, we recommend that you configure monitoring to export and store these values.

```
mysql> SELECT USER,EVENT_NAME,COUNT_STAR,MAX_TOTAL_MEMORY FROM performance_schema.events_statements_summary_by_user_by_event_name
ORDER BY MAX_CONTROLLED_MEMORY DESC LIMIT 5;

+----------------------+---------------------------+------------+------------------+
| USER                 | EVENT_NAME                | COUNT_STAR | MAX_TOTAL_MEMORY |
+----------------------+---------------------------+------------+------------------+
| destructive_operator | statement/sql/select      |          4 |        537450710 |
| rdsadmin             | statement/sql/select      |       4172 |          3290981 |
| rdsadmin             | statement/sql/show_tables |          2 |          3615821 |
| rdsadmin             | statement/sql/show_fields |          2 |          3459965 |
| rdsadmin             | statement/sql/show_status |         75 |          1914976 |
+----------------------+---------------------------+------------+------------------+
5 rows in set (0.00 sec)

mysql> SELECT HOST,EVENT_NAME,COUNT_STAR,MAX_TOTAL_MEMORY FROM performance_schema.events_statements_summary_by_host_by_event_name
WHERE HOST != 'localhost' AND COUNT_STAR>0 ORDER BY MAX_CONTROLLED_MEMORY DESC LIMIT 5;

+------------+----------------------+------------+------------------+
| HOST       | EVENT_NAME           | COUNT_STAR | MAX_TOTAL_MEMORY |
+------------+----------------------+------------+------------------+
| 10.0.8.231 | statement/sql/select |          4 |        537450710 |
+------------+----------------------+------------+------------------+
1 row in set (0.00 sec)
```

## Example 3: Freeable memory drops continuously and isn't reclaimed


The InnoDB database engine employs a range of specialized memory tracking events for different components. These specific events allow for granular tracking of memory usage in key InnoDB subsystems, for example:
+ `memory/innodb/buf0buf` – Dedicated to monitoring memory allocations for the InnoDB buffer pool.
+ `memory/innodb/ibuf0ibuf` – Specifically tracks memory changes related to the InnoDB change buffer.

To identify the top consumers of memory, we can query `sys.memory_global_by_current_bytes`:

```
mysql> SELECT event_name,current_alloc FROM sys.memory_global_by_current_bytes LIMIT 10;

+-----------------------------------------------------------------+---------------+
| event_name                                                      | current_alloc |
+-----------------------------------------------------------------+---------------+
| memory/innodb/memory                                            | 5.28 GiB      |
| memory/performance_schema/table_io_waits_summary_by_index_usage | 495.00 MiB    |
| memory/performance_schema/table_shares                          | 488.00 MiB    |
| memory/sql/TABLE_SHARE::mem_root                                | 388.95 MiB    |
| memory/innodb/std                                               | 226.88 MiB    |
| memory/innodb/fil0fil                                           | 198.49 MiB    |
| memory/sql/binlog_io_cache                                      | 128.00 MiB    |
| memory/innodb/mem0mem                                           | 96.82 MiB     |
| memory/innodb/dict0dict                                         | 96.76 MiB     |
| memory/performance_schema/rwlock_instances                      | 88.00 MiB     |
+-----------------------------------------------------------------+---------------+
10 rows in set (0.00 sec)
```

The results show that `memory/innodb/memory` is the top consumer, using 5.28 GiB of currently allocated memory. This event serves as a general category for memory allocations across various InnoDB components that aren't associated with more specific memory events, such as `memory/innodb/buf0buf` mentioned previously.

Having established that InnoDB components are the primary consumers of memory, we can dive deeper into the specifics using the following MySQL command:

```
SHOW ENGINE INNODB STATUS \G;
```

The [SHOW ENGINE INNODB STATUS](https://dev.mysql.com/doc/refman/8.4/en/show-engine.html) command provides a comprehensive status report for the InnoDB storage engine, including detailed memory usage statistics for different InnoDB components. It can help identify which specific InnoDB structures or operations are consuming the most memory. For more information, see [InnoDB in-memory structures](https://dev.mysql.com/doc/refman/8.0/en/innodb-in-memory-structures.html) in the MySQL documentation.

Analyzing the `BUFFER POOL AND MEMORY` section of the InnoDB status report, we see that 5,051,647,748 bytes (4.7 GiB) is allocated to the [dictionary object cache](https://dev.mysql.com/doc/refman/8.0/en/data-dictionary-object-cache.html), which accounts for 89% of the memory tracked by `memory/innodb/memory`.

```
----------------------
BUFFER POOL AND MEMORY
----------------------
Total large memory allocated 0
Dictionary memory allocated 5051647748
Buffer pool size 170512
Free buffers 142568
Database pages 27944
Old database pages 10354
Modified db pages 6
Pending reads 0
```

The dictionary object cache is a shared global cache that stores previously accessed data dictionary objects in memory to enable object reuse and improve performance. The high memory allocation suggests that a large number of database objects are being held in the cache.

Now that we know that the data dictionary cache is a primary consumer, we proceed to inspect the data dictionary cache for open tables. To find the number of tables in the table definition cache, query the [Open\_table\_definitions](https://dev.mysql.com/doc/refman/8.4/en/server-status-variables.html#statvar_Open_table_definitions) global status variable.

```
mysql> show global status like 'open_table_definitions';

+------------------------+-------+
| Variable_name          | Value |
+------------------------+-------+
| Open_table_definitions | 20000 |
+------------------------+-------+
1 row in set (0.00 sec)
```

For more information, see [How MySQL opens and closes tables](https://dev.mysql.com/doc/refman/8.0/en/table-cache.html) in the MySQL documentation.

You can limit the number of table definitions in the data dictionary cache by lowering the `table_definition_cache` parameter in the DB cluster or DB instance parameter group. For Aurora MySQL, this value serves as a soft limit for the number of tables in the table definition cache. The default value depends on the memory of the instance class and is set to the following:

```
LEAST({DBInstanceClassMemory/393040}, 20000)
```
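
For example, on a DB instance class with 4 GiB of memory (4,294,967,296 bytes), the formula evaluates to a default of 10,927 table definitions. The memory value here is illustrative; instance classes with roughly 7.9 GB of memory or more hit the 20,000 cap.

```
mysql> SELECT LEAST(FLOOR(4294967296/393040), 20000) AS table_definition_cache_default;

+--------------------------------+
| table_definition_cache_default |
+--------------------------------+
|                          10927 |
+--------------------------------+
1 row in set (0.00 sec)
```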

When the number of tables exceeds the `table_definition_cache` limit, a least recently used (LRU) mechanism evicts and removes tables from the cache. However, tables involved in foreign key relationships aren't placed in the LRU list, which prevents their removal.

In our current scenario, we run [FLUSH TABLES](https://dev.mysql.com/doc/refman/8.4/en/flush.html) to clear the table definition cache. This action results in a significant drop in the [Open\_table\_definitions](https://dev.mysql.com/doc/refman/8.0/en/server-status-variables.html#statvar_Open_table_definitions) global status variable, from 20,000 to 12, as shown here:

```
mysql> show global status like 'open_table_definitions';

+------------------------+-------+
| Variable_name          | Value |
+------------------------+-------+
| Open_table_definitions | 12    |
+------------------------+-------+
1 row in set (0.00 sec)
```

Despite this reduction, we observe that the memory allocation for `memory/innodb/memory` remains high at 5.18 GiB, and the dictionary memory allocated also remains unchanged. This is evident from the following query results:

```
mysql> SELECT event_name,current_alloc FROM sys.memory_global_by_current_bytes LIMIT 10;

+-----------------------------------------------------------------+---------------+
| event_name                                                      | current_alloc |
+-----------------------------------------------------------------+---------------+
| memory/innodb/memory                                            | 5.18 GiB      |
| memory/performance_schema/table_io_waits_summary_by_index_usage | 495.00 MiB    |
| memory/performance_schema/table_shares                          | 488.00 MiB    |
| memory/sql/TABLE_SHARE::mem_root                                | 388.95 MiB    |
| memory/innodb/std                                               | 226.88 MiB    |
| memory/innodb/fil0fil                                           | 198.49 MiB    |
| memory/sql/binlog_io_cache                                      | 128.00 MiB    |
| memory/innodb/mem0mem                                           | 96.82 MiB     |
| memory/innodb/dict0dict                                         | 96.76 MiB     |
| memory/performance_schema/rwlock_instances                      | 88.00 MiB     |
+-----------------------------------------------------------------+---------------+
10 rows in set (0.00 sec)
```

```
----------------------
BUFFER POOL AND MEMORY
----------------------
Total large memory allocated 0
Dictionary memory allocated 5001599639
Buffer pool size 170512
Free buffers 142568
Database pages 27944
Old database pages 10354
Modified db pages 6
Pending reads 0
```

This persistently high memory usage can be attributed to tables involved in foreign key relationships. These tables aren't placed in the LRU list for removal, explaining why the memory allocation remains high even after flushing the table definition cache.

To address this issue:

1. Review and optimize your database schema, particularly foreign key relationships.

1. Consider moving to a larger DB instance class that has more memory to accommodate your dictionary objects.
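
To review which tables participate in foreign key relationships, and therefore can't be evicted from the table definition cache, you can query `information_schema`. The following is a minimal sketch:

```
mysql> SELECT DISTINCT TABLE_SCHEMA, TABLE_NAME, REFERENCED_TABLE_NAME
FROM information_schema.KEY_COLUMN_USAGE
WHERE REFERENCED_TABLE_NAME IS NOT NULL
ORDER BY TABLE_SCHEMA, TABLE_NAME;
```

Schemas with many such tables will keep their definitions pinned in the cache, so they benefit most from schema review or a larger instance class.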

By following these steps and understanding the memory allocation patterns, you can better manage memory usage in your Aurora MySQL DB instance and prevent potential performance issues due to memory pressure.

# Troubleshooting out-of-memory issues for Aurora MySQL databases
Troubleshooting Aurora MySQL out-of-memory issues

The Aurora MySQL `aurora_oom_response` instance-level parameter enables the DB instance to monitor system memory and estimate the memory consumed by various statements and connections. If the system runs low on memory, the instance can perform a list of actions to try to release that memory, in an attempt to avoid a database restart due to out-of-memory (OOM) issues. The parameter takes a string of comma-separated actions that a DB instance performs when its memory is low. The `aurora_oom_response` parameter is supported for Aurora MySQL versions 2 and 3.

The following values, and combinations of them, can be used for the `aurora_oom_response` parameter. An empty string means that no action is taken, and effectively turns off the feature, leaving the database prone to OOM restarts.
+ `decline` – Declines new queries when the DB instance is low on memory.
+ `kill_connect` – Closes database connections that are consuming a large amount of memory, and ends current transactions and Data Definition Language (DDL) statements. This response isn't supported for Aurora MySQL version 2.

  For more information, see [KILL statement](https://dev.mysql.com/doc/refman/8.0/en/kill.html) in the MySQL documentation.
+ `kill_query` – Ends queries in descending order of memory consumption until instance memory rises above the low-memory threshold. DDL statements aren't ended.

  For more information, see [KILL statement](https://dev.mysql.com/doc/refman/8.0/en/kill.html) in the MySQL documentation.
+ `print` – Only prints the queries that are consuming a large amount of memory.
+ `tune` – Tunes the internal table caches to release some memory back to the system. Aurora MySQL decreases the memory used for caches such as `table_open_cache` and `table_definition_cache` in low-memory conditions. Eventually, Aurora MySQL sets their memory usage back to normal when the system is no longer low on memory.

  For more information, see [table\_open\_cache](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_table_open_cache) and [table\_definition\_cache](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_table_definition_cache) in the MySQL documentation.
+ `tune_buffer_pool` – Decreases the size of the buffer pool to release some memory and make it available for the database server to process connections. This response is supported for Aurora MySQL version 3.06 and higher.

  You must pair `tune_buffer_pool` with either `kill_query` or `kill_connect` in the `aurora_oom_response` parameter value. If not, buffer pool resizing won't happen, even when you include `tune_buffer_pool` in the parameter value.
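The action names and the `tune_buffer_pool` pairing rule above can be checked before you apply a parameter value. The following Python sketch is illustrative only; the `validate_oom_response` helper is not part of any AWS tooling, and the allowed action names are taken from this page.

```python
# Illustrative validator for a candidate aurora_oom_response value.
# Action names come from the list above; this helper is a sketch,
# not an AWS SDK function.

VALID_ACTIONS = {"decline", "kill_connect", "kill_query",
                 "print", "tune", "tune_buffer_pool"}

def validate_oom_response(value: str) -> list[str]:
    """Return a list of problems found in a comma-separated action string."""
    problems = []
    actions = {a.strip() for a in value.split(",") if a.strip()}
    for action in sorted(actions):
        if action not in VALID_ACTIONS:
            problems.append(f"unknown action: {action}")
    # tune_buffer_pool only takes effect when paired with a kill action
    if "tune_buffer_pool" in actions and not actions & {"kill_query", "kill_connect"}:
        problems.append("tune_buffer_pool requires kill_query or kill_connect")
    return problems
```

An empty string passes validation, matching the behavior described above: no action is taken and the feature is effectively turned off.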

In Aurora MySQL versions lower than 3.06, for DB instance classes with memory less than or equal to 4 GiB, when the instance is under memory pressure, the default actions include `print`, `tune`, `decline`, and `kill_query`. For DB instance classes with memory greater than 4 GiB, the parameter value is empty by default (disabled).

In Aurora MySQL version 3.06 and higher, for DB instance classes with memory less than or equal to 4 GiB, Aurora MySQL also closes the top memory-consuming connections (`kill_connect`). For DB instance classes with memory greater than 4 GiB, the default parameter value is `print`.

In Aurora MySQL version 3.09 and higher, for DB instance classes with memory greater than 4 GiB, the default parameter value is `print,decline,kill_connect`.

If you frequently run into out-of-memory issues, you can monitor memory usage by using [memory summary tables](https://dev.mysql.com/doc/refman/8.3/en/performance-schema-memory-summary-tables.html) when `performance_schema` is enabled.

For Amazon CloudWatch metrics related to OOM, see [Instance-level metrics for Amazon Aurora](Aurora.AuroraMonitoring.Metrics.md#Aurora.AuroraMySQL.Monitoring.Metrics.instances). For global status variables related to OOM, see [Aurora MySQL global status variables](AuroraMySQL.Reference.GlobalStatusVars.md).

# Logging for Aurora MySQL databases
Logging for Aurora MySQL

Aurora MySQL logs provide essential information about database activity and errors. By enabling these logs, you can identify and troubleshoot issues, understand database performance, and audit database activity. We recommend that you enable these logs for all of your Aurora MySQL DB instances to ensure optimal performance and availability of the databases. The following types of logging can be enabled. Each log contains specific information that can lead to uncovering impacts to database processing.
+ Error – Aurora MySQL writes to the error log only on startup, shutdown, and when it encounters errors. A DB instance can go hours or days without new entries being written to the error log. If you see no recent entries, it's because the server didn't encounter an error that would result in a log entry. Error logging is enabled by default. For more information, see [Aurora MySQL error logs](USER_LogAccess.MySQL.LogFileSize.md#USER_LogAccess.MySQL.Errorlog).
+ General – The general log provides detailed information about database activity, including all SQL statements executed by the database engine. For more information on enabling general logging and setting logging parameters, see [Aurora MySQL slow query and general logs](USER_LogAccess.MySQL.LogFileSize.md#USER_LogAccess.MySQL.Generallog), and [The general query log](https://dev.mysql.com/doc/refman/8.0/en/query-log.html) in the MySQL documentation.
**Note**  
General logs can grow to be very large and consume your storage. For more information, see [Log rotation and retention for Aurora MySQL](USER_LogAccess.MySQL.LogFileSize.md#USER_LogAccess.AMS.LogFileSize.retention).
+ Slow query – The slow query log consists of SQL statements that take more than [long\_query\_time](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_long_query_time) seconds to run and require at least [min\_examined\_row\_limit](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_min_examined_row_limit) rows to be examined. You can use the slow query log to find queries that take a long time to run and are therefore candidates for optimization.

  The default value for `long_query_time` is 10 seconds. We recommend that you start with a high value to identify the slowest queries, then work your way down for fine tuning.

  You can also use related parameters, such as `log_slow_admin_statements` and `log_queries_not_using_indexes`. Compare `rows_examined` with `rows_returned`. If `rows_examined` is much greater than `rows_returned`, those queries are scanning many more rows than they return and are candidates for optimization.

  In Aurora MySQL version 3, you can enable `log_slow_extra` to obtain more details. For more information, see [Slow query log contents](https://dev.mysql.com/doc/refman/8.0/en/slow-query-log.html#slow-query-log-contents) in the MySQL documentation. You can also modify `long_query_time` at the session level for debugging query execution interactively, which is especially useful if `log_slow_extra` is enabled globally.

  For more information on enabling slow query logging and setting logging parameters, see [Aurora MySQL slow query and general logs](USER_LogAccess.MySQL.LogFileSize.md#USER_LogAccess.MySQL.Generallog), and [The slow query log](https://dev.mysql.com/doc/refman/8.0/en/slow-query-log.html) in the MySQL documentation.
+ Audit – The audit log monitors and logs database activity. Audit logging for Aurora MySQL is called Advanced Auditing. To enable Advanced Auditing, you set certain DB cluster parameters. For more information, see [Using Advanced Auditing with an Amazon Aurora MySQL DB cluster](AuroraMySQL.Auditing.md).
+ Binary – The binary log (binlog) contains events that describe database changes, such as table creation operations and changes to table data. It also contains events for statements that potentially could have made changes (for example, a [DELETE](https://dev.mysql.com/doc/refman/8.0/en/delete.html) that matched no rows), unless row-based logging is used. The binary log also contains information about how long each statement took that updated data.

  Running a server with binary logging enabled makes performance slightly slower. However, the benefits of the binary log in enabling you to set up replication and for restore operations generally outweigh this minor performance decrease.
**Note**  
Aurora MySQL doesn't require binary logging for restore operations.

  For more information on enabling binary logging and setting the binlog format, see [Configuring Aurora MySQL binary logging for Single-AZ databases](USER_LogAccess.MySQL.BinaryFormat.md), and [The binary log](https://dev.mysql.com/doc/refman/8.0/en/binary-log.html) in the MySQL documentation.

You can publish the error, general, slow query, and audit logs to Amazon CloudWatch Logs. For more information, see [Publishing database logs to Amazon CloudWatch Logs](USER_LogAccess.Procedural.UploadtoCloudWatch.md).

Another useful tool for summarizing slow, general, and binary log files is [pt-query-digest](https://docs.percona.com/percona-toolkit/pt-query-digest.html).
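As a lightweight alternative to a full log digest, you can scan slow query log headers for entries whose `Rows_examined` far exceeds `Rows_sent` (the standard slow query log header reports both). The following Python sketch is illustrative; the 100x ratio threshold is an arbitrary example, not an AWS recommendation.

```python
import re

# Flag slow-log entries that examine far more rows than they return,
# which marks them as candidates for indexing or query rewrites.
# The line format follows the standard MySQL slow query log header.
STATS = re.compile(r"Rows_sent:\s*(\d+)\s+Rows_examined:\s*(\d+)")

def is_optimization_candidate(header_line: str, ratio: int = 100) -> bool:
    m = STATS.search(header_line)
    if not m:
        return False
    sent, examined = int(m.group(1)), int(m.group(2))
    # A query that examines far more rows than it returns is likely
    # missing an index or filtering late.
    return examined > max(sent, 1) * ratio

line = "# Query_time: 12.0  Lock_time: 0.0 Rows_sent: 1  Rows_examined: 500000"
print(is_optimization_candidate(line))  # True
```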

# Troubleshooting connection issues for Aurora MySQL databases
Troubleshooting database connection issues

Ensuring reliable connectivity between your applications and your RDS DB instance is crucial for the smooth operation of your workloads. However, connectivity issues can arise because of various factors, such as network configurations, authentication problems, or resource constraints. This guide aims to provide a comprehensive approach to troubleshooting connectivity issues with Aurora MySQL.

**Contents**
+ [

## Identifying database connectivity issues for Aurora MySQL
](#mysql-dbconn-identify)
+ [

## Gathering data on connectivity issues for Aurora MySQL
](#mysql-dbconn-gather)
+ [

## Monitoring database connections for Aurora MySQL
](#mysql-dbconn-monitor)
  + [

### Additional monitoring for Aurora MySQL
](#mysql-dbconn-monitor-ams)
+ [

## Connectivity error codes for Aurora MySQL
](#mysql-dbconn-errors)
+ [

## Parameter tuning recommendations for Aurora MySQL
](#mysql-dbconn-params)
+ [

## Examples of troubleshooting database connection issues for Aurora MySQL
](#mysql-dbconn-examples)
  + [

### Example 1: Troubleshooting failed connection attempts
](#mysql-dbconn-example1)
  + [

### Example 2: Troubleshooting abnormal client disconnects
](#mysql-dbconn-example2)
  + [

### Example 3: Troubleshooting IAM failed connection attempts
](#mysql-dbconn-example3)

## Identifying database connectivity issues for Aurora MySQL
Identifying database connectivity issues

Identifying the specific category of the connectivity issue can help narrow down the potential causes and guide the troubleshooting process. Each category might require different approaches and techniques for diagnosis and resolution. Database connectivity issues can broadly be classified into the following categories.

**Connection errors and exceptions**  
Connection errors and exceptions can occur for various reasons, such as incorrect connection strings, authentication failures, network disruptions, or database server issues. Causes can include misconfigured connection parameters, invalid credentials, network outages, or database server crashes or restarts. Misconfigured security groups, virtual private cloud (VPC) settings, network Access Control Lists (ACLs), and route tables associated with subnets can also lead to connection issues.

**Connection limit reached**  
This issue arises when the number of concurrent connections to the database server exceeds the maximum allowed limit. Database servers typically have a configurable maximum connection limit defined by the `max_connections` parameter in the DB cluster and DB instance parameter groups. By imposing a connection limit, the database server ensures that it has sufficient resources (for example, memory, CPU, and file handles) to handle the existing connections efficiently and provide acceptable performance. Causes can include connection leaks in the application, inefficient connection pooling, or an unexpected surge in connection requests.

**Connection timeouts**  
Connection timeouts occur when the client application is unable to establish a connection with the database server within a specified timeout period. Common causes include network issues, server overload, firewall rules, and misconfigured connection settings.

**Idle connection timeouts**  
Idle connections that remain inactive for a prolonged period might be closed automatically by the database server to conserve resources. This timeout is typically configurable using the `wait_timeout` and `interactive_timeout` parameters, and should be adjusted based on the application's connection usage patterns. Causes can include application logic that leaves connections idle for extended periods, or improper connection management.

**Intermittent disconnect of existing connections**  
This class of errors refers to a scenario where established connections between a client application and the database are unexpectedly terminated at irregular intervals, despite being active and in use. The causes can include the following:  
+ Database server issues such as restarts or failovers
+ Improper application connection handling
+ Load balancing and proxy issues
+ Network instability
+ Problems with third-party components or middleware involved in the connection path
+ Query execution timeouts
+ Resource constraints on the server or client side
Identifying the root cause through comprehensive monitoring, logging, and analysis is crucial. Implementing proper error handling, connection pooling, and retry mechanisms can help mitigate the impact of these intermittent disconnections on the application's functionality and user experience.
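Retry logic for intermittent disconnects is often implemented as reconnection with exponential backoff. The following is a minimal Python sketch; the `connect` callable stands in for your driver's connect function (for example, from MySQL Connector/Python) and is an assumption here, not a specific API.

```python
import random
import time

# Minimal retry-with-backoff sketch for intermittent disconnects.
# `connect` is a placeholder for your database driver's connect call.

def connect_with_retry(connect, attempts: int = 5, base_delay: float = 0.5):
    for attempt in range(attempts):
        try:
            return connect()
        except OSError:                    # network-level failures
            if attempt == attempts - 1:
                raise                      # give up after the last attempt
            # Exponential backoff with jitter avoids a thundering herd
            # when many clients reconnect after a failover.
            delay = base_delay * (2 ** attempt) * (0.5 + random.random())
            time.sleep(delay)
```

Jitter (the random factor) matters when many application instances lose their connections at the same moment, such as during a failover: it spreads the reconnection attempts out instead of letting them arrive in synchronized waves.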

## Gathering data on connectivity issues for Aurora MySQL
Gathering data on connectivity issues

Gathering comprehensive data related to the application, database, network, and infrastructure components is crucial for effectively troubleshooting connectivity issues between an application and an Aurora MySQL database. By collecting relevant logs, configurations, and diagnostic information, you gain valuable insights that can help identify the root cause of the connectivity problems and guide you towards an appropriate resolution.

Network logs and configurations, such as security group rules, VPC settings, and route tables, are essential for identifying potential network-related bottlenecks or misconfigurations that could be preventing the application from establishing a successful connection with the database. By analyzing these network components, you can make sure that the necessary ports are open, IP addresses are allowed, and routing configurations are set up correctly.

**Timestamps**  
Record the exact timestamps when the connectivity issues occur. This can help identify patterns or correlate the issues with other events or activities.

**DB engine logs**  
In addition to the general database logs, review the database engine logs (for example, the MySQL error log and slow query log) for any relevant information or errors that might be related to the intermittent connectivity issues. For more information, see [Logging for Aurora MySQL databases](aurora-mysql-troubleshooting-logging.md).

**Client application logs**  
Collect detailed logs from the client applications that connect to the database. Application logs provide visibility into the connection attempts, errors, and any relevant information from the application's perspective, which can reveal issues related to connection strings, authentication credentials, or application-level connection handling.  
Database logs, on the other hand, offer insights into database-side errors, slow queries, or events that might be contributing to the connectivity issues. For more information, see [Logging for Aurora MySQL databases](aurora-mysql-troubleshooting-logging.md).

**Client environment variables**  
Check whether any environment variables or configuration settings on the client side might be affecting the database connection, such as proxy settings, SSL/TLS settings, or any other relevant variables.

**Client library versions**  
Make sure that the client is using the latest versions of any database drivers, libraries, or frameworks used for database connectivity. Outdated versions can have known issues or compatibility problems.

**Client network capture**  
Perform a network capture on the client side using a tool such as Wireshark or `tcpdump` during the times when connectivity issues occur. This can help identify any network-related issues or anomalies on the client side.

**Client network topology**  
Understand the client's network topology, including any firewalls, load balancers, or other components such as RDS Proxy or ProxySQL that connect to the database on the client's behalf instead of the client connecting directly.

**Client operating system settings**  
Check the client's operating system settings that might affect network connectivity, such as firewall rules, network adapter settings, and any other relevant settings.

**Connection pooling configuration**  
If you're using a connection pooling mechanism in your application, review the configuration settings and monitor the pool metrics (for example, active connections, idle connections, and connection timeouts) to ensure that the pool is functioning correctly. Also review the pool settings, such as the maximum pool size, minimum pool size, and connection validation settings, to ensure that they are configured correctly.

**Connection string**  
The connection string typically includes parameters such as the hostname or endpoint, port number, database name, and authentication credentials. Analyzing the connection string can help identify potential misconfigurations or incorrect settings that may be causing connectivity problems. For example, an incorrect hostname or port number can prevent the client from reaching the database instance, while invalid authentication credentials can lead to authentication failures and connection rejections. Additionally, the connection string can reveal issues related to connection pooling, timeouts, or other connection-specific settings that could contribute to connectivity issues. Providing the complete connection string used by the client application can help pinpoint any misconfigurations on the client.
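A quick structural check of the connection string often catches these misconfigurations before a connection attempt is made. The following sketch parses a URL-style DSN with Python's standard library; the endpoint, credentials, and database name shown are placeholders, not real values.

```python
from urllib.parse import urlparse

# Illustrative check of a URL-style DSN. The hostname and credentials
# below are placeholders; Aurora MySQL cluster endpoints end in
# rds.amazonaws.com, and the default MySQL port is 3306.

def describe_dsn(dsn: str) -> dict:
    parts = urlparse(dsn)
    return {
        "host": parts.hostname,
        "port": parts.port or 3306,       # fall back to the MySQL default
        "database": parts.path.lstrip("/"),
        "user": parts.username,
    }

info = describe_dsn(
    "mysql://app_user:secret@mycluster.cluster-abc.us-east-1.rds.amazonaws.com:3306/appdb")
```

Printing the parsed fields makes it easy to confirm that the host is the intended cluster endpoint, the port matches the listener, and the database name is spelled correctly.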

**Database metrics**  
Monitor database metrics such as CPU usage, memory usage, and disk I/O during the times when connectivity issues occur. These can help identify whether the DB instance is experiencing resource contention or performance issues.

**DB engine version**  
Note the Aurora MySQL DB engine version. AWS regularly releases updates that address known issues and security vulnerabilities and introduce performance enhancements. Therefore, we highly recommend that you upgrade to the latest available versions, as these updates often include bug fixes and improvements specifically related to connectivity, performance, and stability. Providing the database version information, along with the other collected details, can assist AWS Support in effectively diagnosing and resolving connectivity issues.

**Network metrics**  
Collect network metrics such as latency, packet loss, and throughput during the times when connectivity issues occur. Tools such as `ping`, `traceroute`, and network monitoring tools can help gather this data.

**Source and client details**  
Determine the IP addresses of the application servers, load balancers, or any other components that are initiating the database connections. This could be a single IP address or a range of IP addresses (CIDR notation). If the source is an Amazon EC2 instance, it also helps to review the instance type, Availability Zone, subnet ID, and security groups associated with the instance, and network interface details such as the private and public IP addresses.

By thoroughly analyzing the gathered data, you can identify misconfigurations, resource constraints, network disruptions, or other underlying issues that are causing the intermittent or persistent connectivity problems. This information allows you to take targeted actions, such as adjusting configurations, resolving network issues, or addressing application-level connection handling.

## Monitoring database connections for Aurora MySQL
Monitoring database connections

To monitor and troubleshoot connectivity issues, you can use the following metrics and features.

**CloudWatch metrics**  
+ `CPUUtilization` – High CPU usage on the DB instance can lead to slow query execution, which can result in connection timeouts or rejections.
+ `DatabaseConnections` – Monitor the number of active connections to the DB instance. A high number of connections close to the configured maximum can indicate potential connectivity issues or connection pool exhaustion.
+ `FreeableMemory` – Low available memory can cause performance issues and connectivity problems because of resource constraints.
+ `NetworkReceiveThroughput` and `NetworkTransmitThroughput` – Unusual spikes or drops in network throughput can indicate connectivity issues or network bottlenecks.

**Performance Insights metrics**  
To troubleshoot connectivity issues in Aurora MySQL using Performance Insights, analyze Database metrics such as the following:  
+ Aborted\_clients
+ Aborted\_connects
+ Connections
+ max\_connections
+ Threads\_connected
+ Threads\_created
+ Threads\_running
These metrics can help you to identify connection bottlenecks, detect network or authentication problems, optimize connection pooling, and ensure efficient thread management. For more information, see [Performance Insights counters for Aurora MySQL](USER_PerfInsights_Counters.md#USER_PerfInsights_Counters.Aurora_MySQL).
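One common use of these counters is to track how close `Threads_connected` is to `max_connections`. The following illustrative sketch flags connection pressure; the 80 percent threshold is an arbitrary example for demonstration, not an AWS recommendation.

```python
# Sketch: flag connection pressure by comparing Threads_connected to
# max_connections. The 80% threshold is an arbitrary example.

def connection_pressure(threads_connected: int, max_connections: int,
                        threshold: float = 0.80) -> bool:
    """Return True when connection usage crosses the warning threshold."""
    return threads_connected >= max_connections * threshold

print(connection_pressure(950, 1000))  # True
print(connection_pressure(100, 1000))  # False
```

Sampling these two values periodically and alerting when the check returns `True` gives you early warning of pool exhaustion before clients start receiving "Too many connections" errors.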

**Performance Insights features**  
+ **Database Load** – Visualize the database load over time and correlate it with connectivity issues or performance degradation.
+ **SQL Statistics** – Analyze SQL statistics to identify inefficient queries or database operations that might contribute to connectivity problems.
+ **Top Queries** – Identify and analyze the most resource-intensive queries, which can help identify potential performance bottlenecks or long-running queries that may be causing connectivity issues.

By monitoring these metrics and leveraging Performance Insights, you can gain visibility into the database instance's performance, resource usage, and potential bottlenecks that might be causing connectivity issues. For example:
+ High `DatabaseConnections` close to the maximum limit can indicate connection pool exhaustion or improper connection handling, leading to connectivity problems.
+ High `CPUUtilization` or low `FreeableMemory` can indicate resource constraints, which may cause slow query execution and connection timeouts or rejections.
+ Analyzing the **Top Queries** and **SQL Statistics** can help identify inefficient or resource-intensive queries that may be contributing to connectivity issues.

Additionally, monitoring CloudWatch Logs and setting up alarms can help you proactively identify and respond to connectivity problems before they escalate.

It's important to note that while these metrics and tools can provide valuable insights, they should be used in conjunction with other troubleshooting steps. By also reviewing network configurations, security group rules, and application-level connection handling, you can comprehensively diagnose and resolve connectivity issues with Aurora MySQL DB instances.

### Additional monitoring for Aurora MySQL


**CloudWatch metrics**  
+ `AbortedClients` – Tracks the number of client connections that have not been closed properly.
+ `AuroraSlowConnectionHandleCount` – Tracks the number of slow connection handle operations, indicating potential connectivity issues or performance bottlenecks.
+ `AuroraSlowHandshakeCount` – Measures the number of slow handshake operations, which can also be an indicator of connectivity problems.
+ `ConnectionAttempts` – Measures the number of connection attempts made to the Aurora MySQL DB instance.

**Global status variables**  
`Aurora_external_connection_count` – Shows the number of database connections to the DB instance, excluding RDS service connections used for database health checks.

By monitoring these metrics and global status variables, you can gain visibility into the connection patterns, errors, and potential bottlenecks that might be causing connectivity issues with your Amazon Aurora MySQL instance.

For example, a high number of `AbortedClients` or `AuroraSlowConnectionHandleCount` can indicate connectivity problems.

Additionally, setting up CloudWatch alarms and notifications can help you proactively identify and respond to connectivity issues before they escalate and impact your application's performance.

## Connectivity error codes for Aurora MySQL
Connectivity error codes

The following are some common connectivity errors for Aurora MySQL databases, along with their error codes and explanations.

**Error Code 1040: Too many connections**  
This error occurs when the client tries to establish more connections than the maximum allowed by the database server. Possible causes include the following:  
+ Connection pooling misconfiguration – If using a connection pooling mechanism, ensure that the maximum pool size is not set too high, and that connections are being properly released back to the pool.
+ Database instance configuration – Verify the maximum allowed connections setting for the database instance and adjust it if necessary by setting the `max_connections` parameter.
+ High concurrency – If multiple clients or applications are connecting to the database simultaneously, the maximum allowed connections limit may be reached.

**Error Code 1045: Access denied for user '...'@'...' (using password: YES/NO)**  
This error indicates an authentication failure when attempting to connect to the database. Possible causes include the following:  
+ Authentication plugin compatibility – Check whether the authentication plugin used by the client is compatible with the database server's authentication mechanism.
+ Incorrect username or password – Verify that the correct username and password are being used in the connection string or authentication mechanism.
+ User permissions – Make sure that the user has the necessary permissions to connect to the database instance from the specified host or network.

**Error Code 1049: Unknown database '...'**  
This error indicates that the client is attempting to connect to a database that does not exist on the server. Possible causes include the following:  
+ Database not created – Make sure that the specified database has been created on the database server.
+ Incorrect database name – Double-check the database name used in the connection string or query for accuracy.
+ User permissions – Verify that the user has the necessary permissions to access the specified database.

**Error Code 1153: Got a packet bigger than 'max\_allowed\_packet' bytes**  
This error occurs when the client attempts to send or receive data that exceeds the maximum packet size allowed by the database server. Possible causes include the following:  
+ Large queries or result sets – If executing queries that involve large amounts of data, the packet size limit may be exceeded.
+ Misconfigured packet size settings – Check the `max_allowed_packet` setting on the database server and adjust it if necessary.
+ Network configuration issues – Make sure that the network configuration (for example, MTU size) allows for the required packet sizes.

**Error Code 1226: User '...' has exceeded the 'max\_user\_connections' resource (current value: ...)**  
This error indicates that the user has exceeded the maximum number of concurrent connections allowed by the database server. Possible causes include the following:  
+ Connection pooling misconfiguration – If using a connection pooling mechanism, ensure that the maximum pool size is not set too high for the user's connection limit.
+ Database instance configuration – Verify the `max_user_connections` setting for the database instance and adjust it if necessary.
+ High concurrency – If multiple clients or applications are connecting to the database simultaneously using the same user, the user-specific connection limit may be reached.

**Error Code 2003: Can't connect to MySQL server on '...' (10061)**  
This error typically occurs when the client is unable to establish a TCP/IP connection with the database server. It can be caused by various issues, such as the following:  
+ Database instance status – Make sure that the database instance is in the `available` state, and not undergoing any maintenance or backup operations.
+ Firewall rules – Check whether any firewalls (operating system, network, or security group) are blocking the connection on the specified port (usually 3306 for MySQL).
+ Incorrect hostname or endpoint – Make sure that the hostname or endpoint used in the connection string is correct and matches the database instance.
+ Network connectivity issues – Verify that the client machine can reach the database instance over the network. Check for any network outages, routing issues, or VPC or subnet misconfigurations.

**Error Code 2005: Unknown MySQL server host '...' (11001)**  
This error occurs when the client is unable to resolve the hostname or endpoint of the database server to an IP address. Possible causes include the following:  
+ DNS resolution issues – Verify that the client machine can resolve the hostname correctly using DNS. Check the DNS settings, DNS cache, and try using the IP address instead of the hostname.
+ Incorrect hostname or endpoint – Double-check the hostname or endpoint used in the connection string for accuracy.
+ Network configuration issues – Make sure that the client's network configuration (for example, VPC, subnet, and route tables) allows DNS resolution and connectivity to the database instance.

**Error Code 2026: SSL connection error**  
This error occurs when there is an issue with the SSL/TLS configuration or certificate validation during the connection attempt. Possible causes include the following:  
+ Certificate expiration – Check whether the SSL/TLS certificate used by the server has expired and needs to be renewed.
+ Certificate validation issues – Verify that the client is able to validate the server's SSL/TLS certificate correctly, and that the certificate is trusted.
+ Network configuration issues – Make sure that the network configuration allows for SSL/TLS connections and doesn't block or interfere with the SSL/TLS handshake process.
+ SSL/TLS configuration mismatch – Make sure that the SSL/TLS settings (for example, cipher suites and protocol versions) on the client and server are compatible.

By understanding the detailed explanations and potential causes for each error code, you can better troubleshoot and resolve connectivity issues when working with Aurora MySQL databases.

## Parameter tuning recommendations for Aurora MySQL
Parameter tuning recommendations

**Maximum connections**  
Adjusting these parameters can help prevent connection issues caused by reaching the maximum allowed connections limit. Make sure that these values are set appropriately based on your application's concurrency requirements and resource constraints.  
+ `max_connections` – This parameter specifies the maximum number of concurrent connections allowed to the DB instance.
+ `max_user_connections` – This parameter can be specified during user creation and modification, and sets the maximum number of concurrent connections allowed for a specific user account.

**Network buffer size**  
Increasing these values can improve network performance, especially for workloads involving large data transfers or result sets. However, be cautious as larger buffer sizes can consume more memory.  
+ `net_buffer_length` – This parameter sets the initial size for client connection and result buffers, balancing memory usage with query performance.
+ `max_allowed_packet` – This parameter specifies the maximum size of a single network packet that can be sent or received by the DB instance.

**Network compression (client side)**  
Enabling network compression can reduce network bandwidth usage, but it can increase CPU overhead on both the client and server sides.  
+ `compress` – This parameter enables or disables network compression for client/server communication.
+ `compress_protocol` – This parameter specifies the compression protocol to use for network communication.

**Network performance tuning**  
Adjusting these timeouts can help manage idle connections and prevent resource exhaustion, but be cautious as low values can cause premature connection terminations.  
+ `interactive_timeout` – This parameter specifies the number of seconds the server waits for activity on an interactive connection before closing it.
+ `wait_timeout` – This parameter determines the number of seconds the server waits for activity on a noninteractive connection before closing it.

**Network timeout settings**  
Adjusting these timeouts can help address issues related to slow or unresponsive connections. But be careful not to set them too low, as it can cause premature connection failures.  
+ `net_read_timeout` – This parameter specifies the number of seconds to wait for more data from a connection before ending the read operation.
+ `net_write_timeout` – This parameter determines the number of seconds to wait for a block to be written to a connection before ending the write operation.
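If you manage these parameters programmatically, you can build the payload that the AWS SDK expects. The following sketch constructs the `Parameters` list accepted by boto3's `modify_db_cluster_parameter_group`; the parameter group name in the commented call is hypothetical, and the timeout values are examples only, chosen with the cautions above in mind.

```python
# Sketch: build the Parameters payload for boto3's
# rds.modify_db_cluster_parameter_group. Values are examples only;
# choose timeouts that match your application's connection patterns.

def timeout_parameters(wait_timeout: int, interactive_timeout: int) -> list[dict]:
    return [
        {
            "ParameterName": name,
            "ParameterValue": str(value),
            "ApplyMethod": "immediate",   # these are dynamic parameters
        }
        for name, value in [
            ("wait_timeout", wait_timeout),
            ("interactive_timeout", interactive_timeout),
        ]
    ]

# To apply (requires AWS credentials and a real parameter group name):
# import boto3
# boto3.client("rds").modify_db_cluster_parameter_group(
#     DBClusterParameterGroupName="my-aurora-mysql-params",  # hypothetical
#     Parameters=timeout_parameters(600, 600),
# )
```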

## Examples of troubleshooting database connection issues for Aurora MySQL
Examples of troubleshooting database connections

The following examples show how to identify and troubleshoot database connection issues for Aurora MySQL.

### Example 1: Troubleshooting failed connection attempts


Connection attempts can fail for several reasons, including authentication failures, SSL/TLS handshake failures, `max_connections` limit reached, and resource constraints on the DB instance.

You can track the number of failed connections either from Performance Insights or by using the following command.

```
mysql> show global status like 'aborted_connects';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| Aborted_connects | 7     |
+------------------+-------+
1 row in set (0.00 sec)
```

If the number of `Aborted_connects` increases over time, then the application could be having intermittent connectivity issues.
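
Because `Aborted_connects` is a cumulative counter, sampling it periodically lets you turn the raw value into a rate. The following sketch shows the arithmetic; the sample values and the function name are hypothetical.

```
# Hypothetical Aborted_connects values sampled 120 seconds apart,
# for example from repeated SHOW GLOBAL STATUS queries.
def failed_connects_per_minute(first_sample, second_sample, interval_seconds):
    """Return the rate of aborted connection attempts per minute."""
    delta = second_sample - first_sample
    return delta * 60 / interval_seconds

rate = failed_connects_per_minute(first_sample=7, second_sample=19, interval_seconds=120)
print(rate)  # 6.0 aborted connection attempts per minute
```

A sustained nonzero rate, rather than the absolute counter value, is the signal worth alerting on.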

You can use [Aurora Advanced Auditing](AuroraMySQL.Auditing.md) to log the connects and disconnects from the client connections. You can do this by setting the following parameters in the DB cluster parameter group:
+ `server_audit_logging` = `1`
+ `server_audit_events` = `CONNECT`

 The following is an extract from the audit logs for a failed login.

```
1728498527380921,aurora-mysql-node1,user_1,172.31.49.222,147189,0,FAILED_CONNECT,,,1045
1728498527380940,aurora-mysql-node1,user_1,172.31.49.222,147189,0,DISCONNECT,,,0
```

Where:
+ `1728498527380921` – The epoch timestamp (in microseconds) of when the failed login occurred
+ `aurora-mysql-node1` – The instance identifier of the Aurora MySQL cluster node on which the connection failed
+ `user_1` – The name of the database user whose login failed
+ `172.31.49.222` – The private IP address of the client that attempted the connection
+ `147189` – The connection ID of the failed login
+ `FAILED_CONNECT` – Indicates that the connection failed
+ `1045` – The return code. A nonzero value indicates an error. In this case, `1045` corresponds to access denied.

For more information, see [Server error codes](https://dev.mysql.com/doc/mysql-errors/5.7/en/server-error-reference.html) and [Client error codes](https://dev.mysql.com/doc/mysql-errors/5.7/en/client-error-reference.html) in the MySQL documentation.
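
When you process many audit records at once, it can help to map the comma-separated fields to names. The following sketch follows the field order shown in the preceding example; the sample record is hypothetical, and a plain comma split is only safe for `CONNECT` events, whose object field is empty.

```
# Field order matching the audit record shown above.
AUDIT_FIELDS = [
    "timestamp", "serverhost", "username", "host",
    "connectionid", "queryid", "operation", "database", "object", "retcode",
]

def parse_audit_record(line):
    """Map a comma-separated CONNECT audit record to a dict of named fields."""
    return dict(zip(AUDIT_FIELDS, line.split(",")))

record = parse_audit_record(
    "1728498527380921,aurora-mysql-node1,user_1,172.31.49.222,147189,0,FAILED_CONNECT,,,1045"
)
print(record["operation"], record["retcode"])  # FAILED_CONNECT 1045
```

Filtering the parsed records on `operation == "FAILED_CONNECT"` and a nonzero `retcode` isolates the failed logins.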

 You can also examine the Aurora MySQL error logs for any related error messages, for example:

```
2024-10-09T19:26:59.310443Z 220 [Note] [MY-010926] [Server] Access denied for user 'user_1'@'172.31.49.222' (using password: YES) (sql_authentication.cc:1502)
```

### Example 2: Troubleshooting abnormal client disconnects


You can track the number of abnormal client disconnects either from Performance Insights or by using the following command.

```
mysql> show global status like 'aborted_clients';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| Aborted_clients | 9     |
+-----------------+-------+
1 row in set (0.01 sec)
```

If the number of `Aborted_clients` increases over time, then the application isn't closing the connections to the database correctly. If connections aren't closed properly, it can lead to resource leaks and potential performance issues. Leaving connections open unnecessarily can consume system resources, such as memory and file descriptors, which can eventually cause the application or server to become unresponsive or restart.

You can use the following query to identify accounts that aren't closing connections properly. It retrieves the user account name, the host from which the user is connecting, the number of connections not closed, and the percentage of connections not closed.

```
SELECT
    ess.user,
    ess.host,
    (a.total_connections - a.current_connections) - ess.count_star AS not_closed,
    (((a.total_connections - a.current_connections) - ess.count_star) * 100) / (a.total_connections - a.current_connections) AS pct_not_closed
FROM
    performance_schema.events_statements_summary_by_account_by_event_name AS ess
    JOIN performance_schema.accounts AS a ON (ess.user = a.user AND ess.host = a.host)
WHERE
    ess.event_name = 'statement/com/quit'
    AND (a.total_connections - a.current_connections) > ess.count_star;

+----------+---------------+------------+----------------+
| user     | host          | not_closed | pct_not_closed |
+----------+---------------+------------+----------------+
| user1    | 172.31.49.222 |          1 |        33.3333 |
| user1    | 172.31.93.250 |       1024 |        12.1021 |
| user2    | 172.31.93.250 |         10 |        12.8551 |
+----------+---------------+------------+----------------+
3 rows in set (0.00 sec)
```

After you identify the user accounts and hosts from which the connections aren't closed, you can proceed to check the code that isn't closing the connections gracefully. 
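
The arithmetic behind that query can be restated simply: connections that ended without issuing `COM_QUIT` are treated as not closed cleanly. The following sketch reproduces it with hypothetical values.

```
def not_closed_stats(total_connections, current_connections, quit_count):
    """Return (not_closed, pct_not_closed) for one account.

    total_connections - current_connections is the number of ended
    connections; quit_count is how many of those issued COM_QUIT.
    """
    ended = total_connections - current_connections
    not_closed = ended - quit_count
    return not_closed, not_closed * 100 / ended

not_closed, pct = not_closed_stats(total_connections=12, current_connections=3, quit_count=6)
print(not_closed, round(pct, 4))  # 3 33.3333
```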

For example, with the MySQL connector in Python, use the `close()` method of the connection object to close connections. Here's an example function that establishes a connection to a database, performs a query, and closes the connection:

```
import mysql.connector

def execute_query(query):
    # Establish a connection to the database
    connection = mysql.connector.connect(
        host="your_host",
        user="your_username",
        password="your_password",
        database="your_database"
    )

    try:
        # Create a cursor object
        cursor = connection.cursor()

        # Execute the query
        cursor.execute(query)

        # Fetch and process the results
        results = cursor.fetchall()
        for row in results:
            print(row)

    finally:
        # Close the cursor and connection
        cursor.close()
        connection.close()
```

In this example, the `connection.close()` method is called in the `finally` block to make sure that the connection is closed, whether or not an exception occurs.
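
An alternative to an explicit `finally` block is `contextlib.closing`, which guarantees that `close()` runs when the `with` block exits, even on an exception. The stand-in connection class below only illustrates the pattern; with a real driver you would wrap the object returned by `mysql.connector.connect()` instead.

```
from contextlib import closing

# Stand-in for a database connection, used here only to demonstrate
# that close() is always called.
class FakeConnection:
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

conn = FakeConnection()
try:
    with closing(conn):
        raise RuntimeError("query failed")  # simulate an error mid-query
except RuntimeError:
    pass

print(conn.closed)  # True: close() ran even though an exception occurred
```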

### Example 3: Troubleshooting IAM failed connection attempts


Connectivity with AWS Identity and Access Management (IAM) users can fail for several reasons, such as:
+ Incorrect IAM policy configuration
+ Expired security credentials
+ Network connectivity issues
+ Database permission mismatches

To troubleshoot these authentication errors, enable the `iam-db-auth-error` log export feature on your Amazon RDS or Aurora database. You can then view detailed authentication error messages in the CloudWatch Logs log group for your Amazon RDS DB instance or Aurora DB cluster.

Once enabled, you can review these logs to identify and resolve the specific cause of your IAM authentication failures.

For example:

```
2025-09-22T12:02:30,806 [ERROR] Failed to authorize the connection request for user 'user_1' due to an internal IAM DB Auth error. (Status Code: 500, Error Code: InternalError)
```

and

```
2025-09-22T12:02:51,954 [ERROR] Failed to authenticate the connection request for user 'user_2' because the provided token is malformed or otherwise invalid. (Status Code: 400, Error Code: InvalidToken)
```

For more troubleshooting guidance, see the [IAM database authentication troubleshooting guide for Aurora](UsingWithRDS.IAMDBAuth.Troubleshooting.md).
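
When reviewing many of these log entries, extracting the user, status code, and error code programmatically makes it easier to group failures by cause. The following sketch parses lines in the format shown above; the sample line is hypothetical.

```
import re

# Pattern matching the IAM DB authentication error lines shown above.
PATTERN = re.compile(
    r"user '(?P<user>[^']+)'.*"
    r"Status Code: (?P<status>\d+), Error Code: (?P<error>\w+)"
)

line = ("2025-09-22T12:02:51,954 [ERROR] Failed to authenticate the connection "
        "request for user 'user_2' because the provided token is malformed or "
        "otherwise invalid. (Status Code: 400, Error Code: InvalidToken)")
m = PATTERN.search(line)
print(m.group("user"), m.group("status"), m.group("error"))  # user_2 400 InvalidToken
```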

# Troubleshooting query performance for Aurora MySQL databases
Troubleshooting query performance

MySQL provides [query optimizer control](https://dev.mysql.com/doc/refman/8.0/en/controlling-optimizer.html) through system variables that affect how query plans are evaluated, switchable optimizations, optimizer and index hints, and the optimizer cost model. These data points can be helpful not only for comparing different MySQL environments, but also for comparing previous query execution plans with current ones, and for understanding the overall execution of a MySQL query at any point.

Query performance depends on many factors, including the execution plan, table schema and size, statistics, resources, indexes, and parameter configuration. Query tuning requires identifying bottlenecks and optimizing the execution path.
+ Find the execution plan for the query and check whether the query is using appropriate indexes. You can optimize your query by using `EXPLAIN` and reviewing the details of each plan.
+ Aurora MySQL version 3 (compatible with MySQL 8.0 Community Edition) supports the `EXPLAIN ANALYZE` statement. The `EXPLAIN ANALYZE` statement is a profiling tool that shows where MySQL spends time on your query and why. With `EXPLAIN ANALYZE`, Aurora MySQL plans, prepares, and runs the query while counting rows and measuring the time spent at various points of the execution plan. When the query completes, `EXPLAIN ANALYZE` prints the plan and its measurements instead of the query result.
+ Keep your schema statistics updated by using the `ANALYZE` statement. The query optimizer can sometimes choose poor execution plans because of outdated statistics. This can lead to poor performance of a query because of inaccurate cardinality estimates of both tables and indexes. The `last_update` column of the [innodb\_table\_stats](https://dev.mysql.com/doc/refman/8.0/en/innodb-persistent-stats.html#innodb-persistent-stats-tables) table shows the last time your schema statistics were updated, which is a good indicator of "staleness."
+ Other issues can occur, such as distribution skew of data, that aren't taken into account for table cardinality. For more information, see [Estimating ANALYZE TABLE complexity for InnoDB tables](https://dev.mysql.com/doc/refman/8.0/en/innodb-analyze-table-complexity.html) and [Histogram statistics in MySQL](https://dev.mysql.com/blog-archive/histogram-statistics-in-mysql/) in the MySQL documentation.
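
As a simple staleness check, you can compare the `last_update` timestamp of a table's statistics against the current time and flag tables that need `ANALYZE TABLE`. The threshold and the timestamps below are hypothetical; an appropriate maximum age depends on how quickly your data changes.

```
from datetime import datetime, timedelta

def is_stale(last_update, now, max_age_days=7):
    """Return True if persistent statistics are older than max_age_days."""
    return now - last_update > timedelta(days=max_age_days)

now = datetime(2024, 10, 9, 19, 0, 0)
print(is_stale(datetime(2024, 9, 1, 8, 30, 0), now))   # True: consider ANALYZE TABLE
print(is_stale(datetime(2024, 10, 8, 12, 0, 0), now))  # False
```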

## Understanding the time spent by queries


The following are ways to determine the time spent by queries:
+ [Profiling](https://dev.mysql.com/doc/refman/8.0/en/show-profile.html)
+ [Performance Schema](https://dev.mysql.com/doc/refman/8.0/en/performance-schema.html)
+ [Query optimizer](https://dev.mysql.com/doc/refman/8.0/en/controlling-optimizer.html)

**Profiling**  
By default, profiling is disabled. Enable profiling, then run the slow query and review its profile.  

```
SET profiling = 1;
-- Run your query.
SHOW PROFILE;
```

1. Identify the stage where the most time is spent. According to [General thread states](https://dev.mysql.com/doc/refman/8.0/en/general-thread-states.html) in the MySQL documentation, reading and processing rows for a `SELECT` statement is often the longest-running state over the lifetime of a given query. You can use the `EXPLAIN` statement to understand how MySQL runs this query.

1. Review the slow query log to evaluate `rows_examined` and `rows_sent` to make sure that the workload is similar in each environment. For more information, see [Logging for Aurora MySQL databases](aurora-mysql-troubleshooting-logging.md).

1. Run the following command for tables that are part of the identified query:

   ```
   SHOW TABLE STATUS\G
   ```

1. Capture the following outputs before and after running the query on each environment:

   ```
   SHOW GLOBAL STATUS;
   ```

1. Run the following commands on each environment to see whether any other queries or sessions are influencing the performance of this sample query.

   ```
   SHOW FULL PROCESSLIST;
   
   SHOW ENGINE INNODB STATUS\G
   ```

   When server resources are busy, every other operation on the server can be affected, including queries. You can also capture this information periodically while queries run, or set up a `cron` job to capture it at useful intervals.
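
When comparing `rows_examined` and `rows_sent` from the slow query log across environments, the ratio between them is often more telling than either number alone: a high ratio of rows scanned to rows returned suggests missing or unsuitable indexes. The numbers in this sketch are hypothetical.

```
def examine_ratio(rows_examined, rows_sent):
    """Rows scanned per row returned; close to 1.0 is ideal for point lookups."""
    return rows_examined / max(rows_sent, 1)

print(examine_ratio(rows_examined=1_000_000, rows_sent=100))  # 10000.0: likely a full scan
print(examine_ratio(rows_examined=100, rows_sent=100))        # 1.0: an efficient access path
```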

**Performance Schema**  
The Performance Schema provides useful information about server runtime performance, while having minimal impact on that performance. This is different from the `information_schema`, which provides schema information about the DB instance. For more information, see [Overview of the Performance Schema for Performance Insights on Aurora MySQL](USER_PerfInsights.EnableMySQL.md).

**Query optimizer trace**  
To understand why a particular [query plan was chosen for execution](https://dev.mysql.com/doc/refman/8.0/en/execution-plan-information.html), you can set up `optimizer_trace` to access the MySQL query optimizer.  
Run an optimizer trace to show extensive information on all the paths available to the optimizer and its choice.  

```
SET SESSION OPTIMIZER_TRACE="enabled=on"; 
SET optimizer_trace_offset=-5, optimizer_trace_limit=5;

-- Run your query.
SELECT * FROM table WHERE x = 1 AND y = 'A';

-- After the query completes:
SELECT * FROM information_schema.OPTIMIZER_TRACE;
SET SESSION OPTIMIZER_TRACE="enabled=off";
```

## Reviewing query optimizer settings


Aurora MySQL version 3 (compatible with MySQL 8.0 Community Edition) has many optimizer-related changes compared with Aurora MySQL version 2 (compatible with MySQL 5.7 Community Edition). If you have some custom values for the `optimizer_switch`, we recommend that you review the differences in the defaults and set `optimizer_switch` values that work best for your workload. We also recommend that you test the options available for Aurora MySQL version 3 to examine how your queries perform.

**Note**  
Aurora MySQL version 3 uses the community default value of 20 for the [innodb\_stats\_persistent\_sample\_pages](https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_stats_persistent_sample_pages) parameter.

You can use the following command to show the `optimizer_switch` values:

```
SELECT @@optimizer_switch\G
```

The following table shows the default `optimizer_switch` values for Aurora MySQL versions 2 and 3.


| Setting | Aurora MySQL version 2 | Aurora MySQL version 3 | 
| --- | --- | --- | 
| batched\_key\_access | off | off | 
| block\_nested\_loop | on | on | 
| condition\_fanout\_filter | on | on | 
| derived\_condition\_pushdown | – | on | 
| derived\_merge | on | on | 
| duplicateweedout | on | on | 
| engine\_condition\_pushdown | on | on | 
| firstmatch | on | on | 
| hash\_join | off | on | 
| hash\_join\_cost\_based | on | – | 
| hypergraph\_optimizer | – | off | 
| index\_condition\_pushdown | on | on | 
| index\_merge | on | on | 
| index\_merge\_intersection | on | on | 
| index\_merge\_sort\_union | on | on | 
| index\_merge\_union | on | on | 
| loosescan | on | on | 
| materialization | on | on | 
| mrr | on | on | 
| mrr\_cost\_based | on | on | 
| prefer\_ordering\_index | on | on | 
| semijoin | on | on | 
| skip\_scan | – | on | 
| subquery\_materialization\_cost\_based | on | on | 
| subquery\_to\_derived | – | off | 
| use\_index\_extensions | on | on | 
| use\_invisible\_indexes | – | off | 

For more information, see [Switchable optimizations (MySQL 5.7)](https://dev.mysql.com/doc/refman/5.7/en/switchable-optimizations.html) and [Switchable optimizations (MySQL 8.0)](https://dev.mysql.com/doc/refman/8.0/en/switchable-optimizations.html) in the MySQL documentation.
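
When migrating between versions, it can be useful to diff the `optimizer_switch` strings from both clusters rather than comparing them by eye. The following sketch parses the comma-separated `flag=value` format returned by `SELECT @@optimizer_switch`; the two strings below are abbreviated, hypothetical samples.

```
def parse_switch(value):
    """Parse an optimizer_switch string like 'a=on,b=off' into a dict."""
    return dict(item.split("=") for item in value.split(","))

def diff_switch(old, new):
    """Return {flag: (old_value, new_value)} for flags whose setting differs."""
    a, b = parse_switch(old), parse_switch(new)
    return {k: (a.get(k), b.get(k))
            for k in sorted(set(a) | set(b))
            if a.get(k) != b.get(k)}

v2 = "hash_join=off,index_merge=on,mrr=on"
v3 = "hash_join=on,index_merge=on,mrr=on,skip_scan=on"
print(diff_switch(v2, v3))  # {'hash_join': ('off', 'on'), 'skip_scan': (None, 'on')}
```

A `None` on either side indicates that the flag exists in only one version, which is worth checking before carrying custom `optimizer_switch` values forward.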

# Amazon Aurora MySQL reference
Aurora MySQL reference<a name="mysqlref"></a>

This reference includes information about Aurora MySQL parameters, status variables, and general SQL extensions or differences from the community MySQL database engine.

**Topics**
+ [

# Aurora MySQL configuration parameters
](AuroraMySQL.Reference.ParameterGroups.md)
+ [

# Aurora MySQL global status variables
](AuroraMySQL.Reference.GlobalStatusVars.md)
+ [

# Aurora MySQL wait events
](AuroraMySQL.Reference.Waitevents.md)
+ [

# Aurora MySQL thread states
](AuroraMySQL.Reference.thread-states.md)
+ [

# Aurora MySQL isolation levels
](AuroraMySQL.Reference.IsolationLevels.md)
+ [

# Aurora MySQL hints
](AuroraMySQL.Reference.Hints.md)
+ [

# Aurora MySQL stored procedure reference
](AuroraMySQL.Reference.StoredProcs.md)
+ [

# Aurora MySQL–specific information\_schema tables
](AuroraMySQL.Reference.ISTables.md)

# Aurora MySQL configuration parameters
Configuration parameters<a name="param_groups"></a>

You manage your Amazon Aurora MySQL DB cluster in the same way that you manage other Amazon RDS DB instances, by using parameters in a DB parameter group. Amazon Aurora differs from other DB engines in that you have a DB cluster that contains multiple DB instances. As a result, some of the parameters that you use to manage your Aurora MySQL DB cluster apply to the entire cluster. Other parameters apply only to a particular DB instance in the DB cluster.

To manage cluster-level parameters, use DB cluster parameter groups. To manage instance-level parameters, use DB parameter groups. Each DB instance in an Aurora MySQL DB cluster is compatible with the MySQL database engine. However, you apply some of the MySQL database engine parameters at the cluster level, and you manage these parameters using DB cluster parameter groups. You can't find cluster-level parameters in the DB parameter group for an instance in an Aurora DB cluster. A list of cluster-level parameters appears later in this topic.

You can manage both cluster-level and instance-level parameters using the AWS Management Console, the AWS CLI, or the Amazon RDS API. You use separate commands for managing cluster-level parameters and instance-level parameters. For example, you can use the [modify-db-cluster-parameter-group](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster-parameter-group.html) CLI command to manage cluster-level parameters in a DB cluster parameter group. You can use the [modify-db-parameter-group](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-parameter-group.html) CLI command to manage instance-level parameters in a DB parameter group for a DB instance in a DB cluster.

You can view both cluster-level and instance-level parameters in the console, or by using the CLI or RDS API. For example, you can use the [describe-db-cluster-parameters](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-cluster-parameters.html) AWS CLI command to view cluster-level parameters in a DB cluster parameter group. You can use the [describe-db-parameters](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-parameters.html) CLI command to view instance-level parameters in a DB parameter group for a DB instance in a DB cluster.

**Note**  
Each [default parameter group](USER_WorkingWithParamGroups.md) contains the default values for all parameters in the parameter group. If the parameter has "engine default" for this value, see the version-specific MySQL or PostgreSQL documentation for the actual default value.  
Unless otherwise noted, parameters listed in the following tables are valid for Aurora MySQL versions 2 and 3.

For more information about DB parameter groups, see [Parameter groups for Amazon Aurora](USER_WorkingWithParamGroups.md). For rules and restrictions for Aurora Serverless v1 clusters, see [Parameter groups for Aurora Serverless v1](aurora-serverless-v1.how-it-works.md#aurora-serverless.parameter-groups).

**Topics**
+ [

## Cluster-level parameters
](#AuroraMySQL.Reference.Parameters.Cluster)
+ [

## Instance-level parameters
](#AuroraMySQL.Reference.Parameters.Instance)
+ [

## MySQL parameters that don't apply to Aurora MySQL
](#AuroraMySQL.Reference.Parameters.Inapplicable)

## Cluster-level parameters
<a name="cluster_params"></a><a name="params"></a>

The following table shows all of the parameters that apply to the entire Aurora MySQL DB cluster.


| Parameter name | Modifiable | Notes | 
| --- | --- | --- | 
|  `aurora_binlog_read_buffer_size`  |  Yes  |  Only affects clusters that use binary log (binlog) replication. For information about binlog replication, see [Replication between Aurora and MySQL or between Aurora and another Aurora DB cluster (binary log replication)](AuroraMySQL.Replication.MySQL.md). Removed from Aurora MySQL version 3.  | 
|  `aurora_binlog_replication_max_yield_seconds`  |  Yes  |  Only affects clusters that use binary log (binlog) replication. For information about binlog replication, see [Replication between Aurora and MySQL or between Aurora and another Aurora DB cluster (binary log replication)](AuroraMySQL.Replication.MySQL.md).  | 
|  `aurora_binlog_replication_sec_index_parallel_workers`  |  Yes  |  Sets the total number of parallel threads available to apply secondary index changes when replicating transactions for large tables with more than one secondary index. The parameter is set to `0` (disabled) by default. This parameter is available in Aurora MySQL version 306 and higher. For more information, see [Optimizing binary log replication for Aurora MySQL](binlog-optimization.md).  | 
|  `aurora_binlog_use_large_read_buffer`  |  Yes  |  Only affects clusters that use binary log (binlog) replication. For information about binlog replication, see [Replication between Aurora and MySQL or between Aurora and another Aurora DB cluster (binary log replication)](AuroraMySQL.Replication.MySQL.md). Removed from Aurora MySQL version 3.  | 
|  `aurora_disable_hash_join`   |  Yes  |  Set this parameter to `ON` to turn off hash join optimization in Aurora MySQL version 2.09 or higher. It isn't supported for version 3. For more information, see [Parallel query for Amazon Aurora MySQL](aurora-mysql-parallel-query.md).  | 
|   `aurora_enable_replica_log_compression`   |   Yes   |   For more information, see [Performance considerations for Amazon Aurora MySQL replication](AuroraMySQL.Replication.md#AuroraMySQL.Replication.Performance). Doesn't apply to clusters that are part of an Aurora global database. Removed from Aurora MySQL version 3.  | 
|   `aurora_enable_repl_bin_log_filtering`   |   Yes   |   For more information, see [Performance considerations for Amazon Aurora MySQL replication](AuroraMySQL.Replication.md#AuroraMySQL.Replication.Performance). Doesn't apply to clusters that are part of an Aurora global database. Removed from Aurora MySQL version 3.  | 
|  `aurora_enable_staggered_replica_restart`  |  Yes  | This setting is available in Aurora MySQL version 3, but it isn't used. | 
|   `aurora_enable_zdr`   |   Yes   |   This setting is turned on by default in Aurora MySQL 2.10 and higher.  | 
|   `aurora_in_memory_relaylog`   |  Yes  |  Sets the in-memory relay log mode. You can use this feature on binlog replicas to improve binary log replication performance. To turn this feature on or off, set the parameter to `ON` or `OFF`.  | 
|   `aurora_enhanced_binlog`   |   Yes   |   Set the value of this parameter to 1 to turn on the enhanced binlog in Aurora MySQL version 3.03.1 and higher. For more information, see [Setting up enhanced binlog for Aurora MySQL](AuroraMySQL.Enhanced.binlog.md).   | 
|   `aurora_full_double_precision_in_json`   |  Yes  |   Set the value of this parameter to enable the parsing of floating point numbers in JSON documents with full precision.   | 
|  `aurora_jemalloc_background_thread`  |  Yes  |  Use this parameter to enable a background thread to perform memory maintenance operations. The allowed values are `0` (disabled) and `1` (enabled). The default value is `0`. This parameter applies to Aurora MySQL version 3.05 and higher.  | 
|  `aurora_jemalloc_dirty_decay_ms`  |  Yes  |  Use this parameter to retain freed memory for a certain amount of time (in milliseconds). Retaining memory allows for faster reuse. The allowed values are `0`–`18446744073709551615`. The default value is `10000` (10 seconds). You can use a shorter delay to help avoid out-of-memory issues, at the expense of slower performance. This parameter applies to Aurora MySQL version 3.05 and higher.  | 
|  `aurora_jemalloc_tcache_enabled`  |  Yes  |  Use this parameter to serve small memory requests (up to 32 KiB) in a thread local cache, bypassing the memory arenas. The allowed values are `0` (disabled) and `1` (enabled). The default value is `1`. This parameter applies to Aurora MySQL version 3.05 and higher.  | 
|   `aurora_load_from_s3_role`   |   Yes   |   For more information, see [Loading data into an Amazon Aurora MySQL DB cluster from text files in an Amazon S3 bucket](AuroraMySQL.Integrating.LoadFromS3.md). Currently not available in Aurora MySQL version 3. Use `aws_default_s3_role`.  | 
|  `aurora_mask_password_hashes_type`  |  Yes  |  This setting is turned on by default in Aurora MySQL 2.11 and higher. Use this setting to mask Aurora MySQL password hashes in the slow query and audit logs. The allowed values are `0` and `1` (default). When set to `1`, passwords are logged as `<secret>`. When set to `0`, passwords are logged as hash (`#`) values.  | 
|   `aurora_select_into_s3_role`   |   Yes   |   For more information, see [Saving data from an Amazon Aurora MySQL DB cluster into text files in an Amazon S3 bucket](AuroraMySQL.Integrating.SaveIntoS3.md). Currently not available in Aurora MySQL version 3. Use `aws_default_s3_role`.  | 
|  `authentication_kerberos_caseins_cmp`  |  Yes  |  Controls case-insensitive username comparison for the `authentication_kerberos` plugin. Set it to `true` for case-insensitive comparison. By default, case-sensitive comparison is used (`false`). For more information, see [Using Kerberos authentication for Aurora MySQL](aurora-mysql-kerberos.md). This parameter is available in Aurora MySQL version 3.03 and higher.  | 
|   `auto_increment_increment`   |   Yes   |  None  | 
|   `auto_increment_offset`   |   Yes   |  None  | 
|   `aws_default_lambda_role`   |   Yes   |   For more information, see [Invoking a Lambda function from an Amazon Aurora MySQL DB cluster](AuroraMySQL.Integrating.Lambda.md).  | 
|  `aws_default_s3_role`  | Yes |  Used when invoking the `LOAD DATA FROM S3`, `LOAD XML FROM S3`, or `SELECT INTO OUTFILE S3` statement from your DB cluster. In Aurora MySQL version 2, the IAM role specified in this parameter is used if an IAM role isn't specified for `aurora_load_from_s3_role` or `aurora_select_into_s3_role` for the appropriate statement. In Aurora MySQL version 3, the IAM role specified for this parameter is always used. For more information, see [Associating an IAM role with an Amazon Aurora MySQL DB cluster](AuroraMySQL.Integrating.Authorizing.IAM.AddRoleToDBCluster.md).  | 
|   `binlog_backup`   |   Yes   |   Set the value of this parameter to 0 to turn on the enhanced binlog in Aurora MySQL version 3.03.1 and higher. You can turn off this parameter only when you use enhanced binlog. For more information, see [Setting up enhanced binlog for Aurora MySQL](AuroraMySQL.Enhanced.binlog.md).  | 
|   `binlog_checksum`   |   Yes   |  The AWS CLI and RDS API report a value of `None` if this parameter isn't set. In that case, Aurora MySQL uses the engine default value, which is `CRC32`. This is different from the explicit setting of `NONE`, which turns off the checksum.  | 
|   `binlog-do-db`   |   Yes   |   This parameter applies to Aurora MySQL version 3.  | 
|   `binlog_format`   |   Yes   |   For more information, see [Replication between Aurora and MySQL or between Aurora and another Aurora DB cluster (binary log replication)](AuroraMySQL.Replication.MySQL.md).  | 
|   `binlog_group_commit_sync_delay`   |   Yes   |   This parameter applies to Aurora MySQL version 3.  | 
|   `binlog_group_commit_sync_no_delay_count`   |   Yes   |   This parameter applies to Aurora MySQL version 3.  | 
|   `binlog-ignore-db`   |   Yes   |   This parameter applies to Aurora MySQL version 3.  | 
|   `binlog_replication_globaldb`   |   Yes   |   Set the value of this parameter to 0 to turn on the enhanced binlog in Aurora MySQL version 3.03.1 and higher. You can turn off this parameter only when you use enhanced binlog. For more information, see [Setting up enhanced binlog for Aurora MySQL](AuroraMySQL.Enhanced.binlog.md).  | 
|   `binlog_row_image`   |   No   |  None  | 
|   `binlog_row_metadata`   |   Yes   |   This parameter applies to Aurora MySQL version 3.  | 
|   `binlog_row_value_options`   |   Yes   |   This parameter applies to Aurora MySQL version 3.  | 
|   `binlog_rows_query_log_events`   |   Yes   |  None  | 
|   `binlog_transaction_compression`   |   Yes   |   This parameter applies to Aurora MySQL version 3.  | 
|   `binlog_transaction_compression_level_zstd`   |   Yes   |   This parameter applies to Aurora MySQL version 3.  | 
|  `binlog_transaction_dependency_history_size`  |  Yes  |  This parameter sets an upper limit on the number of row hashes that are kept in memory and used for looking up the transaction that last modified a given row. After this number of hashes has been reached, the history is purged. This parameter applies to Aurora MySQL version 2.12 and higher, and version 3.  | 
|   `binlog_transaction_dependency_tracking`   |   Yes   |   This parameter applies to Aurora MySQL version 3.  | 
|   `character-set-client-handshake`   |   Yes   |  None  | 
|   `character_set_client`   |   Yes   |  None  | 
|   `character_set_connection`   |   Yes   |  None  | 
|   `character_set_database`   |   Yes   |  The character set used by the default database. The default value is `utf8mb4`.  | 
|   `character_set_filesystem`   |   Yes   |  None  | 
|   `character_set_results`   |   Yes   |  None  | 
|   `character_set_server`   |   Yes   |  None  | 
|   `collation_connection`   |   Yes   |  None  | 
|   `collation_server`   |   Yes   |  None  | 
|   `completion_type`   |   Yes   |  None  | 
|   `default_storage_engine`   |   No   |   Aurora MySQL clusters use the InnoDB storage engine for all of your data.  | 
|   `enforce_gtid_consistency`   |   Sometimes   |  Modifiable in Aurora MySQL version 2 and higher.  | 
|  `event_scheduler`  |  Yes  |  Indicates the status of the Event Scheduler. Modifiable only at the cluster level in Aurora MySQL version 3.  | 
|   `gtid-mode`   |   Sometimes   |  Modifiable in Aurora MySQL version 2 and higher.  | 
|  `information_schema_stats_expiry`  |  Yes  |  The number of seconds after which the MySQL database server fetches data from the storage engine and replaces the data in the cache. The allowed values are `0`–`31536000`. This parameter applies to Aurora MySQL version 3.  | 
|   `init_connect`   |   Yes   |  The command to be run by the server for each client that connects. Use double quotes (") for settings to avoid connection failures, for example: <pre>SET optimizer_switch="hash_join=off"</pre> In Aurora MySQL version 3, this parameter doesn't apply for users who have the `CONNECTION_ADMIN` privilege. This includes the Aurora master user. For more information, see [Role-based privilege model](AuroraMySQL.Compare-80-v3.md#AuroraMySQL.privilege-model).  | 
|  `innodb_adaptive_hash_index`  |  Yes  |  You can modify this parameter at the DB cluster level in Aurora MySQL versions 2 and 3. The Adaptive Hash Index isn't supported on reader DB instances.  | 
|  `innodb_aurora_instant_alter_column_allowed`  | Yes |  Controls whether the `INSTANT` algorithm can be used for `ALTER COLUMN` operations at the global level. The allowed values are the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Reference.ParameterGroups.html) For more information, see [Column Operations](https://dev.mysql.com/doc/refman/8.0/en/innodb-online-ddl-operations.html#online-ddl-column-operations) in the MySQL documentation. This parameter applies to Aurora MySQL version 3.05 and higher.  | 
|   `innodb_autoinc_lock_mode`   |   Yes   |  None  | 
|   `innodb_checksums`   |   No   | Removed from Aurora MySQL version 3.  | 
|   `innodb_cmp_per_index_enabled`   |   Yes   |  None  | 
|   `innodb_commit_concurrency`   |   Yes   |  None  | 
|   `innodb_data_home_dir`   |   No   |   Aurora MySQL uses managed instances where you don't access the file system directly.  | 
|   `innodb_deadlock_detect`   |  Yes  |  This option is used to disable deadlock detection in Aurora MySQL version 2.11 and higher and version 3. On high-concurrency systems, deadlock detection can cause a slowdown when numerous threads wait for the same lock. Consult the MySQL documentation for more information on this parameter.  | 
|  `innodb_default_row_format`  | Yes |  This parameter defines the default row format for InnoDB tables (including user-created InnoDB temporary tables). It applies to Aurora MySQL versions 2 and 3. Its value can be `DYNAMIC`, `COMPACT`, or `REDUNDANT`.  | 
|   `innodb_file_per_table`   |   Yes   |  This parameter affects how table storage is organized. For more information, see [Storage scaling](Aurora.Managing.Performance.md#Aurora.Managing.Performance.StorageScaling).  | 
|  `innodb_flush_log_at_trx_commit`  |  Yes  |  We highly recommend that you use the default value of `1`. In Aurora MySQL version 3, before you can set this parameter to a value other than `1`, you must set the value of `innodb_trx_commit_allow_data_loss` to `1`. For more information, see [Configuring how frequently the log buffer is flushed](AuroraMySQL.BestPractices.FeatureRecommendations.md#AuroraMySQL.BestPractices.Flush).  | 
|   `innodb_ft_max_token_size`   |   Yes   |  None  | 
|   `innodb_ft_min_token_size`   |   Yes   |  None  | 
|   `innodb_ft_num_word_optimize`   |   Yes   |  None  | 
|   `innodb_ft_sort_pll_degree`   |   Yes   |  None  | 
|   `innodb_online_alter_log_max_size`   |   Yes   |  None  | 
|   `innodb_optimize_fulltext_only`   |   Yes   |  None  | 
|   `innodb_page_size`   |   No   |  None  | 
|   `innodb_print_all_deadlocks`   |   Yes   |  When turned on, records information about all InnoDB deadlocks in the Aurora MySQL error log. For more information, see [Minimizing and troubleshooting Aurora MySQL deadlocks](AuroraMySQL.BestPractices.FeatureRecommendations.md#AuroraMySQL.BestPractices.deadlocks).  | 
|   `innodb_purge_batch_size`   |   Yes   |  None  | 
|   `innodb_purge_threads`   |   Yes   |  None  | 
|   `innodb_rollback_on_timeout`   |   Yes   |  None  | 
|   `innodb_rollback_segments`   |   Yes   |  None  | 
|   `innodb_spin_wait_delay`   |   Yes   |  None  | 
|   `innodb_strict_mode`   |   Yes   |  None  | 
|   `innodb_support_xa`   |   Yes   | Removed from Aurora MySQL version 3. | 
|   `innodb_sync_array_size`   |   Yes   |  None  | 
|   `innodb_sync_spin_loops`   |   Yes   |  None  | 
|  `innodb_stats_include_delete_marked`  |  Yes  |  When this parameter is enabled, InnoDB includes delete-marked records when calculating persistent optimizer statistics. This parameter applies to Aurora MySQL version 2.12 and higher, and version 3.  | 
|   `innodb_table_locks`   |   Yes   |  None  | 
|  `innodb_trx_commit_allow_data_loss`  |  Yes  |  In Aurora MySQL version 3, set the value of this parameter to `1` so that you can change the value of `innodb_flush_log_at_trx_commit`. The default value of `innodb_trx_commit_allow_data_loss` is `0`. For more information, see [Configuring how frequently the log buffer is flushed](AuroraMySQL.BestPractices.FeatureRecommendations.md#AuroraMySQL.BestPractices.Flush).  | 
|   `innodb_undo_directory`   |   No   |   Aurora MySQL uses managed instances where you don't access the file system directly.  | 
|  `internal_tmp_disk_storage_engine`  | Yes |  Controls which storage engine is used for on-disk internal temporary tables. Allowed values are `INNODB` and `MYISAM`. This parameter applies to Aurora MySQL version 2.  | 
|  `internal_tmp_mem_storage_engine`  |   Yes   |  Controls which in-memory storage engine is used for internal temporary tables. Allowed values are `MEMORY` and `TempTable`. This parameter applies to Aurora MySQL version 3.  | 
|  `key_buffer_size`  |   Yes   |  Key cache for MyISAM tables. For more information, see [keycache->cache_lock mutex](AuroraMySQL.Reference.Waitevents.md#key-cache.cache-lock).  | 
|   `lc_time_names`   |   Yes   |  None  | 
|  `log_error_suppression_list`  |  Yes  |  Specifies a list of error codes that aren't logged in the MySQL error log. This allows you to ignore certain noncritical error conditions to help keep your error logs clean. For more information, see [log_error_suppression_list](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_log_error_suppression_list) in the MySQL documentation. This parameter applies to Aurora MySQL version 3.03 and higher.  | 
|  `low_priority_updates`  |  Yes  |  `INSERT`, `UPDATE`, `DELETE`, and `LOCK TABLE WRITE` operations wait until there's no pending `SELECT` operation. This parameter affects only storage engines that use table-level locking (MyISAM, MEMORY, MERGE). This parameter applies to Aurora MySQL version 3.  | 
|  `lower_case_table_names`  |  Yes (Aurora MySQL version 2) Only at cluster creation time (Aurora MySQL version 3)  |  In Aurora MySQL version 2.10 and higher 2.x versions, make sure to reboot all reader instances after changing this setting and rebooting the writer instance. For details, see [Rebooting an Aurora cluster with read availability](aurora-mysql-survivable-replicas.md). In Aurora MySQL version 3, the value of this parameter is set permanently at the time the cluster is created. If you use a nondefault value for this option, set up your Aurora MySQL version 3 custom parameter group before upgrading, and specify the parameter group during the snapshot restore operation that creates the version 3 cluster. With an Aurora global database based on Aurora MySQL, you can't perform an in-place upgrade from Aurora MySQL version 2 to version 3 if the `lower_case_table_names` parameter is turned on. For more information on the methods that you can use, see [Major version upgrades](aurora-global-database-upgrade.md#aurora-global-database-upgrade.major).  | 
|   `master-info-repository`   |   Yes   |  Removed from Aurora MySQL version 3.  | 
|   `master_verify_checksum`   |   Yes   |  This parameter applies to Aurora MySQL version 2. Use `source_verify_checksum` in Aurora MySQL version 3.  | 
|  `max_delayed_threads`  | Yes |  Sets the maximum number of threads to handle `INSERT DELAYED` statements. This parameter applies to Aurora MySQL version 3.  | 
|  `max_error_count`  | Yes |  The maximum number of error, warning, and note messages to be stored for display. This parameter applies to Aurora MySQL version 3.  | 
|  `max_execution_time`  | Yes |  The timeout for running `SELECT` statements, in milliseconds. The value can be from `0`–`18446744073709551615`. When set to `0`, there is no timeout. For more information, see [max_execution_time](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_max_execution_time) in the MySQL documentation.  | 
|  `min_examined_row_limit`  | Yes |  Use this parameter to prevent queries that examine fewer than the specified number of rows from being logged.  | 
|   `partial_revokes`   |   No   |  This parameter applies to Aurora MySQL version 3.  | 
|  `preload_buffer_size`  | Yes |  The size of the buffer that's allocated when preloading indexes. This parameter applies to Aurora MySQL version 3.  | 
|  `query_cache_type`  |  Yes  |  Removed from Aurora MySQL version 3.  | 
|   `read_only`   |   Yes   |  When this parameter is turned on, the server permits no updates except those performed by replica threads. For Aurora MySQL version 2, valid values are the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Reference.ParameterGroups.html) For Aurora MySQL version 3, valid values are the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Reference.ParameterGroups.html) In Aurora MySQL version 3, this parameter doesn't apply for users who have the `CONNECTION_ADMIN` privilege. This includes the Aurora master user. For more information, see [Role-based privilege model](AuroraMySQL.Compare-80-v3.md#AuroraMySQL.privilege-model).  | 
|   `relay-log-space-limit`   |   Yes   |   This parameter applies to Aurora MySQL version 3.  | 
|  `replica_parallel_type`  | Yes |  Specifies the policy used to decide which transactions can run in parallel on the replica without violating consistency. Valid values are `DATABASE` and `LOGICAL_CLOCK`. This parameter applies to Aurora MySQL version 3. In Aurora MySQL version 3.03.x and lower, the default value is `DATABASE`. In Aurora MySQL version 3.04 and higher, the default value is `LOGICAL_CLOCK`.  | 
|   `replica_preserve_commit_order`   |   Yes   |   This parameter applies to Aurora MySQL version 3.  | 
|   `replica_transaction_retries`   |   Yes   |   This parameter applies to Aurora MySQL version 3.  | 
|  `replica_type_conversions`  |  Yes  |  This parameter determines the type conversions used on replicas. The allowed values are: `ALL_LOSSY`, `ALL_NON_LOSSY`, `ALL_SIGNED`, and `ALL_UNSIGNED`. For more information, see [Replication with differing table definitions on source and replica](https://dev.mysql.com/doc/refman/8.0/en/replication-features-differing-tables.html) in the MySQL documentation. This parameter applies to Aurora MySQL version 3.  | 
|   `replicate-do-db`   |   Yes   |   This parameter applies to Aurora MySQL version 3.  | 
|   `replicate-do-table`   |   Yes   |   This parameter applies to Aurora MySQL version 3.  | 
|   `replicate-ignore-db`   |   Yes   |   This parameter applies to Aurora MySQL version 3.  | 
|   `replicate-ignore-table`   |   Yes   |   This parameter applies to Aurora MySQL version 3.  | 
|   `replicate-wild-do-table`   |   Yes   |   This parameter applies to Aurora MySQL version 3.  | 
|   `replicate-wild-ignore-table`   |   Yes   |   This parameter applies to Aurora MySQL version 3.  | 
|   `require_secure_transport`   |   Yes   |   This parameter applies to Aurora MySQL version 2 and 3. For more information, see [TLS connections to Aurora MySQL DB clusters](AuroraMySQL.Security.md#AuroraMySQL.Security.SSL).  | 
|   `rpl_read_size`   |   Yes   |   This parameter applies to Aurora MySQL version 3.  | 
|  `server_audit_cw_upload`  |   No   |  This parameter has been deprecated in Aurora MySQL. Use `server_audit_logs_upload`. For more information, see [Publishing Amazon Aurora MySQL logs to Amazon CloudWatch Logs](AuroraMySQL.Integrating.CloudWatch.md).  | 
|   `server_audit_events`   |   Yes   |  For more information, see [Using Advanced Auditing with an Amazon Aurora MySQL DB cluster](AuroraMySQL.Auditing.md).  | 
|   `server_audit_excl_users`   |   Yes   |  For more information, see [Using Advanced Auditing with an Amazon Aurora MySQL DB cluster](AuroraMySQL.Auditing.md).  | 
|   `server_audit_incl_users`   |   Yes   |  For more information, see [Using Advanced Auditing with an Amazon Aurora MySQL DB cluster](AuroraMySQL.Auditing.md).  | 
|   `server_audit_logging`   |   Yes   |   For instructions on uploading the logs to Amazon CloudWatch Logs, see [Publishing Amazon Aurora MySQL logs to Amazon CloudWatch Logs](AuroraMySQL.Integrating.CloudWatch.md).   | 
|  `server_audit_logs_upload`  |  Yes  |  You can publish audit logs to CloudWatch Logs by enabling Advanced Auditing and setting this parameter to `1`. The default for the `server_audit_logs_upload` parameter is `0`. For more information, see [Publishing Amazon Aurora MySQL logs to Amazon CloudWatch Logs](AuroraMySQL.Integrating.CloudWatch.md).  | 
|   `server_id`   |   No   |  None  | 
|   `skip-character-set-client-handshake`   |   Yes   |  None  | 
|   `skip_name_resolve`   |   No   |  None  | 
|   `slave-skip-errors`   |   Yes   |  This parameter applies only to Aurora MySQL version 2 clusters, which are MySQL 5.7 compatible.  | 
|   `source_verify_checksum`   |   Yes   |  This parameter applies to Aurora MySQL version 3.  | 
|  `sync_frm`  |  Yes  |  Removed from Aurora MySQL version 3.  | 
|  `thread_cache_size`  | Yes | The number of threads to be cached. This parameter applies to Aurora MySQL versions 2 and 3. | 
|   `time_zone`   |   Yes   |  By default, the time zone for an Aurora DB cluster is Coordinated Universal Time (UTC). You can set the time zone for instances in your DB cluster to the local time zone for your application instead. For more information, see [Local time zone for Amazon Aurora DB clusters](Concepts.RegionsAndAvailabilityZones.md#Aurora.Overview.LocalTimeZone).  | 
|   `tls_version`   |   Yes   |   For more information, see [TLS versions for Aurora MySQL](AuroraMySQL.Security.md#AuroraMySQL.Security.SSL.TLS_Version).  | 

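You view and change cluster-level parameters such as these through a custom DB cluster parameter group rather than on individual instances. The following AWS CLI sketch shows one way to do that; the parameter group name `my-aurora-cluster-params` is a placeholder for your own group, and `server_audit_logging` is used purely as an example parameter.

```shell
# Show the current value and source of one cluster-level parameter.
# Replace my-aurora-cluster-params with the name of your custom
# DB cluster parameter group.
aws rds describe-db-cluster-parameters \
    --db-cluster-parameter-group-name my-aurora-cluster-params \
    --query "Parameters[?ParameterName=='server_audit_logging'].[ParameterName,ParameterValue,Source]"

# Change a cluster-level parameter. Dynamic parameters can use
# ApplyMethod=immediate; static parameters require pending-reboot
# and take effect after the DB instances restart.
aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-aurora-cluster-params \
    --parameters "ParameterName=server_audit_logging,ParameterValue=1,ApplyMethod=immediate"
```

The instance-level parameters in the next section are managed the same way, using `describe-db-parameters` and `modify-db-parameter-group` against the DB parameter group associated with a specific DB instance. These commands require configured AWS credentials and an existing parameter group, so the fragment above is illustrative rather than directly runnable.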
## Instance-level parameters
<a name="instance_params"></a><a name="db_params"></a>

 The following table shows all of the parameters that apply to a specific DB instance in an Aurora MySQL DB cluster.


|  Parameter name  |  Modifiable  |  Notes  | 
| --- | --- | --- | 
|   `activate_all_roles_on_login`   |   Yes   |   This parameter applies to Aurora MySQL version 3.  | 
|   `allow-suspicious-udfs`   |   No   |  None  | 
|  `aurora_disable_hash_join`   |  Yes  |  Set this parameter to `ON` to turn off hash join optimization in Aurora MySQL version 2.09 or higher. This parameter isn't supported in version 3. For more information, see [Parallel query for Amazon Aurora MySQL](aurora-mysql-parallel-query.md).  | 
|   `aurora_lab_mode`   |   Yes   |   For more information, see [Amazon Aurora MySQL lab mode](AuroraMySQL.Updates.LabMode.md). Removed from Aurora MySQL version 3.  | 
|   `aurora_oom_response`   |   Yes   |  This parameter is supported for Aurora MySQL versions 2 and 3. For more information, see [Troubleshooting out-of-memory issues for Aurora MySQL databases](AuroraMySQLOOM.md).  | 
|   `aurora_parallel_query`   |   Yes   |  Set to `ON` to turn on parallel query in Aurora MySQL version 2.09 or higher. The old `aurora_pq` parameter isn't used in these versions. For more information, see [Parallel query for Amazon Aurora MySQL](aurora-mysql-parallel-query.md).  | 
|   `aurora_pq`   |   Yes   |  Set to `OFF` to turn off parallel query for specific DB instances in Aurora MySQL versions before 2.09. In version 2.09 or higher, turn parallel query on and off with `aurora_parallel_query` instead. For more information, see [Parallel query for Amazon Aurora MySQL](aurora-mysql-parallel-query.md).  | 
|  `aurora_read_replica_read_committed`  |  Yes  |   Enables `READ COMMITTED` isolation level for Aurora Replicas and changes the isolation behavior to reduce purge lag during long-running queries. Enable this setting only if you understand the behavior changes and how they affect your query results. For example, this setting uses less-strict isolation than the MySQL default. When it's enabled, long-running queries might see more than one copy of the same row because Aurora reorganizes the table data while the query is running. For more information, see [Aurora MySQL isolation levels](AuroraMySQL.Reference.IsolationLevels.md).   | 
|  `aurora_tmptable_enable_per_table_limit`  |  Yes  |  Determines whether the `tmp_table_size` parameter controls the maximum size of in-memory temporary tables created by the `TempTable` storage engine in Aurora MySQL version 3.04 and higher. For more information, see [Limiting the size of internal, in-memory temporary tables](ams3-temptable-behavior.md#ams3-temptable-behavior-limit).  | 
|  `aurora_use_vector_instructions`  |  Yes  |  When this parameter is enabled, Aurora MySQL uses optimized vector processing instructions provided by modern CPUs to improve performance on I/O-intensive workloads. This setting is enabled by default in Aurora MySQL version 3.05 and higher.  | 
|   `autocommit`   |   Yes   |  None  | 
|   `automatic_sp_privileges`   |   Yes   |  None  | 
|   `back_log`   |   Yes   |  None  | 
|   `basedir`   |   No   |   Aurora MySQL uses managed instances where you don't access the file system directly.  | 
|   `binlog_cache_size`   |   Yes   |  None  | 
|   `binlog_max_flush_queue_time`   |   Yes   |  None  | 
|   `binlog_order_commits`   |   Yes   |  None  | 
|   `binlog_stmt_cache_size`   |   Yes   |  None  | 
|   `binlog_transaction_compression`   |   Yes   |   This parameter applies to Aurora MySQL version 3.  | 
|   `binlog_transaction_compression_level_zstd`   |   Yes   |   This parameter applies to Aurora MySQL version 3.  | 
|   `bulk_insert_buffer_size`   |   Yes   |  None  | 
|   `concurrent_insert`   |   Yes   |  None  | 
|   `connect_timeout`   |   Yes   |  None  | 
|   `core-file`   |   No   |   Aurora MySQL uses managed instances where you don't access the file system directly.  | 
|   `datadir`   |   No   |   Aurora MySQL uses managed instances where you don't access the file system directly.  | 
|   `default_authentication_plugin`   |   No   |   This parameter applies to Aurora MySQL version 3.  | 
|   `default_time_zone`   |   No   |  None  | 
|   `default_tmp_storage_engine`   |   Yes   |  The default storage engine for temporary tables.  | 
|   `default_week_format`   |   Yes   |  None  | 
|   `delay_key_write`   |   Yes   |  None  | 
|   `delayed_insert_limit`   |   Yes   |  None  | 
|   `delayed_insert_timeout`   |   Yes   |  None  | 
|   `delayed_queue_size`   |   Yes   |  None  | 
|   `div_precision_increment`   |   Yes   |  None  | 
|   `end_markers_in_json`   |   Yes   |  None  | 
|   `eq_range_index_dive_limit`   |   Yes   |  None  | 
|   `event_scheduler`   |  Sometimes  |  Indicates the status of the Event Scheduler. Modifiable only at the cluster level in Aurora MySQL version 3.  | 
|   `explicit_defaults_for_timestamp`   |   Yes   |  None  | 
|   `flush`   |   No   |  None  | 
|   `flush_time`   |   Yes   |  None  | 
|   `ft_boolean_syntax`   |   No   |  None  | 
|   `ft_max_word_len`   |   Yes   |  None  | 
|   `ft_min_word_len`   |   Yes   |  None  | 
|   `ft_query_expansion_limit`   |   Yes   |  None  | 
|   `ft_stopword_file`   |   Yes   |  None  | 
|   `general_log`   |   Yes   |   For instructions on uploading the logs to CloudWatch Logs, see [Publishing Amazon Aurora MySQL logs to Amazon CloudWatch Logs](AuroraMySQL.Integrating.CloudWatch.md).   | 
|   `general_log_file`   |   No   |   Aurora MySQL uses managed instances where you don't access the file system directly.  | 
|   `group_concat_max_len`   |   Yes   |  None  | 
|   `host_cache_size`   |   Yes   |  None  | 
|   `init_connect`   |   Yes   |  The command to be run by the server for each client that connects. Use double quotes (") for settings to avoid connection failures, for example: <pre>SET optimizer_switch="hash_join=off"</pre> In Aurora MySQL version 3, this parameter doesn't apply for users who have the `CONNECTION_ADMIN` privilege, including the Aurora master user. For more information, see [Role-based privilege model](AuroraMySQL.Compare-80-v3.md#AuroraMySQL.privilege-model).  | 
|  `innodb_adaptive_hash_index`  |  Yes  |  You can modify this parameter at the DB instance level in Aurora MySQL version 2. It's modifiable only at the DB cluster level in Aurora MySQL version 3. The Adaptive Hash Index isn't supported on reader DB instances.  | 
|   `innodb_adaptive_max_sleep_delay`   |   Yes   |   Modifying this parameter has no effect because `innodb_thread_concurrency` is always 0 for Aurora.  | 
|  `innodb_aurora_max_partitions_for_range`  | Yes |  In some cases where persisted statistics aren't available, you can use this parameter to improve the performance of row count estimations on partitioned tables. You can set it to a value from 0–8192, where the value determines the number of partitions to check during row count estimation. The default value is 0, which estimates using all of the partitions, consistent with default MySQL behavior. This parameter is available for Aurora MySQL version 3.03.1 and higher.  | 
|   `innodb_autoextend_increment`   |   Yes   |  None  | 
|   `innodb_buffer_pool_dump_at_shutdown`   |   No   |  None  | 
|   `innodb_buffer_pool_dump_now`   |   No   |  None  | 
|   `innodb_buffer_pool_filename`   |   No   |  None  | 
|   `innodb_buffer_pool_load_abort`   |   No   |  None  | 
|   `innodb_buffer_pool_load_at_startup`   |   No   |  None  | 
|   `innodb_buffer_pool_load_now`   |   No   |  None  | 
|   `innodb_buffer_pool_size`   |   Yes   |  The default value is represented by a formula. For details about how the `DBInstanceClassMemory` value in the formula is calculated, see [DB parameter formula variables](USER_ParamValuesRef.md#USER_FormulaVariables).  | 
|   `innodb_change_buffer_max_size`   |   No   |   Aurora MySQL doesn't use the InnoDB change buffer at all.  | 
|   `innodb_compression_failure_threshold_pct`   |   Yes   |  None  | 
|   `innodb_compression_level`   |   Yes   |  None  | 
|   `innodb_compression_pad_pct_max`   |   Yes   |  None  | 
|   `innodb_concurrency_tickets`   |   Yes   |   Modifying this parameter has no effect, because `innodb_thread_concurrency` is always 0 for Aurora.  | 
|   `innodb_deadlock_detect`   |  Yes  |  Use this parameter to turn off deadlock detection in Aurora MySQL version 2.11 and higher, and in version 3. On high-concurrency systems, deadlock detection can cause a slowdown when numerous threads wait for the same lock. For more information about this parameter, see the MySQL documentation.  | 
|   `innodb_file_format`   |   Yes   |   Removed from Aurora MySQL version 3.  | 
|   `innodb_flushing_avg_loops`   |   No   |  None  | 
|   `innodb_force_load_corrupted`   |   No   |  None  | 
|   `innodb_ft_aux_table`   |   Yes   |  None  | 
|   `innodb_ft_cache_size`   |   Yes   |  None  | 
|   `innodb_ft_enable_stopword`   |   Yes   |  None  | 
|   `innodb_ft_server_stopword_table`   |   Yes   |  None  | 
|   `innodb_ft_user_stopword_table`   |   Yes   |  None  | 
|   `innodb_large_prefix`   |   Yes   |   Removed from Aurora MySQL version 3.  | 
|   `innodb_lock_wait_timeout`   |   Yes   |  None  | 
|   `innodb_log_compressed_pages`   |   No   |  None  | 
|   `innodb_lru_scan_depth`   |   Yes   |  None  | 
|   `innodb_max_purge_lag`   |   Yes   |  None  | 
|   `innodb_max_purge_lag_delay`   |   Yes   |  None  | 
|   `innodb_monitor_disable`   |   Yes   |  None  | 
|   `innodb_monitor_enable`   |   Yes   |  None  | 
|   `innodb_monitor_reset`   |   Yes   |  None  | 
|   `innodb_monitor_reset_all`   |   Yes   |  None  | 
|   `innodb_old_blocks_pct`   |   Yes   |  None  | 
|   `innodb_old_blocks_time`   |   Yes   |  None  | 
|   `innodb_open_files`   |   Yes   |  None  | 
|   `innodb_print_all_deadlocks`   |   Yes   |  When turned on, records information about all InnoDB deadlocks in the Aurora MySQL error log. For more information, see [Minimizing and troubleshooting Aurora MySQL deadlocks](AuroraMySQL.BestPractices.FeatureRecommendations.md#AuroraMySQL.BestPractices.deadlocks).  | 
|   `innodb_random_read_ahead`   |   Yes   |  None  | 
|   `innodb_read_ahead_threshold`   |   Yes   |  None  | 
|   `innodb_read_io_threads`   |   No   |  None  | 
|   `innodb_read_only`   |   No   |   Aurora MySQL manages the read-only and read/write state of DB instances based on the type of cluster. For example, a provisioned cluster has one read/write DB instance (the *primary instance*) and any other instances in the cluster are read-only (the Aurora Replicas).   | 
|   `innodb_replication_delay`   |   Yes   |  None  | 
|   `innodb_sort_buffer_size`   |   Yes   |  None  | 
|   `innodb_stats_auto_recalc`   |   Yes   |  None  | 
|   `innodb_stats_method`   |   Yes   |  None  | 
|   `innodb_stats_on_metadata`   |   Yes   |  None  | 
|   `innodb_stats_persistent`   |   Yes   |  None  | 
|   `innodb_stats_persistent_sample_pages`   |   Yes   |  None  | 
|   `innodb_stats_transient_sample_pages`   |   Yes   |  None  | 
|   `innodb_thread_concurrency`   |   No   |  None  | 
|   `innodb_thread_sleep_delay`   |   Yes   |   Modifying this parameter has no effect because `innodb_thread_concurrency` is always 0 for Aurora.  | 
|   `interactive_timeout`   |   Yes   |   Aurora evaluates the minimum value of `interactive_timeout` and `wait_timeout`. It then uses that minimum as the timeout to end all idle sessions, both interactive and noninteractive.   | 
|  `internal_tmp_disk_storage_engine`  | Yes |  Controls which storage engine is used for on-disk internal temporary tables. Allowed values are `INNODB` and `MYISAM`. This parameter applies to Aurora MySQL version 2.  | 
|  `internal_tmp_mem_storage_engine`  |  Sometimes  |  Controls which in-memory storage engine is used for internal temporary tables. Allowed values for writer DB instances are `MEMORY` and `TempTable`. For reader DB instances, this parameter is set to `TempTable` and can't be modified. This parameter applies to Aurora MySQL version 3.  | 
|   `join_buffer_size`   |   Yes   |  None  | 
|   `keep_files_on_create`   |   Yes   |  None  | 
|  `key_buffer_size`  |   Yes   |  Key cache for MyISAM tables. For more information, see [keycache->cache_lock mutex](AuroraMySQL.Reference.Waitevents.md#key-cache.cache-lock).  | 
|   `key_cache_age_threshold`   |   Yes   |  None  | 
|   `key_cache_block_size`   |   Yes   |  None  | 
|   `key_cache_division_limit`   |   Yes   |  None  | 
|   `local_infile`   |   Yes   |  None  | 
|   `lock_wait_timeout`   |   Yes   |  None  | 
|   `log-bin`   |   No   |   Setting `binlog_format` to `STATEMENT`, `MIXED`, or `ROW` automatically sets `log-bin` to `ON`. Setting `binlog_format` to `OFF` automatically sets `log-bin` to `OFF`. For more information, see [Replication between Aurora and MySQL or between Aurora and another Aurora DB cluster (binary log replication)](AuroraMySQL.Replication.MySQL.md).   | 
|   `log_bin_trust_function_creators`   |   Yes   |  None  | 
|   `log_bin_use_v1_row_events`   |   Yes   |   Removed from Aurora MySQL version 3.  | 
|   `log_error`   |   No   |  None  | 
|  `log_error_suppression_list`  |  Yes  |  Specifies a list of error codes that aren't logged in the MySQL error log. This allows you to ignore certain noncritical error conditions to help keep your error logs clean. For more information, see [log_error_suppression_list](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_log_error_suppression_list) in the MySQL documentation. This parameter applies to Aurora MySQL version 3.03 and higher.  | 
|   `log_output`   |   Yes   |  None  | 
|   `log_queries_not_using_indexes`   |   Yes   |  None  | 
|   `log_slave_updates`   |   No   |   This parameter applies to Aurora MySQL version 2. Use `log_replica_updates` in Aurora MySQL version 3.  | 
|   `log_replica_updates`   |   No   |   This parameter applies to Aurora MySQL version 3.  | 
|   `log_throttle_queries_not_using_indexes`   |   Yes   |  None  | 
|   `log_warnings`   |   Yes   |   Removed from Aurora MySQL version 3.  | 
|   `long_query_time`   |   Yes   |  None  | 
|   `low_priority_updates`   |   Yes   |  `INSERT`, `UPDATE`, `DELETE`, and `LOCK TABLE WRITE` operations wait until there's no pending `SELECT` operation. This parameter affects only storage engines that use table-level locking (MyISAM, MEMORY, MERGE). This parameter applies to Aurora MySQL version 3.  | 
|   `max_allowed_packet`   |   Yes   |  None  | 
|   `max_binlog_cache_size`   |   Yes   |  None  | 
|   `max_binlog_size`   |   No   |  None  | 
|   `max_binlog_stmt_cache_size`   |   Yes   |  None  | 
|   `max_connect_errors`   |   Yes   |  None  | 
|   `max_connections`   |   Yes   |  The default value is represented by a formula. For details about how the `DBInstanceClassMemory` value in the formula is calculated, see [DB parameter formula variables](USER_ParamValuesRef.md#USER_FormulaVariables). For the default values depending on the instance class, see [Maximum connections to an Aurora MySQL DB instance](AuroraMySQL.Managing.Performance.md#AuroraMySQL.Managing.MaxConnections).  | 
|   `max_delayed_threads`   |   Yes   |  Sets the maximum number of threads to handle `INSERT DELAYED` statements. This parameter applies to Aurora MySQL version 3.  | 
|   `max_error_count`   |   Yes   |  The maximum number of error, warning, and note messages to be stored for display. This parameter applies to Aurora MySQL version 3.  | 
|  `max_execution_time`  | Yes |  The timeout for running `SELECT` statements, in milliseconds. The value can be from `0`–`18446744073709551615`. When set to `0`, there is no timeout. For more information, see [max_execution_time](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_max_execution_time) in the MySQL documentation.  | 
|   `max_heap_table_size`   |   Yes   |  None  | 
|   `max_insert_delayed_threads`   |   Yes   |  None  | 
|   `max_join_size`   |   Yes   |  None  | 
|   `max_length_for_sort_data`   |   Yes   |   Removed from Aurora MySQL version 3.  | 
|   `max_prepared_stmt_count`   |   Yes   |  None  | 
|   `max_seeks_for_key`   |   Yes   |  None  | 
|   `max_sort_length`   |   Yes   |  None  | 
|   `max_sp_recursion_depth`   |   Yes   |  None  | 
|   `max_tmp_tables`   |   Yes   |   Removed from Aurora MySQL version 3.  | 
|   `max_user_connections`   |   Yes   |  None  | 
|   `max_write_lock_count`   |   Yes   |  None  | 
|   `metadata_locks_cache_size`   |   Yes   |   Removed from Aurora MySQL version 3.  | 
|   `min_examined_row_limit`   |   Yes   |  Use this parameter to prevent queries that examine fewer than the specified number of rows from being logged. This parameter applies to Aurora MySQL version 3.  | 
|   `myisam_data_pointer_size`   |   Yes   |  None  | 
|   `myisam_max_sort_file_size`   |   Yes   |  None  | 
|   `myisam_mmap_size`   |   Yes   |  None  | 
|   `myisam_sort_buffer_size`   |   Yes   |  None  | 
|   `myisam_stats_method`   |   Yes   |  None  | 
|   `myisam_use_mmap`   |   Yes   |  None  | 
|   `net_buffer_length`   |   Yes   |  None  | 
|   `net_read_timeout`   |   Yes   |  None  | 
|   `net_retry_count`   |   Yes   |  None  | 
|   `net_write_timeout`   |   Yes   |  None  | 
|   `old-style-user-limits`   |   Yes   |  None  | 
|   `old_passwords`   |   Yes   |   Removed from Aurora MySQL version 3.  | 
|   `optimizer_prune_level`   |   Yes   |  None  | 
|   `optimizer_search_depth`   |   Yes   |  None  | 
|   `optimizer_switch`   |   Yes   |   For information about Aurora MySQL features that use this switch, see [Best practices with Amazon Aurora MySQL](AuroraMySQL.BestPractices.md).  | 
|   `optimizer_trace`   |   Yes   |  None  | 
|   `optimizer_trace_features`   |   Yes   |  None  | 
|   `optimizer_trace_limit`   |   Yes   |  None  | 
|   `optimizer_trace_max_mem_size`   |   Yes   |  None  | 
|   `optimizer_trace_offset`   |   Yes   |  None  | 
|   `performance-schema-consumer-events-waits-current`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance-schema-consumer-events-waits-current`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance-schema-instrument`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance-schema-instrument`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema`   |   Yes   |  If the **Source** column is set to `Modified`, Performance Insights is managing the Performance Schema. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_accounts_size`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_accounts_size`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_consumer_global_instrumentation`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_consumer_global_instrumentation`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_consumer_thread_instrumentation`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_consumer_thread_instrumentation`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_consumer_events_stages_current`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_consumer_events_stages_current`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_consumer_events_stages_history`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_consumer_events_stages_history`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_consumer_events_stages_history_long`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_consumer_events_stages_history_long`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_consumer_events_statements_current`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_consumer_events_statements_current`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_consumer_events_statements_history`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_consumer_events_statements_history`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_consumer_events_statements_history_long`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_consumer_events_statements_history_long`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_consumer_events_waits_history`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_consumer_events_waits_history`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_consumer_events_waits_history_long`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_consumer_events_waits_history_long`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_consumer_statements_digest`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_consumer_statements_digest`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_digests_size`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_digests_size`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_events_stages_history_long_size`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_events_stages_history_long_size`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_events_stages_history_size`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_events_stages_history_size`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_events_statements_history_long_size`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_events_statements_history_long_size`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_events_statements_history_size`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_events_statements_history_size`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_events_transactions_history_long_size`   |  Yes  |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_events_transactions_history_long_size`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_events_transactions_history_size`   |  Yes  |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_events_transactions_history_size`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_events_waits_history_long_size`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_events_waits_history_long_size`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_events_waits_history_size`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_events_waits_history_size`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_hosts_size`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_hosts_size`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_max_cond_classes`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_max_cond_classes`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_max_cond_instances`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_max_cond_instances`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_max_digest_length`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_max_digest_length`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_max_file_classes`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_max_file_classes`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_max_file_handles`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_max_file_handles`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_max_file_instances`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_max_file_instances`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|  `performance_schema_max_index_stat`  |  Yes  |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_max_index_stat`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_max_memory_classes`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_max_memory_classes`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_max_metadata_locks`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_max_metadata_locks`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_max_mutex_classes`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_max_mutex_classes`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_max_mutex_instances`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_max_mutex_instances`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_max_prepared_statements_instances`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_max_prepared_statements_instances`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_max_program_instances`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_max_program_instances`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_max_rwlock_classes`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_max_rwlock_classes`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_max_rwlock_instances`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_max_rwlock_instances`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_max_socket_classes`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_max_socket_classes`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_max_socket_instances`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_max_socket_instances`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_max_sql_text_length`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_max_sql_text_length`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_max_stage_classes`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_max_stage_classes`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_max_statement_classes`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_max_statement_classes`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_max_statement_stack`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_max_statement_stack`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_max_table_handles`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_max_table_handles`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_max_table_instances`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_max_table_instances`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_max_table_lock_stat`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_max_table_lock_stat`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_max_thread_classes`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_max_thread_classes`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_max_thread_instances`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_max_thread_instances`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_session_connect_attrs_size`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_session_connect_attrs_size`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_setup_actors_size`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_setup_actors_size`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `performance_schema_setup_objects_size`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_setup_objects_size`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|  `performance_schema_show_processlist`  |  Yes  | This parameter determines which `SHOW PROCESSLIST` implementation to use: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Reference.ParameterGroups.html) This parameter applies to Aurora MySQL version 2.12 and higher, and version 3. If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_show_processlist`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md). | 
|   `performance_schema_users_size`   |   Yes   |  If the **Source** column for the parameter `performance_schema` is set to `Modified`, performance schema is using the parameter `performance_schema_users_size`. For more information about enabling the Performance Schema, see [Determining whether Performance Insights is managing the Performance Schema](USER_PerfInsights.EnableMySQL.determining-status.md).  | 
|   `pid_file`   |   No   |  None  | 
|   `plugin_dir`   |   No   |   Aurora MySQL uses managed instances where you don't access the file system directly.  | 
|   `port`   |   No   |   Aurora MySQL manages the connection properties and enforces consistent settings for all DB instances in a cluster.  | 
|   `preload_buffer_size`   |   Yes   |  The size of the buffer that's allocated when preloading indexes. This parameter applies to Aurora MySQL version 3.  | 
|   `profiling_history_size`   |   Yes   |  None  | 
|   `query_alloc_block_size`   |   Yes   |  None  | 
|   `query_cache_limit`   |   Yes   |   Removed from Aurora MySQL version 3.  | 
|   `query_cache_min_res_unit`   |   Yes   |   Removed from Aurora MySQL version 3.  | 
|   `query_cache_size`   |   Yes   |  The default value is represented by a formula. For details about how the `DBInstanceClassMemory` value in the formula is calculated, see [DB parameter formula variables](USER_ParamValuesRef.md#USER_FormulaVariables).  Removed from Aurora MySQL version 3.  | 
|  `query_cache_type`  |  Yes  |  Removed from Aurora MySQL version 3.  | 
|   `query_cache_wlock_invalidate`   |   Yes   |   Removed from Aurora MySQL version 3.  | 
|   `query_prealloc_size`   |   Yes   |  None  | 
|   `range_alloc_block_size`   |   Yes   |  None  | 
|   `read_buffer_size`   |   Yes   |  None  | 
|   `read_only`   |   Yes   |  When this parameter is turned on, the server permits no updates except those performed by replica threads. For Aurora MySQL version 2, valid values are the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Reference.ParameterGroups.html) We recommend that you use the DB cluster parameter group in Aurora MySQL version 2 to make sure that the `read_only` parameter is applied to new writer instances on failover.  Reader instances are always read only, because Aurora MySQL sets `innodb_read_only` to `1` on all readers. Therefore, `read_only` is redundant on reader instances.  Removed at the instance level from Aurora MySQL version 3.  | 
|   `read_rnd_buffer_size`   |   Yes   |  None  | 
|   `relay-log`   |   No   |  None  | 
|   `relay_log_info_repository`   |   Yes   |   Removed from Aurora MySQL version 3.  | 
|   `relay_log_recovery`  |   No   |  None  | 
|  `replica_checkpoint_group`  |   Yes   |   This parameter applies to Aurora MySQL version 3.   | 
|  `replica_checkpoint_period`  |   Yes   |  This parameter applies to Aurora MySQL version 3.   | 
|  `replica_parallel_workers`  |   Yes   |  This parameter applies to Aurora MySQL version 3.   | 
|  `replica_pending_jobs_size_max`  |   Yes   |  This parameter applies to Aurora MySQL version 3.   | 
|  `replica_skip_errors`  |   Yes   |  This parameter applies to Aurora MySQL version 3.   | 
|  `replica_sql_verify_checksum`  |   Yes   |  This parameter applies to Aurora MySQL version 3.   | 
|   `safe-user-create`   |   Yes   |  None  | 
|   `secure_auth`   |   Yes   |  This parameter is always turned on in Aurora MySQL version 2. Trying to turn it off generates an error. Removed from Aurora MySQL version 3.  | 
|   `secure_file_priv`   |   No   |   Aurora MySQL uses managed instances where you don't access the file system directly.  | 
|  `show_create_table_verbosity`  |  Yes  |  Enabling this variable causes [SHOW CREATE TABLE](https://dev.mysql.com/doc/refman/5.7/en/show-create-table.html) to display the `ROW_FORMAT` regardless of whether it's the default format. This parameter applies to Aurora MySQL version 2.12 and higher, and version 3.  | 
|   `skip-slave-start`   |   No   |  None  | 
|   `skip_external_locking`   |   No   |  None  | 
|   `skip_show_database`   |   Yes   |  None  | 
|   `slave_checkpoint_group`   |   Yes   |   Aurora MySQL version 2. Use `replica_checkpoint_group` in Aurora MySQL version 3.  | 
|   `slave_checkpoint_period`   |   Yes   |   Aurora MySQL version 2. Use `replica_checkpoint_period` in Aurora MySQL version 3.  | 
|   `slave_parallel_workers`   |   Yes   |   Aurora MySQL version 2. Use `replica_parallel_workers` in Aurora MySQL version 3.  | 
|   `slave_pending_jobs_size_max`   |   Yes   |   Aurora MySQL version 2. Use `replica_pending_jobs_size_max` in Aurora MySQL version 3.  | 
|   `slave_sql_verify_checksum`   |   Yes   |   Aurora MySQL version 2. Use `replica_sql_verify_checksum` in Aurora MySQL version 3.  | 
|   `slow_launch_time`   |   Yes   |  None  | 
|   `slow_query_log`   |   Yes   |   For instructions on uploading the logs to CloudWatch Logs, see [Publishing Amazon Aurora MySQL logs to Amazon CloudWatch Logs](AuroraMySQL.Integrating.CloudWatch.md).   | 
|   `slow_query_log_file`   |   No   |   Aurora MySQL uses managed instances where you don't access the file system directly.  | 
|   `socket`   |   No   |  None  | 
|   `sort_buffer_size`   |   Yes   |  None  | 
|   `sql_mode`   |   Yes   |  None  | 
|   `sql_select_limit`   |   Yes   |  None  | 
|   `stored_program_cache`   |   Yes   |  None  | 
|   `sync_binlog`   |   No   |  None  | 
|   `sync_master_info`   |   Yes   |  None  | 
|   `sync_source_info`   |   Yes   |   This parameter applies to Aurora MySQL version 3.  | 
|   `sync_relay_log`   |   Yes   |   Removed from Aurora MySQL version 3.  | 
|   `sync_relay_log_info`   |   Yes   |  None  | 
|   `sysdate-is-now`   |   Yes   |  None  | 
|   `table_cache_element_entry_ttl`   |   No   |  None  | 
|   `table_definition_cache`   |   Yes   |  The default value is represented by a formula. For details about how the `DBInstanceClassMemory` value in the formula is calculated, see [DB parameter formula variables](USER_ParamValuesRef.md#USER_FormulaVariables).  | 
|   `table_open_cache`   |   Yes   |  The default value is represented by a formula. For details about how the `DBInstanceClassMemory` value in the formula is calculated, see [DB parameter formula variables](USER_ParamValuesRef.md#USER_FormulaVariables).  | 
|   `table_open_cache_instances`   |   Yes   |  None  | 
|   `temp-pool`   |   Yes   |   Removed from Aurora MySQL version 3.  | 
|   `temptable_max_mmap`   |   Yes   |  This parameter applies to Aurora MySQL version 3. For details, see [New temporary table behavior in Aurora MySQL version 3](ams3-temptable-behavior.md).  | 
|  `temptable_max_ram`  |  Yes  |  This parameter applies to Aurora MySQL version 3. For details, see [New temporary table behavior in Aurora MySQL version 3](ams3-temptable-behavior.md).  | 
|  `temptable_use_mmap`  |  Yes  |  This parameter applies to Aurora MySQL version 3. For details, see [New temporary table behavior in Aurora MySQL version 3](ams3-temptable-behavior.md).  | 
|  `thread_cache_size`  | Yes | The number of threads to be cached. This parameter applies to Aurora MySQL versions 2 and 3. | 
|  `thread_handling`  |  No  |  None  | 
|   `thread_stack`   |  Yes  |  None  | 
|   `timed_mutexes`   |  Yes  |  None  | 
|  `tmp_table_size`  |  Yes  |  Defines the maximum size of internal in-memory temporary tables created by the `MEMORY` storage engine in Aurora MySQL version 3. In Aurora MySQL version 3.04 and higher, defines the maximum size of internal in-memory temporary tables created by the `TempTable` storage engine when `aurora_tmptable_enable_per_table_limit` is `ON`. For more information, see [Limiting the size of internal, in-memory temporary tables](ams3-temptable-behavior.md#ams3-temptable-behavior-limit).  | 
|   `tmpdir`   |  No  |   Aurora MySQL uses managed instances where you don't access the file system directly.  | 
|   `transaction_alloc_block_size`   |   Yes   |  None  | 
|   `transaction_isolation`   |   Yes   |   This parameter applies to Aurora MySQL version 3. It replaces `tx_isolation`.  | 
|   `transaction_prealloc_size`   |   Yes   |  None  | 
|   `tx_isolation`   |   Yes   |   Removed from Aurora MySQL version 3. It is replaced by `transaction_isolation`.  | 
|   `updatable_views_with_limit`   |   Yes   |  None  | 
|   `validate-password`   |   No   |  None  | 
|   `validate_password_dictionary_file`   |   No   |  None  | 
|   `validate_password_length`   |   No   |  None  | 
|   `validate_password_mixed_case_count`   |   No   |  None  | 
|   `validate_password_number_count`   |   No   |  None  | 
|   `validate_password_policy`   |   No   |  None  | 
|   `validate_password_special_char_count`   |   No   |  None  | 
|   `wait_timeout`   |   Yes   |  Aurora evaluates the minimum value of `interactive_timeout` and `wait_timeout`. It then uses that minimum as the timeout to end all idle sessions, both interactive and noninteractive.   | 
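As the `wait_timeout` entry above notes, Aurora ends idle sessions using the smaller of `interactive_timeout` and `wait_timeout`, regardless of session type. A minimal sketch of that rule (the function name is illustrative, not part of Aurora):

```python
def effective_idle_timeout(interactive_timeout: int, wait_timeout: int) -> int:
    """Aurora applies the minimum of the two timeouts, in seconds, to all
    idle sessions, interactive and noninteractive alike."""
    return min(interactive_timeout, wait_timeout)

# Lowering either parameter lowers the effective timeout for every session.
print(effective_idle_timeout(28800, 600))  # 600
```

This means you can't give interactive sessions a longer idle window than noninteractive ones by raising `interactive_timeout` alone; the smaller value always wins.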

## MySQL parameters that don't apply to Aurora MySQL


Because of architectural differences between Aurora MySQL and MySQL, some MySQL parameters don't apply to Aurora MySQL.

The following MySQL parameters don't apply to Aurora MySQL. This list isn't exhaustive.
+ `activate_all_roles_on_login` – This parameter doesn't apply to Aurora MySQL version 2. It is available in Aurora MySQL version 3.
+ `big_tables`
+ `bind_address`
+ `character_sets_dir`
+ `innodb_adaptive_flushing`
+ `innodb_adaptive_flushing_lwm`
+ `innodb_buffer_pool_chunk_size`
+ `innodb_buffer_pool_instances`
+ `innodb_change_buffering`
+ `innodb_checksum_algorithm`
+ `innodb_data_file_path`
+ `innodb_dedicated_server`
+ `innodb_doublewrite`
+ `innodb_flush_log_at_timeout` – This parameter doesn't apply to Aurora MySQL. For more information, see [Configuring how frequently the log buffer is flushed](AuroraMySQL.BestPractices.FeatureRecommendations.md#AuroraMySQL.BestPractices.Flush).
+ `innodb_flush_method`
+ `innodb_flush_neighbors`
+ `innodb_io_capacity`
+ `innodb_io_capacity_max`
+ `innodb_log_buffer_size`
+ `innodb_log_file_size`
+ `innodb_log_files_in_group`
+ `innodb_log_spin_cpu_abs_lwm`
+ `innodb_log_spin_cpu_pct_hwm`
+ `innodb_log_writer_threads`
+ `innodb_max_dirty_pages_pct`
+ `innodb_numa_interleave`
+ `innodb_page_size`
+ `innodb_redo_log_capacity`
+ `innodb_redo_log_encrypt`
+ `innodb_undo_log_encrypt`
+ `innodb_undo_log_truncate`
+ `innodb_undo_logs`
+ `innodb_undo_tablespaces`
+ `innodb_use_native_aio`
+ `innodb_write_io_threads`

# Aurora MySQL global status variables
Global status variables

 Aurora MySQL includes status variables from community MySQL and variables that are unique to Aurora. You can examine these variables to learn about what's happening inside the database engine. For more information about the status variables in community MySQL, see [Server Status Variables](https://dev.mysql.com/doc/refman/8.0/en/server-status-variables.html) in the community MySQL 8.0 documentation. 

You can find the current values for Aurora MySQL global status variables by using a statement such as the following:

```
show global status like '%aurora%';
```

**Note**  
Global status variables are cleared when the DB engine reboots.

The following table describes the global status variables that Aurora MySQL uses.


| Name | Description | 
| --- | --- | 
|  `AuroraDb_commits`  |  The total number of commits since the last restart.  | 
|  `AuroraDb_commit_latency`  |  The aggregate commit latency since the last restart.  | 
|  `AuroraDb_ddl_stmt_duration`  |  The aggregate DDL latency since the last restart.  | 
|  `AuroraDb_select_stmt_duration`  |  The aggregate `SELECT` statement latency since the last restart.  | 
|  `AuroraDb_insert_stmt_duration`  |  The aggregate `INSERT` statement latency since the last restart.  | 
|  `AuroraDb_update_stmt_duration`  |  The aggregate `UPDATE` statement latency since the last restart.  | 
|  `AuroraDb_delete_stmt_duration`  |  The aggregate `DELETE` statement latency since the last restart.  | 
|  `Aurora_binlog_io_cache_allocated`  | The number of bytes allocated to the binlog I/O cache. | 
|  `Aurora_binlog_io_cache_read_requests`  |  The number of read requests made to the binlog I/O cache.  | 
|  `Aurora_binlog_io_cache_reads`  |  The number of read requests that were served from the binlog I/O cache.  | 
|  `Aurora_enhanced_binlog`  |  Indicates whether enhanced binlog is enabled or disabled for this DB instance. For more information, see [Setting up enhanced binlog for Aurora MySQL](AuroraMySQL.Enhanced.binlog.md).  | 
|  `Aurora_external_connection_count`  |  The number of database connections to the DB instance, excluding RDS service connections used for database health checks.  | 
|  `Aurora_fast_insert_cache_hits`  |  A counter that's incremented when the cached cursor is successfully retrieved and verified. For more information on the fast insert cache, see [Amazon Aurora MySQL performance enhancements](Aurora.AuroraMySQL.Overview.md#Aurora.AuroraMySQL.Performance).  | 
|  `Aurora_fast_insert_cache_misses`  |  A counter that's incremented when the cached cursor is no longer valid and Aurora performs a normal index traversal. For more information on the fast insert cache, see [Amazon Aurora MySQL performance enhancements](Aurora.AuroraMySQL.Overview.md#Aurora.AuroraMySQL.Performance).  | 
|  `Aurora_fts_cache_memory_used`  |  The amount of memory in bytes that the InnoDB full-text search system is using. This variable applies to Aurora MySQL version 3.07 and higher.  | 
|  `Aurora_fwd_master_dml_stmt_count`  |  The total number of DML statements forwarded to this writer DB instance. This variable applies to Aurora MySQL version 2.  | 
|  `Aurora_fwd_master_dml_stmt_duration`  |  The total duration of DML statements forwarded to this writer DB instance. This variable applies to Aurora MySQL version 2.  | 
|  `Aurora_fwd_master_errors_rpc_timeout`  |  The number of times a forwarded connection failed to be established on the writer.  | 
|  `Aurora_fwd_master_errors_session_limit`  |  The number of forwarded queries that get rejected due to `session full` on the writer.  | 
|  `Aurora_fwd_master_errors_session_timeout`  |  The number of times a forwarding session is ended due to a timeout on the writer.  | 
|  `Aurora_fwd_master_open_sessions`  |  The number of forwarded sessions on the writer DB instance. This variable applies to Aurora MySQL version 2.  | 
|  `Aurora_fwd_master_select_stmt_count`  |  The total number of `SELECT` statements forwarded to this writer DB instance. This variable applies to Aurora MySQL version 2.  | 
|  `Aurora_fwd_master_select_stmt_duration`  |  The total duration of `SELECT` statements forwarded to this writer DB instance. This variable applies to Aurora MySQL version 2.  | 
|  `Aurora_fwd_writer_dml_stmt_count`  |  The total number of DML statements forwarded to this writer DB instance. This variable applies to Aurora MySQL version 3.  | 
|  `Aurora_fwd_writer_dml_stmt_duration`  |  The total duration of DML statements forwarded to this writer DB instance. This variable applies to Aurora MySQL version 3.  | 
|  `Aurora_fwd_writer_errors_rpc_timeout`  |  The number of times a forwarded connection failed to be established on the writer.  | 
|  `Aurora_fwd_writer_errors_session_limit`  |  The number of forwarded queries that get rejected due to `session full` on the writer.  | 
|  `Aurora_fwd_writer_errors_session_timeout`  |  The number of times a forwarding session is ended due to a timeout on the writer.  | 
|  `Aurora_fwd_writer_open_sessions`  |  The number of forwarded sessions on the writer DB instance. This variable applies to Aurora MySQL version 3.  | 
|  `Aurora_fwd_writer_select_stmt_count`  |  The total number of `SELECT` statements forwarded to this writer DB instance. This variable applies to Aurora MySQL version 3.  | 
|  `Aurora_fwd_writer_select_stmt_duration`  |  The total duration of `SELECT` statements forwarded to this writer DB instance. This variable applies to Aurora MySQL version 3.  | 
|  `Aurora_lockmgr_buffer_pool_memory_used`  |  The amount of buffer pool memory in bytes that the Aurora MySQL lock manager is using.  | 
|  `Aurora_lockmgr_memory_used`  |  The amount of memory in bytes that the Aurora MySQL lock manager is using.  | 
|  `Aurora_ml_actual_request_cnt`  |  The aggregate request count that Aurora MySQL makes to the Aurora machine learning services across all queries run by users of the DB instance. For more information, see [Using Amazon Aurora machine learning with Aurora MySQL](mysql-ml.md).  | 
|  `Aurora_ml_actual_response_cnt`  |  The aggregate response count that Aurora MySQL receives from the Aurora machine learning services across all queries run by users of the DB instance. For more information, see [Using Amazon Aurora machine learning with Aurora MySQL](mysql-ml.md).  | 
|  `Aurora_ml_cache_hit_cnt`  |  The aggregate internal cache hit count that Aurora MySQL receives from the Aurora machine learning services across all queries run by users of the DB instance. For more information, see [Using Amazon Aurora machine learning with Aurora MySQL](mysql-ml.md).  | 
|  `Aurora_ml_logical_request_cnt`  |  The number of logical requests that the DB instance has evaluated to be sent to the Aurora machine learning services since the last status reset. Depending on whether batching has been used, this value can be higher than `Aurora_ml_actual_request_cnt`. For more information, see [Using Amazon Aurora machine learning with Aurora MySQL](mysql-ml.md).  | 
|  `Aurora_ml_logical_response_cnt`  |  The aggregate response count that Aurora MySQL receives from the Aurora machine learning services across all queries run by users of the DB instance. For more information, see [Using Amazon Aurora machine learning with Aurora MySQL](mysql-ml.md).  | 
|  `Aurora_ml_retry_request_cnt`  |  The number of retried requests that the DB instance has sent to the Aurora machine learning services since the last status reset. For more information, see [Using Amazon Aurora machine learning with Aurora MySQL](mysql-ml.md).  | 
|  `Aurora_ml_single_request_cnt`  |  The aggregate count of Aurora machine learning functions that are evaluated by non-batch mode across all queries run by users of the DB instance. For more information, see [Using Amazon Aurora machine learning with Aurora MySQL](mysql-ml.md).  | 
|  `aurora_oom_avoidance_recovery_state`  |  Indicates whether Aurora out-of-memory (OOM) avoidance recovery is in the `ACTIVE` or `INACTIVE` state for this DB instance. This variable applies to Aurora MySQL version 3.06.0 and higher.  | 
|  `aurora_oom_reserved_mem_enter_kb`  |  Represents the threshold for entering the `RESERVED` state in Aurora's OOM handling mechanism. When the available memory on the server falls below this threshold, `aurora_oom_status` changes to `RESERVED`, indicating that the server is approaching a critical level of memory usage. This variable applies to Aurora MySQL version 3.06.0 and higher.  | 
|  `aurora_oom_reserved_mem_exit_kb`  |  Represents the threshold for exiting the `RESERVED` state in Aurora's OOM handling mechanism. When the available memory on the server rises above this threshold, `aurora_oom_status` reverts to `NORMAL`, indicating that the server has returned to a more stable state with sufficient memory resources. This variable applies to Aurora MySQL version 3.06.0 and higher.  | 
|  `aurora_oom_status`  |  Represents the current OOM status of this DB instance. When the value is `NORMAL`, it indicates that there are sufficient memory resources. If the value changes to `RESERVED`, it indicates that the server has low available memory. Actions are taken based on the `aurora_oom_response` parameter configuration. For more information, see [Troubleshooting out-of-memory issues for Aurora MySQL databases](AuroraMySQLOOM.md). This variable applies to Aurora MySQL version 3.06.0 and higher.  | 
|  `Aurora_pq_bytes_returned`  |  The number of bytes for the tuple data structures transmitted to the head node during parallel queries. Divide by 16,384 to compare against `Aurora_pq_pages_pushed_down`.  | 
|  `Aurora_pq_max_concurrent_requests`  |  The maximum number of parallel query sessions that can run concurrently on this Aurora DB instance. This is a fixed number that depends on the AWS DB instance class.  | 
|  `Aurora_pq_pages_pushed_down`  |  The number of data pages (each with a fixed size of 16 KiB) where parallel query avoided a network transmission to the head node.  | 
|  `Aurora_pq_request_attempted`  |  The number of parallel query sessions requested. This value might represent more than one session per query, depending on SQL constructs such as subqueries and joins.  | 
|  `Aurora_pq_request_executed`  |  The number of parallel query sessions run successfully.  | 
|  `Aurora_pq_request_failed`  |  The number of parallel query sessions that returned an error to the client. In some cases, a request for a parallel query might fail, for example due to a problem in the storage layer. In these cases, the query part that failed is retried using the nonparallel query mechanism. If the retried query also fails, an error is returned to the client and this counter is incremented.  | 
|  `Aurora_pq_request_in_progress`  |  The number of parallel query sessions currently in progress. This number applies to the particular Aurora DB instance that you are connected to, not the entire Aurora DB cluster. To see if a DB instance is close to its concurrency limit, compare this value to `Aurora_pq_max_concurrent_requests`.  | 
|  `Aurora_pq_request_not_chosen`  |  The number of times parallel query wasn't chosen to satisfy a query. This value is the sum of several other more granular counters. An `EXPLAIN` statement can increment this counter even though the query isn't actually performed.  | 
|  `Aurora_pq_request_not_chosen_below_min_rows`  |  The number of times parallel query wasn't chosen due to the number of rows in the table. An `EXPLAIN` statement can increment this counter even though the query isn't actually performed.  | 
|  `Aurora_pq_request_not_chosen_column_bit`  |  The number of parallel query requests that use the nonparallel query processing path because of an unsupported data type in the list of projected columns.  | 
|  `Aurora_pq_request_not_chosen_column_geometry`  |  The number of parallel query requests that use the nonparallel query processing path because the table has columns with the `GEOMETRY` data type. For information about Aurora MySQL versions that remove this limitation, see [Upgrading parallel query clusters to Aurora MySQL version 3](aurora-mysql-parallel-query-optimizing.md#aurora-mysql-parallel-query-upgrade-pqv2).  | 
|  `Aurora_pq_request_not_chosen_column_lob`  |  The number of parallel query requests that use the nonparallel query processing path because the table has columns with a `LOB` data type, or `VARCHAR` columns that are stored externally due to the declared length. For information about Aurora MySQL versions that remove this limitation, see [Upgrading parallel query clusters to Aurora MySQL version 3](aurora-mysql-parallel-query-optimizing.md#aurora-mysql-parallel-query-upgrade-pqv2).  | 
|  `Aurora_pq_request_not_chosen_column_virtual`  |  The number of parallel query requests that use the nonparallel query processing path because the table contains a virtual column.  | 
|  `Aurora_pq_request_not_chosen_custom_charset`  |  The number of parallel query requests that use the nonparallel query processing path because the table has columns with a custom character set.  | 
|  `Aurora_pq_request_not_chosen_fast_ddl`  |  The number of parallel query requests that use the nonparallel query processing path because the table is currently being altered by a fast DDL `ALTER` statement.  | 
|  `Aurora_pq_request_not_chosen_few_pages_outside_buffer_pool`  |  The number of times parallel query wasn't chosen, even though less than 95 percent of the table data was in the buffer pool, because there wasn't enough unbuffered table data to make parallel query worthwhile.  | 
|  `Aurora_pq_request_not_chosen_full_text_index`  |  The number of parallel query requests that use the nonparallel query processing path because the table has full-text indexes.  | 
|  `Aurora_pq_request_not_chosen_high_buffer_pool_pct`  |  The number of times parallel query wasn't chosen because a high percentage of the table data (currently, greater than 95 percent) was already in the buffer pool. In these cases, the optimizer determines that reading the data from the buffer pool is more efficient. An `EXPLAIN` statement can increment this counter even though the query isn't actually performed.  | 
|  `Aurora_pq_request_not_chosen_index_hint`  |  The number of parallel query requests that use the nonparallel query processing path because the query includes an index hint.  | 
|  `Aurora_pq_request_not_chosen_innodb_table_format`  |  The number of parallel query requests that use the nonparallel query processing path because the table uses an unsupported InnoDB row format. Aurora parallel query only applies to the `COMPACT`, `REDUNDANT`, and `DYNAMIC` row formats.  | 
|  `Aurora_pq_request_not_chosen_long_trx`  |  The number of parallel query requests that used the nonparallel query processing path, due to the query being started inside a long-running transaction. An `EXPLAIN` statement can increment this counter even though the query isn't actually performed.  | 
|  `Aurora_pq_request_not_chosen_no_where_clause`  |  The number of parallel query requests that use the nonparallel query processing path because the query doesn't include any `WHERE` clause.  | 
|  `Aurora_pq_request_not_chosen_range_scan`  |  The number of parallel query requests that use the nonparallel query processing path because the query uses a range scan on an index.  | 
|  `Aurora_pq_request_not_chosen_row_length_too_long`  |  The number of parallel query requests that use the nonparallel query processing path because the total combined length of all the columns is too long.  | 
|  `Aurora_pq_request_not_chosen_small_table`  |  The number of times parallel query wasn't chosen due to the overall size of the table, as determined by number of rows and average row length. An `EXPLAIN` statement can increment this counter even though the query isn't actually performed.  | 
|  `Aurora_pq_request_not_chosen_temporary_table`  |  The number of parallel query requests that use the nonparallel query processing path because the query refers to temporary tables that use the unsupported `MyISAM` or `memory` table types.  | 
|  `Aurora_pq_request_not_chosen_tx_isolation`  |  The number of parallel query requests that use the nonparallel query processing path because the query uses an unsupported transaction isolation level. On reader DB instances, parallel query only applies to the `REPEATABLE READ` and `READ COMMITTED` isolation levels.  | 
|  `Aurora_pq_request_not_chosen_update_delete_stmts`  |  The number of parallel query requests that use the nonparallel query processing path because the query is part of an `UPDATE` or `DELETE` statement.  | 
|  `Aurora_pq_request_not_chosen_unsupported_access`  |  The number of parallel query requests that use the nonparallel query processing path because the `WHERE` clause doesn't meet the criteria for parallel query. This result can occur if the query doesn't require a data-intensive scan, or if the query is a `DELETE` or `UPDATE` statement.  | 
|  `Aurora_pq_request_not_chosen_unsupported_storage_type`  |  The number of parallel query requests that use the nonparallel query processing path because the Aurora MySQL DB cluster isn't using a supported Aurora cluster storage configuration. For more information, see [Limitations](aurora-mysql-parallel-query.md#aurora-mysql-parallel-query-limitations). This parameter applies to Aurora MySQL version 3.04 and higher.  | 
|  `Aurora_pq_request_throttled`  |  The number of times parallel query wasn't chosen due to the maximum number of concurrent parallel queries already running on a particular Aurora DB instance.  | 
|  `Aurora_repl_bytes_received`  |  Number of bytes replicated to an Aurora MySQL reader database instance since the last restart. For more information, see [Replication with Amazon Aurora MySQL](AuroraMySQL.Replication.md).  | 
|  `Aurora_reserved_mem_exceeded_incidents`  |  The number of times since the last restart that the engine has exceeded reserved memory limits. If `aurora_oom_response` is configured, this threshold defines when out-of-memory (OOM) avoidance activities are triggered. For more information on the Aurora MySQL OOM response, see [Troubleshooting out-of-memory issues for Aurora MySQL databases](AuroraMySQLOOM.md).  | 
|  `aurora_temptable_max_ram_allocation`  |  The maximum amount of memory, in bytes, used at any point by internal temporary tables since the last restart.  | 
|  `aurora_temptable_ram_allocation`  |  The current amount of memory, in bytes, used by internal temporary tables.  | 
|  `Aurora_in_memory_relaylog_status`  |  The current status of the in-memory relay log feature. The value can be `ENABLED` or `DISABLED`.  | 
|  `Aurora_in_memory_relaylog_disabled_reason`  |  The reason for the current status of the in-memory relay log feature. If the feature is disabled, this variable displays a message explaining why.  | 
|  `Aurora_in_memory_relaylog_fallback_count`  |  The total number of times that the in-memory relay log feature fell back to persistent (legacy) relay log mode. A fallback can be caused by a single event larger than the cache size (currently 128 MB), or by a transaction retry exceeding the replica transaction retry limit `replica_transaction_retries`.  | 
|  `Aurora_in_memory_relaylog_recovery_count`  |  The total number of in-memory relay log recoveries performed automatically. This count includes the total number of fallbacks and the number of automatic switches back to in-memory relay log mode after temporary fallbacks.  | 
|  `Aurora_thread_pool_thread_count`  |  The current number of threads in the Aurora thread pool. For more information on the thread pool in Aurora MySQL, see [Thread pool](AuroraMySQL.Managing.Tuning.concepts.md#AuroraMySQL.Managing.Tuning.concepts.processes.pool).  | 
|  `Aurora_tmz_version`  |  Denotes the current version of the time zone information used by the DB cluster. The values follow the Internet Assigned Numbers Authority (IANA) format: `YYYYsuffix`, for example `2022a` and `2023c`. This parameter applies to Aurora MySQL version 2.12 and higher, and version 3.04 and higher.  | 
|  `Aurora_zdr_oom_threshold`  |  Represents the memory threshold, in kilobytes (KB), for an Aurora DB instance to initiate a zero downtime restart (ZDR) to recover from potential memory-related issues.  | 
|  `server_aurora_das_running`  |  Indicates whether Database Activity Streams (DAS) are enabled or disabled on this DB instance. For more information, see [Monitoring Amazon Aurora with Database Activity Streams](DBActivityStreams.md).  | 
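
You can also read specific counters from this table programmatically. The following is a minimal sketch that retrieves the commit counters; it assumes the Performance Schema is enabled (the default in MySQL 5.7 and higher, and so in Aurora MySQL versions 2 and 3).

```
-- Read the commit count and aggregate commit latency in one query.
SELECT VARIABLE_NAME, VARIABLE_VALUE
FROM performance_schema.global_status
WHERE VARIABLE_NAME IN ('AuroraDb_commits', 'AuroraDb_commit_latency');
```

Dividing the aggregate latency by the commit count gives a rough average per-commit latency since the last restart.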

## MySQL status variables that don't apply to Aurora MySQL
<a name="status_vars"></a>

 Because of architectural differences between Aurora MySQL and MySQL, some MySQL status variables don't apply to Aurora MySQL.

 The following MySQL status variables don't apply to Aurora MySQL. This list isn't exhaustive.
+  `innodb_buffer_pool_bytes_dirty` 
+  `innodb_buffer_pool_pages_dirty` 
+  `innodb_buffer_pool_pages_flushed` 

Aurora MySQL version 3 removes the following status variables that were in Aurora MySQL version 2:
+  `AuroraDb_lockmgr_bitmaps0_in_use` 
+  `AuroraDb_lockmgr_bitmaps1_in_use` 
+  `AuroraDb_lockmgr_bitmaps_mem_used` 
+  `AuroraDb_thread_deadlocks` 
+  `available_alter_table_log_entries` 
+  `Aurora_lockmgr_memory_used` 
+  `Aurora_missing_history_on_replica_incidents` 
+  `Aurora_new_lock_manager_lock_release_cnt` 
+  `Aurora_new_lock_manager_lock_release_total_duration_micro` 
+  `Aurora_new_lock_manager_lock_timeout_cnt` 
+  `Aurora_total_op_memory` 
+  `Aurora_total_op_temp_space` 
+  `Aurora_used_alter_table_log_entries` 
+  `Aurora_using_new_lock_manager` 
+  `Aurora_volume_bytes_allocated` 
+  `Aurora_volume_bytes_left_extent` 
+  `Aurora_volume_bytes_left_total` 
+  `Com_alter_db_upgrade` 
+  `Compression` 
+  `External_threads_connected` 
+  `Innodb_available_undo_logs` 
+  `Last_query_cost` 
+  `Last_query_partial_plans` 
+  `Slave_heartbeat_period` 
+  `Slave_last_heartbeat` 
+  `Slave_received_heartbeats` 
+  `Slave_retried_transactions` 
+  `Slave_running` 
+  `Time_since_zero_connections` 

These MySQL status variables are available in Aurora MySQL version 2, but they aren't available in Aurora MySQL version 3:
+  `Innodb_redo_log_enabled` 
+  `Innodb_undo_tablespaces_total` 
+  `Innodb_undo_tablespaces_implicit` 
+  `Innodb_undo_tablespaces_explicit` 
+  `Innodb_undo_tablespaces_active` 

# Aurora MySQL wait events
Wait events

The following are some common wait events for Aurora MySQL.

**Note**  
For information on tuning Aurora MySQL performance using wait events, see [Tuning Aurora MySQL with wait events](AuroraMySQL.Managing.Tuning.wait-events.md).  
For information about the naming conventions used in MySQL wait events, see [ Performance Schema instrument naming conventions](https://dev.mysql.com/doc/refman/8.0/en/performance-schema-instrument-naming.html) in the MySQL documentation.
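
To see which wait events currently dominate on a DB instance, you can query the Performance Schema summary tables directly. This is a sketch; it assumes that wait instrumentation is enabled for the events of interest.

```
-- Top 10 wait events by total time waited since the last reset.
-- SUM_TIMER_WAIT is reported in picoseconds.
SELECT EVENT_NAME, COUNT_STAR, SUM_TIMER_WAIT
FROM performance_schema.events_waits_summary_global_by_event_name
WHERE COUNT_STAR > 0
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10;
```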

**cpu**  
The number of active connections that are ready to run is consistently higher than the number of vCPUs. For more information, see [cpu](ams-waits.cpu.md).

**io/aurora\_redo\_log\_flush**  
A session is persisting data to Aurora storage. Typically, this wait event is for a write I/O operation in Aurora MySQL. For more information, see [io/aurora\_redo\_log\_flush](ams-waits.io-auredologflush.md).

**io/aurora\_respond\_to\_client**  
Query processing has completed and results are being returned to the application client for the following Aurora MySQL versions: 2.10.2 and higher 2.10 versions, 2.09.3 and higher 2.09 versions, and 2.07.7 and higher 2.07 versions. Compare the network bandwidth of the DB instance class with the size of the result set being returned. Also, check client-side response times. If the client is unresponsive and can't process the TCP packets, packet drops and TCP retransmissions can occur. This situation negatively affects network bandwidth. In versions lower than 2.10.2, 2.09.3, and 2.07.7, the wait event erroneously includes idle time. To learn how to tune your database when this wait is prominent, see [io/aurora\_respond\_to\_client](ams-waits.respond-to-client.md).

**io/file/csv/data**  
Threads are writing to tables in comma-separated value (CSV) format. Check your CSV table usage. A typical cause of this event is setting the `log_output` parameter to `TABLE`.

**io/file/sql/binlog**  
A thread is waiting on a binary log (binlog) file that is being written to disk.

**io/redo\_log\_flush**  
A session is persisting data to Aurora storage. Typically, this wait event is for a write I/O operation in Aurora MySQL. For more information, see [io/redo\_log\_flush](ams-waits.io-redologflush.md).

**io/socket/sql/client\_connection**  
The `mysqld` program is busy creating threads to handle incoming new client connections. For more information, see [io/socket/sql/client\_connection](ams-waits.client-connection.md).

**io/table/sql/handler**  
The engine is waiting for access to a table. This event occurs regardless of whether the data is cached in the buffer pool or accessed on disk. For more information, see [io/table/sql/handler](ams-waits.waitio.md).

**lock/table/sql/handler**  
This wait event is a table lock wait event handler. For more information about atom and molecule events in the Performance Schema, see [ Performance Schema atom and molecule events](https://dev.mysql.com/doc/refman/8.0/en/performance-schema-atom-molecule-events.html) in the MySQL documentation.

**synch/cond/innodb/row\_lock\_wait**  
Multiple data manipulation language (DML) statements are accessing the same database rows at the same time. For more information, see [synch/cond/innodb/row\_lock\_wait](ams-waits.row-lock-wait.md).

**synch/cond/innodb/row\_lock\_wait\_cond**  
Multiple DML statements are accessing the same database rows at the same time. For more information, see [synch/cond/innodb/row\_lock\_wait\_cond](ams-waits.row-lock-wait-cond.md).

**synch/cond/sql/MDL\_context::COND\_wait\_status**  
Threads are waiting on a table metadata lock. The engine uses this type of lock to manage concurrent access to a database schema and to ensure data consistency. For more information, see [Optimizing locking operations](https://dev.mysql.com/doc/refman/8.0/en/locking-issues.html) in the MySQL documentation. To learn how to tune your database when this event is prominent, see [synch/cond/sql/MDL\_context::COND\_wait\_status](ams-waits.cond-wait-status.md).

**synch/cond/sql/MYSQL\_BIN\_LOG::COND\_done**  
You have turned on binary logging. There might be a high commit throughput, a large number of transactions committing, or replicas reading binlogs. Consider using multirow statements or bundling statements into one transaction. In Aurora, use global databases instead of binary log replication, or use the `aurora_binlog_*` parameters.
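
As an illustration of the bundling advice above, the following sketch (using a hypothetical `orders` table) replaces three autocommitted statements, each of which forces its own binlog commit, with a single transaction and a single multirow alternative.

```
-- Option 1: bundle the statements into one transaction (one binlog commit).
START TRANSACTION;
INSERT INTO orders (id, total) VALUES (1, 10.00);
INSERT INTO orders (id, total) VALUES (2, 15.50);
INSERT INTO orders (id, total) VALUES (3, 7.25);
COMMIT;

-- Option 2: use a single multirow statement.
INSERT INTO orders (id, total) VALUES (1, 10.00), (2, 15.50), (3, 7.25);
```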

**synch/mutex/innodb/aurora\_lock\_thread\_slot\_futex**  
Multiple DML statements are accessing the same database rows at the same time. For more information, see [synch/mutex/innodb/aurora\_lock\_thread\_slot\_futex](ams-waits.waitsynch.md).

**synch/mutex/innodb/buf\_pool\_mutex**  
The buffer pool isn't large enough to hold the working data set. Or the workload accesses pages from a specific table, which leads to contention in the buffer pool. For more information, see [synch/mutex/innodb/buf\_pool\_mutex](ams-waits.bufpoolmutex.md).

**synch/mutex/innodb/fil\_system\_mutex**  
The process is waiting for access to the tablespace memory cache. For more information, see [synch/mutex/innodb/fil\_system\_mutex](ams-waits.innodb-fil-system-mutex.md).

**synch/mutex/innodb/trx\_sys\_mutex**  
Operations are checking, updating, deleting, or adding transaction IDs in InnoDB in a consistent or controlled manner. These operations require a `trx_sys` mutex call, which is tracked by Performance Schema instrumentation. Operations include management of the transaction system when the database starts or shuts down, rollbacks, undo cleanups, row read access, and buffer pool loads. High database load with a large number of transactions results in the frequent appearance of this wait event. For more information, see [synch/mutex/innodb/trx\_sys\_mutex](ams-waits.trxsysmutex.md).

**synch/mutex/mysys/KEY\_CACHE::cache\_lock**  <a name="key-cache.cache-lock"></a>
The `keycache->cache_lock` mutex controls access to the key cache for MyISAM tables. While Aurora MySQL doesn't allow usage of MyISAM tables to store persistent data, they are used to store internal temporary tables. Consider checking the `created_tmp_tables` or `created_tmp_disk_tables` status counters, because in certain situations, temporary tables are written to disk when they no longer fit in memory.
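
To follow up on the counters mentioned above, you can check how often internal temporary tables are created and how often they spill to disk:

```
-- Counters since the last restart.
SHOW GLOBAL STATUS LIKE 'Created_tmp%';
```

A high ratio of `Created_tmp_disk_tables` to `Created_tmp_tables` suggests that in-memory temporary tables are frequently overflowing to disk.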

**synch/mutex/sql/FILE\_AS\_TABLE::LOCK\_offsets**  
The engine acquires this mutex when opening or creating a table metadata file. When this wait event occurs with excessive frequency, the number of tables being created or opened has spiked.

**synch/mutex/sql/FILE\_AS\_TABLE::LOCK\_shim\_lists**  
The engine acquires this mutex while performing operations such as `reset_size`, `detach_contents`, or `add_contents` on the internal structure that keeps track of opened tables. The mutex synchronizes access to the list contents. When this wait event occurs with high frequency, it indicates a sudden change in the set of tables that were previously accessed. The engine needs to access new tables or let go of the context related to previously accessed tables.

**synch/mutex/sql/LOCK\_open**  
The number of tables that your sessions are opening exceeds the size of the table definition cache or the table open cache. Increase the size of these caches. For more information, see [How MySQL opens and closes tables](https://dev.mysql.com/doc/refman/8.0/en/table-cache.html).

**synch/mutex/sql/LOCK\_table\_cache**  
The number of tables that your sessions are opening exceeds the size of the table definition cache or the table open cache. Increase the size of these caches. For more information, see [How MySQL opens and closes tables](https://dev.mysql.com/doc/refman/8.0/en/table-cache.html).
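
Before increasing these caches, you can compare the configured sizes with the observed table-open activity. This is a sketch; on Aurora, you typically change these values through the DB cluster or DB instance parameter group rather than with `SET GLOBAL`.

```
-- Current cache sizes.
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';
SHOW GLOBAL VARIABLES LIKE 'table_definition_cache';

-- Observed activity: a steadily growing Opened_tables count while
-- Open_tables sits at the cache limit suggests the cache is too small.
SHOW GLOBAL STATUS LIKE 'Open%tables';
```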

**synch/mutex/sql/LOG**  
In this wait event, there are threads waiting on a log lock. For example, a thread might wait for a lock to write to the slow query log file.

**synch/mutex/sql/MYSQL\_BIN\_LOG::LOCK\_commit**  
In this wait event, there is a thread that is waiting to acquire a lock with the intention of committing to the binary log. Binary logging contention can occur on databases with a very high change rate. Depending on your version of MySQL, there are certain locks being used to protect the consistency and durability of the binary log. In RDS for MySQL, binary logs are used for replication and the automated backup process. In Aurora MySQL, binary logs are not needed for native replication or backups. They are disabled by default but can be enabled and used for external replication or change data capture. For more information, see [The binary log](https://dev.mysql.com/doc/refman/8.0/en/binary-log.html) in the MySQL documentation.

**sync/mutex/sql/MYSQL\_BIN\_LOG::LOCK\_dump\_thread\_metrics\_collection**  
If binary logging is turned on, the engine acquires this mutex when it prints active dump threads metrics to the engine error log and to the internal operations map.

**sync/mutex/sql/MYSQL\_BIN\_LOG::LOCK\_inactive\_binlogs\_map**  
If binary logging is turned on, the engine acquires this mutex when it adds to, deletes from, or searches through the list of binlog files behind the latest one.

**sync/mutex/sql/MYSQL\_BIN\_LOG::LOCK\_io\_cache**  
If binary logging is turned on, the engine acquires this mutex during Aurora binlog IO cache operations: allocate, resize, free, write, read, purge, and access cache info. If this event occurs frequently, the engine is accessing the cache where binlog events are stored. To reduce wait times, reduce commits. Try grouping multiple statements into a single transaction.

**synch/mutex/sql/MYSQL\_BIN\_LOG::LOCK\_log**  
You have turned on binary logging. There might be high commit throughput, many transactions committing, or replicas reading binlogs. Consider using multirow statements or bundling statements into one transaction. In Aurora, use global databases instead of binary log replication or use the `aurora_binlog_*` parameters.
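For example, the following approaches reduce the number of commits that contend on the binary log lock. The table and column names here are illustrative only.

```
-- Many autocommitted single-row statements each take the binlog lock at commit:
INSERT INTO t1 (c1, c2) VALUES (1, 'a');
INSERT INTO t1 (c1, c2) VALUES (2, 'b');

-- Alternative 1: a single multirow statement
INSERT INTO t1 (c1, c2) VALUES (1, 'a'), (2, 'b');

-- Alternative 2: bundle several statements into one transaction with a single commit
START TRANSACTION;
INSERT INTO t1 (c1, c2) VALUES (3, 'c');
INSERT INTO t1 (c1, c2) VALUES (4, 'd');
COMMIT;
```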

**synch/mutex/sql/SERVER\_THREAD::LOCK\_sync**  
The mutex `SERVER_THREAD::LOCK_sync` is acquired during the scheduling, processing, or launching of threads for file writes. The excessive occurrence of this wait event indicates increased write activity in the database.

**synch/mutex/sql/TABLESPACES:lock**  
The engine acquires the `TABLESPACES:lock` mutex during the following tablespace operations: create, delete, truncate, and extend. The excessive occurrence of this wait event indicates a high frequency of tablespace operations. An example is loading a large amount of data into the database.

**synch/rwlock/innodb/dict**  
In this wait event, there are threads waiting on an rwlock held on the InnoDB data dictionary.

**synch/rwlock/innodb/dict\_operation\_lock**  
In this wait event, there are threads holding locks on InnoDB data dictionary operations.

**synch/rwlock/innodb/dict sys RW lock**  
A high number of data definition language (DDL) statements are triggered at the same time. Reduce the application's dependency on DDLs during regular application activity.

**synch/rwlock/innodb/index\_tree\_rw\_lock**  
A large number of similar data manipulation language (DML) statements are accessing the same database object at the same time. Try using multirow statements. Also, spread the workload over different database objects. For example, implement partitioning.

**synch/sxlock/innodb/dict\_operation\_lock**  
A high number of data definition language (DDL) statements are triggered at the same time. Reduce the application's dependency on DDLs during regular application activity.

**synch/sxlock/innodb/dict\_sys\_lock**  
A high number of data definition language (DDL) statements are triggered at the same time. Reduce the application's dependency on DDLs during regular application activity.

**synch/sxlock/innodb/hash\_table\_locks**  
The session couldn't find pages in the buffer pool. The engine either needs to read a file or modify the least-recently used (LRU) list for the buffer pool. Consider increasing the buffer cache size and improving access paths for the relevant queries.

**synch/sxlock/innodb/index\_tree\_rw\_lock**  
Many similar data manipulation language (DML) statements are accessing the same database object at the same time. Try using multirow statements. Also, spread the workload over different database objects. For example, implement partitioning.

**synch/mutex/innodb/temp\_pool\_manager\_mutex**  
This wait event occurs when a session is waiting to acquire a mutex for managing the pool of session temporary tablespaces. 

For more information on troubleshooting synch wait events, see [Why is my MySQL DB instance showing a high number of active sessions waiting on SYNCH wait events in Performance Insights?](https://aws.amazon.com/premiumsupport/knowledge-center/aurora-mysql-synch-wait-events/).

# Aurora MySQL thread states
Thread states

The following are some common thread states for Aurora MySQL.

**checking permissions**  
The thread is checking whether the server has the required privileges to run the statement.

**checking query cache for query**  
The server is checking whether the current query is present in the query cache.

**cleaned up**  
This is the final state of a connection whose work is complete but which hasn't been closed by the client. The best solution is to explicitly close the connection in code. Or you can set a lower value for `wait_timeout` in your parameter group.

**closing tables**  
The thread is flushing the changed table data to disk and closing the used tables. If this isn't a fast operation, verify the network bandwidth consumption metrics against the instance class network bandwidth. Also, check that the values of the `table_open_cache` and `table_definition_cache` parameters allow for enough tables to be simultaneously open so that the engine doesn't need to open and close tables frequently. These parameters influence the memory consumption on the instance.
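To check whether these caches are sized appropriately, you can compare the `Opened_tables` counter with the cache settings. A counter that keeps growing under a steady workload suggests that the caches are too small.

```
SHOW GLOBAL STATUS LIKE 'Opened_tables';
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';
SHOW GLOBAL VARIABLES LIKE 'table_definition_cache';
```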

**converting HEAP to MyISAM**  
The query is converting a temporary table from in-memory to on-disk. This conversion is necessary because the temporary tables created by MySQL in the intermediate steps of query processing grew too big for memory. Check the values of `tmp_table_size` and `max_heap_table_size`. In later versions, this thread state name is `converting HEAP to ondisk`.
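You can check the current in-memory limits and how often temporary tables spill to disk with the following statements. A high ratio of `Created_tmp_disk_tables` to `Created_tmp_tables` suggests that the limits are too low for the workload.

```
SHOW GLOBAL VARIABLES WHERE Variable_name IN ('tmp_table_size', 'max_heap_table_size');
SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';
```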

**converting HEAP to ondisk**  
The thread is converting an internal temporary table from an in-memory table to an on-disk table.

**copy to tmp table**  
The thread is processing an `ALTER TABLE` statement. This state occurs after the table with the new structure has been created but before rows are copied into it. For a thread in this state, you can use the Performance Schema to obtain information about the progress of the copy operation.

**creating sort index**  
Aurora MySQL is performing a sort because it can't use an existing index to satisfy the `ORDER BY` or `GROUP BY` clause of a query. For more information, see [creating sort index](ams-states.sort-index.md).

**creating table**  
The thread is creating a permanent or temporary table.

**delayed commit ok done**  
An asynchronous commit in Aurora MySQL has received an acknowledgement and is complete.

**delayed commit ok initiated**  
The Aurora MySQL thread has started the async commit process but is waiting for acknowledgement. This is usually the genuine commit time of a transaction.

**delayed send ok done**  
An Aurora MySQL worker thread that is tied to a connection can be freed while a response is sent to the client. The thread can begin other work. The state `delayed send ok` means that the asynchronous acknowledgement to the client completed.

**delayed send ok initiated**  
An Aurora MySQL worker thread has sent a response asynchronously to a client and is now free to do work for other connections. The transaction has started an async commit process that hasn't yet been acknowledged.

**executing**  
The thread has begun running a statement.

**freeing items**  
The thread has run a command. Some freeing of items done during this state involves the query cache. This state is usually followed by cleaning up.

**init**  
This state occurs before the initialization of `ALTER TABLE`, `DELETE`, `INSERT`, `SELECT`, or `UPDATE` statements. Actions in this state include flushing the binary log or InnoDB log, and some cleanup of the query cache.

**Source has sent all binlog to replica; waiting for more updates**  
The primary node has finished its part of the replication. The thread is waiting for more queries to run so that it can write to the binary log (binlog).

**opening tables**  
The thread is trying to open a table. This operation is fast unless an `ALTER TABLE` or a `LOCK TABLE` statement needs to finish, or it exceeds the value of `table_open_cache`.

**optimizing**  
The server is performing initial optimizations for a query.

**preparing**  
This state occurs during query optimization.

**query end**  
This state occurs after processing a query but before the freeing items state.

**removing duplicates**  
Aurora MySQL couldn't optimize a `DISTINCT` operation in the early stage of a query. Aurora MySQL must remove all duplicated rows before sending the result to the client.

**searching rows for update**  
The thread is finding all matching rows before updating them. This stage is necessary if the `UPDATE` is changing the index that the engine uses to find the rows.

**sending binlog event to slave**  
The thread read an event from the binary log and is sending it to the replica.

**sending cached result to client**  
The server is taking the result of a query from the query cache and sending it to the client.

**sending data**  
The thread is reading and processing rows for a `SELECT` statement but hasn't yet started sending data to the client. The process is identifying which pages contain the results necessary to satisfy the query. For more information, see [sending data](ams-states.sending-data.md).

**sending to client**  
The server is writing a packet to the client. In earlier MySQL versions, this thread state was labeled `writing to net`.

**starting**  
This is the first stage at the beginning of statement execution.

**statistics**  
The server is calculating statistics to develop a query execution plan. If a thread is in this state for a long time, the server is probably disk-bound while performing other work.

**storing result in query cache**  
The server is storing the result of a query in the query cache.

**system lock**  
The thread has called `mysql_lock_tables`, but the thread state hasn't been updated since the call. This general state occurs for many reasons.

**update**  
The thread is preparing to start updating the table.

**updating**  
The thread is searching for rows and is updating them.

**user lock**  
The thread issued a `GET_LOCK` call. The thread either requested an advisory lock and is waiting for it, or is planning to request it.
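For example, the following sequence acquires and releases an advisory lock. The lock name `'app_lock'` is illustrative; `GET_LOCK` returns 1 on success and 0 if the timeout (here, 10 seconds) expires.

```
SELECT GET_LOCK('app_lock', 10);

-- ... perform the work protected by the advisory lock ...

SELECT RELEASE_LOCK('app_lock');
```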

**waiting for more updates**  
The primary node has finished its part of the replication. The thread is waiting for more queries to run so that it can write to the binary log (binlog).

**waiting for schema metadata lock**  
This is a wait for a metadata lock.

**waiting for stored function metadata lock**  
This is a wait for a metadata lock.

**waiting for stored procedure metadata lock**  
This is a wait for a metadata lock.

**waiting for table flush**  
The thread is executing `FLUSH TABLES` and is waiting for all threads to close their tables. Or the thread received notification that the underlying structure for a table changed, so it must reopen the table to get the new structure. To reopen the table, the thread must wait until all other threads have closed the table. This notification takes place if another thread has used one of the following statements on the table: `FLUSH TABLES`, `ALTER TABLE`, `RENAME TABLE`, `REPAIR TABLE`, `ANALYZE TABLE`, or `OPTIMIZE TABLE`.
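To identify long-running statements that might be preventing the flush from completing, you can inspect the process list. For example, the 60-second threshold here is an arbitrary starting point.

```
SELECT id, user, time, state, info
FROM information_schema.processlist
WHERE command <> 'Sleep' AND time > 60
ORDER BY time DESC;
```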

**waiting for table level lock**  
One session is holding a lock on a table while another session tries to acquire the same lock on the same table.

**waiting for table metadata lock**  
Aurora MySQL uses metadata locking to manage concurrent access to database objects and to ensure data consistency. In this wait event, one session is holding a metadata lock on a table while another session tries to acquire the same lock on the same table. When the Performance Schema is enabled, this thread state is reported as the wait event `synch/cond/sql/MDL_context::COND_wait_status`.
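When the Performance Schema is enabled and the `wait/lock/metadata/sql/mdl` instrument is turned on, you can see which sessions hold or are waiting for metadata locks, for example:

```
SELECT object_type, object_schema, object_name, lock_type, lock_status, owner_thread_id
FROM performance_schema.metadata_locks
WHERE object_type = 'TABLE';
```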

**writing to net**  
The server is writing a packet to the network. In later MySQL versions, this thread state is labeled `Sending to client`.

# Aurora MySQL isolation levels
Isolation levels

Learn how DB instances in an Aurora MySQL cluster implement the database property of isolation. This topic explains how the Aurora MySQL default behavior balances between strict consistency and high performance. You can use this information to help you decide when to change the default settings based on the traits of your workload. 

## Available isolation levels for writer instances


You can use the isolation levels `REPEATABLE READ`, `READ COMMITTED`, `READ UNCOMMITTED`, and `SERIALIZABLE` on the primary instance of an Aurora MySQL DB cluster. These isolation levels work the same in Aurora MySQL as in RDS for MySQL.

## REPEATABLE READ isolation level for reader instances


By default, Aurora MySQL DB instances that are configured as read-only Aurora Replicas always use the `REPEATABLE READ` isolation level. These DB instances ignore any `SET TRANSACTION ISOLATION LEVEL` statements and continue using the `REPEATABLE READ` isolation level.

## READ COMMITTED isolation level for reader instances


If your application includes a write-intensive workload on the primary instance and long-running queries on the Aurora Replicas, you might experience substantial purge lag. *Purge lag* happens when internal garbage collection is blocked by long-running queries. The symptom that you see is a high value for `history list length` in the output from the `SHOW ENGINE INNODB STATUS` command. You can monitor this value using the `RollbackSegmentHistoryListLength` metric in CloudWatch. Substantial purge lag can reduce the effectiveness of secondary indexes, decrease overall query performance, and lead to wasted storage space.
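On the DB instance itself, you can observe the history list length directly. For example:

```
SHOW ENGINE INNODB STATUS\G
-- Look for "History list length" in the TRANSACTIONS section.

-- Or query the same counter from the InnoDB metrics table:
SELECT name, count
FROM information_schema.innodb_metrics
WHERE name = 'trx_rseg_history_len';
```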

If you experience such issues, you can set an Aurora MySQL session-level configuration setting, `aurora_read_replica_read_committed`, to use the `READ COMMITTED` isolation level on Aurora Replicas. When you apply this setting, you can help reduce slowdowns and wasted space that can result from performing long-running queries at the same time as transactions that modify your tables.

We recommend making sure that you understand the specific Aurora MySQL behavior of the `READ COMMITTED` isolation level before using this setting. The Aurora Replica `READ COMMITTED` behavior complies with the ANSI SQL standard. However, the isolation is less strict than typical MySQL `READ COMMITTED` behavior that you might be familiar with. Therefore, you might see different query results under `READ COMMITTED` on an Aurora MySQL read replica than you might see for the same query under `READ COMMITTED` on the Aurora MySQL primary instance or on RDS for MySQL. You might consider using the `aurora_read_replica_read_committed` setting for such cases as a comprehensive report that scans a very large database. In contrast, you might avoid it for short queries with small result sets, where precision and repeatability are important.

The `READ COMMITTED` isolation level isn't available for sessions within a secondary cluster in an Aurora global database that use the write forwarding feature. For information about write forwarding, see [Using write forwarding in an Amazon Aurora global database](aurora-global-database-write-forwarding.md).

### Using READ COMMITTED for readers


To use the `READ COMMITTED` isolation level for Aurora Replicas, set the `aurora_read_replica_read_committed` configuration setting to `ON`. Use this setting at the session level while connected to a specific Aurora Replica. To do so, run the following SQL commands.

```
set session aurora_read_replica_read_committed = ON;
set session transaction isolation level read committed;
```

You might use this configuration setting temporarily to perform interactive, one-time queries. You might also want to run a reporting or data analysis application that benefits from the `READ COMMITTED` isolation level, while leaving the default setting unchanged for other applications.

When the `aurora_read_replica_read_committed` setting is turned on, use the `SET TRANSACTION ISOLATION LEVEL` command to specify the isolation level for the appropriate transactions.

```
set transaction isolation level read committed;
```
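You can confirm the isolation level that the current session is using by querying the corresponding system variable. Depending on your engine version, the variable is named `transaction_isolation` (MySQL 5.7.20 and higher) or `tx_isolation`.

```
SELECT @@transaction_isolation;
```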

### Differences in READ COMMITTED behavior on Aurora replicas


The `aurora_read_replica_read_committed` setting makes the `READ COMMITTED` isolation level available for an Aurora Replica, with consistency behavior that is optimized for long-running transactions. The `READ COMMITTED` isolation level on Aurora Replicas has less strict isolation than on Aurora primary instances. For that reason, enable this setting only on Aurora Replicas where you know that your queries can accept the possibility of certain types of inconsistent results.

Your queries can experience certain kinds of read anomalies when the `aurora_read_replica_read_committed` setting is turned on. Two kinds of anomalies are especially important to understand and handle in your application code. A *non-repeatable read* occurs when another transaction commits while your query is running. A long-running query can see different data at the start of the query than it sees at the end. A *phantom read* occurs when other transactions cause existing rows to be reorganized while your query is running, and one or more rows are read twice by your query.

Your queries might experience inconsistent row counts as a result of phantom reads. Your queries might also return incomplete or inconsistent results due to non-repeatable reads. For example, suppose that a join operation refers to tables that are concurrently modified by SQL statements such as `INSERT` or `DELETE`. In this case, the join query might read a row from one table but not the corresponding row from another table.

The ANSI SQL standard allows both of these behaviors for the `READ COMMITTED` isolation level. However, those behaviors are different than the typical MySQL implementation of `READ COMMITTED`. Thus, before enabling the `aurora_read_replica_read_committed` setting, check any existing SQL code to verify if it operates as expected under the looser consistency model.

Row counts and other results might not be strongly consistent under the `READ COMMITTED` isolation level while this setting is enabled. Thus, you typically enable the setting only while running analytic queries that aggregate large amounts of data and don't require absolute precision. If you don't have these kinds of long-running queries alongside a write-intensive workload, you probably don't need the `aurora_read_replica_read_committed` setting. Without the combination of long-running queries and a write-intensive workload, you're unlikely to encounter issues with the length of the history list.

**Example Queries showing isolation behavior for READ COMMITTED on Aurora Replicas**  
The following example shows how `READ COMMITTED` queries on an Aurora Replica might return non-repeatable results if transactions modify the associated tables at the same time. The table `BIG_TABLE` contains 1 million rows before any queries start. Other data manipulation language (DML) statements add, remove, or change rows while the queries are running.  
The queries on the Aurora primary instance under the `READ COMMITTED` isolation level produce predictable results. However, the overhead of keeping the consistent read view for the lifetime of every long-running query can lead to expensive garbage collection later.  
The queries on the Aurora Replica under the `READ COMMITTED` isolation level are optimized to minimize this garbage collection overhead. The tradeoff is that the results might vary depending on whether the queries retrieve rows that are added, removed, or reorganized by transactions that are committed while the query is running. The queries are allowed to consider these rows, but aren't required to. For demonstration purposes, the queries check only the number of rows in the table by using the `COUNT(*)` function.  


| Time | DML statement on Aurora primary instance | Query on Aurora primary instance with READ COMMITTED | Query on Aurora Replica with READ COMMITTED | 
| --- | --- | --- | --- | 
|  T1  |  INSERT INTO big\_table SELECT \* FROM other\_table LIMIT 1000000; COMMIT;   |  |  | 
|  T2  |  |  Q1: SELECT COUNT(\*) FROM big\_table;  |  Q2: SELECT COUNT(\*) FROM big\_table;  | 
|  T3  |  INSERT INTO big\_table (c1, c2) VALUES (1, 'one more row'); COMMIT;   |  |  | 
|  T4  |  |  If Q1 finishes now, result is 1,000,000.  |  If Q2 finishes now, result is 1,000,000 or 1,000,001.  | 
|  T5  |  DELETE FROM big\_table LIMIT 2; COMMIT;   |  |  | 
|  T6  |  |  If Q1 finishes now, result is 1,000,000.  |  If Q2 finishes now, result is 1,000,000 or 1,000,001 or 999,999 or 999,998.  | 
|  T7  |  UPDATE big\_table SET c2 = CONCAT(c2,c2,c2); COMMIT;   |  |  | 
|  T8  |  |  If Q1 finishes now, result is 1,000,000.  |  If Q2 finishes now, result is 1,000,000 or 1,000,001 or 999,999, or possibly some higher number.  | 
|  T9  |  |  Q3: SELECT COUNT(\*) FROM big\_table;  |  Q4: SELECT COUNT(\*) FROM big\_table;  | 
|  T10  |  |  If Q3 finishes now, result is 999,999.  |  If Q4 finishes now, result is 999,999.  | 
|  T11  |  |  Q5: SELECT COUNT(\*) FROM parent\_table p JOIN child\_table c ON (p.id = c.id) WHERE p.id = 1000;  |  Q6: SELECT COUNT(\*) FROM parent\_table p JOIN child\_table c ON (p.id = c.id) WHERE p.id = 1000;  | 
|  T12  |   INSERT INTO parent\_table (id, s) VALUES (1000, 'hello'); INSERT INTO child\_table (id, s) VALUES (1000, 'world'); COMMIT;   |  |  | 
|  T13  |  |  If Q5 finishes now, result is 0.  |  If Q6 finishes now, result is 0 or 1.  | 
If the queries finish quickly, before any other transactions perform DML statements and commit, the results are predictable and the same between the primary instance and the Aurora Replica. Let's examine the differences in behavior in detail, starting with the first query.  
The results for Q1 are highly predictable because `READ COMMITTED` on the primary instance uses a strong consistency model similar to the `REPEATABLE READ` isolation level.  
The results for Q2 might vary depending on what transactions are committed while that query is running. For example, suppose that other transactions perform DML statements and commit while the queries are running. In this case, the query on the Aurora Replica with the `READ COMMITTED` isolation level might or might not take the changes into account. The row counts aren't predictable in the same way as under the `REPEATABLE READ` isolation level. They also aren't as predictable as queries running under the `READ COMMITTED` isolation level on the primary instance, or on an RDS for MySQL instance.  
The `UPDATE` statement at T7 doesn't actually change the number of rows in the table. However, by changing the length of a variable-length column, this statement can cause rows to be reorganized internally. A long-running `READ COMMITTED` transaction might see the old version of a row, and later within the same query see the new version of the same row. The query can also skip both the old and new versions of the row, so the row count might be different than expected.  
The results of Q5 and Q6 might be identical or slightly different. Query Q6 on the Aurora Replica under `READ COMMITTED` is able to see, but is not required to see, the new rows that are committed while the query is running. It might also see the row from one table, but not from the other table. If the join query doesn't find a matching row in both tables, it returns a count of zero. If the query does find both the new rows in `PARENT_TABLE` and `CHILD_TABLE`, the query returns a count of one. In a long-running query, the lookups from the joined tables might happen at widely separated times.  
These differences in behavior depend on the timing of when transactions are committed and when the queries process the underlying table rows. Thus, you're most likely to see such differences in report queries that take minutes or hours and that run on Aurora clusters processing OLTP transactions at the same time. These are the kinds of mixed workloads that benefit the most from the `READ COMMITTED` isolation level on Aurora Replicas.

# Aurora MySQL hints
Hints<a name="hints"></a>

You can use SQL hints with Aurora MySQL queries to fine-tune performance. You can also use hints to prevent execution plans for important queries from changing because of unpredictable conditions.

**Tip**  
To verify the effect that a hint has on a query, examine the query plan produced by the `EXPLAIN` statement. Compare the query plans with and without the hint.

In Aurora MySQL version 3, you can use all the hints that are available in MySQL Community Edition 8.0. For more information about these hints, see [Optimizer Hints](https://dev.mysql.com/doc/refman/8.0/en/optimizer-hints.html) in the *MySQL Reference Manual*.

The following hints are available in Aurora MySQL version 2. These hints apply to queries that use the hash join feature in Aurora MySQL version 2, especially queries that use parallel query optimization.

**PQ, NO\_PQ**  
Specifies whether to force the optimizer to use parallel query on a per-table or per-query basis.  
`PQ` forces the optimizer to use parallel query for specified tables or the whole query (block). `NO_PQ` prevents the optimizer from using parallel query for specified tables or the whole query (block).  
This hint is available in Aurora MySQL version 2.11 and higher. The following examples show you how to use this hint.  
Specifying a table name forces the optimizer to apply the `PQ/NO_PQ` hint only to the specified tables. Not specifying a table name forces the `PQ/NO_PQ` hint on all tables affected by the query block.

```
EXPLAIN SELECT /*+ PQ() */ f1, f2
    FROM num1 t1 WHERE f1 > 10 and f2 < 100;

EXPLAIN SELECT /*+ PQ(t1) */ f1, f2
    FROM num1 t1 WHERE f1 > 10 and f2 < 100;

EXPLAIN SELECT /*+ PQ(t1,t2) */ f1, f2
    FROM num1 t1, num1 t2 WHERE t1.f1 = t2.f21;

EXPLAIN SELECT /*+ NO_PQ() */ f1, f2
    FROM num1 t1 WHERE f1 > 10 and f2 < 100;

EXPLAIN SELECT /*+ NO_PQ(t1) */ f1, f2
    FROM num1 t1 WHERE f1 > 10 and f2 < 100;

EXPLAIN SELECT /*+ NO_PQ(t1,t2) */ f1, f2
    FROM num1 t1, num1 t2 WHERE t1.f1 = t2.f21;
```

**HASH\_JOIN, NO\_HASH\_JOIN**  
Turns on or off the ability of the parallel query optimizer to choose whether to use the hash join optimization method for a query. `HASH_JOIN` lets the optimizer use hash join if that mechanism is more efficient. `NO_HASH_JOIN` prevents the optimizer from using hash join for the query. This hint is available in Aurora MySQL version 2.08 and higher. It has no effect in Aurora MySQL version 3.  
The following examples show you how to use this hint.  

```
EXPLAIN SELECT/*+ HASH_JOIN(t2) */ f1, f2
  FROM t1, t2 WHERE t1.f1 = t2.f1;

EXPLAIN SELECT /*+ NO_HASH_JOIN(t2) */ f1, f2
  FROM t1, t2 WHERE t1.f1 = t2.f1;
```

**HASH\_JOIN\_PROBING, NO\_HASH\_JOIN\_PROBING**  
In a hash join query, specifies whether to use the specified table for the probe side of the join. The query tests if column values from the build table exist in the probe table, instead of reading the entire contents of the probe table. You can use `HASH_JOIN_PROBING` and `HASH_JOIN_BUILDING` to specify how hash join queries are processed without reordering the tables within the query text. This hint is available in Aurora MySQL version 2.08 and higher. It has no effect in Aurora MySQL version 3.  
The following examples show how to use this hint. Specifying the `HASH_JOIN_PROBING` hint for the table `T2` has the same effect as specifying `NO_HASH_JOIN_PROBING` for the table `T1`.  

```
EXPLAIN SELECT /*+ HASH_JOIN(t2) HASH_JOIN_PROBING(t2) */ f1, f2
  FROM t1, t2 WHERE t1.f1 = t2.f1;

EXPLAIN SELECT /*+ HASH_JOIN(t2) NO_HASH_JOIN_PROBING(t1) */ f1, f2
  FROM t1, t2 WHERE t1.f1 = t2.f1;
```

**HASH\_JOIN\_BUILDING, NO\_HASH\_JOIN\_BUILDING**  
In a hash join query, specifies whether to use the specified table for the build side of the join. The query processes all the rows from this table to build the list of column values to cross-reference with the other table. You can use `HASH_JOIN_PROBING` and `HASH_JOIN_BUILDING` to specify how hash join queries are processed without reordering the tables within the query text. This hint is available in Aurora MySQL version 2.08 and higher. It has no effect in Aurora MySQL version 3.  
The following example shows you how to use this hint. Specifying the `HASH_JOIN_BUILDING` hint for the table `T2` has the same effect as specifying `NO_HASH_JOIN_BUILDING` for the table `T1`.  

```
EXPLAIN SELECT /*+ HASH_JOIN(t2) HASH_JOIN_BUILDING(t2) */ f1, f2
  FROM t1, t2 WHERE t1.f1 = t2.f1;

EXPLAIN SELECT /*+ HASH_JOIN(t2) NO_HASH_JOIN_BUILDING(t1) */ f1, f2
  FROM t1, t2 WHERE t1.f1 = t2.f1;
```

**JOIN\_FIXED\_ORDER**  
Specifies that tables in the query are joined based on the order they are listed in the query. It is useful with queries involving three or more tables. It is intended as a replacement for the MySQL `STRAIGHT_JOIN` hint and is equivalent to the MySQL [JOIN\_FIXED\_ORDER](https://dev.mysql.com/doc/refman/8.0/en/optimizer-hints.html) hint. This hint is available in Aurora MySQL version 2.08 and higher.  
The following example shows you how to use this hint.  

```
EXPLAIN SELECT /*+ JOIN_FIXED_ORDER() */ f1, f2
  FROM t1 JOIN t2 USING (id) JOIN t3 USING (id) JOIN t4 USING (id);
```

**JOIN\_ORDER**  
Specifies the join order for the tables in the query. It is useful with queries involving three or more tables. It is equivalent to the MySQL [JOIN\_ORDER](https://dev.mysql.com/doc/refman/8.0/en/optimizer-hints.html) hint. This hint is available in Aurora MySQL version 2.08 and higher.  
The following example shows you how to use this hint.  

```
EXPLAIN SELECT /*+ JOIN_ORDER (t4, t2, t1, t3) */ f1, f2
  FROM t1 JOIN t2 USING (id) JOIN t3 USING (id) JOIN t4 USING (id);
```

**JOIN\_PREFIX**  
Specifies the tables to put first in the join order. It is useful with queries involving three or more tables. It is equivalent to the MySQL [JOIN\_PREFIX](https://dev.mysql.com/doc/refman/8.0/en/optimizer-hints.html) hint. This hint is available in Aurora MySQL version 2.08 and higher.  
The following example shows you how to use this hint.  

```
EXPLAIN SELECT /*+ JOIN_PREFIX (t4, t2) */ f1, f2
  FROM t1 JOIN t2 USING (id) JOIN t3 USING (id) JOIN t4 USING (id);
```

**JOIN\_SUFFIX**  
Specifies the tables to put last in the join order. It is useful with queries involving three or more tables. It is equivalent to the MySQL [JOIN\_SUFFIX](https://dev.mysql.com/doc/refman/8.0/en/optimizer-hints.html) hint. This hint is available in Aurora MySQL version 2.08 and higher.  
The following example shows you how to use this hint.  

```
EXPLAIN SELECT /*+ JOIN_SUFFIX (t1) */ f1, f2
  FROM t1 JOIN t2 USING (id) JOIN t3 USING (id) JOIN t4 USING (id);
```

For information about using hash join queries, see [Optimizing large Aurora MySQL join queries with hash joins](AuroraMySQL.BestPractices.Performance.md#Aurora.BestPractices.HashJoin).

# Aurora MySQL stored procedure reference
Stored procedure reference

You can manage your Aurora MySQL DB cluster by calling built-in stored procedures.

**Topics**
+ [

# Collecting and maintaining the Global Status History
](mysql-stored-proc-gsh.md)
+ [

# Configuring, starting, and stopping binary log (binlog) replication
](mysql-stored-proc-replicating.md)
+ [

# Ending a session or query
](mysql-stored-proc-ending.md)
+ [

# Replicating transactions using GTIDs
](mysql-stored-proc-gtid.md)
+ [

# Rotating the query logs
](mysql-stored-proc-logging.md)
+ [

# Setting and showing binary log configuration
](mysql-stored-proc-configuring.md)

# Collecting and maintaining the Global Status History


Amazon RDS provides a set of procedures that take snapshots of the values of status variables over time and write them to a table, along with any changes since the last snapshot. This infrastructure is called Global Status History. For more information, see [Managing the Global Status History](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.MySQL.CommonDBATasks.html#Appendix.MySQL.CommonDBATasks.GoSH).
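For example, after you turn on the collector, you can view recent snapshots. The exact column layout of the table can vary by version, so this query selects all columns.

```
CALL mysql.rds_enable_gsh_collector;

-- View recent snapshots of status variable changes
SELECT * FROM mysql.global_status_history LIMIT 10;
```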

The following stored procedures manage how the Global Status History is collected and maintained.

**Topics**
+ [

## mysql.rds\_collect\_global\_status\_history
](#mysql_rds_collect_global_status_history)
+ [

## mysql.rds\_disable\_gsh\_collector
](#mysql_rds_disable_gsh_collector)
+ [

## mysql.rds\_disable\_gsh\_rotation
](#mysql_rds_disable_gsh_rotation)
+ [

## mysql.rds\_enable\_gsh\_collector
](#mysql_rds_enable_gsh_collector)
+ [

## mysql.rds\$1enable\$1gsh\$1rotation
](#mysql_rds_enable_gsh_rotation)
+ [

## mysql.rds\$1rotate\$1global\$1status\$1history
](#mysql_rds_rotate_global_status_history)
+ [

## mysql.rds\$1set\$1gsh\$1collector
](#mysql_rds_set_gsh_collector)
+ [

## mysql.rds\$1set\$1gsh\$1rotation
](#mysql_rds_set_gsh_rotation)

## mysql.rds\_collect\_global\_status\_history

Takes a snapshot on demand for the Global Status History.

### Syntax

 

```
CALL mysql.rds_collect_global_status_history;
```

## mysql.rds\_disable\_gsh\_collector

Turns off snapshots taken by the Global Status History.

### Syntax

 

```
CALL mysql.rds_disable_gsh_collector;
```

## mysql.rds\_disable\_gsh\_rotation

Turns off rotation of the `mysql.global_status_history` table.

### Syntax

 

```
CALL mysql.rds_disable_gsh_rotation;
```

## mysql.rds\_enable\_gsh\_collector

Turns on the Global Status History to take default snapshots at intervals specified by `rds_set_gsh_collector`.

### Syntax

 

```
CALL mysql.rds_enable_gsh_collector;
```

## mysql.rds\_enable\_gsh\_rotation

Turns on rotation of the contents of the `mysql.global_status_history` table to `mysql.global_status_history_old` at intervals specified by `rds_set_gsh_rotation`.

### Syntax

 

```
CALL mysql.rds_enable_gsh_rotation;
```

## mysql.rds\_rotate\_global\_status\_history

Rotates the contents of the `mysql.global_status_history` table to `mysql.global_status_history_old` on demand.

### Syntax

 

```
CALL mysql.rds_rotate_global_status_history;
```

## mysql.rds\_set\_gsh\_collector

Specifies the interval, in minutes, between snapshots taken by the Global Status History.

### Syntax

 

```
CALL mysql.rds_set_gsh_collector(intervalPeriod);
```

### Parameters

 *intervalPeriod*   
The interval, in minutes, between snapshots. Default value is `5`.
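### Examples

The following example sets the snapshot interval to 10 minutes. The interval value is illustrative.

```
CALL mysql.rds_set_gsh_collector(10);
```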

## mysql.rds\_set\_gsh\_rotation

Specifies the interval, in days, between rotations of the `mysql.global_status_history` table.

### Syntax

 

```
CALL mysql.rds_set_gsh_rotation(intervalPeriod);
```

### Parameters

 *intervalPeriod*   
The interval, in days, between table rotations. Default value is `7`.
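### Examples

The following example sets the table rotation interval to 3 days. The interval value is illustrative.

```
CALL mysql.rds_set_gsh_rotation(3);
```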

# Configuring, starting, and stopping binary log (binlog) replication


You can call the following stored procedures while connected to the primary instance in an Aurora MySQL cluster. These procedures control how transactions are replicated from an external database into Aurora MySQL, or from Aurora MySQL to an external database.

**Topics**
+ [

## mysql.rds\_disable\_session\_binlog (Aurora MySQL version 2)
](#mysql_rds_disable_session_binlog)
+ [

## mysql.rds\_enable\_session\_binlog (Aurora MySQL version 2)
](#mysql_rds_enable_session_binlog)
+ [

## mysql.rds\_import\_binlog\_ssl\_material
](#mysql_rds_import_binlog_ssl_material)
+ [

## mysql.rds\_next\_master\_log (Aurora MySQL version 2)
](#mysql_rds_next_master_log)
+ [

## mysql.rds\_next\_source\_log (Aurora MySQL version 3)
](#mysql_rds_next_source_log)
+ [

## mysql.rds\_remove\_binlog\_ssl\_material
](#mysql_rds_remove_binlog_ssl_material)
+ [

## mysql.rds\_reset\_external\_master (Aurora MySQL version 2)
](#mysql_rds_reset_external_master)
+ [

## mysql.rds\_reset\_external\_source (Aurora MySQL version 3)
](#mysql_rds_reset_external_source)
+ [

## mysql.rds\_set\_binlog\_source\_ssl (Aurora MySQL version 3)
](#mysql_rds_set_binlog_source_ssl)
+ [

## mysql.rds\_set\_external\_master (Aurora MySQL version 2)
](#mysql_rds_set_external_master)
+ [

## mysql.rds\_set\_external\_source (Aurora MySQL version 3)
](#mysql_rds_set_external_source)
+ [

## mysql.rds\_set\_external\_master\_with\_auto\_position (Aurora MySQL version 2)
](#mysql_rds_set_external_master_with_auto_position)
+ [

## mysql.rds\_set\_external\_source\_with\_auto\_position (Aurora MySQL version 3)
](#mysql_rds_set_external_source_with_auto_position)
+ [

## mysql.rds\_set\_master\_auto\_position (Aurora MySQL version 2)
](#mysql_rds_set_master_auto_position)
+ [

## mysql.rds\_set\_read\_only (Aurora MySQL version 3)
](#mysql_rds_set_read_only)
+ [

## mysql.rds\_set\_session\_binlog\_format (Aurora MySQL version 2)
](#mysql_rds_set_session_binlog_format)
+ [

## mysql.rds\_set\_source\_auto\_position (Aurora MySQL version 3)
](#mysql_rds_set_source_auto_position)
+ [

## mysql.rds\_skip\_repl\_error
](#mysql_rds_skip_repl_error)
+ [

## mysql.rds\_start\_replication
](#mysql_rds_start_replication)
+ [

## mysql.rds\_start\_replication\_until (Aurora MySQL version 3)
](#mysql_rds_start_replication_until)
+ [

## mysql.rds\_stop\_replication
](#mysql_rds_stop_replication)

## mysql.rds\_disable\_session\_binlog (Aurora MySQL version 2)

Turns off binary logging for the current session by setting the `sql_log_bin` variable to `OFF`.

### Syntax


```
CALL mysql.rds_disable_session_binlog;
```

### Parameters


None

### Usage notes


For an Aurora MySQL DB cluster, you call this stored procedure while connected to the primary instance.

For Aurora, this procedure is supported for Aurora MySQL version 2.12 and higher MySQL 5.7–compatible versions.
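To verify the effect, you can check the `sql_log_bin` session variable after calling the procedure. This is a quick sanity check rather than part of the procedure itself; the variable reads `0` while session binary logging is off:

```
CALL mysql.rds_disable_session_binlog;

-- Confirm that binary logging is off for this session (returns 0)
SELECT @@session.sql_log_bin;
```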

**Note**  
In Aurora MySQL version 3, you can use the following command to disable binary logging for the current session if you have the `SESSION_VARIABLES_ADMIN` privilege:  

```
SET SESSION sql_log_bin = OFF;
```

## mysql.rds\_enable\_session\_binlog (Aurora MySQL version 2)

Turns on binary logging for the current session by setting the `sql_log_bin` variable to `ON`.

### Syntax


```
CALL mysql.rds_enable_session_binlog;
```

### Parameters


None

### Usage notes


For an Aurora MySQL DB cluster, you call this stored procedure while connected to the primary instance.

For Aurora, this procedure is supported for Aurora MySQL version 2.12 and higher MySQL 5.7–compatible versions.

**Note**  
In Aurora MySQL version 3, you can use the following command to enable binary logging for the current session if you have the `SESSION_VARIABLES_ADMIN` privilege:  

```
SET SESSION sql_log_bin = ON;
```

## mysql.rds\_import\_binlog\_ssl\_material


Imports the certificate authority certificate, client certificate, and client key into an Aurora MySQL DB cluster. The information is required for SSL communication and encrypted replication.

**Note**  
Currently, this procedure is supported for Aurora MySQL version 2: 2.09.2, 2.10.0, 2.10.1, and 2.11.0; and version 3: 3.01.1 and higher.

### Syntax


 

```
CALL mysql.rds_import_binlog_ssl_material (
  ssl_material
);
```

### Parameters


 *ssl\_material*   
JSON payload that contains the contents of the following .pem format files for a MySQL client:  
+ "ssl\_ca":"*Certificate authority certificate*"
+ "ssl\_cert":"*Client certificate*"
+ "ssl\_key":"*Client key*"

### Usage notes


Prepare for encrypted replication before you run this procedure:
+ If you don't have SSL enabled on the external MySQL source database instance and don't have a client key and client certificate prepared, enable SSL on the MySQL database server and generate the required client key and client certificate.
+ If SSL is enabled on the external source database instance, supply a client key and certificate for the Aurora MySQL DB cluster. If you don't have these, generate a new key and certificate for the Aurora MySQL DB cluster. To sign the client certificate, you must have the certificate authority key you used to configure SSL on the external MySQL source database instance.

For more information, see [Creating SSL certificates and keys using openssl](https://dev.mysql.com/doc/refman/8.0/en/creating-ssl-files-using-openssl.html) in the MySQL documentation.

**Important**  
After you prepare for encrypted replication, use an SSL connection to run this procedure. The client key must not be transferred across an insecure connection. 

This procedure imports SSL information from an external MySQL database into an Aurora MySQL DB cluster. During encrypted replication, the Aurora MySQL DB cluster acts as a client to the MySQL database server. The certificates and keys for the Aurora MySQL client are in files in .pem format.

You can copy the information from these files into the `ssl_material` parameter in the correct JSON payload. To support encrypted replication, import this SSL information into the Aurora MySQL DB cluster.

The JSON payload must be in the following format.

```
'{"ssl_ca":"-----BEGIN CERTIFICATE-----
ssl_ca_pem_body_code
-----END CERTIFICATE-----\n","ssl_cert":"-----BEGIN CERTIFICATE-----
ssl_cert_pem_body_code
-----END CERTIFICATE-----\n","ssl_key":"-----BEGIN RSA PRIVATE KEY-----
ssl_key_pem_body_code
-----END RSA PRIVATE KEY-----\n"}'
```

### Examples


The following example imports SSL information into an Aurora MySQL DB cluster. In .pem format files, the body code typically is longer than the body code shown in the example.

```
call mysql.rds_import_binlog_ssl_material(
'{"ssl_ca":"-----BEGIN CERTIFICATE-----
AAAAB3NzaC1yc2EAAAADAQABAAABAQClKsfkNkuSevGj3eYhCe53pcjqP3maAhDFcvBS7O6V
hz2ItxCih+PnDSUaw+WNQn/mZphTk/a/gU8jEzoOWbkM4yxyb/wB96xbiFveSFJuOp/d6RJhJOI0iBXr
lsLnBItntckiJ7FbtxJMXLvvwJryDUilBMTjYtwB+QhYXUMOzce5Pjz5/i8SeJtjnV3iAoG/cQk+0FzZ
qaeJAAHco+CY/5WrUBkrHmFJr6HcXkvJdWPkYQS3xqC0+FmUZofz221CBt5IMucxXPkX4rWi+z7wB3Rb
BQoQzd8v7yeb7OzlPnWOyN0qFU0XA246RA8QFYiCNYwI3f05p6KLxEXAMPLE
-----END CERTIFICATE-----\n","ssl_cert":"-----BEGIN CERTIFICATE-----
AAAAB3NzaC1yc2EAAAADAQABAAABAQClKsfkNkuSevGj3eYhCe53pcjqP3maAhDFcvBS7O6V
hz2ItxCih+PnDSUaw+WNQn/mZphTk/a/gU8jEzoOWbkM4yxyb/wB96xbiFveSFJuOp/d6RJhJOI0iBXr
lsLnBItntckiJ7FbtxJMXLvvwJryDUilBMTjYtwB+QhYXUMOzce5Pjz5/i8SeJtjnV3iAoG/cQk+0FzZ
qaeJAAHco+CY/5WrUBkrHmFJr6HcXkvJdWPkYQS3xqC0+FmUZofz221CBt5IMucxXPkX4rWi+z7wB3Rb
BQoQzd8v7yeb7OzlPnWOyN0qFU0XA246RA8QFYiCNYwI3f05p6KLxEXAMPLE
-----END CERTIFICATE-----\n","ssl_key":"-----BEGIN RSA PRIVATE KEY-----
AAAAB3NzaC1yc2EAAAADAQABAAABAQClKsfkNkuSevGj3eYhCe53pcjqP3maAhDFcvBS7O6V
hz2ItxCih+PnDSUaw+WNQn/mZphTk/a/gU8jEzoOWbkM4yxyb/wB96xbiFveSFJuOp/d6RJhJOI0iBXr
lsLnBItntckiJ7FbtxJMXLvvwJryDUilBMTjYtwB+QhYXUMOzce5Pjz5/i8SeJtjnV3iAoG/cQk+0FzZ
qaeJAAHco+CY/5WrUBkrHmFJr6HcXkvJdWPkYQS3xqC0+FmUZofz221CBt5IMucxXPkX4rWi+z7wB3Rb
BQoQzd8v7yeb7OzlPnWOyN0qFU0XA246RA8QFYiCNYwI3f05p6KLxEXAMPLE
-----END RSA PRIVATE KEY-----\n"}');
```

## mysql.rds\_next\_master\_log (Aurora MySQL version 2)

Changes the source database instance log position to the start of the next binary log on the source database instance. Use this procedure only if you are receiving replication I/O error 1236 on a read replica.

### Syntax


 

```
CALL mysql.rds_next_master_log(
curr_master_log
);
```

### Parameters


 *curr\_master\_log*   
The index of the current master log file. For example, if the current file is named `mysql-bin-changelog.012345`, then the index is 12345. To determine the current master log file name, run the `SHOW REPLICA STATUS` command and view the `Source_Log_File` field.

### Usage notes


The master user must run the `mysql.rds_next_master_log` procedure. 

**Warning**  
Call `mysql.rds_next_master_log` only if replication fails after a failover of a Multi-AZ DB instance that is the replication source, and the `Last_IO_Errno` field of `SHOW REPLICA STATUS` reports I/O error 1236.  
Calling `mysql.rds_next_master_log` can result in data loss in the read replica if transactions in the source instance were not written to the binary log on disk before the failover event occurred. 

### Examples


Assume replication fails on an Aurora MySQL read replica. Running `SHOW REPLICA STATUS\G` on the read replica returns the following result:

```
*************************** 1. row ***************************
             Replica_IO_State:
                  Source_Host: myhost.XXXXXXXXXXXXXXX.rr-rrrr-1.rds.amazonaws.com
                  Source_User: MasterUser
                  Source_Port: 3306
                Connect_Retry: 10
              Source_Log_File: mysql-bin-changelog.012345
          Read_Source_Log_Pos: 1219393
               Relay_Log_File: relaylog.012340
                Relay_Log_Pos: 30223388
        Relay_Source_Log_File: mysql-bin-changelog.012345
           Replica_IO_Running: No
          Replica_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Source_Log_Pos: 30223232
              Relay_Log_Space: 5248928866
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Source_SSL_Allowed: No
           Source_SSL_CA_File:
           Source_SSL_CA_Path:
              Source_SSL_Cert:
            Source_SSL_Cipher:
               Source_SSL_Key:
        Seconds_Behind_Master: NULL
Source_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 1236
                Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Client requested master to start replication from impossible position; the first event 'mysql-bin-changelog.013406' at 1219393, the last event read from '/rdsdbdata/log/binlog/mysql-bin-changelog.012345' at 4, the last byte read from '/rdsdbdata/log/binlog/mysql-bin-changelog.012345' at 4.'
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Source_Server_Id: 67285976
```

The `Last_IO_Errno` field shows that the instance is receiving I/O error 1236. The `Source_Log_File` field shows that the file name is `mysql-bin-changelog.012345`, which means that the log file index is `12345`. To resolve the error, you can call `mysql.rds_next_master_log` with the following parameter:

```
CALL mysql.rds_next_master_log(12345);
```

## mysql.rds\_next\_source\_log (Aurora MySQL version 3)

Changes the source database instance log position to the start of the next binary log on the source database instance. Use this procedure only if you are receiving replication I/O error 1236 on a read replica.

### Syntax


 

```
CALL mysql.rds_next_source_log(
curr_source_log
);
```

### Parameters


 *curr\_source\_log*   
The index of the current source log file. For example, if the current file is named `mysql-bin-changelog.012345`, then the index is 12345. To determine the current source log file name, run the `SHOW REPLICA STATUS` command and view the `Source_Log_File` field.

### Usage notes


The administrative user must run the `mysql.rds_next_source_log` procedure. 

**Warning**  
Call `mysql.rds_next_source_log` only if replication fails after a failover of a Multi-AZ DB instance that is the replication source, and the `Last_IO_Errno` field of `SHOW REPLICA STATUS` reports I/O error 1236.  
Calling `mysql.rds_next_source_log` can result in data loss in the read replica if transactions in the source instance were not written to the binary log on disk before the failover event occurred. You can reduce the chance of this happening by setting the source instance parameters `sync_binlog` and `innodb_support_xa` to `1`, even though this might reduce performance. 

### Examples


Assume replication fails on an Aurora MySQL read replica. Running `SHOW REPLICA STATUS\G` on the read replica returns the following result:

```
*************************** 1. row ***************************
             Replica_IO_State:
                  Source_Host: myhost.XXXXXXXXXXXXXXX.rr-rrrr-1.rds.amazonaws.com
                  Source_User: MasterUser
                  Source_Port: 3306
                Connect_Retry: 10
              Source_Log_File: mysql-bin-changelog.012345
          Read_Source_Log_Pos: 1219393
               Relay_Log_File: relaylog.012340
                Relay_Log_Pos: 30223388
        Relay_Source_Log_File: mysql-bin-changelog.012345
           Replica_IO_Running: No
          Replica_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Source_Log_Pos: 30223232
              Relay_Log_Space: 5248928866
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Source_SSL_Allowed: No
           Source_SSL_CA_File:
           Source_SSL_CA_Path:
              Source_SSL_Cert:
            Source_SSL_Cipher:
               Source_SSL_Key:
        Seconds_Behind_Source: NULL
Source_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 1236
                Last_IO_Error: Got fatal error 1236 from source when reading data from binary log: 'Client requested source to start replication from impossible position; the first event 'mysql-bin-changelog.013406' at 1219393, the last event read from '/rdsdbdata/log/binlog/mysql-bin-changelog.012345' at 4, the last byte read from '/rdsdbdata/log/binlog/mysql-bin-changelog.012345' at 4.'
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Source_Server_Id: 67285976
```

The `Last_IO_Errno` field shows that the instance is receiving I/O error 1236. The `Source_Log_File` field shows that the file name is `mysql-bin-changelog.012345`, which means that the log file index is `12345`. To resolve the error, you can call `mysql.rds_next_source_log` with the following parameter:

```
CALL mysql.rds_next_source_log(12345);
```

## mysql.rds\_remove\_binlog\_ssl\_material


Removes the certificate authority certificate, client certificate, and client key for SSL communication and encrypted replication. This information is imported by using [mysql.rds\_import\_binlog\_ssl\_material](#mysql_rds_import_binlog_ssl_material).

### Syntax


 

```
CALL mysql.rds_remove_binlog_ssl_material;
```

## mysql.rds\_reset\_external\_master (Aurora MySQL version 2)

Reconfigures an Aurora MySQL DB instance to no longer be a read replica of an instance of MySQL running external to Amazon RDS.

**Important**  
To run this procedure, `autocommit` must be enabled. To enable it, set the `autocommit` parameter to `1`. For information about modifying parameters, see [Modifying parameters in a DB parameter group in Amazon Aurora](USER_WorkingWithParamGroups.Modifying.md).

### Syntax


 

```
CALL mysql.rds_reset_external_master;
```

### Usage notes


The master user must run the `mysql.rds_reset_external_master` procedure. This procedure must be run on the MySQL DB instance to be removed as a read replica of a MySQL instance running external to Amazon RDS.

**Note**  
We offer these stored procedures primarily to enable replication with MySQL instances running external to Amazon RDS. We recommend that you use Aurora Replicas to manage replication within an Aurora MySQL DB cluster when possible. For information about managing replication in Aurora MySQL DB clusters, see [Using Aurora Replicas](AuroraMySQL.Replication.md#AuroraMySQL.Replication.Replicas).

For more information about using replication to import data from an instance of MySQL running external to Aurora MySQL, see [Replication between Aurora and MySQL or between Aurora and another Aurora DB cluster (binary log replication)](AuroraMySQL.Replication.MySQL.md).

## mysql.rds\_reset\_external\_source (Aurora MySQL version 3)

Reconfigures an Aurora MySQL DB instance to no longer be a read replica of an instance of MySQL running external to Amazon RDS.

**Important**  
To run this procedure, `autocommit` must be enabled. To enable it, set the `autocommit` parameter to `1`. For information about modifying parameters, see [Modifying parameters in a DB parameter group in Amazon Aurora](USER_WorkingWithParamGroups.Modifying.md).

### Syntax


 

```
CALL mysql.rds_reset_external_source;
```

### Usage notes


The administrative user must run the `mysql.rds_reset_external_source` procedure. This procedure must be run on the MySQL DB instance to be removed as a read replica of a MySQL instance running external to Amazon RDS.

**Note**  
We offer these stored procedures primarily to enable replication with MySQL instances running external to Amazon RDS. We recommend that you use Aurora Replicas to manage replication within an Aurora MySQL DB cluster when possible. For information about managing replication in Aurora MySQL DB clusters, see [Using Aurora Replicas](AuroraMySQL.Replication.md#AuroraMySQL.Replication.Replicas).

## mysql.rds\_set\_binlog\_source\_ssl (Aurora MySQL version 3)

Enables `SOURCE_SSL` encryption for binlog replication. For more information, see [CHANGE REPLICATION SOURCE TO statement](https://dev.mysql.com/doc/refman/8.0/en/change-replication-source-to.html) in the MySQL documentation.

### Syntax


```
CALL mysql.rds_set_binlog_source_ssl(mode);
```

### Parameters


*mode*  
A value that indicates whether `SOURCE_SSL` encryption is enabled:  
+ `0` – `SOURCE_SSL` encryption is disabled. The default is `0`.
+ `1` – `SOURCE_SSL` encryption is enabled. You can configure encryption using SSL or TLS.

### Usage notes


This procedure is supported for Aurora MySQL version 3.06 and higher.
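### Examples

The following example enables `SOURCE_SSL` encryption for binlog replication, passing `1` for the *mode* parameter described above.

```
CALL mysql.rds_set_binlog_source_ssl(1);
```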

## mysql.rds\_set\_external\_master (Aurora MySQL version 2)

Configures an Aurora MySQL DB instance to be a read replica of an instance of MySQL running external to Amazon RDS.

The `mysql.rds_set_external_master` procedure is deprecated and will be removed in a future release. Use `mysql.rds_set_external_source` instead.

**Important**  
To run this procedure, `autocommit` must be enabled. To enable it, set the `autocommit` parameter to `1`. For information about modifying parameters, see [Modifying parameters in a DB parameter group in Amazon Aurora](USER_WorkingWithParamGroups.Modifying.md).

### Syntax


 

```
CALL mysql.rds_set_external_master (
  host_name
  , host_port
  , replication_user_name
  , replication_user_password
  , mysql_binary_log_file_name
  , mysql_binary_log_file_location
  , ssl_encryption
);
```

### Parameters


 *host\_name*   
The host name or IP address of the MySQL instance running external to Amazon RDS to become the source database instance.

 *host\_port*   
The port used by the MySQL instance running external to Amazon RDS to be configured as the source database instance. If your network configuration includes Secure Shell (SSH) port replication that converts the port number, specify the port number that is exposed by SSH.

 *replication\_user\_name*   
The ID of a user with `REPLICATION CLIENT` and `REPLICATION SLAVE` permissions on the MySQL instance running external to Amazon RDS. We recommend that you provide an account that is used solely for replication with the external instance.

 *replication\_user\_password*   
The password of the user ID specified in `replication_user_name`.

 *mysql\_binary\_log\_file\_name*   
The name of the binary log on the source database instance that contains the replication information.

 *mysql\_binary\_log\_file\_location*   
The location in the `mysql_binary_log_file_name` binary log at which replication starts reading the replication information.  
You can determine the binlog file name and location by running `SHOW MASTER STATUS` on the source database instance.

 *ssl\_encryption*   
A value that specifies whether Secure Socket Layer (SSL) encryption is used on the replication connection. 1 specifies to use SSL encryption, 0 specifies to not use encryption. The default is 0.  
The `MASTER_SSL_VERIFY_SERVER_CERT` option isn't supported. This option is set to 0, which means that the connection is encrypted, but the certificates aren't verified.

### Usage notes


The master user must run the `mysql.rds_set_external_master` procedure. This procedure must be run on the MySQL DB instance to be configured as the read replica of a MySQL instance running external to Amazon RDS. 

Before you run `mysql.rds_set_external_master`, you must configure the instance of MySQL running external to Amazon RDS to be a source database instance. To connect to the MySQL instance running external to Amazon RDS, you must specify `replication_user_name` and `replication_user_password` values that indicate a replication user that has `REPLICATION CLIENT` and `REPLICATION SLAVE` permissions on the external instance of MySQL. 

**To configure an external instance of MySQL as a source database instance**

1. Using the MySQL client of your choice, connect to the external instance of MySQL and create a user account to be used for replication. The following is an example.

   **MySQL 5.7**

   ```
   CREATE USER 'repl_user'@'mydomain.com' IDENTIFIED BY 'password';
   ```

   **MySQL 8.0**

   ```
   CREATE USER 'repl_user'@'mydomain.com' IDENTIFIED WITH mysql_native_password BY 'password';
   ```
**Note**  
Specify a password other than the one shown here as a security best practice.

1. On the external instance of MySQL, grant `REPLICATION CLIENT` and `REPLICATION SLAVE` privileges to your replication user. The following example grants `REPLICATION CLIENT` and `REPLICATION SLAVE` privileges on all databases for the 'repl\_user' user for your domain.

   **MySQL 5.7**

   ```
   GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'mydomain.com' IDENTIFIED BY 'password';
   ```

   **MySQL 8.0**

   ```
   GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'mydomain.com';
   ```

To use encrypted replication, configure the source database instance to use SSL connections. Also, import the certificate authority certificate, client certificate, and client key into the DB instance or DB cluster using the [mysql.rds\_import\_binlog\_ssl\_material](#mysql_rds_import_binlog_ssl_material) procedure.

**Note**  
We offer these stored procedures primarily to enable replication with MySQL instances running external to Amazon RDS. We recommend that you use Aurora Replicas to manage replication within an Aurora MySQL DB cluster when possible. For information about managing replication in Aurora MySQL DB clusters, see [Using Aurora Replicas](AuroraMySQL.Replication.md#AuroraMySQL.Replication.Replicas).

After calling `mysql.rds_set_external_master` to configure an Amazon RDS DB instance as a read replica, you can call [mysql.rds\_start\_replication](#mysql_rds_start_replication) on the read replica to start the replication process. You can call [mysql.rds\_reset\_external\_master (Aurora MySQL version 2)](#mysql_rds_reset_external_master) to remove the read replica configuration.

When `mysql.rds_set_external_master` is called, Amazon RDS records the time, user, and an action of `set master` in the `mysql.rds_history` and `mysql.rds_replication_status` tables.

### Examples


When run on a MySQL DB instance, the following example configures the DB instance to be a read replica of an instance of MySQL running external to Amazon RDS.

```
call mysql.rds_set_external_master(
  'Externaldb.some.com',
  3306,
  'repl_user',
  'password',
  'mysql-bin-changelog.0777',
  120,
  1);
```

## mysql.rds\_set\_external\_source (Aurora MySQL version 3)

Configures an Aurora MySQL DB instance to be a read replica of an instance of MySQL running external to Amazon RDS.

**Important**  
To run this procedure, `autocommit` must be enabled. To enable it, set the `autocommit` parameter to `1`. For information about modifying parameters, see [Modifying parameters in a DB parameter group in Amazon Aurora](USER_WorkingWithParamGroups.Modifying.md).

### Syntax


 

```
CALL mysql.rds_set_external_source (
  host_name
  , host_port
  , replication_user_name
  , replication_user_password
  , mysql_binary_log_file_name
  , mysql_binary_log_file_location
  , ssl_encryption
);
```

### Parameters


 *host\_name*   
The host name or IP address of the MySQL instance running external to Amazon RDS to become the source database instance.

 *host\_port*   
The port used by the MySQL instance running external to Amazon RDS to be configured as the source database instance. If your network configuration includes Secure Shell (SSH) port replication that converts the port number, specify the port number that is exposed by SSH.

 *replication\_user\_name*   
The ID of a user with `REPLICATION CLIENT` and `REPLICATION SLAVE` permissions on the MySQL instance running external to Amazon RDS. We recommend that you provide an account that is used solely for replication with the external instance.

 *replication\_user\_password*   
The password of the user ID specified in `replication_user_name`.

 *mysql\_binary\_log\_file\_name*   
The name of the binary log on the source database instance that contains the replication information.

 *mysql\_binary\_log\_file\_location*   
The location in the `mysql_binary_log_file_name` binary log at which replication starts reading the replication information.  
You can determine the binlog file name and location by running `SHOW MASTER STATUS` on the source database instance.

 *ssl\_encryption*   
A value that specifies whether Secure Socket Layer (SSL) encryption is used on the replication connection. 1 specifies to use SSL encryption, 0 specifies to not use encryption. The default is 0.  
You must have imported a custom SSL certificate using [mysql.rds\_import\_binlog\_ssl\_material](#mysql_rds_import_binlog_ssl_material) to enable this option. If you haven't imported a custom SSL certificate, then set this parameter to 0 and use [mysql.rds\_set\_binlog\_source\_ssl (Aurora MySQL version 3)](#mysql_rds_set_binlog_source_ssl) to enable SSL for binary log replication.  
The `SOURCE_SSL_VERIFY_SERVER_CERT` option isn't supported. This option is set to 0, which means that the connection is encrypted, but the certificates aren't verified.

### Usage notes


The administrative user must run the `mysql.rds_set_external_source` procedure. This procedure must be run on the Aurora MySQL DB instance to be configured as the read replica of a MySQL instance running external to Amazon RDS. 

 Before you run `mysql.rds_set_external_source`, you must configure the instance of MySQL running external to Amazon RDS to be a source database instance. To connect to the MySQL instance running external to Amazon RDS, you must specify `replication_user_name` and `replication_user_password` values that indicate a replication user that has `REPLICATION CLIENT` and `REPLICATION SLAVE` permissions on the external instance of MySQL.

**To configure an external instance of MySQL as a source database instance**

1. Using the MySQL client of your choice, connect to the external instance of MySQL and create a user account to be used for replication. The following is an example.

   ```
   CREATE USER 'repl_user'@'mydomain.com' IDENTIFIED BY 'password';
   ```
**Note**  
Specify a password other than the one shown here as a security best practice.

1. On the external instance of MySQL, grant `REPLICATION CLIENT` and `REPLICATION SLAVE` privileges to your replication user. The following example grants `REPLICATION CLIENT` and `REPLICATION SLAVE` privileges on all databases for the 'repl\_user' user for your domain.

   ```
   GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'mydomain.com';
   ```

To use encrypted replication, configure the source database instance to use SSL connections. Also, import the certificate authority certificate, client certificate, and client key into the DB instance or DB cluster using the [mysql.rds\_import\_binlog\_ssl\_material](#mysql_rds_import_binlog_ssl_material) procedure.

**Note**  
We offer these stored procedures primarily to enable replication with MySQL instances running external to Amazon RDS. We recommend that you use Aurora Replicas to manage replication within an Aurora MySQL DB cluster when possible. For information about managing replication in Aurora MySQL DB clusters, see [Using Aurora Replicas](AuroraMySQL.Replication.md#AuroraMySQL.Replication.Replicas).

After calling `mysql.rds_set_external_source` to configure an Aurora MySQL DB instance as a read replica, you can call [mysql.rds\_start\_replication](#mysql_rds_start_replication) on the read replica to start the replication process. You can call [mysql.rds\_reset\_external\_source (Aurora MySQL version 3)](#mysql_rds_reset_external_source) to remove the read replica configuration.

When `mysql.rds_set_external_source` is called, Amazon RDS records the time, user, and an action of `set master` in the `mysql.rds_history` and `mysql.rds_replication_status` tables.

### Examples


When run on an Aurora MySQL DB instance, the following example configures the DB instance to be a read replica of an instance of MySQL running external to Amazon RDS.

```
call mysql.rds_set_external_source(
  'Externaldb.some.com',
  3306,
  'repl_user',
  'password',
  'mysql-bin-changelog.000777',
  120,
  1);
```
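
Configuring the external source doesn't start replication by itself. As the usage notes describe, you then start replication on the same read replica and, if needed, remove the configuration later. A minimal sketch of those follow-on calls:

```
-- Start replicating from the configured external source
CALL mysql.rds_start_replication;

-- Check replication health (Aurora MySQL version 3 syntax)
SHOW REPLICA STATUS\G

-- Later, to remove the read replica configuration
CALL mysql.rds_reset_external_source;
```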

## mysql.rds\_set\_external\_master\_with\_auto\_position (Aurora MySQL version 2)

Configures an Aurora MySQL primary instance to accept incoming replication from an external MySQL instance. This procedure also configures replication based on global transaction identifiers (GTIDs).

This procedure doesn't configure delayed replication, because Aurora MySQL doesn't support delayed replication.

### Syntax


```
CALL mysql.rds_set_external_master_with_auto_position (
  host_name
  , host_port
  , replication_user_name
  , replication_user_password
  , ssl_encryption
);
```

### Parameters


*host\_name*  
 The host name or IP address of the MySQL instance running external to Aurora to become the replication source. 

*host\_port*  
 The port used by the MySQL instance running external to Aurora to be configured as the replication source. If your network configuration includes Secure Shell (SSH) port replication that converts the port number, specify the port number that is exposed by SSH. 

*replication\_user\_name*  
 The ID of a user with `REPLICATION CLIENT` and `REPLICATION SLAVE` permissions on the MySQL instance running external to Aurora. We recommend that you provide an account that is used solely for replication with the external instance. 

*replication\_user\_password*  
The password of the user ID specified in `replication_user_name`.

*ssl\_encryption*  
This option isn't currently implemented. The default is 0.

### Usage notes


For an Aurora MySQL DB cluster, you call this stored procedure while connected to the primary instance.

The master user must run the `mysql.rds_set_external_master_with_auto_position` procedure. The master user runs this procedure on the primary instance of an Aurora MySQL DB cluster that acts as a replication target. This can be the replication target of an external MySQL DB instance or an Aurora MySQL DB cluster.

This procedure is supported for Aurora MySQL version 2. For Aurora MySQL version 3, use the procedure [mysql.rds\_set\_external\_source\_with\_auto\_position (Aurora MySQL version 3)](#mysql_rds_set_external_source_with_auto_position) instead.

Before you run `mysql.rds_set_external_master_with_auto_position`, configure the external MySQL DB instance to be a replication source. To connect to the external MySQL instance, specify values for `replication_user_name` and `replication_user_password`. These values must indicate a replication user that has `REPLICATION CLIENT` and `REPLICATION SLAVE` permissions on the external MySQL instance.

**To configure an external MySQL instance as a replication source**

1. Using the MySQL client of your choice, connect to the external MySQL instance and create a user account to be used for replication. The following is an example.

   ```
   CREATE USER 'repl_user'@'mydomain.com' IDENTIFIED BY 'SomePassW0rd';
   ```

1. On the external MySQL instance, grant `REPLICATION CLIENT` and `REPLICATION SLAVE` privileges to your replication user. The following example grants `REPLICATION CLIENT` and `REPLICATION SLAVE` privileges on all databases for the `'repl_user'` user for your domain.

   ```
   GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'mydomain.com'
   IDENTIFIED BY 'SomePassW0rd';
   ```

When you call `mysql.rds_set_external_master_with_auto_position`, Amazon RDS records the time, the user, and an action of `set master` in the `mysql.rds_history` and `mysql.rds_replication_status` tables.

To skip a specific GTID-based transaction that is known to cause a problem, you can use the [mysql.rds\_skip\_transaction\_with\_gtid (Aurora MySQL version 2 and 3)](mysql-stored-proc-gtid.md#mysql_rds_skip_transaction_with_gtid) stored procedure. For more information about working with GTID-based replication, see [Using GTID-based replication](mysql-replication-gtid.md).

### Examples


 When run on an Aurora primary instance, the following example configures the Aurora cluster to act as a read replica of an instance of MySQL running external to Aurora. 

```
call mysql.rds_set_external_master_with_auto_position(
  'Externaldb.some.com',
  3306,
  'repl_user',
  'SomePassW0rd');
```
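
After this call succeeds, one way to confirm that GTID auto-positioning is active on the replica (a sketch; Aurora MySQL version 2 is MySQL 5.7-compatible, so the `SHOW SLAVE STATUS` form applies):

```
CALL mysql.rds_start_replication;

-- Auto_Position: 1 indicates GTID-based replication;
-- Slave_IO_Running and Slave_SQL_Running should both be Yes
SHOW SLAVE STATUS\G
```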

## mysql.rds\_set\_external\_source\_with\_auto\_position (Aurora MySQL version 3)

Configures an Aurora MySQL primary instance to accept incoming replication from an external MySQL instance. This procedure also configures replication based on global transaction identifiers (GTIDs).

### Syntax


```
CALL mysql.rds_set_external_source_with_auto_position (
  host_name
  , host_port
  , replication_user_name
  , replication_user_password
  , ssl_encryption
);
```

### Parameters


*host\_name*  
 The host name or IP address of the MySQL instance running external to Aurora to become the replication source. 

*host\_port*  
 The port used by the MySQL instance running external to Aurora to be configured as the replication source. If your network configuration includes Secure Shell (SSH) port replication that converts the port number, specify the port number that is exposed by SSH. 

*replication\_user\_name*  
 The ID of a user with `REPLICATION CLIENT` and `REPLICATION SLAVE` permissions on the MySQL instance running external to Aurora. We recommend that you provide an account that is used solely for replication with the external instance. 

*replication\_user\_password*  
 The password of the user ID specified in `replication_user_name`. 

*ssl\_encryption*  
This option isn't currently implemented. The default is 0.  
Use [mysql.rds\_set\_binlog\_source\_ssl (Aurora MySQL version 3)](#mysql_rds_set_binlog_source_ssl) to enable SSL for binary log replication.

### Usage notes


 For an Aurora MySQL DB cluster, you call this stored procedure while connected to the primary instance. 

 The administrative user must run the `mysql.rds_set_external_source_with_auto_position` procedure. The administrative user runs this procedure on the primary instance of an Aurora MySQL DB cluster that acts as a replication target. This can be the replication target of an external MySQL DB instance or an Aurora MySQL DB cluster. 

This procedure is supported for Aurora MySQL version 3. This procedure doesn't configure delayed replication, because Aurora MySQL doesn't support delayed replication.

 Before you run `mysql.rds_set_external_source_with_auto_position`, configure the external MySQL DB instance to be a replication source. To connect to the external MySQL instance, specify values for `replication_user_name` and `replication_user_password`. These values must indicate a replication user that has `REPLICATION CLIENT` and `REPLICATION SLAVE` permissions on the external MySQL instance. 

**To configure an external MySQL instance as a replication source**

1.  Using the MySQL client of your choice, connect to the external MySQL instance and create a user account to be used for replication. The following is an example. 

   ```
   CREATE USER 'repl_user'@'mydomain.com' IDENTIFIED BY 'SomePassW0rd';
   ```

1.  On the external MySQL instance, grant `REPLICATION CLIENT` and `REPLICATION SLAVE` privileges to your replication user. The following example grants `REPLICATION CLIENT` and `REPLICATION SLAVE` privileges on all databases for the `'repl_user'` user for your domain. 

   ```
   GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'mydomain.com';
   ```

When you call `mysql.rds_set_external_source_with_auto_position`, Amazon RDS records the time, the user, and an action of `set master` in the `mysql.rds_history` and `mysql.rds_replication_status` tables.

To skip a specific GTID-based transaction that is known to cause a problem, you can use the [mysql.rds\_skip\_transaction\_with\_gtid (Aurora MySQL version 2 and 3)](mysql-stored-proc-gtid.md#mysql_rds_skip_transaction_with_gtid) stored procedure. For more information about working with GTID-based replication, see [Using GTID-based replication](mysql-replication-gtid.md).

### Examples


 When run on an Aurora primary instance, the following example configures the Aurora cluster to act as a read replica of an instance of MySQL running external to Aurora. 

```
call mysql.rds_set_external_source_with_auto_position(
  'Externaldb.some.com',
  3306,
  'repl_user',
  'SomePassW0rd');
```

## mysql.rds\_set\_master\_auto\_position (Aurora MySQL version 2)

Sets the replication mode to be based on either binary log file positions or on global transaction identifiers (GTIDs).

### Syntax


 

```
CALL mysql.rds_set_master_auto_position (
auto_position_mode
);
```

### Parameters


*auto\_position\_mode*  
A value that indicates whether to use log file position replication or GTID-based replication:  
+ `0` – Use the replication method based on binary log file position. The default is `0`.
+ `1` – Use the GTID-based replication method.

### Usage notes


The master user must run the `mysql.rds_set_master_auto_position` procedure.

This procedure is supported for Aurora MySQL version 2.
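
This section doesn't include an example, so the following is a minimal sketch. Stopping replication before changing the positioning mode and restarting it afterward reflects common practice rather than a documented requirement here:

```
CALL mysql.rds_stop_replication;

-- Switch the replica to GTID-based replication
CALL mysql.rds_set_master_auto_position(1);

CALL mysql.rds_start_replication;
```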

## mysql.rds\_set\_read\_only (Aurora MySQL version 3)

Turns `read_only` mode on or off globally for the DB instance.

### Syntax


```
CALL mysql.rds_set_read_only(mode);
```

### Parameters


*mode*  
A value that indicates whether `read_only` mode is on or off globally for the DB instance:  
+ `0` – `OFF`. The default is `0`.
+ `1` – `ON`

### Usage notes


The `mysql.rds_set_read_only` stored procedure modifies only the `read_only` parameter. The `innodb_read_only` parameter can't be changed on reader DB instances.

The `read_only` parameter change doesn't persist across reboots. To make permanent changes to `read_only`, you must use the `read_only` DB cluster parameter.

This procedure is supported for Aurora MySQL version 3.06 and higher.
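
For example, the following sketch turns `read_only` mode on, verifies the change, and turns it back off:

```
CALL mysql.rds_set_read_only(1);

-- Verify the global setting; returns 1 while read_only mode is on
SELECT @@global.read_only;

CALL mysql.rds_set_read_only(0);
```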

## mysql.rds\_set\_session\_binlog\_format (Aurora MySQL version 2)

Sets the binary log format for the current session.

### Syntax


```
CALL mysql.rds_set_session_binlog_format(format);
```

### Parameters


*format*  
A value that indicates the binary log format for the current session:  
+ `STATEMENT` – The replication source writes events to the binary log based on SQL statements.
+ `ROW` – The replication source writes events to the binary log that indicate changes to individual table rows.
+ `MIXED` – Logging is generally based on SQL statements, but switches to rows under certain conditions. For more information, see [Mixed Binary Logging Format](https://dev.mysql.com/doc/refman/8.0/en/binary-log-mixed.html) in the MySQL documentation.

### Usage notes


For an Aurora MySQL DB cluster, you call this stored procedure while connected to the primary instance.

To use this stored procedure, you must have binary logging configured for the current session.

For Aurora, this procedure is supported for Aurora MySQL version 2.12 and higher MySQL 5.7–compatible versions.
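
For example, the following sketch switches the current session to row-based logging and verifies the change:

```
CALL mysql.rds_set_session_binlog_format('ROW');

-- Verify the session-level setting; applies to this session only
SELECT @@session.binlog_format;
```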

## mysql.rds\_set\_source\_auto\_position (Aurora MySQL version 3)

Sets the replication mode to be based on either binary log file positions or on global transaction identifiers (GTIDs).

### Syntax


```
CALL mysql.rds_set_source_auto_position (auto_position_mode);
```

### Parameters


*auto\_position\_mode*  
A value that indicates whether to use log file position replication or GTID-based replication:  
+  `0` – Use the replication method based on binary log file position. The default is `0`. 
+  `1` – Use the GTID-based replication method. 

### Usage notes


For an Aurora MySQL DB cluster, you call this stored procedure while connected to the primary instance. 

The administrative user must run the `mysql.rds_set_source_auto_position` procedure. 
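
A minimal sketch, mirroring the Aurora MySQL version 2 procedure described earlier; the stop and start calls reflect common practice when changing replication settings rather than a documented requirement here:

```
CALL mysql.rds_stop_replication;

-- Switch the replica to GTID-based replication
CALL mysql.rds_set_source_auto_position(1);

CALL mysql.rds_start_replication;
```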

## mysql.rds\_skip\_repl\_error


Skips and deletes a replication error on a MySQL DB read replica.

### Syntax


 

```
CALL mysql.rds_skip_repl_error;
```

### Usage notes


The master user must run the `mysql.rds_skip_repl_error` procedure on a read replica. For more information about this procedure, see [Skipping the current replication error](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.MySQL.CommonDBATasks.html#Appendix.MySQL.CommonDBATasks.SkipError).

To determine if there are errors, run the MySQL `SHOW REPLICA STATUS\G` command. If a replication error isn't critical, you can run `mysql.rds_skip_repl_error` to skip the error. If there are multiple errors, `mysql.rds_skip_repl_error` deletes the first error, then warns that others are present. You can then use `SHOW REPLICA STATUS\G` to determine the correct course of action for the next error. For information about the values returned, see [SHOW REPLICA STATUS statement](https://dev.mysql.com/doc/refman/8.0/en/show-replica-status.html) in the MySQL documentation.
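
A typical troubleshooting loop on the read replica follows this sketch:

```
-- Inspect the current replication error
SHOW REPLICA STATUS\G

-- If the error isn't critical, skip it
CALL mysql.rds_skip_repl_error;

-- Check again; repeat if the procedure warns that more errors are present
SHOW REPLICA STATUS\G
```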

For more information about addressing replication errors with Aurora MySQL, see [Diagnosing and resolving a MySQL read replication failure](CHAP_Troubleshooting.md#CHAP_Troubleshooting.MySQL.RR).

#### Replication stopped error


When you call the `mysql.rds_skip_repl_error` procedure, you might receive an error message stating that the replica is down or disabled.

This error message appears if you run the procedure on the primary instance instead of the read replica. You must run this procedure on the read replica for the procedure to work.

This error message might also appear if you run the procedure on the read replica, but replication can't be restarted successfully.

If you need to skip a large number of errors, the replication lag can increase beyond the default retention period for binary log (binlog) files. In this case, you might encounter a fatal error because of binlog files being purged before they have been replayed on the read replica. This purge causes replication to stop, and you can no longer call the `mysql.rds_skip_repl_error` command to skip replication errors.

You can mitigate this issue by increasing the number of hours that binlog files are retained on your source database instance. After you have increased the binlog retention time, you can restart replication and call the `mysql.rds_skip_repl_error` command as needed.

To set the binlog retention time, use the [mysql.rds\_set\_configuration](mysql-stored-proc-configuring.md#mysql_rds_set_configuration) procedure and specify a configuration parameter of `'binlog retention hours'` along with the number of hours to retain binlog files on the DB cluster. The following example sets the retention period for binlog files to 48 hours.

```
CALL mysql.rds_set_configuration('binlog retention hours', 48);
```
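
To confirm the new retention setting, you can call the `mysql.rds_show_configuration` procedure, covered later under the binary log configuration section:

```
CALL mysql.rds_show_configuration;
```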

## mysql.rds\_start\_replication


Initiates replication from an Aurora MySQL DB cluster.

**Note**  
You can use the [mysql.rds\_start\_replication\_until (Aurora MySQL version 3)](#mysql_rds_start_replication_until) or [mysql.rds\_start\_replication\_until\_gtid (Aurora MySQL version 3)](mysql-stored-proc-gtid.md#mysql_rds_start_replication_until_gtid) stored procedure to initiate replication from an Aurora MySQL DB instance and stop replication at the specified binary log file location.

### Syntax


 

```
CALL mysql.rds_start_replication;
```

### Usage notes


The master user must run the `mysql.rds_start_replication` procedure.

To import data from an instance of MySQL external to Amazon RDS, call `mysql.rds_start_replication` on the read replica to start the replication process after you call [mysql.rds\_set\_external\_master (Aurora MySQL version 2)](#mysql_rds_set_external_master) or [mysql.rds\_set\_external\_source (Aurora MySQL version 3)](#mysql_rds_set_external_source) to build the replication configuration. For more information, see [Replication between Aurora and MySQL or between Aurora and another Aurora DB cluster (binary log replication)](AuroraMySQL.Replication.MySQL.md).

To export data to an instance of MySQL external to Amazon RDS, call `mysql.rds_start_replication` and `mysql.rds_stop_replication` on the read replica to control some replication actions, such as purging binary logs. For more information, see [Replication between Aurora and MySQL or between Aurora and another Aurora DB cluster (binary log replication)](AuroraMySQL.Replication.MySQL.md).

You can also call `mysql.rds_start_replication` on the read replica to restart any replication process that you previously stopped by calling `mysql.rds_stop_replication`. For more information, see [Replication stopped error](CHAP_Troubleshooting.md#CHAP_Troubleshooting.MySQL.ReplicationStopped).
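
For example, run on the read replica (the procedure takes no parameters); the status check afterward uses the Aurora MySQL version 3 syntax:

```
CALL mysql.rds_start_replication;

-- Confirm that the replication threads are running
SHOW REPLICA STATUS\G
```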

## mysql.rds\_start\_replication\_until (Aurora MySQL version 3)

Initiates replication from an Aurora MySQL DB cluster and stops replication at the specified binary log file location.

### Syntax


 

```
CALL mysql.rds_start_replication_until (
replication_log_file
  , replication_stop_point
);
```

### Parameters


 *replication\_log\_file*   
The name of the binary log on the source database instance that contains the replication information.

 *replication\_stop\_point*   
The location in the `replication_log_file` binary log at which replication will stop.

### Usage notes


The master user must run the `mysql.rds_start_replication_until` procedure.

This procedure is supported for Aurora MySQL version 3.04 and higher.

The `mysql.rds_start_replication_until` stored procedure isn't supported for managed replication, which includes the following:
+ [Replicating Amazon Aurora MySQL DB clusters across AWS Regions](AuroraMySQL.Replication.CrossRegion.md)
+ [Migrating data from an RDS for MySQL DB instance to an Amazon Aurora MySQL DB cluster by using an Aurora read replica](AuroraMySQL.Migrating.RDSMySQL.Replica.md)

The file name specified for the `replication_log_file` parameter must match the source database instance binlog file name.

When the `replication_stop_point` parameter specifies a stop location that is in the past, replication is stopped immediately.

### Examples


The following example initiates replication and replicates changes until it reaches location `120` in the `mysql-bin-changelog.000777` binary log file.

```
call mysql.rds_start_replication_until(
  'mysql-bin-changelog.000777',
  120);
```

## mysql.rds\_stop\_replication


Stops replication from a MySQL DB instance.

### Syntax


 

```
CALL mysql.rds_stop_replication;
```

### Usage notes


The master user must run the `mysql.rds_stop_replication` procedure. 

If you are configuring replication to import data from an instance of MySQL running external to Amazon RDS, you call `mysql.rds_stop_replication` on the read replica to stop the replication process after the import has completed. For more information, see [Replication between Aurora and MySQL or between Aurora and another Aurora DB cluster (binary log replication)](AuroraMySQL.Replication.MySQL.md).

If you are configuring replication to export data to an instance of MySQL external to Amazon RDS, you call `mysql.rds_start_replication` and `mysql.rds_stop_replication` on the read replica to control some replication actions, such as purging binary logs. For more information, see [Replication between Aurora and MySQL or between Aurora and another Aurora DB cluster (binary log replication)](AuroraMySQL.Replication.MySQL.md).

The `mysql.rds_stop_replication` stored procedure isn't supported for managed replication, which includes the following:
+ [Replicating Amazon Aurora MySQL DB clusters across AWS Regions](AuroraMySQL.Replication.CrossRegion.md)
+ [Migrating data from an RDS for MySQL DB instance to an Amazon Aurora MySQL DB cluster by using an Aurora read replica](AuroraMySQL.Migrating.RDSMySQL.Replica.md)
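
For example, after an import from an external MySQL instance completes, run the following on the read replica (the procedure takes no parameters):

```
CALL mysql.rds_stop_replication;
```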

# Ending a session or query


The following stored procedures end a session or query.

**Topics**
+ [mysql.rds\_kill](#mysql_rds_kill)
+ [mysql.rds\_kill\_query](#mysql_rds_kill_query)

## mysql.rds\_kill

Ends a connection to the MySQL server.

### Syntax

```
CALL mysql.rds_kill(processID);
```

### Parameters

 *processID*   
The identity of the connection thread to be ended.

### Usage notes

Each connection to the MySQL server runs in a separate thread. To end a connection, use the `mysql.rds_kill` procedure and pass in the thread ID of that connection. To obtain the thread ID, use the MySQL [SHOW PROCESSLIST](https://dev.mysql.com/doc/refman/8.0/en/show-processlist.html) command.

### Examples

The following example ends a connection with a thread ID of 4243:

```
CALL mysql.rds_kill(4243);
```

## mysql.rds\_kill\_query

Ends a query running against the MySQL server.

### Syntax

```
CALL mysql.rds_kill_query(processID);
```

### Parameters

 *processID*   
The identity of the process or thread that is running the query to be ended.

### Usage notes

To stop a query running against the MySQL server, use the `mysql.rds_kill_query` procedure and pass in the connection ID of the thread that is running the query. The procedure then terminates the connection.

To obtain the ID, query the MySQL [INFORMATION\_SCHEMA PROCESSLIST table](https://dev.mysql.com/doc/refman/8.0/en/information-schema-processlist-table.html) or use the MySQL [SHOW PROCESSLIST](https://dev.mysql.com/doc/refman/8.0/en/show-processlist.html) command. The value in the `ID` column from `SHOW PROCESSLIST` or `SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST` is the *processID*. 

### Examples

The following example stops a query with a query thread ID of 230040:

```
CALL mysql.rds_kill_query(230040);
```

# Replicating transactions using GTIDs


The following stored procedures control how transactions are replicated using global transaction identifiers (GTIDs) with Aurora MySQL. To learn how to use replication based on GTIDs with Aurora MySQL, see [Using GTID-based replication](mysql-replication-gtid.md).

**Topics**
+ [mysql.rds\_assign\_gtids\_to\_anonymous\_transactions (Aurora MySQL version 3)](#mysql_assign_gtids_to_anonymous_transactions)
+ [mysql.rds\_gtid\_purged (Aurora MySQL version 3)](#mysql_rds_gtid_purged)
+ [mysql.rds\_skip\_transaction\_with\_gtid (Aurora MySQL version 2 and 3)](#mysql_rds_skip_transaction_with_gtid)
+ [mysql.rds\_start\_replication\_until\_gtid (Aurora MySQL version 3)](#mysql_rds_start_replication_until_gtid)

## mysql.rds\_assign\_gtids\_to\_anonymous\_transactions (Aurora MySQL version 3)

Configures the `ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS` option of the `CHANGE REPLICATION SOURCE TO` statement. It makes the replication channel assign a GTID to replicated transactions that don't have one. That way, you can perform binary log replication from a source that doesn't use GTID-based replication to a replica that does. For more information, see [CHANGE REPLICATION SOURCE TO Statement](https://dev.mysql.com/doc/refman/8.0/en/change-replication-source-to.html) and [Replication From a Source Without GTIDs to a Replica With GTIDs](https://dev.mysql.com/doc/refman/8.0/en/replication-gtids-assign-anon.html) in the *MySQL Reference Manual*.

### Syntax


```
CALL mysql.rds_assign_gtids_to_anonymous_transactions(gtid_option);
```

### Parameters


 *gtid\_option*  
String value. The allowed values are `OFF`, `LOCAL`, or a specified UUID.

### Usage notes


This procedure has the same effect as issuing the statement `CHANGE REPLICATION SOURCE TO ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS = gtid_option` in community MySQL.

 GTID-based replication must be turned on for *gtid\_option* to be set to `LOCAL` or a specific UUID. 

The default is `OFF`, meaning that the feature isn't used.

`LOCAL` assigns a GTID including the replica's own UUID (the `server_uuid` setting).

Passing a parameter that is a UUID assigns a GTID that includes the specified UUID, such as the `server_uuid` setting for the replication source server.

### Examples


To turn off this feature:

```
mysql> call mysql.rds_assign_gtids_to_anonymous_transactions('OFF');
+-------------------------------------------------------------+
| Message                                                     |
+-------------------------------------------------------------+
| ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS has been set to: OFF |
+-------------------------------------------------------------+
1 row in set (0.07 sec)
```

To use the replica's own UUID:

```
mysql> call mysql.rds_assign_gtids_to_anonymous_transactions('LOCAL');
+---------------------------------------------------------------+
| Message                                                       |
+---------------------------------------------------------------+
| ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS has been set to: LOCAL |
+---------------------------------------------------------------+
1 row in set (0.07 sec)
```

To use a specified UUID:

```
mysql> call mysql.rds_assign_gtids_to_anonymous_transactions('317a4760-f3dd-3b74-8e45-0615ed29de0e');
+----------------------------------------------------------------------------------------------+
| Message                                                                                      |
+----------------------------------------------------------------------------------------------+
| ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS has been set to: 317a4760-f3dd-3b74-8e45-0615ed29de0e |
+----------------------------------------------------------------------------------------------+
1 row in set (0.07 sec)
```

## mysql.rds\_gtid\_purged (Aurora MySQL version 3)



Sets the global value of the system variable `gtid_purged` to a given global transaction identifier (GTID) set. The `gtid_purged` system variable is a GTID set that consists of the GTIDs of all transactions that have been committed on the server, but don't exist in any binary log file on the server.

To allow compatibility with MySQL 8.0, there are two ways to set the value of `gtid_purged`:
+ Replace the value of `gtid_purged` with your specified GTID set.
+ Append your specified GTID set to the GTID set that `gtid_purged` already contains.

### Syntax

To replace the value of `gtid_purged` with your specified GTID set:

```
CALL mysql.rds_gtid_purged (gtid_set);
```

To append your specified GTID set to the value of `gtid_purged`:

```
CALL mysql.rds_gtid_purged (+gtid_set);
```

### Parameters


*gtid\_set*  
The value of *gtid\_set* must be a superset of the current value of `gtid_purged`, and can't intersect with `gtid_subtract(gtid_executed,gtid_purged)`. That is, the new GTID set must include any GTIDs that were already in `gtid_purged`, and can't include any GTIDs in `gtid_executed` that haven't yet been purged. The *gtid\_set* parameter also can't include any GTIDs that are in the global `gtid_owned` set, that is, the GTIDs for transactions that are currently being processed on the server.
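
Before calling the procedure, you can check which GTIDs have been executed but not yet purged; per the constraint above, your new GTID set must not intersect this result. A sketch using the built-in `GTID_SUBTRACT` function:

```
SELECT GTID_SUBTRACT(@@GLOBAL.gtid_executed, @@GLOBAL.gtid_purged)
       AS not_yet_purged;
```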

### Usage notes


The master user must run the `mysql.rds_gtid_purged` procedure.

This procedure is supported for Aurora MySQL version 3.04 and higher.

### Examples

The following example assigns the GTID `3E11FA47-71CA-11E1-9E33-C80AA9429562:23` to the `gtid_purged` global variable.

```
CALL mysql.rds_gtid_purged('3E11FA47-71CA-11E1-9E33-C80AA9429562:23');
```
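
To append to `gtid_purged` instead of replacing it, prefix the GTID set with `+` as shown in the syntax above. The GTID set here is a placeholder:

```
CALL mysql.rds_gtid_purged('+3E11FA47-71CA-11E1-9E33-C80AA9429562:24');
```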

## mysql.rds\_skip\_transaction\_with\_gtid (Aurora MySQL version 2 and 3)

Skips replication of a transaction with the specified global transaction identifier (GTID) on an Aurora primary instance.

You can use this procedure for disaster recovery when a specific GTID transaction is known to cause a problem. Use this stored procedure to skip the problematic transaction. Examples of problematic transactions include transactions that disable replication, delete important data, or cause the DB instance to become unavailable.

### Syntax

 

```
CALL mysql.rds_skip_transaction_with_gtid (
gtid_to_skip
);
```

### Parameters

 *gtid\_to\_skip*   
The GTID of the replication transaction to skip.

### Usage notes

The master user must run the `mysql.rds_skip_transaction_with_gtid` procedure.

This procedure is supported for Aurora MySQL version 2 and 3.

### Examples

The following example skips replication of the transaction with the GTID `3E11FA47-71CA-11E1-9E33-C80AA9429562:23`.

```
CALL mysql.rds_skip_transaction_with_gtid('3E11FA47-71CA-11E1-9E33-C80AA9429562:23');
```

## mysql.rds\_start\_replication\_until\_gtid (Aurora MySQL version 3)

Initiates replication from an Aurora MySQL DB cluster and stops replication immediately after the specified global transaction identifier (GTID).

### Syntax

 

```
CALL mysql.rds_start_replication_until_gtid(gtid);
```

### Parameters

 *gtid*   
The GTID after which replication is to stop.

### Usage notes

The master user must run the `mysql.rds_start_replication_until_gtid` procedure.

This procedure is supported for Aurora MySQL version 3.04 and higher.

The `mysql.rds_start_replication_until_gtid` stored procedure isn't supported for managed replication, which includes the following:
+ [Replicating Amazon Aurora MySQL DB clusters across AWS Regions](AuroraMySQL.Replication.CrossRegion.md)
+ [Migrating data from an RDS for MySQL DB instance to an Amazon Aurora MySQL DB cluster by using an Aurora read replica](AuroraMySQL.Migrating.RDSMySQL.Replica.md)

When the `gtid` parameter specifies a transaction that has already been run by the replica, replication is stopped immediately.

### Examples

The following example initiates replication and replicates changes until it reaches GTID `3E11FA47-71CA-11E1-9E33-C80AA9429562:23`.

```
call mysql.rds_start_replication_until_gtid('3E11FA47-71CA-11E1-9E33-C80AA9429562:23');
```

# Rotating the query logs


The following stored procedures rotate MySQL logs to backup tables. For more information, see [Aurora MySQL database log files](USER_LogAccess.Concepts.MySQL.md).

**Topics**
+ [mysql.rds_rotate_general_log](#mysql_rds_rotate_general_log)
+ [mysql.rds_rotate_slow_log](#mysql_rds_rotate_slow_log)

## mysql.rds_rotate_general_log

Rotates the `mysql.general_log` table to a backup table.

### Syntax

```
CALL mysql.rds_rotate_general_log;
```

### Usage notes

You can rotate the `mysql.general_log` table to a backup table by calling the `mysql.rds_rotate_general_log` procedure. When log tables are rotated, the current log table is copied to a backup log table and the entries in the current log table are removed. If a backup log table already exists, then it is deleted before the current log table is copied to the backup. You can query the backup log table if needed. The backup log table for the `mysql.general_log` table is named `mysql.general_log_backup`.

You can run this procedure only when the `log_output` parameter is set to `TABLE`.
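
As a sketch, a typical rotation session might look like the following. The column names come from the standard MySQL `general_log` table layout, and the backup table name is the one described above:

```
-- Confirm that logging to tables is enabled (must be TABLE)
SELECT @@log_output;

-- Rotate mysql.general_log into mysql.general_log_backup
CALL mysql.rds_rotate_general_log;

-- Inspect the most recent rotated entries
SELECT event_time, user_host, argument
FROM mysql.general_log_backup
ORDER BY event_time DESC
LIMIT 10;
```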

## mysql.rds_rotate_slow_log

Rotates the `mysql.slow_log` table to a backup table.

### Syntax

```
CALL mysql.rds_rotate_slow_log;
```

### Usage notes

You can rotate the `mysql.slow_log` table to a backup table by calling the `mysql.rds_rotate_slow_log` procedure. When log tables are rotated, the current log table is copied to a backup log table and the entries in the current log table are removed. If a backup log table already exists, then it is deleted before the current log table is copied to the backup. 

You can query the backup log table if needed. The backup log table for the `mysql.slow_log` table is named `mysql.slow_log_backup`. 

# Setting and showing binary log configuration


The following stored procedures set and show configuration parameters, such as for binary log file retention.

**Topics**
+ [mysql.rds_set_configuration](#mysql_rds_set_configuration)
+ [mysql.rds_show_configuration](#mysql_rds_show_configuration)

## mysql.rds_set_configuration


Specifies the number of hours to retain binary logs or the number of seconds to delay replication.

### Syntax

```
CALL mysql.rds_set_configuration(name,value);
```

### Parameters

 *name*   
The name of the configuration parameter to set.

 *value*   
The value of the configuration parameter.

### Usage notes

The `mysql.rds_set_configuration` procedure supports the following configuration parameters:
+ [binlog retention hours](#mysql_rds_set_configuration-usage-notes.binlog-retention-hours)

The configuration parameters are stored permanently and survive any DB instance reboot or failover.

#### binlog retention hours


The `binlog retention hours` parameter is used to specify the number of hours to retain binary log files. Amazon Aurora normally purges a binary log as soon as possible, but the binary log might still be required for replication with a MySQL database external to Aurora.

The default value of `binlog retention hours` is `NULL`. For Aurora MySQL, `NULL` means binary logs are cleaned up lazily. Aurora MySQL binary logs might remain in the system for a certain period, which is usually not longer than a day.

To specify the number of hours to retain binary logs on a DB cluster, use the `mysql.rds_set_configuration` stored procedure and specify a period with enough time for replication to occur, as shown in the following example.

`call mysql.rds_set_configuration('binlog retention hours', 24);`

**Note**  
You can't use the value `0` for `binlog retention hours`.

For Aurora MySQL version 2.11.0 and higher and version 3 DB clusters, the maximum `binlog retention hours` value is 2160 (90 days).

After you set the retention period, monitor storage usage for the DB instance to make sure that the retained binary logs don't take up too much storage.
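
For example, after setting the retention period you might verify it and then check how much space the retained binary logs use. `SHOW BINARY LOGS` is standard MySQL syntax and reports the size in bytes of each retained log file:

```
CALL mysql.rds_set_configuration('binlog retention hours', 24);

-- Verify the setting
CALL mysql.rds_show_configuration;

-- List retained binary log files and their sizes in bytes
SHOW BINARY LOGS;
```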

## mysql.rds_show_configuration


Shows the number of hours that binary logs are retained.

### Syntax

```
CALL mysql.rds_show_configuration;
```

### Usage notes

To verify the number of hours that Amazon RDS retains binary logs, use the `mysql.rds_show_configuration` stored procedure.

### Examples

The following example displays the retention period:

```
call mysql.rds_show_configuration;
name                     value  description
binlog retention hours   24     binlog retention hours specifies the duration in hours before binary logs are automatically deleted.
```

# Aurora MySQL–specific information_schema tables

Aurora MySQL has certain `information_schema` tables that are specific to Aurora.

## information_schema.aurora_global_db_instance_status


The `information_schema.aurora_global_db_instance_status` table contains information about the status of all DB instances in a global database's primary and secondary DB clusters. The following table shows the columns that you can use. The remaining columns are for Aurora internal use only.

**Note**  
This information schema table is only available with Aurora MySQL version 3.04.0 and higher global databases.


| Column | Data type | Description | 
| --- | --- | --- | 
| SERVER_ID | varchar(100) | The identifier of the DB instance. | 
| SESSION_ID | varchar(100) | A unique identifier for the current session. A value of MASTER_SESSION_ID identifies the writer (primary) DB instance. | 
| AWS_REGION | varchar(100) | The AWS Region in which this global database instance runs. For a list of Regions, see [Region availability](Concepts.RegionsAndAvailabilityZones.md#Aurora.Overview.Availability). | 
| DURABLE_LSN | bigint unsigned | The log sequence number (LSN) made durable in storage. A log sequence number (LSN) is a unique sequential number that identifies a record in the database transaction log. LSNs are ordered such that a larger LSN represents a later transaction. | 
| HIGHEST_LSN_RCVD | bigint unsigned | The highest LSN received by the DB instance from the writer DB instance. | 
| OLDEST_READ_VIEW_TRX_ID | bigint unsigned | The ID of the oldest transaction that the writer DB instance can purge to. | 
| OLDEST_READ_VIEW_LSN | bigint unsigned | The oldest LSN used by the DB instance to read from storage. | 
| VISIBILITY_LAG_IN_MSEC | float(10,0) unsigned | For readers in the primary DB cluster, how far this DB instance is lagging behind the writer DB instance in milliseconds. For readers in a secondary DB cluster, how far this DB instance is lagging behind the secondary volume in milliseconds. | 
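
As a sketch using only the documented columns, the following query lists each DB instance in the global database along with its Region, role, and visibility lag:

```
SELECT server_id,
       aws_region,
       IF(session_id = 'MASTER_SESSION_ID', 'writer', 'reader') AS role,
       visibility_lag_in_msec
FROM information_schema.aurora_global_db_instance_status
ORDER BY aws_region, server_id;
```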

## information_schema.aurora_global_db_status


The `information_schema.aurora_global_db_status` table contains information about various aspects of Aurora global database lag, specifically the lag of the underlying Aurora storage (so-called durability lag) and the recovery point objective (RPO) lag. The following table shows the columns that you can use. The remaining columns are for Aurora internal use only.

**Note**  
This information schema table is only available with Aurora MySQL version 3.04.0 and higher global databases.


| Column | Data type | Description | 
| --- | --- | --- | 
| AWS_REGION | varchar(100) | The AWS Region in which this global database instance runs. For a list of Regions, see [Region availability](Concepts.RegionsAndAvailabilityZones.md#Aurora.Overview.Availability). | 
| HIGHEST_LSN_WRITTEN | bigint unsigned | The highest log sequence number (LSN) that currently exists on this DB cluster. A log sequence number (LSN) is a unique sequential number that identifies a record in the database transaction log. LSNs are ordered such that a larger LSN represents a later transaction. | 
| DURABILITY_LAG_IN_MILLISECONDS | float(10,0) unsigned | The difference in the timestamp values between the HIGHEST_LSN_WRITTEN on a secondary DB cluster and the HIGHEST_LSN_WRITTEN on the primary DB cluster. This value is always 0 on the primary DB cluster of the Aurora global database. | 
| RPO_LAG_IN_MILLISECONDS | float(10,0) unsigned | The recovery point objective (RPO) lag. The RPO lag is the time it takes for the most recent user transaction COMMIT to be stored on a secondary DB cluster after it's been stored on the primary DB cluster of the Aurora global database. This value is always 0 on the primary DB cluster of the Aurora global database. In simple terms, this metric calculates the recovery point objective for each Aurora MySQL DB cluster in the Aurora global database, that is, how much data might be lost if there were an outage. As with lag, RPO is measured in time. | 
| LAST_LAG_CALCULATION_TIMESTAMP | datetime | The timestamp that specifies when values were last calculated for DURABILITY_LAG_IN_MILLISECONDS and RPO_LAG_IN_MILLISECONDS. A time value such as 1970-01-01 00:00:00+00 means this is the primary DB cluster. | 
| OLDEST_READ_VIEW_TRX_ID | bigint unsigned | The ID of the oldest transaction that the writer DB instance can purge to. | 
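
For example, to check durability and RPO lag for each Region of the global database, you might run a query such as the following, restricted to the documented columns:

```
SELECT aws_region,
       durability_lag_in_milliseconds,
       rpo_lag_in_milliseconds,
       last_lag_calculation_timestamp
FROM information_schema.aurora_global_db_status;
```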

## information_schema.replica_host_status


The `information_schema.replica_host_status` table contains replication information. The columns that you can use are shown in the following table. The remaining columns are for Aurora internal use only.


| Column | Data type | Description | 
| --- | --- | --- | 
| CPU | double | The CPU percentage usage of the replica host. | 
| IS_CURRENT | tinyint | Whether the replica is current. | 
| LAST_UPDATE_TIMESTAMP | datetime(6) | The time the last update occurred. Used to determine whether a record is stale. | 
| REPLICA_LAG_IN_MILLISECONDS | double | The replica lag in milliseconds. | 
| SERVER_ID | varchar(100) | The ID of the database server. | 
| SESSION_ID | varchar(100) | The ID of the database session. Used to determine whether a DB instance is a writer or reader instance. | 

**Note**  
When a replica instance falls behind, the information queried from its `information_schema.replica_host_status` table might be outdated. In this situation, we recommend that you query from the writer instance instead.  
While the `mysql.ro_replica_status` table has similar information, we don't recommend that you use it.
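
As a sketch, run on the writer instance, the following query reports lag per replica while filtering out stale rows. The three-minute staleness threshold is an arbitrary choice for illustration:

```
SELECT server_id,
       session_id,
       replica_lag_in_milliseconds
FROM information_schema.replica_host_status
WHERE last_update_timestamp > NOW() - INTERVAL 3 MINUTE;
```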

## information_schema.aurora_forwarding_processlist


The `information_schema.aurora_forwarding_processlist` table contains information about processes involved in write forwarding.

The contents of this table are visible only on the writer DB instance for a DB cluster with global or in-cluster write forwarding turned on. An empty result set is returned on reader DB instances.


| Field | Data type | Description | 
| --- | --- | --- | 
| ID | bigint | The identifier of the connection on the writer DB instance. This identifier is the same value displayed in the Id column of the SHOW PROCESSLIST statement and returned by the CONNECTION_ID() function within the thread. | 
| USER | varchar(32) | The MySQL user that issued the statement. | 
| HOST | varchar(255) | The MySQL client that issued the statement. For forwarded statements, this field shows the application client host address that established the connection on the forwarding reader DB instance. | 
| DB | varchar(64) | The default database for the thread. | 
| COMMAND | varchar(16) | The type of command the thread is executing on behalf of the client, or Sleep if the session is idle. For descriptions of thread commands, see [Thread Command Values](https://dev.mysql.com/doc/refman/8.0/en/thread-commands.html) in the MySQL documentation. | 
| TIME | int | The time in seconds that the thread has been in its current state. | 
| STATE | varchar(64) | An action, event, or state that indicates what the thread is doing. For descriptions of state values, see [General Thread States](https://dev.mysql.com/doc/refman/8.0/en/general-thread-states.html) in the MySQL documentation. | 
| INFO | longtext | The statement that the thread is executing, or NULL if it isn't executing a statement. The statement might be the one sent to the server, or an innermost statement if the statement executes other statements. | 
| IS_FORWARDED | bigint | Indicates whether the thread is forwarded from a reader DB instance. | 
| REPLICA_SESSION_ID | bigint | The connection identifier on the Aurora Replica. This identifier is the same value displayed in the Id column of the SHOW PROCESSLIST statement on the forwarding Aurora reader DB instance. | 
| REPLICA_INSTANCE_IDENTIFIER | varchar(64) | The DB instance identifier of the forwarding thread. | 
| REPLICA_CLUSTER_NAME | varchar(64) | The DB cluster identifier of the forwarding thread. For in-cluster write forwarding, this identifier is the same DB cluster as the writer DB instance. | 
| REPLICA_REGION | varchar(64) | The AWS Region from which the forwarding thread originates. For in-cluster write forwarding, this Region is the same AWS Region as the writer DB instance. | 
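
For example, on the writer DB instance you might list only the forwarded threads, together with the reader instance and Region each one originates from. This is a sketch that uses the documented columns:

```
SELECT id,
       user,
       replica_instance_identifier,
       replica_region,
       command,
       time,
       state
FROM information_schema.aurora_forwarding_processlist
WHERE is_forwarded = 1;
```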

# Database engine updates for Amazon Aurora MySQL<a name="mysql_relnotes"></a>

Amazon Aurora releases updates regularly. Updates are applied to Aurora DB clusters during system maintenance windows. When an update is applied depends on the AWS Region and the maintenance window setting for the DB cluster, as well as the type of update.

Amazon Aurora releases are made available to all AWS Regions over the course of multiple days. Some Regions might temporarily show an engine version that isn't available in a different Region yet.

Updates are applied to all instances in a DB cluster at the same time. An update requires a database restart on all instances in a DB cluster, so you experience 20 to 30 seconds of downtime, after which you can resume using your DB cluster or clusters. You can view or change your maintenance window settings from the [AWS Management Console](https://console.aws.amazon.com/).

For details about the Aurora MySQL versions that are supported by Amazon Aurora, see the [Release Notes for Aurora MySQL](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraMySQLReleaseNotes/Welcome.html).

 Following, you can learn how to choose the right Aurora MySQL version for your cluster, how to specify the version when you create or upgrade a cluster, and the procedures to upgrade a cluster from one version to another with minimal interruption. 

**Topics**
+ [Checking Aurora MySQL version numbers](AuroraMySQL.Updates.Versions.md)
+ [Long-term support (LTS) and beta releases for Amazon Aurora MySQL](AuroraMySQL.Update.SpecialVersions.md)
+ [Preparing for Amazon Aurora MySQL-Compatible Edition version 2 end of standard support](Aurora.MySQL57.EOL.md)
+ [Preparing for Amazon Aurora MySQL-Compatible Edition version 1 end of life](Aurora.MySQL56.EOL.md)
+ [Upgrading Amazon Aurora MySQL DB clusters](AuroraMySQL.Updates.Upgrading.md)
+ [Database engine updates and fixes for Amazon Aurora MySQL](AuroraMySQL.Updates.RN.md)

# Checking Aurora MySQL version numbers

 Although Aurora MySQL-Compatible Edition is compatible with the MySQL database engines, Aurora MySQL includes features and bug fixes that are specific to particular Aurora MySQL versions. Application developers can check the Aurora MySQL version in their applications by using SQL. Database administrators can check and specify Aurora MySQL versions when creating or upgrading Aurora MySQL DB clusters and DB instances. 

**Topics**
+ [Checking or specifying Aurora MySQL engine versions through AWS](#AuroraMySQL.Updates.EngineVersions)
+ [Checking Aurora MySQL versions using SQL](#AuroraMySQL.Updates.DBVersions)

## Checking or specifying Aurora MySQL engine versions through AWS

 When you perform administrative tasks using the AWS Management Console, AWS CLI, or RDS API, you specify the Aurora MySQL version in a descriptive alphanumeric format. 

 Starting with Aurora MySQL version 2, Aurora engine versions have the following syntax. 

```
mysql-major-version.mysql_aurora.aurora-mysql-version
```

The `mysql-major-version` portion is `5.7` or `8.0`. This value represents the version of the client protocol and general level of MySQL feature support for the corresponding Aurora MySQL version.

 The `aurora-mysql-version` is a dotted value with three parts: the Aurora MySQL major version, the Aurora MySQL minor version, and the patch level. The major version is `2` or `3`. Those values represent Aurora MySQL compatible with MySQL 5.7 or 8.0, respectively. The minor version represents the feature release within the 2.x or 3.x series. The patch level begins at `0` for each minor version, and represents the set of subsequent bug fixes that apply to the minor version. Occasionally, a new feature is incorporated into a minor version but not made visible immediately. In these cases, the feature undergoes fine-tuning and is made public in a later patch level. 

All 2.x Aurora MySQL engine versions are wire-compatible with community MySQL 5.7.12 or higher. All 3.x Aurora MySQL engine versions are wire-compatible with MySQL 8.0.23 or higher. You can refer to the release notes of a specific 3.x version to find its corresponding MySQL-compatible version.

For example, the engine versions for Aurora MySQL 3.04.0 and 2.11.2 are the following.

```
8.0.mysql_aurora.3.04.0
5.7.mysql_aurora.2.11.2
```
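
As an illustration of this format (plain string handling, not an AWS API), the following sketch splits a standard three-part engine version string into its MySQL and Aurora components. The function name is made up for this example:

```python
def parse_aurora_engine_version(engine_version: str) -> dict:
    """Split an Aurora MySQL engine version string of the form
    <mysql-major-version>.mysql_aurora.<aurora-mysql-version>
    into its parts (Aurora MySQL version 2 and higher)."""
    mysql_part, _, aurora_part = engine_version.partition(".mysql_aurora.")
    if not aurora_part:
        raise ValueError(f"unexpected engine version format: {engine_version}")
    major, minor, patch = aurora_part.split(".")
    return {
        "mysql_major_version": mysql_part,  # e.g. "5.7" or "8.0"
        "aurora_major": int(major),         # 2 or 3
        "aurora_minor": minor,              # feature release, e.g. "04"
        "aurora_patch": int(patch),         # bug-fix level within the minor version
    }

print(parse_aurora_engine_version("8.0.mysql_aurora.3.04.0"))
# {'mysql_major_version': '8.0', 'aurora_major': 3, 'aurora_minor': '04', 'aurora_patch': 0}
```

Note that beta releases add a fourth digit (for example, `3.05.0.1`), which this three-part sketch doesn't handle.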

**Note**  
There isn't a one-to-one correspondence between community MySQL versions and the Aurora MySQL 2.x versions. For Aurora MySQL version 3, there is a more direct mapping. To check which bug fixes and new features are in a particular Aurora MySQL release, see [Database engine updates for Amazon Aurora MySQL version 3](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraMySQLReleaseNotes/AuroraMySQL.Updates.30Updates.html) and [Database engine updates for Amazon Aurora MySQL version 2](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraMySQLReleaseNotes/AuroraMySQL.Updates.20Updates.html) in the *Release Notes for Aurora MySQL*. For a chronological list of new features and releases, see [Document history](WhatsNew.md). To check the minimum version required for a security-related fix, see [Security vulnerabilities fixed in Aurora MySQL](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraMySQLReleaseNotes/AuroraMySQL.CVE_list.html) in the *Release Notes for Aurora MySQL*.

You specify the Aurora MySQL engine version in some AWS CLI commands and RDS API operations. For example, you specify the `--engine-version` option when you run the AWS CLI commands [create-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-cluster.html) and [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html). You specify the `EngineVersion` parameter when you run the RDS API operations [CreateDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBCluster.html) and [ModifyDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html).
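
For example, you might first list the available engine versions and then pass one to `create-db-cluster`. The cluster identifier below is a placeholder:

```
# List available Aurora MySQL engine versions
aws rds describe-db-engine-versions \
    --engine aurora-mysql \
    --query 'DBEngineVersions[].EngineVersion' \
    --output text

# Create a cluster pinned to a specific engine version
aws rds create-db-cluster \
    --db-cluster-identifier my-aurora-cluster \
    --engine aurora-mysql \
    --engine-version 8.0.mysql_aurora.3.04.0 \
    --master-username admin \
    --manage-master-user-password
```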

In Aurora MySQL version 2 and higher, the engine version in the AWS Management Console also includes the Aurora version. Upgrading the cluster changes the displayed value. This change helps you to specify and check the precise Aurora MySQL versions, without the need to connect to the cluster or run any SQL commands.

**Tip**  
For Aurora clusters managed through CloudFormation, this change in the `EngineVersion` setting can trigger actions by CloudFormation. For information about how CloudFormation treats changes to the `EngineVersion` setting, see [the CloudFormation documentation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-rds-dbcluster.html).

## Checking Aurora MySQL versions using SQL


 The Aurora version numbers that you can retrieve in your application using SQL queries use the format `<major version>.<minor version>.<patch version>`. You can get this version number for any DB instance in your Aurora MySQL cluster by querying the `AURORA_VERSION` system variable. To get this version number, use one of the following queries. 

```
select aurora_version();
select @@aurora_version;
```

 Those queries produce output similar to the following. 

```
mysql> select aurora_version(), @@aurora_version;
+------------------+------------------+
| aurora_version() | @@aurora_version |
+------------------+------------------+
| 3.05.2           | 3.05.2           |
+------------------+------------------+
```

 The version numbers that the console, CLI, and RDS API return by using the techniques described in [Checking or specifying Aurora MySQL engine versions through AWS](#AuroraMySQL.Updates.EngineVersions) are typically more descriptive.

# Long-term support (LTS) and beta releases for Amazon Aurora MySQL

Aurora MySQL provides long-term support (LTS) and beta releases for some Aurora MySQL engine versions. 

**Topics**
+ [Aurora MySQL long-term support (LTS) releases](#AuroraMySQL.Updates.LTS)
+ [Aurora MySQL beta releases](#AuroraMySQL.Updates.Beta)

## Aurora MySQL long-term support (LTS) releases


Each new Aurora MySQL version remains available for a certain amount of time for you to use when you create or upgrade a DB cluster. After this period, you must upgrade any clusters that use that version. You can manually upgrade your cluster before the support period ends, or Aurora can automatically upgrade it for you when its Aurora MySQL version is no longer supported.

Aurora designates certain Aurora MySQL versions as long-term support (LTS) releases. DB clusters that use LTS releases can stay on the same version longer and undergo fewer upgrade cycles than clusters that use non-LTS releases. Database clusters that use LTS releases can stay on the same minor version for at least three years, or until end of standard support for the major version, whichever comes first. When a DB cluster that's on an LTS release is required to upgrade, Aurora upgrades it to the next LTS release. That way, the cluster doesn't need to be upgraded again for a long time.

During the lifetime of an Aurora MySQL LTS release, new patch levels introduce fixes to important issues. The patch levels don't include any new features. You can choose whether to apply such patches to DB clusters running the LTS release. We recommend customers running LTS releases to upgrade to the latest patch release of the LTS minor version at least once a year to take advantage of high severity security and operational fixes. For certain critical fixes, Amazon might perform a managed upgrade to a patch level within the same LTS release. Such managed upgrades are performed automatically within the cluster maintenance window. All Aurora MySQL releases (both LTS and non-LTS releases) undergo extensive stability and operational testing. Select minor versions are designated as LTS releases to enable customers to stay on those minor versions longer without upgrading to a newer minor version. 

We recommend that you upgrade to the latest release, instead of using the LTS release, for most of your Aurora MySQL clusters. Doing so takes advantage of Aurora as a managed service and gives you access to the latest features and bug fixes. The LTS releases are intended for clusters with the following characteristics:
+ You can't afford downtime on your Aurora MySQL application for upgrades outside of rare occurrences for critical patches. 
+ The testing cycle for the cluster and associated applications takes a long time for each update to the Aurora MySQL database engine. 
+ The database version for your Aurora MySQL cluster has all the DB engine features and bug fixes that your application needs.

The current LTS releases for Aurora MySQL are as follows:
+ Aurora MySQL version 3.10.*
+ Aurora MySQL version 3.04.*

For more details about the LTS versions, see [Database engine updates for Amazon Aurora MySQL version 3](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraMySQLReleaseNotes/AuroraMySQL.Updates.30Updates.html) in the *Release Notes for Aurora MySQL*.

**Note**  
We recommend that you disable automatic minor version upgrades for LTS versions. Set the `AutoMinorVersionUpgrade` parameter to `false`, or clear the **Enable auto minor version upgrade** check box on the AWS Management Console.  
If you don't disable it, your DB cluster could be upgraded to a non-LTS version.
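
For example, with the AWS CLI you can clear the setting on each DB instance in the cluster. The instance identifier below is a placeholder:

```
aws rds modify-db-instance \
    --db-instance-identifier my-aurora-instance \
    --no-auto-minor-version-upgrade \
    --apply-immediately
```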

## Aurora MySQL beta releases


An Aurora MySQL beta release is an early, security fix–only release in a limited number of AWS Regions. These fixes are later deployed more broadly across all Regions with the next patch release.

The numbering for a beta release is similar to an Aurora MySQL minor version, but with an extra fourth digit, for example 2.12.0.1 or 3.05.0.1.

For more information, see [Database engine updates for Amazon Aurora MySQL version 2](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraMySQLReleaseNotes/AuroraMySQL.Updates.20Updates.html) and [Database engine updates for Amazon Aurora MySQL version 3](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraMySQLReleaseNotes/AuroraMySQL.Updates.30Updates.html) in the *Release Notes for Aurora MySQL*.

# Preparing for Amazon Aurora MySQL-Compatible Edition version 2 end of standard support

Amazon Aurora MySQL-Compatible Edition version 2 (with MySQL 5.7 compatibility) is planned to reach the end of standard support on October 31, 2024. We recommend that you upgrade all clusters running Aurora MySQL version 2 to the default Aurora MySQL version 3 (with MySQL 8.0 compatibility) or higher before Aurora MySQL version 2 reaches the end of its standard support period. On October 31, 2024, Amazon RDS will automatically enroll your databases into [Amazon RDS Extended Support](extended-support.md). If you're running Amazon Aurora MySQL version 2 (with MySQL 5.7 compatibility) in an Aurora Serverless version 1 cluster, this doesn't apply to you. If you want to upgrade your Aurora Serverless version 1 clusters to Aurora MySQL version 3, see [Upgrade path for Aurora Serverless v1 DB clusters](#Aurora-Upgrade-Serverlessv1-Clusters).

You can find upcoming end-of-support dates for Aurora MySQL major versions in the [Aurora major version release calendar](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraMySQLReleaseNotes/AuroraMySQL.release-calendars.html#AuroraMySQL.release-calendars.major).

If you have clusters running Aurora MySQL version 2, you will receive periodic notices with the latest information about how to conduct an upgrade as we get closer to the end of standard support date. We will update this page periodically with the latest information.

## End of standard support timeline


1. Now through October 31, 2024 – You can upgrade clusters from Aurora MySQL version 2 (with MySQL 5.7 compatibility) to Aurora MySQL version 3 (with MySQL 8.0 compatibility).

1. October 31, 2024 – On this date, Aurora MySQL version 2 will reach the end of standard support and Amazon RDS automatically enrolls your clusters into Amazon RDS Extended Support.

For more information about this automatic enrollment, see [Amazon RDS Extended Support with Amazon Aurora](extended-support.md).

## Finding clusters affected by this end-of-life process


To find clusters affected by this end-of-life process, use the following procedures.

**Important**  
Be sure to perform these instructions in every AWS Region and for each AWS account where your resources are located.

### Console


**To find an Aurora MySQL version 2 cluster**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1.  In the navigation pane, choose **Databases**.

1.  In the **Filter by databases** box, enter **5.7**.

1. Check for Aurora MySQL in the engine column.

### AWS CLI


To find clusters affected by this end-of-life process using the AWS CLI, call the [describe-db-clusters](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-clusters.html) command, as in the following sample command.

**Example**  

```
aws rds describe-db-clusters --include-shared --query 'DBClusters[?(Engine==`aurora-mysql` && contains(EngineVersion,`5.7.mysql_aurora`))].{EngineVersion:EngineVersion, DBClusterIdentifier:DBClusterIdentifier, EngineMode:EngineMode}' --output table --region us-east-1

+-----------------------------------------------------------------+
|                        DescribeDBClusters                       |
+----------------------+--------------+---------------------------+
|  DBClusterIdentifier |  EngineMode  |       EngineVersion       |
+----------------------+--------------+---------------------------+
|  aurora-mysql2       |  provisioned |  5.7.mysql_aurora.2.11.3  |
|  aurora-serverlessv1 |  serverless  |  5.7.mysql_aurora.2.11.3  |
+----------------------+--------------+---------------------------+
```

### RDS API


To find Aurora MySQL DB clusters running Aurora MySQL version 2, use the RDS [DescribeDBClusters](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBClusters.html) API operation with the following required parameters:
+ `DescribeDBClusters`
  + Filters.Filter.N
    + Name
      + engine
    + Values.Value.N
      + ['aurora-mysql']

## Amazon RDS Extended Support


You can use Amazon RDS Extended Support over community MySQL 5.7 at no charge until the end of support date, October 31, 2024. On October 31, 2024, Amazon RDS automatically enrolls your databases into RDS Extended Support for Aurora MySQL version 2. RDS Extended Support for Aurora is a paid service that provides up to 28 additional months of support for Aurora MySQL version 2 until the end of RDS Extended Support in February 2027. RDS Extended Support will only be offered for Aurora MySQL minor versions 2.11 and 2.12. To use Amazon Aurora MySQL version 2 past the end of standard support, plan to run your databases on one of these minor versions before October 31, 2024.

For more information about RDS Extended Support, such as charges and other considerations, see [Amazon RDS Extended Support with Amazon Aurora](extended-support.md).

## Performing an upgrade


Upgrading between major versions requires more extensive planning and testing than upgrading between minor versions, and the process can take substantial time. It helps to treat the upgrade as a three-step process, with activities before the upgrade, during the upgrade, and after the upgrade.

**Before the upgrade:**

Before the upgrade, we recommend that you check application compatibility, performance, maintenance procedures, and similar considerations for the upgraded cluster, to confirm that your applications will work as expected after the upgrade. The following recommendations can help provide a better upgrade experience.
+ First, it's critical to understand [How the Aurora MySQL in-place major version upgrade works](AuroraMySQL.Updates.MajorVersionUpgrade.md#AuroraMySQL.Upgrading.Sequence).
+ Next, explore the upgrade techniques that are available when [Upgrading from Aurora MySQL version 2 to version 3](AuroraMySQL.Updates.MajorVersionUpgrade.md#AuroraMySQL.Updates.MajorVersionUpgrade.2to3).
+ To help you decide the right time and approach to upgrade, you can learn the differences between Aurora MySQL version 3 and your current environment with [Comparing Aurora MySQL version 2 and Aurora MySQL version 3](AuroraMySQL.Compare-v2-v3.md). 
+ After you've decided on the option that's convenient and works best, try a mock in-place upgrade on a cloned cluster, using [Planning a major version upgrade for an Aurora MySQL cluster](AuroraMySQL.Updates.MajorVersionUpgrade.md#AuroraMySQL.Upgrading.Planning).
+ Review the [Major version upgrade prechecks for Aurora MySQL](AuroraMySQL.upgrade-prechecks.md). You can run the upgrade prechecks to determine whether your database can be upgraded successfully and to identify any incompatibilities that could affect your applications after the upgrade.
+ Not all kinds or versions of Aurora MySQL clusters can use the in-place upgrade mechanism. For more information, see [Aurora MySQL major version upgrade paths](AuroraMySQL.Updates.MajorVersionUpgrade.md#AuroraMySQL.Upgrading.Compatibility).
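
As a sketch of the mock-upgrade recommendation above, you can create a copy-on-write clone of your production cluster with the AWS CLI and practice the upgrade there. The cluster and instance names and the instance class here are placeholders; substitute your own values.

```
# Clone the production cluster. A copy-on-write clone is fast to create
# and initially shares storage with the source cluster.
aws rds restore-db-cluster-to-point-in-time \
    --source-db-cluster-identifier my-prod-cluster \
    --db-cluster-identifier my-prod-cluster-clone \
    --restore-type copy-on-write \
    --use-latest-restorable-time

# A newly created clone has no DB instances until you add one.
aws rds create-db-instance \
    --db-cluster-identifier my-prod-cluster-clone \
    --db-instance-identifier my-prod-cluster-clone-instance \
    --db-instance-class db.r6g.large \
    --engine aurora-mysql
```

You can then run the in-place upgrade against the clone and test your applications against it, without affecting the source cluster.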

If you have any questions or concerns, the AWS Support Team is available on the [community forums](https://repost.aws/) and [Premium Support](https://aws.amazon.com/premiumsupport/).

**Doing the upgrade:**

You can use one of the following upgrade techniques. The amount of downtime that your system experiences depends on the technique that you choose.
+ **Blue/Green Deployments** – For situations where the top priority is to reduce application downtime, you can use [Amazon RDS Blue/Green Deployments](https://aws.amazon.com/blogs/aws/new-fully-managed-blue-green-deployments-in-amazon-aurora-and-amazon-rds/) for performing the major version upgrade in provisioned Amazon Aurora DB clusters. A blue/green deployment creates a staging environment that copies the production environment. You can make certain changes to the Aurora DB cluster in the green (staging) environment without affecting production workloads. The switchover typically takes under a minute with no data loss. For more information, see [Overview of Amazon Aurora Blue/Green Deployments](blue-green-deployments-overview.md). This minimizes downtime, but requires you to run additional resources while performing the upgrade.
+ **In-place upgrades** – You can perform an [in-place upgrade](AuroraMySQL.Updates.MajorVersionUpgrade.md#AuroraMySQL.Upgrading.Sequence) where Aurora automatically performs a precheck process for you, takes the cluster offline, backs up your cluster, performs the upgrade, and puts your cluster back online. An in-place major version upgrade can be performed in a few clicks, and doesn't involve coordination or failovers with other clusters, but does involve downtime. For more information, see [How to perform an in-place upgrade](AuroraMySQL.Upgrading.Procedure.md).
+ **Snapshot restore** – You can upgrade your Aurora MySQL version 2 cluster by restoring from an Aurora MySQL version 2 snapshot into an Aurora MySQL version 3 cluster. To do this, you should follow the process for taking a snapshot and [restoring](aurora-restore-snapshot.md) from it. This process involves database interruption because you're restoring from a snapshot.
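
As an illustration of the in-place technique, the upgrade is a single AWS CLI call, sketched as follows. The cluster identifier and target engine version are placeholders; confirm the exact version string for your upgrade path first.

```
# In-place major version upgrade from Aurora MySQL version 2 to version 3.
# The cluster identifier and engine version here are placeholders.
aws rds modify-db-cluster \
    --db-cluster-identifier my-aurora-v2-cluster \
    --engine-version 8.0.mysql_aurora.3.04.1 \
    --allow-major-version-upgrade \
    --apply-immediately
```

The `--allow-major-version-upgrade` option is required when the new engine version belongs to a different major version than the current one.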

**After the upgrade:**

After the upgrade, closely monitor your system (application and database) and make fine-tuning changes if necessary. Following the pre-upgrade steps closely minimizes the changes that are needed afterward. For more information, see [Troubleshooting Amazon Aurora MySQL database performance](aurora-mysql-troubleshooting.md).

To learn more about the methods, planning, testing, and troubleshooting of Aurora MySQL major version upgrades, be sure to thoroughly read [Upgrading the major version of an Amazon Aurora MySQL DB cluster](AuroraMySQL.Updates.MajorVersionUpgrade.md), including [Troubleshooting for Aurora MySQL in-place upgrade](AuroraMySQL.Upgrading.Troubleshooting.md). Also, note that some instance types aren't supported for Aurora MySQL version 3. For more information, see [Amazon Aurora DB instance classes](Concepts.DBInstanceClass.md).

## Upgrade path for Aurora Serverless v1 DB clusters



 Aurora MySQL version 2 (with MySQL 5.7 compatibility) will continue to receive standard support for Aurora Serverless v1 clusters. 

If you want to upgrade to Amazon Aurora MySQL 3 (with MySQL 8.0 compatibility) and continue running Aurora Serverless, you can use Amazon Aurora Serverless v2. To understand the differences between Aurora Serverless v1 and Aurora Serverless v2, see [Comparison of Aurora Serverless v2 and Aurora Serverless v1](aurora-serverless-v2.upgrade.md#aurora-serverless.comparison).

**Upgrade to Aurora Serverless v2:** You can upgrade an Aurora Serverless v1 cluster to Aurora Serverless v2. For more information, see [Upgrading from an Aurora Serverless v1 cluster to Aurora Serverless v2](aurora-serverless-v2.upgrade.md#aurora-serverless-v2.upgrade-from-serverless-v1-procedure).

# Preparing for Amazon Aurora MySQL-Compatible Edition version 1 end of life
Preparing for Aurora MySQL version 1 end of life

Amazon Aurora MySQL-Compatible Edition version 1 (with MySQL 5.6 compatibility) is planned to reach end of life on February 28, 2023. Amazon advises that you upgrade all clusters (provisioned and Aurora Serverless) running Aurora MySQL version 1 to Aurora MySQL version 2 (with MySQL 5.7 compatibility) or Aurora MySQL version 3 (with MySQL 8.0 compatibility). Do this before Aurora MySQL version 1 reaches the end of its support period.

For Aurora provisioned DB clusters, you can complete upgrades from Aurora MySQL version 1 to Aurora MySQL version 2 by several methods. You can find instructions for the in-place upgrade mechanism in [How to perform an in-place upgrade](AuroraMySQL.Upgrading.Procedure.md). Another way to complete the upgrade is to take a snapshot of an Aurora MySQL version 1 cluster and restore the snapshot to an Aurora MySQL version 2 cluster. Or you can follow a multistep process that runs the old and new clusters side by side. For more details about each method, see [Upgrading the major version of an Amazon Aurora MySQL DB cluster](AuroraMySQL.Updates.MajorVersionUpgrade.md).

For Aurora Serverless v1 DB clusters, you can perform an in-place upgrade from Aurora MySQL version 1 to Aurora MySQL version 2. For more details about this method, see [Modifying an Aurora Serverless v1 DB cluster](aurora-serverless.modifying.md).

For Aurora provisioned DB clusters, you can complete upgrades from Aurora MySQL version 1 to Aurora MySQL version 3 by using a two-stage upgrade process:

1. Upgrade from Aurora MySQL version 1 to Aurora MySQL version 2 using the methods described preceding.

1. Upgrade from Aurora MySQL version 2 to Aurora MySQL version 3 using the same methods as for upgrading from version 1 to version 2. For more details, see [Upgrading from Aurora MySQL version 2 to version 3](AuroraMySQL.Updates.MajorVersionUpgrade.md#AuroraMySQL.Updates.MajorVersionUpgrade.2to3). Note the [Feature differences between Aurora MySQL version 2 and 3](AuroraMySQL.Compare-v2-v3.md#AuroraMySQL.Compare-v2-v3-features).

You can find upcoming end-of-life dates for Aurora major versions in [Amazon Aurora versions](Aurora.VersionPolicy.md). Amazon automatically upgrades any clusters that you don't upgrade yourself before the end-of-life date. After the end-of-life date, these automatic upgrades to the subsequent major version occur during a scheduled maintenance window for clusters. 

The following are additional milestones for upgrading Aurora MySQL version 1 clusters (provisioned and Aurora Serverless) that are reaching end of life. For each, the start time is 00:00 Universal Coordinated Time (UTC). 

1. Now through February 28, 2023 – You can at any time start upgrades of Aurora MySQL version 1 (with MySQL 5.6 compatibility) clusters to Aurora MySQL version 2 (with MySQL 5.7 compatibility). From Aurora MySQL version 2, you can do a further upgrade to Aurora MySQL version 3 (with MySQL 8.0 compatibility) for Aurora provisioned DB clusters. 

1. January 16, 2023 – After this time, you can't create new Aurora MySQL version 1 clusters or instances from either the AWS Management Console or the AWS Command Line Interface (AWS CLI). You also can't add new secondary Regions to an Aurora global database. This might affect your ability to recover from an unplanned outage as outlined in [Recovering an Amazon Aurora global database from an unplanned outage](aurora-global-database-disaster-recovery.md#aurora-global-database-failover), because you can't complete steps 5 and 6 after this time. You also can't create a new cross-Region read replica running Aurora MySQL version 1. You can still do the following for existing Aurora MySQL version 1 clusters until February 28, 2023:
   + Restore a snapshot taken of an Aurora MySQL version 1 cluster to the same version as the original snapshot cluster.
   + Add read replicas (not applicable for Aurora Serverless DB clusters).
   + Change instance configuration.
   + Perform point-in-time restore.
   + Create clones of existing version 1 clusters.
   + Create a new cross-Region read replica running Aurora MySQL version 2 or higher.

1.  February 28, 2023 – After this time, we plan to automatically upgrade Aurora MySQL version 1 clusters to the default version of Aurora MySQL version 2 within a scheduled maintenance window that follows. Restoring Aurora MySQL version 1 DB snapshots results in an automatic upgrade of the restored cluster to the default version of Aurora MySQL version 2 at that time. 

Upgrading between major versions requires more extensive planning and testing than for a minor version. The process can take substantial time.

For situations where the top priority is to reduce downtime, you can also use [blue/green deployments](https://aws.amazon.com/blogs/aws/new-fully-managed-blue-green-deployments-in-amazon-aurora-and-amazon-rds/) for performing the major version upgrade in provisioned Amazon Aurora DB clusters. A blue/green deployment creates a staging environment that copies the production environment. You can make changes to the Aurora DB cluster in the green (staging) environment without affecting production workloads. The switchover typically takes under a minute with no data loss and no need for application changes. For more information, see [Overview of Amazon Aurora Blue/Green Deployments](blue-green-deployments-overview.md).

After the upgrade is finished, you also might have follow-up work to do. For example, you might need to follow up due to differences in SQL compatibility, the way certain MySQL-related features work, or parameter settings between the old and new versions.

To learn more about the methods, planning, testing, and troubleshooting of Aurora MySQL major version upgrades, be sure to thoroughly read [Upgrading the major version of an Amazon Aurora MySQL DB cluster](AuroraMySQL.Updates.MajorVersionUpgrade.md).

## Finding clusters affected by this end-of-life process


To find clusters affected by this end-of-life process, use the following procedures.

**Important**  
Be sure to perform these instructions in every AWS Region and for each AWS account where your resources are located.

### Console


**To find an Aurora MySQL version 1 cluster**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1.  In the navigation pane, choose **Databases**.

1.  In the **Filter by databases** box, enter **5.6**.

1. Check for Aurora MySQL in the engine column.

### AWS CLI


To find clusters affected by this end-of-life process using the AWS CLI, call the [describe-db-clusters](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-clusters.html) command, as in the following example.

**Example**  

```
aws rds describe-db-clusters --include-shared --query 'DBClusters[?Engine==`aurora`].{EV:EngineVersion, DBCI:DBClusterIdentifier, EM:EngineMode}' --output table --region us-east-1
        
        +------------------------------------------+
        |            DescribeDBClusters            |
        +---------------+--------------+-----------+
        |     DBCI      |     EM       |    EV     |
        +---------------+--------------+-----------+
        |  my-database-1|  serverless  |  5.6.10a  |
        +---------------+--------------+-----------+
```

### RDS API


To find Aurora MySQL DB clusters running Aurora MySQL version 1, use the RDS [DescribeDBClusters](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBClusters.html) API operation with the following required parameters: 
+  `DescribeDBClusters`
  + Filters.Filter.N
    + Name
      + engine
    + Values.Value.N
      + ['aurora']

# Upgrading Amazon Aurora MySQL DB clusters
<a name="mysql_upgrade"></a>

 You can upgrade an Aurora MySQL DB cluster to get bug fixes, new Aurora MySQL features, or to change to an entirely new version of the underlying database engine. The following sections show how. 

**Note**  
 The type of upgrade that you do depends on how much downtime you can afford for your cluster, how much verification testing you plan to do, how important the specific bug fixes or new features are for your use case, and whether you plan to do frequent small upgrades or occasional upgrades that skip several intermediate versions. For each upgrade, you can change the major version, the minor version, and the patch level for your cluster. If you aren't familiar with the distinction between Aurora MySQL major versions, minor versions, and patch levels, you can read the background information at [Checking Aurora MySQL version numbers](AuroraMySQL.Updates.Versions.md). 

**Tip**  
You can minimize the downtime required for a DB cluster upgrade by using a blue/green deployment. For more information, see [Using Amazon Aurora Blue/Green Deployments for database updates](blue-green-deployments.md).

**Topics**
+ [

# Upgrading the minor version or patch level of an Aurora MySQL DB cluster
](AuroraMySQL.Updates.Patching.md)
+ [

# Upgrading the major version of an Amazon Aurora MySQL DB cluster
](AuroraMySQL.Updates.MajorVersionUpgrade.md)

# Upgrading the minor version or patch level of an Aurora MySQL DB cluster


 You can use the following methods to upgrade the minor version of a DB cluster or to patch a DB cluster: 
+ [Upgrading Aurora MySQL by modifying the engine version](AuroraMySQL.Updates.Patching.ModifyEngineVersion.md) (for Aurora MySQL version 2 and 3)
+ [Enabling automatic upgrades between minor Aurora MySQL versions](AuroraMySQL.Updates.AMVU.md)

 For information about how zero-downtime patching can reduce interruptions during the upgrade process, see [Using zero-downtime patching](AuroraMySQL.Updates.ZDP.md). 

For information about performing a minor version upgrade for your Aurora MySQL DB cluster, see the following topics. 

**Topics**
+ [

## Before performing a minor version upgrade
](#USER_UpgradeDBInstance.PostgreSQL.BeforeMinor)
+ [

## Minor version upgrade prechecks for Aurora MySQL
](#AuroraMySQL.minor-upgrade-prechecks)
+ [

# Upgrading Aurora MySQL by modifying the engine version
](AuroraMySQL.Updates.Patching.ModifyEngineVersion.md)
+ [

# Enabling automatic upgrades between minor Aurora MySQL versions
](AuroraMySQL.Updates.AMVU.md)
+ [

# Using zero-downtime patching
](AuroraMySQL.Updates.ZDP.md)
+ [

## Alternative blue/green upgrade technique
](#AuroraMySQL.UpgradingMinor.BlueGreen)

## Before performing a minor version upgrade
Before performing a minor version upgrade

We recommend that you perform the following actions to reduce the downtime during a minor version upgrade:
+ Perform Aurora DB cluster maintenance during a period of low traffic. Use Performance Insights to identify these time periods so that you can configure the maintenance windows correctly. For more information on Performance Insights, see [Monitoring DB load with Performance Insights on Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PerfInsights.html). For more information on the DB cluster maintenance window, see [Adjusting the preferred DB cluster maintenance window](USER_UpgradeDBInstance.Maintenance.md#AdjustingTheMaintenanceWindow.Aurora).
+ Use AWS SDKs that support exponential backoff and jitter as a best practice. For more information, see [Exponential Backoff And Jitter](https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/).
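
As a minimal sketch of the full-jitter approach described in that post, the retry delay for attempt *n* is a uniform random value below an exponentially growing cap. The `base` and `max_cap` values below are illustrative, not prescribed.

```
# Full-jitter exponential backoff: for attempt n, sleep a random duration
# in [0, min(max_cap, base * 2^n)) seconds before retrying.
backoff_delay() {
    attempt=$1
    base=1        # initial delay ceiling, in seconds (illustrative)
    max_cap=64    # never wait longer than this (illustrative)
    cap=$(( base * (1 << attempt) ))
    [ "$cap" -gt "$max_cap" ] && cap=$max_cap
    echo $(( RANDOM % cap ))
}

# Example: print the jittered delay for the first five retry attempts.
for n in 1 2 3 4 5; do
    echo "attempt $n: sleeping $(backoff_delay "$n")s before retry"
done
```

Jitter spreads out retries from many clients so that they don't all hit the database at the same instant after maintenance completes.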

## Minor version upgrade prechecks for Aurora MySQL


When you start a minor version upgrade, Amazon Aurora runs prechecks automatically.

These prechecks are mandatory. You can't choose to skip them. The prechecks provide the following benefits:
+ They enable you to avoid unplanned downtime during the upgrade.
+ If there are incompatibilities, Amazon Aurora prevents the upgrade and provides a log for you to learn about them. You can then use the log to prepare your database for the upgrade by reducing the incompatibilities. For detailed information about removing incompatibilities, see [Preparing your installation for upgrade](https://dev.mysql.com/doc/refman/8.0/en/upgrade-prerequisites.html) in the MySQL documentation.

The prechecks run before the DB instance is stopped for the upgrade, meaning that they don't cause any downtime when they run. If the prechecks find an incompatibility, Aurora automatically cancels the upgrade before the DB instance is stopped. Aurora also generates an event for the incompatibility. For more information about Amazon Aurora events, see [Working with Amazon RDS event notification](USER_Events.md).

Aurora records detailed information about each incompatibility in the log file `PrePatchCompatibility.log`. In most cases, the log entry includes a link to the MySQL documentation for correcting the incompatibility. For more information about viewing log files, see [Viewing and listing database log files](USER_LogAccess.Procedural.Viewing.md).
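
For example, you can list and download the precheck log with the AWS CLI. This is a sketch; the instance identifier is a placeholder, and the exact log file name returned by your instance can vary, so list the files first.

```
# Find the precheck log for the instance (identifier is a placeholder).
aws rds describe-db-log-files \
    --db-instance-identifier my-aurora-instance \
    --filename-contains PrePatchCompatibility

# Download the log contents, using the file name returned above.
aws rds download-db-log-file-portion \
    --db-instance-identifier my-aurora-instance \
    --log-file-name PrePatchCompatibility.log \
    --output text
```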

The prechecks analyze the objects in your database. This analysis consumes resources and increases the time that the upgrade takes to complete.

# Upgrading Aurora MySQL by modifying the engine version
Modifying the engine version

Upgrading the minor version of an Aurora MySQL DB cluster applies additional fixes and new features to an existing cluster.

This kind of upgrade applies to Aurora MySQL clusters where the original version and the upgraded version both have the same Aurora MySQL major version, either 2 or 3. The process is fast and straightforward because it doesn't involve any conversion for the Aurora MySQL metadata or reorganization of your table data.

You perform this kind of upgrade by modifying the engine version of the DB cluster using the AWS Management Console, AWS CLI, or the RDS API. For example, if your cluster is running Aurora MySQL 3.x, choose a higher 3.x version.

If you're performing a minor upgrade on an Aurora Global Database, upgrade all of the secondary clusters before you upgrade the primary cluster.

**Note**  
To perform a minor version upgrade to Aurora MySQL version 3.04.\$1 or higher, or version 2.12.\$1, use the following process:  

1. Remove all secondary Regions from the global cluster. Follow the steps in [Removing a cluster from an Amazon Aurora global database](aurora-global-database-detaching.md).

1. Upgrade the engine version of the primary Region to version 3.04.\$1 or higher, or version 2.12.\$1, as applicable. Follow the steps in [To modify the engine version of a DB cluster](#modify-db-cluster-engine-version).

1. Add secondary Regions to the global cluster. Follow the steps in [Adding an AWS Region to an Amazon Aurora global database](aurora-global-database-attaching.md).

 **To modify the engine version of a DB cluster** 
+ **By using the console** – Modify the properties of your cluster. In the **Modify DB cluster** window, change the Aurora MySQL engine version in the **DB engine version** box. If you aren't familiar with the general procedure for modifying a cluster, follow the instructions at [Modifying the DB cluster by using the console, CLI, and API](Aurora.Modifying.md#Aurora.Modifying.Cluster).
+ **By using the AWS CLI** – Call the [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) AWS CLI command, and specify the name of your DB cluster for the `--db-cluster-identifier` option and the engine version for the `--engine-version` option.

  For example, to upgrade to Aurora MySQL version 3.04.1, set the `--engine-version` option to `8.0.mysql_aurora.3.04.1`. Specify the `--apply-immediately` option to immediately update the engine version for your DB cluster.
+ **By using the RDS API** – Call the [ModifyDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) API operation, and specify the name of your DB cluster for the `DBClusterIdentifier` parameter and the engine version for the `EngineVersion` parameter. Set the `ApplyImmediately` parameter to `true` to immediately update the engine version for your DB cluster.
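
Putting the AWS CLI description above together, a minor version upgrade is a single call, as in this sketch (the cluster identifier is a placeholder):

```
# Minor version upgrade by modifying the engine version in place.
aws rds modify-db-cluster \
    --db-cluster-identifier my-aurora-cluster \
    --engine-version 8.0.mysql_aurora.3.04.1 \
    --apply-immediately
```

Without `--apply-immediately`, the engine version change is deferred to the next maintenance window.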

# Enabling automatic upgrades between minor Aurora MySQL versions
Enabling automatic upgrades between minor versions<a name="amvu"></a>

For an Amazon Aurora MySQL DB cluster, you can specify that Aurora upgrades the DB cluster automatically to new minor versions. You do so by setting the `AutoMinorVersionUpgrade` property (**Auto minor version upgrade** in the AWS Management Console) of the DB cluster.

Automatic upgrades occur during the maintenance window. If the individual DB instances in the DB cluster have different maintenance windows from the cluster maintenance window, then the cluster maintenance window takes precedence.

Automatic minor version upgrade doesn't apply to the following kinds of Aurora MySQL clusters:
+ Clusters that are part of an Aurora global database
+ Clusters that have cross-Region replicas

The outage duration varies depending on your workload, cluster size, the amount of binary log data, and whether Aurora can use the zero-downtime patching (ZDP) feature. Aurora restarts the database cluster, so you might experience a short period of unavailability before resuming use of your cluster. In particular, the amount of binary log data affects recovery time, because the DB instance processes the binary log data during recovery. Thus, a high volume of binary log data increases recovery time.

**Note**  
Aurora only performs automatic upgrades if all DB instances in your DB cluster have the `AutoMinorVersionUpgrade` setting enabled. For information on how to set it, and how it works when applied at the cluster and instance levels, see [Automatic minor version upgrades for Aurora DB clusters](USER_UpgradeDBInstance.Maintenance.md#Aurora.Maintenance.AMVU).  
Then if an upgrade path exists for the DB cluster's instances to a minor DB engine version that has `AutoUpgrade` set to true, the upgrade will take place. The `AutoUpgrade` setting is dynamic, and is set by RDS.  
Auto minor version upgrades are performed to the default minor version.

You can use a CLI command such as the following to check the status of the `AutoMinorVersionUpgrade` setting for all of the DB instances in your Aurora MySQL clusters.

```
aws rds describe-db-instances \
  --query '*[].{DBClusterIdentifier:DBClusterIdentifier,DBInstanceIdentifier:DBInstanceIdentifier,AutoMinorVersionUpgrade:AutoMinorVersionUpgrade}'
```

That command produces output similar to the following:

```
[
  {
      "DBInstanceIdentifier": "db-t2-medium-instance",
      "DBClusterIdentifier": "cluster-57-2020-06-03-6411",
      "AutoMinorVersionUpgrade": true
  },
  {
      "DBInstanceIdentifier": "db-t2-small-original-size",
      "DBClusterIdentifier": "cluster-57-2020-06-03-6411",
      "AutoMinorVersionUpgrade": false
  },
  {
      "DBInstanceIdentifier": "instance-2020-05-01-2332",
      "DBClusterIdentifier": "cluster-57-2020-05-01-4615",
      "AutoMinorVersionUpgrade": true
  },
... output omitted ...
```

In this example, **Enable auto minor version upgrade** is turned off for the DB cluster `cluster-57-2020-06-03-6411`, because it's turned off for one of the DB instances in the cluster.
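
To turn the setting on for the instance where it's off, you can modify that DB instance, as in this sketch (the instance identifier comes from the sample output; substitute your own):

```
# Enable automatic minor version upgrades for one DB instance.
aws rds modify-db-instance \
    --db-instance-identifier db-t2-small-original-size \
    --auto-minor-version-upgrade \
    --apply-immediately
```

Repeat this for each DB instance in the cluster, because Aurora only performs the automatic upgrade when every instance has the setting enabled.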

# Using zero-downtime patching
<a name="zdp"></a>

Performing upgrades for Aurora MySQL DB clusters involves the possibility of an outage when the database is shut down and while it's being upgraded. By default, if you start the upgrade while the database is busy, you lose all the connections and transactions that the DB cluster is processing. If you wait until the database is idle to perform the upgrade, you might have to wait a long time.

The zero-downtime patching (ZDP) feature attempts, on a best-effort basis, to preserve client connections through an Aurora MySQL upgrade. If ZDP completes successfully, application sessions are preserved and the database engine restarts while the upgrade is in progress. The database engine restart can cause a drop in throughput lasting for a few seconds to approximately one minute.

ZDP doesn't apply to the following:
+ Operating system (OS) patches and upgrades
+ Major version upgrades

ZDP is available for all supported Aurora MySQL versions and DB instance classes.

ZDP isn't supported for Aurora Serverless v1 or Aurora global databases.

**Note**  
We recommend using the T DB instance classes only for development and test servers, or other non-production servers. For more details on the T instance classes, see [Using T instance classes for development and testing](AuroraMySQL.BestPractices.Performance.md#AuroraMySQL.BestPractices.T2Medium).

You can see metrics of important attributes during ZDP in the MySQL error log. You can also see information about when Aurora MySQL uses ZDP or chooses not to use ZDP on the **Events** page in the AWS Management Console.

In Aurora MySQL, Aurora can perform a zero-downtime patch whether or not binary log replication is enabled. If binary log replication is enabled, Aurora MySQL automatically drops the connection to the binlog target during a ZDP operation. Aurora MySQL automatically reconnects to the binlog target and resumes replication after the restart finishes.

ZDP also works in combination with the reboot enhancements in Aurora MySQL. Patching the writer DB instance automatically patches readers at the same time. After performing the patch, Aurora restores the connections on both the writer and reader DB instances.

ZDP might not complete successfully under the following conditions:
+ Long-running queries or transactions are in progress. If Aurora can perform ZDP in this case, any open transactions are canceled but their connections are retained.
+ Temporary tables, user locks, or table locks are in use, for example while data definition language (DDL) statements run. Aurora drops these connections.
+ Pending parameter changes exist.

If no suitable time window for performing ZDP becomes available because of one or more of these conditions, patching reverts to the standard behavior.

Although connections remain intact following a successful ZDP operation, some variables and features are reinitialized. The following kinds of information aren't preserved through a restart caused by zero-downtime patching:
+ Global variables. Aurora restores session variables, but it doesn't restore global variables after the restart.
+ Status variables. In particular, the uptime value reported by the engine status is reset after a restart that uses the ZDR or ZDP mechanisms.
+ `LAST_INSERT_ID`.
+ In-memory `auto_increment` state for tables. The in-memory auto-increment state is reinitialized. For more information about auto-increment values, see [MySQL Reference Manual](https://dev.mysql.com/doc/refman/5.7/en/innodb-auto-increment-handling.html#innodb-auto-increment-initialization).
+ Diagnostic information from `INFORMATION_SCHEMA` and `PERFORMANCE_SCHEMA` tables. This diagnostic information also appears in the output of commands such as `SHOW PROFILE` and `SHOW PROFILES`. 

The following activities related to zero-downtime restart are reported on the **Events** page:
+ Attempting to upgrade the database with zero downtime.
+ Attempting to upgrade the database with zero downtime finished. The event reports how long the process took. The event also reports how many connections were preserved during the restart and how many connections were dropped. You can consult the database error log to see more details about what happened during the restart.
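
You can also retrieve these events with the AWS CLI instead of the console, as in this sketch (the cluster identifier is a placeholder):

```
# List recent events for a cluster, including zero-downtime patching
# attempts, over the past 48 hours (2880 minutes).
aws rds describe-events \
    --source-identifier my-aurora-cluster \
    --source-type db-cluster \
    --duration 2880
```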

## Alternative blue/green upgrade technique


In some situations, your top priority is to perform an immediate switchover from the old cluster to an upgraded one. In such situations, you can use a multistep process that runs the old and new clusters side-by-side. Here, you replicate data from the old cluster to the new one until you are ready for the new cluster to take over. For details, see [Using Amazon Aurora Blue/Green Deployments for database updates](blue-green-deployments.md).

# Upgrading the major version of an Amazon Aurora MySQL DB cluster
Upgrading the major version of an Aurora MySQL DB cluster<a name="mvu"></a>

In an Aurora MySQL version number such as 3.04.1, the 3 represents the major version. Aurora MySQL version 2 is compatible with MySQL 5.7. Aurora MySQL version 3 is compatible with MySQL 8.0.

Upgrading between major versions requires more extensive planning and testing than for a minor version. The process can take substantial time. After the upgrade is finished, you also might have follow-up work to do. For example, this might occur because of differences in SQL compatibility or the way certain MySQL-related features work. Or it might occur because of differing parameter settings between the old and new versions.

**Contents**
+ [

## Upgrading from Aurora MySQL version 2 to version 3
](#AuroraMySQL.Updates.MajorVersionUpgrade.2to3)
+ [

## Aurora MySQL major version upgrade paths
](#AuroraMySQL.Upgrading.Compatibility)
+ [

## How the Aurora MySQL in-place major version upgrade works
](#AuroraMySQL.Upgrading.Sequence)
+ [

## Planning a major version upgrade for an Aurora MySQL cluster
](#AuroraMySQL.Upgrading.Planning)
  + [

### Simulating the upgrade by cloning your DB cluster
](#AuroraMySQL.Upgrading.Planning.clone)
  + [

### Blue/Green Deployments
](#AuroraMySQL.UpgradingMajor.BlueGreen)
+ [

# Major version upgrade prechecks for Aurora MySQL
](AuroraMySQL.upgrade-prechecks.md)
  + [

## Precheck process for Aurora MySQL
](AuroraMySQL.upgrade-prechecks.md#AuroraMySQL.upgrade-prechecks.process)
  + [

## Precheck log format for Aurora MySQL
](AuroraMySQL.upgrade-prechecks.md#AuroraMySQL.upgrade-prechecks.log-format)
  + [

## Precheck log output examples for Aurora MySQL
](AuroraMySQL.upgrade-prechecks.md#AuroraMySQL.upgrade-prechecks.log-examples)
  + [

## Precheck performance for Aurora MySQL
](AuroraMySQL.upgrade-prechecks.md#AuroraMySQL.upgrade-prechecks.performance)
  + [

## Summary of Community MySQL upgrade prechecks
](AuroraMySQL.upgrade-prechecks.md#AuroraMySQL.upgrade-prechecks.community)
  + [

## Summary of Aurora MySQL upgrade prechecks
](AuroraMySQL.upgrade-prechecks.md#AuroraMySQL.upgrade-prechecks.ams)
  + [

# Precheck descriptions reference for Aurora MySQL
](AuroraMySQL.upgrade-prechecks.descriptions.md)
    + [

## Errors
](AuroraMySQL.upgrade-prechecks.descriptions.md#precheck-descriptions-errors)
      + [

### MySQL prechecks that report errors
](AuroraMySQL.upgrade-prechecks.descriptions.md#precheck-descriptions-errors.mysql)
      + [

### Aurora MySQL prechecks that report errors
](AuroraMySQL.upgrade-prechecks.descriptions.md#precheck-descriptions-errors.aurora)
    + [

## Warnings
](AuroraMySQL.upgrade-prechecks.descriptions.md#precheck-descriptions-warnings)
      + [

### MySQL prechecks that report warnings
](AuroraMySQL.upgrade-prechecks.descriptions.md#precheck-descriptions-warnings.mysql)
      + [

### Aurora MySQL prechecks that report warnings
](AuroraMySQL.upgrade-prechecks.descriptions.md#precheck-descriptions-warnings.aurora)
    + [

## Notices
](AuroraMySQL.upgrade-prechecks.descriptions.md#precheck-descriptions-notices)
    + [

## Errors, warnings, or notices
](AuroraMySQL.upgrade-prechecks.descriptions.md#precheck-descriptions-all)
+ [

# How to perform an in-place upgrade
](AuroraMySQL.Upgrading.Procedure.md)
  + [

## How in-place upgrades affect the parameter groups for a cluster
](AuroraMySQL.Upgrading.Procedure.md#AuroraMySQL.Upgrading.ParamGroups)
  + [

## Changes to cluster properties between Aurora MySQL versions
](AuroraMySQL.Upgrading.Procedure.md#AuroraMySQL.Upgrading.Attrs)
  + [

## In-place major upgrades for global databases
](AuroraMySQL.Upgrading.Procedure.md#AuroraMySQL.Upgrading.GlobalDB)
  + [

## In-place upgrades for DB clusters with cross-Region read replicas
](AuroraMySQL.Upgrading.Procedure.md#AuroraMySQL.Upgrading.XRRR)
+ [

# Aurora MySQL in-place upgrade tutorial
](AuroraMySQL.Upgrading.Tutorial.md)
+ [

# Finding the reasons for Aurora MySQL major version upgrade failures
](AuroraMySQL.Upgrading.failure-events.md)
+ [

# Troubleshooting for Aurora MySQL in-place upgrade
](AuroraMySQL.Upgrading.Troubleshooting.md)
+ [

# Post-upgrade cleanup for Aurora MySQL version 3
](AuroraMySQL.mysql80-post-upgrade.md)
  + [

## Spatial indexes
](AuroraMySQL.mysql80-post-upgrade.md#AuroraMySQL.mysql80-spatial)

## Upgrading from Aurora MySQL version 2 to version 3


If you have a MySQL 5.7–compatible cluster and want to upgrade it to a MySQL 8.0–compatible cluster, you can do so by running an upgrade process on the cluster itself. This kind of upgrade is an *in-place upgrade*, in contrast to upgrades that you do by creating a new cluster. This technique keeps the same endpoint and other characteristics of the cluster. The upgrade is relatively fast because it doesn't require copying all your data to a new cluster volume. This stability helps to minimize any configuration changes in your applications. It also helps to reduce the amount of testing for the upgraded cluster, because the number of DB instances and their instance classes all stay the same.

The in-place upgrade mechanism involves shutting down your DB cluster while the operation takes place. Aurora performs a clean shutdown and completes outstanding operations such as transaction rollback and undo purge. For more information, see [How the Aurora MySQL in-place major version upgrade works](#AuroraMySQL.Upgrading.Sequence).

The in-place upgrade method is convenient because it's simple to perform and minimizes configuration changes to associated applications. For example, an in-place upgrade preserves the endpoints and set of DB instances for your cluster. However, the time needed for an in-place upgrade can vary depending on the properties of your schema and how busy the cluster is. Thus, depending on the needs of your cluster, you can choose among the following upgrade techniques:
+ [In-place upgrade](AuroraMySQL.Upgrading.Procedure.md)
+ [Blue/Green Deployment](#AuroraMySQL.UpgradingMajor.BlueGreen)
+ [Snapshot restore](aurora-restore-snapshot.md)
**Note**  
If you use the AWS CLI or RDS API for the snapshot restore upgrade method, you must run a subsequent operation to create a writer DB instance in the restored DB cluster.
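
As a minimal sketch of that two-step sequence, the following assembles the parameters for the `RestoreDBClusterFromSnapshot` and `CreateDBInstance` RDS API operations. The cluster, snapshot, and engine-version identifiers are hypothetical placeholders; pass each parameter dict to a boto3 RDS client call (for example, `client.restore_db_cluster_from_snapshot(**params)`) to run it.

```python
# Sketch of the snapshot-restore upgrade flow. Restoring a DB cluster from a
# snapshot doesn't create any DB instances, so a writer instance must be
# created in a second step before the restored cluster is usable.

def snapshot_restore_upgrade_calls(snapshot_id, new_cluster_id, engine_version,
                                   instance_class="db.r6g.large"):
    """Return the ordered (operation, params) pairs for a snapshot-restore upgrade."""
    restore = {
        "DBClusterIdentifier": new_cluster_id,
        "SnapshotIdentifier": snapshot_id,
        "Engine": "aurora-mysql",
        "EngineVersion": engine_version,  # target major version
    }
    writer = {
        "DBInstanceIdentifier": new_cluster_id + "-writer",
        "DBClusterIdentifier": new_cluster_id,
        "Engine": "aurora-mysql",
        "DBInstanceClass": instance_class,
    }
    return [("RestoreDBClusterFromSnapshot", restore),
            ("CreateDBInstance", writer)]
```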

For general information about Aurora MySQL version 3 and its new features, see [Aurora MySQL version 3 compatible with MySQL 8.0](AuroraMySQL.MySQL80.md).

For details about planning an upgrade, see [Planning a major version upgrade for an Aurora MySQL cluster](#AuroraMySQL.Upgrading.Planning) and [How to perform an in-place upgrade](AuroraMySQL.Upgrading.Procedure.md).

## Aurora MySQL major version upgrade paths


Not all kinds or versions of Aurora MySQL clusters can use the in-place upgrade mechanism. You can learn the appropriate upgrade path for each Aurora MySQL cluster by consulting the following table.


|  Type of Aurora MySQL DB cluster  | Can it use in-place upgrade?  |  Action  | 
| --- | --- | --- | 
|   Aurora MySQL provisioned cluster, version 2  |  Yes  |  In-place upgrade is supported for MySQL 5.7–compatible Aurora MySQL clusters. For information about upgrading to Aurora MySQL version 3, see [Planning a major version upgrade for an Aurora MySQL cluster](#AuroraMySQL.Upgrading.Planning) and [How to perform an in-place upgrade](AuroraMySQL.Upgrading.Procedure.md).  | 
|   Aurora MySQL provisioned cluster, version 3  |  Not applicable  |  Use a minor version upgrade procedure to upgrade between releases of Aurora MySQL version 3.  | 
|  Aurora Serverless v1 cluster  |  Not applicable  |  Aurora Serverless v1 is supported for Aurora MySQL only on version 2.  | 
|  Aurora Serverless v2 cluster  |  Not applicable  | Aurora Serverless v2 is supported for Aurora MySQL only on version 3. | 
|  Cluster in an Aurora global database  |  Yes  |  To upgrade Aurora MySQL from version 2 to version 3, follow the [procedure for doing an in-place upgrade](AuroraMySQL.Upgrading.Procedure.md) for clusters in an Aurora global database. Perform the upgrade on the global cluster. Aurora upgrades the primary cluster and all the secondary clusters in the global database at the same time. If you use the AWS CLI or RDS API, call the `modify-global-cluster` command or `ModifyGlobalCluster` operation instead of `modify-db-cluster` or `ModifyDBCluster`. You can perform an in-place upgrade from Aurora MySQL version 2 to version 3 only if the `lower_case_table_names` parameter is set to default and you reboot your global database. For more information, see [Major version upgrades](aurora-global-database-upgrade.md#aurora-global-database-upgrade.major).  | 
|  Parallel query cluster  |  Yes  |  You can perform an in-place upgrade.  | 
|  Cluster that is the target of binary log replication  |  Maybe  |  If the binary log replication is from an Aurora MySQL cluster, you can perform an in-place upgrade. You can't perform the upgrade if the binary log replication is from an RDS for MySQL or an on-premises MySQL DB instance. In that case, you can upgrade using the snapshot restore mechanism.  | 
|  Cluster with zero DB instances  |  No  |  Using the AWS CLI or the RDS API, you can create an Aurora MySQL cluster without any attached DB instances. In the same way, you can also remove all DB instances from an Aurora MySQL cluster while leaving the data in the cluster volume intact. While a cluster has zero DB instances, you can't perform an in-place upgrade. The upgrade mechanism requires a writer instance in the cluster to perform conversions on the system tables, data files, and so on. In this case, use the AWS CLI or the RDS API to create a writer instance for the cluster. Then you can perform an in-place upgrade.  | 
|  Cluster with backtrack enabled  |  Yes  |  You can perform an in-place upgrade for an Aurora MySQL cluster that uses the Backtrack feature. However, after the upgrade, you can't backtrack the cluster to a time before the upgrade.  | 

## How the Aurora MySQL in-place major version upgrade works


 Aurora MySQL performs a major version upgrade as a multistage process. You can check the current status of an upgrade. Some of the upgrade steps also provide progress information. As each stage begins, Aurora MySQL records an event. You can examine events as they occur on the **Events** page in the RDS console. For more information about working with events, see [Working with Amazon RDS event notification](USER_Events.md). 

**Important**  
 Once the process begins, it runs until the upgrade either succeeds or fails. You can't cancel the upgrade while it's underway. If the upgrade fails, Aurora rolls back all the changes and your cluster has the same engine version, metadata, and so on as before. 

 The upgrade process consists of these stages: 

1.  Aurora performs a series of [prechecks](AuroraMySQL.upgrade-prechecks.md) before beginning the upgrade process. Your cluster keeps running while Aurora does these checks. For example, the cluster can't have any XA transactions in the prepared state or be processing any data definition language (DDL) statements. To meet these conditions, you might need to shut down applications that are submitting certain kinds of SQL statements, or simply wait until certain long-running statements are finished. Then try the upgrade again. Some checks test for conditions that don't prevent the upgrade but might make the upgrade take a long time. 

    If Aurora detects that any required conditions aren't met, modify the conditions identified in the event details. Follow the guidance in [Troubleshooting for Aurora MySQL in-place upgrade](AuroraMySQL.Upgrading.Troubleshooting.md). If Aurora detects conditions that might cause a slow upgrade, plan to monitor the upgrade over an extended period. 

1.  Aurora takes your cluster offline. Then Aurora performs a similar set of tests as in the previous stage, to confirm that no new issues arose during the shutdown process. If Aurora detects any conditions at this point that would prevent the upgrade, Aurora cancels the upgrade and brings the cluster back online. In this case, confirm when the conditions no longer apply and start the upgrade again. 

1.  Aurora creates a snapshot of your cluster volume. Suppose that you discover compatibility or other kinds of issues after the upgrade is finished. Or suppose that you want to perform testing using both the original and upgraded clusters. In such cases, you can restore from this snapshot to create a new cluster with the original engine version and the original data. 
**Tip**  
This snapshot is a manual snapshot. However, Aurora can create it and continue with the upgrade process even if you have reached your quota for manual snapshots. This snapshot remains permanently (if needed) until you delete it. After you finish all post-upgrade testing, you can delete this snapshot to minimize storage charges.

1.  Aurora clones your cluster volume. Cloning is a fast operation that doesn't involve copying the actual table data. If Aurora encounters an issue during the upgrade, it reverts to the original data from the cloned cluster volume and brings the cluster back online. The temporary cloned volume during the upgrade isn't subject to the usual limit on the number of clones for a single cluster volume. 

1.  Aurora performs a clean shutdown for the writer DB instance. During the clean shutdown, progress events are recorded every 15 minutes for the following operations. You can examine events as they occur on the **Events** page in the RDS console. 
   +  Aurora purges the undo records for old versions of rows. 
   +  Aurora rolls back any uncommitted transactions. 

1.  Aurora upgrades the engine version on the writer DB instance: 
   +  Aurora installs the binary for the new engine version on the writer DB instance. 
   +  Aurora uses the writer DB instance to upgrade your data to MySQL 8.0-compatible format. During this stage, Aurora modifies the system tables and performs other conversions that affect the data in your cluster volume. In particular, Aurora upgrades the partition metadata in the system tables to be compatible with the MySQL 8.0 partition format. This stage can take a long time if the tables in your cluster have a large number of partitions. 

      If any errors occur during this stage, you can find the details in the MySQL error logs. After this stage starts, if the upgrade process fails for any reason, Aurora restores the original data from the cloned cluster volume. 

1.  Aurora upgrades the engine version on the reader DB instances. 

1.  The upgrade process is completed. Aurora records a final event to indicate that the upgrade process completed successfully. Now your DB cluster is running the new major version. 

## Planning a major version upgrade for an Aurora MySQL cluster


To help you decide the right time and approach to upgrade, you can learn the differences between Aurora MySQL version 3 and your current environment:
+ If you're converting from RDS for MySQL 8.0 or MySQL 8.0 Community Edition, see [Comparing Aurora MySQL version 3 and MySQL 8.0 Community Edition](AuroraMySQL.Compare-80-v3.md).
+ If you're upgrading from Aurora MySQL version 2, RDS for MySQL 5.7, or community MySQL 5.7, see [Comparing Aurora MySQL version 2 and Aurora MySQL version 3](AuroraMySQL.Compare-v2-v3.md). 
+ Create new MySQL 8.0-compatible versions of any custom parameter groups. Apply any necessary custom parameter values to the new parameter groups. Consult [Parameter changes for Aurora MySQL version 3](AuroraMySQL.Compare-v2-v3.md#AuroraMySQL.mysql80-parameter-changes) to learn about parameter changes.
+ Review your Aurora MySQL version 2 database schema and object definitions for the usage of new reserved keywords introduced in MySQL 8.0 Community Edition. Do so before you upgrade. For more information, see [MySQL 8.0 New Keywords and Reserved Words](https://dev.mysql.com/doc/mysqld-version-reference/en/keywords-8-0.html#keywords-new-in-8-0) in the MySQL documentation.
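
A short scan can help with the reserved-keyword review described above. The following sketch checks identifier names against an illustrative (not exhaustive) subset of words that became reserved in MySQL 8.0; consult the MySQL keyword reference linked above for the complete list. The identifier names in the usage example are hypothetical.

```python
# Illustrative subset of words newly reserved in MySQL 8.0 (mostly window
# function and CTE syntax). This is NOT the full list; see the MySQL
# keyword reference for all additions.
MYSQL_80_NEW_RESERVED = {
    "CUME_DIST", "DENSE_RANK", "FIRST_VALUE", "GROUPS", "LAG", "LATERAL",
    "LEAD", "NTILE", "OVER", "RANK", "RECURSIVE", "ROW_NUMBER", "WINDOW",
}

def find_reserved_conflicts(identifiers):
    """Return identifiers (table, column, routine names, and so on) that
    collide with new MySQL 8.0 reserved words and would need quoting or
    renaming before the upgrade."""
    return sorted(name for name in identifiers
                  if name.upper() in MYSQL_80_NEW_RESERVED)
```

You might feed this function names pulled from `information_schema` on the version 2 cluster, for example the results of `SELECT table_name FROM information_schema.tables`.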

You can also find more MySQL-specific upgrade considerations and tips in [Changes in MySQL 8.0](https://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html) in the *MySQL Reference Manual*. For example, you can use the command `mysqlcheck --check-upgrade` to analyze your existing Aurora MySQL databases and identify potential upgrade issues.
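
As a sketch, one way to script that analysis is to build the `mysqlcheck` invocation programmatically. The flags shown are standard `mysqlcheck` options; the host and user values are hypothetical, and you'd add your own password handling (for example, a `~/.my.cnf` file) before running it against your cluster endpoint.

```python
# Build the mysqlcheck upgrade-analysis command mentioned above. Returns the
# argument list; pass it to subprocess.run() to execute (requires the
# mysqlcheck client to be installed and credentials to be configured).

def mysqlcheck_upgrade_cmd(host, user):
    """Return the mysqlcheck invocation that scans all databases for
    potential MySQL upgrade issues."""
    return [
        "mysqlcheck",
        "--check-upgrade",     # analyze tables for upgrade incompatibilities
        "--all-databases",
        f"--host={host}",
        f"--user={user}",
    ]
```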

**Note**  
We recommend using larger DB instance classes when upgrading to Aurora MySQL version 3 using the in-place upgrade or snapshot restore technique. Examples are db.r5.24xlarge and db.r6g.16xlarge. This helps the upgrade process to complete faster by using the majority of available CPU capacity on the DB instance. You can change to the DB instance class that you want after the major version upgrade is complete.

After you finish the upgrade itself, you can follow the post-upgrade procedures in [Post-upgrade cleanup for Aurora MySQL version 3](AuroraMySQL.mysql80-post-upgrade.md). Finally, test your application's functionality and performance. 

If you're converting from RDS for MySQL or community MySQL, follow the migration procedure explained in [Migrating data to an Amazon Aurora MySQL DB cluster](AuroraMySQL.Migrating.md). In some cases, you might use binary log replication to synchronize your data with an Aurora MySQL version 3 cluster as part of the migration. If so, the source system must run a version that's compatible with your target DB cluster.

To make sure that your applications and administration procedures work smoothly after upgrading a cluster between major versions, do some advance planning and preparation. To see what sorts of management code to update for your AWS CLI scripts or RDS API–based applications, see [How in-place upgrades affect the parameter groups for a cluster](AuroraMySQL.Upgrading.Procedure.md#AuroraMySQL.Upgrading.ParamGroups). Also see [Changes to cluster properties between Aurora MySQL versions](AuroraMySQL.Upgrading.Procedure.md#AuroraMySQL.Upgrading.Attrs).

To learn what issues you might encounter during the upgrade, see [Troubleshooting for Aurora MySQL in-place upgrade](AuroraMySQL.Upgrading.Troubleshooting.md). For issues that might cause the upgrade to take a long time, you can test those conditions in advance and correct them.

**Note**  
An in-place upgrade involves shutting down your DB cluster while the operation takes place. Aurora MySQL performs a clean shutdown and completes outstanding operations such as undo purge. An upgrade might take a long time if there are many undo records to purge. We recommend performing the upgrade only after the history list length (HLL) is low. A generally acceptable value for the HLL is 100,000 or less. For more information, see this [blog post](https://aws.amazon.com/blogs/database/amazon-aurora-mysql-version-2-with-mysql-5-7-compatibility-to-version-3-with-mysql-8-0-compatibility-upgrade-checklist-part-2).
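
As a minimal sketch of that HLL gate (assuming you retrieve the current HLL with the standard `trx_rseg_history_len` metric query shown in the comment, and using the 100,000 threshold from the note):

```python
# Pre-upgrade HLL gate. Retrieve the current history list length with a
# query such as:
#   SELECT count FROM information_schema.INNODB_METRICS
#   WHERE name = 'trx_rseg_history_len';
# then pass the value to this check before starting the upgrade.

HLL_THRESHOLD = 100_000  # generally acceptable HLL from the note above

def hll_ok_to_upgrade(history_list_length, threshold=HLL_THRESHOLD):
    """Return True when the undo backlog is small enough that the clean
    shutdown phase of the upgrade shouldn't be prolonged by undo purge."""
    return history_list_length <= threshold
```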

### Simulating the upgrade by cloning your DB cluster


You can check application compatibility, performance, maintenance procedures, and similar considerations for the upgraded cluster. To do so, you can perform a simulation of the upgrade before doing the real upgrade. This technique can be especially useful for production clusters. Here, it's important to minimize downtime and have the upgraded cluster ready to go as soon as the upgrade has finished.

Use the following steps:

1. Create a clone of the original cluster. Follow the procedure in [Cloning a volume for an Amazon Aurora DB cluster](Aurora.Managing.Clone.md).

1. Set up a similar set of writer and reader DB instances as in the original cluster.

1. Perform an in-place upgrade of the cloned cluster. Follow the procedure in [How to perform an in-place upgrade](AuroraMySQL.Upgrading.Procedure.md).

   Start the upgrade immediately after creating the clone. That way, the cluster volume is still identical to the state of the original cluster. If the clone sits idle before you do the upgrade, Aurora performs database cleanup processes in the background. In that case, the upgrade of the clone isn't an accurate simulation of upgrading the original cluster.

1. Test application compatibility, performance, administration procedures, and so on, using the cloned cluster.

1. If you encounter any issues, adjust your upgrade plans to account for them. For example, adapt any application code to be compatible with the feature set of the higher version. Estimate how long the upgrade is likely to take based on the amount of data in your cluster. You might also choose to schedule the upgrade for a time when the cluster isn't busy.

1. After you're satisfied that your applications and workload work properly with the test cluster, you can perform the in-place upgrade for your production cluster.

1. Work to minimize the total downtime of your cluster during a major version upgrade. To do so, make sure that the workload on the cluster is low or zero at the time of the upgrade. In particular, make sure that there are no long running transactions in progress when you start the upgrade.

### Blue/Green Deployments


In some situations, your top priority is to perform an immediate switchover from the old cluster to an upgraded one. In such situations, you can use a multistep process that runs the old and new clusters side-by-side. Here, you replicate data from the old cluster to the new one until you are ready for the new cluster to take over. For details, see [Using Amazon Aurora Blue/Green Deployments for database updates](blue-green-deployments.md).

# Major version upgrade prechecks for Aurora MySQL


Upgrading MySQL from one major version to another, such as going from MySQL 5.7 to MySQL 8.0, involves some significant architectural changes that require careful planning and preparation. Unlike minor version upgrades where the focus is mainly on updating the database engine software and in some cases system tables, major MySQL upgrades often introduce fundamental changes to how the database stores and manages its metadata.

To assist you in identifying such incompatibilities, when upgrading from Aurora MySQL version 2 to version 3, Aurora runs upgrade compatibility checks (prechecks) automatically to examine objects in your database cluster and identify known incompatibilities that can block the upgrade from proceeding. For details about the Aurora MySQL prechecks, see [Precheck descriptions reference for Aurora MySQL](AuroraMySQL.upgrade-prechecks.descriptions.md). The Aurora prechecks run in addition to those run by the Community MySQL [upgrade checker utility](https://dev.mysql.com/doc/mysql-shell/8.0/en/mysql-shell-utilities-upgrade.html).

These prechecks are mandatory. You can't choose to skip them. The prechecks provide the following benefits:
+ They can reduce the possibility of running into upgrade failures that can lead to extended downtime.
+ If there are incompatibilities, Amazon Aurora prevents the upgrade from proceeding and provides a log for you to learn about them. You can then use the log to prepare your database for the upgrade to version 3 by resolving the incompatibilities. For detailed information about resolving incompatibilities, see [Preparing your installation for upgrade](https://dev.mysql.com/doc/refman/8.0/en/upgrade-prerequisites.html) in the MySQL documentation and [Upgrading to MySQL 8.0? Here is what you need to know...](https://dev.mysql.com/blog-archive/upgrading-to-mysql-8-0-here-is-what-you-need-to-know/) on the MySQL Server Blog.

  For more information about upgrading to MySQL 8.0, see [Upgrading MySQL](https://dev.mysql.com/doc/refman/8.0/en/upgrading.html) in the MySQL documentation.

The prechecks run before your DB cluster is taken offline for the major version upgrade. If the prechecks find an incompatibility, Aurora automatically cancels the upgrade before the DB instance is stopped. Aurora also generates an event for the incompatibility. For more information about Amazon Aurora events, see [Working with Amazon RDS event notification](USER_Events.md).

After the prechecks are completed, Aurora records detailed information about each incompatibility in the `upgrade-prechecks.log` file. In most cases, the log entry includes a link to the MySQL documentation for correcting the incompatibility. For more information about viewing log files, see [Viewing and listing database log files](USER_LogAccess.Procedural.Viewing.md).

**Note**  
The prechecks analyze the objects in your database. This analysis consumes resources and increases the time for the upgrade to complete. For more information on precheck performance considerations, see [Precheck performance for Aurora MySQL](#AuroraMySQL.upgrade-prechecks.performance).

**Contents**
+ [

## Precheck process for Aurora MySQL
](#AuroraMySQL.upgrade-prechecks.process)
+ [

## Precheck log format for Aurora MySQL
](#AuroraMySQL.upgrade-prechecks.log-format)
+ [

## Precheck log output examples for Aurora MySQL
](#AuroraMySQL.upgrade-prechecks.log-examples)
+ [

## Precheck performance for Aurora MySQL
](#AuroraMySQL.upgrade-prechecks.performance)
+ [

## Summary of Community MySQL upgrade prechecks
](#AuroraMySQL.upgrade-prechecks.community)
+ [

## Summary of Aurora MySQL upgrade prechecks
](#AuroraMySQL.upgrade-prechecks.ams)
+ [

# Precheck descriptions reference for Aurora MySQL
](AuroraMySQL.upgrade-prechecks.descriptions.md)
  + [

## Errors
](AuroraMySQL.upgrade-prechecks.descriptions.md#precheck-descriptions-errors)
    + [

### MySQL prechecks that report errors
](AuroraMySQL.upgrade-prechecks.descriptions.md#precheck-descriptions-errors.mysql)
    + [

### Aurora MySQL prechecks that report errors
](AuroraMySQL.upgrade-prechecks.descriptions.md#precheck-descriptions-errors.aurora)
  + [

## Warnings
](AuroraMySQL.upgrade-prechecks.descriptions.md#precheck-descriptions-warnings)
    + [

### MySQL prechecks that report warnings
](AuroraMySQL.upgrade-prechecks.descriptions.md#precheck-descriptions-warnings.mysql)
    + [

### Aurora MySQL prechecks that report warnings
](AuroraMySQL.upgrade-prechecks.descriptions.md#precheck-descriptions-warnings.aurora)
  + [

## Notices
](AuroraMySQL.upgrade-prechecks.descriptions.md#precheck-descriptions-notices)
  + [

## Errors, warnings, or notices
](AuroraMySQL.upgrade-prechecks.descriptions.md#precheck-descriptions-all)

## Precheck process for Aurora MySQL


As described previously, the Aurora MySQL upgrade process involves running compatibility checks (prechecks) on your database before the major version upgrade can proceed.

For in-place upgrades, the prechecks run on your writer DB instance while it's online. If the precheck succeeds, the upgrade proceeds. If errors are found, they're logged in the `upgrade-prechecks.log` file and the upgrade is canceled. Before attempting the upgrade again, resolve any errors returned in the `upgrade-prechecks.log` file.

For snapshot-restore upgrades, the precheck runs during the restore process. If it succeeds, your database is upgraded to the new Aurora MySQL version. If errors are found, they're logged in the `upgrade-prechecks.log` file and the upgrade is canceled. Before attempting the upgrade again, resolve any errors returned in the `upgrade-prechecks.log` file.

For more information, see [Finding the reasons for Aurora MySQL major version upgrade failures](AuroraMySQL.Upgrading.failure-events.md) and [Precheck descriptions reference for Aurora MySQL](AuroraMySQL.upgrade-prechecks.descriptions.md).

To monitor precheck status, you can view the following events on your DB cluster.


| Precheck status | Event message | Action | 
| --- | --- | --- | 
|  Started  |  Upgrade preparation in progress: Starting online upgrade prechecks.  | None | 
|  Failed  |  Database cluster is in a state that cannot be upgraded: Upgrade prechecks failed. For more details, see the upgrade-prechecks.log file. For more information on troubleshooting the cause of the upgrade failure, see [https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Upgrading.Troubleshooting.html](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Upgrading.Troubleshooting.html)  |  Review `upgrade-prechecks.log` for errors.  Remediate errors. Retry the upgrade.  | 
|  Succeeded  |  Upgrade preparation in progress: Completed online upgrade prechecks.  |  Precheck succeeded with no errors returned. Review `upgrade-prechecks.log` for warnings and notices.  | 

For more information on viewing events, see [Viewing Amazon RDS events](USER_ListEvents.md).
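
The precheck status can be derived from the event messages in the table above. The following minimal sketch classifies an event message by matching those texts; in practice the messages would typically come from the RDS `DescribeEvents` operation (for example, via boto3).

```python
# Map an RDS event message to the precheck status, using the message texts
# from the table above. Returns None for unrelated events.

def precheck_status(event_message):
    """Return 'Started', 'Failed', 'Succeeded', or None."""
    if "Starting online upgrade prechecks" in event_message:
        return "Started"
    if "Upgrade prechecks failed" in event_message:
        return "Failed"
    if "Completed online upgrade prechecks" in event_message:
        return "Succeeded"
    return None
```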

## Precheck log format for Aurora MySQL


After the upgrade compatibility checks (prechecks) are complete, you can review the `upgrade-prechecks.log` file. The log file contains the results, affected objects, and remediation information for each precheck.

Errors block the upgrade. You must resolve them before retrying the upgrade.

Warnings and notices are less critical, but we still recommend that you review them carefully to make sure that there are no compatibility issues with the application workload. Address any identified issues soon.

The log file has the following format:
+ `targetVersion` – The MySQL version with which the target Aurora MySQL version is compatible.
+ `auroraServerVersion` – The Aurora MySQL version on which the precheck was run.
+ `auroraTargetVersion` – The Aurora MySQL version to which you're upgrading.
+ `checksPerformed` – Contains the list of prechecks performed.
+ `id` – The name of the precheck being run.
+ `title` – A description of the precheck being run.
+ `status` – This doesn't indicate whether the precheck succeeded or failed, but shows the status of the precheck query:
  + `OK` – The precheck query ran and completed successfully.
  + `ERROR` – The precheck query failed to run. This can occur because of issues such as resource constraints, unexpected instance restarts, or the compatibility precheck query being interrupted.

    For more information, see [this example](#precheck-query-failed).
+ `description` – A general description of the incompatibility, and how to remediate the issue.
+ `documentationLink` – Where applicable, a link to relevant Aurora MySQL or MySQL documentation is noted here. For more information, see [Precheck descriptions reference for Aurora MySQL](AuroraMySQL.upgrade-prechecks.descriptions.md).
+ `detectedProblems` – If the precheck returns an error, warning, or notice, this shows details of the incompatibility, and incompatible objects where applicable:
  + `level` – The level of the incompatibility detected by the precheck. Valid levels are the following:
    + `Error` – The upgrade can't proceed until you resolve the incompatibility.
    + `Warning` – The upgrade can proceed, but a deprecated object, syntax, or configuration was detected. Review warnings carefully, and resolve them soon to avoid issues in future releases. 
    + `Notice` – The upgrade can proceed, but a deprecated object, syntax, or configuration was detected. Review notices carefully, and resolve them soon to avoid issues in future releases. 
  + `dbObject` – The name of the database object in which the incompatibility was detected.
  + `description` – A detailed description of the incompatibility, and how to remediate the issue.
+ `errorCount` – The number of incompatibility errors detected. These block the upgrade.
+ `warningCount` – The number of incompatibility warnings detected. These don't block the upgrade, but address them soon to avoid problems in future releases.
+ `noticeCount` – The number of incompatibility notices detected. These don't block the upgrade, but address them soon to avoid problems in future releases.
+ `Summary` – A summary of the precheck compatibility error, warning, and notice counts.
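
Given that format, the per-level problem counts can be tallied with a short script. The sketch below walks `checksPerformed` and `detectedProblems` as described above; the sample document is a hypothetical, abbreviated log, not real Aurora output.

```python
import json

# Summarize an upgrade-prechecks.log file using the fields described above
# (checksPerformed -> detectedProblems -> level).

def summarize_prechecks(log_text):
    """Return {'Error': n, 'Warning': n, 'Notice': n} across all prechecks."""
    log = json.loads(log_text)
    counts = {"Error": 0, "Warning": 0, "Notice": 0}
    for check in log.get("checksPerformed", []):
        for problem in check.get("detectedProblems", []):
            level = problem.get("level")
            if level in counts:
                counts[level] += 1
    return counts

# Hypothetical, abbreviated sample log for illustration.
sample = """{
  "checksPerformed": [
    {"id": "zeroDatesCheck", "status": "OK",
     "detectedProblems": [{"level": "Warning", "dbObject": "global.sql_mode"},
                          {"level": "Warning", "dbObject": "session.sql_mode"}]},
    {"id": "reservedKeywordsCheck", "status": "OK",
     "detectedProblems": [{"level": "Error", "dbObject": "test.rank"}]}
  ]
}"""
```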

## Precheck log output examples for Aurora MySQL


The following examples show the precheck log output that you might see. For details of the prechecks that are run, see [Precheck descriptions reference for Aurora MySQL](AuroraMySQL.upgrade-prechecks.descriptions.md).

**Precheck status OK, no incompatibility detected**  
The precheck query completed successfully. No incompatibilities were detected.  

```
{
  "id": "auroraUpgradeCheckIndexLengthLimitOnTinytext",
  "title": "Check for the tables with indexes defined with prefix length greater than 255 bytes on tiny text columns",
  "status": "OK",
  "detectedProblems": []
},
```

**Precheck status OK, error detected**  
The precheck query completed successfully. One error was detected.  

```
{
  "id": "auroraUpgradeCheckForPrefixIndexOnGeometryColumns",
  "title": "Check for geometry columns on prefix indexes",
  "status": "OK",
  "description": "Consider dropping the prefix indexes of geometry columns and restart the upgrade.",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "test25.sbtest1",
        "description": "Table `test25`.`sbtest1` has an index `idx_t1` on geometry column/s. Mysql 8.0 does not support this type of index on a geometry column https://dev.mysql.com/worklog/task/?id=11808. To upgrade to MySQL 8.0, Run 'DROP INDEX `idx_t1` ON `test25`.`sbtest1`;"
      }
  ]
}
```

**Precheck status OK, warning detected**  
Warnings can be returned when a precheck is successful or unsuccessful.  
Here the precheck query completed successfully. Two warnings were detected.  

```
{
  "id": "zeroDatesCheck",
  "title": "Zero Date, Datetime, and Timestamp values",
  "status": "OK",
  "description": "Warning: By default zero date/datetime/timestamp values are no longer allowed in MySQL, as of 5.7.8 NO_ZERO_IN_DATE and NO_ZERO_DATE are included in SQL_MODE by default. These modes should be used with strict mode as they will be merged with strict mode in a future release. If you do not include these modes in your SQL_MODE setting, you are able to insert date/datetime/timestamp values that contain zeros. It is strongly advised to replace zero values with valid ones, as they may not work correctly in the future.",
  "documentationLink": "https://lefred.be/content/mysql-8-0-and-wrong-dates/",
  "detectedProblems": [
      {
        "level": "Warning",
        "dbObject": "global.sql_mode",
        "description": "does not contain either NO_ZERO_DATE or NO_ZERO_IN_DATE which allows insertion of zero dates"
      },
      {
        "level": "Warning",
        "dbObject": "session.sql_mode",
        "description": " of 10 session(s) does not contain either NO_ZERO_DATE or NO_ZERO_IN_DATE which allows insertion of zero dates"
      }
    ]
}
```
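
Before upgrading, you can look for zero date values yourself. The following statements are a sketch; `mydb.orders` and `created_at` are placeholder names for your own schema, and if strict SQL mode rejects the zero-date literal, you can temporarily relax `sql_mode` for the session.

```
-- Check whether the zero-date modes are already part of sql_mode
SELECT @@GLOBAL.sql_mode;

-- Find rows with zero dates in a column you suspect
-- (mydb.orders and created_at are hypothetical names)
SELECT COUNT(*)
FROM mydb.orders
WHERE created_at = '0000-00-00 00:00:00';

-- Replace zero values with a valid date or NULL before upgrading
UPDATE mydb.orders
SET created_at = NULL
WHERE created_at = '0000-00-00 00:00:00';
```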

**Precheck status ERROR, no incompatibilities reported**  
The precheck query failed with an error, so incompatibilities couldn't be verified.  

```
{
  "id": "auroraUpgradeCheckForDatafilePathInconsistency",
  "title": "Check for inconsistency related to ibd file path.",
  "status": "ERROR",
  "description": "Can't connect to MySQL server on 'localhost:3306' (111) at 13/08/2024 12:22:20 UTC. This failure can occur due to low memory available on the instance for executing upgrade prechecks. Please check 'FreeableMemory' Cloudwatch metric to verify the available memory on the instance while executing prechecks. If instance ran out of memory, we recommend to retry the upgrade on a higher instance class."
}
```
This failure can occur because of an unexpected instance restart or a compatibility precheck query being interrupted on the database while running. For example, on smaller DB instance classes, you might experience this when the available memory on the instance runs low.  
You can use the `FreeableMemory` Amazon CloudWatch metric to verify the available memory on the instance while running prechecks. If the instance ran out of memory, we recommend retrying the upgrade on a larger DB instance class. In some cases, you can use a [Blue/Green deployment](blue-green-deployments-overview.md). This allows prechecks and upgrades to run on the “green” DB cluster independently of the production workload, which also consumes system resources.  
For more information, see [Troubleshooting memory usage issues for Aurora MySQL databases](ams-workload-memory.md).

**Precheck summary, one error and three warnings detected**  
The compatibility prechecks also contain information on the source and target Aurora MySQL versions, and a summary of error, warning, and notice counts at the end of the precheck output.  
For example, the following output shows that an attempt was made to upgrade from Aurora MySQL 2.11.6 to Aurora MySQL 3.07.1. The upgrade returned one error, three warnings, and no notices. Because upgrades can't proceed when an error is returned, you must resolve the [routineSyntaxCheck](AuroraMySQL.upgrade-prechecks.descriptions.md#routineSyntaxCheck) compatibility issue and retry the upgrade.  

```
{
  "serverAddress": "/tmp/mysql.sock",
  "serverVersion": "5.7.12 - MySQL Community Server (GPL)",
  "targetVersion": "8.0.36",
  "auroraServerVersion": "2.11.6",
  "auroraTargetVersion": "3.07.1",
  "outfilePath": "/rdsdbdata/tmp/PreChecker.log",
  "checksPerformed": [{
      ... output for each individual precheck ...
      .
      .
      {
        "id": "oldTemporalCheck",
        "title": "Usage of old temporal type",
        "status": "OK",
          "detectedProblems": []
      },
      {
        "id": "routinesSyntaxCheck",
        "title": "MySQL 8.0 syntax check for routine-like objects",
        "status": "OK",
        "description": "The following objects did not pass a syntax check with the latest MySQL 8.0 grammar. A common reason is that they reference names that conflict with new reserved keywords. You must update these routine definitions and `quote` any such references before upgrading.",
        "documentationLink": "https://dev.mysql.com/doc/refman/en/keywords.html",
        "detectedProblems": [{
            "level": "Error",
            "dbObject": "test.select_res_word",
            "description": "at line 2,18: unexpected token 'except'"
        }]
      },
      .
      .
      .
      {
        "id": "zeroDatesCheck",
        "title": "Zero Date, Datetime, and Timestamp values",
        "status": "OK",
        "description": "Warning: By default zero date/datetime/timestamp values are no longer allowed in MySQL, as of 5.7.8 NO_ZERO_IN_DATE and NO_ZERO_DATE are included in SQL_MODE by default. These modes should be used with strict mode as they will be merged with strict mode in a future release. If you do not include these modes in your SQL_MODE setting, you are able to insert date/datetime/timestamp values that contain zeros. It is strongly advised to replace zero values with valid ones, as they may not work correctly in the future.",
        "documentationLink": "https://lefred.be/content/mysql-8-0-and-wrong-dates/",
        "detectedProblems": [{
            "level": "Warning",
            "dbObject": "global.sql_mode",
            "description": "does not contain either NO_ZERO_DATE or NO_ZERO_IN_DATE which allows insertion of zero dates"
            },
            {
            "level": "Warning",
            "dbObject": "session.sql_mode",
            "description": " of 8 session(s) does not contain either NO_ZERO_DATE or NO_ZERO_IN_DATE which allows insertion of zero dates"
            }
          ]
       },
       .
       .
       .
  }],
  "errorCount": 1,
  "warningCount": 3,
  "noticeCount": 0,
  "Summary": "1 errors were found. Please correct these issues before upgrading to avoid compatibility issues."
}
```

## Precheck performance for Aurora MySQL


The compatibility prechecks run before the DB instance is taken offline for the upgrade, so under regular circumstances they don't cause DB instance downtime while running. However, they can impact the application workload running on the writer DB instance. The prechecks access the data dictionary through [information\_schema](https://dev.mysql.com/doc/mysql-infoschema-excerpt/5.7/en/information-schema-introduction.html) tables, which can be slow if there are many database objects. Consider the following factors:
+ Precheck duration varies with the number of database objects such as tables, columns, routines, and constraints. Prechecks on DB clusters with a large number of objects can take longer to run.

  For example, the [removedFunctionsCheck](AuroraMySQL.upgrade-prechecks.descriptions.md#removedFunctionsCheck) can take longer and use more resources based on the number of [stored objects](https://dev.mysql.com/doc/refman/5.7/en/stored-objects.html).
+ For in-place upgrades, using a larger DB instance class (for example, db.r5.24xlarge or db.r6g.16xlarge) can help the upgrade complete faster by using more CPU. You can downsize after the upgrade.
+ Queries on the `information_schema` across multiple databases can be slow, especially with many objects and on smaller DB instances. In such cases, consider using cloning, snapshot restore, or a [Blue/Green deployment](blue-green-deployments-overview.md) for upgrades.
+ Precheck resource usage (CPU, memory) can increase with more objects, leading to longer run times on smaller DB instances. In such cases, consider testing using cloning, snapshot restore, or a Blue/Green deployment for upgrades.

  If the prechecks fail due to lack of resources, you can detect this in the precheck log using the status output:

  ```
  "status": "ERROR",
  ```

For more information, see [How the Aurora MySQL in-place major version upgrade works](AuroraMySQL.Updates.MajorVersionUpgrade.md#AuroraMySQL.Upgrading.Sequence) and [Planning a major version upgrade for an Aurora MySQL cluster](AuroraMySQL.Updates.MajorVersionUpgrade.md#AuroraMySQL.Upgrading.Planning).
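
To get a rough sense of how much work the prechecks face, you can count the objects they need to scan. This is an approximation for planning purposes, not an official sizing formula.

```
-- Approximate object counts that influence precheck duration
SELECT (SELECT COUNT(*) FROM information_schema.tables)   AS tables_count,
       (SELECT COUNT(*) FROM information_schema.columns)  AS columns_count,
       (SELECT COUNT(*) FROM information_schema.routines) AS routines_count,
       (SELECT COUNT(*) FROM information_schema.triggers) AS triggers_count;
```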

## Summary of Community MySQL upgrade prechecks


The following is a general list of incompatibilities between MySQL 5.7 and 8.0:
+ Your MySQL 5.7–compatible DB cluster must not use features that aren't supported in MySQL 8.0.

  For more information, see [ Features removed in MySQL 8.0](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals) in the MySQL documentation.
+ There must be no keyword or reserved word violations. Some keywords might be reserved in MySQL 8.0 that were not reserved previously.

  For more information, see [Keywords and reserved words](https://dev.mysql.com/doc/refman/8.0/en/keywords.html) in the MySQL documentation.
+ For improved Unicode support, consider converting objects that use the `utf8mb3` charset to use the `utf8mb4` charset. The `utf8mb3` character set is deprecated. Also, consider using `utf8mb4` for character set references instead of `utf8`, because currently `utf8` is an alias for the `utf8mb3` charset.

  For more information, see [ The utf8mb3 character set (3-byte UTF-8 unicode encoding)](https://dev.mysql.com/doc/refman/8.0/en/charset-unicode-utf8mb3.html) in the MySQL documentation.
+ There must be no InnoDB tables with a nondefault row format.
+ There must be no `ZEROFILL` or `display` length type attributes.
+ There must be no partitioned table that uses a storage engine that does not have native partitioning support.
+ There must be no tables in the MySQL 5.7 `mysql` system database that have the same name as a table used by the MySQL 8.0 data dictionary.
+ There must be no tables that use obsolete data types or functions.
+ There must be no foreign key constraint names longer than 64 characters.
+ There must be no obsolete SQL modes defined in your `sql_mode` system variable setting.
+ There must be no tables or stored procedures with individual `ENUM` or `SET` column elements that exceed 255 characters in length.
+ There must be no table partitions that reside in shared InnoDB tablespaces.
+ There must be no circular references in tablespace data file paths.
+ There must be no queries and stored program definitions that use `ASC` or `DESC` qualifiers for `GROUP BY` clauses.
+ There must be no removed system variables, and system variables must use the new default values for MySQL 8.0.
+ There must be no zero (`0`) date, datetime, or timestamp values.
+ There must be no schema inconsistencies resulting from file removal or corruption.
+ There must be no table names that contain the `FTS` character string.
+ There must be no InnoDB tables that belong to a different engine.
+ There must be no table or schema names that are invalid for MySQL 5.7.
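
Several of the checks above can be approximated ahead of time with `information_schema` queries. As one sketch, the following lists tables still using the deprecated `utf8`/`utf8mb3` character set (reported with the `utf8_` collation prefix in MySQL 5.7) and shows a conversion; `mydb.mytable` is a placeholder, and you should test the conversion on a copy of the table first.

```
-- Tables whose default collation is based on utf8mb3 (reported as utf8_* in 5.7)
SELECT table_schema, table_name, table_collation
FROM information_schema.tables
WHERE table_collation LIKE 'utf8\_%'
  AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys');

-- Convert a table to utf8mb4 (mydb.mytable is a placeholder)
ALTER TABLE mydb.mytable CONVERT TO CHARACTER SET utf8mb4;
```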

For details of the prechecks that are run, see [Precheck descriptions reference for Aurora MySQL](AuroraMySQL.upgrade-prechecks.descriptions.md).

For more information about upgrading to MySQL 8.0, see [Upgrading MySQL](https://dev.mysql.com/doc/refman/8.0/en/upgrading.html) in the MySQL documentation. For a general description of changes in MySQL 8.0, see [What is new in MySQL 8.0](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html) in the MySQL documentation.

## Summary of Aurora MySQL upgrade prechecks


Aurora MySQL has its own specific requirements when upgrading from version 2 to version 3, including the following:
+ There must be no deprecated SQL syntax, such as `SQL_CACHE`, `SQL_NO_CACHE`, and `QUERY_CACHE`, in views, routines, triggers, and events.
+ There must be no `FTS_DOC_ID` column present on any table without the `FTS` index.
+ There must be no column definition mismatch between the InnoDB data dictionary and the actual table definition.
+ All database and table names must be lowercase when the `lower_case_table_names` parameter is set to `1`.
+ Events and triggers must not have a missing or empty definer or an invalid creation context.
+ All trigger names in a database must be unique.
+ DDL recovery and Fast DDL aren't supported in Aurora MySQL version 3. There must be no artifacts in databases related to these features.
+ Tables with the `REDUNDANT` or `COMPACT` row format can't have indexes larger than 767 bytes.
+ The prefix length of indexes defined on `tiny` text columns can't exceed 255 bytes. With the `utf8mb4` character set, this limits the prefix length supported to 63 characters.

  A larger prefix length was allowed in MySQL 5.7 using the `innodb_large_prefix` parameter. This parameter is deprecated in MySQL 8.0.
+ There must be no InnoDB metadata inconsistency in the `mysql.host` table.
+ There must be no column data type mismatch in system tables.
+ There must be no XA transactions in the `prepared` state.
+ Column names in views can't be longer than 64 characters.
+ Special characters in stored procedures can't be inconsistent.
+ Tables can't have data file path inconsistency.

For details of the prechecks that are run, see [Precheck descriptions reference for Aurora MySQL](AuroraMySQL.upgrade-prechecks.descriptions.md).
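
You can check some of these conditions manually before starting the upgrade. For example, `XA RECOVER` lists XA transactions left in the `prepared` state, which must be committed or rolled back first; the xid value below is a placeholder.

```
-- List XA transactions in the prepared state
XA RECOVER;

-- Commit or roll back each one by its xid (the xid value is a placeholder)
XA COMMIT 'example-xid';
-- or: XA ROLLBACK 'example-xid';
```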

# Precheck descriptions reference for Aurora MySQL


The upgrade prechecks for Aurora MySQL are described here in detail.

**Contents**
+ [

## Errors
](#precheck-descriptions-errors)
  + [

### MySQL prechecks that report errors
](#precheck-descriptions-errors.mysql)
  + [

### Aurora MySQL prechecks that report errors
](#precheck-descriptions-errors.aurora)
+ [

## Warnings
](#precheck-descriptions-warnings)
  + [

### MySQL prechecks that report warnings
](#precheck-descriptions-warnings.mysql)
  + [

### Aurora MySQL prechecks that report warnings
](#precheck-descriptions-warnings.aurora)
+ [

## Notices
](#precheck-descriptions-notices)
+ [

## Errors, warnings, or notices
](#precheck-descriptions-all)

## Errors


The following prechecks generate errors when the precheck fails, and the upgrade can't proceed.

**Topics**
+ [

### MySQL prechecks that report errors
](#precheck-descriptions-errors.mysql)
+ [

### Aurora MySQL prechecks that report errors
](#precheck-descriptions-errors.aurora)

### MySQL prechecks that report errors


The following prechecks are from Community MySQL:
+ [checkTableMysqlSchema](#checkTableMysqlSchema)
+ [circularDirectoryCheck](#circularDirectoryCheck)
+ [columnsWhichCannotHaveDefaultsCheck](#columnsWhichCannotHaveDefaultsCheck)
+ [depreciatedSyntaxCheck](#depreciatedSyntaxCheck)
+ [engineMixupCheck](#engineMixupCheck)
+ [enumSetElementLengthCheck](#enumSetElementLengthCheck)
+ [foreignKeyLengthCheck](#foreignKeyLengthCheck)
+ [getDuplicateTriggers](#getDuplicateTriggers)
+ [getEventsWithNullDefiner](#getEventsWithNullDefiner)
+ [getMismatchedMetadata](#getMismatchedMetadata)
+ [getTriggersWithNullDefiner](#getTriggersWithNullDefiner)
+ [getValueOfVariablelower\_case\_table\_names](#getValueOfVariable)
+ [groupByAscSyntaxCheck](#groupByAscSyntaxCheck)
+ [mysqlEmptyDotTableSyntaxCheck](#mysqlEmptyDotTableSyntaxCheck)
+ [mysqlIndexTooLargeCheck](#mysqlIndexTooLargeCheck)
+ [mysqlInvalid57NamesCheck](#mysqlInvalid57NamesCheck)
+ [mysqlOrphanedRoutinesCheck](#mysqlOrphanedRoutinesCheck)
+ [mysqlSchemaCheck](#mysqlSchemaCheck)
+ [nonNativePartitioningCheck](#nonNativePartitioningCheck)
+ [oldTemporalCheck](#oldTemporalCheck)
+ [partitionedTablesInSharedTablespaceCheck](#partitionedTablesInSharedTablespace)
+ [removedFunctionsCheck](#removedFunctionsCheck)
+ [routineSyntaxCheck](#routineSyntaxCheck)
+ [schemaInconsistencyCheck](#schemaInconsistencyCheck)

**checkTableMysqlSchema**  
**Precheck level: Error**  
**Issues reported by the `check table x for upgrade` command for the `mysql` schema**  
Before starting the upgrade to Aurora MySQL version 3, `check table for upgrade` is run on each table in the `mysql` schema on the DB instance. The `check table for upgrade` command examines tables for any potential issues that might arise during an upgrade to a newer version of MySQL. Running this command before attempting an upgrade can help identify and resolve any incompatibilities ahead of time, making the actual upgrade process smoother.  
This command performs various checks on each table, such as the following:  
+ Verifying that the table structure and metadata are compatible with the target MySQL version
+ Checking for any deprecated or removed features used by the table
+ Ensuring that the table can be properly upgraded without data loss
For more information, see [CHECK TABLE statement](https://dev.mysql.com/doc/refman/5.7/en/check-table.html) in the MySQL documentation.  
**Example output:**  

```
{
  "id": "checkTableMysqlSchema",
  "title": "Issues reported by 'check table x for upgrade' command for mysql schema.",
  "status": "OK",
  "detectedProblems": []
}
```
The output for this precheck depends on the error encountered, and when it's encountered, because `check table for upgrade` performs multiple checks.  
If you encounter any errors with this precheck, open a case with [AWS Support](https://aws.amazon.com/support) to request that the metadata inconsistency be resolved. Alternatively, you can retry the upgrade by performing a logical dump, then restoring to a new Aurora MySQL version 3 DB cluster.

**circularDirectoryCheck**  
**Precheck level: Error**  
**Circular directory references in tablespace data file paths**  
As of [MySQL 8.0.17](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-17.html), the `CREATE TABLESPACE ... ADD DATAFILE` clause no longer permits circular directory references. To avoid upgrade issues, remove any circular directory references from tablespace data file paths before upgrading to Aurora MySQL version 3.  
**Example output:**  

```
{
  "id": "circularDirectory",
  "title": "Circular directory references in tablespace data file paths",
  "status": "OK",
  "description": "Error: Following tablespaces contain circular directory references (e.g. the reference '/../') in data file paths which as of MySQL 8.0.17 are not permitted by the CREATE TABLESPACE ... ADD DATAFILE clause. An exception to the restriction exists on Linux, where a circular directory reference is permitted if the preceding directory is a symbolic link. To avoid upgrade issues, remove any circular directory references from tablespace data file paths before upgrading.",
  "documentationLink": "https://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html#upgrade-innodb-changes",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "ts2",
        "description": "circular reference in datafile path: '/home/ec2-user/dbdata/mysql_5_7_44/../ts2.ibd'",
        "dbObjectType": "Tablespace"
      }
  ]
}
```
If you receive this error, rebuild your tables using a [file-per-table tablespace](https://dev.mysql.com/doc/refman/8.0/en/innodb-file-per-table-tablespaces.html). Use default file paths for all tablespace and table definitions.  
Aurora MySQL doesn't support general tablespaces or `CREATE TABLESPACE` commands.  
Before rebuilding tablespaces, see [Online DDL operations](https://dev.mysql.com/doc/refman/5.7/en/innodb-online-ddl-operations.html) in the MySQL documentation to understand the effects of locking and data movement on foreground transactions.
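
As a sketch, a null `ALTER TABLE` rebuilds a table into a file-per-table tablespace at the default path. The table name is a placeholder, and the rebuild copies data, so review the online DDL notes above first.

```
-- On Aurora, innodb_file_per_table is managed through the DB cluster
-- parameter group. Rebuilding the table moves its data file to the
-- default file-per-table path (mydb.mytable is a placeholder):
ALTER TABLE mydb.mytable ENGINE=InnoDB;
```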
After rebuilding, the precheck passes, allowing the upgrade to proceed.  

```
{
  "id": "circularDirectoryCheck",
  "title": "Circular directory references in tablespace data file paths",
  "status": "OK",
  "detectedProblems": []
},
```

**columnsWhichCannotHaveDefaultsCheck**  
**Precheck level: Error**  
**Columns that can't have default values**  
Before MySQL 8.0.13, `BLOB`, `TEXT`, `GEOMETRY`, and `JSON` columns can't have [default values](https://dev.mysql.com/doc/refman/5.7/en/data-type-defaults.html). Remove any default clauses on these columns before upgrading to Aurora MySQL version 3. For more information on changes to the default handling for these data types, see [Data type default values](https://dev.mysql.com/doc/refman/8.0/en/data-type-defaults.html) in the MySQL documentation.  
**Example output:**  

```
{
  "id": "columnsWhichCannotHaveDefaultsCheck",
  "title": "Columns which cannot have default values",
  "status": "OK",
  "description": "Error: The following columns are defined as either BLOB, TEXT, GEOMETRY or JSON and have a default value set. These data types cannot have default values in MySQL versions prior to 8.0.13, while starting with 8.0.13, the default value must be specified as an expression. In order to fix this issue, please use the ALTER TABLE ... ALTER COLUMN ... DROP DEFAULT statement.",
  "documentationLink": "https://dev.mysql.com/doc/refman/8.0/en/data-type-defaults.html#data-type-defaults-explicit",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "test.test_blob_default.geo_col",
        "description": "geometry"
      }
  ]
},
```
The precheck returns an error because the `geo_col` column in the `test.test_blob_default` table is using a `BLOB`, `TEXT`, `GEOMETRY`, or `JSON` data type with a default value specified.  
Looking at the table definition, we can see that the `geo_col` column is defined as `geo_col geometry NOT NULL default ''`.  

```
mysql> show create table test_blob_default\G
*************************** 1. row ***************************
       Table: test_blob_default
Create Table: CREATE TABLE `test_blob_default` (
  `geo_col` geometry NOT NULL DEFAULT ''
) ENGINE=InnoDB DEFAULT CHARSET=latin1
```
Remove this default clause to allow the precheck to pass.  
Before running `ALTER TABLE` statements or rebuilding tablespaces, see [Online DDL operations](https://dev.mysql.com/doc/refman/5.7/en/innodb-online-ddl-operations.html) in the MySQL documentation to understand the effects of locking and data movement on foreground transactions.

```
mysql> ALTER TABLE test_blob_default modify COLUMN geo_col geometry NOT NULL;
Query OK, 0 rows affected (0.02 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> show create table test_blob_default\G
*************************** 1. row ***************************
       Table: test_blob_default
Create Table: CREATE TABLE `test_blob_default` (
  `geo_col` geometry NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
```
The precheck passes, and you can retry the upgrade.  

```
{
  "id": "columnsWhichCannotHaveDefaultsCheck",
  "title": "Columns which cannot have default values",
  "status": "OK",
  "detectedProblems": []
},
```

**depreciatedSyntaxCheck**  
**Precheck level: Error**  
**Usage of depreciated keywords in definition**  
MySQL 8.0 has removed the [query cache](https://dev.mysql.com/doc/refman/5.7/en/query-cache.html). As a result, some query cache–specific SQL syntax has been removed. If any of your database objects contain the `QUERY CACHE`, `SQL_CACHE`, or `SQL_NO_CACHE` keywords, a precheck error is returned. To resolve this issue, re-create these objects, removing the mentioned keywords.  
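
To find affected objects across all databases before the precheck runs, you can search routine and view definitions for the removed keywords. The following is a rough sketch; note that matching `SQL_CACHE` also matches `SQL_NO_CACHE`, so inspect each result.

```
-- Routines whose bodies reference removed query cache keywords
SELECT routine_schema, routine_name, routine_type
FROM information_schema.routines
WHERE routine_definition REGEXP 'SQL_NO_CACHE|SQL_CACHE|QUERY CACHE';

-- Views can be searched the same way
SELECT table_schema, table_name
FROM information_schema.views
WHERE view_definition REGEXP 'SQL_NO_CACHE|SQL_CACHE|QUERY CACHE';
```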
**Example output:**  

```
{
  "id": "depreciatedSyntaxCheck",
  "title": "Usage of depreciated keywords in definition",
  "status": "OK",
  "description": "Error: The following DB objects contain keywords like 'QUERY CACHE', 'SQL_CACHE', 'SQL_NO_CACHE' which are not supported in major version 8.0. It is recommended to drop these DB objects or rebuild without any of the above keywords before upgrade.",
  "detectedProblems": [
      {
"level": "Error",
"dbObject": "test.no_query_cache_check",
"description": "PROCEDURE uses depreciated words in definition"
      }
  ]
}
```
The precheck reports that the `test.no_query_cache_check` stored procedure is using one of the removed keywords. Looking at the procedure definition, we can see that it uses `SQL_NO_CACHE`.  

```
mysql> show create procedure test.no_query_cache_check\G
*************************** 1. row ***************************
           Procedure: no_query_cache_check
            sql_mode:
    Create Procedure: CREATE DEFINER=`reinvent`@`%` PROCEDURE `no_query_cache_check`()
BEGIN
    SELECT SQL_NO_CACHE k from sbtest1 where id > 10 and id < 20 group by k asc;
END
character_set_client: utf8mb4
collation_connection: utf8mb4_0900_ai_ci
  Database Collation: latin1_swedish_ci
1 row in set (0.00 sec)
```
Remove the keyword.  

```
mysql> drop procedure test.no_query_cache_check;
Query OK, 0 rows affected (0.01 sec)

mysql> delimiter //

mysql> CREATE DEFINER=`reinvent`@`%` PROCEDURE `no_query_cache_check`() BEGIN     SELECT k from sbtest1 where id > 10 and id < 20 group by k asc; END//
Query OK, 0 rows affected (0.00 sec)

mysql> delimiter ;
```
After removing the keyword, the precheck completes successfully.  

```
{
  "id": "depreciatedSyntaxCheck",
  "title": "Usage of depreciated keywords in definition",
  "status": "OK",
  "detectedProblems": []
}
```

**engineMixupCheck**  
**Precheck level: Error**  
**Tables recognized by InnoDB that belong to a different engine**  
Similar to [schemaInconsistencyCheck](#schemaInconsistencyCheck), this precheck verifies that table metadata in MySQL is consistent before proceeding with the upgrade.   
If you encounter any errors with this precheck, open a case with [AWS Support](https://aws.amazon.com/support) to request that the metadata inconsistency be resolved. Alternatively, you can retry the upgrade by performing a logical dump, then restoring to a new Aurora MySQL version 3 DB cluster.  
**Example output:**  

```
{
  "id": "engineMixupCheck",
  "title": "Tables recognized by InnoDB that belong to a different engine",
  "status": "OK",
  "description": "Error: Following tables are recognized by InnoDB engine while the SQL layer believes they belong to a different engine. Such situation may happen when one removes InnoDB table files manually from the disk and creates e.g. a MyISAM table with the same name.\n\nA possible way to solve this situation is to e.g. in case of MyISAM table:\n\n1. Rename the MyISAM table to a temporary name (RENAME TABLE).\n2. Create some dummy InnoDB table (its definition does not need to match), then copy (copy, not move) and rename the dummy .frm and .ibd files to the orphan name using OS file commands.\n3. The orphan table can be then dropped (DROP TABLE), as well as the dummy table.\n4. Finally the MyISAM table can be renamed back to its original name.",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "mysql.general_log_backup",
        "description": "recognized by the InnoDB engine but belongs to CSV"
      }
  ]
}
```

**enumSetElementLengthCheck**  
**Precheck level: Error**  
**`ENUM` and `SET` column definitions containing elements longer than 255 characters**  
Tables and stored procedures must not have individual `ENUM` or `SET` column elements exceeding 255 characters or 1020 bytes. Before MySQL 8.0, the limit applied to the combined length of all elements (64K). MySQL 8.0 limits each individual element to 255 characters or 1020 bytes (the byte limit accommodates multibyte character sets). If the `enumSetElementLengthCheck` precheck fails, modify any elements exceeding these limits before retrying the upgrade.  
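
There's no per-element length column in `information_schema`, but the full `ENUM`/`SET` definition text is available in `COLUMN_TYPE`. The following rough query flags columns whose entire definition is long enough to possibly contain an oversized element; inspect each result manually.

```
-- Candidate ENUM/SET columns; COLUMN_TYPE holds the full definition text
SELECT table_schema, table_name, column_name,
       CHAR_LENGTH(column_type) AS def_length
FROM information_schema.columns
WHERE data_type IN ('enum', 'set')
  AND CHAR_LENGTH(column_type) > 255;
```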
**Example output:**  

```
{
  "id": "enumSetElementLengthCheck",
  "title": "ENUM/SET column definitions containing elements longer than 255 characters",
  "status": "OK",
  "description": "Error: The following columns are defined as either ENUM or SET and contain at least one element longer that 255 characters. They need to be altered so that all elements fit into the 255 characters limit.",
  "documentationLink": "https://dev.mysql.com/doc/refman/8.0/en/string-type-overview.html",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "test.large_set.s",
        "description": "SET contains element longer than 255 characters"
      }
  ]
},
```
The precheck reports an error because the column `s` in the `test.large_set` table contains a `SET` element larger than 255 characters.  
After reducing the `SET` size for this column, the precheck passes, allowing the upgrade to proceed.  

```
{
  "id": "enumSetElementLengthCheck",
  "title": "ENUM/SET column definitions containing elements longer than 255 characters",
  "status": "OK",
  "detectedProblems": []
},
```

**foreignKeyLengthCheck**  
**Precheck level: Error**  
**Foreign key constraint names longer than 64 characters**  
In MySQL, the length of identifiers is limited to 64 characters, as outlined in the [MySQL documentation](https://dev.mysql.com/doc/refman/8.0/en/identifier-length.html). This precheck was implemented because of [issues](https://bugs.mysql.com/bug.php?id=88118) where foreign key constraint names that reached or exceeded this limit caused upgrade failures. If you encounter errors with this precheck, [alter or rename](https://dev.mysql.com/doc/refman/8.0/en/alter-table.html) your constraint so that its name is shorter than 64 characters before retrying the upgrade.  
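
You can list foreign key constraint names and their lengths ahead of time with a query like the following sketch:

```
-- Foreign key constraints with names at or over the 64-character limit
SELECT constraint_schema, table_name, constraint_name,
       CHAR_LENGTH(constraint_name) AS name_length
FROM information_schema.table_constraints
WHERE constraint_type = 'FOREIGN KEY'
  AND CHAR_LENGTH(constraint_name) >= 64;
```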
**Example output:**  

```
{
  "id": "foreignKeyLength",
  "title": "Foreign key constraint names longer than 64 characters",
  "status": "OK",
  "detectedProblems": []
}
```

**getDuplicateTriggers**  
**Precheck level: Error**  
**All trigger names in a database must be unique.**  
Due to changes in the data dictionary implementation, MySQL 8.0 doesn't support case-sensitive triggers within a database. This precheck validates that your DB cluster doesn’t have one or more databases containing duplicate triggers. For more information, see [Identifier case sensitivity](https://dev.mysql.com/doc/refman/8.0/en/identifier-case-sensitivity.html) in the MySQL documentation.  
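
A case-insensitive grouping over `information_schema.triggers` can reveal the duplicates before the precheck does. A sketch:

```
-- Trigger names that collide when compared case-insensitively
SELECT trigger_schema, LOWER(trigger_name) AS normalized_name, COUNT(*) AS copies
FROM information_schema.triggers
GROUP BY trigger_schema, LOWER(trigger_name)
HAVING COUNT(*) > 1;
```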
**Example output:**  

```
{
  "id": "getDuplicateTriggers",
  "title": "MySQL pre-checks that all trigger names in a database are unique or not.",
  "status": "OK",
  "description": "Error: You have one or more database containing duplicate triggers. Mysql 8.0 does not support case sensitive triggers within a database https://dev.mysql.com/doc/refman/8.0/en/identifier-case-sensitivity.html. To upgrade to MySQL 8.0, drop the triggers with case-insensitive duplicate names and recreate with distinct names.",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "test",
        "description": "before_insert_product"
      },
      {
        "level": "Error",
        "dbObject": "test",
        "description": "before_insert_PRODUCT"
      }
  ]
}
```
The precheck reports an error that the database cluster has two triggers with the same name, but using different cases: `test.before_insert_product` and `test.before_insert_PRODUCT`.  
Before upgrading, rename the triggers or drop and re-create them with a new name.  
After renaming `test.before_insert_PRODUCT` to `test.before_insert_product_2`, the precheck succeeds.  

```
{
  "id": "getDuplicateTriggers",
  "title": "MySQL pre-checks that all trigger names in a database are unique or not.",
  "status": "OK",
  "detectedProblems": []
}
```

**getEventsWithNullDefiner**  
**Precheck level: Error**  
**The definer column for `mysql.event` can't be null or blank.**  
The `DEFINER` attribute specifies the MySQL account that owns a stored object definition, such as a trigger, stored procedure, or event. This attribute is particularly useful in situations where you want to control the security context under which the stored object runs. When creating a stored object, if a `DEFINER` isn't specified, the default is the user who created the object.  
When upgrading to MySQL 8.0, you can't have any stored objects with a `null` or blank definer in the MySQL data dictionary. If you have such stored objects, a precheck error is raised, and you must fix them before the upgrade can proceed.  
Example error:  

```
{
  "id": "getEventsWithNullDefiner",
  "title": "The definer column for mysql.event cannot be null or blank.",
  "status": "OK",
  "description": "Error: Set definer column in mysql.event to a valid non-null definer.",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "test.get_version",
        "description": "Set definer for event get_version in Schema test"
      }
  ]
}
```
The precheck returns an error for the `test.get_version` [event](https://dev.mysql.com/doc/refman/5.7/en/events-overview.html) because it has a `null` definer.  
To resolve this, check the event definition. As the following query shows, the definer is blank.  

```
mysql> select db,name,definer from mysql.event where name='get_version';
+------+-------------+---------+
| db   | name        | definer |
+------+-------------+---------+
| test | get_version |         |
+------+-------------+---------+
1 row in set (0.00 sec)
```
Drop or re-create the event with a valid definer.  
Before dropping or redefining a `DEFINER`, carefully review and check your application and privilege requirements. For more information, see [Stored object access control](https://dev.mysql.com/doc/refman/5.7/en/stored-objects-security.html) in the MySQL documentation.

```
mysql> drop event test.get_version;
Query OK, 0 rows affected (0.00 sec)

mysql> DELIMITER $$
mysql> CREATE EVENT get_version
    ->     ON SCHEDULE
    ->       EVERY 1 DAY
    ->     DO
    ->       SELECT VERSION(); -- replace with your event body
    -> $$
Query OK, 0 rows affected (0.01 sec)

mysql> DELIMITER ;

mysql> select db,name,definer from mysql.event where name='get_version';
+------+-------------+------------+
| db   | name        | definer    |
+------+-------------+------------+
| test | get_version | reinvent@% |
+------+-------------+------------+
1 row in set (0.00 sec)
```
Now the precheck passes.  

```
{
  "id": "getEventsWithNullDefiner",
  "title": "The definer column for mysql.event cannot be null or blank.",
  "status": "OK",
  "detectedProblems": []
}
```

**getMismatchedMetadata**  
**Precheck level: Error**  
**Column definition mismatch between InnoDB data dictionary and actual table definition**  
Similar to [schemaInconsistencyCheck](#schemaInconsistencyCheck), this precheck verifies that table metadata in MySQL is consistent before proceeding with the upgrade. In this case, the precheck verifies that the column definitions match between the InnoDB data dictionary and the MySQL table definition. If a mismatch is detected, the upgrade doesn't proceed.  
If you encounter any errors with this precheck, open a case with [AWS Support](https://aws.amazon.com/support) to request that the metadata inconsistency be resolved. Alternatively, you can retry the upgrade by performing a logical dump, then restoring to a new Aurora MySQL version 3 DB cluster.  
**Example output:**  

```
{
  "id": "getMismatchedMetadata",
  "title": "Column definition mismatch between InnoDB Data Dictionary and actual table definition.",
  "status": "OK",
  "description": "Error: Your database has mismatched metadata. The upgrade to mysql 8.0 will not succeed until this is fixed.",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "test.mismatchTable",
        "description": "Table `test/mismatchTable` column names mismatch with InnoDb dictionary column names: iD <> id"
      }
  ]
}
```
The precheck reports a mismatch in the metadata for the `id` column in the `test.mismatchTable` table. Specifically, the MySQL metadata has the column name as `iD`, while InnoDB has it as `id`.

**getTriggersWithNullDefiner**  
**Precheck level: Error**  
**The definer column for `information_schema.triggers` can't be `null` or blank.**  
The precheck validates that your database has no triggers defined with `null` or blank definers. For more information on definer requirements for stored objects, see [getEventsWithNullDefiner](#getEventsWithNullDefiner).  
**Example output:**  

```
{
  "id": "getTriggersWithNullDefiner",
  "title": "The definer column for information_schema.triggers cannot be null or blank.",
  "status": "OK",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "test.example_trigger",
        "description": "Set definer for trigger example_trigger in Schema test"
      }
  ]
}
```
The precheck returns an error because the `example_trigger` trigger in the `test` schema has a `null` definer. To correct this issue, fix the definer by re-creating the trigger with a valid user, or drop the trigger. For more information, see the example in [getEventsWithNullDefiner](#getEventsWithNullDefiner).  
Before dropping or redefining a `DEFINER`, carefully review and check your application and privilege requirements. For more information, see [Stored object access control](https://dev.mysql.com/doc/refman/5.7/en/stored-objects-security.html) in the MySQL documentation.
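To find every affected trigger ahead of the upgrade, you can query `information_schema` directly:

```
-- List all triggers whose definer is NULL or blank
SELECT trigger_schema, trigger_name, definer
FROM information_schema.triggers
WHERE definer IS NULL OR definer = '';
```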

**getValueOfVariablelower\_case\_table\_names**  
**Precheck level: Error**  
**All database or table names must be lowercase when the `lower_case_table_names` parameter is set to `1`.**  
Before MySQL 8.0, database names, table names, and other objects corresponded to files in the data directory, such as file-based metadata (.frm) files. The [lower\_case\_table\_names](https://dev.mysql.com/doc/refman/5.7/en/identifier-case-sensitivity.html) system variable controls how the server handles identifier case sensitivity for database objects, and how such metadata objects are stored. This parameter could be changed on an already initialized server following a reboot.  
However, in MySQL 8.0, while this parameter still controls how the server handles identifier case sensitivity, it can't be changed after the data dictionary is initialized. When upgrading or creating a MySQL 8.0 database, the value set for `lower_case_table_names` the first time the data dictionary is initialized is used for the lifetime of that database. This restriction was put in place as part of the implementation of the [Atomic Data Dictionary](https://dev.mysql.com/doc/refman/8.0/en/data-dictionary-file-removal.html), where database objects are migrated from file-based metadata to internal InnoDB tables in the `mysql` schema.  
For more information, see [Data dictionary changes](https://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html#upgrade-data-dictionary-changes) in the MySQL documentation.  
To avoid issues during upgrade when updating file-based metadata to the new Atomic Data Dictionary, this precheck validates that when `lower_case_table_names = 1`, all tables are stored on disk in lower case. If they aren’t, a precheck error is returned, and you must correct the metadata before proceeding with the upgrade.  
**Example output:**  

```
{
  "id": "getValueOfVariablelower_case_table_names",
  "title": "MySQL pre-checks that all database or table names are lowercase when the lower_case_table_names parameter is set to 1.",
  "status": "OK",
  "description": "Error: You have one or more databases or tables with uppercase letters in the names, but the lower_case_table_names parameter is set to 1. To upgrade to MySQL 8.0, either change all database or table names to lowercase, or set the parameter to 0.",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "test.TEST",
        "description": "Table test.TEST contains one or more capital letters in name while lower_case_table_names = 1"
      }
  ]
}
```
An error is returned because the table `test.TEST` contains uppercase letters, but `lower_case_table_names` is set to `1`.  
To resolve this issue, you can rename the table to use lowercase, or modify the `lower_case_table_names` parameter on the DB cluster before starting the upgrade.  
Carefully test and review the documentation on [case sensitivity](https://dev.mysql.com/doc/refman/5.7/en/identifier-case-sensitivity.html) in MySQL, and how any such changes might affect your application.  
Also review the MySQL 8.0 documentation on how [lower\_case\_table\_names](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_lower_case_table_names) is handled differently in MySQL 8.0.
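To list the offending objects yourself, a query along these lines finds table names containing uppercase letters (excluding the system schemas is an assumption; adjust for your environment):

```
-- Find user tables whose names differ from their lowercase form
SELECT table_schema, table_name
FROM information_schema.tables
WHERE table_schema NOT IN
  ('mysql', 'information_schema', 'performance_schema', 'sys')
  AND CAST(table_name AS BINARY) <> CAST(LOWER(table_name) AS BINARY);
```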

**groupByAscSyntaxCheck**  
**Precheck level: Error**  
**Usage of removed `GROUP BY ASC/DESC` syntax**  
As of MySQL 8.0.13, the deprecated `ASC` or `DESC` syntax for `GROUP BY` clauses has been removed. Queries relying on `GROUP BY` sorting might now produce different results. To get a specific sort order, use an `ORDER BY` clause instead. If any objects exist in your database using this syntax, you must re-create them using an `ORDER BY` clause before retrying the upgrade. For more information, see [SQL changes](https://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html#upgrade-sql-changes) in the MySQL documentation.  
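For example, a query that relied on the removed syntax can be rewritten as follows (the `orders` table and `category` column are hypothetical):

```
-- Removed as of MySQL 8.0.13:
--   SELECT category, COUNT(*) FROM orders GROUP BY category DESC;

-- Equivalent query that's valid in MySQL 8.0:
SELECT category, COUNT(*)
FROM orders
GROUP BY category
ORDER BY category DESC;
```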
**Example output:**  

```
{
  "id": "groupbyAscSyntaxCheck",
  "title": "Usage of removed GROUP BY ASC/DESC syntax",
  "status": "OK",
  "description": "Error: The following DB objects use removed GROUP BY ASC/DESC syntax. They need to be altered so that ASC/DESC keyword is removed from GROUP BY clause and placed in appropriate ORDER BY clause.",
  "documentationLink": "https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-13.html#mysqld-8-0-13-sql-syntax",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "test.groupbyasc",
        "description": "PROCEDURE uses removed GROUP BY ASC syntax",
        "dbObjectType": "Routine"
      }
  ]
}
```

**mysqlEmptyDotTableSyntaxCheck**  
**Precheck level: Error**  
**Check for deprecated `.<table>` syntax used in routines.**  
In MySQL 8.0, routines can no longer contain the deprecated identifier syntax (`".<table>"`). If any stored routines or triggers contain such identifiers, the upgrade fails. For example, the following `.dot_table` reference is no longer permitted:  

```
mysql> show create procedure incorrect_procedure\G
*************************** 1. row ***************************
           Procedure: incorrect_procedure
            sql_mode:
    Create Procedure: CREATE DEFINER=`reinvent`@`%` PROCEDURE `incorrect_procedure`()
BEGIN delete FROM .dot_table; select * from .dot_table where 1=1; END
character_set_client: utf8mb4
collation_connection: utf8mb4_0900_ai_ci
  Database Collation: latin1_swedish_ci
1 row in set (0.00 sec)
```
After you re-create the routines and triggers to use the correct identifier syntax and escaping, the precheck passes, and the upgrade can proceed. For more information on identifiers, see [Schema object names](https://dev.mysql.com/doc/refman/8.0/en/identifiers.html) in the MySQL documentation.  
**Example output:**  

```
{
  "id": "mysqlEmptyDotTableSyntaxCheck",
  "title": "Check for deprecated '.<table>' syntax used in routines.",
  "status": "OK",
  "description": "Error: The following routines contain identifiers in deprecated identifier syntax (\".<table>\"), and should be corrected before upgrade:\n",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "test.incorrect_procedure",
        "description": " routine body contains deprecated identifiers."
      }
  ]
}
```
The precheck returns an error for the `incorrect_procedure` routine in the `test` database because it contains deprecated syntax.  
After you correct the routine, the precheck succeeds, and you can retry the upgrade.
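A corrected version of the routine might look like the following. This sketch assumes the table lives in the `test` schema; adjust the qualifier to match your data.

```
mysql> DROP PROCEDURE IF EXISTS incorrect_procedure;
mysql> DELIMITER $$
mysql> CREATE DEFINER=`reinvent`@`%` PROCEDURE `incorrect_procedure`()
    -> BEGIN
    ->   DELETE FROM test.dot_table;
    ->   SELECT * FROM test.dot_table WHERE 1=1;
    -> END$$
mysql> DELIMITER ;
```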

**mysqlIndexTooLargeCheck**  
**Precheck level: Error**  
**Check for indexes that are too large to work on MySQL versions higher than 5.7**  
For compact or redundant row formats, it shouldn't be possible to create an index with a prefix larger than 767 bytes. However, before MySQL version 5.7.35 this was possible. For more information, see the [MySQL 5.7.35 release notes](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-35.html).  
Any indexes that were affected by this bug will become inaccessible after upgrading to MySQL 8.0. This precheck identifies offending indexes that have to be rebuilt before the upgrade is allowed to proceed.  

```
 {
  "id": "mysqlIndexTooLargeCheck",
  "title": "Check for indexes that are too large to work on higher versions of MySQL Server than 5.7",
  "status": "OK",
  "description": "Error: The following indexes ware made too large for their format in an older version of MySQL (older than 5.7.34). Normally those indexes within tables with compact or redundant row formats shouldn't be larger than 767 bytes. To fix this problem those indexes should be dropped before upgrading or those tables will be inaccessible.",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "test.table_with_large_idx",
        "description": "IDX_2"
      }
  ]
}
```
The precheck returns an error because the `test.table_with_large_idx` table uses a compact or redundant row format and contains an index larger than 767 bytes. Such tables would become inaccessible after upgrading to MySQL 8.0. Before proceeding with the upgrade, do one of the following:  
+ Drop the index mentioned in the precheck.
+ Rebuild the table mentioned in the precheck.
+ Change the row format used by the table.
Here we rebuild the table to resolve the precheck failure. Before rebuilding the table, make sure that [innodb\_file\_format](https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_file_format) is set to `Barracuda`, and that [innodb\_default\_row\_format](https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_default_row_format) is set to `dynamic`. These are the defaults in MySQL 5.7. For more information, see [InnoDB row formats](https://dev.mysql.com/doc/refman/5.7/en/innodb-row-format.html) and [InnoDB file-format management](https://dev.mysql.com/doc/refman/5.7/en/innodb-file-format.html) in the MySQL documentation.  
Before rebuilding tablespaces, see [Online DDL operations](https://dev.mysql.com/doc/refman/5.7/en/innodb-online-ddl-operations.html) in the MySQL documentation to understand the effects of locking and data movement on foreground transactions.

```
mysql> select @@innodb_file_format,@@innodb_default_row_format;
+----------------------+-----------------------------+
| @@innodb_file_format | @@innodb_default_row_format |
+----------------------+-----------------------------+
| Barracuda            | dynamic                     |
+----------------------+-----------------------------+
1 row in set (0.00 sec)

mysql> optimize table table_with_large_idx;
+---------------------------+----------+----------+-------------------------------------------------------------------+
| Table                     | Op       | Msg_type | Msg_text                                                          |
+---------------------------+----------+----------+-------------------------------------------------------------------+
| test.table_with_large_idx | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
| test.table_with_large_idx | optimize | status   | OK                                                                |
+---------------------------+----------+----------+-------------------------------------------------------------------+
2 rows in set (0.02 sec)

# Verify FILE_FORMAT and ROW_FORMAT
mysql>  select * from information_schema.innodb_sys_tables where name like 'test/table_with_large_idx';
+----------+---------------------------+------+--------+-------+-------------+------------+---------------+------------+
| TABLE_ID | NAME                      | FLAG | N_COLS | SPACE | FILE_FORMAT | ROW_FORMAT | ZIP_PAGE_SIZE | SPACE_TYPE |
+----------+---------------------------+------+--------+-------+-------------+------------+---------------+------------+
|       43 | test/table_with_large_idx |   33 |      4 |    26 | Barracuda   | Dynamic    |             0 | Single     |
+----------+---------------------------+------+--------+-------+-------------+------------+---------------+------------+
1 row in set (0.00 sec)
```
After rebuilding the table, the precheck passes, and the upgrade can proceed.  

```
{
  "id": "mysqlIndexTooLargeCheck",
  "title": "Check for indexes that are too large to work on higher versions of MySQL Server than 5.7",
  "status": "OK",
  "detectedProblems": []
},
```

**mysqlInvalid57NamesCheck**  
**Precheck level: Error**  
**Check for invalid table and schema names used in MySQL 5.7**  
When migrating to the new data dictionary in MySQL 8.0, your database instance can't contain schemas or tables prefixed with `#mysql50#`. If any such objects exist, the upgrade fails. To resolve this issue, run [mysqlcheck](https://dev.mysql.com/doc/refman/8.0/en/mysqlcheck.html) against the returned schemas and tables.  
Make sure that you use a [MySQL 5.7 version](https://downloads.mysql.com/archives/community/) of `mysqlcheck`, because [--fix-db-names](https://dev.mysql.com/doc/refman/5.7/en/mysqlcheck.html#option_mysqlcheck_fix-db-names) and [--fix-table-names](https://dev.mysql.com/doc/refman/5.7/en/mysqlcheck.html#option_mysqlcheck_fix-table-names) have been removed in [MySQL 8.0](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html).
**Example output:**  

```
{
  "id": "mysqlInvalid57NamesCheck",
  "title": "Check for invalid table names and schema names used in 5.7",
  "status": "OK",
  "description": "The following tables and/or schemas have invalid names. In order to fix them use the mysqlcheck utility as follows:\n\n  $ mysqlcheck --check-upgrade --all-databases\n  $ mysqlcheck --fix-db-names --fix-table-names --all-databases\n\nOR via mysql client, for eg:\n\n  ALTER DATABASE `#mysql50#lost+found` UPGRADE DATA DIRECTORY NAME;",
  "documentationLink": "https://dev.mysql.com/doc/refman/5.7/en/identifier-mapping.html https://dev.mysql.com/doc/refman/5.7/en/alter-database.html https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "#mysql50#fix_db_names",
        "description": "Schema name"
      }
  ]
}
```
The precheck reports that the schema `#mysql50#fix_db_names` has an invalid name.  
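The precheck description itself suggests the fix. For the schema flagged above, you can run the `ALTER DATABASE` statement directly:

```
ALTER DATABASE `#mysql50#fix_db_names` UPGRADE DATA DIRECTORY NAME;
```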
After fixing the schema name, the precheck passes, allowing the upgrade to proceed.  

```
{
  "id": "mysqlInvalid57NamesCheck",
  "title": "Check for invalid table names and schema names used in 5.7",
  "status": "OK",
  "detectedProblems": []
},
```

**mysqlOrphanedRoutinesCheck**  
**Precheck level: Error**  
**Check for orphaned routines in 5.7**  
When migrating to the new data dictionary in MySQL 8.0, if the database contains stored procedures whose parent schema no longer exists, the upgrade fails. This precheck verifies that all schemas referenced by stored procedures on your DB instance still exist. To allow the upgrade to proceed, drop these orphaned stored procedures.  
**Example output:**  

```
{
  "id": "mysqlOrphanedRoutinesCheck",
  "title": "Check for orphaned routines in 5.7",
  "status": "OK",
  "description": "Error: The following routines have been orphaned. Schemas that they are referencing no longer exists.\nThey have to be cleaned up or the upgrade will fail.",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "dropped_db.get_version",
        "description": "is orphaned"
      }
  ]
},
```
The precheck reports that the `get_version` stored procedure in the `dropped_db` database is orphaned.  
To clean up this procedure, you can re-create the missing schema.  

```
mysql> create database dropped_db;
Query OK, 1 row affected (0.01 sec)
```
After the schema is re-created, you can drop the procedure to allow the upgrade to proceed.  
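For example:

```
mysql> DROP PROCEDURE IF EXISTS dropped_db.get_version;
Query OK, 0 rows affected (0.00 sec)
```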

```
{
  "id": "mysqlOrphanedRoutinesCheck",
  "title": "Check for orphaned routines in 5.7",
  "status": "OK",
  "detectedProblems": []
},
```

**mysqlSchemaCheck**  
**Precheck level: Error**  
**Table names in the `mysql` schema conflicting with new tables in MySQL 8.0**  
The new [Atomic Data Dictionary](https://dev.mysql.com/doc/refman/8.0/en/data-dictionary-file-removal.html) introduced in MySQL 8.0 stores all metadata in a set of internal InnoDB tables in the `mysql` schema. During the upgrade, the new [internal data dictionary tables](https://dev.mysql.com/doc/refman/8.0/en/data-dictionary-schema.html) are created in the `mysql` schema. To avoid naming collisions, which would result in upgrade failures, the precheck examines all table names in the `mysql` schema to ensure that none of the new table names are already in use. If they are, an error is returned, and the upgrade isn't allowed to proceed.  
**Example output:**  

```
{
  "id": "mysqlSchema",
  "title": "Table names in the mysql schema conflicting with new tables in the latest MySQL.",
  "status": "OK",
  "description": "Error: The following tables in mysql schema have names that will conflict with the ones introduced in the latest version. They must be renamed or removed before upgrading (use RENAME TABLE command). This may also entail changes to applications that use the affected tables.",
  "documentationLink": "https://dev.mysql.com/doc/refman/8.0/en/upgrade-before-you-begin.html",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "mysql.tablespaces",
        "description": "Table name used in mysql schema.",
        "dbObjectType": "Table"
      }
  ]
}
```
An error is returned because there is a table named `tablespaces` in the `mysql` schema. This is one of the new internal data dictionary table names in MySQL 8.0. You must rename or remove any such tables before upgrading by using the `RENAME TABLE` statement.
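For example, the conflicting table can be moved out of the `mysql` schema. The target schema and table name here are assumptions; the target schema must already exist.

```
RENAME TABLE mysql.tablespaces TO backup_schema.tablespaces_old;
```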

**nonNativePartitioningCheck**  
**Precheck level: Error**  
**Partitioned tables using engines with non-native partitioning**  
According to the [MySQL 8.0 documentation](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html), two storage engines currently provide native partitioning support: [InnoDB](https://dev.mysql.com/doc/refman/8.0/en/innodb-storage-engine.html) and [NDB](https://dev.mysql.com/doc/refman/8.0/en/mysql-cluster.html). Of these, only InnoDB is supported in Aurora MySQL version 3, compatible with MySQL 8.0. Any attempt to create partitioned tables in MySQL 8.0 using any other storage engine fails. This precheck looks for tables in your DB cluster that are using non-native partitioning. If any are returned, you must remove the partitioning or convert the storage engine to InnoDB.  
**Example output:**  

```
{
  "id": "nonNativePartitioning",
  "title": "Partitioned tables using engines with non native partitioning",
  "status": "OK",
  "description": "Error: In the latest MySQL storage engine is responsible for providing its own partitioning handler, and the MySQL server no longer provides generic partitioning support. InnoDB and NDB are the only storage engines that provide a native partitioning handler that is supported in the latest MySQL. A partitioned table using any other storage engine must be altered—either to convert it to InnoDB or NDB, or to remove its partitioning—before upgrading the server, else it cannot be used afterwards.",
  "documentationLink": "https://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html#upgrade-configuration-changes",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "test.partMyisamTable",
         "description": "MyISAM engine does not support native partitioning",
         "dbObjectType": "Table"
      }
  ]
}
```
Here a MyISAM table is using partitioning, which requires action before the upgrade can proceed.
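Either of the following statements resolves the example above (standard MySQL DDL; choose the one that fits your application):

```
-- Option 1: convert the partitioned MyISAM table to InnoDB
ALTER TABLE test.partMyisamTable ENGINE=InnoDB;

-- Option 2: keep the storage engine but remove the partitioning
ALTER TABLE test.partMyisamTable REMOVE PARTITIONING;
```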

**oldTemporalCheck**  
**Precheck level: Error**  
**Usage of old temporal type**  
"Old temporals" refers to the temporal type columns (such as `TIMESTAMP` and `DATETIME`) created in MySQL versions 5.5 and lower. In MySQL 8.0, support for these old temporal data types is removed, meaning that in-place upgrades from MySQL 5.7 to 8.0 aren't possible if the database contains them. To fix this, you must [rebuild](https://dev.mysql.com/doc/refman/5.7/en/rebuilding-tables.html) any tables containing old temporal data types before proceeding with the upgrade.  
For more information on the deprecation of old temporal data types in MySQL 5.7, see this [blog](https://dev.mysql.com/blog-archive/how-to-easily-identify-tables-with-temporal-types-in-old-format/). For more information on the removal of old temporal data types in MySQL 8.0, see this [blog](https://dev.mysql.com/blog-archive/mysql-8-0-removing-support-for-old-temporal-datatypes/).  
Before rebuilding tablespaces, see [Online DDL operations](https://dev.mysql.com/doc/refman/5.7/en/innodb-online-ddl-operations.html) in the MySQL documentation to understand the effects of locking and data movement on foreground transactions.
**Example output:**  

```
{
  "id": "oldTemporalCheck",
  "title": "Usage of old temporal type",
  "status": "OK",
  "description": "Error: Following table columns use a deprecated and no longer supported temporal disk storage format. They must be converted to the new format before upgrading. It can by done by rebuilding the table using 'ALTER TABLE <table_name> FORCE' command",
  "documentationLink": "https://dev.mysql.com/blog-archive/mysql-8-0-removing-support-for-old-temporal-datatypes/",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "test.55_temporal_table.timestamp_column",
        "description": "timestamp /* 5.5 binary format */",
        "dbObjectType": "Column"
      }
  ]
},
```
An error is reported for the column `timestamp_column` in the table `test.55_temporal_table`, because it uses an old temporal disk storage format that's no longer supported.  
To resolve this issue and allow the upgrade to proceed, rebuild the table to convert the old temporal disk storage format to the new one introduced in MySQL 5.6. For more information and prerequisites before doing so, see [Converting between 3-byte and 4-byte Unicode character sets](https://dev.mysql.com/doc/refman/8.0/en/charset-unicode-conversion.html) in the MySQL documentation.  
Running the following command to rebuild the tables mentioned in this precheck converts the old temporal types to the newer format with fractional-second precision.  

```
ALTER TABLE ... ENGINE=InnoDB;
```
For more information on rebuilding tables, see [ALTER TABLE statement](https://dev.mysql.com/doc/refman/8.0/en/alter-table.html) in the MySQL documentation.  
After rebuilding the table in question and restarting the upgrade, the compatibility check passes, allowing the upgrade to proceed.  

```
{
  "id": "oldTemporalCheck",
  "title": "Usage of old temporal type",
  "status": "OK",
  "detectedProblems": []
}
```

**partitionedTablesInSharedTablespaceCheck**  
**Precheck level: Error**  
**Usage of partitioned tables in shared tablespaces**  
As of [MySQL 8.0.13](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-13.html), support for placing table partitions in shared tablespaces is removed. Before upgrading, move any such tables from shared tablespaces to file-per-table tablespaces.  
Before rebuilding tablespaces, see [Partitioning operations](https://dev.mysql.com/doc/refman/5.7/en/innodb-online-ddl-operations.html#online-ddl-partitioning) in the MySQL documentation to understand the effects of locking and data movement on foreground transactions.
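To find affected partitions ahead of time, a query along these lines can help. This is a sketch: in MySQL 5.7, partition tablespaces carry a `#P#` suffix in their internal names, and `INNODB_SYS_TABLES` exposes a `SPACE_TYPE` column.

```
-- Partitions whose data lives in the system (shared) tablespace
SELECT name, space_type
FROM information_schema.innodb_sys_tables
WHERE name LIKE '%#P#%'
  AND space_type = 'System';
```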
**Example output:**  

```
{
  "id": "partitionedTablesInSharedTablespaceCheck",
  "title": "Usage of partitioned tables in shared tablespaces",
  "status": "OK",
  "description": "Error: The following tables have partitions in shared tablespaces. They need to be moved to file-per-table tablespace before upgrading. You can do this by running query like 'ALTER TABLE table_name REORGANIZE PARTITION X INTO (PARTITION X VALUES LESS THAN (30) TABLESPACE=innodb_file_per_table);'",
  "documentationLink": "https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "test.partInnoDBTable",
        "description": "Partition p1 is in shared tablespace innodb",
        "dbObjectType": "Table"
      }
  ]
}
```
The precheck fails because partition `p1` from table `test.partInnoDBTable` is in the system tablespace.  
To resolve this issue, rebuild the `test.partInnoDBTable` table, placing the offending partition `p1` in a file-per-table tablespace.  

```
mysql> ALTER TABLE partInnoDBTable REORGANIZE PARTITION p1
    ->   INTO (PARTITION p1 VALUES LESS THAN ('2014-01-01') TABLESPACE=innodb_file_per_table);
Query OK, 0 rows affected, 1 warning (0.02 sec)
Records: 0  Duplicates: 0  Warnings: 0
```
After doing so, the precheck passes.  

```
{
  "id": "partitionedTablesInSharedTablespaceCheck",
  "title": "Usage of partitioned tables in shared tablespaces",
  "status": "OK",
  "detectedProblems": []
}
```

**removedFunctionsCheck**  
**Precheck level: Error**  
**Usage of functions that were removed from the latest MySQL version**  
In MySQL 8.0, a number of built-in functions have been removed. This precheck examines your database for objects that might use these functions. If they're found, an error is returned. You must resolve the issues before retrying the upgrade.  
The majority of the removed functions are [spatial functions](https://dev.mysql.com/doc/refman/5.7/en/gis-wkt-functions.html), which have been replaced with equivalent `ST_*` functions. In these cases, modify the database objects to use the new function names. For more information, see [Features removed in MySQL 8.0](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals) in the MySQL documentation.  
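For example, a procedure that uses the removed `POINTFROMTEXT` function can be re-created with the `ST_` replacement. The `locations` table, `pt` column, and procedure body here are assumptions for illustration, since the original definitions aren't shown.

```
-- Before (fails the precheck): POINTFROMTEXT was removed in MySQL 8.0
--   CREATE PROCEDURE InsertLocation(IN wkt TEXT)
--     INSERT INTO locations (pt) VALUES (POINTFROMTEXT(wkt));

-- After (passes): use the ST_ prefixed replacement
DELIMITER $$
CREATE PROCEDURE InsertLocation(IN wkt TEXT)
BEGIN
  INSERT INTO locations (pt) VALUES (ST_POINTFROMTEXT(wkt));
END$$
DELIMITER ;
```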
**Example output:**  

```
{
  "id": "removedFunctionsCheck",
  "title": "Usage of removed functions",
  "status": "OK",
  "description": "Error: The following DB objects use functions that were removed in the latest MySQL version. Please make sure to update them to use supported alternatives before upgrade.",
  "documentationLink": "https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "test.GetLocationsInPolygon",
        "description": "PROCEDURE uses removed function POLYGONFROMTEXT (consider using ST_POLYGONFROMTEXT instead)",
        "dbObjectType": "Routine"
      },
      {
        "level": "Error",
        "dbObject": "test.InsertLocation",
        "description": "PROCEDURE uses removed function POINTFROMTEXT (consider using ST_POINTFROMTEXT instead)",
        "dbObjectType": "Routine"
      }
  ]
},
```
The precheck reports that the `test.GetLocationsInPolygon` and `test.InsertLocation` stored procedures use removed functions: [POLYGONFROMTEXT](https://dev.mysql.com/doc/refman/5.7/en/gis-wkt-functions.html#function_polyfromtext) and [POINTFROMTEXT](https://dev.mysql.com/doc/refman/5.7/en/gis-wkt-functions.html#function_st-mpointfromtext). It also suggests the new [ST\_POLYGONFROMTEXT](https://dev.mysql.com/doc/refman/8.0/en/gis-wkt-functions.html#function_st-polyfromtext) and [ST\_POINTFROMTEXT](https://dev.mysql.com/doc/refman/8.0/en/gis-wkt-functions.html#function_st-mpointfromtext) functions as replacements. After re-creating the procedures using the suggestions, the precheck completes successfully.  

```
{
  "id": "removedFunctionsCheck",
  "title": "Usage of removed functions",
  "status": "OK",
  "detectedProblems": []
},
```
While in most cases the deprecated functions have direct replacements, make sure that you test your application and review the documentation for any changes in behavior as a result of the change.

**routineSyntaxCheck**  
**Precheck level: Error**  
**MySQL syntax check for routine-like objects**  
MySQL 8.0 introduced [reserved keywords](https://dev.mysql.com/doc/mysqld-version-reference/en/keywords-8-0.html#keywords-new-in-8-0) that were not reserved previously. The upgrade prechecker evaluates the usage of reserved keywords in the names of database objects and in their definitions and bodies. If it detects reserved keywords being used in database objects, such as stored procedures, functions, events, and triggers, the upgrade fails and an error is published to the `upgrade-prechecks.log` file. To resolve the issue, update these object definitions and quote any such references with backticks before upgrading. For more information on quoting identifiers in MySQL, see [Schema object names](https://dev.mysql.com/doc/refman/8.0/en/identifiers.html) in the MySQL documentation.  
Alternatively, you can rename the conflicting object, which might require application changes.  
**Example output:**  

```
{
  "id": "routineSyntaxCheck",
  "title": "MySQL syntax check for routine-like objects",
  "status": "OK",
  "description": "The following objects did not pass a syntax check with the latest MySQL grammar. A common reason is that they reference names that conflict with new reserved keywords. You must update these routine definitions and `quote` any such references before upgrading.",
  "documentationLink": "https://dev.mysql.com/doc/refman/en/keywords.html",
  "detectedProblems": [
      {
         "level": "Error",
         "dbObject": "test.select_res_word",
         "description": "at line 2,18: unexpected token 'except'",
         "dbObjectType": "Routine"
      }
  ]
}
```
To fix this issue, check the routine definition.  

```
SHOW CREATE PROCEDURE test.select_res_word\G

*************************** 1. row ***************************
           Procedure: select_res_word
            sql_mode: ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
    Create Procedure: CREATE PROCEDURE `select_res_word`()
BEGIN
    SELECT * FROM except;
END
character_set_client: utf8
collation_connection: utf8_general_ci
  Database Collation: latin1_swedish_ci
1 row in set (0.00 sec)
```
The procedure uses a table named `except`, which is a new reserved keyword in MySQL 8.0. Re-create the procedure by quoting the identifier with backticks.  

```
mysql> DROP PROCEDURE IF EXISTS select_res_word;
Query OK, 0 rows affected (0.00 sec)

mysql> DELIMITER $$
mysql> CREATE PROCEDURE select_res_word()
    -> BEGIN
    ->     SELECT * FROM `except`;
    -> END$$
Query OK, 0 rows affected (0.00 sec)

mysql> DELIMITER ;
```
The precheck now passes.  

```
{
  "id": "routineSyntaxCheck",
  "title": "MySQL syntax check for routine-like objects",
  "status": "OK",
  "detectedProblems": []
}
```

**schemaInconsistencyCheck**  
**Precheck level: Error**  
**Schema inconsistencies resulting from file removal or corruption**  
As described previously, MySQL 8.0 introduced the [Atomic Data Dictionary](https://dev.mysql.com/doc/refman/8.0/en/data-dictionary-file-removal.html), which stores all metadata in a set of internal InnoDB tables in the `mysql` schema. This new architecture provides a transactional, [ACID](https://en.wikipedia.org/wiki/ACID)-compliant way to manage database metadata, solving the "atomic DDL" problem from the old file-based approach. Before MySQL 8.0, it was possible for schema objects to become orphaned if a DDL operation was unexpectedly interrupted. The migration of file-based metadata to the new Atomic Data Dictionary tables during upgrade ensures that there are no such orphaned schema objects in the DB instance. If any orphaned objects are encountered, the upgrade fails.  
To help detect these orphaned objects before initiating the upgrade, the `schemaInconsistencyCheck` precheck is run to ensure that all data dictionary metadata objects are in sync. If any orphaned metadata objects are detected, the upgrade doesn't proceed. To proceed with the upgrade, clean up these orphaned metadata objects.  
If you encounter any errors with this precheck, open a case with [AWS Support](https://aws.amazon.com/support) to request that the metadata inconsistency be resolved. Alternatively, you can retry the upgrade by performing a logical dump, then restoring to a new Aurora MySQL version 3 DB cluster.  
**Example output:**  

```
{
  "id": "schemaInconsistencyCheck",
  "title": "Schema inconsistencies resulting from file removal or corruption",
  "status": "OK",
  "description": "Error: Following tables show signs that either table datadir directory or frm file was removed/corrupted. Please check server logs, examine datadir to detect the issue and fix it before upgrade",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "test.schemaInconsistencyCheck_failure",
        "description": "present in INFORMATION_SCHEMA's INNODB_SYS_TABLES table but missing from TABLES table"
      }
  ]
}
```
The precheck reports that the `test.schemaInconsistencyCheck_failure` table has inconsistent metadata. In this case, the table exists in the InnoDB storage engine metadata (`information_schema.INNODB_SYS_TABLES`), but not in the MySQL metadata (`information_schema.TABLES`).
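If you want to inspect the inconsistency yourself before contacting AWS Support, a query along these lines can surface InnoDB entries with no matching MySQL metadata. This is a heuristic and can over-report, for example for partitioned or intermediate tables, so treat the results as a starting point only.

```
mysql> SELECT ist.NAME
         FROM information_schema.INNODB_SYS_TABLES ist
         LEFT JOIN information_schema.TABLES t
           ON ist.NAME = CONCAT(t.TABLE_SCHEMA, '/', t.TABLE_NAME)
        WHERE t.TABLE_NAME IS NULL
          AND ist.NAME NOT LIKE '%#%';
```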

### Aurora MySQL prechecks that report errors


The following prechecks are specific to Aurora MySQL:
+ [auroraCheckDDLRecovery](#auroraCheckDDLRecovery)
+ [auroraCheckRdsUpgradePrechecksTable](#auroraCheckRdsUpgradePrechecksTable)
+ [auroraFODUpgradeCheck](#auroraFODUpgradeCheck)
+ [auroraGetDanglingFulltextIndex](#auroraGetDanglingFulltextIndex)
+ [auroraUpgradeCheckForDatafilePathInconsistency](#auroraUpgradeCheckForDatafilePathInconsistency)
+ [auroraUpgradeCheckForFtsSpaceIdZero](#auroraUpgradeCheckForFtsSpaceIdZero)
+ [auroraUpgradeCheckForIncompleteXATransactions](#auroraUpgradeCheckForIncompleteXATransactions)
+ [auroraUpgradeCheckForInstanceLimit](#auroraUpgradeCheckForInstanceLimit)
+ [auroraUpgradeCheckForInternalUsers](#auroraUpgradeCheckForInternalUsers)
+ [auroraUpgradeCheckForInvalidUtf8mb3CharacterStringInViews](#auroraUpgradeCheckForInvalidUtf8mb3CharacterStringInViews)
+ [auroraUpgradeCheckForInvalidUtf8mb3ColumnComments](#auroraUpgradeCheckForInvalidUtf8mb3ColumnComments)
+ [auroraUpgradeCheckForInvalidUtf8mb3IndexComments](#auroraUpgradeCheckForInvalidUtf8mb3IndexComments)
+ [auroraUpgradeCheckForInvalidUtf8mb3TableComments](#auroraUpgradeCheckForInvalidUtf8mb3TableComments)
+ [auroraUpgradeCheckForMasterUser](#auroraUpgradeCheckForMasterUser)
+ [auroraUpgradeCheckForPrefixIndexOnGeometryColumns](#auroraUpgradeCheckForPrefixIndexOnGeometryColumns)
+ [auroraUpgradeCheckForSpecialCharactersInProcedures](#auroraUpgradeCheckForSpecialCharactersInProcedures)
+ [auroraUpgradeCheckForSysSchemaObjectTypeMismatch](#auroraUpgradeCheckForSysSchemaObjectTypeMismatch)
+ [auroraUpgradeCheckForViewColumnNameLength](#auroraUpgradeCheckForViewColumnNameLength)
+ [auroraUpgradeCheckIndexLengthLimitOnTinyColumns](#auroraUpgradeCheckIndexLengthLimitOnTinyColumns)
+ [auroraUpgradeCheckMissingInnodbMetadataForMysqlHostTable](#auroraUpgradeCheckMissingInnodbMetadataForMysqlHostTable)

**auroraCheckDDLRecovery**  
**Precheck level: Error**  
**Check for artifacts related to Aurora DDL recovery feature**  
As part of the Data Definition Language (DDL) recovery implementation in Aurora MySQL, metadata about in-flight DDL statements is maintained in the `ddl_log_md_table` and `ddl_log_table` tables in the `mysql` schema. Aurora's implementation of DDL recovery isn't supported for version 3 onward, because the functionality is part of the new [Atomic Data Dictionary](https://dev.mysql.com/doc/refman/8.0/en/data-dictionary-file-removal.html) implementation in MySQL 8.0. If you have any DDL statements running during the compatibility checks, this precheck might fail. We recommend that you try the upgrade while no DDL statements are running.  
If this precheck fails without any running DDL statements, open a case with [AWS Support](https://aws.amazon.com/support) to request that the metadata inconsistency be resolved. Alternatively, you can retry the upgrade by performing a logical dump, then restoring to a new Aurora MySQL version 3 DB cluster.  
If any DDL statements are running, the precheck output prints the following message:  

```
"There are DDL statements in process. Please allow DDL statements to finish before upgrading."
```
**Example output:**  

```
{
  "id": "auroraCheckDDLRecovery",
  "title": "Check for artifacts related to Aurora DDL recovery feature",
  "status": "OK",
  "description": "Aurora implementation of DDL recovery is not supported from 3.x onwards. This check verifies that the database do not have artifacts realted to the feature",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "mysql.ddl_log_md_table",
        "description": "Table mysql.ddl_log_md_table is not empty. Your database has pending DDL recovery operations. Reachout to AWS support for assistance."
      },
      {
        "level": "Error",
        "dbObject": "mysql.ddl_log_table",
        "description": "Table mysql.ddl_log_table is not empty. Your database has pending DDL recovery operations. Reachout to AWS support for assistance."
      },
      {
        "level": "Error",
        "dbObject": "information_schema.processlist",
        "description": "There are DDL statements in process. Please allow DDL statements to finish before upgrading."
      }
  ]
}
```
The precheck returned an error because a DDL statement was running concurrently with the compatibility checks. We recommend that you retry the upgrade while no DDL statements are running.
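Before retrying, you can check for in-flight DDL activity and leftover recovery metadata yourself, using the table names reported by the precheck:

```
-- Non-zero counts indicate pending DDL recovery operations
mysql> SELECT COUNT(*) FROM mysql.ddl_log_md_table;
mysql> SELECT COUNT(*) FROM mysql.ddl_log_table;

-- Look for running DDL statements such as ALTER, CREATE, or DROP
mysql> SHOW PROCESSLIST;
```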

**auroraCheckRdsUpgradePrechecksTable**  
**Precheck level: Error**  
**Check existence of `mysql.rds_upgrade_prechecks` table**  
This is an internal-only precheck carried out by the RDS service. Any errors will be automatically handled on upgrade and can be safely ignored.  
If you encounter any errors with this precheck, open a case with [AWS Support](https://aws.amazon.com/support) to request that the metadata inconsistency be resolved. Alternatively, you can retry the upgrade by performing a logical dump, then restoring to a new Aurora MySQL version 3 DB cluster.  

```
{
  "id": "auroraCheckRdsUpgradePrechecksTable",
  "title": "Check existence of mysql.rds_upgrade_prechecks table",
  "status": "OK",
  "detectedProblems": []
}
```

**auroraFODUpgradeCheck**  
**Precheck level: Error**  
**Check for artifacts related to Aurora fast DDL feature**  
The [Fast DDL](AuroraMySQL.Managing.FastDDL.md) optimization was introduced in [lab mode](AuroraMySQL.Updates.LabMode.md) on Aurora MySQL version 2 to improve the efficiency of some DDL operations. In Aurora MySQL version 3, lab mode has been removed, and the Fast DDL implementation has been superseded by the MySQL 8.0 feature called [Instant DDL](https://dev.mysql.com/doc/refman/8.4/en/innodb-online-ddl-operations.html).  
Before upgrading to Aurora MySQL version 3, rebuild any tables that use Fast DDL in lab mode by running the `OPTIMIZE TABLE` or `ALTER TABLE ... ENGINE=InnoDB` command to ensure compatibility with Aurora MySQL version 3.  
This precheck returns a list of any such tables. After the returned tables have been rebuilt, you can retry the upgrade.  
**Example output:**  

```
{
  "id": "auroraFODUpgradeCheck",
  "title": "Check for artifacts related to Aurora fast DDL feature",
  "status": "OK",
  "description": "Aurora fast DDL is not supported from 3.x onwards. This check verifies that the database does not have artifacts related to the feature",
  "documentationLink": "https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Managing.FastDDL.html#AuroraMySQL.Managing.FastDDL-v2",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "test.test",
        "description": "Your table has pending Aurora fast DDL operations. Run 'OPTIMIZE TABLE <table name>' for the table to apply all the pending DDL updates. Then try the upgrade again."
      }
  ]
}
```
The precheck reports that the table `test.test` has pending Fast DDL operations.  
To allow the upgrade to proceed, you can rebuild the table, then retry the upgrade.  
Before rebuilding tablespaces, see [Online DDL operations](https://dev.mysql.com/doc/refman/5.7/en/innodb-online-ddl-operations.html) in the MySQL documentation to understand the effects of locking and data movement on foreground transactions.

```
mysql> optimize table test.test;
+-----------+----------+----------+-------------------------------------------------------------------+
| Table     | Op       | Msg_type | Msg_text                                                          |
+-----------+----------+----------+-------------------------------------------------------------------+
| test.test | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
| test.test | optimize | status   | OK                                                                |
+-----------+----------+----------+-------------------------------------------------------------------+
2 rows in set (0.04 sec)
```
After rebuilding the table, the precheck succeeds.  

```
{
  "id": "auroraFODUpgradeCheck",
  "title": "Check for artifacts related to Aurora fast DDL feature",
  "status": "OK",
  "detectedProblems": []
}
```

**auroraGetDanglingFulltextIndex**  
**Precheck level: Error**  
**Tables with dangling `FULLTEXT` index reference**  
Before MySQL 5.6.26, it was possible that after dropping a full-text search index, the hidden `FTS_DOC_ID` and `FTS_DOC_ID_INDEX` columns would become orphaned. For more information, see [Bug \#76012](https://bugs.mysql.com/bug.php?id=76012).  
If you have any tables created on earlier versions of MySQL where this has occurred, it can cause upgrades to Aurora MySQL version 3 to fail. This precheck verifies that no such orphaned, or “dangling” full-text indexes exist on your DB cluster before upgrading to MySQL 8.0. If this precheck fails, rebuild any tables that contain such dangling full-text indexes.  
**Example output:**  

```
{
  "id": "auroraGetDanglingFulltextIndex",
  "title": "Tables with dangling FULLTEXT index reference",
  "status": "OK",
  "description": "Error: The following tables contain dangling FULLTEXT index which is not supported. It is recommended to rebuild the table before upgrade.",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "test.table_with_fts_index",
        "description": "Table `test.table_with_fts_index` contains dangling FULLTEXT index. Kindly recreate the table before upgrade."
      }
  ]
},
```
The precheck reports an error for the `test.table_with_fts_index` table because it contains a dangling full-text index. To allow the upgrade to proceed, rebuild the table to clean up the full-text index auxiliary tables. Use `OPTIMIZE TABLE test.table_with_fts_index` or `ALTER TABLE test.table_with_fts_index ENGINE=InnoDB`.  
After rebuilding the table, the precheck passes.  

```
{
  "id": "auroraGetDanglingFulltextIndex",
  "title": "Tables with dangling FULLTEXT index reference",
  "status": "OK",
  "detectedProblems": []
},
```

**auroraUpgradeCheckForDatafilePathInconsistency**  
**Precheck level: Error**  
**Check for inconsistency related to `ibd` file path**  
This precheck applies only to Aurora MySQL version 3.03.0 and lower. If you encounter an error with this precheck, upgrade to Aurora MySQL version 3.04 or higher.  
**Example output:**  

```
{
  "id": "auroraUpgradeCheckForDatafilePathInconsistency",
  "title": "Check for inconsistency related to ibd file path.",
  "status": "OK",
  "detectedProblems": []
}
```

**auroraUpgradeCheckForFtsSpaceIdZero**  
**Precheck level: Error**  
**Check for full-text index with space ID as zero**  
In MySQL, when you add a [full-text index](https://dev.mysql.com/doc/refman/5.7/en/innodb-fulltext-index.html) to an InnoDB table, a number of auxiliary index tablespaces are created. Due to a [bug](https://bugs.mysql.com/bug.php?id=72132) in earlier versions of MySQL, which was fixed in version 5.6.20, it was possible that these auxiliary index tables were created in the [system tablespace](https://dev.mysql.com/doc/refman/5.7/en/glossary.html#glos_system_tablespace), rather than their own InnoDB tablespace.  
If any such auxiliary tablespaces exist, the upgrade will fail. Re-create the full-text indexes mentioned in the precheck error, then retry the upgrade.  
**Example output:**  

```
{
  "id": "auroraUpgradeCheckForFtsSpaceIdZero",
  "title": "Check for fulltext index with space id as zero",
  "status": "OK",
  "description": "The auxiliary tables of FTS indexes on the table are created in system table-space. Due to this DDL queries executed on MySQL8.0 shall cause database unavailability. To avoid that, drop and recreate all the FTS indexes on the table or rebuild the table using ALTER TABLE query before the upgrade.",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "test.fts_space_zero_check",
        "description": " The auxiliary tables of FTS indexes on the table 'test.fts_space_zero_check' are created in system table-space due to https://bugs.mysql.com/bug.php?id=72132. In MySQL8.0, DDL queries executed on this table shall cause database unavailability. To avoid that, drop and recreate all the FTS indexes on the table or rebuild the table using ALTER TABLE query before the upgrade."
      }
  ]
},
```
The precheck reports an error for the `test.fts_space_zero_check` table, because it has auxiliary full-text search (FTS) tables in the system tablespace.  
After you drop and re-create the FTS indexes associated with this table, the precheck succeeds.  
Before rebuilding tablespaces, see [Partitioning operations](https://dev.mysql.com/doc/refman/5.7/en/innodb-online-ddl-operations.html#online-ddl-partitioning) in the MySQL documentation to understand the effects of locking and data movement on foreground transactions.
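As an illustration, assuming a hypothetical `FULLTEXT` index named `ft_idx` on a column `doc_text` (find the actual index and column names with `SHOW CREATE TABLE`), the re-creation could look like this:

```
mysql> ALTER TABLE test.fts_space_zero_check DROP INDEX ft_idx;
mysql> ALTER TABLE test.fts_space_zero_check ADD FULLTEXT INDEX ft_idx (doc_text);
```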

```
{
 "id": "auroraUpgradeCheckForFtsSpaceIdZero",
 "title": "Check for fulltext index with space id as zero",
 "status": "OK",
 "detectedProblems": []
}
```

**auroraUpgradeCheckForIncompleteXATransactions**  
**Precheck level: Error**  
**Check for XA transactions in prepared state**  
While running the major version upgrade process, it is essential that the Aurora MySQL version 2 DB instance undergo a [clean shutdown](https://dev.mysql.com/doc/refman/5.7/en/glossary.html#glos_slow_shutdown). This ensures that all transactions are committed or rolled back, and that InnoDB has purged all undo log records. Because transaction rollback is necessary, if your database has any [XA transactions](https://dev.mysql.com/doc/refman/5.7/en/xa.html) in a prepared state, it can block the clean shutdown from proceeding. For this reason, if any prepared XA transactions are detected, the upgrade will be unable to proceed until you take action to commit or roll them back.  
For more information on finding XA transactions in a prepared state using `XA RECOVER`, see [XA transaction SQL statements](https://dev.mysql.com/doc/refman/5.7/en/xa-statements.html) in the MySQL documentation. For more information on XA transaction states, see [XA transaction states](https://dev.mysql.com/doc/refman/5.7/en/xa-states.html) in the MySQL documentation.  
**Example output:**  

```
{
  "id": "auroraUpgradeCheckForIncompleteXATransactions",
  "title": "Pre-checks for XA Transactions in prepared state.",
  "status": "OK",
  "description": "Your cluster currently has XA transactions in the prepared state. To proceed with the upgrade, commit or rollback these transactions.",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "all",
        "description": "Your cluster currently has XA transactions in the prepared state. To proceed with the upgrade, commit or rollback these transactions."
      }
  ]
}
```
This precheck reports an error because there are transactions in a prepared state that should be committed or rolled back.  
After logging into the database, you can check the [information\_schema.innodb\_trx](https://dev.mysql.com/doc/refman/5.7/en/information-schema-innodb-trx-table.html) table and the `XA RECOVER` output for more information.  
Before committing or rolling back a transaction, we recommend that you review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/xa-restrictions.html) and your application requirements.

```
mysql> select trx_started,
    trx_mysql_thread_id,
    trx_id,trx_state,
    trx_operation_state,
    trx_rows_modified,
    trx_rows_locked 
from 
    information_schema.innodb_trx;
+---------------------+---------------------+---------+-----------+---------------------+-------------------+-----------------+
| trx_started         | trx_mysql_thread_id | trx_id  | trx_state | trx_operation_state | trx_rows_modified | trx_rows_locked |
+---------------------+---------------------+---------+-----------+---------------------+-------------------+-----------------+
| 2024-08-12 01:09:39 |                   0 | 2849470 | RUNNING   | NULL                |                 1 |               0 |
+---------------------+---------------------+---------+-----------+---------------------+-------------------+-----------------+
1 row in set (0.00 sec)

mysql> xa recover;
+----------+--------------+--------------+--------+
| formatID | gtrid_length | bqual_length | data   |
+----------+--------------+--------------+--------+
|        1 |            6 |            0 | xatest |
+----------+--------------+--------------+--------+
1 row in set (0.00 sec)
```
In this case, we roll back the prepared transaction.  

```
mysql> XA ROLLBACK 'xatest';
Query OK, 0 rows affected (0.00 sec)

mysql> xa recover;
Empty set (0.00 sec)
```
After the XA transaction is rolled back, the precheck succeeds.  

```
{
  "id": "auroraUpgradeCheckForIncompleteXATransactions",
  "title": "Pre-checks for XA Transactions in prepared state.",
  "status": "OK",
  "detectedProblems": []
}
```

**auroraUpgradeCheckForInstanceLimit**  
**Precheck level: Error**  
**Check whether upgrade is supported on the current instance class**  
Running an in-place upgrade from Aurora MySQL version 2.12.0 or 2.12.1, where the writer [DB instance class](Concepts.DBInstanceClass.md) is db.r6i.32xlarge, is currently not supported. In this case, the precheck returns an error. To allow the upgrade to proceed, either change your DB instance class to db.r6i.24xlarge or smaller, or upgrade to Aurora MySQL version 2.12.2 or higher, where in-place upgrade to Aurora MySQL version 3 is supported on db.r6i.32xlarge.  
**Example output:**  

```
{
  "id": "auroraUpgradeCheckForInstanceLimit",
  "title": "Checks if upgrade is supported on the current instance class",
  "status": "OK",
  "description": "Upgrade from Aurora Version 2.12.0 and 2.12.1 may fail for 32.xl and above instance class.",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "all",
        "description": "Upgrade is not supported on this instance size for Aurora MySql Version 2.12.1. Before upgrading to Aurora MySql 3, please consider either: 1. Changing the instance class to 24.xl or lower. -or- 2. Upgrading to patch version 2.12.2 or higher."
      }
  ]
},
```
The precheck returns an error because the writer DB instance is using the db.r6i.32xlarge instance class and is running Aurora MySQL version 2.12.1.
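If you choose to change the instance class, you can do so with the AWS CLI. The instance identifier here is a placeholder for your writer DB instance.

```
aws rds modify-db-instance \
    --db-instance-identifier my-aurora-writer \
    --db-instance-class db.r6i.24xlarge \
    --apply-immediately
```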

**auroraUpgradeCheckForInternalUsers**  
**Precheck level: Error**  
**Check for 8.0 internal users**  
This precheck applies only to Aurora MySQL version 3.03.0 and lower. If you encounter an error with this precheck, upgrade to Aurora MySQL version 3.04 or higher.  
**Example output:**  

```
{
  "id": "auroraUpgradeCheckForInternalUsers",
  "title": "Check for 8.0 internal users.",
  "status": "OK",
  "detectedProblems": []
}
```

**auroraUpgradeCheckForInvalidUtf8mb3CharacterStringInViews**  
**Precheck level: Error**  
**Check for invalid utf8mb3 characters in view definition**  
This precheck identifies views whose definitions contain invalid `utf8mb3` character strings. In MySQL 8.0, stricter validation is applied to character encoding in metadata, including view definitions. If any view definition contains characters that are not valid in the `utf8mb3` character set, the upgrade fails.  
To resolve this issue, modify the view definition to remove or replace any non-BMP characters before you attempt the upgrade.  
**Example output:**  

```
{
"id": "auroraUpgradeCheckForInvalidUtf8mb3CharacterStringInViews",
"title": "Check for invalid utf8mb3 character string.",
"status": "OK",
"description": "Definition of following view(s) has/have invalid utf8mb3 character string.",
"detectedProblems": [
        {
            "level": "Error",
            "dbObject": "precheck.utf8mb3_invalid_char_view",
            "description": "Definition of view precheck.utf8mb3_invalid_char_view contains an invalid utf8mb3 character string. This is due to https://bugs.mysql.com/bug.php?id=110177. To fix the inconsistency, we recommend you to modify the view definition to not use non-BMP characters and try the upgrade again."
        }
    ]
},
```
The precheck reports that the definition of the `precheck.utf8mb3_invalid_char_view` view contains invalid `utf8mb3` characters.  
To resolve this issue, identify the view that contains the unsupported characters and update its definition. First, examine the view definition.  

```
MySQL> SHOW CREATE VIEW precheck.utf8mb3_invalid_char_view\G
*************************** 1. row ***************************
                View: utf8mb3_invalid_char_view
        Create View: CREATE ALGORITHM=UNDEFINED DEFINER=`admin`@`%` SQL SECURITY DEFINER VIEW `utf8mb3_invalid_char_view` AS select 'This row contains a dolphin 🐬' AS `message`
character_set_client: utf8
collation_connection: utf8_general_ci
1 row in set, 1 warning (0.00 sec)
```
Once you've identified the view that contains the error, replace the view with the `CREATE OR REPLACE VIEW` statement.  

```
MySQL> CREATE OR REPLACE VIEW precheck.utf8mb3_invalid_char_view AS select 'This view definition does not use non-BMP characters' AS message;
```
After updating all view definitions that contain unsupported characters, the precheck passes and the upgrade can proceed.  

```
{
"id": "auroraUpgradeCheckForInvalidUtf8mb3ColumnComments",
"title": "Check for invalid utf8mb3 column comments.",
"status": "OK",
"detectedProblems": []
}
```

**auroraUpgradeCheckForInvalidUtf8mb3ColumnComments**  
**Precheck level: Error**  
**Check for invalid utf8mb3 column comments**  
This precheck identifies tables that contain column comments with invalid `utf8mb3` character encoding. In MySQL 8.0, stricter validation is applied to character encoding in metadata, including column comments. If any column comments contain characters that are not valid in the utf8mb3 character set, the upgrade will fail.  
To resolve this issue, you must modify the column comments to remove or replace any non-BMP characters before attempting the upgrade. You can use the `ALTER TABLE` statement to update the column comments.  
**Example output:**  

```
{
  "id": "auroraUpgradeCheckForInvalidUtf8mb3ColumnComments",
  "title": "Check for invalid utf8mb3 column comments.",
  "status": "OK",
  "description": "Following table(s) has/have invalid utf8mb3 comments on the column/columns.",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "test.t2",
        "description": "Table test.t2 has invalid utf8mb3 comments in it's column/columns. This is due to non-BMP characters in the comment field. To fix the inconsistency, we recommend you to modify comment fields to not use non-BMP characters and try the upgrade again."
      }
  ]
}
```
The precheck reports that the `test.t2` table contains invalid `utf8mb3` characters in one or more column comments, specifically due to the presence of non-BMP characters.  
To resolve this issue, you can identify the problematic columns and update their comments. First, examine the table structure to identify columns with comments:  

```
mysql> SHOW CREATE TABLE test.t2\G
```
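If the schema has many tables, you can also list every non-empty column comment from `information_schema` and scan the results for non-BMP characters:

```
mysql> SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, COLUMN_COMMENT
         FROM information_schema.COLUMNS
        WHERE COLUMN_COMMENT <> ''
          AND TABLE_SCHEMA NOT IN ('mysql', 'sys', 'information_schema', 'performance_schema');
```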
Once you've identified the columns with problematic comments, update them using the `ALTER TABLE` statement. For example:  

```
mysql> ALTER TABLE test.t2 MODIFY COLUMN column_name data_type COMMENT 'Updated comment without non-BMP characters';
```
Alternatively, you can remove the comment entirely:  

```
mysql> ALTER TABLE test.t2 MODIFY COLUMN column_name data_type COMMENT '';
```
After updating all problematic column comments, the precheck will pass and the upgrade can proceed:  

```
{
  "id": "auroraUpgradeCheckForInvalidUtf8mb3ColumnComments",
  "title": "Check for invalid utf8mb3 column comments.",
  "status": "OK",
  "detectedProblems": []
}
```
Before modifying column comments, ensure that any application code or documentation that relies on these comments is updated accordingly. Consider migrating to the `utf8mb4` character set for better Unicode support if your application requires non-BMP characters.

**auroraUpgradeCheckForInvalidUtf8mb3IndexComments**  
**Precheck level: Error**  
**Check for invalid utf8mb3 index comments**  
This precheck identifies tables that contain index comments with invalid `utf8mb3` character encoding. In MySQL 8.0, stricter validation is applied to character encoding in metadata, including index comments. If any index comments contain characters that are not valid in the `utf8mb3` character set, the upgrade fails.  
To resolve this issue, you must modify the index comments to remove or replace any non-BMP characters before attempting the upgrade.  
**Example output:**  

```
{
    "id": "auroraUpgradeCheckForInvalidUtf8mb3IndexComments",
    "title": "Check for invalid utf8mb3 index comments.",
    "status": "OK",
    "description": "Following table(s) has/have invalid utf8mb3 comments on the index.",
    "detectedProblems": [
        {
            "level": "Error",
            "dbObject": "precheck.utf8mb3_tab_index_comment",
            "description": "Table precheck.utf8mb3_tab_index_comment has invalid utf8mb3 comments in it's index. This is due to https://bugs.mysql.com/bug.php?id=110177. To fix the inconsistency, we recommend you to modify comment fields to not use non-BMP characters and try the upgrade again."
        }
    ]
},
```
The precheck reports that the `utf8mb3_tab_index_comment` table contains invalid `utf8mb3` characters in one or more index comments, specifically due to the presence of non-BMP characters.  
To resolve this issue, first examine the table structure to identify the index with the problematic comment.  

```
MySQL> SHOW CREATE TABLE precheck.utf8mb3_tab_index_comment\G
*************************** 1. row ***************************
    Table: utf8mb3_tab_index_comment
Create Table: CREATE TABLE `utf8mb3_tab_index_comment` (
`id` int(11) DEFAULT NULL,
`name` varchar(100) DEFAULT NULL,
KEY `idx_name` (`name`) COMMENT 'Name index 🐬'
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.01 sec)
```
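You can also enumerate all non-empty index comments across the instance from `information_schema` and review them for non-BMP characters:

```
mysql> SELECT DISTINCT TABLE_SCHEMA, TABLE_NAME, INDEX_NAME, INDEX_COMMENT
         FROM information_schema.STATISTICS
        WHERE INDEX_COMMENT <> '';
```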
Once you identify the index whose comment uses unsupported characters, drop and re-create the index.  
Dropping and re-creating an index can cause downtime. We recommend that you plan and schedule this operation during a maintenance window.

```
MySQL> ALTER TABLE precheck.utf8mb3_tab_index_comment DROP INDEX idx_name;
MySQL> ALTER TABLE precheck.utf8mb3_tab_index_comment ADD INDEX idx_name(name);
```
The following example shows another way to recreate the index.  

```
MySQL> ALTER TABLE utf8mb3_tab_index_comment DROP INDEX idx_name, ADD INDEX idx_name (name) COMMENT 'Updated comment without non-BMP characters';
```
After you remove or update all unsupported index comments, the precheck passes and the upgrade can proceed.  

```
{
"id": "auroraUpgradeCheckForInvalidUtf8mb3IndexComments",
"title": "Check for invalid utf8mb3 index comments.",
"status": "OK",
"detectedProblems": []
},
```

**auroraUpgradeCheckForInvalidUtf8mb3TableComments**  
**Precheck level: Error**  
**Check for invalid utf8mb3 characters in table definition**  
This precheck identifies tables that contain comments with invalid `utf8mb3` character encoding. In MySQL 8.0, stricter validation is applied to character encoding in metadata, including table comments. If any table comments contain characters that are not valid in the `utf8mb3` character set, the upgrade fails.  
To resolve this issue, you must modify the table comments to remove or replace any non-BMP characters before attempting the upgrade.  
**Example output:**  

```
{
    "id": "auroraUpgradeCheckForInvalidUtf8mb3TableComments",
    "title": "Check for invalid utf8mb3 table comments.",
    "status": "OK",
    "description": "Following table(s) has/have invalid utf8mb3 comments.",
    "detectedProblems": [
        {
            "level": "Error",
            "dbObject": "precheck.utf8mb3_table_with_comment",
            "description": "Table precheck.utf8mb3_table_with_comment has invalid utf8mb3 comments. This is due to https://bugs.mysql.com/bug.php?id=110177. To fix the inconsistency, we recommend you to modify comment fields to not use non-BMP characters and try the upgrade again."
        }
        
    ]
},
```
The precheck reports an invalid `utf8mb3` comment defined on the `utf8mb3_table_with_comment` table in the `precheck` database.  
To resolve this issue, identify the table that contains unsupported characters and update the comments. First, examine the table structure and identify the comments.  

```
MySQL> SHOW CREATE TABLE precheck.utf8mb3_table_with_comment\G
*************************** 1. row ***************************
    Table: utf8mb3_table_with_comment
Create Table: CREATE TABLE `utf8mb3_table_with_comment` (
`id` int(11) NOT NULL,
`name` varchar(100) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 COMMENT='This table comment contains flag 🏳️'
1 row in set (0.00 sec)
```
Once you've identified table comments that contain unsupported characters, update the comments with the `ALTER TABLE` statement.  

```
MySQL> ALTER TABLE precheck.utf8mb3_table_with_comment COMMENT='Updated comment without non-BMP characters';
```
Alternatively, you can remove the comment.  

```
MySQL> ALTER TABLE precheck.utf8mb3_table_with_comment COMMENT='';
```
After you remove all unsupported characters from all table comments, the precheck succeeds.  

```
{
"id": "auroraUpgradeCheckForInvalidUtf8mb3TableComments",
"title": "Check for invalid utf8mb3 table comments.",
"status": "OK",
"detectedProblems": []
},
```

**auroraUpgradeCheckForMasterUser**  
**Precheck level: Error**  
**Check whether RDS master user exists**  
MySQL 8.0 added a new privilege model with support for [roles](https://dev.mysql.com/doc/refman/8.0/en/roles.html) and [dynamic privileges](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#static-dynamic-privileges) to make privilege management easier and more fine-grained. As part of this change, Aurora MySQL introduced the new `rds_superuser_role`, which is automatically granted to the database's master user on upgrade from Aurora MySQL version 2 to version 3.  
For more information on the roles and privileges assigned to the master user in Aurora MySQL, see [Master user account privileges](UsingWithRDS.MasterAccounts.md). For more information on the role-based privilege model in Aurora MySQL version 3, see [Role-based privilege model](AuroraMySQL.Compare-80-v3.md#AuroraMySQL.privilege-model).  
This precheck verifies that the master user exists in the database. If the master user doesn't exist, the precheck fails. To allow the upgrade to proceed, re-create the master user by resetting the master user password, or by manually creating the user. Then retry the upgrade. For more information on resetting the master user password, see [Changing the password for the database master user](Aurora.Modifying.md#Aurora.Modifying.Password).  
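You can confirm whether the master user still exists by querying the `mysql.user` grant table. The user name `admin` in this sketch is a placeholder; substitute your cluster's master user name.

```
MySQL> SELECT User, Host FROM mysql.user WHERE User = 'admin';
```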
**Example output:**  

```
{
  "id": "auroraUpgradeCheckForMasterUser",
  "title": "Check if master user exists",
  "status": "OK",
  "description": "Throws error if master user has been dropped!",
  "documentationLink": "https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/UsingWithRDS.MasterAccounts.html",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "all",
        "description": "Your Master User on host '%' has been dropped. To proceed with the upgrade, recreate the master user `reinvent` on default host '%'"
      }
  ]
}
```
After you reset your master user password, the precheck will pass, and you can retry the upgrade.  
The following example uses the AWS CLI to reset the password. Password changes are applied immediately.  

```
aws rds modify-db-cluster \
    --db-cluster-identifier my-db-cluster \
    --master-user-password my-new-password
```
Then the precheck succeeds.  

```
{
  "id": "auroraUpgradeCheckForMasterUser",
  "title": "Check if master user exists",
  "status": "OK",
  "detectedProblems": []
}
```

**auroraUpgradeCheckForPrefixIndexOnGeometryColumns**  
**Precheck level: Error**  
**Check for geometry columns on prefix indexes**  
As of [MySQL 8.0.12](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-12.html#mysqld-8-0-12-spatial-support), you can no longer create a [prefixed](https://dev.mysql.com/doc/refman/5.7/en/column-indexes.html#column-indexes-prefix) index on a column using the [GEOMETRY](https://dev.mysql.com/doc/refman/5.7/en/gis-data-formats.html) data type. For more information, see [WL#11808](https://dev.mysql.com/worklog/task/?id=11808).  
If any such indexes exist, your upgrade will fail. To resolve the issue, drop the prefixed `GEOMETRY` indexes on the tables mentioned in the precheck failure.  
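To find such indexes ahead of the upgrade, you can join `information_schema.statistics` (where the `SUB_PART` column records an index prefix length) with `information_schema.columns`. This query is a sketch, not the exact check that the precheck runs.

```
MySQL> SELECT s.table_schema, s.table_name, s.index_name, s.column_name, s.sub_part
       FROM information_schema.statistics s
       JOIN information_schema.columns c
         ON s.table_schema = c.table_schema
        AND s.table_name = c.table_name
        AND s.column_name = c.column_name
       WHERE s.sub_part IS NOT NULL
         AND c.data_type IN ('geometry','point','linestring','polygon',
             'multipoint','multilinestring','multipolygon','geometrycollection');
```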
**Example output:**  

```
{
  "id": "auroraUpgradeCheckForPrefixIndexOnGeometryColumns",
  "title": "Check for geometry columns on prefix indexs",
  "status": "OK",
  "description": "Consider dropping the prefix Indexes of geometry columns and restart the upgrade.",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "test.geom_index_prefix",
        "description": "Table `test`.`geom_index_prefix` has an index `LatLon` on geometry column/s. Mysql 8.0 does not support this type of index on a geometry column https://dev.mysql.com/worklog/task/?id=11808. To upgrade to MySQL 8.0, Run 'DROP INDEX `LatLon` ON `test`.`geom_index_prefix`;"
      }
  ]
}
```
The precheck reports an error because the `test.geom_index_prefix` table contains an index with a prefix on a `GEOMETRY` column.  
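You can drop the index by running the statement suggested in the precheck output.

```
MySQL> DROP INDEX `LatLon` ON `test`.`geom_index_prefix`;
```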
After you drop this index, the precheck succeeds.  

```
{
  "id": "auroraUpgradeCheckForPrefixIndexOnGeometryColumns",
  "title": "Check for geometry columns on prefix indexs",
  "status": "OK",
  "detectedProblems": []
}
```

**auroraUpgradeCheckForSpecialCharactersInProcedures**  
**Precheck level: Error**  
**Check for inconsistency related to special characters in stored procedures**  
Before MySQL 8.0, database names, table names, and other objects corresponded to files in the data directory, that is, file-based metadata. As part of the upgrade to MySQL 8.0, all database objects are migrated to the new internal data dictionary tables that are stored in the `mysql` schema to support the newly implemented [Atomic Data Dictionary](https://dev.mysql.com/doc/refman/8.0/en/data-dictionary-file-removal.html). As part of migrating stored procedures, the procedure definition and body for each procedure is validated as it's ingested into the new data dictionary.  
Before MySQL 8.0, it was in some cases possible to create stored procedures that contained special characters, either with `CREATE PROCEDURE` statements or by inserting directly into the `mysql.proc` table. For example, you could create a stored procedure that contained a comment with the noncompliant [non-breaking space character](https://en.wikipedia.org/wiki/Non-breaking_space) `\xa0`. If any such procedures are encountered, the upgrade fails.  
This precheck validates that the bodies and definitions of your stored procedures don't contain any such characters. To allow the upgrade to proceed, re-create these stored procedures without any hidden or special characters.  
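One way to locate affected routines is to scan the `mysql.proc` table for the character that the precheck reports. The following sketch searches procedure bodies and comments for the non-breaking space character (`0xA0`); other hidden characters would need a similar search.

```
MySQL> SELECT db, name
       FROM mysql.proc
       WHERE body LIKE CONCAT('%', UNHEX('A0'), '%')
          OR comment LIKE CONCAT('%', UNHEX('A0'), '%');
```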
**Example output:**  

```
{
  "id": "auroraUpgradeCheckForSpecialCharactersInProcedures",
  "title": "Check for inconsistency related to special characters in stored procedures.",
  "status": "OK",
  "description": "Following procedure(s) has special characters inconsistency.",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "information_schema.routines",
        "description": "Data Dictionary Metadata is inconsistent for the procedure `get_version_proc` in the database `test` due to usage of special characters in procedure body. To avoid that, drop and recreate the procedure without any special characters before proceeding with the Upgrade."
      }
  ]
}
```
The precheck reports that the DB cluster contains a procedure called `get_version_proc` in the `test` database that contains special characters in the procedure body.  
After dropping and re-creating the stored procedure, the precheck succeeds, allowing the upgrade to proceed.  

```
{
  "id": "auroraUpgradeCheckForSpecialCharactersInProcedures",
  "title": "Check for inconsistency related to special characters in stored procedures.",
  "status": "OK",
  "detectedProblems": []
},
```

**auroraUpgradeCheckForSysSchemaObjectTypeMismatch**  
**Precheck level: Error**  
**Check object type mismatch for `sys` schema**  
The [sys schema](https://dev.mysql.com/doc/refman/8.0/en/sys-schema.html) is a set of objects and views in a MySQL database that helps users troubleshoot, optimize, and monitor their DB instances. When performing a major version upgrade from Aurora MySQL version 2 to version 3, the `sys` schema views are re-created and updated to the new Aurora MySQL version 3 definitions.  
As part of the upgrade, if any objects in the `sys` schema are defined using storage engines, that is, they appear with a `TABLE_TYPE` of `BASE TABLE` in [INFORMATION_SCHEMA.TABLES](https://dev.mysql.com/doc/refman/5.7/en/information-schema-tables-table.html) rather than as views (`sys_config` is the only expected base table), the upgrade fails. You can find such tables in the `information_schema.tables` table. This isn't expected behavior, but it can occur in some cases due to user error.  
This precheck validates all `sys` schema objects to ensure that they use the correct table definitions, and that views aren't mistakenly defined as InnoDB or MyISAM tables. To resolve the issue, manually fix any returned objects by renaming or dropping them. Then retry the upgrade.  
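You can list the `sys` schema objects that are defined as base tables with a query like the following. On a healthy Aurora MySQL version 2 cluster, `sys_config` should be the only row returned.

```
MySQL> SELECT table_name, table_type, engine
       FROM information_schema.tables
       WHERE table_schema = 'sys'
         AND table_type = 'BASE TABLE';
```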
**Example output:**  

```
{
  "id": "auroraUpgradeCheckForSysSchemaObjectTypeMismatch",
  "title": "Check object type mismatch for sys schema.",
  "status": "OK",
  "description": "Database contains objects with type mismatch for sys schema.",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "sys.waits_global_by_latency",
        "description": "Your object sys.waits_global_by_latency has a type mismatch. To fix the inconsistency we recommend to rename or remove the object before upgrading (use RENAME TABLE command). "
      }
  ]
}
```
The precheck reports that the [sys.waits_global_by_latency](https://dev.mysql.com/doc/refman/5.7/en/sys-waits-global-by-latency.html) view in the `sys` schema has a type mismatch that is blocking the upgrade from proceeding.  
After logging in to the DB instance, you can see that this object is defined as an InnoDB table, when it should be a view.  

```
mysql> show create table sys.waits_global_by_latency\G
*************************** 1. row ***************************
       Table: waits_global_by_latency
Create Table: CREATE TABLE `waits_global_by_latency` (
  `events` varchar(128) DEFAULT NULL,
  `total` bigint(20) unsigned DEFAULT NULL,
  `total_latency` text,
  `avg_latency` text,
  `max_latency` text
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
```
To resolve this issue, either drop and re-create the view with the [correct definition](https://github.com/mysql/mysql-server/blob/mysql-5.7.44/scripts/sys_schema/views/p_s/waits_global_by_latency.sql) or rename it. During the upgrade process, the view is automatically re-created with the correct definition.  

```
mysql> RENAME TABLE sys.waits_global_by_latency to sys.waits_global_by_latency_old;
Query OK, 0 rows affected (0.01 sec)
```
After doing this, the precheck passes.  

```
{
  "id": "auroraUpgradeCheckForSysSchemaObjectTypeMismatch",
  "title": "Check object type mismatch for sys schema.",
  "status": "OK",
  "detectedProblems": []
}
```

**auroraUpgradeCheckForViewColumnNameLength**  
**Precheck level: Error**  
**Check upper limit for column name in view**  
The [maximum permitted length of a column name](https://dev.mysql.com/doc/refman/5.7/en/identifier-length.html) in MySQL is 64 characters. Before MySQL 8.0, in some cases it was possible to create a view with a column name longer than 64 characters. If any such views exist on your database instance, a precheck error is returned, and the upgrade fails. To allow the upgrade to proceed, you must re-create the views in question, making sure that their column names don't exceed 64 characters. Then retry the upgrade.  
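To find view columns that exceed the limit before upgrading, you can join `information_schema.columns` with `information_schema.tables` to restrict the search to views. This query is a sketch, not the exact check that the precheck runs.

```
MySQL> SELECT c.table_schema, c.table_name, c.column_name,
              CHAR_LENGTH(c.column_name) AS name_length
       FROM information_schema.columns c
       JOIN information_schema.tables t
         ON c.table_schema = t.table_schema
        AND c.table_name = t.table_name
       WHERE t.table_type = 'VIEW'
         AND CHAR_LENGTH(c.column_name) > 64;
```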
**Example output:**  

```
{
  "id": "auroraUpgradeCheckForViewColumnNameLength",
  "title": "Check for upperbound limit related to column name in view.",
  "status": "OK",
  "description": "Following view(s) has column(s) with length greater than 64.",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "test.colname_view_test.col2_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad",
        "description": "View `test`.`colname_view_test`has column `col2_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad` with invalid column name length. To avoid Upgrade errors, view should be altered by renaming the column name so that its length is not 0 and doesn't exceed 64."
      }
  ]
}
```
The precheck reports that the `test.colname_view_test` view contains a column `col2_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad` that exceeds the maximum permitted column name length of 64 characters.  
Looking at the view definition, we can see the offending column.  

```
mysql> desc `test`.`colname_view_test`;
+------------------------------------------------------------------+-------------+------+-----+---------+-------+
| Field                                                            | Type        | Null | Key | Default | Extra |
+------------------------------------------------------------------+-------------+------+-----+---------+-------+
| col1                                                             | varchar(20) | YES  |     | NULL    |       |
| col2_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad_pad | int(11)     | YES  |     | NULL    |       |
+------------------------------------------------------------------+-------------+------+-----+---------+-------+
2 rows in set (0.00 sec)
```
To allow the upgrade to proceed, re-create the view, making sure that no column name exceeds 64 characters.  

```
mysql> drop view `test`.`colname_view_test`;
Query OK, 0 rows affected (0.01 sec)

mysql> create view `test`.`colname_view_test`(col1, col2_nopad) as select inf, fodcol from test;
Query OK, 0 rows affected (0.01 sec)

mysql> desc `test`.`colname_view_test`;
+------------+-------------+------+-----+---------+-------+
| Field      | Type        | Null | Key | Default | Extra |
+------------+-------------+------+-----+---------+-------+
| col1       | varchar(20) | YES  |     | NULL    |       |
| col2_nopad | int(11)     | YES  |     | NULL    |       |
+------------+-------------+------+-----+---------+-------+
2 rows in set (0.00 sec)
```
After doing this, the precheck succeeds.  

```
{
  "id": "auroraUpgradeCheckForViewColumnNameLength",
  "title": "Check for upperbound limit related to column name in view.",
  "status": "OK",
  "detectedProblems": []
}
```

**auroraUpgradeCheckIndexLengthLimitOnTinyColumns**  
**Precheck level: Error**  
**Check for tables with indexes defined with a prefix length greater than 255 bytes on tiny columns**  
When creating an index on a column using a [binary data type](https://dev.mysql.com/doc/refman/5.7/en/binary-varbinary.html) in MySQL, you must add a [prefix](https://dev.mysql.com/doc/refman/5.7/en/column-indexes.html#column-indexes-prefix) length in the index definition. Before MySQL 8.0, in some cases it was possible to specify a prefix length larger than the maximum allowed size of such data types. An example is `TINYTEXT` and `TINYBLOB` columns, where the maximum permitted data size is 255 bytes, but index prefixes larger than this were permitted. For more information, see [InnoDB limits](https://dev.mysql.com/doc/refman/8.0/en/innodb-limits.html) in the MySQL documentation.  
If this precheck fails, drop the offending index or reduce the prefix length of `TINYTEXT` and `TINYBLOB` columns of the index to less than 255 bytes. Then retry the upgrade.  
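To find affected indexes before upgrading, you can look for index prefixes on `TINYTEXT` and `TINYBLOB` columns. In `information_schema.statistics`, `SUB_PART` holds the prefix length in characters, so the byte length depends on the column character set (up to 4 bytes per character for `utf8mb4`). This query is a sketch, not the exact check that the precheck runs.

```
MySQL> SELECT s.table_schema, s.table_name, s.index_name, s.column_name, s.sub_part
       FROM information_schema.statistics s
       JOIN information_schema.columns c
         ON s.table_schema = c.table_schema
        AND s.table_name = c.table_name
        AND s.column_name = c.column_name
       WHERE c.data_type IN ('tinytext','tinyblob')
         AND s.sub_part IS NOT NULL;
```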
**Example output:**  

```
{
  "id": "auroraUpgradeCheckIndexLengthLimitOnTinyColumns",
  "title": "Check for the tables with indexes defined with prefix length greater than 255 bytes on tiny columns",
  "status": "OK",
  "description": "Prefix length of the indexes defined on tiny columns cannot exceed 255 bytes. With utf8mb4 char set, this limits the prefix length supported upto 63 characters only. A larger prefix length was allowed in MySQL5.7 using innodb_large_prefix parameter. This parameter is deprecated in MySQL 8.0.",
  "documentationLink": "https://dev.mysql.com/doc/refman/8.0/en/innodb-limits.html, https://dev.mysql.com/doc/refman/8.0/en/storage-requirements.html",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "test.tintxt_prefixed_index.col1",
        "description": "Index 'PRIMARY' on tinytext/tinyblob column `col1` of table `test.tintxt_prefixed_index` is defined with prefix length exceeding 255 bytes. Reduce the prefix length to <=255 bytes depending on character set used. For utf8mb4, it should be <=63."
      }
  ]
}
```
The precheck reports an error for the `test.tintxt_prefixed_index` table, because its `PRIMARY` index has a prefix larger than 255 bytes on a `TINYTEXT` or `TINYBLOB` column.  
Looking at the definition for this table, we can see that the primary key has a prefix length of 65 characters on the `TINYTEXT` column `col1`. Because the table is defined using the `utf8mb4` character set, which stores up to 4 bytes per character, the prefix can occupy 65 × 4 = 260 bytes, exceeding the 255-byte limit.  

```
mysql> show create table `test`.`tintxt_prefixed_index`\G
*************************** 1. row ***************************
       Table: tintxt_prefixed_index
Create Table: CREATE TABLE `tintxt_prefixed_index` (
  `col1` tinytext NOT NULL,
  `col2` tinytext,
  `col_id` tinytext,
  PRIMARY KEY (`col1`(65))
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 ROW_FORMAT=DYNAMIC
1 row in set (0.00 sec)
```
Reducing the index prefix to 63 characters (63 × 4 = 252 bytes) while using the `utf8mb4` character set allows the upgrade to proceed.  

```
mysql> alter table `test`.`tintxt_prefixed_index` drop PRIMARY KEY, ADD  PRIMARY KEY (`col1`(63));
Query OK, 0 rows affected (0.04 sec)
Records: 0  Duplicates: 0  Warnings: 0
```
After doing this, the precheck succeeds.  

```
{
  "id": "auroraUpgradeCheckIndexLengthLimitOnTinyColumns",
  "title": "Check for the tables with indexes defined with prefix length greater than 255 bytes on tiny columns",
  "status": "OK",
  "detectedProblems": []
}
```

**auroraUpgradeCheckMissingInnodbMetadataForMysqlHostTable**  
**Precheck level: Error**  
**Check missing InnoDB metadata inconsistency for the `mysql.host` table**  
This is an internal-only precheck carried out by the RDS service. Any errors will be automatically handled on upgrade and can be safely ignored.  
If errors from this precheck persist, open a case with [AWS Support](https://aws.amazon.com/support) to request that the metadata inconsistency be resolved. Alternatively, you can perform a logical dump of your data and restore it to a new Aurora MySQL version 3 DB cluster.

## Warnings


The following prechecks generate warnings when the precheck fails, but the upgrade can proceed.

**Topics**
+ [

### MySQL prechecks that report warnings
](#precheck-descriptions-warnings.mysql)
+ [

### Aurora MySQL prechecks that report warnings
](#precheck-descriptions-warnings.aurora)

### MySQL prechecks that report warnings


The following prechecks are from Community MySQL:
+ [defaultAuthenticationPlugin](#defaultAuthenticationPlugin)
+ [maxdbFlagCheck](#maxdbFlagCheck)
+ [mysqlDollarSignNameCheck](#mysqlDollarSignNameCheck)
+ [reservedKeywordsCheck](#reservedKeywordsCheck)
+ [utf8mb3Check](#utf8mb3Check)
+ [zeroDatesCheck](#zeroDatesCheck)

**defaultAuthenticationPlugin**  
**Precheck level: Warning**  
**New default authentication plugin considerations**  
In MySQL 8.0, the `caching_sha2_password` authentication plugin was introduced, providing more secure password hashing and better performance than the deprecated `mysql_native_password` plugin. In Aurora MySQL version 3, the default authentication plugin for database users is still the `mysql_native_password` plugin.  
This precheck warns that the `mysql_native_password` plugin will be removed, and the default changed, in a future major version release. Consider evaluating the compatibility of your application clients and users ahead of this change.  
For more information, see [caching_sha2_password compatibility issues and solutions](https://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html#upgrade-caching-sha2-password-compatibility-issues) in the MySQL documentation.  
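To see which database users would be affected by a future change of the default, you can list users grouped by their authentication plugin.

```
MySQL> SELECT User, Host, plugin
       FROM mysql.user
       ORDER BY plugin, User;
```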
**Example output:**  

```
{
  "id": "defaultAuthenticationPlugin",
  "title": "New default authentication plugin considerations",
  "description": "Warning: The new default authentication plugin 'caching_sha2_password' offers more secure password hashing than previously used 'mysql_native_password' (and consequent improved client connection authentication). However, it also has compatibility implications that may affect existing MySQL installations. If your MySQL installation must serve pre-8.0 clients and you encounter compatibility issues after upgrading, the simplest way to address those issues is to reconfigure the server to revert to the previous default authentication plugin (mysql_native_password). For example, use these lines in the server option file:\n\n[mysqld]\ndefault_authentication_plugin=mysql_native_password\n\nHowever, the setting should be viewed as temporary, not as a long term or permanent solution, because it causes new accounts created with the setting in effect to forego the improved authentication security.\nIf you are using replication please take time to understand how the authentication plugin changes may impact you.",
  "documentationLink": "https://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html#upgrade-caching-sha2-password-compatibility-issues\nhttps://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html#upgrade-caching-sha2-password-replication"
},
```

**maxdbFlagCheck**  
**Precheck level: Warning**  
**Usage of obsolete `MAXDB` `sql_mode` flag**  
In MySQL 8.0, a number of deprecated [sql_mode](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_sql_mode) system variable options were [removed](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html), one of which was `MAXDB`. This precheck examines all currently connected sessions, along with routines and triggers, to ensure that none have `sql_mode` set to any combination that includes `MAXDB`.  
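You can find stored routines and triggers that persist the `MAXDB` flag by querying the information schema.

```
MySQL> SELECT routine_schema, routine_name, sql_mode
       FROM information_schema.routines
       WHERE sql_mode LIKE '%MAXDB%';

MySQL> SELECT trigger_schema, trigger_name, sql_mode
       FROM information_schema.triggers
       WHERE sql_mode LIKE '%MAXDB%';
```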
**Example output:**  

```
{
  "id": "maxdbFlagCheck",
  "title": "Usage of obsolete MAXDB sql_mode flag",
  "status": "OK",
  "description": "Warning: The following DB objects have the obsolete MAXDB option persisted for sql_mode, which will be cleared during the upgrade. It can potentially change the datatype DATETIME into TIMESTAMP if it is used inside object's definition, and this in turn can change the behavior in case of dates earlier than 1970 or later than 2037. If this is a concern, please redefine these objects so that they do not rely on the MAXDB flag before running the upgrade.",
  "documentationLink": "https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals",
  "detectedProblems": [
      {
        "level": "Warning",
        "dbObject": "test.maxdb_stored_routine",
        "description": "PROCEDURE uses obsolete MAXDB sql_mode",
        "dbObjectType": "Routine"
      }
  ]
}
```
The precheck reports that the `test.maxdb_stored_routine` routine contains an unsupported `sql_mode` option.  
After logging into the database, you can see in the routine definition that `sql_mode` contains `MAXDB`.  

```
 > SHOW CREATE PROCEDURE test.maxdb_stored_routine\G

*************************** 1. row ***************************
           Procedure: maxdb_stored_routine
            sql_mode: PIPES_AS_CONCAT,ANSI_QUOTES,IGNORE_SPACE,MAXDB,NO_KEY_OPTIONS,NO_TABLE_OPTIONS,NO_FIELD_OPTIONS,NO_AUTO_CREATE_USER
    Create Procedure: CREATE DEFINER="msandbox"@"localhost" PROCEDURE "maxdb_stored_routine"()
BEGIN
    SELECT * FROM test;
END
character_set_client: utf8
collation_connection: utf8_general_ci
  Database Collation: latin1_swedish_ci
1 row in set (0.00 sec)
```
To resolve the issue, re-create the procedure after setting the correct `sql_mode` on the client.  
According to the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/create-procedure.html), MySQL stores the `sql_mode` setting that's in effect when a routine is created or altered. It always runs the routine with this setting, regardless of the `sql_mode` setting when the routine starts.  
Before changing `sql_mode`, see [Server SQL modes](https://dev.mysql.com/doc/refman/5.7/en/sql-mode.html) in the MySQL documentation. Carefully evaluate any potential impact of doing so on your application.
Re-create the procedure without the unsupported `sql_mode`.  

```
mysql > set sql_mode='PIPES_AS_CONCAT,ANSI_QUOTES,IGNORE_SPACE';
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql > DROP PROCEDURE test.maxdb_stored_routine\G
Query OK, 0 rows affected (0.00 sec)

mysql >
mysql > DELIMITER $$
mysql >
mysql > CREATE PROCEDURE test.maxdb_stored_routine()
    -> SQL SECURITY DEFINER
    -> BEGIN
    ->     SELECT * FROM test;
    -> END$$
Query OK, 0 rows affected (0.00 sec)

mysql >
mysql > DELIMITER ;
mysql > show create procedure test.maxdb_stored_routine\G
*************************** 1. row ***************************
           Procedure: maxdb_stored_routine
            sql_mode: PIPES_AS_CONCAT,ANSI_QUOTES,IGNORE_SPACE
    Create Procedure: CREATE DEFINER="msandbox"@"localhost" PROCEDURE "maxdb_stored_routine"()
BEGIN
    SELECT * FROM test;
END
character_set_client: utf8
collation_connection: utf8_general_ci
  Database Collation: latin1_swedish_ci
1 row in set (0.00 sec)
```
The precheck succeeds.  

```
{
  "id": "maxdbFlagCheck",
  "title": "Usage of obsolete MAXDB sql_mode flag",
  "status": "OK",
  "detectedProblems": []
}
```

**mysqlDollarSignNameCheck**  
**Precheck level: Warning**  
**Check for deprecated usage of single dollar signs in object names**  
As of [MySQL 8.0.32](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-32.html#mysqld-8-0-32-deprecation-removal), use of the dollar sign (`$`) as the first character of an unquoted identifier is deprecated. If you have any schemas, tables, views, columns, or routines with names that begin with a `$`, this precheck returns a warning. While this warning doesn't block the upgrade from proceeding, we recommend that you resolve it soon. As of [MySQL 8.4](https://dev.mysql.com/doc/refman/8.4/en/mysql-nutshell.html), any such identifiers return a syntax error rather than a warning.  
**Example output:**  

```
{
  "id": "mysqlDollarSignNameCheck",
  "title": "Check for deprecated usage of single dollar signs in object names",
  "status": "OK",
  "description": "Warning: The following objects have names with deprecated usage of dollar sign ($) at the begining of the identifier. To correct this warning, ensure, that names starting with dollar sign, also end with it, similary to quotes ($example$). ",
  "detectedProblems": [
      {
        "level": "Warning",
        "dbObject": "test.$deprecated_syntx",
        "description": " name starts with $ sign."
      }
  ]
},
```
The precheck reports a warning because the name of the `$deprecated_syntx` table in the `test` schema begins with a `$` character.
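To inventory objects whose names start with a dollar sign, you can run queries along these lines. They're shown here for tables and routines; extend the same pattern to views and columns as needed.

```
MySQL> SELECT table_schema, table_name
       FROM information_schema.tables
       WHERE table_name LIKE '$%';

MySQL> SELECT routine_schema, routine_name
       FROM information_schema.routines
       WHERE routine_name LIKE '$%';
```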

**reservedKeywordsCheck**  
**Precheck level: Warning**  
**Usage of database objects with names conflicting with new reserved keywords**  
This check is similar to [routineSyntaxCheck](#routineSyntaxCheck): it checks for database objects whose names conflict with new reserved keywords. While these warnings don't block upgrades, evaluate them carefully.  
**Example output:**  
Using the previous example with the table named `except`, the precheck returns a warning:  

```
{
  "id": "reservedKeywordsCheck",
  "title": "Usage of db objects with names conflicting with new reserved keywords",
  "status": "OK",
  "description": "Warning: The following objects have names that conflict with new reserved keywords. Ensure queries sent by your applications use `quotes` when referring to them or they will result in errors.",
  "documentationLink": "https://dev.mysql.com/doc/refman/en/keywords.html",
  "detectedProblems": [
      {
        "level": "Warning",
        "dbObject": "test.except",
        "description": "Table name",
        "dbObjectType": "Table"
      }
  ]
}
```
This warning lets you know that there might be some application queries to review. If your application queries don't correctly [quote](https://dev.mysql.com/doc/refman/8.0/en/identifiers.html) identifiers that conflict with new reserved keywords, they might return errors after upgrading to MySQL 8.0. Review your applications to confirm, testing against a clone or snapshot of your Aurora MySQL DB cluster upgraded to version 3.  
Example of an application query without identifier quoting that fails after upgrading:  

```
SELECT * FROM except;
```
Example of a correctly quoted application query that succeeds after upgrading:  

```
SELECT * FROM `except`;
```

**utf8mb3Check**  
**Precheck level: Warning**  
**Usage of `utf8mb3` character set**  
In MySQL 8.0, the `utf8mb3` character set is deprecated and will be removed in a future MySQL major version. This precheck raises a warning if any database objects using this character set are detected. While the warning doesn't block an upgrade from proceeding, we strongly recommend that you plan to migrate affected tables to the `utf8mb4` character set, which is the default as of MySQL 8.0. For more information on [utf8mb3](https://dev.mysql.com/doc/refman/8.0/en/charset-unicode-utf8mb3.html) and [utf8mb4](https://dev.mysql.com/doc/refman/8.0/en/charset-unicode-utf8mb4.html), see [Converting between 3-byte and 4-byte Unicode character sets](https://dev.mysql.com/doc/refman/8.0/en/charset-unicode-conversion.html) in the MySQL documentation.  
**Example output:**  

```
{
  "id": "utf8mb3",
  "title": "Usage of utf8mb3 charset",
  "status": "OK",
  "description": "Warning: The following objects use the deprecated utf8mb3 character set. It is recommended to convert them to use utf8mb4 instead, for improved Unicode support. The utf8mb3 character is subject to removal in the future.",
  "documentationLink": "https://dev.mysql.com/doc/refman/8.0/en/charset-unicode-utf8mb3.html",
  "detectedProblems": [
      {
        "level": "Warning",
        "dbObject": "test.t1.col1",
        "description": "column's default character set: utf8",
        "dbObjectType": "Column"
      },
      {
        "level": "Warning",
        "dbObject": "test.t1.col2",
        "description": "column's default character set: utf8",
        "dbObjectType": "Column"
      }
  ]
}
```
To resolve this issue, rebuild the referenced objects and tables. For more information and prerequisites, see [Converting between 3-byte and 4-byte Unicode character sets](https://dev.mysql.com/doc/refman/8.0/en/charset-unicode-conversion.html) in the MySQL documentation.
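For example, you can list the affected columns and then convert a table. The `CONVERT TO CHARACTER SET` statement rewrites the whole table, so test it on a clone first; the table name here is taken from the example output above.

```
MySQL> SELECT table_schema, table_name, column_name, character_set_name
       FROM information_schema.columns
       WHERE character_set_name IN ('utf8', 'utf8mb3');

MySQL> ALTER TABLE test.t1 CONVERT TO CHARACTER SET utf8mb4;
```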

**zeroDatesCheck**  
**Precheck level: Warning**  
**Zero date, datetime, and timestamp values**  
MySQL now enforces stricter rules regarding the use of zero values in date, datetime, and timestamp columns. We recommend that you use the `NO_ZERO_IN_DATE` and `NO_ZERO_DATE` SQL modes in conjunction with strict mode, because they will be merged with strict mode in a future MySQL release.  
If the `sql_mode` setting for any of your database connections doesn't include these modes when the precheck runs, the precheck raises a warning. Users might still be able to insert zero values into date, datetime, and timestamp columns. However, we strongly advise replacing any zero values with valid ones, because their behavior might change in the future and they might not work correctly. As this is a warning, it doesn't block upgrades, but we recommend that you start planning to take action.  
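You can check your current global and session settings, and scan for existing zero values, with statements like the following. The table and column in the second statement are hypothetical examples; in Aurora, the global `sql_mode` is managed through the DB cluster parameter group.

```
MySQL> SELECT @@GLOBAL.sql_mode, @@SESSION.sql_mode;

MySQL> SELECT COUNT(*) FROM test.orders
       WHERE created_at = '0000-00-00 00:00:00';
```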
**Example output:**  

```
{
  "id": "zeroDatesCheck",
  "title": "Zero Date, Datetime, and Timestamp values",
  "status": "OK",
  "description": "Warning: By default zero date/datetime/timestamp values are no longer allowed in MySQL, as of 5.7.8 NO_ZERO_IN_DATE and NO_ZERO_DATE are included in SQL_MODE by default. These modes should be used with strict mode as they will be merged with strict mode in a future release. If you do not include these modes in your SQL_MODE setting, you are able to insert date/datetime/timestamp values that contain zeros. It is strongly advised to replace zero values with valid ones, as they may not work correctly in the future.",
  "documentationLink": "https://lefred.be/content/mysql-8-0-and-wrong-dates/",
  "detectedProblems": [
      {
        "level": "Warning",
        "dbObject": "global.sql_mode",
        "description": "does not contain either NO_ZERO_DATE or NO_ZERO_IN_DATE which allows insertion of zero dates"
      },
      {
        "level": "Warning",
        "dbObject": "session.sql_mode",
        "description": " of 10 session(s) does not contain either NO_ZERO_DATE or NO_ZERO_IN_DATE which allows insertion of zero dates"
      }
  ]
}
```
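Before upgrading, you can inspect your current `sql_mode` settings and look for zero date values yourself. The following statements are a sketch; the table and column names (`mydb.mytable`, `created_at`) are placeholders for your own schema.

```
-- Check whether the global and session sql_mode include the recommended modes
SELECT @@GLOBAL.sql_mode, @@SESSION.sql_mode;

-- Count rows containing zero date values (placeholder table and column names)
SELECT COUNT(*) FROM mydb.mytable WHERE created_at = '0000-00-00 00:00:00';

-- Add the recommended modes at the session level before fixing the data
SET SESSION sql_mode = CONCAT(@@SESSION.sql_mode, ',NO_ZERO_DATE,NO_ZERO_IN_DATE');
```

After you replace the zero values with valid ones, the precheck no longer raises the warning for those rows.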

### Aurora MySQL prechecks that report warnings


The following prechecks are specific to Aurora MySQL:
+ [auroraUpgradeCheckForRollbackSegmentHistoryLength](#auroraUpgradeCheckForRollbackSegmentHistoryLength)
+ [auroraUpgradeCheckForUncommittedRowModifications](#auroraUpgradeCheckForUncommittedRowModifications)

**auroraUpgradeCheckForRollbackSegmentHistoryLength**  
**Precheck level: Warning**  
**Checks whether the rollback segment history list length for the cluster is high**  
As mentioned in [auroraUpgradeCheckForIncompleteXATransactions](#auroraUpgradeCheckForIncompleteXATransactions), while running the major version upgrade process, it is essential that the Aurora MySQL version 2 DB instance undergo a [clean shutdown](https://dev.mysql.com/doc/refman/5.7/en/glossary.html#glos_slow_shutdown). This ensures that all transactions are committed or rolled back, and that InnoDB has purged all undo log records.  
If your DB cluster has a high rollback segment history list length (HLL), it can prolong the amount of time that InnoDB takes to complete its purge of undo log records, leading to extended downtime during the major version upgrade process. If the precheck detects that the HLL on your DB cluster is high, it raises a warning. While this doesn't block your upgrade from proceeding, we recommend that you closely monitor the HLL on your DB cluster. Keeping it at low levels reduces the downtime required during a major version upgrade. For more information on monitoring HLL, see [The InnoDB history list length increased significantly](proactive-insights.history-list.md).  
**Example output:**  

```
{
  "id": "auroraUpgradeCheckForRollbackSegmentHistoryLength",
  "title": "Checks if the rollback segment history length for the cluster is high",
  "status": "OK",
  "description": "Rollback Segment History length is greater than 1M. Upgrade may take longer time.",
  "detectedProblems": [
      {
        "level": "Warning",
        "dbObject": "information_schema.innodb_metrics",
        "description": "The InnoDB undo history list length('trx_rseg_history_len') is 82989114. Upgrade may take longer due to purging of undo information for old row versions."
      }
  ]
}
```
The precheck returns a warning because it detected the InnoDB undo HLL was high on the database cluster (82989114). Although the upgrade proceeds, depending on the amount of undo to purge, it can extend the downtime required during the upgrade process.  
We recommend that you [investigate open transactions](proactive-insights.history-list.md) on your DB cluster before running the upgrade in production, to make sure your HLL is kept at a manageable size.
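You can check the current HLL on your writer DB instance with a query like the following, using the same `trx_rseg_history_len` counter that the precheck reports from the standard `information_schema.innodb_metrics` table:

```
SELECT name, count
FROM information_schema.innodb_metrics
WHERE name = 'trx_rseg_history_len';
```

If the value is consistently high, investigate long-running transactions before scheduling the upgrade.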

**auroraUpgradeCheckForUncommittedRowModifications**  
**Precheck level: Warning**  
**Checks whether there are many uncommitted row modifications**  
As mentioned in [auroraUpgradeCheckForIncompleteXATransactions](#auroraUpgradeCheckForIncompleteXATransactions), while running the major version upgrade process, it is essential that the Aurora MySQL version 2 DB instance undergo a [clean shutdown](https://dev.mysql.com/doc/refman/5.7/en/glossary.html#glos_slow_shutdown). This ensures that all transactions are committed or rolled back, and that InnoDB has purged all undo log records.  
If your DB cluster has transactions that have modified a large number of rows, it can prolong the amount of time that InnoDB takes to roll back those transactions as part of the clean shutdown process. If the precheck finds long-running transactions with a large number of modified rows on your DB cluster, it raises a warning. While this doesn't block your upgrade from proceeding, we recommend that you closely monitor the size of active transactions on your DB cluster. Keeping them small reduces the downtime required during a major version upgrade.  
**Example output:**  

```
{
  "id": "auroraUpgradeCheckForUncommittedRowModifications",
  "title": "Checks if there are many uncommitted modifications to rows",
  "status": "OK",
  "description": "Database contains uncommitted row changes greater than 10M. Upgrade may take longer time.",
  "detectedProblems": [
      {
        "level": "Warning",
        "dbObject": "information_schema.innodb_trx",
        "description": "The database contains 11000000 uncommitted row change(s) in 1 transaction(s). Upgrade may take longer due to transaction rollback."
      }
  ]
},
```
The precheck reports that the DB cluster contains a transaction with 11,000,000 uncommitted row changes that will need to be rolled back during the clean shutdown process. The upgrade will proceed, but to reduce downtime during the upgrade process, we recommend that you monitor and investigate this before running the upgrade on your production clusters.  
To view active transactions on your writer DB instance, you can use the [information\_schema.innodb\_trx](https://dev.mysql.com/doc/refman/5.7/en/information-schema-innodb-trx-table.html) table. The following query on the writer DB instance shows current transactions, run time, state, and modified rows for the DB cluster.  

```
# Example of uncommitted transaction
mysql> SELECT trx_started,
       TIME_TO_SEC(TIMEDIFF(now(), trx_started)) AS seconds_trx_has_been_running,
       trx_mysql_thread_id AS show_processlist_connection_id,
       trx_id,
       trx_state,
       trx_rows_modified AS rows_modified
FROM information_schema.innodb_trx;
+---------------------+------------------------------+--------------------------------+----------+-----------+---------------+
| trx_started         | seconds_trx_has_been_running | show_processlist_connection_id | trx_id   | trx_state | rows_modified |
+---------------------+------------------------------+--------------------------------+----------+-----------+---------------+
| 2024-08-12 18:32:52 |                         1592 |                          20041 | 52866130 | RUNNING   |      11000000 |
+---------------------+------------------------------+--------------------------------+----------+-----------+---------------+
1 row in set (0.01 sec)

# Example of transaction rolling back
mysql> SELECT trx_started,
       TIME_TO_SEC(TIMEDIFF(now(), trx_started)) AS seconds_trx_has_been_running,
       trx_mysql_thread_id AS show_processlist_connection_id,
       trx_id,
       trx_state,
       trx_rows_modified AS rows_modified
FROM information_schema.innodb_trx;
+---------------------+------------------------------+--------------------------------+----------+--------------+---------------+
| trx_started         | seconds_trx_has_been_running | show_processlist_connection_id | trx_id   | trx_state    | rows_modified |
+---------------------+------------------------------+--------------------------------+----------+--------------+---------------+
| 2024-08-12 18:32:52 |                         1719 |                          20041 | 52866130 | ROLLING BACK |      10680479 |
+---------------------+------------------------------+--------------------------------+----------+--------------+---------------+
1 row in set (0.01 sec)
```
After the transaction is committed or rolled back, the precheck no longer returns a warning. Consult the MySQL documentation and your application team before rolling back any large transactions, as rollback can take some time to complete, depending on transaction size.  

```
{
  "id": "auroraUpgradeCheckForUncommittedRowModifications",
  "title": "Checks if there are many uncommitted modifications to rows",
  "status": "OK",
  "detectedProblems": []
},
```
For more information on optimizing InnoDB transaction management, and the potential impact of running and rolling back large transactions on MySQL database instances, see [Optimizing InnoDB transaction management](https://dev.mysql.com/doc/refman/5.7/en/optimizing-innodb-transaction-management.html) in the MySQL documentation.

## Notices


The following precheck generates a notice when the precheck fails, but the upgrade can proceed.

**sqlModeFlagCheck**  
**Precheck level: Notice**  
**Usage of obsolete `sql_mode` flags**  
In addition to `MAXDB`, a number of other `sql_mode` options have been [removed](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html): `DB2`, `MSSQL`, `MYSQL323`, `MYSQL40`, `ORACLE`, `POSTGRESQL`, `NO_FIELD_OPTIONS`, `NO_KEY_OPTIONS`, and `NO_TABLE_OPTIONS`. As of MySQL 8.0, none of these values can be assigned to the `sql_mode` system variable. If this precheck finds any open sessions using these `sql_mode` settings, make sure that your DB instance and DB cluster parameter groups, and client applications and configurations, are updated to disable them. For more information, see the [MySQL documentation](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html).  
**Example output:**  

```
{
  "id": "sqlModeFlagCheck",
  "title": "Usage of obsolete sql_mode flags",
  "status": "OK",
  "detectedProblems": []
}
```
To resolve any of these precheck failures, see [maxdbFlagCheck](#maxdbFlagCheck).
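To look for obsolete flags yourself before upgrading, you can inspect the global and per-session `sql_mode` values. This is a sketch; the session-level query requires that `performance_schema` is enabled.

```
-- Global setting
SELECT @@GLOBAL.sql_mode;

-- Per-session settings (requires performance_schema)
SELECT variable_value, COUNT(*) AS sessions
FROM performance_schema.variables_by_thread
WHERE variable_name = 'sql_mode'
GROUP BY variable_value;
```

Any result containing one of the removed values indicates a connection or configuration that needs updating.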

## Errors, warnings, or notices


The following precheck can return an error, warning, or notice depending on the precheck output.

**checkTableOutput**  
**Precheck level: Error, Warning, or Notice**  
**Issues reported by the `check table x for upgrade` command**  
Before starting the upgrade to Aurora MySQL version 3, `check table for upgrade` is run on each table in the user schemas in your DB cluster. This precheck isn't the same as [checkTableMysqlSchema](#checkTableMysqlSchema).  
The `check table for upgrade` command examines tables for any potential issues that might arise during an upgrade to a newer version of MySQL. Running this command before attempting an upgrade can help identify and resolve any incompatibilities ahead of time, making the actual upgrade process smoother.  
This command performs various checks on each table, such as the following:  
+ Verifying that the table structure and metadata are compatible with the target MySQL version
+ Checking for any deprecated or removed features used by the table
+ Ensuring that the table can be properly upgraded without data loss
Unlike other prechecks, it can return an error, warning, or notice depending on the `check table` output. If this precheck returns any tables, review them carefully, along with the return code and message before initiating the upgrade. For more information, see [CHECK TABLE statement](https://dev.mysql.com/doc/refman/5.7/en/check-table.html) in the MySQL documentation.  
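You can run the same check yourself before starting the upgrade. The following transcript is a sketch for a hypothetical table `test.t1`; for a table with no upgrade issues, the output resembles the following.

```
mysql> CHECK TABLE test.t1 FOR UPGRADE;
+---------+-------+----------+----------+
| Table   | Op    | Msg_type | Msg_text |
+---------+-------+----------+----------+
| test.t1 | check | status   | OK       |
+---------+-------+----------+----------+
1 row in set (0.00 sec)
```

Any other `Msg_type` and `Msg_text` values point to issues to review before the upgrade.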
Here we provide an error example and a warning example.  
**Error example:**  

```
{
  "id": "checkTableOutput",
  "title": "Issues reported by 'check table x for upgrade' command",
  "status": "OK",
  "detectedProblems": [
      {
        "level": "Error",
        "dbObject": "test.parent",
        "description": "Table 'test.parent' doesn't exist"
      }
  ]
},
```
The precheck reports an error that the `test.parent` table doesn't exist.  
The `mysql-error.log` file for the writer DB instance shows that there is a foreign key error.  

```
2024-08-13T15:32:10.676893Z 62 [Warning] InnoDB: Load table `test`.`parent` failed, the table has missing foreign key indexes. Turn off 'foreign_key_checks' and try again.
2024-08-13T15:32:10.676905Z 62 [Warning] InnoDB: Cannot open table test/parent from the internal data dictionary of InnoDB though the .frm file for the table exists.
Please refer to http://dev.mysql.com/doc/refman/5.7/en/innodb-troubleshooting.html for how to resolve the issue.
```
Log into the writer DB instance and run `show engine innodb status\G` to get more information on the foreign key error.  

```
mysql> show engine innodb status\G
*************************** 1. row ***************************
  Type: InnoDB
  Name:
Status:
=====================================
2024-08-13 15:33:33 0x14ef7b8a1700 INNODB MONITOR OUTPUT
=====================================
.
.
.
------------------------
LATEST FOREIGN KEY ERROR
------------------------
2024-08-13 15:32:10 0x14ef6dbbb700 Error in foreign key constraint of table test/child:
there is no index in referenced table which would contain
the columns as the first columns, or the data types in the
referenced table do not match the ones in table. Constraint:
,
  CONSTRAINT `fk_pname` FOREIGN KEY (`p_name`) REFERENCES `parent` (`name`)
The index in the foreign key in table is p_name_idx
Please refer to http://dev.mysql.com/doc/refman/5.7/en/innodb-foreign-key-constraints.html for correct foreign key definition.
.
.
```
The `LATEST FOREIGN KEY ERROR` message reports that the `fk_pname` foreign key constraint in the `test.child` table, which references the `test.parent` table, has either a missing index or a data type mismatch. The MySQL documentation on [foreign key constraints](https://dev.mysql.com/doc/refman/5.7/en/create-table-foreign-keys.html) states that columns referenced in a foreign key must have an associated index, and the parent and child columns must use the same data type.  
To verify whether this is related to a missing index or data type mismatch, log into the database and check the table definitions by temporarily disabling the session variable [foreign\_key\_checks](https://dev.mysql.com/doc/refman/5.7/en/create-table-foreign-keys.html#foreign-key-checks). After doing so, we can see that the child constraint in question (`fk_pname`) uses `p_name varchar(20) CHARACTER SET latin1 DEFAULT NULL` to reference the parent table `name varchar(20) NOT NULL`. The parent table uses `DEFAULT CHARSET=utf8`, but the child table’s `p_name` column uses `latin1`, so the data type mismatch error is thrown.  

```
mysql> show create table parent\G
ERROR 1146 (42S02): Table 'test.parent' doesn't exist

mysql> show create table child\G
*************************** 1. row ***************************
       Table: child
Create Table: CREATE TABLE `child` (
  `id` int(11) NOT NULL,
  `p_name` varchar(20) CHARACTER SET latin1 DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `p_name_idx` (`p_name`),
  CONSTRAINT `fk_pname` FOREIGN KEY (`p_name`) REFERENCES `parent` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.00 sec)

mysql> set foreign_key_checks=0;
Query OK, 0 rows affected (0.00 sec)

mysql> show create table parent\G
*************************** 1. row ***************************
       Table: parent
Create Table: CREATE TABLE `parent` (
  `name` varchar(20) NOT NULL,
  PRIMARY KEY (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.00 sec)

mysql> show create table child\G
*************************** 1. row ***************************
       Table: child
Create Table: CREATE TABLE `child` (
  `id` int(11) NOT NULL,
  `p_name` varchar(20) CHARACTER SET latin1 DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `p_name_idx` (`p_name`),
  CONSTRAINT `fk_pname` FOREIGN KEY (`p_name`) REFERENCES `parent` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
```
To resolve this issue, we can either change the child table to use the same character set as the parent, or change the parent to use the same character set as the child table. Here, because the child table explicitly uses `latin1` in the `p_name` column definition, we run `ALTER TABLE` to modify the character set to `utf8`.  

```
mysql> alter table child modify p_name varchar(20) character set utf8 DEFAULT NULL;
Query OK, 0 rows affected (0.06 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> flush tables;
Query OK, 0 rows affected (0.01 sec)
```
After doing so, the precheck passes, and the upgrade can proceed.  
**Warning example:**  

```
{
  "id": "checkTableOutput",
  "title": "Issues reported by 'check table x for upgrade' command",
  "status": "OK",
  "detectedProblems": [
      {
        "level": "Warning",
        "dbObject": "test.orders",
        "description": "Trigger test.orders.delete_audit_trigg does not have CREATED attribute."
      }
  ]
}
```
The precheck reports a warning for the `delete_audit_trigg` trigger on the `test.orders` table because it doesn't have a `CREATED` attribute. According to [Checking version compatibility](https://dev.mysql.com/doc/refman/5.7/en/check-table.html#check-table-version-compatibility) in the MySQL documentation, this message is informational, and is printed for triggers created before MySQL 5.7.2.  
Because this is a warning, it doesn't block the upgrade from proceeding. However, if you wish to resolve the issue, you can re-create the trigger in question, and the precheck succeeds with no warnings.  
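A sketch of re-creating the trigger follows. The trigger body and the `orders_audit` table here are placeholders; capture your actual definition with `SHOW CREATE TRIGGER` and re-create it exactly.

```
-- Capture the existing definition first
SHOW CREATE TRIGGER test.delete_audit_trigg\G

-- Drop and re-create the trigger so that it gets a CREATED attribute
DROP TRIGGER test.delete_audit_trigg;
CREATE TRIGGER delete_audit_trigg
  AFTER DELETE ON test.orders
  FOR EACH ROW
  INSERT INTO test.orders_audit (order_id, deleted_at) VALUES (OLD.id, NOW());
```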

```
{
  "id": "checkTableOutput",
  "title": "Issues reported by 'check table x for upgrade' command",
  "status": "OK",
  "detectedProblems": []
},
```

# How to perform an in-place upgrade


We recommend that you review the background material in [How the Aurora MySQL in-place major version upgrade works](AuroraMySQL.Updates.MajorVersionUpgrade.md#AuroraMySQL.Upgrading.Sequence).

Perform any preupgrade planning and testing, as described in [Planning a major version upgrade for an Aurora MySQL cluster](AuroraMySQL.Updates.MajorVersionUpgrade.md#AuroraMySQL.Upgrading.Planning).

## Console


The following example upgrades the `mydbcluster-cluster` DB cluster to Aurora MySQL version 3.04.1.

**To upgrade the major version of an Aurora MySQL DB cluster**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. If you used a custom parameter group for the original DB cluster, create a corresponding parameter group compatible with the new major version. Make any necessary adjustments to the configuration parameters in that new parameter group. For more information, see [How in-place upgrades affect the parameter groups for a cluster](#AuroraMySQL.Upgrading.ParamGroups).

1.  In the navigation pane, choose **Databases**. 

1.  In the list, choose the DB cluster that you want to modify. 

1.  Choose **Modify**. 

1.  For **Version**, choose a new Aurora MySQL major version.

   We generally recommend using the latest minor version of the major version. Here, we choose the current default version.  
![\[In-place upgrade of an Aurora MySQL DB cluster from version 2 to version 3\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/ams-upgrade-v2-v3.png)

1.  Choose **Continue**. 

1.  On the next page, specify when to perform the upgrade. Choose **During the next scheduled maintenance window** or **Immediately**. 

1.  (Optional) Periodically examine the **Events** page in the RDS console during the upgrade. Doing so helps you to monitor the progress of the upgrade and identify any issues. If the upgrade encounters any issues, consult [Troubleshooting for Aurora MySQL in-place upgrade](AuroraMySQL.Upgrading.Troubleshooting.md) for the steps to take. 

1. If you created a new parameter group at the start of this procedure, associate the custom parameter group with your upgraded cluster. For more information, see [How in-place upgrades affect the parameter groups for a cluster](#AuroraMySQL.Upgrading.ParamGroups).
**Note**  
 Performing this step requires you to restart the cluster again to apply the new parameter group. 

1.  (Optional) After you complete any post-upgrade testing, delete the manual snapshot that Aurora created at the beginning of the upgrade. 

## AWS CLI


To upgrade the major version of an Aurora MySQL DB cluster, use the AWS CLI [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) command with the following required parameters:
+ `--db-cluster-identifier`
+ `--engine-version`
+ `--allow-major-version-upgrade`
+  `--apply-immediately` or `--no-apply-immediately`

If your cluster uses any custom parameter groups, also include one or both of the following options:
+ `--db-cluster-parameter-group-name`, if the cluster uses a custom cluster parameter group
+ `--db-instance-parameter-group-name`, if any instances in the cluster use a custom DB parameter group

The following example upgrades the `sample-cluster` DB cluster to Aurora MySQL version 3.04.1. The upgrade happens immediately, instead of waiting for the next maintenance window.

**Example**  
For Linux, macOS, or Unix:  

```
aws rds modify-db-cluster \
          --db-cluster-identifier sample-cluster \
          --engine-version 8.0.mysql_aurora.3.04.1 \
          --allow-major-version-upgrade \
          --apply-immediately
```
For Windows:  

```
aws rds modify-db-cluster ^
          --db-cluster-identifier sample-cluster ^
          --engine-version 8.0.mysql_aurora.3.04.1 ^
          --allow-major-version-upgrade ^
          --apply-immediately
```
You can combine other CLI commands with `modify-db-cluster` to create an automated end-to-end process for performing and verifying upgrades. For more information and examples, see [Aurora MySQL in-place upgrade tutorial](AuroraMySQL.Upgrading.Tutorial.md).
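For example, the following sketch starts the upgrade, waits for the cluster to become available again, and then verifies the new engine version. The cluster identifier is a placeholder.

```
aws rds modify-db-cluster \
    --db-cluster-identifier sample-cluster \
    --engine-version 8.0.mysql_aurora.3.04.1 \
    --allow-major-version-upgrade \
    --apply-immediately

aws rds wait db-cluster-available --db-cluster-identifier sample-cluster

aws rds describe-db-clusters --db-cluster-identifier sample-cluster \
    --query '*[].{Status:Status,EngineVersion:EngineVersion}' --output text
```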

**Note**  
If your cluster is part of an Aurora global database, the in-place upgrade procedure is slightly different. You call the [modify-global-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-global-cluster.html) command operation instead of `modify-db-cluster`. For more information, see [In-place major upgrades for global databases](#AuroraMySQL.Upgrading.GlobalDB).

## RDS API


To upgrade the major version of an Aurora MySQL DB cluster, use the RDS API operation [ModifyDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) with the following required parameters:
+ `DBClusterIdentifier`
+ `Engine`
+ `EngineVersion`
+ `AllowMajorVersionUpgrade`
+ `ApplyImmediately` (set to `true` or `false`)

**Note**  
If your cluster is part of an Aurora global database, the in-place upgrade procedure is slightly different. You call the [ModifyGlobalCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyGlobalClusterParameterGroup.html) operation instead of `ModifyDBCluster`. For more information, see [In-place major upgrades for global databases](#AuroraMySQL.Upgrading.GlobalDB).

## How in-place upgrades affect the parameter groups for a cluster


Aurora parameter groups have different sets of configuration settings for clusters that are compatible with MySQL 5.7 or 8.0. When you perform an in-place upgrade, the upgraded cluster and all its instances must use the corresponding cluster and instance parameter groups:

Your cluster and instances might use the default 5.7-compatible parameter groups. If so, the upgraded cluster and instances start with the default 8.0-compatible parameter groups. If your cluster and instances use any custom parameter groups, make sure to create corresponding 8.0-compatible parameter groups and specify them during the upgrade process.
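For example, the following sketch creates a corresponding 8.0-compatible cluster parameter group with the AWS CLI. The group name is a placeholder, and the `lower_case_table_names` setting shown is only needed if your version 2 cluster uses a nondefault value.

```
aws rds create-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-aurora-mysql8-params \
    --db-parameter-group-family aurora-mysql8.0 \
    --description "Custom 8.0-compatible cluster parameters"

aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-aurora-mysql8-params \
    --parameters "ParameterName=lower_case_table_names,ParameterValue=1,ApplyMethod=pending-reboot"
```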

**Note**  
For most parameter settings, you can choose the custom parameter group at two points. These are when you create the cluster or associate the parameter group with the cluster later.  
However, if you use a nondefault setting for the `lower_case_table_names` parameter, you must set up the custom parameter group with this setting in advance. Then specify the parameter group when you perform the snapshot restore to create the cluster. Any change to the `lower_case_table_names` parameter has no effect after the cluster is created.  
We recommend that you use the same setting for `lower_case_table_names` when you upgrade from Aurora MySQL version 2 to version 3.  
With an Aurora global database based on Aurora MySQL, you can perform an in-place upgrade from Aurora MySQL version 2 to version 3 only if you set the `lower_case_table_names` parameter to default and reboot your global database. For more information on the methods that you can use, see [Major version upgrades](aurora-global-database-upgrade.md#aurora-global-database-upgrade.major).

## Changes to cluster properties between Aurora MySQL versions


When you upgrade from Aurora MySQL version 2 to version 3, make sure to check any applications or scripts that you use to set up or manage Aurora MySQL clusters and DB instances.

Also, change your code that manipulates parameter groups to account for the fact that the default parameter group names are different for 5.7- and 8.0-compatible clusters. The default parameter group names for Aurora MySQL version 2 and 3 clusters are `default.aurora-mysql5.7` and `default.aurora-mysql8.0`, respectively.

For example, you might have code like the following that applies to your cluster before an upgrade.

```
# Check the default parameter values for MySQL 5.7–compatible clusters.
aws rds describe-db-parameters --db-parameter-group-name default.aurora-mysql5.7 --region us-east-1
```

 After upgrading the major version of the cluster, modify that code as follows.

```
# Check the default parameter values for MySQL 8.0–compatible clusters.
aws rds describe-db-parameters --db-parameter-group-name default.aurora-mysql8.0 --region us-east-1
```

## In-place major upgrades for global databases


 For an Aurora global database, you upgrade the global database cluster. Aurora automatically upgrades all of the clusters at the same time and makes sure that they all run the same engine version. This requirement is because any changes to system tables, data file formats, and so on, are automatically replicated to all the secondary clusters. 

Follow the instructions in [How the Aurora MySQL in-place major version upgrade works](AuroraMySQL.Updates.MajorVersionUpgrade.md#AuroraMySQL.Upgrading.Sequence). When you specify what to upgrade, make sure to choose the global database cluster instead of one of the clusters it contains.

If you use the AWS Management Console, choose the item with the role **Global database**.

![\[Upgrading global database cluster\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-databases-major-upgrade-global-cluster.png)


 If you use the AWS CLI or RDS API, start the upgrade process by calling the [modify-global-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-global-cluster.html) command or [ModifyGlobalCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyGlobalCluster.html) operation. You use one of these instead of `modify-db-cluster` or `ModifyDBCluster`.

**Note**  
You can't specify a custom parameter group for the global database cluster while you're performing a major version upgrade of that Aurora global database. Create your custom parameter groups in each Region of the global cluster. Then apply them manually to the Regional clusters after the upgrade.

 To upgrade the major version of an Aurora MySQL global database cluster by using the AWS CLI, use the [modify-global-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-global-cluster.html) command with the following required parameters: 
+  `--global-cluster-identifier` 
+  `--engine aurora-mysql` 
+  `--engine-version` 
+  `--allow-major-version-upgrade` 

The following example upgrades the global database cluster to Aurora MySQL version 3.04.2.

**Example**  
For Linux, macOS, or Unix:  

```
aws rds modify-global-cluster \
          --global-cluster-identifier global_cluster_identifier \
          --engine aurora-mysql \
          --engine-version 8.0.mysql_aurora.3.04.2 \
          --allow-major-version-upgrade
```
For Windows:  

```
aws rds modify-global-cluster ^
          --global-cluster-identifier global_cluster_identifier ^
          --engine aurora-mysql ^
          --engine-version 8.0.mysql_aurora.3.04.2 ^
          --allow-major-version-upgrade
```

## In-place upgrades for DB clusters with cross-Region read replicas

You can upgrade an Aurora DB cluster that has a cross-Region read replica using the in-place upgrade procedure, but there are certain considerations:
+ You must upgrade the read replica DB cluster first. If you try to upgrade the primary cluster first, you will receive an error message such as the following:

  Unable to upgrade DB cluster test-xr-primary-cluster because the associated Aurora cross-Region replica test-xr-replica-cluster isn't patched yet. Upgrade the Aurora cross-Region replica and try again.

  This means that the primary DB cluster can't have a higher DB engine version than the replica cluster.
+ Before you upgrade the primary DB cluster, stop the write workload and disable any new connection requests to the writer DB instance of the primary cluster.
+ When you upgrade the primary cluster, choose a custom DB cluster parameter group with the `binlog_format` parameter set to a value that supports binary logging replication, such as `MIXED`.

  For more information about using binary logging with Aurora MySQL, see [Replication between Aurora and MySQL or between Aurora and another Aurora DB cluster (binary log replication)](AuroraMySQL.Replication.MySQL.md). For more information about modifying Aurora MySQL configuration parameters, see [Aurora MySQL configuration parameters](AuroraMySQL.Reference.ParameterGroups.md) and [Parameter groups for Amazon Aurora](USER_WorkingWithParamGroups.md).
+ Don't wait a long time to upgrade the primary DB cluster after you upgrade the replica cluster. We recommend not waiting longer than the next maintenance window.
+ After you upgrade the primary DB cluster, reboot its writer DB instance. The custom DB cluster parameter group that enables binlog replication doesn't take effect until the writer DB instance is rebooted.
+ Don't resume the write workload or enable connections to the writer DB instance until you confirm that cross-Region replication has restarted, and that the replica lag in the secondary AWS Region is 0.
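Before resuming the write workload, you can check the replica lag from CloudWatch. The following is a sketch using the `AuroraBinlogReplicaLag` metric; the instance name, Region, and time window are placeholders, and the `date -d` syntax is for Linux.

```
aws cloudwatch get-metric-statistics \
    --region us-west-2 \
    --namespace AWS/RDS \
    --metric-name AuroraBinlogReplicaLag \
    --dimensions Name=DBInstanceIdentifier,Value=test-xr-replica-instance \
    --start-time "$(date -u -d '10 minutes ago' +%Y-%m-%dT%H:%M:%SZ)" \
    --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    --period 60 --statistics Maximum
```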

# Aurora MySQL in-place upgrade tutorial

The following Linux examples show how you might perform the general steps of the in-place upgrade procedure using the AWS CLI.

This first example creates an Aurora DB cluster that's running a 2.x version of Aurora MySQL. The cluster includes a writer DB instance and a reader DB instance. The `wait db-instance-available` command pauses until the writer DB instance is available. That's the point when the cluster is ready to be upgraded.

```
aws rds create-db-cluster --db-cluster-identifier mynewdbcluster --engine aurora-mysql \
  --db-cluster-version 5.7.mysql_aurora.2.11.2
...
aws rds create-db-instance --db-instance-identifier mynewdbcluster-instance1 \
  --db-cluster-identifier mynewdbcluster --db-instance-class db.t4g.medium --engine aurora-mysql
...
aws rds wait db-instance-available --db-instance-identifier mynewdbcluster-instance1
```

The Aurora MySQL 3.x versions that you can upgrade the cluster to depend on the 2.x version that the cluster is currently running and on the AWS Region where the cluster is located. The first command, with `--output text`, shows the engine version that the cluster is currently running. The second command shows the full JSON response listing the valid upgrade targets. In that response, you can see details such as the `aurora-mysql` value that you use for the `engine` parameter. You can also see that upgrading to version 3.04.0 represents a major version upgrade.

```
aws rds describe-db-clusters --db-cluster-identifier mynewdbcluster \
  --query '*[].{EngineVersion:EngineVersion}' --output text
5.7.mysql_aurora.2.11.2

aws rds describe-db-engine-versions --engine aurora-mysql --engine-version 5.7.mysql_aurora.2.11.2 \
  --query '*[].[ValidUpgradeTarget]'
...
{
    "Engine": "aurora-mysql",
    "EngineVersion": "8.0.mysql_aurora.3.04.0",
    "Description": "Aurora MySQL 3.04.0 (compatible with MySQL 8.0.28)",
    "AutoUpgrade": false,
    "IsMajorVersionUpgrade": true,
    "SupportedEngineModes": [
        "provisioned"
    ],
    "SupportsParallelQuery": true,
    "SupportsGlobalDatabases": true,
    "SupportsBabelfish": false,
    "SupportsIntegrations": false
},
...
```

This example shows that if you specify a target version number that isn't a valid upgrade target for the cluster, Aurora doesn't perform the upgrade. Aurora also doesn't perform a major version upgrade unless you include the `--allow-major-version-upgrade` parameter. That way, you can't accidentally perform an upgrade that might require extensive testing and changes to your application code.

```
aws rds modify-db-cluster --db-cluster-identifier mynewdbcluster \
  --engine-version 5.7.mysql_aurora.2.09.2 --apply-immediately
An error occurred (InvalidParameterCombination) when calling the ModifyDBCluster operation: Cannot find upgrade target from 5.7.mysql_aurora.2.11.2 with requested version 5.7.mysql_aurora.2.09.2.

aws rds modify-db-cluster --db-cluster-identifier mynewdbcluster \
  --engine-version 8.0.mysql_aurora.3.04.0 --region us-east-1 --apply-immediately
An error occurred (InvalidParameterCombination) when calling the ModifyDBCluster operation: The AllowMajorVersionUpgrade flag must be present when upgrading to a new major version.

aws rds modify-db-cluster --db-cluster-identifier mynewdbcluster \
  --engine-version 8.0.mysql_aurora.3.04.0 --apply-immediately --allow-major-version-upgrade
{
  "DBClusterIdentifier": "mynewdbcluster",
  "Status": "available",
  "Engine": "aurora-mysql",
  "EngineVersion": "5.7.mysql_aurora.2.11.2"
}
```

 It takes a few moments for the status of the cluster and associated DB instances to change to `upgrading`. The version numbers for the cluster and DB instances only change when the upgrade is finished. Again, you can use the `wait db-instance-available` command for the writer DB instance to wait until the upgrade is finished before proceeding. 

```
aws rds describe-db-clusters --db-cluster-identifier mynewdbcluster \
  --query '*[].[Status,EngineVersion]' --output text
upgrading 5.7.mysql_aurora.2.11.2

aws rds describe-db-instances --db-instance-identifier mynewdbcluster-instance1 \
  --query '*[].{DBInstanceIdentifier:DBInstanceIdentifier,DBInstanceStatus:DBInstanceStatus} | [0]'
{
    "DBInstanceIdentifier": "mynewdbcluster-instance1",
    "DBInstanceStatus": "upgrading"
}

aws rds wait db-instance-available --db-instance-identifier mynewdbcluster-instance1
```

 At this point, the version number for the cluster matches the one that was specified for the upgrade. 

```
aws rds describe-db-clusters --db-cluster-identifier mynewdbcluster \
  --query '*[].[EngineVersion]' --output text

8.0.mysql_aurora.3.04.0
```

The preceding example performed an immediate upgrade by specifying the `--apply-immediately` parameter. To let the upgrade happen at a convenient time when the cluster isn't expected to be busy, you can specify the `--no-apply-immediately` parameter instead. Doing so makes the upgrade start during the next maintenance window for the cluster. The maintenance window only defines the period during which maintenance operations can start; a long-running operation can continue past the end of the window. Thus, you don't need to define a larger maintenance window even if you expect that the upgrade might take a long time.

The following example upgrades a cluster that's initially running Aurora MySQL version 2.11.2. In the `describe-db-engine-versions` output, the `False` and `True` values represent the `IsMajorVersionUpgrade` property. From version 2.11.2, you can upgrade to certain other 2.x versions. Those upgrades aren't considered major version upgrades and so don't require the in-place upgrade procedure. In-place upgrade is only available for upgrades to the 3.x versions that are shown in the list.

```
aws rds describe-db-clusters --db-cluster-identifier mynewdbcluster \
  --query '*[].{EngineVersion:EngineVersion}' --output text
5.7.mysql_aurora.2.11.2

aws rds describe-db-engine-versions --engine aurora-mysql --engine-version 5.7.mysql_aurora.2.11.2 \
  --query '*[].[ValidUpgradeTarget]|[0][0]|[*].[EngineVersion,IsMajorVersionUpgrade]' --output text

5.7.mysql_aurora.2.11.3 False
5.7.mysql_aurora.2.11.4 False
5.7.mysql_aurora.2.11.5 False
5.7.mysql_aurora.2.11.6 False
5.7.mysql_aurora.2.12.0 False
5.7.mysql_aurora.2.12.1 False
5.7.mysql_aurora.2.12.2 False
5.7.mysql_aurora.2.12.3 False
8.0.mysql_aurora.3.04.0 True
8.0.mysql_aurora.3.04.1 True
8.0.mysql_aurora.3.04.2 True
8.0.mysql_aurora.3.04.3 True
8.0.mysql_aurora.3.05.2 True
8.0.mysql_aurora.3.06.0 True
8.0.mysql_aurora.3.06.1 True
8.0.mysql_aurora.3.07.1 True

aws rds modify-db-cluster --db-cluster-identifier mynewdbcluster \
  --engine-version 8.0.mysql_aurora.3.04.0 --no-apply-immediately --allow-major-version-upgrade
...
```

When a cluster is created without a specified maintenance window, Aurora picks a random day of the week. In this case, the `modify-db-cluster` command is submitted on a Monday. Thus, we change the maintenance window to be Tuesday morning. All times are represented in the UTC time zone. The `tue:10:00-tue:10:30` window corresponds to 2:00-2:30 AM Pacific time. The change in the maintenance window takes effect immediately.

```
aws rds describe-db-clusters --db-cluster-identifier mynewdbcluster --query '*[].[PreferredMaintenanceWindow]'
[
    [
        "sat:08:20-sat:08:50"
    ]
]

aws rds modify-db-cluster --db-cluster-identifier mynewdbcluster --preferred-maintenance-window tue:10:00-tue:10:30
aws rds describe-db-clusters --db-cluster-identifier mynewdbcluster --query '*[].[PreferredMaintenanceWindow]'
[
    [
        "tue:10:00-tue:10:30"
    ]
]
```
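
The UTC-to-Pacific conversion mentioned above can be double-checked locally. This sketch uses GNU `date` (available on most Linux systems); the sample date is a Tuesday in November, when Pacific time is UTC-8:

```shell
# What local time does tue:10:00 UTC correspond to in US Pacific time?
# (2022-11-22 is a Tuesday; Pacific Standard Time is UTC-8.)
TZ=America/Los_Angeles date -d '2022-11-22 10:00 UTC' '+%a %H:%M'
```

This prints `Tue 02:00`, matching the 2:00 AM Pacific start time described above. During daylight saving time, the offset is UTC-7 instead.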

The following example shows how to get a report of the events generated by the upgrade. The `--duration` argument represents the number of minutes of event history to retrieve. This argument is needed because, by default, `describe-events` returns only events from the last hour.

```
aws rds describe-events --source-type db-cluster --source-identifier mynewdbcluster --duration 20160
{
    "Events": [
        {
            "SourceIdentifier": "mynewdbcluster",
            "SourceType": "db-cluster",
            "Message": "DB cluster created",
            "EventCategories": [
                "creation"
            ],
            "Date": "2022-11-17T01:24:11.093000+00:00",
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:mynewdbcluster"
        },
        {
            "SourceIdentifier": "mynewdbcluster",
            "SourceType": "db-cluster",
            "Message": "Upgrade in progress: Performing online pre-upgrade checks.",
            "EventCategories": [
                "maintenance"
            ],
            "Date": "2022-11-18T22:57:08.450000+00:00",
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:mynewdbcluster"
        },
        {
            "SourceIdentifier": "mynewdbcluster",
            "SourceType": "db-cluster",
            "Message": "Upgrade in progress: Performing offline pre-upgrade checks.",
            "EventCategories": [
                "maintenance"
            ],
            "Date": "2022-11-18T22:57:59.519000+00:00",
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:mynewdbcluster"
        },
        {
            "SourceIdentifier": "mynewdbcluster",
            "SourceType": "db-cluster",
            "Message": "Upgrade in progress: Creating pre-upgrade snapshot [preupgrade-mynewdbcluster-5-7-mysql-aurora-2-10-2-to-8-0-mysql-aurora-3-02-0-2022-11-18-22-55].",
            "EventCategories": [
                "maintenance"
            ],
            "Date": "2022-11-18T23:00:22.318000+00:00",
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:mynewdbcluster"
        },
        {
            "SourceIdentifier": "mynewdbcluster",
            "SourceType": "db-cluster",
            "Message": "Upgrade in progress: Cloning volume.",
            "EventCategories": [
                "maintenance"
            ],
            "Date": "2022-11-18T23:01:45.428000+00:00",
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:mynewdbcluster"
        },
        {
            "SourceIdentifier": "mynewdbcluster",
            "SourceType": "db-cluster",
            "Message": "Upgrade in progress: Purging undo records for old row versions. Records remaining: 164",
            "EventCategories": [
                "maintenance"
            ],
            "Date": "2022-11-18T23:02:25.141000+00:00",
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:mynewdbcluster"
        },
        {
            "SourceIdentifier": "mynewdbcluster",
            "SourceType": "db-cluster",
            "Message": "Upgrade in progress: Purging undo records for old row versions. Records remaining: 164",
            "EventCategories": [
                "maintenance"
            ],
            "Date": "2022-11-18T23:06:23.036000+00:00",
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:mynewdbcluster"
        },
        {
            "SourceIdentifier": "mynewdbcluster",
            "SourceType": "db-cluster",
            "Message": "Upgrade in progress: Upgrading database objects.",
            "EventCategories": [
                "maintenance"
            ],
            "Date": "2022-11-18T23:06:48.208000+00:00",
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:mynewdbcluster"
        },
        {
            "SourceIdentifier": "mynewdbcluster",
            "SourceType": "db-cluster",
            "Message": "Database cluster major version has been upgraded",
            "EventCategories": [
                "maintenance"
            ],
            "Date": "2022-11-18T23:10:28.999000+00:00",
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:mynewdbcluster"
        }
    ]
}
```
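
The `--duration` value of 20160 used above isn't arbitrary; it corresponds to 14 days expressed in minutes. A quick shell check:

```shell
# 14 days * 24 hours * 60 minutes = the --duration value for a two-week lookback.
echo $(( 14 * 24 * 60 ))
```

This prints `20160`. Adjust the multiplication for whatever lookback period you need.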

# Finding the reasons for Aurora MySQL major version upgrade failures

In the [tutorial](AuroraMySQL.Upgrading.Tutorial.md), the upgrade from Aurora MySQL version 2 to version 3 succeeded. But if the upgrade had failed, you would want to know why.

You can start by using the `describe-events` AWS CLI command to look at the DB cluster events. This example shows the events for `mydbcluster` for the last 10 hours.

```
aws rds describe-events \
    --source-type db-cluster \
    --source-identifier mydbcluster \
    --duration 600
```

In this case, we had an upgrade precheck failure.

```
{
    "Events": [
        {
            "SourceIdentifier": "mydbcluster",
            "SourceType": "db-cluster",
            "Message": "Database cluster engine version upgrade started.",
            "EventCategories": [
                "maintenance"
            ],
            "Date": "2024-04-11T13:23:22.846000+00:00",
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:mydbcluster"
        },
        {
            "SourceIdentifier": "mydbcluster",
            "SourceType": "db-cluster",
            "Message": "Database cluster is in a state that cannot be upgraded: Upgrade prechecks failed. For more details, see the
             upgrade-prechecks.log file. For more information on troubleshooting the cause of the upgrade failure, see
             https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Upgrading.Troubleshooting.html",
            "EventCategories": [
                "maintenance"
            ],
            "Date": "2024-04-11T13:23:24.373000+00:00",
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:mydbcluster"
        }
    ]
}
```

To diagnose the exact cause of the problem, examine the database logs for the writer DB instance. When an upgrade to Aurora MySQL version 3 fails, the writer instance contains a log file with the name `upgrade-prechecks.log`. This example shows how to detect the presence of that log and then download it to a local file for examination.

```
aws rds describe-db-log-files --db-instance-identifier mydbcluster-instance \
    --query '*[].[LogFileName]' --output text

error/mysql-error-running.log
error/mysql-error-running.log.2024-04-11.20
error/mysql-error-running.log.2024-04-11.21
error/mysql-error.log
external/mysql-external.log
upgrade-prechecks.log

aws rds download-db-log-file-portion --db-instance-identifier mydbcluster-instance \
    --log-file-name upgrade-prechecks.log \
    --starting-token 0 \
    --output text >upgrade_prechecks.log
```

The `upgrade-prechecks.log` file is in JSON format. We download it using the `--output text` option to avoid encoding JSON output within another JSON wrapper. When the file is produced, it always includes certain informational and warning messages; it includes error messages only if the upgrade fails. If the upgrade succeeds, the log file isn't produced at all.

To summarize all of the errors along with the associated object and description fields, you can run the command `grep -A 2 '"level": "Error"'` on the contents of the `upgrade-prechecks.log` file. Doing so displays each error line plus the two lines after it, which contain the name of the corresponding database object and guidance for correcting the problem.

```
$ cat upgrade-prechecks.log | grep -A 2 '"level": "Error"'

"level": "Error",
"dbObject": "problematic_upgrade.dangling_fulltext_index",
"description": "Table `problematic_upgrade.dangling_fulltext_index` contains dangling FULLTEXT index. Kindly recreate the table before upgrade."
```

In this example, you can run the following SQL command on the offending table to try to fix the issue, or you can re-create the table without the dangling index.

```
OPTIMIZE TABLE problematic_upgrade.dangling_fulltext_index;
```

Then retry the upgrade.
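
When a precheck log contains many entries, it can help to count the errors before working through them. The following self-contained sketch builds a small sample file in the shape of the log excerpt above (the content is hypothetical) and counts the `Error`-level entries:

```shell
# Build a small sample in the shape of upgrade-prechecks.log entries.
cat > sample-prechecks.log <<'EOF'
"level": "Error",
"dbObject": "problematic_upgrade.dangling_fulltext_index",
"description": "Table contains dangling FULLTEXT index. Kindly recreate the table before upgrade."
"level": "Warning",
"dbObject": "mydb.mytable",
"description": "An informational warning."
EOF

# Count error-level entries, then list each with its object and description.
grep -c '"level": "Error"' sample-prechecks.log
grep -A 2 '"level": "Error"' sample-prechecks.log
```

Against a real downloaded `upgrade-prechecks.log`, substitute that file name; the first command prints the error count, the second the details.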

# Troubleshooting for Aurora MySQL in-place upgrade


Use the following tips to help troubleshoot problems with Aurora MySQL in-place upgrades. These tips don't apply to Aurora Serverless DB clusters.


| Reason for in-place upgrade being canceled or slow | Effect | Solution to allow in-place upgrade to complete within maintenance window | 
| --- | --- | --- | 
| Associated Aurora cross-Region replica isn't patched yet | Aurora cancels the upgrade. | Upgrade the Aurora cross-Region replica and try again. | 
| Cluster has XA transactions in the prepared state | Aurora cancels the upgrade. | Commit or roll back all prepared XA transactions. | 
| Cluster is processing any data definition language (DDL) statements | Aurora cancels the upgrade. | Consider waiting and performing the upgrade after all DDL statements are finished. | 
| Cluster has uncommitted changes for many rows | Upgrade might take a long time. |  The upgrade process rolls back the uncommitted changes. The indicator for this condition is the value of `TRX_ROWS_MODIFIED` in the `INFORMATION_SCHEMA.INNODB_TRX` table. Consider performing the upgrade only after all large transactions are committed or rolled back.  | 
| Cluster has high number of undo records | Upgrade might take a long time. |  Even if the uncommitted transactions don't affect a large number of rows, they might involve a large volume of data. For example, you might be inserting large BLOBs. Aurora doesn't automatically detect or generate an event for this kind of transaction activity. The indicator for this condition is the history list length (HLL). The upgrade process rolls back the uncommitted changes. You can check the HLL in the output from the `SHOW ENGINE INNODB STATUS` SQL command, or directly by using the following SQL query: <pre>SELECT count FROM information_schema.innodb_metrics WHERE name = 'trx_rseg_history_len';</pre> You can also monitor the `RollbackSegmentHistoryListLength` metric in Amazon CloudWatch. Consider performing the upgrade only after the HLL is smaller.  | 
| Cluster is in the process of committing a large binary log transaction | Upgrade might take a long time. |  The upgrade process waits until the binary log changes are applied. More transactions or DDL statements could start during this period, further slowing down the upgrade process. Schedule the upgrade process when the cluster isn't busy with generating binary log replication changes. Aurora doesn't automatically detect or generate an event for this condition.  | 
| Schema inconsistencies resulting from file removal or corruption | Aurora cancels the upgrade. |  Change the default storage engine for temporary tables from MyISAM to InnoDB. Perform the following steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Upgrading.Troubleshooting.html)  | 
| Master user was deleted | Aurora cancels the upgrade. |  Don't delete the master user. However, if you do delete the master user, restore it using the following SQL statements: <pre>CREATE USER 'master_username'@'%' IDENTIFIED BY 'master_user_password' REQUIRE NONE PASSWORD EXPIRE DEFAULT ACCOUNT UNLOCK;<br /><br />GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, <br />LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, <br />TRIGGER, LOAD FROM S3, SELECT INTO S3, INVOKE LAMBDA, INVOKE SAGEMAKER, INVOKE COMPREHEND ON *.* TO 'master_username'@'%' WITH GRANT OPTION;</pre>  | 

For more details on troubleshooting issues that cause upgrade prechecks to fail, see the following blogs:
+ [Amazon Aurora MySQL version 2 (with MySQL 5.7 compatibility) to version 3 (with MySQL 8.0 compatibility) upgrade checklist, Part 1](https://aws.amazon.com/blogs/database/amazon-aurora-mysql-version-2-with-mysql-5-7-compatibility-to-version-3-with-mysql-8-0-compatibility-upgrade-checklist-part-1/)
+ [Amazon Aurora MySQL version 2 (with MySQL 5.7 compatibility) to version 3 (with MySQL 8.0 compatibility) upgrade checklist, Part 2](https://aws.amazon.com/blogs/database/amazon-aurora-mysql-version-2-with-mysql-5-7-compatibility-to-version-3-with-mysql-8-0-compatibility-upgrade-checklist-part-2/)

 You can use the following steps to perform your own checks for some of the conditions in the preceding table. That way, you can schedule the upgrade at a time when you know the database is in a state where the upgrade can complete successfully and quickly. 
+  You can check for open XA transactions by executing the `XA RECOVER` statement. You can then commit or roll back the XA transactions before starting the upgrade. 
+  You can check for DDL statements by executing a `SHOW PROCESSLIST` statement and looking for `CREATE`, `DROP`, `ALTER`, `RENAME`, and `TRUNCATE` statements in the output. Allow all DDL statements to finish before starting the upgrade. 
+  You can check the total number of uncommitted rows by querying the `INFORMATION_SCHEMA.INNODB_TRX` table. The table contains one row for each transaction. The `TRX_ROWS_MODIFIED` column contains the number of rows modified or inserted by the transaction. 
+  You can check the length of the InnoDB history list by executing the `SHOW ENGINE INNODB STATUS` SQL statement and looking for the `History list length` in the output. You can also check the value directly by running the following query: 

  ```
  SELECT count FROM information_schema.innodb_metrics WHERE name = 'trx_rseg_history_len';
  ```

   The length of the history list corresponds to the amount of undo information stored by the database to implement multi-version concurrency control (MVCC). 
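
You might gather these checks into a single SQL script to run through the mysql client against the writer endpoint before scheduling the upgrade. This is a sketch; the file name and the connection details in the comment are placeholders:

```shell
# Write the pre-upgrade checks to a file. Run it later with something like:
#   mysql -h <cluster-writer-endpoint> -u <user> -p < preupgrade-checks.sql
cat > preupgrade-checks.sql <<'EOF'
-- Open XA transactions (commit or roll back any that appear).
XA RECOVER;
-- In-flight DDL statements (look for CREATE, DROP, ALTER, RENAME, TRUNCATE).
SHOW PROCESSLIST;
-- Rows modified by uncommitted transactions, largest first.
SELECT trx_id, trx_rows_modified FROM information_schema.innodb_trx
ORDER BY trx_rows_modified DESC;
-- InnoDB history list length.
SELECT count FROM information_schema.innodb_metrics
WHERE name = 'trx_rseg_history_len';
EOF
cat preupgrade-checks.sql
```

Schedule the upgrade when the XA and DDL checks come back empty and the uncommitted-row counts and history list length are small.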

# Post-upgrade cleanup for Aurora MySQL version 3


After you finish upgrading any Aurora MySQL version 2 clusters to Aurora MySQL version 3, you can perform these other cleanup actions:
+ Create new MySQL 8.0–compatible versions of any custom parameter groups. Apply any necessary custom parameter values to the new parameter groups.
+ Update any CloudWatch alarms, setup scripts, and so on to use the new names for any metrics whose names were affected by inclusive language changes. For a list of such metrics, see [Inclusive language changes for Aurora MySQL version 3](AuroraMySQL.Compare-v2-v3.md#AuroraMySQL.8.0-inclusive-language).
+ Update any CloudFormation templates to use the new names for any configuration parameters whose names were affected by inclusive language changes. For a list of such parameters, see [Inclusive language changes for Aurora MySQL version 3](AuroraMySQL.Compare-v2-v3.md#AuroraMySQL.8.0-inclusive-language).

## Spatial indexes


After upgrading to Aurora MySQL version 3, check if you need to drop or recreate objects and indexes related to spatial indexes. Before MySQL 8.0, Aurora could optimize spatial queries using indexes that didn't contain a spatial reference system identifier (SRID). Aurora MySQL version 3 only uses spatial indexes containing SRIDs. During an upgrade, Aurora automatically drops any spatial indexes without SRIDs and prints warning messages in the database log. If you observe such warning messages, create new spatial indexes with SRIDs after the upgrade. For more information about changes to spatial functions and data types in MySQL 8.0, see [Changes in MySQL 8.0](https://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html) in the *MySQL Reference Manual*.
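
As a sketch with hypothetical table and column names, re-creating a spatial index on an SRID-qualified column might look like the following (SRID 4326 is WGS 84). The SQL is written to a file here so it can be reviewed before being run through the mysql client:

```shell
cat > recreate-spatial-index.sql <<'EOF'
-- Hypothetical table my_places with a POINT column named location.
-- First assign an SRID to the column, then re-create the spatial index.
ALTER TABLE my_places MODIFY COLUMN location POINT NOT NULL SRID 4326;
ALTER TABLE my_places ADD SPATIAL INDEX idx_location (location);
EOF
cat recreate-spatial-index.sql
```

Note that assigning an SRID to a column fails if existing values carry a different SRID, so you might need to update the data first.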

# Database engine updates and fixes for Amazon Aurora MySQL


You can find the following information in the *Release notes for Amazon Aurora MySQL-Compatible Edition*:
+ [Database engine updates for Amazon Aurora MySQL version 3](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraMySQLReleaseNotes/AuroraMySQL.Updates.30Updates.html)
+ [Database engine updates for Amazon Aurora MySQL version 2](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraMySQLReleaseNotes/AuroraMySQL.Updates.20Updates.html)
+ [Database engine updates for Amazon Aurora MySQL version 1 (Deprecated)](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraMySQLReleaseNotes/AuroraMySQL.Updates.11Updates.html)
+ [MySQL bugs fixed by Aurora MySQL database engine updates](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraMySQLReleaseNotes/AuroraMySQL.Updates.MySQLBugs.html)
+ [Security vulnerabilities fixed in Amazon Aurora MySQL](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraMySQLReleaseNotes/AuroraMySQL.CVE_list.html)