

# Managing an Amazon Aurora DB cluster

This section shows how to manage and maintain your Aurora DB cluster. Aurora organizes database servers into clusters connected in a replication topology, so managing Aurora often means deploying changes to multiple servers and making sure that all Aurora Replicas keep up with the source server. Because Aurora transparently scales the underlying storage as your data grows, managing Aurora requires relatively little disk storage administration. Likewise, because Aurora automatically performs continuous backups, an Aurora cluster doesn't require extensive planning or downtime for backups.

**Topics**
+ [Stopping and starting an Amazon Aurora DB cluster](aurora-cluster-stop-start.md)
+ [Automatically connecting an EC2 instance and an Aurora DB cluster](ec2-rds-connect.md)
+ [Automatically connecting a Lambda function and an Aurora DB cluster](lambda-rds-connect.md)
+ [Modifying an Amazon Aurora DB cluster](Aurora.Modifying.md)
+ [Adding Aurora Replicas to a DB cluster](aurora-replicas-adding.md)
+ [Managing performance and scaling for Aurora DB clusters](Aurora.Managing.Performance.md)
+ [Cloning a volume for an Amazon Aurora DB cluster](Aurora.Managing.Clone.md)
+ [Integrating Aurora with other AWS services](Aurora.Integrating.md)
+ [Maintaining an Amazon Aurora DB cluster](USER_UpgradeDBInstance.Maintenance.md)
+ [Rebooting an Amazon Aurora DB cluster or Amazon Aurora DB instance](USER_RebootCluster.md)
+ [Failing over an Amazon Aurora DB cluster](aurora-failover.md)
+ [Deleting Aurora DB clusters and DB instances](USER_DeleteCluster.md)
+ [Tagging Amazon Aurora and Amazon RDS resources](USER_Tagging.md)
+ [Amazon Resource Names (ARNs) in Amazon RDS](USER_Tagging.ARN.md)
+ [Amazon Aurora updates](Aurora.Updates.md)

# Stopping and starting an Amazon Aurora DB cluster

Stopping and starting Aurora DB clusters helps you manage costs for development and test environments. You can temporarily stop all the DB instances in your cluster, instead of setting up and tearing down all the DB instances each time that you use the cluster. 

**Topics**
+ [Overview of stopping and starting an Aurora DB cluster](#aurora-cluster-start-stop-overview)
+ [Limitations for stopping and starting Aurora DB clusters](#aurora-cluster-stop-limitations)
+ [Stopping an Aurora DB cluster](#aurora-cluster-stop)
+ [Possible operations while an Aurora DB cluster is stopped](#aurora-cluster-stopped)
+ [Starting an Aurora DB cluster](#aurora-cluster-start)

## Overview of stopping and starting an Aurora DB cluster

During periods where you don't need an Aurora DB cluster, you can stop all instances in that cluster at once. You can start the cluster again anytime you need to use it. Starting and stopping simplifies the setup and teardown processes for clusters used for development, testing, or similar activities that don't require continuous availability. You can perform all the AWS Management Console procedures involved with only a single action, regardless of how many instances are in the cluster.

While your DB cluster is stopped, you're charged only for cluster storage, manual snapshots, and automated backup storage within your specified retention window. You aren't charged for any DB instance hours.

**Important**  
You can stop a DB cluster for up to seven days. If you don't manually start your DB cluster after seven days, your DB cluster is automatically started so that it doesn't fall behind any required maintenance updates.

To minimize charges for a lightly loaded Aurora cluster, you can stop the cluster instead of deleting all of its Aurora Replicas. For clusters with more than one or two instances, frequently deleting and recreating the DB instances is practical only through the AWS CLI or Amazon RDS API. Such a sequence of operations can also be difficult to perform in the right order, for example deleting all Aurora Replicas before the primary instance to avoid activating the failover mechanism.

Don't use stopping and starting if you need to keep your DB cluster running but it has more capacity than you need. If your cluster is too costly or not very busy, delete one or more DB instances or change all your DB instances to a smaller instance class. You can't stop an individual Aurora DB instance.

The time to stop your DB cluster varies depending on factors such as DB instance classes, network state, DB engine type, and database state. The process can take several minutes. The Amazon RDS service performs the following actions:
+ Shuts down the database engine processes.
+ Shuts down the RDS platform processes.
+ Terminates the underlying Amazon EC2 instances.

The time to restart your DB cluster varies depending on factors such as database size, DB instance classes, network state, DB engine type, and the database state when the cluster was shut down. The startup process can take minutes to hours, but usually takes several minutes. We recommend that you consider the variability in startup time when creating your availability plan.

To start the DB cluster, the service performs actions such as the following:
+ Provisions the underlying Amazon EC2 instances.
+ Starts the RDS platform processes.
+ Starts the database engine processes.
+ Recovers the DB instances (recovery occurs even after a normal shutdown).
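Because the startup sequence above takes minutes to complete, a restarted cluster isn't usable until its status returns to `available`. The following is a minimal Python sketch of a status check over a `describe-db-clusters`-style response; the `cluster_status` helper and the trimmed sample response are illustrative, not part of the RDS API:

```python
def cluster_status(response, cluster_id):
    """Return the Status field for the named cluster from a
    describe-db-clusters-style response, or None if not found."""
    for cluster in response.get("DBClusters", []):
        if cluster.get("DBClusterIdentifier") == cluster_id:
            return cluster.get("Status")
    return None

# Illustrative response, trimmed to the fields used here.
sample = {"DBClusters": [{"DBClusterIdentifier": "mydbcluster",
                          "Status": "starting"}]}

print(cluster_status(sample, "mydbcluster"))  # starting
```

In practice, you would call `aws rds describe-db-clusters` (or its SDK equivalent) in a loop with a delay, and proceed only once the status reaches `available`.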

## Limitations for stopping and starting Aurora DB clusters

Some Aurora clusters can't be stopped and started:
+ You can only stop and start a cluster that's part of an [Aurora global database](aurora-global-database.md) if it's the only cluster in the global database.
+ You can't stop and start a cluster that has a cross-Region read replica.
+ You can't stop and start a cluster that is part of a [blue/green deployment](blue-green-deployments.md).
+ You can't stop and start an [Aurora Serverless v1 cluster](aurora-serverless.md). With [Aurora Serverless v2](aurora-serverless-v2.md), you can stop and start the cluster.

## Stopping an Aurora DB cluster

The stop and start cycle always begins with a running Aurora DB cluster: you stop the cluster, and later start it again when you need to use it or perform administration. While your cluster is stopped, you are charged for cluster storage, manual snapshots, and automated backup storage within your specified retention window, but not for DB instance hours.

The stop operation stops the Aurora Replica instances first, then the primary instance, to avoid activating the failover mechanism.

### Console


**To stop an Aurora cluster**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**, and then choose a cluster. You can perform the stop operation from this page, or navigate to the details page for the DB cluster that you want to stop.

1. For **Actions**, choose **Stop temporarily**.

1. In the **Stop DB cluster temporarily** window, select the acknowledgement that the DB cluster will restart automatically after 7 days.

1. Choose **Stop temporarily** to stop the DB cluster, or choose **Cancel** to cancel the operation.

### AWS CLI


To stop a DB cluster by using the AWS CLI, call the [stop-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/stop-db-cluster.html) command with the following parameter: 
+ `--db-cluster-identifier` – the name of the Aurora cluster. 

**Example**  

```
aws rds stop-db-cluster --db-cluster-identifier mydbcluster
```

### RDS API


To stop a DB cluster by using the Amazon RDS API, call the [StopDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_StopDBCluster.html) operation with the following parameter: 
+ `DBClusterIdentifier` – the name of the Aurora cluster. 

## Possible operations while an Aurora DB cluster is stopped

 While an Aurora cluster is stopped, you can do a point-in-time restore to any point within your specified automated backup retention window. For details about doing a point-in-time restore, see [Restoring data](Aurora.Managing.Backups.md#Aurora.Managing.Backups.Restore). 

 You can't modify the configuration of an Aurora DB cluster, or any of its DB instances, while the cluster is stopped. You also can't add or remove DB instances from the cluster, or delete the cluster if it still has any associated DB instances. You must start the cluster before performing any such administrative actions. 

Stopping a DB cluster removes pending actions, except for pending actions for the DB cluster parameter group or for the DB parameter groups of the cluster's DB instances.

 Aurora applies any scheduled maintenance to your stopped cluster after it's started again. Remember that after seven days, Aurora automatically starts any stopped clusters so that they don't fall too far behind in their maintenance status. 

 Aurora also doesn't perform any automated backups, because the underlying data can't change while the cluster is stopped. Aurora doesn't extend the backup retention period while the cluster is stopped. 

## Starting an Aurora DB cluster

You always start an Aurora DB cluster beginning with an Aurora cluster that is already in the stopped state. When you start the cluster, all its DB instances become available again. The cluster keeps its configuration settings such as endpoints, parameter groups, and VPC security groups.

Starting your DB cluster usually takes several minutes.

### Console


**To start an Aurora cluster**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1.  In the navigation pane, choose **Databases**, and then choose a cluster. You can perform the start operation from this page, or navigate to the details page for the DB cluster that you want to start. 

1.  For **Actions**, choose **Start**. 

### AWS CLI


To start a DB cluster by using the AWS CLI, call the [start-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/start-db-cluster.html) command with the following parameter: 
+ `--db-cluster-identifier` – the name of the Aurora cluster. This name is either a specific cluster identifier you chose when creating the cluster, or the DB instance identifier you chose with `-cluster` appended to the end. 

**Example**  

```
aws rds start-db-cluster --db-cluster-identifier mydbcluster
```

### RDS API


To start an Aurora DB cluster by using the Amazon RDS API, call the [StartDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_StartDBCluster.html) operation with the following parameter: 
+ `DBClusterIdentifier` – the name of the Aurora cluster. This name is either a specific cluster identifier you chose when creating the cluster, or the DB instance identifier you chose with `-cluster` appended to the end. 

# Automatically connecting an EC2 instance and an Aurora DB cluster

You can use the Amazon RDS console to simplify setting up a connection between an Amazon Elastic Compute Cloud (Amazon EC2) instance and an Aurora DB cluster. Often, your DB cluster is in a private subnet and your EC2 instance is in a public subnet within a VPC. You can use a SQL client on your EC2 instance to connect to your DB cluster. The EC2 instance can also run web servers or applications that access your private DB cluster. 

![\[Automatically connect an Aurora DB cluster with an EC2 instance.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/auto-connect-aurora-ec2.png)


If you want to connect to an EC2 instance that isn't in the same VPC as the Aurora DB cluster, see the scenarios in [Scenarios for accessing a DB cluster in a VPC](USER_VPC.Scenarios.md).

**Topics**
+ [Overview of automatic connectivity with an EC2 instance](#ec2-rds-connect-overview)
+ [Automatically connecting an EC2 instance and an Aurora DB cluster](#ec2-rds-connect-connecting)
+ [Viewing connected compute resources](#ec2-rds-connect-viewing)
+ [Connecting to a DB instance that is running a specific DB engine](#ec2-rds-Connect-DBEngine)

## Overview of automatic connectivity with an EC2 instance

When you set up a connection between an EC2 instance and an Aurora DB cluster, Amazon RDS automatically configures the VPC security group for your EC2 instance and for your DB cluster.

The following are requirements for connecting an EC2 instance with an Aurora DB cluster:
+ The EC2 instance must exist in the same VPC as the DB cluster.

  If no EC2 instances exist in the same VPC, then the console provides a link to create one.
+ Currently, the DB cluster can't be an Aurora Serverless DB cluster or part of an Aurora global database.
+ The user who sets up connectivity must have permissions to perform the following Amazon EC2 operations:
  + `ec2:AuthorizeSecurityGroupEgress` 
  + `ec2:AuthorizeSecurityGroupIngress` 
  + `ec2:CreateSecurityGroup` 
  + `ec2:DescribeInstances` 
  + `ec2:DescribeNetworkInterfaces` 
  + `ec2:DescribeSecurityGroups` 
  + `ec2:ModifyNetworkInterfaceAttribute` 
  + `ec2:RevokeSecurityGroupEgress` 

If the DB instance and EC2 instance are in different Availability Zones, your account may incur cross-Availability Zone costs.

When you set up a connection to an EC2 instance, Amazon RDS acts according to the current configuration of the security groups associated with the DB cluster and EC2 instance, as described in the following table.



| Current RDS security group configuration | Current EC2 security group configuration | RDS action | 
| --- | --- | --- | 
|  There are one or more security groups associated with the DB cluster with a name that matches the pattern `rds-ec2-n` (where `n` is a number). A security group that matches the pattern hasn't been modified. This security group has only one inbound rule with the VPC security group of the EC2 instance as the source.  |  There are one or more security groups associated with the EC2 instance with a name that matches the pattern `ec2-rds-n` (where `n` is a number). A security group that matches the pattern hasn't been modified. This security group has only one outbound rule with the VPC security group of the DB cluster as the source.  |  RDS takes no action. A connection was already configured automatically between the EC2 instance and DB cluster. Because a connection already exists between the EC2 instance and the RDS database, the security groups aren't modified.  | 
|  Either of the following conditions apply: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/ec2-rds-connect.html)  |  Either of the following conditions apply: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/ec2-rds-connect.html)  |  [RDS action: create new security groups](#rds-action-create-new-security-groups)  | 
|  There are one or more security groups associated with the DB cluster with a name that matches the pattern `rds-ec2-n`. A security group that matches the pattern hasn't been modified. This security group has only one inbound rule with the VPC security group of the EC2 instance as the source.  |  There are one or more security groups associated with the EC2 instance with a name that matches the pattern `ec2-rds-n`. However, Amazon RDS can't use any of these security groups for the connection with the DB cluster. Amazon RDS can't use a security group that doesn't have one outbound rule with the VPC security group of the DB cluster as the source. Amazon RDS also can't use a security group that has been modified.  |  [RDS action: create new security groups](#rds-action-create-new-security-groups)  | 
|  There are one or more security groups associated with the DB cluster with a name that matches the pattern `rds-ec2-n`. A security group that matches the pattern hasn't been modified. This security group has only one inbound rule with the VPC security group of the EC2 instance as the source.  |  A valid EC2 security group for the connection exists, but it is not associated with the EC2 instance. This security group has a name that matches the pattern `ec2-rds-n`. It hasn't been modified. It has only one outbound rule with the VPC security group of the DB cluster as the source.  |  [RDS action: associate EC2 security group](#rds-action-associate-ec2-security-group)  | 
|  Either of the following conditions apply: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/ec2-rds-connect.html)  |  There are one or more security groups associated with the EC2 instance with a name that matches the pattern `ec2-rds-n`. A security group that matches the pattern hasn't been modified. This security group has only one outbound rule with the VPC security group of the DB cluster as the source.  |  [RDS action: create new security groups](#rds-action-create-new-security-groups)  | 

**RDS action: create new security groups**  
Amazon RDS takes the following actions:
+ Creates a new security group that matches the pattern `rds-ec2-n`. This security group has an inbound rule with the VPC security group of the EC2 instance as the source. This security group is associated with the DB cluster and allows the EC2 instance to access the  DB cluster.
+ Creates a new security group that matches the pattern `ec2-rds-n`. This security group has an outbound rule with the VPC security group of the DB cluster as the target. This security group is associated with the EC2 instance and allows the EC2 instance to send traffic to the DB cluster.

**RDS action: associate EC2 security group**  
Amazon RDS associates the valid, existing EC2 security group with the EC2 instance. This security group allows the EC2 instance to send traffic to the DB cluster.
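When auditing which security groups were created by this feature, the naming convention described above can be checked programmatically. A short Python sketch; the helper name is ours, while the `rds-ec2-n` and `ec2-rds-n` patterns come from the table above:

```python
import re

# Name patterns used by the console for automatically created
# security groups: rds-ec2-n on the DB cluster side, ec2-rds-n
# on the EC2 side (where n is a number).
_AUTO_SG = re.compile(r"^(rds-ec2|ec2-rds)-\d+$")

def is_auto_created_sg(name):
    """True if the security group name matches the console's pattern."""
    return bool(_AUTO_SG.match(name))

print(is_auto_created_sg("rds-ec2-1"))  # True
print(is_auto_created_sg("my-db-sg"))   # False
```

Note that a matching name alone isn't sufficient: as the table describes, RDS also requires that the group hasn't been modified and contains only the expected rule.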

## Automatically connecting an EC2 instance and an Aurora DB cluster

Before setting up a connection between an EC2 instance and an Aurora DB cluster, make sure you meet the requirements described in [Overview of automatic connectivity with an EC2 instance](#ec2-rds-connect-overview).

If you make changes to security groups after you configure connectivity, the changes might affect the connection between the EC2 instance and the Aurora DB cluster.

**Note**  
You can only set up a connection between an EC2 instance and an Aurora DB cluster automatically by using the AWS Management Console. You can't set up a connection automatically with the AWS CLI or RDS API.

**To connect an EC2 instance and an Aurora DB cluster automatically**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**, and then choose the DB cluster.

1. From **Actions**, choose **Set up EC2 connection**.

   The **Set up EC2 connection** page appears.

1. On the **Set up EC2 connection** page, choose the EC2 instance.  
![\[Set up EC2 connection page.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/auto-connect-rds-ec2-set-up.png)

   If no EC2 instances exist in the same VPC, choose **Create EC2 instance** to create one. In this case, make sure the new EC2 instance is in the same VPC as the DB cluster.

1. Choose **Continue**.

   The **Review and confirm** page appears.  
![\[EC2 connection review and confirmation page.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/auto-connect-rds-ec2-confirm.png)

1. On the **Review and confirm** page, review the changes that RDS will make to set up connectivity with the EC2 instance.

   If the changes are correct, choose **Confirm and set up**.

   If the changes aren't correct, choose **Previous** or **Cancel**.

## Viewing connected compute resources

You can use the AWS Management Console to view the compute resources that are connected to an Aurora DB cluster. The resources shown include compute resource connections that were set up automatically. You can set up connectivity with compute resources automatically in the following ways:
+ You can select the compute resource when you create the database.

  For more information, see [Creating an Amazon Aurora DB cluster](Aurora.CreateInstance.md).
+ You can set up connectivity between an existing database and a compute resource.

  For more information, see [Automatically connecting an EC2 instance and an Aurora DB cluster](#ec2-rds-connect-connecting).

The listed compute resources don't include ones that were connected to the database manually. For example, you can allow a compute resource to access a database manually by adding a rule to the VPC security group associated with the database.

For a compute resource to be listed, the following conditions must apply:
+ The name of the security group associated with the compute resource matches the pattern `ec2-rds-n` (where `n` is a number).
+ The security group associated with the compute resource has an outbound rule with the port range set to the port that the DB cluster uses.
+ The security group associated with the compute resource has an outbound rule with the source set to a security group associated with the DB cluster.
+ The name of the security group associated with the DB cluster matches the pattern `rds-ec2-n` (where `n` is a number).
+ The security group associated with the DB cluster has an inbound rule with the port range set to the port that the DB cluster uses.
+ The security group associated with the DB cluster has an inbound rule with the source set to a security group associated with the compute resource.
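The conditions above can be expressed as a single predicate. This is a hedged Python sketch over simplified security-group records; the dictionary shape is an assumption for illustration, not the EC2 API response format:

```python
import re

def is_listed_connection(compute_sg, db_sg, db_port):
    """Check the conditions under which a compute resource is listed
    as connected, using simplified security-group records."""
    # Both security group names must follow the console's patterns.
    name_ok = (re.fullmatch(r"ec2-rds-\d+", compute_sg["name"])
               and re.fullmatch(r"rds-ec2-\d+", db_sg["name"]))
    # Compute side: an outbound rule on the DB port targeting the DB SG.
    egress_ok = any(r["port"] == db_port and r["target"] == db_sg["name"]
                    for r in compute_sg["egress"])
    # DB side: an inbound rule on the DB port sourced from the compute SG.
    ingress_ok = any(r["port"] == db_port and r["source"] == compute_sg["name"]
                     for r in db_sg["ingress"])
    return bool(name_ok and egress_ok and ingress_ok)

compute = {"name": "ec2-rds-1",
           "egress": [{"port": 3306, "target": "rds-ec2-1"}]}
db = {"name": "rds-ec2-1",
      "ingress": [{"port": 3306, "source": "ec2-rds-1"}]}

print(is_listed_connection(compute, db, 3306))  # True
```

If any condition fails, for example a custom security group name or a rule on the wrong port, the compute resource still works if the rules permit traffic, but it isn't shown in the console's list.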

**To view compute resources connected to an Aurora DB cluster**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**, and then choose the name of the DB cluster.

1. On the **Connectivity & security** tab, view the compute resources in the **Connected compute resources** list.  
![\[Connected compute resources.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/ec2-connected-compute-resources.png)

## Connecting to a DB instance that is running a specific DB engine

For information about connecting to a DB instance that is running a specific DB engine, follow the instructions for your DB engine:
+ [Connecting to an Amazon Aurora MySQL DB cluster](Aurora.Connecting.md#Aurora.Connecting.AuroraMySQL)
+ [Connecting to an Amazon Aurora PostgreSQL DB cluster](Aurora.Connecting.md#Aurora.Connecting.AuroraPostgreSQL)

# Automatically connecting a Lambda function and an Aurora DB cluster

You can use the Amazon RDS console to simplify setting up a connection between a Lambda function and an Aurora DB cluster. Often, your DB cluster is in a private subnet within a VPC. The Lambda function can be used by applications to access your private DB cluster. 



The following image shows a direct connection between your DB cluster and your Lambda function.

![\[Automatically connect an Aurora DB cluster with a Lambda function\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/auto-connect-aurora-lambda.png)


You can set up the connection between your Lambda function and your DB cluster through RDS Proxy to improve your database performance and resiliency. Often, Lambda functions make frequent, short database connections that benefit from the connection pooling that RDS Proxy offers. You can take advantage of any AWS Identity and Access Management (IAM) authentication that you already have for Lambda functions, instead of managing database credentials in your Lambda application code. For more information, see [Amazon RDS Proxy for Aurora](rds-proxy.md).

When you use the console to connect with an existing proxy, Amazon RDS updates the proxy security group to allow connections from your DB cluster and Lambda function.

You can also create a new proxy from the same console page. When you create a proxy in the console, to access the DB cluster, you must input your database credentials or select an AWS Secrets Manager secret.

![\[Automatically connect an Aurora DB cluster with a Lambda function through RDS Proxy\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/auto-connect-aurora-lambda-Proxy.png)


**Tip**  
To quickly connect a Lambda function to an Aurora DB cluster, you can also use the in-console guided wizard. To open the wizard, do the following:  

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Select the function that you want to connect to a database.

1. On the **Configuration** tab, select **RDS databases**.

1. Choose **Connect to RDS database**.

After you've connected your function to a database, you can create a proxy by choosing **Add proxy**.

**Topics**
+ [Overview of automatic connectivity with a Lambda function](#lambda-rds-connect-overview)
+ [Automatically connecting a Lambda function and an Aurora DB cluster](#lambda-rds-connect-connecting)
+ [Viewing connected compute resources](#lambda-rds-connect-viewing)

## Overview of automatic connectivity with a Lambda function

The following are requirements for connecting a Lambda function with an Aurora DB cluster:
+ The Lambda function must exist in the same VPC as the DB cluster.
+ Currently, the DB cluster can't be an Aurora Serverless DB cluster or part of an Aurora global database.
+ The user who sets up connectivity must have permissions to perform the following Amazon RDS, Amazon EC2, Lambda, Secrets Manager, and IAM operations:
  + Amazon RDS
    + `rds:CreateDBProxy`
    + `rds:DescribeDBClusters`
    + `rds:DescribeDBProxies`
    + `rds:ModifyDBCluster`
    + `rds:ModifyDBProxy`
    + `rds:RegisterDBProxyTargets`
  + Amazon EC2
    + `ec2:AuthorizeSecurityGroupEgress` 
    + `ec2:AuthorizeSecurityGroupIngress` 
    + `ec2:CreateSecurityGroup` 
    + `ec2:DeleteSecurityGroup`
    + `ec2:DescribeSecurityGroups` 
    + `ec2:RevokeSecurityGroupEgress` 
    + `ec2:RevokeSecurityGroupIngress`
  + Lambda
    + `lambda:CreateFunction`
    + `lambda:ListFunctions`
    + `lambda:UpdateFunctionConfiguration`
  + Secrets Manager
    + `secretsmanager:CreateSecret`
    + `secretsmanager:DescribeSecret`
  + IAM
    + `iam:AttachPolicy`
    + `iam:CreateRole`
    + `iam:CreatePolicy`
  + AWS KMS
    + `kms:describeKey`
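Granting the operations listed above typically takes an IAM policy attached to the user or role that performs the setup. The following sketch shows the shape of such a policy using only the Amazon EC2 and Secrets Manager actions from the list; the other services' actions go in similar statements, the statement `Sid` values are our own, and the `Resource` elements should be scoped down for production use:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Ec2ConnectivitySetup",
      "Effect": "Allow",
      "Action": [
        "ec2:AuthorizeSecurityGroupEgress",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:CreateSecurityGroup",
        "ec2:DeleteSecurityGroup",
        "ec2:DescribeSecurityGroups",
        "ec2:RevokeSecurityGroupEgress",
        "ec2:RevokeSecurityGroupIngress"
      ],
      "Resource": "*"
    },
    {
      "Sid": "SecretsForProxyAuth",
      "Effect": "Allow",
      "Action": [
        "secretsmanager:CreateSecret",
        "secretsmanager:DescribeSecret"
      ],
      "Resource": "*"
    }
  ]
}
```

Verify each action name against the IAM service authorization reference for the corresponding service before using a policy like this.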

**Note**  
If the DB cluster and Lambda function are in different Availability Zones, your account might incur cross-Availability Zone costs.

When you set up a connection between a Lambda function and an Aurora DB cluster, Amazon RDS configures the VPC security group for your function and for your DB cluster. If you use RDS Proxy, then Amazon RDS also configures the VPC security group for the proxy. Amazon RDS acts according to the current configuration of the security groups associated with the DB cluster, Lambda function, and proxy, as described in the following table.


| Current RDS security group configuration | Current Lambda security group configuration | Current proxy security group configuration | RDS action | 
| --- | --- | --- | --- | 
|  There are one or more security groups associated with the DB cluster with a name that matches the pattern `rds-lambda-n` or if a proxy is already connected to your DB cluster, RDS checks if the `TargetHealth` of an associated proxy is `AVAILABLE`. A security group that matches the pattern hasn't been modified. This security group has only one inbound rule with the VPC security group of the Lambda function or proxy as the source.  |  There are one or more security groups associated with the Lambda function with a name that matches the pattern `lambda-rds-n` or `lambda-rdsproxy-n` (where `n` is a number). A security group that matches the pattern hasn't been modified. This security group has only one outbound rule with either the VPC security group of the DB cluster or the proxy as the destination.  |  There are one or more security groups associated with the proxy with a name that matches the pattern `rdsproxy-lambda-n` (where `n` is a number). A security group that matches the pattern hasn't been modified. This security group has inbound and outbound rules with the VPC security groups of the Lambda function and the DB cluster.  |  Amazon RDS takes no action. A connection was already configured automatically between the Lambda function, the proxy (optional), and DB cluster. Because a connection already exists between the function, proxy, and the database, the security groups aren't modified.  | 
|  Either of the following conditions apply: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/lambda-rds-connect.html) Amazon RDS can't use a security group that doesn't have one inbound rule with the VPC security group of the Lambda function or proxy as the source. Amazon RDS also can't use a security group that has been modified. Examples of modifications include adding a rule or changing the port of an existing rule.  |  Either of the following conditions apply: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/lambda-rds-connect.html) Amazon RDS can't use a security group that doesn't have one outbound rule with the VPC security group of the DB cluster or proxy as the destination. Amazon RDS also can't use a security group that has been modified.  | Either of the following conditions apply:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/lambda-rds-connect.html)Amazon RDS can't use a security group that doesn't have inbound and outbound rules with the VPC security group of the DB cluster and the Lambda function. Amazon RDS also can't use a security group that has been modified. |  [RDS action: create new security groups](#rds-lam-action-create-new-security-groups) | 
|  There are one or more security groups associated with the DB cluster with a name that matches the pattern `rds-lambda-n` or if the `TargetHealth` of an associated proxy is `AVAILABLE`. A security group that matches the pattern hasn't been modified. This security group has only one inbound rule with the VPC security group of the Lambda function or proxy as the source.  |  There are one or more security groups associated with the Lambda function with a name that matches the pattern `lambda-rds-n` or `lambda-rdsproxy-n`. However, Amazon RDS can't use any of these security groups for the connection with the DB cluster. Amazon RDS can't use a security group that doesn't have one outbound rule with the VPC security group of the DB cluster or proxy as the destination. Amazon RDS also can't use a security group that has been modified.  |  There are one or more security groups associated with the proxy with a name that matches the pattern `rdsproxy-lambda-n`. However, Amazon RDS can't use any of these security groups for the connection with the DB cluster or Lambda function. Amazon RDS can't use a security group that doesn't have inbound and outbound rules with the VPC security group of the DB cluster and the Lambda function. Amazon RDS also can't use a security group that has been modified.  |  [RDS action: create new security groups](#rds-lam-action-create-new-security-groups) | 
|  There are one or more security groups associated with the DB cluster with a name that matches the pattern `rds-lambda-n` or if the `TargetHealth` of an associated proxy is `AVAILABLE`. A security group that matches the pattern hasn't been modified. This security group has only one inbound rule with the VPC security group of the Lambda function or proxy as the source.  |  A valid Lambda security group for the connection exists, but it isn't associated with the Lambda function. This security group has a name that matches the pattern `lambda-rds-n` or `lambda-rdsproxy-n`. It hasn't been modified. It has only one outbound rule with the VPC security group of the DB cluster or proxy as the destination.  |  A valid proxy security group for the connection exists, but it isn't associated with the proxy. This security group has a name that matches the pattern `rdsproxy-lambda-n`. It hasn't been modified. It has inbound and outbound rules with the VPC security group of the DB cluster and the Lambda function.  |  [RDS action: associate Lambda security group](#rds-lam-action-associate-lam-security-group)  | 
|  Either of the following conditions apply: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/lambda-rds-connect.html) Amazon RDS can't use a security group that doesn't have one inbound rule with the VPC security group of the Lambda function or proxy as the source. Amazon RDS also can't use a security group that has been modified.  |  There are one or more security groups associated with the Lambda function with a name that matches the pattern `lambda-rds-n` or `lambda-rdsproxy-n`. A security group that matches the pattern hasn't been modified. This security group has only one outbound rule with the VPC security group of the DB cluster or proxy as the destination.  |  There are one or more security groups associated with the proxy with a name that matches the pattern `rdsproxy-lambda-n`. A security group that matches the pattern hasn't been modified. This security group has inbound and outbound rules with the VPC security group of the DB cluster and the Lambda function.  |  [RDS action: create new security groups](#rds-lam-action-create-new-security-groups) | 
|  Either of the following conditions apply: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/lambda-rds-connect.html) Amazon RDS can't use a security group that doesn't have one inbound rule with the VPC security group of the Lambda function or proxy as the source. Amazon RDS also can't use a security group that has been modified.  |  Either of the following conditions apply: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/lambda-rds-connect.html) Amazon RDS can't use a security group that doesn't have one outbound rule with the VPC security group of the DB cluster or proxy as the destination. Amazon RDS also can't use a security group that has been modified.  |  Either of the following conditions apply: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/lambda-rds-connect.html) Amazon RDS can't use a security group that doesn't have inbound and outbound rules with the VPC security group of the DB cluster and the Lambda function. Amazon RDS also can't use a security group that has been modified.  |  [RDS action: create new security groups](#rds-lam-action-create-new-security-groups) | 

**RDS action: create new security groups**  
Amazon RDS takes the following actions:
+ Creates a new security group that matches the pattern `rds-lambda-n` or `rds-rdsproxy-n` (if you choose to use RDS Proxy). This security group has an inbound rule with the VPC security group of the Lambda function or proxy as the source. This security group is associated with the DB cluster and allows the function or proxy to access the DB cluster.
+ Creates a new security group that matches the pattern `lambda-rds-n` or `lambda-rdsproxy-n`. This security group has an outbound rule with the VPC security group of the DB cluster or proxy as the destination. This security group is associated with the Lambda function and allows the function to send traffic to the DB cluster or send traffic through a proxy.
+ Creates a new security group that matches the pattern `rdsproxy-lambda-n`. This security group has inbound and outbound rules with the VPC security group of the DB cluster and the Lambda function.

**RDS action: associate Lambda security group**  
Amazon RDS associates the valid, existing Lambda security group with the Lambda function. This security group allows the function to send traffic to the DB cluster or send traffic through a proxy.

## Automatically connecting a Lambda function and an Aurora DB cluster
Connecting a Lambda function

You can use the Amazon RDS console to automatically connect a Lambda function to your DB cluster. This simplifies the process of setting up a connection between these resources.

You can also use RDS Proxy to include a proxy in your connection. Lambda functions make frequent short database connections that benefit from the connection pooling that RDS Proxy offers. You can also use any IAM authentication that you've already set up for your Lambda function, instead of managing database credentials in your Lambda application code.

You can connect an existing DB cluster to new and existing Lambda functions using the **Set up Lambda connection** page. The setup process automatically sets up the required security groups for you.

Before setting up a connection between a Lambda function and a DB cluster, make sure that:
+ Your Lambda function and DB cluster are in the same VPC.
+ You have the right permissions for your user account. For more information about the requirements, see [Overview of automatic connectivity with a Lambda function](#lambda-rds-connect-overview).

If you change security groups after you configure connectivity, the changes might affect the connection between the Lambda function and the DB cluster.

**Note**  
You can automatically set up a connection between a DB cluster and a Lambda function only in the AWS Management Console. To connect a Lambda function, all instances in the DB cluster must be in the **Available** state.
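
You can confirm the instance states from the AWS CLI before starting the setup. The following is a minimal sketch; the cluster identifier `mydbcluster` is a placeholder for your own.

```
# List each instance in the cluster with its current status.
# All instances must report "available" before you set up the connection.
aws rds describe-db-instances \
    --filters Name=db-cluster-id,Values=mydbcluster \
    --query 'DBInstances[].[DBInstanceIdentifier,DBInstanceStatus]' \
    --output table
```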

**To automatically connect a Lambda function and a DB cluster**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**, and then choose the DB cluster that you want to connect to a Lambda function.

1. For **Actions**, choose **Set up Lambda connection**.

1. On the **Set up Lambda connection** page, under **Select Lambda function**, do either of the following:
   + If you have an existing Lambda function in the same VPC as your DB cluster, choose **Choose existing function**, and then choose the function.
   + If you don't have a Lambda function in the same VPC, choose **Create new function**, and then enter a **Function name**. The default runtime is set to Node.js 18. You can modify the settings for your new Lambda function in the Lambda console after you complete the connection setup.

1. (Optional) Under **RDS Proxy**, select **Connect using RDS Proxy**, and then do any of the following:
   + If you have an existing proxy that you want to use, choose **Choose existing proxy**, and then choose the proxy.
   + If you don't have a proxy, and you want Amazon RDS to automatically create one for you, choose **Create new proxy**. Then, for **Database credentials**, do either of the following:

     1. Choose **Database username and password**, and then enter the **Username** and **Password** for your DB cluster.

     1. Choose **Secrets Manager secret**. Then, for **Select secret**, choose an AWS Secrets Manager secret. If you don't have a Secrets Manager secret, choose **Create new Secrets Manager secret** to [create a new secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_secret.html). After you create the secret, for **Select secret**, choose the new secret.

     After you create the new proxy, choose **Choose existing proxy**, and then choose the proxy. Note that it might take some time for your proxy to be available for connection.

1. (Optional) Expand **Connection summary** and verify the highlighted updates for your resources.

1. Choose **Set up**.

After you confirm the setup, Amazon RDS begins the process of connecting your Lambda function, RDS Proxy (if you used a proxy), and DB cluster. The console shows the **Connection details** dialog box, which lists the security group changes that allow connections between your resources.
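
If your setup includes a proxy, you can check from the AWS CLI that the proxy has registered your database and that its targets are healthy. This is a sketch; `myproxy` is a placeholder for your proxy name.

```
# Show the health of each target behind the proxy.
# A TargetHealth state of AVAILABLE means the proxy can reach the database.
aws rds describe-db-proxy-targets \
    --db-proxy-name myproxy \
    --query 'Targets[].[Endpoint,TargetHealth.State]' \
    --output table
```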

## Viewing connected compute resources
Viewing connected compute resources

You can use the AWS Management Console to view the Lambda functions that are connected to your DB cluster. The resources shown include compute resource connections that Amazon RDS set up automatically.

The listed compute resources don't include those that were connected to the DB cluster manually. For example, you can manually allow a compute resource to access your DB cluster by adding a rule to the VPC security group associated with the database.

For the console to list a Lambda function, the following conditions must apply:
+ The name of the security group associated with the compute resource matches the pattern `lambda-rds-n` or `lambda-rdsproxy-n` (where `n` is a number).
+ The security group associated with the compute resource has an outbound rule with the port range set to the port of the DB cluster or an associated proxy. The destination for the outbound rule must be set to a security group associated with the DB cluster or an associated proxy.
+ If the configuration includes a proxy, the name of the security group attached to the proxy associated with your database matches the pattern `rdsproxy-lambda-n` (where `n` is a number).
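
To see which security groups in your VPC follow these naming patterns, you can query them with the AWS CLI. A sketch, assuming your default region and credentials:

```
# List security groups whose names match the automatic-connection patterns.
aws ec2 describe-security-groups \
    --filters "Name=group-name,Values=lambda-rds-*,lambda-rdsproxy-*,rdsproxy-lambda-*" \
    --query 'SecurityGroups[].[GroupId,GroupName]' \
    --output table
```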

**To view compute resources automatically connected to a DB cluster**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**, and then choose the DB cluster.

1. On the **Connectivity & security** tab, view the compute resources under **Connected compute resources**.

# Modifying an Amazon Aurora DB cluster
Modifying an Aurora DB cluster

You can change the settings of a DB cluster to accomplish tasks such as changing its backup retention period or its database port. You can also modify DB instances in a DB cluster to accomplish tasks such as changing its DB instance class or enabling Performance Insights for it. This topic guides you through modifying an Aurora DB cluster and its DB instances, and describes the settings for each.

We recommend that you test any changes on a test DB cluster or DB instance before modifying a production DB cluster or DB instance, so that you fully understand the impact of each change. This is especially important when upgrading database versions.

**Topics**
+ [

## Modifying the DB cluster by using the console, CLI, and API
](#Aurora.Modifying.Cluster)
+ [

## Modifying a DB instance in a DB cluster
](#Aurora.Modifying.Instance)
+ [

## Changing the password for the database master user
](#Aurora.Modifying.Password)
+ [

## Settings for Amazon Aurora
](#Aurora.Modifying.Settings)
+ [

## Settings that don't apply to Amazon Aurora DB clusters
](#Aurora.Modifying.SettingsNotApplicableDBClusters)
+ [

## Settings that don't apply to Amazon Aurora DB instances
](#Aurora.Modifying.SettingsNotApplicable)

## Modifying the DB cluster by using the console, CLI, and API
<a name="modify_cluster"></a>

You can modify a DB cluster using the AWS Management Console, the AWS CLI, or the RDS API.

**Note**  
Most modifications can be applied immediately or during the next scheduled maintenance window. Some modifications, such as turning on deletion protection, are applied immediately—regardless of when you choose to apply them.  
Changing the master password in the AWS Management Console is always applied immediately.  
If you're using SSL endpoints and change the DB cluster identifier, stop and restart the DB cluster to update the SSL endpoints. For more information, see [Stopping and starting an Amazon Aurora DB cluster](aurora-cluster-stop-start.md).

### Console


**To modify a DB cluster**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**, and then select the DB cluster that you want to modify.

1. Choose **Modify**. The **Modify DB cluster** page appears.

1. Change any of the settings that you want. For information about each setting, see [Settings for Amazon Aurora](#Aurora.Modifying.Settings). 
**Note**  
In the AWS Management Console, some instance level changes only apply to the current DB instance, while others apply to the entire DB cluster. For information about whether a setting applies to the DB instance or the DB cluster, see the scope for the setting in [Settings for Amazon Aurora](#Aurora.Modifying.Settings). To change a setting that modifies the entire DB cluster at the instance level in the AWS Management Console, follow the instructions in [Modifying a DB instance in a DB cluster](#Aurora.Modifying.Instance).

1. When all the changes are as you want them, choose **Continue** and check the summary of modifications.

1. To apply the changes immediately, select **Apply immediately**.

1. On the confirmation page, review your changes. If they are correct, choose **Modify cluster** to save your changes. 

   Alternatively, choose **Back** to edit your changes, or choose **Cancel** to cancel your changes. 

### AWS CLI


To modify a DB cluster using the AWS CLI, call the [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) command. Specify the DB cluster identifier, and the values for the settings that you want to modify. For information about each setting, see [Settings for Amazon Aurora](#Aurora.Modifying.Settings). 

**Note**  
Some settings only apply to DB instances. To change those settings, follow the instructions in [Modifying a DB instance in a DB cluster](#Aurora.Modifying.Instance).

**Example**  
The following command modifies `mydbcluster` by setting the backup retention period to 1 week (7 days).   
For Linux, macOS, or Unix:  

```
aws rds modify-db-cluster \
    --db-cluster-identifier mydbcluster \
    --backup-retention-period 7
```
For Windows:  

```
aws rds modify-db-cluster ^
    --db-cluster-identifier mydbcluster ^
    --backup-retention-period 7
```

### RDS API


To modify a DB cluster using the Amazon RDS API, call the [ModifyDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) operation. Specify the DB cluster identifier, and the values for the settings that you want to modify. For information about each parameter, see [Settings for Amazon Aurora](#Aurora.Modifying.Settings). 

**Note**  
Some settings only apply to DB instances. To change those settings, follow the instructions in [Modifying a DB instance in a DB cluster](#Aurora.Modifying.Instance).

## Modifying a DB instance in a DB cluster
<a name="modify_instance"></a>

You can modify a DB instance in a DB cluster using the AWS Management Console, the AWS CLI, or the RDS API.

When you modify a DB instance, you can apply the changes immediately. To apply changes immediately, you select the **Apply Immediately** option in the AWS Management Console, you use the `--apply-immediately` parameter when calling the AWS CLI, or you set the `ApplyImmediately` parameter to `true` when using the Amazon RDS API. 

If you don't choose to apply changes immediately, the changes are deferred until the next maintenance window. During the next maintenance window, any of these deferred changes are applied. If you choose to apply changes immediately, your new changes and any previously deferred changes are applied.

To see the modifications that are pending for the next maintenance window, use the [describe-db-clusters](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/rds/describe-db-clusters.html) AWS CLI command and check the `PendingModifiedValues` field.
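
For example, the following sketch (using the placeholder cluster identifier `mydbcluster`) returns only the deferred changes:

```
# Show modifications that are waiting for the next maintenance window.
# An empty result means that no changes are pending.
aws rds describe-db-clusters \
    --db-cluster-identifier mydbcluster \
    --query 'DBClusters[0].PendingModifiedValues'
```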

**Important**  
If any of the deferred modifications require downtime, choosing **Apply immediately** can cause unexpected downtime for the DB instance. There is no downtime for the other DB instances in the DB cluster.  
Modifications that you defer aren't listed in the output of the `describe-pending-maintenance-actions` CLI command. Maintenance actions only include system upgrades that you schedule for the next maintenance window.
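
To list the scheduled maintenance actions, which, as noted, are separate from deferred modifications, you can run the following sketch:

```
# List system maintenance actions scheduled for your resources.
aws rds describe-pending-maintenance-actions \
    --query 'PendingMaintenanceActions[].[ResourceIdentifier,PendingMaintenanceActionDetails[0].Action]' \
    --output table
```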

### Console


**To modify a DB instance in a DB cluster**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**, and then select the DB instance that you want to modify.

1. For **Actions**, choose **Modify**. The **Modify DB instance** page appears.

1. Change any of the settings that you want. For information about each setting, see [Settings for Amazon Aurora](#Aurora.Modifying.Settings).
**Note**  
Some settings apply to the entire DB cluster and must be changed at the cluster level. To change those settings, follow the instructions in [Modifying the DB cluster by using the console, CLI, and API](#Aurora.Modifying.Cluster).  
 In the AWS Management Console, some instance level changes only apply to the current DB instance, while others apply to the entire DB cluster. For information about whether a setting applies to the DB instance or the DB cluster, see the scope for the setting in [Settings for Amazon Aurora](#Aurora.Modifying.Settings).

1. When all the changes are as you want them, choose **Continue** and check the summary of modifications.

1. To apply the changes immediately, select **Apply immediately**.

1. On the confirmation page, review your changes. If they are correct, choose **Modify DB instance** to save your changes.

   Alternatively, choose **Back** to edit your changes, or choose **Cancel** to cancel your changes.

### AWS CLI


To modify a DB instance in a DB cluster by using the AWS CLI, call the [modify-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) command. Specify the DB instance identifier, and the values for the settings that you want to modify. For information about each parameter, see [Settings for Amazon Aurora](#Aurora.Modifying.Settings).

**Note**  
Some settings apply to the entire DB cluster. To change those settings, follow the instructions in [Modifying the DB cluster by using the console, CLI, and API](#Aurora.Modifying.Cluster).

**Example**  
The following code modifies `mydbinstance` by setting the DB instance class to `db.r4.xlarge`. The changes are applied during the next maintenance window by using `--no-apply-immediately`. Use `--apply-immediately` to apply the changes immediately.   
For Linux, macOS, or Unix:  

```
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --db-instance-class db.r4.xlarge \
    --no-apply-immediately
```
For Windows:  

```
aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --db-instance-class db.r4.xlarge ^
    --no-apply-immediately
```

### RDS API


To modify a DB instance by using the Amazon RDS API, call the [ModifyDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html) operation. Specify the DB instance identifier, and the values for the settings that you want to modify. For information about each parameter, see [Settings for Amazon Aurora](#Aurora.Modifying.Settings). 

**Note**  
Some settings apply to the entire DB cluster. To change those settings, follow the instructions in [Modifying the DB cluster by using the console, CLI, and API](#Aurora.Modifying.Cluster).

## Changing the password for the database master user
Changing the master user password

You can use the AWS Management Console or the AWS CLI to change the master user password.

### Console


To change the master user password using the AWS Management Console, you modify the writer DB instance.

**To change the master user password**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**, and then select the DB instance that you want to modify.

1. For **Actions**, choose **Modify**.

   The **Modify DB instance** page appears.

1. Enter a **New master password**.

1. For **Confirm master password**, enter the same new password.  
![\[Enter a new master user password and confirm it.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aur_new_master_password.png)

1. Choose **Continue** and check the summary of modifications.
**Note**  
Password changes are always applied immediately.

1. On the confirmation page, choose **Modify DB instance**.

### CLI


To change the master user password using the AWS CLI, call the [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) command. Specify the DB cluster identifier and the new password, as shown in the following examples.

You don't need to specify `--apply-immediately|--no-apply-immediately`, because password changes are always applied immediately.

For Linux, macOS, or Unix:

```
aws rds modify-db-cluster \
    --db-cluster-identifier mydbcluster \
    --master-user-password mynewpassword
```

For Windows:

```
aws rds modify-db-cluster ^
    --db-cluster-identifier mydbcluster ^
    --master-user-password mynewpassword
```

## Settings for Amazon Aurora
Available settings

The following table contains details about which settings you can modify, the methods for modifying the setting, and the scope of the setting. The scope determines whether the setting applies to the entire DB cluster or if it can be set only for specific DB instances. 

**Note**  
Additional settings are available if you are modifying an Aurora Serverless v1 or Aurora Serverless v2 DB cluster. For information about these settings, see [Modifying an Aurora Serverless v1 DB cluster](aurora-serverless.modifying.md) and [Managing Aurora Serverless v2 DB clusters](aurora-serverless-v2-administration.md).  
Some settings aren't available for Aurora Serverless v1 and Aurora Serverless v2 because of their limitations. For more information, see [Limitations of Aurora Serverless v1](aurora-serverless.md#aurora-serverless.limitations) and [Requirements and limitations for Aurora Serverless v2](aurora-serverless-v2.requirements.md).



| Setting and description | Method | Scope | Downtime notes | 
| --- | --- | --- | --- | 
|  **Auto minor version upgrade** Whether you want the DB instance to receive preferred minor engine version upgrades automatically when they become available. Upgrades are installed only during your scheduled maintenance window.  For more information about engine updates, see [Database engine updates for Amazon Aurora PostgreSQL](AuroraPostgreSQL.Updates.md) and [Database engine updates for Amazon Aurora MySQL](AuroraMySQL.Updates.md). For more information about the **Auto minor version upgrade** setting for Aurora MySQL, see [Enabling automatic upgrades between minor Aurora MySQL versions](AuroraMySQL.Updates.AMVU.md).   |   This setting is enabled by default. For each new cluster, choose the appropriate value for this setting based on its importance, expected lifetime, and the amount of verification testing that you do after each upgrade.  When you change this setting, perform this modification for every DB instance in your Aurora cluster. If any DB instance in your cluster has this setting turned off, the cluster isn't automatically upgraded. Using the AWS Management Console, [Modifying a DB instance in a DB cluster](#Aurora.Modifying.Instance). Using the AWS CLI, run [modify-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) and set the `--auto-minor-version-upgrade\|--no-auto-minor-version-upgrade` option. Using the RDS API, call [ModifyDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html) and set the `AutoMinorVersionUpgrade` parameter.  |  The entire DB cluster  |  An outage doesn't occur during this change. Outages do occur during future maintenance windows when Aurora applies automatic upgrades.  | 
|  **Backup retention period** The number of days that automatic backups are retained. The minimum value is `1`.  For more information, see [Backups](Aurora.Managing.Backups.md#Aurora.Managing.Backups.Backup).   |  Using the AWS Management Console, [Modifying the DB cluster by using the console, CLI, and API](#Aurora.Modifying.Cluster). Using the AWS CLI, run [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) and set the `--backup-retention-period` option. Using the RDS API, call [ModifyDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) and set the `BackupRetentionPeriod` parameter.  |  The entire DB cluster  |  An outage doesn't occur during this change.  | 
|  **Backup window (Start time)** The time range during which automated backups of your database occur. The backup window is a start time in Universal Coordinated Time (UTC), and a duration in hours.  Aurora backups are continuous and incremental, but the backup window is used to create a daily system backup that is preserved within the backup retention period. You can copy it to preserve it outside of the retention period. The maintenance window and the backup window for the DB cluster can't overlap. For more information, see [Backup window](Aurora.Managing.Backups.md#Aurora.Managing.Backups.BackupWindow).  |  Using the AWS Management Console, [Modifying the DB cluster by using the console, CLI, and API](#Aurora.Modifying.Cluster). Using the AWS CLI, run [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) and set the `--preferred-backup-window` option. Using the RDS API, call [ModifyDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) and set the `PreferredBackupWindow` parameter.  |  The entire DB cluster  |  An outage doesn't occur during this change.  | 
|  **Capacity settings** The scaling properties of an Aurora Serverless v1 DB cluster. You can only modify scaling properties for DB clusters in `serverless` DB engine mode. For information about Aurora Serverless v1, see [Using Amazon Aurora Serverless v1](aurora-serverless.md).  |  Using the AWS Management Console, [Modifying the DB cluster by using the console, CLI, and API](#Aurora.Modifying.Cluster). Using the AWS CLI, run [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) and set the `--scaling-configuration` option. Using the RDS API, call [ModifyDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) and set the `ScalingConfiguration` parameter.  |  The entire DB cluster  |  An outage doesn't occur during this change. The change occurs immediately. This setting ignores the apply immediately setting.  | 
|  **Certificate authority** The certificate authority (CA) for the server certificate used by the DB instance.  |  Using the AWS Management Console, [Modifying a DB instance in a DB cluster](#Aurora.Modifying.Instance). Using the AWS CLI, run [modify-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) and set the `--ca-certificate-identifier` option. Using the RDS API, call [ModifyDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html) and set the `CACertificateIdentifier` parameter.  |  Only the specified DB instance  |  An outage only occurs if the DB engine doesn't support rotation without restart. You can use the [ describe-db-engine-versions](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-engine-versions.html) AWS CLI command to determine whether the DB engine supports rotation without restart.  | 
|  **Cluster storage configuration** The storage type for the DB cluster: **Aurora I/O-Optimized** or **Aurora Standard**. For more information, see [Storage configurations for Amazon Aurora DB clusters](Aurora.Overview.StorageReliability.md#aurora-storage-type).  |  Using the AWS Management Console, [Modifying the DB cluster by using the console, CLI, and API](#Aurora.Modifying.Cluster). Using the AWS CLI, run [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) and set the `--storage-type` option. Using the RDS API, call [ModifyDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) and set the `StorageType` parameter.  | The entire DB cluster |  Changing the storage type of an Aurora PostgreSQL DB cluster with Optimized Reads instance classes causes an outage. This does not occur when changing storage types for clusters with other instance class types. For more information on the DB instance class types, see [DB instance class types](Concepts.DBInstanceClass.Types.md).  | 
|  **Copy tags to snapshots** Select to specify that tags defined for this DB cluster are copied to DB snapshots created from this DB cluster. For more information, see [Tagging Amazon Aurora and Amazon RDS resources](USER_Tagging.md).  |  Using the AWS Management Console, [Modifying the DB cluster by using the console, CLI, and API](#Aurora.Modifying.Cluster). Using the AWS CLI, run [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) and set the `--copy-tags-to-snapshot` or `--no-copy-tags-to-snapshot` option. Using the RDS API, call [ModifyDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) and set the `CopyTagsToSnapshot` parameter.  |  The entire DB cluster  |  An outage doesn't occur during this change.  | 
|  **Data API** You can access Aurora Serverless v1 with web services–based applications, including AWS Lambda and AWS AppSync. This setting only applies to an Aurora Serverless v1 DB cluster. For more information, see [Using the Amazon RDS Data API](data-api.md).   |  Using the AWS Management Console, [Modifying the DB cluster by using the console, CLI, and API](#Aurora.Modifying.Cluster). Using the AWS CLI, run [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) and set the `--enable-http-endpoint` option. Using the RDS API, call [ModifyDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) and set the `EnableHttpEndpoint` parameter.  |  The entire DB cluster  |  An outage doesn't occur during this change.  | 
|  **Database authentication** The database authentication option that you want to use. For MySQL: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Modifying.html) For PostgreSQL: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Modifying.html) |  Using the AWS Management Console, [Modifying the DB cluster by using the console, CLI, and API](#Aurora.Modifying.Cluster). Using the AWS CLI, run [ modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) and set the following options: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Modifying.html) Using the RDS API, call [ ModifyDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) and set the following parameters: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Modifying.html)  |  The entire DB cluster  |  An outage doesn't occur during this change.  | 
|  **Database port** The port that you want to use to access the DB cluster.   |  Using the AWS Management Console, [Modifying the DB cluster by using the console, CLI, and API](#Aurora.Modifying.Cluster). Using the AWS CLI, run [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) and set the `--port` option. Using the RDS API, call [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) and set the `Port` parameter.  |  The entire DB cluster  |  An outage occurs during this change. All of the DB instances in the DB cluster are rebooted immediately.  | 
|  **DB cluster identifier** The DB cluster identifier. This value is stored as a lowercase string. When you change the DB cluster identifier, the DB cluster endpoints change. The endpoints of the DB instances in the DB cluster don't change.  |  Using the AWS Management Console, [Modifying the DB cluster by using the console, CLI, and API](#Aurora.Modifying.Cluster). Using the AWS CLI, run [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) and set the `--new-db-cluster-identifier` option. Using the RDS API, call [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) and set the `NewDBClusterIdentifier` parameter.  |  The entire DB cluster  |  An outage doesn't occur during this change.  | 
|  **DB cluster parameter group** The DB cluster parameter group that you want associated with the DB cluster.  For more information, see [Parameter groups for Amazon Aurora](USER_WorkingWithParamGroups.md).   |  Using the AWS Management Console, [Modifying the DB cluster by using the console, CLI, and API](#Aurora.Modifying.Cluster). Using the AWS CLI, run [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) and set the `--db-cluster-parameter-group-name` option. Using the RDS API, call [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) and set the `DBClusterParameterGroupName` parameter.  |  The entire DB cluster  |  An outage doesn't occur during this change. When you change the parameter group, changes to some parameters are applied to the DB instances in the DB cluster immediately without a reboot. Changes to other parameters are applied only after the DB instances in the DB cluster are rebooted.  | 
|  **DB instance class** The DB instance class that you want to use.  For more information, see [Amazon Aurora DB instance classes](Concepts.DBInstanceClass.md).   |  Using the AWS Management Console, [Modifying a DB instance in a DB cluster](#Aurora.Modifying.Instance). Using the AWS CLI, run [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) and set the `--db-instance-class` option. Using the RDS API, call [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html) and set the `DBInstanceClass` parameter.  |  Only the specified DB instance  |  An outage occurs during this change.  | 
|  **DB instance identifier** The DB instance identifier. This value is stored as a lowercase string.   |  Using the AWS Management Console, [Modifying a DB instance in a DB cluster](#Aurora.Modifying.Instance). Using the AWS CLI, run [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) and set the `--new-db-instance-identifier` option. Using the RDS API, call [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html) and set the `NewDBInstanceIdentifier` parameter.  |  Only the specified DB instance  |  Downtime occurs during this change. RDS restarts the DB instance to update the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Modifying.html)  | 
|  **DB parameter group** The DB parameter group that you want associated with the DB instance.  For more information, see [Parameter groups for Amazon Aurora](USER_WorkingWithParamGroups.md).   |  Using the AWS Management Console, [Modifying a DB instance in a DB cluster](#Aurora.Modifying.Instance). Using the AWS CLI, run [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) and set the `--db-parameter-group-name` option. Using the RDS API, call [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html) and set the `DBParameterGroupName` parameter.  |  Only the specified DB instance  |  An outage doesn't occur during this change. When you associate a new DB parameter group with a DB instance, the modified static and dynamic parameters are applied only after the DB instance is rebooted. However, if you modify dynamic parameters in the DB parameter group after you associate it with the DB instance, these changes are applied immediately without a reboot. For more information, see [Parameter groups for Amazon Aurora](USER_WorkingWithParamGroups.md) and [Rebooting an Amazon Aurora DB cluster or Amazon Aurora DB instance](USER_RebootCluster.md).   | 
|  **Deletion protection** **Enable deletion protection** to prevent your DB cluster from being deleted. For more information, see [Deletion protection for Aurora clusters](USER_DeleteCluster.md#USER_DeletionProtection).   |  Using the AWS Management Console, [Modifying the DB cluster by using the console, CLI, and API](#Aurora.Modifying.Cluster). Using the AWS CLI, run [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) and set the `--deletion-protection\|--no-deletion-protection` option. Using the RDS API, call [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) and set the `DeletionProtection` parameter.  | The entire DB cluster |  An outage doesn't occur during this change.  | 
|  **Engine version** The version of the DB engine that you want to use. Before you upgrade your production DB cluster, we recommend that you test the upgrade process on a test DB cluster to verify its duration and to validate your applications.   |  Using the AWS Management Console, [Modifying the DB cluster by using the console, CLI, and API](#Aurora.Modifying.Cluster). Using the AWS CLI, run [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) and set the `--engine-version` option. Using the RDS API, call [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) and set the `EngineVersion` parameter.  |  The entire DB cluster  |  An outage occurs during this change.  | 
|  **Enhanced monitoring** **Enable enhanced monitoring** to enable gathering metrics in real time for the operating system that your DB instance runs on.  For more information, see [Monitoring OS metrics with Enhanced Monitoring](USER_Monitoring.OS.md).   |  Using the AWS Management Console, [Modifying a DB instance in a DB cluster](#Aurora.Modifying.Instance). Using the AWS CLI, run [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) and set the `--monitoring-role-arn` and `--monitoring-interval` options. Using the RDS API, call [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html) and set the `MonitoringRoleArn` and `MonitoringInterval` parameters.  |  Only the specified DB instance  |  An outage doesn't occur during this change.  | 
|  **Log exports** Select the log types to publish to Amazon CloudWatch Logs.  For more information, see [Aurora MySQL database log files](USER_LogAccess.Concepts.MySQL.md).   |  Using the AWS Management Console, [Modifying the DB cluster by using the console, CLI, and API](#Aurora.Modifying.Cluster). Using the AWS CLI, run [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) and set the `--cloudwatch-logs-export-configuration` option. Using the RDS API, call [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) and set the `CloudwatchLogsExportConfiguration` parameter.  |  The entire DB cluster  |  An outage doesn't occur during this change.  | 
|  **Maintenance window** The time range during which system maintenance occurs. System maintenance includes upgrades, if applicable. The maintenance window is a start time in Universal Coordinated Time (UTC), and a duration in hours.  If you set the window to the current time, there must be at least 30 minutes between the current time and the end of the window to ensure that any pending changes are applied.  You can set the maintenance window independently for the DB cluster and for each DB instance in the DB cluster. When the scope of a modification is the entire DB cluster, the modification is performed during the DB cluster maintenance window. When the scope of a modification is a DB instance, the modification is performed during the maintenance window of that DB instance. The maintenance window and the backup window for the DB cluster can't overlap. For more information, see [Amazon RDS maintenance window](USER_UpgradeDBInstance.Maintenance.md#Concepts.DBMaintenance).   |  To change the maintenance window for the DB cluster using the AWS Management Console, [Modifying the DB cluster by using the console, CLI, and API](#Aurora.Modifying.Cluster). To change the maintenance window for a DB instance using the AWS Management Console, [Modifying a DB instance in a DB cluster](#Aurora.Modifying.Instance). To change the maintenance window for the DB cluster using the AWS CLI, run [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) and set the `--preferred-maintenance-window` option. To change the maintenance window for a DB instance using the AWS CLI, run [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) and set the `--preferred-maintenance-window` option. 
To change the maintenance window for the DB cluster using the RDS API, call [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) and set the `PreferredMaintenanceWindow` parameter. To change the maintenance window for a DB instance using the RDS API, call [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html) and set the `PreferredMaintenanceWindow` parameter.  |  The entire DB cluster or a single DB instance  |  If there are one or more pending actions that cause an outage, and the maintenance window is changed to include the current time, then those pending actions are applied immediately, and an outage occurs.  | 
|   **Manage master credentials in AWS Secrets Manager** Select **Manage master credentials in AWS Secrets Manager** to manage the master user password in a secret in Secrets Manager. Optionally, choose a KMS key to use to protect the secret. Choose from the KMS keys in your account, or enter the key from a different account. For more information, see [Password management with Amazon Aurora and AWS Secrets Manager](rds-secrets-manager.md). If Aurora is already managing the master user password for the DB cluster, you can rotate the master user password by choosing **Rotate secret immediately**. For more information, see [Password management with Amazon Aurora and AWS Secrets Manager](rds-secrets-manager.md).  |  Using the AWS Management Console, [Modifying a DB instance in a DB cluster](#Aurora.Modifying.Instance). Using the AWS CLI, run [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) and set the `--manage-master-user-password \| --no-manage-master-user-password` and `--master-user-secret-kms-key-id` options. To rotate the master user password immediately, set the `--rotate-master-user-password` option. Using the RDS API, call [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) and set the `ManageMasterUserPassword` and `MasterUserSecretKmsKeyId` parameters. To rotate the master user password immediately, set the `RotateMasterUserPassword` parameter to `true`.  |  The entire DB cluster  |  An outage doesn't occur during this change.  | 
|  **Network type** The IP addressing protocols supported by the DB cluster. **IPv4** to specify that resources can communicate with the DB cluster only over the IPv4 addressing protocol. **Dual-stack mode** to specify that resources can communicate with the DB cluster over IPv4, IPv6, or both. Use dual-stack mode if you have any resources that must communicate with your DB cluster over the IPv6 addressing protocol. To use dual-stack mode, make sure that you have at least two subnets spanning two Availability Zones that support both the IPv4 and IPv6 network protocols. Also, make sure that you associate an IPv6 CIDR block with subnets in the DB subnet group you specify. For more information, see [Amazon Aurora IP addressing](USER_VPC.WorkingWithRDSInstanceinaVPC.md#USER_VPC.IP_addressing).  |  Using the AWS Management Console, [Modifying the DB cluster by using the console, CLI, and API](#Aurora.Modifying.Cluster). Using the AWS CLI, run [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) and set the `--network-type` option. Using the RDS API, call [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) and set the `NetworkType` parameter.  |  The entire DB cluster  |  An outage doesn't occur during this change.  | 
|  **New master password** The password for your master user.  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Modifying.html)  |  Using the AWS Management Console, [Modifying a DB instance in a DB cluster](#Aurora.Modifying.Instance). Using the AWS CLI, run [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) and set the `--master-user-password` option. Using the RDS API, call [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) and set the `MasterUserPassword` parameter.  |  The entire DB cluster  |  An outage doesn't occur during this change.  | 
|  **Performance Insights** Whether to enable Performance Insights, a tool that monitors your DB instance load so that you can analyze and troubleshoot your database performance.  For more information, see [Monitoring DB load with Performance Insights on Amazon Aurora](USER_PerfInsights.md).   |  Using the AWS Management Console, [Modifying a DB instance in a DB cluster](#Aurora.Modifying.Instance). Using the AWS CLI, run [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) and set the `--enable-performance-insights\|--no-enable-performance-insights` option. Using the RDS API, call [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html) and set the `EnablePerformanceInsights` parameter.  |  Only the specified DB instance  |  An outage doesn't occur during this change.  | 
|  **Performance Insights AWS KMS key** The AWS KMS key identifier for encryption of Performance Insights data. The KMS key identifier is the Amazon Resource Name (ARN), key identifier, or key alias for the KMS key.  For more information, see [Turning Performance Insights on and off for Aurora](USER_PerfInsights.Enabling.md).   |  Using the AWS Management Console, [Modifying a DB instance in a DB cluster](#Aurora.Modifying.Instance). Using the AWS CLI, run [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) and set the `--performance-insights-kms-key-id` option. Using the RDS API, call [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html) and set the `PerformanceInsightsKMSKeyId` parameter.  |  Only the specified DB instance  |  An outage doesn't occur during this change.  | 
|  **Performance Insights retention period** The amount of time, in days, to retain Performance Insights data. The retention setting is **Default (7 days)**. To retain your performance data for longer, specify 1–24 months. For more information about retention periods, see [Pricing and data retention for Performance Insights](USER_PerfInsights.Overview.cost.md).  For more information, see [Turning Performance Insights on and off for Aurora](USER_PerfInsights.Enabling.md).   |  Using the AWS Management Console, [Modifying a DB instance in a DB cluster](#Aurora.Modifying.Instance). Using the AWS CLI, run [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) and set the `--performance-insights-retention-period` option. Using the RDS API, call [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html) and set the `PerformanceInsightsRetentionPeriod` parameter.  |  Only the specified DB instance  |  An outage doesn't occur during this change.  | 
|  **Promotion tier** A value that specifies the order in which an Aurora Replica is promoted to the primary instance in a DB cluster, after a failure of the existing primary instance.  For more information, see [Fault tolerance for an Aurora DB cluster](Concepts.AuroraHighAvailability.md#Aurora.Managing.FaultTolerance).   |  Using the AWS Management Console, [Modifying a DB instance in a DB cluster](#Aurora.Modifying.Instance). Using the AWS CLI, run [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) and set the `--promotion-tier` option. Using the RDS API, call [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html) and set the `PromotionTier` parameter.  |  Only the specified DB instance  |  An outage doesn't occur during this change.  | 
|  **Public access** **Publicly accessible** to give the DB instance a public IP address, meaning that it's accessible outside the VPC. To be publicly accessible, the DB instance also has to be in a public subnet in the VPC. **Not publicly accessible** to make the DB instance accessible only from inside the VPC. For more information, see [Hiding a DB cluster in a VPC from the internet](USER_VPC.WorkingWithRDSInstanceinaVPC.md#USER_VPC.Hiding).  To connect to a DB instance from outside of its Amazon VPC, the DB instance must be publicly accessible, access must be granted using the inbound rules of the DB instance's security group, and other requirements must be met. For more information, see [Can't connect to Amazon RDS DB instance](CHAP_Troubleshooting.md#CHAP_Troubleshooting.Connecting). If your DB instance isn't publicly accessible, you can also use an AWS Site-to-Site VPN connection or an AWS Direct Connect connection to access it from a private network. For more information, see [Internetwork traffic privacy](inter-network-traffic-privacy.md).  |  Using the AWS Management Console, [Modifying a DB instance in a DB cluster](#Aurora.Modifying.Instance). Using the AWS CLI, run [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) and set the `--publicly-accessible\|--no-publicly-accessible` option. Using the RDS API, call [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html) and set the `PubliclyAccessible` parameter.  |  Only the specified DB instance  |  An outage doesn't occur during this change.  | 
|  **Serverless v2 capacity settings** The database capacity of an Aurora Serverless v2 DB cluster, measured in Aurora Capacity Units (ACUs). For more information, see [Setting the Aurora Serverless v2 capacity range for a cluster](aurora-serverless-v2-administration.md#aurora-serverless-v2-setting-acus).  |  Using the AWS Management Console, [Modifying the DB cluster by using the console, CLI, and API](#Aurora.Modifying.Cluster). Using the AWS CLI, run [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) and set the `--serverless-v2-scaling-configuration` option. Using the RDS API, call [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) and set the `ServerlessV2ScalingConfiguration` parameter.  |  The entire DB cluster  |  An outage doesn't occur during this change. The change occurs immediately, regardless of the **Apply immediately** setting.  | 
|  **Security group** The security group you want associated with the DB cluster.  For more information, see [Controlling access with security groups](Overview.RDSSecurityGroups.md).   |  Using the AWS Management Console, [Modifying the DB cluster by using the console, CLI, and API](#Aurora.Modifying.Cluster). Using the AWS CLI, run [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) and set the `--vpc-security-group-ids` option. Using the RDS API, call [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) and set the `VpcSecurityGroupIds` parameter.  |  The entire DB cluster  |  An outage doesn't occur during this change.  | 
|  **Target Backtrack window** The amount of time you want to be able to backtrack your DB cluster, in seconds. This setting is available only for Aurora MySQL and only if the DB cluster was created with Backtrack enabled.   |  Using the AWS Management Console, [Modifying the DB cluster by using the console, CLI, and API](#Aurora.Modifying.Cluster). Using the AWS CLI, run [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) and set the `--backtrack-window` option. Using the RDS API, call [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) and set the `BacktrackWindow` parameter.  |  The entire DB cluster  |  An outage doesn't occur during this change.  | 
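As a minimal sketch of the CLI workflow described in the table above, the following commands show two common cluster-level modifications. The cluster identifier `my-aurora-cluster` and the capacity values are placeholders, and the commands assume the AWS CLI is configured with credentials and a default AWS Region:

```shell
# Change the cluster's database port. This change causes an outage:
# all DB instances in the cluster are rebooted immediately.
aws rds modify-db-cluster \
    --db-cluster-identifier my-aurora-cluster \
    --port 3307 \
    --apply-immediately

# Adjust the Aurora Serverless v2 capacity range, in ACUs. This change
# takes effect immediately, regardless of the apply-immediately setting.
aws rds modify-db-cluster \
    --db-cluster-identifier my-aurora-cluster \
    --serverless-v2-scaling-configuration MinCapacity=0.5,MaxCapacity=8
```

Each command returns a JSON description of the cluster, including a `PendingModifiedValues` field that you can inspect to confirm which changes are deferred to the next maintenance window.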

## Settings that don't apply to Amazon Aurora DB clusters
Settings that don't apply to Aurora DB clusters

The following settings in the AWS CLI command [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) and the RDS API operation [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) don't apply to Amazon Aurora DB clusters.

**Note**  
You can't use the AWS Management Console to modify these settings for Aurora DB clusters.



| AWS CLI setting | RDS API setting | 
| --- | --- | 
|  `--allocated-storage`  |  `AllocatedStorage`  | 
|  `--auto-minor-version-upgrade \| --no-auto-minor-version-upgrade`  |  `AutoMinorVersionUpgrade`  | 
|  `--db-cluster-instance-class`  |  `DBClusterInstanceClass`  | 
|  `--enable-performance-insights \| --no-enable-performance-insights`  |  `EnablePerformanceInsights`  | 
|  `--iops`  |  `Iops`  | 
|  `--monitoring-interval`  |  `MonitoringInterval`  | 
|  `--monitoring-role-arn`  |  `MonitoringRoleArn`  | 
|  `--option-group-name`  |  `OptionGroupName`  | 
|  `--performance-insights-kms-key-id`  |  `PerformanceInsightsKMSKeyId`  | 
|  `--performance-insights-retention-period`  |  `PerformanceInsightsRetentionPeriod`  | 

## Settings that don't apply to Amazon Aurora DB instances
Settings that don't apply to Aurora DB instances

The following settings in the AWS CLI command [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) and the RDS API operation [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html) don't apply to Amazon Aurora DB instances.

**Note**  
You can't use the AWS Management Console to modify these settings for Aurora DB instances.



| AWS CLI setting | RDS API setting | 
| --- | --- | 
|  `--allocated-storage`  |  `AllocatedStorage`  | 
|  `--allow-major-version-upgrade\|--no-allow-major-version-upgrade`  |  `AllowMajorVersionUpgrade`  | 
|  `--copy-tags-to-snapshot\|--no-copy-tags-to-snapshot`  |  `CopyTagsToSnapshot`  | 
|  `--domain`  |  `Domain`  | 
|  `--db-security-groups`  |  `DBSecurityGroups`  | 
|  `--db-subnet-group-name`  |  `DBSubnetGroupName`  | 
|  `--domain-iam-role-name`  |  `DomainIAMRoleName`  | 
|  `--multi-az\|--no-multi-az`  |  `MultiAZ`  | 
|  `--iops`  |  `Iops`  | 
|  `--license-model`  |  `LicenseModel`  | 
|  `--network-type`  |  `NetworkType`  | 
|  `--option-group-name`  |  `OptionGroupName`  | 
|  `--processor-features`  |  `ProcessorFeatures`  | 
|  `--storage-type`  |  `StorageType`  | 
|  `--tde-credential-arn`  |  `TdeCredentialArn`  | 
|  `--tde-credential-password`  |  `TdeCredentialPassword`  | 
|  `--use-default-processor-features\|--no-use-default-processor-features`  |  `UseDefaultProcessorFeatures`  | 

# Adding Aurora Replicas to a DB cluster
Adding Aurora Replicas<a name="create_instance"></a>

An Aurora DB cluster with replication has one primary DB instance and up to 15 Aurora Replicas. The primary DB instance supports read and write operations, and performs all data modifications to the cluster volume. Aurora Replicas connect to the same storage volume as the primary DB instance, but support read operations only. You use Aurora Replicas to offload read workloads from the primary DB instance. For more information, see [Aurora Replicas](Aurora.Replication.md#Aurora.Replication.Replicas). 

Amazon Aurora Replicas have the following limitations:
+ You can't create an Aurora Replica for an Aurora Serverless v1 DB cluster. Aurora Serverless v1 has a single DB instance that scales up and down automatically to support all database read and write operations. 

  However, you can add reader instances to Aurora Serverless v2 DB clusters. For more information, see [Adding an Aurora Serverless v2 reader](aurora-serverless-v2-administration.md#aurora-serverless-v2-adding-reader).

We recommend that you distribute the primary instance and Aurora Replicas of your Aurora DB cluster over multiple Availability Zones to improve the availability of your DB cluster. For more information, see [Region availability](Concepts.RegionsAndAvailabilityZones.md#Aurora.Overview.Availability).

To remove an Aurora Replica from an Aurora DB cluster, delete the Aurora Replica by following the instructions in [Deleting a DB instance from an Aurora DB cluster](USER_DeleteCluster.md#USER_DeleteInstance).

**Note**  
Amazon Aurora also supports replication with an external database, such as an RDS DB instance. The RDS DB instance must be in the same AWS Region as Amazon Aurora. For more information, see [Replication with Amazon Aurora](Aurora.Replication.md).

You can add Aurora Replicas to a DB cluster using the AWS Management Console, the AWS CLI, or the RDS API.

## Console


**To add an Aurora Replica to a DB cluster**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**, and then select the DB cluster where you want to add the new DB instance. 

1. Make sure that both the cluster and the primary instance are in the **Available** state. If the DB cluster or the primary instance is in a transitional state such as **Creating**, you can't add a replica.

    If the cluster doesn't have a primary instance, create one using the [create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html) AWS CLI command. This situation can arise if you used the CLI to restore a DB cluster snapshot and then view the cluster in the AWS Management Console. 

1. For **Actions**, choose **Add reader**. 

   The **Add reader** page appears.

1. On the **Add reader** page, specify options for your Aurora Replica. The following table shows settings for an Aurora Replica.    
<a name="aurora_replica_settings"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-replicas-adding.html)

1. Choose **Add reader** to create the Aurora Replica.

## AWS CLI


To create an Aurora Replica in your DB cluster, run the [create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html) AWS CLI command. Include the name of the DB cluster as the value of the `--db-cluster-identifier` option. You can optionally specify an Availability Zone for the Aurora Replica using the `--availability-zone` option, as shown in the following examples.

For example, the following command creates a new MySQL 5.7–compatible Aurora Replica named `sample-instance-us-west-2a`.

For Linux, macOS, or Unix:

```
aws rds create-db-instance --db-instance-identifier sample-instance-us-west-2a \
    --db-cluster-identifier sample-cluster --engine aurora-mysql --db-instance-class db.r5.large \
    --availability-zone us-west-2a
```

For Windows:

```
aws rds create-db-instance --db-instance-identifier sample-instance-us-west-2a ^
    --db-cluster-identifier sample-cluster --engine aurora-mysql --db-instance-class db.r5.large ^
    --availability-zone us-west-2a
```

The following command creates a new MySQL 5.6–compatible Aurora Replica named `sample-instance-us-west-2a`.

For Linux, macOS, or Unix:

```
aws rds create-db-instance --db-instance-identifier sample-instance-us-west-2a \
    --db-cluster-identifier sample-cluster --engine aurora --db-instance-class db.r5.large \
    --availability-zone us-west-2a
```

For Windows:

```
aws rds create-db-instance --db-instance-identifier sample-instance-us-west-2a ^
    --db-cluster-identifier sample-cluster --engine aurora --db-instance-class db.r5.large ^
    --availability-zone us-west-2a
```

The following command creates a new PostgreSQL-compatible Aurora Replica named `sample-instance-us-west-2a`.

For Linux, macOS, or Unix:

```
aws rds create-db-instance --db-instance-identifier sample-instance-us-west-2a \
    --db-cluster-identifier sample-cluster --engine aurora-postgresql --db-instance-class db.r5.large \
    --availability-zone us-west-2a
```

For Windows:

```
aws rds create-db-instance --db-instance-identifier sample-instance-us-west-2a ^
    --db-cluster-identifier sample-cluster --engine aurora-postgresql --db-instance-class db.r5.large ^
    --availability-zone us-west-2a
```

## RDS API


To create an Aurora Replica in your DB cluster, call the [CreateDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html) operation. Include the name of the DB cluster as the value of the `DBClusterIdentifier` parameter. You can optionally specify an Availability Zone for the Aurora Replica using the `AvailabilityZone` parameter.

For information about Amazon Aurora Auto Scaling with Aurora Replicas, see the following sections.

**Topics**
+ [

# Amazon Aurora Auto Scaling with Aurora Replicas
](Aurora.Integrating.AutoScaling.md)
+ [

# Adding an auto scaling policy to an Amazon Aurora DB cluster
](Aurora.Integrating.AutoScaling.Add.md)
+ [

# Editing an auto scaling policy for an Amazon Aurora DB cluster
](Aurora.Integrating.AutoScaling.Edit.md)
+ [

# Deleting an auto scaling policy from your Amazon Aurora DB cluster
](Aurora.Integrating.AutoScaling.Delete.md)

# Amazon Aurora Auto Scaling with Aurora Replicas
Auto Scaling with Aurora Replicas

To meet your connectivity and workload requirements, Aurora Auto Scaling dynamically adjusts the number of Aurora Replicas (reader DB instances) provisioned for an Aurora DB cluster. Aurora Auto Scaling is available for both Aurora MySQL and Aurora PostgreSQL. Aurora Auto Scaling enables your Aurora DB cluster to handle sudden increases in connectivity or workload. When the connectivity or workload decreases, Aurora Auto Scaling removes unnecessary Aurora Replicas so that you don't pay for unused provisioned DB instances.

You define and apply a scaling policy to an Aurora DB cluster. The *scaling policy* defines the minimum and maximum number of Aurora Replicas that Aurora Auto Scaling can manage. Based on the policy, Aurora Auto Scaling adjusts the number of Aurora Replicas up or down in response to actual workloads, determined by using Amazon CloudWatch metrics and target values.

**Note**  
Aurora Auto Scaling doesn't apply to the workload on the writer DB instance. Aurora Auto Scaling helps with the workload only on the reader instances.

You can use the AWS Management Console to apply a scaling policy based on a predefined metric. Alternatively, you can use either the AWS CLI or Aurora Auto Scaling API to apply a scaling policy based on a predefined or custom metric.

**Topics**
+ [

## Before you begin
](#Aurora.Integrating.AutoScaling.BYB)
+ [

## Aurora Auto Scaling policies
](#Aurora.Integrating.AutoScaling.Concepts)
+ [

## DB instance IDs and tagging
](#Aurora.Integrating.AutoScaling.Concepts.Tagging)
+ [

## Aurora Auto Scaling and Performance Insights
](#aurora-auto-scaling-pi)

## Before you begin
Before you begin

Before you can use Aurora Auto Scaling with an Aurora DB cluster, you must first create an Aurora DB cluster with a primary (writer) DB instance. For more information about creating an Aurora DB cluster, see [Creating an Amazon Aurora DB cluster](Aurora.CreateInstance.md).

Aurora Auto Scaling only scales a DB cluster if the DB cluster is in the available state.

When Aurora Auto Scaling adds a new Aurora Replica, the new Aurora Replica is the same DB instance class as the one used by the primary instance. For more information about DB instance classes, see [Amazon Aurora DB instance classes](Concepts.DBInstanceClass.md). Also, the promotion tier for new Aurora Replicas is set to the last priority, which is 15 by default. This means that during a failover, a replica with a better priority, such as one created manually, would be promoted first. For more information, see [Fault tolerance for an Aurora DB cluster](Concepts.AuroraHighAvailability.md#Aurora.Managing.FaultTolerance).

Aurora Auto Scaling only removes Aurora Replicas that it created.

To benefit from Aurora Auto Scaling, your applications must support connections to new Aurora Replicas. To do so, we recommend using the Aurora reader endpoint. You can use a driver such as the AWS JDBC Driver. For more information, see [Connecting to an Amazon Aurora DB cluster](Aurora.Connecting.md).

**Note**  
Aurora global databases currently don't support Aurora Auto Scaling for secondary DB clusters.

## Aurora Auto Scaling policies
Aurora Auto Scaling policies

Aurora Auto Scaling uses a scaling policy to adjust the number of Aurora Replicas in an Aurora DB cluster. Aurora Auto Scaling has the following components:
+ A service-linked role
+ A target metric
+ Minimum and maximum capacity
+ A cooldown period

**Topics**
+ [

### Service linked role
](#Aurora.Integrating.AutoScaling.Concepts.SLR)
+ [

### Target metric
](#Aurora.Integrating.AutoScaling.Concepts.TargetMetric)
+ [

### Minimum and maximum capacity
](#Aurora.Integrating.AutoScaling.Concepts.Capacity)
+ [

### Cooldown period
](#Aurora.Integrating.AutoScaling.Concepts.Cooldown)
+ [

### Enable or disable scale-in activities
](#Aurora.Integrating.AutoScaling.Concepts.ScaleIn)
+ [

### Add, edit, or delete auto scaling policies
](#Aurora.Integrating.AutoScaling.Concepts.AddEditDelete)

### Service linked role


Aurora Auto Scaling uses the `AWSServiceRoleForApplicationAutoScaling_RDSCluster` service-linked role. For more information, see [Service-linked roles for Application Auto Scaling](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-service-linked-roles.html) in the *Application Auto Scaling User Guide*.

### Target metric


In this type of policy, a predefined or custom metric and a target value for the metric are specified in a target-tracking scaling policy configuration. Aurora Auto Scaling creates and manages CloudWatch alarms that trigger the scaling policy and calculates the scaling adjustment based on the metric and target value. The scaling policy adds or removes Aurora Replicas as required to keep the metric at, or close to, the specified target value. In addition to keeping the metric close to the target value, a target-tracking scaling policy also adjusts to fluctuations in the metric due to a changing workload. Such a policy also minimizes rapid fluctuations in the number of available Aurora Replicas for your DB cluster.

For example, take a scaling policy that uses the predefined average CPU utilization metric. Such a policy can keep CPU utilization at, or close to, a specified percentage of utilization, such as 40 percent.
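As a rough intuition, target tracking scales capacity in proportion to how far the observed metric is from the target. The following sketch illustrates that proportional calculation; it is an illustration only, not the exact Application Auto Scaling algorithm, and the function name is hypothetical.

```python
import math

def desired_capacity(current_replicas, metric_value, target_value):
    """Illustrative target-tracking calculation: scale the replica count
    in proportion to how far the observed metric is from the target."""
    if current_replicas == 0:
        return 0
    return math.ceil(current_replicas * metric_value / target_value)

# With 4 replicas at 80% average CPU and a 40% target, the proportional
# calculation suggests doubling the reader count to 8.
print(desired_capacity(4, 80.0, 40.0))  # 8
```

With the metric at half the target (for example, 20 percent), the same calculation suggests scaling in to 2 replicas, which is why target tracking tends to hold the metric near the target value.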

**Note**  
 For each Aurora DB cluster, you can create only one Auto Scaling policy for each target metric.

When you configure Aurora Auto Scaling, the target metric value is calculated as the average of the metric across all reader instances in the cluster. This calculation is performed as follows:
+ Includes all reader instances in the Aurora cluster, regardless of whether they’re managed by Auto Scaling or added manually.
+ Includes instances associated with custom endpoints. Membership in a custom endpoint doesn't influence the calculation of the target metric.
+ Does not include the cluster’s writer instance.

The metrics are derived from CloudWatch using the following dimensions:
+ `DBClusterIdentifier`
+ `Role=READER`

For example, consider an Aurora MySQL cluster with the following setup:
+ **Manual instances (not controlled by Auto Scaling)**:
  + Writer with 50% CPU utilization
  + Reader 1 (custom endpoint: `custom-reader-1`) with 90% CPU utilization
  + Reader 2 (custom endpoint: `custom-reader-2`) with 90% CPU utilization
+ **Auto Scaling instance**:
  + Reader 3 (added using Auto Scaling) with 10% CPU utilization

In this scenario, the target metric for the Auto Scaling policy is calculated as follows:

```
Target metric = (CPU utilization of reader 1 + reader 2 + reader 3) / total number of readers

Target metric = (90 + 90 + 10) / 3 = 63.33%
```

The Auto Scaling policy uses this value to evaluate whether to scale in or scale out based on the defined threshold.

Consider the following:
+ Although custom endpoints determine how traffic is routed to specific readers, they don’t exclude readers from the metric calculation.
+ Manual instances are always included in the target metric calculations.
+ To avoid unexpected scaling behavior, make sure that the Auto Scaling configuration accounts for all reader instances in the cluster.
+ If a cluster has no readers, the metric is not calculated, and the Auto Scaling policy alarms remain inactive. For the Auto Scaling policy to function effectively, at least one reader must be present at all times.
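The averaging described above can be reproduced with a short calculation. The following sketch mirrors the example scenario; the instance names and metric values are illustrative, and in practice the service derives the values from CloudWatch using the `DBClusterIdentifier` and `Role=READER` dimensions.

```python
def reader_average(cpu_by_instance, writer_id):
    """Average a metric across all reader instances, excluding the writer.
    Returns None when there are no readers (the metric isn't calculated)."""
    readers = {k: v for k, v in cpu_by_instance.items() if k != writer_id}
    if not readers:
        return None
    return sum(readers.values()) / len(readers)

cpu = {
    "writer": 50.0,            # excluded from the calculation
    "custom-reader-1": 90.0,   # manual reader behind a custom endpoint
    "custom-reader-2": 90.0,   # manual reader behind a custom endpoint
    "autoscaled-reader-3": 10.0,  # reader added by Auto Scaling
}
print(round(reader_average(cpu, "writer"), 2))  # 63.33
```

Note that all three readers count equally, whether they were added manually or by Auto Scaling, which is how the 63.33 percent figure in the example arises.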

### Minimum and maximum capacity


You can specify the maximum number of Aurora Replicas to be managed by Application Auto Scaling. This value must be from 0 to 15, and must be greater than or equal to the value specified for the minimum number of Aurora Replicas.

You can also specify the minimum number of Aurora Replicas to be managed by Application Auto Scaling. This value must be from 0 to 15, and must be less than or equal to the value specified for the maximum number of Aurora Replicas.

There must be at least one reader DB instance for Aurora Auto Scaling to work. If the DB cluster has no reader instance, and you set the minimum capacity to 0, then Aurora Auto Scaling won't work.

**Note**  
The minimum and maximum capacity are set for an Aurora DB cluster. The specified values apply to all of the policies associated with that Aurora DB cluster.
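The effect of the minimum and maximum capacity on a computed replica count amounts to a simple clamp. The following is an illustrative sketch; the function name is hypothetical.

```python
def clamp_capacity(desired, min_capacity, max_capacity):
    """Keep a desired replica count within the registered minimum and
    maximum capacity bounds (each from 0 to 15 for an Aurora cluster)."""
    return max(min_capacity, min(desired, max_capacity))

print(clamp_capacity(20, 1, 8))  # capped at the maximum: 8
print(clamp_capacity(0, 1, 8))   # raised to the minimum: 1
```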

### Cooldown period


You can tune the responsiveness of a target-tracking scaling policy by adding cooldown periods that affect scaling your Aurora DB cluster in and out. A cooldown period blocks subsequent scale-in or scale-out requests until the period expires. These blocks slow the deletion of Aurora Replicas for scale-in requests, and the creation of Aurora Replicas for scale-out requests.

You can specify the following cooldown periods:
+ A scale-in activity reduces the number of Aurora Replicas in your Aurora DB cluster. A scale-in cooldown period specifies the amount of time, in seconds, after a scale-in activity completes before another scale-in activity can start.
+ A scale-out activity increases the number of Aurora Replicas in your Aurora DB cluster. A scale-out cooldown period specifies the amount of time, in seconds, after a scale-out activity completes before another scale-out activity can start.
**Note**  
A scale-out cooldown period is ignored if a subsequent scale-out request is for a larger number of Aurora Replicas than the first request.

If you don't set the scale-in or scale-out cooldown period, the default for each is 300 seconds.
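In essence, a cooldown rejects a new scaling activity until enough time has passed since the previous one in the same direction. The following minimal sketch illustrates the idea; it is not the service's implementation, and the names are hypothetical.

```python
DEFAULT_COOLDOWN = 300  # seconds; the default for both scale-in and scale-out

def can_scale(now, last_activity_end, cooldown=DEFAULT_COOLDOWN):
    """Allow a new scaling activity only after the cooldown period
    following the previous activity has expired."""
    return (now - last_activity_end) >= cooldown

print(can_scale(now=1000, last_activity_end=800))  # 200 s elapsed -> False
print(can_scale(now=1200, last_activity_end=800))  # 400 s elapsed -> True
```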

### Enable or disable scale-in activities


You can enable or disable scale-in activities for a policy. Enabling scale-in activities allows the scaling policy to delete Aurora Replicas. When scale-in activities are enabled, the scale-in cooldown period in the scaling policy applies to scale-in activities. Disabling scale-in activities prevents the scaling policy from deleting Aurora Replicas.

**Note**  
Scale-out activities are always enabled so that the scaling policy can create Aurora Replicas as needed.

### Add, edit, or delete auto scaling policies


You can add, edit, or delete auto scaling policies using the AWS Management Console, AWS CLI, or Application Auto Scaling API. For more information about adding, editing, or deleting auto scaling policies, see the following sections.
+ [Adding an auto scaling policy to an Amazon Aurora DB cluster](Aurora.Integrating.AutoScaling.Add.md)
+ [Editing an auto scaling policy for an Amazon Aurora DB cluster](Aurora.Integrating.AutoScaling.Edit.md)
+ [Deleting an auto scaling policy from your Amazon Aurora DB cluster](Aurora.Integrating.AutoScaling.Delete.md)

## DB instance IDs and tagging


When a replica is added by Aurora Auto Scaling, its DB instance ID is prefixed by `application-autoscaling-`, for example, `application-autoscaling-61aabbcc-4e2f-4c65-b620-ab7421abc123`.

The following tag is automatically added to the DB instance. You can view it on the **Tags** tab of the DB instance detail page.


| Tag | Value | 
| --- | --- | 
| application-autoscaling:resourceId | cluster:mynewcluster-cluster | 

For more information on Amazon RDS resource tags, see [Tagging Amazon Aurora and Amazon RDS resources](USER_Tagging.md).

## Aurora Auto Scaling and Performance Insights


You can use Performance Insights to monitor replicas that have been added by Aurora Auto Scaling, the same as with any Aurora reader DB instance.

For more information on using Performance Insights to monitor Aurora DB clusters, see [Monitoring DB load with Performance Insights on Amazon Aurora](USER_PerfInsights.md).

# Adding an auto scaling policy to an Amazon Aurora DB cluster
Adding an auto scaling policy

You can add a scaling policy using the AWS Management Console, the AWS CLI, or the Application Auto Scaling API.

**Note**  
For an example that adds a scaling policy using CloudFormation, see [Declaring a scaling policy for an Aurora DB cluster](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/quickref-autoscaling.html#w2ab1c19c22c15c21c11) in the *AWS CloudFormation User Guide.*

## Console


You can add a scaling policy to an Aurora DB cluster by using the AWS Management Console.

**To add an auto scaling policy to an Aurora DB cluster**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**. 

1. Choose the Aurora DB cluster that you want to add a policy for.

1. Choose the **Logs & events** tab.

1. In the **Auto scaling policies** section, choose **Add**.

   The **Add Auto Scaling policy** dialog box appears.

1. For **Policy Name**, type the policy name.

1. For the target metric, choose one of the following:
   + **Average CPU utilization of Aurora Replicas** to create a policy based on the average CPU utilization.
   + **Average connections of Aurora Replicas** to create a policy based on the average number of connections to Aurora Replicas.

1. For the target value, type one of the following:
   + If you chose **Average CPU utilization of Aurora Replicas** in the previous step, type the percentage of CPU utilization that you want to maintain on Aurora Replicas.
   + If you chose **Average connections of Aurora Replicas** in the previous step, type the number of connections that you want to maintain.

   Aurora Replicas are added or removed to keep the metric close to the specified value.

1. (Optional) Expand **Additional Configuration** to create a scale-in or scale-out cooldown period.

1. For **Minimum capacity**, type the minimum number of Aurora Replicas that the Aurora Auto Scaling policy is required to maintain.

1. For **Maximum capacity**, type the maximum number of Aurora Replicas the Aurora Auto Scaling policy is required to maintain.

1. Choose **Add policy**.

The following dialog box creates an Auto Scaling policy based on an average CPU utilization of 40 percent. The policy specifies a minimum of 5 Aurora Replicas and a maximum of 15 Aurora Replicas.

![\[Creating an auto scaling policy based on average CPU utilization\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-autoscaling-cpu.png)


The following dialog box creates an auto scaling policy based on an average number of connections of 100. The policy specifies a minimum of two Aurora Replicas and a maximum of eight Aurora Replicas.

![\[Creating an Auto Scaling policy based on average connections\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-autoscaling-connections.png)


## AWS CLI or Application Auto Scaling API


You can apply a scaling policy based on either a predefined or custom metric. To do so, you can use the AWS CLI or the Application Auto Scaling API. The first step is to register your Aurora DB cluster with Application Auto Scaling.

### Registering an Aurora DB cluster


Before you can use Aurora Auto Scaling with an Aurora DB cluster, you register your Aurora DB cluster with Application Auto Scaling. You do so to define the scaling dimension and limits to be applied to that cluster. Application Auto Scaling dynamically scales the Aurora DB cluster along the `rds:cluster:ReadReplicaCount` scalable dimension, which represents the number of Aurora Replicas. 

To register your Aurora DB cluster, you can use either the AWS CLI or the Application Auto Scaling API. 

#### AWS CLI


To register your Aurora DB cluster, use the [register-scalable-target](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/register-scalable-target.html) AWS CLI command with the following parameters:
+ `--service-namespace` – Set this value to `rds`.
+ `--resource-id` – The resource identifier for the Aurora DB cluster. For this parameter, the resource type is `cluster` and the unique identifier is the name of the Aurora DB cluster, for example `cluster:myscalablecluster`.
+ `--scalable-dimension` – Set this value to `rds:cluster:ReadReplicaCount`.
+ `--min-capacity` – The minimum number of reader DB instances to be managed by Application Auto Scaling. For information about the relationship between `--min-capacity`, `--max-capacity`, and the number of DB instances in your cluster, see [Minimum and maximum capacity](Aurora.Integrating.AutoScaling.md#Aurora.Integrating.AutoScaling.Concepts.Capacity).
+ `--max-capacity` – The maximum number of reader DB instances to be managed by Application Auto Scaling. For information about the relationship between `--min-capacity`, `--max-capacity`, and the number of DB instances in your cluster, see [Minimum and maximum capacity](Aurora.Integrating.AutoScaling.md#Aurora.Integrating.AutoScaling.Concepts.Capacity).

**Example**  
In the following example, you register an Aurora DB cluster named `myscalablecluster`. The registration indicates that the DB cluster should be dynamically scaled to have from one to eight Aurora Replicas.  
For Linux, macOS, or Unix:  

```
aws application-autoscaling register-scalable-target \
    --service-namespace rds \
    --resource-id cluster:myscalablecluster \
    --scalable-dimension rds:cluster:ReadReplicaCount \
    --min-capacity 1 \
    --max-capacity 8
```
For Windows:  

```
aws application-autoscaling register-scalable-target ^
    --service-namespace rds ^
    --resource-id cluster:myscalablecluster ^
    --scalable-dimension rds:cluster:ReadReplicaCount ^
    --min-capacity 1 ^
    --max-capacity 8
```

#### Application Auto Scaling API


To register your Aurora DB cluster with Application Auto Scaling, use the [RegisterScalableTarget](https://docs.aws.amazon.com/ApplicationAutoScaling/latest/APIReference/API_RegisterScalableTarget.html) Application Auto Scaling API operation with the following parameters:
+ `ServiceNamespace` – Set this value to `rds`.
+ `ResourceId` – The resource identifier for the Aurora DB cluster. For this parameter, the resource type is `cluster` and the unique identifier is the name of the Aurora DB cluster, for example `cluster:myscalablecluster`.
+ `ScalableDimension` – Set this value to `rds:cluster:ReadReplicaCount`.
+ `MinCapacity` – The minimum number of reader DB instances to be managed by Application Auto Scaling. For information about the relationship between `MinCapacity`, `MaxCapacity`, and the number of DB instances in your cluster, see [Minimum and maximum capacity](Aurora.Integrating.AutoScaling.md#Aurora.Integrating.AutoScaling.Concepts.Capacity).
+ `MaxCapacity` – The maximum number of reader DB instances to be managed by Application Auto Scaling. For information about the relationship between `MinCapacity`, `MaxCapacity`, and the number of DB instances in your cluster, see [Minimum and maximum capacity](Aurora.Integrating.AutoScaling.md#Aurora.Integrating.AutoScaling.Concepts.Capacity).

**Example**  
In the following example, you register an Aurora DB cluster named `myscalablecluster` with the Application Auto Scaling API. This registration indicates that the DB cluster should be dynamically scaled to have from one to eight Aurora Replicas.  

```
POST / HTTP/1.1
Host: autoscaling.us-east-2.amazonaws.com
Accept-Encoding: identity
Content-Length: 219
X-Amz-Target: AnyScaleFrontendService.RegisterScalableTarget
X-Amz-Date: 20160506T182145Z
User-Agent: aws-cli/1.10.23 Python/2.7.11 Darwin/15.4.0 botocore/1.4.8
Content-Type: application/x-amz-json-1.1
Authorization: AUTHPARAMS

{
    "ServiceNamespace": "rds",
    "ResourceId": "cluster:myscalablecluster",
    "ScalableDimension": "rds:cluster:ReadReplicaCount",
    "MinCapacity": 1,
    "MaxCapacity": 8
}
```

### Defining a scaling policy for an Aurora DB cluster
Defining a scaling policy

A target-tracking scaling policy configuration is represented by a JSON block in which the metrics and target values are defined. You can save a scaling policy configuration as a JSON block in a text file. You use that text file when invoking the AWS CLI or the Application Auto Scaling API. For more information about policy configuration syntax, see [TargetTrackingScalingPolicyConfiguration](https://docs.aws.amazon.com/ApplicationAutoScaling/latest/APIReference/API_TargetTrackingScalingPolicyConfiguration.html) in the *Application Auto Scaling API Reference*.
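You can also generate the configuration file programmatically. The following sketch writes a predefined-metric target-tracking configuration to a file named `config.json`; the 40 percent target value is illustrative, and the file name is simply an assumption for use with the `--target-tracking-scaling-policy-configuration file://config.json` CLI option.

```python
import json

# Illustrative target-tracking configuration: keep the average reader
# CPU utilization near 40 percent.
config = {
    "TargetValue": 40.0,
    "PredefinedMetricSpecification": {
        "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
    },
}

# Write the JSON block to the text file referenced by the CLI command.
with open("config.json", "w") as f:
    json.dump(config, f, indent=4)
```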

 The following options are available for defining a target-tracking scaling policy configuration.

**Topics**
+ [

#### Using a predefined metric
](#Aurora.Integrating.AutoScaling.AddCode.DefineScalingPolicy.Predefined)
+ [

#### Using a custom metric
](#Aurora.Integrating.AutoScaling.AddCode.DefineScalingPolicy.Custom)
+ [

#### Using cooldown periods
](#Aurora.Integrating.AutoScaling.AddCode.DefineScalingPolicy.Cooldown)
+ [

#### Disabling scale-in activity
](#Aurora.Integrating.AutoScaling.AddCode.DefineScalingPolicy.ScaleIn)

#### Using a predefined metric


By using predefined metrics, you can quickly define a target-tracking scaling policy for an Aurora DB cluster that works well with both target tracking and dynamic scaling in Aurora Auto Scaling. 

Currently, Aurora supports the following predefined metrics in Aurora Auto Scaling:
+ **RDSReaderAverageCPUUtilization** – The average value of the `CPUUtilization` metric in CloudWatch across all Aurora Replicas in the Aurora DB cluster.
+ **RDSReaderAverageDatabaseConnections** – The average value of the `DatabaseConnections` metric in CloudWatch across all Aurora Replicas in the Aurora DB cluster.

For more information about the `CPUUtilization` and `DatabaseConnections` metrics, see [Amazon CloudWatch metrics for Amazon Aurora](Aurora.AuroraMonitoring.Metrics.md).

To use a predefined metric in your scaling policy, you create a target tracking configuration for your scaling policy. This configuration must include a `PredefinedMetricSpecification` for the predefined metric and a `TargetValue` for the target value of that metric.

**Example**  
The following example describes a typical policy configuration for target-tracking scaling for an Aurora DB cluster. In this configuration, the `RDSReaderAverageCPUUtilization` predefined metric is used to adjust the Aurora DB cluster based on an average CPU utilization of 40 percent across all Aurora Replicas.  

```
{
    "TargetValue": 40.0,
    "PredefinedMetricSpecification":
    {
        "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
    }
}
```

#### Using a custom metric


By using custom metrics, you can define a target-tracking scaling policy that meets your custom requirements. You can define a custom metric based on any Aurora metric that changes in proportion to scaling. 

Not all Aurora metrics work for target tracking. The metric must be a valid utilization metric that describes how busy an instance is. The value of the metric must increase or decrease in proportion to the number of Aurora Replicas in the Aurora DB cluster. This proportional relationship is necessary so that the metric data can be used to proportionally scale the number of Aurora Replicas out or in.

**Example**  
The following example describes a target-tracking configuration for a scaling policy. In this configuration, a custom metric adjusts an Aurora DB cluster based on an average CPU utilization of 50 percent across all Aurora Replicas in an Aurora DB cluster named `my-db-cluster`.  

```
{
    "TargetValue": 50,
    "CustomizedMetricSpecification":
    {
        "MetricName": "CPUUtilization",
        "Namespace": "AWS/RDS",
        "Dimensions": [
            {"Name": "DBClusterIdentifier","Value": "my-db-cluster"},
            {"Name": "Role","Value": "READER"}
        ],
        "Statistic": "Average",
        "Unit": "Percent"
    }
}
```

#### Using cooldown periods


You can specify a value, in seconds, for `ScaleOutCooldown` to add a cooldown period for scaling out your Aurora DB cluster. Similarly, you can add a value, in seconds, for `ScaleInCooldown` to add a cooldown period for scaling in your Aurora DB cluster. For more information about `ScaleInCooldown` and `ScaleOutCooldown`, see [TargetTrackingScalingPolicyConfiguration](https://docs.aws.amazon.com/ApplicationAutoScaling/latest/APIReference/API_TargetTrackingScalingPolicyConfiguration.html) in the *Application Auto Scaling API Reference*.

**Example**  
The following example describes a target-tracking configuration for a scaling policy. In this configuration, the `RDSReaderAverageCPUUtilization` predefined metric is used to adjust an Aurora DB cluster based on an average CPU utilization of 40 percent across all Aurora Replicas in that Aurora DB cluster. The configuration provides a scale-in cooldown period of 10 minutes and a scale-out cooldown period of 5 minutes.  

```
{
    "TargetValue": 40.0,
    "PredefinedMetricSpecification":
    {
        "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
    },
    "ScaleInCooldown": 600,
    "ScaleOutCooldown": 300
}
```

#### Disabling scale-in activity


You can prevent the target-tracking scaling policy configuration from scaling in your Aurora DB cluster by disabling scale-in activity. Disabling scale-in activity prevents the scaling policy from deleting Aurora Replicas, while still allowing the scaling policy to create them as needed.

You can specify a Boolean value for `DisableScaleIn` to enable or disable scale-in activity for your Aurora DB cluster. For more information about `DisableScaleIn`, see [TargetTrackingScalingPolicyConfiguration](https://docs.aws.amazon.com/ApplicationAutoScaling/latest/APIReference/API_TargetTrackingScalingPolicyConfiguration.html) in the *Application Auto Scaling API Reference*.

**Example**  
The following example describes a target-tracking configuration for a scaling policy. In this configuration, the `RDSReaderAverageCPUUtilization` predefined metric adjusts an Aurora DB cluster based on an average CPU utilization of 40 percent across all Aurora Replicas in that Aurora DB cluster. The configuration disables scale-in activity for the scaling policy.  

```
{
    "TargetValue": 40.0,
    "PredefinedMetricSpecification":
    {
        "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
    },
    "DisableScaleIn": true
}
```

### Applying a scaling policy to an Aurora DB cluster
Applying a scaling policy

After registering your Aurora DB cluster with Application Auto Scaling and defining a scaling policy, you apply the scaling policy to the registered Aurora DB cluster. To apply a scaling policy to an Aurora DB cluster, you can use the AWS CLI or the Application Auto Scaling API. 

#### AWS CLI


To apply a scaling policy to your Aurora DB cluster, use the [https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/put-scaling-policy.html](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/put-scaling-policy.html) AWS CLI command with the following parameters:
+ `--policy-name` – The name of the scaling policy.
+ `--policy-type` – Set this value to `TargetTrackingScaling`.
+ `--resource-id` – The resource identifier for the Aurora DB cluster. For this parameter, the resource type is `cluster` and the unique identifier is the name of the Aurora DB cluster, for example `cluster:myscalablecluster`.
+ `--service-namespace` – Set this value to `rds`.
+ `--scalable-dimension` – Set this value to `rds:cluster:ReadReplicaCount`.
+ `--target-tracking-scaling-policy-configuration` – The target-tracking scaling policy configuration to use for the Aurora DB cluster.

**Example**  
In the following example, you apply a target-tracking scaling policy named `myscalablepolicy` to an Aurora DB cluster named `myscalablecluster` with Application Auto Scaling. To do so, you use a policy configuration saved in a file named `config.json`.  
For Linux, macOS, or Unix:  

```
aws application-autoscaling put-scaling-policy \
    --policy-name myscalablepolicy \
    --policy-type TargetTrackingScaling \
    --resource-id cluster:myscalablecluster \
    --service-namespace rds \
    --scalable-dimension rds:cluster:ReadReplicaCount \
    --target-tracking-scaling-policy-configuration file://config.json
```
For Windows:  

```
aws application-autoscaling put-scaling-policy ^
    --policy-name myscalablepolicy ^
    --policy-type TargetTrackingScaling ^
    --resource-id cluster:myscalablecluster ^
    --service-namespace rds ^
    --scalable-dimension rds:cluster:ReadReplicaCount ^
    --target-tracking-scaling-policy-configuration file://config.json
```

#### Application Auto Scaling API


To apply a scaling policy to your Aurora DB cluster with the Application Auto Scaling API, use the [https://docs.aws.amazon.com/ApplicationAutoScaling/latest/APIReference/API_PutScalingPolicy.html](https://docs.aws.amazon.com/ApplicationAutoScaling/latest/APIReference/API_PutScalingPolicy.html) Application Auto Scaling API operation with the following parameters:
+ `PolicyName` – The name of the scaling policy.
+ `ServiceNamespace` – Set this value to `rds`.
+ `ResourceId` – The resource identifier for the Aurora DB cluster. For this parameter, the resource type is `cluster` and the unique identifier is the name of the Aurora DB cluster, for example `cluster:myscalablecluster`.
+ `ScalableDimension` – Set this value to `rds:cluster:ReadReplicaCount`.
+ `PolicyType` – Set this value to `TargetTrackingScaling`.
+ `TargetTrackingScalingPolicyConfiguration` – The target-tracking scaling policy configuration to use for the Aurora DB cluster.

**Example**  
In the following example, you apply a target-tracking scaling policy named `myscalablepolicy` to an Aurora DB cluster named `myscalablecluster` with Application Auto Scaling. You use a policy configuration based on the `RDSReaderAverageCPUUtilization` predefined metric.  

```
POST / HTTP/1.1
Host: autoscaling.us-east-2.amazonaws.com
Accept-Encoding: identity
Content-Length: 219
X-Amz-Target: AnyScaleFrontendService.PutScalingPolicy
X-Amz-Date: 20160506T182145Z
User-Agent: aws-cli/1.10.23 Python/2.7.11 Darwin/15.4.0 botocore/1.4.8
Content-Type: application/x-amz-json-1.1
Authorization: AUTHPARAMS

{
    "PolicyName": "myscalablepolicy",
    "ServiceNamespace": "rds",
    "ResourceId": "cluster:myscalablecluster",
    "ScalableDimension": "rds:cluster:ReadReplicaCount",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 40.0,
        "PredefinedMetricSpecification":
        {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        }
    }
}
```
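If you script these calls with an AWS SDK instead of raw HTTP, the same parameters apply. The following Python sketch assembles the `PutScalingPolicy` parameters shown above; the helper name `build_put_scaling_policy_args` is illustrative, and the boto3 call itself is shown only in a comment because it requires AWS credentials.

```python
import json

# Illustrative helper: assemble the PutScalingPolicy parameters for an
# Aurora DB cluster's read-replica count, matching the CLI and API examples.
def build_put_scaling_policy_args(cluster_id, policy_name, target_cpu,
                                  scale_in_cooldown=600, scale_out_cooldown=300):
    return {
        "PolicyName": policy_name,
        "ServiceNamespace": "rds",
        "ResourceId": f"cluster:{cluster_id}",
        "ScalableDimension": "rds:cluster:ReadReplicaCount",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingScalingPolicyConfiguration": {
            "TargetValue": target_cpu,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
            },
            "ScaleInCooldown": scale_in_cooldown,
            "ScaleOutCooldown": scale_out_cooldown,
        },
    }

args = build_put_scaling_policy_args("myscalablecluster", "myscalablepolicy", 40.0)
print(json.dumps(args, indent=4))
# To apply the policy with boto3 (requires AWS credentials):
#   boto3.client("application-autoscaling").put_scaling_policy(**args)
```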

# Editing an auto scaling policy for an Amazon Aurora DB cluster
Editing an auto scaling policy

You can edit a scaling policy using the AWS Management Console, the AWS CLI, or the Application Auto Scaling API.

## Console


You can edit a scaling policy by using the AWS Management Console.

**To edit an auto scaling policy for an Aurora DB cluster**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**. 

1. Choose the Aurora DB cluster whose auto scaling policy you want to edit.

1. Choose the **Logs & events** tab.

1. In the **Auto scaling policies** section, choose the auto scaling policy, and then choose **Edit**.

1. Make changes to the policy.

1. Choose **Save**.

The following is a sample **Edit Auto Scaling policy** dialog box.

![\[Editing an auto scaling policy based on average CPU utilization\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-autoscaling-edit-cpu.png)


## AWS CLI or Application Auto Scaling API


You can use the AWS CLI or the Application Auto Scaling API to edit a scaling policy in the same way that you apply a scaling policy:
+ When using the AWS CLI, specify the name of the policy you want to edit in the `--policy-name` parameter. Specify new values for the parameters you want to change.
+ When using the Application Auto Scaling API, specify the name of the policy you want to edit in the `PolicyName` parameter. Specify new values for the parameters you want to change.

For more information, see [Applying a scaling policy to an Aurora DB cluster](Aurora.Integrating.AutoScaling.Add.md#Aurora.Integrating.AutoScaling.AddCode.ApplyScalingPolicy).

# Deleting an auto scaling policy from your Amazon Aurora DB cluster
Deleting an auto scaling policy

You can delete a scaling policy using the AWS Management Console, the AWS CLI, or the Application Auto Scaling API.

## Console


You can delete a scaling policy by using the AWS Management Console.

**To delete an auto scaling policy for an Aurora DB cluster**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**. 

1. Choose the Aurora DB cluster whose auto scaling policy you want to delete.

1. Choose the **Logs & events** tab.

1. In the **Auto scaling policies** section, choose the auto scaling policy, and then choose **Delete**.

## AWS CLI


To delete a scaling policy from your Aurora DB cluster, use the [https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/delete-scaling-policy.html](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/delete-scaling-policy.html) AWS CLI command with the following parameters:
+ `--policy-name` – The name of the scaling policy.
+ `--resource-id` – The resource identifier for the Aurora DB cluster. For this parameter, the resource type is `cluster` and the unique identifier is the name of the Aurora DB cluster, for example `cluster:myscalablecluster`.
+ `--service-namespace` – Set this value to `rds`.
+ `--scalable-dimension` – Set this value to `rds:cluster:ReadReplicaCount`.

**Example**  
In the following example, you delete a target-tracking scaling policy named `myscalablepolicy` from an Aurora DB cluster named `myscalablecluster`.  
For Linux, macOS, or Unix:  

```
aws application-autoscaling delete-scaling-policy \
    --policy-name myscalablepolicy \
    --resource-id cluster:myscalablecluster \
    --service-namespace rds \
    --scalable-dimension rds:cluster:ReadReplicaCount
```
For Windows:  

```
aws application-autoscaling delete-scaling-policy ^
    --policy-name myscalablepolicy ^
    --resource-id cluster:myscalablecluster ^
    --service-namespace rds ^
    --scalable-dimension rds:cluster:ReadReplicaCount
```

## Application Auto Scaling API


To delete a scaling policy from your Aurora DB cluster, use the [https://docs.aws.amazon.com/ApplicationAutoScaling/latest/APIReference/API_DeleteScalingPolicy.html](https://docs.aws.amazon.com/ApplicationAutoScaling/latest/APIReference/API_DeleteScalingPolicy.html) Application Auto Scaling API operation with the following parameters:
+ `PolicyName` – The name of the scaling policy.
+ `ServiceNamespace` – Set this value to `rds`.
+ `ResourceId` – The resource identifier for the Aurora DB cluster. For this parameter, the resource type is `cluster` and the unique identifier is the name of the Aurora DB cluster, for example `cluster:myscalablecluster`.
+ `ScalableDimension` – Set this value to `rds:cluster:ReadReplicaCount`.

**Example**  
In the following example, you delete a target-tracking scaling policy named `myscalablepolicy` from an Aurora DB cluster named `myscalablecluster` with the Application Auto Scaling API.  

```
POST / HTTP/1.1
Host: autoscaling.us-east-2.amazonaws.com
Accept-Encoding: identity
Content-Length: 219
X-Amz-Target: AnyScaleFrontendService.DeleteScalingPolicy
X-Amz-Date: 20160506T182145Z
User-Agent: aws-cli/1.10.23 Python/2.7.11 Darwin/15.4.0 botocore/1.4.8
Content-Type: application/x-amz-json-1.1
Authorization: AUTHPARAMS

{
    "PolicyName": "myscalablepolicy",
    "ServiceNamespace": "rds",
    "ResourceId": "cluster:myscalablecluster",
    "ScalableDimension": "rds:cluster:ReadReplicaCount"
}
```
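The same deletion can be scripted with an AWS SDK. The sketch below builds the `DeleteScalingPolicy` parameters in Python; the helper name is illustrative, and the boto3 call appears only in a comment because it requires AWS credentials.

```python
import json

# Illustrative helper: assemble the DeleteScalingPolicy parameters for an
# Aurora DB cluster's read-replica count dimension.
def build_delete_scaling_policy_args(cluster_id, policy_name):
    return {
        "PolicyName": policy_name,
        "ServiceNamespace": "rds",
        "ResourceId": f"cluster:{cluster_id}",
        "ScalableDimension": "rds:cluster:ReadReplicaCount",
    }

args = build_delete_scaling_policy_args("myscalablecluster", "myscalablepolicy")
print(json.dumps(args, indent=4))
# To delete the policy with boto3 (requires AWS credentials):
#   boto3.client("application-autoscaling").delete_scaling_policy(**args)
```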

# Managing performance and scaling for Aurora DB clusters
Managing performance and scaling

You can use the following options to manage performance and scaling for Aurora DB clusters and DB instances:

**Topics**
+ [

## Storage scaling
](#Aurora.Managing.Performance.StorageScaling)
+ [

## Instance scaling
](#Aurora.Managing.Performance.InstanceScaling)
+ [

## Read scaling
](#Aurora.Managing.Performance.ReadScaling)
+ [

## Managing connections
](#Aurora.Managing.MaxConnections)
+ [

## Managing query execution plans
](#Aurora.Managing.Optimizing)

## Storage scaling


Aurora storage automatically scales with the data in your cluster volume. As your data grows, your cluster volume storage expands up to a maximum size that depends on the DB engine version. For information about maximum Aurora cluster volume sizes for each engine version, see [Amazon Aurora size limits](CHAP_Limits.md#RDS_Limits.FileSize.Aurora). To learn what kinds of data are included in the cluster volume, see [Amazon Aurora storage](Aurora.Overview.StorageReliability.md).

The size of your cluster volume is evaluated on an hourly basis to determine your storage costs. For pricing information, see the [Aurora pricing page](https://aws.amazon.com/rds/aurora/pricing).

Even though an Aurora cluster volume can scale up in size to many tebibytes, you are only charged for the space that you use in the volume. The mechanism for determining billed storage space depends on the version of your Aurora cluster.
+ When Aurora data is removed from the cluster volume, the overall billed space decreases by a comparable amount. This dynamic resizing behavior happens when underlying tablespaces are dropped or reorganized to require less space. Thus, you can reduce storage charges by dropping tables and databases that you no longer need. Dynamic resizing applies to certain Aurora versions. The following are the Aurora versions where the cluster volume dynamically resizes as you remove data:    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Performance.html)
+ In Aurora versions lower than those in the preceding list, the cluster volume can reuse space that's freed up when you remove data, but the volume itself never decreases in size.

Dynamic resizing applies to operations that physically remove or resize tablespaces within the cluster volume. Thus, it applies to SQL statements such as `DROP TABLE`, `DROP DATABASE`, `TRUNCATE TABLE`, and `ALTER TABLE ... DROP PARTITION`. It doesn't apply to deleting rows using the `DELETE` statement. If you delete a large number of rows from a table, you can run the Aurora MySQL `OPTIMIZE TABLE` statement or use the Aurora PostgreSQL `pg_repack` extension afterward to reorganize the table and dynamically resize the cluster volume.

For Aurora MySQL, the following considerations apply:
+ After you upgrade your DB cluster to a DB engine version that supports dynamic resizing, and when the feature is enabled in that specific AWS Region, any space that's later freed by certain SQL statements, such as `DROP TABLE`, is reclaimable.

  If the feature is explicitly disabled in a particular AWS Region, the space might only be reusable—and not reclaimable—even on versions that support dynamic resizing.

  The feature was enabled for specific DB engine versions (1.23.0–1.23.4, 2.09.0–2.09.3, and 2.10.0) between November 2020 and March 2022, and is enabled by default on any subsequent versions.
+ A table is stored internally in one or more contiguous fragments of varying sizes. While running `TRUNCATE TABLE` operations, the space corresponding to the first fragment is reusable and not reclaimable. Other fragments are reclaimable. During `DROP TABLE` operations, space corresponding to the entire tablespace is reclaimable.
+ The `innodb_file_per_table` parameter affects how table storage is organized. When tables are part of the system tablespace, dropping the table doesn't reduce the size of the system tablespace. Thus, make sure to set `innodb_file_per_table` to 1 for Aurora MySQL DB clusters to take full advantage of dynamic resizing.
+ In version 2.11 and higher, the InnoDB temporary tablespace is dropped and re-created on restart. This releases the space occupied by the temporary tablespace to the system, and then the cluster volume resizes. To take full advantage of the dynamic resizing feature, we recommend that you upgrade your DB cluster to version 2.11 or higher.

**Note**  
The dynamic resizing feature doesn't reclaim space immediately when tables in tablespaces are dropped, but gradually at a rate of approximately 10 TB per day. Space in the system tablespace isn't reclaimed, because the system tablespace is never removed. Unreclaimed free space in a tablespace is reused when an operation needs space in that tablespace. The dynamic resizing feature can reclaim storage space only when the cluster is in an available state.
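As a rough back-of-the-envelope check, you can estimate from the approximate rate above how long reclamation might take after dropping large tablespaces. This is an illustration only; the actual rate can vary.

```python
# Rough estimate only: the documentation above states that dynamic resizing
# reclaims space gradually, at approximately 10 TB per day.
RECLAIM_TB_PER_DAY = 10

def days_to_reclaim(freed_tb):
    """Approximate days for dynamic resizing to reclaim freed_tb terabytes."""
    return freed_tb / RECLAIM_TB_PER_DAY

# Dropping tablespaces totaling ~25 TB might take about 2.5 days to reclaim.
print(days_to_reclaim(25))
```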

You can check how much storage space a cluster is using by monitoring the `VolumeBytesUsed` metric in CloudWatch. For more information on storage billing, see [How Aurora data storage is billed](Aurora.Overview.StorageReliability.md#aurora-storage-data-billing).
+ In the AWS Management Console, you can see this figure in a chart by viewing the `Monitoring` tab on the details page for the cluster.
+ With the AWS CLI, you can run a command similar to the following Linux example. Substitute your own values for the start and end times and the name of the cluster.

  ```
  aws cloudwatch get-metric-statistics --metric-name "VolumeBytesUsed" \
    --start-time "$(date -d '6 hours ago')" --end-time "$(date -d 'now')" --period 60 \
    --namespace "AWS/RDS" \
    --statistics Average Maximum Minimum \
    --dimensions Name=DBClusterIdentifier,Value=my_cluster_identifier
  ```

   That command produces output similar to the following. 

  ```
  {
      "Label": "VolumeBytesUsed",
      "Datapoints": [
          {
              "Timestamp": "2020-08-04T21:25:00+00:00",
              "Average": 182871982080.0,
              "Minimum": 182871982080.0,
              "Maximum": 182871982080.0,
              "Unit": "Bytes"
          }
      ]
  }
  ```
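If you collect this JSON programmatically, a small script can pull out the most recent datapoint and convert it to gibibytes. The following Python sketch parses a payload shaped like the output above; the payload is embedded here only for illustration.

```python
import json

# Sample payload mirroring the `aws cloudwatch get-metric-statistics` output
# shown above. In practice you would read this from the command's stdout.
payload = json.loads("""
{
    "Label": "VolumeBytesUsed",
    "Datapoints": [
        {
            "Timestamp": "2020-08-04T21:25:00+00:00",
            "Average": 182871982080.0,
            "Minimum": 182871982080.0,
            "Maximum": 182871982080.0,
            "Unit": "Bytes"
        }
    ]
}
""")

# Datapoints are returned in arbitrary order, so pick the latest by timestamp.
latest = max(payload["Datapoints"], key=lambda d: d["Timestamp"])
gib = latest["Average"] / 1024**3
print(f"{gib:.1f} GiB used")
```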

The following examples show how you might track storage usage for an Aurora cluster over time using AWS CLI commands on a Linux system. The `--start-time` and `--end-time` parameters define the overall time interval as one day. The `--period` parameter requests the measurements at one-hour intervals. It doesn't make sense to choose a very small `--period` value, because the metrics are collected at intervals, not continuously. Also, Aurora storage operations sometimes continue in the background for some time after the relevant SQL statement finishes.

The first example returns output in the default JSON format. The data points are returned in arbitrary order, not sorted by timestamp. You might import this JSON data into a charting tool to do sorting and visualization.

```
$ aws cloudwatch get-metric-statistics --metric-name "VolumeBytesUsed" \
  --start-time "$(date -d '1 day ago')" --end-time "$(date -d 'now')" --period 3600 \
  --namespace "AWS/RDS" --statistics Maximum --dimensions Name=DBClusterIdentifier,Value=my_cluster_id
{
    "Label": "VolumeBytesUsed",
    "Datapoints": [
        {
            "Timestamp": "2020-08-04T19:40:00+00:00",
            "Maximum": 182872522752.0,
            "Unit": "Bytes"
        },
        {
            "Timestamp": "2020-08-05T00:40:00+00:00",
            "Maximum": 198573719552.0,
            "Unit": "Bytes"
        },
        {
            "Timestamp": "2020-08-05T05:40:00+00:00",
            "Maximum": 206827454464.0,
            "Unit": "Bytes"
        },
        {
            "Timestamp": "2020-08-04T17:40:00+00:00",
            "Maximum": 182872522752.0,
            "Unit": "Bytes"
        },
... output omitted ...
```

This example returns the same data as the previous one. The `--output` parameter represents the data in compact plain text format. The `aws cloudwatch` command pipes its output to the `sort` command. The `-k 3` parameter of the `sort` command sorts the output by the third field, which is the timestamp in Coordinated Universal Time (UTC) format.

```
$ aws cloudwatch get-metric-statistics --metric-name "VolumeBytesUsed" \
  --start-time "$(date -d '1 day ago')" --end-time "$(date -d 'now')" --period 3600 \
  --namespace "AWS/RDS" --statistics Maximum --dimensions Name=DBClusterIdentifier,Value=my_cluster_id \
  --output text | sort -k 3
VolumeBytesUsed
DATAPOINTS  182872522752.0  2020-08-04T17:41:00+00:00 Bytes
DATAPOINTS  182872522752.0  2020-08-04T18:41:00+00:00 Bytes
DATAPOINTS  182872522752.0  2020-08-04T19:41:00+00:00 Bytes
DATAPOINTS  182872522752.0  2020-08-04T20:41:00+00:00 Bytes
DATAPOINTS  187667791872.0  2020-08-04T21:41:00+00:00 Bytes
DATAPOINTS  190981029888.0  2020-08-04T22:41:00+00:00 Bytes
DATAPOINTS  195587244032.0  2020-08-04T23:41:00+00:00 Bytes
DATAPOINTS  201048915968.0  2020-08-05T00:41:00+00:00 Bytes
DATAPOINTS  205368492032.0  2020-08-05T01:41:00+00:00 Bytes
DATAPOINTS  206827454464.0  2020-08-05T02:41:00+00:00 Bytes
DATAPOINTS  206827454464.0  2020-08-05T03:41:00+00:00 Bytes
DATAPOINTS  206827454464.0  2020-08-05T04:41:00+00:00 Bytes
DATAPOINTS  206827454464.0  2020-08-05T05:41:00+00:00 Bytes
DATAPOINTS  206827454464.0  2020-08-05T06:41:00+00:00 Bytes
DATAPOINTS  206827454464.0  2020-08-05T07:41:00+00:00 Bytes
DATAPOINTS  206827454464.0  2020-08-05T08:41:00+00:00 Bytes
DATAPOINTS  206827454464.0  2020-08-05T09:41:00+00:00 Bytes
DATAPOINTS  206827454464.0  2020-08-05T10:41:00+00:00 Bytes
DATAPOINTS  206827454464.0  2020-08-05T11:41:00+00:00 Bytes
DATAPOINTS  206827454464.0  2020-08-05T12:41:00+00:00 Bytes
DATAPOINTS  206827454464.0  2020-08-05T13:41:00+00:00 Bytes
DATAPOINTS  206827454464.0  2020-08-05T14:41:00+00:00 Bytes
DATAPOINTS  206833664000.0  2020-08-05T15:41:00+00:00 Bytes
DATAPOINTS  206833664000.0  2020-08-05T16:41:00+00:00 Bytes
```

The sorted output shows how much storage was used at the start and end of the monitoring period. You can also find the points during that period when Aurora allocated more storage for the cluster. The following example uses Linux commands to reformat the starting and ending `VolumeBytesUsed` values as gigabytes (GB) and as gibibytes (GiB). Gigabytes represent units measured in powers of 10 and are commonly used in discussions of storage for rotational hard drives. Gibibytes represent units measured in powers of 2. Aurora storage measurements and limits are typically stated in the power-of-2 units, such as gibibytes and tebibytes.

```
$ GiB=$((1024*1024*1024))
$ GB=$((1000*1000*1000))
$ echo "Start: $((182872522752/$GiB)) GiB, End: $((206833664000/$GiB)) GiB"
Start: 170 GiB, End: 192 GiB
$ echo "Start: $((182872522752/$GB)) GB, End: $((206833664000/$GB)) GB"
Start: 182 GB, End: 206 GB
```
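The same conversion works in any language. For example, this Python sketch reproduces the shell arithmetic above using integer division:

```python
# Convert the starting and ending VolumeBytesUsed values from the sorted
# report into binary gibibytes (powers of 2) and decimal gigabytes (powers of 10).
GIB = 1024 ** 3
GB = 1000 ** 3

start, end = 182872522752, 206833664000
print(f"Start: {start // GIB} GiB, End: {end // GIB} GiB")
print(f"Start: {start // GB} GB, End: {end // GB} GB")
```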

The `VolumeBytesUsed` metric tells you how much storage in the cluster is incurring charges. Thus, it's best to minimize this number when practical. However, this metric doesn't include some storage that Aurora uses internally in the cluster and doesn't charge for. If your cluster is approaching the storage limit and might run out of space, it's more helpful to monitor the `AuroraVolumeBytesLeftTotal` metric and try to maximize that number. The following example runs a similar calculation as the previous one, but for `AuroraVolumeBytesLeftTotal` instead of `VolumeBytesUsed`.

```
$ aws cloudwatch get-metric-statistics --metric-name "AuroraVolumeBytesLeftTotal" \
  --start-time "$(date -d '1 hour ago')" --end-time "$(date -d 'now')" --period 3600 \
  --namespace "AWS/RDS" --statistics Maximum --dimensions Name=DBClusterIdentifier,Value=my_old_cluster_id \
  --output text | sort -k 3
AuroraVolumeBytesLeftTotal
DATAPOINTS      140530528288768.0       2023-02-23T19:25:00+00:00       Count
$ TiB=$((1024*1024*1024*1024))
$ TB=$((1000*1000*1000*1000))
$ echo "$((140530528288768 / $TB)) TB remaining for this cluster"
140 TB remaining for this cluster
$ echo "$((140530528288768 / $TiB)) TiB remaining for this cluster"
127 TiB remaining for this cluster
```

For a cluster running Aurora MySQL version 2.09 or higher, or Aurora PostgreSQL, the size reported by `VolumeBytesUsed` increases when data is added and decreases when data is removed. The following example shows how. The report shows the maximum and minimum storage size for a cluster at 30-minute intervals as tables with temporary data are created and dropped. The report lists the maximum value before the minimum value. Thus, to understand how storage usage changed within each 30-minute interval, interpret the numbers from right to left.

```
$ aws cloudwatch get-metric-statistics --metric-name "VolumeBytesUsed" \
  --start-time "$(date -d '4 hours ago')" --end-time "$(date -d 'now')" --period 1800 \
  --namespace "AWS/RDS" --statistics Maximum Minimum --dimensions Name=DBClusterIdentifier,Value=my_new_cluster_id \
  --output text | sort -k 4
VolumeBytesUsed
DATAPOINTS	14545305600.0	14545305600.0	2020-08-05T20:49:00+00:00	Bytes
DATAPOINTS	14545305600.0	14545305600.0	2020-08-05T21:19:00+00:00	Bytes
DATAPOINTS	22022176768.0 14545305600.0	2020-08-05T21:49:00+00:00	Bytes
DATAPOINTS	22022176768.0	22022176768.0	2020-08-05T22:19:00+00:00	Bytes
DATAPOINTS	22022176768.0	22022176768.0	2020-08-05T22:49:00+00:00	Bytes
DATAPOINTS	22022176768.0 15614263296.0	2020-08-05T23:19:00+00:00	Bytes
DATAPOINTS	15614263296.0	15614263296.0	2020-08-05T23:49:00+00:00	Bytes
DATAPOINTS	15614263296.0	15614263296.0	2020-08-06T00:19:00+00:00	Bytes
```

The following example shows how for clusters running compatible versions of Aurora MySQL or Aurora PostgreSQL, the free size reported by `AuroraVolumeBytesLeftTotal` reflects the 256-TiB size limit. For more information about compatible versions, see [Amazon Aurora size limits](CHAP_Limits.md#RDS_Limits.FileSize.Aurora).

```
$ aws cloudwatch get-metric-statistics --region us-east-1 --metric-name "AuroraVolumeBytesLeftTotal" \
  --start-time "$(date -d '4 hours ago')" --end-time "$(date -d 'now')" --period 1800 \
  --namespace "AWS/RDS" --statistics Minimum --dimensions Name=DBClusterIdentifier,Value=pq-57 \
  --output text | sort -k 3
AuroraVolumeBytesLeftTotal
DATAPOINTS	140515818864640.0	2020-08-05T20:56:00+00:00	Count
DATAPOINTS	140515818864640.0	2020-08-05T21:26:00+00:00	Count
DATAPOINTS	140515818864640.0	2020-08-05T21:56:00+00:00	Count
DATAPOINTS	140514866757632.0	2020-08-05T22:26:00+00:00	Count
DATAPOINTS	140511020580864.0	2020-08-05T22:56:00+00:00	Count
DATAPOINTS	140503168843776.0	2020-08-05T23:26:00+00:00	Count
DATAPOINTS	140503168843776.0	2020-08-05T23:56:00+00:00	Count
DATAPOINTS	140515818864640.0	2020-08-06T00:26:00+00:00	Count
$ TiB=$((1024*1024*1024*1024))
$ TB=$((1000*1000*1000*1000))
$ echo "$((140515818864640 / $TB)) TB remaining for this cluster"
140 TB remaining for this cluster
$ echo "$((140515818864640 / $TiB)) TiB remaining for this cluster"
256 TiB remaining for this cluster
```

## Instance scaling


You can scale your Aurora DB cluster as needed by modifying the DB instance class for each DB instance in the DB cluster. Aurora supports several DB instance classes optimized for Aurora, depending on database engine compatibility.


| Database engine | Instance scaling | 
| --- | --- | 
|  Amazon Aurora MySQL  |  See [Scaling Aurora MySQL DB instances](AuroraMySQL.Managing.Performance.md#AuroraMySQL.Managing.Performance.InstanceScaling)  | 
|  Amazon Aurora PostgreSQL  |  See [Scaling Aurora PostgreSQL DB instances](AuroraPostgreSQL.Managing.md#AuroraPostgreSQL.Managing.Performance.InstanceScaling)  | 

## Read scaling


You can achieve read scaling for your Aurora DB cluster by creating up to 15 Aurora Replicas in a DB cluster. Each Aurora Replica returns the same data from the cluster volume with minimal replica lag—usually considerably less than 100 milliseconds after the primary instance has written an update. As your read traffic increases, you can create additional Aurora Replicas and connect to them directly to distribute the read load for your DB cluster. Aurora Replicas don't have to be of the same DB instance class as the primary instance.

For information about adding Aurora Replicas to a DB cluster, see [Adding Aurora Replicas to a DB cluster](aurora-replicas-adding.md).

## Managing connections


The maximum number of connections allowed to an Aurora DB instance is determined by the `max_connections` parameter in the instance-level parameter group for the DB instance. The default value of that parameter varies depending on the DB instance class used for the DB instance and database engine compatibility.


| Database engine | max\_connections default value | 
| --- | --- | 
|  Amazon Aurora MySQL  |  See [Maximum connections to an Aurora MySQL DB instance](AuroraMySQL.Managing.Performance.md#AuroraMySQL.Managing.MaxConnections)  | 
|  Amazon Aurora PostgreSQL  |  See [Maximum connections to an Aurora PostgreSQL DB instance](AuroraPostgreSQL.Managing.md#AuroraPostgreSQL.Managing.MaxConnections)  | 

**Tip**  
If your applications frequently open and close connections, or keep a large number of long-lived connections open, we recommend that you use Amazon RDS Proxy. RDS Proxy is a fully managed, highly available database proxy that uses connection pooling to share database connections securely and efficiently. To learn more about RDS Proxy, see [Amazon RDS Proxy for Aurora](rds-proxy.md).

## Managing query execution plans


If you use query plan management for Aurora PostgreSQL, you gain control over which plans the optimizer runs. For more information, see [Managing query execution plans for Aurora PostgreSQL](AuroraPostgreSQL.Optimize.md).

# Cloning a volume for an Amazon Aurora DB cluster
Cloning a volume for an Aurora DB cluster<a name="cloning"></a>

By using Aurora cloning, you can create a new cluster that initially shares the same data pages as the original, but is a separate and independent volume. The process is designed to be fast and cost-effective. The new cluster with its associated data volume is known as a *clone*. Creating a clone is faster and more space-efficient than physically copying the data using other techniques, such as restoring a snapshot.

**Topics**
+ [

## Overview of Aurora cloning
](#Aurora.Clone.Overview)
+ [

## Limitations of Aurora cloning
](#Aurora.Managing.Clone.Limitations)
+ [

## How Aurora cloning works
](#Aurora.Managing.Clone.Protocol)
+ [

## Creating an Amazon Aurora clone
](#Aurora.Managing.Clone.create)
+ [

# Cross-VPC cloning with Amazon Aurora
](Aurora.Managing.Clone.Cross-VPC.md)
+ [

# Cross-account cloning with AWS RAM and Amazon Aurora
](Aurora.Managing.Clone.Cross-Account.md)

## Overview of Aurora cloning


Aurora uses a *copy-on-write protocol* to create a clone. This mechanism uses minimal additional space to create an initial clone. When the clone is first created, Aurora keeps a single copy of the data that is used by the source Aurora DB cluster and the new (cloned) Aurora DB cluster. Additional storage is allocated only when changes are made to data (on the Aurora storage volume) by the source Aurora DB cluster or the Aurora DB cluster clone. To learn more about the copy-on-write protocol, see [How Aurora cloning works](#Aurora.Managing.Clone.Protocol).
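To build intuition for the copy-on-write protocol, here is a toy model in Python. It is purely conceptual and not how Aurora storage is implemented: a clone initially references the same page objects as its source, and only a page that gets written afterward diverges into its own copy.

```python
# Conceptual sketch only (not the Aurora storage engine): a toy
# copy-on-write volume. A clone shares all pages with its source until
# one side writes, at which point only the touched page diverges.
class CowVolume:
    def __init__(self, pages=None):
        self.pages = pages if pages is not None else {}

    def clone(self):
        # The clone references the very same page objects as the source,
        # so creating it allocates almost no additional storage.
        return CowVolume(dict(self.pages))

    def write(self, page_id, data):
        # Writing replaces only the one page being changed; every other
        # page remains shared with any clones.
        self.pages[page_id] = data

source = CowVolume({1: "orders", 2: "customers"})
test_clone = source.clone()

# Modifying one page on the clone leaves the source untouched.
test_clone.write(2, "customers-modified")

shared = [p for p in source.pages if source.pages[p] is test_clone.pages.get(p)]
print(shared)  # page 1 is still shared; page 2 diverged after the write
```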

Aurora cloning is especially useful for quickly setting up test environments using your production data, without risking data corruption. You can use clones for many types of applications, such as the following:
+ Experiment with potential changes (schema changes and parameter group changes, for example) to assess all impacts.
+ Run workload-intensive operations, such as exporting data or running analytical queries on the clone.
+ Create a copy of your production DB cluster for development, testing, or other purposes.

You can create more than one clone from the same Aurora DB cluster. You can also create multiple clones from another clone.

After creating an Aurora clone, you can configure the Aurora DB instances differently from the source Aurora DB cluster. For example, you might not need a clone for development purposes to meet the same high availability requirements as the source production Aurora DB cluster. In this case, you can configure the clone with a single Aurora DB instance rather than the multiple DB instances used by the Aurora DB cluster.

When you create a clone using a different deployment configuration from the source, the clone is created using the latest minor version of the source's Aurora DB engine.

When you create clones from your Aurora DB clusters, the clones are created in your AWS account—the same account that owns the source Aurora DB cluster. However, you can also share Aurora Serverless v2 and provisioned Aurora DB clusters and clones with other AWS accounts. For more information, see [Cross-account cloning with AWS RAM and Amazon Aurora](Aurora.Managing.Clone.Cross-Account.md).

When you finish using the clone for your testing, development, or other purposes, you can delete it.

## Limitations of Aurora cloning


Aurora cloning currently has the following limitations:
+ You can create as many clones as you want, up to the maximum number of DB clusters allowed in the AWS Region.
+ You can create up to 15 clones with the copy-on-write protocol. After you have 15 clones, the next clone that you create is a full copy. The full-copy protocol acts like a point-in-time recovery.
+ You can't create a clone in a different AWS Region from the source Aurora DB cluster.
+ You can't create a clone that uses the parallel query feature from an Aurora DB cluster that doesn't use parallel query. To bring data into a cluster that uses parallel query, create a snapshot of the original cluster and restore it to a cluster that has the parallel query feature enabled.
+ You can't create a clone from an Aurora DB cluster that has no DB instances. You can only clone Aurora DB clusters that have at least one DB instance.
+ You can create a clone in a different virtual private cloud (VPC) than that of the Aurora DB cluster. If you do, the subnets of the VPCs must map to the same Availability Zones.
+ You can create an Aurora provisioned clone from a provisioned Aurora DB cluster.
+ Clusters with Aurora Serverless v2 instances follow the same rules as provisioned clusters.
+ For Aurora Serverless v1:
  + You can create a provisioned clone from an Aurora Serverless v1 DB cluster.
  + You can create an Aurora Serverless v1 clone from an Aurora Serverless v1 or provisioned DB cluster.
  + You can't create an Aurora Serverless v1 clone from an unencrypted, provisioned Aurora DB cluster.
  + Cross-account cloning currently doesn't support cloning Aurora Serverless v1 DB clusters. For more information, see [Limitations of cross-account cloning](Aurora.Managing.Clone.Cross-Account.md#Aurora.Managing.Clone.CrossAccount.Limitations).
  + A cloned Aurora Serverless v1 DB cluster has the same behavior and limitations as any Aurora Serverless v1 DB cluster. For more information, see [Using Amazon Aurora Serverless v1](aurora-serverless.md).
  + Aurora Serverless v1 DB clusters are always encrypted. When you clone an Aurora Serverless v1 DB cluster into a provisioned Aurora DB cluster, the provisioned Aurora DB cluster is encrypted. You can choose the encryption key, but you can't disable the encryption. To clone from a provisioned Aurora DB cluster to an Aurora Serverless v1, you must start with an encrypted provisioned Aurora DB cluster.
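
Several of these limitations depend on attributes of the source cluster that you can check in advance. For example, the following AWS CLI sketch (the cluster name `my-source-cluster` is a placeholder) shows whether a provisioned cluster is encrypted, which determines whether you can create an Aurora Serverless v1 clone from it:

```shell
# Prints True if the source cluster is encrypted (required before
# cloning a provisioned cluster to Aurora Serverless v1).
aws rds describe-db-clusters \
    --db-cluster-identifier my-source-cluster \
    --query '*[].[StorageEncrypted]' \
    --output text
```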

## How Aurora cloning works


Aurora cloning works at the storage layer of an Aurora DB cluster. It uses a *copy-on-write* protocol that's both fast and space-efficient in terms of the underlying durable media supporting the Aurora storage volume. You can learn more about Aurora cluster volumes in the [Overview of Amazon Aurora storage](Aurora.Overview.StorageReliability.md#Aurora.Overview.Storage).

**Topics**
+ [Understanding the copy-on-write protocol](#Aurora.Managing.Clone.Protocol.Before)
+ [Deleting a source cluster volume](#Aurora.Managing.Clone.Deleting)

### Understanding the copy-on-write protocol


An Aurora DB cluster stores data in pages in the underlying Aurora storage volume. 

For example, in the following diagram you can find an Aurora DB cluster (A) that has four data pages, 1, 2, 3, and 4. Imagine that a clone, B, is created from the Aurora DB cluster. When the clone is created, no data is copied. Rather, the clone points to the same set of pages as the source Aurora DB cluster.

![\[Amazon Aurora cluster volume with 4 pages for source cluster, A, and clone, B\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-cloning-copy-on-write-protocol-1.png)


When the clone is created, no additional storage is usually needed. The copy-on-write protocol uses the same segment on the physical storage media as the source segment. Additional storage is required only if the capacity of the source segment isn't sufficient for the entire clone segment. If that's the case, the source segment is copied to another physical device. 

In the following diagrams, you can find an example of the copy-on-write protocol in action using the same cluster A and its clone, B, as shown preceding. Let's say that you make a change to your Aurora DB cluster (A) that results in a change to data held on page 1. Instead of writing to the original page 1, Aurora creates a new page 1[A]. The Aurora DB cluster volume for cluster (A) now points to page 1[A], 2, 3, and 4, while the clone (B) still references the original pages.

![\[Amazon Aurora source DB cluster volume and its clone, both with changes.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-cloning-copy-on-write-protocol-2.png)


On the clone, a change is made to page 4 on the storage volume. Instead of writing to the original page 4, Aurora creates a new page, 4[B]. The clone now points to pages 1, 2, 3, and to page 4[B], while the cluster (A) continues pointing to 1[A], 2, 3, and 4.

![\[Amazon Aurora source DB cluster volume and its clone, both with changes.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-cloning-copy-on-write-protocol-3.png)


As more changes occur over time in both the source Aurora DB cluster volume and the clone, more storage is needed to capture and store the changes. 

### Deleting a source cluster volume


 Initially, the clone volume shares the same data pages as the original volume from which the clone is created. As long as the original volume exists, the clone volume is only considered the owner of the pages that the clone created or modified. Thus, the `VolumeBytesUsed` metric for the clone volume starts out small and only grows as the data diverges between the original cluster and the clone. For pages that are identical between the source volume and the clone, the storage charges apply only to the original cluster. For more information about the `VolumeBytesUsed` metric, see [Cluster-level metrics for Amazon Aurora](Aurora.AuroraMonitoring.Metrics.md#Aurora.AuroraMySQL.Monitoring.Metrics.clusters). 
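
To watch how a clone's storage diverges from its source over time, you can retrieve the `VolumeBytesUsed` metric from CloudWatch. The following is a sketch, with `my-clone` as a placeholder cluster name; note that the `date -d '1 hour ago'` syntax shown is GNU-specific and may differ on other platforms:

```shell
# Average VolumeBytesUsed for the clone over the past hour,
# in 5-minute periods (placeholder cluster name).
aws cloudwatch get-metric-statistics \
    --namespace AWS/RDS \
    --metric-name VolumeBytesUsed \
    --dimensions Name=DbClusterIdentifier,Value=my-clone \
    --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
    --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    --period 300 \
    --statistics Average
```

A small value relative to the source cluster's metric indicates that most pages are still shared between the two volumes.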

 When you delete a source cluster volume that has one or more clones associated with it, the data in the cluster volumes of the clones isn't changed. Aurora preserves the pages that were previously owned by the source cluster volume. Aurora redistributes the storage billing for the pages that were owned by the deleted cluster. For example, suppose that an original cluster had two clones and then the original cluster was deleted. Half of the data pages owned by the original cluster would now be owned by one clone. The other half of the pages would be owned by the other clone. 

 If you delete the original cluster, then as you create or delete more clones, Aurora continues to redistribute ownership of the data pages among all the clones that share the same pages. Thus, you might find that the value of the `VolumeBytesUsed` metric changes for the cluster volume of a clone. The metric value can decrease as more clones are created and page ownership is spread across more clusters. The metric value can also increase as clones are deleted and page ownership is assigned to a smaller number of clusters. For information about how write operations affect data pages on clone volumes, see [Understanding the copy-on-write protocol](#Aurora.Managing.Clone.Protocol.Before). 

 When the original cluster and the clones are owned by the same AWS account, all the storage charges for those clusters apply to that same AWS account. If some of the clusters are cross-account clones, deleting the original cluster can result in additional storage charges to the AWS accounts that own the cross-account clones. 

 For example, suppose that a cluster volume has 1000 used data pages before you create any clones. When you clone that cluster, initially the clone volume has zero used pages. If the clone makes modifications to 100 data pages, only those 100 pages are stored on the clone volume and marked as used. The other 900 unchanged pages from the parent volume are shared by both clusters. In this case, the parent cluster has storage charges for 1000 pages and the clone volume for 100 pages. 

 If you delete the source volume, the storage charges for the clone include the 100 pages that it changed, plus the 900 shared pages from the original volume, for a total of 1000 pages. 

## Creating an Amazon Aurora clone
Creating an Aurora clone

You can create a clone in the same AWS account as the source Aurora DB cluster. To do so, use either the AWS Management Console or the AWS CLI, as described in the following procedures.

To allow another AWS account to create a clone or to share a clone with another AWS account, use the procedures in [Cross-account cloning with AWS RAM and Amazon Aurora](Aurora.Managing.Clone.Cross-Account.md).

### Console


The following procedure describes how to clone an Aurora DB cluster using the AWS Management Console.

Creating a clone using the AWS Management Console results in an Aurora DB cluster with one Aurora DB instance.

 These instructions apply for DB clusters owned by the same AWS account that is creating the clone. If the DB cluster is owned by a different AWS account, see [Cross-account cloning with AWS RAM and Amazon Aurora](Aurora.Managing.Clone.Cross-Account.md) instead. 

**To create a clone of a DB cluster owned by your AWS account using the AWS Management Console**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**. 

1. Choose your Aurora DB cluster from the list, and for **Actions**, choose **Create clone**.  
![\[Creating a clone starts by selecting your Aurora DB cluster.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-cloning-create-clone-1.png)

   The **Create clone** page opens, where you can configure **Settings**, **Connectivity**, and other options for the Aurora DB cluster clone.

1. For **DB instance identifier**, enter the name that you want to give to your cloned Aurora DB cluster.

1. For Aurora Serverless v1 DB clusters, choose **Provisioned** or **Serverless** for **Capacity type**.

   You can choose **Serverless** only if the source Aurora DB cluster is an Aurora Serverless v1 DB cluster or is a provisioned Aurora DB cluster that is encrypted.

1. For Aurora Serverless v2 or provisioned DB clusters, choose either **Aurora I/O-Optimized** or **Aurora Standard** for **Cluster storage configuration**.

   For more information, see [Storage configurations for Amazon Aurora DB clusters](Aurora.Overview.StorageReliability.md#aurora-storage-type).

1. Choose the DB instance size or DB cluster capacity:
   + For a provisioned clone, choose a **DB instance class**.  
![\[To create a provisioned clone, specify the DB instance size.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-cloning-create-clone-3-provisioned.png)

     You can accept the provided setting, or you can use a different DB instance class for your clone.
   + For an Aurora Serverless v1 or Aurora Serverless v2 clone, choose the **Capacity settings**.  
![\[To create a Serverless clone from an Aurora DB cluster, specify the capacity.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-cloning-create-clone-3-serverless.png)

     You can accept the provided settings, or you can change them for your clone.

1. Choose other settings as needed for your clone. To learn more about Aurora DB cluster and instance settings, see [Creating an Amazon Aurora DB cluster](Aurora.CreateInstance.md). 

1. Choose **Create clone**.

When the clone is created, it's listed with your other Aurora DB clusters in the console **Databases** section and displays its current state. Your clone is ready to use when its state is **Available**. 

### AWS CLI


 Using the AWS CLI for cloning your Aurora DB cluster involves separate steps for creating the clone cluster and adding one or more DB instances to it. 

 The `restore-db-cluster-to-point-in-time` AWS CLI command that you use results in an Aurora DB cluster with the same storage data as the original cluster, but no Aurora DB instances. You create the DB instances separately after the clone is available. You can choose the number of DB instances and their instance classes to give the clone more or less compute capacity than the original cluster. The steps in the process are as follows: 

1.  Create the clone by using the [restore-db-cluster-to-point-in-time](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-to-point-in-time.html) CLI command. 

1.  Create the writer DB instance for the clone by using the [create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html) CLI command. 

1.  (Optional) Run additional [create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html) CLI commands to add one or more reader instances to the clone cluster. Using reader instances helps improve the high availability and read scalability of the clone. You might skip this step if you only intend to use the clone for development and testing. 

**Topics**
+ [Creating the clone](#Aurora.Managing.Clone.CLI.create-empty-clone)
+ [Checking the status and getting clone details](#Aurora.Managing.Clone.CLI.check-status-get-details)
+ [Creating the Aurora DB instance for your clone](#Aurora.Managing.Clone.CLI.create-db-instance)
+ [Parameters to use for cloning](#Aurora.Managing.Clone.CLI.parameter-summary)

#### Creating the clone


 Use the [restore-db-cluster-to-point-in-time](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-to-point-in-time.html) CLI command to create the initial clone cluster. 

**To create a clone from a source Aurora DB cluster**
+  Use the [restore-db-cluster-to-point-in-time](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-to-point-in-time.html) CLI command. Specify values for the following parameters. In this typical case, the clone uses the same engine mode as the original cluster, either provisioned or Aurora Serverless v1. 
  +  `--db-cluster-identifier` – Choose a meaningful name for your clone. You name the clone when you use the [restore-db-cluster-to-point-in-time](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-to-point-in-time.html) CLI command. You then pass the name of the clone in the [create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html) CLI command. 
  +  `--restore-type` – Use `copy-on-write` to create a clone of the source DB cluster. Without this parameter, the `restore-db-cluster-to-point-in-time` command restores the Aurora DB cluster rather than creating a clone. 
  +  `--source-db-cluster-identifier` – Use the name of the source Aurora DB cluster that you want to clone. 
  +  `--use-latest-restorable-time` – This value points to the latest restorable volume data for the source DB cluster. Use it to create clones. 

 The following example creates a clone named `my-clone` from a cluster named `my-source-cluster`. 

For Linux, macOS, or Unix:

```
aws rds restore-db-cluster-to-point-in-time \
    --source-db-cluster-identifier my-source-cluster \
    --db-cluster-identifier my-clone \
    --restore-type copy-on-write \
    --use-latest-restorable-time
```

For Windows:

```
aws rds restore-db-cluster-to-point-in-time ^
    --source-db-cluster-identifier my-source-cluster ^
    --db-cluster-identifier my-clone ^
    --restore-type copy-on-write ^
    --use-latest-restorable-time
```

 The command returns a JSON object containing details of the clone. Check to make sure that your cloned DB cluster is available before trying to create the DB instance for your clone. For more information, see [Checking the status and getting clone details](#Aurora.Managing.Clone.CLI.check-status-get-details). 
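
Rather than polling `describe-db-clusters` manually, recent versions of the AWS CLI include a waiter that blocks until the cluster reaches the `available` status. A minimal sketch, assuming the clone is named `my-clone`:

```shell
# Blocks until the clone cluster's status is "available";
# after that, it's safe to run create-db-instance for the clone.
aws rds wait db-cluster-available \
    --db-cluster-identifier my-clone
```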

 For example, suppose you have a cluster named `tpch100g` that you want to clone. The following Linux example creates a cloned cluster named `tpch100g-clone`, an Aurora Serverless v2 writer instance named `tpch100g-clone-instance`, and a provisioned reader instance named `tpch100g-clone-instance-2` for the new cluster. 

 You don't need to supply some parameters, such as `--master-username` and `--master-user-password`, because Aurora automatically determines those from the original cluster. However, you do need to specify the DB engine. Thus, the example queries the new cluster to determine the right value to use for the `--engine` parameter. 

 This example also includes the `--serverless-v2-scaling-configuration` option when creating the clone cluster. That way, you can add Aurora Serverless v2 instances to the clone even if the original cluster didn't use Aurora Serverless v2. 

```
$ aws rds restore-db-cluster-to-point-in-time \
  --source-db-cluster-identifier tpch100g \
  --db-cluster-identifier tpch100g-clone \
  --serverless-v2-scaling-configuration MinCapacity=0.5,MaxCapacity=16 \
  --restore-type copy-on-write \
  --use-latest-restorable-time

$ aws rds describe-db-clusters \
  --db-cluster-identifier tpch100g-clone \
  --query '*[].[Engine]' \
  --output text
aurora-mysql

$ aws rds create-db-instance \
  --db-instance-identifier tpch100g-clone-instance \
  --db-cluster-identifier tpch100g-clone \
  --db-instance-class db.serverless \
  --engine aurora-mysql

$ aws rds create-db-instance \
  --db-instance-identifier tpch100g-clone-instance-2 \
  --db-cluster-identifier tpch100g-clone \
  --db-instance-class db.r6g.2xlarge \
  --engine aurora-mysql
```

**To create a clone with a different engine mode from the source Aurora DB cluster**
+  This procedure applies only to older engine versions that support Aurora Serverless v1. Suppose that you have an Aurora Serverless v1 cluster and you want to create a clone that's a provisioned cluster. In that case, use the [restore-db-cluster-to-point-in-time](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-to-point-in-time.html) CLI command and specify parameter values similar to those in the previous example, plus these additional parameters: 
  +  `--engine-mode` – Use this parameter only to create clones that are of a different engine mode from the source Aurora DB cluster. This parameter only applies to the older engine versions that support Aurora Serverless v1. Choose the value to pass with `--engine-mode` as follows: 
    +  Use `--engine-mode provisioned` to create a provisioned Aurora DB cluster clone from an Aurora Serverless DB cluster. 
**Note**  
 If you intend to use Aurora Serverless v2 with a cluster that was cloned from Aurora Serverless v1, you still specify the engine mode for the clone as `provisioned`. Then you perform additional upgrade and migration steps afterward. 
    +  Use `--engine-mode serverless` to create an Aurora Serverless v1 clone from a provisioned Aurora DB cluster. When you specify the `serverless` engine mode, you can also choose the `--scaling-configuration`. 
  +  `--scaling-configuration` – (Optional) Use with `--engine-mode serverless` to configure the minimum and maximum capacity for an Aurora Serverless v1 clone. If you don't use this parameter, Aurora creates an Aurora Serverless v1 clone using the default Aurora Serverless v1 capacity values for the DB engine. 

 The following example creates a provisioned clone named `my-clone`, from an Aurora Serverless v1 DB cluster named `my-source-cluster`. 

For Linux, macOS, or Unix:

```
aws rds restore-db-cluster-to-point-in-time \
    --source-db-cluster-identifier my-source-cluster \
    --db-cluster-identifier my-clone \
    --engine-mode provisioned \
    --restore-type copy-on-write \
    --use-latest-restorable-time
```

For Windows:

```
aws rds restore-db-cluster-to-point-in-time ^
    --source-db-cluster-identifier my-source-cluster ^
    --db-cluster-identifier my-clone ^
    --engine-mode provisioned ^
    --restore-type copy-on-write ^
    --use-latest-restorable-time
```

 These commands return a JSON object containing the details of the clone that you need to create the DB instance. You can't create the DB instance until the clone (the empty Aurora DB cluster) has the status **Available**. 

**Note**  
 The [restore-db-cluster-to-point-in-time](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-to-point-in-time.html) AWS CLI command only restores the DB cluster, not the DB instances for that DB cluster. You run the [create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html) command to create DB instances for the restored DB cluster. With that command, you specify the identifier of the restored DB cluster as the `--db-cluster-identifier` parameter. You can create DB instances only after the `restore-db-cluster-to-point-in-time` command has completed and the DB cluster is available.   
 Suppose that you start with an Aurora Serverless v1 cluster and intend to migrate it to an Aurora Serverless v2 cluster. You create a provisioned clone of the Aurora Serverless v1 cluster as the initial step in the migration. For the full procedure, including any required version upgrades, see [Upgrading from an Aurora Serverless v1 cluster to Aurora Serverless v2](aurora-serverless-v2.upgrade.md#aurora-serverless-v2.upgrade-from-serverless-v1-procedure). 

#### Checking the status and getting clone details


 You can use the following command to check the status of your newly created clone cluster. 

```
$ aws rds describe-db-clusters --db-cluster-identifier my-clone --query '*[].[Status]' --output text
```

 Or you can obtain the status and the other values that you need to [create the DB instance for your clone](#Aurora.Managing.Clone.CLI.create-db-instance) by using the following AWS CLI query. 

For Linux, macOS, or Unix:

```
aws rds describe-db-clusters --db-cluster-identifier my-clone \
  --query '*[].{Status:Status,Engine:Engine,EngineVersion:EngineVersion,EngineMode:EngineMode}'
```

For Windows:

```
aws rds describe-db-clusters --db-cluster-identifier my-clone ^
  --query "*[].{Status:Status,Engine:Engine,EngineVersion:EngineVersion,EngineMode:EngineMode}"
```

 This query returns output similar to the following. 

```
[
  {
        "Status": "available",
        "Engine": "aurora-mysql",
        "EngineVersion": "8.0.mysql_aurora.3.04.1",
        "EngineMode": "provisioned"
    }
]
```

#### Creating the Aurora DB instance for your clone


 Use the [create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html) CLI command to create the DB instance for your Aurora Serverless v2 or provisioned clone. You don't create a DB instance for an Aurora Serverless v1 clone. 

 The DB instance inherits the `--master-username` and `--master-user-password` properties from the source DB cluster. 

 The following example creates a DB instance for a provisioned clone. 

For Linux, macOS, or Unix:

```
aws rds create-db-instance \
    --db-instance-identifier my-new-db \
    --db-cluster-identifier my-clone \
    --db-instance-class db.r6g.2xlarge \
    --engine aurora-mysql
```

For Windows:

```
aws rds create-db-instance ^
    --db-instance-identifier my-new-db ^
    --db-cluster-identifier my-clone ^
    --db-instance-class db.r6g.2xlarge ^
    --engine aurora-mysql
```

 The following example creates an Aurora Serverless v2 DB instance, for a clone that uses an engine version that supports Aurora Serverless v2. 

For Linux, macOS, or Unix:

```
aws rds create-db-instance \
    --db-instance-identifier my-new-db \
    --db-cluster-identifier my-clone \
    --db-instance-class db.serverless \
    --engine aurora-mysql
```

For Windows:

```
aws rds create-db-instance ^
    --db-instance-identifier my-new-db ^
    --db-cluster-identifier my-clone ^
    --db-instance-class db.serverless ^
    --engine aurora-mysql
```

#### Parameters to use for cloning


 The following table summarizes the various parameters used with `restore-db-cluster-to-point-in-time` to clone Aurora DB clusters. 


|  Parameter  |  Description  | 
| --- | --- | 
|   `--source-db-cluster-identifier`   |   Use the name of the source Aurora DB cluster that you want to clone.   | 
|   `--db-cluster-identifier`   |   Choose a meaningful name for your clone when you create it with the `restore-db-cluster-to-point-in-time` command. Then you pass this name to the `create-db-instance` command.   | 
|   `--restore-type`   |   Specify `copy-on-write` as the `--restore-type` to create a clone of the source DB cluster rather than restoring the source Aurora DB cluster.   | 
|   `--use-latest-restorable-time`   |   This value points to the latest restorable volume data for the source DB cluster. Use it to create clones.   | 
|   `--serverless-v2-scaling-configuration`   |   (Newer versions that support Aurora Serverless v2) Use this parameter to configure the minimum and maximum capacity for an Aurora Serverless v2 clone. If you don't specify this parameter, you can't create any Aurora Serverless v2 instances in the clone cluster until you modify the cluster to add this attribute.   | 
|   `--engine-mode`   |   (Older versions that support Aurora Serverless v1 only) Use this parameter to create clones that are of a different type from the source Aurora DB cluster, with one of the following values:  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Clone.html)  | 
|   `--scaling-configuration`   |   (Older versions that support Aurora Serverless v1 only) Use this parameter to configure the minimum and maximum capacity for an Aurora Serverless v1 clone. If you don't specify this parameter, Aurora creates the clone using the default capacity values for the DB engine.   | 

For information about cross-VPC and cross-account cloning, see the following sections.

**Topics**
+ [Overview of Aurora cloning](#Aurora.Clone.Overview)
+ [Limitations of Aurora cloning](#Aurora.Managing.Clone.Limitations)
+ [How Aurora cloning works](#Aurora.Managing.Clone.Protocol)
+ [Creating an Amazon Aurora clone](#Aurora.Managing.Clone.create)
+ [Cross-VPC cloning with Amazon Aurora](Aurora.Managing.Clone.Cross-VPC.md)
+ [Cross-account cloning with AWS RAM and Amazon Aurora](Aurora.Managing.Clone.Cross-Account.md)

# Cross-VPC cloning with Amazon Aurora
Cross-VPC cloning

 Suppose that you want to impose different network access controls on the original cluster and the clone. For example, you might use cloning to make a copy of a production Aurora cluster in a different VPC for development and testing. Or you might create a clone as part of a migration from public subnets to private subnets, to enhance your database security. 

 The following sections demonstrate how to set up the network configuration for the clone so that the original cluster and the clone can both access the same Aurora storage nodes, even from different subnets or different VPCs. Verifying the network resources in advance can avoid errors during cloning that might be difficult to diagnose. 

 If you aren't familiar with how Aurora interacts with VPCs, subnets, and DB subnet groups, see [Amazon VPC and Amazon Aurora](USER_VPC.md) first. You can work through the tutorials in that section to create these kinds of resources in the AWS console, and understand how they fit together. 

 Because the steps involve switching between the RDS and EC2 services, the examples use AWS CLI commands to help you understand how to automate such operations and save the output. 

**Topics**
+ [Before you begin](#before-you-begin)
+ [Gathering information about the network environment](#gathering-information-about-the-network-environment)
+ [Creating network resources for the clone](#creating-network-resources-for-the-clone)
+ [Creating an Aurora clone with new network settings](#creating-an-aurora-clone-with-new-network-settings)
+ [Moving a cluster from public subnets to private ones](#moving-a-cluster-from-public-subnets-to-private-ones)
+ [End-to-end example of creating a cross-VPC clone](#end-to-end-example-of-creating-a-cross-vpc-clone)

## Before you begin


 Before you start setting up a cross-VPC clone, make sure to have the following resources: 
+  An Aurora DB cluster to use as the source for cloning. If this is your first time creating an Aurora DB cluster, consult the tutorials in [Getting started with Amazon Aurora](CHAP_GettingStartedAurora.md) to set up a cluster using either the MySQL or PostgreSQL database engine. 
+  A second VPC, if you intend to create a cross-VPC clone. If you don't have a VPC to use for the clone, see [Tutorial: Create a VPC for use with a DB cluster (IPv4 only)](CHAP_Tutorials.WebServerDB.CreateVPC.md) or [Tutorial: Create a VPC for use with a DB cluster (dual-stack mode)](CHAP_Tutorials.CreateVPCDualStack.md). 

## Gathering information about the network environment


 With cross-VPC cloning, the network environment can differ substantially between the original cluster and its clone. Before you create the clone, collect and record information about the VPC, subnets, DB subnet group, and AZs used in the original cluster. That way, you can minimize the chances of problems. If a network problem does occur, you won't have to interrupt any troubleshooting activities to search for diagnostic information. The following sections show CLI examples to gather these types of information. You can save the details in whichever format is convenient to consult while creating the clone and doing any troubleshooting. 
+  [Step 1: Check the Availability Zones of the original cluster](#cross-vpc-cloning-check-original-azs) 
+  [Step 2: Check the DB subnet group of the original cluster](#cross-vpc-cloning-check-original-subnet-group) 
+  [Step 3: Check the subnets of the original cluster](#cross-vpc-cloning-check-original-subnets) 
+  [Step 4: Check the Availability Zones of the DB instances in the original cluster](#cross-vpc-cloning-check-original-instance-azs) 
+  [Step 5: Check the VPCs you can use for the clone](#cross-vpc-cloning-check-vpcs) 

### Step 1: Check the Availability Zones of the original cluster


 Before you create the clone, verify which AZs the original cluster uses for its storage. As explained in [Amazon Aurora storage](Aurora.Overview.StorageReliability.md), the storage for each Aurora cluster is associated with exactly three AZs. Because an [Aurora DB cluster](Aurora.Overview.md) takes advantage of the separation of compute and storage, this rule holds regardless of how many DB instances are in the cluster. 

 For example, run a CLI command such as the following, substituting your own cluster name for `my_cluster`. The following example produces a list sorted alphabetically by the AZ name. 

```
aws rds describe-db-clusters \
  --db-cluster-identifier my_cluster \
  --query 'sort_by(*[].AvailabilityZones[].{Zone:@},&Zone)' \
  --output text
```

 The following example shows sample output from the preceding `describe-db-clusters` command. It demonstrates that the storage for the Aurora cluster always uses three AZs. 

```
us-east-1c
us-east-1d
us-east-1e
```

 To create a clone in a network environment that doesn't have all the resources in place to connect to these AZs, you must create subnets associated with at least two of those AZs, and then create a DB subnet group containing those two or three subnets. The following examples show how. 
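
For example, the following AWS CLI sketch creates two subnets in a different VPC, each in one of the AZs used by the source cluster's storage, and groups them into a new DB subnet group. All identifiers, AZ names, and CIDR blocks here are placeholders; substitute your own VPC ID, the AZ names from the preceding output, and address ranges that fit your VPC:

```shell
# Create one subnet in each of two AZs used by the source cluster
# (placeholder VPC ID, AZ names, and CIDR blocks).
aws ec2 create-subnet \
    --vpc-id vpc-placeholder \
    --availability-zone us-east-1c \
    --cidr-block 10.0.10.0/24

aws ec2 create-subnet \
    --vpc-id vpc-placeholder \
    --availability-zone us-east-1d \
    --cidr-block 10.0.11.0/24

# Group the new subnets (use the subnet IDs returned by the
# create-subnet commands) into a DB subnet group for the clone.
aws rds create-db-subnet-group \
    --db-subnet-group-name clone-subnet-group \
    --db-subnet-group-description "Subnets for the cross-VPC clone" \
    --subnet-ids subnet-placeholder-1 subnet-placeholder-2
```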

### Step 2: Check the DB subnet group of the original cluster


 If you want to use the same number of subnets for the clone as in the original cluster, you can get the number of subnets from the DB subnet group of the original cluster. An Aurora DB subnet group contains at least two subnets, each associated with a different AZ. Make a note of which AZs the subnets are associated with. 

 The following example shows how to find the DB subnet group of the original cluster, and then work backwards to the corresponding AZs. Substitute the name of your cluster for `my_cluster` in the first command. Substitute the name of the DB subnet group for `my_subnet_group` in the second command. 

```
aws rds describe-db-clusters --db-cluster-identifier my_cluster \
  --query '*[].DBSubnetGroup' --output text

aws rds describe-db-subnet-groups --db-subnet-group-name my_subnet_group \
  --query '*[].Subnets[].[SubnetAvailabilityZone.Name]' --output text
```

 Sample output might look similar to the following, for a cluster with a DB subnet group containing two subnets. In this case, `two-subnets` is a name that was specified when the DB subnet group was created. 

```
two-subnets

us-east-1d
us-east-1c
```

 For a cluster where the DB subnet group contains three subnets, the output might look similar to the following. 

```
three-subnets

us-east-1f
us-east-1d
us-east-1c
```

### Step 3: Check the subnets of the original cluster


 If you need more details about the subnets in the original cluster, run AWS CLI commands similar to the following. You can examine the subnet attributes such as IP address ranges, owner, and so on. That way, you can determine whether to use different subnets in the same VPC, or create subnets with similar characteristics in a different VPC. 

 Find the subnet IDs of all the subnets that are available in your VPC. 

```
aws ec2 describe-subnets --filters Name=vpc-id,Values=my_vpc \
  --query '*[].[SubnetId]' --output text
```

 Find the exact subnets used in your DB subnet group. 

```
aws rds describe-db-subnet-groups --db-subnet-group-name my_subnet_group \
  --query '*[].Subnets[].[SubnetIdentifier]' --output text
```

 Then specify the subnets that you want to investigate in a list, as in the following command. Substitute the names of your subnets for `my_subnet_1` and so on. 

```
aws ec2 describe-subnets \
  --subnet-ids '["my_subnet_1","my_subnet_2","my_subnet_3"]'
```

 The following example shows partial output from such a `describe-subnets` command. The output shows some of the important attributes you can see for each subnet, such as its associated AZ and the VPC that it's part of. 

```
{
    'Subnets': [
        {
            'AvailabilityZone': 'us-east-1d',
            'AvailableIpAddressCount': 54,
            'CidrBlock': '10.0.0.64/26',
            'State': 'available',
            'SubnetId': 'subnet-000a0bca00e0b0000',
            'VpcId': 'vpc-3f3c3fc3333b3ffb3',
            ...
        },
        {
            'AvailabilityZone': 'us-east-1c',
            'AvailableIpAddressCount': 55,
            'CidrBlock': '10.0.0.0/26',
            'State': 'available',
            'SubnetId': 'subnet-4b4dbfe4d4a4fd4c4',
            'VpcId': 'vpc-3f3c3fc3333b3ffb3',
            ...
```

### Step 4: Check the Availability Zones of the DB instances in the original cluster


 You can use this procedure to understand the AZs used for the DB instances in the original cluster. That way, you can set up the exact same AZs for the DB instances in the clone. You can also use more or fewer DB instances in the clone depending on whether the clone is used for production, development and testing, and so on. 

 For each instance in the original cluster, run a command such as the following. Make sure that the instance has finished creating and is in the `available` state first. Substitute the instance identifier for `my_instance`. 

```
aws rds describe-db-instances --db-instance-identifier my_instance \
  --query '*[].AvailabilityZone' --output text
```

 The following example shows the output of running the preceding `describe-db-instances` command. The Aurora cluster has four database instances. Therefore, we run the command four times, substituting a different DB instance identifier each time. The output shows how those DB instances are spread across a maximum of three AZs. 

```
us-east-1a
us-east-1c
us-east-1d
us-east-1a
```
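 Given output like the preceding list, a one-liner confirms how many distinct AZs the DB instances span. The list below is hard-coded to match the sample output; in practice you would collect the values from the `describe-db-instances` calls. 

```shell
# AZ of each DB instance, one per line, as returned by the four
# describe-db-instances calls in the example above (sample values).
instance_azs="us-east-1a
us-east-1c
us-east-1d
us-east-1a"

# Count the distinct AZs in use: sort -u deduplicates, wc -l counts.
distinct=$(printf '%s\n' "$instance_azs" | sort -u | wc -l)
echo "DB instances span $distinct AZs"
```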

 After the clone is created, when you add DB instances to it, you can specify these same AZ names in the `create-db-instance` commands. Doing so configures the DB instances in the new cluster to use exactly the same AZs as in the original cluster. 

### Step 5: Check the VPCs you can use for the clone


 If you intend to create the clone in a different VPC than the original, you can get a list of the VPC IDs available for your account. You might also do this step if you need to create any additional subnets in the same VPC as the original cluster. When you run the command to create a subnet, you specify the VPC ID as a parameter. 

 To list all the VPCs for your account, run the following CLI command: 

```
aws ec2 describe-vpcs --query '*[].[VpcId]' --output text
```

 The following example shows sample output from the preceding `describe-vpcs` command. The output demonstrates that there are four VPCs in the current AWS account that can be used as the source or the destination for cross-VPC cloning. 

```
vpc-fd111111
vpc-2222e2cd2a222f22e
vpc-33333333a33333d33
vpc-4ae4d4de4a4444dad
```

 You can use the same VPC as the destination for the clone, or a different VPC. If the original cluster and the clone are in the same VPC, you can reuse the same DB subnet group for the clone. You can also create a different DB subnet group. For example, the new DB subnet group might use private subnets, while the original cluster's DB subnet group might use public subnets. If you create the clone in a different VPC, make sure that there are enough subnets in the new VPC and that the subnets are associated with the right AZs from the original cluster. 

## Creating network resources for the clone


 If, while gathering the network information, you discovered that the clone needs additional network resources, you can create those resources before trying to set up the clone. For example, you might need to create more subnets, subnets associated with specific AZs, or a new DB subnet group. 
+  [Step 1: Create the subnets for the clone](#cross-vpc-cloning-create-clone-subnets) 
+  [Step 2: Create the DB subnet group for the clone](#cross-vpc-cloning-create-subnet-group) 

### Step 1: Create the subnets for the clone


 If you need to create new subnets for the clone, run a command similar to the following. You might need to do this when creating the clone in a different VPC, or when making some other network change such as using private subnets instead of public subnets. 

 AWS automatically generates the ID of the subnet. Substitute the name of the clone's VPC for `my_vpc`. Choose the address range for the `--cidr-block` option to allow at least 16 IP addresses in the range. You can include any other properties that you want to specify. Run the command `aws ec2 create-subnet help` to see all the choices. 

```
aws ec2 create-subnet --vpc-id my_vpc \
  --availability-zone AZ_name --cidr-block IP_range
```

 The following example shows some important attributes of a newly created subnet. 

```
{
    'Subnet': {
        'AvailabilityZone': 'us-east-1b',
        'AvailableIpAddressCount': 59,
        'CidrBlock': '10.0.0.64/26',
        'State': 'available',
        'SubnetId': 'subnet-44b4a44f4e44db444',
        'VpcId': 'vpc-555fc5df555e555dc',
        ...
        }
}
```
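 The `--cidr-block` guidance above (at least 16 IP addresses) follows from simple arithmetic: an IPv4 block with prefix length */p* contains 2^(32-*p*) addresses, so a /28 is the smallest block that qualifies. A quick sketch: 

```shell
# Number of addresses in an IPv4 CIDR block with the given prefix length.
cidr_size() {
  echo $((1 << (32 - $1)))
}

cidr_size 28   # 16 addresses: the minimum recommended above
cidr_size 26   # 64 addresses: the size used in the sample subnets
```

 Note that AWS reserves five addresses in each subnet (the first four and the last), which is why the /26 subnet in the preceding sample output reports 59 available addresses rather than 64. 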

### Step 2: Create the DB subnet group for the clone


 If you are creating the clone in a different VPC, or a different set of subnets within the same VPC, then you create a new DB subnet group and specify it when creating the clone. 

 Make sure that you know all the following details. You can find all of these from the output of the preceding examples. 

1.  VPC of the original cluster. For instructions, see [Step 3: Check the subnets of the original cluster](#cross-vpc-cloning-check-original-subnets). 

1.  VPC of the clone, if you are creating it in a different VPC. For instructions, see [Step 5: Check the VPCs you can use for the clone](#cross-vpc-cloning-check-vpcs). 

1.  Three AZs associated with the Aurora storage for the original cluster. For instructions, see [Step 1: Check the Availability Zones of the original cluster](#cross-vpc-cloning-check-original-azs). 

1.  Two or three AZs associated with the DB subnet group for the original cluster. For instructions, see [Step 2: Check the DB subnet group of the original cluster](#cross-vpc-cloning-check-original-subnet-group). 

1.  The subnet IDs and associated AZs of all the subnets in the VPC you intend to use for the clone. Use the same `describe-subnets` command as in [Step 3: Check the subnets of the original cluster](#cross-vpc-cloning-check-original-subnets), substituting the VPC ID of the destination VPC. 

 Check how many AZs are both associated with the storage of the original cluster and associated with subnets in the destination VPC. To successfully create the clone, there must be two or three AZs in common. If you have fewer than two AZs in common, go back to [Step 1: Create the subnets for the clone](#cross-vpc-cloning-create-clone-subnets). Create one, two, or three new subnets in the AZs that the original cluster's storage uses. 

 Choose subnets in the destination VPC that are associated with the same AZs as the Aurora storage in the original cluster. Ideally, choose three AZs. Doing so gives you the most flexibility to spread the DB instances of the clone across multiple AZs for high availability of compute resources. 

 Run a command similar to the following to create the new DB subnet group. Substitute the IDs of your subnets in the list. If you specify the subnet IDs using environment variables, be careful to quote the `--subnet-ids` parameter list in a way that preserves the double quotation marks around the IDs. 

```
aws rds create-db-subnet-group --db-subnet-group-name my_subnet_group \
  --subnet-ids '["my_subnet_1","my_subnet_2","my_subnet_3"]' \
  --db-subnet-group-description 'DB subnet group with 3 subnets for clone'
```

 The following example shows partial output of the `create-db-subnet-group` command. 

```
{
    'DBSubnetGroup': {
        'DBSubnetGroupName': 'my_subnet_group',
        'DBSubnetGroupDescription': 'DB subnet group with 3 subnets for clone',
        'VpcId': 'vpc-555fc5df555e555dc',
        'SubnetGroupStatus': 'Complete',
        'Subnets': [
            {
                'SubnetIdentifier': 'my_subnet_1',
                'SubnetAvailabilityZone': {
                    'Name': 'us-east-1c'
                },
                'SubnetStatus': 'Active'
            },
            {
                'SubnetIdentifier': 'my_subnet_2',
                'SubnetAvailabilityZone': {
                    'Name': 'us-east-1d'
                },
                'SubnetStatus': 'Active'
            }
            ...
        ],
        'SupportedNetworkTypes': [
            'IPV4'
        ]
    }
}
```
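 The quoting caveat mentioned earlier matters most when the subnet IDs come from shell variables. The following sketch shows one safe pattern, using hypothetical subnet IDs: build the JSON list inside double quotes, escaping the inner double quotes so they survive shell expansion. 

```shell
# Hypothetical subnet IDs, for example captured from earlier commands.
SUBNET_1=subnet-1111111111example1
SUBNET_2=subnet-2222222222example2
SUBNET_3=subnet-3333333333example3

# Escaped inner double quotes are preserved while the variables expand,
# producing a JSON array the CLI can parse.
SUBNET_LIST="[\"$SUBNET_1\",\"$SUBNET_2\",\"$SUBNET_3\"]"
echo "$SUBNET_LIST"

# The list is then passed as-is, for example:
#   aws rds create-db-subnet-group --db-subnet-group-name my_subnet_group \
#     --subnet-ids "$SUBNET_LIST" \
#     --db-subnet-group-description 'Subnet group for the clone'
```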

 At this point, you haven't actually created the clone yet. You have created all the relevant VPC and subnet resources so that you can specify the appropriate parameters to the `restore-db-cluster-to-point-in-time` and `create-db-instance` commands when creating the clone. 

## Creating an Aurora clone with new network settings


 Once you have made sure that the right configuration of VPCs, subnets, AZs, and subnet groups is in place for the new cluster to use, you can perform the actual cloning operation. The following CLI examples highlight the options such as `--db-subnet-group-name`, `--availability-zone`, and `--vpc-security-group-ids` that you specify on the commands to set up the clone and its DB instances. 
+  [Step 1: Specify the DB subnet group for the clone](#cross-vpc-cloning-specify-clone-subnet-group) 
+  [Step 2: Specify network settings for instances in the clone](#cross-vpc-cloning-configure-clone-instance-network) 
+  [Step 3: Establishing connectivity from a client system to a clone](#cross-vpc-cloning-connect-to-clone) 

### Step 1: Specify the DB subnet group for the clone


 When you create the clone, you can configure all the right VPC, subnet, and AZ settings by specifying a DB subnet group. Use the commands in the preceding examples to verify all the relationships and mappings that go into the DB subnet group. 

 For example, the following commands demonstrate cloning an original cluster to a clone. In the first example, the source cluster is associated with three subnets and the clone is associated with two subnets. The second example shows the opposite case, cloning from a cluster with two subnets to a cluster with three subnets. 

```
aws rds restore-db-cluster-to-point-in-time \
  --source-db-cluster-identifier cluster-with-3-subnets \
  --db-cluster-identifier cluster-cloned-to-2-subnets \
  --restore-type copy-on-write --use-latest-restorable-time \
  --db-subnet-group-name two-subnets
```

 If you intend to use Aurora Serverless v2 instances in the clone, include a `--serverless-v2-scaling-configuration` option when you create the clone, as shown. Doing so lets you use the `db.serverless` class when creating DB instances in the clone. You can also modify the clone later to add this scaling configuration attribute. The capacity numbers in this example allow each Serverless v2 instance in the cluster to scale between 2 and 32 Aurora Capacity Units (ACUs). For information about the Aurora Serverless v2 feature and how to choose the capacity range, see [Using Aurora Serverless v2](aurora-serverless-v2.md). 

```
aws rds restore-db-cluster-to-point-in-time \
  --source-db-cluster-identifier cluster-with-2-subnets \
  --db-cluster-identifier cluster-cloned-to-3-subnets \
  --restore-type copy-on-write --use-latest-restorable-time \
  --db-subnet-group-name three-subnets \
  --serverless-v2-scaling-configuration 'MinCapacity=2,MaxCapacity=32'
```
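 Before passing a capacity range to `--serverless-v2-scaling-configuration`, you can sanity-check the values locally. This sketch assumes the constraints documented for Aurora Serverless v2 at the time of writing: capacities in 0.5-ACU increments, with `MinCapacity` at least 0.5 and `MaxCapacity` no smaller than `MinCapacity`. Check the current limits for your engine version; the bounds here are assumptions. 

```shell
# Hypothetical pre-flight check for a Serverless v2 capacity range.
# Returns success (0) when both values are 0.5-ACU increments,
# min >= 0.5, and max >= min.
valid_acu_range() {
  awk -v min="$1" -v max="$2" 'BEGIN {
    half_steps = (min * 2 == int(min * 2)) && (max * 2 == int(max * 2))
    exit !(half_steps && min >= 0.5 && max >= min)
  }'
}

valid_acu_range 2 32   && echo "2-32 ACUs: OK"
valid_acu_range 0.3 16 || echo "0.3-16 ACUs: not a 0.5-ACU increment"
```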

 Regardless of the number of subnets used by the DB instances, the Aurora storage for the source cluster and the clone is associated with three AZs. The following example lists the AZs associated with both the original cluster and the clone, for both of the `restore-db-cluster-to-point-in-time` commands in the preceding examples. 

```
aws rds describe-db-clusters --db-cluster-identifier cluster-with-3-subnets \
  --query 'sort_by(*[].AvailabilityZones[].{Zone:@},&Zone)' --output text

us-east-1c
us-east-1d
us-east-1f

aws rds describe-db-clusters --db-cluster-identifier cluster-cloned-to-2-subnets \
  --query 'sort_by(*[].AvailabilityZones[].{Zone:@},&Zone)' --output text

us-east-1c
us-east-1d
us-east-1f

aws rds describe-db-clusters --db-cluster-identifier cluster-with-2-subnets \
  --query 'sort_by(*[].AvailabilityZones[].{Zone:@},&Zone)' --output text

us-east-1a
us-east-1c
us-east-1d

aws rds describe-db-clusters --db-cluster-identifier cluster-cloned-to-3-subnets \
  --query 'sort_by(*[].AvailabilityZones[].{Zone:@},&Zone)' --output text

us-east-1a
us-east-1c
us-east-1d
```

 Because at least two of the AZs overlap between each pair of original and clone clusters, both clusters can access the same underlying Aurora storage. 

### Step 2: Specify network settings for instances in the clone


 When you create DB instances in the clone, by default they inherit the DB subnet group from the cluster itself. That way, Aurora automatically assigns each instance to a particular subnet, and creates it in the AZ that's associated with the subnet. This choice is convenient, especially for development and test systems, because you don't have to keep track of the subnet IDs or the AZs while adding new instances to the clone. 

 As an alternative, you can specify the AZ when you create an Aurora DB instance for the clone. The AZ that you specify must be from the set of AZs that are associated with the clone. If the DB subnet group you use for the clone only contains two subnets, then you can only pick from the AZs associated with those two subnets. This choice offers flexibility and resilience for highly available systems, because you can make sure that the writer instance and the standby reader instance are in different AZs. Or if you add additional readers to the cluster, you can make sure that they are spread across three AZs. That way, even in the rare case of an AZ failure, you still have a writer instance and another reader instance in two other AZs. 
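 To spread readers across the clone's AZs as described, one approach is to round-robin through the AZ names. The following sketch only prints the `create-db-instance` commands it would run, so you can review them before executing anything; the cluster name, instance identifiers, instance class, and AZ list are all hypothetical. 

```shell
# Hypothetical AZs associated with the clone's DB subnet group.
azs="us-east-1a us-east-1c us-east-1d"

# Print one create-db-instance command per reader, cycling through the AZs.
i=1
for az in $azs; do
  echo "aws rds create-db-instance \\"
  echo "  --db-cluster-identifier my_clone \\"
  echo "  --db-instance-identifier my-reader-$i \\"
  echo "  --db-instance-class db.r6g.large \\"
  echo "  --engine aurora-postgresql \\"
  echo "  --availability-zone $az"
  i=$((i + 1))
done
```

 To actually create the instances, pipe the output through `sh` or remove the `echo` wrappers after reviewing the commands. 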

 The following example adds a provisioned DB instance to a cloned Aurora PostgreSQL cluster that uses a custom DB subnet group. 

```
aws rds create-db-instance --db-cluster-identifier my_aurora_postgresql_clone \
  --db-instance-identifier my_postgres_instance \
  --db-subnet-group-name my_new_subnet \
  --engine aurora-postgresql \
  --db-instance-class db.t4g.medium
```

 The following example shows partial output from such a command. 

```
{
  'DBInstanceIdentifier': 'my_postgres_instance',
  'DBClusterIdentifier': 'my_aurora_postgresql_clone',
  'DBInstanceClass': 'db.t4g.medium',
  'DBInstanceStatus': 'creating'
  ...
}
```

 The following example adds an Aurora Serverless v2 DB instance to an Aurora MySQL clone that uses a custom DB subnet group. To be able to use Serverless v2 instances, make sure to specify the `--serverless-v2-scaling-configuration` option for the `restore-db-cluster-to-point-in-time` command, as shown in preceding examples. 

```
aws rds create-db-instance --db-cluster-identifier my_aurora_mysql_clone \
  --db-instance-identifier my_mysql_instance \
  --db-subnet-group-name my_other_new_subnet \
  --engine aurora-mysql \
  --db-instance-class db.serverless
```

 The following example shows partial output from such a command. 

```
{
  'DBInstanceIdentifier': 'my_mysql_instance',
  'DBClusterIdentifier': 'my_aurora_mysql_clone',
  'DBInstanceClass': 'db.serverless',
  'DBInstanceStatus': 'creating'
  ...
}
```

### Step 3: Establishing connectivity from a client system to a clone


 If you are already connecting to an Aurora cluster from a client system, you might want to allow the same type of connectivity to a new clone. For example, you might connect to the original cluster from an AWS Cloud9 instance or an EC2 instance. To allow connections from the same client systems, or from new ones that you create in the destination VPC, set up DB subnet groups and VPC security groups equivalent to those used by the original cluster. Then specify the subnet group and security groups when you create the clone. 

 The following examples set up an Aurora Serverless v2 clone. That configuration is based on the combination of `--engine-mode provisioned` and `--serverless-v2-scaling-configuration` when creating the DB cluster, and then `--db-instance-class db.serverless` when creating each DB instance in the cluster. The `provisioned` engine mode is the default, so you can omit that option if you prefer. 

```
aws rds restore-db-cluster-to-point-in-time \
  --source-db-cluster-identifier serverless-sql-postgres \
  --db-cluster-identifier serverless-sql-postgres-clone \
  --db-subnet-group-name 'default-vpc-1234' \
  --vpc-security-group-ids 'sg-4567' \
  --serverless-v2-scaling-configuration 'MinCapacity=0.5,MaxCapacity=16' \
  --restore-type copy-on-write \
  --use-latest-restorable-time
```

 Then, when creating the DB instances in the clone, specify the same `--db-subnet-group-name` option. Optionally, you can include the `--availability-zone` option and specify one of the AZs associated with the subnets in that subnet group. That AZ must also be one of the AZs associated with the original cluster. 

```
aws rds create-db-instance \
  --db-cluster-identifier serverless-sql-postgres-clone \
  --db-instance-identifier serverless-sql-postgres-clone-instance \
  --db-instance-class db.serverless \
  --db-subnet-group-name 'default-vpc-1234' \
  --availability-zone 'us-east-1c' \
  --engine aurora-postgresql
```

## Moving a cluster from public subnets to private ones


 You can use cloning to migrate a cluster between public and private subnets. You might do this when adding additional layers of security to your application before deploying it to production. For this example, you should already have the private subnets and NAT gateway set up before starting the cloning process with Aurora. 

 For the steps involving Aurora, you can follow the same general steps as in the preceding examples to [Gathering information about the network environment](#gathering-information-about-the-network-environment) and [Creating an Aurora clone with new network settings](#creating-an-aurora-clone-with-new-network-settings). The main difference is that even if you have public subnets that map to all the AZs from the original cluster, now you must verify that you have enough private subnets for an Aurora cluster, and that those subnets are associated with all the same AZs that are used for Aurora storage in the original cluster. Similar to other cloning use cases, you can make the DB subnet group for the clone with either three or two private subnets that are associated with the required AZs. However, if you use two private subnets in the DB subnet group, you must have a third private subnet that's associated with the third AZ used for Aurora storage in the original cluster. 

 You can consult this checklist to verify that all the requirements are in place to perform this type of cloning operation. 
+  Record the three AZs that are associated with the original cluster. For instructions, see [Step 1: Check the Availability Zones of the original cluster](#cross-vpc-cloning-check-original-azs). 
+  Record the three or two AZs that are associated with the public subnets in the DB subnet group for the original cluster. For instructions, see [Step 2: Check the DB subnet group of the original cluster](#cross-vpc-cloning-check-original-subnet-group). 
+  Create private subnets that map to all three of the AZs that are associated with the original cluster. Also do any other networking setup, such as creating a NAT gateway, to be able to communicate with the private subnets. For instructions, see [Create a subnet](https://docs.aws.amazon.com/vpc/latest/userguide/create-subnets.html) in the *Amazon Virtual Private Cloud User Guide*. 
+  Create a new DB subnet group containing three or two of the private subnets that are associated with the AZs from the first point. For instructions, see [Step 2: Create the DB subnet group for the clone](#cross-vpc-cloning-create-subnet-group). 

 When all the prerequisites are in place, you can pause database activity on the original cluster while you create the clone and switch your application to use it. After the clone is created and you verify that you can connect to it, run your application code, and so on, you can discontinue use of the original cluster. 

## End-to-end example of creating a cross-VPC clone


 Creating a clone in a different VPC than the original uses the same general steps as in the preceding examples. Because the VPC ID is a property of the subnets, you don't actually specify the VPC ID as a parameter when running any of the RDS CLI commands. The main difference is that you are more likely to need to create new subnets mapped to specific AZs, a VPC security group, and a new DB subnet group. That's especially true if this is the first Aurora cluster that you create in that VPC. 

 You can consult this checklist to verify that all the requirements are in place to perform this type of cloning operation. 
+  Record the three AZs that are associated with the original cluster. For instructions, see [Step 1: Check the Availability Zones of the original cluster](#cross-vpc-cloning-check-original-azs). 
+  Record the three or two AZs that are associated with the subnets in the DB subnet group for the original cluster. For instructions, see [Step 2: Check the DB subnet group of the original cluster](#cross-vpc-cloning-check-original-subnet-group). 
+  Create subnets that map to all three of the AZs that are associated with the original cluster. For instructions, see [Step 1: Create the subnets for the clone](#cross-vpc-cloning-create-clone-subnets). 
+  Do any other networking setup, such as setting up a VPC security group, for client systems, application servers, and so on to be able to communicate with the DB instances in the clone. For instructions, see [Controlling access with security groups](Overview.RDSSecurityGroups.md). 
+  Create a new DB subnet group containing three or two of the subnets that are associated with the AZs from the first point. For instructions, see [Step 2: Create the DB subnet group for the clone](#cross-vpc-cloning-create-subnet-group). 

 When all the prerequisites are in place, you can pause database activity on the original cluster while you create the clone and switch your application to use it. After the clone is created and you verify that you can connect to it, run your application code, and so on, you can consider whether to keep both the original cluster and the clone running, or discontinue use of the original cluster. 

 The following Linux examples show the sequence of AWS CLI operations to clone an Aurora DB cluster from one VPC to another. Fields that aren't relevant to the examples are omitted from the command output. 

 First, we check the IDs of the source and destination VPCs. The descriptive name that you assign to a VPC when you create it is represented as a tag in the VPC metadata. 

```
$ aws ec2 describe-vpcs --query '*[].[VpcId,Tags]'
[
    [
        'vpc-0f0c0fc0000b0ffb0',
        [
            {
                'Key': 'Name',
                'Value': 'clone-vpc-source'
            }
        ]
    ],
    [
        'vpc-9e99d9f99a999bd99',
        [
            {
                'Key': 'Name',
                'Value': 'clone-vpc-dest'
            }
        ]
    ]
]
```

 The original cluster already exists in the source VPC. To set up the clone using the same set of AZs for the Aurora storage, we check the AZs used by the original cluster. 

```
$ aws rds describe-db-clusters --db-cluster-identifier original-cluster \
  --query 'sort_by(*[].AvailabilityZones[].{Zone:@},&Zone)' --output text

us-east-1c
us-east-1d
us-east-1f
```

 We make sure there are subnets that correspond to the AZs used by the original cluster: `us-east-1c`, `us-east-1d`, and `us-east-1f`. 

```
$ aws ec2 create-subnet --vpc-id vpc-9e99d9f99a999bd99 \
  --availability-zone us-east-1c --cidr-block 10.0.0.128/28
{
    'Subnet': {
        'AvailabilityZone': 'us-east-1c',
        'SubnetId': 'subnet-3333a33be3ef3e333',
        'VpcId': 'vpc-9e99d9f99a999bd99',
    }
}

$ aws ec2 create-subnet --vpc-id vpc-9e99d9f99a999bd99 \
  --availability-zone us-east-1d --cidr-block 10.0.0.160/28
{
    'Subnet': {
        'AvailabilityZone': 'us-east-1d',
        'SubnetId': 'subnet-4eeb444cd44b4d444',
        'VpcId': 'vpc-9e99d9f99a999bd99',
    }
}

$ aws ec2 create-subnet --vpc-id vpc-9e99d9f99a999bd99 \
  --availability-zone us-east-1f --cidr-block 10.0.0.224/28
{
    'Subnet': {
        'AvailabilityZone': 'us-east-1f',
        'SubnetId': 'subnet-66eea6666fb66d66c',
        'VpcId': 'vpc-9e99d9f99a999bd99',
    }
}
```

 This example confirms that there are subnets that map to the necessary AZs in the destination VPC. 

```
$ aws ec2 describe-subnets --query 'sort_by(*[] | [?VpcId == `vpc-9e99d9f99a999bd99`] |
[].{SubnetId:SubnetId,VpcId:VpcId,AvailabilityZone:AvailabilityZone}, &AvailabilityZone)' --output table

---------------------------------------------------------------------------
|                             DescribeSubnets                             |
+------------------+----------------------------+-------------------------+
| AvailabilityZone |         SubnetId           |          VpcId          |
+------------------+----------------------------+-------------------------+
|  us-east-1a      |  subnet-000ff0e00000c0aea  |  vpc-9e99d9f99a999bd99  |
|  us-east-1b      |  subnet-1111d111111ca11b1  |  vpc-9e99d9f99a999bd99  |
|  us-east-1c      |  subnet-3333a33be3ef3e333  |  vpc-9e99d9f99a999bd99  |
|  us-east-1d      |  subnet-4eeb444cd44b4d444  |  vpc-9e99d9f99a999bd99  |
|  us-east-1f      |  subnet-66eea6666fb66d66c  |  vpc-9e99d9f99a999bd99  |
+------------------+----------------------------+-------------------------+
```

 Before creating an Aurora DB cluster in the VPC, you must have a DB subnet group with subnets that map to the AZs used for Aurora storage. When you create a regular cluster, you can use any set of three AZs. When you clone an existing cluster, the subnet group must match at least two of the three AZs that it uses for Aurora storage. 

```
$ aws rds create-db-subnet-group \
  --db-subnet-group-name subnet-group-in-other-vpc \
  --subnet-ids '["subnet-3333a33be3ef3e333","subnet-4eeb444cd44b4d444","subnet-66eea6666fb66d66c"]' \
  --db-subnet-group-description 'DB subnet group with 3 subnets: subnet-3333a33be3ef3e333,subnet-4eeb444cd44b4d444,subnet-66eea6666fb66d66c'

{
    'DBSubnetGroup': {
        'DBSubnetGroupName': 'subnet-group-in-other-vpc',
        'DBSubnetGroupDescription': 'DB subnet group with 3 subnets: subnet-3333a33be3ef3e333,subnet-4eeb444cd44b4d444,subnet-66eea6666fb66d66c',
        'VpcId': 'vpc-9e99d9f99a999bd99',
        'SubnetGroupStatus': 'Complete',
        'Subnets': [
            {
                'SubnetIdentifier': 'subnet-4eeb444cd44b4d444',
                'SubnetAvailabilityZone': { 'Name': 'us-east-1d' }
            },
            {
                'SubnetIdentifier': 'subnet-3333a33be3ef3e333',
                'SubnetAvailabilityZone': { 'Name': 'us-east-1c' }
            },
            {
                'SubnetIdentifier': 'subnet-66eea6666fb66d66c',
                'SubnetAvailabilityZone': { 'Name': 'us-east-1f' }
            }
        ]
    }
}
```

 Now the subnets and DB subnet group are in place. The following example shows the `restore-db-cluster-to-point-in-time` command that clones the cluster. The `--db-subnet-group-name` option associates the clone with the correct set of subnets that map to the correct set of AZs from the original cluster. 

```
$ aws rds restore-db-cluster-to-point-in-time \
  --source-db-cluster-identifier original-cluster \
  --db-cluster-identifier clone-in-other-vpc \
  --restore-type copy-on-write --use-latest-restorable-time \
  --db-subnet-group-name subnet-group-in-other-vpc

{
  'DBClusterIdentifier': 'clone-in-other-vpc',
  'DBSubnetGroup': 'subnet-group-in-other-vpc',
  'Engine': 'aurora-postgresql',
  'EngineVersion': '15.4',
  'Status': 'creating',
  'Endpoint': 'clone-in-other-vpc.cluster-c0abcdef.us-east-1.rds.amazonaws.com'
}
```

 The following example confirms that the Aurora storage in the clone uses the same set of AZs as in the original cluster. 

```
$ aws rds describe-db-clusters --db-cluster-identifier clone-in-other-vpc \
  --query 'sort_by(*[].AvailabilityZones[].{Zone:@},&Zone)' --output text

us-east-1c
us-east-1d
us-east-1f
```

 At this point, you can create DB instances for the clone. Make sure that the VPC security group associated with each instance allows connections from the IP address ranges you use for the EC2 instances, application servers, and so on that are in the destination VPC. 

# Cross-account cloning with AWS RAM and Amazon Aurora
Cross-account cloning

By using AWS Resource Access Manager (AWS RAM) with Amazon Aurora, you can share Aurora DB clusters and clones that belong to your AWS account with another AWS account or organization. Such *cross-account cloning* is much faster than creating and restoring a database snapshot. You can create a clone of one of your Aurora DB clusters and share the clone. Or you can share your Aurora DB cluster with another AWS account and let the account holder create the clone. The approach that you choose depends on your use case.

For example, you might need to regularly share a clone of your financial database with your organization's internal auditing team. In this case, your auditing team has its own AWS account for the applications that it uses. You can give the auditing team's AWS account the permission to access your Aurora DB cluster and clone it as needed. 

On the other hand, if an outside vendor audits your financial data you might prefer to create the clone yourself. You then give the outside vendor access to the clone only.

You can also use cross-account cloning to support many of the same use cases for cloning within the same AWS account, such as development and testing. For example, your organization might use different AWS accounts for production, development, testing, and so on. For more information, see [Overview of Aurora cloning](Aurora.Managing.Clone.md#Aurora.Clone.Overview). 

Thus, you might want to share a clone with another AWS account or allow another AWS account to create clones of your Aurora DB clusters. In either case, start by using AWS RAM to create a share object. For complete information about sharing AWS resources between AWS accounts, see the [AWS RAM User Guide](https://docs.aws.amazon.com/ram/latest/userguide/). 

Creating a cross-account clone requires actions from the AWS account that owns the original cluster, and the AWS account that creates the clone. First, the original cluster owner modifies the cluster to allow one or more other accounts to clone it. If any of the accounts is in a different AWS organization, AWS generates a sharing invitation. The other account must accept the invitation before proceeding. Then each authorized account can clone the cluster. Throughout this process, the cluster is identified by its unique Amazon Resource Name (ARN). 

As with cloning within the same AWS account, additional storage space is used only if changes are made to the data by the source or the clone. Charges for storage are then applied at that time. If the source cluster is deleted, storage costs are distributed equally among remaining cloned clusters. 

**Topics**
+ [Limitations of cross-account cloning](#Aurora.Managing.Clone.CrossAccount.Limitations)
+ [Allowing other AWS accounts to clone your cluster](#Aurora.Managing.Clone.CrossAccount.yours)
+ [Cloning a cluster that is owned by another AWS account](#Aurora.Managing.Clone.CrossAccount.theirs)

## Limitations of cross-account cloning

 Aurora cross-account cloning has the following limitations: 
+ You can't clone an Aurora Serverless v1 cluster across AWS accounts.
+ You can't view or accept invitations to shared resources with the AWS Management Console. Use the AWS CLI, the Amazon RDS API, or the AWS RAM console to view and accept invitations to shared resources.
+ You can create only one new clone from a resource that's been shared with your AWS account. This applies whether the shared resource is an original Aurora DB cluster or a previously created clone.
+ You can't share resources (clones or Aurora DB clusters) that have been shared with your AWS account.
+ You can create a maximum of 15 cross-account clones from any single Aurora DB cluster. 
+  Each of the 15 cross-account clones must be owned by a different AWS account. That is, you can only create one cross-account clone of a cluster within any AWS account. 
+  After you clone a cluster, the original cluster and its clone are considered to be the same for purposes of enforcing limits on cross-account clones. You can't create cross-account clones of both the original cluster and the cloned cluster within the same AWS account. The total number of cross-account clones for the original cluster and any of its clones can't exceed 15. 
+ You can't share an Aurora DB cluster with other AWS accounts unless the cluster is in an `ACTIVE` state. 
+ You can't rename an Aurora DB cluster that's been shared with other AWS accounts. 
+  You can't create a cross-account clone of a cluster that is encrypted with the default RDS key. 
+ You can't create nonencrypted clones in one AWS account from encrypted Aurora DB clusters that have been shared by another AWS account. The cluster owner must grant permission to access the source cluster's AWS KMS key. However, you can use a different key when you create the clone. 

## Allowing other AWS accounts to clone your cluster


 To allow other AWS accounts to clone a cluster that you own, use AWS RAM to set the sharing permission. Doing so also sends an invitation to each account that's in a different AWS organization. 

 For the procedures to share resources owned by you in the AWS RAM console, see [Sharing resources owned by you](https://docs.aws.amazon.com/ram/latest/userguide/working-with-sharing.html) in the *AWS RAM User Guide*. 

**Topics**
+ [Granting permission to other AWS accounts to clone your cluster](#Aurora.Managing.Clone.CrossAccount.granting)
+ [Checking if a cluster that you own is shared with other AWS accounts](#Aurora.Managing.Clone.CrossAccount.confirming)

### Granting permission to other AWS accounts to clone your cluster


 If the cluster that you're sharing is encrypted, you also share the AWS KMS key for the cluster. You can allow AWS Identity and Access Management (IAM) users or roles in one AWS account to use a KMS key in a different account. 

To do this, you first add the external account (root user) to the KMS key's key policy through AWS KMS. You don't add the individual users or roles to the key policy, only the external account that owns them. You can only share a KMS key that you create, not the default RDS service key. For information about access control for KMS keys, see [Authentication and access control for AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/control-access.html). 

#### Console


**To grant permission to clone your cluster**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1.  In the navigation pane, choose **Databases**. 

1.  Choose the DB cluster that you want to share to see its **Details** page, and choose the **Connectivity & security** tab. 

1.  In the **Share DB cluster with other AWS accounts** section, enter the numeric account ID for the AWS account that you want to allow to clone this cluster. For account IDs in the same organization, you can begin typing in the box and then choose from the menu. 
**Important**  
 In some cases, you might want an account that is not in the same AWS organization as your account to clone a cluster. In these cases, for security reasons the console doesn't report who owns that account ID or whether the account exists.   
Be careful entering account numbers that are not in the same AWS organization as your AWS account. Immediately verify that you shared with the intended account. 

1.  On the confirmation page, verify that the account ID that you specified is correct. Enter `share` in the confirmation box to confirm. 

    On the **Details** page, an entry appears that shows the specified AWS account ID under **Accounts that this DB cluster is shared with**. The **Status** column initially shows a status of **Pending**. 

1.  Contact the owner of the other AWS account, or sign in to that account if you own both of them. Instruct the owner of the other account to accept the sharing invitation and clone the DB cluster, as described following. 

#### AWS CLI


**To grant permission to clone your cluster**

1.  Gather the information for the required parameters. You need the ARN for your cluster and the numeric ID for the other AWS account. 

1.  Run the AWS RAM CLI command [create-resource-share](https://docs.aws.amazon.com/cli/latest/reference/ram/create-resource-share.html). 

   For Linux, macOS, or Unix:

   ```
   aws ram create-resource-share --name descriptive_name \
     --region region \
     --resource-arns cluster_arn \
     --principals other_account_ids
   ```

   For Windows:

   ```
   aws ram create-resource-share --name descriptive_name ^
     --region region ^
     --resource-arns cluster_arn ^
     --principals other_account_ids
   ```

    To include multiple account IDs for the `--principals` parameter, separate IDs from each other with spaces. To specify whether the permitted account IDs can be outside your AWS organization, include the `--allow-external-principals` or `--no-allow-external-principals` parameter for `create-resource-share`. 
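 For example, a hedged sketch of a share that covers two accounts, where one account might be outside your AWS organization (the share name and all account IDs here are illustrative): 

```
aws ram create-resource-share --name financial-cluster-share \
  --region us-east-1 \
  --resource-arns arn:aws:rds:us-east-1:111122223333:cluster:financial-cluster \
  --principals 444455556666 777788889999 \
  --allow-external-principals
```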

#### AWS RAM API


**To grant permission to clone your cluster**

1.  Gather the information for the required parameters. You need the ARN for your cluster and the numeric ID for the other AWS account. 

1.  Call the AWS RAM API operation [CreateResourceShare](https://docs.aws.amazon.com/ram/latest/APIReference/API_CreateResourceShare.html), and specify the following values: 
   +  Specify the account ID for one or more AWS accounts as the `principals` parameter. 
   +  Specify the ARN for one or more Aurora DB clusters as the `resourceArns` parameter. 
   +  Specify whether the permitted account IDs can be outside your AWS organization by including a Boolean value for the `allowExternalPrincipals` parameter. 

#### Recreating a cluster that uses the default RDS key


If the encrypted cluster that you plan to share uses the default RDS key, make sure to recreate the cluster. To do this, create a manual snapshot of your DB cluster, copy the snapshot using a KMS key that you own, and then restore the copied snapshot to a new cluster. Then share the new cluster. To perform this process, take the following steps.

**To recreate an encrypted cluster that uses the default RDS key**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1.  Choose **Snapshots** from the navigation pane. 

1.  Choose your snapshot. 

1.  For **Actions**, choose **Copy Snapshot**, and then choose **Enable encryption**. 

1.  For **AWS KMS key**, choose the new encryption key that you want to use. 

1.  Restore the copied snapshot. To do so, follow the procedure in [Restoring from a DB cluster snapshot](aurora-restore-snapshot.md). The new DB cluster uses your new encryption key. 

1.  (Optional) Delete the old DB cluster if you no longer need it. To do so, follow the procedure in [Deleting Aurora DB clusters and DB instances](USER_DeleteCluster.md). Before you do, confirm that your new cluster has all necessary data and that your application can access it successfully. 
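 The same snapshot, copy, and restore sequence can be sketched with the AWS CLI. All identifiers and the KMS key ARN below are illustrative: 

```
# Snapshot the cluster that currently uses the default RDS key
aws rds create-db-cluster-snapshot \
  --db-cluster-identifier my-cluster \
  --db-cluster-snapshot-identifier my-cluster-snap

# Copy the snapshot, re-encrypting it with a KMS key that you own
aws rds copy-db-cluster-snapshot \
  --source-db-cluster-snapshot-identifier my-cluster-snap \
  --target-db-cluster-snapshot-identifier my-cluster-snap-cmk \
  --kms-key-id arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab

# Restore the copied snapshot to a new cluster that you can then share
aws rds restore-db-cluster-from-snapshot \
  --db-cluster-identifier my-cluster-shareable \
  --snapshot-identifier my-cluster-snap-cmk \
  --engine aurora-postgresql
```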

### Checking if a cluster that you own is shared with other AWS accounts


 You can check whether other AWS accounts have permission to clone a cluster that you own. Doing so can help you understand whether the cluster is approaching the limit for the maximum number of cross-account clones. 

 For the procedures to share resources using the AWS RAM console, see [Sharing resources owned by you](https://docs.aws.amazon.com/ram/latest/userguide/working-with-sharing.html) in the *AWS RAM User Guide*. 

#### AWS CLI


**To find out if a cluster that you own is shared with other AWS accounts**
+  Call the AWS RAM CLI command [list-principals](https://docs.aws.amazon.com/cli/latest/reference/ram/list-principals.html), using `SELF` as the resource owner and the ARN of your cluster as the resource ARN. The results indicate which AWS accounts are allowed to clone the cluster. 

  ```
  aws ram list-principals \
      --resource-owner SELF \
      --resource-arns your_cluster_arn
  ```

#### AWS RAM API


**To find out if a cluster that you own is shared with other AWS accounts**
+  Call the AWS RAM API operation [ListPrincipals](https://docs.aws.amazon.com/ram/latest/APIReference/API_ListPrincipals.html). Use your account ID as the resource owner and the ARN of your cluster as the resource ARN. 

## Cloning a cluster that is owned by another AWS account


 To clone a cluster that's owned by another AWS account, use AWS RAM to get permission to make the clone. After you have the required permission, use the standard procedure for cloning an Aurora cluster. 

 You can also check whether a cluster that you own is a clone of a cluster owned by a different AWS account. 

 For the procedures to work with resources owned by others in the AWS RAM console, see [Accessing resources shared with you](https://docs.aws.amazon.com/ram/latest/userguide/working-with-shared.html) in the *AWS RAM User Guide.* 

**Topics**
+ [Viewing invitations to clone clusters that are owned by other AWS accounts](#Aurora.Managing.Clone.CrossAccount.viewing)
+ [Accepting invitations to share clusters owned by other AWS accounts](#Aurora.Managing.Clone.CrossAccount.accepting)
+ [Cloning an Aurora cluster that is owned by another AWS account](#Aurora.Managing.Clone.CrossAccount.cloning)
+ [Checking if a DB cluster is a cross-account clone](#Aurora.Managing.Clone.CrossAccount.checking)

### Viewing invitations to clone clusters that are owned by other AWS accounts


 To work with invitations to clone clusters owned by AWS accounts in other AWS organizations, use the AWS CLI, the AWS RAM console, or the AWS RAM API. Currently, you can't perform this procedure using the Amazon RDS console. 

 For the procedures to work with invitations in the AWS RAM console, see [Accessing resources shared with you](https://docs.aws.amazon.com/ram/latest/userguide/working-with-shared.html) in the *AWS RAM User Guide*. 

#### AWS CLI


**To see invitations to clone clusters that are owned by other AWS accounts**

1.  Run the AWS RAM CLI command [get-resource-share-invitations](https://docs.aws.amazon.com/cli/latest/reference/ram/get-resource-share-invitations.html). 

   ```
   aws ram get-resource-share-invitations --region region_name
   ```

    The results from the preceding command show all invitations to clone clusters, including any that you already accepted or rejected. 

1.  (Optional) Filter the list so that you see only the invitations that require action from you. To do so, add the parameter `--query "resourceShareInvitations[?status=='PENDING']"`. 
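 The status filter can also be sketched locally against a saved copy of the command's output. The file contents below are hypothetical, and `grep` stands in for the `--query` filter: 

```
# Hypothetical local copy of get-resource-share-invitations output
cat > /tmp/invitations.json <<'EOF'
{
  "resourceShareInvitations": [
    { "resourceShareInvitationArn": "arn:aws:ram:us-east-1:111122223333:resource-share-invitation/accepted-1",
      "status": "ACCEPTED" },
    { "resourceShareInvitationArn": "arn:aws:ram:us-east-1:111122223333:resource-share-invitation/pending-1",
      "status": "PENDING" }
  ]
}
EOF

# Keep only invitations that still require action; -B1 also prints the
# ARN line immediately preceding each matching status line
grep -B1 '"status": "PENDING"' /tmp/invitations.json
```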

#### AWS RAM API


**To see invitations to clone clusters that are owned by other AWS accounts**

1.  Call the AWS RAM API operation [GetResourceShareInvitations](https://docs.aws.amazon.com/ram/latest/APIReference/API_GetResourceShareInvitations.html). This operation returns all such invitations, including any that you already accepted or rejected. 

1.  (Optional) Find only the invitations that require action from you by checking the `resourceShareAssociations` return field for a `status` value of `PENDING`. 

### Accepting invitations to share clusters owned by other AWS accounts


 You can accept invitations to share clusters owned by other AWS accounts that are in different AWS organizations. To work with these invitations, use the AWS CLI, the AWS RAM and RDS APIs, or the AWS RAM console. Currently, you can't perform this procedure using the RDS console. 

 For the procedures to work with invitations in the AWS RAM console, see [Accessing resources shared with you](https://docs.aws.amazon.com/ram/latest/userguide/working-with-shared.html) in the *AWS RAM User Guide*. 

#### AWS CLI


**To accept an invitation to share a cluster from another AWS account**

1.  Find the invitation ARN by running the AWS RAM CLI command [get-resource-share-invitations](https://docs.aws.amazon.com/cli/latest/reference/ram/get-resource-share-invitations.html), as shown preceding. 

1.  Accept the invitation by calling the AWS RAM CLI command [accept-resource-share-invitation](https://docs.aws.amazon.com/cli/latest/reference/ram/accept-resource-share-invitation.html), as shown following. 

   For Linux, macOS, or Unix:

   ```
   aws ram accept-resource-share-invitation \
     --resource-share-invitation-arn invitation_arn \
     --region region
   ```

   For Windows:

   ```
   aws ram accept-resource-share-invitation ^
     --resource-share-invitation-arn invitation_arn ^
     --region region
   ```

#### AWS RAM API


**To accept an invitation to share a cluster from another AWS account**

1.  Find the invitation ARN by calling the AWS RAM API operation [GetResourceShareInvitations](https://docs.aws.amazon.com/ram/latest/APIReference/API_GetResourceShareInvitations.html), as shown preceding. 

1.  Pass that ARN as the `resourceShareInvitationArn` parameter to the AWS RAM API operation [AcceptResourceShareInvitation](https://docs.aws.amazon.com/ram/latest/APIReference/API_AcceptResourceShareInvitation.html). 

### Cloning an Aurora cluster that is owned by another AWS account


 After you accept the invitation from the AWS account that owns the DB cluster, as shown preceding, you can clone the cluster. 

#### Console


**To clone an Aurora cluster that is owned by another AWS account**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1.  In the navigation pane, choose **Databases**. 

    At the top of the database list, you should see one or more items with a **Role** value of `Shared from account #account_id`. For security reasons, you can see only limited information about the original clusters. The properties that you can see, such as the database engine and version, are the ones that must be the same in your cloned cluster. 

1.  Choose the cluster that you intend to clone. 

1.  For **Actions**, choose **Create clone**. 

1.  Follow the procedure in [Console](Aurora.Managing.Clone.md#Aurora.Managing.Clone.Console) to finish setting up the cloned cluster. 

1. As needed, enable encryption for the cloned cluster. If the cluster that you are cloning is encrypted, you must enable encryption for the cloned cluster. The AWS account that shared the cluster with you must also share the KMS key that was used to encrypt the cluster. You can use the same KMS key to encrypt the clone, or your own KMS key. You can't create a cross-account clone for a cluster that is encrypted with the default KMS key. 

    The account that owns the encryption key must grant permission to use the key to the destination account by using a key policy. This process is similar to how encrypted snapshots are shared, by using a key policy that grants permission to the destination account to use the key. 

#### AWS CLI


**To clone an Aurora cluster owned by another AWS account**

1.  Accept the invitation from the AWS account that owns the DB cluster, as shown preceding. 

1.  Clone the cluster by specifying the full ARN of the source cluster in the `source-db-cluster-identifier` parameter of the RDS CLI command [restore-db-cluster-to-point-in-time](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-to-point-in-time.html), as shown following. 

    If the ARN passed as the `source-db-cluster-identifier` hasn't been shared, the same error is returned as if the specified cluster doesn't exist. 

   For Linux, macOS, or Unix:

   ```
   aws rds restore-db-cluster-to-point-in-time \
     --source-db-cluster-identifier=arn:aws:rds:arn_details \
     --db-cluster-identifier=new_cluster_id \
     --restore-type=copy-on-write \
     --use-latest-restorable-time
   ```

   For Windows:

   ```
   aws rds restore-db-cluster-to-point-in-time ^
     --source-db-cluster-identifier=arn:aws:rds:arn_details ^
     --db-cluster-identifier=new_cluster_id ^
     --restore-type=copy-on-write ^
     --use-latest-restorable-time
   ```

1.  If the cluster that you are cloning is encrypted, encrypt your cloned cluster by including a `kms-key-id` parameter. This `kms-key-id` value can be the same one used to encrypt the original DB cluster, or your own KMS key. Your account must have permission to use that encryption key. 

   For Linux, macOS, or Unix:

   ```
   aws rds restore-db-cluster-to-point-in-time \
     --source-db-cluster-identifier=arn:aws:rds:arn_details \
     --db-cluster-identifier=new_cluster_id \
     --restore-type=copy-on-write \
     --use-latest-restorable-time \
     --kms-key-id=arn:aws:kms:arn_details
   ```

   For Windows:

   ```
   aws rds restore-db-cluster-to-point-in-time ^
     --source-db-cluster-identifier=arn:aws:rds:arn_details ^
     --db-cluster-identifier=new_cluster_id ^
     --restore-type=copy-on-write ^
     --use-latest-restorable-time ^
     --kms-key-id=arn:aws:kms:arn_details
   ```

    The account that owns the encryption key must grant permission to use the key to the destination account by using a key policy. This process is similar to how encrypted snapshots are shared, by using a key policy that grants permission to the destination account to use the key. An example of a key policy follows. 

   ```
   {
       "Id": "key-policy-1",
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "Allow use of the key",
               "Effect": "Allow",
               "Principal": {
                   "AWS": [
                       "arn:aws:iam::111122223333:user/KeyUser",
                       "arn:aws:iam::111122223333:root"
                   ]
               },
               "Action": [
                   "kms:CreateGrant",
                   "kms:Encrypt",
                   "kms:Decrypt",
                   "kms:ReEncrypt*",
                   "kms:GenerateDataKey*",
                   "kms:DescribeKey"
               ],
               "Resource": "*"
           },
           {
               "Sid": "Allow attachment of persistent resources",
               "Effect": "Allow",
               "Principal": {
                   "AWS": [
                       "arn:aws:iam::111122223333:user/KeyUser",
                       "arn:aws:iam::111122223333:root"
                   ]
               },
               "Action": [
                   "kms:CreateGrant",
                   "kms:ListGrants",
                   "kms:RevokeGrant"
               ],
               "Resource": "*",
               "Condition": {
                   "Bool": {
                       "kms:GrantIsForAWSResource": true
                   }
               }
           }
       ]
   }
   ```


**Note**  
The [restore-db-cluster-to-point-in-time](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-to-point-in-time.html) AWS CLI command restores only the DB cluster, not the DB instances for that DB cluster. To create DB instances for the restored DB cluster, invoke the [create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html) command. Specify the identifier of the restored DB cluster in `--db-cluster-identifier`.   
You can create DB instances only after the `restore-db-cluster-to-point-in-time` command has completed and the DB cluster is available.
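 A hedged sketch of that sequence with the AWS CLI, using illustrative identifiers and an illustrative instance class: 

```
# Wait until the restored cluster is available before adding instances
aws rds wait db-cluster-available --db-cluster-identifier new_cluster_id

aws rds create-db-instance \
  --db-cluster-identifier new_cluster_id \
  --db-instance-identifier new_cluster_instance_1 \
  --db-instance-class db.r6g.large \
  --engine aurora-mysql
```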

#### RDS API


**To clone an Aurora cluster owned by another AWS account**

1.  Accept the invitation from the AWS account that owns the DB cluster, as shown preceding. 

1.  Clone the cluster by specifying the full ARN of the source cluster in the `SourceDBClusterIdentifier` parameter of the RDS API operation [RestoreDBClusterToPointInTime](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterToPointInTime.html). 

    If the ARN passed as the `SourceDBClusterIdentifier` hasn't been shared, then the same error is returned as if the specified cluster doesn't exist. 

1.  If the cluster that you are cloning is encrypted, include a `KmsKeyId` parameter to encrypt your cloned cluster. This `KmsKeyId` value can be the same one used to encrypt the original DB cluster, or your own KMS key. Your account must have permission to use that encryption key. 

    When you clone a volume, the destination account must have permission to use the encryption key used to encrypt the source cluster. Aurora encrypts the new cloned cluster with the encryption key specified in `KmsKeyId`. 

    The account that owns the encryption key must grant permission to use the key to the destination account by using a key policy. This process is similar to how encrypted snapshots are shared, by using a key policy that grants permission to the destination account to use the key. An example of a key policy follows. 

   ```
   {
       "Id": "key-policy-1",
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "Allow use of the key",
               "Effect": "Allow",
               "Principal": {
                   "AWS": [
                       "arn:aws:iam::111122223333:user/KeyUser",
                       "arn:aws:iam::111122223333:root"
                   ]
               },
               "Action": [
                   "kms:CreateGrant",
                   "kms:Encrypt",
                   "kms:Decrypt",
                   "kms:ReEncrypt*",
                   "kms:GenerateDataKey*",
                   "kms:DescribeKey"
               ],
               "Resource": "*"
           },
           {
               "Sid": "Allow attachment of persistent resources",
               "Effect": "Allow",
               "Principal": {
                   "AWS": [
                       "arn:aws:iam::111122223333:user/KeyUser",
                       "arn:aws:iam::111122223333:root"
                   ]
               },
               "Action": [
                   "kms:CreateGrant",
                   "kms:ListGrants",
                   "kms:RevokeGrant"
               ],
               "Resource": "*",
               "Condition": {
                   "Bool": {
                       "kms:GrantIsForAWSResource": true
                   }
               }
           }
       ]
   }
   ```


**Note**  
The [RestoreDBClusterToPointInTime](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterToPointInTime.html) RDS API operation restores only the DB cluster, not the DB instances for that DB cluster. To create DB instances for the restored DB cluster, invoke the [CreateDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html) RDS API operation. Specify the identifier of the restored DB cluster in `DBClusterIdentifier`. You can create DB instances only after the `RestoreDBClusterToPointInTime` operation has completed and the DB cluster is available.

### Checking if a DB cluster is a cross-account clone


 The `DBClusters` object identifies whether each cluster is a cross-account clone. You can see the clusters that you have permission to clone by using the `include-shared` option when you run the RDS CLI command [describe-db-clusters](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-clusters.html). However, you can't see most of the configuration details for such clusters. 

#### AWS CLI


**To check if a DB cluster is a cross-account clone**
+  Call the RDS CLI command [describe-db-clusters](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-clusters.html). 

   The following example shows how actual or potential cross-account clone DB clusters appear in `describe-db-clusters` output. For existing clusters owned by your AWS account, the `CrossAccountClone` field indicates whether the cluster is a clone of a DB cluster that is owned by another AWS account. 

   In some cases, an entry might have a different AWS account number than yours in the `DBClusterArn` field. In this case, that entry represents a cluster that is owned by a different AWS account and that you can clone. Such entries have few fields other than `DBClusterArn`. When creating the cloned cluster, specify the same `StorageEncrypted`, `Engine`, and `EngineVersion` values as in the original cluster. 

  ```
  $ aws rds describe-db-clusters --include-shared --region us-east-1
  {
    "DBClusters": [
        {
            "EarliestRestorableTime": "2023-02-01T21:17:54.106Z",
            "Engine": "aurora-mysql",
            "EngineVersion": "8.0.mysql_aurora.3.02.0",
            "CrossAccountClone": false,
  ...
        },
        {
            "EarliestRestorableTime": "2023-02-09T16:01:07.398Z",
            "Engine": "aurora-mysql",
            "EngineVersion": "8.0.mysql_aurora.3.02.0",
            "CrossAccountClone": true,
  ...
        },
        {
            "StorageEncrypted": false,
            "DBClusterArn": "arn:aws:rds:us-east-1:12345678:cluster:cluster-abcdefgh",
            "Engine": "aurora-mysql",
            "EngineVersion": "8.0.mysql_aurora.3.02.0"
        }
    ]
  }
  ```
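 The check above can also be sketched locally. The sample file below imitates `describe-db-clusters --include-shared` output (the account IDs are illustrative), and the `grep` pipeline keeps only cluster ARNs owned by accounts other than your own, assumed here to be `111122223333`: 

```
# Hypothetical local copy of describe-db-clusters --include-shared output
cat > /tmp/clusters.json <<'EOF'
{
  "DBClusters": [
    { "DBClusterArn": "arn:aws:rds:us-east-1:111122223333:cluster:my-own-cluster",
      "CrossAccountClone": false },
    { "DBClusterArn": "arn:aws:rds:us-east-1:444455556666:cluster:shared-with-me",
      "StorageEncrypted": false }
  ]
}
EOF

# ARNs that embed a different account ID belong to clusters shared by other accounts
grep -o '"DBClusterArn": "[^"]*"' /tmp/clusters.json | grep -v ':111122223333:'
```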

#### RDS API


**To check if a DB cluster is a cross-account clone**
+  Call the RDS API operation [DescribeDBClusters](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBClusters.html). 

   For existing clusters owned by your AWS account, the `CrossAccountClone` field indicates whether the cluster is a clone of a DB cluster owned by another AWS account. Entries with a different AWS account number in the `DBClusterArn` field represent clusters that you can clone and that are owned by other AWS accounts. These entries have few fields other than `DBClusterArn`. When creating the cloned cluster, specify the same `StorageEncrypted`, `Engine`, and `EngineVersion` values as in the original cluster. 

   The following example shows a return value that demonstrates both actual and potential cloned clusters. 

  ```
  {
    "DBClusters": [
        {
            "EarliestRestorableTime": "2023-02-01T21:17:54.106Z",
            "Engine": "aurora-mysql",
            "EngineVersion": "8.0.mysql_aurora.3.02.0",
            "CrossAccountClone": false,
  ...
        },
        {
            "EarliestRestorableTime": "2023-02-09T16:01:07.398Z",
            "Engine": "aurora-mysql",
            "EngineVersion": "8.0.mysql_aurora.3.02.0",
            "CrossAccountClone": true,
  ...
        },
        {
            "StorageEncrypted": false,
            "DBClusterArn": "arn:aws:rds:us-east-1:12345678:cluster:cluster-abcdefgh",
            "Engine": "aurora-mysql",
            "EngineVersion": "8.0.mysql_aurora.3.02.0"
        }
    ]
  }
  ```

# Integrating Aurora with other AWS services

Integrate Amazon Aurora with other AWS services so that you can extend your Aurora DB cluster to use additional capabilities in the AWS Cloud. 

**Topics**
+ [Integrating AWS services with Amazon Aurora MySQL](#Aurora.Integrating.AuroraMySQL)
+ [Integrating AWS services with Amazon Aurora PostgreSQL](#Aurora.Integrating.AuroraPostgreSQL)

## Integrating AWS services with Amazon Aurora MySQL

Amazon Aurora MySQL integrates with other AWS services so that you can extend your Aurora MySQL DB cluster to use additional capabilities in the AWS Cloud. Your Aurora MySQL DB cluster can use AWS services to do the following:
+ Synchronously or asynchronously invoke an AWS Lambda function using the native functions `lambda_sync` or `lambda_async`. Or, asynchronously invoke an AWS Lambda function using the `mysql.lambda_async` procedure.
+ Load data from text or XML files stored in an Amazon S3 bucket into your DB cluster using the `LOAD DATA FROM S3` or `LOAD XML FROM S3` command.
+ Save data to text files stored in an Amazon S3 bucket from your DB cluster using the `SELECT INTO OUTFILE S3` command.
+ Automatically add or remove Aurora Replicas with Application Auto Scaling. For more information, see [Amazon Aurora Auto Scaling with Aurora Replicas](Aurora.Integrating.AutoScaling.md).

For more information about integrating Aurora MySQL with other AWS services, see [Integrating Amazon Aurora MySQL with other AWS services](AuroraMySQL.Integrating.md).

## Integrating AWS services with Amazon Aurora PostgreSQL
Aurora PostgreSQL

Amazon Aurora PostgreSQL integrates with other AWS services so that you can extend your Aurora PostgreSQL DB cluster to use additional capabilities in the AWS Cloud. Your Aurora PostgreSQL DB cluster can use AWS services to do the following:
+ Quickly collect, view, and assess performance on your relational database workloads with Performance Insights.
+ Automatically add or remove Aurora Replicas with Aurora Auto Scaling. For more information, see [Amazon Aurora Auto Scaling with Aurora Replicas](Aurora.Integrating.AutoScaling.md).

For more information about integrating Aurora PostgreSQL with other AWS services, see [Integrating Amazon Aurora PostgreSQL with other AWS services](AuroraPostgreSQL.Integrating.md).

# Maintaining an Amazon Aurora DB cluster
Maintaining an Aurora DB cluster

Periodically, Amazon RDS performs maintenance on Amazon RDS resources. The following topics describe these maintenance actions and how to apply them.

## Overview of DB cluster maintenance updates


Maintenance most often involves updates to the following resources in your DB cluster:
+ Underlying hardware
+ Underlying operating system (OS)
+ Database engine version

Updates to the operating system most often occur for security issues. We recommend that you do them as soon as possible. For more information about operating system updates, see [Operating system updates for Aurora DB clusters](#Aurora_OS_updates).

**Topics**
+ [

### Offline resources during maintenance updates
](#USER_UpgradeDBInstance.Maintenance.Overview.offline)
+ [

### Deferred DB instance and DB cluster modifications
](#USER_UpgradeDBInstance.Maintenance.Overview.Deferred)
+ [

### Eventual consistency for the DescribePendingMaintenanceActions API
](#USER_UpgradeDBInstance.Maintenance.Overview.eventual-consistency)

### Offline resources during maintenance updates
Offline maintenance

Some maintenance items require that Amazon RDS take your DB cluster offline for a short time. Maintenance items that require a resource to be offline include required operating system or database patching. Required patching is automatically scheduled only for patches that are related to security and instance reliability. Such patching occurs infrequently, typically once every few months. It seldom requires more than a fraction of your maintenance window.

### Deferred DB instance and DB cluster modifications
Deferred modifications

Deferred DB cluster and instance modifications that you have chosen not to apply immediately are applied during the maintenance window. For example, you might choose to change DB instance classes or cluster or DB parameter groups during the maintenance window. Such modifications that you specify using the **pending reboot** setting don't show up in the **Pending maintenance** list. For information about modifying a DB cluster, see [Modifying an Amazon Aurora DB cluster](Aurora.Modifying.md).

To see the modifications that are pending for the next maintenance window, use the [describe-db-clusters](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/rds/describe-db-clusters.html) AWS CLI command and check the `PendingModifiedValues` field.

### Eventual consistency for the DescribePendingMaintenanceActions API
Eventual consistency

The Amazon RDS `DescribePendingMaintenanceActions` API follows an eventual consistency model. This means that the result of the `DescribePendingMaintenanceActions` command might not be immediately visible to all subsequent RDS commands. Keep this in mind when you use `DescribePendingMaintenanceActions` immediately after using a previous API command.

Eventual consistency can affect the way you manage your maintenance updates. For example, if you run the `ApplyPendingMaintenanceActions` command to update the database engine version for a DB cluster, the result will only eventually be visible to `DescribePendingMaintenanceActions`. Until it is, `DescribePendingMaintenanceActions` might show that the maintenance action wasn't applied even though it was.

To manage eventual consistency, you can do the following:
+ Confirm the state of your DB cluster before you run a command to modify it. Run the appropriate `DescribePendingMaintenanceActions` command using an exponential backoff algorithm to ensure that you allow enough time for the previous command to propagate through the system. To do this, run the `DescribePendingMaintenanceActions` command repeatedly, starting with a couple of seconds of wait time, and increasing gradually up to five minutes of wait time. 
+ Add wait time between subsequent commands, even if a `DescribePendingMaintenanceActions` command returns an accurate response. Apply an exponential backoff algorithm starting with a couple of seconds of wait time, and increase gradually up to about five minutes of wait time.
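The backoff pattern described above can be sketched as a small polling loop. This is an illustrative sketch under stated assumptions: `fetch` stands in for a call such as `describe_pending_maintenance_actions`, and `done` decides whether the result reflects the change you made. The stubbed `fake_fetch` below is hypothetical and only demonstrates the control flow.

```
import time

# Poll with exponential backoff: start with a couple of seconds of wait
# time and increase gradually, capped near five minutes, until the
# response is consistent with the change you made.
def poll_with_backoff(fetch, done, initial=2.0, cap=300.0, sleep=time.sleep):
    wait = initial
    while True:
        result = fetch()
        if done(result):
            return result
        sleep(min(wait, cap))
        wait = min(wait * 2, cap)

# Stubbed fetch that becomes consistent on the third call.
calls = {"n": 0}
def fake_fetch():
    calls["n"] += 1
    return [] if calls["n"] >= 3 else ["system-update"]

waits = []
result = poll_with_backoff(fake_fetch, lambda r: not r, sleep=waits.append)
print(result, waits)  # [] [2.0, 4.0]
```

Injecting `sleep` as a parameter keeps the loop testable; in production you would leave it as `time.sleep`.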

## Viewing pending maintenance updates
Viewing pending maintenance

View whether a maintenance update is available for your DB cluster by using the RDS console, the AWS CLI, or the RDS API. If an update is available, it is indicated in the **Maintenance** column for the DB cluster on the Amazon RDS console, as shown in this figure.

![\[Maintenance action is available and will be applied at the next maintenance window.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/offlinepatchavailable.png)


If no maintenance update is available for a DB cluster, its column value is **none**.

If a maintenance update is available for a DB cluster, the following column values are possible:
+ **required** – The maintenance action will be applied to the resource and can't be deferred indefinitely.
+ **available** – The maintenance action is available, but it will not be applied to the resource automatically. You can apply it manually.
+ **next window** – The maintenance action will be applied to the resource during the next maintenance window.
+ **In progress** – The maintenance action is being applied to the resource.

If an update is available, you can do one of the following:
+ If the maintenance value is **next window**, defer the maintenance actions by choosing **Defer upgrade** from **Actions**. You can't defer a maintenance action that has already started.
+ Apply the maintenance actions immediately.
+ Apply the maintenance actions during your next maintenance window.
+ Take no action.

**To take an action by using the AWS Management Console**

1. Choose the DB instance or cluster to show its details.

1. Choose **Maintenance & backups**. The pending maintenance actions appear.

1. Choose the action to take, then choose when to apply it.

![\[Pending maintenance item for an Aurora DB instance.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/pending_maintenance_aurora_instance.png)


The maintenance window determines when pending operations start, but doesn't limit the total run time of these operations. Maintenance operations aren't guaranteed to finish before the maintenance window ends, and can continue beyond the specified end time. For more information, see [Amazon RDS maintenance window](#Concepts.DBMaintenance).

You can also view whether a maintenance update is available for your DB cluster by running the [describe-pending-maintenance-actions](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-pending-maintenance-actions.html) AWS CLI command.

For information about applying maintenance updates, see [Applying updates to a DB cluster](#USER_UpgradeDBInstance.OSUpgrades).

### Maintenance actions for Amazon Aurora


The following maintenance actions apply to Aurora DB clusters:
+ `os-upgrade` – Update the operating systems of all the DB instances in the DB cluster, using rolling upgrades. For more information, see [Operating system updates for Aurora DB clusters](#Aurora_OS_updates).
+ `system-update` – Patch the DB engine for Aurora PostgreSQL.

The following maintenance actions apply to Aurora DB instances:
+ `ca-certificate-rotation` – Update the Amazon RDS Certificate Authority certificate for the DB instance.
+ `hardware-maintenance` – Perform maintenance on the underlying hardware for the DB instance.
+ `system-update` – Update the operating system for the DB instance.

## Choosing the frequency of Aurora MySQL maintenance updates


You can control whether Aurora MySQL upgrades happen frequently or rarely for each DB cluster. The best choice depends on your usage of Aurora MySQL and the priorities for your applications that run on Aurora. For information about the Aurora MySQL long-term support (LTS) releases that require less frequent upgrades, see [Aurora MySQL long-term support (LTS) releases](AuroraMySQL.Update.SpecialVersions.md#AuroraMySQL.Updates.LTS). 

 You might choose to upgrade an Aurora MySQL cluster rarely if some or all of the following conditions apply: 
+  Your testing cycle for your application takes a long time for each update to the Aurora MySQL database engine. 
+  You have many DB clusters or many applications all running on the same Aurora MySQL version. You prefer to upgrade all of your DB clusters and associated applications at the same time. 
+  You use both Aurora MySQL and RDS for MySQL. You prefer to keep the Aurora MySQL clusters and RDS for MySQL DB instances compatible with the same level of MySQL. 
+  Your Aurora MySQL application is in production or is otherwise business-critical. You can't afford downtime for upgrades outside of rare occurrences for critical patches. 
+  Your Aurora MySQL application isn't limited by performance issues or feature gaps that are addressed in subsequent Aurora MySQL versions. 

 If the preceding factors apply to your situation, you can limit the number of forced upgrades for an Aurora MySQL DB cluster. You do so by choosing a specific Aurora MySQL version known as the long-term support (LTS) release when you create or upgrade that DB cluster. Doing so minimizes the number of upgrade cycles, testing cycles, and upgrade-related outages for that DB cluster. 

 You might choose to upgrade an Aurora MySQL cluster frequently if some or all of the following conditions apply: 
+  The testing cycle for your application is straightforward and brief. 
+  Your application is still in the development stage. 
+  Your database environment uses a variety of Aurora MySQL versions, or Aurora MySQL and RDS for MySQL versions. Each Aurora MySQL cluster has its own upgrade cycle. 
+  You are waiting for specific performance or feature improvements before you increase your usage of Aurora MySQL. 

 If the preceding factors apply to your situation, you can enable Aurora to apply important upgrades more frequently. To do so, upgrade an Aurora MySQL DB cluster to a more recent Aurora MySQL version than the LTS version. Doing so makes the latest performance enhancements, bug fixes, and features available to you more quickly. 

## Amazon RDS maintenance window
Maintenance window

The maintenance window is a weekly time interval during which any system changes are applied. Every DB cluster has a weekly maintenance window. The maintenance window is an opportunity to control when modifications and software patching occur. For more information about adjusting the maintenance window, see [Adjusting the preferred DB cluster maintenance window](#AdjustingTheMaintenanceWindow.Aurora).

RDS consumes some of the resources on your DB cluster while maintenance is being applied. You might observe a minimal effect on performance. For a DB instance, on rare occasions, a Multi-AZ failover might be required for a maintenance update to complete.

If a maintenance event is scheduled for a given week, it's initiated during the 30-minute maintenance window you identify. Most maintenance events also complete during the 30-minute maintenance window, although larger maintenance events may take more than 30 minutes to complete. The maintenance window is paused when the DB cluster is stopped.

The 30-minute maintenance window is selected at random from an 8-hour block of time per region. If you don't specify a maintenance window when you create the DB cluster, RDS assigns a 30-minute maintenance window on a randomly selected day of the week.

The following table shows the time blocks for each AWS Region from which default maintenance windows are assigned.


[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_UpgradeDBInstance.Maintenance.html)

**Topics**
+ [

### Adjusting the preferred DB cluster maintenance window
](#AdjustingTheMaintenanceWindow.Aurora)

### Adjusting the preferred DB cluster maintenance window
Adjusting the maintenance window for a DB cluster

The Aurora DB cluster maintenance window should fall at the time of lowest usage and thus might need modification from time to time. Your DB cluster is unavailable during this time only if the updates that are being applied require an outage. The outage is for the minimum amount of time required to make the necessary updates.

**Note**  
For upgrades to the database engine, Amazon Aurora manages the preferred maintenance window for a DB cluster and not individual instances.

#### Console


**To adjust the preferred DB cluster maintenance window**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the DB cluster for which you want to change the maintenance window.

1. Choose **Modify**.

1. In the **Maintenance** section, update the maintenance window.

1. Choose **Continue**.

   On the confirmation page, review your changes.

1. To apply the changes to the maintenance window immediately, choose **Immediately** in the **Schedule of modifications** section.

1. Choose **Modify cluster** to save your changes.

   Alternatively, choose **Back** to edit your changes, or choose **Cancel** to cancel your changes.

#### AWS CLI


To adjust the preferred DB cluster maintenance window, use the AWS CLI [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) command with the following parameters:
+ `--db-cluster-identifier`
+ `--preferred-maintenance-window`

**Example**  
The following code example sets the maintenance window to Tuesdays from 4:00–4:30 AM UTC.  
For Linux, macOS, or Unix:  

```
aws rds modify-db-cluster \
--db-cluster-identifier my-cluster \
--preferred-maintenance-window Tue:04:00-Tue:04:30
```
For Windows:  

```
aws rds modify-db-cluster ^
--db-cluster-identifier my-cluster ^
--preferred-maintenance-window Tue:04:00-Tue:04:30
```

#### RDS API


To adjust the preferred DB cluster maintenance window, use the Amazon RDS [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) API operation with the following parameters:
+ `DBClusterIdentifier`
+ `PreferredMaintenanceWindow`
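Whichever interface you use, the window value takes the form `ddd:hh24:mi-ddd:hh24:mi` in UTC, with a minimum duration of 30 minutes. The following Python sketch validates that format locally before you submit a modification; it is an illustrative helper, not part of any AWS SDK.

```
import re

DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
PATTERN = re.compile(
    r"^(Mon|Tue|Wed|Thu|Fri|Sat|Sun):([01]\d|2[0-3]):([0-5]\d)"
    r"-(Mon|Tue|Wed|Thu|Fri|Sat|Sun):([01]\d|2[0-3]):([0-5]\d)$")

# Validate a ddd:hh24:mi-ddd:hh24:mi window string and enforce the
# documented 30-minute minimum; windows may wrap past Sunday night.
def valid_window(window):
    m = PATTERN.match(window)
    if not m:
        return False
    start = DAYS.index(m.group(1)) * 1440 + int(m.group(2)) * 60 + int(m.group(3))
    end = DAYS.index(m.group(4)) * 1440 + int(m.group(5)) * 60 + int(m.group(6))
    duration = (end - start) % (7 * 1440)
    return duration >= 30

print(valid_window("Tue:04:00-Tue:04:30"))  # True
print(valid_window("Tue:04:00-Tue:04:15"))  # False
```

Checking the string client-side saves a round trip that would otherwise fail with a validation error from the service.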

## Applying updates to a DB cluster
Applying updates

With Amazon RDS, you can choose when to apply maintenance operations. You can decide when Amazon RDS applies updates by using the AWS Management Console, AWS CLI, or RDS API.

### Console


**To manage an update for a DB cluster**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the DB cluster that has a required update. 

1. For **Actions**, choose one of the following:
   + **Patch now**
   + **Patch at next window**
**Note**  
If you choose **Patch at next window** and later want to delay the update, you can choose **Defer upgrade**. You can't defer a maintenance action if it has already started.  
To cancel a maintenance action, modify the DB instance and disable **Auto minor version upgrade**.

### AWS CLI


To apply a pending update to a DB cluster, use the [apply-pending-maintenance-action](https://docs.aws.amazon.com/cli/latest/reference/rds/apply-pending-maintenance-action.html) AWS CLI command.

**Example**  
For Linux, macOS, or Unix:  

```
aws rds apply-pending-maintenance-action \
    --resource-identifier arn:aws:rds:us-west-2:001234567890:db:mysql-db \
    --apply-action system-update \
    --opt-in-type immediate
```
For Windows:  

```
aws rds apply-pending-maintenance-action ^
    --resource-identifier arn:aws:rds:us-west-2:001234567890:db:mysql-db ^
    --apply-action system-update ^
    --opt-in-type immediate
```

**Note**  
To defer a maintenance action, specify `undo-opt-in` for `--opt-in-type`. You can't specify `undo-opt-in` for `--opt-in-type` if the maintenance action has already started.  
To cancel a maintenance action, run the [modify-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) AWS CLI command and specify `--no-auto-minor-version-upgrade`.

To return a list of resources that have at least one pending update, use the [describe-pending-maintenance-actions](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-pending-maintenance-actions.html) AWS CLI command.

**Example**  
For Linux, macOS, or Unix:  

```
aws rds describe-pending-maintenance-actions \
    --resource-identifier arn:aws:rds:us-west-2:001234567890:db:mysql-db
```
For Windows:  

```
aws rds describe-pending-maintenance-actions ^
    --resource-identifier arn:aws:rds:us-west-2:001234567890:db:mysql-db
```

You can also return a list of resources for a DB cluster by specifying the `--filters` parameter of the `describe-pending-maintenance-actions` AWS CLI command. The format for the `--filters` parameter is `Name=filter-name,Values=resource-id,...`.

The following are the accepted values for the `Name` parameter of a filter:
+ `db-instance-id` – Accepts a list of DB instance identifiers or Amazon Resource Names (ARNs). The returned list only includes pending maintenance actions for the DB instances identified by these identifiers or ARNs.
+ `db-cluster-id` – Accepts a list of DB cluster identifiers or ARNs for Amazon Aurora. The returned list only includes pending maintenance actions for the DB clusters identified by these identifiers or ARNs.

For example, the following command returns the pending maintenance actions for the `sample-cluster1` and `sample-cluster2` DB clusters.

**Example**  
For Linux, macOS, or Unix:  

```
aws rds describe-pending-maintenance-actions \
	--filters Name=db-cluster-id,Values=sample-cluster1,sample-cluster2
```
For Windows:  

```
aws rds describe-pending-maintenance-actions ^
	--filters Name=db-cluster-id,Values=sample-cluster1,sample-cluster2
```

### RDS API


To apply an update to a DB cluster, call the Amazon RDS API [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ApplyPendingMaintenanceAction.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ApplyPendingMaintenanceAction.html) operation.

To return a list of resources that have at least one pending update, call the Amazon RDS API [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribePendingMaintenanceActions.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribePendingMaintenanceActions.html) operation.

## Automatic minor version upgrades for Aurora DB clusters


The **Auto minor version upgrade** setting specifies whether Aurora automatically applies upgrades to your DB cluster. These upgrades include new minor versions containing additional features and patches containing bug fixes.

Automatic minor version upgrades periodically update your database to recent database engine versions. However, the upgrade might not always include the latest database engine version. If you need to keep your databases on specific versions at particular times, we recommend that you manually upgrade to the database versions that you need according to your required schedule. In cases of critical security issues or when a version reaches its end-of-support date, Amazon Aurora might apply a minor version upgrade even if you haven't enabled the **Auto minor version upgrade** option. For more information, see the upgrade documentation for your specific database engine.

See [Upgrading the minor version or patch level of an Aurora MySQL DB cluster](AuroraMySQL.Updates.Patching.md) and [Performing a minor version upgrade](USER_UpgradeDBInstance.PostgreSQL.MinorUpgrade.md).

**Note**  
Aurora Global Database doesn't support automatic minor version upgrades.

This setting is turned on by default. For each new DB cluster, choose the appropriate value for this setting based on the cluster's importance, its expected lifetime, and the amount of verification testing that you do after each upgrade.

For instructions on turning the **Auto minor version upgrade** setting on or off, see the following:
+ [Enabling automatic minor version upgrades for an Aurora DB cluster](#aurora-amvu-cluster)
+ [Enabling automatic minor version upgrades for individual DB instances in an Aurora DB cluster](#aurora-amvu-instance)

**Important**  
We strongly recommend that for new and existing DB clusters, you apply this setting to the DB cluster and not to the DB instances in the cluster individually. If any DB instance in your cluster has this setting turned off, the DB cluster isn't automatically upgraded.

The following table shows how the **Auto minor version upgrade** setting works when applied at the cluster and instance levels.


| Action | Cluster setting | Instance settings | Cluster upgraded automatically? | 
| --- | --- | --- | --- | 
| You set it to True on the DB cluster. | True | True for all new and existing instances | Yes | 
| You set it to False on the DB cluster. | False | False for all new and existing instances | No | 
|  It was set previously to True on the DB cluster. You set it to False on at least one DB instance.  | Changes to False | False for one or more instances | No | 
|  It was set previously to False on the DB cluster. You set it to True on at least one DB instance, but not all instances.  | False | True for one or more instances, but not all instances | No | 
|  It was set previously to False on the DB cluster. You set it to True on all DB instances.  | Changes to True | True for all instances | Yes | 
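The table above reduces to a simple rule: the cluster is upgraded automatically only when the setting is true on every DB instance in it. A minimal sketch of that rule, with hypothetical instance flags:

```
# Rule from the table: the cluster is auto-upgraded only if
# auto minor version upgrade is True for all of its DB instances.
def cluster_auto_upgraded(instance_settings):
    return len(instance_settings) > 0 and all(instance_settings)

print(cluster_auto_upgraded([True, True, True]))   # True
print(cluster_auto_upgraded([True, False, True]))  # False
```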

Automatic minor version upgrades are communicated in advance through an Amazon RDS DB cluster event with a category of `maintenance` and ID of `RDS-EVENT-0156`. For more information, see [Amazon RDS event categories and event messages for Aurora](USER_Events.Messages.md).

Automatic upgrades occur during the maintenance window. If the individual DB instances in the DB cluster have different maintenance windows from the cluster maintenance window, then the cluster maintenance window takes precedence.

For more information about engine updates for Aurora PostgreSQL, see [Database engine updates for Amazon Aurora PostgreSQL](AuroraPostgreSQL.Updates.md).

For more information about the **Auto minor version upgrade** setting for Aurora MySQL, see [Enabling automatic upgrades between minor Aurora MySQL versions](AuroraMySQL.Updates.AMVU.md). For general information about engine updates for Aurora MySQL, see [Database engine updates for Amazon Aurora MySQL](AuroraMySQL.Updates.md).


### Enabling automatic minor version upgrades for an Aurora DB cluster


Follow the general procedure in [Modifying the DB cluster by using the console, CLI, and API](Aurora.Modifying.md#Aurora.Modifying.Cluster).

**Console**  
On the **Modify DB cluster** page, in the **Maintenance** section, select the **Enable auto minor version upgrade** check box.

**AWS CLI**  
Call the [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) AWS CLI command. Specify the name of your DB cluster for the `--db-cluster-identifier` option and `true` for the `--auto-minor-version-upgrade` option. Optionally, specify the `--apply-immediately` option to immediately enable this setting for your DB cluster.

**RDS API**  
Call the [ModifyDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) API operation and specify the name of your DB cluster for the `DBClusterIdentifier` parameter and `true` for the `AutoMinorVersionUpgrade` parameter. Optionally, set the `ApplyImmediately` parameter to `true` to immediately enable this setting for your DB cluster.

### Enabling automatic minor version upgrades for individual DB instances in an Aurora DB cluster


Follow the general procedure in [Modifying a DB instance in a DB cluster](Aurora.Modifying.md#Aurora.Modifying.Instance).

**Console**  
On the **Modify DB instance** page, in the **Maintenance** section, select the **Enable auto minor version upgrade** check box.

**AWS CLI**  
Call the [modify-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) AWS CLI command. Specify the name of your DB instance for the `--db-instance-identifier` option and `true` for the `--auto-minor-version-upgrade` option. Optionally, specify the `--apply-immediately` option to immediately enable this setting for your DB instance. Run a separate `modify-db-instance` command for each DB instance in the cluster.

**RDS API**  
Call the [ModifyDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html) API operation and specify the name of your DB cluster for the `DBInstanceIdentifier` parameter and `true` for the `AutoMinorVersionUpgrade` parameter. Optionally, set the `ApplyImmediately` parameter to `true` to immediately enable this setting for your DB instance. Call a separate `ModifyDBInstance` operation for each DB instance in the cluster.

You can use a CLI command such as the following to check the status of the `AutoMinorVersionUpgrade` setting for all of the DB instances in your Aurora MySQL clusters.

```
aws rds describe-db-instances \
  --query '*[].{DBClusterIdentifier:DBClusterIdentifier,DBInstanceIdentifier:DBInstanceIdentifier,AutoMinorVersionUpgrade:AutoMinorVersionUpgrade}'
```

That command produces output similar to the following:

```
[
  {
      "DBInstanceIdentifier": "db-writer-instance",
      "DBClusterIdentifier": "my-db-cluster-57",
      "AutoMinorVersionUpgrade": true
  },
  {
      "DBInstanceIdentifier": "db-reader-instance1",
      "DBClusterIdentifier": "my-db-cluster-57",
      "AutoMinorVersionUpgrade": false
  },
  {
      "DBInstanceIdentifier": "db-writer-instance2",
      "DBClusterIdentifier": "my-db-cluster-80",
      "AutoMinorVersionUpgrade": true
  },
... output omitted ...
```

In this example, **Enable auto minor version upgrade** is turned off for the DB cluster `my-db-cluster-57`, because it's turned off for one of the DB instances in the cluster.
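That check can be automated. The following Python sketch groups instance records like the output above by cluster and flags clusters where the setting is off on any instance; the instance data is copied from the sample output and is illustrative.

```
from collections import defaultdict

# Group describe-db-instances records by cluster and flag clusters
# where any instance has AutoMinorVersionUpgrade turned off.
instances = [
    {"DBInstanceIdentifier": "db-writer-instance",
     "DBClusterIdentifier": "my-db-cluster-57", "AutoMinorVersionUpgrade": True},
    {"DBInstanceIdentifier": "db-reader-instance1",
     "DBClusterIdentifier": "my-db-cluster-57", "AutoMinorVersionUpgrade": False},
    {"DBInstanceIdentifier": "db-writer-instance2",
     "DBClusterIdentifier": "my-db-cluster-80", "AutoMinorVersionUpgrade": True},
]

by_cluster = defaultdict(list)
for inst in instances:
    by_cluster[inst["DBClusterIdentifier"]].append(inst["AutoMinorVersionUpgrade"])

not_auto = sorted(c for c, flags in by_cluster.items() if not all(flags))
print(not_auto)  # ['my-db-cluster-57']
```

Clusters in `not_auto` won't receive automatic minor version upgrades until the setting is turned on for every instance.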

## Operating system updates for Aurora DB clusters
Operating system updates

DB instances in Aurora MySQL and Aurora PostgreSQL DB clusters occasionally require operating system updates. Amazon RDS upgrades the operating system to a newer version to improve database performance and customers’ overall security posture. Typically, the updates take about 10 minutes. Operating system updates don't change the DB engine version or DB instance class of a DB instance.

There are two types of operating system updates, differentiated by the description for the pending maintenance action:
+ **Operating system distribution upgrade** – Used to migrate to the latest supported major version of Amazon Linux. Its description is `New Operating System upgrade is available`.
+ **Operating system patch** – Used to apply various security fixes and sometimes to improve database performance. Its description is `New Operating System patch is available`.

Operating system updates can be either optional or mandatory:
+ An **optional update** can be applied at any time. While these updates are optional, we recommend that you apply them periodically to keep your RDS fleet up to date. RDS *does not* apply these updates automatically.

  To be notified when a new, optional operating system patch becomes available, you can subscribe to [RDS-EVENT-0230](USER_Events.Messages.md#RDS-EVENT-0230) in the security patching event category. For information about subscribing to RDS events, see [Subscribing to Amazon RDS event notification](USER_Events.Subscribing.md).
**Note**  
`RDS-EVENT-0230` doesn't apply to operating system distribution upgrades.
+ A **mandatory update** is required, and we send a notification before the mandatory update. The notification might contain a due date. Plan to schedule your update before this due date. After the specified due date, Amazon RDS automatically upgrades the operating system for your DB instance to the latest version during one of your assigned maintenance windows.

  Operating system distribution upgrades are mandatory.

**Note**  
Staying current on all optional and mandatory updates might be required to meet various compliance obligations. We recommend that you apply all updates made available by RDS routinely during your maintenance windows.

For Aurora DB clusters, you can use the **cluster-level** maintenance option to perform operating system (OS) updates. Find the option to perform cluster-level updates in the **Maintenance & backups** tab when you select the name of your DB cluster in the console, or use the `os-upgrade` command in the AWS CLI. This method preserves read availability with rolling upgrades that automatically apply updates to a few reader DB instances at a time. To prevent multiple failovers and reduce unnecessary downtime, Aurora upgrades the writer DB instance last. 

Cluster-level OS updates occur during the maintenance window that you specified for the cluster. This ensures coordinated updates across the entire cluster. 

For backward compatibility, Aurora also maintains the **instance-level** maintenance option. However, we recommend that you use cluster-level updates instead. If you must use instance-level updates, update the reader DB instances in a DB cluster first, then update the writer DB instance. If you update reader and writer instances simultaneously, you increase the chance of failover-related downtime. Find the option to perform instance-level updates in the **Maintenance & backups** tab when you select the name of your DB instance in the console, or use the `system-update` command in the AWS CLI. 

Instance-level OS updates occur during the maintenance window that you specified for each respective instance. For example, if a cluster and its two reader instances have different maintenance window times, a cluster-level OS update follows the cluster maintenance window, while instance-level updates for the readers follow each reader's own window. 



You can use the AWS Management Console or the AWS CLI to get information about the type of operating system upgrade.

### Console


**To get update information using the AWS Management Console**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**, and then select the DB instance.

1. Choose **Maintenance & backups**.

1. In the **Pending maintenance** section, find the operating system update, and check the **Description** value.

The following images show a DB cluster with a writer DB instance that has an operating system patch available.

![\[Cluster-level operating system patch.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/os-upgrade-cluster-minor.png)


![\[Instance-level operating system patch.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/os-upgrade-writer-minor.png)


The following images show a DB cluster with a writer DB instance and a reader DB instance. The writer instance has a mandatory operating system upgrade available. The reader instance has an operating system patch available.

![\[Cluster-level operating system distribution upgrade.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/os-upgrade-cluster-major.png)


![\[Writer instance operating system distribution upgrade.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/os-upgrade-writer-major.png)


![\[Reader instance operating system patch.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/os-upgrade-reader-minor.png)


### AWS CLI


To get update information from the AWS CLI, use the [describe-pending-maintenance-actions](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-pending-maintenance-actions.html) command.

```
aws rds describe-pending-maintenance-actions
```

The following output shows an operating system distribution upgrade for a DB cluster and a DB instance.

```
{
  "PendingMaintenanceActions": [
    {
      "ResourceIdentifier": "arn:aws:rds:us-east-1:123456789012:cluster:t3",
      "PendingMaintenanceActionDetails": [
        {
          "Action": "os-upgrade",
          "Description": "New Operating System upgrade is available"
        }
      ]
    },
    {
      "ResourceIdentifier": "arn:aws:rds:us-east-1:123456789012:db:t3-instance1",
      "PendingMaintenanceActionDetails": [
        {
          "Action": "system-update",
          "Description": "New Operating System upgrade is available"
        }
      ]
    }
  ]
}
```

The following output shows an operating system patch for a DB instance.

```
{
  "ResourceIdentifier": "arn:aws:rds:us-east-1:123456789012:db:mydb2",
  "PendingMaintenanceActionDetails": [
    {
      "Action": "system-update",
      "Description": "New Operating System patch is available"
    }
  ]
}
```
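The action names in this output distinguish the two maintenance levels: `os-upgrade` appears on cluster resources and `system-update` on instance resources, while the `Description` text distinguishes a patch from a distribution upgrade. As a rough illustration (not an official AWS utility), a short script can group the pending actions from the sample output above:

```python
import json

# Sample output from `aws rds describe-pending-maintenance-actions`,
# mirroring the examples shown above.
output = json.loads("""
{
  "PendingMaintenanceActions": [
    {
      "ResourceIdentifier": "arn:aws:rds:us-east-1:123456789012:cluster:t3",
      "PendingMaintenanceActionDetails": [
        {"Action": "os-upgrade",
         "Description": "New Operating System upgrade is available"}
      ]
    },
    {
      "ResourceIdentifier": "arn:aws:rds:us-east-1:123456789012:db:t3-instance1",
      "PendingMaintenanceActionDetails": [
        {"Action": "system-update",
         "Description": "New Operating System patch is available"}
      ]
    }
  ]
}
""")

def summarize(actions):
    """Return (resource, level, kind) tuples for OS-related actions."""
    rows = []
    for resource in actions["PendingMaintenanceActions"]:
        arn = resource["ResourceIdentifier"]
        for detail in resource["PendingMaintenanceActionDetails"]:
            if detail["Action"] not in ("os-upgrade", "system-update"):
                continue  # skip unrelated maintenance actions
            level = "cluster" if detail["Action"] == "os-upgrade" else "instance"
            kind = "patch" if "patch" in detail["Description"] else "upgrade"
            rows.append((arn, level, kind))
    return rows

for arn, level, kind in summarize(output):
    print(f"{level:8} {kind:8} {arn}")
```

In practice, you would pipe the real command output into a script like this instead of embedding sample data.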

### Availability of operating system updates


Operating system updates are specific to the DB engine version and DB instance class, so DB instances receive or require updates at different times. When an operating system update is available for your DB instance based on its engine version and instance class, the update appears in the console. You can also view it by running the [describe-pending-maintenance-actions](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-pending-maintenance-actions.html) AWS CLI command or by calling the [DescribePendingMaintenanceActions](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribePendingMaintenanceActions.html) RDS API operation. If an update is available for your instance, you can apply it by following the instructions in [Applying updates to a DB cluster](#USER_UpgradeDBInstance.OSUpgrades).

# Using AWS Organizations upgrade rollout policy for automatic minor version upgrades
AWS Organizations upgrade rollout

Aurora supports the AWS Organizations upgrade rollout policy to manage automatic minor version upgrades across multiple database resources and AWS accounts. This policy helps you implement a controlled upgrade strategy for your clusters.

**How upgrade rollout policy works**

When a new minor engine version becomes eligible for automatic upgrade, the policy controls the upgrade sequence based on defined orders:
+ Resources marked as [first] (typically development environments) become eligible for upgrades during their maintenance windows.
+ After a designated bake time, resources marked as [second] become eligible.
+ After another designated bake time, resources marked as [last] (typically production environments) become eligible.

You can monitor upgrade progress through AWS Health notifications.

You can define your upgrade orders by:
+ Account-level policies – Apply to all eligible resources in specified accounts.
+ Resource tags – Apply to specific resources based on tags.

**Note**  
Resources not configured with an upgrade policy or excluded from the policy automatically receive an upgrade order of [second].

**Prerequisites**
+ Your AWS account must be part of an organization in Organizations with upgrade rollout policy enabled.
+ Enable automatic minor version upgrades for your clusters.
+ Tags are not strictly required for upgrade rollout policy. If you want to define specific upgrade orders for different environments (for example, development, test, QA, production), you can use tags. If you don't include tag settings in your policy, all resources under that policy follow the default upgrade order. For Aurora resources, only cluster-level tags are used for upgrade rollout policy, even if you have tags defined at the instance level.

**To tag your resources**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the cluster you want to tag.

1. Choose **Actions**, then choose **Manage tags**.

1. Choose **Add tag**.

1. Enter your tag key (for example, 'Environment') and value (for example, 'Development').

1. Choose **Add tag**, then **Save**.

You can also add tags using the AWS CLI:

```
aws rds add-tags-to-resource \
    --resource-name arn:aws:rds:region:account-number:cluster:cluster-name \
    --tags Key=Environment,Value=Development
```

## Upgrade order and phases
Upgrade order

The upgrade rollout policy supports three upgrade orders:
+ [first] - Typically used for development or testing environments.
+ [second] - Typically used for QA environments. This is the default order for resources without a specifically configured policy.
+ [last] - Usually reserved for production environments.

When a new minor engine version becomes eligible for automatic upgrade:
+ Resources with upgrade order [first] become eligible for upgrades during their configured maintenance windows.
+ After a designated bake time, resources with upgrade order [second] become eligible for upgrades during their maintenance windows.
+ After another designated bake time, resources with upgrade order [last] become eligible for upgrades during their maintenance windows.
+ The automatic minor version upgrade campaign closes after all eligible resources with upgrade orders [first], [second], and [last] have been upgraded, or when the campaign reaches its scheduled end date, whichever comes first.
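The phased sequence above can be sketched as a small model. This is purely illustrative, not an AWS API: the tag key `UpgradeOrder` and the fixed seven-day bake time are invented placeholders for whatever your policy actually configures.

```python
# Illustrative model of the rollout ordering rules described above.
PHASES = ["first", "second", "last"]

def upgrade_phase(resource_tags, order_tag="UpgradeOrder"):
    """Return the phase a resource falls into. Resources without a
    configured order default to "second", as noted earlier."""
    order = resource_tags.get(order_tag, "second")
    return order if order in PHASES else "second"

def rollout_schedule(resources, campaign_start_day, bake_days):
    """Map each resource to the earliest day it becomes eligible,
    assuming each later phase opens after one more bake period."""
    schedule = {}
    for name, tags in resources.items():
        phase_index = PHASES.index(upgrade_phase(tags))
        schedule[name] = campaign_start_day + phase_index * bake_days
    return schedule

resources = {
    "dev-cluster":      {"UpgradeOrder": "first"},
    "qa-cluster":       {"UpgradeOrder": "second"},
    "prod-cluster":     {"UpgradeOrder": "last"},
    "untagged-cluster": {},  # no policy configured: defaults to "second"
}
print(rollout_schedule(resources, campaign_start_day=0, bake_days=7))
```

The key behavior to notice is that the untagged cluster lands in the same phase as the [second] resources, matching the default described in the note above.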

**Note**  
All automatic minor version upgrades are performed during each cluster's configured maintenance window to minimize potential impact to your applications.

## Observability
Observability

### AWS Health and monitoring
Health and monitoring

You receive AWS Health notifications:
+ Before the start of an automatic minor version upgrade campaign
+ Between each phase transition, to help track and monitor upgrade progress
+ With progress updates showing the number of resources upgraded across your fleet in the AWS Health console

Amazon RDS event notifications:
+ Notifications for resources enabled for automatic minor version upgrades, including:
  + When your resource becomes eligible for upgrade based on its upgrade order ([first], [second], or [last])
  + Scheduled upgrade timeline during the maintenance window
  + Individual database upgrade start and completion status
+ Subscribe to these events through Amazon EventBridge for automated monitoring.

### Considerations
Considerations

Some considerations to keep in mind:
+ The policy applies to all future automatic minor version upgrade campaigns, including policy changes made during active campaigns.
+ If you join an ongoing upgrade campaign, your resources follow the current running upgrade order and do not wait for a configured policy.
+ Resources not configured with an upgrade policy or excluded from the policy automatically receive an upgrade order of [second].
+ The policy provides validation periods between upgrade phases before proceeding to the next phase.
+ Changes to either the policy or resource tags require time to propagate before the new upgrade order is applied.
+ The policy applies only to Aurora resources with automatic minor version upgrades enabled.
+ If you detect an issue within an environment, you can turn off automatic minor version upgrades for subsequent environments or use the validation period to resolve issues before upgrades proceed to the next upgrade order.

For more information about tagging RDS resources, see [Tagging Amazon Aurora and Amazon RDS resources](USER_Tagging.md). For detailed instructions on setting up and using upgrade rollout policy, see [Getting started with AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_getting-started.html) in the *AWS Organizations User Guide*.

# Rebooting an Amazon Aurora DB cluster or Amazon Aurora DB instance
Rebooting an Aurora DB cluster or instance<a name="reboot"></a>

 You might need to reboot your DB cluster or some instances within the cluster, usually for maintenance reasons. For example, suppose that you modify the parameters within a parameter group or associate a different parameter group with your cluster. In these cases, you must reboot the cluster for the changes to take effect. Similarly, you might reboot one or more reader DB instances within the cluster. You can arrange the reboot operations for individual instances to minimize downtime for the entire cluster. 

 The time required to reboot each DB instance in your cluster depends on the database activity at the time of reboot. It also depends on the recovery process of your specific DB engine. If it's practical, reduce database activity on that particular instance before starting the reboot process. Doing so can reduce the time needed to restart the database. 

 You can reboot a DB instance in your cluster only when it's in the available state. A DB instance can be unavailable for several reasons, such as the cluster being in the stopped state, a modification being applied to the instance, or a maintenance-window action such as a version upgrade. 

 Rebooting a DB instance restarts the database engine process. Rebooting a DB instance results in a momentary outage, during which the DB instance status is set to *rebooting*. 

**Note**  
 If a DB instance isn't using the latest changes to its associated DB parameter group, the AWS Management Console shows the DB parameter group with a status of **pending-reboot**. The **pending-reboot** parameter groups status doesn't result in an automatic reboot during the next maintenance window. To apply the latest parameter changes to that DB instance, manually reboot the DB instance. For more information about parameter groups, see [Parameter groups for Amazon Aurora](USER_WorkingWithParamGroups.md). 
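As an illustration of how you might spot this state programmatically, the sketch below scans a sample `describe-db-instances` response for parameter groups whose `ParameterApplyStatus` is `pending-reboot`. The instance and parameter group names are invented for the example:

```python
import json

# Sample fragment of `aws rds describe-db-instances` output; the
# instance and parameter group names are made up for illustration.
response = json.loads("""
{
  "DBInstances": [
    {"DBInstanceIdentifier": "instance-1234",
     "DBParameterGroups": [
       {"DBParameterGroupName": "my-aurora-params",
        "ParameterApplyStatus": "pending-reboot"}]},
    {"DBInstanceIdentifier": "instance-6305",
     "DBParameterGroups": [
       {"DBParameterGroupName": "my-aurora-params",
        "ParameterApplyStatus": "in-sync"}]}
  ]
}
""")

def needs_reboot(response):
    """List instances whose parameter changes are waiting on a reboot."""
    return [
        inst["DBInstanceIdentifier"]
        for inst in response["DBInstances"]
        if any(pg["ParameterApplyStatus"] == "pending-reboot"
               for pg in inst["DBParameterGroups"])
    ]

print(needs_reboot(response))  # instances to reboot manually
```

You could run a check like this before a maintenance window to build the list of instances that still need a manual reboot.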

**Topics**
+ [

# Rebooting a DB instance within an Aurora cluster
](aurora-reboot-db-instance.md)
+ [

# Rebooting an Aurora cluster with read availability
](aurora-mysql-survivable-replicas.md)
+ [

# Rebooting an Aurora cluster without read availability
](aurora-reboot-cluster.md)
+ [

# Checking uptime for Aurora clusters and instances
](USER_Reboot.Uptime.md)
+ [

# Examples of Aurora reboot operations
](USER_Reboot.Examples.md)

# Rebooting a DB instance within an Aurora cluster


 Rebooting an individual DB instance is the fundamental building block for reboot operations with Aurora. Many maintenance procedures involve rebooting one or more Aurora DB instances in a particular order. 

## Console


**To reboot a DB instance**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1.  In the navigation pane, choose **Databases**, and then choose the DB instance that you want to reboot. 

1.  For **Actions**, choose **Reboot**. 

    The **Reboot DB Instance** page appears. 

1.  Choose **Reboot** to reboot your DB instance. 

    Or choose **Cancel**. 

## AWS CLI


 To reboot a DB instance by using the AWS CLI, call the [reboot-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/reboot-db-instance.html) command. 

**Example**  
For Linux, macOS, or Unix:  

```
aws rds reboot-db-instance \
    --db-instance-identifier mydbinstance
```
For Windows:  

```
aws rds reboot-db-instance ^
    --db-instance-identifier mydbinstance
```

## RDS API


 To reboot a DB instance by using the Amazon RDS API, call the [RebootDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RebootDBInstance.html) operation. 

# Rebooting an Aurora cluster with read availability
<a name="survivable_replicas"></a>

With the read availability feature, you can reboot the writer instance of your Aurora cluster without rebooting the reader instances in the primary or secondary DB cluster. Doing so can help maintain high availability of the cluster for read operations while you reboot the writer instance. You can reboot the reader instances later, on a schedule that's convenient for you. For example, in a production cluster you might reboot the reader instances one at a time, starting only after the reboot of the primary instance is finished. For each DB instance that you reboot, follow the procedure in [Rebooting a DB instance within an Aurora cluster](aurora-reboot-db-instance.md).

The read availability feature for primary DB clusters is available in Aurora MySQL version 2.10 and higher. Read availability for secondary DB clusters is available in Aurora MySQL version 3.06 and higher.

For Aurora PostgreSQL this feature is available by default in the following versions:
+ 15.2 and higher 15 versions
+ 14.7 and higher 14 versions
+ 13.10 and higher 13 versions
+ 12.14 and higher 12 versions

For more information on the read availability feature in Aurora PostgreSQL, see [Improving the read availability of Aurora Replicas](AuroraPostgreSQL.Replication.md#AuroraPostgreSQL.Replication.Replicas.SRO).

Before this feature, rebooting the primary instance caused a reboot for each reader instance at the same time. If your Aurora cluster is running an older version, use the reboot procedure in [Rebooting an Aurora cluster without read availability](aurora-reboot-cluster.md) instead.

**Note**  
The change to reboot behavior in Aurora DB clusters with read availability is different for Aurora global databases in Aurora MySQL versions lower than 3.06. If you reboot the writer instance for the primary cluster in an Aurora global database, the reader instances in the primary cluster remain available. However, the DB instances in any secondary clusters reboot at the same time.  
A limited version of the improved read availability feature is supported by Aurora global databases for Aurora PostgreSQL versions 12.16, 13.12, 14.9, 15.4, and higher. 

You frequently reboot the cluster after making changes to cluster parameter groups. You make parameter changes by following the procedures in [Parameter groups for Amazon Aurora](USER_WorkingWithParamGroups.md). Suppose that you reboot the writer DB instance in an Aurora cluster to apply changes to cluster parameters. Some or all of the reader DB instances might continue using the old parameter settings. However, the different parameter settings don't affect the data integrity of the cluster. Any cluster parameters that affect the organization of data files are only used by the writer DB instance.

For example, in an Aurora MySQL cluster, you can update cluster parameters such as `binlog_format` and `innodb_purge_threads` on the writer instance before the reader instances, because only the writer instance writes binary logs and purges undo records. For parameters that change how queries interpret SQL statements or query output, take care to reboot the reader instances immediately to avoid unexpected application behavior during queries. For example, suppose that you change the `lower_case_table_names` parameter and reboot the writer instance. In this case, the reader instances might not be able to access a newly created table until they are all rebooted.

For a list of all the Aurora MySQL cluster parameters, see [Cluster-level parameters](AuroraMySQL.Reference.ParameterGroups.md#AuroraMySQL.Reference.Parameters.Cluster).

For a list of all the Aurora PostgreSQL cluster parameters, see [Aurora PostgreSQL cluster-level parameters](AuroraPostgreSQL.Reference.ParameterGroups.md#AuroraPostgreSQL.Reference.Parameters.Cluster).

**Tip**  
Aurora MySQL might still reboot some of the reader instances along with the writer instance if your cluster is processing a workload with high throughput.  
The reduction in the number of reboots applies during failover operations also. Aurora MySQL only restarts the writer DB instance and the failover target during a failover. Other reader DB instances in the cluster remain available to continue processing queries through connections to the reader endpoint. Thus, you can improve availability during a failover by having more than one reader DB instance in a cluster.

# Rebooting an Aurora cluster without read availability


 Without the read availability feature, you reboot an entire Aurora DB cluster by rebooting the writer DB instance of that cluster. To do so, follow the procedure in [Rebooting a DB instance within an Aurora cluster](aurora-reboot-db-instance.md). 

 Rebooting the writer DB instance also initiates a reboot for each reader DB instance in the cluster. That way, any cluster-wide parameter changes are applied to all DB instances at the same time. However, the reboot of all DB instances causes a brief outage for the cluster. The reader DB instances remain unavailable until the writer DB instance finishes rebooting and becomes available.

This reboot behavior applies to all DB clusters created in Aurora MySQL version 2.09 and lower.

For Aurora PostgreSQL this behavior applies to the following versions:
+ 14.6 and lower 14 versions
+ 13.9 and lower 13 versions
+ 12.13 and lower 12 versions
+ All PostgreSQL 11 versions

 In the RDS console, the writer DB instance has the value **Writer** under the **Role** column on the **Databases** page. In the RDS CLI, the output of the `describe-db-clusters` command includes a section `DBClusterMembers`. The `DBClusterMembers` element representing the writer DB instance has a value of `true` for the `IsClusterWriter` field. 
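The same check can be scripted. The following sketch reads the `DBClusterMembers` section from a sample `describe-db-clusters` response (identifiers borrowed from the examples later in this section) and separates the writer from the readers:

```python
import json

# Sample fragment of `aws rds describe-db-clusters` output.
cluster = json.loads("""
{
  "DBClusterIdentifier": "tpch100g",
  "DBClusterMembers": [
    {"DBInstanceIdentifier": "instance-6305", "IsClusterWriter": false},
    {"DBInstanceIdentifier": "instance-7448", "IsClusterWriter": false},
    {"DBInstanceIdentifier": "instance-1234", "IsClusterWriter": true}
  ]
}
""")

def split_members(cluster):
    """Return (writer, readers) from a DBCluster description."""
    writer = None
    readers = []
    for member in cluster["DBClusterMembers"]:
        if member["IsClusterWriter"]:
            writer = member["DBInstanceIdentifier"]
        else:
            readers.append(member["DBInstanceIdentifier"])
    return writer, readers

writer, readers = split_members(cluster)
print("writer:", writer)
print("readers:", readers)
```

Because the writer role can move during a failover, run a check like this immediately before any operation that targets the writer specifically.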

**Important**  
 With the read availability feature, the reboot behavior is different in Aurora MySQL and Aurora PostgreSQL: the reader DB instances typically remain available while you reboot the writer instance. Then you can reboot the reader instances at a convenient time. You can reboot the reader instances on a staggered schedule if you want some reader instances to always be available. For more information, see [Rebooting an Aurora cluster with read availability](aurora-mysql-survivable-replicas.md). 

# Checking uptime for Aurora clusters and instances


 You can check and monitor the length of time since the last reboot for each DB instance in your Aurora cluster. The Amazon CloudWatch metric `EngineUptime` reports the number of seconds since the last time a DB instance was started. You can examine this metric at a point in time to find out the uptime for the DB instance. You can also monitor this metric over time to detect when the instance is rebooted. 

 You can also examine the `EngineUptime` metric at the cluster level. The `Minimum` and `Maximum` dimensions report the smallest and largest uptime values for all DB instances in the cluster. To check the most recent time when any reader instance in a cluster was rebooted, or restarted for another reason, monitor the cluster-level metric using the `Minimum` dimension. To check which instance in the cluster has gone the longest without a reboot, monitor the cluster-level metric using the `Maximum` dimension. For example, you might want to confirm that all DB instances in the cluster were rebooted after a configuration change. 

**Tip**  
 For long-term monitoring, we recommend monitoring the `EngineUptime` metric for individual instances instead of at the cluster level. The cluster-level `EngineUptime` metric is set to zero when a new DB instance is added to the cluster. Such cluster changes can happen as part of maintenance and scaling operations such as those performed by Auto Scaling. 
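To see why the cluster-level `Minimum` resets while the `Maximum` doesn't, consider the per-instance uptimes directly. This toy calculation (values invented) mirrors what the two cluster-level statistics report at a single point in time:

```python
# Per-instance EngineUptime values, in seconds, at one moment in time.
# instance-6305 was just rebooted; the others were not (values invented).
uptimes = {
    "instance-1234": 63279,  # writer, untouched
    "instance-7448": 63279,  # reader, untouched
    "instance-6305": 51,     # reader, just rebooted
}

# The cluster-level metric with the Minimum statistic drops to the
# rebooted instance's uptime, while Maximum stays at the uptime of the
# longest-running instance.
cluster_minimum = min(uptimes.values())
cluster_maximum = max(uptimes.values())
print(cluster_minimum, cluster_maximum)
```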

 The following CLI examples show how to examine the `EngineUptime` metric for the writer and reader instances in a cluster. The examples use a cluster named `tpch100g`. This cluster has a writer DB instance `instance-1234`. It also has two reader DB instances, `instance-7448` and `instance-6305`. 

 First, the `reboot-db-instance` command reboots one of the reader instances. The `wait` command waits until the instance is finished rebooting. 

```
$ aws rds reboot-db-instance --db-instance-identifier instance-6305
{
    "DBInstance": {
        "DBInstanceIdentifier": "instance-6305",
        "DBInstanceStatus": "rebooting",
...
$ aws rds wait db-instance-available --db-instance-identifier instance-6305
```

 The CloudWatch `get-metric-statistics` command examines the `EngineUptime` metric over the last five minutes at one-minute intervals. The uptime for the `instance-6305` instance is reset to zero and begins counting upwards again. This AWS CLI example for Linux uses `$()` variable substitution to insert the appropriate timestamps into the CLI commands. It also uses the Linux `sort` command to order the output by the time the metric was collected. That timestamp value is the third field in each line of output. 

```
$ aws cloudwatch get-metric-statistics --metric-name "EngineUptime" \
  --start-time "$(date -d '5 minutes ago')" --end-time "$(date -d 'now')" \
  --period 60 --namespace "AWS/RDS" --statistics Maximum \
  --dimensions Name=DBInstanceIdentifier,Value=instance-6305 --output text \
  | sort -k 3
EngineUptime
DATAPOINTS	231.0	2021-03-16T18:19:00+00:00	Seconds
DATAPOINTS	291.0	2021-03-16T18:20:00+00:00	Seconds
DATAPOINTS	351.0	2021-03-16T18:21:00+00:00	Seconds
DATAPOINTS	411.0	2021-03-16T18:22:00+00:00	Seconds
DATAPOINTS	471.0	2021-03-16T18:23:00+00:00	Seconds
```

 The minimum uptime for the cluster is reset to zero because one of the instances in the cluster was rebooted. The maximum uptime for the cluster isn't reset because at least one of the DB instances in the cluster remained available. 

```
$ aws cloudwatch get-metric-statistics --metric-name "EngineUptime" \
  --start-time "$(date -d '5 minutes ago')" --end-time "$(date -d 'now')" \
  --period 60 --namespace "AWS/RDS" --statistics Minimum \
  --dimensions Name=DBClusterIdentifier,Value=tpch100g --output text \
  | sort -k 3
EngineUptime
DATAPOINTS	63099.0	2021-03-16T18:12:00+00:00	Seconds
DATAPOINTS	63159.0	2021-03-16T18:13:00+00:00	Seconds
DATAPOINTS	63219.0	2021-03-16T18:14:00+00:00	Seconds
DATAPOINTS	63279.0	2021-03-16T18:15:00+00:00	Seconds
DATAPOINTS	51.0	2021-03-16T18:16:00+00:00	Seconds

$ aws cloudwatch get-metric-statistics --metric-name "EngineUptime" \
  --start-time "$(date -d '5 minutes ago')" --end-time "$(date -d 'now')" \
  --period 60 --namespace "AWS/RDS" --statistics Maximum \
  --dimensions Name=DBClusterIdentifier,Value=tpch100g --output text \
  | sort -k 3
EngineUptime
DATAPOINTS	63389.0	2021-03-16T18:16:00+00:00	Seconds
DATAPOINTS	63449.0	2021-03-16T18:17:00+00:00	Seconds
DATAPOINTS	63509.0	2021-03-16T18:18:00+00:00	Seconds
DATAPOINTS	63569.0	2021-03-16T18:19:00+00:00	Seconds
DATAPOINTS	63629.0	2021-03-16T18:20:00+00:00	Seconds
```

 Then another `reboot-db-instance` command reboots the writer instance of the cluster. Another `wait` command pauses until the writer instance is finished rebooting. 

```
$ aws rds reboot-db-instance --db-instance-identifier instance-1234
{
  "DBInstanceIdentifier": "instance-1234",
  "DBInstanceStatus": "rebooting",
...
$ aws rds wait db-instance-available --db-instance-identifier instance-1234
```

 Now the `EngineUptime` metric for the writer instance shows that the instance `instance-1234` was rebooted recently. The reader instance `instance-6305` was also rebooted automatically along with the writer instance. This cluster is running Aurora MySQL 2.09, which doesn't keep the reader instances running as the writer instance reboots. 

```
$ aws cloudwatch get-metric-statistics --metric-name "EngineUptime" \
  --start-time "$(date -d '5 minutes ago')" --end-time "$(date -d 'now')" \
  --period 60 --namespace "AWS/RDS" --statistics Maximum \
  --dimensions Name=DBInstanceIdentifier,Value=instance-1234 --output text \
  | sort -k 3
EngineUptime
DATAPOINTS	63749.0	2021-03-16T18:22:00+00:00	Seconds
DATAPOINTS	63809.0	2021-03-16T18:23:00+00:00	Seconds
DATAPOINTS	63869.0	2021-03-16T18:24:00+00:00	Seconds
DATAPOINTS	41.0	2021-03-16T18:25:00+00:00	Seconds
DATAPOINTS	101.0	2021-03-16T18:26:00+00:00	Seconds

$ aws cloudwatch get-metric-statistics --metric-name "EngineUptime" \
  --start-time "$(date -d '5 minutes ago')" --end-time "$(date -d 'now')" \
  --period 60 --namespace "AWS/RDS" --statistics Maximum \
  --dimensions Name=DBInstanceIdentifier,Value=instance-6305 --output text \
  | sort -k 3
EngineUptime
DATAPOINTS	411.0	2021-03-16T18:22:00+00:00	Seconds
DATAPOINTS	471.0	2021-03-16T18:23:00+00:00	Seconds
DATAPOINTS	531.0	2021-03-16T18:24:00+00:00	Seconds
DATAPOINTS	49.0	2021-03-16T18:26:00+00:00	Seconds
```

# Examples of Aurora reboot operations


 The following Aurora MySQL examples show different combinations of reboot operations for reader and writer DB instances in an Aurora DB cluster. After each reboot, SQL queries demonstrate the uptime for the instances in the cluster. 

**Topics**
+ [

## Finding the writer and reader instances for an Aurora cluster
](#USER_Reboot.Examples.IsClusterWriter)
+ [

## Rebooting a single reader instance
](#USER_Reboot.Examples.RebootReader)
+ [

## Rebooting the writer instance
](#USER_Reboot.Examples.RebootWriter)
+ [

## Rebooting the writer and readers independently
](#USER_Reboot.Examples.RebootAsynch)
+ [

## Applying a cluster parameter change to an Aurora MySQL version 2.10 cluster
](#USER_Reboot.Examples.ParamChangeNewStyle)

## Finding the writer and reader instances for an Aurora cluster


 In an Aurora MySQL cluster with multiple DB instances, it's important to know which one is the writer and which ones are the readers. The writer and reader instances also can switch roles when a failover operation happens. Thus, it's best to perform a check like the following before doing any operation that requires a writer or reader instance. In this case, the `False` values for `IsClusterWriter` identify the reader instances, `instance-6305` and `instance-7448`. The `True` value identifies the writer instance, `instance-1234`. 

```
$ aws rds describe-db-clusters --db-cluster-identifier tpch100g \
  --query "*[].['Cluster:',DBClusterIdentifier,DBClusterMembers[*].['Instance:',DBInstanceIdentifier,IsClusterWriter]]" \
  --output text
Cluster:     tpch100g
Instance:    instance-6305    False
Instance:    instance-7448    False
Instance:    instance-1234    True
```

 Before we start the examples of rebooting, the writer instance has an uptime of approximately one week. The SQL query in this example shows a MySQL-specific way to check the uptime. You might use this technique in a database application. For another technique that uses the AWS CLI and works for both Aurora engines, see [Checking uptime for Aurora clusters and instances](USER_Reboot.Uptime.md). 

```
$ mysql -h instance-7448.a12345.us-east-1.rds.amazonaws.com -P 3306 -u my-user -p
...
mysql> select date_sub(now(), interval variable_value second) "Last Startup",
    -> time_format(sec_to_time(variable_value),'%Hh %im') as "Uptime"
    -> from performance_schema.global_status
    -> where variable_name='Uptime';
+----------------------------+---------+
| Last Startup               | Uptime  |
+----------------------------+---------+
| 2021-03-08 17:49:06.000000 | 174h 42m|
+----------------------------+---------+
```

## Rebooting a single reader instance


 This example reboots one of the reader DB instances. Perhaps this instance was overloaded by a huge query or many concurrent connections. Or perhaps it fell behind the writer instance because of a network issue. After starting the reboot operation, the example uses a `wait` command to pause until the instance becomes available. By that point, the instance has an uptime of a few minutes. 

```
$ aws rds reboot-db-instance --db-instance-identifier instance-6305
{
    "DBInstance": {
        "DBInstanceIdentifier": "instance-6305",
        "DBInstanceStatus": "rebooting",
...
    }
}
$ aws rds wait db-instance-available --db-instance-identifier instance-6305
$ mysql -h instance-6305.a12345.us-east-1.rds.amazonaws.com -P 3306 -u my-user -p
...
mysql> select date_sub(now(), interval variable_value second) "Last Startup",
    -> time_format(sec_to_time(variable_value),'%Hh %im') as "Uptime"
    -> from performance_schema.global_status
    -> where variable_name='Uptime';
+----------------------------+---------+
| Last Startup               | Uptime  |
+----------------------------+---------+
| 2021-03-16 00:35:02.000000 | 00h 03m |
+----------------------------+---------+
```

 Rebooting the reader instance didn't affect the uptime of the writer instance. It still has an uptime of about one week. 

```
$ mysql -h instance-7448.a12345.us-east-1.rds.amazonaws.com -P 3306 -u my-user -p
...
mysql> select date_sub(now(), interval variable_value second) "Last Startup",
    -> time_format(sec_to_time(variable_value),'%Hh %im') as "Uptime"
    -> from performance_schema.global_status where variable_name='Uptime';
+----------------------------+----------+
| Last Startup               | Uptime   |
+----------------------------+----------+
| 2021-03-08 17:49:06.000000 | 174h 49m |
+----------------------------+----------+
```

## Rebooting the writer instance


 This example reboots the writer instance. This cluster is running Aurora MySQL version 2.09. Because the Aurora MySQL version is lower than 2.10, rebooting the writer instance also reboots any reader instances in the cluster. 

 A `wait` command pauses until the reboot is finished. Now the uptime for that instance is reset to zero. It's possible that a reboot operation might take substantially different times for writer and reader DB instances. The writer and reader DB instances perform different kinds of cleanup operations depending on their roles. 

```
$ aws rds reboot-db-instance --db-instance-identifier instance-1234
{
    "DBInstance": {
        "DBInstanceIdentifier": "instance-1234",
        "DBInstanceStatus": "rebooting",
...
    }
}
$ aws rds wait db-instance-available --db-instance-identifier instance-1234
$ mysql -h instance-1234.a12345.us-east-1.rds.amazonaws.com -P 3306 -u my-user -p
...
mysql> select date_sub(now(), interval variable_value second) "Last Startup",
    -> time_format(sec_to_time(variable_value),'%Hh %im') as "Uptime"
    -> from performance_schema.global_status where variable_name='Uptime';
+----------------------------+---------+
| Last Startup               | Uptime  |
+----------------------------+---------+
| 2021-03-16 00:40:27.000000 | 00h 00m |
+----------------------------+---------+
```

 After the reboot for the writer DB instance, both of the reader DB instances also have their uptime reset. Rebooting the writer instance caused the reader instances to reboot also. This behavior applies to Aurora PostgreSQL clusters and to Aurora MySQL clusters before version 2.10. 

```
$ mysql -h instance-7448.a12345.us-east-1.rds.amazonaws.com -P 3306 -u my-user -p
...
mysql> select date_sub(now(), interval variable_value second) "Last Startup",
    -> time_format(sec_to_time(variable_value),'%Hh %im') as "Uptime"
    -> from performance_schema.global_status where variable_name='Uptime';
+----------------------------+---------+
| Last Startup               | Uptime  |
+----------------------------+---------+
| 2021-03-16 00:40:35.000000 | 00h 00m |
+----------------------------+---------+

$ mysql -h instance-6305.a12345.us-east-1.rds.amazonaws.com -P 3306 -u my-user -p
...
mysql> select date_sub(now(), interval variable_value second) "Last Startup",
    -> time_format(sec_to_time(variable_value),'%Hh %im') as "Uptime"
    -> from performance_schema.global_status where variable_name='Uptime';
+----------------------------+---------+
| Last Startup               | Uptime  |
+----------------------------+---------+
| 2021-03-16 00:40:33.000000 | 00h 01m |
+----------------------------+---------+
```

## Rebooting the writer and readers independently


 These next examples show a cluster that runs Aurora MySQL version 2.10. In this Aurora MySQL version and higher, you can reboot the writer instance without causing reboots of the reader instances. That way, your query-intensive applications don't experience an outage when you reboot the writer instance. You can reboot the reader instances later, for example at a time of low query traffic. You can also reboot the reader instances one at a time, so that at least one reader instance is always available for the query traffic of your application. 
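
 The one-at-a-time approach can be sketched as a short AWS CLI loop. This is illustrative only; the instance names are placeholders, and the `wait` between reboots is what keeps at least one reader available at all times. 

```
#!/bin/bash
# Sketch: rolling reboot of the reader instances in an Aurora cluster.
# The reader identifiers below are placeholders; substitute your own.
readers="reader-instance-1 reader-instance-2 reader-instance-3"

for instance in $readers; do
  aws rds reboot-db-instance --db-instance-identifier "$instance"
  # Wait until this reader is available again before rebooting the next one,
  # so the cluster always has at least one reader accepting queries.
  aws rds wait db-instance-available --db-instance-identifier "$instance"
done
```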

 The following example uses a cluster named `cluster-2393`, running Aurora MySQL version `5.7.mysql_aurora.2.10.0`. This cluster has a writer instance named `instance-9404` and three reader instances named `instance-6772`, `instance-2470`, and `instance-5138`. 

```
$ aws rds describe-db-clusters --db-cluster-identifier cluster-2393 \
  --query "*[].['Cluster:',DBClusterIdentifier,DBClusterMembers[*].['Instance:',DBInstanceIdentifier,IsClusterWriter]]" \
  --output text
Cluster:        cluster-2393
Instance:       instance-5138        False
Instance:       instance-2470        False
Instance:       instance-6772        False
Instance:       instance-9404        True
```

 Checking the `uptime` value of each database instance through the `mysql` command shows that each one has roughly the same uptime. For example, here is the uptime for `instance-5138`. 

```
mysql> SHOW GLOBAL STATUS LIKE 'uptime';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Uptime        | 3866  |
+---------------+-------+
```

 By using CloudWatch, we can get the corresponding uptime information without actually logging into the instances. That way, an administrator can monitor the database but can't view or change any table data. In this case, we specify a time period spanning five minutes, and check the uptime value every minute. The increasing uptime values demonstrate that the instances weren't restarted during that period. 

```
$ aws cloudwatch get-metric-statistics --metric-name "EngineUptime" \
  --start-time "$(date -d '5 minutes ago')" --end-time "$(date -d 'now')" --period 60 \
  --namespace "AWS/RDS" --statistics Minimum --dimensions Name=DBInstanceIdentifier,Value=instance-9404 \
  --output text | sort -k 3
EngineUptime
DATAPOINTS	4648.0	2021-03-17T23:42:00+00:00	Seconds
DATAPOINTS	4708.0	2021-03-17T23:43:00+00:00	Seconds
DATAPOINTS	4768.0	2021-03-17T23:44:00+00:00	Seconds
DATAPOINTS	4828.0	2021-03-17T23:45:00+00:00	Seconds
DATAPOINTS	4888.0	2021-03-17T23:46:00+00:00	Seconds

$ aws cloudwatch get-metric-statistics --metric-name "EngineUptime" \
  --start-time "$(date -d '5 minutes ago')" --end-time "$(date -d 'now')" --period 60 \
  --namespace "AWS/RDS" --statistics Minimum --dimensions Name=DBInstanceIdentifier,Value=instance-6772 \
  --output text | sort -k 3
EngineUptime
DATAPOINTS	4315.0	2021-03-17T23:42:00+00:00	Seconds
DATAPOINTS	4375.0	2021-03-17T23:43:00+00:00	Seconds
DATAPOINTS	4435.0	2021-03-17T23:44:00+00:00	Seconds
DATAPOINTS	4495.0	2021-03-17T23:45:00+00:00	Seconds
DATAPOINTS	4555.0	2021-03-17T23:46:00+00:00	Seconds
```

 Now we reboot one of the reader instances, `instance-5138`. We wait for the instance to become available again after the reboot. Now monitoring the uptime over a five-minute period shows that the uptime was reset to zero during that time. The most recent uptime value was measured five seconds after the reboot finished. 

```
$ aws rds reboot-db-instance --db-instance-identifier instance-5138
{
  "DBInstanceIdentifier": "instance-5138",
  "DBInstanceStatus": "rebooting"
}
$ aws rds wait db-instance-available --db-instance-identifier instance-5138

$ aws cloudwatch get-metric-statistics --metric-name "EngineUptime" \
  --start-time "$(date -d '5 minutes ago')" --end-time "$(date -d 'now')" --period 60 \
  --namespace "AWS/RDS" --statistics Minimum --dimensions Name=DBInstanceIdentifier,Value=instance-5138 \
  --output text | sort -k 3
EngineUptime
DATAPOINTS	4500.0	2021-03-17T23:46:00+00:00	Seconds
DATAPOINTS	4560.0	2021-03-17T23:47:00+00:00	Seconds
DATAPOINTS	4620.0	2021-03-17T23:48:00+00:00	Seconds
DATAPOINTS	4680.0	2021-03-17T23:49:00+00:00	Seconds
DATAPOINTS	5.0	2021-03-17T23:50:00+00:00	Seconds
```

 Next, we perform a reboot for the writer instance, `instance-9404`. We compare the uptime values for the writer instance and one of the reader instances. By doing so, we can see that rebooting the writer didn't cause a reboot for the readers. In versions before Aurora MySQL 2.10, the uptime values for all the readers would be reset at the same time as the writer. 

```
$ aws rds reboot-db-instance --db-instance-identifier instance-9404
{
  "DBInstanceIdentifier": "instance-9404",
  "DBInstanceStatus": "rebooting"
}
$ aws rds wait db-instance-available --db-instance-identifier instance-9404

$ aws cloudwatch get-metric-statistics --metric-name "EngineUptime" \
  --start-time "$(date -d '5 minutes ago')" --end-time "$(date -d 'now')" --period 60 \
  --namespace "AWS/RDS" --statistics Minimum --dimensions Name=DBInstanceIdentifier,Value=instance-9404 \
  --output text | sort -k 3
EngineUptime
DATAPOINTS	371.0	2021-03-17T23:57:00+00:00	Seconds
DATAPOINTS	431.0	2021-03-17T23:58:00+00:00	Seconds
DATAPOINTS	491.0	2021-03-17T23:59:00+00:00	Seconds
DATAPOINTS	551.0	2021-03-18T00:00:00+00:00	Seconds
DATAPOINTS	37.0	2021-03-18T00:01:00+00:00	Seconds

$ aws cloudwatch get-metric-statistics --metric-name "EngineUptime" \
  --start-time "$(date -d '5 minutes ago')" --end-time "$(date -d 'now')" --period 60 \
  --namespace "AWS/RDS" --statistics Minimum --dimensions Name=DBInstanceIdentifier,Value=instance-6772 \
  --output text | sort -k 3
EngineUptime
DATAPOINTS	5215.0	2021-03-17T23:57:00+00:00	Seconds
DATAPOINTS	5275.0	2021-03-17T23:58:00+00:00	Seconds
DATAPOINTS	5335.0	2021-03-17T23:59:00+00:00	Seconds
DATAPOINTS	5395.0	2021-03-18T00:00:00+00:00	Seconds
DATAPOINTS	5455.0	2021-03-18T00:01:00+00:00	Seconds
```

 To make sure that all the reader instances have the same configuration parameter changes as the writer instance, reboot all the reader instances after the writer. This example reboots all the readers and then waits until all of them are available before proceeding. 

```
$ aws rds reboot-db-instance --db-instance-identifier instance-6772
{
  "DBInstanceIdentifier": "instance-6772",
  "DBInstanceStatus": "rebooting"
}

$ aws rds reboot-db-instance --db-instance-identifier instance-2470
{
  "DBInstanceIdentifier": "instance-2470",
  "DBInstanceStatus": "rebooting"
}

$ aws rds reboot-db-instance --db-instance-identifier instance-5138
{
  "DBInstanceIdentifier": "instance-5138",
  "DBInstanceStatus": "rebooting"
}

$ aws rds wait db-instance-available --db-instance-identifier instance-6772
$ aws rds wait db-instance-available --db-instance-identifier instance-2470
$ aws rds wait db-instance-available --db-instance-identifier instance-5138
```
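
 Rather than hard-coding the instance names as in the transcript above, you can derive the reader list from the cluster metadata. The following is a sketch using the cluster name from this example; it issues all the reboots first and then waits for each instance to come back. 

```
#!/bin/bash
# Sketch: reboot every reader in a cluster, deriving the list from the API.
# The cluster name is taken from this example.
cluster=cluster-2393

# Readers are the members whose IsClusterWriter attribute is false.
readers=$(aws rds describe-db-clusters --db-cluster-identifier "$cluster" \
  --query 'DBClusters[0].DBClusterMembers[?IsClusterWriter==`false`].DBInstanceIdentifier' \
  --output text)

# Issue all the reboots first, then wait for each instance to become available.
for instance in $readers; do
  aws rds reboot-db-instance --db-instance-identifier "$instance"
done
for instance in $readers; do
  aws rds wait db-instance-available --db-instance-identifier "$instance"
done
```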

 Now we can see that the writer DB instance has the highest uptime. This instance's uptime value increased steadily throughout the monitoring period. The reader DB instances were all rebooted after the writer. We can see the point within the monitoring period when each reader was rebooted and its uptime was reset to zero. 

```
$ aws cloudwatch get-metric-statistics --metric-name "EngineUptime" \
  --start-time "$(date -d '5 minutes ago')" --end-time "$(date -d 'now')" --period 60 \
  --namespace "AWS/RDS" --statistics Minimum --dimensions Name=DBInstanceIdentifier,Value=instance-9404 \
  --output text | sort -k 3
EngineUptime
DATAPOINTS	457.0	2021-03-18T00:08:00+00:00	Seconds
DATAPOINTS	517.0	2021-03-18T00:09:00+00:00	Seconds
DATAPOINTS	577.0	2021-03-18T00:10:00+00:00	Seconds
DATAPOINTS	637.0	2021-03-18T00:11:00+00:00	Seconds
DATAPOINTS	697.0	2021-03-18T00:12:00+00:00	Seconds

$ aws cloudwatch get-metric-statistics --metric-name "EngineUptime" \
  --start-time "$(date -d '5 minutes ago')" --end-time "$(date -d 'now')" --period 60 \
  --namespace "AWS/RDS" --statistics Minimum --dimensions Name=DBInstanceIdentifier,Value=instance-2470 \
  --output text | sort -k 3
EngineUptime
DATAPOINTS	5819.0	2021-03-18T00:08:00+00:00	Seconds
DATAPOINTS	35.0	2021-03-18T00:09:00+00:00	Seconds
DATAPOINTS	95.0	2021-03-18T00:10:00+00:00	Seconds
DATAPOINTS	155.0	2021-03-18T00:11:00+00:00	Seconds
DATAPOINTS	215.0	2021-03-18T00:12:00+00:00	Seconds

$ aws cloudwatch get-metric-statistics --metric-name "EngineUptime" \
  --start-time "$(date -d '5 minutes ago')" --end-time "$(date -d 'now')" --period 60 \
  --namespace "AWS/RDS" --statistics Minimum --dimensions Name=DBInstanceIdentifier,Value=instance-5138 \
  --output text | sort -k 3
EngineUptime
DATAPOINTS	1085.0	2021-03-18T00:08:00+00:00	Seconds
DATAPOINTS	1145.0	2021-03-18T00:09:00+00:00	Seconds
DATAPOINTS	1205.0	2021-03-18T00:10:00+00:00	Seconds
DATAPOINTS	49.0	2021-03-18T00:11:00+00:00	Seconds
DATAPOINTS	109.0	2021-03-18T00:12:00+00:00	Seconds
```

## Applying a cluster parameter change to an Aurora MySQL version 2.10 cluster


 The following example demonstrates how to apply a parameter change to all DB instances in your Aurora MySQL 2.10 cluster. With this Aurora MySQL version, you reboot the writer instance and all the reader instances independently. 

 The example uses the MySQL configuration parameter `lower_case_table_names` for illustration. When this parameter setting is different between the writer and reader DB instances, a query might not be able to access a table declared with an uppercase or mixed-case name. Or if two table names differ only in terms of uppercase and lowercase letters, a query might access the wrong table. 

 This example shows how to determine the writer and reader instances in the cluster by examining the `IsClusterWriter` attribute of each instance. The cluster is named `cluster-2393`. The cluster has a writer instance named `instance-9404`. The reader instances in the cluster are named `instance-5138` and `instance-2470`. 

```
$ aws rds describe-db-clusters --db-cluster-identifier cluster-2393 \
  --query '*[].[DBClusterIdentifier,DBClusterMembers[*].[DBInstanceIdentifier,IsClusterWriter]]' \
  --output text
cluster-2393
instance-5138        False
instance-2470        False
instance-9404        True
```

 To demonstrate the effects of changing the `lower_case_table_names` parameter, we set up two DB cluster parameter groups. The `lower-case-table-names-0` parameter group has this parameter set to 0. The `lower-case-table-names-1` parameter group has this parameter set to 1. 

```
$ aws rds create-db-cluster-parameter-group --description 'lower-case-table-names-0' \
  --db-parameter-group-family aurora-mysql5.7 \
  --db-cluster-parameter-group-name lower-case-table-names-0
{
    "DBClusterParameterGroup": {
        "DBClusterParameterGroupName": "lower-case-table-names-0",
        "DBParameterGroupFamily": "aurora-mysql5.7",
        "Description": "lower-case-table-names-0"
    }
}

$ aws rds create-db-cluster-parameter-group --description 'lower-case-table-names-1' \
  --db-parameter-group-family aurora-mysql5.7 \
  --db-cluster-parameter-group-name lower-case-table-names-1
{
    "DBClusterParameterGroup": {
        "DBClusterParameterGroupName": "lower-case-table-names-1",
        "DBParameterGroupFamily": "aurora-mysql5.7",
        "Description": "lower-case-table-names-1"
    }
}

$ aws rds modify-db-cluster-parameter-group \
  --db-cluster-parameter-group-name lower-case-table-names-0 \
  --parameters ParameterName=lower_case_table_names,ParameterValue=0,ApplyMethod=pending-reboot
{
    "DBClusterParameterGroupName": "lower-case-table-names-0"
}

$ aws rds modify-db-cluster-parameter-group \
  --db-cluster-parameter-group-name lower-case-table-names-1 \
    --parameters ParameterName=lower_case_table_names,ParameterValue=1,ApplyMethod=pending-reboot
{
    "DBClusterParameterGroupName": "lower-case-table-names-1"
}
```

 The default value of `lower_case_table_names` is 0. With this parameter setting, the table `foo` is distinct from the table `FOO`. This example verifies that the parameter is still at its default setting. Then the example creates three tables that differ only in uppercase and lowercase letters in their names. 

```
mysql> create database lctn;
Query OK, 1 row affected (0.07 sec)

mysql> use lctn;
Database changed
mysql> select @@lower_case_table_names;
+--------------------------+
| @@lower_case_table_names |
+--------------------------+
|                        0 |
+--------------------------+

mysql> create table foo (s varchar(128));
mysql> insert into foo values ('Lowercase table name foo');

mysql> create table Foo (s varchar(128));
mysql> insert into Foo values ('Mixed-case table name Foo');

mysql> create table FOO (s varchar(128));
mysql> insert into FOO values ('Uppercase table name FOO');

mysql> select * from foo;
+--------------------------+
| s                        |
+--------------------------+
| Lowercase table name foo |
+--------------------------+

mysql> select * from Foo;
+---------------------------+
| s                         |
+---------------------------+
| Mixed-case table name Foo |
+---------------------------+

mysql> select * from FOO;
+--------------------------+
| s                        |
+--------------------------+
| Uppercase table name FOO |
+--------------------------+
```

 Next, we associate the DB cluster parameter group with the cluster to set the `lower_case_table_names` parameter to 1. The change only takes effect after each DB instance is rebooted. 

```
$ aws rds modify-db-cluster --db-cluster-identifier cluster-2393 \
  --db-cluster-parameter-group-name lower-case-table-names-1
{
  "DBClusterIdentifier": "cluster-2393",
  "DBClusterParameterGroup": "lower-case-table-names-1",
  "Engine": "aurora-mysql",
  "EngineVersion": "5.7.mysql_aurora.2.10.0"
}
```

 The first reboot we do is for the writer DB instance. Then we wait for the instance to become available again. At that point, we connect to the writer endpoint and verify that the writer instance has the changed parameter value. The `SHOW TABLES` command confirms that the database contains the three different tables. However, queries that refer to tables named `foo`, `Foo`, or `FOO` all access the table whose name is all-lowercase, `foo`. 

```
# Rebooting the writer instance
$ aws rds reboot-db-instance --db-instance-identifier instance-9404
$ aws rds wait db-instance-available --db-instance-identifier instance-9404
```

 Now, queries using the cluster endpoint show the effects of the parameter change. Whether the table name in the query is uppercase, lowercase, or mixed case, the SQL statement accesses the table whose name is all lowercase. 

```
mysql> select @@lower_case_table_names;
+--------------------------+
| @@lower_case_table_names |
+--------------------------+
|                        1 |
+--------------------------+

mysql> use lctn;
mysql> show tables;
+----------------+
| Tables_in_lctn |
+----------------+
| FOO            |
| Foo            |
| foo            |
+----------------+

mysql> select * from foo;
+--------------------------+
| s                        |
+--------------------------+
| Lowercase table name foo |
+--------------------------+

mysql> select * from Foo;
+--------------------------+
| s                        |
+--------------------------+
| Lowercase table name foo |
+--------------------------+

mysql> select * from FOO;
+--------------------------+
| s                        |
+--------------------------+
| Lowercase table name foo |
+--------------------------+
```

 The next example shows the same queries as the previous one. In this case, the queries use the reader endpoint and run on one of the reader DB instances. Those instances haven't been rebooted yet. Thus, they still have the original setting for the `lower_case_table_names` parameter. That means that queries can access each of the `foo`, `Foo`, and `FOO` tables. 

```
mysql> select @@lower_case_table_names;
+--------------------------+
| @@lower_case_table_names |
+--------------------------+
|                        0 |
+--------------------------+

mysql> use lctn;

mysql> select * from foo;
+--------------------------+
| s                        |
+--------------------------+
| Lowercase table name foo |
+--------------------------+

mysql> select * from Foo;
+---------------------------+
| s                         |
+---------------------------+
| Mixed-case table name Foo |
+---------------------------+

mysql> select * from FOO;
+--------------------------+
| s                        |
+--------------------------+
| Uppercase table name FOO |
+--------------------------+
```

 Next, we reboot one of the reader instances and wait for it to become available again. 

```
$ aws rds reboot-db-instance --db-instance-identifier instance-2470
{
  "DBInstanceIdentifier": "instance-2470",
  "DBInstanceStatus": "rebooting"
}
$ aws rds wait db-instance-available --db-instance-identifier instance-2470
```

 While connected to the instance endpoint for `instance-2470`, a query shows that the new parameter is in effect. 

```
mysql> select @@lower_case_table_names;
+--------------------------+
| @@lower_case_table_names |
+--------------------------+
|                        1 |
+--------------------------+
```

 At this point, the two reader instances in the cluster are running with different `lower_case_table_names` settings. Thus, any connection to the reader endpoint of the cluster uses a value for this setting that's unpredictable. It's important to immediately reboot the other reader instance so that they both have consistent settings. 

```
$ aws rds reboot-db-instance --db-instance-identifier instance-5138
{
  "DBInstanceIdentifier": "instance-5138",
  "DBInstanceStatus": "rebooting"
}
$ aws rds wait db-instance-available --db-instance-identifier instance-5138
```

 The following example confirms that all the reader instances have the same setting for the `lower_case_table_names` parameter. The commands check the `lower_case_table_names` setting value on each reader instance. Then the same command using the reader endpoint demonstrates that each connection to the reader endpoint uses one of the reader instances, but which one isn't predictable. 

```
# Check lower_case_table_names setting on each reader instance.

$ mysql -h instance-5138.a12345.us-east-1.rds.amazonaws.com \
  -u my-user -p -e 'select @@aurora_server_id, @@lower_case_table_names'
+--------------------------+--------------------------+
| @@aurora_server_id       | @@lower_case_table_names |
+--------------------------+--------------------------+
| instance-5138            |                        1 |
+--------------------------+--------------------------+

$ mysql -h instance-2470.a12345.us-east-1.rds.amazonaws.com \
  -u my-user -p -e 'select @@aurora_server_id, @@lower_case_table_names'
+--------------------------+--------------------------+
| @@aurora_server_id       | @@lower_case_table_names |
+--------------------------+--------------------------+
| instance-2470            |                        1 |
+--------------------------+--------------------------+

# Check lower_case_table_names setting on the reader endpoint of the cluster.

$ mysql -h cluster-2393.cluster-ro-a12345.us-east-1.rds.amazonaws.com \
  -u my-user -p -e 'select @@aurora_server_id, @@lower_case_table_names'
+--------------------------+--------------------------+
| @@aurora_server_id       | @@lower_case_table_names |
+--------------------------+--------------------------+
| instance-5138            |                        1 |
+--------------------------+--------------------------+

# Run query on writer instance

$ mysql -h cluster-2393.cluster-a12345.us-east-1.rds.amazonaws.com \
  -u my-user -p -e 'select @@aurora_server_id, @@lower_case_table_names'
+--------------------------+--------------------------+
| @@aurora_server_id       | @@lower_case_table_names |
+--------------------------+--------------------------+
| instance-9404            |                        1 |
+--------------------------+--------------------------+
```
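
 The per-instance checks above can be condensed into a loop. This is a sketch; the host names follow the pattern used in this example, and each invocation prompts for the password as in the transcripts above. 

```
#!/bin/bash
# Sketch: run the same consistency check against every instance endpoint.
# Host names are assumed from this example's cluster.
for host in instance-9404 instance-5138 instance-2470; do
  mysql -h "$host".a12345.us-east-1.rds.amazonaws.com \
    -u my-user -p \
    -e 'select @@aurora_server_id, @@lower_case_table_names'
done
```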

 With the parameter change applied everywhere, we can see the effect of setting `lower_case_table_names=1`. Whether the table is referred to as `foo`, `Foo`, or `FOO`, the query converts the name to `foo` and accesses the same table in each case. 

```
mysql> use lctn;

mysql> select * from foo;
+--------------------------+
| s                        |
+--------------------------+
| Lowercase table name foo |
+--------------------------+

mysql> select * from Foo;
+--------------------------+
| s                        |
+--------------------------+
| Lowercase table name foo |
+--------------------------+

mysql> select * from FOO;
+--------------------------+
| s                        |
+--------------------------+
| Lowercase table name foo |
+--------------------------+
```

# Failing over an Amazon Aurora DB cluster
Failing over an Aurora DB cluster

You can perform a manual failover of an Aurora DB cluster, for example, when you want to replace a provisioned writer DB instance with an Aurora Serverless v2 writer instance.

Aurora fails over to a new primary DB instance in one of two ways:
+ By promoting an existing reader DB instance to the new primary instance
+ By creating a new primary instance

If the DB cluster has one or more reader instances, then a reader is promoted to the primary instance during a failure event. To increase the availability of your DB cluster, we recommend that you create one or more reader instances in two or more different Availability Zones. For more information on the failover mechanism, see [Fault tolerance for an Aurora DB cluster](Concepts.AuroraHighAvailability.md#Aurora.Managing.FaultTolerance).

You can use the AWS Management Console, AWS CLI, or RDS API to perform a manual failover.

## Console


**To fail over a DB cluster**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**, and then select a DB instance in the DB cluster that you want to fail over.

1. For **Actions**, choose **Failover**.

   The confirmation page appears.

1. Choose **Failover**.

   The **Databases** page shows that the DB cluster status is **Failing-over**. The status returns to **Available** when the failover is completed, and the roles for the new and former primary DB instances are displayed.

## AWS CLI


To fail over a DB cluster using the AWS CLI, call the [failover-db-cluster](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/rds/failover-db-cluster.html) command. Specify the following parameters:
+ `--db-cluster-identifier` – The DB cluster that you want to fail over.
+ `--target-db-instance-identifier` – The name of the DB instance to promote to the primary DB instance.

**Example**  
For Linux, macOS, or Unix:  

```
aws rds failover-db-cluster \
    --db-cluster-identifier mydbcluster \
    --target-db-instance-identifier mydbcluster-instance-2
```
For Windows:  

```
aws rds failover-db-cluster ^
    --db-cluster-identifier mydbcluster ^
    --target-db-instance-identifier mydbcluster-instance-2
```

## RDS API


To fail over a DB cluster using the Amazon RDS API, call the [FailoverDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_FailoverDBCluster.html) operation. Specify the following parameters:
+ DBClusterIdentifier
+ TargetDBInstanceIdentifier

# Deleting Aurora DB clusters and DB instances
Deleting Aurora clusters and instances

You can delete an Aurora DB cluster when you no longer need it. Deleting the cluster removes the cluster volume containing all your data. Before deleting the cluster, you can save a snapshot of your data. You can restore the snapshot later to create a new cluster containing the same data.

You can also delete DB instances from a cluster while preserving the cluster itself and the data that it contains. Deleting DB instances can help you reduce your charges if the cluster isn't busy, or you don't need the computation capacity of multiple DB instances.
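
 Saving and later restoring your data can be done with two CLI commands. This is a sketch with placeholder identifiers; note that the restored cluster has no DB instances until you add one. 

```
# Sketch: snapshot a cluster before deleting it, then restore the data later.
# All identifiers are placeholders.
aws rds create-db-cluster-snapshot \
  --db-cluster-identifier my-cluster \
  --db-cluster-snapshot-identifier my-cluster-before-delete

# Later, create a new cluster containing the same data. The restored
# cluster starts with no DB instances; add one with create-db-instance.
aws rds restore-db-cluster-from-snapshot \
  --db-cluster-identifier my-restored-cluster \
  --snapshot-identifier my-cluster-before-delete \
  --engine aurora-mysql
```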

**Topics**
+ [

## Deleting an Aurora DB cluster
](#USER_DeleteCluster.DeleteCluster)
+ [

## Deletion protection for Aurora clusters
](#USER_DeletionProtection)
+ [

## Deleting a stopped Aurora cluster
](#USER_Deletion_Stopped_Cluster)
+ [

## Deleting Aurora MySQL clusters that are read replicas
](#USER_DeleteInstance.AuroraReplica)
+ [

## The final snapshot when deleting a cluster
](#USER_Deletion_Final_Snapshot)
+ [

## Deleting a DB instance from an Aurora DB cluster
](#USER_DeleteInstance)

## Deleting an Aurora DB cluster


Aurora doesn't provide a single-step method to delete a DB cluster. This design choice is intended to prevent you from accidentally losing data or taking your application offline. Aurora applications are typically mission-critical and require high availability. Thus, Aurora makes it easy to scale the capacity of the cluster up and down by adding and removing DB instances. Removing the DB cluster itself requires you to make a separate deletion.

Use the following general procedure to remove all the DB instances from a cluster and then delete the cluster itself.

1. Delete any reader instances in the cluster. Use the procedure in [Deleting a DB instance from an Aurora DB cluster](#USER_DeleteInstance).

   If the cluster has any reader instances, deleting one of the instances only reduces the computation capacity of the cluster. Deleting the reader instances first ensures that the cluster remains available throughout the procedure and doesn't perform unnecessary failover operations.

1. Delete the writer instance from the cluster. Again, use the procedure in [Deleting a DB instance from an Aurora DB cluster](#USER_DeleteInstance).

    When you delete the DB instances, the cluster and its associated cluster volume remain even after you delete all the DB instances.

1. Delete the DB cluster.
   + **AWS Management Console** – Choose the cluster, then choose **Delete** from the **Actions** menu. You can choose the following options to preserve the data from the cluster in case you need it later:
     + Create a final snapshot of the cluster volume. The default setting is to create a final snapshot.
     + Retain automated backups. The default setting is not to retain automated backups.
**Note**  
Automated backups for Aurora Serverless v1 DB clusters aren't retained.

     Aurora also requires you to confirm that you intend to delete the cluster.
   + **CLI and API** – Call the `delete-db-cluster` CLI command or `DeleteDBCluster` API operation. You can choose the following options to preserve the data from the cluster in case you need it later:
     + Create a final snapshot of the cluster volume.
     + Retain automated backups.
**Note**  
Automated backups for Aurora Serverless v1 DB clusters aren't retained.
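
 The general procedure above can be sketched as a script. The identifiers are placeholders; the script deletes the readers, then the writer, and finally the cluster itself while keeping a final snapshot of the cluster volume. 

```
#!/bin/bash
# Sketch of the full teardown, using a placeholder cluster name.
cluster=my-cluster

# Determine the reader and writer instances from the IsClusterWriter attribute.
readers=$(aws rds describe-db-clusters --db-cluster-identifier "$cluster" \
  --query 'DBClusters[0].DBClusterMembers[?IsClusterWriter==`false`].DBInstanceIdentifier' \
  --output text)
writer=$(aws rds describe-db-clusters --db-cluster-identifier "$cluster" \
  --query 'DBClusters[0].DBClusterMembers[?IsClusterWriter==`true`].DBInstanceIdentifier' \
  --output text)

# Delete the readers first, then the writer, to avoid unnecessary failovers.
for instance in $readers $writer; do
  aws rds delete-db-instance --db-instance-identifier "$instance"
done

# Delete the cluster itself, keeping a final snapshot of the cluster volume.
aws rds delete-db-cluster --db-cluster-identifier "$cluster" \
  --final-db-snapshot-identifier "$cluster"-final-snapshot
```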

**Topics**
+ [

### Deleting an empty Aurora cluster
](#USER_DeleteInstance.Empty)
+ [

### Deleting an Aurora cluster with a single DB instance
](#USER_DeleteInstance.SingleInstance)
+ [

### Deleting an Aurora cluster with multiple DB instances
](#USER_DeleteInstance.MultipleInstances)

### Deleting an empty Aurora cluster


You can delete an empty DB cluster using the AWS Management Console, AWS CLI, or Amazon RDS API.

**Tip**  
You can keep a cluster with no DB instances to preserve your data without incurring CPU charges for the cluster. You can quickly start using the cluster again by creating one or more new DB instances for the cluster. However, you can only add new DB instances by using the AWS CLI or the RDS API. You can't add new DB instances by using the Amazon RDS console. You can perform Aurora-specific administrative operations on the cluster while it doesn't have any associated DB instances. You just can't access the data or perform any operations that require connecting to a DB instance.
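
 As a sketch of restarting an empty cluster through the CLI, you attach a new DB instance with `create-db-instance`. The identifiers and instance class here are placeholders. 

```
# Sketch: attach a new DB instance to an existing (empty) Aurora cluster.
# Identifiers and the instance class are placeholders.
aws rds create-db-instance \
  --db-instance-identifier new-writer-instance \
  --db-cluster-identifier my-empty-cluster \
  --engine aurora-mysql \
  --db-instance-class db.r5.large
```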

#### Console


**To delete a DB cluster**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**, and then choose the DB cluster that you want to delete.

1. For **Actions**, choose **Delete**.

1. To create a final DB snapshot for the DB cluster, choose **Create final snapshot?**. This is the default setting.

1. If you chose to create a final snapshot, enter the **Final snapshot name**.

1. To retain automated backups, choose **Retain automated backups**. This is not the default setting.

1. Enter **delete me** in the box.

1. Choose **Delete**.

#### CLI


To delete an empty Aurora DB cluster by using the AWS CLI, call the [delete-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/delete-db-cluster.html) command.

Suppose that the empty cluster `deleteme-zero-instances` was only used for development and testing and doesn't contain any important data. In that case, you don't need to preserve a snapshot of the cluster volume when you delete the cluster. The following example demonstrates that a cluster doesn't contain any DB instances, and then deletes the empty cluster without creating a final snapshot or retaining automated backups.

```
$ aws rds describe-db-clusters --db-cluster-identifier deleteme-zero-instances --output text \
  --query '*[].["Cluster:",DBClusterIdentifier,DBClusterMembers[*].["Instance:",DBInstanceIdentifier,IsClusterWriter]]'
Cluster:        deleteme-zero-instances

$ aws rds delete-db-cluster --db-cluster-identifier deleteme-zero-instances \
  --skip-final-snapshot \
  --delete-automated-backups
{
  "DBClusterIdentifier": "deleteme-zero-instances",
  "Status": "available",
  "Engine": "aurora-mysql"
}
```

#### RDS API


To delete an empty Aurora DB cluster by using the Amazon RDS API, call the [DeleteDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DeleteDBCluster.html) operation.

### Deleting an Aurora cluster with a single DB instance


You can delete the last DB instance, even if the DB cluster has deletion protection enabled. In this case, the DB cluster itself still exists and your data is preserved. You can access the data again by attaching a new DB instance to the cluster.

The following example shows how the `delete-db-cluster` command fails while the cluster still has associated DB instances. This cluster has a single writer DB instance. When we examine the DB instances in the cluster, we check the `IsClusterWriter` attribute of each one. The cluster can have zero or one writer DB instance. A value of `true` signifies a writer DB instance, and a value of `false` signifies a reader DB instance. The cluster can have zero, one, or many reader DB instances. In this case, we delete the writer DB instance by using the `delete-db-instance` command. As soon as the DB instance enters the `deleting` state, we can delete the cluster too. For this example, suppose again that the cluster doesn't contain any data worth preserving. Therefore, we don't create a snapshot of the cluster volume or retain automated backups.

```
$ aws rds delete-db-cluster --db-cluster-identifier deleteme-writer-only --skip-final-snapshot
An error occurred (InvalidDBClusterStateFault) when calling the DeleteDBCluster operation:
  Cluster cannot be deleted, it still contains DB instances in non-deleting state.

$ aws rds describe-db-clusters --db-cluster-identifier deleteme-writer-only \
  --query '*[].[DBClusterIdentifier,Status,DBClusterMembers[*].[DBInstanceIdentifier,IsClusterWriter]]'
[
    [
        "deleteme-writer-only",
        "available",
        [
            [
                "instance-2130",
                true
            ]
        ]
    ]
]

$ aws rds delete-db-instance --db-instance-identifier instance-2130
{
  "DBInstanceIdentifier": "instance-2130",
  "DBInstanceStatus": "deleting",
  "Engine": "aurora-mysql"
}

$ aws rds delete-db-cluster --db-cluster-identifier deleteme-writer-only \
  --skip-final-snapshot \
  --delete-automated-backups
{
  "DBClusterIdentifier": "deleteme-writer-only",
  "Status": "available",
  "Engine": "aurora-mysql"
}
```

### Deleting an Aurora cluster with multiple DB instances


If your cluster contains multiple DB instances, typically there is a single writer instance and one or more reader instances. The reader instances help with high availability, by being on standby to take over if the writer instance encounters a problem. You can also use reader instances to scale the cluster up to handle a read-intensive workload without adding overhead to the writer instance.

To delete a cluster with multiple reader DB instances, you delete the reader instances first and then the writer instance. Deleting the writer instance leaves the cluster and its data in place. You delete the cluster through a separate action.
+ For the procedure to delete an Aurora DB instance, see [Deleting a DB instance from an Aurora DB cluster](#USER_DeleteInstance).
+ For the procedure to delete the writer DB instance in an Aurora cluster, see [Deleting an Aurora cluster with a single DB instance](#USER_DeleteInstance.SingleInstance).
+ For the procedure to delete an empty Aurora cluster, see [Deleting an empty Aurora cluster](#USER_DeleteInstance.Empty).
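The readers-first, writer-last ordering can be sketched as a small helper. This is an illustration only, not part of the AWS CLI; the member dictionaries mirror the `DBClusterMembers` structure that `describe-db-clusters` returns, and the instance names are made up.

```python
# Sketch: compute a safe deletion order for the DB instances in a cluster.
# Readers are deleted first so that removing the writer doesn't trigger
# a failover to a reader that is about to be deleted anyway.

def deletion_order(members):
    """Return instance identifiers with readers first and writers last."""
    readers = [m["DBInstanceIdentifier"] for m in members if not m["IsClusterWriter"]]
    writers = [m["DBInstanceIdentifier"] for m in members if m["IsClusterWriter"]]
    return readers + writers

members = [
    {"DBInstanceIdentifier": "instance-7384", "IsClusterWriter": True},
    {"DBInstanceIdentifier": "instance-1039", "IsClusterWriter": False},
]
print(deletion_order(members))  # ['instance-1039', 'instance-7384']
```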

This CLI example shows how to delete a cluster containing both a writer DB instance and a single reader DB instance. The `describe-db-clusters` output shows that `instance-7384` is the writer instance and `instance-1039` is the reader instance. The example deletes the reader instance first, because deleting the writer instance while a reader instance still exists causes a failover operation. It doesn't make sense to promote the reader instance to a writer if you plan to delete that instance too. Again, suppose that these `db.t2.small` instances are only used for development and testing, so the delete operation skips the final snapshot and doesn't retain automated backups.

```
$ aws rds delete-db-cluster --db-cluster-identifier deleteme-writer-and-reader --skip-final-snapshot

An error occurred (InvalidDBClusterStateFault) when calling the DeleteDBCluster operation:
  Cluster cannot be deleted, it still contains DB instances in non-deleting state.

$ aws rds describe-db-clusters --db-cluster-identifier deleteme-writer-and-reader --output text \
  --query '*[].["Cluster:",DBClusterIdentifier,DBClusterMembers[*].["Instance:",DBInstanceIdentifier,IsClusterWriter]]'
Cluster:        deleteme-writer-and-reader
Instance:       instance-1039  False
Instance:       instance-7384   True

$ aws rds delete-db-instance --db-instance-identifier instance-1039
{
  "DBInstanceIdentifier": "instance-1039",
  "DBInstanceStatus": "deleting",
  "Engine": "aurora-mysql"
}

$ aws rds delete-db-instance --db-instance-identifier instance-7384
{
  "DBInstanceIdentifier": "instance-7384",
  "DBInstanceStatus": "deleting",
  "Engine": "aurora-mysql"
}

$ aws rds delete-db-cluster --db-cluster-identifier deleteme-writer-and-reader \
  --skip-final-snapshot \
  --delete-automated-backups
{
  "DBClusterIdentifier": "deleteme-writer-and-reader",
  "Status": "available",
  "Engine": "aurora-mysql"
}
```

The following example shows how to delete a DB cluster containing a writer DB instance and multiple reader DB instances. It uses concise output from the `describe-db-clusters` command to get a report of the writer and reader instances. Again, we delete all reader DB instances before deleting the writer DB instance. It doesn't matter what order we delete the reader DB instances in.

Suppose that this cluster with several DB instances does contain data worth preserving. Thus, the `delete-db-cluster` command in this example includes the `--no-skip-final-snapshot` and `--final-db-snapshot-identifier` parameters to specify the details of the snapshot to create. It also includes the `--no-delete-automated-backups` parameter to retain automated backups.

```
$ aws rds describe-db-clusters --db-cluster-identifier deleteme-multiple-readers --output text \
  --query '*[].["Cluster:",DBClusterIdentifier,DBClusterMembers[*].["Instance:",DBInstanceIdentifier,IsClusterWriter]]'
Cluster:        deleteme-multiple-readers
Instance:       instance-1010   False
Instance:       instance-5410   False
Instance:       instance-9948   False
Instance:       instance-8451   True

$ aws rds delete-db-instance --db-instance-identifier instance-1010
{
  "DBInstanceIdentifier": "instance-1010",
  "DBInstanceStatus": "deleting",
  "Engine": "aurora-mysql"
}

$ aws rds delete-db-instance --db-instance-identifier instance-5410
{
  "DBInstanceIdentifier": "instance-5410",
  "DBInstanceStatus": "deleting",
  "Engine": "aurora-mysql"
}

$ aws rds delete-db-instance --db-instance-identifier instance-9948
{
  "DBInstanceIdentifier": "instance-9948",
  "DBInstanceStatus": "deleting",
  "Engine": "aurora-mysql"
}

$ aws rds delete-db-instance --db-instance-identifier instance-8451
{
  "DBInstanceIdentifier": "instance-8451",
  "DBInstanceStatus": "deleting",
  "Engine": "aurora-mysql"
}

$ aws rds delete-db-cluster --db-cluster-identifier deleteme-multiple-readers \
  --no-delete-automated-backups \
  --no-skip-final-snapshot \
  --final-db-snapshot-identifier deleteme-multiple-readers-final-snapshot
{
  "DBClusterIdentifier": "deleteme-multiple-readers",
  "Status": "available",
  "Engine": "aurora-mysql"
}
```

 The following example shows how to confirm that Aurora created the requested snapshot. You can request details for the specific snapshot by specifying its identifier `deleteme-multiple-readers-final-snapshot`. You can also get a report of all snapshots for the cluster that was deleted by specifying the cluster identifier `deleteme-multiple-readers`. Both of those commands return information about the same snapshot. 

```
$ aws rds describe-db-cluster-snapshots \
  --db-cluster-snapshot-identifier deleteme-multiple-readers-final-snapshot
{
    "DBClusterSnapshots": [
        {
            "AvailabilityZones": [],
            "DBClusterSnapshotIdentifier": "deleteme-multiple-readers-final-snapshot",
            "DBClusterIdentifier": "deleteme-multiple-readers",
            "SnapshotCreateTime": "11T01:40:07.354000+00:00",
            "Engine": "aurora-mysql",
...

$ aws rds describe-db-cluster-snapshots --db-cluster-identifier deleteme-multiple-readers
{
    "DBClusterSnapshots": [
        {
            "AvailabilityZones": [],
            "DBClusterSnapshotIdentifier": "deleteme-multiple-readers-final-snapshot",
            "DBClusterIdentifier": "deleteme-multiple-readers",
            "SnapshotCreateTime": "11T01:40:07.354000+00:00",
            "Engine": "aurora-mysql",
...
```

## Deletion protection for Aurora clusters


You can't delete clusters that have deletion protection enabled. You can delete DB instances within the cluster, but not the cluster itself. That way, the cluster volume containing all your data is safe from accidental deletion. Aurora enforces deletion protection for a DB cluster whether you try to delete the cluster using the console, the AWS CLI, or the RDS API.

Deletion protection is enabled by default when you create a production DB cluster using the AWS Management Console. However, deletion protection is disabled by default if you create a cluster using the AWS CLI or API. Enabling or disabling deletion protection doesn't cause an outage. To be able to delete the cluster, modify the cluster and disable deletion protection. For more information about turning deletion protection on and off, see [Modifying the DB cluster by using the console, CLI, and API](Aurora.Modifying.md#Aurora.Modifying.Cluster).

**Tip**  
Even if all the DB instances are deleted, you can access the data by creating a new DB instance in the cluster.

## Deleting a stopped Aurora cluster


You can't delete a cluster if it's in the `stopped` state. In this case, start the cluster before deleting it. For more information, see [Starting an Aurora DB cluster](aurora-cluster-stop-start.md#aurora-cluster-start).

## Deleting Aurora MySQL clusters that are read replicas


For Aurora MySQL, you can't delete a DB instance in a DB cluster if both of the following conditions are true:
+ The DB cluster is a read replica of another Aurora DB cluster.
+ The DB instance is the only instance in the DB cluster.

To delete a DB instance in this case, first promote the DB cluster so that it's no longer a read replica. After the promotion completes, you can delete the final DB instance in the DB cluster. For more information, see [Replicating Amazon Aurora MySQL DB clusters across AWS Regions](AuroraMySQL.Replication.CrossRegion.md).

## The final snapshot when deleting a cluster


Throughout this section, the examples show how you can choose whether to take a final snapshot when you delete an Aurora cluster. If you choose to take a final snapshot but the name you specify matches an existing snapshot, the operation stops with an error. In this case, examine the snapshot details to confirm whether it contains your current data or is an older snapshot. If the existing snapshot doesn't have the latest data that you want to preserve, rename that snapshot and try again, or specify a different name for the **final snapshot** parameter.
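One way to avoid such name collisions is to include a timestamp in the final snapshot identifier. The following Python sketch illustrates this; the naming scheme is an assumption for illustration, not an AWS requirement, and the assertion reflects the general constraints on snapshot identifiers (letters, digits, and hyphens, starting with a letter, no consecutive hyphens).

```python
import re
from datetime import datetime, timezone

def final_snapshot_name(cluster_id):
    """Build a final snapshot identifier that's unlikely to collide by
    appending a UTC timestamp (illustrative naming scheme, not an AWS rule)."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    name = f"{cluster_id}-final-{stamp}"
    # Snapshot identifiers use letters, digits, and hyphens, start with
    # a letter, and can't contain two consecutive hyphens.
    assert re.fullmatch(r"[A-Za-z][A-Za-z0-9-]*", name) and "--" not in name
    return name

print(final_snapshot_name("deleteme-multiple-readers"))
```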

## Deleting a DB instance from an Aurora DB cluster


You can delete a DB instance from an Aurora DB cluster as part of the process of deleting the entire cluster. If your cluster contains one or more DB instances, deleting the cluster requires deleting each of those DB instances first. You can also delete one or more reader instances from a cluster while leaving the cluster running. You might do so to reduce compute capacity and associated charges when your cluster isn't busy. 

To delete a DB instance, you specify the name of the instance.

You can delete a DB instance using the AWS Management Console, the AWS CLI, or the RDS API. 

**Note**  
When an Aurora Replica is deleted, its instance endpoint is removed immediately, and the Aurora Replica is removed from the reader endpoint. If there are statements running on the Aurora Replica that is being deleted, there is a three-minute grace period. Existing statements can finish during the grace period. When the grace period ends, the Aurora Replica is shut down and deleted.

For Aurora DB clusters, deleting a DB instance doesn't necessarily delete the entire cluster. You can delete a DB instance in an Aurora cluster to reduce compute capacity and associated charges when the cluster isn't busy. For information about the special circumstances for Aurora clusters that have one DB instance or zero DB instances, see [Deleting an Aurora cluster with a single DB instance](#USER_DeleteInstance.SingleInstance) and [Deleting an empty Aurora cluster](#USER_DeleteInstance.Empty). 

**Note**  
You can't delete a DB cluster when deletion protection is enabled for it. For more information, see [Deletion protection for Aurora clusters](#USER_DeletionProtection).   
You can disable deletion protection by modifying the DB cluster. For more information, see [Modifying an Amazon Aurora DB cluster](Aurora.Modifying.md). 

### Console


**To delete a DB instance in a DB cluster**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**, and then choose the DB instance that you want to delete.

1. For **Actions**, choose **Delete**.

1. Enter **delete me** in the box.

1. Choose **Delete**.

### AWS CLI


To delete a DB instance by using the AWS CLI, call the [delete-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/delete-db-instance.html) command and specify the `--db-instance-identifier` value.

**Example**  
For Linux, macOS, or Unix:  

```
aws rds delete-db-instance \
    --db-instance-identifier mydbinstance
```
For Windows:  

```
aws rds delete-db-instance ^
    --db-instance-identifier mydbinstance
```

### RDS API


To delete a DB instance by using the Amazon RDS API, call the [DeleteDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DeleteDBInstance.html) operation and specify the `DBInstanceIdentifier` parameter.

**Note**  
 When the status for a DB instance is `deleting`, its CA certificate value doesn't appear in the RDS console or in output for AWS CLI commands or RDS API operations. For more information about CA certificates, see [Using SSL/TLS to encrypt a connection to a DB cluster](UsingWithRDS.SSL.md). 

# Tagging Amazon Aurora and Amazon RDS resources
Tagging Aurora and RDS resources<a name="tagging"></a>

An Amazon RDS tag is a name-value pair that you define and associate with an Amazon RDS resource such as a DB instance or DB snapshot. The name is referred to as the key. Optionally, you can supply a value for the key.

You can use the AWS Management Console, the AWS CLI, or the Amazon RDS API to add, list, and delete tags on Amazon RDS resources. When using the CLI or API, make sure to provide the Amazon Resource Name (ARN) for the RDS resource to work with. For more information about constructing an ARN, see [Constructing an ARN for Amazon RDS](USER_Tagging.ARN.Constructing.md).

You can use tags to add metadata to your Aurora and Amazon RDS resources. You can use the tags to add your own notations about database instances, snapshots, Aurora clusters, and so on. Doing so can help you to document your Aurora and Amazon RDS resources. You can also use the tags with automated maintenance procedures. 

 In particular, you can use these tags with IAM policies. You can use them to manage access to Aurora and Amazon RDS resources and to control what actions can be applied to those resources. You can also use these tags to track costs by grouping expenses for similarly tagged resources. 

You can tag the following Aurora and Amazon RDS resources:
+ DB instances
+ DB clusters
+ Aurora global clusters
+ DB cluster endpoints
+ Read replicas
+ DB snapshots
+ DB cluster snapshots
+ Reserved DB instances
+ Event subscriptions
+ DB option groups
+ DB parameter groups
+ DB cluster parameter groups
+ DB subnet groups
+ RDS Proxies
+ RDS Proxy endpoints
+ Blue/green deployments
+ Zero-ETL integrations
+ Automated backups
+ Cluster automated backups

**Note**  
When you tag a DB instance, Aurora automatically applies those tags to the associated Performance Insights resources. Currently, you can't tag RDS Proxies and RDS Proxy endpoints by using the AWS Management Console.

**Topics**
+ [

## Why use Amazon RDS resource tags?
](#Tagging.Purpose)
+ [

## How Amazon RDS resource tags work
](#Overview.Tagging)
+ [

## Best practices for tagging Amazon RDS resources
](#Tagging.best-practices)
+ [

## Copying tags to DB cluster snapshots
](#USER_Tagging.CopyTagsCluster)
+ [

## Tagging automated backup resources
](#USER_Tagging.AutomatedBackups)
+ [

## Adding and deleting tags in Amazon RDS
](#Tagging.HowTo)
+ [

# Tutorial: Use tags to specify which Aurora DB clusters to stop
](Tagging.Aurora.Autostop.md)

## Why use Amazon RDS resource tags?
Why use RDS tags?

You can use tags to do the following:
+ Categorize your RDS resources by application, project, department, environment, and so on. For example, you could use a tag key to define a category, where the tag value is an item in this category. You might create the tag `environment=prod`. Or you might define a tag key of `project` and a tag value of `Salix`, which indicates that an Amazon RDS resource is assigned to the Salix project.
+ Automate resource management tasks. For example, you could create a maintenance window for instances tagged `environment=prod` that differs from the window for instances tagged `environment=test`. You could also configure automatic DB snapshots for instances tagged `environment=prod`.
+ Control access to RDS resources within an IAM policy. You can do this by using the global `aws:ResourceTag/tag-key` condition key. For example, a policy might allow only users in the `DBAdmin` group to modify DB instances tagged with `environment=prod`. For information about managing access to tagged resources with IAM policies, see [Identity and access management for Amazon Aurora](UsingWithRDS.IAM.md) and [Controlling access to AWS resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html#access_tags_control-resources) in the *AWS Identity and Access Management User Guide*. 
+ Monitor resources based on a tag. For example, you can create an Amazon CloudWatch dashboard for DB instances tagged with `environment=prod`.
+ Track costs by grouping expenses for similarly tagged resources. For example, if you tag RDS resources associated with the Salix project with `project=Salix`, you can generate cost reports for and allocate expenses to this project. For more information, see [How AWS billing works with tags in Amazon RDS](#Tagging.Billing).

## How Amazon RDS resource tags work
How RDS tags work

AWS doesn't apply any semantic meaning to your tags. Tags are interpreted strictly as character strings. 

**Topics**
+ [

### Tag sets in Amazon RDS
](#Overview.Tagging.Sets)
+ [

### Tag structure in Amazon RDS
](#Overview.Tagging.Structure)
+ [

### Amazon RDS resources eligible for tagging
](#Overview.Tagging.Resources)
+ [

### How AWS billing works with tags in Amazon RDS
](#Tagging.Billing)

### Tag sets in Amazon RDS


Every Amazon RDS resource has a container called a tag set. The container includes all the tags that are assigned to the resource. A resource has exactly one tag set. 

A tag set contains from 0 to 50 tags. If you add a tag to an RDS resource with the same key as an existing tag on that resource, the new value overwrites the old value.
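The overwrite behavior can be modeled with an ordinary dictionary keyed by tag key. This is an analogy, not RDS API code:

```python
# A tag set behaves like a map from tag key to tag value: adding a tag
# whose key already exists overwrites the old value instead of creating
# a duplicate entry.
tag_set = {"project": "Salix", "environment": "test"}
tag_set["environment"] = "prod"  # same key: the new value overwrites the old
assert tag_set == {"project": "Salix", "environment": "prod"}
assert len(tag_set) <= 50        # a tag set holds at most 50 tags
```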

### Tag structure in Amazon RDS


The structure of an RDS tag is as follows:

**Tag key**  
The key is the required name of the tag. The string value must be 1—128 Unicode characters in length and cannot be prefixed with `aws:` or `rds:`. The string can contain only the set of Unicode letters, digits, whitespace, `_`, `.`, `:`, `/`, `=`, `+`, `-`, and `@`. The Java regex is `"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-@]*)$"`. Tag keys are case-sensitive. Thus, the keys `project` and `Project` are distinct.  
A key is unique to a tag set. For example, you cannot have a key-pair in a tag set with the key the same but with different values, such as `project=Trinity` and `project=Xanadu`.

**Tag value**  
The value is an optional string value of the tag. The string value must be 1—256 Unicode characters in length. The string can contain only the set of Unicode letters, digits, whitespace, `_`, `.`, `:`, `/`, `=`, `+`, `-`, and `@`. The Java regex is `"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-@]*)$"`. Tag values are case-sensitive. Thus, the values `prod` and `Prod` are distinct.  
Values don't need to be unique in a tag set and can be null. For example, you can have a key-value pair in a tag set of `project=Trinity` and `cost-center=Trinity`. 
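As an informal sketch, the character and length rules above can be approximated in Python with the standard `unicodedata` module. This isn't an official validator: the `aws:`/`rds:` prefix check, the case handling, and the decision to accept empty values are assumptions based on the text above.

```python
import unicodedata

ALLOWED_PUNCT = set("_.:/=+-@")

def _chars_ok(s):
    # Unicode letters (L*), separators (Z*), and numbers (N*), plus the
    # listed punctuation, approximating the documented Java regex.
    return all(
        unicodedata.category(ch)[0] in ("L", "Z", "N") or ch in ALLOWED_PUNCT
        for ch in s
    )

def valid_tag_key(key):
    return (
        1 <= len(key) <= 128
        and not key.lower().startswith(("aws:", "rds:"))  # assumed case handling
        and _chars_ok(key)
    )

def valid_tag_value(value):
    # Values are optional, so an empty string is accepted in this sketch.
    return len(value) <= 256 and _chars_ok(value)

print(valid_tag_key("project"), valid_tag_key("aws:reserved"))  # True False
```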

### Amazon RDS resources eligible for tagging


You can tag the following Amazon RDS resources:
+ DB instances
+ DB clusters
+ DB cluster endpoints
+ Read replicas
+ DB snapshots
+ DB cluster snapshots
+ Reserved DB instances
+ Event subscriptions
+ DB option groups
+ DB parameter groups
+ DB cluster parameter groups
+ DB subnet groups
+ RDS Proxies
+ RDS Proxy endpoints
**Note**  
Currently, you can't tag RDS Proxies and RDS Proxy endpoints by using the AWS Management Console.
+ Blue/green deployments
+ Zero-ETL integrations (preview)
+ Automated backups
+ Cluster automated backups

### How AWS billing works with tags in Amazon RDS


Use tags to organize your AWS bill to reflect your own cost structure. To do this, sign up to get your AWS account bill with tag key values included. Then, to see the cost of combined resources, organize your billing information according to resources with the same tag key values. For example, you can tag several resources with a specific application name, and then organize your billing information to see the total cost of that application across several services. For more information, see [Using Cost Allocation Tags](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html) in the *AWS Billing User Guide*.

#### How cost allocation tags work with DB cluster snapshots


You can add a tag to a DB cluster snapshot. However, your bill won't reflect this grouping. For cost allocation tags to apply to DB cluster snapshots, the following conditions must be met:
+ The tags must be attached to the parent DB instance.
+ The parent DB instance must exist in the same AWS account as the DB cluster snapshot.
+ The parent DB instance must exist in the same AWS Region as the DB cluster snapshot.

DB cluster snapshots are considered orphaned if they don't exist in the same Region as the parent DB instance, or if the parent DB instance is deleted. Orphaned DB snapshots don't support cost allocation tags. Costs for orphaned snapshots are aggregated in a single untagged line item. Cross-account DB cluster snapshots aren't considered orphaned when the following conditions are met:
+ They exist in the same Region as the parent DB instance.
+ The parent DB instance is owned by the source account.
**Note**  
If the parent DB instance is owned by a different account, cost allocation tags don't apply to cross-account snapshots in the destination account.

## Best practices for tagging Amazon RDS resources
Best practices

When you use tags, we recommend that you adhere to the following best practices:
+ Document conventions for tag use that are followed by all teams in your organization. In particular, ensure the names are both descriptive and consistent. For example, standardize on the format `environment:prod` rather than tagging some resources with `env:production`.
**Important**  
Do not store personally identifiable information (PII) or other confidential or sensitive information in tags.
+ Automate tagging to ensure consistency. For example, you can use the following techniques:
  + Include tags in a CloudFormation template. When you create resources with the template, the resources are tagged automatically.
  + Define and apply tags using AWS Lambda functions.
  + Create an SSM document that includes steps to add tags to your RDS resources.
+ Use tags only when necessary. You can add up to 50 tags for a single RDS resource, but a best practice is to avoid unnecessary tag proliferation and complexity.
+ Review tags periodically for relevance and accuracy. Remove or modify outdated tags as needed.
+ Consider creating tags with the AWS Tag Editor in the AWS Management Console. You can use the Tag Editor to add tags to multiple supported AWS resources, including RDS resources, at the same time. For more information, see [Tag Editor](https://docs.aws.amazon.com/ARG/latest/userguide/tag-editor.html) in the *AWS Resource Groups User Guide*.

## Copying tags to DB cluster snapshots


When you create or restore a DB cluster, you can specify that the tags from the cluster are copied to snapshots of the DB cluster. Copying tags ensures that the metadata for the DB snapshots matches that of the source DB cluster. It also ensures that any access policies for the DB snapshot match those of the source DB cluster. Tags aren't copied by default. 

You can specify that tags are copied to DB snapshots for the following actions: 
+ Creating a DB cluster
+ Restoring a DB cluster
+ Creating a read replica
+ Copying a DB cluster snapshot

**Note**  
In some cases, you might include a value for the `--tags` parameter of the [create-db-snapshot](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-snapshot.html) AWS CLI command. Or you might supply at least one tag to the [CreateDBSnapshot](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBSnapshot.html) API operation. In these cases, RDS doesn't copy tags from the source DB instance to the new DB snapshot. This functionality applies even if the source DB instance has the `--copy-tags-to-snapshot` (`CopyTagsToSnapshot`) option turned on.   
With this approach, you can create a copy of a DB instance from a DB snapshot without adding tags that don't apply to the new DB instance. You create the DB snapshot by using the AWS CLI `create-db-snapshot` command (or the `CreateDBSnapshot` RDS API operation). After you create the DB snapshot, you can add tags as described later in this topic.

## Tagging automated backup resources
Tagging automated backups

Automated backup resources are created when you change the backup retention period from 0 to a value greater than 0. You can tag instance or cluster automated backup resources when they're created by using the `--tag-specifications` parameter.

### The `--tag-specifications` parameter


CLI commands and API operations that support the `--tag-specifications` request parameter, such as [create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html), [restore-db-instance-from-db-snapshot](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-from-db-snapshot.html), and [create-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-cluster.html), can tag automated backups (resource type `auto-backup` or `cluster-auto-backup`) during creation.

#### Tagging cluster automated backups


Use `--tag-specifications` with `ResourceType=cluster-auto-backup` when creating DB clusters that have automated backups enabled.

**Note**  
Automated backup tags are independent of source DB instance tags, DB cluster tags, or DB snapshot tags.

## Adding and deleting tags in Amazon RDS


You can do the following:
+ Create tags when you create a resource, for example, when you run the AWS CLI command `create-db-instance`.
+ Add tags to an existing resource using the command `add-tags-to-resource`.
+ List tags associated with a specific resource using the command `list-tags-for-resource`.
+ Update tags using the command `add-tags-to-resource`.
+ Remove tags from a resource using the command `remove-tags-from-resource`.

The following procedures show how you can perform typical tagging operations on resources related to DB instances and Aurora DB clusters. Note that tags are cached for authorization purposes. For this reason, when you add or update tags on Amazon RDS resources, several minutes can pass before the modifications are available. 

### Console


The process to tag an Amazon RDS resource is similar for all resources. The following procedure shows how to tag an Amazon RDS DB instance. 

**To add a tag to a DB instance**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.
**Note**  
To filter the list of DB instances in the **Databases** pane, enter a text string for **Filter databases**. Only DB instances that contain the string appear.

1. Choose the name of the DB instance that you want to tag to show its details. 

1. In the details section, scroll down to the **Tags** section. 

1. Choose **Add**. The **Add tags** window appears.   
![\[Add tags window\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/RDSConsoleTagging5.png)

1. Enter a value for **Tag key** and **Value**.

1. To add another tag, you can choose **Add another Tag** and enter a value for its **Tag key** and **Value**. 

   Repeat this step as many times as necessary.

1. Choose **Add**. 

**To delete a tag from a DB instance**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.
**Note**  
To filter the list of DB instances in the **Databases** pane, enter a text string in the **Filter databases** box. Only DB instances that contain the string appear.

1. Choose the name of the DB instance to show its details. 

1. In the details section, scroll down to the **Tags** section. 

1. Choose the tag you want to delete.  
![\[Tags section\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/RDSConsoleTagging6.png)

1. Choose **Delete**, and then choose **Delete** in the **Delete tags** window. 

### AWS CLI


You can add, list, or remove tags for a DB instance using the AWS CLI.
+ To add one or more tags to an Amazon RDS resource, use the AWS CLI command [https://docs.aws.amazon.com/cli/latest/reference/rds/add-tags-to-resource.html](https://docs.aws.amazon.com/cli/latest/reference/rds/add-tags-to-resource.html).
+ To list the tags on an Amazon RDS resource, use the AWS CLI command [https://docs.aws.amazon.com/cli/latest/reference/rds/list-tags-for-resource.html](https://docs.aws.amazon.com/cli/latest/reference/rds/list-tags-for-resource.html).
+ To remove one or more tags from an Amazon RDS resource, use the AWS CLI command [https://docs.aws.amazon.com/cli/latest/reference/rds/remove-tags-from-resource.html](https://docs.aws.amazon.com/cli/latest/reference/rds/remove-tags-from-resource.html).

To learn more about how to construct the required ARN, see [Constructing an ARN for Amazon RDS](USER_Tagging.ARN.Constructing.md).

### RDS API


You can add, list, or remove tags for a DB instance using the Amazon RDS API.
+ To add a tag to an Amazon RDS resource, use the [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_AddTagsToResource.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_AddTagsToResource.html) operation.
+ To list tags that are assigned to an Amazon RDS resource, use the [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ListTagsForResource.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ListTagsForResource.html) operation.
+ To remove tags from an Amazon RDS resource, use the [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RemoveTagsFromResource.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RemoveTagsFromResource.html) operation.

To learn more about how to construct the required ARN, see [Constructing an ARN for Amazon RDS](USER_Tagging.ARN.Constructing.md).

When working with XML using the Amazon RDS API, tags use the following schema:

```
<Tagging>
    <TagSet>
        <Tag>
            <Key>Project</Key>
            <Value>Trinity</Value>
        </Tag>
        <Tag>
            <Key>User</Key>
            <Value>Jones</Value>
        </Tag>
    </TagSet>
</Tagging>
```

The following table provides a list of the allowed XML tags and their characteristics. Values for `Key` and `Value` are case-sensitive. For example, `project=Trinity` and `PROJECT=Trinity` are distinct tags. 
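Because tag keys are case-sensitive, any code that processes tag lists should compare keys exactly. The following Python sketch illustrates the point with hypothetical tag data; `find_tag` is an illustrative helper, not an AWS API:

```python
# Tag keys are case-sensitive: "project" and "PROJECT" are distinct keys.
tags = [
    {"Key": "project", "Value": "Trinity"},
    {"Key": "PROJECT", "Value": "Trinity"},
]

def find_tag(tag_list, key):
    """Return the value for an exact (case-sensitive) key match, or None."""
    for tag in tag_list:
        if tag["Key"] == key:
            return tag["Value"]
    return None

print(find_tag(tags, "project"))  # Trinity
print(find_tag(tags, "Project"))  # None -- no exact match
```

A lookup with the wrong capitalization finds nothing, which is why tagging conventions usually standardize on one case for keys.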


****  
<a name="user-tag-reference"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_Tagging.html)

# Tutorial: Use tags to specify which Aurora DB clusters to stop


 Suppose that you're creating a number of Aurora DB clusters in a development or test environment. You need to keep all of these clusters for several days. Some of the clusters run tests overnight. Other clusters can be stopped overnight and started again the next day. The following example shows how to assign a tag to those clusters that are suitable to stop overnight. Then the example shows how a script can detect which clusters have that tag and then stop those clusters. In this example, the value portion of the key-value pair doesn't matter. The presence of the `stoppable` tag signifies that the cluster has this user-defined property. 

**To specify which Aurora DB clusters to stop**

1. Determine the ARN of a cluster that you want to designate as stoppable.

   The commands and APIs for tagging work with ARNs. That way, they can work seamlessly across AWS Regions, AWS accounts, and different types of resources that might have identical short names. You can specify the ARN instead of the cluster ID in CLI commands that operate on clusters. Substitute the name of your own cluster for *dev-test-cluster*. In subsequent commands that use ARN parameters, substitute the ARN of your own cluster. The ARN includes your own AWS account ID and the name of the AWS Region where your cluster is located. 

   ```
   $ aws rds describe-db-clusters --db-cluster-identifier dev-test-cluster \
     --query "*[].{DBClusterArn:DBClusterArn}" --output text
   arn:aws:rds:us-east-1:123456789:cluster:dev-test-cluster
   ```

1. Add the tag `stoppable` to this cluster.

   You choose the name for this tag. Using a tag means that you can avoid devising a naming convention that encodes all relevant information in the DB instance name or the names of other resources. Because this example treats the tag as an attribute that is either present or absent, it omits the `Value=` part of the `--tags` parameter. 

   ```
   $ aws rds add-tags-to-resource \
     --resource-name arn:aws:rds:us-east-1:123456789:cluster:dev-test-cluster \
     --tags Key=stoppable
   ```

1. Confirm that the tag is present in the cluster.

   These commands retrieve the tag information for the cluster in JSON format and in plain tab-separated text. 

   ```
   $ aws rds list-tags-for-resource \
     --resource-name arn:aws:rds:us-east-1:123456789:cluster:dev-test-cluster 
   {
       "TagList": [
           {
               "Key": "stoppable",
               "Value": ""
           }
       ]
   }
   $ aws rds list-tags-for-resource \
     --resource-name arn:aws:rds:us-east-1:123456789:cluster:dev-test-cluster --output text
   TAGLIST stoppable
   ```

1. To stop all the clusters that are designated as `stoppable`, prepare a list of all your clusters. Loop through the list and check if each cluster is tagged with the relevant attribute.

   This Linux example uses shell scripting to save the list of cluster ARNs to a temporary file and then perform CLI commands for each cluster.

   ```
   $ aws rds describe-db-clusters --query "*[].[DBClusterArn]" --output text >/tmp/cluster_arns.lst
   $ for arn in $(cat /tmp/cluster_arns.lst)
   do
     match="$(aws rds list-tags-for-resource --resource-name $arn --output text | grep 'TAGLIST\tstoppable')"
     if [[ ! -z "$match" ]]
     then
         echo "Cluster $arn is tagged as stoppable. Stopping it now."
   # Note that you can specify the full ARN value as the parameter instead of the short ID 'dev-test-cluster'.
         aws rds stop-db-cluster --db-cluster-identifier $arn
     fi
   done
   
   Cluster arn:aws:rds:us-east-1:123456789:cluster:dev-test-cluster is tagged as stoppable. Stopping it now.
   {
       "DBCluster": {
           "AllocatedStorage": 1,
           "AvailabilityZones": [
               "us-east-1e",
               "us-east-1c",
               "us-east-1d"
           ],
           "BackupRetentionPeriod": 1,
           "DBClusterIdentifier": "dev-test-cluster",
           ...
   ```

 You can run a script like this at the end of each day to make sure that nonessential clusters are stopped. You might also schedule a job using a utility such as `cron` to perform such a check each night, in case some clusters were left running by mistake. In that case, you might fine-tune the command that prepares the list of clusters to check. 
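The shell loop above reduces to a simple selection: list clusters, fetch each cluster's tags, and stop those carrying the `stoppable` key. That selection logic can be sketched in Python against locally held data. The cluster ARNs and tag lists here are hypothetical stand-ins for the CLI output; a real script would retrieve them with the AWS CLI or an SDK:

```python
# Hypothetical stand-ins for `describe-db-clusters` and
# `list-tags-for-resource` output; a real script would call the AWS API.
clusters = {
    "arn:aws:rds:us-east-1:123456789:cluster:dev-test-cluster": [
        {"Key": "stoppable", "Value": ""}
    ],
    "arn:aws:rds:us-east-1:123456789:cluster:prod-cluster": [
        {"Key": "environment", "Value": "production"}
    ],
}

def stoppable_clusters(cluster_tags):
    """Return the ARNs whose tag list contains the 'stoppable' key.

    Only the presence of the key matters; the value is ignored,
    matching the tutorial's present-or-absent convention."""
    return [
        arn
        for arn, tags in cluster_tags.items()
        if any(tag["Key"] == "stoppable" for tag in tags)
    ]

for arn in stoppable_clusters(clusters):
    print(f"Cluster {arn} is tagged as stoppable. Stopping it now.")
    # A real script would now run:
    #   aws rds stop-db-cluster --db-cluster-identifier <arn>
```

Keeping the filter in one function makes it easy to extend, for example to also require a particular `Status` value, as the `available`-state query below does.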

The following command produces a list of your clusters, but only the ones in `available` state. The script can ignore clusters that are already stopped, because they will have different status values such as `stopped` or `stopping`. 

```
$ aws rds describe-db-clusters \
  --query '*[].{DBClusterArn:DBClusterArn,Status:Status}|[?Status == `available`]|[].{DBClusterArn:DBClusterArn}' \
  --output text
arn:aws:rds:us-east-1:123456789:cluster:cluster-2447
arn:aws:rds:us-east-1:123456789:cluster:cluster-3395
arn:aws:rds:us-east-1:123456789:cluster:dev-test-cluster
arn:aws:rds:us-east-1:123456789:cluster:pg2-cluster
```

**Tip**  
You can combine assigning tags and finding clusters that have those tags to reduce costs in other ways. For example, take the scenario with Aurora DB clusters used for development and testing. Here, you might designate some clusters to be deleted at the end of each day, or to have only their reader DB instances deleted. Or you might designate some to have their DB instances changed to small DB instance classes during times of expected low usage. 

# Amazon Resource Names (ARNs) in Amazon RDS
ARNs in Amazon RDS

Resources created in Amazon Web Services are each uniquely identified with an Amazon Resource Name (ARN). For certain Amazon RDS operations, you must uniquely identify an Amazon RDS resource by specifying its ARN. For example, when you create an RDS DB instance read replica, you must supply the ARN for the source DB instance. 

For information about constructing an ARN and getting an existing ARN, see the following topics.

**Topics**
+ [

# Constructing an ARN for Amazon RDS
](USER_Tagging.ARN.Constructing.md)
+ [

# Getting an existing ARN for Amazon RDS
](USER_Tagging.ARN.Getting.md)

# Constructing an ARN for Amazon RDS
Constructing an ARN

Resources created in Amazon Web Services are each uniquely identified with an Amazon Resource Name (ARN). You can construct an ARN for an Amazon RDS resource using the following syntax. 

 `arn:aws:rds:<region>:<account number>:<resourcetype>:<name>` 

For global cluster resources, the ARN doesn't include an AWS Region: `arn:aws:rds::<account number>:global-cluster:<name>`. ARNs for global clusters don't appear in the AWS Management Console.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_Tagging.ARN.Constructing.html)

The following table shows the format that you should use when constructing an ARN for a particular Amazon RDS resource type. 


****  

| Resource type | ARN format | 
| --- | --- | 
| DB instance  |  arn:aws:rds:*<region>*:*<account>*:db:*<name>* For example: <pre>arn:aws:rds:us-east-2:123456789012:db:my-mysql-instance-1</pre>  | 
| DB cluster |  arn:aws:rds:*<region>*:*<account>*:cluster:*<name>* For example: <pre>arn:aws:rds:us-east-2:123456789012:cluster:my-aurora-cluster-1</pre>  | 
| Event subscription  |  arn:aws:rds:*<region>*:*<account>*:es:*<name>* For example: <pre>arn:aws:rds:us-east-2:123456789012:es:my-subscription</pre>  | 
| DB parameter group  |  arn:aws:rds:*<region>*:*<account>*:pg:*<name>* For example: <pre>arn:aws:rds:us-east-2:123456789012:pg:my-param-enable-logs</pre>  | 
| DB cluster parameter group  |  arn:aws:rds:*<region>*:*<account>*:cluster-pg:*<name>* For example: <pre>arn:aws:rds:us-east-2:123456789012:cluster-pg:my-cluster-param-timezone</pre>  | 
| Reserved DB instance  |  arn:aws:rds:*<region>*:*<account>*:ri:*<name>* For example: <pre>arn:aws:rds:us-east-2:123456789012:ri:my-reserved-postgresql</pre>  | 
| DB security group  |  arn:aws:rds:*<region>*:*<account>*:secgrp:*<name>* For example: <pre>arn:aws:rds:us-east-2:123456789012:secgrp:my-public</pre>  | 
| Automated DB snapshot |  arn:aws:rds:*<region>*:*<account>*:snapshot:rds:*<name>* For example: <pre>arn:aws:rds:us-east-2:123456789012:snapshot:rds:my-mysql-db-2019-07-22-07-23</pre>  | 
| Automated DB cluster snapshot |  arn:aws:rds:*<region>*:*<account>*:cluster-snapshot:rds:*<name>* For example: <pre>arn:aws:rds:us-east-2:123456789012:cluster-snapshot:rds:my-aurora-cluster-2019-07-22-16-16</pre>  | 
| Manual DB snapshot |  arn:aws:rds:*<region>*:*<account>*:snapshot:*<name>* For example: <pre>arn:aws:rds:us-east-2:123456789012:snapshot:my-mysql-db-snap</pre>  | 
| Manual DB cluster snapshot |  arn:aws:rds:*<region>*:*<account>*:cluster-snapshot:*<name>* For example: <pre>arn:aws:rds:us-east-2:123456789012:cluster-snapshot:my-aurora-cluster-snap</pre>  | 
| DB subnet group |  arn:aws:rds:*<region>*:*<account>*:subgrp:*<name>* For example: <pre>arn:aws:rds:us-east-2:123456789012:subgrp:my-subnet-10</pre>  | 
| Global cluster |  arn:aws:rds::*<account>*:global-cluster:*<name>* For example: <pre>arn:aws:rds::123456789012:global-cluster:my-aurora-global-cluster-1</pre>  | 
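As a sketch of the syntax above, a small helper can assemble an ARN from its parts, leaving the Region field empty for global clusters. The function name and defaults are illustrative, not an AWS API:

```python
def build_rds_arn(resource_type, name, account, region=""):
    """Assemble an Amazon RDS ARN: arn:aws:rds:<region>:<account>:<type>:<name>.

    Global cluster ARNs omit the AWS Region, so region defaults to an
    empty field, which leaves two adjacent colons in the result."""
    return f"arn:aws:rds:{region}:{account}:{resource_type}:{name}"

print(build_rds_arn("db", "my-mysql-instance-1", "123456789012", "us-east-2"))
# arn:aws:rds:us-east-2:123456789012:db:my-mysql-instance-1

print(build_rds_arn("global-cluster", "my-aurora-global-cluster-1", "123456789012"))
# arn:aws:rds::123456789012:global-cluster:my-aurora-global-cluster-1
```

Note that for automated snapshots the `<name>` part itself begins with `rds:`, so the full ARN contains one more colon-separated segment than the manual-snapshot form.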

# Getting an existing ARN for Amazon RDS
Getting an existing ARN

You can get the ARN of an RDS resource by using the AWS Management Console, AWS Command Line Interface (AWS CLI), or RDS API. 

## Console


To get an ARN from the AWS Management Console, navigate to the resource you want an ARN for, and view the details for that resource.

For example, you can get the ARN for a DB cluster from the **Configuration** tab of the DB cluster details.

![\[DB cluster ARN.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/DB-cluster-arn.png)


## AWS CLI


To get an ARN from the AWS CLI for a particular RDS resource, you use the `describe` command for that resource. The following table shows each AWS CLI command, and the ARN property used with the command to get an ARN. 


****  
<a name="cli-command-arn-property"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_Tagging.ARN.Getting.html)

For example, the following AWS CLI command gets the ARN for a DB instance.

**Example**  
For Linux, macOS, or Unix:  

```
aws rds describe-db-instances \
--db-instance-identifier DBInstanceIdentifier \
--region us-west-2 \
--query "*[].{DBInstanceIdentifier:DBInstanceIdentifier,DBInstanceArn:DBInstanceArn}"
```
For Windows:  

```
aws rds describe-db-instances ^
--db-instance-identifier DBInstanceIdentifier ^
--region us-west-2 ^
--query "*[].{DBInstanceIdentifier:DBInstanceIdentifier,DBInstanceArn:DBInstanceArn}"
```
The output of that command is similar to the following:  

```
[
    {
        "DBInstanceArn": "arn:aws:rds:us-west-2:account_id:db:instance_id", 
        "DBInstanceIdentifier": "instance_id"
    }
]
```

## RDS API


To get an ARN for a particular RDS resource, you can call the following RDS API operations and use the ARN properties shown following.


****  
<a name="rds-operation-arn-property"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_Tagging.ARN.Getting.html)

# Amazon Aurora updates
Aurora updates

Amazon Aurora releases updates regularly. Updates are applied to Amazon Aurora DB clusters during system maintenance windows. The timing of an update depends on the AWS Region, the maintenance window setting for the DB cluster, and the type of update. Updates require a database restart, so you typically experience 20–30 seconds of downtime, after which you can resume using your DB cluster or clusters. You can view or change your maintenance window settings from the [AWS Management Console](https://console.aws.amazon.com/).

**Note**  
The time required to reboot your DB instance depends on the crash recovery process, database activity at the time of reboot, and the behavior of your specific DB engine. To improve the reboot time, we recommend that you reduce database activity as much as possible during the reboot process. Reducing database activity reduces rollback activity for in-transit transactions.

For information on operating system updates for Amazon Aurora, see [Operating system updates for Aurora DB clusters](USER_UpgradeDBInstance.Maintenance.md#Aurora_OS_updates).

Some updates are specific to a database engine supported by Aurora. For more information about database engine updates, see the following table.


| Database engine | Updates | 
| --- | --- | 
|  Amazon Aurora MySQL  |  See [Database engine updates for Amazon Aurora MySQL](AuroraMySQL.Updates.md)  | 
|  Amazon Aurora PostgreSQL  |  See [Database engine updates for Amazon Aurora PostgreSQL](AuroraPostgreSQL.Updates.md)  | 

## Identifying your Amazon Aurora version
Identifying your Amazon Aurora version

Amazon Aurora includes certain features that are general to Aurora and available to all Aurora DB clusters. Aurora includes other features that are specific to a particular database engine that Aurora supports. These features are available only to those Aurora DB clusters that use that database engine, such as Aurora PostgreSQL.

An Aurora DB instance provides two version numbers: the Aurora version number and the Aurora database engine version number. Aurora version numbers use the following format.

```
<major version>.<minor version>.<patch version>
```
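A version string in this format splits cleanly into its three numeric components, which is useful when a script needs to compare versions. A minimal sketch (the sample version number is illustrative, not a specific Aurora release):

```python
def parse_aurora_version(version):
    """Split a '<major>.<minor>.<patch>' string into three integers.

    Converting each component to int means comparisons work numerically,
    so a zero-padded minor version such as '04' compares as 4."""
    major, minor, patch = version.split(".")
    return int(major), int(minor), int(patch)

print(parse_aurora_version("3.04.1"))  # (3, 4, 1)
```

Comparing the resulting tuples with Python's ordinary tuple ordering then gives correct version ordering, which naive string comparison does not.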

To get the Aurora version number from an Aurora DB instance using a particular database engine, use one of the following queries.


| Database engine | Queries | 
| --- | --- | 
|  Amazon Aurora MySQL  |  <pre>SELECT AURORA_VERSION();</pre> <pre>SELECT @@aurora_version;</pre>  | 
|  Amazon Aurora PostgreSQL  |  <pre>SELECT AURORA_VERSION();</pre>  | 