

# Security in Amazon Aurora DSQL
<a name="security"></a>

Cloud security at AWS is the highest priority. As an AWS customer, you benefit from data centers and network architectures that are built to meet the requirements of the most security-sensitive organizations.

Security is a shared responsibility between AWS and you. The [shared responsibility model](https://aws.amazon.com/compliance/shared-responsibility-model/) describes this as security *of* the cloud and security *in* the cloud:
+ **Security of the cloud** – AWS is responsible for protecting the infrastructure that runs AWS services in the AWS Cloud. AWS also provides you with services that you can use securely. Third-party auditors regularly test and verify the effectiveness of our security as part of the [AWS Compliance Programs](https://aws.amazon.com/compliance/programs/). To learn about the compliance programs that apply to Amazon Aurora DSQL, see [AWS Services in Scope by Compliance Program](https://aws.amazon.com/compliance/services-in-scope/).
+ **Security in the cloud** – Your responsibility is determined by the AWS service that you use. You are also responsible for other factors including the sensitivity of your data, your company’s requirements, and applicable laws and regulations. 

This documentation helps you understand how to apply the shared responsibility model when using Aurora DSQL. The following topics show you how to configure Aurora DSQL to meet your security and compliance objectives. You also learn how to use other AWS services that help you to monitor and secure your Aurora DSQL resources. 

**Topics**
+ [AWS managed policies for Amazon Aurora DSQL](security-iam-awsmanpol.md)
+ [Data protection in Amazon Aurora DSQL](data-protection.md)
+ [Data encryption for Amazon Aurora DSQL](data-encryption.md)
+ [Identity and access management for Aurora DSQL](security-iam.md)
+ [Resource-based policies for Aurora DSQL](resource-based-policies.md)
+ [Using service-linked roles in Aurora DSQL](working-with-service-linked-roles.md)
+ [Using IAM condition keys with Amazon Aurora DSQL](using-iam-condition-keys.md)
+ [Incident response in Amazon Aurora DSQL](incident-response.md)
+ [Compliance validation for Amazon Aurora DSQL](compliance-validation.md)
+ [Resilience in Amazon Aurora DSQL](disaster-recovery-resiliency.md)
+ [Infrastructure Security in Amazon Aurora DSQL](infrastructure-security.md)
+ [Configuration and vulnerability analysis in Amazon Aurora DSQL](configuration-vulnerability.md)
+ [Cross-service confused deputy prevention](cross-service-confused-deputy-prevention.md)
+ [Security best practices for Aurora DSQL](best-practices-security.md)

# AWS managed policies for Amazon Aurora DSQL
<a name="security-iam-awsmanpol"></a>



An AWS managed policy is a standalone policy that is created and administered by AWS. AWS managed policies are designed to provide permissions for many common use cases so that you can start assigning permissions to users, groups, and roles.

Keep in mind that AWS managed policies might not grant least-privilege permissions for your specific use cases because they're available for all AWS customers to use. We recommend that you reduce permissions further by defining [customer managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#customer-managed-policies) that are specific to your use cases.

You cannot change the permissions defined in AWS managed policies. If AWS updates the permissions defined in an AWS managed policy, the update affects all principal identities (users, groups, and roles) that the policy is attached to. AWS is most likely to update an AWS managed policy when a new AWS service is launched or new API operations become available for existing services.

For more information, see [AWS managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#aws-managed-policies) in the *IAM User Guide*.

 





## AWS managed policy: AmazonAuroraDSQLFullAccess
<a name="security-iam-awsmanpol-AmazonAuroraDSQLFullAccess"></a>



You can attach `AmazonAuroraDSQLFullAccess` to your users, groups, and roles.

This policy grants permissions that allow full administrative access to Aurora DSQL. Principals with these permissions can:
+ Create, delete, and update Aurora DSQL clusters, including multi-Region clusters
+ Manage cluster inline policies (create, view, update, and delete policies)
+ Add and remove tags from clusters
+ List clusters and view information about individual clusters
+ See tags attached to Aurora DSQL clusters
+ Connect to the database as any user, including admin
+ Perform backup and restore operations for Aurora DSQL clusters, including starting, stopping, and monitoring backup and restore jobs
+ Use customer-managed AWS KMS keys for cluster encryption
+ View any metrics from CloudWatch for their account
+ Use AWS Fault Injection Service (AWS FIS) to inject failures into Aurora DSQL clusters for fault tolerance testing
+ Create service-linked roles for the `dsql.amazonaws.com` service, which is required for creating clusters
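As a sketch of how you might grant these permissions, the managed policy can be attached to an IAM role with the AWS CLI. The role name `DsqlAdminRole` below is a hypothetical placeholder; substitute an existing role in your account.

```shell
# Attach the AWS managed policy to an existing IAM role.
# "DsqlAdminRole" is a placeholder role name.
aws iam attach-role-policy \
  --role-name DsqlAdminRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonAuroraDSQLFullAccess

# Confirm that the policy is now attached.
aws iam list-attached-role-policies --role-name DsqlAdminRole
```

The same `attach-user-policy` and `attach-group-policy` commands work for users and groups.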



**Permissions details**

This policy includes the following permissions.


+ `dsql`—grants principals full access to Aurora DSQL.
+ `cloudwatch`—grants permission to publish metric data points to Amazon CloudWatch.
+ `iam`—grants permission to create a service-linked role.
+ `backup and restore`—grants permissions to start, stop, and monitor backup and restore jobs for Aurora DSQL clusters. 
+ `kms`—grants permissions required to validate access to customer-managed keys used for Aurora DSQL cluster encryption when creating, updating, or connecting to clusters. 
+ `fis`—grants permissions to use AWS Fault Injection Service (AWS FIS) to inject failures into Aurora DSQL clusters for fault tolerance testing.

You can find the `AmazonAuroraDSQLFullAccess` policy in the IAM console and in the [AWS Managed Policy Reference Guide](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonAuroraDSQLFullAccess.html).

## AWS managed policy: AmazonAuroraDSQLReadOnlyAccess
<a name="security-iam-awsmanpol-AmazonAuroraDSQLReadOnlyAccess"></a>



You can attach `AmazonAuroraDSQLReadOnlyAccess` to your users, groups, and roles.

Allows read-only access to Aurora DSQL. Principals with these permissions can list clusters and view information about individual clusters. They can see the tags attached to Aurora DSQL clusters and view cluster inline policies. They can also retrieve any CloudWatch metrics for your account. 



**Permissions details**

This policy includes the following permissions.




+ `dsql` – grants read-only permissions to all resources in Aurora DSQL.
+ `cloudwatch` – grants permission to retrieve batch amounts of CloudWatch metric data and perform metric math on retrieved data.

You can find the `AmazonAuroraDSQLReadOnlyAccess` policy in the IAM console and the [AWS Managed Policy Reference Guide](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonAuroraDSQLReadOnlyAccess.html).

## AWS managed policy: AmazonAuroraDSQLConsoleFullAccess
<a name="security-iam-awsmanpol-AmazonAuroraDSQLConsoleFullAccess"></a>



You can attach `AmazonAuroraDSQLConsoleFullAccess` to your users, groups, and roles.

Allows full administrative access to Amazon Aurora DSQL via the AWS Management Console. Principals with these permissions can:
+ Create, delete, and update Aurora DSQL clusters, including multi-Region clusters, with the console
+ Manage cluster inline policies through the console (create, view, update, and delete policies)
+ List clusters and view information about individual clusters
+ See tags on any resource on your account
+ Connect to the database as any user, including the admin
+ Perform backup and restore operations for Aurora DSQL clusters, including starting, stopping, and monitoring backup and restore jobs
+ Use customer-managed AWS KMS keys for cluster encryption
+ Launch AWS CloudShell from the AWS Management Console
+ View any metrics from CloudWatch on your account
+ Use AWS Fault Injection Service (AWS FIS) to inject failures into Aurora DSQL clusters for fault tolerance testing
+ Create service-linked roles for the `dsql.amazonaws.com` service, which is required for creating clusters




**Permissions details**

This policy includes the following permissions.




+ `dsql`—grants full administrative permissions to all resources in Aurora DSQL via the AWS Management Console.
+ `cloudwatch`—grants permission to retrieve batch amounts of CloudWatch metric data and perform metric math on retrieved data.
+ `tag`—grants permission to return tag keys and values currently in use in the specified AWS Region for the calling account.
+ `backup and restore`—grants permissions to start, stop, and monitor backup and restore jobs for Aurora DSQL clusters.
+ `kms`—grants permissions required to validate access to customer-managed keys used for Aurora DSQL cluster encryption when creating, updating, or connecting to clusters.
+ `cloudshell`—grants permissions to launch AWS CloudShell to interact with Aurora DSQL.
+ `ec2`—grants permission to view Amazon VPC endpoint information needed for Aurora DSQL connections.
+ `fis`—grants permissions to use AWS Fault Injection Service (AWS FIS) to inject failures into Aurora DSQL clusters for fault tolerance testing.
+ `access-analyzer`—grants permission for the linter in the policy editor, which provides real-time feedback about errors, warnings, and security issues in the current policy.

You can find the `AmazonAuroraDSQLConsoleFullAccess` policy in the IAM console and the [AWS Managed Policy Reference Guide](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonAuroraDSQLConsoleFullAccess.html).

## AWS managed policy: AuroraDSQLServiceRolePolicy
<a name="security-iam-awsmanpol-AuroraDSQLServiceRolePolicy"></a>



You can't attach `AuroraDSQLServiceRolePolicy` to your IAM entities. This policy is attached to a service-linked role that allows Aurora DSQL to access account resources.

You can find the `AuroraDSQLServiceRolePolicy` policy on the IAM console and [AuroraDSQLServiceRolePolicy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AuroraDSQLServiceRolePolicy.html) in the AWS Managed Policy Reference Guide.





## Aurora DSQL updates to AWS managed policies
<a name="security-iam-awsmanpol-updates"></a>



View details about updates to AWS managed policies for Aurora DSQL since this service began tracking these changes. For automatic alerts about changes to this page, subscribe to the RSS feed on the Aurora DSQL Document history page.




| Change | Description | Date | 
| --- | --- | --- | 
|  AmazonAuroraDSQLFullAccess and AmazonAuroraDSQLConsoleFullAccess update  |  Added support for AWS Fault Injection Service (AWS FIS) integration with Aurora DSQL. This allows you to inject failures into single-Region and multi-Region Aurora DSQL clusters to test fault tolerance of your applications. You can create experiment templates in the AWS FIS console to define failure scenarios and target specific Aurora DSQL clusters for testing. For more on these policies, see [AmazonAuroraDSQLFullAccess](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AmazonAuroraDSQLFullAccess) and [AmazonAuroraDSQLConsoleFullAccess](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AmazonAuroraDSQLConsoleFullAccess).  | August 19, 2025 | 
|  AmazonAuroraDSQLFullAccess, AmazonAuroraDSQLReadOnlyAccess, and AmazonAuroraDSQLConsoleFullAccess update  |  Added resource-based policy (RBP) support with new permissions: `PutClusterPolicy`, `GetClusterPolicy`, and `DeleteClusterPolicy`. These permissions allow managing inline policies attached to Aurora DSQL clusters for fine-grained access control. For more information, see [AmazonAuroraDSQLFullAccess](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AmazonAuroraDSQLFullAccess), [AmazonAuroraDSQLReadOnlyAccess](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AmazonAuroraDSQLReadOnlyAccess), and [AmazonAuroraDSQLConsoleFullAccess](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AmazonAuroraDSQLConsoleFullAccess).  | October 15, 2025 | 
|  AmazonAuroraDSQLFullAccess update  |  Adds the capability to perform backup and restore operations for Aurora DSQL clusters, including starting, stopping, and monitoring jobs. It also adds the capability to use customer-managed KMS keys for cluster encryption. For more information, see [AmazonAuroraDSQLFullAccess](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AmazonAuroraDSQLFullAccess) and [Using service-linked roles in Aurora DSQL ](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/working-with-service-linked-roles.html).  | May 21, 2025 | 
|  AmazonAuroraDSQLConsoleFullAccess update  |  Adds the capability to perform backup and restore operations for Aurora DSQL clusters through the AWS Console Home. This includes starting, stopping, and monitoring jobs. It also supports using customer-managed KMS keys for cluster encryption and launching AWS CloudShell. For more information, see [AmazonAuroraDSQLConsoleFullAccess](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AmazonAuroraDSQLConsoleFullAccess) and [Using service-linked roles in Aurora DSQL ](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/working-with-service-linked-roles.html).  | May 21, 2025 | 
| AmazonAuroraDSQLFullAccess update |  The policy adds four new permissions to create and manage database clusters across multiple AWS Regions: `PutMultiRegionProperties`, `PutWitnessRegion`, `AddPeerCluster`, and `RemovePeerCluster`. These permissions include resource-level controls and condition keys so you can control which clusters users can modify. The policy also adds the `GetVpcEndpointServiceName` permission to help you connect to your Aurora DSQL clusters through AWS PrivateLink. For more information, see [AmazonAuroraDSQLFullAccess](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AmazonAuroraDSQLFullAccess) and [Using service-linked roles in Aurora DSQL](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/working-with-service-linked-roles.html).  | May 13, 2025 | 
| AmazonAuroraDSQLReadOnlyAccess update | Includes the ability to determine the correct VPC endpoint service name when connecting to your Aurora DSQL clusters through AWS PrivateLink. Aurora DSQL creates unique endpoints per cell, so this API helps ensure you can identify the correct endpoint for your cluster and avoid connection errors. For more information, see [AmazonAuroraDSQLReadOnlyAccess](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AmazonAuroraDSQLReadOnlyAccess) and [Using service-linked roles in Aurora DSQL](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/working-with-service-linked-roles.html). | May 13, 2025 | 
| AmazonAuroraDSQLConsoleFullAccess update | Adds new permissions to Aurora DSQL to support multi-Region cluster management and VPC endpoint connection. The new permissions include `PutMultiRegionProperties`, `PutWitnessRegion`, `AddPeerCluster`, `RemovePeerCluster`, and `GetVpcEndpointServiceName`. For more information, see [AmazonAuroraDSQLConsoleFullAccess](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AmazonAuroraDSQLConsoleFullAccess) and [Using service-linked roles in Aurora DSQL](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/working-with-service-linked-roles.html). | May 13, 2025 | 
| AuroraDsqlServiceLinkedRolePolicy update | Adds the ability to publish metrics to the AWS/AuroraDSQL and AWS/Usage CloudWatch namespaces to the policy. This allows the associated service or role to emit more comprehensive usage and performance data to your CloudWatch environment. For more information, see [AuroraDsqlServiceLinkedRolePolicy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AuroraDsqlServiceLinkedRolePolicy.html) and [Using service-linked roles in Aurora DSQL](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/working-with-service-linked-roles.html). | May 8, 2025 | 
| Page created | Started tracking AWS managed policies related to Amazon Aurora DSQL | December 3, 2024 | 

# Data protection in Amazon Aurora DSQL
<a name="data-protection"></a>

The [shared responsibility model](https://aws.amazon.com/compliance/shared-responsibility-model/) applies to data protection in Amazon Aurora DSQL. As described in this model, AWS is responsible for protecting the global infrastructure that runs all of the AWS Cloud. You are responsible for maintaining control over your content that is hosted on this infrastructure. You are also responsible for the security configuration and management tasks for the AWS services that you use. For more information about data privacy, see the [Data Privacy FAQ](https://aws.amazon.com/compliance/data-privacy-faq/). For information about data protection in Europe, see the [AWS Shared Responsibility Model and GDPR](https://aws.amazon.com/blogs/security/the-aws-shared-responsibility-model-and-gdpr/) blog post on the *AWS Security Blog*.

For data protection purposes, we recommend that you protect credentials and set up individual users with AWS IAM Identity Center or AWS Identity and Access Management. That way, each user is given only the permissions necessary to fulfill their job duties. We also recommend that you secure your data in the following ways:
+ Use multi-factor authentication (MFA) with each account.
+ Use SSL/TLS to communicate with resources. We require TLS 1.2 and recommend TLS 1.3.
+ Set up API and user activity logging with AWS CloudTrail. For information about using trails to capture activities, see [Working with trails](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-trails.html) in the *AWS CloudTrail User Guide*.
+ Use encryption solutions, along with all default security controls within AWS services.
+ Use advanced managed security services such as Amazon Macie, which assists in discovering and securing sensitive data that is stored in Amazon S3.
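As a concrete sketch of the CloudTrail recommendation above, you could create a multi-Region trail with the AWS CLI. The trail and bucket names below are hypothetical placeholders, and the S3 bucket must already exist with a bucket policy that allows CloudTrail to write to it.

```shell
# Create a multi-Region trail that records management events,
# including Aurora DSQL control plane API calls.
# "my-dsql-audit-trail" and "my-cloudtrail-logs-bucket" are placeholders.
aws cloudtrail create-trail \
  --name my-dsql-audit-trail \
  --s3-bucket-name my-cloudtrail-logs-bucket \
  --is-multi-region-trail

# Start logging on the new trail.
aws cloudtrail start-logging --name my-dsql-audit-trail
```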

We strongly recommend that you never put confidential or sensitive information, such as your customers' email addresses, into tags or free-form text fields such as a **Name** field. This includes when you work with Aurora DSQL or other AWS services using the console, API, AWS CLI, or AWS SDKs. Any data that you enter into tags or free-form text fields used for names may be used for billing or diagnostic logs. If you provide a URL to an external server, we strongly recommend that you do not include credentials in the URL to validate your request to that server.



## Data encryption
<a name="data-encryption"></a>

Amazon Aurora DSQL provides a highly durable storage infrastructure designed for mission-critical and primary data storage. Data is redundantly stored on multiple devices across multiple facilities in an Aurora DSQL Region.

### Encryption in transit
<a name="encryption-transit"></a>

By default, encryption in transit is configured for you. Aurora DSQL uses TLS to encrypt all traffic between your SQL client and Aurora DSQL.

Encryption and signing of data in transit between AWS CLI, SDK, or API clients and Aurora DSQL endpoints:
+ Aurora DSQL provides HTTPS endpoints for encrypting data in transit. 
+ To protect the integrity of API requests to Aurora DSQL, API calls must be signed by the caller. Calls are signed by an X.509 certificate or the customer's AWS secret access key according to the Signature Version 4 signing process (SigV4). For more information, see [Signature Version 4 Signing Process](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) in the *AWS General Reference*.
+  Use the AWS CLI or one of the AWS SDKs to make requests to AWS. These tools automatically sign the requests for you with the access key that you specify when you configure the tools. 
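For example, the following AWS CLI calls are SigV4-signed for you automatically; even the token used to authenticate database connections is produced through SigV4 signing. The Region and endpoint values are placeholders, and the exact `aws dsql` subcommand set should be confirmed against the current CLI reference.

```shell
# A control plane request; the CLI signs it with SigV4 automatically.
# "your-cluster-region" is a placeholder.
aws dsql list-clusters --region your-cluster-region

# Database authentication tokens are also generated via SigV4 signing.
# "your-cluster-endpoint" is a placeholder.
aws dsql generate-db-connect-admin-auth-token \
  --region your-cluster-region \
  --hostname your-cluster-endpoint
```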

#### FIPS compliance
<a name="fips-compliance"></a>

Aurora DSQL dataplane endpoints (cluster endpoints used for database connections) use FIPS 140-2 validated cryptographic modules by default. No separate FIPS endpoints are required for cluster connections.

For control plane operations, Aurora DSQL provides dedicated FIPS endpoints in supported regions. For more information about control plane FIPS endpoints, see [Aurora DSQL endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/dsql.html) in the *AWS General Reference*.

For encryption at rest, see [Encryption at rest in Aurora DSQL](data-encryption.md#encryption-at-rest).

### Inter-network traffic privacy
<a name="inter-network-traffic-privacy"></a>

Connections are protected both between Aurora DSQL and on-premises applications and between Aurora DSQL and other AWS resources within the same AWS Region.

You have two connectivity options between your private network and AWS: 
+ An AWS Site-to-Site VPN connection. For more information, see [What is AWS Site-to-Site VPN?](https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html)
+ An AWS Direct Connect connection. For more information, see [What is AWS Direct Connect?](https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html)

You get access to Aurora DSQL through the network by using AWS-published API operations. Clients must support the following:
+ Transport Layer Security (TLS). We require TLS 1.2 and recommend TLS 1.3.
+ Cipher suites with perfect forward secrecy (PFS) such as DHE (Ephemeral Diffie-Hellman) or ECDHE (Elliptic Curve Ephemeral Diffie-Hellman). Most modern systems such as Java 7 and later support these modes.

## Data Protection in witness Regions
<a name="witness-regions"></a>

When you create a multi-Region cluster, a witness Region helps enable automated failure recovery by participating in synchronous replication of encrypted transactions. If a peered cluster becomes unavailable, the witness Region remains available to validate and process database writes, ensuring no loss of availability. 

Witness Regions protect and secure your data through these design features:
+ The witness Region receives and stores only encrypted transaction logs. It never hosts, stores, or transmits your encryption keys.
+ The witness Region focuses solely on write transaction logging and quorum functions. It can't read your data by design.
+ The witness Region operates without cluster connection endpoints or query processors. This prevents user database access.

For more information on witness Regions, see [Configuring multi-Region clusters](configuring-multi-region-clusters.md).

# Configuring SSL/TLS certificates for Aurora DSQL connections
<a name="configure-root-certificates"></a><a name="ssl-certificate-overview"></a>

Aurora DSQL requires all connections to use Transport Layer Security (TLS) encryption. To establish secure connections, your client system must trust the Amazon Root Certificate Authority (Amazon Root CA 1). This certificate is pre-installed on many operating systems. This section provides instructions for verifying the pre-installed Amazon Root CA 1 certificate on various operating systems, and guides you through the process of manually installing the certificate if it is not already present. 

We recommend using PostgreSQL version 17.

**Important**  
For production environments, we recommend using `verify-full` SSL mode to ensure the highest level of connection security. This mode verifies that the server certificate is signed by a trusted certificate authority and that the server hostname matches the certificate.

## Verifying pre-installed certificates
<a name="verify-installed-certificates"></a>

In most operating systems, **Amazon Root CA 1** is already pre-installed. To validate this, you can follow the steps below.

### Linux (RedHat/CentOS/Fedora)
<a name="verify-linux"></a>

Run the following command in your terminal:

```
trust list | grep "Amazon Root CA 1"
```

If the certificate is installed, you see the following output:

```
label: Amazon Root CA 1
```

### macOS
<a name="verify-macos"></a>

1. Open Spotlight Search (**Command** + **Space**)

1. Search for **Keychain Access**

1. Select **System Roots** under **System Keychains**

1. Look for **Amazon Root CA 1** in the certificate list
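If you prefer the terminal, the same check can be sketched with the macOS `security` CLI; the keychain path below is the default location of the system root certificates.

```shell
# Search the macOS system roots keychain for the Amazon root certificate.
# Prints the certificate details if it is present.
security find-certificate -c "Amazon Root CA 1" \
  /System/Library/Keychains/SystemRootCertificates.keychain
```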

### Windows
<a name="verify-windows"></a>

**Note**  
Due to a known issue with the psql Windows client, using system root certificates (`sslrootcert=system`) may return the following error: `SSL error: unregistered scheme`. You can follow the steps in [Connecting from Windows](#connect-windows) as an alternative way to connect to your cluster using SSL. 

If **Amazon Root CA 1** is not installed in your operating system, follow the steps below. 

## Installing certificates
<a name="install-certificates"></a>

If the `Amazon Root CA 1` certificate is not pre-installed on your operating system, you must install it manually to establish secure connections to your Aurora DSQL cluster. 

### Linux certificate installation
<a name="install-linux"></a>

Follow these steps to install the Amazon Root CA certificate on Linux systems.

1. Download the Root Certificate:

   ```
   wget https://www.amazontrust.com/repository/AmazonRootCA1.pem
   ```

1. Copy the certificate to the trust store:

   ```
   sudo cp ./AmazonRootCA1.pem /etc/pki/ca-trust/source/anchors/
   ```

1. Update the CA trust store:

   ```
   sudo update-ca-trust
   ```

1. Verify the installation:

   ```
   trust list | grep "Amazon Root CA 1"
   ```

### macOS certificate installation
<a name="install-macos"></a>

These certificate installation steps are optional. The [Linux certificate installation](#install-linux) steps also work on macOS.

1. Download the Root Certificate:

   ```
   wget https://www.amazontrust.com/repository/AmazonRootCA1.pem
   ```

1. Add the certificate to the System keychain:

   ```
   sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain AmazonRootCA1.pem
   ```

1. Verify the installation:

   ```
   security find-certificate -a -c "Amazon Root CA 1" -p /Library/Keychains/System.keychain
   ```

## Connecting with SSL/TLS verification
<a name="connect-using-certificates"></a>

Before configuring SSL/TLS certificates for secure connections to your Aurora DSQL cluster, ensure that you have the following prerequisites:
+ PostgreSQL version 17 installed
+ AWS CLI configured with appropriate credentials
+ Aurora DSQL cluster endpoint information

### Connecting from Linux
<a name="connect-linux"></a>

1. Generate and set the authentication token:

   ```
   export PGPASSWORD=$(aws dsql generate-db-connect-admin-auth-token --region=your-cluster-region --hostname your-cluster-endpoint)
   ```

1. Connect using system certificates (if pre-installed):

   ```
   PGSSLROOTCERT=system \
   PGSSLMODE=verify-full \
   psql --dbname postgres \
   --username admin \
   --host your-cluster-endpoint
   ```

1. Or, connect using a downloaded certificate:

   ```
   PGSSLROOTCERT=/full/path/to/root.pem \
   PGSSLMODE=verify-full \
   psql --dbname postgres \
   --username admin \
   --host your-cluster-endpoint
   ```

**Note**  
 For more on `PGSSLMODE` settings, see [sslmode](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNECT-SSLMODE) in the PostgreSQL 17 [Database Connection Control Functions](https://www.postgresql.org/docs/current/libpq-connect.html) documentation. 
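The same settings can also be passed in a libpq connection string instead of environment variables; a minimal sketch, where `your-cluster-endpoint` is a placeholder and `PGPASSWORD` is assumed to hold a freshly generated token:

```shell
# Equivalent connection using libpq connection-string keywords
# instead of PGSSLROOTCERT/PGSSLMODE environment variables.
psql "host=your-cluster-endpoint dbname=postgres user=admin sslmode=verify-full sslrootcert=system"
```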

### Connecting from macOS
<a name="connect-macos"></a>

1. Generate and set the authentication token:

   ```
   export PGPASSWORD=$(aws dsql generate-db-connect-admin-auth-token --region=your-cluster-region --hostname your-cluster-endpoint)
   ```

1. Connect using system certificates (if pre-installed):

   ```
   PGSSLROOTCERT=system \
   PGSSLMODE=verify-full \
   psql --dbname postgres \
   --username admin \
   --host your-cluster-endpoint
   ```

1. Or, if the certificate is not pre-installed, download the root certificate and save it as `root.pem`:

   ```
   wget -O root.pem https://www.amazontrust.com/repository/AmazonRootCA1.pem
   ```

1. Connect using psql with the downloaded certificate:

   ```
   PGSSLROOTCERT=/full/path/to/root.pem \
   PGSSLMODE=verify-full \
   psql --dbname postgres \
   --username admin \
   --host your-cluster-endpoint
   ```

### Connecting from Windows
<a name="connect-windows"></a>

#### Using Command Prompt
<a name="windows-command-prompt"></a>

1. Generate the authentication token:

   ```
   aws dsql generate-db-connect-admin-auth-token ^
   --region=your-cluster-region ^
   --expires-in=3600 ^
   --hostname=your-cluster-endpoint
   ```

1. Set the password environment variable:

   ```
   set "PGPASSWORD=token-from-above"
   ```

1. Set SSL configuration:

   ```
   set PGSSLROOTCERT=C:\full\path\to\root.pem
   set PGSSLMODE=verify-full
   ```

1. Connect to the database:

   ```
   "C:\Program Files\PostgreSQL\17\bin\psql.exe" --dbname postgres ^
   --username admin ^
   --host your-cluster-endpoint
   ```

#### Using PowerShell
<a name="windows-powershell"></a>

1. Generate and set the authentication token:

   ```
   $env:PGPASSWORD = (aws dsql generate-db-connect-admin-auth-token --region=your-cluster-region --expires-in=3600 --hostname=your-cluster-endpoint)
   ```

1. Set SSL configuration:

   ```
   $env:PGSSLROOTCERT='C:\full\path\to\root.pem'
   $env:PGSSLMODE='verify-full'
   ```

1. Connect to the database:

   ```
    "C:\Program Files\PostgreSQL\17\bin\psql.exe" --dbname postgres `
   --username admin `
   --host your-cluster-endpoint
   ```

## Additional resources
<a name="additional-resources"></a>
+  [PostgreSQL SSL documentation](https://www.postgresql.org/docs/current/libpq-ssl.html) 
+  [Amazon Trust Services](https://www.amazontrust.com/repository/) 

# Data encryption for Amazon Aurora DSQL
<a name="data-encryption"></a>

Amazon Aurora DSQL encrypts all user data at rest. For enhanced security, this encryption uses AWS Key Management Service (AWS KMS). This functionality helps reduce the operational burden and complexity involved in protecting sensitive data. Encryption at rest helps you:
+ Reduce the operational burden of protecting sensitive data
+ Build security-sensitive applications that meet strict encryption compliance and regulatory requirements
+ Add an extra layer of data protection by always securing your data in an encrypted cluster
+ Comply with organizational policies, industry or government regulations, and compliance requirements

The following sections explain how to configure encryption for new and existing Aurora DSQL clusters and how to manage your encryption keys.

**Topics**
+ [

## KMS key types for Aurora DSQL
](#kms-key-types)
+ [

## Encryption at rest in Aurora DSQL
](#encryption-at-rest)
+ [

## Using AWS KMS and data keys with Aurora DSQL
](#using-kms-and-data-keys)
+ [

## Authorizing use of your AWS KMS key for Aurora DSQL
](#authorizing-kms-key-use)
+ [

## Aurora DSQL encryption context
](#dsql-encryption-context)
+ [

## Monitoring Aurora DSQL interaction with AWS KMS
](#monitoring-dsql-kms-interaction)
+ [

## Creating an encrypted Aurora DSQL cluster
](#creating-encrypted-cluster)
+ [

## Removing or updating a key for your Aurora DSQL cluster
](#updating-encryption-key)
+ [

## Considerations for encryption with Aurora DSQL
](#considerations-with-encryption)

## KMS key types for Aurora DSQL
<a name="kms-key-types"></a>

Aurora DSQL integrates with AWS KMS to manage the encryption keys for your clusters. To learn more about key types and states, see [AWS Key Management Service concepts](https://docs.aws.amazon.com/kms/latest/developerguide/concepts-intro.html) in the *AWS Key Management Service Developer Guide*. When you create a new cluster, you can choose from the following KMS key types to encrypt your cluster:

**AWS owned key**  
Default encryption type. Aurora DSQL owns the key at no additional charge to you. Amazon Aurora DSQL transparently decrypts cluster data when you access an encrypted cluster. You don't need to change your code or applications to use or manage encrypted clusters, and all Aurora DSQL queries work with your encrypted data.

**Customer managed key**  
You create, own, and manage the key in your AWS account. You have full control over the KMS key. AWS KMS charges apply.

Encryption at rest using the AWS owned key is available at no additional charge. However, AWS KMS charges apply for customer managed keys. For more information, see the [AWS KMS Pricing](https://aws.amazon.com/kms/pricing/) page.

You can switch between these key types at any time. For more information about key types, see [Customer managed keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk) and [AWS owned keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-owned-cmk) in the *AWS Key Management Service Developer Guide*.

**Note**  
Aurora DSQL encryption at rest is available in all AWS Regions where Aurora DSQL is available.

## Encryption at rest in Aurora DSQL
<a name="encryption-at-rest"></a>

Amazon Aurora DSQL uses 256-bit Advanced Encryption Standard (AES-256) to encrypt your data at rest. This encryption helps protect your data from unauthorized access to the underlying storage. AWS KMS manages the encryption keys for your clusters. You can use the default [AWS owned keys](#aws-owned-keys), or choose to use your own AWS KMS [Customer managed keys](#customer-managed-keys). To learn more about specifying and managing keys for your Aurora DSQL clusters, see [Creating an encrypted Aurora DSQL cluster](#creating-encrypted-cluster) and [Removing or updating a key for your Aurora DSQL cluster](#updating-encryption-key).

**Topics**
+ [

### AWS owned keys
](#aws-owned-keys)
+ [

### Customer managed keys
](#customer-managed-keys)

### AWS owned keys
<a name="aws-owned-keys"></a>

Aurora DSQL encrypts all clusters by default with AWS owned keys. These keys are free to use and rotate annually to protect your account resources. You don't need to view, manage, use, or audit these keys, so there's no action required for data protection. For more information about AWS owned keys, see [AWS owned keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-owned-cmk) in the *AWS Key Management Service Developer Guide*.

### Customer managed keys
<a name="customer-managed-keys"></a>

You create, own, and manage customer managed keys in your AWS account. You have full control over these KMS keys, including their policies, encryption material, tags, and aliases. For more information about managing permissions, see [Customer managed keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk) in the *AWS Key Management Service Developer Guide*.

When you specify a customer managed key for cluster-level encryption, Aurora DSQL encrypts the cluster and all of its regional data with that key. To prevent data loss and maintain cluster access, Aurora DSQL needs continuous access to your encryption key. If you disable your customer managed key, schedule it for deletion, or restrict the service's access in a key policy, the encryption status of your cluster changes to `KMS_KEY_INACCESSIBLE`. While the key is inaccessible, users can't connect to the cluster and the service loses access to the cluster data.

For multi-Region clusters, you can configure each Region's AWS KMS encryption key separately, and each regional cluster uses its own cluster-level encryption key. If Aurora DSQL can't access the encryption key for a peer in a multi-Region cluster, the status for that peer becomes `KMS_KEY_INACCESSIBLE` and the peer becomes unavailable for read and write operations. Other peers continue normal operations.

**Note**  
If Aurora DSQL can't access your customer managed key, your cluster encryption status changes to `KMS_KEY_INACCESSIBLE`. After you restore key access, the service automatically detects the restoration within 15 minutes. For more information, see Cluster idling.  
For multi-Region clusters, if key access is lost for an extended time, the cluster restoration time depends on how much data was written while the key was inaccessible.

## Using AWS KMS and data keys with Aurora DSQL
<a name="using-kms-and-data-keys"></a>

The Aurora DSQL encryption at rest feature uses an AWS KMS key and a hierarchy of data keys to protect your cluster data.

We recommend that you plan your encryption strategy before implementing your cluster in Aurora DSQL. If you store sensitive or confidential data in Aurora DSQL, consider including client-side encryption in your plan. This way you can encrypt data as close as possible to its origin, and ensure its protection throughout its lifecycle.
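To make the client-side encryption recommendation concrete, the following sketch encrypts a value locally before it would ever appear in an `INSERT` statement. The passphrase and column value are illustrative placeholders only; a production design would derive the key from an AWS KMS data key or use a client-side encryption library rather than a hard-coded passphrase.

```
# Hypothetical client-side encryption sketch: the passphrase and value are
# placeholders, not part of the Aurora DSQL API.
secret="4111-1111-1111-1111"
passphrase="demo-only-passphrase"   # stand-in for a KMS-generated data key

# Encrypt locally; only the ciphertext would be sent to the cluster.
ciphertext=$(printf '%s' "$secret" \
  | openssl enc -aes-256-cbc -pbkdf2 -pass pass:"$passphrase" -base64 -A)

# Decrypt on read to recover the original value.
recovered=$(printf '%s' "$ciphertext" \
  | openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:"$passphrase" -base64 -A)

echo "$recovered"
```

Because the plaintext never leaves the client, the data stays protected even from anyone with access to the stored ciphertext.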

**Topics**
+ [

### Using AWS KMS keys with Aurora DSQL
](#aws-kms-key)
+ [

### Using cluster keys with Aurora DSQL
](#cluster-keys)
+ [

### Cluster key caching
](#cluster-key-caching)

### Using AWS KMS keys with Aurora DSQL
<a name="aws-kms-key"></a>

Encryption at rest protects your Aurora DSQL cluster under an AWS KMS key. By default, Aurora DSQL uses an AWS owned key, a multi-tenant encryption key that is created and managed in an Aurora DSQL service account. But you can encrypt your Aurora DSQL clusters under a customer managed key in your AWS account. You can select a different KMS key for each cluster, even if it participates in a multi-Region setup.

You select the KMS key for a cluster when you create or update the cluster. You can change the KMS key for a cluster at any time, either in the Aurora DSQL console or by using the `UpdateCluster` operation. The process of switching keys doesn't require downtime or degrade service.

**Important**  
Aurora DSQL supports only symmetric KMS keys. You can't use an asymmetric KMS key to encrypt your Aurora DSQL clusters.

A customer managed key provides the following benefits.
+ You create and manage the KMS key, including setting the key policies and IAM policies to control access to the KMS key. You can enable and disable the KMS key, enable and disable automatic key rotation, and delete the KMS key when it is no longer in use.
+ You can use a customer managed key with imported key material or a customer managed key in a custom key store that you own and manage.
+ You can audit the encryption and decryption of your Aurora DSQL cluster by examining the Aurora DSQL API calls to AWS KMS in AWS CloudTrail logs.

However, the AWS owned key is free of charge and its use doesn't count against AWS KMS resource or request quotas. Customer managed keys incur a charge for each API call and AWS KMS quotas apply to these keys.

### Using cluster keys with Aurora DSQL
<a name="cluster-keys"></a>

Aurora DSQL uses the AWS KMS key for the cluster to generate and encrypt a unique data key for the cluster, known as the **cluster key**.

The cluster key is used as a key encryption key. Aurora DSQL uses this cluster key to protect data encryption keys that are used to encrypt the cluster data. Aurora DSQL generates a unique data encryption key for each underlying structure in a cluster, but multiple cluster items might be protected by the same data encryption key.

To decrypt the cluster key, Aurora DSQL sends a request to AWS KMS when you first access an encrypted cluster. To keep the cluster available, Aurora DSQL periodically verifies decrypt access to the KMS key, even when you're not actively accessing the cluster.

Aurora DSQL stores and uses the cluster key and data encryption keys outside of AWS KMS. It protects all keys with Advanced Encryption Standard (AES) encryption and 256-bit encryption keys. Then, it stores the encrypted keys with the encrypted data so they are available to decrypt the cluster data on demand.

If you change the KMS key for your cluster, Aurora DSQL re-encrypts the existing cluster key with the new KMS key.

### Cluster key caching
<a name="cluster-key-caching"></a>

To avoid calling AWS KMS for every Aurora DSQL operation, Aurora DSQL caches the plaintext cluster keys for each caller in memory. If Aurora DSQL gets a request for the cached cluster key after 15 minutes of inactivity, it sends a new request to AWS KMS to decrypt the cluster key. This call captures any changes made to the access policies of the AWS KMS key in AWS KMS or AWS Identity and Access Management (IAM) after the last request to decrypt the cluster key.
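The caching behavior can be pictured as a simple time-based check. The function below is an illustrative stand-in, not service code: given when a cached cluster key was last used, it decides whether the key would be served from memory or whether a new AWS KMS call (which re-evaluates key policies) would be made.

```
# Illustrative sketch of the 15-minute (900-second) inactivity window.
needs_kms_refresh() {
  last_used=$1; now=$2; ttl=900
  if [ $(( now - last_used )) -gt "$ttl" ]; then
    echo "refresh"   # a new kms:Decrypt call, which re-checks key policies
  else
    echo "cached"    # plaintext cluster key served from the in-memory cache
  fi
}

needs_kms_refresh 1000 1500   # 500 seconds idle
needs_kms_refresh 1000 2000   # 1,000 seconds idle
```

The consequence for operators: a policy change that revokes access can take up to 15 minutes of inactivity to take effect, because cached keys are honored until the next refresh.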

## Authorizing use of your AWS KMS key for Aurora DSQL
<a name="authorizing-kms-key-use"></a>

If you use a customer managed key in your account to protect your Aurora DSQL cluster, the policies on that key must give Aurora DSQL permission to use it on your behalf.

You have full control over the policies on a customer managed key. Aurora DSQL does not need additional authorization to use the default AWS owned key to protect the Aurora DSQL clusters in your AWS account.

### Key policy for a customer managed key
<a name="key-policy-customer-managed-key"></a>

When you select a customer managed key to protect an Aurora DSQL cluster, Aurora DSQL needs permission to use the AWS KMS key on behalf of the principal who makes the selection. That principal, a user or role, must have the permissions on the AWS KMS key that Aurora DSQL requires. You can provide these permissions in a key policy or an IAM policy.

At a minimum, Aurora DSQL requires the following permissions on a customer managed key:
+ `kms:Encrypt`
+ `kms:Decrypt`
+ `kms:ReEncrypt*` (for `kms:ReEncryptFrom` and `kms:ReEncryptTo`)
+ `kms:GenerateDataKey`
+ `kms:DescribeKey`

The following example key policy provides only the required permissions. The policy has the following effects:
+ Allows the Aurora DSQL service principal to use the AWS KMS key in cryptographic operations, but only for requests made on behalf of the specified cluster. If the request isn't made for that cluster, the call fails, even when it comes from the Aurora DSQL service.
+ The `kms:EncryptionContext:aws:dsql:ClusterId` and `aws:SourceArn` condition keys restrict the permissions to requests that Aurora DSQL makes for the specified cluster. Other principals can't use these permissions directly.

Before using the example key policy, replace the example cluster ID, Region, and account ID with the values for your own cluster.

```
{
  "Sid": "Enable dsql IAM User Permissions",
  "Effect": "Allow",
  "Principal": {
    "Service": "dsql.amazonaws.com"
  },
  "Action": [
    "kms:Decrypt",
    "kms:GenerateDataKey",
    "kms:Encrypt",
    "kms:ReEncryptFrom",
    "kms:ReEncryptTo"
  ],
  "Resource": "*",
  "Condition": {
    "StringLike": {
      "kms:EncryptionContext:aws:dsql:ClusterId": "w4abucpbwuxx",
      "aws:SourceArn": "arn:aws:dsql:us-east-2:111122223333:cluster/w4abucpbwuxx"
    }
  }
},
{
  "Sid": "Enable dsql IAM User Describe Permissions",
  "Effect": "Allow",
  "Principal": {
    "Service": "dsql.amazonaws.com"
  },
  "Action": "kms:DescribeKey",
  "Resource": "*",
  "Condition": {
    "StringLike": {
      "aws:SourceArn": "arn:aws:dsql:us-east-2:111122223333:cluster/w4abucpbwuxx"
    }
  }
}
```
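The statements above are fragments of a key policy's `Statement` array. One practical workflow is to assemble the complete policy document locally and validate the JSON before applying it, so a typo fails fast instead of surfacing as a rejected API call. The sketch below includes only the first statement for brevity; the key ID in the commented `put-key-policy` call is a placeholder.

```
# Assemble a complete key policy around the example statement and check that it
# parses before sending it to AWS KMS. Principals and IDs are examples.
cat > dsql-key-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Enable dsql IAM User Permissions",
      "Effect": "Allow",
      "Principal": { "Service": "dsql.amazonaws.com" },
      "Action": [
        "kms:Decrypt",
        "kms:GenerateDataKey",
        "kms:Encrypt",
        "kms:ReEncryptFrom",
        "kms:ReEncryptTo"
      ],
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "kms:EncryptionContext:aws:dsql:ClusterId": "w4abucpbwuxx",
          "aws:SourceArn": "arn:aws:dsql:us-east-2:111122223333:cluster/w4abucpbwuxx"
        }
      }
    }
  ]
}
EOF

# Fail fast on malformed JSON instead of a rejected API call.
python3 -m json.tool dsql-key-policy.json > /dev/null && echo "policy parses"

# aws kms put-key-policy --key-id your-key-id --policy-name default \
#   --policy file://dsql-key-policy.json
```

A complete policy for Aurora DSQL would also include the `kms:DescribeKey` statement shown above, plus statements that let your administrators manage the key.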

## Aurora DSQL encryption context
<a name="dsql-encryption-context"></a>

An encryption context is a set of key–value pairs that contain arbitrary nonsecret data. When you include an encryption context in a request to encrypt data, AWS KMS cryptographically binds the encryption context to the encrypted data. To decrypt the data, you must pass in the same encryption context.

Aurora DSQL uses the same encryption context in all AWS KMS cryptographic operations. If you use a customer managed key to protect your Aurora DSQL cluster, you can use the encryption context to identify use of the AWS KMS key in audit records and logs. It also appears in plaintext in logs such as those in AWS CloudTrail.

You can also use the encryption context as a condition for authorization in policies.

In its requests to AWS KMS, Aurora DSQL uses an encryption context with a key-value pair:

```
"encryptionContext": {
  "aws:dsql:ClusterId": "w4abucpbwuxx"
},
```

The key–value pair identifies the cluster that Aurora DSQL is encrypting. The key is `aws:dsql:ClusterId`. The value is the identifier of the cluster.

## Monitoring Aurora DSQL interaction with AWS KMS
<a name="monitoring-dsql-kms-interaction"></a>

If you use a customer managed key to protect your Aurora DSQL clusters, you can use AWS CloudTrail logs to track the requests that Aurora DSQL sends to AWS KMS on your behalf.

The following sections describe how Aurora DSQL uses the AWS KMS operations `GenerateDataKey` and `Decrypt`.

### `GenerateDataKey`
<a name="GenerateDataKey"></a>

When you enable encryption at rest on a cluster, Aurora DSQL creates a unique cluster key. It sends a `GenerateDataKey` request to AWS KMS that specifies the AWS KMS key for the cluster.

The event that records the `GenerateDataKey` operation is similar to the following example event. The user is the Aurora DSQL service account. The parameters include the Amazon Resource Name (ARN) of the AWS KMS key, a key specifier that requires a 256-bit key, and the encryption context that identifies the cluster.

```
{
    "eventVersion": "1.11",
    "userIdentity": {
        "type": "AWSService",
        "invokedBy": "dsql.amazonaws.com"
    },
    "eventTime": "2025-05-16T18:41:24Z",
    "eventSource": "kms.amazonaws.com",
    "eventName": "GenerateDataKey",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "dsql.amazonaws.com",
    "userAgent": "dsql.amazonaws.com",
    "requestParameters": {
        "encryptionContext": {
            "aws:dsql:ClusterId": "w4abucpbwuxx"
        },
        "keySpec": "AES_256",
        "keyId": "arn:aws:kms:us-east-1:111122223333:key/8b60dd9f-2ff8-4b1f-8a9c-bf570cbfdb5e"
    },
    "responseElements": null,
    "requestID": "2da2dc32-d3f4-4d6c-8a41-aff27cd9a733",
    "eventID": "426df0a6-ba56-3244-9337-438411f826f4",
    "readOnly": true,
    "resources": [
        {
            "accountId": "AWS Internal",
            "type": "AWS::KMS::Key",
            "ARN": "arn:aws:kms:us-east-1:111122223333:key/8b60dd9f-2ff8-4b1f-8a9c-bf570cbfdb5e"
        }
    ],
    "eventType": "AwsApiCall",
    "managementEvent": true,
    "recipientAccountId": "111122223333",
    "sharedEventID": "f88e0dd8-6057-4ce0-b77d-800448426d4e",
    "vpcEndpointId": "vpce-1a2b3c4d5e6f1a2b3",
    "vpcEndpointAccountId": "AWS Internal",
    "eventCategory": "Management"
}
```
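When reviewing many CloudTrail events, the encryption context is the handle for isolating KMS activity that belongs to one cluster. The following sketch filters locally exported events by the `aws:dsql:ClusterId` value; the file name and the abbreviated events are illustrative stand-ins for a real CloudTrail export.

```
# Filter exported CloudTrail events down to KMS calls for one Aurora DSQL cluster.
cat > kms-events.json <<'EOF'
[
  {"eventName": "GenerateDataKey",
   "requestParameters": {"encryptionContext": {"aws:dsql:ClusterId": "w4abucpbwuxx"}}},
  {"eventName": "Decrypt",
   "requestParameters": {"encryptionContext": {"aws:dsql:ClusterId": "zzzzzzzzzzzz"}}}
]
EOF

python3 - <<'EOF'
import json

cluster_id = "w4abucpbwuxx"
with open("kms-events.json") as f:
    events = json.load(f)

# Match on the encryption context, which identifies the cluster in every
# AWS KMS request that Aurora DSQL makes.
for event in events:
    ctx = event.get("requestParameters", {}).get("encryptionContext", {})
    if ctx.get("aws:dsql:ClusterId") == cluster_id:
        print(event["eventName"])
EOF
```

The same filter expressed as a CloudTrail Lake or Athena query would match on `requestParameters.encryptionContext` in the stored event records.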

### `Decrypt`
<a name="Decrypt"></a>

When you access an encrypted Aurora DSQL cluster, Aurora DSQL needs to decrypt the cluster key so that it can decrypt the keys below it in the hierarchy. It then decrypts the data in the cluster. To decrypt the cluster key, Aurora DSQL sends a `Decrypt` request to AWS KMS that specifies the AWS KMS key for the cluster.

The event that records the `Decrypt` operation is similar to the following example event. The user is the principal in your AWS account who is accessing the cluster. The parameters include the encrypted cluster key (as a ciphertext blob) and the encryption context that identifies the cluster. AWS KMS derives the ID of the AWS KMS key from the ciphertext.

```
{
  "eventVersion": "1.05",
  "userIdentity": {
    "type": "AWSService",
    "invokedBy": "dsql.amazonaws.com"
  },
  "eventTime": "2018-02-14T16:42:39Z",
  "eventSource": "kms.amazonaws.com",
  "eventName": "Decrypt",
  "awsRegion": "us-east-1",
  "sourceIPAddress": "dsql.amazonaws.com",
  "userAgent": "dsql.amazonaws.com",
  "requestParameters": {
    "keyId": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    "encryptionContext": {
      "aws:dsql:ClusterId": "w4abucpbwuxx"
    },
    "encryptionAlgorithm": "SYMMETRIC_DEFAULT"
  },
  "responseElements": null,
  "requestID": "11cab293-11a6-11e8-8386-13160d3e5db5",
  "eventID": "b7d16574-e887-4b5b-a064-bf92f8ec9ad3",
  "readOnly": true,
  "resources": [
    {
      "ARN": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
      "accountId": "AWS Internal",
      "type": "AWS::KMS::Key"
    }
  ],
  "eventType": "AwsApiCall",
  "managementEvent": true,
  "recipientAccountId": "111122223333",
  "sharedEventID": "d99f2dc5-b576-45b6-aa1d-3a3822edbeeb",
  "vpcEndpointId": "vpce-1a2b3c4d5e6f1a2b3",
  "vpcEndpointAccountId": "AWS Internal",
  "eventCategory": "Management"
}
```

## Creating an encrypted Aurora DSQL cluster
<a name="creating-encrypted-cluster"></a>

All Aurora DSQL clusters are encrypted at rest. By default, clusters use an AWS owned key at no additional cost. You can instead specify a customer managed AWS KMS key. Follow these steps to create your encrypted cluster from either the AWS Management Console or the AWS CLI.

------
#### [ Console ]

**To create an encrypted cluster in the AWS Management Console**

1. Sign in to the AWS Management Console and open the Aurora DSQL console at [https://console.aws.amazon.com/dsql/](https://console.aws.amazon.com/dsql/).

1. In the navigation pane on the left side of the console, choose **Clusters**.

1. Choose **Create Cluster** on the top right and select **Single-Region**.

1. In the **Cluster encryption settings**, choose one of the following options.
   + Accept the default settings to encrypt with an AWS owned key at no additional cost.
   + Select **Customize encryption settings (advanced)** to specify a custom KMS key. Then, search for or enter the ID or alias of your KMS key. Alternatively, choose **Create an AWS KMS key** to create a new key in the AWS KMS Console.

1. Choose **Create cluster**.

To confirm the encryption type for your cluster, navigate to the **Clusters** page and select the ID of the cluster to view the cluster details. On the **Cluster settings** tab, the **Cluster KMS key** setting shows **Aurora DSQL default key** for clusters that use AWS owned keys, or the key ID for other encryption types.

**Note**  
If you choose to use a customer managed key, make sure that you set the KMS key policy appropriately. For examples and more information, see [Key policy for a customer managed key](#key-policy-customer-managed-key).

------
#### [ CLI ]

**To create a cluster that's encrypted with the default AWS owned key**
+ Use the following command to create an Aurora DSQL cluster.

  ```
  aws dsql create-cluster
  ```

As shown in the following encryption details, the encryption status for the cluster is enabled by default, and the default encryption type is AWS owned key. The cluster is now encrypted with the default AWS owned key in the Aurora DSQL service account.

```
"encryptionDetails": {
  "encryptionType" : "AWS_OWNED_KMS_KEY",
  "encryptionStatus" : "ENABLED"
}
```

**To create a cluster that's encrypted with your customer managed key**
+ Use the following command to create an Aurora DSQL cluster, replacing the example key ID with the ID of your customer managed key.

  ```
  aws dsql create-cluster \
  --kms-encryption-key d41d8cd98f00b204e9800998ecf8427e
  ```

As shown in the following encryption details, the encryption status for the cluster is enabled by default, and the encryption type is customer managed KMS key. The cluster is now encrypted with your key.

```
"encryptionDetails": {
  "encryptionType" : "CUSTOMER_MANAGED_KMS_KEY",
  "kmsKeyArn" : "arn:aws:kms:us-east-1:111122223333:key/d41d8cd98f00b204e9800998ecf8427e",
  "encryptionStatus" : "ENABLED"
}
```
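To verify the result in a script, you can parse the `encryptionDetails` from the `create-cluster` (or `get-cluster`) response. In this sketch the response is stubbed into a local file so the parsing step can run on its own; the field values match the example above.

```
# cluster.json stands in for a real `aws dsql create-cluster` response.
cat > cluster.json <<'EOF'
{
  "encryptionDetails": {
    "encryptionType": "CUSTOMER_MANAGED_KMS_KEY",
    "kmsKeyArn": "arn:aws:kms:us-east-1:111122223333:key/d41d8cd98f00b204e9800998ecf8427e",
    "encryptionStatus": "ENABLED"
  }
}
EOF

# Extract the fields a deployment script would assert on.
python3 -c '
import json
details = json.load(open("cluster.json"))["encryptionDetails"]
print(details["encryptionType"], details["encryptionStatus"])
'
```

With the real AWS CLI, the same extraction can be done server-side with `--query 'encryptionDetails'` instead of parsing the full response locally.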

------

## Removing or updating a key for your Aurora DSQL cluster
<a name="updating-encryption-key"></a>

You can use the AWS Management Console or the AWS CLI to update or remove the encryption keys on existing clusters in Amazon Aurora DSQL. If you remove a key without replacing it, Aurora DSQL uses the default AWS owned key. Follow these steps to update the encryption keys of an existing cluster from the Aurora DSQL console or the AWS CLI.

------
#### [ Console ]

**To update or remove an encryption key in the AWS Management Console**

1. Sign in to the AWS Management Console and open the Aurora DSQL console at [https://console.aws.amazon.com/dsql/](https://console.aws.amazon.com/dsql/).

1. In the navigation pane on the left side of the console, choose **Clusters**.

1. From the list view, select the row of the cluster that you want to update.

1. Select the **Actions** menu and then choose **Modify**.

1. In the **Cluster encryption settings**, choose one of the following options to modify your encryption settings.
   + If you want to switch from a custom key to an AWS owned key, de-select the **Customize encryption settings (advanced)** option. The default settings will apply and encrypt your cluster with an AWS owned key at no cost.
   + If you want to switch from one customer managed key to another, or from an AWS owned key to a customer managed key, select the **Customize encryption settings (advanced)** option if it's not already selected. Then, search for and select the ID or alias of the key that you want to use. Alternatively, choose **Create an AWS KMS key** to create a new key in the AWS KMS Console.

1. Choose **Save**.

------
#### [ CLI ]

The following examples show how to use the AWS CLI to update an encrypted cluster.

**To update an encrypted cluster with the default AWS owned key**

```
aws dsql update-cluster \
--identifier aiabtx6icfp6d53snkhseduiqq \
--kms-encryption-key "AWS_OWNED_KMS_KEY"
```

The `EncryptionStatus` of the cluster description is set to `ENABLED` and the `EncryptionType` is `AWS_OWNED_KMS_KEY`.

```
"encryptionDetails": {
  "encryptionType" : "AWS_OWNED_KMS_KEY",
  "encryptionStatus" : "ENABLED"
}
```

This cluster is now encrypted using the default AWS owned key in the Aurora DSQL service account.

**To update an encrypted cluster with a customer managed key for Aurora DSQL**

Update the encrypted cluster, as in the following example:

```
aws dsql update-cluster \
--identifier aiabtx6icfp6d53snkhseduiqq \
--kms-encryption-key arn:aws:kms:us-east-1:123456789012:key/abcd1234-abcd-1234-a123-ab1234a1b234
```

The `EncryptionStatus` of the cluster description transitions to `UPDATING` and the `EncryptionType` is `CUSTOMER_MANAGED_KMS_KEY`. After Aurora DSQL finishes propagating the new key through the platform, the encryption status transitions to `ENABLED`.

```
"encryptionDetails": {
  "encryptionType" : "CUSTOMER_MANAGED_KMS_KEY",
  "kmsKeyArn" : "arn:aws:kms:us-east-1:123456789012:key/abcd1234-abcd-1234-a123-ab1234a1b234",
  "encryptionStatus" : "ENABLED"
}
```
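Because the new key propagates asynchronously, automation should poll until the status settles. The loop below sketches that pattern; `get_encryption_status` is a stubbed stand-in for a real `aws dsql get-cluster` call that reads `encryptionDetails.encryptionStatus`, so the control flow can run on its own.

```
# Poll until a key change finishes propagating. The stub reports UPDATING on
# the first two polls and ENABLED afterward; real code would call
# `aws dsql get-cluster` instead.
get_encryption_status() {
  if [ "$1" -lt 2 ]; then echo "UPDATING"; else echo "ENABLED"; fi
}

wait_for_enabled() {
  attempt=0
  while [ "$attempt" -lt 10 ]; do
    status=$(get_encryption_status "$attempt")
    if [ "$status" = "ENABLED" ]; then
      echo "key change complete"
      return 0
    fi
    attempt=$((attempt + 1))
    # sleep 10   # pause between real get-cluster calls
  done
  echo "timed out"
  return 1
}

wait_for_enabled
```

Keeping the original key enabled until this loop reports success matches the guidance in the considerations below: Aurora DSQL still needs the old key to decrypt data while it re-encrypts with the new one.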

------

**Note**  
If you choose to use a customer managed key, make sure that you set the KMS key policy appropriately. For examples and more information, see [Key policy for a customer managed key](#key-policy-customer-managed-key).

## Considerations for encryption with Aurora DSQL
<a name="considerations-with-encryption"></a>
+ Aurora DSQL encrypts all cluster data at rest. You can't disable this encryption or encrypt only some items in a cluster.
+ AWS Backup encrypts your backups and any clusters restored from these backups. You can encrypt your backup data in AWS Backup using either the AWS owned key or a customer managed key.
+ The following data protection states are enabled for Aurora DSQL:
  + **Data at rest** - Aurora DSQL encrypts all static data on persistent storage media
  + **Data in transit** - Aurora DSQL encrypts all communications using Transport Layer Security (TLS) by default
+ When you transition to a different key, we recommend that you keep the original key enabled until the transition is complete. AWS needs the original key to decrypt data before it encrypts your data with the new key. The process is complete when the cluster's `encryptionStatus` is `ENABLED` and you see the `kmsKeyArn` of the new customer managed key.
+ When you disable your customer managed key or revoke Aurora DSQL's access to your key, your cluster goes into the `IDLE` state.
+ The AWS Management Console and Amazon Aurora DSQL API use different terms for encryption types:
  + AWS Management Console – In the console, you see `KMS` when using a customer managed key and `DEFAULT` when using an AWS owned key.
  + API – The Amazon Aurora DSQL API uses `CUSTOMER_MANAGED_KMS_KEY` for customer managed keys, and `AWS_OWNED_KMS_KEY` for AWS owned keys.
+ If you don't specify an encryption key during cluster creation, Aurora DSQL automatically encrypts your data using the AWS owned key.
+ You can switch between an AWS owned key and a Customer managed key at any time. Make this change using the AWS Management Console, AWS CLI, or the Amazon Aurora DSQL API.

# Identity and access management for Aurora DSQL
<a name="security-iam"></a>

AWS Identity and Access Management (IAM) is an AWS service that helps an administrator securely control access to AWS resources. IAM administrators control who can be *authenticated* (signed in) and *authorized* (have permissions) to use Aurora DSQL resources. IAM is an AWS service that you can use with no additional charge.

**Topics**
+ [

## Audience
](#security_iam_audience)
+ [

## Authenticating with identities
](#security_iam_authentication)
+ [

## Managing access using policies
](#security_iam_access-manage)
+ [

# How Amazon Aurora DSQL works with IAM
](security_iam_service-with-iam.md)
+ [

# Identity-based policy examples for Amazon Aurora DSQL
](security_iam_id-based-policy-examples.md)
+ [

# Troubleshooting Amazon Aurora DSQL identity and access
](security_iam_troubleshoot.md)

## Audience
<a name="security_iam_audience"></a>

How you use AWS Identity and Access Management (IAM) differs based on your role:
+ **Service user** - request permissions from your administrator if you cannot access features (see [Troubleshooting Amazon Aurora DSQL identity and access](security_iam_troubleshoot.md))
+ **Service administrator** - determine user access and submit permission requests (see [How Amazon Aurora DSQL works with IAM](security_iam_service-with-iam.md))
+ **IAM administrator** - write policies to manage access (see [Identity-based policy examples for Amazon Aurora DSQL](security_iam_id-based-policy-examples.md))

## Authenticating with identities
<a name="security_iam_authentication"></a>

Authentication is how you sign in to AWS using your identity credentials. You must be authenticated as the AWS account root user, an IAM user, or by assuming an IAM role.

You can sign in as a federated identity using credentials from an identity source like AWS IAM Identity Center (IAM Identity Center), single sign-on authentication, or Google/Facebook credentials. For more information about signing in, see [How to sign in to your AWS account](https://docs.aws.amazon.com/signin/latest/userguide/how-to-sign-in.html) in the *AWS Sign-In User Guide*.

For programmatic access, AWS provides SDKs and a CLI that cryptographically sign requests. For more information, see [AWS Signature Version 4 for API requests](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_sigv.html) in the *IAM User Guide*.

### AWS account root user
<a name="security_iam_authentication-rootuser"></a>

 When you create an AWS account, you begin with one sign-in identity called the AWS account *root user* that has complete access to all AWS services and resources. We strongly recommend that you don't use the root user for everyday tasks. For tasks that require root user credentials, see [Tasks that require root user credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html#root-user-tasks) in the *IAM User Guide*. 

### Federated identity
<a name="security_iam_authentication-federated"></a>

As a best practice, require human users to use federation with an identity provider to access AWS services using temporary credentials.

A *federated identity* is a user from your enterprise directory, web identity provider, or Directory Service that accesses AWS services using credentials from an identity source. Federated identities assume roles that provide temporary credentials.

For centralized access management, we recommend AWS IAM Identity Center. For more information, see [What is IAM Identity Center?](https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html) in the *AWS IAM Identity Center User Guide*.

### IAM users and groups
<a name="security_iam_authentication-iamuser"></a>

An *[IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html)* is an identity with specific permissions for a single person or application. We recommend using temporary credentials instead of IAM users with long-term credentials. For more information, see [Require human users to use federation with an identity provider to access AWS using temporary credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#bp-users-federation-idp) in the *IAM User Guide*.

An *[IAM user group](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html)* specifies a collection of IAM users and makes permissions easier to manage for large sets of users. For more information, see [Use cases for IAM users](https://docs.aws.amazon.com/IAM/latest/UserGuide/gs-identities-iam-users.html) in the *IAM User Guide*.

### IAM roles
<a name="security_iam_authentication-iamrole"></a>

An *[IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html)* is an identity with specific permissions that provides temporary credentials. You can assume a role by [switching from a user to an IAM role (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-console.html) or by calling an AWS CLI or AWS API operation. For more information, see [Methods to assume a role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_manage-assume.html) in the *IAM User Guide*.

IAM roles are useful for federated user access, temporary IAM user permissions, cross-account access, cross-service access, and applications running on Amazon EC2. For more information, see [Cross account resource access in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies-cross-account-resource-access.html) in the *IAM User Guide*.
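As an illustration, the part of a role that controls who can assume it is its trust policy, which is itself a resource-based policy. The following minimal sketch (with a placeholder account ID) allows principals in account `123456789012` to assume the role:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::123456789012:root" },
            "Action": "sts:AssumeRole"
        }
    ]
}
```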

## Managing access using policies
<a name="security_iam_access-manage"></a>

You control access in AWS by creating policies and attaching them to AWS identities or resources. A policy defines permissions when associated with an identity or resource. AWS evaluates these policies when a principal makes a request. Most policies are stored in AWS as JSON documents. For more information about JSON policy documents, see [Overview of JSON policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#access_policies-json) in the *IAM User Guide*.

Using policies, administrators specify who has access to what by defining which **principal** can perform **actions** on what **resources**, and under what **conditions**.

By default, users and roles have no permissions. An IAM administrator creates IAM policies and adds them to roles, which users can then assume. IAM policies define permissions regardless of the method used to perform the operation.

### Identity-based policies
<a name="security_iam_access-manage-id-based-policies"></a>

Identity-based policies are JSON permissions policy documents that you attach to an identity (user, group, or role). These policies control what actions identities can perform, on which resources, and under what conditions. To learn how to create an identity-based policy, see [Define custom IAM permissions with customer managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the *IAM User Guide*.

Identity-based policies can be *inline policies* (embedded directly into a single identity) or *managed policies* (standalone policies attached to multiple identities). To learn how to choose between managed and inline policies, see [Choose between managed policies and inline policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies-choosing-managed-or-inline.html) in the *IAM User Guide*.

### Resource-based policies
<a name="security_iam_access-manage-resource-based-policies"></a>

Resource-based policies are JSON policy documents that you attach to a resource. Examples include IAM *role trust policies* and Amazon S3 *bucket policies*. In services that support resource-based policies, service administrators can use them to control access to a specific resource. You must [specify a principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html) in a resource-based policy.

Resource-based policies are inline policies that are located in that service. You can't use AWS managed policies from IAM in a resource-based policy.

### Other policy types
<a name="security_iam_access-manage-other-policies"></a>

AWS supports additional policy types that can set the maximum permissions granted by more common policy types:
+ **Permissions boundaries** – Set the maximum permissions that an identity-based policy can grant to an IAM entity. For more information, see [Permissions boundaries for IAM entities](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html) in the *IAM User Guide*.
+ **Service control policies (SCPs)** – Specify the maximum permissions for an organization or organizational unit in AWS Organizations. For more information, see [Service control policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) in the *AWS Organizations User Guide*.
+ **Resource control policies (RCPs)** – Set the maximum available permissions for resources in your accounts. For more information, see [Resource control policies (RCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_rcps.html) in the *AWS Organizations User Guide*.
+ **Session policies** – Advanced policies passed as a parameter when creating a temporary session for a role or federated user. For more information, see [Session policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session) in the *IAM User Guide*.
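These maximum-permission policy types use the same JSON document format as identity-based policies. For example, a permissions boundary is an ordinary policy document; the following sketch (an illustrative scoping, not a recommendation) caps an entity to read-only Aurora DSQL actions, so that identity-based policies attached to the same entity can never grant more than this:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BoundaryReadOnlyDsql",
            "Effect": "Allow",
            "Action": [
                "dsql:GetCluster",
                "dsql:ListClusters",
                "dsql:ListTagsForResource"
            ],
            "Resource": "*"
        }
    ]
}
```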

### Multiple policy types
<a name="security_iam_access-manage-multiple-policies"></a>

When multiple types of policies apply to a request, the resulting permissions are more complicated to understand. To learn how AWS determines whether to allow a request when multiple policy types are involved, see [Policy evaluation logic](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html) in the *IAM User Guide*.

# How Amazon Aurora DSQL works with IAM
<a name="security_iam_service-with-iam"></a>

Before you use IAM to manage access to Aurora DSQL, learn what IAM features are available to use with Aurora DSQL.






**IAM features you can use with Amazon Aurora DSQL**  

| IAM feature | Aurora DSQL support | 
| --- | --- | 
|  [Identity-based policies](#security_iam_service-with-iam-id-based-policies)  |   Yes  | 
|  [Resource-based policies](#security_iam_service-with-iam-resource-based-policies)  |   Yes  | 
|  [Policy actions](#security_iam_service-with-iam-id-based-policies-actions)  |   Yes  | 
|  [Policy resources](#security_iam_service-with-iam-id-based-policies-resources)  |   Yes  | 
|  [Policy condition keys](#security_iam_service-with-iam-id-based-policies-conditionkeys)  |   Yes  | 
|  [ACLs](#security_iam_service-with-iam-acls)  |   No   | 
|  [ABAC (tags in policies)](#security_iam_service-with-iam-tags)  |   Yes  | 
|  [Temporary credentials](#security_iam_service-with-iam-roles-tempcreds)  |   Yes  | 
|  [Principal permissions](#security_iam_service-with-iam-principal-permissions)  |   Yes  | 
|  [Service roles](#security_iam_service-with-iam-roles-service)  |   Yes  | 
|  [Service-linked roles](#security_iam_service-with-iam-roles-service-linked)  |   Yes  | 

To get a high-level view of how Aurora DSQL and other AWS services work with most IAM features, see [AWS services that work with IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html) in the *IAM User Guide*.

## Identity-based policies for Aurora DSQL
<a name="security_iam_service-with-iam-id-based-policies"></a>

**Supports identity-based policies:** Yes

Identity-based policies are JSON permissions policy documents that you can attach to an identity, such as an IAM user, group of users, or role. These policies control what actions users and roles can perform, on which resources, and under what conditions. To learn how to create an identity-based policy, see [Define custom IAM permissions with customer managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the *IAM User Guide*.

With IAM identity-based policies, you can specify allowed or denied actions and resources as well as the conditions under which actions are allowed or denied. To learn about all of the elements that you can use in a JSON policy, see [IAM JSON policy elements reference](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html) in the *IAM User Guide*.

### Identity-based policy examples for Aurora DSQL
<a name="security_iam_service-with-iam-id-based-policies-examples"></a>



To view examples of Aurora DSQL identity-based policies, see [Identity-based policy examples for Amazon Aurora DSQL](security_iam_id-based-policy-examples.md).

## Resource-based policies within Aurora DSQL
<a name="security_iam_service-with-iam-resource-based-policies"></a>

**Supports resource-based policies:** Yes

Resource-based policies are JSON policy documents that you attach to a resource. Examples of resource-based policies are IAM role trust policies and Amazon S3 bucket policies. In services that support resource-based policies, service administrators can use them to control access to a specific resource. For the resource where the policy is attached, the policy defines what actions a specified principal can perform on that resource and under what conditions. You must [specify a principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html) in a resource-based policy. Principals can include accounts, users, roles, federated users, or AWS services. Resource-based policies are inline policies that are located in that service. You can't use AWS managed policies from IAM in a resource-based policy.

To learn how to create and manage resource-based policies for Aurora DSQL clusters, see [Resource-based policies for Aurora DSQL](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/resource-based-policies.html).

## Policy actions for Aurora DSQL
<a name="security_iam_service-with-iam-id-based-policies-actions"></a>

**Supports policy actions:** Yes

Administrators can use AWS JSON policies to specify who has access to what. That is, which **principal** can perform **actions** on what **resources**, and under what **conditions**.

The `Action` element of a JSON policy describes the actions that you can use to allow or deny access in a policy. Include actions in a policy to grant permissions to perform the associated operation.



To see a list of Aurora DSQL actions, see [Actions defined by Amazon Aurora DSQL](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonauroradsql.html#amazonauroradsql-actions-as-permissions) in the *Service Authorization Reference*.

Policy actions in Aurora DSQL use the following prefix before the action:

```
dsql
```

To specify multiple actions in a single statement, separate them with commas.

```
"Action": [
      "dsql:action1",
      "dsql:action2"
]
```





To view examples of Aurora DSQL identity-based policies, see [Identity-based policy examples for Amazon Aurora DSQL](security_iam_id-based-policy-examples.md).

## Policy resources for Aurora DSQL
<a name="security_iam_service-with-iam-id-based-policies-resources"></a>

**Supports policy resources:** Yes

Administrators can use AWS JSON policies to specify who has access to what. That is, which **principal** can perform **actions** on what **resources**, and under what **conditions**.

The `Resource` JSON policy element specifies the object or objects to which the action applies. As a best practice, specify a resource using its [Amazon Resource Name (ARN)](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html). For actions that don't support resource-level permissions, use a wildcard (`*`) to indicate that the statement applies to all resources.

```
"Resource": "*"
```

To see a list of Aurora DSQL resource types and their ARNs, see [Resources defined by Amazon Aurora DSQL](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonauroradsql.html#amazonauroradsql-resources-for-iam-policies) in the *Service Authorization Reference*. To learn with which actions you can specify the ARN of each resource, see [Actions defined by Amazon Aurora DSQL](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonauroradsql.html#amazonauroradsql-actions-as-permissions).
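For example, the cluster ARN format used in the example policies later in this guide can scope a statement to a single cluster. The Region, account ID, and cluster ID below are placeholders:

```
"Resource": "arn:aws:dsql:us-east-1:123456789012:cluster/my-cluster-id"
```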





To view examples of Aurora DSQL identity-based policies, see [Identity-based policy examples for Amazon Aurora DSQL](security_iam_id-based-policy-examples.md).

## Policy condition keys for Aurora DSQL
<a name="security_iam_service-with-iam-id-based-policies-conditionkeys"></a>

**Supports service-specific policy condition keys:** Yes

Administrators can use AWS JSON policies to specify who has access to what. That is, which **principal** can perform **actions** on what **resources**, and under what **conditions**.

The `Condition` element specifies when statements execute based on defined criteria. You can create conditional expressions that use [condition operators](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html), such as equals or less than, to match the condition in the policy with values in the request. To see all AWS global condition keys, see [AWS global condition context keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html) in the *IAM User Guide*.
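For example, the following statement fragment uses the global `aws:SecureTransport` condition key to deny any Aurora DSQL request that isn't sent over SSL. This is a sketch of a common pattern; adapt the action list and resources to your needs:

```
{
    "Sid": "DenyInsecureTransport",
    "Effect": "Deny",
    "Action": "dsql:*",
    "Resource": "*",
    "Condition": {
        "Bool": { "aws:SecureTransport": "false" }
    }
}
```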

To see a list of Aurora DSQL condition keys, see [Condition keys for Amazon Aurora DSQL](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonauroradsql.html#amazonauroradsql-policy-keys) in the *Service Authorization Reference*. To learn with which actions and resources you can use a condition key, see [Actions defined by Amazon Aurora DSQL](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonauroradsql.html#amazonauroradsql-actions-as-permissions).

To view examples of Aurora DSQL identity-based policies, see [Identity-based policy examples for Amazon Aurora DSQL](security_iam_id-based-policy-examples.md).

## ACLs in Aurora DSQL
<a name="security_iam_service-with-iam-acls"></a>

**Supports ACLs:** No 

Access control lists (ACLs) control which principals (account members, users, or roles) have permissions to access a resource. ACLs are similar to resource-based policies, although they do not use the JSON policy document format.

## ABAC with Aurora DSQL
<a name="security_iam_service-with-iam-tags"></a>

**Supports ABAC (tags in policies):** Yes

Attribute-based access control (ABAC) is an authorization strategy that defines permissions based on attributes called tags. You can attach tags to IAM entities and AWS resources, then design ABAC policies to allow operations when the principal's tag matches the tag on the resource.

To control access based on tags, you provide tag information in the [condition element](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition.html) of a policy using the `aws:ResourceTag/key-name`, `aws:RequestTag/key-name`, or `aws:TagKeys` condition keys.

If a service supports all three condition keys for every resource type, then the value is **Yes** for the service. If a service supports all three condition keys for only some resource types, then the value is **Partial**.

For more information about ABAC, see [Define permissions with ABAC authorization](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction_attribute-based-access-control.html) in the *IAM User Guide*. To view a tutorial with steps for setting up ABAC, see [Use attribute-based access control (ABAC)](https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_attribute-based-access-control.html) in the *IAM User Guide*.

## Using temporary credentials with Aurora DSQL
<a name="security_iam_service-with-iam-roles-tempcreds"></a>

**Supports temporary credentials:** Yes

Temporary credentials provide short-term access to AWS resources and are automatically created when you use federation or switch roles. AWS recommends that you dynamically generate temporary credentials instead of using long-term access keys. For more information, see [Temporary security credentials in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html) and [AWS services that work with IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html) in the *IAM User Guide*.

## Cross-service principal permissions for Aurora DSQL
<a name="security_iam_service-with-iam-principal-permissions"></a>

**Supports forward access sessions (FAS):** Yes

Forward access sessions (FAS) use the permissions of the principal that calls an AWS service, combined with the permissions of the requesting AWS service, to make requests to downstream services. For policy details when making FAS requests, see [Forward access sessions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_forward_access_sessions.html).

## Service roles for Aurora DSQL
<a name="security_iam_service-with-iam-roles-service"></a>

**Supports service roles:** Yes

 A service role is an [IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) that a service assumes to perform actions on your behalf. An IAM administrator can create, modify, and delete a service role from within IAM. For more information, see [Create a role to delegate permissions to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html) in the *IAM User Guide*. 

**Warning**  
Changing the permissions for a service role might break Aurora DSQL functionality. Edit service roles only when Aurora DSQL provides guidance to do so.

## Service-linked roles for Aurora DSQL
<a name="security_iam_service-with-iam-roles-service-linked"></a>

**Supports service-linked roles:** Yes

A service-linked role is a type of service role that is linked to an AWS service. The service can assume the role to perform an action on your behalf. Service-linked roles appear in your AWS account and are owned by the service. An IAM administrator can view, but not edit, the permissions for service-linked roles.

For details about creating or managing service-linked roles for Aurora DSQL, see [Using service-linked roles in Aurora DSQL](working-with-service-linked-roles.md).

# Identity-based policy examples for Amazon Aurora DSQL
<a name="security_iam_id-based-policy-examples"></a>

By default, users and roles don't have permission to create or modify Aurora DSQL resources. To grant users permission to perform actions on the resources that they need, an IAM administrator can create IAM policies.

To learn how to create an IAM identity-based policy by using these example JSON policy documents, see [Create IAM policies (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create-console.html) in the *IAM User Guide*.

For details about actions and resource types defined by Aurora DSQL, including the format of the ARNs for each of the resource types, see [Actions, resources, and condition keys for Amazon Aurora DSQL](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonauroradsql.html) in the *Service Authorization Reference*.

**Topics**
+ [

## Policy best practices
](#security_iam_service-with-iam-policy-best-practices)
+ [

## Using the Aurora DSQL console
](#security_iam_id-based-policy-examples-console)
+ [

## Allow users to view their own permissions
](#security_iam_id-based-policy-examples-view-own-permissions)
+ [

## Allow cluster management and database connection
](#security_iam_id-based-policy-examples-cluster-management)
+ [

## Aurora DSQL resource access based on tags
](#security_iam_id-based-policy-examples-tag-based-access)

## Policy best practices
<a name="security_iam_service-with-iam-policy-best-practices"></a>

Identity-based policies determine whether someone can create, access, or delete Aurora DSQL resources in your account. These actions can incur costs for your AWS account. When you create or edit identity-based policies, follow these guidelines and recommendations:
+ **Get started with AWS managed policies and move toward least-privilege permissions** – To get started granting permissions to your users and workloads, use the *AWS managed policies* that grant permissions for many common use cases. They are available in your AWS account. We recommend that you reduce permissions further by defining AWS customer managed policies that are specific to your use cases. For more information, see [AWS managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#aws-managed-policies) or [AWS managed policies for job functions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html) in the *IAM User Guide*.
+ **Apply least-privilege permissions** – When you set permissions with IAM policies, grant only the permissions required to perform a task. You do this by defining the actions that can be taken on specific resources under specific conditions, also known as *least-privilege permissions*. For more information about using IAM to apply permissions, see [ Policies and permissions in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) in the *IAM User Guide*.
+ **Use conditions in IAM policies to further restrict access** – You can add a condition to your policies to limit access to actions and resources. For example, you can write a policy condition to specify that all requests must be sent using SSL. You can also use conditions to grant access to service actions if they are used through a specific AWS service, such as CloudFormation. For more information, see [ IAM JSON policy elements: Condition](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition.html) in the *IAM User Guide*.
+ **Use IAM Access Analyzer to validate your IAM policies to ensure secure and functional permissions** – IAM Access Analyzer validates new and existing policies so that the policies adhere to the IAM policy language (JSON) and IAM best practices. IAM Access Analyzer provides more than 100 policy checks and actionable recommendations to help you author secure and functional policies. For more information, see [Validate policies with IAM Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-policy-validation.html) in the *IAM User Guide*.
+ **Require multi-factor authentication (MFA)** – If you have a scenario that requires IAM users or a root user in your AWS account, turn on MFA for additional security. To require MFA when API operations are called, add MFA conditions to your policies. For more information, see [ Secure API access with MFA](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_configure-api-require.html) in the *IAM User Guide*.

For more information about best practices in IAM, see [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the *IAM User Guide*.

## Using the Aurora DSQL console
<a name="security_iam_id-based-policy-examples-console"></a>

To access the Amazon Aurora DSQL console, you must have a minimum set of permissions. These permissions must allow you to list and view details about the Aurora DSQL resources in your AWS account. If you create an identity-based policy that is more restrictive than the minimum required permissions, the console won't function as intended for entities (users or roles) with that policy.

You don't need to allow minimum console permissions for users that are making calls only to the AWS CLI or the AWS API. Instead, allow access to only the actions that match the API operation that they're trying to perform.

To ensure that users and roles can still use the Aurora DSQL console, also attach the Aurora DSQL `AmazonAuroraDSQLConsoleFullAccess` or `AmazonAuroraDSQLReadOnlyAccess` AWS managed policy to the entities. For more information, see [Adding permissions to a user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_change-permissions.html#users_change_permissions-add-console) in the *IAM User Guide*.

## Allow users to view their own permissions
<a name="security_iam_id-based-policy-examples-view-own-permissions"></a>

This example shows how you might create a policy that allows IAM users to view the inline and managed policies that are attached to their user identity. This policy includes permissions to complete this action on the console or programmatically using the AWS CLI or AWS API.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ViewOwnUserInfo",
            "Effect": "Allow",
            "Action": [
                "iam:GetUserPolicy",
                "iam:ListGroupsForUser",
                "iam:ListAttachedUserPolicies",
                "iam:ListUserPolicies",
                "iam:GetUser"
            ],
            "Resource": ["arn:aws:iam::*:user/${aws:username}"]
        },
        {
            "Sid": "NavigateInConsole",
            "Effect": "Allow",
            "Action": [
                "iam:GetGroupPolicy",
                "iam:GetPolicyVersion",
                "iam:GetPolicy",
                "iam:ListAttachedGroupPolicies",
                "iam:ListGroupPolicies",
                "iam:ListPolicyVersions",
                "iam:ListPolicies",
                "iam:ListUsers"
            ],
            "Resource": "*"
        }
    ]
}
```

## Allow cluster management and database connection
<a name="security_iam_id-based-policy-examples-cluster-management"></a>

The following policy grants an IAM user permission to manage and connect to a specific Aurora DSQL cluster. The policy scopes cluster management and connection actions to a single cluster Amazon Resource Name (ARN), while allowing `dsql:ListClusters` on all resources because this action does not support resource-level permissions.

This example uses `dsql:DbConnectAdmin` to connect with the `admin` role. To connect with a custom database role instead, replace `dsql:DbConnectAdmin` with `dsql:DbConnect`. For more information, see [Authentication and authorization for Aurora DSQL](authentication-authorization.md).

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowClusterManagement",
            "Effect": "Allow",
            "Action": [
                "dsql:GetCluster",
                "dsql:UpdateCluster",
                "dsql:DeleteCluster",
                "dsql:DbConnectAdmin",
                "dsql:TagResource",
                "dsql:ListTagsForResource",
                "dsql:UntagResource"
            ],
            "Resource": "arn:aws:dsql:*:123456789012:cluster/my-cluster-id"
        },
        {
            "Sid": "AllowListClusters",
            "Effect": "Allow",
            "Action": "dsql:ListClusters",
            "Resource": "*"
        }
    ]
}
```

------

## Aurora DSQL resource access based on tags
<a name="security_iam_id-based-policy-examples-tag-based-access"></a>

You can use conditions in your identity-based policy to control access to Aurora DSQL resources based on tags. The following example shows how you might create a policy that allows viewing a cluster. However, the policy grants permission only if the cluster tag `Owner` has the value of that user's user name. This policy also grants the permissions necessary to complete this action on the console.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListClustersInConsole",
            "Effect": "Allow",
            "Action": "dsql:ListClusters",
            "Resource": "*"
        },
        {
            "Sid": "ViewClusterIfOwner",
            "Effect": "Allow",
            "Action": "dsql:GetCluster",
            "Resource": "arn:aws:dsql:*:*:cluster/*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/Owner": "${aws:username}"
                }
            }
        }
    ]
}
```

------

You can attach this policy to the IAM users in your account. If a user named `richard-roe` attempts to view an Aurora DSQL cluster, the cluster must be tagged `Owner=richard-roe` or `owner=richard-roe`. Otherwise IAM denies access. The condition tag key `Owner` matches both `Owner` and `owner` because condition key names are not case-sensitive. For more information, see [IAM JSON policy elements: Condition](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition.html) in the *IAM User Guide*.

The following policy allows a user to create clusters only if they tag the cluster with their own user name as the `Owner`. It also allows tagging only on clusters that the user already owns.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCreateTaggedCluster",
            "Effect": "Allow",
            "Action": "dsql:CreateCluster",
            "Resource": "arn:aws:dsql:*:123456789012:cluster/*",
            "Condition": {
                "StringEquals": {
                    "aws:RequestTag/Owner": "${aws:username}"
                }
            }
        },
        {
            "Sid": "AllowTagOwnedClusters",
            "Effect": "Allow",
            "Action": "dsql:TagResource",
            "Resource": "arn:aws:dsql:*:123456789012:cluster/*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/Owner": "${aws:username}"
                }
            }
        }
    ]
}
```

------







# Troubleshooting Amazon Aurora DSQL identity and access
<a name="security_iam_troubleshoot"></a>

Use the following information to help you diagnose and fix common issues that you might encounter when working with Aurora DSQL and IAM.

**Topics**
+ [

## I am not authorized to perform an action in Aurora DSQL
](#security_iam_troubleshoot-no-permissions)
+ [

## I am not authorized to perform iam:PassRole
](#security_iam_troubleshoot-passrole)
+ [

## I want to allow people outside of my AWS account to access my Aurora DSQL resources
](#security_iam_troubleshoot-cross-account-access)

## I am not authorized to perform an action in Aurora DSQL
<a name="security_iam_troubleshoot-no-permissions"></a>

If you receive an error that you're not authorized to perform an action, your policies must be updated to allow you to perform the action.

The following example error occurs when the `mateojackson` user tries to use the console to view details about the `my-dsql-cluster` resource but doesn't have the `dsql:GetCluster` permission.

```
User: arn:aws:iam::123456789012:user/mateojackson is not authorized to perform: dsql:GetCluster on resource: my-dsql-cluster
```

In this case, the policy for the `mateojackson` user must be updated to allow access to the `my-dsql-cluster` resource by using the `dsql:GetCluster` action.

If you need help, contact your administrator. Your administrator is the person who provided you with your sign-in credentials.

## I am not authorized to perform iam:PassRole
<a name="security_iam_troubleshoot-passrole"></a>

If you receive an error that you're not authorized to perform the `iam:PassRole` action, your policies must be updated to allow you to pass a role to Aurora DSQL.

Some AWS services allow you to pass an existing role to that service instead of creating a new service role or service-linked role. To do this, you must have permissions to pass the role to the service.

The following example error occurs when an IAM user named `marymajor` tries to use the console to perform an action in Aurora DSQL. However, the action requires the service to have permissions that are granted by a service role. Mary does not have permissions to pass the role to the service.

```
User: arn:aws:iam::123456789012:user/marymajor is not authorized to perform: iam:PassRole
```

In this case, Mary's policies must be updated to allow her to perform the `iam:PassRole` action.
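For example, a statement like the following sketch grants that permission. The account ID and role name are placeholders; scoping `iam:PassRole` to the exact role ARN, rather than `*`, is the recommended practice:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPassSpecificRole",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::123456789012:role/my-service-role"
        }
    ]
}
```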

If you need help, contact your AWS administrator. Your administrator is the person who provided you with your sign-in credentials.

## I want to allow people outside of my AWS account to access my Aurora DSQL resources
<a name="security_iam_troubleshoot-cross-account-access"></a>

You can create a role that users in other accounts or people outside of your organization can use to access your resources. You can specify who is trusted to assume the role. For services that support resource-based policies or access control lists (ACLs), you can use those policies to grant people access to your resources.

To learn more, consult the following:
+ To learn whether Aurora DSQL supports these features, see [How Amazon Aurora DSQL works with IAM](security_iam_service-with-iam.md).
+ To learn how to provide access to your resources across AWS accounts that you own, see [Providing access to an IAM user in another AWS account that you own](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_aws-accounts.html) in the *IAM User Guide*.
+ To learn how to provide access to your resources to third-party AWS accounts, see [Providing access to AWS accounts owned by third parties](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_third-party.html) in the *IAM User Guide*.
+ To learn how to provide access through identity federation, see [Providing access to externally authenticated users (identity federation)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_federated-users.html) in the *IAM User Guide*.
+ To learn the difference between using roles and resource-based policies for cross-account access, see [Cross account resource access in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies-cross-account-resource-access.html) in the *IAM User Guide*.

# Resource-based policies for Aurora DSQL
<a name="resource-based-policies"></a>

Use resource-based policies for Aurora DSQL to restrict or grant access to your clusters through JSON policy documents that attach directly to your cluster resources. These policies provide fine-grained control over who can access your cluster and under what conditions.

Aurora DSQL clusters are accessible from the public internet by default, with IAM authentication as the primary security control. Resource-based policies let you add access restrictions, such as blocking access from the public internet.

Resource-based policies work alongside IAM identity-based policies. AWS evaluates both types of policies to determine the final permissions for any access request to your cluster. By default, Aurora DSQL clusters are accessible within an account: if an IAM user or role has Aurora DSQL permissions, it can access clusters that have no resource-based policy attached.

**Note**  
Changes to resource-based policies are eventually consistent and typically take effect within one minute.

For more information about the differences between identity-based and resource-based policies, see [Identity-based policies and resource-based policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_identity-vs-resource.html) in the *IAM User Guide*.

## When to use resource-based policies
<a name="rbp-when-to-use"></a>

Resource-based policies are particularly useful in these scenarios:
+ *Network-based access control* — Restrict access based on the VPC or IP address that requests originate from, or block public internet access entirely. Use condition keys like `aws:SourceVpc` and `aws:SourceIp` to control network access.
+ *Multiple teams or applications* — Grant access to the same cluster for multiple teams or applications. Rather than managing individual IAM policies for each principal, you define access rules once on the cluster.
+ *Complex conditional access* — Control access based on multiple factors like network attributes, request context, and user attributes. You can combine multiple conditions in a single policy.
+ *Centralized security governance* — Enable cluster owners to control access using familiar AWS policy syntax that integrates with your existing security practices.
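For example, a network-based restriction using `aws:SourceIp` might look like the following sketch. The CIDR range is a placeholder (a documentation range); replace it with your approved address range.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": { "AWS": "*" },
      "Resource": "*",
      "Action": [
        "dsql:DbConnect",
        "dsql:DbConnectAdmin"
      ],
      "Condition": {
        "NotIpAddress": { "aws:SourceIp": "203.0.113.0/24" }
      }
    }
  ]
}
```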

**Note**  
Cross-account access is not yet supported for Aurora DSQL resource-based policies but will be available in future releases.

When someone tries to connect to your Aurora DSQL cluster, AWS evaluates your resource-based policy as part of the authorization context, along with any relevant IAM policies, to determine whether the request should be allowed or rejected.

Resource-based policies can grant access to principals within the same AWS account as the cluster. For multi-Region clusters, each regional cluster has its own resource-based policy, allowing for Region-specific access controls when needed.

**Note**  
Condition context keys may vary between Regions (such as VPC IDs).

**Topics**
+ [When to use](#rbp-when-to-use)
+ [Create with policies](rbp-create-cluster.md)
+ [Add and edit policies](rbp-attach-policy.md)
+ [View policy](rbp-view-policy.md)
+ [Remove policy](rbp-remove-policy.md)
+ [Policy examples](rbp-examples.md)
+ [Block public access](rbp-block-public-access.md)
+ [API operations](rbp-api-operations.md)

# Creating clusters with resource-based policies
<a name="rbp-create-cluster"></a>

You can attach resource-based policies when creating a new cluster to ensure access controls are in place from the start. Each cluster can have a single inline policy attached directly to the cluster.

## AWS Management Console
<a name="rbp-create-cluster-console"></a>

**To add a resource-based policy during cluster creation**

1. Sign in to the AWS Management Console and open the Aurora DSQL console at [https://console.aws.amazon.com/dsql/](https://console.aws.amazon.com/dsql).

1. Choose **Create cluster**.

1. Configure your cluster name, tags, and multi-region settings as needed.

1. In the **Cluster settings** section, locate the **Resource-based policy** option.

1. Turn on **Add resource-based policy**.

1. Enter your policy document in the JSON editor. For example, to block public internet access:

   ```
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Deny",
         "Principal": {
           "AWS": "*"
         },
         "Resource": "*",
         "Action": [
           "dsql:DbConnect",
           "dsql:DbConnectAdmin"
         ],
         "Condition": {
           "Null": {
             "aws:SourceVpc": "true"
           }
         }
       }
     ]
   }
   ```

1. You can use **Edit statement** or **Add new statement** to build your policy.

1. Complete the remaining cluster configuration and choose **Create cluster**.

## AWS CLI
<a name="rbp-create-cluster-cli"></a>

Use the `--policy` parameter when creating a cluster to attach an inline policy:

```
aws dsql create-cluster --policy '{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": {"AWS": "*"},
        "Resource": "*",
        "Action": ["dsql:DbConnect", "dsql:DbConnectAdmin"],
        "Condition": { 
            "StringNotEquals": { "aws:SourceVpc": "vpc-123456" } 
        }
    }]
}'
```

## AWS SDKs
<a name="rbp-create-cluster-sdk"></a>

------
#### [ Python ]

```
import boto3
import json

client = boto3.client('dsql')

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": {"AWS": "*"},
        "Resource": "*",
        "Action": ["dsql:DbConnect", "dsql:DbConnectAdmin"],
        "Condition": { 
            "StringNotEquals": { "aws:SourceVpc": "vpc-123456" } 
        }
    }]
}

response = client.create_cluster(
    policy=json.dumps(policy)
)

print(f"Cluster created: {response['identifier']}")
```

------
#### [ Java ]

```
import software.amazon.awssdk.services.dsql.DsqlClient;
import software.amazon.awssdk.services.dsql.model.CreateClusterRequest;
import software.amazon.awssdk.services.dsql.model.CreateClusterResponse;

DsqlClient client = DsqlClient.create();

String policy = """
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Deny",
    "Principal": {"AWS": "*"},
    "Resource": "*",
    "Action": ["dsql:DbConnect", "dsql:DbConnectAdmin"],
    "Condition": { 
      "StringNotEquals": { "aws:SourceVpc": "vpc-123456" } 
    }
  }]
}
""";

CreateClusterRequest request = CreateClusterRequest.builder()
    .policy(policy)
    .build();

CreateClusterResponse response = client.createCluster(request);
System.out.println("Cluster created: " + response.identifier());
```

------

# Adding and editing resource-based policies for clusters
<a name="rbp-attach-policy"></a>

## AWS Management Console
<a name="rbp-attach-console"></a>

**To add a resource-based policy to an existing cluster**

1. Sign in to the AWS Management Console and open the Aurora DSQL console at [https://console.aws.amazon.com/dsql/](https://console.aws.amazon.com/dsql).

1. Choose your cluster from the cluster list to open the cluster details page.

1. Choose the **Permissions** tab.

1. In the **Resource-based policy** section, choose **Add policy**.

1. Enter your policy document in the JSON editor. You can use **Edit statement** or **Add new statement** to build your policy.

1. Choose **Add policy**.

**To edit an existing resource-based policy**

1. Sign in to the AWS Management Console and open the Aurora DSQL console at [https://console.aws.amazon.com/dsql/](https://console.aws.amazon.com/dsql).

1. Choose your cluster from the cluster list to open the cluster details page.

1. Choose the **Permissions** tab.

1. In the **Resource-based policy** section, choose **Edit**.

1. Modify the policy document in the JSON editor. You can use **Edit statement** or **Add new statement** to update your policy.

1. Choose **Save changes**.

## AWS CLI
<a name="rbp-attach-cli"></a>

Use the `put-cluster-policy` command to attach a new policy or update an existing policy on a cluster:

```
aws dsql put-cluster-policy --identifier your_cluster_id --policy '{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": {"AWS": "*"},
        "Resource": "*",
        "Action": ["dsql:DbConnect", "dsql:DbConnectAdmin"],
        "Condition": { 
            "Null": { "aws:SourceVpc": "true" } 
        }
    }]
}'
```

## AWS SDKs
<a name="rbp-attach-sdk"></a>

------
#### [ Python ]

```
import boto3
import json

client = boto3.client('dsql')

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": {"AWS": "*"},
        "Resource": "*",
        "Action": ["dsql:DbConnect", "dsql:DbConnectAdmin"],
        "Condition": {
            "Null": {"aws:SourceVpc": "true"}
        }
    }]
}

response = client.put_cluster_policy(
    identifier='your_cluster_id',
    policy=json.dumps(policy)
)
```

------
#### [ Java ]

```
import software.amazon.awssdk.services.dsql.DsqlClient;
import software.amazon.awssdk.services.dsql.model.PutClusterPolicyRequest;

DsqlClient client = DsqlClient.create();

String policy = """
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Deny",
    "Principal": {"AWS": "*"},
    "Resource": "*",
    "Action": ["dsql:DbConnect", "dsql:DbConnectAdmin"],
    "Condition": {
      "Null": {"aws:SourceVpc": "true"}
    }
  }]
}
""";

PutClusterPolicyRequest request = PutClusterPolicyRequest.builder()
    .identifier("your_cluster_id")
    .policy(policy)
    .build();

client.putClusterPolicy(request);
```

------

# Viewing resource-based policies
<a name="rbp-view-policy"></a>

You can view resource-based policies attached to your clusters to understand the current access controls in place.

## AWS Management Console
<a name="rbp-view-console"></a>

**To view resource-based policies**

1. Sign in to the AWS Management Console and open the Aurora DSQL console at [https://console.aws.amazon.com/dsql/](https://console.aws.amazon.com/dsql).

1. Choose your cluster from the cluster list to open the cluster details page.

1. Choose the **Permissions** tab.

1. View the attached policy in the **Resource-based policy** section.

## AWS CLI
<a name="rbp-view-cli"></a>

Use the `get-cluster-policy` command to view a cluster's resource-based policy:

```
aws dsql get-cluster-policy --identifier your_cluster_id
```

## AWS SDKs
<a name="rbp-view-sdk"></a>

------
#### [ Python ]

```
import boto3
import json

client = boto3.client('dsql')

response = client.get_cluster_policy(
    identifier='your_cluster_id'
)

# Parse and pretty-print the policy
policy = json.loads(response['policy'])
print(json.dumps(policy, indent=2))
```

------
#### [ Java ]

```
import software.amazon.awssdk.services.dsql.DsqlClient;
import software.amazon.awssdk.services.dsql.model.GetClusterPolicyRequest;
import software.amazon.awssdk.services.dsql.model.GetClusterPolicyResponse;

DsqlClient client = DsqlClient.create();

GetClusterPolicyRequest request = GetClusterPolicyRequest.builder()
    .identifier("your_cluster_id")
    .build();

GetClusterPolicyResponse response = client.getClusterPolicy(request);
System.out.println("Policy: " + response.policy());
```

------

# Removing resource-based policies
<a name="rbp-remove-policy"></a>

You can remove resource-based policies from clusters to change access controls.

**Important**  
When you remove all resource-based policies from a cluster, access will be controlled entirely by IAM identity-based policies.

## AWS Management Console
<a name="rbp-remove-console"></a>

**To remove a resource-based policy**

1. Sign in to the AWS Management Console and open the Aurora DSQL console at [https://console.aws.amazon.com/dsql/](https://console.aws.amazon.com/dsql).

1. Choose your cluster from the cluster list to open the cluster details page.

1. Choose the **Permissions** tab.

1. In the **Resource-based policy** section, choose **Delete**.

1. In the confirmation dialog, type **confirm** to confirm the deletion.

1. Choose **Delete**.

## AWS CLI
<a name="rbp-remove-cli"></a>

Use the `delete-cluster-policy` command to remove a policy from a cluster:

```
aws dsql delete-cluster-policy --identifier your_cluster_id
```

## AWS SDKs
<a name="rbp-remove-sdk"></a>

------
#### [ Python ]

```
import boto3

client = boto3.client('dsql')

response = client.delete_cluster_policy(
    identifier='your_cluster_id'
)

print("Policy deleted successfully")
```

------
#### [ Java ]

```
import software.amazon.awssdk.services.dsql.DsqlClient;
import software.amazon.awssdk.services.dsql.model.DeleteClusterPolicyRequest;

DsqlClient client = DsqlClient.create();

DeleteClusterPolicyRequest request = DeleteClusterPolicyRequest.builder()
    .identifier("your_cluster_id")
    .build();

client.deleteClusterPolicy(request);
System.out.println("Policy deleted successfully");
```

------

# Common resource-based policy examples
<a name="rbp-examples"></a>

These examples show common patterns for controlling access to your Aurora DSQL clusters. You can combine and modify these patterns to meet your specific access requirements.

## Block public internet access
<a name="rbp-example-block-public"></a>

This policy blocks connections to your Aurora DSQL clusters from the public internet (that is, connections that don't originate from a VPC). The policy doesn't specify which VPC connections must come from, only that they must come from a VPC. To limit access to a specific VPC, use `aws:SourceVpc` with the `StringEquals` condition operator.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Resource": "*",
      "Action": [
        "dsql:DbConnect",
        "dsql:DbConnectAdmin"
      ],
      "Condition": {
        "Null": {
          "aws:SourceVpc": "true"
        }
      }
    }
  ]
}
```

**Note**  
This example uses only `aws:SourceVpc` to check for VPC connections. The `aws:VpcSourceIp` and `aws:SourceVpce` condition keys provide additional granularity but are not required for basic VPC-only access control.

To provide an exception for specific roles, use this policy instead:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAccessFromOutsideVPC",
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Resource": "*",
      "Action": [
        "dsql:DbConnect",
        "dsql:DbConnectAdmin"
      ],
      "Condition": {
        "Null": {
          "aws:SourceVpc": "true"
        },
        "StringNotEquals": {
          "aws:PrincipalArn": [
            "arn:aws:iam::123456789012:role/ExceptionRole",
            "arn:aws:iam::123456789012:role/AnotherExceptionRole"
          ]
        }
      }
    }
  ]
}
```

## Restrict access to AWS Organization
<a name="rbp-example-org-access"></a>

This policy restricts access to principals within an AWS Organization:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "dsql:DbConnect",
        "dsql:DbConnectAdmin"
      ],
      "Resource": "arn:aws:dsql:us-east-1:123456789012:cluster/mydsqlclusterid0123456789a",
      "Condition": {
        "StringNotEquals": {
          "aws:PrincipalOrgID": "o-exampleorgid"
        }
      }
    }
  ]
}
```

## Restrict access to specific Organizational Unit
<a name="rbp-example-ou-access"></a>

This policy restricts access to principals within a specific Organizational Unit (OU) in an AWS Organization, providing more granular control than organization-wide access:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "dsql:DbConnect"
      ],
      "Resource": "arn:aws:dsql:us-east-1:123456789012:cluster/mydsqlclusterid0123456789a",
      "Condition": {
        "StringNotLike": {
          "aws:PrincipalOrgPaths": "o-exampleorgid/r-examplerootid/ou-exampleouid/*"
        }
      }
    }
  ]
}
```

## Multi-Region cluster policies
<a name="rbp-example-multi-region"></a>

For multi-Region clusters, each regional cluster maintains its own resource-based policy, allowing for Region-specific controls. The following example shows different policies in each Region.

*us-east-1 policy:*

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Resource": "*",
      "Action": [
        "dsql:DbConnect"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:SourceVpc": "vpc-east1-id"
        },
        "Null": {
          "aws:SourceVpc": "true"
        }
      }
    }
  ]
}
```

*us-east-2 policy:*

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Resource": "*",
      "Action": [
        "dsql:DbConnect"
      ],
      "Condition": {
        "StringEquals": {
          "aws:SourceVpc": "vpc-east2-id"
        }
      }
    }
  ]
}
```

**Note**  
Condition context keys may vary between AWS Regions (such as VPC IDs).
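For scripted deployments, you might generate the per-Region policy documents programmatically. The following sketch builds a deny-outside-VPC policy document for each Region; the Region-to-VPC mapping and the cluster IDs you would apply them to are placeholders, not real resources.

```python
import json

# Placeholder VPC IDs per Region (assumptions for illustration).
REGIONAL_VPCS = {
    "us-east-1": "vpc-east1-id",
    "us-east-2": "vpc-east2-id",
}

def vpc_restriction_policy(vpc_id):
    """Deny dsql:DbConnect unless the request originates from the given VPC."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Principal": {"AWS": "*"},
            "Resource": "*",
            "Action": ["dsql:DbConnect"],
            "Condition": {
                "StringNotEquals": {"aws:SourceVpc": vpc_id},
                "Null": {"aws:SourceVpc": "true"},
            },
        }],
    }

# One JSON policy document per Region, ready to pass to put_cluster_policy.
policies = {
    region: json.dumps(vpc_restriction_policy(vpc_id))
    for region, vpc_id in REGIONAL_VPCS.items()
}
```

Each document could then be applied with a Region-specific client, for example `boto3.client('dsql', region_name=region).put_cluster_policy(identifier=..., policy=policies[region])`.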

# Blocking public access with resource-based policies in Aurora DSQL
<a name="rbp-block-public-access"></a>

Block Public Access (BPA) is a feature that identifies resource-based policies that grant public access to your Aurora DSQL clusters and prevents you from attaching them. With BPA, you can prevent public access to your Aurora DSQL resources. BPA performs checks when you create or modify a resource-based policy and helps improve your security posture with Aurora DSQL.

BPA uses [automated reasoning](https://aws.amazon.com/what-is/automated-reasoning/) to analyze the access granted by your resource-based policy and alerts you if such permissions are found at the time of administering a resource-based policy. The analysis verifies access across all resource-based policy statements, actions, and the set of condition keys used in your policies.

**Important**  
BPA helps protect your resources by preventing public access from being granted through the resource-based policies that are directly attached to your Aurora DSQL resources, such as clusters. In addition to using BPA, carefully inspect the following policies to confirm that they do not grant public access:  
+ Identity-based policies attached to associated AWS principals (for example, IAM roles)
+ Resource-based policies attached to associated AWS resources (for example, AWS Key Management Service (KMS) keys)

You must ensure that the [principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html) doesn't include a `*` entry, or that one of the specified condition keys restricts access to the resource. If the resource-based policy grants public access to your cluster across AWS accounts, Aurora DSQL blocks you from creating or modifying the policy until you correct the policy so that it is non-public.

You can make a policy non-public by specifying one or more principals inside the `Principal` block. The following resource-based policy example blocks public access by specifying two principals.

```
{
  "Effect": "Allow",
  "Principal": {
    "AWS": [
      "123456789012",
      "111122223333"
    ]
  },
  "Action": "dsql:*",
  "Resource": "arn:aws:dsql:us-east-1:123456789012:cluster/cluster-id"
}
```

Policies that restrict access by specifying certain condition keys are also not considered public. Along with evaluation of the principal specified in the resource-based policy, the following [trusted condition keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html) are used to complete the evaluation of a resource-based policy for non-public access:
+ `aws:PrincipalAccount`
+ `aws:PrincipalArn`
+ `aws:PrincipalOrgID`
+ `aws:PrincipalOrgPaths`
+ `aws:SourceAccount`
+ `aws:SourceArn`
+ `aws:SourceVpc`
+ `aws:SourceVpce`
+ `aws:UserId`
+ `aws:PrincipalServiceName`
+ `aws:PrincipalServiceNamesList`
+ `aws:PrincipalIsAWSService`
+ `aws:Ec2InstanceSourceVpc`
+ `aws:SourceOrgID`
+ `aws:SourceOrgPaths`

Additionally, for a resource-based policy to be non-public, the values for Amazon Resource Name (ARN) and string keys must not contain wildcards or variables. If your resource-based policy uses the `aws:PrincipalIsAWSService` key, make sure that you've set the key value to `true`.

The following policy limits access to the user `Ben` in the specified account. The condition constrains the `Principal` element so that the policy isn't considered public.

```
{
  "Effect": "Allow",
  "Principal": {
    "AWS": "*"
  },
  "Action": "dsql:*",
  "Resource": "arn:aws:dsql:us-east-1:123456789012:cluster/cluster-id",
  "Condition": {
    "StringEquals": {
      "aws:PrincipalArn": "arn:aws:iam::123456789012:user/Ben"
    }
  }
}
```

The following example of a non-public resource-based policy constrains `aws:SourceVpc` using the `StringEquals` operator.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "dsql:*",
      "Resource": "arn:aws:dsql:us-east-1:123456789012:cluster/cluster-id",
      "Condition": {
        "StringEquals": {
          "aws:SourceVpc": [
            "vpc-91237329"
          ]
        }
      }
    }
  ]
}
```
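As a rough illustration of the kind of evaluation described above, the following sketch flags an `Allow` statement as potentially public when its principal is `*` and no trusted condition key with a wildcard-free value constrains it. This is a greatly simplified local check for intuition only; the actual BPA feature uses automated reasoning and evaluates far more than this, and the function name and key subset here are assumptions.

```python
# Subset of the trusted condition keys listed above (illustrative, not exhaustive).
TRUSTED_CONDITION_KEYS = {
    "aws:PrincipalAccount", "aws:PrincipalArn", "aws:PrincipalOrgID",
    "aws:PrincipalOrgPaths", "aws:SourceAccount", "aws:SourceArn",
    "aws:SourceVpc", "aws:SourceVpce", "aws:UserId",
}

def looks_public(statement):
    """Return True if an Allow statement appears to grant public access."""
    if statement.get("Effect") != "Allow":
        return False
    principals = statement.get("Principal", {})
    aws_principals = principals if isinstance(principals, str) else principals.get("AWS", [])
    if isinstance(aws_principals, str):
        aws_principals = [aws_principals]
    if "*" not in aws_principals:
        return False  # explicit principals make the statement non-public
    # A wildcard principal is still non-public if a trusted condition key
    # constrains it with wildcard-free values.
    for values_by_key in statement.get("Condition", {}).values():
        for key, value in values_by_key.items():
            values = value if isinstance(value, list) else [value]
            if key in TRUSTED_CONDITION_KEYS and all(
                "*" not in v and "?" not in v for v in values
            ):
                return False
    return True

# An unconstrained wildcard principal is flagged as public.
open_statement = {"Effect": "Allow", "Principal": {"AWS": "*"},
                  "Action": "dsql:*", "Resource": "*"}

# The same principal constrained by aws:PrincipalArn is not.
constrained = {
    "Effect": "Allow",
    "Principal": {"AWS": "*"},
    "Action": "dsql:*",
    "Resource": "*",
    "Condition": {"StringEquals": {"aws:PrincipalArn": "arn:aws:iam::123456789012:user/Ben"}},
}
```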

# Aurora DSQL API operations and resource-based policies
<a name="rbp-api-operations"></a>

Resource-based policies in Aurora DSQL control access to specific API operations. The following sections list all Aurora DSQL API operations organized by category, with an indication of which ones support resource-based policies.

The *Supports RBP* column indicates whether the API operation is subject to resource-based policy evaluation when a policy is attached to the cluster.

## Tag APIs
<a name="rbp-tag-apis"></a>


| API Operation | Description | Supports RBP | 
| --- | --- | --- | 
| [ListTagsForResource](https://docs.aws.amazon.com/aurora-dsql/latest/APIReference/API_ListTagsForResource.html) | Lists the tags for an Aurora DSQL resource | Yes | 
| [TagResource](https://docs.aws.amazon.com/aurora-dsql/latest/APIReference/API_TagResource.html) | Adds tags to an Aurora DSQL resource | Yes | 
| [UntagResource](https://docs.aws.amazon.com/aurora-dsql/latest/APIReference/API_UntagResource.html) | Removes tags from an Aurora DSQL resource | Yes | 

## Cluster management APIs
<a name="rbp-cluster-management-apis"></a>


| API Operation | Description | Supports RBP | 
| --- | --- | --- | 
| [CreateCluster](https://docs.aws.amazon.com/aurora-dsql/latest/APIReference/API_CreateCluster.html) | Creates a new cluster | No | 
| [DeleteCluster](https://docs.aws.amazon.com/aurora-dsql/latest/APIReference/API_DeleteCluster.html) | Deletes a cluster | Yes | 
| [GetCluster](https://docs.aws.amazon.com/aurora-dsql/latest/APIReference/API_GetCluster.html) | Retrieves information about a cluster | Yes | 
| [GetVpcEndpointServiceName](https://docs.aws.amazon.com/aurora-dsql/latest/APIReference/API_GetVpcEndpointServiceName.html) | Retrieves the VPC endpoint service name for a cluster | Yes | 
| [ListClusters](https://docs.aws.amazon.com/aurora-dsql/latest/APIReference/API_ListClusters.html) | Lists clusters in your account | No | 
| [UpdateCluster](https://docs.aws.amazon.com/aurora-dsql/latest/APIReference/API_UpdateCluster.html) | Updates the configuration of a cluster | Yes | 

## Multi-Region property APIs
<a name="rbp-multi-region-apis"></a>


| API Operation | Description | Supports RBP | 
| --- | --- | --- | 
| [AddPeerCluster](https://docs.aws.amazon.com/aurora-dsql/latest/APIReference/API_AddPeerCluster.html) | Adds a peer cluster to a multi-region configuration | Yes | 
| [PutMultiRegionProperties](https://docs.aws.amazon.com/aurora-dsql/latest/APIReference/API_PutMultiRegionProperties.html) | Sets multi-region properties for a cluster | Yes | 
| [PutWitnessRegion](https://docs.aws.amazon.com/aurora-dsql/latest/APIReference/API_PutWitnessRegion.html) | Sets the witness region for a multi-region cluster | Yes | 

## Resource-based policy APIs
<a name="rbp-policy-apis"></a>


| API Operation | Description | Supports RBP | 
| --- | --- | --- | 
| [DeleteClusterPolicy](https://docs.aws.amazon.com/aurora-dsql/latest/APIReference/API_DeleteClusterPolicy.html) | Deletes the resource-based policy from a cluster | Yes | 
| [GetClusterPolicy](https://docs.aws.amazon.com/aurora-dsql/latest/APIReference/API_GetClusterPolicy.html) | Retrieves the resource-based policy for a cluster | Yes | 
| [PutClusterPolicy](https://docs.aws.amazon.com/aurora-dsql/latest/APIReference/API_PutClusterPolicy.html) | Creates or updates the resource-based policy for a cluster | Yes | 

## AWS Fault Injection Service APIs
<a name="rbp-fis-apis"></a>


| API Operation | Description | Supports RBP | 
| --- | --- | --- | 
| [InjectError](https://docs.aws.amazon.com/aurora-dsql/latest/APIReference/API_InjectError.html) | Injects errors for fault injection testing | No | 

## Backup and restore APIs
<a name="rbp-backup-restore-apis"></a>


| API Operation | Description | Supports RBP | 
| --- | --- | --- | 
| [GetBackupJob](https://docs.aws.amazon.com/aurora-dsql/latest/APIReference/API_GetBackupJob.html) | Retrieves information about a backup job | No | 
| [GetRestoreJob](https://docs.aws.amazon.com/aurora-dsql/latest/APIReference/API_GetRestoreJob.html) | Retrieves information about a restore job | No | 
| [StartBackupJob](https://docs.aws.amazon.com/aurora-dsql/latest/APIReference/API_StartBackupJob.html) | Starts a backup job for a cluster | Yes | 
| [StartRestoreJob](https://docs.aws.amazon.com/aurora-dsql/latest/APIReference/API_StartRestoreJob.html) | Starts a restore job from a backup | No | 
| [StopBackupJob](https://docs.aws.amazon.com/aurora-dsql/latest/APIReference/API_StopBackupJob.html) | Stops a running backup job | No | 
| [StopRestoreJob](https://docs.aws.amazon.com/aurora-dsql/latest/APIReference/API_StopRestoreJob.html) | Stops a running restore job | No | 

# Using service-linked roles in Aurora DSQL
<a name="working-with-service-linked-roles"></a>

Aurora DSQL uses AWS Identity and Access Management (IAM) [service-linked roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html#id_roles_terms-and-concepts). A service-linked role is a unique type of IAM role that is linked directly to Aurora DSQL. Service-linked roles are predefined by Aurora DSQL and include all the permissions that the service requires to call AWS services on behalf of your Aurora DSQL cluster.

Service-linked roles make the setup process easier because you don't have to manually add the necessary permissions to use Aurora DSQL. When you create a cluster, Aurora DSQL automatically creates a service-linked role for you. You can delete the service-linked role only after you delete all of your clusters. This protects your Aurora DSQL resources because you can't inadvertently remove permissions needed for access to the resources.

For information about other services that support service-linked roles, see [AWS services that work with IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html) and look for the services that have **Yes** in the **Service-Linked Role** column. Choose a **Yes** with a link to view the service-linked role documentation for that service. 

Service-linked roles are available in all supported Aurora DSQL Regions.

## Service-linked role permissions for Aurora DSQL
<a name="working-with-service-linked-roles-permissions"></a>

Aurora DSQL uses the service-linked role named `AWSServiceRoleForAuroraDsql` – Allows Amazon Aurora DSQL to create and manage AWS resources on your behalf. This service-linked role is attached to the following managed policy: [AuroraDsqlServiceLinkedRolePolicy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AuroraDsqlServiceLinkedRolePolicy.html).

**Note**  
You must configure permissions to allow an IAM entity (such as a user, group, or role) to create, edit, or delete a service-linked role. You might encounter the following error message: `You don't have the permissions to create an Amazon Aurora DSQL service-linked role`. If you see this message, make sure that you have the following permissions enabled:  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CreateDsqlServiceLinkedRole",
            "Effect": "Allow",
            "Action": "iam:CreateServiceLinkedRole",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "iam:AWSServiceName": "dsql.amazonaws.com"
                }
            }
        }
    ]
}
```
For more information, see [Service-linked role permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create-service-linked-role.html#service-linked-role-permissions) in the *IAM User Guide*.

## Create a service-linked role
<a name="working-with-service-linked-roles-create"></a>

You don't need to manually create a service-linked role for Aurora DSQL. When you create a cluster, Aurora DSQL creates the `AWSServiceRoleForAuroraDsql` service-linked role for you. If the role has been deleted from your account, Aurora DSQL creates it again when you create a new Aurora DSQL cluster.

## Edit a service-linked role
<a name="working-with-service-linked-roles-edit"></a>

Aurora DSQL doesn't allow you to edit the `AWSServiceRoleForAuroraDsql` service-linked role. After you create a service-linked role, you can't change the name of the role because various entities might reference the role. However, you can edit the description of the role using the IAM console, the AWS Command Line Interface (AWS CLI), or the IAM API.

## Delete a service-linked role
<a name="working-with-service-linked-roles-delete"></a>

If you no longer need to use a feature or service that requires a service-linked role, we recommend that you delete that role. That way, you don't have an unused entity that is not actively monitored or maintained.

Before you can delete a service-linked role for an account, you must delete any clusters in the account.

You can use the IAM console, the AWS CLI, or the IAM API to delete a service-linked role. For more information, see [Delete a service-linked role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create-service-linked-role.html#delete-service-linked-role) in the IAM User Guide.

## Supported Regions for Aurora DSQL service-linked roles
<a name="working-with-service-linked-role-regions"></a>

Aurora DSQL supports using service-linked roles in all of the Regions where the service is available. For more information, see [AWS Regions and endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html).

# Using IAM condition keys with Amazon Aurora DSQL
<a name="using-iam-condition-keys"></a>

When you grant permissions in Aurora DSQL, you can specify conditions that determine how a permissions policy takes effect. The following are examples of how you can use condition keys in Aurora DSQL permissions policies.

## Example 1: Grant permission to create a cluster in a specific AWS Region
<a name="using-iam-condition-keys-create-cluster"></a>

The following policy grants permission to create clusters in the US East (N. Virginia) and US East (Ohio) Regions. This policy uses resource ARNs to limit the allowed Regions, so Aurora DSQL can create clusters only if a matching ARN is specified in the `Resource` section of the policy.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": ["dsql:CreateCluster"], 
            "Resource": [
                "arn:aws:dsql:us-east-1:*:cluster/*",
                "arn:aws:dsql:us-east-2:*:cluster/*"
            ],
            "Effect": "Allow"
        }
    ]
}
```

------
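The effect of the `Resource` wildcards can be sketched in Python. This is an illustration only, not the IAM evaluation engine: actual policy evaluation supports just the `*` and `?` wildcards and considers the full request context, but `fnmatch` is a close stand-in for these particular patterns.

```python
# Illustration only: approximate how the wildcard ARNs in the Resource
# element gate the dsql:CreateCluster action by Region.
from fnmatch import fnmatchcase

ALLOWED_ARNS = [
    "arn:aws:dsql:us-east-1:*:cluster/*",
    "arn:aws:dsql:us-east-2:*:cluster/*",
]

def is_allowed(cluster_arn: str) -> bool:
    # The action is allowed only when the request's cluster ARN matches
    # one of the patterns listed in the policy's Resource element.
    return any(fnmatchcase(cluster_arn, pattern) for pattern in ALLOWED_ARNS)

print(is_allowed("arn:aws:dsql:us-east-1:123456789012:cluster/abc123"))  # True
print(is_allowed("arn:aws:dsql:eu-west-1:123456789012:cluster/abc123"))  # False
```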

## Example 2: Grant permission to create a multi-Region cluster in specific AWS Regions
<a name="using-iam-condition-keys-create-mr-cluster"></a>

The following policy grants permission to create multi-Region clusters in the US East (N. Virginia) and US East (Ohio) Regions. This policy uses resource ARNs to limit the allowed Regions, so Aurora DSQL can create multi-Region clusters only if a matching ARN is specified in the `Resource` section of the policy. Note that creating multi-Region clusters also requires the `PutMultiRegionProperties`, `PutWitnessRegion`, and `AddPeerCluster` permissions in each specified Region.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "dsql:CreateCluster",
          "dsql:PutMultiRegionProperties",
          "dsql:PutWitnessRegion",
          "dsql:AddPeerCluster"
        ],
        "Resource": [
           "arn:aws:dsql:us-east-1:123456789012:cluster/*",
           "arn:aws:dsql:us-east-2:123456789012:cluster/*"
        ]
      }
    ]
}
```

------

## Example 3: Grant permission to create a multi-Region cluster with a specific witness Region
<a name="using-iam-condition-keys-create-mr-cluster-witness"></a>

The following policy uses an Aurora DSQL `dsql:WitnessRegion` condition key and lets a user create multi-Region clusters with a witness Region in US West (Oregon). If you don't specify the `dsql:WitnessRegion` condition, you can use any Region as the witness Region. 

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dsql:CreateCluster",
                "dsql:PutMultiRegionProperties",
                "dsql:AddPeerCluster"
            ],
            "Resource": "arn:aws:dsql:*:123456789012:cluster/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "dsql:PutWitnessRegion"
            ],
            "Resource": "arn:aws:dsql:*:123456789012:cluster/*",
            "Condition": {
                "StringEquals": {
                    "dsql:WitnessRegion": [
                        "us-west-2"
                    ]
                }
            }
        }
    ]
}
```

------
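The second statement's `StringEquals` condition can be illustrated with a toy check (a hypothetical helper, not the IAM evaluation engine): `dsql:PutWitnessRegion` succeeds only when the request's `dsql:WitnessRegion` context key equals an allowed value.

```python
# Toy illustration of the StringEquals condition in the second statement.
# IAM compares the dsql:WitnessRegion request context key against this set.
ALLOWED_WITNESS_REGIONS = {"us-west-2"}

def may_put_witness_region(requested_witness_region: str) -> bool:
    # StringEquals is an exact, case-sensitive match against the listed values.
    return requested_witness_region in ALLOWED_WITNESS_REGIONS

print(may_put_witness_region("us-west-2"))  # True
print(may_put_witness_region("eu-west-1"))  # False
```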

# Incident response in Amazon Aurora DSQL
<a name="incident-response"></a>

Security is the highest priority at AWS. As part of the AWS Cloud shared responsibility model, AWS manages data centers, a network, and a software architecture that meet the requirements of the most security-sensitive organizations. AWS is responsible for any incident response with respect to the Amazon Aurora DSQL service itself. As an AWS customer, you also share responsibility for maintaining security in the cloud. This means that you control the security that you choose to implement from the AWS tools and features that you have access to. In addition, you're responsible for incident response on your side of the shared responsibility model.

By establishing a security baseline that meets the objectives for your applications running in the cloud, you're able to detect deviations that you can respond to. To help you understand the impact that incident response and your choices have on your corporate goals, we encourage you to review the following resources:
+ [AWS Security Incident Response Guide](https://docs.aws.amazon.com/whitepapers/latest/aws-security-incident-response-guide/aws-security-incident-response-guide.html)
+ [AWS Best Practices for Security, Identity, and Compliance](https://aws.amazon.com/architecture/security-identity-compliance/)
+ [Security Perspective of the AWS Cloud Adoption Framework (CAF) whitepaper](https://docs.aws.amazon.com/whitepapers/latest/overview-aws-cloud-adoption-framework/security-perspective.html)

[Amazon GuardDuty](https://aws.amazon.com/guardduty/) is a managed threat detection service that continuously monitors for malicious or unauthorized behavior to help you protect your AWS accounts and workloads. It identifies suspicious activity, such as unusual API calls or potentially unauthorized deployments that indicate possible account or resource compromise or reconnaissance by bad actors, potentially before it escalates into an incident. For example, GuardDuty can detect suspicious activity in Amazon Aurora DSQL APIs, such as a user logging in from a new location and creating a new cluster.

# Compliance validation for Amazon Aurora DSQL
<a name="compliance-validation"></a>

To learn whether an AWS service is within the scope of specific compliance programs, see [AWS services in Scope by Compliance Program](https://aws.amazon.com/compliance/services-in-scope/) and choose the compliance program that you are interested in. For general information, see [AWS Compliance Programs](https://aws.amazon.com/compliance/programs/).

You can download third-party audit reports using AWS Artifact. For more information, see [Downloading Reports in AWS Artifact](https://docs.aws.amazon.com/artifact/latest/ug/downloading-documents.html).

Your compliance responsibility when using AWS services is determined by the sensitivity of your data, your company's compliance objectives, and applicable laws and regulations. For more information about your compliance responsibility when using AWS services, see [AWS Security Documentation](https://docs.aws.amazon.com/security/).

# Resilience in Amazon Aurora DSQL
<a name="disaster-recovery-resiliency"></a>

The AWS global infrastructure is built around AWS Regions and Availability Zones (AZs). AWS Regions provide multiple physically separated and isolated Availability Zones, which are connected with low-latency, high-throughput, and highly redundant networking. With Availability Zones, you can design and operate applications and databases that automatically fail over between zones without interruption. Availability Zones are more highly available, fault tolerant, and scalable than traditional single or multiple data center infrastructures. Aurora DSQL is designed so that you can take advantage of AWS Regional infrastructure while providing the highest database availability. By default, single-Region clusters in Aurora DSQL have Multi-AZ availability, providing tolerance to major component failures and infrastructure disruptions that might impact access to a full AZ. Multi-Region clusters provide all of the benefits of Multi-AZ resiliency while also providing strongly consistent database availability, even when an AWS Region is inaccessible to application clients.

For more information about AWS Regions and Availability Zones, see [AWS Global Infrastructure](https://aws.amazon.com/about-aws/global-infrastructure/).

In addition to the AWS global infrastructure, Aurora DSQL offers several features to help support your data resiliency and backup needs.

## Backup and restore
<a name="disaster-recovery-resiliency-backup-and-restore"></a>

Aurora DSQL supports backup and restore with the AWS Backup console. You can perform a full backup and restore for your single-Region and multi-Region clusters. For more information, see [Backup and restore for Amazon Aurora DSQL](backup-aurora-dsql.md).

## Replication
<a name="disaster-recovery-resiliency-replication"></a>

By design, Aurora DSQL commits all write transactions to a distributed transaction log and synchronously replicates all committed log data to user storage replicas in three AZs. Multi-Region clusters provide full cross-Region replication capabilities between read and write Regions.

A designated witness Region supports transaction log-only writes and doesn't consume storage. Witness Regions don't have an endpoint. This means that witness Regions store only encrypted transaction logs, require no administration or configuration, and aren't accessible by users. If the witness Region becomes impaired, there is no impact to cluster availability. Write transactions might experience a small increase in latency until the witness Region recovers.

Aurora DSQL transaction logs and user storage are distributed with all data presented to Aurora DSQL query processors as a single logical volume. Aurora DSQL automatically splits, merges, and replicates data based on database primary key range and access patterns. Aurora DSQL automatically scales read replicas, both up and down, based on read access frequency.

Cluster storage replicas are distributed across a multi-tenant storage fleet. If a component or AZ becomes impaired, Aurora DSQL automatically redirects access to surviving components and asynchronously repairs missing replicas. Once Aurora DSQL fixes the impaired replicas, Aurora DSQL automatically adds them back to the storage quorum and makes them available to your cluster.

## High availability
<a name="disaster-recovery-resiliency-high-availability"></a>

By default, single-Region and multi-Region clusters in Aurora DSQL are active-active, and you don't need to manually provision, configure, or reconfigure any clusters. Aurora DSQL fully automates cluster recovery, which eliminates the need for traditional primary-secondary failover operations. Replication is always synchronous and done in multiple AZs, so there is no risk of data loss due to replication lag or failover to an asynchronous secondary database during failure recovery.

Single-Region clusters provide a Multi-AZ redundant endpoint that automatically enables concurrent access with strong data consistency across three AZs. This means that user storage replicas on any of these three AZs always return the same result to one or more readers and are always available to receive writes. This strong consistency and Multi-AZ resiliency is available across all Regions for Aurora DSQL multi-Region clusters. This means that multi-Region clusters provide two strongly consistent Regional endpoints, so clients can read or write indiscriminately to either Region with zero replication lag on commit. 

Aurora DSQL provides 99.99% availability for single-Region clusters and 99.999% for multi-Region clusters.

## Fault injection testing
<a name="fault-injection-testing"></a>

Amazon Aurora DSQL integrates with AWS Fault Injection Service (AWS FIS), a fully managed service for running controlled fault injection experiments to improve an application's resilience. Using AWS FIS, you can:
+ Create experiment templates that define specific failure scenarios
+ Inject failures (elevated cluster connection error rates) to validate application error handling and recovery mechanisms
+ Test multi-Region application behavior to validate application traffic shift between AWS Regions when one AWS Region is experiencing high connection error rates

 For example, in a multi-Region cluster spanning US East (N. Virginia) and US East (Ohio), you can run an experiment in US East (Ohio) to test failures there while US East (N. Virginia) continues normal operations. This controlled testing helps you identify and resolve potential issues before they affect production workloads. 

See [Action targets](https://docs.aws.amazon.com/fis/latest/userguide/action-sequence.html#action-targets) in the *AWS FIS User Guide* for a complete list of AWS FIS supported actions.

For information about Amazon Aurora DSQL actions available in AWS FIS, see [Aurora DSQL actions reference](https://docs.aws.amazon.com/fis/latest/userguide/fis-actions-reference.html#dsql-actions-reference) in the *AWS FIS User Guide*.

To get started running fault injection experiments, see [Planning your AWS FIS experiments](https://docs.aws.amazon.com/fis/latest/userguide/getting-started-planning.html) in the *AWS FIS User Guide*. 

# Infrastructure Security in Amazon Aurora DSQL
<a name="infrastructure-security"></a>

As a managed service, Amazon Aurora DSQL is protected by the AWS global network security procedures that are described in [Best Practices for Security, Identity, & Compliance](https://aws.amazon.com/architecture/security-identity-compliance).

You use AWS published API calls to access Aurora DSQL through the network. Clients must support Transport Layer Security (TLS) 1.2 or later. Clients must also support cipher suites with perfect forward secrecy (PFS) such as DHE (Ephemeral Diffie-Hellman) or ECDHE (Elliptic Curve Ephemeral Diffie-Hellman). Most modern systems such as Java 7 and later support these modes.

Additionally, requests must be signed by using an access key ID and a secret access key that is associated with an IAM principal. Or you can use the [AWS Security Token Service](https://docs.aws.amazon.com/STS/latest/APIReference/Welcome.html) (AWS STS) to generate temporary security credentials to sign requests.
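Request signing follows the standard AWS Signature Version 4 process, which the AWS SDKs and CLI handle for you. As a stdlib-only sketch, the SigV4 signing-key derivation looks like the following (illustrative only; using `"dsql"` as the signing service name is an assumption here):

```python
import hashlib
import hmac

def _hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def derive_signing_key(secret_key: str, date_stamp: str, region: str, service: str) -> bytes:
    # SigV4 derives a request-scoped key by chaining HMAC-SHA256 over the
    # credential scope: date -> region -> service -> the literal "aws4_request".
    k_date = _hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = _hmac_sha256(k_date, region)
    k_service = _hmac_sha256(k_region, service)
    return _hmac_sha256(k_service, "aws4_request")

# Example-only secret key from the AWS documentation, never a real credential.
key = derive_signing_key("wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
                         "20250101", "us-east-1", "dsql")
print(len(key))  # 32 (a SHA-256 digest)
```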

# Managing and connecting to Amazon Aurora DSQL clusters using AWS PrivateLink
<a name="privatelink-managing-clusters"></a>

With AWS PrivateLink for Amazon Aurora DSQL, you can provision interface Amazon VPC endpoints (interface endpoints) in your Amazon Virtual Private Cloud. These endpoints are directly accessible from applications that are on premises over Amazon VPC and Direct Connect, or in a different AWS Region over Amazon VPC peering. Using AWS PrivateLink and interface endpoints, you can simplify private network connectivity from your applications to Aurora DSQL.

Applications within your Amazon VPC can access Aurora DSQL using Amazon VPC interface endpoints without requiring public IP addresses.

Interface endpoints are represented by one or more elastic network interfaces (ENIs) that are assigned private IP addresses from subnets in your Amazon VPC. Requests to Aurora DSQL over interface endpoints stay on the AWS network. For more information about how to connect your Amazon VPC with your on-premises network, see the [Direct Connect User Guide](https://docs.aws.amazon.com/directconnect/latest/UserGuide/) and the [AWS Site-to-Site VPN](https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html) User Guide.

For general information about interface endpoints, see [Access an AWS service using an interface Amazon VPC endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html) in the [AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink) User Guide.

## Types of Amazon VPC endpoints for Aurora DSQL
<a name="endpoint-types-dsql"></a>

 Aurora DSQL requires two different types of AWS PrivateLink endpoints. 

1. *Management endpoint* — This endpoint is used for administrative operations, such as `get`, `create`, `update`, `delete`, and `list` on Aurora DSQL clusters. See [Managing Aurora DSQL clusters using AWS PrivateLink](#managing-dsql-clusters-using-privatelink).

1. *Connection endpoint* — This endpoint is used for connecting to Aurora DSQL clusters through PostgreSQL clients. See [Connecting to Aurora DSQL clusters using AWS PrivateLink](#privatelink-connecting-clusters).

## Considerations when using AWS PrivateLink for Aurora DSQL
<a name="privatelink-dsql-considerations"></a>

Amazon VPC considerations apply to AWS PrivateLink for Aurora DSQL. For more information, see [Access an AWS service using an interface VPC endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html#vpce-interface-limitations) and [AWS PrivateLink quotas](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-limits-endpoints.html) in the AWS PrivateLink Guide.

## Managing Aurora DSQL clusters using AWS PrivateLink
<a name="managing-dsql-clusters-using-privatelink"></a>

You can use the AWS Command Line Interface or AWS Software Development Kits (SDKs) to manage Aurora DSQL clusters through Aurora DSQL interface endpoints.

### Creating an Amazon VPC endpoint
<a name="create-vpc-endpoint"></a>

To create an Amazon VPC interface endpoint, see [Create an Amazon VPC endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html#create-interface-endpoint-aws) in the AWS PrivateLink Guide. 

```
aws ec2 create-vpc-endpoint \
--region region \
--service-name com.amazonaws.region.dsql \
--vpc-id your-vpc-id \
--subnet-ids your-subnet-id \
--vpc-endpoint-type Interface \
--security-group-ids client-sg-id
```

To use the default Regional DNS name for Aurora DSQL API requests, do not disable private DNS when you create the Aurora DSQL interface endpoint. When private DNS is enabled, requests to the Aurora DSQL service made from within your Amazon VPC automatically resolve to the private IP address of the Amazon VPC endpoint, rather than to the public DNS name.

If private DNS is not enabled, use the `--region` and `--endpoint-url` parameters with AWS CLI commands to manage Aurora DSQL clusters through Aurora DSQL interface endpoints.

### Listing clusters using an endpoint URL
<a name="list-clusters-endpoint-url"></a>

In the following example, replace the AWS Region `us-east-1` and the DNS name of the Amazon VPC endpoint ID `vpce-1a2b3c4d-5e6f.dsql.us-east-1.vpce.amazonaws.com` with your own information.

```
aws dsql --region us-east-1 --endpoint-url https://vpce-1a2b3c4d-5e6f.dsql.us-east-1.vpce.amazonaws.com list-clusters
```

### API Operations
<a name="api-operations"></a>

Refer to the [Aurora DSQL API reference](CHAP_api_reference.md) for documentation on managing resources in Aurora DSQL.

### Managing endpoint policies
<a name="managing-endpoint-policies"></a>

By thoroughly testing and configuring the Amazon VPC endpoint policies, you can help ensure that your Aurora DSQL cluster is secure, compliant, and aligned with your organization's specific access control and governance requirements.

**Example: Full Aurora DSQL access policy**

The following policy grants full access to all Aurora DSQL actions and resources through the specified Amazon VPC endpoint. 

```
aws ec2 modify-vpc-endpoint \
    --vpc-endpoint-id vpce-xxxxxxxxxxxxxxxxx \
    --region region \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": "*",
          "Action": "dsql:*",
          "Resource": "*"
        }
      ]
    }'
```

**Example: Restricted Aurora DSQL Access Policy**

The following policy permits only these Aurora DSQL actions:
+ `CreateCluster`
+ `GetCluster`
+ `ListClusters`

All other Aurora DSQL actions are denied.

------
#### [ JSON ]

****  

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "dsql:CreateCluster",
        "dsql:GetCluster",
        "dsql:ListClusters"
      ],
      "Resource": "*"
    }
  ]
}
```

------

## Connecting to Aurora DSQL clusters using AWS PrivateLink
<a name="privatelink-connecting-clusters"></a>

Once your AWS PrivateLink endpoint is set up and active, you can connect to your Aurora DSQL cluster using a PostgreSQL client. The connection instructions below outline the steps to construct the proper hostname for connecting through the AWS PrivateLink endpoint.

### Setting up an AWS PrivateLink connection endpoint
<a name="setting-up-privatelink-endpoint"></a>

**Step 1: Get the service name for your cluster**

When creating an AWS PrivateLink endpoint for connecting to your cluster, you first need to fetch the cluster-specific service name.

------
#### [ AWS CLI ]

```
aws dsql get-vpc-endpoint-service-name \
--region us-east-1 \
--identifier your-cluster-id
```

Example response

```
{
    "serviceName": "com.amazonaws.us-east-1.dsql-fnh4"
}
```

The service name includes an identifier, such as `dsql-fnh4` in the example. This identifier is also needed when constructing the hostname for connecting to your cluster.

------
#### [ AWS SDK for Python (Boto3) ]

```
import boto3

dsql_client = boto3.client('dsql', region_name='us-east-1')
response = dsql_client.get_vpc_endpoint_service_name(
    identifier='your-cluster-id'
)
service_name = response['serviceName']
print(f"Service Name: {service_name}")
```

------
#### [ AWS SDK for Java 2.x ]

```
import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.dsql.DsqlClient;
import software.amazon.awssdk.services.dsql.model.GetVpcEndpointServiceNameRequest;
import software.amazon.awssdk.services.dsql.model.GetVpcEndpointServiceNameResponse;

String region = "us-east-1";
String clusterId = "your-cluster-id";

DsqlClient dsqlClient = DsqlClient.builder()
    .region(Region.of(region))
    .credentialsProvider(DefaultCredentialsProvider.create())
    .build();

GetVpcEndpointServiceNameResponse response = dsqlClient.getVpcEndpointServiceName(
    GetVpcEndpointServiceNameRequest.builder()
        .identifier(clusterId)
        .build()
);
String serviceName = response.serviceName();
System.out.println("Service Name: " + serviceName);
```

------<a name="create-vpc-endpoint"></a>
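Extracting the identifier from the service name returned in Step 1 can be sketched as follows (a hypothetical helper; the identifier is the final dot-separated component of the service name):

```python
# Hypothetical helper: pull the service identifier (for example "dsql-fnh4")
# out of the service name returned by get-vpc-endpoint-service-name.
def service_identifier(service_name: str) -> str:
    # The identifier is the last dot-separated component of the service name.
    return service_name.rsplit(".", 1)[-1]

print(service_identifier("com.amazonaws.us-east-1.dsql-fnh4"))  # dsql-fnh4
```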

**Step 2: Create the Amazon VPC endpoint**

Using the service name obtained in the previous step, create an Amazon VPC endpoint. 

**Important**  
The connection instructions below only work for connecting to clusters when private DNS is enabled. Do not use the `--no-private-dns-enabled` flag when creating the endpoint, because this prevents the connection instructions below from working properly. If you disable private DNS, you need to create your own wildcard private DNS record that points to the created endpoint.

------
#### [ AWS CLI ]

```
aws ec2 create-vpc-endpoint \
    --region us-east-1 \
    --service-name service-name-for-your-cluster \
    --vpc-id your-vpc-id \
    --subnet-ids subnet-id-1 subnet-id-2  \
    --vpc-endpoint-type Interface \
    --security-group-ids security-group-id
```

**Example response**

```
{
    "VpcEndpoint": {
        "VpcEndpointId": "vpce-0123456789abcdef0",
        "VpcEndpointType": "Interface",
        "VpcId": "vpc-0123456789abcdef0",
        "ServiceName": "com.amazonaws.us-east-1.dsql-fnh4",
        "State": "pending",
        "RouteTableIds": [],
        "SubnetIds": [
            "subnet-0123456789abcdef0",
            "subnet-0123456789abcdef1"
        ],
        "Groups": [
            {
                "GroupId": "sg-0123456789abcdef0",
                "GroupName": "default"
            }
        ],
        "PrivateDnsEnabled": true,
        "RequesterManaged": false,
        "NetworkInterfaceIds": [
            "eni-0123456789abcdef0",
            "eni-0123456789abcdef1"
        ],
        "DnsEntries": [
            {
                "DnsName": "*.dsql-fnh4.us-east-1.vpce.amazonaws.com",
                "HostedZoneId": "Z7HUB22UULQXV"
            }
        ],
        "CreationTimestamp": "2025-01-01T00:00:00.000Z"
    }
}
```

------
#### [ SDK for Python ]

```
import boto3

ec2_client = boto3.client('ec2', region_name='us-east-1')
response = ec2_client.create_vpc_endpoint(
    VpcEndpointType='Interface',
    VpcId='your-vpc-id',
    ServiceName='com.amazonaws.us-east-1.dsql-fnh4',  # Use the service name from previous step
    SubnetIds=[
        'subnet-id-1',
        'subnet-id-2'
    ],
    SecurityGroupIds=[
        'security-group-id'
    ]
)

vpc_endpoint_id = response['VpcEndpoint']['VpcEndpointId']
print(f"VPC Endpoint created with ID: {vpc_endpoint_id}")
```

------
#### [ SDK for Java 2.x ]

```
import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.CreateVpcEndpointRequest;
import software.amazon.awssdk.services.ec2.model.CreateVpcEndpointResponse;
import software.amazon.awssdk.services.ec2.model.VpcEndpointType;

String region = "us-east-1";
String serviceName = "com.amazonaws.us-east-1.dsql-fnh4";  // Use the service name from previous step
String vpcId = "your-vpc-id";

Ec2Client ec2Client = Ec2Client.builder()
    .region(Region.of(region))
    .credentialsProvider(DefaultCredentialsProvider.create())
    .build();

CreateVpcEndpointRequest request = CreateVpcEndpointRequest.builder()
    .vpcId(vpcId)
    .serviceName(serviceName)
    .vpcEndpointType(VpcEndpointType.INTERFACE)
    .subnetIds("subnet-id-1", "subnet-id-2")
    .securityGroupIds("security-group-id")
    .build();

CreateVpcEndpointResponse response = ec2Client.createVpcEndpoint(request);
String vpcEndpointId = response.vpcEndpoint().vpcEndpointId();
System.out.println("VPC Endpoint created with ID: " + vpcEndpointId);
```

------<a name="additional-setup-for-peering"></a>

**Additional setup when connecting via Direct Connect or Amazon VPC peering**

Some additional setup might be needed to connect to Aurora DSQL clusters using an AWS PrivateLink connection endpoint from on-premises devices via Amazon VPC peering or Direct Connect. This setup is not required if your application is running in the same Amazon VPC as your AWS PrivateLink endpoint. The private DNS entries created above will not resolve correctly outside the endpoint's Amazon VPC, but you can create your own private DNS records that resolve to your AWS PrivateLink connection endpoint.

Create a private CNAME DNS record that points to the AWS PrivateLink endpoint's fully qualified domain name. The domain name of the DNS record should be constructed from the following components:

1. The service identifier from the service name. For example: `dsql-fnh4`

1. The AWS Region

Create the CNAME DNS record with a domain name in the following format: `*.service-identifier.region.on.aws` 

The format of the domain name is important for two reasons:

1. The hostname used to connect to Aurora DSQL must match Aurora DSQL's server certificate when using the `verify-full` SSL mode. This ensures the highest level of connection security.

1. Aurora DSQL uses the cluster ID portion of the hostname used to connect to Aurora DSQL to identify the connecting cluster.

If creating private DNS records is not possible, you can still connect to Aurora DSQL. See [Connecting to an Aurora DSQL cluster using an AWS PrivateLink endpoint without private DNS](#connecting-cluster-id-option).
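Building the wildcard record name in the format described above can be sketched as follows (a hypothetical helper, using the example identifier `dsql-fnh4` from earlier in this section):

```python
# Hypothetical helper: build the wildcard CNAME record name in the
# format *.service-identifier.region.on.aws described above.
def cname_record_name(service_identifier: str, region: str) -> str:
    return f"*.{service_identifier}.{region}.on.aws"

print(cname_record_name("dsql-fnh4", "us-east-1"))  # *.dsql-fnh4.us-east-1.on.aws
```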

### Connecting to an Aurora DSQL cluster using an AWS PrivateLink connection endpoint
<a name="connecting-endpoints"></a>

Once your AWS PrivateLink endpoint is set up and active (check that the `State` is `available`), you can connect to your Aurora DSQL cluster using a PostgreSQL client. For instructions on using the AWS SDKs, you can follow the guides in [Programming with Aurora DSQL](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/programming-with.html). You must change the cluster endpoint to match the hostname format.

#### Constructing the hostname
<a name="construct-hostname"></a>

The hostname for connecting through AWS PrivateLink differs from the public DNS hostname. You need to construct it using the following components.

1. Your cluster ID. For example: `your-cluster-id`

1. The service identifier from the service name. For example: `dsql-fnh4` 

1. The AWS Region. For example: `us-east-1` 

Use the following format: `cluster-id.service-identifier.region.on.aws`

**Example: Connection Using PostgreSQL**

```
# Set environment variables
export CLUSTERID=your-cluster-id
export REGION=us-east-1
export SERVICE_IDENTIFIER=dsql-fnh4  # This should match the identifier in your service name

# Construct the hostname
export HOSTNAME="$CLUSTERID.$SERVICE_IDENTIFIER.$REGION.on.aws"

# Generate authentication token
export PGPASSWORD=$(aws dsql --region $REGION generate-db-connect-admin-auth-token --hostname $HOSTNAME)

# Connect using psql
psql -d postgres -h $HOSTNAME -U admin
```

#### Connecting to an Aurora DSQL cluster using an AWS PrivateLink endpoint without private DNS
<a name="connecting-cluster-id-option"></a>

The connection instructions above rely on private DNS records. If your application is running in the same Amazon VPC as your AWS PrivateLink endpoint, the DNS records are created for you. Alternatively, if you are connecting from on-premises devices via Amazon VPC peering or Direct Connect, you can create your own private DNS records. However, DNS record setup is not always possible due to network restrictions imposed by your security teams. If your application must connect using Direct Connect or from a peered Amazon VPC, and DNS record setup is not possible, you can still connect to Aurora DSQL.

Aurora DSQL uses the cluster ID portion of your hostname to identify the connecting cluster. If DNS record setup is not possible, Aurora DSQL supports specifying the target cluster using the `amzn-cluster-id` connection option instead. With this option, you can use your AWS PrivateLink endpoint's fully qualified domain name as your hostname when connecting.

**Important**  
When connecting with your AWS PrivateLink endpoint's fully-qualified domain name or IP address, the `verify-full` SSL mode is not supported. For this reason, setting up private DNS is preferred.

**Example: Specifying the cluster ID connection option using PostgreSQL**

```
# Set environment variables
export CLUSTERID=your-cluster-id
export REGION=us-east-1
export HOSTNAME=vpce-04037adb76c111221-d849uc2p.dsql-fnh4.us-east-1.vpce.amazonaws.com # This should match your endpoint's fully-qualified domain name

# Construct the hostname used to generate the authentication token
export AUTH_HOSTNAME="$CLUSTERID.dsql.$REGION.on.aws"

# Generate authentication token
export PGPASSWORD=$(aws dsql --region $REGION generate-db-connect-admin-auth-token --hostname $AUTH_HOSTNAME)

# Specify the amzn-cluster-id connection option
export PGOPTIONS="-c amzn-cluster-id=$CLUSTERID"

# Connect using psql
psql -d postgres -h $HOSTNAME -U admin
```
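
The same option can be carried in a libpq connection URI instead of the `PGOPTIONS` environment variable. The following sketch assembles such a URI from placeholder values (the endpoint name is the example one used above); note that the space after `-c` must be percent-encoded as `%20`:

```
# Placeholder values -- substitute your own cluster ID and endpoint DNS name
CLUSTERID=your-cluster-id
ENDPOINT=vpce-04037adb76c111221-d849uc2p.dsql-fnh4.us-east-1.vpce.amazonaws.com

# Build a libpq URI that carries the amzn-cluster-id option
# (the space after -c is percent-encoded as %20)
URI="postgresql://admin@${ENDPOINT}:5432/postgres?options=-c%20amzn-cluster-id=${CLUSTERID}"
echo "$URI"

# Connect with:  psql "$URI"  (supply the token through PGPASSWORD as usual)
```

This form is convenient for clients and drivers that accept a single connection URI rather than separate host, user, and options settings.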

### Troubleshooting issues with AWS PrivateLink
<a name="troubleshooting-privatelink"></a>

#### Common Issues and Solutions
<a name="common-issues"></a>

The following table lists common issues and solutions relating to AWS PrivateLink with Aurora DSQL.


| Issue | Possible Cause | Solution | 
| --- | --- | --- | 
|  Connection timeout  |  Security group not properly configured  |  Use Amazon VPC Reachability Analyzer to ensure your networking setup allows traffic on port 5432.  | 
|  DNS resolution failure  |  Private DNS not enabled  |  Verify that the Amazon VPC endpoint was created with private DNS enabled.  | 
|  Authentication failure  |  Incorrect credentials or expired token  |  Generate a new authentication token and verify the user name.  | 
|  Service name not found  |  Incorrect cluster ID  |  Double-check your cluster ID and AWS Region when fetching the service name.  | 
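
Before digging into security groups or route tables, a quick TCP probe can confirm whether the PostgreSQL port is reachable at all. This sketch uses bash's built-in `/dev/tcp` redirection with a five-second timeout (the endpoint name is a hypothetical placeholder):

```
# Hypothetical endpoint name -- substitute your VPC endpoint's DNS name
HOST=vpce-example.dsql-fnh4.us-east-1.vpce.amazonaws.com
PORT=5432

# Try to open a TCP connection; give up after 5 seconds
if timeout 5 bash -c "exec 3<>/dev/tcp/${HOST}/${PORT}" 2>/dev/null; then
  RESULT=reachable
else
  RESULT=unreachable
fi
echo "port ${PORT} is ${RESULT}"
```

A result of `unreachable` usually points at a security group, DNS, or routing issue rather than an authentication problem.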

### Related Resources
<a name="related-resources"></a>

For more information, see the following resources:
+ [Amazon Aurora DSQL User Guide](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-dsql.html)
+ [AWS PrivateLink Documentation](https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html)
+ [Access AWS services through AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/privatelink-access-aws-services.html)

# Configuration and vulnerability analysis in Amazon Aurora DSQL
<a name="configuration-vulnerability"></a>

AWS handles basic security tasks like guest operating system (OS) and database patching, firewall configuration, and disaster recovery. These procedures have been reviewed and certified by the appropriate third parties. For more details, see the following resources:
+ [Shared responsibility model](https://aws.amazon.com/compliance/shared-responsibility-model/)
+ [Amazon Web Services: Overview of security processes (whitepaper)](https://d0.awsstatic.com/whitepapers/Security/AWS_Security_Whitepaper.pdf)

# Cross-service confused deputy prevention
<a name="cross-service-confused-deputy-prevention"></a>

The confused deputy problem is a security issue where an entity that doesn't have permission to perform an action can coerce a more-privileged entity to perform the action. In AWS, cross-service impersonation can result in the confused deputy problem. Cross-service impersonation can occur when one service (the *calling service*) calls another service (the *called service*). The calling service can be manipulated to use its permissions to act on another customer's resources in a way it should not otherwise have permission to access. To prevent this, AWS provides tools that help you protect your data for all services with service principals that have been given access to resources in your account. 

We recommend using the [`aws:SourceArn`](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourcearn) and [`aws:SourceAccount`](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourceaccount) global condition context keys in resource policies to limit the permissions that Amazon Aurora DSQL gives another service to the resource. Use `aws:SourceArn` if you want only one resource to be associated with the cross-service access. Use `aws:SourceAccount` if you want to allow any resource in that account to be associated with the cross-service use.

The most effective way to protect against the confused deputy problem is to use the `aws:SourceArn` global condition context key with the full ARN of the resource. If you don't know the full ARN of the resource, or if you are specifying multiple resources, use the `aws:SourceArn` global condition context key with wildcard characters (`*`) for the unknown portions of the ARN. For example, `arn:aws:dsql:*:123456789012:*`. 

If the `aws:SourceArn` value does not contain the account ID, such as an Amazon S3 bucket ARN, you must use both global condition context keys to limit permissions. 
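
For example, if the calling service passes an Amazon S3 bucket ARN (which carries no account ID), a condition block like the following combines both keys. The bucket name here is a hypothetical placeholder:

```
"Condition": {
  "ArnLike": {
    "aws:SourceArn": "arn:aws:s3:::amzn-s3-demo-bucket"
  },
  "StringEquals": {
    "aws:SourceAccount": "123456789012"
  }
}
```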

The following example shows how you can use the `aws:SourceArn` and `aws:SourceAccount` global condition context keys in Aurora DSQL to prevent the confused deputy problem.

```
{
  "Version": "2012-10-17",
  "Statement": {
    "Sid": "ConfusedDeputyPreventionExamplePolicy",
    "Effect": "Allow",
    "Principal": {
      "Service": "backup.amazonaws.com"
    },
    "Action": "dsql:GetCluster",
    "Resource": [
      "arn:aws:dsql:*:123456789012:cluster/*"
    ],
    "Condition": {
      "ArnLike": {
        "aws:SourceArn": "arn:aws:backup:*:123456789012:*"
      },
      "StringEquals": {
        "aws:SourceAccount": "123456789012"
      }
    }
  }
}
```


# Security best practices for Aurora DSQL
<a name="best-practices-security"></a>

Aurora DSQL provides a number of security features to consider as you develop and implement your own security policies. The following best practices are general guidelines and don’t represent a complete security solution. Because these best practices might not be appropriate or sufficient for your environment, treat them as helpful considerations rather than prescriptions.

**Topics**
+ [

# Detective security best practices for Aurora DSQL
](best-practices-security-detective.md)
+ [

# Preventative security best practices for Aurora DSQL
](best-practices-security-preventative.md)

# Detective security best practices for Aurora DSQL
<a name="best-practices-security-detective"></a>

In addition to the following ways to securely use Aurora DSQL, see [Security](https://docs.aws.amazon.com/wellarchitected/latest/framework/security.html) in the AWS Well-Architected Framework to learn how cloud technologies can improve your security.

**Amazon CloudWatch Alarms**  
Using Amazon CloudWatch alarms, you watch a single metric over a time period that you specify. If the metric exceeds a given threshold, a notification is sent to an Amazon SNS topic or an AWS Auto Scaling policy. CloudWatch alarms don't invoke actions simply because they are in a particular state. Rather, the state must have changed and been maintained for a specified number of periods.
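
As a sketch of that "maintained for a specified number of periods" behavior, the following assembles an `aws cloudwatch put-metric-alarm` command that fires only after three consecutive five-minute breaches. The metric name, namespace, threshold, and SNS topic ARN are hypothetical placeholders, so the command is echoed here for review rather than executed; check Aurora DSQL's monitoring documentation for the actual metric names:

```
# Hypothetical metric, namespace, threshold, and topic ARN -- placeholders only
CMD="aws cloudwatch put-metric-alarm \
  --alarm-name example-dsql-alarm \
  --namespace ExampleNamespace \
  --metric-name ExampleMetric \
  --statistic Sum \
  --period 300 \
  --evaluation-periods 3 \
  --threshold 100 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:example-topic"
echo "$CMD"
```

The `--evaluation-periods 3` and `--period 300` parameters together express the requirement that the breach persist across three five-minute periods before the alarm action fires.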

**Tag your Aurora DSQL resources for identification and automation**  
You can assign metadata to your AWS resources in the form of tags. Each tag is a simple label consisting of a customer-defined key and an optional value that can make it easier to manage, search for, and filter resources.   
Tagging allows for grouped controls to be implemented. Although there are no inherent types of tags, they enable you to categorize resources by purpose, owner, environment, or other criteria. The following are some examples:  
+ Security – Used to determine requirements such as encryption.
+ Confidentiality – An identifier for the specific data-confidentiality level a resource supports.
+ Environment – Used to distinguish between development, test, and production infrastructure.
For more information, see [Best Practices for Tagging AWS Resources](https://docs.aws.amazon.com/whitepapers/latest/tagging-best-practices/tagging-best-practices.html).

# Preventative security best practices for Aurora DSQL
<a name="best-practices-security-preventative"></a>

In addition to the following ways to securely use Aurora DSQL, see [Security](https://docs.aws.amazon.com/wellarchitected/latest/framework/security.html) in the AWS Well-Architected Framework to learn how cloud technologies can improve your security.

**Use IAM roles to authenticate access to Aurora DSQL.**  
Users, applications, and other AWS services that access Aurora DSQL must include valid AWS credentials in AWS API and AWS CLI requests. You shouldn't store AWS credentials directly in application code or on EC2 instances, because those are long-term credentials that aren't automatically rotated, and their compromise can have significant business impact. An IAM role lets you obtain temporary access keys that you can use to access AWS services and resources.  
For more information, see [Authentication and authorization for Aurora DSQL](authentication-authorization.md).

**Use IAM policies for Aurora DSQL base authorization.**  
When you grant permissions, you decide who is getting them, which Aurora DSQL API operations they are getting permissions for, and the specific actions you want to allow on those resources. Implementing least privilege is key in reducing security risk and the impact that can result from errors or malicious intent.  
Attach permissions policies to IAM roles and grant permissions to perform operations on Aurora DSQL resources. Also available are [permissions boundaries for IAM entities](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html), which let you set the maximum permissions that an identity-based policy can grant to an IAM entity.  
Similar to the [root user best practices for your AWS account](https://docs.aws.amazon.com/IAM/latest/UserGuide/root-user-best-practices.html), don't use the `admin` role in Aurora DSQL to perform everyday operations. Instead, we recommend that you create custom database roles to manage and connect to your cluster. For more information, see [Accessing Aurora DSQL](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/accessing.html) and [Authentication and authorization for Aurora DSQL](authentication-authorization.md).
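
As a sketch of least privilege, an identity-based policy that allows connecting to (but not administering) a single cluster might look like the following. The Region, account ID, and cluster ID are placeholders:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowConnectToOneCluster",
      "Effect": "Allow",
      "Action": "dsql:DbConnect",
      "Resource": "arn:aws:dsql:us-east-1:123456789012:cluster/your-cluster-id"
    }
  ]
}
```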

**Use `verify-full` in production environments.**  
This setting verifies that the server certificate is signed by a trusted certificate authority and that the server hostname matches the certificate. 
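
With `psql`, the SSL mode can be supplied as part of the connection string. The following sketch builds such a string from placeholder values; `sslrootcert=system`, which tells libpq to use the operating system's trust store, requires a version 16 or later client:

```
# Placeholder hostname -- substitute your cluster's endpoint
HOSTNAME=your-cluster-id.dsql.us-east-1.on.aws

# verify-full checks both the certificate chain and the hostname
CONNINFO="host=${HOSTNAME} dbname=postgres user=admin sslmode=verify-full sslrootcert=system"
echo "$CONNINFO"

# Connect with:  psql "$CONNINFO"
```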

**Update your PostgreSQL client.**  
Regularly update your PostgreSQL client to the latest version to benefit from security improvements. We recommend using PostgreSQL version 17. 