

# Data protection in Amazon Aurora DSQL
<a name="data-protection"></a>

The [shared responsibility model](https://aws.amazon.com/compliance/shared-responsibility-model/) applies to data protection in Amazon Aurora DSQL. As described in this model, AWS is responsible for protecting the global infrastructure that runs all of the AWS Cloud. You are responsible for maintaining control over your content that is hosted on this infrastructure. You are also responsible for the security configuration and management tasks for the AWS services that you use. For more information about data privacy, see the [Data Privacy FAQ](https://aws.amazon.com/compliance/data-privacy-faq/). For information about data protection in Europe, see the [AWS Shared Responsibility Model and GDPR](https://aws.amazon.com/blogs/security/the-aws-shared-responsibility-model-and-gdpr/) blog post on the *AWS Security Blog*.

For data protection purposes, we recommend that you protect credentials and set up individual users with AWS IAM Identity Center or AWS Identity and Access Management. That way, each user is given only the permissions necessary to fulfill their job duties. We also recommend that you secure your data in the following ways:
+ Use multi-factor authentication (MFA) with each account.
+ Use SSL/TLS to communicate with AWS resources. We require TLS 1.2 and recommend TLS 1.3.
+ Set up API and user activity logging with AWS CloudTrail. For information about using CloudTrail trails to capture AWS activities, see [Working with trails](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-trails.html) in the *AWS CloudTrail User Guide*.
+ Use encryption solutions, along with all default security controls within AWS services.
+ Use advanced managed security services such as Amazon Macie, which assists in discovering and securing sensitive data that is stored in Amazon S3.

We strongly recommend that you never put confidential or sensitive information, such as your customers' email addresses, into tags or free-form text fields such as a **Name** field. This includes when you work with Aurora DSQL or other AWS services using the console, API, AWS CLI, or AWS SDKs. Any data that you enter into tags or free-form text fields used for names may be used for billing or diagnostic logs. If you provide a URL to an external server, we strongly recommend that you do not include credentials in the URL to validate your request to that server.



## Data encryption
<a name="data-encryption"></a>

Amazon Aurora DSQL provides a highly durable storage infrastructure designed for mission-critical and primary data storage. Data is redundantly stored on multiple devices across multiple facilities in an Aurora DSQL Region.

### Encryption in transit
<a name="encryption-transit"></a>

By default, encryption in transit is configured for you. Aurora DSQL uses TLS to encrypt all traffic between your SQL client and Aurora DSQL.

Encryption and signing of data in transit between AWS CLI, SDK, or API clients and Aurora DSQL endpoints:
+ Aurora DSQL provides HTTPS endpoints for encrypting data in transit. 
+ To protect the integrity of API requests to Aurora DSQL, API calls must be signed by the caller. Calls are signed by an X.509 certificate or the customer's AWS secret access key according to the Signature Version 4 Signing Process (Sigv4). For more information, see [Signature Version 4 Signing Process](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) in the *AWS General Reference*.
+  Use the AWS CLI or one of the AWS SDKs to make requests to AWS. These tools automatically sign the requests for you with the access key that you specify when you configure the tools. 
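To show what the signing process involves, the following is a minimal sketch of the SigV4 signing-key derivation using only Python's standard library. The secret key, date, and Region values are illustrative, and `dsql` is assumed as the service name; in practice the AWS CLI and SDKs perform this derivation and the full request signing for you.

```python
import hashlib
import hmac


def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive a SigV4 signing key via the documented HMAC-SHA256 chain."""
    k_date = hmac.new(("AWS4" + secret_key).encode(), date.encode(), hashlib.sha256).digest()
    k_region = hmac.new(k_date, region.encode(), hashlib.sha256).digest()
    k_service = hmac.new(k_region, service.encode(), hashlib.sha256).digest()
    return hmac.new(k_service, b"aws4_request", hashlib.sha256).digest()


# Illustrative values only -- never hard-code real credentials.
key = sigv4_signing_key("wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY", "20250101", "us-east-1", "dsql")
```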

#### FIPS compliance
<a name="fips-compliance"></a>

Aurora DSQL data plane endpoints (cluster endpoints used for database connections) use FIPS 140-2 validated cryptographic modules by default. No separate FIPS endpoints are required for cluster connections.

For control plane operations, Aurora DSQL provides dedicated FIPS endpoints in supported regions. For more information about control plane FIPS endpoints, see [Aurora DSQL endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/dsql.html) in the *AWS General Reference*.

For encryption at rest, see [Encryption at rest in Aurora DSQL](data-encryption.md#encryption-at-rest).

### Inter-network traffic privacy
<a name="inter-network-traffic-privacy"></a>

Connections are protected both between Aurora DSQL and on-premises applications and between Aurora DSQL and other AWS resources within the same AWS Region.

You have two connectivity options between your private network and AWS: 
+ An AWS Site-to-Site VPN connection. For more information, see [What is AWS Site-to-Site VPN?](https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html)
+ An AWS Direct Connect connection. For more information, see [What is AWS Direct Connect?](https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html)

You get access to Aurora DSQL through the network by using AWS-published API operations. Clients must support the following:
+ Transport Layer Security (TLS). We require TLS 1.2 and recommend TLS 1.3.
+ Cipher suites with perfect forward secrecy (PFS) such as DHE (Ephemeral Diffie-Hellman) or ECDHE (Elliptic Curve Ephemeral Diffie-Hellman). Most modern systems such as Java 7 and later support these modes.
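These client requirements can be illustrated with a minimal sketch using Python's standard `ssl` module. This shows only the client-side TLS settings and is not Aurora DSQL-specific code:

```python
import ssl

# Client-side TLS context matching the requirements above:
# certificate verification with hostname checks, and TLS 1.2 as the floor.
# TLS 1.3 is negotiated automatically when both sides support it.
ctx = ssl.create_default_context()            # enables CERT_REQUIRED and hostname checks
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.1 and older

print(ctx.minimum_version.name)  # TLSv1_2
print(ctx.check_hostname)        # True
```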

## Data protection in witness Regions
<a name="witness-regions"></a>

When you create a multi-Region cluster, a witness Region helps enable automated failure recovery by participating in synchronous replication of encrypted transactions. If a peered cluster becomes unavailable, the witness Region remains available to validate and process database writes, ensuring no loss of availability. 

Witness Regions protect and secure your data through these design features:
+ The witness Region receives and stores only encrypted transaction logs. It never hosts, stores, or transmits your encryption keys.
+ The witness Region focuses solely on write transaction logging and quorum functions. It can't read your data by design.
+ The witness Region operates without cluster connection endpoints or query processors. This prevents user database access.

For more information on witness Regions, see [Configuring multi-Region clusters](configuring-multi-region-clusters.md).

# Configuring SSL/TLS certificates for Aurora DSQL connections
<a name="configure-root-certificates"></a><a name="ssl-certificate-overview"></a>

Aurora DSQL requires all connections to use Transport Layer Security (TLS) encryption. To establish secure connections, your client system must trust the Amazon Root Certificate Authority (Amazon Root CA 1). This certificate is pre-installed on many operating systems. This section provides instructions for verifying the pre-installed Amazon Root CA 1 certificate on various operating systems, and guides you through the process of manually installing the certificate if it is not already present. 

We recommend using PostgreSQL version 17.

**Important**  
For production environments, we recommend using `verify-full` SSL mode to ensure the highest level of connection security. This mode verifies that the server certificate is signed by a trusted certificate authority and that the server hostname matches the certificate.
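As an illustration, the recommended `verify-full` mode can also be set directly in a libpq-style connection string rather than through the environment variables shown later in this section; the endpoint below is a placeholder:

```
host=your-cluster-endpoint dbname=postgres user=admin sslmode=verify-full sslrootcert=system
```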

## Verifying pre-installed certificates
<a name="verify-installed-certificates"></a>

In most operating systems, **Amazon Root CA 1** is already pre-installed. To validate this, you can follow the steps below.

### Linux (RedHat/CentOS/Fedora)
<a name="verify-linux"></a>

Run the following command in your terminal:

```
trust list | grep "Amazon Root CA 1"
```

If the certificate is installed, you see the following output:

```
label: Amazon Root CA 1
```

### macOS
<a name="verify-macos"></a>

1. Open Spotlight Search (**Command** \+ **Space**)

1. Search for **Keychain Access**

1. Select **System Roots** under **System Keychains**

1. Look for **Amazon Root CA 1** in the certificate list

### Windows
<a name="verify-windows"></a>

**Note**  
Due to a known issue with the psql Windows client, using system root certificates (`sslrootcert=system`) may return the following error: `SSL error: unregistered scheme`. As an alternative, follow the steps in [Connecting from Windows](#connect-windows) to connect to your cluster using SSL.

If **Amazon Root CA 1** is not installed in your operating system, follow the steps below. 

## Installing certificates
<a name="install-certificates"></a>

 If the `Amazon Root CA 1` certificate is not pre-installed on your operating system, you will need to manually install it in order to establish secure connections to your Aurora DSQL cluster. 

### Linux certificate installation
<a name="install-linux"></a>

Follow these steps to install the Amazon Root CA certificate on Linux systems.

1. Download the Root Certificate:

   ```
   wget https://www.amazontrust.com/repository/AmazonRootCA1.pem
   ```

1. Copy the certificate to the trust store:

   ```
   sudo cp ./AmazonRootCA1.pem /etc/pki/ca-trust/source/anchors/
   ```

1. Update the CA trust store:

   ```
   sudo update-ca-trust
   ```

1. Verify the installation:

   ```
   trust list | grep "Amazon Root CA 1"
   ```

### macOS certificate installation
<a name="install-macos"></a>

These certificate installation steps are optional. The [Linux certificate installation](#install-linux) steps also work on macOS.

1. Download the Root Certificate:

   ```
   wget https://www.amazontrust.com/repository/AmazonRootCA1.pem
   ```

1. Add the certificate to the System keychain:

   ```
   sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain AmazonRootCA1.pem
   ```

1. Verify the installation:

   ```
   security find-certificate -a -c "Amazon Root CA 1" -p /Library/Keychains/System.keychain
   ```

## Connecting with SSL/TLS verification
<a name="connect-using-certificates"></a>

Before configuring SSL/TLS certificates for secure connections to your Aurora DSQL cluster, ensure that you have the following prerequisites:
+ PostgreSQL version 17 installed
+ AWS CLI configured with appropriate credentials
+ Aurora DSQL cluster endpoint information

### Connecting from Linux
<a name="connect-linux"></a>

1. Generate and set the authentication token:

   ```
   export PGPASSWORD=$(aws dsql generate-db-connect-admin-auth-token --region=your-cluster-region --hostname your-cluster-endpoint)
   ```

1. Connect using system certificates (if pre-installed):

   ```
   PGSSLROOTCERT=system \
   PGSSLMODE=verify-full \
   psql --dbname postgres \
   --username admin \
   --host your-cluster-endpoint
   ```

1. Or, connect using a downloaded certificate:

   ```
   PGSSLROOTCERT=/full/path/to/root.pem \
   PGSSLMODE=verify-full \
   psql --dbname postgres \
   --username admin \
   --host your-cluster-endpoint
   ```

**Note**  
For more information on `PGSSLMODE` settings, see [sslmode](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNECT-SSLMODE) in the PostgreSQL 17 [Database Connection Control Functions](https://www.postgresql.org/docs/current/libpq-connect.html) documentation.

### Connecting from macOS
<a name="connect-macos"></a>

1. Generate and set the authentication token:

   ```
   export PGPASSWORD=$(aws dsql generate-db-connect-admin-auth-token --region=your-cluster-region --hostname your-cluster-endpoint)
   ```

1. Connect using system certificates (if pre-installed):

   ```
   PGSSLROOTCERT=system \
   PGSSLMODE=verify-full \
   psql --dbname postgres \
   --username admin \
   --host your-cluster-endpoint
   ```

1. Or, if the certificate is not pre-installed, download the root certificate and save it as `root.pem`.

1. Connect using the downloaded certificate:

   ```
   PGSSLROOTCERT=/full/path/to/root.pem \
   PGSSLMODE=verify-full \
   psql --dbname postgres \
   --username admin \
   --host your-cluster-endpoint
   ```

### Connecting from Windows
<a name="connect-windows"></a>

#### Using Command Prompt
<a name="windows-command-prompt"></a>

1. Generate the authentication token:

   ```
   aws dsql generate-db-connect-admin-auth-token ^
   --region=your-cluster-region ^
   --expires-in=3600 ^
   --hostname=your-cluster-endpoint
   ```

1. Set the password environment variable:

   ```
   set "PGPASSWORD=token-from-above"
   ```

1. Set SSL configuration:

   ```
   set PGSSLROOTCERT=C:\full\path\to\root.pem
   set PGSSLMODE=verify-full
   ```

1. Connect to the database:

   ```
   "C:\Program Files\PostgreSQL\17\bin\psql.exe" --dbname postgres ^
   --username admin ^
   --host your-cluster-endpoint
   ```

#### Using PowerShell
<a name="windows-powershell"></a>

1. Generate and set the authentication token:

   ```
   $env:PGPASSWORD = (aws dsql generate-db-connect-admin-auth-token --region=your-cluster-region --expires-in=3600 --hostname=your-cluster-endpoint)
   ```

1. Set SSL configuration:

   ```
   $env:PGSSLROOTCERT='C:\full\path\to\root.pem'
   $env:PGSSLMODE='verify-full'
   ```

1. Connect to the database:

   ```
   "C:\Program Files\PostgreSQL\17\bin\psql.exe" --dbname postgres `
   --username admin `
   --host your-cluster-endpoint
   ```

## Additional resources
<a name="additional-resources"></a>
+  [PostgreSQL SSL documentation](https://www.postgresql.org/docs/current/libpq-ssl.html) 
+  [Amazon Trust Services](https://www.amazontrust.com/repository/) 