

# Monitoring events, logs, and streams in an Amazon Aurora DB cluster

When you monitor your Amazon Aurora databases and your other AWS solutions, your goal is to maintain the following:
+ Reliability
+ Availability
+ Performance
+ Security

[Monitoring metrics in an Amazon Aurora cluster](MonitoringAurora.md) explains how to monitor your cluster using metrics. A complete solution must also monitor database events, log files, and activity streams. AWS provides you with the following monitoring tools:
+ *Amazon EventBridge* is a serverless event bus service that makes it easy to connect your applications with data from a variety of sources. EventBridge delivers a stream of real-time data from your own applications, Software-as-a-Service (SaaS) applications, and AWS services. EventBridge routes that data to targets such as AWS Lambda. This way, you can monitor events that happen in services and build event-driven architectures. For more information, see the [Amazon EventBridge User Guide](https://docs.aws.amazon.com/eventbridge/latest/userguide/).
+ *Amazon CloudWatch Logs* provides a way to monitor, store, and access your log files from Amazon Aurora instances, AWS CloudTrail, and other sources. Amazon CloudWatch Logs can monitor information in the log files and notify you when certain thresholds are met. You can also archive your log data in highly durable storage. For more information, see the [Amazon CloudWatch Logs User Guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/).
+ *AWS CloudTrail* captures API calls and related events made by or on behalf of your AWS account. CloudTrail delivers the log files to an Amazon S3 bucket that you specify. You can identify which users and accounts called AWS, the source IP address from which the calls were made, and when the calls occurred. For more information, see the [AWS CloudTrail User Guide](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/).
+ *Database Activity Streams* is an Amazon Aurora feature that provides a near real-time stream of the activity in your DB cluster. Amazon Aurora pushes activities to an Amazon Kinesis data stream. The Kinesis stream is created automatically. From Kinesis, you can configure AWS services such as Amazon Data Firehose and AWS Lambda to consume the stream and store the data.

**Topics**
+ [Viewing logs, events, and streams in the Amazon RDS console](logs-events-streams-console.md)
+ [Monitoring Amazon Aurora events](working-with-events.md)
+ [Monitoring Amazon Aurora log files](USER_LogAccess.md)
+ [Monitoring Amazon Aurora API calls in AWS CloudTrail](logging-using-cloudtrail.md)
+ [Monitoring Amazon Aurora with Database Activity Streams](DBActivityStreams.md)
+ [Monitoring threats with Amazon GuardDuty RDS Protection for Amazon Aurora](guard-duty-rds-protection.md)

# Viewing logs, events, and streams in the Amazon RDS console


Amazon RDS integrates with AWS services to show information about logs, events, and database activity streams in the RDS console.

The **Logs & events** tab for your Aurora DB cluster shows the following information:
+ **Auto scaling policies and activities** – Shows policies and activities relating to the Aurora Auto Scaling feature. This information only appears in the **Logs & events** tab at the cluster level. 
+ **Amazon CloudWatch alarms** – Shows any metric alarms that you have configured for the DB instance in your Aurora cluster. If you haven't configured alarms, you can create them in the RDS console. 
+ **Recent events** – Shows a summary of events (environment changes) for your Aurora DB instance or cluster. For more information, see [Viewing Amazon RDS events](USER_ListEvents.md).
+ **Logs** – Shows database log files generated by a DB instance in your Aurora cluster. For more information, see [Monitoring Amazon Aurora log files](USER_LogAccess.md).

The **Configuration** tab displays information about database activity streams.

**To view logs, events, and streams for your Aurora DB cluster in the RDS console**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the name of the Aurora DB cluster that you want to monitor.

   The database page appears. The following example shows an Amazon Aurora PostgreSQL DB cluster named `apga`.  
![\[Database page with monitoring tab shown\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/cluster-with-monitoring-tab.png)

1. Scroll down and choose **Configuration**.

   The following example shows the status of the database activity streams for your cluster.  
![\[Enhanced Monitoring\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/cluster-das.png)

1. Choose **Logs & events**.

   The Logs & events section appears.  
![\[Database page with Logs & events tab shown\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/cluster-logs-and-events-subpage.png)

1. Choose a DB instance in your Aurora cluster, and then choose **Logs & events** for the instance.

   The following example shows that the contents are different between the DB instance page and the DB cluster page. The DB instance page shows logs and alarms.  
![\[Logs & events page\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/cluster-instance-logs-and-events-subpage.png)

# Monitoring Amazon Aurora events

An *event* indicates a change in an environment. This can be an AWS environment, a SaaS partner service or application, or a custom application or service. For descriptions of the Aurora events, see [Amazon RDS event categories and event messages for Aurora](USER_Events.Messages.md).

**Topics**
+ [Overview of events for Aurora](#rds-cloudwatch-events.sample)
+ [Viewing Amazon RDS events](USER_ListEvents.md)
+ [Working with Amazon RDS event notification](USER_Events.md)
+ [Creating a rule that triggers on an Amazon Aurora event](rds-cloud-watch-events.md)
+ [Amazon RDS event categories and event messages for Aurora](USER_Events.Messages.md)

## Overview of events for Aurora


An *RDS event* indicates a change in the Aurora environment. For example, Amazon Aurora generates an event when a DB cluster is patched. Amazon Aurora delivers events to EventBridge in near-real time.

**Note**  
Amazon RDS emits events on a best effort basis. We recommend that you avoid writing programs that depend on the order or existence of notification events, because they might be out of sequence or missing.
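Because events can arrive out of sequence, a consumer can sort received events by their `Date` field before processing them. A minimal sketch, using the field names that appear in the `describe-events` output later in this section (the sample events are abbreviated):

```python
from datetime import datetime

def sort_events(events):
    """Return events ordered by their 'Date' field (ISO 8601 strings)."""
    return sorted(
        events,
        key=lambda e: datetime.fromisoformat(e["Date"].replace("Z", "+00:00")),
    )

# Two sample events, deliberately out of order.
events = [
    {"Date": "2022-03-13T23:15:13.049Z", "Message": "Finished DB Instance backup"},
    {"Date": "2022-03-13T23:09:23.983Z", "Message": "Backing up DB instance"},
]
ordered = sort_events(events)
print(ordered[0]["Message"])  # the earlier "Backing up DB instance" event first
```

Sorting helps with ordering but not with missing events, so avoid logic that requires every notification to arrive.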

Amazon RDS records events that relate to the following resources:
+ DB clusters

  For a list of cluster events, see [DB cluster events](USER_Events.Messages.md#USER_Events.Messages.cluster).
+ DB instances

  For a list of DB instance events, see [DB instance events](USER_Events.Messages.md#USER_Events.Messages.instance).
+ DB parameter groups

  For a list of DB parameter group events, see [DB parameter group events](USER_Events.Messages.md#USER_Events.Messages.parameter-group).
+ DB security groups

  For a list of DB security group events, see [DB security group events](USER_Events.Messages.md#USER_Events.Messages.security-group).
+ DB cluster snapshots

  For a list of DB cluster snapshot events, see [DB cluster snapshot events](USER_Events.Messages.md#USER_Events.Messages.cluster-snapshot).
+ RDS Proxy events

  For a list of RDS Proxy events, see [RDS Proxy events](USER_Events.Messages.md#USER_Events.Messages.rds-proxy).
+ Blue/green deployment events

  For a list of blue/green deployment events, see [Blue/green deployment events](USER_Events.Messages.md#USER_Events.Messages.BlueGreenDeployments).

This information includes the following: 
+ The date and time of the event
+ The source name and source type of the event
+ A message associated with the event
+ The tags associated with the resource when the notification was sent (these might not reflect the tags at the time the event occurred)

# Viewing Amazon RDS events


You can retrieve the following event information for your Amazon Aurora resources:
+ Resource name
+ Resource type
+ Time of the event
+ Message summary of the event

You can access events in the following parts of the AWS Management Console:
+ The **Events** tab, which shows events from the past 24 hours.
+ The **Recent events** table in the **Logs & events** section in the **Databases** tab, which can show events for up to the past 2 weeks.

You can also retrieve events by using the [describe-events](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-events.html) AWS CLI command, or the [DescribeEvents](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeEvents.html) RDS API operation. If you use the AWS CLI or the RDS API to view events, you can retrieve events for up to the past 14 days. 

**Note**  
If you need to store events for longer periods of time, you can send Amazon RDS events to EventBridge. For more information, see [Creating a rule that triggers on an Amazon Aurora event](rds-cloud-watch-events.md).

For descriptions of the Amazon Aurora events, see [Amazon RDS event categories and event messages for Aurora](USER_Events.Messages.md).

To access detailed information about events using AWS CloudTrail, including request parameters, see [CloudTrail events](logging-using-cloudtrail.md#service-name-info-in-cloudtrail.events).

## Console


**To view all Amazon RDS events for the past 24 hours**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Events**. 

   The available events appear in a list.

1. (Optional) Enter a search term to filter your results. 

   The following example shows a list of events filtered by the characters **apg**.  
![\[List DB events\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/ListEventsAPG.png)

## AWS CLI


To view all events generated in the last hour, call [describe-events](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-events.html) with no parameters.

```
aws rds describe-events
```

The following sample output shows that a DB instance in a cluster has started recovery.

```
{
    "Events": [
        {
            "EventCategories": [
                "recovery"
            ], 
            "SourceType": "db-instance", 
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:db:mycluster-instance-1", 
            "Date": "2022-04-20T15:02:38.416Z", 
            "Message": "Recovery of the DB instance has started. Recovery time will vary with the amount of data to be recovered.", 
            "SourceIdentifier": "mycluster-instance-1"
        }, ...
```
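The `Events` list in this output can also be post-processed locally, for example to keep only events in a given category. A minimal sketch in Python, following the structure of the sample output above (the sample messages are abbreviated):

```python
import json

# Parse describe-events output and keep only "recovery" events.
output = json.loads("""
{
  "Events": [
    {"EventCategories": ["recovery"], "SourceIdentifier": "mycluster-instance-1",
     "Message": "Recovery of the DB instance has started."},
    {"EventCategories": ["backup"], "SourceIdentifier": "mycluster-instance-1",
     "Message": "Backing up DB instance"}
  ]
}
""")

recovery_events = [
    e for e in output["Events"] if "recovery" in e["EventCategories"]
]
print(len(recovery_events))  # 1
```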

To view all Amazon RDS events for the past 10080 minutes (7 days), call the [describe-events](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-events.html) AWS CLI command and set the `--duration` parameter to `10080`.

```
aws rds describe-events --duration 10080
```

The following example shows the events in the specified time range for DB instance *test-instance*.

```
aws rds describe-events \
    --source-identifier test-instance \
    --source-type db-instance \
    --start-time 2022-03-13T22:00Z \
    --end-time 2022-03-13T23:59Z
```

The following sample output shows the status of a backup.

```
{
    "Events": [
        {
            "SourceType": "db-instance",
            "SourceIdentifier": "test-instance",
            "EventCategories": [
                "backup"
            ],
            "Message": "Backing up DB instance",
            "Date": "2022-03-13T23:09:23.983Z",
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:db:test-instance"
        },
        {
            "SourceType": "db-instance",
            "SourceIdentifier": "test-instance",
            "EventCategories": [
                "backup"
            ],
            "Message": "Finished DB Instance backup",
            "Date": "2022-03-13T23:15:13.049Z",
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:db:test-instance"
        }
    ]
}
```

## API


You can view all Amazon RDS instance events for the past 14 days by calling the [DescribeEvents](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeEvents.html) RDS API operation and setting the `Duration` parameter to `20160`.
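The `Duration` parameter is expressed in minutes, so 14 days corresponds to `20160`:

```python
# Duration is in minutes: 14 days * 24 hours/day * 60 minutes/hour = 20160.
DAYS = 14
duration_minutes = DAYS * 24 * 60
print(duration_minutes)  # 20160
```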

# Working with Amazon RDS event notification


Amazon RDS uses the Amazon Simple Notification Service (Amazon SNS) to provide notification when an Amazon RDS event occurs. These notifications can be in any notification form supported by Amazon SNS for an AWS Region, such as an email, a text message, or a call to an HTTP endpoint. 

**Topics**
+ [Overview of Amazon RDS event notification](USER_Events.overview.md)
+ [Granting permissions to publish notifications to an Amazon SNS topic](USER_Events.GrantingPermissions.md)
+ [Subscribing to Amazon RDS event notification](USER_Events.Subscribing.md)
+ [Amazon RDS event notification tags and attributes](USER_Events.TagsAttributesForFiltering.md)
+ [Listing Amazon RDS event notification subscriptions](USER_Events.ListSubscription.md)
+ [Modifying an Amazon RDS event notification subscription](USER_Events.Modifying.md)
+ [Adding a source identifier to an Amazon RDS event notification subscription](USER_Events.AddingSource.md)
+ [Removing a source identifier from an Amazon RDS event notification subscription](USER_Events.RemovingSource.md)
+ [Listing the Amazon RDS event notification categories](USER_Events.ListingCategories.md)
+ [Deleting an Amazon RDS event notification subscription](USER_Events.Deleting.md)

# Overview of Amazon RDS event notification


Amazon RDS groups events into categories that you can subscribe to so that you can be notified when an event in that category occurs.

**Topics**
+ [RDS resources eligible for event subscription](#USER_Events.overview.resources)
+ [Basic process for subscribing to Amazon RDS event notifications](#USER_Events.overview.process)
+ [Delivery of RDS event notifications](#USER_Events.overview.subscriptions)
+ [Billing for Amazon RDS event notifications](#USER_Events.overview.billing)
+ [Examples of Aurora events using Amazon EventBridge](#events-examples)

## RDS resources eligible for event subscription


For Amazon Aurora, events occur at both the DB cluster and the DB instance level. You can subscribe to an event category for the following resources:
+ DB instance
+ DB cluster
+ DB cluster snapshot
+ DB parameter group
+ DB security group
+ RDS Proxy
+ Custom engine version

For example, if you subscribe to the backup category for a given DB instance, you're notified whenever a backup-related event occurs that affects the DB instance. If you subscribe to a configuration change category for a DB instance, you're notified when the DB instance is changed. You also receive notification when an event notification subscription changes.

You might want to create several different subscriptions. For example, you might create one subscription that receives all event notifications for all DB instances and another subscription that includes only critical events for a subset of the DB instances. For the second subscription, specify one or more DB instances in the filter.

## Basic process for subscribing to Amazon RDS event notifications


The process for subscribing to Amazon RDS event notification is as follows:

1. You create an Amazon RDS event notification subscription by using the Amazon RDS console, AWS CLI, or API.

   Amazon RDS uses the ARN of an Amazon SNS topic to identify each subscription. When you create the subscription in the Amazon RDS console, the console creates the ARN for you. When you use the AWS CLI or API, you create the ARN by using the Amazon SNS console, the AWS CLI, or the Amazon SNS API.

1. Amazon RDS sends an approval email or SMS message to the addresses you submitted with your subscription.

1. You confirm your subscription by choosing the link in the notification you received.

1. The Amazon RDS console updates the **My Event Subscriptions** section with the status of your subscription.

1. Amazon RDS begins sending the notifications to the addresses that you provided when you created the subscription.

To learn about identity and access management when using Amazon SNS, see [Identity and access management in Amazon SNS](https://docs.aws.amazon.com/sns/latest/dg/sns-authentication-and-access-control.html) in the *Amazon Simple Notification Service Developer Guide*.

You can use AWS Lambda to process event notifications from a DB instance. For more information, see [Using AWS Lambda with Amazon RDS](https://docs.aws.amazon.com/lambda/latest/dg/services-rds.html) in the *AWS Lambda Developer Guide*.

## Delivery of RDS event notifications


Amazon RDS sends notifications to the addresses that you provide when you create the subscription. The notification can include message attributes, which provide structured metadata about the message. For more information about message attributes, see [Amazon RDS event categories and event messages for Aurora](USER_Events.Messages.md).

Event notifications might take up to five minutes to be delivered.

**Important**  
Amazon RDS doesn't guarantee the order of events sent in an event stream. The event order is subject to change.

When Amazon SNS sends a notification to a subscribed HTTP or HTTPS endpoint, the POST message sent to the endpoint has a message body that contains a JSON document. For more information, see [Amazon SNS message and JSON formats](https://docs.aws.amazon.com/sns/latest/dg/sns-message-and-json-formats.html) in the *Amazon Simple Notification Service Developer Guide*.
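The `Message` field of that JSON envelope carries the notification text. A minimal sketch of unwrapping an envelope at an endpoint (the handler name is illustrative, and the example envelope is abbreviated; real deliveries also carry signature and subscription fields):

```python
import json

def handle_sns_post(body: str) -> str:
    """Unwrap an SNS 'Notification' envelope and return its Message text."""
    envelope = json.loads(body)
    if envelope.get("Type") != "Notification":
        raise ValueError("not a notification delivery")
    return envelope["Message"]

# Abbreviated example of an SNS notification envelope.
body = json.dumps({
    "Type": "Notification",
    "MessageId": "11111111-2222-3333-4444-555555555555",
    "TopicArn": "arn:aws:sns:us-east-1:123456789012:myawsuser-RDS",
    "Message": "Backing up DB instance",
})
print(handle_sns_post(body))  # Backing up DB instance
```

Production endpoints should also verify the SNS message signature before trusting the payload.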

You can configure SNS to notify you with text messages. For more information, see [ Mobile text messaging (SMS)](https://docs.aws.amazon.com/sns/latest/dg/sns-mobile-phone-number-as-subscriber.html) in the *Amazon Simple Notification Service Developer Guide*.

To turn off notifications without deleting a subscription, choose **No** for **Enabled** in the Amazon RDS console. Or you can set the `Enabled` parameter to `false` using the AWS CLI or Amazon RDS API.

## Billing for Amazon RDS event notifications


Billing for Amazon RDS event notification is through Amazon SNS. Amazon SNS fees apply when using event notification. For more information about Amazon SNS billing, see [ Amazon Simple Notification Service pricing](http://aws.amazon.com/sns/#pricing).

## Examples of Aurora events using Amazon EventBridge


The following examples illustrate different types of Aurora events in JSON format. For a tutorial that shows you how to capture and view events in JSON format, see [Tutorial: Log DB instance state changes using Amazon EventBridge](rds-cloud-watch-events.md#log-rds-instance-state).

**Topics**
+ [Example of a DB cluster event](#rds-cloudwatch-events.db-clusters)
+ [Example of a DB parameter group event](#rds-cloudwatch-events.db-parameter-groups)
+ [Example of a DB cluster snapshot event](#rds-cloudwatch-events.db-cluster-snapshots)

### Example of a DB cluster event


The following is an example of a DB cluster event in JSON format. The event shows that the cluster named `my-db-cluster` was patched. The event ID is `RDS-EVENT-0173`.

```
{
  "version": "0",
  "id": "844e2571-85d4-695f-b930-0153b71dcb42",
  "detail-type": "RDS DB Cluster Event",
  "source": "aws.rds",
  "account": "123456789012",
  "time": "2018-10-06T12:26:13Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:rds:us-east-1:123456789012:cluster:my-db-cluster"
  ],
  "detail": {
    "EventCategories": [
      "notification"
    ],
    "SourceType": "CLUSTER",
    "SourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:my-db-cluster",
    "Date": "2018-10-06T12:26:13.882Z",
    "Message": "Database cluster has been patched",
    "SourceIdentifier": "my-db-cluster",
    "EventID": "RDS-EVENT-0173"
  }
}
```
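An EventBridge target such as an AWS Lambda function receives this JSON document as its event argument. A minimal sketch of extracting the fields of interest from the `detail` object (the function name is illustrative):

```python
def summarize_rds_event(event: dict) -> str:
    """Build a one-line summary from an RDS event delivered by EventBridge."""
    detail = event["detail"]
    return f'{detail["EventID"]} on {detail["SourceIdentifier"]}: {detail["Message"]}'

# Abbreviated version of the DB cluster event shown above.
event = {
    "source": "aws.rds",
    "detail-type": "RDS DB Cluster Event",
    "detail": {
        "SourceIdentifier": "my-db-cluster",
        "Message": "Database cluster has been patched",
        "EventID": "RDS-EVENT-0173",
    },
}
print(summarize_rds_event(event))
# RDS-EVENT-0173 on my-db-cluster: Database cluster has been patched
```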

### Example of a DB parameter group event


The following is an example of a DB parameter group event in JSON format. The event shows that the parameter `time_zone` was updated in parameter group `my-db-param-group`. The event ID is RDS-EVENT-0037.

```
{
  "version": "0",
  "id": "844e2571-85d4-695f-b930-0153b71dcb42",
  "detail-type": "RDS DB Parameter Group Event",
  "source": "aws.rds",
  "account": "123456789012",
  "time": "2018-10-06T12:26:13Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:rds:us-east-1:123456789012:pg:my-db-param-group"
  ],
  "detail": {
    "EventCategories": [
      "configuration change"
    ],
    "SourceType": "DB_PARAM",
    "SourceArn": "arn:aws:rds:us-east-1:123456789012:pg:my-db-param-group",
    "Date": "2018-10-06T12:26:13.882Z",
    "Message": "Updated parameter time_zone to UTC with apply method immediate",
    "SourceIdentifier": "my-db-param-group",
    "EventID": "RDS-EVENT-0037"
  }
}
```

### Example of a DB cluster snapshot event


The following is an example of a DB cluster snapshot event in JSON format. The event shows the creation of the snapshot named `my-db-cluster-snapshot`. The event ID is RDS-EVENT-0074.

```
{
  "version": "0",
  "id": "844e2571-85d4-695f-b930-0153b71dcb42",
  "detail-type": "RDS DB Cluster Snapshot Event",
  "source": "aws.rds",
  "account": "123456789012",
  "time": "2018-10-06T12:26:13Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:rds:us-east-1:123456789012:cluster-snapshot:rds:my-db-cluster-snapshot"
  ],
  "detail": {
    "EventCategories": [
      "backup"
    ],
    "SourceType": "CLUSTER_SNAPSHOT",
    "SourceArn": "arn:aws:rds:us-east-1:123456789012:cluster-snapshot:rds:my-db-cluster-snapshot",
    "Date": "2018-10-06T12:26:13.882Z",
    "SourceIdentifier": "my-db-cluster-snapshot",
    "Message": "Creating manual cluster snapshot",
    "EventID": "RDS-EVENT-0074"
  }
}
```

# Granting permissions to publish notifications to an Amazon SNS topic

To grant Amazon RDS permissions to publish notifications to an Amazon Simple Notification Service (Amazon SNS) topic, attach an AWS Identity and Access Management (IAM) policy to the destination topic. For more information about permissions, see [ Example cases for Amazon Simple Notification Service access control](https://docs.aws.amazon.com/sns/latest/dg/sns-access-policy-use-cases.html) in the *Amazon Simple Notification Service Developer Guide*.

By default, an Amazon SNS topic has a policy allowing all Amazon RDS resources within the same account to publish notifications to it. You can attach a custom policy to allow cross-account notifications, or to restrict access to certain resources.

The following is an example of an IAM policy that you attach to the destination Amazon SNS topic. It restricts publishing to the topic to events from DB instances whose names match the specified prefix. To use this policy, specify the following values:
+ `Resource` – The Amazon Resource Name (ARN) for your Amazon SNS topic
+ `SourceARN` – Your RDS resource ARN
+ `SourceAccount` – Your AWS account ID

To see a list of resource types and their ARNs, see [Resources Defined by Amazon RDS](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonrds.html#amazonrds-resources-for-iam-policies) in the *Service Authorization Reference*.


```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "events.rds.amazonaws.com"
      },
      "Action": [
        "sns:Publish"
      ],
      "Resource": "arn:aws:sns:us-east-1:123456789012:topic_name",
      "Condition": {
        "ArnLike": {
          "aws:SourceArn": "arn:aws:rds:us-east-1:123456789012:db:prefix-*"
        },
        "StringEquals": {
          "aws:SourceAccount": "123456789012"
        }
      }
    }
  ]
}
```
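If you generate this policy programmatically, you can substitute the three values described above. A sketch, assuming the same placeholder account ID, topic name, and prefix used in the example:

```python
import json

def build_sns_topic_policy(topic_arn: str, source_arn: str, account_id: str) -> str:
    """Return the topic policy above with the caller's values substituted."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "events.rds.amazonaws.com"},
            "Action": ["sns:Publish"],
            "Resource": topic_arn,
            "Condition": {
                "ArnLike": {"aws:SourceArn": source_arn},
                "StringEquals": {"aws:SourceAccount": account_id},
            },
        }],
    }
    return json.dumps(policy, indent=2)

doc = build_sns_topic_policy(
    "arn:aws:sns:us-east-1:123456789012:topic_name",
    "arn:aws:rds:us-east-1:123456789012:db:prefix-*",
    "123456789012",
)
```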


# Subscribing to Amazon RDS event notification


The simplest way to create a subscription is with the RDS console. If you choose to create event notification subscriptions using the CLI or API, you must create an Amazon Simple Notification Service (Amazon SNS) topic and subscribe to that topic with the Amazon SNS console or Amazon SNS API. You also need to retain the Amazon Resource Name (ARN) of the topic because you use it when submitting CLI commands or API operations. For information on creating an SNS topic and subscribing to it, see [Getting started with Amazon SNS](https://docs.aws.amazon.com/sns/latest/dg/GettingStarted.html) in the *Amazon Simple Notification Service Developer Guide*.

You can specify the type of source you want to be notified of and the Amazon RDS source that triggers the event:

**Source type**  
The type of source. For example, **Source type** might be **Instances**. You must choose a source type.

***Resources* to include**  
The Amazon RDS resources that are generating the events. For example, you might choose **Select specific instances** and then **myDBInstance1**. 

The following table explains the result when you specify or don't specify ***Resources* to include**.


|  Resources to include  |  Description  |  Example  | 
| --- | --- | --- | 
|  Specified  |  RDS notifies you about all events for the specified resource only.  |  If your **Source type** is **Instances** and your resource is **myDBInstance1**, RDS notifies you about all events for **myDBInstance1** only.  | 
|  Not specified  |  RDS notifies you about the events for the specified source type for all your Amazon RDS resources.   |  If your **Source type** is **Instances**, RDS notifies you about all instance-related events in your account.  | 

An Amazon SNS topic subscriber receives every message published to the topic by default. To receive only a subset of the messages, the subscriber must assign a filter policy to the topic subscription. For more information about SNS message filtering, see [Amazon SNS message filtering](https://docs.aws.amazon.com/sns/latest/dg/sns-message-filtering.html) in the *Amazon Simple Notification Service Developer Guide*.
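For example, a filter policy of `{"EventID": ["RDS-EVENT-0006"]}` delivers only messages whose `EventID` message attribute matches that value. A simplified sketch of the exact-match behavior (real SNS filter policies also support prefix matching, numeric ranges, and other operators):

```python
def matches_filter_policy(policy: dict, attributes: dict) -> bool:
    """Simplified exact-match check: every policy key must name a message
    attribute whose value appears in the policy's list of accepted values."""
    return all(
        attributes.get(key) in accepted for key, accepted in policy.items()
    )

policy = {"EventID": ["RDS-EVENT-0006"]}
print(matches_filter_policy(policy, {"EventID": "RDS-EVENT-0006"}))  # True
print(matches_filter_policy(policy, {"EventID": "RDS-EVENT-0173"}))  # False
```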

## Console


**To subscribe to RDS event notification**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Event subscriptions**. 

1. In the **Event subscriptions** pane, choose **Create event subscription**. 

1. Enter your subscription details as follows:

   1. For **Name**, enter a name for the event notification subscription.

   1. For **Send notifications to**, do one of the following:
      + Choose **New email topic**. Enter a name for your email topic and a list of recipients. We recommend that you configure the event subscriptions to use the same email address as your primary account contact. Recommendations, service events, and personal health messages are sent through different channels, so subscribing with the same email address consolidates all of these messages in one location.
      + Choose **Amazon Resource Name (ARN)**. Then choose an existing ARN for an Amazon SNS topic.

        If you want to use a topic that has been enabled for server-side encryption (SSE), grant Amazon RDS the necessary permissions to access the AWS KMS key. For more information, see [ Enable compatibility between event sources from AWS services and encrypted topics](https://docs.aws.amazon.com/sns/latest/dg/sns-key-management.html#compatibility-with-aws-services) in the *Amazon Simple Notification Service Developer Guide*.

   1. For **Source type**, choose a source type. For example, choose **Clusters** or **Cluster snapshots**.

   1. Choose the event categories and resources that you want to receive event notifications for.

      The following example configures event notifications for the DB instance named `testinst`.  
![\[Enter source type\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/event-source.png)

   1. Choose **Create**.

The Amazon RDS console indicates that the subscription is being created.

![\[List DB event notification subscriptions\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/EventNotification-Create2.png)


## AWS CLI


To subscribe to RDS event notification, use the AWS CLI [create-event-subscription](https://docs.aws.amazon.com/cli/latest/reference/rds/create-event-subscription.html) command. Include the following required parameters:
+ `--subscription-name`
+ `--sns-topic-arn`

**Example**  
For Linux, macOS, or Unix:  

```
aws rds create-event-subscription \
    --subscription-name myeventsubscription \
    --sns-topic-arn arn:aws:sns:us-east-1:123456789012:myawsuser-RDS \
    --enabled
```
For Windows:  

```
aws rds create-event-subscription ^
    --subscription-name myeventsubscription ^
    --sns-topic-arn arn:aws:sns:us-east-1:123456789012:myawsuser-RDS ^
    --enabled
```

## API


To subscribe to Amazon RDS event notification, call the Amazon RDS API operation [CreateEventSubscription](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateEventSubscription.html). Include the following required parameters: 
+ `SubscriptionName`
+ `SnsTopicArn`

# Amazon RDS event notification tags and attributes


When Amazon RDS sends an event notification to Amazon Simple Notification Service (SNS) or Amazon EventBridge, the notification contains message attributes and event tags. RDS sends the message attributes separately along with the message, while the event tags are in the body of the message. Use the message attributes and the Amazon RDS tags to add metadata to your resources. You can modify these tags with your own notations about the DB instances, Aurora clusters, and so on. For more information about tagging Amazon RDS resources, see [Tagging Amazon Aurora and Amazon RDS resources](USER_Tagging.md). 

By default, Amazon SNS and Amazon EventBridge receive every message sent to them. SNS and EventBridge can filter the messages and send notifications through the preferred communication mode, such as an email, a text message, or a call to an HTTP endpoint.

**Note**  
Notifications sent in an email or a text message don't include event tags.

The following table shows the message attributes for RDS events sent to the topic subscriber.


| Amazon RDS event attribute |  Description  | 
| --- | --- | 
| EventID |  Identifier for the RDS event message, for example, RDS-EVENT-0006.  | 
| Resource |  The ARN identifier for the resource emitting the event, for example, `arn:aws:rds:ap-southeast-2:123456789012:db:database-1`.  | 

The RDS tags provide data about the resource that was affected by the service event. RDS adds the current state of the tags to the message body when the notification is sent to SNS or EventBridge.

For more information about filtering message attributes for SNS, see [Amazon SNS message filtering](https://docs.aws.amazon.com/sns/latest/dg/sns-message-filtering.html) in the *Amazon Simple Notification Service Developer Guide*.

For more information about filtering event tags for EventBridge, see [ Comparison operators for use in event patterns in Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-patterns-content-based-filtering.html) in the *Amazon EventBridge User Guide*.

For more information about filtering payload-based tags for SNS, see [Introducing payload-based message filtering for Amazon SNS](https://aws.amazon.com/blogs/compute/introducing-payload-based-message-filtering-for-amazon-sns/) on the *AWS Compute Blog*.

# Listing Amazon RDS event notification subscriptions


You can list your current Amazon RDS event notification subscriptions.

## Console


**To list your current Amazon RDS event notification subscriptions**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1.  In the navigation pane, choose **Event subscriptions**. The **Event subscriptions** pane shows all your event notification subscriptions.  
![\[List DB event notification subscriptions\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/EventNotification-ListSubs.png)

   

## AWS CLI


To list your current Amazon RDS event notification subscriptions, use the AWS CLI [describe-event-subscriptions](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-event-subscriptions.html) command. 

**Example**  
The following example describes all event subscriptions.  

```
aws rds describe-event-subscriptions
```
The following example describes the `myfirsteventsubscription` subscription.  

```
aws rds describe-event-subscriptions --subscription-name myfirsteventsubscription
```

## API


To list your current Amazon RDS event notification subscriptions, call the Amazon RDS API [DescribeEventSubscriptions](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeEventSubscriptions.html) operation.

# Modifying an Amazon RDS event notification subscription


After you have created a subscription, you can change the subscription name, source identifier, categories, or topic ARN.

## Console


**To modify an Amazon RDS event notification subscription**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1.  In the navigation pane, choose **Event subscriptions**. 

1.  In the **Event subscriptions** pane, choose the subscription that you want to modify and choose **Edit**. 

1.  Make your changes to the subscription in either the **Target** or **Source** section.

1. Choose **Edit**. The Amazon RDS console indicates that the subscription is being modified.  
![\[List DB event notification subscriptions\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/EventNotification-Modify2.png)

   

## AWS CLI


To modify an Amazon RDS event notification subscription, use the AWS CLI [modify-event-subscription](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-event-subscription.html) command. Include the following required parameter:
+ `--subscription-name`

**Example**  
The following code enables `myeventsubscription`.  
For Linux, macOS, or Unix:  

```
aws rds modify-event-subscription \
    --subscription-name myeventsubscription \
    --enabled
```
For Windows:  

```
aws rds modify-event-subscription ^
    --subscription-name myeventsubscription ^
    --enabled
```

## API


To modify an Amazon RDS event notification subscription, call the Amazon RDS API [ModifyEventSubscription](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyEventSubscription.html) operation. Include the following required parameter:
+ `SubscriptionName`

# Adding a source identifier to an Amazon RDS event notification subscription


You can add a source identifier (the Amazon RDS source generating the event) to an existing subscription.

## Console


You can easily add or remove source identifiers using the Amazon RDS console by selecting or deselecting them when modifying a subscription. For more information, see [Modifying an Amazon RDS event notification subscription](USER_Events.Modifying.md).

## AWS CLI


To add a source identifier to an Amazon RDS event notification subscription, use the AWS CLI `add-source-identifier-to-subscription` command. Include the following required parameters:
+ `--subscription-name`
+ `--source-identifier`

**Example**  
The following example adds the source identifier `mysqldb` to the `myrdseventsubscription` subscription.  
For Linux, macOS, or Unix:  

```
aws rds add-source-identifier-to-subscription \
    --subscription-name myrdseventsubscription \
    --source-identifier mysqldb
```
For Windows:  

```
aws rds add-source-identifier-to-subscription ^
    --subscription-name myrdseventsubscription ^
    --source-identifier mysqldb
```

## API


To add a source identifier to an Amazon RDS event notification subscription, call the Amazon RDS API [AddSourceIdentifierToSubscription](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_AddSourceIdentifierToSubscription.html) operation. Include the following required parameters:
+ `SubscriptionName`
+ `SourceIdentifier`

# Removing a source identifier from an Amazon RDS event notification subscription


You can remove a source identifier (the Amazon RDS source generating the event) from a subscription if you no longer want to be notified of events for that source. 

## Console


You can easily add or remove source identifiers using the Amazon RDS console by selecting or deselecting them when modifying a subscription. For more information, see [Modifying an Amazon RDS event notification subscription](USER_Events.Modifying.md).

## AWS CLI


To remove a source identifier from an Amazon RDS event notification subscription, use the AWS CLI [remove-source-identifier-from-subscription](https://docs.aws.amazon.com/cli/latest/reference/rds/remove-source-identifier-from-subscription.html) command. Include the following required parameters:
+ `--subscription-name`
+ `--source-identifier`

**Example**  
The following example removes the source identifier `mysqldb` from the `myrdseventsubscription` subscription.  
For Linux, macOS, or Unix:  

```
aws rds remove-source-identifier-from-subscription \
    --subscription-name myrdseventsubscription \
    --source-identifier mysqldb
```
For Windows:  

```
aws rds remove-source-identifier-from-subscription ^
    --subscription-name myrdseventsubscription ^
    --source-identifier mysqldb
```

## API


To remove a source identifier from an Amazon RDS event notification subscription, call the Amazon RDS API [RemoveSourceIdentifierFromSubscription](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RemoveSourceIdentifierFromSubscription.html) operation. Include the following required parameters:
+ `SubscriptionName`
+ `SourceIdentifier`

# Listing the Amazon RDS event notification categories


All events for a resource type are grouped into categories. To view the list of categories available, use the following procedures.

## Console


When you create or modify an event notification subscription, the event categories are displayed in the Amazon RDS console. For more information, see [Modifying an Amazon RDS event notification subscription](USER_Events.Modifying.md). 

![\[List DB event notification categories\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/EventNotification-Categories.png)




## AWS CLI


To list the Amazon RDS event notification categories, use the AWS CLI [describe-event-categories](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-event-categories.html) command. This command has no required parameters.

**Example**  

```
aws rds describe-event-categories
```

## API


To list the Amazon RDS event notification categories, call the Amazon RDS API [DescribeEventCategories](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeEventCategories.html) operation. This operation has no required parameters.

# Deleting an Amazon RDS event notification subscription


You can delete a subscription when you no longer need it. After you delete a subscription, subscribers to the topic no longer receive the event notifications that the subscription specified.

## Console


**To delete an Amazon RDS event notification subscription**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1.  In the navigation pane, choose **DB Event Subscriptions**. 

1.  In the **My DB Event Subscriptions** pane, choose the subscription that you want to delete. 

1. Choose **Delete**.

1. The Amazon RDS console indicates that the subscription is being deleted.  
![\[Delete an event notification subscription\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/EventNotification-Delete.png)

   

## AWS CLI


To delete an Amazon RDS event notification subscription, use the AWS CLI [delete-event-subscription](https://docs.aws.amazon.com/cli/latest/reference/rds/delete-event-subscription.html) command. Include the following required parameter:
+ `--subscription-name`

**Example**  
The following example deletes the subscription `myrdssubscription`.  

```
aws rds delete-event-subscription --subscription-name myrdssubscription
```

## API


To delete an Amazon RDS event notification subscription, call the Amazon RDS API [DeleteEventSubscription](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DeleteEventSubscription.html) operation. Include the following required parameter:
+ `SubscriptionName`

# Creating a rule that triggers on an Amazon Aurora event

Using Amazon EventBridge, you can automate AWS services and respond to system events such as application availability issues or resource changes. 

**Topics**
+ [Tutorial: Log DB instance state changes using Amazon EventBridge](#log-rds-instance-state)

## Tutorial: Log DB instance state changes using Amazon EventBridge

In this tutorial, you create an AWS Lambda function that logs the state changes for an instance. You then create a rule that runs the function whenever there is a state change of an existing RDS DB instance. The tutorial assumes that you have a small running test instance that you can shut down temporarily.

**Important**  
Don't perform this tutorial on a running production DB instance.

**Topics**
+ [Step 1: Create an AWS Lambda function](#rds-create-lambda-function)
+ [Step 2: Create a rule](#rds-create-rule)
+ [Step 3: Test the rule](#rds-test-rule)

### Step 1: Create an AWS Lambda function


Create a Lambda function to log the state change events. You specify this function when you create your rule.

**To create a Lambda function**

1. Open the AWS Lambda console at [https://console.aws.amazon.com/lambda/](https://console.aws.amazon.com/lambda/).

1. If you're new to Lambda, you see a welcome page. Choose **Get Started Now**. Otherwise, choose **Create function**.

1. Choose **Author from scratch**.

1. On the **Create function** page, do the following:

   1. Enter a name and description for the Lambda function. For example, name the function **RDSInstanceStateChange**. 

   1. In **Runtime**, select **Node.js 16.x**. 

   1. For **Architecture**, choose **x86\_64**.

   1. For **Execution role**, do either of the following:
      + Choose **Create a new role with basic Lambda permissions**.
      + For **Existing role**, choose **Use an existing role**. Choose the role that you want to use. 

   1. Choose **Create function**.

1. On the **RDSInstanceStateChange** page, do the following:

   1. In **Code source**, select **index.js**. 

   1. In the **index.js** pane, delete the existing code.

   1. Enter the following code:

      ```
      console.log('Loading function');
      
      exports.handler = async (event, context) => {
          console.log('Received event:', JSON.stringify(event));
      };
      ```

   1. Choose **Deploy**.

### Step 2: Create a rule


Create a rule to run your Lambda function whenever the state of an Amazon RDS DB instance changes.

**To create the EventBridge rule**

1. Open the Amazon EventBridge console at [https://console.aws.amazon.com/events/](https://console.aws.amazon.com/events/).

1. In the navigation pane, choose **Rules**.

1. Choose **Create rule**.

1. Enter a name and description for the rule. For example, enter **RDSInstanceStateChangeRule**.

1. Choose **Rule with an event pattern**, and then choose **Next**.

1. For **Event source**, choose **AWS events or EventBridge partner events**.

1. Scroll down to the **Event pattern** section.

1. For **Event source**, choose **AWS services**.

1. For **AWS service**, choose **Relational Database Service (RDS)**.

1. For **Event type**, choose **RDS DB Instance Event**.

1. Leave the default event pattern. Then choose **Next**.

1. For **Target types**, choose **AWS service**.

1. For **Select a target**, choose **Lambda function**.

1. For **Function**, choose the Lambda function that you created. Then choose **Next**.

1. In **Configure tags**, choose **Next**.

1. Review the steps in your rule. Then choose **Create rule**.
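
When EventBridge evaluates the event pattern, every field named in the pattern must exist in the incoming event, and the event's value must be one of the values the pattern lists for that field. The following Python sketch implements only that exact-match subset (it ignores operators such as `prefix`) purely to illustrate the matching semantics; it is not how EventBridge is actually implemented.

```python
def matches(pattern: dict, event: dict) -> bool:
    """Return True if the event satisfies the pattern (exact-match subset only)."""
    for key, allowed in pattern.items():
        if key not in event:
            return False
        if isinstance(allowed, dict):
            # Nested pattern, for example {"detail": {"EventID": [...]}}
            if not isinstance(event[key], dict) or not matches(allowed, event[key]):
                return False
        else:
            # A list of allowed values, for example {"source": ["aws.rds"]}
            if event[key] not in allowed:
                return False
    return True

pattern = {"source": ["aws.rds"], "detail-type": ["RDS DB Instance Event"]}
event = {"source": "aws.rds", "detail-type": "RDS DB Instance Event", "detail": {}}
print(matches(pattern, event))  # True: both fields match
```

Fields that appear in the event but not in the pattern (such as `detail` above) are ignored, which is why a minimal pattern can match a rich event.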

### Step 3: Test the rule


To test your rule, shut down an RDS DB instance. After waiting a few minutes for the instance to shut down, verify that your Lambda function was invoked.

**To test your rule by stopping a DB instance**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. Stop an RDS DB instance.

1. Open the Amazon EventBridge console at [https://console.aws.amazon.com/events/](https://console.aws.amazon.com/events/).

1. In the navigation pane, choose **Rules**, and then choose the name of the rule that you created.

1. In **Rule details**, choose **Monitoring**.

   You are redirected to the Amazon CloudWatch console. If you are not redirected, choose **View the metrics in CloudWatch**.

1. In **All metrics**, choose the name of the rule that you created.

   The graph should indicate that the rule was invoked.

1. In the navigation pane, choose **Log groups**.

1. Choose the name of the log group for your Lambda function (**/aws/lambda/*function-name***).

1. Choose the name of the log stream to view the data provided by the function for the instance that you stopped. You should see a received event similar to the following:

   ```
   {
       "version": "0",
       "id": "12a345b6-78c9-01d2-34e5-123f4ghi5j6k",
       "detail-type": "RDS DB Instance Event",
       "source": "aws.rds",
       "account": "111111111111",
       "time": "2021-03-19T19:34:09Z",
       "region": "us-east-1",
       "resources": [
           "arn:aws:rds:us-east-1:111111111111:db:testdb"
       ],
       "detail": {
           "EventCategories": [
               "notification"
           ],
           "SourceType": "DB_INSTANCE",
           "SourceArn": "arn:aws:rds:us-east-1:111111111111:db:testdb",
           "Date": "2021-03-19T19:34:09.293Z",
           "Message": "DB instance stopped",
           "SourceIdentifier": "testdb",
           "EventID": "RDS-EVENT-0087"
       }
   }
   ```

   For more examples of RDS events in JSON format, see [Overview of events for Aurora](working-with-events.md#rds-cloudwatch-events.sample).

1. (Optional) When you're finished, you can open the Amazon RDS console and start the instance that you stopped.
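
Rather than only logging the payload, a handler can branch on fields in `detail`. The following plain-Python sketch (field names are taken from the sample event above, where the stop event appears as `RDS-EVENT-0087`) returns a log line only for stop events; it is an illustration, not production Lambda code.

```python
SAMPLE_EVENT = {
    "source": "aws.rds",
    "detail-type": "RDS DB Instance Event",
    "detail": {
        "EventID": "RDS-EVENT-0087",
        "SourceIdentifier": "testdb",
        "Message": "DB instance stopped",
    },
}

def handle(event):
    """Return a log line for stop events (RDS-EVENT-0087); ignore other events."""
    detail = event.get("detail", {})
    if detail.get("EventID") != "RDS-EVENT-0087":
        return None
    return f"{detail['SourceIdentifier']}: {detail['Message']}"

print(handle(SAMPLE_EVENT))  # testdb: DB instance stopped
```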

# Amazon RDS event categories and event messages for Aurora


Amazon RDS generates a significant number of events in categories that you can subscribe to using the Amazon RDS console, AWS CLI, or the RDS API.

**Topics**
+ [DB cluster events](#USER_Events.Messages.cluster)
+ [DB cluster snapshot events](#USER_Events.Messages.cluster-snapshot)
+ [DB instance events](#USER_Events.Messages.instance)
+ [DB parameter group events](#USER_Events.Messages.parameter-group)
+ [DB security group events](#USER_Events.Messages.security-group)
+ [DB shard group events](#USER_Events.Messages.shard-group)
+ [RDS Proxy events](#USER_Events.Messages.rds-proxy)
+ [Blue/green deployment events](#USER_Events.Messages.BlueGreenDeployments)

## DB cluster events


The following table shows the event category and a list of events when a DB cluster is the source type.

**Note**  
No event category exists for Aurora Serverless in the DB cluster event type. The Aurora Serverless events range from RDS-EVENT-0141 to RDS-EVENT-0149.


|  Category  | RDS event ID |  Message  |  Notes  | 
| --- | --- | --- | --- | 
|  configuration change  | RDS-EVENT-0016 |  Reset master credentials.  | None | 
|  configuration change  | RDS-EVENT-0179 |  Database Activity Streams is started on your database cluster.  |  For more information see [Monitoring Amazon Aurora with Database Activity Streams](DBActivityStreams.md).  | 
|  configuration change  | RDS-EVENT-0180 |  Database Activity Streams is stopped on your database cluster.  | For more information see [Monitoring Amazon Aurora with Database Activity Streams](DBActivityStreams.md). | 
| creation | RDS-EVENT-0170 |  DB cluster created.  |  None  | 
| deletion | RDS-EVENT-0171 |  DB cluster deleted.  |  None  | 
|  failover  | RDS-EVENT-0069 |  Cluster failover failed, check the health of your cluster instances and try again.  |  None  | 
|  failover  | RDS-EVENT-0070 |  Promoting previous primary again: *name*.  |  None  | 
|  failover  | RDS-EVENT-0071 |  Completed failover to DB instance: *name*.  |  None  | 
|  failover  | RDS-EVENT-0072 |  Started same AZ failover to DB instance: *name*.  |  None  | 
|  failover  | RDS-EVENT-0073 |  Started cross AZ failover to DB instance: *name*.  |  None  | 
|  failure  | RDS-EVENT-0083 |  Amazon RDS has been unable to create credentials to access your Amazon S3 Bucket for your DB cluster *name*. This is due to the S3 snapshot ingestion IAM role not being configured correctly in your account or the specified Amazon S3 bucket cannot be found. Please refer to the troubleshooting section in the Amazon RDS documentation for further details.  |  For more information, see [Physical migration from MySQL by using Percona XtraBackup and Amazon S3](AuroraMySQL.Migrating.ExtMySQL.S3.md) .   | 
|  failure  | RDS-EVENT-0143 |  The DB cluster failed to scale from *units* to *units* for this reason: *reason*.  |  Scaling failed for the Aurora Serverless DB cluster.  | 
| failure | RDS-EVENT-0354 |  You can't create the DB cluster because of incompatible resources. *message*.  |  The *message* includes details about the failure.  | 
| failure | RDS-EVENT-0355 |  The DB cluster can't be created because of insufficient resource limits. *message*.  |  The *message* includes details about the failure.  | 
|  failure  | RDS-EVENT-0387 |  Failed to partition DB instances in DB cluster *name* for patching.  | The operating system upgrades for the DB instances in the DB cluster failed. | 
|  global failover  | RDS-EVENT-0181 |  Global switchover to DB cluster *name* in Region *name* started.  |  This event is for a switchover operation (previously called "managed planned failover"). The process can be delayed because other operations are running on the DB cluster.  | 
|  global failover  | RDS-EVENT-0182 |  Old primary DB cluster *name* in Region *name* successfully shut down.  |  This event is for a switchover operation (previously called "managed planned failover"). The old primary instance in the global database isn't accepting writes. All volumes are synchronized.  | 
|  global failover  | RDS-EVENT-0183 |  Waiting for data synchronization across global cluster members. Current lags behind primary DB cluster: `reason`.  |  This event is for a switchover operation (previously called "managed planned failover"). A replication lag is occurring during the synchronization phase of the global database failover.  | 
|  global failover  | RDS-EVENT-0184 |  New primary DB cluster *name* in Region *name* was successfully promoted.  |  This event is for a switchover operation (previously called "managed planned failover"). The volume topology of the global database is reestablished with the new primary volume.  | 
|  global failover  | RDS-EVENT-0185 |  Global switchover to DB cluster *name* in Region *name* finished.  |  This event is for a switchover operation (previously called "managed planned failover"). The global database switchover is finished on the primary DB cluster. Replicas might take long to come online after the failover completes.  | 
|  global failover  | RDS-EVENT-0186 |  Global switchover to DB cluster *name* in Region *name* is cancelled.  |  This event is for a switchover operation (previously called "managed planned failover").  | 
|  global failover  | RDS-EVENT-0187 |  Global switchover to DB cluster *name* in Region *name* failed.  |  This event is for a switchover operation (previously called "managed planned failover").  | 
|  global failover  | RDS-EVENT-0238 |  Global failover to DB cluster *name* in Region *name* completed.  |  None  | 
|  global failover  | RDS-EVENT-0239 |  Global failover to DB cluster *name* in Region *name* failed.  |  None  | 
|  global failover  | RDS-EVENT-0240 |  Started resynchronizing members of DB cluster *name* in Region *name* after global failover.  |  None  | 
|  global failover  | RDS-EVENT-0241 |  Finished resynchronizing members of DB cluster *name* in Region *name* after global failover.  |  None  | 
|  global failover  | RDS-EVENT-0397 |  Aurora finished changing the DNS name that the global writer endpoint resolves to.  |  None  | 
|  global failover  | RDS-EVENT-0423 |  Waiting for data synchronization with the target DB cluster. Current target DB cluster lag behind the primary DB cluster: `%s`.  |  This event is for a switchover operation (previously called "managed planned failover"). A replication lag is occurring during the synchronization phase of the global database failover.  | 
|  maintenance  | RDS-EVENT-0156 |  The DB cluster has a DB engine minor version upgrade available.  |  None  | 
|  maintenance  | RDS-EVENT-0173 |  Database cluster engine version has been upgraded.  | Patching of the DB cluster has completed. | 
|  maintenance  | RDS-EVENT-0174 |  Database cluster is in a state that cannot be upgraded.  | None | 
|  maintenance  | RDS-EVENT-0176 |  Database cluster engine major version has been upgraded.  | None | 
|  maintenance  | RDS-EVENT-0177 |  Database cluster upgrade is in progress.  | None | 
|  maintenance  | RDS-EVENT-0286 |  Database cluster engine *version\_number* version upgrade started. Cluster remains online.  | None | 
|  maintenance  | RDS-EVENT-0287 |  Operating system upgrade requirement detected.  | None | 
|  maintenance  | RDS-EVENT-0288 |  Cluster operating system upgrade starting.  | None | 
|  maintenance  | RDS-EVENT-0289 |  Cluster operating system upgrade completed.  | None | 
|  maintenance  | RDS-EVENT-0290 |  Database cluster has been patched: source version *version\_number* => *new\_version\_number*.  | None | 
|  maintenance  | RDS-EVENT-0363 |  Upgrade preparation in progress: *cluster\_name*  | Upgrade prechecks have started for the DB cluster. | 
|  maintenance  | RDS-EVENT-0388 |  Starting offline patches to DB instances in partition *name*/*name* for DB cluster *name*: *partition\_n*.  | Starting the operating system upgrades for the DB instances in the DB cluster. | 
|  maintenance  | RDS-EVENT-0389 |  We were unable to upgrade your DB cluster operating system. You can wait for the next maintenance window, or you can upgrade your DB cluster operating system manually.  | None | 
|  maintenance  | RDS-EVENT-0424 |  The DB cluster *name* is running version *version*, which is higher than the target upgrade version *version* for the global cluster. We don't recommend having a secondary cluster on a higher version than the global cluster, as it can cause issues during failover or switchover. Consider upgrading your global cluster to match.  |  None  | 
|  notification  | RDS-EVENT-0076 |  Failed to migrate from *name* to *name*. Reason: *reason*.  |  Migration to an Aurora DB cluster failed.  | 
|  notification  | RDS-EVENT-0077 |  Failed to convert *name*.*name* to InnoDB. Reason: *reason*.  |  An attempt to convert a table from the source database to InnoDB failed during the migration to an Aurora DB cluster.  | 
|  notification  | RDS-EVENT-0085 |  Unable to upgrade DB cluster *name* because the instance *name* has a status of *name*. Resolve the issue or delete the instance and try again.  |  An error occurred while attempting to patch the Aurora DB cluster. Check your instance status, resolve the issue, and try again. For more information see [Maintaining an Amazon Aurora DB cluster](USER_UpgradeDBInstance.Maintenance.md).  | 
|  notification  | RDS-EVENT-0141 |  Scaling DB cluster from *units* to *units* for this reason: *reason*.  |  Scaling initiated for the Aurora Serverless DB cluster.  | 
|  notification  | RDS-EVENT-0142 |  The DB cluster has scaled from *units* to *units*.  |  Scaling completed for the Aurora Serverless DB cluster.  | 
|  notification  | RDS-EVENT-0144 |  The DB cluster is being paused.  |  An automatic pause was initiated for the Aurora Serverless DB cluster.  | 
|  notification  | RDS-EVENT-0145 |  The DB cluster is paused.  |  The Aurora Serverless DB cluster has been paused.  | 
|  notification  | RDS-EVENT-0146 |  Pause was canceled for the DB cluster.  |  The pause was canceled for the Aurora Serverless DB cluster.  | 
|  notification  | RDS-EVENT-0147 |  The DB cluster is being resumed.  | A resume operation was initiated for the Aurora Serverless DB cluster. | 
|  notification  | RDS-EVENT-0148 |  The DB cluster is resumed.  | The resume operation completed for the Aurora Serverless DB cluster. | 
|  notification  | RDS-EVENT-0149 |  The DB cluster has scaled from *units* to *units*, but scaling wasn't seamless for this reason: *reason*.  |  Seamless scaling completed with the force option for the Aurora Serverless DB cluster. Connections might have been interrupted as required.  | 
|  notification  | RDS-EVENT-0150 |  DB cluster stopped.  |  None  | 
|  notification  | RDS-EVENT-0151 |  DB cluster started.  |  None  | 
|  notification  | RDS-EVENT-0152 |  DB cluster stop failed.  |  None  | 
|  notification  | RDS-EVENT-0153 |  DB cluster is being started due to it exceeding the maximum allowed time being stopped.  |  None  | 
|  notification  | RDS-EVENT-0172 |  Renamed cluster from *name* to *name*.  |  None  | 
|  notification  | RDS-EVENT-0234 |  Export task failed.  |  The DB cluster export task failed.  | 
|  notification  | RDS-EVENT-0235 |  Export task canceled.  |  The DB cluster export task was canceled.  | 
|  notification  | RDS-EVENT-0236 |  Export task completed.  |  The DB cluster export task completed.  | 
|  notification  | RDS-EVENT-0386 |  DB instances in DB cluster *name* have been partitioned: *list\_of\_partitions*. DB cluster is online.  | The operating system upgrades for the DB instances in the DB cluster were successful. | 
| notification | RDS-EVENT-0426 | RDS can't configure the DB cluster *name* as a replication source because of idle or long-running transactions. Wait for the transactions to complete or cancel them, and try again. | None | 
| notification | RDS-EVENT-0512 | Volume replacement for DB cluster *name* started. | None | 
| notification | RDS-EVENT-0513 | Volume replacement for DB cluster *name* completed. | None | 

## DB cluster snapshot events


The following table shows the event category and a list of events when a DB cluster snapshot is the source type.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_Events.Messages.html)

## DB instance events


The following table shows the event category and a list of events when a DB instance is the source type.


|  Category  | RDS event ID |  Message  |  Notes  | 
| --- | --- | --- | --- | 
|  availability  | RDS-EVENT-0004 |  DB instance shutdown.  | None | 
|  availability  | RDS-EVENT-0006 |  DB instance restarted.  | None | 
|  availability  | RDS-EVENT-0022 |  Error restarting mysql: *message*.  | An error has occurred while restarting Aurora MySQL or RDS for MariaDB. | 
| availability | RDS-EVENT-0419 | Amazon RDS has been unable to access the KMS encryption key for database instance *name*. This database will be placed into an inaccessible state. Please refer to the troubleshooting section in the Amazon RDS documentation for further details. | None | 
|  backtrack  | RDS-EVENT-0131 |  The actual Backtrack window is smaller than the target Backtrack window you specified. Consider reducing the number of hours in your target Backtrack window.  | For more information about backtracking, see [Backtracking an Aurora DB cluster](AuroraMySQL.Managing.Backtrack.md). | 
|  backtrack  | RDS-EVENT-0132 |  The actual Backtrack window is the same as the target Backtrack window.  | None | 
|  configuration change  | RDS-EVENT-0011 |  Updated to use DBParameterGroup *name*.  | None | 
|  configuration change  | RDS-EVENT-0012 |  Applying modification to database instance class.   | None | 
|  configuration change  | RDS-EVENT-0014 |  Finished applying modification to DB instance class.  | None | 
|  configuration change  | RDS-EVENT-0017 |  Finished applying modification to allocated storage.  | None | 
|  configuration change  | RDS-EVENT-0025 |  Finished applying modification to convert to a Multi-AZ DB instance.  | None | 
|  configuration change  | RDS-EVENT-0029 |  Finished applying modification to convert to a standard (Single-AZ) DB instance.  | None | 
|  configuration change  | RDS-EVENT-0033 |  There are *number* users matching the master username; only resetting the one not tied to a specific host.  | None | 
|  configuration change  | RDS-EVENT-0067 |  Unable to reset your password. Error information: *message*.  | None | 
|  configuration change  | RDS-EVENT-0078 |  Monitoring Interval changed to *number*.  |  The Enhanced Monitoring configuration has been changed. | 
|  configuration change  | RDS-EVENT-0092 |  Finished updating DB parameter group.  | None | 
|  creation  | RDS-EVENT-0005 |  DB instance created.  | None | 
|  deletion  | RDS-EVENT-0003 |  DB instance deleted.  | None | 
|  failure  | RDS-EVENT-0035 |  Database instance put into *state*. *message*.  | The DB instance has invalid parameters. For example, if the DB instance could not start because a memory-related parameter is set too high for this instance class, your action would be to modify the memory parameter and reboot the DB instance. | 
|  failure  | RDS-EVENT-0036 |  Database instance in *state*. *message*.  | The DB instance is in an incompatible network. Some of the specified subnet IDs are invalid or do not exist. | 
|  failure  | RDS-EVENT-0079 |  Amazon RDS has been unable to create credentials for enhanced monitoring and this feature has been disabled. This is likely due to the rds-monitoring-role not being present and configured correctly in your account. Please refer to the troubleshooting section in the Amazon RDS documentation for further details.  |  Enhanced Monitoring can't be enabled without the Enhanced Monitoring IAM role. For information about creating the IAM role, see [To create an IAM role for Amazon RDS enhanced monitoring](USER_Monitoring.OS.Enabling.md#USER_Monitoring.OS.IAMRole).  | 
|  failure  | RDS-EVENT-0080 |  Amazon RDS has been unable to configure enhanced monitoring on your instance: *name* and this feature has been disabled. This is likely due to the rds-monitoring-role not being present and configured correctly in your account. Please refer to the troubleshooting section in the Amazon RDS documentation for further details.  |  Enhanced Monitoring was disabled because an error occurred during the configuration change. It is likely that the Enhanced Monitoring IAM role is configured incorrectly. For information about creating the enhanced monitoring IAM role, see [To create an IAM role for Amazon RDS enhanced monitoring](USER_Monitoring.OS.Enabling.md#USER_Monitoring.OS.IAMRole).  | 
|  failure  | RDS-EVENT-0082 |  Amazon RDS has been unable to create credentials to access your Amazon S3 Bucket for your DB instance *name*. This is due to the S3 snapshot ingestion IAM role not being configured correctly in your account or the specified Amazon S3 bucket cannot be found. Please refer to the troubleshooting section in the Amazon RDS documentation for further details.  |  Aurora was unable to copy backup data from an Amazon S3 bucket. It is likely that the permissions for Aurora to access the Amazon S3 bucket are configured incorrectly. For more information, see [Physical migration from MySQL by using Percona XtraBackup and Amazon S3](AuroraMySQL.Migrating.ExtMySQL.S3.md) .   | 
| failure | RDS-EVENT-0353 |  The DB instance can't be created because of insufficient resource limits. *message*.  |  The *message* includes details about the failure.  | 
| failure | RDS-EVENT-0418 | Amazon RDS is unable to access the KMS encryption key for database instance *name*. This is likely due to the key being disabled or Amazon RDS being unable to access it. If this continues the database will be placed into an inaccessible state. Please refer to the troubleshooting section in the Amazon RDS documentation for further details. | None | 
| failure | RDS-EVENT-0420 | Amazon RDS can now successfully access the KMS encryption key for database instance *name*. | None | 
|  low storage  | RDS-EVENT-0007 |  Allocated storage has been exhausted. Allocate additional storage to resolve.  |  The allocated storage for the DB instance has been consumed. To resolve this issue, allocate additional storage for the DB instance. For more information, see the [RDS FAQ](https://aws.amazon.com/rds/faqs). You can monitor the storage space for a DB instance using the **Free Storage Space** metric.  | 
|  low storage  | RDS-EVENT-0089 |  The free storage capacity for DB instance: *name* is low at *percentage* of the provisioned storage [Provisioned Storage: *size*, Free Storage: *size*]. You may want to increase the provisioned storage to address this issue.  |  The DB instance has consumed more than 90% of its allocated storage. You can monitor the storage space for a DB instance using the **Free Storage Space** metric.  | 
|  low storage  | RDS-EVENT-0227 |  Your Aurora cluster's storage is dangerously low with only *amount* terabytes remaining. Please take measures to reduce the storage load on your cluster.  |  The Aurora storage subsystem is running low on space.  | 
|  maintenance  | RDS-EVENT-0026 |  Applying off-line patches to DB instance.  |  Offline maintenance of the DB instance is taking place. The DB instance is currently unavailable.  | 
|  maintenance  | RDS-EVENT-0027 |  Finished applying off-line patches to DB instance.  |  Offline maintenance of the DB instance is complete. The DB instance is now available.  | 
|  maintenance  | RDS-EVENT-0047 |  Database instance patched.  | None | 
|  maintenance  | RDS-EVENT-0155 |  The DB instance has a DB engine minor version upgrade available.  | None | 
|  maintenance  | RDS-EVENT-0178 |  Database instance upgrade is in progress.  | None | 
|  maintenance  | RDS-EVENT-0422 |  RDS will replace the host of DB instance *name* due to a pending maintenance action. | None | 
|  notification  | RDS-EVENT-0044 |  *message*  | This is an operator-issued notification. For more information, see the event message. | 
|  notification  | RDS-EVENT-0048 |  Delaying database engine upgrade since this instance has read replicas that need to be upgraded first.  | Patching of the DB instance has been delayed. | 
|  notification  | RDS-EVENT-0087 |  DB instance stopped.   | None | 
|  notification  | RDS-EVENT-0088 |  DB instance started.  | None | 
|  notification  | RDS-EVENT-0365 |  Timezone files were updated. Restart your RDS instance for the changes to take effect.  | None | 
|  notification, serverless  | RDS-EVENT-0370 |  Initiated pause for the DB instance.  |  A new attempt to pause an idle Aurora Serverless v2 DB instance was started.  | 
|  notification, serverless  | RDS-EVENT-0371 |  Pause was canceled for the DB instance.  |  An attempt to pause an idle Aurora Serverless v2 DB instance was unsuccessful, due to workload.  | 
|  notification, serverless  | RDS-EVENT-0372 |  Successfully paused the DB instance.  |  The Aurora Serverless v2 DB instance was paused.  | 
|  notification, serverless  | RDS-EVENT-0373 |  Initiated resume for the DB instance.  |  The Aurora Serverless v2 DB instance started resuming, due to new workload or administrative or maintenance activity.  | 
|  notification, serverless  | RDS-EVENT-0374 |  Successfully resumed the DB instance.  | The Aurora Serverless v2 DB instance resumed. | 
|  notification  | RDS-EVENT-0385 |  Cluster topology is updated.  |  There are DNS changes to the DB cluster for the DB instance. This includes when new DB instances are added or deleted, or there's a failover.  | 
|  notification, global database  | RDS-EVENT-0390 |  Attempt to block writes for DB cluster *cluster-id* in Region *region-id* succeeded.  |  Aurora began blocking writes at the storage layer in preparation for switchover or failover of an Aurora global database.  | 
|  notification, global database  | RDS-EVENT-0391 |  Attempt to block writes for DB cluster *cluster-id* in Region *region-id* timed out.  |  Aurora wasn't able to block writes at the storage layer in preparation for switchover or failover of an Aurora global database. The switchover or failover will proceed, but you might need to recover recently written data from the snapshot of the original primary cluster.  | 
|  read replica  | RDS-EVENT-0045 |  Replication has stopped.  |  This message appears when there is an error during replication. To determine the type of error, see [ Troubleshooting a MySQL read replica problem](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.Troubleshooting.html).  | 
|  read replica  | RDS-EVENT-0046 |  Replication for the Read Replica resumed.  | This message appears when you first create a read replica, or as a monitoring message confirming that replication is functioning properly. If this message follows an `RDS-EVENT-0045` notification, then replication has resumed following an error or after replication was stopped. | 
|  read replica  | RDS-EVENT-0057 |  Replication streaming has been terminated.  | None | 
|  recovery  | RDS-EVENT-0020 |  Recovery of the DB instance has started. Recovery time will vary with the amount of data to be recovered.  | None | 
|  recovery  | RDS-EVENT-0021 |  Recovery of the DB instance is complete.  | None | 
|  recovery  | RDS-EVENT-0023 |  Emergent Snapshot Request: *message*.  |  A manual backup has been requested but Amazon RDS is currently in the process of creating a DB snapshot. Submit the request again after Amazon RDS has completed the DB snapshot.  | 
|  recovery  | RDS-EVENT-0052 |  Multi-AZ instance recovery started.  | Recovery time will vary with the amount of data to be recovered. | 
|  recovery  | RDS-EVENT-0053 |  Multi-AZ instance recovery completed. Pending failover or activation.  | This message indicates that Amazon RDS has prepared your DB instance to initiate a failover to the secondary instance if necessary. | 
|  recovery  | RDS-EVENT-0361 |  Recovery of standby DB instance has started.  |  The standby DB instance is rebuilt during the recovery process. Database performance is impacted during the recovery process.  | 
|  recovery  | RDS-EVENT-0362 |  Recovery of standby DB instance has completed.  |  The standby DB instance is rebuilt during the recovery process. Database performance is impacted during the recovery process.  | 
|  restoration  | RDS-EVENT-0019 |  Restored from DB instance *name* to *name*.  |  The DB instance has been restored from a point-in-time backup.  | 
|  security patching  | RDS-EVENT-0230 |  A system update is available for your DB instance. For information about applying updates, see 'Maintaining a DB instance' in the RDS User Guide.  |  A new, minor version, operating system patch is available for your DB instance. For information about applying updates, see [Operating system updates for Aurora DB clusters](USER_UpgradeDBInstance.Maintenance.md#Aurora_OS_updates).  | 
|  maintenance  | RDS-EVENT-0425 |  Amazon RDS can't perform the OS upgrade because there are no available IP addresses in the specified subnets. Choose subnets with available IP addresses and try again.  |  None  | 
|  maintenance  | RDS-EVENT-0429 |  Amazon RDS can't perform the OS upgrade because of insufficient capacity available for the *type* instance type in the *zone* Availability Zone  |  None  | 
|  maintenance  | RDS-EVENT-0501 |  Amazon RDS DB instance's server certificate requires rotation through a pending maintenance action.  |  The DB instance's server certificate requires rotation through a pending maintenance action. Amazon RDS reboots your database during this maintenance to complete the certificate rotation. To schedule this maintenance, go to the **Maintenance & backups** tab and choose **Apply now** or **Schedule for next maintenance window**. If the change is not scheduled, Amazon RDS automatically applies it in your maintenance window on the auto apply date shown in your maintenance action.  | 
|  maintenance  | RDS-EVENT-0502 |  Amazon RDS has scheduled a server certificate rotation for DB instance during the next maintenance window. This maintenance will require a database reboot.  |  None  | 

## DB parameter group events


The following table shows the event category and a list of events when a DB parameter group is the source type.


|  Category  | RDS event ID |  Message  |  Notes  | 
| --- | --- | --- | --- | 
|  configuration change  | RDS-EVENT-0037 |  Updated parameter *name* to *value* with apply method *method*.   |  None  | 

## DB security group events


The following table shows the event category and a list of events when a DB security group is the source type.

**Note**  
DB security groups are resources for EC2-Classic. EC2-Classic was retired on August 15, 2022. If you haven't migrated from EC2-Classic to a VPC, we recommend that you migrate as soon as possible. For more information, see [Migrate from EC2-Classic to a VPC](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/vpc-migrate.html) in the *Amazon EC2 User Guide* and the blog [ EC2-Classic Networking is Retiring – Here’s How to Prepare](https://aws.amazon.com/blogs/aws/ec2-classic-is-retiring-heres-how-to-prepare/).


|  Category  | RDS event ID |  Message  |  Notes  | 
| --- | --- | --- | --- | 
|  configuration change  | RDS-EVENT-0038 |  Applied change to security group.  |  None  | 
|  failure  | RDS-EVENT-0039 |  Revoking authorization as *user*.  |  The security group owned by *user* doesn't exist. The authorization for the security group has been revoked because it is invalid.  | 

## DB shard group events


The following table shows the event category and a list of events when a DB shard group is the source type.


|  Category  | RDS event ID |  Message  |  Notes  | 
| --- | --- | --- | --- | 
| failure | RDS-EVENT-0324 |  Data load job failed.  | The data loading job failed for the reason that appears in the error message. | 
| maintenance | RDS-EVENT-0302 |  Your action is required to finalize a pending split shard job *job-ID* for subcluster ID *number* in DB shard group *name*. To complete the operation invoke the SQL: <pre>SELECT rds_aurora.limitless_finalize_split_shard(job_ID);</pre>  |  None  | 
| maintenance | RDS-EVENT-0303 |  Finalize for the split shard job *job-ID* has started for subcluster ID *number* in DB shard group *name*.  |  None  | 
| maintenance | RDS-EVENT-0304 |  Split shard job *job-ID* has completed successfully for subcluster ID *number* in DB shard group *name*. A new shard with subcluster ID *number* was added to DB shard group *name*.  |  None  | 
| maintenance | RDS-EVENT-0305 |  Split shard job *job-ID* has failed for subcluster ID *number* in DB shard group *name*.  |  None  | 
| maintenance | RDS-EVENT-0366 |  Split shard job *job-ID* has started for subcluster ID *number* in DB shard group *name*.  |  None  | 
| maintenance | RDS-EVENT-0367 |  Add router job *job-ID* has started in DB shard group *name*.  |  None  | 
| maintenance | RDS-EVENT-0368 |  Add router job *job-ID* has completed successfully. A new router with subcluster ID *number* was added to DB shard group *name*.  |  None  | 
| maintenance | RDS-EVENT-0369 |  Add router job *job-ID* has failed in DB shard group *name*.  | None | 
| maintenance | RDS-EVENT-0394 |  Add router job *job-ID* has been canceled in DB shard group *name*.  |  None  | 
| maintenance | RDS-EVENT-0395 |  Split shard job *job-ID* has been canceled in DB shard group *name*.  |  None  | 
| notification | RDS-EVENT-0321 |  Initializing infrastructure for the Data Load job.  | None | 
| notification | RDS-EVENT-0322 |  Data load job is in progress.  | None | 
| notification | RDS-EVENT-0323 |  Data load job completed successfully.  | None | 
| notification | RDS-EVENT-0325 |  Canceling Data load job as per customer’s request.  | None | 
| notification | RDS-EVENT-0326 |  Data load job canceled.  | None | 

## RDS Proxy events


The following table shows the event category and a list of events when an RDS Proxy is the source type.


|  Category  | RDS event ID |  Message  |  Notes  | 
| --- | --- | --- | --- | 
| configuration change | RDS-EVENT-0204 |  RDS modified DB proxy *name*.  | None | 
| configuration change | RDS-EVENT-0207 |  RDS modified the end point of the DB proxy *name*.  | None | 
| configuration change | RDS-EVENT-0213 |  RDS detected the addition of the DB instance and automatically added it to the target group of the DB proxy *name*.  | None | 
|  configuration change  | RDS-EVENT-0214 |  RDS detected deletion of DB instance *name* and automatically removed it from target group *name* of DB proxy *name*.  | None | 
|  configuration change  | RDS-EVENT-0215 |  RDS detected deletion of DB cluster *name* and automatically removed it from target group *name* of DB proxy *name*.  | None | 
|  creation  | RDS-EVENT-0203 |  RDS created DB proxy *name*.  | None | 
|  creation  | RDS-EVENT-0206 |  RDS created endpoint *name* for DB proxy *name*.  | None | 
| deletion | RDS-EVENT-0205 |  RDS deleted DB proxy *name*.  | None | 
|  deletion  | RDS-EVENT-0208 |  RDS deleted endpoint *name* for DB proxy *name*.  | None | 
|  failure  | RDS-EVENT-0243 |  RDS failed to provision capacity for proxy *name* because there aren't enough IP addresses available in your subnets: *name*. To fix the issue, make sure that your subnets have the minimum number of unused IP addresses as recommended in the RDS Proxy documentation.  |  To determine the recommended number for your instance class, see [Planning for IP address capacity](rds-proxy-network-prereqs.md#rds-proxy-network-prereqs.plan-ip-address).  | 
|  failure | RDS-EVENT-0275 |  RDS throttled some connections to DB proxy *name*. The number of simultaneous connection requests from the client to the proxy has exceeded the limit.  | None | 

## Blue/green deployment events


The following table shows the event category and a list of events when a blue/green deployment is the source type.

For more information about blue/green deployments, see [Using Amazon Aurora Blue/Green Deployments for database updates](blue-green-deployments.md).


|  Category  | Amazon RDS event ID |  Message  |  Notes  | 
| --- | --- | --- | --- | 
|  creation  | RDS-EVENT-0244 |  Blue/green deployment tasks completed. You can make more modifications to the green environment databases or switch over the deployment.  | None | 
|  failure  | RDS-EVENT-0245 |  Creation of blue/green deployment failed because *reason*.  | None | 
|  deletion  | RDS-EVENT-0246 |  Blue/green deployment deleted.  | None | 
|  notification  | RDS-EVENT-0247 |  Switchover from *blue* to *green* started.  | None | 
|  notification  | RDS-EVENT-0248 |  Switchover completed on blue/green deployment.  | None | 
|  failure  | RDS-EVENT-0249 |  Switchover canceled on blue/green deployment.  | None | 
|  notification  |  RDS-EVENT-0259 |  Switchover from DB cluster *blue* to *green* started.  | None | 
|  notification  |  RDS-EVENT-0260 |  Switchover from DB cluster *blue* to *green* completed. Renamed *blue* to *blue-old* and *green* to *blue*.  | None | 
|  failure  |  RDS-EVENT-0261 |  Switchover from DB cluster *blue* to *green* was canceled due to *reason*.  | None | 
|  notification  |  RDS-EVENT-0311  |  Sequence sync for switchover of DB cluster *blue* to *green* has initiated. Switchover when using sequences may lead to extended downtime.  | None | 
|  notification  |  RDS-EVENT-0312  |  Sequence sync for switchover of DB cluster *blue* to *green* has completed.  | None | 
|  failure  |  RDS-EVENT-0314  |  Sequence sync for switchover of DB cluster *blue* to *green* was cancelled because sequences failed to sync.  | None | 
|  notification  | RDS-EVENT-0409  |  *message*  | None | 

# Monitoring Amazon Aurora log files

Every RDS database engine generates logs that you can access for auditing and troubleshooting. The type of logs depends on your database engine.

You can access database logs for DB instances using the AWS Management Console, the AWS Command Line Interface (AWS CLI), or the Amazon RDS API. You can't view, watch, or download transaction logs.

**Note**  
In some cases, logs contain hidden data. Therefore, the AWS Management Console might show content in a log file, but the log file might be empty when you download it.

**Topics**
+ [Viewing and listing database log files](USER_LogAccess.Procedural.Viewing.md)
+ [Downloading a database log file](USER_LogAccess.Procedural.Downloading.md)
+ [Watching a database log file](USER_LogAccess.Procedural.Watching.md)
+ [Publishing database logs to Amazon CloudWatch Logs](USER_LogAccess.Procedural.UploadtoCloudWatch.md)
+ [Reading log file contents using REST](DownloadCompleteDBLogFile.md)
+ [Aurora MySQL database log files](USER_LogAccess.Concepts.MySQL.md)
+ [Aurora PostgreSQL database log files](USER_LogAccess.Concepts.PostgreSQL.md)

# Viewing and listing database log files


You can view database log files for your Amazon Aurora DB engine by using the AWS Management Console. You can list which log files are available for download or monitoring by using the AWS CLI or Amazon RDS API.

**Note**  
You can't view the log files for Aurora Serverless v1 DB clusters in the RDS console. However, you can view them in the Amazon CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

## Console


**To view a database log file**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the name of the DB instance that has the log file that you want to view.

1. Choose the **Logs & events** tab.

1. Scroll down to the **Logs** section.

1. (Optional) Enter a search term to filter your results.

   The following example lists logs filtered by the text **error**.  
![\[List DB logs\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/ListEventsAMS.png)

1. Choose the log that you want to view, and then choose **View**.

## AWS CLI


To list the available database log files for a DB instance, use the AWS CLI [describe-db-log-files](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-log-files.html) command.

The following example returns a list of log files for a DB instance named `my-db-instance`.

**Example**  

```
aws rds describe-db-log-files --db-instance-identifier my-db-instance
```
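
If an instance has many log files, you can narrow the listing with the `--filename-contains` and `--file-last-written` options. The following sketch (the instance name is an example; `--file-last-written` takes a POSIX timestamp in milliseconds) lists only error logs written in the past hour:

```
# Compute "one hour ago" in milliseconds since the epoch.
one_hour_ago=$((($(date +%s) - 3600) * 1000))

# List error logs modified within the last hour.
aws rds describe-db-log-files \
    --db-instance-identifier my-db-instance \
    --filename-contains error \
    --file-last-written "$one_hour_ago"
```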

## RDS API


To list the available database log files for a DB instance, use the Amazon RDS API [DescribeDBLogFiles](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBLogFiles.html) action.

# Downloading a database log file


You can use the AWS Management Console, AWS CLI, or API to download a database log file. 

## Console


**To download a database log file**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the name of the DB instance that has the log file that you want to view.

1. Choose the **Logs & events** tab.

1. Scroll down to the **Logs** section. 

1. In the **Logs** section, choose the button next to the log that you want to download, and then choose **Download**.

1. Open the context (right-click) menu for the link provided, and then choose **Save Link As**. Enter the location where you want the log file to be saved, and then choose **Save**.  
![\[viewing log file\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/log_download2.png)

## AWS CLI


To download a database log file, use the AWS CLI [download-db-log-file-portion](https://docs.aws.amazon.com/cli/latest/reference/rds/download-db-log-file-portion.html) command. By default, this command downloads only the latest portion of a log file. However, you can download an entire file by specifying the parameter `--starting-token 0`.

The following example shows how to download the entire contents of a log file called *log/ERROR.4* and store it in a local file called *errorlog.txt*.

**Example**  
For Linux, macOS, or Unix:  

```
aws rds download-db-log-file-portion \
    --db-instance-identifier myexampledb \
    --starting-token 0 --output text \
    --log-file-name log/ERROR.4 > errorlog.txt
```
For Windows:  

```
aws rds download-db-log-file-portion ^
    --db-instance-identifier myexampledb ^
    --starting-token 0 --output text ^
    --log-file-name log/ERROR.4 > errorlog.txt
```
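
To fetch every available log file in one pass, you can combine the two commands. The following sketch (the instance name is an example) queries `describe-db-log-files` for the file names and downloads each one:

```
# Download all available log files for the instance.
for name in $(aws rds describe-db-log-files \
        --db-instance-identifier myexampledb \
        --query 'DescribeDBLogFiles[].LogFileName' --output text); do
    # Replace slashes in the log file name so each log
    # saves to a flat local file name.
    aws rds download-db-log-file-portion \
        --db-instance-identifier myexampledb \
        --log-file-name "$name" \
        --starting-token 0 --output text > "${name//\//_}.txt"
done
```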

## RDS API


To download a database log file, use the Amazon RDS API [DownloadDBLogFilePortion](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DownloadDBLogFilePortion.html) action.

# Watching a database log file


Watching a database log file is equivalent to tailing the file on a UNIX or Linux system. You can watch a log file by using the AWS Management Console. RDS refreshes the tail of the log every 5 seconds.

**To watch a database log file**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the name of the DB instance that has the log file that you want to view.

1. Choose the **Logs & events** tab.  
![\[Choose the Logs & events tab\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/Monitoring_logsEvents.png)

1. In the **Logs** section, choose a log file, and then choose **Watch**.  
![\[Choose a log\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/Monitoring_LogsEvents_watch.png)

   RDS shows the tail of the log, as in the following MySQL example.  
![\[Tail of a log file\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/Monitoring_LogsEvents_watch_content.png)
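
If you publish the log to CloudWatch Logs, you can also tail it from the command line with `aws logs tail` (available in AWS CLI v2). The cluster name and log type below are examples:

```
# Follow an exported Aurora error log, starting 10 minutes back.
aws logs tail /aws/rds/cluster/mycluster/error --follow --since 10m
```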

# Publishing database logs to Amazon CloudWatch Logs

In an on-premises database, the database logs reside on the file system. Amazon RDS doesn't provide host access to the database logs on the file system of your DB cluster. For this reason, Amazon RDS lets you export database logs to [Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html). With CloudWatch Logs, you can perform real-time analysis of the log data. You can also store the data in highly durable storage and manage the data with the CloudWatch Logs Agent. 

**Topics**
+ [Overview of RDS integration with CloudWatch Logs](#rds-integration-cw-logs)
+ [Deciding which logs to publish to CloudWatch Logs](#engine-specific-logs)
+ [Specifying the logs to publish to CloudWatch Logs](#integrating_cloudwatchlogs.configure)
+ [Searching and filtering your logs in CloudWatch Logs](#accessing-logs-in-cloudwatch)

## Overview of RDS integration with CloudWatch Logs


In CloudWatch Logs, a *log stream* is a sequence of log events that share the same source. Each separate source of logs in CloudWatch Logs makes up a separate log stream. A *log group* is a group of log streams that share the same retention, monitoring, and access control settings.

Amazon Aurora continuously streams your DB cluster log records to a log group. For example, you have a log group `/aws/rds/cluster/cluster_name/log_type` for each type of log that you publish. This log group is in the same AWS Region as the database instance that generates the log.

AWS retains log data published to CloudWatch Logs for an indefinite time period unless you specify a retention period. For more information, see [Change log data retention in CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html#SettingLogRetention). 
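
For example, you can cap retention with the CloudWatch Logs `put-retention-policy` command. The log group name below follows the pattern described above; the cluster name is an example:

```
# Keep exported Aurora error logs for 30 days instead of indefinitely.
aws logs put-retention-policy \
    --log-group-name /aws/rds/cluster/mycluster/error \
    --retention-in-days 30
```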

## Deciding which logs to publish to CloudWatch Logs


Each RDS database engine supports its own set of logs. To learn about the options for your database engine, review the following topics:
+ [Publishing Amazon Aurora MySQL logs to Amazon CloudWatch Logs](AuroraMySQL.Integrating.CloudWatch.md)
+ [Publishing Aurora PostgreSQL logs to Amazon CloudWatch Logs](AuroraPostgreSQL.CloudWatch.md)

## Specifying the logs to publish to CloudWatch Logs


You specify which logs to publish in the console. Make sure that you have a service-linked role in AWS Identity and Access Management (IAM). For more information about service-linked roles, see [Using service-linked roles for Amazon Aurora](UsingWithRDS.IAM.ServiceLinkedRoles.md).

**To specify the logs to publish**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Do either of the following:
   + Choose **Create database**.
   + Choose a database from the list, and then choose **Modify**.

1. In **Logs exports**, choose which logs to publish.

   The following example specifies the audit log, error log, general log, instance log, IAM database authentication error log, and slow query log for an Aurora MySQL DB cluster.
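
You can also configure log exports from the AWS CLI with `modify-db-cluster`. The following sketch (the cluster identifier is an example) enables export of the error and slow query logs:

```
# Turn on CloudWatch Logs export for the error and slow query logs.
aws rds modify-db-cluster \
    --db-cluster-identifier mycluster \
    --cloudwatch-logs-export-configuration '{"EnableLogTypes":["error","slowquery"]}' \
    --apply-immediately
```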

## Searching and filtering your logs in CloudWatch Logs


You can search for log entries that meet specified criteria using the CloudWatch Logs console. You can access the logs either through the RDS console, which leads you to the CloudWatch Logs console, or from the CloudWatch Logs console directly.

**To search your RDS logs using the RDS console**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose a DB cluster or a DB instance.

1. Choose **Configuration**.

1. Under **Published logs**, choose the database log that you want to view.

**To search your RDS logs using the CloudWatch Logs console**

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the navigation pane, choose **Log groups**.

1. In the filter box, enter **/aws/rds**.

1. For **Log Groups**, choose the name of the log group containing the log stream to search.

1. For **Log Streams**, choose the name of the log stream to search.

1. Under **Log events**, enter the filter syntax to use.

For more information, see [Searching and filtering log data](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/MonitoringLogData.html) in the *Amazon CloudWatch Logs User Guide*. For a blog tutorial explaining how to monitor RDS logs, see [Build proactive database monitoring for Amazon RDS with Amazon CloudWatch Logs, AWS Lambda, and Amazon SNS](https://aws.amazon.com/blogs/database/build-proactive-database-monitoring-for-amazon-rds-with-amazon-cloudwatch-logs-aws-lambda-and-amazon-sns/).
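
You can run the same kind of search from the AWS CLI with `filter-log-events`. The log group and cluster names below are examples; `--start-time` is a POSIX timestamp in milliseconds:

```
# Return events from the past hour that contain the word "ERROR".
aws logs filter-log-events \
    --log-group-name /aws/rds/cluster/mycluster/error \
    --filter-pattern "ERROR" \
    --start-time $((($(date +%s) - 3600) * 1000))
```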

# Reading log file contents using REST


Amazon RDS provides a REST endpoint that allows access to DB instance log files. This is useful if you need to write an application to stream Amazon RDS log file contents.

The syntax is:

```
GET /v13/downloadCompleteLogFile/DBInstanceIdentifier/LogFileName HTTP/1.1
Content-type: application/json
host: rds.region.amazonaws.com
```

The following parameters are required:
+ `DBInstanceIdentifier`—the name of the DB instance that contains the log file you want to download.
+ `LogFileName`—the name of the log file to be downloaded.

The response contains the contents of the requested log file, as a stream.

The following example downloads the log file named *log/ERROR.6* for the DB instance named *sample-sql* in the *us-west-2* region.

```
GET /v13/downloadCompleteLogFile/sample-sql/log/ERROR.6 HTTP/1.1
host: rds.us-west-2.amazonaws.com
X-Amz-Security-Token: AQoDYXdzEIH//////////wEa0AIXLhngC5zp9CyB1R6abwKrXHVR5efnAVN3XvR7IwqKYalFSn6UyJuEFTft9nObglx4QJ+GXV9cpACkETq=
X-Amz-Date: 20140903T233749Z
X-Amz-Algorithm: AWS4-HMAC-SHA256
X-Amz-Credential: AKIADQKE4SARGYLE/20140903/us-west-2/rds/aws4_request
X-Amz-SignedHeaders: host
X-Amz-Content-SHA256: e3b0c44298fc1c229afbf4c8996fb92427ae41e4649b934de495991b7852b855
X-Amz-Expires: 86400
X-Amz-Signature: 353a4f14b3f250142d9afc34f9f9948154d46ce7d4ec091d0cdabbcf8b40c558
```

If you specify a nonexistent DB instance, the response consists of the following error:
+ `DBInstanceNotFound`—`DBInstanceIdentifier` does not refer to an existing DB instance. (HTTP status code: 404)

# Aurora MySQL database log files
MySQL database log files

You can monitor the Aurora MySQL logs directly through the Amazon RDS console, Amazon RDS API, AWS CLI, or AWS SDKs. You can also access MySQL logs by directing the logs to a database table in the main database and querying that table. You can use the mysqlbinlog utility to download a binary log. 

For more information about viewing, downloading, and watching file-based database logs, see [Monitoring Amazon Aurora log files](USER_LogAccess.md).

**Topics**
+ [Overview of Aurora MySQL database logs](USER_LogAccess.MySQL.LogFileSize.md)
+ [Sending Aurora MySQL log output to tables](Appendix.MySQL.CommonDBATasks.Logs.md)
+ [Configuring Aurora MySQL binary logging for Single-AZ databases](USER_LogAccess.MySQL.BinaryFormat.md)
+ [Accessing MySQL binary logs](USER_LogAccess.MySQL.Binarylog.md)

# Overview of Aurora MySQL database logs


You can monitor the following types of Aurora MySQL log files:
+ Error log
+ Slow query log
+ General log
+ Audit log
+ Instance log
+ IAM database authentication error log

The Aurora MySQL error log is generated by default. You can generate the slow query and general logs by setting parameters in your DB parameter group.

**Topics**
+ [Aurora MySQL error logs](#USER_LogAccess.MySQL.Errorlog)
+ [Aurora MySQL slow query and general logs](#USER_LogAccess.MySQL.Generallog)
+ [Aurora MySQL audit log](#ams-audit-log)
+ [Aurora MySQL instance log](#ams-instance-log)
+ [Log rotation and retention for Aurora MySQL](#USER_LogAccess.AMS.LogFileSize.retention)
+ [Publishing Aurora MySQL logs to Amazon CloudWatch Logs](#USER_LogAccess.MySQLDB.PublishAuroraMySQLtoCloudWatchLogs)

## Aurora MySQL error logs


Aurora MySQL writes errors in the `mysql-error.log` file. Each log file has the hour it was generated (in UTC) appended to its name. The log files also have a timestamp that helps you determine when the log entries were written.

Aurora MySQL writes to the error log only on startup, shutdown, and when it encounters errors. A DB instance can go hours or days without new entries being written to the error log. If you see no recent entries, it's because the server didn't encounter an error that would result in a log entry.

By design, the error logs are filtered so that only unexpected events such as errors are shown. However, the error logs also contain some additional database information, such as query progress, which isn't shown. Therefore, even without any actual errors, the size of the error logs can grow because of ongoing database activity. So although you might see a size in bytes or kilobytes for an error log in the AWS Management Console, it can contain 0 bytes when you download it.

Aurora MySQL writes `mysql-error.log` to disk every 5 minutes. It appends the contents of the log to `mysql-error-running.log`.

Aurora MySQL rotates the `mysql-error-running.log` file every hour.

**Note**  
The log retention period is different between Amazon RDS and Aurora.

## Aurora MySQL slow query and general logs


You can write the Aurora MySQL slow query log and the general log to a file or a database table. To do so, set parameters in your DB parameter group. For information about creating and modifying a DB parameter group, see [Parameter groups for Amazon Aurora](USER_WorkingWithParamGroups.md). You must set these parameters before you can view the slow query log or general log in the Amazon RDS console or by using the Amazon RDS API, AWS CLI, or AWS SDKs.

You can control Aurora MySQL logging by using the parameters in this list:
+ `slow_query_log`: To create the slow query log, set to 1. The default is 0.
+ `general_log`: To create the general log, set to 1. The default is 0.
+ `long_query_time`: To prevent fast-running queries from being logged in the slow query log, specify a value for the shortest query runtime to be logged, in seconds. The default is 10 seconds; the minimum is 0. If `log_output = FILE`, you can specify a floating point value with microsecond resolution. If `log_output = TABLE`, you must specify an integer value with second resolution. Only queries whose runtime exceeds the `long_query_time` value are logged. For example, setting `long_query_time` to 0.1 prevents any query that runs for less than 100 milliseconds from being logged.
+ `log_queries_not_using_indexes`: To log all queries that do not use an index to the slow query log, set to 1. Queries that don't use an index are logged even if their runtime is less than the value of the `long_query_time` parameter. The default is 0.
+ `log_output`: You can specify one of the following options for the `log_output` parameter. 
  + **TABLE** – Write general queries to the `mysql.general_log` table, and slow queries to the `mysql.slow_log` table.
  + **FILE** – Write both general and slow query logs to the file system.
  + **NONE** – Disable logging.

  For Aurora MySQL versions 2 and 3, the default for `log_output` is `FILE`.

For slow query data to appear in Amazon CloudWatch Logs, the following conditions must be met:
+ CloudWatch Logs must be configured to include slow query logs.
+ `slow_query_log` must be enabled.
+ `log_output` must be set to `FILE`.
+ The query must take longer than the time configured for `long_query_time`.
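Taken together, the `long_query_time` rules can be summarized with a small predicate. This is an illustrative model of the documented behavior, not an Aurora API; the function name and signature are ours.

```python
def is_logged_in_slow_query_log(
    runtime_seconds: float,
    long_query_time: float = 10.0,
    uses_index: bool = True,
    log_queries_not_using_indexes: bool = False,
    slow_query_log: bool = True,
) -> bool:
    """Sketch of the decision Aurora MySQL makes when writing the slow query log."""
    if not slow_query_log:
        return False
    # Queries that skip indexes are logged regardless of runtime
    # when log_queries_not_using_indexes = 1.
    if log_queries_not_using_indexes and not uses_index:
        return True
    # Otherwise, only queries whose runtime exceeds long_query_time are logged.
    return runtime_seconds > long_query_time
```

For instance, with `long_query_time=0.1`, a 50-millisecond query is skipped while a 200-millisecond query is logged.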

For more information about the slow query and general logs, go to the following topics in the MySQL documentation:
+ [The slow query log](https://dev.mysql.com/doc/refman/8.0/en/slow-query-log.html)
+ [The general query log](https://dev.mysql.com/doc/refman/8.0/en/query-log.html)

## Aurora MySQL audit log


Audit logging for Aurora MySQL is called Advanced Auditing. To turn on Advanced Auditing, you set certain DB cluster parameters. For more information, see [Using Advanced Auditing with an Amazon Aurora MySQL DB cluster](AuroraMySQL.Auditing.md).

## Aurora MySQL instance log


Aurora creates a separate log file for DB instances that have auto-pause enabled. This instance.log file records any reasons why these DB instances couldn't be paused when expected. For more information on instance log file behavior and Aurora auto-pause capability, see [ Monitoring Aurora Serverless v2 pause and resume activity](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2-administration.html#autopause-logging-instance-log).

## Log rotation and retention for Aurora MySQL


When logging is enabled, Amazon Aurora rotates or deletes log files at regular intervals. This measure is a precaution to reduce the possibility of a large log file either blocking database use or affecting performance. Aurora MySQL handles rotation and deletion as follows:
+ The Aurora MySQL error log file sizes are constrained to no more than 15 percent of the local storage for a DB instance. To maintain this threshold, logs are automatically rotated every hour. Aurora MySQL removes error logs after 30 days or when 15 percent of local storage has been consumed. If the combined log file size still exceeds the threshold after old log files are removed, the oldest log files are deleted until the combined size no longer exceeds the threshold.
+ Aurora MySQL removes the audit, general, and slow query logs after 24 hours or when 15 percent of local storage has been consumed.
+ When `FILE` logging is enabled, general log and slow query log files are examined every hour, and log files more than 24 hours old are deleted. In some cases, the remaining combined log file size after the deletion might exceed the threshold of 15 percent of a DB instance's local storage. In these cases, the oldest log files are deleted until the combined size no longer exceeds the threshold.
+ When `TABLE` logging is enabled, log tables aren't rotated or deleted. Log tables are truncated when the size of all logs combined is too large. You can subscribe to the `low storage` event category to be notified when log tables should be manually rotated or deleted to free up space. For more information, see [Working with Amazon RDS event notification](USER_Events.md).

  You can rotate the `mysql.general_log` table manually by calling the `mysql.rds_rotate_general_log` procedure. You can rotate the `mysql.slow_log` table by calling the `mysql.rds_rotate_slow_log` procedure.

  When you rotate log tables manually, the current log table is copied to a backup log table and the entries in the current log table are removed. If the backup log table already exists, then it is deleted before the current log table is copied to the backup. You can query the backup log table if needed. The backup log table for the `mysql.general_log` table is named `mysql.general_log_backup`. The backup log table for the `mysql.slow_log` table is named `mysql.slow_log_backup`.
+ The Aurora MySQL audit logs are rotated when the file size reaches 100 MB, and removed after 24 hours.
+ Amazon RDS rotates IAM database authentication error log files larger than 10 MB. Amazon RDS removes IAM database authentication error log files that are older than five days or larger than 100 MB.

To work with the logs from the Amazon RDS console, Amazon RDS API, AWS CLI, or AWS SDKs, set the `log_output` parameter to `FILE`. Like the Aurora MySQL error log, these log files are rotated hourly, and the log files generated during the previous 24 hours are retained. Note that the retention period is different between Amazon RDS and Aurora.
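The rotation and deletion rules for the general and slow query log files can be approximated with the following sketch. It's a simplified model of the documented policy (delete files older than 24 hours, then trim the oldest files until the combined size falls below 15 percent of local storage), not Aurora's actual implementation.

```python
def files_to_delete(files, local_storage_bytes, max_age_hours=24, threshold=0.15):
    """Return names of log files this cleanup model would delete.

    `files` is a list of (name, age_hours, size_bytes) tuples with unique names.
    First, files older than max_age_hours are removed; if the remaining
    combined size still exceeds `threshold` of local storage, the oldest
    files are dropped until it doesn't.
    """
    deleted = [name for name, age, _ in files if age > max_age_hours]
    remaining = sorted(
        (f for f in files if f[0] not in deleted),
        key=lambda f: f[1],
        reverse=True,  # oldest first
    )
    total = sum(size for _, _, size in remaining)
    limit = threshold * local_storage_bytes
    for name, _, size in remaining:
        if total <= limit:
            break
        deleted.append(name)
        total -= size
    return deleted
```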

## Publishing Aurora MySQL logs to Amazon CloudWatch Logs


You can configure your Aurora MySQL DB cluster to publish log data to a log group in Amazon CloudWatch Logs. With CloudWatch Logs, you can perform real-time analysis of the log data, and use CloudWatch to create alarms and view metrics. You can use CloudWatch Logs to store your log records in highly durable storage. For more information, see [Publishing Amazon Aurora MySQL logs to Amazon CloudWatch Logs](AuroraMySQL.Integrating.CloudWatch.md).

# Sending Aurora MySQL log output to tables


You can direct the general and slow query logs to tables on the DB instance by creating a DB parameter group and setting the `log_output` server parameter to `TABLE`. General queries are then logged to the `mysql.general_log` table, and slow queries are logged to the `mysql.slow_log` table. You can query the tables to access the log information. Enabling this logging increases the amount of data written to the database, which can degrade performance.

Both the general log and the slow query log are disabled by default. To enable logging to tables, you must also set the `general_log` and `slow_query_log` server parameters to `1`.

Log tables keep growing until the respective logging activities are turned off by resetting the appropriate parameter to `0`. A large amount of data often accumulates over time, which can use up a considerable percentage of your allocated storage space. Amazon Aurora doesn't allow you to truncate the log tables, but you can move their contents. Rotating a table saves its contents to a backup table and then creates a new empty log table. You can manually rotate the log tables with the following command line procedures, where the command prompt is indicated by `PROMPT>`: 

```
PROMPT> CALL mysql.rds_rotate_slow_log;
PROMPT> CALL mysql.rds_rotate_general_log;
```

To completely remove the old data and reclaim the disk space, call the appropriate procedure twice in succession. 

# Configuring Aurora MySQL binary logging for Single-AZ databases


The *binary log* is a set of log files that contain information about data modifications made to an Aurora MySQL server instance. The binary log contains information such as the following:
+ Events that describe database changes such as table creation or row modifications
+ Information about the duration of each statement that updated data
+ Events for statements that could have updated data but didn't

The binary log records statements that are sent during replication. It is also required for some recovery operations. For more information, see [The Binary Log](https://dev.mysql.com/doc/refman/8.0/en/binary-log.html) in the MySQL documentation.

Binary logs are accessible only from the primary DB instance, not from the replicas.

MySQL on Amazon Aurora supports the *row-based*, *statement-based*, and *mixed* binary logging formats. We recommend mixed unless you need a specific binlog format. For details on the different Aurora MySQL binary log formats, see [Binary Logging Formats](https://dev.mysql.com/doc/refman/8.0/en/binary-log-formats.html) in the MySQL documentation.

If you plan to use replication, the binary logging format is important because it determines the record of data changes that is recorded in the source and sent to the replication targets. For information about the advantages and disadvantages of different binary logging formats for replication, see [Advantages and Disadvantages of Statement-Based and Row-Based Replication](https://dev.mysql.com/doc/refman/8.0/en/replication-sbr-rbr.html) in the MySQL documentation.

**Important**  
With MySQL 8.0.34, MySQL deprecated the `binlog_format` parameter. In later MySQL versions, MySQL plans to remove the parameter and support only row-based replication. As a result, we recommend using row-based logging for new MySQL replication setups. For more information, see [binlog_format](https://dev.mysql.com/doc/refman/8.0/en/replication-options-binary-log.html#sysvar_binlog_format) in the MySQL documentation.  
MySQL versions 8.0 and 8.4 accept the `binlog_format` parameter but issue a deprecation warning when you use it. In a future major release, MySQL will remove the parameter.  
Statement-based replication can cause inconsistencies between the source DB cluster and a read replica. For more information, see [Determination of Safe and Unsafe Statements in Binary Logging](https://dev.mysql.com/doc/refman/8.0/en/replication-rbr-safe-unsafe.html) in the MySQL documentation.  
Enabling binary logging increases the number of write disk I/O operations to the DB cluster. You can monitor IOPS usage with the `VolumeWriteIOPs` CloudWatch metric.

**To set the MySQL binary logging format**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Parameter groups**.

1. Choose the DB cluster parameter group associated with the DB cluster that you want to modify.

   You can't modify a default parameter group. If the DB cluster is using a default parameter group, create a new parameter group and associate it with the DB cluster.

   For more information on parameter groups, see [Parameter groups for Amazon Aurora](USER_WorkingWithParamGroups.md).

1. From **Actions**, choose **Edit**.

1. Set the `binlog_format` parameter to the binary logging format of your choice (`ROW`, `STATEMENT`, or `MIXED`). You can also use the value `OFF` to turn off binary logging.
**Note**  
Setting `binlog_format` to `OFF` in the DB cluster parameter group disables the `log_bin` session variable. This disables binary logging on the Aurora MySQL DB cluster, which in turn resets the `binlog_format` session variable to the default value of `ROW` in the database.

1. Choose **Save changes** to save the updates to the DB cluster parameter group.

After you perform these steps, you must reboot the writer instance in the DB cluster for your changes to apply. In Aurora MySQL version 2.09 and lower, when you reboot the writer instance, all of the reader instances in the DB cluster are also rebooted. In Aurora MySQL version 2.10 and higher, you must reboot all of the reader instances manually. For more information, see [Rebooting an Amazon Aurora DB cluster or Amazon Aurora DB instance](USER_RebootCluster.md).
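If you script this change instead of using the console, the RDS `ModifyDBClusterParameterGroup` API takes a list of parameter entries. The helper below builds one such entry; the helper itself is ours, so verify the payload against the current API reference before relying on it.

```python
VALID_BINLOG_FORMATS = {"ROW", "STATEMENT", "MIXED", "OFF"}


def binlog_format_parameter(fmt: str) -> dict:
    """Build one parameter entry for ModifyDBClusterParameterGroup.

    binlog_format is a static parameter, so the change takes effect on the
    next reboot of the writer DB instance (ApplyMethod=pending-reboot).
    """
    fmt = fmt.upper()
    if fmt not in VALID_BINLOG_FORMATS:
        raise ValueError(f"unsupported binlog_format: {fmt}")
    return {
        "ParameterName": "binlog_format",
        "ParameterValue": fmt,
        "ApplyMethod": "pending-reboot",
    }


# With the boto3 SDK (not imported here), the entry could be applied as:
#   rds.modify_db_cluster_parameter_group(
#       DBClusterParameterGroupName="my-cluster-params",  # hypothetical group name
#       Parameters=[binlog_format_parameter("ROW")],
#   )
```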

**Important**  
Changing a DB cluster parameter group affects all DB clusters that use that parameter group. If you want to specify different binary logging formats for different Aurora MySQL DB clusters in an AWS Region, the DB clusters must use different DB cluster parameter groups that specify the different logging formats. Assign the appropriate DB cluster parameter group to each DB cluster. For more information about Aurora MySQL parameters, see [Aurora MySQL configuration parameters](AuroraMySQL.Reference.ParameterGroups.md).

# Accessing MySQL binary logs


You can use the mysqlbinlog utility to download or stream binary logs from RDS for MySQL DB instances. The binary log is downloaded to your local computer, where you can perform actions such as replaying the log using the mysql utility. For more information about using the mysqlbinlog utility, see [Using mysqlbinlog to back up binary log files](https://dev.mysql.com/doc/refman/8.0/en/mysqlbinlog-backup.html) in the MySQL documentation.

To run the mysqlbinlog utility against an Amazon RDS instance, use the following options:
+ `--read-from-remote-server` – Required.
+ `--host` – The DNS name from the endpoint of the instance.
+ `--port` – The port used by the instance.
+ `--user` – A MySQL user that has been granted the `REPLICATION SLAVE` permission.
+ `--password` – The password for the MySQL user, or omit a password value so that the utility prompts you for a password.
+ `--raw` – Download the file in binary format.
+ `--result-file` – The local file to receive the raw output.
+ `--stop-never` – Stream the binary log files.
+ `--verbose` – When you use the `ROW` binlog format, include this option to see the row events as pseudo-SQL statements. For more information on the `--verbose` option, see [mysqlbinlog row event display](https://dev.mysql.com/doc/refman/8.0/en/mysqlbinlog-row-events.html) in the MySQL documentation.
+ Specify the names of one or more binary log files. To get a list of the available logs, use the SQL command `SHOW BINARY LOGS`.

For more information about mysqlbinlog options, see [mysqlbinlog — Utility for processing binary log files](https://dev.mysql.com/doc/refman/8.0/en/mysqlbinlog.html) in the MySQL documentation.

The following examples show how to use the mysqlbinlog utility.

For Linux, macOS, or Unix:

```
mysqlbinlog \
    --read-from-remote-server \
    --host=MySQLInstance1.cg034hpkmmjt.region.rds.amazonaws.com \
    --port=3306  \
    --user ReplUser \
    --password \
    --raw \
    --verbose \
    --result-file=/tmp/ \
    binlog.00098
```

For Windows:

```
mysqlbinlog ^
    --read-from-remote-server ^
    --host=MySQLInstance1.cg034hpkmmjt.region.rds.amazonaws.com ^
    --port=3306  ^
    --user ReplUser ^
    --password ^
    --raw ^
    --verbose ^
    --result-file=/tmp/ ^
    binlog.00098
```

Binary logs must remain available on the DB instance for the mysqlbinlog utility to access them. To ensure their availability, use the [mysql.rds_set_configuration](mysql-stored-proc-configuring.md#mysql_rds_set_configuration) stored procedure and specify a retention period long enough for you to download the logs. If this configuration isn't set, Amazon RDS purges the binary logs as soon as possible, leading to gaps in the binary logs that the mysqlbinlog utility retrieves. 

The following example sets the retention period to 1 day.

```
call mysql.rds_set_configuration('binlog retention hours', 24);
```

To display the current setting, use the [mysql.rds_show_configuration](mysql-stored-proc-configuring.md#mysql_rds_show_configuration) stored procedure.

```
call mysql.rds_show_configuration;
```

# Aurora PostgreSQL database log files
PostgreSQL database log files

You can monitor the following types of Aurora PostgreSQL log files:
+ PostgreSQL log
+ Instance log
+ IAM database authentication error log
**Note**  
To enable IAM database authentication error logs, you must first enable IAM database authentication for your Aurora PostgreSQL DB cluster. For more information about enabling IAM database authentication, see [Enabling and disabling IAM database authentication](UsingWithRDS.IAMDBAuth.Enabling.md).

Aurora PostgreSQL logs database activities to the default PostgreSQL log file. For an on-premises PostgreSQL DB instance, these messages are stored locally in `log/postgresql.log`. For an Aurora PostgreSQL DB cluster, the log file is available on the Aurora cluster. These logs are also accessible via the AWS Management Console, where you can view or download them. The default logging level captures login failures, fatal server errors, deadlocks, and query failures.

For more information about how you can view, download, and watch file-based database logs, see [Monitoring Amazon Aurora log files](USER_LogAccess.md). To learn more about PostgreSQL logs, see [Working with Amazon RDS and Aurora PostgreSQL logs: Part 1](https://aws.amazon.com/blogs/database/working-with-rds-and-aurora-postgresql-logs-part-1/) and [ Working with Amazon RDS and Aurora PostgreSQL logs: Part 2](https://aws.amazon.com/blogs/database/working-with-rds-and-aurora-postgresql-logs-part-2/). 

In addition to the standard PostgreSQL logs discussed in this topic, Aurora PostgreSQL also supports the PostgreSQL Audit extension (`pgAudit`). Most regulated industries and government agencies need to maintain an audit log or audit trail of changes made to data to comply with legal requirements. For information about installing and using pgAudit, see [Using pgAudit to log database activity](Appendix.PostgreSQL.CommonDBATasks.pgaudit.md).

Aurora creates a separate log file for DB instances that have auto-pause enabled. This instance.log file records any reasons why these DB instances couldn't be paused when expected. For more information on instance log file behavior and Aurora auto-pause capability, see [ Monitoring Aurora Serverless v2 pause and resume activity](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2-administration.html#autopause-logging-instance-log).

**Topics**
+ [Parameters for logging in Aurora PostgreSQL](USER_LogAccess.Concepts.PostgreSQL.overview.parameter-groups.md)
+ [Turning on query logging for your Aurora PostgreSQL DB cluster](USER_LogAccess.Concepts.PostgreSQL.Query_Logging.md)

# Parameters for logging in Aurora PostgreSQL
Parameters for logging

You can customize the logging behavior for your Aurora PostgreSQL DB cluster by modifying various parameters. The following table lists the parameters that control how long logs are stored, when to rotate a log, whether to output the log in CSV (comma-separated value) format, and the format of text output sent to STDERR, among other settings. To change settings for parameters that are modifiable, use a custom DB cluster parameter group for your Aurora PostgreSQL DB cluster. For more information, see [Parameter groups for Amazon Aurora](USER_WorkingWithParamGroups.md). 


| Parameter | Default | Description | 
| --- | --- | --- | 
| `log_destination` | stderr | Sets the output format for the log. The default is `stderr`, but you can also specify comma-separated value (CSV) format by adding `csvlog` to the setting. For more information, see [Setting the log destination (`stderr`, `csvlog`)](#USER_LogAccess.Concepts.PostgreSQL.Log_Format).  | 
| `log_filename` | postgresql.log.%Y-%m-%d-%H%M  | Specifies the pattern for the log file name. In addition to the default, this parameter supports `postgresql.log.%Y-%m-%d` and `postgresql.log.%Y-%m-%d-%H` for the filename pattern. For Aurora PostgreSQL version 17.4 and later, you can't modify this parameter.  | 
| `log_line_prefix` | %t:%r:%u@%d:[%p]: | Defines the prefix for each log line that gets written to `stderr`, noting the time (%t), remote host (%r), user (%u), database (%d), and process ID (%p). | 
| `log_rotation_age` | 60 | The number of minutes after which the log file is automatically rotated. You can change this value within the range of 1 to 1,440 minutes. For more information, see [Setting log file rotation](#USER_LogAccess.Concepts.PostgreSQL.log_rotation).  | 
| `log_rotation_size` | – | The size (in kilobytes) at which the log is automatically rotated. You can change this value within the range of 50,000 to 1,000,000 kilobytes. To learn more, see [Setting log file rotation](#USER_LogAccess.Concepts.PostgreSQL.log_rotation). | 
| `rds.log_retention_period` | 4320 | PostgreSQL logs that are older than the specified number of minutes are deleted. The default value of 4320 minutes deletes log files after 3 days. For more information, see [Setting the log retention period](#USER_LogAccess.Concepts.PostgreSQL.log_retention_period). | 

To identify application issues, you can look for query failures, login failures, deadlocks, and fatal server errors in the log. For example, suppose that you converted a legacy application from Oracle to Aurora PostgreSQL, but not all queries converted correctly. These incorrectly formatted queries generate error messages that you can find in the logs to help identify problems. For more information about logging queries, see [Turning on query logging for your Aurora PostgreSQL DB cluster ](USER_LogAccess.Concepts.PostgreSQL.Query_Logging.md). 

In the following topics, you can find information about how to set various parameters that control the basic details for your PostgreSQL logs. 

**Topics**
+ [Setting the log retention period](#USER_LogAccess.Concepts.PostgreSQL.log_retention_period)
+ [Setting log file rotation](#USER_LogAccess.Concepts.PostgreSQL.log_rotation)
+ [Setting the log destination (`stderr`, `csvlog`)](#USER_LogAccess.Concepts.PostgreSQL.Log_Format)
+ [Understanding the log_line_prefix parameter](#USER_LogAccess.Concepts.PostgreSQL.Log_Format.log-line-prefix)

## Setting the log retention period


The `rds.log_retention_period` parameter specifies how long your Aurora PostgreSQL DB cluster keeps its log files. The default setting is 3 days (4,320 minutes), but you can set this value to anywhere from 1 day (1,440 minutes) to 7 days (10,080 minutes). Be sure that your Aurora PostgreSQL DB cluster has sufficient storage to hold the log files for the period of time.
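As a quick sanity check before changing the parameter, the documented bounds can be encoded in a small helper (ours, purely illustrative):

```python
def validate_log_retention_period(minutes: int) -> int:
    """Validate rds.log_retention_period for Aurora PostgreSQL.

    The allowed range is 1 day (1,440 minutes) to 7 days (10,080 minutes);
    the default is 4,320 minutes (3 days).
    """
    if not 1440 <= minutes <= 10080:
        raise ValueError("rds.log_retention_period must be 1440-10080 minutes")
    return minutes


def retention_days(minutes: int) -> float:
    """Convert a retention period in minutes to days for readability."""
    return minutes / 1440
```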

We recommend that you have your logs routinely published to Amazon CloudWatch Logs so that you can view and analyze system data long after the logs have been removed from your Aurora PostgreSQL DB cluster. For more information, see [Publishing Aurora PostgreSQL logs to Amazon CloudWatch Logs](AuroraPostgreSQL.CloudWatch.md). After you set up CloudWatch publishing, Aurora doesn't delete a log until after it's published to CloudWatch Logs.  

Amazon Aurora compresses older PostgreSQL logs when storage for the DB instance reaches a threshold. Aurora compresses the files using the gzip compression utility. For more information, see the [gzip](https://www.gzip.org) website.

When storage for the DB instance is low and all available logs are compressed, you get a warning such as the following:

```
Warning: local storage for PostgreSQL log files is critically low for 
this Aurora PostgreSQL instance, and could lead to a database outage.
```

If there's not enough storage, Aurora might delete compressed PostgreSQL logs before the end of a specified retention period. If that happens, you see a message similar to the following:

```
The oldest PostgreSQL log files were deleted due to local storage constraints.
```

## Setting log file rotation


Aurora creates new log files every hour by default. The timing is controlled by the `log_rotation_age` parameter. This parameter has a default value of 60 (minutes), but you can set it to anywhere from 1 minute to 24 hours (1,440 minutes). When it's time for rotation, a new distinct log file is created. The file is named according to the pattern specified by the `log_filename` parameter. 

Log files can also be rotated according to their size, as specified in the `log_rotation_size` parameter. This parameter specifies that the log should be rotated when it reaches the specified size (in kilobytes). The default `log_rotation_size` is 100000 kB (kilobytes) for an Aurora PostgreSQL DB cluster, but you can set this value to anywhere from 50,000 to 1,000,000 kilobytes. 

The log file names are based on the file name pattern specified in the `log_filename` parameter. The available settings for this parameter are as follows:
+ `postgresql.log.%Y-%m-%d` – Default format for the log file name. Includes the year, month, and date in the name of the log file.
+ `postgresql.log.%Y-%m-%d-%H` – Includes the hour in the log file name format.
+ `postgresql.log.%Y-%m-%d-%H%M` – Includes hour:minute in the log file name format.

If you set the `log_rotation_age` parameter to less than 60 minutes, set the `log_filename` parameter to the minute format.
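The `log_filename` settings are ordinary `strftime` patterns, so you can preview the file names a given rotation schedule produces. This helper is illustrative only; the pattern list mirrors the supported settings above.

```python
from datetime import datetime

LOG_FILENAME_PATTERNS = (
    "postgresql.log.%Y-%m-%d",
    "postgresql.log.%Y-%m-%d-%H",
    "postgresql.log.%Y-%m-%d-%H%M",
)


def rotated_log_name(pattern: str, when: datetime) -> str:
    """Render the log file name Aurora PostgreSQL would use at rotation time."""
    if pattern not in LOG_FILENAME_PATTERNS:
        raise ValueError(f"unsupported log_filename pattern: {pattern}")
    return when.strftime(pattern)
```

For example, with sub-hourly rotation the minute pattern keeps each rotation's file name distinct.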

For more information, see [https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-ROTATION-AGE](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-ROTATION-AGE) and [https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-ROTATION-SIZE](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-ROTATION-SIZE) in the PostgreSQL documentation.

## Setting the log destination (`stderr`, `csvlog`)


By default, Aurora PostgreSQL generates logs in standard error (stderr) format. This format is the default setting for the `log_destination` parameter. Each message is prefixed using the pattern specified in the `log_line_prefix` parameter. For more information, see [Understanding the log_line_prefix parameter](#USER_LogAccess.Concepts.PostgreSQL.Log_Format.log-line-prefix). 

Aurora PostgreSQL can also generate the logs in `csvlog` format. The `csvlog` is useful for analyzing the log data as comma-separated values (CSV) data. For example, suppose that you use the `log_fdw` extension to work with your logs as foreign tables. The foreign table created on `stderr` log files contains a single column with log event data. By adding `csvlog` to the `log_destination` parameter, you get the log file in the CSV format with demarcations for the multiple columns of the foreign table. You can now sort and analyze your logs more easily. 

If you specify `csvlog` for this parameter, be aware that both `stderr` and `csvlog` files are generated. Be sure to monitor the storage consumed by the logs, taking into account the `rds.log_retention_period` and other settings that affect log storage and turnover. Using `stderr` and `csvlog` more than doubles the storage consumed by the logs.

If you add `csvlog` to `log_destination` and you want to revert to the `stderr` alone, you need to reset the parameter. To do so, open the Amazon RDS Console and then open the custom DB cluster parameter group for your instance. Choose the `log_destination` parameter, choose **Edit parameter**, and then choose **Reset**. 

For more information about configuring logging, see [ Working with Amazon RDS and Aurora PostgreSQL logs: Part 1](https://aws.amazon.com/blogs/database/working-with-rds-and-aurora-postgresql-logs-part-1/).

## Understanding the log_line_prefix parameter


The `stderr` log format prefixes each log message with the details specified by the `log_line_prefix` parameter. The default value is:

```
%t:%r:%u@%d:[%p]:
```

Starting from Aurora PostgreSQL version 16, you can also choose:

```
%m:%r:%u@%d:[%p]:%l:%e:%s:%v:%x:%c:%q%a
```

Each log entry sent to stderr includes the following information based on the selected value:
+ `%t` – Time of log entry without milliseconds
+ `%m` – Time of log entry with milliseconds
+ `%r` – Remote host address
+ `%u@%d` – User name @ database name
+ `[%p]` – Process ID if available
+ `%l` – Log line number per session
+ `%e` – SQL error code
+ `%s` – Process start timestamp
+ `%v` – Virtual transaction ID
+ `%x` – Transaction ID
+ `%c` – Session ID
+ `%q` – Non-session terminator
+ `%a` – Application name
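For example, the following sketch pulls the `%u@%d` and `[%p]` fields out of a log entry that uses the default prefix. The sample entry is hypothetical, and the `sed` patterns assume the default `log_line_prefix` shown above, where the timestamp itself contains colons and so must be matched explicitly.

```shell
# Hypothetical log entry using the default stderr prefix.
entry='2022-10-05 22:05:52 UTC:52.95.4.1(11335):postgres@labdb:[3639]:LOG: statement: SELECT 1;'

# %u@%d is the third colon-delimited group after the timestamp.
userdb=$(echo "$entry" | sed -n 's/^[0-9-]* [0-9:]* UTC:[^:]*:\([^:]*\):.*/\1/p')
# [%p] is the process ID in square brackets.
pid=$(echo "$entry" | sed -n 's/.*:\[\([0-9]*\)\]:.*/\1/p')
echo "user@db=$userdb pid=$pid"
```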

# Turning on query logging for your Aurora PostgreSQL DB cluster
Turning on query logging

You can collect more detailed information about your database activities, including queries, queries waiting for locks, checkpoints, and many other details by setting some of the parameters listed in the following table. This topic focuses on logging queries.


| Parameter | Default | Description | 
| --- | --- | --- | 
| log\_connections | – | Logs each successful connection. To learn how to use this parameter with `log_disconnections` to detect connection churn, see [Managing Aurora PostgreSQL connection churn with pooling](AuroraPostgreSQL.BestPractices.connection_pooling.md).  | 
| log\_disconnections | – | Logs the end of each session and its duration. To learn how to use this parameter with `log_connections` to detect connection churn, see [Managing Aurora PostgreSQL connection churn with pooling](AuroraPostgreSQL.BestPractices.connection_pooling.md). | 
| log\_checkpoints | – |  Not applicable for Aurora PostgreSQL | 
| log\_lock\_waits | – | Logs long lock waits. By default, this parameter isn't set. | 
| log\_min\_duration\_sample | – | (ms) Sets the minimum execution time above which a sample of statements is logged. Sample size is set using the log\_statement\_sample\_rate parameter. | 
| log\_min\_duration\_statement | – | Any SQL statement that runs for at least the specified amount of time gets logged. By default, this parameter isn't set. Turning on this parameter can help you find unoptimized queries. | 
| log\_statement | – | Sets the type of statements logged. By default, this parameter isn't set, but you can change it to `all`, `ddl`, or `mod` to specify the types of SQL statements that you want logged. If you specify anything other than `none` for this parameter, you should also take additional steps to prevent the exposure of passwords in the log files. For more information, see [Mitigating risk of password exposure when using query logging](#USER_LogAccess.Concepts.PostgreSQL.Query_Logging.mitigate-risk).  | 
| log\_statement\_sample\_rate | – | The percentage of statements exceeding the time specified in `log_min_duration_sample` to be logged, expressed as a floating point value between 0.0 and 1.0.  | 
| log\_statement\_stats | – | Writes cumulative performance statistics to the server log. | 
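For example, you might set some of these parameters on your custom DB cluster parameter group with the AWS CLI. The parameter group name below is hypothetical, and the 5000 ms threshold is just an illustration; substitute your own values.

```shell
# Hypothetical parameter group name; substitute your own custom
# DB cluster parameter group. Both parameters are dynamic, so
# ApplyMethod=immediate applies them without a reboot.
aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-aurora-pg-params \
    --parameters "ParameterName=log_min_duration_statement,ParameterValue=5000,ApplyMethod=immediate" \
                 "ParameterName=log_connections,ParameterValue=1,ApplyMethod=immediate"
```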

## Using logging to find slow performing queries
Using logging to find slow performing queries

You can log SQL statements and queries to help find slow performing queries. You turn on this capability by modifying the settings of the `log_statement` and `log_min_duration_statement` parameters as outlined in this section. Before turning on query logging for your Aurora PostgreSQL DB cluster, be aware of possible password exposure in the logs and how to mitigate the risks. For more information, see [Mitigating risk of password exposure when using query logging](#USER_LogAccess.Concepts.PostgreSQL.Query_Logging.mitigate-risk). 

Following, you can find reference information about the `log_statement` and `log_min_duration_statement` parameters.

**log\_statement**

This parameter specifies the type of SQL statements that should get sent to the log. The default value is `none`. If you change this parameter to `all`, `ddl`, or `mod`, be sure to apply recommended actions to mitigate the risk of exposing passwords in the logs. For more information, see [Mitigating risk of password exposure when using query logging](#USER_LogAccess.Concepts.PostgreSQL.Query_Logging.mitigate-risk). 

**all**  
Logs all statements. This setting is recommended for debugging purposes.

**ddl**  
Logs all data definition language (DDL) statements, such as CREATE, ALTER, DROP, and so on.

**mod**  
Logs all DDL statements and data manipulation language (DML) statements, such as INSERT, UPDATE, and DELETE, which modify the data.

**none**  
No SQL statements get logged. We recommend this setting to avoid the risk of exposing passwords in the logs.

**log\_min\_duration\_statement**

Any SQL statement that runs for at least the specified amount of time gets logged. By default, this parameter isn't set. Turning on this parameter can help you find unoptimized queries.

**-1 to 2147483647**  
The number of milliseconds (ms) of runtime over which a statement gets logged.

**To set up query logging**

These steps assume that your Aurora PostgreSQL DB cluster uses a custom DB cluster parameter group. 

1. Set the `log_statement` parameter to `all`. The following example shows the information that is written to the `postgresql.log` file with this parameter setting.

   ```
   2022-10-05 22:05:52 UTC:52.95.4.1(11335):postgres@labdb:[3639]:LOG: statement: SELECT feedback, s.sentiment,s.confidence
   FROM support,aws_comprehend.detect_sentiment(feedback, 'en') s
   ORDER BY s.confidence DESC;
   2022-10-05 22:05:52 UTC:52.95.4.1(11335):postgres@labdb:[3639]:LOG: QUERY STATISTICS
   2022-10-05 22:05:52 UTC:52.95.4.1(11335):postgres@labdb:[3639]:DETAIL: ! system usage stats:
   ! 0.017355 s user, 0.000000 s system, 0.168593 s elapsed
   ! [0.025146 s user, 0.000000 s system total]
   ! 36644 kB max resident size
   ! 0/8 [0/8] filesystem blocks in/out
   ! 0/733 [0/1364] page faults/reclaims, 0 [0] swaps
   ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent
   ! 19/0 [27/0] voluntary/involuntary context switches
   2022-10-05 22:05:52 UTC:52.95.4.1(11335):postgres@labdb:[3639]:STATEMENT: SELECT feedback, s.sentiment,s.confidence
   FROM support,aws_comprehend.detect_sentiment(feedback, 'en') s
   ORDER BY s.confidence DESC;
   2022-10-05 22:05:56 UTC:52.95.4.1(11335):postgres@labdb:[3639]:ERROR: syntax error at or near "ORDER" at character 1
   2022-10-05 22:05:56 UTC:52.95.4.1(11335):postgres@labdb:[3639]:STATEMENT: ORDER BY s.confidence DESC;
   ----------------------- END OF LOG ----------------------
   ```

1. Set the `log_min_duration_statement` parameter. The following example shows the information that is written to the `postgresql.log` file when the parameter is set to `1`.

   Queries that run longer than the duration specified in the `log_min_duration_statement` parameter are logged. You can view the log file for your Aurora PostgreSQL DB cluster in the Amazon RDS console. 

   ```
   2022-10-05 19:05:19 UTC:52.95.4.1(6461):postgres@labdb:[6144]:LOG: statement: DROP table comments;
   2022-10-05 19:05:19 UTC:52.95.4.1(6461):postgres@labdb:[6144]:LOG: duration: 167.754 ms
   2022-10-05 19:08:07 UTC::@:[355]:LOG: checkpoint starting: time
   2022-10-05 19:08:08 UTC::@:[355]:LOG: checkpoint complete: wrote 11 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=1.013 s, sync=0.006 s, total=1.033 s; sync files=8, longest=0.004 s, average=0.001 s; distance=131028 kB, estimate=131028 kB
   ----------------------- END OF LOG ----------------------
   ```
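To find the slowest statements, you can filter the log for `duration:` lines above a threshold. The following sketch runs against a small inline sample in the same format as the log above; for a real cluster you would first download the log file, for example with `aws rds download-db-log-file-portion`.

```shell
# Write a small sample log in the format shown above.
cat > /tmp/postgresql.sample.log <<'EOF'
2022-10-05 19:05:19 UTC:52.95.4.1(6461):postgres@labdb:[6144]:LOG: statement: DROP table comments;
2022-10-05 19:05:19 UTC:52.95.4.1(6461):postgres@labdb:[6144]:LOG: duration: 167.754 ms
2022-10-05 19:06:02 UTC:52.95.4.1(6461):postgres@labdb:[6144]:LOG: duration: 12.104 ms
EOF

# Keep only durations greater than 100 ms.
slow=$(awk -F'duration: ' '/LOG: duration:/ {split($2, a, " "); if (a[1]+0 > 100) print a[1]}' /tmp/postgresql.sample.log)
echo "$slow"
```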

### Mitigating risk of password exposure when using query logging
Mitigating password exposure risk

We recommend that you keep `log_statement` set to `none` to avoid exposing passwords. If you set `log_statement` to `all`, `ddl`, or `mod`, we recommend that you take one or more of the following steps.
+ For the client, encrypt sensitive information. For more information, see [Encryption Options](https://www.postgresql.org/docs/current/encryption-options.html) in the PostgreSQL documentation. Use the `ENCRYPTED` (and `UNENCRYPTED`) options of the `CREATE` and `ALTER` statements. For more information, see [CREATE USER](https://www.postgresql.org/docs/current/sql-createuser.html) in the PostgreSQL documentation.
+ For your Aurora PostgreSQL DB cluster, set up and use the PostgreSQL Auditing (pgAudit) extension. This extension redacts sensitive information in CREATE and ALTER statements sent to the log. For more information, see [Using pgAudit to log database activity](Appendix.PostgreSQL.CommonDBATasks.pgaudit.md). 
+ Restrict access to the CloudWatch logs.
+ Use stronger authentication mechanisms such as IAM.

 

# Monitoring Amazon Aurora API calls in AWS CloudTrail
Monitoring Aurora API calls in CloudTrail

AWS CloudTrail is an AWS service that helps you audit your AWS account. AWS CloudTrail is turned on for your AWS account when you create it. For more information about CloudTrail, see the [AWS CloudTrail User Guide](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/).

**Topics**
+ [CloudTrail integration with Amazon Aurora](#service-name-info-in-cloudtrail)
+ [Amazon Aurora log file entries](#understanding-service-name-entries)

## CloudTrail integration with Amazon Aurora


All Amazon Aurora actions are logged by CloudTrail. CloudTrail provides a record of actions taken by a user, role, or an AWS service in Amazon Aurora.

### CloudTrail events


CloudTrail captures API calls for Amazon Aurora as events. An event represents a single request from any source and includes information about the requested action, the date and time of the action, request parameters, and so on. Events include calls from the Amazon RDS console and from code calls to the Amazon RDS API operations. 

Amazon Aurora activity is recorded in a CloudTrail event in **Event history**. You can use the CloudTrail console to view the last 90 days of recorded API activity and events in an AWS Region. For more information, see [Viewing events with CloudTrail event history](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/view-cloudtrail-events.html). 

### CloudTrail trails


For an ongoing record of events in your AWS account, including events for Amazon Aurora, create a trail. A trail is a configuration that enables delivery of events to a specified Amazon S3 bucket. CloudTrail typically delivers log files within 15 minutes of account activity.

**Note**  
If you don't configure a trail, you can still view the most recent events in the CloudTrail console in **Event history**.

You can create two types of trails for an AWS account: a trail that applies to all Regions, or a trail that applies to one Region. By default, when you create a trail in the console, the trail applies to all Regions. 

Additionally, you can configure other AWS services to further analyze and act upon the event data collected in CloudTrail logs. For more information, see: 
+ [Overview for creating a trail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-and-update-a-trail.html)
+ [CloudTrail supported services and integrations](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-aws-service-specific-topics.html#cloudtrail-aws-service-specific-topics-integrations)
+ [Configuring Amazon SNS notifications for CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/getting_notifications_top_level.html)
+ [Receiving CloudTrail log files from multiple Regions](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html) and [Receiving CloudTrail log files from multiple accounts](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-receive-logs-from-multiple-accounts.html)

## Amazon Aurora log file entries


CloudTrail log files contain one or more log entries. CloudTrail log files are not an ordered stack trace of the public API calls, so they do not appear in any specific order. 

The following example shows a CloudTrail log entry that demonstrates the `CreateDBInstance` action.

```
{
    "eventVersion": "1.04",
    "userIdentity": {
        "type": "IAMUser",
        "principalId": "AKIAIOSFODNN7EXAMPLE",
        "arn": "arn:aws:iam::123456789012:user/johndoe",
        "accountId": "123456789012",
        "accessKeyId": "AKIAI44QH8DHBEXAMPLE",
        "userName": "johndoe"
    },
    "eventTime": "2018-07-30T22:14:06Z",
    "eventSource": "rds.amazonaws.com",
    "eventName": "CreateDBInstance",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "192.0.2.0",
    "userAgent": "aws-cli/1.15.42 Python/3.6.1 Darwin/17.7.0 botocore/1.10.42",
    "requestParameters": {
        "enableCloudwatchLogsExports": [
            "audit",
            "error",
            "general",
            "slowquery"
        ],
        "dBInstanceIdentifier": "test-instance",
        "engine": "mysql",
        "masterUsername": "myawsuser",
        "allocatedStorage": 20,
        "dBInstanceClass": "db.m1.small",
        "masterUserPassword": "****"
    },
    "responseElements": {
        "dBInstanceArn": "arn:aws:rds:us-east-1:123456789012:db:test-instance",
        "storageEncrypted": false,
        "preferredBackupWindow": "10:27-10:57",
        "preferredMaintenanceWindow": "sat:05:47-sat:06:17",
        "backupRetentionPeriod": 1,
        "allocatedStorage": 20,
        "storageType": "standard",
        "engineVersion": "8.0.28",
        "dbInstancePort": 0,
        "optionGroupMemberships": [
            {
                "status": "in-sync",
                "optionGroupName": "default:mysql-8-0"
            }
        ],
        "dBParameterGroups": [
            {
                "dBParameterGroupName": "default.mysql8.0",
                "parameterApplyStatus": "in-sync"
            }
        ],
        "monitoringInterval": 0,
        "dBInstanceClass": "db.m1.small",
        "readReplicaDBInstanceIdentifiers": [],
        "dBSubnetGroup": {
            "dBSubnetGroupName": "default",
            "dBSubnetGroupDescription": "default",
            "subnets": [
                {
                    "subnetAvailabilityZone": {"name": "us-east-1b"},
                    "subnetIdentifier": "subnet-cbfff283",
                    "subnetStatus": "Active"
                },
                {
                    "subnetAvailabilityZone": {"name": "us-east-1e"},
                    "subnetIdentifier": "subnet-d7c825e8",
                    "subnetStatus": "Active"
                },
                {
                    "subnetAvailabilityZone": {"name": "us-east-1f"},
                    "subnetIdentifier": "subnet-6746046b",
                    "subnetStatus": "Active"
                },
                {
                    "subnetAvailabilityZone": {"name": "us-east-1c"},
                    "subnetIdentifier": "subnet-bac383e0",
                    "subnetStatus": "Active"
                },
                {
                    "subnetAvailabilityZone": {"name": "us-east-1d"},
                    "subnetIdentifier": "subnet-42599426",
                    "subnetStatus": "Active"
                },
                {
                    "subnetAvailabilityZone": {"name": "us-east-1a"},
                    "subnetIdentifier": "subnet-da327bf6",
                    "subnetStatus": "Active"
                }
            ],
            "vpcId": "vpc-136a4c6a",
            "subnetGroupStatus": "Complete"
        },
        "masterUsername": "myawsuser",
        "multiAZ": false,
        "autoMinorVersionUpgrade": true,
        "engine": "mysql",
        "cACertificateIdentifier": "rds-ca-2015",
        "dbiResourceId": "db-ETDZIIXHEWY5N7GXVC4SH7H5IA",
        "dBSecurityGroups": [],
        "pendingModifiedValues": {
            "masterUserPassword": "****",
            "pendingCloudwatchLogsExports": {
                "logTypesToEnable": [
                    "audit",
                    "error",
                    "general",
                    "slowquery"
                ]
            }
        },
        "dBInstanceStatus": "creating",
        "publiclyAccessible": true,
        "domainMemberships": [],
        "copyTagsToSnapshot": false,
        "dBInstanceIdentifier": "test-instance",
        "licenseModel": "general-public-license",
        "iAMDatabaseAuthenticationEnabled": false,
        "performanceInsightsEnabled": false,
        "vpcSecurityGroups": [
            {
                "status": "active",
                "vpcSecurityGroupId": "sg-f839b688"
            }
        ]
    },
    "requestID": "daf2e3f5-96a3-4df7-a026-863f96db793e",
    "eventID": "797163d3-5726-441d-80a7-6eeb7464acd4",
    "eventType": "AwsApiCall",
    "recipientAccountId": "123456789012"
}
```

As shown in the `userIdentity` element in the preceding example, every event or log entry contains information about who generated the request. The identity information helps you determine the following: 
+ Whether the request was made with root or IAM user credentials.
+ Whether the request was made with temporary security credentials for a role or federated user.
+ Whether the request was made by another AWS service.

For more information about the `userIdentity`, see the [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html). For more information about `CreateDBInstance` and other Amazon Aurora actions, see the [Amazon RDS API Reference](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/).
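As a sketch of how you might pull these identity details out of a delivered log entry, the following uses `sed` on a minimal, hypothetical event fragment. For real analysis, a JSON-aware tool such as `jq` or CloudTrail Lake is a better fit than regular expressions.

```shell
# Minimal, hypothetical CloudTrail event fragment.
cat > /tmp/event.json <<'EOF'
{"eventName": "CreateDBInstance", "userIdentity": {"type": "IAMUser", "userName": "johndoe"}}
EOF

# Extract the API action and the calling user.
event=$(sed -n 's/.*"eventName": "\([^"]*\)".*/\1/p' /tmp/event.json)
caller=$(sed -n 's/.*"userName": "\([^"]*\)".*/\1/p' /tmp/event.json)
echo "$event by $caller"
```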

# Monitoring Amazon Aurora with Database Activity Streams
Monitoring Aurora with Database Activity Streams<a name="das"></a>

By using Database Activity Streams, you can monitor near real-time streams of database activity.

**Topics**
+ [Overview of Database Activity Streams](#DBActivityStreams.Overview)
+ [Network prerequisites for Aurora MySQL database activity streams](DBActivityStreams.Prereqs.md)
+ [Starting a database activity stream](DBActivityStreams.Enabling.md)
+ [Getting the status of a database activity stream](DBActivityStreams.Status.md)
+ [Stopping a database activity stream](DBActivityStreams.Disabling.md)
+ [Monitoring database activity streams](DBActivityStreams.Monitoring.md)
+ [IAM policy examples for database activity streams](DBActivityStreams.ManagingAccess.md)

## Overview of Database Activity Streams
Overview

As an Amazon Aurora database administrator, you need to safeguard your database and meet compliance and regulatory requirements. One strategy is to integrate database activity streams with your monitoring tools. In this way, you monitor and set alarms for auditing activity in your Amazon Aurora cluster.

Security threats are both external and internal. To protect against internal threats, you can control administrator access to data streams by configuring the Database Activity Streams feature. DBAs don't have access to the collection, transmission, storage, and processing of the streams.

**Contents**
+ [How database activity streams work](#DBActivityStreams.Overview.how-they-work)
+ [Asynchronous and synchronous mode for database activity streams](#DBActivityStreams.Overview.sync-mode)
+ [Requirements and limitations for database activity streams](#DBActivityStreams.Overview.requirements)
+ [Region and version availability](#DBActivityStreams.Overview.Availability)
+ [Supported DB instance classes for database activity streams](#DBActivityStreams.Overview.requirements.classes)

### How database activity streams work


In Amazon Aurora, you start a database activity stream at the cluster level. All DB instances within your cluster have database activity streams enabled.

Your Aurora DB cluster pushes activities to an Amazon Kinesis data stream in near real time. The Kinesis stream is created automatically. From Kinesis, you can configure AWS services such as Amazon Data Firehose and AWS Lambda to consume the stream and store the data.

**Important**  
Use of the database activity streams feature in Amazon Aurora is free, but Amazon Kinesis charges for a data stream. For more information, see [Amazon Kinesis Data Streams pricing](https://aws.amazon.com/kinesis/data-streams/pricing/).

If you use an Aurora global database, start a database activity stream on each DB cluster separately. Each cluster delivers audit data to its own Kinesis stream within its own AWS Region. The activity streams don't operate differently during a failover. They continue to audit your global database as usual.

You can configure applications for compliance management to consume database activity streams. For Aurora PostgreSQL, compliance applications include IBM's Security Guardium and Imperva's SecureSphere Database Audit and Protection. These applications can use the stream to generate alerts and audit activity on your Aurora DB cluster.

The following graphic shows an Aurora DB cluster configured with Amazon Data Firehose.

![\[Architecture diagram showing database activity streams from an Aurora DB cluster consumed by Firehose\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-das.png)


### Asynchronous and synchronous mode for database activity streams


You can choose to have the database session handle database activity events in either of the following modes:
+ **Asynchronous mode** – When a database session generates an activity stream event, the session returns to normal activities immediately. In the background, the activity stream event is made a durable record. If an error occurs in the background task, an RDS event is sent. This event indicates the beginning and end of any time windows where activity stream event records might have been lost.

  Asynchronous mode favors database performance over the accuracy of the activity stream.
**Note**  
 Asynchronous mode is available for both Aurora PostgreSQL and Aurora MySQL. 
+ **Synchronous mode** – When a database session generates an activity stream event, the session blocks other activities until the event is made durable. If the event can't be made durable for some reason, the database session returns to normal activities. However, an RDS event is sent indicating that activity stream records might be lost for some time. A second RDS event is sent after the system is back to a healthy state.

  The synchronous mode favors the accuracy of the activity stream over database performance.
**Note**  
 Synchronous mode is available for Aurora PostgreSQL. You can't use synchronous mode with Aurora MySQL. 

### Requirements and limitations for database activity streams


In Aurora, database activity streams have the following requirements and limitations:
+ Amazon Kinesis is required for database activity streams.
+ AWS Key Management Service (AWS KMS) is required for database activity streams because they are always encrypted.
+ Applying additional encryption to your Amazon Kinesis data stream is incompatible with database activity streams, which are already encrypted with your AWS KMS key.
+ Start your database activity stream at the DB cluster level. If you add a DB instance to your cluster, you don't need to start an activity stream on the instance: it is audited automatically.
+ In an Aurora global database, make sure to start an activity stream on each DB cluster separately. Each cluster delivers audit data to its own Kinesis stream within its own AWS Region.
+ In Aurora PostgreSQL, make sure to stop the database activity stream before a major version upgrade. You can start the database activity stream after the upgrade completes.

### Region and version availability
Region and version availability

Feature availability and support varies across specific versions of each Aurora database engine, and across AWS Regions. For more information on version and Region availability with Aurora and database activity streams, see [Supported Regions and Aurora DB engines for database activity streams](Concepts.Aurora_Fea_Regions_DB-eng.Feature.DBActivityStreams.md). 

### Supported DB instance classes for database activity streams


For Aurora MySQL, you can use database activity streams with the following DB instance classes:
+ db.r8g.\*large
+ db.r7g.\*large
+ db.r7i.\*large
+ db.r6g.\*large
+ db.r6i.\*large
+ db.r5.\*large
+ db.x2g.\*

For Aurora PostgreSQL, you can use database activity streams with the following DB instance classes:
+ db.r8g.\*large
+ db.r7i.\*large
+ db.r7g.\*large
+ db.r6g.\*large
+ db.r6i.\*large
+ db.r6id.\*large
+ db.r5.\*large
+ db.r4.\*large
+ db.x2g.\*

# Network prerequisites for Aurora MySQL database activity streams
Aurora MySQL network prerequisites

In the following section, you can find how to configure your virtual private cloud (VPC) for use with database activity streams.

**Note**  
Aurora MySQL network prerequisites are applicable to the following engine versions:  
Aurora MySQL version 2, up to 2.11.3
Aurora MySQL version 2.12.0
Aurora MySQL version 3, up to 3.04.2

**Topics**
+ [Prerequisites for AWS KMS endpoints](#DBActivityStreams.Prereqs.KMS)
+ [Prerequisites for public availability](#DBActivityStreams.Prereqs.Public)
+ [Prerequisites for private availability](#DBActivityStreams.Prereqs.Private)

## Prerequisites for AWS KMS endpoints


Instances in an Aurora MySQL cluster that use activity streams must be able to access AWS KMS endpoints. Make sure this requirement is satisfied before enabling database activity streams for your Aurora MySQL cluster. If the Aurora cluster is publicly available, this requirement is satisfied automatically.

**Important**  
If the Aurora MySQL DB cluster can't access the AWS KMS endpoint, the activity stream stops. In that case, Aurora notifies you about this issue using RDS Events. 

## Prerequisites for public availability


For an Aurora DB cluster to be public, it must meet the following requirements:
+ **Publicly Accessible** is **Yes** in the AWS Management Console cluster details page.
+ The DB cluster is in an Amazon VPC public subnet. For more information about publicly accessible DB instances, see [Working with a DB cluster in a VPC](USER_VPC.WorkingWithRDSInstanceinaVPC.md). For more information about public Amazon VPC subnets, see [Your VPC and Subnets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html).

## Prerequisites for private availability


If your Aurora DB cluster is in a VPC public subnet and isn't publicly accessible, it's private. To keep your cluster private and use it with database activity streams, you have the following options:
+ Configure Network Address Translation (NAT) in your VPC. For more information, see [NAT Gateways](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html).
+ Create an AWS KMS endpoint in your VPC. This option is recommended because it's easier to configure.

**To create an AWS KMS endpoint in your VPC**

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. In the navigation pane, choose **Endpoints**.

1. Choose **Create Endpoint**.

   The **Create Endpoint** page appears.

1. Do the following:
   + In **Service category**, choose **AWS services**.
   + In **Service Name**, choose **com.amazonaws.*region*.kms**, where *region* is the AWS Region where your cluster is located.
   + For **VPC**, choose the VPC where your cluster is located.

1. Choose **Create Endpoint**.

For more information about configuring VPC endpoints, see [VPC Endpoints](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html).
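The console steps above can also be performed with the AWS CLI. The VPC ID and Region below are placeholders; substitute the VPC and Region where your cluster is located.

```shell
# Create an interface VPC endpoint for AWS KMS in the cluster's VPC.
# vpc-136a4c6a and us-east-1 are placeholder values.
aws ec2 create-vpc-endpoint \
    --vpc-endpoint-type Interface \
    --vpc-id vpc-136a4c6a \
    --service-name com.amazonaws.us-east-1.kms \
    --region us-east-1
```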

# Starting a database activity stream
Starting a database activity stream

To monitor database activity for all instances in your Aurora DB cluster, start an activity stream at the cluster level. Any DB instances that you add to the cluster are also automatically monitored. If you use an Aurora global database, start a database activity stream on each DB cluster separately. Each cluster delivers audit data to its own Kinesis stream within its own AWS Region.

When you start an activity stream, each database activity event that you configured in the audit policy generates an activity stream event. SQL commands such as `CONNECT` and `SELECT` generate access events. SQL commands such as `CREATE` and `INSERT` generate change events.

------
#### [ Console ]

**To start a database activity stream**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the DB cluster on which you want to start an activity stream. 

1. For **Actions**, choose **Start activity stream**. 

   The **Start database activity stream: ***name* window appears, where *name* is your DB cluster.

1. Enter the following settings:
   + For **AWS KMS key**, choose a key from the list of AWS KMS keys.
**Note**  
 If your Aurora MySQL cluster can't access KMS keys, follow the instructions in [Network prerequisites for Aurora MySQL database activity streams](DBActivityStreams.Prereqs.md) to enable such access first. 

     Aurora uses the KMS key to encrypt the key that in turn encrypts database activity. Choose a KMS key other than the default key. For more information about encryption keys and AWS KMS, see [What is AWS Key Management Service?](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) in the *AWS Key Management Service Developer Guide.*
   + For **Database activity stream mode**, choose **Asynchronous** or **Synchronous**.
**Note**  
This choice applies only to Aurora PostgreSQL. For Aurora MySQL, you can use only asynchronous mode.
   + Choose **Immediately**.

     When you choose **Immediately**, the DB cluster restarts right away. If you choose **During the next maintenance window**, the DB cluster doesn't restart right away. In this case, the database activity stream doesn't start until the next maintenance window.

1. Choose **Start database activity stream**.

   The status for the DB cluster shows that the activity stream is starting.
**Note**  
If you get the error `You can't start a database activity stream in this configuration`, check [Supported DB instance classes for database activity streams](DBActivityStreams.md#DBActivityStreams.Overview.requirements.classes) to see whether your DB cluster is using a supported instance class.

------
#### [ AWS CLI ]

To start database activity streams for a DB cluster, configure the DB cluster using the [start-activity-stream](https://docs.aws.amazon.com/cli/latest/reference/rds/start-activity-stream.html) AWS CLI command with the following parameters:
+ `--resource-arn arn` – Specifies the Amazon Resource Name (ARN) of the DB cluster.
+ `--mode sync-or-async` – Specifies either synchronous (`sync`) or asynchronous (`async`) mode. For Aurora PostgreSQL, you can choose either value. For Aurora MySQL, specify `async`. 
+ `--kms-key-id key` – Specifies the KMS key identifier for encrypting messages in the database activity stream. The AWS KMS key identifier is the key ARN, key ID, alias ARN, or alias name for the AWS KMS key.

The following example starts a database activity stream for a DB cluster in asynchronous mode.

For Linux, macOS, or Unix:

```
aws rds start-activity-stream \
    --mode async \
    --kms-key-id my-kms-key-arn \
    --resource-arn my-cluster-arn \
    --apply-immediately
```

For Windows:

```
aws rds start-activity-stream ^
    --mode async ^
    --kms-key-id my-kms-key-arn ^
    --resource-arn my-cluster-arn ^
    --apply-immediately
```

------
#### [ Amazon RDS API ]

To start database activity streams for a DB cluster, configure the cluster using the [StartActivityStream](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_StartActivityStream.html) operation.

Call the action with the parameters below:
+ `Region`
+ `KmsKeyId`
+ `ResourceArn`
+ `Mode`
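As an illustration, the following sketch assembles these parameters for a call with the AWS SDK for Python (Boto3). The ARN and key values are placeholders, and the AWS Region is supplied through the client configuration rather than as a request parameter.

```python
def build_start_stream_params(resource_arn, kms_key_id,
                              mode="async", apply_immediately=True):
    """Assemble the StartActivityStream parameters described above."""
    return {
        "ResourceArn": resource_arn,          # ARN of the DB cluster
        "KmsKeyId": kms_key_id,               # KMS key ARN, key ID, alias ARN, or alias name
        "Mode": mode,                         # "async" for Aurora MySQL; "sync" or "async" for Aurora PostgreSQL
        "ApplyImmediately": apply_immediately,
    }

params = build_start_stream_params(
    "arn:aws:rds:us-east-1:123456789012:cluster:my-cluster",  # placeholder ARN
    "my-kms-key-arn",                                         # placeholder key
)

# With boto3, the Region is set on the client, not in the request:
# rds = boto3.client("rds", region_name="us-east-1")
# response = rds.start_activity_stream(**params)
```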

------

**Note**  
If you get an error stating that you can't start a database activity stream with the current version of your Aurora PostgreSQL database, apply the latest patch for Aurora PostgreSQL before starting a database activity stream. For information about upgrading your Aurora PostgreSQL database, see [Upgrading Amazon Aurora DB clusters](Aurora.VersionPolicy.Upgrading.md).  
Following are the minimum patch versions to start database activity streams with Aurora PostgreSQL.  
3.4.15 (11.9.15), 11.21.10
12.9.15, 12.15.9, 12.16.10, 12.17.7, 12.18.5, 12.19.4, 12.20.3, 12.22.3
13.9.12, 13.11.9, 13.12.10, 13.13.7, 13.14.5, 13.15.4, 13.16.3, 13.18.3
14.6.12, 14.8.9, 14.9.10, 14.10.7, 14.11.5, 14.12.4, 14.13.3, 14.15.3
15.3.9, 15.4.10, 15.5.7, 15.6.5, 15.7.4, 15.8.3, 15.10.3
16.1.7, 16.2.5, 16.3.4, 16.4.3, 16.6.3

# Getting the status of a database activity stream
Getting the activity stream status

You can get the status of an activity stream using the console or AWS CLI.

## Console


**To get the status of a database activity stream**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**, and then choose the DB cluster link.

1. Choose the **Configuration** tab, and check **Database activity stream** for status.

## AWS CLI


You can get the activity stream configuration for a DB cluster as the response to a [describe-db-clusters](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-clusters.html) CLI request.

The following example describes *my-cluster*.

```
aws rds --region my-region describe-db-clusters --db-cluster-identifier my-cluster
```

The following example shows a JSON response that includes these fields:
+ `ActivityStreamKinesisStreamName`
+ `ActivityStreamKmsKeyId`
+ `ActivityStreamStatus`
+ `ActivityStreamMode`

These fields are the same for Aurora PostgreSQL and Aurora MySQL, except that `ActivityStreamMode` is always `async` for Aurora MySQL, while for Aurora PostgreSQL it might be `sync` or `async`.

```
{
    "DBClusters": [
        {
      "DBClusterIdentifier": "my-cluster",
            ...
            "ActivityStreamKinesisStreamName": "aws-rds-das-cluster-A6TSYXITZCZXJHIRVFUBZ5LTWY",
            "ActivityStreamStatus": "starting",
            "ActivityStreamKmsKeyId": "12345678-abcd-efgh-ijkl-bd041f170262",
            "ActivityStreamMode": "async",
            "DbClusterResourceId": "cluster-ABCD123456"
            ...
        }
    ]
}
```

## RDS API


You can get the activity stream configuration for a DB cluster as the response to a [DescribeDBClusters](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBClusters.html) operation.
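For example, a small helper can pull the activity stream fields out of a `DescribeDBClusters` response. This is a sketch that assumes the response dictionary has the shape shown earlier.

```python
def activity_stream_info(describe_response, cluster_id):
    """Extract the activity stream fields for one cluster from a
    DescribeDBClusters response, or return None if the cluster isn't found."""
    for cluster in describe_response.get("DBClusters", []):
        if cluster.get("DBClusterIdentifier") == cluster_id:
            return {
                "status": cluster.get("ActivityStreamStatus"),
                "mode": cluster.get("ActivityStreamMode"),
                "kinesis_stream": cluster.get("ActivityStreamKinesisStreamName"),
                "kms_key_id": cluster.get("ActivityStreamKmsKeyId"),
            }
    return None

# Sample shaped like the JSON response shown earlier
response = {
    "DBClusters": [{
        "DBClusterIdentifier": "my-cluster",
        "ActivityStreamKinesisStreamName": "aws-rds-das-cluster-A6TSYXITZCZXJHIRVFUBZ5LTWY",
        "ActivityStreamStatus": "starting",
        "ActivityStreamKmsKeyId": "12345678-abcd-efgh-ijkl-bd041f170262",
        "ActivityStreamMode": "async",
    }]
}
info = activity_stream_info(response, "my-cluster")
```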

# Stopping a database activity stream
Stopping a database activity stream

You can stop an activity stream using the console or AWS CLI.

If you delete your DB cluster, the activity stream is stopped and the underlying Amazon Kinesis stream is deleted automatically.

## Console


**To turn off an activity stream**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose a DB cluster that you want to stop the database activity stream for.

1. For **Actions**, choose **Stop activity stream**. The **Database Activity Stream** window appears.

   1. Choose **Immediately**.

      When you choose **Immediately**, the DB cluster restarts right away. If you choose **During the next maintenance window**, the DB cluster doesn't restart right away. In this case, the database activity stream doesn't stop until the next maintenance window.

   1. Choose **Continue**.

## AWS CLI


To stop database activity streams for your DB cluster, configure the DB cluster using the AWS CLI command [stop-activity-stream](https://docs.aws.amazon.com/cli/latest/reference/rds/stop-activity-stream.html). Identify the AWS Region for the DB cluster using the `--region` parameter. The `--apply-immediately` parameter is optional.

For Linux, macOS, or Unix:

```
aws rds --region MY_REGION \
    stop-activity-stream \
    --resource-arn MY_CLUSTER_ARN \
    --apply-immediately
```

For Windows:

```
aws rds --region MY_REGION ^
    stop-activity-stream ^
    --resource-arn MY_CLUSTER_ARN ^
    --apply-immediately
```

## RDS API


To stop database activity streams for your DB cluster, configure the cluster using the [StopActivityStream](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_StopActivityStream.html) operation. Identify the AWS Region for the DB cluster using the `Region` parameter. The `ApplyImmediately` parameter is optional.

# Monitoring database activity streams
Monitoring activity streams

Database activity streams monitor and report activities. The stream of activity is collected and transmitted to Amazon Kinesis. From Kinesis, you can monitor the activity stream, or other services and applications can consume the activity stream for further analysis. You can find the underlying Kinesis stream name by using the AWS CLI command `describe-db-clusters` or the RDS API `DescribeDBClusters` operation.

Aurora manages the Kinesis stream for you as follows:
+ Aurora creates the Kinesis stream automatically with a 24-hour retention period.
+ Aurora scales the Kinesis stream if necessary.
+ If you stop the database activity stream or delete the DB cluster, Aurora deletes the Kinesis stream.

The following categories of activity are monitored and put in the activity stream audit log:
+ **SQL commands** – All SQL commands are audited, including prepared statements, built-in functions, and functions in PL/SQL. Calls to stored procedures are audited, as are any SQL statements issued inside stored procedures or functions.
+ **Other database information** – Activity monitored includes the full SQL statement, the row count of affected rows from DML commands, accessed objects, and the unique database name. For Aurora PostgreSQL, database activity streams also monitor the bind variables and stored procedure parameters. 
**Important**  
The full SQL text of each statement is visible in the activity stream audit log, including any sensitive data. However, database user passwords are redacted if Aurora can determine them from the context, such as in the following SQL statement.   

  ```
  ALTER ROLE role-name WITH password
  ```
+ **Connection information** – Activity monitored includes session and network information, the server process ID, and exit codes.

If an activity stream has a failure while monitoring your DB instance, you are notified through RDS events.

The following sections describe how to access, audit, and process database activity streams.

**Topics**
+ [Accessing an activity stream from Amazon Kinesis](DBActivityStreams.KinesisAccess.md)
+ [Audit log contents and examples for database activity streams](DBActivityStreams.AuditLog.md)
+ [databaseActivityEventList JSON array for database activity streams](DBActivityStreams.AuditLog.databaseActivityEventList.md)
+ [Processing a database activity stream using the AWS SDK](DBActivityStreams.CodeExample.md)

# Accessing an activity stream from Amazon Kinesis
Accessing an activity stream from Kinesis

When you enable an activity stream for a DB cluster, a Kinesis stream is created for you. From Kinesis, you can monitor your database activity in real time. To further analyze database activity, you can connect your Kinesis stream to consumer applications. You can also connect the stream to compliance management applications such as IBM's Security Guardium or Imperva's SecureSphere Database Audit and Protection.

You can access your Kinesis stream either from the RDS console or the Kinesis console.

**To access an activity stream from Kinesis using the RDS console**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the DB cluster on which you started an activity stream.

1. Choose **Configuration**.

1. Under **Database activity stream**, choose the link under **Kinesis stream**.

1. In the Kinesis console, choose **Monitoring** to begin observing the database activity.

**To access an activity stream from Kinesis using the Kinesis console**

1. Open the Kinesis console at [https://console.aws.amazon.com/kinesis](https://console.aws.amazon.com/kinesis).

1. Choose your activity stream from the list of Kinesis streams.

   An activity stream's name includes the prefix `aws-rds-das-cluster-` followed by the resource ID of the DB cluster. The following is an example. 

   ```
   aws-rds-das-cluster-NHVOV4PCLWHGF52NP
   ```

   To use the Amazon RDS console to find the resource ID for the DB cluster, choose your DB cluster from the list of databases, and then choose the **Configuration** tab.

   To use the AWS CLI to find the full Kinesis stream name for an activity stream, use a [describe-db-clusters](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-clusters.html) CLI request and note the value of `ActivityStreamKinesisStreamName` in the response.

1. Choose **Monitoring** to begin observing the database activity.

For more information about using Amazon Kinesis, see [What Is Amazon Kinesis Data Streams?](https://docs.aws.amazon.com/streams/latest/dev/introduction.html).
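As an illustration, the following sketch reads one batch of records from the activity stream through the Kinesis API. It assumes a single shard, and it uses a stand-in client object in place of a real boto3 Kinesis client so that the flow can be exercised without AWS access.

```python
def read_activity_records(kinesis_client, stream_name,
                          shard_id="shardId-000000000000", limit=100):
    """Read one batch of recent records from the activity stream's Kinesis stream.

    Pass a boto3 Kinesis client, for example:
        kinesis_client = boto3.client("kinesis", region_name="us-east-1")
    A stream can have more than one shard; this sketch reads a single shard.
    """
    iterator = kinesis_client.get_shard_iterator(
        StreamName=stream_name,
        ShardId=shard_id,
        ShardIteratorType="LATEST",   # only records written after this call
    )["ShardIterator"]
    result = kinesis_client.get_records(ShardIterator=iterator, Limit=limit)
    # Each record's Data value is an encrypted DatabaseActivityMonitoringRecords payload.
    return [record["Data"] for record in result["Records"]]

# Stand-in client so the sketch can run without AWS access:
class FakeKinesis:
    def get_shard_iterator(self, **kwargs):
        return {"ShardIterator": "iterator-1"}
    def get_records(self, **kwargs):
        return {"Records": [{"Data": b"encrypted-payload"}]}

batch = read_activity_records(FakeKinesis(), "aws-rds-das-cluster-NHVOV4PCLWHGF52NP")
```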

# Audit log contents and examples for database activity streams
Audit logs

Monitored events are represented in the database activity stream as JSON strings. The structure consists of a JSON object containing a `DatabaseActivityMonitoringRecord`, which in turn contains a `databaseActivityEventList` array of activity events. 

**Note**  
For database activity streams, the `paramList` JSON array doesn't include null values from Hibernate applications.

**Topics**
+ [Examples of an audit log for an activity stream](#DBActivityStreams.AuditLog.Examples)
+ [DatabaseActivityMonitoringRecords JSON object](#DBActivityStreams.AuditLog.DatabaseActivityMonitoringRecords)
+ [databaseActivityEvents JSON object](#DBActivityStreams.AuditLog.databaseActivityEvents)

## Examples of an audit log for an activity stream
Audit log examples

Following are sample decrypted JSON audit logs of activity event records.

**Example Activity event record of an Aurora PostgreSQL CONNECT SQL statement**  
The following activity event record shows a login with the use of a `CONNECT` SQL statement (`command`) by a psql client (`clientApplication`).  

```
{
  "type":"DatabaseActivityMonitoringRecords",
  "version":"1.1",
  "databaseActivityEvents": 
    {
      "type":"DatabaseActivityMonitoringRecord",
      "clusterId":"cluster-4HNY5V4RRNPKKYB7ICFKE5JBQQ",
      "instanceId":"db-FZJTMYKCXQBUUZ6VLU7NW3ITCM",
      "databaseActivityEventList":[
        {
          "startTime": "2019-10-30 00:39:49.940668+00",
          "logTime": "2019-10-30 00:39:49.990579+00",
          "statementId": 1,
          "substatementId": 1,
          "objectType": null,
          "command": "CONNECT",
          "objectName": null,
          "databaseName": "postgres",
          "dbUserName": "rdsadmin",
          "remoteHost": "172.31.3.195",
          "remotePort": "49804",
          "sessionId": "5ce5f7f0.474b",
          "rowCount": null,
          "commandText": null,
          "paramList": [],
          "pid": 18251,
          "clientApplication": "psql",
          "exitCode": null,
          "class": "MISC",
          "serverVersion": "2.3.1",
          "serverType": "PostgreSQL",
          "serviceName": "Amazon Aurora PostgreSQL-Compatible edition",
          "serverHost": "172.31.3.192",
          "netProtocol": "TCP",
          "dbProtocol": "Postgres 3.0",
          "type": "record",
          "errorMessage": null
        }
      ]
    },
   "key":"decryption-key"
}
```
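Once a record like the one above is decrypted, a consumer can extract the fields of interest from its `databaseActivityEventList`. The following is a sketch; the sample record is trimmed to a few fields.

```python
import json

def summarize_events(record):
    """Return (command, dbUserName, databaseName) for each audit event in a
    decrypted DatabaseActivityMonitoringRecord, skipping heartbeat events."""
    return [
        (e["command"], e["dbUserName"], e["databaseName"])
        for e in record["databaseActivityEventList"]
        if e.get("type") == "record"
    ]

# Trimmed-down sample based on the CONNECT example above
sample = json.loads("""
{
  "type": "DatabaseActivityMonitoringRecord",
  "clusterId": "cluster-4HNY5V4RRNPKKYB7ICFKE5JBQQ",
  "instanceId": "db-FZJTMYKCXQBUUZ6VLU7NW3ITCM",
  "databaseActivityEventList": [
    {"type": "record", "command": "CONNECT",
     "dbUserName": "rdsadmin", "databaseName": "postgres"},
    {"type": "heartbeat"}
  ]
}
""")
summary = summarize_events(sample)
```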

**Example Activity event record of an Aurora MySQL CONNECT SQL statement**  
The following activity event record shows a login with the use of a `CONNECT` SQL statement (`command`) by a mysql client (`clientApplication`).

```
{
  "type":"DatabaseActivityMonitoringRecord",
  "clusterId":"cluster-some_id",
  "instanceId":"db-some_id",
  "databaseActivityEventList":[
    {
      "logTime":"2020-05-22 18:07:13.267214+00",
      "type":"record",
      "clientApplication":null,
      "pid":2830,
      "dbUserName":"rdsadmin",
      "databaseName":"",
      "remoteHost":"localhost",
      "remotePort":"11053",
      "command":"CONNECT",
      "commandText":"",
      "paramList":null,
      "objectType":"TABLE",
      "objectName":"",
      "statementId":0,
      "substatementId":1,
      "exitCode":"0",
      "sessionId":"725121",
      "rowCount":0,
      "serverHost":"master",
      "serverType":"MySQL",
      "serviceName":"Amazon Aurora MySQL",
      "serverVersion":"MySQL 5.7.12",
      "startTime":"2020-05-22 18:07:13.267207+00",
      "endTime":"2020-05-22 18:07:13.267213+00",
      "transactionId":"0",
      "dbProtocol":"MySQL",
      "netProtocol":"TCP",
      "errorMessage":"",
      "class":"MAIN"
    }
  ]
}
```

**Example Activity event record of an Aurora PostgreSQL CREATE TABLE statement**  
The following example shows a `CREATE TABLE` event for Aurora PostgreSQL.  

```
{
  "type":"DatabaseActivityMonitoringRecords",
  "version":"1.1",
  "databaseActivityEvents": 
    {
      "type":"DatabaseActivityMonitoringRecord",
      "clusterId":"cluster-4HNY5V4RRNPKKYB7ICFKE5JBQQ",
      "instanceId":"db-FZJTMYKCXQBUUZ6VLU7NW3ITCM",
      "databaseActivityEventList":[
        {
          "startTime": "2019-05-24 00:36:54.403455+00",
          "logTime": "2019-05-24 00:36:54.494235+00",
          "statementId": 2,
          "substatementId": 1,
          "objectType": null,
          "command": "CREATE TABLE",
          "objectName": null,
          "databaseName": "postgres",
          "dbUserName": "rdsadmin",
          "remoteHost": "172.31.3.195",
          "remotePort": "34534",
          "sessionId": "5ce73c6f.7e64",
          "rowCount": null,
          "commandText": "create table my_table (id serial primary key, name varchar(32));",
          "paramList": [],
          "pid": 32356,
          "clientApplication": "psql",
          "exitCode": null,
          "class": "DDL",
          "serverVersion": "2.3.1",
          "serverType": "PostgreSQL",
          "serviceName": "Amazon Aurora PostgreSQL-Compatible edition",
          "serverHost": "172.31.3.192",
          "netProtocol": "TCP",
          "dbProtocol": "Postgres 3.0",
          "type": "record",
          "errorMessage": null
        }
      ]
    },
   "key":"decryption-key"
}
```

**Example Activity event record of an Aurora MySQL CREATE TABLE statement**  
The following example shows a `CREATE TABLE` statement for Aurora MySQL. The operation is represented as two separate event records. One event has `"class":"MAIN"`. The other event has `"class":"AUX"`. The messages might arrive in any order. The `logTime` field of the `MAIN` event is always earlier than the `logTime` fields of any corresponding `AUX` events.  
The following example shows the event with a `class` value of `MAIN`.   

```
{
  "type":"DatabaseActivityMonitoringRecord",
  "clusterId":"cluster-some_id",
  "instanceId":"db-some_id",
  "databaseActivityEventList":[
    {
      "logTime":"2020-05-22 18:07:12.250221+00",
      "type":"record",
      "clientApplication":null,
      "pid":2830,
      "dbUserName":"master",
      "databaseName":"test",
      "remoteHost":"localhost",
      "remotePort":"11054",
      "command":"QUERY",
      "commandText":"CREATE TABLE test1 (id INT)",
      "paramList":null,
      "objectType":"TABLE",
      "objectName":"test1",
      "statementId":65459278,
      "substatementId":1,
      "exitCode":"0",
      "sessionId":"725118",
      "rowCount":0,
      "serverHost":"master",
      "serverType":"MySQL",
      "serviceName":"Amazon Aurora MySQL",
      "serverVersion":"MySQL 5.7.12",
      "startTime":"2020-05-22 18:07:12.226384+00",
      "endTime":"2020-05-22 18:07:12.250222+00",
      "transactionId":"0",
      "dbProtocol":"MySQL",
      "netProtocol":"TCP",
      "errorMessage":"",
      "class":"MAIN"
    }
  ]
}
```
 The following example shows the corresponding event with a `class` value of `AUX`.  

```
{
  "type":"DatabaseActivityMonitoringRecord",
  "clusterId":"cluster-some_id",
  "instanceId":"db-some_id",
  "databaseActivityEventList":[
    {
      "logTime":"2020-05-22 18:07:12.247182+00",
      "type":"record",
      "clientApplication":null,
      "pid":2830,
      "dbUserName":"master",
      "databaseName":"test",
      "remoteHost":"localhost",
      "remotePort":"11054",
      "command":"CREATE",
      "commandText":"test1",
      "paramList":null,
      "objectType":"TABLE",
      "objectName":"test1",
      "statementId":65459278,
      "substatementId":2,
      "exitCode":"",
      "sessionId":"725118",
      "rowCount":0,
      "serverHost":"master",
      "serverType":"MySQL",
      "serviceName":"Amazon Aurora MySQL",
      "serverVersion":"MySQL 5.7.12",
      "startTime":"2020-05-22 18:07:12.226384+00",
      "endTime":"2020-05-22 18:07:12.247182+00",
      "transactionId":"0",
      "dbProtocol":"MySQL",
      "netProtocol":"TCP",
      "errorMessage":"",
      "class":"AUX"
    }
  ]
}
```
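Because the `MAIN` and `AUX` messages for one statement can arrive in any order, a consumer can correlate them by their shared `statementId`. The following sketch, using trimmed-down records, groups events that way and puts the `MAIN` record first in each group.

```python
def pair_main_aux(events):
    """Group events that share a statementId and sort each group so the
    MAIN record comes before its corresponding AUX records."""
    groups = {}
    for event in events:
        groups.setdefault(event["statementId"], []).append(event)
    for group in groups.values():
        group.sort(key=lambda e: e["class"] != "MAIN")  # MAIN sorts first
    return groups

# Trimmed-down records based on the CREATE TABLE example above,
# arriving AUX-before-MAIN
events = [
    {"statementId": 65459278, "class": "AUX",  "command": "CREATE"},
    {"statementId": 65459278, "class": "MAIN", "command": "QUERY"},
]
grouped = pair_main_aux(events)
```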

**Example Activity event record of an Aurora PostgreSQL SELECT statement**  
The following example shows a `SELECT` event.  

```
{
  "type":"DatabaseActivityMonitoringRecords",
  "version":"1.1",
  "databaseActivityEvents": 
    {
      "type":"DatabaseActivityMonitoringRecord",
      "clusterId":"cluster-4HNY5V4RRNPKKYB7ICFKE5JBQQ",
      "instanceId":"db-FZJTMYKCXQBUUZ6VLU7NW3ITCM",
      "databaseActivityEventList":[
        {
          "startTime": "2019-05-24 00:39:49.920564+00",
          "logTime": "2019-05-24 00:39:49.940668+00",
          "statementId": 6,
          "substatementId": 1,
          "objectType": "TABLE",
          "command": "SELECT",
          "objectName": "public.my_table",
          "databaseName": "postgres",
          "dbUserName": "rdsadmin",
          "remoteHost": "172.31.3.195",
          "remotePort": "34534",
          "sessionId": "5ce73c6f.7e64",
          "rowCount": 10,
          "commandText": "select * from my_table;",
          "paramList": [],
          "pid": 32356,
          "clientApplication": "psql",
          "exitCode": null,
          "class": "READ",
          "serverVersion": "2.3.1",
          "serverType": "PostgreSQL",
          "serviceName": "Amazon Aurora PostgreSQL-Compatible edition",
          "serverHost": "172.31.3.192",
          "netProtocol": "TCP",
          "dbProtocol": "Postgres 3.0",
          "type": "record",
          "errorMessage": null
        }
      ]
    },
   "key":"decryption-key"
}
```


**Example Activity event record of an Aurora MySQL SELECT statement**  
The following example shows a `SELECT` event.  
 The following example shows the event with a `class` value of `MAIN`.   

```
{
  "type":"DatabaseActivityMonitoringRecord",
  "clusterId":"cluster-some_id",
  "instanceId":"db-some_id",
  "databaseActivityEventList":[
    {
      "logTime":"2020-05-22 18:29:57.986467+00",
      "type":"record",
      "clientApplication":null,
      "pid":2830,
      "dbUserName":"master",
      "databaseName":"test",
      "remoteHost":"localhost",
      "remotePort":"11054",
      "command":"QUERY",
      "commandText":"SELECT * FROM test1 WHERE id < 28",
      "paramList":null,
      "objectType":"TABLE",
      "objectName":"test1",
      "statementId":65469218,
      "substatementId":1,
      "exitCode":"0",
      "sessionId":"726571",
      "rowCount":2,
      "serverHost":"master",
      "serverType":"MySQL",
      "serviceName":"Amazon Aurora MySQL",
      "serverVersion":"MySQL 5.7.12",
      "startTime":"2020-05-22 18:29:57.986364+00",
      "endTime":"2020-05-22 18:29:57.986467+00",
      "transactionId":"0",
      "dbProtocol":"MySQL",
      "netProtocol":"TCP",
      "errorMessage":"",
      "class":"MAIN"
    }
  ]
}
```
 The following example shows the corresponding event with a `class` value of `AUX`.   

```
{
  "type":"DatabaseActivityMonitoringRecord",
  "instanceId":"db-some_id",
  "databaseActivityEventList":[
    {
      "logTime":"2020-05-22 18:29:57.986399+00",
      "type":"record",
      "clientApplication":null,
      "pid":2830,
      "dbUserName":"master",
      "databaseName":"test",
      "remoteHost":"localhost",
      "remotePort":"11054",
      "command":"READ",
      "commandText":"test1",
      "paramList":null,
      "objectType":"TABLE",
      "objectName":"test1",
      "statementId":65469218,
      "substatementId":2,
      "exitCode":"",
      "sessionId":"726571",
      "rowCount":0,
      "serverHost":"master",
      "serverType":"MySQL",
      "serviceName":"Amazon Aurora MySQL",
      "serverVersion":"MySQL 5.7.12",
      "startTime":"2020-05-22 18:29:57.986364+00",
      "endTime":"2020-05-22 18:29:57.986399+00",
      "transactionId":"0",
      "dbProtocol":"MySQL",
      "netProtocol":"TCP",
      "errorMessage":"",
      "class":"AUX"
    }
  ]
}
```

## DatabaseActivityMonitoringRecords JSON object
DatabaseActivityMonitoringRecords

The database activity event records are in a JSON object that contains the following information.


****  

| JSON Field | Data Type | Description | 
| --- | --- | --- | 
|  `type`  | string |  The type of JSON record. The value is `DatabaseActivityMonitoringRecords`.  | 
| version | string |  The version of the database activity monitoring records. The version of the generated database activity records depends on the engine version of the DB cluster: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/DBActivityStreams.AuditLog.html) All of the following fields are in both version 1.0 and version 1.1 except where specifically noted. | 
|  [databaseActivityEvents](#DBActivityStreams.AuditLog.databaseActivityEvents)  | string |  A JSON object that contains the activity events.  | 
| key | string | An encryption key that you use to decrypt the [databaseActivityEventList JSON array](DBActivityStreams.AuditLog.databaseActivityEventList.md). | 

## databaseActivityEvents JSON object
databaseActivityEvents

The `databaseActivityEvents` JSON object contains the following information.

### Top-level fields in JSON record


 Each event in the audit log is wrapped inside a record in JSON format. This record contains the following fields. 

**type**  
 This field always has the value `DatabaseActivityMonitoringRecords`. 

**version**  
 This field represents the version of the database activity stream data protocol or contract. It defines which fields are available.  
Version 1.0 represents the original database activity stream support for Aurora PostgreSQL versions 10.7 and 11.4. Version 1.1 represents the database activity stream support for Aurora PostgreSQL versions 10.10 and higher and 11.5 and higher. Version 1.1 includes the additional fields `errorMessage` and `startTime`. Version 1.2 represents the database activity stream support for Aurora MySQL 2.08 and higher. Version 1.2 includes the additional fields `endTime` and `transactionId`.

**databaseActivityEvents**  
 An encrypted string representing one or more activity events. It's represented as a base64 byte array. When you decrypt the string, the result is a record in JSON format with fields as shown in the examples in this section.

**key**  
 The encrypted data key used to encrypt the `databaseActivityEvents` string. This is the same AWS KMS key that you provided when you started the database activity stream.

 The following example shows the format of this record.

```
{
  "type":"DatabaseActivityMonitoringRecords",
  "version":"1.1",
  "databaseActivityEvents":"encrypted audit records",
  "key":"encrypted key"
}
```

Take the following steps to decrypt the contents of the `databaseActivityEvents` field:

1.  Decrypt the value in the `key` JSON field using the KMS key that you provided when starting the database activity stream. Doing so returns the data encryption key in clear text. 

1.  Base64-decode the value in the `databaseActivityEvents` JSON field to obtain the ciphertext, in binary format, of the audit payload. 

1.  Decrypt the binary ciphertext with the data encryption key that you decrypted in the first step. 

1.  Decompress the decrypted payload. 
   +  The encrypted payload is in the `databaseActivityEvents` field. 
   +  The `databaseActivityEventList` field contains an array of audit records. The `type` fields in the array can be `record` or `heartbeat`. 
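The following sketch illustrates the decoding and decompression steps (assumed here to use zlib). The KMS decryption steps require your AWS KMS key and an envelope-decryption library, so this demonstration compresses a stand-in payload instead of using real ciphertext.

```python
import base64
import json
import zlib

def decode_and_decompress(payload_b64):
    """Base64-decode the databaseActivityEvents value and decompress the
    payload. The KMS decryption of the key and of the binary ciphertext is
    not shown here; this sketch assumes `binary` is already decrypted."""
    binary = base64.b64decode(payload_b64)
    return json.loads(zlib.decompress(binary).decode("utf-8"))

# Round-trip demonstration with a stand-in (unencrypted) payload:
plaintext = json.dumps(
    {"databaseActivityEventList": [{"type": "heartbeat"}]}
).encode("utf-8")
payload = base64.b64encode(zlib.compress(plaintext))
events = decode_and_decompress(payload)
```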

The audit log activity event record is a JSON object that contains the following information.


****  

| JSON Field | Data Type | Description | 
| --- | --- | --- | 
|  `type`  | string |  The type of JSON record. The value is `DatabaseActivityMonitoringRecord`.  | 
| clusterId | string | The DB cluster resource identifier. It corresponds to the DB cluster attribute DbClusterResourceId. | 
| instanceId | string | The DB instance resource identifier. It corresponds to the DB instance attribute DbiResourceId. | 
|  [databaseActivityEventList JSON array](DBActivityStreams.AuditLog.databaseActivityEventList.md)   | string |  An array of activity audit records or heartbeat messages.  | 

# databaseActivityEventList JSON array for database activity streams
databaseActivityEventList JSON array

The audit log payload is an encrypted `databaseActivityEventList` JSON array. The following tables list, in alphabetical order, the fields for each activity event in the decrypted `databaseActivityEventList` array of an audit log. The fields differ depending on whether you use Aurora PostgreSQL or Aurora MySQL. Consult the table that applies to your database engine.

**Important**  
The event structure is subject to change. Aurora might add new fields to activity events in the future. In applications that parse the JSON data, make sure that your code can ignore or take appropriate actions for unknown field names. 
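For example, a parser can copy only the fields it understands and silently ignore anything else. This is a sketch; the field list is illustrative, not exhaustive.

```python
KNOWN_FIELDS = ("class", "command", "commandText", "dbUserName", "logTime")

def extract_known_fields(event, known=KNOWN_FIELDS):
    """Keep only the fields this code understands, tolerating any new
    fields that Aurora adds to activity events in the future."""
    return {name: event[name] for name in known if name in event}

event = {
    "class": "READ",
    "command": "SELECT",
    "commandText": "select * from my_table;",
    "dbUserName": "rdsadmin",
    "logTime": "2019-05-24 00:39:49.940668+00",
    "someFutureField": "ignored",   # hypothetical field from a later version
}
parsed = extract_known_fields(event)
```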

## databaseActivityEventList fields for Aurora PostgreSQL


The following are `databaseActivityEventList` fields for Aurora PostgreSQL.


| Field | Data Type | Description | 
| --- | --- | --- | 
| class | string |  The class of activity event. Valid values for Aurora PostgreSQL are the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/DBActivityStreams.AuditLog.databaseActivityEventList.html)  | 
| clientApplication | string | The application the client used to connect as reported by the client. The client doesn't have to provide this information, so the value can be null. | 
| command | string | The name of the SQL command without any command details. | 
| commandText | string |  The actual SQL statement passed in by the user. For Aurora PostgreSQL, the value is identical to the original SQL statement. This field is used for all types of records except for connect or disconnect records, in which case the value is null.  The full SQL text of each statement is visible in the activity stream audit log, including any sensitive data. However, database user passwords are redacted if Aurora can determine them from the context, such as in the following SQL statement.  <pre>ALTER ROLE role-name WITH password</pre>   | 
| databaseName | string | The database to which the user connected. | 
| dbProtocol | string | The database protocol, for example Postgres 3.0. | 
| dbUserName | string | The database user with which the client authenticated. | 
| errorMessage (version 1.1 database activity records only) | string |  If there was any error, this field is populated with the error message that would've been generated by the DB server. The `errorMessage` value is null for normal statements that didn't result in an error.  An error is defined as any activity that would produce a client-visible PostgreSQL error log event at a severity level of `ERROR` or greater. For more information, see [PostgreSQL Message Severity Levels](https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-SEVERITY-LEVELS). For example, syntax errors and query cancellations generate an error message.  Internal PostgreSQL server errors such as background checkpointer process errors do not generate an error message. However, records for such events are still emitted regardless of the setting of the log severity level. This prevents attackers from turning off logging to avoid detection. See also the `exitCode` field.  | 
| exitCode | int | A value used for a session exit record. On a clean exit, this contains the exit code. An exit code can't always be obtained in some failure scenarios, for example if PostgreSQL does an exit() or if an operator performs a command such as kill -9. If there was any error, the `exitCode` field shows the SQL error code, `SQLSTATE`, as listed in [PostgreSQL Error Codes](https://www.postgresql.org/docs/current/errcodes-appendix.html). See also the `errorMessage` field. | 
| logTime | string | A timestamp as recorded in the auditing code path. This represents the SQL statement execution end time. See also the startTime field. | 
| netProtocol | string | The network communication protocol. | 
| objectName | string | The name of the database object if the SQL statement is operating on one. This field is used only where the SQL statement operates on a database object. If the SQL statement is not operating on an object, this value is null. | 
| objectType | string | The database object type such as table, index, view, and so on. This field is used only where the SQL statement operates on a database object. If the SQL statement is not operating on an object, this value is null. Valid values include the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/DBActivityStreams.AuditLog.databaseActivityEventList.html) | 
| paramList | string | An array of comma-separated parameters passed to the SQL statement. If the SQL statement has no parameters, this value is an empty array. | 
| pid | int | The process ID of the backend process that is allocated for serving the client connection. | 
| remoteHost | string | Either the client IP address or hostname. For Aurora PostgreSQL, which one is used depends on the database's log_hostname parameter setting. The remoteHost value can also be [local] or localhost, which indicates activity from the rdsadmin user. | 
| remotePort | string | The client port number. | 
| rowCount | int | The number of table rows affected or retrieved by the SQL statement. This field is used only for SQL statements that are data manipulation language (DML) statements. If the SQL statement is not a DML statement, this value is null. | 
| serverHost | string | The database server host IP address. The serverHost value can also be [local] or localhost, which indicates activity from the rdsadmin user. | 
| serverType | string | The database server type, for example PostgreSQL. | 
| serverVersion | string | The database server version, for example 2.3.1 for Aurora PostgreSQL. | 
| serviceName | string | The name of the service, for example Amazon Aurora PostgreSQL-Compatible edition.  | 
| sessionId | int | A pseudo-unique session identifier. | 
| startTime(version 1.1 database activity records only) | string |  The time when execution began for the SQL statement.  To calculate the approximate execution time of the SQL statement, use `logTime - startTime`. See also the `logTime` field.  | 
| statementId | int | An identifier for the client's SQL statement. The counter is at the session level and increments with each SQL statement entered by the client.  | 
| substatementId | int | An identifier for a SQL substatement. This value counts the contained substatements for each SQL statement identified by the statementId field. | 
| type | string | The event type. Valid values are record or heartbeat. | 
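As an illustration of how these fields fit together, the following sketch parses a hypothetical version 1.1 Aurora PostgreSQL activity event (all field values are invented for the example) and uses `logTime - startTime` to approximate the statement's execution time, as described above.

```python
import json
from datetime import datetime

# A hypothetical version 1.1 Aurora PostgreSQL activity event.
# Field values are illustrative, not captured from a real stream.
event_json = """{
    "class": "WRITE",
    "commandText": "UPDATE accounts SET balance = balance - 100 WHERE id = $1",
    "databaseName": "postgres",
    "dbUserName": "app_user",
    "paramList": ["7"],
    "rowCount": 1,
    "sessionId": 407478,
    "statementId": 12,
    "substatementId": 1,
    "startTime": "2023-01-01 12:00:00.100000+00:00",
    "logTime": "2023-01-01 12:00:00.350000+00:00",
    "type": "record"
}"""

event = json.loads(event_json)

# logTime marks the end of execution, so logTime - startTime approximates
# the statement's execution time.
start = datetime.fromisoformat(event["startTime"])
end = datetime.fromisoformat(event["logTime"])
duration_ms = (end - start).total_seconds() * 1000
print(f"{event['dbUserName']} ran a {event['class']} statement in {duration_ms:.0f} ms")
```

Timestamp formats in real records may differ slightly from the ISO-style strings shown here; parse them accordingly in a production consumer.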

## databaseActivityEventList fields for Aurora MySQL


The following are `databaseActivityEventList` fields for Aurora MySQL.


| Field | Data Type | Description | 
| --- | --- | --- | 
| class | string |  The class of activity event. Valid values for Aurora MySQL are the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/DBActivityStreams.AuditLog.databaseActivityEventList.html)  | 
| clientApplication | string | The application the client used to connect as reported by the client. The client doesn't have to provide this information, so the value can be null. | 
| command | string |  The general category of the SQL statement. The values for this field depend on the value of `class`. The values when `class` is `MAIN` include the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/DBActivityStreams.AuditLog.databaseActivityEventList.html) The values when `class` is `AUX` include the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/DBActivityStreams.AuditLog.databaseActivityEventList.html)  | 
| commandText | string |  For events with a `class` value of `MAIN`, this field represents the actual SQL statement passed in by the user. This field is used for all types of records except for connect or disconnect records, in which case the value is null.  For events with a `class` value of `AUX`, this field contains supplemental information about the objects involved in the event.  For Aurora MySQL, characters such as quotation marks are preceded by a backslash, representing an escape character.  The full SQL text of each statement is visible in the audit log, including any sensitive data. However, database user passwords are redacted if Aurora can determine them from the context, such as in the following SQL statement.  <pre>mysql> SET PASSWORD = 'my-password';</pre> Specify a password other than the prompt shown here as a security best practice.   | 
| databaseName | string | The database to which the user connected. | 
| dbProtocol | string | The database protocol. Currently, this value is always MySQL for Aurora MySQL. | 
| dbUserName | string | The database user with which the client authenticated. | 
| endTime(version 1.2 database activity records only) | string |  The time when execution ended for the SQL statement. It is represented in Coordinated Universal Time (UTC) format. To calculate the execution time of the SQL statement, use `endTime - startTime`. See also the `startTime` field.  | 
| errorMessage(version 1.1 database activity records only) | string |  If there was any error, this field is populated with the error message that the DB server would have generated. The `errorMessage` value is null for normal statements that didn't result in an error.  An error is defined as any activity that would produce a client-visible MySQL error log event at a severity level of `ERROR` or greater. For more information, see [The Error Log](https://dev.mysql.com/doc/refman/5.7/en/error-log.html) in the *MySQL Reference Manual*. For example, syntax errors and query cancellations generate an error message.  Internal MySQL server errors, such as background checkpointer process errors, do not generate an error message. However, records for such events are still emitted regardless of the log severity level setting. This prevents attackers from turning off logging to avoid detection. See also the `exitCode` field.  | 
| exitCode | int | A value used for a session exit record. On a clean exit, this contains the exit code. In some failure scenarios, an exit code can't be obtained; in such cases, this value might be zero or blank. | 
| logTime | string | A timestamp as recorded in the auditing code path. It is represented in Coordinated Universal Time (UTC) format. For the most accurate way to calculate statement duration, see the startTime and endTime fields. | 
| netProtocol | string | The network communication protocol. Currently, this value is always TCP for Aurora MySQL. | 
| objectName | string | The name of the database object if the SQL statement is operating on one. This field is used only where the SQL statement operates on a database object. If the SQL statement isn't operating on an object, this value is blank. To construct the fully qualified name of the object, combine databaseName and objectName. If the query involves multiple objects, this field can be a comma-separated list of names. | 
| objectType | string |  The database object type such as table, index, and so on. This field is used only where the SQL statement operates on a database object. If the SQL statement is not operating on an object, this value is null. Valid values for Aurora MySQL include the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/DBActivityStreams.AuditLog.databaseActivityEventList.html)  | 
| paramList | string | This field isn't used for Aurora MySQL and is always null. | 
| pid | int | The process ID of the backend process that is allocated for serving the client connection. When the database server is restarted, the pid changes and the counter for the statementId field starts over. | 
| remoteHost | string | Either the IP address or hostname of the client that issued the SQL statement. For Aurora MySQL, which one is used depends on the database's skip_name_resolve parameter setting. The value localhost indicates activity from the rdsadmin special user.  | 
| remotePort | string | The client port number. | 
| rowCount | int | The number of rows returned by the SQL statement. For example, if a SELECT statement returns 10 rows, rowCount is 10. For INSERT or UPDATE statements, rowCount is 0. | 
| serverHost | string | The database server instance identifier. | 
| serverType | string | The database server type, for example MySQL. | 
| serverVersion | string | The database server version. Currently, this value is always MySQL 5.7.12 for Aurora MySQL. | 
| serviceName | string | The name of the service. Currently, this value is always Amazon Aurora MySQL for Aurora MySQL. | 
| sessionId | int | A pseudo-unique session identifier. | 
| startTime(version 1.1 database activity records only) | string |  The time when execution began for the SQL statement. It is represented in Coordinated Universal Time (UTC) format. To calculate the execution time of the SQL statement, use `endTime - startTime`. See also the `endTime` field.  | 
| statementId | int | An identifier for the client's SQL statement. The counter increments with each SQL statement entered by the client. The counter is reset when the DB instance is restarted. | 
| substatementId | int | An identifier for a SQL substatement. This value is 1 for events with class MAIN and 2 for events with class AUX. Use the statementId field to identify all the events generated by the same statement. | 
| transactionId(version 1.2 database activity records only) | int | An identifier for a transaction. | 
| type | string | The event type. Valid values are record or heartbeat. | 
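A consumer can combine several of the fields above. The following sketch, using invented field values, builds fully qualified object names from `databaseName` and a comma-separated `objectName`, and computes `endTime - startTime` for a hypothetical version 1.2 Aurora MySQL record.

```python
from datetime import datetime

# A hypothetical Aurora MySQL activity event; all field values are invented.
event = {
    "class": "MAIN",
    "command": "QUERY",
    "databaseName": "sales",
    "objectName": "orders,customers",   # can be a comma-separated list of objects
    "startTime": "2023-01-01 12:00:00.000000+00:00",
    "endTime": "2023-01-01 12:00:00.120000+00:00",
}

# Combine databaseName with each entry in objectName to build
# fully qualified object names.
qualified = [
    f"{event['databaseName']}.{name.strip()}"
    for name in event["objectName"].split(",")
]
print(qualified)

# For version 1.2 records, execution time is endTime - startTime.
elapsed = datetime.fromisoformat(event["endTime"]) - datetime.fromisoformat(event["startTime"])
print(f"execution time: {elapsed.total_seconds() * 1000:.0f} ms")
```

As with the PostgreSQL example, timestamp formats in real records may differ from the ISO-style strings assumed here.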

# Processing a database activity stream using the AWS SDK
Processing an activity stream using the SDK

You can programmatically process an activity stream by using the AWS SDK. The following are fully functioning Java and Python examples of how you might process the Kinesis data stream.

------
#### [ Java ]

```
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.net.InetAddress;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.NoSuchAlgorithmException;
import java.security.NoSuchProviderException;
import java.security.Security;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.zip.GZIPInputStream;

import javax.crypto.Cipher;
import javax.crypto.NoSuchPaddingException;
import javax.crypto.spec.SecretKeySpec;

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.encryptionsdk.AwsCrypto;
import com.amazonaws.encryptionsdk.CryptoInputStream;
import com.amazonaws.encryptionsdk.jce.JceMasterKey;
import com.amazonaws.services.kinesis.clientlibrary.exceptions.InvalidStateException;
import com.amazonaws.services.kinesis.clientlibrary.exceptions.ShutdownException;
import com.amazonaws.services.kinesis.clientlibrary.exceptions.ThrottlingException;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessor;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessorCheckpointer;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessorFactory;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisClientLibConfiguration;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.ShutdownReason;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker.Builder;
import com.amazonaws.services.kinesis.model.Record;
import com.amazonaws.services.kms.AWSKMS;
import com.amazonaws.services.kms.AWSKMSClientBuilder;
import com.amazonaws.services.kms.model.DecryptRequest;
import com.amazonaws.services.kms.model.DecryptResult;
import com.amazonaws.util.Base64;
import com.amazonaws.util.IOUtils;
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import com.google.gson.annotations.SerializedName;
import org.bouncycastle.jce.provider.BouncyCastleProvider;

public class DemoConsumer {

    private static final String STREAM_NAME = "aws-rds-das-[cluster-external-resource-id]";
    private static final String APPLICATION_NAME = "AnyApplication"; // unique application name; the Kinesis Client Library uses it to name the DynamoDB table that tracks shard leases
    private static final String AWS_ACCESS_KEY = "[AWS_ACCESS_KEY_TO_ACCESS_KINESIS]";
    private static final String AWS_SECRET_KEY = "[AWS_SECRET_KEY_TO_ACCESS_KINESIS]";
    private static final String DBC_RESOURCE_ID = "[cluster-external-resource-id]";
    private static final String REGION_NAME = "[region-name]"; //us-east-1, us-east-2...
    private static final BasicAWSCredentials CREDENTIALS = new BasicAWSCredentials(AWS_ACCESS_KEY, AWS_SECRET_KEY);
    private static final AWSStaticCredentialsProvider CREDENTIALS_PROVIDER = new AWSStaticCredentialsProvider(CREDENTIALS);

    private static final AwsCrypto CRYPTO = new AwsCrypto();
    private static final AWSKMS KMS = AWSKMSClientBuilder.standard()
            .withRegion(REGION_NAME)
            .withCredentials(CREDENTIALS_PROVIDER).build();

    class Activity {
        String type;
        String version;
        String databaseActivityEvents;
        String key;
    }

    class ActivityEvent {
        @SerializedName("class") String _class;
        String clientApplication;
        String command;
        String commandText;
        String databaseName;
        String dbProtocol;
        String dbUserName;
        String endTime;
        String errorMessage;
        String exitCode;
        String logTime;
        String netProtocol;
        String objectName;
        String objectType;
        List<String> paramList;
        String pid;
        String remoteHost;
        String remotePort;
        String rowCount;
        String serverHost;
        String serverType;
        String serverVersion;
        String serviceName;
        String sessionId;
        String startTime;
        String statementId;
        String substatementId;
        String transactionId;
        String type;
    }

    class ActivityRecords {
        String type;
        String clusterId;
        String instanceId;
        List<ActivityEvent> databaseActivityEventList;
    }

    static class RecordProcessorFactory implements IRecordProcessorFactory {
        @Override
        public IRecordProcessor createProcessor() {
            return new RecordProcessor();
        }
    }

    static class RecordProcessor implements IRecordProcessor {

        private static final long BACKOFF_TIME_IN_MILLIS = 3000L;
        private static final int PROCESSING_RETRIES_MAX = 10;
        private static final long CHECKPOINT_INTERVAL_MILLIS = 60000L;
        private static final Gson GSON = new GsonBuilder().serializeNulls().create();

        private static final Cipher CIPHER;
        static {
            Security.insertProviderAt(new BouncyCastleProvider(), 1);
            try {
                CIPHER = Cipher.getInstance("AES/GCM/NoPadding", "BC");
            } catch (NoSuchAlgorithmException | NoSuchPaddingException | NoSuchProviderException e) {
                throw new ExceptionInInitializerError(e);
            }
        }

        private long nextCheckpointTimeInMillis;

        @Override
        public void initialize(String shardId) {
        }

        @Override
        public void processRecords(final List<Record> records, final IRecordProcessorCheckpointer checkpointer) {
            for (final Record record : records) {
                processSingleBlob(record.getData());
            }

            if (System.currentTimeMillis() > nextCheckpointTimeInMillis) {
                checkpoint(checkpointer);
                nextCheckpointTimeInMillis = System.currentTimeMillis() + CHECKPOINT_INTERVAL_MILLIS;
            }
        }

        @Override
        public void shutdown(IRecordProcessorCheckpointer checkpointer, ShutdownReason reason) {
            if (reason == ShutdownReason.TERMINATE) {
                checkpoint(checkpointer);
            }
        }

        private void processSingleBlob(final ByteBuffer bytes) {
            try {
                // Parse the JSON record into an Activity object
                final Activity activity = GSON.fromJson(new String(bytes.array(), StandardCharsets.UTF_8), Activity.class);

                // Base64.Decode
                final byte[] decoded = Base64.decode(activity.databaseActivityEvents);
                final byte[] decodedDataKey = Base64.decode(activity.key);

                Map<String, String> context = new HashMap<>();
                context.put("aws:rds:dbc-id", DBC_RESOURCE_ID);

                // Decrypt
                final DecryptRequest decryptRequest = new DecryptRequest()
                        .withCiphertextBlob(ByteBuffer.wrap(decodedDataKey)).withEncryptionContext(context);
                final DecryptResult decryptResult = KMS.decrypt(decryptRequest);
                final byte[] decrypted = decrypt(decoded, getByteArray(decryptResult.getPlaintext()));

                // GZip Decompress
                final byte[] decompressed = decompress(decrypted);
                // Parse the decompressed JSON into an ActivityRecords object
                final ActivityRecords activityRecords = GSON.fromJson(new String(decompressed, StandardCharsets.UTF_8), ActivityRecords.class);

                // Iterate through the ActivityEvent entries
                for (final ActivityEvent event : activityRecords.databaseActivityEventList) {
                    System.out.println(GSON.toJson(event));
                }
            } catch (Exception e) {
                // Handle error.
                e.printStackTrace();
            }
        }

        private static byte[] decompress(final byte[] src) throws IOException {
            ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(src);
            GZIPInputStream gzipInputStream = new GZIPInputStream(byteArrayInputStream);
            return IOUtils.toByteArray(gzipInputStream);
        }

        private void checkpoint(IRecordProcessorCheckpointer checkpointer) {
            for (int i = 0; i < PROCESSING_RETRIES_MAX; i++) {
                try {
                    checkpointer.checkpoint();
                    break;
                } catch (ShutdownException se) {
                    // Ignore the checkpoint if the processor instance has been shut down (failover).
                    System.out.println("Caught shutdown exception, skipping checkpoint. " + se);
                    break;
                } catch (ThrottlingException e) {
                    // Back off and reattempt the checkpoint upon transient failures.
                    if (i >= (PROCESSING_RETRIES_MAX - 1)) {
                        System.out.println("Checkpoint failed after " + (i + 1) + " attempts. " + e);
                        break;
                    } else {
                        System.out.println("Transient issue when checkpointing - attempt " + (i + 1) + " of " + PROCESSING_RETRIES_MAX + ". " + e);
                    }
                } catch (InvalidStateException e) {
                    // This indicates an issue with the DynamoDB table (check that the table exists and has sufficient provisioned IOPS).
                    System.out.println("Cannot save checkpoint to the DynamoDB table used by the Amazon Kinesis Client Library. " + e);
                    break;
                }
                try {
                    Thread.sleep(BACKOFF_TIME_IN_MILLIS);
                } catch (InterruptedException e) {
                    System.out.println("Interrupted sleep. " + e);
                }
            }
        }
    }

    private static byte[] decrypt(final byte[] decoded, final byte[] decodedDataKey) throws IOException {
        // Create a JCE master key provider using the random key and an AES-GCM encryption algorithm
        final JceMasterKey masterKey = JceMasterKey.getInstance(new SecretKeySpec(decodedDataKey, "AES"),
                "BC", "DataKey", "AES/GCM/NoPadding");
        try (final CryptoInputStream<JceMasterKey> decryptingStream = CRYPTO.createDecryptingStream(masterKey, new ByteArrayInputStream(decoded));
             final ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            IOUtils.copy(decryptingStream, out);
            return out.toByteArray();
        }
    }

    public static void main(String[] args) throws Exception {
        final String workerId = InetAddress.getLocalHost().getCanonicalHostName() + ":" + UUID.randomUUID();
        final KinesisClientLibConfiguration kinesisClientLibConfiguration =
                new KinesisClientLibConfiguration(APPLICATION_NAME, STREAM_NAME, CREDENTIALS_PROVIDER, workerId);
        kinesisClientLibConfiguration.withInitialPositionInStream(InitialPositionInStream.LATEST);
        kinesisClientLibConfiguration.withRegionName(REGION_NAME);
        final Worker worker = new Builder()
                .recordProcessorFactory(new RecordProcessorFactory())
                .config(kinesisClientLibConfiguration)
                .build();

        System.out.printf("Running %s to process stream %s as worker %s...\n", APPLICATION_NAME, STREAM_NAME, workerId);

        try {
            worker.run();
        } catch (Throwable t) {
            System.err.println("Caught throwable while processing data.");
            t.printStackTrace();
            System.exit(1);
        }
        System.exit(0);
    }

    private static byte[] getByteArray(final ByteBuffer b) {
        byte[] byteArray = new byte[b.remaining()];
        b.get(byteArray);
        return byteArray;
    }
}
```

------
#### [ Python ]

```
import base64
import json
import zlib
import aws_encryption_sdk
from aws_encryption_sdk import CommitmentPolicy
from aws_encryption_sdk.internal.crypto import WrappingKey
from aws_encryption_sdk.key_providers.raw import RawMasterKeyProvider
from aws_encryption_sdk.identifiers import WrappingAlgorithm, EncryptionKeyType
import boto3

REGION_NAME = '<region>'                    # us-east-1
RESOURCE_ID = '<external-resource-id>'      # cluster-ABCD123456
STREAM_NAME = 'aws-rds-das-' + RESOURCE_ID  # aws-rds-das-cluster-ABCD123456

enc_client = aws_encryption_sdk.EncryptionSDKClient(commitment_policy=CommitmentPolicy.FORBID_ENCRYPT_ALLOW_DECRYPT)

class MyRawMasterKeyProvider(RawMasterKeyProvider):
    provider_id = "BC"

    def __new__(cls, *args, **kwargs):
        obj = super(RawMasterKeyProvider, cls).__new__(cls)
        return obj

    def __init__(self, plain_key):
        RawMasterKeyProvider.__init__(self)
        self.wrapping_key = WrappingKey(wrapping_algorithm=WrappingAlgorithm.AES_256_GCM_IV12_TAG16_NO_PADDING,
                                        wrapping_key=plain_key, wrapping_key_type=EncryptionKeyType.SYMMETRIC)

    def _get_raw_key(self, key_id):
        return self.wrapping_key


def decrypt_payload(payload, data_key):
    my_key_provider = MyRawMasterKeyProvider(data_key)
    my_key_provider.add_master_key("DataKey")
    decrypted_plaintext, header = enc_client.decrypt(
        source=payload,
        materials_manager=aws_encryption_sdk.materials_managers.default.DefaultCryptoMaterialsManager(master_key_provider=my_key_provider))
    return decrypted_plaintext


def decrypt_decompress(payload, key):
    decrypted = decrypt_payload(payload, key)
    return zlib.decompress(decrypted, zlib.MAX_WBITS + 16)


def main():
    session = boto3.session.Session()
    kms = session.client('kms', region_name=REGION_NAME)
    kinesis = session.client('kinesis', region_name=REGION_NAME)

    response = kinesis.describe_stream(StreamName=STREAM_NAME)
    shard_iters = []
    for shard in response['StreamDescription']['Shards']:
        shard_iter_response = kinesis.get_shard_iterator(StreamName=STREAM_NAME, ShardId=shard['ShardId'],
                                                         ShardIteratorType='LATEST')
        shard_iters.append(shard_iter_response['ShardIterator'])

    while len(shard_iters) > 0:
        next_shard_iters = []
        for shard_iter in shard_iters:
            response = kinesis.get_records(ShardIterator=shard_iter, Limit=10000)
            for record in response['Records']:
                record_data = record['Data']
                record_data = json.loads(record_data)
                payload_decoded = base64.b64decode(record_data['databaseActivityEvents'])
                data_key_decoded = base64.b64decode(record_data['key'])
                data_key_decrypt_result = kms.decrypt(CiphertextBlob=data_key_decoded,
                                                      EncryptionContext={'aws:rds:dbc-id': RESOURCE_ID})
                print(decrypt_decompress(payload_decoded, data_key_decrypt_result['Plaintext']))
            if 'NextShardIterator' in response:
                next_shard_iters.append(response['NextShardIterator'])
        shard_iters = next_shard_iters


if __name__ == '__main__':
    main()
```
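
In the Python example, `zlib.decompress(decrypted, zlib.MAX_WBITS + 16)` relies on a zlib convention: adding 16 to the window-bits argument tells zlib to expect a gzip header and trailer rather than a raw zlib stream. The following minimal standalone round trip (with an invented payload) shows why that pairing works with gzip-compressed activity stream data.

```python
import gzip
import zlib

payload = b'{"type": "DatabaseActivityMonitoringRecords"}'

# Activity stream payloads are gzip-compressed before encryption.
# MAX_WBITS + 16 tells zlib to expect the gzip header and trailer,
# matching what gzip.compress() produces.
compressed = gzip.compress(payload)
restored = zlib.decompress(compressed, zlib.MAX_WBITS + 16)
print(restored == payload)
```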

------

# IAM policy examples for database activity streams
IAM policy examples for activity streams

Any user with appropriate AWS Identity and Access Management (IAM) role privileges for database activity streams can create, start, stop, and modify the activity stream settings for a DB cluster. These actions are included in the audit log of the stream. For best compliance practices, we recommend that you don't provide these privileges to DBAs.

You set access to database activity streams using IAM policies. For more information about Aurora authentication, see [Identity and access management for Amazon Aurora](UsingWithRDS.IAM.md). For more information about creating IAM policies, see [Creating and using an IAM policy for IAM database access](UsingWithRDS.IAMDBAuth.IAMPolicy.md). 

**Example Policy to allow configuring database activity streams**  
To give users fine-grained access to modify activity streams, use the service-specific operation context keys `rds:StartActivityStream` and `rds:StopActivityStream` in an IAM policy. The following IAM policy example allows a user or role to configure activity streams.    

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "ConfigureActivityStreams",
            "Effect": "Allow",
            "Action": [
                "rds:StartActivityStream",
                "rds:StopActivityStream"
            ],
            "Resource": "*"
        }
    ]
}
```

**Example Policy to allow starting database activity streams**  
The following IAM policy example allows a user or role to start activity streams.    

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement":[
        {
            "Sid":"AllowStartActivityStreams",
            "Effect":"Allow",
            "Action":"rds:StartActivityStream",
            "Resource":"*"
        }
    ]
}
```

**Example Policy to allow stopping database activity streams**  
The following IAM policy example allows a user or role to stop activity streams.    

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement":[
        {
            "Sid":"AllowStopActivityStreams",
            "Effect":"Allow",
            "Action":"rds:StopActivityStream",
            "Resource":"*"
        }
     ]
}
```

**Example Policy to deny starting database activity streams**  
The following IAM policy example prevents a user or role from starting activity streams.    

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement":[
        {
            "Sid":"DenyStartActivityStreams",
            "Effect":"Deny",
            "Action":"rds:StartActivityStream",
            "Resource":"*"
        }
     ]
}
```

**Example Policy to deny stopping database activity streams**  
The following IAM policy example prevents a user or role from stopping activity streams.    

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement":[
        {
            "Sid":"DenyStopActivityStreams",
            "Effect":"Deny",
            "Action":"rds:StopActivityStream",
            "Resource":"*"
        }
    ]
}
```

# Monitoring threats with Amazon GuardDuty RDS Protection for Amazon Aurora
Monitoring threats with GuardDuty RDS Protection

Amazon GuardDuty is a threat detection service that helps protect your accounts, containers, workloads, and the data within your AWS environment. Using machine learning (ML) models, and anomaly and threat detection capabilities, GuardDuty continuously monitors different log sources and runtime activity to identify and prioritize potential security risks and malicious activities in your environment.

GuardDuty RDS Protection analyzes and profiles login events for potential access threats to your Amazon Aurora databases. When you turn on RDS Protection, GuardDuty consumes RDS login events from your Aurora databases. RDS Protection monitors these events and profiles them for potential insider threats or external actors.

For more information about enabling GuardDuty RDS Protection, see [GuardDuty RDS Protection](https://docs.aws.amazon.com/guardduty/latest/ug/rds-protection.html) in the *Amazon GuardDuty User Guide*.

When RDS Protection detects a potential threat, such as an unusual pattern in successful or failed login attempts, GuardDuty generates a new finding with details about the potentially compromised database. You can view the finding details in the finding summary section in the Amazon GuardDuty console. The finding details vary based on the finding type. The primary details, resource type and resource role, determine the kind of information available for any finding. For more information about the commonly available details for findings and the finding types, see [Finding details](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_findings-summary.html) and [GuardDuty RDS Protection finding types](https://docs.aws.amazon.com/guardduty/latest/ug/findings-rds-protection.html) respectively in the *Amazon GuardDuty User Guide*. 

You can turn the RDS Protection feature on or off for any AWS account in any AWS Region where this feature is available. When RDS Protection isn't enabled, GuardDuty doesn't detect potentially compromised Aurora databases or provide details of the compromise.

An existing GuardDuty account can enable RDS Protection with a 30-day trial period. For a new GuardDuty account, RDS Protection is already enabled and included in the 30-day free trial period. For more information, see [Estimating GuardDuty cost](https://docs.aws.amazon.com/guardduty/latest/ug/monitoring_costs.html) in the *Amazon GuardDuty User Guide*.

For information about the AWS Regions where GuardDuty doesn't yet support RDS Protection, see [Region-specific feature availability](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_regions.html#gd-regional-feature-availability) in the *Amazon GuardDuty User Guide*.

For information about the Aurora database versions that GuardDuty RDS Protection supports, see [Supported Amazon Aurora, Amazon RDS, and Aurora Limitless databases](https://docs.aws.amazon.com/guardduty/latest/ug/rds-protection.html#rds-pro-supported-db) in the *Amazon GuardDuty User Guide*.