

# Working with tables, items, queries, scans, and indexes
<a name="WorkingWithDynamo"></a>

This section provides details about working with tables, items, queries, and more in Amazon DynamoDB.

**Topics**
+ [Working with tables and data in DynamoDB](WorkingWithTables.md)
+ [Global tables - multi-active, multi-Region replication](GlobalTables.md)
+ [Working with items and attributes in DynamoDB](WorkingWithItems.md)
+ [Improving data access with secondary indexes in DynamoDB](SecondaryIndexes.md)
+ [Managing complex workflows with DynamoDB transactions](transactions.md)
+ [Change data capture with Amazon DynamoDB](streamsmain.md)

# Working with tables and data in DynamoDB
<a name="WorkingWithTables"></a>

This section describes how to use the AWS Command Line Interface (AWS CLI) and the AWS SDKs to create, update, and delete tables in Amazon DynamoDB.

**Note**  
You can also perform these same tasks using the AWS Management Console. For more information, see [Using the console](AccessingDynamoDB.md#ConsoleDynamoDB).

This section also provides more information about managing throughput capacity, either with DynamoDB auto scaling or by manually setting provisioned throughput.

**Topics**
+ [Basic operations on DynamoDB tables](WorkingWithTables.Basics.md)
+ [Considerations when choosing a table class in DynamoDB](WorkingWithTables.tableclasses.md)
+ [Adding tags and labels to resources in DynamoDB](Tagging.md)

# Basic operations on DynamoDB tables
<a name="WorkingWithTables.Basics"></a>

Similar to other database systems, Amazon DynamoDB stores data in tables. You can manage your tables using a few basic operations.

**Topics**
+ [Creating a table](#WorkingWithTables.Basics.CreateTable)
+ [Describing a table](#WorkingWithTables.Basics.DescribeTable)
+ [Updating a table](#WorkingWithTables.Basics.UpdateTable)
+ [Deleting a table](#WorkingWithTables.Basics.DeleteTable)
+ [Using deletion protection](#WorkingWithTables.Basics.DeletionProtection)
+ [Listing table names](#WorkingWithTables.Basics.ListTables)
+ [Describing provisioned throughput quotas](#WorkingWithTables.Basics.DescribeLimits)

## Creating a table
<a name="WorkingWithTables.Basics.CreateTable"></a>

Use the `CreateTable` operation to create a table in Amazon DynamoDB. To create the table, you must provide the following information:
+ **Table name.** The name must conform to the DynamoDB naming rules, and must be unique for the current AWS account and Region. For example, you could create a `People` table in US East (N. Virginia) and another `People` table in Europe (Ireland). However, these two tables would be entirely different from each other. For more information, see [Supported data types and naming rules in Amazon DynamoDB](HowItWorks.NamingRulesDataTypes.md).
+ **Primary key.** The primary key can consist of one attribute (partition key) or two attributes (partition key and sort key). You need to provide the attribute names, data types, and the role of each attribute: `HASH` (for a partition key) and `RANGE` (for a sort key). For more information, see [Primary key](HowItWorks.CoreComponents.md#HowItWorks.CoreComponents.PrimaryKey).
+ **Throughput settings (for provisioned tables).** If using provisioned mode, you must specify the initial read and write throughput settings for the table. You can modify these settings later, or enable DynamoDB auto scaling to manage the settings for you. For more information, see [DynamoDB provisioned capacity mode](provisioned-capacity-mode.md) and [Managing throughput capacity automatically with DynamoDB auto scaling](AutoScaling.md).

### Example 1: Create an on-demand table
<a name="create-payperrequest-example"></a>

The following AWS CLI example shows how to create a table (`Music`) using on-demand mode. The primary key consists of `Artist` (partition key) and `SongTitle` (sort key).

```
aws dynamodb create-table \
    --table-name Music \
    --attribute-definitions \
        AttributeName=Artist,AttributeType=S \
        AttributeName=SongTitle,AttributeType=S \
    --key-schema \
        AttributeName=Artist,KeyType=HASH \
        AttributeName=SongTitle,KeyType=RANGE \
    --billing-mode PAY_PER_REQUEST
```

The `CreateTable` operation returns metadata for the table, as shown following.

```
{
    "TableDescription": {
        "TableArn": "arn:aws:dynamodb:us-east-1:123456789012:table/Music",
        "AttributeDefinitions": [
            {
                "AttributeName": "Artist",
                "AttributeType": "S"
            },
            {
                "AttributeName": "SongTitle",
                "AttributeType": "S"
            }
        ],
        "ProvisionedThroughput": {
            "NumberOfDecreasesToday": 0,
            "WriteCapacityUnits": 0,
            "ReadCapacityUnits": 0
        },
        "TableSizeBytes": 0,
        "TableName": "Music",
        "BillingModeSummary": {
            "BillingMode": "PAY_PER_REQUEST"
        },
        "TableStatus": "CREATING",
        "TableId": "12345678-0123-4567-a123-abcdefghijkl",
        "KeySchema": [
            {
                "KeyType": "HASH",
                "AttributeName": "Artist"
            },
            {
                "KeyType": "RANGE",
                "AttributeName": "SongTitle"
            }
        ],
        "ItemCount": 0,
        "CreationDateTime": 1542397468.348
    }
}
```

**Important**  
 When calling `DescribeTable` on an on-demand table, read capacity units and write capacity units are set to 0. 

### Example 2: Create a provisioned table
<a name="create-provisioned-example"></a>

The following AWS CLI example shows how to create the same table (`Music`) using provisioned mode. The primary key consists of `Artist` (partition key) and `SongTitle` (sort key), each of which has a data type of `String`. The provisioned throughput for this table is 10 read capacity units and 5 write capacity units.

```
aws dynamodb create-table \
    --table-name Music \
    --attribute-definitions \
        AttributeName=Artist,AttributeType=S \
        AttributeName=SongTitle,AttributeType=S \
    --key-schema \
        AttributeName=Artist,KeyType=HASH \
        AttributeName=SongTitle,KeyType=RANGE \
    --provisioned-throughput \
        ReadCapacityUnits=10,WriteCapacityUnits=5
```

The `CreateTable` operation returns metadata for the table, as shown following.

```
{
    "TableDescription": {
        "TableArn": "arn:aws:dynamodb:us-east-1:123456789012:table/Music",
        "AttributeDefinitions": [
            {
                "AttributeName": "Artist",
                "AttributeType": "S"
            },
            {
                "AttributeName": "SongTitle",
                "AttributeType": "S"
            }
        ],
        "ProvisionedThroughput": {
            "NumberOfDecreasesToday": 0,
            "WriteCapacityUnits": 5,
            "ReadCapacityUnits": 10
        },
        "TableSizeBytes": 0,
        "TableName": "Music",
        "TableStatus": "CREATING",
        "TableId": "12345678-0123-4567-a123-abcdefghijkl",
        "KeySchema": [
            {
                "KeyType": "HASH",
                "AttributeName": "Artist"
            },
            {
                "KeyType": "RANGE",
                "AttributeName": "SongTitle"
            }
        ],
        "ItemCount": 0,
        "CreationDateTime": 1542397215.37
    }
}
```

The `TableStatus` element indicates the current state of the table (`CREATING`). It might take a while to create the table, depending on the values you specify for `ReadCapacityUnits` and `WriteCapacityUnits`. Larger values for these settings require DynamoDB to allocate more resources for the table.

### Example 3: Create a table using the DynamoDB Standard-Infrequent Access table class
<a name="create-infrequent-access-example"></a>

The following AWS CLI example creates the same `Music` table using the DynamoDB Standard-Infrequent Access table class.

```
aws dynamodb create-table \
    --table-name Music \
    --attribute-definitions \
        AttributeName=Artist,AttributeType=S \
        AttributeName=SongTitle,AttributeType=S \
    --key-schema \
        AttributeName=Artist,KeyType=HASH \
        AttributeName=SongTitle,KeyType=RANGE \
    --provisioned-throughput \
        ReadCapacityUnits=10,WriteCapacityUnits=5 \
    --table-class STANDARD_INFREQUENT_ACCESS
```

The `CreateTable` operation returns metadata for the table, as shown following.

```
{
    "TableDescription": {
        "TableArn": "arn:aws:dynamodb:us-east-1:123456789012:table/Music",
        "AttributeDefinitions": [
            {
                "AttributeName": "Artist",
                "AttributeType": "S"
            },
            {
                "AttributeName": "SongTitle",
                "AttributeType": "S"
            }
        ],
        "ProvisionedThroughput": {
            "NumberOfDecreasesToday": 0,
            "WriteCapacityUnits": 5,
            "ReadCapacityUnits": 10
        },
        "TableClassSummary": {
            "LastUpdateDateTime": 1542397215.37,
            "TableClass": "STANDARD_INFREQUENT_ACCESS"
        },
        "TableSizeBytes": 0,
        "TableName": "Music",
        "TableStatus": "CREATING",
        "TableId": "12345678-0123-4567-a123-abcdefghijkl",
        "KeySchema": [
            {
                "KeyType": "HASH",
                "AttributeName": "Artist"
            },
            {
                "KeyType": "RANGE",
                "AttributeName": "SongTitle"
            }
        ],
        "ItemCount": 0,
        "CreationDateTime": 1542397215.37
    }
}
```

## Describing a table
<a name="WorkingWithTables.Basics.DescribeTable"></a>

To view details about a table, use the `DescribeTable` operation. You must provide the table name. The output from `DescribeTable` is in the same format as that from `CreateTable`. It includes the timestamp when the table was created, its key schema, its provisioned throughput settings, its estimated size, and any secondary indexes that are present.

**Important**  
 When calling `DescribeTable` on an on-demand table, read capacity units and write capacity units are set to 0. 

**Example**  

```
aws dynamodb describe-table --table-name Music
```

The table is ready for use when the `TableStatus` has changed from `CREATING` to `ACTIVE`.
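Rather than polling `DescribeTable` in a loop, you can use the AWS CLI's built-in waiter, which polls `DescribeTable` on your behalf and returns when the table is ready:

```
# Blocks until the Music table exists and its TableStatus is ACTIVE.
aws dynamodb wait table-exists --table-name Music
```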

**Note**  
If you issue a `DescribeTable` request immediately after a `CreateTable` request, DynamoDB might return an error (`ResourceNotFoundException`). This is because `DescribeTable` uses an eventually consistent query, and the metadata for your table might not be available at that moment. Wait for a few seconds, and then try the `DescribeTable` request again.  
For billing purposes, your DynamoDB storage costs include a per-item overhead of 100 bytes. (For more information, see [DynamoDB Pricing](https://aws.amazon.com/dynamodb/pricing/).) This extra 100 bytes per item is not used in capacity unit calculations or by the `DescribeTable` operation. 

## Updating a table
<a name="WorkingWithTables.Basics.UpdateTable"></a>

The `UpdateTable` operation allows you to do one of the following:
+ Modify a table's provisioned throughput settings (for provisioned mode tables).
+ Change the table's read/write capacity mode.
+ Manipulate global secondary indexes on the table (see [Using Global Secondary Indexes in DynamoDB](GSI.md)).
+ Enable or disable DynamoDB Streams on the table (see [Change data capture for DynamoDB Streams](Streams.md)).

**Example**  
The following AWS CLI example shows how to modify a table's provisioned throughput settings.  

```
aws dynamodb update-table --table-name Music \
    --provisioned-throughput ReadCapacityUnits=20,WriteCapacityUnits=10
```

**Note**  
When you issue an `UpdateTable` request, the status of the table changes from `ACTIVE` to `UPDATING`. The table remains fully available for use while it is `UPDATING`. When the process is complete, the table status changes back to `ACTIVE`.

**Example**  
The following AWS CLI example shows how to modify a table's read/write capacity mode to on-demand mode.  

```
aws dynamodb update-table --table-name Music \
    --billing-mode PAY_PER_REQUEST
```
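To switch a table back to provisioned mode, set the billing mode to `PROVISIONED` and supply the new throughput settings in the same request. Note that DynamoDB limits how frequently a table can switch between capacity modes.

```
# Switch the Music table back to provisioned capacity mode.
# Throughput settings are required when changing to PROVISIONED.
aws dynamodb update-table --table-name Music \
    --billing-mode PROVISIONED \
    --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
```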

## Deleting a table
<a name="WorkingWithTables.Basics.DeleteTable"></a>

You can remove an unused table with the `DeleteTable` operation. Deleting a table is an unrecoverable operation. To delete a table using the AWS Management Console, see [Step 6: (Optional) Delete your DynamoDB table to clean up resources](getting-started-step-6.md).

**Example**  
The following AWS CLI example shows how to delete a table.  

```
aws dynamodb delete-table --table-name Music
```

When you issue a `DeleteTable` request, the table's status changes from `ACTIVE` to `DELETING`. It might take a while to delete the table, depending on the resources it uses (such as the data stored in the table, and any streams or indexes on the table).

When the `DeleteTable` operation concludes, the table no longer exists in DynamoDB.
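As with table creation, the AWS CLI provides a waiter that blocks until the deletion has finished:

```
# Returns once the Music table no longer exists.
aws dynamodb wait table-not-exists --table-name Music
```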

## Using deletion protection
<a name="WorkingWithTables.Basics.DeletionProtection"></a>

You can protect a table from accidental deletion with the deletion protection property. Enabling this property helps ensure that your administrators don't accidentally delete the table during regular table management operations, which helps prevent disruption to your normal business operations.

The table owner or an authorized administrator controls the deletion protection property for each table. Deletion protection is off by default for every table, including global table replicas and tables restored from backups. When deletion protection is disabled for a table, any user authorized by an AWS Identity and Access Management (IAM) policy can delete the table. When deletion protection is enabled for a table, no one can delete it until the property is disabled again.

To change this setting in the console, go to the table's **Additional settings** tab, navigate to the **Deletion protection** panel, and choose **Enable deletion protection**.

The deletion protection property is supported by the DynamoDB console, API, AWS CLI, AWS SDKs, and AWS CloudFormation. The `CreateTable` API supports setting deletion protection at table creation time, and the `UpdateTable` API supports changing the deletion protection property for existing tables.
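For example, the following AWS CLI commands toggle deletion protection on an existing table through `UpdateTable`:

```
# Enable deletion protection; DeleteTable requests will fail
# until the property is disabled again.
aws dynamodb update-table \
    --table-name Music \
    --deletion-protection-enabled

# Disable deletion protection, allowing the table to be deleted.
aws dynamodb update-table \
    --table-name Music \
    --no-deletion-protection-enabled
```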

**Note**  
If an AWS account is deleted, all of that account's data, including tables with deletion protection enabled, is still deleted within 90 days.
If DynamoDB loses access to a customer managed key that was used to encrypt a table, it will still archive the table. Archiving involves making a backup of the table and deleting the original.

## Listing table names
<a name="WorkingWithTables.Basics.ListTables"></a>

The `ListTables` operation returns the names of the DynamoDB tables for the current AWS account and Region.

**Example**  
The following AWS CLI example shows how to list the DynamoDB table names.  

```
aws dynamodb list-tables
```

## Describing provisioned throughput quotas
<a name="WorkingWithTables.Basics.DescribeLimits"></a>

The `DescribeLimits` operation returns the current read and write capacity quotas for the current AWS account and Region.

**Example**  
The following AWS CLI example shows how to describe the current provisioned throughput quotas.  

```
aws dynamodb describe-limits
```
The output shows the upper quotas of read and write capacity units for the current AWS account and Region.

For more information about these quotas, and how to request quota increases, see [Throughput default quotas](ServiceQuotas.md#default-limits-throughput).

# Considerations when choosing a table class in DynamoDB
<a name="WorkingWithTables.tableclasses"></a>

DynamoDB offers two table classes designed to help you optimize for cost. The DynamoDB Standard table class is the default, and is recommended for the vast majority of workloads. The DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class is optimized for tables where storage is the dominant cost. For example, tables that store infrequently accessed data, such as application logs, old social media posts, e-commerce order history, and past gaming achievements, are good candidates for the Standard-IA table class.

Every DynamoDB table is associated with a table class. All secondary indexes associated with the table use the same table class. You can set your table class when creating your table (DynamoDB Standard by default) and update the table class of an existing table using the AWS Management Console, AWS CLI, or AWS SDK. DynamoDB also supports managing your table class using AWS CloudFormation for single-region tables (tables that are not global tables). Each table class offers different pricing for data storage as well as read and write requests. When choosing a table class for your table, keep the following in mind:
+ The DynamoDB Standard table class offers lower throughput costs than DynamoDB Standard-IA and is the most cost-effective option for tables where throughput is the dominant cost. 
+ The DynamoDB Standard-IA table class offers lower storage costs than DynamoDB Standard, and is the most cost-effective option for tables where storage is the dominant cost. When storage exceeds 50% of the throughput (reads and writes) cost of a table using the DynamoDB Standard table class, the DynamoDB Standard-IA table class can help you reduce your total table cost. 
+ DynamoDB Standard-IA tables offer the same performance, durability, and availability as DynamoDB Standard tables. 
+ Switching between the DynamoDB Standard and DynamoDB Standard-IA table classes does not require changing your application code. You use the same DynamoDB APIs and service endpoints regardless of the table class your tables use. 
+ DynamoDB Standard-IA tables are compatible with all existing DynamoDB features such as auto scaling, on-demand mode, time-to-live (TTL), on-demand backups, point-in-time recovery (PITR), and global secondary indexes.

The most cost-effective table class for your table depends on your table's expected storage and throughput usage patterns. You can review your table's historical storage and throughput cost and usage with AWS Cost and Usage Reports and AWS Cost Explorer. Use this historical data to determine the most cost-effective table class for your table. To learn more about using AWS Cost and Usage Reports and AWS Cost Explorer, see the [AWS Billing and Cost Management Documentation](https://docs.aws.amazon.com/account-billing/index.html). See [Amazon DynamoDB Pricing](https://aws.amazon.com/dynamodb/pricing/on-demand/) for details about table class pricing.

**Note**  
A table class update is a background process. You can still access your table normally during a table class update. The time to update your table class depends on your table traffic, storage size, and other related variables. No more than two table class updates on your table are allowed in a 30-day trailing period.
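For example, the following AWS CLI command moves an existing table to the DynamoDB Standard-IA table class. The update runs in the background, and the table remains available while it proceeds:

```
# Change the table class of an existing table.
aws dynamodb update-table \
    --table-name Music \
    --table-class STANDARD_INFREQUENT_ACCESS
```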

# Adding tags and labels to resources in DynamoDB
<a name="Tagging"></a>

You can label Amazon DynamoDB resources using *tags*. Tags let you categorize your resources in different ways, for example, by purpose, owner, environment, or other criteria. Tags can help you do the following:
+ Quickly identify a resource based on the tags that you assigned to it.
+ See AWS bills broken down by tags.
**Note**  
Any local secondary indexes (LSI) and global secondary indexes (GSI) related to tagged tables are labeled with the same tags automatically. Currently, DynamoDB Streams usage cannot be tagged.

Tagging is supported by AWS services like Amazon EC2, Amazon S3, DynamoDB, and more. Efficient tagging can provide cost insights by enabling you to create reports across services that carry a specific tag.

To get started with tagging, do the following:

1. Understand [Tagging restrictions in DynamoDB](#TaggingRestrictions).

1. Create tags by using [Tagging resources in DynamoDB](Tagging.Operations.md).

1. Track your AWS costs per active tag by following [Using DynamoDB tags to create cost allocation reports](#CostAllocationReports).

Finally, it is good practice to follow optimal tagging strategies. For information, see [AWS tagging strategies](https://d0.awsstatic.com/aws-answers/AWS_Tagging_Strategies.pdf).

## Tagging restrictions in DynamoDB
<a name="TaggingRestrictions"></a>

Each tag consists of a key and a value, both of which you define. The following restrictions apply:
+ Each DynamoDB table can have only one tag with a given key. If you add a tag whose key already exists, the existing tag value is updated to the new value.
+ Tag keys and values are case sensitive.
+ The maximum key length is 128 Unicode characters.
+ The maximum value length is 256 Unicode characters.
+ The allowed characters are letters, white space, and numbers, plus the following special characters: `+ - = . _ : /`
+ The maximum number of tags per resource is 50.
+ The maximum size supported for all the tags on a table is 10 KB.
+ AWS-assigned tag names and values automatically receive the `aws:` prefix, which you can't use for your own tags. AWS-assigned tag names don't count toward the tag limit of 50 or the 10 KB maximum size. User-assigned tag names have the prefix `user:` in the cost allocation report.
+ You can't backdate the application of a tag.

# Tagging resources in DynamoDB
<a name="Tagging.Operations"></a>

You can use the Amazon DynamoDB console or the AWS Command Line Interface (AWS CLI) to add, list, edit, or delete tags. You can then activate these user-defined tags so that they appear on the AWS Billing and Cost Management console for cost allocation tracking. For more information, see [Using DynamoDB tags to create cost allocation reports](Tagging.md#CostAllocationReports). 

 For bulk editing, you can also use Tag Editor on the AWS Management Console. For more information, see [Working with Tag Editor](http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/tag-editor.html).

To use the DynamoDB API instead, see the following operations in the [Amazon DynamoDB API Reference](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/):
+ [TagResource](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_TagResource.html)
+ [UntagResource](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UntagResource.html)
+ [ListTagsOfResource](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_ListTagsOfResource.html)

**Topics**
+ [Setting permissions to filter by tags](#Tagging.Operations.permissions)
+ [Adding tags to new or existing tables (AWS Management Console)](#Tagging.Operations.using-console)
+ [Adding tags to new or existing tables (AWS CLI)](#Tagging.Operations.using-cli)

## Setting permissions to filter by tags
<a name="Tagging.Operations.permissions"></a>

To use tags to filter your table list in the DynamoDB console, make sure your user's policies include access to the following operations:
+ `tag:GetTagKeys`
+ `tag:GetTagValues`

You can grant access to these operations by attaching a new IAM policy to your user, as follows.

1. Sign in to the [IAM console](https://console.aws.amazon.com/iam/) as a user with administrator permissions.

1. Choose **Policies** in the left navigation menu.

1. Choose **Create policy**.

1. Paste the following policy into the JSON editor.


   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "tag:GetTagKeys",
                   "tag:GetTagValues"
               ],
               "Resource": "*"
           }
       ]
   }
   ```


1. Complete the wizard and give the policy a name (for example, `TagKeysAndValuesReadAccess`).

1. Choose **Users** in the left navigation menu.

1. From the list, choose the user you normally use to access the DynamoDB console.

1. Choose **Add permissions**.

1. Choose **Attach existing policies directly**.

1. From the list, choose the policy you created previously.

1. Complete the wizard.

## Adding tags to new or existing tables (AWS Management Console)
<a name="Tagging.Operations.using-console"></a>

You can use the DynamoDB console to add tags to new tables when you create them, or to add, edit, or delete tags for existing tables.

**To tag resources on creation (console)**

1. Sign in to the AWS Management Console and open the DynamoDB console at [https://console.aws.amazon.com/dynamodb/](https://console.aws.amazon.com/dynamodb/).

1. In the navigation pane, choose **Tables**, and then choose **Create table**.

1. On the **Create DynamoDB table** page, provide a name and primary key. In the **Tags** section, choose **Add new tag** and enter the tags that you want to use.

   For information about tag structure, see [Tagging restrictions in DynamoDB](Tagging.md#TaggingRestrictions). 

   For more information about creating tables, see [Basic operations on DynamoDB tables](WorkingWithTables.Basics.md).

**To tag existing resources (console)**

1. Open the DynamoDB console at [https://console.aws.amazon.com/dynamodb/](https://console.aws.amazon.com/dynamodb/).

1. In the navigation pane, choose **Tables**.

1. Choose a table in the list, and then choose the **Additional settings** tab. You can add, edit, or delete your tags in the **Tags** section at the bottom of the page.

## Adding tags to new or existing tables (AWS CLI)
<a name="Tagging.Operations.using-cli"></a>

The following examples show how to use the AWS CLI to specify tags when you create tables and indexes, and to tag existing resources.

**To tag resources on creation (AWS CLI)**
+ The following example creates a new `Movies` table and adds the `Owner` tag with a value of `blueTeam`: 

  ```
  aws dynamodb create-table \
      --table-name Movies \
      --attribute-definitions AttributeName=Title,AttributeType=S \
      --key-schema AttributeName=Title,KeyType=HASH \
      --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
      --tags Key=Owner,Value=blueTeam
  ```

**To tag existing resources (AWS CLI)**
+ The following example adds the `Owner` tag with a value of `blueTeam` for the `Movies` table: 

  ```
  aws dynamodb tag-resource \
      --resource-arn arn:aws:dynamodb:us-east-1:123456789012:table/Movies \
      --tags Key=Owner,Value=blueTeam
  ```

**To list all tags for a table (AWS CLI)**
+ The following example lists all the tags that are associated with the `Movies` table:

  ```
  aws dynamodb list-tags-of-resource \
      --resource-arn arn:aws:dynamodb:us-east-1:123456789012:table/Movies
  ```
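**To remove tags from a table (AWS CLI)**
+ The following example removes the `Owner` tag from the `Movies` table. The `untag-resource` command (the `UntagResource` operation listed earlier) takes the resource ARN and the keys of the tags to remove:

  ```
  aws dynamodb untag-resource \
      --resource-arn arn:aws:dynamodb:us-east-1:123456789012:table/Movies \
      --tag-keys Owner
  ```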

## Using DynamoDB tags to create cost allocation reports
<a name="CostAllocationReports"></a>

AWS uses tags to organize resource costs on your cost allocation report. AWS provides two types of cost allocation tags:
+ *AWS-generated tags*. AWS defines, creates, and applies these tags for you.
+ *User-defined tags*. You define, create, and apply these tags.

You must activate both types of tags separately before they can appear in Cost Explorer or on a cost allocation report. 

 To activate AWS-generated tags: 

1.  Sign in to the AWS Management Console and open the Billing and Cost Management console at [https://console.aws.amazon.com/billing/home#/](https://console.aws.amazon.com/billing/home#/). 

1.  In the navigation pane, choose **Cost Allocation Tags**. 

1.  Under **AWS-Generated Cost Allocation Tags**, choose **Activate**. 

 To activate user-defined tags: 

1.  Sign in to the AWS Management Console and open the Billing and Cost Management console at [https://console.aws.amazon.com/billing/home#/](https://console.aws.amazon.com/billing/home#/). 

1.  In the navigation pane, choose **Cost Allocation Tags**. 

1.  Under **User-Defined Cost Allocation Tags**, choose **Activate**. 

 After you create and activate tags, AWS generates a cost allocation report with your usage and costs grouped by your active tags. The cost allocation report includes all of your AWS costs for each billing period. The report includes both tagged and untagged resources, so that you can clearly organize the charges for resources. 

**Note**  
 Currently, any data transferred out from DynamoDB won't be broken down by tags on cost allocation reports. 

For more information, see [Using cost allocation tags](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html). 

# Global tables - multi-active, multi-Region replication
<a name="GlobalTables"></a>

*Amazon DynamoDB global tables* is a fully managed, multi-Region, multi-active database feature that provides easy-to-use data replication and fast local read and write performance for globally scaled applications.

Global tables automatically replicate your DynamoDB table data across AWS Regions and optionally across AWS accounts without requiring you to build and maintain your own replication solution. Global tables are ideal for applications requiring business continuity and high availability through multi-Region deployment. Any global table replica can serve reads and writes. Applications can achieve high resilience with a low or zero Recovery Point Objective (RPO) by shifting traffic to a different Region if application processing is interrupted in a Region. Global tables are available in all Regions where DynamoDB is available.

## Consistency modes
<a name="GlobalTables.consistency-modes"></a>

When you create a global table, you can configure its consistency mode. Global tables support two consistency modes: multi-Region eventual consistency (MREC) and multi-Region strong consistency (MRSC).

If you do not specify a consistency mode when creating a global table, the global table defaults to multi-Region eventual consistency (MREC). A global table cannot contain replicas configured with different consistency modes. You cannot change a global table's consistency mode after creation.

## Account configurations
<a name="GlobalTables.account-configurations"></a>

DynamoDB supports two global table models, each designed for different architectural patterns:
+ **Same-account global tables** – All replicas are created and managed within a single AWS account.
+ **Multi-account global tables** – Replicas are deployed across multiple AWS accounts while participating in a shared replication group.

Both same-account and multi-account models support multi-Region writes, asynchronous replication, last-writer-wins conflict resolution, and the same billing model. However, they differ in how accounts, permissions, encryption, and table governance are managed.

Global tables configured for MRSC only support same-account configurations.

You can configure a global table using the AWS Management Console. Global tables use existing DynamoDB APIs to read and write data to your tables, so no application changes are required. You pay only for the resources you provision or use, with no upfront costs or commitments.
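As a sketch of how a replica is added with the AWS CLI (assuming global tables version 2019.11.21 and a `Music` table that meets the replication prerequisites, such as having DynamoDB Streams enabled with new and old images), you add a Region to the replication group through `UpdateTable`:

```
# Add a replica of the Music table in us-west-2,
# creating or extending the global table.
aws dynamodb update-table --table-name Music \
    --replica-updates 'Create={RegionName=us-west-2}'
```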


| **Properties** | **Same-Account global tables** | **Multi-account global tables** | 
| --- | --- | --- | 
| Primary use case | Multi-Region resiliency for applications within a single AWS account | Multi-Region, multi-account replication for applications owned by different teams, distinct business units, or strong security boundaries across accounts | 
| Account model | All replicas created and managed in one AWS account | Replicas created across multiple AWS accounts within the same deployment | 
| Resource ownership | A single account owns the table and all replicas | Each account owns its local replica; replication group spans accounts | 
| Version supported | Global tables version 2019.11.21 (Current) and Version 2017.11.29 (Legacy) | Global tables version 2019.11.21 (Current) | 
| Control plane operations | Create, modify, and delete replicas through the table owner account | Distributed control-plane operations: accounts join or leave the replication group | 
| Data plane operations | Standard DynamoDB endpoints per Region | Data-plane access per account/Region; routing through replication group | 
| Security boundary | A single IAM and KMS boundary | Distinct IAM, KMS, billing, CloudTrail, and governance per account | 
| Best fit | Organizations with centralized ownership of tables | Organizations with federated teams, governance boundaries, or multi-account setups | 

**Topics**
+ [Consistency modes](#GlobalTables.consistency-modes)
+ [Account configurations](#GlobalTables.account-configurations)
+ [Global tables core concepts](globaltables-CoreConcepts.md)
+ [DynamoDB same-account global table](globaltables-SameAccount.md)
+ [DynamoDB multi-account global tables](globaltables-MultiAccount.md)
+ [Understanding Amazon DynamoDB billing for global tables](global-tables-billing.md)
+ [DynamoDB global tables versions](V2globaltables_versions.md)
+ [Best practices for global tables](globaltables-bestpractices.md)

# Global tables core concepts
<a name="globaltables-CoreConcepts"></a>

The following sections describe the concepts and behaviors of global tables in Amazon DynamoDB.

## Concepts
<a name="globaltables-CoreConcepts.KeyConcepts"></a>

*Global tables* is a DynamoDB feature that replicates table data across AWS Regions.

A *replica table* (or replica) is a DynamoDB table that functions as part of a global table. A global table consists of two or more replica tables across different AWS Regions. Each global table can have only one replica per AWS Region. All replicas in a global table share the same table name, primary key schema, and item data.

When an application writes data to a replica in one Region, DynamoDB automatically replicates the write to all other replicas in the global table. For more information about how to get started with global tables, see [Tutorials: Creating global tables](V2globaltables.tutorial.md) or [Tutorials: Creating multi-account global tables](V2globaltables_MA.tutorial.md).

## Versions
<a name="globaltables-CoreConcepts.Versions"></a>

There are two versions of DynamoDB global tables available: [Global tables version 2019.11.21 (Current)](GlobalTables.md) and [Global tables version 2017.11.29 (Legacy)](globaltables.V1.md). You should use Global tables version 2019.11.21 (Current) whenever possible. The information in this documentation section is for Version 2019.11.21 (Current). For more information, see [Determining the version of a global table](V2globaltables_versions.md#globaltables.DetermineVersion).

## Availability
<a name="globaltables-CoreConcepts.availability"></a>

Global tables help improve your business continuity by making it easier to implement a multi-Region high availability architecture. If a workload in a single AWS Region becomes impaired, you can shift application traffic to a different Region and perform reads and writes to a different replica table in the same global table.

Each replica table in a global table provides the same durability and availability as a single-Region DynamoDB table. Global tables offer a 99.999% availability [Service Level Agreement (SLA)](https://aws.amazon.com//dynamodb/sla/), compared to 99.99% for single-Region tables.
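These SLA figures translate into concrete yearly downtime budgets. A quick back-of-the-envelope calculation (assuming a 365-day year):

```python
# Convert an availability SLA percentage into a yearly downtime budget.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_budget_minutes(availability_pct: float) -> float:
    """Maximum minutes per year a service can be unavailable and still meet the SLA."""
    return (1 - availability_pct / 100) * MINUTES_PER_YEAR

single_region = downtime_budget_minutes(99.99)    # single-Region table SLA
global_table = downtime_budget_minutes(99.999)    # global table SLA

print(f"99.99%  allows ~{single_region:.1f} minutes of downtime per year")
print(f"99.999% allows ~{global_table:.1f} minutes of downtime per year")
```

That is roughly 52.6 minutes per year at 99.99% versus about 5.3 minutes per year at 99.999%.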

## Fault injection testing
<a name="fault-injection-testing"></a>

Both MREC and MRSC global tables integrate with [AWS Fault Injection Service](https://docs.aws.amazon.com/resilience-hub/latest/userguide/testing.html) (AWS FIS), a fully managed service for running controlled fault injection experiments to improve an application's resilience. Using AWS FIS, you can:
+ Create experiment templates that define specific failure scenarios.
+ Inject failures to validate application resilience by simulating Region isolation (that is, pausing replication to and from a selected replica) to test error handling, recovery mechanisms, and multi-Region traffic shift behavior when one AWS Region experiences disruption.

For example, in a global table with replicas in US East (N. Virginia), US East (Ohio), and US West (Oregon), you can run an experiment in US East (Ohio) to test Region isolation there while US East (N. Virginia) and US West (Oregon) continue normal operations. This controlled testing helps you identify and resolve potential issues before they affect production workloads.

For a complete list of supported AWS FIS actions, see [Action targets](https://docs.aws.amazon.com/fis/latest/userguide/action-sequence.html#action-targets) in the *AWS FIS User Guide*. To pause DynamoDB replication between Regions, see [Cross-Region Connectivity](https://docs.aws.amazon.com/fis/latest/userguide/cross-region-scenario.html).

For information about Amazon DynamoDB global table actions available in AWS FIS, see [DynamoDB global tables actions reference](https://docs.aws.amazon.com/fis/latest/userguide/fis-actions-reference.html#dynamodb-actions-reference) in the *AWS FIS User Guide*.

To get started running fault injection experiments, see [Planning your AWS FIS experiments](https://docs.aws.amazon.com/fis/latest/userguide/getting-started-planning.html) in the *AWS FIS User Guide*.

**Note**  
During AWS FIS experiments on MRSC global tables, eventually consistent reads are permitted, but table setting updates (such as changing the billing mode or configuring table throughput) are not allowed, just as with MREC. Check the CloudWatch metric [`FaultInjectionServiceInducedErrors`](metrics-dimensions.md#FaultInjectionServiceInducedErrors) for additional details regarding the error code.

## Time To Live (TTL)
<a name="global-tables-ttl"></a>

Global tables configured for MREC support configuring [Time To Live](TTL.md) (TTL) deletion. TTL settings are automatically synchronized across all replicas in a global table. When TTL deletes an item from a replica in one Region, the delete is replicated to all other replicas in the global table. TTL does not consume write capacity, so you are not charged for the TTL delete in the Region where it occurred. However, you are charged for the replicated delete in each other Region that contains a replica in the global table.

TTL delete replication consumes write capacity on the replicas to which the delete is being replicated. Replicas configured for provisioned capacity may throttle requests if the combination of write throughput and TTL delete throughput is higher than the provisioned write capacity.
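To budget for this, you can roughly estimate the write capacity a provisioned replica needs. The following sizing sketch is illustrative only; it assumes each application write, replicated write, and replicated TTL delete costs 1 WCU, which holds only for items of 1 KB or less:

```python
def required_write_capacity(app_writes_per_sec: float,
                            replicated_writes_per_sec: float,
                            ttl_deletes_per_sec: float) -> float:
    """Estimate the WCUs a provisioned replica needs to absorb local application
    writes, replicated writes from other Regions, and replicated TTL deletes
    (each assumed to cost 1 WCU, valid for items of 1 KB or less)."""
    return app_writes_per_sec + replicated_writes_per_sec + ttl_deletes_per_sec

# Example: 100 local writes/s, 150 replicated writes/s, 25 replicated TTL deletes/s
print(required_write_capacity(100, 150, 25))  # 275
```

If the sum exceeds the replica's provisioned write capacity, requests on that replica may throttle.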

Global tables configured for multi-Region strong consistency (MRSC) do not support configuring Time To Live (TTL) deletion.

## Streams
<a name="global-tables-streams"></a>

Global tables configured for multi-Region eventual consistency (MREC) replicate changes by reading those changes from a [DynamoDB Stream](Streams.md) on a replica table and applying that change to all other replica tables. Streams are therefore enabled by default on all replicas in an MREC global table, and cannot be disabled on those replicas. The MREC replication process may combine multiple changes in a short period of time into a single replicated write, resulting in each replica's Stream containing slightly different records. Streams records on MREC replicas maintain ordering for all changes to the same item, but the relative ordering of changes to different items may vary across replicas.

If you want to write an application that processes Streams records for changes that occurred in a particular Region but not other Regions in a global table, you can add an attribute to each item that defines in which Region the change for that item occurred. You can use this attribute to filter Streams records for changes that occurred in other Regions, including the use of Lambda event filters to only invoke Lambda functions for changes in a specific Region.
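As a sketch of this pattern, suppose the application stamps each item with a hypothetical `writeRegion` attribute at write time; a stream consumer can then keep only the records that originated in one Region:

```python
# Filter DynamoDB Streams-style records down to changes that originated in one
# Region, assuming the application writes a (hypothetical) "writeRegion"
# attribute on every item.
def records_from_region(records: list[dict], region: str) -> list[dict]:
    return [
        r for r in records
        if r.get("dynamodb", {})
            .get("NewImage", {})
            .get("writeRegion", {})
            .get("S") == region
    ]

records = [
    {"dynamodb": {"NewImage": {"pk": {"S": "a"}, "writeRegion": {"S": "us-east-1"}}}},
    {"dynamodb": {"NewImage": {"pk": {"S": "b"}, "writeRegion": {"S": "eu-west-1"}}}},
]
print(len(records_from_region(records, "us-east-1")))  # 1
```

A Lambda event source filter can express the same predicate declaratively, so your function is only invoked for records that match the Region attribute.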

Global tables configured for multi-Region strong consistency (MRSC) do not use DynamoDB Streams for replication, so Streams are not enabled by default on MRSC replicas. You can enable Streams on an MRSC replica. Streams records on MRSC replicas are identical for every replica, including Stream record ordering.

## Transactions
<a name="global-tables-transactions"></a>

On a global table configured for MREC, DynamoDB transaction operations ([TransactWriteItems](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_TransactWriteItems.html) and [TransactGetItems](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_TransactGetItems.html)) are only atomic within the Region where the operation was invoked. Transactional writes are not replicated as a unit across Regions, meaning only some of the writes in a transaction may be returned by read operations in other replicas at a given point in time.

For example, if you have a global table with replicas in the US East (Ohio) and US West (Oregon) Regions and perform a `TransactWriteItems` operation in the US East (Ohio) Region, you may observe partially completed transactions in the US West (Oregon) Region as changes are replicated. Changes will only be replicated to other Regions once they've been committed in the source Region.
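The following toy model (not the actual replication protocol) illustrates why this happens: the transaction commits atomically in the source Region, but each item then replicates independently:

```python
# Toy model: a transaction commits atomically in the source Region, but each
# item replicates independently, so a remote replica can expose a partial view.
source, remote = {}, {}

def transact_write(items: dict):
    source.update(items)          # atomic in the source Region

def replicate_one(key: str):
    remote[key] = source[key]     # items replicate independently

transact_write({"order#1": "PLACED", "inventory#42": "RESERVED"})
replicate_one("order#1")          # the first item has replicated...

# ...so a remote read at this instant sees only part of the transaction.
print("order#1" in remote, "inventory#42" in remote)  # True False
```

Once both items have replicated, the remote replica converges to the full committed state.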

Global tables configured for multi-Region strong consistency (MRSC) do not support transaction operations, and will return an error if those operations are invoked on an MRSC replica.

## Read and write throughput
<a name="globaltables-CoreConcepts.Throughput"></a>

### Provisioned mode
<a name="gt_throughput.provisioned"></a>

Replication consumes write capacity. Replicas configured for provisioned capacity may throttle requests if the combination of application write throughput and replication write throughput exceeds the provisioned write capacity. For global tables using provisioned mode, auto scaling settings for both read and write capacities are synchronized between replicas.

You can independently configure read capacity settings for each replica in a global table by using the [ProvisionedThroughputOverride](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_ProvisionedThroughputOverride.html) parameter at the replica level. By default, changes to provisioned read capacity are applied to all replicas in the global table. When adding a new replica to a global table, the read capacity of the source table or replica is used as the initial value unless a replica-level override is explicitly specified.

### On-demand mode
<a name="gt_throughput.on-demand"></a>

For global tables configured for on-demand mode, write capacity is automatically synchronized across all replicas. DynamoDB automatically adjusts capacity based on traffic, and there are no replica-specific read or write capacity settings to manage.

## Monitoring global tables
<a name="monitoring-global-tables"></a>

Global tables configured for multi-Region eventual consistency (MREC) publish the [`ReplicationLatency`](metrics-dimensions.md#ReplicationLatency) metric to CloudWatch. This metric tracks the elapsed time between when an item is written to a replica table, and when that item appears in another replica in the global table. `ReplicationLatency` is expressed in milliseconds and is emitted for every source and destination Region pair in a global table. 

Typical `ReplicationLatency` values depend on the distance between your chosen AWS Regions, as well as other variables such as workload type and throughput. For example, a source replica in the US West (N. California) (us-west-1) Region has lower `ReplicationLatency` to the US West (Oregon) (us-west-2) Region than to the Africa (Cape Town) (af-south-1) Region.

An increasing value for `ReplicationLatency` could indicate that updates from one replica are not propagating to other replica tables in a timely manner. In this case, you can temporarily redirect your application's read and write activity to a different AWS Region.
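One way to automate that decision is a simple health check over recent `ReplicationLatency` datapoints, for example retrieved from CloudWatch. This is an illustrative sketch; the 5-second threshold and three-datapoint window are arbitrary choices you should tune for your workload:

```python
# Decide whether replication lag warrants shifting traffic, given recent
# ReplicationLatency datapoints in milliseconds. Threshold and window are
# illustrative, not recommendations.
def replication_unhealthy(latencies_ms: list[float],
                          threshold_ms: float = 5000,
                          consecutive: int = 3) -> bool:
    """True if the last `consecutive` datapoints all exceed the threshold."""
    tail = latencies_ms[-consecutive:]
    return len(tail) == consecutive and all(v > threshold_ms for v in tail)

print(replication_unhealthy([800, 900, 6200, 7400, 9100]))  # True
print(replication_unhealthy([800, 900, 1000]))              # False
```

Requiring several consecutive breaching datapoints avoids shifting traffic on a single transient spike.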

Global tables configured for multi-Region strong consistency (MRSC) do not publish a `ReplicationLatency` metric.

## Considerations for managing global tables
<a name="management-considerations"></a>

You can't delete a table used to add a new global table replica until 24 hours have elapsed since the new replica was created.

If you disable an AWS Region that contains global table replicas, those replicas are permanently converted to single-Region tables 20 hours after the Region is disabled.

# DynamoDB same-account global table
<a name="globaltables-SameAccount"></a>

Same-account global tables automatically replicate your DynamoDB table data across AWS Regions within a single AWS account. Same-account global tables provide the simplest model for running multi-Region applications because all replicas share the same account boundary, ownership, and permissions model. When you choose the AWS Regions for your replica tables, global tables handle all replication automatically. Global tables are available in all Regions where DynamoDB is available.

Same-account global tables provide the following benefits:
+ Replicate DynamoDB table data automatically across your choice of AWS Regions to locate data closer to your users
+ Enable higher application availability during regional isolation or degradation
+ Use built-in conflict resolution so you can focus on your application's business logic
+ When creating a same-account global table, you can choose either [Multi-Region eventual consistency (MREC)](V2globaltables_HowItWorks.md#V2globaltables_HowItWorks.consistency-modes.mrec) or [Multi-Region strong consistency (MRSC)](V2globaltables_HowItWorks.md#V2globaltables_HowItWorks.consistency-modes.mrsc)

**Topics**
+ [How DynamoDB global tables work](V2globaltables_HowItWorks.md)
+ [Tutorials: Creating global tables](V2globaltables.tutorial.md)
+ [DynamoDB global tables security](globaltables-security.md)

# How DynamoDB global tables work
<a name="V2globaltables_HowItWorks"></a>

The following sections describe the concepts and behaviors of global tables in Amazon DynamoDB.

## Concepts
<a name="V2globaltables_HowItWorks.KeyConcepts"></a>

*Global tables* is a DynamoDB feature that replicates table data across AWS Regions. 

A *replica table* (or replica) is a DynamoDB table that functions as part of a global table. A global table consists of two or more replica tables across different AWS Regions. Each global table can have only one replica per AWS Region. All replicas in a global table share the same table name, primary key schema, and item data.

When an application writes data to a replica in one Region, DynamoDB automatically replicates the write to all other replicas in the global table. For more information about how to get started with global tables, see [Tutorials: Creating global tables](V2globaltables.tutorial.md).

## Versions
<a name="V2globaltables_HowItWorks.versions"></a>

There are two versions of DynamoDB global tables available: Version 2019.11.21 (Current) and [Version 2017.11.29 (Legacy)](globaltables.V1.md). You should use Version 2019.11.21 (Current) whenever possible. The information in this documentation section is for Version 2019.11.21 (Current). For more information, see [Determining the version of a global table](V2globaltables_versions.md#globaltables.DetermineVersion).

## Availability
<a name="V2globaltables_HowItWorks.availability"></a>

Global tables help improve your business continuity by making it easier to implement a multi-Region high availability architecture. If a workload in a single AWS Region becomes impaired, you can shift application traffic to a different Region and perform reads and writes to a different replica table in the same global table.

Each replica table in a global table provides the same durability and availability as a single-Region DynamoDB table. Global tables offer a 99.999% availability [Service Level Agreement (SLA)](https://aws.amazon.com//dynamodb/sla/), compared to 99.99% for single-Region tables.

## Consistency modes
<a name="V2globaltables_HowItWorks.consistency-modes"></a>

When you create a global table, you can configure its consistency mode. Global tables support two consistency modes: multi-Region eventual consistency (MREC), and multi-Region strong consistency (MRSC).

If you do not specify a consistency mode when creating a global table, the global table defaults to multi-Region eventual consistency (MREC). A global table cannot contain replicas configured with different consistency modes. You cannot change a global table's consistency mode after creation.

### Multi-Region eventual consistency (MREC)
<a name="V2globaltables_HowItWorks.consistency-modes.mrec"></a>

Multi-Region eventual consistency (MREC) is the default consistency mode for global tables. Item changes in an MREC global table replica are asynchronously replicated to all other replicas, typically within a second or less. In the unlikely event that a replica in an MREC global table becomes isolated or impaired, any data not yet replicated to other Regions is replicated when the replica becomes healthy again.

If the same item is modified in multiple Regions simultaneously, DynamoDB will resolve the conflict by using the modification with the latest internal timestamp on a per-item basis, referred to as a "last writer wins" conflict resolution method. An item will eventually converge in all replicas to the version created by the last write.
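A minimal model of the last-writer-wins rule: among concurrent versions of the same item, the version with the latest timestamp wins. (This is a simplification; the real internal timestamps are managed by DynamoDB and not exposed to applications.)

```python
# Simplified "last writer wins": concurrent versions of the same item
# converge to the one with the latest (internal) timestamp.
def resolve(versions: list[dict]) -> dict:
    return max(versions, key=lambda v: v["timestamp"])

concurrent_writes = [
    {"pk": "user#1", "name": "Ana",  "timestamp": 1700000000.120},  # written in us-east-1
    {"pk": "user#1", "name": "Anna", "timestamp": 1700000000.350},  # written in eu-west-1
]
winner = resolve(concurrent_writes)
print(winner["name"])  # Anna
```

Every replica eventually converges to the winning version, so the earlier write is silently discarded rather than merged.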

[Strongly consistent read operations](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_GetItem.html#DDB-GetItem-request-ConsistentRead) return the latest version of an item if that item was last updated in the Region where the read occurred, but may return stale data if the item was last updated in a different Region. Conditional writes evaluate the condition expression against the version of the item in the Region.

You create an MREC global table by adding a replica to an existing DynamoDB table. Adding a replica has no performance impact on existing single-Region DynamoDB tables or global table replicas. You can add replicas to an MREC global table to expand the number of Regions where data is replicated, or remove replicas that are no longer needed. An MREC global table can have a replica in any Region where DynamoDB is available, and can have as many replicas as there are Regions in the [AWS partition](https://docs.aws.amazon.com/whitepapers/latest/aws-fault-isolation-boundaries/partitions.html).

### Multi-Region strong consistency (MRSC)
<a name="V2globaltables_HowItWorks.consistency-modes.mrsc"></a>

You can configure multi-Region strong consistency (MRSC) mode when you create a global table. Item changes in an MRSC global table replica are synchronously replicated to at least one other Region before the write operation returns a successful response. Strongly consistent read operations on any MRSC replica always return the latest version of an item. Conditional writes always evaluate the condition expression against the latest version of an item.

An MRSC global table must be deployed in exactly three Regions. You can configure it with three replicas, or with two replicas and one witness. A witness is a component of an MRSC global table that contains data written to the global table replicas and provides an optional alternative to a full replica while supporting MRSC's availability architecture. You cannot perform read or write operations on a witness, and the witness is located in a different Region than the two replicas. You choose the Regions for your replicas and the witness when you create the global table. To determine whether, and in which Region, an MRSC global table has a witness configured, use the output of the [DescribeTable](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DescribeTable.html) API. The witness is owned and managed by DynamoDB, and it does not appear in your AWS account in the Region where it is configured.

MRSC global tables are available in the following Region sets: the US Region set (US East (N. Virginia), US East (Ohio), US West (Oregon)), the EU Region set (Europe (Ireland), Europe (London), Europe (Paris), Europe (Frankfurt)), and the AP Region set (Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Osaka)). MRSC global tables cannot span Region sets; for example, an MRSC global table cannot contain replicas from both the US and EU Region sets.
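The Region-set constraint amounts to a simple membership check. This sketch assumes the standard Region codes for the Regions named above and is for illustration only; DynamoDB performs this validation for you:

```python
# MRSC replicas (plus any witness) must all come from a single Region set.
# Region codes below assume the standard codes for the Regions named in this
# section's Region sets.
REGION_SETS = {
    "US": {"us-east-1", "us-east-2", "us-west-2"},
    "EU": {"eu-west-1", "eu-west-2", "eu-west-3", "eu-central-1"},
    "AP": {"ap-northeast-1", "ap-northeast-2", "ap-northeast-3"},
}

def valid_mrsc_regions(regions: set[str]) -> bool:
    """True if every chosen Region falls within one Region set."""
    return any(regions <= region_set for region_set in REGION_SETS.values())

print(valid_mrsc_regions({"us-east-1", "us-east-2", "us-west-2"}))  # True
print(valid_mrsc_regions({"us-east-1", "eu-west-1", "us-west-2"}))  # False
```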

You create an MRSC global table by adding either two replicas, or one replica and a witness, to an existing DynamoDB table that contains no data. Converting a single-Region table that already contains items to an MRSC global table is not supported; when converting an empty table, ensure that no data is written to it during the conversion process. You cannot add additional replicas to an existing MRSC global table, and you cannot delete a single replica or witness from one. You can, however, delete two replicas, or one replica and the witness, which converts the remaining replica to a single-Region DynamoDB table.

A write operation fails with a `ReplicatedWriteConflictException` when it attempts to modify an item that is already being modified in another Region. Writes that fail with the `ReplicatedWriteConflictException` can be retried and will succeed if the item is no longer being modified in another Region.
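A minimal retry sketch for this case, using exponential backoff with jitter. The exception class here is a local stand-in for the SDK's `ReplicatedWriteConflictException`, and `flaky_write` is a hypothetical write function used only for the demo:

```python
import random
import time

class ReplicatedWriteConflictException(Exception):
    """Local stand-in for the SDK error raised on cross-Region write conflicts."""

def write_with_retry(write_fn, max_attempts: int = 5):
    """Retry a write that may hit a replicated write conflict, sleeping with
    capped exponential backoff plus jitter between attempts."""
    for attempt in range(max_attempts):
        try:
            return write_fn()
        except ReplicatedWriteConflictException:
            if attempt == max_attempts - 1:
                raise
            time.sleep(min(0.1 * 2 ** attempt, 1.0) * random.random())

# Demo: a write that conflicts twice, then succeeds once the item is free.
attempts = {"n": 0}
def flaky_write():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ReplicatedWriteConflictException()
    return "ok"

print(write_with_retry(flaky_write))  # ok
```

Backoff with jitter spreads out retries so competing writers in different Regions are less likely to collide again on the same item.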

The following considerations apply to MRSC global tables:
+ Time to Live (TTL) is not supported for MRSC global tables.
+ Local secondary indexes (LSIs) are not supported for MRSC global tables.
+ CloudWatch Contributor Insights information is only reported for the Region in which an operation occurred.

## Choosing a consistency mode
<a name="V2globaltables_HowItWorks.choosing-consistency-mode"></a>

The key criterion for choosing a multi-Region consistency mode is whether your application prioritizes lower-latency writes and strongly consistent reads, or prioritizes global strong consistency.

MREC global tables will have lower write and strongly consistent read latencies compared to MRSC global tables. MREC global tables have a Recovery Point Objective (RPO) equal to the replication delay between replicas, usually a few seconds depending on the replica Regions.

You should use the MREC mode when:
+ Your application can tolerate stale data returned from strongly consistent read operations if that data was updated in another Region.
+ You prioritize lower write and strongly consistent read latencies over multi-Region read consistency.
+ Your multi-Region high availability strategy can tolerate an RPO greater than zero.

MRSC global tables will have higher write and strongly consistent read latencies compared to MREC global tables. MRSC global tables support a Recovery Point Objective (RPO) of zero.

You should use the MRSC mode when:
+ You need strongly consistent reads across multiple Regions.
+ You prioritize global read consistency over lower write latency.
+ Your multi-Region high availability strategy requires an RPO of zero.

## Monitoring global tables
<a name="monitoring-global-tables"></a>

Global tables configured for multi-Region eventual consistency (MREC) publish the [`ReplicationLatency`](metrics-dimensions.md#ReplicationLatency) metric to CloudWatch. This metric tracks the elapsed time between when an item is written to a replica table, and when that item appears in another replica in the global table. `ReplicationLatency` is expressed in milliseconds and is emitted for every source and destination Region pair in a global table. 

Typical `ReplicationLatency` values depend on the distance between your chosen AWS Regions, as well as other variables such as workload type and throughput. For example, a source replica in the US West (N. California) (us-west-1) Region has lower `ReplicationLatency` to the US West (Oregon) (us-west-2) Region than to the Africa (Cape Town) (af-south-1) Region.

An increasing value for `ReplicationLatency` could indicate that updates from one replica are not propagating to other replica tables in a timely manner. In this case, you can temporarily redirect your application's read and write activity to a different AWS Region.

Global tables configured for multi-Region strong consistency (MRSC) do not publish a `ReplicationLatency` metric.

## Fault injection testing
<a name="fault-injection-testing"></a>

Both MREC and MRSC global tables integrate with [AWS Fault Injection Service](https://docs.aws.amazon.com/resilience-hub/latest/userguide/testing.html) (AWS FIS), a fully managed service for running controlled fault injection experiments to improve an application's resilience. Using AWS FIS, you can:
+ Create experiment templates that define specific failure scenarios.
+ Inject failures to validate application resilience by simulating Region isolation (that is, pausing replication to and from a selected replica) to test error handling, recovery mechanisms, and multi-Region traffic shift behavior when one AWS Region experiences disruption.

For example, in a global table with replicas in US East (N. Virginia), US East (Ohio), and US West (Oregon), you can run an experiment in US East (Ohio) to test Region isolation there while US East (N. Virginia) and US West (Oregon) continue normal operations. This controlled testing helps you identify and resolve potential issues before they affect production workloads.

For a complete list of supported AWS FIS actions, see [Action targets](https://docs.aws.amazon.com/fis/latest/userguide/action-sequence.html#action-targets) in the *AWS FIS User Guide*. To pause DynamoDB replication between Regions, see [Cross-Region Connectivity](https://docs.aws.amazon.com/fis/latest/userguide/cross-region-scenario.html).

For information about Amazon DynamoDB global table actions available in AWS FIS, see [DynamoDB global tables actions reference](https://docs.aws.amazon.com/fis/latest/userguide/fis-actions-reference.html#dynamodb-actions-reference) in the *AWS FIS User Guide*.

To get started running fault injection experiments, see [Planning your AWS FIS experiments](https://docs.aws.amazon.com/fis/latest/userguide/getting-started-planning.html) in the *AWS FIS User Guide*.

**Note**  
During AWS FIS experiments on MRSC global tables, eventually consistent reads are permitted, but table setting updates (such as changing the billing mode or configuring table throughput) are not allowed, just as with MREC. Check the CloudWatch metric [`FaultInjectionServiceInducedErrors`](metrics-dimensions.md#FaultInjectionServiceInducedErrors) for additional details regarding the error code.

## Time To Live (TTL)
<a name="global-tables-ttl"></a>

Global tables configured for MREC support configuring [Time To Live](TTL.md) (TTL) deletion. TTL settings are automatically synchronized across all replicas in a global table. When TTL deletes an item from a replica in one Region, the delete is replicated to all other replicas in the global table. TTL does not consume write capacity, so you are not charged for the TTL delete in the Region where it occurred. However, you are charged for the replicated delete in each other Region that contains a replica in the global table.

TTL delete replication consumes write capacity on the replicas to which the delete is being replicated. Replicas configured for provisioned capacity may throttle requests if the combination of write throughput and TTL delete throughput is higher than the provisioned write capacity.

Global tables configured for multi-Region strong consistency (MRSC) do not support configuring Time To Live (TTL) deletion.

## Streams
<a name="global-tables-streams"></a>

Global tables configured for multi-Region eventual consistency (MREC) replicate changes by reading those changes from a [DynamoDB Stream](Streams.md) on a replica table and applying that change to all other replica tables. Streams are therefore enabled by default on all replicas in an MREC global table, and cannot be disabled on those replicas. The MREC replication process may combine multiple changes in a short period of time into a single replicated write, resulting in each replica's Stream containing slightly different records. Streams records on MREC replicas are always ordered on a per-item basis, but ordering between items may differ between replicas.

Global tables configured for multi-Region strong consistency (MRSC) do not use DynamoDB Streams for replication, so Streams are not enabled by default on MRSC replicas. You can enable Streams on an MRSC replica. Streams records on MRSC replicas are identical for every replica, including Stream record ordering.

If you want to write an application that processes Streams records for changes that occurred in a particular Region but not other Regions in a global table, you can add an attribute to each item that defines in which Region the change for that item occurred. You can use this attribute to filter Streams records for changes that occurred in other Regions, including the use of Lambda event filters to only invoke Lambda functions for changes in a specific Region.

## Transactions
<a name="global-tables-transactions"></a>

On a global table configured for MREC, DynamoDB transaction operations ([TransactWriteItems](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_TransactWriteItems.html) and [TransactGetItems](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_TransactGetItems.html)) are only atomic within the Region where the operation was invoked. Transactional writes are not replicated as a unit across Regions, meaning only some of the writes in a transaction may be returned by read operations in other replicas at a given point in time.

For example, if you have a global table with replicas in the US East (Ohio) and US West (Oregon) Regions and perform a `TransactWriteItems` operation in the US East (Ohio) Region, you may observe partially completed transactions in the US West (Oregon) Region as changes are replicated. Changes will only be replicated to other Regions once they've been committed in the source Region.

Global tables configured for multi-Region strong consistency (MRSC) do not support transaction operations, and will return an error if those operations are invoked on an MRSC replica.

## Read and write throughput
<a name="V2globaltables_HowItWorks.Throughput"></a>

### Provisioned mode
<a name="gt_throughput.provisioned"></a>

Replication consumes write capacity. Replicas configured for provisioned capacity may throttle requests if the combination of application write throughput and replication write throughput exceeds the provisioned write capacity. For global tables using provisioned mode, auto scaling settings for both read and write capacities are synchronized between replicas.

You can independently configure read capacity settings for each replica in a global table by using the [ProvisionedThroughputOverride](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_ProvisionedThroughputOverride.html) parameter at the replica level. By default, changes to provisioned read capacity are applied to all replicas in the global table. When adding a new replica to a global table, the read capacity of the source table or replica is used as the initial value unless a replica-level override is explicitly specified.

### On-demand mode
<a name="gt_throughput.on-demand"></a>

For global tables configured for on-demand mode, write capacity is automatically synchronized across all replicas. DynamoDB automatically adjusts capacity based on traffic, and there are no replica-specific read or write capacity settings to manage.
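As a sketch, assuming the table uses on-demand mode, a maximum write throughput setting (which synchronizes to all replicas) might be applied as follows; the flag value and table name are illustrative:

```
# Hypothetical sketch: cap on-demand write throughput. Maximum write
# throughput is a synchronized setting, so it applies to all replicas.
aws dynamodb update-table \
    --table-name MusicTable \
    --on-demand-throughput MaxWriteRequestUnits=10000 \
    --region us-east-2
```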

## Settings synchronization
<a name="V2globaltables_HowItWorks.setting-synchronization"></a>

Settings in DynamoDB global tables are configuration parameters that control table behavior and replication. You manage these settings through the DynamoDB control plane APIs and can configure them when creating or modifying a global table. Global tables automatically synchronize certain settings across all replicas to maintain consistency, while allowing flexibility for Region-specific optimizations. Understanding which settings synchronize, and how, helps you configure your global table effectively. The settings fall into three categories based on how they are synchronized across replicas.

The following settings are always synchronized between replicas in a global table:
+ Capacity mode (provisioned capacity or on-demand)
+ Table provisioned write capacity
+ Table write auto scaling
+ Attribute definition of key schema
+ Global Secondary Index (GSI) definition
+ GSI provisioned write capacity
+ GSI write auto scaling
+ Server-side Encryption (SSE) type
+ Streams definition in MREC mode
+ Time To Live (TTL)
+ Warm Throughput
+ On-demand maximum write throughput

The following settings are synchronized between replicas, but can be overridden on a per-replica basis:
+ Table provisioned read capacity
+ Table read auto scaling
+ GSI provisioned read capacity
+ GSI read auto scaling
+ Table Class
+ On-demand maximum read throughput

**Note**  
Overridable setting values are changed if the setting is modified on any other replica. For example, suppose you have an MREC global table with replicas in US East (N. Virginia) and US West (Oregon). The US East (N. Virginia) replica has provisioned read throughput set to 200 RCUs, and the US West (Oregon) replica has a provisioned read throughput override set to 100 RCUs. If you update the provisioned read throughput setting on the US East (N. Virginia) replica from 200 RCUs to 300 RCUs, the new value is also applied to the replica in US West (Oregon). This changes the provisioned read throughput setting for the US West (Oregon) replica from the overridden value of 100 RCUs to the new value of 300 RCUs.
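The propagation in the preceding note can be sketched with the AWS CLI; this is a hypothetical illustration, with placeholder capacity values, that mirrors the note's scenario:

```
# Hypothetical sketch: update provisioned read capacity on the
# US East (N. Virginia) replica (WriteCapacityUnits is a placeholder).
aws dynamodb update-table \
    --table-name MusicTable \
    --provisioned-throughput ReadCapacityUnits=300,WriteCapacityUnits=10 \
    --region us-east-1

# The new value propagates to US West (Oregon), replacing the
# previously overridden 100 RCU value.
aws dynamodb describe-table \
    --table-name MusicTable \
    --region us-west-2 \
    --query 'Table.ProvisionedThroughput.ReadCapacityUnits'
```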

The following settings are never synchronized between replicas:
+ Deletion protection
+ Point-in-time Recovery
+ Tags
+ Table CloudWatch Contributor Insights enablement
+ GSI CloudWatch Contributor Insights enablement
+ Kinesis Data Streams definition
+ Resource Policies
+ Streams definition in MRSC mode

Any settings not listed in the preceding categories are likewise not synchronized between replicas.

## DynamoDB Accelerator (DAX)
<a name="V2globaltables_HowItWorks.dax"></a>

Writes to global table replicas bypass DynamoDB Accelerator (DAX) and update DynamoDB directly. As a result, DAX caches can become stale, because replicated writes don't update the DAX cache. DAX caches configured for global table replicas are refreshed only when the cache TTL expires.

## Considerations for managing global tables
<a name="management-considerations"></a>

You can't delete a table used to add a new global table replica until 24 hours have elapsed since the new replica was created.

If you disable an AWS Region that contains global table replicas, those replicas are permanently converted to single-Region tables 20 hours after the Region is disabled.

# Tutorials: Creating global tables
<a name="V2globaltables.tutorial"></a>

This section provides step-by-step instructions for creating DynamoDB global tables configured for your preferred consistency mode. Choose either Multi-Region Eventual Consistency (MREC) or Multi-Region Strong Consistency (MRSC) modes based on your application's requirements.

MREC global tables provide lower write latency with eventual consistency across AWS Regions. MRSC global tables provide strongly consistent reads across Regions with slightly higher write latencies than MREC. Choose the consistency mode that best meets your application's needs for data consistency, latency, and availability.

**Topics**
+ [Creating a global table configured for MREC](#V2creategt_mrec)
+ [Creating a global table configured for MRSC](#create-gt-mrsc)

## Creating a global table configured for MREC
<a name="V2creategt_mrec"></a>

This section shows how to create a global table with Multi-Region Eventual Consistency (MREC) mode. MREC is the default consistency mode for global tables and provides low-latency writes with asynchronous replication across AWS Regions. Changes made to an item in one Region are typically replicated to all other Regions within a second. This makes MREC ideal for applications that prioritize low write latency and can tolerate brief periods where different Regions may return slightly different versions of data.

You can create MREC global tables with replicas in any AWS Region where DynamoDB is available, and you can add or remove replicas at any time. The following examples show how to create an MREC global table with replicas in multiple Regions.

### Creating an MREC global table using the DynamoDB Console
<a name="mrec-console"></a>

Follow these steps to create a global table using the AWS Management Console. The following example creates a global table with replica tables in the United States and Europe.

1. Sign in to the AWS Management Console and open the DynamoDB console at [https://console.aws.amazon.com/dynamodb/](https://console.aws.amazon.com/dynamodb/).

1. For this example, choose **US East (Ohio)** from the Region selector in the navigation bar.

1. In the navigation pane on the left side of the console, choose **Tables**.

1. Choose **Create Table**.

1. On the **Create table** page:

   1. For **Table name**, enter **Music**.

   1. For **Partition key**, enter **Artist**.

   1. For **Sort key**, enter **SongTitle**.

   1. Keep the other default settings and choose **Create table**.

      This new table serves as the first replica table in a new global table. It is the prototype for other replica tables that you add later.

1. After the table becomes active:

   1. Select the **Music** table from the tables list.

   1. Choose the **Global tables** tab.

   1. Choose **Create replica**.

1. From the **Available replication Regions** dropdown list, choose **US West (Oregon) us-west-2**.

   The console ensures that a table with the same name doesn't exist in the selected Region. If a table with the same name does exist, you must delete the existing table before you can create a new replica table in that Region.

1. Choose **Create replica**. This starts the table creation process in the US West (Oregon) us-west-2 Region.

   The **Global tables** tab for the **Music** table (and for any other replica tables) shows that the table has been replicated in multiple Regions.

1. Add another Region by repeating the previous steps, but choose **Europe (Frankfurt) eu-central-1** as the Region.

1. To test replication:

   1. Make sure you're using the AWS Management Console in the US East (Ohio) Region.

   1. Choose **Explore table items**.

   1. Choose **Create item**.

   1. Enter **item_1** for **Artist** and **Song Value 1** for **SongTitle**.

   1. Choose **Create item**.

1. Verify replication by switching to the other Regions:

   1. From the Region selector in the upper-right corner, choose **Europe (Frankfurt)**.

   1. Verify that the **Music** table contains the item you created.

   1. Repeat the verification for **US West (Oregon)**.

### Creating an MREC global table using the AWS CLI or Java
<a name="mrec-cli-java"></a>

------
#### [ CLI ]

The following code example shows how to manage DynamoDB global tables with multi-Region replication with eventual consistency (MREC).
+ Create a table with multi-Region replication (MREC).
+ Put and get items from replica tables.
+ Remove replicas one-by-one.
+ Clean up by deleting the table.

**AWS CLI with Bash script**  
Create a table with multi-Region replication.  

```
# Step 1: Create a new table (MusicTable) in US East (Ohio), with DynamoDB Streams enabled (NEW_AND_OLD_IMAGES)
aws dynamodb create-table \
    --table-name MusicTable \
    --attribute-definitions \
        AttributeName=Artist,AttributeType=S \
        AttributeName=SongTitle,AttributeType=S \
    --key-schema \
        AttributeName=Artist,KeyType=HASH \
        AttributeName=SongTitle,KeyType=RANGE \
    --billing-mode PAY_PER_REQUEST \
    --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES \
    --region us-east-2

# Step 2: Create an identical MusicTable table in US East (N. Virginia)
aws dynamodb update-table --table-name MusicTable --cli-input-json \
'{
  "ReplicaUpdates":
  [
    {
      "Create": {
        "RegionName": "us-east-1"
      }
    }
  ]
}' \
--region us-east-2

# Step 3: Create a table in Europe (Ireland)
aws dynamodb update-table --table-name MusicTable --cli-input-json \
'{
  "ReplicaUpdates":
  [
    {
      "Create": {
        "RegionName": "eu-west-1"
      }
    }
  ]
}' \
--region us-east-2
```
Describe the multi-Region table.  

```
# Step 4: View the list of replicas created using describe-table
aws dynamodb describe-table \
    --table-name MusicTable \
    --region us-east-2 \
    --query 'Table.{TableName:TableName,TableStatus:TableStatus,MultiRegionConsistency:MultiRegionConsistency,Replicas:Replicas[*].{Region:RegionName,Status:ReplicaStatus}}'
```
Put items in a replica table.  

```
# Step 5: To verify that replication is working, add a new item to the Music table in US East (Ohio)
aws dynamodb put-item \
    --table-name MusicTable \
    --item '{"Artist": {"S":"item_1"},"SongTitle": {"S":"Song Value 1"}}' \
    --region us-east-2
```
Get items from replica tables.  

```
# Step 6: Wait for a few seconds, and then check to see whether the item has been 
# successfully replicated to US East (N. Virginia) and Europe (Ireland)
aws dynamodb get-item \
    --table-name MusicTable \
    --key '{"Artist": {"S":"item_1"},"SongTitle": {"S":"Song Value 1"}}' \
    --region us-east-1

aws dynamodb get-item \
    --table-name MusicTable \
    --key '{"Artist": {"S":"item_1"},"SongTitle": {"S":"Song Value 1"}}' \
    --region eu-west-1
```
Remove replicas.  

```
# Step 7: Delete the replica table in Europe (Ireland) Region
aws dynamodb update-table --table-name MusicTable --cli-input-json \
'{
  "ReplicaUpdates":
  [
    {
      "Delete": {
        "RegionName": "eu-west-1"
      }
    }
  ]
}' \
--region us-east-2

# Delete the replica table in US East (N. Virginia) Region
aws dynamodb update-table --table-name MusicTable --cli-input-json \
'{
  "ReplicaUpdates":
  [
    {
      "Delete": {
        "RegionName": "us-east-1"
      }
    }
  ]
}' \
--region us-east-2
```
Clean up by deleting the table.  

```
# Clean up: Delete the primary table
aws dynamodb delete-table --table-name MusicTable --region us-east-2

echo "Global table demonstration complete."
```
+ For API details, see the following topics in *AWS CLI Command Reference*.
  + [CreateTable](https://docs.aws.amazon.com/goto/aws-cli/dynamodb-2012-08-10/CreateTable)
  + [DeleteTable](https://docs.aws.amazon.com/goto/aws-cli/dynamodb-2012-08-10/DeleteTable)
  + [DescribeTable](https://docs.aws.amazon.com/goto/aws-cli/dynamodb-2012-08-10/DescribeTable)
  + [GetItem](https://docs.aws.amazon.com/goto/aws-cli/dynamodb-2012-08-10/GetItem)
  + [PutItem](https://docs.aws.amazon.com/goto/aws-cli/dynamodb-2012-08-10/PutItem)
  + [UpdateTable](https://docs.aws.amazon.com/goto/aws-cli/dynamodb-2012-08-10/UpdateTable)

------
#### [ Java ]

The following code example shows how to create and manage DynamoDB global tables with replicas across multiple Regions.
+ Create a table with Global Secondary Index and DynamoDB Streams.
+ Add replicas in different Regions to create a global table.
+ Remove replicas from a global table.
+ Add test items to verify replication across Regions.
+ Describe global table configuration and replica status.

**SDK for Java 2.x**  
Create a table with Global Secondary Index and DynamoDB Streams using AWS SDK for Java 2.x.  

```
    public static CreateTableResponse createTableWithGSI(
        final DynamoDbClient dynamoDbClient, final String tableName, final String indexName) {

        if (dynamoDbClient == null) {
            throw new IllegalArgumentException("DynamoDB client cannot be null");
        }
        if (tableName == null || tableName.trim().isEmpty()) {
            throw new IllegalArgumentException("Table name cannot be null or empty");
        }
        if (indexName == null || indexName.trim().isEmpty()) {
            throw new IllegalArgumentException("Index name cannot be null or empty");
        }

        try {
            LOGGER.info("Creating table: " + tableName + " with GSI: " + indexName);

            CreateTableRequest createTableRequest = CreateTableRequest.builder()
                .tableName(tableName)
                .attributeDefinitions(
                    AttributeDefinition.builder()
                        .attributeName("Artist")
                        .attributeType(ScalarAttributeType.S)
                        .build(),
                    AttributeDefinition.builder()
                        .attributeName("SongTitle")
                        .attributeType(ScalarAttributeType.S)
                        .build())
                .keySchema(
                    KeySchemaElement.builder()
                        .attributeName("Artist")
                        .keyType(KeyType.HASH)
                        .build(),
                    KeySchemaElement.builder()
                        .attributeName("SongTitle")
                        .keyType(KeyType.RANGE)
                        .build())
                .billingMode(BillingMode.PAY_PER_REQUEST)
                .globalSecondaryIndexes(GlobalSecondaryIndex.builder()
                    .indexName(indexName)
                    .keySchema(KeySchemaElement.builder()
                        .attributeName("SongTitle")
                        .keyType(KeyType.HASH)
                        .build())
                    .projection(
                        Projection.builder().projectionType(ProjectionType.ALL).build())
                    .build())
                .streamSpecification(StreamSpecification.builder()
                    .streamEnabled(true)
                    .streamViewType(StreamViewType.NEW_AND_OLD_IMAGES)
                    .build())
                .build();

            CreateTableResponse response = dynamoDbClient.createTable(createTableRequest);
            LOGGER.info("Table creation initiated. Status: "
                + response.tableDescription().tableStatus());

            return response;

        } catch (DynamoDbException e) {
            LOGGER.severe("Failed to create table: " + tableName + " - " + e.getMessage());
            throw e;
        }
    }
```
Wait for a table to become active using AWS SDK for Java 2.x.  

```
    public static void waitForTableActive(final DynamoDbClient dynamoDbClient, final String tableName) {

        if (dynamoDbClient == null) {
            throw new IllegalArgumentException("DynamoDB client cannot be null");
        }
        if (tableName == null || tableName.trim().isEmpty()) {
            throw new IllegalArgumentException("Table name cannot be null or empty");
        }

        try {
            LOGGER.info("Waiting for table to become active: " + tableName);

            try (DynamoDbWaiter waiter =
                DynamoDbWaiter.builder().client(dynamoDbClient).build()) {
                DescribeTableRequest request =
                    DescribeTableRequest.builder().tableName(tableName).build();

                waiter.waitUntilTableExists(request);
                LOGGER.info("Table is now active: " + tableName);
            }

        } catch (DynamoDbException e) {
            LOGGER.severe("Failed to wait for table to become active: " + tableName + " - " + e.getMessage());
            throw e;
        }
    }
```
Add a replica to create or extend a global table using AWS SDK for Java 2.x.  

```
    public static UpdateTableResponse addReplica(
        final DynamoDbClient dynamoDbClient,
        final String tableName,
        final Region replicaRegion,
        final String indexName,
        final Long readCapacity) {

        if (dynamoDbClient == null) {
            throw new IllegalArgumentException("DynamoDB client cannot be null");
        }
        if (tableName == null || tableName.trim().isEmpty()) {
            throw new IllegalArgumentException("Table name cannot be null or empty");
        }
        if (replicaRegion == null) {
            throw new IllegalArgumentException("Replica region cannot be null");
        }
        if (indexName == null || indexName.trim().isEmpty()) {
            throw new IllegalArgumentException("Index name cannot be null or empty");
        }
        if (readCapacity == null || readCapacity <= 0) {
            throw new IllegalArgumentException("Read capacity must be a positive number");
        }

        try {
            LOGGER.info("Adding replica in region: " + replicaRegion.id() + " for table: " + tableName);

            // Create a ReplicationGroupUpdate for adding a replica
            ReplicationGroupUpdate replicationGroupUpdate = ReplicationGroupUpdate.builder()
                .create(builder -> builder.regionName(replicaRegion.id())
                    .globalSecondaryIndexes(ReplicaGlobalSecondaryIndex.builder()
                        .indexName(indexName)
                        .provisionedThroughputOverride(ProvisionedThroughputOverride.builder()
                            .readCapacityUnits(readCapacity)
                            .build())
                        .build())
                    .build())
                .build();

            UpdateTableRequest updateTableRequest = UpdateTableRequest.builder()
                .tableName(tableName)
                .replicaUpdates(replicationGroupUpdate)
                .build();

            UpdateTableResponse response = dynamoDbClient.updateTable(updateTableRequest);
            LOGGER.info("Replica addition initiated in region: " + replicaRegion.id());

            return response;

        } catch (DynamoDbException e) {
            LOGGER.severe("Failed to add replica in region: " + replicaRegion.id() + " - " + e.getMessage());
            throw e;
        }
    }
```
Remove a replica from a global table using AWS SDK for Java 2.x.  

```
    public static UpdateTableResponse removeReplica(
        final DynamoDbClient dynamoDbClient, final String tableName, final Region replicaRegion) {

        if (dynamoDbClient == null) {
            throw new IllegalArgumentException("DynamoDB client cannot be null");
        }
        if (tableName == null || tableName.trim().isEmpty()) {
            throw new IllegalArgumentException("Table name cannot be null or empty");
        }
        if (replicaRegion == null) {
            throw new IllegalArgumentException("Replica region cannot be null");
        }

        try {
            LOGGER.info("Removing replica in region: " + replicaRegion.id() + " for table: " + tableName);

            // Create a ReplicationGroupUpdate for removing a replica
            ReplicationGroupUpdate replicationGroupUpdate = ReplicationGroupUpdate.builder()
                .delete(builder -> builder.regionName(replicaRegion.id()).build())
                .build();

            UpdateTableRequest updateTableRequest = UpdateTableRequest.builder()
                .tableName(tableName)
                .replicaUpdates(replicationGroupUpdate)
                .build();

            UpdateTableResponse response = dynamoDbClient.updateTable(updateTableRequest);
            LOGGER.info("Replica removal initiated in region: " + replicaRegion.id());

            return response;

        } catch (DynamoDbException e) {
            LOGGER.severe("Failed to remove replica in region: " + replicaRegion.id() + " - " + e.getMessage());
            throw e;
        }
    }
```
Add test items to verify replication using AWS SDK for Java 2.x.  

```
    public static PutItemResponse putTestItem(
        final DynamoDbClient dynamoDbClient, final String tableName, final String artist, final String songTitle) {

        if (dynamoDbClient == null) {
            throw new IllegalArgumentException("DynamoDB client cannot be null");
        }
        if (tableName == null || tableName.trim().isEmpty()) {
            throw new IllegalArgumentException("Table name cannot be null or empty");
        }
        if (artist == null || artist.trim().isEmpty()) {
            throw new IllegalArgumentException("Artist cannot be null or empty");
        }
        if (songTitle == null || songTitle.trim().isEmpty()) {
            throw new IllegalArgumentException("Song title cannot be null or empty");
        }

        try {
            LOGGER.info("Adding test item to table: " + tableName);

            Map<String, software.amazon.awssdk.services.dynamodb.model.AttributeValue> item = new HashMap<>();
            item.put(
                "Artist",
                software.amazon.awssdk.services.dynamodb.model.AttributeValue.builder()
                    .s(artist)
                    .build());
            item.put(
                "SongTitle",
                software.amazon.awssdk.services.dynamodb.model.AttributeValue.builder()
                    .s(songTitle)
                    .build());

            PutItemRequest putItemRequest =
                PutItemRequest.builder().tableName(tableName).item(item).build();

            PutItemResponse response = dynamoDbClient.putItem(putItemRequest);
            LOGGER.info("Test item added successfully");

            return response;

        } catch (DynamoDbException e) {
            LOGGER.severe("Failed to add test item to table: " + tableName + " - " + e.getMessage());
            throw e;
        }
    }
```
Describe global table configuration and replicas using AWS SDK for Java 2.x.  

```
    public static DescribeTableResponse describeTable(final DynamoDbClient dynamoDbClient, final String tableName) {

        if (dynamoDbClient == null) {
            throw new IllegalArgumentException("DynamoDB client cannot be null");
        }
        if (tableName == null || tableName.trim().isEmpty()) {
            throw new IllegalArgumentException("Table name cannot be null or empty");
        }

        try {
            LOGGER.info("Describing table: " + tableName);

            DescribeTableRequest request =
                DescribeTableRequest.builder().tableName(tableName).build();

            DescribeTableResponse response = dynamoDbClient.describeTable(request);

            LOGGER.info("Table status: " + response.table().tableStatus());
            if (response.table().replicas() != null
                && !response.table().replicas().isEmpty()) {
                LOGGER.info("Number of replicas: " + response.table().replicas().size());
                response.table()
                    .replicas()
                    .forEach(replica -> LOGGER.info(
                        "Replica region: " + replica.regionName() + ", Status: " + replica.replicaStatus()));
            }

            return response;

        } catch (ResourceNotFoundException e) {
            LOGGER.severe("Table not found: " + tableName + " - " + e.getMessage());
            throw e;
        } catch (DynamoDbException e) {
            LOGGER.severe("Failed to describe table: " + tableName + " - " + e.getMessage());
            throw e;
        }
    }
```
Complete example of global table operations using AWS SDK for Java 2.x.  

```
    public static void exampleUsage(final Region sourceRegion, final Region replicaRegion) {

        String tableName = "Music";
        String indexName = "SongTitleIndex";
        Long readCapacity = 15L;

        // Create DynamoDB client for the source region
        try (DynamoDbClient dynamoDbClient =
            DynamoDbClient.builder().region(sourceRegion).build()) {

            try {
                // Step 1: Create the initial table with GSI and streams
                LOGGER.info("Step 1: Creating table in source region: " + sourceRegion.id());
                createTableWithGSI(dynamoDbClient, tableName, indexName);

                // Step 2: Wait for table to become active
                LOGGER.info("Step 2: Waiting for table to become active");
                waitForTableActive(dynamoDbClient, tableName);

                // Step 3: Add replica in destination region
                LOGGER.info("Step 3: Adding replica in region: " + replicaRegion.id());
                addReplica(dynamoDbClient, tableName, replicaRegion, indexName, readCapacity);

                // Step 4: Wait a moment for replica creation to start
                Thread.sleep(5000);

                // Step 5: Describe table to view replica information
                LOGGER.info("Step 5: Describing table to view replicas");
                describeTable(dynamoDbClient, tableName);

                // Step 6: Add a test item to verify replication
                LOGGER.info("Step 6: Adding test item to verify replication");
                putTestItem(dynamoDbClient, tableName, "TestArtist", "TestSong");

                LOGGER.info("Global table setup completed successfully!");
                LOGGER.info("You can verify replication by checking the item in region: " + replicaRegion.id());

                // Step 7: Remove replica and clean up table
                LOGGER.info("Step 7: Removing replica from region: " + replicaRegion.id());
                removeReplica(dynamoDbClient, tableName, replicaRegion);
                DeleteTableResponse deleteTableResponse = dynamoDbClient.deleteTable(
                    DeleteTableRequest.builder().tableName(tableName).build());
                LOGGER.info("MREC global table demonstration completed successfully!");

            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new RuntimeException("Thread was interrupted", e);
            } catch (DynamoDbException e) {
                LOGGER.severe("DynamoDB operation failed: " + e.getMessage());
                throw e;
            }
        }
    }
```
+ For API details, see the following topics in *AWS SDK for Java 2.x API Reference*.
  + [CreateTable](https://docs.aws.amazon.com/goto/SdkForJavaV2/dynamodb-2012-08-10/CreateTable)
  + [DescribeTable](https://docs.aws.amazon.com/goto/SdkForJavaV2/dynamodb-2012-08-10/DescribeTable)
  + [PutItem](https://docs.aws.amazon.com/goto/SdkForJavaV2/dynamodb-2012-08-10/PutItem)
  + [UpdateTable](https://docs.aws.amazon.com/goto/SdkForJavaV2/dynamodb-2012-08-10/UpdateTable)

------

## Creating a global table configured for MRSC
<a name="create-gt-mrsc"></a>

This section shows you how to create a Multi-Region Strong Consistency (MRSC) global table. MRSC global tables synchronously replicate item changes across Regions, ensuring that strongly consistent read operations on any replica always return the latest version of an item. When converting a single-Region table to an MRSC global table, the table must be empty; converting a single-Region table that contains items is not supported. Ensure that no data is written to the table during the conversion process.

You can configure an MRSC global table with three replicas, or with two replicas and one witness. When creating an MRSC global table, you choose the Regions where the replicas and the optional witness are deployed. The following example creates an MRSC global table with replicas in the US East (N. Virginia) and US East (Ohio) Regions, with a witness in the US West (Oregon) Region.

**Note**  
Before creating a global table, verify that the Service Quota throughput limits are consistent across all target Regions, as this is required to create a global table. For more information about global table throughput limits, see [Global tables quotas](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ServiceQuotas.html#gt-limits-throughput).

### Creating an MRSC global table using the DynamoDB Console
<a name="mrsc_console"></a>

Follow these steps to create an MRSC global table using the AWS Management Console.

1. Sign in to the AWS Management Console and open the DynamoDB console at [https://console.aws.amazon.com/dynamodb/](https://console.aws.amazon.com/dynamodb/).

1. From the Region selector in the navigation bar, choose a Region where global tables with MRSC are [supported](V2globaltables_HowItWorks.md#V2globaltables_HowItWorks.consistency-modes), such as **us-east-2**.

1. In the navigation pane, choose **Tables**.

1. Choose **Create table**.

1. On the **Create table** page:

   1. For **Table name**, enter **Music**.

   1. For **Partition key**, enter **Artist** and keep the default **String** type.

   1. For **Sort key**, enter **SongTitle** and keep the default **String** type.

   1. Keep the other default settings and choose **Create table**.

      This new table serves as the first replica table in a new global table. It is the prototype for other replica tables that you add later.

1. Wait for the table to become active, then select it from the tables list.

1. Choose the **Global tables** tab, then choose **Create replica**.

1. On the **Create replica** page:

   1. Under **Multi-Region Consistency**, choose **Strong consistency**.

   1. For **Replication Region 1**, choose **US East (N. Virginia) us-east-1**.

   1. For **Replication Region 2**, choose **US West (Oregon) us-west-2**.

   1. Check **Configure as Witness** for the US West (Oregon) region.

   1. Choose **Create replicas**.

1. Wait for the replica and witness creation process to complete. The replica status will show as **Active** when the table is ready to use.

### Creating an MRSC global table using the AWS CLI or Java
<a name="mrsc-cli-java"></a>

Before you start, ensure that your IAM principal has the required permissions to create an MRSC global table with a witness Region.

The following sample IAM policy allows you to create a DynamoDB table (`MusicTable`) in US East (Ohio) with a replica in US East (N. Virginia) and a witness Region in US West (Oregon):

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:CreateTable",
                "dynamodb:CreateTableReplica",
                "dynamodb:CreateGlobalTableWitness",
                "dynamodb:DescribeTable",
                "dynamodb:UpdateTable",
                "dynamodb:DeleteTable",
                "dynamodb:DeleteTableReplica",
                "dynamodb:DeleteGlobalTableWitness",
                "dynamodb:Scan",
                "dynamodb:Query",
                "dynamodb:UpdateItem",
                "dynamodb:PutItem",
                "dynamodb:GetItem",
                "dynamodb:DeleteItem",
                "dynamodb:BatchWriteItem"
            ],
            "Resource": [
                "arn:aws:dynamodb:us-east-1:123456789012:table/MusicTable",
                "arn:aws:dynamodb:us-east-2:123456789012:table/MusicTable",
                "arn:aws:dynamodb:us-west-2:123456789012:table/MusicTable"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "iam:CreateServiceLinkedRole",
            "Resource": "arn:aws:iam::*:role/aws-service-role/replication.dynamodb.amazonaws.com/AWSServiceRoleForDynamoDBReplication",
            "Condition": {
                "StringLike": {
                    "iam:AWSServiceName": "replication.dynamodb.amazonaws.com"
                }
            }
        }
    ]
}
```

------
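To use this policy, you can attach it as an inline policy to the IAM identity that runs the walkthrough. The following sketch assumes you saved the JSON above as `mrsc-policy.json`; the user name and policy name are placeholders.

```shell
# Attach the policy above to an IAM user (names are illustrative).
# The policy document must be saved locally as mrsc-policy.json first.
aws iam put-user-policy \
    --user-name mrsc-walkthrough-user \
    --policy-name MRSCGlobalTablePolicy \
    --policy-document file://mrsc-policy.json
```
If you use an IAM role instead of a user, attach the same document with `aws iam put-role-policy`.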

The following code examples show how to create and manage DynamoDB global tables with Multi-Region Strong Consistency (MRSC).
+ Create a table with Multi-Region Strong Consistency.
+ Verify MRSC configuration and replica status.
+ Test strong consistency across Regions with immediate reads.
+ Perform conditional writes with MRSC guarantees.
+ Clean up MRSC global table resources.

------
#### [ Bash ]

**AWS CLI with Bash script**  
Create a table with Multi-Region Strong Consistency.  

```
# Step 1: Create a new table in us-east-2 (primary region for MRSC)
# Note: Table must be empty when enabling MRSC
aws dynamodb create-table \
    --table-name MusicTable \
    --attribute-definitions \
        AttributeName=Artist,AttributeType=S \
        AttributeName=SongTitle,AttributeType=S \
    --key-schema \
        AttributeName=Artist,KeyType=HASH \
        AttributeName=SongTitle,KeyType=RANGE \
    --billing-mode PAY_PER_REQUEST \
    --region us-east-2

# Wait for table to become active
aws dynamodb wait table-exists --table-name MusicTable --region us-east-2

# Step 2: Add replica and witness with Multi-Region Strong Consistency
# MRSC requires three Regions: either three replicas, or two replicas plus one witness
aws dynamodb update-table \
    --table-name MusicTable \
    --replica-updates '[{"Create": {"RegionName": "us-east-1"}}]' \
    --global-table-witness-updates '[{"Create": {"RegionName": "us-west-2"}}]' \
    --multi-region-consistency STRONG \
    --region us-east-2
```
Verify MRSC configuration and replica status.  

```
# Verify the global table configuration and MRSC setting
aws dynamodb describe-table \
    --table-name MusicTable \
    --region us-east-2 \
    --query 'Table.{TableName:TableName,TableStatus:TableStatus,MultiRegionConsistency:MultiRegionConsistency,Replicas:Replicas[*],GlobalTableWitnesses:GlobalTableWitnesses[*].{Region:RegionName,Status:ReplicaStatus}}'
```
Test strong consistency with immediate reads across Regions.  

```
# Write an item to the primary region
aws dynamodb put-item \
    --table-name MusicTable \
    --item '{"Artist": {"S":"The Beatles"},"SongTitle": {"S":"Hey Jude"},"Album": {"S":"The Beatles 1967-1970"},"Year": {"N":"1968"}}' \
    --region us-east-2

# Read the item from replica region to verify strong consistency (cannot read or write to witness)
# No wait time needed - MRSC provides immediate consistency
echo "Reading from us-east-1 (immediate consistency):"
aws dynamodb get-item \
    --table-name MusicTable \
    --key '{"Artist": {"S":"The Beatles"},"SongTitle": {"S":"Hey Jude"}}' \
    --consistent-read \
    --region us-east-1
```
Perform conditional writes with MRSC guarantees.  

```
# Perform a conditional update from a different region
# This demonstrates that conditions work consistently across all regions
aws dynamodb update-item \
    --table-name MusicTable \
    --key '{"Artist": {"S":"The Beatles"},"SongTitle": {"S":"Hey Jude"}}' \
    --update-expression "SET #rating = :rating" \
    --condition-expression "attribute_exists(Artist)" \
    --expression-attribute-names '{"#rating": "Rating"}' \
    --expression-attribute-values '{":rating": {"N":"5"}}' \
    --region us-east-1
```
Clean up MRSC global table resources.  

```
# Remove replica tables (must be done before deleting the primary table)
aws dynamodb update-table \
    --table-name MusicTable \
    --replica-updates '[{"Delete": {"RegionName": "us-east-1"}}]' \
    --global-table-witness-updates '[{"Delete": {"RegionName": "us-west-2"}}]' \
    --region us-east-2

# Wait for replicas to be deleted
echo "Waiting for replicas to be deleted..."
sleep 30

# Delete the primary table
aws dynamodb delete-table \
    --table-name MusicTable \
    --region us-east-2
```
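A fixed `sleep` is enough for a demo, but replica deletion time varies. As an alternative sketch, you can poll `describe-table` until the replica no longer appears before deleting the primary table (table and Region names match the examples above):

```shell
# Poll until the us-east-1 replica disappears from the table description.
# Once the count reaches zero, it is safe to delete the primary table.
while true; do
    replicas=$(aws dynamodb describe-table \
        --table-name MusicTable \
        --region us-east-2 \
        --query 'length(Table.Replicas[?RegionName==`us-east-1`])' \
        --output text 2>/dev/null) || break
    if [ "$replicas" = "0" ] || [ "$replicas" = "None" ]; then
        echo "Replica removed"
        break
    fi
    echo "Replica still deleting (count: $replicas)..."
    sleep 10
done
```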
+ For API details, see the following topics in *AWS CLI Command Reference*.
  + [CreateTable](https://docs.aws.amazon.com/goto/aws-cli/dynamodb-2012-08-10/CreateTable)
  + [DeleteTable](https://docs.aws.amazon.com/goto/aws-cli/dynamodb-2012-08-10/DeleteTable)
  + [DescribeTable](https://docs.aws.amazon.com/goto/aws-cli/dynamodb-2012-08-10/DescribeTable)
  + [GetItem](https://docs.aws.amazon.com/goto/aws-cli/dynamodb-2012-08-10/GetItem)
  + [PutItem](https://docs.aws.amazon.com/goto/aws-cli/dynamodb-2012-08-10/PutItem)
  + [UpdateItem](https://docs.aws.amazon.com/goto/aws-cli/dynamodb-2012-08-10/UpdateItem)
  + [UpdateTable](https://docs.aws.amazon.com/goto/aws-cli/dynamodb-2012-08-10/UpdateTable)

------
#### [ Java ]

**SDK for Java 2.x**  
Create a regional table ready for MRSC conversion using AWS SDK for Java 2.x.  

```
    public static CreateTableResponse createRegionalTable(final DynamoDbClient dynamoDbClient, final String tableName) {

        if (dynamoDbClient == null) {
            throw new IllegalArgumentException("DynamoDB client cannot be null");
        }
        if (tableName == null || tableName.trim().isEmpty()) {
            throw new IllegalArgumentException("Table name cannot be null or empty");
        }

        try {
            LOGGER.info("Creating regional table: " + tableName + " (must be empty for MRSC)");

            CreateTableRequest createTableRequest = CreateTableRequest.builder()
                .tableName(tableName)
                .attributeDefinitions(
                    AttributeDefinition.builder()
                        .attributeName("Artist")
                        .attributeType(ScalarAttributeType.S)
                        .build(),
                    AttributeDefinition.builder()
                        .attributeName("SongTitle")
                        .attributeType(ScalarAttributeType.S)
                        .build())
                .keySchema(
                    KeySchemaElement.builder()
                        .attributeName("Artist")
                        .keyType(KeyType.HASH)
                        .build(),
                    KeySchemaElement.builder()
                        .attributeName("SongTitle")
                        .keyType(KeyType.RANGE)
                        .build())
                .billingMode(BillingMode.PAY_PER_REQUEST)
                .build();

            CreateTableResponse response = dynamoDbClient.createTable(createTableRequest);
            LOGGER.info("Regional table creation initiated. Status: "
                + response.tableDescription().tableStatus());

            return response;

        } catch (DynamoDbException e) {
            LOGGER.severe("Failed to create regional table: " + tableName + " - " + e.getMessage());
            throw DynamoDbException.builder()
                .message("Failed to create regional table: " + tableName)
                .cause(e)
                .build();
        }
    }
```
Convert a regional table to MRSC with replicas and witness using AWS SDK for Java 2.x.  

```
    public static UpdateTableResponse convertToMRSCWithWitness(
        final DynamoDbClient dynamoDbClient,
        final String tableName,
        final Region replicaRegion,
        final Region witnessRegion) {

        if (dynamoDbClient == null) {
            throw new IllegalArgumentException("DynamoDB client cannot be null");
        }
        if (tableName == null || tableName.trim().isEmpty()) {
            throw new IllegalArgumentException("Table name cannot be null or empty");
        }
        if (replicaRegion == null) {
            throw new IllegalArgumentException("Replica region cannot be null");
        }
        if (witnessRegion == null) {
            throw new IllegalArgumentException("Witness region cannot be null");
        }

        try {
            LOGGER.info("Converting table to MRSC with replica in " + replicaRegion.id() + " and witness in "
                + witnessRegion.id());

            // Create replica update using ReplicationGroupUpdate
            ReplicationGroupUpdate replicaUpdate = ReplicationGroupUpdate.builder()
                .create(CreateReplicationGroupMemberAction.builder()
                    .regionName(replicaRegion.id())
                    .build())
                .build();

            // Create witness update
            GlobalTableWitnessGroupUpdate witnessUpdate = GlobalTableWitnessGroupUpdate.builder()
                .create(CreateGlobalTableWitnessGroupMemberAction.builder()
                    .regionName(witnessRegion.id())
                    .build())
                .build();

            UpdateTableRequest updateTableRequest = UpdateTableRequest.builder()
                .tableName(tableName)
                .replicaUpdates(List.of(replicaUpdate))
                .globalTableWitnessUpdates(List.of(witnessUpdate))
                .multiRegionConsistency(MultiRegionConsistency.STRONG)
                .build();

            UpdateTableResponse response = dynamoDbClient.updateTable(updateTableRequest);
            LOGGER.info("MRSC conversion initiated. Status: "
                + response.tableDescription().tableStatus());
            LOGGER.info("UpdateTableResponse full object: " + response);
            return response;

        } catch (DynamoDbException e) {
            LOGGER.severe("Failed to convert table to MRSC: " + tableName + " - " + e.getMessage());
            throw DynamoDbException.builder()
                .message("Failed to convert table to MRSC: " + tableName)
                .cause(e)
                .build();
        }
    }
```
Describe an MRSC global table configuration using AWS SDK for Java 2.x.  

```
    public static DescribeTableResponse describeMRSCTable(final DynamoDbClient dynamoDbClient, final String tableName) {

        if (dynamoDbClient == null) {
            throw new IllegalArgumentException("DynamoDB client cannot be null");
        }
        if (tableName == null || tableName.trim().isEmpty()) {
            throw new IllegalArgumentException("Table name cannot be null or empty");
        }

        try {
            LOGGER.info("Describing MRSC global table: " + tableName);

            DescribeTableRequest request =
                DescribeTableRequest.builder().tableName(tableName).build();

            DescribeTableResponse response = dynamoDbClient.describeTable(request);

            LOGGER.info("Table status: " + response.table().tableStatus());
            LOGGER.info("Multi-region consistency: " + response.table().multiRegionConsistency());

            if (response.table().replicas() != null
                && !response.table().replicas().isEmpty()) {
                LOGGER.info("Number of replicas: " + response.table().replicas().size());
                response.table()
                    .replicas()
                    .forEach(replica -> LOGGER.info(
                        "Replica region: " + replica.regionName() + ", Status: " + replica.replicaStatus()));
            }

            if (response.table().globalTableWitnesses() != null
                && !response.table().globalTableWitnesses().isEmpty()) {
                LOGGER.info("Number of witnesses: "
                    + response.table().globalTableWitnesses().size());
                response.table()
                    .globalTableWitnesses()
                    .forEach(witness -> LOGGER.info(
                        "Witness region: " + witness.regionName() + ", Status: " + witness.witnessStatus()));
            }

            return response;

        } catch (ResourceNotFoundException e) {
            LOGGER.severe("Table not found: " + tableName + " - " + e.getMessage());
            throw DynamoDbException.builder()
                .message("Table not found: " + tableName)
                .cause(e)
                .build();
        } catch (DynamoDbException e) {
            LOGGER.severe("Failed to describe table: " + tableName + " - " + e.getMessage());
            throw DynamoDbException.builder()
                .message("Failed to describe table: " + tableName)
                .cause(e)
                .build();
        }
    }
```
Add test items to verify MRSC strong consistency using AWS SDK for Java 2.x.  

```
    public static PutItemResponse putTestItem(
        final DynamoDbClient dynamoDbClient,
        final String tableName,
        final String artist,
        final String songTitle,
        final String album,
        final String year) {

        if (dynamoDbClient == null) {
            throw new IllegalArgumentException("DynamoDB client cannot be null");
        }
        if (tableName == null || tableName.trim().isEmpty()) {
            throw new IllegalArgumentException("Table name cannot be null or empty");
        }
        if (artist == null || artist.trim().isEmpty()) {
            throw new IllegalArgumentException("Artist cannot be null or empty");
        }
        if (songTitle == null || songTitle.trim().isEmpty()) {
            throw new IllegalArgumentException("Song title cannot be null or empty");
        }

        try {
            LOGGER.info("Adding test item to MRSC global table: " + tableName);

            Map<String, AttributeValue> item = new HashMap<>();
            item.put("Artist", AttributeValue.builder().s(artist).build());
            item.put("SongTitle", AttributeValue.builder().s(songTitle).build());

            if (album != null && !album.trim().isEmpty()) {
                item.put("Album", AttributeValue.builder().s(album).build());
            }
            if (year != null && !year.trim().isEmpty()) {
                item.put("Year", AttributeValue.builder().n(year).build());
            }

            PutItemRequest putItemRequest =
                PutItemRequest.builder().tableName(tableName).item(item).build();

            PutItemResponse response = dynamoDbClient.putItem(putItemRequest);
            LOGGER.info("Test item added successfully with strong consistency");

            return response;

        } catch (DynamoDbException e) {
            LOGGER.severe("Failed to add test item to table: " + tableName + " - " + e.getMessage());
            throw DynamoDbException.builder()
                .message("Failed to add test item to table: " + tableName)
                .cause(e)
                .build();
        }
    }
```
Read items with consistent reads from MRSC replicas using AWS SDK for Java 2.x.  

```
    public static GetItemResponse getItemWithConsistentRead(
        final DynamoDbClient dynamoDbClient, final String tableName, final String artist, final String songTitle) {

        if (dynamoDbClient == null) {
            throw new IllegalArgumentException("DynamoDB client cannot be null");
        }
        if (tableName == null || tableName.trim().isEmpty()) {
            throw new IllegalArgumentException("Table name cannot be null or empty");
        }
        if (artist == null || artist.trim().isEmpty()) {
            throw new IllegalArgumentException("Artist cannot be null or empty");
        }
        if (songTitle == null || songTitle.trim().isEmpty()) {
            throw new IllegalArgumentException("Song title cannot be null or empty");
        }

        try {
            LOGGER.info("Reading item from MRSC global table with consistent read: " + tableName);

            Map<String, AttributeValue> key = new HashMap<>();
            key.put("Artist", AttributeValue.builder().s(artist).build());
            key.put("SongTitle", AttributeValue.builder().s(songTitle).build());

            GetItemRequest getItemRequest = GetItemRequest.builder()
                .tableName(tableName)
                .key(key)
                .consistentRead(true)
                .build();

            GetItemResponse response = dynamoDbClient.getItem(getItemRequest);

            if (response.hasItem()) {
                LOGGER.info("Item found with strong consistency - no wait time needed");
            } else {
                LOGGER.info("Item not found");
            }

            return response;

        } catch (DynamoDbException e) {
            LOGGER.severe("Failed to read item from table: " + tableName + " - " + e.getMessage());
            throw DynamoDbException.builder()
                .message("Failed to read item from table: " + tableName)
                .cause(e)
                .build();
        }
    }
```
Perform conditional updates with MRSC guarantees using AWS SDK for Java 2.x.  

```
    public static UpdateItemResponse performConditionalUpdate(
        final DynamoDbClient dynamoDbClient,
        final String tableName,
        final String artist,
        final String songTitle,
        final String rating) {

        if (dynamoDbClient == null) {
            throw new IllegalArgumentException("DynamoDB client cannot be null");
        }
        if (tableName == null || tableName.trim().isEmpty()) {
            throw new IllegalArgumentException("Table name cannot be null or empty");
        }
        if (artist == null || artist.trim().isEmpty()) {
            throw new IllegalArgumentException("Artist cannot be null or empty");
        }
        if (songTitle == null || songTitle.trim().isEmpty()) {
            throw new IllegalArgumentException("Song title cannot be null or empty");
        }
        if (rating == null || rating.trim().isEmpty()) {
            throw new IllegalArgumentException("Rating cannot be null or empty");
        }

        try {
            LOGGER.info("Performing conditional update on MRSC global table: " + tableName);

            Map<String, AttributeValue> key = new HashMap<>();
            key.put("Artist", AttributeValue.builder().s(artist).build());
            key.put("SongTitle", AttributeValue.builder().s(songTitle).build());

            Map<String, String> expressionAttributeNames = new HashMap<>();
            expressionAttributeNames.put("#rating", "Rating");

            Map<String, AttributeValue> expressionAttributeValues = new HashMap<>();
            expressionAttributeValues.put(
                ":rating", AttributeValue.builder().n(rating).build());

            UpdateItemRequest updateItemRequest = UpdateItemRequest.builder()
                .tableName(tableName)
                .key(key)
                .updateExpression("SET #rating = :rating")
                .conditionExpression("attribute_exists(Artist)")
                .expressionAttributeNames(expressionAttributeNames)
                .expressionAttributeValues(expressionAttributeValues)
                .build();

            UpdateItemResponse response = dynamoDbClient.updateItem(updateItemRequest);
            LOGGER.info("Conditional update successful - demonstrates strong consistency");

            return response;

        } catch (ConditionalCheckFailedException e) {
            LOGGER.warning("Conditional check failed: " + e.getMessage());
            throw e;
        } catch (DynamoDbException e) {
            LOGGER.severe("Failed to perform conditional update: " + tableName + " - " + e.getMessage());
            throw DynamoDbException.builder()
                .message("Failed to perform conditional update: " + tableName)
                .cause(e)
                .build();
        }
    }
```
Wait for MRSC replicas and witnesses to become active using AWS SDK for Java 2.x.  

```
    public static void waitForMRSCReplicasActive(
        final DynamoDbClient dynamoDbClient, final String tableName, final int maxWaitTimeSeconds)
        throws InterruptedException {

        if (dynamoDbClient == null) {
            throw new IllegalArgumentException("DynamoDB client cannot be null");
        }
        if (tableName == null || tableName.trim().isEmpty()) {
            throw new IllegalArgumentException("Table name cannot be null or empty");
        }
        if (maxWaitTimeSeconds <= 0) {
            throw new IllegalArgumentException("Max wait time must be positive");
        }

        try {
            LOGGER.info("Waiting for MRSC replicas and witnesses to become active: " + tableName);

            final long startTime = System.currentTimeMillis();
            final long maxWaitTimeMillis = maxWaitTimeSeconds * 1000L;
            int backoffSeconds = 5; // Start with 5 second intervals
            final int maxBackoffSeconds = 30; // Cap at 30 seconds

            while (System.currentTimeMillis() - startTime < maxWaitTimeMillis) {
                DescribeTableResponse response = describeMRSCTable(dynamoDbClient, tableName);

                boolean allActive = true;
                StringBuilder statusReport = new StringBuilder();

                if (response.table().multiRegionConsistency() == null
                    || !MultiRegionConsistency.STRONG
                        .toString()
                        .equals(response.table().multiRegionConsistency().toString())) {
                    allActive = false;
                    statusReport
                        .append("MultiRegionConsistency: ")
                        .append(response.table().multiRegionConsistency())
                        .append(" ");
                }
                if (response.table().replicas() == null
                    || response.table().replicas().isEmpty()) {
                    allActive = false;
                    statusReport.append("No replicas found. ");
                }
                if (response.table().globalTableWitnesses() == null
                    || response.table().globalTableWitnesses().isEmpty()) {
                    allActive = false;
                    statusReport.append("No witnesses found. ");
                }

                // Check table status
                if (!"ACTIVE".equals(response.table().tableStatus().toString())) {
                    allActive = false;
                    statusReport
                        .append("Table: ")
                        .append(response.table().tableStatus())
                        .append(" ");
                }

                // Check replica status
                if (response.table().replicas() != null) {
                    for (var replica : response.table().replicas()) {
                        if (!"ACTIVE".equals(replica.replicaStatus().toString())) {
                            allActive = false;
                            statusReport
                                .append("Replica(")
                                .append(replica.regionName())
                                .append("): ")
                                .append(replica.replicaStatus())
                                .append(" ");
                        }
                    }
                }

                // Check witness status
                if (response.table().globalTableWitnesses() != null) {
                    for (var witness : response.table().globalTableWitnesses()) {
                        if (!"ACTIVE".equals(witness.witnessStatus().toString())) {
                            allActive = false;
                            statusReport
                                .append("Witness(")
                                .append(witness.regionName())
                                .append("): ")
                                .append(witness.witnessStatus())
                                .append(" ");
                        }
                    }
                }

                if (allActive) {
                    LOGGER.info("All MRSC replicas and witnesses are now active: " + tableName);
                    return;
                }

                LOGGER.info("Waiting for MRSC components to become active. Status: " + statusReport.toString());
                LOGGER.info("Next check in " + backoffSeconds + " seconds...");

                tempWait(backoffSeconds);

                // Exponential backoff with cap
                backoffSeconds = Math.min(backoffSeconds * 2, maxBackoffSeconds);
            }

            throw DynamoDbException.builder()
                .message("Timeout waiting for MRSC replicas to become active after " + maxWaitTimeSeconds + " seconds")
                .build();

        } catch (DynamoDbException | InterruptedException e) {
            LOGGER.severe("Failed to wait for MRSC replicas to become active: " + tableName + " - " + e.getMessage());
            throw e;
        }
    }
```
Clean up MRSC replicas and witnesses using AWS SDK for Java 2.x.  

```
    public static UpdateTableResponse cleanupMRSCReplicas(
        final DynamoDbClient dynamoDbClient,
        final String tableName,
        final Region replicaRegion,
        final Region witnessRegion) {

        if (dynamoDbClient == null) {
            throw new IllegalArgumentException("DynamoDB client cannot be null");
        }
        if (tableName == null || tableName.trim().isEmpty()) {
            throw new IllegalArgumentException("Table name cannot be null or empty");
        }
        if (replicaRegion == null) {
            throw new IllegalArgumentException("Replica region cannot be null");
        }
        if (witnessRegion == null) {
            throw new IllegalArgumentException("Witness region cannot be null");
        }

        try {
            LOGGER.info("Cleaning up MRSC replicas and witnesses for table: " + tableName);

            // Remove replica using ReplicationGroupUpdate
            ReplicationGroupUpdate replicaUpdate = ReplicationGroupUpdate.builder()
                .delete(DeleteReplicationGroupMemberAction.builder()
                    .regionName(replicaRegion.id())
                    .build())
                .build();

            // Remove witness
            GlobalTableWitnessGroupUpdate witnessUpdate = GlobalTableWitnessGroupUpdate.builder()
                .delete(DeleteGlobalTableWitnessGroupMemberAction.builder()
                    .regionName(witnessRegion.id())
                    .build())
                .build();

            UpdateTableRequest updateTableRequest = UpdateTableRequest.builder()
                .tableName(tableName)
                .replicaUpdates(List.of(replicaUpdate))
                .globalTableWitnessUpdates(List.of(witnessUpdate))
                .build();

            UpdateTableResponse response = dynamoDbClient.updateTable(updateTableRequest);
            LOGGER.info("MRSC cleanup initiated - removing replica and witness. Response: " + response);

            return response;

        } catch (DynamoDbException e) {
            LOGGER.severe("Failed to cleanup MRSC replicas: " + tableName + " - " + e.getMessage());
            throw DynamoDbException.builder()
                .message("Failed to cleanup MRSC replicas: " + tableName)
                .cause(e)
                .build();
        }
    }
```
Complete MRSC workflow demonstration using AWS SDK for Java 2.x.  

```
    public static void demonstrateCompleteMRSCWorkflow(
        final DynamoDbClient primaryClient,
        final DynamoDbClient replicaClient,
        final String tableName,
        final Region replicaRegion,
        final Region witnessRegion)
        throws InterruptedException {

        if (primaryClient == null) {
            throw new IllegalArgumentException("Primary DynamoDB client cannot be null");
        }
        if (replicaClient == null) {
            throw new IllegalArgumentException("Replica DynamoDB client cannot be null");
        }
        if (tableName == null || tableName.trim().isEmpty()) {
            throw new IllegalArgumentException("Table name cannot be null or empty");
        }
        if (replicaRegion == null) {
            throw new IllegalArgumentException("Replica region cannot be null");
        }
        if (witnessRegion == null) {
            throw new IllegalArgumentException("Witness region cannot be null");
        }

        try {
            LOGGER.info("=== Starting Complete MRSC Workflow Demonstration ===");

            // Step 1: Create an empty single-Region table
            LOGGER.info("Step 1: Creating empty single-Region table");
            createRegionalTable(primaryClient, tableName);

            // Use the existing GlobalTableOperations method for basic table waiting
            LOGGER.info("Intermediate step: Waiting for table [" + tableName + "] to become active before continuing");
            GlobalTableOperations.waitForTableActive(primaryClient, tableName);

            // Step 2: Convert to MRSC with replica and witness
            LOGGER.info("Step 2: Converting to MRSC with replica and witness");
            convertToMRSCWithWitness(primaryClient, tableName, replicaRegion, witnessRegion);

            // Wait for MRSC conversion to complete using the MRSC-specific waiter.
            // The 900-second timeout is illustrative; tune it for your tables.
            LOGGER.info("Waiting for MRSC conversion to complete...");
            waitForMRSCReplicasActive(primaryClient, tableName, 900);

            LOGGER.info("Intermediate step: Waiting for table [" + tableName + "] to become active before continuing");
            GlobalTableOperations.waitForTableActive(primaryClient, tableName);

            // Step 3: Verify MRSC configuration
            LOGGER.info("Step 3: Verifying MRSC configuration");
            describeMRSCTable(primaryClient, tableName);

            // Step 4: Test strong consistency with data operations
            LOGGER.info("Step 4: Testing strong consistency with data operations");

            // Add test item to primary region
            putTestItem(primaryClient, tableName, "The Beatles", "Hey Jude", "The Beatles 1967-1970", "1968");

            // Immediately read from replica region (no wait needed with MRSC)
            LOGGER.info("Reading from replica region immediately (strong consistency):");
            GetItemResponse getResponse =
                getItemWithConsistentRead(replicaClient, tableName, "The Beatles", "Hey Jude");

            if (getResponse.hasItem()) {
                LOGGER.info("✓ Strong consistency verified - item immediately available in replica region");
            } else {
                LOGGER.warning("✗ Item not found in replica region");
            }

            // Test conditional update from replica region
            LOGGER.info("Testing conditional update from replica region:");
            performConditionalUpdate(replicaClient, tableName, "The Beatles", "Hey Jude", "5");
            LOGGER.info("✓ Conditional update successful - demonstrates strong consistency");

            // Step 5: Cleanup
            LOGGER.info("Step 5: Cleaning up resources");
            cleanupMRSCReplicas(primaryClient, tableName, replicaRegion, witnessRegion);

            // Wait for cleanup to complete using basic table waiter
            LOGGER.info("Waiting for replica cleanup to complete...");
            GlobalTableOperations.waitForTableActive(primaryClient, tableName);

            // Poll until the replica/witness removal completes and the table no longer reports MRSC
            DescribeTableResponse cleanupVerification = describeMRSCTable(primaryClient, tableName);
            int backoffSeconds = 5; // Start with 5 second intervals
            while (cleanupVerification.table().multiRegionConsistency() != null) {
                LOGGER.info("Waiting additional time (" + backoffSeconds + " seconds) for MRSC cleanup to complete...");
                tempWait(backoffSeconds);

                // Exponential backoff with cap
                backoffSeconds = Math.min(backoffSeconds * 2, 30);
                cleanupVerification = describeMRSCTable(primaryClient, tableName);
            }

            // Delete the primary table
            deleteTable(primaryClient, tableName);

            LOGGER.info("=== MRSC Workflow Demonstration Complete ===");
            LOGGER.info("");
            LOGGER.info("Key benefits of Multi-Region Strong Consistency (MRSC):");
            LOGGER.info("- Immediate consistency across all regions (no eventual consistency delays)");
            LOGGER.info("- Simplified application logic (no need to handle eventual consistency)");
            LOGGER.info("- Support for conditional writes and transactions across regions");
            LOGGER.info("- Consistent read operations from any region without waiting");

        } catch (DynamoDbException | InterruptedException e) {
            LOGGER.severe("MRSC workflow failed: " + e.getMessage());
            throw e;
        }
    }
```
+ For API details, see the following topics in the *AWS SDK for Java 2.x API Reference*.
  + [CreateTable](https://docs.aws.amazon.com/goto/SdkForJavaV2/dynamodb-2012-08-10/CreateTable)
  + [DeleteTable](https://docs.aws.amazon.com/goto/SdkForJavaV2/dynamodb-2012-08-10/DeleteTable)
  + [DescribeTable](https://docs.aws.amazon.com/goto/SdkForJavaV2/dynamodb-2012-08-10/DescribeTable)
  + [GetItem](https://docs.aws.amazon.com/goto/SdkForJavaV2/dynamodb-2012-08-10/GetItem)
  + [PutItem](https://docs.aws.amazon.com/goto/SdkForJavaV2/dynamodb-2012-08-10/PutItem)
  + [UpdateItem](https://docs.aws.amazon.com/goto/SdkForJavaV2/dynamodb-2012-08-10/UpdateItem)
  + [UpdateTable](https://docs.aws.amazon.com/goto/SdkForJavaV2/dynamodb-2012-08-10/UpdateTable)

------

# DynamoDB global tables security
<a name="globaltables-security"></a>

Global table replicas are DynamoDB tables, so you use the same methods for controlling access to replicas that you do for single-Region tables, including AWS Identity and Access Management (IAM) identity-based policies and resource-based policies.

This topic covers how to secure DynamoDB global tables using IAM permissions and AWS Key Management Service (AWS KMS) encryption. You learn about the service-linked roles (SLRs) that allow cross-Region replication and auto scaling, the IAM permissions needed to create, update, and delete global tables, and the differences between multi-Region eventual consistency (MREC) and multi-Region strong consistency (MRSC) tables. You also learn how to use AWS KMS encryption keys to manage cross-Region replication securely.

## Service-linked roles for global tables
<a name="globaltables-slr"></a>

DynamoDB global tables rely on service-linked roles (SLRs) to manage cross-Region replication and auto-scaling capabilities.

You only need to set up these roles once per AWS account. Once created, the same roles serve all global tables in your account. For more information about service-linked roles, see [Using service-linked roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html) in the *IAM User Guide*.

### Replication service-linked role
<a name="globaltables-replication-slr"></a>

Amazon DynamoDB automatically creates the `AWSServiceRoleForDynamoDBReplication` service-linked role (SLR) when you create your first global table. This role manages cross-Region replication for you.

When applying resource-based policies to replicas, ensure that you don't deny any of the permissions defined in the `AWSServiceRoleForDynamoDBReplicationPolicy` to the SLR principal, as this will interrupt replication. If you deny required SLR permissions, replication to and from affected replicas will stop, and the replica table status will change to `REPLICATION_NOT_AUTHORIZED`.
+ For multi-Region eventual consistency (MREC) global tables, if a replica remains in the `REPLICATION_NOT_AUTHORIZED` state for more than 20 hours, the replica is irreversibly converted to a single-Region DynamoDB table.
+ For multi-Region strong consistency (MRSC) global tables, denying required permissions results in `AccessDeniedException` for write and strongly consistent read operations. If a replica remains in the `REPLICATION_NOT_AUTHORIZED` state for more than seven days, the replica becomes permanently inaccessible, and write and strongly consistent read operations will continue to fail with an error. Some management operations like replica deletion will succeed.

### Auto scaling service-linked role
<a name="globaltables-autoscaling-slr"></a>

When you configure a global table for provisioned capacity mode, you must also configure auto scaling for the global table. DynamoDB auto scaling uses the AWS Application Auto Scaling service to dynamically adjust provisioned throughput capacity on your global table replicas. Application Auto Scaling creates a service-linked role (SLR) named `AWSServiceRoleForApplicationAutoScaling_DynamoDBTable`. For details, see [https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-service-linked-roles.html](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-service-linked-roles.html). This service-linked role is automatically created in your AWS account when you first configure auto scaling for a DynamoDB table. It allows Application Auto Scaling to manage provisioned table capacity and create CloudWatch alarms.

When applying resource-based policies to replicas, ensure that you don't deny any permissions defined in the [https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSApplicationAutoscalingDynamoDBTablePolicy.html](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSApplicationAutoscalingDynamoDBTablePolicy.html) to the Application Auto Scaling SLR principal, as this will interrupt auto scaling functionality.

### Example IAM policies for service-linked roles
<a name="globaltables-example-slr"></a>

A policy with the following condition does not block the required permissions of the DynamoDB replication SLR or the Application Auto Scaling SLR. You can add this condition to otherwise broadly restrictive policies to avoid unintentionally interrupting replication or auto scaling.

#### Excluding required SLR permissions from deny policies
<a name="example-exclude-slr-policy"></a>

The following example shows how to exclude service-linked role principals from deny statements:

```
"Condition": {
    "StringNotEquals": {
        "aws:PrincipalArn": [
            "arn:aws::iam::111122223333:role/aws-service-role/replication.dynamodb.amazonaws.com/AWSServiceRoleForDynamoDBReplication",
            "arn:aws::iam::111122223333:role/aws-service-role/dynamodb.application-autoscaling.amazonaws.com/AWSServiceRoleForApplicationAutoScaling_DynamoDBTable"
        ]
    }
}
```
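For context, the condition shown above would typically be embedded in a broader `Deny` statement. The following resource-based policy sketch is illustrative only (the account ID, table name, and action scope are assumptions, not taken from this guide); it denies `PutItem` to every principal except the two service-linked roles:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyWritesExceptServiceLinkedRoles",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "dynamodb:PutItem",
      "Resource": "arn:aws:dynamodb:*:111122223333:table/users",
      "Condition": {
        "StringNotEquals": {
          "aws:PrincipalArn": [
            "arn:aws:iam::111122223333:role/aws-service-role/replication.dynamodb.amazonaws.com/AWSServiceRoleForDynamoDBReplication",
            "arn:aws:iam::111122223333:role/aws-service-role/dynamodb.application-autoscaling.amazonaws.com/AWSServiceRoleForApplicationAutoScaling_DynamoDBTable"
          ]
        }
      }
    }
  ]
}
```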

## How global tables use AWS IAM
<a name="globaltables-iam"></a>

The following sections describe the required permissions for different global table operations and provide policy examples to help you configure the appropriate access for your users and applications.

**Note**  
All permissions described must be applied to the specific table resource ARN in the affected Region(s). The table resource ARN follows the format `arn:aws:dynamodb:region:account-id:table/table-name`, where you need to specify your actual Region, account ID, and table name values.
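As an illustration, the ARN format described above can be produced with a small helper. The `TableArn` class below is hypothetical and shown only to demonstrate the format; it is not part of the AWS SDK.

```
// Hypothetical helper illustrating the DynamoDB table ARN format:
// arn:aws:dynamodb:region:account-id:table/table-name
public class TableArn {
    public static String format(String region, String accountId, String tableName) {
        return String.format("arn:aws:dynamodb:%s:%s:table/%s", region, accountId, tableName);
    }

    public static void main(String[] args) {
        // Prints: arn:aws:dynamodb:us-east-1:123456789012:table/users
        System.out.println(format("us-east-1", "123456789012", "users"));
    }
}
```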

**Topics**
+ [Creating global tables and adding replicas](#globaltables-creation-iam)
+ [Updating global tables](#globaltables-update-iam)
+ [Deleting global tables and removing replicas](#globaltables-delete-iam)

### Creating global tables and adding replicas
<a name="globaltables-creation-iam"></a>

DynamoDB global tables support two consistency modes: multi-Region eventual consistency (MREC) and multi-Region strong consistency (MRSC). MREC global tables can have multiple replicas across any number of Regions and provide eventual consistency. MRSC global tables require exactly three Regions (three replicas or two replicas and one witness) and provide strong consistency with zero recovery point objective (RPO).

The permissions required to create global tables depend on whether you're creating a global table with or without a witness.

#### Permissions for creating global tables
<a name="globaltables-creation-iam-all-types"></a>

The following permissions are required both for initial global table creation and for adding replicas later. These permissions apply to both Multi-Region Eventual Consistency (MREC) and Multi-Region Strong Consistency (MRSC) global tables.
+ Global tables require cross-Region replication, which DynamoDB manages through the [`AWSServiceRoleForDynamoDBReplication`](#globaltables-replication-slr) service-linked role (SLR). The following permission allows DynamoDB to create this role automatically when you create a global table for the first time:
  + `iam:CreateServiceLinkedRole`
+ To create a global table or add a replica using the [https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateTable.html](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateTable.html) API, you must have the following permission on the source table resource:
  + `dynamodb:UpdateTable`
+ You must have the following permissions on the table resource in the Regions for the replicas to be added:
  + `dynamodb:CreateTable`
  + `dynamodb:CreateTableReplica`
  + `dynamodb:Query`
  + `dynamodb:Scan`
  + `dynamodb:UpdateItem`
  + `dynamodb:PutItem`
  + `dynamodb:GetItem`
  + `dynamodb:DeleteItem`
  + `dynamodb:BatchWriteItem`

#### Additional permissions for MRSC global tables using a witness
<a name="globaltables-creation-iam-witness"></a>

When creating a Multi-Region Strong Consistency (MRSC) global table with a witness Region, you must have the following permission on the table resource in all participating Regions (including both replica Regions and the witness Region):
+ `dynamodb:CreateGlobalTableWitness`

#### Example IAM policies for creating global tables
<a name="globaltables-creation-iam-example"></a>

##### Creating MREC or MRSC global table across three Regions
<a name="globaltables-creation-iam-example-three-regions"></a>

The following identity-based policy allows you to create an MREC or MRSC global table named "users" across three Regions, including creating the required DynamoDB replication service-linked role.

------
#### [ JSON ]

****  

```
{
  "Version":"2012-10-17",		 	 	 
  "Statement": [
    {
      "Sid": "AllowCreatingUsersGlobalTable",
      "Effect": "Allow",
      "Action": [
        "dynamodb:CreateTable",
        "dynamodb:CreateTableReplica",
        "dynamodb:UpdateTable",
        "dynamodb:Query",
        "dynamodb:Scan",
        "dynamodb:UpdateItem",
        "dynamodb:PutItem",
        "dynamodb:GetItem",
        "dynamodb:DeleteItem",
        "dynamodb:BatchWriteItem"
      ],
      "Resource": [
        "arn:aws:dynamodb:us-east-1:123456789012:table/users",
        "arn:aws:dynamodb:us-east-2:123456789012:table/users",
        "arn:aws:dynamodb:us-west-2:123456789012:table/users"
      ]
    },
    {
      "Sid": "AllowCreatingSLR",
      "Effect": "Allow",
      "Action": [
        "iam:CreateServiceLinkedRole"
      ],
      "Resource": [
        "arn:aws:iam::123456789012:role/aws-service-role/replication.dynamodb.amazonaws.com/AWSServiceRoleForDynamoDBReplication"
      ]
    }
  ]
}
```

------

##### Restricting MREC or MRSC global table creation to specific Regions
<a name="globaltables-creation-iam-example-restrict-regions"></a>

The following identity-based policy allows you to create DynamoDB global table replicas in specific Regions using the [aws:RequestedRegion](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-requestedregion) condition key, including creating the required DynamoDB replication service-linked role.

------
#### [ JSON ]

****  

```
{
  "Version":"2012-10-17",		 	 	 
  "Statement": [
    {
      "Sid": "AllowAddingReplicasToSourceTable",
      "Effect": "Allow",
      "Action": [
        "dynamodb:UpdateTable"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:RequestedRegion": [
            "us-east-1"
          ]
        }
      }
    },
    {
      "Sid": "AllowCreatingReplicas",
      "Effect": "Allow",
      "Action": [
        "dynamodb:CreateTable",
        "dynamodb:CreateTableReplica",
        "dynamodb:UpdateTable",
        "dynamodb:Query",
        "dynamodb:Scan",
        "dynamodb:UpdateItem",
        "dynamodb:PutItem",
        "dynamodb:GetItem",
        "dynamodb:DeleteItem",
        "dynamodb:BatchWriteItem"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:RequestedRegion": [
            "us-east-2",
            "us-west-2"
          ]
        }
      }
    },
    {
      "Sid": "AllowCreatingSLR",
      "Effect": "Allow",
      "Action": [
        "iam:CreateServiceLinkedRole"
      ],
      "Resource": [
        "arn:aws:iam::123456789012:role/aws-service-role/replication.dynamodb.amazonaws.com/AWSServiceRoleForDynamoDBReplication"
      ]
    }
  ]
}
```

------

##### Creating MRSC global table with witness
<a name="globaltables-creation-iam-example-witness"></a>

The following identity-based policy allows you to create a DynamoDB MRSC global table named "users" with replicas in us-east-1 and us-east-2 and a witness in us-west-2, including creating the required DynamoDB replication service-linked role.

------
#### [ JSON ]

****  

```
{
  "Version":"2012-10-17",		 	 	 
  "Statement": [
    {
      "Sid": "AllowCreatingUsersGlobalTableWithWitness",
      "Effect": "Allow",
      "Action": [
        "dynamodb:CreateTable",
        "dynamodb:CreateTableReplica",
        "dynamodb:CreateGlobalTableWitness",
        "dynamodb:UpdateTable",
        "dynamodb:Query",
        "dynamodb:Scan",
        "dynamodb:UpdateItem",
        "dynamodb:PutItem",
        "dynamodb:GetItem",
        "dynamodb:DeleteItem",
        "dynamodb:BatchWriteItem"
      ],
      "Resource": [
        "arn:aws:dynamodb:us-east-1:123456789012:table/users",
        "arn:aws:dynamodb:us-east-2:123456789012:table/users"
      ]
    },
    {
      "Sid": "AllowCreatingSLR",
      "Effect": "Allow",
      "Action": [
        "iam:CreateServiceLinkedRole"
      ],
      "Resource": [
        "arn:aws:iam::123456789012:role/aws-service-role/replication.dynamodb.amazonaws.com/AWSServiceRoleForDynamoDBReplication"
      ]
    }
  ]
}
```

------

##### Restricting MRSC witness creation to specific Regions
<a name="globaltables-creation-iam-example-restrict-witness-regions"></a>

This identity-based policy allows you to create an MRSC global table with replicas restricted to specific Regions using the [aws:RequestedRegion](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-requestedregion) condition key and unrestricted witness creation across all Regions, including creating the required DynamoDB replication service-linked role.

------
#### [ JSON ]

****  

```
{
  "Version":"2012-10-17",		 	 	 
  "Statement": [
    {
      "Sid": "AllowCreatingReplicas",
      "Effect": "Allow",
      "Action": [
        "dynamodb:CreateTable",
        "dynamodb:CreateTableReplica",
        "dynamodb:UpdateTable",
        "dynamodb:Query",
        "dynamodb:Scan",
        "dynamodb:UpdateItem",
        "dynamodb:PutItem",
        "dynamodb:GetItem",
        "dynamodb:DeleteItem",
        "dynamodb:BatchWriteItem"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:RequestedRegion": [
            "us-east-1",
            "us-east-2"
          ]
        }
      }
    },
    {
      "Sid": "AllowCreatingWitness",
      "Effect": "Allow",
      "Action": [
        "dynamodb:CreateGlobalTableWitness"
      ],
      "Resource": "*"
    },
    {
      "Sid": "AllowCreatingSLR",
      "Effect": "Allow",
      "Action": [
        "iam:CreateServiceLinkedRole"
      ],
      "Resource": [
        "arn:aws:iam::123456789012:role/aws-service-role/replication.dynamodb.amazonaws.com/AWSServiceRoleForDynamoDBReplication"
      ]
    }
  ]
}
```

------

### Updating global tables
<a name="globaltables-update-iam"></a>

To modify replica settings for an existing global table using the [https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateTable.html](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateTable.html) API, you need the following permission on the table resource in the Region where you're making the API call:
+ `dynamodb:UpdateTable`

You can additionally update other global table configurations, such as auto scaling policies and Time to Live settings. The following permissions are required for these additional update operations:
+ To update a replica auto scaling policy with the [https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateTableReplicaAutoScaling.html](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateTableReplicaAutoScaling.html) API, you must have the following permissions on the table resource in all Regions containing replicas:
  + `application-autoscaling:DeleteScalingPolicy`
  + `application-autoscaling:DeleteScheduledAction`
  + `application-autoscaling:DeregisterScalableTarget`
  + `application-autoscaling:DescribeScalableTargets`
  + `application-autoscaling:DescribeScalingActivities`
  + `application-autoscaling:DescribeScalingPolicies`
  + `application-autoscaling:DescribeScheduledActions`
  + `application-autoscaling:PutScalingPolicy`
  + `application-autoscaling:PutScheduledAction`
  + `application-autoscaling:RegisterScalableTarget`
+ To update Time to Live settings with the [https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateTimeToLive.html](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateTimeToLive.html) API, you must have the following permission on the table resource in all Regions containing replicas:
  + `dynamodb:UpdateTimeToLive`

  Note that Time to Live (TTL) is only supported for global tables configured with Multi-Region Eventual Consistency (MREC). For more information about how global tables work with TTL, see [How DynamoDB global tables work](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/V2globaltables_HowItWorks.html).
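As an illustration, an identity-based policy granting the update permissions listed above for a table named "users" might look like the following. The account ID and Regions are placeholders, not values from this guide:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUpdatingReplicaAutoScalingAndTTL",
      "Effect": "Allow",
      "Action": [
        "application-autoscaling:DeleteScalingPolicy",
        "application-autoscaling:DeleteScheduledAction",
        "application-autoscaling:DeregisterScalableTarget",
        "application-autoscaling:DescribeScalableTargets",
        "application-autoscaling:DescribeScalingActivities",
        "application-autoscaling:DescribeScalingPolicies",
        "application-autoscaling:DescribeScheduledActions",
        "application-autoscaling:PutScalingPolicy",
        "application-autoscaling:PutScheduledAction",
        "application-autoscaling:RegisterScalableTarget",
        "dynamodb:UpdateTable",
        "dynamodb:UpdateTimeToLive"
      ],
      "Resource": [
        "arn:aws:dynamodb:us-east-1:123456789012:table/users",
        "arn:aws:dynamodb:us-east-2:123456789012:table/users",
        "arn:aws:dynamodb:us-west-2:123456789012:table/users"
      ]
    }
  ]
}
```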

### Deleting global tables and removing replicas
<a name="globaltables-delete-iam"></a>

To delete a global table, you must remove all replicas. The permissions required for this operation differ depending on whether you're deleting a global table with or without a witness Region.

#### Permissions for deleting global tables and removing replicas
<a name="globaltables-delete-iam-all-types"></a>

The following permissions are required both for removing individual replicas and for completely deleting global tables. Deleting a global table configuration only removes the replication relationship between tables in different Regions. It does not delete the underlying DynamoDB table in the last remaining Region. The table in the last Region continues to exist as a standard DynamoDB table with the same data and settings. These permissions apply to both Multi-Region Eventual Consistency (MREC) and Multi-Region Strong Consistency (MRSC) global tables. 
+ To remove replicas from a global table using the [https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateTable.html](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateTable.html) API, you need the following permission on the table resource in the Region from which you're making the API call:
  + `dynamodb:UpdateTable`
+ You need the following permissions on the table resource in each Region where you're removing a replica:
  + `dynamodb:DeleteTable`
  + `dynamodb:DeleteTableReplica`

#### Additional permissions for MRSC global tables using a witness
<a name="globaltables-delete-iam-witness"></a>

To delete a multi-Region strong consistency (MRSC) global table with a witness, you must have the following permission on the table resource in all participating Regions (including both replica Regions and the witness Region):
+ `dynamodb:DeleteGlobalTableWitness`

#### Example IAM policies for deleting global table replicas
<a name="globaltables-delete-iam-example"></a>

##### Deleting global table replicas
<a name="globaltables-delete-replicas-iam-example"></a>

This identity-based policy allows you to delete a DynamoDB global table named "users" and its replicas across three Regions:

------
#### [ JSON ]

****  

```
{
  "Version":"2012-10-17",		 	 	 
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:UpdateTable",
        "dynamodb:DeleteTable",
        "dynamodb:DeleteTableReplica"
      ],
      "Resource": [
        "arn:aws:dynamodb:us-east-1:123456789012:table/users",
        "arn:aws:dynamodb:us-east-2:123456789012:table/users",
        "arn:aws:dynamodb:us-west-2:123456789012:table/users"
      ]
    }
  ]
}
```

------

##### Deleting an MRSC global table with a witness
<a name="globaltables-delete-witness-iam-example"></a>

This identity-based policy allows you to delete the replicas and the witness of an MRSC global table named "users":

------
#### [ JSON ]

****  

```
{
  "Version":"2012-10-17",		 	 	 
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:UpdateTable",
        "dynamodb:DeleteTable",
        "dynamodb:DeleteTableReplica",
        "dynamodb:DeleteGlobalTableWitness"
      ],
      "Resource": [
        "arn:aws:dynamodb:us-east-1:123456789012:table/users",
        "arn:aws:dynamodb:us-east-2:123456789012:table/users"
      ]
    }
  ]
}
```

------

## How global tables use AWS KMS
<a name="globaltables-kms"></a>

Like all DynamoDB tables, global table replicas always encrypt data at rest using encryption keys stored in AWS Key Management Service (AWS KMS).

All replicas in a global table must be configured with the same type of KMS key (AWS owned key, AWS managed key, or customer managed key).

**Important**  
DynamoDB requires access to the replica's encryption key to delete a replica. If you want to disable or delete a customer managed key used to encrypt a replica because you are deleting the replica, you should first delete the replica, wait for the table status on one of the remaining replicas to change to `ACTIVE`, then disable or delete the key.

For a global table configured for multi-Region eventual consistency (MREC), if you disable or revoke DynamoDB's access to a customer managed key used to encrypt a replica, replication to and from the replica will stop and the replica status will change to `INACCESSIBLE_ENCRYPTION_CREDENTIALS`. If a replica in an MREC global table remains in the `INACCESSIBLE_ENCRYPTION_CREDENTIALS` state for more than 20 hours, the replica is irreversibly converted to a single-Region DynamoDB table.

For a global table configured for multi-Region strong consistency (MRSC), if you disable or revoke DynamoDB's access to a customer managed key used to encrypt a replica, replication to and from the replica will stop, writes and strongly consistent reads to the replica will return an error, and the replica status will change to `INACCESSIBLE_ENCRYPTION_CREDENTIALS`. If a replica in an MRSC global table remains in the `INACCESSIBLE_ENCRYPTION_CREDENTIALS` state for more than seven days, depending on the specific permissions revoked, the replica will be archived or become permanently inaccessible.

# DynamoDB multi-account global tables
<a name="globaltables-MultiAccount"></a>

Multi-account global tables automatically replicate your DynamoDB table data across multiple AWS Regions and multiple AWS accounts to improve resiliency, isolate workloads at the account level, and apply distinct security and governance controls. Each replica table resides in a distinct AWS account, enabling fault isolation at both the Region and account level. You can also align replicas with your AWS organizational structure. Multi-account global tables provide additional isolation, governance, and security benefits compared to same-account global tables.

Multi-account global tables provide the following benefits:
+ Replicate DynamoDB table data automatically across your choice of AWS accounts and Regions
+ Enhance security and governance by replicating data across accounts with distinct policies, guardrails, and compliance boundaries
+ Improve operational resiliency and account-level fault isolation by placing replicas in separate AWS accounts
+ Align workloads by business unit or ownership when using a multi-account strategy
+ Simplify cost attribution by billing each replica to its respective AWS account

For more information, see [ Benefits of using multiple AWS accounts](https://docs.aws.amazon.com/whitepapers/latest/organizing-your-aws-environment/benefits-of-using-multiple-aws-accounts.html). If your workloads don't require multi-account replication, or you want simpler replica management with local overrides, you can continue to use same-account global tables.

You can configure multi-account global tables with [Multi-Region eventual consistency (MREC)](V2globaltables_HowItWorks.md#V2globaltables_HowItWorks.consistency-modes.mrec). Global tables configured for [Multi-Region strong consistency (MRSC)](V2globaltables_HowItWorks.md#V2globaltables_HowItWorks.consistency-modes.mrsc) do not support the multi-account model.

**Topics**
+ [How DynamoDB global tables work](V2globaltables_MA_HowItWorks.md)
+ [Tutorials: Creating multi-account global tables](V2globaltables_MA.tutorial.md)
+ [DynamoDB global tables security](globaltables_MA_security.md)

# How DynamoDB global tables work
<a name="V2globaltables_MA_HowItWorks"></a>

Multi-account global tables extend the fully managed, serverless, multi-Region, and multi-active capabilities of DynamoDB global tables across multiple AWS accounts. Multi-account global tables replicate data across AWS Regions and accounts, providing the same active-active functionality as same-account global tables. When you write to any replica, DynamoDB replicates the data to all other replicas.

Key differences from same-account global tables include:
+ Multi-account replication is supported for multi-Region eventual consistency (MREC) global tables.
+ You can only add replicas by starting with a single-Region table. Converting an existing same-account global table into a multi-account setup is not supported. To migrate, you must delete existing replicas to return to a single-Region table before creating a new multi-account global table.
+ Each replica must reside in a separate AWS account. For a multi-account global table with *N* replicas, you must have *N* accounts.
+ Multi-account global tables use unified table settings across all replicas by default. All replicas automatically share the same configuration (such as throughput mode and TTL), and unlike same-account global tables, these settings cannot be overridden per replica.
+ You must grant replication permissions to the DynamoDB global tables service principal in your resource policies.

Multi-account global tables use the same underlying replication technology as same-account global tables. Table settings are replicated automatically across all Regional replicas, and you cannot override or customize settings per replica. This ensures consistent configuration and predictable behavior across the AWS accounts participating in the same global table.

Settings in DynamoDB global tables define how a table behaves and how data is replicated across Regions. These settings are configured through DynamoDB control plane APIs during table creation or when adding a new regional replica.

When creating a multi-account global table, you must set `GlobalTableSettingsReplicationMode = ENABLED` for each Regional replica. This ensures that configuration changes made in one Region propagate automatically to all other Regions that participate in the global table.

You can enable settings replication after table creation. This supports the scenario where a table is originally created as a regional table and later upgraded to a multi-account global table.

**Synchronized Settings**

The following table settings are always synchronized across all replicas in a multi-account global table:

**Note**  
Unlike same-account global tables, multi-account global tables do not allow per-Region overrides for these settings. The only exception is that overrides for read auto-scaling policies (tables and GSIs) are allowed as they are separate external resources.
+ Capacity mode (provisioned capacity or on-demand)
+ Table provisioned read and write capacity
+ Table read and write auto scaling
+ Local Secondary Index (LSI) definition
+ Global Secondary Index (GSI) definition
+ GSI provisioned read and write capacity
+ GSI read and write auto scaling
+ Streams definition in MREC mode
+ Time To Live (TTL)
+ Warm Throughput
+ On-demand maximum read and write throughput

**Non-Synchronized Settings**

The following settings are not synchronized between replicas and must be configured independently for each replica table in each Region.
+ Table Class
+ Server-side Encryption (SSE) type
+ Point-in-time Recovery
+ Server-side Encryption (SSE) KMS Key Id
+ Deletion Protection
+ Kinesis Data Streams destination (KDSD)
+ Tags
+ Resource Policy
+ Table CloudWatch Contributor Insights (CCI)
+ GSI CloudWatch Contributor Insights (CCI)

## Monitoring
<a name="V2globaltables_MA_HowItWorks.monitoring"></a>

Global tables configured for multi-Region eventual consistency (MREC) publish the [`ReplicationLatency`](metrics-dimensions.md#ReplicationLatency) metric to CloudWatch. This metric tracks the elapsed time between when an item is written to a replica table, and when that item appears in another replica in the global table. `ReplicationLatency` is expressed in milliseconds and is emitted for every source and destination Region pair in a global table.

Typical `ReplicationLatency` values depend on the distance between your chosen AWS Regions, as well as other variables like workload type and throughput. For example, a source replica in the US West (N. California) (us-west-1) Region has lower `ReplicationLatency` to the US West (Oregon) (us-west-2) Region compared to the Africa (Cape Town) (af-south-1) Region.

An increasing value for `ReplicationLatency` could indicate that updates from one replica are not propagating to other replica tables in a timely manner. In this case, you can temporarily redirect your application's read and write activity to a different AWS Region.

**Handling Replication Latency Issues in Multi-account Global Tables**

If `ReplicationLatency` exceeds 3 hours due to customer-induced issues on a replica table, DynamoDB sends a notification requesting the customer to address the underlying problem. Common customer-induced issues that may prevent replication include:
+ Removing required permissions from the replica table's resource policy
+ Opting out of an AWS Region that hosts a replica of the multi-account global table
+ Denying the table's AWS KMS key permissions required to decrypt data

DynamoDB sends an initial notification within 3 hours of elevated replication latency, followed by a second notification after 20 hours if the issue remains unresolved. If the problem is not corrected within the required time window, DynamoDB will automatically disassociate the replica from the global table. The affected replica will then be converted to a regional table.

# Tutorials: Creating multi-account global tables
<a name="V2globaltables_MA.tutorial"></a>

This section provides step-by-step instructions for creating DynamoDB global tables that span across multiple AWS accounts.

## Create a multi-account global table using the DynamoDB console
<a name="create-ma-gt-console"></a>

Follow these steps to create a multi-account global table using the AWS Management Console. The following example creates a global table with replica tables in the United States.

1. Sign in to the AWS Management Console and open the DynamoDB console at [https://console.aws.amazon.com/dynamodb/](https://console.aws.amazon.com/dynamodb/) for the first account (for example, *111122223333*).

1. For this example, choose **US East (Ohio)** from the Region selector in the navigation bar.

1. In the navigation pane on the left side of the console, choose **Tables**.

1. Choose **Create table**.

1. On the **Create table** page:

   1. For **Table name**, enter **MusicTable**.

   1. For **Partition key**, enter **Artist**.

   1. For **Sort key**, enter **SongTitle**.

   1. Keep the other default settings and choose **Create table**.

1. Add the following resource policy to the table:

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "DynamoDBActionsNeededForSteadyStateReplication",
               "Effect": "Allow",
               "Action": [
                   "dynamodb:ReadDataForReplication",
                   "dynamodb:WriteDataForReplication",
                   "dynamodb:ReplicateSettings"
               ],
               "Resource": "arn:aws:dynamodb:us-east-2:111122223333:table/MusicTable",
               "Principal": {"Service": ["replication.dynamodb.amazonaws.com"]},
               "Condition": {
                   "StringEquals": {
                       "aws:SourceAccount": ["444455556666", "111122223333"],
                       "aws:SourceArn": [
                           "arn:aws:dynamodb:us-east-1:444455556666:table/MusicTable",
                           "arn:aws:dynamodb:us-east-2:111122223333:table/MusicTable"
                       ]
                   }
               }
           },
           {
               "Sid": "AllowTrustedAccountsToJoinThisGlobalTable",
               "Effect": "Allow",
               "Action": [
                   "dynamodb:AssociateTableReplica"
               ],
               "Resource": "arn:aws:dynamodb:us-east-2:111122223333:table/MusicTable",
               "Principal": {"AWS": ["444455556666"]}
           }
       ]
   }
   ```

------

1. This new table serves as the first replica table in a new global table. It is the prototype for other replica tables that you add later.

1. Wait for the table to become **Active**. For the newly created table, on the **Global tables** tab, go to **Settings Replication** and choose **Enable**.

1. Log out of this account (*111122223333* in this example).

1. Sign in to the AWS Management Console and open the DynamoDB console at [https://console.aws.amazon.com/dynamodb/](https://console.aws.amazon.com/dynamodb/) for the second account (for example, *444455556666*).

1. For this example, choose **US East (N. Virginia)** from the Region selector in the navigation bar.

1. The console ensures that a table with the same name doesn't exist in the selected Region. If a table with the same name does exist, you must delete the existing table before you can create a new replica table in that Region.

1. In the dropdown list next to **Create table**, choose **Create from another account**.

1. On the **Create table from another account** page:

   1. Enter **arn:aws:dynamodb:us-east-2:*111122223333*:table/MusicTable** as the source table ARN.

   1. For **Replica Table ARNs**, add the ARN of the source table again: **arn:aws:dynamodb:us-east-2:*111122223333*:table/MusicTable**. If the multi-account global table already has multiple replicas, you must add the ARN of every existing replica to the replica table ARNs.

   1. Keep the other default settings and choose **Submit**.

1. The **Global tables** tab for the MusicTable (and for any other replica tables) shows that the table has been replicated in multiple Regions.

1. To test replication:

   1. You can use any Region in which a replica of this table exists.

   1. Choose **Explore table items**.

   1. Choose **Create item**.

   1. Enter **item_1** for **Artist** and **Song Value 1** for **SongTitle**.

   1. Choose **Create item**.

   1. Switch to one of the other Regions that hosts a replica.

   1. Verify that the MusicTable contains the item you created.

## Create a multi-account global table using the AWS CLI
<a name="ma-gt-cli"></a>

The following examples show how to create a multi-account global table using the AWS CLI. These examples demonstrate the complete workflow for setting up cross-account replication.

------
#### [ CLI ]

Use the following AWS CLI commands to create a multi-account global table with cross-account replication.

```
# STEP 1: Setting resource policy for the table in account 111122223333

cat > /tmp/source-resource-policy.json << 'EOF'
{
    "Version": "2012-10-17", 		 	 	 
    "Statement": [
        {
            "Sid": "DynamoDBActionsNeededForSteadyStateReplication",
            "Effect": "Allow",
            "Action": [
                "dynamodb:ReadDataForReplication",
                "dynamodb:WriteDataForReplication",
                "dynamodb:ReplicateSettings"
            ],
            "Resource": "arn:aws:dynamodb:us-east-2:111122223333:table/MusicTable",
            "Principal": {"Service": ["replication.dynamodb.amazonaws.com"]},
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": ["444455556666","111122223333"],
                    "aws:SourceArn": [
                        "arn:aws:dynamodb:us-east-1:444455556666:table/MusicTable",
                        "arn:aws:dynamodb:us-east-2:111122223333:table/MusicTable"
                    ]
                }
            }
        },
        {
            "Sid": "AllowTrustedAccountsToJoinThisGlobalTable",
            "Effect": "Allow",
            "Action": [
                "dynamodb:AssociateTableReplica"
            ],
            "Resource": "arn:aws:dynamodb:us-east-2:111122223333:table/MusicTable",
            "Principal": {"AWS": ["444455556666"]}
        }
    ]
}
EOF

# Step 2: Create a new table (MusicTable) in US East (Ohio), 
#   with DynamoDB Streams enabled (NEW_AND_OLD_IMAGES),
#   and Settings Replication ENABLED on the account 111122223333

aws dynamodb create-table \
    --table-name MusicTable \
    --attribute-definitions \
        AttributeName=Artist,AttributeType=S \
        AttributeName=SongTitle,AttributeType=S \
    --key-schema \
        AttributeName=Artist,KeyType=HASH \
        AttributeName=SongTitle,KeyType=RANGE \
    --billing-mode PAY_PER_REQUEST \
    --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES \
    --global-table-settings-replication-mode ENABLED \
    --resource-policy file:///tmp/source-resource-policy.json \
    --region us-east-2 


# Step 3: Creating replica table in account 444455556666

# Resource policy for account 444455556666
cat > /tmp/dest-resource-policy.json << 'EOF'
{
    "Version": "2012-10-17", 		 	 	 
    "Statement": [
        {
            "Sid": "DynamoDBActionsNeededForSteadyStateReplication",
            "Effect": "Allow",
            "Action": [
                "dynamodb:ReadDataForReplication",
                "dynamodb:WriteDataForReplication",
                "dynamodb:ReplicateSettings"
            ],
            "Resource": "arn:aws:dynamodb:us-east-1:444455556666:table/MusicTable",
            "Principal": {"Service": ["replication.dynamodb.amazonaws.com"]},
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": ["444455556666","111122223333"],
                    "aws:SourceArn": [
                        "arn:aws:dynamodb:us-east-1:444455556666:table/MusicTable",
                        "arn:aws:dynamodb:us-east-2:111122223333:table/MusicTable"
                    ]
                }
            }
        }
    ]
}
EOF

# Execute the replica table creation
aws dynamodb create-table \
    --table-name MusicTable \
    --global-table-source-arn "arn:aws:dynamodb:us-east-2:111122223333:table/MusicTable" \
    --resource-policy file:///tmp/dest-resource-policy.json \
    --global-table-settings-replication-mode ENABLED \
    --region us-east-1

# Step 4: View the list of replicas created using describe-table
aws dynamodb describe-table \
    --table-name MusicTable \
    --region us-east-2 \
    --query 'Table.{TableName:TableName,TableStatus:TableStatus,MultiRegionConsistency:MultiRegionConsistency,Replicas:Replicas[*].{Region:RegionName,Status:ReplicaStatus}}'

# Step 5: To verify that replication is working, add a new item to the Music table in US East (Ohio)
aws dynamodb put-item \
    --table-name MusicTable \
    --item '{"Artist": {"S":"item_1"},"SongTitle": {"S":"Song Value 1"}}' \
    --region us-east-2

# Step 6: Wait for a few seconds, and then check whether the item has been
# successfully replicated to US East (N. Virginia); the second command reads
# the item back from the source Region, US East (Ohio)
aws dynamodb get-item \
    --table-name MusicTable \
    --key '{"Artist": {"S":"item_1"},"SongTitle": {"S":"Song Value 1"}}' \
    --region us-east-1

aws dynamodb get-item \
    --table-name MusicTable \
    --key '{"Artist": {"S":"item_1"},"SongTitle": {"S":"Song Value 1"}}' \
    --region us-east-2

# Step 7: Delete the replica table in US East (N. Virginia) Region
aws dynamodb delete-table \
    --table-name MusicTable \
    --region us-east-1

# Clean up: Delete the primary table
aws dynamodb delete-table \
    --table-name MusicTable \
    --region us-east-2
```

------

# DynamoDB global tables security
<a name="globaltables_MA_security"></a>

Global table replicas are DynamoDB tables, so you use the same methods for controlling access to replicas that you do for single-Region tables, including AWS Identity and Access Management (IAM) identity policies and resource-based policies. This topic covers how to secure DynamoDB multi-account global tables using IAM permissions and AWS Key Management Service (AWS KMS) encryption. You learn about the resource-based policies and service-linked roles (SLRs) that allow cross-Region, cross-account replication and auto scaling, and about the IAM permissions needed to create, update, and delete multi-Region eventual consistency (MREC) global tables. You also learn how to use AWS KMS encryption keys to manage cross-Region replication securely.

It provides detailed information about the resource-based policies and permissions required to establish cross-account, cross-Region table replication. Understanding this security model is crucial for customers who need to implement secure, cross-account data replication solutions.

## Service principal authorization for replication
<a name="globaltables_MA_service_principal"></a>

DynamoDB's multi-account global tables use a distinct authorization approach because replication is performed across account boundaries. This is done using DynamoDB's replication service principal: `replication.dynamodb.amazonaws.com`. Each participating account must explicitly allow that principal in the replica table's resource policy, giving it permissions that can be constrained to specific replicas by source context conditions on keys such as `aws:SourceAccount` and `aws:SourceArn`. For more details, see [AWS global condition keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html). Permissions are bi-directional, which means that all replicas must explicitly grant permissions to each other before replication can be established across any particular pair of replicas.

The following service principal permissions are essential for cross-account replication:
+ `dynamodb:ReadDataForReplication` grants the ability to read data for replication purposes. This permission allows changes in one replica to be read and propagated to other replicas.
+ `dynamodb:WriteDataForReplication` permits the writing of replicated data to destination tables. This permission allows changes to be synchronized across all replicas in the global table.
+ `dynamodb:ReplicateSettings` enables the synchronization of table settings across replicas, providing consistent configuration across all participating tables.

Each replica must grant these permissions to all other replicas and to itself; that is, the source context conditions must include the full set of replicas that comprises the global table. These permissions are verified for each new replica when it is added to a multi-account global table. This verification confirms that replication operations are performed only by the authorized DynamoDB service and only between the intended tables.
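Because every replica's policy must list the same full set of accounts and ARNs, it can help to render the steady-state statement from a single list. The following is a minimal sketch, not an official tool; all account IDs, Regions, and table names are placeholders, and the rendered file is validated locally before you attach it with your usual tooling.

```shell
#!/bin/sh
# Placeholder replica set; list every replica in the global table,
# including the table this policy will be attached to.
ACCOUNTS='"111122223333", "444455556666"'
ARNS='"arn:aws:dynamodb:us-east-2:111122223333:table/MusicTable",
                        "arn:aws:dynamodb:us-east-1:444455556666:table/MusicTable"'
THIS_REPLICA="arn:aws:dynamodb:us-east-2:111122223333:table/MusicTable"

# Render the steady-state replication statement for this replica.
cat > replica-policy.json <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DynamoDBActionsNeededForSteadyStateReplication",
            "Effect": "Allow",
            "Action": [
                "dynamodb:ReadDataForReplication",
                "dynamodb:WriteDataForReplication",
                "dynamodb:ReplicateSettings"
            ],
            "Resource": "${THIS_REPLICA}",
            "Principal": {"Service": ["replication.dynamodb.amazonaws.com"]},
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": [${ACCOUNTS}],
                    "aws:SourceArn": [${ARNS}]
                }
            }
        }
    ]
}
EOF

# Fail fast if the rendered document is not valid JSON.
python3 -m json.tool replica-policy.json > /dev/null && echo "policy OK"
```

Keeping one source of truth for the replica list makes it harder for one replica's conditions to drift out of sync with the others.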

## Service-linked roles for multi-account global tables
<a name="globaltables_MA_service_linked_roles"></a>

DynamoDB multi-account global tables replicate settings across all replicas so that each replica is set up identically with consistent throughput and provides a seamless fail-over experience. Replication of settings is controlled through the `ReplicateSettings` permission on the service principal, but DynamoDB also relies on service-linked roles (SLRs) to manage certain cross-account, cross-Region replication and auto-scaling capabilities. These roles are set up only once per AWS account. Once created, the same roles serve all global tables in your account. For more information about service-linked roles, see [Using service-linked roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create-service-linked-role.html) in the IAM User Guide.

### Settings management service-linked role
<a name="globaltables_MA_settings_management_slr"></a>

Amazon DynamoDB automatically creates the AWSServiceRoleForDynamoDBGlobalTableSettingsManagement service-linked role (SLR) when you create your first multi-account global table replica in the account. This role manages cross-account cross-Region replication of settings for you.

When applying resource-based policies to replicas, confirm that you do not deny the SLR principal any of the permissions granted by the `AWSServiceRoleForDynamoDBGlobalTableSettingsManagement` role, as this could interfere with settings management and may impair replication if throughput does not match across replicas or global secondary indexes (GSIs). If you deny required SLR permissions, replication to and from affected replicas may stop, and the replica table status will change to `REPLICATION_NOT_AUTHORIZED`. For multi-account global tables, if a replica remains in the `REPLICATION_NOT_AUTHORIZED` state for more than 20 hours, the replica is irreversibly converted to a single-Region DynamoDB table. The SLR has the following permissions:
+ `application-autoscaling:DeleteScalingPolicy`
+ `application-autoscaling:DescribeScalableTargets`
+ `application-autoscaling:DescribeScalingPolicies`
+ `application-autoscaling:DeregisterScalableTarget`
+ `application-autoscaling:PutScalingPolicy`
+ `application-autoscaling:RegisterScalableTarget`

### Auto scaling service-linked role
<a name="globaltables_MA_autoscaling_slr"></a>

When you configure a global table for provisioned capacity mode, auto scaling must be configured for the global table. DynamoDB auto scaling uses the AWS Application Auto Scaling service to dynamically adjust provisioned throughput capacity on your global table replicas. The Application Auto Scaling service creates a service-linked role (SLR) named [AWSServiceRoleForApplicationAutoScaling_DynamoDBTable](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-service-linked-roles.html). This service-linked role is automatically created in your AWS account when you first configure auto scaling for a DynamoDB table. It allows Application Auto Scaling to manage provisioned table capacity and create CloudWatch alarms.

When applying resource-based policies to replicas, verify that you do not deny the Application Auto Scaling SLR principal any permissions defined in [AWSApplicationAutoscalingDynamoDBTablePolicy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSApplicationAutoscalingDynamoDBTablePolicy.html), as this will interrupt auto-scaling functionality.

## How global tables use AWS IAM
<a name="globaltables_MA_iam"></a>

The following sections describe the required permissions for different global table operations and provide policy examples to help you configure the appropriate access for your users and applications.

**Note**  
All permissions described must be applied to the specific table resource ARN in the affected Region(s). The table resource ARN follows the format `arn:aws:dynamodb:region:account-id:table/table-name`, where you need to specify your actual Region, account ID, and table name values.

The following sections cover these operations step by step:
+ Creating multi-account global tables and adding replicas
+ Updating a multi-account global table
+ Deleting global tables and removing replicas

### Creating global tables and adding replicas
<a name="globaltables_MA_creating"></a>

#### Permissions for creating global tables
<a name="globaltables_MA_creating_permissions"></a>

When a new replica is added to a regional table to form a multi-account global table, or to an existing multi-account global table, the IAM principal performing the action must be authorized by all existing members. Each existing member must grant the following permission in its table policy for the replica addition to succeed:
+ `dynamodb:AssociateTableReplica` - This permission allows tables to be joined into a global table setup. This is the foundational permission that enables the initial establishment of the replication relationship.

This precise control allows only authorized accounts to participate in the global table setup.

#### Example IAM policies for creating global tables
<a name="globaltables_MA_creating_examples"></a>

##### Example IAM policies for a 2-replica setup
<a name="globaltables_MA_2replica_example"></a>

The setup of multi-account global tables follows a specific authorization flow that provides secure replication. Let's examine how this works by walking through a practical scenario in which a customer wants to establish a global table with two replicas. The first replica (ReplicaA) resides in Account A in the ap-east-1 Region, while the second replica (ReplicaB) is in Account B in the eu-south-1 Region.
+ In the source account (Account A), the process begins with creating the primary replica table. The account administrator must attach a resource-based policy to this table that explicitly grants necessary permissions to the destination account (Account B) to perform the association. This policy also authorizes the DynamoDB replication service to perform essential replication actions.
+ The destination account (Account B) follows a similar process by attaching a corresponding resource-based policy while creating the replica and referencing the source table ARN to be used to create the replica. This policy mirrors the permissions granted by Account A, creating a trusted bi-directional relationship. Before establishing replication, DynamoDB validates these cross-account permissions to verify proper authorization is in place.

To establish this setup:
+ The administrator of Account A must first attach the resource-based policy to ReplicaA. This policy explicitly grants the necessary permissions to Account B and the DynamoDB replication service.
+ Similarly, the administrator of Account B must attach a matching policy to ReplicaB in the create-table call that creates ReplicaB (referencing ReplicaA as the source table), with account references reversed to grant the corresponding permissions to Account A.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "DynamoDBActionsNeededForSteadyStateReplication",
            "Effect": "Allow",
            "Action": [
                "dynamodb:ReadDataForReplication",
                "dynamodb:WriteDataForReplication",
                "dynamodb:ReplicateSettings"
            ],
            "Resource": "arn:aws:dynamodb:ap-east-1:111122223333:table/ReplicaA",
            "Principal": {"Service": ["replication.dynamodb.amazonaws.com"]},
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": [ "111122223333", "444455556666" ],
                    "aws:SourceArn": [
                        "arn:aws:dynamodb:ap-east-1:111122223333:table/ReplicaA",
                        "arn:aws:dynamodb:eu-south-1:444455556666:table/ReplicaB"
                    ]
                }
            }
        },
        {
            "Sid": "AllowTrustedAccountsToJoinThisGlobalTable",
            "Effect": "Allow",
            "Action": [
                "dynamodb:AssociateTableReplica"
            ],
            "Resource": "arn:aws:dynamodb:ap-east-1:111122223333:table/ReplicaA",
            "Principal": {"AWS": ["444455556666"]}
        }
    ]
}
```

------

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "DynamoDBActionsNeededForSteadyStateReplication",
            "Effect": "Allow",
            "Action": [
                "dynamodb:ReadDataForReplication",
                "dynamodb:WriteDataForReplication",
                "dynamodb:ReplicateSettings"
            ],
            "Resource": "arn:aws:dynamodb:eu-south-1:444455556666:table/ReplicaB",
            "Principal": {"Service": ["replication.dynamodb.amazonaws.com"]},
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": [ "111122223333", "444455556666" ],
                    "aws:SourceArn": [
                        "arn:aws:dynamodb:ap-east-1:111122223333:table/ReplicaA",
                        "arn:aws:dynamodb:eu-south-1:444455556666:table/ReplicaB"
                    ]
                }
            }
        }
    ]
}
```

------

##### Example IAM policies for a 3-replica setup
<a name="globaltables_MA_3replica_example"></a>

In this setup, there are three replicas: ReplicaA, ReplicaB, and ReplicaC in Account A, Account B, and Account C, respectively. ReplicaA is the first replica, which starts as a regional table; ReplicaB and ReplicaC are then added to it.
+ The administrator of Account A must first attach the resource-based policy to ReplicaA allowing replication with all members, and allowing the IAM principals of Account B and Account C to add replicas.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "DynamoDBActionsNeededForSteadyStateReplication",
            "Effect": "Allow",
            "Action": [
                "dynamodb:ReadDataForReplication",
                "dynamodb:WriteDataForReplication",
                "dynamodb:ReplicateSettings"
            ],
            "Resource": "arn:aws:dynamodb:ap-east-1:111122223333:table/ReplicaA",
            "Principal": {"Service": ["replication.dynamodb.amazonaws.com"]},
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": [ "111122223333", "444455556666", "123456789012" ],
                    "aws:SourceArn": [
                        "arn:aws:dynamodb:ap-east-1:111122223333:table/ReplicaA",
                        "arn:aws:dynamodb:eu-south-1:444455556666:table/ReplicaB",
                        "arn:aws:dynamodb:us-east-1:123456789012:table/ReplicaC"
                    ]
                }
            }
        },
        {
            "Sid": "AllowTrustedAccountsToJoinThisGlobalTable",
            "Effect": "Allow",
            "Action": [
                "dynamodb:AssociateTableReplica"
            ],
            "Resource": "arn:aws:dynamodb:ap-east-1:111122223333:table/ReplicaA",
            "Principal": { "AWS": [ "444455556666", "123456789012" ] }
        }
    ]
}
```

------
+ The administrator of Account B must add a replica (ReplicaB) pointing to ReplicaA as a source. ReplicaB has the following policy allowing replication between all members, and allowing Account C to add a replica:

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "DynamoDBActionsNeededForSteadyStateReplication",
            "Effect": "Allow",
            "Action": [
                "dynamodb:ReadDataForReplication",
                "dynamodb:WriteDataForReplication",
                "dynamodb:ReplicateSettings"
            ],
            "Resource": "arn:aws:dynamodb:eu-south-1:444455556666:table/ReplicaB",
            "Principal": {"Service": ["replication.dynamodb.amazonaws.com"]},
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": [ "111122223333", "444455556666", "123456789012" ],
                    "aws:SourceArn": [
                        "arn:aws:dynamodb:ap-east-1:111122223333:table/ReplicaA",
                        "arn:aws:dynamodb:eu-south-1:444455556666:table/ReplicaB",
                        "arn:aws:dynamodb:us-east-1:123456789012:table/ReplicaC"
                    ]
                }
            }
        },
        {
            "Sid": "AllowTrustedAccountsToJoinThisGlobalTable",
            "Effect": "Allow",
            "Action": [
                "dynamodb:AssociateTableReplica"
            ],
            "Resource": "arn:aws:dynamodb:eu-south-1:444455556666:table/ReplicaB",
            "Principal": { "AWS": [ "123456789012" ] }
        }
    ]
}
```

------
+ Finally, the administrator of Account C creates a replica with the following policy, which allows replication permissions between all members. The policy doesn't allow any further replicas to be added.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "DynamoDBActionsNeededForSteadyStateReplication",
            "Effect": "Allow",
            "Action": [
                "dynamodb:ReadDataForReplication",
                "dynamodb:WriteDataForReplication",
                "dynamodb:ReplicateSettings"
            ],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/ReplicaC",
            "Principal": {"Service": ["replication.dynamodb.amazonaws.com"]},
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": [ "111122223333", "444455556666" ],
                    "aws:SourceArn": [
                        "arn:aws:dynamodb:ap-east-1:111122223333:table/ReplicaA",
                        "arn:aws:dynamodb:eu-south-1:444455556666:table/ReplicaB"
                    ]
                }
            }
        }
    ]
}
```

------

### Updating a multi-account global table
<a name="globaltables_MA_updating"></a>

To modify replica settings for an existing global table using the UpdateTable API, you need the following permission on the table resource in the Region where you're making the API call: `dynamodb:UpdateTable`

You can additionally update other global table configurations, such as auto scaling policies and Time to Live settings. The following permissions are required for these additional update operations:

To update Time to Live settings with the `UpdateTimeToLive` API, you must have the following permission on the table resource in all Regions containing replicas: `dynamodb:UpdateTimeToLive`

To update a replica auto scaling policy with the `UpdateTableReplicaAutoScaling` API, you must have the following permissions on the table resource in all Regions containing replicas:
+ `application-autoscaling:DeleteScalingPolicy`
+ `application-autoscaling:DeleteScheduledAction`
+ `application-autoscaling:DeregisterScalableTarget`
+ `application-autoscaling:DescribeScalableTargets`
+ `application-autoscaling:DescribeScalingActivities`
+ `application-autoscaling:DescribeScalingPolicies`
+ `application-autoscaling:DescribeScheduledActions`
+ `application-autoscaling:PutScalingPolicy`
+ `application-autoscaling:PutScheduledAction`
+ `application-autoscaling:RegisterScalableTarget`

**Note**  
You must grant `dynamodb:ReplicateSettings` permissions across all replica Regions and accounts for the table update to succeed. If any replica does not grant settings-replication permissions to every other replica in the multi-account global table, all update operations across all replicas fail with `AccessDeniedException` until the permissions are fixed.
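Putting the update permissions above together, an identity-based policy for a principal that manages the replica owned by this account might look like the following sketch. The account ID, Region, and table name are placeholders, and `"Resource": "*"` for the Application Auto Scaling actions is a simplification that you may want to scope down.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "UpdateGlobalTableSettings",
            "Effect": "Allow",
            "Action": [
                "dynamodb:UpdateTable",
                "dynamodb:UpdateTimeToLive"
            ],
            "Resource": "arn:aws:dynamodb:us-east-2:111122223333:table/MusicTable"
        },
        {
            "Sid": "ManageReplicaAutoScaling",
            "Effect": "Allow",
            "Action": [
                "application-autoscaling:DeleteScalingPolicy",
                "application-autoscaling:DeleteScheduledAction",
                "application-autoscaling:DeregisterScalableTarget",
                "application-autoscaling:DescribeScalableTargets",
                "application-autoscaling:DescribeScalingActivities",
                "application-autoscaling:DescribeScalingPolicies",
                "application-autoscaling:DescribeScheduledActions",
                "application-autoscaling:PutScalingPolicy",
                "application-autoscaling:PutScheduledAction",
                "application-autoscaling:RegisterScalableTarget"
            ],
            "Resource": "*"
        }
    ]
}
```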

### Deleting global tables and removing replicas
<a name="globaltables_MA_deleting"></a>

To delete a global table, you must remove all replicas. Unlike same-account global tables, you cannot use `UpdateTable` to delete a replica table in a remote Region; each replica must be deleted through the `DeleteTable` API from the account that controls it.

#### Permissions for deleting global tables and removing replicas
<a name="globaltables_MA_deleting_permissions"></a>

The following permissions are required both for removing individual replicas and for completely deleting global tables. Deleting a global table configuration only removes the replication relationship between tables in different Regions. It does not delete the underlying DynamoDB table in the last remaining Region. The table in the last Region continues to exist as a standard DynamoDB table with the same data and settings.

You need the following permissions on the table resource in each Region where you're removing a replica:
+ `dynamodb:DeleteTable`
+ `dynamodb:DeleteTableReplica`
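For example, an identity-based policy granting a principal the ability to remove the replica owned by this account might look like the following sketch; the account ID, Region, and table name are placeholders.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReplicaRemoval",
            "Effect": "Allow",
            "Action": [
                "dynamodb:DeleteTable",
                "dynamodb:DeleteTableReplica"
            ],
            "Resource": "arn:aws:dynamodb:us-east-2:111122223333:table/MusicTable"
        }
    ]
}
```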

## How global tables use AWS KMS
<a name="globaltables_MA_kms"></a>

Like all DynamoDB tables, global table replicas always encrypt data at rest using encryption keys stored in AWS Key Management Service (AWS KMS).

**Note**  
Unlike same-account global tables, different replicas in a multi-account global table can be configured with different types of AWS KMS keys (AWS owned keys or customer managed keys). Multi-account global tables do not support AWS managed keys.

Multi-account global tables that use customer managed keys require each replica's key policy to grant the DynamoDB replication service principal (`replication.dynamodb.amazonaws.com`) permission to access the key for replication and settings management. The following permissions are required:
+ `kms:Decrypt`
+ `kms:ReEncrypt*`
+ `kms:GenerateDataKey*`
+ `kms:DescribeKey`

**Important**

DynamoDB requires access to the replica's encryption key to delete a replica. If you want to disable or delete a customer managed key used to encrypt a replica because you are deleting that replica, first delete the replica, confirm that the table has been removed from the replication group by calling `DescribeTable` on one of the other replicas, and then disable or delete the key.

If you disable or revoke DynamoDB's access to a customer managed key used to encrypt a replica, replication to and from the replica will stop and the replica status will change to `INACCESSIBLE_ENCRYPTION_CREDENTIALS`. If a replica remains in the `INACCESSIBLE_ENCRYPTION_CREDENTIALS` state for more than 20 hours, the replica is irreversibly converted to a single-Region DynamoDB table.

### Example AWS KMS policy
<a name="globaltables_MA_kms_example"></a>

The following AWS KMS key policy allows DynamoDB to access the AWS KMS keys for replication between ReplicaA and ReplicaB. The AWS KMS key attached to the DynamoDB replica in each account needs to be updated with this policy:

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Principal": { "Service": "replication.dynamodb.amazonaws.com" },
        "Action": [
            "kms:Decrypt",
            "kms:ReEncrypt*",
            "kms:GenerateDataKey*",
            "kms:DescribeKey"
        ],
        "Resource": "*",
        "Condition": {
            "StringEquals": {
                "aws:SourceAccount": [ "111122223333", "444455556666" ],
                "aws:SourceArn": [
                    "arn:aws:dynamodb:ap-east-1:111122223333:table/ReplicaA",
                    "arn:aws:dynamodb:eu-south-1:444455556666:table/ReplicaB"
                ]
            }
        }
      }
   ]
 }
```

------

# Understanding Amazon DynamoDB billing for global tables
<a name="global-tables-billing"></a>

This section describes how DynamoDB billing works for global tables, identifies the components that contribute to the cost of global tables, and includes a practical example. 

[Amazon DynamoDB global tables](GlobalTables.md) is a fully managed, serverless, multi-Region, and multi-active database. Global tables are designed for [99.999% availability](https://aws.amazon.com/dynamodb/sla/), delivering increased application resiliency, and improved business continuity. Global tables replicate your DynamoDB tables automatically across your choice of AWS Regions so you can achieve fast, local read and write performance. 

## How it works
<a name="global-tables-billing-how-it-works"></a>

The billing model for global tables differs from single-Region DynamoDB tables. Write operations for single-Region DynamoDB tables are billed using the following units:
+ Write Request Units (WRUs) for on-demand capacity mode, where one WRU is charged for each write up to 1 KB
+ Write Capacity Units (WCUs) for provisioned capacity mode, where one WCU provides one write per second for up to 1 KB

When you create a global table by adding a replica table to an existing single-Region table, that single-Region table becomes a replica table, which means the units used to bill for writes to the table also change. Write operations to replica tables are billed using the following units: 
+ Replicated Write Request Units (rWRUs) for on-demand capacity mode, where one rWRU per replica table is charged for each write up to 1 KB
+ Replicated Write Capacity Units (rWCUs) for provisioned capacity mode, where one rWCU per replica table provides one write per second for up to 1 KB

Updates to Global Secondary Indexes (GSIs) are billed using the same units as single-Region DynamoDB tables, even if the base table for the GSI is a replica table. Update operations for GSIs are billed using the following units:
+ Write Request Units (WRUs) for on-demand capacity mode, where one WRU is charged for each write up to 1 KB
+ Write Capacity Units (WCUs) for provisioned capacity mode, where one WCU provides one write per second for up to 1 KB

Replicated write units (rWCUs and rWRUs) are priced the same as single-Region write units (WCUs and WRUs). Cross-Region data transfer fees apply for global tables as data is replicated across Regions. Replicated write (rWCU or rWRU) charges are incurred in every Region containing a replica table for the global table.

Read operations from single-Region tables and from replica tables use the following units:
+ Read Request Units (RRUs) for on-demand capacity mode, where one RRU is charged for each strongly consistent read up to 4 KB
+ Read Capacity Units (RCUs) for provisioned capacity mode, where one RCU provides one strongly consistent read per second for up to 4 KB
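As a rough sketch, the request-unit arithmetic above can be expressed in a few lines of Python (the function names are illustrative, not part of any DynamoDB SDK):

```python
import math

def write_request_units(item_size_bytes):
    """On-demand writes: one WRU per write, per 1 KB, rounded up."""
    return math.ceil(item_size_bytes / 1024)

def read_request_units(item_size_bytes):
    """On-demand strongly consistent reads: one RRU per read, per 4 KB,
    rounded up. (Eventually consistent reads are billed at half this rate.)"""
    return math.ceil(item_size_bytes / 4096)

def replicated_write_request_units(item_size_bytes, replica_count):
    """Writes to a global table: one replicated write is billed in every
    Region that contains a replica table."""
    return write_request_units(item_size_bytes) * replica_count

print(write_request_units(1024),                 # 1 WRU for a 1 KB write
      read_request_units(4097),                  # 2 RRUs for a 4 KB + 1 byte read
      replicated_write_request_units(1024, 2))   # 2 rWRUs for two replicas
```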

## Consistency modes and billing
<a name="global-tables-billing-consistency-modes"></a>

The replicated write units (rWCUs and rWRUs) used to bill for write operations are identical for both multi-Region strong consistency (MRSC) and multi-Region eventual consistency (MREC) modes. Global tables using multi-Region strong consistency (MRSC) mode configured with a witness don't incur replicated write unit costs (rWCUs and rWRUs), storage costs, or data transfer costs for replication to the witness.

## DynamoDB global tables billing example
<a name="global-tables-billing-example"></a>

Let's walk through a multi-day example scenario to see how global table write request billing works in practice. (This example considers only write requests; it does not include the table restore and cross-Region data transfer charges that would also be incurred.)

**Day 1 - Single-Region table:** You have a single-Region on-demand DynamoDB table named Table_1A in the us-west-2 Region. You write 100 1 KB items to Table_1A. For these single-Region write operations, you are charged one write request unit (WRU) per 1 KB written. Your day 1 charges are:
+ 100 WRUs in the us-west-2 Region for single-Region writes

The total request units charged on day 1: **100 WRUs**.

**Day 2 - Creating a global table:** You create a global table by adding a replica to Table_1A in the us-east-2 Region. Table_1A is now a global table with two replica tables: one in the us-west-2 Region, and one in the us-east-2 Region. You write 150 1 KB items to the replica table in the us-west-2 Region. Your day 2 charges are:
+ 150 rWRUs in the us-west-2 Region for replicated writes
+ 150 rWRUs in the us-east-2 Region for replicated writes

The total request units charged on day 2: **300 rWRUs**.

**Day 3 - Adding a Global Secondary Index:** You add a global secondary index (GSI) to the replica table in the us-east-2 Region that projects all attributes from the base (replica) table. The global table automatically creates the GSI on the replica table in the us-west-2 Region for you. You write 200 new 1 KB records to the replica table in the us-west-2 Region. Your day 3 charges are:
+ 200 rWRUs in the us-west-2 Region for replicated writes
+ 200 WRUs in the us-west-2 Region for GSI updates
+ 200 rWRUs in the us-east-2 Region for replicated writes
+ 200 WRUs in the us-east-2 Region for GSI updates

The total write request units charged on day 3: **400 WRUs and 400 rWRUs**.

The total write unit charges for all three days are 500 WRUs (100 WRUs on day 1 + 400 WRUs on day 3) and 700 rWRUs (300 rWRUs on day 2 + 400 rWRUs on day 3).

In summary, replica table write operations are billed in replicated write units in all Regions that contain a replica table. If you have global secondary indexes, you are charged write units for updates to GSIs in all Regions that contain a GSI (which in a global table is all Regions that contain a replica table). 
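The arithmetic in the three-day example can be checked with a short Python sketch:

```python
# Write-unit totals for the three-day example. Every write is 1 KB,
# so each write costs one (replicated) write request unit per Region.
wrus  = 100 * 1    # day 1: 100 writes to a single-Region table
rwrus = 150 * 2    # day 2: 150 writes replicated to two Regions
rwrus += 200 * 2   # day 3: 200 writes replicated to two Regions
wrus  += 200 * 2   # day 3: GSI updates in two Regions, billed as WRUs

print(wrus, rwrus)  # → 500 700
```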

# DynamoDB global tables versions
<a name="V2globaltables_versions"></a>

There are two versions of DynamoDB global tables available: Global Tables version 2019.11.21 (Current) and Global Tables version 2017.11.29 (Legacy). We recommend using Global Tables version 2019.11.21 (Current), as it is easier to use, supported in more Regions, and costs less for most workloads than version 2017.11.29 (Legacy).

## Determining the version of a global table
<a name="globaltables.DetermineVersion"></a>

### Determining the version using the AWS CLI
<a name="globaltables.CLI"></a>

#### Identifying a version 2019.11.21 (Current) global table replica
<a name="globaltables.CLI.current"></a>

To determine if a table is a global tables version 2019.11.21 (Current) replica, invoke the `describe-table` command for the table. If the output contains the `GlobalTableVersion` attribute with a value of "2019.11.21", the table is a version 2019.11.21 (Current) global table replica.

An example CLI command for `describe-table`:

```
aws dynamodb describe-table \
--table-name users \
--region us-east-2
```

The (abridged) output contains the `GlobalTableVersion` attribute with a value of "2019.11.21", so this table is a version 2019.11.21 (Current) global table replica.

```
{
    "Table": {
        "AttributeDefinitions": [
            {
                "AttributeName": "id",
                "AttributeType": "S"
            },
            {
                "AttributeName": "name",
                "AttributeType": "S"
            }
        ],
        "TableName": "users",
        ...
        "GlobalTableVersion": "2019.11.21",
        "Replicas": [
            {
                "RegionName": "us-west-2",
                "ReplicaStatus": "ACTIVE"
            }
        ],
        ...
    }
}
```
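If you script this check, a small helper (hypothetical, not part of any SDK) can pull `GlobalTableVersion` out of the parsed `describe-table` output, for example after `json.loads()` on the CLI response:

```python
import json

def global_table_version(describe_table_output):
    """Return the table's GlobalTableVersion, or None when the table
    is not a version 2019.11.21 (Current) global table replica."""
    return describe_table_output.get("Table", {}).get("GlobalTableVersion")

# Parse the (abridged) describe-table output shown above.
response = json.loads("""
{
    "Table": {
        "TableName": "users",
        "GlobalTableVersion": "2019.11.21",
        "Replicas": [{"RegionName": "us-west-2", "ReplicaStatus": "ACTIVE"}]
    }
}
""")
print(global_table_version(response))  # → 2019.11.21
```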

#### Identifying a version 2017.11.29 (Legacy) global table replica
<a name="globaltables.CLI.legacy"></a>

Global tables version 2017.11.29 (Legacy) uses a dedicated set of commands for global table management. To determine if a table is a global tables version 2017.11.29 (Legacy) replica, invoke the `describe-global-table` command for the table. If you receive a successful response, the table is a version 2017.11.29 (Legacy) global table replica. If the `describe-global-table` command returns a `GlobalTableNotFoundException` error, the table is not a version 2017.11.29 (Legacy) replica.

An example CLI command for `describe-global-table`:

```
aws dynamodb describe-global-table \
--table-name users \
--region us-east-2
```

The command returns a successful response, so this table is a version 2017.11.29 (Legacy) global table replica.

```
{
    "GlobalTableDescription": {
        "ReplicationGroup": [
            {
                "RegionName": "us-west-2"
            },
            {
                "RegionName": "us-east-2"
            }
        ],
        "GlobalTableArn": "arn:aws:dynamodb::123456789012:global-table/users",
        "CreationDateTime": "2025-06-10T13:55:53.630000-04:00",
        "GlobalTableStatus": "ACTIVE",
        "GlobalTableName": "users"
    }
}
```

### Determining the version using the DynamoDB Console
<a name="globaltables.console"></a>

To identify the version of a global table replica, perform the following:

1. Open the DynamoDB console at [https://console.aws.amazon.com/dynamodb/home](https://console.aws.amazon.com/dynamodb/home).

1. In the navigation pane on the left side of the console, choose **Tables**.

1. Choose the table you want to identify the global tables version for.

1. Choose the **Global Tables** tab.

   The *Summary* section displays the version of global tables in use.

## Differences in behavior between Legacy and Current versions
<a name="DiffLegacyVsCurrent"></a>

The following list describes the differences in behavior between the Legacy and Current versions of global tables.
+ Version 2019.11.21 (Current) consumes less write capacity than version 2017.11.29 (Legacy) for several DynamoDB operations, and is therefore more cost-effective for most customers. The differences for these DynamoDB operations are as follows:
  + Invoking [PutItem](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_PutItem.html) for a 1 KB item in a Region and replicating to other Regions requires 2 rWRUs per Region for 2017.11.29 (Legacy), but only 1 rWRU for 2019.11.21 (Current).
  + Invoking [UpdateItem](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateItem.html) for a 1 KB item requires 2 rWRUs in the source Region and 1 rWRU per destination Region for 2017.11.29 (Legacy), but only 1 rWRU in both the source and destination Regions for 2019.11.21 (Current).
  + Invoking [DeleteItem](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DeleteItem.html) for a 1 KB item requires 1 rWRU in the source Region and 2 rWRUs per destination Region for 2017.11.29 (Legacy), but only 1 rWRU in both the source and destination Regions for 2019.11.21 (Current).

  The following table shows the rWRU consumption of 2017.11.29 (Legacy) and 2019.11.21 (Current) tables for a 1 KB item in two Regions.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/V2globaltables_versions.html)
+ Version 2017.11.29 (Legacy) is available in only 11 AWS Regions. Version 2019.11.21 (Current) is available in all AWS Regions.
+ You create version 2017.11.29 (Legacy) global tables by first creating a set of empty Regional tables, and then invoking the [CreateGlobalTable](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_CreateGlobalTable.html) API to form the global table. You create version 2019.11.21 (Current) global tables by invoking the [UpdateTable](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateTable.html) API to add a replica to an existing Regional table.
+ Version 2017.11.29 (Legacy) requires all replica tables to be empty before you add a replica in a new Region (including during creation). Version 2019.11.21 (Current) lets you add and remove replicas on a table that already contains data.
+ Version 2017.11.29 (Legacy) uses the following dedicated set of control plane APIs for managing replicas:
  + [CreateGlobalTable](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_CreateGlobalTable.html)
  + [DescribeGlobalTable](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DescribeGlobalTable.html)
  + [DescribeGlobalTableSettings](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DescribeGlobalTableSettings.html)
  + [ListGlobalTables](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_ListGlobalTables.html)
  + [UpdateGlobalTable](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateGlobalTable.html)
  + [UpdateGlobalTableSettings](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateGlobalTableSettings.html)

  Version 2019.11.21 (Current) uses the [DescribeTable](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DescribeTable.html) and [UpdateTable](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateTable.html) APIs to manage replicas.
+ Version 2017.11.29 (Legacy) publishes two DynamoDB Streams records for each write. Version 2019.11.21 (Current) publishes only one DynamoDB Streams record for each write.
+ Version 2017.11.29 (Legacy) populates and updates the `aws:rep:deleting`, `aws:rep:updateregion`, and `aws:rep:updatetime` attributes. Version 2019.11.21 (Current) does not populate or update these attributes.
+ Version 2017.11.29 (Legacy) does not synchronize [time to live (TTL)](TTL.md) settings across replicas. Version 2019.11.21 (Current) synchronizes TTL settings across replicas.
+ Version 2017.11.29 (Legacy) does not replicate TTL deletes to other replicas. Version 2019.11.21 (Current) replicates TTL deletes to all replicas.
+ Version 2017.11.29 (Legacy) does not synchronize [auto scaling](AutoScaling.md) settings across replicas. Version 2019.11.21 (Current) synchronizes auto scaling settings across replicas.
+ Version 2017.11.29 (Legacy) does not synchronize [global secondary index (GSI)](GSI.md) settings across replicas. Version 2019.11.21 (Current) synchronizes GSI settings across replicas.
+ Version 2017.11.29 (Legacy) does not synchronize [encryption at rest](encryption.usagenotes.md) settings across replicas. Version 2019.11.21 (Current) synchronizes encryption at rest settings across replicas.
+ Version 2017.11.29 (Legacy) publishes the `PendingReplicationCount` metric. Version 2019.11.21 (Current) does not publish this metric.
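As a sketch of the per-operation cost differences listed above, the following Python (with illustrative function names) computes the rWRUs consumed to write one 1 KB item across a two-Region global table under each version:

```python
# rWRUs to replicate one 1 KB write across a two-Region global table,
# per the per-operation differences listed above.
REGIONS = 2

def legacy_rwrus(op):
    # Version 2017.11.29 (Legacy)
    if op == "PutItem":
        return 2 * REGIONS               # 2 rWRUs in every Region
    if op == "UpdateItem":
        return 2 + 1 * (REGIONS - 1)     # 2 in the source, 1 per destination
    if op == "DeleteItem":
        return 1 + 2 * (REGIONS - 1)     # 1 in the source, 2 per destination
    raise ValueError(op)

def current_rwrus(op):
    # Version 2019.11.21 (Current): 1 rWRU per Region for every operation
    return 1 * REGIONS

for op in ("PutItem", "UpdateItem", "DeleteItem"):
    print(op, legacy_rwrus(op), current_rwrus(op))
```

For the two-Region case this prints 4 vs. 2 for `PutItem` and 3 vs. 2 for `UpdateItem` and `DeleteItem`, which is why the Current version is cheaper for most workloads.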

## Upgrading to the current version
<a name="upgrading-to-current-version"></a>

### Required permissions for global tables upgrade
<a name="V2globaltables_versions.Notes-permissions"></a>

To upgrade to version 2019.11.21 (Current), you must have `dynamodb:UpdateGlobalTableVersion` permissions in all Regions with replicas. These permissions are required in addition to the permissions needed for accessing the DynamoDB console and viewing tables.

The following IAM policy grants permissions to upgrade any global table to version 2019.11.21 (Current).

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "dynamodb:UpdateGlobalTableVersion",
            "Resource": "*"
        }
    ]
}
```

The following IAM policy grants permissions to upgrade only the `Music` global table with replicas in two Regions to version 2019.11.21 (Current).

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "dynamodb:UpdateGlobalTableVersion",
            "Resource": [
                "arn:aws:dynamodb::123456789012:global-table/Music",
                "arn:aws:dynamodb:ap-southeast-1:123456789012:table/Music",
                "arn:aws:dynamodb:us-east-2:123456789012:table/Music"
            ]
        }
    ]
}
```

### What to expect during the upgrade
<a name="V2GlobalTablesUpgradeExpectations"></a>
+ All global table replicas will continue to process read and write traffic while upgrading.
+ The upgrade process takes anywhere from a few minutes to several hours, depending on the table size and number of replicas.
+ During the upgrade process, the value of [TableStatus](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_TableDescription.html#DDB-Type-TableDescription-TableStatus) will change from `ACTIVE` to `UPDATING`. You can view the status of the table by invoking the [DescribeTable](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DescribeTable.html) API, or with the **Tables** view in the DynamoDB console.
+ Auto scaling will not adjust the provisioned capacity settings for a global table while the table is being upgraded. We strongly recommend that you set the table to [on-demand](capacity-mode.md#capacity-mode-on-demand) capacity mode during the upgrade.
+ If you choose to use [provisioned](provisioned-capacity-mode.md) capacity mode with auto scaling during the upgrade, you must increase the minimum read and write throughput on your policies to accommodate any expected increases in traffic to avoid throttling during the upgrade.
+ The `ReplicationLatency` metric can temporarily report latency spikes or stop reporting metric data during the upgrade process. For more information, see [ReplicationLatency](metrics-dimensions.md#ReplicationLatency).
+ When the upgrade process is complete, your table status will change to `ACTIVE`.

### DynamoDB Streams behavior before, during, and after upgrade
<a name="V2GlobalTablesUpgradeDDBStreamsBehavior"></a>

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/V2globaltables_versions.html)

### Upgrading to version 2019.11.21 (Current)
<a name="V2globaltables_versions.upgrade"></a>

Perform the following steps to upgrade your version of DynamoDB global tables using the AWS Management Console.

**To upgrade global tables to version 2019.11.21 (Current)**

1. Open the DynamoDB console at [https://console.aws.amazon.com/dynamodb/home](https://console.aws.amazon.com/dynamodb/home). 

1. In the navigation pane on the left side of the console, choose **Tables**, and then select the global table that you want to upgrade to version 2019.11.21 (Current). 

1. Choose the **Global Tables** tab.

1. Choose **Update version**.  
![\[Console screenshot showing the Update version button.\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/images/GlobalTables-upgrade.png)

1. Read and agree to the new requirements, and then choose **Update version**.

1. After the upgrade process is complete, the global tables version that appears on the console changes to **2019.11.21**.

# Best practices for global tables
<a name="globaltables-bestpractices"></a>

The following sections describe best practices for deploying and using global tables.

## Version
<a name="globaltables-bestpractices-version"></a>

There are two versions of DynamoDB global tables available: version 2019.11.21 (Current) and [version 2017.11.29 (Legacy)](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/globaltables.V1.html). You should use version 2019.11.21 (Current) whenever possible. 

## Deletion protection
<a name="globaltables-bestpractices-deletionprotection"></a>

You should enable deletion protection on global table replicas you want protected against accidental deletion. You must enable deletion protection on each replica.

## Using AWS CloudFormation
<a name="globaltables-bestpractices-cloudformation"></a>

CloudFormation does not currently support the coordination of multi-Region resources like global tables across stacks. If you define each replica of a global table in a separate Regional stack, you will encounter errors due to detected drift across stacks when performing replica updates. To avoid this issue, you should choose one Region as the reference Region for deploying your global tables and define all of your global table's replicas in that Region's stack.

**Important**  
You cannot convert a resource of type `AWS::DynamoDB::Table` into a resource of type `AWS::DynamoDB::GlobalTable` by changing its type in your template. Attempting to convert a single-Region table to a global table by changing its CloudFormation resource type may result in the deletion of your DynamoDB table.

You can use the `AWS::DynamoDB::GlobalTable` resource to create a table in a single Region. This table will be deployed like any other single-Region table. If you later update the stack to add other Regions to a resource, replicas will be added to the table and it will safely be converted to a global table.

If you have an existing `AWS::DynamoDB::Table` resource that you want to convert to an `AWS::DynamoDB::GlobalTable` resource, the recommended steps are:

1. Set the `AWS::DynamoDB::Table` resource's `DeletionPolicy` to `Retain`.

1. Remove the table from the stack definition.

1. Add replicas to the single-Region table in the AWS console, converting it to a global table.

1. Import the new global table as a new `AWS::DynamoDB::GlobalTable` resource to the stack.
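For illustration, a minimal `AWS::DynamoDB::GlobalTable` resource might look like the following template fragment. The table name, key schema, and Regions are placeholders; `DeletionPolicy: Retain` guards against accidental deletion during stack changes, and a `StreamSpecification` with `NEW_AND_OLD_IMAGES` is needed once the table has more than one replica:

```json
{
  "Resources": {
    "Music": {
      "Type": "AWS::DynamoDB::GlobalTable",
      "DeletionPolicy": "Retain",
      "Properties": {
        "TableName": "Music",
        "BillingMode": "PAY_PER_REQUEST",
        "AttributeDefinitions": [
          { "AttributeName": "Artist", "AttributeType": "S" }
        ],
        "KeySchema": [
          { "AttributeName": "Artist", "KeyType": "HASH" }
        ],
        "StreamSpecification": { "StreamViewType": "NEW_AND_OLD_IMAGES" },
        "Replicas": [
          { "Region": "us-west-2" },
          { "Region": "us-east-2" }
        ]
      }
    }
  }
}
```

If the `Replicas` list initially contains a single Region, the resource deploys like an ordinary single-Region table; adding more Regions to the list later converts it safely to a global table.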

## Backups and Point-in-Time Recovery
<a name="globaltables-bestpractices-backups"></a>

Enabling automated backups and Point-in-Time Recovery (PITR) for one replica in a global table may be sufficient to meet your disaster recovery objectives. Replica backups created with AWS Backup can be automatically replicated across Regions for greater resilience. Consider your disaster recovery plan goals in the context of multi-Region high availability when choosing your backup and PITR enablement strategy.

## Designing for multi-Region high availability
<a name="globaltables-bestpractices-multiregion"></a>

For prescriptive guidance on deploying global tables, see [Best Practices for DynamoDB global table design](bp-global-table-design.md).

# Working with items and attributes in DynamoDB
<a name="WorkingWithItems"></a>

In Amazon DynamoDB, an *item* is a collection of attributes. Each attribute has a name and a value. An attribute value can be a scalar, a set, or a document type. For more information, see [Amazon DynamoDB: How it works](HowItWorks.md).

DynamoDB provides four operations for basic create, read, update, and delete (CRUD) functionality. All these operations are atomic.
+ `PutItem` — Create an item.
+ `GetItem` — Read an item.
+ `UpdateItem` — Update an item.
+ `DeleteItem` — Delete an item.

Each of these operations requires that you specify the primary key of the item that you want to work with. For example, to read an item using `GetItem`, you must specify the partition key and sort key (if applicable) for that item.

In addition to the four basic CRUD operations, DynamoDB also provides the following:
+ `BatchGetItem` — Read up to 100 items from one or more tables.
+ `BatchWriteItem` — Create or delete up to 25 items in one or more tables.

These batch operations combine multiple CRUD operations into a single request. In addition, the batch operations read and write items in parallel to minimize response latencies.
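For example, because `BatchWriteItem` accepts at most 25 put or delete requests per call, client code typically splits a larger workload into compliant batches. A minimal Python sketch (the helper name is illustrative):

```python
def batch_write_chunks(requests, max_batch=25):
    """BatchWriteItem accepts at most 25 put/delete requests per call,
    so split a larger workload into compliant batches."""
    return [requests[i:i + max_batch] for i in range(0, len(requests), max_batch)]

# 60 hypothetical put requests become three batches: 25, 25, and 10.
requests = [{"PutRequest": {"Item": {"Id": {"N": str(n)}}}} for n in range(60)]
batches = batch_write_chunks(requests)
print([len(b) for b in batches])  # → [25, 25, 10]
```

Each batch could then be passed as the `RequestItems` entry for one `BatchWriteItem` call; production code would also retry any `UnprocessedItems` the call returns.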

This section describes how to use these operations and includes related topics, such as conditional updates and atomic counters. This section also includes example code that uses the AWS SDKs. 

**Topics**
+ [DynamoDB item sizes and formats](CapacityUnitCalculations.md)
+ [Reading an item](#WorkingWithItems.ReadingData)
+ [Writing an item](#WorkingWithItems.WritingData)
+ [Return values](#WorkingWithItems.ReturnValues)
+ [Batch operations](#WorkingWithItems.BatchOperations)
+ [Atomic counters](#WorkingWithItems.AtomicCounters)
+ [Conditional writes](#WorkingWithItems.ConditionalUpdate)
+ [Using expressions in DynamoDB](Expressions.md)
+ [Using time to live (TTL) in DynamoDB](TTL.md)
+ [Querying tables in DynamoDB](Query.md)
+ [Scanning tables in DynamoDB](Scan.md)
+ [PartiQL - a SQL-compatible query language for Amazon DynamoDB](ql-reference.md)
+ [Working with items: Java](JavaDocumentAPIItemCRUD.md)
+ [Working with items: .NET](LowLevelDotNetItemCRUD.md)

# DynamoDB item sizes and formats
<a name="CapacityUnitCalculations"></a>

DynamoDB tables are schemaless, except for the primary key, so the items in a table can all have different attributes, sizes, and data types.

The total size of an item is the sum of the lengths of its attribute names and values, plus any applicable overhead as described below. You can use the following guidelines to estimate attribute sizes:
+ Strings are Unicode with UTF-8 binary encoding. The size of a string is *(number of UTF-8-encoded bytes of attribute name) + (number of UTF-8-encoded bytes of the value)*.
+ Numbers are variable length, with up to 38 significant digits. Leading and trailing zeroes are trimmed. The size of a number is approximately *(number of UTF-8-encoded bytes of attribute name) + (1 byte per two significant digits) + (1 byte)*.
+ A binary value must be encoded in base64 format before it can be sent to DynamoDB, but the value's raw byte length is used for calculating size. The size of a binary attribute is *(number of UTF-8-encoded bytes of attribute name) + (number of raw bytes)*.
+ The size of a null attribute or a Boolean attribute is *(number of UTF-8-encoded bytes of attribute name) + (1 byte)*.
+ An attribute of type `List` or `Map` requires 3 bytes of overhead, regardless of its contents. The size of a `List` or `Map` is *(number of UTF-8-encoded bytes of attribute name) + sum (size of nested elements) + (3 bytes)*. The size of an empty `List` or `Map` is *(number of UTF-8-encoded bytes of attribute name) + (3 bytes)*.
+ Each `List` or `Map` element also requires 1 byte of overhead.

**Note**  
We recommend that you choose shorter attribute names rather than long ones. This not only reduces the amount of storage required, but can also lower the number of RCUs and WCUs you consume.

For storage billing purposes, each item includes a per-item storage overhead that depends on the features you have enabled.
+ All items in DynamoDB require 100 bytes of storage overhead for indexing.
+ Some DynamoDB features (global tables, transactions, change data capture for Kinesis Data Streams with DynamoDB) require additional storage overhead to account for system-created attributes resulting from enabling those features. For example, global tables require an additional 48 bytes of storage overhead.
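The sizing guidelines above can be approximated in code. The following Python sketch (illustrative, covering only scalar attributes) estimates attribute and item sizes:

```python
import math

def attribute_size(name, value):
    """Approximate stored size of one scalar attribute, following the
    guidelines above. Handles strings, Booleans, None, and integers only."""
    name_bytes = len(name.encode("utf-8"))
    if isinstance(value, str):
        return name_bytes + len(value.encode("utf-8"))
    if isinstance(value, bool) or value is None:
        return name_bytes + 1                      # 1 byte for Boolean or null
    if isinstance(value, int):
        digits = len(str(abs(value)).strip("0")) or 1  # significant digits
        return name_bytes + math.ceil(digits / 2) + 1  # ~1 byte per 2 digits, +1
    raise TypeError(f"unsupported type: {type(value)!r}")

def item_size(item):
    """Sum of the attribute sizes (per-item storage overhead not included)."""
    return sum(attribute_size(k, v) for k, v in item.items())

# "Id" (2 bytes) + "user-001" (8 bytes) + "Active" (6 bytes) + 1 byte = 17
print(item_size({"Id": "user-001", "Active": True}))  # → 17
```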

## Reading an item
<a name="WorkingWithItems.ReadingData"></a>

To read an item from a DynamoDB table, use the `GetItem` operation. You must provide the name of the table, along with the primary key of the item you want.

**Example**  
The following AWS CLI example shows how to read an item from the `ProductCatalog` table.  

```
aws dynamodb get-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"1"}}'
```

**Note**  
With `GetItem`, you must specify the *entire* primary key, not just part of it. For example, if a table has a composite primary key (partition key and sort key), you must supply a value for the partition key and a value for the sort key.

A `GetItem` request performs an eventually consistent read by default. You can use the `ConsistentRead` parameter to request a strongly consistent read instead. (This consumes additional read capacity units, but it returns the most up-to-date version of the item.)

`GetItem` returns all of the item's attributes. You can use a *projection expression* to return only some of the attributes. For more information, see [Using projection expressions in DynamoDB](Expressions.ProjectionExpressions.md).

To return the number of read capacity units consumed by `GetItem`, set the `ReturnConsumedCapacity` parameter to `TOTAL`.

**Example**  
The following AWS Command Line Interface (AWS CLI) example shows some of the optional `GetItem` parameters.  

```
aws dynamodb get-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"1"}}' \
    --consistent-read \
    --projection-expression "Description, Price, RelatedItems" \
    --return-consumed-capacity TOTAL
```

## Writing an item
<a name="WorkingWithItems.WritingData"></a>

To create, update, or delete an item in a DynamoDB table, use one of the following operations:
+ `PutItem`
+ `UpdateItem`
+ `DeleteItem`

For each of these operations, you must specify the entire primary key, not just part of it. For example, if a table has a composite primary key (partition key and sort key), you must provide a value for the partition key and a value for the sort key.

To return the number of write capacity units consumed by any of these operations, set the `ReturnConsumedCapacity` parameter to one of the following: 
+ `TOTAL` — Returns the total number of write capacity units consumed.
+ `INDEXES` — Returns the total number of write capacity units consumed, with subtotals for the table and any secondary indexes that were affected by the operation.
+ `NONE` — No write capacity details are returned. (This is the default.)

### PutItem
<a name="WorkingWithItems.WritingData.PutItem"></a>

`PutItem` creates a new item. If an item with the same key already exists in the table, it is replaced with the new item.

**Example**  
Write a new item to the `Thread` table. The primary key for `Thread` consists of `ForumName` (partition key) and `Subject` (sort key).  

```
aws dynamodb put-item \
    --table-name Thread \
    --item file://item.json
```
The arguments for `--item` are stored in the `item.json` file.  

```
{
    "ForumName": {"S": "Amazon DynamoDB"},
    "Subject": {"S": "New discussion thread"},
    "Message": {"S": "First post in this thread"},
    "LastPostedBy": {"S": "fred@example.com"},
    "LastPostDateTime": {"S": "201603190422"}
}
```

### UpdateItem
<a name="WorkingWithItems.WritingData.UpdateItem"></a>

If an item with the specified key does not exist, `UpdateItem` creates a new item. Otherwise, it modifies an existing item's attributes.

You use an *update expression* to specify the attributes that you want to modify and their new values. For more information, see [Using update expressions in DynamoDB](Expressions.UpdateExpressions.md). 

Within the update expression, you use expression attribute values as placeholders for the actual values. For more information, see [Using expression attribute values in DynamoDB](Expressions.ExpressionAttributeValues.md).

**Example**  
Modify various attributes in the `Thread` item. The optional `ReturnValues` parameter shows the item as it appears after the update. For more information, see [Return values](#WorkingWithItems.ReturnValues).  

```
aws dynamodb update-item \
    --table-name Thread \
    --key file://key.json \
    --update-expression "SET Answered = :zero, Replies = :zero, LastPostedBy = :lastpostedby" \
    --expression-attribute-values file://expression-attribute-values.json \
    --return-values ALL_NEW
```

The arguments for `--key` are stored in the `key.json` file.

```
{
    "ForumName": {"S": "Amazon DynamoDB"},
    "Subject": {"S": "New discussion thread"}
}
```

The arguments for `--expression-attribute-values` are stored in the `expression-attribute-values.json` file.

```
{
    ":zero": {"N":"0"},
    ":lastpostedby": {"S":"barney@example.com"}
}
```

### DeleteItem
<a name="WorkingWithItems.WritingData.DeleteItem"></a>

`DeleteItem` deletes the item with the specified key.

**Example**  
The following AWS CLI example shows how to delete the `Thread` item.  

```
aws dynamodb delete-item \
    --table-name Thread \
    --key file://key.json
```

## Return values
<a name="WorkingWithItems.ReturnValues"></a>

In some cases, you might want DynamoDB to return certain attribute values as they appeared before or after you modified them. The `PutItem`, `UpdateItem`, and `DeleteItem` operations have a `ReturnValues` parameter that you can use to return the attribute values before or after they are modified.

The default value for `ReturnValues` is `NONE`, meaning that DynamoDB does not return any information about attributes that were modified. 

The following are the other valid settings for `ReturnValues`, organized by DynamoDB API operation.

### PutItem
<a name="WorkingWithItems.ReturnValues.PutItem"></a>
+ `ReturnValues`: `ALL_OLD`
  + If you overwrite an existing item, `ALL_OLD` returns the entire item as it appeared before the overwrite.
  + If you write a nonexistent item, `ALL_OLD` has no effect.

### UpdateItem
<a name="WorkingWithItems.ReturnValues.UpdateItem"></a>

The most common usage for `UpdateItem` is to update an existing item. However, `UpdateItem` actually performs an *upsert*, meaning that it automatically creates the item if it doesn't already exist.
+ `ReturnValues`: `ALL_OLD`
  + If you update an existing item, `ALL_OLD` returns the entire item as it appeared before the update.
  + If you update a nonexistent item (upsert), `ALL_OLD` has no effect.
+ `ReturnValues`: `ALL_NEW`
  + If you update an existing item, `ALL_NEW` returns the entire item as it appeared after the update.
  + If you update a nonexistent item (upsert), `ALL_NEW` returns the entire item.
+ `ReturnValues`: `UPDATED_OLD`
  + If you update an existing item, `UPDATED_OLD` returns only the updated attributes, as they appeared before the update.
  + If you update a nonexistent item (upsert), `UPDATED_OLD` has no effect.
+ `ReturnValues`: `UPDATED_NEW`
  + If you update an existing item, `UPDATED_NEW` returns only the affected attributes, as they appeared after the update.
  + If you update a nonexistent item (upsert), `UPDATED_NEW` returns only the updated attributes, as they appear after the update.

### DeleteItem
<a name="WorkingWithItems.ReturnValues.DeleteItem"></a>
+ `ReturnValues`: `ALL_OLD`
  + If you delete an existing item, `ALL_OLD` returns the entire item as it appeared before you deleted it.
  + If you delete a nonexistent item, `ALL_OLD` doesn't return any data.
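
The settings above can be summarized with a small local model of the `UpdateItem` semantics (plain Python, no AWS calls; the function below is an illustrative sketch, not SDK code):

```python
# Local sketch of UpdateItem's ReturnValues settings, modeling the
# behavior described above.

def update_item(table, key, updates, return_values="NONE"):
    old = dict(table.get(key, {}))
    new = {**old, **updates}
    table[key] = new
    if return_values == "ALL_OLD":
        return old or None          # nothing to return on an upsert
    if return_values == "ALL_NEW":
        return new
    if return_values == "UPDATED_OLD":
        return {k: old[k] for k in updates if k in old} or None
    if return_values == "UPDATED_NEW":
        return {k: new[k] for k in updates}
    return None                     # NONE (the default)

table = {("Forum", "Thread"): {"Replies": 5, "Views": 100}}
before = update_item(table, ("Forum", "Thread"), {"Replies": 6}, "UPDATED_OLD")
after = update_item(table, ("Forum", "Thread"), {"Replies": 7}, "UPDATED_NEW")
print(before, after)  # {'Replies': 5} {'Replies': 7}
```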

## Batch operations
<a name="WorkingWithItems.BatchOperations"></a>

For applications that need to read or write multiple items, DynamoDB provides the `BatchGetItem` and `BatchWriteItem` operations. Using these operations can reduce the number of network round trips from your application to DynamoDB. In addition, DynamoDB performs the individual read or write operations in parallel. Your applications benefit from this parallelism without having to manage concurrency or threading.

The batch operations are essentially wrappers around multiple read or write requests. For example, if a `BatchGetItem` request contains five items, DynamoDB performs five `GetItem` operations on your behalf. Similarly, if a `BatchWriteItem` request contains two put requests and four delete requests, DynamoDB performs two `PutItem` and four `DeleteItem` requests.

In general, a batch operation does not fail unless *all* the requests in the batch fail. For example, suppose that you perform a `BatchGetItem` operation, but one of the individual `GetItem` requests in the batch fails. In this case, `BatchGetItem` returns the data for the requests that succeeded, along with the keys of the failed request in the `UnprocessedKeys` element of the response so that you can retry it. The other `GetItem` requests in the batch are not affected.

### BatchGetItem
<a name="WorkingWithItems.BatchOperations.BatchGetItem"></a>

A single `BatchGetItem` operation can contain up to 100 individual `GetItem` requests and can retrieve up to 16 MB of data. In addition, a `BatchGetItem` operation can retrieve items from multiple tables.

**Example**  
Retrieve two items from the `Thread` table, using a projection expression to return only some of the attributes.  

```
aws dynamodb batch-get-item \
    --request-items file://request-items.json
```
The arguments for `--request-items` are stored in the `request-items.json` file.  

```
{
    "Thread": {
        "Keys": [
            {
                "ForumName":{"S": "Amazon DynamoDB"},
                "Subject":{"S": "DynamoDB Thread 1"}
            },
            {
                "ForumName":{"S": "Amazon S3"},
                "Subject":{"S": "S3 Thread 1"}
            }
        ],
        "ProjectionExpression":"ForumName, Subject, LastPostedDateTime, Replies"
    }
}
```

### BatchWriteItem
<a name="WorkingWithItems.BatchOperations.BatchWriteItem"></a>

The `BatchWriteItem` operation can contain up to 25 individual `PutItem` and `DeleteItem` requests and can write up to 16 MB of data. (The maximum size of an individual item is 400 KB.) In addition, a `BatchWriteItem` operation can put or delete items in multiple tables. 

**Note**  
`BatchWriteItem` does not support `UpdateItem` requests.

**Example**  
Write two items to the `ProductCatalog` table.  

```
aws dynamodb batch-write-item \
    --request-items file://request-items.json
```
The arguments for `--request-items` are stored in the `request-items.json` file.  

```
{
    "ProductCatalog": [
        {
            "PutRequest": {
                "Item": {
                    "Id": { "N": "601" },
                    "Description": { "S": "Snowboard" },
                    "QuantityOnHand": { "N": "5" },
                    "Price": { "N": "100" }
                }
            }
        },
        {
            "PutRequest": {
                "Item": {
                    "Id": { "N": "602" },
                    "Description": { "S": "Snow shovel" }
                }
            }
        }
    ]
}
```
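
Because each `BatchWriteItem` call accepts at most 25 requests and may return unfulfilled requests in the `UnprocessedItems` response element, client code typically chunks its work and retries the remainder. A minimal Python sketch of that loop, with a stub standing in for the real API call:

```python
# Sketch of the standard BatchWriteItem loop: split requests into chunks
# of at most 25 and resubmit anything reported as unprocessed.
# fake_batch_write is a stub for the real API call; it pretends the last
# request of every full batch was throttled, to exercise the retry path.

MAX_BATCH = 25

def fake_batch_write(requests):
    if len(requests) == MAX_BATCH:
        return requests[-1:]   # "unprocessed" request to retry
    return []

def write_all(requests):
    pending = list(requests)
    calls = 0
    while pending:
        chunk, pending = pending[:MAX_BATCH], pending[MAX_BATCH:]
        calls += 1
        pending = fake_batch_write(chunk) + pending  # retry unprocessed first
    return calls

print(write_all([{"PutRequest": {"Item": {"Id": i}}} for i in range(60)]))  # 3
```

In production code, you would also apply exponential backoff between retries, as recommended for throttled requests.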

## Atomic counters
<a name="WorkingWithItems.AtomicCounters"></a>

You can use the `UpdateItem` operation to implement an *atomic counter*—a numeric attribute that is incremented, unconditionally, without interfering with other write requests. (All write requests are applied in the order in which they were received.) With an atomic counter, the updates are not idempotent. In other words, the numeric value increments or decrements each time you call `UpdateItem`. If the increment value used to update the atomic counter is positive, then it can cause overcounting. If the increment value is negative, then it can cause undercounting.

You might use an atomic counter to track the number of visitors to a website. In this case, your application would increment a numeric value, regardless of its current value. If an `UpdateItem` operation fails, the application could simply retry the operation. This would risk updating the counter twice, but you could probably tolerate a slight overcounting or undercounting of website visitors.

An atomic counter would not be appropriate where overcounting or undercounting can't be tolerated (for example, in a banking application). In this case, it is safer to use a conditional update instead of an atomic counter.

For more information, see [Incrementing and decrementing numeric attributes](Expressions.UpdateExpressions.md#Expressions.UpdateExpressions.SET.IncrementAndDecrement).

**Example**  
The following AWS CLI example increments the `Price` of a product by 5. For this example, the item was known to exist before the counter is updated. Because `UpdateItem` is not idempotent, the `Price` increases every time you run this code.   

```
aws dynamodb update-item \
    --table-name ProductCatalog \
    --key '{"Id": { "N": "601" }}' \
    --update-expression "SET Price = Price + :incr" \
    --expression-attribute-values '{":incr":{"N":"5"}}' \
    --return-values UPDATED_NEW
```
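
To see how a lost response leads to overcounting, consider the following small Python simulation (local state only; `update_increment` is an illustrative stand-in for `UpdateItem`, not an AWS call):

```python
# Simulation of the overcounting risk described above. The "server" state
# is a local dict; the first call is applied but its response is lost, so
# a naive client retries and the increment runs twice.

counter = {"Price": 100}

def update_increment(incr, lose_response=False):
    counter["Price"] += incr     # the write is applied server-side
    if lose_response:
        raise TimeoutError("response lost in transit")

try:
    update_increment(5, lose_response=True)  # applied, but reply never arrives
except TimeoutError:
    update_increment(5)                      # blind retry applies it again

print(counter["Price"])  # 110 -- overcounted; the intended result was 105
```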

## Conditional writes
<a name="WorkingWithItems.ConditionalUpdate"></a>

By default, the DynamoDB write operations (`PutItem`, `UpdateItem`, `DeleteItem`) are *unconditional*: Each operation overwrites an existing item that has the specified primary key.

DynamoDB optionally supports conditional writes for these operations. A conditional write succeeds only if the item attributes meet one or more expected conditions. Otherwise, it returns an error.

Conditional writes check their conditions against the most recently updated version of the item. Note that if the item did not previously exist or if the most recent successful operation against that item was a delete, then the conditional write will find no previous item.

 Conditional writes are helpful in many situations. For example, you might want a `PutItem` operation to succeed only if there is not already an item with the same primary key. Or you could prevent an `UpdateItem` operation from modifying an item if one of its attributes has a certain value.

Conditional writes are helpful in cases where multiple users attempt to modify the same item. Consider the following diagram, in which two users (Alice and Bob) are working with the same item from a DynamoDB table.

![\[Users Alice and Bob attempt to modify an item with Id 1, demonstrating the need for conditional writes.\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/images/update-no-condition.png)


Suppose that Alice uses the AWS CLI to update the `Price` attribute to 8.

```
aws dynamodb update-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"1"}}' \
    --update-expression "SET Price = :newval" \
    --expression-attribute-values file://expression-attribute-values.json
```

The arguments for `--expression-attribute-values` are stored in the file `expression-attribute-values.json`:

```
{
    ":newval":{"N":"8"}
}
```

Now suppose that Bob issues a similar `UpdateItem` request later, but changes the `Price` to 12. For Bob, the `--expression-attribute-values` parameter looks like the following.

```
{
    ":newval":{"N":"12"}
}
```

Bob's request succeeds, but Alice's earlier update is lost.

To request a conditional `PutItem`, `DeleteItem`, or `UpdateItem`, you specify a condition expression. A *condition expression* is a string containing attribute names, conditional operators, and built-in functions. The entire expression must evaluate to true. Otherwise, the operation fails.

Now consider the following diagram, showing how conditional writes would prevent Alice's update from being overwritten.

![\[Conditional write preventing user Bob’s update from overwriting user Alice’s change to the same item.\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/images/update-yes-condition.png)


Alice first tries to update `Price` to 8, but only if the current `Price` is 10.

```
aws dynamodb update-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"1"}}' \
    --update-expression "SET Price = :newval" \
    --condition-expression "Price = :currval" \
    --expression-attribute-values file://expression-attribute-values.json
```

The arguments for `--expression-attribute-values` are stored in the `expression-attribute-values.json` file.

```
{
    ":newval":{"N":"8"},
    ":currval":{"N":"10"}
}
```

Alice's update succeeds because the condition evaluates to true.

Next, Bob attempts to update the `Price` to 12, but only if the current `Price` is 10. For Bob, the `--expression-attribute-values` parameter looks like the following.

```
{
    ":newval":{"N":"12"},
    ":currval":{"N":"10"}
}
```

Because Alice has previously changed the `Price` to 8, the condition expression evaluates to false, and Bob's update fails.
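
The check-and-set behavior in this scenario can be modeled in a few lines of Python (local state only; `conditional_update` is an illustrative stand-in for a conditional `UpdateItem`):

```python
# Local model of the conditional writes above: an update is applied only
# if the condition (Price equals the value the writer last read) still
# holds, which is what prevents Bob from clobbering Alice's change.

item = {"Id": 1, "Price": 10}

def conditional_update(new_price, expected_current):
    if item["Price"] != expected_current:
        raise RuntimeError("ConditionalCheckFailedException")
    item["Price"] = new_price

conditional_update(8, expected_current=10)       # Alice: succeeds
try:
    conditional_update(12, expected_current=10)  # Bob: condition now false
except RuntimeError as err:
    print(err)                                   # ConditionalCheckFailedException
print(item["Price"])  # 8 -- Alice's update is preserved
```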

For more information, see [DynamoDB condition expression CLI example](Expressions.ConditionExpressions.md).

### Conditional write idempotence
<a name="WorkingWithItems.ConditionalWrites.Idempotence"></a>

Conditional writes can be *idempotent* if the conditional check is on the same attribute that is being updated. This means that DynamoDB performs a given write request only if certain attribute values in the item match what you expect them to be at the time of the request. 

For example, suppose that you issue an `UpdateItem` request to increase the `Price` of an item by 3, but only if the `Price` is currently 20. After you send the request, but before you get the results back, a network error occurs, and you don't know whether the request was successful. Because this conditional write is idempotent, you can retry the same `UpdateItem` request, and DynamoDB updates the item only if the `Price` is currently 20.
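
This retry behavior can be sketched in Python as follows (local state only; `conditional_increment` is an illustrative stand-in for a conditional `UpdateItem`):

```python
# Simulation of the idempotent retry described above: the condition checks
# the same attribute being updated, so re-sending the request after an
# ambiguous failure cannot apply the increment twice.

item = {"Price": 20}

def conditional_increment(expected, incr):
    if item["Price"] != expected:
        return "ConditionalCheckFailed"   # already applied; retry is harmless
    item["Price"] += incr
    return "OK"

first = conditional_increment(20, 3)   # applied; suppose the response was lost
retry = conditional_increment(20, 3)   # safe to resend the identical request
print(first, retry, item["Price"])     # OK ConditionalCheckFailed 23
```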

### Capacity units consumed by conditional writes
<a name="WorkingWithItems.ConditionalWrites.ReturnConsumedCapacity"></a>

If a `ConditionExpression` evaluates to false during a conditional write, DynamoDB still consumes write capacity from the table. The amount consumed depends on the size of the existing item (with a minimum of one write capacity unit). For example, if the existing item is 300 KB and the new item you are trying to create or update is 310 KB, the operation consumes 300 write capacity units if the condition fails, and 310 if the condition succeeds. If there is no existing item, the operation consumes 1 write capacity unit if the condition fails, and 310 if the condition succeeds.

**Note**  
Write operations consume *write* capacity units only. They never consume *read* capacity units.

A failed conditional write returns a `ConditionalCheckFailedException`. When this occurs, the response does not include any information about the write capacity that was consumed.

To return the number of write capacity units consumed during a conditional write, you use the `ReturnConsumedCapacity` parameter:
+ `TOTAL` — Returns the total number of write capacity units consumed.
+ `INDEXES` — Returns the total number of write capacity units consumed, with subtotals for the table and any secondary indexes that were affected by the operation.
+ `NONE` — No write capacity details are returned. (This is the default.)

**Note**  
Unlike a global secondary index, a local secondary index shares its provisioned throughput capacity with its table. Read and write activity on a local secondary index consumes provisioned throughput capacity from the table.

# Using expressions in DynamoDB
<a name="Expressions"></a>

In Amazon DynamoDB, you can use *expressions* to specify which attributes to read from an item, to write data only when a condition is met, to specify how to update an item, and to define queries and filter their results.

This table describes the basic expression grammar and the available kinds of expressions.


| Expression type | Description | 
| --- | --- | 
| Projection expression | A projection expression identifies the attributes that you want to retrieve from an item when you use operations such as GetItem, Query, or Scan. | 
| Condition expression | A condition expression determines which items should be modified when you use the PutItem, UpdateItem, and DeleteItem operations. | 
| Update expression | An update expression specifies how UpdateItem will modify the attributes of an item— for example, setting a scalar value or removing elements from a list or a map. | 
| Key condition expression | A key condition expression determines which items a query will read from a table or index. | 
| Filter expression | A filter expression determines which items among the Query or Scan results should be returned to you. All the other results are discarded. | 

For information about expression syntax and more detailed information about each type of expression, see the following sections.

**Topics**
+ [Referring to item attributes when using expressions in DynamoDB](Expressions.Attributes.md)
+ [Expression attribute names (aliases) in DynamoDB](Expressions.ExpressionAttributeNames.md)
+ [Using expression attribute values in DynamoDB](Expressions.ExpressionAttributeValues.md)
+ [Using projection expressions in DynamoDB](Expressions.ProjectionExpressions.md)
+ [Using update expressions in DynamoDB](Expressions.UpdateExpressions.md)
+ [Condition and filter expressions, operators, and functions in DynamoDB](Expressions.OperatorsAndFunctions.md)
+ [DynamoDB condition expression CLI example](Expressions.ConditionExpressions.md)

**Note**  
For backward compatibility, DynamoDB also supports conditional parameters that do not use expressions. For more information, see [Legacy DynamoDB conditional parameters](LegacyConditionalParameters.md).  
New applications should use expressions rather than the legacy parameters.

# Referring to item attributes when using expressions in DynamoDB
<a name="Expressions.Attributes"></a>

This section describes how to refer to item attributes in an expression in Amazon DynamoDB. You can work with any attribute, even if it is deeply nested within multiple lists and maps.

**Topics**
+ [Top-level attributes](#Expressions.Attributes.TopLevelAttributes)
+ [Nested attributes](#Expressions.Attributes.NestedAttributes)
+ [Document paths](#Expressions.Attributes.NestedElements.DocumentPathExamples)

**A Sample Item: ProductCatalog**  
The examples on this page use the following sample item in the `ProductCatalog` table. (This table is described in [Example tables and data for use in DynamoDB](AppendixSampleTables.md).)

```
{
    "Id": 123,
    "Title": "Bicycle 123",
    "Description": "123 description",
    "BicycleType": "Hybrid",
    "Brand": "Brand-Company C",
    "Price": 500,
    "Color": ["Red", "Black"],
    "ProductCategory": "Bicycle",
    "InStock": true,
    "QuantityOnHand": null,
    "RelatedItems": [
        341,
        472,
        649
    ],
    "Pictures": {
        "FrontView": "http://example.com/products/123_front.jpg",
        "RearView": "http://example.com/products/123_rear.jpg",
        "SideView": "http://example.com/products/123_left_side.jpg"
    },
    "ProductReviews": {
	    "FiveStar": [
	    		"Excellent! Can't recommend it highly enough! Buy it!",
	    		"Do yourself a favor and buy this."
	    ],
	    "OneStar": [
	    		"Terrible product! Do not buy this."
	    ]
    },
    "Comment": "This product sells out quickly during the summer",
    "Safety.Warning": "Always wear a helmet"
 }
```

Note the following:
+ The partition key value (`Id`) is `123`. There is no sort key.
+ Most of the attributes have scalar data types, such as `String`, `Number`, `Boolean`, and `Null`.
+ One attribute (`Color`) is a `String Set`.
+ The following attributes are document data types:
  + A list of `RelatedItems`. Each element is an `Id` for a related product.
  + A map of `Pictures`. Each element is a short description of a picture, along with a URL for the corresponding image file.
  + A map of `ProductReviews`. Each element represents a rating and a list of reviews corresponding to that rating. Initially, this map is populated with five-star and one-star reviews.

## Top-level attributes
<a name="Expressions.Attributes.TopLevelAttributes"></a>

An attribute is said to be *top level* if it is not embedded within another attribute. For the `ProductCatalog` item, the top-level attributes are as follows:
+ `Id`
+ `Title`
+ `Description`
+ `BicycleType`
+ `Brand`
+ `Price`
+ `Color`
+ `ProductCategory`
+ `InStock`
+ `QuantityOnHand`
+ `RelatedItems`
+ `Pictures`
+ `ProductReviews`
+ `Comment`
+ `Safety.Warning`

All of these top-level attributes are scalars, except for `Color` (string set), `RelatedItems` (list), `Pictures` (map), and `ProductReviews` (map).

## Nested attributes
<a name="Expressions.Attributes.NestedAttributes"></a>

An attribute is said to be *nested* if it is embedded within another attribute. To access a nested attribute, you use *dereference operators*:
+ `[n]` — for list elements
+ `.` (dot) — for map elements

### Accessing list elements
<a name="Expressions.Attributes.NestedElements.AccessingListElements"></a>

The dereference operator for a list element is **[*n*]**, where *n* is the element number. List elements are zero-based, so `[0]` represents the first element in the list, `[1]` represents the second, and so on. Here are some examples:
+ `MyList[0]`
+ `AnotherList[12]`
+ `ThisList[5][11]`

The element `ThisList[5]` is itself a nested list. Therefore, `ThisList[5][11]` refers to the 12th element in that list.

The number within the square brackets must be a non-negative integer. Therefore, the following expressions are not valid:
+ `MyList[-1]`
+ `MyList[0.4]`

### Accessing map elements
<a name="Expressions.Attributes.NestedElements.AccessingMapElements"></a>

The dereference operator for a map element is **.** (a dot). Use a dot as a separator between elements in a map:
+ `MyMap.nestedField`
+ `MyMap.nestedField.deeplyNestedField`

## Document paths
<a name="Expressions.Attributes.NestedElements.DocumentPathExamples"></a>

In an expression, you use a *document path* to tell DynamoDB where to find an attribute. For a top-level attribute, the document path is simply the attribute name. For a nested attribute, you construct the document path using dereference operators.

The following are some examples of document paths. (Refer to the item shown in [Referring to item attributes when using expressions in DynamoDB](#Expressions.Attributes).)
+ A top-level scalar attribute.

   `Description`
+ A top-level list attribute. (This returns the entire list, not just some of the elements.)

  `RelatedItems`
+ The third element from the `RelatedItems` list. (Remember that list elements are zero-based.)

  `RelatedItems[2]`
+ The front-view picture of the product.

  `Pictures.FrontView`
+ All of the five-star reviews.

  `ProductReviews.FiveStar`
+ The first of the five-star reviews.

  `ProductReviews.FiveStar[0]`
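
A document path maps directly onto map lookups and zero-based list indexing. The following Python sketch resolves a path against a plain-dict version of the sample item (`resolve_path` is an illustrative helper, not part of any AWS SDK):

```python
import re

# Sketch: resolve a document path such as "ProductReviews.FiveStar[0]"
# against a plain-Python item. The dot operator becomes a dict lookup and
# the [n] operator becomes a list index.

def resolve_path(item, path):
    value = item
    for part in path.split("."):
        m = re.match(r"^(\w+)((?:\[\d+\])*)$", part)
        name, indexes = m.group(1), m.group(2)
        value = value[name]                      # map element (dot operator)
        for idx in re.findall(r"\[(\d+)\]", indexes):
            value = value[int(idx)]              # list element ([n] operator)
    return value

item = {
    "RelatedItems": [341, 472, 649],
    "Pictures": {"FrontView": "http://example.com/products/123_front.jpg"},
    "ProductReviews": {"FiveStar": ["Excellent! Can't recommend it highly enough! Buy it!",
                                    "Do yourself a favor and buy this."]},
}

print(resolve_path(item, "RelatedItems[2]"))             # 649
print(resolve_path(item, "Pictures.FrontView"))
print(resolve_path(item, "ProductReviews.FiveStar[0]"))
```

Note that this naive splitter would misinterpret an attribute name that itself contains a dot, such as `Safety.Warning` (the same ambiguity that expression attribute names exist to resolve).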

**Note**  
The maximum depth for a document path is 32. Therefore, the number of dereference operators in a path cannot exceed this limit.

You can use any attribute name in a document path as long as it meets these requirements:
+ The first character is `a-z` or `A-Z`.
+ The second character (if present) is `a-z`, `A-Z`, or `0-9`.

**Note**  
If an attribute name does not meet these requirements, you must define an expression attribute name as a placeholder.

For more information, see [Expression attribute names (aliases) in DynamoDB](Expressions.ExpressionAttributeNames.md).

# Expression attribute names (aliases) in DynamoDB
<a name="Expressions.ExpressionAttributeNames"></a>

An *expression attribute name* is an alias (or placeholder) that you use in an Amazon DynamoDB expression as an alternative to an actual attribute name. An expression attribute name must begin with a pound sign (`#`) and be followed by one or more alphanumeric characters. The underscore (`_`) character is also allowed.

This section describes several situations in which you must use expression attribute names.

**Note**  
The examples in this section use the AWS Command Line Interface (AWS CLI). 

**Topics**
+ [Reserved words](#Expressions.ExpressionAttributeNames.ReservedWords)
+ [Attribute names containing special characters](#Expressions.ExpressionAttributeNames.AttributeNamesContainingSpecialCharacters)
+ [Nested attributes](#Expressions.ExpressionAttributeNames.NestedAttributes)
+ [Repeatedly referencing attribute names](#Expressions.ExpressionAttributeNames.RepeatingAttributeNames)

## Reserved words
<a name="Expressions.ExpressionAttributeNames.ReservedWords"></a>

Sometimes you might need to write an expression containing an attribute name that conflicts with a DynamoDB reserved word. (For a complete list of reserved words, see [Reserved words in DynamoDB](ReservedWords.md).)

For example, the following AWS CLI example would fail because `COMMENT` is a reserved word.

```
aws dynamodb get-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"123"}}' \
    --projection-expression "Comment"
```

To work around this, you can replace `Comment` with an expression attribute name such as `#c`. The `#` (pound sign) is required and indicates that this is a placeholder for an attribute name. The AWS CLI example would now look like the following.

```
aws dynamodb get-item \
     --table-name ProductCatalog \
     --key '{"Id":{"N":"123"}}' \
     --projection-expression "#c" \
     --expression-attribute-names '{"#c":"Comment"}'
```

**Note**  
If an attribute name begins with a number, contains a space, or contains a reserved word, you *must* use an expression attribute name to replace that attribute's name in the expression.

## Attribute names containing special characters
<a name="Expressions.ExpressionAttributeNames.AttributeNamesContainingSpecialCharacters"></a>

In an expression, a dot (".") is interpreted as a separator character in a document path. However, DynamoDB also allows you to use a dot and other special characters, such as a hyphen ("-"), as part of an attribute name. This can be ambiguous in some cases. To illustrate, suppose that you wanted to retrieve the `Safety.Warning` attribute from a `ProductCatalog` item (see [Referring to item attributes when using expressions in DynamoDB](Expressions.Attributes.md)).

Suppose that you wanted to access `Safety.Warning` using a projection expression.

```
aws dynamodb get-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"123"}}' \
    --projection-expression "Safety.Warning"
```

DynamoDB would return an empty result, rather than the expected string ("`Always wear a helmet`"). This is because DynamoDB interprets a dot in an expression as a document path separator. In this case, you must define an expression attribute name (such as `#sw`) as a substitute for `Safety.Warning`. You could then use the following projection expression.

```
aws dynamodb get-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"123"}}' \
    --projection-expression "#sw" \
    --expression-attribute-names '{"#sw":"Safety.Warning"}'
```

DynamoDB would then return the correct result.

**Note**  
If an attribute name contains a dot (".") or a hyphen ("-"), you *must* use an expression attribute name to replace that attribute's name in the expression.

## Nested attributes
<a name="Expressions.ExpressionAttributeNames.NestedAttributes"></a>

Suppose that you wanted to access the nested attribute `ProductReviews.OneStar`. In an expression attribute name, DynamoDB treats the dot (".") as a character within an attribute's name. To reference the nested attribute, define an expression attribute name for each element in the document path:
+ `#pr — ProductReviews`
+ `#1star — OneStar`

You could then use `#pr.#1star` for the projection expression.

```
aws dynamodb get-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"123"}}' \
    --projection-expression "#pr.#1star"  \
    --expression-attribute-names '{"#pr":"ProductReviews", "#1star":"OneStar"}'
```

DynamoDB would then return the correct result.

## Repeatedly referencing attribute names
<a name="Expressions.ExpressionAttributeNames.RepeatingAttributeNames"></a>

Expression attribute names are helpful when you need to refer to the same attribute name repeatedly. For example, consider the following expression for retrieving some of the reviews from a `ProductCatalog` item.

```
aws dynamodb get-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"123"}}' \
    --projection-expression "ProductReviews.FiveStar, ProductReviews.ThreeStar, ProductReviews.OneStar"
```

To make this more concise, you can replace `ProductReviews` with an expression attribute name such as `#pr`. The revised expression would now look like the following.
+  `#pr.FiveStar, #pr.ThreeStar, #pr.OneStar` 

```
aws dynamodb get-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"123"}}' \
    --projection-expression "#pr.FiveStar, #pr.ThreeStar, #pr.OneStar" \
    --expression-attribute-names '{"#pr":"ProductReviews"}'
```

If you define an expression attribute name, you must use it consistently throughout the entire expression. Also, you cannot omit the `#` symbol. 
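
Conceptually, each `#` alias is substituted with its real attribute name before the expression is evaluated. A minimal Python sketch of that substitution (illustrative only; the real substitution happens inside DynamoDB):

```python
import re

# Sketch of expression-attribute-name substitution: replace every "#alias"
# token in an expression with the real attribute name from the names map.

def substitute_names(expression, names):
    def repl(match):
        alias = match.group(0)
        if alias not in names:
            raise ValueError("undefined expression attribute name: " + alias)
        return names[alias]
    return re.sub(r"#\w+", repl, expression)

expr = "#pr.FiveStar, #pr.ThreeStar, #pr.OneStar"
print(substitute_names(expr, {"#pr": "ProductReviews"}))
# ProductReviews.FiveStar, ProductReviews.ThreeStar, ProductReviews.OneStar
```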

# Using expression attribute values in DynamoDB
<a name="Expressions.ExpressionAttributeValues"></a>

*Expression attribute values* in Amazon DynamoDB act as variables. They're substitutes for the actual values that you want to compare—values that you might not know until runtime. An expression attribute value must begin with a colon (`:`) and be followed by one or more alphanumeric characters.

For example, suppose that you wanted to return all of the `ProductCatalog` items that are available in `Black` and cost `500` or less. You could use a `Scan` operation with a filter expression, as in this AWS Command Line Interface (AWS CLI) example.

```
aws dynamodb scan \
    --table-name ProductCatalog \
    --filter-expression "contains(Color, :c) and Price <= :p" \
    --expression-attribute-values file://values.json
```

The arguments for `--expression-attribute-values` are stored in the `values.json` file.

```
{
    ":c": { "S": "Black" },
    ":p": { "N": "500" }
}
```

If you define an expression attribute value, you must use it consistently throughout the entire expression. Also, you can't omit the `:` symbol. 

Expression attribute values are used with key condition expressions, condition expressions, update expressions, and filter expressions.
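
To make the semantics concrete, the following Python sketch applies the same filter logic locally to a few plain-dict items (a model of the expression's meaning, not of how DynamoDB evaluates it):

```python
# Local model of the filter expression above:
#   contains(Color, :c) and Price <= :p
# with the expression attribute values from values.json.

values = {":c": "Black", ":p": 500}

def matches(item):
    in_color = values[":c"] in item.get("Color", [])      # contains(Color, :c)
    in_price = item.get("Price", float("inf")) <= values[":p"]  # Price <= :p
    return in_color and in_price

items = [
    {"Id": 123, "Color": ["Red", "Black"], "Price": 500},
    {"Id": 201, "Color": ["Red"], "Price": 100},
    {"Id": 202, "Color": ["Black"], "Price": 600},
]

print([item["Id"] for item in items if matches(item)])  # [123]
```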

# Using projection expressions in DynamoDB
<a name="Expressions.ProjectionExpressions"></a>

To read data from a table, you use operations such as `GetItem`, `Query`, or `Scan`. Amazon DynamoDB returns all the item attributes by default. To get only some of the attributes, rather than all of them, use a projection expression.

A *projection expression* is a string that identifies the attributes that you want. To retrieve a single attribute, specify its name. For multiple attributes, the names must be comma-separated.

The following are some examples of projection expressions, based on the `ProductCatalog` item from [Referring to item attributes when using expressions in DynamoDB](Expressions.Attributes.md):
+ A single top-level attribute.

  `Title`
+ Three top-level attributes. DynamoDB retrieves the entire `Color` set.

  `Title, Price, Color`
+ Four top-level attributes. DynamoDB returns the entire contents of `RelatedItems` and `ProductReviews`.

  `Title, Description, RelatedItems, ProductReviews`

**Note**  
A projection expression has no effect on provisioned throughput consumption. DynamoDB determines the capacity units consumed based on the size of the item, not on the amount of data that is returned to an application.

**Reserved words and special characters**

DynamoDB has reserved words and special characters. DynamoDB allows you to use these reserved words and special characters for names, but we recommend that you avoid doing so because you have to use aliases for them whenever you use these names in an expression. For a complete list, see [Reserved words in DynamoDB](ReservedWords.md).

You'll need to use expression attribute names in place of the actual name if: 
+ The attribute name is on the list of reserved words in DynamoDB.
+ The attribute name does not meet the requirement that the first character is `a-z` or `A-Z` and that the second character (if present) is `a-z`, `A-Z`, or `0-9`.
+ The attribute name contains a **#** (hash) or **:** (colon).

The following AWS CLI example shows how to use a projection expression with a `GetItem` operation. This projection expression retrieves a top-level scalar attribute (`Description`), the first element in a list (`RelatedItems[0]`), and a list nested within a map (`ProductReviews.FiveStar`).

```
aws dynamodb get-item \
    --table-name ProductCatalog \
    --key '{"Id": {"N": "123"}}' \
    --projection-expression "Description, RelatedItems[0], ProductReviews.FiveStar"
```

The following JSON would be returned for this example.

```
{
    "Item": {
        "Description": {
            "S": "123 description"
        },
        "ProductReviews": {
            "M": {
                "FiveStar": {
                    "L": [
                        {
                            "S": "Excellent! Can't recommend it highly enough! Buy it!"
                        },
                        {
                            "S": "Do yourself a favor and buy this."
                        }
                    ]
                }
            }
        },
        "RelatedItems": {
            "L": [
                {
                    "N": "341"
                }
            ]
        }
    }
}
```

# Using update expressions in DynamoDB
<a name="Expressions.UpdateExpressions"></a>

The `UpdateItem` operation updates an existing item, or adds a new item to the table if it does not already exist. You must provide the key of the item that you want to update. You must also provide an update expression, indicating the attributes that you want to modify and the values that you want to assign to them. 

An *update expression* specifies how `UpdateItem` will modify the attributes of an item—for example, setting a scalar value or removing elements from a list or a map.

The following is a syntax summary for update expressions.

```
update-expression ::=
    [ SET action [, action] ... ]
    [ REMOVE action [, action] ...]
    [ ADD action [, action] ... ]
    [ DELETE action [, action] ...]
```

An update expression consists of one or more clauses. Each clause begins with a `SET`, `REMOVE`, `ADD`, or `DELETE` keyword. You can include any of these clauses in an update expression, in any order. However, each action keyword can appear only once.

Within each clause, there are one or more actions separated by commas. Each action represents a data modification.

The examples in this section are based on the `ProductCatalog` item shown in [Using projection expressions in DynamoDB](Expressions.ProjectionExpressions.md).

The topics below cover some different use cases for the `SET` action.

**Topics**
+ [SET — modifying or adding item attributes](#Expressions.UpdateExpressions.SET)
+ [REMOVE — deleting attributes from an item](#Expressions.UpdateExpressions.REMOVE)
+ [ADD — updating numbers and sets](#Expressions.UpdateExpressions.ADD)
+ [DELETE — removing elements from a set](#Expressions.UpdateExpressions.DELETE)
+ [Using multiple update expressions](#Expressions.UpdateExpressions.Multiple)

## SET — modifying or adding item attributes
<a name="Expressions.UpdateExpressions.SET"></a>

Use the `SET` action in an update expression to add one or more attributes to an item. If any of these attributes already exist, they are overwritten by the new values. If you want to avoid overwriting an existing attribute, you can use `SET` with the `if_not_exists` function. The `if_not_exists` function is specific to the `SET` action and can only be used in an update expression.

When you use `SET` to update a list element, the contents of that element are replaced with the new data that you specify. If the element doesn't already exist, `SET` appends the new element at the end of the list.

If you add multiple elements in a single `SET` operation, the elements are sorted in order by element number.

You can also use `SET` to add or subtract from an attribute that is of type `Number`. To perform multiple `SET` actions, separate them with commas.

In the following syntax summary:
+ The *path* element is the document path to the attribute that you want to modify.
+ An *operand* element can be either a document path to an attribute or a function.

```
set-action ::=
    path = value

value ::=
    operand
    | operand '+' operand
    | operand '-' operand

operand ::=
    path | function

function ::=
    if_not_exists (path, value)
```

If the item does not contain an attribute at the specified path, `if_not_exists` evaluates to `value`. Otherwise, it evaluates to `path`.
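
As a local sketch of these semantics, using a plain Python dictionary to stand in for an item (this models the behavior; it is not DynamoDB itself):

```python
def if_not_exists(item, path, value):
    """Local model of DynamoDB's if_not_exists: keep the existing
    attribute value if the path exists, otherwise use the new value."""
    return item[path] if path in item else value

item = {"Id": 789, "Price": 52}

# SET Price = if_not_exists(Price, :p) with :p = 100 leaves Price at 52:
item["Price"] = if_not_exists(item, "Price", 100)
assert item["Price"] == 52

# For an attribute that is absent, the supplied value is used:
item["QuantityOnHand"] = if_not_exists(item, "QuantityOnHand", 5)
assert item["QuantityOnHand"] == 5
```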

The following `PutItem` operation creates a sample item that the examples refer to.

```
aws dynamodb put-item \
    --table-name ProductCatalog \
    --item file://item.json
```

The arguments for `--item` are stored in the `item.json` file. (For simplicity, only a few item attributes are used.)

```
{
    "Id": {"N": "789"},
    "ProductCategory": {"S": "Home Improvement"},
    "Price": {"N": "52"},
    "InStock": {"BOOL": true},
    "Brand": {"S": "Acme"}
}
```

**Topics**
+ [Modifying attributes](#Expressions.UpdateExpressions.SET.ModifyingAttributes)
+ [Adding lists and maps](#Expressions.UpdateExpressions.SET.AddingListsAndMaps)
+ [Adding elements to a list](#Expressions.UpdateExpressions.SET.AddingListElements)
+ [Adding nested map attributes](#Expressions.UpdateExpressions.SET.AddingNestedMapAttributes)
+ [Incrementing and decrementing numeric attributes](#Expressions.UpdateExpressions.SET.IncrementAndDecrement)
+ [Appending elements to a list](#Expressions.UpdateExpressions.SET.UpdatingListElements)
+ [Preventing overwrites of an existing attribute](#Expressions.UpdateExpressions.SET.PreventingAttributeOverwrites)

### Modifying attributes
<a name="Expressions.UpdateExpressions.SET.ModifyingAttributes"></a>

**Example**  
Update the `ProductCategory` and `Price` attributes.  

```
aws dynamodb update-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"789"}}' \
    --update-expression "SET ProductCategory = :c, Price = :p" \
    --expression-attribute-values file://values.json \
    --return-values ALL_NEW
```
The arguments for `--expression-attribute-values` are stored in the `values.json` file.  

```
{
    ":c": { "S": "Hardware" },
    ":p": { "N": "60" }
}
```

**Note**  
In the `UpdateItem` operation, `--return-values ALL_NEW` causes DynamoDB to return the item as it appears after the update.

### Adding lists and maps
<a name="Expressions.UpdateExpressions.SET.AddingListsAndMaps"></a>

**Example**  
Add a new list and a new map.  

```
aws dynamodb update-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"789"}}' \
    --update-expression "SET RelatedItems = :ri, ProductReviews = :pr" \
    --expression-attribute-values file://values.json \
    --return-values ALL_NEW
```
The arguments for `--expression-attribute-values` are stored in the `values.json` file.  

```
{
    ":ri": {
        "L": [
            { "S": "Hammer" }
        ]
    },
    ":pr": {
        "M": {
            "FiveStar": {
                "L": [
                    { "S": "Best product ever!" }
                ]
            }
        }
    }
}
```

### Adding elements to a list
<a name="Expressions.UpdateExpressions.SET.AddingListElements"></a>

**Example**  
Add a new element to the `RelatedItems` list. (Remember that list elements are zero-based, so `[0]` represents the first element in the list, `[1]` represents the second, and so on.)  

```
aws dynamodb update-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"789"}}' \
    --update-expression "SET RelatedItems[1] = :ri" \
    --expression-attribute-values file://values.json \
    --return-values ALL_NEW
```
The arguments for `--expression-attribute-values` are stored in the `values.json` file.  

```
{
    ":ri": { "S": "Nails" }
}
```

**Note**  
When you use `SET` to update a list element, the contents of that element are replaced with the new data that you specify. If the element doesn't already exist, `SET` appends the new element at the end of the list.  
If you add multiple elements in a single `SET` operation, the elements are sorted in order by element number.
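
This replace-or-append behavior can be modeled locally (a sketch of the semantics, not DynamoDB itself):

```python
def set_list_element(lst, index, value):
    """Model of SET RelatedItems[i] = :v. If the index exists, the
    element is replaced; otherwise the new element is appended at
    the end of the list."""
    if index < len(lst):
        lst[index] = value
    else:
        lst.append(value)
    return lst

related = ["Hammer"]
set_list_element(related, 1, "Nails")    # index 1 doesn't exist yet: appended
assert related == ["Hammer", "Nails"]
set_list_element(related, 0, "Mallet")   # index 0 exists: replaced
assert related == ["Mallet", "Nails"]
```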

### Adding nested map attributes
<a name="Expressions.UpdateExpressions.SET.AddingNestedMapAttributes"></a>

**Example**  
Add some nested map attributes.  

```
aws dynamodb update-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"789"}}' \
    --update-expression "SET #pr.#5star[1] = :r5, #pr.#3star = :r3" \
    --expression-attribute-names file://names.json \
    --expression-attribute-values file://values.json \
    --return-values ALL_NEW
```
The arguments for `--expression-attribute-names` are stored in the `names.json` file.  

```
{
    "#pr": "ProductReviews",
    "#5star": "FiveStar",
    "#3star": "ThreeStar"
}
```
The arguments for `--expression-attribute-values` are stored in the `values.json` file.  

```
{
    ":r5": { "S": "Very happy with my purchase" },
    ":r3": {
        "L": [
            { "S": "Just OK - not that great" }
        ]
    }
}
```

**Important**  
You cannot update nested map attributes if the parent map does not exist. If you attempt to update a nested attribute (for example, `ProductReviews.FiveStar`) when the parent map (`ProductReviews`) does not exist, DynamoDB returns a `ValidationException` with the message *"The document path provided in the update expression is invalid for update."*  
When creating items that will have nested map attributes updated later, initialize empty maps for the parent attributes. For example:  

```
{
    "Id": {"N": "789"},
    "ProductReviews": {"M": {}},
    "Metadata": {"M": {}}
}
```
This allows you to update nested attributes like `ProductReviews.FiveStar` without errors.

### Incrementing and decrementing numeric attributes
<a name="Expressions.UpdateExpressions.SET.IncrementAndDecrement"></a>

You can add to or subtract from an existing numeric attribute. To do this, use the `+` (plus) and `-` (minus) operators.

**Example**  
Decrease the `Price` of an item.  

```
aws dynamodb update-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"789"}}' \
    --update-expression "SET Price = Price - :p" \
    --expression-attribute-values '{":p": {"N":"15"}}' \
    --return-values ALL_NEW
```
To increase the `Price`, you would use the `+` operator in the update expression.

### Appending elements to a list
<a name="Expressions.UpdateExpressions.SET.UpdatingListElements"></a>

You can add elements to the end of a list. To do this, use `SET` with the `list_append` function. (The function name is case sensitive.) The `list_append` function is specific to the `SET` action and can only be used in an update expression. The syntax is as follows.
+ `list_append (list1, list2)`

The function takes two lists as input and appends all elements from `list2` to `list1`.

**Example**  
In [Adding elements to a list](#Expressions.UpdateExpressions.SET.AddingListElements), you create the `RelatedItems` list and populate it with two elements: `Hammer` and `Nails`. Now you append two more elements to the end of `RelatedItems`.  

```
aws dynamodb update-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"789"}}' \
    --update-expression "SET #ri = list_append(#ri, :vals)" \
    --expression-attribute-names '{"#ri": "RelatedItems"}' \
    --expression-attribute-values file://values.json  \
    --return-values ALL_NEW
```
The arguments for `--expression-attribute-values` are stored in the `values.json` file.  

```
{
    ":vals": {
        "L": [
            { "S": "Screwdriver" },
            {"S": "Hacksaw" }
        ]
    }
}
```
Finally, you append one more element to the *beginning* of `RelatedItems`. To do this, swap the order of the `list_append` elements. (Remember that `list_append` takes two lists as input and appends the second list to the first.)  

```
aws dynamodb update-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"789"}}' \
    --update-expression "SET #ri = list_append(:vals, #ri)" \
    --expression-attribute-names '{"#ri": "RelatedItems"}' \
    --expression-attribute-values '{":vals": {"L": [ { "S": "Chisel" }]}}' \
    --return-values ALL_NEW
```
The resulting `RelatedItems` attribute now contains five elements, in the following order: `Chisel`, `Hammer`, `Nails`, `Screwdriver`, `Hacksaw`.

### Preventing overwrites of an existing attribute
<a name="Expressions.UpdateExpressions.SET.PreventingAttributeOverwrites"></a>

**Example**  
Set the `Price` of an item, but only if the item does not already have a `Price` attribute. (If `Price` already exists, nothing happens.)  

```
aws dynamodb update-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"789"}}' \
    --update-expression "SET Price = if_not_exists(Price, :p)" \
    --expression-attribute-values '{":p": {"N": "100"}}' \
    --return-values ALL_NEW
```

## REMOVE — deleting attributes from an item
<a name="Expressions.UpdateExpressions.REMOVE"></a>

Use the `REMOVE` action in an update expression to remove one or more attributes from an item in Amazon DynamoDB. To perform multiple `REMOVE` actions, separate them with commas.

The following is a syntax summary for `REMOVE` in an update expression. The only operand is the document path for the attribute that you want to remove.

```
remove-action ::=
    path
```

**Example**  
Remove some attributes from an item. (If the attributes don't exist, nothing happens.)  

```
aws dynamodb update-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"789"}}' \
    --update-expression "REMOVE Brand, InStock, QuantityOnHand" \
    --return-values ALL_NEW
```

### Removing elements from a list
<a name="Expressions.UpdateExpressions.REMOVE.RemovingListElements"></a>

You can use `REMOVE` to delete individual elements from a list.

**Example**  
In [Appending elements to a list](#Expressions.UpdateExpressions.SET.UpdatingListElements), you modified the `RelatedItems` list attribute so that it contains five elements:  
+ `[0]`—`Chisel`
+ `[1]`—`Hammer`
+ `[2]`—`Nails`
+ `[3]`—`Screwdriver`
+ `[4]`—`Hacksaw`
The following AWS Command Line Interface (AWS CLI) example deletes `Hammer` and `Nails` from the list.  

```
aws dynamodb update-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"789"}}' \
    --update-expression "REMOVE RelatedItems[1], RelatedItems[2]" \
    --return-values ALL_NEW
```
After `Hammer` and `Nails` are removed, the remaining elements are shifted. The list now contains the following:  
+ `[0]`—`Chisel`
+ `[1]`—`Screwdriver`
+ `[2]`—`Hacksaw`
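
Both `[1]` and `[2]` refer to positions in the list as it was before the update; DynamoDB resolves the document paths first and then compacts the surviving elements. A local Python model of this behavior (a sketch, not DynamoDB itself):

```python
def remove_elements(lst, indexes):
    """Model of REMOVE RelatedItems[i], RelatedItems[j]: all indexes
    refer to the list as it was before the update, then the surviving
    elements are compacted (shifted left)."""
    drop = set(indexes)
    return [element for i, element in enumerate(lst) if i not in drop]

related = ["Chisel", "Hammer", "Nails", "Screwdriver", "Hacksaw"]
assert remove_elements(related, [1, 2]) == ["Chisel", "Screwdriver", "Hacksaw"]
```

Note that a naive sequential delete (remove index 1, then index 2 of the already-shifted list) would drop `Hammer` and `Screwdriver` instead; resolving both indexes against the original list is what gives the result shown above.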

## ADD — updating numbers and sets
<a name="Expressions.UpdateExpressions.ADD"></a>

**Note**  
In general, we recommend using `SET` rather than `ADD` to ensure idempotent operations.

Use the `ADD` action in an update expression to add a new attribute and its values to an item.

If the attribute already exists, the behavior of `ADD` depends on the attribute's data type:
+ If the attribute is a number, and the value you are adding is also a number, the value is mathematically added to the existing attribute. (If the value is a negative number, it is subtracted from the existing attribute.)
+ If the attribute is a set, and the value you are adding is also a set, the value is appended to the existing set.

**Note**  
The `ADD` action supports only number and set data types.

To perform multiple `ADD` actions, separate them with commas.

In the following syntax summary:
+ The *path* element is the document path to an attribute. The attribute must be either a `Number` or a set data type. 
+ The *value* element is a number that you want to add to the attribute (for `Number` data types), or a set to append to the attribute (for set types).

```
add-action ::=
    path value
```
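
The two behaviors can be modeled locally (a sketch of the semantics, not DynamoDB itself):

```python
def add_action(item, path, value):
    """Model of ADD: numbers are summed, sets are unioned, and a
    missing attribute is created from the supplied value."""
    if path not in item:
        item[path] = value
    elif isinstance(item[path], set) and isinstance(value, set):
        item[path] |= value           # append new elements to the set
    else:
        item[path] += value           # mathematical addition (or subtraction)
    return item

item = {"Id": 789}
add_action(item, "QuantityOnHand", 5)            # attribute created: 5
add_action(item, "QuantityOnHand", 5)            # incremented: 10
add_action(item, "Color", {"Orange", "Purple"})  # set created
add_action(item, "Color", {"Yellow"})            # element appended
assert item["QuantityOnHand"] == 10
assert item["Color"] == {"Orange", "Purple", "Yellow"}
```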

The topics below cover some different use cases for the `ADD` action.

**Topics**
+ [Adding a number](#Expressions.UpdateExpressions.ADD.Number)
+ [Adding elements to a set](#Expressions.UpdateExpressions.ADD.Set)

### Adding a number
<a name="Expressions.UpdateExpressions.ADD.Number"></a>

Assume that the `QuantityOnHand` attribute does not exist. The following AWS CLI example sets `QuantityOnHand` to 5.

```
aws dynamodb update-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"789"}}' \
    --update-expression "ADD QuantityOnHand :q" \
    --expression-attribute-values '{":q": {"N": "5"}}' \
    --return-values ALL_NEW
```

Now that `QuantityOnHand` exists, you can rerun the example to increment `QuantityOnHand` by 5 each time.

### Adding elements to a set
<a name="Expressions.UpdateExpressions.ADD.Set"></a>

Assume that the `Color` attribute does not exist. The following AWS CLI example sets `Color` to a string set with two elements.

```
aws dynamodb update-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"789"}}' \
    --update-expression "ADD Color :c" \
    --expression-attribute-values '{":c": {"SS":["Orange", "Purple"]}}' \
    --return-values ALL_NEW
```

Now that `Color` exists, you can add more elements to it.

```
aws dynamodb update-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"789"}}' \
    --update-expression "ADD Color :c" \
    --expression-attribute-values '{":c": {"SS":["Yellow", "Green", "Blue"]}}' \
    --return-values ALL_NEW
```

## DELETE — removing elements from a set
<a name="Expressions.UpdateExpressions.DELETE"></a>

**Important**  
The `DELETE` action supports only `Set` data types.

Use the `DELETE` action in an update expression to remove one or more elements from a set. To perform multiple `DELETE` actions, separate them with commas.

In the following syntax summary:
+ The *path* element is the document path to an attribute. The attribute must be a set data type.
+ The *subset* is one or more elements that you want to delete from *path*. You must specify *subset* as a set type.

```
delete-action ::=
    path subset
```

**Example**  
In [Adding elements to a set](#Expressions.UpdateExpressions.ADD.Set), you create the `Color` string set. This example removes some of the elements from that set.  

```
aws dynamodb update-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"789"}}' \
    --update-expression "DELETE Color :p" \
    --expression-attribute-values '{":p": {"SS": ["Yellow", "Purple"]}}' \
    --return-values ALL_NEW
```

## Using multiple update expressions
<a name="Expressions.UpdateExpressions.Multiple"></a>

You can use multiple actions in a single update expression. All attribute references are resolved against the item's state before any of the actions are applied.

**Example**  
Given an item `{"id": "1", "a": 1, "b": 2, "c": 3}`, the following expression removes `a` and shifts the values of `b` and `c`:  

```
aws dynamodb update-item \
    --table-name test \
    --key '{"id":{"S":"1"}}' \
    --update-expression "REMOVE a SET b = a, c = b" \
    --return-values ALL_NEW
```
The result is `{"id": "1", "b": 1, "c": 2}`. Even though `a` is removed and `b` is reassigned in the same expression, both references resolve to their original values.
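
This resolve-then-apply behavior can be modeled locally (a sketch of the semantics, not DynamoDB itself):

```python
def apply_update(item, removes, sets):
    """Model of an update expression: every attribute reference on the
    right-hand side resolves against the item's state *before* any
    action is applied."""
    before = dict(item)                 # snapshot of the original item
    for path in removes:
        item.pop(path, None)
    for path, source in sets.items():
        item[path] = before[source]     # read from the snapshot, not the
                                        # partially updated item
    return item

item = {"id": "1", "a": 1, "b": 2, "c": 3}
apply_update(item, removes=["a"], sets={"b": "a", "c": "b"})
assert item == {"id": "1", "b": 1, "c": 2}
```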

**Example**  
If you want to modify an attribute's value and completely remove another attribute, you could use a SET and a REMOVE action in a single statement. This operation would reduce the `Price` value to 15 while also removing the `InStock` attribute from the item.  

```
aws dynamodb update-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"789"}}' \
    --update-expression "SET Price = Price - :p REMOVE InStock" \
    --expression-attribute-values '{":p": {"N":"15"}}' \
    --return-values ALL_NEW
```

**Example**  
If you want to add to a list while also changing another attribute's value, you could use two SET actions in a single statement. This operation would add "Nails" to the `RelatedItems` list attribute and also set the `Price` value to 21.  

```
aws dynamodb update-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"789"}}' \
    --update-expression "SET RelatedItems[1] = :newValue, Price = :newPrice" \
    --expression-attribute-values '{":newValue": {"S":"Nails"}, ":newPrice": {"N":"21"}}'  \
    --return-values ALL_NEW
```

# Condition and filter expressions, operators, and functions in DynamoDB
<a name="Expressions.OperatorsAndFunctions"></a>

To manipulate data in a DynamoDB table, you use the `PutItem`, `UpdateItem`, and `DeleteItem` operations. For these data manipulation operations, you can specify a condition expression that determines which items should be modified. If the condition expression evaluates to true, the operation succeeds; otherwise, the operation fails.

This section covers the built-in functions and keywords for writing filter expressions and condition expressions in Amazon DynamoDB. For more detailed information on functions and programming with DynamoDB, see [Programming with DynamoDB and the AWS SDKs](Programming.md) and the [DynamoDB API Reference](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/).

**Topics**
+ [Syntax for filter and condition expressions](#Expressions.OperatorsAndFunctions.Syntax)
+ [Making comparisons](#Expressions.OperatorsAndFunctions.Comparators)
+ [Functions](#Expressions.OperatorsAndFunctions.Functions)
+ [Logical evaluations](#Expressions.OperatorsAndFunctions.LogicalEvaluations)
+ [Parentheses](#Expressions.OperatorsAndFunctions.Parentheses)
+ [Precedence in conditions](#Expressions.OperatorsAndFunctions.Precedence)

## Syntax for filter and condition expressions
<a name="Expressions.OperatorsAndFunctions.Syntax"></a>

In the following syntax summary, an *operand* can be the following: 
+ A top-level attribute name, such as `Id`, `Title`, `Description`, or `ProductCategory`
+ A document path that references a nested attribute

```
condition-expression ::=
      operand comparator operand
    | operand BETWEEN operand AND operand
    | operand IN ( operand (',' operand (, ...) ))
    | function
    | condition AND condition
    | condition OR condition
    | NOT condition
    | ( condition )

comparator ::=
    =
    | <>
    | <
    | <=
    | >
    | >=

function ::=
    attribute_exists (path)
    | attribute_not_exists (path)
    | attribute_type (path, type)
    | begins_with (path, substr)
    | contains (path, operand)
    | size (path)
```

## Making comparisons
<a name="Expressions.OperatorsAndFunctions.Comparators"></a>

Use these comparators to compare an operand against a single value:
+ `a = b` – True if *a* is equal to *b*.
+ `a <> b` – True if *a* is not equal to *b*.
+ `a < b` – True if *a* is less than *b*.
+ `a <= b` – True if *a* is less than or equal to *b*.
+ `a > b` – True if *a* is greater than *b*.
+ `a >= b` – True if *a* is greater than or equal to *b*.

Use the `BETWEEN` and `IN` keywords to compare an operand against a range of values or an enumerated list of values:
+ `a BETWEEN b AND c` – True if *a* is greater than or equal to *b*, and less than or equal to *c*.
+ `a IN (b, c, d) ` – True if *a* is equal to any value in the list—for example, any of *b*, *c*, or *d*. The list can contain up to 100 values, separated by commas.
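
Both keywords map onto ordinary comparisons, as this quick local sanity check (with illustrative values) shows:

```python
price = 650
# a BETWEEN b AND c is inclusive at both ends; 650 falls outside 500..600:
assert not (500 <= price <= 600)

# a IN (b, c, d) tests membership in the enumerated list:
category = "Sporting Goods"
assert category in ("Sporting Goods", "Gardening Supplies")
```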

## Functions
<a name="Expressions.OperatorsAndFunctions.Functions"></a>

Use the following functions to determine whether an attribute exists in an item, or to evaluate the value of an attribute. These function names are case sensitive. For a nested attribute, you must provide its full document path.


****  

| Function | Description | 
| --- | --- | 
|  `attribute_exists (path)`  | True if the item contains the attribute specified by `path`. Example: Check whether an item in the `Product` table has a side view picture. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.OperatorsAndFunctions.html)  | 
|  `attribute_not_exists (path)`  | True if the attribute specified by `path` does not exist in the item. Example: Check whether an item has a `Manufacturer` attribute. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.OperatorsAndFunctions.html)  | 
|  `attribute_type (path, type)`  |  True if the attribute at the specified path is of a particular data type. The `type` parameter must be one of the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.OperatorsAndFunctions.html) You must use an expression attribute value for the `type` parameter. Example: Check whether the `QuantityOnHand` attribute is of type List. In this example, `:v_sub` is a placeholder for the string `L`. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.OperatorsAndFunctions.html)  | 
|  `begins_with (path, substr)`  |  True if the attribute specified by `path` begins with a particular substring. Example: Check whether the first few characters of the front view picture URL are `http://`. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.OperatorsAndFunctions.html) The expression attribute value `:v_sub` is a placeholder for `http://`.  | 
|  `contains (path, operand)`  | True if the attribute specified by `path` is one of the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.OperatorsAndFunctions.html) If the attribute specified by `path` is a `String`, the `operand` must be a `String`. If the attribute specified by `path` is a `Set`, the `operand` must be the set's element type. The path and the operand must be distinct. That is, `contains (a, a)` returns an error. Example: Check whether the `Brand` attribute contains the substring `Company`. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.OperatorsAndFunctions.html) The expression attribute value `:v_sub` is a placeholder for `Company`. Example: Check whether the product is available in red. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.OperatorsAndFunctions.html) The expression attribute value `:v_sub` is a placeholder for `Red`. | 
|  `size (path)`  | Returns a number that represents an attribute's size. The following are valid data types for use with `size`.  If the attribute is of type `String`, `size` returns the length of the string. Example: Check whether the string `Brand` is less than or equal to 20 characters. The expression attribute value `:v_sub` is a placeholder for `20`. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.OperatorsAndFunctions.html)  If the attribute is of type `Binary`, `size` returns the number of bytes in the attribute value. Example: Suppose that the `ProductCatalog` item has a binary attribute named `VideoClip` that contains a short video of the product in use. The following expression checks whether `VideoClip` exceeds 64,000 bytes. The expression attribute value `:v_sub` is a placeholder for `64000`. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.OperatorsAndFunctions.html)  If the attribute is a `Set` data type, `size` returns the number of elements in the set.  Example: Check whether the product is available in more than one color. The expression attribute value `:v_sub` is a placeholder for `1`. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.OperatorsAndFunctions.html)  If the attribute is of type `List` or `Map`, `size` returns the number of child elements. Example: Check whether the number of `OneStar` reviews has exceeded a certain threshold. The expression attribute value `:v_sub` is a placeholder for `3`. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.OperatorsAndFunctions.html)  | 

## Logical evaluations
<a name="Expressions.OperatorsAndFunctions.LogicalEvaluations"></a>

Use the `AND`, `OR`, and `NOT` keywords to perform logical evaluations. In the following list, *a* and *b* represent conditions to be evaluated.
+ `a AND b` – True if *a* and *b* are both true.
+ `a OR b` – True if either *a* or *b* (or both) are true.
+ `NOT a` – True if *a* is false. False if *a* is true.

The following is a code example of AND in an operation.

`dynamodb-local (*)> select * from exprtest where a > 3 and a < 5;`

## Parentheses
<a name="Expressions.OperatorsAndFunctions.Parentheses"></a>

Use parentheses to change the precedence of a logical evaluation. For example, suppose that conditions *a* and *b* are true, and that condition *c* is false. The following expression evaluates to true:
+ `a OR b AND c`

However, if you enclose a condition in parentheses, it is evaluated first. For example, the following evaluates to false:
+  `(a OR b) AND c`

**Note**  
You can nest parentheses in an expression. The innermost ones are evaluated first.

The following is a code example with parentheses in a logical evaluation.

`dynamodb-local (*)> select * from exprtest where attribute_type(b, string) or ( a = 5 and c = 'coffee');`

## Precedence in conditions
<a name="Expressions.OperatorsAndFunctions.Precedence"></a>

DynamoDB evaluates conditions from left to right using the following precedence rules:
+ `= <> < <= > >=`
+ `IN`
+ `BETWEEN`
+ `attribute_exists attribute_not_exists begins_with contains`
+ Parentheses
+ `NOT`
+ `AND`
+ `OR`
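
Because `AND` binds more tightly than `OR`, a condition such as `a OR b AND c` groups as `a OR (b AND c)`. Python's boolean operators group the same way, which makes for a quick local sanity check of the examples from the Parentheses section:

```python
# With a = True, b = True, c = False, as in the parentheses examples:
a, b, c = True, True, False

# AND binds more tightly than OR, so "a OR b AND c" groups as a OR (b AND c):
assert (a or (b and c)) is True

# Explicit parentheses change the grouping, and here the result:
assert ((a or b) and c) is False
```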

# DynamoDB condition expression CLI example
<a name="Expressions.ConditionExpressions"></a>

The following are some AWS Command Line Interface (AWS CLI) examples of using condition expressions. These examples are based on the `ProductCatalog` table, which was introduced in [Referring to item attributes when using expressions in DynamoDB](Expressions.Attributes.md). The partition key for this table is `Id`; there is no sort key. The following `PutItem` operation creates a sample `ProductCatalog` item that the examples refer to.

```
aws dynamodb put-item \
    --table-name ProductCatalog \
    --item file://item.json
```

The arguments for `--item` are stored in the `item.json` file. (For simplicity, only a few item attributes are used.)

```
{
    "Id": {"N": "456" },
    "ProductCategory": {"S": "Sporting Goods" },
    "Price": {"N": "650" }
}
```

**Topics**
+ [Conditional put](#Expressions.ConditionExpressions.PreventingOverwrites)
+ [Conditional deletes](#Expressions.ConditionExpressions.AdvancedComparisons)
+ [Conditional updates](#Expressions.ConditionExpressions.SimpleComparisons)
+ [Conditional expression examples](#Expressions.ConditionExpressions.ConditionalExamples)

## Conditional put
<a name="Expressions.ConditionExpressions.PreventingOverwrites"></a>

The `PutItem` operation overwrites any existing item that has the same primary key. If you want to avoid this, use a condition expression. This allows the write to proceed only if no item with that primary key already exists.

The following example uses `attribute_not_exists()` to check whether the primary key exists in the table before attempting the write operation. 

**Note**  
If your primary key consists of both a partition key (pk) and a sort key (sk), the condition is evaluated against the item identified by the full primary key. That is, `attribute_not_exists(pk)` evaluates to true only if no item with that same partition key *and* sort key already exists.

```
aws dynamodb put-item \
    --table-name ProductCatalog \
    --item file://item.json \
    --condition-expression "attribute_not_exists(Id)"
```

If the condition expression evaluates to false, DynamoDB returns the following error message: *The conditional request failed*.

**Note**  
For more information about `attribute_not_exists` and other functions, see [Condition and filter expressions, operators, and functions in DynamoDB](Expressions.OperatorsAndFunctions.md).

## Conditional deletes
<a name="Expressions.ConditionExpressions.AdvancedComparisons"></a>

To perform a conditional delete, you use a `DeleteItem` operation with a condition expression. The condition expression must evaluate to true in order for the operation to succeed; otherwise, the operation fails.

Consider the item defined above.

Suppose that you wanted to delete the item, but only under the following conditions:
+  The `ProductCategory` is either "Sporting Goods" or "Gardening Supplies."
+  The `Price` is between 500 and 600.

The following example tries to delete the item.

```
aws dynamodb delete-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"456"}}' \
    --condition-expression "(ProductCategory IN (:cat1, :cat2)) and (Price between :lo and :hi)" \
    --expression-attribute-values file://values.json
```

The arguments for `--expression-attribute-values` are stored in the `values.json` file.

```
{
    ":cat1": {"S": "Sporting Goods"},
    ":cat2": {"S": "Gardening Supplies"},
    ":lo": {"N": "500"},
    ":hi": {"N": "600"}
}
```

**Note**  
In the condition expression, the `:` (colon character) indicates an *expression attribute value*—a placeholder for an actual value. For more information, see [Using expression attribute values in DynamoDB](Expressions.ExpressionAttributeValues.md).  
For more information about `IN`, `AND`, and other keywords, see [Condition and filter expressions, operators, and functions in DynamoDB](Expressions.OperatorsAndFunctions.md).

In this example, the `ProductCategory` comparison evaluates to true, but the `Price` comparison evaluates to false. This causes the condition expression to evaluate to false and the `DeleteItem` operation to fail.
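
For comparison, the same conditional delete can be sketched with the AWS SDK for Python (Boto3), passing the expression attribute values inline instead of from a file. This is a minimal sketch assuming Boto3 is installed and credentials are configured; the `delete_if_matching` helper name is illustrative.

```python
def delete_if_matching(table_name, item_id):
    """Delete the item only if the category and price conditions hold.

    Returns True if the delete succeeded, False if the condition failed.
    """
    import boto3
    from botocore.exceptions import ClientError

    client = boto3.client("dynamodb")
    try:
        client.delete_item(
            TableName=table_name,
            Key={"Id": {"N": str(item_id)}},
            ConditionExpression=(
                "(ProductCategory IN (:cat1, :cat2)) and (Price between :lo and :hi)"
            ),
            ExpressionAttributeValues={
                ":cat1": {"S": "Sporting Goods"},
                ":cat2": {"S": "Gardening Supplies"},
                ":lo": {"N": "500"},
                ":hi": {"N": "600"},
            },
        )
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # condition evaluated to false; nothing was deleted
        raise
```

With the sample item's `Price` of 650, this call would return `False`, matching the CLI result above.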

## Conditional updates
<a name="Expressions.ConditionExpressions.SimpleComparisons"></a>

To perform a conditional update, you use an `UpdateItem` operation with a condition expression. The condition expression must evaluate to true in order for the operation to succeed; otherwise, the operation fails.

**Note**  
`UpdateItem` also supports *update expressions*, where you specify the modifications you want to make to an item. For more information, see [Using update expressions in DynamoDB](Expressions.UpdateExpressions.md).

Suppose that you started with the item defined above.

The following example performs an `UpdateItem` operation. It tries to reduce the `Price` of a product by 75—but the condition expression prevents the update if the current `Price` is less than or equal to 500.

```
aws dynamodb update-item \
    --table-name ProductCatalog \
    --key '{"Id": {"N": "456"}}' \
    --update-expression "SET Price = Price - :discount" \
    --condition-expression "Price > :limit" \
    --expression-attribute-values file://values.json
```

The arguments for `--expression-attribute-values` are stored in the `values.json` file.

```
{
    ":discount": { "N": "75"},
    ":limit": {"N": "500"}
}
```

If the starting `Price` is 650, the `UpdateItem` operation reduces the `Price` to 575. If you run the `UpdateItem` operation again, the `Price` is reduced to 500. If you run it a third time, the condition expression evaluates to false, and the update fails.

**Note**  
In the condition expression, the `:` (colon character) indicates an *expression attribute value*—a placeholder for an actual value. For more information, see [Using expression attribute values in DynamoDB](Expressions.ExpressionAttributeValues.md).  
For more information about "*>*" and other operators, see [Condition and filter expressions, operators, and functions in DynamoDB](Expressions.OperatorsAndFunctions.md).
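
The same conditional update can be sketched with the AWS SDK for Python (Boto3). This is a minimal sketch assuming Boto3 is installed and credentials are configured; the `discount_if_above_limit` helper name is illustrative, and `ReturnValues="UPDATED_NEW"` is used so the new `Price` comes back with the response.

```python
def discount_if_above_limit(table_name, item_id, discount=75, limit=500):
    """Reduce Price by `discount` only while Price is above `limit`.

    Returns the updated Price as a string, or None if the condition failed.
    """
    import boto3
    from botocore.exceptions import ClientError

    client = boto3.client("dynamodb")
    try:
        response = client.update_item(
            TableName=table_name,
            Key={"Id": {"N": str(item_id)}},
            UpdateExpression="SET Price = Price - :discount",
            ConditionExpression="Price > :limit",
            ExpressionAttributeValues={
                ":discount": {"N": str(discount)},
                ":limit": {"N": str(limit)},
            },
            ReturnValues="UPDATED_NEW",
        )
        return response["Attributes"]["Price"]["N"]
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return None  # Price is already at or below the limit
        raise
```

Calling this repeatedly reproduces the sequence described above: 650 to 575, then 575 to 500, then `None` on the third attempt.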

## Conditional expression examples
<a name="Expressions.ConditionExpressions.ConditionalExamples"></a>

For more information about the functions used in the following examples, see [Condition and filter expressions, operators, and functions in DynamoDB](Expressions.OperatorsAndFunctions.md). If you want to know more about how to specify different attribute types in an expression, see [Referring to item attributes when using expressions in DynamoDB](Expressions.Attributes.md). 

### Checking for attributes in an item
<a name="Expressions.ConditionExpressions.CheckingForAttributes"></a>

You can check for the existence (or nonexistence) of any attribute. If the condition expression evaluates to true, the operation succeeds; otherwise, it fails.

The following example uses `attribute_not_exists` to delete a product only if it does not have a `Price` attribute.

```
aws dynamodb delete-item \
    --table-name ProductCatalog \
    --key '{"Id": {"N": "456"}}' \
    --condition-expression "attribute_not_exists(Price)"
```

DynamoDB also provides an `attribute_exists` function. The following example deletes a product only if it has received poor reviews.

```
aws dynamodb delete-item \
    --table-name ProductCatalog \
    --key '{"Id": {"N": "456"}}' \
    --condition-expression "attribute_exists(ProductReviews.OneStar)"
```

### Checking for attribute type
<a name="Expressions.ConditionExpressions.CheckingForAttributeType"></a>

You can check the data type of an attribute value by using the `attribute_type` function. If the condition expression evaluates to true, the operation succeeds; otherwise, it fails.

The following example uses `attribute_type` to delete a product only if it has a `Color` attribute of type String Set. 

```
aws dynamodb delete-item \
    --table-name ProductCatalog \
    --key '{"Id": {"N": "456"}}' \
    --condition-expression "attribute_type(Color, :v_sub)" \
    --expression-attribute-values file://expression-attribute-values.json
```

The arguments for `--expression-attribute-values` are stored in the `expression-attribute-values.json` file.

```
{
    ":v_sub":{"S":"SS"}
}
```

### Checking string starting value
<a name="Expressions.ConditionExpressions.CheckingBeginsWith"></a>

You can check if a String attribute value begins with a particular substring by using the `begins_with` function. If the condition expression evaluates to true, the operation succeeds; otherwise, it fails. 

The following example uses `begins_with` to delete a product only if the `FrontView` element of the `Pictures` map starts with a specific value.

```
aws dynamodb delete-item \
    --table-name ProductCatalog \
    --key '{"Id": {"N": "456"}}' \
    --condition-expression "begins_with(Pictures.FrontView, :v_sub)" \
    --expression-attribute-values file://expression-attribute-values.json
```

The arguments for `--expression-attribute-values` are stored in the `expression-attribute-values.json` file.

```
{
    ":v_sub":{"S":"http://"}
}
```

### Checking for an element in a set
<a name="Expressions.ConditionExpressions.CheckingForContains"></a>

You can check for an element in a set or look for a substring within a string by using the `contains` function. If the condition expression evaluates to true, the operation succeeds; otherwise, it fails. 

The following example uses `contains` to delete a product only if the `Color` String Set has an element with a specific value. 

```
aws dynamodb delete-item \
    --table-name ProductCatalog \
    --key '{"Id": {"N": "456"}}' \
    --condition-expression "contains(Color, :v_sub)" \
    --expression-attribute-values file://expression-attribute-values.json
```

The arguments for `--expression-attribute-values` are stored in the `expression-attribute-values.json` file.

```
{
    ":v_sub":{"S":"Red"}
}
```

### Checking the size of an attribute value
<a name="Expressions.ConditionExpressions.CheckingForSize"></a>

You can check for the size of an attribute value by using the `size` function. If the condition expression evaluates to true, the operation succeeds; otherwise, it fails. 

The following example uses `size` to delete a product only if the size of the `VideoClip` Binary attribute is greater than `64000` bytes. 

```
aws dynamodb delete-item \
    --table-name ProductCatalog \
    --key '{"Id": {"N": "456"}}' \
    --condition-expression "size(VideoClip) > :v_sub" \
    --expression-attribute-values file://expression-attribute-values.json
```

The arguments for `--expression-attribute-values` are stored in the `expression-attribute-values.json` file.

```
{
    ":v_sub":{"N":"64000"}
}
```

# Using time to live (TTL) in DynamoDB
<a name="TTL"></a>

Time To Live (TTL) for DynamoDB is a cost-effective method for deleting items that are no longer relevant. TTL allows you to define a per-item expiration timestamp that indicates when an item is no longer needed. DynamoDB automatically deletes expired items within a few days of their expiration time, without consuming write throughput. 

To use TTL, first enable it on a table and then define a specific attribute to store the TTL expiration timestamp. The timestamp must be stored as a [Number](HowItWorks.NamingRulesDataTypes.md#HowItWorks.DataTypes) data type in [Unix epoch time format](https://en.wikipedia.org/wiki/Unix_time) at the seconds granularity. Items with a TTL attribute that is not a Number type are ignored by the TTL process. Each time an item is created or updated, you can compute the expiration time and save it in the TTL attribute.

Items with valid, expired TTL attributes may be deleted by the system at any time, typically within a few days of their expiration. You can still update expired items that are pending deletion, including changing or removing their TTL attributes. When updating an expired item, we recommend that you use a condition expression to make sure the item has not been subsequently deleted. Use filter expressions to remove expired items from [Scan](Scan.md#Scan.FilterExpression) and [Query](Query.FilterExpression.md) results.
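
For example, a `Scan` can exclude already-expired items with a filter expression on the TTL attribute. This is a minimal sketch assuming Boto3 is installed, credentials are configured, and a TTL attribute named `expireAt` (adjust to your table's configured attribute name):

```python
import time


def scan_unexpired(table_name):
    """Scan a table, filtering out items whose `expireAt` TTL has already passed."""
    import boto3
    from boto3.dynamodb.conditions import Attr

    table = boto3.resource("dynamodb").Table(table_name)
    now = int(time.time())
    # Items that have expired but are not yet deleted by TTL are excluded here.
    return table.scan(FilterExpression=Attr("expireAt").gt(now))["Items"]
```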

Items deleted by TTL work similarly to items removed by typical delete operations. Once deleted, they appear in DynamoDB Streams as service deletions rather than user deletes, and they are removed from local secondary indexes and global secondary indexes just like other delete operations. 

If you use [Global Tables version 2019.11.21 (Current)](GlobalTables.md) along with the TTL feature, DynamoDB replicates TTL deletes to all replica tables. The initial TTL delete does not consume write capacity units (WCUs) in the Region in which the TTL expiry occurs. However, the delete replicated to each replica table consumes a replicated write capacity unit (when using provisioned capacity) or a replicated write unit (when using on-demand capacity mode) in each replica Region, and applicable charges apply.

For more information about TTL, see these topics:

**Topics**
+ [Enable time to live (TTL) in DynamoDB](time-to-live-ttl-how-to.md)
+ [Computing time to live (TTL) in DynamoDB](time-to-live-ttl-before-you-start.md)
+ [Working with expired items and time to live (TTL)](ttl-expired-items.md)

# Enable time to live (TTL) in DynamoDB
<a name="time-to-live-ttl-how-to"></a>

**Note**  
To assist in debugging and verification of proper operation of the TTL feature, the values provided for the item TTL are logged in plaintext in DynamoDB diagnostic logs.

You can enable TTL using the Amazon DynamoDB console, the AWS Command Line Interface (AWS CLI), or the [Amazon DynamoDB API Reference](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/) with any of the supported AWS SDKs. It takes approximately one hour to enable TTL across all partitions.

## Enable DynamoDB TTL using the AWS console
<a name="time-to-live-ttl-how-to-enable-console"></a>

1. Sign in to the AWS Management Console and open the DynamoDB console at [https://console.aws.amazon.com/dynamodb/](https://console.aws.amazon.com/dynamodb/).

1. Choose **Tables**, and then choose the table that you want to modify.

1. In the **Additional settings** tab, in the **Time to Live (TTL)** section, choose **Turn on** to enable TTL.

1. When enabling TTL on a table, DynamoDB requires you to identify a specific attribute name that the service looks for when determining whether an item is eligible for expiration. The TTL attribute name, shown below, is case sensitive and must match the attribute defined in your read and write operations; a mismatch prevents expired items from being deleted. To rename the TTL attribute, you must disable TTL and then re-enable it with the new attribute. TTL continues to process deletions for approximately 30 minutes after it is disabled. TTL must be reconfigured on restored tables.  
![\[Case-sensitive TTL attribute name that DynamoDB uses to determine an item's eligibility for expiration.\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/images/EnableTTL-Settings.png)

1. (Optional) You can perform a test by simulating the date and time of the expiration and matching a few items. This provides you with a sample list of items and confirms that there are items containing the TTL attribute name provided along with the expiration time.

After TTL is enabled, the TTL attribute is marked **TTL** when you view items on the DynamoDB console. You can view the date and time that an item expires by hovering your pointer over the attribute. 

## Enable DynamoDB TTL using the API
<a name="time-to-live-ttl-how-to-enable-api"></a>

------
#### [ Python ]

You can enable TTL with code, using the [UpdateTimeToLive](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/dynamodb/client/update_time_to_live.html) operation.

```
import boto3


def enable_ttl(table_name, ttl_attribute_name):
    """
    Enables TTL on DynamoDB table for a given attribute name
        on success, returns a status code of 200
        on error, throws an exception

    :param table_name: Name of the DynamoDB table
    :param ttl_attribute_name: The name of the TTL attribute being provided to the table.
    """
    try:
        dynamodb = boto3.client('dynamodb')

        # Enable TTL on an existing DynamoDB table
        response = dynamodb.update_time_to_live(
            TableName=table_name,
            TimeToLiveSpecification={
                'Enabled': True,
                'AttributeName': ttl_attribute_name
            }
        )

        # In the returned response, check for a successful status code.
        if response['ResponseMetadata']['HTTPStatusCode'] == 200:
            print("TTL has been enabled successfully.")
        else:
            print(f"Failed to enable TTL, status code {response['ResponseMetadata']['HTTPStatusCode']}")
    except Exception as ex:
        print("Couldn't enable TTL in table %s. Here's why: %s" % (table_name, ex))
        raise


# your values
enable_ttl('your-table-name', 'expirationDate')
```

You can confirm TTL is enabled by using the [DescribeTimeToLive](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/dynamodb/client/describe_time_to_live.html) operation, which describes the TTL status on a table. The `TimeToLive` status is either `ENABLED` or `DISABLED`.

```
import boto3

# create a DynamoDB client
dynamodb = boto3.client('dynamodb')

# set the table name
table_name = 'YourTable'

# describe TTL; the status is in response['TimeToLiveDescription']['TimeToLiveStatus']
response = dynamodb.describe_time_to_live(TableName=table_name)
```

------
#### [ JavaScript ]

You can enable TTL with code, using the [UpdateTimeToLiveCommand](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/Package/-aws-sdk-client-dynamodb/Class/UpdateTimeToLiveCommand/) operation.

```
import { DynamoDBClient, UpdateTimeToLiveCommand } from "@aws-sdk/client-dynamodb";

const enableTTL = async (tableName, ttlAttribute) => {

    const client = new DynamoDBClient({});

    const params = {
        TableName: tableName,
        TimeToLiveSpecification: {
            Enabled: true,
            AttributeName: ttlAttribute
        }
    };

    try {
        const response = await client.send(new UpdateTimeToLiveCommand(params));
        if (response.$metadata.httpStatusCode === 200) {
            console.log(`TTL enabled successfully for table ${tableName}, using attribute name ${ttlAttribute}.`);
        } else {
            console.log(`Failed to enable TTL for table ${tableName}, response object: ${response}`);
        }
        return response;
    } catch (e) {
        console.error(`Error enabling TTL: ${e}`);
        throw e;
    }
};

// call with your own values
enableTTL('ExampleTable', 'exampleTtlAttribute');
```

------

## Enable Time to Live using the AWS CLI
<a name="time-to-live-ttl-how-to-enable-cli-sdk"></a>

1. Enable TTL on the `TTLExample` table.

   ```
   aws dynamodb update-time-to-live --table-name TTLExample --time-to-live-specification "Enabled=true, AttributeName=ttl"
   ```

1. Describe TTL on the `TTLExample` table.

   ```
   aws dynamodb describe-time-to-live --table-name TTLExample
   {
       "TimeToLiveDescription": {
           "AttributeName": "ttl",
           "TimeToLiveStatus": "ENABLED"
       }
   }
   ```

1. Add an item to the `TTLExample` table with the Time to Live attribute set, using the Bash shell and the AWS CLI. 

   ```
   EXP=`date -d '+5 days' +%s`
   aws dynamodb put-item --table-name "TTLExample" --item '{"id": {"N": "1"}, "ttl": {"N": "'$EXP'"}}'
   ```

This example starts with the current date, adds 5 days to it, and converts the result to epoch time format to create an expiration time. It then adds an item to the `TTLExample` table with that value in the `ttl` attribute. 

**Note**  
 One way to set expiration values for Time to Live is to calculate the number of seconds to add to the expiration time. For example, 5 days is 432,000 seconds. However, it is often preferable to start with a date and work from there.

It is fairly simple to get the current time in epoch time format, as in the following examples.
+ Linux Terminal: `date +%s`
+ Python: `import time; int(time.time())`
+ Java: `System.currentTimeMillis() / 1000L`
+ JavaScript: `Math.floor(Date.now() / 1000)`

## Enable DynamoDB TTL using CloudFormation
<a name="time-to-live-ttl-how-to-enable-cf"></a>

```
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  TTLExampleTable:
    Type: AWS::DynamoDB::Table
    Description: "A DynamoDB table with TTL Specification enabled"
    Properties:
      AttributeDefinitions:
        - AttributeName: "Album"
          AttributeType: "S"
        - AttributeName: "Artist"
          AttributeType: "S"
      KeySchema:
        - AttributeName: "Album"
          KeyType: "HASH"
        - AttributeName: "Artist"
          KeyType: "RANGE"
      ProvisionedThroughput:
        ReadCapacityUnits: "5"
        WriteCapacityUnits: "5"
      TimeToLiveSpecification:
        AttributeName: "TTLExampleAttribute"
        Enabled: true
```

For additional details on using TTL within your CloudFormation templates, see [AWS::DynamoDB::Table TimeToLiveSpecification](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-table-timetolivespecification.html).

# Computing time to live (TTL) in DynamoDB
<a name="time-to-live-ttl-before-you-start"></a>

A common way to implement TTL is to set an expiration time for items based on when they were created or last updated. This can be done by adding time to the `createdAt` and `updatedAt` timestamps. For example, the TTL for newly created items can be set to `createdAt` + 90 days. When the item is updated, the TTL can be recalculated to `updatedAt` + 90 days.

The computed expiration time must be in epoch format, in seconds. To be considered for expiry and deletion, the expiration time can't be more than five years in the past. If you use any other format, the TTL processes ignore the item. The item expires after the time you set. For example, if you set the expiration time to 1724241326 (which is Wednesday, August 21, 2024 11:55:26 (UTC)), the item expires after that time. There is no minimum TTL duration; you can set the expiration time to any future time, such as 5 minutes from the current time. However, DynamoDB typically deletes expired items within 48 hours after their expiration time, not immediately when the item expires.
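
The computation can be sketched in Python. The `ttl_in_days` helper name is illustrative, and the digit-count check guards against the common mistake of writing epoch milliseconds instead of seconds:

```python
from datetime import datetime, timedelta, timezone


def ttl_in_days(days, start=None):
    """Return an epoch-second TTL value `days` days after `start` (default: now, UTC)."""
    start = start or datetime.now(timezone.utc)
    return int((start + timedelta(days=days)).timestamp())


expire_at = ttl_in_days(90)
# Epoch seconds for present-day dates have 10 digits; a 13-digit value means
# you accidentally wrote milliseconds, and the item would never appear to expire.
assert len(str(expire_at)) == 10
```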

**Topics**
+ [Create an item and set the Time to Live](#time-to-live-ttl-before-you-start-create)
+ [Update an item and refresh the Time to Live](#time-to-live-ttl-before-you-start-update)

## Create an item and set the Time to Live
<a name="time-to-live-ttl-before-you-start-create"></a>

The following example demonstrates how to calculate the expiration time when creating a new item, using `expireAt` as the TTL attribute name. An assignment statement obtains the current time as a variable. In the example, the expiration time is calculated as 90 days from the current time. The time is then converted to epoch format and saved as an integer data type in the TTL attribute.

The following code examples show how to create an item with TTL.

------
#### [ Java ]

**SDK for Java 2.x**  

```
package com.amazon.samplelib.ttl;

import com.amazon.samplelib.CodeSampleUtils;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.DynamoDbException;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;
import software.amazon.awssdk.services.dynamodb.model.PutItemResponse;
import software.amazon.awssdk.services.dynamodb.model.ResourceNotFoundException;

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

/**
 * Creates an item in a DynamoDB table with TTL attributes.
 * This class demonstrates how to add TTL expiration timestamps to DynamoDB items.
 */
public class CreateTTL {

    private static final String USAGE =
        """
            Usage:
                <tableName> <primaryKey> <sortKey> <region>
            Where:
                tableName - The Amazon DynamoDB table being queried.
                primaryKey - The name of the primary key. Also known as the hash or partition key.
                sortKey - The name of the sort key. Also known as the range attribute.
                region (optional) - The AWS region that the Amazon DynamoDB table is located in. (Default: us-east-1)
            """;
    private static final int DAYS_TO_EXPIRE = 90;
    private static final int SECONDS_PER_DAY = 24 * 60 * 60;
    private static final String PRIMARY_KEY_ATTR = "primaryKey";
    private static final String SORT_KEY_ATTR = "sortKey";
    private static final String CREATION_DATE_ATTR = "creationDate";
    private static final String EXPIRE_AT_ATTR = "expireAt";
    private static final String SUCCESS_MESSAGE = "%s PutItem operation with TTL successful.";
    private static final String TABLE_NOT_FOUND_ERROR = "Error: The Amazon DynamoDB table \"%s\" can't be found.";

    private final DynamoDbClient dynamoDbClient;

    /**
     * Constructs a CreateTTL instance with the specified DynamoDB client.
     *
     * @param dynamoDbClient The DynamoDB client to use
     */
    public CreateTTL(final DynamoDbClient dynamoDbClient) {
        this.dynamoDbClient = dynamoDbClient;
    }

    /**
     * Constructs a CreateTTL with a default DynamoDB client.
     */
    public CreateTTL() {
        this.dynamoDbClient = null;
    }

    /**
     * Main method to demonstrate creating an item with TTL.
     *
     * @param args Command line arguments
     */
    public static void main(final String[] args) {
        try {
            int result = new CreateTTL().processArgs(args);
            System.exit(result);
        } catch (Exception e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
    }

    /**
     * Process command line arguments and create an item with TTL.
     *
     * @param args Command line arguments
     * @return 0 if successful, non-zero otherwise
     * @throws ResourceNotFoundException If the table doesn't exist
     * @throws DynamoDbException If an error occurs during the operation
     * @throws IllegalArgumentException If arguments are invalid
     */
    public int processArgs(final String[] args) {
        // Argument validation (remove or replace this line when reusing this code)
        CodeSampleUtils.validateArgs(args, new int[] {3, 4}, USAGE);

        final String tableName = args[0];
        final String primaryKey = args[1];
        final String sortKey = args[2];
        final Region region = Optional.ofNullable(args.length > 3 ? args[3] : null)
            .map(Region::of)
            .orElse(Region.US_EAST_1);

        try (DynamoDbClient ddb = dynamoDbClient != null
            ? dynamoDbClient
            : DynamoDbClient.builder().region(region).build()) {
            final CreateTTL createTTL = new CreateTTL(ddb);
            createTTL.createItemWithTTL(tableName, primaryKey, sortKey);
            return 0;
        } catch (Exception e) {
            throw e;
        }
    }

    /**
     * Creates an item in the specified table with TTL attributes.
     *
     * @param tableName The name of the table
     * @param primaryKeyValue The value for the primary key
     * @param sortKeyValue The value for the sort key
     * @return The response from the PutItem operation
     * @throws ResourceNotFoundException If the table doesn't exist
     * @throws DynamoDbException If an error occurs during the operation
     */
    public PutItemResponse createItemWithTTL(
        final String tableName, final String primaryKeyValue, final String sortKeyValue) {
        // Get current time in epoch second format
        final long createDate = System.currentTimeMillis() / 1000;

        // Calculate expiration time 90 days from now in epoch second format
        final long expireDate = createDate + (DAYS_TO_EXPIRE * SECONDS_PER_DAY);

        final Map<String, AttributeValue> itemMap = new HashMap<>();
        itemMap.put(
            PRIMARY_KEY_ATTR, AttributeValue.builder().s(primaryKeyValue).build());
        itemMap.put(SORT_KEY_ATTR, AttributeValue.builder().s(sortKeyValue).build());
        itemMap.put(
            CREATION_DATE_ATTR,
            AttributeValue.builder().n(String.valueOf(createDate)).build());
        itemMap.put(
            EXPIRE_AT_ATTR,
            AttributeValue.builder().n(String.valueOf(expireDate)).build());

        final PutItemRequest request =
            PutItemRequest.builder().tableName(tableName).item(itemMap).build();

        try {
            final PutItemResponse response = dynamoDbClient.putItem(request);
            System.out.println(String.format(SUCCESS_MESSAGE, tableName));
            return response;
        } catch (ResourceNotFoundException e) {
            System.err.format(TABLE_NOT_FOUND_ERROR, tableName);
            throw e;
        } catch (DynamoDbException e) {
            System.err.println(e.getMessage());
            throw e;
        }
    }
}
```
+  For API details, see [PutItem](https://docs.aws.amazon.com/goto/SdkForJavaV2/dynamodb-2012-08-10/PutItem) in *AWS SDK for Java 2.x API Reference*. 

------
#### [ JavaScript ]

**SDK for JavaScript (v3)**  

```
import { DynamoDBClient, PutItemCommand } from "@aws-sdk/client-dynamodb";

export function createDynamoDBItem(table_name, region, partition_key, sort_key) {
    const client = new DynamoDBClient({
        region: region,
        endpoint: `https://dynamodb.${region}.amazonaws.com`
    });

    // Get the current time in epoch second format
    const current_time = Math.floor(new Date().getTime() / 1000);

    // Calculate the expireAt time (90 days from now) in epoch second format
    const expire_at = Math.floor((new Date().getTime() + 90 * 24 * 60 * 60 * 1000) / 1000);

    // Create DynamoDB item
    const item = {
        'partitionKey': {'S': partition_key},
        'sortKey': {'S': sort_key},
        'createdAt': {'N': current_time.toString()},
        'expireAt': {'N': expire_at.toString()}
    };

    // Note: PutItem does not accept a ProvisionedThroughput parameter;
    // throughput is configured on the table, not per request.
    const putItemCommand = new PutItemCommand({
        TableName: table_name,
        Item: item,
    });

    client.send(putItemCommand, function(err, data) {
        if (err) {
            console.log("Exception encountered when creating item, here's what happened: ", err);
            throw err;
        } else {
            console.log("Item created successfully: %s.", data);
            return data;
        }
    });
}

// Example usage (commented out for testing)
// createDynamoDBItem('your-table-name', 'us-east-1', 'your-partition-key-value', 'your-sort-key-value');
```
+  For API details, see [PutItem](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/client/dynamodb/command/PutItemCommand) in *AWS SDK for JavaScript API Reference*. 

------
#### [ Python ]

**SDK for Python (Boto3)**  

```
from datetime import datetime, timedelta

import boto3


def create_dynamodb_item(table_name, region, primary_key, sort_key):
    """
    Creates a DynamoDB item with an attached expiry attribute.

    :param table_name: Table name for the boto3 resource to target when creating an item
    :param region: string representing the AWS region. Example: `us-east-1`
    :param primary_key: one attribute known as the partition key.
    :param sort_key: Also known as a range attribute.
    :return: Void (nothing)
    """
    try:
        dynamodb = boto3.resource("dynamodb", region_name=region)
        table = dynamodb.Table(table_name)

        # Get the current time in epoch second format
        current_time = int(datetime.now().timestamp())

        # Calculate the expiration time (90 days from now) in epoch second format
        expiration_time = int((datetime.now() + timedelta(days=90)).timestamp())

        item = {
            "primaryKey": primary_key,
            "sortKey": sort_key,
            "creationDate": current_time,
            "expireAt": expiration_time,
        }
        response = table.put_item(Item=item)

        print("Item created successfully.")
        return response
    except Exception as e:
        print(f"Error creating item: {e}")
        raise e


# Use your own values
create_dynamodb_item(
    "your-table-name", "us-west-2", "your-partition-key-value", "your-sort-key-value"
)
```
+  For API details, see [PutItem](https://docs.aws.amazon.com/goto/boto3/dynamodb-2012-08-10/PutItem) in *AWS SDK for Python (Boto3) API Reference*. 

------

## Update an item and refresh the Time to Live
<a name="time-to-live-ttl-before-you-start-update"></a>

This example is a continuation of the one from the [previous section](#time-to-live-ttl-before-you-start-create). The expiration time can be recomputed if the item is updated. The following example recomputes the `expireAt` timestamp to be 90 days from the current time.

The following code examples show how to update an item's TTL.

------
#### [ Java ]

**SDK for Java 2.x**  
Update TTL on an existing DynamoDB item in a table.  

```
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.DynamoDbException;
import software.amazon.awssdk.services.dynamodb.model.ResourceNotFoundException;
import software.amazon.awssdk.services.dynamodb.model.UpdateItemRequest;
import software.amazon.awssdk.services.dynamodb.model.UpdateItemResponse;

import java.util.HashMap;
import java.util.Map;

public class UpdateTTL {

    // Constants matching the CreateTTL class in the previous section.
    private static final int DAYS_TO_EXPIRE = 90;
    private static final int SECONDS_PER_DAY = 24 * 60 * 60;
    private static final String PRIMARY_KEY_ATTR = "primaryKey";
    private static final String SORT_KEY_ATTR = "sortKey";
    // Assumed update expression; adjust the attribute names to your schema.
    private static final String UPDATE_EXPRESSION = "SET updatedAt = :c, expireAt = :e";
    private static final String SUCCESS_MESSAGE = "%s UpdateItem operation with TTL successful.";
    private static final String TABLE_NOT_FOUND_ERROR = "Error: The Amazon DynamoDB table \"%s\" can't be found.";

    private final DynamoDbClient dynamoDbClient;

    public UpdateTTL(final DynamoDbClient dynamoDbClient) {
        this.dynamoDbClient = dynamoDbClient;
    }

    public UpdateItemResponse updateItemWithTTL(
        final String tableName, final String primaryKeyValue, final String sortKeyValue) {
        // Get current time in epoch second format
        final long currentTime = System.currentTimeMillis() / 1000;

        // Calculate expiration time 90 days from now in epoch second format
        final long expireDate = currentTime + (DAYS_TO_EXPIRE * SECONDS_PER_DAY);

        // Create the key map for the item to update
        final Map<String, AttributeValue> keyMap = new HashMap<>();
        keyMap.put(PRIMARY_KEY_ATTR, AttributeValue.builder().s(primaryKeyValue).build());
        keyMap.put(SORT_KEY_ATTR, AttributeValue.builder().s(sortKeyValue).build());

        // Create the expression attribute values
        final Map<String, AttributeValue> expressionAttributeValues = new HashMap<>();
        expressionAttributeValues.put(
            ":c", AttributeValue.builder().n(String.valueOf(currentTime)).build());
        expressionAttributeValues.put(
            ":e", AttributeValue.builder().n(String.valueOf(expireDate)).build());

        final UpdateItemRequest request = UpdateItemRequest.builder()
            .tableName(tableName)
            .key(keyMap)
            .updateExpression(UPDATE_EXPRESSION)
            .expressionAttributeValues(expressionAttributeValues)
            .build();

        try {
            final UpdateItemResponse response = dynamoDbClient.updateItem(request);
            System.out.println(String.format(SUCCESS_MESSAGE, tableName));
            return response;
        } catch (ResourceNotFoundException e) {
            System.err.format(TABLE_NOT_FOUND_ERROR, tableName);
            throw e;
        } catch (DynamoDbException e) {
            System.err.println(e.getMessage());
            throw e;
        }
    }
}
```
+  For API details, see [UpdateItem](https://docs.aws.amazon.com/goto/SdkForJavaV2/dynamodb-2012-08-10/UpdateItem) in *AWS SDK for Java 2.x API Reference*. 

------
#### [ JavaScript ]

**SDK for JavaScript (v3)**  

```
import { DynamoDBClient, UpdateItemCommand } from "@aws-sdk/client-dynamodb";
import { marshall, unmarshall } from "@aws-sdk/util-dynamodb";

export const updateItem = async (tableName, partitionKey, sortKey, region = 'us-east-1') => {
    const client = new DynamoDBClient({ region });

    const currentTime = Math.floor(Date.now() / 1000);
    const expireAt = Math.floor((Date.now() + 90 * 24 * 60 * 60 * 1000) / 1000);

    const params = {
        TableName: tableName,
        Key: marshall({
            partitionKey: partitionKey,
            sortKey: sortKey
        }),
        UpdateExpression: "SET updatedAt = :c, expireAt = :e",
        ExpressionAttributeValues: marshall({
            ":c": currentTime,
            ":e": expireAt
        }),
        ReturnValues: "ALL_NEW"
    };

    try {
        const data = await client.send(new UpdateItemCommand(params));
        const responseData = unmarshall(data.Attributes);
        console.log("Item updated successfully: %s", responseData);
        return responseData;
    } catch (err) {
        console.error("Error updating item:", err);
        throw err;
    }
}

// Example usage (commented out for testing)
// updateItem('your-table-name', 'your-partition-key-value', 'your-sort-key-value');
```
+  For API details, see [UpdateItem](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/client/dynamodb/command/UpdateItemCommand) in *AWS SDK for JavaScript API Reference*. 

------
#### [ Python ]

**SDK for Python (Boto3)**  

```
from datetime import datetime, timedelta

import boto3


def update_dynamodb_item(table_name, region, primary_key, sort_key):
    """
    Update an existing DynamoDB item with a TTL.
    :param table_name: Name of the DynamoDB table
    :param region: AWS Region of the table - example `us-east-1`
    :param primary_key: Value of the partition key
    :param sort_key: Value of the sort key (also known as the range attribute)
    :return: None
    """
    try:
        # Create the DynamoDB resource.
        dynamodb = boto3.resource("dynamodb", region_name=region)
        table = dynamodb.Table(table_name)

        # Get the current time in epoch second format
        current_time = int(datetime.now().timestamp())

        # Calculate the expireAt time (90 days from now) in epoch second format
        expire_at = int((datetime.now() + timedelta(days=90)).timestamp())

        table.update_item(
            Key={"partitionKey": primary_key, "sortKey": sort_key},
            UpdateExpression="set updatedAt=:c, expireAt=:e",
            ExpressionAttributeValues={":c": current_time, ":e": expire_at},
        )

        print("Item updated successfully.")
    except Exception as e:
        print(f"Error updating item: {e}")


# Replace with your own values
update_dynamodb_item(
    "your-table-name", "us-west-2", "your-partition-key-value", "your-sort-key-value"
)
```
+  For API details, see [UpdateItem](https://docs.aws.amazon.com/goto/boto3/dynamodb-2012-08-10/UpdateItem) in *AWS SDK for Python (Boto3) API Reference*. 

------

The TTL examples in this introduction demonstrate a pattern for keeping only recently updated items in a table. Updated items have their lifespan extended, whereas items that are never updated after creation expire and are deleted at no additional cost, reducing storage and keeping tables clean.

# Working with expired items and time to live (TTL)
<a name="ttl-expired-items"></a>

Expired items that are pending deletion can be filtered from read and write operations. This is useful when expired data is no longer valid and should not be used. If expired items aren't filtered out, they continue to appear in read and write operations until the background process deletes them.

**Note**  
These items still count towards storage and read costs until they are deleted.

TTL deletions can be identified in DynamoDB Streams, but only in the Region where the deletion occurred. TTL deletions that are replicated to other Regions of a global table are not identifiable in DynamoDB Streams in those Regions.

## Filter expired items from read operations
<a name="ttl-expired-items-filter"></a>

For read operations such as [Scan](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Scan.html) and [Query](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html), a filter expression can filter out expired items that are pending deletion. As shown in the following code snippets, the filter expression excludes items whose TTL time is less than or equal to the current time. For example, the Python SDK code obtains the current time as a variable (`current_time`) and converts it to an `int` for epoch time format.

The following code examples show how to query for TTL items.

------
#### [ Java ]

**SDK for Java 2.x**  
Query with a filter expression that returns only unexpired TTL items from a DynamoDB table, using the AWS SDK for Java 2.x.  

```
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.DynamoDbException;
import software.amazon.awssdk.services.dynamodb.model.QueryRequest;
import software.amazon.awssdk.services.dynamodb.model.QueryResponse;
import software.amazon.awssdk.services.dynamodb.model.ResourceNotFoundException;

import java.util.Map;
import java.util.Optional;

        final QueryRequest request = QueryRequest.builder()
            .tableName(tableName)
            .keyConditionExpression(KEY_CONDITION_EXPRESSION)
            .filterExpression(FILTER_EXPRESSION)
            .expressionAttributeNames(expressionAttributeNames)
            .expressionAttributeValues(expressionAttributeValues)
            .build();

        try (DynamoDbClient ddb = dynamoDbClient != null
            ? dynamoDbClient
            : DynamoDbClient.builder().region(region).build()) {
            final QueryResponse response = ddb.query(request);
            System.out.println("Query successful. Found " + response.count() + " items that have not expired yet.");

            // Print each item
            response.items().forEach(item -> {
                System.out.println("Item: " + item);
            });

            return 0;
        } catch (ResourceNotFoundException e) {
            System.err.format(TABLE_NOT_FOUND_ERROR, tableName);
            throw e;
        } catch (DynamoDbException e) {
            System.err.println(e.getMessage());
            throw e;
        }
```
+  For API details, see [Query](https://docs.aws.amazon.com/goto/SdkForJavaV2/dynamodb-2012-08-10/Query) in *AWS SDK for Java 2.x API Reference*. 

------
#### [ JavaScript ]

**SDK for JavaScript (v3)**  
Query with a filter expression that returns only unexpired TTL items from a DynamoDB table, using the AWS SDK for JavaScript.  

```
import { DynamoDBClient, QueryCommand } from "@aws-sdk/client-dynamodb";
import { marshall, unmarshall } from "@aws-sdk/util-dynamodb";

export const queryFiltered = async (tableName, primaryKey, region = 'us-east-1') => {
    const client = new DynamoDBClient({ region });

    const currentTime = Math.floor(Date.now() / 1000);

    const params = {
        TableName: tableName,
        KeyConditionExpression: "#pk = :pk",
        FilterExpression: "#ea > :ea",
        ExpressionAttributeNames: {
            "#pk": "primaryKey",
            "#ea": "expireAt"
        },
        ExpressionAttributeValues: marshall({
            ":pk": primaryKey,
            ":ea": currentTime
        })
    };

    try {
        const { Items } = await client.send(new QueryCommand(params));
        Items.forEach(item => {
            console.log(unmarshall(item))
        });
        return Items;
    } catch (err) {
        console.error(`Error querying items: ${err}`);
        throw err;
    }
}

// Example usage (commented out for testing)
// queryFiltered('your-table-name', 'your-partition-key-value');
```
+  For API details, see [Query](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/client/dynamodb/command/QueryCommand) in *AWS SDK for JavaScript API Reference*. 

------
#### [ Python ]

**SDK for Python (Boto3)**  
Query with a filter expression that returns only unexpired TTL items from a DynamoDB table, using the AWS SDK for Python (Boto3).  

```
from datetime import datetime

import boto3
from boto3.dynamodb.conditions import Attr, Key


def query_dynamodb_items(table_name, partition_key):
    """
    Query a DynamoDB table and use a filter expression to exclude items
    whose TTL has already expired.

    :param table_name: Name of the DynamoDB table
    :param partition_key: Value of the partition key to query for
    :return: None
    """
    try:
        # Initialize a DynamoDB resource
        dynamodb = boto3.resource("dynamodb", region_name="us-east-1")

        # Specify your table
        table = dynamodb.Table(table_name)

        # Get the current time in epoch second format
        current_time = int(datetime.now().timestamp())

        # Perform the query operation with a filter expression to exclude expired items
        response = table.query(
            KeyConditionExpression=Key("partitionKey").eq(partition_key),
            FilterExpression=Attr("expireAt").gt(current_time),
        )

        # Print the items that are not expired
        for item in response["Items"]:
            print(item)

    except Exception as e:
        print(f"Error querying items: {e}")


# Call the function with your values
query_dynamodb_items("your-table-name", "your-partition-key-value")
```
+  For API details, see [Query](https://docs.aws.amazon.com/goto/boto3/dynamodb-2012-08-10/Query) in *AWS SDK for Python (Boto3) API Reference*. 

------

## Conditionally write to expired items
<a name="ttl-expired-items-conditional-write"></a>

A condition expression can be used to avoid writing to expired items. The following conditional update checks whether the item's expiration time is greater than the current time. If the condition evaluates to true, the write proceeds; if not, DynamoDB rejects the write with a `ConditionalCheckFailedException`.

The following code examples show how to conditionally update an item's TTL.

------
#### [ Java ]

**SDK for Java 2.x**  
Update TTL on an existing DynamoDB item in a table, with a condition.  

```
package com.amazon.samplelib.ttl;

import com.amazon.samplelib.CodeSampleUtils;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.ConditionalCheckFailedException;
import software.amazon.awssdk.services.dynamodb.model.DynamoDbException;
import software.amazon.awssdk.services.dynamodb.model.ResourceNotFoundException;
import software.amazon.awssdk.services.dynamodb.model.UpdateItemRequest;
import software.amazon.awssdk.services.dynamodb.model.UpdateItemResponse;

import java.util.Map;
import java.util.Optional;

/**
 * Updates an item in a DynamoDB table with TTL attributes using a conditional expression.
 * This class demonstrates how to conditionally update TTL expiration timestamps.
 */
public class UpdateTTLConditional {

    private static final String USAGE =
        """
            Usage:
                <tableName> <primaryKey> <sortKey> <region>
            Where:
                tableName - The Amazon DynamoDB table being queried.
                primaryKey - The name of the primary key. Also known as the hash or partition key.
                sortKey - The name of the sort key. Also known as the range attribute.
                region (optional) - The AWS region that the Amazon DynamoDB table is located in. (Default: us-east-1)
            """;
    private static final int DAYS_TO_EXPIRE = 90;
    private static final int SECONDS_PER_DAY = 24 * 60 * 60;
    private static final String PRIMARY_KEY_ATTR = "primaryKey";
    private static final String SORT_KEY_ATTR = "sortKey";
    private static final String UPDATED_AT_ATTR = "updatedAt";
    private static final String EXPIRE_AT_ATTR = "expireAt";
    private static final String UPDATE_EXPRESSION = "SET " + UPDATED_AT_ATTR + "=:c, " + EXPIRE_AT_ATTR + "=:e";
    private static final String CONDITION_EXPRESSION = "attribute_exists(" + PRIMARY_KEY_ATTR + ")";
    private static final String SUCCESS_MESSAGE = "%s UpdateItem operation with TTL successful.";
    private static final String CONDITION_FAILED_MESSAGE = "Condition check failed. Item does not exist.";
    private static final String TABLE_NOT_FOUND_ERROR = "Error: The Amazon DynamoDB table \"%s\" can't be found.";

    private final DynamoDbClient dynamoDbClient;

    /**
     * Constructs an UpdateTTLConditional with a default DynamoDB client.
     */
    public UpdateTTLConditional() {
        this.dynamoDbClient = null;
    }

    /**
     * Constructs an UpdateTTLConditional with the specified DynamoDB client.
     *
     * @param dynamoDbClient The DynamoDB client to use
     */
    public UpdateTTLConditional(final DynamoDbClient dynamoDbClient) {
        this.dynamoDbClient = dynamoDbClient;
    }

    /**
     * Main method to demonstrate conditionally updating an item with TTL.
     *
     * @param args Command line arguments
     */
    public static void main(final String[] args) {
        try {
            int result = new UpdateTTLConditional().processArgs(args);
            System.exit(result);
        } catch (Exception e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
    }

    /**
     * Process command line arguments and conditionally update an item with TTL.
     *
     * @param args Command line arguments
     * @return 0 if successful, non-zero otherwise
     * @throws ResourceNotFoundException If the table doesn't exist
     * @throws DynamoDbException If an error occurs during the operation
     * @throws IllegalArgumentException If arguments are invalid
     */
    public int processArgs(final String[] args) {
        // Argument validation (remove or replace this line when reusing this code)
        CodeSampleUtils.validateArgs(args, new int[] {3, 4}, USAGE);

        final String tableName = args[0];
        final String primaryKey = args[1];
        final String sortKey = args[2];
        final Region region = Optional.ofNullable(args.length > 3 ? args[3] : null)
            .map(Region::of)
            .orElse(Region.US_EAST_1);

        // Get current time in epoch second format
        final long currentTime = System.currentTimeMillis() / 1000;

        // Calculate expiration time 90 days from now in epoch second format
        final long expireDate = currentTime + (DAYS_TO_EXPIRE * SECONDS_PER_DAY);

        // Create the key map for the item to update
        final Map<String, AttributeValue> keyMap = Map.of(
            PRIMARY_KEY_ATTR, AttributeValue.builder().s(primaryKey).build(),
            SORT_KEY_ATTR, AttributeValue.builder().s(sortKey).build());

        // Create the expression attribute values
        final Map<String, AttributeValue> expressionAttributeValues = Map.of(
            ":c", AttributeValue.builder().n(String.valueOf(currentTime)).build(),
            ":e", AttributeValue.builder().n(String.valueOf(expireDate)).build());

        final UpdateItemRequest request = UpdateItemRequest.builder()
            .tableName(tableName)
            .key(keyMap)
            .updateExpression(UPDATE_EXPRESSION)
            .conditionExpression(CONDITION_EXPRESSION)
            .expressionAttributeValues(expressionAttributeValues)
            .build();

        try (DynamoDbClient ddb = dynamoDbClient != null
            ? dynamoDbClient
            : DynamoDbClient.builder().region(region).build()) {
            final UpdateItemResponse response = ddb.updateItem(request);
            System.out.println(String.format(SUCCESS_MESSAGE, tableName));
            return 0;
        } catch (ConditionalCheckFailedException e) {
            System.err.println(CONDITION_FAILED_MESSAGE);
            throw e;
        } catch (ResourceNotFoundException e) {
            System.err.format(TABLE_NOT_FOUND_ERROR, tableName);
            throw e;
        } catch (DynamoDbException e) {
            System.err.println(e.getMessage());
            throw e;
        }
    }
}
```
+  For API details, see [UpdateItem](https://docs.aws.amazon.com/goto/SdkForJavaV2/dynamodb-2012-08-10/UpdateItem) in *AWS SDK for Java 2.x API Reference*. 

------
#### [ JavaScript ]

**SDK for JavaScript (v3)**  
Update TTL on an existing DynamoDB item in a table, with a condition.  

```
import { DynamoDBClient, UpdateItemCommand } from "@aws-sdk/client-dynamodb";
import { marshall, unmarshall } from "@aws-sdk/util-dynamodb";

export const updateItemConditional = async (tableName, partitionKey, sortKey, region = 'us-east-1', newAttribute = 'default-value') => {
    const client = new DynamoDBClient({ region });

    const currentTime = Math.floor(Date.now() / 1000);

    const params = {
        TableName: tableName,
        Key: marshall({
            artist: partitionKey,
            album: sortKey
        }),
        UpdateExpression: "SET newAttribute = :newAttribute",
        ConditionExpression: "expireAt > :expiration",
        ExpressionAttributeValues: marshall({
            ':newAttribute': newAttribute,
            ':expiration': currentTime
        }),
        ReturnValues: "ALL_NEW"
    };

    try {
        const response = await client.send(new UpdateItemCommand(params));
        const responseData = unmarshall(response.Attributes);
        console.log("Item updated successfully: ", responseData);
        return responseData;
    } catch (error) {
        if (error.name === "ConditionalCheckFailedException") {
            console.log("Condition check failed: Item's 'expireAt' is expired.");
        } else {
            console.error("Error updating item: ", error);
        }
        throw error;
    }
};

// Example usage (commented out for testing)
// updateItemConditional('your-table-name', 'your-partition-key-value', 'your-sort-key-value');
```
+  For API details, see [UpdateItem](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/client/dynamodb/command/UpdateItemCommand) in *AWS SDK for JavaScript API Reference*. 

------
#### [ Python ]

**SDK for Python (Boto3)**  
Update TTL on an existing DynamoDB item in a table, with a condition.  

```
from datetime import datetime, timedelta

import boto3
from botocore.exceptions import ClientError


def update_dynamodb_item_ttl(table_name, region, primary_key, sort_key, ttl_attribute):
    """
    Conditionally updates an existing item, but only if its TTL has not yet expired.

    :param table_name: Name of the DynamoDB table
    :param region: AWS Region of the table - example `us-east-1`
    :param primary_key: Value of the partition key
    :param sort_key: Value of the sort key (also known as the range attribute)
    :param ttl_attribute: Value to write to the item's `newAttribute` attribute
    :return: HTTP status code of the update (ideally 200), or None on error
    """
    try:
        dynamodb = boto3.resource("dynamodb", region_name=region)
        table = dynamodb.Table(table_name)

        # Get the current time in epoch second format
        current_time = int(datetime.now().timestamp())

        # Define the update expression for adding/updating a new attribute
        update_expression = "SET newAttribute = :val1"

        # Define the condition expression that checks whether 'expireAt' has not expired
        condition_expression = "expireAt > :val2"

        # Define the expression attribute values: compare 'expireAt' against the current time
        expression_attribute_values = {":val1": ttl_attribute, ":val2": current_time}

        response = table.update_item(
            Key={"primaryKey": primary_key, "sortKey": sort_key},
            UpdateExpression=update_expression,
            ConditionExpression=condition_expression,
            ExpressionAttributeValues=expression_attribute_values,
        )

        print("Item updated successfully.")
        return response["ResponseMetadata"]["HTTPStatusCode"]  # Ideally a 200 OK
    except ClientError as e:
        if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
            print("Condition check failed: Item's 'expireAt' is expired.")
        else:
            print(f"Error updating item: {e}")
    except Exception as e:
        print(f"Error updating item: {e}")


# replace with your values
update_dynamodb_item_ttl(
    "your-table-name",
    "us-east-1",
    "your-partition-key-value",
    "your-sort-key-value",
    "your-ttl-attribute-value",
)
```
+  For API details, see [UpdateItem](https://docs.aws.amazon.com/goto/boto3/dynamodb-2012-08-10/UpdateItem) in *AWS SDK for Python (Boto3) API Reference*. 

------

## Identifying deleted items in DynamoDB Streams
<a name="ttl-expired-items-identifying"></a>

Each stream record contains a user identity field, `Records[<index>].userIdentity`. Items that are deleted by the TTL process have the following field values:

```
Records[<index>].userIdentity.type
"Service"

Records[<index>].userIdentity.principalId
"dynamodb.amazonaws.com"
```

The following JSON shows the relevant portion of a single streams record:

```
"Records": [
    {
        ...
        "userIdentity": {
            "type": "Service",
            "principalId": "dynamodb.amazonaws.com"
        }
        ...
    }
]
```
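
When processing a stream, a consumer can use these fields to distinguish TTL deletions from user-initiated deletions. The following is a minimal Python sketch; the helper name and sample records are our own, but the field names follow the stream record format shown above:

```
def is_ttl_delete(record):
    """Return True if a DynamoDB Streams record was produced by a TTL deletion."""
    user_identity = record.get("userIdentity") or {}
    return (
        record.get("eventName") == "REMOVE"
        and user_identity.get("type") == "Service"
        and user_identity.get("principalId") == "dynamodb.amazonaws.com"
    )


# Hypothetical sample records for illustration
ttl_record = {
    "eventName": "REMOVE",
    "userIdentity": {"type": "Service", "principalId": "dynamodb.amazonaws.com"},
}
user_delete_record = {"eventName": "REMOVE"}  # deleted by a user; no userIdentity field

print(is_ttl_delete(ttl_record))          # True
print(is_ttl_delete(user_delete_record))  # False
```

A check like this is useful in a Lambda function subscribed to the stream, for example to archive TTL-expired items to colder storage while ignoring ordinary deletes.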

# Querying tables in DynamoDB
<a name="Query"></a>

You can use the `Query` API operation in Amazon DynamoDB to find items based on primary key values.

You must provide the name of the partition key attribute and a single value for that attribute. `Query` returns all items with that partition key value. Optionally, you can provide a sort key attribute and use a comparison operator to refine the search results.

For more information on how to use `Query`, such as the request syntax, response parameters, and additional examples, see [Query](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html) in the *Amazon DynamoDB API Reference*.

**Topics**
+ [Key condition expressions for the Query operation in DynamoDB](Query.KeyConditionExpressions.md)
+ [Filter expressions for the Query operation in DynamoDB](Query.FilterExpression.md)
+ [Paginating table query results in DynamoDB](Query.Pagination.md)
+ [Other aspects of working with the Query operation in DynamoDB](Query.Other.md)

# Key condition expressions for the Query operation in DynamoDB
<a name="Query.KeyConditionExpressions"></a>

You can use any attribute name in a key condition expression, provided that the first character is `a-z` or `A-Z` and the rest of the characters (starting from the second character, if present) are `a-z`, `A-Z`, or `0-9`. In addition, the attribute name must not be a DynamoDB reserved word. (For a complete list of these, see [Reserved words in DynamoDB](ReservedWords.md).) If an attribute name does not meet these requirements, you must define an expression attribute name as a placeholder. For more information, see [Expression attribute names (aliases) in DynamoDB](Expressions.ExpressionAttributeNames.md).

For items with a given partition key value, DynamoDB stores these items close together, in sorted order by sort key value. In a `Query` operation, DynamoDB retrieves the items in sorted order, and then processes the items using `KeyConditionExpression` and any `FilterExpression` that might be present. Only then are the `Query` results sent back to the client.

A `Query` operation always returns a result set. If no matching items are found, the result set is empty.

`Query` results are always sorted by the sort key value. If the data type of the sort key is `Number`, the results are returned in numeric order. Otherwise, the results are returned in order of UTF-8 bytes. By default, the sort order is ascending. To reverse the order, set the `ScanIndexForward` parameter to `false`.
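
As a quick illustration of what "order of UTF-8 bytes" means for string sort keys (plain Python, independent of DynamoDB), note that all uppercase ASCII letters sort before any lowercase letter:

```
# Sorting by raw UTF-8 bytes, as DynamoDB does for string sort keys.
titles = ["apple", "Banana", "zebra", "Zoo"]
by_utf8 = sorted(titles, key=lambda s: s.encode("utf-8"))

print(by_utf8)  # ['Banana', 'Zoo', 'apple', 'zebra']
```

If you need case-insensitive ordering, store a normalized (for example, lowercased) copy of the value in the sort key attribute.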

A single `Query` operation can retrieve a maximum of 1 MB of data. This limit applies before any `FilterExpression` or `ProjectionExpression` is applied to the results. If `LastEvaluatedKey` is present in the response and is non-null, you must paginate the result set (see [Paginating table query results in DynamoDB](Query.Pagination.md)).

## Key condition expression examples
<a name="Query.KeyConditionExpressions-example"></a>

To specify the search criteria, you use a *key condition expression*—a string that determines the items to be read from the table or index.

You must specify the partition key name and value as an equality condition. You cannot use a non-key attribute in a key condition expression.

You can optionally provide a second condition for the sort key (if present). The sort key condition must use one of the following comparison operators:
+ `a = b` — true if the attribute *a* is equal to the value *b*
+ `a < b` — true if *a* is less than *b*
+ `a <= b` — true if *a* is less than or equal to *b*
+ `a > b` — true if *a* is greater than *b*
+ `a >= b` — true if *a* is greater than or equal to *b*
+ `a BETWEEN b AND c` — true if *a* is greater than or equal to *b*, and less than or equal to *c*.

The following function is also supported:
+ `begins_with(a, substr)` — true if the value of attribute `a` begins with a particular substring.

The following AWS Command Line Interface (AWS CLI) examples demonstrate the use of key condition expressions. These expressions use placeholders (such as `:name` and `:sub`) instead of actual values. For more information, see [Expression attribute names (aliases) in DynamoDB](Expressions.ExpressionAttributeNames.md) and [Using expression attribute values in DynamoDB](Expressions.ExpressionAttributeValues.md).

**Example**  
Query the `Thread` table for a particular `ForumName` (partition key). All of the items with that `ForumName` value are read by the query because the sort key (`Subject`) is not included in `KeyConditionExpression`.  

```
aws dynamodb query \
    --table-name Thread \
    --key-condition-expression "ForumName = :name" \
    --expression-attribute-values  '{":name":{"S":"Amazon DynamoDB"}}'
```

**Example**  
Query the `Thread` table for a particular `ForumName` (partition key), but this time return only the items with a given `Subject` (sort key).  

```
aws dynamodb query \
    --table-name Thread \
    --key-condition-expression "ForumName = :name and Subject = :sub" \
    --expression-attribute-values  file://values.json
```
The arguments for `--expression-attribute-values` are stored in the `values.json` file.  

```
{
    ":name":{"S":"Amazon DynamoDB"},
    ":sub":{"S":"DynamoDB Thread 1"}
}
```

**Example**  
Query the `Reply` table for a particular `Id` (partition key), but return only those items whose `ReplyDateTime` (sort key) begins with certain characters.  

```
aws dynamodb query \
    --table-name Reply \
    --key-condition-expression "Id = :id and begins_with(ReplyDateTime, :dt)" \
    --expression-attribute-values  file://values.json
```
The arguments for `--expression-attribute-values` are stored in the `values.json` file.  

```
{
    ":id":{"S":"Amazon DynamoDB#DynamoDB Thread 1"},
    ":dt":{"S":"2015-09"}
}
```

# Filter expressions for the Query operation in DynamoDB
<a name="Query.FilterExpression"></a>

If you need to further refine the `Query` results, you can optionally provide a filter expression. A *filter expression* determines which items within the `Query` results should be returned to you. All of the other results are discarded.

A filter expression is applied after a `Query` finishes, but before the results are returned. Therefore, a `Query` consumes the same amount of read capacity, regardless of whether a filter expression is present.

A `Query` operation can retrieve a maximum of 1 MB of data. This limit applies before the filter expression is evaluated.

A filter expression cannot contain partition key or sort key attributes. You need to specify those attributes in the key condition expression, not the filter expression.

The syntax for a filter expression is similar to that of a key condition expression. Filter expressions can use the same comparators, functions, and logical operators as a key condition expression. In addition, filter expressions can use the not-equals operator (`<>`), the `OR` operator, the `CONTAINS` operator, the `IN` operator, the `BEGINS_WITH` operator, the `BETWEEN` operator, the `EXISTS` operator, and the `SIZE` operator. For more information, see [Key condition expressions for the Query operation in DynamoDB](Query.KeyConditionExpressions.md) and [Syntax for filter and condition expressions](Expressions.OperatorsAndFunctions.md#Expressions.OperatorsAndFunctions.Syntax).

**Example**  
The following AWS CLI example queries the `Thread` table for a particular `ForumName` (partition key) and `Subject` (sort key). Of the items that are found, only the most popular discussion threads are returned—in other words, only those threads with more than a certain number of `Views`.  

```
aws dynamodb query \
    --table-name Thread \
    --key-condition-expression "ForumName = :fn and Subject = :sub" \
    --filter-expression "#v >= :num" \
    --expression-attribute-names '{"#v": "Views"}' \
    --expression-attribute-values file://values.json
```
The arguments for `--expression-attribute-values` are stored in the `values.json` file.  

```
{
    ":fn":{"S":"Amazon DynamoDB"},
    ":sub":{"S":"DynamoDB Thread 1"},
    ":num":{"N":"3"}
}
```
Note that `Views` is a reserved word in DynamoDB (see [Reserved words in DynamoDB](ReservedWords.md)), so this example uses `#v` as a placeholder. For more information, see [Expression attribute names (aliases) in DynamoDB](Expressions.ExpressionAttributeNames.md).

**Note**  
A filter expression removes items from the `Query` result set. If possible, avoid using `Query` where you expect to retrieve a large number of items but also need to discard most of those items.

# Paginating table query results in DynamoDB
<a name="Query.Pagination"></a>

DynamoDB *paginates* the results from `Query` operations. With pagination, the `Query` results are divided into "pages" of data that are 1 MB in size (or less). An application can process the first page of results, then the second page, and so on.

A single `Query` only returns a result set that fits within the 1 MB size limit. To determine whether there are more results, and to retrieve them one page at a time, applications should do the following: 

1. Examine the low-level `Query` result:
   + If the result contains a `LastEvaluatedKey` element and it's non-null, proceed to step 2.
   + If there is *not* a `LastEvaluatedKey` in the result, there are no more items to be retrieved.

1. Construct a `Query` with the same `KeyConditionExpression`. However, this time, take the `LastEvaluatedKey` value from step 1 and use it as the `ExclusiveStartKey` parameter in the new `Query` request.

1. Run the new `Query` request.

1. Go to step 1.

In other words, the `LastEvaluatedKey` from a `Query` response should be used as the `ExclusiveStartKey` for the next `Query` request. If there is not a `LastEvaluatedKey` element in a `Query` response, then you have retrieved the final page of results. If `LastEvaluatedKey` is not empty, it does not necessarily mean that there is more data in the result set. The only way to know when you have reached the end of the result set is when `LastEvaluatedKey` is empty.
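The loop above can be sketched in a few lines of Python. This is a minimal sketch of the pagination pattern only; `run_query` is a stand-in for a low-level `Query` call (for example, boto3's `client.query`), simulated here over a local list so the control flow can be shown without a live table.

```python
def run_query(exclusive_start_key=None, page_size=2):
    """Stand-in for a low-level Query call, paginating a local list.

    A real client would accept the same ExclusiveStartKey input and
    return the same LastEvaluatedKey shape in its response.
    """
    items = [{"pk": "thread", "sk": i} for i in range(5)]
    start = 0 if exclusive_start_key is None else exclusive_start_key + 1
    page = items[start:start + page_size]
    response = {"Items": page, "Count": len(page)}
    last = start + page_size - 1
    if last < len(items) - 1:          # more items remain past this page
        response["LastEvaluatedKey"] = last
    return response

def query_all_pages():
    """Feed each LastEvaluatedKey back as ExclusiveStartKey until absent."""
    results, start_key = [], None
    while True:
        resp = run_query(exclusive_start_key=start_key)
        results.extend(resp["Items"])
        start_key = resp.get("LastEvaluatedKey")
        if start_key is None:          # no LastEvaluatedKey: final page
            break
    return results

print(len(query_all_pages()))  # 5: all items retrieved across three pages
```

The key point is the termination test: the loop stops only when `LastEvaluatedKey` is absent from the response, never based on how many items a page contains.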

You can use the AWS CLI to view this behavior. The AWS CLI sends low-level `Query` requests to DynamoDB repeatedly, until `LastEvaluatedKey` is no longer present in the results. Consider the following AWS CLI example that retrieves movie titles from a particular year.

```
aws dynamodb query --table-name Movies \
    --projection-expression "title" \
    --key-condition-expression "#y = :yyyy" \
    --expression-attribute-names '{"#y":"year"}' \
    --expression-attribute-values '{":yyyy":{"N":"1993"}}' \
    --page-size 5 \
    --debug
```

Ordinarily, the AWS CLI handles pagination automatically. However, in this example, the AWS CLI `--page-size` parameter limits the number of items per page. The `--debug` parameter prints low-level information about requests and responses.

If you run the example, the first response from DynamoDB looks similar to the following.

```
2017-07-07 11:13:15,603 - MainThread - botocore.parsers - DEBUG - Response body:
b'{"Count":5,"Items":[{"title":{"S":"A Bronx Tale"}},
{"title":{"S":"A Perfect World"}},{"title":{"S":"Addams Family Values"}},
{"title":{"S":"Alive"}},{"title":{"S":"Benny & Joon"}}],
"LastEvaluatedKey":{"year":{"N":"1993"},"title":{"S":"Benny & Joon"}},
"ScannedCount":5}'
```

The `LastEvaluatedKey` in the response indicates that not all of the items have been retrieved. The AWS CLI then issues another `Query` request to DynamoDB. This request and response pattern continues, until the final response.

```
2017-07-07 11:13:16,291 - MainThread - botocore.parsers - DEBUG - Response body:
b'{"Count":1,"Items":[{"title":{"S":"What\'s Eating Gilbert Grape"}}],"ScannedCount":1}'
```

The absence of `LastEvaluatedKey` indicates that there are no more items to retrieve.

**Note**  
The AWS SDKs handle the low-level DynamoDB responses (including the presence or absence of `LastEvaluatedKey`) and provide various abstractions for paginating `Query` results. For example, the SDK for Java document interface provides `java.util.Iterator` support so that you can walk through the results one at a time.  
For code examples in various programming languages, see the [Amazon DynamoDB Getting Started Guide](https://docs.aws.amazon.com/amazondynamodb/latest/gettingstartedguide/) and the AWS SDK documentation for your language.

You can also reduce page size by limiting the number of items in the result set, with the `Limit` parameter of the `Query` operation.

For more information about querying with DynamoDB, see [Querying tables in DynamoDB](Query.md).

# Other aspects of working with the Query operation in DynamoDB
<a name="Query.Other"></a>

This section covers additional aspects of the DynamoDB Query operation, including limiting result size, counting scanned vs. returned items, monitoring read capacity consumption, and controlling read consistency.

## Limiting the number of items in the result set
<a name="Query.Limit"></a>

With the `Query` operation, you can limit the number of items that it reads. To do this, set the `Limit` parameter to the maximum number of items that you want.

For example, suppose that you `Query` a table, with a `Limit` value of `6`, and without a filter expression. The `Query` result contains the first six items from the table that match the key condition expression from the request.

Now suppose that you add a filter expression to the `Query`. In this case, DynamoDB reads up to six items, and then returns only those that match the filter expression. The final `Query` result contains six items or fewer, even if more items would have matched the filter expression if DynamoDB had kept reading more items.
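This read-then-filter order can be illustrated with a small simulation. The sketch below is not DynamoDB code; it applies a `Limit` of 6 first and a filter second, mirroring the order of operations described above.

```python
def query_with_limit(items, limit, filter_fn=None):
    """Simulate Query's order of operations: Limit caps the items *read*,
    then the filter discards non-matching items from that capped set."""
    read = items[:limit]                 # Limit applies to items read
    if filter_fn is None:
        return read
    return [item for item in read if filter_fn(item)]  # filter runs after

threads = [{"views": v} for v in (9, 1, 8, 2, 7, 3, 6, 5)]

no_filter = query_with_limit(threads, limit=6)
filtered = query_with_limit(threads, limit=6, filter_fn=lambda t: t["views"] > 4)

print(len(no_filter))  # 6: the first six items read
print(len(filtered))   # 3: only views > 4 among those six, even though
                       # later items (views 6 and 5) would also have matched
```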

## Counting the items in the results
<a name="Query.Count"></a>

In addition to the items that match your criteria, the `Query` response contains the following elements:
+ `ScannedCount` — The number of items that matched the key condition expression *before* a filter expression (if present) was applied.
+ `Count` — The number of items that remain *after* a filter expression (if present) was applied.

**Note**  
If you don't use a filter expression, `ScannedCount` and `Count` have the same value.

If the size of the `Query` result set is larger than 1 MB, `ScannedCount` and `Count` represent only a partial count of the total items. You need to perform multiple `Query` operations to retrieve all the results (see [Paginating table query results in DynamoDB](Query.Pagination.md)).

Each `Query` response contains the `ScannedCount` and `Count` for the items that were processed by that particular `Query` request. To obtain grand totals for all of the `Query` requests, you could keep a running tally of both `ScannedCount` and `Count`.
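One way to keep that running tally, sketched over hypothetical per-page responses (the dictionaries below stand in for successive `Query` responses):

```python
# Hypothetical per-page responses from a paginated Query; each page
# reports the items it evaluated and the items that survived the filter.
pages = [
    {"ScannedCount": 100, "Count": 40},
    {"ScannedCount": 100, "Count": 55},
    {"ScannedCount": 30,  "Count": 12},   # final (partial) page
]

total_scanned = sum(page["ScannedCount"] for page in pages)
total_count = sum(page["Count"] for page in pages)

print(total_scanned)  # 230 items matched the key condition expression
print(total_count)    # 107 items remained after the filter expression
```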

## Capacity units consumed by query
<a name="Query.CapacityUnits"></a>

You can `Query` any table or secondary index, as long as you provide the name of the partition key attribute and a single value for that attribute. `Query` returns all items with that partition key value. Optionally, you can provide a sort key attribute and use a comparison operator to refine the search results. `Query` API operations consume read capacity units, as follows.


****  

| If you `Query` a... | DynamoDB consumes read capacity units from... | 
| --- | --- | 
| Table | The table's provisioned read capacity. | 
| Global secondary index | The index's provisioned read capacity. | 
| Local secondary index | The base table's provisioned read capacity. | 

By default, a `Query` operation does not return any data on how much read capacity it consumes. However, you can specify the `ReturnConsumedCapacity` parameter in a `Query` request to obtain this information. The following are the valid settings for `ReturnConsumedCapacity`:
+ `NONE` — No consumed capacity data is returned. (This is the default.)
+ `TOTAL` — The response includes the aggregate number of read capacity units consumed.
+ `INDEXES` — The response shows the aggregate number of read capacity units consumed, together with the consumed capacity for each table and index that was accessed.

DynamoDB calculates the number of read capacity units consumed based on the number of items and the size of those items, not on the amount of data that is returned to an application. For this reason, the number of capacity units consumed is the same whether you request all of the attributes (the default behavior) or just some of them (using a projection expression). The number is also the same whether or not you use a filter expression. `Query` consumes a minimum of one read capacity unit to perform one strongly consistent read per second, or two eventually consistent reads per second, for an item up to 4 KB. If you need to read an item that is larger than 4 KB, DynamoDB needs additional read request units. Empty tables and very large tables with sparsely distributed partition keys might incur some additional RCUs beyond the amount of data queried. This covers the cost of serving the `Query` request, even if no data exists.
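The rounding described above can be expressed directly. This is a sketch of the read-capacity arithmetic, assuming the documented rule that the sizes of all processed items are summed and rounded up to the next 4 KB boundary, with eventually consistent reads costing half as much and a minimum of one read charged even when nothing matches.

```python
import math

def query_read_capacity_units(item_sizes_bytes, strongly_consistent=False):
    """RCUs for one Query: total processed size, rounded up to 4 KB,
    at 1 unit per 4 KB (strongly consistent) or half that (eventual)."""
    total = sum(item_sizes_bytes)
    units = math.ceil(total / 4096) if total else 1  # minimum of one read
    return units if strongly_consistent else units / 2

# Three items totalling 10 KB round up to 12 KB: three 4 KB reads.
print(query_read_capacity_units([4096, 4096, 2048], strongly_consistent=True))  # 3
print(query_read_capacity_units([4096, 4096, 2048]))                            # 1.5
```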

## Read consistency for query
<a name="Query.ReadConsistency"></a>

A `Query` operation performs eventually consistent reads, by default. This means that the `Query` results might not reflect changes due to recently completed `PutItem` or `UpdateItem` operations. For more information, see [DynamoDB read consistency](HowItWorks.ReadConsistency.md).

If you require strongly consistent reads, set the `ConsistentRead` parameter to `true` in the `Query` request.

# Scanning tables in DynamoDB
<a name="Scan"></a>

A `Scan` operation in Amazon DynamoDB reads every item in a table or a secondary index. By default, a `Scan` operation returns all of the data attributes for every item in the table or index. You can use the `ProjectionExpression` parameter so that `Scan` only returns some of the attributes, rather than all of them.

`Scan` always returns a result set. If no matching items are found, the result set is empty.

A single `Scan` request can retrieve a maximum of 1 MB of data. Optionally, DynamoDB can apply a filter expression to this data, narrowing the results before they are returned to the user.

**Topics**
+ [Filter expressions for scan](#Scan.FilterExpression)
+ [Limiting the number of items in the result set](#Scan.Limit)
+ [Paginating the results](#Scan.Pagination)
+ [Counting the items in the results](#Scan.Count)
+ [Capacity units consumed by scan](#Scan.CapacityUnits)
+ [Read consistency for scan](#Scan.ReadConsistency)
+ [Parallel scan](#Scan.ParallelScan)

## Filter expressions for scan
<a name="Scan.FilterExpression"></a>

If you need to further refine the `Scan` results, you can optionally provide a filter expression. A *filter expression* determines which items within the `Scan` results should be returned to you. All of the other results are discarded.

A filter expression is applied after a `Scan` finishes but before the results are returned. Therefore, a `Scan` consumes the same amount of read capacity, regardless of whether a filter expression is present.

A `Scan` operation can retrieve a maximum of 1 MB of data. This limit applies before the filter expression is evaluated.

With `Scan`, you can specify any attributes in a filter expression—including partition key and sort key attributes.

The syntax for a filter expression is identical to that of a condition expression. Filter expressions can use the same comparators, functions, and logical operators as a condition expression. See [Condition and filter expressions, operators, and functions in DynamoDB](Expressions.OperatorsAndFunctions.md) for more information about logical operators.

**Example**  
The following AWS Command Line Interface (AWS CLI) example scans the `Thread` table and returns only the items that were last posted to by a particular user.  

```
aws dynamodb scan \
     --table-name Thread \
     --filter-expression "LastPostedBy = :name" \
     --expression-attribute-values '{":name":{"S":"User A"}}'
```

## Limiting the number of items in the result set
<a name="Scan.Limit"></a>

The `Scan` operation lets you limit the number of items that it reads. To do this, set the `Limit` parameter to the maximum number of items that you want the `Scan` operation to evaluate, before any filter expression is applied.

For example, suppose that you `Scan` a table with a `Limit` value of `6` and without a filter expression. The `Scan` result contains the first six items from the table.

Now suppose that you add a filter expression to the `Scan`. In this case, DynamoDB applies the filter expression to the six items that were read, discarding those that do not match. The final `Scan` result contains six items or fewer, depending on the number of items that were filtered out.

## Paginating the results
<a name="Scan.Pagination"></a>

DynamoDB *paginates* the results from `Scan` operations. With pagination, the `Scan` results are divided into "pages" of data that are 1 MB in size (or less). An application can process the first page of results, then the second page, and so on.

A single `Scan` only returns a result set that fits within the 1 MB size limit. 

To determine whether there are more results and to retrieve them one page at a time, applications should do the following:

1. Examine the low-level `Scan` result:
   + If the result contains a `LastEvaluatedKey` element, proceed to step 2.
   + If there is *not* a `LastEvaluatedKey` in the result, then there are no more items to be retrieved.

1. Construct a new `Scan` request, with the same parameters as the previous one. However, this time, take the `LastEvaluatedKey` value from step 1 and use it as the `ExclusiveStartKey` parameter in the new `Scan` request.

1. Run the new `Scan` request.

1. Go to step 1.

In other words, the `LastEvaluatedKey` from a `Scan` response should be used as the `ExclusiveStartKey` for the next `Scan` request. If there is not a `LastEvaluatedKey` element in a `Scan` response, you have retrieved the final page of results. (The absence of `LastEvaluatedKey` is the only way to know that you have reached the end of the result set.)

You can use the AWS CLI to view this behavior. The AWS CLI sends low-level `Scan` requests to DynamoDB, repeatedly, until `LastEvaluatedKey` is no longer present in the results. Consider the following AWS CLI example that scans the entire `Movies` table but returns only the movies from a particular genre.

```
aws dynamodb scan \
    --table-name Movies \
    --projection-expression "title" \
    --filter-expression 'contains(info.genres,:gen)' \
    --expression-attribute-values '{":gen":{"S":"Sci-Fi"}}' \
    --page-size 100  \
    --debug
```

Ordinarily, the AWS CLI handles pagination automatically. However, in this example, the AWS CLI `--page-size` parameter limits the number of items per page. The `--debug` parameter prints low-level information about requests and responses.

**Note**  
Your pagination results will also differ based on the input parameters that you pass.  
Using `aws dynamodb scan --table-name Prices --max-items 1` returns a `NextToken`.  
Using `aws dynamodb scan --table-name Prices --limit 1` returns a `LastEvaluatedKey`.  
Also be aware that using `--starting-token` in particular requires the `NextToken` value. 

If you run the example, the first response from DynamoDB looks similar to the following.

```
2017-07-07 12:19:14,389 - MainThread - botocore.parsers - DEBUG - Response body:
b'{"Count":7,"Items":[{"title":{"S":"Monster on the Campus"}},{"title":{"S":"+1"}},
{"title":{"S":"100 Degrees Below Zero"}},{"title":{"S":"About Time"}},{"title":{"S":"After Earth"}},
{"title":{"S":"Age of Dinosaurs"}},{"title":{"S":"Cloudy with a Chance of Meatballs 2"}}],
"LastEvaluatedKey":{"year":{"N":"2013"},"title":{"S":"Curse of Chucky"}},"ScannedCount":100}'
```

The `LastEvaluatedKey` in the response indicates that not all of the items have been retrieved. The AWS CLI then issues another `Scan` request to DynamoDB. This request and response pattern continues, until the final response.

```
2017-07-07 12:19:17,830 - MainThread - botocore.parsers - DEBUG - Response body:
b'{"Count":1,"Items":[{"title":{"S":"WarGames"}}],"ScannedCount":6}'
```

The absence of `LastEvaluatedKey` indicates that there are no more items to retrieve.

**Note**  
The AWS SDKs handle the low-level DynamoDB responses (including the presence or absence of `LastEvaluatedKey`) and provide various abstractions for paginating `Scan` results. For example, the SDK for Java document interface provides `java.util.Iterator` support so that you can walk through the results one at a time.  
For code examples in various programming languages, see the [Amazon DynamoDB Getting Started Guide](https://docs.aws.amazon.com/amazondynamodb/latest/gettingstartedguide/) and the AWS SDK documentation for your language.

## Counting the items in the results
<a name="Scan.Count"></a>

In addition to the items that match your criteria, the `Scan` response contains the following elements:
+ `ScannedCount` — The number of items evaluated, *before* any filter expression is applied. A high `ScannedCount` value with few, or no, `Count` results indicates an inefficient `Scan` operation. If you did not use a filter expression in the request, `ScannedCount` is the same as `Count`. 
+ `Count` — The number of items that remain, *after* a filter expression (if present) was applied.

**Note**  
If you do not use a filter expression, `ScannedCount` and `Count` have the same value.

If the size of the `Scan` result set is larger than 1 MB, `ScannedCount` and `Count` represent only a partial count of the total items. You need to perform multiple `Scan` operations to retrieve all the results (see [Paginating the results](#Scan.Pagination)).

Each `Scan` response contains the `ScannedCount` and `Count` for the items that were processed by that particular `Scan` request. To get grand totals for all of the `Scan` requests, you could keep a running tally of both `ScannedCount` and `Count`.

## Capacity units consumed by scan
<a name="Scan.CapacityUnits"></a>

You can `Scan` any table or secondary index. `Scan` operations consume read capacity units, as follows.


****  

| If you `Scan` a... | DynamoDB consumes read capacity units from... | 
| --- | --- | 
| Table | The table's provisioned read capacity. | 
| Global secondary index | The index's provisioned read capacity. | 
| Local secondary index | The base table's provisioned read capacity. | 

**Note**  
Cross-account access for secondary index scan operations is currently not supported with [resource-based policies](access-control-resource-based.md).

By default, a `Scan` operation does not return any data on how much read capacity it consumes. However, you can specify the `ReturnConsumedCapacity` parameter in a `Scan` request to obtain this information. The following are the valid settings for `ReturnConsumedCapacity`:
+ `NONE` — No consumed capacity data is returned. (This is the default.)
+ `TOTAL` — The response includes the aggregate number of read capacity units consumed.
+ `INDEXES` — The response shows the aggregate number of read capacity units consumed, together with the consumed capacity for each table and index that was accessed.

DynamoDB calculates the number of read capacity units consumed based on the number of items and the size of those items, not on the amount of data that is returned to an application. For this reason, the number of capacity units consumed is the same whether you request all of the attributes (the default behavior) or just some of them (using a projection expression). The number is also the same whether or not you use a filter expression. `Scan` consumes a minimum of one read capacity unit to perform one strongly consistent read per second, or two eventually consistent reads per second, for an item up to 4 KB. If you need to read an item that is larger than 4 KB, DynamoDB needs additional read request units. Empty tables and very large tables with sparsely distributed partition keys might incur some additional RCUs beyond the amount of data scanned. This covers the cost of serving the `Scan` request, even if no data exists.

## Read consistency for scan
<a name="Scan.ReadConsistency"></a>

A `Scan` operation performs eventually consistent reads, by default. This means that the `Scan` results might not reflect changes due to recently completed `PutItem` or `UpdateItem` operations. For more information, see [DynamoDB read consistency](HowItWorks.ReadConsistency.md).

If you require strongly consistent reads, as of the time that the `Scan` begins, set the `ConsistentRead` parameter to `true` in the `Scan` request. This ensures that all of the write operations that completed before the `Scan` began are included in the `Scan` response. 

Setting `ConsistentRead` to `true` can be useful in table backup or replication scenarios, in conjunction with [DynamoDB Streams](./Streams.html). You first use `Scan` with `ConsistentRead` set to true to obtain a consistent copy of the data in the table. During the `Scan`, DynamoDB Streams records any additional write activity that occurs on the table. After the `Scan` is complete, you can apply the write activity from the stream to the table.

**Note**  
A `Scan` operation with `ConsistentRead` set to `true` consumes twice as many read capacity units as compared to leaving `ConsistentRead` at its default value (`false`).

## Parallel scan
<a name="Scan.ParallelScan"></a>

By default, the `Scan` operation processes data sequentially. Amazon DynamoDB returns data to the application in 1 MB increments, and an application performs additional `Scan` operations to retrieve the next 1 MB of data. 

The larger the table or index being scanned, the more time the `Scan` takes to complete. In addition, a sequential `Scan` might not always be able to fully use the provisioned read throughput capacity: Even though DynamoDB distributes a large table's data across multiple physical partitions, a `Scan` operation can only read one partition at a time. For this reason, the throughput of a `Scan` is constrained by the maximum throughput of a single partition.

To address these issues, the `Scan` operation can logically divide a table or secondary index into multiple *segments*, with multiple application workers scanning the segments in parallel. Each worker can be a thread (in programming languages that support multithreading) or an operating system process. To perform a parallel scan, each worker issues its own `Scan` request with the following parameters:
+ `Segment` — A segment to be scanned by a particular worker. Each worker should use a different value for `Segment`.
+ `TotalSegments` — The total number of segments for the parallel scan. This value must be the same as the number of workers that your application will use.

The following diagram shows how a multithreaded application performs a parallel `Scan` with three degrees of parallelism.

![\[A multithreaded application that performs a parallel scan by dividing a table into three segments.\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/images/ParallelScan.png)




In this diagram, the application spawns three threads and assigns each thread a number. (Segments are zero-based, so the first number is always 0.) Each thread issues a `Scan` request, setting `Segment` to its designated number and setting `TotalSegments` to 3. Each thread scans its designated segment, retrieving data 1 MB at a time, and returns the data to the application's main thread.
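A minimal sketch of that worker pattern in Python follows, using threads over a local in-memory "table" in place of real `Scan` calls. Segment membership is simulated with a hash of the partition key, which is an assumption made here only so the segments partition the data deterministically.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

TABLE = {f"pk#{i}": {"id": i} for i in range(30)}  # stand-in for a real table

def segment_of(partition_key, total_segments):
    """Deterministic stand-in for segment assignment by key hash."""
    digest = hashlib.sha256(partition_key.encode()).digest()
    return digest[0] % total_segments

def scan_segment(segment, total_segments):
    """One worker's Scan: only items whose key hashes to this segment."""
    return [item for pk, item in TABLE.items()
            if segment_of(pk, total_segments) == segment]

TOTAL_SEGMENTS = 3
with ThreadPoolExecutor(max_workers=TOTAL_SEGMENTS) as pool:
    futures = [pool.submit(scan_segment, seg, TOTAL_SEGMENTS)
               for seg in range(TOTAL_SEGMENTS)]
    results = [item for future in futures for item in future.result()]

print(len(results))  # 30: the segments together cover every item exactly once
```

In a real application, each worker would also run the pagination loop described earlier for its own segment, because a segment can span many 1 MB pages.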

DynamoDB assigns items to *segments* by applying a hash function to each item's partition key. For a given `TotalSegments` value, all items with the same partition key are always assigned to the same `Segment`. This means that in a table where *Item 1*, *Item 2*, and *Item 3* all share `pk="account#123"` (but have different sort keys), these items will be processed by the same worker, regardless of the sort key values or the size of the *item collection*.

Because *segment* assignment is based solely on the partition key hash, segments can be unevenly distributed. Some segments might contain no items, while others might contain many partition keys with large item collections. As a result, increasing the total number of segments does not guarantee faster scan performance, particularly when partition keys are not uniformly distributed across the keyspace.

The values for `Segment` and `TotalSegments` apply to individual `Scan` requests, and you can use different values at any time. You might need to experiment with these values, and the number of workers you use, until your application achieves its best performance.

**Note**  
A parallel scan with a large number of workers can easily consume all of the provisioned throughput for the table or index being scanned. It is best to avoid such scans if the table or index is also incurring heavy read or write activity from other applications.  
To control the amount of data returned per request, use the `Limit` parameter. This can help prevent situations where one worker consumes all of the provisioned throughput, at the expense of all other workers.

# PartiQL - a SQL-compatible query language for Amazon DynamoDB
<a name="ql-reference"></a>

Amazon DynamoDB supports [PartiQL](https://partiql.org/), a SQL-compatible query language, to select, insert, update, and delete data in Amazon DynamoDB. Using PartiQL, you can easily interact with DynamoDB tables and run ad hoc queries using the AWS Management Console, NoSQL Workbench, AWS Command Line Interface, and DynamoDB APIs for PartiQL.

PartiQL operations provide the same availability, latency, and performance as the other DynamoDB data plane operations.

The following sections describe the DynamoDB implementation of PartiQL.

**Topics**
+ [What is PartiQL?](#ql-reference.what-is)
+ [PartiQL in Amazon DynamoDB](#ql-reference.dynamodb)
+ [Getting started](ql-gettingstarted.md)
+ [Data types](ql-reference.data-types.md)
+ [Statements](ql-reference.statements.md)
+ [Functions](ql-functions.md)
+ [Operators](ql-operators.md)
+ [Transactions](ql-reference.multiplestatements.transactions.md)
+ [Batch operations](ql-reference.multiplestatements.batching.md)
+ [IAM policies](ql-iam.md)

## What is PartiQL?
<a name="ql-reference.what-is"></a>

*PartiQL* provides SQL-compatible query access across multiple data stores containing structured data, semistructured data, and nested data. It is widely used within Amazon and is now available as part of many AWS services, including DynamoDB.

For the PartiQL specification and a tutorial on the core query language, see the [PartiQL documentation](https://partiql.org/docs.html).

**Note**  
Amazon DynamoDB supports a *subset* of the [PartiQL](https://partiql.org/) query language.
Amazon DynamoDB does not support the [Amazon Ion](http://amzn.github.io/ion-docs/) data format or Amazon Ion literals.

## PartiQL in Amazon DynamoDB
<a name="ql-reference.dynamodb"></a>

To run PartiQL queries in DynamoDB, you can use:
+ The DynamoDB console
+ The NoSQL Workbench
+ The AWS Command Line Interface (AWS CLI)
+ The DynamoDB APIs

For information about using these methods to access DynamoDB, see [Accessing DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AccessingDynamoDB.html).
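From code, PartiQL statements go through the `ExecuteStatement` API. The sketch below only builds the request shape (a PartiQL statement with `?` placeholders plus positional parameters in DynamoDB's attribute-value format); it does not call a live endpoint. With boto3, the same dictionary could be passed as keyword arguments to `client.execute_statement`.

```python
def build_execute_statement_request(statement, parameters):
    """Assemble an ExecuteStatement request: a PartiQL statement with
    '?' placeholders and a matching list of positional parameters."""
    if statement.count("?") != len(parameters):
        raise ValueError("placeholder count must match parameter count")
    return {"Statement": statement, "Parameters": parameters}

request = build_execute_statement_request(
    "SELECT * FROM Music WHERE Artist = ? AND SongTitle = ?",
    [{"S": "Acme Band"}, {"S": "PartiQL Rocks"}],
)

# With boto3 this would be sent as: client.execute_statement(**request)
print(request["Statement"])
```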

# Getting started with PartiQL for DynamoDB
<a name="ql-gettingstarted"></a>

This section describes how to use PartiQL for DynamoDB from the Amazon DynamoDB console, the AWS Command Line Interface (AWS CLI), and DynamoDB APIs.

The following examples require the DynamoDB table that is defined in the [Getting started with DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GettingStartedDynamoDB.html) tutorial as a prerequisite.

For information about using the DynamoDB console, AWS Command Line Interface, or DynamoDB APIs to access DynamoDB, see [Accessing DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AccessingDynamoDB.html).

To [download](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/workbench.settingup.html) and use the [NoSQL Workbench](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/workbench.html) to build [PartiQL for DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ql-reference.html) statements, choose **PartiQL operations** in the top-right corner of the NoSQL Workbench for DynamoDB [Operation builder](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/workbench.querybuilder.operationbuilder.html).

------
#### [ Console ]

![\[PartiQL editor interface that shows the result of running the Query operation on the Music table.\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/images/partiqlgettingstarted.png)


1. Sign in to the AWS Management Console and open the DynamoDB console at [https://console.aws.amazon.com/dynamodb/](https://console.aws.amazon.com/dynamodb/).

1. In the navigation pane on the left side of the console, choose **PartiQL editor**.

1. Choose the **Music** table.

1. Choose **Query table**. This action generates a query that will not result in a full table scan.

1. Replace `partitionKeyValue` with the string value `Acme Band`. Replace `sortKeyValue` with the string value `Happy Day`.

1. Choose the **Run** button. 

1. You can view the results of the query by choosing the **Table view** or the **JSON view** buttons. 

------
#### [ NoSQL workbench ]

![\[NoSQL workbench interface. It shows a PartiQL SELECT statement that you can run on the Music table.\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/images/workbench/partiql.single.png)


1. Choose **PartiQL statement**.

1. Enter the following PartiQL [SELECT statement](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ql-reference.select.html) 

   ```
   SELECT *                                         
   FROM Music  
   WHERE Artist=? and SongTitle=?
   ```

1. To specify a value for the `Artist` and `SongTitle` parameters:

   1. Choose **Optional request parameters**.

   1. Choose **Add new parameters**.

   1. Choose the attribute type **string** and value `Acme Band`.

   1. Repeat steps b and c, and choose type **string** and value `PartiQL Rocks`. 

1. If you want to generate code, choose **Generate code**.

   Select your desired language from the displayed tabs. You can now copy this code and use it in your application.

1. If you want the operation to be run immediately, choose **Run**.

------
#### [ AWS CLI ]

1. Create an item in the `Music` table using the INSERT PartiQL statement. 

   ```
   aws dynamodb execute-statement --statement "INSERT INTO Music  \
                                               VALUE  \
                                               {'Artist':'Acme Band','SongTitle':'PartiQL Rocks'}"
   ```

1. Retrieve an item from the Music table using the SELECT PartiQL statement.

   ```
   aws dynamodb execute-statement --statement "SELECT * FROM Music   \
                                               WHERE Artist='Acme Band' AND SongTitle='PartiQL Rocks'"
   ```

1. Update an item in the `Music` table using the UPDATE PartiQL statement.

   ```
   aws dynamodb execute-statement --statement "UPDATE Music  \
                                               SET AwardsWon=1  \
                                               SET AwardDetail={'Grammys':[2020, 2018]}  \
                                               WHERE Artist='Acme Band' AND SongTitle='PartiQL Rocks'"
   ```

   Add a list value for an item in the `Music` table. 

   ```
   aws dynamodb execute-statement --statement "UPDATE Music  \
                                               SET AwardDetail.Grammys =list_append(AwardDetail.Grammys,[2016])  \
                                               WHERE Artist='Acme Band' AND SongTitle='PartiQL Rocks'"
   ```

   Remove a list value for an item in the `Music` table. 

   ```
   aws dynamodb execute-statement --statement "UPDATE Music  \
                                               REMOVE AwardDetail.Grammys[2]  \
                                               WHERE Artist='Acme Band' AND SongTitle='PartiQL Rocks'"
   ```

   Add a new map member for an item in the `Music` table. 

   ```
   aws dynamodb execute-statement --statement "UPDATE Music  \
                                               SET AwardDetail.BillBoard=[2020]  \
                                               WHERE Artist='Acme Band' AND SongTitle='PartiQL Rocks'"
   ```

   Add a new string set attribute for an item in the `Music` table. 

   ```
   aws dynamodb execute-statement --statement "UPDATE Music  \
                                               SET BandMembers =<<'member1', 'member2'>>  \
                                               WHERE Artist='Acme Band' AND SongTitle='PartiQL Rocks'"
   ```

   Update a string set attribute for an item in the `Music` table. 

   ```
   aws dynamodb execute-statement --statement "UPDATE Music  \
                                               SET BandMembers =set_add(BandMembers, <<'newmember'>>)  \
                                               WHERE Artist='Acme Band' AND SongTitle='PartiQL Rocks'"
   ```

1. Delete an item from the `Music` table using the DELETE PartiQL statement.

   ```
   aws dynamodb execute-statement --statement "DELETE  FROM Music  \
       WHERE Artist='Acme Band' AND SongTitle='PartiQL Rocks'"
   ```

------
#### [ Java ]

```
import java.util.ArrayList;
import java.util.List;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.ConditionalCheckFailedException;
import com.amazonaws.services.dynamodbv2.model.ExecuteStatementRequest;
import com.amazonaws.services.dynamodbv2.model.ExecuteStatementResult;
import com.amazonaws.services.dynamodbv2.model.InternalServerErrorException;
import com.amazonaws.services.dynamodbv2.model.ItemCollectionSizeLimitExceededException;
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException;
import com.amazonaws.services.dynamodbv2.model.RequestLimitExceededException;
import com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException;
import com.amazonaws.services.dynamodbv2.model.TransactionConflictException;

public class DynamoDBPartiQGettingStarted {

    public static void main(String[] args) {
        // Create the DynamoDB Client with the region you want
        AmazonDynamoDB dynamoDB = createDynamoDbClient("us-west-1");

        try {
            // Create ExecuteStatementRequest
            ExecuteStatementRequest executeStatementRequest = new ExecuteStatementRequest();
            List<AttributeValue> parameters= getPartiQLParameters();

            //Create an item in the Music table using the INSERT PartiQL statement
            processResults(executeStatementRequest(dynamoDB, "INSERT INTO Music value {'Artist':?,'SongTitle':?}", parameters));

            //Retrieve an item from the Music table using the SELECT PartiQL statement.
            processResults(executeStatementRequest(dynamoDB, "SELECT * FROM Music  where Artist=? and SongTitle=?", parameters));

            //Update an item in the Music table using the UPDATE PartiQL statement.
            processResults(executeStatementRequest(dynamoDB, "UPDATE Music SET AwardsWon=1 SET AwardDetail={'Grammys':[2020, 2018]}  where Artist=? and SongTitle=?", parameters));

            //Add a list value for an item in the Music table.
            processResults(executeStatementRequest(dynamoDB, "UPDATE Music SET AwardDetail.Grammys =list_append(AwardDetail.Grammys,[2016])  where Artist=? and SongTitle=?", parameters));

            //Remove a list value for an item in the Music table.
            processResults(executeStatementRequest(dynamoDB, "UPDATE Music REMOVE AwardDetail.Grammys[2]   where Artist=? and SongTitle=?", parameters));

            //Add a new map member for an item in the Music table.
            processResults(executeStatementRequest(dynamoDB, "UPDATE Music set AwardDetail.BillBoard=[2020] where Artist=? and SongTitle=?", parameters));

            //Add a new string set attribute for an item in the Music table.
            processResults(executeStatementRequest(dynamoDB, "UPDATE Music SET BandMembers =<<'member1', 'member2'>> where Artist=? and SongTitle=?", parameters));

            //update a string set attribute for an item in the Music table.
            processResults(executeStatementRequest(dynamoDB, "UPDATE Music SET BandMembers =set_add(BandMembers, <<'newmember'>>) where Artist=? and SongTitle=?", parameters));

            //Retrieve an item from the Music table using the SELECT PartiQL statement.
            processResults(executeStatementRequest(dynamoDB, "SELECT * FROM Music  where Artist=? and SongTitle=?", parameters));

            //delete an item from the Music Table
            processResults(executeStatementRequest(dynamoDB, "DELETE  FROM Music  where Artist=? and SongTitle=?", parameters));
        } catch (Exception e) {
            handleExecuteStatementErrors(e);
        }
    }

    private static AmazonDynamoDB createDynamoDbClient(String region) {
        return AmazonDynamoDBClientBuilder.standard().withRegion(region).build();
    }

    private static List<AttributeValue> getPartiQLParameters() {
        List<AttributeValue> parameters = new ArrayList<AttributeValue>();
        parameters.add(new AttributeValue("Acme Band"));
        parameters.add(new AttributeValue("PartiQL Rocks"));
        return parameters;
    }

    private static ExecuteStatementResult executeStatementRequest(AmazonDynamoDB client, String statement, List<AttributeValue> parameters ) {
        ExecuteStatementRequest request = new ExecuteStatementRequest();
        request.setStatement(statement);
        request.setParameters(parameters);
        return client.executeStatement(request);
    }

    private static void processResults(ExecuteStatementResult executeStatementResult) {
        System.out.println("ExecuteStatement successful: "+ executeStatementResult.toString());

    }

    // Handles errors during ExecuteStatement execution. Use recommendations in error messages below to add error handling specific to
    // your application use-case.
    private static void handleExecuteStatementErrors(Exception exception) {
        try {
            throw exception;
        } catch (ConditionalCheckFailedException ccfe) {
            System.out.println("Condition check specified in the operation failed, review and update the condition " +
                                       "check before retrying. Error: " + ccfe.getErrorMessage());
        } catch (TransactionConflictException tce) {
            System.out.println("Operation was rejected because there is an ongoing transaction for the item, generally " +
                                       "safe to retry with exponential back-off. Error: " + tce.getErrorMessage());
        } catch (ItemCollectionSizeLimitExceededException icslee) {
            System.out.println("An item collection is too large, you\'re using Local Secondary Index and exceeded " +
                                       "size limit of items per partition key. Consider using Global Secondary Index instead. Error: " + icslee.getErrorMessage());
        } catch (Exception e) {
            handleCommonErrors(e);
        }
    }

    private static void handleCommonErrors(Exception exception) {
        try {
            throw exception;
        } catch (InternalServerErrorException isee) {
            System.out.println("Internal Server Error, generally safe to retry with exponential back-off. Error: " + isee.getErrorMessage());
        } catch (RequestLimitExceededException rlee) {
            System.out.println("Throughput exceeds the current throughput limit for your account, increase account level throughput before " +
                                       "retrying. Error: " + rlee.getErrorMessage());
        } catch (ProvisionedThroughputExceededException ptee) {
            System.out.println("Request rate is too high. If you're using a custom retry strategy make sure to retry with exponential back-off. " +
                                       "Otherwise consider reducing frequency of requests or increasing provisioned capacity for your table or secondary index. Error: " +
                                       ptee.getErrorMessage());
        } catch (ResourceNotFoundException rnfe) {
            System.out.println("One of the tables was not found, verify table exists before retrying. Error: " + rnfe.getErrorMessage());
        } catch (AmazonServiceException ase) {
            System.out.println("An AmazonServiceException occurred, indicates that the request was correctly transmitted to the DynamoDB " +
                                       "service, but for some reason, the service was not able to process it, and returned an error response instead. Investigate and " +
                                       "configure retry strategy. Error type: " + ase.getErrorType() + ". Error message: " + ase.getErrorMessage());
        } catch (AmazonClientException ace) {
            System.out.println("An AmazonClientException occurred, indicates that the client was unable to get a response from DynamoDB " +
                                       "service, or the client was unable to parse the response from the service. Investigate and configure retry strategy. "+
                                       "Error: " + ace.getMessage());
        } catch (Exception e) {
            System.out.println("An exception occurred, investigate and configure retry strategy. Error: " + e.getMessage());
        }
    }

}
```

------

## Using parameterized statements
<a name="ql-gettingstarted.parameterized"></a>

Instead of embedding values directly in a PartiQL statement string, you can use question mark (`?`) placeholders and supply the values separately in the `Parameters` field. Each `?` is replaced by the corresponding parameter value, in the order they are provided.

Using parameterized statements is a best practice because it separates the statement structure from the data values, making statements easier to read and reuse. It also avoids the need to manually format and escape attribute values in the statement string.

Parameterized statements are supported in `ExecuteStatement`, `BatchExecuteStatement`, and `ExecuteTransaction` operations.

The following examples retrieve an item from the `Music` table using parameterized values for the partition key and sort key.

------
#### [ AWS CLI parameterized ]

```
aws dynamodb execute-statement \
    --statement "SELECT * FROM \"Music\" WHERE Artist=? AND SongTitle=?" \
    --parameters '[{"S": "Acme Band"}, {"S": "PartiQL Rocks"}]'
```

------
#### [ Java parameterized ]

```
List<AttributeValue> parameters = new ArrayList<>();
parameters.add(new AttributeValue("Acme Band"));
parameters.add(new AttributeValue("PartiQL Rocks"));

ExecuteStatementRequest request = new ExecuteStatementRequest()
    .withStatement("SELECT * FROM Music WHERE Artist=? AND SongTitle=?")
    .withParameters(parameters);

ExecuteStatementResult result = dynamoDB.executeStatement(request);
```

------
#### [ Python parameterized ]

```
response = dynamodb_client.execute_statement(
    Statement="SELECT * FROM Music WHERE Artist=? AND SongTitle=?",
    Parameters=[
        {'S': 'Acme Band'},
        {'S': 'PartiQL Rocks'}
    ]
)
```

------

**Note**  
The Java example in the preceding getting started section uses parameterized statements throughout. The `getPartiQLParameters()` method builds the parameter list, and each statement uses `?` placeholders instead of inline values.
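When you build the `Parameters` list programmatically, each value must be wrapped in a DynamoDB attribute-value map (`{"S": ...}`, `{"N": ...}`, and so on), as the examples above show. The following is a minimal sketch of such a converter; it covers only the types used in this section, and the helper name `to_attribute_value` is illustrative, not part of any SDK.

```python
def to_attribute_value(value):
    """Convert a native Python value to a DynamoDB attribute-value map.

    Covers only the types used in this section. Numbers are always
    serialized as strings, as the DynamoDB wire format requires; sets
    are sorted only to make the output deterministic.
    """
    if isinstance(value, bool):  # must precede int (bool subclasses int)
        return {"BOOL": value}
    if isinstance(value, str):
        return {"S": value}
    if isinstance(value, (int, float)):
        return {"N": str(value)}
    if isinstance(value, set):
        if all(isinstance(v, str) for v in value):
            return {"SS": sorted(value)}
        return {"NS": sorted(str(v) for v in value)}
    if isinstance(value, list):
        return {"L": [to_attribute_value(v) for v in value]}
    if isinstance(value, dict):
        return {"M": {k: to_attribute_value(v) for k, v in value.items()}}
    raise TypeError(f"unsupported type: {type(value).__name__}")
```

With a helper like this, the parameterized SELECT above could take `Parameters=[to_attribute_value('Acme Band'), to_attribute_value('PartiQL Rocks')]`.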

# PartiQL data types for DynamoDB
<a name="ql-reference.data-types"></a>

The following table lists the data types you can use with PartiQL for DynamoDB.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ql-reference.data-types.html)

## Examples
<a name="ql-reference.data-types"></a>

The following statement demonstrates how to insert the following data types: `String`, `Number`, `Map`, `List`, `Number Set` and `String Set`.

```
INSERT INTO TypesTable value {'primarykey':'1', 
'NumberType':1,
'MapType' : {'entryname1': 'value', 'entryname2': 4}, 
'ListType': [1,'stringval'], 
'NumberSetType':<<1,34,32,4.5>>, 
'StringSetType':<<'stringval','stringval2'>>
}
```

The following statement demonstrates how to insert new elements into the `Map`, `List`, `Number Set` and `String Set` types and change the value of a `Number` type.

```
UPDATE TypesTable 
SET NumberType=NumberType + 100 
SET MapType.NewMapEntry=[2020, 'stringvalue', 2.4]
SET ListType = LIST_APPEND(ListType, [4, <<'string1', 'string2'>>])
SET NumberSetType= SET_ADD(NumberSetType, <<345, 48.4>>)
SET StringSetType = SET_ADD(StringSetType, <<'stringsetvalue1', 'stringsetvalue2'>>)
WHERE primarykey='1'
```

The following statement demonstrates how to remove elements from the `Map`, `List`, `Number Set` and `String Set` types and change the value of a `Number` type.

```
UPDATE TypesTable 
SET NumberType=NumberType - 1
REMOVE ListType[1]
REMOVE MapType.NewMapEntry
SET NumberSetType = SET_DELETE( NumberSetType, <<345>>)
SET StringSetType = SET_DELETE( StringSetType, <<'stringsetvalue1'>>)
WHERE primarykey='1'
```

For more information, see [DynamoDB data types](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.NamingRulesDataTypes.html#HowItWorks.DataTypes).

# PartiQL statements for DynamoDB
<a name="ql-reference.statements"></a>

Amazon DynamoDB supports the following PartiQL statements.

**Note**  
DynamoDB does not support all PartiQL statements.  
This reference provides basic syntax and usage examples of PartiQL statements that you manually run using the AWS CLI or APIs.

*Data manipulation language* (DML) is the set of PartiQL statements that you use to manage data in DynamoDB tables. You use DML statements to add, modify, or delete data in a table.

The following DML and query language statements are supported:
+ [PartiQL select statements for DynamoDB](ql-reference.select.md)
+ [PartiQL update statements for DynamoDB](ql-reference.update.md)
+ [PartiQL insert statements for DynamoDB](ql-reference.insert.md)
+ [PartiQL delete statements for DynamoDB](ql-reference.delete.md)

[Performing transactions with PartiQL for DynamoDB](ql-reference.multiplestatements.transactions.md) and [Running batch operations with PartiQL for DynamoDB](ql-reference.multiplestatements.batching.md) are also supported by PartiQL for DynamoDB.

# PartiQL select statements for DynamoDB
<a name="ql-reference.select"></a>

Use the `SELECT` statement to retrieve data from a table in Amazon DynamoDB.

Using the `SELECT` statement can result in a full table scan if an equality or IN condition with a partition key is not provided in the WHERE clause. A scan operation examines every item for the requested values and can use up the provisioned throughput for a large table or index in a single operation. 

To avoid full table scans in PartiQL, you can:
+ Author your `SELECT` statements so that they do not result in full table scans, by making sure your [WHERE clause condition](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ql-reference.select.html#ql-reference.select.parameters) is configured accordingly.
+ Disable full table scans using the IAM policy specified at [Example: Allow select statements and deny full table scan statements in PartiQL for DynamoDB](ql-iam.md#access-policy-ql-iam-example6), in the DynamoDB developer guide.

For more information, see [Best practices for querying and scanning data](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-query-scan.html) in the DynamoDB developer guide.

**Topics**
+ [Syntax](#ql-reference.select.syntax)
+ [Parameters](#ql-reference.select.parameters)
+ [Examples](#ql-reference.select.examples)

## Syntax
<a name="ql-reference.select.syntax"></a>

```
SELECT expression [, ...]
FROM table[.index]
[ WHERE condition ] [ ORDER BY key [DESC|ASC] [, ...] ]
```

## Parameters
<a name="ql-reference.select.parameters"></a>

***expression***  
(Required) A projection formed from the `*` wildcard or a projection list of one or more attribute names or document paths from the result set. An expression can consist of calls to [Use PartiQL functions with DynamoDB](ql-functions.md) or fields that are modified by [PartiQL arithmetic, comparison, and logical operators for DynamoDB](ql-operators.md).

***table***  
(Required) The table name to query.

***index***  
(Optional) The name of the index to query.  
You must add double quotation marks to the table name and index name when querying an index.  

```
SELECT * 
FROM "TableName"."IndexName"
```

***condition***  
(Optional) The selection criteria for the query.  
To ensure that a `SELECT` statement does not result in a full table scan, the `WHERE` clause condition must specify a partition key. Use the equality or IN operator.  
For example, if you have an `Orders` table with an `OrderID` partition key and other non-key attributes, including an `Address`, the following statements would not result in a full table scan:  

```
SELECT * 
FROM "Orders" 
WHERE OrderID = 100

SELECT * 
FROM "Orders" 
WHERE OrderID = 100 and Address='some address'

SELECT * 
FROM "Orders" 
WHERE OrderID = 100 or OrderID = 200

SELECT * 
FROM "Orders" 
WHERE OrderID IN [100, 300, 234]
```
The following `SELECT` statements, however, will result in a full table scan:  

```
SELECT * 
FROM "Orders" 
WHERE OrderID > 1

SELECT * 
FROM "Orders" 
WHERE Address='some address'

SELECT * 
FROM "Orders" 
WHERE OrderID = 100 OR Address='some address'
```

***key***  
(Optional) A partition key or a sort key to use to order returned results. The default order is ascending (`ASC`). Specify `DESC` if you want the results returned in descending order.

**Note**  
If you omit the `WHERE` clause, then all of the items in the table are retrieved.
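A single `ExecuteStatement` call returns at most 1 MB of data and provides a `NextToken` for the remainder, so a statement that scans or returns many items usually needs a pagination loop. A minimal sketch, written against an injected `execute` callable (with boto3 you would pass `client.execute_statement`):

```python
def select_all_items(execute, statement, parameters=None):
    """Collect every item a PartiQL statement returns, following NextToken."""
    items, token = [], None
    while True:
        kwargs = {"Statement": statement}
        if parameters is not None:
            kwargs["Parameters"] = parameters
        if token is not None:
            kwargs["NextToken"] = token
        response = execute(**kwargs)
        items.extend(response.get("Items", []))
        token = response.get("NextToken")
        if token is None:
            return items
```

For example, `select_all_items(client.execute_statement, 'SELECT * FROM "Orders"')` would gather every page of a full table scan into one list.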

## Examples
<a name="ql-reference.select.examples"></a>

The following query returns one item, if one exists, from the `Orders` table by specifying the partition key, `OrderID`, and using the equality operator.

```
SELECT OrderID, Total
FROM "Orders"
WHERE OrderID = 1
```

The following query returns all items in the `Orders` table that have specific partition key (`OrderID`) values, using the OR operator.

```
SELECT OrderID, Total
FROM "Orders"
WHERE OrderID = 1 OR OrderID = 2
```

The following query returns all items in the `Orders` table that have specific partition key (`OrderID`) values, using the IN operator. The returned results are in descending order, based on the `OrderID` key attribute value.

```
SELECT OrderID, Total
FROM "Orders"
WHERE OrderID IN [1, 2, 3] ORDER BY OrderID DESC
```

The following query shows a full table scan that returns all items from the `Orders` table that have a `Total` greater than 500, where `Total` is a non-key attribute.

```
SELECT OrderID, Total 
FROM "Orders"
WHERE Total > 500
```

The following query shows a full table scan that returns all items from the `Orders` table with specific `Total` values, using the IN operator and the non-key attribute `Total`.

```
SELECT OrderID, Total 
FROM "Orders"
WHERE Total IN [500, 600]
```

The following query shows a full table scan that returns all items from the `Orders` table within a specific `Total` order range, using the BETWEEN operator and a non-key attribute `Total`.

```
SELECT OrderID, Total 
FROM "Orders" 
WHERE Total BETWEEN 500 AND 600
```

The following query returns the first date a movie was watched on a FireStick device, by specifying the partition key `CustomerID` and sort key `MovieID` in the WHERE clause condition and using document paths in the SELECT clause.

```
SELECT Devices.FireStick.DateWatched[0] 
FROM WatchList 
WHERE CustomerID= 'C1' AND MovieID= 'M1'
```

The following query shows a full table scan that returns the `Devices` attribute of every item where a FireStick device was first used on or after 12/24/19, using document paths in the WHERE clause condition.

```
SELECT Devices 
FROM WatchList 
WHERE Devices.FireStick.DateWatched[0] >= '12/24/19'
```

# PartiQL update statements for DynamoDB
<a name="ql-reference.update"></a>

Use the `UPDATE` statement to modify the value of one or more attributes within an item in an Amazon DynamoDB table. 

**Note**  
You can only update one item at a time; you cannot issue a single DynamoDB PartiQL statement that updates multiple items. For information on updating multiple items, see [Performing transactions with PartiQL for DynamoDB](ql-reference.multiplestatements.transactions.md) or [Running batch operations with PartiQL for DynamoDB](ql-reference.multiplestatements.batching.md).

**Topics**
+ [Syntax](#ql-reference.update.syntax)
+ [Parameters](#ql-reference.update.parameters)
+ [Return value](#ql-reference.update.return)
+ [Examples](#ql-reference.update.examples)

## Syntax
<a name="ql-reference.update.syntax"></a>

```
UPDATE  table  
[SET | REMOVE]  path  [=  data] […]
WHERE condition [RETURNING returnvalues]
<returnvalues>  ::= [ALL OLD | MODIFIED OLD | ALL NEW | MODIFIED NEW] *
```

## Parameters
<a name="ql-reference.update.parameters"></a>

***table***  
(Required) The table containing the data to be modified.

***path***  
(Required) An attribute name or document path to be created or modified.

***data***  
(Required) An attribute value or the result of an operation.  
The supported operations to use with SET:  
+ LIST\_APPEND: adds a value to a list type.
+ SET\_ADD: adds a value to a number or string set.
+ SET\_DELETE: removes a value from a number or string set.

***condition***  
(Required) The selection criteria for the item to be modified. This condition must resolve to a single primary key value.

***returnvalues***  
(Optional) Use `returnvalues` if you want to get the item attributes as they appear before or after they are updated. The valid values are:   
+ `ALL OLD *`- Returns all of the attributes of the item, as they appeared before the update operation.
+ `MODIFIED OLD *`- Returns only the updated attributes, as they appeared before the update operation.
+ `ALL NEW *`- Returns all of the attributes of the item, as they appear after the update operation.
+ `MODIFIED NEW *`- Returns only the updated attributes, as they appear after the update operation.

## Return value
<a name="ql-reference.update.return"></a>

This statement does not return a value unless the `returnvalues` parameter is specified.

**Note**  
If the WHERE clause of the UPDATE statement does not evaluate to true for any item in the DynamoDB table, `ConditionalCheckFailedException` is returned.
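In application code, this usually means treating the exception as "no matching item" rather than a hard failure. A minimal sketch with the statement runner and exception class injected (with boto3 these would be `client.execute_statement` and `client.exceptions.ConditionalCheckFailedException`):

```python
def try_update(execute, statement, parameters, condition_failed_exc):
    """Run a PartiQL UPDATE; return True if an item was modified,
    False if the WHERE clause matched no item (the condition check failed)."""
    try:
        execute(Statement=statement, Parameters=parameters)
        return True
    except condition_failed_exc:
        return False
```

A caller can then branch on the boolean result instead of wrapping every UPDATE in its own try/except block.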

## Examples
<a name="ql-reference.update.examples"></a>

Update an attribute value in an existing item. If the attribute does not exist, it is created.

The following query updates an item in the `"Music"` table by adding an attribute of type number (`AwardsWon`) and an attribute of type map (`AwardDetail`).

```
UPDATE "Music" 
SET AwardsWon=1 
SET AwardDetail={'Grammys':[2020, 2018]}  
WHERE Artist='Acme Band' AND SongTitle='PartiQL Rocks'
```

You can add `RETURNING ALL OLD *` to return the attributes as they appeared before the `Update` operation.

```
UPDATE "Music" 
SET AwardsWon=1 
SET AwardDetail={'Grammys':[2020, 2018]}  
WHERE Artist='Acme Band' AND SongTitle='PartiQL Rocks'
RETURNING ALL OLD *
```

This returns the following:

```
{
    "Items": [
        {
            "Artist": {
                "S": "Acme Band"
            },
            "SongTitle": {
                "S": "PartiQL Rocks"
            }
        }
    ]
}
```

You can add `RETURNING ALL NEW *` to return the attributes as they appeared after the `Update` operation.

```
UPDATE "Music" 
SET AwardsWon=1 
SET AwardDetail={'Grammys':[2020, 2018]}  
WHERE Artist='Acme Band' AND SongTitle='PartiQL Rocks'
RETURNING ALL NEW *
```

This returns the following:

```
{
    "Items": [
        {
            "AwardDetail": {
                "M": {
                    "Grammys": {
                        "L": [
                            {
                                "N": "2020"
                            },
                            {
                                "N": "2018"
                            }
                        ]
                    }
                }
            },
            "AwardsWon": {
                "N": "1"
            }
        }
    ]
}
```

The following query updates an item in the `"Music"` table by appending to a list `AwardDetail.Grammys`.

```
UPDATE "Music" 
SET AwardDetail.Grammys =list_append(AwardDetail.Grammys,[2016])  
WHERE Artist='Acme Band' AND SongTitle='PartiQL Rocks'
```

The following query updates an item in the `"Music"` table by removing from a list `AwardDetail.Grammys`.

```
UPDATE "Music" 
REMOVE AwardDetail.Grammys[2]   
WHERE Artist='Acme Band' AND SongTitle='PartiQL Rocks'
```

The following query updates an item in the `"Music"` table by adding `BillBoard` to the map `AwardDetail`.

```
UPDATE "Music" 
SET AwardDetail.BillBoard=[2020] 
WHERE Artist='Acme Band' AND SongTitle='PartiQL Rocks'
```

The following query updates an item in the `"Music"` table by adding the string set attribute `BandMembers`.

```
UPDATE "Music" 
SET BandMembers =<<'member1', 'member2'>> 
WHERE Artist='Acme Band' AND SongTitle='PartiQL Rocks'
```

The following query updates an item in the `"Music"` table by adding `newbandmember` to the string set `BandMembers`.

```
UPDATE "Music" 
SET BandMembers =set_add(BandMembers, <<'newbandmember'>>) 
WHERE Artist='Acme Band' AND SongTitle='PartiQL Rocks'
```

# PartiQL delete statements for DynamoDB
<a name="ql-reference.delete"></a>

Use the `DELETE` statement to delete an existing item from your Amazon DynamoDB table.

**Note**  
You can only delete one item at a time. You cannot issue a single DynamoDB PartiQL statement that deletes multiple items. For information on deleting multiple items, see [Performing transactions with PartiQL for DynamoDB](ql-reference.multiplestatements.transactions.md) or [Running batch operations with PartiQL for DynamoDB](ql-reference.multiplestatements.batching.md).

**Topics**
+ [Syntax](#ql-reference.delete.syntax)
+ [Parameters](#ql-reference.delete.parameters)
+ [Return value](#ql-reference.delete.return)
+ [Examples](#ql-reference.delete.examples)

## Syntax
<a name="ql-reference.delete.syntax"></a>

```
DELETE FROM table 
 WHERE condition [RETURNING returnvalues]
 <returnvalues>  ::= ALL OLD *
```

## Parameters
<a name="ql-reference.delete.parameters"></a>

***table***  
(Required) The DynamoDB table containing the item to be deleted.

***condition***  
(Required) The selection criteria for the item to be deleted; this condition must resolve to a single primary key value.

***returnvalues***  
(Optional) Use `returnvalues` if you want to get the item attributes as they appeared before they were deleted. The valid values are:   
+ `ALL OLD *`- The content of the old item is returned.

## Return value
<a name="ql-reference.delete.return"></a>

This statement does not return a value unless the `returnvalues` parameter is specified.

**Note**  
If the DynamoDB table does not have any item with the same primary key as that of the item for which the DELETE is issued, SUCCESS is returned with 0 items deleted. If the table has an item with the same primary key, but the condition in the WHERE clause of the DELETE statement evaluates to false, `ConditionalCheckFailedException` is returned.

## Examples
<a name="ql-reference.delete.examples"></a>

The following query deletes an item in the `"Music"` table.

```
DELETE FROM "Music" WHERE "Artist" = 'Acme Band' AND "SongTitle" = 'PartiQL Rocks'
```

You can add the parameter `RETURNING ALL OLD *` to return the data that was deleted.

```
DELETE FROM "Music" WHERE "Artist" = 'Acme Band' AND "SongTitle" = 'PartiQL Rocks' RETURNING ALL OLD *
```

The `DELETE` statement then returns the following:

```
{
    "Items": [
        {
            "Artist": {
                "S": "Acme Band"
            },
            "SongTitle": {
                "S": "PartiQL Rocks"
            }
        }
    ]
}
```

# PartiQL insert statements for DynamoDB
<a name="ql-reference.insert"></a>

Use the `INSERT` statement to add an item to a table in Amazon DynamoDB.

**Note**  
You can only insert one item at a time; you cannot issue a single DynamoDB PartiQL statement that inserts multiple items. For information on inserting multiple items, see [Performing transactions with PartiQL for DynamoDB](ql-reference.multiplestatements.transactions.md) or [Running batch operations with PartiQL for DynamoDB](ql-reference.multiplestatements.batching.md).

**Topics**
+ [Syntax](#ql-reference.insert.syntax)
+ [Parameters](#ql-reference.insert.parameters)
+ [Return value](#ql-reference.insert.return)
+ [Examples](#ql-reference.insert.examples)

## Syntax
<a name="ql-reference.insert.syntax"></a>

Insert a single item.

```
INSERT INTO table VALUE item
```

## Parameters
<a name="ql-reference.insert.parameters"></a>

***table***  
(Required) The table where you want to insert the data. The table must already exist.

***item***  
(Required) A valid DynamoDB item represented as a [PartiQL tuple](https://partiql.org/docs.html). You must specify only *one* item. Each attribute name in the item is case sensitive and can be denoted with *single* quotation marks (`'...'`) in PartiQL.  
String values are also denoted with *single* quotation marks (`'...'`) in PartiQL.

## Return value
<a name="ql-reference.insert.return"></a>

This statement does not return any values.

**Note**  
If the DynamoDB table already has an item with the same primary key as the primary key of the item being inserted, `DuplicateItemException` is returned.

## Examples
<a name="ql-reference.insert.examples"></a>

```
INSERT INTO "Music" value {'Artist' : 'Acme Band','SongTitle' : 'PartiQL Rocks'}
```
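The single-item INSERT above can also be built programmatically. Because attribute names cannot be passed as `?` parameters, a statement for a variable set of attributes has to splice the names into the statement string while still passing the values through `Parameters`. A minimal sketch (the helper name is illustrative, and attribute names are assumed not to contain single quotation marks):

```python
def build_insert(table, attribute_names):
    """Build a parameterized PartiQL INSERT for the given attribute names.

    Values are left as ? placeholders, to be supplied in the same order
    through the Parameters field of ExecuteStatement.
    """
    pairs = ", ".join(f"'{name}' : ?" for name in attribute_names)
    return f'INSERT INTO "{table}" VALUE {{{pairs}}}'
```

For instance, `build_insert("Music", ["Artist", "SongTitle"])` produces a statement that takes two parameters, one per attribute.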

# Use PartiQL functions with DynamoDB
<a name="ql-functions"></a>

PartiQL in Amazon DynamoDB supports the following built-in variants of SQL standard functions.

**Note**  
Any SQL functions that are not included in this list are not currently supported in DynamoDB.

## Aggregate functions
<a name="ql-functions.aggregate"></a>
+ [Using the SIZE function with PartiQL for Amazon DynamoDB](ql-functions.size.md)

## Conditional functions
<a name="ql-functions.conditional"></a>
+ [Using the EXISTS function with PartiQL for DynamoDB](ql-functions.exists.md)
+ [Using the ATTRIBUTE\_TYPE function with PartiQL for DynamoDB](ql-functions.attribute_type.md)
+ [Using the BEGINS\_WITH function with PartiQL for DynamoDB](ql-functions.beginswith.md)
+ [Using the CONTAINS function with PartiQL for DynamoDB](ql-functions.contains.md)
+ [Using the MISSING function with PartiQL for DynamoDB](ql-functions.missing.md)

# Using the EXISTS function with PartiQL for DynamoDB
<a name="ql-functions.exists"></a>

You can use EXISTS to perform the same function as `ConditionCheck` does in the [TransactWriteItems](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/transaction-apis.html#transaction-apis-txwriteitems) API. The EXISTS function can only be used in transactions.

Given a value, returns `TRUE` if the value is a non-empty collection. Otherwise, returns `FALSE`.

**Note**  
This function can only be used in transactional operations.

## Syntax
<a name="ql-functions.exists.syntax"></a>

```
EXISTS ( statement )
```

## Arguments
<a name="ql-functions.exists.arguments"></a>

*statement*  
(Required) The SELECT statement that the function evaluates.  
The SELECT statement must specify a full primary key and one other condition.

## Return type
<a name="ql-functions.exists.return-type"></a>

`bool`

## Examples
<a name="ql-functions.exists.examples"></a>

```
EXISTS(
    SELECT * FROM "Music" 
    WHERE "Artist" = 'Acme Band' AND "SongTitle" = 'PartiQL Rocks')
```
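Because EXISTS is only valid inside a transaction, one way to see where it fits is to assemble the transact-statements payload programmatically. The following Python sketch builds the JSON you would pass to `execute-transaction`; the table, attribute names, and helper function are illustrative, not part of any SDK:

```python
import json

def condition_check(table, artist, song):
    """Build an EXISTS condition-check statement for a PartiQL transaction."""
    statement = (
        f"EXISTS(SELECT * FROM \"{table}\" "
        f"WHERE \"Artist\" = '{artist}' AND \"SongTitle\" = '{song}')"
    )
    return {"Statement": statement}

# A transaction that only updates the item if it already exists.
transact_statements = [
    condition_check("Music", "Acme Band", "PartiQL Rocks"),
    {"Statement": "UPDATE \"Music\" SET AwardsWon=1 "
                  "WHERE Artist='Acme Band' AND SongTitle='PartiQL Rocks'"},
]

print(json.dumps(transact_statements, indent=2))
```

Saving this output to a file and passing it with `--transact-statements file://...` mirrors the CLI workflow shown later in this section.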

# Using the BEGINS\_WITH function with PartiQL for DynamoDB
<a name="ql-functions.beginswith"></a>

Returns `TRUE` if the attribute specified begins with a particular substring.

## Syntax
<a name="ql-functions.beginswith.syntax"></a>

```
begins_with(path, value)
```

## Arguments
<a name="ql-functions.beginswith.arguments"></a>

*path*  
(Required) The attribute name or document path to use.

*value*  
(Required) The string to search for.

## Return type
<a name="ql-functions.beginswith.return-type"></a>

`bool`

## Examples
<a name="ql-functions.beginswith.examples"></a>

```
SELECT * FROM "Orders" WHERE "OrderID"=1 AND begins_with("Address", '7834 24th')
```

# Using the MISSING function with PartiQL for DynamoDB
<a name="ql-functions.missing"></a>

Returns `TRUE` if the item does not contain the attribute specified. Only equality and inequality operators can be used with this function.

## Syntax
<a name="ql-functions.missing.syntax"></a>

```
attributename IS | IS NOT MISSING
```

## Arguments
<a name="ql-functions.missing.arguments"></a>

*attributename*  
(Required) The attribute name to look for.

## Return type
<a name="ql-functions.missing.return-type"></a>

`bool`

## Examples
<a name="ql-functions.missing.examples"></a>

```
SELECT * FROM "Music" WHERE "Awards" IS MISSING
```

# Using the ATTRIBUTE\_TYPE function with PartiQL for DynamoDB
<a name="ql-functions.attribute_type"></a>

Returns `TRUE` if the attribute at the specified path is of a particular data type.

## Syntax
<a name="ql-functions.attribute_type.syntax"></a>

```
attribute_type(attributename, type)
```

## Arguments
<a name="ql-functions.attribute_type.arguments"></a>

*attributename*  
(Required) The attribute name to use.

*type*  
(Required) The attribute type to check for. For a list of valid values, see DynamoDB [attribute\_type](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.OperatorsAndFunctions.html#Expressions.OperatorsAndFunctions.Functions).

## Return type
<a name="ql-functions.attribute_type.return-type"></a>

`bool`

## Examples
<a name="ql-functions.attribute_type.examples"></a>

```
SELECT * FROM "Music" WHERE attribute_type("Artist", 'S')
```

# Using the CONTAINS function with PartiQL for DynamoDB
<a name="ql-functions.contains"></a>

Returns `TRUE` if the attribute specified by the path is one of the following:
+ A String that contains a particular substring. 
+ A Set that contains a particular element within the set.

For more information, see the DynamoDB [contains](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.OperatorsAndFunctions.html#Expressions.OperatorsAndFunctions.Functions) function. 

## Syntax
<a name="ql-functions.contains.syntax"></a>

```
contains( path, substring )
```

## Arguments
<a name="ql-functions.contains.arguments"></a>

*path*  
(Required) The attribute name or document path to use.

*substring*  
(Required) The attribute substring or set member to check for. For more information, see the DynamoDB [contains](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.OperatorsAndFunctions.html#Expressions.OperatorsAndFunctions.Functions) function.

## Return type
<a name="ql-functions.contains.return-type"></a>

`bool`

## Examples
<a name="ql-functions.contains.examples"></a>

```
SELECT * FROM "Orders" WHERE "OrderID"=1 AND contains("Address", 'Kirkland')
```

# Using the SIZE function with PartiQL for Amazon DynamoDB
<a name="ql-functions.size"></a>

Returns a number representing an attribute's size in bytes. For the data types that are valid for use with `size`, see the DynamoDB [size](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.OperatorsAndFunctions.html#Expressions.OperatorsAndFunctions.Functions) function.

## Syntax
<a name="ql-functions.size.syntax"></a>

```
size(path)
```

## Arguments
<a name="ql-functions.size.arguments"></a>

*path*  
(Required) The attribute name or document path.   
For supported types, see DynamoDB [size](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.OperatorsAndFunctions.html#Expressions.OperatorsAndFunctions.Functions) function.

## Return type
<a name="ql-functions.size.return-type"></a>

`int`

## Examples
<a name="ql-functions.size.examples"></a>

```
SELECT * FROM "Orders" WHERE "OrderID"=1 AND size("Image") > 300
```

# PartiQL arithmetic, comparison, and logical operators for DynamoDB
<a name="ql-operators"></a>

PartiQL in Amazon DynamoDB supports the following [SQL standard operators](https://www.w3schools.com/sql/sql_operators.asp).

**Note**  
Any SQL operators that are not included in this list are not currently supported in DynamoDB.

## Arithmetic operators
<a name="ql-operators.arithmetic"></a>



| Operator | Description | 
| --- | --- | 
| + | Add | 
| - | Subtract | 

## Comparison operators
<a name="ql-operators.comparison"></a>



| Operator | Description | 
| --- | --- | 
| = | Equal to | 
| <> | Not Equal to | 
| != | Not Equal to | 
| > | Greater than | 
| < | Less than | 
| >= | Greater than or equal to | 
| <= | Less than or equal to | 

## Logical operators
<a name="ql-operators.logical"></a>



| Operator | Description | 
| --- | --- | 
| AND | TRUE if all the conditions separated by AND are TRUE | 
| BETWEEN |  `TRUE` if the operand is within the range of comparisons. This operator is inclusive of the lower and upper bound of the operands on which you apply it.  | 
| IN | `TRUE` if the operand is equal to one of a list of expressions (at most 50 partition key values or at most 100 non-key attribute values). Results are returned in pages of up to 10 items. If the `IN` list contains more values, you must use the `NextToken` returned in the response to retrieve subsequent pages. | 
| IS | TRUE if the operand is a given PartiQL data type, including NULL or MISSING | 
| NOT | Reverses the value of a given Boolean expression | 
| OR | TRUE if any of the conditions separated by OR are TRUE | 

For more information about using logical operators, see [Making comparisons](Expressions.OperatorsAndFunctions.md#Expressions.OperatorsAndFunctions.Comparators) and [Logical evaluations](Expressions.OperatorsAndFunctions.md#Expressions.OperatorsAndFunctions.LogicalEvaluations).
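Because an `IN` query returns its matches in pages of up to 10 items, callers need a `NextToken` loop to collect the full result. A minimal sketch of that loop in Python follows; the `client` is any object exposing a boto3-style `execute_statement` method, and the stub client below stands in for the real DynamoDB client so the shape of the loop can be shown without an AWS call:

```python
def fetch_all(client, statement):
    """Collect every item for a PartiQL statement, following NextToken pages."""
    items, token = [], None
    while True:
        kwargs = {"Statement": statement}
        if token:
            kwargs["NextToken"] = token
        page = client.execute_statement(**kwargs)
        items.extend(page.get("Items", []))
        token = page.get("NextToken")
        if not token:
            return items

# Stub client that serves two canned pages, mimicking a paged IN response.
class StubClient:
    def __init__(self, pages):
        self._pages = list(pages)
    def execute_statement(self, **kwargs):
        return self._pages.pop(0)

pages = [
    {"Items": [{"pk": {"S": "a"}}], "NextToken": "t1"},
    {"Items": [{"pk": {"S": "b"}}]},
]
result = fetch_all(StubClient(pages), "SELECT * FROM \"Music\" WHERE pk IN ['a','b']")
```

With a real boto3 `dynamodb` client in place of the stub, the same loop drains every page of an `IN` query.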

# Performing transactions with PartiQL for DynamoDB
<a name="ql-reference.multiplestatements.transactions"></a>

This section describes how to use transactions with PartiQL for DynamoDB. PartiQL transactions are limited to 100 total statements (actions).

For more information on DynamoDB transactions, see [Managing complex workflows with DynamoDB transactions](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/transactions.html).

**Note**  
The entire transaction must consist of either read statements or write statements. You can't mix both in one transaction. The EXISTS function is an exception. You can use it to check the condition of specific attributes of the item in a similar manner to `ConditionCheck` in the [TransactWriteItems](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/transaction-apis.html#transaction-apis-txwriteitems) API operation.

**Topics**
+ [Syntax](#ql-reference.multiplestatements.transactions.syntax)
+ [Parameters](#ql-reference.multiplestatements.transactions.parameters)
+ [Return values](#ql-reference.multiplestatements.transactions.return)
+ [Examples](#ql-reference.multiplestatements.transactions.examples)

## Syntax
<a name="ql-reference.multiplestatements.transactions.syntax"></a>

```
[
   {
      "Statement":" statement ",
      "Parameters":[
         {
            " parametertype " : " parametervalue "
         }, ...]
   } , ...
]
```

## Parameters
<a name="ql-reference.multiplestatements.transactions.parameters"></a>

***statement***  
(Required) A PartiQL for DynamoDB supported statement.  
The entire transaction must consist of either read statements or write statements. You can't mix both in one transaction.

***parametertype***  
(Optional) A DynamoDB type, if parameters were used when specifying the PartiQL statement.

***parametervalue***  
(Optional) A parameter value if parameters were used when specifying the PartiQL statement.

## Return values
<a name="ql-reference.multiplestatements.transactions.return"></a>

This statement doesn't return any values for write operations (INSERT, UPDATE, or DELETE). However, it returns different values for read operations (SELECT) based on the conditions specified in the WHERE clause.

**Note**  
If any of the singleton INSERT, UPDATE, or DELETE operations return an error, the transactions are canceled with the `TransactionCanceledException` exception, and the cancellation reason code includes the errors from the individual singleton operations.
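When a transaction is canceled, the per-statement failure details arrive in the exception's `CancellationReasons` list. The following Python sketch pulls the failing reason codes out of a boto3-style error payload; the sample payload is illustrative, not captured from a real call:

```python
def cancellation_codes(error_response):
    """Return the non-None reason codes from a TransactionCanceledException payload."""
    reasons = error_response.get("CancellationReasons", [])
    return [r["Code"] for r in reasons if r.get("Code") not in (None, "None")]

# Illustrative payload: the second statement failed its condition check.
sample = {
    "Error": {"Code": "TransactionCanceledException"},
    "CancellationReasons": [
        {"Code": "None"},
        {"Code": "ConditionalCheckFailed", "Message": "The conditional request failed"},
    ],
}
```

A reason of `"None"` means that statement itself did not cause the cancellation; filtering those out leaves only the statements to fix before retrying.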

## Examples
<a name="ql-reference.multiplestatements.transactions.examples"></a>

The following example runs multiple statements as a transaction.

------
#### [ AWS CLI ]

1. Save the following JSON code to a file called partiql.json. 

   ```
   [
       {
           "Statement": "EXISTS(SELECT * FROM \"Music\" WHERE Artist='No One You Know' AND SongTitle='Call Me Today' AND Awards IS MISSING)"
       },
       {
           "Statement": "INSERT INTO \"Music\" VALUE {'Artist':?,'SongTitle':?}",
           "Parameters": [{"S": "Acme Band"}, {"S": "Best Song"}]
       },
       {
           "Statement": "UPDATE \"Music\" SET AwardsWon=1 SET AwardDetail={'Grammys':[2020, 2018]} WHERE Artist='Acme Band' AND SongTitle='PartiQL Rocks'"
       }
   ]
   ```

1. Run the following command in a command prompt.

   ```
   aws dynamodb execute-transaction --transact-statements  file://partiql.json
   ```

------
#### [ Java ]

```
public class DynamoDBPartiqlTransaction {

    public static void main(String[] args) {
        // Create the DynamoDB Client with the region you want
        AmazonDynamoDB dynamoDB = createDynamoDbClient("us-west-2");
        
        try {
            // Create ExecuteTransactionRequest
            ExecuteTransactionRequest executeTransactionRequest = createExecuteTransactionRequest();
            ExecuteTransactionResult executeTransactionResult = dynamoDB.executeTransaction(executeTransactionRequest);
            System.out.println("ExecuteTransaction successful.");
            // Handle executeTransactionResult

        } catch (Exception e) {
            handleExecuteTransactionErrors(e);
        }
    }

    private static AmazonDynamoDB createDynamoDbClient(String region) {
        return AmazonDynamoDBClientBuilder.standard().withRegion(region).build();
    }

    private static ExecuteTransactionRequest createExecuteTransactionRequest() {
        ExecuteTransactionRequest request = new ExecuteTransactionRequest();
        
        // Create statements
        List<ParameterizedStatement> statements = getPartiQLTransactionStatements();

        request.setTransactStatements(statements);
        return request;
    }

    private static List<ParameterizedStatement> getPartiQLTransactionStatements() {
        List<ParameterizedStatement> statements = new ArrayList<ParameterizedStatement>();

        statements.add(new ParameterizedStatement()
                               .withStatement("EXISTS(SELECT * FROM \"Music\" WHERE Artist='No One You Know' AND SongTitle='Call Me Today' AND Awards IS MISSING)"));

        statements.add(new ParameterizedStatement()
                               .withStatement("INSERT INTO \"Music\" VALUE {'Artist':?,'SongTitle':?}")
                               .withParameters(new AttributeValue("Acme Band"), new AttributeValue("Best Song")));

        statements.add(new ParameterizedStatement()
                               .withStatement("UPDATE \"Music\" SET AwardsWon=1 SET AwardDetail={'Grammys':[2020, 2018]} WHERE Artist='Acme Band' AND SongTitle='PartiQL Rocks'"));

        return statements;
    }

    // Handles errors during ExecuteTransaction execution. Use recommendations in error messages below to add error handling specific to 
    // your application use-case.
    private static void handleExecuteTransactionErrors(Exception exception) {
        try {
            throw exception;
        } catch (TransactionCanceledException tce) {
            System.out.println("Transaction Cancelled, implies a client issue, fix before retrying. Error: " + tce.getErrorMessage());
        } catch (TransactionInProgressException tipe) {
            System.out.println("The transaction with the given request token is already in progress, consider changing " +
                "retry strategy for this type of error. Error: " + tipe.getErrorMessage());
        } catch (IdempotentParameterMismatchException ipme) {
            System.out.println("Request rejected because it was retried with a different payload but with a request token that was already used, " +
                "change request token for this payload to be accepted. Error: " + ipme.getErrorMessage());
        } catch (Exception e) {
            handleCommonErrors(e);
        }
    }

    private static void handleCommonErrors(Exception exception) {
        try {
            throw exception;
        } catch (InternalServerErrorException isee) {
            System.out.println("Internal Server Error, generally safe to retry with exponential back-off. Error: " + isee.getErrorMessage());
        } catch (RequestLimitExceededException rlee) {
            System.out.println("Throughput exceeds the current throughput limit for your account, increase account level throughput before " + 
                "retrying. Error: " + rlee.getErrorMessage());
        } catch (ProvisionedThroughputExceededException ptee) {
            System.out.println("Request rate is too high. If you're using a custom retry strategy make sure to retry with exponential back-off. " +
                "Otherwise consider reducing frequency of requests or increasing provisioned capacity for your table or secondary index. Error: " + 
                ptee.getErrorMessage());
        } catch (ResourceNotFoundException rnfe) {
            System.out.println("One of the tables was not found, verify table exists before retrying. Error: " + rnfe.getErrorMessage());
        } catch (AmazonServiceException ase) {
            System.out.println("An AmazonServiceException occurred, indicates that the request was correctly transmitted to the DynamoDB " + 
                "service, but for some reason, the service was not able to process it, and returned an error response instead. Investigate and " +
                "configure retry strategy. Error type: " + ase.getErrorType() + ". Error message: " + ase.getErrorMessage());
        } catch (AmazonClientException ace) {
            System.out.println("An AmazonClientException occurred, indicates that the client was unable to get a response from DynamoDB " +
                "service, or the client was unable to parse the response from the service. Investigate and configure retry strategy. "+
                "Error: " + ace.getMessage());
        } catch (Exception e) {
            System.out.println("An exception occurred, investigate and configure retry strategy. Error: " + e.getMessage());
        }
    }

}
```

------

The following example shows the different return values when DynamoDB reads items with different conditions specified in the WHERE clause.

------
#### [ AWS CLI ]

1. Save the following JSON code to a file called partiql.json.

   ```
   [
       // Item exists and projected attribute exists
       {
           "Statement": "SELECT * FROM \"Music\" WHERE Artist='No One You Know' AND SongTitle='Call Me Today'"
       },
       // Item exists but projected attributes do not exist
       {
           "Statement": "SELECT non_existent_projected_attribute FROM \"Music\" WHERE Artist='No One You Know' AND SongTitle='Call Me Today'"
       },
       // Item does not exist
       {
           "Statement": "SELECT * FROM \"Music\" WHERE Artist='No One I Know' AND SongTitle='Call You Today'"
       }
   ]
   ```

1. Run the following command in a command prompt.

   ```
   aws dynamodb execute-transaction --transact-statements  file://partiql.json
   ```

1. The following response is returned:

   ```
   {
       "Responses": [
           // Item exists and projected attribute exists
           {
               "Item": {
                   "Artist":{
                       "S": "No One You Know"
                   },
                   "SongTitle":{
                       "S": "Call Me Today"
                   }    
               }
           },
           // Item exists but projected attributes do not exist
           {
               "Item": {}
           },
           // Item does not exist
           {}
       ]
   }
   ```

------
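The three response shapes above can be told apart mechanically when post-processing a transaction's read results. A short Python sketch (the response dicts mirror the CLI output shown above; the classifier itself is illustrative):

```python
def classify(response):
    """Label one entry from a PartiQL transaction read response."""
    if "Item" not in response:
        return "item not found"
    if not response["Item"]:
        return "item found, projected attributes missing"
    return "item and attributes found"

# The three cases from the example response, in order.
responses = [
    {"Item": {"Artist": {"S": "No One You Know"}, "SongTitle": {"S": "Call Me Today"}}},
    {"Item": {}},
    {},
]
labels = [classify(r) for r in responses]
```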

# Running batch operations with PartiQL for DynamoDB
<a name="ql-reference.multiplestatements.batching"></a>

This section describes how to use batch statements with PartiQL for DynamoDB.

**Note**  
The entire batch must consist of either read statements or write statements; you cannot mix both in one batch.
`BatchExecuteStatement` and `BatchWriteItem` can perform no more than 25 statements per batch.
`BatchExecuteStatement` uses `BatchGetItem` for read operations, which takes a list of primary keys specified in separate statements.

**Topics**
+ [Syntax](#ql-reference.multiplestatements.batching.syntax)
+ [Parameters](#ql-reference.multiplestatements.batching.parameters)
+ [Examples](#ql-reference.multiplestatements.batching.examples)

## Syntax
<a name="ql-reference.multiplestatements.batching.syntax"></a>

```
[
  {
    "Statement": "SELECT pk FROM ProblemSet WHERE pk = 'p#9StkWHYTxm7x2AqSXcrfu7' AND sk = 'info'"
  },
  {
    "Statement": "SELECT pk FROM ProblemSet WHERE pk = 'p#isC2ChceGbxHgESc4szoTE' AND sk = 'info'"
  }
]
```

```
[
   {
      "Statement":" statement ",
      "Parameters":[
         {
            " parametertype " : " parametervalue "
         }, ...]
   } , ...
]
```

## Parameters
<a name="ql-reference.multiplestatements.batching.parameters"></a>

***statement***  
(Required) A PartiQL for DynamoDB supported statement.  
+ The entire batch must consist of either read statements or write statements; you cannot mix both in one batch.
+ `BatchExecuteStatement` and `BatchWriteItem` can perform no more than 25 statements per batch.

***parametertype***  
(Optional) A DynamoDB type, if parameters were used when specifying the PartiQL statement.

***parametervalue***  
(Optional) A parameter value if parameters were used when specifying the PartiQL statement.
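Because `BatchExecuteStatement` caps a batch at 25 statements, larger statement lists must be split before submission. A minimal Python sketch of that split; the statement contents are illustrative:

```python
def chunk_statements(statements, limit=25):
    """Split a statement list into batches no larger than the given limit."""
    return [statements[i:i + limit] for i in range(0, len(statements), limit)]

# 60 inserts become three batches: 25 + 25 + 10.
stmts = [
    {"Statement": f"INSERT INTO Music VALUE {{'Artist':'A{i}','SongTitle':'S{i}'}}"}
    for i in range(60)
]
batches = chunk_statements(stmts)
```

Each resulting batch can then be submitted as one `batch-execute-statement` call.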

## Examples
<a name="ql-reference.multiplestatements.batching.examples"></a>

------
#### [ AWS CLI ]

1. Save the following JSON code to a file called partiql.json.

   ```
   [
       {
           "Statement": "INSERT INTO Music VALUE {'Artist':?,'SongTitle':?}",
           "Parameters": [{"S": "Acme Band"}, {"S": "Best Song"}]
       },
       {
           "Statement": "UPDATE Music SET AwardsWon=1, AwardDetail={'Grammys':[2020, 2018]} WHERE Artist='Acme Band' AND SongTitle='PartiQL Rocks'"
       }
   ]
   ```

1. Run the following command in a command prompt.

   ```
   aws dynamodb batch-execute-statement  --statements  file://partiql.json
   ```

------
#### [ Java ]

```
public class DynamoDBPartiqlBatch {

    public static void main(String[] args) {
        // Create the DynamoDB Client with the region you want
        AmazonDynamoDB dynamoDB = createDynamoDbClient("us-west-2");
        
        try {
            // Create BatchExecuteStatementRequest
            BatchExecuteStatementRequest batchExecuteStatementRequest = createBatchExecuteStatementRequest();
            BatchExecuteStatementResult batchExecuteStatementResult = dynamoDB.batchExecuteStatement(batchExecuteStatementRequest);
            System.out.println("BatchExecuteStatement successful.");
            // Handle batchExecuteStatementResult

        } catch (Exception e) {
            handleBatchExecuteStatementErrors(e);
        }
    }

    private static AmazonDynamoDB createDynamoDbClient(String region) {

        return AmazonDynamoDBClientBuilder.standard().withRegion(region).build();
    }

    private static BatchExecuteStatementRequest createBatchExecuteStatementRequest() {
        BatchExecuteStatementRequest request = new BatchExecuteStatementRequest();

        // Create statements
        List<BatchStatementRequest> statements = getPartiQLBatchStatements();

        request.setStatements(statements);
        return request;
    }

    private static List<BatchStatementRequest> getPartiQLBatchStatements() {
        List<BatchStatementRequest> statements = new ArrayList<BatchStatementRequest>();

        statements.add(new BatchStatementRequest()
                               .withStatement("INSERT INTO Music value {'Artist':'Acme Band','SongTitle':'PartiQL Rocks'}"));

        statements.add(new BatchStatementRequest()
                               .withStatement("UPDATE Music set AwardDetail.BillBoard=[2020] where Artist='Acme Band' and SongTitle='PartiQL Rocks'"));

        return statements;
    }

    // Handles errors during BatchExecuteStatement execution. Use recommendations in error messages below to add error handling specific to 
    // your application use-case.
    private static void handleBatchExecuteStatementErrors(Exception exception) {
        try {
            throw exception;
        } catch (Exception e) {
            // There are no API specific errors to handle for BatchExecuteStatement, common DynamoDB API errors are handled below
            handleCommonErrors(e);
        }
    }

    private static void handleCommonErrors(Exception exception) {
        try {
            throw exception;
        } catch (InternalServerErrorException isee) {
            System.out.println("Internal Server Error, generally safe to retry with exponential back-off. Error: " + isee.getErrorMessage());
        } catch (RequestLimitExceededException rlee) {
            System.out.println("Throughput exceeds the current throughput limit for your account, increase account level throughput before " + 
                "retrying. Error: " + rlee.getErrorMessage());
        } catch (ProvisionedThroughputExceededException ptee) {
            System.out.println("Request rate is too high. If you're using a custom retry strategy make sure to retry with exponential back-off. " +
                "Otherwise consider reducing frequency of requests or increasing provisioned capacity for your table or secondary index. Error: " + 
                ptee.getErrorMessage());
        } catch (ResourceNotFoundException rnfe) {
            System.out.println("One of the tables was not found, verify table exists before retrying. Error: " + rnfe.getErrorMessage());
        } catch (AmazonServiceException ase) {
            System.out.println("An AmazonServiceException occurred, indicates that the request was correctly transmitted to the DynamoDB " + 
                "service, but for some reason, the service was not able to process it, and returned an error response instead. Investigate and " +
                "configure retry strategy. Error type: " + ase.getErrorType() + ". Error message: " + ase.getErrorMessage());
        } catch (AmazonClientException ace) {
            System.out.println("An AmazonClientException occurred, indicates that the client was unable to get a response from DynamoDB " +
                "service, or the client was unable to parse the response from the service. Investigate and configure retry strategy. "+
                "Error: " + ace.getMessage());
        } catch (Exception e) {
            System.out.println("An exception occurred, investigate and configure retry strategy. Error: " + e.getMessage());
        }
    }

}
```

------

# IAM security policies with PartiQL for DynamoDB
<a name="ql-iam"></a>

The following permissions are required:
+ To read items using PartiQL for DynamoDB, you must have `dynamodb:PartiQLSelect` permission on the table or index.
+ To insert items using PartiQL for DynamoDB, you must have `dynamodb:PartiQLInsert` permission on the table or index.
+ To update items using PartiQL for DynamoDB, you must have `dynamodb:PartiQLUpdate` permission on the table or index.
+ To delete items using PartiQL for DynamoDB, you must have `dynamodb:PartiQLDelete` permission on the table or index.

## Example: Allow all PartiQL for DynamoDB statements (Select/Insert/Update/Delete) on a table
<a name="access-policy-ql-iam-example1"></a>

The following IAM policy grants permissions to run all PartiQL for DynamoDB statements on a table. 

------
#### [ JSON ]


```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "dynamodb:PartiQLInsert",
            "dynamodb:PartiQLUpdate",
            "dynamodb:PartiQLDelete",
            "dynamodb:PartiQLSelect"
         ],
         "Resource":[
            "arn:aws:dynamodb:us-west-2:123456789012:table/Music"
         ]
      }
   ]
}
```

------

## Example: Allow PartiQL for DynamoDB select statements on a table
<a name="access-policy-ql-iam-example2"></a>

The following IAM policy grants permissions to run the `select` statement on a specific table.

------
#### [ JSON ]


```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "dynamodb:PartiQLSelect"
         ],
         "Resource":[
            "arn:aws:dynamodb:us-west-2:123456789012:table/Music"
         ]
      }
   ]
}
```

------

## Example: Allow PartiQL for DynamoDB insert statements on an index
<a name="access-policy-ql-iam-example3"></a>

The following IAM policy grants permissions to run the `insert` statement on a specific index. 

------
#### [ JSON ]


```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "dynamodb:PartiQLInsert"
         ],
         "Resource":[
            "arn:aws:dynamodb:us-west-2:123456789012:table/Music/index/index1"
         ]
      }
   ]
}
```

------

## Example: Allow PartiQL for DynamoDB transactional statements only on a table
<a name="access-policy-ql-iam-example4"></a>

The following IAM policy grants permissions to run only transactional statements on a specific table. 

------
#### [ JSON ]


```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "dynamodb:PartiQLInsert",
            "dynamodb:PartiQLUpdate",
            "dynamodb:PartiQLDelete",
            "dynamodb:PartiQLSelect"
         ],
         "Resource":[
            "arn:aws:dynamodb:us-west-2:123456789012:table/Music"
         ],
         "Condition":{
            "StringEquals":{
               "dynamodb:EnclosingOperation":[
                  "ExecuteTransaction"
               ]
            }
         }
      }
   ]
}
```

------

## Example: Allow PartiQL for DynamoDB non-transactional reads and writes and block transactional statements on a table
<a name="access-policy-ql-iam-example5"></a>

 The following IAM policy grants permissions to run PartiQL for DynamoDB non-transactional reads and writes while blocking PartiQL for DynamoDB transactional reads and writes.

------
#### [ JSON ]


```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Effect":"Deny",
         "Action":[
            "dynamodb:PartiQLInsert",
            "dynamodb:PartiQLUpdate",
            "dynamodb:PartiQLDelete",
            "dynamodb:PartiQLSelect"
         ],
         "Resource":[
            "arn:aws:dynamodb:us-west-2:123456789012:table/Music"
         ],
         "Condition":{
            "StringEquals":{
               "dynamodb:EnclosingOperation":[
                  "ExecuteTransaction"
               ]
            }
         }
      },
      {
         "Effect":"Allow",
         "Action":[
            "dynamodb:PartiQLInsert",
            "dynamodb:PartiQLUpdate",
            "dynamodb:PartiQLDelete",
            "dynamodb:PartiQLSelect"
         ],
         "Resource":[
            "arn:aws:dynamodb:us-west-2:123456789012:table/Music"
         ]
      }
   ]
}
```

------

## Example: Allow select statements and deny full table scan statements in PartiQL for DynamoDB
<a name="access-policy-ql-iam-example6"></a>

The following IAM policy grants permissions to run the `select` statement on a specific table while blocking `select` statements that result in a full table scan.

------
#### [ JSON ]


```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Effect":"Deny",
         "Action":[
            "dynamodb:PartiQLSelect"
         ],
         "Resource":[
            "arn:aws:dynamodb:us-west-2:123456789012:table/WatchList"
         ],
         "Condition":{
            "Bool":{
               "dynamodb:FullTableScan":[
                  "true"
               ]
            }
         }
      },
      {
         "Effect":"Allow",
         "Action":[
            "dynamodb:PartiQLSelect"
         ],
         "Resource":[
            "arn:aws:dynamodb:us-west-2:123456789012:table/WatchList"
         ]
      }
   ]
}
```

------

# Working with items: Java
<a name="JavaDocumentAPIItemCRUD"></a>

You can use the AWS SDK for Java Document API to perform typical create, read, update, and delete (CRUD) operations on Amazon DynamoDB items in a table.

**Note**  
The SDK for Java also provides an object persistence model, allowing you to map your client-side classes to DynamoDB tables. This approach can reduce the amount of code that you have to write. For more information, see [Java 1.x: DynamoDBMapper](DynamoDBMapper.md).

This section contains Java examples to perform several Java Document API item actions and several complete working examples.

**Topics**
+ [Putting an item](#PutDocumentAPIJava)
+ [Getting an item](#JavaDocumentAPIGetItem)
+ [Batch write: Putting and deleting multiple items](#BatchWriteDocumentAPIJava)
+ [Batch get: Getting multiple items](#JavaDocumentAPIBatchGetItem)
+ [Updating an item](#JavaDocumentAPIItemUpdate)
+ [Deleting an item](#DeleteMidLevelJava)
+ [Example: CRUD operations using the AWS SDK for Java document API](JavaDocumentAPICRUDExample.md)
+ [Example: Batch operations using AWS SDK for Java document API](batch-operation-document-api-java.md)
+ [Example: Handling binary type attributes using the AWS SDK for Java document API](JavaDocumentAPIBinaryTypeExample.md)

## Putting an item
<a name="PutDocumentAPIJava"></a>

The `putItem` method stores an item in a table. If an item with the same primary key already exists, `putItem` replaces the entire item. To update only specific attributes instead of replacing the whole item, use the `updateItem` method. For more information, see [Updating an item](#JavaDocumentAPIItemUpdate). 

------
#### [ Java v2 ]

```
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.DynamoDbException;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;
import software.amazon.awssdk.services.dynamodb.model.PutItemResponse;
import software.amazon.awssdk.services.dynamodb.model.ResourceNotFoundException;
import java.util.HashMap;

/**
 * Before running this Java V2 code example, set up your development
 * environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 *
 * To place items into an Amazon DynamoDB table using the AWS SDK for Java V2,
 * it's a best practice to use the
 * Enhanced Client. See the EnhancedPutItem example.
 */
public class PutItem {
    public static void main(String[] args) {
        final String usage = """

                Usage:
                    <tableName> <key> <keyVal> <albumTitle> <albumTitleVal> <awards> <awardsVal> <songTitle> <songTitleVal>

                Where:
                    tableName - The Amazon DynamoDB table in which an item is placed (for example, Music3).
                    key - The name of the key attribute (for example, Artist).
                    keyVal - The key value that identifies the item (for example, Famous Band).
                    albumTitle - The name of the album title attribute (for example, AlbumTitle).
                    albumTitleVal - The value of the album title (for example, Songs About Life).
                    awards - The name of the awards attribute (for example, Awards).
                    awardsVal - The value of the awards attribute (for example, 10).
                    songTitle - The name of the song title attribute (for example, SongTitle).
                    songTitleVal - The value of the song title (for example, Happy Day).
                **Warning** This program will place an item that you specify into a table!
                """;

        if (args.length != 9) {
            System.out.println(usage);
            System.exit(1);
        }

        String tableName = args[0];
        String key = args[1];
        String keyVal = args[2];
        String albumTitle = args[3];
        String albumTitleValue = args[4];
        String awards = args[5];
        String awardVal = args[6];
        String songTitle = args[7];
        String songTitleVal = args[8];

        Region region = Region.US_EAST_1;
        DynamoDbClient ddb = DynamoDbClient.builder()
                .region(region)
                .build();

        putItemInTable(ddb, tableName, key, keyVal, albumTitle, albumTitleValue, awards, awardVal, songTitle,
                songTitleVal);
        System.out.println("Done!");
        ddb.close();
    }

    public static void putItemInTable(DynamoDbClient ddb,
            String tableName,
            String key,
            String keyVal,
            String albumTitle,
            String albumTitleValue,
            String awards,
            String awardVal,
            String songTitle,
            String songTitleVal) {

        HashMap<String, AttributeValue> itemValues = new HashMap<>();
        itemValues.put(key, AttributeValue.builder().s(keyVal).build());
        itemValues.put(songTitle, AttributeValue.builder().s(songTitleVal).build());
        itemValues.put(albumTitle, AttributeValue.builder().s(albumTitleValue).build());
        itemValues.put(awards, AttributeValue.builder().s(awardVal).build());

        PutItemRequest request = PutItemRequest.builder()
                .tableName(tableName)
                .item(itemValues)
                .build();

        try {
            PutItemResponse response = ddb.putItem(request);
            System.out.println(tableName + " was successfully updated. The request id is "
                    + response.responseMetadata().requestId());

        } catch (ResourceNotFoundException e) {
            System.err.format("Error: The Amazon DynamoDB table \"%s\" can't be found.\n", tableName);
            System.err.println("Be sure that it exists and that you've typed its name correctly!");
            System.exit(1);
        } catch (DynamoDbException e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
    }
}
```

------
#### [ Java v1 ]

Follow these steps: 

1. Create an instance of the `DynamoDB` class.

1. Create an instance of the `Table` class to represent the table you want to work with.

1. Create an instance of the `Item` class to represent the new item. You must specify the new item's primary key and its attributes.

1. Call the `putItem` method of the `Table` object, using the `Item` that you created in the preceding step.

The following Java code example demonstrates the preceding tasks. The code writes a new item to the `ProductCatalog` table.

**Example**  

```
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
DynamoDB dynamoDB = new DynamoDB(client);

Table table = dynamoDB.getTable("ProductCatalog");

// Build a list of related items
List<Number> relatedItems = new ArrayList<Number>();
relatedItems.add(341);
relatedItems.add(472);
relatedItems.add(649);

//Build a map of product pictures
Map<String, String> pictures = new HashMap<String, String>();
pictures.put("FrontView", "http://example.com/products/123_front.jpg");
pictures.put("RearView", "http://example.com/products/123_rear.jpg");
pictures.put("SideView", "http://example.com/products/123_left_side.jpg");

//Build a map of product reviews
Map<String, List<String>> reviews = new HashMap<String, List<String>>();

List<String> fiveStarReviews = new ArrayList<String>();
fiveStarReviews.add("Excellent! Can't recommend it highly enough!  Buy it!");
fiveStarReviews.add("Do yourself a favor and buy this");
reviews.put("FiveStar", fiveStarReviews);

List<String> oneStarReviews = new ArrayList<String>();
oneStarReviews.add("Terrible product!  Do not buy this.");
reviews.put("OneStar", oneStarReviews);

// Build the item
Item item = new Item()
    .withPrimaryKey("Id", 123)
    .withString("Title", "Bicycle 123")
    .withString("Description", "123 description")
    .withString("BicycleType", "Hybrid")
    .withString("Brand", "Brand-Company C")
    .withNumber("Price", 500)
    .withStringSet("Color",  new HashSet<String>(Arrays.asList("Red", "Black")))
    .withString("ProductCategory", "Bicycle")
    .withBoolean("InStock", true)
    .withNull("QuantityOnHand")
    .withList("RelatedItems", relatedItems)
    .withMap("Pictures", pictures)
    .withMap("Reviews", reviews);

// Write the item to the table
PutItemOutcome outcome = table.putItem(item);
```

In the preceding example, the item has attributes that are scalars (`String`, `Number`, `Boolean`, `Null`), sets (`String Set`), and document types (`List`, `Map`).

------

### Specifying optional parameters
<a name="PutItemJavaDocumentAPIOptions"></a>

Along with the required parameters, you can also specify optional parameters to the `putItem` method. For example, the following Java code uses an optional parameter to specify a condition for uploading the item. If the condition that you specify is not met, the AWS SDK for Java throws a `ConditionalCheckFailedException`. The code specifies the following optional parameters in the `putItem` method:
+ A `ConditionExpression` that defines the conditions for the request. The code defines the condition that the existing item with the same primary key is replaced only if it has an ISBN attribute that equals a specific value. 
+ A map for `ExpressionAttributeValues` that is used in the condition. In this case, there is only one substitution required: The placeholder `:val` in the condition expression is replaced at runtime with the actual ISBN value to be checked.

The following example adds a new book item using these optional parameters.

**Example**  

```
Item item = new Item()
    .withPrimaryKey("Id", 104)
    .withString("Title", "Book 104 Title")
    .withString("ISBN", "444-4444444444")
    .withNumber("Price", 20)
    .withStringSet("Authors",
        new HashSet<String>(Arrays.asList("Author1", "Author2")));

Map<String, Object> expressionAttributeValues = new HashMap<String, Object>();
expressionAttributeValues.put(":val", "444-4444444444");

PutItemOutcome outcome = table.putItem(
    item,
    "ISBN = :val", // ConditionExpression parameter
    null,          // ExpressionAttributeNames parameter - we're not using it for this example
    expressionAttributeValues);
```

### PutItem and JSON documents
<a name="PutItemJavaDocumentAPI.JSON"></a>

You can store a JSON document as an attribute in a DynamoDB table. To do this, use the `withJSON` method of `Item`. This method parses the JSON document and maps each element to a native DynamoDB data type.

Suppose that you wanted to store the following JSON document, containing vendors that can fulfill orders for a particular product.

**Example**  

```
{
    "V01": {
        "Name": "Acme Books",
        "Offices": [ "Seattle" ]
    },
    "V02": {
        "Name": "New Publishers, Inc.",
        "Offices": ["London", "New York"
        ]
    },
    "V03": {
        "Name": "Better Buy Books",
        "Offices": [ "Tokyo", "Los Angeles", "Sydney"
        ]
    }
}
```

You can use the `withJSON` method to store this in the `ProductCatalog` table, in a `Map` attribute named `VendorInfo`. The following Java code example demonstrates how to do this.

```
// Convert the document into a String.  Must escape all double-quotes.
String vendorDocument = "{"
    + "    \"V01\": {"
    + "        \"Name\": \"Acme Books\","
    + "        \"Offices\": [ \"Seattle\" ]"
    + "    },"
    + "    \"V02\": {"
    + "        \"Name\": \"New Publishers, Inc.\","
    + "        \"Offices\": [ \"London\", \"New York\" ]"
    + "    },"
    + "    \"V03\": {"
    + "        \"Name\": \"Better Buy Books\","
    + "        \"Offices\": [ \"Tokyo\", \"Los Angeles\", \"Sydney\" ]"
    + "    }"
    + "}";

Item item = new Item()
    .withPrimaryKey("Id", 210)
    .withString("Title", "Book 210 Title")
    .withString("ISBN", "210-2102102102")
    .withNumber("Price", 30)
    .withJSON("VendorInfo", vendorDocument);

PutItemOutcome outcome = table.putItem(item);
```
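
Escaping every quotation mark by hand is error-prone. If your application targets JDK 15 or later, a Java text block produces an equivalent document string without any escapes. (This is a general Java note, not part of the DynamoDB document API; the class and method names below are illustrative.)

```java
public class VendorDocument {

    // Text blocks (JDK 15+) keep embedded JSON readable; no \" escapes are needed.
    static String vendorDocument() {
        return """
                {
                    "V01": { "Name": "Acme Books", "Offices": [ "Seattle" ] },
                    "V02": { "Name": "New Publishers, Inc.", "Offices": [ "London", "New York" ] },
                    "V03": { "Name": "Better Buy Books", "Offices": [ "Tokyo", "Los Angeles", "Sydney" ] }
                }
                """;
    }

    public static void main(String[] args) {
        // The resulting string can be passed to Item.withJSON("VendorInfo", ...) as before.
        System.out.println(vendorDocument().contains("\"New Publishers, Inc.\"")); // true
    }
}
```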

## Getting an item
<a name="JavaDocumentAPIGetItem"></a>

To retrieve a single item, use the `getItem` method of a `Table` object. Follow these steps: 

1. Create an instance of the `DynamoDB` class.

1. Create an instance of the `Table` class to represent the table you want to work with.

1. Call the `getItem` method of the `Table` instance. You must specify the primary key of the item that you want to retrieve.

The following Java code example demonstrates the preceding steps. The code gets the item that has the specified partition key.

```
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
DynamoDB dynamoDB = new DynamoDB(client);

Table table = dynamoDB.getTable("ProductCatalog");

Item item = table.getItem("Id", 210);
```

### Specifying optional parameters
<a name="GetItemJavaDocumentAPIOptions"></a>

Along with the required parameters, you can also specify optional parameters for the `getItem` method. For example, the following Java code example uses an optional method to retrieve only a specific list of attributes and to specify strongly consistent reads. (To learn more about read consistency, see [DynamoDB read consistency](HowItWorks.ReadConsistency.md).)

You can use a `ProjectionExpression` to retrieve only specific attributes or elements, rather than an entire item. A `ProjectionExpression` can specify top-level or nested attributes using document paths. For more information, see [Using projection expressions in DynamoDB](Expressions.ProjectionExpressions.md).

The parameters of the `getItem` method don't let you specify read consistency. However, you can create a `GetItemSpec`, which provides full access to all of the inputs to the low-level `GetItem` operation. The following code example creates a `GetItemSpec` and uses that spec as input to the `getItem` method.

**Example**  

```
GetItemSpec spec = new GetItemSpec()
    .withPrimaryKey("Id", 123)
    .withProjectionExpression("Id, Title, RelatedItems[0], Reviews.FiveStar")
    .withConsistentRead(true);

Item item = table.getItem(spec);

System.out.println(item.toJSONPretty());
```

To print an `Item` in a human-readable format, use the `toJSONPretty` method. The output from the previous example looks like the following.

```
{
  "RelatedItems" : [ 341 ],
  "Reviews" : {
    "FiveStar" : [ "Excellent! Can't recommend it highly enough! Buy it!", "Do yourself a favor and buy this" ]
  },
  "Id" : 123,
  "Title" : "Bicycle 123"
}
```
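
As a rough illustration of what a document path points at, the following self-contained sketch resolves paths such as `RelatedItems[0]` and `Reviews.FiveStar` against plain Java collections. (This is a toy resolver for illustration only; DynamoDB evaluates projection expressions server side, and the class below is hypothetical.)

```java
import java.util.List;
import java.util.Map;

public class DocPath {

    // Toy resolver: walks a dotted path, with an optional [index] per step,
    // through nested maps and lists, mirroring document path notation.
    static Object resolve(Object node, String path) {
        for (String part : path.split("\\.")) {
            int bracket = part.indexOf('[');
            String name = bracket < 0 ? part : part.substring(0, bracket);
            node = ((Map<?, ?>) node).get(name);
            if (bracket >= 0) {
                int index = Integer.parseInt(part.substring(bracket + 1, part.length() - 1));
                node = ((List<?>) node).get(index);
            }
        }
        return node;
    }

    public static void main(String[] args) {
        Map<String, Object> item = Map.of(
                "Title", "Bicycle 123",
                "RelatedItems", List.of(341, 472, 649),
                "Reviews", Map.of("FiveStar", List.of("Excellent!", "Buy it")));

        System.out.println(resolve(item, "RelatedItems[0]")); // 341
        System.out.println(resolve(item, "Reviews.FiveStar"));
    }
}
```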

### GetItem and JSON documents
<a name="GetItemJavaDocumentAPI.JSON"></a>

In the [PutItem and JSON documents](#PutItemJavaDocumentAPI.JSON) section, you store a JSON document in a `Map` attribute named `VendorInfo`. You can use the `getItem` method to retrieve the entire document in JSON format. Or you can use document path notation to retrieve only some of the elements in the document. The following Java code example demonstrates these techniques.

```
GetItemSpec spec = new GetItemSpec()
    .withPrimaryKey("Id", 210);

System.out.println("All vendor info:");
spec.withProjectionExpression("VendorInfo");
System.out.println(table.getItem(spec).toJSON());

System.out.println("A single vendor:");
spec.withProjectionExpression("VendorInfo.V03");
System.out.println(table.getItem(spec).toJSON());

System.out.println("First office location for this vendor:");
spec.withProjectionExpression("VendorInfo.V03.Offices[0]");
System.out.println(table.getItem(spec).toJSON());
```

The output from the previous example looks like the following.

```
All vendor info:
{"VendorInfo":{"V03":{"Name":"Better Buy Books","Offices":["Tokyo","Los Angeles","Sydney"]},"V02":{"Name":"New Publishers, Inc.","Offices":["London","New York"]},"V01":{"Name":"Acme Books","Offices":["Seattle"]}}}
A single vendor:
{"VendorInfo":{"V03":{"Name":"Better Buy Books","Offices":["Tokyo","Los Angeles","Sydney"]}}}
First office location for this vendor:
{"VendorInfo":{"V03":{"Offices":["Tokyo"]}}}
```

**Note**  
You can use the `toJSON` method to convert any item (or its attributes) to a JSON-formatted string. The following code retrieves several top-level and nested attributes and prints the results as JSON.  

```
GetItemSpec spec = new GetItemSpec()
    .withPrimaryKey("Id", 210)
    .withProjectionExpression("VendorInfo.V01, Title, Price");

Item item = table.getItem(spec);
System.out.println(item.toJSON());
```
The output looks like the following.  

```
{"VendorInfo":{"V01":{"Name":"Acme Books","Offices":["Seattle"]}},"Price":30,"Title":"Book 210 Title"}
```

## Batch write: Putting and deleting multiple items
<a name="BatchWriteDocumentAPIJava"></a>

*Batch write* refers to putting and deleting multiple items in a batch. The `batchWriteItem` method enables you to put and delete multiple items from one or more tables in a single call. The following are the steps to put or delete multiple items using the AWS SDK for Java Document API.

1. Create an instance of the `DynamoDB` class.

1. Create an instance of the `TableWriteItems` class that describes all the put and delete operations for a table. If you want to write to multiple tables in a single batch write operation, you must create one `TableWriteItems` instance per table.

1. Call the `batchWriteItem` method by providing the `TableWriteItems` objects that you created in the preceding step. 

1. Process the response. Check whether any unprocessed request items were returned in the response. This can happen if you reach the provisioned throughput quota or because of some other transient error. Also, DynamoDB limits the request size and the number of operations that you can specify in a request. If you exceed these limits, DynamoDB rejects the request. For more information, see [Quotas in Amazon DynamoDB](ServiceQuotas.md). 

The following Java code example demonstrates the preceding steps. The example performs a `batchWriteItem` operation on two tables: `Forum` and `Thread`. The corresponding `TableWriteItems` objects define the following actions:
+ Put an item in the `Forum` table.
+ Put and delete an item in the `Thread` table.

The code then calls `batchWriteItem` to perform the operation.

```
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
DynamoDB dynamoDB = new DynamoDB(client);

TableWriteItems forumTableWriteItems = new TableWriteItems("Forum")
    .withItemsToPut(
        new Item()
            .withPrimaryKey("Name", "Amazon RDS")
            .withNumber("Threads", 0));

TableWriteItems threadTableWriteItems = new TableWriteItems("Thread")
    .withItemsToPut(
        new Item()
            .withPrimaryKey("ForumName", "Amazon RDS", "Subject", "Amazon RDS Thread 1"))
    .withHashAndRangeKeysToDelete("ForumName", "Subject",
        "Amazon S3", "Some sort key value");

BatchWriteItemOutcome outcome = dynamoDB.batchWriteItem(forumTableWriteItems, threadTableWriteItems);

// Code for checking unprocessed items is omitted in this example
```

For a working example, see [Example: Batch write operation using the AWS SDK for Java document API](batch-operation-document-api-java.md#JavaDocumentAPIBatchWrite). 
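
The unprocessed-item check that the example omits is essentially a loop: resubmit whatever the service returns as unprocessed until nothing is left. The following self-contained sketch shows the shape of that loop with plain Java collections; in the SDK, `resubmit` corresponds to a call such as `dynamoDB.batchWriteItemUnprocessed(...)`, and the map mirrors `outcome.getUnprocessedItems()`. (Names here are illustrative, and a production loop should also back off between retries.)

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.UnaryOperator;

public class UnprocessedRetry {

    // Resubmit the leftover requests until the "service" reports nothing
    // unprocessed. Returns how many retry rounds were needed.
    static int drain(Map<String, List<String>> unprocessed,
                     UnaryOperator<Map<String, List<String>>> resubmit) {
        int rounds = 0;
        while (!unprocessed.isEmpty()) {
            unprocessed = resubmit.apply(unprocessed);
            rounds++;
        }
        return rounds;
    }

    public static void main(String[] args) {
        Map<String, List<String>> pending = new HashMap<>();
        pending.put("Thread", new ArrayList<>(List.of("put-1", "delete-1")));

        // Simulated service: accepts one request per round, returns the rest.
        int rounds = drain(pending, batch -> {
            List<String> reqs = batch.get("Thread");
            reqs.remove(0);
            if (reqs.isEmpty()) batch.remove("Thread");
            return batch;
        });
        System.out.println(rounds); // 2 rounds until everything is accepted
    }
}
```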

## Batch get: Getting multiple items
<a name="JavaDocumentAPIBatchGetItem"></a>

The `batchGetItem` method enables you to retrieve multiple items from one or more tables. To retrieve a single item, you can use the `getItem` method. 

Follow these steps: 

1. Create an instance of the `DynamoDB` class.

1. Create an instance of the `TableKeysAndAttributes` class that describes a list of primary key values to retrieve from a table. If you want to read from multiple tables in a single batch get operation, you must create one `TableKeysAndAttributes` instance per table.

1. Call the `batchGetItem` method by providing the `TableKeysAndAttributes` objects that you created in the preceding step.

The following Java code example demonstrates the preceding steps. The example retrieves two items from the `Forum` table and three items from the `Thread` table.

```
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
DynamoDB dynamoDB = new DynamoDB(client);

String forumTableName = "Forum";
String threadTableName = "Thread";

TableKeysAndAttributes forumTableKeysAndAttributes = new TableKeysAndAttributes(forumTableName);
forumTableKeysAndAttributes.addHashOnlyPrimaryKeys("Name",
    "Amazon S3",
    "Amazon DynamoDB");

TableKeysAndAttributes threadTableKeysAndAttributes = new TableKeysAndAttributes(threadTableName);
threadTableKeysAndAttributes.addHashAndRangePrimaryKeys("ForumName", "Subject",
    "Amazon DynamoDB","DynamoDB Thread 1",
    "Amazon DynamoDB","DynamoDB Thread 2",
    "Amazon S3","S3 Thread 1");

BatchGetItemOutcome outcome = dynamoDB.batchGetItem(
    forumTableKeysAndAttributes, threadTableKeysAndAttributes);

for (String tableName : outcome.getTableItems().keySet()) {
    System.out.println("Items in table " + tableName);
    List<Item> items = outcome.getTableItems().get(tableName);
    for (Item item : items) {
        System.out.println(item);
    }
}
```

### Specifying optional parameters
<a name="BatchGetItemJavaDocumentAPIOptions"></a>

Along with the required parameters, you can also specify optional parameters when using `batchGetItem`. For example, you can provide a `ProjectionExpression` with each `TableKeysAndAttributes` you define. This allows you to specify the attributes that you want to retrieve from the table.

The following code example retrieves two items from the `Forum` table. The `withProjectionExpression` parameter specifies that only the `Threads` attribute is to be retrieved.

**Example**  

```
TableKeysAndAttributes forumTableKeysAndAttributes = new TableKeysAndAttributes("Forum")
    .withProjectionExpression("Threads");

forumTableKeysAndAttributes.addHashOnlyPrimaryKeys("Name",
    "Amazon S3",
    "Amazon DynamoDB");

BatchGetItemOutcome outcome = dynamoDB.batchGetItem(forumTableKeysAndAttributes);
```

## Updating an item
<a name="JavaDocumentAPIItemUpdate"></a>

The `updateItem` method of a `Table` object can update existing attribute values, add new attributes, or delete attributes from an existing item. 

The `updateItem` method behaves as follows:
+ If an item does not exist (no item in the table with the specified primary key), `updateItem` adds a new item to the table.
+ If an item exists, `updateItem` performs the update as specified by the `UpdateExpression` parameter.

**Note**  
It is also possible to "update" an item using `putItem`. For example, if you call `putItem` to add an item to the table, but there is already an item with the specified primary key, `putItem` replaces the entire item. If there are attributes in the existing item that are not specified in the input, `putItem` removes those attributes from the item.  
In general, we recommend that you use `updateItem` whenever you want to modify any item attributes. The `updateItem` method only modifies the item attributes that you specify in the input, and the other attributes in the item remain unchanged.
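
The difference between the two methods can be pictured with plain maps, treating an item as a map of attributes. (This is a local analogy only; DynamoDB applies these semantics server side, and the class below is illustrative.)

```java
import java.util.HashMap;
import java.util.Map;

public class PutVsUpdate {

    // "putItem" semantics: the stored item becomes exactly the input map;
    // attributes missing from the input are gone afterward.
    static Map<String, Object> put(Map<String, Object> stored, Map<String, Object> input) {
        return new HashMap<>(input);
    }

    // "updateItem" semantics: only the attributes in the input change;
    // everything else in the stored item survives.
    static Map<String, Object> update(Map<String, Object> stored, Map<String, Object> input) {
        Map<String, Object> merged = new HashMap<>(stored);
        merged.putAll(input);
        return merged;
    }

    public static void main(String[] args) {
        Map<String, Object> stored = Map.of("Id", 101, "Price", 20, "ISBN", "111-1111111111");
        Map<String, Object> input = Map.of("Id", 101, "Price", 25);

        System.out.println(put(stored, input).containsKey("ISBN"));    // false: "putItem" dropped ISBN
        System.out.println(update(stored, input).containsKey("ISBN")); // true: "updateItem" kept it
    }
}
```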

Follow these steps: 

1. Create an instance of the `Table` class to represent the table that you want to work with.

1. Call the `updateItem` method of the `Table` instance. You must specify the primary key of the item that you want to update, along with an `UpdateExpression` that describes the attributes to modify and how to modify them.

The following Java code example demonstrates the preceding tasks. The code updates a book item in the `ProductCatalog` table. It adds a new author to the set of `Authors` and deletes the existing `ISBN` attribute. It also reduces the price by one.

An `ExpressionAttributeValues` map is used in the `UpdateExpression`. The placeholders `:val1` and `:val2` are replaced at runtime with the actual values for `Authors` and `Price`.

```
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
DynamoDB dynamoDB = new DynamoDB(client);

Table table = dynamoDB.getTable("ProductCatalog");

Map<String, String> expressionAttributeNames = new HashMap<String, String>();
expressionAttributeNames.put("#A", "Authors");
expressionAttributeNames.put("#P", "Price");
expressionAttributeNames.put("#I", "ISBN");

Map<String, Object> expressionAttributeValues = new HashMap<String, Object>();
expressionAttributeValues.put(":val1",
    new HashSet<String>(Arrays.asList("Author YY","Author ZZ")));
expressionAttributeValues.put(":val2", 1);   //Price

UpdateItemOutcome outcome =  table.updateItem(
    "Id",          // key attribute name
    101,           // key attribute value
    "add #A :val1 set #P = #P - :val2 remove #I", // UpdateExpression
    expressionAttributeNames,
    expressionAttributeValues);
```

### Specifying optional parameters
<a name="UpdateItemJavaDocumentAPIOptions"></a>

Along with the required parameters, you can also specify optional parameters for the `updateItem` method, including a condition that must be met in order for the update to occur. If the condition that you specify is not met, the AWS SDK for Java throws a `ConditionalCheckFailedException`. For example, the following Java code conditionally updates a book item's price to 25. It specifies a `ConditionExpression` stating that the price should be updated only if the existing price is 20.

**Example**  

```
Table table = dynamoDB.getTable("ProductCatalog");

Map<String, String> expressionAttributeNames = new HashMap<String, String>();
expressionAttributeNames.put("#P", "Price");

Map<String, Object> expressionAttributeValues = new HashMap<String, Object>();
expressionAttributeValues.put(":val1", 25);  // update Price to 25...
expressionAttributeValues.put(":val2", 20);  //...but only if existing Price is 20

UpdateItemOutcome outcome = table.updateItem(
    new PrimaryKey("Id",101),
    "set #P = :val1", // UpdateExpression
    "#P = :val2",     // ConditionExpression
    expressionAttributeNames,
    expressionAttributeValues);
```

### Atomic counter
<a name="AtomicCounterJavaDocumentAPI"></a>

You can use `updateItem` to implement an atomic counter, where you increment or decrement the value of an existing attribute without interfering with other write requests. To increment an atomic counter, use an `UpdateExpression` with a `set` action to add a numeric value to an existing attribute of type `Number`.

The following example demonstrates this, incrementing the `PageCount` attribute by one. It also demonstrates the use of the `ExpressionAttributeNames` parameter in an `UpdateExpression`.

```
Table table = dynamoDB.getTable("ProductCatalog");

Map<String,String> expressionAttributeNames = new HashMap<String,String>();
expressionAttributeNames.put("#p", "PageCount");

Map<String,Object> expressionAttributeValues = new HashMap<String,Object>();
expressionAttributeValues.put(":val", 1);

UpdateItemOutcome outcome = table.updateItem(
    "Id", 121,
    "set #p = #p + :val",
    expressionAttributeNames,
    expressionAttributeValues);
```
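
The guarantee that an atomic counter provides is analogous to an in-process atomic add: concurrent increments are never lost, which is not true of a client-side read-modify-write. (A local illustration only; DynamoDB applies the addition server side, and the class below is hypothetical.)

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.stream.IntStream;

public class AtomicAdd {

    // Server-side "set #p = #p + :val" behaves like an atomic add:
    // every concurrent increment lands, unlike a get-then-put sequence
    // that can overwrite another writer's change.
    static long incrementConcurrently(long start, int writers) {
        AtomicLong counter = new AtomicLong(start);
        IntStream.range(0, writers).parallel().forEach(i -> counter.addAndGet(1));
        return counter.get();
    }

    public static void main(String[] args) {
        System.out.println(incrementConcurrently(500, 100)); // always 600
    }
}
```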

## Deleting an item
<a name="DeleteMidLevelJava"></a>

The `deleteItem` method deletes an item from a table. You must provide the primary key of the item that you want to delete.

Follow these steps: 

1. Create an instance of the `DynamoDB` client.

1. Call the `deleteItem` method by providing the key of the item you want to delete. 

The following Java example demonstrates these tasks.

**Example**  

```
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
DynamoDB dynamoDB = new DynamoDB(client);

Table table = dynamoDB.getTable("ProductCatalog");

DeleteItemOutcome outcome = table.deleteItem("Id", 101);
```

### Specifying optional parameters
<a name="DeleteItemJavaDocumentAPIOptions"></a>

You can specify optional parameters for `deleteItem`. For example, the following Java code example specifies a `ConditionExpression`, stating that a book item in `ProductCatalog` can only be deleted if the book is no longer in publication (the `InPublication` attribute is false).

**Example**  

```
Map<String,Object> expressionAttributeValues = new HashMap<String,Object>();
expressionAttributeValues.put(":val", false);

DeleteItemOutcome outcome = table.deleteItem("Id",103,
    "InPublication = :val",
    null, // ExpressionAttributeNames - not used in this example
    expressionAttributeValues);
```

# Example: CRUD operations using the AWS SDK for Java document API
<a name="JavaDocumentAPICRUDExample"></a>

The following code example illustrates CRUD operations on an Amazon DynamoDB item. The example creates an item, retrieves it, performs various updates, and finally deletes the item.

**Note**  
The SDK for Java also provides an object persistence model, enabling you to map your client-side classes to DynamoDB tables. This approach can reduce the amount of code that you have to write. For more information, see [Java 1.x: DynamoDBMapper](DynamoDBMapper.md).

**Note**  
This code example assumes that you have already loaded data into DynamoDB for your account by following the instructions in the [Creating tables and loading data for code examples in DynamoDB](SampleData.md) section.  
For step-by-step instructions to run the following example, see [Java code examples](CodeSamples.Java.md).

```
package com.amazonaws.codesamples.document;

import java.io.IOException;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DeleteItemOutcome;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.document.UpdateItemOutcome;
import com.amazonaws.services.dynamodbv2.document.spec.DeleteItemSpec;
import com.amazonaws.services.dynamodbv2.document.spec.UpdateItemSpec;
import com.amazonaws.services.dynamodbv2.document.utils.NameMap;
import com.amazonaws.services.dynamodbv2.document.utils.ValueMap;
import com.amazonaws.services.dynamodbv2.model.ReturnValue;

public class DocumentAPIItemCRUDExample {

    static AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
    static DynamoDB dynamoDB = new DynamoDB(client);

    static String tableName = "ProductCatalog";

    public static void main(String[] args) throws IOException {

        createItems();

        retrieveItem();

        // Perform various updates.
        updateMultipleAttributes();
        updateAddNewAttribute();
        updateExistingAttributeConditionally();

        // Delete the item.
        deleteItem();

    }

    private static void createItems() {

        Table table = dynamoDB.getTable(tableName);
        try {

            Item item = new Item().withPrimaryKey("Id", 120).withString("Title", "Book 120 Title")
                    .withString("ISBN", "120-1111111111")
                    .withStringSet("Authors", new HashSet<String>(Arrays.asList("Author12", "Author22")))
                    .withNumber("Price", 20).withString("Dimensions", "8.5x11.0x.75").withNumber("PageCount", 500)
                    .withBoolean("InPublication", false).withString("ProductCategory", "Book");
            table.putItem(item);

            item = new Item().withPrimaryKey("Id", 121).withString("Title", "Book 121 Title")
                    .withString("ISBN", "121-1111111111")
                    .withStringSet("Authors", new HashSet<String>(Arrays.asList("Author21", "Author22")))
                    .withNumber("Price", 20).withString("Dimensions", "8.5x11.0x.75").withNumber("PageCount", 500)
                    .withBoolean("InPublication", true).withString("ProductCategory", "Book");
            table.putItem(item);

        } catch (Exception e) {
            System.err.println("Create items failed.");
            System.err.println(e.getMessage());

        }
    }

    private static void retrieveItem() {
        Table table = dynamoDB.getTable(tableName);

        try {

            Item item = table.getItem("Id", 120, "Id, ISBN, Title, Authors", null);

            System.out.println("Printing item after retrieving it....");
            System.out.println(item.toJSONPretty());

        } catch (Exception e) {
            System.err.println("GetItem failed.");
            System.err.println(e.getMessage());
        }

    }

    private static void updateAddNewAttribute() {
        Table table = dynamoDB.getTable(tableName);

        try {

            UpdateItemSpec updateItemSpec = new UpdateItemSpec().withPrimaryKey("Id", 121)
                    .withUpdateExpression("set #na = :val1").withNameMap(new NameMap().with("#na", "NewAttribute"))
                    .withValueMap(new ValueMap().withString(":val1", "Some value"))
                    .withReturnValues(ReturnValue.ALL_NEW);

            UpdateItemOutcome outcome = table.updateItem(updateItemSpec);

            // Check the response.
            System.out.println("Printing item after adding new attribute...");
            System.out.println(outcome.getItem().toJSONPretty());

        } catch (Exception e) {
            System.err.println("Failed to add new attribute in " + tableName);
            System.err.println(e.getMessage());
        }
    }

    private static void updateMultipleAttributes() {

        Table table = dynamoDB.getTable(tableName);

        try {

            UpdateItemSpec updateItemSpec = new UpdateItemSpec().withPrimaryKey("Id", 120)
                    .withUpdateExpression("add #a :val1 set #na=:val2")
                    .withNameMap(new NameMap().with("#a", "Authors").with("#na", "NewAttribute"))
                    .withValueMap(
                            new ValueMap().withStringSet(":val1", "Author YY", "Author ZZ").withString(":val2",
                                    "someValue"))
                    .withReturnValues(ReturnValue.ALL_NEW);

            UpdateItemOutcome outcome = table.updateItem(updateItemSpec);

            // Check the response.
            System.out.println("Printing item after multiple attribute update...");
            System.out.println(outcome.getItem().toJSONPretty());

        } catch (Exception e) {
            System.err.println("Failed to update multiple attributes in " + tableName);
            System.err.println(e.getMessage());

        }
    }

    private static void updateExistingAttributeConditionally() {

        Table table = dynamoDB.getTable(tableName);

        try {

            // Specify the desired price (25.00) and also the condition
            // (price = 20.00).

            UpdateItemSpec updateItemSpec = new UpdateItemSpec().withPrimaryKey("Id", 120)
                    .withReturnValues(ReturnValue.ALL_NEW).withUpdateExpression("set #p = :val1")
                    .withConditionExpression("#p = :val2").withNameMap(new NameMap().with("#p", "Price"))
                    .withValueMap(new ValueMap().withNumber(":val1", 25).withNumber(":val2", 20));

            UpdateItemOutcome outcome = table.updateItem(updateItemSpec);

            // Check the response.
            System.out.println("Printing item after conditional update...");
            System.out.println(outcome.getItem().toJSONPretty());

        } catch (Exception e) {
            System.err.println("Error updating item in " + tableName);
            System.err.println(e.getMessage());
        }
    }

    private static void deleteItem() {

        Table table = dynamoDB.getTable(tableName);

        try {

            DeleteItemSpec deleteItemSpec = new DeleteItemSpec().withPrimaryKey("Id", 120)
                    .withConditionExpression("#ip = :val").withNameMap(new NameMap().with("#ip", "InPublication"))
                    .withValueMap(new ValueMap().withBoolean(":val", false)).withReturnValues(ReturnValue.ALL_OLD);

            DeleteItemOutcome outcome = table.deleteItem(deleteItemSpec);

            // Check the response.
            System.out.println("Printing item that was deleted...");
            System.out.println(outcome.getItem().toJSONPretty());

        } catch (Exception e) {
            System.err.println("Error deleting item in " + tableName);
            System.err.println(e.getMessage());
        }
    }
}
```

# Example: Batch operations using AWS SDK for Java document API
<a name="batch-operation-document-api-java"></a>

This section provides examples of batch write and batch get operations in Amazon DynamoDB using the AWS SDK for Java Document API.

**Note**  
The SDK for Java also provides an object persistence model, enabling you to map your client-side classes to DynamoDB tables. This approach can reduce the amount of code that you have to write. For more information, see [Java 1.x: DynamoDBMapper](DynamoDBMapper.md).

**Topics**
+ [Example: Batch write operation using the AWS SDK for Java document API](#JavaDocumentAPIBatchWrite)
+ [Example: Batch get operation using the AWS SDK for Java document API](#JavaDocumentAPIBatchGet)

## Example: Batch write operation using the AWS SDK for Java document API
<a name="JavaDocumentAPIBatchWrite"></a>

The following Java code example uses the `batchWriteItem` method to perform the following put and delete operations:
+ Put one item in the `Forum` table.
+ Put one item and delete one item from the `Thread` table. 

You can include put and delete requests against one or more tables in a single batch write request. However, `batchWriteItem` limits a single request to 25 put or delete operations and a total payload size of 16 MB. If your request exceeds these limits, it is rejected. If your table does not have sufficient provisioned throughput to serve the request, the unprocessed request items are returned in the response. 

The following example checks the response to see if it has any unprocessed request items. If it does, it loops back and resends the `batchWriteItem` request with unprocessed items in the request. If you followed the examples in this guide, you should already have created the `Forum` and `Thread` tables. You can also create these tables and upload sample data programmatically. For more information, see [Creating example tables and uploading data using the AWS SDK for Java](AppendixSampleDataCodeJava.md).

For step-by-step instructions for testing the following sample, see [Java code examples](CodeSamples.Java.md). 

**Example**  

```
package com.amazonaws.codesamples.document;

import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.BatchWriteItemOutcome;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.TableWriteItems;
import com.amazonaws.services.dynamodbv2.model.WriteRequest;

public class DocumentAPIBatchWrite {

    static AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
    static DynamoDB dynamoDB = new DynamoDB(client);

    static String forumTableName = "Forum";
    static String threadTableName = "Thread";

    public static void main(String[] args) throws IOException {

        writeMultipleItemsBatchWrite();

    }

    private static void writeMultipleItemsBatchWrite() {
        try {

            // Add a new item to Forum
            TableWriteItems forumTableWriteItems = new TableWriteItems(forumTableName) // Forum
                    .withItemsToPut(new Item().withPrimaryKey("Name", "Amazon RDS").withNumber("Threads", 0));

            // Add a new item, and delete an existing item, from Thread
            // This table has a partition key and range key, so need to specify
            // both of them
            TableWriteItems threadTableWriteItems = new TableWriteItems(threadTableName)
                    .withItemsToPut(
                            new Item().withPrimaryKey("ForumName", "Amazon RDS", "Subject", "Amazon RDS Thread 1")
                                    .withString("Message", "Amazon RDS Thread 1 message")
                                    .withStringSet("Tags", new HashSet<String>(Arrays.asList("cache", "in-memory"))))
                    .withHashAndRangeKeysToDelete("ForumName", "Subject", "Amazon S3", "S3 Thread 100");

            System.out.println("Making the request.");
            BatchWriteItemOutcome outcome = dynamoDB.batchWriteItem(forumTableWriteItems, threadTableWriteItems);

            do {

                // Check for unprocessed keys which could happen if you exceed
                // provisioned throughput

                Map<String, List<WriteRequest>> unprocessedItems = outcome.getUnprocessedItems();

                if (outcome.getUnprocessedItems().size() == 0) {
                    System.out.println("No unprocessed items found");
                } else {
                    System.out.println("Retrieving the unprocessed items");
                    outcome = dynamoDB.batchWriteItemUnprocessed(unprocessedItems);
                }

            } while (outcome.getUnprocessedItems().size() > 0);

        } catch (Exception e) {
            System.err.println("Failed to write items: ");
            e.printStackTrace(System.err);
        }

    }

}
```

## Example: Batch get operation using the AWS SDK for Java document API
<a name="JavaDocumentAPIBatchGet"></a>

The following Java code example uses the `batchGetItem` method to retrieve multiple items from the `Forum` and the `Thread` tables. The `BatchGetItemRequest` specifies the table names and a list of keys for each item to get. The example processes the response by printing the items retrieved.

**Note**  
This code example assumes that you have already loaded data into DynamoDB for your account by following the instructions in the [Creating tables and loading data for code examples in DynamoDB](SampleData.md) section.  
For step-by-step instructions to run the following example, see [Java code examples](CodeSamples.Java.md).

**Example**  

```
package com.amazonaws.codesamples.document;

import java.io.IOException;
import java.util.List;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.BatchGetItemOutcome;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.TableKeysAndAttributes;
import com.amazonaws.services.dynamodbv2.model.KeysAndAttributes;

public class DocumentAPIBatchGet {
    static AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
    static DynamoDB dynamoDB = new DynamoDB(client);

    static String forumTableName = "Forum";
    static String threadTableName = "Thread";

    public static void main(String[] args) throws IOException {
        retrieveMultipleItemsBatchGet();
    }

    private static void retrieveMultipleItemsBatchGet() {

        try {

            TableKeysAndAttributes forumTableKeysAndAttributes = new TableKeysAndAttributes(forumTableName);
            // Add a partition key
            forumTableKeysAndAttributes.addHashOnlyPrimaryKeys("Name", "Amazon S3", "Amazon DynamoDB");

            TableKeysAndAttributes threadTableKeysAndAttributes = new TableKeysAndAttributes(threadTableName);
            // Add a partition key and a sort key
            threadTableKeysAndAttributes.addHashAndRangePrimaryKeys("ForumName", "Subject", "Amazon DynamoDB",
                    "DynamoDB Thread 1", "Amazon DynamoDB", "DynamoDB Thread 2", "Amazon S3", "S3 Thread 1");

            System.out.println("Making the request.");

            BatchGetItemOutcome outcome = dynamoDB.batchGetItem(forumTableKeysAndAttributes,
                    threadTableKeysAndAttributes);

            Map<String, KeysAndAttributes> unprocessed = null;

            do {
                for (String tableName : outcome.getTableItems().keySet()) {
                    System.out.println("Items in table " + tableName);
                    List<Item> items = outcome.getTableItems().get(tableName);
                    for (Item item : items) {
                        System.out.println(item.toJSONPretty());
                    }
                }

                // Check for unprocessed keys, which could happen if you exceed
                // provisioned throughput or reach the limit on response size.
                unprocessed = outcome.getUnprocessedKeys();

                if (unprocessed.isEmpty()) {
                    System.out.println("No unprocessed keys found");
                } else {
                    System.out.println("Retrieving the unprocessed keys");
                    outcome = dynamoDB.batchGetItemUnprocessed(unprocessed);
                }

            } while (!unprocessed.isEmpty());

        } catch (Exception e) {
            System.err.println("Failed to retrieve items.");
            System.err.println(e.getMessage());
        }

    }

}
```

# Example: Handling binary type attributes using the AWS SDK for Java document API
<a name="JavaDocumentAPIBinaryTypeExample"></a>

The following Java code example illustrates handling binary type attributes. The example adds an item to the `Reply` table. The item includes a binary type attribute (`ExtendedMessage`) that stores compressed data. The example then retrieves the item and prints all the attribute values. For illustration, the example uses the `GZIPOutputStream` class to compress a sample stream and assign it to the `ExtendedMessage` attribute. When the binary attribute is retrieved, it is decompressed using the `GZIPInputStream` class. 
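The compression round trip itself is independent of DynamoDB and uses only standard JDK classes. The following is a minimal standalone sketch of the technique (the class and method names are illustrative, not part of the sample that follows):

```
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipRoundTrip {

    // Compress a UTF-8 string into bytes suitable for a binary attribute.
    static byte[] compress(String input) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(baos)) {
            gz.write(input.getBytes(StandardCharsets.UTF_8));
        }
        return baos.toByteArray();
    }

    // Decompress bytes retrieved from a binary attribute back into a string.
    static String decompress(byte[] compressed) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            byte[] buffer = new byte[1024];
            int length;
            while ((length = gz.read(buffer)) != -1) {
                baos.write(buffer, 0, length);
            }
        }
        return new String(baos.toByteArray(), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        String message = "Long message to be compressed in a lengthy forum reply";
        byte[] compressed = compress(message);
        System.out.println("Round trip matches: " + decompress(compressed).equals(message));
    }
}
```

When storing the result in DynamoDB, you wrap the compressed bytes in a `ByteBuffer` (for example, with `ByteBuffer.wrap`) before assigning them to the binary attribute, as the full example in this section does.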

**Note**  
The SDK for Java also provides an object persistence model, enabling you to map your client-side classes to DynamoDB tables. This approach can reduce the amount of code that you have to write. For more information, see [Java 1.x: DynamoDBMapper](DynamoDBMapper.md).

If you followed the [Creating tables and loading data for code examples in DynamoDB](SampleData.md) section, you should already have created the `Reply` table. You can also create this table programmatically. For more information, see [Creating example tables and uploading data using the AWS SDK for Java](AppendixSampleDataCodeJava.md).

For step-by-step instructions for testing the following sample, see [Java code examples](CodeSamples.Java.md). 

**Example**  

```
package com.amazonaws.codesamples.document;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.document.spec.GetItemSpec;

public class DocumentAPIItemBinaryExample {

    static AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
    static DynamoDB dynamoDB = new DynamoDB(client);

    static String tableName = "Reply";
    static SimpleDateFormat dateFormatter = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'");

    public static void main(String[] args) throws IOException {
        try {

            // Format the primary key values
            String threadId = "Amazon DynamoDB#DynamoDB Thread 2";

            dateFormatter.setTimeZone(TimeZone.getTimeZone("UTC"));
            String replyDateTime = dateFormatter.format(new Date());

            // Add a new reply with a binary attribute type
            createItem(threadId, replyDateTime);

            // Retrieve the reply with a binary attribute type
            retrieveItem(threadId, replyDateTime);

            // clean up by deleting the item
            deleteItem(threadId, replyDateTime);
        } catch (Exception e) {
            System.err.println("Error running the binary attribute type example: " + e);
            e.printStackTrace(System.err);
        }
    }

    public static void createItem(String threadId, String replyDateTime) throws IOException {

        Table table = dynamoDB.getTable(tableName);

        // Craft a long message
        String messageInput = "Long message to be compressed in a lengthy forum reply";

        // Compress the long message
        ByteBuffer compressedMessage = compressString(messageInput);

        table.putItem(new Item().withPrimaryKey("Id", threadId).withString("ReplyDateTime", replyDateTime)
                .withString("Message", "Long message follows").withBinary("ExtendedMessage", compressedMessage)
                .withString("PostedBy", "User A"));
    }

    public static void retrieveItem(String threadId, String replyDateTime) throws IOException {

        Table table = dynamoDB.getTable(tableName);

        GetItemSpec spec = new GetItemSpec().withPrimaryKey("Id", threadId, "ReplyDateTime", replyDateTime)
                .withConsistentRead(true);

        Item item = table.getItem(spec);

        // Uncompress the reply message and print
        String uncompressed = uncompressString(ByteBuffer.wrap(item.getBinary("ExtendedMessage")));

        System.out.println("Reply message:\n" + " Id: " + item.getString("Id") + "\n" + " ReplyDateTime: "
                + item.getString("ReplyDateTime") + "\n" + " PostedBy: " + item.getString("PostedBy") + "\n"
                + " Message: "
                + item.getString("Message") + "\n" + " ExtendedMessage (uncompressed): " + uncompressed + "\n");
    }

    public static void deleteItem(String threadId, String replyDateTime) {

        Table table = dynamoDB.getTable(tableName);
        table.deleteItem("Id", threadId, "ReplyDateTime", replyDateTime);
    }

    private static ByteBuffer compressString(String input) throws IOException {
        // Compress the UTF-8 encoded String into a byte[]
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        GZIPOutputStream os = new GZIPOutputStream(baos);
        os.write(input.getBytes("UTF-8"));
        os.close();
        baos.close();
        byte[] compressedBytes = baos.toByteArray();

        // The following code writes the compressed bytes to a ByteBuffer.
        // A simpler way to do this is to call ByteBuffer.wrap(compressedBytes),
        // which automatically sets the position to zero. The longer form below
        // shows the importance of resetting the buffer position to the
        // beginning if you write bytes directly to it, because the SDK sends
        // only the bytes after the current position to DynamoDB.
        ByteBuffer buffer = ByteBuffer.allocate(compressedBytes.length);
        buffer.put(compressedBytes, 0, compressedBytes.length);
        buffer.position(0); // Important: reset the position of the ByteBuffer
                            // to the beginning
        return buffer;
    }

    private static String uncompressString(ByteBuffer input) throws IOException {
        byte[] bytes = input.array();
        ByteArrayInputStream bais = new ByteArrayInputStream(bytes);
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        GZIPInputStream is = new GZIPInputStream(bais);

        int chunkSize = 1024;
        byte[] buffer = new byte[chunkSize];
        int length = 0;
        while ((length = is.read(buffer, 0, chunkSize)) != -1) {
            baos.write(buffer, 0, length);
        }

        String result = new String(baos.toByteArray(), "UTF-8");

        is.close();
        baos.close();
        bais.close();

        return result;
    }
}
```
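The warning in `compressString` about buffer position can be seen in isolation with plain `java.nio` code. This short sketch (names are illustrative) shows why a freshly written buffer must be rewound before the SDK can read its contents:

```
import java.nio.ByteBuffer;

public class ByteBufferPositionDemo {

    public static void main(String[] args) {
        byte[] compressedBytes = { 10, 20, 30, 40 };

        // Writing bytes directly advances the position to the end, so a
        // consumer reading from the current position sees nothing.
        ByteBuffer buffer = ByteBuffer.allocate(compressedBytes.length);
        buffer.put(compressedBytes);
        System.out.println("Remaining before reset: " + buffer.remaining()); // 0

        // Resetting the position makes all four bytes visible again.
        buffer.position(0);
        System.out.println("Remaining after reset: " + buffer.remaining()); // 4

        // ByteBuffer.wrap produces a buffer whose position is already zero.
        ByteBuffer wrapped = ByteBuffer.wrap(compressedBytes);
        System.out.println("Remaining after wrap: " + wrapped.remaining()); // 4
    }
}
```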

# Working with items: .NET
<a name="LowLevelDotNetItemCRUD"></a>

You can use the AWS SDK for .NET low-level API to perform typical create, read, update, and delete (CRUD) operations on an item in a table. The following are the common steps that you follow to perform data CRUD operations using the .NET low-level API:

1. Create an instance of the `AmazonDynamoDBClient` class (the client).

1. Provide the operation-specific required parameters in a corresponding request object.

   For example, use the `PutItemRequest` request object when uploading an item and use the `GetItemRequest` request object when retrieving an existing item. 

   You can use the request object to provide both the required and optional parameters. 

1. Run the appropriate method provided by the client by passing in the request object that you created in the preceding step. 

   The `AmazonDynamoDBClient` client provides `PutItem`, `GetItem`, `UpdateItem`, and `DeleteItem` methods for the CRUD operations.

**Topics**
+ [Putting an item](#PutItemLowLevelAPIDotNet)
+ [Getting an item](#GetItemLowLevelDotNET)
+ [Updating an item](#UpdateItemLowLevelDotNet)
+ [Atomic counter](#AtomicCounterLowLevelDotNet)
+ [Deleting an item](#DeleteMidLevelDotNet)
+ [Batch write: Putting and deleting multiple items](#BatchWriteLowLevelDotNet)
+ [Batch get: Getting multiple items](#BatchGetLowLevelDotNet)
+ [Example: CRUD operations using the AWS SDK for .NET low-level API](LowLevelDotNetItemsExample.md)
+ [Example: Batch operations using the AWS SDK for .NET low-level API](batch-operation-lowlevel-dotnet.md)
+ [Example: Handling binary type attributes using the AWS SDK for .NET low-level API](LowLevelDotNetBinaryTypeExample.md)

## Putting an item
<a name="PutItemLowLevelAPIDotNet"></a>

The `PutItem` method uploads an item to a table. If the item exists, it replaces the entire item.

**Note**  
Instead of replacing the entire item, if you want to update only specific attributes, you can use the `UpdateItem` method. For more information, see [Updating an item](#UpdateItemLowLevelDotNet).

The following are the steps to upload an item using the low-level .NET SDK API:

1. Create an instance of the `AmazonDynamoDBClient` class.

1. Provide the required parameters by creating an instance of the `PutItemRequest` class.

   To put an item, you must provide the table name and the item. 

1. Run the `PutItem` method by providing the `PutItemRequest` object that you created in the preceding step.

The following C# example demonstrates the preceding steps. The example uploads an item to the `ProductCatalog` table.

**Example**  

```
AmazonDynamoDBClient client = new AmazonDynamoDBClient();
string tableName = "ProductCatalog";

var request = new PutItemRequest
{
   TableName = tableName,
   Item = new Dictionary<string, AttributeValue>()
      {
          { "Id", new AttributeValue { N = "201" }},
          { "Title", new AttributeValue { S = "Book 201 Title" }},
          { "ISBN", new AttributeValue { S = "11-11-11-11" }},
          { "Price", new AttributeValue { S = "20.00" }},
          {
            "Authors",
            new AttributeValue
            { SS = new List<string>{"Author1", "Author2"}   }
          }
      }
};
client.PutItem(request);
```

In the preceding example, you upload a book item that has the `Id`, `Title`, `ISBN`, `Price`, and `Authors` attributes. Note that `Id` is a numeric type attribute, and all other attributes are of the string type. `Authors` is a string set.

### Specifying optional parameters
<a name="PutItemLowLevelAPIDotNetOptions"></a>

You can also provide optional parameters using the `PutItemRequest` object as shown in the following C# example. The example specifies the following optional parameters:
+ `ExpressionAttributeNames`, `ExpressionAttributeValues`, and `ConditionExpression` specify that the item can be replaced only if the existing item has the ISBN attribute with a specific value.
+ `ReturnValues` parameter to request the old item in the response.

**Example**  

```
var request = new PutItemRequest
 {
   TableName = tableName,
   Item = new Dictionary<string, AttributeValue>()
               {
                   { "Id", new AttributeValue { N = "104" }},
                   { "Title", new AttributeValue { S = "Book 104 Title" }},
                   { "ISBN", new AttributeValue { S = "444-4444444444" }},
                   { "Authors",
                     new AttributeValue { SS = new List<string>{"Author3"}}}
               },
    // Optional parameters.
    ExpressionAttributeNames = new Dictionary<string,string>()
    {
        {"#I", "ISBN"}
    },
    ExpressionAttributeValues = new Dictionary<string, AttributeValue>()
    {
        {":isbn",new AttributeValue {S = "444-4444444444"}}
    },
    ConditionExpression = "#I = :isbn"

};
var response = client.PutItem(request);
```

For more information, see [PutItem](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_PutItem.html).

## Getting an item
<a name="GetItemLowLevelDotNET"></a>

The `GetItem` method retrieves an item.

**Note**  
To retrieve multiple items, you can use the `BatchGetItem` method. For more information, see [Batch get: Getting multiple items](#BatchGetLowLevelDotNet).

The following are the steps to retrieve an existing item using the low-level AWS SDK for .NET API.

1. Create an instance of the `AmazonDynamoDBClient` class.

1. Provide the required parameters by creating an instance of the `GetItemRequest` class.

   To get an item, you must provide the table name and primary key of the item. 

1. Run the `GetItem` method by providing the `GetItemRequest` object that you created in the preceding step.

The following C# example demonstrates the preceding steps. The example retrieves an item from the `ProductCatalog` table.

```
AmazonDynamoDBClient client = new AmazonDynamoDBClient();
string tableName = "ProductCatalog";

var request = new GetItemRequest
 {
   TableName = tableName,
   Key = new Dictionary<string,AttributeValue>() { { "Id", new AttributeValue { N = "202" } } },
 };
 var response = client.GetItem(request);

// Check the response.
var attributeMap = response.Item; // Attribute list in the response.
```

### Specifying optional parameters
<a name="GetItemLowLevelDotNETOptions"></a>

You can also provide optional parameters using the `GetItemRequest` object, as shown in the following C# example. The example specifies the following optional parameters:
+ `ProjectionExpression` parameter to specify the attributes to retrieve.
+ `ConsistentRead` parameter to perform a strongly consistent read. To learn more about read consistency, see [DynamoDB read consistency](HowItWorks.ReadConsistency.md).

**Example**  

```
AmazonDynamoDBClient client = new AmazonDynamoDBClient();
string tableName = "ProductCatalog";

var request = new GetItemRequest
 {
   TableName = tableName,
   Key = new Dictionary<string,AttributeValue>() { { "Id", new AttributeValue { N = "202" } } },
   // Optional parameters.
   ProjectionExpression = "Id, ISBN, Title, Authors",
   ConsistentRead = true
 };

 var response = client.GetItem(request);

// Check the response.
var attributeMap = response.Item;
```

For more information, see [GetItem](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_GetItem.html).

## Updating an item
<a name="UpdateItemLowLevelDotNet"></a>

The `UpdateItem` method updates an existing item if it is present. You can use the `UpdateItem` operation to update existing attribute values, add new attributes, or delete attributes from the existing collection. If the item that has the specified primary key is not found, it adds a new item.

The `UpdateItem` operation uses the following guidelines:
+ If the item does not exist, `UpdateItem` adds a new item using the primary key that is specified in the input.
+ If the item exists, `UpdateItem` applies the updates as follows:
  + Replaces the existing attribute values with the values in the update.
  + If an attribute that you provide in the input does not exist, it adds a new attribute to the item.
  + If an input attribute value is null, it deletes the attribute, if it is present. 
  + If you use `ADD` for the `Action`, you can add values to an existing set (string or number set), or mathematically add (use a positive number) or subtract (use a negative number) from the existing numeric attribute value.

**Note**  
The `PutItem` operation also can perform an update. For more information, see [Putting an item](#PutItemLowLevelAPIDotNet). For example, if you call `PutItem` to upload an item and the primary key exists, the `PutItem` operation replaces the entire item. If there are attributes in the existing item and those attributes are not specified in the input, the `PutItem` operation deletes those attributes. However, `UpdateItem` updates only the specified input attributes. Any other existing attributes of that item remain unchanged. 

The following are the steps to update an existing item using the low-level .NET SDK API:

1. Create an instance of the `AmazonDynamoDBClient` class.

1. Provide the required parameters by creating an instance of the `UpdateItemRequest` class.

   This is the request object in which you describe all the updates, such as add attributes, update existing attributes, or delete attributes. To delete an existing attribute, specify the attribute name with null value. 

1. Run the `UpdateItem` method by providing the `UpdateItemRequest` object that you created in the preceding step. 

The following C# code example demonstrates the preceding steps. The example updates a book item in the `ProductCatalog` table. It adds two new authors to the `Authors` set, adds a new attribute, deletes the existing `ISBN` attribute, and reduces the price by one.

**Example**  

```
AmazonDynamoDBClient client = new AmazonDynamoDBClient();
string tableName = "ProductCatalog";

var request = new UpdateItemRequest
{
    TableName = tableName,
    Key = new Dictionary<string,AttributeValue>() { { "Id", new AttributeValue { N = "202" } } },
    ExpressionAttributeNames = new Dictionary<string,string>()
    {
        {"#A", "Authors"},
        {"#P", "Price"},
        {"#NA", "NewAttribute"},
        {"#I", "ISBN"}
    },
    ExpressionAttributeValues = new Dictionary<string, AttributeValue>()
    {
        {":auth",new AttributeValue { SS = {"Author YY","Author ZZ"}}},
        {":p",new AttributeValue {N = "1"}},
        {":newattr",new AttributeValue {S = "someValue"}},
    },

    // This expression does the following:
    // 1) Adds two new authors to the Authors set
    // 2) Reduces the price
    // 3) Adds a new attribute to the item
    // 4) Removes the ISBN attribute from the item
    UpdateExpression = "ADD #A :auth SET #P = #P - :p, #NA = :newattr REMOVE #I"
};
var response = client.UpdateItem(request);
```

### Specifying optional parameters
<a name="UpdateItemLowLevelDotNETOptions"></a>

You can also provide optional parameters using the `UpdateItemRequest` object, as shown in the following C# example. It specifies the following optional parameters:
+ `ExpressionAttributeValues` and `ConditionExpression` to specify that the price can be updated only if the existing price is 20.00.
+ `ReturnValues` parameter to request the updated item in the response. 

**Example**  

```
AmazonDynamoDBClient client = new AmazonDynamoDBClient();
string tableName = "ProductCatalog";

var request = new UpdateItemRequest
{
    Key = new Dictionary<string,AttributeValue>() { { "Id", new AttributeValue { N = "202" } } },

    // Update price only if the current price is 20.00.
    ExpressionAttributeNames = new Dictionary<string,string>()
    {
        {"#P", "Price"}
    },
    ExpressionAttributeValues = new Dictionary<string, AttributeValue>()
    {
        {":newprice",new AttributeValue {N = "22"}},
        {":currprice",new AttributeValue {N = "20"}}
    },
    UpdateExpression = "SET #P = :newprice",
    ConditionExpression = "#P = :currprice",
    TableName = tableName,
    ReturnValues = "ALL_NEW" // Return all the attributes of the updated item.
};

var response = client.UpdateItem(request);
```

For more information, see [UpdateItem](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateItem.html). 

## Atomic counter
<a name="AtomicCounterLowLevelDotNet"></a>

You can use `UpdateItem` to implement an atomic counter, where you increment or decrement the value of an existing attribute without interfering with other write requests. To update an atomic counter, use `UpdateItem` with an attribute of type `Number` and increment it in the `UpdateExpression`. You can use either `SET` arithmetic, as shown in the following example, or the `ADD` action (for example, `ADD #Q :incr`).

The following example demonstrates this, incrementing the `Quantity` attribute by one.

```
AmazonDynamoDBClient client = new AmazonDynamoDBClient();
string tableName = "ProductCatalog";

var request = new UpdateItemRequest
{
    Key = new Dictionary<string, AttributeValue>() { { "Id", new AttributeValue { N = "121" } } },
    ExpressionAttributeNames = new Dictionary<string, string>()
    {
        {"#Q", "Quantity"}
    },
    ExpressionAttributeValues = new Dictionary<string, AttributeValue>()
    {
        {":incr",new AttributeValue {N = "1"}}
    },
    UpdateExpression = "SET #Q = #Q + :incr",
    TableName = tableName
};

var response = client.UpdateItem(request);
```

## Deleting an item
<a name="DeleteMidLevelDotNet"></a>

The `DeleteItem` method deletes an item from a table. 

The following are the steps to delete an item using the low-level .NET SDK API. 

1. Create an instance of the `AmazonDynamoDBClient` class.

1. Provide the required parameters by creating an instance of the `DeleteItemRequest` class.

    To delete an item, you must provide the table name and the item's primary key. 

1. Run the `DeleteItem` method by providing the `DeleteItemRequest` object that you created in the preceding step. 

**Example**  

```
AmazonDynamoDBClient client = new AmazonDynamoDBClient();
string tableName = "ProductCatalog";

var request = new DeleteItemRequest
{
    TableName = tableName,
    Key = new Dictionary<string,AttributeValue>() { { "Id", new AttributeValue { N = "201" } } },
};

var response = client.DeleteItem(request);
```

### Specifying optional parameters
<a name="DeleteItemLowLevelDotNETOptions"></a>

You can also provide optional parameters using the `DeleteItemRequest` object, as shown in the following C# code example. It specifies the following optional parameters:
+ `ExpressionAttributeValues` and `ConditionExpression` to specify that the book item can be deleted only if it is no longer in publication (its `InPublication` attribute value is `false`). 
+ The `ReturnValues` parameter to request the deleted item in the response.

**Example**  

```
var request = new DeleteItemRequest
{
    TableName = tableName,
    Key = new Dictionary<string,AttributeValue>() { { "Id", new AttributeValue { N = "201" } } },

    // Optional parameters.
    ReturnValues = "ALL_OLD",
    ExpressionAttributeNames = new Dictionary<string, string>()
    {
        {"#IP", "InPublication"}
    },
    ExpressionAttributeValues = new Dictionary<string, AttributeValue>()
    {
        {":inpub",new AttributeValue {BOOL = false}}
    },
    ConditionExpression = "#IP = :inpub"
};

var response = client.DeleteItem(request);
```

For more information, see [DeleteItem](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DeleteItem.html).

## Batch write: Putting and deleting multiple items
<a name="BatchWriteLowLevelDotNet"></a>

*Batch write* refers to putting and deleting multiple items in a batch. The `BatchWriteItem` method enables you to put and delete multiple items in one or more tables in a single call. The following are the steps to put and delete multiple items using the low-level .NET SDK API.

1. Create an instance of the `AmazonDynamoDBClient` class.

1. Describe all the put and delete operations by creating an instance of the `BatchWriteItemRequest` class.

1. Run the `BatchWriteItem` method by providing the `BatchWriteItemRequest` object that you created in the preceding step.

1. Process the response. You should check whether any unprocessed request items were returned in the response. This can happen if you reach the provisioned throughput quota or encounter some other transient error. Also, DynamoDB limits the request size and the number of operations that you can specify in a single request. If you exceed these limits, DynamoDB rejects the request. For more information, see [BatchWriteItem](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchWriteItem.html). 

The following C# code example demonstrates the preceding steps. The example creates a `BatchWriteItemRequest` to perform the following write operations:
+ Put an item in the `Forum` table.
+ Put one item in, and delete one item from, the `Thread` table.

The code runs `BatchWriteItem` to perform a batch operation.

```
AmazonDynamoDBClient client = new AmazonDynamoDBClient();

string table1Name = "Forum";
string table2Name = "Thread";

var request = new BatchWriteItemRequest
 {
   RequestItems = new Dictionary<string, List<WriteRequest>>
    {
      {
        table1Name, new List<WriteRequest>
        {
          new WriteRequest
          {
             PutRequest = new PutRequest
             {
                Item = new Dictionary<string,AttributeValue>
                {
                  { "Name", new AttributeValue { S = "Amazon S3 forum" } },
                  { "Threads", new AttributeValue { N = "0" }}
                }
             }
          }
        }
      } ,
      {
        table2Name, new List<WriteRequest>
        {
          new WriteRequest
          {
            PutRequest = new PutRequest
            {
               Item = new Dictionary<string,AttributeValue>
               {
                 { "ForumName", new AttributeValue { S = "Amazon S3 forum" } },
                 { "Subject", new AttributeValue { S = "My sample question" } },
                 { "Message", new AttributeValue { S = "Message Text." } },
                 { "KeywordTags", new AttributeValue { SS = new List<string> { "Amazon S3", "Bucket" }  } }
               }
            }
          },
          new WriteRequest
          {
             DeleteRequest = new DeleteRequest
             {
                Key = new Dictionary<string,AttributeValue>()
                {
                   { "ForumName", new AttributeValue { S = "Some forum name" } },
                   { "Subject", new AttributeValue { S = "Some subject" } }
                }
             }
          }
        }
      }
    }
 };
var response = client.BatchWriteItem(request);
```
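
As noted in step 4, you should check the response for unprocessed request items and resend them. A minimal sketch, assuming the `request` and `response` variables from the example above (production code would typically add exponential backoff between retries):

```
// Resend any items that DynamoDB could not process, for example because
// the table's provisioned throughput was exceeded.
while (response.UnprocessedItems.Count > 0)
{
    request.RequestItems = response.UnprocessedItems;
    response = client.BatchWriteItem(request);
}
```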

For a working example, see [Example: Batch operations using the AWS SDK for .NET low-level API](batch-operation-lowlevel-dotnet.md). 

## Batch get: Getting multiple items
<a name="BatchGetLowLevelDotNet"></a>

The `BatchGetItem` method enables you to retrieve multiple items from one or more tables. 

**Note**  
To retrieve a single item, you can use the `GetItem` method. 

The following are the steps to retrieve multiple items using the low-level AWS SDK for .NET API.

1. Create an instance of the `AmazonDynamoDBClient` class.

1. Provide the required parameters by creating an instance of the `BatchGetItemRequest` class.

   To retrieve multiple items, the table name and a list of primary key values are required. 

1. Run the `BatchGetItem` method by providing the `BatchGetItemRequest` object that you created in the preceding step.

1. Process the response. You should check whether any unprocessed keys were returned, which can happen if you reach the provisioned throughput quota or encounter some other transient error.

The following C# code example demonstrates the preceding steps. The example retrieves items from two tables, `Forum` and `Thread`. The request specifies two items in the `Forum` table and three items in the `Thread` table. The response includes items from both tables. The code shows how you can process the response.



```
AmazonDynamoDBClient client = new AmazonDynamoDBClient();

string table1Name = "Forum";
string table2Name = "Thread";

var request = new BatchGetItemRequest
{
  RequestItems = new Dictionary<string, KeysAndAttributes>()
  {
    { table1Name,
      new KeysAndAttributes
      {
        Keys = new List<Dictionary<string, AttributeValue>>()
        {
          new Dictionary<string, AttributeValue>()
          {
            { "Name", new AttributeValue { S = "DynamoDB" } }
          },
          new Dictionary<string, AttributeValue>()
          {
            { "Name", new AttributeValue { S = "Amazon S3" } }
          }
        }
      }
    },
    {
      table2Name,
      new KeysAndAttributes
      {
        Keys = new List<Dictionary<string, AttributeValue>>()
        {
          new Dictionary<string, AttributeValue>()
          {
            { "ForumName", new AttributeValue { S = "DynamoDB" } },
            { "Subject", new AttributeValue { S = "DynamoDB Thread 1" } }
          },
          new Dictionary<string, AttributeValue>()
          {
            { "ForumName", new AttributeValue { S = "DynamoDB" } },
            { "Subject", new AttributeValue { S = "DynamoDB Thread 2" } }
          },
          new Dictionary<string, AttributeValue>()
          {
            { "ForumName", new AttributeValue { S = "Amazon S3" } },
            { "Subject", new AttributeValue { S = "Amazon S3 Thread 1" } }
          }
        }
      }
    }
  }
};

var response = client.BatchGetItem(request);

// Check the response.
var responses = response.Responses; // The attribute lists in the response.

var table1Results = responses[table1Name];
Console.WriteLine("Items in table {0}:", table1Name);
foreach (var item1 in table1Results)
{
  PrintItem(item1);
}

var table2Results = responses[table2Name];
Console.WriteLine("Items in table {0}:", table2Name);
foreach (var item2 in table2Results)
{
  PrintItem(item2);
}

// Any unprocessed keys? This can happen if you exceed the provisioned throughput or encounter some other error.
Dictionary<string, KeysAndAttributes> unprocessedKeys = response.UnprocessedKeys;
foreach (KeyValuePair<string, KeysAndAttributes> pair in unprocessedKeys)
{
    Console.WriteLine("{0} - {1} unprocessed keys", pair.Key, pair.Value.Keys.Count);
}
```



### Specifying optional parameters
<a name="BatchGetItemLowLevelDotNETOptions"></a>

You can also provide optional parameters using the `BatchGetItemRequest` object, as shown in the following C# code example. The example retrieves two items from the `Forum` table. It specifies the following optional parameter:
+ The `ProjectionExpression` parameter to specify the attributes to retrieve.

**Example**  

```
AmazonDynamoDBClient client = new AmazonDynamoDBClient();

string table1Name = "Forum";

var request = new BatchGetItemRequest
{
  RequestItems = new Dictionary<string, KeysAndAttributes>()
  {
    { table1Name,
      new KeysAndAttributes
      {
        Keys = new List<Dictionary<string, AttributeValue>>()
        {
          new Dictionary<string, AttributeValue>()
          {
            { "Name", new AttributeValue { S = "DynamoDB" } }
          },
          new Dictionary<string, AttributeValue>()
          {
            { "Name", new AttributeValue { S = "Amazon S3" } }
          }
        },
        // Optional - name of an attribute to retrieve.
        ProjectionExpression = "Title"
      }
    }
  }
};

var response = client.BatchGetItem(request);
```

For more information, see [BatchGetItem](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchGetItem.html). 

# Example: CRUD operations using the AWS SDK for .NET low-level API
<a name="LowLevelDotNetItemsExample"></a>

The following C# code example illustrates CRUD operations on an Amazon DynamoDB item. The example adds an item to the `ProductCatalog` table, retrieves it, performs various updates, and finally deletes the item. If you haven't created this table, you can also create it programmatically. For more information, see [Creating example tables and uploading data using the AWS SDK for .NET](AppendixSampleDataCodeDotNET.md).

For step-by-step instructions for testing the following sample, see [.NET code examples](CodeSamples.DotNet.md). 

**Example**  

```
using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;
using Amazon.Runtime;
using Amazon.SecurityToken;

namespace com.amazonaws.codesamples
{
    class LowLevelItemCRUDExample
    {
        private static string tableName = "ProductCatalog";
        private static AmazonDynamoDBClient client = new AmazonDynamoDBClient();

        static void Main(string[] args)
        {
            try
            {
                CreateItem();
                RetrieveItem();

                // Perform various updates.
                UpdateMultipleAttributes();
                UpdateExistingAttributeConditionally();

                // Delete item.
                DeleteItem();
                Console.WriteLine("To continue, press Enter");
                Console.ReadLine();
            }
            catch (Exception e)
            {
                Console.WriteLine(e.Message);
                Console.WriteLine("To continue, press Enter");
                Console.ReadLine();
            }
        }

        private static void CreateItem()
        {
            var request = new PutItemRequest
            {
                TableName = tableName,
                Item = new Dictionary<string, AttributeValue>()
            {
                { "Id", new AttributeValue {
                      N = "1000"
                  }},
                { "Title", new AttributeValue {
                      S = "Book 201 Title"
                  }},
                { "ISBN", new AttributeValue {
                      S = "11-11-11-11"
                  }},
                { "Authors", new AttributeValue {
                      SS = new List<string>{"Author1", "Author2" }
                  }},
                { "Price", new AttributeValue {
                      N = "20.00"
                  }},
                { "Dimensions", new AttributeValue {
                      S = "8.5x11.0x.75"
                  }},
                { "InPublication", new AttributeValue {
                      BOOL = false
                  } }
            }
            };
            client.PutItem(request);
        }

        private static void RetrieveItem()
        {
            var request = new GetItemRequest
            {
                TableName = tableName,
                Key = new Dictionary<string, AttributeValue>()
            {
                { "Id", new AttributeValue {
                      N = "1000"
                  } }
            },
                ProjectionExpression = "Id, ISBN, Title, Authors",
                ConsistentRead = true
            };
            var response = client.GetItem(request);

            // Check the response.
            var attributeList = response.Item; // attribute list in the response.
            Console.WriteLine("\nPrinting item after retrieving it ............");
            PrintItem(attributeList);
        }

        private static void UpdateMultipleAttributes()
        {
            var request = new UpdateItemRequest
            {
                Key = new Dictionary<string, AttributeValue>()
            {
                { "Id", new AttributeValue {
                      N = "1000"
                  } }
            },
                // Perform the following updates:
                // 1) Add two new authors to the list.
                // 2) Set a new attribute.
                // 3) Remove the ISBN attribute.
                ExpressionAttributeNames = new Dictionary<string, string>()
            {
                {"#A","Authors"},
                {"#NA","NewAttribute"},
                {"#I","ISBN"}
            },
                ExpressionAttributeValues = new Dictionary<string, AttributeValue>()
            {
                {":auth",new AttributeValue {
                     SS = {"Author YY", "Author ZZ"}
                 }},
                {":new",new AttributeValue {
                     S = "New Value"
                 }}
            },

                UpdateExpression = "ADD #A :auth SET #NA = :new REMOVE #I",

                TableName = tableName,
                ReturnValues = "ALL_NEW" // Return all attributes of the updated item.
            };
            var response = client.UpdateItem(request);

            // Check the response.
            var attributeList = response.Attributes; // attribute list in the response.
                                                     // print attributeList.
            Console.WriteLine("\nPrinting item after multiple attribute update ............");
            PrintItem(attributeList);
        }

        private static void UpdateExistingAttributeConditionally()
        {
            var request = new UpdateItemRequest
            {
                Key = new Dictionary<string, AttributeValue>()
            {
                { "Id", new AttributeValue {
                      N = "1000"
                  } }
            },
                ExpressionAttributeNames = new Dictionary<string, string>()
            {
                {"#P", "Price"}
            },
                ExpressionAttributeValues = new Dictionary<string, AttributeValue>()
            {
                {":newprice",new AttributeValue {
                     N = "22.00"
                 }},
                {":currprice",new AttributeValue {
                     N = "20.00"
                 }}
            },
                // This updates price only if current price is 20.00.
                UpdateExpression = "SET #P = :newprice",
                ConditionExpression = "#P = :currprice",

                TableName = tableName,
                ReturnValues = "ALL_NEW" // Return all attributes of the updated item.
            };
            var response = client.UpdateItem(request);

            // Check the response.
            var attributeList = response.Attributes; // attribute list in the response.
            Console.WriteLine("\nPrinting item after updating price value conditionally ............");
            PrintItem(attributeList);
        }

        private static void DeleteItem()
        {
            var request = new DeleteItemRequest
            {
                TableName = tableName,
                Key = new Dictionary<string, AttributeValue>()
            {
                { "Id", new AttributeValue {
                      N = "1000"
                  } }
            },

                // Return the entire item as it appeared before the update.
                ReturnValues = "ALL_OLD",
                ExpressionAttributeNames = new Dictionary<string, string>()
            {
                {"#IP", "InPublication"}
            },
                ExpressionAttributeValues = new Dictionary<string, AttributeValue>()
            {
                {":inpub",new AttributeValue {
                     BOOL = false
                 }}
            },
                ConditionExpression = "#IP = :inpub"
            };

            var response = client.DeleteItem(request);

            // Check the response.
            var attributeList = response.Attributes; // Attribute list in the response.
                                                     // Print item.
            Console.WriteLine("\nPrinting item that was just deleted ............");
            PrintItem(attributeList);
        }

        private static void PrintItem(Dictionary<string, AttributeValue> attributeList)
        {
            foreach (KeyValuePair<string, AttributeValue> kvp in attributeList)
            {
                string attributeName = kvp.Key;
                AttributeValue value = kvp.Value;

                Console.WriteLine(
                    attributeName + " " +
                    (value.S == null ? "" : "S=[" + value.S + "]") +
                    (value.N == null ? "" : "N=[" + value.N + "]") +
                    (value.SS == null ? "" : "SS=[" + string.Join(",", value.SS.ToArray()) + "]") +
                    (value.NS == null ? "" : "NS=[" + string.Join(",", value.NS.ToArray()) + "]")
                    );
            }
            Console.WriteLine("************************************************");
        }
    }
}
```

# Example: Batch operations using the AWS SDK for .NET low-level API
<a name="batch-operation-lowlevel-dotnet"></a>

**Topics**
+ [Example: Batch write operation using the AWS SDK for .NET low-level API](#batch-write-low-level-dotnet)
+ [Example: Batch get operation using the AWS SDK for .NET low-level API](#LowLevelDotNetBatchGet)

This section provides examples of batch operations, *batch write* and *batch get*, that Amazon DynamoDB supports.

## Example: Batch write operation using the AWS SDK for .NET low-level API
<a name="batch-write-low-level-dotnet"></a>

The following C# code example uses the `BatchWriteItem` method to perform the following put and delete operations:
+ Put one item in the `Forum` table.
+ Put one item in, and delete one item from, the `Thread` table. 

You can specify any number of put and delete requests against one or more tables when creating your batch write request. However, `BatchWriteItem` limits the size of a batch write request and the number of put and delete operations in a single batch. For more information, see [BatchWriteItem](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchWriteItem.html). If your request exceeds these limits, it is rejected. If your table does not have sufficient provisioned throughput to serve the request, the unprocessed request items are returned in the response. 

The following example checks the response to see whether it has any unprocessed request items. If it does, it loops back and resends the `BatchWriteItem` request with only the unprocessed items. You can also create these sample tables and upload sample data programmatically. For more information, see [Creating example tables and uploading data using the AWS SDK for .NET](AppendixSampleDataCodeDotNET.md).

For step-by-step instructions for testing the following sample, see [.NET code examples](CodeSamples.DotNet.md). 

**Example**  

```
using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;
using Amazon.Runtime;

namespace com.amazonaws.codesamples
{
    class LowLevelBatchWrite
    {
        private static string table1Name = "Forum";
        private static string table2Name = "Thread";
        private static AmazonDynamoDBClient client = new AmazonDynamoDBClient();

        static void Main(string[] args)
        {
            try
            {
                TestBatchWrite();
            }
            catch (AmazonServiceException e) { Console.WriteLine(e.Message); }
            catch (Exception e) { Console.WriteLine(e.Message); }

            Console.WriteLine("To continue, press Enter");
            Console.ReadLine();
        }

        private static void TestBatchWrite()
        {
            var request = new BatchWriteItemRequest
            {
                ReturnConsumedCapacity = "TOTAL",
                RequestItems = new Dictionary<string, List<WriteRequest>>
            {
                {
                    table1Name, new List<WriteRequest>
                    {
                        new WriteRequest
                        {
                            PutRequest = new PutRequest
                            {
                                Item = new Dictionary<string, AttributeValue>
                                {
                                    { "Name", new AttributeValue {
                                          S = "S3 forum"
                                      } },
                                    { "Threads", new AttributeValue {
                                          N = "0"
                                      }}
                                }
                            }
                        }
                    }
                },
                {
                    table2Name, new List<WriteRequest>
                    {
                        new WriteRequest
                        {
                            PutRequest = new PutRequest
                            {
                                Item = new Dictionary<string, AttributeValue>
                                {
                                    { "ForumName", new AttributeValue {
                                          S = "S3 forum"
                                      } },
                                    { "Subject", new AttributeValue {
                                          S = "My sample question"
                                      } },
                                    { "Message", new AttributeValue {
                                          S = "Message Text."
                                      } },
                                    { "KeywordTags", new AttributeValue {
                                          SS = new List<string> { "S3", "Bucket" }
                                      } }
                                }
                            }
                        },
                        new WriteRequest
                        {
                            // For the operation to delete an item, if you provide a primary key value
                            // that does not exist in the table, there is no error, it is just a no-op.
                            DeleteRequest = new DeleteRequest
                            {
                                Key = new Dictionary<string, AttributeValue>()
                                {
                                    { "ForumName",  new AttributeValue {
                                          S = "Some partition key value"
                                      } },
                                    { "Subject", new AttributeValue {
                                          S = "Some sort key value"
                                      } }
                                }
                            }
                        }
                    }
                }
            }
            };

            CallBatchWriteTillCompletion(request);
        }

        private static void CallBatchWriteTillCompletion(BatchWriteItemRequest request)
        {
            BatchWriteItemResponse response;

            int callCount = 0;
            do
            {
                Console.WriteLine("Making request");
                response = client.BatchWriteItem(request);
                callCount++;

                // Check the response.

                var tableConsumedCapacities = response.ConsumedCapacity;
                var unprocessed = response.UnprocessedItems;

                Console.WriteLine("Per-table consumed capacity");
                foreach (var tableConsumedCapacity in tableConsumedCapacities)
                {
                    Console.WriteLine("{0} - {1}", tableConsumedCapacity.TableName, tableConsumedCapacity.CapacityUnits);
                }

                Console.WriteLine("Unprocessed");
                foreach (var unp in unprocessed)
                {
                    Console.WriteLine("{0} - {1}", unp.Key, unp.Value.Count);
                }
                Console.WriteLine();

                // For the next iteration, the request will have unprocessed items.
                request.RequestItems = unprocessed;
            } while (response.UnprocessedItems.Count > 0);

            Console.WriteLine("Total # of batch write API calls made: {0}", callCount);
        }
    }
}
```

## Example: Batch get operation using the AWS SDK for .NET low-level API
<a name="LowLevelDotNetBatchGet"></a>

The following C# code example uses the `BatchGetItem` method to retrieve multiple items from the `Forum` and `Thread` tables in Amazon DynamoDB. The `BatchGetItemRequest` specifies the table names and a list of primary keys for each table. The example processes the response by printing the retrieved items. 

For step-by-step instructions for testing the following sample, see [.NET code examples](CodeSamples.DotNet.md). 

**Example**  

```
using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;
using Amazon.Runtime;

namespace com.amazonaws.codesamples
{
    class LowLevelBatchGet
    {
        private static string table1Name = "Forum";
        private static string table2Name = "Thread";
        private static AmazonDynamoDBClient client = new AmazonDynamoDBClient();

        static void Main(string[] args)
        {
            try
            {
                RetrieveMultipleItemsBatchGet();

                Console.WriteLine("To continue, press Enter");
                Console.ReadLine();
            }
            catch (AmazonServiceException e) { Console.WriteLine(e.Message); }
            catch (Exception e) { Console.WriteLine(e.Message); }
        }

        private static void RetrieveMultipleItemsBatchGet()
        {
            var request = new BatchGetItemRequest
            {
                RequestItems = new Dictionary<string, KeysAndAttributes>()
            {
                { table1Name,
                  new KeysAndAttributes
                  {
                      Keys = new List<Dictionary<string, AttributeValue> >()
                      {
                          new Dictionary<string, AttributeValue>()
                          {
                              { "Name", new AttributeValue {
                            S = "Amazon DynamoDB"
                        } }
                          },
                          new Dictionary<string, AttributeValue>()
                          {
                              { "Name", new AttributeValue {
                            S = "Amazon S3"
                        } }
                          }
                      }
                  }},
                {
                    table2Name,
                    new KeysAndAttributes
                    {
                        Keys = new List<Dictionary<string, AttributeValue> >()
                        {
                            new Dictionary<string, AttributeValue>()
                            {
                                { "ForumName", new AttributeValue {
                                      S = "Amazon DynamoDB"
                                  } },
                                { "Subject", new AttributeValue {
                                      S = "DynamoDB Thread 1"
                                  } }
                            },
                            new Dictionary<string, AttributeValue>()
                            {
                                { "ForumName", new AttributeValue {
                                      S = "Amazon DynamoDB"
                                  } },
                                { "Subject", new AttributeValue {
                                      S = "DynamoDB Thread 2"
                                  } }
                            },
                            new Dictionary<string, AttributeValue>()
                            {
                                { "ForumName", new AttributeValue {
                                      S = "Amazon S3"
                                  } },
                                { "Subject", new AttributeValue {
                                      S = "S3 Thread 1"
                                  } }
                            }
                        }
                    }
                }
            }
            };

            BatchGetItemResponse response;
            do
            {
                Console.WriteLine("Making request");
                response = client.BatchGetItem(request);

                // Check the response.
                var responses = response.Responses; // Attribute list in the response.

                foreach (var tableResponse in responses)
                {
                    var tableResults = tableResponse.Value;
                    Console.WriteLine("Items retrieved from table {0}", tableResponse.Key);
                    foreach (var item1 in tableResults)
                    {
                        PrintItem(item1);
                    }
                }

                // Any unprocessed keys? This can happen if you exceed the provisioned throughput or encounter some other error.
                Dictionary<string, KeysAndAttributes> unprocessedKeys = response.UnprocessedKeys;
                foreach (var unprocessedTableKeys in unprocessedKeys)
                {
                    // Print table name.
                    Console.WriteLine(unprocessedTableKeys.Key);
                    // Print unprocessed primary keys.
                    foreach (var key in unprocessedTableKeys.Value.Keys)
                    {
                        PrintItem(key);
                    }
                }

                request.RequestItems = unprocessedKeys;
            } while (response.UnprocessedKeys.Count > 0);
        }

        private static void PrintItem(Dictionary<string, AttributeValue> attributeList)
        {
            foreach (KeyValuePair<string, AttributeValue> kvp in attributeList)
            {
                string attributeName = kvp.Key;
                AttributeValue value = kvp.Value;

                Console.WriteLine(
                    attributeName + " " +
                    (value.S == null ? "" : "S=[" + value.S + "]") +
                    (value.N == null ? "" : "N=[" + value.N + "]") +
                    (value.SS == null ? "" : "SS=[" + string.Join(",", value.SS.ToArray()) + "]") +
                    (value.NS == null ? "" : "NS=[" + string.Join(",", value.NS.ToArray()) + "]")
                    );
            }
            Console.WriteLine("************************************************");
        }
    }
}
```

# Example: Handling binary type attributes using the AWS SDK for .NET low-level API
<a name="LowLevelDotNetBinaryTypeExample"></a>

The following C# code example illustrates the handling of binary type attributes. The example adds an item to the `Reply` table. The item includes a binary type attribute (`ExtendedMessage`) that stores compressed data. The example then retrieves the item and prints all the attribute values. For illustration, the example uses the `GZipStream` class to compress a sample stream, assigns it to the `ExtendedMessage` attribute, and decompresses it when printing the attribute value.

For step-by-step instructions for testing the following example, see [.NET code examples](CodeSamples.DotNet.md). 

**Example**  

```
using System;
using System.Collections.Generic;
using System.IO;
using System.IO.Compression;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;
using Amazon.Runtime;

namespace com.amazonaws.codesamples
{
    class LowLevelItemBinaryExample
    {
        private static string tableName = "Reply";
        private static AmazonDynamoDBClient client = new AmazonDynamoDBClient();

        static void Main(string[] args)
        {
            // Reply table primary key.
            string replyIdPartitionKey = "Amazon DynamoDB#DynamoDB Thread 1";
            string replyDateTimeSortKey = Convert.ToString(DateTime.UtcNow);

            try
            {
                CreateItem(replyIdPartitionKey, replyDateTimeSortKey);
                RetrieveItem(replyIdPartitionKey, replyDateTimeSortKey);
                // Delete item.
                DeleteItem(replyIdPartitionKey, replyDateTimeSortKey);
                Console.WriteLine("To continue, press Enter");
                Console.ReadLine();
            }
            catch (AmazonDynamoDBException e) { Console.WriteLine(e.Message); }
            catch (AmazonServiceException e) { Console.WriteLine(e.Message); }
            catch (Exception e) { Console.WriteLine(e.Message); }
        }

        private static void CreateItem(string partitionKey, string sortKey)
        {
            MemoryStream compressedMessage = ToGzipMemoryStream("Some long extended message to compress.");
            var request = new PutItemRequest
            {
                TableName = tableName,
                Item = new Dictionary<string, AttributeValue>()
            {
                { "Id", new AttributeValue {
                      S = partitionKey
                  }},
                { "ReplyDateTime", new AttributeValue {
                      S = sortKey
                  }},
                { "Subject", new AttributeValue {
                      S = "Binary type "
                  }},
                { "Message", new AttributeValue {
                      S = "Some message about the binary type"
                  }},
                { "ExtendedMessage", new AttributeValue {
                      B = compressedMessage
                  }}
            }
            };
            client.PutItem(request);
        }

        private static void RetrieveItem(string partitionKey, string sortKey)
        {
            var request = new GetItemRequest
            {
                TableName = tableName,
                Key = new Dictionary<string, AttributeValue>()
            {
                { "Id", new AttributeValue {
                      S = partitionKey
                  } },
                { "ReplyDateTime", new AttributeValue {
                      S = sortKey
                  } }
            },
                ConsistentRead = true
            };
            var response = client.GetItem(request);

            // Check the response.
            var attributeList = response.Item; // attribute list in the response.
            Console.WriteLine("\nPrinting item after retrieving it ............");

            PrintItem(attributeList);
        }

        private static void DeleteItem(string partitionKey, string sortKey)
        {
            var request = new DeleteItemRequest
            {
                TableName = tableName,
                Key = new Dictionary<string, AttributeValue>()
            {
                { "Id", new AttributeValue {
                      S = partitionKey
                  } },
                { "ReplyDateTime", new AttributeValue {
                      S = sortKey
                  } }
            }
            };
            var response = client.DeleteItem(request);
        }

        private static void PrintItem(Dictionary<string, AttributeValue> attributeList)
        {
            foreach (KeyValuePair<string, AttributeValue> kvp in attributeList)
            {
                string attributeName = kvp.Key;
                AttributeValue value = kvp.Value;

                Console.WriteLine(
                    attributeName + " " +
                    (value.S == null ? "" : "S=[" + value.S + "]") +
                    (value.N == null ? "" : "N=[" + value.N + "]") +
                    (value.SS == null ? "" : "SS=[" + string.Join(",", value.SS.ToArray()) + "]") +
                    (value.NS == null ? "" : "NS=[" + string.Join(",", value.NS.ToArray()) + "]") +
                    (value.B == null ? "" : "B=[" + FromGzipMemoryStream(value.B) + "]")
                    );
            }
            Console.WriteLine("************************************************");
        }

        private static MemoryStream ToGzipMemoryStream(string value)
        {
            MemoryStream output = new MemoryStream();
            using (GZipStream zipStream = new GZipStream(output, CompressionMode.Compress, true))
            using (StreamWriter writer = new StreamWriter(zipStream))
            {
                writer.Write(value);
            }
            return output;
        }

        private static string FromGzipMemoryStream(MemoryStream stream)
        {
            using (GZipStream zipStream = new GZipStream(stream, CompressionMode.Decompress))
            using (StreamReader reader = new StreamReader(zipStream))
            {
                return reader.ReadToEnd();
            }
        }
    }
}
```

# Improving data access with secondary indexes in DynamoDB
<a name="SecondaryIndexes"></a>

Amazon DynamoDB provides fast access to items in a table by specifying primary key values. However, many applications might benefit from having one or more secondary (or alternate) keys available, to allow efficient access to data with attributes other than the primary key. To address this, you can create one or more secondary indexes on a table and issue `Query` or `Scan` requests against these indexes.

A *secondary index* is a data structure that contains a subset of attributes from a table, along with an alternate key to support `Query` operations. You can retrieve data from the index using a `Query`, in much the same way as you use `Query` with a table. A table can have multiple secondary indexes, which give your applications access to many different query patterns.

**Note**  
You can also `Scan` an index, in much the same way as you would `Scan` a table.   
Cross-account access for secondary index scan operations is currently not supported with [resource-based policies](access-control-resource-based.md).

Every secondary index is associated with exactly one table, from which it obtains its data. This is called the *base table* for the index. When you create an index, you define an alternate key for the index (partition key and sort key). You also define the attributes that you want to be *projected*, or copied, from the base table into the index. DynamoDB copies these attributes into the index, along with the primary key attributes from the base table. You can then query or scan the index just as you would query or scan a table. 

Every secondary index is automatically maintained by DynamoDB. When you add, modify, or delete items in the base table, any indexes on that table are also updated to reflect these changes.

DynamoDB supports two types of secondary indexes:
+ **[Global secondary index](GSI.html)** — An index with a partition key and a sort key that can be different from those on the base table. A global secondary index is considered "global" because queries on the index can span all of the data in the base table, across all partitions. A global secondary index is stored in its own partition space away from the base table and scales separately from the base table.
+ **[Local secondary index](LSI.html)** — An index that has the same partition key as the base table, but a different sort key. A local secondary index is "local" in the sense that every partition of a local secondary index is scoped to a base table partition that has the same partition key value.

For a comparison of global secondary indexes and local secondary indexes, see this video.

[![AWS Videos](http://img.youtube.com/vi/BkEu7zBWge8/0.jpg)](http://www.youtube.com/watch?v=BkEu7zBWge8)


**Topics**
+ [Using Global Secondary Indexes in DynamoDB](GSI.md)
+ [Local secondary indexes](LSI.md)

You should consider your application's requirements when you determine which type of index to use. The following table shows the main differences between a global secondary index and a local secondary index.



| Characteristic | Global secondary index | Local secondary index | 
| --- | --- | --- | 
| Key Schema | The primary key of a global secondary index can be either simple (partition key) or composite (partition key and sort key). | The primary key of a local secondary index must be composite (partition key and sort key). | 
| Key Attributes | The index partition key and sort key (if present) can be any base table attributes of type string, number, or binary. | The partition key of the index is the same attribute as the partition key of the base table. The sort key can be any base table attribute of type string, number, or binary. | 
| Size Restrictions Per Partition Key Value | There are no size restrictions for global secondary indexes. | For each partition key value, the total size of all indexed items must be 10 GB or less. | 
| Online Index Operations | Global secondary indexes can be created at the same time that you create a table. You can also add a new global secondary index to an existing table, or delete an existing global secondary index. For more information, see [Managing Global Secondary Indexes in DynamoDB](GSI.OnlineOps.md).  | Local secondary indexes are created at the same time that you create a table. You cannot add a local secondary index to an existing table, nor can you delete any local secondary indexes that currently exist. | 
| Queries and Partitions | A global secondary index lets you query over the entire table, across all partitions.  | A local secondary index lets you query over a single partition, as specified by the partition key value in the query. | 
| Read Consistency | Queries on global secondary indexes support eventual consistency only. | When you query a local secondary index, you can choose either eventual consistency or strong consistency. | 
| Provisioned Throughput Consumption | Every global secondary index has its own provisioned throughput settings for read and write activity. Queries or scans on a global secondary index consume capacity units from the index, not from the base table. The same holds true for global secondary index updates due to table writes. A global secondary index associated with global tables consumes write capacity units.  | Queries or scans on a local secondary index consume read capacity units from the base table. When you write to a table its local secondary indexes are also updated, and these updates consume write capacity units from the base table. A local secondary index associated with global tables consumes replicated write capacity units. | 
| Projected Attributes | With global secondary index queries or scans, you can only request the attributes that are projected into the index. DynamoDB does not fetch any attributes from the table. | If you query or scan a local secondary index, you can request attributes that are not projected in to the index. DynamoDB automatically fetches those attributes from the table. | 

If you want to create more than one table with secondary indexes, you must do so sequentially. For example, you would create the first table and wait for it to become `ACTIVE`, create the next table and wait for it to become `ACTIVE`, and so on. If you try to concurrently create more than one table with a secondary index, DynamoDB returns a `LimitExceededException`.
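The create-and-wait sequence described above can be sketched with a boto3-style client (boto3's `table_exists` waiter polls `DescribeTable` until the table is `ACTIVE`). The stub client below stands in for a real `boto3.client("dynamodb")` so the sequencing can be shown without an AWS account; the table names are illustrative.

```python
def create_tables_sequentially(client, table_specs):
    """Create tables one at a time, waiting for each to become ACTIVE
    before starting the next, to avoid LimitExceededException."""
    for spec in table_specs:
        client.create_table(**spec)
        # boto3 exposes a 'table_exists' waiter that polls DescribeTable
        # until the table reaches ACTIVE status.
        client.get_waiter("table_exists").wait(TableName=spec["TableName"])

# A stub client that records call order, standing in for boto3.client("dynamodb").
class _StubWaiter:
    def __init__(self, log):
        self._log = log
    def wait(self, TableName):
        self._log.append(("wait", TableName))

class _StubClient:
    def __init__(self):
        self.calls = []
    def create_table(self, **spec):
        self.calls.append(("create", spec["TableName"]))
    def get_waiter(self, name):
        return _StubWaiter(self.calls)

client = _StubClient()
create_tables_sequentially(client, [{"TableName": "GameScores"}, {"TableName": "Reply"}])
```

With a real client, each `wait` call blocks until the previous table is `ACTIVE`, so no two index-bearing tables are ever being created concurrently.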

Each secondary index uses the same [table class](HowItWorks.TableClasses.html) and [capacity mode](capacity-mode.md) as the base table it is associated with. For each secondary index, you must specify the following:
+ The type of index to be created – either a global secondary index or a local secondary index.
+ A name for the index. The naming rules for indexes are the same as those for tables, as listed in [Quotas in Amazon DynamoDB](ServiceQuotas.md). The name must be unique for the base table it is associated with, but you can use the same name for indexes that are associated with different base tables.
+ The key schema for the index. Every attribute in the index key schema must be a top-level attribute of type `String`, `Number`, or `Binary`. Other data types, including documents and sets, are not allowed. Other requirements for the key schema depend on the type of index: 
  + For a global secondary index, the partition key can be any scalar attribute of the base table. A sort key is optional, and it too can be any scalar attribute of the base table.
  + For a local secondary index, the partition key must be the same as the base table's partition key, and the sort key must be a non-key base table attribute.
+ Additional attributes, if any, to project from the base table into the index. These attributes are in addition to the table's key attributes, which are automatically projected into every index. You can project attributes of any data type, including scalars, documents, and sets.
+ The provisioned throughput settings for the index, if necessary:
  + For a global secondary index, you must specify read and write capacity unit settings. These provisioned throughput settings are independent of the base table's settings.
  + For a local secondary index, you do not need to specify read and write capacity unit settings. Any read and write operations on a local secondary index draw from the provisioned throughput settings of its base table.

For maximum query flexibility, you can create up to 20 global secondary indexes (default quota) and up to 5 local secondary indexes per table.

To get a detailed listing of secondary indexes on a table, use the `DescribeTable` operation. `DescribeTable` returns the name, storage size, and item counts for every secondary index on the table. These values are not updated in real time, but they are refreshed approximately every six hours.

You can access the data in a secondary index using either the `Query` or `Scan` operation. You must specify the name of the base table and the name of the index that you want to use, the attributes to be returned in the results, and any condition expressions or filters that you want to apply. DynamoDB can return the results in ascending or descending order.

When you delete a table, all of the indexes associated with that table are also deleted.

For best practices, see [Best practices for using secondary indexes in DynamoDB](bp-indexes.md).

# Using Global Secondary Indexes in DynamoDB
<a name="GSI"></a>

Some applications might need to perform many kinds of queries, using a variety of different attributes as query criteria. To support these requirements, you can create one or more *global secondary indexes* and issue `Query` requests against these indexes in Amazon DynamoDB.

**Topics**
+ [Scenario: Using a Global Secondary Index](#GSI.scenario)
+ [Attribute projections](#GSI.Projections)
+ [Multi-attribute key schema](#GSI.MultiAttributeKeys)
+ [Reading data from a Global Secondary Index](#GSI.Reading)
+ [Data synchronization between tables and Global Secondary Indexes](#GSI.Writes)
+ [Table classes with Global Secondary Index](#GSI.tableclasses)
+ [Provisioned throughput considerations for Global Secondary Indexes](#GSI.ThroughputConsiderations)
+ [Storage considerations for Global Secondary Indexes](#GSI.StorageConsiderations)
+ [Design patterns](GSI.DesignPatterns.md)
+ [Managing Global Secondary Indexes in DynamoDB](GSI.OnlineOps.md)
+ [Detecting and correcting index key violations in DynamoDB](GSI.OnlineOps.ViolationDetection.md)
+ [Working with Global Secondary Indexes: Java](GSIJavaDocumentAPI.md)
+ [Working with Global Secondary Indexes: .NET](GSILowLevelDotNet.md)
+ [Working with Global Secondary Indexes in DynamoDB using AWS CLI](GCICli.md)

## Scenario: Using a Global Secondary Index
<a name="GSI.scenario"></a>

To illustrate, consider a table named `GameScores` that tracks users and scores for a mobile gaming application. Each item in `GameScores` is identified by a partition key (`UserId`) and a sort key (`GameTitle`). The following diagram shows how the items in the table would be organized. (Not all of the attributes are shown.)

![\[GameScores table containing a list of user id, title, score, date, and wins/losses.\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/images/GSI_01.png)


Now suppose that you wanted to write a leaderboard application to display top scores for each game. A query that specified the key attributes (`UserId` and `GameTitle`) would be very efficient. However, if the application needed to retrieve data from `GameScores` based on `GameTitle` only, it would need to use a `Scan` operation. As more items are added to the table, scans of all the data would become slow and inefficient. This makes it difficult to answer questions such as the following:
+ What is the top score ever recorded for the game Meteor Blasters?
+ Which user had the highest score for Galaxy Invaders?
+ What was the highest ratio of wins vs. losses?

To speed up queries on non-key attributes, you can create a global secondary index. A global secondary index contains a selection of attributes from the base table, but they are organized by a primary key that is different from that of the table. The index key does not need to have any of the key attributes from the table. It doesn't even need to have the same key schema as the table.

For example, you could create a global secondary index named `GameTitleIndex`, with a partition key of `GameTitle` and a sort key of `TopScore`. The base table's primary key attributes are always projected into an index, so the `UserId` attribute is also present. The following diagram shows what the `GameTitleIndex` index would look like.

![\[GameTitleIndex table containing a list of titles, scores, and user ids.\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/images/GSI_02.png)


Now you can query `GameTitleIndex` and easily obtain the scores for Meteor Blasters. The results are ordered by the sort key values, `TopScore`. If you set the `ScanIndexForward` parameter to false, the results are returned in descending order, so the highest score is returned first.
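The effect of `ScanIndexForward` can be sketched locally: a query on an index returns items ordered by the sort key, and setting the parameter to false reverses that order. The data below is illustrative, not an API call.

```python
# Illustrative GameTitleIndex items for one partition key value (GameTitle).
meteor_blasters = [
    {"GameTitle": "Meteor Blasters", "TopScore": 3800, "UserId": "310"},
    {"GameTitle": "Meteor Blasters", "TopScore": 5842, "UserId": "101"},
    {"GameTitle": "Meteor Blasters", "TopScore": 4200, "UserId": "200"},
]

def query_by_title(items, scan_index_forward=True):
    # DynamoDB returns index items ordered by the sort key (TopScore);
    # ScanIndexForward=False reverses that order.
    return sorted(items, key=lambda i: i["TopScore"],
                  reverse=not scan_index_forward)

top_first = query_by_title(meteor_blasters, scan_index_forward=False)
```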

Every global secondary index must have a partition key, and can have an optional sort key. The index key schema can be different from the base table schema. You could have a table with a simple primary key (partition key), and create a global secondary index with a composite primary key (partition key and sort key)—or vice versa. The index key attributes can consist of any top-level `String`, `Number`, or `Binary` attributes from the base table. Other scalar types, document types, and set types are not allowed.

You can project other base table attributes into the index if you want. When you query the index, DynamoDB can retrieve these projected attributes efficiently. However, global secondary index queries cannot fetch attributes from the base table. For example, if you query `GameTitleIndex` as shown in the previous diagram, the query could not access any non-key attributes other than `TopScore` (although the key attributes `GameTitle` and `UserId` would automatically be projected).

In a DynamoDB table, each key value must be unique. However, the key values in a global secondary index do not need to be unique. To illustrate, suppose that a game named Comet Quest is especially difficult, with many new users trying but failing to get a score above zero. The following is some data that could represent this.



| UserId | GameTitle | TopScore | 
| --- | --- | --- | 
| 123 | Comet Quest | 0 | 
| 201 | Comet Quest | 0 | 
| 301 | Comet Quest | 0 | 

When this data is added to the `GameScores` table, DynamoDB propagates it to `GameTitleIndex`. If we then query the index using Comet Quest for `GameTitle` and 0 for `TopScore`, the following data is returned.

![\[Table containing a list of titles, top scores, and user ids.\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/images/GSI_05.png)


Only the items with the specified key values appear in the response. Within that set of data, the items are in no particular order. 
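Locally, the lookup above can be sketched as a filter on both index key attributes. Unlike a table primary key, a fully specified index key can match several items; the data here is illustrative.

```python
def query_index(index_items, game_title, top_score):
    # Index keys need not be unique, so a fully specified key
    # can still match multiple items.
    return [i for i in index_items
            if i["GameTitle"] == game_title and i["TopScore"] == top_score]

index_items = [
    {"GameTitle": "Comet Quest", "TopScore": 0, "UserId": "123"},
    {"GameTitle": "Comet Quest", "TopScore": 0, "UserId": "201"},
    {"GameTitle": "Comet Quest", "TopScore": 0, "UserId": "301"},
    {"GameTitle": "Meteor Blasters", "TopScore": 5842, "UserId": "101"},
]
matches = query_index(index_items, "Comet Quest", 0)
```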

A global secondary index only tracks data items where its key attributes actually exist. For example, suppose that you added another new item to the `GameScores` table, but only provided the required primary key attributes.



| UserId | GameTitle | 
| --- | --- | 
| 400 | Comet Quest | 

Because you didn't specify the `TopScore` attribute, DynamoDB would not propagate this item to `GameTitleIndex`. Thus, if you queried `GameScores` for all the Comet Quest items, you would get the following four items.

![\[Table containing a list of 4 titles, top scores, and user ids.\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/images/GSI_04.png)


A similar query on `GameTitleIndex` would still return three items, rather than four. This is because the item with the nonexistent `TopScore` is not propagated to the index.

![\[Table containing a list of 3 titles, top scores, and user ids.\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/images/GSI_05.png)
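The sparse-index behavior above can be sketched as a filter: only items that carry every index key attribute are propagated to the index. This is an illustration of the observable behavior, not the service's implementation.

```python
def propagate_to_index(table_items, index_key_attrs):
    # A global secondary index only contains items that have values
    # for all of the index key attributes.
    return [i for i in table_items if all(k in i for k in index_key_attrs)]

game_scores = [
    {"UserId": "123", "GameTitle": "Comet Quest", "TopScore": 0},
    {"UserId": "201", "GameTitle": "Comet Quest", "TopScore": 0},
    {"UserId": "301", "GameTitle": "Comet Quest", "TopScore": 0},
    {"UserId": "400", "GameTitle": "Comet Quest"},  # no TopScore attribute
]
index_items = propagate_to_index(game_scores, ["GameTitle", "TopScore"])
```

The table holds four Comet Quest items, but only three reach `GameTitleIndex`.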


## Attribute projections
<a name="GSI.Projections"></a>

A *projection* is the set of attributes that is copied from a table into a secondary index. The partition key and sort key of the table are always projected into the index; you can project other attributes to support your application's query requirements. When you query an index, Amazon DynamoDB can access any attribute in the projection as if those attributes were in a table of their own.

When you create a secondary index, you need to specify the attributes that will be projected into the index. DynamoDB provides three different options for this:
+ *KEYS\_ONLY* – Each item in the index consists only of the table partition key and sort key values, plus the index key values. The `KEYS_ONLY` option results in the smallest possible secondary index.
+ *INCLUDE* – In addition to the attributes described in `KEYS_ONLY`, the secondary index will include other non-key attributes that you specify.
+ *ALL* – The secondary index includes all of the attributes from the source table. Because all of the table data is duplicated in the index, an `ALL` projection results in the largest possible secondary index.
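The three projection types can be sketched as a function applied to each table item; the attribute names below are illustrative, matching the `GameScores` example.

```python
def project_item(item, table_keys, index_keys, projection, included=()):
    """Build the index copy of a table item for a given projection type."""
    if projection == "ALL":
        return dict(item)  # every attribute is duplicated into the index
    # KEYS_ONLY: table primary key attributes plus index key attributes.
    keep = set(table_keys) | set(index_keys)
    if projection == "INCLUDE":
        keep |= set(included)  # plus the named non-key attributes
    return {k: v for k, v in item.items() if k in keep}

item = {"UserId": "101", "GameTitle": "Galaxy Invaders",
        "TopScore": 5842, "Wins": 21, "Losses": 72}
keys_only = project_item(item, ["UserId", "GameTitle"],
                         ["GameTitle", "TopScore"], "KEYS_ONLY")
include = project_item(item, ["UserId", "GameTitle"],
                       ["GameTitle", "TopScore"], "INCLUDE",
                       included=["Wins", "Losses"])
```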

In the previous diagram, `GameTitleIndex` has only one projected attribute: `UserId`. So while an application can efficiently determine the `UserId` of the top scorers for each game using `GameTitle` and `TopScore` in queries, it can't efficiently determine the highest ratio of wins vs. losses for the top scorers. To do so, the application would have to perform an additional query on the base table to fetch the wins and losses for each of the top scorers. A more efficient way to support queries on this data would be to project these attributes from the base table into the global secondary index, as shown in this diagram. 

![\[Depiction of projecting non-key attributes into a GSI to support efficient querying.\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/images/GSI_06.png)


Because the non-key attributes `Wins` and `Losses` are projected into the index, an application can determine the wins vs. losses ratio for any game, or for any combination of game and user ID.

When you choose the attributes to project into a global secondary index, you must consider the tradeoff between provisioned throughput costs and storage costs:
+ If you need to access just a few attributes with the lowest possible latency, consider projecting only those attributes into a global secondary index. The smaller the index, the less that it costs to store it, and the less your write costs are.
+ If your application frequently accesses some non-key attributes, you should consider projecting those attributes into a global secondary index. The additional storage costs for the global secondary index offset the cost of performing frequent table scans.
+ If you need to access most of the non-key attributes on a frequent basis, you can project these attributes—or even the entire base table—into a global secondary index. This gives you maximum flexibility. However, your storage cost would increase, or even double.
+ If your application needs to query a table infrequently, but must perform many writes or updates against the data in the table, consider projecting `KEYS_ONLY`. The global secondary index would be of minimal size, but would still be available when needed for query activity. 

## Multi-attribute key schema
<a name="GSI.MultiAttributeKeys"></a>

Global Secondary Indexes support multi-attribute keys, allowing you to compose partition keys and sort keys from multiple attributes. With multi-attribute keys, you can create a partition key from up to four attributes and a sort key from up to four attributes, for a total of up to eight attributes per key schema.

Multi-attribute keys simplify your data model by eliminating the need to manually concatenate attributes into synthetic keys. Instead of creating composite strings like `TOURNAMENT#WINTER2024#REGION#NA-EAST`, you can use the natural attributes from your domain model directly. DynamoDB handles the composite key logic automatically, hashing multiple partition key attributes together for data distribution and maintaining hierarchical sort order across multiple sort key attributes.

For example, consider a gaming tournament system where you want to organize matches by tournament and region. With multi-attribute keys, you can define your partition key as two separate attributes: `tournamentId` and `region`. Similarly, you can define your sort key using multiple attributes like `round`, `bracket`, and `matchId` to create a natural hierarchy. This approach keeps your data typed and your code clean, without string manipulation or parsing.

When you query a global secondary index with multi-attribute keys, you must specify all partition key attributes using equality conditions. For sort key attributes, you can query them left-to-right in the order they're defined in the key schema. This means you can query the first sort key attribute alone, the first two attributes together, or all attributes together, but you cannot skip attributes in the middle. Inequality conditions such as `>`, `<`, `BETWEEN`, or `begins_with()` must be the last condition in your query.
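The left-to-right rule can be sketched as a validation check. This is illustrative only; the service enforces these rules when it parses your `KeyConditionExpression`. The tournament attributes follow the example above.

```python
def valid_key_condition(partition_attrs, sort_attrs, eq_conds, range_attr=None):
    """Check a multi-attribute key condition: every partition key attribute
    needs an equality condition; equality conditions on sort key attributes
    must form a left-to-right prefix; at most one trailing range condition."""
    if not all(a in eq_conds for a in partition_attrs):
        return False
    # Equality conditions on sort attributes must be a prefix of the
    # key schema order (no skipped attributes in the middle).
    sort_eq = [a for a in sort_attrs if a in eq_conds]
    if sort_eq != sort_attrs[:len(sort_eq)]:
        return False
    # A range condition (>, <, BETWEEN, begins_with) must come last,
    # on the attribute immediately after the equality prefix.
    if range_attr is not None:
        nxt = sort_attrs[len(sort_eq)] if len(sort_eq) < len(sort_attrs) else None
        if range_attr != nxt:
            return False
    return True

P = ["tournamentId", "region"]
S = ["round", "bracket", "matchId"]
```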

Multi-attribute keys work particularly well when creating global secondary indexes on existing tables. You can use attributes that already exist in your table without backfilling synthetic keys across your data. This makes it straightforward to add new query patterns to your application by creating indexes that reorganize your data using different attribute combinations.

Each attribute in a multi-attribute key can have its own data type: `String` (S), `Number` (N), or `Binary` (B). When choosing data types, consider that `Number` attributes sort numerically without requiring zero-padding, while `String` attributes sort lexicographically. For example, if you use a `Number` type for a score attribute, the values 5, 50, 500, and 1000 sort in natural numeric order. The same values as `String` type would sort as "1000", "5", "50", "500" unless you pad them with leading zeros.
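The sort-order difference can be verified directly: plain comparison of numbers versus strings mirrors how the two key types order values.

```python
scores = [5, 50, 500, 1000]

# Number attributes sort numerically.
numeric_order = sorted(scores)

# String attributes sort lexicographically, character by character.
string_order = sorted(str(s) for s in scores)

# Zero-padding restores numeric order for string-typed keys.
padded_order = sorted(str(s).zfill(4) for s in scores)
```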

When designing multi-attribute keys, order your attributes from most general to most specific. For partition keys, combine attributes that are always queried together and that provide good data distribution. For sort keys, place frequently queried attributes first in the hierarchy to maximize query flexibility. This ordering allows you to query at any level of granularity that matches your access patterns.

See [Multi-attribute keys](GSI.DesignPattern.MultiAttributeKeys.md) for implementation examples.

## Reading data from a Global Secondary Index
<a name="GSI.Reading"></a>

You can retrieve items from a global secondary index using the `Query` and `Scan` operations. The `GetItem` and `BatchGetItem` operations can't be used on a global secondary index.

### Querying a Global Secondary Index
<a name="GSI.Querying"></a>

You can use the `Query` operation to access one or more items in a global secondary index. The query must specify the name of the base table and the name of the index that you want to use, the attributes to be returned in the query results, and any query conditions that you want to apply. DynamoDB can return the results in ascending or descending order.

Consider the following data returned from a `Query` that requests gaming data for a leaderboard application.

```
{
    "TableName": "GameScores",
    "IndexName": "GameTitleIndex",
    "KeyConditionExpression": "GameTitle = :v_title",
    "ExpressionAttributeValues": {
        ":v_title": {"S": "Meteor Blasters"}
    },
    "ProjectionExpression": "UserId, TopScore",
    "ScanIndexForward": false
}
```

In this query:
+ DynamoDB accesses *GameTitleIndex*, using the *GameTitle* partition key to locate the index items for Meteor Blasters. All of the index items with this key are stored adjacent to each other for rapid retrieval.
+ Within this game, DynamoDB uses the index to access all of the user IDs and top scores for this game.
+ The results are returned, sorted in descending order because the `ScanIndexForward` parameter is set to false.

### Scanning a Global Secondary Index
<a name="GSI.Scanning"></a>

You can use the `Scan` operation to retrieve all of the data from a global secondary index. You must provide the base table name and the index name in the request. With a `Scan`, DynamoDB reads all of the data in the index and returns it to the application. You can also request that only some of the data be returned, and that the remaining data should be discarded. To do this, use the `FilterExpression` parameter of the `Scan` operation. For more information, see [Filter expressions for scan](Scan.md#Scan.FilterExpression).
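The filter's behavior can be sketched locally: the scan reads every item first, and the filter expression then discards non-matching items before the response is returned. In provisioned mode, read capacity tracks the items read, not the items returned. The data below is illustrative.

```python
def scan_with_filter(index_items, predicate):
    """Mimic Scan + FilterExpression: every item is read, then the
    filter discards non-matching items before they are returned."""
    items_read = list(index_items)           # Scan reads everything
    returned = [i for i in items_read if predicate(i)]
    return returned, len(items_read)         # capacity tracks items read

index_items = [
    {"GameTitle": "Comet Quest", "TopScore": 0},
    {"GameTitle": "Meteor Blasters", "TopScore": 5842},
    {"GameTitle": "Galaxy Invaders", "TopScore": 2180},
]
results, scanned_count = scan_with_filter(index_items,
                                          lambda i: i["TopScore"] > 1000)
```

Two items come back, but all three were read, which is why a filter expression alone does not make a large scan cheaper.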

## Data synchronization between tables and Global Secondary Indexes
<a name="GSI.Writes"></a>

DynamoDB automatically synchronizes each global secondary index with its base table. When an application writes or deletes items in a table, any global secondary indexes on that table are updated asynchronously, using an eventually consistent model. Applications never write directly to an index. However, it is important that you understand the implications of how DynamoDB maintains these indexes.

 Global secondary indexes inherit the read/write capacity mode from the base table. For more information, see [Considerations when switching capacity modes in DynamoDB](bp-switching-capacity-modes.md). 

When you create a global secondary index, you specify one or more index key attributes and their data types. This means that whenever you write an item to the base table, the data types for those attributes must match the index key schema's data types. In the case of `GameTitleIndex`, the `GameTitle` partition key in the index is defined as a `String` data type. The `TopScore` sort key in the index is of type `Number`. If you try to add an item to the `GameScores` table and specify a different data type for either `GameTitle` or `TopScore`, DynamoDB returns a `ValidationException` because of the data type mismatch.
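For example, assuming the `GameScores` table from this section, the following `PutItem` request would fail because `TopScore` is supplied as a `String` instead of the `Number` type defined in the index key schema (the item values are illustrative):

```
{
    "TableName": "GameScores",
    "Item": {
        "UserId": {"S": "101"},
        "GameTitle": {"S": "Meteor Blasters"},
        "TopScore": {"S": "5842"}
    }
}
```

DynamoDB rejects this request with a `ValidationException`; writing `"TopScore": {"N": "5842"}` instead would succeed.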

When you put or delete items in a table, the global secondary indexes on that table are updated in an eventually consistent fashion. Changes to the table data are propagated to the global secondary indexes within a fraction of a second, under normal conditions. However, in some unlikely failure scenarios, longer propagation delays might occur. Because of this, your applications need to anticipate and handle situations where a query on a global secondary index returns results that are not up to date.

If you write an item to a table, you don't have to specify the attributes for any global secondary index sort key. Using `GameTitleIndex` as an example, you would not need to specify a value for the `TopScore` attribute to write a new item to the `GameScores` table. In this case, DynamoDB does not write any data to the index for this particular item.

A table with many global secondary indexes incurs higher costs for write activity than tables with fewer indexes. For more information, see [Provisioned throughput considerations for Global Secondary Indexes](#GSI.ThroughputConsiderations).

## Table classes with Global Secondary Index
<a name="GSI.tableclasses"></a>

A global secondary index always uses the same table class as its base table. Any new global secondary index that you add to a table uses the table's current table class, and when you update a table's table class, all associated global secondary indexes are updated as well.

## Provisioned throughput considerations for Global Secondary Indexes
<a name="GSI.ThroughputConsiderations"></a>

When you create a global secondary index on a provisioned mode table, you must specify read and write capacity units for the expected workload on that index. The provisioned throughput settings of a global secondary index are separate from those of its base table. A `Query` operation on a global secondary index consumes read capacity units from the index, not the base table. When you put, update or delete items in a table, the global secondary indexes on that table are also updated. These index updates consume write capacity units from the index, not from the base table.

For example, if you `Query` a global secondary index and exceed its provisioned read capacity, your request will be throttled. If you perform heavy write activity on the table, but a global secondary index on that table has insufficient write capacity, the write activity on the table will be throttled.

**Important**  
To avoid potential throttling, the provisioned write capacity for a global secondary index should be equal to or greater than the write capacity of the base table, because new updates are written to both the base table and the global secondary index.

To view the provisioned throughput settings for a global secondary index, use the `DescribeTable` operation. Detailed information about all of the table's global secondary indexes is returned.

### Read capacity units
<a name="GSI.ThroughputConsiderations.Reads"></a>

Global secondary indexes support eventually consistent reads, each of which consumes one half of a read capacity unit. This means that a single global secondary index query can retrieve up to 2 × 4 KB = 8 KB per read capacity unit.

For global secondary index queries, DynamoDB calculates the provisioned read activity in the same way as it does for queries against tables. The only difference is that the calculation is based on the sizes of the index entries, rather than the size of the item in the base table. The number of read capacity units is the sum of all projected attribute sizes across all of the items returned. The result is then rounded up to the next 4 KB boundary. For more information about how DynamoDB calculates provisioned throughput usage, see [DynamoDB provisioned capacity mode](provisioned-capacity-mode.md).

The maximum size of the results returned by a `Query` operation is 1 MB. This includes the sizes of all the attribute names and values across all of the items returned.

For example, consider a global secondary index where each item contains 2,000 bytes of data. Now suppose that you `Query` this index and that the query's `KeyConditionExpression` matches eight items. The total size of the matching items is 2,000 bytes × 8 items = 16,000 bytes. This result is then rounded up to the nearest 4 KB boundary. Because global secondary index queries are eventually consistent, the total cost is 0.5 × (16 KB / 4 KB), or 2 read capacity units.
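The arithmetic above can be sketched as a small helper function (hypothetical, for illustration only; it is not part of the AWS SDK):

```
// Estimate read capacity units for an eventually consistent GSI query.
// totalBytes: combined size of all projected attributes across the items returned.
function estimateGsiQueryRcu(totalBytes) {
    const fourKb = 4096;
    const blocks = Math.ceil(totalBytes / fourKb);  // round up to the next 4 KB boundary
    return 0.5 * blocks;                            // eventually consistent reads cost 0.5 RCU per 4 KB
}

// Eight 2,000-byte index entries: 16,000 bytes -> 16 KB -> 2 read capacity units
console.log(estimateGsiQueryRcu(8 * 2000));  // prints 2
```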

### Write capacity units
<a name="GSI.ThroughputConsiderations.Writes"></a>

When an item in a table is added, updated, or deleted, and a global secondary index is affected by this, the global secondary index consumes provisioned write capacity units for the operation. The total provisioned throughput cost for a write consists of the sum of write capacity units consumed by writing to the base table and those consumed by updating the global secondary indexes. If a write to a table does not require a global secondary index update, no write capacity is consumed from the index.

For a table write to succeed, the provisioned throughput settings for the table and all of its global secondary indexes must have enough write capacity to accommodate the write. Otherwise, the write to the table is throttled. 

**Important**  
When you create a global secondary index (GSI), write operations to the base table can be throttled if the GSI activity resulting from those writes exceeds the GSI's provisioned write capacity. This throttling affects all write operations, from the index backfill process to your production workloads. For more information, see [Troubleshooting throttling in Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TroubleshootingThrottling.html).

The cost of writing an item to a global secondary index depends on several factors:
+ If you write a new item to the table that defines an indexed attribute, or you update an existing item to define a previously undefined indexed attribute, one write operation is required to put the item into the index.
+ If an update to the table changes the value of an indexed key attribute (from A to B), two writes are required, one to delete the previous item from the index and another write to put the new item into the index.  
+ If an item was present in the index, but a write to the table caused the indexed attribute to be deleted, one write is required to delete the old item projection from the index.
+ If an item is not present in the index before or after the item is updated, there is no additional write cost for the index.
+ If an update to the table only changes the value of projected attributes in the index key schema, but does not change the value of any indexed key attribute, one write is required to update the values of the projected attributes in the index.

All of these factors assume that each index entry is no larger than the 1 KB unit size used for calculating write capacity units. Larger index entries require additional write capacity units. You can minimize your write costs by considering which attributes your queries need to return and projecting only those attributes into the index.
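The same rounding applies on the write side; a minimal sketch (a hypothetical helper, not part of the AWS SDK):

```
// Estimate write capacity units consumed by writing a single GSI entry.
// entryBytes: size of the index entry (key attributes plus projected attributes).
function estimateGsiWriteWcu(entryBytes) {
    const oneKb = 1024;
    return Math.max(1, Math.ceil(entryBytes / oneKb));  // 1 WCU per 1 KB, rounded up
}

console.log(estimateGsiWriteWcu(500));   // prints 1 (entries up to 1 KB cost one unit)
console.log(estimateGsiWriteWcu(2500));  // prints 3 (2,500 bytes rounds up to 3 KB)
```

An update that changes an indexed key value would consume this cost twice: once to delete the old index entry and once to put the new one.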

## Storage considerations for Global Secondary Indexes
<a name="GSI.StorageConsiderations"></a>

When an application writes an item to a table, DynamoDB automatically copies the correct subset of attributes to any global secondary indexes in which those attributes should appear. Your AWS account is charged for storage of the item in the base table and also for storage of attributes in any global secondary indexes on that table.

The amount of space used by an index item is the sum of the following:
+ The size in bytes of the base table primary key (partition key and sort key)
+ The size in bytes of the index key attribute
+ The size in bytes of the projected attributes (if any)
+ 100 bytes of overhead per index item

To estimate the storage requirements for a global secondary index, you can estimate the average size of an item in the index and then multiply by the number of items in the base table that have the global secondary index key attributes.
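The estimate described above can be expressed as follows (a hypothetical helper; the 100-byte overhead figure comes from the list above):

```
// Estimate GSI storage from the per-item components listed above.
function estimateGsiStorageBytes(baseKeyBytes, indexKeyBytes, projectedBytes, itemCount) {
    const overheadPerItem = 100;  // fixed per-index-item overhead, in bytes
    return (baseKeyBytes + indexKeyBytes + projectedBytes + overheadPerItem) * itemCount;
}

// Example: 30-byte base table key, 20-byte index key, 250 bytes of projected
// attributes, and 1,000,000 items that have the index key attributes.
console.log(estimateGsiStorageBytes(30, 20, 250, 1000000));  // prints 400000000 (~400 MB)
```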

If a table contains an item where a particular attribute is not defined, but that attribute is defined as an index partition key or sort key, DynamoDB doesn't write any data for that item to the index.

# Design patterns
<a name="GSI.DesignPatterns"></a>

Design patterns provide proven solutions to common challenges when working with global secondary indexes. These patterns help you build efficient, scalable applications by showing you how to structure your indexes for specific use cases.

Each pattern includes a complete implementation guide with code examples, best practices, and real-world use cases to help you apply the pattern to your own applications.

**Topics**
+ [Multi-attribute keys](GSI.DesignPattern.MultiAttributeKeys.md)

# Multi-attribute keys pattern
<a name="GSI.DesignPattern.MultiAttributeKeys"></a>

## Overview
<a name="GSI.DesignPattern.MultiAttributeKeys.Overview"></a>

Multi-attribute keys allow you to create Global Secondary Index (GSI) partition and sort keys composed of up to four attributes each. This reduces client-side code and makes it easier to initially model data and add new access patterns later.

Consider a common scenario: to create a GSI that queries items by multiple hierarchical attributes, you would traditionally need to create synthetic keys by concatenating values. For example, in a gaming app, to query tournament matches by tournament, region, and round, you might create a synthetic GSI partition key like `TOURNAMENT#WINTER2024#REGION#NA-EAST` and a synthetic sort key like `ROUND#SEMIFINALS#BRACKET#UPPER`. This approach works, but it requires string concatenation when writing data, parsing when reading, and backfilling synthetic keys across all existing items if you're adding the GSI to an existing table. It also clutters your code and makes it harder to maintain type safety on individual key components.

Multi-attribute keys solve this problem for GSIs. You define your GSI partition key using multiple existing attributes like tournamentId and region. DynamoDB handles the composite key logic automatically, hashing them together for data distribution. You write items using natural attributes from your domain model, and the GSI automatically indexes them. No concatenation, no parsing, no backfilling. Your code stays clean, your data stays typed, and your queries stay simple. This approach is particularly useful when you have hierarchical data with natural attribute groupings (like tournament → region → round, or organization → department → team).

## Application example
<a name="GSI.DesignPattern.MultiAttributeKeys.ApplicationExample"></a>

This guide walks through building a tournament match tracking system for an esports platform. The platform needs to efficiently query matches across multiple dimensions: by tournament and region for bracket management, by player for match history, and by date for scheduling.

## Data model
<a name="GSI.DesignPattern.MultiAttributeKeys.DataModel"></a>

In this walkthrough, the tournament match tracking system supports three primary access patterns, each requiring a different key structure:

**Access pattern 1:** Look up a specific match by its unique ID
+ **Solution:** Base table with `matchId` as partition key

**Access pattern 2:** Query all matches for a specific tournament and region, optionally filtering by round, bracket, or match
+ **Solution:** Global Secondary Index with multi-attribute partition key (`tournamentId` + `region`) and multi-attribute sort key (`round` + `bracket` + `matchId`)
+ **Example queries:** "All WINTER2024 matches in NA-EAST region" or "All SEMIFINALS matches in UPPER bracket for WINTER2024/NA-EAST"

**Access pattern 3:** Query a player's match history, optionally filtering by date range or tournament round
+ **Solution:** Global Secondary Index with single partition key (`player1Id`) and multi-attribute sort key (`matchDate` + `round`)
+ **Example queries:** "All matches for player 101" or "Player 101's matches in January 2024"

The key difference between traditional and multi-attribute approaches becomes clear when examining the item structure:

**Traditional Global Secondary Index approach (concatenated keys):**

```
// Manual concatenation required for GSI keys
const item = {
    matchId: 'match-001',                                          // Base table PK
    tournamentId: 'WINTER2024',
    region: 'NA-EAST',
    round: 'SEMIFINALS',
    bracket: 'UPPER',
    player1Id: '101',
    // Synthetic keys needed for GSI
    GSI_PK: `TOURNAMENT#${tournamentId}#REGION#${region}`,       // Must concatenate
    GSI_SK: `${round}#${bracket}#${matchId}`,                    // Must concatenate
    // ... other attributes
};
```

**Multi-attribute Global Secondary Index approach (native keys):**

```
// Use existing attributes directly - no concatenation needed
const item = {
    matchId: 'match-001',                                          // Base table PK
    tournamentId: 'WINTER2024',
    region: 'NA-EAST',
    round: 'SEMIFINALS',
    bracket: 'UPPER',
    player1Id: '101',
    matchDate: '2024-01-18',
    // No synthetic keys needed - GSI uses existing attributes directly
    // ... other attributes
};
```

With multi-attribute keys, you write items once with natural domain attributes. DynamoDB automatically indexes them across multiple GSIs without requiring synthetic concatenated keys.

**Base table schema:**
+ Partition key: `matchId` (1 attribute)

**Global Secondary Index Schema (TournamentRegionIndex with multi-attribute keys):**
+ Partition key: `tournamentId`, `region` (2 attributes)
+ Sort key: `round`, `bracket`, `matchId` (3 attributes)

**Global Secondary Index Schema (PlayerMatchHistoryIndex with multi-attribute keys):**
+ Partition key: `player1Id` (1 attribute)
+ Sort key: `matchDate`, `round` (2 attributes)

### Base table: TournamentMatches
<a name="GSI.DesignPattern.MultiAttributeKeys.BaseTable"></a>


| matchId (PK) | tournamentId | region | round | bracket | player1Id | player2Id | matchDate | winner | score | 
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | 
| match-001 | WINTER2024 | NA-EAST | FINALS | CHAMPIONSHIP | 101 | 103 | 2024-01-20 | 101 | 3-1 | 
| match-002 | WINTER2024 | NA-EAST | SEMIFINALS | UPPER | 101 | 105 | 2024-01-18 | 101 | 3-2 | 
| match-003 | WINTER2024 | NA-EAST | SEMIFINALS | UPPER | 103 | 107 | 2024-01-18 | 103 | 3-0 | 
| match-004 | WINTER2024 | NA-EAST | QUARTERFINALS | UPPER | 101 | 109 | 2024-01-15 | 101 | 3-1 | 
| match-005 | WINTER2024 | NA-WEST | FINALS | CHAMPIONSHIP | 102 | 104 | 2024-01-20 | 102 | 3-2 | 
| match-006 | WINTER2024 | NA-WEST | SEMIFINALS | UPPER | 102 | 106 | 2024-01-18 | 102 | 3-1 | 
| match-007 | SPRING2024 | NA-EAST | QUARTERFINALS | UPPER | 101 | 108 | 2024-03-15 | 101 | 3-0 | 
| match-008 | SPRING2024 | NA-EAST | QUARTERFINALS | LOWER | 103 | 110 | 2024-03-15 | 103 | 3-2 | 

### GSI: TournamentRegionIndex (multi-attribute keys)
<a name="GSI.DesignPattern.MultiAttributeKeys.TournamentRegionIndexTable"></a>


| tournamentId (PK) | region (PK) | round (SK) | bracket (SK) | matchId (SK) | player1Id | player2Id | matchDate | winner | score | 
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | 
| WINTER2024 | NA-EAST | FINALS | CHAMPIONSHIP | match-001 | 101 | 103 | 2024-01-20 | 101 | 3-1 | 
| WINTER2024 | NA-EAST | QUARTERFINALS | UPPER | match-004 | 101 | 109 | 2024-01-15 | 101 | 3-1 | 
| WINTER2024 | NA-EAST | SEMIFINALS | UPPER | match-002 | 101 | 105 | 2024-01-18 | 101 | 3-2 | 
| WINTER2024 | NA-EAST | SEMIFINALS | UPPER | match-003 | 103 | 107 | 2024-01-18 | 103 | 3-0 | 
| WINTER2024 | NA-WEST | FINALS | CHAMPIONSHIP | match-005 | 102 | 104 | 2024-01-20 | 102 | 3-2 | 
| WINTER2024 | NA-WEST | SEMIFINALS | UPPER | match-006 | 102 | 106 | 2024-01-18 | 102 | 3-1 | 
| SPRING2024 | NA-EAST | QUARTERFINALS | LOWER | match-008 | 103 | 110 | 2024-03-15 | 103 | 3-2 | 
| SPRING2024 | NA-EAST | QUARTERFINALS | UPPER | match-007 | 101 | 108 | 2024-03-15 | 101 | 3-0 | 

### GSI: PlayerMatchHistoryIndex (multi-attribute keys)
<a name="GSI.DesignPattern.MultiAttributeKeys.PlayerMatchHistoryIndexTable"></a>


| player1Id (PK) | matchDate (SK) | round (SK) | tournamentId | region | bracket | matchId | player2Id | winner | score | 
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | 
| 101 | 2024-01-15 | QUARTERFINALS | WINTER2024 | NA-EAST | UPPER | match-004 | 109 | 101 | 3-1 | 
| 101 | 2024-01-18 | SEMIFINALS | WINTER2024 | NA-EAST | UPPER | match-002 | 105 | 101 | 3-2 | 
| 101 | 2024-01-20 | FINALS | WINTER2024 | NA-EAST | CHAMPIONSHIP | match-001 | 103 | 101 | 3-1 | 
| 101 | 2024-03-15 | QUARTERFINALS | SPRING2024 | NA-EAST | UPPER | match-007 | 108 | 101 | 3-0 | 
| 102 | 2024-01-18 | SEMIFINALS | WINTER2024 | NA-WEST | UPPER | match-006 | 106 | 102 | 3-1 | 
| 102 | 2024-01-20 | FINALS | WINTER2024 | NA-WEST | CHAMPIONSHIP | match-005 | 104 | 102 | 3-2 | 
| 103 | 2024-01-18 | SEMIFINALS | WINTER2024 | NA-EAST | UPPER | match-003 | 107 | 103 | 3-0 | 
| 103 | 2024-03-15 | QUARTERFINALS | SPRING2024 | NA-EAST | LOWER | match-008 | 110 | 103 | 3-2 | 

## Prerequisites
<a name="GSI.DesignPattern.MultiAttributeKeys.Prerequisites"></a>

Before you begin, ensure you have:

### Account and permissions
<a name="GSI.DesignPattern.MultiAttributeKeys.Prerequisites.AWSAccount"></a>
+ An active AWS account ([create one here](https://aws.amazon.com/free/) if needed)
+ IAM permissions for DynamoDB operations:
  + `dynamodb:CreateTable`
  + `dynamodb:DeleteTable`
  + `dynamodb:DescribeTable`
  + `dynamodb:PutItem`
  + `dynamodb:Query`
  + `dynamodb:BatchWriteItem`

**Note**  
For production use, create a custom IAM policy with only the permissions you need. For this tutorial, you can use the AWS managed policy `AmazonDynamoDBFullAccessV2`.

### Development Environment
<a name="GSI.DesignPattern.MultiAttributeKeys.Prerequisites.DevEnvironment"></a>
+ Node.js installed on your machine
+ AWS credentials configured using one of these methods:

**Option 1: AWS CLI**

```
aws configure
```

**Option 2: Environment Variables**

```
export AWS_ACCESS_KEY_ID=your_access_key_here
export AWS_SECRET_ACCESS_KEY=your_secret_key_here
export AWS_DEFAULT_REGION=us-east-1
```

### Install Required Packages
<a name="GSI.DesignPattern.MultiAttributeKeys.Prerequisites.InstallPackages"></a>

```
npm install @aws-sdk/client-dynamodb @aws-sdk/lib-dynamodb
```

## Implementation
<a name="GSI.DesignPattern.MultiAttributeKeys.Implementation"></a>

### Step 1: Create table with GSIs using multi-attribute keys
<a name="GSI.DesignPattern.MultiAttributeKeys.CreateTable"></a>

Create a table with a simple base key structure and GSIs that use multi-attribute keys.

#### Code example
<a name="w2aac19c13c45c23b9c11b3b5b1"></a>

```
import { DynamoDBClient, CreateTableCommand } from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({ region: 'us-west-2' });

const response = await client.send(new CreateTableCommand({
    TableName: 'TournamentMatches',
    
    // Base table: Simple partition key
    KeySchema: [
        { AttributeName: 'matchId', KeyType: 'HASH' }              // Simple PK
    ],
    
    AttributeDefinitions: [
        { AttributeName: 'matchId', AttributeType: 'S' },
        { AttributeName: 'tournamentId', AttributeType: 'S' },
        { AttributeName: 'region', AttributeType: 'S' },
        { AttributeName: 'round', AttributeType: 'S' },
        { AttributeName: 'bracket', AttributeType: 'S' },
        { AttributeName: 'player1Id', AttributeType: 'S' },
        { AttributeName: 'matchDate', AttributeType: 'S' }
    ],
    
    // GSIs with multi-attribute keys
    GlobalSecondaryIndexes: [
        {
            IndexName: 'TournamentRegionIndex',
            KeySchema: [
                { AttributeName: 'tournamentId', KeyType: 'HASH' },    // GSI PK attribute 1
                { AttributeName: 'region', KeyType: 'HASH' },          // GSI PK attribute 2
                { AttributeName: 'round', KeyType: 'RANGE' },          // GSI SK attribute 1
                { AttributeName: 'bracket', KeyType: 'RANGE' },        // GSI SK attribute 2
                { AttributeName: 'matchId', KeyType: 'RANGE' }         // GSI SK attribute 3
            ],
            Projection: { ProjectionType: 'ALL' }
        },
        {
            IndexName: 'PlayerMatchHistoryIndex',
            KeySchema: [
                { AttributeName: 'player1Id', KeyType: 'HASH' },       // GSI PK
                { AttributeName: 'matchDate', KeyType: 'RANGE' },      // GSI SK attribute 1
                { AttributeName: 'round', KeyType: 'RANGE' }           // GSI SK attribute 2
            ],
            Projection: { ProjectionType: 'ALL' }
        }
    ],
    
    BillingMode: 'PAY_PER_REQUEST'
}));

console.log("Table with multi-attribute GSI keys created successfully");
```

**Key design decisions:**

**Base table:** The base table uses a simple `matchId` partition key for direct match lookups, keeping the base table structure straightforward while the GSIs provide the complex query patterns.

**TournamentRegionIndex Global Secondary Index:** The `TournamentRegionIndex` Global Secondary Index uses `tournamentId` + `region` as a multi-attribute partition key. Data is distributed by the hash of both attributes combined, creating tournament-region isolation and enabling efficient queries within a specific tournament-region context. The multi-attribute sort key (`round` + `bracket` + `matchId`) provides hierarchical sorting that supports queries at any level of the hierarchy, with natural ordering from general (round) to specific (match ID).

**PlayerMatchHistoryIndex Global Secondary Index:** The `PlayerMatchHistoryIndex` Global Secondary Index reorganizes data by player using `player1Id` as the partition key, enabling cross-tournament queries for a specific player. The multi-attribute sort key (`matchDate` + `round`) provides chronological ordering with the ability to filter by date ranges or specific tournament rounds.

### Step 2: Insert data with native attributes
<a name="GSI.DesignPattern.MultiAttributeKeys.InsertData"></a>

Add tournament match data using natural attributes. The GSI will automatically index these attributes without requiring synthetic keys.

#### Code example
<a name="w2aac19c13c45c23b9c11b5b5b1"></a>

```
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand } from "@aws-sdk/lib-dynamodb";

const client = new DynamoDBClient({ region: 'us-west-2' });
const docClient = DynamoDBDocumentClient.from(client);

// Tournament match data - no synthetic keys needed for GSIs
const matches = [
    // Winter 2024 Tournament, NA-EAST region
    {
        matchId: 'match-001',
        tournamentId: 'WINTER2024',
        region: 'NA-EAST',
        round: 'FINALS',
        bracket: 'CHAMPIONSHIP',
        player1Id: '101',
        player2Id: '103',
        matchDate: '2024-01-20',
        winner: '101',
        score: '3-1'
    },
    {
        matchId: 'match-002',
        tournamentId: 'WINTER2024',
        region: 'NA-EAST',
        round: 'SEMIFINALS',
        bracket: 'UPPER',
        player1Id: '101',
        player2Id: '105',
        matchDate: '2024-01-18',
        winner: '101',
        score: '3-2'
    },
    {
        matchId: 'match-003',
        tournamentId: 'WINTER2024',
        region: 'NA-EAST',
        round: 'SEMIFINALS',
        bracket: 'UPPER',
        player1Id: '103',
        player2Id: '107',
        matchDate: '2024-01-18',
        winner: '103',
        score: '3-0'
    },
    {
        matchId: 'match-004',
        tournamentId: 'WINTER2024',
        region: 'NA-EAST',
        round: 'QUARTERFINALS',
        bracket: 'UPPER',
        player1Id: '101',
        player2Id: '109',
        matchDate: '2024-01-15',
        winner: '101',
        score: '3-1'
    },
    
    // Winter 2024 Tournament, NA-WEST region
    {
        matchId: 'match-005',
        tournamentId: 'WINTER2024',
        region: 'NA-WEST',
        round: 'FINALS',
        bracket: 'CHAMPIONSHIP',
        player1Id: '102',
        player2Id: '104',
        matchDate: '2024-01-20',
        winner: '102',
        score: '3-2'
    },
    {
        matchId: 'match-006',
        tournamentId: 'WINTER2024',
        region: 'NA-WEST',
        round: 'SEMIFINALS',
        bracket: 'UPPER',
        player1Id: '102',
        player2Id: '106',
        matchDate: '2024-01-18',
        winner: '102',
        score: '3-1'
    },
    
    // Spring 2024 Tournament, NA-EAST region
    {
        matchId: 'match-007',
        tournamentId: 'SPRING2024',
        region: 'NA-EAST',
        round: 'QUARTERFINALS',
        bracket: 'UPPER',
        player1Id: '101',
        player2Id: '108',
        matchDate: '2024-03-15',
        winner: '101',
        score: '3-0'
    },
    {
        matchId: 'match-008',
        tournamentId: 'SPRING2024',
        region: 'NA-EAST',
        round: 'QUARTERFINALS',
        bracket: 'LOWER',
        player1Id: '103',
        player2Id: '110',
        matchDate: '2024-03-15',
        winner: '103',
        score: '3-2'
    }
];

// Insert all matches
for (const match of matches) {
    await docClient.send(new PutCommand({
        TableName: 'TournamentMatches',
        Item: match
    }));
    
    console.log(`Added: ${match.matchId} - ${match.tournamentId}/${match.region} - ${match.round} ${match.bracket}`);
}

console.log(`\nInserted ${matches.length} tournament matches`);
console.log("No synthetic keys created - GSIs use native attributes automatically");
```

**Data structure explained:**

**Natural attribute usage:** Each attribute represents a real tournament concept with no string concatenation or parsing required, providing direct mapping to the domain model.

**Automatic Global Secondary Index indexing:** The GSIs automatically index items using the existing attributes (`tournamentId`, `region`, `round`, `bracket`, `matchId` for TournamentRegionIndex and `player1Id`, `matchDate`, `round` for PlayerMatchHistoryIndex) without requiring synthetic concatenated keys.

**No backfilling needed:** When you add a new Global Secondary Index with multi-attribute keys to an existing table, DynamoDB automatically indexes all existing items using their natural attributes—no need to update items with synthetic keys.

### Step 3: Query TournamentRegionIndex Global Secondary Index with all partition key attributes
<a name="GSI.DesignPattern.MultiAttributeKeys.QueryAllPartitionKeys"></a>

This example queries the TournamentRegionIndex Global Secondary Index which has a multi-attribute partition key (`tournamentId` + `region`). All partition key attributes must be specified with equality conditions in queries—you cannot query with just `tournamentId` alone or use inequality operators on partition key attributes.

#### Code example
<a name="w2aac19c13c45c23b9c11b7b5b1"></a>

```
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, QueryCommand } from "@aws-sdk/lib-dynamodb";

const client = new DynamoDBClient({ region: 'us-west-2' });
const docClient = DynamoDBDocumentClient.from(client);

// Query GSI: All matches for WINTER2024 tournament in NA-EAST region
const response = await docClient.send(new QueryCommand({
    TableName: 'TournamentMatches',
    IndexName: 'TournamentRegionIndex',
    KeyConditionExpression: 'tournamentId = :tournament AND #region = :region',
    ExpressionAttributeNames: {
        '#region': 'region'  // 'region' is a reserved keyword
    },
    ExpressionAttributeValues: {
        ':tournament': 'WINTER2024',
        ':region': 'NA-EAST'
    }
}));

console.log(`Found ${response.Items.length} matches for WINTER2024/NA-EAST:\n`);
response.Items.forEach(match => {
    console.log(`  ${match.round} | ${match.bracket} | ${match.matchId}`);
    console.log(`    Players: ${match.player1Id} vs ${match.player2Id}`);
    console.log(`    Winner: ${match.winner}, Score: ${match.score}\n`);
});
```

**Expected output:**

```
Found 4 matches for WINTER2024/NA-EAST:

  FINALS | CHAMPIONSHIP | match-001
    Players: 101 vs 103
    Winner: 101, Score: 3-1

  QUARTERFINALS | UPPER | match-004
    Players: 101 vs 109
    Winner: 101, Score: 3-1

  SEMIFINALS | UPPER | match-002
    Players: 101 vs 105
    Winner: 101, Score: 3-2

  SEMIFINALS | UPPER | match-003
    Players: 103 vs 107
    Winner: 103, Score: 3-0
```

**Invalid queries:**

```
// Missing region attribute
KeyConditionExpression: 'tournamentId = :tournament'

// Using inequality on partition key attribute
KeyConditionExpression: 'tournamentId = :tournament AND #region > :region'
```

**Performance:** Multi-attribute partition keys are hashed together, providing the same O(1) lookup performance as single-attribute keys.

### Step 4: Query Global Secondary Index sort keys left-to-right
<a name="GSI.DesignPattern.MultiAttributeKeys.QuerySortKeysLeftToRight"></a>

Sort key attributes must be queried left-to-right in the order they're defined in the Global Secondary Index. This example demonstrates querying the TournamentRegionIndex at different hierarchy levels: filtering by just `round`, by `round` + `bracket`, or by all three sort key attributes. You cannot skip attributes in the middle—for example, you cannot query by `round` and `matchId` while skipping `bracket`.

#### Code example
<a name="w2aac19c13c45c23b9c11b9b5b1"></a>

```
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, QueryCommand } from "@aws-sdk/lib-dynamodb";

const client = new DynamoDBClient({ region: 'us-west-2' });
const docClient = DynamoDBDocumentClient.from(client);

// Query 1: Filter by first sort key attribute (round)
console.log("Query 1: All SEMIFINALS matches");
const query1 = await docClient.send(new QueryCommand({
    TableName: 'TournamentMatches',
    IndexName: 'TournamentRegionIndex',
    KeyConditionExpression: 'tournamentId = :tournament AND #region = :region AND round = :round',
    ExpressionAttributeNames: {
        '#region': 'region'  // 'region' is a reserved keyword
    },
    ExpressionAttributeValues: {
        ':tournament': 'WINTER2024',
        ':region': 'NA-EAST',
        ':round': 'SEMIFINALS'
    }
}));
console.log(`  Found ${query1.Items.length} matches\n`);

// Query 2: Filter by first two sort key attributes (round + bracket)
console.log("Query 2: SEMIFINALS UPPER bracket matches");
const query2 = await docClient.send(new QueryCommand({
    TableName: 'TournamentMatches',
    IndexName: 'TournamentRegionIndex',
    KeyConditionExpression: 'tournamentId = :tournament AND #region = :region AND round = :round AND bracket = :bracket',
    ExpressionAttributeNames: {
        '#region': 'region'  // 'region' is a reserved keyword
    },
    ExpressionAttributeValues: {
        ':tournament': 'WINTER2024',
        ':region': 'NA-EAST',
        ':round': 'SEMIFINALS',
        ':bracket': 'UPPER'
    }
}));
console.log(`  Found ${query2.Items.length} matches\n`);

// Query 3: Filter by all three sort key attributes (round + bracket + matchId)
console.log("Query 3: Specific match in SEMIFINALS UPPER bracket");
const query3 = await docClient.send(new QueryCommand({
    TableName: 'TournamentMatches',
    IndexName: 'TournamentRegionIndex',
    KeyConditionExpression: 'tournamentId = :tournament AND #region = :region AND round = :round AND bracket = :bracket AND matchId = :matchId',
    ExpressionAttributeNames: {
        '#region': 'region'  // 'region' is a reserved keyword
    },
    ExpressionAttributeValues: {
        ':tournament': 'WINTER2024',
        ':region': 'NA-EAST',
        ':round': 'SEMIFINALS',
        ':bracket': 'UPPER',
        ':matchId': 'match-002'
    }
}));
console.log(`  Found ${query3.Items.length} matches\n`);

// Query 4: INVALID - skipping round
console.log("Query 4: Attempting to skip first sort key attribute (WILL FAIL)");
try {
    const query4 = await docClient.send(new QueryCommand({
        TableName: 'TournamentMatches',
        IndexName: 'TournamentRegionIndex',
        KeyConditionExpression: 'tournamentId = :tournament AND #region = :region AND bracket = :bracket',
        ExpressionAttributeNames: {
            '#region': 'region'  // 'region' is a reserved keyword
        },
        ExpressionAttributeValues: {
            ':tournament': 'WINTER2024',
            ':region': 'NA-EAST',
            ':bracket': 'UPPER'
        }
    }));
} catch (error) {
    console.log(`  Error: ${error.message}`);
    console.log(`  Cannot skip sort key attributes - must query left-to-right\n`);
}
```

**Expected output:**

```
Query 1: All SEMIFINALS matches
  Found 2 matches

Query 2: SEMIFINALS UPPER bracket matches
  Found 2 matches

Query 3: Specific match in SEMIFINALS UPPER bracket
  Found 1 matches

Query 4: Attempting to skip first sort key attribute (WILL FAIL)
  Error: Query key condition not supported
  Cannot skip sort key attributes - must query left-to-right
```

**Left-to-right query rules:** You must query sort key attributes in order from left to right, without skipping any.

**Valid patterns:**
+ First attribute only: `round = 'SEMIFINALS'`
+ First two attributes: `round = 'SEMIFINALS' AND bracket = 'UPPER'`
+ All three attributes: `round = 'SEMIFINALS' AND bracket = 'UPPER' AND matchId = 'match-002'`

**Invalid patterns:**
+ Skipping the first attribute: `bracket = 'UPPER'` (skips round)
+ Querying out of order: `matchId = 'match-002' AND round = 'SEMIFINALS'`
+ Leaving gaps: `round = 'SEMIFINALS' AND matchId = 'match-002'` (skips bracket)

**Note**  
**Design tip:** Order sort key attributes from most general to most specific to maximize query flexibility.
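
The left-to-right rule can also be enforced in application code before a request is sent. The following is a minimal sketch of a hypothetical helper (`buildTournamentQuery` is not part of the AWS SDK) that accepts sort key values for the TournamentRegionIndex in their defined order and rejects any value supplied after a gap:

```
// Hypothetical helper: builds QueryCommand input for the
// TournamentRegionIndex while enforcing the left-to-right rule.
function buildTournamentQuery(tournamentId, region, sortValues) {
    const sortKeys = ['round', 'bracket', 'matchId'];  // GSI sort key order
    const clauses = ['tournamentId = :tournament', '#region = :region'];
    const values = { ':tournament': tournamentId, ':region': region };

    // Add equality conditions left-to-right, stopping at the first gap.
    let used = 0;
    for (const key of sortKeys) {
        if (sortValues[key] === undefined) break;
        clauses.push(`${key} = :${key}`);
        values[`:${key}`] = sortValues[key];
        used++;
    }
    // Any value supplied after the gap would produce an invalid query.
    for (const key of sortKeys.slice(used)) {
        if (sortValues[key] !== undefined) {
            throw new Error(`Cannot use ${key} without the sort key attributes before it`);
        }
    }
    return {
        TableName: 'TournamentMatches',
        IndexName: 'TournamentRegionIndex',
        KeyConditionExpression: clauses.join(' AND '),
        ExpressionAttributeNames: { '#region': 'region' },  // reserved keyword
        ExpressionAttributeValues: values
    };
}
```

The returned object can be passed directly to `QueryCommand`; passing `{ round: 'SEMIFINALS', matchId: 'match-002' }` throws locally instead of issuing a query that DynamoDB would reject.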

### Step 5: Use inequality conditions on Global Secondary Index sort keys
<a name="GSI.DesignPattern.MultiAttributeKeys.InequalityConditions"></a>

Inequality conditions must be the last condition in your query. This example demonstrates using comparison operators (`>=`, `BETWEEN`) and prefix matching (`begins_with()`) on sort key attributes. Once you use an inequality operator, you cannot add any additional sort key conditions after it—the inequality must be the final condition in your key condition expression.

#### Code example
<a name="w2aac19c13c45c23b9c11c11b5b1"></a>

```
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, QueryCommand } from "@aws-sdk/lib-dynamodb";

const client = new DynamoDBClient({ region: 'us-west-2' });
const docClient = DynamoDBDocumentClient.from(client);

// Query 1: Round comparison (inequality on first sort key attribute)
console.log("Query 1: Matches from QUARTERFINALS onwards");
const query1 = await docClient.send(new QueryCommand({
    TableName: 'TournamentMatches',
    IndexName: 'TournamentRegionIndex',
    KeyConditionExpression: 'tournamentId = :tournament AND #region = :region AND round >= :round',
    ExpressionAttributeNames: {
        '#region': 'region'  // 'region' is a reserved keyword
    },
    ExpressionAttributeValues: {
        ':tournament': 'WINTER2024',
        ':region': 'NA-EAST',
        ':round': 'QUARTERFINALS'
    }
}));
console.log(`  Found ${query1.Items.length} matches\n`);

// Query 2: Round range with BETWEEN
console.log("Query 2: Matches between QUARTERFINALS and SEMIFINALS");
const query2 = await docClient.send(new QueryCommand({
    TableName: 'TournamentMatches',
    IndexName: 'TournamentRegionIndex',
    KeyConditionExpression: 'tournamentId = :tournament AND #region = :region AND round BETWEEN :start AND :end',
    ExpressionAttributeNames: {
        '#region': 'region'  // 'region' is a reserved keyword
    },
    ExpressionAttributeValues: {
        ':tournament': 'WINTER2024',
        ':region': 'NA-EAST',
        ':start': 'QUARTERFINALS',
        ':end': 'SEMIFINALS'
    }
}));
console.log(`  Found ${query2.Items.length} matches\n`);

// Query 3: Prefix matching with begins_with (treated as inequality)
console.log("Query 3: Matches in brackets starting with 'U'");
const query3 = await docClient.send(new QueryCommand({
    TableName: 'TournamentMatches',
    IndexName: 'TournamentRegionIndex',
    KeyConditionExpression: 'tournamentId = :tournament AND #region = :region AND round = :round AND begins_with(bracket, :prefix)',
    ExpressionAttributeNames: {
        '#region': 'region'  // 'region' is a reserved keyword
    },
    ExpressionAttributeValues: {
        ':tournament': 'WINTER2024',
        ':region': 'NA-EAST',
        ':round': 'SEMIFINALS',
        ':prefix': 'U'
    }
}));
console.log(`  Found ${query3.Items.length} matches\n`);

// Query 4: INVALID - condition after inequality
console.log("Query 4: Attempting condition after inequality (WILL FAIL)");
try {
    const query4 = await docClient.send(new QueryCommand({
        TableName: 'TournamentMatches',
        IndexName: 'TournamentRegionIndex',
        KeyConditionExpression: 'tournamentId = :tournament AND #region = :region AND round > :round AND bracket = :bracket',
        ExpressionAttributeNames: {
            '#region': 'region'  // 'region' is a reserved keyword
        },
        ExpressionAttributeValues: {
            ':tournament': 'WINTER2024',
            ':region': 'NA-EAST',
            ':round': 'QUARTERFINALS',
            ':bracket': 'UPPER'
        }
    }));
} catch (error) {
    console.log(`  Error: ${error.message}`);
    console.log(`  Cannot add conditions after inequality - it must be last\n`);
}
```

**Inequality operator rules:** You can use comparison operators (`>`, `>=`, `<`, `<=`), `BETWEEN` for range queries, and `begins_with()` for prefix matching. The inequality must be the last condition in your query.

**Valid patterns:**
+ Equality conditions followed by inequality: `round = 'SEMIFINALS' AND bracket = 'UPPER' AND matchId > 'match-001'`
+ Inequality on first attribute: `round BETWEEN 'QUARTERFINALS' AND 'SEMIFINALS'`
+ Prefix matching as final condition: `round = 'SEMIFINALS' AND begins_with(bracket, 'U')`

**Invalid patterns:**
+ Adding conditions after an inequality: `round > 'QUARTERFINALS' AND bracket = 'UPPER'`
+ Using multiple inequalities: `round > 'QUARTERFINALS' AND bracket > 'L'`

**Important**  
`begins_with()` is treated as an inequality condition, so no additional sort key conditions can follow it.
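
The inequality-last rule can likewise be checked while building the expression. The sketch below shows a hypothetical builder (not an SDK API) that appends equality conditions first and at most one range condition last, so an invalid expression such as `round > :v AND bracket = :b` can never be produced:

```
// Hypothetical builder: equality conditions first, at most one
// range condition (>, >=, <, <=, BETWEEN, begins_with) appended last.
function buildKeyExpression(equalities, range) {
    const clauses = [];
    const values = {};
    equalities.forEach(([attr, value], i) => {
        clauses.push(`${attr} = :eq${i}`);
        values[`:eq${i}`] = value;
    });
    if (range) {
        const { attr, op, operands } = range;
        if (op === 'BETWEEN') {
            clauses.push(`${attr} BETWEEN :lo AND :hi`);
            [values[':lo'], values[':hi']] = operands;
        } else if (op === 'begins_with') {
            clauses.push(`begins_with(${attr}, :prefix)`);
            values[':prefix'] = operands[0];
        } else {
            clauses.push(`${attr} ${op} :bound`);
            values[':bound'] = operands[0];
        }
    }
    return {
        KeyConditionExpression: clauses.join(' AND '),
        ExpressionAttributeValues: values
    };
}
```

Merge the result with `TableName`, `IndexName`, and any `ExpressionAttributeNames` aliases for reserved words (such as `region`) before sending the query.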

### Step 6: Query PlayerMatchHistoryIndex Global Secondary Index with multi-attribute sort key
<a name="GSI.DesignPattern.MultiAttributeKeys.QueryPlayerHistory"></a>

This example queries the PlayerMatchHistoryIndex, which has a single partition key (`player1Id`) and a multi-attribute sort key (`matchDate` and `round`). This enables cross-tournament analysis by querying all matches for a specific player without knowing tournament IDs, whereas the base table would require separate queries per tournament-region combination.

#### Code example
<a name="w2aac19c13c45c23b9c11c13b5b1"></a>

```
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, QueryCommand } from "@aws-sdk/lib-dynamodb";

const client = new DynamoDBClient({ region: 'us-west-2' });
const docClient = DynamoDBDocumentClient.from(client);

// Query 1: All matches for Player 101 across all tournaments
console.log("Query 1: All matches for Player 101");
const query1 = await docClient.send(new QueryCommand({
    TableName: 'TournamentMatches',
    IndexName: 'PlayerMatchHistoryIndex',
    KeyConditionExpression: 'player1Id = :player',
    ExpressionAttributeValues: {
        ':player': '101'
    }
}));

console.log(`  Found ${query1.Items.length} matches for Player 101:`);
query1.Items.forEach(match => {
    console.log(`    ${match.tournamentId}/${match.region} - ${match.matchDate} - ${match.round}`);
});
console.log();

// Query 2: Player 101 matches on specific date
console.log("Query 2: Player 101 matches on 2024-01-18");
const query2 = await docClient.send(new QueryCommand({
    TableName: 'TournamentMatches',
    IndexName: 'PlayerMatchHistoryIndex',
    KeyConditionExpression: 'player1Id = :player AND matchDate = :date',
    ExpressionAttributeValues: {
        ':player': '101',
        ':date': '2024-01-18'
    }
}));

console.log(`  Found ${query2.Items.length} matches\n`);

// Query 3: Player 101 SEMIFINALS matches on specific date
console.log("Query 3: Player 101 SEMIFINALS matches on 2024-01-18");
const query3 = await docClient.send(new QueryCommand({
    TableName: 'TournamentMatches',
    IndexName: 'PlayerMatchHistoryIndex',
    KeyConditionExpression: 'player1Id = :player AND matchDate = :date AND round = :round',
    ExpressionAttributeValues: {
        ':player': '101',
        ':date': '2024-01-18',
        ':round': 'SEMIFINALS'
    }
}));

console.log(`  Found ${query3.Items.length} matches\n`);

// Query 4: Player 101 matches in date range
console.log("Query 4: Player 101 matches in January 2024");
const query4 = await docClient.send(new QueryCommand({
    TableName: 'TournamentMatches',
    IndexName: 'PlayerMatchHistoryIndex',
    KeyConditionExpression: 'player1Id = :player AND matchDate BETWEEN :start AND :end',
    ExpressionAttributeValues: {
        ':player': '101',
        ':start': '2024-01-01',
        ':end': '2024-01-31'
    }
}));

console.log(`  Found ${query4.Items.length} matches\n`);
```

## Pattern variations
<a name="GSI.DesignPattern.MultiAttributeKeys.PatternVariations"></a>

### Time-series data with multi-attribute keys
<a name="GSI.DesignPattern.MultiAttributeKeys.TimeSeries"></a>

Optimize for time-series queries with hierarchical time attributes

#### Code example
<a name="w2aac19c13c45c23b9c13b3b5b1"></a>

```
{
    TableName: 'IoTReadings',
    // Base table: Simple partition key
    KeySchema: [
        { AttributeName: 'readingId', KeyType: 'HASH' }
    ],
    AttributeDefinitions: [
        { AttributeName: 'readingId', AttributeType: 'S' },
        { AttributeName: 'deviceId', AttributeType: 'S' },
        { AttributeName: 'locationId', AttributeType: 'S' },
        { AttributeName: 'year', AttributeType: 'S' },
        { AttributeName: 'month', AttributeType: 'S' },
        { AttributeName: 'day', AttributeType: 'S' },
        { AttributeName: 'timestamp', AttributeType: 'S' }
    ],
    // GSI with multi-attribute keys for time-series queries
    GlobalSecondaryIndexes: [{
        IndexName: 'DeviceLocationTimeIndex',
        KeySchema: [
            { AttributeName: 'deviceId', KeyType: 'HASH' },
            { AttributeName: 'locationId', KeyType: 'HASH' },
            { AttributeName: 'year', KeyType: 'RANGE' },
            { AttributeName: 'month', KeyType: 'RANGE' },
            { AttributeName: 'day', KeyType: 'RANGE' },
            { AttributeName: 'timestamp', KeyType: 'RANGE' }
        ],
        Projection: { ProjectionType: 'ALL' }
    }],
    BillingMode: 'PAY_PER_REQUEST'
}

// Query patterns enabled via GSI:
// - All readings for device in location
// - Readings for specific year
// - Readings for specific month in year
// - Readings for specific day
// - Readings in time range
```

**Benefits:** The natural time hierarchy (year → month → day → timestamp) enables efficient queries at any time granularity without date parsing or manipulation. The Global Secondary Index automatically indexes all readings using their natural time attributes.
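
As a sketch of querying the time hierarchy (the device and location values are placeholders, and the IoTReadings table above is assumed to exist), the following builds the request for one device's readings in a single month. Because `year`, `month`, and `day` are DynamoDB reserved words, the expression aliases them through `ExpressionAttributeNames`:

```
// Query input: readings for one device/location in March 2024.
// 'year' and 'month' are reserved words, so they are aliased.
const monthlyReadingsParams = {
    TableName: 'IoTReadings',
    IndexName: 'DeviceLocationTimeIndex',
    KeyConditionExpression:
        'deviceId = :device AND locationId = :location AND #year = :year AND #month = :month',
    ExpressionAttributeNames: { '#year': 'year', '#month': 'month' },
    ExpressionAttributeValues: {
        ':device': 'sensor-042',      // placeholder device ID
        ':location': 'warehouse-7',   // placeholder location ID
        ':year': '2024',
        ':month': '03'
    }
};

// const readings = await docClient.send(new QueryCommand(monthlyReadingsParams));
```

Dropping the `#month` clause widens the query to the whole year; adding `#day` and `timestamp` clauses narrows it further, following the same left-to-right rule.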

### E-commerce orders with multi-attribute keys
<a name="GSI.DesignPattern.MultiAttributeKeys.ECommerce"></a>

Track orders with multiple dimensions

#### Code example
<a name="w2aac19c13c45c23b9c13b5b5b1"></a>

```
{
    TableName: 'Orders',
    // Base table: Simple partition key
    KeySchema: [
        { AttributeName: 'orderId', KeyType: 'HASH' }
    ],
    AttributeDefinitions: [
        { AttributeName: 'orderId', AttributeType: 'S' },
        { AttributeName: 'sellerId', AttributeType: 'S' },
        { AttributeName: 'region', AttributeType: 'S' },
        { AttributeName: 'orderDate', AttributeType: 'S' },
        { AttributeName: 'category', AttributeType: 'S' },
        { AttributeName: 'customerId', AttributeType: 'S' },
        { AttributeName: 'orderStatus', AttributeType: 'S' }
    ],
    GlobalSecondaryIndexes: [
        {
            IndexName: 'SellerRegionIndex',
            KeySchema: [
                { AttributeName: 'sellerId', KeyType: 'HASH' },
                { AttributeName: 'region', KeyType: 'HASH' },
                { AttributeName: 'orderDate', KeyType: 'RANGE' },
                { AttributeName: 'category', KeyType: 'RANGE' },
                { AttributeName: 'orderId', KeyType: 'RANGE' }
            ],
            Projection: { ProjectionType: 'ALL' }
        },
        {
            IndexName: 'CustomerOrdersIndex',
            KeySchema: [
                { AttributeName: 'customerId', KeyType: 'HASH' },
                { AttributeName: 'orderDate', KeyType: 'RANGE' },
                { AttributeName: 'orderStatus', KeyType: 'RANGE' }
            ],
            Projection: { ProjectionType: 'ALL' }
        }
    ],
    BillingMode: 'PAY_PER_REQUEST'
}

// SellerRegionIndex GSI queries:
// - Orders by seller and region
// - Orders by seller, region, and date
// - Orders by seller, region, date, and category

// CustomerOrdersIndex GSI queries:
// - Customer's orders
// - Customer's orders by date
// - Customer's orders by date and status
```
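
To illustrate (the seller, region, and date values are placeholders), the following query input combines the SellerRegionIndex equality conditions with a `BETWEEN` on `orderDate` as the final range condition. `region` is a reserved word, so it's aliased:

```
// Query input: one seller's NA orders placed in Q1 2024.
// The BETWEEN on orderDate is the last (range) condition, so no
// category or orderId clause may follow it.
const sellerOrdersParams = {
    TableName: 'Orders',
    IndexName: 'SellerRegionIndex',
    KeyConditionExpression:
        'sellerId = :seller AND #region = :region AND orderDate BETWEEN :start AND :end',
    ExpressionAttributeNames: { '#region': 'region' },  // reserved keyword
    ExpressionAttributeValues: {
        ':seller': 'S-100',        // placeholder seller ID
        ':region': 'NA',
        ':start': '2024-01-01',
        ':end': '2024-03-31'
    }
};

// const orders = await docClient.send(new QueryCommand(sellerOrdersParams));
```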

### Hierarchical organization data
<a name="GSI.DesignPattern.MultiAttributeKeys.Hierarchical"></a>

Model organizational hierarchies

#### Code example
<a name="w2aac19c13c45c23b9c13b7b5b1"></a>

```
{
    TableName: 'Employees',
    // Base table: Simple partition key
    KeySchema: [
        { AttributeName: 'employeeId', KeyType: 'HASH' }
    ],
    AttributeDefinitions: [
        { AttributeName: 'employeeId', AttributeType: 'S' },
        { AttributeName: 'companyId', AttributeType: 'S' },
        { AttributeName: 'divisionId', AttributeType: 'S' },
        { AttributeName: 'departmentId', AttributeType: 'S' },
        { AttributeName: 'teamId', AttributeType: 'S' },
        { AttributeName: 'skillCategory', AttributeType: 'S' },
        { AttributeName: 'skillLevel', AttributeType: 'S' },
        { AttributeName: 'yearsExperience', AttributeType: 'N' }
    ],
    GlobalSecondaryIndexes: [
        {
            IndexName: 'OrganizationIndex',
            KeySchema: [
                { AttributeName: 'companyId', KeyType: 'HASH' },
                { AttributeName: 'divisionId', KeyType: 'HASH' },
                { AttributeName: 'departmentId', KeyType: 'RANGE' },
                { AttributeName: 'teamId', KeyType: 'RANGE' },
                { AttributeName: 'employeeId', KeyType: 'RANGE' }
            ],
            Projection: { ProjectionType: 'ALL' }
        },
        {
            IndexName: 'SkillsIndex',
            KeySchema: [
                { AttributeName: 'skillCategory', KeyType: 'HASH' },
                { AttributeName: 'skillLevel', KeyType: 'RANGE' },
                { AttributeName: 'yearsExperience', KeyType: 'RANGE' }
            ],
            Projection: { ProjectionType: 'INCLUDE', NonKeyAttributes: ['employeeId', 'name'] }
        }
    ],
    BillingMode: 'PAY_PER_REQUEST'
}

// OrganizationIndex GSI query patterns:
// - All employees in company/division
// - Employees in specific department
// - Employees in specific team

// SkillsIndex GSI query patterns:
// - Employees by skill and experience level
```
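
As a sketch (the company, division, and department IDs are placeholders), querying the OrganizationIndex for one department looks like this:

```
// Query input: all employees in one department of a division.
const departmentParams = {
    TableName: 'Employees',
    IndexName: 'OrganizationIndex',
    KeyConditionExpression:
        'companyId = :company AND divisionId = :division AND departmentId = :department',
    ExpressionAttributeValues: {
        ':company': 'ACME',          // placeholder IDs
        ':division': 'ENGINEERING',
        ':department': 'PLATFORM'
    }
};

// const employees = await docClient.send(new QueryCommand(departmentParams));
```

Dropping the `departmentId` clause returns the whole division; adding a `teamId` clause narrows the result to one team.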

### Sparse multi-attribute keys
<a name="GSI.DesignPattern.MultiAttributeKeys.Sparse"></a>

Combine multi-attribute keys to make a sparse GSI

#### Code example
<a name="w2aac19c13c45c23b9c13b9b5b1"></a>

```
{
    TableName: 'Products',
    // Base table: Simple partition key
    KeySchema: [
        { AttributeName: 'productId', KeyType: 'HASH' }
    ],
    AttributeDefinitions: [
        { AttributeName: 'productId', AttributeType: 'S' },
        { AttributeName: 'categoryId', AttributeType: 'S' },
        { AttributeName: 'subcategoryId', AttributeType: 'S' },
        { AttributeName: 'averageRating', AttributeType: 'N' },
        { AttributeName: 'reviewCount', AttributeType: 'N' }
    ],
    GlobalSecondaryIndexes: [
        {
            IndexName: 'CategoryIndex',
            KeySchema: [
                { AttributeName: 'categoryId', KeyType: 'HASH' },
                { AttributeName: 'subcategoryId', KeyType: 'HASH' },
                { AttributeName: 'productId', KeyType: 'RANGE' }
            ],
            Projection: { ProjectionType: 'ALL' }
        },
        {
            IndexName: 'ReviewedProductsIndex',
            KeySchema: [
                { AttributeName: 'categoryId', KeyType: 'HASH' },
                { AttributeName: 'averageRating', KeyType: 'RANGE' },  // Optional attribute
                { AttributeName: 'reviewCount', KeyType: 'RANGE' }     // Optional attribute
            ],
            Projection: { ProjectionType: 'ALL' }
        }
    ],
    BillingMode: 'PAY_PER_REQUEST'
}

// Only products with reviews appear in ReviewedProductsIndex GSI
// Automatic filtering without application logic
// Multi-attribute sort key enables rating and count queries
```
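
A sketch of querying the sparse index (the category and rating threshold are placeholders): because items without `averageRating` never appear in ReviewedProductsIndex, no filter expression is needed to exclude unreviewed products, and the rating comparison is the final range condition:

```
// Query input: reviewed products in one category rated 4 or higher.
// Unreviewed products are absent from the sparse index entirely.
const topRatedParams = {
    TableName: 'Products',
    IndexName: 'ReviewedProductsIndex',
    KeyConditionExpression: 'categoryId = :category AND averageRating >= :minRating',
    ExpressionAttributeValues: {
        ':category': 'ELECTRONICS',  // placeholder category
        ':minRating': 4
    }
};

// const topRated = await docClient.send(new QueryCommand(topRatedParams));
```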

### SaaS multi-tenancy
<a name="GSI.DesignPattern.MultiAttributeKeys.SaaS"></a>

Multi-tenant SaaS platform with customer isolation

#### Code example
<a name="w2aac19c13c45c23b9c13c11b5b1"></a>

```
// Table design
{
    TableName: 'SaasData',
    // Base table: Simple partition key
    KeySchema: [
        { AttributeName: 'resourceId', KeyType: 'HASH' }
    ],
    AttributeDefinitions: [
        { AttributeName: 'resourceId', AttributeType: 'S' },
        { AttributeName: 'tenantId', AttributeType: 'S' },
        { AttributeName: 'customerId', AttributeType: 'S' },
        { AttributeName: 'resourceType', AttributeType: 'S' }
    ],
    // GSI with multi-attribute keys for tenant-customer isolation
    GlobalSecondaryIndexes: [{
        IndexName: 'TenantCustomerIndex',
        KeySchema: [
            { AttributeName: 'tenantId', KeyType: 'HASH' },
            { AttributeName: 'customerId', KeyType: 'HASH' },
            { AttributeName: 'resourceType', KeyType: 'RANGE' },
            { AttributeName: 'resourceId', KeyType: 'RANGE' }
        ],
        Projection: { ProjectionType: 'ALL' }
    }],
    BillingMode: 'PAY_PER_REQUEST'
}

// Query GSI: All resources for tenant T001, customer C001
const resources = await docClient.send(new QueryCommand({
    TableName: 'SaasData',
    IndexName: 'TenantCustomerIndex',
    KeyConditionExpression: 'tenantId = :tenant AND customerId = :customer',
    ExpressionAttributeValues: {
        ':tenant': 'T001',
        ':customer': 'C001'
    }
}));

// Query GSI: Specific resource type for tenant/customer
const documents = await docClient.send(new QueryCommand({
    TableName: 'SaasData',
    IndexName: 'TenantCustomerIndex',
    KeyConditionExpression: 'tenantId = :tenant AND customerId = :customer AND resourceType = :type',
    ExpressionAttributeValues: {
        ':tenant': 'T001',
        ':customer': 'C001',
        ':type': 'document'
    }
}));
```

**Benefits:** Efficient queries within tenant-customer context and natural data organization.

### Financial transactions
<a name="GSI.DesignPattern.MultiAttributeKeys.Financial"></a>

Banking system tracking account transactions using GSIs

#### Code example
<a name="w2aac19c13c45c23b9c13c13b5b1"></a>

```
// Table design
{
    TableName: 'BankTransactions',
    // Base table: Simple partition key
    KeySchema: [
        { AttributeName: 'transactionId', KeyType: 'HASH' }
    ],
    AttributeDefinitions: [
        { AttributeName: 'transactionId', AttributeType: 'S' },
        { AttributeName: 'accountId', AttributeType: 'S' },
        { AttributeName: 'year', AttributeType: 'S' },
        { AttributeName: 'month', AttributeType: 'S' },
        { AttributeName: 'day', AttributeType: 'S' },
        { AttributeName: 'transactionType', AttributeType: 'S' }
    ],
    GlobalSecondaryIndexes: [
        {
            IndexName: 'AccountTimeIndex',
            KeySchema: [
                { AttributeName: 'accountId', KeyType: 'HASH' },
                { AttributeName: 'year', KeyType: 'RANGE' },
                { AttributeName: 'month', KeyType: 'RANGE' },
                { AttributeName: 'day', KeyType: 'RANGE' },
                { AttributeName: 'transactionId', KeyType: 'RANGE' }
            ],
            Projection: { ProjectionType: 'ALL' }
        },
        {
            IndexName: 'TransactionTypeIndex',
            KeySchema: [
                { AttributeName: 'accountId', KeyType: 'HASH' },
                { AttributeName: 'transactionType', KeyType: 'RANGE' },
                { AttributeName: 'year', KeyType: 'RANGE' },
                { AttributeName: 'month', KeyType: 'RANGE' }
            ],
            Projection: { ProjectionType: 'ALL' }
        }
    ],
    BillingMode: 'PAY_PER_REQUEST'
}

// Query AccountTimeIndex GSI: All transactions for account in 2023
const yearTransactions = await docClient.send(new QueryCommand({
    TableName: 'BankTransactions',
    IndexName: 'AccountTimeIndex',
    KeyConditionExpression: 'accountId = :account AND #year = :year',
    ExpressionAttributeNames: { '#year': 'year' },
    ExpressionAttributeValues: {
        ':account': 'ACC-12345',
        ':year': '2023'
    }
}));

// Query AccountTimeIndex GSI: Transactions in specific month
const monthTransactions = await docClient.send(new QueryCommand({
    TableName: 'BankTransactions',
    IndexName: 'AccountTimeIndex',
    KeyConditionExpression: 'accountId = :account AND #year = :year AND #month = :month',
    ExpressionAttributeNames: { '#year': 'year', '#month': 'month' },
    ExpressionAttributeValues: {
        ':account': 'ACC-12345',
        ':year': '2023',
        ':month': '11'
    }
}));

// Query TransactionTypeIndex GSI: Deposits in 2023
const deposits = await docClient.send(new QueryCommand({
    TableName: 'BankTransactions',
    IndexName: 'TransactionTypeIndex',
    KeyConditionExpression: 'accountId = :account AND transactionType = :type AND #year = :year',
    ExpressionAttributeNames: { '#year': 'year' },
    ExpressionAttributeValues: {
        ':account': 'ACC-12345',
        ':type': 'deposit',
        ':year': '2023'
    }
}));
```

## Complete example
<a name="GSI.DesignPattern.MultiAttributeKeys.CompleteExample"></a>

The following example demonstrates multi-attribute keys from setup to cleanup:

### Code example
<a name="w2aac19c13c45c23b9c15b5b1"></a>

```
import { 
    DynamoDBClient, 
    CreateTableCommand, 
    DeleteTableCommand, 
    waitUntilTableExists 
} from "@aws-sdk/client-dynamodb";
import { 
    DynamoDBDocumentClient, 
    PutCommand, 
    QueryCommand 
} from "@aws-sdk/lib-dynamodb";

const client = new DynamoDBClient({ region: 'us-west-2' });
const docClient = DynamoDBDocumentClient.from(client);

async function multiAttributeKeysDemo() {
    console.log("Starting Multi-Attribute GSI Keys Demo\n");
    
    // Step 1: Create table with GSIs using multi-attribute keys
    console.log("1. Creating table with multi-attribute GSI keys...");
    await client.send(new CreateTableCommand({
        TableName: 'TournamentMatches',
        KeySchema: [
            { AttributeName: 'matchId', KeyType: 'HASH' }
        ],
        AttributeDefinitions: [
            { AttributeName: 'matchId', AttributeType: 'S' },
            { AttributeName: 'tournamentId', AttributeType: 'S' },
            { AttributeName: 'region', AttributeType: 'S' },
            { AttributeName: 'round', AttributeType: 'S' },
            { AttributeName: 'bracket', AttributeType: 'S' },
            { AttributeName: 'player1Id', AttributeType: 'S' },
            { AttributeName: 'matchDate', AttributeType: 'S' }
        ],
        GlobalSecondaryIndexes: [
            {
                IndexName: 'TournamentRegionIndex',
                KeySchema: [
                    { AttributeName: 'tournamentId', KeyType: 'HASH' },
                    { AttributeName: 'region', KeyType: 'HASH' },
                    { AttributeName: 'round', KeyType: 'RANGE' },
                    { AttributeName: 'bracket', KeyType: 'RANGE' },
                    { AttributeName: 'matchId', KeyType: 'RANGE' }
                ],
                Projection: { ProjectionType: 'ALL' }
            },
            {
                IndexName: 'PlayerMatchHistoryIndex',
                KeySchema: [
                    { AttributeName: 'player1Id', KeyType: 'HASH' },
                    { AttributeName: 'matchDate', KeyType: 'RANGE' },
                    { AttributeName: 'round', KeyType: 'RANGE' }
                ],
                Projection: { ProjectionType: 'ALL' }
            }
        ],
        BillingMode: 'PAY_PER_REQUEST'
    }));
    
    await waitUntilTableExists({ client, maxWaitTime: 120 }, { TableName: 'TournamentMatches' });
    console.log("Table created\n");
    
    // Step 2: Insert tournament matches
    console.log("2. Inserting tournament matches...");
    const matches = [
        { matchId: 'match-001', tournamentId: 'WINTER2024', region: 'NA-EAST', round: 'FINALS', bracket: 'CHAMPIONSHIP', player1Id: '101', player2Id: '103', matchDate: '2024-01-20', winner: '101', score: '3-1' },
        { matchId: 'match-002', tournamentId: 'WINTER2024', region: 'NA-EAST', round: 'SEMIFINALS', bracket: 'UPPER', player1Id: '101', player2Id: '105', matchDate: '2024-01-18', winner: '101', score: '3-2' },
        { matchId: 'match-003', tournamentId: 'WINTER2024', region: 'NA-WEST', round: 'FINALS', bracket: 'CHAMPIONSHIP', player1Id: '102', player2Id: '104', matchDate: '2024-01-20', winner: '102', score: '3-2' },
        { matchId: 'match-004', tournamentId: 'SPRING2024', region: 'NA-EAST', round: 'QUARTERFINALS', bracket: 'UPPER', player1Id: '101', player2Id: '108', matchDate: '2024-03-15', winner: '101', score: '3-0' }
    ];
    
    for (const match of matches) {
        await docClient.send(new PutCommand({ TableName: 'TournamentMatches', Item: match }));
    }
    console.log(`Inserted ${matches.length} tournament matches\n`);
    
    // Step 3: Query GSI with multi-attribute partition key
    console.log("3. Query TournamentRegionIndex GSI: WINTER2024/NA-EAST matches");
    const gsiQuery1 = await docClient.send(new QueryCommand({
        TableName: 'TournamentMatches',
        IndexName: 'TournamentRegionIndex',
        KeyConditionExpression: 'tournamentId = :tournament AND #region = :region',
        ExpressionAttributeNames: { '#region': 'region' },
        ExpressionAttributeValues: { ':tournament': 'WINTER2024', ':region': 'NA-EAST' }
    }));
    
    console.log(`  Found ${gsiQuery1.Items.length} matches:`);
    gsiQuery1.Items.forEach(match => {
        console.log(`    ${match.round} - ${match.bracket} - ${match.winner} won`);
    });
    
    // Step 4: Query GSI with multi-attribute sort key
    console.log("\n4. Query PlayerMatchHistoryIndex GSI: All matches for Player 101");
    const gsiQuery2 = await docClient.send(new QueryCommand({
        TableName: 'TournamentMatches',
        IndexName: 'PlayerMatchHistoryIndex',
        KeyConditionExpression: 'player1Id = :player',
        ExpressionAttributeValues: { ':player': '101' }
    }));
    
    console.log(`  Found ${gsiQuery2.Items.length} matches for Player 101:`);
    gsiQuery2.Items.forEach(match => {
        console.log(`    ${match.tournamentId}/${match.region} - ${match.matchDate} - ${match.round}`);
    });
    
    console.log("\nDemo complete");
    console.log("No synthetic keys needed - GSIs use native attributes automatically");
}

async function cleanup() {
    console.log("Deleting table...");
    await client.send(new DeleteTableCommand({ TableName: 'TournamentMatches' }));
    console.log("Table deleted");
}

// Run demo
multiAttributeKeysDemo().catch(console.error);

// Uncomment to cleanup:
// cleanup().catch(console.error);
```

**Minimal code scaffold**

### Code example
<a name="w2aac19c13c45c23b9c15b9b1"></a>

```
// 1. Create table with GSI using multi-attribute keys
await client.send(new CreateTableCommand({
    TableName: 'MyTable',
    KeySchema: [
        { AttributeName: 'id', KeyType: 'HASH' }        // Simple base table PK
    ],
    AttributeDefinitions: [
        { AttributeName: 'id', AttributeType: 'S' },
        { AttributeName: 'attr1', AttributeType: 'S' },
        { AttributeName: 'attr2', AttributeType: 'S' },
        { AttributeName: 'attr3', AttributeType: 'S' },
        { AttributeName: 'attr4', AttributeType: 'S' }
    ],
    GlobalSecondaryIndexes: [{
        IndexName: 'MyGSI',
        KeySchema: [
            { AttributeName: 'attr1', KeyType: 'HASH' },    // GSI PK attribute 1
            { AttributeName: 'attr2', KeyType: 'HASH' },    // GSI PK attribute 2
            { AttributeName: 'attr3', KeyType: 'RANGE' },   // GSI SK attribute 1
            { AttributeName: 'attr4', KeyType: 'RANGE' }    // GSI SK attribute 2
        ],
        Projection: { ProjectionType: 'ALL' }
    }],
    BillingMode: 'PAY_PER_REQUEST'
}));

// 2. Insert items with native attributes (no concatenation needed for GSI)
await docClient.send(new PutCommand({
    TableName: 'MyTable',
    Item: {
        id: 'item-001',
        attr1: 'value1',
        attr2: 'value2',
        attr3: 'value3',
        attr4: 'value4',
        // ... other attributes
    }
}));

// 3. Query GSI with all partition key attributes
await docClient.send(new QueryCommand({
    TableName: 'MyTable',
    IndexName: 'MyGSI',
    KeyConditionExpression: 'attr1 = :v1 AND attr2 = :v2',
    ExpressionAttributeValues: {
        ':v1': 'value1',
        ':v2': 'value2'
    }
}));

// 4. Query GSI with sort key attributes (left-to-right)
await docClient.send(new QueryCommand({
    TableName: 'MyTable',
    IndexName: 'MyGSI',
    KeyConditionExpression: 'attr1 = :v1 AND attr2 = :v2 AND attr3 = :v3',
    ExpressionAttributeValues: {
        ':v1': 'value1',
        ':v2': 'value2',
        ':v3': 'value3'
    }
}));

// Note: If any attribute name is a DynamoDB reserved keyword, use ExpressionAttributeNames:
// KeyConditionExpression: 'attr1 = :v1 AND #attr2 = :v2'
// ExpressionAttributeNames: { '#attr2': 'attr2' }
```

## Additional resources
<a name="GSI.DesignPattern.MultiAttributeKeys.AdditionalResources"></a>
+ [DynamoDB Best Practices](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/best-practices.html)
+ [Working with Tables and Data](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithTables.html)
+ [Global Secondary Indexes](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html)
+ [Query and Scan Operations](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Query.html)

# Managing Global Secondary Indexes in DynamoDB
<a name="GSI.OnlineOps"></a>

This section describes how to create, modify, and delete global secondary indexes in Amazon DynamoDB.

**Topics**
+ [Creating a table with Global Secondary Indexes](#GSI.Creating)
+ [Describing the Global Secondary Indexes on a table](#GSI.Describing)
+ [Adding a Global Secondary Index to an existing table](#GSI.OnlineOps.Creating)
+ [Deleting a Global Secondary Index](#GSI.OnlineOps.Deleting)
+ [Modifying a Global Secondary Index during creation](#GSI.OnlineOps.Creating.Modify)

## Creating a table with Global Secondary Indexes
<a name="GSI.Creating"></a>

To create a table with one or more global secondary indexes, use the `CreateTable` operation with the `GlobalSecondaryIndexes` parameter. For maximum query flexibility, you can create up to 20 global secondary indexes (default quota) per table. 

You must specify one attribute to act as the index partition key. You can optionally specify another attribute for the index sort key. It is not necessary for either of these key attributes to be the same as a key attribute in the table. For example, in the *GameScores* table (see [Using Global Secondary Indexes in DynamoDB](GSI.md)), neither `TopScore` nor `TopScoreDateTime` is a key attribute. You could create a global secondary index with a partition key of `TopScore` and a sort key of `TopScoreDateTime`. You might use such an index to determine whether there is a correlation between high scores and the time of day a game is played.

Each index key attribute must be a scalar of type `String`, `Number`, or `Binary`. (It cannot be a document or a set.) You can project attributes of any data type into a global secondary index. This includes scalars, documents, and sets. For a complete list of data types, see [Data types](HowItWorks.NamingRulesDataTypes.md#HowItWorks.DataTypes).

If using provisioned mode, you must provide `ProvisionedThroughput` settings for the index, consisting of `ReadCapacityUnits` and `WriteCapacityUnits`. These provisioned throughput settings are separate from those of the table, but behave in similar ways. For more information, see [Provisioned throughput considerations for Global Secondary Indexes](GSI.md#GSI.ThroughputConsiderations).

 Global secondary indexes inherit the read/write capacity mode from the base table. For more information, see [Considerations when switching capacity modes in DynamoDB](bp-switching-capacity-modes.md). 
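As an illustration of the request shape described above, the following is a minimal sketch of a `CreateTable` input with one global secondary index, written as a plain JavaScript object in the style of the AWS SDK for JavaScript v3 (this object would be passed to `CreateTableCommand`). The names follow the *GameScores* example; the throughput numbers are arbitrary:

```javascript
// Illustrative CreateTable input with one GSI. Table, index, and
// attribute names follow the GameScores example; capacity values
// are placeholders, not recommendations.
const createTableInput = {
    TableName: 'GameScores',
    AttributeDefinitions: [
        { AttributeName: 'UserId', AttributeType: 'S' },
        { AttributeName: 'GameTitle', AttributeType: 'S' },
        { AttributeName: 'TopScore', AttributeType: 'N' },
        { AttributeName: 'TopScoreDateTime', AttributeType: 'S' },
    ],
    KeySchema: [
        { AttributeName: 'UserId', KeyType: 'HASH' },     // table partition key
        { AttributeName: 'GameTitle', KeyType: 'RANGE' }, // table sort key
    ],
    ProvisionedThroughput: { ReadCapacityUnits: 5, WriteCapacityUnits: 5 },
    GlobalSecondaryIndexes: [
        {
            IndexName: 'TopScoreIndex',
            KeySchema: [
                { AttributeName: 'TopScore', KeyType: 'HASH' },
                { AttributeName: 'TopScoreDateTime', KeyType: 'RANGE' },
            ],
            Projection: { ProjectionType: 'ALL' },
            // The index has its own throughput, separate from the table's.
            ProvisionedThroughput: { ReadCapacityUnits: 10, WriteCapacityUnits: 5 },
        },
    ],
};
```

Note that the index key attributes (`TopScore`, `TopScoreDateTime`) must appear in `AttributeDefinitions` even though they are not table key attributes.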

**Note**  
 When creating a new GSI, check whether your choice of partition key produces an uneven or narrow distribution of data or traffic across the new index's partition key values. If it does, backfill writes and application writes can occur at the same time and throttle writes to the base table. The service takes measures to minimize the potential for this scenario, but it has no insight into the shape of customer data with respect to the index partition key, the chosen projection, or the sparseness of the index primary key.  
If you suspect that your new global secondary index might have narrow or skewed data or traffic distribution across partition key values, consider the following before adding new indexes to operationally important tables.  
It might be safest to add the index at a time when your application is driving the least amount of traffic.
Consider enabling CloudWatch Contributor Insights on your base table and indexes. This will give you valuable insight into your traffic distribution.
 Watch `WriteThrottleEvents`, `ThrottledRequests`, and `OnlineIndexPercentageProgress` CloudWatch metrics throughout the process. Adjust the provisioned write capacity as required to complete the backfill in a reasonable time without any significant throttling effects on your ongoing operations. `OnlineIndexConsumedWriteCapacity` and `OnlineThrottleEvents` are expected to show 0 during index backfill.
Be prepared to cancel the index creation if you experience operational impact due to write throttling.

## Describing the Global Secondary Indexes on a table
<a name="GSI.Describing"></a>

To view the status of all the global secondary indexes on a table, use the `DescribeTable` operation. The `GlobalSecondaryIndexes` portion of the response shows all of the indexes on the table, along with the current status of each (`IndexStatus`).

The `IndexStatus` for a global secondary index will be one of the following:
+ `CREATING` — The index is currently being created, and is not yet available for use.
+ `ACTIVE` — The index is ready for use, and applications can perform `Query` operations on the index.
+ `UPDATING` — The provisioned throughput settings of the index are being changed.
+ `DELETING` — The index is currently being deleted, and can no longer be used.

When DynamoDB has finished building a global secondary index, the index status changes from `CREATING` to `ACTIVE`.
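As a sketch, the statuses above can be read from the `GlobalSecondaryIndexes` portion of a `DescribeTable` response. The response object below is a hand-written stand-in, not output from a live call:

```javascript
// Stand-in for a DescribeTable response (names are hypothetical).
const describeTableResponse = {
    Table: {
        TableName: 'GameScores',
        GlobalSecondaryIndexes: [
            { IndexName: 'TopScoreIndex', IndexStatus: 'ACTIVE' },
            { IndexName: 'NewIndex', IndexStatus: 'CREATING', Backfilling: true },
        ],
    },
};

// Collect each index's name and current IndexStatus.
function indexStatuses(response) {
    return (response.Table.GlobalSecondaryIndexes || []).map(
        (gsi) => ({ name: gsi.IndexName, status: gsi.IndexStatus })
    );
}

console.log(indexStatuses(describeTableResponse));
```

An application waiting for a new index would poll `DescribeTable` until the status reported here changes from `CREATING` to `ACTIVE`.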

## Adding a Global Secondary Index to an existing table
<a name="GSI.OnlineOps.Creating"></a>

To add a global secondary index to an existing table, use the `UpdateTable` operation with the `GlobalSecondaryIndexUpdates` parameter. You must provide the following:
+ An index name. The name must be unique among all the indexes on the table.
+ The key schema of the index. You must specify one attribute for the index partition key; you can optionally specify another attribute for the index sort key. It is not necessary for either of these key attributes to be the same as a key attribute in the table. The data types for each schema attribute must be scalar: `String`, `Number`, or `Binary`.
+ The attributes to be projected from the table into the index:
  + `KEYS_ONLY` — Each item in the index consists only of the table partition key and sort key values, plus the index key values. 
  + `INCLUDE` — In addition to the attributes described in `KEYS_ONLY`, the secondary index includes other non-key attributes that you specify.
  + `ALL` — The index includes all of the attributes from the source table.
+ The provisioned throughput settings for the index, consisting of `ReadCapacityUnits` and `WriteCapacityUnits` (required only if the table uses provisioned capacity mode). These provisioned throughput settings are separate from those of the table.

You can only create one global secondary index per `UpdateTable` operation.
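The requirements above can be sketched as an `UpdateTable` input with a `Create` entry in `GlobalSecondaryIndexUpdates`, shown here as a plain JavaScript object in the style of the AWS SDK for JavaScript v3 (it would be passed to `UpdateTableCommand`; names and capacity values are hypothetical):

```javascript
// Illustrative UpdateTable input that adds one GSI to an existing table.
const updateTableInput = {
    TableName: 'GameScores',
    // Any new index key attributes must be declared here.
    AttributeDefinitions: [
        { AttributeName: 'TopScore', AttributeType: 'N' },
    ],
    GlobalSecondaryIndexUpdates: [
        {
            Create: {
                IndexName: 'TopScoreIndex',
                KeySchema: [
                    { AttributeName: 'TopScore', KeyType: 'HASH' },
                ],
                // INCLUDE projects the key attributes plus the listed
                // non-key attributes into the index.
                Projection: {
                    ProjectionType: 'INCLUDE',
                    NonKeyAttributes: ['Wins', 'Losses'],
                },
                ProvisionedThroughput: { ReadCapacityUnits: 5, WriteCapacityUnits: 5 },
            },
        },
    ],
};
```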

### Phases of index creation
<a name="GSI.OnlineOps.Creating.Phases"></a>

When you add a new global secondary index to an existing table, the table continues to be available while the index is being built. However, the new index is not available for `Query` operations until its status changes from `CREATING` to `ACTIVE`.

**Note**  
Global secondary index creation does not use Application Auto Scaling. Increasing the `MIN` Application Auto Scaling capacity will not decrease the creation time of the global secondary index.

Behind the scenes, DynamoDB builds the index in two phases:

**Resource Allocation**  
DynamoDB allocates the compute and storage resources that are needed for building the index.  
During the resource allocation phase, the `IndexStatus` attribute is `CREATING` and the `Backfilling` attribute is false. Use the `DescribeTable` operation to retrieve the status of a table and all of its secondary indexes.  
While the index is in the resource allocation phase, you can't delete the index or delete its parent table. You also can't modify the provisioned throughput of the index or the table. You cannot add or delete other indexes on the table. However, you can modify the provisioned throughput of these other indexes.

**Backfilling**  
For each item in the table, DynamoDB determines which set of attributes to write to the index based on its projection (`KEYS_ONLY`, `INCLUDE`, or `ALL`). It then writes these attributes to the index. During the backfill phase, DynamoDB tracks the items that are being added, deleted, or updated in the table. The attributes from these items are also added, deleted, or updated in the index as appropriate.  
During the backfilling phase, the `IndexStatus` attribute is set to `CREATING`, and the `Backfilling` attribute is true. Use the `DescribeTable` operation to retrieve the status of a table and all of its secondary indexes.  
While the index is backfilling, you cannot delete its parent table. However, you can still delete the index or modify the provisioned throughput of the table and any of its global secondary indexes.  
During the backfilling phase, some writes of violating items might succeed while others are rejected. After backfilling, all writes to items that violate the new index's key schema are rejected. We recommend that you run the Violation Detector tool after the backfill phase finishes to detect and resolve any key violations that might have occurred. For more information, see [Detecting and correcting index key violations in DynamoDB](GSI.OnlineOps.ViolationDetection.md).

While the resource allocation and backfilling phases are in progress, the index is in the `CREATING` state. During this time, DynamoDB performs read operations on the table. You are not charged for read operations from the base table to populate the global secondary index.

When the index build is complete, its status changes to `ACTIVE`. You can't `Query` or `Scan` the index until it is `ACTIVE`.

**Note**  
In some cases, DynamoDB can't write data from the table to the index because of index key violations. This can occur if:  
The data type of an attribute value does not match the data type of the index key schema.
The size of an attribute exceeds the maximum length for an index key attribute.
An index key attribute has an empty String or empty Binary attribute value.
Index key violations do not interfere with global secondary index creation. However, when the index becomes `ACTIVE`, the violating keys are not present in the index.  
DynamoDB provides a standalone tool for finding and resolving these issues. For more information, see [Detecting and correcting index key violations in DynamoDB](GSI.OnlineOps.ViolationDetection.md).

### Adding a Global Secondary Index to a large table
<a name="GSI.OnlineOps.Creating.LargeTable"></a>

The time required for building a global secondary index depends on several factors, such as the following:
+ The size of the table
+ The number of items in the table that qualify for inclusion in the index
+ The number of attributes projected into the index
+ Write activity on the main table during index builds

If you are adding a global secondary index to a very large table, it might take a long time for the creation process to complete. To monitor progress and determine whether the index has sufficient write capacity, consult the following Amazon CloudWatch metrics:
+ `OnlineIndexPercentageProgress`

For more information about CloudWatch metrics related to DynamoDB, see [DynamoDB metrics](metrics-dimensions.md#dynamodb-metrics).

**Important**  
You might need to allowlist very large tables before creating or updating a global secondary index. Contact AWS Support to allowlist your tables.

While an index is being backfilled, DynamoDB uses internal system capacity to read from the table. This minimizes the impact of the index creation and ensures that your table does not run out of read capacity.

## Deleting a Global Secondary Index
<a name="GSI.OnlineOps.Deleting"></a>

If you no longer need a global secondary index, you can delete it using the `UpdateTable` operation.

You can delete only one global secondary index per `UpdateTable` operation.

While the global secondary index is being deleted, there is no effect on any read or write activity in the parent table. While the deletion is in progress, you can still modify the provisioned throughput on other indexes.
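As a sketch, deleting an index uses the same `GlobalSecondaryIndexUpdates` parameter as index creation, but with a `Delete` action (the table and index names here are hypothetical):

```javascript
// Illustrative UpdateTable input that deletes one GSI.
// With the AWS SDK for JavaScript v3, this object would be
// passed to UpdateTableCommand.
const deleteIndexInput = {
    TableName: 'GameScores',
    GlobalSecondaryIndexUpdates: [
        { Delete: { IndexName: 'TopScoreIndex' } },
    ],
};
```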

**Note**  
When you delete a table using the `DeleteTable` action, all of the global secondary indexes on that table are also deleted.
Your account will not be charged for the delete operation of the global secondary index.

## Modifying a Global Secondary Index during creation
<a name="GSI.OnlineOps.Creating.Modify"></a>

While an index is being built, you can use the `DescribeTable` operation to determine what phase it is in. The description for the index includes a Boolean attribute, `Backfilling`, to indicate whether DynamoDB is currently loading the index with items from the table. If `Backfilling` is true, the resource allocation phase is complete and the index is now backfilling. 
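The phase logic above can be sketched as a small helper that classifies an index entry from a `DescribeTable` response (the input objects are hand-written stand-ins):

```javascript
// Classify an index's build phase from IndexStatus and Backfilling.
// Note: for indexes created as part of CreateTable, Backfilling is
// absent from the response, so it is treated as false here.
function buildPhase(gsi) {
    if (gsi.IndexStatus !== 'CREATING') return gsi.IndexStatus;
    return gsi.Backfilling
        ? 'CREATING (backfilling)'
        : 'CREATING (resource allocation)';
}

console.log(buildPhase({ IndexStatus: 'CREATING', Backfilling: false }));
console.log(buildPhase({ IndexStatus: 'CREATING', Backfilling: true }));
```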

During the backfilling phase, you can delete the index that is being created. During this phase, you can't add or delete other indexes on the table.

**Note**  
For indexes that were created as part of a `CreateTable` operation, the `Backfilling` attribute does not appear in the `DescribeTable` output. For more information, see [Phases of index creation](#GSI.OnlineOps.Creating.Phases).

# Detecting and correcting index key violations in DynamoDB
<a name="GSI.OnlineOps.ViolationDetection"></a>

During the backfill phase of global secondary index creation, Amazon DynamoDB examines each item in the table to determine whether it is eligible for inclusion in the index. Some items might not be eligible because they would cause index key violations. In these cases, the items remain in the table, but the index doesn't have a corresponding entry for that item.

An *index key violation* occurs in the following situations:
+ There is a data type mismatch between an attribute value and the index key schema data type. For example, suppose that one of the items in the `GameScores` table had a `TopScore` value of type `String`. If you added a global secondary index with a partition key of `TopScore`, of type `Number`, the item from the table would violate the index key.
+ An attribute value from the table exceeds the maximum length for an index key attribute. The maximum length of a partition key is 2048 bytes, and the maximum length of a sort key is 1024 bytes. If any of the corresponding attribute values in the table exceed these limits, the item from the table would violate the index key.

**Note**  
If a String or Binary attribute value is set for an attribute that is used as an index key, then the attribute value must have a length greater than zero; otherwise, the item from the table would violate the index key.  
At this time, Violation Detector does not flag this type of index key violation.

If an index key violation occurs, the backfill phase continues without interruption. However, any violating items are not included in the index. After the backfill phase completes, all writes to items that violate the new index's key schema will be rejected.
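The violation rules described above can be sketched as a check on an attribute value in DynamoDB's wire format (e.g. `{ S: 'Hello' }`). This is an illustrative stand-in for the checks the service performs, not the service's actual logic:

```javascript
// Key size limits from the DynamoDB service quotas.
const MAX_PARTITION_KEY_BYTES = 2048;
const MAX_SORT_KEY_BYTES = 1024;

// Returns a violation description, or null if the value is valid
// for use as an index key of the expected type ('S', 'N', or 'B').
function checkIndexKeyViolation(attrValue, expectedType, isPartitionKey) {
    const actualType = Object.keys(attrValue)[0];
    if (actualType !== expectedType) {
        return `Type Violation: expected ${expectedType}, found ${actualType}`;
    }
    const raw = attrValue[actualType];
    if ((actualType === 'S' || actualType === 'B') && raw.length === 0) {
        return 'Empty value violation';
    }
    const maxBytes = isPartitionKey ? MAX_PARTITION_KEY_BYTES : MAX_SORT_KEY_BYTES;
    if (Buffer.byteLength(String(raw), 'utf8') > maxBytes) {
        return 'Size Violation';
    }
    return null;
}

// A String value where the index expects a Number is a type violation.
console.log(checkIndexKeyViolation({ S: 'Hello' }, 'N', true));
```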

To identify and fix attribute values in a table that violate an index key, use the Violation Detector tool. To run Violation Detector, you create a configuration file that specifies the name of a table to be scanned, the names and data types of the global secondary index partition key and sort key, and what actions to take if any index key violations are found. Violation Detector can run in one of two different modes:
+ **Detection mode** — Detect index key violations. Use detection mode to report the items in the table that would cause key violations in a global secondary index. (You can optionally request that these violating table items be deleted immediately when they are found.) The output from detection mode is written to a file, which you can use for further analysis.
+ **Correction mode** — Correct index key violations. In correction mode, Violation Detector reads an input file with the same format as the output file from detection mode. Correction mode reads the records from the input file and, for each record, it either deletes or updates the corresponding items in the table. (Note that if you choose to update the items, you must edit the input file and set appropriate values for these updates.)

## Downloading and running Violation Detector
<a name="GSI.OnlineOps.ViolationDetection.Running"></a>

Violation Detector is available as an executable Java Archive (`.jar` file), and runs on Windows, macOS, or Linux computers. Violation Detector requires Java 1.7 (or later) and Apache Maven.
+ [Download violation detector from GitHub](https://github.com/awslabs/dynamodb-online-index-violation-detector)

Follow the instructions in the `README.md` file to download and install Violation Detector using Maven.

To start Violation Detector, go to the directory where you have built `ViolationDetector.java` and enter the following command.

```
java -jar ViolationDetector.jar [options]
```

The Violation Detector command line accepts the following options:
+ `-h | --help` — Prints a usage summary and options for Violation Detector.
+ `-p | --configFilePath` `value` — The fully qualified name of a Violation Detector configuration file. For more information, see [The Violation Detector configuration file](#GSI.OnlineOps.ViolationDetection.ConfigFile).
+ `-t | --detect` `value` — Detect index key violations in the table, and write them to the Violation Detector output file. If the value of this parameter is set to `keep`, items with key violations are not modified. If the value is set to `delete`, items with key violations are deleted from the table.
+ `-c | --correct` `value` — Read index key violations from an input file, and take corrective actions on the items in the table. If the value of this parameter is set to `update`, items with key violations are updated with new, non-violating values. If the value is set to `delete`, items with key violations are deleted from the table.

## The Violation Detector configuration file
<a name="GSI.OnlineOps.ViolationDetection.ConfigFile"></a>

At runtime, the Violation Detector tool requires a configuration file. The parameters in this file determine which DynamoDB resources Violation Detector can access, and how much provisioned throughput it can consume. The following table describes these parameters.



| Parameter name | Description | Required? | 
| --- | --- | --- | 
|  `awsCredentialsFile`  |  The fully qualified name of a file containing your AWS credentials. The credentials file must be in the following format: <pre>accessKey = access_key_id_goes_here<br />secretKey = secret_key_goes_here </pre>  |  Yes  | 
|  `dynamoDBRegion`  |  The AWS Region in which the table resides. For example: `us-west-2`.  |  Yes  | 
|  `tableName`  | The name of the DynamoDB table to be scanned. |  Yes  | 
|  `gsiHashKeyName`  |  The name of the index partition key.  |  Yes  | 
|  `gsiHashKeyType`  |  The data type of the index partition key—`String`, `Number`, or `Binary`: `S \| N \| B`  |  Yes  | 
|  `gsiRangeKeyName`  |  The name of the index sort key. Do not specify this parameter if the index only has a simple primary key (partition key).  |  No  | 
|  `gsiRangeKeyType`  |  The data type of the index sort key—`String`, `Number`, or `Binary`: `S \| N \| B`  Do not specify this parameter if the index only has a simple primary key (partition key).  |  No  | 
|  `recordDetails`  |  Whether to write the full details of index key violations to the output file. If set to `true` (the default), full information about the violating items is reported. If set to `false`, only the number of violations is reported.  |  No  | 
|  `recordGsiValueInViolationRecord`  |  Whether to write the values of the violating index keys to the output file. If set to `true` (default), the key values are reported. If set to `false`, the key values are not reported.  |  No  | 
|  `detectionOutputPath`  |  The full path of the Violation Detector output file. This parameter supports writing to a local directory or to Amazon Simple Storage Service (Amazon S3). The following are examples: `detectionOutputPath = //local/path/filename.csv` or `detectionOutputPath = s3://bucket/filename.csv`. Information in the output file appears in comma-separated values (CSV) format. If you don't set `detectionOutputPath`, the output file is named `violation_detection.csv` and is written to your current working directory.  |  No  | 
|  `numOfSegments`  | The number of parallel scan segments to be used when Violation Detector scans the table. The default value is 1, meaning that the table is scanned in a sequential manner. If the value is 2 or higher, then Violation Detector divides the table into that many logical segments and an equal number of scan threads. The maximum setting for `numOfSegments` is 4096. For larger tables, a parallel scan is generally faster than a sequential scan. In addition, if the table is large enough to span multiple partitions, a parallel scan distributes its read activity evenly across multiple partitions. For more information about parallel scans in DynamoDB, see [Parallel scan](Scan.md#Scan.ParallelScan). |  No  | 
|  `numOfViolations`  |  The upper limit of index key violations to write to the output file. If set to `-1` (the default), the entire table is scanned. If set to a positive integer, then Violation Detector stops after it encounters that number of violations.  |  No  | 
|  `numOfRecords`  |  The number of items in the table to be scanned. If set to -1 (the default), the entire table is scanned. If set to a positive integer, Violation Detector stops after it scans that many items in the table.  |  No  | 
|  `readWriteIOPSPercent`  |  Regulates the percentage of provisioned read capacity units that are consumed during the table scan. Valid values range from `1` to `100`. The default value (`25`) means that Violation Detector will consume no more than 25% of the table's provisioned read throughput.  |  No  | 
|  `correctionInputPath`  |  The full path of the Violation Detector correction input file. If you run Violation Detector in correction mode, the contents of this file are used to modify or delete data items in the table that violate the global secondary index. The format of the `correctionInputPath` file is the same as that of the `detectionOutputPath` file. This lets you process the output from detection mode as input in correction mode.  |  No  | 
|  `correctionOutputPath`  |  The full path of the Violation Detector correction output file. This file is created only if there are update errors. This parameter supports writing to a local directory or to Amazon S3. The following are examples: `correctionOutputPath = //local/path/filename.csv` or `correctionOutputPath = s3://bucket/filename.csv`. Information in the output file appears in CSV format. If you don't set `correctionOutputPath`, the output file is named `violation_update_errors.csv` and is written to your current working directory.  |  No  | 

## Detection
<a name="GSI.OnlineOps.ViolationDetection.Detection"></a>

To detect index key violations, use Violation Detector with the `--detect` command line option. To show how this option works, consider the `ProductCatalog` table. The following is a list of items in the table. Only the primary key (`Id`) and the `Price` attribute are shown.



| Id (primary key) | Price | 
| --- | --- | 
| 101 |  5  | 
| 102 |  20  | 
| 103 | 200  | 
| 201 |  100  | 
| 202 |  200  | 
| 203 |  300  | 
| 204 |  400  | 
| 205 |  500  | 

All of the values for `Price` are of type `Number`. However, because DynamoDB is schemaless, it is possible to add an item with a non-numeric `Price`. For example, suppose that you add another item to the `ProductCatalog` table.



| Id (primary key) | Price | 
| --- | --- | 
| 999 | "Hello" | 

The table now has a total of nine items.

Now you add a new global secondary index to the table: `PriceIndex`. The primary key for this index is a partition key, `Price`, which is of type `Number`. After the index has been built, it will contain eight items—but the `ProductCatalog` table has nine items. The reason for this discrepancy is that the value `"Hello"` is of type `String`, but `PriceIndex` has a primary key of type `Number`. The `String` value violates the global secondary index key, so it is not present in the index.
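The eligibility rule can be sketched in code: only items whose `Price` is of type `Number` can appear in `PriceIndex`. The items below mirror the `ProductCatalog` tables shown earlier, in DynamoDB's wire format:

```javascript
// The nine ProductCatalog items, in DynamoDB wire format.
const productCatalogItems = [
    { Id: { N: '101' }, Price: { N: '5' } },
    { Id: { N: '102' }, Price: { N: '20' } },
    { Id: { N: '103' }, Price: { N: '200' } },
    { Id: { N: '201' }, Price: { N: '100' } },
    { Id: { N: '202' }, Price: { N: '200' } },
    { Id: { N: '203' }, Price: { N: '300' } },
    { Id: { N: '204' }, Price: { N: '400' } },
    { Id: { N: '205' }, Price: { N: '500' } },
    { Id: { N: '999' }, Price: { S: 'Hello' } }, // type violation for PriceIndex
];

// An item qualifies for PriceIndex only if Price exists and is type N.
const eligibleForPriceIndex = productCatalogItems.filter(
    (item) => item.Price !== undefined && 'N' in item.Price
);
console.log(eligibleForPriceIndex.length); // 8
```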

To use Violation Detector in this scenario, you first create a configuration file such as the following.

```
# Properties file for violation detection tool configuration.
# Parameters that are not specified will use default values.

awsCredentialsFile = /home/alice/credentials.txt
dynamoDBRegion = us-west-2
tableName = ProductCatalog
gsiHashKeyName = Price
gsiHashKeyType = N
recordDetails = true
recordGsiValueInViolationRecord = true
detectionOutputPath = ./gsi_violation_check.csv
correctionInputPath = ./gsi_violation_check.csv
numOfSegments = 1
readWriteIOPSPercent = 40
```

Next, you run Violation Detector as in the following example.

```
$  java -jar ViolationDetector.jar --configFilePath config.txt --detect keep

Violation detection started: sequential scan, Table name: ProductCatalog, GSI name: PriceIndex
Progress: Items scanned in total: 9,    Items scanned by this thread: 9,    Violations found by this thread: 1, Violations deleted by this thread: 0
Violation detection finished: Records scanned: 9, Violations found: 1, Violations deleted: 0, see results at: ./gsi_violation_check.csv
```

If the `recordDetails` config parameter is set to `true`, Violation Detector writes details of each violation to the output file, as in the following example.

```
Table Hash Key,GSI Hash Key Value,GSI Hash Key Violation Type,GSI Hash Key Violation Description,GSI Hash Key Update Value(FOR USER),Delete Blank Attributes When Updating?(Y/N) 

999,"{""S"":""Hello""}",Type Violation,Expected: N Found: S,,
```

The output file is in CSV format. The first line in the file is a header, followed by one record per item that violates the index key. The fields of these violation records are as follows:
+ **Table hash key** — The partition key value of the item in the table.
+ **Table range key** — The sort key value of the item in the table.
+ **GSI hash key value** — The partition key value of the global secondary index.
+ **GSI hash key violation type** — Either `Type Violation` or `Size Violation`.
+ **GSI hash key violation description** — The cause of the violation.
+ **GSI hash key update Value(FOR USER)** — In correction mode, a new user-supplied value for the attribute.
+ **GSI range key value** — The sort key value of the global secondary index.
+ **GSI range key violation type** — Either `Type Violation` or `Size Violation`.
+ **GSI range key violation description** — The cause of the violation.
+ **GSI range key update Value(FOR USER)** — In correction mode, a new user-supplied value for the attribute.
+ **Delete blank attribute when Updating(Y/N)** — In correction mode, determines whether to delete (Y) or keep (N) the violating item in the table—but only if either of the following fields are blank:
  + `GSI Hash Key Update Value(FOR USER)`
  + `GSI Range Key Update Value(FOR USER)`

  If either of these fields are non-blank, then `Delete Blank Attribute When Updating(Y/N)` has no effect.
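As a sketch of working with these records, the sample detection output shown earlier can be split into its fields. The second field is quoted and contains commas-in-quotes escaping, so a naive `split(',')` is not enough; the small quote-aware splitter below is an illustrative stand-in, not part of Violation Detector:

```javascript
// Split one CSV record, honoring double-quoted fields and "" escapes.
function splitCsvLine(line) {
    const fields = [];
    let field = '';
    let inQuotes = false;
    for (let i = 0; i < line.length; i++) {
        const ch = line[i];
        if (inQuotes) {
            if (ch === '"') {
                if (line[i + 1] === '"') { field += '"'; i++; } // escaped quote
                else { inQuotes = false; }                      // closing quote
            } else { field += ch; }
        } else if (ch === '"') { inQuotes = true; }
        else if (ch === ',') { fields.push(field); field = ''; }
        else { field += ch; }
    }
    fields.push(field);
    return fields;
}

// The violation record from the detection example above.
const record = '999,"{""S"":""Hello""}",Type Violation,Expected: N Found: S,,';
console.log(splitCsvLine(record));
```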

**Note**  
The output format might vary, depending on the configuration file and command line options. For example, if the table has a simple primary key (without a sort key), no sort key fields will be present in the output.  
The violation records in the file might not be in sorted order.

## Correction
<a name="GSI.OnlineOps.ViolationDetection.Correction"></a>

To correct index key violations, use Violation Detector with the `--correct` command line option. In correction mode, Violation Detector reads the input file specified by the `correctionInputPath` parameter. This file has the same format as the `detectionOutputPath` file, so that you can use the output from detection as input for correction.

Violation Detector provides two different ways to correct index key violations:
+ **Delete violations** — Delete the table items that have violating attribute values.
+ **Update violations** — Update the table items, replacing the violating attributes with non-violating values.

In either case, you can use the output file from detection mode as input for correction mode.

Continuing with the `ProductCatalog` example, suppose that you want to delete the violating item from the table. To do this, you use the following command line.

```
$  java -jar ViolationDetector.jar --configFilePath config.txt --correct delete
```

At this point, you are asked to confirm whether you want to delete the violating items.

```
Are you sure to delete all violations on the table?y/n
y
Confirmed, will delete violations on the table...
Violation correction from file started: Reading records from file: ./gsi_violation_check.csv, will delete these records from table.
Violation correction from file finished: Violations delete: 1, Violations Update: 0
```

Now both `ProductCatalog` and `PriceIndex` have the same number of items.

# Working with Global Secondary Indexes: Java
<a name="GSIJavaDocumentAPI"></a>

You can use the AWS SDK for Java Document API to create an Amazon DynamoDB table with one or more global secondary indexes, describe the indexes on the table, and perform queries using the indexes. 

The following are the common steps for table operations. 

1. Create an instance of the `DynamoDB` class.

1. Provide the required and optional parameters for the operation by creating the corresponding request objects. 

1. Call the appropriate method provided by the client that you created in the preceding step. 

**Topics**
+ [Create a table with a Global Secondary Index](#GSIJavaDocumentAPI.CreateTableWithIndex)
+ [Describe a table with a Global Secondary Index](#GSIJavaDocumentAPI.DescribeTableWithIndex)
+ [Query a Global Secondary Index](#GSIJavaDocumentAPI.QueryAnIndex)
+ [Example: Global Secondary Indexes using the AWS SDK for Java document API](GSIJavaDocumentAPI.Example.md)

## Create a table with a Global Secondary Index
<a name="GSIJavaDocumentAPI.CreateTableWithIndex"></a>

You can create global secondary indexes at the same time that you create a table. To do this, use `CreateTable` and provide your specifications for one or more global secondary indexes. The following Java code example creates a table to hold information about weather data. The partition key is `Location` and the sort key is `Date`. A global secondary index named `PrecipIndex` allows fast access to precipitation data for various locations.

The following are the steps to create a table with a global secondary index, using the DynamoDB document API. 

1. Create an instance of the `DynamoDB` class.

1. Create an instance of the `CreateTableRequest` class to provide the request information.

   You must provide the table name, its primary key, and the provisioned throughput values. For the global secondary index, you must provide the index name, its provisioned throughput settings, the attribute definitions for the index sort key, the key schema for the index, and the attribute projection.

1. Call the `createTable` method by providing the request object as a parameter.

The following Java code example demonstrates the preceding steps. The code creates a table (`WeatherData`) with a global secondary index (`PrecipIndex`). The index partition key is `Date` and its sort key is `Precipitation`. All of the table attributes are projected into the index. Users can query this index to obtain weather data for a particular date, optionally sorting the data by precipitation amount. 

Because `Precipitation` is not a key attribute for the table, it is not required. However, `WeatherData` items without `Precipitation` do not appear in `PrecipIndex`.

```
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
DynamoDB dynamoDB = new DynamoDB(client);

// Attribute definitions
ArrayList<AttributeDefinition> attributeDefinitions = new ArrayList<AttributeDefinition>();

attributeDefinitions.add(new AttributeDefinition()
    .withAttributeName("Location")
    .withAttributeType("S"));
attributeDefinitions.add(new AttributeDefinition()
    .withAttributeName("Date")
    .withAttributeType("S"));
attributeDefinitions.add(new AttributeDefinition()
    .withAttributeName("Precipitation")
    .withAttributeType("N"));

// Table key schema
ArrayList<KeySchemaElement> tableKeySchema = new ArrayList<KeySchemaElement>();
tableKeySchema.add(new KeySchemaElement()
    .withAttributeName("Location")
    .withKeyType(KeyType.HASH));  //Partition key
tableKeySchema.add(new KeySchemaElement()
    .withAttributeName("Date")
    .withKeyType(KeyType.RANGE));  //Sort key

// PrecipIndex
GlobalSecondaryIndex precipIndex = new GlobalSecondaryIndex()
    .withIndexName("PrecipIndex")
    .withProvisionedThroughput(new ProvisionedThroughput()
        .withReadCapacityUnits((long) 10)
        .withWriteCapacityUnits((long) 1))
    .withProjection(new Projection().withProjectionType(ProjectionType.ALL));

ArrayList<KeySchemaElement> indexKeySchema = new ArrayList<KeySchemaElement>();

indexKeySchema.add(new KeySchemaElement()
    .withAttributeName("Date")
    .withKeyType(KeyType.HASH));  //Partition key
indexKeySchema.add(new KeySchemaElement()
    .withAttributeName("Precipitation")
    .withKeyType(KeyType.RANGE));  //Sort key

precipIndex.setKeySchema(indexKeySchema);

CreateTableRequest createTableRequest = new CreateTableRequest()
    .withTableName("WeatherData")
    .withProvisionedThroughput(new ProvisionedThroughput()
        .withReadCapacityUnits((long) 5)
        .withWriteCapacityUnits((long) 1))
    .withAttributeDefinitions(attributeDefinitions)
    .withKeySchema(tableKeySchema)
    .withGlobalSecondaryIndexes(precipIndex);

Table table = dynamoDB.createTable(createTableRequest);
System.out.println(table.getDescription());
```
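
The sparse nature of `PrecipIndex` can be illustrated without calling DynamoDB at all. The following standalone sketch (plain Java with hypothetical data, no AWS SDK) keeps only the items that define the index key attribute, which is the visible effect of the index omitting items that lack `Precipitation`:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class SparseIndexSketch {

    // Keeps only the items that define the given index key attribute,
    // which is how a sparse global secondary index behaves.
    public static List<Map<String, Object>> indexedItems(
            List<Map<String, Object>> items, String indexKey) {
        return items.stream()
                .filter(item -> item.containsKey(indexKey))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Hypothetical WeatherData items; the Phoenix item has no Precipitation.
        List<Map<String, Object>> items = List.of(
                Map.of("Location", "Seattle", "Date", "2013-08-10", "Precipitation", 8),
                Map.of("Location", "Phoenix", "Date", "2013-08-10"));

        System.out.println(indexedItems(items, "Precipitation").size()); // prints 1
    }
}
```

Real sparse indexes are maintained by DynamoDB automatically; the filter above only models the observable result.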

You must wait until DynamoDB creates the table and sets the table status to `ACTIVE`. After that, you can begin putting data items into the table.
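
In the document API, `Table.waitForActive()` handles this wait for you. Conceptually it is a polling loop like the sketch below, where `fetchStatus` is a stand-in for a `DescribeTable` call (the supplier and poll interval here are illustrative, not SDK API):

```java
import java.util.Iterator;
import java.util.List;
import java.util.function.Supplier;

public class WaitForActiveSketch {

    // Polls until the supplier reports ACTIVE, sleeping between attempts.
    public static String waitForActive(Supplier<String> fetchStatus, long pollMillis) {
        String status;
        while (!"ACTIVE".equals(status = fetchStatus.get())) {
            try {
                Thread.sleep(pollMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IllegalStateException("Interrupted while waiting for table", e);
            }
        }
        return status;
    }

    public static void main(String[] args) {
        // Stand-in for DescribeTable: reports CREATING twice, then ACTIVE.
        Iterator<String> canned = List.of("CREATING", "CREATING", "ACTIVE").iterator();
        System.out.println(waitForActive(canned::next, 10)); // prints ACTIVE
    }
}
```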

## Describe a table with a Global Secondary Index
<a name="GSIJavaDocumentAPI.DescribeTableWithIndex"></a>

To get information about global secondary indexes on a table, use `DescribeTable`. For each index, you can access its name, key schema, and projected attributes.

The following are the steps to access global secondary index information for a table.

1. Create an instance of the `DynamoDB` class.

1. Create an instance of the `Table` class to represent the table you want to work with.

1. Call the `describe` method on the `Table` object.

The following Java code example demonstrates the preceding steps.

**Example**  

```
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
DynamoDB dynamoDB = new DynamoDB(client);

Table table = dynamoDB.getTable("WeatherData");
TableDescription tableDesc = table.describe();

Iterator<GlobalSecondaryIndexDescription> gsiIter = tableDesc.getGlobalSecondaryIndexes().iterator();
while (gsiIter.hasNext()) {
    GlobalSecondaryIndexDescription gsiDesc = gsiIter.next();
    System.out.println("Info for index "
         + gsiDesc.getIndexName() + ":");

    Iterator<KeySchemaElement> kseIter = gsiDesc.getKeySchema().iterator();
    while (kseIter.hasNext()) {
        KeySchemaElement kse = kseIter.next();
        System.out.printf("\t%s: %s\n", kse.getAttributeName(), kse.getKeyType());
    }
    Projection projection = gsiDesc.getProjection();
    System.out.println("\tThe projection type is: "
        + projection.getProjectionType());
    if (projection.getProjectionType().toString().equals("INCLUDE")) {
        System.out.println("\t\tThe non-key projected attributes are: "
            + projection.getNonKeyAttributes());
    }
}
```

## Query a Global Secondary Index
<a name="GSIJavaDocumentAPI.QueryAnIndex"></a>

You can use `Query` on a global secondary index, in much the same way you `Query` a table. You need to specify the index name, the query criteria for the index partition key and sort key (if present), and the attributes that you want to return. In this example, the index is `PrecipIndex`, which has a partition key of `Date` and a sort key of `Precipitation`. The index query returns all of the weather data for a particular date, where the precipitation is greater than zero.

The following are the steps to query a global secondary index using the AWS SDK for Java Document API. 

1. Create an instance of the `DynamoDB` class.

1. Create an instance of the `Table` class to represent the table you want to work with.

1. Create an instance of the `Index` class for the index you want to query.

1. Call the `query` method on the `Index` object.

The attribute name `Date` is a DynamoDB reserved word. Therefore, you must use an expression attribute name as a placeholder in the `KeyConditionExpression`.

The following Java code example demonstrates the preceding steps.

**Example**  

```
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
DynamoDB dynamoDB = new DynamoDB(client);

Table table = dynamoDB.getTable("WeatherData");
Index index = table.getIndex("PrecipIndex");

QuerySpec spec = new QuerySpec()
    .withKeyConditionExpression("#d = :v_date and Precipitation > :v_precip")
    .withNameMap(new NameMap()
        .with("#d", "Date"))
    .withValueMap(new ValueMap()
        .withString(":v_date","2013-08-10")
        .withNumber(":v_precip",0));

ItemCollection<QueryOutcome> items = index.query(spec);
Iterator<Item> iter = items.iterator(); 
while (iter.hasNext()) {
    System.out.println(iter.next().toJSONPretty());
}
```
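
The substitution that the `NameMap` describes is performed by the service, but its effect is easy to model. This standalone sketch (plain Java, no SDK) shows what resolving the `#d` placeholder accomplishes:

```java
import java.util.Map;

public class ExpressionNameSketch {

    // Replaces each #placeholder in the expression with its mapped attribute name.
    public static String resolve(String expression, Map<String, String> nameMap) {
        for (Map.Entry<String, String> e : nameMap.entrySet()) {
            expression = expression.replace(e.getKey(), e.getValue());
        }
        return expression;
    }

    public static void main(String[] args) {
        String resolved = resolve("#d = :v_date and Precipitation > :v_precip",
                Map.of("#d", "Date"));
        System.out.println(resolved); // prints: Date = :v_date and Precipitation > :v_precip
    }
}
```

Placeholders keep the reserved word `Date` out of the expression text that DynamoDB parses, which is why the query succeeds.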

# Example: Global Secondary Indexes using the AWS SDK for Java document API
<a name="GSIJavaDocumentAPI.Example"></a>

The following Java code example shows how to work with global secondary indexes. The example creates a table named `Issues`, which might be used in a simple bug tracking system for software development. The partition key is `IssueId` and the sort key is `Title`. There are three global secondary indexes on this table:
+ `CreateDateIndex` — The partition key is `CreateDate` and the sort key is `IssueId`. In addition to the table keys, the attributes `Description` and `Status` are projected into the index.
+ `TitleIndex` — The partition key is `Title` and the sort key is `IssueId`. No attributes other than the table keys are projected into the index.
+ `DueDateIndex` — The partition key is `DueDate`, and there is no sort key. All of the table attributes are projected into the index.
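
The projection type of each index determines which attributes an index query can return. The following standalone sketch (plain Java with a hypothetical item and attribute sets, no SDK) models how an `INCLUDE` projection such as `CreateDateIndex`'s exposes only the key attributes plus the listed non-key attributes:

```java
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class ProjectionSketch {

    // Returns the view of an item that a projection would expose:
    // key attributes plus any INCLUDE-projected non-key attributes.
    public static Map<String, Object> project(
            Map<String, Object> item, Set<String> keyAttributes, Set<String> nonKeyAttributes) {
        return item.entrySet().stream()
                .filter(e -> keyAttributes.contains(e.getKey())
                        || nonKeyAttributes.contains(e.getKey()))
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }

    public static void main(String[] args) {
        // Hypothetical Issues item.
        Map<String, Object> issue = Map.of(
                "IssueId", "A-101", "Title", "Compilation error",
                "Description", "Bad version number", "Status", "Assigned", "Priority", 1);

        // CreateDateIndex: table and index keys, plus Description and Status.
        Map<String, Object> view = project(issue,
                Set.of("IssueId", "Title", "CreateDate"),
                Set.of("Description", "Status"));

        System.out.println(view.containsKey("Priority")); // prints false
    }
}
```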

After the `Issues` table is created, the program loads the table with data representing software bug reports. It then queries the data using the global secondary indexes. Finally, the program deletes the `Issues` table.

For step-by-step instructions for testing the following example, see [Java code examples](CodeSamples.Java.md).

**Example**  

```
package com.amazonaws.codesamples.document;

import java.util.ArrayList;
import java.util.Iterator;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Index;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.ItemCollection;
import com.amazonaws.services.dynamodbv2.document.QueryOutcome;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.document.spec.QuerySpec;
import com.amazonaws.services.dynamodbv2.document.utils.ValueMap;
import com.amazonaws.services.dynamodbv2.model.AttributeDefinition;
import com.amazonaws.services.dynamodbv2.model.CreateTableRequest;
import com.amazonaws.services.dynamodbv2.model.GlobalSecondaryIndex;
import com.amazonaws.services.dynamodbv2.model.KeySchemaElement;
import com.amazonaws.services.dynamodbv2.model.KeyType;
import com.amazonaws.services.dynamodbv2.model.Projection;
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughput;

public class DocumentAPIGlobalSecondaryIndexExample {

    static AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
    static DynamoDB dynamoDB = new DynamoDB(client);

    public static String tableName = "Issues";

    public static void main(String[] args) throws Exception {

        createTable();
        loadData();

        queryIndex("CreateDateIndex");
        queryIndex("TitleIndex");
        queryIndex("DueDateIndex");

        deleteTable(tableName);

    }

    public static void createTable() {

        // Attribute definitions
        ArrayList<AttributeDefinition> attributeDefinitions = new ArrayList<AttributeDefinition>();

        attributeDefinitions.add(new AttributeDefinition().withAttributeName("IssueId").withAttributeType("S"));
        attributeDefinitions.add(new AttributeDefinition().withAttributeName("Title").withAttributeType("S"));
        attributeDefinitions.add(new AttributeDefinition().withAttributeName("CreateDate").withAttributeType("S"));
        attributeDefinitions.add(new AttributeDefinition().withAttributeName("DueDate").withAttributeType("S"));

        // Key schema for table
        ArrayList<KeySchemaElement> tableKeySchema = new ArrayList<KeySchemaElement>();
        tableKeySchema.add(new KeySchemaElement().withAttributeName("IssueId").withKeyType(KeyType.HASH)); // Partition key
        tableKeySchema.add(new KeySchemaElement().withAttributeName("Title").withKeyType(KeyType.RANGE)); // Sort key

        // Initial provisioned throughput settings for the indexes
        ProvisionedThroughput ptIndex = new ProvisionedThroughput().withReadCapacityUnits(1L)
                .withWriteCapacityUnits(1L);

        // CreateDateIndex
        GlobalSecondaryIndex createDateIndex = new GlobalSecondaryIndex().withIndexName("CreateDateIndex")
                .withProvisionedThroughput(ptIndex)
                .withKeySchema(new KeySchemaElement().withAttributeName("CreateDate").withKeyType(KeyType.HASH), // Partition key
                        new KeySchemaElement().withAttributeName("IssueId").withKeyType(KeyType.RANGE)) // Sort key
                .withProjection(
                        new Projection().withProjectionType("INCLUDE").withNonKeyAttributes("Description", "Status"));

        // TitleIndex
        GlobalSecondaryIndex titleIndex = new GlobalSecondaryIndex().withIndexName("TitleIndex")
                .withProvisionedThroughput(ptIndex)
                .withKeySchema(new KeySchemaElement().withAttributeName("Title").withKeyType(KeyType.HASH), // Partition key
                        new KeySchemaElement().withAttributeName("IssueId").withKeyType(KeyType.RANGE)) // Sort key
                .withProjection(new Projection().withProjectionType("KEYS_ONLY"));

        // DueDateIndex
        GlobalSecondaryIndex dueDateIndex = new GlobalSecondaryIndex().withIndexName("DueDateIndex")
                .withProvisionedThroughput(ptIndex)
                .withKeySchema(new KeySchemaElement().withAttributeName("DueDate").withKeyType(KeyType.HASH)) // Partition key
                .withProjection(new Projection().withProjectionType("ALL"));

        CreateTableRequest createTableRequest = new CreateTableRequest().withTableName(tableName)
                .withProvisionedThroughput(
                        new ProvisionedThroughput().withReadCapacityUnits((long) 1).withWriteCapacityUnits((long) 1))
                .withAttributeDefinitions(attributeDefinitions).withKeySchema(tableKeySchema)
                .withGlobalSecondaryIndexes(createDateIndex, titleIndex, dueDateIndex);

        System.out.println("Creating table " + tableName + "...");
        dynamoDB.createTable(createTableRequest);

        // Wait for table to become active
        System.out.println("Waiting for " + tableName + " to become ACTIVE...");
        try {
            Table table = dynamoDB.getTable(tableName);
            table.waitForActive();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public static void queryIndex(String indexName) {

        Table table = dynamoDB.getTable(tableName);

        System.out.println("\n***********************************************************\n");
        System.out.print("Querying index " + indexName + "...");

        Index index = table.getIndex(indexName);

        ItemCollection<QueryOutcome> items = null;

        QuerySpec querySpec = new QuerySpec();

        if (indexName.equals("CreateDateIndex")) {
            System.out.println("Issues filed on 2013-11-01");
            querySpec.withKeyConditionExpression("CreateDate = :v_date and begins_with(IssueId, :v_issue)")
                    .withValueMap(new ValueMap().withString(":v_date", "2013-11-01").withString(":v_issue", "A-"));
            items = index.query(querySpec);
        } else if (indexName.equals("TitleIndex")) {
            System.out.println("Compilation errors");
            querySpec.withKeyConditionExpression("Title = :v_title and begins_with(IssueId, :v_issue)")
                    .withValueMap(
                            new ValueMap().withString(":v_title", "Compilation error").withString(":v_issue", "A-"));
            items = index.query(querySpec);
        } else if (indexName.equals("DueDateIndex")) {
            System.out.println("Items that are due on 2013-11-30");
            querySpec.withKeyConditionExpression("DueDate = :v_date")
                    .withValueMap(new ValueMap().withString(":v_date", "2013-11-30"));
            items = index.query(querySpec);
        } else {
            System.out.println("\nNo valid index name provided");
            return;
        }

        Iterator<Item> iterator = items.iterator();

        System.out.println("Query: printing results...");

        while (iterator.hasNext()) {
            System.out.println(iterator.next().toJSONPretty());
        }

    }

    public static void deleteTable(String tableName) {

        System.out.println("Deleting table " + tableName + "...");

        Table table = dynamoDB.getTable(tableName);
        table.delete();

        // Wait for table to be deleted
        System.out.println("Waiting for " + tableName + " to be deleted...");
        try {
            table.waitForDelete();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public static void loadData() {

        System.out.println("Loading data into table " + tableName + "...");

        // IssueId, Title,
        // Description,
        // CreateDate, LastUpdateDate, DueDate,
        // Priority, Status

        putItem("A-101", "Compilation error", "Can't compile Project X - bad version number. What does this mean?",
                "2013-11-01", "2013-11-02", "2013-11-10", 1, "Assigned");

        putItem("A-102", "Can't read data file", "The main data file is missing, or the permissions are incorrect",
                "2013-11-01", "2013-11-04", "2013-11-30", 2, "In progress");

        putItem("A-103", "Test failure", "Functional test of Project X produces errors", "2013-11-01", "2013-11-02",
                "2013-11-10", 1, "In progress");

        putItem("A-104", "Compilation error", "Variable 'messageCount' was not initialized.", "2013-11-15",
                "2013-11-16", "2013-11-30", 3, "Assigned");

        putItem("A-105", "Network issue", "Can't ping IP address 127.0.0.1. Please fix this.", "2013-11-15",
                "2013-11-16", "2013-11-19", 5, "Assigned");

    }

    public static void putItem(

            String issueId, String title, String description, String createDate, String lastUpdateDate, String dueDate,
            Integer priority, String status) {

        Table table = dynamoDB.getTable(tableName);

        Item item = new Item().withPrimaryKey("IssueId", issueId).withString("Title", title)
                .withString("Description", description).withString("CreateDate", createDate)
                .withString("LastUpdateDate", lastUpdateDate).withString("DueDate", dueDate)
                .withNumber("Priority", priority).withString("Status", status);

        table.putItem(item);
    }

}
```

# Working with Global Secondary Indexes: .NET
<a name="GSILowLevelDotNet"></a>

You can use the AWS SDK for .NET low-level API to create an Amazon DynamoDB table with one or more global secondary indexes, describe the indexes on the table, and perform queries using the indexes. These operations map to the corresponding DynamoDB operations. For more information, see the [Amazon DynamoDB API Reference](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/).

The following are the common steps for table operations using the .NET low-level API. 

1. Create an instance of the `AmazonDynamoDBClient` class.

1. Provide the required and optional parameters for the operation by creating the corresponding request objects.

   For example, create a `CreateTableRequest` object to create a table and `QueryRequest` object to query a table or an index. 

1. Run the appropriate method provided by the client that you created in the preceding step. 

**Topics**
+ [Create a table with a Global Secondary Index](#GSILowLevelDotNet.CreateTableWithIndex)
+ [Describe a table with a Global Secondary Index](#GSILowLevelDotNet.DescribeTableWithIndex)
+ [Query a Global Secondary Index](#GSILowLevelDotNet.QueryAnIndex)
+ [Example: Global Secondary Indexes using the AWS SDK for .NET low-level API](GSILowLevelDotNet.Example.md)

## Create a table with a Global Secondary Index
<a name="GSILowLevelDotNet.CreateTableWithIndex"></a>

You can create global secondary indexes at the same time that you create a table. To do this, use `CreateTable` and provide your specifications for one or more global secondary indexes. The following C# code example creates a table to hold information about weather data. The partition key is `Location` and the sort key is `Date`. A global secondary index named `PrecipIndex` allows fast access to precipitation data for various locations.

The following are the steps to create a table with a global secondary index, using the .NET low-level API. 

1. Create an instance of the `AmazonDynamoDBClient` class.

1. Create an instance of the `CreateTableRequest` class to provide the request information. 

   You must provide the table name, its primary key, and the provisioned throughput values. For the global secondary index, you must provide the index name, its provisioned throughput settings, the attribute definitions for the index partition key and sort key, the key schema for the index, and the attribute projection.

1. Run the `CreateTable` method by providing the request object as a parameter.

The following C# code example demonstrates the preceding steps. The code creates a table (`WeatherData`) with a global secondary index (`PrecipIndex`). The index partition key is `Date` and its sort key is `Precipitation`. All of the table attributes are projected into the index. Users can query this index to obtain weather data for a particular date, optionally sorting the data by precipitation amount.

Because `Precipitation` is not a key attribute for the table, it is not required. However, `WeatherData` items without `Precipitation` do not appear in `PrecipIndex`.

```
AmazonDynamoDBClient client = new AmazonDynamoDBClient();
string tableName = "WeatherData";

// Attribute definitions
var attributeDefinitions = new List<AttributeDefinition>()
{
    {new AttributeDefinition{
        AttributeName = "Location",
        AttributeType = "S"}},
    {new AttributeDefinition{
        AttributeName = "Date",
        AttributeType = "S"}},
    {new AttributeDefinition(){
        AttributeName = "Precipitation",
        AttributeType = "N"}
    }
};

// Table key schema
var tableKeySchema = new List<KeySchemaElement>()
{
    {new KeySchemaElement {
        AttributeName = "Location",
        KeyType = "HASH"}},  //Partition key
    {new KeySchemaElement {
        AttributeName = "Date",
        KeyType = "RANGE"}  //Sort key
    }
};

// PrecipIndex
var precipIndex = new GlobalSecondaryIndex
{
    IndexName = "PrecipIndex",
    ProvisionedThroughput = new ProvisionedThroughput
    {
        ReadCapacityUnits = (long)10,
        WriteCapacityUnits = (long)1
    },
    Projection = new Projection { ProjectionType = "ALL" }
};

var indexKeySchema = new List<KeySchemaElement> {
    {new KeySchemaElement { AttributeName = "Date", KeyType = "HASH"}},  //Partition key
    {new KeySchemaElement{AttributeName = "Precipitation",KeyType = "RANGE"}}  //Sort key
};

precipIndex.KeySchema = indexKeySchema;

CreateTableRequest createTableRequest = new CreateTableRequest
{
    TableName = tableName,
    ProvisionedThroughput = new ProvisionedThroughput
    {
        ReadCapacityUnits = (long)5,
        WriteCapacityUnits = (long)1
    },
    AttributeDefinitions = attributeDefinitions,
    KeySchema = tableKeySchema,
    GlobalSecondaryIndexes = { precipIndex }
};

CreateTableResponse response = client.CreateTable(createTableRequest);
Console.WriteLine(response.CreateTableResult.TableDescription.TableName);
Console.WriteLine(response.CreateTableResult.TableDescription.TableStatus);
```

You must wait until DynamoDB creates the table and sets the table status to `ACTIVE`. After that, you can begin putting data items into the table.

## Describe a table with a Global Secondary Index
<a name="GSILowLevelDotNet.DescribeTableWithIndex"></a>

To get information about global secondary indexes on a table, use `DescribeTable`. For each index, you can access its name, key schema, and projected attributes.

The following are the steps to access global secondary index information for a table using the .NET low-level API. 

1. Create an instance of the `AmazonDynamoDBClient` class.

1. Create an instance of the `DescribeTableRequest` class to provide the request information. You must provide the table name.

1. Run the `DescribeTable` method by providing the request object as a parameter.

The following C# code example demonstrates the preceding steps.

**Example**  

```
AmazonDynamoDBClient client = new AmazonDynamoDBClient();
string tableName = "WeatherData";

DescribeTableResponse response = client.DescribeTable(new DescribeTableRequest { TableName = tableName});

List<GlobalSecondaryIndexDescription> globalSecondaryIndexes =
    response.DescribeTableResult.Table.GlobalSecondaryIndexes;

// This code snippet will work for multiple indexes, even though
// there is only one index in this example.

foreach (GlobalSecondaryIndexDescription gsiDescription in globalSecondaryIndexes) {
    Console.WriteLine("Info for index " + gsiDescription.IndexName + ":");

    foreach (KeySchemaElement kse in gsiDescription.KeySchema) {
        Console.WriteLine("\t" + kse.AttributeName + ": key type is " + kse.KeyType);
    }

    Projection projection = gsiDescription.Projection;
    Console.WriteLine("\tThe projection type is: " + projection.ProjectionType);

    if (projection.ProjectionType.ToString().Equals("INCLUDE")) {
        Console.WriteLine("\t\tThe non-key projected attributes are: "
            + projection.NonKeyAttributes);
    }
}
```

## Query a Global Secondary Index
<a name="GSILowLevelDotNet.QueryAnIndex"></a>

You can use `Query` on a global secondary index, in much the same way you `Query` a table. You need to specify the index name, the query criteria for the index partition key and sort key (if present), and the attributes that you want to return. In this example, the index is `PrecipIndex`, which has a partition key of `Date` and a sort key of `Precipitation`. The index query returns all of the weather data for a particular date, where the precipitation is greater than zero.

The following are the steps to query a global secondary index using the .NET low-level API. 

1. Create an instance of the `AmazonDynamoDBClient` class.

1. Create an instance of the `QueryRequest` class to provide the request information.

1. Run the `Query` method by providing the request object as a parameter.

The attribute name `Date` is a DynamoDB reserved word. Therefore, you must use an expression attribute name as a placeholder in the `KeyConditionExpression`.

The following C# code example demonstrates the preceding steps.

**Example**  

```
AmazonDynamoDBClient client = new AmazonDynamoDBClient();

QueryRequest queryRequest = new QueryRequest
{
    TableName = "WeatherData",
    IndexName = "PrecipIndex",
    KeyConditionExpression = "#dt = :v_date and Precipitation > :v_precip",
    ExpressionAttributeNames = new Dictionary<String, String> {
        {"#dt", "Date"}
    },
    ExpressionAttributeValues = new Dictionary<string, AttributeValue> {
        {":v_date", new AttributeValue { S =  "2013-08-01" }},
        {":v_precip", new AttributeValue { N =  "0" }}
    },
    ScanIndexForward = true
};

var result = client.Query(queryRequest);

var items = result.Items;
foreach (var currentItem in items)
{
    foreach (string attr in currentItem.Keys)
    {
        Console.Write(attr + "---> ");
        if (attr == "Precipitation")
        {
            Console.WriteLine(currentItem[attr].N);
        }
        else
        {
            Console.WriteLine(currentItem[attr].S);
        }
    }
    Console.WriteLine();
}
```
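
The type dispatch in the loop above (reading `.N` for numbers and `.S` for strings) reflects DynamoDB's typed attribute values, where exactly one type field of each value is set. The following stand-in model (sketched in Java to match this guide's earlier examples; the `Attr` class is hypothetical, not SDK API) shows the idea:

```java
import java.util.Map;

public class AttributeValueSketch {

    // A stand-in for DynamoDB's typed attribute value: exactly one field is set.
    static class Attr {
        final String s; // string value, as in AttributeValue.S
        final String n; // number value (carried as a string), as in AttributeValue.N

        Attr(String s, String n) { this.s = s; this.n = n; }

        // Renders whichever type field is populated.
        String render() { return n != null ? n : s; }
    }

    public static void main(String[] args) {
        Map<String, Attr> item = Map.of(
                "Location", new Attr("Seattle", null),
                "Precipitation", new Attr(null, "8"));

        System.out.println(item.get("Precipitation").render()); // prints 8
        System.out.println(item.get("Location").render()); // prints Seattle
    }
}
```

Note that DynamoDB transmits numbers as strings on the wire, which is why the `.N` field is string-typed in the SDKs.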

# Example: Global Secondary Indexes using the AWS SDK for .NET low-level API
<a name="GSILowLevelDotNet.Example"></a>

The following C# code example shows how to work with global secondary indexes. The example creates a table named `Issues`, which might be used in a simple bug tracking system for software development. The partition key is `IssueId` and the sort key is `Title`. There are three global secondary indexes on this table:
+ `CreateDateIndex` — The partition key is `CreateDate` and the sort key is `IssueId`. In addition to the table keys, the attributes `Description` and `Status` are projected into the index.
+ `TitleIndex` — The partition key is `Title` and the sort key is `IssueId`. No attributes other than the table keys are projected into the index.
+ `DueDateIndex` — The partition key is `DueDate`, and there is no sort key. All of the table attributes are projected into the index.

After the `Issues` table is created, the program loads the table with data representing software bug reports. It then queries the data using the global secondary indexes. Finally, the program deletes the `Issues` table.

For step-by-step instructions for testing the following sample, see [.NET code examples](CodeSamples.DotNet.md).

**Example**  

```
using System;
using System.Collections.Generic;
using System.Linq;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DataModel;
using Amazon.DynamoDBv2.DocumentModel;
using Amazon.DynamoDBv2.Model;
using Amazon.Runtime;
using Amazon.SecurityToken;

namespace com.amazonaws.codesamples
{
    class LowLevelGlobalSecondaryIndexExample
    {
        private static AmazonDynamoDBClient client = new AmazonDynamoDBClient();
        public static String tableName = "Issues";

        public static void Main(string[] args)
        {
            CreateTable();
            LoadData();

            QueryIndex("CreateDateIndex");
            QueryIndex("TitleIndex");
            QueryIndex("DueDateIndex");

            DeleteTable(tableName);

            Console.WriteLine("To continue, press enter");
            Console.Read();
        }

        private static void CreateTable()
        {
            // Attribute definitions
            var attributeDefinitions = new List<AttributeDefinition>()
        {
            {new AttributeDefinition {
                 AttributeName = "IssueId", AttributeType = "S"
             }},
            {new AttributeDefinition {
                 AttributeName = "Title", AttributeType = "S"
             }},
            {new AttributeDefinition {
                 AttributeName = "CreateDate", AttributeType = "S"
             }},
            {new AttributeDefinition {
                 AttributeName = "DueDate", AttributeType = "S"
             }}
        };

            // Key schema for table
            var tableKeySchema = new List<KeySchemaElement>() {
            {
                new KeySchemaElement {
                    AttributeName= "IssueId",
                    KeyType = "HASH" //Partition key
                }
            },
            {
                new KeySchemaElement {
                    AttributeName = "Title",
                    KeyType = "RANGE" //Sort key
                }
            }
        };

            // Initial provisioned throughput settings for the indexes
            var ptIndex = new ProvisionedThroughput
            {
                ReadCapacityUnits = 1L,
                WriteCapacityUnits = 1L
            };

            // CreateDateIndex
            var createDateIndex = new GlobalSecondaryIndex()
            {
                IndexName = "CreateDateIndex",
                ProvisionedThroughput = ptIndex,
                KeySchema = {
                new KeySchemaElement {
                    AttributeName = "CreateDate", KeyType = "HASH" //Partition key
                },
                new KeySchemaElement {
                    AttributeName = "IssueId", KeyType = "RANGE" //Sort key
                }
            },
                Projection = new Projection
                {
                    ProjectionType = "INCLUDE",
                    NonKeyAttributes = {
                    "Description", "Status"
                }
                }
            };

            // TitleIndex
            var titleIndex = new GlobalSecondaryIndex()
            {
                IndexName = "TitleIndex",
                ProvisionedThroughput = ptIndex,
                KeySchema = {
                new KeySchemaElement {
                    AttributeName = "Title", KeyType = "HASH" //Partition key
                },
                new KeySchemaElement {
                    AttributeName = "IssueId", KeyType = "RANGE" //Sort key
                }
            },
                Projection = new Projection
                {
                    ProjectionType = "KEYS_ONLY"
                }
            };

            // DueDateIndex
            var dueDateIndex = new GlobalSecondaryIndex()
            {
                IndexName = "DueDateIndex",
                ProvisionedThroughput = ptIndex,
                KeySchema = {
                new KeySchemaElement {
                    AttributeName = "DueDate",
                    KeyType = "HASH" //Partition key
                }
            },
                Projection = new Projection
                {
                    ProjectionType = "ALL"
                }
            };



            var createTableRequest = new CreateTableRequest
            {
                TableName = tableName,
                ProvisionedThroughput = new ProvisionedThroughput
                {
                    ReadCapacityUnits = (long)1,
                    WriteCapacityUnits = (long)1
                },
                AttributeDefinitions = attributeDefinitions,
                KeySchema = tableKeySchema,
                GlobalSecondaryIndexes = {
                createDateIndex, titleIndex, dueDateIndex
            }
            };

            Console.WriteLine("Creating table " + tableName + "...");
            client.CreateTable(createTableRequest);

            WaitUntilTableReady(tableName);
        }

        private static void LoadData()
        {
            Console.WriteLine("Loading data into table " + tableName + "...");

            // IssueId, Title,
            // Description,
            // CreateDate, LastUpdateDate, DueDate,
            // Priority, Status

            putItem("A-101", "Compilation error",
                "Can't compile Project X - bad version number. What does this mean?",
                "2013-11-01", "2013-11-02", "2013-11-10",
                1, "Assigned");

            putItem("A-102", "Can't read data file",
                "The main data file is missing, or the permissions are incorrect",
                "2013-11-01", "2013-11-04", "2013-11-30",
                2, "In progress");

            putItem("A-103", "Test failure",
                "Functional test of Project X produces errors",
                "2013-11-01", "2013-11-02", "2013-11-10",
                1, "In progress");

            putItem("A-104", "Compilation error",
                "Variable 'messageCount' was not initialized.",
                "2013-11-15", "2013-11-16", "2013-11-30",
                3, "Assigned");

            putItem("A-105", "Network issue",
                "Can't ping IP address 127.0.0.1. Please fix this.",
                "2013-11-15", "2013-11-16", "2013-11-19",
                5, "Assigned");
        }

        private static void putItem(
            String issueId, String title,
            String description,
            String createDate, String lastUpdateDate, String dueDate,
            Int32 priority, String status)
        {
            Dictionary<String, AttributeValue> item = new Dictionary<string, AttributeValue>();

            item.Add("IssueId", new AttributeValue
            {
                S = issueId
            });
            item.Add("Title", new AttributeValue
            {
                S = title
            });
            item.Add("Description", new AttributeValue
            {
                S = description
            });
            item.Add("CreateDate", new AttributeValue
            {
                S = createDate
            });
            item.Add("LastUpdateDate", new AttributeValue
            {
                S = lastUpdateDate
            });
            item.Add("DueDate", new AttributeValue
            {
                S = dueDate
            });
            item.Add("Priority", new AttributeValue
            {
                N = priority.ToString()
            });
            item.Add("Status", new AttributeValue
            {
                S = status
            });

            try
            {
                client.PutItem(new PutItemRequest
                {
                    TableName = tableName,
                    Item = item
                });
            }
            catch (Exception e)
            {
                Console.WriteLine(e.ToString());
            }
        }

        private static void QueryIndex(string indexName)
        {
            Console.WriteLine
                ("\n***********************************************************\n");
            Console.WriteLine("Querying index " + indexName + "...");

            QueryRequest queryRequest = new QueryRequest
            {
                TableName = tableName,
                IndexName = indexName,
                ScanIndexForward = true
            };


            String keyConditionExpression;
            Dictionary<string, AttributeValue> expressionAttributeValues = new Dictionary<string, AttributeValue>();

            if (indexName == "CreateDateIndex")
            {
                Console.WriteLine("Issues filed on 2013-11-01\n");

                keyConditionExpression = "CreateDate = :v_date and begins_with(IssueId, :v_issue)";
                expressionAttributeValues.Add(":v_date", new AttributeValue
                {
                    S = "2013-11-01"
                });
                expressionAttributeValues.Add(":v_issue", new AttributeValue
                {
                    S = "A-"
                });
            }
            else if (indexName == "TitleIndex")
            {
                Console.WriteLine("Compilation errors\n");

                keyConditionExpression = "Title = :v_title and begins_with(IssueId, :v_issue)";
                expressionAttributeValues.Add(":v_title", new AttributeValue
                {
                    S = "Compilation error"
                });
                expressionAttributeValues.Add(":v_issue", new AttributeValue
                {
                    S = "A-"
                });

                // Select
                queryRequest.Select = "ALL_PROJECTED_ATTRIBUTES";
            }
            else if (indexName == "DueDateIndex")
            {
                Console.WriteLine("Items that are due on 2013-11-30\n");

                keyConditionExpression = "DueDate = :v_date";
                expressionAttributeValues.Add(":v_date", new AttributeValue
                {
                    S = "2013-11-30"
                });

                // Select
                queryRequest.Select = "ALL_PROJECTED_ATTRIBUTES";
            }
            else
            {
                Console.WriteLine("\nNo valid index name provided");
                return;
            }

            queryRequest.KeyConditionExpression = keyConditionExpression;
            queryRequest.ExpressionAttributeValues = expressionAttributeValues;

            var result = client.Query(queryRequest);
            var items = result.Items;
            foreach (var currentItem in items)
            {
                foreach (string attr in currentItem.Keys)
                {
                    if (attr == "Priority")
                    {
                        Console.WriteLine(attr + "---> " + currentItem[attr].N);
                    }
                    else
                    {
                        Console.WriteLine(attr + "---> " + currentItem[attr].S);
                    }
                }
                Console.WriteLine();
            }
        }

        private static void DeleteTable(string tableName)
        {
            Console.WriteLine("Deleting table " + tableName + "...");
            client.DeleteTable(new DeleteTableRequest
            {
                TableName = tableName
            });
            WaitForTableToBeDeleted(tableName);
        }

        private static void WaitUntilTableReady(string tableName)
        {
            string status = null;
            // Wait until the table is created by polling DescribeTable.
            do
            {
                System.Threading.Thread.Sleep(5000); // Wait 5 seconds.
                try
                {
                    var res = client.DescribeTable(new DescribeTableRequest
                    {
                        TableName = tableName
                    });

                    Console.WriteLine("Table name: {0}, status: {1}",
                              res.Table.TableName,
                              res.Table.TableStatus);
                    status = res.Table.TableStatus;
                }
                catch (ResourceNotFoundException)
                {
                    // DescribeTable is eventually consistent, so it might
                    // return ResourceNotFoundException shortly after
                    // CreateTable. Keep polling until the table appears.
                }
            } while (status != "ACTIVE");
        }

        private static void WaitForTableToBeDeleted(string tableName)
        {
            bool tablePresent = true;

            while (tablePresent)
            {
                System.Threading.Thread.Sleep(5000); // Wait 5 seconds.
                try
                {
                    var res = client.DescribeTable(new DescribeTableRequest
                    {
                        TableName = tableName
                    });

                    Console.WriteLine("Table name: {0}, status: {1}",
                              res.Table.TableName,
                              res.Table.TableStatus);
                }
                catch (ResourceNotFoundException)
                {
                    tablePresent = false;
                }
            }
        }
    }
}
```

# Working with Global Secondary Indexes in DynamoDB using the AWS CLI
<a name="GCICli"></a>

You can use the AWS CLI to create an Amazon DynamoDB table with one or more global secondary indexes, describe the indexes on the table, and perform queries using the indexes.

**Topics**
+ [Create a table with a Global Secondary Index](#GCICli.CreateTableWithIndex)
+ [Add a Global Secondary Index to an existing table](#GCICli.CreateIndexAfterTable)
+ [Describe a table with a Global Secondary Index](#GCICli.DescribeTableWithIndex)
+ [Query a Global Secondary Index](#GCICli.QueryAnIndex)

## Create a table with a Global Secondary Index
<a name="GCICli.CreateTableWithIndex"></a>

You can create global secondary indexes at the same time that you create a table. To do this, use the `create-table` command and provide your specifications for one or more global secondary indexes. The following example creates a table named `GameScores` with a global secondary index called `GameTitleIndex`. The base table has a partition key of `UserId` and a sort key of `GameTitle`, allowing you to efficiently find an individual user's best score for a specific game. The index has a partition key of `GameTitle` and a sort key of `TopScore`, allowing you to quickly find the overall highest score for a particular game.

```
aws dynamodb create-table \
    --table-name GameScores \
    --attribute-definitions AttributeName=UserId,AttributeType=S \
                            AttributeName=GameTitle,AttributeType=S \
                            AttributeName=TopScore,AttributeType=N  \
    --key-schema AttributeName=UserId,KeyType=HASH \
                 AttributeName=GameTitle,KeyType=RANGE \
    --provisioned-throughput ReadCapacityUnits=10,WriteCapacityUnits=5 \
    --global-secondary-indexes \
        "[
            {
                \"IndexName\": \"GameTitleIndex\",
                \"KeySchema\": [{\"AttributeName\":\"GameTitle\",\"KeyType\":\"HASH\"},
                                {\"AttributeName\":\"TopScore\",\"KeyType\":\"RANGE\"}],
                \"Projection\":{
                    \"ProjectionType\":\"INCLUDE\",
                    \"NonKeyAttributes\":[\"UserId\"]
                },
                \"ProvisionedThroughput\": {
                    \"ReadCapacityUnits\": 10,
                    \"WriteCapacityUnits\": 5
                }
            }
        ]"
```

You must wait until DynamoDB creates the table and sets the table status to `ACTIVE`. After that, you can begin putting data items into the table. You can use [describe-table](https://docs.aws.amazon.com/cli/latest/reference/dynamodb/describe-table.html) to determine the status of the table creation.
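
The wait loop can be scripted. The sketch below is a minimal Python version, assuming a `describe_status` callable that returns the current table status string (in practice this would wrap a DescribeTable call):

```python
import time

def wait_until_active(describe_status, delay=5.0, max_attempts=25):
    """Poll until the table status is ACTIVE.

    describe_status: a callable that returns the table's current status
    string (for example "CREATING" or "ACTIVE"). Raises TimeoutError if
    the table does not become ACTIVE within max_attempts polls.
    """
    for _ in range(max_attempts):
        if describe_status() == "ACTIVE":
            return
        time.sleep(delay)
    raise TimeoutError("table did not become ACTIVE in time")
```

With the AWS CLI, `aws dynamodb wait table-exists --table-name GameScores` performs the same polling for you.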

## Add a Global Secondary Index to an existing table
<a name="GCICli.CreateIndexAfterTable"></a>

You can also add a global secondary index to an existing table, or modify an existing index, by using the `update-table` command with the `--global-secondary-index-updates` option. The following example uses the same index schema as the previous example, but assumes that the table has already been created; the index is added afterward.

```
aws dynamodb update-table \
    --table-name GameScores \
    --attribute-definitions AttributeName=TopScore,AttributeType=N  \
    --global-secondary-index-updates \
        "[
            {
                \"Create\": {
                    \"IndexName\": \"GameTitleIndex\",
                    \"KeySchema\": [{\"AttributeName\":\"GameTitle\",\"KeyType\":\"HASH\"},
                                    {\"AttributeName\":\"TopScore\",\"KeyType\":\"RANGE\"}],
                    \"Projection\":{
                        \"ProjectionType\":\"INCLUDE\",
                        \"NonKeyAttributes\":[\"UserId\"]
                    }
                }
            }
        ]"
```

## Describe a table with a Global Secondary Index
<a name="GCICli.DescribeTableWithIndex"></a>

To get information about the global secondary indexes on a table, use the `describe-table` command. For each index, you can see its name, key schema, and projected attributes.

```
aws dynamodb describe-table --table-name GameScores
```
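
The JSON output is verbose. A small Python sketch can pull out just the index summaries; the response below is a trimmed, hypothetical example of the shape that `describe-table` returns:

```python
# Trimmed, hypothetical DescribeTable response -- the real output of
# `aws dynamodb describe-table` carries many more fields.
response = {
    "Table": {
        "TableName": "GameScores",
        "GlobalSecondaryIndexes": [
            {
                "IndexName": "GameTitleIndex",
                "KeySchema": [
                    {"AttributeName": "GameTitle", "KeyType": "HASH"},
                    {"AttributeName": "TopScore", "KeyType": "RANGE"},
                ],
                "Projection": {
                    "ProjectionType": "INCLUDE",
                    "NonKeyAttributes": ["UserId"],
                },
            }
        ],
    }
}

# One line per index: name, key schema, and projection type.
summaries = [
    gsi["IndexName"]
    + ": "
    + ", ".join(f"{k['AttributeName']} ({k['KeyType']})" for k in gsi["KeySchema"])
    + " -> "
    + gsi["Projection"]["ProjectionType"]
    for gsi in response["Table"].get("GlobalSecondaryIndexes", [])
]
print("\n".join(summaries))
```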

## Query a Global Secondary Index
<a name="GCICli.QueryAnIndex"></a>

You can use the `query` operation on a global secondary index in much the same way that you `query` a table. You must specify the index name, the query criteria for the index partition key (and optionally the sort key), and the attributes that you want to return. In this example, the index is `GameTitleIndex` and the index partition key is `GameTitle`.

The only attributes returned are those that have been projected into the index. Unlike a local secondary index, a global secondary index cannot fetch non-projected attributes from the base table, so choose a projection that covers the queries you need to run. For more information, see [Attribute projections](GSI.md#GSI.Projections).

```
aws dynamodb query --table-name GameScores \
    --index-name GameTitleIndex \
    --key-condition-expression "GameTitle = :v_game" \
    --expression-attribute-values '{":v_game":{"S":"Alien Adventure"} }'
```

# Local secondary indexes
<a name="LSI"></a>

Some applications only need to query data using the base table's primary key. However, there might be situations where an alternative sort key would be helpful. To give your application a choice of sort keys, you can create one or more local secondary indexes on an Amazon DynamoDB table and issue `Query` or `Scan` requests against these indexes.

**Topics**
+ [Scenario: Using a Local Secondary Index](#LSI.Scenario)
+ [Attribute projections](#LSI.Projections)
+ [Creating a Local Secondary Index](#LSI.Creating)
+ [Reading data from a Local Secondary Index](#LSI.Reading)
+ [Item writes and Local Secondary Indexes](#LSI.Writes)
+ [Provisioned throughput considerations for Local Secondary Indexes](#LSI.ThroughputConsiderations)
+ [Storage considerations for Local Secondary Indexes](#LSI.StorageConsiderations)
+ [Item collections in Local Secondary Indexes](#LSI.ItemCollections)
+ [Working with Local Secondary Indexes: Java](LSIJavaDocumentAPI.md)
+ [Working with Local Secondary Indexes: .NET](LSILowLevelDotNet.md)
+ [Working with Local Secondary Indexes in DynamoDB using the AWS CLI](LCICli.md)

## Scenario: Using a Local Secondary Index
<a name="LSI.Scenario"></a>

As an example, consider the `Thread` table. This table is useful for an application such as the [AWS discussion forums](https://forums.aws.amazon.com/). The following diagram shows how the items in the table would be organized. (Not all of the attributes are shown.)

![\[Thread table containing a list of forum names, subjects, last post time, and number of replies.\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/images/LSI_01.png)


DynamoDB stores all of the items with the same partition key value contiguously. In this example, given a particular `ForumName`, a `Query` operation could immediately locate all of the threads for that forum. Within a group of items with the same partition key value, the items are sorted by sort key value. If the sort key (`Subject`) is also provided in the query, DynamoDB can narrow down the results that are returned—for example, returning all of the threads in the "S3" forum that have a `Subject` beginning with the letter "a".

Some requests might require more complex data access patterns. For example:
+ Which forum threads get the most views and replies?
+ Which thread in a particular forum has the largest number of messages?
+ How many threads were posted in a particular forum within a particular time period?

To answer these questions, the `Query` action would not be sufficient. Instead, you would have to `Scan` the entire table. For a table with millions of items, this would consume a large amount of provisioned read throughput and take a long time to complete.

However, you can specify one or more local secondary indexes on non-key attributes, such as `Replies` or `LastPostDateTime`.

A *local secondary index* maintains an alternate sort key for a given partition key value. A local secondary index also contains a copy of some or all of the attributes from its base table. You specify which attributes are projected into the local secondary index when you create the table. The data in a local secondary index is organized by the same partition key as the base table, but with a different sort key. This lets you access data items efficiently across this different dimension. For greater query or scan flexibility, you can create up to five local secondary indexes per table. 

Suppose that an application needs to find all of the threads that have been posted within the last three months in a particular forum. Without a local secondary index, the application would have to `Scan` the entire `Thread` table and discard any posts that were not within the specified time frame. With a local secondary index, a `Query` operation could use `LastPostDateTime` as a sort key and find the data quickly.

The following diagram shows a local secondary index named `LastPostIndex`. Note that the partition key is the same as that of the `Thread` table, but the sort key is `LastPostDateTime`.

![\[LastPostIndex table containing a list of forum names, subjects, and last post time.\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/images/LSI_02.png)


Every local secondary index must meet the following conditions:
+ The partition key is the same as that of its base table.
+ The sort key consists of exactly one scalar attribute.
+ The sort key of the base table is projected into the index, where it acts as a non-key attribute.

In this example, the partition key is `ForumName` and the sort key of the local secondary index is `LastPostDateTime`. In addition, the sort key value from the base table (in this example, `Subject`) is projected into the index, but it is not a part of the index key. If an application needs a list that is based on `ForumName` and `LastPostDateTime`, it can issue a `Query` request against `LastPostIndex`. The query results are sorted by `LastPostDateTime`, and can be returned in ascending or descending order. The query can also apply key conditions, such as returning only items that have a `LastPostDateTime` within a particular time span.

Every local secondary index automatically contains the partition and sort keys from its base table; you can optionally project non-key attributes into the index. When you query the index, DynamoDB can retrieve these projected attributes efficiently. When you query a local secondary index, the query can also retrieve attributes that are *not* projected into the index. DynamoDB automatically fetches these attributes from the base table, but at a greater latency and with higher provisioned throughput costs.

For any local secondary index, you can store up to 10 GB of data per distinct partition key value. This figure includes all of the items in the base table, plus all of the items in the indexes, that have the same partition key value. For more information, see [Item collections in Local Secondary Indexes](#LSI.ItemCollections).

## Attribute projections
<a name="LSI.Projections"></a>

With `LastPostIndex`, an application could use `ForumName` and `LastPostDateTime` as query criteria. However, to retrieve any additional attributes, DynamoDB must perform additional read operations against the `Thread` table. These extra reads are known as *fetches*, and they can increase the total amount of provisioned throughput required for a query.

Suppose that you wanted to populate a webpage with a list of all the threads in "S3" and the number of replies for each thread, sorted by the last reply date/time beginning with the most recent reply. To populate this list, you would need the following attributes:
+ `Subject`
+ `Replies`
+ `LastPostDateTime`

The most efficient way to query this data and to avoid fetch operations would be to project the `Replies` attribute from the table into the local secondary index, as shown in this diagram.

![\[LastPostIndex table containing a list of forum names, last post times, subjects, and replies.\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/images/LSI_03.png)




A *projection* is the set of attributes that is copied from a table into a secondary index. The partition key and sort key of the table are always projected into the index; you can project other attributes to support your application's query requirements. When you query an index, Amazon DynamoDB can access any attribute in the projection as if those attributes were in a table of their own.

When you create a secondary index, you need to specify the attributes that will be projected into the index. DynamoDB provides three different options for this:
+ *KEYS\_ONLY* – Each item in the index consists only of the table partition key and sort key values, plus the index key values. The `KEYS_ONLY` option results in the smallest possible secondary index.
+ *INCLUDE* – In addition to the attributes described in `KEYS_ONLY`, the secondary index will include other non-key attributes that you specify.
+ *ALL* – The secondary index includes all of the attributes from the source table. Because all of the table data is duplicated in the index, an `ALL` projection results in the largest possible secondary index.
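
These options correspond to the `Projection` element of the index specification in a `CreateTable` request. The following Python sketch shows all three shapes, using `LastPostIndex` from the scenario above (the `Replies` attribute in the `INCLUDE` case is illustrative):

```python
# The three Projection shapes, as they would appear in a CreateTable
# request. KEYS_ONLY is the smallest index; ALL is the largest.
keys_only = {"ProjectionType": "KEYS_ONLY"}
include = {
    "ProjectionType": "INCLUDE",
    "NonKeyAttributes": ["Replies"],  # extra non-key attributes to copy
}
all_attributes = {"ProjectionType": "ALL"}

# An index specification combining a key schema with one projection.
local_secondary_index = {
    "IndexName": "LastPostIndex",
    "KeySchema": [
        {"AttributeName": "ForumName", "KeyType": "HASH"},          # same as the table
        {"AttributeName": "LastPostDateTime", "KeyType": "RANGE"},  # alternate sort key
    ],
    "Projection": include,
}
```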

In the previous diagram, the non-key attribute `Replies` is projected into `LastPostIndex`. An application can query `LastPostIndex` instead of the full `Thread` table to populate a webpage with `Subject`, `Replies`, and `LastPostDateTime`. If any other non-key attributes are requested, DynamoDB would need to fetch those attributes from the `Thread` table. 

From an application's point of view, fetching additional attributes from the base table is automatic and transparent, so there is no need to rewrite any application logic. However, such fetching can greatly reduce the performance advantage of using a local secondary index.

When you choose the attributes to project into a local secondary index, you must consider the tradeoff between provisioned throughput costs and storage costs:
+ If you need to access just a few attributes with the lowest possible latency, consider projecting only those attributes into a local secondary index. The smaller the index, the less that it costs to store it, and the less your write costs are. If there are attributes that you occasionally need to fetch, the cost for provisioned throughput may well outweigh the longer-term cost of storing those attributes.
+ If your application frequently accesses some non-key attributes, consider projecting those attributes into a local secondary index. The additional storage costs for the index offset the cost of performing frequent fetches from the base table.
+ If you need to access most of the non-key attributes on a frequent basis, you can project these attributes—or even the entire base table—into a local secondary index. This gives you maximum flexibility with the lowest provisioned throughput consumption, because no fetching would be required. However, your storage cost would increase, or even double if you project all attributes.
+ If your application needs to query a table infrequently, but must perform many writes or updates against the data in the table, consider projecting *KEYS\_ONLY*. The local secondary index would be of minimal size, but would still be available when needed for query activity. 

## Creating a Local Secondary Index
<a name="LSI.Creating"></a>

To create one or more local secondary indexes on a table, use the `LocalSecondaryIndexes` parameter of the `CreateTable` operation. Local secondary indexes on a table are created when the table is created. When you delete a table, any local secondary indexes on that table are also deleted.

You must specify one non-key attribute to act as the sort key of the local secondary index. The attribute that you choose must be a scalar `String`, `Number`, or `Binary`. Other scalar types, document types, and set types are not allowed. For a complete list of data types, see [Data types](HowItWorks.NamingRulesDataTypes.md#HowItWorks.DataTypes).

**Important**  
For tables with local secondary indexes, there is a 10 GB size limit per partition key value. A table with local secondary indexes can store any number of items, as long as the total size for any one partition key value does not exceed 10 GB. For more information, see [Item collection size limit](#LSI.ItemCollections.SizeLimit).

You can project attributes of any data type into a local secondary index. This includes scalars, documents, and sets. For a complete list of data types, see [Data types](HowItWorks.NamingRulesDataTypes.md#HowItWorks.DataTypes).

## Reading data from a Local Secondary Index
<a name="LSI.Reading"></a>

You can retrieve items from a local secondary index using the `Query` and `Scan` operations. The `GetItem` and `BatchGetItem` operations can't be used on a local secondary index.

### Querying a Local Secondary Index
<a name="LSI.Querying"></a>

In a DynamoDB table, the combined partition key value and sort key value for each item must be unique. However, in a local secondary index, the sort key value does not need to be unique for a given partition key value. If there are multiple items in the local secondary index that have the same sort key value, a `Query` operation returns all of the items that have the same partition key value. In the response, the matching items are not returned in any particular order.

You can query a local secondary index using either eventually consistent or strongly consistent reads. To specify which type of consistency you want, use the `ConsistentRead` parameter of the `Query` operation. A strongly consistent read from a local secondary index always returns the latest updated values. If the query needs to fetch additional attributes from the base table, those attributes will be consistent with respect to the index.

**Example**  
Consider the following data returned from a `Query` that requests data from the discussion threads in a particular forum.  

```
{
    "TableName": "Thread",
    "IndexName": "LastPostIndex",
    "ConsistentRead": false,
    "ProjectionExpression": "Subject, LastPostDateTime, Replies, Tags",
    "KeyConditionExpression": 
        "ForumName = :v_forum and LastPostDateTime between :v_start and :v_end",
    "ExpressionAttributeValues": {
        ":v_start": {"S": "2015-08-31T00:00:00.000Z"},
        ":v_end": {"S": "2015-11-30T00:00:00.000Z"},
        ":v_forum": {"S": "EC2"}
    }
}
```
In this query:  
+ DynamoDB accesses `LastPostIndex`, using the `ForumName` partition key to locate the index items for "EC2". All of the index items with this key are stored adjacent to each other for rapid retrieval.
+ Within this forum, DynamoDB uses the index to look up the keys that match the specified `LastPostDateTime` condition.
+ Because the `Replies` attribute is projected into the index, DynamoDB can retrieve this attribute without consuming any additional provisioned throughput.
+ The `Tags` attribute is not projected into the index, so DynamoDB must access the `Thread` table and fetch this attribute.
+ The results are returned, sorted by `LastPostDateTime`. The index entries are sorted by partition key value and then by sort key value, and `Query` returns them in the order they are stored. (You can use the `ScanIndexForward` parameter to return the results in descending order.)

Because the `Tags` attribute is not projected into the local secondary index, DynamoDB must consume additional read capacity units to fetch this attribute from the base table. If you need to run this query often, you should project `Tags` into `LastPostIndex` to avoid fetching from the base table. However, if you need to access `Tags` only occasionally, the additional storage cost for projecting it into the index might not be worthwhile.

### Scanning a Local Secondary Index
<a name="LSI.Scanning"></a>

You can use `Scan` to retrieve all of the data from a local secondary index. You must provide the base table name and the index name in the request. With a `Scan`, DynamoDB reads all of the data in the index and returns it to the application. You can also request that only some of the data be returned, and that the remaining data should be discarded. To do this, use the `FilterExpression` parameter of the `Scan` API. For more information, see [Filter expressions for scan](Scan.md#Scan.FilterExpression).
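
As a sketch, the request might look like the following (the table, index, and filter values come from the `Thread` example; the threshold is illustrative). Note that the filter is applied after the data is read, so filtered-out items still consume read capacity:

```python
# Hypothetical Scan request against the index: threads with more than
# five replies. FilterExpression runs after the read, so discarded
# items still count toward consumed read capacity.
scan_request = {
    "TableName": "Thread",
    "IndexName": "LastPostIndex",
    "FilterExpression": "Replies > :minReplies",
    "ExpressionAttributeValues": {":minReplies": {"N": "5"}},
}
```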

## Item writes and Local Secondary Indexes
<a name="LSI.Writes"></a>

DynamoDB automatically keeps all local secondary indexes synchronized with their respective base tables. Applications never write directly to an index. However, it is important that you understand the implications of how DynamoDB maintains these indexes.

When you create a local secondary index, you specify an attribute to serve as the sort key for the index. You also specify a data type for that attribute. This means that whenever you write an item to the base table, if the item defines an index key attribute, its type must match the index key schema's data type. In the case of `LastPostIndex`, the `LastPostDateTime` sort key in the index is defined as a `String` data type. If you try to add an item to the `Thread` table and specify a different data type for `LastPostDateTime` (such as `Number`), DynamoDB returns a `ValidationException` because of the data type mismatch.
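
The type check that DynamoDB performs can be illustrated with a small Python sketch (a hypothetical client-side version; the real validation happens server-side and surfaces as a `ValidationException`):

```python
# Index key attributes and their declared types, from the index key
# schema ("S" = String, "N" = Number, "B" = Binary).
INDEX_KEY_TYPES = {"LastPostDateTime": "S"}

def check_index_key_types(item):
    """Raise if an index key attribute uses the wrong type descriptor."""
    for name, declared in INDEX_KEY_TYPES.items():
        if name in item:
            (actual,) = item[name].keys()
            if actual != declared:
                raise ValueError(
                    f"{name} is type {actual}, but the index expects {declared}"
                )

# A String value matches the schema; a Number value would not.
check_index_key_types({"LastPostDateTime": {"S": "2015-11-15T00:00:00.000Z"}})
```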

There is no requirement for a one-to-one relationship between the items in a base table and the items in a local secondary index. In fact, this behavior can be advantageous for many applications. 

A table with many local secondary indexes incurs higher costs for write activity than tables with fewer indexes. For more information, see [Provisioned throughput considerations for Local Secondary Indexes](#LSI.ThroughputConsiderations).

**Important**  
For tables with local secondary indexes, there is a 10 GB size limit per partition key value. A table with local secondary indexes can store any number of items, as long as the total size for any one partition key value does not exceed 10 GB. For more information, see [Item collection size limit](#LSI.ItemCollections.SizeLimit).

## Provisioned throughput considerations for Local Secondary Indexes
<a name="LSI.ThroughputConsiderations"></a>

When you create a table in DynamoDB, you provision read and write capacity units for the table's expected workload. That workload includes read and write activity on the table's local secondary indexes.

To view the current rates for provisioned throughput capacity, see [Amazon DynamoDB pricing](https://aws.amazon.com/dynamodb/pricing).

### Read capacity units
<a name="LSI.ThroughputConsiderations.Reads"></a>

When you query a local secondary index, the number of read capacity units consumed depends on how the data is accessed.

As with table queries, an index query can use either eventually consistent or strongly consistent reads depending on the value of `ConsistentRead`. One strongly consistent read consumes one read capacity unit; an eventually consistent read consumes only half of that. Thus, by choosing eventually consistent reads, you can reduce your read capacity unit charges.

For index queries that request only index keys and projected attributes, DynamoDB calculates the provisioned read activity in the same way as it does for queries against tables. The only difference is that the calculation is based on the sizes of the index entries, rather than the size of the item in the base table. The number of read capacity units is the sum of all projected attribute sizes across all of the items returned; the result is then rounded up to the next 4 KB boundary. For more information about how DynamoDB calculates provisioned throughput usage, see [DynamoDB provisioned capacity mode](provisioned-capacity-mode.md).
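
The calculation above can be sketched as arithmetic. The helper below is a simplified model, not the billing formula itself: sum the projected entry sizes, round up to the next 4 KB boundary, and halve the result for eventually consistent reads:

```python
import math

def index_query_rcus(entry_sizes_bytes, strongly_consistent=True):
    """Approximate RCUs for a Query that reads only projected data."""
    total = sum(entry_sizes_bytes)
    units = max(1, math.ceil(total / 4096))  # round up to a 4 KB boundary
    return units if strongly_consistent else units / 2

print(index_query_rcus([200] * 4))                             # strongly consistent
print(index_query_rcus([200] * 4, strongly_consistent=False))  # eventually consistent
```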

For index queries that read attributes that are not projected into the local secondary index, DynamoDB needs to fetch those attributes from the base table, in addition to reading the projected attributes from the index. These fetches occur when you include any non-projected attributes in the `Select` or `ProjectionExpression` parameters of the `Query` operation. Fetching causes additional latency in query responses, and it also incurs a higher provisioned throughput cost: In addition to the reads from the local secondary index described previously, you are charged for read capacity units for every base table item fetched. This charge is for reading each entire item from the table, not just the requested attributes.

The maximum size of the results returned by a `Query` operation is 1 MB. This includes the sizes of all the attribute names and values across all of the items returned. However, if a Query against a local secondary index causes DynamoDB to fetch item attributes from the base table, the maximum size of the data in the results might be lower. In this case, the result size is the sum of:
+ The size of the matching items in the index, rounded up to the next 4 KB.
+ The size of each matching item in the base table, with each item individually rounded up to the next 4 KB.

Using this formula, the maximum size of the results returned by a Query operation is still 1 MB.

For example, consider a table where the size of each item is 300 bytes. There is a local secondary index on that table, but only 200 bytes of each item is projected into the index. Now suppose that you `Query` this index, that the query requires table fetches for each item, and that the query returns 4 items. DynamoDB sums up the following:
+ The size of the matching items in the index: 200 bytes × 4 items = 800 bytes; this is then rounded up to 4 KB.
+ The size of each matching item in the base table: (300 bytes, rounded up to 4 KB) × 4 items = 16 KB.

The total size of the data in the result is therefore 20 KB.
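The arithmetic in this example can be checked with a short sketch (names are illustrative, not part of any SDK): index entries are summed and rounded once, while fetched base-table items are each rounded individually.

```java
// Illustrative sketch of the result-size formula for a Query on a local
// secondary index that fetches non-projected attributes from the base table.
public class LsiFetchSize {

    static final int BLOCK_BYTES = 4 * 1024; // 4 KB boundary

    static long roundUpTo4Kb(long bytes) {
        return ((bytes + BLOCK_BYTES - 1) / BLOCK_BYTES) * BLOCK_BYTES;
    }

    public static long resultSizeBytes(long indexEntryBytes, long tableItemBytes, int itemCount) {
        long indexPart = roundUpTo4Kb(indexEntryBytes * itemCount); // entries summed, rounded once
        long tablePart = roundUpTo4Kb(tableItemBytes) * itemCount;  // each item rounded individually
        return indexPart + tablePart;
    }

    public static void main(String[] args) {
        // 200-byte index entries, 300-byte table items, 4 items: 4 KB + 16 KB = 20 KB
        System.out.println(resultSizeBytes(200, 300, 4) / 1024 + " KB"); // prints "20 KB"
    }
}
```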

### Write capacity units
<a name="LSI.ThroughputConsiderations.Writes"></a>

When an item in a table is added, updated, or deleted, updating the local secondary indexes consumes provisioned write capacity units for the table. The total provisioned throughput cost for a write is the sum of write capacity units consumed by writing to the table and those consumed by updating the local secondary indexes.

The cost of writing an item to a local secondary index depends on several factors:
+ If you write a new item to the table that defines an indexed attribute, or you update an existing item to define a previously undefined indexed attribute, one write operation is required to put the item into the index.
+ If an update to the table changes the value of an indexed key attribute (from A to B), two writes are required: one to delete the previous item from the index and another to put the new item into the index.
+ If an item was present in the index, but a write to the table caused the indexed attribute to be deleted, one write is required to delete the old item projection from the index.
+ If an item is not present in the index before or after the item is updated, there is no additional write cost for the index.

All of these factors assume that the size of each item in the index is less than or equal to the 1 KB item size for calculating write capacity units. Larger index entries require additional write capacity units. You can minimize your write costs by considering which attributes your queries need to return and projecting only those attributes into the index.

## Storage considerations for Local Secondary Indexes
<a name="LSI.StorageConsiderations"></a>

When an application writes an item to a table, DynamoDB automatically copies the correct subset of attributes to any local secondary indexes in which those attributes should appear. Your AWS account is charged for storage of the item in the base table and also for storage of attributes in any local secondary indexes on that table.

The amount of space used by an index item is the sum of the following:
+ The size in bytes of the base table primary key (partition key and sort key)
+ The size in bytes of the index key attribute
+ The size in bytes of the projected attributes (if any)
+ 100 bytes of overhead per index item

To estimate the storage requirements for a local secondary index, you can estimate the average size of an item in the index and then multiply by the number of items in the index.
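The estimate above can be sketched as follows. The class, method names, and the attribute sizes in `main` are hypothetical, chosen only to illustrate the formula.

```java
// Illustrative sketch of the per-item storage formula above: base table
// primary key + index key attribute + projected attributes + 100 bytes of
// per-item overhead, multiplied by the number of items in the index.
public class LsiStorageEstimator {

    public static long indexItemBytes(long tableKeyBytes, long indexKeyBytes, long projectedBytes) {
        return tableKeyBytes + indexKeyBytes + projectedBytes + 100; // 100-byte overhead per item
    }

    public static long estimatedIndexBytes(long avgIndexItemBytes, long itemCount) {
        return avgIndexItemBytes * itemCount;
    }

    public static void main(String[] args) {
        // Hypothetical sizes: 30-byte table key, 10-byte index key, 200 bytes projected.
        long perItem = indexItemBytes(30, 10, 200); // 340 bytes per index item
        System.out.println(estimatedIndexBytes(perItem, 1_000_000)); // 340000000
    }
}
```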

If a table contains an item where a particular attribute is not defined, but that attribute is defined as an index sort key, then DynamoDB does not write any data for that item to the index. 

## Item collections in Local Secondary Indexes
<a name="LSI.ItemCollections"></a>

**Note**  
This section pertains only to tables that have local secondary indexes.

In DynamoDB, an *item collection* is any group of items that have the same partition key value in a table and all of its local secondary indexes. In the examples used throughout this section, the partition key for the `Thread` table is `ForumName`, and the partition key for `LastPostIndex` is also `ForumName`. All the table and index items with the same `ForumName` are part of the same item collection. For example, in the `Thread` table and the `LastPostIndex` local secondary index, there is an item collection for forum `EC2` and a different item collection for forum `RDS`.

The following diagram shows the item collection for forum `S3`.

![\[A DynamoDB item collection with table and Local Secondary Index items that have the same partition key value of S3.\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/images/LSI_04.png)


In this diagram, the item collection consists of all the items in `Thread` and `LastPostIndex` where the `ForumName` partition key value is "S3". If there were other local secondary indexes on the table, any items in those indexes with `ForumName` equal to "S3" would also be part of the item collection.

You can use any of the following operations in DynamoDB to return information about item collections:
+ `BatchWriteItem`
+ `DeleteItem`
+ `PutItem`
+ `UpdateItem`
+ `TransactWriteItems`

Each of these operations supports the `ReturnItemCollectionMetrics` parameter. When you set this parameter to `SIZE`, you can view information about the size of each item collection in the index.

**Example**  
The following is an example from the output of an `UpdateItem` operation on the `Thread` table, with `ReturnItemCollectionMetrics` set to `SIZE`. The item that was updated had a `ForumName` value of "EC2", so the output includes information about that item collection.  

```
{
    ItemCollectionMetrics: {
        ItemCollectionKey: {
            ForumName: "EC2"
        },
        SizeEstimateRangeGB: [0.0, 1.0]
    }
}
```
The `SizeEstimateRangeGB` object shows that the size of this item collection is between 0 and 1 GB. DynamoDB periodically updates this size estimate, so the numbers might be different next time the item is modified.

### Item collection size limit
<a name="LSI.ItemCollections.SizeLimit"></a>

The maximum size of any item collection in a table that has one or more local secondary indexes is 10 GB. This limit does not apply to item collections in tables without local secondary indexes, or to item collections in global secondary indexes.

If an item collection exceeds the 10 GB limit, DynamoDB may return an `ItemCollectionSizeLimitExceededException`, and you may not be able to add more items to the item collection or increase the sizes of items that are in the item collection. (Read and write operations that shrink the size of the item collection are still allowed.) You can still add items to other item collections.

To reduce the size of an item collection, you can do one of the following:
+ Delete any unnecessary items with the partition key value in question. When you delete these items from the base table, DynamoDB also removes any index entries that have the same partition key value.
+ Update the items by removing attributes or by reducing the size of the attributes. If these attributes are projected into any local secondary indexes, DynamoDB also reduces the size of the corresponding index entries.
+ Create a new table with the same partition key and sort key, and then move items from the old table to the new table. This might be a good approach if a table has historical data that is infrequently accessed. You might also consider archiving this historical data to Amazon Simple Storage Service (Amazon S3).

When the total size of the item collection drops below 10 GB, you can once again add items with the same partition key value.

We recommend as a best practice that you instrument your application to monitor the sizes of your item collections. One way to do so is to set the `ReturnItemCollectionMetrics` parameter to `SIZE` whenever you use `BatchWriteItem`, `DeleteItem`, `PutItem`, or `UpdateItem`. Your application should examine the `ItemCollectionMetrics` object in the output and log an error message whenever an item collection exceeds a user-defined limit (8 GB, for example). Setting a limit below 10 GB gives you an early warning that an item collection is approaching the 10 GB maximum, while there is still time to act.
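A minimal sketch of such a check, assuming an 8 GB user-defined threshold (the class and method names are our own; `SizeEstimateRangeGB` arrives as a lower/upper pair of gigabyte values, as shown in the example output earlier):

```java
// Illustrative early-warning check for item collection sizes. The caller
// passes the SizeEstimateRangeGB pair returned with ItemCollectionMetrics;
// we compare the upper bound against a user-defined threshold below 10 GB.
public class ItemCollectionMonitor {

    static final double WARN_THRESHOLD_GB = 8.0; // user-defined, below the 10 GB hard limit

    public static boolean nearLimit(double lowerGb, double upperGb) {
        return upperGb >= WARN_THRESHOLD_GB; // be pessimistic: use the upper bound
    }

    public static void main(String[] args) {
        System.out.println(nearLimit(0.0, 1.0)); // false
        System.out.println(nearLimit(7.5, 8.5)); // true: time to shrink the collection
    }
}
```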

### Item collections and partitions
<a name="LSI.ItemCollections.OnePartition"></a>

In a table with one or more local secondary indexes, each item collection is stored in one partition. The total size of such an item collection is therefore limited to the maximum size of a single partition: 10 GB. For an application where the data model includes item collections that are unbounded in size, or where you might reasonably expect some item collections to grow beyond 10 GB in the future, consider using a global secondary index instead.

You should design your applications so that table data is evenly distributed across distinct partition key values. For tables with local secondary indexes, your applications should not create "hot spots" of read and write activity within a single item collection on a single partition. 

# Working with Local Secondary Indexes: Java
<a name="LSIJavaDocumentAPI"></a>

You can use the AWS SDK for Java Document API to create an Amazon DynamoDB table with one or more local secondary indexes, describe the indexes on the table, and perform queries using the indexes.

The following are the common steps for table operations using the AWS SDK for Java Document API.

1. Create an instance of the `DynamoDB` class.

1. Provide the required and optional parameters for the operation by creating the corresponding request objects. 

1. Call the appropriate method provided by the client that you created in the preceding step. 

**Topics**
+ [Create a table with a Local Secondary Index](#LSIJavaDocumentAPI.CreateTableWithIndex)
+ [Describe a table with a Local Secondary Index](#LSIJavaDocumentAPI.DescribeTableWithIndex)
+ [Query a Local Secondary Index](#LSIJavaDocumentAPI.QueryAnIndex)
+ [Example: Local Secondary Indexes using the Java document API](LSIJavaDocumentAPI.Example.md)

## Create a table with a Local Secondary Index
<a name="LSIJavaDocumentAPI.CreateTableWithIndex"></a>

Local secondary indexes must be created at the same time you create a table. To do this, use the `createTable` method and provide your specifications for one or more local secondary indexes. The following Java code example creates a table to hold information about songs in a music collection. The partition key is `Artist` and the sort key is `SongTitle`. A secondary index, `AlbumTitleIndex`, facilitates queries by album title. 

The following are the steps to create a table with a local secondary index, using the DynamoDB document API. 

1. Create an instance of the `DynamoDB` class.

1. Create an instance of the `CreateTableRequest` class to provide the request information. 

   You must provide the table name, its primary key, and the provisioned throughput values. For the local secondary index, you must provide the index name, the name and data type for the index sort key, the key schema for the index, and the attribute projection.

1. Call the `createTable` method by providing the request object as a parameter.

The following Java code example demonstrates the preceding steps. The code creates a table (`Music`) with a secondary index on the `AlbumTitle` attribute. In addition to the table partition key, the table sort key, and the index sort key, which are always present in the index, the code projects the non-key attributes `Genre` and `Year`.

```
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
DynamoDB dynamoDB = new DynamoDB(client);

String tableName = "Music";

CreateTableRequest createTableRequest = new CreateTableRequest().withTableName(tableName);

//ProvisionedThroughput
createTableRequest.setProvisionedThroughput(new ProvisionedThroughput()
    .withReadCapacityUnits(5L)
    .withWriteCapacityUnits(5L));

//AttributeDefinitions
ArrayList<AttributeDefinition> attributeDefinitions= new ArrayList<AttributeDefinition>();
attributeDefinitions.add(new AttributeDefinition().withAttributeName("Artist").withAttributeType("S"));
attributeDefinitions.add(new AttributeDefinition().withAttributeName("SongTitle").withAttributeType("S"));
attributeDefinitions.add(new AttributeDefinition().withAttributeName("AlbumTitle").withAttributeType("S"));

createTableRequest.setAttributeDefinitions(attributeDefinitions);

//KeySchema
ArrayList<KeySchemaElement> tableKeySchema = new ArrayList<KeySchemaElement>();
tableKeySchema.add(new KeySchemaElement().withAttributeName("Artist").withKeyType(KeyType.HASH));  //Partition key
tableKeySchema.add(new KeySchemaElement().withAttributeName("SongTitle").withKeyType(KeyType.RANGE));  //Sort key

createTableRequest.setKeySchema(tableKeySchema);

ArrayList<KeySchemaElement> indexKeySchema = new ArrayList<KeySchemaElement>();
indexKeySchema.add(new KeySchemaElement().withAttributeName("Artist").withKeyType(KeyType.HASH));  //Partition key
indexKeySchema.add(new KeySchemaElement().withAttributeName("AlbumTitle").withKeyType(KeyType.RANGE));  //Sort key

Projection projection = new Projection().withProjectionType(ProjectionType.INCLUDE);
ArrayList<String> nonKeyAttributes = new ArrayList<String>();
nonKeyAttributes.add("Genre");
nonKeyAttributes.add("Year");
projection.setNonKeyAttributes(nonKeyAttributes);

LocalSecondaryIndex localSecondaryIndex = new LocalSecondaryIndex()
    .withIndexName("AlbumTitleIndex").withKeySchema(indexKeySchema).withProjection(projection);

ArrayList<LocalSecondaryIndex> localSecondaryIndexes = new ArrayList<LocalSecondaryIndex>();
localSecondaryIndexes.add(localSecondaryIndex);
createTableRequest.setLocalSecondaryIndexes(localSecondaryIndexes);

Table table = dynamoDB.createTable(createTableRequest);
System.out.println(table.getDescription());
```

You must wait until DynamoDB creates the table and sets the table status to `ACTIVE`. After that, you can begin putting data items into the table.

## Describe a table with a Local Secondary Index
<a name="LSIJavaDocumentAPI.DescribeTableWithIndex"></a>

To get information about local secondary indexes on a table, use the `describeTable` method. For each index, you can access its name, key schema, and projected attributes.

The following are the steps to access local secondary index information of a table using the AWS SDK for Java Document API.

1. Create an instance of the `DynamoDB` class.

1. Create an instance of the `Table` class. You must provide the table name.

1. Call the `describeTable` method on the `Table` object.

The following Java code example demonstrates the preceding steps.

**Example**  

```
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
DynamoDB dynamoDB = new DynamoDB(client);

String tableName = "Music";

Table table = dynamoDB.getTable(tableName);

TableDescription tableDescription = table.describe();

List<LocalSecondaryIndexDescription> localSecondaryIndexes 
    = tableDescription.getLocalSecondaryIndexes();

// This code snippet will work for multiple indexes, even though
// there is only one index in this example.

Iterator<LocalSecondaryIndexDescription> lsiIter = localSecondaryIndexes.iterator();
while (lsiIter.hasNext()) {

    LocalSecondaryIndexDescription lsiDescription = lsiIter.next();
    System.out.println("Info for index " + lsiDescription.getIndexName() + ":");
    Iterator<KeySchemaElement> kseIter = lsiDescription.getKeySchema().iterator();
    while (kseIter.hasNext()) {
        KeySchemaElement kse = kseIter.next();
        System.out.printf("\t%s: %s\n", kse.getAttributeName(), kse.getKeyType());
    }
    Projection projection = lsiDescription.getProjection();
    System.out.println("\tThe projection type is: " + projection.getProjectionType());
    if (projection.getProjectionType().toString().equals("INCLUDE")) {
        System.out.println("\t\tThe non-key projected attributes are: " + projection.getNonKeyAttributes());
    }
}
```

## Query a Local Secondary Index
<a name="LSIJavaDocumentAPI.QueryAnIndex"></a>

You can use the `Query` operation on a local secondary index in much the same way that you `Query` a table. You must specify the index name, the query criteria for the index sort key, and the attributes that you want to return. In this example, the index is `AlbumTitleIndex` and the index sort key is `AlbumTitle`. 

The only attributes returned are those that have been projected into the index. You could modify this query to select non-key attributes too, but this would require table fetch activity that is relatively expensive. For more information about table fetches, see [Attribute projections](LSI.md#LSI.Projections).

The following are the steps to query a local secondary index using the AWS SDK for Java Document API. 

1. Create an instance of the `DynamoDB` class.

1. Create an instance of the `Table` class. You must provide the table name.

1. Create an instance of the `Index` class. You must provide the index name.

1. Call the `query` method of the `Index` class.

The following Java code example demonstrates the preceding steps.

**Example**  

```
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
DynamoDB dynamoDB = new DynamoDB(client);

String tableName = "Music";

Table table = dynamoDB.getTable(tableName);
Index index = table.getIndex("AlbumTitleIndex");

QuerySpec spec = new QuerySpec()
    .withKeyConditionExpression("Artist = :v_artist and AlbumTitle = :v_title")
    .withValueMap(new ValueMap()
        .withString(":v_artist", "Acme Band")
        .withString(":v_title", "Songs About Life"));

ItemCollection<QueryOutcome> items = index.query(spec);

Iterator<Item> itemsIter = items.iterator();

while (itemsIter.hasNext()) {
    Item item = itemsIter.next();
    System.out.println(item.toJSONPretty());
}
```

### Consistent reads on a Local Secondary Index
<a name="LSIJavaDocumentAPI.ConsistentReads"></a>

Unlike global secondary indexes, which only support eventually consistent reads, a local secondary index supports both eventually consistent and strongly consistent reads. A strongly consistent read from a local secondary index always returns the latest updated values. If the query needs to fetch additional attributes from the base table, those fetched attributes are also consistent with respect to the index.

By default, `Query` uses eventually consistent reads. To request a strongly consistent read, set `ConsistentRead` to `true` in the `QuerySpec`. The following example queries `AlbumTitleIndex` using a strongly consistent read:

**Example**  

```
QuerySpec spec = new QuerySpec()
    .withKeyConditionExpression("Artist = :v_artist and AlbumTitle = :v_title")
    .withValueMap(new ValueMap()
        .withString(":v_artist", "Acme Band")
        .withString(":v_title", "Songs About Life"))
    .withConsistentRead(true);
```

**Note**  
A strongly consistent read consumes one read capacity unit per 4 KB of data returned (rounded up), whereas an eventually consistent read consumes half of that. For example, a strongly consistent read that returns 9 KB of data consumes 3 read capacity units (9 KB / 4 KB = 2.25, rounded up to 3), while the same query using an eventually consistent read consumes 1.5 read capacity units. If your application can tolerate reading data that might be slightly stale, use eventually consistent reads to reduce your read capacity usage. For more information, see [Read capacity units](LSI.md#LSI.ThroughputConsiderations.Reads).

# Example: Local Secondary Indexes using the Java document API
<a name="LSIJavaDocumentAPI.Example"></a>

The following Java code example shows how to work with local secondary indexes in Amazon DynamoDB. The example creates a table named `CustomerOrders` with a partition key of `CustomerId` and a sort key of `OrderId`. There are two local secondary indexes on this table:
+ `OrderCreationDateIndex` — The sort key is `OrderCreationDate`, and the following attributes are projected into the index:
  + `ProductCategory`
  + `ProductName`
+ `IsOpenIndex` — The sort key is `IsOpen`, and all of the table attributes are projected into the index.

After the `CustomerOrders` table is created, the program loads the table with data representing customer orders. It then queries the data using the local secondary indexes. Finally, the program deletes the `CustomerOrders` table.

For step-by-step instructions for testing the following sample, see [Java code examples](CodeSamples.Java.md).

**Example**  

```
package com.example.dynamodb;

import software.amazon.awssdk.core.waiters.WaiterResponse;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.*;
import software.amazon.awssdk.services.dynamodb.waiters.DynamoDbWaiter;

import java.util.HashMap;
import java.util.Map;

public class DocumentAPILocalSecondaryIndexExample {

    static DynamoDbClient client = DynamoDbClient.create();
    public static String tableName = "CustomerOrders";

    public static void main(String[] args) {
        createTable();
        loadData();
        query(null);
        query("IsOpenIndex");
        query("OrderCreationDateIndex");
        deleteTable(tableName);
    }

    public static void createTable() {
        CreateTableRequest request = CreateTableRequest.builder()
            .tableName(tableName)
            .provisionedThroughput(ProvisionedThroughput.builder()
                .readCapacityUnits(1L)
                .writeCapacityUnits(1L)
                .build())
            .attributeDefinitions(
                AttributeDefinition.builder().attributeName("CustomerId").attributeType(ScalarAttributeType.S).build(),
                AttributeDefinition.builder().attributeName("OrderId").attributeType(ScalarAttributeType.N).build(),
                AttributeDefinition.builder().attributeName("OrderCreationDate").attributeType(ScalarAttributeType.N).build(),
                AttributeDefinition.builder().attributeName("IsOpen").attributeType(ScalarAttributeType.N).build())
            .keySchema(
                KeySchemaElement.builder().attributeName("CustomerId").keyType(KeyType.HASH).build(),
                KeySchemaElement.builder().attributeName("OrderId").keyType(KeyType.RANGE).build())
            .localSecondaryIndexes(
                LocalSecondaryIndex.builder()
                    .indexName("OrderCreationDateIndex")
                    .keySchema(
                        KeySchemaElement.builder().attributeName("CustomerId").keyType(KeyType.HASH).build(),
                        KeySchemaElement.builder().attributeName("OrderCreationDate").keyType(KeyType.RANGE).build())
                    .projection(Projection.builder()
                        .projectionType(ProjectionType.INCLUDE)
                        .nonKeyAttributes("ProductCategory", "ProductName")
                        .build())
                    .build(),
                LocalSecondaryIndex.builder()
                    .indexName("IsOpenIndex")
                    .keySchema(
                        KeySchemaElement.builder().attributeName("CustomerId").keyType(KeyType.HASH).build(),
                        KeySchemaElement.builder().attributeName("IsOpen").keyType(KeyType.RANGE).build())
                    .projection(Projection.builder()
                        .projectionType(ProjectionType.ALL)
                        .build())
                    .build())
            .build();

        System.out.println("Creating table " + tableName + "...");
        client.createTable(request);

        try (DynamoDbWaiter waiter = client.waiter()) {
            WaiterResponse<DescribeTableResponse> response = waiter.waitUntilTableExists(r -> r.tableName(tableName));
            response.matched().response().ifPresent(System.out::println);
        }
    }

    public static void query(String indexName) {
        System.out.println("\n***********************************************************\n");
        System.out.println("Querying table " + tableName + "...");

        if ("IsOpenIndex".equals(indexName)) {
            System.out.println("\nUsing index: '" + indexName + "': Bob's orders that are open.");
            System.out.println("Only a user-specified list of attributes is returned\n");

            Map<String, AttributeValue> values = new HashMap<>();
            values.put(":v_custid", AttributeValue.builder().s("bob@example.com").build());
            values.put(":v_isopen", AttributeValue.builder().n("1").build());

            QueryRequest request = QueryRequest.builder()
                .tableName(tableName)
                .indexName(indexName)
                .keyConditionExpression("CustomerId = :v_custid and IsOpen = :v_isopen")
                .expressionAttributeValues(values)
                .projectionExpression("OrderCreationDate, ProductCategory, ProductName, OrderStatus")
                .build();

            System.out.println("Query: printing results...");
            client.query(request).items().forEach(System.out::println);

        } else if ("OrderCreationDateIndex".equals(indexName)) {
            System.out.println("\nUsing index: '" + indexName + "': Bob's orders that were placed after 01/31/2015.");
            System.out.println("Only the projected attributes are returned\n");

            Map<String, AttributeValue> values = new HashMap<>();
            values.put(":v_custid", AttributeValue.builder().s("bob@example.com").build());
            values.put(":v_orddate", AttributeValue.builder().n("20150131").build());

            QueryRequest request = QueryRequest.builder()
                .tableName(tableName)
                .indexName(indexName)
                .keyConditionExpression("CustomerId = :v_custid and OrderCreationDate >= :v_orddate")
                .expressionAttributeValues(values)
                .select(Select.ALL_PROJECTED_ATTRIBUTES)
                .build();

            System.out.println("Query: printing results...");
            client.query(request).items().forEach(System.out::println);

        } else {
            System.out.println("\nNo index: All of Bob's orders, by OrderId:\n");

            Map<String, AttributeValue> values = new HashMap<>();
            values.put(":v_custid", AttributeValue.builder().s("bob@example.com").build());

            QueryRequest request = QueryRequest.builder()
                .tableName(tableName)
                .keyConditionExpression("CustomerId = :v_custid")
                .expressionAttributeValues(values)
                .build();

            System.out.println("Query: printing results...");
            client.query(request).items().forEach(System.out::println);
        }
    }

    public static void deleteTable(String tableName) {
        System.out.println("Deleting table " + tableName + "...");
        client.deleteTable(DeleteTableRequest.builder().tableName(tableName).build());

        try (DynamoDbWaiter waiter = client.waiter()) {
            waiter.waitUntilTableNotExists(r -> r.tableName(tableName));
        }
    }

    public static void loadData() {
        System.out.println("Loading data into table " + tableName + "...");

        putItem(Map.of(
            "CustomerId", AttributeValue.builder().s("alice@example.com").build(),
            "OrderId", AttributeValue.builder().n("1").build(),
            "IsOpen", AttributeValue.builder().n("1").build(),
            "OrderCreationDate", AttributeValue.builder().n("20150101").build(),
            "ProductCategory", AttributeValue.builder().s("Book").build(),
            "ProductName", AttributeValue.builder().s("The Great Outdoors").build(),
            "OrderStatus", AttributeValue.builder().s("PACKING ITEMS").build()));

        putItem(Map.of(
            "CustomerId", AttributeValue.builder().s("alice@example.com").build(),
            "OrderId", AttributeValue.builder().n("2").build(),
            "IsOpen", AttributeValue.builder().n("1").build(),
            "OrderCreationDate", AttributeValue.builder().n("20150221").build(),
            "ProductCategory", AttributeValue.builder().s("Bike").build(),
            "ProductName", AttributeValue.builder().s("Super Mountain").build(),
            "OrderStatus", AttributeValue.builder().s("ORDER RECEIVED").build()));

        putItem(Map.of(
            "CustomerId", AttributeValue.builder().s("alice@example.com").build(),
            "OrderId", AttributeValue.builder().n("3").build(),
            "OrderCreationDate", AttributeValue.builder().n("20150304").build(),
            "ProductCategory", AttributeValue.builder().s("Music").build(),
            "ProductName", AttributeValue.builder().s("A Quiet Interlude").build(),
            "OrderStatus", AttributeValue.builder().s("IN TRANSIT").build(),
            "ShipmentTrackingId", AttributeValue.builder().s("176493").build()));

        putItem(Map.of(
            "CustomerId", AttributeValue.builder().s("bob@example.com").build(),
            "OrderId", AttributeValue.builder().n("1").build(),
            "OrderCreationDate", AttributeValue.builder().n("20150111").build(),
            "ProductCategory", AttributeValue.builder().s("Movie").build(),
            "ProductName", AttributeValue.builder().s("Calm Before The Storm").build(),
            "OrderStatus", AttributeValue.builder().s("SHIPPING DELAY").build(),
            "ShipmentTrackingId", AttributeValue.builder().s("859323").build()));

        putItem(Map.of(
            "CustomerId", AttributeValue.builder().s("bob@example.com").build(),
            "OrderId", AttributeValue.builder().n("2").build(),
            "OrderCreationDate", AttributeValue.builder().n("20150124").build(),
            "ProductCategory", AttributeValue.builder().s("Music").build(),
            "ProductName", AttributeValue.builder().s("E-Z Listening").build(),
            "OrderStatus", AttributeValue.builder().s("DELIVERED").build(),
            "ShipmentTrackingId", AttributeValue.builder().s("756943").build()));

        putItem(Map.of(
            "CustomerId", AttributeValue.builder().s("bob@example.com").build(),
            "OrderId", AttributeValue.builder().n("3").build(),
            "OrderCreationDate", AttributeValue.builder().n("20150221").build(),
            "ProductCategory", AttributeValue.builder().s("Music").build(),
            "ProductName", AttributeValue.builder().s("Symphony 9").build(),
            "OrderStatus", AttributeValue.builder().s("DELIVERED").build(),
            "ShipmentTrackingId", AttributeValue.builder().s("645193").build()));

        putItem(Map.of(
            "CustomerId", AttributeValue.builder().s("bob@example.com").build(),
            "OrderId", AttributeValue.builder().n("4").build(),
            "IsOpen", AttributeValue.builder().n("1").build(),
            "OrderCreationDate", AttributeValue.builder().n("20150222").build(),
            "ProductCategory", AttributeValue.builder().s("Hardware").build(),
            "ProductName", AttributeValue.builder().s("Extra Heavy Hammer").build(),
            "OrderStatus", AttributeValue.builder().s("PACKING ITEMS").build()));

        putItem(Map.of(
            "CustomerId", AttributeValue.builder().s("bob@example.com").build(),
            "OrderId", AttributeValue.builder().n("5").build(),
            "OrderCreationDate", AttributeValue.builder().n("20150309").build(),
            "ProductCategory", AttributeValue.builder().s("Book").build(),
            "ProductName", AttributeValue.builder().s("How To Cook").build(),
            "OrderStatus", AttributeValue.builder().s("IN TRANSIT").build(),
            "ShipmentTrackingId", AttributeValue.builder().s("440185").build()));

        putItem(Map.of(
            "CustomerId", AttributeValue.builder().s("bob@example.com").build(),
            "OrderId", AttributeValue.builder().n("6").build(),
            "OrderCreationDate", AttributeValue.builder().n("20150318").build(),
            "ProductCategory", AttributeValue.builder().s("Luggage").build(),
            "ProductName", AttributeValue.builder().s("Really Big Suitcase").build(),
            "OrderStatus", AttributeValue.builder().s("DELIVERED").build(),
            "ShipmentTrackingId", AttributeValue.builder().s("893927").build()));

        putItem(Map.of(
            "CustomerId", AttributeValue.builder().s("bob@example.com").build(),
            "OrderId", AttributeValue.builder().n("7").build(),
            "OrderCreationDate", AttributeValue.builder().n("20150324").build(),
            "ProductCategory", AttributeValue.builder().s("Golf").build(),
            "ProductName", AttributeValue.builder().s("PGA Pro II").build(),
            "OrderStatus", AttributeValue.builder().s("OUT FOR DELIVERY").build(),
            "ShipmentTrackingId", AttributeValue.builder().s("383283").build()));
    }

    private static void putItem(Map<String, AttributeValue> item) {
        client.putItem(PutItemRequest.builder().tableName(tableName).item(item).build());
    }
}
```

# Working with Local Secondary Indexes: .NET
<a name="LSILowLevelDotNet"></a>

**Topics**
+ [Create a table with a Local Secondary Index](#LSILowLevelDotNet.CreateTableWithIndex)
+ [Describe a table with a Local Secondary Index](#LSILowLevelDotNet.DescribeTableWithIndex)
+ [Query a Local Secondary Index](#LSILowLevelDotNet.QueryAnIndex)
+ [Example: Local Secondary Indexes using the AWS SDK for .NET low-level API](LSILowLevelDotNet.Example.md)

You can use the AWS SDK for .NET low-level API to create an Amazon DynamoDB table with one or more local secondary indexes, describe the indexes on the table, and perform queries using the indexes. These operations map to the corresponding low-level DynamoDB API actions. For more information, see [.NET code examples](CodeSamples.DotNet.md). 

The following are the common steps for table operations using the .NET low-level API. 

1. Create an instance of the `AmazonDynamoDBClient` class.

1. Provide the required and optional parameters for the operation by creating the corresponding request objects.

   For example, create a `CreateTableRequest` object to create a table and a `QueryRequest` object to query a table or an index.

1. Run the appropriate method provided by the client that you created in the preceding step. 

## Create a table with a Local Secondary Index
<a name="LSILowLevelDotNet.CreateTableWithIndex"></a>

Local secondary indexes must be created at the same time that you create a table. To do this, use `CreateTable` and provide your specifications for one or more local secondary indexes. The following C# code example creates a table to hold information about songs in a music collection. The partition key is `Artist` and the sort key is `SongTitle`. A secondary index, `AlbumTitleIndex`, facilitates queries by album title.

The following are the steps to create a table with a local secondary index, using the .NET low-level API. 

1. Create an instance of the `AmazonDynamoDBClient` class.

1. Create an instance of the `CreateTableRequest` class to provide the request information. 

   You must provide the table name, its primary key, and the provisioned throughput values. For the local secondary index, you must provide the index name, the name and data type of the index sort key, the key schema for the index, and the attribute projection.

1. Run the `CreateTable` method by providing the request object as a parameter.

The following C# code example demonstrates the preceding steps. The code creates a table (`Music`) with a secondary index on the `AlbumTitle` attribute. The table partition key and sort key, plus the index sort key, are projected into the index, along with the non-key attributes `Genre` and `Year`.

```
AmazonDynamoDBClient client = new AmazonDynamoDBClient();
string tableName = "Music";

CreateTableRequest createTableRequest = new CreateTableRequest()
{
    TableName = tableName
};

//ProvisionedThroughput
createTableRequest.ProvisionedThroughput = new ProvisionedThroughput()
{
    ReadCapacityUnits = (long)5,
    WriteCapacityUnits = (long)5
};

//AttributeDefinitions
List<AttributeDefinition> attributeDefinitions = new List<AttributeDefinition>();

attributeDefinitions.Add(new AttributeDefinition()
{
    AttributeName = "Artist",
    AttributeType = "S"
});

attributeDefinitions.Add(new AttributeDefinition()
{
    AttributeName = "SongTitle",
    AttributeType = "S"
});

attributeDefinitions.Add(new AttributeDefinition()
{
    AttributeName = "AlbumTitle",
    AttributeType = "S"
});

createTableRequest.AttributeDefinitions = attributeDefinitions;

//KeySchema
List<KeySchemaElement> tableKeySchema = new List<KeySchemaElement>();

tableKeySchema.Add(new KeySchemaElement() { AttributeName = "Artist", KeyType = "HASH" });  //Partition key
tableKeySchema.Add(new KeySchemaElement() { AttributeName = "SongTitle", KeyType = "RANGE" });  //Sort key

createTableRequest.KeySchema = tableKeySchema;

List<KeySchemaElement> indexKeySchema = new List<KeySchemaElement>();
indexKeySchema.Add(new KeySchemaElement() { AttributeName = "Artist", KeyType = "HASH" });  //Partition key
indexKeySchema.Add(new KeySchemaElement() { AttributeName = "AlbumTitle", KeyType = "RANGE" });  //Sort key

Projection projection = new Projection() { ProjectionType = "INCLUDE" };

List<string> nonKeyAttributes = new List<string>();
nonKeyAttributes.Add("Genre");
nonKeyAttributes.Add("Year");
projection.NonKeyAttributes = nonKeyAttributes;

LocalSecondaryIndex localSecondaryIndex = new LocalSecondaryIndex()
{
    IndexName = "AlbumTitleIndex",
    KeySchema = indexKeySchema,
    Projection = projection
};

List<LocalSecondaryIndex> localSecondaryIndexes = new List<LocalSecondaryIndex>();
localSecondaryIndexes.Add(localSecondaryIndex);
createTableRequest.LocalSecondaryIndexes = localSecondaryIndexes;

CreateTableResponse result = client.CreateTable(createTableRequest);
Console.WriteLine(result.CreateTableResult.TableDescription.TableName);
Console.WriteLine(result.CreateTableResult.TableDescription.TableStatus);
```

You must wait until DynamoDB creates the table and sets the table status to `ACTIVE`. After that, you can begin putting data items into the table.
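
A simple way to wait is to poll `DescribeTable` until the table status changes. The following sketch is not part of the original example; it assumes the `client` and `tableName` variables from the preceding code.

```
// Poll DescribeTable until the table status is ACTIVE.
string status = null;
do
{
    System.Threading.Thread.Sleep(5000); // Wait 5 seconds between polls.
    try
    {
        var res = client.DescribeTable(new DescribeTableRequest { TableName = tableName });
        status = res.Table.TableStatus;
    }
    catch (ResourceNotFoundException)
    {
        // DescribeTable is eventually consistent, so the new table
        // might not be visible yet. Keep polling.
    }
} while (status != "ACTIVE");
```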

## Describe a table with a Local Secondary Index
<a name="LSILowLevelDotNet.DescribeTableWithIndex"></a>

To get information about local secondary indexes on a table, use the `DescribeTable` API. For each index, you can access its name, key schema, and projected attributes.

The following are the steps to access local secondary index information about a table using the .NET low-level API.

1. Create an instance of the `AmazonDynamoDBClient` class.

1. Create an instance of the `DescribeTableRequest` class to provide the request information. You must provide the table name.

1. Run the `DescribeTable` method by providing the request object as a parameter.

The following C# code example demonstrates the preceding steps.

**Example**  

```
AmazonDynamoDBClient client = new AmazonDynamoDBClient();
string tableName = "Music";

DescribeTableResponse response = client.DescribeTable(new DescribeTableRequest() { TableName = tableName });
List<LocalSecondaryIndexDescription> localSecondaryIndexes =
    response.DescribeTableResult.Table.LocalSecondaryIndexes;

// This code snippet will work for multiple indexes, even though
// there is only one index in this example.
foreach (LocalSecondaryIndexDescription lsiDescription in localSecondaryIndexes)
{
    Console.WriteLine("Info for index " + lsiDescription.IndexName + ":");

    foreach (KeySchemaElement kse in lsiDescription.KeySchema)
    {
        Console.WriteLine("\t" + kse.AttributeName + ": key type is " + kse.KeyType);
    }

    Projection projection = lsiDescription.Projection;

    Console.WriteLine("\tThe projection type is: " + projection.ProjectionType);

    if (projection.ProjectionType.ToString().Equals("INCLUDE"))
    {
        Console.WriteLine("\t\tThe non-key projected attributes are:");

        foreach (String s in projection.NonKeyAttributes)
        {
            Console.WriteLine("\t\t" + s);
        }

    }
}
```

## Query a Local Secondary Index
<a name="LSILowLevelDotNet.QueryAnIndex"></a>

You can use `Query` on a local secondary index in much the same way you `Query` a table. You must specify the index name, the query criteria for the index sort key, and the attributes that you want to return. In this example, the index is `AlbumTitleIndex`, and the index sort key is `AlbumTitle`. 

The only attributes returned are those that have been projected into the index. You could modify this query to select non-key attributes too, but this would require relatively expensive table fetch activity. For more information about table fetches, see [Attribute projections](LSI.md#LSI.Projections).

The following are the steps to query a local secondary index using the .NET low-level API. 

1. Create an instance of the `AmazonDynamoDBClient` class.

1. Create an instance of the `QueryRequest` class to provide the request information.

1. Run the `Query` method by providing the request object as a parameter.

The following C# code example demonstrates the preceding steps.

**Example**  

```
QueryRequest queryRequest = new QueryRequest
{
    TableName = "Music",
    IndexName = "AlbumTitleIndex",
    Select = "ALL_ATTRIBUTES",
    ScanIndexForward = true,
    KeyConditionExpression = "Artist = :v_artist and AlbumTitle = :v_title",
    ExpressionAttributeValues = new Dictionary<string, AttributeValue>()
    {
        {":v_artist",new AttributeValue {S = "Acme Band"}},
        {":v_title",new AttributeValue {S = "Songs About Life"}}
    },
};

QueryResponse response = client.Query(queryRequest);

foreach (var attribs in response.Items)
{
    foreach (var attrib in attribs)
    {
        Console.WriteLine(attrib.Key + " ---> " + attrib.Value.S);
    }
    Console.WriteLine();
}
```

# Example: Local Secondary Indexes using the AWS SDK for .NET low-level API
<a name="LSILowLevelDotNet.Example"></a>

The following C# code example shows how to work with local secondary indexes in Amazon DynamoDB. The example creates a table named `CustomerOrders` with a partition key of `CustomerId` and a sort key of `OrderId`. There are two local secondary indexes on this table:
+ `OrderCreationDateIndex` — The sort key is `OrderCreationDate`, and the following attributes are projected into the index:
  + `ProductCategory`
  + `ProductName`
  + `OrderStatus`
  + `ShipmentTrackingId`
+ `IsOpenIndex` — The sort key is `IsOpen`, and all of the table attributes are projected into the index.

After the `CustomerOrders` table is created, the program loads the table with data representing customer orders. It then queries the data using the local secondary indexes. Finally, the program deletes the `CustomerOrders` table.

For step-by-step instructions for testing the following example, see [.NET code examples](CodeSamples.DotNet.md).

**Example**  

```
using System;
using System.Collections.Generic;
using System.Linq;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DataModel;
using Amazon.DynamoDBv2.DocumentModel;
using Amazon.DynamoDBv2.Model;
using Amazon.Runtime;
using Amazon.SecurityToken;

namespace com.amazonaws.codesamples
{
    class LowLevelLocalSecondaryIndexExample
    {
        private static AmazonDynamoDBClient client = new AmazonDynamoDBClient();
        private static string tableName = "CustomerOrders";

        static void Main(string[] args)
        {
            try
            {
                CreateTable();
                LoadData();

                Query(null);
                Query("IsOpenIndex");
                Query("OrderCreationDateIndex");

                DeleteTable(tableName);

                Console.WriteLine("To continue, press Enter");
                Console.ReadLine();
            }
            catch (AmazonDynamoDBException e) { Console.WriteLine(e.Message); }
            catch (AmazonServiceException e) { Console.WriteLine(e.Message); }
            catch (Exception e) { Console.WriteLine(e.Message); }
        }

        private static void CreateTable()
        {
            var createTableRequest =
                new CreateTableRequest()
                {
                    TableName = tableName,
                    ProvisionedThroughput =
                    new ProvisionedThroughput()
                    {
                        ReadCapacityUnits = (long)1,
                        WriteCapacityUnits = (long)1
                    }
                };

            var attributeDefinitions = new List<AttributeDefinition>()
        {
            // Attribute definitions for table primary key
            { new AttributeDefinition() {
                  AttributeName = "CustomerId", AttributeType = "S"
              } },
            { new AttributeDefinition() {
                  AttributeName = "OrderId", AttributeType = "N"
              } },
            // Attribute definitions for index primary key
            { new AttributeDefinition() {
                  AttributeName = "OrderCreationDate", AttributeType = "N"
              } },
            { new AttributeDefinition() {
                  AttributeName = "IsOpen", AttributeType = "N"
              }}
        };

            createTableRequest.AttributeDefinitions = attributeDefinitions;

            // Key schema for table
            var tableKeySchema = new List<KeySchemaElement>()
        {
            { new KeySchemaElement() {
                  AttributeName = "CustomerId", KeyType = "HASH"
              } },                                                  //Partition key
            { new KeySchemaElement() {
                  AttributeName = "OrderId", KeyType = "RANGE"
              } }                                                //Sort key
        };

            createTableRequest.KeySchema = tableKeySchema;

            var localSecondaryIndexes = new List<LocalSecondaryIndex>();

            // OrderCreationDateIndex
            LocalSecondaryIndex orderCreationDateIndex = new LocalSecondaryIndex()
            {
                IndexName = "OrderCreationDateIndex"
            };

            // Key schema for OrderCreationDateIndex
            var indexKeySchema = new List<KeySchemaElement>()
        {
            { new KeySchemaElement() {
                  AttributeName = "CustomerId", KeyType = "HASH"
              } },                                                    //Partition key
            { new KeySchemaElement() {
                  AttributeName = "OrderCreationDate", KeyType = "RANGE"
              } }                                                            //Sort key
        };

            orderCreationDateIndex.KeySchema = indexKeySchema;

            // Projection (with list of projected attributes) for
            // OrderCreationDateIndex
            var projection = new Projection()
            {
                ProjectionType = "INCLUDE"
            };

            var nonKeyAttributes = new List<string>()
        {
            "ProductCategory",
            "ProductName"
        };
            projection.NonKeyAttributes = nonKeyAttributes;

            orderCreationDateIndex.Projection = projection;

            localSecondaryIndexes.Add(orderCreationDateIndex);

            // IsOpenIndex
            LocalSecondaryIndex isOpenIndex
                = new LocalSecondaryIndex()
                {
                    IndexName = "IsOpenIndex"
                };

            // Key schema for IsOpenIndex
            indexKeySchema = new List<KeySchemaElement>()
        {
            { new KeySchemaElement() {
                  AttributeName = "CustomerId", KeyType = "HASH"
              }},                                                     //Partition key
            { new KeySchemaElement() {
                  AttributeName = "IsOpen", KeyType = "RANGE"
              }}                                                  //Sort key
        };

            // Projection (all attributes) for IsOpenIndex
            projection = new Projection()
            {
                ProjectionType = "ALL"
            };

            isOpenIndex.KeySchema = indexKeySchema;
            isOpenIndex.Projection = projection;

            localSecondaryIndexes.Add(isOpenIndex);

            // Add index definitions to CreateTable request
            createTableRequest.LocalSecondaryIndexes = localSecondaryIndexes;

            Console.WriteLine("Creating table " + tableName + "...");
            client.CreateTable(createTableRequest);
            WaitUntilTableReady(tableName);
        }

        public static void Query(string indexName)
        {
            Console.WriteLine("\n***********************************************************\n");
            Console.WriteLine("Querying table " + tableName + "...");

            QueryRequest queryRequest = new QueryRequest()
            {
                TableName = tableName,
                ConsistentRead = true,
                ScanIndexForward = true,
                ReturnConsumedCapacity = "TOTAL"
            };


            String keyConditionExpression = "CustomerId = :v_customerId";
            Dictionary<string, AttributeValue> expressionAttributeValues = new Dictionary<string, AttributeValue> {
            {":v_customerId", new AttributeValue {
                 S = "bob@example.com"
             }}
        };


            if (indexName == "IsOpenIndex")
            {
                Console.WriteLine("\nUsing index: '" + indexName
                          + "': Bob's orders that are open.");
                Console.WriteLine("Only a user-specified list of attributes is returned\n");
                queryRequest.IndexName = indexName;

                keyConditionExpression += " and IsOpen = :v_isOpen";
                expressionAttributeValues.Add(":v_isOpen", new AttributeValue
                {
                    N = "1"
                });

                // ProjectionExpression
                queryRequest.ProjectionExpression = "OrderCreationDate, ProductCategory, ProductName, OrderStatus";
            }
            else if (indexName == "OrderCreationDateIndex")
            {
                Console.WriteLine("\nUsing index: '" + indexName
                          + "': Bob's orders that were placed after 01/31/2013.");
                Console.WriteLine("Only the projected attributes are returned\n");
                queryRequest.IndexName = indexName;

                keyConditionExpression += " and OrderCreationDate > :v_Date";
                expressionAttributeValues.Add(":v_Date", new AttributeValue
                {
                    N = "20130131"
                });

                // Select
                queryRequest.Select = "ALL_PROJECTED_ATTRIBUTES";
            }
            else
            {
                Console.WriteLine("\nNo index: All of Bob's orders, by OrderId:\n");
            }
            queryRequest.KeyConditionExpression = keyConditionExpression;
            queryRequest.ExpressionAttributeValues = expressionAttributeValues;

            var result = client.Query(queryRequest);
            var items = result.Items;
            foreach (var currentItem in items)
            {
                foreach (string attr in currentItem.Keys)
                {
                    if (attr == "OrderId" || attr == "IsOpen"
                        || attr == "OrderCreationDate")
                    {
                        Console.WriteLine(attr + "---> " + currentItem[attr].N);
                    }
                    else
                    {
                        Console.WriteLine(attr + "---> " + currentItem[attr].S);
                    }
                }
                Console.WriteLine();
            }
            Console.WriteLine("\nConsumed capacity: " + result.ConsumedCapacity.CapacityUnits + "\n");
        }

        private static void DeleteTable(string tableName)
        {
            Console.WriteLine("Deleting table " + tableName + "...");
            client.DeleteTable(new DeleteTableRequest()
            {
                TableName = tableName
            });
            WaitForTableToBeDeleted(tableName);
        }

        public static void LoadData()
        {
            Console.WriteLine("Loading data into table " + tableName + "...");

            Dictionary<string, AttributeValue> item = new Dictionary<string, AttributeValue>();

            item["CustomerId"] = new AttributeValue
            {
                S = "alice@example.com"
            };
            item["OrderId"] = new AttributeValue
            {
                N = "1"
            };
            item["IsOpen"] = new AttributeValue
            {
                N = "1"
            };
            item["OrderCreationDate"] = new AttributeValue
            {
                N = "20130101"
            };
            item["ProductCategory"] = new AttributeValue
            {
                S = "Book"
            };
            item["ProductName"] = new AttributeValue
            {
                S = "The Great Outdoors"
            };
            item["OrderStatus"] = new AttributeValue
            {
                S = "PACKING ITEMS"
            };
            /* no ShipmentTrackingId attribute */
            PutItemRequest putItemRequest = new PutItemRequest
            {
                TableName = tableName,
                Item = item,
                ReturnItemCollectionMetrics = "SIZE"
            };
            client.PutItem(putItemRequest);

            item = new Dictionary<string, AttributeValue>();
            item["CustomerId"] = new AttributeValue
            {
                S = "alice@example.com"
            };
            item["OrderId"] = new AttributeValue
            {
                N = "2"
            };
            item["IsOpen"] = new AttributeValue
            {
                N = "1"
            };
            item["OrderCreationDate"] = new AttributeValue
            {
                N = "20130221"
            };
            item["ProductCategory"] = new AttributeValue
            {
                S = "Bike"
            };
            item["ProductName"] = new AttributeValue
            {
                S = "Super Mountain"
            };
            item["OrderStatus"] = new AttributeValue
            {
                S = "ORDER RECEIVED"
            };
            /* no ShipmentTrackingId attribute */
            putItemRequest = new PutItemRequest
            {
                TableName = tableName,
                Item = item,
                ReturnItemCollectionMetrics = "SIZE"
            };
            client.PutItem(putItemRequest);

            item = new Dictionary<string, AttributeValue>();
            item["CustomerId"] = new AttributeValue
            {
                S = "alice@example.com"
            };
            item["OrderId"] = new AttributeValue
            {
                N = "3"
            };
            /* no IsOpen attribute */
            item["OrderCreationDate"] = new AttributeValue
            {
                N = "20130304"
            };
            item["ProductCategory"] = new AttributeValue
            {
                S = "Music"
            };
            item["ProductName"] = new AttributeValue
            {
                S = "A Quiet Interlude"
            };
            item["OrderStatus"] = new AttributeValue
            {
                S = "IN TRANSIT"
            };
            item["ShipmentTrackingId"] = new AttributeValue
            {
                S = "176493"
            };
            putItemRequest = new PutItemRequest
            {
                TableName = tableName,
                Item = item,
                ReturnItemCollectionMetrics = "SIZE"
            };
            client.PutItem(putItemRequest);

            item = new Dictionary<string, AttributeValue>();
            item["CustomerId"] = new AttributeValue
            {
                S = "bob@example.com"
            };
            item["OrderId"] = new AttributeValue
            {
                N = "1"
            };
            /* no IsOpen attribute */
            item["OrderCreationDate"] = new AttributeValue
            {
                N = "20130111"
            };
            item["ProductCategory"] = new AttributeValue
            {
                S = "Movie"
            };
            item["ProductName"] = new AttributeValue
            {
                S = "Calm Before The Storm"
            };
            item["OrderStatus"] = new AttributeValue
            {
                S = "SHIPPING DELAY"
            };
            item["ShipmentTrackingId"] = new AttributeValue
            {
                S = "859323"
            };
            putItemRequest = new PutItemRequest
            {
                TableName = tableName,
                Item = item,
                ReturnItemCollectionMetrics = "SIZE"
            };
            client.PutItem(putItemRequest);

            item = new Dictionary<string, AttributeValue>();
            item["CustomerId"] = new AttributeValue
            {
                S = "bob@example.com"
            };
            item["OrderId"] = new AttributeValue
            {
                N = "2"
            };
            /* no IsOpen attribute */
            item["OrderCreationDate"] = new AttributeValue
            {
                N = "20130124"
            };
            item["ProductCategory"] = new AttributeValue
            {
                S = "Music"
            };
            item["ProductName"] = new AttributeValue
            {
                S = "E-Z Listening"
            };
            item["OrderStatus"] = new AttributeValue
            {
                S = "DELIVERED"
            };
            item["ShipmentTrackingId"] = new AttributeValue
            {
                S = "756943"
            };
            putItemRequest = new PutItemRequest
            {
                TableName = tableName,
                Item = item,
                ReturnItemCollectionMetrics = "SIZE"
            };
            client.PutItem(putItemRequest);

            item = new Dictionary<string, AttributeValue>();
            item["CustomerId"] = new AttributeValue
            {
                S = "bob@example.com"
            };
            item["OrderId"] = new AttributeValue
            {
                N = "3"
            };
            /* no IsOpen attribute */
            item["OrderCreationDate"] = new AttributeValue
            {
                N = "20130221"
            };
            item["ProductCategory"] = new AttributeValue
            {
                S = "Music"
            };
            item["ProductName"] = new AttributeValue
            {
                S = "Symphony 9"
            };
            item["OrderStatus"] = new AttributeValue
            {
                S = "DELIVERED"
            };
            item["ShipmentTrackingId"] = new AttributeValue
            {
                S = "645193"
            };
            putItemRequest = new PutItemRequest
            {
                TableName = tableName,
                Item = item,
                ReturnItemCollectionMetrics = "SIZE"
            };
            client.PutItem(putItemRequest);

            item = new Dictionary<string, AttributeValue>();
            item["CustomerId"] = new AttributeValue
            {
                S = "bob@example.com"
            };
            item["OrderId"] = new AttributeValue
            {
                N = "4"
            };
            item["IsOpen"] = new AttributeValue
            {
                N = "1"
            };
            item["OrderCreationDate"] = new AttributeValue
            {
                N = "20130222"
            };
            item["ProductCategory"] = new AttributeValue
            {
                S = "Hardware"
            };
            item["ProductName"] = new AttributeValue
            {
                S = "Extra Heavy Hammer"
            };
            item["OrderStatus"] = new AttributeValue
            {
                S = "PACKING ITEMS"
            };
            /* no ShipmentTrackingId attribute */
            putItemRequest = new PutItemRequest
            {
                TableName = tableName,
                Item = item,
                ReturnItemCollectionMetrics = "SIZE"
            };
            client.PutItem(putItemRequest);

            item = new Dictionary<string, AttributeValue>();
            item["CustomerId"] = new AttributeValue
            {
                S = "bob@example.com"
            };
            item["OrderId"] = new AttributeValue
            {
                N = "5"
            };
            /* no IsOpen attribute */
            item["OrderCreationDate"] = new AttributeValue
            {
                N = "20130309"
            };
            item["ProductCategory"] = new AttributeValue
            {
                S = "Book"
            };
            item["ProductName"] = new AttributeValue
            {
                S = "How To Cook"
            };
            item["OrderStatus"] = new AttributeValue
            {
                S = "IN TRANSIT"
            };
            item["ShipmentTrackingId"] = new AttributeValue
            {
                S = "440185"
            };
            putItemRequest = new PutItemRequest
            {
                TableName = tableName,
                Item = item,
                ReturnItemCollectionMetrics = "SIZE"
            };
            client.PutItem(putItemRequest);

            item = new Dictionary<string, AttributeValue>();
            item["CustomerId"] = new AttributeValue
            {
                S = "bob@example.com"
            };
            item["OrderId"] = new AttributeValue
            {
                N = "6"
            };
            /* no IsOpen attribute */
            item["OrderCreationDate"] = new AttributeValue
            {
                N = "20130318"
            };
            item["ProductCategory"] = new AttributeValue
            {
                S = "Luggage"
            };
            item["ProductName"] = new AttributeValue
            {
                S = "Really Big Suitcase"
            };
            item["OrderStatus"] = new AttributeValue
            {
                S = "DELIVERED"
            };
            item["ShipmentTrackingId"] = new AttributeValue
            {
                S = "893927"
            };
            putItemRequest = new PutItemRequest
            {
                TableName = tableName,
                Item = item,
                ReturnItemCollectionMetrics = "SIZE"
            };
            client.PutItem(putItemRequest);

            item = new Dictionary<string, AttributeValue>();
            item["CustomerId"] = new AttributeValue
            {
                S = "bob@example.com"
            };
            item["OrderId"] = new AttributeValue
            {
                N = "7"
            };
            /* no IsOpen attribute */
            item["OrderCreationDate"] = new AttributeValue
            {
                N = "20130324"
            };
            item["ProductCategory"] = new AttributeValue
            {
                S = "Golf"
            };
            item["ProductName"] = new AttributeValue
            {
                S = "PGA Pro II"
            };
            item["OrderStatus"] = new AttributeValue
            {
                S = "OUT FOR DELIVERY"
            };
            item["ShipmentTrackingId"] = new AttributeValue
            {
                S = "383283"
            };
            putItemRequest = new PutItemRequest
            {
                TableName = tableName,
                Item = item,
                ReturnItemCollectionMetrics = "SIZE"
            };
            client.PutItem(putItemRequest);
        }

        private static void WaitUntilTableReady(string tableName)
        {
            string status = null;
            // Wait until the table is created by polling DescribeTable.
            do
            {
                System.Threading.Thread.Sleep(5000); // Wait 5 seconds.
                try
                {
                    var res = client.DescribeTable(new DescribeTableRequest
                    {
                        TableName = tableName
                    });

                    Console.WriteLine("Table name: {0}, status: {1}",
                              res.Table.TableName,
                              res.Table.TableStatus);
                    status = res.Table.TableStatus;
                }
                catch (ResourceNotFoundException)
                {
                    // DescribeTable is eventually consistent, so it can return
                    // ResourceNotFoundException right after CreateTable. Ignore it and retry.
                }
            } while (status != "ACTIVE");
        }

        private static void WaitForTableToBeDeleted(string tableName)
        {
            bool tablePresent = true;

            while (tablePresent)
            {
                System.Threading.Thread.Sleep(5000); // Wait 5 seconds.
                try
                {
                    var res = client.DescribeTable(new DescribeTableRequest
                    {
                        TableName = tableName
                    });

                    Console.WriteLine("Table name: {0}, status: {1}",
                              res.Table.TableName,
                              res.Table.TableStatus);
                }
                catch (ResourceNotFoundException)
                {
                    tablePresent = false;
                }
            }
        }
    }
}
```

# Working with Local Secondary Indexes in DynamoDB: AWS CLI
<a name="LCICli"></a>

You can use the AWS CLI to create an Amazon DynamoDB table with one or more local secondary indexes, describe the indexes on the table, and perform queries using the indexes.

**Topics**
+ [Create a table with a Local Secondary Index](#LCICli.CreateTableWithIndex)
+ [Describe a table with a Local Secondary Index](#LCICli.DescribeTableWithIndex)
+ [Query a Local Secondary Index](#LCICli.QueryAnIndex)

## Create a table with a Local Secondary Index
<a name="LCICli.CreateTableWithIndex"></a>

Local secondary indexes must be created at the same time you create a table. To do this, use the `create-table` command and provide your specifications for one or more local secondary indexes. The following example creates a table (`Music`) to hold information about songs in a music collection. The partition key is `Artist` and the sort key is `SongTitle`. A secondary index, `AlbumTitleIndex`, on the `AlbumTitle` attribute facilitates queries by album title.

```
aws dynamodb create-table \
    --table-name Music \
    --attribute-definitions AttributeName=Artist,AttributeType=S AttributeName=SongTitle,AttributeType=S \
        AttributeName=AlbumTitle,AttributeType=S  \
    --key-schema AttributeName=Artist,KeyType=HASH AttributeName=SongTitle,KeyType=RANGE \
    --provisioned-throughput \
        ReadCapacityUnits=10,WriteCapacityUnits=5 \
    --local-secondary-indexes \
        "[{\"IndexName\": \"AlbumTitleIndex\",
        \"KeySchema\":[{\"AttributeName\":\"Artist\",\"KeyType\":\"HASH\"},
                      {\"AttributeName\":\"AlbumTitle\",\"KeyType\":\"RANGE\"}],
        \"Projection\":{\"ProjectionType\":\"INCLUDE\",  \"NonKeyAttributes\":[\"Genre\", \"Year\"]}}]"
```

You must wait until DynamoDB creates the table and sets the table status to `ACTIVE`. After that, you can begin putting data items into the table. You can use [describe-table](https://docs.aws.amazon.com/cli/latest/reference/dynamodb/describe-table.html) to determine the status of the table creation. 
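
The hand-escaped JSON string passed to `--local-secondary-indexes` in the example above is easy to get wrong. One way to sidestep the quoting (a sketch, not part of the CLI itself) is to generate the argument from a data structure and pass the result to the command:

```python
import json

# Build the --local-secondary-indexes argument programmatically instead of
# escaping quotes by hand. The structure mirrors the CLI example above.
local_secondary_indexes = [
    {
        "IndexName": "AlbumTitleIndex",
        "KeySchema": [
            {"AttributeName": "Artist", "KeyType": "HASH"},
            {"AttributeName": "AlbumTitle", "KeyType": "RANGE"},
        ],
        "Projection": {
            "ProjectionType": "INCLUDE",
            "NonKeyAttributes": ["Genre", "Year"],
        },
    }
]

# json.dumps produces a valid JSON string to pass as the parameter value.
lsi_argument = json.dumps(local_secondary_indexes)
print(lsi_argument)
```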

## Describe a table with a Local Secondary Index
<a name="LCICli.DescribeTableWithIndex"></a>

To get information about the local secondary indexes on a table, use the `describe-table` command. For each index, you can access its name, key schema, and projected attributes.

```
aws dynamodb describe-table --table-name Music
```

## Query a Local Secondary Index
<a name="LCICli.QueryAnIndex"></a>

You can use the `query` command on a local secondary index in much the same way that you query a table. You must specify the index name, the query criteria for the index sort key, and the attributes that you want to return. In this example, the index is `AlbumTitleIndex` and the index sort key is `AlbumTitle`. 

The only attributes returned are those that have been projected into the index. You could modify this query to select non-key attributes too, but this would require table fetch activity that is relatively expensive. For more information about table fetches, see [Attribute projections](LSI.md#LSI.Projections).

```
aws dynamodb query \
    --table-name Music \
    --index-name AlbumTitleIndex \
    --key-condition-expression "Artist = :v_artist and AlbumTitle = :v_title" \
    --expression-attribute-values  '{":v_artist":{"S":"Acme Band"},":v_title":{"S":"Songs About Life"} }'
```

# Managing complex workflows with DynamoDB transactions
<a name="transactions"></a>

Amazon DynamoDB transactions simplify the developer experience of making coordinated, all-or-nothing changes to multiple items both within and across tables. Transactions provide atomicity, consistency, isolation, and durability (ACID) in DynamoDB, helping you to maintain data correctness in your applications.

You can use the DynamoDB transactional read and write APIs to manage complex business workflows that require adding, updating, or deleting multiple items as a single, all-or-nothing operation. For example, a video game developer can ensure that players’ profiles are updated correctly when they exchange items in a game or make in-game purchases.

With the transaction write API, you can group multiple `Put`, `Update`, `Delete`, and `ConditionCheck` actions. You can then submit the actions as a single `TransactWriteItems` operation that either succeeds or fails as a unit. The same is true for multiple `Get` actions, which you can group and submit as a single `TransactGetItems` operation.

There is no additional cost to enable transactions for your DynamoDB tables. You pay only for the reads or writes that are part of your transaction. DynamoDB performs two underlying reads or writes of every item in the transaction: one to prepare the transaction and one to commit the transaction. These two underlying read/write operations are visible in your Amazon CloudWatch metrics.

To get started with DynamoDB transactions, download the latest AWS SDK or the AWS Command Line Interface (AWS CLI). Then follow the [DynamoDB transactions example](transaction-example.md).

The following sections provide a detailed overview of the transaction APIs and how you can use them in DynamoDB.

**Topics**
+ [How it works](transaction-apis.md)
+ [Using IAM with transactions](transaction-apis-iam.md)
+ [Example code](transaction-example.md)

# Amazon DynamoDB Transactions: How it works
<a name="transaction-apis"></a>

With Amazon DynamoDB transactions, you can group multiple actions together and submit them as a single all-or-nothing `TransactWriteItems` or `TransactGetItems` operation. The following sections describe API operations, capacity management, best practices, and other details about using transactional operations in DynamoDB.

**Topics**
+ [TransactWriteItems API](#transaction-apis-txwriteitems)
+ [TransactGetItems API](#transaction-apis-txgetitems)
+ [Isolation levels for DynamoDB transactions](#transaction-isolation)
+ [Transaction conflict handling in DynamoDB](#transaction-conflict-handling)
+ [Using transactional APIs in DynamoDB Accelerator (DAX)](#transaction-apis-dax)
+ [Capacity management for transactions](#transaction-capacity-handling)
+ [Best practices for transactions](#transaction-best-practices)
+ [Using transactional APIs with global tables](#transaction-integration)
+ [DynamoDB Transactions vs. the AWSLabs transactions client library](#transaction-vs-library)

## TransactWriteItems API
<a name="transaction-apis-txwriteitems"></a>

`TransactWriteItems` is a synchronous and idempotent write operation that groups up to 100 write actions in a single all-or-nothing operation. These actions can target up to 100 distinct items in one or more DynamoDB tables within the same AWS account and in the same Region. The aggregate size of the items in the transaction cannot exceed 4 MB. The actions are completed atomically so that either all of them succeed or none of them succeeds.

**Note**  
 A `TransactWriteItems` operation differs from a `BatchWriteItem` operation in that all the actions it contains must be completed successfully, or no changes are made at all. With a `BatchWriteItem` operation, it is possible that only some of the actions in the batch succeed while the others do not. 
 Transactions cannot be performed using indexes. 

You can't target the same item with multiple operations within the same transaction. For example, you can't perform a `ConditionCheck` and also an `Update` action on the same item in the same transaction.

You can add the following types of actions to a transaction:
+ `Put` — Initiates a `PutItem` operation to create a new item or replace an old item with a new item, conditionally or without specifying any condition.
+ `Update` — Initiates an `UpdateItem` operation to edit an existing item's attributes or add a new item to the table if it does not already exist. Use this action to add, delete, or update attributes on an existing item conditionally or without a condition.
+ `Delete` — Initiates a `DeleteItem` operation to delete a single item in a table identified by its primary key.
+ `ConditionCheck` — Checks that an item exists or checks the condition of specific attributes of the item.
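
The four action types above can be combined in one request. The following is a sketch of the low-level JSON request shape accepted by the `TransactWriteItems` API; the table, key, and attribute names (including `ProductStatus`) are illustrative, borrowed from the order-processing example later in this section:

```python
# Illustrative TransactWriteItems request body in the low-level JSON wire
# format used by the AWS CLI and SDKs. All names here are hypothetical.
transact_write_request = {
    "TransactItems": [
        {
            # ConditionCheck: verify the customer exists, without writing.
            "ConditionCheck": {
                "TableName": "Customers",
                "Key": {"CustomerId": {"S": "09e8e9c8-ec48"}},
                "ConditionExpression": "attribute_exists(CustomerId)",
            }
        },
        {
            # Update: mark the product SOLD only if it is currently IN_STOCK.
            "Update": {
                "TableName": "ProductCatalog",
                "Key": {"ProductId": {"S": "P100"}},
                "UpdateExpression": "SET ProductStatus = :new",
                "ConditionExpression": "ProductStatus = :expected",
                "ExpressionAttributeValues": {
                    ":new": {"S": "SOLD"},
                    ":expected": {"S": "IN_STOCK"},
                },
            }
        },
        {
            # Put: create the order only if it does not already exist.
            "Put": {
                "TableName": "Orders",
                "Item": {
                    "OrderId": {"S": "O-1"},
                    "OrderStatus": {"S": "CONFIRMED"},
                },
                "ConditionExpression": "attribute_not_exists(OrderId)",
            }
        },
    ],
    # Optional client token that makes the call idempotent.
    "ClientRequestToken": "order-O-1-attempt",
}
```

Either all three actions succeed, or none of them is applied.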

When a transaction completes in DynamoDB, its changes start propagating to global secondary indexes (GSIs), streams, and backups. This propagation occurs gradually: stream records from the same transaction might appear at different times and could be interleaved with records from other transactions. Stream consumers shouldn't assume transaction atomicity or ordering guarantees.

To ensure an atomic snapshot of items modified in a transaction, use the `TransactGetItems` operation to read all relevant items together. This operation provides a consistent view of the data, ensuring that you see either all changes from a completed transaction or none at all.

Because propagation isn't immediate, if a table is restored from backup ([RestoreTableFromBackup](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_RestoreTableFromBackup.html)) or exported to a point in time ([ExportTableToPointInTime](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_ExportTableToPointInTime.html)) mid-propagation, it might contain only some of the changes made during a recent transaction.

### Idempotency
<a name="transaction-apis-txwriteitems-idempotency"></a>

You can optionally include a client token when you make a `TransactWriteItems` call to ensure that the request is *idempotent*. Making your transactions idempotent helps prevent application errors if the same operation is submitted multiple times due to a connection time-out or other connectivity issue.

If the original `TransactWriteItems` call was successful, then subsequent `TransactWriteItems` calls with the same client token return successfully without making any changes. If the `ReturnConsumedCapacity` parameter is set, the initial `TransactWriteItems` call returns the number of write capacity units consumed in making the changes. Subsequent `TransactWriteItems` calls with the same client token return the number of read capacity units consumed in reading the item.

**Important points about idempotency**
+ A client token is valid for 10 minutes after the request that uses it finishes. After 10 minutes, any request that uses the same client token is treated as a new request. You should not reuse the same client token for the same request after 10 minutes.
+ If you repeat a request with the same client token within the 10-minute idempotency window but change some other request parameter, DynamoDB returns an `IdempotentParameterMismatch` exception.
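
The client-token rules above can be modeled as follows. This is a toy sketch of the service's documented behavior, not how DynamoDB is implemented; the class and method names are invented for illustration:

```python
import time

IDEMPOTENCY_WINDOW_SECONDS = 600  # client tokens are valid for 10 minutes

class IdempotencySketch:
    """Toy model of the documented client-token rules (not the service itself)."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._seen = {}  # token -> (time of original request, request parameters)

    def transact_write(self, token, params):
        now = self._clock()
        entry = self._seen.get(token)
        if entry is not None and now - entry[0] < IDEMPOTENCY_WINDOW_SECONDS:
            if entry[1] != params:
                # Same token, different parameters, inside the window.
                raise ValueError("IdempotentParameterMismatch")
            # Same token and parameters: succeed without applying changes again.
            return "replayed"
        # New token, or the token's 10-minute window has expired.
        self._seen[token] = (now, params)
        return "applied"
```

With a fake clock you can see all three cases: a replay inside the window, a parameter mismatch, and expiry after 10 minutes.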

### Error handling for writing
<a name="transaction-apis-txwriteitems-errors"></a>

Write transactions don't succeed under the following circumstances:
+ When a condition in one of the condition expressions is not met.
+ When a transaction validation error occurs because more than one action in the same `TransactWriteItems` operation targets the same item.
+ When a `TransactWriteItems` request conflicts with an ongoing `TransactWriteItems` operation on one or more items in the `TransactWriteItems` request. In this case, the request fails with a `TransactionCanceledException`.
+ When there is insufficient provisioned capacity for the transaction to be completed.
+ When an item size becomes too large (larger than 400 KB), or a local secondary index (LSI) becomes too large, or a similar validation error occurs because of changes made by the transaction.
+ When there is a user error, such as an invalid data format.

 For more information about how conflicts with `TransactWriteItems` operations are handled, see [Transaction conflict handling in DynamoDB](#transaction-conflict-handling).

## TransactGetItems API
<a name="transaction-apis-txgetitems"></a>

`TransactGetItems` is a synchronous read operation that groups up to 100 `Get` actions together. These actions can target up to 100 distinct items in one or more DynamoDB tables within the same AWS account and Region. The aggregate size of the items in the transaction can't exceed 4 MB. 

The `Get` actions are performed atomically so that either all of them succeed or all of them fail:
+ `Get` — Initiates a `GetItem` operation to retrieve a set of attributes for the item with the given primary key. If no matching item is found, `Get` does not return any data.
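
A `TransactGetItems` request is a list of `Get` actions in the same low-level JSON shape as the write API. A minimal sketch (table and key names are illustrative):

```python
# Illustrative TransactGetItems request body in the low-level JSON wire
# format. Both items are read as one atomic, consistent snapshot.
transact_get_request = {
    "TransactItems": [
        {
            "Get": {
                "TableName": "Orders",
                "Key": {"OrderId": {"S": "O-1"}},
            }
        },
        {
            "Get": {
                "TableName": "ProductCatalog",
                "Key": {"ProductId": {"S": "P100"}},
                # Optionally restrict which attributes are returned.
                "ProjectionExpression": "ProductStatus",
            }
        },
    ]
}
```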

### Error handling for reading
<a name="transaction-apis-txgetitems-errors"></a>

Read transactions don't succeed under the following circumstances:
+ When a `TransactGetItems` request conflicts with an ongoing `TransactWriteItems` operation on one or more items in the `TransactGetItems` request. In this case, the request fails with a `TransactionCanceledException`.
+ When there is insufficient provisioned capacity for the transaction to be completed.
+ When there is a user error, such as an invalid data format.

 For more information about how conflicts with `TransactGetItems` operations are handled, see [Transaction conflict handling in DynamoDB](#transaction-conflict-handling).

## Isolation levels for DynamoDB transactions
<a name="transaction-isolation"></a>

The isolation levels of transactional operations (`TransactWriteItems` or `TransactGetItems`) and other operations are as follows.

### SERIALIZABLE
<a name="transaction-isolation-serializable"></a>

*Serializable* isolation ensures that the results of multiple concurrent operations are the same as if no operation begins until the previous one has finished.

There is serializable isolation between the following types of operation:
+ Between any transactional operation and any standard write operation (`PutItem`, `UpdateItem`, or `DeleteItem`).
+ Between any transactional operation and any standard read operation (`GetItem`).
+ Between a `TransactWriteItems` operation and a `TransactGetItems` operation.

Although there is serializable isolation between a transactional operation and each individual standard write in a `BatchWriteItem` operation, there is no serializable isolation between the transaction and the `BatchWriteItem` operation as a unit.

Similarly, the isolation level between a transactional operation and individual `GetItems` in a `BatchGetItem` operation is serializable. But the isolation level between the transaction and the `BatchGetItem` operation as a unit is *read-committed*.

A single `GetItem` request is serializable with respect to a `TransactWriteItems` request in one of two ways: either before or after the `TransactWriteItems` request. Multiple `GetItem` requests against keys in a concurrent `TransactWriteItems` request can be run in any order, so the results are *read-committed*.

For example, if `GetItem` requests for item A and item B are run concurrently with a `TransactWriteItems` request that modifies both item A and item B, there are four possibilities:
+ Both `GetItem` requests are run before the `TransactWriteItems` request.
+ Both `GetItem` requests are run after the `TransactWriteItems` request.
+ The `GetItem` request for item A is run before the `TransactWriteItems` request, and the request for item B is run after it.
+ The `GetItem` request for item B is run before the `TransactWriteItems` request, and the request for item A is run after it.

If you prefer a serializable isolation level for multiple `GetItem` requests, use `TransactGetItems`.

If a non-transactional read is made on multiple items that were part of the same transaction write request in-flight, it's possible that you'll be able to read the new state of some of the items and the old state of the other items. You'll be able to read the new state of all items that were part of the transaction write request only when a successful response is received for the transactional write, indicating that the transaction has been completed.

Once the transaction is successfully completed and a response is received, subsequent *eventually consistent* read operations may still return the old state for a short period due to DynamoDB's eventual consistency model. To guarantee reading the most up-to-date data immediately after a transaction, you should use [*strongly consistent*](HowItWorks.ReadConsistency.md#HowItWorks.ReadConsistency.Strongly) reads by setting `ConsistentRead` to true.

### READ-COMMITTED
<a name="transaction-isolation-read-committed"></a>

*Read-committed* isolation ensures that read operations always return committed values for an item; a read never presents a view of the item reflecting a state from a transactional write that did not ultimately succeed. Read-committed isolation does not prevent modifications of the item immediately after the read operation.

The isolation level is read-committed between any transactional operation and any read operation that involves multiple standard reads (`BatchGetItem`, `Query`, or `Scan`). If a transactional write updates an item in the middle of a `BatchGetItem`, `Query`, or `Scan` operation, the subsequent part of the read operation returns the newly committed value (with `ConsistentRead`) or possibly a prior committed value (eventually consistent reads).

### Operation summary
<a name="transaction-isolation-table"></a>

To summarize, the following table shows the isolation levels between a transaction operation (`TransactWriteItems` or `TransactGetItems`) and other operations.


| Operation | Isolation Level | 
| --- | --- | 
| `DeleteItem` | *Serializable* | 
| `PutItem` | *Serializable* | 
| `UpdateItem` | *Serializable* | 
| `GetItem` | *Serializable* | 
| `BatchGetItem` | *Read-committed*\* | 
| `BatchWriteItem` | *NOT serializable*\* | 
| `Query` | *Read-committed* | 
| `Scan` | *Read-committed* | 
| Other transactional operation | *Serializable* | 

Levels marked with an asterisk (\*) apply to the operation as a unit. However, individual actions within those operations have a *serializable* isolation level.

## Transaction conflict handling in DynamoDB
<a name="transaction-conflict-handling"></a>

A transactional conflict can occur during concurrent item-level requests on an item within a transaction. Transaction conflicts can occur in the following scenarios: 
+ A `PutItem`, `UpdateItem`, or `DeleteItem` request for an item conflicts with an ongoing `TransactWriteItems` request that includes the same item.
+ An item within a `TransactWriteItems` request is part of another ongoing `TransactWriteItems` request.
+ An item within a `TransactGetItems` request is part of an ongoing `TransactWriteItems`, `BatchWriteItem`, `PutItem`, `UpdateItem`, or `DeleteItem` request.

**Note**  
When a `PutItem`, `UpdateItem`, or `DeleteItem` request is rejected, the request fails with a `TransactionConflictException`. 
If any item-level request within `TransactWriteItems` or `TransactGetItems` is rejected, the request fails with a `TransactionCanceledException`. If that request fails, AWS SDKs do not retry the request.  
If you are using the AWS SDK for Java, the exception contains the list of [CancellationReasons](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_CancellationReason.html), ordered according to the list of items in the `TransactItems` request parameter. For other languages, a string representation of the list is included in the exception’s error message. 
If an ongoing `TransactWriteItems` or `TransactGetItems` operation conflicts with a concurrent `GetItem` request, both operations can succeed.

The [TransactionConflict CloudWatch metric](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/metrics-dimensions.html) is incremented for each failed item-level request.
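
Because the SDKs don't retry a canceled transaction for you, an application that expects conflicts can retry with backoff itself. A minimal sketch follows; the exception class is a stand-in, and in a real application the callable would wrap the SDK's `TransactWriteItems` call and catch the SDK's own exception:

```python
import random
import time

class TransactionCanceledException(Exception):
    """Stand-in for the service exception; carries the cancellation reasons."""
    def __init__(self, reasons):
        super().__init__(str(reasons))
        self.reasons = reasons

def write_with_retries(transact_write, request, max_attempts=3, base_delay=0.05):
    """Retry a conflicting transaction with jittered exponential backoff.

    `transact_write` is any callable that raises TransactionCanceledException
    on conflict and returns the response on success.
    """
    for attempt in range(max_attempts):
        try:
            return transact_write(request)
        except TransactionCanceledException:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the conflict to the caller
            # Sleep a random interval that doubles in range each attempt.
            time.sleep(random.uniform(0, base_delay * 2 ** attempt))
```

Whether a retry is safe depends on the condition expressions in the request; a conflict caused by a failed `ConditionCheck` usually should not be retried blindly.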

## Using transactional APIs in DynamoDB Accelerator (DAX)
<a name="transaction-apis-dax"></a>

`TransactWriteItems` and `TransactGetItems` are both supported in DynamoDB Accelerator (DAX) with the same isolation levels as in DynamoDB.

`TransactWriteItems` writes through DAX. DAX passes a `TransactWriteItems` call to DynamoDB and returns the response. To populate the cache after the write, DAX calls `TransactGetItems` in the background for each item in the `TransactWriteItems` operation, which consumes additional read capacity units. (For more information, see [Capacity management for transactions](#transaction-capacity-handling).) This functionality enables you to keep your application logic simple and use DAX for both transactional operations and nontransactional ones.

`TransactGetItems` calls are passed through DAX without the items being cached locally. This is the same behavior as for strongly consistent read APIs in DAX.

## Capacity management for transactions
<a name="transaction-capacity-handling"></a>

There is no additional cost to enable transactions for your DynamoDB tables. You pay only for the reads or writes that are part of your transaction. DynamoDB performs two underlying reads or writes of every item in the transaction: one to prepare the transaction and one to commit the transaction. The two underlying read/write operations are visible in your Amazon CloudWatch metrics.

Plan for the additional reads and writes that are required by transactional APIs when you are provisioning capacity to your tables. For example, suppose that your application runs one transaction per second, and each transaction writes three 500-byte items in your table. Each item requires two write capacity units (WCUs): one to prepare the transaction and one to commit the transaction. Therefore, you would need to provision six WCUs to the table. 

If you were using DynamoDB Accelerator (DAX) in the previous example, you would also use two read capacity units (RCUs) for each item in the `TransactWriteItems` call. So you would need to provision six additional RCUs to the table.

Similarly, if your application runs one read transaction per second, and each transaction reads three 500-byte items in your table, you would need to provision six read capacity units (RCUs) to the table. Reading each item requires two RCUs: one to prepare the transaction and one to commit the transaction.
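
The arithmetic in the two examples above can be sketched as follows, assuming the standard capacity-unit rounding rules (one WCU per 1 KB written, one strongly consistent RCU per 4 KB read); the function names are illustrative:

```python
import math

def transactional_write_capacity_units(item_size_bytes, item_count):
    """WCUs consumed by one write transaction.

    Each item is written twice (once to prepare, once to commit), and a
    standard write costs one WCU per 1 KB, rounded up.
    """
    per_item_write = math.ceil(item_size_bytes / 1024)
    return 2 * per_item_write * item_count

def transactional_read_capacity_units(item_size_bytes, item_count):
    """RCUs consumed by one read transaction.

    Each item is read twice (once to prepare, once to commit), and a
    strongly consistent read costs one RCU per 4 KB, rounded up.
    """
    per_item_read = math.ceil(item_size_bytes / 4096)
    return 2 * per_item_read * item_count
```

For the examples above, three 500-byte items come out to 6 WCUs for a write transaction and 6 RCUs for a read transaction, matching the provisioning guidance in this section.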

Also, default SDK behavior is to retry transactions in case of a `TransactionInProgressException` exception. Plan for the additional read-capacity units (RCUs) that these retries consume. The same is true if you are retrying transactions in your own code using a `ClientRequestToken`.

## Best practices for transactions
<a name="transaction-best-practices"></a>

Consider the following recommended practices when using DynamoDB transactions.
+ Enable automatic scaling on your tables, or ensure that you have provisioned enough throughput capacity to perform the two read or write operations for every item in your transaction.
+ If you are not using an AWS provided SDK, include a `ClientRequestToken` attribute when you make a `TransactWriteItems` call to ensure that the request is idempotent.
+ Don't group operations together in a transaction if it's not necessary. For example, if a single transaction with 10 operations can be broken into multiple transactions without compromising application correctness, we recommend splitting it up. Simpler transactions improve throughput and are more likely to succeed. 
+ Multiple transactions updating the same items simultaneously can cause conflicts that cancel the transactions. We recommend following DynamoDB best practices for data modeling to minimize such conflicts.
+ If a set of attributes is often updated across multiple items as part of a single transaction, consider grouping the attributes into a single item to reduce the scope of the transaction.
+ Avoid using transactions for ingesting data in bulk. For bulk writes, it is better to use `BatchWriteItem`.

## Using transactional APIs with global tables
<a name="transaction-integration"></a>

Transactional operations provide atomicity, consistency, isolation, and durability (ACID) guarantees only within the AWS Region where the write API was invoked. Transactions aren't supported across Regions in global tables. For example, suppose that you have a global table with replicas in the US East (Ohio) and US West (Oregon) Regions and you perform a `TransactWriteItems` operation in the US East (Ohio) Region. You may observe partially completed transactions in the US West (Oregon) Region as changes are replicated. Changes are replicated to other Regions only after they've been committed in the source Region.

## DynamoDB Transactions vs. the AWSLabs transactions client library
<a name="transaction-vs-library"></a>

DynamoDB transactions provide a more cost-effective, robust, and performant replacement for the [AWSLabs](https://github.com/awslabs) transactions client library. We suggest that you update your applications to use the native, server-side transaction APIs.

# Using IAM with DynamoDB transactions
<a name="transaction-apis-iam"></a>

You can use AWS Identity and Access Management (IAM) to restrict the actions that transactional operations can perform in Amazon DynamoDB. For more information about using IAM policies in DynamoDB, see [Identity-based policies for DynamoDB](security_iam_service-with-iam.md#security_iam_service-with-iam-id-based-policies).

Permissions for `Put`, `Update`, `Delete`, and `Get` actions are governed by the permissions used for the underlying `PutItem`, `UpdateItem`, `DeleteItem`, and `GetItem` operations. For the `ConditionCheck` action, you can use the `dynamodb:ConditionCheckItem` permission in IAM policies.

The following are examples of IAM policies that you can use to configure the DynamoDB transactions.

## Example 1: Allow transactional operations
<a name="tx-policy-example-1"></a>

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:ConditionCheckItem",
                "dynamodb:PutItem",
                "dynamodb:UpdateItem",
                "dynamodb:DeleteItem",
                "dynamodb:GetItem"
            ],
            "Resource": [
                "arn:aws:dynamodb:*:*:table/table04"
            ]
        }
    ]
}
```

------

## Example 2: Allow only transactional operations
<a name="tx-policy-example-2"></a>

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:ConditionCheckItem",
                "dynamodb:PutItem",
                "dynamodb:UpdateItem",
                "dynamodb:DeleteItem",
                "dynamodb:GetItem"
            ],
            "Resource": [
                "arn:aws:dynamodb:*:*:table/table04"
            ],
            "Condition": {
                "ForAnyValue:StringEquals": {
                    "dynamodb:EnclosingOperation": [
                        "TransactWriteItems",
                        "TransactGetItems"
                    ]
                }
            }
        }
    ]
}
```

------

## Example 3: Allow nontransactional reads and writes, and block transactional reads and writes
<a name="tx-policy-example-3"></a>

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "dynamodb:ConditionCheckItem",
                "dynamodb:PutItem",
                "dynamodb:UpdateItem",
                "dynamodb:DeleteItem",
                "dynamodb:GetItem"
            ],
            "Resource": [
                "arn:aws:dynamodb:*:*:table/table04"
            ],
            "Condition": {
                "ForAnyValue:StringEquals": {
                    "dynamodb:EnclosingOperation": [
                        "TransactWriteItems",
                        "TransactGetItems"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
             "Action": [
                 "dynamodb:PutItem",
                 "dynamodb:DeleteItem",
                 "dynamodb:GetItem",
                 "dynamodb:UpdateItem"
             ],
             "Resource": [
                 "arn:aws:dynamodb:*:*:table/table04"
             ]
         }
    ]
}
```

------

## Example 4: Prevent information from being returned on a ConditionCheck failure
<a name="tx-policy-example-4"></a>

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:ConditionCheckItem",
                "dynamodb:PutItem",
                "dynamodb:UpdateItem",
                "dynamodb:DeleteItem",
                "dynamodb:GetItem"
            ],
            "Resource": "arn:aws:dynamodb:*:*:table/table01",
            "Condition": {
                "StringEqualsIfExists": {
                    "dynamodb:ReturnValues": "NONE"
                }
            }
        }
    ]
}
```

------

# DynamoDB transactions example
<a name="transaction-example"></a>

As an example of a situation in which Amazon DynamoDB transactions can be useful, consider this sample Java application for an online marketplace.

The application has three DynamoDB tables in the backend:
+ `Customers` — This table stores details about the marketplace customers. Its primary key is a `CustomerId` unique identifier.
+ `ProductCatalog` — This table stores details such as price and availability about the products for sale in the marketplace. Its primary key is a `ProductId` unique identifier.
+ `Orders` — This table stores details about orders from the marketplace. Its primary key is an `OrderId` unique identifier.

## Making an order
<a name="transaction-example-write-order"></a>

The following code snippets illustrate how to use DynamoDB transactions to coordinate the multiple steps that are required to create and process an order. Using a single all-or-nothing operation ensures that if any part of the transaction fails, no actions in the transaction are run and no changes are made.

In this example, you set up an order from a customer whose `customerId` is `09e8e9c8-ec48`. You then run it as a single transaction using the following simple order-processing workflow:

1. Determine that the customer ID is valid.

1. Make sure that the product is `IN_STOCK`, and update the product status to `SOLD`.

1. Make sure that the order does not already exist, and create the order.

### Validate the customer
<a name="transaction-example-order-part-a"></a>

First, define an action to verify that a customer with `customerId` equal to `09e8e9c8-ec48` exists in the customer table.

```
final String CUSTOMER_TABLE_NAME = "Customers";
final String CUSTOMER_PARTITION_KEY = "CustomerId";
final String customerId = "09e8e9c8-ec48";
final HashMap<String, AttributeValue> customerItemKey = new HashMap<>();
customerItemKey.put(CUSTOMER_PARTITION_KEY, new AttributeValue(customerId));

ConditionCheck checkCustomerValid = new ConditionCheck()
    .withTableName(CUSTOMER_TABLE_NAME)
    .withKey(customerItemKey)
    .withConditionExpression("attribute_exists(" + CUSTOMER_PARTITION_KEY + ")");
```

### Update the product status
<a name="transaction-example-order-part-b"></a>

Next, define an action to update the product status to `SOLD` if the condition that the product status is currently set to `IN_STOCK` is `true`. Setting the `ReturnValuesOnConditionCheckFailure` parameter returns the item if the item's product status attribute was not equal to `IN_STOCK`.

```
final String PRODUCT_TABLE_NAME = "ProductCatalog";
final String PRODUCT_PARTITION_KEY = "ProductId";
// productKey holds the ProductId of the product being ordered, defined elsewhere in the application.
HashMap<String, AttributeValue> productItemKey = new HashMap<>();
productItemKey.put(PRODUCT_PARTITION_KEY, new AttributeValue(productKey));

Map<String, AttributeValue> expressionAttributeValues = new HashMap<>();
expressionAttributeValues.put(":new_status", new AttributeValue("SOLD"));
expressionAttributeValues.put(":expected_status", new AttributeValue("IN_STOCK"));

Update markItemSold = new Update()
    .withTableName(PRODUCT_TABLE_NAME)
    .withKey(productItemKey)
    .withUpdateExpression("SET ProductStatus = :new_status")
    .withExpressionAttributeValues(expressionAttributeValues)
    .withConditionExpression("ProductStatus = :expected_status")
    .withReturnValuesOnConditionCheckFailure(ReturnValuesOnConditionCheckFailure.ALL_OLD);
```

### Create the order
<a name="transaction-example-order-part-c"></a>

Lastly, create the order as long as an order with that `OrderId` does not already exist.

```
final String ORDER_PARTITION_KEY = "OrderId";
final String ORDER_TABLE_NAME = "Orders";

// orderId holds the new order's OrderId, generated elsewhere in the application.
HashMap<String, AttributeValue> orderItem = new HashMap<>();
orderItem.put(ORDER_PARTITION_KEY, new AttributeValue(orderId));
orderItem.put(PRODUCT_PARTITION_KEY, new AttributeValue(productKey));
orderItem.put(CUSTOMER_PARTITION_KEY, new AttributeValue(customerId));
orderItem.put("OrderStatus", new AttributeValue("CONFIRMED"));
orderItem.put("OrderTotal", new AttributeValue("100"));

Put createOrder = new Put()
    .withTableName(ORDER_TABLE_NAME)
    .withItem(orderItem)
    .withReturnValuesOnConditionCheckFailure(ReturnValuesOnConditionCheckFailure.ALL_OLD)
    .withConditionExpression("attribute_not_exists(" + ORDER_PARTITION_KEY + ")");
```

### Run the transaction
<a name="transaction-example-order-part-d"></a>

The following example illustrates how to run the actions defined previously as a single all-or-nothing operation.

```
    Collection<TransactWriteItem> actions = Arrays.asList(
        new TransactWriteItem().withConditionCheck(checkCustomerValid),
        new TransactWriteItem().withUpdate(markItemSold),
        new TransactWriteItem().withPut(createOrder));

    TransactWriteItemsRequest placeOrderTransaction = new TransactWriteItemsRequest()
        .withTransactItems(actions)
        .withReturnConsumedCapacity(ReturnConsumedCapacity.TOTAL);

    // Run the transaction and process the result.
    try {
        client.transactWriteItems(placeOrderTransaction);
        System.out.println("Transaction Successful");

    } catch (ResourceNotFoundException rnf) {
        System.err.println("One of the tables involved in the transaction was not found: " + rnf.getMessage());
    } catch (InternalServerErrorException ise) {
        System.err.println("Internal Server Error: " + ise.getMessage());
    } catch (TransactionCanceledException tce) {
        System.err.println("Transaction Canceled: " + tce.getMessage());
    }
```

## Reading the order details
<a name="transaction-example-read-order"></a>

The following example shows how to read the completed order transactionally across the `Orders` and `ProductCatalog` tables.

```
HashMap<String, AttributeValue> productItemKey = new HashMap<>();
productItemKey.put(PRODUCT_PARTITION_KEY, new AttributeValue(productKey));

HashMap<String, AttributeValue> orderKey = new HashMap<>();
orderKey.put(ORDER_PARTITION_KEY, new AttributeValue(orderId));

Get readProductSold = new Get()
    .withTableName(PRODUCT_TABLE_NAME)
    .withKey(productItemKey);
Get readCreatedOrder = new Get()
    .withTableName(ORDER_TABLE_NAME)
    .withKey(orderKey);

Collection<TransactGetItem> getActions = Arrays.asList(
    new TransactGetItem().withGet(readProductSold),
    new TransactGetItem().withGet(readCreatedOrder));

TransactGetItemsRequest readCompletedOrder = new TransactGetItemsRequest()
    .withTransactItems(getActions)
    .withReturnConsumedCapacity(ReturnConsumedCapacity.TOTAL);

// Run the transaction and process the result.
try {
    TransactGetItemsResult result = client.transactGetItems(readCompletedOrder);
    System.out.println(result.getResponses());
} catch (ResourceNotFoundException rnf) {
    System.err.println("One of the tables involved in the transaction was not found: " + rnf.getMessage());
} catch (InternalServerErrorException ise) {
    System.err.println("Internal Server Error: " + ise.getMessage());
} catch (TransactionCanceledException tce) {
    System.err.println("Transaction Canceled: " + tce.getMessage());
}
```

# Change data capture with Amazon DynamoDB
<a name="streamsmain"></a>

Many applications benefit from capturing changes to items stored in a DynamoDB table, at the point in time when such changes occur. The following are some example use cases:
+ A popular mobile app modifies data in a DynamoDB table, at the rate of thousands of updates per second. Another application captures and stores data about these updates, providing near-real-time usage metrics for the mobile app.
+ A financial application modifies stock market data in a DynamoDB table. Different applications running in parallel track these changes in real time, compute value-at-risk, and automatically rebalance portfolios based on stock price movements.
+ Sensors in transportation vehicles and industrial equipment send data to a DynamoDB table. Different applications monitor performance and send messaging alerts when a problem is detected, predict any potential defects by applying machine learning algorithms, and compress and archive data to Amazon Simple Storage Service (Amazon S3).
+ An application automatically sends notifications to the mobile devices of all friends in a group as soon as one friend uploads a new picture.
+ A new customer adds data to a DynamoDB table. This event invokes another application that sends a welcome email to the new customer.

DynamoDB supports streaming of item-level change data capture records in near-real time. You can build applications that consume these streams and take action based on the contents.

**Note**  
Adding tags to DynamoDB Streams and using [attribute-based access control (ABAC)](access-control-resource-based.md) with DynamoDB Streams aren't supported.

The following video will give you an introductory look at the change data capture concept.

[![AWS Videos](http://img.youtube.com/vi/VVv_-mZ5Ge8/0.jpg)](https://www.youtube.com/watch?v=VVv_-mZ5Ge8)


**Topics**
+ [Streaming options for change data capture](#streamsmain.choose)
+ [Using Kinesis Data Streams to capture changes to DynamoDB](kds.md)
+ [Change data capture for DynamoDB Streams](Streams.md)

## Streaming options for change data capture
<a name="streamsmain.choose"></a>

DynamoDB offers two streaming models for change data capture: Kinesis Data Streams for DynamoDB and DynamoDB Streams.

To help you choose the right solution for your application, the following table summarizes the features of each streaming model. 

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/streamsmain.html)

You can enable both streaming models on the same DynamoDB table.

The following video talks more about the differences between the two options.

[![AWS Videos](http://img.youtube.com/vi/UgG17Wh2y0g/0.jpg)](https://www.youtube.com/watch?v=UgG17Wh2y0g)


# Using Kinesis Data Streams to capture changes to DynamoDB
<a name="kds"></a>

You can use Amazon Kinesis Data Streams to capture changes to Amazon DynamoDB.

Kinesis Data Streams captures item-level modifications in any DynamoDB table and replicates them to a [Kinesis data stream](https://docs.aws.amazon.com/streams/latest/dev/introduction.html). Your applications can access this stream and view item-level changes in near-real time. You can continuously capture and store terabytes of data per hour. You can take advantage of longer data retention time—and with enhanced fan-out capability, you can simultaneously reach two or more downstream applications. Other benefits include additional audit and security transparency.

Kinesis Data Streams also gives you access to [Amazon Data Firehose](https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html) and [Amazon Managed Service for Apache Flink](https://docs.aws.amazon.com/kinesisanalytics/latest/dev/what-is.html). These services can help you build applications that power real-time dashboards, generate alerts, implement dynamic pricing and advertising, and implement sophisticated data analytics and machine learning algorithms.

**Note**  
Using Kinesis Data Streams for DynamoDB is subject to both [Kinesis Data Streams pricing](https://aws.amazon.com/kinesis/data-streams/pricing/) for the data stream and [DynamoDB pricing](https://aws.amazon.com/dynamodb/pricing/) for the source table.

To enable Kinesis streaming on a DynamoDB table using the console, AWS CLI, or Java SDK, see [Getting started with Kinesis Data Streams for Amazon DynamoDB](kds_gettingstarted.md).

**Topics**
+ [How Kinesis Data Streams works with DynamoDB](#kds_howitworks)
+ [Getting started with Kinesis Data Streams for Amazon DynamoDB](kds_gettingstarted.md)
+ [Using shards and metrics with DynamoDB Streams and Kinesis Data Streams](kds_using-shards-and-metrics.md)
+ [Using IAM policies for Amazon Kinesis Data Streams and Amazon DynamoDB](kds_iam.md)

## How Kinesis Data Streams works with DynamoDB
<a name="kds_howitworks"></a>

When a Kinesis data stream is enabled for a DynamoDB table, the table sends out a data record that captures any changes to that table’s data. This data record includes:
+ The specific time any item was recently created, updated, or deleted
+ That item’s primary key
+ A snapshot of the record before the modification
+ A snapshot of the record after the modification 

These data records are captured and published in near-real time. After they are written to the Kinesis data stream, they can be read just like any other record. You can consume them with the Kinesis Client Library, AWS Lambda, the Kinesis Data Streams API, or other connected services. For more information, see [Reading Data from Amazon Kinesis Data Streams](https://docs.aws.amazon.com/streams/latest/dev/building-consumers.html) in the *Amazon Kinesis Data Streams Developer Guide*.

These changes to data are captured asynchronously, so Kinesis has no performance impact on the table that it's streaming from. The stream records stored in your Kinesis data stream are also encrypted at rest. For more information, see [Data Protection in Amazon Kinesis Data Streams](https://docs.aws.amazon.com/streams/latest/dev/server-side-encryption.html).

The Kinesis data stream records might appear in a different order than when the item changes occurred. The same item notifications might also appear more than once in the stream. You can check the `ApproximateCreationDateTime` attribute to identify the order that the item modifications occurred in, and to identify duplicate records. 
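
As a sketch of that consumer-side handling (the `ChangeRecord` type and its field names are hypothetical stand-ins for a deserialized stream record, not part of the Kinesis API), buffered records can be ordered by `ApproximateCreationDateTime` and de-duplicated before processing:

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class StreamRecordDedup {
    // Minimal stand-in for a deserialized stream record: the item's key plus
    // the ApproximateCreationDateTime attribute (epoch milliseconds).
    public record ChangeRecord(String itemKey, long approximateCreationDateTime) {}

    // Sort buffered records by creation time, then keep only the first
    // occurrence of each (key, timestamp) pair to drop duplicate notifications.
    public static List<ChangeRecord> orderAndDedup(List<ChangeRecord> buffered) {
        return buffered.stream()
            .sorted(Comparator.comparingLong(ChangeRecord::approximateCreationDateTime))
            .distinct() // records compare by value, so identical notifications collapse
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<ChangeRecord> buffered = List.of(
            new ChangeRecord("item-2", 2000L),
            new ChangeRecord("item-1", 1000L),
            new ChangeRecord("item-1", 1000L)); // duplicate notification
        System.out.println(orderAndDedup(buffered));
    }
}
```

Because `ChangeRecord` is a Java record, `distinct()` compares both fields by value; a real consumer would apply the same idea to the item's primary key plus the creation timestamp.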

When you enable a Kinesis data stream as a streaming destination of a DynamoDB table, you can configure the precision of `ApproximateCreationDateTime` values in either milliseconds or microseconds. By default, `ApproximateCreationDateTime` indicates the time of the change in milliseconds. Additionally, you can change this value on an active streaming destination. After such an update, stream records written to Kinesis will have `ApproximateCreationDateTime` values of the desired precision. 

Binary values written to DynamoDB must be [base64-encoded](HowItWorks.NamingRulesDataTypes.md). However, when data records are written to a Kinesis data stream, these already-encoded binary values are base64-encoded a second time. When reading these records from a Kinesis data stream, applications must therefore decode the values twice to retrieve the raw binary values.
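
The double decoding can be illustrated with a minimal sketch using only the JDK (the sample payload is hypothetical):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class DoubleBase64 {
    // A binary attribute is base64-encoded when written to DynamoDB, and the
    // stream record's payload base64-encodes that value a second time, so a
    // consumer decodes twice to recover the raw bytes.
    public static byte[] decodeTwice(String doublyEncoded) {
        byte[] onceDecoded = Base64.getDecoder().decode(doublyEncoded); // undo the stream's encoding
        return Base64.getDecoder().decode(onceDecoded);                 // undo the DynamoDB encoding
    }

    public static void main(String[] args) {
        byte[] raw = "hello".getBytes(StandardCharsets.UTF_8);
        String once = Base64.getEncoder().encodeToString(raw);
        String twice = Base64.getEncoder().encodeToString(once.getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(decodeTwice(twice), StandardCharsets.UTF_8)); // prints "hello"
    }
}
```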

DynamoDB charges for using Kinesis Data Streams in change data capture units. 1 KB of change per single item counts as one change data capture unit. The KB of change in each item is calculated by the larger of the “before” and “after” images of the item written to the stream, using the same logic as [capacity unit consumption for write operations](read-write-operations.md#write-operation-consumption). Similar to how DynamoDB [on-demand](capacity-mode.md#capacity-mode-on-demand) mode works, you don't need to provision capacity throughput for change data capture units.
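
As a rough sketch of this billing rule (the 1 KB rounding and the max-of-images logic follow the paragraph above; treating an empty change as one unit is an assumption of this sketch):

```java
public class CdcUnits {
    // One change data capture unit covers up to 1 KB of change per item; the
    // billed size is the larger of the "before" and "after" images, rounded up
    // to the next whole KB.
    public static long captureUnits(long beforeImageBytes, long afterImageBytes) {
        long billedBytes = Math.max(beforeImageBytes, afterImageBytes);
        return Math.max(1, (billedBytes + 1023) / 1024); // ceiling division, minimum of one unit
    }

    public static void main(String[] args) {
        System.out.println(captureUnits(800, 1200)); // 1200-byte "after" image -> 2 units
    }
}
```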

### Turning on a Kinesis data stream for your DynamoDB table
<a name="kds_howitworks.enabling"></a>

You can enable or disable streaming to Kinesis from your existing DynamoDB table by using the AWS Management Console, the AWS SDK, or the AWS Command Line Interface (AWS CLI).
+ You can only stream data from DynamoDB to Kinesis Data Streams in the same AWS account and AWS Region as your table. 
+ You can only stream data from a DynamoDB table to one Kinesis data stream.

  

### Making changes to a Kinesis Data Streams destination on your DynamoDB table
<a name="kds_howitworks.makingchanges"></a>

By default, all Kinesis data stream records include an `ApproximateCreationDateTime` attribute. This attribute represents a timestamp in milliseconds of the approximate time when each record was created. You can change the precision of these values by using the [Kinesis console](https://console.aws.amazon.com/kinesis), the AWS SDK, or the AWS CLI.

# Getting started with Kinesis Data Streams for Amazon DynamoDB
<a name="kds_gettingstarted"></a>

This section describes how to use Kinesis Data Streams for Amazon DynamoDB tables with the Amazon DynamoDB console, the AWS Command Line Interface (AWS CLI), and the API.

## Creating an active Amazon Kinesis data stream
<a name="kds_gettingstarted.making-changes"></a>

All of these examples use the `Music` DynamoDB table that was created as part of the [Getting started with DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GettingStartedDynamoDB.html) tutorial.

To learn more about how to build consumers and connect your Kinesis data stream to other AWS services, see [Reading data from Kinesis Data Streams](https://docs.aws.amazon.com/streams/latest/dev/building-consumers.html) in the *Amazon Kinesis Data Streams Developer Guide*.

**Note**  
When you're first using Kinesis Data Streams, we recommend setting your shards to scale up and down with usage patterns. After you have accumulated more data on usage patterns, you can adjust the shards in your stream to match.

------
#### [ Console ]

1. Sign in to the AWS Management Console and open the Kinesis console at [https://console.aws.amazon.com/kinesis/](https://console.aws.amazon.com/kinesis/).

1. Choose **Create data stream** and follow the instructions to create a stream called `samplestream`. 

1. Open the DynamoDB console at [https://console.aws.amazon.com/dynamodb/](https://console.aws.amazon.com/dynamodb/).

1. In the navigation pane on the left side of the console, choose **Tables**.

1. Choose the **Music** table.

1. Choose the **Exports and streams** tab.

1. (Optional) Under **Amazon Kinesis data stream details**, you can change the record timestamp precision from millisecond (the default) to microsecond. 

1. Choose **samplestream** from the dropdown list.

1. Choose the **Turn On** button.

------
#### [ AWS CLI ]

1. Create a Kinesis data stream named `samplestream` by using the [create-stream command](https://docs.aws.amazon.com/cli/latest/reference/kinesis/create-stream.html).

   ```
   aws kinesis create-stream --stream-name samplestream --shard-count 3 
   ```

   See [Shard management considerations for Kinesis Data Streams](kds_using-shards-and-metrics.md#kds_using-shards-and-metrics.shardmanagment) before setting the number of shards for the Kinesis data stream.

1. Check that the Kinesis stream is active and ready for use by using the [describe-stream command](https://docs.aws.amazon.com/cli/latest/reference/kinesis/describe-stream.html).

   ```
   aws kinesis describe-stream --stream-name samplestream
   ```

1. Enable Kinesis streaming on the DynamoDB table by using the DynamoDB `enable-kinesis-streaming-destination` command. Replace the `stream-arn` value with the one that was returned by `describe-stream` in the previous step. Optionally, enable streaming with a more granular (microsecond) precision of timestamp values returned on each record.

   Enable streaming with microsecond timestamp precision:

   ```
   aws dynamodb enable-kinesis-streaming-destination \
     --table-name Music \
     --stream-arn arn:aws:kinesis:us-west-2:12345678901:stream/samplestream \
     --enable-kinesis-streaming-configuration ApproximateCreationDateTimePrecision=MICROSECOND
   ```

   Or enable streaming with default timestamp precision (millisecond):

   ```
   aws dynamodb enable-kinesis-streaming-destination \
     --table-name Music \
     --stream-arn arn:aws:kinesis:us-west-2:12345678901:stream/samplestream
   ```

1. Check if Kinesis streaming is active on the table by using the DynamoDB `describe-kinesis-streaming-destination` command.

   ```
   aws dynamodb describe-kinesis-streaming-destination --table-name Music
   ```

1. Write data to the DynamoDB table by using the `put-item` command, as described in the [DynamoDB Developer Guide](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/getting-started-step-2.html).

   ```
   aws dynamodb put-item \
       --table-name Music  \
       --item \
           '{"Artist": {"S": "No One You Know"}, "SongTitle": {"S": "Call Me Today"}, "AlbumTitle": {"S": "Somewhat Famous"}, "Awards": {"N": "1"}}'
   
   aws dynamodb put-item \
       --table-name Music \
       --item \
           '{"Artist": {"S": "Acme Band"}, "SongTitle": {"S": "Happy Day"}, "AlbumTitle": {"S": "Songs About Life"}, "Awards": {"N": "10"} }'
   ```

1. Use the Kinesis [get-records](https://docs.aws.amazon.com/cli/latest/reference/kinesis/get-records.html) CLI command to retrieve the Kinesis stream contents. Then use the following code snippet to deserialize the stream content.

   ```
   /**
    * Takes as input a Record fetched from Kinesis and does arbitrary processing as an example.
    */
   public void processRecord(Record kinesisRecord) throws IOException {
       ByteBuffer kdsRecordByteBuffer = kinesisRecord.getData();
       JsonNode rootNode = OBJECT_MAPPER.readTree(kdsRecordByteBuffer.array());
       JsonNode dynamoDBRecord = rootNode.get("dynamodb");
       JsonNode oldItemImage = dynamoDBRecord.get("OldImage");
       JsonNode newItemImage = dynamoDBRecord.get("NewImage");
       Instant recordTimestamp = fetchTimestamp(dynamoDBRecord);
   
       /**
        * Say for example our record contains a String attribute named "stringName" and we want to fetch the value
        * of this attribute from the new item image. The following code fetches this value.
        */
       JsonNode attributeNode = newItemImage.get("stringName");
       JsonNode attributeValueNode = attributeNode.get("S"); // Using DynamoDB "S" type attribute
       String attributeValue = attributeValueNode.textValue();
       System.out.println(attributeValue);
   }
   
   private Instant fetchTimestamp(JsonNode dynamoDBRecord) {
       JsonNode timestampJson = dynamoDBRecord.get("ApproximateCreationDateTime");
       JsonNode timestampPrecisionJson = dynamoDBRecord.get("ApproximateCreationDateTimePrecision");
       if (timestampPrecisionJson != null && "MICROSECOND".equals(timestampPrecisionJson.textValue())) {
           return Instant.EPOCH.plus(timestampJson.longValue(), ChronoUnit.MICROS);
       }
       return Instant.ofEpochMilli(timestampJson.longValue());
   }
   ```

------
#### [ Java ]

1. Follow the instructions in the *Amazon Kinesis Data Streams Developer Guide* to [create](https://docs.aws.amazon.com/streams/latest/dev/kinesis-using-sdk-java-create-stream.html) a Kinesis data stream named `samplestream` using Java.

   See [Shard management considerations for Kinesis Data Streams](kds_using-shards-and-metrics.md#kds_using-shards-and-metrics.shardmanagment) before setting the number of shards for the Kinesis data stream. 

1. Use the following code snippet to enable Kinesis streaming on the DynamoDB table. Optionally, enable streaming with a more granular (microsecond) precision of timestamp values returned on each record. 

   Enable streaming with microsecond timestamp precision:

   ```
   EnableKinesisStreamingConfiguration enableKdsConfig = EnableKinesisStreamingConfiguration.builder()
     .approximateCreationDateTimePrecision(ApproximateCreationDateTimePrecision.MICROSECOND)
     .build();
   
   EnableKinesisStreamingDestinationRequest enableKdsRequest = EnableKinesisStreamingDestinationRequest.builder()
     .tableName(tableName)
     .streamArn(kdsArn)
     .enableKinesisStreamingConfiguration(enableKdsConfig)
     .build();
   
   EnableKinesisStreamingDestinationResponse enableKdsResponse = ddbClient.enableKinesisStreamingDestination(enableKdsRequest);
   ```

   Or enable streaming with default timestamp precision (millisecond):

   ```
   EnableKinesisStreamingDestinationRequest enableKdsRequest = EnableKinesisStreamingDestinationRequest.builder()
     .tableName(tableName)
     .streamArn(kdsArn)
     .build();
   
   EnableKinesisStreamingDestinationResponse enableKdsResponse = ddbClient.enableKinesisStreamingDestination(enableKdsRequest);
   ```

1. Follow the instructions in the *Amazon Kinesis Data Streams Developer Guide* to [read](https://docs.aws.amazon.com/streams/latest/dev/building-consumers.html) from the created data stream.

1. Use the following code snippet to deserialize the stream content.

   ```
   /**
    * Takes as input a Record fetched from Kinesis and does arbitrary processing as an example.
    */
   public void processRecord(Record kinesisRecord) throws IOException {
       ByteBuffer kdsRecordByteBuffer = kinesisRecord.getData();
       JsonNode rootNode = OBJECT_MAPPER.readTree(kdsRecordByteBuffer.array());
       JsonNode dynamoDBRecord = rootNode.get("dynamodb");
       JsonNode oldItemImage = dynamoDBRecord.get("OldImage");
       JsonNode newItemImage = dynamoDBRecord.get("NewImage");
       Instant recordTimestamp = fetchTimestamp(dynamoDBRecord);
   
       /**
        * Say for example our record contains a String attribute named "stringName" and we want to fetch the value
        * of this attribute from the new item image. The following code fetches this value.
        */
       JsonNode attributeNode = newItemImage.get("stringName");
       JsonNode attributeValueNode = attributeNode.get("S"); // Using DynamoDB "S" type attribute
       String attributeValue = attributeValueNode.textValue();
       System.out.println(attributeValue);
   }
   
   private Instant fetchTimestamp(JsonNode dynamoDBRecord) {
       JsonNode timestampJson = dynamoDBRecord.get("ApproximateCreationDateTime");
       JsonNode timestampPrecisionJson = dynamoDBRecord.get("ApproximateCreationDateTimePrecision");
       if (timestampPrecisionJson != null && "MICROSECOND".equals(timestampPrecisionJson.textValue())) {
           return Instant.EPOCH.plus(timestampJson.longValue(), ChronoUnit.MICROS);
       }
       return Instant.ofEpochMilli(timestampJson.longValue());
   }
   ```

------

## Making changes to an active Amazon Kinesis data stream
<a name="kds_gettingstarted.making-changes"></a>

This section describes how to make changes to an active Kinesis Data Streams for DynamoDB setup by using the console, the AWS CLI, and the API.

**AWS Management Console**

1. Open the DynamoDB console at [https://console.aws.amazon.com/dynamodb/](https://console.aws.amazon.com/dynamodb/).

1. Go to your table.

1. Choose the **Exports and streams** tab.

**AWS CLI**

1. Call `describe-kinesis-streaming-destination` to confirm that the stream is `ACTIVE`. 

1. Call `update-kinesis-streaming-destination`, as in this example:

   ```
   aws dynamodb update-kinesis-streaming-destination --table-name enable_test_table --stream-arn arn:aws:kinesis:us-east-1:12345678901:stream/enable_test_stream --update-kinesis-streaming-configuration ApproximateCreationDateTimePrecision=MICROSECOND
   ```

1. Call `describe-kinesis-streaming-destination` to confirm that the stream is `UPDATING`.

1. Call `describe-kinesis-streaming-destination` periodically until the streaming status is `ACTIVE` again. Timestamp precision updates typically take up to 5 minutes to take effect. When the status returns to `ACTIVE`, the update is complete and the new precision value is applied to future records.

1. Write to the table using `putItem`.

1. Use the Kinesis `get-records` command to get the stream contents.

1. Confirm that the `ApproximateCreationDateTime` values of the writes have the desired precision.

**Java API**

1. Construct an `UpdateKinesisStreamingDestinationRequest` that specifies the new precision, call `updateKinesisStreamingDestination`, and inspect the `UpdateKinesisStreamingDestinationResponse`. 

1. Construct a `DescribeKinesisStreamingDestinationRequest`, call `describeKinesisStreamingDestination`, and inspect the `DescribeKinesisStreamingDestinationResponse`.

1. Call `describeKinesisStreamingDestination` periodically until the streaming status is `ACTIVE` again, indicating that the update is complete and the new precision value will be applied to future records.

1. Perform writes to the table.

1. Read from the stream and deserialize the stream content.

1. Confirm that the `ApproximateCreationDateTime` values of the writes have the desired precision.

# Using shards and metrics with DynamoDB Streams and Kinesis Data Streams
<a name="kds_using-shards-and-metrics"></a>

## Shard management considerations for Kinesis Data Streams
<a name="kds_using-shards-and-metrics.shardmanagment"></a>

A Kinesis data stream counts its throughput in [shards](https://docs.aws.amazon.com/streams/latest/dev/key-concepts.html). In Amazon Kinesis Data Streams, you can choose between an **on-demand** mode and a **provisioned** mode for your data streams.

We recommend using on-demand mode for your Kinesis data stream if your DynamoDB write workload is highly variable and unpredictable. With on-demand mode, no capacity planning is required because Kinesis Data Streams automatically manages the shards to provide the necessary throughput.

For predictable workloads, you can use provisioned mode for your Kinesis data stream. With provisioned mode, you must specify the number of shards for the data stream to accommodate the change data capture records from DynamoDB. To determine the number of shards that the Kinesis data stream will need to support your DynamoDB table, you need the following input values:
+ The average size of your DynamoDB table’s records in bytes (`average_record_size_in_bytes`).
+ The maximum number of write operations that your DynamoDB table will perform per second (`write_throughput`). This includes create, delete, and update operations performed by your applications, as well as automatically generated operations such as Time to Live deletions.
+ The percentage of update and overwrite operations that you perform on your table, as compared to create or delete operations (`percentage_of_updates`). Keep in mind that update and overwrite operations replicate both the old and new images of the modified item to the stream, which generates twice the DynamoDB item size.

You can calculate the number of shards (`number_of_shards`) that your Kinesis data stream needs by using the input values in the following formula:

```
number_of_shards = ceiling( max( ((write_throughput * (1+percentage_of_updates) * average_record_size_in_bytes) / 1024 / 1024), (write_throughput/1000)), 1)
```

For example, you might have a maximum throughput of 1040 write operations per second (`write_throughput`) with an average record size of 800 bytes (`average_record_size_in_bytes`). If 25 percent of those write operations are update operations (`percentage_of_updates`), then you will need two shards (`number_of_shards`) to accommodate your DynamoDB streaming throughput:

```
number_of_shards = ceiling( max( ((1040 * (1+25/100) * 800) / 1024 / 1024), (1040/1000)), 1) = 2
```
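
The same calculation can be sketched in Java. Plugging in the worked example above (1040 writes per second, 25 percent updates, 800-byte records) yields two shards:

```java
public class ShardEstimate {
    // Direct translation of the shard formula: MB/s of change data (updates
    // counted at roughly double size via 1 + percentage_of_updates) capped
    // against the 1,000 records/s per-shard limit, with a minimum of one shard.
    public static int numberOfShards(double writeThroughput,
                                     double percentageOfUpdates,
                                     double averageRecordSizeInBytes) {
        double mbPerSecond = (writeThroughput * (1 + percentageOfUpdates) * averageRecordSizeInBytes) / 1024 / 1024;
        double recordsLimit = writeThroughput / 1000;
        return (int) Math.ceil(Math.max(Math.max(mbPerSecond, recordsLimit), 1));
    }

    public static void main(String[] args) {
        // Worked example from the text: 1040 writes/s, 25% updates, 800-byte records.
        System.out.println(numberOfShards(1040, 0.25, 800)); // prints 2
    }
}
```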

Consider the following before using the formula to calculate the number of shards required with provisioned mode for Kinesis data streams:
+ This formula helps estimate the number of shards that will be required to accommodate your DynamoDB change data records. It doesn't represent the total number of shards needed in your Kinesis data stream, which might also include, for example, shards required to support additional Kinesis data stream consumers.
+ You may still experience read and write throughput exceptions in provisioned mode if you don't configure your data stream to handle your peak throughput. In this case, you must manually scale your data stream to accommodate your data traffic.
+ This formula takes into consideration the additional bloat that DynamoDB generates before streaming the change log data records to the Kinesis data stream.

To learn more about capacity modes in Kinesis Data Streams, see [Choosing the Data Stream Capacity Mode](https://docs.aws.amazon.com/streams/latest/dev/how-do-i-size-a-stream.html). To learn more about the pricing differences between capacity modes, see [Amazon Kinesis Data Streams pricing](https://aws.amazon.com/kinesis/data-streams/pricing/).

## Monitoring change data capture with Kinesis Data Streams
<a name="kds_using-shards-and-metrics.monitoring"></a>

DynamoDB provides several Amazon CloudWatch metrics to help you monitor the replication of change data capture to Kinesis. For a full list of CloudWatch metrics, see [DynamoDB Metrics and dimensions](metrics-dimensions.md).

To determine whether your stream has sufficient capacity, we recommend that you monitor the following items both during stream enabling and in production:
+ `ThrottledPutRecordCount`: The number of records that were throttled by your Kinesis data stream because of insufficient Kinesis data stream capacity. You might experience some throttling during exceptional usage peaks, but the `ThrottledPutRecordCount` should remain as low as possible. DynamoDB retries sending throttled records to the Kinesis data stream, but this might result in higher replication latency. 

  If you experience excessive and regular throttling, you might need to increase the number of Kinesis stream shards proportionally to the observed write throughput of your table. To learn more about determining the size of a Kinesis data stream, see [Determining the Initial Size of a Kinesis Data Stream](https://docs.aws.amazon.com/streams/latest/dev/amazon-kinesis-streams.html#how-do-i-size-a-stream).
+ `AgeOfOldestUnreplicatedRecord`: The elapsed time since the oldest item-level change yet to be replicated to the Kinesis data stream appeared in the DynamoDB table. Under normal operation, `AgeOfOldestUnreplicatedRecord` should be on the order of milliseconds. This number grows based on unsuccessful replication attempts caused by customer-controlled configuration choices.

  If the `AgeOfOldestUnreplicatedRecord` metric exceeds 168 hours, replication of item-level changes from the DynamoDB table to the Kinesis data stream is automatically disabled.

  Customer-controlled configuration choices that lead to unsuccessful replication attempts include an under-provisioned Kinesis data stream capacity that causes excessive throttling, and a manual update to your Kinesis data stream's access policies that prevents DynamoDB from adding data to the stream. To keep this metric as low as possible, ensure the right provisioning of your Kinesis data stream capacity, and make sure that DynamoDB's permissions are unchanged.
+ `FailedToReplicateRecordCount`: The number of records that DynamoDB failed to replicate to your Kinesis data stream. Certain items larger than 34 KB might expand into change data records larger than the 1 MB record size limit of Kinesis Data Streams. This size expansion occurs when these items include a large number of Boolean or empty attribute values. Boolean and empty attribute values are stored as 1 byte in DynamoDB, but expand up to 5 bytes when they're serialized using standard JSON for Kinesis Data Streams replication. DynamoDB can't replicate such change records to your Kinesis data stream. DynamoDB skips these change data records and automatically continues replicating subsequent records.


You can create Amazon CloudWatch alarms that send an Amazon Simple Notification Service (Amazon SNS) message for notification when any of the preceding metrics exceed a specific threshold. 

# Using IAM policies for Amazon Kinesis Data Streams and Amazon DynamoDB
<a name="kds_iam"></a>

The first time that you enable Amazon Kinesis Data Streams for Amazon DynamoDB, DynamoDB automatically creates an AWS Identity and Access Management (IAM) service-linked role for you. This role, `AWSServiceRoleForDynamoDBKinesisDataStreamsReplication`, allows DynamoDB to manage the replication of item-level changes to Kinesis Data Streams on your behalf. Don't delete this service-linked role.

For more information about service-linked roles, see [Using service-linked roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html) in the *IAM User Guide*.

**Note**  
DynamoDB does not support tag-based conditions for IAM policies.

To enable Amazon Kinesis Data Streams for Amazon DynamoDB, you must have the following permissions on the table:
+ `dynamodb:EnableKinesisStreamingDestination`
+ `kinesis:ListStreams`
+ `kinesis:PutRecords`
+ `kinesis:DescribeStream`

To describe Amazon Kinesis Data Streams for Amazon DynamoDB for a given DynamoDB table, you must have the following permissions on the table:
+ `dynamodb:DescribeKinesisStreamingDestination`
+ `kinesis:DescribeStreamSummary`
+ `kinesis:DescribeStream`

To disable Amazon Kinesis Data Streams for Amazon DynamoDB, you must have the following permissions on the table:
+ `dynamodb:DisableKinesisStreamingDestination`

To update Amazon Kinesis Data Streams for Amazon DynamoDB, you must have the following permissions on the table:
+ `dynamodb:UpdateKinesisStreamingDestination`

The following examples show how to use IAM policies to grant permissions for Amazon Kinesis Data Streams for Amazon DynamoDB.

## Example: Enable Amazon Kinesis Data Streams for Amazon DynamoDB
<a name="access-policy-kds-example1"></a>

The following IAM policy grants permissions to enable Amazon Kinesis Data Streams for Amazon DynamoDB for the `Music` table. It does not grant permissions to disable, update, or describe Kinesis Data Streams for DynamoDB for the `Music` table.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:CreateServiceLinkedRole",
            "Resource": "arn:aws:iam::*:role/aws-service-role/kinesisreplication.dynamodb.amazonaws.com/AWSServiceRoleForDynamoDBKinesisDataStreamsReplication",
            "Condition": {
                "StringLike": {
                    "iam:AWSServiceName": "kinesisreplication.dynamodb.amazonaws.com"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:EnableKinesisStreamingDestination"
            ],
            "Resource": "arn:aws:dynamodb:us-west-2:111122223333:table/Music"
        }
    ]
}
```

------

## Example: Update Amazon Kinesis Data Streams for Amazon DynamoDB
<a name="access-policy-kds-example2"></a>

The following IAM policy grants permissions to update Amazon Kinesis Data Streams for Amazon DynamoDB for the `Music` table. It does not grant permissions to enable, disable, or describe Amazon Kinesis Data Streams for Amazon DynamoDB for the `Music` table.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:UpdateKinesisStreamingDestination"
            ],
            "Resource": "arn:aws:dynamodb:us-west-2:111122223333:table/Music"
        }
    ]
}
```

------

## Example: Disable Amazon Kinesis Data Streams for Amazon DynamoDB
<a name="access-policy-kds-example2"></a>

The following IAM policy grants permissions to disable Amazon Kinesis Data Streams for Amazon DynamoDB for the `Music` table. It does not grant permissions to enable, update, or describe Amazon Kinesis Data Streams for Amazon DynamoDB for the `Music` table.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:DisableKinesisStreamingDestination"
            ],
            "Resource": "arn:aws:dynamodb:us-west-2:111122223333:table/Music"
        }
    ]
}
```

------

## Example: Selectively apply permissions for Amazon Kinesis Data Streams for Amazon DynamoDB based on resource
<a name="access-policy-kds-example3"></a>

The following IAM policy grants permissions to enable and describe Amazon Kinesis Data Streams for Amazon DynamoDB for the `Music` table, and denies permissions to disable Amazon Kinesis Data Streams for Amazon DynamoDB for the `Orders` table. 

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:EnableKinesisStreamingDestination",
                "dynamodb:DescribeKinesisStreamingDestination"
            ],
            "Resource": "arn:aws:dynamodb:us-west-2:111122223333:table/Music"
        },
        {
            "Effect": "Deny",
            "Action": [
                "dynamodb:DisableKinesisStreamingDestination"
            ],
            "Resource": "arn:aws:dynamodb:us-west-2:111122223333:table/Orders"
        }
    ]
}
```

------

## Using service-linked roles for Kinesis Data Streams for DynamoDB
<a name="kds-service-linked-roles"></a>

Amazon Kinesis Data Streams for Amazon DynamoDB uses AWS Identity and Access Management (IAM) [service-linked roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_terms-and-concepts.html#iam-term-service-linked-role). A service-linked role is a unique type of IAM role that is linked directly to Kinesis Data Streams for DynamoDB. Service-linked roles are predefined by Kinesis Data Streams for DynamoDB and include all the permissions that the service requires to call other AWS services on your behalf.

A service-linked role makes setting up Kinesis Data Streams for DynamoDB easier because you don’t have to manually add the necessary permissions. Kinesis Data Streams for DynamoDB defines the permissions of its service-linked roles, and unless defined otherwise, only Kinesis Data Streams for DynamoDB can assume its roles. The defined permissions include the trust policy and the permissions policy, and that permissions policy cannot be attached to any other IAM entity.

For information about other services that support service-linked roles, see [AWS Services That Work with IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html) and look for the services that have **Yes** in the **Service-Linked Role** column. Choose a **Yes** with a link to view the service-linked role documentation for that service.

### Service-linked role permissions for Kinesis Data Streams for DynamoDB
<a name="slr-permissions"></a>

Kinesis Data Streams for DynamoDB uses the service-linked role named **AWSServiceRoleForDynamoDBKinesisDataStreamsReplication**. The purpose of the service-linked role is to allow Amazon DynamoDB to manage the replication of item-level changes to Kinesis Data Streams on your behalf.

The `AWSServiceRoleForDynamoDBKinesisDataStreamsReplication` service-linked role trusts the following services to assume the role:
+ `kinesisreplication.dynamodb.amazonaws.com`

The role permissions policy allows Kinesis Data Streams for DynamoDB to complete the following actions on the specified resources:
+ Action: put records on, and describe, the Kinesis data stream
+ Action: generate data keys with AWS KMS, in order to put data on Kinesis streams that are encrypted using user-generated AWS KMS keys

For the exact contents of the policy document, see [DynamoDBKinesisReplicationServiceRolePolicy](https://console.aws.amazon.com/iam/home#policies/arn:aws:iam::aws:policy/aws-service-role/DynamoDBKinesisReplicationServiceRolePolicy).

You must configure permissions to allow an IAM entity (such as a user, group, or role) to create, edit, or delete a service-linked role. For more information, see [Service-Linked Role Permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/contributorinsights-service-linked-roles.html#service-linked-role-permissions) in the *IAM User Guide*.

### Creating a service-linked role for Kinesis Data Streams for DynamoDB
<a name="create-slr"></a>

You don't need to manually create a service-linked role. When you enable Kinesis Data Streams for DynamoDB in the AWS Management Console, the AWS CLI, or the AWS API, Kinesis Data Streams for DynamoDB creates the service-linked role for you. 

If you delete this service-linked role, and then need to create it again, you can use the same process to recreate the role in your account. When you enable Kinesis Data Streams for DynamoDB, Kinesis Data Streams for DynamoDB creates the service-linked role for you again. 

### Editing a service-linked role for Kinesis Data Streams for DynamoDB
<a name="edit-slr"></a>

Kinesis Data Streams for DynamoDB does not allow you to edit the `AWSServiceRoleForDynamoDBKinesisDataStreamsReplication` service-linked role. After you create a service-linked role, you cannot change the name of the role because various entities might reference the role. However, you can edit the description of the role using IAM. For more information, see [Editing a Service-Linked Role](https://docs.aws.amazon.com/IAM/latest/UserGuide/contributorinsights-service-linked-roles.html#edit-service-linked-role) in the *IAM User Guide*.

### Deleting a service-linked role for Kinesis Data Streams for DynamoDB
<a name="delete-slr"></a>

You can also use the IAM console, the AWS CLI, or the AWS API to manually delete the service-linked role. To do this, you must first manually clean up the resources for your service-linked role, and then manually delete the role.

**Note**  
If the Kinesis Data Streams for DynamoDB service is using the role when you try to delete the resources, then the deletion might fail. If that happens, wait for a few minutes and try the operation again.

**To manually delete the service-linked role using IAM**

Use the IAM console, the AWS CLI, or the AWS API to delete the `AWSServiceRoleForDynamoDBKinesisDataStreamsReplication` service-linked role. For more information, see [Deleting a Service-Linked Role](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html) in the *IAM User Guide*.

# Change data capture for DynamoDB Streams
<a name="Streams"></a>

DynamoDB Streams captures a time-ordered sequence of item-level modifications in any DynamoDB table and stores this information in a log for up to 24 hours. Applications can access this log and view the data items as they appeared before and after they were modified, in near-real time.

Encryption at rest encrypts the data in DynamoDB streams. For more information, see [DynamoDB encryption at rest](EncryptionAtRest.md).

A *DynamoDB stream* is an ordered flow of information about changes to items in a DynamoDB table. When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table.

Whenever an application creates, updates, or deletes items in the table, DynamoDB Streams writes a stream record with the primary key attributes of the items that were modified. A *stream record* contains information about a data modification to a single item in a DynamoDB table. You can configure the stream so that the stream records capture additional information, such as the "before" and "after" images of modified items.

DynamoDB Streams helps ensure the following:
+ Each stream record appears exactly once in the stream.
+ For each item that is modified in a DynamoDB table, the stream records appear in the same sequence as the actual modifications to the item.

DynamoDB Streams writes stream records in near-real time so that you can build applications that consume these streams and take action based on the contents.

**Topics**
+ [Endpoints for DynamoDB Streams](#Streams.Endpoints)
+ [Enabling a stream](#Streams.Enabling)
+ [Reading and processing a stream](#Streams.Processing)
+ [DynamoDB Streams and Time to Live](time-to-live-ttl-streams.md)
+ [Using the DynamoDB Streams Kinesis adapter to process stream records](Streams.KCLAdapter.md)
+ [DynamoDB Streams low-level API: Java example](Streams.LowLevel.Walkthrough.md)
+ [DynamoDB Streams and AWS Lambda triggers](Streams.Lambda.md)
+ [DynamoDB Streams and Apache Flink](StreamsApacheFlink.xml.md)

## Endpoints for DynamoDB Streams
<a name="Streams.Endpoints"></a>

AWS maintains separate endpoints for DynamoDB and DynamoDB Streams. To work with database tables and indexes, your application must access a DynamoDB endpoint. To read and process DynamoDB Streams records, your application must access a DynamoDB Streams endpoint in the same Region.

DynamoDB Streams offers two types of endpoints:
+ **IPv4-only endpoints**: Endpoints that follow the `streams.dynamodb.<region>.amazonaws.com` naming convention.
+ **Dual-stack endpoints**: Newer endpoints that are compatible with both IPv4 and IPv6 and follow the `streams-dynamodb.<region>.api.aws` naming convention.

**Note**  
For a complete list of DynamoDB and DynamoDB Streams Regions and endpoints, see [Regions and endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html) in the *AWS General Reference*.

The AWS SDKs provide separate clients for DynamoDB and DynamoDB Streams. Depending on your requirements, your application can access a DynamoDB endpoint, a DynamoDB Streams endpoint, or both at the same time. To connect to both endpoints, your application must instantiate two clients—one for DynamoDB and one for DynamoDB Streams.

## Enabling a stream
<a name="Streams.Enabling"></a>

You can enable a stream on a new table when you create it using the AWS CLI or one of the AWS SDKs. You can also enable or disable a stream on an existing table, or change the settings of a stream. DynamoDB Streams operates asynchronously, so there is no performance impact on a table if you enable a stream.

The easiest way to manage DynamoDB Streams is by using the AWS Management Console.

1. Sign in to the AWS Management Console and open the DynamoDB console at [https://console.aws.amazon.com/dynamodb/](https://console.aws.amazon.com/dynamodb/).

1. On the DynamoDB console dashboard, choose **Tables** and select an existing table.

1. Choose the **Exports and streams** tab.

1. In the **DynamoDB stream details** section, choose **Turn on**.

1. On the **Turn on DynamoDB stream** page, choose the information that will be written to the stream whenever the data in the table is modified:
   + **Key attributes only** — Only the key attributes of the modified item.
   + **New image** — The entire item, as it appears after it was modified.
   + **Old image** — The entire item, as it appeared before it was modified.
   + **New and old images** — Both the new and the old images of the item.

   When the settings are as you want them, choose **Turn on stream**.

1. (Optional) To disable an existing stream, choose **Turn off** under **DynamoDB stream details**.

You can also use the `CreateTable` or `UpdateTable` API operations to enable or modify a stream. The `StreamSpecification` parameter determines how the stream is configured:
+ `StreamEnabled` — Specifies whether a stream is enabled (`true`) or disabled (`false`) for the table.
+ `StreamViewType` — Specifies the information that will be written to the stream whenever data in the table is modified:
  + `KEYS_ONLY` — Only the key attributes of the modified item.
  + `NEW_IMAGE` — The entire item, as it appears after it was modified.
  + `OLD_IMAGE` — The entire item, as it appeared before it was modified.
  + `NEW_AND_OLD_IMAGES` — Both the new and the old images of the item.
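For example, an `UpdateTable` request that turns on a stream with both images carries a `StreamSpecification` shaped like the following (a sketch of the request parameters only, not a live API call; the table name is illustrative):

```python
update_table_params = {
    "TableName": "Music",
    "StreamSpecification": {
        "StreamEnabled": True,                    # turn the stream on
        "StreamViewType": "NEW_AND_OLD_IMAGES",   # write both images to the stream
    },
}

# An SDK client would pass these parameters to UpdateTable, for example:
# client.update_table(**update_table_params)
print(update_table_params["StreamSpecification"])
```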

You can enable or disable a stream at any time. However, you receive a `ValidationException` if you try to enable a stream on a table that already has a stream. You also receive a `ValidationException` if you try to disable a stream on a table that doesn't have a stream.

When you set `StreamEnabled` to `true`, DynamoDB creates a new stream with a unique stream descriptor assigned to it. If you disable and then re-enable a stream on the table, a new stream is created with a different stream descriptor.

Every stream is uniquely identified by an Amazon Resource Name (ARN). The following is an example ARN for a stream on a DynamoDB table named `TestTable`.

```
arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291
```

To determine the latest stream descriptor for a table, issue a DynamoDB `DescribeTable` request and look for the `LatestStreamArn` element in the response.
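The stream label (the timestamp suffix in the ARN) distinguishes streams created at different times on the same table. A minimal sketch of splitting the example ARN into its parts (the helper name is illustrative):

```python
def parse_stream_arn(stream_arn):
    """Split a DynamoDB stream ARN into its table name and stream label."""
    # arn:aws:dynamodb:<region>:<account>:table/<table>/stream/<label>
    resource = stream_arn.split(":", 5)[5]       # "table/<table>/stream/<label>"
    _, table_name, _, stream_label = resource.split("/", 3)
    return table_name, stream_label

print(parse_stream_arn(
    "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291"))
# → ('TestTable', '2015-05-11T21:21:33.291')
```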

**Note**  
It is not possible to edit the `StreamViewType` after a stream has been set up. If you need to change a stream's configuration after setup, you must disable the current stream and create a new one.

## Reading and processing a stream
<a name="Streams.Processing"></a>

To read and process a stream, your application must connect to a DynamoDB Streams endpoint and issue API requests.

A stream consists of *stream records*. Each stream record represents a single data modification in the DynamoDB table to which the stream belongs. Each stream record is assigned a sequence number, reflecting the order in which the record was published to the stream.

Stream records are organized into groups, or *shards*. Each shard acts as a container for multiple stream records, and contains information required for accessing and iterating through these records. The stream records within a shard are removed automatically after 24 hours.

Shards are ephemeral: They are created and deleted automatically, as needed. Any shard can also split into multiple new shards; this also occurs automatically. (It's also possible for a parent shard to have just one child shard.) A shard might split in response to high levels of write activity on its parent table, so that applications can process records from multiple shards in parallel.

If you disable a stream, any shards that are open will be closed. The data in the stream will continue to be readable for 24 hours.

Because shards have a lineage (parent and children), an application must always process a parent shard before it processes a child shard. This helps ensure that the stream records are also processed in the correct order. (If you use the DynamoDB Streams Kinesis Adapter, this is handled for you. Your application processes the shards and stream records in the correct order. It automatically handles new or expired shards, in addition to shards that split while the application is running. For more information, see [Using the DynamoDB Streams Kinesis adapter to process stream records](Streams.KCLAdapter.md).)
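If you consume shards directly rather than through the adapter, the parent-before-child rule amounts to a walk up each shard's lineage. A minimal sketch, assuming a simple map of shard ID to parent shard ID (loosely mirroring the `ShardId` and `ParentShardId` fields returned by `DescribeStream`):

```python
def lineage_order(shards):
    """Order shard IDs so that every parent appears before its children.

    `shards` maps shard ID -> parent shard ID (None for a root shard, or a
    parent that has already been trimmed from the stream).
    """
    ordered, seen = [], set()

    def visit(shard_id):
        if shard_id is None or shard_id in seen or shard_id not in shards:
            return
        visit(shards[shard_id])      # process the parent lineage first
        seen.add(shard_id)
        ordered.append(shard_id)

    for shard_id in shards:
        visit(shard_id)
    return ordered

# A root shard "s1" that split into "s2" and "s3"; "s2" later split into "s4".
print(lineage_order({"s1": None, "s2": "s1", "s3": "s1", "s4": "s2"}))
# → ['s1', 's2', 's3', 's4']
```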

The following diagram shows the relationship between a stream, shards in the stream, and stream records in the shards.

![\[DynamoDB Streams structure. Stream records that represent data modifications are organized into shards.\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/images/streams-terminology.png)


**Note**  
If you perform a `PutItem` or `UpdateItem` operation that does not change any data in an item, DynamoDB Streams does *not* write a stream record for that operation.

To access a stream and process the stream records within, you must do the following:
+ Determine the unique ARN of the stream that you want to access.
+ Determine which shards in the stream contain the stream records that you are interested in.
+ Access the shards and retrieve the stream records that you want.

**Note**  
No more than two processes should be reading from the same stream shard at the same time. Having more than two readers per shard can result in throttling.

The DynamoDB Streams API provides the following actions for use by application programs:
+ `[ListStreams](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_streams_ListStreams.html)` — Returns a list of stream descriptors for the current account and endpoint. You can optionally request just the stream descriptors for a particular table name.
+ `[DescribeStream](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_streams_DescribeStream.html)` — Returns information about a stream, including the current status of the stream, its Amazon Resource Name (ARN), the composition of its shards, and its corresponding DynamoDB table. You can optionally use the `ShardFilter` field to retrieve the existing child shards associated with a parent shard.
+ `[GetShardIterator](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_streams_GetShardIterator.html)` — Returns a *shard iterator*, which describes a location within a shard. You can request that the iterator provide access to the oldest point, the newest point, or a particular point in the stream.
+ `[GetRecords](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_streams_GetRecords.html)` — Returns the stream records from within a given shard. You must provide the shard iterator returned from a `GetShardIterator` request.

For complete descriptions of these API operations, including example requests and responses, see the [Amazon DynamoDB Streams API Reference](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Operations_Amazon_DynamoDB_Streams.html).

### Shard discovery
<a name="Streams.ShardDiscovery"></a>

As an Amazon DynamoDB Streams user, you have two ways to track and identify new shards:

**Polling the entire stream topology**  
Use the `DescribeStream` API to regularly poll the stream. This returns all shards in the stream, including any new shards that have been created. By comparing results over time, you can detect newly added shards.

**Discovering child shards**  
Use the `DescribeStream` API with the `ShardFilter` parameter to find a subset of shards. By specifying a parent shard in the request, DynamoDB Streams will return its immediate child shards. This approach is useful when you only need to track shard lineage without scanning the entire stream.   
Applications consuming data from DynamoDB Streams can efficiently transition from reading a closed shard to its child shard using this `ShardFilter` parameter, avoiding repeated calls to the `DescribeStream` API to retrieve and traverse the shard map for all closed and open shards. This helps to quickly discover child shards after a parent shard has been closed, making your stream processing applications more responsive and cost-effective.

Both methods help you keep track of your stream's evolving shard structure so that you don't miss data updates or shard changes.

### Data retention limit for DynamoDB Streams
<a name="Streams.DataRetention"></a>

All data in DynamoDB Streams is subject to a 24-hour lifetime. You can retrieve and analyze the last 24 hours of activity for any given table. However, data that is older than 24 hours is susceptible to trimming (removal) at any moment.

If you disable a stream on a table, the data in the stream continues to be readable for 24 hours. After this time, the data expires and the stream records are automatically deleted. There is no mechanism for manually deleting an existing stream; you must wait until the 24-hour retention limit expires, at which point all the stream records are deleted.

# DynamoDB Streams and Time to Live
<a name="time-to-live-ttl-streams"></a>

You can back up, or otherwise process, items that are deleted by [Time to Live](TTL.md) (TTL) by enabling Amazon DynamoDB Streams on the table and processing the stream records of the expired items. For more information, see [Reading and processing a stream](Streams.md#Streams.Processing).

The stream record contains a user identity field, `Records[<index>].userIdentity`.

Items that are deleted by the Time to Live process after expiration have the following fields:
+ `Records[<index>].userIdentity.type`

  `"Service"`
+ `Records[<index>].userIdentity.principalId`

  `"dynamodb.amazonaws.com"`

**Note**  
When you use TTL in a global table, the Region in which the TTL delete was performed has the `userIdentity` field set. This field isn't set in other Regions when the delete is replicated.

The following JSON shows the relevant portion of a single stream record.

```
"Records": [
    {
        ...

        "userIdentity": {
            "type": "Service",
            "principalId": "dynamodb.amazonaws.com"
        }

        ...

    }
]
```
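Given that record shape, a consumer can separate TTL expirations from ordinary deletes with a simple check (a sketch; the function name is illustrative, and `REMOVE` is the `eventName` value that stream records use for deletes):

```python
def is_ttl_delete(record):
    """Return True if a stream record is a delete performed by the TTL service."""
    user_identity = record.get("userIdentity") or {}
    return (
        record.get("eventName") == "REMOVE"
        and user_identity.get("type") == "Service"
        and user_identity.get("principalId") == "dynamodb.amazonaws.com"
    )

ttl_record = {
    "eventName": "REMOVE",
    "userIdentity": {"type": "Service", "principalId": "dynamodb.amazonaws.com"},
}
user_delete = {"eventName": "REMOVE"}  # an ordinary delete has no userIdentity
print(is_ttl_delete(ttl_record), is_ttl_delete(user_delete))  # → True False
```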

## Using DynamoDB Streams and Lambda to archive TTL deleted items
<a name="streams-archive-ttl-deleted-items"></a>

Combining [DynamoDB Time to Live (TTL)](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html), [DynamoDB Streams](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html), and [AWS Lambda](https://aws.amazon.com/lambda/) can help simplify archiving data, reduce DynamoDB storage costs, and reduce code complexity. Using Lambda as the stream consumer provides many advantages, most notably the cost reduction compared to other consumers such as the Kinesis Client Library (KCL). You aren't charged for `GetRecords` API calls on your DynamoDB stream when using Lambda to consume events, and Lambda can provide event filtering by identifying JSON patterns in a stream event. With event-pattern content filtering, you can define up to five different filters to control which events are sent to Lambda for processing. This helps reduce invocations of your Lambda functions, simplifies code, and reduces overall cost.

While DynamoDB Streams contains all data modifications, such as `Create`, `Modify`, and `Remove` actions, this can result in unwanted invocations of your archive Lambda function. For example, say you have a table with 2 million data modifications per hour flowing into the stream, but only 5 percent of these are item deletes that will expire through the TTL process and need to be archived. With [Lambda event source filters](https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventfiltering.html), the Lambda function is invoked only 100,000 times per hour. The result with event filtering is that you're charged only for the needed invocations instead of the 2 million invocations you would have without event filtering.

Event filtering is applied to the [Lambda event source mapping](https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventsourcemapping.html), which is a resource that reads from a chosen event source (the DynamoDB stream) and invokes a Lambda function. In the following diagram, you can see how a Time to Live deleted item is consumed by a Lambda function using streams and event filters.

![\[An item deleted through TTL process starts a Lambda function that uses streams and event filters.\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/images/streams-lambda-ttl.png)


### DynamoDB Time to Live event filter pattern
<a name="ttl-event-filter-pattern"></a>

Adding the following JSON to your event source mapping [filter criteria](https://docs.aws.amazon.com/lambda/latest/dg/API_FilterCriteria.html) allows invocation of your Lambda function only for TTL deleted items:

```
{
    "Filters": [
        {
            "Pattern": { "userIdentity": { "type": ["Service"], "principalId": ["dynamodb.amazonaws.com"] } }
        }
    ]
}
```

### Create an AWS Lambda event source mapping
<a name="create-event-source-mapping"></a>

Use the following code snippets to create a filtered event source mapping that you can connect to a table's DynamoDB stream. Each code block includes the event filter pattern.

------
#### [ AWS CLI ]

```
aws lambda create-event-source-mapping \
--event-source-arn 'arn:aws:dynamodb:eu-west-1:012345678910:table/test/stream/2021-12-10T00:00:00.000' \
--batch-size 10 \
--enabled \
--function-name test_func \
--starting-position LATEST \
--filter-criteria '{"Filters": [{"Pattern": "{\"userIdentity\":{\"type\":[\"Service\"],\"principalId\":[\"dynamodb.amazonaws.com\"]}}"}]}'
```

------
#### [ Java ]

```
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.lambda.LambdaClient;
import software.amazon.awssdk.services.lambda.model.CreateEventSourceMappingRequest;
import software.amazon.awssdk.services.lambda.model.CreateEventSourceMappingResponse;
import software.amazon.awssdk.services.lambda.model.Filter;
import software.amazon.awssdk.services.lambda.model.FilterCriteria;
import software.amazon.awssdk.services.lambda.model.ServiceException;

LambdaClient client = LambdaClient.builder()
        .region(Region.EU_WEST_1)
        .build();

Filter userIdentity = Filter.builder()
        .pattern("{\"userIdentity\":{\"type\":[\"Service\"],\"principalId\":[\"dynamodb.amazonaws.com\"]}}")
        .build();

FilterCriteria filterCriteria = FilterCriteria.builder()
        .filters(userIdentity)
        .build();

CreateEventSourceMappingRequest mappingRequest = CreateEventSourceMappingRequest.builder()
        .eventSourceArn("arn:aws:dynamodb:eu-west-1:012345678910:table/test/stream/2021-12-10T00:00:00.000")
        .batchSize(10)
        .enabled(Boolean.TRUE)
        .functionName("test_func")
        .startingPosition("LATEST")
        .filterCriteria(filterCriteria)
        .build();

try {
    CreateEventSourceMappingResponse eventSourceMappingResponse = client.createEventSourceMapping(mappingRequest);
    System.out.println("The mapping ARN is " + eventSourceMappingResponse.eventSourceArn());
} catch (ServiceException e) {
    System.out.println(e.getMessage());
}
```

------
#### [ Node ]

```
const client = new LambdaClient({ region: "eu-west-1" });

const input = {
    EventSourceArn: "arn:aws:dynamodb:eu-west-1:012345678910:table/test/stream/2021-12-10T00:00:00.000",
    BatchSize: 10,
    Enabled: true,
    FunctionName: "test_func",
    StartingPosition: "LATEST",
    FilterCriteria: { "Filters": [{ "Pattern": "{\"userIdentity\":{\"type\":[\"Service\"],\"principalId\":[\"dynamodb.amazonaws.com\"]}}" }] }
}

const command = new CreateEventSourceMappingCommand(input);

try {
    const results = await client.send(command);
    console.log(results);
} catch (err) {
    console.error(err);
}
```

------
#### [ Python ]

```
session = boto3.session.Session(region_name = 'eu-west-1')
client = session.client('lambda')

try:
    response = client.create_event_source_mapping(
        EventSourceArn='arn:aws:dynamodb:eu-west-1:012345678910:table/test/stream/2021-12-10T00:00:00.000',
        BatchSize=10,
        Enabled=True,
        FunctionName='test_func',
        StartingPosition='LATEST',
        FilterCriteria={
            'Filters': [
                {
                    'Pattern': "{\"userIdentity\":{\"type\":[\"Service\"],\"principalId\":[\"dynamodb.amazonaws.com\"]}}"
                },
            ]
        }
    )
    print(response)
except Exception as e:
    print(e)
```
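Note that in each of the preceding examples, the `Pattern` value is a JSON object serialized as a string, which is why the inner quotes are escaped. Rather than hand-escaping, you can build the filter criteria programmatically. For example, in Python:

```python
import json

# Build the TTL filter pattern as a plain dict, then serialize it.
pattern = {
    "userIdentity": {
        "type": ["Service"],
        "principalId": ["dynamodb.amazonaws.com"],
    }
}

# The Pattern value must be a JSON-encoded string, not a nested object.
filter_criteria = {"Filters": [{"Pattern": json.dumps(pattern)}]}

# The serialized string round-trips back to the original pattern.
assert json.loads(filter_criteria["Filters"][0]["Pattern"]) == pattern
```

You can then pass `filter_criteria` as the `FilterCriteria` argument to `create_event_source_mapping`.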

------
#### [ JSON ]

```
{
  "userIdentity": {
     "type": ["Service"],
     "principalId": ["dynamodb.amazonaws.com"]
   }
}
```

------

# Using the DynamoDB Streams Kinesis adapter to process stream records
<a name="Streams.KCLAdapter"></a>

Using the Amazon Kinesis Adapter is the recommended way to consume streams from Amazon DynamoDB. The DynamoDB Streams API is intentionally similar to that of Kinesis Data Streams. In both services, data streams are composed of shards, which are containers for stream records. Both services' APIs contain `ListStreams`, `DescribeStream`, `GetRecords`, and `GetShardIterator` operations. (Although these DynamoDB Streams actions are similar to their counterparts in Kinesis Data Streams, they are not 100 percent identical.)

As a DynamoDB Streams user, you can use the design patterns found within the KCL to process DynamoDB Streams shards and stream records. To do this, you use the DynamoDB Streams Kinesis Adapter. The Kinesis Adapter implements the Kinesis Data Streams interface so that the KCL can be used for consuming and processing records from DynamoDB Streams. For instructions on how to set up and install the DynamoDB Streams Kinesis Adapter, see the [GitHub repository](https://github.com/awslabs/dynamodb-streams-kinesis-adapter).

You can write applications for Kinesis Data Streams using the Kinesis Client Library (KCL). The KCL simplifies coding by providing useful abstractions above the low-level Kinesis Data Streams API. For more information about the KCL, see the [Developing consumers using the Kinesis client library](https://docs.aws.amazon.com/kinesis/latest/dev/developing-consumers-with-kcl.html) in the *Amazon Kinesis Data Streams Developer Guide*.

We recommend using KCL version 3.x with the AWS SDK for Java v2.x. The current DynamoDB Streams Kinesis Adapter version 1.x, which uses the AWS SDK for Java v1.x, will continue to be fully supported during the transitional period in alignment with the [AWS SDKs and Tools maintenance policy](https://docs.aws.amazon.com/sdkref/latest/guide/maint-policy.html).

**Note**  
Amazon Kinesis Client Library (KCL) versions 1.x and 2.x are outdated. KCL 1.x will reach end-of-support on January 30, 2026. We strongly recommend that you migrate your KCL applications using version 1.x to the latest KCL version before January 30, 2026. To find the latest KCL version, see the [Amazon Kinesis Client Library](https://github.com/awslabs/amazon-kinesis-client) page on GitHub. For information about the latest KCL versions, see [Use Kinesis Client Library](https://docs.aws.amazon.com/streams/latest/dev/kcl.html). For information about migrating from KCL 1.x to KCL 3.x, see [Migrating from KCL 1.x to KCL 3.x](#streams-migrating-kcl).

The following diagram shows how these libraries interact with one another.

![\[Interaction between DynamoDB Streams, Kinesis Data Streams, and KCL for processing DynamoDB Streams records.\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/images/streams-kinesis-adapter.png)


With the DynamoDB Streams Kinesis Adapter in place, you can begin developing against the KCL interface, with the API calls seamlessly directed at the DynamoDB Streams endpoint.

When your application starts, it calls the KCL to instantiate a worker. You must provide the worker with configuration information for the application, such as the stream descriptor and AWS credentials, and the name of a record processor class that you provide. As it runs the code in the record processor, the worker performs the following tasks:
+ Connects to the stream
+ Enumerates the shards within the stream
+ Checks and enumerates child shards of a closed parent shard within the stream
+ Coordinates shard associations with other workers (if any)
+ Instantiates a record processor for every shard it manages
+ Pulls records from the stream
+ Scales GetRecords API calling rate during high throughput (if catch-up mode is configured)
+ Pushes the records to the corresponding record processor
+ Checkpoints processed records
+ Balances shard-worker associations when the worker instance count changes
+ Balances shard-worker associations when shards are split

The KCL adapter supports catch-up mode, an automatic calling rate adjustment feature for handling temporary throughput increases. When stream processing lag exceeds a configurable threshold (default one minute), catch-up mode scales GetRecords API calling frequency by a configurable value (default 3x) to retrieve records faster, then returns to normal once the lag drops. This is valuable during high-throughput periods where DynamoDB write activity can overwhelm consumers using default polling rates. Catch-up mode can be enabled through the `catchupEnabled` configuration parameter (default false).
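The catch-up mode arithmetic can be sketched as follows. This is an illustrative model only, using the documented defaults (one-minute lag threshold, 3x multiplier); the adapter's internal implementation may differ:

```python
def effective_polls_per_second(base_rate, lag_seconds,
                               threshold_seconds=60, multiplier=3):
    """Illustrative catch-up mode: scale the GetRecords calling rate
    while processing lag exceeds the threshold, then fall back to the
    normal rate once the lag drops below it."""
    if lag_seconds > threshold_seconds:
        return base_rate * multiplier
    return base_rate

print(effective_polls_per_second(1.0, lag_seconds=120))  # 3.0 (catching up)
print(effective_polls_per_second(1.0, lag_seconds=10))   # 1.0 (normal)
```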

**Note**  
For a description of the KCL concepts listed here, see [Developing consumers using the Kinesis client library](https://docs.aws.amazon.com/kinesis/latest/dev/developing-consumers-with-kcl.html) in the *Amazon Kinesis Data Streams Developer Guide*.  
For more information on using streams with AWS Lambda, see [DynamoDB Streams and AWS Lambda triggers](Streams.Lambda.md).

# Migrating from KCL 1.x to KCL 3.x
<a name="streams-migrating-kcl"></a>

## Overview
<a name="migrating-kcl-overview"></a>

This guide provides instructions for migrating your consumer application from KCL 1.x to KCL 3.x. Due to architectural differences between KCL 1.x and KCL 3.x, migration requires updating several components to ensure compatibility.

KCL 1.x uses different classes and interfaces than KCL 3.x. To migrate, you must first update the record processor, record processor factory, and worker classes to the KCL 3.x compatible format, following the migration steps below.

## Migration steps
<a name="migration-steps"></a>

**Topics**
+ [Step 1: Migrate the record processor](#step1-record-processor)
+ [Step 2: Migrate the record processor factory](#step2-record-processor-factory)
+ [Step 3: Migrate the worker](#step3-worker-migration)
+ [Step 4: KCL 3.x configuration overview and recommendations](#step4-configuration-migration)
+ [Step 5: Migrate from KCL 2.x to KCL 3.x](#step5-kcl2-to-kcl3)

### Step 1: Migrate the record processor
<a name="step1-record-processor"></a>

The following example shows a record processor implemented for KCL 1.x DynamoDB Streams Kinesis adapter:

```
package com.amazonaws.kcl;

import java.nio.charset.Charset;

import com.amazonaws.services.dynamodbv2.streamsadapter.model.RecordAdapter;
import com.amazonaws.services.kinesis.clientlibrary.exceptions.InvalidStateException;
import com.amazonaws.services.kinesis.clientlibrary.exceptions.ShutdownException;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessorCheckpointer;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessor;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IShutdownNotificationAware;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.ShutdownReason;
import com.amazonaws.services.kinesis.clientlibrary.types.InitializationInput;
import com.amazonaws.services.kinesis.clientlibrary.types.ProcessRecordsInput;
import com.amazonaws.services.kinesis.clientlibrary.types.ShutdownInput;
import com.amazonaws.services.kinesis.model.Record;

public class StreamsRecordProcessor implements IRecordProcessor, IShutdownNotificationAware {
    @Override
    public void initialize(InitializationInput initializationInput) {
        //
        // Setup record processor
        //
    }

    @Override
    public void processRecords(ProcessRecordsInput processRecordsInput) {
        for (Record record : processRecordsInput.getRecords()) {
            String data = new String(record.getData().array(), Charset.forName("UTF-8"));
            System.out.println(data);
            if (record instanceof RecordAdapter) {
                // record processing and checkpointing logic
            }
        }
    }

    @Override
    public void shutdown(ShutdownInput shutdownInput) {
        if (shutdownInput.getShutdownReason() == ShutdownReason.TERMINATE) {
            try {
                shutdownInput.getCheckpointer().checkpoint();
            } catch (ShutdownException | InvalidStateException e) {
                throw new RuntimeException(e);
            }
        }
    }

    @Override
    public void shutdownRequested(IRecordProcessorCheckpointer checkpointer) {
        try {
            checkpointer.checkpoint();
        } catch (ShutdownException | InvalidStateException e) {
            //
            // Swallow exception
            //
            e.printStackTrace();
        }
    }
}
```

**To migrate the RecordProcessor class**

1. Change the interfaces from `com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessor` and `com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IShutdownNotificationAware` to `com.amazonaws.services.dynamodbv2.streamsadapter.processor.DynamoDBStreamsShardRecordProcessor` as follows:

   ```
   // import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessor;
   // import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IShutdownNotificationAware;
   
   import com.amazonaws.services.dynamodbv2.streamsadapter.processor.DynamoDBStreamsShardRecordProcessor;
   ```

1. Update import statements for the `initialize` and `processRecords` methods:

   ```
   // import com.amazonaws.services.kinesis.clientlibrary.types.InitializationInput;
   import software.amazon.kinesis.lifecycle.events.InitializationInput;
   
   // import com.amazonaws.services.kinesis.clientlibrary.types.ProcessRecordsInput;
   import com.amazonaws.services.dynamodbv2.streamsadapter.model.DynamoDBStreamsProcessRecordsInput;
   ```

1. Replace the `shutdownRequested` method with the following new methods: `leaseLost`, `shardEnded`, and `shutdownRequested`.

   ```
   //    @Override
   //    public void shutdownRequested(IRecordProcessorCheckpointer checkpointer) {
   //        //
   //        // This is moved to shardEnded(...) and shutdownRequested(ShutdownRequestedInput)
   //        //
   //        try {
   //            checkpointer.checkpoint();
   //        } catch (ShutdownException | InvalidStateException e) {
   //            //
   //            // Swallow exception
   //            //
   //            e.printStackTrace();
   //        }
   //    }
   
       @Override
       public void leaseLost(LeaseLostInput leaseLostInput) {
   
       }
   
       @Override
       public void shardEnded(ShardEndedInput shardEndedInput) {
           try {
               shardEndedInput.checkpointer().checkpoint();
           } catch (ShutdownException | InvalidStateException e) {
               //
               // Swallow the exception
               //
               e.printStackTrace();
           }
       }
   
       @Override
       public void shutdownRequested(ShutdownRequestedInput shutdownRequestedInput) {
           try {
               shutdownRequestedInput.checkpointer().checkpoint();
           } catch (ShutdownException | InvalidStateException e) {
               //
               // Swallow the exception
               //
               e.printStackTrace();
           }
       }
   ```

The following is the updated version of the record processor class:

```
package com.amazonaws.codesamples;

import software.amazon.kinesis.exceptions.InvalidStateException;
import software.amazon.kinesis.exceptions.ShutdownException;
import software.amazon.kinesis.lifecycle.events.InitializationInput;
import software.amazon.kinesis.lifecycle.events.LeaseLostInput;
import com.amazonaws.services.dynamodbv2.streamsadapter.model.DynamoDBStreamsProcessRecordsInput;
import software.amazon.kinesis.lifecycle.events.ShardEndedInput;
import software.amazon.kinesis.lifecycle.events.ShutdownRequestedInput;
import com.amazonaws.services.dynamodbv2.streamsadapter.processor.DynamoDBStreamsShardRecordProcessor;
import com.amazonaws.services.dynamodbv2.streamsadapter.adapter.DynamoDBStreamsKinesisClientRecord;
import software.amazon.awssdk.services.dynamodb.model.Record;

public class StreamsRecordProcessor implements DynamoDBStreamsShardRecordProcessor {

    @Override
    public void initialize(InitializationInput initializationInput) {
        
    }

    @Override
    public void processRecords(DynamoDBStreamsProcessRecordsInput processRecordsInput) {
        for (DynamoDBStreamsKinesisClientRecord record : processRecordsInput.records()) {
            Record ddbRecord = record.getRecord();
            // processing and checkpointing logic for the ddbRecord
        }
    }

    @Override
    public void leaseLost(LeaseLostInput leaseLostInput) {
        
    }

    @Override
    public void shardEnded(ShardEndedInput shardEndedInput) {
        try {
            shardEndedInput.checkpointer().checkpoint();
        } catch (ShutdownException | InvalidStateException e) {
            //
            // Swallow the exception
            //
            e.printStackTrace();
        }
    }

    @Override
    public void shutdownRequested(ShutdownRequestedInput shutdownRequestedInput) {
        try {
            shutdownRequestedInput.checkpointer().checkpoint();
        } catch (ShutdownException | InvalidStateException e) {
            //
            // Swallow the exception
            //
            e.printStackTrace();
        }
    }
}
```

**Note**  
The DynamoDB Streams Kinesis Adapter now uses the SDK for Java v2.x (SDKv2) record model. In SDKv2, complex `AttributeValue` objects (`BS`, `NS`, `M`, `L`, `SS`) never return null. Use the `hasBs()`, `hasNs()`, `hasM()`, `hasL()`, and `hasSs()` methods to verify whether these values exist.

### Step 2: Migrate the record processor factory
<a name="step2-record-processor-factory"></a>

The record processor factory is responsible for creating record processors when a lease is acquired. The following is an example of a KCL 1.x factory:

```
package com.amazonaws.codesamples;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessor;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessorFactory;

public class StreamsRecordProcessorFactory implements IRecordProcessorFactory {
    private final AmazonDynamoDB dynamoDBClient;
    private final String tableName;

    public StreamsRecordProcessorFactory(AmazonDynamoDB dynamoDBClient, String tableName) {
        this.dynamoDBClient = dynamoDBClient;
        this.tableName = tableName;
    }

    @Override
    public IRecordProcessor createProcessor() {
        return new StreamsRecordProcessor(dynamoDBClient, tableName);
    }
}
```

**To migrate the `RecordProcessorFactory`**
+ Change the implemented interface from `com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessorFactory` to `software.amazon.kinesis.processor.ShardRecordProcessorFactory`, as follows:

  ```
  // import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessor;
  import software.amazon.kinesis.processor.ShardRecordProcessor;
  
  // import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessorFactory;
  import software.amazon.kinesis.processor.ShardRecordProcessorFactory;
  
  // public class TestRecordProcessorFactory implements IRecordProcessorFactory {
  public class StreamsRecordProcessorFactory implements ShardRecordProcessorFactory {
  
  // Also change the return signature for createProcessor:
  
  // public IRecordProcessor createProcessor() {
  public ShardRecordProcessor shardRecordProcessor() {
  ```

The following is an example of the record processor factory in 3.0:

```
package com.amazonaws.codesamples;

import software.amazon.kinesis.processor.ShardRecordProcessor;
import software.amazon.kinesis.processor.ShardRecordProcessorFactory;

public class StreamsRecordProcessorFactory implements ShardRecordProcessorFactory {

    @Override
    public ShardRecordProcessor shardRecordProcessor() {
        return new StreamsRecordProcessor();
    }
}
```

### Step 3: Migrate the worker
<a name="step3-worker-migration"></a>

In version 3.0 of the KCL, a new class, called **Scheduler**, replaces the **Worker** class. The following is an example of a KCL 1.x worker:

```
final KinesisClientLibConfiguration workerConfig = new KinesisClientLibConfiguration(...);
final IRecordProcessorFactory recordProcessorFactory = new StreamsRecordProcessorFactory();
final Worker worker = StreamsWorkerFactory.createDynamoDbStreamsWorker(
        recordProcessorFactory,
        workerConfig,
        adapterClient,
        amazonDynamoDB,
        amazonCloudWatchClient);
```

**To migrate the worker**

1. Change the `import` statement for the `Worker` class to the import statements for the `Scheduler` and `ConfigsBuilder` classes.

   ```
   // import com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker;
   import software.amazon.kinesis.coordinator.Scheduler;
   import software.amazon.kinesis.common.ConfigsBuilder;
   ```

1. Import `StreamTracker` and change import of `StreamsWorkerFactory` to `StreamsSchedulerFactory`.

   ```
   import software.amazon.kinesis.processor.StreamTracker;
   // import software.amazon.dynamodb.streamsadapter.StreamsWorkerFactory;
   import software.amazon.dynamodb.streamsadapter.StreamsSchedulerFactory;
   ```

1. Choose the position from which to start the application. It can be `TRIM_HORIZON` or `LATEST`.

   ```
   import software.amazon.kinesis.common.InitialPositionInStream;
   import software.amazon.kinesis.common.InitialPositionInStreamExtended;
   ```

1. Create a `StreamTracker` instance.

   ```
   StreamTracker streamTracker = StreamsSchedulerFactory.createSingleStreamTracker(
           streamArn,
           InitialPositionInStreamExtended.newInitialPosition(InitialPositionInStream.TRIM_HORIZON)
   );
   ```

1. Create the `AmazonDynamoDBStreamsAdapterClient` object.

   ```
   import software.amazon.dynamodb.streamsadapter.AmazonDynamoDBStreamsAdapterClient; 
   import software.amazon.awssdk.auth.credentials.AwsCredentialsProvider;
   import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
   
   ...
   
   AwsCredentialsProvider credentialsProvider = DefaultCredentialsProvider.create();
   
   AmazonDynamoDBStreamsAdapterClient adapterClient = new AmazonDynamoDBStreamsAdapterClient(
           credentialsProvider, awsRegion);
   ```

1. Create the `ConfigsBuilder` object.

   ```
   import software.amazon.kinesis.common.ConfigsBuilder;
   
   ...
   ConfigsBuilder configsBuilder = new ConfigsBuilder(
                   streamTracker,
                   applicationName,
                   adapterClient,
                   dynamoDbAsyncClient,
                   cloudWatchAsyncClient,
                   UUID.randomUUID().toString(),
                   new StreamsRecordProcessorFactory());
   ```

1. Create the `Scheduler` using `ConfigsBuilder` as shown in the following example:

   ```
   import java.util.UUID;
   
   import software.amazon.awssdk.regions.Region;
   import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClient;
   import software.amazon.awssdk.services.cloudwatch.CloudWatchAsyncClient;
   import software.amazon.awssdk.services.kinesis.KinesisAsyncClient;
   
   import software.amazon.kinesis.common.KinesisClientUtil;
   import software.amazon.kinesis.coordinator.Scheduler;
   
   ...
   
                   
   DynamoDbAsyncClient dynamoClient = DynamoDbAsyncClient.builder().region(region).build();
   CloudWatchAsyncClient cloudWatchClient = CloudWatchAsyncClient.builder().region(region).build();
   
                   
   DynamoDBStreamsPollingConfig pollingConfig = new DynamoDBStreamsPollingConfig(adapterClient);
   pollingConfig.idleTimeBetweenReadsInMillis(idleTimeBetweenReadsInMillis);
   
   // Use ConfigsBuilder to configure settings
   RetrievalConfig retrievalConfig = configsBuilder.retrievalConfig();
   retrievalConfig.retrievalSpecificConfig(pollingConfig);
   
   CoordinatorConfig coordinatorConfig = configsBuilder.coordinatorConfig();
   coordinatorConfig.clientVersionConfig(CoordinatorConfig.ClientVersionConfig.CLIENT_VERSION_CONFIG_COMPATIBLE_WITH_2X);
                   
   Scheduler scheduler = StreamsSchedulerFactory.createScheduler(
                   configsBuilder.checkpointConfig(),
                   coordinatorConfig,
                   configsBuilder.leaseManagementConfig(),
                   configsBuilder.lifecycleConfig(),
                   configsBuilder.metricsConfig(),
                   configsBuilder.processorConfig(),
                   retrievalConfig,
                   adapterClient
           );
   ```

**Important**  
The `CLIENT_VERSION_CONFIG_COMPATIBLE_WITH_2X` setting maintains compatibility between DynamoDB Streams Kinesis Adapter for KCL v3 and KCL v1, not between KCL v2 and v3.

### Step 4: KCL 3.x configuration overview and recommendations
<a name="step4-configuration-migration"></a>

For a detailed description of the configurations introduced after KCL 1.x that are relevant in KCL 3.x, see [KCL configurations](https://docs.aws.amazon.com//streams/latest/dev/kcl-configuration.html) and [KCL migration client configuration](https://docs.aws.amazon.com//streams/latest/dev/kcl-migration.html#client-configuration).

**Important**  
Instead of directly creating objects of `checkpointConfig`, `coordinatorConfig`, `leaseManagementConfig`, `metricsConfig`, `processorConfig` and `retrievalConfig`, we recommend using `ConfigsBuilder` to set configurations in KCL 3.x and later versions to avoid Scheduler initialization issues. `ConfigsBuilder` provides a more flexible and maintainable way to configure your KCL application.

#### Configurations with updated default values in KCL 3.x
<a name="kcl3-configuration-overview"></a>

`billingMode`  
In KCL version 1.x, the default value for `billingMode` is set to `PROVISIONED`. However, with KCL version 3.x, the default `billingMode` is `PAY_PER_REQUEST` (on-demand mode). We recommend that you use the on-demand capacity mode for your lease table to automatically adjust the capacity based on your usage. For guidance on using provisioned capacity for your lease tables, see [Best practices for the lease table with provisioned capacity mode](https://docs.aws.amazon.com//streams/latest/dev/kcl-migration-lease-table.html).

`idleTimeBetweenReadsInMillis`  
In KCL version 1.x, the default value for `idleTimeBetweenReadsInMillis` is 1,000 (1 second). KCL version 3.x sets the default value for `idleTimeBetweenReadsInMillis` to 1,500 (1.5 seconds), but the Amazon DynamoDB Streams Kinesis Adapter overrides the default value to 1,000 (1 second).

#### New configurations in KCL 3.x
<a name="kcl3-new-configs"></a>

`leaseAssignmentIntervalMillis`  
This configuration defines the lease assignment interval. The time before newly discovered shards begin processing is 1.5 × `leaseAssignmentIntervalMillis`; if this setting isn't explicitly configured, that time defaults to 1.5 × `failoverTimeMillis`. Processing new shards involves scanning the lease table and querying a global secondary index (GSI) on the lease table. Lowering `leaseAssignmentIntervalMillis` increases the frequency of these scan and query operations, resulting in higher DynamoDB costs. We recommend setting this value to 2000 (2 seconds) to minimize the delay in processing new shards.
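As a quick illustration of this arithmetic (the `failoverTimeMillis` default of 10,000 ms used below is an assumption for the example):

```python
def new_shard_processing_delay_ms(lease_assignment_interval_ms=None,
                                  failover_time_ms=10_000):
    """Approximate time before newly discovered shards begin processing:
    1.5 x leaseAssignmentIntervalMillis, falling back to
    1.5 x failoverTimeMillis when the former isn't configured.
    The failoverTimeMillis default here is illustrative only."""
    interval = lease_assignment_interval_ms or failover_time_ms
    return 1.5 * interval

# With the recommended setting of 2000 ms, new shards start
# processing after roughly 3 seconds.
print(new_shard_processing_delay_ms(lease_assignment_interval_ms=2000))  # 3000.0
```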

`shardConsumerDispatchPollIntervalMillis`  
This configuration defines the interval between successive polls by the shard consumer to trigger state transitions. In KCL version 1.x, this behavior was controlled by the `idleTimeInMillis` parameter, which was not exposed as a configurable setting. With KCL version 3.x, we recommend setting this config to match the value used for `idleTimeInMillis` in your KCL version 1.x setup.

### Step 5: Migrate from KCL 2.x to KCL 3.x
<a name="step5-kcl2-to-kcl3"></a>

To ensure a smooth transition and compatibility with the latest Kinesis Client Library (KCL) version, follow steps 5-8 in the migration guide's instructions for [upgrading from KCL 2.x to KCL 3.x](https://docs.aws.amazon.com//streams/latest/dev/kcl-migration-from-2-3.html#kcl-migration-from-2-3-worker-metrics).

For common KCL 3.x troubleshooting issues, see [Troubleshooting KCL consumer applications](https://docs.aws.amazon.com//streams/latest/dev/troubleshooting-consumers.html).

# Roll back to the previous KCL version
<a name="kcl-migration-rollback"></a>

This topic explains how to roll back your consumer application to the previous KCL version. The roll-back process consists of two steps:

1. Run the [KCL Migration Tool](https://github.com/awslabs/amazon-kinesis-client/blob/master/amazon-kinesis-client/scripts/KclMigrationTool.py).

1. Redeploy previous KCL version code.

## Step 1: Run the KCL Migration Tool
<a name="kcl-migration-rollback-step1"></a>

When you need to roll back to the previous KCL version, you must run the KCL Migration Tool. The tool performs two important tasks:
+ It removes the worker metrics metadata table and the global secondary index on the lease table in DynamoDB. These artifacts are created by KCL 3.x but aren't needed when you roll back to the previous version.
+ It makes all workers run in a mode compatible with KCL 1.x and start using the load balancing algorithm used in previous KCL versions. If you have issues with the new load balancing algorithm in KCL 3.x, this will mitigate the issue immediately.

**Important**  
The coordinator state table in DynamoDB must exist and must not be deleted during the migration, rollback, and rollforward process.

**Note**  
It's important that all workers in your consumer application use the same load balancing algorithm at a given time. The KCL Migration Tool makes sure that all workers in your KCL 3.x consumer application switch to the KCL 1.x compatible mode so that all workers run the same load balancing algorithm during the application rollback to the previous KCL version.

You can download the [KCL Migration Tool](https://github.com/awslabs/amazon-kinesis-client/blob/master/amazon-kinesis-client/scripts/KclMigrationTool.py) in the scripts directory of the [KCL GitHub repository](https://github.com/awslabs/amazon-kinesis-client/tree/master). Run the script from a worker or host with appropriate permissions to write to the coordinator state table, worker metrics table, and lease table. Ensure the appropriate [IAM permissions](https://docs.aws.amazon.com/streams/latest/dev/kcl-iam-permissions.html) are configured for KCL consumer applications. Run the script only once per KCL application using the specified command:

```
python3 ./KclMigrationTool.py --region region --mode rollback [--application_name applicationName] [--lease_table_name leaseTableName] [--coordinator_state_table_name coordinatorStateTableName] [--worker_metrics_table_name workerMetricsTableName]
```

### Parameters
<a name="kcl-migration-rollback-parameters"></a>

`--region`  
Replace *region* with your AWS Region.

`--application_name`  
This parameter is required if you're using default names for your DynamoDB metadata tables (lease table, coordinator state table, and worker metrics table). If you have specified custom names for these tables, you can omit this parameter. Replace *applicationName* with your actual KCL application name. The tool uses this name to derive the default table names if custom names are not provided.

`--lease_table_name`  
This parameter is needed when you have set a custom name for the lease table in your KCL configuration. If you're using the default table name, you can omit this parameter. Replace *leaseTableName* with the custom table name you specified for your lease table.

`--coordinator_state_table_name`  
This parameter is needed when you have set a custom name for the coordinator state table in your KCL configuration. If you're using the default table name, you can omit this parameter. Replace *coordinatorStateTableName* with the custom table name you specified for your coordinator state table.

`--worker_metrics_table_name`  
This parameter is needed when you have set a custom name for the worker metrics table in your KCL configuration. If you're using the default table name, you can omit this parameter. Replace *workerMetricsTableName* with the custom table name you specified for your worker metrics table.

## Step 2: Redeploy the code with the previous KCL version
<a name="kcl-migration-rollback-step2"></a>

**Important**  
Any mention of version 2.x in the output generated by the KCL Migration Tool should be interpreted as referring to KCL version 1.x. Running the script does not perform a complete rollback; it only switches the load balancing algorithm to the one used in KCL version 1.x.

After running the KCL Migration Tool for a rollback, you'll see one of these messages:

Message 1  
"Rollback completed. Your application was running 2x compatible functionality. Please rollback to your previous application binaries by deploying the code with your previous KCL version."  
**Required action:** This means that your workers were running in the KCL 1.x compatible mode. Redeploy the code with the previous KCL version to your workers.

Message 2  
"Rollback completed. Your KCL Application was running 3x functionality and will rollback to 2x compatible functionality. If you don't see mitigation after a short period of time, please rollback to your previous application binaries by deploying the code with your previous KCL version."  
**Required action:** This means that your workers were running in KCL 3.x mode and the KCL Migration Tool switched all workers to KCL 1.x compatible mode. Redeploy the code with the previous KCL version to your workers.

Message 3  
"Application was already rolled back. Any KCLv3 resources that could be deleted were cleaned up to avoid charges until the application can be rolled forward with migration."  
**Required action:** This means that your workers were already rolled back to run in the KCL 1.x compatible mode. Redeploy the code with the previous KCL version to your workers.

# Roll forward to KCL 3.x after a rollback
<a name="kcl-migration-rollforward"></a>

This topic explains how to roll forward your consumer application to KCL 3.x after a rollback. When you need to roll forward, you must complete a two-step process:

1. Run the [KCL Migration Tool](https://github.com/awslabs/amazon-kinesis-client/blob/master/amazon-kinesis-client/scripts/KclMigrationTool.py).

1. Deploy the code with KCL 3.x.

## Step 1: Run the KCL Migration Tool
<a name="kcl-migration-rollforward-step1"></a>

Run the KCL Migration Tool with the following command to roll forward to KCL 3.x:

```
python3 ./KclMigrationTool.py --region region --mode rollforward [--application_name applicationName] [--coordinator_state_table_name coordinatorStateTableName]
```

### Parameters
<a name="kcl-migration-rollforward-parameters"></a>

`--region`  
Replace *region* with your AWS Region.

`--application_name`  
This parameter is required if you're using the default name for your coordinator state table. If you have specified a custom name for the coordinator state table, you can omit this parameter. Replace *applicationName* with your actual KCL application name. The tool uses this name to derive the default table name if a custom name is not provided.

`--coordinator_state_table_name`  
This parameter is needed when you have set a custom name for the coordinator state table in your KCL configuration. If you're using the default table name, you can omit this parameter. Replace *coordinatorStateTableName* with the custom table name you specified for your coordinator state table.

After you run the migration tool in roll-forward mode, KCL creates the following DynamoDB resources required for KCL 3.x:
+ A Global Secondary Index on the lease table
+ A worker metrics table

## Step 2: Deploy the code with KCL 3.x
<a name="kcl-migration-rollforward-step2"></a>

After running the KCL Migration Tool for a roll forward, deploy your code with KCL 3.x to your workers. To complete your migration, see [Step 8: Complete the migration](https://docs.aws.amazon.com/streams/latest/dev/kcl-migration-from-2-3.html#kcl-migration-from-2-3-finish).

# Walkthrough: DynamoDB Streams Kinesis adapter
<a name="Streams.KCLAdapter.Walkthrough"></a>

This section is a walkthrough of a Java application that uses the Amazon Kinesis Client Library and the Amazon DynamoDB Streams Kinesis Adapter. The application shows an example of data replication, in which write activity from one table is applied to a second table, with both tables' contents staying in sync. For the source code, see [Complete program: DynamoDB Streams Kinesis adapter](Streams.KCLAdapter.Walkthrough.CompleteProgram.md).

The program does the following:

1. Creates two DynamoDB tables named `KCL-Demo-src` and `KCL-Demo-dest`. Each of these tables has a stream enabled on it.

1. Generates update activity in the source table by adding, updating, and deleting items. This causes data to be written to the table's stream.

1. Reads the records from the stream, reconstructs them as DynamoDB requests, and applies the requests to the destination table.

1. Scans the source and destination tables to ensure that their contents are identical.

1. Cleans up by deleting the tables.

These steps are described in the following sections, and the complete application is shown at the end of the walkthrough.
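Stripped of the AWS calls, the replication rule applied in step 3 (treat `INSERT` and `MODIFY` as a put of the new image, and `REMOVE` as a delete by key) can be sketched as follows. The `ReplicationSketch` class and its single-attribute item model are illustrative only, not part of the adapter API:

```
// Minimal sketch of the replication rule the walkthrough implements:
// INSERT and MODIFY events write the item's new image to the destination,
// and REMOVE events delete the item by its key. The destination table is
// modeled here as an in-memory map keyed by the partition key value.
public class ReplicationSketch {
    private final java.util.Map<String, String> destination = new java.util.HashMap<>();

    public void apply(String eventName, String key, String newImage) {
        switch (eventName) {
            case "INSERT":
            case "MODIFY":
                destination.put(key, newImage);   // write the "after" image
                break;
            case "REMOVE":
                destination.remove(key);          // delete by primary key
                break;
            default:
                throw new IllegalArgumentException("Unknown event: " + eventName);
        }
    }

    public String get(String key) {
        return destination.get(key);
    }

    public static void main(String[] args) {
        ReplicationSketch sketch = new ReplicationSketch();
        sketch.apply("INSERT", "101", "test1");
        sketch.apply("MODIFY", "101", "test2");
        sketch.apply("REMOVE", "101", null);
        System.out.println("Item 101 present: " + (sketch.get("101") != null));
    }
}
```

Because the rule is idempotent per key, replaying the same stream records in order always converges the destination to the source's final state.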

**Topics**
+ [Step 1: Create DynamoDB tables](#Streams.KCLAdapter.Walkthrough.Step1)
+ [Step 2: Generate update activity in source table](#Streams.KCLAdapter.Walkthrough.Step2)
+ [Step 3: Process the stream](#Streams.KCLAdapter.Walkthrough.Step3)
+ [Step 4: Ensure that both tables have identical contents](#Streams.KCLAdapter.Walkthrough.Step4)
+ [Step 5: Clean up](#Streams.KCLAdapter.Walkthrough.Step5)
+ [Complete program: DynamoDB Streams Kinesis adapter](Streams.KCLAdapter.Walkthrough.CompleteProgram.md)

## Step 1: Create DynamoDB tables
<a name="Streams.KCLAdapter.Walkthrough.Step1"></a>

The first step is to create two DynamoDB tables—a source table and a destination table. The `StreamViewType` on the source table's stream is `NEW_IMAGE`. This means that whenever an item is modified in this table, the item's "after" image is written to the stream. In this way, the stream keeps track of all write activity on the table.

The following example shows the code that is used for creating both tables.

```
List<AttributeDefinition> attributeDefinitions = new ArrayList<>();
attributeDefinitions.add(AttributeDefinition.builder()
        .attributeName("Id")
        .attributeType("N")
        .build());

List<KeySchemaElement> keySchema = new ArrayList<>();
keySchema.add(KeySchemaElement.builder()
        .attributeName("Id")
        .keyType(KeyType.HASH) // Partition key
        .build());

StreamSpecification streamSpecification = StreamSpecification.builder()
        .streamEnabled(true)
        .streamViewType(StreamViewType.NEW_IMAGE)
        .build();

CreateTableRequest createTableRequest = CreateTableRequest.builder()
        .tableName(tableName)
        .attributeDefinitions(attributeDefinitions)
        .keySchema(keySchema)
        .billingMode(BillingMode.PAY_PER_REQUEST)
        .streamSpecification(streamSpecification)
        .build();
```

## Step 2: Generate update activity in source table
<a name="Streams.KCLAdapter.Walkthrough.Step2"></a>

The next step is to generate some write activity on the source table. While this activity is taking place, the source table's stream is also updated in near-real time.

The application defines a helper class with methods that call the `PutItem`, `UpdateItem`, and `DeleteItem` API operations for writing the data. The following code example shows how these methods are used.

```
StreamsAdapterDemoHelper.putItem(dynamoDBClient, tableName, "101", "test1");
StreamsAdapterDemoHelper.updateItem(dynamoDBClient, tableName, "101", "test2");
StreamsAdapterDemoHelper.deleteItem(dynamoDBClient, tableName, "101");
StreamsAdapterDemoHelper.putItem(dynamoDBClient, tableName, "102", "demo3");
StreamsAdapterDemoHelper.updateItem(dynamoDBClient, tableName, "102", "demo4");
StreamsAdapterDemoHelper.deleteItem(dynamoDBClient, tableName, "102");
```

## Step 3: Process the stream
<a name="Streams.KCLAdapter.Walkthrough.Step3"></a>

Now the program begins processing the stream. The DynamoDB Streams Kinesis Adapter acts as a transparent layer between the KCL and the DynamoDB Streams endpoint, so that the code can fully use KCL rather than having to make low-level DynamoDB Streams calls. The program performs the following tasks:
+ It defines a record processor class, `StreamsRecordProcessor`, with methods that comply with the KCL interface definition: `initialize`, `processRecords`, `leaseLost`, `shardEnded`, and `shutdownRequested`. The `processRecords` method contains the logic required for reading from the source table's stream and writing to the destination table.
+ It defines a class factory for the record processor class (`StreamsAdapterDemoProcessorFactory`). This is required for Java programs that use the KCL.
+ It instantiates a new KCL `Scheduler`, which is associated with the class factory.
+ It shuts down the `Scheduler` when record processing is complete.

Optionally, you can enable catch-up mode in your Streams KCL Adapter configuration. When stream processing lag exceeds one minute (the default threshold), catch-up mode automatically scales the `GetRecords` call rate by 3x (the default multiplier), helping your stream consumer handle high throughput spikes on your table.

To learn more about the KCL interface definition, see [Developing consumers using the Kinesis client library](https://docs.aws.amazon.com/kinesis/latest/dev/developing-consumers-with-kcl.html) in the *Amazon Kinesis Data Streams Developer Guide*. 

The following code example shows the main loop in `StreamsRecordProcessor`. The `case` statement determines what action to perform, based on the `OperationType` that appears in the stream record.

```
for (DynamoDBStreamsClientRecord record : dynamoDBStreamsProcessRecordsInput.records()) {
    String data = new String(record.data().array(), StandardCharsets.UTF_8);
    System.out.println(data);
    Record streamRecord = record.getRecord();

    switch (streamRecord.eventName()) {
        case INSERT:
        case MODIFY:
            StreamsAdapterDemoHelper.putItem(dynamoDbAsyncClient, tableName,
                    streamRecord.dynamodb().newImage());
            break;
        case REMOVE:
            StreamsAdapterDemoHelper.deleteItem(dynamoDbAsyncClient, tableName,
                    streamRecord.dynamodb().keys().get("Id").n());
    }
    checkpointCounter += 1;
    if (checkpointCounter % 10 == 0) {
        try {
            dynamoDBStreamsProcessRecordsInput.checkpointer().checkpoint();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```
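The checkpoint cadence in the loop above is plain modulo arithmetic: a checkpoint is taken once per ten processed records rather than after every record, which keeps checkpoint overhead low at the cost of reprocessing at most nine records after a failure. A standalone sketch (the `CheckpointCadence` class is illustrative, not part of the KCL):

```
// Standalone sketch of the checkpoint cadence used in processRecords:
// a counter is incremented per record, and a checkpoint is taken on
// every tenth record.
public class CheckpointCadence {
    private int checkpointCounter = 0;
    private int checkpointsTaken = 0;

    public void processRecord() {
        checkpointCounter += 1;
        if (checkpointCounter % 10 == 0) {
            checkpointsTaken += 1; // in the real code: checkpointer.checkpoint()
        }
    }

    public int checkpointsTaken() {
        return checkpointsTaken;
    }

    public static void main(String[] args) {
        CheckpointCadence cadence = new CheckpointCadence();
        for (int i = 0; i < 25; i++) {
            cadence.processRecord();
        }
        // 25 records at a cadence of 10 yields checkpoints at records 10 and 20
        System.out.println(cadence.checkpointsTaken());
    }
}
```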

## Step 4: Ensure that both tables have identical contents
<a name="Streams.KCLAdapter.Walkthrough.Step4"></a>

At this point, the source and destination tables' contents are in sync. The application issues `Scan` requests against both tables to verify that their contents are, in fact, identical.

The `StreamsAdapterDemoHelper` class contains a `scanTable` method that calls the low-level `Scan` API. The following example shows how this is used.

```
if (StreamsAdapterDemoHelper.scanTable(dynamoDBClient, srcTable).items()
    .equals(StreamsAdapterDemoHelper.scanTable(dynamoDBClient, destTable).items())) {
    System.out.println("Scan result is equal.");
}
else {
    System.out.println("Tables are different!");
}
```
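One caveat worth noting: comparing the two scan results with `equals` is order-sensitive, so the check succeeds only when both tables return items in the same order. As a sketch, an order-insensitive comparison could be written as follows; the `ScanComparison` class and its string-map item model are illustrative, and the set conversion assumes no duplicate items (which holds for full-table scans, because primary keys are unique):

```
import java.util.HashSet;
import java.util.List;
import java.util.Map;

// Order-insensitive comparison of two scan results, modeled as lists of
// item maps. Converting each list to a set ignores scan order.
public class ScanComparison {
    public static boolean sameContents(List<Map<String, String>> a,
                                       List<Map<String, String>> b) {
        return new HashSet<>(a).equals(new HashSet<>(b));
    }

    public static void main(String[] args) {
        List<Map<String, String>> first = List.of(Map.of("Id", "1"), Map.of("Id", "2"));
        List<Map<String, String>> second = List.of(Map.of("Id", "2"), Map.of("Id", "1"));
        // Same items in a different order still compare as equal contents
        System.out.println(sameContents(first, second));
    }
}
```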

## Step 5: Clean up
<a name="Streams.KCLAdapter.Walkthrough.Step5"></a>

The demo is complete, so the application deletes the source and destination tables. See the following code example. Even after the tables are deleted, their streams remain available for up to 24 hours, after which they are automatically deleted.

```
dynamoDBClient.deleteTable(new DeleteTableRequest().withTableName(srcTable));
dynamoDBClient.deleteTable(new DeleteTableRequest().withTableName(destTable));
```

# Complete program: DynamoDB Streams Kinesis adapter
<a name="Streams.KCLAdapter.Walkthrough.CompleteProgram"></a>

The following is the complete Java program that performs the tasks described in [Walkthrough: DynamoDB Streams Kinesis adapter](Streams.KCLAdapter.Walkthrough.md). When you run it, you should see output similar to the following.

```
Creating table KCL-Demo-src
Creating table KCL-Demo-dest
Table is active.
Creating scheduler for stream arn:aws:dynamodb:us-west-2:111122223333:table/KCL-Demo-src/stream/2015-05-19T22:48:56.601
Starting scheduler...
Scan result is equal.
Done.
```

**Important**  
To run this program, ensure that the client application has IAM permissions to access DynamoDB and Amazon CloudWatch. For more information, see [Identity-based policies for DynamoDB](security_iam_service-with-iam.md#security_iam_service-with-iam-id-based-policies).

The source code consists of four `.java` files. To build this program, add the following dependency, which includes the Amazon Kinesis Client Library (KCL) 3.x and AWS SDK for Java v2 as transitive dependencies:

------
#### [ Maven ]

```
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>dynamodb-streams-kinesis-adapter</artifactId>
    <version>2.1.0</version>
</dependency>
```

------
#### [ Gradle ]

```
implementation 'com.amazonaws:dynamodb-streams-kinesis-adapter:2.1.0'
```

------

The source files are:
+ `StreamsAdapterDemo.java`
+ `StreamsRecordProcessor.java`
+ `StreamsAdapterDemoProcessorFactory.java`
+ `StreamsAdapterDemoHelper.java`

## StreamsAdapterDemo.java
<a name="Streams.KCLAdapter.Walkthrough.CompleteProgram.StreamsAdapterDemo"></a>

```
package com.amazonaws.codesamples;

import com.amazonaws.services.dynamodbv2.streamsadapter.AmazonDynamoDBStreamsAdapterClient;
import com.amazonaws.services.dynamodbv2.streamsadapter.StreamsSchedulerFactory;
import com.amazonaws.services.dynamodbv2.streamsadapter.polling.DynamoDBStreamsPollingConfig;
import software.amazon.awssdk.auth.credentials.AwsCredentialsProvider;
import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.cloudwatch.CloudWatchAsyncClient;
import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClient;
import software.amazon.awssdk.services.dynamodb.model.DeleteTableRequest;
import software.amazon.awssdk.services.dynamodb.model.DescribeTableResponse;
import software.amazon.kinesis.common.ConfigsBuilder;
import software.amazon.kinesis.common.InitialPositionInStream;
import software.amazon.kinesis.common.InitialPositionInStreamExtended;
import software.amazon.kinesis.coordinator.Scheduler;
import software.amazon.kinesis.processor.ShardRecordProcessorFactory;
import software.amazon.kinesis.processor.StreamTracker;
import software.amazon.kinesis.retrieval.RetrievalConfig;

public class StreamsAdapterDemo {

    private static DynamoDbAsyncClient dynamoDbAsyncClient;
    private static CloudWatchAsyncClient cloudWatchAsyncClient;
    private static AmazonDynamoDBStreamsAdapterClient amazonDynamoDbStreamsAdapterClient;

    private static String tablePrefix = "KCL-Demo";
    private static String streamArn;

    private static Region region = Region.US_EAST_1;
    private static AwsCredentialsProvider credentialsProvider = DefaultCredentialsProvider.create();

    public static void main( String[] args ) throws Exception {
        System.out.println("Starting demo...");
        dynamoDbAsyncClient = DynamoDbAsyncClient.builder()
                .credentialsProvider(credentialsProvider)
                .region(region)
                .build();
        cloudWatchAsyncClient = CloudWatchAsyncClient.builder()
                .credentialsProvider(credentialsProvider)
                .region(region)
                .build();
        amazonDynamoDbStreamsAdapterClient = new AmazonDynamoDBStreamsAdapterClient(credentialsProvider, region);

        String srcTable = tablePrefix + "-src";
        String destTable = tablePrefix + "-dest";

        setUpTables();

        StreamTracker streamTracker = StreamsSchedulerFactory.createSingleStreamTracker(streamArn,
                InitialPositionInStreamExtended.newInitialPosition(InitialPositionInStream.TRIM_HORIZON));

        ShardRecordProcessorFactory shardRecordProcessorFactory =
                new StreamsAdapterDemoProcessorFactory(dynamoDbAsyncClient, destTable);

        ConfigsBuilder configsBuilder = new ConfigsBuilder(
                streamTracker,
                "streams-adapter-demo",
                amazonDynamoDbStreamsAdapterClient,
                dynamoDbAsyncClient,
                cloudWatchAsyncClient,
                "streams-demo-worker",
                shardRecordProcessorFactory
        );

        DynamoDBStreamsPollingConfig pollingConfig = new DynamoDBStreamsPollingConfig(amazonDynamoDbStreamsAdapterClient);
        RetrievalConfig retrievalConfig = configsBuilder.retrievalConfig();
        retrievalConfig.retrievalSpecificConfig(pollingConfig);

        System.out.println("Creating scheduler for stream " + streamArn);
        Scheduler scheduler = StreamsSchedulerFactory.createScheduler(
                configsBuilder.checkpointConfig(),
                configsBuilder.coordinatorConfig(),
                configsBuilder.leaseManagementConfig(),
                configsBuilder.lifecycleConfig(),
                configsBuilder.metricsConfig(),
                configsBuilder.processorConfig(),
                retrievalConfig,
                amazonDynamoDbStreamsAdapterClient
        );

        System.out.println("Starting scheduler...");
        Thread t = new Thread(scheduler);
        t.start();

        Thread.sleep(250000);

        System.out.println("Stopping scheduler...");
        scheduler.shutdown();
        t.join();

        if (StreamsAdapterDemoHelper.scanTable(dynamoDbAsyncClient, srcTable).items()
                .equals(StreamsAdapterDemoHelper.scanTable(dynamoDbAsyncClient, destTable).items())) {
            System.out.println("Scan result is equal.");
        } else {
            System.out.println("Tables are different!");
        }

        System.out.println("Done.");
        cleanupAndExit(0);
    }

    private static void setUpTables() {
        String srcTable = tablePrefix + "-src";
        String destTable = tablePrefix + "-dest";
        streamArn = StreamsAdapterDemoHelper.createTable(dynamoDbAsyncClient, srcTable);
        StreamsAdapterDemoHelper.createTable(dynamoDbAsyncClient, destTable);

        awaitTableCreation(srcTable);

        performOps(srcTable);
    }

    private static void awaitTableCreation(String tableName) {
        int retries = 0;
        boolean created = false;
        while (!created && retries < 100) {
            DescribeTableResponse result = StreamsAdapterDemoHelper.describeTable(dynamoDbAsyncClient, tableName);
            created = result.table().tableStatusAsString().equals("ACTIVE");
            if (created) {
                System.out.println("Table is active.");
                return;
            } else {
                retries++;
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    // do nothing
                }
            }
        }
        System.out.println("Timeout after table creation. Exiting...");
        cleanupAndExit(1);
    }

    private static void performOps(String tableName) {
        StreamsAdapterDemoHelper.putItem(dynamoDbAsyncClient, tableName, "101", "test1");
        StreamsAdapterDemoHelper.updateItem(dynamoDbAsyncClient, tableName, "101", "test2");
        StreamsAdapterDemoHelper.deleteItem(dynamoDbAsyncClient, tableName, "101");
        StreamsAdapterDemoHelper.putItem(dynamoDbAsyncClient, tableName, "102", "demo3");
        StreamsAdapterDemoHelper.updateItem(dynamoDbAsyncClient, tableName, "102", "demo4");
        StreamsAdapterDemoHelper.deleteItem(dynamoDbAsyncClient, tableName, "102");
    }

    private static void cleanupAndExit(Integer returnValue) {
        String srcTable = tablePrefix + "-src";
        String destTable = tablePrefix + "-dest";
        dynamoDbAsyncClient.deleteTable(DeleteTableRequest.builder().tableName(srcTable).build());
        dynamoDbAsyncClient.deleteTable(DeleteTableRequest.builder().tableName(destTable).build());
        System.exit(returnValue);
    }
}
```

## StreamsRecordProcessor.java
<a name="Streams.KCLAdapter.Walkthrough.CompleteProgram.StreamsRecordProcessor"></a>

```
package com.amazonaws.codesamples;

import com.amazonaws.services.dynamodbv2.streamsadapter.adapter.DynamoDBStreamsClientRecord;
import com.amazonaws.services.dynamodbv2.streamsadapter.model.DynamoDBStreamsProcessRecordsInput;
import com.amazonaws.services.dynamodbv2.streamsadapter.processor.DynamoDBStreamsShardRecordProcessor;
import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClient;
import software.amazon.awssdk.services.dynamodb.model.Record;
import software.amazon.kinesis.exceptions.InvalidStateException;
import software.amazon.kinesis.exceptions.ShutdownException;
import software.amazon.kinesis.lifecycle.events.InitializationInput;
import software.amazon.kinesis.lifecycle.events.LeaseLostInput;
import software.amazon.kinesis.lifecycle.events.ShardEndedInput;
import software.amazon.kinesis.lifecycle.events.ShutdownRequestedInput;

import java.nio.charset.StandardCharsets;

public class StreamsRecordProcessor implements DynamoDBStreamsShardRecordProcessor {

    private Integer checkpointCounter;

    private final DynamoDbAsyncClient dynamoDbAsyncClient;
    private final String tableName;

    public StreamsRecordProcessor(DynamoDbAsyncClient dynamoDbAsyncClient, String tableName) {
        this.dynamoDbAsyncClient = dynamoDbAsyncClient;
        this.tableName = tableName;
    }

    @Override
    public void initialize(InitializationInput initializationInput) {
        this.checkpointCounter = 0;
    }

    @Override
    public void processRecords(DynamoDBStreamsProcessRecordsInput dynamoDBStreamsProcessRecordsInput) {
        for (DynamoDBStreamsClientRecord record: dynamoDBStreamsProcessRecordsInput.records()) {
            String data = new String(record.data().array(), StandardCharsets.UTF_8);
            System.out.println(data);
            Record streamRecord = record.getRecord();

            switch (streamRecord.eventName()) {
                case INSERT:
                case MODIFY:
                    StreamsAdapterDemoHelper.putItem(dynamoDbAsyncClient, tableName,
                            streamRecord.dynamodb().newImage());
                    break;
                case REMOVE:
                    StreamsAdapterDemoHelper.deleteItem(dynamoDbAsyncClient, tableName,
                            streamRecord.dynamodb().keys().get("Id").n());
            }
            checkpointCounter += 1;
            if (checkpointCounter % 10 == 0) {
                try {
                    dynamoDBStreamsProcessRecordsInput.checkpointer().checkpoint();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    }

    @Override
    public void leaseLost(LeaseLostInput leaseLostInput) {
        System.out.println("Lease Lost");
    }

    @Override
    public void shardEnded(ShardEndedInput shardEndedInput) {
        try {
            shardEndedInput.checkpointer().checkpoint();
        } catch (ShutdownException | InvalidStateException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void shutdownRequested(ShutdownRequestedInput shutdownRequestedInput) {
        try {
            shutdownRequestedInput.checkpointer().checkpoint();
        } catch (ShutdownException | InvalidStateException e) {
            e.printStackTrace();
        }
    }
}
```

## StreamsAdapterDemoProcessorFactory.java
<a name="Streams.KCLAdapter.Walkthrough.CompleteProgram.StreamsRecordProcessorFactory"></a>

```
package com.amazonaws.codesamples;

import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClient;
import software.amazon.kinesis.processor.ShardRecordProcessor;
import software.amazon.kinesis.processor.ShardRecordProcessorFactory;

public class StreamsAdapterDemoProcessorFactory implements ShardRecordProcessorFactory {
    private final String tableName;
    private final DynamoDbAsyncClient dynamoDbAsyncClient;

    public StreamsAdapterDemoProcessorFactory(DynamoDbAsyncClient asyncClient, String tableName) {
        this.tableName = tableName;
        this.dynamoDbAsyncClient = asyncClient;
    }

    @Override
    public ShardRecordProcessor shardRecordProcessor() {
        return new StreamsRecordProcessor(dynamoDbAsyncClient, tableName);
    }
}
```

## StreamsAdapterDemoHelper.java
<a name="Streams.KCLAdapter.Walkthrough.CompleteProgram.StreamsAdapterDemoHelper"></a>

```
package com.amazonaws.codesamples;

import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeDefinition;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.BillingMode;
import software.amazon.awssdk.services.dynamodb.model.CreateTableRequest;
import software.amazon.awssdk.services.dynamodb.model.CreateTableResponse;
import software.amazon.awssdk.services.dynamodb.model.DeleteItemRequest;
import software.amazon.awssdk.services.dynamodb.model.DescribeTableRequest;
import software.amazon.awssdk.services.dynamodb.model.DescribeTableResponse;
import software.amazon.awssdk.services.dynamodb.model.KeySchemaElement;
import software.amazon.awssdk.services.dynamodb.model.KeyType;
import software.amazon.awssdk.services.dynamodb.model.OnDemandThroughput;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;
import software.amazon.awssdk.services.dynamodb.model.ResourceInUseException;
import software.amazon.awssdk.services.dynamodb.model.ScanRequest;
import software.amazon.awssdk.services.dynamodb.model.ScanResponse;
import software.amazon.awssdk.services.dynamodb.model.StreamSpecification;
import software.amazon.awssdk.services.dynamodb.model.StreamViewType;
import software.amazon.awssdk.services.dynamodb.model.UpdateItemRequest;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StreamsAdapterDemoHelper {

    /**
     * @return StreamArn
     */
    public static String createTable(DynamoDbAsyncClient client, String tableName) {
        List<AttributeDefinition> attributeDefinitions = new ArrayList<>();
        attributeDefinitions.add(AttributeDefinition.builder()
                .attributeName("Id")
                .attributeType("N")
                .build());

        List<KeySchemaElement> keySchema = new ArrayList<>();
        keySchema.add(KeySchemaElement.builder()
                .attributeName("Id")
                .keyType(KeyType.HASH) // Partition key
                .build());

        StreamSpecification streamSpecification = StreamSpecification.builder()
                .streamEnabled(true)
                .streamViewType(StreamViewType.NEW_IMAGE)
                .build();

        CreateTableRequest createTableRequest = CreateTableRequest.builder()
                .tableName(tableName)
                .attributeDefinitions(attributeDefinitions)
                .keySchema(keySchema)
                .billingMode(BillingMode.PAY_PER_REQUEST)
                .streamSpecification(streamSpecification)
                .build();

        try {
            System.out.println("Creating table " + tableName);
            CreateTableResponse result = client.createTable(createTableRequest).join();
            return result.tableDescription().latestStreamArn();
        } catch (Exception e) {
            if (e.getCause() instanceof ResourceInUseException) {
                System.out.println("Table already exists.");
                return describeTable(client, tableName).table().latestStreamArn();
            }
            throw e;
        }
    }

    public static DescribeTableResponse describeTable(DynamoDbAsyncClient client, String tableName) {
        return client.describeTable(DescribeTableRequest.builder()
                        .tableName(tableName)
                        .build())
                .join();
    }

    public static ScanResponse scanTable(DynamoDbAsyncClient dynamoDbClient, String tableName) {
        return dynamoDbClient.scan(ScanRequest.builder()
                        .tableName(tableName)
                        .build())
                .join();
    }

    public static void putItem(DynamoDbAsyncClient dynamoDbClient, String tableName, String id, String val) {
        Map<String, AttributeValue> item = new HashMap<>();
        item.put("Id", AttributeValue.builder().n(id).build());
        item.put("attribute-1", AttributeValue.builder().s(val).build());

        putItem(dynamoDbClient, tableName, item);
    }

    public static void putItem(DynamoDbAsyncClient dynamoDbClient, String tableName,
                               Map<String, AttributeValue> items) {
        PutItemRequest putItemRequest = PutItemRequest.builder()
                .tableName(tableName)
                .item(items)
                .build();
        dynamoDbClient.putItem(putItemRequest).join();
    }

    public static void updateItem(DynamoDbAsyncClient dynamoDbClient, String tableName, String id, String val) {
        Map<String, AttributeValue> key = new HashMap<>();
        key.put("Id", AttributeValue.builder().n(id).build());

        Map<String, String> expressionAttributeNames = new HashMap<>();
        expressionAttributeNames.put("#attr2", "attribute-2");

        Map<String, AttributeValue> expressionAttributeValues = new HashMap<>();
        expressionAttributeValues.put(":val", AttributeValue.builder().s(val).build());

        UpdateItemRequest updateItemRequest = UpdateItemRequest.builder()
                .tableName(tableName)
                .key(key)
                .updateExpression("SET #attr2 = :val")
                .expressionAttributeNames(expressionAttributeNames)
                .expressionAttributeValues(expressionAttributeValues)
                .build();

        dynamoDbClient.updateItem(updateItemRequest).join();
    }

    public static void deleteItem(DynamoDbAsyncClient dynamoDbClient, String tableName, String id) {
        Map<String, AttributeValue> key = new HashMap<>();
        key.put("Id", AttributeValue.builder().n(id).build());

        DeleteItemRequest deleteItemRequest = DeleteItemRequest.builder()
                .tableName(tableName)
                .key(key)
                .build();
        dynamoDbClient.deleteItem(deleteItemRequest).join();
    }
}
```

# DynamoDB Streams low-level API: Java example
<a name="Streams.LowLevel.Walkthrough"></a>

**Note**  
The code on this page is not exhaustive and does not handle all scenarios for consuming Amazon DynamoDB Streams. The recommended way to consume stream records from DynamoDB is through the Amazon Kinesis Adapter using the Kinesis Client Library (KCL), as described in [Using the DynamoDB Streams Kinesis adapter to process stream records](Streams.KCLAdapter.md).

This section contains a Java program that shows DynamoDB Streams in action. The program does the following:

1. Creates a DynamoDB table with a stream enabled.

1. Describes the stream settings for this table.

1. Modifies data in the table.

1. Describes the shards in the stream.

1. Reads the stream records from the shards.

1. Fetches child shards and continues reading records.

1. Cleans up.
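
Steps 4 through 6 turn on shard lineage: a stream shard can have a parent shard, and a parent's records must be processed before its children's records to preserve per-item ordering. That ordering rule can be sketched independently of the Streams API; the `ShardOrdering` class below is illustrative, and it models lineage as a simple shard-to-parent map in which a parent outside the map (for example, an expired shard) is ignored:

```
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the parent-before-child processing rule for stream shards.
// A shard is ready to process only after its parent (if any) has been
// processed; this is a topological ordering over the shard lineage.
public class ShardOrdering {
    public static List<String> processingOrder(Map<String, String> shardToParent) {
        List<String> order = new ArrayList<>();
        for (String shard : shardToParent.keySet()) {
            visit(shard, shardToParent, order);
        }
        return order;
    }

    private static void visit(String shard, Map<String, String> shardToParent,
                              List<String> order) {
        // Skip unknown shards (e.g., trimmed parents) and already-visited shards
        if (shard == null || !shardToParent.containsKey(shard) || order.contains(shard)) {
            return;
        }
        visit(shardToParent.get(shard), shardToParent, order); // parent first
        order.add(shard);
    }

    public static void main(String[] args) {
        Map<String, String> lineage = new HashMap<>();
        lineage.put("shardId-0002", "shardId-0001"); // child of shardId-0001
        lineage.put("shardId-0001", null);           // no parent
        System.out.println(processingOrder(lineage));
    }
}
```

Regardless of map iteration order, `shardId-0001` always appears before `shardId-0002` in the result, mirroring how the program reads a shard to completion before fetching and reading its child shards.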

When you run the program, you will see output similar to the following.

```
Testing Streams Demo
Creating an Amazon DynamoDB table TestTableForStreams with a simple primary key: Id
Waiting for TestTableForStreams to be created...
Current stream ARN for TestTableForStreams: arn:aws:dynamodb:us-east-2:123456789012:table/TestTableForStreams/stream/2018-03-20T16:49:55.208
Stream enabled: true
Update view type: NEW_AND_OLD_IMAGES

Performing write activities on TestTableForStreams
Processing item 1 of 100
Processing item 2 of 100
Processing item 3 of 100
...
Processing item 100 of 100
Shard: {ShardId: shardId-1234567890-...,SequenceNumberRange: {StartingSequenceNumber: 100002572486797508907,},}
    Shard iterator: EjYFEkX2a26eVTWe...
        StreamRecord(ApproximateCreationDateTime=2025-04-09T13:11:58Z, Keys={Id=AttributeValue(S=4)}, NewImage={Message=AttributeValue(S=New Item!), Id=AttributeValue(S=4)}, SequenceNumber=2000001584047545833909, SizeBytes=22, StreamViewType=NEW_AND_OLD_IMAGES)
        StreamRecord(ApproximateCreationDateTime=2025-04-09T13:11:58Z, Keys={Id=AttributeValue(S=4)}, NewImage={Message=AttributeValue(S=This is an updated item), Id=AttributeValue(S=4)}, OldImage={Message=AttributeValue(S=New Item!), Id=AttributeValue(S=4)}, SequenceNumber=2100003604869767892701, SizeBytes=55, StreamViewType=NEW_AND_OLD_IMAGES)
        StreamRecord(ApproximateCreationDateTime=2025-04-09T13:11:58Z, Keys={Id=AttributeValue(S=4)}, OldImage={Message=AttributeValue(S=This is an updated item), Id=AttributeValue(S=4)}, SequenceNumber=2200001099771112898434, SizeBytes=36, StreamViewType=NEW_AND_OLD_IMAGES)
...
Deleting the table...
Table StreamsDemoTable deleted.
Demo complete
```

**Example**  

```
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import software.amazon.awssdk.core.waiters.WaiterResponse;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeAction;
import software.amazon.awssdk.services.dynamodb.model.AttributeDefinition;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.AttributeValueUpdate;
import software.amazon.awssdk.services.dynamodb.model.BillingMode;
import software.amazon.awssdk.services.dynamodb.model.CreateTableRequest;
import software.amazon.awssdk.services.dynamodb.model.CreateTableResponse;
import software.amazon.awssdk.services.dynamodb.model.DeleteItemRequest;
import software.amazon.awssdk.services.dynamodb.model.DeleteTableRequest;
import software.amazon.awssdk.services.dynamodb.model.DescribeStreamRequest;
import software.amazon.awssdk.services.dynamodb.model.DescribeStreamResponse;
import software.amazon.awssdk.services.dynamodb.model.DescribeTableRequest;
import software.amazon.awssdk.services.dynamodb.model.DescribeTableResponse;
import software.amazon.awssdk.services.dynamodb.model.DynamoDbException;
import software.amazon.awssdk.services.dynamodb.model.GetRecordsRequest;
import software.amazon.awssdk.services.dynamodb.model.GetRecordsResponse;
import software.amazon.awssdk.services.dynamodb.model.GetShardIteratorRequest;
import software.amazon.awssdk.services.dynamodb.model.GetShardIteratorResponse;
import software.amazon.awssdk.services.dynamodb.model.KeySchemaElement;
import software.amazon.awssdk.services.dynamodb.model.KeyType;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;
import software.amazon.awssdk.services.dynamodb.model.Record;
import software.amazon.awssdk.services.dynamodb.model.ScalarAttributeType;
import software.amazon.awssdk.services.dynamodb.model.Shard;
import software.amazon.awssdk.services.dynamodb.model.ShardFilter;
import software.amazon.awssdk.services.dynamodb.model.ShardFilterType;
import software.amazon.awssdk.services.dynamodb.model.ShardIteratorType;
import software.amazon.awssdk.services.dynamodb.model.StreamSpecification;
import software.amazon.awssdk.services.dynamodb.model.TableDescription;
import software.amazon.awssdk.services.dynamodb.model.UpdateItemRequest;
import software.amazon.awssdk.services.dynamodb.streams.DynamoDbStreamsClient;
import software.amazon.awssdk.services.dynamodb.waiters.DynamoDbWaiter;

public class StreamsLowLevelDemo {


    public static void main(String[] args) {
        final String usage = "Testing Streams Demo";
        try {
            System.out.println(usage);

            String tableName = "StreamsDemoTable";
            String key = "Id";
            System.out.println("Creating an Amazon DynamoDB table " + tableName + " with a simple primary key: " + key);
            Region region = Region.US_WEST_2;
            DynamoDbClient ddb = DynamoDbClient.builder()
                    .region(region)
                    .build();

            DynamoDbStreamsClient ddbStreams = DynamoDbStreamsClient.builder()
                    .region(region)
                    .build();
            DescribeTableRequest describeTableRequest = DescribeTableRequest.builder()
                    .tableName(tableName)
                    .build();
            TableDescription tableDescription = null;
            try {
                tableDescription = ddb.describeTable(describeTableRequest).table();
            } catch (Exception e) {
                System.out.println("Table " + tableName + " does not exist.");
                tableDescription = createTable(ddb, tableName, key);
            }

            // Print the stream settings for the table
            String streamArn = tableDescription.latestStreamArn();
           
            StreamSpecification streamSpec = tableDescription.streamSpecification();
            System.out.println("Current stream ARN for " + tableDescription.tableName() + ": " +
                   streamArn);
            System.out.println("Stream enabled: " + streamSpec.streamEnabled());
            System.out.println("Update view type: " + streamSpec.streamViewType());
            System.out.println();
            // Generate write activity in the table
            System.out.println("Performing write activities on " + tableName);
            int maxItemCount = 100;
            for (Integer i = 1; i <= maxItemCount; i++) {
                System.out.println("Processing item " + i + " of " + maxItemCount);
                // Write a new item
                putItemInTable(key, i, tableName, ddb);
                // Update the item
                updateItemInTable(key, i, tableName, ddb);
                // Delete the item
                deleteDynamoDBItem(key, i, tableName, ddb);
            }

            // Process Stream
            processStream(streamArn, maxItemCount, ddb, ddbStreams, tableName);

            // Delete the table
            System.out.println("Deleting the table...");
            DeleteTableRequest deleteTableRequest = DeleteTableRequest.builder()
                    .tableName(tableName)
                    .build();
            ddb.deleteTable(deleteTableRequest);
            System.out.println("Table " + tableName + " deleted.");
            System.out.println("Demo complete");
            ddb.close();
        } catch (Exception e) {
            System.out.println("Error: " + e.getMessage());
        }
    }

    private static void processStream(String streamArn, int maxItemCount, DynamoDbClient ddb, DynamoDbStreamsClient ddbStreams, String tableName) {
        // Get all the shard IDs from the stream. Note that DescribeStream returns
        // the shard IDs one page at a time.
        String lastEvaluatedShardId = null;
        do {
            DescribeStreamRequest describeStreamRequest = DescribeStreamRequest.builder()
                    .streamArn(streamArn)
                    .exclusiveStartShardId(lastEvaluatedShardId).build();
            DescribeStreamResponse describeStreamResponse = ddbStreams.describeStream(describeStreamRequest);

            List<Shard> shards = describeStreamResponse.streamDescription().shards();

            // Process each shard on this page

            fetchShardsAndReadRecords(streamArn, maxItemCount, ddbStreams, shards);

            // If LastEvaluatedShardId is set, then there is
            // at least one more page of shard IDs to retrieve
            lastEvaluatedShardId = describeStreamResponse.streamDescription().lastEvaluatedShardId();

        } while (lastEvaluatedShardId != null);

    }

    private static void fetchShardsAndReadRecords(String streamArn, int maxItemCount, DynamoDbStreamsClient ddbStreams, List<Shard> shards) {
        for (Shard shard : shards) {
            String shardId = shard.shardId();
            System.out.println("Shard: " + shard);

            // Get an iterator for the current shard
            GetShardIteratorRequest shardIteratorRequest = GetShardIteratorRequest.builder()
                    .streamArn(streamArn).shardId(shardId)
                    .shardIteratorType(ShardIteratorType.TRIM_HORIZON).build();

            GetShardIteratorResponse getShardIteratorResult = ddbStreams.getShardIterator(shardIteratorRequest);

            String currentShardIter = getShardIteratorResult.shardIterator();

            // Shard iterator is not null until the Shard is sealed (marked as READ_ONLY).
            // To prevent running the loop until the Shard is sealed, we process only the
            // items that were written into DynamoDB and then exit.
            int processedRecordCount = 0;
            while (currentShardIter != null && processedRecordCount < maxItemCount) {
                // Use the shard iterator to read the stream records
                GetRecordsRequest getRecordsRequest = GetRecordsRequest.builder()
                        .shardIterator(currentShardIter).build();
                GetRecordsResponse getRecordsResult = ddbStreams.getRecords(getRecordsRequest);
                List<Record> records = getRecordsResult.records();
                for (Record record : records) {
                    System.out.println("        " + record.dynamodb());
                }
                processedRecordCount += records.size();
                currentShardIter = getRecordsResult.nextShardIterator();
            }
            if (currentShardIter == null){
                System.out.println("Shard has been fully processed. Shard iterator is null.");
                System.out.println("Fetch the child shard to continue processing instead of bulk fetching all shards");
                DescribeStreamRequest describeStreamRequestForChildShards = DescribeStreamRequest.builder()
                        .streamArn(streamArn)
                        .shardFilter(ShardFilter.builder()
                                .type(ShardFilterType.CHILD_SHARDS)
                                .shardId(shardId).build())
                        .build();
                DescribeStreamResponse describeStreamResponseChildShards = ddbStreams.describeStream(describeStreamRequestForChildShards);
                fetchShardsAndReadRecords(streamArn, maxItemCount, ddbStreams, describeStreamResponseChildShards.streamDescription().shards());
            }
        }
    }

    private static void putItemInTable(String key, Integer i, String tableName, DynamoDbClient ddb) {
        Map<String, AttributeValue> item = new HashMap<>();
        item.put(key, AttributeValue.builder()
                .s(i.toString())
                .build());
        item.put("Message", AttributeValue.builder()
                .s("New Item!")
                .build());
        PutItemRequest request = PutItemRequest.builder()
                .tableName(tableName)
                .item(item)
                .build();
        ddb.putItem(request);
    }

    private static void updateItemInTable(String key, Integer i, String tableName, DynamoDbClient ddb) {

        HashMap<String, AttributeValue> itemKey = new HashMap<>();
        itemKey.put(key, AttributeValue.builder()
                .s(i.toString())
                .build());


        HashMap<String, AttributeValueUpdate> updatedValues = new HashMap<>();
        updatedValues.put("Message", AttributeValueUpdate.builder()
                .value(AttributeValue.builder().s("This is an updated item").build())
                .action(AttributeAction.PUT)
                .build());

        UpdateItemRequest request = UpdateItemRequest.builder()
                .tableName(tableName)
                .key(itemKey)
                .attributeUpdates(updatedValues)
                .build();
        ddb.updateItem(request);
    }

    public static void deleteDynamoDBItem(String key, Integer i, String tableName, DynamoDbClient ddb) {
        HashMap<String, AttributeValue> keyToGet = new HashMap<>();
        keyToGet.put(key, AttributeValue.builder()
                .s(i.toString())
                .build());

        DeleteItemRequest deleteReq = DeleteItemRequest.builder()
                .tableName(tableName)
                .key(keyToGet)
                .build();
        ddb.deleteItem(deleteReq);
    }

    public static TableDescription createTable(DynamoDbClient ddb, String tableName, String key) {
        DynamoDbWaiter dbWaiter = ddb.waiter();
        StreamSpecification streamSpecification = StreamSpecification.builder()
                .streamEnabled(true)
                .streamViewType("NEW_AND_OLD_IMAGES")
                .build();
        CreateTableRequest request = CreateTableRequest.builder()
                .attributeDefinitions(AttributeDefinition.builder()
                        .attributeName(key)
                        .attributeType(ScalarAttributeType.S)
                        .build())
                .keySchema(KeySchemaElement.builder()
                        .attributeName(key)
                        .keyType(KeyType.HASH)
                        .build())
                .billingMode(BillingMode.PAY_PER_REQUEST) //  DynamoDB automatically scales based on traffic.
                .tableName(tableName)
                .streamSpecification(streamSpecification)
                .build();

        TableDescription newTable;
        try {
            CreateTableResponse response = ddb.createTable(request);
            DescribeTableRequest tableRequest = DescribeTableRequest.builder()
                    .tableName(tableName)
                    .build();
                    
            System.out.println("Waiting for " + tableName + " to be created...");

            // Wait until the Amazon DynamoDB table is created.
            WaiterResponse<DescribeTableResponse> waiterResponse = dbWaiter.waitUntilTableExists(tableRequest);
            waiterResponse.matched().response().ifPresent(System.out::println);
            newTable = response.tableDescription();
            return newTable;

        } catch (DynamoDbException e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
        return null;
    }



}
```

# DynamoDB Streams and AWS Lambda triggers
<a name="Streams.Lambda"></a>

Amazon DynamoDB is integrated with AWS Lambda so that you can create *triggers*—pieces of code that automatically respond to events in DynamoDB Streams. With triggers, you can build applications that react to data modifications in DynamoDB tables.

**Topics**
+ [Tutorial #1: Using filters to process all events with Amazon DynamoDB and AWS Lambda using the AWS CLI](Streams.Lambda.Tutorial.md)
+ [Tutorial #2: Using filters to process some events with DynamoDB and Lambda](Streams.Lambda.Tutorial2.md)
+ [Best practices using DynamoDB Streams with Lambda](Streams.Lambda.BestPracticesWithDynamoDB.md)

If you enable DynamoDB Streams on a table, you can associate the stream Amazon Resource Name (ARN) with an AWS Lambda function that you write. All mutation actions on that DynamoDB table are then captured as records on the stream. Immediately after an item in the table is modified, a new record appears in the table's stream, and the trigger invokes your Lambda function.

**Note**  
If you subscribe more than two Lambda functions to one DynamoDB stream, read throttling might occur.

The [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/with-ddb.html) service polls the stream for new records four times per second. When new stream records are available, your Lambda function is synchronously invoked. You can subscribe up to two Lambda functions to the same DynamoDB stream.

The Lambda function can send a notification, initiate a workflow, or perform many other actions that you specify. You can write a Lambda function to simply copy each stream record to persistent storage, such as Amazon Simple Storage Service (Amazon S3), and create a permanent audit trail of write activity in your table. Or suppose that you have a mobile gaming app that writes to a `GameScores` table. Whenever the `TopScore` attribute of the `GameScores` table is updated, a corresponding stream record is written to the table's stream. This event could then trigger a Lambda function that posts a congratulatory message on a social media network. This function could also be written to ignore any stream records that are not updates to `GameScores`, or that do not modify the `TopScore` attribute.

If your function returns an error, Lambda retries the batch until it processes successfully or the data expires. You can also configure Lambda to retry with a smaller batch, limit the number of retries, discard records once they become too old, and other options.
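These options map to parameters on Lambda's event source mapping APIs. The following is a hedged Python sketch of the corresponding settings, using the parameter names as exposed by boto3's `update_event_source_mapping`; the UUID value is a placeholder, the numbers are examples only, and the actual call is left commented out:

```python
# Retry-tuning options for a DynamoDB stream event source mapping. Parameter
# names follow boto3's update_event_source_mapping; the UUID is a placeholder.
retry_params = {
    "UUID": "your-event-source-mapping-uuid",
    "MaximumRetryAttempts": 2,           # limit retries instead of retrying until the data expires
    "BisectBatchOnFunctionError": True,  # retry with smaller (halved) batches on error
    "MaximumRecordAgeInSeconds": 3600,   # discard records once they are older than one hour
}

# To apply the settings (requires AWS credentials):
# import boto3
# boto3.client("lambda").update_event_source_mapping(**retry_params)
```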

As a performance best practice, the Lambda function should be short-lived. To avoid introducing unnecessary processing delays, it also should not run complex logic. For a high-velocity stream in particular, it is better to trigger an asynchronous post-processing Step Functions workflow than a synchronous, long-running Lambda function.

You can use Lambda triggers across different AWS accounts by configuring a resource-based policy on the DynamoDB stream to grant cross-account read access to the Lambda function. To learn more about how to configure your stream to allow cross-account access, see [Share access with cross-account AWS Lambda functions](rbac-cross-account-access.md#shared-access-cross-acount-lambda) in the DynamoDB Developer Guide.

For more information about AWS Lambda, see the [AWS Lambda Developer Guide](https://docs.aws.amazon.com/lambda/latest/dg/).

# Tutorial #1: Using filters to process all events with Amazon DynamoDB and AWS Lambda using the AWS CLI
<a name="Streams.Lambda.Tutorial"></a>

 

In this tutorial, you will create an AWS Lambda trigger to process a stream from a DynamoDB table.

**Topics**
+ [Step 1: Create a DynamoDB table with a stream enabled](#Streams.Lambda.Tutorial.CreateTable)
+ [Step 2: Create a Lambda execution role](#Streams.Lambda.Tutorial.CreateRole)
+ [Step 3: Create an Amazon SNS topic](#Streams.Lambda.Tutorial.SNSTopic)
+ [Step 4: Create and test a Lambda function](#Streams.Lambda.Tutorial.LambdaFunction)
+ [Step 5: Create and test a trigger](#Streams.Lambda.Tutorial.CreateTrigger)

The scenario for this tutorial is Woofer, a simple social network. Woofer users communicate using *barks* (short text messages) that are sent to other Woofer users. The following diagram shows the components and workflow for this application.

![\[Woofer application workflow of a DynamoDB table, stream record, Lambda function, and Amazon SNS topic.\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/images/StreamsAndTriggers.png)


1. A user writes an item to a DynamoDB table (`BarkTable`). Each item in the table represents a bark.

1. A new stream record is written to reflect that a new item has been added to `BarkTable`.

1. The new stream record triggers an AWS Lambda function (`publishNewBark`).

1. If the stream record indicates that a new item was added to `BarkTable`, the Lambda function reads the data from the stream record and publishes a message to a topic in Amazon Simple Notification Service (Amazon SNS).

1. The message is received by subscribers to the Amazon SNS topic. (In this tutorial, the only subscriber is an email address.)

**Before You Begin**  
This tutorial uses the AWS Command Line Interface (AWS CLI). If you have not done so already, follow the instructions in the [AWS Command Line Interface User Guide](https://docs.aws.amazon.com/cli/latest/userguide/) to install and configure the AWS CLI.

## Step 1: Create a DynamoDB table with a stream enabled
<a name="Streams.Lambda.Tutorial.CreateTable"></a>

In this step, you create a DynamoDB table (`BarkTable`) to store all of the barks from Woofer users. The primary key is composed of `Username` (partition key) and `Timestamp` (sort key). Both of these attributes are of type string.

`BarkTable` has a stream enabled. Later in this tutorial, you create a trigger by associating an AWS Lambda function with the stream.

1. Enter the following command to create the table.

   ```
   aws dynamodb create-table \
       --table-name BarkTable \
       --attribute-definitions AttributeName=Username,AttributeType=S AttributeName=Timestamp,AttributeType=S \
       --key-schema AttributeName=Username,KeyType=HASH  AttributeName=Timestamp,KeyType=RANGE \
       --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
       --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES
   ```

1. In the output, look for the `LatestStreamArn`.

   ```
   ...
   "LatestStreamArn": "arn:aws:dynamodb:region:accountID:table/BarkTable/stream/timestamp"
   ...
   ```

   Make a note of the `region` and the `accountID`, because you need them for the other steps in this tutorial.
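For scripting, a small helper can pull the Region, account ID, and table name out of a stream ARN. The following is an illustrative Python sketch; the function name is ours, not part of the tutorial:

```python
# Split a DynamoDB stream ARN into its components. ARN format:
# arn:aws:dynamodb:region:accountID:table/TableName/stream/timestamp
def parse_stream_arn(stream_arn: str):
    parts = stream_arn.split(":", 5)
    region, account_id = parts[3], parts[4]
    table_name = parts[5].split("/")[1]
    return region, account_id, table_name

region, account_id, table = parse_stream_arn(
    "arn:aws:dynamodb:us-east-1:111122223333:table/BarkTable/stream/2016-11-16T20:42:48.104"
)
print(region, account_id, table)
```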

## Step 2: Create a Lambda execution role
<a name="Streams.Lambda.Tutorial.CreateRole"></a>

In this step, you create an AWS Identity and Access Management (IAM) role (`WooferLambdaRole`) and assign permissions to it. This role is used by the Lambda function that you create in [Step 4: Create and test a Lambda function](#Streams.Lambda.Tutorial.LambdaFunction). 

You also create a policy for the role. The policy contains all of the permissions that the Lambda function needs at runtime.

1. Create a file named `trust-relationship.json` with the following contents.

------
#### [ JSON ]


   ```
   {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "lambda.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
   }
   ```

------

1. Enter the following command to create `WooferLambdaRole`.

   ```
   aws iam create-role --role-name WooferLambdaRole \
       --path "/service-role/" \
       --assume-role-policy-document file://trust-relationship.json
   ```

1. Create a file named `role-policy.json` with the following contents. (The example uses Region `us-east-1` and account ID `111122223333`; replace them with your own AWS Region and account ID.)

------
#### [ JSON ]


   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "logs:CreateLogGroup",
                   "logs:CreateLogStream",
                   "logs:PutLogEvents"
               ],
               "Resource": "arn:aws:logs:us-east-1:111122223333:*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "dynamodb:DescribeStream",
                   "dynamodb:GetRecords",
                   "dynamodb:GetShardIterator",
                   "dynamodb:ListStreams"
               ],
               "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/BarkTable/stream/*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "sns:Publish"
               ],
               "Resource": [
                   "*"
               ]
           }
       ]
   }
   ```

------

   The policy has three statements that allow `WooferLambdaRole` to do the following:
   + Access Amazon CloudWatch Logs. The Lambda function writes diagnostics to CloudWatch Logs at runtime.
   + Read data from the DynamoDB stream for `BarkTable`.
   + Publish messages to Amazon SNS.

1. Enter the following command to attach the policy to `WooferLambdaRole`.

   ```
   aws iam put-role-policy --role-name WooferLambdaRole \
       --policy-name WooferLambdaRolePolicy \
       --policy-document file://role-policy.json
   ```

## Step 3: Create an Amazon SNS topic
<a name="Streams.Lambda.Tutorial.SNSTopic"></a>

In this step, you create an Amazon SNS topic (`wooferTopic`) and subscribe an email address to it. Your Lambda function uses this topic to publish new barks from Woofer users.

1. Enter the following command to create a new Amazon SNS topic.

   ```
   aws sns create-topic --name wooferTopic
   ```

1. Enter the following command to subscribe an email address to `wooferTopic`. (Replace `region` and `accountID` with your AWS Region and account ID, and replace `example@example.com` with a valid email address.)

   ```
   aws sns subscribe \
       --topic-arn arn:aws:sns:region:accountID:wooferTopic \
       --protocol email \
       --notification-endpoint example@example.com
   ```

1. Amazon SNS sends a confirmation message to your email address. Choose the **Confirm subscription** link in that message to complete the subscription process.

## Step 4: Create and test a Lambda function
<a name="Streams.Lambda.Tutorial.LambdaFunction"></a>

In this step, you create an AWS Lambda function (`publishNewBark`) to process stream records from `BarkTable`.

The `publishNewBark` function processes only the stream events that correspond to new items in `BarkTable`. The function reads data from such an event, and then invokes Amazon SNS to publish it.

1. Create a file named `publishNewBark.js` with the following contents. Replace `region` and `accountID` with your AWS Region and account ID.

   ```
   'use strict';
   var AWS = require("aws-sdk");
   var sns = new AWS.SNS();
   
   exports.handler = (event, context, callback) => {
   
       event.Records.forEach((record) => {
           console.log('Stream record: ', JSON.stringify(record, null, 2));
   
           if (record.eventName == 'INSERT') {
               var who = JSON.stringify(record.dynamodb.NewImage.Username.S);
               var when = JSON.stringify(record.dynamodb.NewImage.Timestamp.S);
               var what = JSON.stringify(record.dynamodb.NewImage.Message.S);
               var params = {
                   Subject: 'A new bark from ' + who,
                   Message: 'Woofer user ' + who + ' barked the following at ' + when + ':\n\n ' + what,
                   TopicArn: 'arn:aws:sns:region:accountID:wooferTopic'
               };
               sns.publish(params, function(err, data) {
                   if (err) {
                       console.error("Unable to send message. Error JSON:", JSON.stringify(err, null, 2));
                   } else {
                       console.log("Results from sending message: ", JSON.stringify(data, null, 2));
                   }
               });
           }
       });
       callback(null, `Successfully processed ${event.Records.length} records.`);
   };
   ```

1. Create a zip file to contain `publishNewBark.js`. If you have the zip command-line utility, you can enter the following command to do this.

   ```
   zip publishNewBark.zip publishNewBark.js
   ```

1. When you create the Lambda function, you specify the Amazon Resource Name (ARN) for `WooferLambdaRole`, which you created in [Step 2: Create a Lambda execution role](#Streams.Lambda.Tutorial.CreateRole). Enter the following command to retrieve this ARN.

   ```
   aws iam get-role --role-name WooferLambdaRole
   ```

   In the output, look for the ARN for `WooferLambdaRole`.

   ```
   ...
   "Arn": "arn:aws:iam::region:role/service-role/WooferLambdaRole"
   ...
   ```

   Enter the following command to create the Lambda function. Replace *roleARN* with the ARN for `WooferLambdaRole`.

   ```
   aws lambda create-function \
       --region region \
       --function-name publishNewBark \
       --zip-file fileb://publishNewBark.zip \
       --role roleARN \
       --handler publishNewBark.handler \
       --timeout 5 \
       --runtime nodejs16.x
   ```

1. Now test `publishNewBark` to verify that it works. To do this, you provide input that resembles a real record from DynamoDB Streams.

   Create a file named `payload.json` with the following contents. Replace `region` and `accountID` with your AWS Region and account ID.

   ```
   {
       "Records": [
           {
               "eventID": "7de3041dd709b024af6f29e4fa13d34c",
               "eventName": "INSERT",
               "eventVersion": "1.1",
               "eventSource": "aws:dynamodb",
               "awsRegion": "region",
               "dynamodb": {
                   "ApproximateCreationDateTime": 1479499740,
                   "Keys": {
                       "Timestamp": {
                           "S": "2016-11-18:12:09:36"
                       },
                       "Username": {
                           "S": "John Doe"
                       }
                   },
                   "NewImage": {
                       "Timestamp": {
                           "S": "2016-11-18:12:09:36"
                       },
                       "Message": {
                           "S": "This is a bark from the Woofer social network"
                       },
                       "Username": {
                           "S": "John Doe"
                       }
                   },
                   "SequenceNumber": "13021600000000001596893679",
                   "SizeBytes": 112,
                   "StreamViewType": "NEW_IMAGE"
               },
               "eventSourceARN": "arn:aws:dynamodb:region:accountID:table/BarkTable/stream/2016-11-16T20:42:48.104"
           }
       ]
   }
   ```

   Enter the following command to test the `publishNewBark` function.

   ```
   aws lambda invoke --function-name publishNewBark --payload file://payload.json --cli-binary-format raw-in-base64-out output.txt
   ```

   If the test was successful, you will see the following output.

   ```
   {
       "StatusCode": 200,
       "ExecutedVersion": "$LATEST"
   }
   ```

   In addition, the `output.txt` file will contain the following text.

   ```
   "Successfully processed 1 records."
   ```

   You will also receive a new email message within a few minutes.
**Note**  
AWS Lambda writes diagnostic information to Amazon CloudWatch Logs. If you encounter errors with your Lambda function, you can use these diagnostics for troubleshooting purposes:  
Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).
In the navigation pane, choose **Logs**.
Choose the following log group: `/aws/lambda/publishNewBark`
Choose the latest log stream to view the output (and errors) from the function.
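The core decision the handler makes, publish only for `INSERT` events, can be sketched in a few lines of Python. This is an illustration of the same filtering logic; the function name and return value are ours, and the tutorial's actual handler is the Node.js code above:

```python
# Illustrative sketch of the handler's filtering logic: only INSERT events
# produce a notification message; updates and deletes are ignored.
def bark_notification(record):
    if record.get("eventName") != "INSERT":
        return None  # not a new item; nothing to publish
    image = record["dynamodb"]["NewImage"]
    who = image["Username"]["S"]
    when = image["Timestamp"]["S"]
    what = image["Message"]["S"]
    return "Woofer user " + who + " barked the following at " + when + ":\n\n " + what
```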

## Step 5: Create and test a trigger
<a name="Streams.Lambda.Tutorial.CreateTrigger"></a>

In [Step 4: Create and test a Lambda function](#Streams.Lambda.Tutorial.LambdaFunction), you tested the Lambda function to ensure that it ran correctly. In this step, you create a *trigger* by associating the Lambda function (`publishNewBark`) with an event source (the `BarkTable` stream).

1. When you create the trigger, you need to specify the ARN for the `BarkTable` stream. Enter the following command to retrieve this ARN.

   ```
   aws dynamodb describe-table --table-name BarkTable
   ```

   In the output, look for the `LatestStreamArn`.

   ```
   ...
    "LatestStreamArn": "arn:aws:dynamodb:region:accountID:table/BarkTable/stream/timestamp"
   ...
   ```

1. Enter the following command to create the trigger. Replace `streamARN` with the actual stream ARN.

   ```
   aws lambda create-event-source-mapping \
       --region region \
       --function-name publishNewBark \
       --event-source-arn streamARN \
       --batch-size 1 \
       --starting-position TRIM_HORIZON
   ```

1. Test the trigger. Enter the following command to add an item to `BarkTable`.

   ```
   aws dynamodb put-item \
       --table-name BarkTable \
       --item Username={S="Jane Doe"},Timestamp={S="2016-11-18:14:32:17"},Message={S="Testing...1...2...3"}
   ```

   You should receive a new email message within a few minutes.

1. Open the DynamoDB console and add a few more items to `BarkTable`. You must specify values for the `Username` and `Timestamp` attributes. (You should also specify a value for `Message`, even though it is not required.) You should receive a new email message for each item you add to `BarkTable`.

   The Lambda function processes only new items that you add to `BarkTable`. If you update or delete an item in the table, the function does nothing.

**Note**  
AWS Lambda writes diagnostic information to Amazon CloudWatch Logs. If you encounter errors with your Lambda function, you can use these diagnostics for troubleshooting:  
1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).
1. In the navigation pane, choose **Logs**.
1. Choose the log group `/aws/lambda/publishNewBark`.
1. Choose the latest log stream to view the output (and errors) from the function.

# Tutorial #2: Using filters to process some events with DynamoDB and Lambda
<a name="Streams.Lambda.Tutorial2"></a>

In this tutorial, you will create an AWS Lambda trigger to process only some events in a stream from a DynamoDB table.

**Topics**
+ [Putting it all together - CloudFormation](#Streams.Lambda.Tutorial2.Cloudformation)
+ [Putting it all together - CDK](#Streams.Lambda.Tutorial2.CDK)

With [Lambda event filtering](https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventfiltering.html), you can use filter expressions to control which events Lambda sends to your function for processing. You can configure up to five different filters per DynamoDB stream event source. If you are using batching windows, Lambda applies the filter criteria to each new event to determine whether it should be included in the current batch.

Filters are applied through structures called `FilterCriteria`. The three main attributes of `FilterCriteria` are `metadata properties`, `data properties`, and `filter patterns`.

Here is an example structure of a DynamoDB Streams event:

```
{
  "eventID": "c9fbe7d0261a5163fcb6940593e41797",
  "eventName": "INSERT",
  "eventVersion": "1.1",
  "eventSource": "aws:dynamodb",
  "awsRegion": "us-east-2",
  "dynamodb": {
    "ApproximateCreationDateTime": 1664559083.0,
    "Keys": {
      "SK": { "S": "PRODUCT#CHOCOLATE#DARK#1000" },
      "PK": { "S": "COMPANY#1000" }
    },
    "NewImage": {
      "quantity": { "N": "50" },
      "company_id": { "S": "1000" },
      "fabric": { "S": "Florida Chocolates" },
      "price": { "N": "15" },
      "stores": { "N": "5" },
      "product_id": { "S": "1000" },
      "SK": { "S": "PRODUCT#CHOCOLATE#DARK#1000" },
      "PK": { "S": "COMPANY#1000" },
      "state": { "S": "FL" },
      "type": { "S": "" }
    },
    "SequenceNumber": "700000000000888747038",
    "SizeBytes": 174,
    "StreamViewType": "NEW_AND_OLD_IMAGES"
  },
  "eventSourceARN": "arn:aws:dynamodb:us-east-2:111122223333:table/chocolate-table-StreamsSampleDDBTable-LUOI6UXQY7J1/stream/2022-09-30T17:05:53.209"
}
```

The `metadata properties` are the fields of the event object. In the case of DynamoDB Streams, the `metadata properties` are fields like `dynamodb` or `eventName`. 

The `data properties` are the fields of the event body. To filter on `data properties`, make sure to nest them in `FilterCriteria` under the proper key. For DynamoDB event sources, the data key is `NewImage` or `OldImage`.

Finally, the filter rules define the filter expressions that you want to apply to specific properties. Here are some examples:


| Comparison operator | Example | Rule syntax (Partial) | 
| --- | --- | --- | 
|  Null  |  Product Type is null  |  `{ "product_type": { "S": null } } `  | 
|  Empty  |  Product name is empty  |  `{ "product_name": { "S": [ ""] } } `  | 
|  Equals  |  State equals Florida  |  `{ "state": { "S": ["FL"] } } `  | 
|  And  |  Product state equals Florida and product category Chocolate  |  `{ "state": { "S": ["FL"] } , "category": { "S": [ "CHOCOLATE"] } } `  | 
|  Or  |  Product state is Florida or California  |  `{ "state": { "S": ["FL","CA"] } } `  | 
|  Not  |  Product state is not Florida  |  `{"state": {"S": [{"anything-but": ["FL"]}]}}`  | 
|  Exists  |  Product Homemade exists  |  `{"homemade": {"S": [{"exists": true}]}}`  | 
|  Does not exist  |  Product Homemade does not exist  |  `{"homemade": {"S": [{"exists": false}]}}`  | 
|  Begins with  |  PK begins with COMPANY  |  `{"PK": {"S": [{"prefix": "COMPANY"}]}}`  | 

You can specify up to five event-filtering patterns for a Lambda function. Each of those patterns is evaluated against the event as a logical OR: if you configure two filters named `Filter_One` and `Filter_Two`, the Lambda function processes events that match `Filter_One` OR `Filter_Two`.

**Note**  
The [Lambda event filtering](https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventfiltering.html) page describes options for filtering and comparing numeric values. However, these don't apply to DynamoDB filter events, because numbers in DynamoDB are stored as strings. For example, in `"quantity": { "N": "50" }`, we know it's a number because of the `"N"` type descriptor.
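
To build intuition for these rule semantics, here is a small Python sketch of a simplified matcher. It models only the string (`"S"`) operators from the table above (equals, `prefix`, `anything-but`, `exists`); the names `matches_rule`, `matches_pattern`, and `matches_any` are our own, not part of any AWS SDK, and Lambda's real matching engine is more general:

```python
def matches_rule(rules, value):
    """Check a string value against a list of filter rules (simplified).

    A rule is either a literal string (equals) or a dict such as
    {"prefix": ...}, {"anything-but": [...]}, or {"exists": True/False}.
    """
    for rule in rules:
        if isinstance(rule, str):
            if value == rule:
                return True
        elif isinstance(rule, dict):
            if "prefix" in rule and value is not None and value.startswith(rule["prefix"]):
                return True
            if "anything-but" in rule and value is not None and value not in rule["anything-but"]:
                return True
            if "exists" in rule and rule["exists"] == (value is not None):
                return True
    return False


def matches_pattern(pattern, image):
    # Within a single pattern, every listed field must match (logical AND).
    return all(matches_rule(spec["S"], image.get(field, {}).get("S"))
               for field, spec in pattern.items())


def matches_any(patterns, image):
    # Separate patterns combine as a logical OR.
    return any(matches_pattern(p, image) for p in patterns)


new_image = {"state": {"S": "FL"}, "PK": {"S": "COMPANY#1000"}}

print(matches_pattern({"state": {"S": ["FL"]}}, new_image))                      # True
print(matches_pattern({"state": {"S": [{"anything-but": ["FL"]}]}}, new_image))  # False
print(matches_any([{"state": {"S": ["CA"]}},
                   {"PK": {"S": [{"prefix": "COMPANY"}]}}], new_image))          # True
```

Note how fields within one pattern combine with AND, while separate patterns combine with OR, mirroring the behavior described above.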

## Putting it all together - CloudFormation
<a name="Streams.Lambda.Tutorial2.Cloudformation"></a>

To show event filtering in practice, here is a sample CloudFormation template. This template generates a simple DynamoDB table with a partition key `PK` and a sort key `SK`, with Amazon DynamoDB Streams enabled. It creates a Lambda function and a minimal Lambda execution role that allows the function to write logs to Amazon CloudWatch and read events from the DynamoDB stream. It also adds the event source mapping between the stream and the Lambda function, so the function runs every time there is an event in the stream.

```
AWSTemplateFormatVersion: "2010-09-09"

Description: >-
  Sample application that presents AWS Lambda event source filtering
  with Amazon DynamoDB Streams.

Resources:
  StreamsSampleDDBTable:
    Type: AWS::DynamoDB::Table
    Properties:
      AttributeDefinitions:
        - AttributeName: "PK"
          AttributeType: "S"
        - AttributeName: "SK"
          AttributeType: "S"
      KeySchema:
        - AttributeName: "PK"
          KeyType: "HASH"
        - AttributeName: "SK"
          KeyType: "RANGE"
      StreamSpecification:
        StreamViewType: "NEW_AND_OLD_IMAGES"
      ProvisionedThroughput:
        ReadCapacityUnits: 5
        WriteCapacityUnits: 5

  LambdaExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action:
              - sts:AssumeRole
      Path: "/"
      Policies:
        - PolicyName: root
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                Resource: arn:aws:logs:*:*:*
              - Effect: Allow
                Action:
                  - dynamodb:DescribeStream
                  - dynamodb:GetRecords
                  - dynamodb:GetShardIterator
                  - dynamodb:ListStreams
                Resource: !GetAtt StreamsSampleDDBTable.StreamArn

  EventSourceDDBTableStream:
    Type: AWS::Lambda::EventSourceMapping
    Properties:
      BatchSize: 1
      Enabled: True
      EventSourceArn: !GetAtt StreamsSampleDDBTable.StreamArn
      FunctionName: !GetAtt ProcessEventLambda.Arn
      StartingPosition: LATEST

  ProcessEventLambda:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.9
      Timeout: 300
      Handler: index.handler
      Role: !GetAtt LambdaExecutionRole.Arn
      Code:
        ZipFile: |
          import logging

          LOGGER = logging.getLogger()
          LOGGER.setLevel(logging.INFO)

          def handler(event, context):
            LOGGER.info('Received Event: %s', event)
            for rec in event['Records']:
              LOGGER.info('Record: %s', rec)

Outputs:
  StreamsSampleDDBTable:
    Description: DynamoDB Table ARN created for this example
    Value: !GetAtt StreamsSampleDDBTable.Arn
  StreamARN:
    Description: DynamoDB Table ARN created for this example
    Value: !GetAtt StreamsSampleDDBTable.StreamArn
```

After you deploy this CloudFormation template, you can insert the following DynamoDB item:

```
{
 "PK": "COMPANY#1000",
 "SK": "PRODUCT#CHOCOLATE#DARK",
 "company_id": "1000",
 "type": "",
 "state": "FL",
 "stores": 5,
 "price": 15,
 "quantity": 50,
 "fabric": "Florida Chocolates"
}
```

Thanks to the simple Lambda function included inline in this CloudFormation template, you will see events like the following in the Amazon CloudWatch log group for the function:

```
{
  "eventID": "c9fbe7d0261a5163fcb6940593e41797",
  "eventName": "INSERT",
  "eventVersion": "1.1",
  "eventSource": "aws:dynamodb",
  "awsRegion": "us-east-2",
  "dynamodb": {
    "ApproximateCreationDateTime": 1664559083.0,
    "Keys": {
      "SK": { "S": "PRODUCT#CHOCOLATE#DARK#1000" },
      "PK": { "S": "COMPANY#1000" }
    },
    "NewImage": {
      "quantity": { "N": "50" },
      "company_id": { "S": "1000" },
      "fabric": { "S": "Florida Chocolates" },
      "price": { "N": "15" },
      "stores": { "N": "5" },
      "product_id": { "S": "1000" },
      "SK": { "S": "PRODUCT#CHOCOLATE#DARK#1000" },
      "PK": { "S": "COMPANY#1000" },
      "state": { "S": "FL" },
      "type": { "S": "" }
    },
    "SequenceNumber": "700000000000888747038",
    "SizeBytes": 174,
    "StreamViewType": "NEW_AND_OLD_IMAGES"
  },
  "eventSourceARN": "arn:aws:dynamodb:us-east-2:111122223333:table/chocolate-table-StreamsSampleDDBTable-LUOI6UXQY7J1/stream/2022-09-30T17:05:53.209"
}
```

**Filter Examples**
+ **Only products that match a given state**

This example modifies the CloudFormation template to include a filter that matches all products from Florida, using the abbreviation “FL”.

```
EventSourceDDBTableStream:
    Type: AWS::Lambda::EventSourceMapping
    Properties:
      BatchSize: 1
      Enabled: True
      FilterCriteria:
        Filters:
          - Pattern: '{ "dynamodb": { "NewImage": { "state": { "S": ["FL"] } } } }'
      EventSourceArn: !GetAtt StreamsSampleDDBTable.StreamArn
      FunctionName: !GetAtt ProcessEventLambda.Arn
      StartingPosition: LATEST
```

Once you redeploy the stack, you can add the following DynamoDB item to the table. Note that it will not appear in the Lambda function logs, because the product in this example is from California.

```
{
 "PK": "COMPANY#1000",
 "SK": "PRODUCT#CHOCOLATE#DARK#1000",
 "company_id": "1000",
 "fabric": "Florida Chocolates",
 "price": 15,
 "product_id": "1000",
 "quantity": 50,
 "state": "CA",
 "stores": 5,
 "type": ""
}
```
+ **Only the items that start with specific values in the PK and SK**

This example modifies the CloudFormation template to include the following condition:

```
EventSourceDDBTableStream:
    Type: AWS::Lambda::EventSourceMapping
    Properties:
      BatchSize: 1
      Enabled: True
      FilterCriteria:
        Filters:
          - Pattern: '{"dynamodb": {"Keys": {"PK": { "S": [{ "prefix": "COMPANY" }] },"SK": { "S": [{ "prefix": "PRODUCT" }] }}}}'
      EventSourceArn: !GetAtt StreamsSampleDDBTable.StreamArn
      FunctionName: !GetAtt ProcessEventLambda.Arn
      StartingPosition: LATEST
```

Notice that an AND condition requires both conditions to be inside a single pattern, where the `PK` and `SK` keys appear in the same expression, separated by a comma.

+ **Either the items start with specific values in the PK and SK, or are from a certain state**

This example modifies the CloudFormation template to include the following conditions:

```
  EventSourceDDBTableStream:
    Type: AWS::Lambda::EventSourceMapping
    Properties:
      BatchSize: 1
      Enabled: True
      FilterCriteria:
        Filters:
          - Pattern: '{"dynamodb": {"Keys": {"PK": { "S": [{ "prefix": "COMPANY" }] },"SK": { "S": [{ "prefix": "PRODUCT" }] }}}}'
          - Pattern: '{ "dynamodb": { "NewImage": { "state": { "S": ["FL"] } } } }'
      EventSourceArn: !GetAtt StreamsSampleDDBTable.StreamArn
      FunctionName: !GetAtt ProcessEventLambda.Arn
      StartingPosition: LATEST
```

Notice that the OR condition is expressed by adding additional patterns to the filter section.

## Putting it all together - CDK
<a name="Streams.Lambda.Tutorial2.CDK"></a>

The following sample CDK project walks through the event filtering functionality. Before working with this CDK project, you need to [install the prerequisites](https://docs.aws.amazon.com/cdk/v2/guide/work-with.html), including [running the preparation scripts](https://docs.aws.amazon.com/cdk/v2/guide/work-with-cdk-python.html).

**Create a CDK project**

First, create a new AWS CDK project by invoking `cdk init` in an empty directory.

```
mkdir ddb_filters
cd ddb_filters
cdk init app --language python
```

The `cdk init` command uses the name of the project folder to name various elements of the project, including classes, subfolders, and files. Any hyphens in the folder name are converted to underscores. The name should otherwise follow the form of a Python identifier. For example, it should not start with a number or contain spaces.

To work with the new project, activate its virtual environment. This allows the project's dependencies to be installed locally in the project folder, instead of globally.

```
source .venv/bin/activate
python -m pip install -r requirements.txt
```

**Note**  
You may recognize this as the Mac/Linux command to activate a virtual environment. The Python templates include a batch file, `source.bat`, that allows the same command to be used on Windows. The traditional Windows command `.venv\Scripts\activate.bat` works too. If you initialized your AWS CDK project using AWS CDK Toolkit v1.70.0 or earlier, your virtual environment is in the `.env` directory instead of `.venv`. 

**Base Infrastructure**

Open the file `./ddb_filters/ddb_filters_stack.py` with your preferred text editor. This file was auto generated when you created the AWS CDK project. 

Next, add the functions `_create_ddb_table` and `_set_ddb_trigger_function`. These functions create a DynamoDB table with partition key `PK` and sort key `SK` in on-demand billing mode, with Amazon DynamoDB Streams enabled and configured to emit both new and old images.

The Lambda function will be stored in the folder `lambda` under the file `app.py`, which will be created later. The function receives an environment variable, `APP_TABLE_NAME`, containing the name of the Amazon DynamoDB table created by this stack. In the same function, we grant stream read permissions to the Lambda function and subscribe it to the DynamoDB stream as its event source.

At the end of the file in the `__init__` method, you will call the respective constructs to initialize them in the stack. For bigger projects that require additional components and services, it might be best to define these constructs outside the base stack. 

```
import os
import json

import aws_cdk as cdk
from aws_cdk import (
    Stack,
    aws_lambda as _lambda,
    aws_dynamodb as dynamodb,
)
from constructs import Construct


class DdbFiltersStack(Stack):

    def _create_ddb_table(self):
        dynamodb_table = dynamodb.Table(
            self,
            "AppTable",
            partition_key=dynamodb.Attribute(
                name="PK", type=dynamodb.AttributeType.STRING
            ),
            sort_key=dynamodb.Attribute(
                name="SK", type=dynamodb.AttributeType.STRING),
            billing_mode=dynamodb.BillingMode.PAY_PER_REQUEST,
            stream=dynamodb.StreamViewType.NEW_AND_OLD_IMAGES,
            removal_policy=cdk.RemovalPolicy.DESTROY,
        )

        cdk.CfnOutput(self, "AppTableName", value=dynamodb_table.table_name)
        return dynamodb_table

    def _set_ddb_trigger_function(self, ddb_table):
        events_lambda = _lambda.Function(
            self,
            "LambdaHandler",
            runtime=_lambda.Runtime.PYTHON_3_9,
            code=_lambda.Code.from_asset("lambda"),
            handler="app.handler",
            environment={
                "APP_TABLE_NAME": ddb_table.table_name,
            },
        )

        ddb_table.grant_stream_read(events_lambda)

        event_subscription = _lambda.CfnEventSourceMapping(
            scope=self,
            id="companyInsertsOnlyEventSourceMapping",
            function_name=events_lambda.function_name,
            event_source_arn=ddb_table.table_stream_arn,
            maximum_batching_window_in_seconds=1,
            starting_position="LATEST",
            batch_size=1,
        )

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        ddb_table = self._create_ddb_table()
        self._set_ddb_trigger_function(ddb_table)
```

Now we will create a very simple Lambda function that prints incoming events to Amazon CloudWatch Logs. To do this, create a new folder called `lambda` containing a file `app.py`.

```
mkdir lambda
touch lambda/app.py
```

Using your favorite text editor, add the following content to the `lambda/app.py` file:

```
import logging

LOGGER = logging.getLogger()
LOGGER.setLevel(logging.INFO)


def handler(event, context):
    LOGGER.info('Received Event: %s', event)
    for rec in event['Records']:
        LOGGER.info('Record: %s', rec)
```
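
Before deploying, you can exercise the handler locally with a stub event. The sketch below re-declares the same handler, attaches an in-memory logging handler (the `ListHandler` class is our own test scaffolding, not part of any AWS tooling), and invokes it with a minimal stand-in for a DynamoDB Streams payload:

```python
import logging

LOGGER = logging.getLogger()
LOGGER.setLevel(logging.INFO)


def handler(event, context):
    # Same handler as in app.py above.
    LOGGER.info('Received Event: %s', event)
    for rec in event['Records']:
        LOGGER.info('Record: %s', rec)


# Test scaffolding: collect log messages in a list instead of CloudWatch.
captured = []


class ListHandler(logging.Handler):
    def emit(self, record):
        captured.append(record.getMessage())


LOGGER.addHandler(ListHandler())

# Minimal stand-in for a stream event with one INSERT record.
stub_event = {"Records": [{"eventName": "INSERT", "eventID": "abc123"}]}
handler(stub_event, context=None)

print(len(captured))  # 2: the "Received Event" line plus one line per record
```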

From the `ddb_filters` project folder, run the following command to deploy the sample application:

```
cdk deploy
```

At some point, you will be asked to confirm that you want to deploy the solution. Accept the changes by typing `y`.

```
├───┼──────────────────────────────┼────────────────────────────────────────────────────────────────────────────────┤
│ + │ ${LambdaHandler/ServiceRole} │ arn:${AWS::Partition}:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole │
└───┴──────────────────────────────┴────────────────────────────────────────────────────────────────────────────────┘

Do you wish to deploy these changes (y/n)? y

...

✨  Deployment time: 67.73s

Outputs:
DdbFiltersStack.AppTableName = DdbFiltersStack-AppTable815C50BC-1M1W7209V5YPP
Stack ARN:
arn:aws:cloudformation:us-east-2:111122223333:stack/DdbFiltersStack/66873140-40f3-11ed-8e93-0a74f296a8f6
```

Once the changes are deployed, open your AWS console and add one item to your table. 

```
{
 "PK": "COMPANY#1000",
 "SK": "PRODUCT#CHOCOLATE#DARK",
 "company_id": "1000",
 "type": "",
 "state": "FL",
 "stores": 5,
 "price": 15,
 "quantity": 50,
 "fabric": "Florida Chocolates"
}
```

The CloudWatch logs should now contain all the information from this entry. 

**Filter Examples**
+ **Only products that match a given state**

Open the file `ddb_filters/ddb_filters/ddb_filters_stack.py`, and modify it to include a filter that matches all products whose state equals “FL”. Add this just below the `event_subscription` assignment.

```
event_subscription.add_property_override(
    property_path="FilterCriteria",
    value={
        "Filters": [
            {
                "Pattern": json.dumps(
                    {"dynamodb": {"NewImage": {"state": {"S": ["FL"]}}}}
                )
            },
        ]
    },
)
```
+ **Only the items that start with specific values in the PK and SK**

Modify the python script to include the following condition:

```
event_subscription.add_property_override(
    property_path="FilterCriteria",
    value={
        "Filters": [
            {
                "Pattern": json.dumps(
                    {
                        "dynamodb": {
                            "Keys": {
                                "PK": {"S": [{"prefix": "COMPANY"}]},
                                "SK": {"S": [{"prefix": "PRODUCT"}]},
                            }
                        }
                    }
                )
            },
        ]
    },
)
```
+ **Either the items start with specific values in the PK and SK, or are from a certain state**

Modify the python script to include the following conditions:

```
event_subscription.add_property_override(
    property_path="FilterCriteria",
    value={
        "Filters": [
            {
                "Pattern": json.dumps(
                    {
                        "dynamodb": {
                            "Keys": {
                                "PK": {"S": [{"prefix": "COMPANY"}]},
                                "SK": {"S": [{"prefix": "PRODUCT"}]},
                            }
                        }
                    }
                )
            },
            {
                "Pattern": json.dumps(
                    {"dynamodb": {"NewImage": {"state": {"S": ["FL"]}}}}
                )
            },
        ]
    },
)
```

Notice that the OR condition is added by adding more elements to the Filters array.

**Cleanup**

From the base of your working directory, run `cdk destroy`. You will be asked to confirm the resource deletion:

```
cdk destroy
Are you sure you want to delete: DdbFiltersStack (y/n)? y
```

# Best practices using DynamoDB Streams with Lambda
<a name="Streams.Lambda.BestPracticesWithDynamoDB"></a>

An AWS Lambda function runs within a *container*—an execution environment that is isolated from other functions. When you run a function for the first time, AWS Lambda creates a new container and begins executing the function's code.

A Lambda function has a *handler* that is run once per invocation. The handler contains the main business logic for the function. For example, the Lambda function shown in [Step 4: Create and test a Lambda function](Streams.Lambda.Tutorial.md#Streams.Lambda.Tutorial.LambdaFunction) has a handler that can process records in a DynamoDB stream. 

You can also provide initialization code that runs one time only—after the container is created, but before AWS Lambda runs the handler for the first time. The Lambda function shown in [Step 4: Create and test a Lambda function](Streams.Lambda.Tutorial.md#Streams.Lambda.Tutorial.LambdaFunction) has initialization code that imports the SDK for JavaScript in Node.js, and creates a client for Amazon SNS. These objects should only be defined once, outside of the handler.

After the function runs, AWS Lambda might opt to reuse the container for subsequent invocations of the function. In this case, your function handler might be able to reuse the resources that you defined in your initialization code. (You cannot control how long AWS Lambda will retain the container, or whether the container will be reused at all.)

For DynamoDB triggers using AWS Lambda, we recommend the following:
+ AWS service clients should be instantiated in the initialization code, not in the handler. This allows AWS Lambda to reuse existing connections, for the duration of the container's lifetime.
+ In general, you do not need to explicitly manage connections or implement connection pooling because AWS Lambda manages this for you.
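
The effect of container reuse can be simulated locally with plain counters (no real Lambda or AWS clients involved): module-level initialization runs once per container, while the handler runs once per invocation.

```python
# Simulated "container": module-level code runs once, like Lambda
# initialization code where you would create AWS service clients.
init_count = 0
invocation_count = 0

init_count += 1  # expensive setup (e.g. client creation) happens here, once


def handler(event, context):
    # Per-invocation work reuses whatever initialization produced.
    global invocation_count
    invocation_count += 1
    return {"init_count": init_count, "invocation_count": invocation_count}


# Three invocations against the same warm "container".
for _ in range(3):
    result = handler({}, None)

print(result)  # {'init_count': 1, 'invocation_count': 3}
```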

A Lambda consumer of a DynamoDB stream doesn't guarantee exactly-once delivery, so the same record may occasionally be delivered more than once. Make sure your Lambda function code is idempotent so that duplicate processing doesn't cause unexpected issues.
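
One minimal way to make processing idempotent is to key each unit of work off the record's `eventID`, which stays the same across redeliveries. This in-memory sketch shows the shape of the check; a production function would use a durable store (for example, a conditional write to a DynamoDB table) rather than a Python set:

```python
processed_ids = set()  # in production: a durable store, not process memory


def process_record(record):
    """Perform the record's side effect at most once, keyed by eventID."""
    event_id = record["eventID"]
    if event_id in processed_ids:
        return False  # duplicate delivery: skip the side effect
    processed_ids.add(event_id)
    # ... the actual side effect goes here (send a message, write a row, etc.)
    return True


record = {"eventID": "c9fbe7d0261a5163fcb6940593e41797", "eventName": "INSERT"}
print(process_record(record))  # True: first delivery is processed
print(process_record(record))  # False: redelivery is skipped
```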

For more information, see [Best practices for working with AWS Lambda functions](https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html) in the *AWS Lambda Developer Guide*.

# DynamoDB Streams and Apache Flink
<a name="StreamsApacheFlink.xml"></a>

You can consume Amazon DynamoDB Streams records with Apache Flink. With [Amazon Managed Service for Apache Flink](https://aws.amazon.com/managed-service-apache-flink/), you can transform and analyze streaming data in real time using Apache Flink. Apache Flink is an open-source stream processing framework for processing real-time data. The Amazon DynamoDB Streams connector for Apache Flink simplifies building and managing Apache Flink workloads and allows you to integrate applications with other AWS services.

Amazon Managed Service for Apache Flink helps you to quickly build end-to-end stream processing applications for log analytics, clickstream analytics, Internet of Things (IoT), ad tech, gaming, and more. The four most common use cases are streaming extract-transform-load (ETL), event-driven applications, responsive real-time analytics, and interactive querying of data streams. For more information on consuming Amazon DynamoDB Streams with Apache Flink, see [Amazon DynamoDB Streams Connector](https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/dynamodb/).