

# Using agentic AI with DynamoDB


Amazon DynamoDB is a serverless, fully managed, distributed NoSQL database with single-digit millisecond performance at any scale. DynamoDB is optimized for high-throughput workloads, and you can extend its capabilities by integrating with generative AI models. Using generative AI models, you can work with data stored in DynamoDB tables in real time and build applications that are contextually aware and highly personalized. You can also enhance the end user experience by fully leveraging your business, user, and application data to customize your generative AI solutions.

For more information about gen AI and the solutions AWS provides to build gen AI applications, see [Transform your business with generative AI](https://aws.amazon.com/ai/generative-ai/).

**Topics**
+ [Generative AI use cases for DynamoDB](#gen-ai-use-case-ddb)
+ [Generative AI blogs for DynamoDB](#gen-ai-blogs)
+ [Leveraging DynamoDB Zero-ETL integration with OpenSearch Service](ddb-and-amazon-bedrock.md)
+ [Using DynamoDB as a checkpoint store for LangGraph agents](ddb-langgraph-checkpoint.md)

## Generative AI use cases for DynamoDB


DynamoDB is widely used in AI-powered conversational applications, such as chatbots and call centers built with a [Foundation Model (FM)](https://aws.amazon.com/what-is/foundation-models/). You can access FMs through Amazon Bedrock, Amazon SageMaker AI, or other model providers. Such applications commonly use DynamoDB to improve personalization and enhance the user experience across three data patterns: application data, business data, and user data. Some examples of these data patterns are as follows:
+ Storage of application data, such as chat message history, through integrations with [LangChain](https://js.langchain.com/v0.1/docs/integrations/chat_memory/dynamodb/), [LlamaIndex](https://docs.llamaindex.ai/en/stable/examples/docstore/DynamoDBDocstoreDemo/), or custom code. This context enhances the user experience by allowing the model to *converse* back and forth with the user.
+ Creation of a customized user experience by leveraging business data, such as inventory, pricing, and documentation.
+ Application of user data, such as web history, past orders, and user preferences, to provide personalized answers.
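
As a sketch of the first pattern, chat message history maps naturally onto a DynamoDB item keyed by session and timestamp. The item shape below is illustrative only; it is not a schema required by DynamoDB, LangChain, or LlamaIndex:

```python
import json
import time

def build_message_item(session_id, role, text):
    """Build an illustrative DynamoDB item for one chat message.

    The PK/SK layout here is a common single-table convention,
    not a format mandated by any library.
    """
    return {
        "PK": f"SESSION#{session_id}",           # partition key: one conversation
        "SK": f"MSG#{int(time.time() * 1000)}",  # sort key: millisecond timestamp keeps messages ordered
        "role": role,
        "content": text,
    }

item = build_message_item("abc-123", "user", "What is my account balance?")
print(json.dumps(item, indent=2))
# A real application would then write this with boto3, for example
# table.put_item(Item=item), and read history back with a Query on PK.
```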

For instance, an insurance company can build a chatbot using DynamoDB to give its [Retrieval-Augmented Generation (RAG)](https://docs.aws.amazon.com/sagemaker/latest/dg/jumpstart-foundation-models-customize-rag.html) based gen AI model access to near real-time data. Examples of such data are real-time mortgage rates, product pricing, compliant/standard contract copy, user web history, and user preferences. Combining DynamoDB with RAG adds in-depth, up-to-date information about insurance products and user data. This enriches the prompts and answers to provide end users with an accurate, personalized, and near real-time experience.
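
The enrichment step above can be sketched in a few lines. In this hypothetical example, `get_user_profile` stands in for a DynamoDB `GetItem` call and `current_rates` stands in for retrieved business data; all names are illustrative:

```python
# Illustrative RAG-style prompt enrichment. In production, the profile and
# rates would come from DynamoDB reads rather than in-memory dictionaries.
def get_user_profile(user_id):
    # Stand-in for: table.get_item(Key={"PK": f"USER#{user_id}"})
    return {"name": "Ana", "preferred_term_years": 30, "state": "WA"}

current_rates = {"30-year fixed": "6.5%", "15-year fixed": "5.9%"}

def build_prompt(user_id, question):
    """Interpolate user data and business data into the model prompt."""
    profile = get_user_profile(user_id)
    context = ", ".join(f"{k}: {v}" for k, v in current_rates.items())
    return (
        f"User profile: {profile}\n"
        f"Current rates: {context}\n"
        f"Question: {question}"
    )

prompt = build_prompt("u-1", "Which mortgage should I choose?")
print(prompt)
```

The enriched prompt is then sent to the FM, which can now answer with the user's preferences and current rates in context.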

Similarly, financial services industry customers use DynamoDB, [Amazon Bedrock knowledge bases](https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base.html), and [Amazon Bedrock agents](https://aws.amazon.com/bedrock/agents/) to build RAG-based gen AI applications. These applications can use open-source earnings reports and call transcripts. They can also use user-specific portfolio and transaction history to generate an on-demand summary of a portfolio, including an outlook for the future.

## Generative AI blogs for DynamoDB


The following articles offer detailed use cases, best practices, and step-by-step guides to help you leverage DynamoDB's capabilities in building advanced AI-powered applications.
+ [Amazon DynamoDB data models for generative AI chatbots ](https://aws.amazon.com/blogs/database/amazon-dynamodb-data-models-for-generative-ai-chatbots/) 
+ [Build a scalable, context-aware chatbot with Amazon DynamoDB, Amazon Bedrock, and LangChain](https://aws.amazon.com/blogs/database/build-a-scalable-context-aware-chatbot-with-amazon-dynamodb-amazon-bedrock-and-langchain/) 
+ [Build durable AI agents with LangGraph and Amazon DynamoDB](https://aws.amazon.com/blogs/database/build-durable-ai-agents-with-langgraph-and-amazon-dynamodb/) 

# Leveraging DynamoDB Zero-ETL integration with OpenSearch Service


You can use Amazon Bedrock with DynamoDB to provide serverless access to [foundation models (FMs)](https://aws.amazon.com/what-is/foundation-models/), such as Amazon Titan and third-party models. You can leverage the Zero-ETL integration with Amazon OpenSearch Service to enable vector search capabilities when building generative AI applications. The [Generative AI with DynamoDB zero-ETL to OpenSearch integration and Amazon Bedrock](https://catalog.workshops.aws/dynamodb-labs/en-US/dynamodb-opensearch-zetl) workshop provides hands-on experience setting up the DynamoDB Zero-ETL integration with OpenSearch Service. This workshop performs the following tasks:
+ Creates a pipeline from your DynamoDB table to OpenSearch.
+ Creates an Amazon Bedrock Connector in OpenSearch.
+ Queries Amazon Bedrock leveraging OpenSearch as a vector store.
+ Uses the Claude FM in Amazon Bedrock to create a written response in plain English explaining the search results returned by OpenSearch.

This workshop enables you to integrate DynamoDB with OpenSearch to build generative AI applications. It also demonstrates the flexible querying capability across database engines to help you integrate DynamoDB and OpenSearch for traditional use cases. This workshop is one of the seven modules in the [Amazon DynamoDB Immersion Day](https://catalog.workshops.aws/dynamodb-labs/en-US). You can run this workshop in any AWS account.

The blog post [Vector search for Amazon DynamoDB with zero ETL for Amazon OpenSearch Service](https://aws.amazon.com/blogs/database/vector-search-for-amazon-dynamodb-with-zero-etl-for-amazon-opensearch-service/) describes how to set up a Zero-ETL integration between DynamoDB and OpenSearch Service. It also shows how to set up model connectors in OpenSearch Service to automatically generate embeddings for incoming data using Amazon Bedrock.

# Using DynamoDB as a checkpoint store for LangGraph agents

[LangGraph](https://langchain-ai.github.io/langgraph/) is a framework for building stateful, multi-actor AI applications with Large Language Models (LLMs). LangGraph agents require persistent storage to maintain conversation state, enable human-in-the-loop workflows, support fault tolerance, and provide time-travel debugging capabilities. DynamoDB's serverless architecture, single-digit millisecond latency, and automatic scaling make it an ideal checkpoint store for production LangGraph deployments on AWS.

The `langgraph-checkpoint-aws` package provides a `DynamoDBSaver` class that implements the LangGraph checkpoint interface, enabling you to persist agent state in DynamoDB with optional Amazon Simple Storage Service (Amazon S3) offloading for large checkpoints.

## Key features


State persistence  
Automatically saves agent state after each step, enabling agents to resume from interruptions and recover from failures.

Time to Live (TTL) cleanup  
Automatically expires old checkpoints using DynamoDB Time to Live to manage storage costs.

Compression  
Optionally compresses checkpoint data with gzip to reduce storage costs and improve throughput.

Amazon S3 offloading  
Automatically offloads large checkpoints (greater than 350 KB) to Amazon S3 to stay within the DynamoDB item size limit.

Sync and async support  
Provides both synchronous and asynchronous APIs for flexibility in different application architectures.
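
The TTL and compression features rest on simple mechanics: a TTL attribute is just a future Unix epoch timestamp, and gzip round-trips the serialized checkpoint. The following stdlib-only sketch illustrates the idea; it does not reproduce the internal serialization format of `DynamoDBSaver`:

```python
import gzip
import json
import time

# TTL: DynamoDB deletes the item some time after this epoch timestamp passes.
ttl_attribute = int(time.time()) + 86400 * 7  # seven days from now

# Compression: gzip the serialized checkpoint before storing it.
checkpoint = {"thread_id": "session-123", "values": {"result": "processed"}}
raw = json.dumps(checkpoint).encode("utf-8")
compressed = gzip.compress(raw)

# Decompression restores the original payload byte for byte.
restored = json.loads(gzip.decompress(compressed))
print(len(raw), len(compressed), restored == checkpoint)
```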

## Prerequisites

+ Python 3.10 or later
+ An AWS account with permissions to create DynamoDB tables (and optionally Amazon S3 buckets)
+ AWS credentials configured (see the AWS documentation for credential setup options)

**Important**  
This guide creates AWS resources that may incur charges. DynamoDB uses pay-per-request billing by default, and Amazon S3 charges apply if you enable large checkpoint offloading. Follow the [Clean up](#langgraph-cleanup) section to delete resources when you are done.

## Installation


Install the checkpoint package from PyPI:

```
pip install langgraph-checkpoint-aws
```

## Basic usage


The following example demonstrates how to configure DynamoDB as a checkpoint store for a LangGraph agent:

```
from langgraph.graph import StateGraph
from langgraph_checkpoint_aws import DynamoDBSaver
from typing import TypedDict

# Define your state schema
class State(TypedDict):
    input: str
    result: str

# Initialize the DynamoDB checkpoint saver
checkpointer = DynamoDBSaver(
    table_name="langgraph-checkpoints",
    region_name="us-east-1"
)

# Build your LangGraph workflow
builder = StateGraph(State)
builder.add_node("process", lambda state: {"result": "processed"})
builder.set_entry_point("process")
builder.set_finish_point("process")

# Compile the graph with the DynamoDB checkpointer
graph = builder.compile(checkpointer=checkpointer)

# Invoke the graph with a thread ID to enable state persistence
config = {"configurable": {"thread_id": "session-123"}}
result = graph.invoke({"input": "data"}, config)
```

The `thread_id` in the configuration acts as the partition key in DynamoDB, allowing you to maintain separate conversation threads and retrieve historical states for any thread.
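
To see why a composite key supports per-thread history, consider the following sketch. The `THREAD#`/`CHECKPOINT#` format here is purely hypothetical; the actual attribute layout used by `DynamoDBSaver` is an internal detail:

```python
# Illustrative only: a plausible (PK, SK) layout for checkpoints in one thread.
def checkpoint_key(thread_id, checkpoint_id):
    return {
        "PK": f"THREAD#{thread_id}",          # all checkpoints for a thread share a partition
        "SK": f"CHECKPOINT#{checkpoint_id}",  # sort key orders checkpoints within the thread
    }

# A Query on PK alone returns the whole history for one conversation thread,
# in sort-key order.
keys = [checkpoint_key("session-123", f"{i:04d}") for i in range(3)]
print(keys)
```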

## Production configuration


For production deployments, you can enable Time to Live, compression, and Amazon S3 offloading. You can also use the `endpoint_url` parameter to point to a local DynamoDB instance for testing:

```
import boto3
from botocore.config import Config
from langgraph_checkpoint_aws import DynamoDBSaver

# Production configuration
session = boto3.Session(
    profile_name="production",
    region_name="us-east-1"
)

checkpointer = DynamoDBSaver(
    table_name="langgraph-checkpoints",
    session=session,
    ttl_seconds=86400 * 7,           # Expire checkpoints after 7 days
    enable_checkpoint_compression=True,  # Enable gzip compression
    boto_config=Config(
        retries={"mode": "adaptive", "max_attempts": 6},
        max_pool_connections=50
    ),
    s3_offload_config={
        "bucket_name": "my-checkpoint-bucket"
    }
)

# Local testing with DynamoDB Local
local_checkpointer = DynamoDBSaver(
    table_name="langgraph-checkpoints",
    region_name="us-east-1",
    endpoint_url="http://localhost:8000"
)
```

## DynamoDB table configuration


The checkpoint saver requires a DynamoDB table with a composite primary key. You can create the table using the following AWS CloudFormation template:

```
AWSTemplateFormatVersion: '2010-09-09'
Description: 'DynamoDB table for LangGraph checkpoint storage'

Parameters:
  TableName:
    Type: String
    Default: langgraph-checkpoints

Resources:
  CheckpointTable:
    Type: AWS::DynamoDB::Table
    DeletionPolicy: Retain
    UpdateReplacePolicy: Retain
    Properties:
      TableName: !Ref TableName
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: PK
          AttributeType: S
        - AttributeName: SK
          AttributeType: S
      KeySchema:
        - AttributeName: PK
          KeyType: HASH
        - AttributeName: SK
          KeyType: RANGE
      TimeToLiveSpecification:
        AttributeName: ttl
        Enabled: true
      PointInTimeRecoverySpecification:
        PointInTimeRecoveryEnabled: true
      SSESpecification:
        SSEEnabled: true
```

Deploy the template with the AWS CLI:

```
aws cloudformation deploy \
  --template-file template.yaml \
  --stack-name langgraph-checkpoint \
  --parameter-overrides TableName=langgraph-checkpoints
```

## Required IAM permissions


The following IAM policy provides the minimum permissions required for the DynamoDB checkpoint saver. Replace *111122223333* with your AWS account ID and update the Region to match your environment.

```
{
  "Version": "2012-10-17",		 	 	 
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:Query",
        "dynamodb:BatchGetItem",
        "dynamodb:BatchWriteItem"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/langgraph-checkpoints"
    }
  ]
}
```

If you enable Amazon S3 offloading, add the following statement to the policy:

```
{
  "Effect": "Allow",
  "Action": [
    "s3:PutObject",
    "s3:GetObject",
    "s3:DeleteObject",
    "s3:PutObjectTagging"
  ],
  "Resource": "arn:aws:s3:::my-checkpoint-bucket/*"
},
{
  "Effect": "Allow",
  "Action": [
    "s3:GetBucketLifecycleConfiguration",
    "s3:PutBucketLifecycleConfiguration"
  ],
  "Resource": "arn:aws:s3:::my-checkpoint-bucket"
}
```

## Asynchronous usage


For asynchronous applications, use the async methods provided by the checkpoint saver:

```
import asyncio
from langgraph.graph import StateGraph
from langgraph_checkpoint_aws import DynamoDBSaver
from typing import TypedDict

class State(TypedDict):
    input: str
    result: str

async def main():
    checkpointer = DynamoDBSaver(
        table_name="langgraph-checkpoints",
        region_name="us-east-1"
    )
    builder = StateGraph(State)
    builder.add_node("process", lambda state: {"result": "processed"})
    builder.set_entry_point("process")
    builder.set_finish_point("process")
    graph = builder.compile(checkpointer=checkpointer)

    config = {"configurable": {"thread_id": "async-session-123"}}
    result = await graph.ainvoke({"input": "data"}, config)
    return result

asyncio.run(main())
```

## Clean up


To avoid ongoing charges, delete the resources you created:

```
# Delete the DynamoDB table
aws dynamodb delete-table --table-name langgraph-checkpoints

# Delete the CloudFormation stack (if you used the template above)
aws cloudformation delete-stack --stack-name langgraph-checkpoint

# If you created an S3 bucket for large checkpoint offloading, empty and delete it
aws s3 rm s3://my-checkpoint-bucket --recursive
aws s3 rb s3://my-checkpoint-bucket
```

## Error handling


Common error scenarios:
+ **Table not found**: Verify the `table_name` and `region_name` match your DynamoDB table.
+ **Throttling**: If you see `ProvisionedThroughputExceededException`, consider switching to on-demand billing mode or increasing provisioned capacity.
+ **Item size exceeded**: If checkpoints exceed 350 KB, enable Amazon S3 offloading (see [Production configuration](#langgraph-production-config)).
+ **Credential errors**: Verify your AWS credentials are valid and have the [required permissions](#langgraph-iam-permissions).

## Additional resources

+ [langgraph-checkpoint-aws on PyPI](https://pypi.org/project/langgraph-checkpoint-aws/)
+ [langgraph-checkpoint-aws on GitHub](https://github.com/langchain-ai/langchain-aws/blob/main/libs/langgraph-checkpoint-aws/docs/dynamodb/DynamoDBSaver.md)
+ [LangGraph documentation](https://langchain-ai.github.io/langgraph/)
+ [DynamoDB best practices](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/best-practices.html)
+ [Build durable AI agents with LangGraph and Amazon DynamoDB](https://aws.amazon.com/blogs/database/build-durable-ai-agents-with-langgraph-and-amazon-dynamodb/)