

# Event API runtime reference
<a name="runtime-reference"></a>



The following sections contain the `APPSYNC_JS` runtime reference:
+ [Runtime reference overview](runtime-reference-overview.md) — Learn more about how runtime data sources work in AWS AppSync Events.
+ [Context reference](context-reference.md) — Learn more about the context object and how it's used in AWS AppSync Events handlers.
+ [Runtime features](runtime-features-overview.md) — Learn more about supported runtime features.
+ [Runtime reference for DynamoDB](dynamodb-function-reference.md) — Learn more about how AWS AppSync Events handlers interact with DynamoDB.
+ [Runtime reference for OpenSearch Service](opensearch-function-reference.md) — Learn more about how AWS AppSync Events handlers interact with Amazon OpenSearch Service.
+ [Runtime reference for Lambda](lambda-function-reference.md) — Learn more about how AWS AppSync Events API handlers interact with AWS Lambda.
+ [Runtime reference for EventBridge](eventbridge-function-reference.md) — Learn more about how AWS AppSync Events API handlers interact with Amazon EventBridge.
+ [Runtime reference for HTTP](http-function-reference.md) — Learn more about how AWS AppSync Events API handlers interact with HTTP endpoints.
+ [Runtime reference for Amazon RDS](rds-function-reference.md) — Learn more about how AWS AppSync Events API handlers interact with Amazon Relational Database Service.
+ [Runtime reference for Amazon Bedrock](bedrock-function-reference.md) — Learn more about how AWS AppSync Events API handlers interact with Amazon Bedrock.

# AWS AppSync Event API runtime reference
<a name="runtime-reference-overview"></a>

AWS AppSync lets you respond to specific triggers that occur in the service with code that runs on AWS AppSync's JavaScript runtime (`APPSYNC_JS`).

The AWS AppSync JavaScript runtime provides a subset of JavaScript libraries, utilities, and features. For a complete list of features and functionality supported by the `APPSYNC_JS` runtime, see [Runtime features](runtime-features-overview.md).

**Topics**
+ [Event handlers overview](event-handlers-overview.md)
+ [Writing event handlers](writing-event-handlers.md)
+ [Configuring utilities for the `APPSYNC_JS` runtime](configure-utilities.md)
+ [Bundling, TypeScript, and source maps for the `APPSYNC_JS` runtime](additional-utilities.md)

# Event handlers overview
<a name="event-handlers-overview"></a>

An event handler is a function defined on a channel namespace that is invoked by specific triggers in the system. Currently, you can define `onPublish` and `onSubscribe` event handlers: handlers that respond to events being published to a channel in the namespace, and handlers that respond to subscription attempts on a channel in the namespace. An `onPublish` handler is called before events are broadcast to subscribed clients, giving you a chance to transform the events first. An `onSubscribe` handler is called when a client tries to subscribe, giving you the chance to accept or reject the subscription attempt.

Event handlers are optional; your channel namespaces work without them.

# Writing event handlers
<a name="writing-event-handlers"></a>

A handler is defined either as a single function that doesn't interact with a data source, or as an object that implements a `request` and a `response` function to interact with one of your data sources. When working with a Lambda function data source, AWS AppSync Events supports a `DIRECT` integration that allows you to interact with your Lambda function without writing any handler code.

You provide the code for your handlers using the namespace's `code` property. Essentially, you use a single file to define all your handlers. In your code file, you identify your handler definitions by exporting a function or object named `onPublish` or `onSubscribe`.

## Handler with no data source integration
<a name="no-datasource-handler"></a>

You can define a handler with no data source integration. In this case, the handler is defined as a single function. In the following example, the onPublish handler shows the default behavior when no handler is defined. It simply forwards the list of events.

```
export function onPublish(ctx) {
  return ctx.events
}
```

As an alternate example, this definition returns an empty list, which means that no event will be broadcast to the subscribers.

```
export const onPublish = (ctx) => ([])
```

In this example, the handler returns the list of published events, adding a `msg` property to each payload.

```
export function onPublish(ctx) {
  return ctx.events.map(({id, payload}) => {
    return {id, payload: {...payload, msg: "Hello!"}}
  })
}
```

## Handler with a data source integration
<a name="with-datasource-handler"></a>

You can define a handler with a data source integration. In this case, you define an object that implements a `request` and a `response` function. The `request` function defines the payload that is sent to invoke the data source, while the `response` function receives the result of that invocation. The list of events to broadcast is returned by the `response` function.

The following example defines an `onPublish` handler that saves all published events to a `messages_table` before forwarding them to be broadcast. The `onSubscribe` handler doesn't have a data source integration and is defined by a single function that simply logs a message.

```
import * as ddb from '@aws-appsync/utils/dynamodb'

const TABLE = 'messages_table'
export const onPublish = {
  request(ctx) {
    const channel = ctx.info.channel.path
    return ddb.batchPut({
      tables: {
        [TABLE]: ctx.events.map(({ id, payload }) => ({ channel, id, ...payload })),
      },
    })
  },
  response(ctx) {
    console.log(`Batch Put result:`, ctx.result.data[TABLE])
    return ctx.events
  },
}

export const onSubscribe = (ctx) => {
  console.debug(`Joined the chat: ${ctx.info.channel.path}`)
}
```

## Skipping the data source
<a name="skip-datasource"></a>

You might have situations where you need to skip the data source invocation at run time. You can do this by using the `runtime.earlyReturn` utility. `earlyReturn` immediately returns the provided payload and skips the response function.

```
import { runtime } from '@aws-appsync/utils'
import * as ddb from '@aws-appsync/utils/dynamodb'

const TABLE = 'messages_table'
export const onPublish = {
  request(ctx) {
    if (ctx.info.channel.segments.includes('private')) {
      // return early and do not execute the response function.
      return runtime.earlyReturn(ctx.events)
    }
    const channel = ctx.info.channel.path
    return ddb.batchPut({
      tables: {
        [TABLE]: ctx.events.map(({ id, payload }) => ({ channel, id, ...payload })),
      },
    })
  },
  response(ctx) {
    return ctx.result.data[TABLE].map(({ id, ...payload }) => ({ id, payload }))
  },
}
```

## Returning an error
<a name="return-error"></a>

During the execution of an event handler, you might need to return an error back to the publisher or subscriber. Use the `util.error` function (imported from `@aws-appsync/utils`) to do this. When publishing over the HTTP endpoint, this returns an HTTP 403 response. When publishing over WebSocket, this returns a `publish_error` message with the provided message. The following example demonstrates how to return an error message.

```
import { util } from '@aws-appsync/utils'

export function onPublish(ctx) {
  util.error("Not possible!")
  return ctx.events
}
```

## Unauthorizing a request
<a name="unauthorizing-request"></a>

Your event handlers are always called after AWS AppSync has authorized the request. However, you can run additional business logic and reject a request in your event handler using the `util.unauthorized` function. When publishing over HTTP, this returns an HTTP 401 response. Over WebSocket, this returns a `publish_error` message with an `UnauthorizedException` error type. When a subscription attempt is rejected over WebSocket, the client receives a `subscribe_error` with an `Unauthorized` error type.

```
import { util } from '@aws-appsync/utils'

export function onSubscribe(ctx) {
  if (somethingNotValid() === true) {
    util.unauthorized()
  }
}

function somethingNotValid() {
  // implement your custom business logic here
  return false
}
```

## Direct Lambda integration
<a name="direct-lambda-integration"></a>

AWS AppSync lets you integrate Lambda functions directly with your channel namespace without writing additional handler code. This integration supports both publish and subscribe operations through Request/Response mode.

**How it works**

When AWS AppSync calls your Lambda function, it passes a context object containing event information. Then, the Lambda function can perform the following operations:
+ Filter and transform events for broadcasting
+ Return error messages for failed processing
+ Handle both publish and subscribe operations

**Publish operation response format**

For `onPublish` handlers, your Lambda function must return a response payload with the following structure:

```
type LambdaAppSyncEventResponse = {
  /** Array of outgoing events to broadcast */
  events?: OutgoingEvent[],
  
  /** Optional error message if processing fails */
  error?: string
}
```

**Note**  
If you include an error message in the response, AWS AppSync logs it (when logging is enabled) but doesn't return it to the publisher.

**Subscribe operation response**

For `onSubscribe` handlers, your Lambda function must return one of the following:
+ A payload containing an error message
+ `null` to indicate a successful subscription

```
type LambdaAppSyncEventResponse = {
  /** Error message if subscription fails */
  error: string
} | null
```
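To tie the two response shapes together, the following is a minimal sketch of a direct-integration Lambda function. The function name and the assumption that the operation is available on the context's `info` (as described in the context reference) are illustrative; in your Lambda code file you would export this function as the handler.

```
// Illustrative sketch of a direct-integration Lambda function.
function handler(ctx) {
  if (ctx.info.operation === 'PUBLISH') {
    // Tag each payload and forward every event for broadcast.
    return {
      events: ctx.events.map(({ id, payload }) => ({
        id,
        payload: { ...payload, processedBy: 'lambda' },
      })),
    }
  }
  // SUBSCRIBE: return null to accept the subscription.
  return null
}
```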

**Best practices**

We recommend the following best practices for direct Lambda integrations:
+ Enable logging to track error messages.
+ Ensure your Lambda function handles both success and error cases.
+ Test your integration with various payload scenarios.

**Using Powertools for Lambda**

You can use Powertools for Lambda to write your Lambda function handlers efficiently. To learn more, see the following Powertools for AWS Lambda documentation resources:
+ TypeScript/Node.js — See [https://docs.powertools.aws.dev/lambda/typescript/latest/features/event-handler/appsync-events/](https://docs.powertools.aws.dev/lambda/typescript/latest/features/event-handler/appsync-events/) in the *Powertools for AWS Lambda (TypeScript)* documentation.
+ Python — See [https://docs.powertools.aws.dev/lambda/python/latest/core/event_handler/appsync_events/](https://docs.powertools.aws.dev/lambda/python/latest/core/event_handler/appsync_events/) in the *Powertools for AWS Lambda (Python)* documentation.
+ .NET — See [https://docs.powertools.aws.dev/lambda/dotnet/core/event_handler/appsync_events/](https://docs.powertools.aws.dev/lambda/dotnet/core/event_handler/appsync_events/) in the *Powertools for AWS Lambda (.NET)* documentation.

# Configuring utilities for the `APPSYNC_JS` runtime
<a name="configure-utilities"></a>

AWS AppSync provides the following two libraries that help you develop event handlers with the `APPSYNC_JS` runtime: 
+ `@aws-appsync/eslint-plugin` - Catches and fixes problems quickly during development.
+ `@aws-appsync/utils` - Provides type validation and autocompletion in code editors.

## Configuring the eslint plugin
<a name="configure-eslint-plugin"></a>

[ESLint](https://eslint.org/) is a tool that statically analyzes your code to quickly find problems. You can run ESLint as part of your continuous integration pipeline. `@aws-appsync/eslint-plugin` is an ESLint plugin that catches invalid syntax in your code when leveraging the `APPSYNC_JS` runtime. The plugin allows you to quickly get feedback about your code during development without having to push your changes to the cloud.

`@aws-appsync/eslint-plugin` provides two rule sets that you can use during development. 

**"plugin:@aws-appsync/base"** configures a base set of rules that you can leverage in your project. The following table describes these rules.


| Rule | Description | 
| --- | --- | 
| no-async | Async processes and promises are not supported. | 
| no-await | Async processes and promises are not supported. | 
| no-classes | Classes are not supported. | 
| no-for | `for` is not supported (except `for-in` and `for-of`, which are supported). | 
| no-continue | continue is not supported. | 
| no-generators | Generators are not supported. | 
| no-yield | yield is not supported. | 
| no-labels | Labels are not supported. | 
| no-this | this keyword is not supported. | 
| no-try | Try/catch structure is not supported. | 
| no-while | While loops are not supported. | 
| no-disallowed-unary-operators | The `++`, `--`, and `~` unary operators are not allowed. | 
| no-disallowed-binary-operators | The instanceof operator is not allowed. | 
| no-promise | Async processes and promises are not supported. | 

**"plugin:@aws-appsync/recommended"** provides some additional rules but also requires you to add TypeScript configurations to your project.


| Rule | Description | 
| --- | --- | 
| no-recursion | Recursive function calls are not allowed. | 
| no-disallowed-methods | Some methods are not allowed. See the [Runtime features](runtime-features-overview.md) for a full set of supported built-in functions. | 
| no-function-passing | Passing functions as arguments to other functions is not allowed. | 
| no-function-reassign | Functions cannot be reassigned. | 
| no-function-return | Functions cannot be the return value of functions. | 

To add the plugin to your project, follow the installation and usage steps at [Getting Started with ESLint](https://eslint.org/docs/latest/user-guide/getting-started#installation-and-usage). Then, install the [plugin](https://www.npmjs.com/package/@aws-appsync/eslint-plugin) in your project using your project package manager (e.g., npm, yarn, or pnpm):

```
$ npm install @aws-appsync/eslint-plugin
```

In your `.eslintrc.{js,yml,json}` file, add **"plugin:@aws-appsync/base"** or **"plugin:@aws-appsync/recommended"** to the `extends` property. The snippet below is a basic sample `.eslintrc` configuration for JavaScript: 

```
{
  "extends": ["plugin:@aws-appsync/base"]
}
```

To use the **"plugin:@aws-appsync/recommended"** rule set, install the required dependency:

```
$ npm install -D @typescript-eslint/parser
```

Then, update your ESLint configuration to use the parser and the recommended rule set:

```
{
  "parser": "@typescript-eslint/parser",
  "parserOptions": {
    "ecmaVersion": 2018,
    "project": "./tsconfig.json"
  },
  "extends": ["plugin:@aws-appsync/recommended"]
}
```

# Bundling, TypeScript, and source maps for the `APPSYNC_JS` runtime
<a name="additional-utilities"></a>

TypeScript enhances AWS AppSync development by providing type safety and early error detection. You can write TypeScript code locally and transpile it to JavaScript before using it with the `APPSYNC_JS` runtime. The process starts with installing TypeScript and configuring `tsconfig.json` for the `APPSYNC_JS` environment. You can then use bundling tools like esbuild to compile and bundle the code. 

You can leverage custom and external libraries in your handler and function code, as long as they comply with `APPSYNC_JS` requirements. Bundling tools combine code into a single file for use in AWS AppSync. Source maps can be included to aid debugging. 

## Leveraging libraries and bundling your code
<a name="using-external-libraries"></a>

In your handler code, you can leverage both custom and external libraries so long as they comply with the `APPSYNC_JS` requirements. This makes it possible to reuse existing code in your application. To make use of libraries that are defined by multiple files, you must use a bundling tool, such as [esbuild](https://esbuild.github.io/), to combine your code into a single file that can then be saved to your AWS AppSync namespace handler code.

When bundling your code, keep the following in mind:
+ `APPSYNC_JS` only supports ECMAScript modules (ESM).
+ `@aws-appsync/*` modules are integrated into `APPSYNC_JS` and should not be bundled with your code.
+ The `APPSYNC_JS` runtime environment is similar to Node.js in that code does not run in a browser environment.
+ You can include an optional source map. However, do not include the source content.

  To learn more about source maps, see [Using source maps](#source-maps).

For example, to bundle your handler code located at `src/appsync/onPublish.js`, you can use the following esbuild CLI command:

```
$ esbuild --bundle \
--sourcemap=inline \
--sources-content=false \
--target=esnext \
--platform=node \
--format=esm \
--external:@aws-appsync/utils \
--outdir=out/appsync \
 src/appsync/onPublish.js
```

## Building your code and working with TypeScript
<a name="working-with-typescript"></a>

[TypeScript](https://www.typescriptlang.org/) is a programming language developed by Microsoft that offers all of JavaScript’s features along with the TypeScript typing system. You can use TypeScript to write type-safe code and catch errors and bugs at build time before saving your code to AWS AppSync. The `@aws-appsync/utils` package is fully typed.

The `APPSYNC_JS` runtime doesn't support TypeScript directly. You must first transpile your TypeScript code to JavaScript code that the `APPSYNC_JS` runtime supports before saving your code to AWS AppSync. You can use TypeScript to write your code in your local integrated development environment (IDE), but note that you cannot create TypeScript code in the AWS AppSync console.

To get started, make sure you have [TypeScript](https://www.typescriptlang.org/download) installed in your project. Then, configure your TypeScript transpilation settings to work with the `APPSYNC_JS` runtime using [TSConfig](https://www.typescriptlang.org/tsconfig). Here’s an example of a basic `tsconfig.json` file that you can use:

```
// tsconfig.json
{
  "compilerOptions": {
    "target": "esnext",
    "module": "esnext",
    "noEmit": true,
    "moduleResolution": "node"
  }
}
```

You can then use a bundling tool like esbuild to compile and bundle your code. For example, given a project with your AWS AppSync code located at `src/appsync`, you can use the following command to compile and bundle your code:

```
$ esbuild --bundle \
--sourcemap=inline \
--sources-content=false \
--target=esnext \
--platform=node \
--format=esm \
--external:@aws-appsync/utils \
--outdir=out/appsync \
 src/appsync/**/*.ts
```

### Using generics in TypeScript
<a name="working-with-typescript-generics"></a>

You can use generics with several of the provided types. For example, you can write a handler whose context is typed with a generic such as `EventOnPublishContext<Message>`. In your IDE, type definitions and auto-complete hints will guide you into properly using the available utilities.

```
import type { EventOnPublishContext, IncomingEvent, OutgoingEvent } from "@aws-appsync/utils"
import * as ddb from '@aws-appsync/utils/dynamodb'

type Message = {
  id: string;
  text: string;
  owner: string;
  likes: number
}

type OnP<T = any> = {
  request: (ctx: EventOnPublishContext<T>) => unknown,
  response: (ctx: EventOnPublishContext<T>) => OutgoingEvent[] | IncomingEvent[]
}

export const onPublish: OnP<Message> = {
  request(ctx) {
    const msg = ctx.events[0]
    return ddb.update<Message>({
      key: { owner: msg.payload.owner, id: msg.payload.id },
      update: msg.payload,
      condition: { id: { attributeExists: true } }
    })
  },
  response: (ctx) => ctx.events
}
```

## Linting your bundles
<a name="using-lint-with-bundles"></a>

You can automatically lint your bundles by importing the `esbuild-plugin-eslint` plugin and adding it to the `plugins` array of your build configuration. Below is a snippet that uses the esbuild JavaScript API in a file called `build.mjs`:

```
/* eslint-disable */
import { build } from 'esbuild'
import eslint from 'esbuild-plugin-eslint'
import glob from 'glob'
const files = await glob('src/**/*.ts')

await build({
  format: 'esm',
  target: 'esnext',
  platform: 'node',
  external: ['@aws-appsync/utils'],
  outdir: 'dist/',
  entryPoints: files,
  bundle: true,
  plugins: [eslint({ useEslintrc: true })],
})
```

## Using source maps
<a name="source-maps"></a>

You can provide an inline source map (`sourcemap`) with your JavaScript code. Source maps are useful when you bundle JavaScript or TypeScript code and want to see references to your input source files in your logs and runtime JavaScript error messages.

Your `sourcemap` must appear at the end of your code. It is defined by a single comment line in the following format:

```
//# sourceMappingURL=data:application/json;base64,<base64 encoded string>
```

The following is an example of a source map:

```
//# sourceMappingURL=data:application/json;base64,ewogICJ2ZXJzaW9uIjogMywKICAic291cmNlcyI6IFsibGliLmpzIiwgImNvZGUuanMiXSwKICAibWFwcGluZ3MiOiAiO0FBQU8sU0FBUyxRQUFRO0FBQ3RCLFNBQU87QUFDVDs7O0FDRE8sU0FBUyxRQUFRLEtBQUs7QUFDM0IsU0FBTyxNQUFNO0FBQ2Y7IiwKICAibmFtZXMiOiBbXQp9Cg==
```

Source maps can be created with esbuild. The example below shows you how to use the esbuild JavaScript API to include an inline source map when code is built and bundled:

```
import { build } from 'esbuild'
import eslint from 'esbuild-plugin-eslint'
import glob from 'glob'
const files = await glob('src/**/*.ts')

await build({
  sourcemap: 'inline',
  sourcesContent: false,
  
  format: 'esm',
  target: 'esnext',
  platform: 'node',
  external: ['@aws-appsync/utils'],
  outdir: 'dist/',
  entryPoints: files,
  bundle: true,
  plugins: [eslint({ useEslintrc: true })],
})
```

In the preceding example, the `sourcemap` and `sourcesContent` options specify that a source map should be added inline at the end of each build but should not include the source content. As a convention, we recommend not including source content in your `sourcemap`. In the esbuild JavaScript API, set `sourcesContent` to `false`; on the CLI, use `--sources-content=false`.

To illustrate how source maps work, review the following example in which handler code references helper functions from a helper library. The code contains log statements in the handler code and in the helper library:

**./src/channelhandler.ts** (your handler)

```
import { EventOnPublishContext } from "@aws-appsync/utils";
import { mapper } from "./lib/helper";

export function onPublish(ctx: EventOnPublishContext) {
  return ctx.events.map(mapper)
}
```

**./lib/helper.ts** (a helper file)

```
import { IncomingEvent, OutgoingEvent } from "@aws-appsync/utils";

export function mapper(event: IncomingEvent, index: number) {
  console.log(`-> mapping: event ${event.id}`)
  return {
    ...event,
    payload: { ...event.payload, mapped: true },
    error: index % 2 === 0 ? 'flip flop error' : null
  } as OutgoingEvent
}
```

When you build and bundle the handler file, your handler code includes an inline source map. When your handler runs, log entries in CloudWatch reference your original source files instead of the bundled output.

# AWS AppSync Event API context reference
<a name="context-reference"></a>

AWS AppSync defines a set of variables and functions for working with handlers. This topic describes these functions and provides examples.

## Accessing the `context`
<a name="accessing-the-context"></a>

The `context` argument of a request and response handler is an object that holds all of the contextual information for your handler invocation. It has the following structure:

```
type Context = {
  identity?: Identity;
  result?: any;
  request: Request;
  info: EventsInfo;
  stash: any;
  error?: Error;
  events?: IncomingEvent[];
}
```

**Note**  
The `context` object is commonly referred to as `ctx`.

Each field in the `context` object is defined as follows:

### `context` fields
<a name="accessing-the-context-list"></a>

** `identity` **  
An object that contains information about the caller. For more information about the structure of this field, see [Identity](#context-reference-identity).

** `result` **  
A container for the results of this handler when a data source is configured, available in the response function of a namespace handler.

** `request` **  
A container for the headers and information about the custom domain that was used. 

** `info` **  
An object that contains information about the operation on the channel namespace. For the structure of this field, see [Info](#info-property). 

** `stash` **  
The stash is an object that is made available inside each handler. The same stash object lives through a single handler evaluation. You can use the stash to pass arbitrary data across request and response functions of your handlers.   
You can add items to the stash as follows:  

```
ctx.stash.newItem = { key: "something" }
Object.assign(ctx.stash, {key1: value1, key2: value2})
```
You can remove items from the stash as follows:  

```
delete ctx.stash.key
```
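As a sketch of how the stash carries data across the two functions of a data source handler, the following stashes a value in `request` and reads it back in `response`. The fixed timestamp and the payload returned by `request` are illustrative only; in a handler file this object would be exported as `onPublish`.

```
// Sketch: stash a value in request() and read it back in response().
const onPublish = {
  request(ctx) {
    // The same stash object is visible to both functions of this evaluation.
    ctx.stash.receivedAt = 1700000000000
    return { payload: ctx.events } // illustrative data source payload
  },
  response(ctx) {
    return ctx.events.map(({ id, payload }) => ({
      id,
      payload: { ...payload, receivedAt: ctx.stash.receivedAt },
    }))
  },
}
```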

### Identity
<a name="context-reference-identity"></a>

The `identity` section contains information about the caller. The shape of this section depends on the authorization type of your AWS AppSync API. For more information about security options, see [Configuring authorization and authentication to secure Event APIs](configure-event-api-auth.md).

** `API_KEY` authorization**  
The `identity` field isn't populated.

**`AWS_LAMBDA` authorization**  
The `identity` has the following form:  

```
type AppSyncIdentityLambda = {
  handlerContext: any;
};
```
The `identity` object contains the `handlerContext` key, with the same `handlerContext` content returned by the Lambda function that authorized the request.

** `AWS_IAM` authorization**  
The `identity` has the following form:  

```
type AppSyncIdentityIAM = {
  accountId: string;
  cognitoIdentityPoolId: string;
  cognitoIdentityId: string;
  sourceIp: string[];
  username: string;
  userArn: string;
  cognitoIdentityAuthType: string;
  cognitoIdentityAuthProvider: string;
};
```

** `AMAZON_COGNITO_USER_POOLS` authorization**  
The `identity` has the following form:  

```
type AppSyncIdentityCognito = {
  sourceIp: string[];
  username: string;
  groups: string[] | null;
  sub: string;
  issuer: string;
  claims: any;
  defaultAuthStrategy: string;
};
```

Each field is defined as follows:

** `accountId` **  
The AWS account ID of the caller.

** `claims` **  
The claims that the user has.

** `cognitoIdentityAuthType` **  
Either authenticated or unauthenticated based on the identity type.

** `cognitoIdentityAuthProvider` **  
A comma-separated list of external identity provider information used in obtaining the credentials used to sign the request.

** `cognitoIdentityId` **  
The Amazon Cognito identity ID of the caller.

** `cognitoIdentityPoolId` **  
The Amazon Cognito identity pool ID associated with the caller.

** `defaultAuthStrategy` **  
The default authorization strategy for this caller (`ALLOW` or `DENY`).

** `issuer` **  
The token issuer.

** `sourceIp` **  
The source IP address of the caller that AWS AppSync receives. If the request doesn't include the `x-forwarded-for` header, the source IP value contains only a single IP address from the TCP connection. If the request includes an `x-forwarded-for` header, the source IP is a list of IP addresses from the `x-forwarded-for` header, in addition to the IP address from the TCP connection.

** `sub` **  
The UUID of the authenticated user.

** `user` **  
The IAM user.

** `userArn` **  
The Amazon Resource Name (ARN) of the IAM user.

** `username` **  
The user name of the authenticated user. In the case of `AMAZON_COGNITO_USER_POOLS` authorization, the value of *username* is the value of attribute *cognito:username*. In the case of `AWS_IAM` authorization, the value of *username* is the value of the AWS user principal. If you're using IAM authorization with credentials vended from Amazon Cognito identity pools, we recommend that you use `cognitoIdentityId`.
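As a sketch of using these identity fields, the following helper decides whether a subscriber may join a per-user channel such as `/chat/inbox/john` by comparing the Cognito username to the channel segments. The helper name is illustrative; in a real `onSubscribe` handler you would call `util.unauthorized()` when this check fails.

```
// Sketch: does the channel path contain the caller's own username?
function isOwnChannel(ctx) {
  return ctx.info.channel.segments.includes(ctx.identity.username)
}
```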

### Request property
<a name="request-property"></a>

The `request` property contains the headers that were sent with the request, and the custom domain name if it was used.

**Request headers**

The headers sent in HTTP requests to your API.

AWS AppSync supports passing custom headers from clients and accessing them in your handlers by using `ctx.request.headers`. You can then use the header values for actions such as inserting data into a data source or authorization checks. You can use single or multiple request headers.

If you set a header of `animal` with a value of `duck` as in the following example:

```
curl --location "https://YOUR_EVENT_API_ENDPOINT/event" \
--header 'Content-Type: application/json' \
--header "x-api-key:ABC123" \
--header "animal:duck" \
--data '{ "channel":"/news", "events":["\"Breaking news!\""] }'
```

This header can then be accessed with `ctx.request.headers.animal`. 

You can also pass multiple headers in a single request and access these in the handler. For example, if the `custom` header is set with two values as follows:

```
curl --location "https://YOUR_EVENT_API_ENDPOINT/event" \
--header 'Content-Type: application/json' \
--header "x-api-key:ABC123" \
--header "animal:duck" \
--header "animal:goose" \
--data '{ "channel":"/news", "events":["\"Breaking news!\""] }'
```

You could then access these as an array, such as `ctx.request.headers.animal[1]`.

**Note**  
AWS AppSync doesn't expose the cookie header in `ctx.request.headers`.
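As a sketch of using request headers in a handler, the following `onPublish` attaches the custom `animal` header from the curl examples above to each outgoing payload (the fallback value is illustrative):

```
// Sketch: copy the custom `animal` header into each outgoing payload.
function onPublish(ctx) {
  const animal = ctx.request.headers.animal ?? 'unknown'
  return ctx.events.map(({ id, payload }) => ({
    id,
    payload: { ...payload, animal },
  }))
}
```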

**Access the request custom domain name**

AWS AppSync supports configuring a custom domain that you can use to access your HTTP and WebSocket real-time endpoints for your APIs. When making a request with a custom domain name, you can get the domain name using `ctx.request.domainName`. When using the default endpoint domain name, the value is `null`.

### Info property
<a name="info-property"></a>

The `info` section contains information about the request made to your channel namespace. This section has the following form:

```
type EventsInfo = {
  channel: {
    path: string;
    segments: string[];
  };
  channelNamespace: {
    name: string;
  };
  operation: 'SUBSCRIBE' | 'PUBLISH';
}
```

Each field is defined as follows:

** `info.channel.path` **  
The channel path the operation is executed on, for example, `/default/user/john`.

** `info.channel.segments` **  
The segments of the channel path, for example, `['default', 'user', 'john']`.

** `info.channelNamespace.name` **  
The name of the channel namespace, for example, `'default'`.

** `info.operation` **  
The operation executed: `PUBLISH` or `SUBSCRIBE`.
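As a sketch of using these fields, the following `onPublish` derives a room name from the channel segments, so an event published to `/default/room1` is tagged with `room1` (the default room name is illustrative):

```
// Sketch: tag each event with the second channel path segment.
function onPublish(ctx) {
  const [, room = 'lobby'] = ctx.info.channel.segments
  return ctx.events.map(({ id, payload }) => ({ id, payload: { ...payload, room } }))
}
```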

# Runtime features
<a name="runtime-features-overview"></a>

The `APPSYNC_JS` runtime environment provides features and utilities to help you work with data, and write functions and AWS AppSync Event API handlers. The topics in this section describe the language features that are supported for AWS AppSync Event APIs.

**Topics**
+ [Supported runtime features](runtime-supported-features.md)
+ [Built-in utilities](built-in-util.md)
+ [Built-in modules](built-in-modules.md)
+ [Runtime utilities](runtime-utilities.md)

# Supported runtime features
<a name="runtime-supported-features"></a>

The `APPSYNC_JS` runtime supports the features described in the following sections.

**Topics**
+ [Core features](#core-features)
+ [Primitive objects](#primitive-objects)
+ [Built-in objects and functions](#built-in-objects-functions)
+ [Globals](#globals)
+ [Error types](#error-types)

## Core features
<a name="core-features"></a>

The following core features are supported.

**Types**  
The following types are supported:  
+ numbers
+ strings
+ booleans
+ objects
+ arrays
+ functions

**Operators**  
The following operators are supported:  
+ standard math operators (`+`, `-`, `/`, `%`, `*`, etc.)
+ nullish coalescing operator (`??`)
+ Optional chaining (`?.`)
+ bitwise operators
+ `void` and `typeof` operators
+ spread operators (`...`)
The following operators are not supported:  
+ unary operators (`++`, `--`, and `~`)
+ `in` operator
**Note**  
Use the `Object.hasOwn` method to check if the specified property is in the specified object.
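For example, since the `in` operator isn't available, a property check looks like this:

```
// Check for a property with Object.hasOwn instead of the `in` operator.
const payload = { id: '1', text: 'hello' }
const hasText = Object.hasOwn(payload, 'text')   // true
const hasOwner = Object.hasOwn(payload, 'owner') // false
```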

**Statements**  
The following statements are supported:  
+ `const`
+ `let`
+ `var`
+ `break`
+ `else`
+ `for-in`
+ `for-of` 
+ `if`
+ `return`
+ `switch`
+ spread syntax
The following are not supported:  
+ `catch`
+ `continue`
+ `do-while`
+ `finally`
+ `for(initialization; condition; afterthought)`
**Note**  
The exceptions are `for-in` and `for-of` expressions, which are supported.
+ `throw`
+ `try`
+ `while`
+ labeled statements

**Literals**  
The following ES 6 [template literals](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals) are supported:  
+ Multi-line strings
+ Expression interpolation
+ Nesting templates

**Functions**  
The following function syntax is supported:  
+ Function declarations are supported.
+ ES 6 arrow functions are supported.
+ ES 6 rest parameter syntax is supported.
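The following snippet illustrates these forms using only features the runtime supports (function declarations, arrow functions, rest parameters, and template literals):

```
// Arrow function with rest parameters: sums any number of arguments
const sum = (...nums) => nums.reduce((acc, n) => acc + n, 0)

// Function declaration with a template literal
function greet(name) {
  return `Hello, ${name}`
}

const total = sum(1, 2, 3) // 6
```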

## Primitive objects
<a name="primitive-objects"></a>

The following ES primitive objects and their functions are supported.

**Object**  
The following `Object` features are supported:  
+ `Object.assign()`
+ `Object.entries()` 
+ `Object.hasOwn()`
+ `Object.keys()` 
+ `Object.values()`
+ `delete` 

**String**  
The following `String` properties and methods are supported:  
+  `String.prototype.length` 
+  `String.prototype.charAt()` 
+  `String.prototype.concat()` 
+  `String.prototype.endsWith()` 
+  `String.prototype.indexOf()` 
+  `String.prototype.lastIndexOf()` 
+  `String.raw()` 
+  `String.prototype.replace()`
**Note**  
Regular expressions are not supported.   
However, Java-styled regular expression constructs are supported in the provided parameter. For more information, see [Pattern](https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html).
+ `String.prototype.replaceAll()`
**Note**  
Regular expressions are not supported.  
However, Java-styled regular expression constructs are supported in the provided parameter. For more information, see [Pattern](https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html).
+  `String.prototype.slice()` 
+  `String.prototype.split()` 
+  `String.prototype.startsWith()` 
+  `String.prototype.toLowerCase()` 
+  `String.prototype.toUpperCase()` 
+  `String.prototype.trim()` 
+  `String.prototype.trimEnd()` 
+  `String.prototype.trimStart()` 
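The following snippet illustrates a few of these methods on a channel path string. Note that `replaceAll()` is called with a plain string pattern, since regular expression literals aren't supported:

```
const path = '/default/user/john'

const upper = path.toUpperCase()              // '/DEFAULT/USER/JOHN'
const isDefault = path.startsWith('/default') // true
const segments = path.split('/')              // ['', 'default', 'user', 'john']

// replaceAll with a plain string pattern (no regular expression literal)
const dashed = path.replaceAll('/', '-')      // '-default-user-john'
```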

**Number**  
The following `Number` methods are supported:  
+  `Number.isFinite` 
+  `Number.isNaN` 

## Built-in objects and functions
<a name="built-in-objects-functions"></a>

The following functions and objects are supported.

**Math**  
+  `Math.random()` 
+  `Math.min()` 
+  `Math.max()` 
+  `Math.round()` 
+  `Math.floor()` 
+  `Math.ceil()` 

**Array**  
+ `Array.prototype.length` 
+ `Array.prototype.concat()` 
+ `Array.prototype.fill()` 
+ `Array.prototype.flat()` 
+ `Array.prototype.indexOf()` 
+ `Array.prototype.join()` 
+ `Array.prototype.lastIndexOf()` 
+ `Array.prototype.pop()` 
+ `Array.prototype.push()` 
+ `Array.prototype.reverse()` 
+ `Array.prototype.shift()` 
+ `Array.prototype.slice()` 
+ `Array.prototype.sort()`
**Note**  
`Array.prototype.sort()` doesn't support arguments.
+ `Array.prototype.splice()` 
+ `Array.prototype.unshift()`
+ `Array.prototype.forEach()`
+ `Array.prototype.map()`
+ `Array.prototype.flatMap()`
+ `Array.prototype.filter()`
+ `Array.prototype.reduce()`
+ `Array.prototype.reduceRight()`
+ `Array.prototype.find()`
+ `Array.prototype.some()`
+ `Array.prototype.every()`
+ `Array.prototype.findIndex()`
+ `Array.prototype.findLast()`
+ `Array.prototype.findLastIndex()`
+ `delete` 
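The following snippet illustrates a few of these methods. Because `sort()` doesn't accept arguments, items are sorted using the default string ordering:

```
const events = [
  { id: 'c', size: 3 },
  { id: 'a', size: 1 },
  { id: 'b', size: 2 },
]

const ids = events.map((e) => e.id)          // ['c', 'a', 'b']
const big = events.filter((e) => e.size > 1) // items with size greater than 1
const total = events.reduce((acc, e) => acc + e.size, 0) // 6

// sort() takes no arguments in this runtime; default string ordering applies
const sorted = ids.slice().sort() // ['a', 'b', 'c']
```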

**Console**  
The console object is available for debugging. During live execution, console log and error statements are sent to Amazon CloudWatch Logs (if logging is enabled). During code evaluation with `evaluateCode`, log statements are returned in the command response.  
+ `console.error()`
+ `console.log()`

**Function**  
+ The `apply`, `bind`, and `call` methods are not supported.
+ Function constructors are not supported.
+ Passing a function as an argument is not supported.
+ Recursive function calls are not supported.

**JSON**  
The following JSON methods are supported:  
+ `JSON.parse()`
**Note**  
Returns a blank string if the parsed string is not valid JSON.
+ `JSON.stringify()`
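The following snippet round-trips a value through `JSON.parse()` and `JSON.stringify()`, modifying it with spread syntax along the way:

```
const raw = '{"channel":"/default/greetings","count":2}'

const parsed = JSON.parse(raw)
const doubled = { ...parsed, count: parsed.count * 2 }
const out = JSON.stringify(doubled) // '{"channel":"/default/greetings","count":4}'
```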

**Promises**  
Async processes and promises are not supported.  
Network and file system access is not supported within the `APPSYNC_JS` runtime in AWS AppSync. AWS AppSync handles all I/O operations based on the requests made by the AWS AppSync handler or AWS AppSync function.

## Globals
<a name="globals"></a>

The following globals are supported:
+  `NaN` 
+  `Infinity` 
+  `undefined`
+ `util`
+ `extensions`
+ `runtime`

## Error types
<a name="error-types"></a>

Throwing errors with `throw` is not supported. You can return an error by using the `util.error()` function. You can include an error in your handler response by using the `util.appendError()` function.

# Built-in utilities
<a name="built-in-util"></a>

The `util` variable contains general utility methods to help you work with data. Unless otherwise specified, all utilities use the UTF-8 character set.

## Encoding utils
<a name="utility-helpers-in-encoding"></a>

 **`util.urlEncode(String)`**  
Returns the input string as an `application/x-www-form-urlencoded` encoded string.

 **`util.urlDecode(String)`**  
Decodes an `application/x-www-form-urlencoded` encoded string back to its non-encoded form.

**`util.base64Encode(string) : string`**  
Encodes the input into a base64-encoded string.

**`util.base64Decode(string) : string`**  
Decodes the data from a base64-encoded string.

# Built-in modules
<a name="built-in-modules"></a>

Modules are a part of the `APPSYNC_JS` runtime and provide utilities to help write functions and Event API handlers. This section describes the DynamoDB and Amazon RDS module functions that you can use to interact with these data sources.

## Amazon DynamoDB built-in module
<a name="DDB-built-in-module"></a>

The DynamoDB module functions provide an enhanced experience when interacting with DynamoDB data sources. You can make requests to your DynamoDB data sources using these functions without adding type mapping. 

Modules are imported using `@aws-appsync/utils/dynamodb`:

```
import * as ddb from '@aws-appsync/utils/dynamodb';
```

### DynamoDB `get()` function
<a name="ddb-get-function"></a>

The DynamoDB `get()` function generates a `DynamoDBGetItemRequest` object to make a `GetItem` request to DynamoDB.

**Definition**

```
get<T>(payload: GetInput): DynamoDBGetItemRequest
```

**Example**

The following example fetches an item from DynamoDB in a `subscribe` handler.

```
import * as ddb from '@aws-appsync/utils/dynamodb';

export const onSubscribe = {
  request(ctx) {
    return ddb.get({key: {
      path: ctx.info.channel.path,
      sub: ctx.identity.sub
    }})
  },
  response(ctx) {
    console.log('Got the item:', ctx.result)
    if (!ctx.result){
      console.error("No info about this user for this channel path.")
      util.unauthorized()
    }
  }
}
```

### DynamoDB `query()` function
<a name="ddb-query-function"></a>

The DynamoDB `query()` function generates a `DynamoDBQueryRequest` object to make a `Query` request to DynamoDB.

**Definition**

```
query<T>(payload: QueryInput): DynamoDBQueryRequest
```

**Example**

The following example performs a query against a DynamoDB table.

```
import * as ddb from '@aws-appsync/utils/dynamodb'

export const onPublish = {
  request(ctx) {
    // Find all items from this channel that exist on this path
    return ddb.query<{ channel: string; path: string }>({
      query: {
        channel: { eq: ctx.info.channelNamespace.name },
        path: { beginsWith: ctx.info.channel.path },
      },
      projection: ['channel', 'path', 'msgId'],
    })
  },
  response(ctx) {
    // Broadcast items that have not been saved to the table
    const ids = ctx.result.items.map(({ msgId }) => msgId)
    return ctx.events.filter(({ payload: { msgId } }) => !ids.includes(msgId))
  },
}
```

### DynamoDB `scan()` function
<a name="ddb-scan-function"></a>

The DynamoDB `scan()` function generates a `DynamoDBScanRequest` object to make a `Scan` request to DynamoDB.

**Definition**

```
scan<T>(payload: ScanInput): DynamoDBScanRequest
```

**Example**

The following example scans all items in a DynamoDB table.

```
import * as ddb from '@aws-appsync/utils/dynamodb';

export const onPublish = {
  request(ctx){
    return ddb.scan({
      limit: 20,
      projection: ['channel', 'path', 'msgId'],
      filter: { status: { eq: 'ACTIVE' } }
    })
  },
  response: (ctx) => ctx.events
}
```

### DynamoDB `put()` function
<a name="ddb-put-function"></a>

The DynamoDB `put()` function generates a `DynamoDBPutItemRequest` object to make a `PutItem` request to DynamoDB.

**Definition**

```
put<T>(payload: PutInput): DynamoDBPutItemRequest
```

**Example**

The following example saves an event to a DynamoDB table in an `OnPublish` handler.

```
import * as ddb from '@aws-appsync/utils/dynamodb';

export const onPublish = {
  request(ctx) {
    const {id, payload: item} = ctx.events[0]
    return ddb.put({ key: {id}, item })
  },
  response: (ctx) => ctx.events
}
```

### DynamoDB `remove()` function
<a name="ddb-remove-function"></a>

The DynamoDB `remove()` function generates a `DynamoDBDeleteItemRequest` object to make a `DeleteItem` request to DynamoDB.

**Definition**

```
remove<T>(payload: RemoveInput): DynamoDBDeleteItemRequest
```

**Example**

This `OnPublish` handler deletes an item in a DynamoDB table and forwards an empty list. No event is broadcast.

```
import * as ddb from '@aws-appsync/utils/dynamodb';

export const onPublish = {
  request(ctx) {
    const { id } = ctx.events[0]
    return ddb.remove({ key: { id } });
  },
  response: (ctx) => ([])
}
```

### DynamoDB `update()` function
<a name="ddb-update-function"></a>

The DynamoDB `update()` function generates a `DynamoDBUpdateItemRequest` object to make an `UpdateItem` request to DynamoDB.

**Definition**

```
update<T>(payload: UpdateInput): DynamoDBUpdateItemRequest
```

**Example**

This `OnPublish` handler increments the version of the received item before it is broadcast.

```
import * as ddb from '@aws-appsync/utils/dynamodb';

export const onPublish = {
  request(ctx) {
    const { id, payload } = ctx.events[0]
    return ddb.update({
      key: { id },
      condition: { version: { eq: payload.version } },
      update: { ...payload, version: ddb.operations.increment(1) },
    });
  },
  response: (ctx) => ctx.events
}
```

### DynamoDB `batchGet()` function
<a name="ddb-batchget-function"></a>

The DynamoDB `batchGet()` function generates a `DynamoDBBatchGetItemRequest` object to make a `BatchGetItem` request to retrieve multiple items from one or more DynamoDB tables.

**Definition**

```
batchGet<T>(payload: BatchGetInput): DynamoDBBatchGetItemRequest
```

**Example**

The following example retrieves multiple items from a DynamoDB table in a single request.

```
import * as ddb from '@aws-appsync/utils/dynamodb';

export const onPublish = {
  request(ctx) {
    return ddb.batchGet({
      tables: {
        users: {
          keys: ctx.events.map(e => ({id: e.payload.id})),
          projection: ['id', 'name', 'email']
        }
      }
    })
  },
  response(ctx) {
    const users = ctx.result.data.users.reduce((acc, cur) => {
      acc[cur.id] = cur
      return acc
    }, {})
    return ctx.events.map(event => {
      return {
        id: event.id,
        payload: {...event.payload, ...users[event.payload.id]}
      }
    })
  }
}
```

### DynamoDB `batchPut()` function
<a name="ddb-batchput-function"></a>

The DynamoDB `batchPut()` function generates a `DynamoDBBatchPutItemRequest` object to make a `BatchWriteItem` request to put multiple items into one or more DynamoDB tables.

**Definition**

```
batchPut<T>(payload: BatchPutInput): DynamoDBBatchPutItemRequest
```

**Example**

The following example writes multiple items to a DynamoDB table in a single request.

```
import * as ddb from '@aws-appsync/utils/dynamodb';

export const onPublish = {
  request(ctx) {
    return ddb.batchPut({
      tables: {
        messages: ctx.events.map(({ id, payload }) => ({ 
          channel: ctx.info.channelNamespace.name, 
          id, 
          ...payload 
        })),
      }
    });
  },
  response: (ctx) => ctx.events
}
```

### DynamoDB `batchDelete()` function
<a name="ddb-batchdelete-function"></a>

The DynamoDB `batchDelete()` function generates a `DynamoDBBatchDeleteItemRequest` object to make a `BatchWriteItem` request to delete multiple items from one or more DynamoDB tables.

**Definition**

```
batchDelete(payload: BatchDeleteInput): DynamoDBBatchDeleteItemRequest
```

**Example**

The following example deletes multiple items from a DynamoDB table in a single request.

```
import * as ddb from '@aws-appsync/utils/dynamodb';

export const onPublish = {
  request(ctx) {
    const name = ctx.info.channelNamespace.name
    return ddb.batchDelete({
      tables: {
        [name]: ctx.events.map(({ payload }) => ({ id: payload.id })),
      }
    });
  },
  response: (ctx) => ([])
}
```

### DynamoDB `transactGet()` function
<a name="ddb-transactget-function"></a>

The DynamoDB `transactGet()` function generates a `DynamoDBTransactGetItemsRequest` object to make a `TransactGetItems` request to retrieve multiple items with strong consistency in a single atomic transaction.

**Definition**

```
transactGet(payload: TransactGetInput): DynamoDBTransactGetItemsRequest
```

**Example**

The following example retrieves multiple items in a single atomic transaction.

```
import * as ddb from '@aws-appsync/utils/dynamodb';

export const onPublish = {
  request(ctx) {
    return ddb.transactGet({
      items: ctx.events.map(event => ({
        table: event.payload.table,
        key: { id: event.payload.id },
        projection: [...event.payload.fields]
      }))
    })
  },
  response(ctx) {
    const items = ctx.result.items
    return ctx.events.map((event, i) => ({
      id: event.id,
      payload: { ...event.payload, ...items[i] }
    }))
  }
}
```

### DynamoDB `transactWrite()` function
<a name="ddb-transactwrite-function"></a>

The DynamoDB `transactWrite()` function generates a `DynamoDBTransactWriteItemsRequest` object to make a `TransactWriteItems` request to perform multiple write operations in a single atomic transaction.

**Definition**

```
transactWrite(payload: TransactWriteInput): DynamoDBTransactWriteItemsRequest
```

**Example**

The following example performs multiple write operations in a single atomic transaction.

```
import * as ddb from '@aws-appsync/utils/dynamodb';

export const onPublish = {
  request(ctx) {
    const order = ctx.events[0]
    return ddb.transactWrite({
      items: [
        {
          putItem: {
            table: 'Orders',
            key: { id: order.payload.id },
            item: {
              status: 'PENDING',
              createdAt: util.time.nowISO8601(),
              items: order.payload.items.map(({ id }) => id)
            }
          }
        },
        ...(order.payload.items.map(({ id, item }) => ({
          putItem: {
            table: 'Items',
            key: { orderId: order.payload.id, id },
            item
          }
        })))
      ]
    });
  },
  response: (ctx) => ctx.events
}
```

### DynamoDB set utilities
<a name="built-in-ddb-modules-set-utilities"></a>

The `@aws-appsync/utils/dynamodb` module provides the following `set` utility functions that you can use to work with string sets, number sets, and binary sets.

 **`toStringSet`**  
Converts a list of strings to the DynamoDB string set format.

 **`toNumberSet`**  
Converts a list of numbers to the DynamoDB number set format.

 **`toBinarySet`**  
Converts a list of binary values to the DynamoDB binary set format.

**Example**

The following example converts a list of strings to DynamoDB string set format.

```
import * as ddb from '@aws-appsync/utils/dynamodb';

export const onPublish = {
  request(ctx) {
    const { id } = ctx.events[0]
    return ddb.update({
      key: { id },
      update: {segments: ddb.toStringSet(ctx.info.channel.segments)},
    });
  },
  response: (ctx) => ctx.events
}
```

### DynamoDB conditions and filters
<a name="built-in-ddb-modules-conditions-filters"></a>

You can use the following operators to create filters and conditions.


| Operator | Description | Possible value types | 
| --- |--- |--- |
| eq | Equal | number, string, boolean | 
| ne | Not equal | number, string, boolean | 
| le | Less than or equal | number, string | 
| lt | Less than | number, string | 
| ge | Greater than or equal | number, string | 
| gt | Greater than | number, string | 
| contains | Like | string | 
| notContains | Not like | string | 
| beginsWith | Starts with prefix | string | 
| between | Between two values | number, string | 
| attributeExists | The attribute is not null | number, string, boolean | 
| size | Checks the length of the element | string | 

You can combine these operators with `and`, `or`, and `not`.

```
const condition = {
  and: [
    { name: { eq: 'John Doe' }},
    { age: { between: [10, 30] }},
    {or: [
      {id :{ attributeExists: true}}
    ]}
  ]
}
```

### DynamoDB operations
<a name="built-in-ddb-modules-operations"></a>

The DynamoDB operations object provides utility functions for common DynamoDB operations. These utilities are particularly useful in `update()` function calls.

The following operations are available:

 **`add(value)`**  
A helper function that adds a value to the item when updating DynamoDB.

 **`remove()`**  
A helper function that removes an attribute from an item when updating DynamoDB.

**`replace(value)`**  
A helper function that replaces an existing attribute when updating an item in DynamoDB. This is useful when you want to replace the entire object or sub-object stored in the attribute.

**`increment(amount)`**  
A helper function that increments a numeric attribute by the specified amount when updating DynamoDB.

**`decrement(amount)`**  
A helper function that decrements a numeric attribute by the specified amount when updating DynamoDB.

**`append(value)`**  
A helper function that appends a value to a list attribute in DynamoDB.

**`prepend(value)`**  
A helper function that prepends a value to a list attribute in DynamoDB.

**`updateListItem(value, index)`**  
A helper function that updates an item in a list.

**Example**

The following example demonstrates how to use various operations in an update request.

```
import * as ddb from '@aws-appsync/utils/dynamodb';

export const onPublish = {
  request(ctx) {
    const { id } = ctx.events[0]
    return ddb.update({
      key: { id },
      update: {
        counter: ddb.operations.increment(1),
        tags: ddb.operations.append(['things']),
        items: ddb.operations.add({key: 'value'}),
        oldField: ddb.operations.remove(),
      },
    });
  },
  response: (ctx) => ctx.events
}
```

### Inputs
<a name="built-in-ddb-modules-inputs"></a>

 **`Type GetInput<T>`**  

```
GetInput<T>: { 
    consistentRead?: boolean; 
    key: DynamoDBKey<T>; 
}
```
**Type Declaration**  
+ `consistentRead?: boolean` (optional)

  An optional boolean to specify whether you want to perform a strongly consistent read with DynamoDB.
+ `key: DynamoDBKey<T>` (required)

  A required parameter that specifies the key of the item in DynamoDB. DynamoDB items may have a single hash key or hash and sort keys.

**`Type PutInput<T>`**  

```
PutInput<T>: { 
    _version?: number; 
    condition?: DynamoDBFilterObject<T> | null; 
    customPartitionKey?: string; 
    item: Partial<T>; 
    key: DynamoDBKey<T>; 
    populateIndexFields?: boolean; 
}
```
**Type Declaration**  
+ `_version?: number` (optional)
+ `condition?: DynamoDBFilterObject<T> | null` (optional)

  When you put an object in a DynamoDB table, you can optionally specify a conditional expression that controls whether the request should succeed or not based on the state of the object already in DynamoDB before the operation is performed.
+ `customPartitionKey?: string` (optional)

  When enabled, this string value modifies the format of the `ds_sk` and `ds_pk` records used by the delta sync table when versioning has been enabled. When enabled, the processing of the `populateIndexFields` entry is also enabled. 
+ `item: Partial<T>` (required)

  The rest of the attributes of the item to be placed into DynamoDB.
+ `key: DynamoDBKey<T>` (required)

  A required parameter that specifies the key of the item in DynamoDB on which the put will be performed. DynamoDB items may have a single hash key or hash and sort keys.
+ `populateIndexFields?: boolean` (optional)

  A boolean value that, when enabled along with the `customPartitionKey`, creates new entries for each record in the delta sync table, specifically in the `gsi_ds_pk` and `gsi_ds_sk` columns. For more information, see [Conflict detection and sync](https://docs.aws.amazon.com/appsync/latest/devguide/conflict-detection-and-sync.html) in the *AWS AppSync GraphQL Developer Guide*.

**`Type QueryInput<T>`**  

```
QueryInput<T>: ScanInput<T> & { 
    query: DynamoDBKeyCondition<Required<T>>; 
}
```
**Type Declaration**  
+ `query: DynamoDBKeyCondition<Required<T>>` (required)

  Specifies a key condition that describes items to query. For a given index, the condition for a partition key should be an equality and the sort key a comparison or a `beginsWith` (when it's a string). Only number and string types are supported for partition and sort keys.

  **Example**

  Take the `User` type below:

  ```
  type User = {
    id: string;
    name: string;
    age: number;
    isVerified: boolean;
    friendsIds: string[] 
  }
  ```

  The query can only include the following fields: `id`, `name`, and `age`:

  ```
  const query: QueryInput<User> = {
      name: { eq: 'John' },
      age: { gt: 20 },
  }
  ```

**`Type RemoveInput<T>`**  

```
RemoveInput<T>: { 
    _version?: number; 
    condition?: DynamoDBFilterObject<T>; 
    customPartitionKey?: string; 
    key: DynamoDBKey<T>; 
    populateIndexFields?: boolean; 
}
```
**Type Declaration**  
+ `_version?: number` (optional)
+ `condition?: DynamoDBFilterObject<T>` (optional)

  When you remove an object in DynamoDB, you can optionally specify a conditional expression that controls whether the request should succeed or not based on the state of the object already in DynamoDB before the operation is performed.

  **Example**

  The following example is a `DeleteItem` expression containing a condition that allows the operation to succeed only if the owner of the document matches the user making the request.

  ```
  type Task = {
    id: string;
    title: string;
    description: string;
    owner: string;
    isComplete: boolean;
  }
  const condition: DynamoDBFilterObject<Task> = {
    owner: { eq: 'XXXXXXXXXXXXXXXX' },
  }
  
  remove<Task>({
     key: {
       id: 'XXXXXXXXXXXXXXXX',
    },
    condition,
  });
  ```
+ `customPartitionKey?: string` (optional)

  When enabled, the `customPartitionKey` value modifies the format of the `ds_sk` and `ds_pk` records used by the delta sync table when versioning has been enabled. When enabled, the processing of the `populateIndexFields` entry is also enabled. 
+ `key: DynamoDBKey<T>` (required)

  A required parameter that specifies the key of the item in DynamoDB that is being removed. DynamoDB items may have a single hash key or hash and sort keys.

  **Example**

  If a `User` only has the hash key with a user `id`, then the key would look like this:

  ```
  type User = {
  	id: number
  	name: string
  	age: number
  	isVerified: boolean
  }
  const key: DynamoDBKey<User> = {
  	id: 1,
  }
  ```

  If the table user has a hash key (`id`) and sort key (`name`), then the key would look like this:

  ```
  type User = {
  	id: number
  	name: string
  	age: number
  	isVerified: boolean
  	friendsIds: string[]
  }
  
  const key: DynamoDBKey<User> = {
  	id: 1,
  	name: 'XXXXXXXXXX',
  }
  ```
+ `populateIndexFields?: boolean` (optional)

  A boolean value that, when enabled along with the `customPartitionKey`, creates new entries for each record in the delta sync table, specifically in the `gsi_ds_pk` and `gsi_ds_sk` columns.

**`Type ScanInput<T>`**  

```
ScanInput<T>: { 
    consistentRead?: boolean | null; 
    filter?: DynamoDBFilterObject<T> | null; 
    index?: string | null; 
    limit?: number | null; 
    nextToken?: string | null; 
    scanIndexForward?: boolean | null; 
    segment?: number; 
    select?: DynamoDBSelectAttributes; 
    totalSegments?: number; 
}
```
**Type Declaration**  
+ `consistentRead?: boolean | null` (optional)

  An optional boolean to indicate consistent reads when querying DynamoDB. The default value is `false`.
+ `filter?: DynamoDBFilterObject<T> | null` (optional)

  An optional filter to apply to the results after retrieving them from the table.
+ `index?: string | null` (optional)

  An optional name of the index to scan.
+ `limit?: number | null` (optional)

  An optional maximum number of results to return.
+ `nextToken?: string | null` (optional)

  An optional pagination token to continue a previous query. This would have been obtained from a previous query.
+ `scanIndexForward?: boolean | null` (optional)

  An optional boolean to indicate whether the query is performed in ascending or descending order. By default, this value is set to `true`.
+ `segment?: number` (optional)
+ `select?: DynamoDBSelectAttributes` (optional)

  Attributes to return from DynamoDB. By default, the AWS AppSync DynamoDB resolver only returns attributes that are projected into the index. The supported values are:
  + `ALL_ATTRIBUTES`

    Returns all the item attributes from the specified table or index. If you query a local secondary index, DynamoDB fetches the entire item from the parent table for each matching item in the index. If the index is configured to project all item attributes, all of the data can be obtained from the local secondary index and no fetching is required.
  + `ALL_PROJECTED_ATTRIBUTES`

    Returns all attributes that have been projected into the index. If the index is configured to project all attributes, this return value is equivalent to specifying `ALL_ATTRIBUTES`.
  + `SPECIFIC_ATTRIBUTES`

    Returns only the attributes listed in `ProjectionExpression`. This return value is equivalent to specifying `ProjectionExpression` without specifying any value for `AttributesToGet`.
+ `totalSegments?: number` (optional)

**`Type DynamoDBSyncInput<T>`**  

```
DynamoDBSyncInput<T>: { 
    basePartitionKey?: string; 
    deltaIndexName?: string; 
    filter?: DynamoDBFilterObject<T> | null; 
    lastSync?: number; 
    limit?: number | null; 
    nextToken?: string | null; 
}
```
**Type Declaration**  
+ `basePartitionKey?: string` (optional)

  The partition key of the base table to be used when performing a Sync operation. This field allows a Sync operation to be performed when the table utilizes a custom partition key.
+ `deltaIndexName?: string` (optional)

  The index used for the Sync operation. This index is required to enable a Sync operation on the whole delta store table when the table uses a custom partition key. The Sync operation will be performed on the GSI (created on `gsi_ds_pk` and `gsi_ds_sk`).
+ `filter?: DynamoDBFilterObject<T> | null` (optional)

  An optional filter to apply to the results after retrieving them from the table.
+ `lastSync?: number` (optional)

  The moment, in epoch milliseconds, at which the last successful Sync operation started. If specified, only items that have changed after `lastSync` are returned. This field should only be populated after retrieving all pages from an initial Sync operation. If omitted, results from the base table will be returned. Otherwise, results from the delta table will be returned.
+ `limit?: number | null` (optional)

  An optional maximum number of items to evaluate at a single time. If omitted, the default limit will be set to `100` items. The maximum value for this field is `1000` items.
+ `nextToken?: string | null` (optional)

**`Type DynamoDBUpdateInput<T>`**  

```
DynamoDBUpdateInput<T>: { 
    _version?: number; 
    condition?: DynamoDBFilterObject<T>; 
    customPartitionKey?: string; 
    key: DynamoDBKey<T>; 
    populateIndexFields?: boolean; 
    update: DynamoDBUpdateObject<T>; 
}
```
**Type Declaration**  
+ `_version?: number` (optional)
+ `condition?: DynamoDBFilterObject<T>` (optional)

  When you update an object in DynamoDB, you can optionally specify a conditional expression that controls whether the request should succeed or not based on the state of the object already in DynamoDB before the operation is performed.
+ `customPartitionKey?: string` (optional)

  When enabled, the `customPartitionKey` value modifies the format of the `ds_sk` and `ds_pk` records used by the delta sync table when versioning has been enabled. When enabled, the processing of the `populateIndexFields` entry is also enabled. 
+ `key: DynamoDBKey<T>` (required)

  A required parameter that specifies the key of the item in DynamoDB that is being updated. DynamoDB items may have a single hash key or hash and sort keys.
+ `populateIndexFields?: boolean` (optional)

  A boolean value that, when enabled along with the `customPartitionKey`, creates new entries for each record in the delta sync table, specifically in the `gsi_ds_pk` and `gsi_ds_sk` columns. 
+ `update: DynamoDBUpdateObject<T>`

  An object that specifies the attributes to be updated along with the new values for them. The update object can be used with `add`, `remove`, `replace`, `increment`, `decrement`, `append`, `prepend`, `updateListItem`.

## Amazon RDS module functions
<a name="built-in-rds-modules"></a>

Amazon RDS module functions provide an enhanced experience when interacting with databases configured with the Amazon RDS Data API. The module is imported using `@aws-appsync/utils/rds`: 

```
import * as rds from '@aws-appsync/utils/rds';
```

Functions can also be imported individually. For instance, the following import brings in only `sql`:

```
import { sql } from '@aws-appsync/utils/rds';
```

### Select
<a name="built-in-rds-modules-functions-select"></a>

The `select` utility creates a `SELECT` statement to query your relational database. 

**Basic use**

In its basic form, you can specify the table you want to query.

```
import { select, createPgStatement } from '@aws-appsync/utils/rds';

export const onPublish = {
  request(ctx) {
    // Generates statement: 
    // "SELECT * FROM "persons"
    return createPgStatement(select({table: 'persons'}));
  }
}
```

You can also specify the schema in your table identifier:

```
import { select, createPgStatement } from '@aws-appsync/utils/rds';

export const onPublish = {
  request(ctx) {
    // Generates statement:
    // SELECT * FROM "private"."persons"
    return createPgStatement(select({table: 'private.persons'}));
  }
}
```

**Specifying columns**

You can specify columns with the `columns` property. If this isn't set to a value, it defaults to `*`.

```
export const onPublish = {
  request(ctx) {
    // Generates statement:
    // SELECT "id", "name"
    // FROM "persons"
    return createPgStatement(select({
        table: 'persons',
        columns: ['id', 'name']
    }));
  }
}
```

You can also specify a column's table.

```
export const onPublish = {
  request(ctx) {
    // Generates statement: 
    // SELECT "id", "persons"."name"
    // FROM "persons"
    return createPgStatement(select({
        table: 'persons',
        columns: ['id', 'persons.name']
    }));
  }
}
```

**Limits and offsets**

You can apply `limit` and `offset` to the query.

```
export const onPublish = {
  request(ctx) {
    // Generates statement: 
    // SELECT "id", "name"
    // FROM "persons"
    // LIMIT :limit
    // OFFSET :offset
    return createPgStatement(select({
        table: 'persons',
        columns: ['id', 'name'],
        limit: 10,
        offset: 40
    }));
  }
}
```

**Order By**

You can sort your results with the `orderBy` property. Provide an array of objects specifying the column and an optional `dir` property.

```
export const onPublish = {
  request(ctx) {
    // Generates statement: 
    // SELECT "id", "name" FROM "persons"
    // ORDER BY "name", "id" DESC
    return createPgStatement(select({
        table: 'persons',
        columns: ['id', 'name'],
        orderBy: [{column: 'name'}, {column: 'id', dir: 'DESC'}]
    }));
  }
}
```

**Filters**

You can build filters by using the special condition object.

```
export const onPublish = {
  request(ctx) {
    // Generates statement:
    // SELECT "id", "name"
    // FROM "persons"
    // WHERE "name" = :NAME
    return createPgStatement(select({
        table: 'persons',
        columns: ['id', 'name'],
        where: {name: {eq: 'Stephane'}}
    }));
  }
}
```

You can also combine filters.

```
export const onPublish = {
  request(ctx) {
    // Generates statement:
    // SELECT "id", "name"
    // FROM "persons"
    // WHERE "name" = :NAME and "id" > :ID
    return createPgStatement(select({
        table: 'persons',
        columns: ['id', 'name'],
        where: {name: {eq: 'Stephane'}, id: {gt: 10}}
    }));
  }
}
```

You can create `OR` statements.

```
export const onPublish = {
  request(ctx) {
    // Generates statement:
    // SELECT "id", "name"
    // FROM "persons"
    // WHERE "name" = :NAME OR "id" > :ID
    return createPgStatement(select({
        table: 'persons',
        columns: ['id', 'name'],
        where: { or: [
            { name: { eq: 'Stephane' } },
            { id: { gt: 10 } }
        ]}
    }));
  }
}
```

You can negate a condition with `not`.

```
export const onPublish = {
  request(ctx) {
    // Generates statement:
    // SELECT "id", "name"
    // FROM "persons"
    // WHERE NOT ("name" = :NAME AND "id" > :ID)
    return createPgStatement(select({
        table: 'persons',
        columns: ['id', 'name'],
        where: { not: [
            { name: { eq: 'Stephane' } },
            { id: { gt: 10 } }
        ]}
    }));
  }
}
```

You can also use the following operators to compare values:


| Operator | Description | Possible value types | 
| --- |--- |--- |
| eq | Equal | number, string, boolean | 
| ne | Not equal | number, string, boolean | 
| le | Less than or equal | number, string | 
| lt | Less than | number, string | 
| ge | Greater than or equal | number, string | 
| gt | Greater than | number, string | 
| contains | Like | string | 
| notContains | Not like | string | 
| beginsWith | Starts with prefix | string | 
| between | Between two values | number, string | 
| attributeExists | The attribute is not null | number, string, boolean | 
| size | Checks the length of the element | string | 
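
To make the operator mapping concrete, the following plain-JavaScript sketch renders a flat, one-level condition object into a `WHERE` fragment using a few of the comparison operators above. The operator-to-SQL mapping is abbreviated and the rendering is simplified: the real module escapes identifiers and binds values as parameters instead of inlining them.

```javascript
// Sketch: rendering a condition object such as
// { name: { eq: 'Stephane' }, id: { gt: 10 } } into a WHERE fragment.
const OPS = { eq: '=', ne: '<>', le: '<=', lt: '<', ge: '>=', gt: '>' };

function whereSketch(condition) {
  const parts = [];
  for (const [column, test] of Object.entries(condition)) {
    for (const [op, value] of Object.entries(test)) {
      // Values shown inline for readability only; real statements bind them.
      const rendered = typeof value === 'string' ? `'${value}'` : String(value);
      parts.push(`"${column}" ${OPS[op]} ${rendered}`);
    }
  }
  return parts.join(' AND ');
}

// whereSketch({ name: { eq: 'Stephane' }, id: { gt: 10 } })
// → "name" = 'Stephane' AND "id" > 10
```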

### Insert
<a name="built-in-rds-modules-functions-insert"></a>

The `insert` utility provides a straightforward way of inserting single rows into your database with the `INSERT` operation.

**Single item insertions**

To insert an item, specify the table and then pass in your object of values. The object keys are mapped to your table columns. Column names are automatically escaped, and values are sent to the database using the variable map.

```
import { insert, createMySQLStatement } from '@aws-appsync/utils/rds';

export const onPublish = {
  request(ctx) {
    const { input: values } = ctx.args;
    const insertStatement = insert({ table: 'persons', values });

    // Generates statement:
    // INSERT INTO `persons`(`name`)
    // VALUES(:NAME)
    return createMySQLStatement(insertStatement);
  }
}
```

**MySQL use case**

You can combine an `insert` followed by a `select` to retrieve your inserted row.

```
import { insert, select, createMySQLStatement } from '@aws-appsync/utils/rds';

export const onPublish = {
  request(ctx) {
    const { input: values } = ctx.args;
    const insertStatement = insert({ table: 'persons', values });
    const selectStatement = select({
        table: 'persons',
        columns: '*',
        where: { id: { eq: values.id } },
        limit: 1,
    });

    // Generates statement:
    // INSERT INTO `persons`(`name`)
    // VALUES(:NAME)
    // and
    // SELECT *
    // FROM `persons`
    // WHERE `id` = :ID
    return createMySQLStatement(insertStatement, selectStatement);
  }
}
```

**Postgres use case**

With Postgres, you can use the [`RETURNING` clause](https://www.postgresql.org/docs/current/dml-returning.html) to obtain data from the row that you inserted. It accepts `*` or an array of column names:

```
import { insert, createPgStatement } from '@aws-appsync/utils/rds';

export const onPublish = {
  request(ctx) {
    const { input: values } = ctx.args;
    const insertStatement = insert({
        table: 'persons',
        values,
        returning: '*'
    });

    // Generates statement:
    // INSERT INTO "persons"("name")
    // VALUES(:NAME)
    // RETURNING *
    return createPgStatement(insertStatement);
  }
}
```

### Update
<a name="built-in-rds-modules-functions-update"></a>

The `update` utility allows you to update existing rows. You can use the condition object to apply changes to the specified columns in all the rows that satisfy the condition. For example, let's presume that we have a schema that allows us to make this mutation. The following example updates the `name` of `Person` with the `id` value of `3` but only if we've known them (`known_since`) since the year `2000`.

```
mutation Update {
    updatePerson(
        input: {id: 3, name: "Jon"},
        condition: {known_since: {ge: "2000"}}
    ) {
    id
    name
  }
}
```

Our update handler looks like the following:

```
import { update, createPgStatement } from '@aws-appsync/utils/rds';

export const onPublish = {
  request(ctx) {
    const { input: { id, ...values }, condition } = ctx.args;
    const where = {
        ...condition,
        id: { eq: id },
    };
    const updateStatement = update({
        table: 'persons',
        values,
        where,
        returning: ['id', 'name'],
    });

    // Generates statement:
    // UPDATE "persons"
    // SET "name" = :NAME, "birthday" = :BDAY, "country" = :COUNTRY
    // WHERE "id" = :ID
    // RETURNING "id", "name"
    return createPgStatement(updateStatement);
  }
}
```

Adding the primary key check to our condition makes sure that only the row with `id` equal to `3` is updated. As with Postgres inserts, you can use `returning` to return the modified data. 
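
The spread order in the handler above matters: placing `id: { eq: id }` after the caller-supplied condition guarantees that the primary-key check cannot be overridden by the input. A plain-JavaScript illustration:

```javascript
// The caller's condition is merged first; the primary-key check wins on conflict.
const condition = { known_since: { ge: '2000' }, id: { eq: 999 } }; // hypothetical input
const id = 3;
const where = { ...condition, id: { eq: id } };
// where.id is { eq: 3 }, not { eq: 999 }; where.known_since is preserved
```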

### Remove
<a name="built-in-rds-modules-functions-remove"></a>

The `remove` utility allows you to delete existing rows. You can use the condition object to delete all rows that satisfy the condition. Note that `delete` is a reserved keyword in JavaScript; use `remove` instead.

```
import { remove, createPgStatement } from '@aws-appsync/utils/rds';

export const onPublish = {
  request(ctx) {
    const { input: { id }, condition } = ctx.args;
    const where = { ...condition, id: { eq: id } };
    const deleteStatement = remove({
        table: 'persons',
        where,
        returning: ['id', 'name'],
    });

    // Generates statement:
    // DELETE FROM "persons"
    // WHERE "id" = :ID
    // RETURNING "id", "name"
    return createPgStatement(deleteStatement);
  }
}
```

### Casting
<a name="built-in-rds-modules-casting"></a>

In some cases, you might require more specificity about the correct object type to use in your statement. You can use the provided type hints to specify the type of your parameters. AWS AppSync supports the [same type hints](https://docs.aws.amazon.com//rdsdataservice/latest/APIReference/API_SqlParameter.html#rdsdtataservice-Type-SqlParameter-typeHint) as the Data API. You can cast your parameters by using the `typeHint` functions from the AWS AppSync `rds` module. 

The following example allows you to send an array as a value that is cast as a JSON object. We use the `->` operator to retrieve the element at index `2` in the JSON array.

```
import { sql, createPgStatement, toJsonObject, typeHint } from '@aws-appsync/utils/rds';

export const onPublish = {
  request(ctx) {
    const arr = ctx.args.list_of_ids
    const statement = sql`select ${typeHint.JSON(arr)}->2 as value`
    return createPgStatement(statement)
  }
}
```

Casting is also useful when handling and comparing `DATE`, `TIME`, and `TIMESTAMP`:

```
import { select, createPgStatement, typeHint } from '@aws-appsync/utils/rds';

export const onPublish = {
  request(ctx) {
    const when = ctx.args.when
    const statement = select({
        table: 'persons',
        where: { createdAt : { gt: typeHint.DATETIME(when) } }
    })
    return createPgStatement(statement)
  }
}
```

The following example demonstrates how to send the current date and time.

```
import { util } from '@aws-appsync/utils';
import { sql, createPgStatement, typeHint } from '@aws-appsync/utils/rds';

export const onPublish = {
  request(ctx) {
    const now = util.time.nowFormatted('YYYY-MM-dd HH:mm:ss')
    return createPgStatement(sql`select ${typeHint.TIMESTAMP(now)}`)
  }
}
```

**Available type hints**
+ `typeHint.DATE` — The corresponding parameter is sent as an object of the `DATE` type to the database. The accepted format is `YYYY-MM-DD`.
+ `typeHint.DECIMAL` — The corresponding parameter is sent as an object of the `DECIMAL` type to the database.
+ `typeHint.JSON` — The corresponding parameter is sent as an object of the `JSON` type to the database.
+ `typeHint.TIME` — The corresponding string parameter value is sent as an object of the `TIME` type to the database. The accepted format is `HH:MM:SS[.FFF]`. 
+ `typeHint.TIMESTAMP` — The corresponding string parameter value is sent as an object of the `TIMESTAMP` type to the database. The accepted format is `YYYY-MM-DD HH:MM:SS[.FFF]`.
+ `typeHint.UUID` — The corresponding string parameter value is sent as an object of the `UUID` type to the database.
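
The accepted formats listed above can be checked before a value is sent. Here is a plain-JavaScript sketch of such a guard; the regular expressions are written directly from the formats stated above, not taken from the module:

```javascript
// Format checks derived from the accepted formats for the type hints above.
const FORMATS = {
  DATE: /^\d{4}-\d{2}-\d{2}$/,                                     // YYYY-MM-DD
  TIME: /^\d{2}:\d{2}:\d{2}(\.\d{1,3})?$/,                         // HH:MM:SS[.FFF]
  TIMESTAMP: /^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}(\.\d{1,3})?$/,  // YYYY-MM-DD HH:MM:SS[.FFF]
};

function matchesHintFormat(hint, value) {
  return FORMATS[hint].test(value);
}
```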

# Runtime utilities
<a name="runtime-utilities"></a>

The runtime library provides utilities to control or modify the runtime properties of your handlers and functions.

**`runtime.earlyReturn(obj?: unknown): never`**

Invoking this function stops the execution of the current handler (AWS AppSync Events API) and returns the specified object as the result. When it's called in an AWS AppSync Events handler, the data source and response function are skipped.

```
import { runtime } from '@aws-appsync/utils';
import * as ddb from '@aws-appsync/utils/dynamodb';

export const onPublish = {
  request(ctx) {
    // `condition` is a placeholder for your own short-circuit check
    const condition = ctx.events.length === 0;
    if (condition === true) {
      return runtime.earlyReturn(ctx.events)
    }
    // never executed if `condition` is true
    return ddb.batchPut({
      tables: {
        messages: ctx.events.map(({ id, payload }) => ({ 
          channel: ctx.info.channelNamespace.name, 
          id, 
          ...payload 
        })),
      }
    });
  },
  // never called if `condition` was true
  response: (ctx) => ctx.events 
}
```

# AWS AppSync JavaScript function reference for DynamoDB
<a name="dynamodb-function-reference"></a>

The Amazon DynamoDB functions allow you to use JavaScript to store and retrieve data in existing Amazon DynamoDB tables in your account. This section describes the request and response handlers for supported DynamoDB operations.
+  [GetItem](dynamodb-getitem.md) — The GetItem request lets you tell the DynamoDB function to make a GetItem request to DynamoDB, and enables you to specify the key of the item in DynamoDB and whether to use a consistent read.
+  [PutItem](dynamodb-putitem.md) — The PutItem request lets you tell the DynamoDB function to make a PutItem request to DynamoDB, and enables you to specify the key of the item in DynamoDB, the full contents of the item (composed of key and attributeValues), and conditions for the operation to succeed.
+  [UpdateItem](dynamodb-updateitem.md) — The UpdateItem request lets you tell the DynamoDB function to make an UpdateItem request to DynamoDB, and enables you to specify the key of the item in DynamoDB, an update expression describing how to update the item, and conditions for the operation to succeed.
+  [DeleteItem](dynamodb-deleteitem.md) — The DeleteItem request lets you tell the DynamoDB function to make a DeleteItem request to DynamoDB, and enables you to specify the key of the item in DynamoDB and conditions for the operation to succeed.
+  [Query](dynamodb-query.md) — The Query request object lets you tell the handler to make a Query request to DynamoDB, and enables you to specify the key expression, which index to use, additional filters, how many items to return, whether to use consistent reads, query direction (forward or backward), and pagination tokens.
+  [Scan](dynamodb-scan.md) — The Scan request lets you tell the DynamoDB function to make a Scan request to DynamoDB, and enables you to specify a filter to exclude results, which index to use, how many items to return, whether to use consistent reads, pagination tokens, and parallel scans.
+  [BatchGetItem](dynamodb-batchgetitem.md) — The BatchGetItem request object lets you tell the DynamoDB function to make a BatchGetItem request to DynamoDB to retrieve multiple items, potentially across multiple tables. For this request object, you must specify the table names to retrieve the items from and the keys of the items to retrieve from each table.
+  [BatchDeleteItem](dynamodb-batchdeleteitem.md) — The BatchDeleteItem request object lets you tell the DynamoDB function to make a BatchWriteItem request to DynamoDB to delete multiple items, potentially across multiple tables. For this request object, you must specify the table names to delete the items from and the keys of the items to delete from each table.
+  [BatchPutItem](dynamodb-batchputitem.md) — The BatchPutItem request object lets you tell the DynamoDB function to make a BatchWriteItem request to DynamoDB to put multiple items, potentially across multiple tables. For this request object, you must specify the table names to put the items in and the full items to put in each table.
+  [TransactGetItems](dynamodb-transactgetitems.md) — The TransactGetItems request object lets you tell the DynamoDB function to make a TransactGetItems request to DynamoDB to retrieve multiple items, potentially across multiple tables. For this request object, you must specify the table name of each request item to retrieve the item from and the key of each request item to retrieve from each table.
+  [TransactWriteItems](dynamodb-transactwriteitems.md) — The TransactWriteItems request object lets you tell the DynamoDB function to make a TransactWriteItems request to DynamoDB to write multiple items, potentially to multiple tables. For this request object, you must specify the destination table name of each request item, the operation of each request item to perform, and the key of each request item to write.
+  [Type system (request mapping)](dynamodb-typed-values-request.md) — Learn more about how DynamoDB typing is integrated into AWS AppSync requests.
+  [Type system (response mapping)](dynamodb-typed-values-responses.md) — Learn more about how DynamoDB types are converted automatically to JSON in a response payload.
+  [Filters](dynamodb-filter.md) — Learn more about filters for query and scan operations.
+  [Condition expressions](dynamodb-condition-expressions.md) — Learn more about condition expressions for PutItem, UpdateItem, and DeleteItem operations.
+  [Transaction condition expressions](dynamodb-transaction-condition-expressions.md) — Learn more about condition expressions for TransactWriteItems operations.
+  [Projections](dynamodb-projections.md) — Learn more about how to specify attributes in read operations.

# GetItem
<a name="dynamodb-getitem"></a>

**Note**  
We recommend using the DynamoDB built-in module to generate your request. For more information, see [Amazon DynamoDB built-in module](built-in-modules.md#DDB-built-in-module).

The `GetItem` request lets you tell the AWS AppSync DynamoDB function to make a `GetItem` request to DynamoDB, and enables you to specify:
+ The key of the item in DynamoDB
+ Whether to use a consistent read or not

The `GetItem` request has the following structure:

```
type DynamoDBGetItem = {
  operation: 'GetItem';
  key: { [key: string]: any };
  consistentRead?: ConsistentRead;
  projection?: {
    expression: string;
    expressionNames?: { [key: string]: string };
  };
};
```

The TypeScript definition above shows all available fields for the request. While you can construct this request manually, using the DynamoDB built-in module is the recommended approach for generating accurate and efficient requests.

## GetItem fields
<a name="js-getitem-list"></a>

 **`operation`**   
The DynamoDB operation to perform. To perform the `GetItem` DynamoDB operation, this must be set to `GetItem`. This value is required.

 **`key`**   
The key of the item in DynamoDB. DynamoDB items may have a single hash key, or a hash key and sort key, depending on the table structure. For more information about how to specify a “typed value”, see [Type system (request mapping)](dynamodb-typed-values-request.md). This value is required.

 **`consistentRead`**   
Whether or not to perform a strongly consistent read with DynamoDB. This is optional, and defaults to `false`.

**`projection`**  
A projection that's used to specify the attributes to return from the DynamoDB operation. For more information about projections, see [Projections](dynamodb-projections.md). This field is optional.

For more information about DynamoDB type conversion, see [Type system (response mapping)](dynamodb-typed-values-responses.md).

## Examples
<a name="js-example"></a>

```
import { util } from '@aws-appsync/utils';

export const onPublish = {
  request: (ctx) => ({
    operation : "GetItem",
    key : util.dynamodb.toMapValues({
      channel: ctx.info.channelNamespace.name, 
      id: ctx.events[0].payload.id}),
    consistentRead : true
  }),
  response(ctx) {
    return [{
      id: ctx.events[0].id,
      payload: ctx.result
    }]
  }
}
```

The following example uses the DynamoDB module utilities to achieve the same result.

```
import * as ddb from '@aws-appsync/utils/dynamodb'
export const onPublish = {
  request: (ctx) => ddb.get({
    key: {
      channel: ctx.info.channelNamespace.name, 
      id: ctx.events[0].payload.id
    },
    consistentRead: true
  }),
  response(ctx) {
    return [{
      id: ctx.events[0].id,
      payload: ctx.result
    }]
  }
}
```

For more information about the DynamoDB `GetItem` API, see the [DynamoDB API documentation](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_GetItem.html).

# PutItem
<a name="dynamodb-putitem"></a>

**Note**  
We recommend using the DynamoDB built-in module to generate your request. For more information, see [Amazon DynamoDB built-in module](built-in-modules.md#DDB-built-in-module).

The `PutItem` request enables you to create or replace items in DynamoDB through AWS AppSync. The request specifies the following:
+ Item Key: The unique identifier for the DynamoDB item
+ Item Contents: The complete item data, including both the `key` and `attributeValues`
+ Operation Conditions (optional): Rules that must be met for the operation to proceed

The `PutItem` request has the following structure:

```
type DynamoDBPutItemRequest = {
  operation: 'PutItem';
  key: { [key: string]: any };
  attributeValues: { [key: string]: any };
  condition?: ConditionCheckExpression;
  customPartitionKey?: string;
  populateIndexFields?: boolean;
  _version?: number;
};
```

The TypeScript definition above shows all available fields for the request. While you can construct this request manually, we recommend using the DynamoDB built-in module for generating accurate and efficient requests.

## PutItem fields
<a name="js-putitem-list"></a>

 **`operation`**   
The DynamoDB operation to perform. To perform the `PutItem` DynamoDB operation, this must be set to `PutItem`. This value is required.

 **`key`**   
The key of the item in DynamoDB. DynamoDB items may have a single hash key, or a hash key and sort key, depending on the table structure. For more information about how to specify a “typed value”, see [Type system (request mapping)](dynamodb-typed-values-request.md). This value is required.

 **`attributeValues`**   
 The rest of the attributes of the item to be put into DynamoDB. For more information about how to specify a “typed value”, see [Type system (request mapping)](dynamodb-typed-values-request.md). This value is required.

 **`condition`**   
A condition to determine if the request should succeed or not, based on the state of the object already in DynamoDB. If no condition is specified, the `PutItem` request overwrites any existing entry for that item. For more information about conditions, see [Condition expressions](dynamodb-condition-expressions.md). This value is optional.

 **`_version`**   
A numeric value that represents the latest known version of an item. This value is optional. This field is used for *Conflict Detection* and is only supported on versioned data sources.  
*Not supported in AWS AppSync Events*

**`customPartitionKey`**  
When enabled, this string value modifies the format of the `ds_sk` and `ds_pk` records used by the delta sync table when versioning has been enabled (for more information, see [Conflict detection and sync](https://docs.aws.amazon.com/appsync/latest/devguide/conflict-detection-and-sync.html) in the *AWS AppSync Developer Guide*). When enabled, the processing of the `populateIndexFields` entry is also enabled. This field is optional.  
*Not supported in AWS AppSync Events*

**`populateIndexFields`**  
A boolean value that, when enabled **along with the `customPartitionKey`**, creates new entries for each record in the delta sync table, specifically in the `gsi_ds_pk` and `gsi_ds_sk` columns. For more information, see [Conflict detection and sync](https://docs.aws.amazon.com/appsync/latest/devguide/conflict-detection-and-sync.html) in the *AWS AppSync Developer Guide*. This field is optional.  
*Not supported in AWS AppSync Events*

The item written to DynamoDB is automatically converted to JSON primitive types and is available in the context result (`context.result`).

For more information about DynamoDB type conversion, see [Type system (response mapping)](dynamodb-typed-values-responses.md).

For more information about the DynamoDB `PutItem` API, see the [DynamoDB API documentation](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_PutItem.html).
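
This section has no example, so here is a hedged sketch of a manually constructed `PutItem` request for a hypothetical table keyed on `id`, with the typed values written out by hand. In a real handler you would typically use `util.dynamodb.toMapValues` or the built-in DynamoDB module instead of writing typed values yourself.

```javascript
// Sketch: a hand-built PutItem request object (table layout and names hypothetical).
function putItemRequest(id, name) {
  return {
    operation: 'PutItem',
    key: { id: { S: id } },                 // typed value for the hash key
    attributeValues: { name: { S: name } }, // remaining attributes, typed
    condition: {
      // Only succeed if no item with this key already exists.
      expression: 'attribute_not_exists(#id)',
      expressionNames: { '#id': 'id' },
    },
  };
}
```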

# UpdateItem
<a name="dynamodb-updateitem"></a>

**Note**  
We recommend using the DynamoDB built-in module to generate your request. For more information, see [Amazon DynamoDB built-in module](built-in-modules.md#DDB-built-in-module).

The `UpdateItem` request enables you to modify existing items in DynamoDB through AWS AppSync. The request specifies the following:
+ Item Key: The unique identifier for the DynamoDB item to update
+ Item Expression: Describes how to modify the item in DynamoDB
+ Operation Conditions (optional): Rules that must be met for the update to proceed

The `UpdateItem` request has the following structure:

```
type DynamoDBUpdateItemRequest = {
  operation: 'UpdateItem';
  key: { [key: string]: any };
  update: {
    expression: string;
    expressionNames?: { [key: string]: string };
    expressionValues?: { [key: string]: any };
  };
  condition?: ConditionCheckExpression;
  customPartitionKey?: string;
  populateIndexFields?: boolean;
  _version?: number;
};
```

The TypeScript definition above shows all available fields for the request. While you can construct this request manually, we recommend using the DynamoDB built-in module for generating accurate and efficient requests.

## UpdateItem fields
<a name="js-updateitem-list"></a>

 **`operation`**   
The DynamoDB operation to perform. To perform the `UpdateItem` DynamoDB operation, this must be set to `UpdateItem`. This value is required.

 **`key`**   
The key of the item in DynamoDB. DynamoDB items may have a single hash key, or a hash key and sort key, depending on the table structure. For more information about specifying a “typed value”, see [Type system (request mapping)](dynamodb-typed-values-request.md). This value is required.

 **`update`**   
The `update` section lets you specify an update expression that describes how to update the item in DynamoDB. For more information about how to write update expressions, see the [DynamoDB UpdateExpressions documentation](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.UpdateExpressions.html). This section is required.  
The `update` section has three components:    
** `expression` **  
The update expression. This value is required.  
** `expressionNames` **  
The substitutions for expression attribute *name* placeholders, in the form of key-value pairs. The key corresponds to a name placeholder used in the `expression`, and the value must be a string corresponding to the attribute name of the item in DynamoDB. This field is optional, and should only be populated with substitutions for expression attribute name placeholders used in the `expression`.  
** `expressionValues` **  
The substitutions for expression attribute *value* placeholders, in the form of key-value pairs. The key corresponds to a value placeholder used in the `expression`, and the value must be a typed value. For more information about how to specify a “typed value”, see [Type system (request mapping)](dynamodb-typed-values-request.md). This must be specified. This field is optional, and should only be populated with substitutions for expression attribute value placeholders used in the `expression`.

 **`condition`**   
A condition to determine if the request should succeed or not, based on the state of the object already in DynamoDB. If no condition is specified, the `UpdateItem` request updates the existing entry regardless of its current state. For more information about conditions, see [Condition expressions](dynamodb-condition-expressions.md). This value is optional.

 **`_version`**   
A numeric value that represents the latest known version of an item. This value is optional. This field is used for *Conflict Detection* and is only supported on versioned data sources.  
*Not supported in AWS AppSync Events*

**`customPartitionKey`**  
When enabled, this string value modifies the format of the `ds_sk` and `ds_pk` records used by the delta sync table when versioning has been enabled (for more information, see [Conflict detection and sync](https://docs.aws.amazon.com/appsync/latest/devguide/conflict-detection-and-sync.html) in the *AWS AppSync Developer Guide*). When enabled, the processing of the `populateIndexFields` entry is also enabled. This field is optional.  
*Not supported in AWS AppSync Events*

**`populateIndexFields`**  
A boolean value that, when enabled **along with the `customPartitionKey`**, creates new entries for each record in the delta sync table, specifically in the `gsi_ds_pk` and `gsi_ds_sk` columns. For more information, see [Conflict detection and sync](https://docs.aws.amazon.com/appsync/latest/devguide/conflict-detection-and-sync.html) in the *AWS AppSync Developer Guide*. This field is optional.  
*Not supported in AWS AppSync Events*
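
As an illustration of the fields above, here is a sketch of a hand-built `UpdateItem` request that sets a single attribute. The table layout, names, and values are hypothetical; the built-in DynamoDB module remains the recommended way to generate requests.

```javascript
// Sketch: an UpdateItem request that sets "name" on the item with the given key.
function updateNameRequest(id, newName) {
  return {
    operation: 'UpdateItem',
    key: { id: { S: id } },
    update: {
      expression: 'SET #name = :name',
      expressionNames: { '#name': 'name' },          // attribute-name placeholder
      expressionValues: { ':name': { S: newName } }, // typed value placeholder
    },
  };
}
```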

# DeleteItem
<a name="dynamodb-deleteitem"></a>

**Note**  
We recommend using the DynamoDB built-in module to generate your request. For more information, see [Amazon DynamoDB built-in module](built-in-modules.md#DDB-built-in-module).

The `DeleteItem` request enables you to delete an item in a DynamoDB table. The request specifies the following:
+ The key of the item in DynamoDB
+ Conditions for the operation to succeed

The `DeleteItem` request has the following structure:

```
type DynamoDBDeleteItemRequest = {
  operation: 'DeleteItem';
  key: { [key: string]: any };
  condition?: ConditionCheckExpression;
  customPartitionKey?: string;
  populateIndexFields?: boolean;
  _version?: number;
};
```

The TypeScript definition above shows all available fields for the request. While you can construct this request manually, we recommend using the DynamoDB built-in module for generating accurate and efficient requests.

## DeleteItem fields
<a name="js-deleteitem-list"></a>

** `operation` **  
The DynamoDB operation to perform. To perform the `DeleteItem` DynamoDB operation, this must be set to `DeleteItem`. This value is required.

** `key` **  
The key of the item in DynamoDB. DynamoDB items may have a single hash key, or a hash key and sort key, depending on the table structure. For more information about specifying a “typed value”, see [Type system (request mapping)](dynamodb-typed-values-request.md). This value is required.

** `condition` **  
A condition to determine if the request should succeed or not, based on the state of the object already in DynamoDB. If no condition is specified, the `DeleteItem` request deletes an item regardless of its current state. For more information about conditions, see [Condition expressions](dynamodb-condition-expressions.md). This value is optional.

** `_version` **  
A numeric value that represents the latest known version of an item. This value is optional. This field is used for *Conflict Detection* and is only supported on versioned data sources.  
*Not supported in AWS AppSync Events*

**`customPartitionKey`**  
When enabled, this string value modifies the format of the `ds_sk` and `ds_pk` records used by the delta sync table when versioning has been enabled (for more information, see [Conflict detection and sync](https://docs.aws.amazon.com/appsync/latest/devguide/conflict-detection-and-sync.html) in the *AWS AppSync Developer Guide*). When enabled, the processing of the `populateIndexFields` entry is also enabled. This field is optional.  
*Not supported in AWS AppSync Events*

**`populateIndexFields`**  
A boolean value that, when enabled **along with the `customPartitionKey`**, creates new entries for each record in the delta sync table, specifically in the `gsi_ds_pk` and `gsi_ds_sk` columns. For more information, see [Conflict detection and sync](https://docs.aws.amazon.com/appsync/latest/devguide/conflict-detection-and-sync.html) in the *AWS AppSync Developer Guide*. This field is optional.   
*Not supported in AWS AppSync Events*

For more information about the DynamoDB `DeleteItem` API, see the [DynamoDB API documentation](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DeleteItem.html).
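
As a companion to the fields above, here is a sketch of a hand-built `DeleteItem` request guarded by a condition (table layout and names hypothetical; the built-in module is still the recommended path):

```javascript
// Sketch: a DeleteItem request that only succeeds if the item exists.
function deleteItemRequest(id) {
  return {
    operation: 'DeleteItem',
    key: { id: { S: id } },
    condition: {
      expression: 'attribute_exists(#id)',
      expressionNames: { '#id': 'id' },
    },
  };
}
```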

# Query
<a name="dynamodb-query"></a>

**Note**  
We recommend using the DynamoDB built-in module to generate your request. For more information, see [Amazon DynamoDB built-in module](built-in-modules.md#DDB-built-in-module).

The `Query` request enables you to efficiently select all items in a DynamoDB table that match a key condition. The request specifies the following:
+ Key expression
+ Which index to use
+ Any additional filter
+ How many items to return
+ Whether to use consistent reads
+ Query direction (forward or backward)
+ Pagination token

The `Query` request object has the following structure:

```
type DynamoDBQueryRequest = {
  operation: 'Query';
  query: {
    expression: string;
    expressionNames?: { [key: string]: string };
    expressionValues?: { [key: string]: any };
  };
  index?: string;
  nextToken?: string;
  limit?: number;
  scanIndexForward?: boolean;
  consistentRead?: boolean;
  select?: 'ALL_ATTRIBUTES' | 'ALL_PROJECTED_ATTRIBUTES' | 'SPECIFIC_ATTRIBUTES';
  filter?: {
    expression: string;
    expressionNames?: { [key: string]: string };
    expressionValues?: { [key: string]: any };
  };
  projection?: {
    expression: string;
    expressionNames?: { [key: string]: string };
  };
};
```

The TypeScript definition above shows all available fields for the request. While you can construct this request manually, we recommend using the DynamoDB built-in module for generating accurate and efficient requests.
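As an illustration, a handler might assemble this shape by hand. In the following sketch, the attribute name `owner`, the 20-item page size, and the descending sort order are illustrative assumptions, not part of the API contract:

```javascript
// Sketch of a manually constructed Query request.
// "#owner" is an expression attribute name placeholder and ":owner"
// is an expression attribute value placeholder.
function queryRequest(ownerId, nextToken) {
  return {
    operation: 'Query',
    query: {
      expression: '#owner = :owner',
      expressionNames: { '#owner': 'owner' },
      // Hand-built requests must use DynamoDB typed notation for values.
      expressionValues: { ':owner': { S: ownerId } },
    },
    limit: 20,
    scanIndexForward: false, // newest items first
    nextToken, // pass through the token from a previous page, if any
  };
}
```

Passing the `nextToken` from a previous result continues the same query; omitting it starts from the beginning.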

## Query fields
<a name="js-query-list"></a>

** `operation` **  
The DynamoDB operation to perform. To perform the `Query` DynamoDB operation, this must be set to `Query`. This value is required.

** `query` **  
The `query` section lets you specify a key condition expression that describes which items to retrieve from DynamoDB. For more information about how to write key condition expressions, see the [DynamoDB KeyConditions documentation](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LegacyConditionalParameters.KeyConditions.html). This section must be specified.    
** `expression` **  
The query expression. This field must be specified.  
** `expressionNames` **  
The substitutions for expression attribute *name* placeholders, in the form of key-value pairs. The key corresponds to a name placeholder used in the `expression`, and the value must be a string corresponding to the attribute name of the item in DynamoDB. This field is optional, and should only be populated with substitutions for expression attribute name placeholders used in the `expression`.  
** `expressionValues` **  
The substitutions for expression attribute *value* placeholders, in the form of key-value pairs. The key corresponds to a value placeholder used in the `expression`, and the value must be a typed value. For more information about how to specify a “typed value”, see [Type system (request mapping)](dynamodb-typed-values-request.md). This field is optional, and should only be populated with substitutions for expression attribute value placeholders used in the `expression`.

** `filter` **  
An additional filter that can be used to filter the results from DynamoDB before they are returned. For more information about filters, see [Filters](dynamodb-filter.md). This field is optional.

** `index` **  
The name of the index to query. The DynamoDB `Query` operation supports querying local secondary indexes and global secondary indexes in addition to the primary key index. If specified, this tells DynamoDB to query the specified index. If omitted, the primary key index is queried.

** `nextToken` **  
The pagination token to continue a previous query. This would have been obtained from a previous query. This field is optional.

** `limit` **  
The maximum number of items to evaluate (not necessarily the number of matching items). This field is optional.

** `scanIndexForward` **  
A boolean indicating whether to query forwards or backwards. This field is optional, and defaults to `true`.

** `consistentRead` **  
A boolean indicating whether to use consistent reads when querying DynamoDB. This field is optional, and defaults to `false`.

** `select` **  
By default, the AWS AppSync DynamoDB resolver only returns attributes that are projected into the index. If more attributes are required, you can set this field. This field is optional. The supported values are:    
** `ALL_ATTRIBUTES` **  
Returns all of the item attributes from the specified table or index. If you query a local secondary index, DynamoDB fetches the entire item from the parent table for each matching item in the index. If the index is configured to project all item attributes, all of the data can be obtained from the local secondary index and no fetching is required.  
** `ALL_PROJECTED_ATTRIBUTES` **  
Allowed only when querying an index. Retrieves all attributes that have been projected into the index. If the index is configured to project all attributes, this return value is equivalent to specifying `ALL_ATTRIBUTES`.  
**`SPECIFIC_ATTRIBUTES`**  
Returns only the attributes listed in the `projection`'s `expression`. This return value is equivalent to specifying the `projection`'s `expression` without specifying any value for `Select`.

**`projection`**  
A projection that's used to specify the attributes to return from the DynamoDB operation. For more information about projections, see [Projections](dynamodb-projections.md). This field is optional.

For more information about the DynamoDB `Query` API, see the [DynamoDB API documentation](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html).

# Scan
<a name="dynamodb-scan"></a>

**Note**  
We recommend using the DynamoDB built-in module to generate your request. For more information, see [Amazon DynamoDB built-in module](built-in-modules.md#DDB-built-in-module).

The `Scan` request scans for items across a DynamoDB table. The request specifies the following:
+ A filter to exclude results
+ Which index to use
+ How many items to return
+ Whether to use consistent reads
+ Pagination token
+ Parallel scans

The `Scan` request object has the following structure:

```
type DynamoDBScanRequest = {
  operation: 'Scan';
  index?: string;
  limit?: number;
  consistentRead?: boolean;
  nextToken?: string;
  totalSegments?: number;
  segment?: number;
  filter?: {
    expression: string;
    expressionNames?: { [key: string]: string };
    expressionValues?: { [key: string]: any };
  };
  projection?: {
    expression: string;
    expressionNames?: { [key: string]: string };
  };
};
```

The TypeScript definition above shows all available fields for the request. While you can construct this request manually, we recommend using the DynamoDB built-in module for generating accurate and efficient requests.
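As a sketch of the parallel-scan fields, the following hand-built request assigns one worker a single segment of the table. The filter attribute `status` and its `ACTIVE` value are illustrative assumptions:

```javascript
// Sketch of a manually constructed parallel Scan request. Each caller
// scans one segment out of totalSegments; results are combined by the
// caller across segments.
function scanSegmentRequest(segment, totalSegments, nextToken) {
  return {
    operation: 'Scan',
    limit: 50,
    segment,
    totalSegments,
    nextToken, // continue this segment's scan from a previous page, if any
    filter: {
      expression: '#status = :active',
      expressionNames: { '#status': 'status' },
      expressionValues: { ':active': { S: 'ACTIVE' } },
    },
  };
}
```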

## Scan fields
<a name="js-scan-list"></a>

** `operation` **  
The DynamoDB operation to perform. To perform the `Scan` DynamoDB operation, this must be set to `Scan`. This value is required.

** `filter` **  
A filter that can be used to filter the results from DynamoDB before they are returned. For more information about filters, see [Filters](dynamodb-filter.md). This field is optional.

** `index` **  
The name of the index to scan. The DynamoDB `Scan` operation supports scanning local secondary indexes and global secondary indexes in addition to the table itself. If specified, this tells DynamoDB to scan the specified index. If omitted, the table is scanned.

** `limit` **  
The maximum number of items to evaluate at a single time. This field is optional.

** `consistentRead` **  
A Boolean that indicates whether to use consistent reads when querying DynamoDB. This field is optional, and defaults to `false`.

** `nextToken` **  
The pagination token to continue a previous query. This would have been obtained from a previous query. This field is optional.

** `select` **  
By default, the AWS AppSync DynamoDB function only returns attributes that are projected into the index. If more attributes are required, you can set this field. This field is optional. The supported values are:    
** `ALL_ATTRIBUTES` **  
Returns all of the item attributes from the specified table or index. If you query a local secondary index, DynamoDB fetches the entire item from the parent table for each matching item in the index. If the index is configured to project all item attributes, all of the data can be obtained from the local secondary index and no fetching is required.  
** `ALL_PROJECTED_ATTRIBUTES` **  
Allowed only when querying an index. Retrieves all attributes that have been projected into the index. If the index is configured to project all attributes, this return value is equivalent to specifying `ALL_ATTRIBUTES`.  
**`SPECIFIC_ATTRIBUTES`**  
Returns only the attributes listed in the `projection`'s `expression`. This return value is equivalent to specifying the `projection`'s `expression` without specifying any value for `Select`.

** `totalSegments` **  
The number of segments to partition the table by when performing a parallel scan. This field is optional, but must be specified if `segment` is specified.

** `segment` **  
The table segment in this operation when performing a parallel scan. This field is optional, but must be specified if `totalSegments` is specified.

**`projection`**  
A projection that's used to specify the attributes to return from the DynamoDB operation. For more information about projections, see [Projections](dynamodb-projections.md). This field is optional.

The results have the following structure:

```
{
    "items": [ ... ],
    "nextToken": "a pagination token",
    "scannedCount": 10
}
```

The fields are defined as follows:

** `items` **  
A list containing the items returned by the DynamoDB scan.

** `nextToken` **  
If there might be more results, `nextToken` contains a pagination token that you can use in another request. AWS AppSync encrypts and obfuscates the pagination token returned from DynamoDB. This prevents your table data from being inadvertently leaked to the caller. Also, these pagination tokens can’t be used across different functions.

** `scannedCount` **  
The number of items that were retrieved by DynamoDB before a filter expression (if present) was applied.
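Because the pagination token is opaque and encrypted, a response handler should pass it back to the caller unchanged. The `toPage` helper below is a hypothetical sketch assuming the `{ items, nextToken, scannedCount }` result shape described above:

```javascript
// Sketch of shaping a Scan (or Query) result for a caller. The token is
// only meaningful to the same AWS AppSync function that produced it, so
// it is forwarded verbatim rather than inspected or modified.
function toPage(result) {
  return {
    items: result.items ?? [],
    nextToken: result.nextToken ?? null, // null signals no more pages
  };
}
```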

For more information about the DynamoDB `Scan` API, see the [DynamoDB API documentation](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Scan.html).

# BatchGetItem
<a name="dynamodb-batchgetitem"></a>

**Note**  
We recommend using the DynamoDB built-in module to generate your request. For more information, see [Amazon DynamoDB built-in module](built-in-modules.md#DDB-built-in-module).

The `BatchGetItem` request object enables you to retrieve multiple items, potentially across multiple DynamoDB tables. For this request object, you must specify the following:
+ The names of the tables to retrieve the items from
+ The keys of the items to retrieve from each table

The DynamoDB `BatchGetItem` limits apply and **no condition expression** can be provided.

The `BatchGetItem` request object has the following structure:

```
type DynamoDBBatchGetItemRequest = {
  operation: 'BatchGetItem';
  tables: {
    [tableName: string]: {
      keys: { [key: string]: any }[];
      consistentRead?: boolean; 
      projection?: {
        expression: string;
        expressionNames?: { [key: string]: string };
      };
    };
  };
};
```

The TypeScript definition above shows all available fields for the request. While you can construct this request manually, we recommend using the DynamoDB built-in module for generating accurate and efficient requests.
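As an illustration, the sketch below fetches one author record and several post records in a single request. The table names (`Authors`, `Posts`) and key attributes (`authorId`, `postId`) are illustrative assumptions:

```javascript
// Sketch of a manually constructed BatchGetItem request across two
// hypothetical tables.
function batchGetRequest(authorId, postIds) {
  return {
    operation: 'BatchGetItem',
    tables: {
      Authors: {
        keys: [{ authorId: { S: authorId } }],
        consistentRead: true,
      },
      Posts: {
        keys: postIds.map((id) => ({ postId: { S: id } })),
        // Return only these attributes for each post.
        projection: { expression: 'postId, title' },
      },
    },
  };
}
```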

## BatchGetItem fields
<a name="js-BatchGetItem-list"></a>

** `operation` **  
The DynamoDB operation to perform. To perform the `BatchGetItem` DynamoDB operation, this must be set to `BatchGetItem`. This value is required.

** `tables` **  
The DynamoDB tables to retrieve the items from. The value is a map where table names are specified as the keys of the map. At least one table must be provided. This `tables` value is required.    
** `keys` **  
List of DynamoDB keys representing the primary key of the items to retrieve. DynamoDB items may have a single hash key, or a hash key and sort key, depending on the table structure. For more information about how to specify a “typed value”, see [Type system (request mapping)](dynamodb-typed-values-request.md).  
** `consistentRead` **  
Whether to use a consistent read when executing a *GetItem* operation. This value is optional and defaults to *false*.  
**`projection`**  
A projection that's used to specify the attributes to return from the DynamoDB operation. For more information about projections, see [Projections](dynamodb-projections.md). This field is optional.

Things to remember:
+ If an item has not been retrieved from the table, a *null* element appears in the data block for that table.
+ Invocation results are sorted per table, based on the order in which they were provided inside the request object.
+ Each `Get` command inside a `BatchGetItem` is atomic; however, a batch can be partially processed. If a batch is partially processed due to an error, the unprocessed keys are returned as part of the invocation result inside the *unprocessedKeys* block.
+  `BatchGetItem` is limited to 100 keys.

Response structure

```
type Response = {
  data: {
    [tableName: string]: {[key: string]: any}[]
  }
  unprocessedKeys: {
    [tableName: string]: {[key: string]: string}[]
  }
}
```

# BatchDeleteItem
<a name="dynamodb-batchdeleteitem"></a>

**Note**  
We recommend using the DynamoDB built-in module to generate your request. For more information, see [Amazon DynamoDB built-in module](built-in-modules.md#DDB-built-in-module).

The `BatchDeleteItem` request deletes multiple items, potentially across multiple tables using a `BatchWriteItem` request. The request specifies the following:
+ The names of the tables to delete the items from
+ The keys of the items to delete from each table

The DynamoDB `BatchWriteItem` limits apply and **no condition expression** can be provided.

The `BatchDeleteItem` request object has the following structure:

```
type DynamoDBBatchDeleteItemRequest = {
  operation: 'BatchDeleteItem';
  tables: {
    [tableName: string]: { [key: string]: any }[];
  };
};
```

The TypeScript definition above shows all available fields for the request. While you can construct this request manually, we recommend using the DynamoDB built-in module for generating accurate and efficient requests.
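As a sketch, the following builder deletes a set of posts from a single hypothetical `Posts` table and guards the 25-key batch limit up front. The table and key names are illustrative assumptions:

```javascript
// Sketch of a manually constructed BatchDeleteItem request.
function batchDeleteRequest(ids) {
  // BatchDeleteItem accepts at most 25 keys per request.
  if (ids.length > 25) {
    throw new Error('BatchDeleteItem is limited to 25 keys');
  }
  return {
    operation: 'BatchDeleteItem',
    tables: {
      Posts: ids.map((id) => ({ postId: { S: id } })),
    },
  };
}
```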

## BatchDeleteItem fields
<a name="js-BatchDeleteItem-list"></a>

** `operation` **  
The DynamoDB operation to perform. To perform the `BatchDeleteItem` DynamoDB operation, this must be set to `BatchDeleteItem`. This value is required.

** `tables` **  
The DynamoDB tables to delete the items from. Each table is a list of DynamoDB keys representing the primary key of the items to delete. DynamoDB items may have a single hash key, or a hash key and sort key, depending on the table structure. For more information about how to specify a “typed value”, see [Type system (request mapping)](dynamodb-typed-values-request.md). At least one table must be provided. The `tables` value is required.

Things to remember:
+ Unlike the `DeleteItem` operation, the fully deleted item isn’t returned in the response. Only the passed key is returned.
+ If an item has not been deleted from the table, a *null* element appears in the data block for that table.
+ Invocation results are sorted per table, based on the order in which they were provided inside the request object.
+ Each `Delete` command inside a `BatchDeleteItem` is atomic; however, a batch can be partially processed. If a batch is partially processed due to an error, the unprocessed keys are returned as part of the invocation result inside the *unprocessedKeys* block.
+  `BatchDeleteItem` is limited to 25 keys.
+ This operation **is not** supported when used with conflict detection. Using both at the same time may result in an error.

Response structure (in `ctx.result`)

```
type Response = {
  data: {
    [tableName: string]: {[key: string]: any}[]
  }
  unprocessedKeys: {
    [tableName: string]: {[key: string]: any}[]
  }
}
```

The `ctx.error` contains details about the error. The keys **data**, **unprocessedKeys**, and each table key that was provided in the function request object are guaranteed to be present in the invocation result. Items that have been deleted are present in the **data** block. Items that haven’t been processed are marked as *null* inside the data block and are placed inside the **unprocessedKeys** block.

# BatchPutItem
<a name="dynamodb-batchputitem"></a>

**Note**  
We recommend using the DynamoDB built-in module to generate your request. For more information, see [Amazon DynamoDB built-in module](built-in-modules.md#DDB-built-in-module).

The `BatchPutItem` request enables you to put multiple items, potentially across multiple DynamoDB tables using a `BatchWriteItem` request. The request specifies the following:
+ The names of the tables to put the items in
+ The full list of items to put in each table

The DynamoDB `BatchWriteItem` limits apply and **no condition expression** can be provided.

The `BatchPutItem` request object has the following structure:

```
type DynamoDBBatchPutItemRequest = {
  operation: 'BatchPutItem';
  tables: {
    [tableName: string]: { [key: string]: any}[];
  };
};
```

The TypeScript definition above shows all available fields for the request. While you can construct this request manually, we recommend using the DynamoDB built-in module for generating accurate and efficient requests.
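As an illustration, the sketch below writes a list of posts into a single hypothetical `Posts` table. The table name and the item attributes (`postId`, `title`, `views`) are illustrative assumptions:

```javascript
// Sketch of a manually constructed BatchPutItem request. Each entry in
// the table's list is a full item in DynamoDB typed notation, including
// its primary key attributes.
function batchPutRequest(posts) {
  return {
    operation: 'BatchPutItem',
    tables: {
      Posts: posts.map((p) => ({
        postId: { S: p.id },
        title: { S: p.title },
        views: { N: p.views },
      })),
    },
  };
}
```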

## BatchPutItem fields
<a name="js-BatchPutItem-list"></a>

** `operation` **  
The DynamoDB operation to perform. To perform the `BatchPutItem` DynamoDB operation, this must be set to `BatchPutItem`. This value is required.

** `tables` **  
The DynamoDB tables to put the items in. Each table entry represents a list of DynamoDB items to insert for this specific table. At least one table must be provided. This value is required.

Things to remember:
+ The fully inserted items are returned in the response, if successful.
+ If an item hasn’t been inserted in the table, a *null* element is displayed in the data block for that table.
+ The inserted items are sorted per table, based on the order in which they were provided inside the request object.
+ Each `Put` command inside a `BatchPutItem` is atomic; however, a batch can be partially processed. If a batch is partially processed due to an error, the unprocessed items are returned as part of the invocation result inside the *unprocessedItems* block.
+  `BatchPutItem` is limited to 25 items.
+ This operation **is not** supported when used with conflict detection. Using both at the same time may result in an error.

Response structure (in `ctx.result`)

```
type Response = {
  data: {
    [tableName: string]: {[key: string]: any}[]
  }
  unprocessedItems: {
    [tableName: string]: {[key: string]: any}[]
  }
}
```

The `ctx.error` contains details about the error. The keys **data**, **unprocessedItems**, and each table key that was provided in the request object are guaranteed to be present in the invocation result. Items that have been inserted are in the **data** block. Items that haven’t been processed are marked as *null* inside the data block and are placed inside the **unprocessedItems** block.

# TransactGetItems
<a name="dynamodb-transactgetitems"></a>

**Note**  
We recommend using the DynamoDB built-in module to generate your request. For more information, see [Amazon DynamoDB built-in module](built-in-modules.md#DDB-built-in-module).

The `TransactGetItems` request object retrieves multiple items, potentially across multiple DynamoDB tables in a single transaction. The request specifies the following:
+ The names of the tables to retrieve each item from
+ The key of each request item to retrieve from each table

The DynamoDB `TransactGetItems` limits apply and **no condition expression** can be provided.

The `TransactGetItems` request object has the following structure:

```
type DynamoDBTransactGetItemsRequest = {
  operation: 'TransactGetItems';
  transactItems: {
    table: string;
    key: { [key: string]: any };
    projection?: {
      expression: string;
      expressionNames?: { [key: string]: string };
    };
  }[];
};
```

The TypeScript definition above shows all available fields for the request. While you can construct this request manually, we recommend using the DynamoDB built-in module for generating accurate and efficient requests.
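As an illustration, the sketch below reads an order and its customer in one transaction. The table names (`Orders`, `Customers`) and key attributes are illustrative assumptions:

```javascript
// Sketch of a manually constructed TransactGetItems request. Results
// come back in the same order as the request items.
function transactGetRequest(orderId, customerId) {
  return {
    operation: 'TransactGetItems',
    transactItems: [
      { table: 'Orders', key: { orderId: { S: orderId } } },
      {
        table: 'Customers',
        key: { customerId: { S: customerId } },
        // Only return these attributes for the customer item.
        projection: { expression: 'customerId, email' },
      },
    ],
  };
}
```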

## TransactGetItems fields
<a name="js-TransactGetItems-list"></a>

** `operation` **  
The DynamoDB operation to perform. To perform the `TransactGetItems` DynamoDB operation, this must be set to `TransactGetItems`. This value is required.

** `transactItems` **  
The request items to include. The value is an array of request items. At least one request item must be provided. This `transactItems` value is required.    
** `table` **  
The DynamoDB table to retrieve the item from. The value is a string of the table name. This `table` value is required.  
** `key` **  
The DynamoDB key representing the primary key of the item to retrieve. DynamoDB items may have a single hash key, or a hash key and sort key, depending on the table structure. For more information about how to specify a “typed value”, see [Type system (request mapping)](dynamodb-typed-values-request.md).  
**`projection`**  
A projection that's used to specify the attributes to return from the DynamoDB operation. For more information about projections, see [Projections](dynamodb-projections.md). This field is optional.

Things to remember:
+ If a transaction succeeds, the order of retrieved items in the `items` block will be the same as the order of request items.
+ Transactions are performed in an all-or-nothing way. If any request item causes an error, the whole transaction will not be performed and error details will be returned.
+ A request item being unable to be retrieved is not an error. Instead, a *null* element appears in the *items* block in the corresponding position.
+ If the error of a transaction is *TransactionCanceledException*, the `cancellationReasons` block will be populated. The order of cancellation reasons in `cancellationReasons` block will be the same as the order of request items.
+  `TransactGetItems` is limited to 100 request items.

Response structure (in `ctx.result`)

```
type Response = {
  items?: ({[key: string]: any} | null)[];
  cancellationReasons?: {
    type: string;
    message: string;
  }[]
}
```

The `ctx.error` contains details about the error. The keys **items** and **cancellationReasons** are guaranteed to be present in `ctx.result`.

# TransactWriteItems
<a name="dynamodb-transactwriteitems"></a>

**Note**  
We recommend using the DynamoDB built-in module to generate your request. For more information, see [Amazon DynamoDB built-in module](built-in-modules.md#DDB-built-in-module).

The `TransactWriteItems` request writes multiple items, potentially to multiple DynamoDB tables. The request specifies the following:
+ The destination table name of each request item
+ The operation to perform for each request item. There are four types of operations that are supported: *PutItem*, *UpdateItem*, *DeleteItem*, and *ConditionCheck* 
+ The key of each request item to write

The DynamoDB `TransactWriteItems` limits apply.

The `TransactWriteItems` request object has the following structure:

```
type DynamoDBTransactWriteItemsRequest = {
  operation: 'TransactWriteItems';
  transactItems: TransactItem[];
};
type TransactItem =
  | TransactWritePutItem
  | TransactWriteUpdateItem
  | TransactWriteDeleteItem
  | TransactWriteConditionCheckItem;
type TransactWritePutItem = {
  table: string;
  operation: 'PutItem';
  key: { [key: string]: any };
  attributeValues: { [key: string]: string};
  condition?: TransactConditionCheckExpression;
};
type TransactWriteUpdateItem = {
  table: string;
  operation: 'UpdateItem';
  key: { [key: string]: any };
  update: DynamoDBExpression;
  condition?: TransactConditionCheckExpression;
};
type TransactWriteDeleteItem = {
  table: string;
  operation: 'DeleteItem';
  key: { [key: string]: any };
  condition?: TransactConditionCheckExpression;
};
type TransactWriteConditionCheckItem = {
  table: string;
  operation: 'ConditionCheck';
  key: { [key: string]: any };
  condition?: TransactConditionCheckExpression;
};
type TransactConditionCheckExpression = {
  expression: string;
  expressionNames?: { [key: string]: string};
  expressionValues?: { [key: string]: any};
  returnValuesOnConditionCheckFailure: boolean;
};
```

The TypeScript definition above shows all available fields for the request. While you can construct this request manually, we recommend using the DynamoDB built-in module for generating accurate and efficient requests.
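As an illustration, the sketch below performs a conditional `UpdateItem` inside a transaction: it decrements a balance only if the item's version matches. The table name (`Accounts`) and attributes (`balance`, `version`) are illustrative assumptions:

```javascript
// Sketch of a manually constructed TransactWriteItems request with a
// single conditional UpdateItem. The update fails (and the whole
// transaction is canceled) if the stored version does not match.
function transactWriteRequest(accountId, amount, expectedVersion) {
  return {
    operation: 'TransactWriteItems',
    transactItems: [
      {
        table: 'Accounts',
        operation: 'UpdateItem',
        key: { accountId: { S: accountId } },
        update: {
          expression: 'SET balance = balance - :amount',
          expressionValues: { ':amount': { N: amount } },
        },
        condition: {
          expression: 'version = :v',
          expressionValues: { ':v': { N: expectedVersion } },
          // Return the current item in cancellationReasons on failure.
          returnValuesOnConditionCheckFailure: true,
        },
      },
    ],
  };
}
```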

## TransactWriteItems fields
<a name="js-TransactWriteItems-list"></a>

**The fields are defined as follows:**    
** `operation` **  
The DynamoDB operation to perform. To perform the `TransactWriteItems` DynamoDB operation, this must be set to `TransactWriteItems`. This value is required.  
** `transactItems` **  
The request items to include. The value is an array of request items. At least one request item must be provided. This `transactItems` value is required.  
For `PutItem`, the fields are defined as follows:    
** `table` **  
The destination DynamoDB table. The value is a string of the table name. This `table` value is required.  
** `operation` **  
The DynamoDB operation to perform. To perform the `PutItem` DynamoDB operation, this must be set to `PutItem`. This value is required.  
** `key` **  
The DynamoDB key representing the primary key of the item to put. DynamoDB items may have a single hash key, or a hash key and sort key, depending on the table structure. For more information about how to specify a “typed value”, see [Type system (request mapping)](dynamodb-typed-values-request.md). This value is required.  
** `attributeValues` **  
The rest of the attributes of the item to be put into DynamoDB. For more information about how to specify a “typed value”, see [Type system (request mapping)](dynamodb-typed-values-request.md). This field is optional.  
** `condition` **  
A condition to determine if the request should succeed or not, based on the state of the object already in DynamoDB. If no condition is specified, the `PutItem` request overwrites any existing entry for that item. You can specify whether to retrieve the existing item back when condition check fails. For more information about transactional conditions, see [Transaction condition expressions](dynamodb-transaction-condition-expressions.md). This value is optional.
For `UpdateItem`, the fields are defined as follows:    
** `table` **  
The DynamoDB table to update. The value is a string of the table name. This `table` value is required.  
** `operation` **  
The DynamoDB operation to perform. To perform the `UpdateItem` DynamoDB operation, this must be set to `UpdateItem`. This value is required.  
** `key` **  
The DynamoDB key representing the primary key of the item to update. DynamoDB items may have a single hash key, or a hash key and sort key, depending on the table structure. For more information about how to specify a “typed value”, see [Type system (request mapping)](dynamodb-typed-values-request.md). This value is required.  
** `update` **  
The `update` section lets you specify an update expression that describes how to update the item in DynamoDB. For more information about how to write update expressions, see the [DynamoDB UpdateExpressions documentation](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.UpdateExpressions.html). This section is required.  
** `condition` **  
A condition to determine if the request should succeed or not, based on the state of the object already in DynamoDB. If no condition is specified, the `UpdateItem` request updates the existing entry regardless of its current state. You can specify whether to retrieve the existing item back when condition check fails. For more information about transactional conditions, see [Transaction condition expressions](dynamodb-transaction-condition-expressions.md). This value is optional.
For `DeleteItem`, the fields are defined as follows:    
** `table` **  
The DynamoDB table in which to delete the item. The value is a string of the table name. This `table` value is required.  
** `operation` **  
The DynamoDB operation to perform. To perform the `DeleteItem` DynamoDB operation, this must be set to `DeleteItem`. This value is required.  
** `key` **  
The DynamoDB key representing the primary key of the item to delete. DynamoDB items may have a single hash key, or a hash key and sort key, depending on the table structure. For more information about how to specify a “typed value”, see [Type system (request mapping)](dynamodb-typed-values-request.md). This value is required.  
** `condition` **  
A condition to determine if the request should succeed or not, based on the state of the object already in DynamoDB. If no condition is specified, the `DeleteItem` request deletes an item regardless of its current state. You can specify whether to retrieve the existing item back when condition check fails. For more information about transactional conditions, see [Transaction condition expressions](dynamodb-transaction-condition-expressions.md). This value is optional.
For `ConditionCheck`, the fields are defined as follows:    
** `table` **  
The DynamoDB table in which to check the condition. The value is a string of the table name. This `table` value is required.  
** `operation` **  
The DynamoDB operation to perform. To perform the `ConditionCheck` DynamoDB operation, this must be set to `ConditionCheck`. This value is required.  
** `key` **  
The DynamoDB key representing the primary key of the item to condition check. DynamoDB items may have a single hash key, or a hash key and sort key, depending on the table structure. For more information about how to specify a “typed value”, see [Type system (request mapping)](dynamodb-typed-values-request.md). This value is required.  
** `condition` **  
A condition to determine if the request should succeed or not, based on the state of the object already in DynamoDB. You can specify whether to retrieve the existing item back when condition check fails. For more information about transactional conditions, see [Transaction condition expressions](dynamodb-transaction-condition-expressions.md). This value is required.

Things to remember:
+ Only keys of request items are returned in the response, if successful. The order of keys will be the same as the order of request items.
+ Transactions are performed in an all-or-nothing way. If any request item causes an error, the whole transaction will not be performed and error details will be returned.
+ No two request items can target the same item; otherwise, the transaction fails with a *TransactionCanceledException* error.
+ If the error of a transaction is *TransactionCanceledException*, the `cancellationReasons` block will be populated. If a request item’s condition check fails **and** you did not set `returnValuesOnConditionCheckFailure` to `false`, the existing item is retrieved from the table and stored in `item` at the corresponding position of the `cancellationReasons` block.
+  `TransactWriteItems` is limited to 100 request items.
+ This operation **is not** supported when used with conflict detection. Using both at the same time may result in an error.

Response structure (in `ctx.result`)

```
type Response = {
  keys?: {[key: string]: string}[];
  cancellationReasons?: {
    item?: { [key: string]: any };
    type: string;
    message: string;
  }[]
}
```

The `ctx.error` contains details about the error. The keys **keys** and **cancellationReasons** are guaranteed to be present in `ctx.result`.

# Type system (request mapping)
<a name="dynamodb-typed-values-request"></a>

When using the AWS AppSync DynamoDB function to call your DynamoDB tables, you must specify your data using the DynamoDB type notation. For more information about DynamoDB data types, see the DynamoDB [Data type descriptors](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.LowLevelAPI.html#Programming.LowLevelAPI.DataTypeDescriptors) and [Data types](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.NamingRulesDataTypes.html#HowItWorks.DataTypes) documentation.

**Note**  
You don't have to use DynamoDB type notation when using the DynamoDB built-in module. For more information, see [Amazon DynamoDB built-in module](built-in-modules.md#DDB-built-in-module).

A DynamoDB value is represented by a JSON object containing a single key-value pair. The key specifies the DynamoDB type, and the value specifies the value itself. In the following example, the key `S` denotes that the value is a string, and the value `identifier` is the string value itself.

```
{ "S" : "identifier" }
```

The JSON object can't have more than one key-value pair. If more than one key-value pair is specified, the request object isn’t parsed.

A DynamoDB value is used anywhere in a request object where you need to specify a value. Some places where you need to do this include: `key` and `attributeValue` sections, and the `expressionValues` section of expression sections. In the following example, the DynamoDB String value `identifier` is being assigned to the `id` field in a `key` section (perhaps in a `GetItem` request object).

```
"key" : {
   "id" : { "S" : "identifier" }
}
```

 **Supported Types** 

AWS AppSync supports the following DynamoDB scalar, document, and set types:

**String type `S` **  
A single string value. A DynamoDB String value is denoted by:  

```
{ "S" : "some string" }
```
An example usage is:  

```
"key" : {
   "id" : { "S" : "some string" }
}
```

**String set type `SS` **  
A set of string values. A DynamoDB String Set value is denoted by:  

```
{ "SS" : [ "first value", "second value", ... ] }
```
An example usage is:  

```
"attributeValues" : {
   "phoneNumbers" : { "SS" : [ "+1 555 123 4567", "+1 555 234 5678" ] }
}
```

**Number type `N` **  
A single numeric value. A DynamoDB Number value is denoted by:  

```
{ "N" : 1234 }
```
An example usage is:  

```
"expressionValues" : {
   ":expectedVersion" : { "N" : 1 }
}
```

**Number set type `NS` **  
A set of number values. A DynamoDB Number Set value is denoted by:  

```
{ "NS" : [ 1, 2.3, 4, ... ] }
```
An example usage is:  

```
"attributeValues" : {
   "sensorReadings" : { "NS" : [ 67.8, 12.2, 70 ] }
}
```

**Binary type `B` **  
A binary value. A DynamoDB Binary value is denoted by:  

```
{ "B" : "SGVsbG8sIFdvcmxkIQo=" }
```
Note that the value is actually a string, where the string is the base64-encoded representation of the binary data. AWS AppSync decodes this string back into its binary value before sending it to DynamoDB. AWS AppSync uses the base64 decoding scheme as defined by RFC 2045: any character that isn’t in the base64 alphabet is ignored.  
An example usage is:  

```
"attributeValues" : {
   "binaryMessage" : { "B" : "SGVsbG8sIFdvcmxkIQo=" }
}
```
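The base64 string for a Binary value can be produced with the runtime's `util.base64Encode`, or, when preparing data outside AppSync, with Node's `Buffer`, as in this sketch:

```javascript
// Base64-encode binary data for a DynamoDB Binary ("B") attribute.
// Node example; inside an APPSYNC_JS handler you would use util.base64Encode.
const encoded = Buffer.from('Hello, World!\n', 'utf8').toString('base64');
console.log(encoded); // "SGVsbG8sIFdvcmxkIQo="

const attributeValues = {
  binaryMessage: { B: encoded },
};
```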

**Binary set type `BS` **  
A set of binary values. A DynamoDB Binary Set value is denoted by:  

```
{ "BS" : [ "SGVsbG8sIFdvcmxkIQo=", "SG93IGFyZSB5b3U/Cg==", ... ] }
```
Note that the value is actually a string, where the string is the base64-encoded representation of the binary data. AWS AppSync decodes this string back into its binary value before sending it to DynamoDB. AWS AppSync uses the base64 decoding scheme as defined by RFC 2045: any character that is not in the base64 alphabet is ignored.  
An example usage is:  

```
"attributeValues" : {
   "binaryMessages" : { "BS" : [ "SGVsbG8sIFdvcmxkIQo=", "SG93IGFyZSB5b3U/Cg==" ] }
}
```

**Boolean type `BOOL` **  
A Boolean value. A DynamoDB Boolean value is denoted by:  

```
{ "BOOL" : true }
```
Note that only `true` and `false` are valid values.  
An example usage is:  

```
"attributeValues" : {
   "orderComplete" : { "BOOL" : false }
}
```

**List type `L` **  
A list of any other supported DynamoDB value. A DynamoDB List value is denoted by:  

```
{ "L" : [ ... ] }
```
Note that the value is a compound value, where the list can contain zero or more of any supported DynamoDB value (including other lists). The list can also contain a mix of different types.  
An example usage is:  

```
{ "L" : [
      { "S"  : "A string value" },
      { "N"  : 1 },
      { "SS" : [ "Another string value", "Even more string values!" ] }
   ]
}
```

**Map type `M` **  
An unordered collection of key-value pairs of other supported DynamoDB values. A DynamoDB Map value is denoted by:  

```
{ "M" : { ... } }
```
Note that a map can contain zero or more key-value pairs. The key must be a string, and the value can be any supported DynamoDB value (including other maps). The map can also contain a mix of different types.  
An example usage is:  

```
{ "M" : {
      "someString" : { "S"  : "A string value" },
      "someNumber" : { "N"  : 1 },
      "stringSet"  : { "SS" : [ "Another string value", "Even more string values!" ] }
   }
}
```

**Null type `NULL` **  
A null value. A DynamoDB Null value is denoted by:  

```
{ "NULL" : null }
```
An example usage is:  

```
"attributeValues" : {
   "phoneNumbers" : { "NULL" : null }
}
```

For more information about each type, see the [DynamoDB documentation](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.NamingRulesDataTypes.html).

# Type system (response mapping)
<a name="dynamodb-typed-values-responses"></a>

When receiving a response from DynamoDB, AWS AppSync automatically converts it into JSON primitive types. Each attribute in DynamoDB is decoded and returned in the response handler's context.

For example, if DynamoDB returns the following:

```
{
    "id" : { "S" : "1234" },
    "name" : { "S" : "Nadia" },
    "age" : { "N" : 25 }
}
```

When the result is returned from your handler, AWS AppSync converts it into JSON types as follows:

```
{
    "id" : "1234",
    "name" : "Nadia",
    "age" : 25
}
```

This section explains how AWS AppSync converts the following DynamoDB scalar, document, and set types:

**String type `S` **  
A single string value. A DynamoDB String value is returned as a string.  
For example, if DynamoDB returned the following DynamoDB String value:  

```
{ "S" : "some string" }
```
AWS AppSync converts it to a string:  

```
"some string"
```

**String set type `SS` **  
A set of string values. A DynamoDB String Set value is returned as a list of strings.  
For example, if DynamoDB returned the following DynamoDB String Set value:  

```
{ "SS" : [ "first value", "second value", ... ] }
```
AWS AppSync converts it to a list of strings:  

```
[ "first value", "second value" ]
```

**Number type `N` **  
A single numeric value. A DynamoDB Number value is returned as a number.  
For example, if DynamoDB returned the following DynamoDB Number value:  

```
{ "N" : 1234 }
```
AWS AppSync converts it to a number:  

```
1234
```

**Number set type `NS` **  
A set of number values. A DynamoDB Number Set value is returned as a list of numbers.  
For example, if DynamoDB returned the following DynamoDB Number Set value:  

```
{ "NS" : [ 67.8, 12.2, 70 ] }
```
AWS AppSync converts it to a list of numbers:  

```
[ 67.8, 12.2, 70 ]
```

**Binary type `B` **  
A binary value. A DynamoDB Binary value is returned as a string containing the base64 representation of that value.  
For example, if DynamoDB returned the following DynamoDB Binary value:  

```
{ "B" : "SGVsbG8sIFdvcmxkIQo=" }
```
AWS AppSync converts it to a string containing the base64 representation of the value:  

```
"SGVsbG8sIFdvcmxkIQo="
```
Note that the binary data is encoded in the base64 encoding scheme as specified in [RFC 4648](https://tools.ietf.org/html/rfc4648) and [RFC 2045](https://tools.ietf.org/html/rfc2045).

**Binary set type `BS` **  
A set of binary values. A DynamoDB Binary Set value is returned as a list of strings containing the base64 representation of the values.  
For example, if DynamoDB returned the following DynamoDB Binary Set value:  

```
{ "BS" : [ "SGVsbG8sIFdvcmxkIQo=", "SG93IGFyZSB5b3U/Cg==", ... ] }
```
AWS AppSync converts it to a list of strings containing the base64 representation of the values:  

```
[ "SGVsbG8sIFdvcmxkIQo=", "SG93IGFyZSB5b3U/Cg==", ... ]
```
Note that the binary data is encoded in the base64 encoding scheme as specified in [RFC 4648](https://tools.ietf.org/html/rfc4648) and [RFC 2045](https://tools.ietf.org/html/rfc2045).

**Boolean type `BOOL` **  
A Boolean value. A DynamoDB Boolean value is returned as a Boolean.  
For example, if DynamoDB returned the following DynamoDB Boolean value:  

```
{ "BOOL" : true }
```
AWS AppSync converts it to a Boolean:  

```
true
```

**List type `L` **  
A list of any other supported DynamoDB value. A DynamoDB List value is returned as a list of values, where each inner value is also converted.  
For example, if DynamoDB returned the following DynamoDB List value:  

```
{ "L" : [
      { "S"  : "A string value" },
      { "N"  : 1 },
      { "SS" : [ "Another string value", "Even more string values!" ] }
   ]
}
```
AWS AppSync converts it to a list of converted values:  

```
[ "A string value", 1, [ "Another string value", "Even more string values!" ] ]
```

**Map type `M` **  
A key/value collection of any other supported DynamoDB value. A DynamoDB Map value is returned as a JSON object, where each key/value is also converted.  
For example, if DynamoDB returned the following DynamoDB Map value:  

```
{ "M" : {
      "someString" : { "S"  : "A string value" },
      "someNumber" : { "N"  : 1 },
      "stringSet"  : { "SS" : [ "Another string value", "Even more string values!" ] }
   }
}
```
AWS AppSync converts it to a JSON object:  

```
{
   "someString" : "A string value",
   "someNumber" : 1,
   "stringSet"  : [ "Another string value", "Even more string values!" ]
}
```

**Null type `NULL` **  
A null value.  
For example, if DynamoDB returned the following DynamoDB Null value:  

```
{ "NULL" : null }
```
AWS AppSync converts it to a null:  

```
null
```
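Taken together, the conversions above amount to a recursive unmarshalling step. The following simplified sketch (not AppSync's actual implementation) reproduces them for the types listed in this section:

```javascript
// Convert a DynamoDB typed value into a plain JSON value, following the
// conversions described above. Simplified sketch, not AppSync's implementation.
function unmarshall(typed) {
  const [type, value] = Object.entries(typed)[0];
  switch (type) {
    case 'S': case 'N': case 'B': case 'BOOL':
      return value;                 // scalars are returned as-is
    case 'SS': case 'NS': case 'BS':
      return [...value];            // sets become lists
    case 'NULL':
      return null;
    case 'L':
      return value.map(unmarshall); // convert each list element
    case 'M': {
      const out = {};               // convert each map entry
      for (const [k, v] of Object.entries(value)) out[k] = unmarshall(v);
      return out;
    }
    default:
      throw new Error(`unknown DynamoDB type: ${type}`);
  }
}

const item = {
  id: { S: '1234' },
  name: { S: 'Nadia' },
  age: { N: 25 },
};
const plain = Object.fromEntries(
  Object.entries(item).map(([k, v]) => [k, unmarshall(v)])
);
console.log(plain); // { id: '1234', name: 'Nadia', age: 25 }
```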

# Filters
<a name="dynamodb-filter"></a>

**Note**  
We recommend using the DynamoDB built-in module to generate your request. For more information, see [Amazon DynamoDB built-in module](built-in-modules.md#DDB-built-in-module).

When querying objects in DynamoDB using the `Query` and `Scan` operations, you can optionally specify a `filter` that evaluates the results and returns only the desired values.

The filter property of a `Query` or `Scan` request has the following structure:

```
type DynamoDBExpression = {
  expression: string;
  expressionNames?: { [key: string]: string};
  expressionValues?: { [key: string]: any};
};
```

The fields are defined as follows:

** `expression` **  
The query expression. For more information about how to write filter expressions, see the [DynamoDB QueryFilter](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LegacyConditionalParameters.QueryFilter.html) and [DynamoDB ScanFilter](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LegacyConditionalParameters.ScanFilter.html) documentation. This field must be specified.

** `expressionNames` **  
The substitutions for expression attribute *name* placeholders, in the form of key-value pairs. The key corresponds to a name placeholder used in the `expression`. The value must be a string that corresponds to the attribute name of the item in DynamoDB. This field is optional, and should only be populated with substitutions for expression attribute name placeholders used in the `expression`.

** `expressionValues` **  
The substitutions for expression attribute *value* placeholders, in the form of key-value pairs. The key corresponds to a value placeholder used in the `expression`, and the value must be a typed value. For more information about how to specify a “typed value”, see [Type system (request mapping)](dynamodb-typed-values-request.md). This field is optional, and should only be populated with substitutions for expression attribute value placeholders used in the `expression`.

## Example
<a name="js-id18"></a>

The following example is a filter section for a request, where entries retrieved from DynamoDB are returned only if the title begins with `far away`. 

Here we use the `util.transform.toDynamoDBFilterExpression` to automatically create a filter from an object:

```
const filter = util.transform.toDynamoDBFilterExpression({
  title: { beginsWith: 'far away' },
});

const request = {};
request.filter = JSON.parse(filter);
```

This generates the following filter:

```
{
  "filter": {
    "expression": "(begins_with(#title,:title_beginsWith))",
    "expressionNames": { "#title": "title" },
    "expressionValues": {
      ":title_beginsWith": { "S": "far away" }
    }
  }
}
```
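To make the mapping concrete, a hand-rolled equivalent for the single `beginsWith` case might look like the following sketch; in a handler, prefer `util.transform.toDynamoDBFilterExpression`:

```javascript
// Build a begins_with filter expression by hand for a single string field.
// Illustrative sketch only; not the transform utility's implementation.
function beginsWithFilter(field, prefix) {
  return {
    expression: `(begins_with(#${field},:${field}_beginsWith))`,
    expressionNames: { [`#${field}`]: field },
    expressionValues: { [`:${field}_beginsWith`]: { S: prefix } },
  };
}

const filter = beginsWithFilter('title', 'far away');
console.log(filter.expression); // "(begins_with(#title,:title_beginsWith))"
```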

# Condition expressions
<a name="dynamodb-condition-expressions"></a>

When you mutate objects in DynamoDB by using the `PutItem`, `UpdateItem`, and `DeleteItem` DynamoDB operations, you can optionally specify a condition expression that controls whether the request should succeed or not, based on the state of the object already in DynamoDB before the operation is performed.

While you can construct requests manually, we recommend using the DynamoDB built-in module to generate accurate and efficient requests. In the following examples, we use the built-in module to generate requests with conditions.

The AWS AppSync DynamoDB function allows a condition expression to be specified in `PutItem`, `UpdateItem`, and `DeleteItem` request objects, and also a strategy to follow if the condition fails and the object was not updated.

## Example 1
<a name="js-id19"></a>

The following `PutItem` request object doesn’t have a condition expression. As a result, it puts an item in DynamoDB even if an item with the same key already exists, which overwrites the existing item.

```
import { util } from '@aws-appsync/utils';
import * as ddb from '@aws-appsync/utils/dynamodb';

export const onPublish = {
  request(ctx) {
    const {id, payload: item} = ctx.events[0]
    return ddb.put({ key: { id }, item })
  },
  response: (ctx) => ctx.events
}
```

## Example 2
<a name="js-id20"></a>

The following `PutItem` object does have a condition expression that allows the operation to succeed only if an item with the same key does *not* exist in DynamoDB.

```
import { util } from '@aws-appsync/utils';
import * as ddb from '@aws-appsync/utils/dynamodb';

export const onPublish = {
  request(ctx) {
    const {id, payload: item} = ctx.events[0]
    return ddb.put({
      key: { id },
      item,
      condition: {id: {attributeExists: false}}
    })
  },
  response: (ctx) => ctx.events
}
```

For more information about DynamoDB condition expressions, see the [DynamoDB ConditionExpressions documentation](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.ConditionExpressions.html).

## Specifying a condition
<a name="dynamodb-condition-specification"></a>

The `PutItem`, `UpdateItem`, and `DeleteItem` request objects all allow an optional `condition` section to be specified. If omitted, no condition check is made. If specified, the condition must be true for the operation to succeed.

The built-in module functions create a `condition` object that has the following structure.

```
type ConditionCheckExpression = {
  expression: string;
  expressionNames?: { [key: string]: string};
  expressionValues?: { [key: string]: any};
};
```

The following fields specify the condition:

** `expression` **  
The condition expression itself. For more information about how to write condition expressions, see the [DynamoDB ConditionExpressions documentation](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.ConditionExpressions.html). This field must be specified.

** `expressionNames` **  
The substitutions for expression attribute name placeholders, in the form of key-value pairs. The key corresponds to a name placeholder used in the *expression*, and the value must be a string corresponding to the attribute name of the item in DynamoDB. This field is optional, and should only be populated with substitutions for expression attribute name placeholders used in the *expression*.

** `expressionValues` **  
The substitutions for expression attribute value placeholders, in the form of key-value pairs. The key corresponds to a value placeholder used in the expression, and the value must be a typed value. For more information about how to specify a “typed value”, see [Type system (request mapping)](dynamodb-typed-values-request.md). This field is optional, and should only be populated with substitutions for expression attribute value placeholders used in the expression.

# Transaction condition expressions
<a name="dynamodb-transaction-condition-expressions"></a>

Transaction condition expressions are available in requests of all four types of operations in `TransactWriteItems`, namely, `PutItem`, `DeleteItem`, `UpdateItem`, and `ConditionCheck`.

For `PutItem`, `DeleteItem`, and `UpdateItem`, the transaction condition expression is optional. For `ConditionCheck`, the transaction condition expression is required.

## Example 1
<a name="js-id22"></a>

The following transactional `DeleteItem` function request handler does not have a condition expression. As a result, it deletes the items in DynamoDB regardless of their current state.

```
import * as ddb from '@aws-appsync/utils/dynamodb';

export const onPublish = {
  request(ctx) {
    const table = "events"
    return ddb.transactWrite({
      items: ctx.events.map(({ payload }) => ({
        deleteItem: { table, key: { id: payload.id } }
      }))
    })
  },
  response: (ctx) => ctx.events
}
```

## Example 2
<a name="js-id23"></a>

The following transactional `DeleteItem` function request handler has a transaction condition expression that allows the operation to succeed only if the owner of the item matches the owner supplied in the event payload.

```
import * as ddb from '@aws-appsync/utils/dynamodb';

export const onPublish = {
  request(ctx) {
    return ddb.transactWrite({
      items: ctx.events.map(({ payload }) => ({
        deleteItem: { 
          table: 'events', 
          key: { id: payload.id }, 
          condition: { owner: { eq: payload.owner } }
        }
      }))
    })
  },
  response: (ctx) => ctx.events
}
```

If the condition check fails, it will cause `TransactionCanceledException` and the error detail will be returned in `ctx.result.cancellationReasons`.

# Projections
<a name="dynamodb-projections"></a>

When reading objects in DynamoDB using the `GetItem`, `Scan`, `Query`, `BatchGetItem`, and `TransactGetItems` operations, you can optionally specify a projection that identifies the attributes that you want. The projection property has the following structure, which is similar to filters: 

```
type DynamoDBExpression = {
  expression: string;
  expressionNames?: { [key: string]: string}
};
```

The fields are defined as follows:

** `expression` **  
The projection expression, which is a string. To retrieve a single attribute, specify its name. For multiple attributes, the names must be comma-separated values. For more information on writing projection expressions, see the [DynamoDB projection expressions](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.ProjectionExpressions.html) documentation. This field is required. 

** `expressionNames` **  
The substitutions for expression attribute *name* placeholders in the form of key-value pairs. The key corresponds to a name placeholder used in the `expression`. The value must be a string that corresponds to the attribute name of the item in DynamoDB. This field is optional and should only be populated with substitutions for expression attribute name placeholders used in the `expression`. For more information about `expressionNames`, see the [DynamoDB documentation](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.ExpressionAttributeNames.html). 

## Example 1
<a name="js-id24"></a>

The following example is a projection section for a JavaScript function in which only the attributes `author` and `id` are returned from DynamoDB.

```
projection : {
    expression : "#author, id",
    expressionNames : {
        "#author" : "author"
    }
}
```

## Example 2
<a name="js-id25"></a>

The following example demonstrates that when you use the built-in DynamoDB module, you can simply pass an array for your projection.

```
import * as ddb from '@aws-appsync/utils/dynamodb';

export const onPublish = {
  request(ctx) {
    return ddb.batchGet({
      tables: {
        users: {
          keys: ctx.events.map(e => ({id: e.payload.id})),
          projection: ['id', 'name', 'email', 'nested.field']
        }
      }
    })
  },
  response: (ctx) => ctx.events
}
```

# AWS AppSync JavaScript function reference for Amazon OpenSearch Service
<a name="opensearch-function-reference"></a>

The AWS AppSync integration for Amazon OpenSearch Service enables you to store and retrieve data in existing OpenSearch Service domains in your account. This handler works by allowing you to create OpenSearch Service requests, and then map the OpenSearch Service response back to your application. This section describes the function request and response handlers for the supported OpenSearch Service operations.

## Request
<a name="request-js"></a>

Most OpenSearch Service request objects have a common structure where just a few pieces change. The following example runs a search against an OpenSearch Service domain, where documents are of type `post` and are indexed under `id`. The search parameters are defined in the `body` section, with many of the common query clauses being defined in the `query` field. This example will search for documents containing `"Nadia"`, or `"Bailey"`, or both, in the `author` field of a document:

```
export const onPublish = {
  request(ctx) {
    return {
      operation: 'GET',
      path: '/id/post/_search',
      params: {
        headers: {},
        queryString: {},
        body: {
          from: 0,
          size: 50,
          query: {
            bool: {
              should: [
                { match: { author: 'Nadia' } },
                { match: { author: 'Bailey' } },
              ],
            },
          },
        },
      },
    };
  }
}
```

## Response
<a name="response-js"></a>

As with other data sources, OpenSearch Service sends a response to AWS AppSync that needs to be processed.

Most applications are looking for the `_source` field from an OpenSearch Service response. Because you can do searches to return either an individual document or a list of documents, there are two common response patterns used in OpenSearch Service.

 **List of Results** 

```
export const onPublish = {
  response(ctx) {
    const entries = [];
    for (const entry of ctx.result.hits.hits) {
      entries.push(entry['_source']);
    }
    return entries;
  }
}
```

 **Individual Item** 

```
export const onPublish = {
  response(ctx) {
    return ctx.result['_source'];
  }
}
```

## `operation` field
<a name="operation-field"></a>

**Note**  
This applies only to the Request handler. 

The HTTP method or verb (GET, POST, PUT, HEAD, or DELETE) that AWS AppSync sends to the OpenSearch Service domain. Both the key and the value must be strings.

```
"operation" : "PUT"
```

## `path` field
<a name="path-field"></a>

**Note**  
This applies only to the Request handler. 

The search path for an OpenSearch Service request from AWS AppSync. This forms a URL for the operation’s HTTP verb. Both the key and the value must be strings.

```
"path" : "/indexname/type"

"path" : "/indexname/type/_search"
```

When the request handler is evaluated, this path is sent as part of the HTTP request, including the OpenSearch Service domain. For example, the previous example might translate to:

```
GET https://opensearch-domain-name.REGION.es.amazonaws.com/indexname/type/_search
```

## `params` field
<a name="params-field"></a>

**Note**  
This applies only to the Request handler. 

Used to specify what action your search performs, most commonly by setting the **query** value inside of the **body**. However, there are several other capabilities that can be configured, such as the formatting of responses.
+  **headers** 

  The header information, as key-value pairs. Both the key and the value must be strings. For example:

  ```
  "headers" : {
      "Content-Type" : "application/json"
  }
  ```

   
**Note**  
AWS AppSync currently supports only JSON as a `Content-Type`.
+  **queryString** 

  Key-value pairs that specify common options, such as code formatting for JSON responses. Both the key and the value must be a string. For example, if you want to get pretty-formatted JSON, you would use:

  ```
  "queryString" : {
      "pretty" : "true"
  }
  ```
+  **body** 

  This is the main part of your request, allowing AWS AppSync to craft a well-formed search request to your OpenSearch Service domain. The value must be a well-formed JSON object. Two examples are shown below.

 **Example 1** 

Return all documents with a city matching “seattle”:

```
export const onSubscribe = {
  request(ctx) {
    return {
      operation: 'GET',
      path: '/id/post/_search',
      params: {
        headers: {},
        queryString: {},
        body: { from: 0, size: 50, query: { match: { city: 'seattle' } } },
      },
    };
  }
}
```

 **Example 2** 

Return all documents matching “washington” as the city or the state:

```
export const onSubscribe = {
  request(ctx) {
    return {
      operation: 'GET',
      path: '/id/post/_search',
      params: {
        headers: {},
        queryString: {},
        body: {
          from: 0,
          size: 50,
          query: {
            multi_match: { query: 'washington', fields: ['city', 'state'] },
          },
        },
      },
    };
  }
}
```

# AWS AppSync JavaScript function reference for Lambda
<a name="lambda-function-reference"></a>

You can use AWS AppSync integration for AWS Lambda to invoke Lambda functions located in your account. You can shape your request payloads and the response from your Lambda functions before returning them to your clients. You can also specify the type of operation to perform in your request object. This section describes the requests for the supported Lambda operations.

## Request object
<a name="request-object-js"></a>

The Lambda request object handles fields related to your Lambda function:

```
export type LambdaRequest = {
  operation: 'Invoke';
  invocationType?: 'RequestResponse' | 'Event';
  payload: unknown;
};
```

The following example uses an `invoke` operation with its payload data being a field, along with its arguments from the context:

```
export const onPublish = {
  request(ctx) {
    return {
      operation: 'Invoke',
      payload: { field: 'getPost', arguments: ctx.args },
    };
  }
}
```

### Operation
<a name="operation-js"></a>

When doing an `Invoke` operation, the resolved request becomes the input payload of the Lambda function. The following example modifies the previous example:

```
export const onPublish = {
  request(ctx) {
    return {
      operation: 'Invoke',
      payload: ctx // send the entire context to the Lambda function
    };
  }
}
```

### Payload
<a name="payload-js"></a>

The `payload` field is a container used to pass any data to the Lambda function. The `payload` field is optional.
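On the receiving side, the payload arrives as the Lambda function's input event. A sketch of a handler for the earlier `getPost` example (the field name and returned shape are illustrative assumptions):

```javascript
// Sketch of the Lambda function on the receiving end of the earlier Invoke
// example, where the payload was { field: 'getPost', arguments: ... }.
// The 'getPost' field and returned post shape are illustrative.
const handler = async (event) => {
  switch (event.field) {
    case 'getPost':
      return { id: event.arguments.id, title: `post ${event.arguments.id}` };
    default:
      throw new Error(`unknown field: ${event.field}`);
  }
};
```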

### Invocation type
<a name="async-invocation-type-js"></a>

The Lambda data source allows you to define two invocation types: `RequestResponse` and `Event`. The invocation types are synonymous with the invocation types defined in the [Lambda API](https://docs.aws.amazon.com//lambda/latest/api/API_Invoke.html). The `RequestResponse` invocation type lets AWS AppSync call your Lambda function synchronously to wait for a response. The `Event` invocation allows you to invoke your Lambda function asynchronously. For more information on how Lambda handles `Event` invocation type requests, see [Asynchronous invocation](https://docs.aws.amazon.com/lambda/latest/dg/invocation-async.html). The `invocationType` field is optional. If this field is not included in the request, AWS AppSync will default to the `RequestResponse` invocation type.

Regardless of the `invocationType` field, the resolved request becomes the input payload of the Lambda function. The following example modifies the previous example:

```
export const onPublish = {
  request(ctx) {
     return {
       operation:  'Invoke',
       invocationType:  'Event',
       payload: ctx
    };
  }
}
```

## Response object
<a name="response-object-js"></a>

As with other data sources, your Lambda function sends a response to AWS AppSync that must be processed. The result of the Lambda function is contained in the `context` result property (`context.result`).

If the shape of your Lambda function response matches the expected output, you can forward the response using the following function response handler:

```
export const onPublish = {
  response(ctx) {
    console.log('the response:', ctx.result)
    return ctx.events
  }
}
```

There are no required fields or shape restrictions that apply to the response object.

# AWS AppSync JavaScript function reference for EventBridge data source
<a name="eventbridge-function-reference"></a>

The AWS AppSync integration for the Amazon EventBridge data source allows you to send custom events to an EventBridge event bus.

## Request
<a name="request-js"></a>

The request handler allows you to send multiple custom events to an EventBridge event bus:

```
export const onPublish = {
  request(ctx) {
    return {
      operation: 'PutEvents',
      events: ctx.events.map(e => ({
        source: ctx.info.channel.path,
        detail: {payload: e.payload},
        detailType: ctx.info.channelNamespace.name,
      }))
    }
  }
}
```

An EventBridge `PutEvents` request has the following type definition:

```
type PutEventsRequest = {
  operation: 'PutEvents'
  events: {
    source: string
    detail: { [key: string]: any }
    detailType: string
    resources?: string[]
    time?: string // RFC3339 Timestamp format
  }[]
}
```

## Response
<a name="response-js"></a>

If the `PutEvents` operation is successful, the response from EventBridge is included in the `ctx.result`:

```
import { util } from '@aws-appsync/utils';

export function response(ctx) {
  if (ctx.error)
    util.error(ctx.error.message, ctx.error.type, ctx.result)
  else
    return ctx.result
}
```

Errors that occur while performing `PutEvents` operations such as `InternalExceptions` or `Timeouts` will appear in `ctx.error`. For a list of EventBridge's common errors, see the [EventBridge common error reference](https://docs.aws.amazon.com/eventbridge/latest/APIReference/CommonErrors.html).

The `result` will have the following type definition:

```
type PutEventsResult = {
  Entries: {
    ErrorCode: string
    ErrorMessage: string
    EventId: string
  }[]
  FailedEntryCount: number
}
```
+ **Entries**

  The ingested event results, both successful and unsuccessful. If the ingestion was successful, the entry has the `EventId` in it. Otherwise, you can use the `ErrorCode` and `ErrorMessage` to identify the problem with the entry.

  For each record, the index of the response element is the same as the index in the request array.
+ **FailedEntryCount**

  The number of failed entries. This value is represented as an integer.

For more information about the response of `PutEvents`, see [PutEvents](https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_PutEvents.html#API_PutEvents_ResponseElements).
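Because response entries line up with request events by index, failed entries can be matched back to the events that produced them. A standalone sketch (the function name is illustrative; the data mirrors sample response 2 below):

```javascript
// Split a PutEvents result into successes and failures, pairing each
// entry with the event that was sent at the same index.
function partitionPutEventsResult(sentEvents, result) {
  const ok = [];
  const failed = [];
  result.Entries.forEach((entry, i) => {
    if (entry.ErrorCode) {
      failed.push({ event: sentEvents[i], code: entry.ErrorCode, message: entry.ErrorMessage });
    } else {
      ok.push({ event: sentEvents[i], eventId: entry.EventId });
    }
  });
  return { ok, failed };
}

// Three events sent: two succeed, one fails its ingestion.
const sent = [{ id: 1 }, { id: 2 }, { id: 3 }];
const result = {
  Entries: [
    { EventId: '11710aed-b79e-4468-a20b-bb3c0c3b4860' },
    { EventId: 'd804d26a-88db-4b66-9eaf-9a11c708ae82' },
    { ErrorCode: 'SampleErrorCode', ErrorMessage: 'Sample Error Message' },
  ],
  FailedEntryCount: 1,
};
const { ok, failed } = partitionPutEventsResult(sent, result);
console.log(ok.length, failed.length); // 2 1
```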

**Example sample response 1**

The following example is a `PutEvents` operation with two successful events:

```
{
    "Entries" : [ 
        {
            "EventId": "11710aed-b79e-4468-a20b-bb3c0c3b4860"
        }, 
        {
            "EventId": "d804d26a-88db-4b66-9eaf-9a11c708ae82"
        }
    ],
    "FailedEntryCount" : 0
}
```

**Example sample response 2**

The following example is a `PutEvents` operation with three events, two successes and one fail:

```
{
    "Entries" : [ 
        {
            "EventId": "11710aed-b79e-4468-a20b-bb3c0c3b4860"
        }, 
        {
            "EventId": "d804d26a-88db-4b66-9eaf-9a11c708ae82"
        },
        {
            "ErrorCode" : "SampleErrorCode",
            "ErrorMessage" : "Sample Error Message"
        }
    ],
    "FailedEntryCount" : 1
}
```

## `PutEvents` fields
<a name="putevents-field"></a>

`PutEvents` contains the following mapping template fields:
+ **Version**

  Common to all request mapping templates, the `version` field defines the version that the template uses. This field is required. The value `2018-05-29` is the only version supported for the EventBridge mapping templates.
+ **Operation**

  The only supported operation is `PutEvents`. This operation allows you to add custom events to your event bus.
+ **Events**

  An array of events to add to the event bus. The array must contain between 1 and 10 items.

  The `Event` object has the following fields:
  + `"source"`: A string that defines the source of the event.
  + `"detail"`: A JSON object that you can use to attach information about the event. This field can be an empty map ( `{ }` ).
  + `"detailType"`: A string that identifies the type of event.
  + `"resources"`: A JSON array of strings that identifies resources involved in the event. This field can be an empty array.
  + `"time"`: The event timestamp provided as a string. This should follow the [RFC3339](https://www.rfc-editor.org/rfc/rfc3339.txt) timestamp format.

The following are examples of valid `Event` objects:

**Example 1**

```
{
    "source" : "source1",
    "detail" : {
        "key1" : [1,2,3,4],
        "key2" : "strval"
    },
    "detailType" : "sampleDetailType",
    "resources" : ["Resouce1", "Resource2"],
    "time" : "2022-01-10T05:00:10Z"
}
```

**Example 2**

```
{
    "source" : "source1",
    "detail" : {},
    "detailType" : "sampleDetailType"
}
```

**Example 3**

```
{
    "source" : "source1",
    "detail" : {
        "key1" : 1200
    },
    "detailType" : "sampleDetailType",
    "resources" : []
}
```
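The field rules above can be checked before a request is sent. The following plain JavaScript sketch validates an events array against the documented constraints; the `validateEvents` helper is a hypothetical name used only for illustration.

```javascript
// Sketch: validate a PutEvents events array against the documented field rules.
function validateEvents(events) {
  if (!Array.isArray(events) || events.length < 1 || events.length > 10) {
    throw new Error('events must contain between 1 and 10 items');
  }
  for (const event of events) {
    if (typeof event.source !== 'string') throw new Error('source must be a string');
    if (typeof event.detailType !== 'string') throw new Error('detailType must be a string');
    if (typeof event.detail !== 'object' || event.detail === null) {
      throw new Error('detail must be a JSON object');
    }
    // When present, time must be an RFC3339 timestamp.
    if (event.time !== undefined && Number.isNaN(Date.parse(event.time))) {
      throw new Error('time must be an RFC3339 timestamp');
    }
  }
  return events;
}

validateEvents([{ source: 'source1', detail: {}, detailType: 'sampleDetailType' }]);
```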

# AWS AppSync JavaScript function reference for HTTP
<a name="http-function-reference"></a>

AWS AppSync HTTP functions enable you to send requests from AWS AppSync to any HTTP endpoint, and to return responses from your HTTP endpoint back to AWS AppSync. With your request handler, you can provide hints to AWS AppSync about the nature of the operation to be invoked. This section describes the different configurations for the supported HTTP resolver.

## Request
<a name="request-js"></a>

```
type HTTPRequest = {
  method: 'PUT' | 'POST' | 'GET' | 'DELETE' | 'PATCH';
  params?: {
    query?: { [key: string]: any };
    headers?: { [key: string]: string };
    body?: any;
  };
  resourcePath: string;
};
```

The following is an example of an HTTP POST request, with a `text/plain` body:

```
export const onPublish = {
  request(ctx) {
    return {
      resourcePath: '/',
      method: 'POST',
      params: {
        headers: { 'Content-Type': 'text/plain' },
        body: 'this is an example of text body',
      }
    };
  }
}
```

## Method
<a name="method-js"></a>

HTTP method or verb (GET, POST, PUT, PATCH, or DELETE) that AWS AppSync sends to the HTTP endpoint.

```
"method": "PUT"
```

## ResourcePath
<a name="resourcepath-js"></a>

The resource path that you want to access. Along with the endpoint in the HTTP data source, the resource path forms the URL that the AWS AppSync service makes a request to.

```
"resourcePath": "/v1/users"
```

When the request is evaluated, this path is appended to the HTTP endpoint to form the full request URL. For example, the previous example might translate to the following:

```
PUT <endpoint>/v1/users
```
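To make the composition concrete, the following plain JavaScript sketch joins an endpoint, a resource path, and an optional query map into the URL that would be requested. The `buildUrl` helper and the `api.example.com` endpoint are assumptions used only for illustration.

```javascript
// Sketch: combine the data source endpoint, resourcePath, and query
// parameters into the URL that the request targets.
function buildUrl(endpoint, resourcePath, query = {}) {
  const qs = new URLSearchParams(query).toString();
  return endpoint + resourcePath + (qs ? `?${qs}` : '');
}

console.log(buildUrl('https://api.example.com', '/v1/users'));
// → https://api.example.com/v1/users
console.log(buildUrl('https://api.example.com', '/v1/users', { type: 'json' }));
// → https://api.example.com/v1/users?type=json
```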

## Params fields
<a name="params-field-js"></a>

**headers**  
The header information, as key-value pairs. Both the key and the value must be strings.  
For example:  

```
"headers" : {
    "Content-Type" : "application/json"
}
```
Currently supported `Content-Type` headers are:  

```
text/*
application/xml
application/json
application/soap+xml
application/x-amz-json-1.0
application/x-amz-json-1.1
application/vnd.api+json
application/x-ndjson
```
You can’t set the following HTTP headers:  

```
HOST
CONNECTION
USER-AGENT
EXPECTATION
TRANSFER_ENCODING
CONTENT_LENGTH
```
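Because these headers are rejected, you may want to drop them before building the request. The following plain JavaScript sketch filters them out; the `stripRestrictedHeaders` helper name and the case-insensitive comparison are assumptions used only for illustration.

```javascript
// Sketch: drop headers that AWS AppSync does not allow you to set.
const RESTRICTED_HEADERS = new Set([
  'host', 'connection', 'user-agent',
  'expectation', 'transfer_encoding', 'content_length',
]);

function stripRestrictedHeaders(headers) {
  const allowed = {};
  for (const [name, value] of Object.entries(headers)) {
    if (!RESTRICTED_HEADERS.has(name.toLowerCase())) allowed[name] = value;
  }
  return allowed;
}

console.log(stripRestrictedHeaders({ Host: 'example.com', 'Content-Type': 'application/json' }));
// → { 'Content-Type': 'application/json' }
```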

**query**  
Key-value pairs that specify common options, such as code formatting for JSON responses. Both the key and the value must be strings. The following example shows how you can send a query string as `?type=json`:  

```
"query" : {
    "type" : "json"
}
```

**body**  
The HTTP request body that you want to send. The request body is always a UTF-8 encoded string, unless the content type specifies a different charset.  

```
"body":"body string"
```

## Response
<a name="response-js"></a>

The response of the request is available in `ctx.result`. If the request results in an error, the error is available in `ctx.error`. You can check the status of the response in `ctx.result.statusCode`, and get the body returned in the response in `ctx.result.body`.
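A response handler can branch on the status code before using the body. The following sketch, written as a plain function over a `ctx`-shaped object so it can run standalone, illustrates the checks; it assumes a JSON response body, and in an actual handler you would report errors with `util.error` rather than `throw`.

```javascript
// Sketch: check the status code and parse a JSON body from an HTTP result.
function handleHttpResult(ctx) {
  if (ctx.error) {
    // In an actual handler, report this with util.error instead of throwing.
    throw new Error(ctx.error.message);
  }
  const { statusCode, body } = ctx.result;
  if (statusCode < 200 || statusCode >= 300) {
    throw new Error(`Request failed with status ${statusCode}`);
  }
  return JSON.parse(body);
}

console.log(handleHttpResult({ result: { statusCode: 200, body: '{"ok":true}' } }));
// → { ok: true }
```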

# AWS AppSync JavaScript function reference for Amazon RDS
<a name="rds-function-reference"></a>

The AWS AppSync RDS function enables you to send SQL queries to an Amazon Aurora cluster database using the RDS Data API and to get back the results of those queries. You can write SQL statements that are sent to the Data API by using the AWS AppSync `rds` module's `sql` tagged template, or by using the `rds` module's `select`, `insert`, `update`, and `remove` helper functions. AWS AppSync uses the RDS Data Service's [ExecuteStatement](https://docs.aws.amazon.com//rdsdataservice/latest/APIReference/API_ExecuteStatement.html) action to run SQL statements against the database. 

## SQL tagged template
<a name="sql-tagged-templates"></a>

AWS AppSync's `sql` tagged template enables you to create a static statement that can receive dynamic values at runtime by using template expressions. AWS AppSync builds a variable map from the expression values to construct a [SqlParameter](https://docs.aws.amazon.com//rdsdataservice/latest/APIReference/API_SqlParameter.html) query that is sent to the Amazon Aurora Serverless Data API. With this method, dynamic values passed at run time can't modify the original statement, which could otherwise cause unintended execution. All dynamic values are passed as parameters, can't modify the original statement, and aren't executed by the database. This makes your query less vulnerable to SQL injection attacks.

**Note**  
In all cases, when writing SQL statements, you should follow security guidelines to properly handle data that you receive as input.

**Note**  
The `sql` tagged template only supports passing variable values. You can't use an expression to dynamically specify the column or table names. However, you can use utility functions to build dynamic statements.

**Filtering Database Results Securely with Dynamic Channel Paths**

When building AWS AppSync applications, you often need to filter database queries based on dynamic values. This pattern shows how to safely incorporate run-time values into your SQL queries while maintaining security. In the following example, we create a query that filters based on the value of the channel path, which is set dynamically at run time. The value can easily be added to the statement by using a tag expression.

```
import { sql, createMySQLStatement as mysql } from '@aws-appsync/utils/rds';

export const onPublish = {
  request(ctx) {
    const query = sql`
      SELECT * FROM table
      WHERE column = ${ctx.info.channel.path}`;
    return mysql(query);
  }
}
```

Because all dynamic values are passed through the variable map as query parameters, they are never interpreted as part of the SQL statement itself, which protects against SQL injection attacks.

## Creating statements
<a name="creating-statements"></a>

Handlers can interact with MySQL and PostgreSQL databases. Use `createMySQLStatement` and `createPgStatement`, respectively, to build statements. These functions accept up to two statements, which is useful when a request should retrieve results immediately after a write. For example, with MySQL, you can do the following:

```
import { sql, createMySQLStatement } from '@aws-appsync/utils/rds';

export const onSubscribe = {
  request(ctx) {
    const { id, text } = ctx.events[0].payload;
    const s1 = sql`insert into Post(id, text) values(${id}, ${text})`;
    const s2 = sql`select * from Post where id = ${id}`;
    return createMySQLStatement(s1, s2);
  }
}
```

**Note**  
`createPgStatement` and `createMySQLStatement` do not escape or quote statements built with the `sql` tagged template.

## Retrieving data
<a name="retrieving-data"></a>

The result of your executed SQL statement is available in your response handler in the `context.result` object. The result is a JSON string with the [response elements](https://docs.aws.amazon.com//rdsdataservice/latest/APIReference/API_ExecuteStatement.html#API_ExecuteStatement_ResponseElements) from the `ExecuteStatement` action. When parsed, the result has the following shape:

```
type SQLStatementResults = {
    sqlStatementResults: {
        records: any[];
        columnMetadata: any[];
        numberOfRecordsUpdated: number;
        generatedFields?: any[]
    }[]
}
```

The following example demonstrates how you can use the `toJsonObject` utility to transform the result into a list of JSON objects representing the returned rows.

```
import { util } from '@aws-appsync/utils';
import { toJsonObject } from '@aws-appsync/utils/rds';

export const onSubscribe = {
  response(ctx) {
    const { error, result } = ctx;
    if (error) {
      return util.error(error.message, error.type, result);
    }
    // Return the first row of the second statement's result.
    return toJsonObject(result)[1][0];
  }
}
```

Note that `toJsonObject` returns an array of statement results. If you provided one statement, the array length is `1`. If you provided two statements, the array length is `2`. Each result in the array contains `0` or more rows. `toJsonObject` returns `null` if the result value is invalid or unexpected.
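To make the transformation concrete, the following plain JavaScript sketch mimics, in simplified form, how a statement's `records` and `columnMetadata` combine into row objects. This is an illustration of the mapping only, not the actual implementation of `toJsonObject`, and the `rowsToObjects` name is hypothetical.

```javascript
// Simplified sketch: combine columnMetadata and records into row objects.
function rowsToObjects(columnMetadata, records) {
  return records.map((record) =>
    Object.fromEntries(
      record.map((field, i) => [
        columnMetadata[i].name,
        // Each field is a tagged value, such as { stringValue: 'x' } or { longValue: 1 }.
        Object.values(field)[0],
      ])
    )
  );
}

const columns = [{ name: 'id' }, { name: 'text' }];
const records = [[{ longValue: 1 }, { stringValue: 'hello' }]];
console.log(rowsToObjects(columns, records));
// → [ { id: 1, text: 'hello' } ]
```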

## Utility functions
<a name="utility-functions"></a>

You can use the AWS AppSync RDS module's utility helpers to interact with your database. To learn more, see [Amazon RDS module functions](built-in-modules.md#built-in-rds-modules).

# AWS AppSync JavaScript function reference for Amazon Bedrock
<a name="bedrock-function-reference"></a>

You can use AWS AppSync functions to invoke models on Amazon Bedrock in your AWS account. You can shape your request payloads before invocation, and shape the responses from your model invocations before returning them to your clients. You can use the Amazon Bedrock runtime's `InvokeModel` API or the `Converse` API. This section describes the requests for the supported Amazon Bedrock operations.

**Note**  
AWS AppSync only supports synchronous invocations that complete within 10 seconds. It is not possible to call Amazon Bedrock's stream APIs. AWS AppSync only supports invoking foundation models and [inference profiles](https://docs.aws.amazon.com/bedrock/latest/userguide/inference-profiles.html) in the same region as the AWS AppSync API.

## Request object
<a name="request_object"></a>

The `InvokeModel` request object allows you to interact with Amazon Bedrock’s `InvokeModel` API.

```
type BedrockInvokeModelRequest = {
  operation: 'InvokeModel';
  modelId: string;
  body: any;
  guardrailIdentifier?: string;
  guardrailVersion?: string;
  guardrailTrace?: string;
}
```

The `Converse` request object allows you to interact with Amazon Bedrock’s `Converse` API.

```
type BedrockConverseRequest = {
  operation: 'Converse';
  modelId: string;
  messages: BedrockMessage[];
  additionalModelRequestFields?: any;
  additionalModelResponseFieldPaths?: string[];
  guardrailConfig?: BedrockGuardrailConfig;
  inferenceConfig?: BedrockInferenceConfig;
  promptVariables?: { [key: string]: BedrockPromptVariableValues }[];
  system?: BedrockSystemContent[];
  toolConfig?: BedrockToolConfig;
}
```

See the [Type reference](#type-reference-bedrock) section later in this topic for more details.

From your functions and resolvers, you can build your request objects directly, or use the helper functions from `@aws-appsync/utils/ai` to create the request. When specifying the model ID (`modelId`) in your requests, you can use either the model ID or the model ARN.

The following example uses the `invokeModel` function to summarize text using Amazon Titan Text G1 - Lite (amazon.titan-text-lite-v1). A configured guardrail is used to identify and block or filter unwanted content in the prompt flow. Learn more about [Amazon Bedrock Guardrails](https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails.html) in the *Amazon Bedrock User Guide*.

**Important**  
You are responsible for secure application development and preventing vulnerabilities, such as prompt injection. To learn more, see [Prompt injection security](https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-injection.html) in the *Amazon Bedrock User Guide*.

```
import { invokeModel } from '@aws-appsync/utils/ai'

export const onPublish = {
  request(ctx) {
    return invokeModel({
      modelId: 'amazon.titan-text-lite-v1',
      guardrailIdentifier: "zabcd12345678",
      guardrailVersion: "1",
      body: { inputText: `Summarize this text in less than 100 words. : \n<text>${ctx.stash.text ?? ctx.env.DEFAULT_TEXT}</text>` },
    })
  }
}

export const onProcessResult = {
  response(ctx) {
    return ctx.result.results[0].outputText
  }
}
```

The following example uses the `converse` function with a cross-region inference profile (us.anthropic.claude-3-5-haiku-20241022-v1:0). Learn more about Amazon Bedrock's [Prerequisites for inference profiles](https://docs.aws.amazon.com/bedrock/latest/userguide/inference-profiles-prereq.html) in the *Amazon Bedrock User Guide*.

**Reminder**: You are responsible for secure application development and preventing vulnerabilities, such as prompt injection.

```
import { converse } from '@aws-appsync/utils/ai'

export const onPublish = {
  request(ctx) {
    return converse({
      modelId: 'us.anthropic.claude-3-5-haiku-20241022-v1:0',
      system: [
        {
          text: `
  You are a database assistant that provides SQL queries to retrieve data based on a natural language request. 
  ${ctx.args.explain ? 'Explain your answer' : 'Do not explain your answer'}.
  Assume a database with the following tables and columns exists:

  Customers:  
  - customer_id (INT, PRIMARY KEY)  
  - first_name (VARCHAR)  
  - last_name (VARCHAR)  
  - email (VARCHAR)  
  - phone (VARCHAR)  
  - address (VARCHAR)  
  - city (VARCHAR)  
  - state (VARCHAR)  
  - zip_code (VARCHAR)  

  Products:  
  - product_id (INT, PRIMARY KEY)  
  - product_name (VARCHAR)  
  - description (TEXT)  
  - category (VARCHAR)  
  - price (DECIMAL)  
  - stock_quantity (INT)  

  Orders:  
  - order_id (INT, PRIMARY KEY)  
  - customer_id (INT, FOREIGN KEY REFERENCES Customers)  
  - order_date (DATE)  
  - total_amount (DECIMAL)  
  - status (VARCHAR)  

  Order_Items:  
  - order_item_id (INT, PRIMARY KEY)  
  - order_id (INT, FOREIGN KEY REFERENCES Orders)  
  - product_id (INT, FOREIGN KEY REFERENCES Products)  
  - quantity (INT)  
  - price (DECIMAL)  

  Reviews:  
  - review_id (INT, PRIMARY KEY)  
  - product_id (INT, FOREIGN KEY REFERENCES Products)  
  - customer_id (INT, FOREIGN KEY REFERENCES Customers)  
  - rating (INT)  
  - comment (TEXT)  
  - review_date (DATE)`,
        },
      ],
      messages: [
        {
          role: 'user',
          content: [{ text: `<request>${ctx.args.text}:</request>` }],
        },
      ],
    })
  }
}

export const onProcessResult = {
  response(ctx) {
    return ctx.result.output.message.content[0].text
  }
}
```

The following example uses `converse` to create a structured response. Note that we use environment variables for our DB schema reference and we configure a guardrail to help prevent attacks.

```
import { converse } from '@aws-appsync/utils/ai'

export const onPublish = {
  request(ctx) {
    return generateObject({
      modelId: ctx.env.HAIKU3_5, // keep the model in an env variable
      prompt: ctx.args.query,
      shape: objectType(
        {
          sql: stringType('the sql query to execute as a javascript template string.'),
          parameters: objectType({}, 'the placeholder parameters for the query, if any.'),
        },
        'the sql query to execute along with the place holder parameters',
      ),
      system: [
        {
          text: `
  You are a database assistant that provides SQL queries to retrieve data based on a natural language request. 

  Assume a database with the following tables and columns exists:

  ${ctx.env.DB_SCHEMA_CUSTOMERS}
  ${ctx.env.DB_SCHEMA_ORDERS}
  ${ctx.env.DB_SCHEMA_ORDER_ITEMS}
  ${ctx.env.DB_SCHEMA_PRODUCTS}
  ${ctx.env.DB_SCHEMA_REVIEWS}`,
        },
      ],
      guardrailConfig: { guardrailIdentifier: 'iabc12345678', guardrailVersion: 'DRAFT' },
    })
  },
  response(ctx) {
    return toolReponse(ctx.result)
  }
}

function generateObject(input) {
  const { modelId, prompt, shape, ...options } = input
  return converse({
    modelId,
    messages: [{ role: 'user', content: [{ text: prompt }] }],
    toolConfig: {
      toolChoice: { tool: { name: 'structured_tool' } },
      tools: [
        {
          toolSpec: {
            name: 'structured_tool',
            inputSchema: { json: shape },
          },
        },
      ],
    },
    ...options,
  })
}

function toolReponse(result) {
  return result.output.message.content[0].toolUse.input
}

function stringType(description) {
  const t = { type: 'string' /* STRING */ }
  if (description) {
    t.description = description
  }
  return t
}

function objectType(properties, description, required) {
  const t = { type: 'object' /* OBJECT */, properties }
  if (description) {
    t.description = description
  }
  if (required) {
    t.required = required
  }
  return t
}
```

Given the schema:

```
type SQLResult {
    sql: String
    parameters: AWSJSON
}

type Query {
    db(text: String!): SQLResult
}
```

and the query:

```
query db($text: String!) {
  db(text: $text) {
    parameters
    sql
  }
}
```

With the following parameters: 

```
{
  "text":"What is my top selling product?"
}
```

The following response is returned:

```
{
  "data": {
    "assist": {
      "sql": "SELECT p.product_id, p.product_name, SUM(oi.quantity) as total_quantity_sold\nFROM Products p\nJOIN Order_Items oi ON p.product_id = oi.product_id\nGROUP BY p.product_id, p.product_name\nORDER BY total_quantity_sold DESC\nLIMIT 1;",
      "parameters": null
    }
  }
}
```

However, with this request:

```
{
  "text":"give me a query to retrieve sensitive information"
}
```

The following response is returned:

```
{
  "data": {
    "db": {
      "parameters": null,
      "sql": "SELECT null; -- I cannot and will not assist with retrieving sensitive private information"
    }
  }
}
```

To learn more about configuring Amazon Bedrock Guardrails, see [Stop harmful content in models using Amazon Bedrock Guardrails](https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails.html) in the *Amazon Bedrock User Guide*.

## Response object
<a name="response_object"></a>

The response from your Amazon Bedrock runtime invocation is contained in the context's `result` property (`ctx.result`). The response matches the shape specified by Amazon Bedrock's APIs. For more information about the expected shape of invocation results, see the [Amazon Bedrock User Guide](https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html).

```
export const onPublish = {
  response(ctx) {
    return ctx.result
  }
}
```

## Long running invocations
<a name="long-running-invocations"></a>

Many organizations currently use AWS AppSync as an AI gateway to build generative AI applications that are powered by foundation models on Amazon Bedrock. Customers use AWS AppSync subscriptions, powered by WebSockets, to return progressive updates from long-running model invocations. This allows them to implement asynchronous patterns.

The following diagram demonstrates how you can implement this pattern. In the diagram, the following steps occur.

1. Your client starts a subscription, which sets up a WebSocket, and makes a request to AWS AppSync to trigger a Generative AI invocation.

1. AWS AppSync calls your AWS Lambda function in Event mode and immediately returns a response to the client.

1. Your Lambda function invokes the model on Amazon Bedrock. The Lambda function can use a synchronous API, such as `InvokeModel`, or a stream API, such as `InvokeModelWithResponseStream`, to get progressive updates.

1. As updates are received, or when the invocation completes, the Lambda function sends updates via mutations to your AWS AppSync API which triggers subscriptions.

1. The subscription events are sent in real-time and received by your client over the WebSocket.

![\[A diagram that demonstrates the workflow for using an AWS AppSync subscription to return updates from a Amazon Bedrock model.\]](http://docs.aws.amazon.com/appsync/latest/eventapi/images/bedrock-workflow.png)


## Type reference
<a name="type-reference-bedrock"></a>

```
export type BedrockMessage = {
  role: 'user' | 'assistant' | string;
  content: BedrockMessageContent[];
};

export type BedrockMessageContent =
  | { text: string }
  | { guardContent: BedrockGuardContent }
  | { toolResult: BedrockToolResult }
  | { toolUse: BedrockToolUse };

export type BedrockGuardContent = {
  text: BedrockGuardContentText;
};

export type BedrockGuardContentText = {
  text: string;
  qualifiers?: ('grounding_source' | 'query' | 'guard_content' | string)[];
};

export type BedrockToolResult = {
  content: BedrockToolResultContent[];
  toolUseId: string;
  status?: string;
};

export type BedrockToolResultContent = { json: any } | { text: string };

export type BedrockToolUse = {
  input: any;
  name: string;
  toolUseId: string;
};

export type ConversePayload = {
  modelId: string;
  body: any;
  guardrailIdentifier?: string;
  guardrailVersion?: string;
  guardrailTrace?: string;
};

export type BedrockGuardrailConfig = {
  guardrailIdentifier: string;
  guardrailVersion: string;
  trace: string;
};

export type BedrockInferenceConfig = {
  maxTokens?: number;
  temperature?: number;
  stopSequences?: string[];
  topP?: number;
};

export type BedrockPromptVariableValues = {
  text: string;
};

export type BedrockToolConfig = {
  tools: BedrockTool[];
  toolChoice?: BedrockToolChoice;
};

export type BedrockTool = {
  toolSpec: BedrockToolSpec;
};

export type BedrockToolSpec = {
  name: string;
  description?: string;
  inputSchema: BedrockInputSchema;
};

export type BedrockInputSchema = {
  json: any;
};

export type BedrockToolChoice =
  | { tool: BedrockSpecificToolChoice }
  | { auto: any }
  | { any: any };

export type BedrockSpecificToolChoice = {
  name: string;
};

export type BedrockSystemContent =
  | { guardContent: BedrockGuardContent }
  | { text: string };

export type BedrockConverseOutput = {
  message?: BedrockMessage;
};

export type BedrockConverseMetrics = {
  latencyMs: number;
};

export type BedrockTokenUsage = {
  inputTokens: number;
  outputTokens: number;
  totalTokens: number;
};

export type BedrockConverseTrace = {
  guardrail?: BedrockGuardrailTraceAsssessment;
};

export type BedrockGuardrailTraceAsssessment = {
  inputAssessment?: { [key: string]: BedrockGuardrailAssessment };
  modelOutput?: string[];
  outputAssessments?: { [key: string]: BedrockGuardrailAssessment };
};

export type BedrockGuardrailAssessment = {
  contentPolicy?: BedrockGuardrailContentPolicyAssessment;
  contextualGroundingPolicy?: BedrockGuardrailContextualGroundingPolicyAssessment;
  invocationMetrics?: BedrockGuardrailInvocationMetrics;
  sensitiveInformationPolicy?: BedrockGuardrailSensitiveInformationPolicyAssessment;
  topicPolicy?: BedrockGuardrailTopicPolicyAssessment;
  wordPolicy?: BedrockGuardrailWordPolicyAssessment;
};

export type BedrockGuardrailContentPolicyAssessment = {
  filters: BedrockGuardrailContentFilter[];
};

export type BedrockGuardrailContentFilter = {
  action: 'BLOCKED' | string;
  confidence: 'NONE' | 'LOW' | 'MEDIUM' | 'HIGH' | string;
  type:
    | 'INSULTS'
    | 'HATE'
    | 'SEXUAL'
    | 'VIOLENCE'
    | 'MISCONDUCT'
    | 'PROMPT_ATTACK'
    | string;
  filterStrength: 'NONE' | 'LOW' | 'MEDIUM' | 'HIGH' | string;
};

export type BedrockGuardrailContextualGroundingPolicyAssessment = {
  filters: BedrockGuardrailContextualGroundingFilter;
};

export type BedrockGuardrailContextualGroundingFilter = {
  action: 'BLOCKED' | 'NONE' | string;
  score: number;
  threshold: number;
  type: 'GROUNDING' | 'RELEVANCE' | string;
};

export type BedrockGuardrailInvocationMetrics = {
  guardrailCoverage?: BedrockGuardrailCoverage;
  guardrailProcessingLatency?: number;
  usage?: BedrockGuardrailUsage;
};

export type BedrockGuardrailCoverage = {
  textCharacters?: BedrockGuardrailTextCharactersCoverage;
};

export type BedrockGuardrailTextCharactersCoverage = {
  guarded?: number;
  total?: number;
};

export type BedrockGuardrailUsage = {
  contentPolicyUnits: number;
  contextualGroundingPolicyUnits: number;
  sensitiveInformationPolicyFreeUnits: number;
  sensitiveInformationPolicyUnits: number;
  topicPolicyUnits: number;
  wordPolicyUnits: number;
};

export type BedrockGuardrailSensitiveInformationPolicyAssessment = {
  piiEntities: BedrockGuardrailPiiEntityFilter[];
  regexes: BedrockGuardrailRegexFilter[];
};

export type BedrockGuardrailPiiEntityFilter = {
  action: 'BLOCKED' | 'ANONYMIZED' | string;
  match: string;
  type:
    | 'ADDRESS'
    | 'AGE'
    | 'AWS_ACCESS_KEY'
    | 'AWS_SECRET_KEY'
    | 'CA_HEALTH_NUMBER'
    | 'CA_SOCIAL_INSURANCE_NUMBER'
    | 'CREDIT_DEBIT_CARD_CVV'
    | 'CREDIT_DEBIT_CARD_EXPIRY'
    | 'CREDIT_DEBIT_CARD_NUMBER'
    | 'DRIVER_ID'
    | 'EMAIL'
    | 'INTERNATIONAL_BANK_ACCOUNT_NUMBER'
    | 'IP_ADDRESS'
    | 'LICENSE_PLATE'
    | 'MAC_ADDRESS'
    | 'NAME'
    | 'PASSWORD'
    | 'PHONE'
    | 'PIN'
    | 'SWIFT_CODE'
    | 'UK_NATIONAL_HEALTH_SERVICE_NUMBER'
    | 'UK_NATIONAL_INSURANCE_NUMBER'
    | 'UK_UNIQUE_TAXPAYER_REFERENCE_NUMBER'
    | 'URL'
    | 'USERNAME'
    | 'US_BANK_ACCOUNT_NUMBER'
    | 'US_BANK_ROUTING_NUMBER'
    | 'US_INDIVIDUAL_TAX_IDENTIFICATION_NUMBER'
    | 'US_PASSPORT_NUMBER'
    | 'US_SOCIAL_SECURITY_NUMBER'
    | 'VEHICLE_IDENTIFICATION_NUMBER'
    | string;
};

export type BedrockGuardrailRegexFilter = {
  action: 'BLOCKED' | 'ANONYMIZED' | string;
  match?: string;
  name?: string;
  regex?: string;
};

export type BedrockGuardrailTopicPolicyAssessment = {
  topics: BedrockGuardrailTopic[];
};

export type BedrockGuardrailTopic = {
  action: 'BLOCKED' | string;
  name: string;
  type: 'DENY' | string;
};

export type BedrockGuardrailWordPolicyAssessment = {
  customWords: BedrockGuardrailCustomWord[];
  managedWordLists: BedrockGuardrailManagedWord[];
};

export type BedrockGuardrailCustomWord = {
  action: 'BLOCKED' | string;
  match: string;
};

export type BedrockGuardrailManagedWord = {
  action: 'BLOCKED' | string;
  match: string;
  type: 'PROFANITY' | string;
};
```