

# Subscribing AWS Event Fork Pipelines to an Amazon SNS topic

To accelerate the development of your event-driven applications, you can subscribe event-handling pipelines—powered by AWS Event Fork Pipelines—to Amazon SNS topics. AWS Event Fork Pipelines is a suite of open-source [nested applications](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-template-nested-applications.html), based on the [AWS Serverless Application Model](https://aws.amazon.com/serverless/sam/) (AWS SAM), which you can deploy directly from the [AWS Event Fork Pipelines suite](https://serverlessrepo.aws.amazon.com/applications?query=aws-event-fork-pipelines) (choose **Show apps that create custom IAM roles or resource policies**) into your AWS account. For more information, see [How AWS Event Fork Pipelines works](sns-fork-pipeline-as-subscriber.md#how-sns-fork-works).

This section shows how to use the AWS Management Console to deploy a pipeline and then subscribe AWS Event Fork Pipelines to an Amazon SNS topic. Before you begin, [create an Amazon SNS topic](sns-create-topic.md).

To delete the resources that comprise a pipeline, find the pipeline on the **Applications** page of the AWS Lambda console, expand the **SAM template** section, choose **CloudFormation stack**, and then choose **Other Actions**, **Delete Stack**.

# Deploying and subscribing the Event Storage and Backup Pipeline to Amazon SNS


**Important**  
For event archiving and analytics, Amazon SNS now recommends using its native integration with Amazon Data Firehose. You can subscribe Firehose delivery streams to SNS topics, which allows you to send notifications to archiving and analytics endpoints such as Amazon Simple Storage Service (Amazon S3) buckets, Amazon Redshift tables, Amazon OpenSearch Service (OpenSearch Service), and more. Using Amazon SNS with Firehose delivery streams is a fully managed and codeless solution that doesn't require you to use AWS Lambda functions. For more information, see [Fanout to Firehose delivery streams](sns-firehose-as-subscriber.md).

This page shows how to deploy the [Event Storage and Backup Pipeline](sns-fork-pipeline-as-subscriber.md#sns-fork-event-storage-and-backup-pipeline) and subscribe it to an Amazon SNS topic. This process automatically turns the AWS SAM template associated with the pipeline into an AWS CloudFormation stack, and then deploys the stack into your AWS account. This process also creates and configures the set of resources that comprise the Event Storage and Backup Pipeline, including the following:
+ Amazon SQS queue
+ Lambda function
+ Firehose delivery stream
+ Amazon S3 backup bucket

For more information about configuring a stream with an Amazon S3 bucket as a destination, see [`S3DestinationConfiguration`](https://docs.aws.amazon.com/firehose/latest/APIReference/API_S3DestinationConfiguration.html) in the *Amazon Data Firehose API Reference*.

For more information about transforming events and about configuring event buffering, event compression, and event encryption, see [Creating a Delivery Stream](https://docs.aws.amazon.com/firehose/latest/dev/basic-create.html) in the *Amazon Data Firehose Developer Guide*.

For more information about filtering events, see [Amazon SNS subscription filter policies](sns-subscription-filter-policies.md) in this guide.
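To illustrate how an attribute-based subscription filter policy selects events, here is a minimal Python sketch. The policy shape follows the SNS filter-policy JSON format, but the `matches` function below is a simplified local illustration that covers only the exact-string-match case (the real SNS matcher also supports prefix, numeric, and anything-but clauses), and the attribute name `event_type` is a hypothetical example.

```python
import json

# A sample attribute-based SNS subscription filter policy. Only messages
# whose "event_type" attribute is one of these values would be delivered
# to the pipeline; all other messages are filtered out.
filter_policy = json.loads("""
{
    "event_type": ["order_placed", "order_cancelled"]
}
""")

def matches(policy, attributes):
    """Simplified local sketch of exact-value matching: every key in the
    policy must be present in the message attributes with an allowed value."""
    for key, allowed_values in policy.items():
        if attributes.get(key) not in allowed_values:
            return False
    return True

print(matches(filter_policy, {"event_type": "order_placed"}))   # True
print(matches(filter_policy, {"event_type": "order_shipped"}))  # False
```

A message with no matching attribute at all is also filtered out, which is why an empty filter policy field in the console (no filtering) behaves differently from a policy that names an attribute.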

1. Sign in to the [AWS Lambda console](https://console.aws.amazon.com/lambda/).

1. On the navigation panel, choose **Functions** and then choose **Create function**.

1. On the **Create function** page, do the following:

   1. Choose **Browse serverless app repository**, **Public applications**, **Show apps that create custom IAM roles or resource policies**.

   1. Search for `fork-event-storage-backup-pipeline` and then choose the application.

1. On the **fork-event-storage-backup-pipeline** page, do the following:

   1. In the **Application settings** section, enter an **Application name** (for example, `my-app-backup`).
**Note**  
For each deployment, the application name must be unique. If you reuse an application name, the deployment will update only the previously deployed CloudFormation stack (rather than create a new one).

   1. (Optional) For **BucketArn**, enter the ARN of the Amazon S3 bucket into which incoming events are loaded. If you don't enter a value, a new Amazon S3 bucket is created in your AWS account.

   1. (Optional) For **DataTransformationFunctionArn**, enter the ARN of the Lambda function through which the incoming events are transformed. If you don't enter a value, data transformation is disabled.

   1. (Optional) Enter one of the following **LogLevel** settings for the execution of your application's Lambda function:
      + `DEBUG`
      + `ERROR`
      + `INFO` (default)
      + `WARNING`

   1. For **TopicArn**, enter the ARN of the Amazon SNS topic to which this instance of the fork pipeline is to be subscribed.

   1. (Optional) For **StreamBufferingIntervalInSeconds** and **StreamBufferingSizeInMBs**, enter the values for configuring the buffering of incoming events. If you don't enter any values, 300 seconds and 5 MB are used.

   1. (Optional) Enter one of the following **StreamCompressionFormat** settings for compressing incoming events:
      + `GZIP`
      + `SNAPPY`
      + `UNCOMPRESSED` (default)
      + `ZIP`

   1. (Optional) For **StreamPrefix**, enter the string prefix to name files stored in the Amazon S3 backup bucket. If you don't enter a value, no prefix is used.

   1. (Optional) For **SubscriptionFilterPolicy**, enter the Amazon SNS subscription filter policy, in JSON format, to be used for filtering incoming events. The filter policy determines which events are stored in the Amazon S3 backup bucket. If you don't enter a value, no filtering is used (all events are stored).

   1. (Optional) For **SubscriptionFilterPolicyScope**, enter the string `MessageBody` or `MessageAttributes` to enable payload-based or attribute-based message filtering. 

   1. Choose **I acknowledge that this app creates custom IAM roles, resource policies and deploys nested applications.** and then choose **Deploy**.

On the **Deployment status for *my-app-backup*** page, Lambda displays the **Your application is being deployed** status.

In the **Resources** section, CloudFormation begins to create the stack and displays the **CREATE_IN_PROGRESS** status for each resource. When the process is complete, CloudFormation displays the **CREATE_COMPLETE** status.

When the deployment is complete, Lambda displays the **Your application has been deployed** status.

Messages published to your Amazon SNS topic are automatically stored in the Amazon S3 backup bucket provisioned by the Event Storage and Backup Pipeline.

# Deploying and subscribing the Event Search and Analytics Pipeline to Amazon SNS


**Important**  
For event archiving and analytics, Amazon SNS now recommends using its native integration with Amazon Data Firehose. You can subscribe Firehose delivery streams to SNS topics, which allows you to send notifications to archiving and analytics endpoints such as Amazon Simple Storage Service (Amazon S3) buckets, Amazon Redshift tables, Amazon OpenSearch Service (OpenSearch Service), and more. Using Amazon SNS with Firehose delivery streams is a fully managed and codeless solution that doesn't require you to use AWS Lambda functions. For more information, see [Fanout to Firehose delivery streams](sns-firehose-as-subscriber.md).

This page shows how to deploy the [Event Search and Analytics Pipeline](sns-fork-pipeline-as-subscriber.md#sns-fork-event-search-and-analytics-pipeline) and subscribe it to an Amazon SNS topic. This process automatically turns the AWS SAM template associated with the pipeline into an AWS CloudFormation stack, and then deploys the stack into your AWS account. This process also creates and configures the set of resources that comprise the Event Search and Analytics Pipeline, including the following:
+ Amazon SQS queue
+ Lambda function
+ Firehose delivery stream
+ Amazon OpenSearch Service domain
+ Amazon S3 dead-letter bucket

For more information about configuring a stream with an index as a destination, see [`ElasticsearchDestinationConfiguration`](https://docs.aws.amazon.com/firehose/latest/APIReference/API_ElasticsearchDestinationConfiguration.html) in the *Amazon Data Firehose API Reference*.

For more information about transforming events and about configuring event buffering, event compression, and event encryption, see [Creating a Delivery Stream](https://docs.aws.amazon.com/firehose/latest/dev/basic-create.html) in the *Amazon Data Firehose Developer Guide*.

For more information about filtering events, see [Amazon SNS subscription filter policies](sns-subscription-filter-policies.md) in this guide.

1. Sign in to the [AWS Lambda console](https://console.aws.amazon.com/lambda/).

1. On the navigation panel, choose **Functions** and then choose **Create function**.

1. On the **Create function** page, do the following:

   1. Choose **Browse serverless app repository**, **Public applications**, **Show apps that create custom IAM roles or resource policies**.

   1. Search for `fork-event-search-analytics-pipeline` and then choose the application.

1. On the **fork-event-search-analytics-pipeline** page, do the following:

   1. In the **Application settings** section, enter an **Application name** (for example, `my-app-search`).
**Note**  
For each deployment, the application name must be unique. If you reuse an application name, the deployment will update only the previously deployed CloudFormation stack (rather than create a new one).

   1. (Optional) For **DataTransformationFunctionArn**, enter the ARN of the Lambda function used for transforming incoming events. If you don't enter a value, data transformation is disabled.

   1. (Optional) Enter one of the following **LogLevel** settings for the execution of your application's Lambda function:
      + `DEBUG`
      + `ERROR`
      + `INFO` (default)
      + `WARNING`

   1. (Optional) For **SearchDomainArn**, enter the ARN of the OpenSearch Service domain, a cluster that configures the needed compute and storage functionality. If you don't enter a value, a new domain is created with the default configuration.

   1. For **TopicArn**, enter the ARN of the Amazon SNS topic to which this instance of the fork pipeline is to be subscribed.

   1. For **SearchIndexName**, enter the name of the OpenSearch Service index for event search and analytics.
**Note**  
The following quotas apply to index names:  
Can't include uppercase letters
Can't include the following characters: `` \ / * ? " < > | ` , # ``
Can't begin with the following characters: `- + _`
Can't be the following: `. ..`
Can't be longer than 80 characters
Can't be longer than 255 bytes
Can't contain a colon (from OpenSearch Service 7.0)

   1. (Optional) Enter one of the following **SearchIndexRotationPeriod** settings for the rotation period of the OpenSearch Service index:
      + `NoRotation` (default)
      + `OneDay`
      + `OneHour`
      + `OneMonth`
      + `OneWeek`

      Index rotation appends a timestamp to the index name, facilitating the expiration of old data. 

   1. For **SearchTypeName**, enter the name of the OpenSearch Service type for organizing the events in an index.
**Note**  
OpenSearch Service type names can contain any character (except null bytes) but can't begin with `_`.
For OpenSearch Service 6.x, there can be only one type per index. If you specify a new type for an existing index that already has another type, Firehose returns a runtime error.

   1. (Optional) For **StreamBufferingIntervalInSeconds** and **StreamBufferingSizeInMBs**, enter the values for configuring the buffering of incoming events. If you don't enter any values, 300 seconds and 5 MB are used.

   1. (Optional) Enter one of the following **StreamCompressionFormat** settings for compressing incoming events:
      + `GZIP`
      + `SNAPPY`
      + `UNCOMPRESSED` (default)
      + `ZIP`

   1. (Optional) For **StreamPrefix**, enter the string prefix to name files stored in the Amazon S3 dead-letter bucket. If you don't enter a value, no prefix is used.

   1. (Optional) For **StreamRetryDurationInSeconds**, enter the retry duration for cases when Firehose can't index events in the OpenSearch Service index. If you don't enter a value, 300 seconds is used.

   1. (Optional) For **SubscriptionFilterPolicy**, enter the Amazon SNS subscription filter policy, in JSON format, to be used for filtering incoming events. The filter policy decides which events are indexed in the OpenSearch Service index. If you don't enter a value, no filtering is used (all events are indexed).

   1. Choose **I acknowledge that this app creates custom IAM roles, resource policies and deploys nested applications.** and then choose **Deploy**.

On the **Deployment status for *my-app-search*** page, Lambda displays the **Your application is being deployed** status.

In the **Resources** section, CloudFormation begins to create the stack and displays the **CREATE_IN_PROGRESS** status for each resource. When the process is complete, CloudFormation displays the **CREATE_COMPLETE** status.

When the deployment is complete, Lambda displays the **Your application has been deployed** status.

Messages published to your Amazon SNS topic are automatically indexed in the OpenSearch Service index provisioned by the Event Search and Analytics Pipeline. If the pipeline can't index an event, it stores the event in an Amazon S3 dead-letter bucket.
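The index-name restrictions listed in the **SearchIndexName** step can be checked locally before you deploy. The following Python sketch encodes only the restrictions stated above; it is illustrative, and the service itself remains the authority on which names it accepts.

```python
def valid_index_name(name: str) -> bool:
    """Check a candidate OpenSearch Service index name against the
    restrictions listed in the SearchIndexName step: no uppercase letters,
    no forbidden characters (including the colon), no forbidden leading
    characters, not "." or "..", and within the length limits."""
    forbidden_chars = set('\\/*?"<>|`,#:')
    return (
        name not in (".", "..")
        and len(name) <= 80
        and len(name.encode("utf-8")) <= 255
        and name == name.lower()                      # no uppercase letters
        and not name.startswith(("-", "+", "_"))      # forbidden leading chars
        and not any(c in forbidden_chars for c in name)
    )

print(valid_index_name("sns-events"))   # True
print(valid_index_name("SNS-Events"))   # False (uppercase letters)
print(valid_index_name("_events"))      # False (leading underscore)
```

Validating the name up front avoids a failed deployment, since an invalid index name only surfaces once CloudFormation tries to create the Firehose destination.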

# Deploying the Event Replay Pipeline with Amazon SNS integration

This page shows how to deploy the [Event Replay Pipeline](sns-fork-pipeline-as-subscriber.md#sns-fork-event-replay-pipeline) and subscribe it to an Amazon SNS topic. This process automatically turns the AWS SAM template associated with the pipeline into an AWS CloudFormation stack, and then deploys the stack into your AWS account. This process also creates and configures the set of resources that comprise the Event Replay Pipeline, including an Amazon SQS queue and a Lambda function.

For more information about filtering events, see [Amazon SNS subscription filter policies](sns-subscription-filter-policies.md) in this guide.

1. Sign in to the [AWS Lambda console](https://console.aws.amazon.com/lambda/).

1. On the navigation panel, choose **Functions** and then choose **Create function**.

1. On the **Create function** page, do the following:

   1. Choose **Browse serverless app repository**, **Public applications**, **Show apps that create custom IAM roles or resource policies**.

   1. Search for `fork-event-replay-pipeline` and then choose the application.

1. On the **fork-event-replay-pipeline** page, do the following:

   1. In the **Application settings** section, enter an **Application name** (for example, `my-app-replay`).
**Note**  
For each deployment, the application name must be unique. If you reuse an application name, the deployment will update only the previously deployed CloudFormation stack (rather than create a new one).

   1. (Optional) Enter one of the following **LogLevel** settings for the execution of your application's Lambda function:
      + `DEBUG`
      + `ERROR`
      + `INFO` (default)
      + `WARNING`

   1. (Optional) For **ReplayQueueRetentionPeriodInSeconds**, enter the amount of time, in seconds, for which the Amazon SQS replay queue keeps the message. If you don't enter a value, 1,209,600 seconds (14 days) is used.

   1. For **TopicArn**, enter the ARN of the Amazon SNS topic to which this instance of the fork pipeline is to be subscribed.

   1. For **DestinationQueueName**, enter the name of the Amazon SQS queue to which the Lambda replay function forwards messages.

   1. (Optional) For **SubscriptionFilterPolicy**, enter the Amazon SNS subscription filter policy, in JSON format, to be used for filtering incoming events. The filter policy decides which events are buffered for replay. If you don't enter a value, no filtering is used (all events are buffered for replay).

   1. Choose **I acknowledge that this app creates custom IAM roles, resource policies and deploys nested applications.** and then choose **Deploy**.

On the **Deployment status for *my-app-replay*** page, Lambda displays the **Your application is being deployed** status.

In the **Resources** section, CloudFormation begins to create the stack and displays the **CREATE_IN_PROGRESS** status for each resource. When the process is complete, CloudFormation displays the **CREATE_COMPLETE** status.

When the deployment is complete, Lambda displays the **Your application has been deployed** status.

Messages published to your Amazon SNS topic are automatically buffered for replay in the Amazon SQS queue provisioned by the Event Replay Pipeline.

**Note**  
By default, replay is disabled. To enable replay, navigate to the function's page on the Lambda console, expand the **Designer** section, choose the **SQS** tile and then, in the **SQS** section, choose **Enabled**.
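Conceptually, once replay is enabled, the pipeline's Lambda function drains the buffered messages from the replay queue and forwards them to the destination queue. The following Python sketch illustrates that forwarding loop with plain in-memory lists standing in for the two Amazon SQS queues; a real implementation would instead use the SQS API calls (`ReceiveMessage`, `SendMessageBatch`, `DeleteMessageBatch`), and the batch size of 10 mirrors the SQS batch limit.

```python
def replay(replay_queue, destination_queue, batch_size=10):
    """Forward buffered messages from the replay queue to the destination
    queue, batch by batch. In-memory sketch only: the lists stand in for
    SQS queues, and pop/extend stand in for receive/send/delete calls."""
    forwarded = 0
    while replay_queue:
        # Take up to batch_size messages from the front of the replay queue.
        count = min(batch_size, len(replay_queue))
        batch = [replay_queue.pop(0) for _ in range(count)]
        destination_queue.extend(batch)  # stands in for SendMessageBatch
        forwarded += len(batch)
    return forwarded

buffered = [f"event-{i}" for i in range(25)]
destination = []
print(replay(buffered, destination))  # 25 messages forwarded
print(len(destination))               # 25
```

Because the replay queue's retention period bounds how far back you can replay, messages older than **ReplayQueueRetentionPeriodInSeconds** are no longer available to this loop.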